Dissertations / Theses on the topic 'Knowledge modelling'

Consult the top 50 dissertations / theses for your research on the topic 'Knowledge modelling.'


1

Overington, John Paul. "Knowledge-based protein modelling." Thesis, Birkbeck (University of London), 1991. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.715106.

Full text
Abstract:
The automated protein modelling program COMPOSER is tested and improved. A test case of the model building of trypsin is described. Prior to the enhancements made to the program, the RMSD of the automatically built model was 3.46 Å; after the improvements this figure is reduced to 1.58 Å. The program was applied to two 'real-life' problems from the pharmaceutical industry. The first is the modelling of the serine proteinase domain of tissue-type plasminogen activator. Predictions are made as to residues likely to be important in binding specific endogenous inhibitors. The second example is the modelling of the proteinase from HIV-1, first on the basis of structures of the distantly related aspartic proteinases and later on the more similar structure of RSV proteinase. The model was later used in the molecular replacement solution of the X-ray structure of HIV-1 proteinase.
APA, Harvard, Vancouver, ISO, and other styles
2

Bakke, Elise. "Knowledge acquisition and modelling for knowledge-intensive CBR." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9187.

Abstract:

This thesis contains a study of state-of-the-art knowledge acquisition principles and methods for modelling general domain knowledge. This includes Newell's knowledge level, knowledge-level modelling, Components of Expertise, CommonKADS and the Protégé meta-tool. The thesis also includes a short introduction to the knowledge-intensive case-based reasoning system TrollCreek. Based on this background knowledge, different possible solutions were analysed and compared. Then, after justifying the choices made, a knowledge acquisition method for TrollCreek was created. The method is illustrated through an example, evaluated and discussed.

3

Motta, Enrico. "Reusable components for knowledge modelling." N.p., 1997. http://ethos.bl.uk/.

4

Motta, Enrico. "Reusable components for knowledge modelling." Thesis, Open University, 1998. http://oro.open.ac.uk/57879/.

Abstract:
In this work I illustrate an approach to the development of a library of problem solving components for knowledge modelling. This approach is based on an epistemological modelling framework, the Task/Method/Domain/Application (TMDA) model, and on a principled methodology, which together provide an integrated view of both library construction and application development by reuse. The starting point of the proposed approach is given by a task ontology. This formalizes a conceptual viewpoint over a class of problems, thus providing a task-specific framework, which can be used to drive the construction of a task model through a process of model-based knowledge acquisition. The definitions in the task ontology provide the initial elements of a task-specific library of problem solving components. In order to move from problem specification to problem solving, a generic, i.e. task-independent, model of problem solving as search is introduced, and instantiated in terms of the concepts in the relevant task ontology, say T. The result is a task-specific, but method-independent, problem solving model. This generic problem solving model provides the foundation from which alternative problem solving methods for a class of tasks can be defined. Specifically, the generic problem solving model provides i) a highly generic method ontology, say M; ii) a set of generic building blocks (generic tasks), which can be used to construct task-specific problem solving methods; and iii) an initial problem solving method, which can be characterized as the most generic problem solving method which subscribes to M and is applicable to T. More specific problem solving methods can then be (re-)constructed from the generic problem solving model through a process of method/ontology specialization and method-to-task application. The resulting library of reusable components enjoys a clear theoretical basis and provides robust support for reuse.
In the thesis I illustrate the approach in the area of parametric design.
5

Kingston, John. "Multi-perspective modelling for knowledge management and knowledge engineering." Thesis, University of Edinburgh, 2007. http://hdl.handle.net/1842/24782.

Abstract:
The purpose of this thesis is to show how an analytical framework originally intended for information systems architecture can be used to support both knowledge management and knowledge engineering. The framework suggests analysing information or knowledge from six perspectives (Who, What, How, When, Where and Why) at up to six levels of detail (ranging from “scoping” the problem to an implemented solution). The application of this framework to each of CommonKADS’ models is discussed, in the context of several practical applications of the CommonKADS methodology. Strengths and weaknesses in the models that are highlighted by the practical applications are analysed using the framework, with the overall goal of showing where CommonKADS is currently and where it could be usefully extended. The same framework is also applied to knowledge management; it is established that “knowledge management” is in fact a wide collection of different techniques, and the framework appears to be of some use in every case. A specific application of using the framework to resolve common problems in ontology development is presented. The thesis also includes research on mapping knowledge acquisition techniques to CommonKADS’ models (and to the framework); it proposes some extensions to CommonKADS’ library of generic inference structures; and it concludes with a suggestion for a “pragmatic” KADS for use on small projects. The aim is to show that this framework characterises the knowledge required for both knowledge management and knowledge engineering, and that it provides a guide to the sound selection of knowledge management techniques. If the chosen technique should involve knowledge engineering, the wealth of practical advice on CommonKADS in this thesis should also be beneficial.
6

Cunningham, James Alexander. "Modelling knowledge through user focused design in knowledge management applications." Thesis, University of Salford, 2009. http://usir.salford.ac.uk/26630/.

Abstract:
Knowledge management, as an organisational management technique, aims to capture the knowledge of the members of an organisation and to distribute it among those members in a way which encourages new knowledge to emerge. Software, explicitly designed to aid these goals, is seen as a useful tool for knowledge management. The core focus in the design of such software is in creating structures which allow the knowledge being captured to be represented in the software. However, this ability to represent knowledge, on its own, will only serve to make explicit what is already there and will not provide the ability to capture new knowledge in forms different from the knowledge already represented. This thesis examines how best to resolve this apparent conflict, constructing an argument that rethinks the role of the end user and their relationship to software design in knowledge management, and developing a knowledge management-specific software development methodology. Through an in-depth analysis of the 'eCognos' project, which aimed to provide knowledge management software for the construction domain, the thesis explores the notion that a key aspect of knowledge management software design must be the realisation that modelling specifically against a single domain leads to software artefacts which fundamentally constrain the goal of enabling knowledge management.
7

Dodd, Tony. "Prior knowledge for time series modelling." Thesis, University of Southampton, 2000. https://eprints.soton.ac.uk/254110/.

8

Schiele, Felix. "Knowledge transfer in business process modelling." Thesis, University of the West of Scotland, 2015. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.690908.

Abstract:
Knowledge is an important resource, whose transfer is still not completely understood. The underlying belief of this thesis is that knowledge cannot be transferred directly from one person to another but must be converted for the transfer and therefore is subject to loss of knowledge and misunderstanding. This thesis proposes a new model for knowledge transfer and empirically evaluates this model. The model is based on the belief that knowledge must be encoded by the sender to transfer it to the receiver, who has to decode the message to obtain knowledge. To prepare for the model, this thesis provides an overview of models for knowledge transfer and of factors that influence knowledge transfer. The proposed theoretical model for knowledge transfer is implemented in a prototype to demonstrate its applicability. The model describes the influence of four layers, namely the code, syntactic, semantic, and pragmatic layers, on the encoding and decoding of the message. The precise description of the influencing factors and of the overlapping knowledge of sender and receiver facilitates its implementation. Business process modelling was chosen as the application area of the layered model for knowledge transfer. Business processes incorporate an important knowledge resource of an organisation, as they describe the procedures for the production of products and services. The implementation in a software prototype allows a precise description of the process by adding semantics to the simple business process modelling language used. This thesis contributes to the body of knowledge by providing a new model for knowledge transfer, which shows the process of knowledge transfer in greater detail and highlights influencing factors. The implementation in the area of business process modelling reveals the support provided by the model. An expert evaluation indicates that the implementation of the proposed model supports knowledge transfer in business process modelling.
The results of the qualitative evaluation are supported by the findings of a quantitative evaluation, performed as a quasi-experiment with a pre-test/post-test design, two experimental groups and one control group. Mann-Whitney U tests indicated that the group that used the tool implementing the layered model performed significantly better in terms of completeness (the degree of completeness achieved in the transfer) than the group that used a standard BPM tool (Z = 3.057, p = 0.002, r = 0.59) and the control group that used pen and paper (Z = 3.859, p < 0.001, r = 0.72). The experiment indicates that the implementation of the layered model supports the creation of a business process and facilitates a more precise representation.
9

Rehman, S. "Knowledge-based cost modelling for innovative design." Thesis, Cranfield University, 2000. http://hdl.handle.net/1826/3971.

Abstract:
The contribution to new knowledge from this research is a novel method for modelling production costs throughout the design phase of a product's lifecycle, from conceptual to detail design. The provision of cost data throughout the design phase allows management to make more accurate bid estimates and encourages designers to design to cost, leading to a reduction in the amount of design rework and in the product's time to market. The cost modelling strategy adopted incorporates the use of knowledge-based and case-based approaches. Cost estimation is automated by linking design knowledge, required for predicting design features from incomplete design descriptions, to production knowledge. The link between the different paradigms is achieved through the blackboard framework of problem solving, which incorporates both case-based and rule-based reasoning. The method described is aimed at innovative design activities in which original designs are produced which are similar to some extent to past design solutions. The method is validated through a prototyping approach. Tests conducted on the prototype confirm that the designed method models costs sufficiently accurately within the range of its own knowledge base. It can therefore be inferred that the designed cost modelling methodology sets out a feasible approach to cost estimation throughout the design phase.
10

Tung, D.-K. "A knowledge-based three-dimensional modelling system." Thesis, Swansea University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.639269.

Abstract:
As manufacturers strive towards high-quality production, automated industrial inspection is a potent resource in the design of cost-effective systems which can ensure that products meet all their design specifications. However, in reality, economically usable systems are being taken into daily use only in well-controlled applications. Where such systems are being used, they are seen to be primarily addressing 2-dimensional inspection problems. This is not surprising, given the highly complex problems which must be dealt with in practical, real-world environments. However, there is an urgent need to move towards acceptable machine-vision systems which not only can operate in industrial environments, but also offer the benefits of 3-dimensional visual representation - so vital in any real inspection situation. A fundamental aspect of any inspection system is the development of inspection models - to be used in subsequent inspection procedures. The generation of these models is a non-trivial task, and one which is increasingly seen to be best done with operator assistance - as shown, for example, in the work of Chen [34]. However, most current work in such model generation has been tackled in the 2-D arena. This thesis addresses the problem of providing high-quality, visually meaningful representations of 3-dimensional bodies, drawing information from two simple but industrially rugged 2-dimensional images, and using operator assistance to determine the final models. When combined, the resulting 3-dimensional representation provides a valuable reference to an object's total physical structure. The models themselves allow for accurate inspection of the objects' physical parameters.
11

Chandra, S. "Knowledge-based physical process modelling and explanation." Thesis, University of Bristol, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.334031.

12

Azzam, Hany. "Modelling semantic search : the evolution of knowledge modelling, retrieval models and query processing." Thesis, Queen Mary, University of London, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.538379.

13

Wijesekera, Dhammika Harindra. "A form based meta-schema for information and knowledge elicitation." Swinburne University of Technology, 2006. http://adt.lib.swin.edu.au./public/adt-VSWT20060904.123024.

Abstract:
Knowledge is considered important for the survival and growth of an enterprise. Currently knowledge is stored in various places, including the bottom drawers of employees. The human being is considered to be the most important knowledge provider. Over the years knowledge-based systems (KBS) have been developed to capture and nurture the knowledge of domain experts. However, such systems were considered to be separate and different from traditional information systems development, and many KBS development projects have failed. The main causes for such failures have been recognised as the difficulties associated with the process of knowledge elicitation, in particular the techniques and methods employed. On the other hand, the main emphasis of information systems development has been in the areas of data and information capture relating to transaction-based systems. For knowledge to be effectively captured and nurtured, it is necessary for knowledge to be part of the information systems development activity. This thesis reports on a process of investigation and analysis conducted into the areas of information, knowledge and their overlap. This research advocates a hybrid approach, in which knowledge and information capture are treated as one in a unified environment. A meta-schema design based on Formal Object Role Modelling (FORM), independent of implementation details, is introduced for this purpose. This is considered to be a key contribution of this research activity. Both information and knowledge are expected to be captured through this approach. Metadata types are provided for the capture of business rules, and these form part of the knowledge base of an organisation. The integration of knowledge with data and information is also described. XML is recognised by many as the preferred data interchange language, and it is investigated for the purpose of rule interchange.
This approach is expected to enable organisations to interchange business rules and their metadata, in addition to data and their schema. During interchange, rules can be interpreted and applied by receiving systems, thus providing a basis for intelligent behaviour. With the emergence of new technologies such as the Internet, the modelling of an enterprise as a series of business processes has gained prominence. Enterprises are moving towards integration, establishing well-described business processes within and across enterprises, to include their customers and suppliers. The purpose is to derive a common set of objectives and benefit from potential economic efficiencies. The suggested meta-schema design can be used in the early phases of requirements elicitation to specify, communicate, comprehend and refine various artefacts. This is expected to encourage domain experts and knowledge analysts to work towards describing each business process and their interactions. Existing business processes can be documented and business efficiencies can be achieved through a process of refinement. The meta-schema design allows for a 'systems view' and the sharing of such views, thus enabling domain experts to focus on their area of specialisation whilst having an understanding of other business areas and their facts. The design also allows for synchronisation of the mental models of the experts and the knowledge analyst. This has been a major issue with KBS development and one of the main reasons for the failure of such projects. The intention of this research is to provide a facility to overcome this issue. The natural-language-based FORM encourages verbalisation of the domain, hence increasing the understanding and comprehension of available business facts.
14

Strickrodt, M. "An integrated knowledge engineering approach to process modelling." Thesis, University of South Wales, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265743.

15

Jiao, Hong. "Integrated knowledge-based hierarchical modelling of manufacturing organizations." Thesis, Loughborough University, 1991. https://dspace.lboro.ac.uk/2134/32104.

Abstract:
The objective of this thesis is to develop an integrated knowledge-based simulation method, combining the capabilities of knowledge-based simulation and a structured analysis method, for the design and analysis of complex, hierarchical manufacturing organizations. Manufacturing organizations analysed according to this methodology can manage tactical and operational planning as well as the direct operation of the shop floor.
16

Hutton, Douglas. "Knowledge based flowsheet modelling for chemical process design." Thesis, University of Edinburgh, 1990. http://hdl.handle.net/1842/15084.

Abstract:
The aim of this work was to develop an experimental tool to perform flow-sheeting tasks throughout the course of chemical process design. Such design proceeds in a hierarchical manner increasing the amount of detail in the description of the plant, and, correspondingly, in the mathematical models used to describe the plant. The models range from the simplest overall mass balance to rigorous unit models, and the calculations required in the course of a design may include the modelling of the complete plant or any of its constituent parts at any level of detail between these two extremes. Object oriented programming has been used to represent the hierarchy of units required throughout a hierarchical design. A flexible modelling tool requires that models compatible with both the designer's intention and the context of the design are created. Sets of equations are defined in a generic form independent of process units with their selection as part of a model being dependent on the function and context of the unit being modelled. The expansion of the generic equation descriptions is achieved with reference to the structure of the unit, e.g. number of inlets and outlets, while the context of an equation determines the form of the equation to be applied, e.g. ideal or non-ideal behaviour. Equations are, therefore, represented as relations between a process item and its structural and contextual properties. An increase in modelling flexibility is obtained by allowing the designer to interact with generated models. Different sets of equations can be selected within constraints imposed by the system. At a lower level, terms in individual equations can be modified for particular applications. In chemical process design, many different analyses are performed. To demonstrate the application of different tools to a central model, the modelling system has been incorporated within a process synthesis framework. 
The application of the system to simple design case studies is described.
17

Cottam, Hugh. "An ontological framework for knowledge mapping." Thesis, University of Nottingham, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325709.

18

Conlon, Thomas Hugh. "Beyond rules : development and evaluation of knowledge acquisition systems for educational knowledge-based modelling." Thesis, University of Edinburgh, 1997. http://hdl.handle.net/1842/7514.

Abstract:
The technology of knowledge-based systems undoubtedly offers potential for educational modelling, yet its practical impact on today's school classrooms is very limited. To an extent this is because the tools presently used in schools are EMYCIN-type expert system shells. The main argument of this thesis is that these shells make knowledge-based modelling unnecessarily difficult and that tools which exploit knowledge acquisition technologies empower learners to build better models. We describe how such tools can be designed. To evaluate their usability, a model-building course was conducted in five secondary schools. During the course pupils built hundreds of models in a common range of domains. Some of the models were built with an EMYCIN-type shell whilst others were built with a variety of knowledge acquisition systems. The knowledge acquisition systems emerged as superior in important respects. We offer some explanations for these results and argue that although problems remain, such as in teacher education, design of classroom practice, and assessment of learning outcomes, it is clear that knowledge acquisition systems offer considerable potential to develop improved forms of educational knowledge-based modelling.
19

Khor, Sebastian Wankun. "A Fuzzy Knowledge Map Framework for Knowledge Representation." Murdoch University, 2007. http://wwwlib.murdoch.edu.au/adt/browse/view/adt-MU20070822.32701.

Abstract:
Cognitive Maps (CMs) have shown promise as tools for modelling and simulation of knowledge in computers as representations of real objects, concepts, perceptions or events and their relations. This thesis examines the application of fuzzy theory to the expression of these relations, and investigates the development of a framework to better manage the operations of these relations. The Fuzzy Cognitive Map (FCM) was introduced in 1986 but little progress has been made since. This is because of the difficulty of modifying or extending its reasoning mechanism from causality to relations other than causality, such as associative and deductive reasoning. The ability to express the complex relations between objects and concepts determines the usefulness of the maps. Structuring these concepts and relations in a model so that they can be consistently represented and quickly accessed and manipulated by a computer is the goal of knowledge representation. This forms the main motivation of this research. In this thesis, a novel framework is proposed whereby single-antecedent fuzzy rules can be applied to a directed graph, and reasoning ability is extended to include non-causality. The framework provides a hierarchical structure where a graph in a higher layer represents knowledge at a high level of abstraction, and graphs in a lower layer represent the knowledge in more detail. The framework allows a modular design of knowledge representation and facilitates the creation of a more complex structure for modelling and reasoning. The experiments conducted in this thesis show that the proposed framework is effective and useful for deriving inferences from input data, solving certain classification problems, and for prediction and decision-making.
20

Loughlin, Simon Patrick. "Modelling expertise in quantitative scientific problem solving." Thesis, Queen's University Belfast, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268228.

21

Kim, Tag Gon. "A knowledge-based environment for hierarchical modelling and simulation." Diss., The University of Arizona, 1988. http://hdl.handle.net/10150/184380.

Abstract:
Hierarchical, modular specification of discrete-event models offers a basis for reusable model bases and hence for enhanced simulation of truly varied design alternatives. This dissertation develops a knowledge-based environment for hierarchical modelling and simulation of discrete-event systems as the major part of a longer, ongoing research project in artificial intelligence and distributed simulation. In developing the environment, a knowledge representation framework for modelling and simulation, which unifies structural and behavioral knowledge of simulation models, is proposed by incorporating knowledge representation schemes in artificial intelligence within simulation models. The knowledge base created using the framework is composed of a structural knowledge base called entity structure base and a behavioral knowledge base called model base. The DEVS-Scheme, a realization of DEVS (Discrete Event System Specification) formalism in a LISP-based, object-oriented environment, is extended to facilitate the specification of behavioral knowledge of models, especially for kernel models that are suited to model massively parallel computer architectures. The ESP-Scheme, a realization of entity structure formalism in a frame-theoretic representation, is extended to represent structural knowledge of models and to manage it in the structural knowledge base. An advantage of the knowledge-based environment is that it is capable of automatically synthesizing hierarchical, modular models from model base resident components defined by the extended DEVS-Scheme under the direction of structural knowledge using the extended ESP-Scheme. Since both implementation and the underlying LISP language are accessible to the user, the result is a medium capable of combining simulation modelling and artificial intelligence techniques. 
To show the power of the environment, the modelling and simulation methodology is presented using the example of modelling a hypercube computer architecture. Applications of the environment to knowledge-based computer systems design, communications network design, and diagnostic expert systems design are discussed. Since structure descriptions in the environment are susceptible to run-time modification, the environment provides a convenient basis for developing variable-family and variable-structure simulation models such as adaptive computer architectures. Thus, the environment represents a significant step toward realizing powerful concepts of system-theoretic formalisms. The environment also serves as a medium for developing distributed simulation architectures for hierarchical, modular discrete-event models.
22

Martinez-Alvarez, Miguel. "Knowledge-enhanced text classification : descriptive modelling and new approaches." Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/27205.

Abstract:
The knowledge available to be exploited by text classification and information retrieval systems has significantly changed, both in nature and quantity, in recent years. Nowadays, there are several sources of information that can potentially improve the classification process, and systems should be able to adapt to incorporate multiple sources of available data in different formats. This fact is especially important in environments where the required information changes rapidly, and its utility may be contingent on timely implementation. For these reasons, the importance of adaptability and flexibility in information systems is rapidly growing. Current systems are usually developed for specific scenarios. As a result, significant engineering effort is needed to adapt them when new knowledge appears or there are changes in the information needs. This research investigates the usage of knowledge within text classification from two different perspectives. On one hand, the application of descriptive approaches for the seamless modelling of text classification, focusing on knowledge integration and complex data representation. The main goal is to achieve a scalable and efficient approach for rapid prototyping for Text Classification that can incorporate different sources and types of knowledge, and to minimise the gap between the mathematical definition and the modelling of a solution. On the other hand, the improvement of different steps of the classification process where knowledge exploitation has traditionally not been applied. In particular, this thesis introduces two classification sub-tasks, namely Semi-Automatic Text Classification (SATC) and Document Performance Prediction (DPP), and several methods to address them. SATC focuses on selecting the documents that are more likely to be wrongly assigned by the system to be manually classified, while automatically labelling the rest.
Document performance prediction estimates the classification quality that will be achieved for a document, given a classifier. In addition, we also propose a family of evaluation metrics to measure degrees of misclassification, and an adaptive variation of k-NN.
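The SATC routing described above can be illustrated with a simple confidence-margin rule: documents where the classifier's top two class posteriors are close are routed to a human, the rest are labelled automatically. This is an illustrative stand-in, not the thesis's actual method; the function name and parameters are hypothetical.

```python
import numpy as np

def route_for_manual_review(probs, budget):
    """Split documents into a manually-reviewed set and auto-labelled set.

    probs  : (n_docs, n_classes) array of classifier posteriors
    budget : fraction of documents a human annotator can label

    Documents with the smallest margin between the top two class
    probabilities are treated as the most likely misclassifications
    and routed to manual review; the rest are labelled automatically.
    """
    probs = np.asarray(probs, dtype=float)
    top2 = np.sort(probs, axis=1)[:, -2:]       # two highest posteriors
    margin = top2[:, 1] - top2[:, 0]            # small margin = uncertain
    n_manual = int(round(budget * len(probs)))
    order = np.argsort(margin)                  # most uncertain first
    manual, auto = order[:n_manual], order[n_manual:]
    auto_labels = probs[auto].argmax(axis=1)
    return sorted(manual.tolist()), dict(zip(auto.tolist(), auto_labels.tolist()))
```

With a 50% review budget, the two least confident of four documents go to the human while the confident ones keep their predicted labels.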
APA, Harvard, Vancouver, ISO, and other styles
23

Höfler, Veit, Christine Wessollek, and Pierre Karrasch. "Knowledge-based modelling of historical surfaces using lidar data." SPIE, 2016. https://tud.qucosa.de/id/qucosa%3A35116.

Full text
Abstract:
Currently, digital elevation models are mainly used in archaeological studies, especially in the form of shaded reliefs, for the prospection of archaeological sites. Hesse (2010) provides a supporting software tool for the determination of local relief models during prospection using LiDAR scans; the search for relicts from WW2 is also a focus of his research.¹ In James et al. (2006) determined contour lines were used to reconstruct the locations of archaeological artefacts such as buildings.² This study goes further and presents an innovative workflow for determining historical high-resolution terrain surfaces using recent high-resolution terrain models and sedimentological expert knowledge. Based on archaeological field studies (Franconian Saale near Bad Neustadt in Germany), the sedimentological analyses show that archaeologically interesting horizons and geomorphological expert knowledge, in combination with particle size analyses (Köhn, DIN ISO 11277), are useful components for reconstructing surfaces of the early Middle Ages.³ Furthermore, the paper traces how it is possible to use additional information (extracted from a recent digital terrain model) to support the process of determining historical surfaces. Conceptually, this research is based on the methodology of geomorphometry and geostatistics. The basic idea is that the working procedure branches according to the input data: one strand tracks the quantitative data and the other processes the qualitative data. The quantitative data were thus available for further processing first, and were later combined with the qualitative data to convert them to historical heights. In the final stage of the workflow all gathered information is stored in a large data matrix for spatial interpolation using the geostatistical method of Kriging. Besides the historical surface, the algorithm also provides a first estimate of the accuracy of the modelling. 
The presented workflow is characterized by high flexibility and the opportunity to include newly available data in the process at any time.
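The interpolation step that closes the workflow, Kriging, can be sketched as ordinary kriging with an assumed exponential semivariogram. The variogram parameters below are placeholders, not values from the study; note that kriging returns an error variance alongside each prediction, which matches the abstract's point about a first accuracy estimate.

```python
import numpy as np

def krige(xy, z, x0, sill=1.0, rng=50.0, nugget=0.0):
    """Ordinary kriging prediction at one location x0, using an
    exponential semivariogram gamma(h) = nugget + sill*(1 - exp(-h/rng)).
    Returns (predicted value, kriging variance)."""
    xy, z, x0 = (np.asarray(a, dtype=float) for a in (xy, z, x0))
    n = len(z)
    gamma = lambda h: nugget + sill * (1.0 - np.exp(-h / rng))
    # Left-hand side: semivariances between samples, bordered by the
    # unbiasedness constraint (weights sum to one).
    G = gamma(np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1))
    np.fill_diagonal(G, 0.0)                 # gamma(0) = 0 by definition
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = G
    A[n, n] = 0.0
    # Right-hand side: semivariances from the samples to the target point.
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - x0, axis=1))
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    return float(w @ z), float(w @ b[:n] + mu)
```

With a zero nugget the predictor is exact at the sample points, which is a quick sanity check for any implementation.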
APA, Harvard, Vancouver, ISO, and other styles
24

Abanda, Fonbeyin Henry. "Knowledge modelling of emerging technologies for sustainable building development." Thesis, Oxford Brookes University, 2011. https://radar.brookes.ac.uk/radar/items/d8e77b5c-04e1-4fdb-8fd5-1574deab180f/1/.

Full text
Abstract:
In the quest for improved performance of buildings and mitigation of climate change, governments are encouraging the use of innovative sustainable building technologies. Consequently, there is now a large amount of information and knowledge on sustainable building technologies over the web. However, internet searches often overwhelm practitioners with millions of pages that they browse to identify suitable innovations to use on their projects. It has been widely acknowledged that the solution to this problem is the use of a machine-understandable language with rich semantics - the semantic web technology. This research investigates the extent to which semantic web technologies can be exploited to represent knowledge about sustainable building technologies, and to facilitate system decision-making in recommending appropriate choices for use in different situations. To achieve this aim, an exploratory study on sustainable building and semantic web technologies was conducted. This led to the use of the two most popular knowledge engineering methodologies - CommonKADS and "Ontology Development 101" - in modelling knowledge about the sustainable building technology and PV-system domains. A prototype system - the PhotoVoltaic Technology ONtology System (PV-TONS) - that employed the sustainable building technology and PV-system domain knowledge models was developed and validated with a case study. While the sustainable building technology ontology and PV-TONS can both be used as generic knowledge models, PV-TONS is extended to include applications for the design and selection of PV-systems and components. Although its focus was on PV-systems, the application of semantic web technologies can be extended to cover other areas of sustainable building technologies. The major challenges encountered in this study are two-fold. First, many semantic web technologies are still under development and very unstable, thus hindering their full exploitation. 
Second, the lack of learning resources in this field steepens the learning curve and is a potential setback in using semantic web technologies.
APA, Harvard, Vancouver, ISO, and other styles
25

Kim, Sanghee. "User modelling for knowledge sharing in e-mail communication." Thesis, University of Southampton, 2002. https://eprints.soton.ac.uk/45959/.

Full text
Abstract:
This thesis addresses the problem of sharing and transferring knowledge within knowledge-intensive organisations from a user modelling perspective with the purpose of improving individual and group performance. It explores the idea of creating organisational environments from which any of the users involved can benefit by being aware of each other such that sharing expertise between those who are knowledge providers and those who are knowledge seekers can be maximised. In order to encourage individuals to share such valuable expertise, it also explores the idea of keeping a balance between ensuring the availability of information and the increase in user workloads due to the need to handle unwanted information. In an attempt to demonstrate the ideas mentioned above, this research examines the application of user modelling techniques to the development of communication-based task learning systems based on e-mail communication. The design rationale for using e-mail is that personally held expertise is often explicated through e-mail exchanges since it provides a good source for extracting user knowledge. The provision of an automatic message categorisation system that combines knowledge acquired from both statistical and symbolic text learning techniques is one of the three themes of this work. The creation of a new user model that captures the different levels of expertise reflected in exchanged e-mail messages, and makes use of them in linking knowledge providers and knowledge seekers is the second. The design of a new information distribution method to reduce both information overload and underload is the third.
APA, Harvard, Vancouver, ISO, and other styles
26

Abdullah, Sophiana Chua. "Student modelling by adaptive testing : a knowledge-based approach." Thesis, University of Kent, 2003. https://kar.kent.ac.uk/13956/.

Full text
Abstract:
An adaptive test is one in which the number of test items and the order in which the items are presented are computed during the delivery of the test so as to obtain an accurate estimate of a student's knowledge, with a minimum number of test items. This thesis is concerned with the design and development of computerised adaptive tests for use within educational settings. Just as, in the same setting, intelligent tutoring systems are designed to emulate human tutors, adaptive testing systems can be designed to mimic effective informal examiners. The thesis focuses on the role of adaptive testing in student modelling, and demonstrates the practicality of constructing such tests using expert emulation. The thesis makes the case that, for small-scale adaptive tests, a construction process based on the knowledge acquisition technique of expert systems is practical and economical. Several experiments in knowledge acquisition for the construction of an adaptive test are described, in particular, experiments to elicit information for the domain knowledge, the student model and the problem progression strategy. It shows how a description of a particular problem domain may be captured using traditional techniques that are supported by software development in the constraint logic extension to Prolog. It also discusses knowledge acquisition techniques for determining the sequence in which questions should be asked. A student modelling architecture called SKATE is presented. This incorporates an adaptive testing strategy called XP, which was elicited from a human expert. The strategy, XP, is evaluated using simulations of students. This approach to evaluation facilitates comparisons between approaches to testing and is potentially useful in tuning adaptive tests.
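The core loop of an adaptive test, picking the next item from the current ability estimate and updating that estimate from the answer, can be sketched with a generic staircase rule. This is an illustration only, not the elicited XP strategy or the SKATE architecture; all names and the update rule are hypothetical.

```python
def run_adaptive_test(item_difficulties, answers, start=0.5, step=0.25):
    """Staircase-style adaptive test sketch.

    item_difficulties : difficulty of each item on a 0..1 scale
    answers           : callable item index -> bool (answered correctly?)

    Each round picks the unused item whose difficulty is closest to the
    current ability estimate, then nudges the estimate up or down with a
    shrinking step so it converges as evidence accumulates."""
    ability, asked = start, []
    remaining = set(range(len(item_difficulties)))
    while remaining:
        item = min(remaining, key=lambda i: abs(item_difficulties[i] - ability))
        remaining.remove(item)
        asked.append(item)
        ability += step if answers(item) else -step
        ability = min(max(ability, 0.0), 1.0)   # keep estimate on the scale
        step *= 0.5                              # shrink steps as we converge
    return ability, asked
```

A student who can answer items of difficulty up to 0.6 is first given the middle item, then probed above and below it, ending near 0.69.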
APA, Harvard, Vancouver, ISO, and other styles
27

Lawson, Kevin W. "A semantic modelling approach to knowledge based statistical software." Thesis, Aston University, 1989. http://publications.aston.ac.uk/10642/.

Full text
Abstract:
The topic of this thesis is the development of knowledge based statistical software. The shortcomings of conventional statistical packages are discussed to illustrate the need to develop software which is able to exhibit a greater degree of statistical expertise, thereby reducing the misuse of statistical methods by those not well versed in the art of statistical analysis. Some of the issues involved in the development of knowledge based software are presented and a review is given of some of the systems that have been developed so far. The majority of these have moved away from conventional architectures by adopting what can be termed an expert systems approach. The thesis then proposes an approach which is based upon the concept of semantic modelling. By representing some of the semantic meaning of data, it is conceived that a system could examine a request to apply a statistical technique and check if the use of the chosen technique was semantically sound, i.e. will the results obtained be meaningful. Current systems, in contrast, can only perform what can be considered as syntactic checks. The prototype system that has been implemented to explore the feasibility of such an approach is presented, the system has been designed as an enhanced variant of a conventional style statistical package. This involved developing a semantic data model to represent some of the statistically relevant knowledge about data and identifying sets of requirements that should be met for the application of the statistical techniques to be valid. Those areas of statistics covered in the prototype are measures of association and tests of location.
APA, Harvard, Vancouver, ISO, and other styles
28

Wedgwood, Owen. "A knowledge-based approach to modelling fast response catchments." Thesis, University of Salford, 1993. http://usir.salford.ac.uk/42961/.

Full text
Abstract:
This thesis describes research into flood forecasting on rapid response catchments, using knowledge-based principles. Extensive use was made of high resolution single site radar data from the radar site at Hameldon Hill in North West England. Actual storm events and synthetic precipitation data were used in an attempt to identify 'knowledge' of the rainfall - runoff process. Modelling was carried out with the use of transfer functions, and an analysis is presented of the problems in using this type of model in hydrological forecasting. A 'physically realisable' transfer function model is outlined, and storm characteristics were analysed to establish information about model tuning. The knowledge gained was built into a knowledge based system (KBS) to enable real-time optimisation of model parameters. A rainfall movement forecasting program was used to provide input to the system. Forecasts using the KBS-tuned parameters proved better than those from a naive transfer function model in most cases. In order to further improve flow forecasts a simple catchment wetness procedure was developed and included in the system, based on antecedent precipitation index, using radar rainfall input. A new method of intensity - duration - frequency analysis was developed using distributed radar data at a 2km by 2km resolution. This allowed a new application of return periods in real time, in assessing storm severity as it occurs. A catchment transposition procedure was developed allowing subjective catchment placement in front of an approaching event, to assess rainfall 'risk', in terms of catchment history, before the event reaches it. A knowledge-based approach, working in real time, was found to be successful. The main drawback is the initial procurement of knowledge, or information about thresholds, linkages and relationships.
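A transfer function in this rainfall-runoff sense is a discrete linear recurrence mapping rainfall input to flow output. A minimal first-order sketch (the coefficients and delay here are arbitrary illustrations, not calibrated values from the thesis):

```python
def simulate_flow(rain, a=0.8, b=0.3, delay=1, q0=0.0):
    """First-order discrete transfer-function model:

        q[t] = a * q[t-1] + b * rain[t - delay]

    a     : recession coefficient (how fast flow decays)
    b     : gain applied to delayed rainfall
    delay : pure time delay between rainfall and runoff response
    """
    q, prev = [], q0
    for t in range(len(rain)):
        r = rain[t - delay] if t >= delay else 0.0
        prev = a * prev + b * r
        q.append(prev)
    return q
```

A single rainfall pulse produces the classic delayed, exponentially receding hydrograph; the KBS described in the abstract would adjust `a` and `b` in real time rather than keep them fixed.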
APA, Harvard, Vancouver, ISO, and other styles
29

Dogbey, Felix Kwame Atsu. "GEST translator within the knowledge-based modelling system MAGEST." Thesis, University of Ottawa (Canada), 1985. http://hdl.handle.net/10393/4866.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Morales, Gamboa Rafael. "Exploring participative learner modelling and its effects on learner behaviour." Thesis, University of Edinburgh, 2000. http://hdl.handle.net/1842/6666.

Full text
Abstract:
The educational benefits of involving learners as active players in the learner modelling process have been an important motivation for research on this form of learner modelling, henceforth referred to as participative learner modelling. Such benefits, conceived as the promotion of learners' reflection on and awareness of their own knowledge, have in most cases been asserted on the grounds of system design and supported only by anecdotal evidence. This dissertation explores the issue of whether participative learner modelling actually promotes learners' reflection and awareness. It does so by firstly interpreting 'reflection' and 'awareness' in light of "classical" theories of human cognitive architecture, skill acquisition and meta-cognition, in order to infer changes in learner abilities (and therefore behaviour) amenable to empirical corroboration. The occurrence of such changes is then tested for an implementation of a paradigmatic form of participative learner modelling: allowing learners to inspect and modify their learner models. The domain of application centres on the sensorimotor skill of controlling a pole on a cart and represents a novel type of domain for participative learner modelling. Special attention is paid to evaluating the method developed for constructing learner models and the form of presenting them to learners: the former is based on a method known as behavioural cloning for acquiring expert knowledge by means of machine learning; the latter deals with the modularity of the learner models and the modality and interactivity of their presentation. The outcome of this research suggests that participative learner modelling may increase the abilities of learners to report accurately their problem-solving knowledge and to carry out novel tasks in the same domain—the sort of behavioural changes expected from increased learners' awareness and reflection. 
More importantly perhaps, the research suggests a viable methodology for examining the educational benefits of participative learner modelling. It also exemplifies the difficulties that such endeavours will face.
APA, Harvard, Vancouver, ISO, and other styles
31

Fowell, Susan Patricia. "The performance modelling of preferential choice : a knowledge engineering approach." Thesis, Leeds Beckett University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335892.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

MacNish, Craig Gordon. "Nonmonotonic inference systems for modelling dynamic processes." Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.240195.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Benn, Neil Jefferson Lavere. "Modelling Scholarly Debate Conceptual foundations for knowledge domain analysis technology." Thesis, Open University, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.518179.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Taalab, Khaled Paul. "Modelling soil bulk density using data-mining and expert knowledge." Thesis, Cranfield University, 2013. http://dspace.lib.cranfield.ac.uk/handle/1826/8273.

Full text
Abstract:
Data about the spatial variation of soil attributes is required to address a great number of environmental issues, such as improving water quality, flood mitigation, and determining the effects of the terrestrial carbon cycle. The need for a continuum of soils data is problematic, as it is only possible to observe soil attributes at a limited number of locations, beyond which, prediction is required. There is, however, disparity between the way in which much of the existing information about soil is recorded and the format in which the data is required. There are two primary methods of representing the variation in soil properties, as a set of distinct classes or as a continuum. The former is how the variation in soils has been recorded historically by the soil survey, whereas the latter is how soils data is typically required. One solution to this issue is to use a soil-landscape modelling approach which relates the soil to the wider landscape (including topography, land-use, geology and climatic conditions) using a statistical model. In this study, the soil-landscape modelling approach has been applied to the prediction of soil bulk density (Db). The original contribution to knowledge of the study is demonstrating that producing a continuous surface of Db using a soil-landscape modelling approach is a viable alternative to the 'classification' approach which is most frequently used. The benefit of this method is shown in relation to the prediction of soil carbon stocks, which can be predicted more accurately and with less uncertainty. The second part of this study concerns the inclusion of expert knowledge within the soil-landscape modelling approach. The statistical modelling approaches used to predict Db are data driven, hence it is difficult to interpret the processes which the model represents. In this study, expert knowledge is used to predict Db within a Bayesian network modelling framework, which structures knowledge in terms of probability. 
This approach creates models which can be more easily interpreted and consequently facilitate knowledge discovery; it also provides a method for expert knowledge to be used as a proxy for empirical data. The contribution to knowledge of this section of the study is twofold: first, that Bayesian networks can be used as tools for data-mining to predict a continuous soil attribute such as Db; and second, that in lieu of data, expert knowledge can be used to accurately predict landscape-scale trends in the variation of Db using a Bayesian modelling approach.
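A Bayesian network of the kind described structures expert knowledge as conditional probability tables. The toy network below, with invented variables and numbers that do not come from the thesis, shows how a posterior for high bulk density could be read off by summing out an intermediate variable:

```python
# Toy Bayesian network: land use -> organic-matter class -> bulk density.
# All variable names and probabilities are invented for illustration.
p_org_given_land = {("low", "arable"): 0.7, ("high", "arable"): 0.3,
                    ("low", "forest"): 0.2, ("high", "forest"): 0.8}
p_db_high_given = {("low", "arable"): 0.8, ("high", "arable"): 0.5,
                   ("low", "forest"): 0.4, ("high", "forest"): 0.1}

def posterior_db_high(land):
    """P(Db = high | land use), summing out the organic-matter class."""
    return sum(p_org_given_land[(org, land)] * p_db_high_given[(org, land)]
               for org in ("low", "high"))
```

Because every number is an explicit conditional probability, an expert can inspect and adjust each table directly, which is exactly the interpretability advantage the abstract claims over data-driven models.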
APA, Harvard, Vancouver, ISO, and other styles
35

Fouche, Carien. "Elliptical applicator design through analysis, modelling and material property knowledge." Thesis, Stellenbosch : University of Stellenbosch, 2006. http://hdl.handle.net/10019.1/2936.

Full text
Abstract:
The properties of an elliptical microwave applicator are investigated. The investigation includes the analytical solution of the cutoff frequencies and electromagnetic field patterns in elliptical waveguides. This requires the solution of Mathieu Functions and becoming familiar with an orthogonal elliptical coordinate system. The study forms part of a wider investigation into the microwave heating of minerals and a cavity is designed in such a way that modes are produced at 896MHz. Extensive use is made of simulation packages. These software packages require that the user knows the dielectric properties of materials that are part of simulations. Therefore, the determination of these properties through measurement and the use of genetic algorithms is considered. A method to improve an S-band waveguide measurement system by implementing time domain gating and an offline calibration code previously written forms an integral part of this section of the project. It is found that, within limits, elliptical waveguides tend to produce a greater number of modes within a certain frequency range when compared to rectangular waveguides.
APA, Harvard, Vancouver, ISO, and other styles
36

Ip, Saimond. "A knowledge representation approach to information systems analysis and modelling." Thesis, University of Cambridge, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260555.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Ahmad, Ali. "Towards a knowledge-based discrete simulation modelling environment using Prolog." Thesis, University of Warwick, 1989. http://wrap.warwick.ac.uk/106507/.

Full text
Abstract:
The initial chapters of this thesis cover a survey of literature relating to problem solving, discrete simulation, knowledge-based systems and logic programming. The main emphasis in these chapters is on a review of the state of the art in the use of Artificial Intelligence methods in Operational Research in general and Discrete Simulation in particular. One of the fundamental problems in discrete simulation is to mimic the operation of a system as a part of problem solving relating to the system. A number of methods of simulated behaviour generation exist which dictate the form in which a simulation model must be expressed. This thesis explores the possibility of employing the logic programming paradigm for this purpose as it has been claimed to offer a number of advantages over the procedural programming paradigm. As a result a prototype simulation engine has been implemented using Prolog which can generate simulated behaviour from an articulation of the model using three-phase or process 'world views' (or a sensible mixture of these). The simulation engine approach can offer the advantage of building simulation models incrementally. A new paradigm for computer software systems in the form of Knowledge-Based Systems has emerged from the research in the area of Artificial Intelligence. Use of this paradigm has been explored in the area of simulation model building. A feasible method of knowledge-based simulation model generation has been proposed and using this method a prototype knowledge-based simulation modelling environment has been implemented using Prolog. The knowledge based system paradigm has been seen to offer a number of advantages which include the possibility of representing both the application domain knowledge and the simulation methodology knowledge which can assist in the model definition as well as in the generation of executable code. 
These, in turn, may offer a greater amount of computer assistance in developing simulation models than would be possible otherwise. The research aim is to make advances towards the goal of 'intelligent' simulation modelling environments. It consolidates the knowledge related to simulated behaviour generation methods using symbolic representation for the system state while permitting the use of alternate (and mixed) 'world views' for the model articulation. It further demonstrates that use of the knowledge-based systems paradigm for implementing a discrete simulation modelling environment is feasible and advantageous.
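The three-phase 'world view' mentioned above alternates an A-phase (advance the clock to the next bound event), a B-phase (execute all time-bound activities now due), and a C-phase (repeatedly scan condition-driven activities until none can fire). A minimal executive, sketched in Python rather than the thesis's Prolog, might look like:

```python
import heapq

class ThreePhaseSim:
    """Minimal three-phase simulation executive. B-activities live on a
    time-ordered calendar; C-activities are callables returning True if
    they fired, re-scanned after every B-phase until quiescent."""
    def __init__(self):
        self.clock, self.calendar, self.c_activities = 0.0, [], []
        self._seq = 0                       # tie-breaker for equal times
    def schedule(self, delay, action):      # bind a B-activity
        heapq.heappush(self.calendar, (self.clock + delay, self._seq, action))
        self._seq += 1
    def add_c(self, activity):              # register a C-activity
        self.c_activities.append(activity)
    def run(self, until):
        while self.calendar and self.calendar[0][0] <= until:
            self.clock = self.calendar[0][0]            # A-phase
            while self.calendar and self.calendar[0][0] == self.clock:
                heapq.heappop(self.calendar)[2]()       # B-phase
            fired = True                                # C-phase
            while fired:
                fired = any([c() for c in self.c_activities])
```

A single-server queue falls out naturally: arrivals and service completions are B-activities, while "start service when the server is free and the queue is non-empty" is a C-activity.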
APA, Harvard, Vancouver, ISO, and other styles
38

Karmali, Anjum. "Restrained knowledge-based sampling : applications to structure prediction and modelling." Thesis, University of Cambridge, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.609440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Massie, Stewart. "Complexity modelling for case knowledge maintenance in case-based reasoning." Thesis, Robert Gordon University, 2006. http://hdl.handle.net/10059/472.

Full text
Abstract:
Case-based reasoning solves new problems by re-using the solutions of previously solved similar problems and is popular because many of the knowledge engineering demands of conventional knowledge-based systems are removed. The content of the case knowledge container is critical to the performance of case-based classification systems. However, the knowledge engineer is given little support in the selection of suitable techniques to maintain and monitor the case base. This research investigates the coverage, competence and problem-solving capacity of case knowledge with the aim of developing techniques to model and maintain the case base. We present a novel technique that creates a model of the case base by measuring the uncertainty in local areas of the problem space based on the local mix of solutions present. The model provides an insight into the structure of a case base by means of a complexity profile that can assist maintenance decision-making and provide a benchmark to assess future changes to the case base. The distribution of cases in the case base is critical to the performance of a case-based reasoning system. We argue that classification boundaries represent important regions of the problem space and develop two complexity-guided algorithms which use boundary identification techniques to actively discover cases close to boundaries. We introduce a complexity-guided redundancy reduction algorithm which uses a case complexity threshold to retain cases close to boundaries and delete cases that form single class clusters. The algorithm offers control over the balance between maintaining competence and reducing case base size. The performance of a case-based reasoning system relies on the integrity of its case base but in real life applications the available data invariably contains erroneous, noisy cases. Automated removal of these noisy cases can improve system accuracy. 
In addition, error rates can often be reduced by removing cases to give smoother decision boundaries between classes. We show that the optimal level of boundary smoothing is domain dependent and, therefore, our approach to error reduction reacts to the characteristics of the domain by setting an appropriate level of smoothing. We introduce a novel algorithm which identifies and removes both noisy and boundary cases with the aid of a local distance ratio. A prototype interface has been developed that shows how the modelling and maintenance approaches can be used in practice in an interactive manner. The interface allows the knowledge engineer to make informed maintenance choices without the need for extensive evaluation effort while, at the same time, retaining control over the process. One of the strengths of our approach is in applying a consistent, integrated method to case base maintenance to provide a transparent process that gives a degree of explanation.
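The local-distance-ratio idea can be illustrated by comparing each case's mean distance to its nearest same-class neighbours against its mean distance to its nearest other-class neighbours: a ratio above one means the case sits closer to foreign classes than to its own and is a noise candidate. This sketch is an illustrative approximation, not the algorithm from the thesis.

```python
import math

def noise_scores(cases, labels, k=3):
    """Score each case's 'noisiness' as (mean distance to the k nearest
    SAME-class cases) / (mean distance to the k nearest OTHER-class
    cases). Scores above 1.0 flag likely noisy or mislabelled cases."""
    scores = []
    for i, (x, y) in enumerate(zip(cases, labels)):
        same = sorted(math.dist(x, c)
                      for j, (c, l) in enumerate(zip(cases, labels))
                      if j != i and l == y)[:k]
        other = sorted(math.dist(x, c)
                       for c, l in zip(cases, labels) if l != y)[:k]
        scores.append((sum(same) / len(same)) / (sum(other) / len(other)))
    return scores
```

On two well-separated clusters, a case labelled with one class but placed inside the other cluster scores well above 1, while cluster-interior cases score well below it.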
APA, Harvard, Vancouver, ISO, and other styles
40

Abdullah, Mohd Syazwan. "A UML profile for conceptual modelling of knowledge-based systems." Thesis, University of York, 2006. http://etheses.whiterose.ac.uk/10988/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Dzbor, Martin. "Design as interactions of problem framing and problem solving : a formal and empirical basis for problem framing in design." Thesis, Open University, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.250506.

Full text
Abstract:
In this thesis, I present, illustrate and empirically validate a novel approach to modelling and explaining the design process. The main outcome of this work is the formal definition of problem framing, and the formulation of a recursive model of framing in design. The model, code-named RFD, represents a formalisation of a grey area in the science of design, and sees the design process as a recursive interaction of problem framing and problem solving. The proposed approach is based upon a phenomenon introduced in cognitive science and known as (reflective) solution talkback. Previously, there were no formalisations of the knowledge interactions occurring within this complex reasoning operation. The recursive model is thus an attempt to express the existing knowledge in a formal and structured manner. In spite of the rather abstract knowledge level at which the model is defined, it is a firm step in the clarification of the design process. The RFD model is applied to the knowledge-level description of the conducted experimental study, which is annotated and analysed in the defined terminology. Eventually, several schemas implied by the model are identified, exemplified, and elaborated to reflect the empirical results. The model features the mutual interaction of the predicates ‘specifies’ and ‘satisfies’. The first asserts that a certain set of explicit statements is sufficient for expressing the relevant desired states the design is aiming to achieve. The validity of the predicate ‘specifies’ might not be provable directly in any problem solving theory. A particular specification can be upheld or rejected only by drawing upon the validity of a complementary predicate ‘satisfies’ and the (un-)acceptability of the considered candidate solution (e.g. technological artefact, product). It is the role of the predicate ‘satisfies’ to find and derive such a candidate solution. 
The predicates ‘specifies’ and ‘satisfies’ are contextually bound and can be evaluated only within a particular conceptual frame. Thus, a solution to the design problem is sound and admissible with respect to an explicit commitment to a particular specification and design frame. The role of the predicate ‘acceptable’ is to compare the admissible solutions and frames against the ‘real’ design problem, as if it answered the question: “Is this solution really what I wanted/intended?” Furthermore, I propose a set of principled schemas on the conceptual (knowledge) level with the aim of making the interactive patterns of the design process explicit. These conceptual schemas are elicited from rigorous experiments that utilised a structured and principled approach to recording the designer’s conceptual reasoning steps and decisions. They include:
• the refinement of an explicit problem specification within a conceptual frame;
• the refinement of an explicit problem specification using a re-framed reference; and
• the conceptual re-framing (i.e. the identification and articulation of new conceptual terms).
Since the conceptual schemas reflect the sequence of the ‘typical’ decisions the designer may make during the design process, there is no single, symbol-level method for the implementation of these conceptual patterns. Thus, when one decides to follow the abstract patterns and schemas, this abstract model alone can foster a principled design on the knowledge level. It must be acknowledged that for the purpose of computer-based support, these abstract schemas need to be turned into operational models and consequently suitable methods. However, such an operational perspective was beyond the time and resource constraints placed on this research.
APA, Harvard, Vancouver, ISO, and other styles
42

Rojo, Vicente Guerrero. "MML, a modelling language with dynamic selection of methods." Thesis, University of Sussex, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285103.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Gómez, Martínez Mario. "Open, Reusable and Configurable Multi Agent Systems: A Knowledge Modelling Approach." Doctoral thesis, Universitat Autònoma de Barcelona, 2004. http://hdl.handle.net/10803/3049.

Full text
Abstract:
Although Multi Agent Systems are supposed to be open systems, most of the initial research has focused on closed systems, which are designed by one developer team for one homogeneous environment and a single domain.
This thesis aims to advance some steps towards realizing the vision of open Multi Agent Systems. Our work has materialized as a framework for developing Multi Agent Systems that maximizes the reuse of agent capabilities across multiple application domains and supports the automatic, on-demand configuration of agent teams according to stated problem requirements.
On the one hand, this work explores the feasibility of the Problem Solving Methods approach for describing agent capabilities in a way that maximizes their reuse. However, since Problem Solving Methods are not designed for agents, we have had to adapt them to deal with agent-specific concepts such as agent communication languages and interaction protocols.
On the other hand, this thesis proposes a new model of the Cooperative Problem Solving process that introduces a Knowledge Configuration stage prior to the Team Formation stage. The Knowledge Configuration process performs a bottom-up design of a team in terms of the tasks to be solved, the capabilities required, and the domain knowledge available.
These claims are supported by the implementation of an agent infrastructure that has been tested in practice. The infrastructure was developed according to the electronic institutions formalism for specifying open agent societies. It provides a social mediation layer for both requesters and providers of capabilities, without imposing either a particular agent architecture or an attitudinal theory of cooperation.
The contributions of our work are presented as a multilayered framework, moving from the more abstract aspects to the more concrete, implementation-dependent ones, and concluding with the implementation of the agent infrastructure and a particular application example for cooperative information agents.
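The bottom-up team-design idea described in this abstract, choosing a capability for each task subject to the domain knowledge actually being available, can be sketched roughly as follows. This is a minimal illustration only; the function name, the dictionary schema for capabilities, and the first-candidate selection rule are assumptions, not the thesis's actual configuration process.

```python
def configure_team(tasks, capabilities, domain_knowledge):
    """Hypothetical Knowledge Configuration sketch: map each task to an
    advertised capability whose knowledge requirements are covered by
    the available domain knowledge. Returns a task -> capability-name
    'team design', or None if some task cannot be covered."""
    design = {}
    for task in tasks:
        # A capability qualifies if it solves the task and every piece
        # of domain knowledge it needs is available.
        candidates = [c for c in capabilities
                      if c["solves"] == task
                      and c["needs"] <= domain_knowledge]
        if not candidates:
            return None  # configuration fails: no provider for this task
        design[task] = candidates[0]["name"]
    return design
```

A real configuration process would rank candidates and reason about interaction protocols; the sketch only shows the coverage check that precedes team formation.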
APA, Harvard, Vancouver, ISO, and other styles
44

Kynn, Mary. "Eliciting Expert Knowledge for Bayesian Logistic Regression in Species Habitat Modelling." Queensland University of Technology, 2005. http://eprints.qut.edu.au/16041/.

Full text
Abstract:
This research aims to develop a process for eliciting expert knowledge and incorporating it as prior distributions in a Bayesian logistic regression model. The work was motivated by the need for less data-reliant methods of modelling species habitat distributions. A comprehensive review of research from both cognitive psychology and the statistical literature provided specific recommendations for the creation of an elicitation scheme. These were incorporated into the design of a Bayesian logistic regression model and an accompanying elicitation scheme. The model and scheme were then implemented as interactive, graphical software called ELICITOR, created within the BlackBox Component Pascal environment. The software was specifically written to be compatible with the existing Bayesian analysis software WinBUGS, as an add-on component. The model, elicitation scheme and software were evaluated through five case studies of various fauna and flora species. For two of these there were sufficient data for a comparison of expert and data-driven models. The case studies confirmed that expert knowledge can be quantified and formally incorporated into a logistic regression model. Finally, they provide a basis for a thorough discussion of extensions to the model, scheme and software, and lead to recommendations for elicitation research.
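The core statistical idea, turning an expert's elicited quantiles into normal priors on the regression coefficients and combining them with the data likelihood, can be sketched as below. This is a minimal illustration under assumed conventions; the function names and the interquartile-range scheme are for exposition and do not reproduce ELICITOR's actual elicitation protocol.

```python
import math

def normal_prior_from_quantiles(q25, q75):
    """Convert an expert's elicited 25th/75th percentiles for a
    coefficient into a Normal(mu, sigma) prior. For a normal
    distribution the interquartile range is about 1.349 sigma."""
    mu = (q25 + q75) / 2.0
    sigma = (q75 - q25) / 1.349
    return mu, sigma

def log_posterior(beta, X, y, priors):
    """Unnormalised log-posterior for logistic regression with
    independent normal priors on each coefficient."""
    # Log-prior: sum of normal log-densities (constants dropped).
    lp = 0.0
    for b, (mu, sigma) in zip(beta, priors):
        lp += -0.5 * ((b - mu) / sigma) ** 2 - math.log(sigma)
    # Log-likelihood: Bernoulli with logit link.
    for xi, yi in zip(X, y):
        eta = sum(b * x for b, x in zip(beta, xi))
        p = 1.0 / (1.0 + math.exp(-eta))
        lp += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return lp
```

In practice the posterior would be explored by MCMC (as WinBUGS does) rather than evaluated pointwise; the sketch only shows how elicited priors enter the model.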
APA, Harvard, Vancouver, ISO, and other styles
45

Lundell, James. "Knowledge extraction and the modelling of expertise in a diagnostic task /." Thesis, Connect to this title online; UW restricted, 1988. http://hdl.handle.net/1773/9088.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Zhang, Yan. "Computer-based modelling and management for current working knowledge evolution support." Thesis, University of Strathclyde, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.322053.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Duffy, A. H. B. "Computer modelling of early stage numerical ship design knowledge and expertise." Thesis, University of Strathclyde, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.381341.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Yahya, Zaharah. "Modelling information of architectural heritage : a model for preserving design knowledge." Thesis, University of Strathclyde, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.436094.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Shehab, Esam. "An intelligent knowledge based cost modelling system for innovative product development." Thesis, De Montfort University, 2001. http://hdl.handle.net/2086/11605.

Full text
Abstract:
This research work aims to develop an intelligent knowledge-based system for product cost modelling and design for automation at an early design stage of the product development cycle, enabling designers and manufacturing planners to make more accurate estimates of product cost and, consequently, to respond more quickly to customers’ expectations. The main objectives of the research are to: (1) develop a prototype system that assists an inexperienced designer in estimating the manufacturing cost of a product; (2) advise designers on how to eliminate design and manufacturing related conflicts that may arise during the product development process; (3) recommend the most economic assembly technique for the product, so that this technique can be considered during the design process, and provide design improvement suggestions to simplify the assembly operations (i.e. give designers an opportunity to design for assembly (DFA)); (4) apply a fuzzy logic approach to certain cases; and (5) evaluate the developed prototype system through five case studies. The developed cost modelling system comprises a CAD solid modelling system, a material selection module, a knowledge-based system (KBS), a process optimisation module, a design for assembly module, a cost estimation module, and a user interface. In addition, the system encompasses two types of databases, permanent (static) and temporary (dynamic), categorised into five separate groups: a feature database, material database, machinability database, machine database, and mould database. The system development process passed through four major steps: firstly, constructing the knowledge-based and process optimisation system; secondly, developing a design for assembly module; thirdly, integrating the KBS with both the material selection database and a CAD system.
Finally, developing and implementing a fuzzy logic approach to generate reliable cost estimates and to handle the uncertainty in the cost estimation model that cannot be addressed by traditional analytical methods. Besides estimating the total cost of a product, the developed system has the capability to: (1) select a material as well as the machining processes, their sequence and machining parameters, based on a set of design and production parameters that the user provides to the system; and (2) recommend the most economic assembly technique for a product and provide design improvement suggestions, in the early stages of the design process, based on a design feasibility technique. It provides recommendations when a design cannot be manufactured with the available manufacturing resources and capabilities. In addition, a feature-by-feature cost estimation report can be generated by the system to highlight features of high manufacturing cost. The system can be applied without the need for detailed design information, so that it can be implemented at an early design stage; consequently, costly redesign and longer lead times can be avoided. One of the tangible advantages of this system is that it warns users of features that are costly and difficult to manufacture. In addition, the system is developed in such a way that users can modify the product design at any stage of the design process. This research dealt with cost modelling of both machined components and injection moulded components. The developed cost-effective design environment was evaluated on real products, including a scientific calculator, a telephone handset, and two machined components. Conclusions drawn from the system indicated that the developed prototype could help companies reduce product cost and lead time by estimating the total product cost throughout the entire product development cycle, including assembly cost.
Case studies demonstrated that designing a product using the developed system is more cost effective than using traditional systems. The cost estimated for a number of products used in the case studies was almost 10 to 15% less than that estimated by the traditional system, since the latter takes into consideration neither process optimisation, nor design alternatives, nor design for assembly issues.
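The fuzzy logic element of such a cost model can be illustrated with a toy sketch: fuzzify an input (here, machining time) against triangular membership functions, then defuzzify with a weighted average. The membership breakpoints, cost levels and function names are hypothetical, chosen for illustration rather than taken from the thesis's model.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a to a peak at b,
    then falls back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_cost_estimate(machining_time):
    """Weight 'low'/'medium'/'high' cost levels by their membership
    degrees and defuzzify as a weighted average (centroid of
    singleton cost levels). Returns None if the input activates
    no membership function."""
    levels = {10.0: triangular(machining_time, 0, 2, 5),    # low cost
              25.0: triangular(machining_time, 3, 6, 9),    # medium cost
              50.0: triangular(machining_time, 7, 10, 14)}  # high cost
    total = sum(levels.values())
    if total == 0:
        return None
    return sum(cost * m for cost, m in levels.items()) / total
```

Overlapping membership functions are what let such a model interpolate smoothly between cost categories instead of switching abruptly at a threshold.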
APA, Harvard, Vancouver, ISO, and other styles
50

Suraweera, Pramuditha. "Widening the Knowledge Acquisition Bottleneck for Intelligent Tutoring Systems." Thesis, University of Canterbury. Computer Science and Software Engineering, 2007. http://hdl.handle.net/10092/1150.

Full text
Abstract:
Empirical studies have shown that Intelligent Tutoring Systems (ITS) are effective tools for education. However, developing an ITS is a labour-intensive and time-consuming process. A major share of the development effort is devoted to acquiring the domain knowledge that accounts for the intelligence of the system. The goal of this research is to reduce the knowledge acquisition bottleneck and enable domain experts to build the domain model required for an ITS. In pursuit of this goal, an authoring system capable of producing a domain model with the assistance of a domain expert was developed. Unlike previous authoring systems, this system (named CAS) can acquire knowledge for non-procedural as well as procedural tasks. CAS was developed to generate the knowledge required for constraint-based tutoring systems, reducing the effort as well as the amount of knowledge engineering and programming expertise required. Constraint-based modelling is a student modelling technique whose abstract representation somewhat eases the knowledge acquisition bottleneck. CAS expects the domain expert to provide an ontology of the domain, example problems and their solutions; it uses machine learning techniques to reason with the information provided by the domain expert to generate a domain model. A series of evaluation studies of this research produced promising results. The initial evaluation revealed that the task of composing an ontology of the domain assisted the manual composition of a domain model. The second study showed that CAS was effective in generating constraints for the three vastly different domains of database modelling, data normalisation and fraction addition. The final study demonstrated that CAS was also effective in generating constraints when assisted by novice ITS authors, producing constraint sets that were over 90% complete.
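In constraint-based modelling, a constraint pairs a relevance condition with a satisfaction condition: whenever the relevance condition holds for a student's solution, the satisfaction condition must hold too, or feedback is given. A minimal sketch follows; the class design and the fraction-addition constraint are illustrative assumptions, not CAS's actual representation.

```python
class Constraint:
    """One pedagogical constraint: a relevance condition, a
    satisfaction condition, and feedback for violations."""
    def __init__(self, cid, relevance, satisfaction, feedback):
        self.cid = cid
        self.relevance = relevance
        self.satisfaction = satisfaction
        self.feedback = feedback

    def check(self, solution):
        """Return None if the constraint is irrelevant or satisfied,
        otherwise return the feedback message."""
        if not self.relevance(solution):
            return None
        return None if self.satisfaction(solution) else self.feedback

# Hypothetical fraction-addition constraint: when the two fractions
# share a denominator, the student's answer must keep that denominator.
same_denominator = Constraint(
    cid=1,
    relevance=lambda s: s["d1"] == s["d2"],
    satisfaction=lambda s: s["d_answer"] == s["d1"],
    feedback="When denominators are equal, keep the same denominator.",
)
```

Because constraints describe states of a correct solution rather than solution procedures, the same representation covers non-procedural tasks such as database modelling, which is what makes the approach attractive for an authoring tool.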
APA, Harvard, Vancouver, ISO, and other styles
