Click this link to see other types of publications on this topic: Multi-model model.

Doctoral dissertations on the topic "Multi-model model"

Create an accurate reference in APA, MLA, Chicago, Harvard, and many other citation styles


Consult the top 50 doctoral dissertations on the topic "Multi-model model".

Next to every entry in the bibliography there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if the relevant details are available in the work's metadata.

Browse doctoral dissertations from many different fields and compile an appropriate bibliography.

1

Anastasopoulos, Achilles. "Cross model access in the multi-lingual, multi-model database management system". Thesis, Monterey, California. Naval Postgraduate School, 1997. http://hdl.handle.net/10945/8178.

Abstract:
Approved for public release; distribution is unlimited
Relational, hierarchical, network, functional, and object-oriented databases each support their corresponding query language: SQL, DL/I, CODASYL-DML, DAPLEX, and OO-DML, respectively. However, each database type may be accessed only through its own language. The goal of M2DBMS is to provide a heterogeneous environment in which any supported database is accessible by any supported query language. This is known as cross-model access capability. In this thesis, relational to object-oriented database cross-model access is successfully implemented for a test database. Data from the object-oriented database EWIROODB is accessed and retrieved using an SQL query from the relational database EWIROODB. One problem is that the two interfaces (object-oriented and relational) create catalog files with different formats, which initially makes cross-model access impossible. In this thesis the relationally created catalog file is used, and the cross-model access capability is achieved. The object-oriented catalog file must be identical to the relational one. Therefore, work yet to be done is to write a program that automatically reformats the object-oriented catalog file into an equivalent relational catalog file.
2

Blackburn, Ian Russell. "A conceptual multi-model HCI model for the blind". Thesis, Curtin University, 2011. http://hdl.handle.net/20.500.11937/575.

Abstract:
The ability for blind people to read and write Braille aids literacy development. A good level of literacy enables a person to function well in society in terms of employment, education and daily living. The learning of Braille has traditionally been done with hard-copy Braille produced by manual and, more recently, electronic Braille writers and printers. Curtin University is developing an electronic Braille writer, and the research on an interface for Braille keyboard devices presented in this thesis forms part of the Curtin University Brailler project. The Design Science approach was the research method chosen for this research because of the flexibility of the approach and because it focuses upon the building of artefacts and theory development. The small sample size meant that both individual interviews and a focus group were employed to gather relevant data from respondents. The literature review covers a variety of areas related to computer interfaces and Braille keyboard devices. A key finding is that the interaction paradigm for Braille keyboard devices needs to differ from interfaces for sighted individuals because of the audio, tactile and serial nature of the information-gathering strategies employed by blind people as compared with the visual and spatial information-gathering strategies employed by sighted individuals. In terms of the usability attributes designed to evaluate the interface, consistency was found to be a key factor because of its importance to learning and memory retention. Two main functions carried out on a computer system are navigating and editing; thus the model of interface for Braille keyboard devices presented in this thesis focuses upon navigation support and editing support. Feedback was sought through interviews with individuals and a focus group. Individual interviews were conducted face to face and via the telephone, and the focus group was conducted via Skype conference call to enable participants from all over the world to provide feedback on the model. The model was evaluated using usability attributes. Usability was important to the respondents; in particular, consistency, learnability, simplicity and ease of use were important. The concepts of rich navigation and infinitely definable key maps were understood and supported by respondents. Braille output is essential, including the ability to show formatting information in Braille. The limitations of the research included the small number of respondents to the interviews and the choice to focus upon a theoretical model rather than implementing the model on an actual device. Future research opportunities include implementing the interface concepts from the model onto touch-screen devices to aid further development of the interface, and implementing the interface on a physical device such as the Curtin University Brailler.
3

Francisco-Revilla, Luis. "Multi-model adaptive spatial hypertext". Texas A&M University, 2004. http://hdl.handle.net/1969.1/1444.

Abstract:
Information delivery on the Web often relies on general purpose Web pages that require the reader to adapt to them. This limitation is addressed by approaches such as spatial hypermedia and adaptive hypermedia. Spatial hypermedia augments the representation power of hypermedia and adaptive hypermedia explores the automatic modification of the presentation according to user needs. This dissertation merges these two approaches, combining the augmented expressiveness of spatial hypermedia with the flexibility of adaptive hypermedia. This dissertation presents the Multi-model Adaptive Spatial Hypermedia framework (MASH). This framework provides the theoretical grounding for the augmentation of spatial hypermedia with dynamic and adaptive functionality and, based on their functionality, classifies systems as generative, interactive, dynamic or adaptive spatial hypermedia. Regarding adaptive hypermedia, MASH proposes the use of multiple independent models that guide the adaptation of the presentation in response to multiple relevant factors. The framework is composed of four parts: a general system architecture, a definition of the fundamental concepts in spatial hypermedia, an ontological classification of the adaptation strategies, and the philosophy of conflict management that addresses the issue of multiple independent models providing contradicting adaptation suggestions. From a practical perspective, this dissertation produced WARP, the first MASH-based system. WARP’s novel features include spatial transclusion links as an alternative to navigational linking, behaviors supporting dynamic spatial hypermedia, and personal annotations to spatial hypermedia. WARP validates the feasibility of the multi-model adaptive spatial hypermedia and allows the exploration of other approaches such as Web-based spatial hypermedia, distributed spatial hypermedia, and interoperability issues between spatial hypermedia systems. In order to validate the approach, a user study comparing non-adaptive to adaptive spatial hypertext was conducted. The study included novice and advanced users and produced qualitative and quantitative results. Qualitative results revealed the emergence of reading behaviors intrinsic to spatial hypermedia. Users moved and modified the objects in order to compare and group objects and to keep track of what had been read. Quantitative results confirmed the benefits of adaptation and indicated a possible synergy between adaptation and expertise. In addition, the study created the largest spatial hypertext to date in terms of textual content.
4

Raimondi, Franco. "Model checking multi-agent systems". Thesis, University College London (University of London), 2006. http://discovery.ucl.ac.uk/5627/.

Abstract:
A multi-agent system (MAS) is usually understood as a system composed of interacting autonomous agents. In this sense, MAS have been employed successfully as a modelling paradigm in a number of scenarios, especially in Computer Science. However, the process of modelling complex and heterogeneous systems is intrinsically prone to errors: for this reason, computer scientists are typically concerned with the issue of verifying that a system actually behaves as it is supposed to, especially when a system is complex. Techniques have been developed to perform this task: testing is the most common technique, but in many circumstances a formal proof of correctness is needed. Techniques for formal verification include theorem proving and model checking. Model checking techniques, in particular, have been successfully employed in the formal verification of distributed systems, including hardware components, communication protocols, and security protocols. In contrast to traditional distributed systems, formal verification techniques for MAS are still in their infancy, due to the more complex nature of agents, their autonomy, and the richer language used in the specification of properties. This thesis aims at making a contribution to the formal verification of properties of MAS via model checking. In particular, the following points are addressed:
• Theoretical results about model checking methodologies for MAS, obtained by extending traditional methodologies based on Ordered Binary Decision Diagrams (OBDDs) for temporal logics to multi-modal logics for time, knowledge, correct behaviour, and strategies of agents. Complexity results for model checking these logics (and their symbolic representations).
• Development of a software tool (MCMAS) that permits the specification and verification of MAS described in the formalism of interpreted systems.
• Examples of application of MCMAS to various MAS scenarios (communication, anonymity, games, hardware diagnosability), including experimental results, and comparison with other tools available.
5

Kwon, Ky-Sang. "Multi-layer syntactical model transformation for model based systems engineering". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42835.

Abstract:
This dissertation develops a new model transformation approach that supports engineering model integration, which is essential to support contemporary interdisciplinary system design processes. We extend traditional model transformation, which has been primarily used for software engineering, to enable model-based systems engineering (MBSE) so that the model transformation can handle more general engineering models. We identify two issues that arise when applying the traditional model transformation to general engineering modeling domains. The first is instance data integration: the traditional model transformation theory does not deal with instance data, which is essential for executing engineering models in engineering tools. The second is syntactical inconsistency: various engineering tools represent engineering models in a proprietary syntax. However, the traditional model transformation cannot handle this syntactic diversity. In order to address these two issues, we propose a new multi-layer syntactical model transformation approach. For the instance integration issue, this approach generates model transformation rules for instance data from the result of a model transformation that is developed for user model integration, which is the normal purpose of traditional model transformation. For the syntactical inconsistency issue, we introduce the concept of the complete meta-model for defining how to represent a model syntactically as well as semantically. Our approach addresses the syntactical inconsistency issue by generating necessary complete meta-models using a special type of model transformation.
6

Strack, Beata. "Multi-column multi-layer computational model of neocortex". VCU Scholars Compass, 2013. http://scholarscompass.vcu.edu/etd/3279.

Abstract:
We present a multi-layer multi-column computational model of neocortex that is built based on the activity and connections of known neuronal cell types and includes activity-dependent short term plasticity. This model, a network of spiking neurons, is validated by showing that it exhibits activity close to biology in terms of several characteristics: (1) proper laminar flow of activity; (2) columnar organization with focality of inputs; (3) low-threshold-spiking (LTS) and fast-spiking (FS) neurons function as observed in normal cortical circuits; and (4) different stages of epileptiform activity can be obtained by either increasing the level of inhibitory blockade or simulating NMDA receptor enhancement. The aim of this research is to provide insight into the fundamental properties of vertical and horizontal inhibition in neocortex and their influence on epileptiform activity. The developed model was used to test novel ideas about modulation of inhibitory neuronal types in a developmentally malformed cortex. The novelty of the proposed research includes: (1) design and implementation of a multi-layer multi-column model of the cortex with multiple neuronal types and short-time plasticity, (2) modification of the Izhikevich neuron model in order to model the biological maximum firing rate property, (3) generating local field potential (LFP) and EEG signals without modeling multiple neuronal compartments, (4) modeling several known conditions to validate that the cortex model matches the biology in several aspects, (5) modeling different abnormalities in malformed cortex to test existing and to generate novel hypotheses.
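
For readers who want to connect the abstract's mention of the Izhikevich neuron model to something concrete, a minimal sketch of the standard (unmodified) Izhikevich update with simple Euler integration is given below. The parameter values are the common regular-spiking defaults from the literature, not those of the dissertation, and the thesis's modification for a biological maximum firing rate is not reproduced here.

    # Standard Izhikevich (2003) spiking-neuron model, Euler integration.
    # Regular-spiking default parameters; purely illustrative, not the
    # modified model described in the dissertation above.
    def simulate_izhikevich(I_ext, t_ms=1000.0, dt=0.5,
                            a=0.02, b=0.2, c=-65.0, d=8.0):
        """Return spike times (ms) for a constant input current I_ext."""
        v, u = -65.0, -65.0 * b          # membrane potential (mV), recovery variable
        spikes = []
        for k in range(int(t_ms / dt)):
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I_ext)
            u += dt * a * (b * v - u)
            if v >= 30.0:                # spike detected: reset v and bump u
                spikes.append(k * dt)
                v, u = c, u + d
        return spikes

    if __name__ == "__main__":
        print(len(simulate_izhikevich(I_ext=10.0)), "spikes in 1 s")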
7

Titman, Andrew Charles. "Model diagnostics in multi-state models of biological systems". Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612189.

8

Kelly, R. J. "Mathematical model of multi-phase snowmelt". Thesis, University of East Anglia, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.377740.

9

Schagerström, Lukas. "Valideringsstudie av Multi-Zone Fire Model". Thesis, Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-78682.

Abstract:
There are a number of fire simulation programs on the market that are used to varying degrees, one of which is Fire Dynamics Simulator (FDS). One of the disadvantages of FDS is that it can take a lot of time to run a fire simulation. There are fire simulation programs that are very likely to perform fire simulations faster than FDS. For some of these fire simulation programs, however, there is no documentation on how the results they produce compare with what would happen in a real fire; that is, they are not validated. One of these fire simulation programs is Multi-Zone Fire Model (MZ-Fire Model). The fire simulation program MZ-Fire Model is based on a multi-zone concept developed by Suzuki et al. The multi-zone concept has been validated in previous studies, one of which concerned a fire in a tunnel, but fires in smaller premises have also been tested. There is room for increased knowledge about how the multi-zone concept handles fires in large rooms, as there is no known documentation on this. At present there is not a single study dealing with the MZ-Fire Model program. The report describes the simulation of a fire in four different rooms with the programs MZ-Fire Model and FDS; the simulated values are then compared against each other.
10

Breschi, Valentina. "Model learning from data: from centralized multi-model regression to distributed cloud-aided single-model estimation". Thesis, IMT Alti Studi Lucca, 2018. http://e-theses.imtlucca.it/256/1/Breschi_phdthesis.pdf.

Abstract:
This thesis presents a collection of methods for learning models from data, looking at this problem from two perspectives: learning multiple models from a single data source and how to switch among them, and learning a single model from data collected from multiple sources. Regarding the first, to describe complex phenomena with simple but yet complete models, we propose a computationally efficient method for Piecewise Affine (PWA) regression. This approach relies on the combined use of (i) multi-model Recursive Least-Squares (RLS) and (ii) piecewise linear multi-category discrimination, and shows good performance when used for the identification of Piecewise Affine dynamical systems with eXogenous inputs (PWARX) and Linear Parameter Varying (LPV) models. The technique for PWA regression is then extended to handle the problem of black-box identification of Discrete Hybrid Automata (DHA) from input/output observations, with hidden operating modes. The method for DHA identification is based on multi-model RLS and multi-category discrimination, and it can approximate both the continuous affine dynamics and the Finite State Machine (FSM) governing the logical dynamics of the DHA. Two more approaches are presented to tackle the problem of learning models that jump over time. While the technique designed to learn Rarely Jump Models (RJMs) from data relies on the combined solution of a convex optimization problem and the use of Dynamic Programming, the method proposed for Markov Jump Models (MJMs) learning is based on the joint use of clustering plus multi-model RLS and a probabilistic clustering technique. The results of the tests performed on the method for RJMs learning have motivated the design of two techniques for Non-Intrusive Load Monitoring, i.e., to estimate the power consumed by the appliances in a household from aggregated measurements, which are also presented in the thesis. In particular, methods based on (i) the optimization of a least-squares error cost function, modified to account for changes in the appliances' operating regime, and relying on (ii) multi-model Kalman filters are proposed. Regarding the second perspective, we propose methods for cloud-aided consensus-based parameter estimation over a multitude of similar devices (such as a mass production). In particular, we focus on the design of RLS-based estimators, which can handle (i) linear and (ii) nonlinear consensus constraints and (iii) multi-class estimation.
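
As a rough illustration of the recursive least-squares (RLS) building block that the abstract repeatedly refers to, the sketch below shows a generic single-model RLS update with a forgetting factor. It is a textbook form only, not the multi-model PWARX or DHA identification algorithm developed in the thesis, and every name in it is illustrative.

    import numpy as np

    class RecursiveLeastSquares:
        """Textbook RLS with exponential forgetting: theta tracks y ~ phi . theta."""

        def __init__(self, n_params, forgetting=0.99, delta=1e3):
            self.theta = np.zeros(n_params)       # parameter estimate
            self.P = delta * np.eye(n_params)     # inverse-covariance-like matrix
            self.lam = forgetting                 # forgetting factor in (0, 1]

        def update(self, phi, y):
            phi = np.asarray(phi, dtype=float)
            P_phi = self.P @ phi
            gain = P_phi / (self.lam + phi @ P_phi)   # gain vector
            err = y - phi @ self.theta                # a priori prediction error
            self.theta = self.theta + gain * err
            self.P = (self.P - np.outer(gain, P_phi)) / self.lam
            return err

In a multi-model scheme of the kind the abstract describes, one such estimator would typically be run per candidate mode, with each new sample assigned to the mode whose estimator predicts it best.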
11

Kabilan, Vandana. "Using multi tier contract ontology to model contract workflow models". Licentiate thesis, KTH, Computer and Systems Sciences, DSV, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-1666.

Abstract:

Legal Business Contracts govern the business relationship between trading business partners. Business contracts are like blueprints of expected business behaviour from all the contracting parties involved. Business contracts bind the parties to obligations that must be fulfilled by expected performance events. Contractual violations can lead to both legal and business consequences. Thus it is in the best interests of all parties concerned to organise their business process flows to be compliant with the stipulated business contract terms and conditions.

However, Contract Management and Business Process Management in the current information systems domain are not closely integrated. Also, it is not easy for business domain experts or information systems experts to understand and interpret the legal terms and conditions into their respective domain needs and requirements. This thesis addresses the above two issues in an attempt to build a semantic bridge across the different domains of a legal business contract. This thesis focuses on the contract execution phase of typical business contracts and as such views contract obligations as processes that need to be executed and monitored. Business workflows need to be as close as possible to the stated contract obligation execution workflow.

In the first phase, a framework for modelling and representing contractual knowledge in the form of a Multi Tier Contract Ontology (MTCO) is proposed. The MTCO uses conceptual models as the knowledge representation methodology. It proposes a structured and layered collection of individual ontologies moving from the top generic level progressively down to specific template ontologies. The MTCO is visualised as a reusable, flexible, extendable and shared knowledge base.

In the second phase, a methodology for deducing the Contract Workflow Model (CWM) is proposed. The CWM is deduced from the MTCO and a contract instance document in a stepwise user guideline. The CWM outlines the preferred choreography of business performance that successfully fulfils the execution of contract obligations. The deduced CWM is visualised as an aid to monitor the contract, as a starting point for business process integration and business process workflow design.

12

Johnston, Richard Karl. "The relational-to-object-oriented cross-model accessing capability in a multi-model and multi-lingual database system". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from the National Technical Information Service, 1993. http://handle.dtic.mil/100.2/ADA264911.

13

Lam, Samuel Kar Kin Cassidy Daniel Thomas. "Multi-component defect model for semiconductor lasers /". *McMaster only, 2003.

14

Gonzalez, Pavel. "Model checking GSM-based multi-agent systems". Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/39038.

Abstract:
Business artifacts are a growing topic in service oriented computing. Artifact systems include both data and process descriptions at interface level thereby providing more sophisticated and powerful service inter-operation capabilities. The Guard-Stage-Milestone (GSM) language provides a novel framework for specifying artifact systems that features declarative descriptions of the intended behaviour without requiring an explicit specification of the control flow. While much of the research is focused on the design, deployment and maintenance of GSM programs, the verification of this formalism has received less attention. This thesis aims to contribute to the topic. We put forward a holistic methodology for the practical verification of GSM-based multi-agent systems via model checking. The formal verification faces several challenges: the declarative nature of GSM programs; the mechanisms for data hiding and access control; and the infinite state spaces inherent in the underlying data. We address them in stages. First, we develop a symbolic representation of GSM programs, which makes them amenable to model checking. We then extend GSM to multi-agent systems and map it into a variant of artifact-centric multi-agent systems (AC-MAS), a paradigm based on interpreted systems. This allows us to reason about the knowledge the agents have about the artifact system. Lastly, we investigate predicate abstraction as a key technique to overcome the difficulty of verifying infinite state spaces. We present a technique that lifts 3-valued abstraction to epistemic logic and makes GSM programs amenable to model checking against specifications written in a quantified version of temporal-epistemic logic. The theory serves as a basis for developing a symbolic model checker that implements SMT-based, 3-valued abstraction for GSM-based multi-agent systems. The feasibility of the implementation is demonstrated by verifying GSM programs for concrete applications from the service community.
15

Lu, Gehao. "Neural trust model for multi-agent systems". Thesis, University of Huddersfield, 2011. http://eprints.hud.ac.uk/id/eprint/17817/.

Abstract:
Introducing trust and reputation into multi-agent systems can significantly improve the quality and efficiency of the systems. Computational trust and reputation also create an environment of survival of the fittest that helps agents recognize and eliminate malevolent agents in the virtual society. The thesis redefines computational trust and analyzes its features from different aspects. A systematic model called the Neural Trust Model for Multi-agent Systems is proposed to support trust learning, trust estimating, reputation generation, and reputation propagation. In this model, the thesis extends the traditional Self-Organizing Map (SOM) and creates a SOM based Trust Learning (STL) algorithm and a SOM based Trust Estimation (STE) algorithm. The STL algorithm solves the problem of learning trust from agents' past interactions, and the STE algorithm solves the problem of estimating trustworthiness with the help of the previously learned patterns. The thesis also proposes a multi-agent reputation mechanism for generating and propagating reputations. The mechanism exploits the patterns learned from the STL algorithm and generates the reputation of a specific agent. Three propagation methods are also designed as part of the mechanism to guide path selection for the reputation. For evaluation, the thesis designs and implements a test bed to evaluate the model in a simulated electronic commerce scenario. The proposed model is compared with a traditional arithmetic-based trust model, and it is also compared to itself in situations where there is no reputation mechanism. The results state that the model can significantly improve the quality and efficacy of the test bed based scenario. Some design considerations and the rationale behind the algorithms are also discussed based on the results.
16

Dostaler, Marc. "Multi-level random data based correlator model". Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/26893.

Abstract:
The thesis proposes to develop a virtual prototyping environment (VPE) allowing the study of multi-level random data processing. This VPE makes it possible to study experimentally the effect of different quantization levels and dither signal parameters on the performance of generalized random data correlator architectures.
17

Tatikunta, Raju. "TraGent : a multi agent stock exchange model /". Available to subscribers only, 2006. http://proquest.umi.com/pqdweb?did=1240702281&sid=18&Fmt=2&clientId=1509&RQT=309&VName=PQD.

18

高銘謙 and Ming-him Ko. "A multi-agent model for DNA analysis". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31222778.

19

Schubert, Jan. "Multi-model-analysis of Arctic climate trends". Universität Leipzig, 2018. https://ul.qucosa.de/id/qucosa%3A31798.

20

Alamad, Ruba Amin. "SURGERY DURATION ESTIMATION USING MULTI-REGRESSION MODEL". University of Akron / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=akron1498073495501962.

21

Russo, Francesco. "Abstraction in model checking multi-agent systems". Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/9294.

Abstract:
This thesis presents existential abstraction techniques for multi-agent systems preserving temporal-epistemic specifications. Multi-agent systems, defined in the interpreted systems framework, are abstracted by collapsing the local states and actions of each agent. The goal of abstraction is to reduce the state space of the system under investigation in order to cope with the state explosion problem that impedes the verification of very large state space systems. Theoretical results show that the resulting abstract system simulates the concrete one. Preservation and correctness theorems are proved in this thesis. These theorems assure that if a temporal-epistemic formula holds on the abstract system, then the formula also holds on the concrete one. These results permit the verification of temporal-epistemic formulas on abstract systems instead of the concrete ones, therefore saving time and space in the verification process. In order to test the applicability, usefulness, suitability, power and effectiveness of the abstraction method presented, two different implementations are presented: a tool for data-abstraction and one for variable-abstraction. The first technique achieves a state space reduction by collapsing the values of the domains of the system variables. The second technique performs a reduction on the size of the model by collapsing groups of two or more variables. Therefore, the abstract system has a reduced number of variables. Each new variable in the abstract system takes values belonging to a new domain built automatically by the tool. Both implementations perform abstraction in a fully automatic way. They operate on multi-agent models specified in a formal language called ISPL (Interpreted System Programming Language). This is the input language for MCMAS, a model checker for multi-agent systems. The output is an ISPL file as well (with a reduced state space). This thesis also presents several suitable temporal-epistemic examples to evaluate both techniques. The experiments show good results and point to the attractiveness of the temporal-epistemic abstraction techniques developed in this thesis. In particular, the contributions of the thesis are the following:
• We produced correctness and preservation theoretical results for existential abstraction.
• We introduced two algorithms to perform data-abstraction and variable-abstraction on multi-agent systems.
• We developed two software toolkits for automatic abstraction on multi-agent scenarios: one tool performing data-abstraction and the second performing variable-abstraction.
• We evaluated the methodologies introduced in this thesis by running experiments on several multi-agent system examples.
22

Ko, Ming-him. "A multi-agent model for DNA analysis /". Hong Kong : University of Hong Kong, 1999. http://sunzi.lib.hku.hk/hkuto/record.jsp?B21949116.

23

Belkhouche, Mohammed Yassine. "Multi-perspective, Multi-modal Image Registration and Fusion". Thesis, University of North Texas, 2012. https://digital.library.unt.edu/ark:/67531/metadc149562/.

Abstract:
Multi-modal image fusion is an active research area with many civilian and military applications. Fusion is defined as the strategic combination of information collected by various sensors at different locations or of different types in order to obtain a better understanding of an observed scene or situation. Fusion of multi-modal images cannot be completed unless the two modalities are spatially aligned. In this research, I consider two important problems: multi-modal, multi-perspective image registration, and decision-level fusion of multi-modal images, in particular LiDAR and visual imagery. Multi-modal image registration is a difficult task due to the different semantic interpretation of features extracted from each modality. This problem is decoupled into three sub-problems. The first step is the identification and extraction of common features. The second step is the determination of corresponding points. The third step consists of determining the registration transformation parameters. Traditional registration methods use low-level features such as lines and corners. Using these features requires an extensive optimization search in order to determine the corresponding points. Many methods use global positioning systems (GPS) and a calibrated camera in order to obtain an initial estimate of the camera parameters. The advantages of our work over previous works are the following. First, I used high-level features, which significantly reduce the search space for the optimization process. Second, the determination of corresponding points is modeled as an assignment problem between a small number of objects. On the other hand, fusing LiDAR and visual images is beneficial due to the different and rich characteristics of both modalities. LiDAR data contain 3D information, while images contain visual information. Developing a fusion technique that uses the characteristics of both modalities is very important. I establish a decision-level fusion technique using manifold models.
24

Ogbonna, Emmanuel. "A multi-parameter empirical model for mesophilic anaerobic digestion". Thesis, University of Hertfordshire, 2017. http://hdl.handle.net/2299/17467.

Abstract:
Anaerobic digestion, which is the process by which bacteria breakdown organic matter to produce biogas (renewable energy source) and digestate (biofertiliser) in the absence of oxygen, proves to be the ideal concept not only for sustainable energy provision but also for effective organic waste management. However, the production amount of biogas to keep up with the global demand is limited by the underperformance in the system implementing the AD process. This underperformance is due to the difficulty in obtaining and maintaining the optimal operating parameters/states for anaerobic bacteria to thrive with regards to attaining a specific critical population number, which results in maximising the biogas production. This problem continues to exist as a result of insufficient knowledge of the interactions between the operating parameters and bacterial community. In addition, the lack of sufficient knowledge of the composition of bacterial groups that varies with changes in the operating parameters such as temperature, substrate and retention time. Without sufficient knowledge of the overall impact of the physico-environmental operating parameters on anaerobic bacterial growth and composition, significant improvement of biogas production may be difficult to attain. In order to mitigate this problem, this study has presented a nonlinear multi-parameter system modelling of mesophilic AD. It utilised raw data sets generated from laboratory experimentation of the influence of four operating parameters, temperature, pH, mixing speed and pressure on biogas and methane production, signifying that this is a multiple input single output (MISO) system. Due to the nonlinear characteristics of the data, the nonlinear black-box modelling technique is applied. The modelling is performed in MATLAB through System Identification approach. Two nonlinear model structures, autoregressive with exogenous input (NARX) and Hammerstein-Wiener (NLHW) with different nonlinearity estimators and model orders are chosen by trial and error and utilised to estimate the models. The performance of the models is determined by comparing the simulated outputs of the estimated models and the output in the validation data. The approach is used to validate the estimated models by checking how well the simulated output of the models fits the measured output. The best models for biogas and methane production are chosen by comparing the outputs of the best NARX and NLHW models (each for biogas and methane production), and the validation data, as well as utilising the Akaike information criterion to measure the quality of each model relative to each of the other models. The NLHW models mhw2 and mhws2 are chosen for biogas and methane production, respectively. The identified NLHW models mhw2 and mhws2 represent the behaviour of the production of biogas and methane, respectively, from mesophilic AD. Among all the candidate models studied, the nonlinear models provide a superior reproduction of the experimental data over the whole analysed period. Furthermore, the models constructed in this study cannot be used for scale-up purpose because they are not able to satisfy the rules and criteria for applying dimensional analysis to scale-up.
25

Shan, Liang. "Joint Gaussian Graphical Model for multi-class and multi-level data". Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/81412.

Abstract:
Gaussian graphical model has been a popular tool to investigate conditional dependency between random variables by estimating sparse precision matrices. The estimated precision matrices could be mapped into networks for visualization. For related but different classes, jointly estimating networks by taking advantage of common structure across classes can help us better estimate conditional dependencies among variables. Furthermore, there may exist multilevel structure among variables; some variables are considered as higher level variables and others are nested in these higher level variables, which are called lower level variables. In this dissertation, we made several contributions to the area of joint estimation of Gaussian graphical models across heterogeneous classes: the first is to propose a joint estimation method for estimating Gaussian graphical models across unbalanced multi-classes, whereas the second considers multilevel variable information during the joint estimation procedure and simultaneously estimates higher level network and lower level network. For the first project, we consider the problem of jointly estimating Gaussian graphical models across unbalanced multi-class. Most existing methods require equal or similar sample size among classes. However, many real applications do not have similar sample sizes. Hence, in this dissertation, we propose the joint adaptive graphical lasso, a weighted L1 penalized approach, for unbalanced multi-class problems. Our joint adaptive graphical lasso approach combines information across classes so that their common characteristics can be shared during the estimation process. We also introduce regularization into the adaptive term so that the unbalancedness of data is taken into account. Simulation studies show that our approach performs better than existing methods in terms of false positive rate, accuracy, Mathews correlation coefficient, and false discovery rate. We demonstrate the advantage of our approach using liver cancer data set. For the second one, we propose a method to jointly estimate the multilevel Gaussian graphical models across multiple classes. Currently, methods are still limited to investigate a single level conditional dependency structure when there exists the multilevel structure among variables. Due to the fact that higher level variables may work together to accomplish certain tasks, simultaneously exploring conditional dependency structures among higher level variables and among lower level variables are of our main interest. Given multilevel data from heterogeneous classes, our method assures that common structures in terms of the multilevel conditional dependency are shared during the estimation procedure, yet unique structures for each class are retained as well. Our proposed approach is achieved by first introducing a higher level variable factor within a class, and then common factors across classes. The performance of our approach is evaluated on several simulated networks. We also demonstrate the advantage of our approach using breast cancer patient data.
Ph. D.
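
To make the phrase "weighted L1 penalized approach" concrete, the display below gives a generic adaptive (weighted) graphical-lasso objective for a single class k, written in LaTeX. This is the standard form for Gaussian data, not the dissertation's exact joint formulation; the coupling across classes is only indicated informally.

    \hat{\Theta}^{(k)} = \arg\min_{\Theta \succ 0} \; \operatorname{tr}\bigl(S^{(k)}\Theta\bigr) - \log\det\Theta + \lambda \sum_{i \neq j} w^{(k)}_{ij} \lvert \theta_{ij} \rvert

Here S^{(k)} is the sample covariance matrix of class k, \Theta is the precision matrix being estimated, and the weights w^{(k)}_{ij} can be chosen adaptively, for example to borrow strength from edges that appear consistently in the other classes, in the spirit of the joint adaptive approach described above.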
26

Yang, Ang Information Technology & Electrical Engineering Australian Defence Force Academy UNSW. "A networked multi-agent combat model : emergence explained". Awarded by: University of New South Wales - Australian Defence Force Academy. School of Information Technology and Electrical Engineering, 2007. http://handle.unsw.edu.au/1959.4/38823.

Abstract:
Simulation has been used to model combat for a long time. Recently, it has been accepted that combat is a complex adaptive system (CAS). Multi-agent systems (MAS) are also considered a powerful modelling and development environment in which to simulate combat. Agent-based distillations (ABD) - proposed by the US Marine Corps - are a type of MAS used mainly by the military for exploring large scenario spaces. ABDs that facilitated the analysis and understanding of combat include ISAAC, EINSTein, MANA, CROCADILE and BactoWars. With new concepts such as networked forces, previous ABDs can implicitly simulate a networked force. However, the architectures of these systems limit the potential advantages gained from the use of networks. In this thesis, a novel network centric multi-agent architecture (NCMAA) is proposed, based purely on network theory and CAS. In NCMAA, each relationship and interaction is modelled as a network, with the entities or agents as the nodes. NCMAA offers the following advantages:
1. An explicit model of interactions/relationships: it facilitates the analysis of the role of interactions/relationships in simulations;
2. A mechanism to capture the interaction or influence between networks;
3. A formal real-time reasoning framework at the network level in ABDs: it interprets the emergent behaviours online.
For a long time, it has been believed that it is hard in CAS to reason about emerging phenomena. In this thesis, I show that although it is almost impossible to reason about the behaviour of the system by looking at the components alone because of high nonlinearity, it is possible to reason about emerging phenomena by looking at the network level. This is undertaken through analysing network dynamics, where I provide an English-like reasoning log to explain the simulation. Two implementations of a new land-combat system called the Warfare Intelligent System for Dynamic Optimization of Missions (WISDOM) are presented. WISDOM-I is built on the same principles as those in existing ABDs, while WISDOM-II is built on NCMAA. The unique features of WISDOM-II include:
1. A real-time network analysis toolbox: it captures patterns while interaction is evolving during the simulation;
2. Flexible C3 (command, control and communication) models;
3. Integration of tactics with strategies: the tactical decisions are guided by the strategic planning;
4. A model of recovery: it allows users to study the role of recovery capability and resources;
5. Real-time visualization of all possible information: it allows users to intervene during the simulation to steer it differently in human-in-the-loop simulations.
A comparison between the fitness landscapes of WISDOM-I and II reveals similarities and differences, which emphasise the importance and role of the networked architecture and the addition of strategic planning. Last but not least, WISDOM-II is used in an experiment with two setups, with and without strategic planning, in different urban terrains. When the strategic planning was removed, conclusions were similar to those of traditional ABDs, but they were very different when the system ran with strategic planning. As such, I show that results obtained from traditional ABDs - where rational group planning is not considered - can be misleading. Finally, the thesis tests and demonstrates the role of communication in urban terrains. As future warfighting concepts tend to focus on asymmetric warfare in urban environments, it was vital to test the role of networked forces in these environments. I demonstrate that there is a phase transition in a number of situations where highly dense urban terrains may lead to similar outcomes as open terrains, while medium to lightly dense urban terrains have different dynamics.
27

Jorgensen, Joni Renee. "Ground verification of a multi-stage deployment model". Diss., Connect to online resource, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1433470.

28

Blömeling, Frank. "Multi-level substructuring methods for model order reduction". Berlin dissertation.de, 2008. http://d-nb.info/988537184/04.

29

Rajhans, Akshay H. "Multi-Model Heterogeneous Verification of Cyber-Physical Systems". Research Showcase @ CMU, 2013. http://repository.cmu.edu/dissertations/251.

Abstract:
Complex systems are designed using the model-based design paradigm in which mathematical models of systems are created and checked against specifications. Cyber-physical systems (CPS) are complex systems in which the physical environment is sensed and controlled by computational or cyber elements possibly distributed over communication networks. Various aspects of CPS design such as physical dynamics, software, control, and communication networking must interoperate correctly for correct functioning of the systems. Modeling formalisms, analysis techniques and tools for designing these different aspects have evolved independently, and remain dissimilar and disparate. There is no unifying formalism in which one can model all these aspects equally well. Therefore, model-based design of CPS must make use of a collection of models in several different formalisms and use respective analysis methods and tools together to ensure correct system design. To enable doing this in a formal manner, this thesis develops a framework for multi-model verification of cyber-physical systems based on behavioral semantics. Heterogeneity arising from the different interacting aspects of CPS design must be addressed in order to enable system-level verification. In current practice, there is no principled approach that deals with this modeling heterogeneity within a formal framework. We develop behavioral semantics to address heterogeneity in a general yet formal manner. Our framework makes no assumptions about the specifics of any particular formalism, therefore it readily supports various formalisms, techniques and tools. Models can be analyzed independently in isolation, supporting separation of concerns. Mappings across heterogeneous semantic domains enable associations between analysis results. Interdependencies across different models and specifications can be formally represented as constraints over parameters and verification can be carried out in a semantically consistent manner. Composition of analysis results is supported both hierarchically across different levels of abstraction and structurally into interacting component models at a given level of abstraction. The theoretical concepts developed in the thesis are illustrated using a case study on the hierarchical heterogeneous verification of an automotive intersection collision avoidance system.
30

Ng, Peng-Teng Peter. "Distributed dynamic resource allocation in multi-model situations". Thesis, Massachusetts Institute of Technology, 1985. http://hdl.handle.net/1721.1/15184.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1986.
MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING.
Bibliography: leaves 351-354.
by Peng-Teng Peter Ng.
Ph.D.
31

Cohas, François. "Market share model for a multi-airport system". Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/49902.

32

Bindrees, Mohammed. "Multi-factor motivation model in software engineering environments". Thesis, Heriot-Watt University, 2015. http://hdl.handle.net/10399/3041.

Abstract:
In software engineering environments, motivation has become an imperative tool for increasing the productivity and creativity levels of projects. The aim of this research is to develop a validated conceptual multifactor and motivating model that represents the interaction between the organisational, occupational and interpersonal factors in software engineering environments. However, the application of well-known motivation tools cannot guarantee high motivational levels among the members of software engineering teams. Therefore, several phenomena have been monitored and empirically tested related to the daily practices in the software engineering industry. Reviewing the literature on motivation in software engineering uncovered a list of influential factors that could motivate individuals in the workplace. These factors have been suggested as being grouped into three categories (interpersonal, occupational and organisational). The literature review stage was followed by a preliminary study to discuss and validate these factors in greater detail by interviewing eight experts drawn from the software engineering industry. The preliminary study provided this research with an initial conceptual model that could broaden the understanding of the recent state of motivation in software engineering environments. The initial model was validated and expanded by conducting two types of research (quantitative and qualitative) based on the type of information gleaned. Accordingly, 208 experienced software engineers and members of teams in the software development industry were involved in this research. The results from this research revealed a statistically significant interaction between factors from different categories (interpersonal, occupational and organisational). This interaction has helped in developing an updated new model of motivation in software engineering. In addition, the application of motivation theories in software engineering could be affected by some work-related factors. These factors were found in this research to be member role, contract types, age, organisational structure and citizenship status. Thus, all these factors have been given a high consideration when designing rewards systems in software engineering.
33

Sharman, Karl J. "Non-invasive multi-view 3D dynamic model extraction". Thesis, University of Southampton, 2002. https://eprints.soton.ac.uk/256804/.

Abstract:
A non-invasive system is presented which is capable of extracting and describing the three-dimensional nature of human gait, thereby extending the potential use of gait as a biometric. Of current three-dimensional systems, those using multiple views appear to be the most suitable. Reformulating the three-dimensional analysis algorithm known as Volume Intersection as an evidence gathering process for abstract scene reconstruction provides a new way to overcome concavities and to handle noise and occlusion. After analysis of the standard voxel-based three-dimensional representation, a new data representation called 2.75D is suggested which allows the scene to be analysed at the most appropriate resolution, avoiding further discretisation. With a sequence of three-dimensional frames, another evidence gathering algorithm is applied to extract and describe the motion of moving objects. No current techniques have exploited the sequence as a whole during such an operation, and in this thesis a method to incorporate successive frames, and therefore time, as an additional dimension to the extraction process is described. Results on synthetic and real images show that the techniques do indeed process a multi-view image sequence to derive the parameters of interest, thereby providing a suitable basis for future development as a marker-less three-dimensional gait analysis system. In particular, the parameters of a ball moving under the influence of gravity are extracted with accuracy from a 3D scene. Also, a walking human is extracted, and overlaying the result onto the original images confirms that the correct extraction has been made; the result is also supported by medical studies.
34

Bradel, Lauren C. "Multi-Model Semantic Interaction for Scalable Text Analytics". Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/52785.

Abstract:
Learning from text data often involves a loop of tasks that iterate between foraging for information and synthesizing it in incremental hypotheses. Past research has shown the advantages of using spatial workspaces as a means for synthesizing information through externalizing hypotheses and creating spatial schemas. However, spatializing the entirety of datasets becomes prohibitive as the number of documents available to the analysts grows, particularly when only a small subset are relevant to the tasks at hand. To address this issue, we developed the multi-model semantic interaction (MSI) technique, which leverages user interactions to aid in the display layout (as was seen in previous semantic interaction work), forage for new, relevant documents as implied by the interactions, and then place them in context of the user's existing spatial layout. This results in the ability for the user to conduct both implicit queries and traditional explicit searches. A comparative user study of StarSPIRE discovered that while adding implicit querying did not impact the quality of the foraging, it enabled users to 1) synthesize more information than users with only explicit querying, 2) externalize more hypotheses, 3) complete more synthesis-related semantic interactions. Also, 18% of relevant documents were found by implicitly generated queries when given the option. StarSPIRE has also been integrated with web-based search engines, allowing users to work across vastly different levels of data scale to complete exploratory data analysis tasks (e.g. literature review, investigative journalism). The core contribution of this work is multi-model semantic interaction (MSI) for usable big data analytics. This work has expanded the understanding of how user interactions can be interpreted and mapped to underlying models to steer multiple algorithms simultaneously and at varying levels of data scale. This is represented in an extendable multi-model semantic interaction pipeline. The lessons learned from this dissertation work can be applied to other visual analytics systems, promoting direct manipulation of the data in context of the visualization rather than tweaking algorithmic parameters and creating usable and intuitive interfaces for big data analytics.
Ph. D.
35

Fechteler, Philipp. "Multi-View Motion Capture based on Model Adaptation". Doctoral thesis, Humboldt-Universität zu Berlin, 2019. http://dx.doi.org/10.18452/20803.

Abstract:
Fotorealistische Modellierung von Menschen ist in der Computer Grafik von besonderer Bedeutung, da diese allgegenwärtig in Film- und Computerspiel-Produktionen benötigt wird. Heutige Modellierungs-Software vereinfacht das Generieren realistischer Modelle. Hingegen ist das Erstellen realitätsgetreuer Abbilder real existierender Personen nach wie vor eine anspruchsvolle Aufgabe. Die vorliegende Arbeit adressiert die automatische Modellierung von realen Menschen und die Verfolgung ihrer Bewegung. Ein Skinning-basierter Ansatz wurde gewählt, um effizientes Generieren von Animationen zu ermöglichen. Für gesteigerte Realitätstreue wurde eine artefaktfreie Skinning-Funktion um den Einfluss mehrerer kinematischer Gelenke erweitert. Dies ermöglicht eine große Vielfalt an real wirkenden komplexen Bewegungen. Zum Erstellen eines Personen-spezifischen Modells wird hier ein automatischer, datenbasierter Ansatz vorgeschlagen. Als Eingabedaten werden registrierte, geschlossene Beispiel-Meshes verschiedener Posen genutzt. Um bestmöglich die Trainingsdaten zu approximieren, werden in einer Schleife alle Komponenten des Modells optimiert: Vertices, Gelenke und Skinning-Gewichte. Zwecks Tracking von Sequenzen verrauschter und nur teilweise erfasster 3D Rekonstruktionen wird ein markerfreier modelladaptiver Ansatz vorgestellt. Durch die nicht-parametrische Formulierung werden die Gelenke des generischen initialien Tracking-Modells uneingeschränkt optimiert, als auch die Oberfläche frei deformiert und somit individuelle Eigenheiten des Subjekts extrahiert. Integriertes a priori Wissen über die menschliche Gestalt, extrahiert aus Trainingsdaten, gewährleistet realistische Modellanpassungen. Das resultierende Modell mit Animationsparametern ist darauf optimiert, bestmöglich die Eingabe-Sequenz wiederzugeben. Zusammengefasst ermöglichen die vorgestellten Ansätze realitätsgetreues und automatisches Modellieren von Menschen und damit akkurates Tracking aus 3D Daten.
Photorealistic modeling of humans in computer graphics is of special interest because it is required for modern movie and computer game productions. Creating realistic human models is relatively simple with current modeling software, but modeling an existing real person in detail is still a very cumbersome task. This dissertation focuses on realistic and automatic modeling as well as tracking of human body motion. A skinning-based approach is chosen to support efficient realistic animation. For increased realism, an artifact-free skinning function is enhanced to support blending the influence of multiple kinematic joints. As a result, natural appearance is supported for a wide range of complex motions. To set up a subject-specific model, an automatic and data-driven optimization framework is introduced. Registered, watertight example meshes of different poses are used as input. In an efficient loop, all components of the animatable model are optimized to closely resemble the training data: vertices, kinematic joints and skinning weights. For the purpose of tracking sequences of noisy, partial 3D observations, a markerless motion capture method with simultaneous detailed model adaptation is proposed. The non-parametric formulation supports free-form deformation of the model's shape as well as unconstrained adaptation of the kinematic joints, thereby allowing individual peculiarities of the captured subject to be extracted. Integrated a priori knowledge on human shape and pose, extracted from training data, ensures that the adapted models maintain a natural and realistic appearance. The result is an animatable model adapted to the captured subject as well as a sequence of animation parameters, faithfully resembling the input data. Altogether, the presented approaches provide realistic and automatic modeling of human characters, accurately resembling sequences of 3D input data.
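For orientation, the skinning function referred to above is, in its standard linear-blend form, the following; this is the textbook formulation, not the dissertation's extended multi-joint blending variant:

\[
v_i' \;=\; \sum_{j=1}^{J} w_{ij}\, T_j(\theta)\, \bar{v}_i,
\qquad \sum_{j=1}^{J} w_{ij} = 1, \quad w_{ij} \ge 0,
\]

where \(\bar{v}_i\) is vertex \(i\) in the rest pose, \(T_j(\theta)\) is the rigid transform of kinematic joint \(j\) under pose \(\theta\), and \(w_{ij}\) is the skinning weight of joint \(j\) on vertex \(i\); these are exactly the vertices, joints and weights that the optimization loop estimates from the registered example meshes.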
Style APA, Harvard, Vancouver, ISO itp.
36

Koetje, Thabo. "Multi-objectives model predictive control of multivariable systems". Master's thesis, University of Cape Town, 2011. http://hdl.handle.net/11427/11426.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
37

Kingetsu, Hiroaki. "Multi-agent Traffic Simulation using Characteristic Behavior Model". Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263781.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
38

Hasasneh, Nabil M. "Chip multi-processors using a micro-threaded model". Thesis, University of Hull, 2006. http://hydra.hull.ac.uk/resources/hull:13609.

Pełny tekst źródła
Streszczenie:
Most microprocessor chips today use an out-of-order (OOO) instruction execution mechanism. This mechanism allows superscalar processors to extract reasonably high levels of instruction-level parallelism (ILP). The most significant problem with this approach is a large instruction window and the logic to support instruction issue from it. This includes generating wake-up signals to waiting instructions and a selection mechanism for issuing them. A wide issue width also requires a large multi-ported register file, so that each instruction can read and write its operands simultaneously. Neither structure scales well with issue width, leading to poor performance relative to the gates used. Furthermore, to obtain this ILP, the execution of instructions must proceed speculatively. An alternative, which avoids this complexity in instruction issue and eliminates speculative execution, is the microthreaded model. This model fragments sequential code at compile time and executes the fragments out of order while maintaining in-order execution within the fragments. The fragments of code are called microthreads, and they capture ILP and loop concurrency. Fragments can be interleaved on a single processor to give tolerance to latency in operands, or distributed to many processors to achieve speedup. The major advantage of this model is that it provides sufficient information to implement a penalty-free distributed register file organisation. However, the scalability of the microthreaded register file in terms of the number of required logical read and write ports is not yet clear. In this thesis, we looked at the distribution and frequency of access to the asynchronous (non-pipeline) ports in the synchronising memory and provide a detailed analysis and evaluation of this issue. Using an analysis of a range of different code kernels, it is concluded that a distributed shared synchronising memory could be implemented with 5 ports per processor, where three ports provide single instruction issue per cycle and the other two asynchronous ports are able to manage all other demands on the local register file. Also, in the microthreaded CMP a broadcast bus is used for thread creation and to replicate the compiler-defined global state to each processor's local register file. This is done instead of accessing a centralised register file for global variables. The key problem is that simultaneous access to this bus by multiple processors causes contention and unfair communication between processors. Therefore, to avoid processor contention and to take advantage of asynchronous communication, this thesis presents a scalable and partitionable asynchronous bus arbiter for use with chip multiprocessors (CMP) and its corresponding pre-layout simulation results using VHDL. It is shown in this thesis that this arbiter can be extended easily to support large numbers of processors and can be used for chip multiprocessor arbitration purposes. Furthermore, the microthreaded model requires dynamic register allocation and a hardware scheduler, which can support hundreds of microthreads per processor and their associated microcontexts. The scheduler must support thread creation, context switching and thread rescheduling on every machine cycle to fully support this model, which is a significant challenge. In this thesis, scalable implementations and evaluation of these support structures are presented, and the feasibility of large-scale CMPs is investigated by giving detailed area estimates of these structures using 0.07-micron technology.
Style APA, Harvard, Vancouver, ISO itp.
39

Say, Fatih. "Exponential asymptotics : multi-level asymptotics of model problems". Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/33986/.

Pełny tekst źródła
Streszczenie:
Exponential asymptotics, which deals with the interpretation of divergent series, is a highly topical field in mathematics. Exponentially small quantities frequently arise in applications, and Poincaré's definition of an asymptotic expansion unfortunately fails to emphasise the importance of such small exponentials, as they are hidden behind the algebraic-order terms. In this thesis, we introduce a new method of hyperasymptotic expansion by inspecting the resultant remainders of series. We study the method from two different angles. First, deriving the singularities and the late-order terms, we truncate expansions at the least term and observe whether the remainder is exponentially small. Substitution of the truncated remainder into the original differential equation generates an inhomogeneous differential equation for the remainders. We expand the remainder as an asymptotic power series, and the truncation then leads to a new remainder which is exponentially smaller, so the associated error estimate shrinks and the numerical precision increases. Systematically repeating this process of re-expanding the truncated remainders yields the exponential improvement in the approximate solution of the expansions and minimises the ignored terms, i.e., the error estimate. Second, in establishing the level-one error, which is a function of the level-zero and level-one truncation points, we study the asymptotic behaviour in terms of the truncation points and allow them to vary. Writing the estimate as a function of the preceding level's truncation point and varying the number of terms decreases the error dramatically. We also discuss the Stokes lines originating from the singularities of the expansion(s) and the switching on and off of the subdominant exponentials across these lines. A key result of this thesis is that when the higher levels of the expansions are considered in terms of the truncation points of the preceding stages, the error estimate is minimised. This is demonstrated via several differential equations provided in the thesis.
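As a worked illustration of why truncating at the least term leaves an exponentially small remainder (a standard textbook example, not one of the model problems from the thesis), consider the asymptotic expansion of an Euler-type integral:

\[
I(x) \;=\; \int_0^\infty \frac{e^{-xt}}{1+t}\,dt \;\sim\; \sum_{n=0}^{\infty} \frac{(-1)^n\, n!}{x^{n+1}}, \qquad x \to \infty .
\]

The magnitudes \(n!/x^{n+1}\) decrease until \(n \approx x\) and then grow, so truncating at \(N \approx x\) and applying Stirling's formula bounds the remainder by roughly

\[
\frac{N!}{x^{N+1}} \;\approx\; \sqrt{\frac{2\pi}{x}}\; e^{-x} \qquad (N \approx x),
\]

which is exponentially small in \(x\); re-expanding this remainder, rather than discarding it, is the hyperasymptotic step the abstract describes.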
Style APA, Harvard, Vancouver, ISO itp.
40

Liu, Xiang. "A Multi-Indexed Logistic Model for Time Series". Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etd/3140.

Pełny tekst źródła
Streszczenie:
In this thesis, we explore a multi-indexed logistic regression (MILR) model, with particular emphasis on its application to time series. MILR includes simple logistic regression (SLR) as a special case, and the hope is that it will in some instances also produce significantly better results. To motivate the development of MILR, we consider its application to the analysis of both simulated sine wave data and stock data. We first review the well-studied SLR and its application to time series data. Using a more sophisticated representation of sequential data, we then detail the implementation of MILR. We compare their performance using forecast accuracy and an area-under-the-curve score on simulated sine waves with various intensities of Gaussian noise and on Standard & Poor's 500 historical data. Overall, the finding that MILR outperforms SLR is validated on both real and simulated data. Finally, some possible future directions of research are discussed.
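To make the baseline concrete, here is a minimal sketch of the simple-logistic-regression setup that MILR generalizes, applied to a lagged time series. The multi-indexed formulation itself is defined in the thesis and is not reproduced here; the lag length and the synthetic data are illustrative assumptions.

```python
# Minimal sketch of the simple-logistic-regression (SLR) baseline that MILR
# generalizes, applied to a time series: predict whether the next value of a
# noisy sine wave goes up, using the last `lags` observations as features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
t = np.arange(500)
series = np.sin(2 * np.pi * t / 50) + rng.normal(scale=0.3, size=t.size)

lags = 5
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = (series[lags:] > series[lags - 1:-1]).astype(int)   # 1 if the series rose

split = 400
model = LogisticRegression().fit(X[:split], y[:split])
print("hold-out accuracy:", model.score(X[split:], y[split:]))
```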
Style APA, Harvard, Vancouver, ISO itp.
41

Nichele, Stefano <1994>. "Business model innovation in crowdsourcing multi-sided platforms". Master's Degree Thesis, Università Ca' Foscari Venezia, 2019. http://hdl.handle.net/10579/15739.

Pełny tekst źródła
Streszczenie:
The business model has gained a lot of attention over the last years, and the concept of business model innovation has probably never been more important than it is today. Phenomena such as digitalization have given rise to new products and services but, most importantly, have pushed companies to explore alternative and novel business model configurations. In particular, we have witnessed the rise of the multi-sided platform as one of the most disruptive and widely adopted business model designs of the current digital era. The thesis seeks to increase the understanding of the multi-sided platform business model and its possible developments with crowdsourcing practices. The study is conducted through a literature review on the concepts of business model and business model innovation, relying on the main contributions from several authors on the topic. The main features of the multi-sided business model and the concept of crowdsourcing, with its possible applications and integrations, are then examined. The study is completed with a qualitative case study of a company that operates in the business of digital testing for software, mobile applications and websites and has adopted a crowdsourcing multi-sided platform business model in order to link companies that need to test their digital channels with real testers. Born as a start-up, the company has undertaken many innovations in its business model and, thanks to those innovations, has become one of the leaders in the business of digital testing. The innovations are examined through a recent framework that integrates the concept of modularity from the theory of complex systems into business modelling and business model innovation.
Style APA, Harvard, Vancouver, ISO itp.
42

Morris, Richard Glenn. "A Hierarchical Multi-Output Nearest Neighbor Model for Multi-Output Dependence Learning". BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3512.

Pełny tekst źródła
Streszczenie:
Multi-Output Dependence (MOD) learning is a generalization of standard classification problems that allows for multiple outputs that are dependent on each other. A primary issue that arises in the context of MOD learning is that for any given input pattern there can be multiple correct output patterns. This changes the learning task from function approximation to relation approximation. Previous algorithms do not consider this problem, and thus cannot be readily applied to MOD problems. To perform MOD learning, we introduce the Hierarchical Multi-Output Nearest Neighbor model (HMONN) that employs a basic learning model for each output and a modified nearest neighbor approach to refine the initial results. This paper focuses on tasks with nominal features, although HMONN has the initial capacity for solving MOD problems with real-valued features. Results obtained using UCI repository, synthetic, and business application data sets show improved accuracy over a baseline that treats each output as independent of all the others, with HMONN showing improvement that is statistically significant in the majority of cases.
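As a rough sketch of the general idea (one base learner per output plus a nearest-neighbor refinement toward jointly observed output patterns), the following toy code shows one plausible reading of the approach described above. The actual HMONN hierarchy and refinement rules are defined in the thesis; the data, base learner and refinement rule below are illustrative assumptions.

```python
# Rough, illustrative sketch: learn one base classifier per output, then refine
# the joint prediction by snapping it to the most similar output pattern
# observed among the nearest training neighbors. Not the thesis's algorithm.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_per_output(X, Y):
    return [DecisionTreeClassifier(random_state=0).fit(X, Y[:, j])
            for j in range(Y.shape[1])]

def predict_refined(models, X_train, Y_train, x, k=5):
    x = x.reshape(1, -1)
    initial = np.array([m.predict(x)[0] for m in models])       # independent predictions
    idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]   # k nearest inputs
    candidates = Y_train[idx]                                    # their observed output patterns
    best = candidates[np.argmin([np.sum(c != initial) for c in candidates])]
    return best                                                  # closest observed joint pattern

# toy data: two binary outputs that depend on each other
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [1, 1], [0, 0]])
Y = np.array([[0, 0], [0, 1], [1, 1], [1, 0], [1, 0], [0, 0]])
models = fit_per_output(X, Y)
print(predict_refined(models, X, Y, np.array([1, 1])))
```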
Style APA, Harvard, Vancouver, ISO itp.
43

CHIEN, FENG-SUNG, i 簡楓松. "Macroeconomic Multi-Factor Model". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/42486777008104639150.

Pełny tekst źródła
Streszczenie:
Master's thesis
國立中山大學 (National Sun Yat-sen University)
財務管理學系研究所 (Department of Finance, graduate institute)
Academic year 104 (ROC calendar)
The purpose of this paper is to construct a macroeconomic multi-factor risk model based on the listed stock market in Taiwan. The first part screens for effective factors. We collect macroeconomic indicators from Taiwan and the USA and perform unit root tests to make sure that our data are stationary time series. We select the effective indicators according to the criteria in this paper and then use factor analysis and principal component analysis to derive our composite macro factors. The second part builds the risk model, including estimating the factor exposures, specific returns, specific risks and the factor return covariance matrix. Finally, we conduct a bias test to evaluate the performance of our model during the out-of-sample period from 2008/01 to 2014/12. Our empirical results show that the average R-squared of our model is 22%, which indicates that the macro factors selected using the methodology in this paper can explain part of the variability of stock returns and provide fund managers with macroeconomic viewpoints when doing index tracking or constructing enhanced portfolios.
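To illustrate the estimation step at the heart of such a model, here is a minimal, self-contained sketch using simulated data: it regresses asset returns on factor returns to obtain exposures, a factor covariance matrix and specific risk. It is not the thesis's model; the composite-factor construction via factor analysis/PCA and the bias test are omitted, and all numbers are made up.

```python
# Illustrative sketch of the core estimation step in a macroeconomic
# multi-factor risk model: regress each stock's returns on the macro factors to
# get exposures (betas), then split risk into factor and specific parts.
import numpy as np

rng = np.random.default_rng(1)
T, n_stocks, n_factors = 120, 10, 3            # months, assets, macro factors
F = rng.normal(size=(T, n_factors))            # composite macro factor returns
true_B = rng.normal(size=(n_stocks, n_factors))
R = F @ true_B.T + rng.normal(scale=0.05, size=(T, n_stocks))   # stock returns

X = np.column_stack([np.ones(T), F])           # add intercept
coef, *_ = np.linalg.lstsq(X, R, rcond=None)   # (1 + n_factors) x n_stocks
B = coef[1:].T                                 # estimated factor exposures
resid = R - X @ coef
specific_var = resid.var(axis=0)               # specific (idiosyncratic) risk
factor_cov = np.cov(F, rowvar=False)           # factor return covariance matrix
r_squared = 1 - resid.var(axis=0) / R.var(axis=0)
print("average R-squared:", r_squared.mean().round(3))
```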
Style APA, Harvard, Vancouver, ISO itp.
44

Hsieh, Hsin-I., i 謝心怡. "Multi-Class Multi-Tiers Dasymetric Demographic Model". Thesis, 2007. http://ndltd.ncl.edu.tw/handle/33465760518963285551.

Pełny tekst źródła
Streszczenie:
Master's thesis
國立臺灣大學 (National Taiwan University)
生物環境系統工程學研究所 (Graduate Institute of Bioenvironmental Systems Engineering)
Academic year 95 (ROC calendar)
The population distribution plays a crucial role in many research fields. However, maps of population distribution are usually based on administrative divisions and only show the population density or the number of people in each region. Traditional population distribution maps suffer from problems such as overly large areal units, the inapplicability of administrative units in many applications, and changes of administrative boundaries over time. Many methods for estimating the population distribution have been proposed in the literature to remedy these defects, but each has its own advantages and disadvantages, and most studies use a single estimation method. Some recent studies have proposed the concept of multi-layered estimation, which integrates various estimation methods in one framework. A multi-layer, multi-class framework is proposed in this study to improve the accuracy of population distribution estimates. A grid data structure with 40 m resolution was used. Each layer (building, land-use, and traffic accessibility) uses its own estimation method (binary, multi-class, and accessibility, respectively) to estimate the population in each cell, in order to better capture the spatial distribution of regional populations. The proposed framework captures the true population distribution better than traditional population density maps based on administrative divisions. The standard deviation of estimation errors decreases from 10.39 to 9.29 in the first layer, to 8.71 in the second layer, and 8.71 in the third layer. The results are also better than decomposing the total population based on administrative units, for which the standard deviation of errors is 9.91 (town) and 8.87 (village), respectively. The number of cells without errors also increases, and the average error in the erroneous cells decreases with the improvements across layers: the mean error is 10.42 in the first layer, 8.65 in the second layer and 8.57 in the third. It is also found that the improvement is most significant in the first layer and diminishes in the following ones.
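For a concrete picture of the dasymetric step, the sketch below redistributes one district's population total across its grid cells according to land-use class weights. The classes, weights and numbers are illustrative assumptions rather than values from the thesis, and the building and accessibility layers are omitted.

```python
# Minimal sketch of dasymetric disaggregation: redistribute a district's total
# population to its grid cells in proportion to per-class weights (e.g.
# residential cells attract more people than farmland).
import numpy as np

# one district of 3x3 cells, each labelled with a land-use class
land_use = np.array([
    ["residential", "residential", "road"],
    ["residential", "commercial",  "farmland"],
    ["farmland",    "farmland",    "water"],
])
class_weight = {"residential": 1.0, "commercial": 0.6,
                "farmland": 0.1, "road": 0.0, "water": 0.0}
district_population = 900

w = np.vectorize(class_weight.get)(land_use).astype(float)
cell_pop = district_population * w / w.sum()    # dasymetric allocation
print(cell_pop.round(1))
```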
Style APA, Harvard, Vancouver, ISO itp.
45

Mallya, Ajay. "Deductive multi-valued model checking /". 2006. http://proquest.umi.com/pqdweb?did=1221734391&sid=4&Fmt=2&clientId=10361&RQT=309&VName=PQD.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
46

Huang, Tzu-Jung, i 黃子容. "Revised Multi-Path MAXBAND model". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/v5dp37.

Pełny tekst źródła
Streszczenie:
Master's thesis
國立交通大學 (National Chiao Tung University)
運輸與物流管理學系 (Department of Transportation and Logistics Management)
Academic year 105 (ROC calendar)
Signal control is one of the most effective methods to enhance the utility of intersections. The idea is to vary signal variables such as offsets, cycle length and phase lengths so as to maximize the throughput, or minimize the delays, of synchronized intersections. The MAXBAND model is a common mathematical program in signal control research: it solves for the offsets that maximize the progression bandwidth through a suite of synchronized intersections, and it is often applied to two-way path progression problems. However, the traditional MAXBAND model cannot consider the effect of left-turn traffic flows discharged from ramps when dealing with networks of surface streets approaching freeway ramps, so the model solution sacrifices the efficiency of critical paths. To address this issue, Yang et al. (2015) proposed a multi-path progression MAXBAND model that considers progression problems with more than two large-volume paths and is able to optimize the phase sequences. Based on the model of Yang et al. (2015), this work extends the multi-path MAXBAND model by (a) optimizing the phase lengths/ratios; (b) considering traffic dispersion effects; and (c) using shockwave theory to formulate maximum queue length constraints at the intersections, so that the queue clearance times, offsets and cycle length are related within the model. The research takes a network in Chubei, Hsinchu as the experimental case and uses the CORSIM simulator to compare the performance of the current timing plan with that of the revised multi-path MAXBAND solution. The results show that the revised model can improve the efficiency of the synchronized intersections by over 40% and the efficiency of each important path by up to 60%. In addition, this research provides the implementation code in the Gurobi C++ interface for future research.
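For orientation, a schematic view of the underlying optimization is given below. This is only the core of the classical MAXBAND formulation; the revised model in the thesis adds multi-path bands, phase-length variables, dispersion effects and queue-clearance/shockwave constraints, none of which are reproduced here.

\[
\max_{b,\ \bar b,\ w_i,\ \bar w_i}\; b + \bar b
\quad\text{s.t.}\quad
w_i + b \le 1 - r_i, \qquad \bar w_i + \bar b \le 1 - \bar r_i \quad \text{for every signal } i,
\]

where all times are expressed in fractions of the cycle, \(b\) and \(\bar b\) are the outbound and inbound bandwidths, \(r_i\) the red times, and \(w_i\) the position of the band inside the green window; additional integer "loop" constraints link the \(w_i\) of consecutive intersections through travel times and offsets, which is what makes the problem a mixed-integer program.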
Style APA, Harvard, Vancouver, ISO itp.
47

Mukherjee, Prateep. "Active geometric model : multi-compartment model-based segmentation & registration". Thesis, 2014. http://hdl.handle.net/1805/4908.

Pełny tekst źródła
Streszczenie:
Indiana University-Purdue University Indianapolis (IUPUI)
We present a novel, variational and statistical approach to model-based segmentation. Our model generalizes the Chan-Vese model proposed for concurrent segmentation of multiple objects embedded in the same image domain. We also propose a novel shape descriptor, namely Multi-Compartment Distance Functions (mcdf). Our proposed framework for segmentation is two-fold: first, several training samples distributed across various classes are registered onto a common frame of reference; then, we use a variational method similar to Active Shape Models (ASMs) to generate an average shape model and use the latter to partition new images. The key advantages of such a framework are: (i) landmark-free automated shape training; (ii) a strictly shape-constrained model to fit test data. Our model can naturally deal with shapes of arbitrary dimension and topology (closed/open curves). We term our model the Active Geometric Model, since it focuses on segmentation of geometric shapes. We demonstrate the power of the proposed framework in two important medical applications: one for morphology estimation of 3D motor neuron compartments, and another for thickness estimation of Henle's fiber layer in the retina. We also compare the qualitative and quantitative performance of our method with that of several other state-of-the-art segmentation methods.
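For reference, the standard two-phase Chan-Vese energy that the proposed model generalizes is shown below; the multi-compartment generalization and the mcdf descriptor are defined in the thesis and are not reproduced here:

\[
E(c_1, c_2, C) \;=\; \mu\,\operatorname{Length}(C) \;+\; \nu\,\operatorname{Area}(\mathrm{inside}(C))
\;+\; \lambda_1 \int_{\mathrm{inside}(C)} |I(x) - c_1|^2\, dx
\;+\; \lambda_2 \int_{\mathrm{outside}(C)} |I(x) - c_2|^2\, dx,
\]

where \(I\) is the image, \(C\) the evolving contour, and \(c_1, c_2\) the average intensities inside and outside \(C\); segmentation minimises \(E\) over \(c_1\), \(c_2\) and \(C\).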
Style APA, Harvard, Vancouver, ISO itp.
48

Yi-Tsen-Kuo i 郭羿岑. "Growth-Cycle Decomposition Multi-Generational Diffusion Model". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/26332121802274505739.

Pełny tekst źródła
Streszczenie:
Master's thesis
國立成功大學 (National Cheng Kung University)
工業與資訊管理學系碩博士班 (Department of Industrial and Information Management, master's and doctoral program)
Academic year 98 (ROC calendar)
Sales forecasting is one of the most important activities in every company. Changes in customer requirements and in the environment affect procurement, inventory management, production scheduling and other enterprise activities. Accurate demand forecasts can lower inventory costs, reduce manpower requirements, and improve customer service quality and business competitiveness. In this study, two different methods are used to forecast sales in the DRAM industry. The first is a Growth-Cycle Decomposition Multi-Generational Diffusion Model for forecasting multi-generational DRAM sales. The DRAM industry is easily affected by the business cycle and by price, yet the general multi-generational diffusion model does not consider business cycle variables. This study therefore adds both price and GDP as variables and examines whether they are meaningful to the model. The Norton and Bass (1987) model is extensively cited by many researchers, so this study is also compared against it. The second method is grey theory, used to forecast global DRAM sales. The grey theory method can easily handle nonlinear problems and forecasting from small samples with little data. This study uses annual sales data of the DRAM industry over a decade to construct an appropriate sales forecasting model, improving the efficiency of enterprise management and decision-making and serving as a reference for managers seeking to improve competitiveness. The global DRAM market is used as the empirical subject. Results show that grey theory achieves good accuracy for DRAM and can predict well using only a small amount of historical data. Moreover, adding the price and GDP variables to the Growth-Cycle Decomposition Multi-Generational Diffusion Model also contributes to the model.
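To make the grey-theory step concrete, below is a minimal sketch of the standard GM(1,1) grey forecasting model; this is the textbook formulation, shown only to illustrate the method named above, and the sales figures are made up rather than the thesis's DRAM data.

```python
# Minimal GM(1,1) grey-model sketch: accumulate the series, fit the grey
# parameters a and b by least squares on the adjacent-mean series, then map
# the fitted accumulated series back to the original scale.
import numpy as np

def gm11_forecast(x0, steps=3):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # adjacent means
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # grey parameters
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a    # fitted accumulated series
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])          # back to original scale
    x0_hat[0] = x0[0]
    return x0_hat[len(x0):]                              # the forecast horizon

annual_sales = [21.3, 23.9, 26.8, 30.1, 33.5, 37.2]      # illustrative only
print(gm11_forecast(annual_sales, steps=2))
```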
Style APA, Harvard, Vancouver, ISO itp.
49

Huang, Jing-teng, i 黃敬棠. "Multi-factor model of vertical linkages". Thesis, 2010. http://ndltd.ncl.edu.tw/handle/12572707578855327625.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
50

Gvozdetska, Nataliia. "Transfer Learning for Multi-surrogate-model Optimization". 2020. https://tud.qucosa.de/id/qucosa%3A73313.

Pełny tekst źródła
Streszczenie:
Surrogate-model-based optimization is widely used to solve black-box optimization problems when evaluating the target system is expensive. However, when the optimization budget is limited to one or a few evaluations, surrogate-model-based optimization may not perform well due to the lack of knowledge about the search space. In this case, transfer learning helps to obtain a good optimization result by reusing experience from previous optimization runs; and when the budget is not strictly limited, transfer learning can still improve the final results of black-box optimization. Recent work in surrogate-model-based optimization has shown that using multiple surrogates (i.e., applying multi-surrogate-model optimization) can be extremely efficient in complex search spaces. The main assumption of this thesis is that transfer learning can further improve the quality of multi-surrogate-model optimization. However, to the best of our knowledge, no approaches to transfer learning in the multi-surrogate-model context exist yet. In this thesis, we propose an approach to transfer learning for multi-surrogate-model optimization. It encompasses an improved method for deciding on the expediency of knowledge transfer, adapted multi-surrogate-model recommendation, multi-task-learning parameter tuning, and few-shot learning techniques. We evaluated the proposed approach on a set of algorithm selection and parameter setting problems, comprising mathematical function optimization and the traveling salesman problem, as well as random forest hyperparameter tuning over OpenML datasets. The evaluation shows that the proposed approach improves the quality delivered by multi-surrogate-model optimization and yields good optimization results even under a strictly limited budget.
Thesis outline: 1 Introduction; 2 Background; 3 Related work; 4 Transfer learning for multi-surrogate-model optimization; 5 Evaluation; 6 Conclusion and Future work.
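The sketch below illustrates the transfer idea in its simplest form: warm-starting a surrogate-model-based optimizer with (configuration, objective) pairs from a previous, similar run, so that good candidates can be proposed even with a budget of only a few new evaluations. It is a generic single-surrogate sketch, not the multi-surrogate BRISE pipeline of the thesis; the objective function, the prior data and the greedy acquisition rule are illustrative assumptions.

```python
# Warm-started surrogate-model-based optimization: the surrogate is fitted on
# evaluations transferred from an earlier run plus the few new evaluations the
# budget allows, and the next candidate is chosen from the surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_objective(x):                  # stand-in for the real black box
    return (x - 0.3) ** 2

# knowledge transferred from an earlier, similar run
prev_X = np.array([[0.0], [0.5], [1.0]])
prev_y = np.array([0.10, 0.05, 0.48])        # evaluations of a related task

candidates = np.linspace(0, 1, 101).reshape(-1, 1)
X_seen, y_seen = prev_X.copy(), prev_y.copy()

budget = 3                                   # only three new expensive evaluations
for _ in range(budget):
    gp = GaussianProcessRegressor().fit(X_seen, y_seen)   # surrogate on old + new data
    mu = gp.predict(candidates)
    x_next = candidates[np.argmin(mu)]                    # greedy: pick predicted minimum
    y_next = expensive_objective(x_next[0])
    X_seen = np.vstack([X_seen, [x_next]])
    y_seen = np.append(y_seen, y_next)

print("best configuration found:", X_seen[np.argmin(y_seen)])
```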
Style APA, Harvard, Vancouver, ISO itp.