Dissertations / Theses on the topic 'MULTSIM (Computer program)'

To see the other types of publications on this topic, follow the link: MULTSIM (Computer program).

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'MULTSIM (Computer program).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

McIlroy, Ross. "Using program behaviour to exploit heterogeneous multi-core processors." Thesis, University of Glasgow, 2010. http://theses.gla.ac.uk/1755/.

Full text
Abstract:
Multi-core CPU architectures have become prevalent in recent years. A number of multi-core CPUs consist not only of multiple processing cores, but also of multiple different types of processing cores, each with different capabilities and specialisations. These heterogeneous multi-core architectures (HMAs) can deliver exceptional performance; however, they are notoriously difficult to program effectively. This dissertation investigates the feasibility of ameliorating many of the difficulties encountered in application development on HMA processors by employing a behaviour-aware runtime system. This runtime system provides applications with the illusion of executing on a homogeneous architecture, by presenting a homogeneous virtual machine interface. The runtime system uses knowledge of a program's execution behaviour, gained through explicit code annotations, static analysis or runtime monitoring, to inform its resource allocation and scheduling decisions, such that the application makes best use of the HMA's heterogeneous processing cores. The goal of this runtime system is to enable non-specialist application developers to write applications that can exploit an HMA, without the developer requiring in-depth knowledge of the HMA's design. This dissertation describes the development of a Java runtime system, called Hera-JVM, aimed at investigating this premise. Hera-JVM supports the execution of unmodified Java applications on both processing core types of the heterogeneous IBM Cell processor. An application's threads of execution can be transparently migrated between the Cell's different core types by Hera-JVM, without requiring the application's involvement. A number of real-world Java benchmarks are executed across both of the Cell's core types, to evaluate the efficacy of abstracting a heterogeneous architecture behind a homogeneous virtual machine.
By characterising the performance of each of the Cell processor's core types under different program behaviours, a set of influential program behaviour characteristics is uncovered. A set of code annotations are presented, which enable program code to be tagged with these behaviour characteristics, enabling a runtime system to track a program's behaviour throughout its execution. This information is fed into a cost function, which Hera-JVM uses to automatically estimate whether the executing program's threads of execution would benefit from being migrated to a different core type, given their current behaviour characteristics. The use of history, hysteresis and trend tracking, by this cost function, is explored as a means of increasing its stability and limiting detrimental thread migrations. The effectiveness of a number of different migration strategies is also investigated under real-world Java benchmarks, with the most effective found to be a strategy that can target code, such that a thread is migrated whenever it executes this code. This dissertation also investigates the use of runtime monitoring to enable a runtime system to automatically infer a program's behaviour characteristics, without the need for explicit code annotations. A lightweight runtime behaviour monitoring system is developed, and its effectiveness at choosing the most appropriate core type on which to execute a set of real-world Java benchmarks is examined. Combining explicit behaviour characteristic annotations with those characteristics which are monitored at runtime is also explored. Finally, an initial investigation is performed into the use of behaviour characteristics to improve application performance under a different type of heterogeneous architecture, specifically, a non-uniform memory access (NUMA) architecture. Thread teams are proposed as a method of automatically clustering communicating threads onto the same NUMA node, thereby reducing data access overheads. 
Evaluation of this approach shows that it is effective at improving application performance, if the application's threads can be partitioned across the available NUMA nodes of a system. The findings of this work demonstrate that a runtime system with a homogeneous virtual machine interface can reduce the challenge of application development for HMA processors, whilst still being able to exploit such a processor by taking program behaviour into account.
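The migration cost function with hysteresis described in this abstract can be sketched in a few lines. This is a hypothetical illustration, not Hera-JVM's actual model: the core names match the Cell's two core types (PPE and SPE), but the behaviour characteristics, weights, and threshold below are invented.

```python
# Hypothetical sketch of a behaviour-driven migration decision with
# hysteresis. Weights and threshold are illustrative only.

# Per-core-type affinity weights for two behaviour characteristics.
CORE_WEIGHTS = {
    "PPE": {"compute": 1.0, "memory": 2.0},   # general-purpose core
    "SPE": {"compute": 3.0, "memory": 0.5},   # accelerator core
}

HYSTERESIS = 1.25  # a new core must look at least 25% better to migrate


def best_core(behaviour):
    """Score each core type under the thread's current behaviour."""
    scores = {
        core: sum(w[k] * behaviour.get(k, 0.0) for k in w)
        for core, w in CORE_WEIGHTS.items()
    }
    return max(scores, key=scores.get), scores


def should_migrate(current_core, behaviour):
    """Return the core the thread should run on next."""
    candidate, scores = best_core(behaviour)
    if candidate == current_core:
        return current_core
    # Hysteresis: only migrate on a clear win, to limit thrashing.
    if scores[candidate] >= HYSTERESIS * scores[current_core]:
        return candidate
    return current_core


# A compute-heavy thread currently on the general-purpose core:
print(should_migrate("PPE", {"compute": 0.9, "memory": 0.1}))  # SPE
```

The hysteresis factor plays the stabilising role the abstract attributes to history and trend tracking: a marginal score difference never triggers a migration.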
APA, Harvard, Vancouver, ISO, and other styles
2

Kim, Minjang. "Dynamic program analysis algorithms to assist parallelization." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45758.

Full text
Abstract:
All market-leading processor vendors have started to pursue multicore processors as an alternative to high-frequency single-core processors for better energy and power efficiency. This transition to multicore processors no longer provides programmers the free performance gain enabled by increased clock frequency. Parallelization of existing serial programs has become the most powerful approach to improving application performance. Not surprisingly, parallel programming is still extremely difficult for many programmers, mainly because thinking in parallel is simply beyond human perception. However, we believe that software tools based on advanced analyses can significantly reduce this parallelization burden. Much active research and many tools exist for already parallelized programs, such as tools for finding concurrency bugs. Instead we focus on program analysis algorithms that assist the actual parallelization steps: (1) finding parallelization candidates, (2) understanding the parallelizability and profits of the candidates, and (3) writing parallel code. A few commercial tools are introduced for these steps. A number of researchers have proposed various methodologies and techniques to assist parallelization. However, many weaknesses and limitations still exist. In order to assist the parallelization steps more effectively and efficiently, this dissertation proposes Prospector, which consists of several new and enhanced program analysis algorithms. First, an efficient loop profiling algorithm is implemented. Frequently executed loops can be candidates for profitable parallelization targets. The detailed execution profiling for loops provides a guide for selecting initial parallelization targets. Second, an efficient and rich data-dependence profiling algorithm is presented. Data dependence is the most essential factor that determines parallelizability.
Prospector exploits dynamic data-dependence profiling, which is an alternative and complementary approach to traditional static-only analyses. However, even state-of-the-art dynamic dependence analysis algorithms can only successfully profile a program with a small memory footprint. Prospector introduces an efficient data-dependence profiling algorithm to support large programs and inputs, as well as to provide highly detailed profiling information. Third, a new speedup prediction algorithm is proposed. Although the loop profiling can give a qualitative estimate of the expected profit, obtaining accurate speedup estimates needs more sophisticated analysis. Prospector introduces a new dynamic emulation method to predict parallel speedups from annotated serial code. Prospector also provides a memory performance model to predict speedup saturation due to increased memory traffic. Compared to the latest related work, Prospector significantly improves both prediction accuracy and coverage. Finally, Prospector provides algorithms that extract hidden parallelism and advice on writing parallel code. We present a number of case studies of how Prospector assists manual parallelization in particular cases, including privatization, reduction, mutex, and pipelining.
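The kind of dynamic data-dependence profiling the abstract describes can be illustrated with a toy shadow-memory sketch (not Prospector's actual algorithm, which is engineered for large memory footprints): replay a memory-access trace and, by remembering the last writer and reader of each address, classify cross-iteration dependences.

```python
# Toy dynamic data-dependence profiler over a recorded access trace.
# Each trace entry is (iteration, op, address) with op in {'R', 'W'}.

def profile_dependences(trace):
    """Return the set of cross-iteration dependences (kind, src, dst)."""
    last_write = {}   # address -> iteration of most recent write
    last_read = {}    # address -> iteration of most recent read
    deps = set()
    for it, op, addr in trace:
        if op == "R":
            if addr in last_write and last_write[addr] != it:
                deps.add(("RAW", last_write[addr], it))  # flow dependence
            last_read[addr] = it
        else:  # write
            if addr in last_write and last_write[addr] != it:
                deps.add(("WAW", last_write[addr], it))  # output dependence
            if addr in last_read and last_read[addr] != it:
                deps.add(("WAR", last_read[addr], it))   # anti dependence
            last_write[addr] = it
    return deps


# A loop like a[i] = a[i-1] + 1 carries flow dependences between iterations:
trace = [(0, "W", 100), (1, "R", 100), (1, "W", 104), (2, "R", 104)]
print(sorted(profile_dependences(trace)))  # two RAW dependences
```

Keeping one shadow entry per address is exactly what limits naive profilers to small footprints; the abstract's contribution is making this scale.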
APA, Harvard, Vancouver, ISO, and other styles
3

Hadhrawi, Mohammad K. "The impact of computer interfaces on multi-objective negotiation problems." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106055.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 105-107).
Planning a city is a complex task that requires collaboration between multiple stakeholders who have different and often conflicting goals and objectives. Researchers have studied the role of technology in group collaboration for many years. It has been noted that when the task between collaborators increases in complexity, such as in a decision-making process, the use of computer technology could enhance, or disturb, the collaboration process. This thesis evaluates whether a Tangible User Interface (TUI) is more effective for multi-objective group decision-making than a Graphical User Interface (GUI). To examine this question, I designed and developed the CityGame framework, a web-based negotiation and decision-support game with a multi-modal interface for an urban planning scenario. The interfaces were evaluated in a within-subjects study with 31 participants of varying background, who were assigned a planning task in a gameplay session. Results show that tangible interfaces have some observable advantages over digital interfaces in this scenario.
by Mohammad K. Hadhrawi.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
4

Fainter, Robert Gaffney. "AdaTAD - a debugger for the Ada multi-task environment." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/54289.

Full text
Abstract:
In a society that is increasingly dependent upon computing machinery, the issues associated with the correct functioning of that machinery are of crucial interest. The consequences of erroneous behavior of computers are dire, with the worst-case scenario being, conceivably, global thermonuclear war. Therefore, development of procedures and tools which can be used to increase the confidence of the correctness of the software that controls the world's computers is of vital importance. The Department of Defense (DoD) is in the process of adopting a standard computer language for the development of software. This language is called Ada. One of the major features of Ada is that it supports concurrent programming via its "task" compilation unit. There are not, however, any automated tools to aid in locating errors in the tasks. The design for such a tool is presented. The tool is named AdaTAD and is a debugger for programs written in Ada. The features of AdaTAD are specific to the problems of concurrent programming. The requirements of AdaTAD are derived from the literature. AdaTAD is, however, a unique tool designed using Ada as a program description language. When AdaTAD is implemented in Ada it becomes portable among all environments which support the Ada language. This offers the advantage that a single debugger is portable to many different machine architectures. Therefore, separate debuggers are not necessary for each implementation of Ada. Moreover, since AdaTAD is designed to allow debugging of tasks, AdaTAD will also support debugging in a distributed environment. That means that, if the tasks of a user's program are running on different computers in a distributed environment, the user is still able to use AdaTAD to debug the tasks as a single program. This feature is unique among automated debuggers.
After the design is presented, several examples are offered to explain the operation of AdaTAD and to show that AdaTAD is useful in revealing the location of errors specific to concurrent programming.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
5

Weiser, David A. "Hybrid analysis of multi-threaded Java programs." Laramie, Wyo. : University of Wyoming, 2007. http://proquest.umi.com/pqdweb?did=1400971421&sid=1&Fmt=2&clientId=18949&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yasin, Atif. "Synergistic Timing Speculation for Multi-Threaded Programs." DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/5229.

Full text
Abstract:
Timing speculation is a promising approach to increase processor performance and energy efficiency. Under timing speculation, an integrated circuit is allowed to operate at a speed faster than its slowest path, the critical path. It is based on the empirical observation, which is presented later in the thesis, that these critical path delays are rarely manifested during program execution. Consequently, as long as the processor is equipped with an error detection and recovery mechanism, its performance can be increased and/or energy consumption reduced beyond that achievable by any conventional operation. While many past works have dealt with timing speculation within a single core, in this work a new direction is being uncovered: timing speculation for a multi-core processor executing a parallel, multi-threaded application. Through a rigorous cross-layered circuit architectural analysis, it is observed that during the execution of a multi-threaded program, there is a significant variation in circuit delay characteristics across different threads. Synergistic Timing Speculation (SynTS) is proposed to exploit this variation (heterogeneity) in path sensitization delays, to jointly optimize the energy and execution time of the many-core processor. In particular, SynTS uses a sampling-based online error probability estimation technique, coupled with a polynomial time algorithm, to optimally determine the voltage, frequency and the amount of timing speculation for each thread. The experimental analysis is presented for three pipe stages, namely, Decode, SimpleALU and ComplexALU, with a reduction in Energy Delay Product by up to 26%, 25% and 7.5% respectively, compared to the existing per-core timing speculation scheme. The analysis also embeds a case study for a General Purpose Graphics Processing Unit.
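The trade-off the abstract describes can be sketched numerically. This toy model uses invented numbers and holds energy per cycle fixed (in reality it rises with voltage and frequency, which SynTS optimizes jointly); it only shows why a speculative operating point can win on energy-delay product despite paying occasional recovery penalties.

```python
# Back-of-the-envelope energy-delay-product (EDP) comparison between a
# speculative operating point (faster clock, occasional timing errors
# that each cost a recovery penalty) and a conservative one.

def expected_edp(freq_ghz, base_cycles, err_prob, recovery_cycles,
                 energy_per_cycle=1.0):
    """Expected EDP for a workload of base_cycles at the given frequency.

    err_prob is the probability that the run incurs one recovery episode
    of recovery_cycles; energy_per_cycle is held fixed for simplicity.
    """
    cycles = base_cycles + err_prob * recovery_cycles
    delay = cycles / (freq_ghz * 1e9)          # seconds
    energy = cycles * energy_per_cycle         # arbitrary units
    return energy * delay


# Speculative point (2.4 GHz, 1% error chance) vs. safe point (2.0 GHz):
speculative = expected_edp(2.4, 1_000_000, 0.01, 2_000_000)
conservative = expected_edp(2.0, 1_000_000, 0.0, 2_000_000)
print(speculative < conservative)  # True
```

Because critical-path delays are rarely exercised, the error term stays small and the frequency gain dominates, which is the empirical basis the abstract cites.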
APA, Harvard, Vancouver, ISO, and other styles
7

Karetsos, Athanasios. "Extracting analyzable models from multi-threaded programs." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-114195.

Full text
Abstract:
As technology evolves, the need to use software for critical applications increases. It is then required that this software always behaves correctly. Verification is the process of formally proving that a program is correct. Model checking is a technique used to perform verification, which has been successful with finite state concurrent programs. In recent years, there has been progress in the area of the verification of infinite state concurrent programs. There can be several sources of infiniteness. Relevant to this thesis are recent model checking techniques developed at LiU that can automatically establish correctness for programs manipulating variables that range over infinite domains and spawning arbitrarily many threads, which can synchronize using shared variables, barriers, semaphores, etc. These techniques resulted in the tool PACMAN for the verification of multi-threaded programs. The aim of this thesis is to extract analyzable models from multi-threaded C programs, in order to use them for verifying the programs that they describe, by using PACMAN. In addition, we augment the C programming language to allow the possibility of expressing some important concepts of multi-threaded programs, such as non-determinism, atomicity, etc., with the use of the traditional C syntax. In a following step, we target PACMAN's input format, in order to verify our extracted models. Such verification engines usually accept as input the description of a multi-threaded program expressed in some modeling language. We therefore translate a minimal subset of the C programming language, augmented to effectively describe a multi-threaded program, to PACMAN's input format and then pass the description to the engine. In the context of this thesis, we have successfully defined a set of annotations for the C programming language, in order to assist the description of multi-threaded programs.
We have implemented a tool that effectively translates annotated C code into the modeling language of PACMAN. The output of the tool is later passed to the verification engine. As a result, we have contributed to the automation of verifying multi-threaded C programs.
APA, Harvard, Vancouver, ISO, and other styles
8

Soudamini, Jidesh. "LIFTED MULTIRELATIONS AND PROGRAM SEMANTICS." Kent State University / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=kent1164077672.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lin, Jason. "WebSearch: A configurable parallel multi-search web browser." CSUSB ScholarWorks, 1999. https://scholarworks.lib.csusb.edu/etd-project/1948.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yin, Tzu-Hsiao. "Multi-warehouse inventory control system." CSUSB ScholarWorks, 2006. https://scholarworks.lib.csusb.edu/etd-project/3102.

Full text
Abstract:
The thesis discusses the development of the Multi-Warehouse Inventory Control System (MWICS), a uniquely designed web application that targets membership-based food wholesalers. The main goal of MWICS is to provide real-time inventory control across all warehouses and present them as if they were a single warehouse. The program consists of three main components: a user account management sub-system, a product and purchase management sub-system, and a warehouse inventory management sub-system. User interfaces are constructed primarily in HTML, PHP, and JavaScript. MySQL is used to add, access, and process data.
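The "many warehouses, one logical inventory" idea at the core of MWICS can be sketched minimally (illustrative data model only; the thesis implements this in PHP over a MySQL schema, where the same query would presumably be a SUM with GROUP BY over a per-warehouse stock table).

```python
# Minimal sketch of presenting several warehouses as a single logical
# inventory. Warehouse names and SKUs are invented for illustration.

inventory = {
    "warehouse_a": {"rice_25lb": 40, "oil_1gal": 12},
    "warehouse_b": {"rice_25lb": 15, "flour_50lb": 30},
}


def combined_stock(sku):
    """Total stock of a SKU, as if all warehouses were one warehouse."""
    return sum(stock.get(sku, 0) for stock in inventory.values())


print(combined_stock("rice_25lb"))  # 55
```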
APA, Harvard, Vancouver, ISO, and other styles
11

Cornwall, Maxwell W. "MEEBS: a model for multi-echelon evaluation by simulation." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA237099.

Full text
Abstract:
Thesis (M.S. in Management)--Naval Postgraduate School, June 1990.
Thesis Advisor(s): McMasters, Alan W. ; Bailey, Michael P. "June 1990." Description based on signature page as viewed on October 21, 2009. DTIC Identifier(s): Computerized simulation, logistics management. Author(s) subject terms: Multi-echelon, simulation, SLAM II, models. Includes bibliographical references (p. 142-147). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
12

Samson, Rodelyn Reyes. "A multi-agent architecture for internet distributed computing system." CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2408.

Full text
Abstract:
This thesis presents a taxonomy of agent-based distributed computing systems. Based on this taxonomy, the design, implementation, analysis and distribution protocol of a multi-agent architecture for an internet-based distributed computing system were developed. A prototype of the designed architecture was implemented on Spider III using the IBM Aglets software development kit (ASDK 2.0) and the Java language.
APA, Harvard, Vancouver, ISO, and other styles
13

Spyrou, Dimitrios. "A new framework for software visualization: a multi-layer approach." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Sep%5FSpyrou.pdf.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, September 2006.
Thesis Advisor(s): Thomas Wu Otani. "September 2006." Includes bibliographical references (p. 117-128). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
14

Sagal, Ellen Jean 1954. "An object oriented approach to finite element analysis and multi-body dynamic analysis program designs." Thesis, The University of Arizona, 1993. http://hdl.handle.net/10150/278289.

Full text
Abstract:
Procedurally-oriented computer programs used to perform finite element and multibody dynamics analyses are difficult to understand, use, and modify. A new approach, object-oriented programming, was used to develop a finite element code that is easier to apply, understand, and modify. Object-oriented code is easier to understand, as the characteristics and operations associated with a physical phenomenon are grouped in a class whose structure closely parallels the modeled entity. Elements, bodies, joints, and mechanisms are modeled as classes. Program application is facilitated by a hierarchy of class structure. Manipulation of higher-level body and mechanism class types directs the complicated, lower-level code of element calculations. Lower-level code is hidden in an object library, resulting in a shorter, simpler driver program for an analysis. Modification and expansion of programs is easily accomplished through object-oriented language features such as modularization of code into classes and overloaded functions. Body and element abstract base classes provide "templates" for creation of new type classes used to develop additional analyses.
APA, Harvard, Vancouver, ISO, and other styles
15

Glimmerfors, Tobias, and Simon Olander Ålund. "Modelling Concurrent Programs as Multi-Player Games of Imperfect Information against Nature." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-280337.

Full text
Abstract:
Concurrent programs exist all around us. Whether someone is liking a photo on a social media platform or making a bank transaction, concurrency is acting behind the scenes. Because concurrent programs have multiple Users, or Threads, a range of different problems can emerge. Some of these problems stem from the limited knowledge of each Thread. Similar to how people use their knowledge to make decisions in everyday life, a Thread's knowledge of a program's state can aid the decision of which action to take. An interesting question is thus whether we can model the knowledge of a concurrent system in order to aid the execution of instructions. In this report we investigate the plausibility of modelling problematic concurrent programs as games of imperfect information against nature. Successfully modelled programs are analysed from the perspective of each Thread's distributed knowledge by applying Multi-Player Knowledge Based Subset Construction (MKBSC). In this report we have successfully modelled three different programs that each handle a concurrency-related problem. We have also shown how we can synthesise winning strategies from the knowledge of each Player for two out of three game models.
APA, Harvard, Vancouver, ISO, and other styles
16

Maddock, Thomas III, and Laurel J. Lacher. "MODRSP: a program to calculate drawdown, velocity, storage and capture response functions for multi-aquifer systems." Department of Hydrology and Water Resources, University of Arizona (Tucson, AZ), 1991. http://hdl.handle.net/10150/620142.

Full text
Abstract:
MODRSP is a program used for calculating drawdown, velocity, storage losses and capture response functions for multi-aquifer ground-water flow systems. Capture is defined as the sum of the increase in aquifer recharge and decrease in aquifer discharge as a result of an applied stress from pumping [Bredehoeft et al., 1982]. The capture phenomena treated by MODRSP are stream-aquifer leakance, reduction of evapotranspiration losses, leakance from adjacent aquifers, flows to and from prescribed head boundaries and increases or decreases in natural recharge or discharge from head dependent boundaries. The response functions are independent of the magnitude of the stresses and are dependent on the type of partial differential equation, the boundary and initial conditions and the parameters thereof, and the spatial and temporal location of stresses. The aquifers modeled may have irregular-shaped areal boundaries and non-homogeneous transmissive and storage qualities. For regional aquifers, the stresses are generally pumpages from wells. The utility of response functions arises from their capacity to be embedded in management models. The management models consist of a mathematical expression of a criterion to measure preference, and sets of constraints which act to limit the preferred actions. The response functions are incorporated into constraints that couple the hydrologic system with the management system (Maddock, 1972). MODRSP is a modification of MODFLOW (McDonald and Harbaugh, 1984, 1988). MODRSP uses many of the data input structures of MODFLOW, but there are major differences between the two programs. The differences are discussed in Chapters 4 and 5. An abbreviated theoretical development is presented in Chapter 2; a more complete theoretical development may be found in Maddock and Lacher (1991). The finite difference technique discussion presented in Chapter 3 is a synopsis of that covered more completely in McDonald and Harbaugh (1988).
Subprogram organization is presented in Chapter 4 with the data requirements explained in Chapter 5. Chapter 6 contains three example applications of MODRSP.
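The key property the abstract states, that response functions are independent of the magnitude of the stresses, follows from linearity of the governing flow equations, so a response computed for a unit stress can be scaled and summed. This sketch (with invented values) shows that superposition idea, which is what lets response functions be embedded as linear constraints in management models.

```python
# Superposition of unit response functions: drawdown at an observation
# point is the sum of each well's unit response scaled by its pumping
# rate. The response values below are invented for illustration.

# unit_response[well][t]: drawdown per unit pumping rate at time step t
unit_response = {
    "well_1": [0.00, 0.02, 0.05, 0.08],
    "well_2": [0.00, 0.01, 0.03, 0.06],
}


def drawdown(rates, t):
    """Total drawdown at time step t for given steady pumping rates."""
    return sum(rates[well] * unit_response[well][t] for well in rates)


print(drawdown({"well_1": 100.0, "well_2": 50.0}, 3))  # 100*0.08 + 50*0.06
```

Doubling a rate doubles that well's contribution, so the same precomputed responses serve any pumping schedule a management model proposes.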
APA, Harvard, Vancouver, ISO, and other styles
17

Zibran, Minhaz Fahim, and University of Lethbridge Faculty of Arts and Science. "A multi-phase approach to university course timetabling." Thesis, Lethbridge, Alta. : University of Lethbridge, Faculty of Arts and Science, 2007, 2007. http://hdl.handle.net/10133/633.

Full text
Abstract:
Course timetabling is a well-known constraint satisfaction optimization (CSOP) problem, which needs to be solved in educational institutions regularly. Unfortunately, this course timetabling problem is known to be NP-complete [7, 39]. This M.Sc. thesis presents a multi-phase approach to solve the university-level course timetabling problem. We decompose the problem into several sub-problems with reduced complexity, which are solved in separate phases. In phase 1a we assign lectures to professors, and phase 1b assigns labs and tutorials to academic assistants and graduate assistants. Phase 2 assigns each lecture to one of the two day-sequences (Monday-Wednesday-Friday or Tuesday-Thursday). In phase 3, lectures of each single day-sequence are then assigned to time-slots. Finally, in phase 4, labs and tutorials are assigned to days and time-slots. This decomposition allows the use of different techniques as appropriate to solve different phases. Currently different phases are solved using constraint programming and integer linear programming. The multi-phase architecture with the graphical user interface allows users to customize constraints as well as to generate new solutions that may incorporate partial solutions from previously generated feasible solutions.
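The phase decomposition above can be sketched as a pipeline in which each phase consumes the previous phase's partial assignment. This toy version replaces the thesis's constraint-programming and integer-programming solvers with trivial heuristics, and the course names and slots are invented; it only shows how the sub-problems chain together.

```python
# Toy two-stage slice of the multi-phase timetabling pipeline:
# phase 2 splits lectures across the two day-sequences, phase 3 then
# assigns each sequence's lectures to distinct time-slots.

lectures = ["CS101", "CS201", "MATH150", "STAT200"]


def phase2_day_sequences(lectures):
    """Toy phase 2: alternate lectures between the two day-sequences."""
    seqs = {"MWF": [], "TR": []}
    for i, lec in enumerate(lectures):
        seqs["MWF" if i % 2 == 0 else "TR"].append(lec)
    return seqs


def phase3_time_slots(seqs, slots=("09:00", "10:00", "11:00")):
    """Toy phase 3: give each lecture in a sequence a distinct slot."""
    timetable = {}
    for seq, lecs in seqs.items():
        for slot, lec in zip(slots, lecs):
            timetable[lec] = (seq, slot)
    return timetable


print(phase3_time_slots(phase2_day_sequences(lectures)))
```

Because each phase fixes part of the solution, later phases search a much smaller space, which is the complexity reduction the decomposition is after.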
ix, 117 leaves ; 29 cm
APA, Harvard, Vancouver, ISO, and other styles
18

Farrenkopf, Thomas. "Applying semantic technologies to multi-agent models in the context of business simulations." Thesis, Edinburgh Napier University, 2017. http://researchrepository.napier.ac.uk/Output/1033149.

Full text
Abstract:
Agent-based simulations are an effective simulation technique that can flexibly be applied to real-world business problems. By integrating such simulations into business games, they become a widely accepted educational instrument in the context of business training. Not only can they be used to train standard behaviour in training scenarios but they can also be used for open experimentation to discover structure in complex contexts (e.g. complex adaptive systems) and to verify behaviours that have been predicted on the basis of theoretical considerations. Traditional modelling techniques are built on mathematical models consisting of differential or difference equations (e.g. the well-known system dynamics approach). However, individual behaviour is not visible in these equations. This problem is addressed by using software agents to simulate individuals and to model their actions in response to external stimuli. To be effective, business training tools have to provide sufficiently realistic models of real-world aspects. Ideally, system effects on a macroscopic level are caused by behaviour of system components on a more microscopic level. For instance, in modelling market mechanisms, market participants can explicitly be modelled as agents with individual behaviour and personal goals. Agents can communicate and act on the basis of what they know and which communication acts they perform. The evolution of the market then depends on the actions of the participants directly and not on abstract mathematical expressions. Generally, agent-based modelling is a challenging task when modelling knowledge and behaviour. With the rise of the so-called semantic web, ontologies have become popular, allowing the representation of knowledge using standardised formal languages which can be made available to agents acting in a simulation.
However, the combination of agent-based systems with ontologies has not yet been researched sufficiently, because both concepts (web ontology languages and agent oriented programming languages) have been developed independently and the link has not yet been built adequately. Using ontologies as a knowledge base allows access to powerful standardised inference engines that offer leverage for the decision process of the agent. Agents can then determine their actions in accordance with this knowledge. To model agents using ontologies creates a new perspective for multi-agent simulation scenarios as programming details are reduced and a separation of modelling aspects from coding details is promising as business simulation scenarios can be set up with a reduced development effort. This thesis focuses on how ontologies can be integrated utilising the agent framework Jadex. A basic architecture with layered ontologies and its integration into the belief-desire-intention (BDI) agent model is presented. The abstract level of the approach guarantees applicability to different simulation scenarios which can be modelled by creating appropriate ontologies. Examples are based upon the simulation of market mechanisms within the context of different industries. The approach is implemented in the integrated simulation environment AGADE which incorporates agent-based and semantic technologies. Simulations for different scenarios that model typical market scenarios are presented.
APA, Harvard, Vancouver, ISO, and other styles
19

Hynes, Sean E. "Multi-agent simulations (MAS) for assessing massive sensor coverage and deployment." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03sep%5FHynes.pdf.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, September 2003.
Thesis advisor(s): Neil C. Rowe, Curtis Blais, Don Brutzman. Includes bibliographical references (p. 57-62). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
20

Van, Valkenburgh Kevin. "Measuring and Improving the Potential Parallelism of Sequential Java Programs." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1250594496.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Gu, Pei. "Prototyping the simulation of a gate level logic application program interface (API) on an explicit-multi-threaded (XMT) computer." College Park, Md. : University of Maryland, 2005. http://hdl.handle.net/1903/2626.

Full text
Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2005.
Thesis research directed by: Electrical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
22

Herre, Heinrich, and Axel Hummel. "A paraconsistent semantics for generalized logic programs." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4149/.

Full text
Abstract:
We propose a paraconsistent declarative semantics of possibly inconsistent generalized logic programs which allows for arbitrary formulas in the body and in the head of a rule (i.e. does not depend on the presence of any specific connective, such as negation(-as-failure), nor on any specific syntax of rules). For consistent generalized logic programs this semantics coincides with the stable generated models introduced in [HW97], and for normal logic programs it yields the stable models in the sense of [GL88].
APA, Harvard, Vancouver, ISO, and other styles
23

Simon, Scott James. "The recursive multi-threaded software life-cycle." CSUSB ScholarWorks, 1997. https://scholarworks.lib.csusb.edu/etd-project/1306.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Ekbom, Andreas. "Studium av Othellospelande program : Design, algoritmer och implementation." Thesis, Linköping University, Department of Computer and Information Science, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2381.

Full text
Abstract:

It can hardly have escaped anyone's notice that "smart" board-game-playing computer programs have become much better over the last few decades. By board games I mean games such as Go, Othello, Backgammon and Chess. Today, programs running on a regular PC play better than most people. What makes these programs so good? How can a computer be taught to play a game as complex as Othello at a level where no human stands a chance of winning? In this thesis I will try to explain the mechanisms behind a top-level Othello-playing program. I have also implemented my own Othello-playing program, which I used as a test application to try out different search methods, methods for increasing execution speed, and techniques for increasing playing strength. I will present empirical data in which I evaluate and compare several other programs with my own.

APA, Harvard, Vancouver, ISO, and other styles
25

Pan, Hongqi 1961. "Fuzzy multi-mode resource-constrained project scheduling." Monash University, School of Business Systems, 2003. http://arrow.monash.edu.au/hdl/1959.1/5735.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Bui, Vinh Information Technology &amp Electrical Engineering Australian Defence Force Academy UNSW. "A framework for improving internet end-to-end performance and availability using multi-path overlay networks." Awarded by:University of New South Wales - Australian Defence Force Academy, 2008. http://handle.unsw.edu.au/1959.4/39735.

Full text
Abstract:
Application-layer overlay networks have recently emerged as a promising platform to deploy additional services over the Internet. A virtual network of overlay nodes can be used to regulate traffic flows of an underlay network, without modifying the underlay network infrastructure. As a result, an opportunity has arisen to redeem the inefficiency of IP routing and to improve end-to-end performance of the Internet by routing traffic over multiple overlay paths. However, to achieve high end-to-end performance over the Internet by means of overlay networks, a number of challenging issues, including limited knowledge of the underlay network characteristics, fluctuations of overlay path performance, and interactions between overlay and underlay traffic, must be addressed. This thesis provides solutions to some of these issues by proposing a framework to construct a multi-path overlay architecture for improving Internet end-to-end performance and availability. The framework is formed by posing a series of questions, including i) how to model and forecast overlay path performance characteristics; ii) how to route traffic optimally over multiple overlay paths; and iii) how to place overlay nodes to maximally leverage the Internet resource redundancy, while minimizing the deployment cost. To answer those research questions, analytical and experimental studies have been conducted. As a result, i) a loss model and a hybrid forecasting technique are proposed to capture, and subsequently predict, end-to-end loss/delay behaviors; with this predictive capability, overlay agents can, for example, select overlay paths that potentially offer good performance and reliability; ii) to take full advantage of the predictive capability and the availability of multiple paths, a Markov Decision Process based multi-path traffic controller is developed, which can route traffic simultaneously over multiple overlay paths to optimize some performance measures, e.g. 
average loss rate and latency. As there can be multiple overlay controllers competing for common resources by making selfish decisions, which could jeopardize network performance, game theory is applied to turn the competition into cooperation; as a consequence, network performance is improved; iii) furthermore, to facilitate the deployment of the multi-path overlay architecture, a multi-objective genetic-based algorithm is introduced to place overlay nodes so as to attain a high level of overlay path diversity while minimizing the number of overlay nodes to be deployed, and thus reducing the deployment cost. The findings of this thesis indicate that the use of multiple overlay paths can substantially improve end-to-end performance. They uncover the potential of multi-path application-layer overlay networks as an architecture for achieving high end-to-end performance and availability over the Internet.
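The idea of an MDP-based controller choosing among overlay paths can be illustrated with a toy model. The states (each path Good or Bad), loss rates, flip probabilities, and discount factor below are all invented for illustration and are unrelated to the thesis's actual controller:

```python
import itertools

# Toy MDP: two overlay paths, each Good ("G") or Bad ("B"), flipping
# independently with probability 0.2 per step. Action = which path carries
# the traffic; reward = 1 - loss rate of the chosen path's current state.
LOSS = {"G": 0.01, "B": 0.30}
FLIP = 0.2
STATES = list(itertools.product("GB", repeat=2))
ACTIONS = (0, 1)

def transition_prob(s, s2):
    """Probability of moving from joint path-state s to s2."""
    p = 1.0
    for a, b in zip(s, s2):
        p *= (1 - FLIP) if a == b else FLIP
    return p

def q_value(s, a, V, gamma):
    return (1 - LOSS[s[a]]) + gamma * sum(
        transition_prob(s, s2) * V[s2] for s2 in STATES)

def value_iteration(gamma=0.9, iters=200):
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        V = {s: max(q_value(s, a, V, gamma) for a in ACTIONS) for s in STATES}
    policy = {s: max(ACTIONS, key=lambda a: q_value(s, a, V, gamma))
              for s in STATES}
    return V, policy

V, policy = value_iteration()
print(policy[("G", "B")], policy[("B", "G")])  # → 0 1 (send on the Good path)
```

Because path transitions in this toy do not depend on the action, the optimal policy reduces to greedily picking the lower-loss path; a richer model (e.g. action-dependent congestion) would make the lookahead matter.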
APA, Harvard, Vancouver, ISO, and other styles
27

Zhu, Weirong. "Efficient synchronization for a large-scale multi-core chip architecture." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 206 p, 2007. http://proquest.umi.com/pqdweb?did=1362532791&sid=27&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Bhowmik, Rajdeep. "Optimizing XML-based grid services on multi-core processors using an emulation framework." Diss., Online access via UMI:, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
29

Martin, Cheryl Elizabeth Duty. "Adaptive decision-making frameworks for multi-agent systems." Access restricted to users with UT Austin EID, 2001. http://wwwlib.umi.com/cr/utexas/fullcit?p3023557.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

葉賜權 and Chee-kuen Yip. "Machine recognition of multi-font printed Chinese Characters." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1990. http://hub.hku.hk/bib/B31210120.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Venkataramani, Guru Prasadh V. "Low-cost and efficient architectural support for correctness and performance debugging." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31747.

Full text
Abstract:
Thesis (Ph.D)--Computing, Georgia Institute of Technology, 2010.
Committee Chair: Prvulovic, Milos; Committee Member: Hughes, Christopher J.; Committee Member: Kim, Hyesoon; Committee Member: Lee, Hsien-Hsin S.; Committee Member: Loh, Gabriel H. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
32

Glass, Kevin Robert. "Automating the conversion of natural language fiction to multi-modal 3D animated virtual environments." Thesis, Rhodes University, 2009. http://hdl.handle.net/10962/d1006518.

Full text
Abstract:
Popular fiction books describe rich visual environments that contain characters, objects, and behaviour. This research develops automated processes for converting text sourced from fiction books into animated virtual environments and multi-modal films. This involves the analysis of unrestricted natural language fiction to identify appropriate visual descriptions, and the interpretation of the identified descriptions for constructing animated 3D virtual environments. The goal of the text analysis stage is the creation of annotated fiction text, which identifies visual descriptions in a structured manner. A hierarchical rule-based learning system is created that induces patterns from example annotations provided by a human, and uses these for the creation of additional annotations. Patterns are expressed as tree structures that abstract the input text on different levels according to structural (token, sentence) and syntactic (parts-of-speech, syntactic function) categories. Patterns are generalized using pair-wise merging, where dissimilar sub-trees are replaced with wild-cards. The result is a small set of generalized patterns that are able to create correct annotations. A set of generalized patterns represents a model of an annotator's mental process regarding a particular annotation category. Annotated text is interpreted automatically for constructing detailed scene descriptions. This includes identifying which scenes to visualize, and identifying the contents and behaviour in each scene. Entity behaviour in a 3D virtual environment is formulated using time-based constraints that are automatically derived from annotations. Constraints are expressed as non-linear symbolic functions that restrict the trajectories of a pair of entities over a continuous interval of time. Solutions to these constraints specify precise behaviour. 
We create an innovative quantified constraint optimizer for locating sound solutions, which uses interval arithmetic for treating time and space as contiguous quantities. This optimization method uses a technique of constraint relaxation and tightening that allows solution approximations to be located where constraint systems are inconsistent (an ability not previously explored in interval-based quantified constraint solving). 3D virtual environments are populated by automatically selecting geometric models or procedural geometry-creation methods from a library. 3D models are animated according to trajectories derived from constraint solutions. The final animated film is sequenced using a range of modalities including animated 3D graphics, textual subtitles, audio narrations, and foleys. Hierarchical rule-based learning is evaluated over a range of annotation categories. Models are induced for different categories of annotation without modifying the core learning algorithms, and these models are shown to be applicable to different types of books. Models are induced automatically with accuracies ranging between 51.4% and 90.4%, depending on the category. We show that models are refined if further examples are provided, and this supports a boot-strapping process for training the learning mechanism. The task of interpreting annotated fiction text and populating 3D virtual environments is successfully automated using our described techniques. Detailed scene descriptions are created accurately, where between 83% and 96% of the automatically generated descriptions require no manual modification (depending on the type of description). The interval-based quantified constraint optimizer fully automates the behaviour specification process. Sample animated multi-modal 3D films are created using extracts from fiction books that are unrestricted in terms of complexity or subject matter (unlike existing text-to-graphics systems). 
These examples demonstrate that: behaviour is visualized that corresponds to the descriptions in the original text; appropriate geometry is selected (or created) for visualizing entities in each scene; sequences of scenes are created for a film-like presentation of the story; and that multiple modalities are combined to create a coherent multi-modal representation of the fiction text. This research demonstrates that visual descriptions in fiction text can be automatically identified, and that these descriptions can be converted into corresponding animated virtual environments. Unlike existing text-to-graphics systems, we describe techniques that function over unrestricted natural language text and perform the conversion process without the need for manually constructed repositories of world knowledge. This enables the rapid production of animated 3D virtual environments, allowing the human designer to focus on creative aspects.
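The pattern generalization step described above (pair-wise merging in which dissimilar parts are replaced with wild-cards) can be sketched on flat token sequences. This is a hypothetical, simplified analogue of the thesis's tree-structured patterns, written only to illustrate the merging idea:

```python
WILDCARD = "*"

def merge(p1, p2):
    """Pair-wise merge two equal-length patterns: keep tokens that agree,
    replace dissimilar positions with a wildcard (flat-sequence analogue
    of the sub-tree replacement described in the abstract)."""
    if len(p1) != len(p2):
        return None  # a tree version would align by structure instead
    return [a if a == b else WILDCARD for a, b in zip(p1, p2)]

def matches(pattern, tokens):
    """A pattern annotates a token sequence if every position agrees or is a wildcard."""
    return len(pattern) == len(tokens) and all(
        p == WILDCARD or p == t for p, t in zip(pattern, tokens))

# Two human-provided example annotations generalise to one pattern...
ex1 = ["the", "red", "door", "opened"]
ex2 = ["the", "old", "door", "creaked"]
general = merge(ex1, ex2)
print(general)  # → ['the', '*', 'door', '*']

# ...which now creates annotations on unseen text.
print(matches(general, ["the", "heavy", "door", "slammed"]))  # → True
```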
APA, Harvard, Vancouver, ISO, and other styles
33

Dal, Taylan. "A dynamic behavior modeler for future inclusion into a multi-tasking motion planning system for material handling in construction." Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-08142009-040314/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Ruan, Jianhua, Han-Shen Yuh, and Koping Wang. "Spider III: A multi-agent-based distributed computing system." CSUSB ScholarWorks, 2002. https://scholarworks.lib.csusb.edu/etd-project/2249.

Full text
Abstract:
The project, Spider III, presents the architecture and protocol of a multi-agent-based Internet distributed computing system, which provides a convenient development and execution environment for transparent task distribution, load balancing, and fault tolerance. Spider is an ongoing distributed computing project in the Department of Computer Science, California State University, San Bernardino. It was first proposed as an object-oriented distributed system by Han-Shen Yuh in his master's thesis in 1997. It has been further developed by Koping Wang in his master's project, where he made a large contribution and implemented the Spider II system.
APA, Harvard, Vancouver, ISO, and other styles
35

Eikenberry, Blake D. "Guidance and navigation software architecture design for the Autonomous Multi-Agent Physically Interacting Spacecraft (AMPHIS) test bed." Thesis, Monterey California. Naval Postgraduate School, 2006. http://hdl.handle.net/10945/2349.

Full text
Abstract:
The Autonomous Multi-Agent Physically Interacting Spacecraft (AMPHIS) test bed examines the problem of multiple spacecraft interacting at close proximity. This thesis contributes to this on-going research by addressing the development of the software architecture for the AMPHIS spacecraft simulator robots and the implementation of a Light Detection and Ranging (LIDAR) unit to be used for state estimation and navigation of the prototype robot. The software modules developed include: user input for simple user tasking; user output for data analysis and animation; external data links for sensors and actuators; and guidance, navigation and control (GNC). The software was developed in the SIMULINK/MATLAB environment as a consistent library that serves as a stand-alone simulator, as actual hardware control on the robot prototype, and as any combination of the two. In particular, the software enables hardware-in-the-loop testing to be conducted for any portion of the system with reliable simulation of all other portions of the system. The modularity of this solution facilitates fast proof-of-concept validation for the GNC algorithms. Two sample guidance and control algorithms were developed and are demonstrated here: a direct calculus of variations method and an artificial potential function guidance method. State estimation methods are discussed, including state estimation from hardware sensors, pose estimation strategies from various vision sensors, and the implementation of a LIDAR unit for state estimation. Finally, the relative motion of the AMPHIS test bed is compared to the relative motion on orbit, including how to simulate the on-orbit behavior using Hill's equations.
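Artificial potential function guidance, one of the two sample algorithms mentioned above, is a standard technique: the vehicle descends an attractive potential toward the goal while repulsive potentials push it away from obstacles. The following self-contained sketch uses invented gains and geometry; it is not the thesis's SIMULINK implementation:

```python
import math

def apf_step(pos, goal, obstacles,
             k_att=1.0, k_rep=5.0, influence=2.0, step=0.05):
    """One fixed-length step along the negative gradient of an
    attractive-plus-repulsive potential field (Khatib-style)."""
    # Attractive force pulls straight toward the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive forces act only inside each obstacle's influence radius.
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-6 < d < influence:
            mag = k_rep * (1.0 / d - 1.0 / influence) / d**2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)

pos, goal = (0.0, 0.0), (10.0, 0.0)
obstacles = [(5.0, 0.2)]  # slightly off the direct line, so the field steers around it
reached = False
for _ in range(2000):
    pos = apf_step(pos, goal, obstacles)
    if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) < 0.1:
        reached = True
        break
print("reached goal:", reached)
```

Potential fields are cheap and reactive, which suits the proof-of-concept role described in the abstract, but they are known to admit local minima when an obstacle sits symmetrically between vehicle and goal.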
APA, Harvard, Vancouver, ISO, and other styles
36

Taylor, Lawrence Clifford. "A Comparative Study on the Impact of a Computer Enhanced Reading Program on First Grade African American Males in an Urban School District in Southeastern Virginia." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/29513.

Full text
Abstract:
This study examines the effects of the Breakthrough to Literacy (BTL) reading program on first grade African American males in two urban elementary schools in southeastern Virginia. The BTL computer enhanced reading program includes computer assisted instruction as a major component, which research from the National Reading Panel (NRP) indicates is beneficial in the education of African American males (NRP, 2000). This is a comparative study utilizing quantitative methodology to report the reading outcomes of African American males in grade one and their teachers' perceptions of the BTL program. The study measures reading outcomes as well as teachers' perceptions of the BTL program. The treatment group consisted of the first grade populations from schools A and B, who received the BTL treatment in kindergarten (2006-2007) and first grade (2007-2008). The treatment group was compared to schools C and D, the control group, who received the BTL treatment in kindergarten (2006-2007) only. The data were gathered to determine if there were mean gains for the treatment and control groups through pre- and posttests. Frequency, mean, and standard deviation were calculated for each variable. Inferential statistics were used to determine mean differences and comparisons between both groups' reading results. To determine if there was a difference in the reading outcomes of African American males who received the BTL treatment as compared to other racial/ethnic groups and genders, ANOVAs were utilized. Overall results indicated higher level performance by the treatment group. The study also incorporated survey methodology to determine the utility of the BTL program on first grade students in the year 2007-2008 from a teacher's perspective. The teachers in the BTL treatment group were administered the Children's Software Evaluation Instrument Surveys (Children's Software Revue, 2008). 
On a 5-point Likert scale, teachers rated the overall value of the BTL program as good (overall rating 4.0). The teachers also gave overall ratings of good (4.0) and excellent (5.0) in the following areas: Childproof; Ease of Use; Entertaining; Design Feature; and Educational.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
37

Poti, Allison Tamara S. "Building a multi-tier enterprise system utilizing visual Basic, MTS, ASP, and MS SQL." Virtual Press, 2001. http://liblink.bsu.edu/uhtbin/catkey/1221293.

Full text
Abstract:
Multi-tier enterprise systems consist of more than two distributed tiers. The design of multi-tier systems is considerably more involved than that of two-tier systems. Not all systems should be designed as multi-tier, but if the decision to build a multi-tier system is made, there are benefits to this type of system design. CSCources is a system that tracks computer science course information. The requirements of this system indicate that it should be a multi-tier system. This system has three tiers: client, business and data. Microsoft tools are used, such as Visual Basic (VB), which was used to build the client tier that physically resides on the client machine. VB is also used to create the business tier. This tier consists of the business layer and the data layer. The business layer contains most of the business logic for the system. The data layer communicates with the data tier. Microsoft SQL Server (MS SQL) is used for the data store. The database contains several tables and stored procedures. The stored procedures are used to add, edit, update and delete records in the database. Microsoft Transaction Server (MTS) is used to control modifications to the database. The transaction and security features available in the MTS environment are used. The business tier and data tier may or may not reside on the same physical computer or server. Active Server Pages (ASP) were built that access the business tier to retrieve the needed information for display on a web page. The cost of designing a distributed system, building a distributed system, upgrades to the system and error handling are examined.
Ball State University, Muncie, IN 47306
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
38

Ahciarliu, Cantemir M. "Multi-agent architecture for integrating remote databases and expert sources with situational awareness tools : humanitarian operations scenario /." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Mar%5FAhciarliu.pdf.

Full text
Abstract:
Thesis (M.S. in Information Technology Management)--Naval Postgraduate School, March 2004.
Thesis advisor(s): Alex Bordetsky, Glenn Cook. Includes bibliographical references (p. 77-79). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
39

Khizakanchery, Natarajan Surya Narayanan. "Modeling performance of serial and parallel sections of multi-threaded programs in many-core era." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S015/document.

Full text
Abstract:
Ce travail a été effectué dans le contexte d'un projet financé par l'ERC, Defying Amdahl's Law (DAL), dont l'objectif est d'explorer les techniques micro-architecturales améliorant la performance des processeurs multi-cœurs futurs. Le projet prévoit que malgré les efforts investis dans le développement de programmes parallèles, la majorité des codes auront toujours une quantité signifiante de code séquentiel. Pour cette raison, il est primordial de continuer à améliorer la performance des sections séquentielles des-dits programmes. Le travail de recherche de cette thèse porte principalement sur l'étude des différences entre les sections parallèles et les sections séquentielles de programmes multithreadés (MT) existants. L'exploration de l'espace de conception des futurs processeurs multi-cœurs est aussi traitée, tout en gardant à l'esprit les exigences concernant ces deux types de sections ainsi que le compromis performance-surface
This thesis work is done in the general context of the ERC-funded Defying Amdahl's Law (DAL) project, which aims at exploring the micro-architectural techniques that will enable high performance on future many-core processors. The project envisions that despite future huge investments in the development of parallel applications and in porting them to parallel architectures, most applications will still exhibit a significant amount of sequential code sections and, hence, we should still focus on improving the performance of the serial sections of the application. In this thesis, the research work primarily focuses on studying the differences between the parallel and serial sections of existing multi-threaded (MT) programs and on exploring the design space with respect to the processor core requirements of the serial and parallel sections in future many-cores, with the area-performance tradeoff as a primary goal.
APA, Harvard, Vancouver, ISO, and other styles
40

Cunningham, Alexander G. "Scalable online decentralized smoothing and mapping." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51848.

Full text
Abstract:
Many applications for field robots can benefit from large numbers of robots, especially applications where the objective is for the robots to cover or explore a region. A key enabling technology for robust autonomy in these teams of small and cheap robots is the development of collaborative perception to account for the shortcomings of the small and cheap sensors on the robots. In this dissertation, I present DDF-SAM to address the decentralized data fusion (DDF) inference problem with a smoothing and mapping (SAM) approach to single-robot mapping that is online, scalable and consistent while supporting a variety of sensing modalities. The DDF-SAM approach performs fully decentralized simultaneous localization and mapping in which robots choose a relevant subset of variables from their local map to share with neighbors. Each robot summarizes their local map to yield a density on exactly this chosen set of variables, and then distributes this summarized map to neighboring robots, allowing map information to propagate throughout the network. Each robot fuses summarized maps it receives to yield a map solution with an extended sensor horizon. I introduce two primary variations on DDF-SAM, one that uses a batch nonlinear constrained optimization procedure to combine maps, DDF-SAM 1.0, and one that uses an incremental solving approach for substantially faster performance, DDF-SAM 2.0. I validate these systems using a combination of real-world and simulated experiments. In addition, I evaluate design trade-offs for operations within DDF-SAM, with a focus on efficient approximate map summarization to minimize communication costs.
APA, Harvard, Vancouver, ISO, and other styles
41

Asimopoulos, George. "Hartley transform based algorithm for the qualitative and quantitative analysis of multi-component mixtures with the use of emission excitation matrices." Diss., This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-06062008-171404/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Fourie, Dehann. "Multi-modal and inertial sensor solutions for navigation-type factor graphs." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/114000.

Full text
Abstract:
Thesis: Ph. D., Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science; and the Woods Hole Oceanographic Institution), 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 335-357).
This thesis presents a sum-product inference algorithm for in-situ, nonparametric platform navigation called Multi-modal iSAM (incremental smoothing and mapping), for problems of thousands of variables. Our method tracks dominant modes in the marginal posteriors of all variables with minimal approximation error, while suppressing almost all low-likelihood modes (in a non-permanent manner) to save computation. The joint probability is described by a non-Gaussian factor graph model. Existing inference algorithms in simultaneous localization and mapping assume Gaussian measurement uncertainty, resulting in complex front-end processes that attempt to deal with non-Gaussian measurements. Existing robustness approaches work to remove "outlier" measurements, resulting in heuristics and the loss of valuable information. Tracking different hypotheses in the system has prohibitive computational cost, and low-likelihood hypotheses are permanently pruned. Our approach relaxes the Gaussian-only restriction, allowing the front-end to defer ambiguities (such as data association) until inference. Probabilistic consensus ensures dominant modes across all measurement information. Our approach propagates continuous beliefs on the Bayes (junction) tree, which is an efficient symbolic refactorization of the nonparametric factor graph, and approximates the underlying Chapman-Kolmogorov equations. Like the predecessor iSAM2 max-product algorithm [Kaess et al., IJRR 2012], we retain the Bayes tree incremental update property, which allows for tractable recycling of previous computations. Several non-Gaussian measurement likelihood models are introduced, such as ambiguous data association or highly non-Gaussian measurement modalities. In addition, in keeping with existing inertial navigation for dynamic platforms, we present a novel continuous-time inertial odometry residual function. 
Inertial odometry uses preintegration to seamlessly incorporate pure inertial sensor measurements into a factor graph, while supporting retroactive (dynamic) calibration of sensor biases. By centralizing our approach around a factor graph, with the aid of modern starved graph database techniques, concerns from different elements of the navigation ecosystem can be separated. We illustrate with practical examples how various sensing modalities can be combined into a common factor graph framework, such as: ambiguous loop closures; raw beam-formed acoustic measurements; inertial odometry; or conventional Gaussian-only likelihoods (parametric) to infer multi-modal marginal posterior belief estimates of system variables.
by Dehann Fourie.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
43

Lima, Weldson Queiroz de. "Um ambiente integrado para manipula??o de tr?fego multicast." Universidade Federal do Rio Grande do Norte, 2004. http://repositorio.ufrn.br:8080/jspui/handle/123456789/15274.

Full text
Abstract:
Made available in DSpace on 2014-12-17T14:55:32Z (GMT). No. of bitstreams: 1 WeldsonQL_capa_ate_pag12.pdf: 7162119 bytes, checksum: 5c9eb475de4851ecfa9f44218d55308a (MD5) Previous issue date: 2004-12-10
In the last two decades of the past century, following the consolidation of the Internet as the world-wide computer network, applications generating more robust data flows started to appear. The increasing use of videoconferencing stimulated the creation of a new form of point-to-multipoint transmission called IP Multicast. All companies working in software and hardware development for network videoconferencing adjusted their products and developed new solutions for the use of multicast. However, configuring these different solutions is not easy, especially when changes to the operating system are also required. Besides, the existing free tools have limited functions, and the current commercial solutions are heavily dependent on specific platforms. As IP Multicast technology matured and was included in all current operating systems, object-oriented programming languages developed classes able to handle multicast traffic. So, with the help of Java APIs for networking, databases and hypertext, it became possible to develop an Integrated Environment able to handle multicast traffic, which is the major objective of this work. This document describes the implementation of the above mentioned environment, which provides many functions to use and manage multicast traffic, functions which previously existed only in a limited way in a few, usually commercial, tools. This environment is useful to different kinds of users: ordinary users who want to join multimedia Internet sessions, as well as more advanced users such as engineers and network administrators who may need to monitor and handle multicast traffic.
Nas duas últimas décadas do século passado, com a consolidação da Internet como rede mundial de computadores, aplicações de fluxos mais robustos começaram a surgir. A crescente uso de videoconferências impulsionou a criação de uma forma de transmissão ponto-multiponto chamada Multicast IP. Todas as empresas que desenvolviam software e hardware para videoconferência adequaram seus produtos e criaram novas soluções para o uso do fluxo multicast. Entretanto, a configuração das diversas soluções não é trivial e, normalmente, alterações no sistema operacional precisam ser realizadas. Além disso, ferramentas gratuitas apresentam funcionalidades limitadas, e as soluções proprietárias encontradas na atualidade são muito dependentes de plataformas específicas. Com o amadurecimento da tecnologia Multicast IP e com sua inclusão em todos os sistemas operacionais atuais, as linguagens de programação desenvolveram classes capazes de manipular tráfego multicast. Com as APIs Java para redes, banco de dados e páginas Web, tornou-se possível a criação de um Ambiente Integrado capaz de manipular tráfego multicast, que se constitui na proposta central deste trabalho. Esse documento descreve então a implementação deste ambiente que agrega diversas funcionalidades para utilização e gerência de tráfego multicast, funcionalidades até então presentes de forma limitada em poucas e distintas ferramentas comummente proprietárias. O ambiente se adequa a diferentes perfis de usuário, no sentido de que pode ser usado por leigos em Engenharia de Redes, que desejem apenas participar de sessões de multimídia na Internet, como também por especialistas e administradores de rede que desejem monitorar e manipular o tráfego multicast
APA, Harvard, Vancouver, ISO, and other styles
44

Dunskus, Bertram V. "Single Function Agents and their Negotiation Behavior in Expert Systems." Digital WPI, 1999. https://digitalcommons.wpi.edu/etd-theses/1079.

Full text
Abstract:
"A Single Function Agent (SiFA) is a software agent with only one function, one point of view, and one target object on which to act. For example, an agent might be a critic (function) of material (target) from the point of view of cost. This research investigates the possibilities and implications of the SiFA concept, and analyzes the definition language, negotiation language and negotiation strategies of the agents. After defining a domain-independent set of agent types, we investigated negotiation, analyzing which pairs or groups of agents have reason to communicate and what information should pass between them, as well as what knowledge is needed to support the negotiation. A library for the CLIPS expert system shell was built, which allows development of SiFA-based expert systems from domain-independent templates. We present two such systems, one implemented for the domain of ceramic component material selection and the other (in development) for simple sailboat design. The effect of negotiation on the design process and its results are discussed, as well as directions for future research into SiFAs."
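The thesis implements SiFAs on top of the CLIPS expert-system shell; purely as an illustration of the function/target/viewpoint triple described in the abstract, a sketch in Java (all names hypothetical) might look like:

```java
/** Hedged sketch of the SiFA idea: one function, one target, one point of view. */
interface SingleFunctionAgent {
    String function();   // e.g. "critic"
    String target();     // e.g. "material"
    String viewpoint();  // e.g. "cost"
}

/** Illustrative critic agent: flags a material whose cost exceeds a budget. */
class CostCritic implements SingleFunctionAgent {
    private final double budget;
    CostCritic(double budget) { this.budget = budget; }
    public String function()  { return "critic"; }
    public String target()    { return "material"; }
    public String viewpoint() { return "cost"; }
    /** Returns true when the agent objects to the proposed material cost. */
    boolean critique(double materialCost) { return materialCost > budget; }
}

public class SifaSketch {
    public static void main(String[] args) {
        CostCritic critic = new CostCritic(100.0);
        System.out.println(critic.critique(150.0)); // true: over budget
        System.out.println(critic.critique(80.0));  // false: acceptable
    }
}
```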
45

Schutte, Jeffrey Scott. "Simultaneous multi-design point approach to gas turbine on-design cycle analysis for aircraft engines." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28169.

Full text
Abstract:
Thesis (M. S.)--Aerospace Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Mavris, Dimitri; Committee Member: Gaeta, Richard; Committee Member: German, Brian; Committee Member: Jones, Scott; Committee Member: Schrage, Daniel; Committee Member: Tai, Jimmy.
46

Sarpong, Boadu Mensah. "Column generation for bi-objective integer linear programs : application to bi-objective vehicle routing problems." Phd thesis, INSA de Toulouse, 2013. http://tel.archives-ouvertes.fr/tel-00919861.

Full text
Abstract:
Multi-objective optimization deals with solving problems in which several conflicting objectives (or criteria) are taken into account. Unlike single-objective optimization problems, a multi-objective problem does not have a single optimal value but rather a set of points called the "nondominated set". The lower and upper bounds of a multi-objective problem can likewise be described by sets. In practice, the variables used in multi-objective optimization often represent indivisible objects, and one then speaks of multi-objective integer problems. In order to obtain better bounds that can be used in the design of exact methods, some problems are formulated with an exponential number of decision variables, and these problems are solved by column generation. This thesis contributes to the study of the use of column generation in multi-objective integer linear programming. To this end, we study a bi-objective vehicle routing problem that can be seen as a generalization of several other vehicle routing problems. We propose mathematical formulations for this problem as well as techniques to speed up the computation of lower bounds by column generation. The subproblems that must be solved to compute the lower bounds share a similar structure, and we exploit this characteristic to treat some subproblems simultaneously rather than independently.
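As a generic sketch (not the thesis's specific formulation), the restricted master problem solved at each column-generation iteration, over a subset of columns, can be written as:

```latex
% Restricted master problem (RMP) over columns \Omega' \subseteq \Omega:
\begin{align}
  \min \;& \sum_{p \in \Omega'} c_p \lambda_p \\
  \text{s.t.} \;& \sum_{p \in \Omega'} a_{ip} \lambda_p \ge b_i,
      \quad i = 1, \dots, m, \\
  & \lambda_p \ge 0, \quad p \in \Omega'.
\end{align}
% Pricing: given duals \pi from the RMP, seek a column with negative reduced cost
%   \bar{c}_p = c_p - \sum_{i=1}^{m} \pi_i a_{ip};
% if no such column exists, the current LP bound is proven optimal.
```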
47

Guney, Murat Efe. "High-performance direct solution of finite element problems on multi-core processors." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34662.

Full text
Abstract:
A direct solution procedure is proposed and developed which exploits the parallelism available in current symmetric multiprocessing (SMP) multi-core processors. Several algorithms are proposed and developed to improve the performance of the direct solution of FE problems. A high-performance sparse direct solver is developed which allows experimentation with the newly developed and existing algorithms. The performance of the algorithms is investigated using a large set of FE problems, and operation-count estimates are developed to further assess the various algorithms. An out-of-core version of the solver is developed to reduce the memory requirements of the solution. I/O is performed asynchronously, without blocking the thread that makes the I/O request, which allows factorization and triangular-solution computations to overlap with I/O. The performance of the developed solver is demonstrated on a large number of test problems. A problem with nearly 10 million degrees of freedom is solved on a low-priced desktop computer using the out-of-core version of the direct solver. Furthermore, the developed solver usually outperforms a commonly used shared-memory solver.
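The overlap of computation with asynchronous I/O described above can be sketched with Java's `AsynchronousFileChannel` (the thesis's solver is not written in Java; the file name and "panel" framing here are illustrative assumptions):

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class OverlapIo {
    /** Starts an asynchronous write, runs compute work while the I/O is in
        flight, then returns the number of bytes written. */
    static long writeWhileComputing(Path file, byte[] data, Runnable compute) throws Exception {
        try (AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                file, StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            Future<Integer> pending = ch.write(ByteBuffer.wrap(data), 0); // returns immediately
            compute.run();        // overlap, e.g., factorizing the next panel
            return pending.get(); // block only when the write result is needed
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("panel", ".bin");
        byte[] panel = "factor panel".getBytes("UTF-8"); // stand-in for factor data
        long written = writeWhileComputing(tmp, panel, () -> { /* compute here */ });
        System.out.println(written == panel.length); // true
        Files.deleteIfExists(tmp);
    }
}
```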
48

Rehn-Sonigo, Veronika. "Multi-criteria Mapping and Scheduling of Workflow Applications onto Heterogeneous Platforms." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2009. http://tel.archives-ouvertes.fr/tel-00424118.

Full text
Abstract:
The work presented in this thesis deals with the mapping and scheduling of workflow applications onto heterogeneous platforms. In this context, we focus on three different types of applications:
Replica placement in hierarchical networks - In this type of application, several clients issue requests to a few servers, and the question is: where should replicas be placed in the network so that all requests can be processed? We discuss and compare several replica placement policies in hierarchical networks subject to server capacity, quality-of-service, and bandwidth constraints. The client requests are known a priori, while the number and placement of the servers are to be determined. The traditional approach in the literature is to force all requests of a client to be processed by the closest server in the hierarchical network. We introduce and study two new policies. A main contribution of this work is the evaluation of the impact of these new policies on the total replication cost. Another important goal is to assess the impact of server heterogeneity, from both a theoretical and a practical perspective. We establish several new complexity results and present several efficient polynomial-time heuristics.
Workflow applications - We consider workflow applications that can be expressed as linear graphs. An example of this type of application is digital image processing, where images are processed in steady state. Several antagonistic criteria must be optimized, such as throughput and latency (or a combination of the two), as well as latency and reliability (i.e., the probability that the computation succeeds). While simple polynomial algorithms can be found for fully homogeneous platforms, the problem becomes NP-hard when tackling heterogeneous platforms. We present a linear programming formulation for this latter problem. Moreover, we introduce several efficient polynomial-time bi-criteria heuristics, whose relative performance is evaluated through extensive simulations. In a case study, we present simulations and experimental results (implemented in MPI) for the application graph of the JPEG encoder on a compute cluster.
Complex streaming applications - We consider the execution of applications structured as operator trees, i.e., the steady-state application of one or more operator trees to multiple data objects that must be continuously updated at different locations in the network. A first goal is to provide the user with a set of processors to buy or rent so as to guarantee that the application's minimum steady-state throughput is achieved. We then extend our model to multiple applications: several concurrent applications run at the same time in a network, and we must ensure that all of them can reach their required throughput. Another contribution of this work is to provide complexity results for various instances of the problem. The third contribution is the design of several polynomial heuristics for both application models. A primary objective of the heuristics for concurrent applications is the reuse of intermediate results shared among different applications.
49

Gaskell, Gary Ian. "Integrating smart cards into Kerberos." Thesis, Queensland University of Technology, 1999. https://eprints.qut.edu.au/36848/1/36848_Gaskell_2000.pdf.

Full text
Abstract:
The aim of this thesis is to identify alternatives for the integration of smart cards into a classic Kerberos system. Some researchers have proposed specific solutions. Each proposal appeared to be limited and, hence, there was a need to identify what other approaches were available. It was identified that smart cards can be added to each of the interfaces of Kerberos (user to authentication server, user to ticket granting server and user to application server). It appears most appropriate to use smart cards in the user to Authentication Server interface. The user's workstation is trusted with application data and so it will be usually appropriate for the application session keys to also be trusted to the user's workstation. The smart card can be integrated into the user to Authentication Server messaging implementation so that the user's authentication information is never exposed to the workstation. Six options have been identified for the integration in this interface. Some of the concepts developed were prototyped in order to identify the practicality of the suggestions. An early beta release of Kerberos from Massachusetts Institute of Technology was used as the base for prototyping. It was found that complex protocols, such as Zero Knowledge Protocols, cannot be implemented on today's smart cards without special customisations by the smart card vendors. However, protocols that only required the use of common cryptographic functions such as DES (Data Encryption Standard) and RSA (Rivest, Shamir and Adleman) can be implemented.
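The "common cryptographic functions" the prototype relies on, such as DES, are available in Java through the JCE. As an illustrative sketch (not the thesis's Kerberos protocol; key handling here is deliberately simplified), a DES encrypt/decrypt round trip looks like:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.Arrays;

public class DesRoundTrip {
    /** Encrypts then decrypts with DES; shows the primitive only, not a protocol.
        (DES is obsolete for real use; it is shown because the thesis names it.) */
    static byte[] roundTrip(byte[] plaintext) throws Exception {
        SecretKey key = KeyGenerator.getInstance("DES").generateKey();
        Cipher cipher = Cipher.getInstance("DES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal(plaintext);
        cipher.init(Cipher.DECRYPT_MODE, key);
        return cipher.doFinal(ciphertext);
    }

    public static void main(String[] args) throws Exception {
        byte[] msg = "session key material".getBytes("UTF-8");
        System.out.println(Arrays.equals(msg, roundTrip(msg))); // true
    }
}
```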
50

Terneux, Efrén Andrés Estrella. "Design of an Algorithm for Aircraft Detection and Tracking with a Multi-coordinate VAUDEO System." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2633.

Full text
Abstract:
The combination of a video camera with an acoustic vector sensor (AVS) opens new possibilities in environment-awareness applications. The goal of this thesis is the design of an algorithm for the detection and tracking of low-flying aircraft using a multi-coordinate VAUDEO system. A commercial webcam placed in line with an AVS in a ground array is used to record real low-flying-aircraft data at Teuge international airport. In each frame, the algorithm analyzes a matrix of three orthogonal acoustic particle velocity signals and one acoustic pressure signal, using the singular value decomposition to estimate the direction of arrival (DoA) of the propeller aircraft's sound. The DoA data is then fed to a Kalman filter, whose output is used to narrow the region of the video frame that is processed. Background subtraction is applied, followed by a Gaussian-weighted intensity mask that assigns high priority to moving objects close to the estimated position of the sound source. The output is passed to a second Kalman filter to improve the accuracy of the aircraft location estimate. The performance evaluation showed that the algorithm is comparable to state-of-the-art video-only algorithms. In conclusion, combining video with directional audio increases the accuracy of propeller aircraft detection and tracking compared to previously reported work using audio alone.
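The Kalman filtering step described above can be illustrated with a minimal one-dimensional filter. This is a hedged sketch: it assumes a random-walk state model and arbitrary noise variances, whereas the thesis's filters operate on DoA and image coordinates.

```java
/** Minimal 1-D Kalman filter for a random-walk state (illustrative values only). */
public class Kalman1D {
    double x;        // state estimate
    double p;        // estimate variance
    final double q;  // process noise variance
    final double r;  // measurement noise variance

    Kalman1D(double x0, double p0, double q, double r) {
        this.x = x0; this.p = p0; this.q = q; this.r = r;
    }

    /** One predict/correct cycle for measurement z; returns the new estimate. */
    double update(double z) {
        p += q;                 // predict: state unchanged, uncertainty grows
        double k = p / (p + r); // Kalman gain
        x += k * (z - x);       // correct with the innovation
        p *= (1 - k);           // shrink uncertainty
        return x;
    }

    public static void main(String[] args) {
        Kalman1D f = new Kalman1D(0.0, 1.0, 1e-3, 0.5);
        double est = 0;
        for (double z : new double[]{10.2, 9.8, 10.1, 9.9}) est = f.update(z);
        System.out.println(est); // estimate moves toward the ~10 measurements
    }
}
```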
