Dissertations on the topic "Linear programming Data processing"

To see other types of publications on this topic, follow the link: Linear programming Data processing.

Format your source according to APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "Linear programming Data processing".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, whenever these are available in the metadata.

Browse dissertations on a wide variety of disciplines and organise your bibliography correctly.

1

Olivier, Hannes Friedel. "The expected runtime of the (1+1) evolutionary algorithm on almost linear functions." Virtual Press, 2006. http://liblink.bsu.edu/uhtbin/catkey/1356253.

Abstract:
This thesis extends the theoretical research done in the area of evolutionary algorithms. The (1+1) EA is a simple algorithm which makes it possible to gain some insight into the behaviour of these randomized search heuristics. This work shows ways to possibly improve on existing bounds. The generally good runtime of the algorithm on linear functions is also proven for classes of quadratic functions. These classes are defined by the relative size of the quadratic and the linear weights. One proof in the thesis considers a worst-case algorithm which always exhibits worse behaviour than many other functions; this algorithm is used as an upper bound for many different classes.
Department of Computer Science
2

Barboza, Angela Olandoski. "Simulação e técnicas da computação evolucionária aplicadas a problemas de programação linear inteira mista." Centro Federal de Educação Tecnológica do Paraná, 2005. http://repositorio.utfpr.edu.br/jspui/handle/1/74.

Abstract:
Presently, companies live a reality of rapid economic transformations generated by globalization. The growth of international trade in products and services, the constant exchange of information, and the cultural interchange challenge administrators to define new paths for their companies. This dynamics and the increasing competitiveness demand new knowledge and abilities from professionals. In this way, new technologies are researched in order to improve operational efficiency. The Brazilian oil industry in particular has invested in applied research, as well as in development and technological qualification, to keep its competitiveness in the international market. Many are the problems that must still be studied in this production sector. Among these, and due to their importance, the problems of product storage and transference can be pointed out. This work approaches a scheduling problem that involves diesel oil storage and distribution in an oil refinery. Mixed Integer Linear Programming (MILP) techniques with representation in discrete and continuous time were used. The models that were developed were solved by the LINGO 8.0 software, using the branch and bound algorithm. However, due to their combinatorial nature, the computational time expended on the solution was excessive. Thus, four new methodologies were developed: the Hybrid Steady State Genetic Algorithm (HSSGA) and the Transgenetic ProtoG Algorithm, both integrated with Linear Programming (LP), for the representation of discrete time; and simulation with optimization using the Genetic Algorithm (GA) and simulation with optimization using the Transgenetic ProtoG Algorithm, for the representation of continuous time. The results obtained through several tests with these new methodologies have shown that they can reach good results in an acceptable computational time.
The two techniques for the representation of discrete time showed satisfactory performance in terms of solution quality and computational time. Among these, the methodology that uses the Transgenetic ProtoG Algorithm showed the best results. Also, the simulator with optimization using GA and the one using the Transgenetic ProtoG Algorithm for the representation of continuous time proved adequate substitutes for the MILP resolution, since they reach solutions in a much shorter computational time than that required by branch and bound.
3

Vanden, Berghen Frank. "Constrained, non-linear, derivative-free, parallel optimization of continuous, high computing load, noisy objective functions." Doctoral thesis, Universite Libre de Bruxelles, 2004. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211177.

Abstract:
The main result is a new original algorithm: CONDOR ("COnstrained, Non-linear, Direct, parallel Optimization using trust Region method for high-computing load, noisy functions"). The aim of this algorithm is to find the minimum x* of an objective function F(x) (x is a vector whose dimension is between 1 and 150) using the least number of function evaluations of F(x). It is assumed that the dominant computing cost of the optimization process is the time needed to evaluate the objective function F(x) (one evaluation can take from 2 minutes to 2 days). The algorithm will try to minimize the number of evaluations of F(x), at the cost of a huge amount of routine work. CONDOR is a derivative-free optimization tool (i.e., the derivatives of F(x) are not required). The only information needed about the objective function is a simple method (written in Fortran, C++, ...) or a program (a Unix, Windows, or Solaris executable) which can evaluate the objective function F(x) at a given point x. The algorithm has been specially developed to be very robust against noise inside the evaluation of the objective function F(x). These hypotheses are very general, so the algorithm can be applied to a vast number of situations. CONDOR is able to use several CPUs in a cluster of computers. Different computer architectures can be mixed together and used simultaneously to deliver a huge computing power. The optimizer will make simultaneous evaluations of the objective function F(x) on the available CPUs to speed up the optimization process. The experimental results are very encouraging and validate the quality of the approach: CONDOR outperforms many commercial, high-end optimizers, and it might be the fastest optimizer in its category (fastest in terms of number of function evaluations). When several CPUs are used, the performance of CONDOR is currently unmatched (May 2004).
CONDOR has been used during the METHOD project to optimize the shape of the blades inside a centrifugal compressor (METHOD stands for Achievement Of Maximum Efficiency For Process Centrifugal Compressors THrough New Techniques Of Design). In this project, the objective function is based on a 3D CFD (computational fluid dynamics) code which simulates the flow of the gas inside the compressor.
Doctorate in applied sciences
4

Karamalis, Constantinos. "Data perturbation analyses for linear programming." Thesis, University of Ottawa (Canada), 1994. http://hdl.handle.net/10393/6709.

Abstract:
This thesis focuses on several aspects of data perturbation for linear programming. Classical questions of degeneracy and post-optimal analysis are given a unified presentation, in view of the new interior point methods of linear programming. The performance of these methods is compared to the simplex algorithm; interior point methods are shown to alleviate some difficulties in the representation and solution of linear programs. An affine scaling algorithm is implemented in conjunction with a simple rounding heuristic to assess the benefit of interior point trajectories in providing approximate solutions to linear integer programs.
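For readers unfamiliar with affine scaling, the interior point method named in this abstract, a minimal sketch is given below. It is an illustrative reconstruction under my own assumptions (a standard-form LP, a fixed step fraction, and a hand-picked example problem), not the thesis's implementation.

```python
import numpy as np

def affine_scaling(A, b, c, x, gamma=0.9, iters=200):
    """Primal affine scaling for  min c @ x  s.t.  A @ x = b, x > 0.

    `x` must be a strictly positive interior point with A @ x = b.
    """
    assert np.allclose(A @ x, b), "starting point must be feasible"
    for _ in range(iters):
        d2 = x * x                                  # diag(x)^2 kept as a vector
        w = np.linalg.solve((A * d2) @ A.T, A @ (d2 * c))  # dual estimate
        r = c - A.T @ w                             # reduced costs
        dx = -d2 * r                                # descent direction; A @ dx = 0
        if np.abs(dx).max() < 1e-7:                 # converged
            break
        neg = dx < 0
        if not neg.any():                           # objective unbounded below
            break
        x = x + gamma * np.min(-x[neg] / dx[neg]) * dx  # step, keeping x > 0
    return x

# Example: max x1 + 2*x2  s.t.  x1 + x2 <= 4, x2 <= 2  (x3, x4 are slacks).
A = np.array([[1.0, 1, 1, 0], [0, 1, 0, 1]])
b = np.array([4.0, 2.0])
c = np.array([-1.0, -2, 0, 0])                      # minimize -(x1 + 2*x2)
x_opt = affine_scaling(A, b, c, np.array([1.0, 1, 2, 1]))
```

Starting from the interior point (1, 1, 2, 1), the iterates approach the optimal vertex (2, 2, 0, 0) of the example problem.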
5

Li, Jun-Sheng. "Design and scheduling of chemical bath processing lines." Diss., Georgia Institute of Technology, 1989. http://hdl.handle.net/1853/24380.

6

Geske, Ulrich, and Hans-Joachim Goltz. "Efficiency of difference-list programming." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4156/.

Abstract:
The difference-list technique is described in the literature as an effective method for extending lists to the right without using calls of append/3. There exist some proposals for the automatic transformation of list programs into difference-list programs. However, we are interested in the construction of difference-list programs by the programmer, avoiding the need for a transformation step. In [GG09] it was demonstrated how left-recursive procedures with a dangling call of append/3 can be transformed into right-recursion using the unfolding technique. To simplify the writing of difference-list programs, a new cons/2 procedure was introduced. In the present paper, we investigate how efficiency is influenced by using cons/2. We measure the efficiency of procedures using the accumulator technique, cons/2, DCGs, and difference lists, and compute the resulting speedup with respect to the simple procedure definition using append/3. Four Prolog systems were investigated, and we found different behaviour concerning the speedup achieved by difference lists. One result of our investigations is that a piece of advice often given in the literature for avoiding calls of append/3 could not be confirmed in this strong formulation.
7

Wilkes, Charles Thomas. "Programming methodologies for resilience and availability." Diss., Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/8308.

8

Nader, Babak. "Parallel solution of sparse linear systems." Full text open access at:, 1987. http://content.ohsu.edu/u?/etd,138.

9

Wang, Zongyan 1969. "Implementation of distributed data processing in a database programming language." Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=79201.

Abstract:
This thesis discusses the design and implementation of integrating Internet capability into the database programming language JRelix, so that it not only possesses the data organization, storage, and indexing capabilities of a normal DBMS, but also remote data processing capabilities across the Internet.
A URL-based name extension to database elements in a database programming language is adopted, which gives the language collaborative and distributed capability over the Internet with no changes in syntax or semantics apart from the new structure in names. Relations, computations, statements (or queries), and relational expressions are treated uniformly as database elements in our implementation. These database elements can be accessed or executed remotely. As a result, remote data accessing and processing, as well as Remote Procedure Calls (RPC), are supported.
Resource sharing is a main achievement of the implementation. In addition, site autonomy and performance transparency are accomplished, distributed view management is provided, sites need not be geographically distant, and security management is implemented.
10

Ashoor, Khalil Layla Ali. "Performance analysis integrating data envelopment analysis and multiple objective linear programming." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/performance-analysis-integrating-data-envelopment-analysis-and-multiple-objective-linear-programming(65485f28-f6c5-4eff-b422-6dd05f1b46fe).html.

Abstract:
Firms or organisations implement performance assessment to improve productivity, but evaluating the performance of firms or organisations may be complex and complicated due to the existence of conflicting objectives. Data Envelopment Analysis (DEA) is a non-parametric approach utilized to evaluate the relative efficiencies of decision making units (DMUs) within firms or organizations that perform similar tasks. Although DEA measures the relative efficiency of a set of DMUs, the efficiency scores generated do not consider the decision maker's (DM's) or expert preferences. DEA is used to measure efficiency and can be extended to include DM's and expert preferences by incorporating value judgements. Value judgements can be implemented by two techniques: weight restrictions or constructing an equivalent Multiple Objective Linear Programming (MOLP) model. Weight restrictions require prior knowledge to be provided by the DM, and moreover the DM cannot interfere during the assessment analysis. On the other hand, the second approach enables the DM to interfere during performance assessment without prior knowledge, whilst providing alternative objectives that allow the DM to reach the most preferred decision subject to available resources. The main focus of this research was to establish interactive frameworks to allow the DM to set targets, according to his preferences, and to test alternatives that can realistically be measured through an interactive procedure. These frameworks are based on building an equivalence model between extended DEA and a MOLP minimax formulation incorporating an interactive procedure. In this study two frameworks were established. The first is based on an equivalence model between the DEA trade-off approach and the MOLP minimax formulation, which allows for incorporating DM's and expert preferences. The second is based on an equivalence model between the DEA bounded model and the MOLP minimax formulation.
This allows for integrating the DM's preferences through interactive steps to measure the whole efficiency score (i.e. best and worst efficiency) of an individual DMU. In both approaches a gradient projection interactive approach is implemented to estimate, regionally, the most preferred solution along the efficient frontier. The second framework was further extended by including ranking based on the geometric average. All the frameworks developed and presented were tested on two real case studies.
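As background for the above, the plain (non-extended) CCR efficiency score that DEA assigns to a DMU can be computed as a linear program in multiplier form. The sketch below, using SciPy's linprog, illustrates basic DEA only, not the thesis's interactive DEA/MOLP frameworks; the function name and data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o` (multiplier form).

    X: (n_dmus, m) observed inputs, Y: (n_dmus, s) observed outputs.
    Maximize u @ Y[o] subject to v @ X[o] = 1 and u @ Y[j] <= v @ X[j]
    for every DMU j, with nonnegative weight vectors u, v.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])           # linprog minimizes
    A_ub = np.hstack([Y, -X])                          # u@Y_j - v@X_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]   # v @ X[o] = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun
```

For a single input and single output, the score reduces to the DMU's output/input ratio divided by the best such ratio in the sample.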
11

Schrijvers, Tom. "Overview of the monadic constraint programming framework." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4141/.

Abstract:
A constraint programming system combines two essential components: a constraint solver and a search engine. The constraint solver reasons about the satisfiability of conjunctions of constraints, and the search engine controls the search for solutions by iteratively exploring a disjunctive search tree defined by the constraint program. The Monadic Constraint Programming framework gives a monadic definition of constraint programming where the solver is defined as a monad threaded through the monadic search tree. Search and search strategies can then be defined as first-class objects that can themselves be built or extended by composable search transformers. Search transformers give a powerful and unifying approach to viewing search in constraint programming, and the resulting constraint programming system is first class and extremely flexible.
12

Hanus, Michael, and Sven Koschnicke. "An ER-based framework for declarative web programming." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4144/.

Abstract:
We describe a framework to support the implementation of web-based systems to manipulate data stored in relational databases. Since the conceptual model of a relational database is often specified as an entity-relationship (ER) model, we propose to use the ER model to generate a complete implementation in the declarative programming language Curry. This implementation contains operations to create and manipulate entities of the data model, supports authentication, authorization, session handling, and the composition of individual operations to user processes. Furthermore, and most importantly, the implementation ensures the consistency of the database w.r.t. the data dependencies specified in the ER model, i.e., updates initiated by the user cannot lead to an inconsistent state of the database. In order to generate a high-level declarative implementation that can be easily adapted to individual customer requirements, the framework exploits previous work on declarative database programming and web user interface construction in Curry.
13

Clayton, Peter Graham. "Interrupt-generating active data objects." Thesis, Rhodes University, 1990. http://hdl.handle.net/10962/d1006700.

Abstract:
An investigation is presented into an interrupt-generating object model which is designed to reduce the effort of programming distributed memory multicomputer networks. The object model is aimed at the natural modelling of problem domains in which a number of concurrent entities interrupt one another as they lay claim to shared resources. The proposed computational model provides for the safe encapsulation of shared data, and incorporates inherent arbitration for simultaneous access to the data. It supplies a predicate triggering mechanism for use in conditional synchronization and as an alternative mechanism to polling. Linguistic support for the proposal requires a novel form of control structure which is able to interface sensibly with interrupt-generating active data objects. The thesis presents the proposal as an elemental language structure, with axiomatic guarantees which enforce safety properties and aid in program proving. The established theory of CSP is used to reason about the object model and its interface. An overview is presented of a programming language called HUL, whose semantics reflect the proposed computational model. Using the syntax of HUL, the application of the interrupt-generating active data object is illustrated. A range of standard concurrent problems is presented to demonstrate the properties of the interrupt-generating computational model. Furthermore, the thesis discusses implementation considerations which enable the model to be mapped precisely onto multicomputer networks, and which sustain the abstract programming level provided by the interrupt-generating active data object in the wider programming structures of HUL.
14

Konis, Kjell Peter. "Linear programming algorithms for detecting separated data in binary logistic regression models." Thesis, University of Oxford, 2007. http://ora.ox.ac.uk/objects/uuid:8f9ee0d0-d78e-4101-9ab4-f9cbceed2a2a.

Abstract:
This thesis is a study of the detection of separation among the sample points in binary logistic regression models. We propose a new algorithm for detecting separation and demonstrate empirically that it can be computed fast enough to be used routinely as part of the fitting process for logistic regression models. The parameter estimates of a binary logistic regression model fit using the method of maximum likelihood sometimes do not converge to finite values. This phenomenon (also known as monotone likelihood or infinite parameters) occurs because of a condition among the sample points known as separation. There are two classes of separation. When complete separation is present among the sample points, iterative procedures for maximizing the likelihood tend to break down, making it clear that there is a problem with the model. However, when quasicomplete separation is present among the sample points, the iterative procedures for maximizing the likelihood tend to satisfy their convergence criterion before revealing any indication of separation. The new algorithm is based on a linear program with a nonnegative objective function that has a positive optimal value when separation is present among the sample points. We compare several approaches for solving this linear program and find that a method based on determining the feasibility of the dual to this linear program provides a numerically reliable test for separation among the sample points. A simulation study shows that this test can be computed in a similar amount of time as fitting the binary logistic regression model using the method of iteratively reweighted least squares; hence the test is fast enough to be used routinely as part of the fitting procedure. An implementation of our algorithm (as well as the other methods described in this thesis) is available in the R package safeBinaryRegression.
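The linear-programming idea described above can be illustrated with a small sketch: maximize the sum of correctly signed scores, capped at 1, over free coefficients b. The optimum is strictly positive exactly when some non-trivial b weakly separates the two classes. This is a simplified illustration of the approach, not the dual-feasibility test implemented in safeBinaryRegression, and the function name is my own.

```python
import numpy as np
from scipy.optimize import linprog

def is_separated(X, y, tol=1e-8):
    """Return True if the binary sample (X, y in {0, 1}) is separated.

    Solves: max sum_i z_i with z_i = s_i * (x_i @ b), 0 <= z_i <= 1, b free,
    where s_i = +1 for y_i = 1 and -1 for y_i = 0.  The optimum is 0 exactly
    when only b = 0 satisfies the sign conditions, i.e. no separation.
    """
    s = np.where(np.asarray(y) == 1, 1.0, -1.0)
    A = s[:, None] * np.asarray(X, dtype=float)     # row i is s_i * x_i
    n, p = A.shape
    c = -A.sum(axis=0)                              # linprog minimizes, so negate
    A_ub = np.vstack([A, -A])                       # z_i <= 1  and  -z_i <= 0
    b_ub = np.concatenate([np.ones(n), np.zeros(n)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * p, method="highs")
    return -res.fun > tol
```

With an intercept column included, completely separated 1D data is flagged, while overlapping classes are not.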
15

Archer, Cynthia. "A framework for representing non-stationary data with mixtures of linear models /." Full text open access at:, 2002. http://content.ohsu.edu/u?/etd,585.

16

Campanella, William C. "The nature of the problem statement in architectural programming : a critical analysis of three programming processes." Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/23156.

17

Vinjarapu, Saranya S. "GPU Based Scattered Data Modeling." University of Akron / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=akron1335297259.

18

Ilberg, Peter. "Floyd : a functional programming language with distributed scope." Thesis, Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/8187.

19

John, Ranjit. "Implementing and programming weakly consistent memories." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/12890.

20

Hofuku, Yoyoi, Shinya Cho, Tomohiro Nishida, and Susumu Kanemune. "Why is programming difficult? : proposal for learning programming in “small steps” and a prototype tool for detecting “gaps”." Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6445/.

Abstract:
In this article, we propose a model for an understanding process that learners can use while studying programming. We focus on the “small step” method, in which students learn only a few concepts for one program to avoid having trouble with learning programming. We also analyze the difference in the description order between several C programming textbooks on the basis of the model. We developed a tool to detect “gaps” (a lot of concepts to be learned in a program) in programming textbooks.
21

Menon, Sathis N. "Asynchronous events : tools for distributed programming on concurrent object-based systems." Diss., Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/9147.

22

Chin, Roger Steven. "Issues in designing a distributed, object-based programming system." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/27858.

Abstract:
Objects are entities which encapsulate data and those operations which manipulate the data. A distributed, object-based programming system (or DOBPS) is a distributed operating system which has been designed to support an object-based programming language and, in particular, an object abstraction. DOBPSs have the benefits of simplifying program construction and improving the performance of programs by providing efficient, system-level support for the abstractions used by the language. Many DOBPSs also permit hardware and software failures to be tolerated. This thesis introduces a definition for the term "distributed, object-based programming system" and identifies the features, that are related to objects, which are required by an operating system of a DOBPS. A classification scheme is presented that categorizes and characterizes these features to permit a number of implementation techniques to be easily examined, compared, and contrasted.
Faculty of Science
Department of Computer Science
Graduate
23

Davies, S. J. "Frequency-selective excitation and non-linear data processing in nuclear magnetic resonance." Thesis, University of Oxford, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233510.

24

Yaman, Sibel. "A multi-objective programming perspective to statistical learning problems." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26470.

Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Chin-Hui Lee; Committee Member: Anthony Yezzi; Committee Member: Evans Harrell; Committee Member: Fred Juang; Committee Member: James H. McClellan. Part of the SMARTech Electronic Thesis and Dissertation Collection.
25

Adhikari, Sameer. "Programming Idioms and Runtime Mechanisms for Distributed Pervasive Computing." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4820.

Abstract:
The emergence of pervasive computing power and networking infrastructure is enabling new applications. Still, many milestones need to be reached before pervasive computing becomes an integral part of our lives. An important missing piece is the middleware that allows developers to easily create interesting pervasive computing applications. This dissertation explores the middleware needs of distributed pervasive applications. The main contributions of this thesis are the design, implementation, and evaluation of two systems: D-Stampede and Crest. D-Stampede allows pervasive applications to access live stream data from multiple sources using time as an index. Crest allows applications to organize historical events, and to reason about them using time, location, and identity. Together they meet the important needs of pervasive computing applications. D-Stampede supports a computational model called the thread-channel graph. The threads map to computing devices ranging from small to high-end processing elements. Channels serve as the conduits among the threads, specifically tuned to handle time-sequenced streaming data. D-Stampede allows the dynamic creation of threads and channels, and for the dynamic establishment (and removal) of the plumbing among them. The Crest system assumes a universe that consists of participation servers and event stores, supporting a set of applications. Each application consists of distributed software entities working together. The participation server helps the application entities to discover each other for interaction purposes. Application entities can generate events, store them at an event store, and correlate events. The entities can communicate with one another directly, or indirectly through the event store. We have qualitatively and quantitatively evaluated D-Stampede and Crest. The qualitative aspect refers to the ease of programming afforded by our programming abstractions for pervasive applications. 
The quantitative aspect measures the cost of the API calls, and the performance of an application pipeline that uses the systems.
26

Mayott, Stewart W. "Implementation of a module implementor for an activity based distributed system /." Online version of thesis, 1988. http://hdl.handle.net/1850/10223.

27

Lakshmanan, Nithya M. "Estimation and control of nonlinear batch processes using multiple linear models." Thesis, Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/11835.

28

Mandviwala, Hasnain A. "Capsules expressing composable computations in a parallel programming model /." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26684.

Abstract:
Thesis (Ph.D)--Computing, Georgia Institute of Technology, 2009.
Committee Chair: Ramachandran, Umakishore; Committee Member: Knobe Kathleen; Committee Member: Pande, Santosh; Committee Member: Prvulovic, Milos; Committee Member: Rehg, James M.. Part of the SMARTech Electronic Thesis and Dissertation Collection.
29

Eben-Chaime, Moshe. "The physical design of printed circuit boards : a mathematical programming approach." Diss., Georgia Institute of Technology, 1989. http://hdl.handle.net/1853/25505.

30

Aygar, Alper. "Doppler Radar Data Processing And Classification." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609890/index.pdf.

Abstract:
In this thesis, improving the performance of the automatic recognition of Doppler radar targets is studied. The radar used in this study is a ground-surveillance Doppler radar. Target types are car, truck, bus, tank, helicopter, moving man, and running man. The input to this thesis is the output of real Doppler radar signals which were normalized and preprocessed (TRP vectors: Target Recognition Pattern vectors) in the doctoral thesis by Erdogan (2002). TRP vectors are normalized and homogenized Doppler radar target signals with respect to target speed, target aspect angle, and target range. Some target classes have repetitions in time in their TRPs. By the use of these repetitions, improvement of the target type classification performance is studied. K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) algorithms are used for Doppler radar target classification and the results are evaluated. Before classification, PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), NMF (Nonnegative Matrix Factorization), and ICA (Independent Component Analysis) are implemented and applied to the normalized Doppler radar signals for feature extraction and dimension reduction in an efficient way. These techniques transform the input vectors, which are the normalized Doppler radar signals, to another space. The effects of the implementation of these feature extraction algorithms and of the use of the repetitions in Doppler radar target signals on the Doppler radar target classification performance are studied.
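The feature-extraction-plus-classification pipeline described in this abstract can be sketched with plain NumPy; the fragment below uses PCA for dimension reduction and a KNN vote for classification (the thesis also evaluates LDA, NMF, ICA, and SVM). It is an illustrative reconstruction with toy data, not the thesis's code.

```python
import numpy as np

def pca_fit(X, k):
    """Return the mean and the top-k principal directions of X."""
    mu = X.mean(axis=0)
    # Right singular vectors of the centered data = eigenvectors of covariance
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def knn_predict(train_Z, train_y, z, k=3):
    """Classify z by majority vote among its k nearest training vectors."""
    d = np.linalg.norm(train_Z - z, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Toy "signals": two classes separated in the first two dimensions.
X_train = np.array([[0, 0, 0, 0, 0], [0.5, 0, 0, 0, 0], [0, 0.5, 0, 0, 0],
                    [5, 5, 0, 0, 0], [5.5, 5, 0, 0, 0], [5, 5.5, 0, 0, 0]])
y_train = np.array([0, 0, 0, 1, 1, 1])
mu, W = pca_fit(X_train, 2)                     # reduce 5 dims -> 2
Z_train = (X_train - mu) @ W.T
z_new = (np.array([4.8, 4.9, 0, 0, 0]) - mu) @ W.T
label = knn_predict(Z_train, y_train, z_new)
```

Projecting both training and test vectors through the same PCA basis before the KNN vote is the "feature extraction and dimension reduction" step the abstract refers to.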
31

Grover, Samir. "Solving layout compaction and wire-balancing problem using linear programming on the Monsoon multiprocessor." Thesis, Connect to online version, 1995. http://0-wwwlib.umi.com.mercury.concordia.ca/cr/concordia/fullcit?pMQ90885.

32

Caneill, Matthieu. "Contributions to large-scale data processing systems." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM006/document.

Abstract:
This thesis covers the topic of large-scale data processing systems, and more precisely three complementary approaches: the design of a system to perform prediction about computer failures through the analysis of monitoring data; the routing of data in a real-time system looking at correlations between message fields to favor locality; and finally a novel framework to design data transformations using directed graphs of blocks. Through the lenses of the Smart Support Center project, we design a scalable architecture to store time series reported by monitoring engines, which constantly check the health of computer systems. We use this data to perform predictions, and detect potential problems before they arise. We then dive into routing algorithms for stream processing systems, and develop a layer to route messages more efficiently, by avoiding hops between machines. For that purpose, we identify in real time the correlations which appear in the fields of these messages, such as hashtags and their geolocation, for example in the case of tweets. We use these correlations to create routing tables which favor the co-location of actors handling these messages. Finally, we present λ-blocks, a novel programming framework to compute data processing jobs without writing code, but rather by creating graphs of blocks of code. The framework is fast, and comes with batteries included: block libraries, plugins, and APIs to extend it. It is also able to manipulate computation graphs, for optimization, analysis, verification, or any other purposes.
33

Soroush, Amirali. "Extended Kalman filters and piece-wise linear segmentation for the processing of drilling data." Thesis, Curtin University, 2012. http://hdl.handle.net/20.500.11937/57584.

Abstract:
This research is oriented to the development and implementation of signal processing techniques for the analysis of drilling data with a focus on adaptive filters and segmentation schemes. The thesis is divided into two distinct parts; the first part deals with the use of extended Kalman filters to estimate in real-time the instantaneous angular velocity of the drilling bit using downhole measurements, while the second part is devoted to a novel method for the segmentation of piece-wise linear signals corrupted with noise.
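Piece-wise linear segmentation of a noisy signal can be sketched with a standard sliding-window scheme: grow a window, fit a least-squares line, and split when the fit cost exceeds a threshold. This is a common baseline, not the novel method proposed in the thesis, and the cost threshold is an assumed parameter:

```python
import numpy as np

def fit_cost(t, x):
    """Sum of squared residuals of the best least-squares line through (t, x)."""
    A = np.column_stack([t, np.ones_like(t)])
    _, res, _, _ = np.linalg.lstsq(A, x, rcond=None)
    return float(res[0]) if res.size else 0.0

def sliding_window_segments(x, max_cost=1.0):
    """Greedy left-to-right split of a noisy signal into near-linear pieces.
    Returns a list of (start, end) index pairs covering the signal."""
    t = np.arange(len(x), dtype=float)
    segments, start = [], 0
    for end in range(2, len(x) + 1):
        if fit_cost(t[start:end], x[start:end]) > max_cost:
            segments.append((start, end - 1))  # close segment before the bad point
            start = end - 1
    segments.append((start, len(x)))
    return segments
```

On a signal with one slope change, e.g. `np.concatenate([np.arange(10.0), np.arange(8.0, -2.0, -1.0)])`, the routine recovers two segments with a breakpoint at the corner.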
34

Grundmann, Matthias. "Computational video: post-processing methods for stabilization, retargeting and segmentation." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/47596.

Abstract:
In this thesis, we address a variety of challenges for analysis and enhancement of Computational Video. We present novel post-processing methods to bridge the difference between professional and casually shot videos mostly seen on online sites. Our research presents solutions to three well-defined problems: (1) Video stabilization and rolling shutter removal in casually-shot, uncalibrated videos; (2) Content-aware video retargeting; and (3) spatio-temporal video segmentation to enable efficient video annotation. We showcase several real-world applications building on these techniques. We start by proposing a novel algorithm for video stabilization that generates stabilized videos by employing L1-optimal camera paths to remove undesirable motions. We compute camera paths that are optimally partitioned into constant, linear and parabolic segments mimicking the camera motions employed by professional cinematographers. To achieve this, we propose a linear programming framework to minimize the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond conventional filtering, which only suppresses high-frequency jitter. An additional challenge in videos shot from mobile phones is rolling shutter distortion. Modern CMOS cameras capture the frame one scanline at a time, which results in non-rigid image distortions such as shear and wobble. We propose a solution based on a novel mixture model of homographies parametrized by scanline blocks to correct these rolling shutter distortions. Our method does not rely on a-priori knowledge of the readout time nor requires prior camera calibration. Our novel video stabilization and calibration free rolling shutter removal have been deployed on YouTube where they have successfully stabilized millions of videos. We also discuss several extensions to the stabilization algorithm and present technical details behind the widely used YouTube Video Stabilizer.
We address the challenge of changing the aspect ratio of videos, by proposing algorithms that retarget videos to fit the form factor of a given device without stretching or letter-boxing. Our approaches use all of the screen's pixels, while striving to deliver as much video-content of the original as possible. First, we introduce a new algorithm that uses discontinuous seam-carving in both space and time for resizing videos. Our algorithm relies on a novel appearance-based temporal coherence formulation that allows for frame-by-frame processing and results in temporally discontinuous seams, as opposed to geometrically smooth and continuous seams. Second, we present a technique that builds on the above-mentioned video stabilization approach. We effectively automate classical pan and scan techniques by smoothly guiding a virtual crop window via saliency constraints. Finally, we introduce an efficient and scalable technique for spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. We begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a "region graph" over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high quality segmentations, and allows subsequent applications to choose from varying levels of granularity. We demonstrate the use of spatio-temporal segmentation as users interact with the video, enabling efficient annotation of objects within the video.
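The core idea of the L1-optimal camera path, minimizing derivatives of the path under crop-window constraints via linear programming, can be sketched for a 1-D path with SciPy. This simplified version penalizes only the third derivative (the full method combines first, second, and third derivatives), and the window size is an assumed parameter:

```python
import numpy as np
from scipy.optimize import linprog

def l1_smooth_path(c, window=2.0):
    """Find a smooth 1-D camera path p minimizing sum |third difference of p|,
    constrained to stay within +/- window of the shaky input path c (crop box)."""
    n = len(c)
    m = n - 3  # number of third differences
    # Third-difference operator: (D3 p)_t = p[t+3] - 3 p[t+2] + 3 p[t+1] - p[t]
    D3 = np.zeros((m, n))
    for t in range(m):
        D3[t, t:t + 4] = [-1.0, 3.0, -3.0, 1.0]
    # Variables: [p (n), e (m)]; minimize sum(e) subject to -e <= D3 p <= e,
    # the standard LP trick for an L1 objective.
    c_obj = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[D3, -np.eye(m)], [-D3, -np.eye(m)]])
    b_ub = np.zeros(2 * m)
    bounds = [(ci - window, ci + window) for ci in c] + [(0, None)] * m
    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]
```

Applied to a noisy linear pan, the solver returns a path whose third differences are near zero while staying inside the virtual crop window around the input.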
35

Kusalik, Anthony Joseph. "Logic programming as a formalism for specification and implementation of computer systems." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28848.

Abstract:
The expressive power of logic-programming languages allows utilization of conventional constructs in development of computer systems based on logic programming. However, logic-programming languages have many novel features and capabilities. This thesis investigates how advantage can be taken of these features in the development of a logic-based computer system. It demonstrates that innovative approaches to software, hardware, and computer system design and implementation are feasible in a logic-programming context and often preferable to adaptation of conventional ones. The investigation centers on three main ideas: executable specification, declarative I/O, and implementation through transformation and meta-interpretation. A particular class of languages supporting parallel computation, committed-choice logic-programming languages, are emphasized. One member of this class, Concurrent Prolog, serves as the machine, specification, and implementation language. The investigation has several facets. Hardware, software, and overall system models for a logic-based computer are determined and examined. The models are described by logic programs. The computer system is represented as a goal for resolution. The clauses involved in the subsequent reduction steps constitute its specification. The same clauses also describe the manner in which the computer system is initiated. Frameworks are given for developing models of peripheral devices whose actions and interactions can be declaratively expressed. Interactions do not rely on side-effects or destructive assignment, and are term-based. A methodology is presented for realizing (prototypic) implementations from device specifications. The methodology is based on source-to-source transformation and meta-interpretation. A magnetic disk memory is used as a representative example, resulting in an innovative approach to secondary storage in a logic-programming environment. 
Building on these accomplishments, a file system for a logic-based computer system is developed. The file system follows a simple model and supports term-based, declarative I/O. Throughout the thesis, features of the logic-programming paradigm are demonstrated and exploited. Interesting and innovative concepts established include: device processes and device processors; restartable and perpetual devices and systems; peripheral devices modelled as function computations or independent logical (inference) systems; unique, compact representations of terms; lazy term expansion; file systems as perpetual processes maintaining local states; and term- and unification-based file abstractions. Logic programs are the sole formalism for specifications and implementations.
Science, Faculty of
Computer Science, Department of
Graduate
36

Gong, Yun. "On semidefinite programming and vector quantization with application to image coding." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/14876.

37

Romanycia, Marc Hector Joseph. "The design and control of visual routines for the computation of simple geometric properties and relations." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26526.

Abstract:
The present work is based on the Visual Routine theory of Shimon Ullman. This theory holds that efficient visual perception is managed by first applying spatially parallel methods to an initial input image in order to construct the basic representation-maps of features within the image. Then, this phase is followed by the application of serial methods - visual routines - which are applied to the most salient items in these and other subsequently created maps. Recent work in the visual routine tradition is reviewed, as well as relevant psychological work on preattentive and attentive vision. An analysis is made of the problem of devising a visual routine language for computing geometric properties and relations. The most useful basic representations to compute directly from a world of 2-D geometric shapes are determined. An argument is made for the case that an experimental program is required to establish which basic operations and which methods for controlling them will lead to the efficient computation of geometric properties and relations. A description is given of an implemented computer system which can correctly compute, in images of simple 2-D geometric shapes, the properties vertical, horizontal, closed, and convex, and the relations inside, outside, touching, centred-in, connected, parallel, and being-part-of. The visual routines which compute these, the basic operations out of which the visual routines are composed, and the important logic which controls the goal-directed application of the routines to the image are all described in detail. The entire system is embedded in a Question-and-Answer system which is capable of answering questions of an image, such as "Find all the squares inside triangles" or "Find all the vertical bars outside of closed convex shapes." By asking many such questions about various test images, the effectiveness of the visual routines and their controlling logic is demonstrated.
Science, Faculty of
Computer Science, Department of
Graduate
38

Linderoth, Jeffrey T. "Topics in parallel integer optimization." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/24285.

39

Gujberová, Monika, and Peter Tomcsányi. "Environments for programming in primary education." Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6449/.

Abstract:
The aim of our article is to collect and present information about contemporary programming environments that are suitable for primary education. We studied the ways they implement (or do not implement) some programming concepts, the ways programs are represented and built in order to support young and novice programmers, as well as their suitability to allow different forms of sharing the results of pupils’ work. We present not only a short description of each considered environment and the taxonomy in the form of a table, but also our understanding and opinions on how and why the environments implement the same concepts and ideas in different ways and which concepts and ideas seem to be important to the creators of such environments.
40

Zuriekat, Faris Nabeeh. "Parallel remote interactive management model." CSUSB ScholarWorks, 2007. https://scholarworks.lib.csusb.edu/etd-project/3222.

Abstract:
This thesis discusses PRIMM, which stands for Parallel Remote Interactive Management Model. PRIMM is a framework for object-oriented applications that relies on grid computing. It works as an interface between the remote applications and the parallel computing system. The thesis shows the capabilities that can be achieved with the PRIMM architecture.
41

Passos, Alexandre Tachard 1986. "Combinatorial algorithms and linear programming for inference in natural language processing = Algoritmos combinatórios e de programação linear para inferência em processamento de linguagem natural." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275609.

Abstract:
Advisor: Jacques Wainer
Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação
In natural language processing, and in general machine learning, probabilistic graphical models (and more generally structured linear models) are commonly used. Although these models are convenient, allowing the expression of complex relationships between many random variables one wants to predict given a document or sentence, most learning and prediction algorithms for general models are inefficient. Hence there has recently been interest in using linear programming relaxations for the inference tasks necessary when learning or applying these models. This thesis presents two contributions to the theory and practice of linear programming relaxations for inference in structured linear models. First, we present a new algorithm, based on column generation (a technique which is dual to the cutting planes method), to accelerate the Viterbi algorithm, the most popular exact inference technique for linear-chain graphical models. The method is also applicable to tree graphical models and hypergraph models. Then we present a new linear programming relaxation for the problem of joint inference, when one has many submodels and wants to predict using all of them at once. In general, joint inference is NP-complete, but algorithms based on dual decomposition have proven to be efficiently applicable for the case when the joint model can be expressed as many separate models plus linear equality constraints. This thesis proposes an extension to dual decomposition which also allows the presence of factors that score parts belonging to different submodels, improving the expressivity of dual decomposition at no extra computational cost.
Doctorate
Computer Science
Doctor of Computer Science
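The baseline that the column-generation method above accelerates is the standard Viterbi algorithm for linear-chain models; a minimal NumPy sketch (with illustrative score matrices, not the thesis's models) is:

```python
import numpy as np

def viterbi(scores_unary, scores_pair):
    """Exact MAP inference in a linear-chain model.
    scores_unary: (T, K) per-position label scores.
    scores_pair:  (K, K) transition scores, scores_pair[i, j] = score of j after i.
    Returns the highest-scoring label sequence as a list of label indices."""
    T, K = scores_unary.shape
    delta = np.empty((T, K))           # best score of a prefix ending in label k
    back = np.zeros((T, K), dtype=int)  # argmax back-pointers
    delta[0] = scores_unary[0]
    for t in range(1, T):
        cand = delta[t - 1][:, None] + scores_pair  # (prev label) x (next label)
        back[t] = cand.argmax(axis=0)
        delta[t] = cand.max(axis=0) + scores_unary[t]
    # Trace the best path backwards from the best final label.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

This runs in O(T K²) time; the thesis's contribution is to avoid enumerating all K² transitions at every position via column generation.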
42

Xu, Cong. "Multi-objective optimization approaches to efficiency assessment and target setting for bank branches." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/multiobjective-optimization-approaches-to-efficiency-assessment-and-target-setting-for-bank-branches(eef70a4a-359d-40ed-9b6c-3eeb98fe477a).html.

Abstract:
This thesis focuses on combining data envelopment analysis (DEA) and multi-objective linear programming (MOLP) methods to set targets by referencing peers' performances and decision-makers' (DMs) preferences. A large number of past papers have proven the importance of a company having a target; however, obtaining a feasible but challenging target has always been a difficult topic for companies. Since DEA was proposed in 1978, it has become one of the most popular performance assessment tools. The performance possibility set and efficient frontier established by DEA provide solid and scientific reference information for managers to evaluate an individual's efficiency. Based on the successful experience of DEA in performance assessment, many scholars have mentioned that DEA can be used to set appropriate targets as well; however, traditional DEA models do not include DMs' preference information that is crucial to a target-setting process. Therefore, several MOLP methods have been introduced to include DMs' preferences in the target-setting process based on the DEA efficient frontier and performance possibility set. The trade-off-based method is one of the most popular interactive methods that have been incorporated with DEA. However, there are several gaps in the current research: (1) the trade-off-based method could take so many interactions that no DMs could finish the interactive process; (2) DMs might find it very difficult to provide the preference information required by MOLP models; and (3) DMs cannot have an intuitive view in terms of the efficient frontier. Regarding the gaps above, this thesis proposes three new trade-off-based interactive target-setting models based on the DEA performance possibility set and efficient frontier to improve DMs' experience when setting targets. The three models can work independently or can be combined during the decision-making process. 
The piecewise linear model uses a piecewise linear assumption to simulate DMs' real utility function. It gradually narrows down the region that could contain DMs' most-preferred solution (MPS) until it reaches an acceptable range. This model could help DMs who have limited time for interaction but want to have a global view of the entire efficient frontier. This model has also been proven very helpful when DMs are not sensitive to close efficient solutions. The prioritized trade-off model provides a new way for a DM to know about the efficient frontier, which allows the DM to explore the efficient frontier following the preferred direction with a series of trade-off tables and trade-off figures as visual aids. The stepwise trade-off model focuses on situations where the number of objectives (outputs/inputs for the DEA model) is quite large and DMs cannot provide all indifference trade-offs between all the objectives simultaneously. To release the DMs' burden, the stepwise model starts from two objectives and gradually includes new objectives in the decision-making process, with the assumption that the indifference trade-offs between previous objectives are fixed, until all objectives are included. All three models have been validated through numerical examples and case studies of a Chinese state-owned bank to help DMs to explore their MPS in the DEA production possibility set.
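The DEA efficient frontier referenced above rests on a standard linear program. A minimal sketch of the classic input-oriented CCR envelopment model (a textbook formulation, not the interactive target-setting models proposed in the thesis) using SciPy:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of decision-making unit o (envelopment form):
        min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0
    X: (m_inputs, n_units) input matrix; Y: (s_outputs, n_units) output matrix."""
    m, n = X.shape
    s = Y.shape[0]
    # Variables: [theta, lam_1 .. lam_n]; minimize theta.
    c = np.concatenate([[1.0], np.zeros(n)])
    # X lam - theta x_o <= 0   and   -Y lam <= -y_o
    A_ub = np.vstack([
        np.hstack([-X[:, [o]], X]),
        np.hstack([np.zeros((s, 1)), -Y]),
    ])
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun
```

For two units producing one unit of output from 1 and 2 units of input respectively, the model scores the first unit 1.0 (efficient) and the second 0.5, i.e. it could produce the same output with half the input.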
43

Erdman, Robert W. "Using experimental design and data analysis to study the enlisted specialty model of the U.S. Army GI." Monterey, California : Naval Postgraduate School, 2010. http://edocs.nps.edu/npspubs/scholarly/theses/2010/Jun/10Jun%5FErdman.pdf.

Abstract:
Thesis (M.S. in Operations Analysis)--Naval Postgraduate School, June 2010.
Thesis Advisor(s): Johnson, Rachel; Second Reader: Lucas, Tom. "June 2010." Description based on title screen as viewed on July 14, 2010. Author(s) subject terms: Enlisted specialty model, manpower, design of experiments, linear programming, Plackett-Burman design, D-optimal Latin hypercube. Includes bibliographical references (p. 49). Also available in print.
44

Kohout, James. "Design and performance analysis of MPI-SHARC a high-speed network service for distributed digital signal processor systems /." [Gainesville, Fla.] : University of Florida, 2001. http://etd.fcla.edu/etd/UF/anp4297/MASTER.pdf.

Abstract:
Thesis (M.S.)--University of Florida, 2001.
Title from first page of PDF file. Document formatted into pages; contains ix, 69 p.; also contains graphics. Vita. Includes bibliographical references (p. 66-68).
45

Kraemer, Eileen T. "A framework, tools, and methodology for the visualization of parallel and distributed systems." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/9214.

46

Knee, Simon. "Opal : modular programming using the BSP model." Thesis, University of Oxford, 1997. http://ora.ox.ac.uk/objects/uuid:97d95f01-a098-499c-8c07-303b853c2460.

Abstract:
Parallel processing can provide the huge computational resources that are required to solve today's grand challenges, at a fraction of the cost of developing sequential machines of equal power. However, even with such attractive benefits the parallel software industry is still very small compared to its sequential counterpart. This has been attributed to the lack of an accepted parallel model of computation, therefore leading to software which is architecture dependent with unpredictable performance. The Bulk Synchronous Parallel (BSP) model provides a solution to these problems and can be compared to the Von Neumann model of sequential computation. In this thesis we investigate the issues involved in providing a modular programming environment based on the BSP model. Using our results we present Opal, a BSP programming language that has been designed for parallel programming-in-the-large. While other BSP languages and libraries have been developed, none of them provide support for libraries of parallel algorithms. A library mechanism must be introduced into BSP without destroying the existing cost model. We examine such issues and show that the active library mechanism of Opal leads to algorithms which still have predictable performance. If algorithms are to retain acceptable levels of performance across a range of machines then they must be able to adapt to the architecture that they are executing on. Such adaptive algorithms require support from the programming language, an issue that has been addressed in Opal. To demonstrate the Opal language and its modular features we present a number of example algorithms. Using an Opal compiler that has been developed we show that we can accurately predict the performance of these algorithms. The thesis concludes that by using Opal it is possible to program the BSP model in a modular fashion that follows good software engineering principles.
This enables large scale parallel software to be developed that is architecture independent, has predictable performance and is adaptive to the target architecture.
47

Lin, Chungping. "The RMT (Recursive multi-threaded) tool: A computer aided software engineeering tool for monitoring and predicting software development progress." CSUSB ScholarWorks, 1998. https://scholarworks.lib.csusb.edu/etd-project/1787.

48

Zhu, Xinjie, and 朱信杰. "START : a parallel signal track analytical research tool for flexible and efficient analysis of genomic data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2015. http://hdl.handle.net/10722/211136.

Abstract:
Signal Track Analytical Research Tool (START) is a parallel system for analyzing large-scale genomic data. Currently, genomic data analyses are usually performed by using custom scripts developed by individual research groups, and/or by the integrated use of multiple existing tools (such as BEDTools and Galaxy). The goals of START are 1) to provide a single tool that supports a wide spectrum of genomic data analyses that are commonly done by analysts; and 2) to greatly simplify these analysis tasks by means of a simple declarative language (STQL) with which users only need to specify what they want to do, rather than the detailed computational steps as to how the analysis task should be performed. START consists of four major components: 1) A declarative language called Signal Track Query Language (STQL), which is a SQL-like language we specifically designed to suit the needs for analyzing genomic signal tracks. 2) A STQL processing system built on top of a large-scale distributed architecture. The system is based on the Hadoop distributed storage and the MapReduce Big Data processing framework. It processes each user query using multiple machines in parallel. 3) A simple and user-friendly web site that helps users construct and execute queries, upload/download compressed data files in various formats, manage stored data, queries and analysis results, and share queries with other users. It also provides a complete help system, detailed specification of STQL, and a large number of sample queries for users to learn STQL and try START easily. Private files and queries are not accessible by other users. 4) A repository of public data popularly used for large-scale genomic data analysis, including data from ENCODE and Roadmap Epigenomics, that users can use in their analyses.
Computer Science
Doctoral
Doctor of Philosophy
49

Andersson, Jakob. "Automatic Invoice Data Extraction as a Constraint Satisfaction Problem." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-411596.

Abstract:
Invoice processing has traditionally been heavily dependent on manual labor, where the task is to identify and move certain information from an origin to a destination. This is a time-demanding task with a high interest in automation to reduce execution time, fault risk and cost. With the ever-growing interest in automation and Artificial Intelligence (AI), this thesis explores the possibilities of automating the task of extracting and mapping information of interest by defining the problem as a Constraint Optimization Problem (COP) using numeric relations between present information. The problem is then solved by extracting the numerical values in a document and utilizing them as an input space where each combination of numeric values is tested using a backend solver. Several different models were defined, using different approaches and constraints on relations between possible existing fields. A solution to an invoice was considered correct if the total, tax, net and rounding amounts were estimated correctly. The final best achieved results were 84.30% correct and 8.77% incorrect solutions on a set of 1400 various types of invoices. The achieved results show a promising alternative route to proposed solutions using e.g. machine learning or other intelligent solutions using graphical or positional data. Since it regards only the numerical values present in each document, the proposed solution is decentralized and can therefore be implemented and run on any set of invoices without any pre-training phase.
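The numeric-relation idea can be sketched as a brute-force search over the extracted values for fields satisfying total ≈ net + tax + rounding. The field constraints below (positive net, tax not exceeding net, a small rounding term) are illustrative assumptions, not the thesis's exact models, which use a constraint solver rather than exhaustive enumeration:

```python
from itertools import permutations

def find_amount_fields(values, max_rounding=0.05):
    """Search the numeric values extracted from an invoice for a
    (total, net, tax, rounding) assignment with total ~= net + tax + rounding."""
    for total, net, tax in permutations(values, 3):
        rounding = round(total - (net + tax), 2)
        # Assumed plausibility constraints on the candidate fields.
        if abs(rounding) <= max_rounding and net > 0 and 0 <= tax <= net:
            return {"total": total, "net": net, "tax": tax, "rounding": rounding}
    return None
```

For the values [125.0, 100.0, 25.0, 42.0] this identifies total 125.0, net 100.0, tax 25.0 with zero rounding; when no triple satisfies the relation, it returns None.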
50

Shaw, Robert. "Implementation of an activity coordinator for an activity-based distributed system /." Online version of thesis, 1988. http://hdl.handle.net/1850/10450.
