Theses / dissertations on the topic "Qa76.54"

Follow this link to see other types of publications on the topic: Qa76.54.

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

See the 44 best works (theses / dissertations) for research on the subject "Qa76.54".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Browse theses / dissertations from a wide range of scientific fields and compile a correct bibliography.

1

Triastuti, Sugiyarto Endang. "Analysing rounding data using radial basis function neural networks model". Thesis, University of Northampton, 2007. http://nectar.northampton.ac.uk/2809/.

Full text of the source
Abstract:
Unspecified counting practices used in data collection may create rounding to certain "base" numbers, which can have serious consequences for data quality. Statistical methods for analysing missing data are commonly used to deal with the issue, but they can actually aggravate the problem. Rounded data are not missing data; instead, some observations are simply lumped systematically onto certain base numbers, reflecting the rounding process or counting behaviour. A new method for analysing rounded data would therefore be academically valuable. The neural network model developed in this study fills the gap and serves that purpose by complementing and enhancing conventional statistical methods. The model detects, analyses, and quantifies the existence of periodic structures in a data set caused by rounding. The robustness of the model is examined using simulated data sets containing specific rounding numbers at different levels. The model is also subjected to theoretical and numerical tests to confirm its validity before being used in real applications. Overall, the model performs very well, making it suitable for many applications. The assessment results show the importance of using the right best fit in rounding detection. The detection power and cut-off point estimation also depend on the data distribution and the rounding base numbers. Detecting rounding to prime numbers is easier than to non-prime numbers due to the unique characteristics of the former: the bigger the number, the easier the detection. This is in complete contrast with non-prime numbers, where the bigger the number, the more "factor" numbers there are to distract rounding detection. Using a uniform best fit on uniform data produces the best result and the lowest cut-off point; the consequence of using a wrong best fit on uniform data is, however, also the worst.
The model performs best on data containing 10-40% rounding levels, as lower or higher rounding levels produce an unclear rounding pattern or distort the rounding detection, respectively. The modulo-test method suffers from the same problem. Real-data applications on religious census data confirm the modulo-test finding that the data contain rounding to base 5, while applications on cigarettes-smoked and alcohol-consumed data show good detection results. The cigarette data seem to contain rounding to base 5, while the alcohol consumption data indicate no rounding patterns, which may be attributed to the ways the two data sets were collected. The modelling applications can be extended to other areas in which rounding is common and can have significant consequences. The model can be refined to include a data-smoothing process and made user friendly as an online modelling tool, maximizing its potential use.
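The modulo-test referred to in this abstract can be sketched as a simple excess-of-multiples check: for each candidate rounding base, compare the observed fraction of exact multiples with the fraction expected if no rounding occurred. This is an illustrative assumption only; the function name, candidate bases and toy data below are invented, and the thesis's actual detection model (a radial basis function network) is far more elaborate.

```python
def modulo_test(data, bases=(2, 5, 10, 25)):
    """For each candidate rounding base, estimate the excess fraction of
    observations that are exact multiples of it, relative to the fraction
    expected for roughly uniform, unrounded data (1/base)."""
    n = len(data)
    scores = {}
    for base in bases:
        observed = sum(1 for x in data if x % base == 0) / n
        expected = 1.0 / base
        scores[base] = observed - expected  # large positive excess suggests rounding
    return scores

# Heaped toy data: most respondents report multiples of 5.
reported = [5, 10, 10, 15, 20, 20, 20, 25, 30, 7, 13, 30, 40, 5, 10]
print(modulo_test(reported))
```

On this toy sample the excess for base 5 dominates, mirroring the base-5 heaping the abstract reports for census and cigarette data.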
ABNT, Harvard, Vancouver, APA, etc. styles
2

Ampratwum, Cecilia S. "Identification of chemical species using artificial intelligence to interpret optical emission spectra". Thesis, University of Northampton, 1999. http://nectar.northampton.ac.uk/3004/.

Full text of the source
Abstract:
The nonlinear modelling capabilities of artificial neural networks (ANNs) are renowned in the field of artificial intelligence (AI) for capturing knowledge that can be very difficult to understand otherwise. Their ability to be trained on representative data within a particular problem domain and to generalise over a set of data makes them efficient predictive models. One problem domain that contains complex data that would benefit from the predictive capabilities of ANNs is that of optical emission spectra (OES). OES is an important diagnostic for monitoring plasma species within plasma processing. Normally, OES spectral interpretation requires significant prior expertise from a spectroscopist. One way of alleviating this intensive demand, in order to interpret OES spectra quickly, is to interpret the data using an intelligent pattern recognition technique such as ANNs. This thesis investigates and presents MLP ANN models that can successfully classify chemical species within OES spectral patterns. The primary contribution of the thesis is the creation of deployable ANN species models that can predict OES spectral line sizes directly from six controllable input process parameters, and the implementation of a novel rule extraction procedure to relate the real multi-output values of the spectral line sizes to individual input process parameters. Not only are the trained species models excellent in their predictive capability, but they also provide the foundation for extracting comprehensible rules. A secondary contribution of this thesis is an adapted fuzzy rule extraction system that attaches a quantitative measure of confidence to individual rules. The most significant contribution to the field of AI generated by the work presented in this thesis is that the rule extraction procedure utilises predictive ANN species models that employ real, continuously valued multi-output data.
This is an improvement on rule extraction from trained networks, which normally focuses on discrete binary outputs.
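Attaching a quantitative confidence to an individual rule can be illustrated, in a very reduced form, as the fraction of samples covered by the rule's antecedent that also satisfy its consequent. Everything below (names, toy parameter/line-size pairs, thresholds) is an invented sketch, not the thesis's adapted fuzzy system:

```python
def rule_confidence(samples, antecedent, consequent):
    """Confidence of the rule 'IF antecedent THEN consequent': the fraction
    of samples satisfying the antecedent that also satisfy the consequent."""
    covered = [s for s in samples if antecedent(s)]
    if not covered:
        return 0.0
    return sum(1 for s in covered if consequent(s)) / len(covered)

# Toy samples: (process_parameter, line_size) pairs standing in for one
# controllable input parameter and a predicted spectral line size.
samples = [(0.2, 0.1), (0.3, 0.15), (0.8, 0.7), (0.9, 0.75), (0.85, 0.2)]
conf = rule_confidence(samples,
                       antecedent=lambda s: s[0] > 0.5,   # "parameter is high"
                       consequent=lambda s: s[1] > 0.5)   # "line size is large"
print(conf)  # 2 of the 3 high-parameter samples have a large line size
```

The point of such a measure, as in the abstract, is that a rule can be reported together with how reliably the trained model's continuous outputs actually support it.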
ABNT, Harvard, Vancouver, APA, etc. styles
3

Johnson, Mark. "The Dyslexic User's Interface Support Tool (DUIST) : a framework for performance enhancing interface adaptation strategies for dyslexic computer users". Thesis, University of Northampton, 2007. http://nectar.northampton.ac.uk/2683/.

Full text of the source
Abstract:
Given the nature of the symptoms experienced by dyslexic individuals (e.g. defective visual processing, short-term memory deficit and motor control problems), an investigation into support strategies to aid persons with the condition seems strongly justifiable. Accordingly, an extensive review of existing support techniques for dyslexic computer users is presented, leading to the formulation of four central research models: dyslexia symptoms, symptom-alleviating interface strategies, adjustable interface components and a dynamically adaptable interface preference elicitation mechanism. These models provide the foundation for the design of the Dyslexic User's Interface Support Tool (DUIST) framework. Using a user-centred design approach, the support framework is developed, tested and subsequently evaluated, with positive results. Performance gains for dyslexic subjects in reading speed and reading accuracy exemplify the apparent benefits of framework utilisation (e.g. dyslexic mean reading speed increased by 4.98 wpm vs. control gains of 0.18 wpm; dyslexic mean reading errors reduced by 0.64 per 100 words vs. control reductions of 0.06 errors per 100 words). Subsequent research into the long-term impact of framework utilisation, the perceived benefits of applying the research-formulated models to interfaces designed for dyslexics, and alternative strategies for portability all now seems justified. That said, the findings presented thus far warrant investigation by any reader actively interested in dyslexia; strategies for dyslexia symptom relief; support environments for dyslexic computer users; applications of adaptive interfaces; and by potential system designers who may be considering developing any type of graphical interface for a dyslexic user group.
ABNT, Harvard, Vancouver, APA, etc. styles
4

Wang, Yijun. "Development of an artificial neural network model to predict expert judgement of leather handle from instrumentally measured parameters". Thesis, University of Northampton, 2009. http://nectar.northampton.ac.uk/3581/.

Full text of the source
Abstract:
Leather is a widely used material whose handling character is still assessed manually by experienced people in the leather industry. The aim of this study was to provide a new approach to such characterisation by developing artificial neural network models to investigate the relationship between the subjective assessment of leather handle and its measurable physical characteristics. Two collections of commercial leather samples, provided by TFL and PITTARDS, were studied in this project. While the handle of the TFL collection covered a varied range, the PITTARDS collection consisted of relatively soft leathers with less variation within the collection. Descriptive Sensory Analysis was used to identify and quantify the subjective assessment of leather handle. A panel of leather experts was organised and trained to: 1) define attributes describing leather handle; 2) assess specific leather handles by responding to questionnaires seeking information about the above attributes. According to the analysis of the raw data and the assessment observations, the attributes used for training the artificial network models were "stiff", "empty", "smooth", "firm", "high density" and "elastic". Various physical measurements relating to leather handle were carried out: standard leather thickness, apparent density, thickness under 1 gram and 2 gram loads, resistance to compression, resistance to stretching, surface friction, modified vertical loop deformation, drooping angle and BLC softness. The parameters from each measurement were all scaled to the range 0 to 1 before being fed into the network models. Artificial neural networks were developed by learning from the TFL examples and then tested on the PITTARDS collection. In the training stage, parameters from the physical measurements and attribute gradings provided by Descriptive Sensory Analysis were fed into the networks as input and desired output, respectively.
In the testing stage, physical measurement parameters were input to the trained network, and the output of the network, which was the prediction of the leather handle, was compared with the gradings given by the panel. The testing results showed that the neural network models developed were able to judge the handle of a newly presented leather as well as an expert. Statistical methods were explored in the development of the artificial neural network models. Principal Component Analysis was used to classify the attributes of leather handle and demonstrated that the predominant and most representative of the six attributes were "stiff", "empty" and "smooth". A network model called physical2panel, predicting the above three attributes from three physical parameters, was built by adopting a novel pruning method termed "Double-Threshold", used to decide the irrelevance of an input to the model. This pruning method was based on Bayesian methodology and implemented by comparing the overall connection weight of each input to each output against two threshold limits. The pruning results revealed that, among the sixteen physical parameters, only three were important to the model: the reading from the BLC softness gauge, the compression secant modulus and the leather thickness measured under a 1 gram load. Another network model, termed panel2panel, which predicts the other three attributes "firm", "high density" and "elastic" from the prediction of the physical2panel model, was developed and also proved to work as well as a leather expert panel. The concept of a 3D handle space was explored and shown to be a powerful means of demonstrating the findings.
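Two of the numerical steps mentioned here, scaling each measured parameter to the range 0 to 1 and using Principal Component Analysis to find the most representative attributes, can be sketched as follows. The toy grading matrix is invented; the thesis's actual data and "Double-Threshold" pruning are not reproduced.

```python
import numpy as np

def minmax_scale(X):
    """Scale each column to the range 0-1, as the measurements were
    before being fed into the network models."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi - lo == 0, 1, hi - lo)

def pca_loadings(X, k=1):
    """Loadings of the top-k principal components via eigen-decomposition
    of the covariance matrix; large absolute loadings mark the attributes
    that dominate the variation."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]      # eigh returns ascending eigenvalues
    return vecs[:, order[:k]]

# Hypothetical grading matrix: rows are leather samples, columns are
# attributes such as "stiff", "empty", "smooth" (numbers invented).
X = minmax_scale([[1, 2, 0.5], [2, 4, 1.0], [3, 6, 1.5], [4, 8, 2.0]])
print(pca_loadings(X, k=1).ravel())
```

Because the toy columns are perfectly correlated, the first component loads all three attributes equally; with real panel gradings the loadings would single out the dominant attributes, as PCA did for "stiff", "empty" and "smooth" in the thesis.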
ABNT, Harvard, Vancouver, APA, etc. styles
5

Bosman, Oscar. "Testing object-oriented software". Master's thesis, 1999. http://hdl.handle.net/1885/144495.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
6

Tridgell, Andrew. "Efficient algorithms for sorting and synchronization". Phd thesis, 1999. http://hdl.handle.net/1885/144682.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
7

Trentelman, Kerry. "Aspects of Java program verification". Phd thesis, 2006. http://hdl.handle.net/1885/151803.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
8

Ma, Wanli. "T-Cham : a programming language based on transactions and the Chemical Abstract Machine". Phd thesis, 2001. http://hdl.handle.net/1885/148085.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
9

"A tunable version control system for virtual machines in an open-source cloud". 2013. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1291493.

Full text of the source
Abstract:
Open-source cloud platforms provide a feasible alternative for deploying cloud computing on low-cost commodity hardware and operating systems. To enhance the reliability of an open-source cloud, we design and implement CloudVS, a practical add-on system that enables version control for virtual machines (VMs). CloudVS targets a commodity cloud platform that has limited available resources. It exploits content similarities across different VM versions using redundancy elimination (RE), such that only the non-redundant data chunks of a VM version are transmitted over the network and kept in persistent storage. Using RE as a building block, we propose a suite of performance adaptation mechanisms that make CloudVS amenable to different commodity settings. Specifically, we propose a tunable mechanism to balance the storage and disk seek overheads, as well as various I/O optimization techniques to minimize the interference with other co-resident processes. We further exploit a higher degree of content similarity by applying RE to multiple VM images simultaneously, and support the copy-on-write image format. Using real-world VM snapshots, we evaluate CloudVS in an open-source cloud testbed built on Eucalyptus. We demonstrate how CloudVS can be parameterized to balance the performance trade-offs between version control and normal VM operations.
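The redundancy-elimination idea at the core of CloudVS, keeping only non-redundant data chunks across VM versions, can be sketched with fixed-size chunking and hashing. This is a simplified assumption: CloudVS itself is more sophisticated (it tunes chunking against disk-seek overhead, among other things), and the names and sizes below are illustrative only.

```python
import hashlib

CHUNK = 4096  # fixed-size chunks; real RE systems often use content-defined chunking

def dedup_version(image_bytes, store):
    """Split a VM image version into chunks; keep only chunks whose hash is
    not already in the store, and return the version as a list of hashes
    (a 'recipe') from which the full image can be reassembled."""
    recipe = []
    for i in range(0, len(image_bytes), CHUNK):
        chunk = image_bytes[i:i + CHUNK]
        h = hashlib.sha256(chunk).hexdigest()
        if h not in store:
            store[h] = chunk      # only non-redundant data is transmitted/kept
        recipe.append(h)
    return recipe

store = {}
v1 = bytes(16 * CHUNK)                      # first version: 16 all-zero chunks
v2 = bytes(15 * CHUNK) + b"x" * CHUNK       # second version: last chunk changed
dedup_version(v1, store)
dedup_version(v2, store)
print(len(store))  # 2 unique chunks stored: the zero chunk and the modified one
```

Storing the second version costs only one new chunk, which is the storage/network saving the abstract describes.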
Tang, Chung Pan.
Thesis M.Phil. Chinese University of Hong Kong 2013.
Includes bibliographical references (leaves 57-65).
Abstracts also in Chinese.
Title from PDF title page (viewed on 07, October, 2016).
Detailed summary in vernacular field only.
ABNT, Harvard, Vancouver, APA, etc. styles
10

Nagappan, Rajehndra Yegappan. "Mining multidimensional data through compositional visualisation". Phd thesis, 2001. http://hdl.handle.net/1885/146042.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
11

Armstrong, Warren Haley. "Swift : a flexible framework for runtime performance tuning". Phd thesis, 2011. http://hdl.handle.net/1885/151487.

Full text of the source
Abstract:
Many computational kernels require extensive tuning to achieve optimal performance. The tuning process for some kernels must take into account the architecture on which they are executing, the input data that they are processing and the changing levels of contention for limited system resources. Maintaining performance in the face of such fluctuating influences requires kernels to continuously adapt. Swift is a software tool that performs this adaptation. It can be applied to many different target applications. Such an approach is more efficient than developing application-specific code for continuous tuning. Swift performs controlled experiments to gauge the performance of the target application. Results from these experiments are used to guide the execution of the target application. Swift performs periodic re-evaluations of the application and updates the application if environmental conditions or the internal state of the application have caused performance to degrade. The frequency of evaluation is scaled with its likely necessity: Swift performs few evaluations until it detects a potential performance degradation, at which point more detailed assessments are conducted. Swift is constructed using the DynInst library to modify and tune the executing kernel. The effectiveness of Swift depends on the computational expense of utilising this library. A suite of micro-benchmarks was developed to measure this expense. These benchmarks are not specific to Swift, and could guide the design of future DynInst-enabled applications. Swift was applied to tune sparse matrix-vector multiplication kernels. Tuning such kernels requires selecting a matrix storage format and the associated multiplication algorithm. The choice of format depends on the characteristics of the matrix being multiplied as well as on prevailing system conditions and the number of multiplications being conducted. Swift was evaluated using both simulated environments and physical hardware.
Simulated evaluation demonstrated that Swift could correctly select the best matrix format and could react to changing conditions. Evaluations on physical hardware demonstrated that automatic tuning was viable under certain conditions.
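Swift's core loop, running controlled experiments on candidate kernels and keeping the fastest, can be caricatured in a few lines. This sketch is an assumption-laden simplification: Swift uses DynInst to retune a running binary, whereas here the "candidates" are just Python functions being timed, and the kernel names are invented.

```python
import time

def pick_kernel(candidates, matrix, vector, trials=3):
    """Swift-style controlled experiment (simplified): time each candidate
    kernel on the live data and keep the fastest. A real tuner would re-run
    this whenever a performance degradation is suspected."""
    best, best_t = None, float("inf")
    for name, kernel in candidates.items():
        t0 = time.perf_counter()
        for _ in range(trials):
            kernel(matrix, vector)
        t = time.perf_counter() - t0
        if t < best_t:
            best, best_t = name, t
    return best

def dense_mv(rows, v):
    # multiplies every entry, including zeros
    return [sum(a * b for a, b in zip(row, v)) for row in rows]

def sparse_mv(rows, v):
    # skips zero entries; intended to win when the matrix is very sparse
    return [sum(a * v[j] for j, a in enumerate(row) if a) for row in rows]

n = 200
sparse_matrix = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
v = [1.0] * n
print(pick_kernel({"dense": dense_mv, "sparse": sparse_mv}, sparse_matrix, v))
```

Which kernel wins depends on the matrix and the machine, which is precisely why Swift measures rather than assumes.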
ABNT, Harvard, Vancouver, APA, etc. styles
12

Fatwanto, Agung. "A concern-aware requirements engineering framework". Phd thesis, 2011. http://hdl.handle.net/1885/150279.

Full text of the source
Abstract:
Poorly understood and articulated requirements have been widely acknowledged as a main contributor to software development problems. A number of studies suggest that a holistic understanding of the concerns (goals and issues) surrounding software development, and stakeholders' active participation, are two critical factors for the success of requirements engineering. The research documented in this thesis thus aims to solve the problem by developing and demonstrating a new approach for eliciting, analyzing, and specifying various stakeholders' concerns. This aim has been achieved with the development and demonstration of the Concern-Aware Requirements Engineering (CARE) method. The CARE method was developed by combining goal-oriented, scenario-based, and actor-oriented approaches, together with consideration of the object-oriented approach. This combination allows the CARE method to provide a novel approach to requirements engineering. It is novel in the sense that: (i) it combines goal-oriented, scenario-based, and actor-oriented approaches; (ii) it considers an object-oriented specification as the reference for the final format into which the acquired (elicited, analyzed, and specified) information can potentially be transformed; and (iii) it introduces multidimensional information specification by providing coverage for describing multi-feature, multi-description, and multi-domain information. A validation (proof of concept) of the CARE method's capability has been conducted by means of a demonstration using the Voter Tracking System (VTS) as an example. The demonstration provides a proof of concept, an incentive to study the method further, and an illustration of the potential value of combining goal-oriented, scenario-based, and actor-oriented approaches, together with an object-oriented approach, in developing a new requirements engineering method for socio-technical systems.
A verification of the CARE method's suitability for engineering the requirements of socio-technical systems has also been conducted by means of assessment against a requirements engineering analysis framework. The validation and verification show that the CARE method is capable of comprehensively and systematically acquiring (eliciting, analyzing, and specifying) the various concerns (goals and issues) surrounding software development. However, verification of the CARE method against the principles for designing effective visual notations shows that it does not employ an effective visual notation. A tool has also been developed as an enabling technology for the CARE method. A web-based platform was selected and an artefact versioning feature is provided, allowing asynchronous collaborative work by geographically distributed team members located in different timezones.
ABNT, Harvard, Vancouver, APA, etc. styles
13

"Access-pattern-aware data management in cloud platforms". 2015. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1291497.

Full text of the source
Abstract:
Database outsourcing is an emerging paradigm for data management in which data are stored in third-party servers. With the advance of cloud computing, database outsourcing has become popular and highly adopted. However, as a result, many technology challenges have arisen.
In this thesis, we study two problems with respect to these challenges, and propose solutions for each problem with consideration of access patterns. The first problem is raised from the viewpoint of service providers. We study the problem of data allocation in scalable distributed database systems for achieving the high-availability feature of cloud services. We propose a data allocation algorithm, which makes use of time series models built from previous access patterns to perform load forecasting and reallocate data fragments to balance the workload within the system. Simulation results show that, with accurate forecasting, the proposed algorithm gives better performance than general threshold-based algorithms.
The second problem addresses the clients' concern that service providers may not be trustworthy. We first illustrate how service providers can infer sensitive information through query access patterns even when data are encrypted. Then, we propose techniques that break down large queries and randomize query access patterns such that service providers cannot infer sensitive information with a high degree of certainty. Experiments on benchmark data show that a high level of access privacy can be achieved by the proposed techniques with a reasonable overhead.
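The second technique, breaking large queries down and randomizing query access patterns, can be sketched roughly as follows. The splitting granularity, the decoy strategy and all names here are invented for illustration; the thesis's actual privacy mechanism and its guarantees are more carefully constructed.

```python
import random

def obfuscated_range_query(fetch, lo, hi, step=10, decoys=5, key_space=1000):
    """Split one large range query into small sub-queries, mix in decoy
    sub-queries over random parts of the key space, and issue everything in
    random order, so the server cannot reconstruct the true access pattern
    with a high degree of certainty."""
    real = [(s, min(s + step, hi)) for s in range(lo, hi, step)]
    fake = []
    for _ in range(decoys):
        s = random.randrange(0, key_space - step)
        fake.append((s, s + step))
    plan = real + fake
    random.shuffle(plan)                      # server sees a randomized order
    results = {span: fetch(*span) for span in plan}
    return [row for span in real for row in results[span]]  # client keeps real rows

# Hypothetical server-side fetch over encrypted records keyed 0..999.
table = {k: f"rec{k}" for k in range(1000)}
fetch = lambda a, b: [table[k] for k in range(a, b)]
rows = obfuscated_range_query(fetch, 100, 150)
print(len(rows))  # all 50 real records recovered despite the shuffled plan
```

The client pays an overhead (extra decoy sub-queries) in exchange for access privacy, mirroring the trade-off the abstract reports.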
Li, Shun Pun.
Thesis M.Phil. Chinese University of Hong Kong 2015.
Includes bibliographical references (leaves 86-93).
Abstracts also in Chinese.
Title from PDF title page (viewed on 11, October, 2016).
Detailed summary in vernacular field only.
ABNT, Harvard, Vancouver, APA, etc. styles
14

Wong, H'sien Jin. "Integrating software distributed shared memory and message passing programming". Phd thesis, 2010. http://hdl.handle.net/1885/151533.

Full text of the source
Abstract:
Software Distributed Shared Memory (SDSM) systems provide programmers with a shared memory programming environment across distributed memory architectures. In contrast to the message passing programming environment, an SDSM can resolve data dependencies within the application without the programmer having to explicitly specify the required communications. Such ease-of-use is, however, provided at a cost to performance. This thesis considers how the SDSM programming model can be combined with the message passing programming model with the goal of avoiding these additional costs when a message passing solution is straightforward. To pursue the above goal a new SDSM library, named Danui, has been developed. The SDSM manages a shared address space in units of pages. Consistency for these pages is maintained using a variety of home-based protocols. In total, the SDSM includes seven different barrier implementations, five of which support home-migration. Danui is designed with portability and modularity in mind. It is written using the standard Message Passing Interface (MPI) to perform all underlying SDSM related communications. MPI was used both to permit the subsequent exploration of user-level MPI/SDSM programming, and also to enable the SDSM library to exploit optimised MPI implementations on a variety of cluster interconnects. A detailed analysis of the costs associated with the various SDSM operations is given. This analysis characterizes each SDSM operation based upon a number of factors. For example, the size of an SDSM page, the number of modifications made to shared memory, the choice of barrier used etc. This information is then used as a guide to determine which parts of an SDSM program would benefit most if replaced by direct calls to MPI. The integration of the shared memory and message passing (SMMP) programming models within Danui is discussed in detail. 
It is shown that a naïve integration of the SDSM and MPI programming models leads to memory consistency problems when MPI transfers occur to or from memory that is part of the software-managed shared address space. These problems can, however, be overcome by noting the semantics of the shared memory and message passing paradigms, and introducing a few extensions to the MPI interface.
ABNT, Harvard, Vancouver, APA, etc. styles
15

Zigman, John Nigel. "A general framework for the description and construction of hierarchical garbage collection algorithms". Phd thesis, 2004. http://hdl.handle.net/1885/149684.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
16

Spate, J. M. "Data mining as a tool for investigating environmental systems". Phd thesis, 2006. http://hdl.handle.net/1885/149731.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
17

Sharma, Vikram. "A blackboard architecture for a rule-based SQL optimiser". Master's thesis, 1997. http://hdl.handle.net/1885/144244.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
18

Awang, Abu Bakar Normi Sham. "The effects of software design complexity on defects : a study in open-source systems". Phd thesis, 2011. http://hdl.handle.net/1885/150085.

Full text of the source
Abstract:
The aim of this thesis is to investigate whether there is a general correlation between post-delivery defects and system design complexity, by studying measures relating to Data, Structural and Procedural Complexity in object-oriented systems and determining their effect on post-delivery defects. A further aim is to determine whether, during the detailed design phase, measured Data Complexity can estimate measured Procedural Complexity and Class Size for the implemented system. This research is based on the prior work of Card and Glass, who introduced a System Complexity Model as a combination of Structural and Data Complexity and applied their model to eight similar FORTRAN (RATFOR) systems. This research both investigates and extends the Card and Glass model for application to the object-oriented environment. Several adjustments are made to accommodate important characteristics of object-oriented design and languages, such as inheritance and encapsulation. Based on these adjustments, a new System Complexity Model is proposed, which is then applied to 104 open-source systems to investigate its effectiveness in estimating post-delivery defects. The necessary data are extracted from the source code of systems maintained within SourceForge, a popular open-source repository. Included in the data are Version Downloads and the Number of Developers, considered as independent variables for predicting user-reported defects. Spearman's rank correlation coefficient and a Generalized Linear Model (GLM) with Poisson distribution are used to analyze the collected data. The results show that the newly proposed System Complexity (Structural + Data) is not significant for estimating the volume of post-delivery defects (post-DDs). When Structural and Data Complexity are analyzed separately, the results show that Structural Complexity is highly significant in estimating the number of post-DDs.
Other important findings include: 1) Data Complexity can effectively estimate Procedural Complexity and Class Size; 2) the ratio of System Complexity to Procedural Complexity is useful for estimating the probability of Defect Density and Class Size. This ratio represents the mapping of metrics obtained during the detailed design phase onto Procedural Complexity, which is measurable during implementation (writing of the source code).
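Spearman's rank correlation coefficient, one of the two analysis tools named above, is simply the Pearson correlation of the ranks. A tie-free sketch with invented complexity/defect numbers:

```python
import numpy as np

def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.
    (Ties would need midranks; this sketch assumes no ties.)"""
    rx = np.argsort(np.argsort(x)).astype(float)  # rank of each observation
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Toy data: structural complexity scores vs. post-delivery defect counts.
complexity = [1.2, 3.4, 2.2, 5.1, 4.0]
defects    = [0,   4,   1,   9,   5]
print(spearman(complexity, defects))  # 1.0: defect rank rises with complexity rank
```

A rank correlation is a natural choice for defect counts, which are skewed and not normally distributed, so a monotone rather than linear association is what matters.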
ABNT, Harvard, Vancouver, APA, etc. styles
19

Fenwick, Stephen Peter. "State-based performance analysis of multicomputer object store applications". Phd thesis, 1998. http://hdl.handle.net/1885/145984.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
20

Hutchins, Matthew Alexander. "Modelling visualisation using formal algebra". Phd thesis, 1999. http://hdl.handle.net/1885/147627.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
21

"Parallel MIMD computation : the HEP supercomputer and its applications". MIT Press, 1985. http://hdl.handle.net/1721.1/1745.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
22

Cohen, Jonathan Asher. "Coherence for rewriting 2-theories : general theorems with applications to presentations of Higman-Thompson groups and iterated monoidal categories". Phd thesis, 2008. http://hdl.handle.net/1885/151114.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
23

Abate, Pietro. "The tableau workbench : a framework for building automated tableau-based theorem provers". Phd thesis, 2007. http://hdl.handle.net/1885/149636.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
24

Murshed, Mohammad Manzur. "The reconfigurable mesh : programming model, self-simulation, adaptability, optimality, and applications". Phd thesis, 1999. http://hdl.handle.net/1885/147938.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
25

Tang, Thanh Tin. "Quality-oriented information retrieval in a health domain". Phd thesis, 2007. http://hdl.handle.net/1885/150696.

Full text of the source
ABNT, Harvard, Vancouver, APA, etc. styles
26

Shames, Iman. "Formation control and coordination of autonomous agents". Phd thesis, 2010. http://hdl.handle.net/1885/150541.

Full text of the source
Abstract:
The primary purpose of this thesis is to present new results in the field of localization, co-ordination and control of multi-agent systems. In the first part of the thesis, initially, the problem of localization in the presence of inter-agent noisy measurements is formalized and it is established that approximate localizability, i.e. the ability to calculate the approximate positions of the agents, is a generic property and as long as the magnitude of noise is smaller than an upper bound, one can solve the approximate localization problem. Moreover, it is shown that the accuracy of the approximate localization solution using distance measurements of a formation depends on the choice of the nodes with the known positions, anchors, in the formation. Additionally, a method to select these anchors in the network is introduced which minimizes some performance index associated with the error in the approximate solution for the positions of the agents in the formation. In the next chapter, some methods based on polynomial optimization are proposed that can be employed to solve two important problems of cooperative target localization and reference frame determination using different types of measurements. The first part of the thesis is concluded by addressing another localization problem that arose in an experiment conducted by Australia Defence Science and Technology Organization (DSTO). The problem of interest is to localize a formation of unmanned aerial vehicles (UAVs) capable of measuring the inter-agent distances or angles, and the angles subtended at each of them by two landmarks at known positions. We tackle this problem using tools from graph theory and linkage mechanism design. In the second part of this thesis, we shift our focus to motion control of autonomous agents. 
First, we address the problem of simultaneous localization and circumnavigation of an initially stationary target at an unknown position by a single agent that is aware of its own trajectory and capable of measuring its distance to the target. We propose a control law and an estimator that achieve this objective exponentially fast. Later, we extend our analysis to the case where the target is moving and calculate an upper bound on the estimation error in terms of the target speed. The last problem that we consider is that of forcing a set of agents, initially at arbitrary positions and subject to a constant speed constraint, to rotate around a target at a known position while forming a prescribed formation shape and guaranteeing that no collision occurs. We show that our proposed algorithm achieves this objective in finite time under some mild and realistic assumptions.
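The distance-only measurement setting this abstract describes can be illustrated with a toy example: with noiseless range measurements taken from a few known agent positions, a stationary target's position follows from linear least squares (classic trilateration). This sketch only illustrates the measurement model, not the estimator proposed in the thesis; all positions and values are assumptions for the example.

```python
import numpy as np

target = np.array([3.0, -2.0])                    # unknown in practice

# Agent positions along its (known) trajectory, and the corresponding
# distance-only measurements to the target.
agents = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [4.0, 4.0]])
d = np.linalg.norm(agents - target, axis=1)

# Subtracting the first range equation removes the quadratic term in the
# unknown x:  2 (p_i - p_0) . x = ||p_i||^2 - ||p_0||^2 - (d_i^2 - d_0^2)
A = 2.0 * (agents[1:] - agents[0])
b = (np.sum(agents[1:] ** 2, axis=1) - np.sum(agents[0] ** 2)
     - (d[1:] ** 2 - d[0] ** 2))
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(est, 6))                           # recovers the target position
```

With noisy measurements the same linear system is solved in the least-squares sense, which is where anchor placement starts to matter for accuracy.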
27

O'Hagan, Rochelle G. "Vision-based gesture recognition as an interface to virtual environments". PhD thesis, 2002. http://hdl.handle.net/1885/148650.

Full text of the source
28

Lin, Yi. "Virtual machine design and high-level implementation languages". Master's thesis, 2013. http://hdl.handle.net/1885/149698.

Full text of the source
Abstract:
System programming tasks, such as implementing language virtual machines (VMs), are by convention associated with low-level programming languages such as C/C++. Low-level languages provide efficient semantics for directly accessing hardware resources and are thus a natural choice for system programming. However, two trends in system programming are questioning this convention, challenging the ruling role of low-level languages in this field. On one hand, increasing hardware complexity requires languages to provide abstraction over complicated and diverse architectures. On the other hand, software size and complexity are also increasing as hardware evolves, making security, correctness and productivity an even bigger challenge. Thus, efforts have been made to use higher-level languages for system programming in order to improve safety, productivity and correctness, as well as to maintain competitive performance. Research has confirmed the feasibility and performance of implementing language virtual machines in high-level languages. The idea is novel and compelling. However, it leaves a trail of software engineering challenges, such as unclear VM/application context and poor portability, which may in turn hamper the benefits that come with high-level languages. My thesis is that imposing clearly defined constraints on code structure, context transition, and language features addresses important software engineering pitfalls that arise when using high-level languages to design and implement flexible and efficient virtual machines.
This thesis is divided into two parts, each addressing a major software engineering pitfall in the area of system programming with high-level languages: 1) clarifying VM/application interdependencies both statically (reflected by code structure and annotations) and dynamically (maintained as run-time information) with very low overhead; and 2) defining RJava, a restricted but still expressive subset of Java suitable for implementing VM components where portability and bootstrapping are major concerns. Prior research paid great attention to two key factors of system programming with high-level languages, the expressiveness of high-level languages and performance, while paying less attention to software engineering concerns. This thesis focuses on two of the software engineering pitfalls, and shows that they can be fixed while preserving both expressiveness and performance. This thesis should help those designing new virtual machines, help improve existing ones, and so encourage the implementation of virtual machines in high-level languages.
29

Over, Andrew James. "Two approaches to multiprocessor memory-system simulation". PhD thesis, 2008. http://hdl.handle.net/1885/148357.

Full text of the source
30

Mulerikkal, Jaison Paul. "Service oriented approach to high performance scientific computing". PhD thesis, 2013. http://hdl.handle.net/1885/150949.

Full text of the source
Abstract:
Service Oriented Architectures (SOA) are used in business circles to harness the distributed resources of an enterprise with relative ease and convenience. SOA is perceived to deliver programmability, scalability and efficiency under heterogeneity. The scientific community has long aspired to a similarly convenient approach for solving "not so embarrassingly parallel" scientific problems with expected levels of high performance, especially in heterogeneous conditions. One of the major challenges in parallelizing scientific algorithms is the apparent interdependency of their tasks (atomic units of parallel work), which results in overly fine granularity. The computational advantages of parallelizing those scientific algorithms can be overshadowed by the costs involved in communications between non-optimal fine-grained tasks in a SOA environment. The aim of this PhD research is to overcome these challenges and to empower scientists and researchers with SOA tools to develop, with relative ease, high performance scientific applications that can perform well under heterogeneous environments. The research has produced a scalable and heterogeneity-oblivious SOA middleware - ANU-SOAM. It implements a popular enterprise SOA middleware API (the IBM Platform Symphony API) and thus ensures programmability. It offers better performance under heterogeneous conditions by implementing load balancing and scheduling techniques. Along with its compute services it provides a Data Service, which helps application programmers to develop codes that can effectively circumvent the interdependency of tasks and thereby reduce communications to ensure high performance outcomes. The Data Service achieves this by allowing data to be stored, accessed, modified and synchronized (using add, get, put and sync functionalities) at host and compute nodes according to the application logic.
It is also observed that the programming model supported by the Data Service can help ANU-SOAM applications to access compute resources in a "Cloud IaaS" over high latency networks (like the Internet) with much lower overheads compared to the conventional SOA programming models.
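As a rough illustration of the add/get/put/sync style of interface the abstract names, the sketch below models a node-local store that batches updates and exchanges them with the host only on sync. The class, its semantics, and the data here are illustrative assumptions, not ANU-SOAM's actual API.

```python
# Hypothetical Data Service sketch: updates are buffered at the compute
# node and pushed to the host in one bulk exchange, replacing many
# fine-grained messages between interdependent tasks.

class DataService:
    def __init__(self, host_store):
        self.host = host_store          # shared store held at the host node
        self.local = dict(host_store)   # this compute node's snapshot
        self.dirty = {}                 # updates not yet pushed to the host

    def add(self, key, value):          # create a new entry locally
        self.local[key] = value
        self.dirty[key] = value

    def get(self, key):                 # read from the local snapshot
        return self.local[key]

    def put(self, key, value):          # modify an existing entry locally
        self.local[key] = value
        self.dirty[key] = value

    def sync(self):                     # one bulk exchange with the host
        self.host.update(self.dirty)
        self.dirty.clear()
        self.local.update(self.host)

host = {"grid": [0, 0, 0]}
node = DataService(host)
node.put("grid", [1, 2, 3])             # local update, no communication yet
node.add("step", 1)
node.sync()                             # single round of communication
print(host["step"], host["grid"])       # host sees both updates after sync
```

Batching application-logic updates this way is one generic route to the reduced task-to-task communication the abstract credits to the Data Service.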
31

Teh, Chin Hao Alvin. "Normative manipulation as a way of improving the performance of software engineering groups : three experiments". PhD thesis, 2012. http://hdl.handle.net/1885/149832.

Full text of the source
Abstract:
As the size of software development projects increases, so too does the number of people working on them. This growth in the size of software groups has brought about a new focus on the sociological issues associated with software development. There is a growing body of work that seeks to understand how software engineers work together effectively as a group, and also to identify factors that consistently enable increased productivity. Social Psychology looks at the interactions between individuals in groups (group dynamics) and may provide an applicable means to address this increased need for enhanced group effectiveness in Software Engineering. The thesis of this research is that it is possible to apply Social Psychology research (in particular Normative Manipulation) to Software Engineering groups, that Normative Manipulation can be used effectively to increase the performance of Software Engineering groups on some types of tasks, and finally, that this technique is adoptable by practising Software Engineering groups as it is non-intrusive. Normative Manipulation is a technique in which particular behaviours are made to be favoured by group members. These behaviours are then actively practised by all group members, and may in turn increase the effectiveness of groups on particular tasks - for instance, a group favouring the behaviour of objectivity would be more inclined to assess provided information on its logical merits and may then in turn be more likely to uncover other related, but less obvious, information. Since the success of elicitation and specification of software requirements is related to how complete the produced specification is, it follows that such a group could have increased performance on software elicitation and specification tasks. We demonstrate the validity of the thesis claims by performing three studies.
The first study attempts to replicate the results of a Social Psychology experiment on a sample of participants drawn from a Software Engineering population. The second study attempts to show that it is possible to affect the effectiveness of Requirements Elicitation by Software Engineering groups by instilling different norms. The third study applies Normative Manipulation on a practising Software Group to identify if the technique can be applied transparently as part of a normal Requirements Elicitation task. -- provided by Candidate.
32

Semenova, Tatiana. "Identification of interesting patterns in large health databases". PhD thesis, 2006. http://hdl.handle.net/1885/151258.

Full text of the source
33

Yang, Xi. "Locality aware zeroing : exploiting both hardware and software semantics". PhD thesis, 2011. http://hdl.handle.net/1885/149962.

Full text of the source
Abstract:
Both managed and native languages use memory safety techniques to ensure program correctness and as a security defense. A critical element of memory safety is to initialize newly allocated memory to zero before making it available to the program. In this thesis I explore the performance impact of zero initialization and show that it comes with a substantial overhead. I also show that this overhead can be largely removed with new designs that exploit both the language semantics of zero initialization and the hardware semantics of memory hierarchies. Programmers increasingly choose managed languages to develop large scale applications. One of the reasons is that managed languages offer automatic memory management (garbage collection), which protects against memory leaks and dangling pointers. Garbage collection encourages programs to allocate large numbers of small and medium size objects, leading to significant overheads of zero initializing objects - on average the direct cost of zeroing is 4% to 6% and up to 50% of total application time on a variety of modern processors. Zeroing incurs indirect costs as well, which include memory bandwidth consumption and cache displacement. Existing virtual machines (VMs) either: a) minimize direct costs by zeroing in large blocks, or b) minimize indirect costs by integrating zeroing into the allocation sequence to reduce cache displacement. This thesis first describes and evaluates zero initialization costs and the two existing designs. The microarchitectural analysis of prior designs inspires two better designs that exploit concurrency and non-temporal cache-bypassing store instructions to reduce the direct and indirect costs simultaneously. The best strategy is to adaptively choose between the two new designs based on CPU utilization. This approach improves over widely used hot-path zeroing by 3% on average and up to 15% on the newest Intel i7-2600 processor, without slowing down any of the benchmarks. 
These results indicate that zero initialization is a surprisingly important source of overhead in existing VMs and that our new software strategies are effective at reducing this overhead. These findings also invite other optimizations, including software elision of zeroing and microarchitectural support.
34

Hawking, David Anthony. "Text retrieval over distributed collections". PhD thesis, 1998. http://hdl.handle.net/1885/147205.

Full text of the source
35

"Actors : a model of concurrent computation in distributed systems". MIT Press, 1986. http://hdl.handle.net/1721.1/1692.

Full text of the source
36

Zhang, Xiaochun. "User specific aspects of pen-based computer input for identity verification". PhD thesis, 1999. http://hdl.handle.net/1885/147752.

Full text of the source
37

Frampton, Daniel. "Garbage collection and the case for high-level low-level programming". PhD thesis, 2010. http://hdl.handle.net/1885/148396.

Full text of the source
38

Sharma, Vikram. "Informatic techniques for continuous variable quantum key distribution". PhD thesis, 2007. http://hdl.handle.net/1885/150269.

Full text of the source
39

Taylor, Samuel Roger Aldiss. "Streaming geospatial imagery into virtual environments". PhD thesis, 2001. http://hdl.handle.net/1885/146251.

Full text of the source
40

He, Zhen. "Integrated buffer management for object-oriented database systems". PhD thesis, 2004. http://hdl.handle.net/1885/148651.

Full text of the source
41

Blackburn, Stephen. "Persistent store interface : a foundation for scalable persistent system design". PhD thesis, 1998. http://hdl.handle.net/1885/145415.

Full text of the source
42

Wang, Yuanzhi. "Organic aggregation : a human-centred and model-driven approach to engineering service-oriented systems". PhD thesis, 2010. http://hdl.handle.net/1885/151460.

Full text of the source
Abstract:
Owing to a widespread trend of globalisation and service economies, there are exponentially increasing demands for Software-Intensive Systems (SIS) in general, and Service-Oriented Systems (SOS) in particular. However, developing and managing these systems presents great challenges. Although current research and practice provide various means to address these challenges, there are many difficult impediments to overcome. This research is motivated by such demands, challenges, and opportunities. The ultimate objective is to understand and address the critical challenges of services engineering. To do so, we develop a multi-phased and iterative research methodology, adapted from typical applied science research methodologies to suit the exploratory nature of this research. Following this methodology, we investigate and analyse the special characteristics of services engineering, such as a high degree of complexity, uncertainty, and volatility. Moreover, some existing approaches and related work are studied and analysed critically. We conclude that the great difficulties of services engineering are fundamentally caused by a lack of disciplined engineering approaches that take into account the rapidly co-evolving socio-technical environments, where both human intellectual capacities and engineering competence need to be well understood and exploited. To realise our vision, we derive a generic engineering framework from a generalisation of other engineering disciplines, on which a services engineering framework called the Organic Aggregation Services Engineering Framework (OASEF) is built. OASEF contains a theoretical foundation that consists of complementary theories and knowledge from multiple disciplines. Some important concepts are also defined, such as services engineering, models and modelling, and Socio-Technical Environments (STE).
Moreover, OASEF contains some guiding principles that provide important guidance for the design and realisation of SOS and services engineering. Based on these conceptual resources, a profound concept called organic aggregation is developed, which takes an organic and synthetic approach to grow and manage systems of any kind. Furthermore, OASEF also incorporates: 1) a generic conceptual process model called Organic Aggregation Process (OAP) in support of organic aggregations of human intellectual and technical capacities; 2) a fully integrated model-driven method to realise OASEF/OAP activities in a systematic and automatic way; 3) a range of domain-specific and general purpose modelling languages for OASEF activities; 4) a mechanism to capture and reuse engineering capacities and to realise automatic system generation; and 5) an integrated tool environment in support of OASEF. Two controlled proof-of-concept case studies are conducted in real world settings, which aim to evaluate and improve OASEF concepts, methods, and mechanisms. Results show that OASEF helps to manage system complexity, agility, and productivity when engineering SOS. Some limitations and insufficiencies are also observed, which require future research. Although this research mainly focuses on SOS and services engineering, its engineering framework, or more specifically, the theoretical foundation, guiding principles, and generic process model, can be applied within a wider scope of software engineering and systems engineering.
43

Rowlands, Thomas Aneirin. "Information retrieval through textual annotations". PhD thesis, 2012. http://hdl.handle.net/1885/149606.

Full text of the source
Abstract:
Many well-known information retrieval models rank documents using scores derived from the query and document text. Some other models integrate query-independent evidence, such as indegree or URL structure, typically as prior probabilities. This thesis proposes a model based on a third category of evidence, external textual annotations. The model considers both the degree of match between queries and annotations and the degree of association between annotations and documents. The annotation retrieval model accommodates anchortext as well as additional sources: click-associated queries, folksonomy 'tags' and microblogs. The characteristics of these types of annotations and their potential for use in information retrieval are studied. In previous work, annotations have been appended to the document text or treated as a field of the document. A new index structure, designed to support the efficient evaluation of queries with the new model, is evaluated. Secondary storage and query processing resources required for the annotations are reduced while preserving result quality. A useful source of annotations is anchortext, but anchortext can be gathered only after links have been created. Click-associated queries and folksonomy tags also take time to accumulate and may not provide up-to-the-minute evidence. However, microblog posts, by their nature, relate to contemporary events and may provide a good method of finding new documents on the web. A method of collecting this evidence and conducting new-web search is proposed. The contributions of the thesis are as follows. First, an investigation into the characteristics of the different types of annotations, particularly folksonomy tags and microblogs. Second, a model and investigation of a practical data structure for utilising annotation data in a search system. Third, an evaluation method for real-world retrieval systems. Finally, a method of collecting evidence for new-web search based on annotations. -- provided by Candidate.
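The two-factor scoring idea the abstract describes, query/annotation match weighted by annotation/document association, can be sketched as below. The data, function names, and the particular match and association measures are illustrative assumptions, not the thesis's actual model.

```python
# Hypothetical annotation-based scorer: a document's score sums, over all
# annotations, how well the annotation matches the query times how strongly
# the annotation is associated with the document (e.g. an anchortext or
# click count). All values here are made up for illustration.

def match(query_terms, annotation):
    """Fraction of query terms appearing in the annotation text."""
    ann_terms = set(annotation.split())
    return sum(t in ann_terms for t in query_terms) / len(query_terms)

def score(doc, query_terms, annotations, assoc):
    """Combine query/annotation match with annotation/document association."""
    return sum(match(query_terms, a) * assoc.get((a, doc), 0.0)
               for a in annotations)

annotations = ["open source search engine", "web crawler tutorial"]
assoc = {("open source search engine", "doc1"): 3.0,
         ("web crawler tutorial", "doc1"): 1.0,
         ("web crawler tutorial", "doc2"): 4.0}
query = ["search", "engine"]
ranked = sorted(["doc1", "doc2"],
                key=lambda d: score(d, query, annotations, assoc),
                reverse=True)
print(ranked)  # "doc1" ranks first for this query
```

Note how the document text itself never appears in the score: the annotations stand in for it, which is what lets such a model rank documents that have not yet been crawled.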
44

Lin, Aleck Chao-Hung. "Designing web sites for enjoyment and learning : a study of museum experiences". PhD thesis, 2007. http://hdl.handle.net/1885/146697.

Full text of the source