Dissertations / Theses on the topic 'General software engineering'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'General software engineering.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Sezer, Bulent. "Software Engineering Process Improvement." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608338/index.pdf.

Abstract:
This thesis presents a software engineering process improvement study. The literature on software process improvement is reviewed, and the current design verification process at one of the Software Engineering Departments (SED) of the X Company, Ankara, Türkiye, is analyzed. Static software development process metrics were calculated for the SED based on a recently proposed approach, and improvement suggestions were made from the resulting metric values following the proposals of that study. The author's suggestions were then discussed with the senior staff of the department to arrive at a final version of the improvements, and the two approaches were compared. Finally, a new software design verification process model is proposed. Some of the suggestions have already been applied and preliminary results have been obtained.
2

King, Stephen F. "Using and evaluating CASE tools : from software engineering to phenomenology." Thesis, University of Warwick, 1995. http://wrap.warwick.ac.uk/36230/.

Abstract:
CASE (Computer-Aided Systems Engineering) is a recent addition to the long line of "silver bullets" that promise to transform information systems development, delivering new levels of quality and productivity. CASE is particularly intriguing because information systems (IS) practitioners spend their working lives applying information technology (IT) to other people's work, and now they are applying it to themselves. CASE research to date has been dominated by accounts of tool development, normative writings (for example practitioner success stories) and surveys recording IT specialists' perceptions. There have been very few in-depth studies of tool use, and very few attempts to quantify benefits; therefore the essence of the CASE process remains largely unexplored, and the views of stakeholders other than the IT specialists have yet to be heard. The research presented here addresses these concerns by adopting a hybrid research approach combining action research, grounded theory and phenomenology and using both qualitative and quantitative data in order to tell the story of a system developer's experience in using CASE tools in three information systems projects for a major UK car manufacturer over a four-year period. The author was the lead developer on all three projects. Action research is a learning process; the researcher is an explorer. At the start of this project it was assumed that the tools would be the focus of the work. As the research progressed it became evident that the tools were but part of a richer organisational context in which culture, politics, history, external initiatives and cognitive limitations played important roles. The author continued to record experiences and impressions of tool use in the project diary together with quality and productivity metrics. But the diary also became home to a story of organisational developments that had not originally been foreseen. The principal contribution made by the work is to identify the narrow positivistic nature of CASE knowledge, and to show via the research stories the overwhelming importance of organisational context to systems development success and how the exploration of context is poorly supported by the tools. Sixteen further contributions are listed in the Conclusions to the thesis, including a major extension to Wynekoop and Conger's CASE research taxonomy, an identification of the potentially misleading nature of quantitative IS assessment and further evidence of the limitations of the "scientific" approach to systems development. The thesis is completed by two proposals for further work. The first seeks to advance IS theory by developing further a number of emerging process models of IS development. The second seeks to advance IS practice by asking the question "How can CASE tools be used to stimulate awareness and debate about the effects of organisational context?", and outlines a programme of research in this area.
3

Tauzovich, Branka. "Causal reasoning in a software advisor." Thesis, University of Ottawa (Canada), 1987. http://hdl.handle.net/10393/5310.

4

Tasim, Taner. "A general framework for scraping newspaper websites." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-59044.

Abstract:
Data streaming is nowadays one of the most common approaches used by websites and applications to supply the end user with the latest articles and news. As many news websites and companies are founded every day, such data centres must be flexible, and it must be easy to introduce a new website to keep track of. The main goal of this project is to investigate two frameworks in which implementing a robot for a given website should take an acceptable amount of time. This is a challenging task: first, it aims at optimising a framework, which means putting less effort into the work while obtaining the same result; second, the frameworks will ultimately be used by professors and students, so quality and robustness play a big role. To overcome this challenge, two different types of news websites were investigated, and the approximate time to implement a single robot was extracted from this process. With that time in mind, the new frameworks were implemented with the goal of spending less time implementing a new web robot. The results are two general frameworks for two different types of websites, in which implementing a robot does not take much effort or time. The implementation time of a new robot was reduced from 18 hours to approximately 4 hours.
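A minimal sketch of the idea described above, added for illustration: per-site code is isolated behind a small base class, so supporting a new news website only means writing one short "robot" subclass. The class names, URL and parsing rule below are hypothetical and are not taken from the thesis.

```python
# Sketch of a general scraping framework; all names and selectors are invented.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from urllib.request import urlopen
import re


@dataclass
class Article:
    title: str
    url: str


class NewsRobot(ABC):
    """Shared fetching logic lives here; a new site only supplies a start URL
    and a site-specific parser."""

    start_url: str

    def fetch(self, url: str) -> str:
        with urlopen(url) as response:  # plain HTTP GET
            return response.read().decode("utf-8", errors="replace")

    def run(self) -> list[Article]:
        return self.parse(self.fetch(self.start_url))

    @abstractmethod
    def parse(self, html: str) -> list[Article]:
        """Site-specific extraction: the only code written per website."""


class ExampleSiteRobot(NewsRobot):
    start_url = "https://example.com/news"  # hypothetical news site

    def parse(self, html: str) -> list[Article]:
        # Naive extraction of <a href="...">title</a> links; a real robot would
        # use an HTML parser and selectors tuned to the site's layout.
        links = re.findall(r'<a href="([^"]+)"[^>]*>([^<]+)</a>', html)
        return [Article(title=title.strip(), url=url) for url, title in links]


# Usage: articles = ExampleSiteRobot().run()
```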
5

Lin, Jian. "General-purpose user-defined modelling system (GPMS)." Thesis, Lancaster University, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.335145.

6

Niculae, Danut. "General Unpacking : Overview and Techniques." Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-43261.

Abstract:
Over the years, packed malware have started to appear at a more rapid pace. Hackers are modifying the source code of popular packers to create new types of compressors which can fool the Anti Virus software. Due to the sheer volume of packer variations, creating unpacking scripts based on the packer's signature has become a tedious task. In this paper we will analyse generic unpacking techniques and apply them on ten popular compression software. The techniques prove to be successful in nine out of ten cases, providing an easy and accessible way to unpack the provided samples.
7

Hanneghan, Martin. "An architecture to support virtual Concurrent Engineering." Thesis, Liverpool John Moores University, 1998. http://researchonline.ljmu.ac.uk/4902/.

8

Cui, X. "Delay time modelling and software development." Thesis, University of Salford, 2002. http://usir.salford.ac.uk/2159/.

Abstract:
Delay time modelling (DTM) is the process of establishing a mathematical model based on the delay time concept and then using it to improve plant maintenance management. Delay time models can be divided into single-component models (component-tracking models) and complex-system models (pooled-components models). DTM has proved to be a methodology readily embraced by engineers for modelling maintenance decisions, and its application and research have reached a stage where a semi-automated tool can be developed. This thesis presents research on the software development of delay time modelling. Firstly, delay time models for both a single component (the component-tracking model) and a complex system (the pooled-components model) are introduced. The key part is delay time parameter estimation, which is presented in detail using available subjective data, objective data, or both. Secondly, the development of the software package is presented, covering project analysis, database design and program design. In the project analysis phase, the delay time models are transformed into program models, each consisting of three parts: input, processing and output. In the database design phase, tables are created to store processing information, which is then used in the subsequent mathematical modelling. Detailed programming work is given in the program design phase. The major achievements of this research and an open discussion of future work conclude the thesis.
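For context, a standard textbook form of the pooled-components (complex-system) delay time model is given below; it is added here as background on the delay time concept and is not quoted from the thesis. Assume defects arise at a rate λ per unit operating time, each remains dormant for a random delay time h with distribution F(h) before causing a failure, and the plant is inspected every T time units:

```latex
b(T) = \frac{1}{T}\int_0^T F(T-u)\,\mathrm{d}u ,
\qquad
D(T) = \frac{\lambda\,T\,b(T)\,d_b + d_i}{T + d_i}
```

Here b(T) is the probability that a defect results in a breakdown rather than being caught at an inspection, D(T) is the expected downtime per unit time, d_b is the average downtime per breakdown repair and d_i the downtime of an inspection; the inspection interval T is chosen to minimise D(T).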
9

Bose, Vanu G. (Vanu Gopal). "Design and implementation of software radios using a general purpose processor." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/9134.

Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
Includes bibliographical references (p. 113-116).
This dissertation presents the design, implementation and evaluation of a novel software radio architecture based on wideband digitization, a general purpose processor and application level software. The system is designed to overcome the many challenges and exploit the advantages of performing real-time signal processing in a general purpose environment. The main challenge was overcoming the uncertainty in execution times and resource availability. The main advantages are faster clock speeds, large amounts of memory and better development environments. In addition it is possible to optimize the signal processing in conjunction with the application program, since they are running on the same platform. The system has been used to implement a virtual radio, a wireless communication system in which all of the signal processing from the air interface through the application is performed in software. The only functions performed by dedicated hardware are the down conversion and digitization of a wide band of the RF spectrum. The flexibility enabled by this system provides the means for overcoming many limitations of existing communication systems. Taking a systems design approach, the virtual radio exploits the flexibility of software signal processing coupled with wideband digitization to realize a system in which any aspect of the signal processing can be dynamically modified. The work covers several areas, including: the design of an I/O system for digitizing wideband signals as well as transporting the sample stream in and out of application memory; the design of a programming environment supporting real-time signal processing applications in a general purpose environment; a performance evaluation of software radio applications on a general purpose processor; and the design of applications and algorithms suited for a software implementation. Several radio applications including an AMPS cellular receiver and a network link employing frequency hopping with FSK modulation have been implemented and measured. This work demonstrates that it is both useful and feasible to implement real-time signal processing systems on a general purpose platform entirely in software. The virtual radio platform allows new approaches to both system and algorithm design that result in greater flexibility, better technology tracking and improved average performance.
10

Paventhan, Arumugam. "Grid approaches to data-driven scientific and engineering workflows." Thesis, University of Southampton, 2007. https://eprints.soton.ac.uk/49926/.

Abstract:
Enabling the full life cycle of scientific and engineering workflows requires robust middleware and services that support near-realtime data movement, high-performance processing and effective data management. In this context, we consider two related technology areas: Grid computing, which is fast emerging as an accepted way forward for large-scale, distributed and multi-institutional resource sharing, and Database systems, whose capabilities are undergoing continuous change, providing new possibilities for scientific data management in the Grid. In this thesis, we look into the challenging requirements of integrating data-driven scientific and engineering experiment workflows onto the Grid. We consider wind tunnels, which house multiple experiments with differing characteristics, as an application exemplar. This thesis contributes two approaches while attempting to tackle some of the following questions: How to allow domain-specific workflow activity development by hiding the underlying complexity? Can new experiments be added to the system easily? How can the overall turnaround time be reduced by end-to-end experimental workflow support? In the first approach, we show how experiment-specific workflows can help accelerate application development using Grid services. This has been realized with the development of MyCoG, the first Commodity Grid toolkit for .NET supporting multi-language programmability. In the second, we present an alternative approach based on federated database services to realize an end-to-end experimental workflow. We show, with the help of a real-world example, how database services can be building blocks for scientific and engineering workflows.
11

Zheng, Junyu. "Quantification of Variability and Uncertainty in Emission Estimation: General Methodology and Software Implementation." NCSU, 2002. http://www.lib.ncsu.edu/theses/available/etd-05192002-201242/.

Abstract:
The use of probabilistic analysis methods for dealing with variability and uncertainty is becoming more widely recognized and recommended in the development of emission factors and emission inventories. Probabilistic analysis provides decision-makers with quantitative information about the confidence with which an emission factor may be used. Variability refers to the heterogeneity of a quantity with respect to time, space, or different members of a population. Uncertainty refers to the lack of knowledge regarding the true value of an empirical quantity. Ignorance of the distinction between variability and uncertainty may lead to erroneous conclusions regarding emission factors and emission inventories. This dissertation extensively and systematically discusses methodologies associated with the quantification of variability and uncertainty in the development of emission factors and emission inventories, including a method based upon the use of mixture distributions and a method for accounting for the effect of measurement error on variability and uncertainty analysis. A general approach for developing a probabilistic emission inventory is presented. A few example case studies were conducted to demonstrate the methodologies. The case studies range from a utility power plant emission source to highway vehicle emission sources. A prototype software tool, AUVEE, was developed to demonstrate the general approach to developing a probabilistic emission inventory based upon an example utility power plant emission source. A general software tool, AuvTool, was developed to implement all methodologies and algorithms presented in this dissertation for variability and uncertainty analysis. The tool can be used in any quantitative analysis field where variability and uncertainty analysis are needed in model inputs.
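To make the variability/uncertainty distinction concrete, the sketch below shows a generic two-dimensional Monte Carlo analysis of the kind commonly used for probabilistic emission factors: an outer bootstrap loop represents uncertainty about the fitted distribution, while the inner sampling represents variability across the population. The data, the lognormal assumption and the sample sizes are invented for illustration; this is not the AUVEE or AuvTool implementation.

```python
# Two-dimensional Monte Carlo sketch: variability (fitted distribution) vs.
# uncertainty (bootstrap of the fit). Purely illustrative.
import numpy as np

rng = np.random.default_rng(42)
data = rng.lognormal(mean=0.0, sigma=0.5, size=25)  # stand-in for measured emission factors

n_uncertainty, n_variability = 2000, 1000
mean_estimates = np.empty(n_uncertainty)

for i in range(n_uncertainty):
    # Outer loop (uncertainty): bootstrap-resample the data set and refit the distribution.
    resample = rng.choice(data, size=data.size, replace=True)
    mu, sigma = np.log(resample).mean(), np.log(resample).std(ddof=1)
    # Inner loop (variability): draw a population from the fitted distribution.
    population = rng.lognormal(mean=mu, sigma=sigma, size=n_variability)
    mean_estimates[i] = population.mean()  # e.g. a fleet-average emission factor

low, high = np.percentile(mean_estimates, [2.5, 97.5])
print(f"95% uncertainty interval for the mean emission factor: [{low:.3f}, {high:.3f}]")
```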
12

Jun, Yong-Tae. "A feature-based reverse engineering system using artificial neural networks." Thesis, University of Warwick, 1999. http://wrap.warwick.ac.uk/3674/.

Abstract:
Reverse Engineering (RE) is the process of reconstructing CAD models from scanned data of a physical part acquired using 3D scanners. RE has attracted a great deal of research interest over the last decade. However, a review of the literature reveals that most research work has focused on the creation of free-form surfaces from point cloud data. Representing geometry in terms of surface patches is adequate to represent positional information, but cannot capture any of the higher-level structure of the part. Reconstructing solid models is of importance since the resulting solid models can be directly imported into commercial solid modellers for various manufacturing activities such as process planning, integral property computation, assembly analysis, and other applications. This research discusses a novel methodology for extracting geometric features directly from a data set of 3D scanned points, which utilises the concepts of artificial neural networks (ANNs). In order to design and develop a generic feature-based RE system for prismatic parts, the following five main tasks were investigated: (1) point data processing algorithms; (2) edge detection strategies; (3) a feature recogniser using ANNs; (4) a feature extraction module; (5) a CAD model exchanger into other CAD/CAM systems via IGES. A key feature of this research is the incorporation of ANNs in feature recognition. The use of the ANN approach has enabled the development of a flexible feature-based RE methodology that can be trained to deal with new features. ANNs require parallel input patterns. In this research, four geometric attributes extracted from a point set are input to the ANN module for feature recognition: chain codes, convex/concave, circular/rectangular and open/closed attributes. Recognising each feature requires the determination of these attributes. New and robust algorithms are developed for determining these attributes for each of the features. This feature-based approach currently focuses on solving the feature recognition problem for 2.5D shapes such as block pocket, step, slot, hole, and boss, which are common and crucial in mechanical engineering products. This approach is validated using a set of industrial components. The test results show that the strategy for recognising features is reliable.
13

Jafar, Ali, and Mohan Maharjan. "Understandability of General Versus Concrete Test Cases." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4358.

Abstract:
One possibility for automating more of software testing is to have developers write more general test cases. Given a general (parameterized) test case that holds in many situations, software can generate many different test instances and execute them automatically. Thus, even though the developers write fewer and smaller tests, they can test more. However, it is not clear what other effects the use of generalized test cases has. One hypothesis is that "more general test cases are harder to understand than concrete ones and thus would lead to overall tests that are harder to understand". Software understandability can be defined as the extent to which software written by one person can be read and understood by another person without resistance. However, software understandability is hard to measure because it depends on human cognitive behaviour. Software understandability assists in software reusability and software maintainability.
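The contrast between a concrete and a general (parameterized) test case can be shown with a small example. The sketch below uses pytest's parameterization; the function under test and the test data are made up for illustration and do not come from the thesis.

```python
# Concrete vs. general (parameterized) test case; illustrative toy example.
import pytest


def slugify(title: str) -> str:
    """Toy function under test: lower-case a title and join words with hyphens."""
    return "-".join(title.lower().split())


def test_slugify_concrete():
    # Concrete test case: one fixed input, one fixed expected output.
    assert slugify("Hello World") == "hello-world"


@pytest.mark.parametrize(
    "title, expected",
    [
        ("Hello World", "hello-world"),
        ("  Leading and   trailing  ", "leading-and-trailing"),
        ("already-lower", "already-lower"),
    ],
)
def test_slugify_general(title, expected):
    # General test case: one parameterized body, many generated test instances.
    assert slugify(title) == expected
```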
14

Kaloskampis, Ioannis. "Recognition of complex human activities in multimedia streams using machine learning and computer vision." Thesis, Cardiff University, 2013. http://orca.cf.ac.uk/59377/.

Abstract:
Modelling human activities observed in multimedia streams as temporal sequences of their constituent actions has been the object of much research effort in recent years. However, most of this work concentrates on tasks where the action vocabulary is relatively small and/or each activity can be performed in a limited number of ways. In this Thesis, a novel and robust framework for modelling and analysing composite, prolonged activities arising in tasks which can be effectively executed in a variety of ways is proposed. Additionally, the proposed framework is designed to handle cognitive tasks, which cannot be captured using conventional types of sensors. It is shown that the proposed methodology is able to efficiently analyse and recognise complex activities arising in such tasks and also detect potential errors in their execution. To achieve this, a novel activity classification method comprising a feature selection stage based on the novel Key Actions Discovery method and a classification stage based on the combination of Random Forests and Hierarchical Hidden Markov Models is introduced. Experimental results captured in several scenarios arising from real-life applications, including a novel application to a bridge design problem, show that the proposed framework offers higher classification accuracy compared to current activity identification schemes.
15

Kirecci, Ali. "Motion design for high speed machines." Thesis, Liverpool John Moores University, 1993. http://researchonline.ljmu.ac.uk/4937/.

Abstract:
The dynamic performance of a programmable manipulator depends on both the motion profile to be followed and the feedback control method used. To improve this performance the manipulator trajectory requires planning at an advanced level and an efficient control method has to be used. The purpose of this study is to investigate high-level trajectory planning and trajectory tracing problems. It is shown that conventional trajectory planning methods where the motion curves are generated using standard mathematical functions are ineffective for general application especially when velocity and acceleration conditions are included. Polynomial functions are shown to be the most versatile for these applications but these can give curves with unexpected oscillations, commonly called meandering. In this study, a new method using polynomials is developed to overcome this disadvantage. A general motion design computer program (MOTDES) is developed which enables the user to produce motion curves for general body motion in a plane. The program is fully interactive and operates within a graphics environment. A planar manipulator is designed and constructed to investigate the practical problems of trajectory control particularly when operating at high speeds. Different trajectories are planned using MOTDES and implemented on the manipulator. The precise tracing of a trajectory requires the use of advanced control methods such as adaptive control or learning. In learning control, the inputs of the current cycle are calculated using the experience of the previous cycle. The main advantage of learning control over adaptive control is its simplicity. It can be applied more easily in real time for high-speed systems. However, learning algorithms may cause saturation of the driving servo motors after a few learning cycles due to discontinuities being introduced into the command curve. To prevent this saturation problem a new approach involving the filtering of the input command is developed and tested.
16

Howell, Benjamin Paul. "An investigation of Lagrangian Riemann methods incorporating material strength." Thesis, University of Southampton, 2000. https://eprints.soton.ac.uk/47085/.

Abstract:
The application of Riemann Methods formulated in the Lagrangian reference frame to the numerical simulation of non-linear events in solid materials is investigated. Here, solids are characterised by their ability to withstand shear distortion since they possess material strength. In particular, numerical techniques are discussed for simulating the transient response of solids subjected to extreme loading. In such circumstances, the response of solids will often be highly non-linear, displaying elastic and plastic behaviour, and even moderate compressions will produce strong shock waves. This work reviews the numerical schemes or 'hydrocodes' which have been adopted in the past in order to simulate such systems, identifying the advantages and limitations of such techniques. One of the most prominent limitations of conventional Lagrangian methods is that the computational mesh or grid has fixed connectivity, i.e. mesh nodes are connected to the same nodes for all time. This has significant disadvantages since the computational mesh can easily become tangled as the simulated material distorts. The majority of conventional hydrocodes are also constructed using outdated artificial viscosity schemes which are known to diffuse shock waves and other steep features which may be present in the solution. In the work presented here, a novel two-dimensional Lagrangian solver, Vucalm-EP, has been developed which overcomes many of the limitations of conventional techniques. By employing the Free-Lagrange Method, whereby the connectivity of the computational mesh is allowed to evolve as the material distorts, problems of arbitrarily large deformation can be simulated. With the implementation of a spatially second-order accurate, finite-volume, Godunov-type solver, non-linear waves such as shocks are represented with higher resolution than previously possible with contemporary schemes. The Vucalm-EP solver simulates the transient elastic-perfectly plastic response of solids and displays increased accuracy over alternative Lagrangian techniques developed to simulate large material distortion such as Smoothed Particle Hydrodynamics (SPH). Via a variety of challenging numerical simulations the Vucalm-EP solver is compared with contemporary Euler, fixed-connectivity Lagrangian, and meshless SPH solvers. These simulations include the solution of one- and two-dimensional shock tube problems in aluminium, simulating the collapse of cylindrical shells and modelling high-velocity projectile impacts. Validation against previously published results, solutions obtained using alternative numerical techniques and analytical models illustrates the versatility and accuracy of the technique. Thus, the Vucalm-EP solver provides a numerical scheme for the Lagrangian simulation of extensive material distortion in materials with strength, which has never previously been possible with mesh-based techniques.
17

Sobester, A. "Enhancements to global design optimization techniques." Thesis, University of Southampton, 2003. https://eprints.soton.ac.uk/45904/.

Abstract:
Modern engineering design optimization relies to a large extent on computer simulations of physical phenomena. The computational cost of such high-fidelity physics-based analyses typically places a strict limit on the number of candidate designs that can be evaluated during the optimization process. The more global the scope of the search, the greater are the demands placed by this limited budget on the efficiency of the optimization algorithm. This thesis proposes a number of enhancements to two popular classes of global optimizers. First, we put forward a generic algorithm template that combines population-based stochastic global search techniques with local hillclimbers in a Lamarckian learning framework. We then test a specific implementation of this template on a simple aerodynamic design problem, where we also investigate the feasibility of using an adjoint flow-solver in this type of global optimization. In the second part of this work we look at optimizers based on low-cost global surrogate models of the objective function. We propose a heuristic that enables efficient parallelisation of such strategies (based on the expected improvement infill selection criterion). We then look at how the scope of surrogate-based optimizers can be controlled and how they can be set up for high efficiency.
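The expected improvement infill criterion mentioned above has a widely used closed form for minimisation: with surrogate mean μ(x), standard deviation σ(x) and best observed objective value f_min, EI(x) = (f_min − μ(x))Φ(z) + σ(x)φ(z), where z = (f_min − μ(x))/σ(x). The sketch below is a generic illustration of that formula, not the author's code; the candidate values are invented.

```python
# Expected improvement (EI) for minimisation; illustrative sketch.
import numpy as np
from scipy.stats import norm


def expected_improvement(mu, sigma, f_min):
    """EI at candidate points with surrogate mean `mu` and standard deviation
    `sigma`, given the best objective value `f_min` observed so far."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (f_min - mu) / sigma
        ei = (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0.0, ei, 0.0)  # no expected improvement where the model is certain


# Toy usage: pick the next design to evaluate from three candidates.
mu = np.array([1.2, 0.9, 1.0])       # surrogate predictions
sigma = np.array([0.05, 0.30, 0.0])  # surrogate uncertainty
print(expected_improvement(mu, sigma, f_min=1.0))
```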
18

Thomas, Angeli Elizabeth. "Mathematical modelling of evaporation mechanisms and instabilities in cryogenic liquids." Thesis, University of Southampton, 1999. https://eprints.soton.ac.uk/50640/.

Abstract:
In this thesis we propose a model for laminar natural convection within a mixture of two cryogenic fluids with preferential evaporation. This full model was developed after a number of smaller models of the behaviour of the surface of the fluid had been examined. Throughout we make careful comparison between our analytical and computational work and existing experimental and theoretical results. The coupled differential equations for the main model were solved using an explicit upwind scheme for the vorticity-transport, temperature and concentration equations and the multigrid method for the Poisson equation. From plots of the evolution of the system, it is found that convection becomes stronger when preferential evaporation is included. This new model demonstrates how to include preferential evaporation, and can be applied to other fluid systems.
19

Nair, Prasanth B. "Design optimization of flexible space structures for passive vibration suppression." Thesis, University of Southampton, 2000. https://eprints.soton.ac.uk/45938/.

Abstract:
This research is concerned with the development of a computational framework for the design of large flexible space structures with non-periodic geometries to achieve passive vibration suppression. The present system combines an approximation model management framework (AMMF) developed for evolutionary optimization algorithms (EAs) with reduced basis approximate dynamic reanalysis methods. Formulations based on reduced basis representations are presented for approximating the eigenvalues and eigenvectors, which are then used to compute the frequency response. The second method involves direct approximation of the frequency response via a dynamic stiffness matrix formulation. Both the reduced basis methods use the results of a single exact analysis to approximate the dynamic response. An AMMF is then developed to make use of the computationally cheap approximate analysis techniques in lieu of exact analysis to arrive at better designs on a limited computational budget. A coevolutionary genetic search strategy is developed here to ensure that design changes during the optimization iterations lead to low-rank perturbations of the structural system matrices. This ensures that the reduced basis methods developed here give good quality approximations for moderate changes in the geometrical design variables. The k-means algorithm is employed for cluster analysis of the population of designs to determine design points at which exact analysis should be carried out. The fitness of the designs in an EA generation is then approximated using reduced basis models constructed around the points where exact analysis is carried out. Results are presented for the optimal design of a two-dimensional space structure to achieve passive vibration suppression. It is shown that significant vibration isolation of the order of 50 dB over a 100 Hz bandwidth can be achieved. Further, it is demonstrated that the coevolutionary search strategy can arrive at a better design as compared to conventional approaches when a constraint is imposed on the computational budget available for optimization. Detailed computational studies are presented to gain insights into the mechanisms employed by the optimal design to achieve this performance. It is also shown that the final design is robust to parametric uncertainties.
20

Knittel, Andreas. "Micromagnetic simulations of three dimensional core-shell nanostructures." Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/333186/.

Abstract:
In the last 20 years, computer simulations, based on the micromagnetic model, have become an important tool for the characterisation of ferromagnetic structures. This work mainly uses the finite-element (FE) based micromagnetic solver Nmag to analyse the magnetic properties of ferromagnetic shell structures of different shapes and with dimensions below one micrometre. As the magnetic properties of structures in this size regime depend crucially on their shape, they have a potential towards engineering by shape manipulation. The finite-element method (FEM) discretises the micromagnetic equations on an unstructured mesh and, thus, is suited to model structures of arbitrary shape. The standard way to compute the magnetostatic potential within FE based micromagnetics is to use the hybrid finite element method / boundary element method (FEM/BEM), which, however, becomes computationally expensive for structures with a large surface. This work increases the efficiency of the hybrid FEM/BEM by using a data-sparse matrix type (hierarchical matrices) in order to extend the range of structures accessible by micromagnetic simulations. It is shown that this approximation leads only to negligible errors. The performed micromagnetic simulations include the finding of (meta-)stable micromagnetic states and the analysis of the magnetic reversal behaviour along certain spatial directions at different structure sizes and shell thicknesses. In the case of pyramidal shell structures a phase diagram is delineated which specifies the micromagnetic ground state as a function of structure size and shell thickness. An additional study demonstrates that a simple micromagnetic model can be used to qualitatively understand the magnetic reversal of a triangular platelet-shaped core-shell structure, which exhibits specific magnetic properties, as its core material becomes superconducting below a certain critical field Hcrit.
21

Dopico, Gonzalez Carolina. "Probabilistic finite element analysis of the uncemented total hip replacement." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/68694/.

Abstract:
There are many interacting factors affecting the performance of a total hip replacement (THR), such as prosthesis design and material properties, applied loads, surgical approach, femur size and quality, interface conditions etc. All these factors are subject to variation and therefore uncertainties have to be taken into account when designing and analysing the performance of these systems. To address this problem, probabilistic design methods have been developed. A computational probabilistic tool to analyse the performance of an uncemented THR has been developed. Monte Carlo Simulation (MCS) was applied to various models with increasing complexity. In the pilot models, MCS was applied to a simplified finite element model (FE) of an uncemented total hip replacement (UTHR). The implant and bone stiffness, load magnitude and geometry, and implant version angle were included as random variables and a reliable strain based performance indicator was adopted. The sensitivity results highlighted the bone stiffness, implant version and load magnitude as the most sensitive parameters. The FE model was developed further to include the main muscle forces, and to consider fully bonded and frictional interface conditions. Three proximal femurs and two implants (one with a short and another with a long stem) were analysed. Different boundary conditions were compared, and convergence was improved when the distal portion of the implant was constrained and a frictional interface was employed. This was particularly true when looking at the maximum nodal micromotion. The micromotion results compared well with previous studies, confirming the reliability and accuracy of the probabilistic finite element model (PFEM). Results were often influenced by the bone, suggesting that variability in bone features should be included in any probabilistic analysis of the implanted construct. This study achieved the aim of developing a probabilistic finite element tool for the analysis of finite element models of uncemented hip replacements and forms a good basis for probabilistic models of constructs subject to implant position related variability.
22

Chippendale, Richard. "Modelling of the thermal chemical damage caused to carbon fibre composites." Thesis, University of Southampton, 2013. https://eprints.soton.ac.uk/361708/.

Abstract:
Previous investigations relating to lightning strike damage of Carbon Fibre Composites (CFC) have assumed that the energy input from a lightning strike is caused by the resistive (Joule) heating due to the current injection and the thermal heat flux from the plasma channel. Inherent within this statement is the assumption that CFCs can be regarded as a perfect resistor. The validity of such an assumption has been experimentally investigated within this thesis. This experimental study has concluded that a typical quasi-isotropic CFC panel can be treated as a perfect resistor up to a frequency of at least 10kHz. By considering the frequency components within a lightning strike current impulse, it is evident that the current impulse leads predominantly to Joule heating. This thesis has experimentally investigated the damage caused to samples of CFC due to the different current impulse components which make up a lightning strike. The results from this experiment have shown that the observed damage on the surface is different for each of the different types of current impulse. Furthermore, the damage caused to each sample indicates that, despite masking only the area of interest, the wandering arc on the surface still plays an important role in distributing the energy input into the CFC and hence the observed damage. Regardless of the different surface damage caused by the different current impulses, the resultant damage from each component current impulse shows polymer degradation with fracturing and lifting up of the carbon fibres. This thesis has then attempted to numerically investigate the physical processes which lead to this lightning strike damage. Within the current state of the art knowledge there is no proposed method to numerically represent the lightning strike arc attachment and the subsequent arc wandering. Therefore, as arc wandering plays an important role in causing the observed damage, it is not possible to numerically model the lightning strike damage. An analogous damage mechanism is therefore needed so the lightning strike damage processes can be numerically investigated. This thesis has demonstrated that damage caused by laser ablation represents a similar set of physical processes to those which cause the lightning strike current impulse damage, albeit without any additional electrical processes. Within the numerical model, the CFC is numerically represented through a homogenisation approach and so the relevance and accuracy of a series of analytical methods for predicting the bulk thermal and electrical conductivity for use with CFCs have been investigated. This study has shown that the electrical conductivity is dominated by the percolation effects due to the fibre to fibre contacts. Due to the more comparable thermal conductivity between the polymer and the fibres, the bulk thermal conductivity is accurately predicted by an extension of the Eshelby Method. This extension allows the bulk conductivity of a composite system with more than two composite components to be calculated. Having developed a bespoke thermo-chemical degradation model, a series of validation studies have been conducted. First, the homogenisation approach is validated by numerically investigating the electrical conduction through a two-layer panel of CFC. These numerical predictions showed initially unexpected current flow patterns. These predictions have been validated through an experimental study, which in turn validates the application of the homogenisation approach. The novelty within the proposed model is the inclusion of the transport of produced gases through the decomposing material. The thermo-chemical degradation model predicts that the internal gas pressure inside the decomposing material can reach 3 orders of magnitude greater than that of atmospheric pressure. This explains the de-laminations and fibre cracking observed within the laser ablated damage samples. The numerical predictions show that the inclusion of thermal gas transport has minimal impact on the predicted thermal chemical damage. The numerical predictions have further been validated against the previously obtained laser ablation results. The predicted polymer degradation shows reasonable agreement with the experimentally observed ablation damage. This, along with the previous discussions, has validated the physical processes implemented within the thermo-chemical degradation model to investigate the thermal chemical lightning strike damage.
23

Reichert, Thomas. "Development of 3D lattice models for predicting nonlinear timber joint behaviour." Thesis, Edinburgh Napier University, 2009. http://researchrepository.napier.ac.uk/Output/2827.

Abstract:
This work presents the development of a three-dimensional lattice material model for wood and its application to timber joints including the potential strengthening benefit of second order effects. A lattice of discrete elements was used to capture the heterogeneity and fracture behaviour and the model results compared to tested Sitka spruce (Picea sitchensis) specimens. Despite the general applicability of lattice models to timber, they are computationally demanding, due to the nonlinear solution and large number of degrees of freedom required. Ways to reduce the computational costs are investigated. Timber joints fail due to plastic deformation of the steel fastener(s), embedment, or brittle fracture of the timber. Lattice models, contrary to other modelling approaches such as continuum finite elements, have the advantage to take into account brittle fracture, crack development and material heterogeneity by assigning certain strength and stiffness properties to individual elements. Furthermore, plastic hardening is considered to simulate timber embedment. The lattice is an arrangement of longitudinal, lateral and diagonal link elements with a tri-linear load-displacement relation. The lattice is used in areas with high stress gradients and normal continuum elements are used elsewhere. Heterogeneity was accounted for by creating an artificial growth ring structure and density profile upon which the mean strength and stiffness properties were adjusted. Solution algorithms, such as Newton-Raphson, encounter problems with discrete elements for which 'snap-back' in the global load-displacement curves would occur. Thus, a specialised solution algorithm, developed by Jirasek and Bazant, was adopted to create a bespoke FE code in MATLAB that can handle the jagged behaviour of the load displacement response, and extended to account for plastic deformation. The model's input parameters were calibrated by determining the elastic stiffness from literature values and adjusting the strength, post-yield and heterogeneity parameters of lattice elements to match the load-displacement from laboratory tests under various loading conditions. Although problems with the modified solution algorithm were encountered, results of the model show the potential of lattice models to be used as a tool to predict load-displacement curves and fracture patterns of timber specimens.
24

Karlsson, Therese Westerlund. "Applying Interaction Design in a Software Development Project : Working out the general user for Messaging Systems." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1718.

Abstract:
It is a challenge to work in a software development project. People with different backgrounds are working together towards the goal of delivering a runnable piece of software. The influences on the design are many and all of them will affect how the program will be designed. During this spring we have been involved in a large software engineering project. Our part of the project has been focused on interface design and on using the design method persona. In this bachelor thesis we describe our experiences of participating in a software development project. We explain how our design work was affected by the organisation of the project and how we have worked with adjusting the persona method to the conditions given in the project. We also describe the importance of communicating the design within the project. The main purpose of the report is to show how, during the project, we became aware of the importance of tracing design decisions back to their origin. Many attributes have come to inform our design, and this has made us aware of the importance of having traceability in our work.
25

Wu, Chia-Chin. "Static and dynamic analyses of mountain bikes and their riders." Thesis, University of Glasgow, 2013. http://theses.gla.ac.uk/4159/.

Abstract:
Mountain biking is a globally popular sport, in which the rider uses a mountain bike to ride on off-road terrain. A mountain bike has either a front suspension system only or a full-suspension system to decrease the external vibration resulting from the terrain irregularities and to increase riding comfort. Despite the added comfort of full-suspension of mountain bikes, there are some disadvantages because the chain-suspension interaction and bobbing effect absorb some of the rider's pedalling power and lead to the reduction of pedalling efficiency. In this study, a technique for evaluating the pedalling efficiency of a bike rider in seated cycling by using engineering mechanics is developed. This method is also found to be useful for determining the correct crank angle for the beginning of the downstroke and that of the upstroke during each pedalling cycle. Next, five mathematical models of rider-bike systems are developed in Simulink and SimMechanics, including one hard-tail (HT) bike, and four full-suspension (FS) bikes [single pivot, four-bar-linkage horst link, four-bar-linkage faux bar, and virtual pivot point (VPP)]. In each of the five rider-bike systems, a PID controller is applied on the rider's elbow to prevent his upper body from falling down due to gravity. A pedalling controller is also developed in Simulink, which is based on the previous theory for evaluating the rider's pedalling efficiency written in Matlab. Another PID controller is used for the pedalling control by sensing the real-time moving speed and applying a suitable pedalling force to achieve a desired speed. The dynamic responses for each of the five rider-bike systems moving on a flat road surface (without bumps) and rough terrain (with bumps) are investigated. The values determined include the pedalling force, pedalling torque and power, forward velocity, contact forces of front and rear wheels, compressions of front suspension (front fork) and rear suspension (rear shock absorber), sprocket distance, chain tension force, and vertical accelerations of handlebar and seats. The numerical results reveal that, while moving on flat road surface, the pedalling efficiency of hard-tail bike is highest, and the bobbing effect of the VPP bike is most serious. However, while moving on rough terrain, the riding conditions for each of the four full-suspension bikes are more stable than the hard-tail bike.
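For readers unfamiliar with the PID controllers mentioned above, the sketch below shows a minimal discrete-time PID loop regulating forward speed by adjusting a pedalling force on a crude point-mass "bike". The gains, mass and drag values are invented for illustration and are unrelated to the Simulink/SimMechanics models used in the thesis.

```python
# Minimal discrete-time PID speed controller; illustrative values only.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


dt, mass, drag = 0.01, 90.0, 5.0  # s, kg, N·s/m (rider + bike as a point mass)
controller = PID(kp=80.0, ki=20.0, kd=2.0, dt=dt)
speed = 0.0  # m/s
for _ in range(int(10.0 / dt)):  # simulate 10 seconds
    force = controller.update(setpoint=6.0, measurement=speed)  # pedalling force, N
    speed += (force - drag * speed) / mass * dt                 # point-mass dynamics
print(f"speed after 10 s: {speed:.2f} m/s (target 6.0 m/s)")
```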
26

Bryan, Rebecca. "Large scale, multi femur computational stress analysis using a statistical shape and intensity model." Thesis, University of Southampton, 2010. https://eprints.soton.ac.uk/185087/.

27

Oskarsson, Andreas. "Efficient transformation from general flow into a specific test case in an automated testing environment." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3718.

Abstract:
SIMON is an automated testing application developed by WM-Data Consulting in Växjö, Sweden. Previously, the test cases, called BIFs, that SIMON runs to test the applications under test have been written manually in a very time-consuming manner, offering no protection against errors in the structure or misspellings. This thesis investigates a replacement for the manual method of creating the BIFs: my own application, called the BIF-Editor. The usage of the BIF-Editor guaranteed correct syntax and structure and made the creation of the BIFs faster, but did it increase the quality of the BIFs? To evaluate the BIF-Editor, the quality, in terms of path coverage, of BIFs created manually was compared with that of BIFs created during the same elapsed time using the BIF-Editor. This evaluation showed that the usage of the BIF-Editor increased the quality of the BIFs by making the creation safer, but primarily faster, which enabled the user to produce more BIFs than previously possible, resulting in increased path coverage.
28

Heyder, Jakob. "Hierarchical Temporal Memory Software Agent : In the light of general artificial intelligence criteria." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-75868.

Abstract:
Artificial general intelligence is not well defined, but attempts such as the recent list of "Ingredients for building machines that think and learn like humans" are a starting point for building a system considered as such [1]. Numenta is attempting to lead the new era of machine intelligence with their research to re-engineer principles of the neocortex. It is to be explored how the ingredients are in line with the design principles of their algorithms. Inspired by DeepMind's commentary about an autonomy ingredient, this project created a combination of Numenta's Hierarchical Temporal Memory theory and Temporal Difference learning to solve simple tasks defined in a browser environment. An open source software, based on Numenta's intelligent computing platform NUPIC and OpenAI's framework Universe, was developed to allow further research of HTM-based agents on customized browser tasks. The analysis and evaluation of the results show that the agent is capable of learning simple tasks and that there is potential for generalization inherent to sparse representations. However, they also reveal the infancy of the algorithms, not capable of learning dynamic complex problems, and that much future research is needed to explore if they can create scalable solutions towards a more general intelligent system.
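The temporal-difference half of the HTM + TD combination can be illustrated with the textbook tabular TD(0) update, V(s) ← V(s) + α[r + γV(s′) − V(s)]. The sketch below applies it to the standard five-state random-walk example; it is a generic illustration, not the thesis's agent or its browser tasks.

```python
# Tabular TD(0) value learning on the classic 5-state random walk.
import random

n_states = 5                 # states 0..4; episodes start in the middle
values = [0.0] * n_states
alpha, gamma = 0.1, 1.0      # learning rate and discount factor

for _ in range(5000):
    state = n_states // 2
    while True:
        next_state = state + random.choice([-1, 1])
        if next_state < 0:              # left terminal, reward 0
            reward, done, v_next = 0.0, True, 0.0
        elif next_state >= n_states:    # right terminal, reward 1
            reward, done, v_next = 1.0, True, 0.0
        else:
            reward, done, v_next = 0.0, False, values[next_state]
        # TD(0) update: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))
        values[state] += alpha * (reward + gamma * v_next - values[state])
        if done:
            break
        state = next_state

print([round(v, 2) for v in values])  # approaches [1/6, 2/6, 3/6, 4/6, 5/6]
```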
29

Barreiro, Lima J. "Methodology for demand-supply selection of commercial off-the-shelf software-based systems : contextual approach of leading contractors in Portugal." Thesis, University of Salford, 2008. http://usir.salford.ac.uk/2098/.

Abstract:
This research study aims to contribute to the discussion around Information Systems, more precisely on Commercial Off-The-Shelf (COTS) software Based Systems, as a Business Process and as a Supply-Chain enabler tool, within the construction industry in Portugal. Concerned with efficiency improvement of management theory in the construction arena, central discussions of this study are about developing a methodology for demand-supply selection of COTS-Based Systems for leading contractors in Portugal and analysing market offerings of COTS-Based Systems covering Operations Management functional needs of leading contractors in Portugal. Demand-side and supply-side research is undertaken, eliciting contractors’ needs and analysing market offerings of COTS-Based Systems at Operations Management-level from a functional perspective. A multicompany informing case study approach contributes to a better understanding of COTS-Based Systems selection practice, within the demand-supply context of leading contractors in Portugal. Several other appropriate research methods and techniques are employed to collect rich field data. A specific Demand-Supply Selection (DSS) Methodology of COTS-Based Systems is developed, based on methods and techniques in use by practitioners and on review of literature, considering context/stakeholders interaction of leading contractors in Portugal. A systematised list of functional high-level requirements for COTS-Based Systems evaluation is obtained to produce a comprehensive Requirements Reference (RR) Model, so that the effort to elicit the (functional) needs of leading contractors’ Operations Management through further development of requirements models from scratch is reduced. A benchmarking report of leading contractors systems in use and a demand-supply cross analysis delivered on a COTS-Based Systems Market Offerings (MO) Report, answers the question about 'which are the actual market offerings of COTS-Based Systems supporting Operations Management functional needs of leading contractors in Portugal'. Based on these three perspectives, the research study aims to provide an original contribution to knowledge, developing a comprehensive study covering supply and demand viewpoints. In this respect, the direct and practical purpose of this study is to facilitate the planning phase at the beginning of a project and offer comprehensive information to construction industry players (e.g. consultants, COTS-Based Systems providers) that could lead to better products (e.g. changes in functions of COTS-Based Systems) and services (e.g. Requirements Engineering services, COTS-Based Systems selection services) for leading contractors in Portugal.
30

Strickland, Anthony Michael. "Enhanced pre-clinical assessment of total knee replacement using computational modelling with experimental corroboration & probabilistic applications." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/68695/.

Abstract:
Demand for Total Knee Replacement (TKR) surgery is high and rising; not just in numbers of procedures, but in the diversity of patient demographics and increase of expectations. Accordingly, greater efforts are being invested into the pre-clinical analysis of TKR designs, to improve their performance in-vivo. A wide range of experimental and computational methods are used to analyse TKR performance pre-clinically. However, direct validation of these methods and models is invariably limited by the restrictions and challenges of clinical assessment, and confounded by the high variability of results seen in-vivo. Consequently, the need exists to achieve greater synergy between different pre-clinical analysis methods. By demonstrating robust corroboration between in-silico and in-vitro testing, and both identifying & quantifying the key sources of uncertainty, greater confidence can be placed in these assessment tools. This thesis charts the development of a new generation of fast computational models for TKR test platforms, with closer collaboration with in-vitro test experts (and consequently more rigorous corroboration with experimental methods) than previously. Beginning with basic tibiofemoral simulations, the complexity of the models was progressively increased, to include in-silico wear prediction, patellofemoral & full lower limb models, rig controller-emulation, and accurate system dynamics. At each stage, the models were compared extensively with data from the literature and experimental tests results generated specifically for corroboration purposes. It is demonstrated that when used in conjunction with, and complementary to, the corresponding experimental work, these higher-integrity in-silico platforms can greatly enrich the range and quality of pre-clinical data available for decision-making in the design process, as well as understanding of the experimental platform dynamics. Further, these models are employed within a probabilistic framework to provide a statistically-quantified assessment of the input factors most influential to variability in the mechanical outcomes of TKR testing. This gives designers a much richer holistic visibility of the true system behaviour than extant 'deterministic' simulation approaches (both computational and experimental). By demonstrating the value of better corroboration and the benefit of stochastic approaches, the methods used here lay the groundwork for future advances in pre-clinical assessment of TKR. These fast, inexpensive models can complement existing approaches, and augment the information available for making better design decisions prior to clinical trials, accelerating the design process, and ultimately leading to improved TKR delivery in-vivo to meet future demands.
APA, Harvard, Vancouver, ISO, and other styles
31

Chen, Li Choo. "Optimisation of rule-based testing for scheduling within a distributed computing environment." Thesis, University of Greenwich, 2007. http://gala.gre.ac.uk/8527/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Chandra, Arjun. "A methodical framework for engineering co-evolution for simulating socio-economic game playing agents." Thesis, University of Birmingham, 2011. http://etheses.bham.ac.uk//id/eprint/2867/.

Full text
Abstract:
Agent based computational economics (ACE), as a research field, has been using co-evolutionary algorithms for modelling the socio-economic learning and adaptation process of players within games that model socio-economic interactions. In addition, it has also been using these algorithms for optimising towards the game equilibria via socio-economic learning. However, the field has been diverging from evolutionary computation, specifically co-evolutionary algorithm design research. It is common practice in ACE to explain the process and outcomes of such co-evolutionary simulations in socio-economic terms. However, co-evolutionary algorithms are known to have unexpected dynamics that lead to unexpected outcomes. This has often led to mis-interpretations of the process and outcomes in socio-economic terms, a case in point being the lack of a methodical use of the term bounded rationality. This mis-interpretation can be attributed to the lack of a proper consideration of the solution concept being implemented by the co-evolutionary algorithm used for the simulation. We propose a holistic methodical framework for analysing and designing co-evolutionary simulations, such that mis-interpretations of socio-economic phenomena are methodically avoided, preventing the algorithm from being mis-interpreted in socio-economic terms, with the aim of benefiting ACE as a research field. More specifically, we consider the methical treatment of co-evolutionary algorithms, as enabled by the framework, such that mis-interpretations of bounded rationality are avoided when these algorithms are used to optimise towards equilibrium solutions in bargaining games. The framework can be broken down into two parts: • Analysing and refining co-evolution for ACE, using the notion behind co-evolutionary solution concepts from co-evolutionary algorithm design research: Challenging the value of the implicit assumption of bounded rationality within co-evolutionary simulations, which leads to it being mis-interpreted, we show that convergence to the equilibrium solutions can be achieved with boundedly rational agents by working on the elements of the implemented co-evolutionary solution concept, as opposed to previous studies where bounded rationality was seen as the cause for deviations from equilibrium. Analysis and refinements guided by the presence of top-down equilibrium solutions allow for a top-down avoidance of mis-interpretations of bounded rationality within simulations. • Analysing and refining co-evolution for ACE, using the notion behind reconciliation variables proposed in the thesis: Reasonably associating mis-interpreted socio-economic phenomena of interest with the elements of the implemented co-evolutionary solution concept, parametrising and quantifying the elements, we obtain our reconciliation variables. Systematically analysing the simulation for its relationship with the reconciliation variables, or for its closeness to desired behaviour, using this parametrisation is the suggested idea. Bounded rationality is taken as a reconciliation variable, reasonably associated with agent strategies, parametrised and quantified, and analysis of simulations with respect to this variable carried out. Analysis and refinements based on such an explicit expression of bounded rationality, as opposed to the erstwhile implicit assumption, allow for a bottom-up avoidance of mis-interpretations of bounded rationality within simulations.
We thus remove the causes that lead to bounded rationality being mis-interpreted altogether using this framework. We see this framework as one next step in ACE socio-economic learning simulation research, which must not be overlooked.
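For readers unfamiliar with the kind of simulation this framework analyses, the following minimal Python sketch shows a generic two-population co-evolutionary loop for an ultimatum-style bargaining game: proposer strategies and responder thresholds evolve against each other. The payoff, selection and mutation rules here are illustrative assumptions for exposition only, not the algorithm or solution concept developed in the thesis.

    # A minimal, generic sketch of a two-population co-evolutionary simulation of an
    # ultimatum-style bargaining game. Selection and mutation are illustrative choices,
    # not the thesis's algorithm; offers and thresholds typically drift towards the
    # game-theoretic equilibrium under these simple rules.
    import random

    POP, GENS, PIE = 30, 200, 1.0

    proposers  = [random.uniform(0, PIE) for _ in range(POP)]   # offered share of the pie
    responders = [random.uniform(0, PIE) for _ in range(POP)]   # acceptance threshold

    def evolve(pop, fitness):
        # keep the fitter half, refill with mutated copies (truncation selection)
        ranked = [x for _, x in sorted(zip(fitness, pop), reverse=True)]
        half = ranked[:POP // 2]
        return half + [min(PIE, max(0.0, x + random.gauss(0, 0.05))) for x in half]

    for g in range(GENS):
        fp = [0.0] * POP
        fr = [0.0] * POP
        for i, offer in enumerate(proposers):
            for j, thresh in enumerate(responders):
                if offer >= thresh:           # deal accepted: split the pie
                    fp[i] += PIE - offer
                    fr[j] += offer
        proposers  = evolve(proposers, fp)
        responders = evolve(responders, fr)

    print("mean offer:", sum(proposers) / POP, " mean threshold:", sum(responders) / POP)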
APA, Harvard, Vancouver, ISO, and other styles
33

Pardoe, Andrew Charles. "Neural network image reconstruction for nondestructive testing." Thesis, University of Warwick, 1996. http://wrap.warwick.ac.uk/44616/.

Full text
Abstract:
Conventional image reconstruction of advanced composite materials using ultrasound tomography is computationally expensive, slow and unreliable. A neural network system is proposed which would permit the inspection of large composite structures, increasingly important for the aerospace industry. It uses a tomographic arrangement, whereby a number of ultrasonic transducers are positioned along the edges of a square, referred to as the sensor array. Two configurations of the sensor array are utilized. The first contains 16 transducers, 4 of which act as receivers of ultrasound, and the second contains 40 transducers, 8 of which act as receivers. The sensor array has required the development of instrumentation to generate and receive ultrasonic signals, multiplex the transmitting transducers and to store the numerous waveforms generated for each tomographic scan. The first implementation of the instrumentation required manual operation; however, to increase the amount of data available, the second implementation was automated.
APA, Harvard, Vancouver, ISO, and other styles
34

Ganaba, Taher H. "Nonlinear finite element analysis of plates and slabs." Thesis, University of Warwick, 1985. http://wrap.warwick.ac.uk/34590/.

Full text
Abstract:
The behaviour of steel plates and reinforced concrete slabs which undergo large deflections has been investigated using the finite element method. Geometric and material nonlinearities are both considered in the study. Two computer programs have been developed for the analysis of plates and slabs. The first program is for the elastic stability of plates. The elastic buckling loads obtained for plates with and without openings and under different edge loading conditions have been compared with the analytical and numerical results obtained by other investigators using different techniques of analysis. Good correlation between the results obtained and those given by others has been achieved. Improvements in the accuracy of the results and the efficiency of the analysis for plates with openings have been achieved. The second program is for the full range analysis of steel plates and reinforced concrete slabs up to collapse. The analysis can trace the load-deflection response up to collapse including snap-through behaviours. The program allows for the yielding of steel and the cracking and crushing of concrete. The modified Newton-Raphson method, with load control and displacement control, is used to trace the structural response up to collapse. The line search technique has been included to improve the rate of convergence in the analysis of reinforced concrete slabs. The program has been tested against experimental and numerical results obtained by other investigators and has been shown to give good agreement. The accuracy of a number of integration rules usually adopted in nonlinear finite element analyses to evaluate the stress resultants from the stress distribution throughout concrete sections has been investigated. A new integration rule has been proposed for the integration of stress distributions through cracked concrete sections or cracked and crushed concrete sections.
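To illustrate the solution strategy named in this abstract, the following minimal Python sketch shows a modified Newton-Raphson iteration (tangent formed once per load step) with a basic backtracking line search, applied over increasing load increments. The residual function is a simple algebraic stand-in for a finite element out-of-balance force vector; it and the tolerances are illustrative assumptions, not the thesis's code.

    # Minimal sketch of a modified Newton-Raphson step with a backtracking line search,
    # traced over load increments (load control). The residual below is a hypothetical
    # stand-in for an FE internal-minus-external force vector.
    import numpy as np

    def residual(u, load_factor):
        return np.array([u[0] + 0.1 * u[0] ** 3 - load_factor,
                         2.0 * u[1] + 0.05 * u[1] ** 3 - 0.5 * load_factor])

    def modified_newton(load_factor, u0, tol=1e-10, max_iter=50):
        u = u0.copy()
        # "Modified" Newton: the tangent is formed once and reused for the whole load step
        eps = 1e-6
        K = np.column_stack([(residual(u + eps * e, load_factor) - residual(u, load_factor)) / eps
                             for e in np.eye(len(u))])
        for _ in range(max_iter):
            r = residual(u, load_factor)
            if np.linalg.norm(r) < tol:
                break
            du = np.linalg.solve(K, -r)
            # Line search: shrink the step until the residual norm decreases
            s = 1.0
            while np.linalg.norm(residual(u + s * du, load_factor)) > np.linalg.norm(r) and s > 1e-4:
                s *= 0.5
            u = u + s * du
        return u

    u = np.zeros(2)
    for lam in np.linspace(0.2, 2.0, 10):
        u = modified_newton(lam, u)
        print(f"load factor {lam:4.2f} -> displacements {u}")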
APA, Harvard, Vancouver, ISO, and other styles
35

Harvey, Carlo. "Modality based perception for selective rendering." Thesis, University of Warwick, 2011. http://wrap.warwick.ac.uk/51765/.

Full text
Abstract:
A major challenge in generating high-fidelity virtual environments for use in Virtual Reality (VR) is to be able to provide interactive rates of realism. The high-fidelity simulation of light and sound wave propagation is still unachievable in real-time. Physically accurate simulation is very computationally demanding. Only recently has visual perception been used in high-fidelity rendering to improve performance by a series of novel exploitations: rendering parts of the scene that are not currently being attended to by the viewer at a much lower quality, without the difference being perceived. This thesis investigates the effect that spatialised directional sounds, both discrete and converged, and smells have on the visual attention of the user towards rendered scene images. These perceptual artefacts are utilised in selective rendering pipelines via the use of multi-modal maps. This work verifies the worth of investigating subliminal saccade shifts (fast movements of the eyes) from directional audio impulses via a pilot study in which participants free-viewing a scene were eye-tracked with and without an audio impulse, and with and without a congruency for that impulse. This experiment showed that even without an acoustic identifier in the scene, directional sound provides an impulse to guide subliminal saccade shifts. A novel technique for generating interactive discrete acoustic samples from arbitrary geometry is also presented. This work is extrapolated by investigating whether temporal auditory sound wave saliencies can be used as a feature vector in the image rendering process. The method works by producing image maps of the sound wave flux and attenuating this map via these auditory saliency feature vectors. Whilst selectively rendering, the method encodes spatial auditory distracters into the standard visual saliency map. Furthermore, this work investigates the effect various smells have on the visual attention of a user when free-viewing a set of images whilst being eye-tracked. This thesis explores these saccade shifts to a congruent smell object. By analysing the gaze points, the time spent attending a particular area of a scene is considered. The work presents a technique derived from measured data to modulate traditional saliency maps of image features to account for the observed results for smell congruences and shows that smell provides an impulse on visual attention. Finally, the observed data is used in applying modulated image saliency maps to address the additional effects cross-modal stimuli have on human perception when applied to a selective renderer. These multi-modal maps, derived from measured data for smells and from sound spatialisation techniques, attempt to exploit the extra stimuli presented in multi-modal VR environments and help to re-quantify the saliency map to account for observed cross-modal perceptual features of the human visual system. The multi-modal maps are tested through rigorous psychophysical experiments to examine their applicability to selective rendering algorithms, with a series of fixed-cost rendering functions, and are found to perform better than image saliency maps that are naively applied to multi-modal virtual environments.
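The following minimal Python sketch illustrates the general idea of a multi-modal map: a visual saliency map modulated by maps derived from cross-modal cues (a spatialised sound source and a smell-congruent object) before driving a fixed-budget selective renderer. The Gaussian "splat" model and the weights are illustrative assumptions, not the measured data or the exact modulation used in the thesis.

    # Minimal sketch of combining a visual saliency map with cross-modal cue maps into a
    # single multi-modal importance map for a selective renderer. Splat shapes and
    # weights are illustrative assumptions, not values from the thesis.
    import numpy as np

    H, W = 90, 160

    def gaussian_splat(cy, cx, sigma):
        y, x = np.mgrid[0:H, 0:W]
        return np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))

    visual_saliency = np.random.rand(H, W)          # stand-in for an image saliency map
    audio_map = gaussian_splat(30, 120, 15.0)       # region around a directional sound source
    smell_map = gaussian_splat(60, 40, 20.0)        # region around a smell-congruent object

    # Weighted modulation: cross-modal cues boost importance where they coincide with the image
    w_a, w_s = 0.6, 0.4
    multi_modal = visual_saliency * (1.0 + w_a * audio_map + w_s * smell_map)
    multi_modal /= multi_modal.max()

    # A fixed-cost selective renderer could then spend more samples on the top-ranked cells
    budget_mask = multi_modal > np.quantile(multi_modal, 0.8)
    print("pixels selected for high-quality rendering:", int(budget_mask.sum()))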
APA, Harvard, Vancouver, ISO, and other styles
36

Lee, Poh Khoon Ernie. "A quest for a better simulation-based knowledge elicitation tool." Thesis, University of Warwick, 2007. http://wrap.warwick.ac.uk/48626/.

Full text
Abstract:
Knowledge elicitation is a well-known bottleneck in the development of Knowledge-Based Systems (KBS). This is mainly due to the tacit property of knowledge, which renders it unfriendly for explication and, therefore, analysis. Previous research shows that Visual Interactive Simulation (VIS) can be used to elicit episodic knowledge in the form of example cases of decisions from the decision makers for machine learning purposes, with a view to building a KBS subsequently. Notwithstanding, there are still issues that need to be explored; these include how to make better use of existing commercial off-the-shelf VIS packages in order to improve the effectiveness and efficiency of the knowledge elicitation process. Based in a Ford Motor Company (Ford) engine assembly plant in Dagenham (East London), an experiment was planned and performed to investigate the effects of using various VIS models with different levels of visual fidelity and settings on the elicitation process. The empirical work that was carried out can be grouped broadly into eight activities, which began with gaining an understanding of the case study. This was followed by four concurrent activities of designing the experiment, adapting a current VIS model provided by Ford to support a gaming mode and then assessing it, and devising the measures for evaluating the elicitation process. Following these, eight Ford personnel, who are proficient decision makers in the simulated operations system, were organised to play with the game models in 48 knowledge elicitation sessions over 19 weeks. In so doing, example cases were collected during the personnel's interactions with the game models. Lastly, the example cases were processed and analysed, and the findings were discussed. Ultimately, it seems that the decisions elicited through a 2-Dimensional (2D) VIS model are probably more realistic than those elicited through other equivalent models with a higher level of visual fidelity. Moreover, the former also emerges as a more efficient knowledge elicitation tool. In addition, it appears that the decisions elicited through a VIS model that is adjusted to simulate more uncommon and extreme scenes are made for a wider range of situations. Consequently, it can be concluded that using a 2D VIS model that has been adjusted to simulate more uncommon and extreme situations is the optimal VIS-based means for eliciting episodic knowledge.
APA, Harvard, Vancouver, ISO, and other styles
37

Ahmad, Ali. "Towards a knowledge-based discrete simulation modelling environment using Prolog." Thesis, University of Warwick, 1989. http://wrap.warwick.ac.uk/106507/.

Full text
Abstract:
The initial chapters of this thesis cover a survey of literature relating to problem solving, discrete simulation, knowledge-based systems and logic programming. The main emphasis in these chapters is on a review of the state of the art in the use of Artificial Intelligence methods in Operational Research in general and Discrete Simulation in particular. One of the fundamental problems in discrete simulation is to mimic the operation of a system as a part of problem solving relating to the system. A number of methods of simulated behaviour generation exist which dictate the form in which a simulation model must be expressed. This thesis explores the possibility of employing the logic programming paradigm for this purpose as it has been claimed to offer a number of advantages over the procedural programming paradigm. As a result, a prototype simulation engine has been implemented using Prolog which can generate simulated behaviour from an articulation of the model using a three-phase or process 'world view' (or a sensible mixture of these). The simulation engine approach can offer the advantage of building simulation models incrementally. A new paradigm for computer software systems in the form of Knowledge-Based Systems has emerged from the research in the area of Artificial Intelligence. Use of this paradigm has been explored in the area of simulation model building. A feasible method of knowledge-based simulation model generation has been proposed and, using this method, a prototype knowledge-based simulation modelling environment has been implemented using Prolog. The knowledge-based system paradigm has been seen to offer a number of advantages which include the possibility of representing both the application domain knowledge and the simulation methodology knowledge which can assist in the model definition as well as in the generation of executable code. These, in turn, may offer a greater amount of computer assistance in developing simulation models than would be possible otherwise. The research aim is to make advances towards the goal of 'intelligent' simulation modelling environments. It consolidates the knowledge related to simulated behaviour generation methods using symbolic representation for the system state while permitting the use of alternate (and mixed) 'world views' for the model articulation. It further demonstrates that use of the knowledge-based systems paradigm for implementing a discrete simulation modelling environment is feasible and advantageous.
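As background to the three-phase 'world view' mentioned in this abstract, the following minimal sketch shows the classic three-phase (A/B/C) simulation executive: advance the clock (A), execute the bound events due at that time (B), then attempt the conditional activities (C). It is written in Python purely for brevity (the thesis's engine is in Prolog), and the single-server queue modelled here is an illustrative example, not the thesis's model.

    # Minimal three-phase (A/B/C) discrete simulation executive, illustrated on a
    # single-server queue. Python is used for brevity; the thesis's engine is in Prolog.
    import heapq, random, itertools

    clock, busy, queue, b_events = 0.0, False, 0, []   # b_events: (time, seq, bound_activity)
    counter = itertools.count()

    def schedule(delay, activity):
        heapq.heappush(b_events, (clock + delay, next(counter), activity))

    def end_of_service():          # B (bound) activity: occurs unconditionally at its time
        global busy
        busy = False

    def arrival():                 # B activity: customer joins queue, next arrival scheduled
        global queue
        queue += 1
        schedule(random.expovariate(1.0), arrival)

    def try_start_service():       # C (conditional) activity: runs whenever its conditions hold
        global busy, queue
        if not busy and queue > 0:
            busy, queue = True, queue - 1
            schedule(random.expovariate(1.2), end_of_service)

    schedule(0.0, arrival)
    while clock < 100.0 and b_events:
        clock = b_events[0][0]                              # A phase: advance the clock
        while b_events and b_events[0][0] == clock:         # B phase: execute due bound events
            heapq.heappop(b_events)[2]()
        try_start_service()                                 # C phase: attempt conditional activities
    print(f"simulation ended at t={clock:.2f}, queue length {queue}")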
APA, Harvard, Vancouver, ISO, and other styles
38

Staunton, Richard C. "Visual inspection : image sampling, algorithms and architectures." Thesis, University of Warwick, 1991. http://wrap.warwick.ac.uk/108898/.

Full text
Abstract:
The thesis concerns the hexagonal sampling of images, the processing of industrially derived images, and the design of a novel processor element that can be assembled into pipelines to effect fast, economic and reliable processing. A hexagonally sampled two-dimensional image can require 13.4% fewer sampling points than a square sampled equivalent. The grid symmetry results in simpler processing operators that compute more efficiently than square grid operators. Computation savings approaching 44% are demonstrated. New hexagonal operators are reported, including a Gaussian smoothing filter, a binary thinner, and an edge detector with comparable accuracy to that of the Sobel detector. The design of hexagonal arrays of sensors is considered. Operators requiring small local areas of support are shown to be sufficient for processing controlled lighting and industrial images. Case studies show that small features in hexagonally processed images maintain their shape better, and that processes can tolerate a lower signal-to-noise ratio, than for equivalent square-processed images. The modelling of small defects in surfaces has been studied in depth. The flexible programmable processor element can perform the low level local operators required for industrial image processing on both square and hexagonal grids. The element has been specified and simulated by a high level computer program. A fast communication channel allows for dynamic reprogramming by a control computer, and the video rate element can be assembled into various pipeline architectures that may eventually be adaptively controlled.
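To make the idea of a hexagonal local operator concrete, the following minimal Python sketch applies a 7-point mean filter over a hexagonally sampled image stored as a 2-D array with "odd-r" offset rows. The storage convention and the simple mean filter are illustrative assumptions, not the thesis's operators or hardware; the point is that all six hexagonal neighbours are equidistant, which is what gives hexagonal operators their symmetry.

    # Minimal 7-point mean filter on a hexagonally sampled image stored with odd-r
    # offset rows (an assumed storage convention, not the thesis's implementation).
    import numpy as np

    # Neighbour offsets (dr, dc) for even and odd rows under the odd-r convention
    EVEN = [(0, -1), (0, 1), (-1, -1), (-1, 0), (1, -1), (1, 0)]
    ODD  = [(0, -1), (0, 1), (-1, 0), (-1, 1), (1, 0), (1, 1)]

    def hex_mean_filter(img):
        out = img.astype(float).copy()
        rows, cols = img.shape
        for r in range(rows):
            offsets = ODD if r % 2 else EVEN
            for c in range(cols):
                vals = [img[r, c]]
                for dr, dc in offsets:
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        vals.append(img[rr, cc])
                out[r, c] = sum(vals) / len(vals)
        return out

    noisy = np.random.rand(16, 16)
    print("noise std before/after:", noisy.std(), hex_mean_filter(noisy).std())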
APA, Harvard, Vancouver, ISO, and other styles
39

Winterbottom, Cara. "VRBridge: a Constructivist Approach to Supporting Interaction Design and End-User Authoring in Virtual Reality." Thesis, University of Cape Town, 2010. http://pubs.cs.uct.ac.za/archive/00000607/.

Full text
Abstract:
For any technology to become widely-used and accepted, it must support end-user authoring and customisation. This means making the technology accessible by enabling understanding of its design issues and reducing its technical barriers. Our interest is in enabling end-users to author dynamic virtual environments (VEs), specifically their interactions: player interactions with objects and the environment; and object interactions with each other and the environment. This thesis describes a method to create tools and design aids which enable end-users to design and implement interactions in a VE and assist them in building the requisite domain knowledge, while reducing the costs of learning a new set of skills. Our design method is based in constructivism, which is a theory that examines the acquisition and use of knowledge. It provides principles for managing complexity in knowledge acquisition: multiplicity of representations and perspectives; simplicity of basic components; encouragement of exploration; support for deep reflection; and providing users with control of their process as much as possible. We derived two main design aids from these principles: multiple, interactive and synchronised domain-specific representations of the design; and multiple forms of non-invasive and user-adaptable scaffolding. The method began with extensive research into representations and scaffolding, followed by investigation of the design strategies of experts, the needs of novices and how best to support them with software, and the requirements of the VR domain. We also conducted a classroom observation of the practices of non-programmers in VR design, to discover their specific problems with effectively conceptualising and communicating interactions in VR. Based on our findings in this research and our constructivist guidelines, we developed VRBridge, an interaction authoring tool. This contained a simple event-action interface for creating interactions using trigger-condition-action triads or Triggersets. We conducted two experimental evaluations during the design of VRBridge, to test the effectiveness of our design aids and the basic tool. The first tested the effectiveness of the Triggersets and additional representations: a Floorplan, a Sequence Diagram and Timelines. We used observation, interviews and task success to evaluate how effectively end-users could analyse and debug interactions created with VRBridge. We found that the Triggersets were effective and usable by novices to analyse an interaction design, and that the representations significantly improved end-user work and experience. The second experiment was large-scale (124 participants) and conducted over two weeks. Participants worked on authoring tasks which embodied typical interactions and complexities in the domain. We used a task exploration metric, questionnaires and computer logging to evaluate aspects of task performance: how effectively end-users could create interactions with VRBridge; how effectively they worked in the domain of VR authoring; how much enjoyment or satisfaction they experienced during the process; and how well they learned over time. This experiment tested the entire system and the effects of the scaffolding and representations. 
We found that all users were able to complete authoring tasks using VRBridge after very little experience with the system and domain; all users improved and felt more satisfaction over time; users with representations or scaffolding as a design aid completed the task more expertly, explored more effectively, felt more satisfaction and learned better than those without design aids; users with representations explored more effectively and felt more satisfaction than those with scaffolding; and users with both design aids learned better but did not improve in any other way over users with a single design aid. We also gained evidence about how the scaffolding, representations and basic tool were used during the evaluation. The contributions of this thesis are: an effective and efficient theory-based design method; a case study in the use of constructivism to structure a design process and deliver effective tools; a proof-of-concept prototype with which novices can create interactions in VR without traditional programming; evidence about the problems that novices face when designing interactions and dealing with unfamiliar programming concepts; empirical evidence about the relative effectiveness of additional representations and scaffolding as support for designing interactions; guidelines for supporting end-user authoring in general; and guidelines for the design of effective interaction authoring systems in general.
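To illustrate the trigger-condition-action triad ("Triggerset") described in this abstract, the following minimal Python sketch expresses it as a plain data structure plus an evaluation loop. The event names, conditions and actions are hypothetical, and in VRBridge itself end-users assemble Triggersets through a graphical interface rather than code.

    # Minimal sketch of trigger-condition-action triads and a dispatch loop. The events
    # and world state here are hypothetical examples, not VRBridge's actual interface.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Triggerset:
        trigger: str                        # event name that wakes this triggerset up
        condition: Callable[[dict], bool]   # predicate over the world state
        action: Callable[[dict], None]      # state change to perform

    world = {"door_open": False, "player_has_key": False}

    triggersets = [
        Triggerset("player_touches_key",
                   lambda w: not w["player_has_key"],
                   lambda w: w.update(player_has_key=True)),
        Triggerset("player_clicks_door",
                   lambda w: w["player_has_key"] and not w["door_open"],
                   lambda w: w.update(door_open=True)),
    ]

    def dispatch(event: str) -> None:
        for ts in triggersets:
            if ts.trigger == event and ts.condition(world):
                ts.action(world)

    for ev in ["player_clicks_door", "player_touches_key", "player_clicks_door"]:
        dispatch(ev)
        print(ev, "->", world)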
APA, Harvard, Vancouver, ISO, and other styles
40

Zhang, X. "Designing a geographic visual information system (GVIS) to support participation in urban planning." Thesis, University of Salford, 2004. http://usir.salford.ac.uk/2178/.

Full text
Abstract:
The growth of the international movement to involve the public in urban planning urges us to find new ways to achieve this. Recent studies have identified information communication technologies (ICT) as a mechanism to support such movement. It has been postulated that integrating geographic information system (GIS), virtual reality (VR) and Internet technologies will facilitate greater participation in planning activity and therefore strengthen and democratise the process. This is a growing area of research. There is, however, concern that a lack of a theoretical basis for these studies might undermine their success and hamper the widespread adoption of GIS-VR combination (GVIS). This thesis presents a theoretical framework based on the Learning System Theory (LST). ICT technologies are then assessed according to the framework. In the light of the assessmenta, prototype has been designed and developed based on a local urban regeneration project in Salford, UK. The prototype is then evaluated through two phases, namely formative evaluation and summative evaluation, to test the feasibility of the framework. The formative evaluation was focused on evaluating the functionality of the prototype system. In this case, evaluators were experts in IT or urban planning. The summative evaluation focused on testing the value of the prototype for different stakeholder groups of the urban regeneration project from local residents to planning officers. The findings from this research indicated that better visualization could help people in understanding planning issues and communicate their visions to others. The interactivity functions could further support interaction among users and the analysis of information. Moreover, the results indicated that the learning system theory could be used as a framework in looking at how GVIS could be developed in order to support public participation in urban planning.
APA, Harvard, Vancouver, ISO, and other styles
41

Piskopakis, Andreas. "Time-domain and harmonic balance turbulent Navier-Stokes analysis of oscillating foil aerodynamics." Thesis, University of Glasgow, 2014. http://theses.gla.ac.uk/5604/.

Full text
Abstract:
The underlying thread of the research work presented in this thesis is the development of a robust, accurate and computationally efficient general-purpose Reynolds-Averaged Navier-Stokes code for the analysis of complex turbulent flow unsteady aerodynamics, ranging from low-speed applications such as hydrokinetic and wind turbine flows to high-speed applications such as vibrating transonic wings. The main novel algorithmic contribution of this work is the successful development of a fully-coupled multigrid solution method of the Reynolds-Averaged Navier-Stokes equations and the two-equation shear stress transport turbulence model of Menter. The new approach, which also includes the implementation of a high-order restriction operator and an effective limiter of the prolonged corrections, is implemented and successfully demonstrated in the existing steady, time-domain and harmonic balance solvers of a compressible Navier-Stokes research code. The harmonic balance solution of the Navier-Stokes equations is a fairly new technology which can substantially reduce the run-time required to compute nonlinear periodic flow fields with respect to the conventional time-domain approach. The thesis also features the investigation of one modelling and one numerical aspect often overlooked or not comprehensively analysed in turbulent computational fluid dynamics simulations of the type discussed in the thesis. The modelling aspect is the sensitivity of the turbulent flow solution to the somewhat arbitrary value of the scaling factor appearing in the solid wall boundary condition of the second turbulent variable of the Shear Stress Transport turbulence model. The results reported herein highlight that the solution variability associated with the typical choices of such a scaling factor can be similar to or higher than the solution variability caused by the choices of different turbulence models. The numerical aspect is the sensitivity of the turbulent flow solution to the order of the discretisation of the turbulence model equations. The results reported herein highlight that the existence of significant solution differences between first- and second-order space-discretisations of the turbulence equations varies with the flow regime (e.g. fully subsonic or transonic), operating conditions that may or may not result in flow separation (e.g. angle of attack), and also the grid refinement. The newly developed turbulent flow capabilities are validated by considering a wide range of test cases with flow regime varying from low-speed subsonic to transonic. The solutions of the research code are compared with experimental data, theoretical solutions and also numerical solutions obtained with a state-of-the-art time-domain commercial code. The main computational results of this research concern a low-speed renewable energy application and an aeronautical engineering application. The former application is a thorough comparative analysis of a hydrokinetic turbine working in a low-speed laminar and a high-Reynolds number turbulent regime. The time-domain results obtained with the newly developed turbulent code are used to analyse and discuss in great detail the unsteady aerodynamic phenomena occurring in both regimes. The main motivation for analysing this problem is both to highlight the predictive capabilities and the numerical robustness of the developed turbulent time-domain flow solver for complex realistic problems, and to shed more light on the complex physics of this emerging renewable energy device.
The latter application is the time-domain and harmonic balance turbulent flow analysis of a transonic wing section animated by pitching motion. The main motivation of these analyses is to assess the computational benefits achievable by using the harmonic balance solution of the Reynolds-Averaged Navier-Stokes and Shear Stress Transport equations rather than the conventional time-domain solution, and also to further demonstrate the predictive capabilities of the developed Computational Fluid Dynamics system. To this aim, the numerical solutions of this research code are compared to both the available experimental data and the time-domain results computed by a state-of-the-art commercial package regularly used by industry and academia worldwide.
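The core idea of the harmonic balance approach mentioned in this abstract can be seen on a much smaller problem: a periodic solution is represented by its values at a few points over one period, the time derivative is evaluated spectrally, and the unsteady problem collapses into one coupled nonlinear system instead of a long time-marching run. The sketch below applies this to a scalar model ODE, not to the Navier-Stokes equations; the equation, the number of harmonics and the solver choice are illustrative assumptions.

    # Minimal harmonic balance (Fourier collocation) sketch for du/dt + u + u**3 = cos(w t).
    # The periodic solution is sought directly as its values at 2N+1 points over one period.
    import numpy as np
    from scipy.optimize import fsolve

    w = 2.0 * np.pi            # fundamental frequency of the forcing
    T = 2.0 * np.pi / w        # period
    N = 3                      # number of harmonics retained
    Np = 2 * N + 1             # collocation points over one period
    t = np.arange(Np) * T / Np

    def spectral_ddt(u):
        # time derivative of a periodic signal sampled at Np points, via the FFT
        k = np.fft.fftfreq(Np, d=T / Np) * 2.0 * np.pi    # angular frequencies
        return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

    def residual(u):
        return spectral_ddt(u) + u + u ** 3 - np.cos(w * t)

    u_hb = fsolve(residual, np.zeros(Np))
    print("harmonic balance solution at the collocation points:", np.round(u_hb, 4))
    print("max residual:", np.abs(residual(u_hb)).max())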
APA, Harvard, Vancouver, ISO, and other styles
42

Chi, Yuan. "Machine learning techniques for high dimensional data." Thesis, University of Liverpool, 2015. http://livrepository.liverpool.ac.uk/2033319/.

Full text
Abstract:
This thesis presents data processing techniques for three different but related application areas: embedding learning for classification, fusion of low bit depth images and 3D reconstruction from 2D images. For embedding learning for classification, a novel manifold embedding method is proposed for the automated processing of large, varied data sets. The method is based on binary classification, where the embeddings are constructed so as to determine one or more unique features for each class individually from a given dataset. The proposed method is applied to examples of multiclass classification that are relevant for large scale data processing for surveillance (e.g. face recognition), where the aim is to augment decision making by reducing extremely large sets of data to a manageable level before displaying the selected subset of data to a human operator. In addition, an indicator for a weighted pairwise constraint is proposed to balance the contributions from different classes to the final optimisation, in order to better control the relative positions between the important data samples from either the same class (intraclass) or different classes (interclass). The effectiveness of the proposed method is evaluated through comparison with seven existing techniques for embedding learning, using four established databases of faces, consisting of various poses, lighting conditions and facial expressions, as well as two standard text datasets. The proposed method performs better than these existing techniques, especially for cases with small sets of training data samples. For fusion of low bit depth images, using low bit depth images instead of full images offers a number of advantages for aerial imaging with UAVs, where there is a limited transmission rate/bandwidth: for example, reducing the need for data transmission, removing superfluous details, and reducing the computational loading of on-board platforms (especially for small or micro-scale UAVs). The main drawback of using low bit depth imagery is that image details of the scene are discarded. Fortunately, these can be reconstructed by fusing a sequence of related low bit depth images, which have been properly aligned. To reduce computational complexity and obtain a less distorted result, a similarity transformation is used to approximate the geometric alignment between two images of the same scene. The transformation is estimated using a phase correlation technique. It is shown that the phase correlation method is capable of registering low bit depth images without any modification, or any pre- and/or post-processing. For 3D reconstruction from 2D images, a method is proposed to deal with the dense reconstruction after a sparse reconstruction (i.e. a sparse 3D point cloud) has been created employing the structure from motion technique. Instead of generating a dense 3D point cloud, this proposed method forms a triangle from three points in the sparse point cloud, and then maps the corresponding components in the 2D images back to the point cloud. Compared to the existing methods that use a similar approach, this method reduces the computational cost. Instead of utilising every triangle in the 3D space to do the mapping from 2D to 3D, it uses a large triangle to replace a number of small triangles for flat and almost flat areas. Compared to the reconstruction result obtained by existing techniques that aim to generate a dense point cloud, the proposed method can achieve a better result while the computational cost is comparable.
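The phase correlation registration principle mentioned in this abstract can be illustrated with a short, self-contained sketch: the cross-power spectrum of two images is normalised to unit magnitude, and its inverse transform peaks at the relative translation. The synthetic test images and the pure-translation setting below are illustrative assumptions; the thesis applies the technique to low bit depth aerial imagery and a similarity transformation.

    # Minimal phase correlation sketch: recover a pure translation between two images.
    import numpy as np

    def phase_correlation(a, b):
        # Cross-power spectrum normalised to unit magnitude -> impulse at the shift
        FA, FB = np.fft.fft2(a), np.fft.fft2(b)
        R = FA * np.conj(FB)
        R /= np.maximum(np.abs(R), 1e-12)
        corr = np.real(np.fft.ifft2(R))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Map peaks in the upper half of each axis to negative shifts
        if dy > a.shape[0] // 2: dy -= a.shape[0]
        if dx > a.shape[1] // 2: dx -= a.shape[1]
        return dy, dx

    rng = np.random.default_rng(1)
    img = rng.random((128, 128))
    shifted = np.roll(img, shift=(5, -9), axis=(0, 1))   # ground-truth translation (5, -9)
    print("estimated shift:", phase_correlation(shifted, img))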
APA, Harvard, Vancouver, ISO, and other styles
43

Nord, Lisa. "Programvaruutvecklingen efter GDPR : Effekten av GDPR hos mjukvaruföretag." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20146.

Full text
Abstract:
The GDPR (General Data Protection Regulation) is a new European regulation that governs the processing of sensitive data and its free flow within the EU. The regulation protects natural persons in the processing of their personal data within the Union, which is a fundamental right. Since it came into force in May 2018, the GDPR has been a regulation to be reckoned with, as its fines are high. All companies within Europe need to follow the rules, as do companies outside the EU that handle the personal data of Europeans. The goal of this work is to see what effect the GDPR has had on Swedish software developers and how they view their workload. This has been done through a questionnaire survey of randomly selected Swedish software companies. The results of the thesis show that, for many software companies that create their own software or distribute software for a third party, the new regulation has meant a heavier workload and the renegotiation of existing software solutions, something that has led to new positions or working groups at many companies. When the GDPR first came into force, many working hours were spent converting already existing solutions to meet the requirements. Despite this, many more hours have been spent during development even after the GDPR, to ensure that new software also lives up to the requirements that have been set. From the results we can also find that many companies take a very strict view of handling the sensitive data they have collected from their customers, but a less strict view of the storage and handling of the personal data of their own employees.
GDPR (General Data Protection Regulation) is a new European regulation that regulates data protection and privacy. It also addresses the transfer of personal data to countries outside of the European Union. Ever since the GDPR became enforceable in May 2018, it has been a regulation for businesses to strictly follow and be wary of due to the hefty fines. All European businesses need to follow the new regulation, as do businesses outside of the E.U. that handle any type of personal data of Europeans. The goal of this thesis is to see the effect the GDPR has had on Swedish software developers and how they portray their workload. This data has been gathered in the form of a questionnaire which was randomly distributed to a number of Swedish software companies. In conclusion, this thesis shows that the new regulation has had a big impact on the developers that create or distribute software, primarily in the form of a heavier workload and the need to re-negotiate already existing software. This has provided new jobs and/or new teams for many of the companies that were a part of this study. When GDPR was first introduced, the software companies spent countless hours on converting already existing software. Even though much time was spent at the beginning, time is still dedicated to every solution developed after the GDPR to make sure it meets the requirements. We can also see that many businesses spend a lot more time and money on data protection for their clients' personal data, but they do not treat their employees' personal data in the same way.
APA, Harvard, Vancouver, ISO, and other styles
44

Yalim, Baris. "Internet Based Seismic Vulnerability Assessment Software Development For R/c Buildings." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605599/index.pdf.

Full text
Abstract:
Structural evaluation and seismic vulnerability assessment of Reinforced Concrete (R/C) buildings have become the focus of much research in Turkey and abroad, especially after the August 17, 1999 earthquake, which caused major life and property losses. A devastating earthquake being expected in the Istanbul-Marmara region raises many questions on how well the existing buildings are constructed and whether they can withstand a major earthquake. Evaluation of existing buildings for seismic vulnerability requires time-consuming input preparation (pre-processing), modelling, and post-processing of analysis results. The objective of the study is to perform automated seismic vulnerability assessment of existing R/C buildings over the internet by asking internet users to enter their building-related data, and streamlining the modelling-analysis-reporting phases by intelligent programming. The internet-based assessment tool is prepared for two levels of complexity: (a) the detailed level aims to carry out seismic evaluation of the buildings using linear structural analysis software developed for this study; (b) the simplified level produces a seismic evaluation index for buildings, based on simple and easy-to-enter general building information which can be entered by any person capable of using an internet browser. The detailed level evaluation program includes a user-friendly interface between the internet user and the analysis software, which will enable data entry, database management, and online evaluation/reporting of R/C buildings. Building data entered by numerous users over the internet will also enable the formation of an extensive database of buildings located all around Turkey. 36 buildings from the Düzce damage database, generated by the cooperation of the Scientific and Technical Research Council of Turkey (TÜBİTAK) and the Structural Engineering Research Unit (SERU) after the 17 August 1999 Kocaeli and the 12 November 1999 Düzce earthquakes, are used in the analyses to identify the relationship between calculated indices and observed damage levels of buildings, which will enable prediction of building damage levels for future earthquakes. The research is funded by the Science Research Program (BAP 2003-03-03-03), NATO-SfP 977231, and TUBITAK ICTAG-I574 projects. The contribution of the research is composed of a) online building index-performance analysis/evaluation software which might be used by any average internet user, and b) an ever-growing R/C building database entered by various internet users.
APA, Harvard, Vancouver, ISO, and other styles
45

Kucukcoban, Sezgin. "Development Of A Software For Seismic Damage Estimation: Case Studies." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605087/index.pdf.

Full text
Abstract:
The occurrence of two recent major earthquakes in Turkey, the 17 August 1999 Mw = 7.4 Izmit and the 12 November 1999 Mw = 7.1 Düzce events, prompted seismologists and geologists to conduct studies to predict the magnitude and location of a potential earthquake that could cause substantial damage in Istanbul. Many scenarios are available about the extent and size of the earthquake. Moreover, studies have recommended rough estimates of risk areas throughout the city to trigger responsible authorities to take precautions to reduce the casualties and losses from the expected earthquake. Most of these studies, however, adopt available procedures by modifying them for the building stock peculiar to Turkey. The assumptions and modifications made are too crude and thus are believed to introduce significant deviations from the actual case. To minimize these errors and use specific damage functions and capacity curves that reflect the practice in Turkey, a study was undertaken to predict the damage pattern and distribution in Istanbul for a scenario earthquake proposed by the Japan International Cooperation Agency (JICA). The success of these studies strongly depends on the quality and validity of building inventory and site property data. Building damage functions and capacity curves developed from the studies conducted at Middle East Technical University are used. A number of proper attenuation relations are employed. The study focuses mainly on developing software to carry out all computations and present the results. The results of this study reveal a more reliable picture of the physical seismic damage distribution expected in Istanbul.
APA, Harvard, Vancouver, ISO, and other styles
46

Mokhtari, Abbas Harati. "Impact of automatic identification system (AIS) on safety of marine navigation." Thesis, Liverpool John Moores University, 2007. http://researchonline.ljmu.ac.uk/5837/.

Full text
Abstract:
The Automatic Identification System (AIS) was introduced with the overall aim of promoting the efficiency and safety of navigation, protection of the environment, and safety of life at sea. Consequently, ship-borne AIS was made mandatory by the IMO through the 2000 and later amendments to chapter V of the Safety of Life at Sea (SOLAS) Convention. SOLAS Convention vessels were therefore required to carry AIS in a phased approach, from 1 July 2002 to the end of December 2004. The intention is to provide more precise information and a clear traffic view in navigation operations, particularly in anti-collision operations. This mandatory implementation of AIS has raised a number of issues with respect to its success in fulfilling the intended role. In order to improve the efficiency of the AIS in navigation operations, this research mainly focused on the accuracy of AIS information and the practical use of the technology on board ships. The intentions were to assess the reliability of data, the level of human failure associated with AIS, and the degree of actual use of the technology by navigators. This research firstly provided impressions about AIS technology for anti-collision and other marine operations, and about a systems approach to the issue of human failure in marine risk management. Secondly, this research has assessed the reliability of AIS data by examination of data collected through three AIS data studies. Thirdly, it has evaluated navigators' attitude and behaviour towards AIS usage by analysing navigators' feedback collected through an AIS questionnaire survey focused on their perceptions about different aspects of AIS related to its use. This research revealed that some aspects of the AIS technology and some features of its users need further attention and improvement, so as to achieve the intended objectives in navigation. This study finally contributed by proposing the AIS User Satisfaction Model as a suitable framework for evaluating navigators' satisfaction and the extent of the use of AIS. This model can probably be used as the basis for measuring navigators' attitudes and behaviour towards other similar maritime technologies.
APA, Harvard, Vancouver, ISO, and other styles
47

Morley, Deborah G. "Design and adaptation of a general purpose, user friendly statistical software package for the IBM personal computer and IBM PC compatibles (PC VSTAT)." Ohio : Ohio University, 1986. http://www.ohiolink.edu/etd/view.cgi?ohiou1183141969.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Drofelnik, Jernej. "Massively parallel time- and frequency-domain Navier-Stokes Computational Fluid Dynamics analysis of wind turbine and oscillating wing unsteady flows." Thesis, University of Glasgow, 2017. http://theses.gla.ac.uk/8284/.

Full text
Abstract:
Increasing interest in renewable energy sources for electricity production complying with stricter environmental policies has greatly contributed to further optimisation of existing devices and the development of novel renewable energy generation systems. The research and development of these advanced systems is tightly bound to the use of reliable design methods, which enable accurate and efficient design. Reynolds-averaged Navier-Stokes Computational Fluid Dynamics is one of the design methods that may be used to accurately analyse complex flows past current and forthcoming renewable energy fluid machinery such as wind turbines and oscillating wings for marine power generation. The use of this simulation technology offers a deeper insight into the complex flow physics of renewable energy machines than the lower-fidelity methods widely used in industry. The complex flows past these devices, which are characterised by highly unsteady and, often, predominantly periodic behaviour, can significantly affect power production and structural loads. Therefore, such flows need to be accurately predicted. The research work presented in this thesis deals with the development of a novel, accurate, scalable, massively parallel CFD research code COSA for general fluid-based renewable energy applications. The research work also demonstrates the capabilities of newly developed solvers of COSA by investigating complex three-dimensional unsteady periodic flows past oscillating wings and horizontal-axis wind turbines. Oscillating wings for the extraction of energy from an oncoming water or air stream, feature highly unsteady hydrodynamics. The flow past oscillating wings may feature dynamic stall and leading edge vortex shedding, and is significantly three-dimensional due to finite-wing effects. Detailed understanding of these phenomena is essential for maximising the power generation efficiency. Most of the knowledge on oscillating wing hydrodynamics is based on two-dimensional low-Reynolds number computational fluid dynamics studies and experimental testing. However, real installations are expected to feature Reynolds numbers of the order of 1 million and strong finite-wing-induced losses. This research investigates the impact of finite wing effects on the hydrodynamics of a realistic aspect ratio 10 oscillating wing device in a stream with Reynolds number of 1.5 million, for two high-energy extraction operating regimes. The benefits of using endplates in order to reduce finite-wing-induced losses are also analyzed. Three-dimensional time-accurate Reynolds-averaged Navier-Stokes simulations using Menter's shear stress transport turbulence model and a 30-million-cell grid are performed. Detailed comparative hydrodynamic analyses of the finite and infinite wings highlight that the power generation efficiency of the finite wing with sharp tips for the considered high energy-extraction regimes decreases by up to 20 %, whereas the maximum power drop is 15 % at most when using the endplates. Horizontal-axis wind turbines may experience strong unsteady periodic flow regimes, such as those associated with the yawed wind condition. Reynolds-averaged Navier-Stokes CFD has been demonstrated to predict horizontal-axis wind turbine unsteady flows with accuracy suitable for reliable turbine design. The major drawback of conventional Reynolds-averaged Navier-Stokes CFD is its high computational cost. 
A time-step-independent time-domain simulation of horizontal-axis wind turbine periodic flows requires long runtimes, as several rotor revolutions have to be simulated before the periodic state is achieved. Runtimes can be significantly reduced by using the frequency-domain harmonic balance method for solving the unsteady Reynolds-averaged Navier-Stokes equations. This research has demonstrated that this promising technology can be efficiently used for the analyses of complex three-dimensional horizontal-axis wind turbine periodic flows, and has a vast potential for rapid wind turbine design. The three-dimensional simulations of the periodic flow past the blade of the NREL 5-MW baseline horizontal-axis wind turbine in yawed wind have been selected for the demonstration of the effectiveness of the developed technology. The comparative assessment is based on thorough parametric time-domain and harmonic balance analyses. Presented results highlight that horizontal-axis wind turbine periodic flows can be computed by the harmonic balance solver about fifty times more rapidly than by the conventional time-domain analysis, with accuracy comparable to that of the time-domain solver.
APA, Harvard, Vancouver, ISO, and other styles
49

Lee, ChuanChe. "Parallel programming on General Block Min Max Criterion." CSUSB ScholarWorks, 2006. https://scholarworks.lib.csusb.edu/etd-project/3065.

Full text
Abstract:
The purpose of the thesis is to develop a parallel implementation of the General Block Min Max Criterion (GBMM). This thesis deals with two kinds of parallel overheads: Redundant Calculations Parallel Overhead (RCPO) and Communication Parallel Overhead (CPO).
APA, Harvard, Vancouver, ISO, and other styles
50

Månsson, Jakob. "Comparative Study of CPU and GPGPU Implementations of the Sievesof Eratosthenes, Sundaram and Atkin." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21111.

Full text
Abstract:
Background. Prime numbers are integers divisible only by 1 and themselves, and one of the oldest methods of finding them is through a process known as sieving. A prime number sieving algorithm produces every prime number in a span, usually from the number 2 up to a given number n. In this thesis, we will cover the three sieves of Eratosthenes, Sundaram, and Atkin. Objectives. We shall compare their sequential CPU implementations to their parallel GPGPU (General Purpose Graphics Processing Unit) counterparts in terms of performance, accuracy, and suitability. GPGPU is a method in which one utilizes hardware intended for graphics rendering to achieve a high degree of parallelism. Our goal is to establish if GPGPU sieving can be more effective than the sequential approach, which is currently commonplace. Method. We utilize the C++ and CUDA programming languages to implement the algorithms, and then extract data regarding their execution time and accuracy. Experiments are set up and run at several sieving limits, with the upper bound set by the memory capacity of available GPU hardware. Furthermore, we study each sieve to identify what characteristics make them fit or unfit for a GPGPU approach. Results. Our results show that the sieve of Eratosthenes is slow and ill-suited for GPGPU computing, that the sieve of Sundaram is both efficient and fit for parallelization, and that the sieve of Atkin is the fastest but suffers from imperfect accuracy. Conclusions. Finally, we address how the smaller concurrent memory capacity available to the GPGPU limits the ranges that can be sieved, compared to the CPU. Utilizing the beneficial characteristics of the sieve of Sundaram, we propose a batch-divided implementation that would allow the GPGPU sieve to cover the same range of numbers as any of the CPU variants.
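For readers unfamiliar with the sieve of Sundaram highlighted in this abstract, the following minimal sequential sketch shows the algorithm's core: mark every index of the form i + j + 2ij, and every unmarked index m then corresponds to the odd prime 2m + 1. It is written in Python for brevity, whereas the thesis compares C++ CPU and CUDA GPGPU implementations.

    # Minimal sequential sieve of Sundaram (Python for illustration only; the thesis's
    # implementations are in C++ and CUDA). marked[m] == 1 means 2*m + 1 is composite.
    def sundaram(n):
        if n < 2:
            return []
        k = (n - 1) // 2
        marked = bytearray(k + 1)
        for i in range(1, k + 1):
            if i + i + 2 * i * i > k:          # smallest marked value for this i is at j = i
                break
            for j in range(i, (k - i) // (2 * i + 1) + 1):
                marked[i + j + 2 * i * j] = 1
        return [2] + [2 * m + 1 for m in range(1, k + 1) if not marked[m]]

    print(sundaram(50))   # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]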
APA, Harvard, Vancouver, ISO, and other styles