Dissertations / Theses on the topic 'Software engineering – Mathematics'

Consult the top 50 dissertations / theses for your research on the topic 'Software engineering – Mathematics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Pollard, Janelle. "A software engineering approach to the integration of computer technology into mathematics education /." [St. Lucia, Qld.], 2004. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe18424.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Owusu-Tieku, Kwaku. "Using software engineering metrics in AP modularization." [Johnson City, Tenn. : East Tennessee State University], 2001. http://etd-submit.etsu.edu/etd/theses/available/etd-0718101-161918/unrestricted/owusu-tiekuk0813a.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gill, Mandeep Singh. "Application of software engineering methodologies to the development of mathematical biological models." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:35178f3a-7951-4f1c-aeab-390cdd622b05.

Full text
Abstract:
Mathematical models have been used to capture the behaviour of biological systems, from low-level biochemical reactions to multi-scale whole-organ models. Models are typically based on experimentally-derived data, attempting to reproduce the observed behaviour through mathematical constructs, e.g. using Ordinary Differential Equations (ODEs) for spatially-homogeneous systems. These models are developed and published as mathematical equations, yet are of such complexity that they necessitate computational simulation. This computational model development is often performed in an ad hoc fashion by modellers who lack extensive software engineering experience, resulting in brittle, inefficient model code that is hard to extend and reuse. Several Domain Specific Languages (DSLs) exist to aid capturing such biological models, including CellML and SBML; however these DSLs are designed to facilitate model curation rather than simplify model development. We present research into the application of techniques from software engineering to this domain; starting with the design, development and implementation of a DSL, termed Ode, to aid the creation of ODE-based biological models. This introduces features beneficial to model development, such as model verification and reproducible results. We compare and contrast model development to large-scale software development, focussing on extensibility and reuse. This work results in a module system that enables the independent construction and combination of model components. We further investigate the use of software engineering processes and patterns to develop complex modular cardiac models. Model simulation is increasingly computationally demanding, thus models are often created in complex low-level languages such as C/C++. We introduce a highly-efficient, optimising native-code compiler for Ode that generates custom, model-specific simulation code and allows use of our structured modelling features without degrading performance. Finally, in certain contexts the stochastic nature of biological systems becomes relevant. We introduce stochastic constructs to the Ode DSL that enable models to use Stochastic Differential Equations (SDEs), the Stochastic Simulation Algorithm (SSA), and hybrid methods. These use our native-code implementation and demonstrate highly-efficient stochastic simulation, beneficial as stochastic simulation is highly computationally intensive. We introduce a further DSL to model ion channels declaratively, demonstrating the benefits of DSLs in the biological domain. This thesis demonstrates the application of software engineering methodologies, and in particular DSLs, to facilitate the development of both deterministic and stochastic biological models. We demonstrate their benefits with several features that enable the construction of large-scale, reusable and extensible models. This is accomplished whilst providing efficient simulation, creating new opportunities for biological model development, investigation and experimentation.
APA, Harvard, Vancouver, ISO, and other styles
4

Johnson, Stephen Philip. "Mapping numerical software onto distributed memory parallel systems." Thesis, University of Greenwich, 1992. http://gala.gre.ac.uk/8676/.

Full text
Abstract:
The aim of this thesis is to further the use of parallel computers, in particular distributed memory systems, by providing strategies for parallelisation and developing the core component of tools to aid scalar software porting. The ported code must not only efficiently exploit available parallel processing speed and distributed memory, but also enable existing users of the scalar code to use the parallel version with identical inputs and allow maintenance to be performed by the scalar code author in conjunction with the parallel code. The data partition strategy has been used to parallelise an in-house solidification modelling code where all requirements for the parallel software were successfully met. To confirm the success of this parallelisation strategy, a much sterner test was used, parallelising the HARWELL-FLOW3D fluid flow package. The performance results of the parallel version clearly vindicate the conclusions of the first example. Speedup efficiencies of around 80 percent have been achieved on fifty processors for sizable models. In both these tests, the alterations to the code were fairly minor, maintaining the structure and style of the original scalar code, which can easily be recognised by its original author. The alterations made to these codes indicated the potential for parallelising tools, since the alterations were fairly minor and usually mechanical in nature. The current generation of parallelising compilers rely heavily on heuristic guidance in parallel code generation and other decisions that may be better made by a human. As a result, the code they produce will almost certainly be inferior to manually produced code. Also, in order not to sacrifice parallel code quality when using tools, the scalar code analysis used to identify inherent parallelism in an application code, as used in parallelising compilers, has been extended to eliminate conservatively assumed dependencies, since these dependencies can greatly inhibit parallelisation. Extra information has been extracted both from control flow and from processing symbolic information. The tests devised to utilise this information enable the non-existence of a significant number of previously assumed dependencies to be proved. In some cases, the number of true dependencies has been more than halved. The dependence graph produced is of sufficient quality to greatly aid parallelisation, with user interaction and interpretation, parallelism detection and code transformation validity being less inhibited by assumed dependencies. The use of tools rather than the black-box approach removes the handicaps associated with using heuristic methods, if any relevant heuristic methods exist.
APA, Harvard, Vancouver, ISO, and other styles
5

Vũ, John Huân. "Software Internationalization: A Framework Validated Against Industry Requirements for Computer Science and Software Engineering Programs." DigitalCommons@CalPoly, 2010. https://digitalcommons.calpoly.edu/theses/248.

Full text
Abstract:
View John Huân Vũ's thesis presentation at http://youtu.be/y3bzNmkTr-c. In 2001, the ACM and IEEE Computing Curriculum stated that it was necessary to address "the need to develop implementation models that are international in scope and could be practiced in universities around the world." With increasing connectivity through the internet, the move towards a global economy and growing use of technology places software internationalization as a more important concern for developers. However, there has been a "clear shortage in terms of numbers of trained persons applying for entry-level positions" in this area. Eric Brechner, Director of Microsoft Development Training, suggested five new courses to add to the computer science curriculum due to the growing "gap between what college graduates in any field are taught and what they need to know to work in industry." He concludes that "globalization and accessibility should be part of any course of introductory programming," stating: A course on globalization and accessibility is long overdue on college campuses. It is embarrassing to take graduates from a college with a diverse student population and have to teach them how to write software for a diverse set of customers. This should be part of introductory software development. Anything less is insulting to students, their family, and the peoples of the world. There is very little research into how the subject of software internationalization should be taught to meet the major requirements of the industry. The research question of the thesis is thus, "Is there a framework for software internationalization that has been validated against industry requirements?" The answer is no. The framework "would promote communication between academia and industry ... that could serve as a common reference point in discussions." Since no such framework for software internationalization currently exists, one will be developed here. The contribution of this thesis includes a provisional framework to prepare graduates to internationalize software and a validation of the framework against industry requirements. The requirement of this framework is to provide a portable and standardized set of requirements for computer science and software engineering programs to teach future graduates.
APA, Harvard, Vancouver, ISO, and other styles
6

Borgers, Jocelyn. "Web 2.0 Technologies in the Software Development Process." Digital Commons @ East Tennessee State University, 2013. https://dc.etsu.edu/honors/164.

Full text
Abstract:
Software engineers must communicate with many different people, likely in different locations, in order to create a successful piece of software. Social media can be used to communicate quickly and efficiently to minimize miscommunications and facilitate collaboration in the software development process. Research in this area has been sparse but significant because initial findings show that social media is being used in innovative ways to improve software development. Surveys of what social media some companies are currently using along with information about new social media systems indicate possible uses for these technologies on future software development projects such as documentation maintenance, employee training, and predicting and thus preventing build failures.
APA, Harvard, Vancouver, ISO, and other styles
7

Ewer, John Andrew Clark. "An investigation into the feasibility, problems and benefits of re-engineering a legacy procedural CFD code into an event driven, object oriented system that allows dynamic user interaction." Thesis, University of Greenwich, 2000. http://gala.gre.ac.uk/6165/.

Full text
Abstract:
This research started with questions about how the overall efficiency, reliability and ease-of-use of Computational Fluid Dynamics (CFD) codes could be improved using any available software engineering and Human Computer Interaction (HCI) techniques. Much of this research has been driven by the difficulties experienced by novice CFD users in the area of Fire Field Modelling, where the introduction of performance-based building regulations has led to a situation where non-CFD experts are increasingly making use of CFD techniques, with varying degrees of effectiveness, for safety-critical research. Such modelling has not been helped by the mode of use, the high degree of expertise required from the user and the complexity of specifying a simulation case. Many of the early stages of this research were channelled by perceived limitations of the original legacy CFD software that was chosen as a framework for these investigations. These limitations included poor code clarity, bad overall efficiency due to the use of batch mode processing, poor assurance that the final results presented from the CFD code were correct, and the requirement for considerable expertise on the part of users. The innovative incremental re-engineering techniques developed to reverse-engineer, re-engineer and improve the internal structure and usability of the software were arrived at as a by-product of the research into overcoming the problems discovered in the legacy software. The incremental re-engineering methodology was considered to be of enough importance to warrant inclusion in this thesis. Various HCI techniques were employed to attempt to overcome the efficiency and solution correctness problems. These investigations have demonstrated that the quality, reliability and overall run-time efficiency of CFD software can be significantly improved by the introduction of run-time monitoring and interactive solution control. It should be noted that the re-engineered CFD code is observed to run more slowly than the original FORTRAN legacy code due, mostly, to the changes in calling architecture of the software and differences in compiler optimisation; but it is argued that the overall effectiveness, reliability and ease-of-use of the prototype software are all greatly improved. Investigations into dynamic solution control (made possible by the open software architecture and the interactive control interface) have demonstrated considerable savings when using solution control optimisation. Such investigations have also demonstrated the potential for improved assurance of correct simulation when compared with the batch mode of processing found in most legacy CFD software. Investigations have also been conducted into the efficiency implications of using unstructured group solvers. These group solvers are a derivation of the simple point-by-point Jacobi Over Relaxation (JOR) and Successive Over Relaxation (SOR) solvers [CROFT98], and using group solvers allows the computational processing to be more effectively targeted on regions or logical collections of cells that require more intensive computation. Considerable savings have been demonstrated for the use of both static and dynamic group membership when using these group solvers for a complex 3-dimensional fire modelling scenario. Furthermore, the improvements in the system architecture (brought about as a result of software re-engineering) have helped to create an open framework that is both easy to comprehend and extend. This is in spite of the underlying unstructured nature of the simulation mesh, with all of the associated complexity that this brings to the data structures. The prototype CFD software framework has recently been used as the core processing module in a commercial Fire Field Modelling product (called "SMARTFIRE" [EWER99-1]). This CFD framework is also being used by researchers to investigate many diverse aspects of CFD technology, including Knowledge Based Solution Control, Gaseous and Solid Phase Combustion, Adaptive Meshing and CAD file interpretation for ease of case specification.
APA, Harvard, Vancouver, ISO, and other styles
8

Tillenius, Martin. "Leveraging multicore processors for scientific computing." Licentiate thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-181266.

Full text
Abstract:
This thesis deals with how to develop scientific computing software that runs efficiently on multicore processors. The goal is to find building blocks and programming models that increase productivity and reduce the probability of programming errors when developing parallel software. In our search for new building blocks, we evaluate the use of hardware transactional memory for constructing atomic floating point operations. Using benchmark applications from scientific computing, we show in which situations this achieves better performance than other approaches. Driven by the needs of scientific computing applications, we develop a programming model and implement it as a reusable library. The library provides a run-time system for executing tasks on multicore architectures, with efficient and user-friendly management of dependencies. Our results from scientific computing benchmarks show excellent scaling up to at least 64 cores. We also investigate how the execution time depends on the task granularity, and build a model for the performance of the task library.
UPMARC
eSSENCE
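To illustrate the kind of dependency-aware task execution the abstract describes, here is a minimal Python sketch; it is not the thesis's library, and the TaskGraph class and its submit API are hypothetical. Tasks declare which data they read and write, and a task runs only after the tasks producing its inputs have finished.

```python
# Minimal sketch of data-dependency-driven tasking; illustrative only.
from concurrent.futures import ThreadPoolExecutor

class TaskGraph:
    def __init__(self, workers=4):
        self.pool = ThreadPoolExecutor(max_workers=workers)
        self.last_write = {}                      # data item -> future producing it

    def submit(self, fn, reads=(), writes=()):
        deps = [self.last_write[d] for d in list(reads) + list(writes)
                if d in self.last_write]
        def run():
            for dep in deps:                      # naive: block on each dependency
                dep.result()                      # (a real run-time schedules instead)
            return fn()
        fut = self.pool.submit(run)
        for d in writes:
            self.last_write[d] = fut              # later readers depend on this task
        return fut

g = TaskGraph()
a = g.submit(lambda: print("assemble block A"), writes=("A",))
b = g.submit(lambda: print("assemble block B"), writes=("B",))
c = g.submit(lambda: print("combine A and B"), reads=("A", "B"), writes=("C",))
c.result()
```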
APA, Harvard, Vancouver, ISO, and other styles
9

Jayawardena, Mahen. "Parallel algorithms and implementations for genetic analysis of quantitative traits." Licentiate thesis, Uppsala universitet, Avdelningen för teknisk databehandling, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-85815.

Full text
Abstract:
Many important traits in plants, animals and humans are quantitative, and most such traits are generally believed to be regulated by multiple genetic loci. Standard computational tools for analysis of quantitative traits use linear regression models for relating the observed phenotypes to the genetic composition of individuals in a population. However, using these tools to simultaneously search for multiple genetic loci is very computationally demanding. The main reason for this is the complex nature of the optimization landscape for the multidimensional global optimization problems that must be solved. This thesis describes parallel algorithms and implementation techniques for such optimization problems. The new computational tools will eventually enable genetic analysis exploiting new classes of multidimensional statistical models, potentially resulting in interesting results in genetics. We first describe how the algorithm used for global optimization in the standard, serial software is parallelized and implemented on a grid system. Then, we also describe a parallelized version of the more elaborate global optimization algorithm DIRECT and show how this can be deployed on grid systems and other loosely-coupled architectures. The parallel DIRECT scheme is further developed to exploit both coarse-grained parallelism in grids or clusters as well as fine-grained, tightly-coupled parallelism in multi-core nodes. The results show that excellent speedup and performance can be achieved on grid systems and clusters, even when using a tightly-coupled algorithm such as DIRECT. Finally, a grid portal providing a graphical front-end for our code is implemented as a pilot. After some further development, this portal can be utilized by geneticists for performing multidimensional genetic analysis of quantitative traits on a regular basis.
APA, Harvard, Vancouver, ISO, and other styles
10

Löf, Henrik. "Parallelizing the Method of Conjugate Gradients for Shared Memory Architectures." Licentiate thesis, Uppsala universitet, Avdelningen för teknisk databehandling, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-86295.

Full text
Abstract:
Solving Partial Differential Equations (PDEs) is an important problem in many fields of science and engineering. For most real-world problems modeled by PDEs, we can only approximate the solution using numerical methods. Many of these numerical methods result in very large systems of linear equations. A common way of solving these systems is to use an iterative solver such as the method of conjugate gradients. Furthermore, due to the size of these systems we often need parallel computers to be able to solve them in a reasonable amount of time. Shared memory architectures represent a class of parallel computer systems commonly used both in commercial applications and in scientific computing. To be able to provide cost-efficient computing solutions, shared memory architectures come in a large variety of configurations and sizes. From a programming point of view, we do not want to spend a lot of effort optimizing an application for a specific computer architecture. We want to find methods and principles of optimizing our programs that are generally applicable to a large class of architectures. In this thesis, we investigate how to implement the method of conjugate gradients efficiently on shared memory architectures. We seek algorithmic optimizations that result in efficient programs for a variety of architectures. To study this problem, we have implemented the method of conjugate gradients using OpenMP and we have measured the runtime performance of this solver on a variety of both uniform and non-uniform shared memory architectures. The input data used in the experiments come from a finite-element discretization of the Maxwell equations in three dimensions over a fighter-jet geometry. Our results show that, for all architectures studied, optimizations targeting the memory hierarchy exhibited the largest performance increase. Improving the load balance, by balancing the arithmetical work and minimizing the number of global barriers, proved to be of lesser importance. Overall, bandwidth minimization of the iteration matrix was the most efficient optimization. On non-uniform architectures, proper data distribution proved to be very important. In our experiments we used page migration to improve the data distribution during runtime. Our results indicate that page migration can be very efficient if we can keep the migration cost low. Furthermore, we believe that page migration can be introduced in a portable way into OpenMP in the form of a directive with an affinity-on-next-touch semantic.
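For reference, the serial kernel that the thesis parallelises and tunes is the classical conjugate gradient iteration; a compact NumPy sketch of the textbook formulation (not the thesis's OpenMP code) is shown below, with the matrix-vector product being the memory-bandwidth-bound step that the abstract's optimizations target.

```python
# Textbook (unpreconditioned) conjugate gradient iteration; illustrative sketch.
import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                        # residual
    p = r.copy()                         # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p                       # matrix-vector product: the dominant cost
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD test system
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))          # approx. [0.0909, 0.6364]
```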
APA, Harvard, Vancouver, ISO, and other styles
11

Lipkin, Ilya. "Testing Software Development Project Productivity Model." University of Toledo / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1321593577.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Nettelblad, Carl. "Using Markov models and a stochastic Lipschitz condition for genetic analyses." Licentiate thesis, Uppsala universitet, Avdelningen för teknisk databehandling, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-120295.

Full text
Abstract:
A proper understanding of biological processes requires an understanding of genetics and evolutionary mechanisms. The vast amounts of genetic information that can routinely be extracted with modern technology have so far not been accompanied by an equally extended understanding of the corresponding processes. The relationship between a single gene and the resulting properties, the phenotype, of an individual is rarely clear. This thesis addresses several computational challenges regarding identifying and assessing the effects of quantitative trait loci (QTL), genomic positions where variation affects a trait. The genetic information available for each individual is rarely complete, meaning that the unknown variable of the genotype in the loci modelled also needs to be addressed. This thesis presents new tools for employing the available information in a way that maximizes the information used, by using hidden Markov models (HMMs), resulting in a change in algorithm runtime complexity from exponential to log-linear in terms of the number of markers. It also proposes the introduction of inferred haplotypes to further increase the power to assess these unknown variables for pedigrees of related, genetically diverse individuals. The modelling consequences of partial genetic information are also treated. Furthermore, genes do not affect traits directly, but are rather expressed in the environment of, and in concordance with, other genes. Therefore, significant interactions can be expected within genes, where some combination of genetic variation gives a pronounced, or even opposite, effect compared to when occurring separately. This thesis addresses how to perform efficient scans for multiple interacting loci, as well as how to derive highly accurate empirical significance tests in these settings. This is done by analyzing the mathematical properties of the objective function describing the quality of model fits, and reformulating it through a simple transformation. Combined with the presented prototype of a problem-solving environment, these developments can make multi-dimensional searches for QTL routine, allowing the pursuit of new biological insight.
eSSENCE
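The complexity reduction mentioned in the abstract comes from the standard HMM forward recursion, which processes one marker at a time; a toy NumPy sketch of a generic HMM (not the thesis's genetic model) is shown below.

```python
# Forward algorithm on a toy HMM: one O(states^2) update per observation,
# so the total cost grows linearly with the number of markers. Illustrative only.
import numpy as np

def forward_likelihood(init, trans, emit, observations):
    alpha = init * emit[:, observations[0]]
    for obs in observations[1:]:
        alpha = (alpha @ trans) * emit[:, obs]   # propagate, then weight by emission
    return alpha.sum()                           # likelihood of the whole sequence

init = np.array([0.5, 0.5])                      # two hidden states
trans = np.array([[0.9, 0.1], [0.2, 0.8]])       # transition probabilities
emit = np.array([[0.8, 0.2], [0.3, 0.7]])        # emission probabilities
print(forward_likelihood(init, trans, emit, [0, 1, 0, 0]))
```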
APA, Harvard, Vancouver, ISO, and other styles
13

Hekimoglu, Ozge. "Comparison Of The Resource Allocation Capabilities Of Project Management Software Packages In Resource Constrained Project Scheduling Problems." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608203/index.pdf.

Full text
Abstract:
In this study, results of a comparison on benchmark test problems are presented to investigate the performance of Primavera V.4.1, with its two resource allocation priority rules, and MS Project 2003. Resource allocation capabilities of the packages are measured in terms of deviation from the upper bound of the minimum makespan. Resource-constrained project scheduling problem instances are taken from PSPLIB, which are generated under a factorial design by ProGen. Statistical tests are applied to the results to investigate the significance of the parameters' effects.
APA, Harvard, Vancouver, ISO, and other styles
14

Emelko, Glenn A. "A New Algorithm for Efficient Software Implementation of Reed-Solomon Encoders for Wireless Sensor Networks." Cleveland, Ohio : Case Western Reserve University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=case1238457115.

Full text
Abstract:
Thesis (Ph.D.)--Case Western Reserve University, 2009
Department of Electrical Engineering. Title from OhioLINK abstract screen (viewed on 10 April 2009). Available online via the OhioLINK ETD Center.
APA, Harvard, Vancouver, ISO, and other styles
15

Gupta, Jatin. "Application of Hazard and Operability (HAZOP) Methodology to Safety-Related Scientific Software." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1398983873.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Haraldsson, Saemundur Oskar. "Genetic improvement of software : from program landscapes to the automatic improvement of a live system." Thesis, University of Stirling, 2017. http://hdl.handle.net/1893/26007.

Full text
Abstract:
In today's technology-driven society, software is becoming increasingly important in more areas of our lives. The domain of software extends beyond the obvious domain of computers, tablets, and mobile phones. Smart devices and the internet-of-things have inspired the integration of digital and computational technology into objects that some of us would never have guessed could be possible or even necessary. Fridges and freezers connected to social media sites, a toaster activated with a mobile phone, physical buttons for shopping, and verbally asking smart speakers to order a meal to be delivered. This is the world we live in and it is an exciting time for software engineers and computer scientists. The sheer volume of code that is currently in use has long since grown beyond the point of any hope of proper manual maintenance. The rate at which mobile application stores such as Google's and Apple's have expanded is astounding. The research presented here aims to shed light on an emerging field of research called Genetic Improvement (GI) of software. It is a methodology for changing program code to improve existing software. This thesis details a framework for GI that is applied to explore the fitness landscape of bug fixing in Python software, to reduce execution time in a C++ program, and that is integrated into a live system. We show that software is generally not fragile and, although fitness landscapes for GI are flat, they are not impossible to search in. This conclusion applies equally to bug fixing in small programs and to execution time improvements. The framework's application is shown to be transportable between programming languages with minimal effort. Additionally, it can be easily integrated into a system that runs a live web service.
APA, Harvard, Vancouver, ISO, and other styles
17

Frazier, David E. "Requirement elicitation of large web projects." [Johnson City, Tenn. : East Tennessee State University], 2004. http://etd-submit.etsu.edu/etd/theses/available/etd-1109104-113450/unrestricted/FrazierD112304f.pdf.

Full text
Abstract:
Thesis (M.S.)--East Tennessee State University, 2004.
Title from electronic submission form. ETSU ETD database URN: etd-1109104-113450 Includes bibliographical references. Also available via Internet at the UMI web site.
APA, Harvard, Vancouver, ISO, and other styles
18

Moody, James David. "Categorizing Non-Functional Requirements Using a Hierarchy in UML." Digital Commons @ East Tennessee State University, 2003. https://dc.etsu.edu/etd/763.

Full text
Abstract:
Non-functional requirements (NFRs) are a subset of requirements, the means by which software system developers and clients communicate about the functionality of the system to be built. This paper has three main parts: first, an overview of how non-functional requirements relate to software engineering is given, along with a survey of NFRs in the software engineering literature. Second, a collection of 161 NFRs is diagrammed using the Unified Modelling Language, forming a tool with which developers may more easily identify and write additional NFRs. Third, a lesson plan is presented, a learning module intended for an undergraduate software engineering curriculum. The results of presenting this learning module to a class in Spring 2003 are presented.
APA, Harvard, Vancouver, ISO, and other styles
19

Kutomi, Esdras. "Supporting Support Engineers." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8431.

Full text
Abstract:
The steady and uninterrupted availability of systems is essential for the mission of many companies and other organizations. This responsibility rests largely with support engineers, who are responsible for responding to incidents. Incident response is a unique type of task in software engineering, given that it carries distinguishing characteristics such as risk, pressure, incomplete information, and urgency. Despite the importance of this task for many organizations, little can be found in the literature about the incident response task and model. To fill the gap, we created a theoretical foundation to foster research on incident response. We conducted an interview study, asking 12 support engineers about their experiences dealing with outages, service degradation, and other incidents that demanded an urgent response. We used our 22 collected cases to identify important concepts of incidents and their dimensions, and created an ontology of incidents and a model of incident response. To validate the usefulness of our results, we analyzed our incidents based on our ontology and model, providing insights related to the detection of incidents, investigation, and the handover process. We also provide analytical insights related to the prevention of resource limitation incidents. Finally, we validate the usefulness of our research by proposing an improvement to the monitoring tools used by support engineers.
APA, Harvard, Vancouver, ISO, and other styles
20

Ying, Tiancheng. "CandyFactory: Cloud-Based Educational Game for Teaching Fractions." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/90218.

Full text
Abstract:
Nowadays, cross-platform software development is more expensive than ever before in terms of time and effort. Meanwhile, with the increasing number of personal devices, it is harder for local applications to synchronize and connect to the Internet. Educational games can be divided into "local educational games" and "web educational games." "Local game" refers to games on tablets, mobile devices or PCs, implemented as an application on the corresponding platform. Games of this kind, such as the iPad version of CandyFactory, mostly have neither backend support nor cross-platform features. For one specific game, if the developer wants it to run on iPad and Android tablets, they need to develop two applications based on the corresponding development frameworks, which is time- and effort-consuming. "Web game" refers to games on websites, which are cross-platform but lack backend support. Usually they are pure JavaScript or Flash games with no backend recording performance and achievements. Software development for each individual platform is time- and effort-consuming. To achieve cross-platform development, many programming languages and platforms such as Java, Python, and the JVM have appeared. Among all the cross-platform approaches, cloud-based software development is the most universal solution to this problem. With web browsers built into every operating system, cloud software can be compatible with almost any device. Moreover, "Software-as-a-Service" (SaaS) is becoming a new software engineering paradigm, and cloud-based software development is increasingly popular because of its flexible scalability and cross-platform features. In this thesis, we create a cloud-based educational game, CandyFactory, based on an iPad version of CandyFactory, and add a backend to it to record user performance as well as achievements. Firstly, we re-develop the whole game from the iOS platform to the cloud-based Java EE platform. Secondly, we add new features to improve the game play, such as ruler functionality and achievements animation. Thirdly, we add backend support to CandyFactory, including user account creation, course creation and performance report generation. With this functionality, teachers can monitor their students' performances and generate course reports. Moreover, teachers can view a specific student's report in order to provide more specific and effective help to their students. Lastly, with the advantages of cloud-based software development, we can update the whole application at any time without forcing the user to reinstall the update or re-download the game. With hot updates, the cloud-based CandyFactory is highly maintainable. The cloud-based CandyFactory runs on any computer that supports a minimum 1024x768 screen resolution: iPads, Android or Microsoft tablets, Windows or Mac laptops and desktops, and any other computer with a web browser. The advantages of cloud-based educational games over local and web educational games are: firstly, they are cross-platform; secondly, they have backend data collection support; thirdly, they are consistent: even if users log in from different computers, their game record and history will always be the same; lastly, the teacher can always keep track of his/her students' performance and provide more specific help and feedback.
Master of Science
Providing services on the cloud has become universal. The term "Cloud-Based" indicates that the software application runs on a server computer and users access the application through a web browser anywhere and anytime. This thesis presents a cloud-based educational game called CandyFactory to teach fractions. Users can use CandyFactory in a web browser on an Internet-connected tablet, laptop, or desktop computer with a minimum 1024x768 screen resolution. A user's game performance data is recorded on the server computer regardless of which tablet, laptop, or desktop computer the user uses to play the game. Cloud-based CandyFactory has four kinds of users: Individual, Teacher, Student, and Administrator. Individual users can play the game to learn fractions as well as generate performance reports. Teachers can create a course, automatically generate student accounts under a course, and generate performance reports for individual students or for the whole class. Students can play the game under the account provided by the teacher and view their performance reports. Administrator is a built-in account for maintaining the cloud-based software application. By developing the cloud-based CandyFactory educational game, we provide users with a cross-platform, cross-device solution which helps teachers and students learn fractions more efficiently and effectively.
APA, Harvard, Vancouver, ISO, and other styles
21

Tomalik, Edyta. "Image-based Microscale Particle Velocimetry in Live Cell Microscopy." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2564.

Full text
Abstract:
Background: Nowadays, one of the medical problems under study is rolling cell adhesion. Rolling cell adhesion is a complex process that requires the analysis of a challenging environment, such as body fluid, and it is the process responsible for recruiting cells to specific organs. In order to explore rolling cell adhesion, a mathematical model is proposed. Different image processing methods exist, such as optical flow (the Lucas-Kanade algorithm), and other types of methods related to fluid mechanics, namely PIV (Particle Image Velocimetry). Aim: The aim of this master thesis is to identify the challenges of using PIV on live cell images and to propose an algorithm that can analyze the rolling cell adhesion problem. Methods: In order to properly understand the rolling cell adhesion problem from the biological side, a literature review combined with expert consultation is performed. Based on the gathered information, a mathematical model is proposed. Particle Image Velocimetry is explained according to a literature review, starting from books recommended by the expert as primary sources. As a result of this research, the PIV challenges are identified and the general PIV idea is explained. Then two experiments are performed: the first evaluates detection algorithms and the second analyzes a tracking algorithm versus PIV. In order to evaluate the mentioned algorithms, evaluation methods are selected and criteria are defined. Unfortunately, the existing methods are not perfect, so a new performance evaluation method based on time series is proposed. Thesis result: The result of this thesis is a proposed algorithm that can be used for rolling cell adhesion analysis. The algorithm is formed from the detailed exploration of rolling cell adhesion and the analysis of the selected image analysis algorithms during the theoretical research and the experiments.
APA, Harvard, Vancouver, ISO, and other styles
22

Ana, Cavalcanti. "A refinement calculus for Z." Thesis, University of Oxford, 1997. http://ora.ox.ac.uk/objects/uuid:ee9c7207-01f6-4bac-8ed1-c354a2551f9c.

Full text
Abstract:
The lack of a method for developing programs from Z specifications is a difficulty that is now widely recognised. As a contribution to solving this problem, we present ZRC, a refinement calculus based on Morgan's work that incorporates the Z notation and follows its style and conventions. Other refinement techniques have been proposed for Z; ZRC builds upon some of them, but distinguishes itself in that it is completely formalised. Like several other refinement techniques, ZRC is formalised in terms of weakest preconditions. In order to define the semantics of its language, ZRC-L, we construct a weakest precondition semantics for Z based on a relational semantics proposed by the Z standards panel. The resulting definition is not unexpected, but its construction provides evidence for its suitability and, additionally, establishes connections between predicate transformers and two different relational models. The weakest precondition semantics of the remaining constructs of ZRC-L justify several assumptions that permeate the formalisation of Morgan's refinement calculus. Based on the semantics of ZRC-L, we derive all laws of ZRC. Typically, the refinement of a schema in ZRC begins with the application of a conversion law that translates it to a notation convenient for refinement, and proceeds with the application of refinement laws. The conversion laws of ZRC formalise the main strategies and rules of translation available in the literature; its set of refinement laws is extensive and includes support for procedures, parameters, recursion, and data refinement. Morgan and Back have proposed different formalisations of procedures and parameters in the context of refinement techniques. We investigate a surprising and intricate relationship between these works and the substitution operator that renames the free variables of a program, and reveal an inconsistency in Morgan's calculus. Back's approach does not suffer from this inconsistency, but he does not present refinement laws. We benefit from both works and use a model based on Back's formalism to derive refinement laws similar to those in Morgan's calculus. Furthermore, we derive additional laws that formalise Morgan's approach to recursion. Three case studies illustrate the application of ZRC. They show that ZRC can be useful as a technique of formal program development, but are by no means enough to ascertain the general adequacy of its conversion and refinement laws. Actually, since Z does not enforce a specific style of structuring specifications, it is likely that new laws will be proved useful for particular system specifications: two of our case studies exemplify this situation. Our hope is that ZRC and its formalisation will encourage further investigation into the refinement of Z specifications and the proper justification of any emerging strategies or techniques.
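As context for the weakest-precondition setting the abstract refers to, the standard Dijkstra/Morgan definitions take the following form; these are the textbook laws, not ZRC's own definitions over Z schemas.

```latex
% Standard weakest-precondition laws and the induced refinement order.
\[
  wp(x := E,\; Q) \;=\; Q[E/x]
  \qquad
  wp(S_1 ; S_2,\; Q) \;=\; wp(S_1,\; wp(S_2,\; Q))
\]
\[
  S \sqsubseteq T \;\iff\; \forall Q \,\bullet\; wp(S, Q) \Rightarrow wp(T, Q)
\]
```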
APA, Harvard, Vancouver, ISO, and other styles
23

Chi, Yuan. "Machine learning techniques for high dimensional data." Thesis, University of Liverpool, 2015. http://livrepository.liverpool.ac.uk/2033319/.

Full text
Abstract:
This thesis presents data processing techniques for three different but related application areas: embedding learning for classification, fusion of low bit depth images, and 3D reconstruction from 2D images. For embedding learning for classification, a novel manifold embedding method is proposed for the automated processing of large, varied data sets. The method is based on binary classification, where the embeddings are constructed so as to determine one or more unique features for each class individually from a given dataset. The proposed method is applied to examples of multiclass classification that are relevant for large scale data processing for surveillance (e.g. face recognition), where the aim is to augment decision making by reducing extremely large sets of data to a manageable level before displaying the selected subset of data to a human operator. In addition, an indicator for a weighted pairwise constraint is proposed to balance the contributions from different classes to the final optimisation, in order to better control the relative positions between the important data samples from either the same class (intraclass) or different classes (interclass). The effectiveness of the proposed method is evaluated through comparison with seven existing techniques for embedding learning, using four established databases of faces, consisting of various poses, lighting conditions and facial expressions, as well as two standard text datasets. The proposed method performs better than these existing techniques, especially for cases with small sets of training data samples. For fusion of low bit depth images, using low bit depth images instead of full images offers a number of advantages for aerial imaging with UAVs, where there is a limited transmission rate/bandwidth: for example, reducing the need for data transmission, removing superfluous details, and reducing the computational loading of on-board platforms (especially for small or micro-scale UAVs). The main drawback of using low bit depth imagery is discarding image details of the scene. Fortunately, these can be reconstructed by fusing a sequence of related low bit depth images which have been properly aligned. To reduce computational complexity and obtain a less distorted result, a similarity transformation is used to approximate the geometric alignment between two images of the same scene. The transformation is estimated using a phase correlation technique. It is shown that the phase correlation method is capable of registering low bit depth images without any modification, or any pre- and/or post-processing. For 3D reconstruction from 2D images, a method is proposed to deal with the dense reconstruction after a sparse reconstruction (i.e. a sparse 3D point cloud) has been created employing the structure from motion technique. Instead of generating a dense 3D point cloud, this proposed method forms a triangle from three points in the sparse point cloud, and then maps the corresponding components in the 2D images back to the point cloud. Compared to the existing methods that use a similar approach, this method reduces the computational cost. Instead of utilising every triangle in the 3D space to do the mapping from 2D to 3D, it uses a large triangle to replace a number of small triangles for flat and almost flat areas. Compared to the reconstruction result obtained by existing techniques that aim to generate a dense point cloud, the proposed method can achieve a better result while the computational cost is comparable.
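The phase correlation step mentioned for aligning low bit depth images has a well-known FFT formulation; below is a small NumPy sketch of that standard method for estimating a purely translational shift, not the author's implementation.

```python
# Standard phase correlation: the peak of the inverse FFT of the normalised
# cross-power spectrum gives the translational shift. Illustrative sketch.
import numpy as np

def phase_correlation(img_a, img_b):
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross_power = Fa * np.conj(Fb)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase information only
    corr = np.fft.ifft2(cross_power).real
    return np.unravel_index(np.argmax(corr), corr.shape)  # (dy, dx)

a = np.zeros((64, 64))
a[20:30, 20:30] = 1.0
b = np.roll(a, (5, 7), axis=(0, 1))              # shift the pattern by (5, 7)
print(phase_correlation(b, a))                   # expected: (5, 7)
```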
APA, Harvard, Vancouver, ISO, and other styles
24

Salin, Eliana Bevilacqua. "Matemática dinâmica : uma abordagem para o ensino de funções afim e quadrática a partir de situações geométricas." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/108425.

Full text
Abstract:
This research investigated the role of registers of semiotic representation in the construction of the concept of function, in particular affine and quadratic functions. Part of the research was also to investigate how the use of dynamic mathematics software, GeoGebra, can support the learning of this topic. The research methodology is inspired by Artigue's Didactic Engineering. Through the activities of an instructional sequence designed and implemented in 2013, with a class of first-year high school students at a state school in Porto Alegre, we show how multiple connections developed between the different representations of a function (algebraic, graphical and numerical), as well as the importance of GeoGebra as a pedagogical resource when working with multiple representations. The students' answers were analysed with reference to Duval's Theory of Registers of Semiotic Representation. Observing relations between variables by manipulating points in a GeoGebra construction fostered the understanding of the concepts of function and graph through a constant process of conversion between registers.
APA, Harvard, Vancouver, ISO, and other styles
25

Smith, Michael Anthony. "Embedding an object calculus in the unifying theories of programming." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:8b5be90d-59c1-42c0-a996-ecd8015097b3.

Full text
Abstract:
Hoare and He's Unifying Theories of Programming (UTP) provides a rich model of programs as relational predicates. This theory is intended to provide a single framework in which any programming paradigms, languages, and features can be modelled, compared and contrasted. The UTP already has models for several programming formalisms, such as imperative programming, higher-order programming (e.g. programming with procedures), several styles of concurrent programming (or reactive systems), class-based object-orientation, and transaction processing. We believe that the UTP ought to be able to represent all significant computer programming language formalisms, in order for it to be considered a unifying theory. One gap in the UTP work is that of object-based object-orientation, such as that presented in Abadi and Cardelli's untyped object calculi (sigma-calculi). These sigma-calculi provide a prominent formalism of object-based object-oriented (OO) programs, which models programs as objects. We address this gap within this dissertation by presenting an embedding of an Abadi-Cardelli-style object calculus in the UTP. More formally, the thesis that this dissertation argues is that it is possible to provide an object-based object orientation to the UTP, with value- and reference-based objects, and a fully abstract model of references. We have made three contributions to our area of study: first, to extend the UTP with a notion of object-based object orientation, in contrast with the existing class-based models; second, to provide an alternative model of pointers (references) for the UTP that supports both value-based compound values (e.g. objects) and references (pointers), in contrast to existing UTP models with pointers that have reference-based compound values; and third, to model an Abadi-Cardelli notion of an object in the UTP, and thus demonstrate that it can unify this style of object formalism.
APA, Harvard, Vancouver, ISO, and other styles
26

Bienvenu, Kirk Jr. "Underwater Acoustic Signal Analysis Toolkit." ScholarWorks@UNO, 2017. https://scholarworks.uno.edu/td/2398.

Full text
Abstract:
This project started early in the summer of 2016 when it became evident there was a need for an effective and efficient signal analysis toolkit for the Littoral Acoustic Demonstration Center Gulf Ecological Monitoring and Modeling (LADC-GEMM) Research Consortium. LADC-GEMM collected underwater acoustic data in the northern Gulf of Mexico during the summer of 2015 using Environmental Acoustic Recording Systems (EARS) buoys. Much of the visualization of data was handled through short scripts and executed through terminal commands, each time requiring the data to be loaded into memory and parameters to be fed through arguments. The vision was to develop a graphical user interface (GUI) that would increase the productivity of manual signal analysis. It has been expanded to make several calculations autonomously for cataloging and meta data storage of whale clicks. Over the last year and a half, a working prototype has been developed with MathWorks matrix laboratory (MATLAB), an integrated development environment (IDE). The prototype is now very modular and can accept new tools relatively quickly when development is completed. The program has been named Banshee, as the mythical creatures are known to “wail”. This paper outlines the functionality of the GUI, explains the benefits of frequency analysis, the physical models that facilitate these analytics, and the mathematics performed to achieve these models.
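To give a concrete flavour of the frequency analysis such a toolkit performs, here is a hedged NumPy sketch of a short-time Fourier transform (spectrogram) applied to a synthetic click; Banshee's actual MATLAB processing chain is not reproduced, and the sample rate and signal below are made-up illustrative values.

```python
# Spectrogram of a synthetic decaying tone via a windowed short-time FFT.
import numpy as np

def spectrogram(signal, fs, frame=256, hop=128):
    window = np.hanning(frame)
    frames = [signal[i:i + frame] * window
              for i in range(0, len(signal) - frame, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # power per frame and bin
    freqs = np.fft.rfftfreq(frame, d=1.0 / fs)
    return freqs, power

fs = 96_000                                            # assumed sample rate (Hz)
t = np.arange(0, 0.05, 1.0 / fs)
click = np.sin(2 * np.pi * 30_000 * t) * np.exp(-200 * t)  # decaying 30 kHz tone
freqs, power = spectrogram(click, fs)
print(power.shape, freqs[np.argmax(power[0])])         # dominant frequency ~30 kHz
```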
APA, Harvard, Vancouver, ISO, and other styles
27

Teillaud, Monique. "-- Géométrie algorithmique --De la théorie à la pratique,Des objets linéaires aux objets courbes." Habilitation à diriger des recherches, Université de Nice Sophia-Antipolis, 2007. http://tel.archives-ouvertes.fr/tel-00175997.

Full text
Abstract:
Although the international computational geometry community has often been tempted to plunge into essentially theoretical, and in particular combinatorial, research, the great originality of the work at INRIA already lay, at the time of my early career, in the concern for its experimental validation and its applicability.

The field as a whole has evolved in this direction, in particular thanks to the "Impact Task Force Report". Meanwhile, our interest in technological and industrial transfer, as well as in establishing a platform for research, has taken an even more concrete turn with our very strong involvement in the CGAL project, of which our team is one of the driving forces.

This document presents the work from the angle of this practical concern. It comprises two main chapters: the first gathers work on triangulations, the second presents work on curved objects. Both chapters conclude with a set of open directions. The third chapter briefly surveys other results.
APA, Harvard, Vancouver, ISO, and other styles
28

Schroeder, Andreas. "Software engineering perspectives on physiological computing." Diss., lmu, 2011. http://nbn-resolving.de/urn:nbn:de:bvb:19-139294.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Sjödin, Rickard. "Interpolation and visualization of sparse GPR data." Thesis, Umeå universitet, Institutionen för fysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-170946.

Full text
Abstract:
Ground Penetrating Radar is a tool for mapping the subsurface in a noninvasive way. The radar instrument transmits electromagnetic waves and records the resulting scattered field. Unfortunately, the data from a survey can be hard to interpret, and this is especially true for non-experts in the field. The data are also usually in 2.5D, or pseudo 3D, meaning that the vast majority of the scanned volume is missing data. Interpolation algorithms can, however, approximate the missing data, and the result can be visualized in an application and in this way ease the interpretation. This report has focused on comparing different interpolation algorithms, with extra focus on behaviour when the data get sparse. The compared methods were: linear interpolation, inverse distance weighting, ordinary kriging, thin plate splines, and fk-domain zone-pass POCS. They were all found to have some strengths and weaknesses in different aspects, although ordinary kriging was found to be the most accurate and created the fewest artefacts. Inverse distance weighting performed surprisingly well considering its simplicity and low computational cost. A web-based, easy-to-use visualization application was developed in order to view the results from the interpolations. Some of the tools implemented include time slices, cropping of a 3D cube, and isosurfaces.
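Of the methods compared, inverse distance weighting is the simplest to state; below is a minimal NumPy sketch, where the power parameter and the handling of coincident points are illustrative choices rather than the thesis's settings.

```python
# Inverse distance weighting: each query point gets a distance-weighted
# average of the known samples. Illustrative sketch.
import numpy as np

def idw(known_xy, known_values, query_xy, power=2.0, eps=1e-10):
    query_xy = np.atleast_2d(query_xy)
    dists = np.linalg.norm(query_xy[:, None, :] - known_xy[None, :, :], axis=2)
    weights = 1.0 / (dists ** power + eps)       # closer samples weigh more
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ known_values

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([1.0, 2.0, 4.0])
print(idw(pts, vals, [[0.8, 0.1]]))              # close to the sample valued 2.0
```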
APA, Harvard, Vancouver, ISO, and other styles
30

Kraus, Andreas. "Model Driven Software Engineering for Web Applications." Diss., lmu, 2007. http://nbn-resolving.de/urn:nbn:de:bvb:19-79362.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Nimal, Vincent P. J. "Static analyses over weak memory." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:469907ec-6f61-4015-984e-7ca8757b992c.

Full text
Abstract:
Writing concurrent programs with shared memory is often not trivial. Correctly synchronising the threads and handling the non-determinism of executions require a good understanding of the interleaving semantics. Yet, interleavings are not sufficient to model correctly the executions of modern, multicore processors. These executions follow rules that are weaker than those observed by the interleavings, often leading to reorderings in the sequence of updates and readings from memory; the executions are subject to a weaker memory consistency. Reorderings can produce executions that would not be observable with interleavings, and these possible executions also depend on the architecture that the processors implement. It is therefore necessary to locate and understand these reorderings in the context of a running program, or to prevent them in an automated way. In this dissertation, we aim to automate the reasoning behind weak memory consistency and perform transformations over the code so that developers need not consider all the specifics of the processors when writing concurrent programs. We claim that we can do automatic static analysis for axiomatically-defined weak memory models. The method that we designed also allows re-use of automated verification tools like model checkers or abstract interpreters that were not designed for weak memory consistency, by modification of the input programs. We define in detail an abstraction that allows us to reason statically about weak memory models over programs. We locate the parts of the code where the semantics could be affected by the weak memory consistency. We then provide a method to explicitly reveal the resulting reorderings so that usual verification techniques can handle the program semantics under a weaker memory consistency. Finally, we provide a technique that synthesises synchronisations so that the program behaves as if only interleavings were allowed. We test these approaches on artificial and real software. We justify our choice of an axiomatic model with the scalability of the approach and the runtime performance of the programs modified by our method.
APA, Harvard, Vancouver, ISO, and other styles
32

Hartikka, Alice, and Simon Nordenhög. "Emission Calculation Model for Vehicle Routing Planning : Estimation of emissions from heavy transports and optimization with carbon dioxide equivalents for a route planning software." Thesis, Linköpings universitet, Energisystem, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-178065.

Full text
Abstract:
The transport sector is a major source of emissions both in Sweden and globally. This master thesis aims to develop a model for estimating emissions from heavy transport on a specific route. The emissions can be used in a route planning software and help the driver choose a route that contributes to reduced emissions. The methodology was to investigate attributes, such as vehicle-related attributes and topography, and their impact on transport emissions. The carbon dioxide, methane and nitrous oxide emissions were converted into carbon dioxide equivalents, which were incorporated as one cost together with a precalculated driving time as a second cost in a multi-objective function used for route planning. Different tests were conducted to investigate the accuracy and the usability of the model. First, a validation test was performed, in which the optimized routes were analyzed. The test showed that the model was, in general, more likely to choose a shorter route. The fuel consumption values largely met expectations when compared with generic values and measurements gathered from the literature. A second test of the model used the driving time combined with the emissions in a multi-objective function. In this test, a weighting coefficient was varied and analyzed to determine whether a coefficient value giving the best trade-off could be found. The results showed that the model generates different solutions for different coefficients and that it is possible to find a suitable trade-off between driving time and emissions. Therefore, this study shows that it is possible to combine emissions with other objectives, such as driving time, for route optimization. Finally, a sensitivity analysis was performed, in which attribute factors and assumptions were varied to see how sensitive they were and, in turn, how much a change would impact the calculated emissions. The result of the sensitivity analysis showed that changes in topography-related attributes had less impact than changes in vehicle-related attributes. In conclusion, this thesis has built a foundation for route planning for heavy transports based on the environmental aspect.
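As a hedged illustration of the two steps described above (conversion to carbon dioxide equivalents, then a weighted combination with driving time), the sketch below uses commonly cited 100-year global warming potentials and an assumed normalised weighting form; the numbers, names and normalisation are illustrative, not the thesis's actual model.

```python
# Illustrative 100-year global warming potentials (IPCC AR5-style values).
GWP = {"co2": 1.0, "ch4": 28.0, "n2o": 265.0}

def co2_equivalents(emissions_kg):
    """Convert a dict of per-gas emissions (kg) into kg CO2-equivalents."""
    return sum(GWP[gas] * mass for gas, mass in emissions_kg.items())

def route_cost(co2e_kg, driving_time_s, weight, co2e_ref, time_ref):
    """Weighted multi-objective cost; both objectives are normalised by reference
    values so the weighting coefficient trades off dimensionless quantities."""
    return weight * (co2e_kg / co2e_ref) + (1.0 - weight) * (driving_time_s / time_ref)

# Two hypothetical candidate routes for the same transport task.
route_a = {"co2e": co2_equivalents({"co2": 95.0, "ch4": 0.002, "n2o": 0.004}), "time": 3600.0}
route_b = {"co2e": co2_equivalents({"co2": 80.0, "ch4": 0.002, "n2o": 0.003}), "time": 4100.0}

# Varying the weighting coefficient changes which route wins the trade-off.
for w in (0.2, 0.5, 0.8):
    costs = {name: route_cost(r["co2e"], r["time"], w, co2e_ref=100.0, time_ref=4000.0)
             for name, r in (("A", route_a), ("B", route_b))}
    best = min(costs, key=costs.get)
    print(f"weight {w}: choose route {best}")
```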
APA, Harvard, Vancouver, ISO, and other styles
33

Pakala, Akshay Kumar. "Aerodynamic Analysis of Conventional and Spherical Tires." University of Akron / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=akron1606237030779529.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

De, Falco Marc. "Géométrie de l'Interaction et Réseaux Différentiels." Phd thesis, Université de la Méditerranée - Aix-Marseille II, 2009. http://tel.archives-ouvertes.fr/tel-00392242.

Full text
Abstract:
Girard's Geometry of Interaction (GoI) is a semantics of programming languages that takes their reduction dynamics into account.
We first present Lafont's interaction nets as a particular instance of GoI. We then define a general framework for studying GoI, built from a set of symbols and interaction rules.
Next, we introduce a notion of conciseness associated with GoI and show to what extent this notion makes sense, using a family of examples based on Church numerals.
Finally, we present Ehrhard and Regnier's differential interaction nets and define their GoI. We show that the usual Danos-Regnier theory is fully recovered.
APA, Harvard, Vancouver, ISO, and other styles
35

Wintz, Julien. "Méthodes algébriques pour la modélisation géometrique." Phd thesis, Université de Nice Sophia-Antipolis, 2008. http://tel.archives-ouvertes.fr/tel-00347162.

Full text
Abstract:
Although closely related, the fields of algebraic geometry and computational geometry are traditionally represented by disjoint research communities. Both work with curves and surfaces, but they represent these objects in different ways. Whereas algebraic geometry defines objects through polynomial equations, computational geometry usually manipulates linear models. The current trend is to apply the traditional algorithms of computational geometry to non-linear models such as those found in algebraic geometry. Such algorithms play an important role in many application fields, such as Computer Aided Design. Their use raises important software development questions. First, manipulating their representations involves symbolic-numeric computation, which remains a major research area. Second, visualising and manipulating these objects is not straightforward, because of their abstract nature.

The first part of this thesis deals with the use of algebraic methods in geometric modelling, with emphasis on topology, intersection and self-intersection, in the context of computing arrangements of semi-algebraic sets such as curves and surfaces in implicit or parametric representation. Particular attention is paid to the genericity of the algorithms, which can be specified independently of any context and then specialised to meet the requirements of a given representation.

The second part of this thesis presents the prototyping of a geometric modelling environment whose aim is to provide a generic and efficient way to model solids from geometric objects with an algebraic representation, such as implicit or parametric curves and surfaces, both from a user's and from a developer's point of view, through the use of symbolic-numeric computation libraries for manipulating the polynomials that define the geometric objects.
APA, Harvard, Vancouver, ISO, and other styles
36

Blazy, Sandrine. "Sémantiques formelles." Habilitation à diriger des recherches, Université d'Evry-Val d'Essonne, 2008. http://tel.archives-ouvertes.fr/tel-00336576.

Full text
Abstract:
This dissertation presents several definitions of formal semantics and program transformations, and discusses the associated design choices. In particular, it describes a program transformation inspired by partial evaluation and dedicated to the understanding of scientific programs written in Fortran. It also details the front-end of a realistic compiler for the C language, which has been formally verified in Coq.
APA, Harvard, Vancouver, ISO, and other styles
37

Oliveira, Eliane Alves de. "Uma engenharia didática para abordar o conceito de equação diferencial em cursos de Engenharia." Pontifícia Universidade Católica de São Paulo, 2014. https://tede2.pucsp.br/handle/handle/11018.

Full text
Abstract:
This research aimed to investigate teaching strategies that could help students learn about Ordinary Differential Equations and their applications in Engineering undergraduate courses. The study led to the elaboration of a didactical engineering and centred on defining the set of components of that engineering, using graphical, algebraic and numerical approaches involving problem situations supported by computational resources. Guy Brousseau's Theory of Didactical Situations and Michèle Artigue's Didactical Engineering constitute the main theoretical-methodological foundations of the research. Sixteen students in the second year of the Environmental Engineering and Production Engineering undergraduate courses of a higher education institution voluntarily joined the experiment. The GeoGebra software was used. Data were collected with the following instruments: activity guides, initial and final knowledge tests, and a field diary. The results indicated that the use of the software favoured the completion of the activities and revealed the importance and productivity of discussions in pairs. The analysis of the data obtained allows us to state that the characteristics of the didactical engineering developed in this work favoured the students' construction of the concepts of Ordinary Differential Equations, meeting the research aims.
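As a purely illustrative aside on the numerical approach mentioned above (not material from the dissertation), the forward Euler method is the simplest numerical scheme a student might meet for an ordinary differential equation y' = f(t, y):

```python
def euler(f, t0, y0, h, steps):
    """Forward Euler: repeatedly follow the slope f(t, y) for a small step h."""
    t, y = t0, y0
    trajectory = [(t, y)]
    for _ in range(steps):
        y = y + h * f(t, y)
        t = t + h
        trajectory.append((t, y))
    return trajectory

# y' = y with y(0) = 1; the exact solution is e^t, so y(1) should be close to 2.718.
approx = euler(lambda t, y: y, t0=0.0, y0=1.0, h=0.01, steps=100)
print(approx[-1])
```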
APA, Harvard, Vancouver, ISO, and other styles
38

Oliver, Stephen E. "TeXspec, a Computer Aided Software Engineering tool for scientific and mathematical applications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/mq62813.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Boulier, François. "Étude et implantation de quelques algorithmes en algèbre différentielle." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 1994. http://tel.archives-ouvertes.fr/tel-00137866.

Full text
Abstract:
The aim of this thesis is to make certain theorems effective and to implement certain algorithms in differential algebra efficiently, with a view to applications in non-linear control theory. We present three original results. The first is an algorithm, Rosenfeld-Gröbner, which describes the models of a system of polynomial equations and inequations in ordinary as well as in partial differential algebra. The algorithm decides emptiness, and therefore membership in the radical of a finitely generated differential ideal. Our second result is a method that computes a characteristic set of a prime differential ideal given by a generating family. Finally, we give new proofs of Seidenberg's elimination algorithms. The algorithms we describe are effective: they only use addition, multiplication, differentiation and zero-testing in the base field of the polynomials.
APA, Harvard, Vancouver, ISO, and other styles
40

Erson, E. Zeynep. "Development, Integration and Simulation of Multiscale Mathematical Models of Physiological Processes: A Software Engineering Perspective." Case Western Reserve University School of Graduate Studies / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=case1289789036.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Mwangi, Timothy M. "Software tools for elementary math education : animated mathematical proofs." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85451.

Full text
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 47).
The National Council of Teachers of Mathematics [6] has identified the learning of proofs as a critical goal for students from pre-kindergarten through grade 12 (p. 56). A proof for elementary students is not the highly structured mathematical argument seen in high school algebra classes. It is, however, a rational mathematical argument created by students using vocabulary appropriate to their level of understanding. To aid students in learning to create mathematical proofs, software that enables them to create simple animations is invaluable. This thesis looks at the characteristics, design, testing and evaluation of such software. An initial design is presented, and the feedback gained from testing its implementation in a class setting is discussed, along with the changes that were required to improve the software in light of the feedback. A comparison is then made between the final implementation of the software and other similar programs. The results indicate that the software enables students to create, share and discuss mathematical proofs in the form of simple animations.
by Timothy M. Mwangi.
M. Eng.
APA, Harvard, Vancouver, ISO, and other styles
42

Faitelson, David. "Program synthesis from domain specific object models." Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:0c5a992e-dad4-435c-a576-e3ed504bcdbd.

Full text
Abstract:
Automatically generating a program from its specification eliminates a large source of errors that is often unavoidable in a manual approach. While a general purpose code generator is impossible to build, it is possible to build a practical code generator for a specific domain. This thesis investigates the theory behind Booster — a domain specific, object based specification language and automatic code generator. The domain of Booster is information systems — systems that consist of a rich object model in which the objects refer to each other to form a complicated network of associations. The operations of such systems are conceptually simple (changing the attributes of objects, adding or removing new objects and creating or destroying associations) but they are tricky to implement correctly. The thesis focuses on the theoretical foundation of the Booster approach, in particular on three contributions: semantics, model completion, and code generation. The semantics of a Booster model is a single abstract data type (ADT) where the invariants and the methods of all the classes in the model are promoted to the level of the ADT. This is different from the traditional view that considers each class as a separate ADT. The thesis argues that the Booster semantics is a better model of object oriented systems. The second important contribution is the idea of model completion — a process that augments the postconditions of methods with additional predicates that follow from the system’s invariant and the method’s original intention. The third contribution describes a simple but effective code generation technique that is based on interpreting postconditions as executable statements and uses weakest preconditions to ensure that the generated code refines its specification.
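The interpretation of postconditions as executable statements checked with weakest preconditions rests on the standard assignment rule of predicate-transformer semantics; the rule and the small example below are generic textbook illustrations, not the Booster formalisation itself.

```latex
\[
  \mathrm{wp}(x := e,\; Q) \;=\; Q[e/x]
\]
% Example: a postcondition  x' = x + 1  is read as the statement  x := x + 1.
% For it to establish  Q \equiv (x \le 10):
\[
  \mathrm{wp}(x := x + 1,\; x \le 10) \;=\; (x + 1 \le 10) \;=\; (x \le 9),
\]
% so the generated statement refines the specification whenever x \le 9 holds beforehand.
```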
APA, Harvard, Vancouver, ISO, and other styles
43

Mastronicola, Natália Ojeda. "Trigonometria por apps." Universidade Federal de São Carlos, 2014. https://repositorio.ufscar.br/handle/ufscar/4469.

Full text
Abstract:
In this work the author presents an experience of implementing activities that are alternatives to the traditional teaching of trigonometry in the 9th year of Fundamental School (Brazilian curriculum). The choice of theme came from the author's observation of the difficulties presented by pupils when faced with a great number of formulas without meaning for them. The goal of these activities was to stimulate the students to construct their own knowledge, culminating in meaningful learning. The activities were designed around the use of apps for smartphones and tablets. The technological advance of mobile phones has provided a range of different tools that fit in the palm of a hand and can be accessed immediately. Some of these tools support the use of tablets and smartphones in the educational context, aiding students' learning in an innovative way and making it more enjoyable. All activities followed the guidelines of didactical engineering as a research methodology. The activities were applied in a school where the author is a teacher.
APA, Harvard, Vancouver, ISO, and other styles
44

van, Schaik Sebastiaan Johannes. "A framework for processing correlated probabilistic data." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:91aa418d-536e-472d-9089-39bef5f62e62.

Full text
Abstract:
The amount of digitally-born data has surged in recent years. In many scenarios, this data is inherently uncertain (or: probabilistic), such as data originating from sensor networks, image and voice recognition, location detection, and automated web data extraction. Probabilistic data requires novel and different approaches to data mining and analysis, which explicitly account for the uncertainty and the correlations therein. This thesis introduces ENFrame: a framework for processing and mining correlated probabilistic data. Using this framework, it is possible to express both traditional and novel algorithms for data analysis in a special user language, without having to explicitly address the uncertainty of the data on which the algorithms operate. The framework will subsequently execute the algorithm on the probabilistic input, and perform exact or approximate parallel probability computation. During the probability computation, correlations and provenance are succinctly encoded using probabilistic events. This thesis contains novel contributions in several directions. An expressive user language – a subset of Python – is introduced, which allows a programmer to implement algorithms for probabilistic data without requiring knowledge of the underlying probabilistic model. Furthermore, an event language is presented, which is used for the probabilistic interpretation of the user program. The event language can succinctly encode arbitrary correlations using events, which are the probabilistic counterparts of deterministic user program variables. These highly interconnected events are stored in an event network, a probabilistic interpretation of the original user program. Multiple techniques for exact and approximate probability computation (with error guarantees) of such event networks are presented, as well as techniques for parallel computation. Adaptations of multiple existing data mining algorithms are shown to work in the framework, and are subsequently subjected to an extensive experimental evaluation. Additionally, a use-case is presented in which a probabilistic adaptation of a clustering algorithm is used to predict faults in energy distribution networks. Lastly, this thesis presents techniques for integrating a number of different probabilistic data formalisms for use in this framework and in other applications.
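The exact probability computation over correlated events can be illustrated, very naively, by enumerating the possible worlds of the shared random variables; correlations arise because different events mention the same variables. The sketch below is illustrative only and ignores ENFrame's succinct event encoding, approximation guarantees and parallelism.

```python
from itertools import product

def event_probability(event, variable_probs):
    """Exact probability of a Boolean event over independent Bernoulli variables.

    event:          function mapping a {name: bool} world to True/False
    variable_probs: {name: P(variable is True)}
    """
    names = list(variable_probs)
    total = 0.0
    for assignment in product([True, False], repeat=len(names)):
        world = dict(zip(names, assignment))
        # Probability of this world under independence of the base variables.
        p = 1.0
        for name, value in world.items():
            p *= variable_probs[name] if value else 1.0 - variable_probs[name]
        if event(world):
            total += p
    return total

probs = {"x": 0.6, "y": 0.3}
# Two events that share the variable x are correlated, even though x and y are independent.
e1 = lambda w: w["x"] and w["y"]   # x AND y
e2 = lambda w: w["x"] or w["y"]    # x OR y
print(event_probability(e1, probs), event_probability(e2, probs))  # 0.18  0.72
```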
APA, Harvard, Vancouver, ISO, and other styles
45

Wu, Di. "Goal-based requirements engineering -- exploring with the "RADIE" approach for ontological elaboration." HKBU Institutional Repository, 2008. http://repository.hkbu.edu.hk/etd_ra/920.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Jackson, Gregory M. "A Test Suite Generator For Struts Based Applications." UNF Digital Commons, 2004. http://digitalcommons.unf.edu/etd/294.

Full text
Abstract:
Testing web-based enterprise applications requires the use of automated testing frameworks. A testing framework's ability to run suites of test cases throughout development ensures that enhancements work as required and have not caused defects in previously developed subsystems. Open source testing frameworks like JUnit and Cactus have addressed the requirements for testing web-based enterprise applications; however, they do not address the generation of test cases based on direct analysis of the code under test. This paper presents a tool to generate test cases for web-based enterprise applications. The generator focuses on creating test cases used to test applications built on the Struts MVC framework for the J2EE platform. Using the Struts configuration files, test cases are generated to test each request path and response. The created test cases take advantage of the StrutsTestCase library and run using the JUnit and Cactus frameworks. The generated test cases follow a consistent pattern and reduce the time required to build automated testing for the application.
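A toy sketch of the idea described above, reading the Struts configuration and emitting one test stub per request path and expected forward, is given below. The XML fragment follows the usual struts-config action-mapping layout, and the emitted stubs are plain-text placeholders; the actual generator produced StrutsTestCase/JUnit classes.

```python
import xml.etree.ElementTree as ET

STRUTS_CONFIG = """
<struts-config>
  <action-mappings>
    <action path="/login" type="app.LoginAction">
      <forward name="success" path="/home.jsp"/>
      <forward name="failure" path="/login.jsp"/>
    </action>
    <action path="/logout" type="app.LogoutAction">
      <forward name="success" path="/login.jsp"/>
    </action>
  </action-mappings>
</struts-config>
"""

def generate_test_stubs(config_xml):
    """Emit one test-method stub per action path and expected forward."""
    root = ET.fromstring(config_xml)
    stubs = []
    for action in root.iter("action"):
        path = action.get("path")
        for forward in action.iter("forward"):
            name = "test" + path.strip("/").capitalize() + forward.get("name").capitalize()
            stubs.append(f"{name}: request {path}, verify forward '{forward.get('name')}'")
    return stubs

for stub in generate_test_stubs(STRUTS_CONFIG):
    print(stub)
```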
APA, Harvard, Vancouver, ISO, and other styles
47

Zuiani, Federico. "Multi-objective optimisation of low-thrust trajectories." Thesis, University of Glasgow, 2015. http://theses.gla.ac.uk/6311/.

Full text
Abstract:
This research work developed an innovative computational approach to the preliminary design of low-thrust trajectories optimising multiple mission criteria. Low-Thrust (LT) propulsion has become the propulsion system of choice for a number of near-Earth and interplanetary missions. Consequently, in the last two decades a wealth of research has been devoted to the development of computational methods for designing low-thrust trajectories. Most of the techniques, however, minimise or maximise a single figure of merit under a set of design constraints. Less effort has been devoted to the development of efficient methods for the minimisation (or maximisation) of two or more figures of merit. On the other hand, in the preliminary mission design phase, the decision maker is interested in analysing as many design solutions as possible against different trade-off criteria. Therefore, in this PhD work, an innovative Multi-Objective (MO), memetic optimisation algorithm, called Multi-Agent Collaborative Search (MACS2), has been implemented to tackle low-thrust trajectory design problems with multiple figures of merit. Tests on both academic and real-world problems showed that the proposed MACS2 paradigm performs better than, or as well as, other state-of-the-art multi-objective optimisation algorithms. Concurrently, a set of novel approximated, first-order, analytical formulae has been developed to obtain a fast but reliable estimation of the main trade-off criteria. These formulae allow for a fast propagation of the orbital motion under a constant perturbing acceleration, and have been shown to enable the fast and relatively accurate propagation of long LT trajectories under the typical acceleration levels delivered by current engine technology. Various applications are presented to demonstrate the validity of the combination of the analytical formulae with MACS2. Among them are the preliminary design of the JAXA low-cost DESTINY mission to L2, a novel approach to the optimisation under uncertainty of deflection actions for Near Earth Objects (NEO), and the de-orbiting of space debris with low-thrust and with a combination of low-thrust and solar radiation pressure.
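Multi-objective algorithms such as the one described above return a set of non-dominated trade-off solutions rather than a single optimum. The following is a minimal, generic sketch of Pareto dominance and front filtering for minimisation problems, not the thesis's implementation; the objective values are invented for illustration.

```python
def dominates(a, b):
    """a dominates b (minimisation): no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the solutions not dominated by any other."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# Hypothetical objectives: (propellant mass in kg, time of flight in days); smaller is better.
candidates = [(120.0, 300.0), (150.0, 250.0), (160.0, 320.0), (110.0, 400.0)]
print(pareto_front(candidates))  # (160.0, 320.0) is dominated and dropped
```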
APA, Harvard, Vancouver, ISO, and other styles
48

Yau, Shuk-Han Ada. "Numerical analysis of finite difference schemes in automatically generated mathematical modeling software." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/35407.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1994.
Includes bibliographical references (leaves 64-65).
by Shuk-Han Ada Yau.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
49

Barbier, Fabrice. "Résultats de théorie abstraite des modèles dans le cadre des institutions : vers la combinaison de logiques." Phd thesis, Université d'Evry-Val d'Essonne, 2005. http://tel.archives-ouvertes.fr/tel-00087587.

Full text
Abstract:
Much work has shown the importance of Craig interpolation for the structuring and modularity of axiomatic specifications. In order to give sufficient conditions for it in a theoretical framework suited to computer science, we studied a property equivalent to Craig interpolation in standard model theory: Robinson consistency. The study of this property led us to generalise, within a specialisation of institutions, the classical notions of complete diagrams and elementary morphisms. This then allowed us to generalise some classical results of model theory, such as the Löwenheim-Skolem theorems or Tarski's union of chains. Finally, since formula constructors are explicit in our theoretical framework, we naturally turned to the combination of logics and to the preservation of Craig interpolation and Robinson consistency.
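For context, the classical first-order statement of Craig interpolation, which the work above studies through the equivalent Robinson consistency property, is the following textbook formulation (not a result of the thesis):

```latex
\[
  \text{If } \varphi \models \psi \text{, then there exists an interpolant } \theta \text{ with }
  \varphi \models \theta \text{ and } \theta \models \psi,
\]
\[
  \text{such that every non-logical symbol of } \theta \text{ occurs in both } \varphi \text{ and } \psi.
\]
```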
APA, Harvard, Vancouver, ISO, and other styles
50

Feng, Lu. "On learning assumptions for compositional verification of probabilistic systems." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:12502ba2-478f-429a-a250-6590c43a8e8a.

Full text
Abstract:
Probabilistic model checking is a powerful formal verification method that can ensure the correctness of real-life systems that exhibit stochastic behaviour. The work presented in this thesis aims to solve the scalability challenge of probabilistic model checking, by developing, for the first time, fully-automated compositional verification techniques for probabilistic systems. The contributions are novel approaches for automatically learning probabilistic assumptions for three different compositional verification frameworks. The first framework considers systems that are modelled as Segala probabilistic automata, with assumptions captured by probabilistic safety properties. A fully-automated approach is developed to learn assumptions for various assume-guarantee rules, including an asymmetric rule Asym for two-component systems, an asymmetric rule Asym-N for n-component systems, and a circular rule Circ. This approach uses the L* and NL* algorithms for automata learning. The second framework considers systems where the components are modelled as probabilistic I/O systems (PIOSs), with assumptions represented by Rabin probabilistic automata (RPAs). A new (complete) assume-guarantee rule Asym-Pios is proposed for this framework. In order to develop a fully-automated approach for learning assumptions and performing compositional verification based on the rule Asym-Pios, a (semi-)algorithm to check language inclusion of RPAs and an L*-style learning method for RPAs are also proposed. The third framework considers the compositional verification of discrete-time Markov chains (DTMCs) encoded in Boolean formulae, with assumptions represented as Interval DTMCs (IDTMCs). A new parallel operator for composing an IDTMC and a DTMC is defined, and a new (complete) assume-guarantee rule Asym-Idtmc that uses this operator is proposed. A fully-automated approach is formulated to learn assumptions for rule Asym-Idtmc, using the CDNF learning algorithm and a new symbolic reachability analysis algorithm for IDTMCs. All approaches proposed in this thesis have been implemented as prototype tools and applied to a range of benchmark case studies. Experimental results show that these approaches are helpful for automating the compositional verification of probabilistic systems through learning small assumptions, but may suffer from high computational complexity or even undecidability. The techniques developed in this thesis can assist in developing scalable verification frameworks for probabilistic models.
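For orientation, an asymmetric assume-guarantee rule of the kind referred to above as Asym, with a probabilistic safety property A as the assumption, has roughly the following shape; this is an informal rendering of the general pattern, not a verbatim statement of the rules in the thesis.

```latex
\[
  \frac{\;\langle \mathit{true}\rangle\; M_1 \;\langle A\rangle_{\ge p_A}
        \qquad
        \langle A\rangle_{\ge p_A}\; M_2 \;\langle G\rangle_{\ge p_G}\;}
       {\;\langle \mathit{true}\rangle\; M_1 \parallel M_2 \;\langle G\rangle_{\ge p_G}\;}
\]
% Read: if M_1 satisfies assumption A with probability at least p_A, and M_2 guarantees G
% with probability at least p_G whenever its environment satisfies A with probability at
% least p_A, then the composed system satisfies G with probability at least p_G.
```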
APA, Harvard, Vancouver, ISO, and other styles