Academic literature on the topic 'Other theoretical computer science and computational mathematics'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Other theoretical computer science and computational mathematics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Other theoretical computer science and computational mathematics"

1

Kumar, Rakesh. "Future for Scientific Computing Using Python." International Journal of Engineering Technologies and Management Research 2, no. 1 (January 29, 2020): 30–41. http://dx.doi.org/10.29121/ijetmr.v2.i1.2015.28.

Full text
Abstract:
Computational science (scientific computing or scientific computation) is concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. In practical use, it is basically the application of computer simulation and other forms of computation from numerical analysis and theoretical computer science to problems in different scientific disciplines. The scientific computing approach is to gain understanding, basically through the analysis of mathematical models implemented on computers. Python is frequently used for high-performance scientific applications and is widely used in academia and in scientific projects because it is easy to write and performs well. To achieve high performance, scientific computing in Python typically relies on external libraries such as NumPy, SciPy and Matplotlib.
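The abstract's point that Python gains its performance by delegating numerical work to compiled libraries can be shown with a minimal NumPy sketch. The specific computation below is our own illustrative choice, not taken from the paper:

```python
import numpy as np

# Sum of squares of the first million integers, computed two ways:
# the NumPy expression runs in compiled code rather than the interpreter.
n = 1_000_000
xs = np.arange(1, n + 1, dtype=np.int64)

vectorised = int(np.sum(xs * xs))              # vectorised, compiled loop
closed_form = n * (n + 1) * (2 * n + 1) // 6   # exact formula as a check

print(vectorised == closed_form)
```

The vectorised expression is typically orders of magnitude faster than the equivalent pure-Python loop, which is the pattern the abstract describes.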
APA, Harvard, Vancouver, ISO, and other styles
2

Gadanidis, George. "Artificial intelligence, computational thinking, and mathematics education." International Journal of Information and Learning Technology 34, no. 2 (March 6, 2017): 133–39. http://dx.doi.org/10.1108/ijilt-09-2016-0048.

Abstract:
Purpose The purpose of this paper is to examine the intersection of artificial intelligence (AI), computational thinking (CT), and mathematics education (ME) for young students (K-8). Specifically, it focuses on three key elements that are common to AI, CT and ME: agency, modeling of phenomena and abstracting concepts beyond specific instances. Design/methodology/approach The theoretical framework of this paper adopts a sociocultural perspective where knowledge is constructed in interactions with others (Vygotsky, 1978). Others also refers to the multiplicity of technologies that surround us, including both the digital artefacts of our new media world, and the human methods and specialized processes acting in the world. Technology is not simply a tool for human intention. It is an actor in the cognitive ecology of immersive humans-with-technology environments (Levy, 1993, 1998) that supports but also disrupts and reorganizes human thinking (Borba and Villarreal, 2005). Findings There is fruitful overlap between AI, CT and ME that is of value to consider in mathematics education. Originality/value Seeing ME through the lenses of other disciplines and recognizing that there is a significant overlap of key elements reinforces the importance of agency, modeling and abstraction in ME and provides new contexts and tools for incorporating them in classroom practice.
3

Armoni, Michal, Judith Gal-Ezer, and Dina Tirosh. "Solving Problems Reductively." Journal of Educational Computing Research 32, no. 2 (March 2005): 113–29. http://dx.doi.org/10.2190/6pcm-447v-wf7b-qeuf.

Abstract:
Solving problems by reduction is an important issue in mathematics and science education in general (both in high school and in college or university) and particularly in computer science education. Developing reductive thinking patterns is an important goal in any scientific discipline, yet reduction is not an easy subject to cope with. Still, the use of reduction usually is insufficiently reflected in high school mathematics and science programs. Even in academic computer science programs the concept of reduction is mentioned explicitly only in advanced academic courses such as computability and complexity theory. However, reduction can be applied in other courses as well, even on the high school level. Specifically, in the field of computational models, reduction is an important method for solving design and proof problems. This study focuses on high school students studying the unit “computational models”—a unique unit, which is part of the new Israeli computer science high school curriculum. We examined whether high school students tend to solve problems dealing with computational models reductively, and if they do, what is the nature of their reductive solutions. To the best of our knowledge, the tendency to reductive thinking in theoretical computer science has not been studied before. Our findings show that even though many students use reduction, many others prefer non-reductive solutions, even when reduction can significantly decrease the technical complexity of the solution. We discuss these findings and suggest possible ways to improve reductive thinking.
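The kind of reduction the study examines can be made concrete with a small sketch: the standard computational-models example of deciding whether an automaton accepts any string at all by reducing the question to graph reachability. The toy automaton encoding here is hypothetical, for illustration only, and is not drawn from the study:

```python
from collections import deque

def dfa_accepts_something(start, accepting, transitions):
    """Reduce 'is the DFA's language non-empty?' to graph reachability:
    the language is non-empty iff some accepting state is reachable
    from the start state. transitions: dict state -> successor states."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state in accepting:
            return True  # reachable accepting state => non-empty language
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Accepting state 3 is unreachable from 0, so the language is empty.
print(dfa_accepts_something(0, {3}, {0: [1], 1: [0], 2: [3]}))  # False
print(dfa_accepts_something(0, {1}, {0: [1]}))                  # True
```

The reductive move is that no automaton-specific reasoning remains: a generic breadth-first search settles the question.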
4

Landsberg, J. M. "Algebraic geometry and representation theory in the study of matrix multiplication complexity and other problems in theoretical computer science." Differential Geometry and its Applications 82 (June 2022): 101888. http://dx.doi.org/10.1016/j.difgeo.2022.101888.

5

Pop, Nicolae, Marin Marin, and Sorin Vlase. "Mathematics in Finite Element Modeling of Computational Friction Contact Mechanics 2021–2022." Mathematics 11, no. 1 (January 3, 2023): 255. http://dx.doi.org/10.3390/math11010255.

Abstract:
In engineering practice, structures with identical components or parts are useful from several points of view: less information is needed to describe the system; designs can be conceptualized more quickly and easily; components are made faster than during traditional complex assembly; and finally, the time needed to achieve the structure and the cost involved in manufacturing decrease. Additionally, the subsequent maintenance of this system then becomes easier and cheaper. The aim of this Special Issue is to provide an opportunity for international researchers to share and review recent advances in the finite element modeling of computational friction contact mechanics. Numerical modeling in mathematics, mechanical engineering, computer science, computers, etc. presents many challenges. The finite element method applied in solid mechanics was designed by engineers to simulate numerical models in order to reduce the design costs of prototypes, tests and measurements. This method was initially validated only by measurements but gave encouraging results. After the discovery of Sobolev spaces, the abovementioned results were obtained, and today, numerous researchers are working on improving this method. Some applications of this method in solid mechanics include mechanical engineering, machine and device design, civil engineering, aerospace and automotive engineering, robotics, etc. Frictional contact is a complex phenomenon that has led to research in mechanical engineering, computational contact mechanics, composite material design, rigid body dynamics, robotics, etc. A good simulation requires that the dynamics of contact with friction be included in the formulation of the dynamic system so that an approximation of the complex phenomena can be made. 
To solve these linear or nonlinear dynamic systems, which often have non-differentiable terms, or discontinuities, software that considers these high-performance numerical methods and computers with high computing power are needed. This Special Issue is dedicated to this kind of mechanical structure and to describing the properties and methods of analysis of these structures. Discrete or continuous structures in static and dynamic cases are also considered. Additionally, theoretical models, mathematical methods and numerical analysis of these systems, such as the finite element method and experimental methods, are used in these studies. Machine building, automotive, aerospace and civil engineering are the main areas in which such applications appear, but they can also be found in most other engineering fields. With this Special Issue, we want to disseminate knowledge among researchers, designers, manufacturers and users in this exciting field.
6

Rodis, Panteleimon. "On defining and modeling context-awareness." International Journal of Pervasive Computing and Communications 14, no. 2 (June 4, 2018): 111–23. http://dx.doi.org/10.1108/ijpcc-d-18-00003.

Abstract:
Purpose This paper aims to present a methodology for defining and modeling context-awareness and for efficiently describing the interactions between systems, applications and their context. The relation of modern context-aware systems to distributed computation is also investigated. Design/methodology/approach To this end, definitions of context and context-awareness are developed based on the theory of computation, and especially on a computational model for interactive computation that extends the classical Turing Machine model. The computational model proposed here adds interaction and networking capabilities to computational machines. Findings The definition of context presented here develops a mathematical framework for working with context. The modeling approach of distributed computing enables us to build robust, scalable and detailed models for systems and applications with context-aware capabilities, and to map the procedures that support context-aware operations, providing detailed descriptions of the interactions of applications with their context and other external sources. Practical implications A case study of a cloud-based context-aware application is examined using the modeling methodology described in the paper so as to demonstrate the practical usage of the theoretical framework that is presented. Originality/value The originality of the framework presented here lies in the connection of context-awareness with the theory of computation and distributed computing.
7

Alt, Helmut, and Ludmila Scharf. "Computing the Hausdorff Distance between Curved Objects." International Journal of Computational Geometry & Applications 18, no. 04 (August 2008): 307–20. http://dx.doi.org/10.1142/s0218195908002647.

Abstract:
The Hausdorff distance between two sets of curves is a measure of the similarity of these objects and therefore an interesting feature in shape recognition. If the curves are algebraic, computing the Hausdorff distance involves computing the intersection points of the Voronoi edges of one set with the curves of the other. Since computing the Voronoi diagram of curves is quite difficult, we characterize those points algebraically and compute them using the computer algebra system SYNAPS. This paper describes in detail which points have to be considered, by what algebraic equations they are characterized, and how they are actually computed.
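The distance notion underlying the paper can be illustrated in the discrete setting. The paper itself works with algebraic curves and computer algebra; this sketch only computes the Hausdorff distance between finite sampled point sets, which is the standard discrete approximation:

```python
import math

def directed_hausdorff(A, B):
    """max over a in A of the distance from a to its nearest point in B."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point sets."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# A unit square and the same square shifted by 0.5 along the x-axis:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = [(0.5, 0), (1.5, 0), (1.5, 1), (0.5, 1)]
print(hausdorff(square, shifted))  # 0.5
```

For curves one would sample densely and apply the same definition; the paper's contribution is to avoid this approximation entirely for algebraic curves.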
8

Lütgehetmann, Daniel, Dejan Govc, Jason P. Smith, and Ran Levi. "Computing Persistent Homology of Directed Flag Complexes." Algorithms 13, no. 1 (January 7, 2020): 19. http://dx.doi.org/10.3390/a13010019.

Abstract:
We present a new computing package Flagser, designed to construct the directed flag complex of a finite directed graph, and compute persistent homology for flexibly defined filtrations on the graph and the resulting complex. The persistent homology computation part of Flagser is based on the program Ripser by U. Bauer, but is optimised specifically for large computations. The construction of the directed flag complex is done in a way that allows easy parallelisation by arbitrarily many cores. Flagser also has the option of working with undirected graphs. For homology computations Flagser has an approximate option, which shortens compute time with remarkable accuracy. We demonstrate the power of Flagser by applying it to the construction of the directed flag complex of digital reconstructions of brain microcircuitry by the Blue Brain Project and several other examples. In some instances we perform computation of homology. For a more complete performance analysis, we also apply Flagser to some other data collections. In all cases the hardware used in the computation, the use of memory and the compute time are recorded.
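The central object here, the directed flag complex, can be made concrete with a brute-force sketch: a k-simplex is an ordered tuple of vertices (v0, ..., vk) with an edge vi→vj for every i < j. This toy enumerator illustrates the definition only; it is not Flagser's optimised algorithm:

```python
def directed_flag_simplices(vertices, edges, max_dim):
    """Enumerate ordered cliques (v0,...,vk) with an edge vi->vj for all i < j.
    Returns a list indexed by dimension: entry k holds the k-simplices."""
    edge_set = set(edges)
    simplices = [[(v,) for v in vertices]]  # 0-simplices: the vertices
    for _ in range(max_dim):
        nxt = []
        for s in simplices[-1]:
            for v in vertices:
                # v extends s iff every earlier vertex points to v.
                if v not in s and all((u, v) in edge_set for u in s):
                    nxt.append(s + (v,))
        simplices.append(nxt)
    return simplices

# A directed 3-cycle contains no 2-simplex; a transitive triangle has one.
cycle = [(0, 1), (1, 2), (2, 0)]
tri = [(0, 1), (1, 2), (0, 2)]
print(len(directed_flag_simplices([0, 1, 2], cycle, 2)[2]))  # 0
print(len(directed_flag_simplices([0, 1, 2], tri, 2)[2]))    # 1
```

The cycle/triangle contrast shows why direction matters: both graphs have the same underlying undirected clique, but only the transitive orientation yields a 2-simplex.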
9

Mitrović, Melanija, Mahouton Norbert Hounkonnou, and Marian Alexandru Baroni. "Theory of Constructive Semigroups with Apartness – Foundations, Development and Practice." Fundamenta Informaticae 184, no. 3 (February 15, 2022): 233–71. http://dx.doi.org/10.3233/fi-2021-2098.

Abstract:
This paper has several purposes. We present through a critical review the results from already published papers on the constructive semigroup theory, and contribute to its further development by giving solutions to open problems. We also draw attention to its possible applications in other (constructive) mathematics disciplines, in computer science, social sciences, economics, etc. Another important goal of this paper is to provide a clear, understandable picture of constructive semigroups with apartness in Bishop’s style both to (classical) algebraists and the ones who apply algebraic knowledge.
10

Solghar, Alireza Arab, and S. A. Gandjalikhan Nassab. "The Investigation of Thermohydrodynamic Characteristic of Single Axial Groove Journal Bearings with Finite Length by Using CFD Techniques." International Journal of Computational Methods 10, no. 05 (May 2013): 1350031. http://dx.doi.org/10.1142/s021987621350031x.

Abstract:
The three-dimensional steady state thermohydrodynamic (THD) analysis of an axial grooved oil journal bearing is obtained theoretically. Navier–Stokes equations are solved simultaneously along with turbulent kinetic energy and its dissipation rate equations coupled with the energy equation in the lubricant flow and the heat conduction equation in the bush. The AKN low-Re κ–ε turbulence model is used to simulate the mean turbulent flow field. Considering the complexity of the physical geometry, conformal mapping is used to generate an orthogonal grid and the governing equations are transformed into the computational domain. Discretized forms of the transformed equations are obtained by the control volume method and solved by the SIMPLE algorithm. The numerical results of this analysis can be used to investigate the pressure distribution, volumetric oil flow rate and the loci of shaft in the journal bearings. To validate the computational results, comparison with the experimental and theoretical data of other investigators is made, and reasonable agreement is found.

Dissertations / Theses on the topic "Other theoretical computer science and computational mathematics"

1

Williamson, Alexander James. "Methods, rules and limits of successful self-assembly." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:9eb549f9-3372-4a38-9370-a9b0e58ca26b.

Abstract:
The self-assembly of structured particles into monodisperse clusters is a challenge on the nano-, micro- and even macro-scale. While biological systems are able to self-assemble with comparative ease, many aspects of this self-assembly are not fully understood. In this thesis, we look at the strategies and rules that can be applied to encourage the formation of monodisperse clusters. Though much of the inspiration is biological in nature, the simulations use a simple minimal patchy particle model and are thus applicable to a wide range of systems. The topics that this thesis addresses include: Encapsulation: We show how clusters can be used to encapsulate objects and demonstrate that such `templates' can be used to control the assembly mechanisms and enhance the formation of more complex objects. Hierarchical self-assembly: We investigate the use of hierarchical mechanisms in enhancing the formation of clusters. We find that, while we are able to extend the ranges where we see successful assembly by using a hierarchical assembly pathway, it does not straightforwardly provide a route to enhance the complexity of structures that can be formed. Pore formation: We use our simple model to investigate a particular biological example, namely the self-assembly and formation of heptameric alpha-haemolysin pores, and show that pore insertion is key to rationalising experimental results on this system. Phase re-entrance: We look at the computation of equilibrium phase diagrams for self-assembling systems, particularly focusing on the possible presence of an unusual liquid-vapour phase re-entrance that has been suggested by dynamical simulations, using a variety of techniques.
2

Blakey, Edward William. "A model-independent theory of computational complexity : from patience to precision and beyond." Thesis, University of Oxford, 2010. http://ora.ox.ac.uk/objects/uuid:5db40e2c-4a22-470d-9283-3b59b99793dc.

Abstract:
The field of computational complexity theory--which chiefly aims to quantify the difficulty encountered when performing calculations--is, in the case of conventional computers, correctly practised and well understood (some important and fundamental open questions notwithstanding); however, such understanding is, we argue, lacking when unconventional paradigms are considered. As an illustration, we present here an analogue computer that performs the task of natural-number factorization using only polynomial time and space; the system's true, exponential complexity, which arises from requirements concerning precision, is overlooked by a traditional, `time-and-space' approach to complexity theory. Hence, we formulate the thesis that unconventional computers warrant unconventional complexity analysis; the crucial omission from traditional analysis, we suggest, is consideration of relevant resources, these being not only time and space, but also precision, energy, etc. In the presence of this multitude of resources, however, the task of comparing computers' efficiency (formerly a case merely of comparing time complexity) becomes difficult. We resolve this by introducing a notion of overall complexity, though this transpires to be incompatible with an unrestricted formulation of resource; accordingly, we define normality of resource, and stipulate that considered resources be normal, so as to rectify certain undesirable complexity behaviour. Our concept of overall complexity induces corresponding complexity classes, and we prove theorems concerning, for example, the inclusions therebetween. Our notions of resource, overall complexity, normality, etc. 
form a model-independent framework of computational complexity theory, which allows: insightful complexity analysis of unconventional computers; comparison of large, model-heterogeneous sets of computers, and correspondingly improved bounds upon the complexity of problems; assessment of novel, unconventional systems against existing, Turing-machine benchmarks; increased confidence in the difficulty of problems; etc. We apply notions of the framework to existing disputes in the literature, and consider in the context of the framework various fundamental questions concerning the nature of computation.
3

Dunn, Sara-Jane Nicole. "Towards a computational model of the colonic crypt with a realistic, deformable geometry." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:c3c9440a-52ac-4a3d-8e1c-5dc276b8eb6c.

Abstract:
Colorectal cancer (CRC) is one of the most prevalent and deadly forms of cancer. Its high mortality rate is associated with difficulties in early detection, which is crucial to survival. The onset of CRC is marked by macroscopic changes in intestinal tissue, originating from a deviation in the healthy cell dynamics of glands known as the crypts of Lieberkuhn. It is believed that accumulated genetic alterations confer on mutated cells the ability to persist in the crypts, which can lead to the formation of a benign tumour through localised proliferation. Stress on the crypt walls can lead to buckling, or crypt fission, and the further spread of mutant cells. Elucidating the initial perturbations in crypt dynamics is not possible experimentally, but such investigations could be made using a predictive, computational model. This thesis proposes a new discrete crypt model, which focuses on the interaction between cell- and tissue-level behaviour, while incorporating key subcellular components. The model contains a novel description of the role of the surrounding tissue and musculature, which allows the shape of the crypt to evolve and deform. A two-dimensional (2D) cross-sectional geometry is considered. Simulation results reveal how the shape of the crypt base may contribute mechanically to the asymmetric division events typically associated with the stem cells in this region. The model predicts that epithelial cell migration may arise due to feedback between cell loss at the crypt collar and density-dependent cell division, an hypothesis which can be investigated in a wet lab. Further, in silico experiments illustrate how this framework can be used to investigate the spread of mutations, and conclude that a reduction in cell migration is key to confer persistence on mutant cell populations. 
A three-dimensional (3D) model is proposed to remove the spatial restrictions imposed on cell migration in 2D, and preliminary simulation results agree with the hypotheses generated in 2D. Computational limitations that currently restrict extension to a realistic 3D geometry are discussed. These models enable investigation of the role that mechanical forces play in regulating tissue homeostasis, and make a significant contribution to the theoretical study of the onset of crypt deformation under pre-cancerous conditions.
4

Leonard, Katherine H. L. "Mathematical and computational modelling of tissue engineered bone in a hydrostatic bioreactor." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:05845740-1a74-4e19-95ea-6b5229d1af27.

Abstract:
In vitro tissue engineering is a method for developing living and functional tissues external to the body, often within a device called a bioreactor to control the chemical and mechanical environment. However, the quality of bone tissue engineered products is currently inadequate for clinical use as the implant cannot bear weight. In an effort to improve the quality of the construct, hydrostatic pressure, the pressure in a fluid at equilibrium that is required to balance the force exerted by the weight of the fluid above, has been investigated as a mechanical stimulus for promoting extracellular matrix deposition and mineralisation within bone tissue. Thus far, little research has been performed into understanding the response of bone tissue cells to mechanical stimulation. In this thesis we investigate an in vitro bone tissue engineering experimental setup, whereby human mesenchymal stem cells are seeded within a collagen gel and cultured in a hydrostatic pressure bioreactor. In collaboration with experimentalists a suite of mathematical models of increasing complexity is developed and appropriate numerical methods are used to simulate these models. Each of the models investigates different aspects of the experimental setup, from focusing on global quantities of interest through to investigating their detailed local spatial distribution. The aim of this work is to increase understanding of the underlying physical processes which drive the growth and development of the construct, and identify which factors contribute to the highly heterogeneous spatial distribution of the mineralised extracellular matrix seen experimentally. The first model considered is a purely temporal model, where the evolution of cells, solid substrate, which accounts for the initial collagen scaffold and deposited extracellular matrix along with attendant mineralisation, and fluid in response to the applied pressure are examined. 
We demonstrate that including the history of the mechanical loading of cells is important in determining the quantity of deposited substrate. The second and third models extend this non-spatial model, and examine biochemically and biomechanically-induced spatial patterning separately. The first of these spatial models demonstrates that nutrient diffusion along with nutrient-dependent mass transfer terms qualitatively reproduces the heterogeneous spatial effects seen experimentally. The second multiphase model is used to investigate whether the magnitude of the shear stresses generated by fluid flow, can qualitatively explain the heterogeneous mineralisation seen in the experiments. Numerical simulations reveal that the spatial distribution of the fluid shear stress magnitude is highly heterogeneous, which could be related to the spatial heterogeneity in the mineralisation seen experimentally.
5

Dutta, Sara. "A multi-scale computational investigation of cardiac electrophysiology and arrhythmias in acute ischaemia." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:f5f68d8b-7a60-4109-91c8-6b1d80c7ee5b.

Abstract:
Sudden cardiac death is one of the leading causes of mortality in the western world. One of the main factors is myocardial ischaemia, when there is a mismatch between blood demand and supply to the heart, which may lead to disturbed cardiac excitation patterns, known as arrhythmias. Ischaemia is a dynamic and complex process, which is characterised by many electrophysiological changes that vary through space and time. Ischaemia-induced arrhythmic mechanisms, and the safety and efficacy of certain therapies are still not fully understood. Most experimental studies are carried out in animal, due to the ethical and practical limitations of human experiments. Therefore, extrapolation of mechanisms from animal to human is challenging, but can be facilitated by in silico models. Since the first cardiac cell model was built over 50 years ago, computer simulations have provided a wealth of information and insight that is not possible to obtain through experiments alone. Therefore, mathematical models and computational simulations provide a powerful and complementary tool for the study of multi-scale problems. The aim of this thesis is to investigate pro-arrhythmic electrophysiological consequences of acute myocardial ischaemia, using a multi-scale computational modelling and simulation framework. Firstly, we present a novel method, combining computational simulations and optical mapping experiments, to characterise ischaemia-induced spatial differences modulating arrhythmic risk in rabbit hearts. Secondly, we use computer models to extend our investigation of acute ischaemia to human, by carrying out a thorough analysis of recent human action potential models under varied ischaemic conditions, to test their applicability to simulate ischaemia. Finally, we combine state-of-the-art knowledge and techniques to build a human whole ventricles model, in which we investigate how anti-arrhythmic drugs modulate arrhythmic mechanisms in the presence of ischaemia.
6

Kay, Sophie Kate. "Cell fate mechanisms in colorectal cancer." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:f19bf73d-0c0e-4fff-9589-bf43f9ff12f0.

Abstract:
Colorectal cancer (CRC) arises in part from the dysregulation of cellular proliferation, associated with the canonical Wnt pathway, and differentiation, effected by the Notch signalling network. In this thesis, we develop a mathematical model of ordinary differential equations (ODEs) for the coupled interaction of the Notch and Wnt pathways in cells of the human intestinal epithelium. Our central aim is to understand the role of such crosstalk in the genesis and treatment of CRC. An embedding of this model in cells of a simulated colonic tissue enables computational exploration of the cell fate response to spatially inhomogeneous growth cues in the healthy intestinal epithelium. We also examine an alternative, rule-based model from the literature, which employs a simple binary approach to pathway activity, in which the Notch and Wnt pathways are constitutively on or off. Comparison of the two models demonstrates the substantial advantages of the equation-based paradigm, through its delivery of stable and robust cell fate patterning, and its versatility for exploring the multiscale consequences of a variety of subcellular phenomena. Extension of the ODE-based model to include mutant cells facilitates the study of Notch-mediated therapeutic approaches to CRC. We find a marked synergy between the application of γ-secretase inhibitors and Hath1 stabilisers in the treatment of early-stage intestinal polyps. This combined treatment is an efficient means of inducing mitotic arrest in the cell population of the intestinal epithelium through enforced conversion to a secretory phenotype and is highlighted as a viable route for further theoretical, experimental and clinical study.
7

Björck, Olof. "Creating Interactive Visualizations for Twitter Datasets using D3." Thesis, Uppsala universitet, Matematiska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-351802.

Abstract:
Project Meme Evolution Programme (Project MEP) is a research program directed by Raazesh Sainudiin, Uppsala University, Sweden, that collects and analyzes datasets from Twitter. Twitter can be used to understand how ideas spread in social media. This project aims to produce interactive visualizations for datasets collected in Project MEP. Such interactive visualizations will facilitate exploratory data analysis in Project MEP. Several technologies had to be learned to produce the visualizations, most notably JavaScript, D3, and Scala. Three interactive visualizations were produced: one that allows for exploration of a Twitter user timeline, and two that allow for exploration and understanding of a Twitter retweet network. The interactive visualizations are accessible as Scala functions and in a website developed in this project and uploaded to GitHub. The interactive visualizations contain some known bugs, but they still allow for useful exploratory data analysis of Project MEP datasets, and the project goal is therefore considered met.
8

Wredh, Simon, Anton Kroner, and Tomas Berg. "A Comparison of Three Time-stepping Methods for the LLG Equation in Dynamic Micromagnetics." Thesis, Uppsala universitet, Avdelningen för beräkningsvetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-323537.

Abstract:
Micromagnetism is the study of magnetic materials on the microscopic length scale (of nano- to micrometers); this scale does not take quantum mechanical effects into account, but is small enough to neglect certain macroscopic effects of magnetism in a material. The Landau-Lifshitz-Gilbert (LLG) equation is used within micromagnetism to determine the time evolution of the magnetisation vector field in a ferromagnetic solid. It is a highly nonlinear partial differential equation, which makes it very difficult to solve analytically. Thus numerical methods have been developed for approximating the solution using computers. In this report we compare the performance of three different numerical methods for the LLG equation: the implicit midpoint method (IMP), the midpoint with extrapolation method (MPE), and the Gauss-Seidel Projection method (GSPM). It was found that all methods have convergence rates as expected; second order for IMP and MPE, and first order for GSPM. Energy-conserving properties of the schemes were analysed, and neither MPE nor GSPM conserves energy. The computational time required for the IMP method was determined to be very large in comparison to the other two. Suggestions for different areas of use for each method are provided.
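The energy-conserving behaviour that distinguishes the implicit midpoint method in this comparison can be illustrated on a simpler linear system, where the implicit step has a closed-form solution. This sketch uses the harmonic oscillator, not the LLG equation itself:

```python
# Implicit midpoint step for the harmonic oscillator (q, p)' = (p, -q).
# The implicit equation (I - hA/2) y_new = (I + hA/2) y_old is solved in
# closed form for A = [[0, 1], [-1, 0]]; the resulting map is a Cayley
# transform of a skew-symmetric matrix, so it exactly conserves the
# quadratic energy E = (q**2 + p**2) / 2 -- the property that makes
# IMP attractive for energy-conserving dynamics like LLG.
def implicit_midpoint_step(q, p, h):
    d = 1 + (h / 2) ** 2
    q_new = ((1 - (h / 2) ** 2) * q + h * p) / d
    p_new = (-h * q + (1 - (h / 2) ** 2) * p) / d
    return q_new, p_new

q, p, h = 1.0, 0.0, 0.1
for _ in range(1000):
    q, p = implicit_midpoint_step(q, p, h)

print(abs((q * q + p * p) / 2 - 0.5) < 1e-12)  # energy preserved: True
```

An explicit method such as forward Euler applied to the same system grows the energy at every step, which is the contrast the report's analysis of MPE and GSPM draws out.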
APA, Harvard, Vancouver, ISO, and other styles
9

Ueda, Maria. "Programmering i matematik ur elevernas perspektiv : En fallstudie i en niondeklass." Thesis, KTH, Lärande, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291255.

Full text
Abstract:
Decision makers in Sweden and internationally have concluded that teaching programming is important. Programming was thus introduced into the Swedish curriculum in 2018, with parts of the programming education to be conducted within the subject of mathematics. Many mathematics teachers, however, feel uncertain about how this teaching should be carried out in practice. The purpose of this work is to study programming in the subject of mathematics from the perspective of students in years 7-9, especially in terms of mathematical learning and learning how to think when programming (computational thinking). This is done as a case study in a ninth-grade class that was taught programming during 6 lessons on 5 occasions. The study consists of classroom observations, short questionnaires, and interviews with the students. The conclusion of the case study is that the students are predominantly positive about programming in mathematics and see it as something new and different. They see programming as creative compared to other mathematics lessons and appreciate the direct response the computers give. However, they have difficulty seeing direct learning in mathematics, except that they get to use variables. It is not enough for the programming tasks to contain mathematics for the students to experience that they learn mathematics when they program. Which signs of computational thinking appear after the programming lessons depends on how computational thinking is defined. In this work, computational thinking is divided into six main concepts: abstraction, algorithmic thinking, automation, decomposition, debugging, and generalization; the students in this study show signs of algorithmic thinking, express that they appreciate automation, and learn to work through debugging.
In addition, they describe that they collaborate and communicate more in the programming lessons than in other mathematics lessons and appreciate being able to create their own programming tasks and work with open problems.
APA, Harvard, Vancouver, ISO, and other styles
10

Xi, Jiahe. "Cardiac mechanical model personalisation and its clinical applications." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:0db4cf52-4f64-4ee0-8933-3fb49d64aee6.

Full text
Abstract:
An increasingly important research area within the field of cardiac modelling is the development and study of methods of model-based parameter estimation from clinical measurements of cardiac function. This provides a powerful approach for the quantification of cardiac function, with the potential to ultimately lead to the improved stratification and treatment of individuals with pathological myocardial mechanics. In particular, the diastolic function (i.e., blood filling) of the left ventricle (LV) is affected by its capacity for relaxation, or the decay in residual active tension (AT), whose inhibition limits the relaxation of the LV chamber, which in turn affects its compliance (or its reciprocal, stiffness). The clinical determination of these two factors, corresponding to the diastolic residual AT and passive constitutive parameters (stiffness) in the cardiac mechanical model, is thus essential for assessing LV diastolic function. However, these parameters are difficult to assess in vivo, and the traditional criteria for diagnosing diastolic dysfunction are subject to many limitations and controversies. In this context, the objective of this study is to develop clinically applicable model-based methodologies to estimate in vivo, from 4D imaging measurements and LV cavity pressure recordings, these clinically relevant parameters (passive stiffness and diastolic residual active tension) in computational cardiac mechanical models, which enable the quantification of key clinical indices characterising cardiac diastolic dysfunction. Firstly, a sequential data assimilation framework has been developed, covering various types of existing Kalman filters, outlined in chapter 3. Based on these developments, chapter 4 demonstrates that the novel reduced-order unscented Kalman filter can accurately retrieve homogeneous and regionally varying constitutive parameters from synthetic noisy motion measurements. This work has been published in Xi et al. 2011a.
Secondly, this thesis has investigated the development of methods that can be applied in clinical practice, which has, in turn, introduced additional difficulties and opportunities. This thesis presents, to the best of our knowledge, the first study in the literature to estimate human constitutive parameters using clinical data, and demonstrates, for the first time, that while an end-diastolic MR measurement does not constrain the mechanical parameters uniquely, it does provide a potentially robust indicator of myocardial stiffness. This work has been published in Xi et al. 2011b. However, an unresolved issue in patients with diastolic dysfunction is that the estimation of myocardial stiffness cannot be decoupled from the diastolic residual AT because of the impaired ventricular relaxation during diastole. To further address this problem, chapter 6 presents the first study to estimate diastolic parameters of the left ventricle (LV) from cine and tagged MRI measurements and LV cavity pressure recordings, separating the passive myocardial constitutive properties and the diastolic residual AT. We apply this framework to three clinical cases, and the results show that the estimated constitutive parameters and residual active tension appear to be a promising candidate for delineating healthy and pathological cases. This work has been published in Xi et al. 2012a. Nevertheless, the need to invasively acquire LV pressure measurements limits the wide application of this approach. Chapter 7 addresses this issue by analysing the feasibility of using two kinds of non-invasively available pressure measurements for the purpose of inverse parameter estimation. The work has been submitted for publication in Xi et al. 2012b.
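As a toy illustration of the Kalman-filtering idea behind this parameter-estimation approach (not the reduced-order unscented filter used in the thesis), a minimal scalar filter can recover a constant parameter from noisy observations. The "stiffness" value and noise levels below are invented for the example:

```python
import random

def kalman_constant_parameter(measurements, r, p0=1.0, x0=0.0):
    """Minimal scalar Kalman filter for a constant parameter:
    state model x_k = x_{k-1}, observation z_k = x_k + noise with
    variance r. A sketch of the filtering idea only, not the thesis
    method."""
    x, p = x0, p0
    for z in measurements:
        k = p / (p + r)       # Kalman gain
        x = x + k * (z - x)   # update estimate toward the measurement
        p = (1 - k) * p       # shrink the estimate variance
    return x

random.seed(0)
true_stiffness = 2.5  # invented "ground truth" for the demo
zs = [true_stiffness + random.gauss(0, 0.1) for _ in range(200)]
est = kalman_constant_parameter(zs, r=0.01)
print(est)  # close to 2.5
```

The same recursive predict/update pattern, generalised to vector states and nonlinear mechanics models, is what the unscented variants in the thesis build on.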
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Other theoretical computer science and computational mathematics"

1

S, Hofer Thomas, and Kugler Michael D, eds. The basics of theoretical and computational chemistry. Weinheim: Wiley-VCH, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

No, Jong-Seon. Mathematical Properties of Sequences and Other Combinatorial Structures. Boston, MA: Springer US, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

1952-, Calude Cristian, Dinneen M. J. 1957-, and Vajnovszki Vincent, eds. Discrete mathematics and theoretical computer science: 4th international conference, DMTCS 2003, Dijon, France, July 2003, proceedings. Berlin: Springer, 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Cangiani, Andrea. Numerical Mathematics and Advanced Applications 2011: Proceedings of ENUMATH 2011, the 9th European Conference on Numerical Mathematics and Advanced Applications, Leicester, September 2011. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Computational chemistry: A practical guide for applying techniques to real world problems. New York: Wiley, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Hui, Wai-How. Computational Fluid Dynamics Based on the Unified Coordinates. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sirca, Simon. Computational Methods for Physicists: Compendium for Students. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wu, Xinyuan. Structure-Preserving Algorithms for Oscillatory Differential Equations. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Dietmar, Kröner, Resch Michael, and SpringerLink (Online service), eds. High Performance Computing in Science and Engineering '11: Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2011. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Griebel, Michael. Meshfree Methods for Partial Differential Equations VI. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Other theoretical computer science and computational mathematics"

1

Rajeswari, S., M. Indhumathy, A. Somasundaram, Neeru Sood, and S. Arumugam. "Identification of Salinity Stress Tolerant Proteins in Sorghum Bicolor Computational Approach." In Theoretical Computer Science and Discrete Mathematics, 318–25. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-64419-6_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cooperman, Gene, and Larry Finkelstein. "Combinatorial tools for computational group theory." In DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 53–86. Providence, Rhode Island: American Mathematical Society, 1993. http://dx.doi.org/10.1090/dimacs/011/05.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Havas, George, and Edmund Robertson. "Application of computational tools for finitely presented groups." In DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 29–39. Providence, Rhode Island: American Mathematical Society, 1994. http://dx.doi.org/10.1090/dimacs/015/03.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Farley, Jonathan. "Chain decomposition theorems for ordered sets and other musings." In DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 3–13. Providence, Rhode Island: American Mathematical Society, 1997. http://dx.doi.org/10.1090/dimacs/034/01.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Winfree, Erik. "On the computational power of DNA annealing and ligation." In DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 199–221. Providence, Rhode Island: American Mathematical Society, 1996. http://dx.doi.org/10.1090/dimacs/027/09.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Anderson, Richard, and Joao Setubal. "Goldberg’s algorithm for maximum flow in perspective: a computational study." In DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 1–18. Providence, Rhode Island: American Mathematical Society, 1993. http://dx.doi.org/10.1090/dimacs/012/01.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Arora, Sanjeev. "Nearly linear time approximation schemes for Euclidean TSP and other geometric problems." In DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 1–2. Providence, Rhode Island: American Mathematical Society, 1998. http://dx.doi.org/10.1090/dimacs/040/01.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

McShine, Lisa, and Prasad Tetali. "On the mixing time of the triangulation walk and other Catalan structures." In DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 147–60. Providence, Rhode Island: American Mathematical Society, 1998. http://dx.doi.org/10.1090/dimacs/043/09.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

O’Donnell, Michael J. "Introduction: Logic and Logic Programming Languages." In Handbook of Logic in Artificial Intelligence and Logic Programming: Volume 5: Logic Programming. Oxford University Press, 1998. http://dx.doi.org/10.1093/oso/9780198537922.003.0004.

Full text
Abstract:
Logic, according to Webster’s dictionary [Webster, 1987], is ‘a science that deals with the principles and criteria of validity of inference and demonstration: the science of the formal principles of reasoning.' Such 'principles and criteria’ are always described in terms of a language in which inference, demonstration, and reasoning may be expressed. One of the most useful accomplishments of logic for mathematics is the design of a particular formal language, the First Order Predicate Calculus (FOPC). FOPC is so successful at expressing the assertions arising in mathematical discourse that mathematicians and computer scientists often identify logic with classical logic expressed in FOPC. In order to explore a range of possible uses of logic in the design of programming languages, we discard the conventional identification of logic with FOPC, and formalize a general schema for a variety of logical systems, based on the dictionary meaning of the word. Then, we show how logic programming languages may be designed systematically for any sufficiently effective logic, and explain how to view Prolog, Datalog, λProlog, Equational Logic Programming, and similar programming languages, as instances of the general schema of logic programming. Other generalizations of logic programming have been proposed independently by Meseguer [Meseguer, 1989], Miller, Nadathur, Pfenning and Scedrov [Miller et al., 1991], Goguen and Burstall [Goguen and Burstall, 1992]. The purpose of this chapter is to introduce a set of basic concepts for understanding logic programming, not in terms of its historical development, but in a systematic way based on retrospective insights. In order to achieve a systematic treatment, we need to review a number of elementary definitions from logic and theoretical computer science and adapt them to the needs of logic programming. The result is a slightly modified logical notation, which should be recognizable to those who know the traditional notation. 
Conventional logical notation is also extended to new and analogous concepts, designed to make the similarities and differences between logical relations and computational relations as transparent as possible. Computational notation is revised radically to make it look similar to logical notation.
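As a minimal illustration of the bottom-up logic-program evaluation that Datalog-style languages use (our own sketch, not an example from the chapter), the classic ancestor rules can be evaluated by repeated rule application until a fixed point is reached:

```python
def derive_ancestors(parent_facts):
    """Compute ancestor/2 from parent/2 by repeated rule application:
        ancestor(X, Y) :- parent(X, Y).
        ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
    A hand-rolled sketch of bottom-up (fixed-point) evaluation."""
    ancestors = set(parent_facts)  # first rule: every parent is an ancestor
    changed = True
    while changed:                 # iterate until no new facts are derived
        changed = False
        for (x, y) in parent_facts:
            for (a, z) in list(ancestors):
                if a == y and (x, z) not in ancestors:
                    ancestors.add((x, z))  # second rule fires
                    changed = True
    return ancestors

parents = {("ann", "bob"), ("bob", "cid")}
print(sorted(derive_ancestors(parents)))
# [('ann', 'bob'), ('ann', 'cid'), ('bob', 'cid')]
```

Termination is guaranteed because the derived set is bounded by the finite set of constant pairs, which is the essence of why Datalog evaluation is effective.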
APA, Harvard, Vancouver, ISO, and other styles
10

Kalai, Gil, and Shmuel Safra. "Threshold Phenomena and Influence: Perspectives from Mathematics, Computer Science, and Economics." In Computational Complexity and Statistical Physics. Oxford University Press, 2005. http://dx.doi.org/10.1093/oso/9780195177374.003.0008.

Full text
Abstract:
Threshold phenomena refer to settings in which the probability for an event to occur changes rapidly as some underlying parameter varies. Threshold phenomena play an important role in probability theory and statistics, physics, and computer science, and are related to issues studied in economics and political science. Quite a few questions that come up naturally in those fields translate to proving that some event indeed exhibits a threshold phenomenon, and then finding the location of the transition and how rapid the change is. The notions of sharp thresholds and phase transitions originated in physics, and many of the mathematical ideas for their study came from mathematical physics. In this chapter, however, we will mainly discuss connections to other fields. A simple yet illuminating example that demonstrates the sharp threshold phenomenon is Condorcet's jury theorem, which can be described as follows. Say one is running an election process, where the results are determined by simple majority, between two candidates, Alice and Bob. If every voter votes for Alice with probability p > 1/2 and for Bob with probability 1 − p, and if the probabilities for each voter to vote either way are independent of the other votes, then as the number of voters tends to infinity the probability of Alice getting elected tends to 1. The probability of Alice getting elected is a monotone function of p, and when there are many voters it rapidly changes from being very close to 0 when p < 1/2 to being very close to 1 when p > 1/2. The reason usually given for the interest of Condorcet's jury theorem to economics and political science [535] is that it can be interpreted as saying that even if agents receive very poor (yet independent) signals, indicating which of two choices is correct, majority voting nevertheless results in the correct decision being taken with high probability, as long as there are enough agents and the agents vote according to their signal.
This is referred to in economics as asymptotically complete aggregation of information.
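The jury theorem described in this abstract is easy to verify numerically: assuming independent voters who are each correct with probability p, the exact probability that a simple majority decides correctly follows from the binomial distribution (n is kept odd to avoid ties):

```python
from math import comb

def majority_correct_prob(n, p):
    """Exact probability that a simple majority of n independent voters,
    each correct with probability p, reaches the correct decision
    (n odd, so ties cannot occur) -- the Condorcet jury setting."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# The sharp threshold: for p slightly above 1/2 the probability climbs
# toward 1 as the electorate grows.
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct_prob(n, 0.55), 4))
```

Evaluating the same function for p slightly below 1/2 shows the mirror-image decay toward 0, which is exactly the rapid transition the chapter studies.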
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Other theoretical computer science and computational mathematics"

1

"Impact of Mathematics on the Theoretical Computer Science Course Units in the General Degree Program in Computer Science at Sri Lankan State Universities." In InSITE 2018: Informing Science + IT Education Conferences: La Verne California. Informing Science Institute, 2018. http://dx.doi.org/10.28945/4057.

Full text
Abstract:
[This Proceedings paper was revised and published in the 2018 issue of the journal Issues in Informing Science and Information Technology, Volume 15] ABSTRACT Mathematics is fundamental to the study of Computer Science. In Sri Lankan state universities, students have been enrolled only from the Physical Science stream, with a minimum 'C' grade in Mathematics at the advanced level examination, to follow a degree program in Computer Science. In addition, universities have been offering course units in Mathematics covering the basics of Discrete Mathematics, Calculus, and Algebra to provide the required mathematical maturity to Computer Science undergraduates. Despite this, it is observed that the failure rate in fundamental theoretical Computer Science course units is much higher than in other course units offered in the general degree program every year. The purpose of this study is to identify how advanced level Mathematics and the Mathematics course units offered at university level impact the academic performance in theoretical Computer Science course units, and to make appropriate recommendations based on our findings. Academic records comprising 459 undergraduates from three consecutive batches admitted to the degree program in Computer Science at one university were considered for this study. Results indicated that advanced level Mathematics does not have any significant effect on the academic performance in theoretical Computer Science course units. Even though all Mathematics course units offered in the first and second year of studies were significantly correlated with academic performance in every theoretical Computer Science course unit, only the Discrete Mathematics course unit highly impacted the academic performance in all three theoretical Computer Science course units. Furthermore, this study indicates that the academic performance of female undergraduates is better than that of males in all theoretical Computer Science and Mathematics course units.
APA, Harvard, Vancouver, ISO, and other styles
2

Herta, Christian, Benjamin Voigt, Patrick Baumann, Klaus Strohmenger, Christoph Jansen, Oliver Fischer, Gefei Zhang, and Peter Hufnagel. "Deep Teaching: Materials for Teaching Machine and Deep Learning." In Fifth International Conference on Higher Education Advances. Valencia: Universitat Politècnica València, 2019. http://dx.doi.org/10.4995/head19.2019.9177.

Full text
Abstract:
Machine learning (ML) is considered to be hard because it is relatively complicated in comparison to other topics of computer science. The reason is that machine learning is based heavily on mathematics and abstract concepts. This results in an entry barrier for students: most students want to avoid such difficult topics in elective courses or self-study. In the project Deep.Teaching we address these issues: we motivate with selected applications and support courses as well as self-study by giving practical exercises for different topics in machine learning. The teaching material, provided as Jupyter notebooks, consists of theoretical and programming sections. For didactical reasons, we designed the programming exercises such that the students have to deeply understand the concepts and principles before they can start to implement a solution. We provide all necessary boilerplate code so that the students can focus primarily on the educational objectives of the exercises. We used different ways to give feedback for self-study: obscured solutions for mathematical results, software tests with assert statements, and graphical illustrations of sample solutions. All of the material is published under a permissive license. Developing Jupyter notebooks collaboratively for educational purposes poses some problems; we address these issues and provide solutions and best practices.
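The assert-statement feedback mechanism described in this abstract can be sketched as follows; the sigmoid exercise and its checks are our own invented example, not taken from the Deep.Teaching material:

```python
from math import exp

def sigmoid(x):
    """Student exercise: implement the logistic function.
    (A reference solution is shown here; in the teaching material the
    body would be left for the student to fill in.)"""
    return 1.0 / (1.0 + exp(-x))

# Self-study feedback via software tests with assert statements:
# the notebook cell passes silently when the implementation is correct
# and raises AssertionError pointing at the failing property otherwise.
assert abs(sigmoid(0.0) - 0.5) < 1e-12
assert sigmoid(10.0) > 0.999
assert abs(sigmoid(-3.0) + sigmoid(3.0) - 1.0) < 1e-12  # symmetry
print("All checks passed")
```

Property-style checks like the symmetry assertion give feedback without revealing the solution, which is the point of this feedback style.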
APA, Harvard, Vancouver, ISO, and other styles
3

Carrano, Dominic, Ilya Chugunov, Jonathan Lee, and Babak Ayazifar. "Self-Contained Jupyter Notebook Labs Promote Scalable Signal Processing Education." In Sixth International Conference on Higher Education Advances. Valencia: Universitat Politècnica de València, 2020. http://dx.doi.org/10.4995/head20.2020.11308.

Full text
Abstract:
Our upper-division course in Signals and Systems at UC Berkeley comprises primarily sophomore and junior undergraduates, and assumes only a basic background in Electrical Engineering and Computer Science. We’ve introduced Jupyter Notebook Python labs to complement the theoretical material covered in more traditional lectures and homeworks. Courses at other institutions have created labs with a similar goal in mind. However, many have a hardware component or involve in-person lab sections that require teaching staff to monitor progress. This presents a significant barrier for deployment in larger courses. Virtual labs—in particular, pure software assignments using the Jupyter Notebook framework—recently emerged as a solution to this problem. Some courses use programming-only labs that lack the modularity and rich user interface of Jupyter Notebook’s cell-based design. Other labs based on the Jupyter Notebook have not yet tapped the full potential of its versatile features. Our labs (1) demonstrate real-life applications; (2) cultivate computational literacy; and (3) are structured to be self-contained. These design principles reduce overhead for teaching staff and give students relevant experience for research and industry.
APA, Harvard, Vancouver, ISO, and other styles
4

Chen, Cheng, Xiaobo Zhong, Jun Xiao, Yong Zhu, and Jiao Jiang. "Performance Monitoring of Regenerative System Based on Dominant Factor Method." In ASME 2017 Power Conference Joint With ICOPE-17 collocated with the ASME 2017 11th International Conference on Energy Sustainability, the ASME 2017 15th International Conference on Fuel Cell Science, Engineering and Technology, and the ASME 2017 Nuclear Forum. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/power-icope2017-3534.

Full text
Abstract:
Safe and efficient operation of a power plant is the system designers' target. A regenerative system improves the Rankine cycle efficiency of a power station. However, it is quite difficult to monitor the regenerative system's performance in an accurate, economical, and real-time way at any operation load. There are two main problems. One is that most models based on numerical and statistical approaches cannot be explained by the operating mechanism of the actual process. The other is that most mechanism models in the past could not be used to monitor system performance accurately in real time. This paper focuses on solving these two problems and finds a better way, called the dominant factor method, to monitor the regenerative system's performance accurately in real time through the analysis of mechanism models and numerical methods. Two important parameters (characteristic parameter and dominant factor) and characteristic functions are introduced, and the analysis process and the model building process are described. The mathematical model building process is based on a 1000 MW unit's regenerative system. Characteristic functions are built based on the specific operating data of the power unit. Combining the general mechanism model and the characteristic functions, this paper builds a regenerative system off-design mathematical model. First, the model's accuracy is proved by computer simulation. Then, the models were used to predict the pressure of the piping outlet and the temperature of the outlet feedwater and drain water of heaters in real time by computers. The results show that the deviation rate between the theoretical predictions and the actual operation data is less than 0.25% over the whole operation load range. Finally, in order to test the fault identification ability of this model, real tests were done in this 1000 MW power plant during the actual operation period.
Performance changes are identified via the difference between the predicted value and the real-time value. The test results show that both gradual and sudden performance changes can easily be detected from the model results. In order to verify the adaptability of the model, it was applied to another 300 MW unit and operation tests were performed. The results show that this method can also be used for the 300 MW unit's regenerative system, and that it can help operators recognize a faulty heater. The results of this paper prove that the dominant factor method is feasible for performance monitoring of the regenerative system. It can be used to monitor the regenerative system and find faults at any operation load in an accurate and fast way in real time.
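The monitoring step described here can be sketched as a simple deviation-rate check. The 0.25% figure comes from the abstract, while the flagging rule and the temperature values in the example are our own illustrative assumptions:

```python
def deviation_rate(predicted, measured):
    """Relative deviation between model prediction and plant measurement;
    the paper reports rates below 0.25% over the whole load range."""
    return abs(predicted - measured) / abs(measured)

def flag_fault(predicted, measured, threshold=0.0025):
    """Flag a heater as suspect when the deviation exceeds the threshold.
    The 0.25% threshold is taken from the abstract's reported model error;
    the flagging rule itself is our illustrative assumption."""
    return deviation_rate(predicted, measured) > threshold

# Hypothetical feedwater outlet temperatures (degrees C):
print(flag_fault(250.1, 250.0))  # False: within normal model error
print(flag_fault(243.0, 250.0))  # True: possible heater degradation
```

In practice a robust monitor would also track the trend of the deviation over time, since the paper distinguishes gradual drifts from sudden changes.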
APA, Harvard, Vancouver, ISO, and other styles
