Journal articles on the topic 'Other theoretical computer science and computational mathematics'




Consult the top 50 journal articles for your research on the topic 'Other theoretical computer science and computational mathematics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Kumar, Rakesh. "FUTURE FOR SCIENTIFIC COMPUTING USING PYTHON." International Journal of Engineering Technologies and Management Research 2, no. 1 (January 29, 2020): 30–41. http://dx.doi.org/10.29121/ijetmr.v2.i1.2015.28.

Abstract:
Computational science (scientific computing or scientific computation) is concerned with constructing mathematical models and quantitative analysis techniques and with using computers to analyze and solve scientific problems. In practice, it is essentially the application of computer simulation and other forms of computation from numerical analysis and theoretical computer science to problems in different scientific disciplines. The scientific computing approach is to gain understanding, mainly through the analysis of mathematical models implemented on computers. Python is frequently used for high-performance scientific applications and is widely used in academia and in scientific projects because it is easy to write and performs well. For performance-critical work, scientific computing in Python typically relies on external libraries such as NumPy, SciPy, and Matplotlib.
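
As a minimal illustration of this kind of workflow (a sketch of ours, not code from the paper), the following uses NumPy and SciPy to solve a small linear system and evaluate a definite integral:

    import numpy as np
    from scipy import integrate, linalg

    # Solve the linear system A x = b.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([9.0, 8.0])
    x = linalg.solve(A, b)
    print("solution:", x)  # [2. 3.]

    # Adaptive quadrature of exp(-t^2) over [0, 1].
    value, abs_err = integrate.quad(lambda t: np.exp(-t * t), 0.0, 1.0)
    print("integral:", value, "+/-", abs_err)
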
2

Gadanidis, George. "Artificial intelligence, computational thinking, and mathematics education." International Journal of Information and Learning Technology 34, no. 2 (March 6, 2017): 133–39. http://dx.doi.org/10.1108/ijilt-09-2016-0048.

Abstract:
Purpose: The purpose of this paper is to examine the intersection of artificial intelligence (AI), computational thinking (CT), and mathematics education (ME) for young students (K-8). Specifically, it focuses on three key elements that are common to AI, CT and ME: agency, modeling of phenomena and abstracting concepts beyond specific instances.
Design/methodology/approach: The theoretical framework of this paper adopts a sociocultural perspective where knowledge is constructed in interactions with others (Vygotsky, 1978). "Others" also refers to the multiplicity of technologies that surround us, including both the digital artefacts of our new media world, and the human methods and specialized processes acting in the world. Technology is not simply a tool for human intention. It is an actor in the cognitive ecology of immersive humans-with-technology environments (Levy, 1993, 1998) that supports but also disrupts and reorganizes human thinking (Borba and Villarreal, 2005).
Findings: There is fruitful overlap between AI, CT and ME that is of value to consider in mathematics education.
Originality/value: Seeing ME through the lenses of other disciplines and recognizing that there is a significant overlap of key elements reinforces the importance of agency, modeling and abstraction in ME and provides new contexts and tools for incorporating them in classroom practice.
3

Armoni, Michal, Judith Gal-Ezer, and Dina Tirosh. "Solving Problems Reductively." Journal of Educational Computing Research 32, no. 2 (March 2005): 113–29. http://dx.doi.org/10.2190/6pcm-447v-wf7b-qeuf.

Abstract:
Solving problems by reduction is an important issue in mathematics and science education in general (both in high school and in college or university) and particularly in computer science education. Developing reductive thinking patterns is an important goal in any scientific discipline, yet reduction is not an easy subject to cope with. Still, the use of reduction usually is insufficiently reflected in high school mathematics and science programs. Even in academic computer science programs the concept of reduction is mentioned explicitly only in advanced academic courses such as computability and complexity theory. However, reduction can be applied in other courses as well, even on the high school level. Specifically, in the field of computational models, reduction is an important method for solving design and proof problems. This study focuses on high school students studying the unit “computational models”—a unique unit, which is part of the new Israeli computer science high school curriculum. We examined whether high school students tend to solve problems dealing with computational models reductively, and if they do, what is the nature of their reductive solutions. To the best of our knowledge, the tendency to reductive thinking in theoretical computer science has not been studied before. Our findings show that even though many students use reduction, many others prefer non-reductive solutions, even when reduction can significantly decrease the technical complexity of the solution. We discuss these findings and suggest possible ways to improve reductive thinking.
4

Landsberg, J. M. "Algebraic geometry and representation theory in the study of matrix multiplication complexity and other problems in theoretical computer science." Differential Geometry and its Applications 82 (June 2022): 101888. http://dx.doi.org/10.1016/j.difgeo.2022.101888.

5

Pop, Nicolae, Marin Marin, and Sorin Vlase. "Mathematics in Finite Element Modeling of Computational Friction Contact Mechanics 2021–2022." Mathematics 11, no. 1 (January 3, 2023): 255. http://dx.doi.org/10.3390/math11010255.

Abstract:
In engineering practice, structures with identical components or parts are useful from several points of view: less information is needed to describe the system; designs can be conceptualized more quickly and easily; components are made faster than during traditional complex assembly; and finally, the time needed to achieve the structure and the cost involved in manufacturing decrease. Additionally, the subsequent maintenance of such a system becomes easier and cheaper. The aim of this Special Issue is to provide an opportunity for international researchers to share and review recent advances in the finite element modeling of computational friction contact mechanics. Numerical modeling in mathematics, mechanical engineering, computer science, etc. presents many challenges. The finite element method applied in solid mechanics was designed by engineers to simulate numerical models in order to reduce the design costs of prototypes, tests and measurements. This method was initially validated only by measurements but gave encouraging results. After the discovery of Sobolev spaces, the abovementioned results were obtained, and today numerous researchers are working on improving this method. Applications of this method in solid mechanics include mechanical engineering, machine and device design, civil engineering, aerospace and automotive engineering, robotics, etc. Frictional contact is a complex phenomenon that has led to research in mechanical engineering, computational contact mechanics, composite material design, rigid body dynamics, robotics, etc. A good simulation requires that the dynamics of contact with friction be included in the formulation of the dynamic system, so that an approximation of the complex phenomena can be made. To solve these linear or nonlinear dynamic systems, which often have non-differentiable terms or discontinuities, software implementing such high-performance numerical methods, and computers with high computing power, are needed. This Special Issue is dedicated to this kind of mechanical structure and to describing the properties and methods of analysis of these structures. Discrete and continuous structures in static and dynamic cases are considered. Theoretical models, mathematical methods, and numerical analysis of these systems, such as the finite element method and experimental methods, are used in these studies. Machine building, automotive, aerospace and civil engineering are the main areas in which such applications appear, but they can also be found in most other engineering fields. With this Special Issue, we want to disseminate knowledge among researchers, designers, manufacturers and users in this exciting field.
6

Rodis, Panteleimon. "On defining and modeling context-awareness." International Journal of Pervasive Computing and Communications 14, no. 2 (June 4, 2018): 111–23. http://dx.doi.org/10.1108/ijpcc-d-18-00003.

Abstract:
Purpose: This paper aims to present a methodology for defining and modeling context-awareness and for efficiently describing the interactions between systems, applications and their context. The relation of modern context-aware systems to distributed computation is also investigated.
Design/methodology/approach: For this purpose, definitions of context and context-awareness are developed based on the theory of computation, and especially on a computational model for interactive computation that extends the classical Turing Machine model. The computational model proposed here encloses interaction and networking capabilities for computational machines.
Findings: The definition of context presented here develops a mathematical framework for working with context. The modeling approach of distributed computing enables us to build robust, scalable and detailed models for systems and applications with context-aware capabilities. It also enables us to map the procedures that support context-aware operations, providing detailed descriptions of the interactions of applications with their context and other external sources.
Practical implications: A case study of a cloud-based context-aware application is examined using the modeling methodology described in the paper, so as to demonstrate the practical usage of the theoretical framework that is presented.
Originality/value: The originality of the framework presented here lies in the connection of context-awareness with the theory of computation and distributed computing.
7

ALT, HELMUT, and LUDMILA SCHARF. "COMPUTING THE HAUSDORFF DISTANCE BETWEEN CURVED OBJECTS." International Journal of Computational Geometry & Applications 18, no. 04 (August 2008): 307–20. http://dx.doi.org/10.1142/s0218195908002647.

Abstract:
The Hausdorff distance between two sets of curves is a measure of the similarity of these objects and therefore an interesting feature in shape recognition. If the curves are algebraic, computing the Hausdorff distance involves computing the intersection points of the Voronoi edges of one set with the curves in the other. Since computing the Voronoi diagram of curves is quite difficult, we characterize those points algebraically and compute them using the computer algebra system SYNAPS. This paper describes in detail which points have to be considered, by what algebraic equations they are characterized, and how they are actually computed.
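
A rough point-sampled analogue (a sketch of ours, not the paper's algebraic method): once two curves are discretized into point sets, SciPy's directed Hausdorff distance approximates the quantity in question.

    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    t = np.linspace(0.0, 2.0 * np.pi, 400)
    curve_a = np.column_stack((np.cos(t), np.sin(t)))              # unit circle
    curve_b = np.column_stack((1.2 * np.cos(t), 0.9 * np.sin(t)))  # ellipse

    # The Hausdorff distance is the larger of the two directed distances.
    d_ab = directed_hausdorff(curve_a, curve_b)[0]
    d_ba = directed_hausdorff(curve_b, curve_a)[0]
    print("approximate Hausdorff distance:", max(d_ab, d_ba))
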
8

Lütgehetmann, Daniel, Dejan Govc, Jason P. Smith, and Ran Levi. "Computing Persistent Homology of Directed Flag Complexes." Algorithms 13, no. 1 (January 7, 2020): 19. http://dx.doi.org/10.3390/a13010019.

Abstract:
We present a new computing package Flagser, designed to construct the directed flag complex of a finite directed graph, and compute persistent homology for flexibly defined filtrations on the graph and the resulting complex. The persistent homology computation part of Flagser is based on the program Ripser by U. Bauer, but is optimised specifically for large computations. The construction of the directed flag complex is done in a way that allows easy parallelisation by arbitrarily many cores. Flagser also has the option of working with undirected graphs. For homology computations Flagser has an approximate option, which shortens compute time with remarkable accuracy. We demonstrate the power of Flagser by applying it to the construction of the directed flag complex of digital reconstructions of brain microcircuitry by the Blue Brain Project and several other examples. In some instances we perform computation of homology. For a more complete performance analysis, we also apply Flagser to some other data collections. In all cases the hardware used in the computation, the use of memory and the compute time are recorded.
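
To make the central object concrete, here is a small, unoptimised sketch of ours (not Flagser itself) that enumerates the ordered simplices of the directed flag complex of a loop-free digraph given by an adjacency matrix; a k-simplex is an ordered (k+1)-tuple of vertices with an edge from every earlier to every later vertex. The function name directed_flag_complex is our own.

    import numpy as np

    def directed_flag_complex(adj, max_dim):
        """Return simplices[d] = list of ordered (d+1)-tuples of vertices."""
        n = adj.shape[0]
        simplices = [[(v,) for v in range(n)]]
        for d in range(max_dim):
            nxt = []
            for s in simplices[-1]:
                # Extend s by any vertex w receiving an edge from every vertex of s.
                for w in range(n):
                    if all(adj[v, w] for v in s):
                        nxt.append(s + (w,))
            simplices.append(nxt)
        return simplices

    adj = np.array([[0, 1, 1],
                    [0, 0, 1],
                    [0, 0, 0]])   # edges 0 -> 1, 0 -> 2, 1 -> 2
    for d, cells in enumerate(directed_flag_complex(adj, 2)):
        print(d, cells)           # dimension 2 contains the full simplex (0, 1, 2)
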
9

Mitrović, Melanija, Mahouton Norbert Hounkonnou, and Marian Alexandru Baroni. "Theory of Constructive Semigroups with Apartness – Foundations, Development and Practice." Fundamenta Informaticae 184, no. 3 (February 15, 2022): 233–71. http://dx.doi.org/10.3233/fi-2021-2098.

Abstract:
This paper has several purposes. We present through a critical review the results from already published papers on the constructive semigroup theory, and contribute to its further development by giving solutions to open problems. We also draw attention to its possible applications in other (constructive) mathematics disciplines, in computer science, social sciences, economics, etc. Another important goal of this paper is to provide a clear, understandable picture of constructive semigroups with apartness in Bishop’s style both to (classical) algebraists and the ones who apply algebraic knowledge.
10

SOLGHAR, ALIREZA ARAB, and S. A. GANDJALIKHAN NASSAB. "THE INVESTIGATION OF THERMOHYDRODYNAMIC CHARACTERISTIC OF SINGLE AXIAL GROOVE JOURNAL BEARINGS WITH FINITE LENGTH BY USING CFD TECHNIQUES." International Journal of Computational Methods 10, no. 05 (May 2013): 1350031. http://dx.doi.org/10.1142/s021987621350031x.

Abstract:
The three-dimensional steady state thermohydrodynamic (THD) analysis of an axial grooved oil journal bearing is obtained theoretically. Navier–Stokes equations are solved simultaneously along with turbulent kinetic energy and its dissipation rate equations coupled with the energy equation in the lubricant flow and the heat conduction equation in the bush. The AKN low-Re κ–ε turbulence model is used to simulate the mean turbulent flow field. Considering the complexity of the physical geometry, conformal mapping is used to generate an orthogonal grid and the governing equations are transformed into the computational domain. Discretized forms of the transformed equations are obtained by the control volume method and solved by the SIMPLE algorithm. The numerical results of this analysis can be used to investigate the pressure distribution, volumetric oil flow rate and the loci of shaft in the journal bearings. To validate the computational results, comparison with the experimental and theoretical data of other investigators is made, and reasonable agreement is found.
11

Anderson, Stuart L. "Random Number Generators on Vector Supercomputers and Other Advanced Architectures." SIAM Review 32, no. 2 (June 1990): 221–51. http://dx.doi.org/10.1137/1032044.

12

Sánchez Couso, José Ramón, José Angel Sanchez Martín, Victor Mitrana, and Mihaela Păun. "Simulations between Three Types of Networks of Splicing Processors." Mathematics 9, no. 13 (June 28, 2021): 1511. http://dx.doi.org/10.3390/math9131511.

Abstract:
Networks of splicing processors (NSP for short) embody a subcategory among the new computational models inspired by natural phenomena, with the theoretical potential to handle computationally hard problems efficiently. The current literature considers three variants in the context of networks managed by random-context filters. Despite differences in system complexity and in the degree of control over the filters, the three variants were proved to hold the same computational power through simulations of two computationally complete systems: Turing machines and 2-tag systems. However, converting between the three models by means of a Turing machine is unattainable because of the huge computational costs incurred. This research paper addresses the issue by proposing direct and efficient simulations between the aforementioned paradigms. The information about the nodes and edges (i.e., splicing rules, random-context filters, and connections between nodes) composing any network of splicing processors belonging to one of the three categories is used to design equivalent networks working under the other two models. We demonstrate that these new networks are able to replicate any computational step performed by the original network in a constant number of computational steps and, consequently, we prove that any outcome achieved by the original architecture can be accomplished by the constructed architectures without worsening the time complexity.
13

Chillali, Abdelhakim. "R-prime numbers of degree k." Boletim da Sociedade Paranaense de Matemática 38, no. 2 (February 19, 2018): 75–82. http://dx.doi.org/10.5269/bspm.v38i2.38218.

Abstract:
In computer science, a one-way function is a function that is easy to compute on every input but hard to invert given the image of a random input. Here, "easy" and "hard" are to be understood in the sense of computational complexity theory, specifically the theory of polynomial-time problems. Not being one-to-one is not considered sufficient for a function to be called one-way (see the theoretical definition in the article). A twin prime is a prime number that has a prime gap of two; in other words, it differs from another prime number by two, as in the twin prime pair (5, 3). The question of whether there exist infinitely many twin primes has been one of the great open questions in number theory for many years. This is the content of the twin prime conjecture, which states: there are infinitely many primes p such that p + 2 is also prime. In this work we define a new notion, the 'r-prime number of degree k', and we give a new RSA trap-door one-way function. This notion generalizes twin primes, because twin primes are 2-prime numbers of degree 1.
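
The paper's general definition of an 'r-prime number of degree k' is not reproduced in the abstract, but the special case it generalizes is easy to state in code; a minimal sketch of ours listing twin prime pairs (p, p + 2):

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    # Twin primes: primes p such that p + 2 is also prime.
    twins = [(p, p + 2) for p in range(2, 200) if is_prime(p) and is_prime(p + 2)]
    print(twins)   # starts (3, 5), (5, 7), (11, 13), ...
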
14

Getty, Daniel, Hao Li, Masayuki Yano, Charles Gao, and A. E. Hosoi. "Luck and the Law: Quantifying Chance in Fantasy Sports and Other Contests." SIAM Review 60, no. 4 (January 2018): 869–87. http://dx.doi.org/10.1137/16m1102094.

15

del Alamo, Miguel, Housen Li, Axel Munk, and Frank Werner. "Variational Multiscale Nonparametric Regression: Algorithms and Implementation." Algorithms 13, no. 11 (November 13, 2020): 296. http://dx.doi.org/10.3390/a13110296.

Abstract:
Many modern statistically efficient methods come with tremendous computational challenges, often leading to large-scale optimisation problems. In this work, we examine such computational issues for recently developed estimation methods in nonparametric regression, with a specific view on image denoising. We consider in particular certain variational multiscale estimators which are statistically optimal in the minimax sense, yet computationally intensive. Such an estimator is computed as the minimiser of a smoothness functional (e.g., the TV norm) over the class of all estimators such that none of its coefficients with respect to a given multiscale dictionary is statistically significant. The resulting multiscale Nemirovski-Dantzig estimator (MIND) can incorporate any convex smoothness functional and combine it with a proper dictionary including wavelets, curvelets and shearlets. The computation of MIND in general requires solving a high-dimensional constrained convex optimisation problem with a specific structure of the constraints induced by the statistical multiscale testing criterion. To solve this explicitly, we discuss three different algorithmic approaches: the Chambolle-Pock, ADMM and semismooth Newton algorithms. Algorithmic details and an explicit implementation are presented, and the solutions are then compared numerically in a simulation study and on various test images. We thereby recommend the Chambolle-Pock algorithm in most cases for its fast convergence. We stress that our analysis can also be transferred to signal recovery and other denoising problems to recover more general objects, whenever it is possible to borrow statistical strength from data patches of similar object structure.
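
The estimators discussed here are specialised, but the TV-smoothness building block is available off the shelf; as a hedged illustration of TV-regularised image denoising in the same spirit (using Chambolle's algorithm from scikit-image, not the MIND estimator of the paper):

    import numpy as np
    from skimage import data
    from skimage.restoration import denoise_tv_chambolle

    image = data.camera().astype(float) / 255.0
    noisy = image + 0.1 * np.random.default_rng(0).standard_normal(image.shape)

    # Larger weight => stronger total-variation smoothing.
    denoised = denoise_tv_chambolle(noisy, weight=0.1)
    print(noisy.std(), denoised.std())
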
16

Kinne, Jeff, and Dieter van Melkebeek. "Space Hierarchy Results for Randomized and other Semantic Models." computational complexity 19, no. 3 (November 6, 2009): 423–75. http://dx.doi.org/10.1007/s00037-009-0277-1.

17

Lori, Nicolas F., Jos Neves, Alex H. Blin, and Victor Alves. "Some considerations on quantum computing at sub-atomic scales and its impact in the future of Moore's law." Quantum Information and Computation 20, no. 1&2 (February 2020): 1–13. http://dx.doi.org/10.26421/qic20.1-2-1.

Abstract:
The contemporary development of Quantum Computers has opened new possibilities for computation improvements, but the limits of Moore's law validity are starting to show. We analyze here the possibility that miniaturization will continue to be the source of Moore's law validity in the near future, and our conclusion is that miniaturization is no longer a reliable answer for the future development of computer science, but instead we suggest that lateralization is the correct approach. By lateralization, we mean the use of biology as the correct format for the implementation of ubiquitous computerized systems, a format that might in many circumstances eschew miniaturization as an overly expensive useless advantage whereas in other cases miniaturization might play a key role. Thus, the future of computer science is not towards a miniaturization that goes from the atom-scale (its present application scale) towards the nucleus-scale, but rather in developing more integrated circuits at the micrometer to nanometer scale, so as to better mimic and interact with biological systems. We analyze some "almost sci-fi" approaches to the development of better computer systems near the Bekenstein bound limit, and unsurprisingly they fail to have any realistic feasibility. Then, we use the difference between the classical vs. quantum version of the Hammerstein-Clifford theorem to explain why biological systems eschewed quantum computation to represent the world but have chosen classical computation instead. Finally, we analyze examples of recent work which indicate future possibilities of integration between computers and biological systems. As a corollary of that choice by the biological systems, we propose that the predicted lateralization-driven evolution in computer science will not be based in quantum computers, but rather in classical computers.
18

Book, Ronald V. "Relativizations of the P =? NP and Other Problems: Developments in Structural Complexity Theory." SIAM Review 36, no. 2 (June 1994): 157–75. http://dx.doi.org/10.1137/1036051.

19

Benim, Ali Cemal. "Numerical Calculation of Sink Velocities for Helminth Eggs in Water." Computation 9, no. 12 (December 10, 2021): 136. http://dx.doi.org/10.3390/computation9120136.

Abstract:
The settling velocities of helminth eggs of three types, namely Ascaris suum (ASC), Trichuris suis (TRI), and Oesophagostomum spp. (OES), in clean tap water are determined by means of computational fluid dynamics, using the general-purpose CFD software ANSYS Fluent 18.0. Previous measurements by other authors are taken as the basis for the problem formulation and validation, whereby the latter is performed by comparing the predicted sink velocities with those measured in an Owen tube. To enable a computational treatment, the measured shapes of the eggs are parametrized by idealizing them in terms of elementary geometric forms. As the egg shapes vary within each class, "mean" shapes are considered. The sink velocities are obtained through the computationally determined drag coefficients, which are defined by means of steady-state calculations. The predicted sink velocities are compared with the measured ones. The calculated values show better agreement with the measurements for ASC and TRI than the theoretical sink velocities given by Stokes' theory. However, the agreement is still not very satisfactory, indicating the role of further parameters, such as uncertainties in the characterization of the egg shapes or flocculation effects even in clean tap water.
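
For reference, the Stokes settling velocity against which such CFD predictions are compared follows, for a small sphere in creeping flow, from v_s = (rho_p - rho_f) g d^2 / (18 mu); a quick sketch of ours with illustrative (not the paper's) parameter values:

    def stokes_sink_velocity(d, rho_p, rho_f, mu, g=9.81):
        """Terminal settling velocity (m/s) of a small sphere, Stokes regime."""
        return (rho_p - rho_f) * g * d ** 2 / (18.0 * mu)

    # Hypothetical particle: 60 micrometres across, density 1100 kg/m^3,
    # settling in water (1000 kg/m^3, dynamic viscosity 1.0e-3 Pa s).
    v = stokes_sink_velocity(d=60e-6, rho_p=1100.0, rho_f=1000.0, mu=1.0e-3)
    print(f"{v * 1000:.3f} mm/s")   # about 0.196 mm/s
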
20

Bacanin, Nebojsa, Timea Bezdan, Eva Tuba, Ivana Strumberger, and Milan Tuba. "Optimizing Convolutional Neural Network Hyperparameters by Enhanced Swarm Intelligence Metaheuristics." Algorithms 13, no. 3 (March 17, 2020): 67. http://dx.doi.org/10.3390/a13030067.

Abstract:
Computer vision is one of the most frontier technologies in computer science. It is used to build artificial systems to extract valuable information from images and has a broad range of applications in various areas such as agriculture, business, and healthcare. Convolutional neural networks represent the key algorithms in computer vision, and in recent years, they have attained notable advances in many real-world problems. The accuracy of the network for a particular task profoundly relies on the hyperparameters’ configuration. Obtaining the right set of hyperparameters is a time-consuming process and requires expertise. To approach this concern, we propose an automatic method for hyperparameters’ optimization and structure design by implementing enhanced metaheuristic algorithms. The aim of this paper is twofold. First, we propose enhanced versions of the tree growth and firefly algorithms that improve the original implementations. Second, we adopt the proposed enhanced algorithms for hyperparameters’ optimization. First, the modified metaheuristics are evaluated on standard unconstrained benchmark functions and compared to the original algorithms. Afterward, the improved algorithms are employed for the network design. The experiments are carried out on the famous image classification benchmark dataset, the MNIST dataset, and comparative analysis with other outstanding approaches that were tested on the same problem is conducted. The experimental results show that both proposed improved methods establish higher performance than the other existing techniques in terms of classification accuracy and the use of computational resources.
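
As a rough sketch of the kind of metaheuristic involved (the basic firefly algorithm on a standard unconstrained benchmark, not the enhanced variants proposed in the paper; all parameter values here are illustrative):

    import numpy as np

    def firefly(objective, dim=5, n=20, iters=100, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5.0, 5.0, size=(n, dim))   # firefly positions
        f = np.array([objective(p) for p in x])     # lower cost = brighter
        for _ in range(iters):
            for i in range(n):
                for j in range(n):
                    if f[j] < f[i]:                 # move i toward brighter j
                        r2 = np.sum((x[i] - x[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)
                        x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(size=dim)
                        f[i] = objective(x[i])
        best = np.argmin(f)
        return x[best], f[best]

    sphere = lambda p: float(np.sum(p ** 2))        # classic benchmark function
    pos, val = firefly(sphere)
    print("best value found:", val)
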
21

Gavrilyuk, Ivan, and Boris N. Khoromskij. "Tensor Numerical Methods: Actual Theory and Recent Applications." Computational Methods in Applied Mathematics 19, no. 1 (January 1, 2019): 1–4. http://dx.doi.org/10.1515/cmam-2018-0014.

Abstract:
The most important computational problems nowadays are those related to the processing of large data sets and to the numerical solution of high-dimensional integral-differential equations. These problems arise in numerical modeling in quantum chemistry, material science, and multiparticle dynamics, as well as in machine learning, computer simulation of stochastic processes and many other applications related to big data analysis. Modern tensor numerical methods enable the solution of multidimensional partial differential equations (PDEs) in $\mathbb{R}^d$ by reducing them to one-dimensional calculations. They thus allow one to avoid the so-called "curse of dimensionality", i.e. the exponential growth of the computational complexity in the dimension d, in the course of the numerical solution of high-dimensional problems. At present, both tensor numerical methods and the multilinear algebra of big data continue to expand actively into further theoretical and applied research topics. This issue of CMAM is devoted to recent developments in the theory of tensor numerical methods and their applications in scientific computing and data analysis. Current activities in this emerging field on the effective numerical modeling of temporal and stationary multidimensional PDEs and beyond are presented in the following ten articles, and some future trends are highlighted therein.
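
A toy two-dimensional illustration of the idea behind such methods (low-rank separation of variables, shown here of our own devising with an SVD rather than the tensor formats of the papers in this issue): a separable function sampled on a grid yields a rank-one matrix, so O(n^2) values compress to O(n).

    import numpy as np

    n = 200
    x = np.linspace(0.0, 1.0, n)
    y = np.linspace(0.0, 1.0, n)
    # f(x, y) = exp(-x) * sin(y) is separable, so the sample matrix has rank 1.
    F = np.exp(-x)[:, None] * np.sin(y)[None, :]

    u, s, vt = np.linalg.svd(F)
    print("singular values:", s[:3])            # only the first is non-negligible
    F1 = s[0] * np.outer(u[:, 0], vt[0])        # rank-1 reconstruction
    print("max error:", np.abs(F - F1).max())   # ~ machine precision
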
22

Castiglione, F., M. Bernaschi, and S. Succi. "Simulating the Immune Response on a Distributed Parallel Computer." International Journal of Modern Physics C 08, no. 03 (June 1997): 527–45. http://dx.doi.org/10.1142/s0129183197000424.

Abstract:
The application of ideas and methods of statistical mechanics to problems of biological relevance is one of the most promising frontiers of theoretical and computational mathematical physics [1,2]. Among others, the computer simulation of immune system dynamics stands out as one of the prominent candidates for this type of investigation. In recent years immunological research has been drawing increasing benefits from the resort to advanced mathematical modeling on modern computers [3,4]. Among others, Cellular Automata (CA), i.e., fully discrete dynamical systems evolving according to boolean laws, appear to be extremely well suited to computer simulation of biological systems [5]. A prominent example of an immunological CA is represented by the Celada–Seiden automaton, which has proven capable of providing several new insights into the dynamics of the immune system response. To date, the Celada–Seiden automaton was not in a position to exploit the impressive advances of computer technology, and notably parallel processing, simply because no parallel version of this automaton had been developed yet. In this paper we fill this gap and describe a parallel version of the Celada–Seiden cellular automaton aimed at simulating the dynamic response of the immune system. Details on the parallel implementation as well as performance data on the IBM SP2 parallel platform are presented and commented on.
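
For readers unfamiliar with the model, a minimal one-dimensional boolean cellular automaton (a generic toy of ours, unrelated to the Celada–Seiden automaton) can be written in a few lines:

    import numpy as np

    def step(cells, rule=110):
        """One synchronous update of an elementary boolean CA (periodic boundary)."""
        left, right = np.roll(cells, 1), np.roll(cells, -1)
        idx = 4 * left + 2 * cells + right   # 3-cell neighbourhood as a 3-bit index
        table = (rule >> np.arange(8)) & 1   # rule number -> boolean lookup table
        return table[idx]

    cells = np.zeros(64, dtype=int)
    cells[32] = 1
    for _ in range(5):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)
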
23

Wocjan, P., D. Janzing, and T. Beth. "Simulating arbitrary pair-interactions by a given Hamiltonian: graph-theoretical bounds on the time-complexity." Quantum Information and Computation 2, no. 2 (February 2002): 117–32. http://dx.doi.org/10.26421/qic2.2-2.

Abstract:
We consider a quantum computer consisting of n spins with an arbitrary but fixed pair-interaction Hamiltonian and describe how to simulate other pair-interactions by interspersing the natural time evolution with fast local transformations. Calculating the minimal time overhead of such a simulation leads to a convex optimization problem. Lower and upper bounds on the minimal time overhead are derived in terms of chromatic indices of interaction graphs and spectral majorization criteria. These results classify Hamiltonians with respect to their computational power. For a specific Hamiltonian, namely $\sigma_z \otimes \sigma_z$ interactions between all spins, the optimization is mathematically equivalent to a separability problem of n-qubit density matrices. We compare the complexity defined by such a quantum computer with the usual gate complexity.
24

Strozecki, Yann. "On Enumerating Monomials and Other Combinatorial Structures by Polynomial Interpolation." Theory of Computing Systems 53, no. 4 (January 24, 2013): 532–68. http://dx.doi.org/10.1007/s00224-012-9442-z.

25

Ben-Amram, A. M., B. A. Julstrom, and U. Zwick. "A Note on Busy Beavers and Other Creatures." Mathematical Systems Theory 29, no. 4 (1996): 375. http://dx.doi.org/10.1007/s002240000021.

26

Ben-Amram, A. M., B. A. Julstrom, and U. Zwick. "A note on busy beavers and other creatures." Mathematical Systems Theory 29, no. 4 (August 1996): 375–86. http://dx.doi.org/10.1007/bf01192693.

27

Wolf, Wayne. "Complexity Issues in VLSI—Optimal Layouts for the Shuffle-Exchange Graph and Other Networks (Frank Thomson Leighton)." SIAM Review 28, no. 2 (June 1986): 287–88. http://dx.doi.org/10.1137/1028099.

28

Castonguay, Diane, Elisangela Silva Dias, Fernanda Neiva Mesquita, and Julliano Rosa Nascimento. "Computing some role assignments of Cartesian product of graphs." RAIRO - Operations Research 56, no. 1 (January 2022): 115–21. http://dx.doi.org/10.1051/ro/2021192.

Abstract:
In social networks, a role assignment is such that individuals play the same role if they relate in the same way to other individuals playing counterpart roles. When a smaller graph models the social roles in a network, this gives rise to the decision problem r-Role Assignment: does there exist such an assignment of r distinct roles to the vertices of the graph? This problem is known to be NP-complete for any fixed r ≥ 2. The Cartesian product of graphs is one of the most studied operations on graphs and has numerous applications in diverse areas, such as mathematics, computer science, chemistry and biology. In this paper, we determine the computational complexity of r-Role Assignment restricted to Cartesian products of graphs, for r = 2, 3. In fact, we show that the Cartesian product of graphs is always 2-role assignable, whereas 3-Role Assignment remains NP-complete for this class.
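
Under the standard graph-theoretic reading of this definition (our formulation, not the paper's algorithm), an assignment r of roles is valid when, for every vertex v, the set of roles appearing in v's neighbourhood equals the neighbourhood of r(v) in the role graph; a small validity checker:

    def is_role_assignment(graph, role_graph, r):
        """graph, role_graph: dicts mapping each vertex to its set of neighbours;
        r: dict mapping each vertex of graph to a vertex of role_graph."""
        for v, neigh in graph.items():
            if {r[u] for u in neigh} != role_graph[r[v]]:
                return False
        return True

    # A 4-cycle maps onto a single edge {0, 1} by alternating the two roles.
    c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    k2 = {0: {1}, 1: {0}}
    print(is_role_assignment(c4, k2, {0: 0, 1: 1, 2: 0, 3: 1}))   # True
    print(is_role_assignment(c4, k2, {0: 0, 1: 0, 2: 0, 3: 0}))   # False
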
29

Joseph, Deborah, and Paul Young. "A Survey of Some Recent Results on Computational Complexity in Weak Theories of Arithmetic." Fundamenta Informaticae 8, no. 1 (January 1, 1985): 103–21. http://dx.doi.org/10.3233/fi-1985-8109.

Abstract:
In spite of the fact that much effort has been expended trying to prove lower bounds for algorithms and trying to solve the P = NP question, only limited progress has been made. Although most computer scientists remain convinced that solutions will be found, others (Hartmanis and Hopcroft, Fortune, Leivant and O’Donnell, and Phillips) have questioned the adequacy of Peano arithmetic for computer science. This uncertainty has only been increased by the recent work of Paris and Harrington showing that certain simple, finitistic, combinatorial statements are in fact independent of Peano Arithmetic. In this paper we survey complexity theoretic statements that are known to be independent of arithmetic theories. In addition we survey recent results analyzing the arithmetic quantifier structure of computational problems.
30

Raman, Venkatesh, Saket Saurabh, and Somnath Sikdar. "Efficient Exact Algorithms through Enumerating Maximal Independent Sets and Other Techniques." Theory of Computing Systems 41, no. 3 (October 2007): 563–87. http://dx.doi.org/10.1007/s00224-007-1334-2.

31

Raussendorf, Robert. "Cohomological framework for contextual quantum computations." Quantum Information and Computation 19, no. 13&14 (November 2019): 1141–70. http://dx.doi.org/10.26421/qic19.13-14-4.

Abstract:
We describe a cohomological framework for measurement-based quantum computation in which symmetry plays a central role. Therein, the essential information about the computation is contained in either of two topological invariants, namely two cohomology groups. One of them applies only to deterministic quantum computations, and the other to general probabilistic ones. Those invariants characterize the computational output, and at the same time witness quantumness in the form of contextuality. As a result, they give rise to fundamental algebraic structures underlying quantum computation.
32

JIANG, D., and N. F. STEWART. "FLOATING-POINT ARITHMETIC FOR COMPUTATIONAL GEOMETRY PROBLEMS WITH UNCERTAIN DATA." International Journal of Computational Geometry & Applications 19, no. 04 (August 2009): 371–85. http://dx.doi.org/10.1142/s0218195909003015.

Abstract:
It has been suggested in the literature that ordinary finite-precision floating-point arithmetic is inadequate for geometric computation, and that researchers in numerical analysis may believe that the difficulties of error in geometric computation can be overcome by simple approaches. It is the purpose of this paper to show that these suggestions, based on an example showing failure of a certain algorithm for computing planar convex hulls, are misleading, and why this is so. It is first shown how the now-classical backward error analysis can be applied in the area of computational geometry. This analysis is relevant in the context of uncertain data, which may well be the practical context for computational-geometry algorithms such as, say, those for computing convex hulls. The exposition will illustrate the fact that the backward error analysis does not pretend to overcome the problem of finite precision: it merely provides a way to distinguish those algorithms that overcome the problem to whatever extent it is possible to do so. It is then shown that often the situation in computational geometry is exactly parallel to other areas, such as the numerical solution of linear equations, or the algebraic eigenvalue problem. Indeed, the example mentioned can be viewed simply as an example of the use of an unstable algorithm, for a problem for which computational geometry has already discovered provably stable algorithms. Finally, the paper discusses the implications of these analyses for applications in three-dimensional solid modeling. This is done by considering a problem defined in terms of a simple extension of the planar convex-hull algorithm, namely, the verification of the well-formedness of extruded objects. A brief discussion concerning more difficult problems in solid modeling is also included.
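
The kind of failure at issue can be reproduced with the planar orientation predicate; a small experiment of ours (not from the paper) compares double-precision signs against exact rational arithmetic for nearly collinear triples, where the floating-point test may wrongly report collinearity:

    from fractions import Fraction
    import random

    def orient(ax, ay, bx, by, cx, cy):
        """Sign of the cross product (b - a) x (c - a)."""
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

    def sign(v):
        return (v > 0) - (v < 0)

    random.seed(0)
    disagree = 0
    for _ in range(10000):
        # c is perturbed by a few ulps around a point on the line through a and b.
        ax, ay, bx, by = 0.5, 0.5, 12.1, 12.1
        cx = 24.2 + random.randint(-5, 5) * 2.0 ** -48
        cy = 24.2 + random.randint(-5, 5) * 2.0 ** -48
        s_float = sign(orient(ax, ay, bx, by, cx, cy))
        s_exact = sign(orient(*map(Fraction, (ax, ay, bx, by, cx, cy))))
        disagree += s_float != s_exact
    print("disagreements out of 10000:", disagree)   # typically nonzero
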
33

Михеевская, М.А., Бурмистрова, Д.Д., Стородубцева, Т.Н., Швецова, В.В., Рябухин, П.Б., Григорьев, Г.В., and Иготти, М.М. "Theoretical study of wood waste briquetting taking into account nonlinear strengthening of the raw materials." Известия СПбЛТА, no. 240 (December 11, 2022): 175–85. http://dx.doi.org/10.21266/2079-4304.2022.240.175-185.

Abstract:
The study is aimed at developing the theoretical description of the process of briquetting wood raw materials. The purpose of this article is to unify the approach to building a model for pressing a wood material as an elastic-viscous-plastic body and to consider its hardening in accordance with the power-law dependence of the plasticity limit on density. The study is based on the analysis of results obtained earlier in the field of mathematical modeling of the pressing of hardening wood materials. The implementation of the proposed dependencies is carried out using methods for the approximate solution of differential equations in the Maple computer mathematics system. The developed unified model makes it possible to predict the stress that occurs in the briquetted wood material and, therefore, to estimate the pressing pressure required to form a briquette with a given strength. An important feature of the proposed model is that replacing the equation for the plasticity limit, or its parameters, does not require a revision of the model's entire structure. It has been established that the numerical solution of the differential equation relating stress and strain of an elastic-viscous-plastic body, including a nonlinear dependence of the ultimate strength, did not cause computational difficulties in the modern Maple computer mathematics system. The prospect for further research is to conduct additional experiments in order to refine the nonlinear dependence of the plasticity limit on density and on other physical, mechanical and qualitative parameters of the processed raw materials.
34

ARABSHAHI, H., and M. R. BENAM. "A SHOCK-CAPTURING UPWIND DISCRETIZATION METHOD FOR CHARACTERIZATION OF SiC MESFETs." International Journal of Computational Methods 05, no. 02 (June 2008): 341–49. http://dx.doi.org/10.1142/s0219876208001509.

Abstract:
A finite difference shock-capturing upwind discretization method in two dimensions is presented in detail for simulation of homogeneous and nonhomogeneous devices. The model is based on the solutions to the highly coupled nonlinear partial differential equations of the full hydrodynamic model. These solutions allow one to calculate the electron drift velocity and other device parameters as a function of the applied electric field. The hydrodynamic model is able to describe inertia effects which play an increasing role in different fields of micro- and optoelectronics where simplified charge transport models like the drift-diffusion model and the energy balance model are no longer applicable. Results of numerical simulations are shown for a two-dimensional SiC MESFET device, and are in fair agreement with other theoretical or experimental methods.
35

Georgiou, Konstantinos, Christos Makris, and Georgios Pispirigos. "A Distributed Hybrid Community Detection Methodology for Social Networks." Algorithms 12, no. 8 (August 17, 2019): 175. http://dx.doi.org/10.3390/a12080175.

Abstract:
Nowadays, the amount of digitally available information has grown tremendously, with real-world data graphs reaching millions or even billions of vertices. Hence, community detection, where groups of vertices are formed according to a well-defined similarity measure, has never been more essential, affecting a vast range of scientific fields such as bio-informatics, sociology, discrete mathematics, nonlinear dynamics, digital marketing, and computer science. Even though an impressive amount of research has been published to tackle this NP-hard problem, the existing methods and algorithms have proven inefficient and severely unscalable. In this regard, the purpose of this manuscript is to combine the network topology properties expressed by loose similarity and local edge betweenness, a newly proposed alternative to the Girvan–Newman edge betweenness measure, with intrinsic user content information, in order to introduce a novel and highly distributed hybrid community detection methodology. The proposed approach has been thoroughly tested on various real social graphs, roundly compared to classic divisive community detection algorithms that serve as baselines, and practically proven exceptionally scalable, highly efficient, and adequately accurate in terms of revealing the subjacent network hierarchy.
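
For comparison, the classical divisive baseline mentioned above is available in NetworkX; a quick sketch of running Girvan–Newman community detection on a small standard graph (the baseline, not the proposed distributed methodology):

    import networkx as nx
    from networkx.algorithms.community import girvan_newman

    G = nx.karate_club_graph()        # standard small social network
    splits = girvan_newman(G)         # iterator over successive divisive splits
    first_split = next(splits)        # communities after removing the first edges
    print([sorted(c) for c in first_split])
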
36

Becker, Florent, Tom Besson, Jérôme Durand-Lose, Aurélien Emmanuel, Mohammad-Hadi Foroughmand-Araabi, Sama Goliaei, and Shahrzad Heydarshahi. "Abstract Geometrical Computation 10." ACM Transactions on Computation Theory 13, no. 1 (March 2021): 1–31. http://dx.doi.org/10.1145/3442359.

Abstract:
Signal machines form an abstract and idealized model of collision computing. Based on dimensionless signals moving on the real line, they model particle/signal dynamics in Cellular Automata. Each particle, or signal, moves at constant speed in continuous time and space. When signals meet, they get replaced by other signals. A signal machine defines the types of available signals, their speeds, and the rules for replacement in collision. A signal machine A simulates another one B if all the space-time diagrams of B can be generated from space-time diagrams of A by removing some signals and renaming other signals according to local information. Given any finite set of speeds S we construct a signal machine that is able to simulate any signal machine whose speeds belong to S. Each signal is simulated by a macro-signal, a ray of parallel signals. Each macro-signal has a main signal located exactly where the simulated signal would be, as well as auxiliary signals that encode its id and the collision rules of the simulated machine. The simulation of a collision, a macro-collision, consists of two phases. In the first phase, macro-signals are shrunk, and then the macro-signals involved in the collision are identified and it is ensured that no other macro-signal comes too close. If some do, the process is aborted and the macro-signals are shrunk, so that the correct macro-collision will eventually be restarted and successfully initiated. Otherwise, the second phase starts: the appropriate collision rule is found and new macro-signals are generated accordingly. Considering all finite sets of speeds S and their corresponding simulators provides an intrinsically universal family of signal machines.
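
A bare-bones event-driven simulator for the model (dimensionless signals at constant speeds, replaced at collisions by a rule table) can be sketched as follows; this is a generic illustration of ours handling two-signal collisions of distinct kinds, not the simulation construction of the paper:

    def simulate(signals, rules, t_end):
        """signals: list of (kind, position, speed); rules: dict mapping a
        frozenset of colliding kinds to the list of (kind, speed) produced."""
        t = 0.0
        while True:
            # Find the earliest future pairwise meeting.
            best = None
            for i, (ka, xa, sa) in enumerate(signals):
                for kb, xb, sb in signals[i + 1:]:
                    if sa != sb:
                        dt = (xb - xa) / (sa - sb)
                        if dt > 1e-12 and (best is None or dt < best[0]):
                            best = (dt, xa + sa * dt, frozenset({ka, kb}))
            if best is None or t + best[0] > t_end:
                break
            dt, x, kinds = best
            t += dt
            # Advance everything, then replace the colliding pair by the rule output.
            signals = [(k, p + s * dt, s) for k, p, s in signals
                       if not (k in kinds and abs(p + s * dt - x) < 1e-9)]
            signals += [(k, x, s) for k, s in rules.get(kinds, [])]
            print(f"t={t:.2f}: collision {set(kinds)} at x={x:.2f}")
        return signals

    rules = {frozenset({"a", "b"}): [("c", 0.0)]}   # a meets b -> one still signal c
    simulate([("a", 0.0, 1.0), ("b", 4.0, -1.0)], rules, t_end=10.0)
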
37

Chuang, Huan-Ming, and Chia-Cheng Lee. "Effects of Personal Construal Levels and Team Role Ambiguity on the Group Investigation of Junior High School Students’ Programming Ability." Sustainability 13, no. 19 (October 2, 2021): 10977. http://dx.doi.org/10.3390/su131910977.

Abstract:
Concerns regarding the high demand for skilled personnel in the science, technology, engineering, and mathematics (STEM) fields underline the importance of developing advanced information technology (IT) and programming skills among job candidates. In the past 10 years, computer programming has regained considerable attention because of rapid developments in computer programming technology. Advocates claim that computer programming cultivates other skills, including problem solving, logical thinking, and creativity. Education systems worldwide are developing courses to instruct students in programming and computational thinking. Although the importance of computer programming has been widely recognized, the systematic evaluation of the effectiveness of teaching methods and conditions that promote the learning of programming knowledge and skills has received little scholarly attention. This study thus investigated the moderating roles of learners’ construal levels and their team role ambiguity in the context of group investigation in junior high school programing courses. In this study, junior high school students were divided into pairs to develop Arduino projects. Students applied programming abilities to complete a task involving the use of Arduino boards to simulate the operation of traffic lights. Major research findings indicate that construal levels play a significant role in moderating the relationship between programming ability and learning outcome; however, role ambiguity does not significantly affect this relationship. Theoretical implications are discussed, and managerial implications are suggested.
38

Wu, Keke, Babatunde Oluwaseun Onasanya, Longzhou Cao, and Yuming Feng. "Impulsive Control of Some Types of Nonlinear Systems Using a Set of Uncertain Control Matrices." Mathematics 11, no. 2 (January 13, 2023): 421. http://dx.doi.org/10.3390/math11020421.

Abstract:
Many real-life problems, ranging from medicine and agriculture to biology and finance, are modelled by nonlinear systems. In this paper, a chaotic nonlinear system is considered and, as opposed to solving a Linear Matrix Inequality (LMI), which is the usual but cumbersome approach, a completely different approach is used. In some other cases the computation of matrix singular values is used, but the method in this study needs no such computation. In addition, most models, if not all, concentrate on finding a control matrix J under some sufficient conditions. The problem is that only one such matrix J is provided. In reality, the actual control quantity may deviate slightly from the theoretical J. Hence, this paper provides an infinite set of uncertain matrices Jα which are able to adapt to control the system under uncertain conditions. It turns out that this new method controls the system in a shorter time and with less computational complexity.
39

BOLLOBÁS, BÉLA, IMRE LEADER, and CLAUDIA MALVENUTO. "Daisies and Other Turán Problems." Combinatorics, Probability and Computing 20, no. 5 (August 18, 2011): 743–47. http://dx.doi.org/10.1017/s0963548311000319.

Abstract:
Our aim in this note is to make some conjectures about extremal densities of daisy-free families, where a ‘daisy’ is a certain hypergraph. These questions turn out to be related to some Turán problems in the hypercube, but they are also natural in their own right. We start by giving the daisy conjectures, and some related problems, and shall then go on to describe the connection with vertex-Turán problems in the hypercube.
40

MU, DAN, JIAN-QUAN LI, and YI-HAN ZHOU. "THEORETICAL RESEARCH ON THE REACTION MECHANISM OF O + HCNO." Journal of Theoretical and Computational Chemistry 09, no. 05 (October 2010): 945–62. http://dx.doi.org/10.1142/s0219633610006110.

Abstract:
We apply a theoretical method to study the O + HCNO reaction, in which the products P1, P2, …, P8 are involved. The study is carried out by means of the CCSD(T)/6-311G(d,p)//B3LYP/6-311G(d,p) + ZPVE computational method to identify a set of reasonable pathways. It shows that P5 (H + NO + CO) and P8 (HON + CO) are both major product channels; in addition, P1 (HCO + NO) contributes to some degree to the formation of P5 (H + NO + CO). P4 (NH + CO2) is considered a minor product channel, with some contribution from P2 (HNO + CO) to the formation of P4 (NH + CO2), whereas the other three channels, P3 (NCO + OH), P6 (CNO + OH), and P7 (HCN + O2), are less favorable or even unfavorable. All these theoretical results are in harmony with the experimental facts.
41

Itskovich, Elizabeth J., and Vadim E. Levit. "What Do a Longest Increasing Subsequence and a Longest Decreasing Subsequence Know about Each Other?" Algorithms 12, no. 11 (November 7, 2019): 237. http://dx.doi.org/10.3390/a12110237.

Abstract:
As a kind of converse of the celebrated Erdős–Szekeres theorem, we present a necessary and sufficient condition for a sequence of length n to contain a longest increasing subsequence and a longest decreasing subsequence of given lengths x and y, respectively.
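
For concreteness, the two lengths the theorem relates can be computed in O(n log n) time by patience sorting; a short sketch of ours using the classical algorithm:

    from bisect import bisect_left

    def lis_length(seq):
        """Length of a longest strictly increasing subsequence (patience sorting)."""
        tails = []   # tails[k] = least tail of an increasing subsequence of length k+1
        for v in seq:
            k = bisect_left(tails, v)
            if k == len(tails):
                tails.append(v)
            else:
                tails[k] = v
        return len(tails)

    def lds_length(seq):
        """Length of a longest strictly decreasing subsequence."""
        return lis_length([-v for v in seq])

    s = [3, 1, 4, 1, 5, 9, 2, 6]
    print(lis_length(s), lds_length(s))   # prints: 4 2
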
42

CHEN, DANNY Z., and EWA MISIOŁEK. "FINDING MANY OPTIMAL PATHS WITHOUT GROWING ANY OPTIMAL PATH TREES." International Journal of Computational Geometry & Applications 20, no. 04 (August 2010): 449–69. http://dx.doi.org/10.1142/s0218195910003384.

Abstract:
Many algorithms for applications such as pattern recognition, computer vision, and computer graphics seek to compute actual optimal paths in weighted directed graphs. The standard approach for reporting an actual optimal path is based on building a single-source optimal path tree. A technique by Chen et al. [2] was given for a class of problems such that a single actual optimal path can be reported without maintaining any single-source optimal path tree, thus significantly reducing the space bound of those problems with no or little increase in their running time. In this paper, we extend the technique of Chen et al. [2] to the generalized problem of reporting many actual optimal paths with different starting and ending vertices in certain directed graphs, and show how this new technique yields improved results on several application problems, such as reconstructing a 3-D surface band bounded by two simple closed curves, finding various constrained segmentations of 2-D medical images, and circular string-to-string correction. We also correct an error in the time/space complexity of the well-cited circular string-to-string correction algorithm [12] and give an improved result for this problem. Although the generalized many-path problem seems more difficult than the single-path cases, our algorithms have nearly the same space and time bounds as those of the single-path cases. Our technique is likely to help improve many other optimal path or dynamic programming algorithms.
43

STEWART, A. JAMES. "LOCAL ROBUSTNESS AND ITS APPLICATION TO POLYHEDRAL INTERSECTION." International Journal of Computational Geometry & Applications 04, no. 01 (March 1994): 87–118. http://dx.doi.org/10.1142/s0218195994000070.

Abstract:
The field of solid modeling makes extensive use of a variety of geometric algorithms. When implemented on a computer, these algorithms often fail because the computer provides only floating-point arithmetic, while the algorithms expect infinite-precision arithmetic on real numbers. These algorithms are not robust. This paper presents a formal theory of robustness. It is then argued that the elegant theoretical approach to robustness is not viable in practice: algorithms like those used in solid modeling are generally too complex for this approach. This paper presents a practical alternative to the formal theory of robustness, called local robustness. Local robustness is applied to the design of a polyhedral intersection algorithm, which is an important component in many solid modelers. The intersection algorithm has been implemented and, in extensive tests, has never failed to produce a valid polyhedron of intersection. A concise characterization of the locally robust intersection algorithm is presented; this characterization can be used to develop variants of the intersection algorithm and to develop robust versions of other solid modeling algorithms.
44

Kulik, Anatoliy, Andrey Chukhray, and Olena Havrylenko. "Information technology for creating intelligent computer programs for training in algorithmic tasks. Part 1: Mathematical foundations." System research and information technologies, no. 4 (December 22, 2021): 27–41. http://dx.doi.org/10.20535/srit.2308-8893.2021.4.02.

Full text
Abstract:
Because of its focus on basic knowledge, the existing education system (higher education in particular) is quite inert and cannot satisfy the needs of the rapidly developing modern labor market. Some professions transform or disappear, while others appear almost daily. Today employers need specialists with specific skills and abilities who can develop them further and adapt to particular projects. This is why short-term courses are very popular, especially online courses led by a mentor who is a specialist in the field. At the same time, graduates of such courses are mostly unable to solve complex problems or make competent decisions on their own. There is therefore a need to create training programs for testing the development and implementation of tools that productively transfer knowledge and skills in a particular field. The article shows a possible approach to adding interactivity to computer tutoring tools, complementing the game principle, information visualization, and other techniques that have already proven themselves in information systems. This makes it possible to create a platform that can accumulate new technologies, integrating them into a digital tutoring environment that adapts to each student.
APA, Harvard, Vancouver, ISO, and other styles
45

Ruggieri, Andrea, Francesco Stranieri, Fabio Stella, and Marco Scutari. "Hard and Soft EM in Bayesian Network Learning from Incomplete Data." Algorithms 13, no. 12 (December 9, 2020): 329. http://dx.doi.org/10.3390/a13120329.

Full text
Abstract:
Incomplete data are a common feature in many domains, from clinical trials to industrial applications. Bayesian networks (BNs) are often used in these domains because of their graphical and causal interpretations. BN parameter learning from incomplete data is usually implemented with the Expectation-Maximisation algorithm (EM), which computes the relevant sufficient statistics (“soft EM”) using belief propagation. Similarly, the Structural Expectation-Maximisation algorithm (Structural EM) learns the network structure of the BN from those sufficient statistics using algorithms designed for complete data. However, practical implementations of parameter and structure learning often impute missing data (“hard EM”) to compute sufficient statistics instead of using belief propagation, for both ease of implementation and computational speed. In this paper, we investigate the question: what is the impact of using imputation instead of belief propagation on the quality of the resulting BNs? From a simulation study using synthetic data and reference BNs, we find that it is possible to recommend one approach over the other in several scenarios based on the characteristics of the data. We then use this information to build a simple decision tree to guide practitioners in choosing the EM algorithm best suited to their problem.
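The hard/soft distinction in the abstract can be shown on a deliberately tiny example. Below is a one-variable Python caricature, assuming a single binary variable with missing cells; in a real BN the expected counts would come from belief propagation over the whole network, and the data and names here are our own:

```python
import numpy as np

# Toy: estimate P(X = 1) for a binary variable with missing entries.
# Soft EM adds fractional (expected) counts for missing cells; hard EM
# would impute the single most probable value and add whole counts.
data = np.array([1, 0, 1, 1, np.nan, np.nan, 0, 1])
theta = 0.5                              # initial estimate of P(X = 1)

for _ in range(20):
    observed = data[~np.isnan(data)]
    n_missing = np.isnan(data).sum()
    # Soft EM: each missing cell contributes theta "expected" ones.
    theta = (observed.sum() + n_missing * theta) / len(data)

print(f"soft-EM estimate of P(X=1): {theta:.3f}")
# Hard EM would instead fill each missing cell with round(theta) --
# cheaper and simpler to implement, but it discards the uncertainty
# that the expected (soft) counts preserve.
```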
APA, Harvard, Vancouver, ISO, and other styles
46

Di Battista, Giuseppe, Fabrizio Frati, and Maurizio Patrignani. "On Embedding a Graph in the Grid with the Maximum Number of Bends and Other Bad Features." Theory of Computing Systems 44, no. 2 (May 7, 2008): 143–59. http://dx.doi.org/10.1007/s00224-008-9115-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

MØGELBERG, RASMUS E., and MARCO PAVIOTTI. "Denotational semantics of recursive types in synthetic guarded domain theory." Mathematical Structures in Computer Science 29, no. 3 (May 15, 2018): 465–510. http://dx.doi.org/10.1017/s0960129518000087.

Full text
Abstract:
Just like any other branch of mathematics, denotational semantics of programming languages should be formalised in type theory, but adapting traditional domain-theoretic semantics, as originally formulated in classical set theory, to type theory has proven challenging. This paper is part of a project on formulating denotational semantics in type theories with guarded recursion. This should have the benefit not only of giving simpler semantics and proofs of properties such as adequacy, but also, we hope, of scaling in the future to languages with advanced features, such as general references, that are outside the reach of traditional domain-theoretic techniques. Working in Guarded Dependent Type Theory (GDTT), we develop denotational semantics for the Fixed Point Calculus (FPC), the simply typed lambda calculus extended with recursive types, modelling the recursive types of FPC using the guarded recursive types of GDTT. We prove soundness and computational adequacy of the model in GDTT using a logical relation between syntax and semantics, itself constructed using guarded recursive types. The denotational semantics is intensional in the sense that it counts the number of unfold-fold reductions needed to compute the value of a term, but we construct a relation relating the denotations of extensionally equal terms, i.e., pairs of terms that compute the same value in a different number of steps. Finally, we show how the denotational semantics of terms can be executed inside type theory and prove that executing the denotation of a boolean term computes the same value as the operational semantics of FPC.
APA, Harvard, Vancouver, ISO, and other styles
48

Zhang, Yi, Yunchuan Hu, Jiakai Lu, and Zhiqiang Shi. "Research on Path Planning of Mobile Robot Based on Improved Theta* Algorithm." Algorithms 15, no. 12 (December 15, 2022): 477. http://dx.doi.org/10.3390/a15120477.

Full text
Abstract:
The Theta* algorithm is a graph-search-based path planning algorithm that yields optimal paths with more route flexibility than the A* algorithm. However, the traditional Theta* algorithm struggles to balance the global picture with local detail and traverses many nodes, which leads to a large amount of computation and makes it unsuitable for direct use in large scenarios. To address this problem, this paper proposes an improved Theta* algorithm, the W-Theta* algorithm. The heuristic function of Theta* is improved by introducing a weighting strategy, and the default Euclidean distance formula is replaced with a diagonal distance formula; together these reduce computation time while still ensuring a short global path. Trajectory optimization is achieved by curve fitting the generated path points, making the motion trajectory of the mobile robot smoother. Simulation results show that the improved algorithm can quickly plan paths in large scenarios and outperforms other path planning algorithms in both time and computational cost. Across the test scenarios, the W-Theta* algorithm reduces path planning computation time by 81.65% compared with the Theta* algorithm and by 79.59% compared with the A* algorithm, and reduces memory occupation during computation by 44.31% compared with the Theta* algorithm and by 29.33% compared with the A* algorithm.
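The abstract names the two heuristic changes precisely enough to sketch. A minimal Python rendering, where the diagonal (octile) distance is the standard grid formula and the weight value `w` is an illustrative assumption, since the paper's exact weighting scheme is not given in the abstract:

```python
import math

def octile_distance(ax, ay, bx, by):
    """Diagonal-distance heuristic on an 8-connected grid:
    straight steps cost 1, diagonal steps cost sqrt(2)."""
    dx, dy = abs(ax - bx), abs(ay - by)
    return (dx + dy) + (math.sqrt(2) - 2) * min(dx, dy)

def weighted_heuristic(node, goal, w=1.5):
    """Weighted heuristic for f = g + w * h: inflating h (w > 1)
    makes the search greedier, expanding fewer nodes at some
    cost in path optimality."""
    return w * octile_distance(*node, *goal)

print(weighted_heuristic((0, 0), (3, 4)))  # inflated estimate to (3, 4)
```

Both changes cut per-node work (no square root per straight/diagonal decomposition, fewer expansions), which is consistent with the computation-time reductions the paper reports.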
APA, Harvard, Vancouver, ISO, and other styles
49

Langdon, W. B., and Riccardo Poli. "Evolving Problems to Learn About Particle Swarm Optimizers and Other Search Algorithms." IEEE Transactions on Evolutionary Computation 11, no. 5 (October 2007): 561–78. http://dx.doi.org/10.1109/tevc.2006.886448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Mahmood, Waqas, Arfan Bukhtiar, Muhammad Haroon, and Bing Dong. "First principles calculations on theoretical band gap improvement of IIIA-VA zinc-blende semiconductor InAs." International Journal of Modern Physics C 31, no. 12 (October 9, 2020): 2050178. http://dx.doi.org/10.1142/s0129183120501788.

Full text
Abstract:
The structural, electronic, dielectric and vibrational properties of zinc-blende (ZB) InAs were studied within the framework of density functional theory (DFT), employing the local density approximation and norm-conserving pseudopotentials. The optimal lattice parameter, direct band gap, static dielectric constant, phonon frequencies and Born effective charges, calculated by treating the In 4d electrons as valence states, are in satisfactory agreement with other reported theoretical and experimental findings. The calculated band gap is reasonably accurate and improved compared with other reported values. This work will be useful for further computational studies related to semiconductor devices.
APA, Harvard, Vancouver, ISO, and other styles