Academic literature on the topic 'Theory of computation not elsewhere classified'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Theory of computation not elsewhere classified.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Theory of computation not elsewhere classified"

1

Culberson, Joseph C. "On the Futility of Blind Search: An Algorithmic View of “No Free Lunch”." Evolutionary Computation 6, no. 2 (June 1998): 109–27. http://dx.doi.org/10.1162/evco.1998.6.2.109.

Full text
Abstract:
The paper is in three parts. First, we use simple adversary arguments to redevelop and explore some of the no-free-lunch (NFL) theorems and perhaps extend them a little. Second, we clarify the relationship of NFL theorems to algorithm theory and complexity classes such as NP. We claim that NFL is weaker in the sense that the constraints implied by the conjectures of traditional algorithm theory on what an evolutionary algorithm may be expected to accomplish are far more severe than those implied by NFL. Third, we take a brief look at how natural evolution relates to computation and optimization. We suggest that the evolution of complex systems exhibiting high degrees of orderliness is not equivalent in difficulty to optimizing hard (in the complexity sense) problems, and that the optimism in genetic algorithms (GAs) as universal optimizers is not justified by natural evolution. This is an informal tutorial paper—most of the information presented is not formally proven, and is either “common knowledge” or formally proven elsewhere. Some of the claims are intuitions based on experience with algorithms, and in a more formal setting should be classified as conjectures.
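For context, the no-free-lunch result the paper redevelops is usually stated in roughly the following form. The notation below follows the conventional Wolpert-Macready formulation rather than Culberson's adversary-style presentation, and is included only as a hedged reminder:

```latex
% No-free-lunch theorem (informal statement): for any two black-box search
% algorithms a_1 and a_2, the distribution of the sampled-values histogram
% d^y_m after m distinct evaluations, summed over all objective functions
% f : X -> Y on finite X and Y, is identical.
\sum_{f} P\bigl(d^{y}_{m} \mid f, m, a_{1}\bigr)
  \;=\;
\sum_{f} P\bigl(d^{y}_{m} \mid f, m, a_{2}\bigr)
```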
APA, Harvard, Vancouver, ISO, and other styles
2

Bruda, Stefan D., and Selim G. Akl. "On the Necessity of Formal Models for Real-Time Parallel Computations." Parallel Processing Letters 11, no. 02n03 (June 2001): 353–61. http://dx.doi.org/10.1142/s0129626401000646.

Full text
Abstract:
We assume the multitape real-time Turing machine as a formal model for parallel real-time computation. Then, we show that, for any positive integer k, there is at least one language Lk which is accepted by a k-tape real-time Turing machine, but cannot be accepted by a (k - 1)-tape real-time Turing machine. It follows therefore that the languages accepted by real-time Turing machines form an infinite hierarchy with respect to the number of tapes used. Although this result was previously obtained elsewhere, our proof is considerably shorter and explicitly builds the languages Lk. The ability of the real-time Turing machine to model practical real-time and/or parallel computations is open to debate. Nevertheless, our result shows how a complexity theory based on a formal model can draw interesting results that are of a more general nature than those derived from examples. Thus, we hope to offer a motivation for looking into realistic parallel real-time models of computation.
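The hierarchy claimed in the abstract can be written compactly. Here RT(k) is assumed notation for the class of languages accepted by k-tape real-time Turing machines; it is not taken from the paper itself:

```latex
% Strict tape hierarchy for real-time Turing machine languages:
\forall k \ge 1:\quad \exists\, L_k \in \mathrm{RT}(k)\setminus\mathrm{RT}(k-1),
\qquad\text{hence}\qquad
\mathrm{RT}(k-1) \subsetneq \mathrm{RT}(k)\ \text{for all } k \ge 1.
```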
APA, Harvard, Vancouver, ISO, and other styles
3

Medeiros, David P. "Optimal Growth in Phrase Structure." Biolinguistics 2, no. 2-3 (September 23, 2008): 152–95. http://dx.doi.org/10.5964/bioling.8639.

Full text
Abstract:
This article claims that some familiar properties of phrase structure reflect laws of form. It is shown that optimal sequencing of recursive Merge operations so as to dynamically minimize c-command and containment relations in unlabeled branching forms leads to structural correlates of projection. Thus, a tendency for syntactic structures to pattern according to the X-bar schema (or other shapes exhibiting endocentricity and maximality of ‘non-head daughters’) is plausibly an emergent epiphenomenon of efficient computation. The specifier-head-complement configuration of X-bar theory is shown to be intimately connected to the Fibonacci sequence, suggesting connections with similar mathematical properties in optimal arboration and optimal packing elsewhere in nature.
APA, Harvard, Vancouver, ISO, and other styles
4

Read, Laura K., and Richard M. Vogel. "Hazard function theory for nonstationary natural hazards." Natural Hazards and Earth System Sciences 16, no. 4 (April 11, 2016): 915–25. http://dx.doi.org/10.5194/nhess-16-915-2016.

Full text
Abstract:
Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied generalized Pareto model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard random variable X with corresponding failure time series T should have application to a wide class of natural hazards with opportunities for future extensions.
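As background for the abstract above, the hazard function of a stationary generalized Pareto magnitude follows directly from the textbook definitions; the expression below uses the usual scale parameter σ and shape parameter ξ and is a standard derivation, not the authors' nonstationary extension:

```latex
% Hazard function h(x) = f(x) / S(x), with survival function S(x) = 1 - F(x).
% For a generalized Pareto variable with scale \sigma > 0 and shape \xi:
S(x) = \Bigl(1 + \tfrac{\xi x}{\sigma}\Bigr)^{-1/\xi},\qquad
f(x) = \tfrac{1}{\sigma}\Bigl(1 + \tfrac{\xi x}{\sigma}\Bigr)^{-1/\xi - 1},\qquad
h(x) = \frac{f(x)}{S(x)} = \frac{1}{\sigma + \xi x}.
```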
APA, Harvard, Vancouver, ISO, and other styles
5

Read, L. K., and R. M. Vogel. "Hazard function theory for nonstationary natural hazards." Natural Hazards and Earth System Sciences Discussions 3, no. 11 (November 13, 2015): 6883–915. http://dx.doi.org/10.5194/nhessd-3-6883-2015.

Full text
Abstract:
Impact from natural hazards is a shared global problem that causes tremendous loss of life and property, economic cost, and damage to the environment. Increasingly, many natural processes show evidence of nonstationary behavior including wind speeds, landslides, wildfires, precipitation, streamflow, sea levels, and earthquakes. Traditional probabilistic analysis of natural hazards based on peaks over threshold (POT) generally assumes stationarity in the magnitudes and arrivals of events, i.e., that the probability of exceedance of some critical event is constant through time. Given increasing evidence of trends in natural hazards, new methods are needed to characterize their probabilistic behavior. The well-developed field of hazard function analysis (HFA) is ideally suited to this problem because its primary goal is to describe changes in the exceedance probability of an event over time. HFA is widely used in medicine, manufacturing, actuarial statistics, reliability engineering, economics, and elsewhere. HFA provides a rich theory to relate the natural hazard event series (X) with its failure time series (T), enabling computation of corresponding average return periods, risk, and reliabilities associated with nonstationary event series. This work investigates the suitability of HFA to characterize nonstationary natural hazards whose POT magnitudes are assumed to follow the widely applied Generalized Pareto (GP) model. We derive the hazard function for this case and demonstrate how metrics such as reliability and average return period are impacted by nonstationarity and discuss the implications for planning and design. Our theoretical analysis linking hazard event series X with corresponding failure time series T should have application to a wide class of natural hazards with rich opportunities for future extensions.
APA, Harvard, Vancouver, ISO, and other styles
6

Lember, Jüri, and Alexey Koloydenko. "ADJUSTED VITERBI TRAINING." Probability in the Engineering and Informational Sciences 21, no. 3 (July 2007): 451–75. http://dx.doi.org/10.1017/s0269964807000083.

Full text
Abstract:
Viterbi training (VT) provides a fast but inconsistent estimator of hidden Markov models (HMMs). The inconsistency is alleviated with a little extra computation when we enable VT to asymptotically fix the true values of the parameters. This relies on infinite Viterbi alignments and the limiting probability distributions associated with them. The first in a sequel, this article is a proof of concept; it focuses on mixture models, an important but special case of HMMs where the limiting distributions can be calculated exactly. A simulated Gaussian mixture shows that our central algorithm (VA1) can significantly improve the accuracy of VT at little extra cost. Next in the sequel, we present elsewhere a theory of adjusted VT for general HMMs, where the limiting distributions are more challenging to find. Here, we also present another, more advanced correction to VT and verify its fast convergence and high accuracy; its computational feasibility requires additional investigation.
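For orientation, baseline (unadjusted) Viterbi training for a Gaussian mixture is simply hard assignment followed by re-estimation. The Python sketch below illustrates only that baseline under assumed array shapes; it is not the adjusted VA1 algorithm described in the abstract, and all names are illustrative.

```python
import numpy as np

def viterbi_training(x, means, sigmas, weights, n_iter=50):
    """Baseline Viterbi (hard-EM) training for a 1-D Gaussian mixture.

    Each iteration assigns every observation to its most likely component
    (the 'Viterbi alignment' for mixtures) and re-estimates the parameters
    from those hard assignments; this is the estimator whose inconsistency
    the adjusted-VT work addresses.
    """
    for _ in range(n_iter):
        # log-density of each point under each component, plus log weight
        ll = (-0.5 * ((x[:, None] - means) / sigmas) ** 2
              - np.log(sigmas) + np.log(weights))
        z = ll.argmax(axis=1)                      # hard assignment
        for k in range(len(means)):
            pts = x[z == k]
            if pts.size:                           # skip empty components
                means[k] = pts.mean()
                sigmas[k] = max(pts.std(), 1e-6)
                weights[k] = pts.size / x.size
    return means, sigmas, weights
```

The adjusted schemes in the paper modify the re-estimation step using limiting distributions of the alignment, which this sketch does not attempt.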
APA, Harvard, Vancouver, ISO, and other styles
7

LEE, Sang Dong. "Medical knowledge of medieval physician on the cause of plague during 1347/8-1351: traditional understandings to poison theory." Korean Journal of Medical History 31, no. 2 (August 31, 2022): 363–92. http://dx.doi.org/10.13081/kjmh.2022.31.363.

Full text
Abstract:
This article sets its investigative goal on determining the medical knowledge of medieval physicians from 1347-8 to 1351 concerning the causes of plague. As the plague killed a third of Europe's population, contemporary witnesses perceived God as the sender of this plague to punish human society. However, physicians separated the religious and cultural explanations for the cause of this plague and instead sought the answer to this question elsewhere. Building on traditional medical knowledge, physicians classified the possible range of the plague's causes into two areas: a universal cause and individual/particular causes. In addition, they also sought to explain the causes by employing the traditional miasma-humoral theory. Unlike previous outbreaks, however, the plague of 1347-8 to 1351 killed its patients indiscriminately and with incredible viciousness. This phenomenon could not be explained merely by using traditional medical knowledge, and this idiosyncrasy led the physicians to employ poison theory to explain the causes of the plague more pragmatically.
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Yun Xia, Zhi Liang Wang, and Cheng Chong Gao. "Research on Modeling Technology of Virtual Resources Cloud Pool for Group Enterprises Based on Ontology." Applied Mechanics and Materials 347-350 (August 2013): 3287–91. http://dx.doi.org/10.4028/www.scientific.net/amm.347-350.3287.

Full text
Abstract:
To realize cloud manufacturing (CMfg) production in group enterprises, manufacturing resources and cloud-pool modeling technologies were studied. According to the characteristics of group enterprises, manufacturing resources were analyzed and classified into human, equipment, material, cooperation, and other resources. The method by which manufacturing resources are mapped into virtual resources was then investigated, and a layered platform for cloud manufacturing was proposed. Taking a CNC machine tool as an example, an ontology model was built with Semantic Web and OWL based on ontology theory. Finally, using a semantic similarity computation method and case-based reasoning, the virtual resources were intelligently searched and matched so that manufacturing resources could be unified, shared, and reused.
APA, Harvard, Vancouver, ISO, and other styles
9

El-Sayed, Hesham, Sharmi Sankar, Heng Yu, and Gokulnath Thandavarayan. "Benchmarking of Recommendation Trust Computation for Trust/Trustworthiness Estimation in HDNs." International Journal of Computers Communications & Control 12, no. 5 (September 10, 2017): 612. http://dx.doi.org/10.15837/ijccc.2017.5.2895.

Full text
Abstract:
In recent years, Heterogeneous Distributed Networks (HDNs) have become a predominant technology enabling various applications in fields such as transportation, medicine, and war zones. Due to their arbitrary self-organizing nature and temporary topologies in the spatial-temporal region, distributed systems are vulnerable to a number of security issues and demand strong security countermeasures. Unlike other static networks, the unique characteristics of HDNs demand cutting-edge security policies. Numerous cryptographic techniques have been proposed by different researchers to address the security issues in HDNs. These techniques utilize too many resources, resulting in higher network overheads. Classified as a lightweight security scheme, the Trust Management System (TMS) is one of the most promising technologies, offering efficiency in terms of availability, scalability, and simplicity. It supports both node-level validation and data-level verification, enhancing trust between the attributes. Further, it thwarts a wide range of security attacks by incorporating various statistical techniques and integrated security services. In this paper, we present a literature survey of different TMSs that highlights reliable techniques adapted across HDNs. We then comprehensively study the existing distributed trust computations and benchmark them according to their effectiveness. Further, performance analysis is applied to the existing computation techniques, and the benchmarked outcome delivered by Recommendation Trust Computations (RTC) is discussed. A Receiver Operating Characteristic (ROC) curve illustrates better accuracy for RTC in comparison with Direct Trust Computations (DTC) and Hybrid Trust Computations (HTC). Finally, we propose future directions for research and highlight reliable techniques to build an efficient TMS in HDNs.
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Kang-Jia, Jing-Hua Liu, Jing Si, and Guo-Dong Wang. "Nonlinear Dynamic Behaviors of the (3+1)-Dimensional B-Type Kadomtsev-Petviashvili Equation in Fluid Mechanics." Axioms 12, no. 1 (January 16, 2023): 95. http://dx.doi.org/10.3390/axioms12010095.

Full text
Abstract:
This paper provides an investigation of the nonlinear dynamic behaviors of the (3+1)-dimensional B-type Kadomtsev-Petviashvili equation, which is used to model the propagation of weakly dispersive waves in a fluid. With the help of the Cole-Hopf transform, the Hirota bilinear equation is established; then symbolic computation with ansatz function schemes is employed to search for diverse exact solutions. Some new results such as the multi-wave complexiton, multi-wave, and periodic lump solutions are found. Furthermore, abundant traveling wave solutions such as the dark wave, bright-dark wave, and singular periodic wave solutions are also constructed by applying the sub-equation method. Finally, the nonlinear dynamic behaviors of the solutions are presented through 3-D plots, 2-D contours, and 2-D curves, and their corresponding physical characteristics are elaborated. To our knowledge, the solutions obtained in this work are all new and are not reported elsewhere. The methods applied in this study can be used to investigate other PDEs arising in physics.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Theory of computation not elsewhere classified"

1

Zhu, Huaiyu. "Neural networks and adaptive computers : theory and methods of stochastic adaptive computation." Thesis, University of Liverpool, 1993. http://eprints.aston.ac.uk/365/.

Full text
Abstract:
This thesis studies the theory of stochastic adaptive computation based on neural networks. A mathematical theory of computation is developed in the framework of information geometry, which generalises Turing machine (TM) computation in three aspects (it can be continuous, stochastic, and adaptive) and retains TM computation as a subclass called "data processing". The concepts of the Boltzmann distribution, the Gibbs sampler, and simulated annealing are formally defined and their interrelationships are studied. The concept of a "trainable information processor" (TIP), a parameterised stochastic mapping with a rule for changing the parameters, is introduced as an abstraction of neural network models. A mathematical theory of the class of homogeneous semilinear neural networks is developed, which includes most of the commonly studied NN models such as the back-propagation NN, the Boltzmann machine, and the Hopfield net, and a general scheme is developed to classify the structures, dynamics, and learning rules. All the previously known general learning rules are based on gradient following (GF), which is susceptible to local optima in weight space. Contrary to the widely held belief that this is rarely a problem in practice, numerical experiments show that for most non-trivial learning tasks GF learning never converges to a global optimum. To overcome the local optima, simulated annealing is introduced into the learning rule, so that the network retains an adequate amount of "global search" in the learning process. Extensive numerical experiments confirm that the network always converges to a global optimum in the weight space. The resulting learning rule is also easier to implement and more biologically plausible than the back-propagation and Boltzmann machine learning rules: only a scalar needs to be back-propagated for the whole network. Various connectionist models have been proposed in the literature for solving various instances of problems, without a general method by which their merits can be combined. Instead of proposing yet another model, we try to build a modular structure in which each module is basically a TIP. As an extension of simulated annealing to temporal problems, we generalise the theory of dynamic programming and Markov decision processes to allow adaptive learning, resulting in a computational system called a "basic adaptive computer", which has the advantage over earlier reinforcement learning systems, such as Sutton's "Dyna", in that it can adapt in a combinatorial environment and still converge to a global optimum. The theories are developed with a universal normalisation scheme for all the learning parameters so that the learning system can be built without prior knowledge of the problems it is to solve.
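The role simulated annealing plays in the learning rule sketched above can be illustrated with the generic Metropolis acceptance step. This is a textbook Python sketch with illustrative names, not the thesis's actual weight-update or normalisation scheme.

```python
import math
import random

def anneal_step(weights, propose, loss, temperature):
    """One Metropolis-style annealing step over a weight vector.

    A candidate is drawn from `propose`, and worse candidates are still
    accepted with probability exp(-delta / temperature), which is what lets
    the search escape local optima that pure gradient following gets stuck in.
    """
    candidate = propose(weights)
    delta = loss(candidate) - loss(weights)
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        return candidate          # accept (always, if not worse)
    return weights                # reject, keep current weights
```

In practice the temperature is lowered over the course of training according to a cooling schedule.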
APA, Harvard, Vancouver, ISO, and other styles
2

Laverick, Craig. "Enforcing the ISM Code, and improving maritime safety, with an improved Corporate Manslaughter Act : a safety culture theory perspective." Thesis, University of Central Lancashire, 2018. http://clok.uclan.ac.uk/23768/.

Full text
Abstract:
The International Safety Management (ISM) Code was introduced in 1998 in response to a number of high-profile maritime disasters, with the aim of establishing minimum standards for the safe operation of ships and creating an enhanced safety culture. It was the first piece of legislation introduced by the International Maritime Organisation that demanded a change in the behaviour and attitude of the international maritime community. Whilst there is no doubt that the ISM Code has been successful at improving maritime safety, there is now an increasing problem with complacency. The aim of this thesis is to consider how complacency with the ISM Code in the UK can be tackled by using reformed corporate manslaughter legislation. This thesis adopts a Safety Culture Theory approach and uses a multi-model research design methodology; a doctrinal model and a socio-legal model. The thesis hypothesis and the author's proposed corporate manslaughter reforms are tested through case studies and a survey. The thesis proposes the introduction of secondary individual liability for corporate manslaughter, in addition to existing primary corporate liability. If the proposed provisions were to be implemented, a gap in the law would be filled and, for the maritime industry, both the ship company and its corporate individuals would be held accountable for deaths at sea that are attributable to non-implementation of the ISM Code. It is suggested that this would deter further ISM complacency and so encourage the ISM Code’s intended safety culture. This thesis contributes to the intellectual advancement of the significant and developing interplay between criminal and maritime law, by adding to the scholarly understanding of the safety culture operating within the international maritime community, and examining how corporate manslaughter legislation could be used to improve implementation of the ISM Code. It offers sound research for consideration by legal researchers and scholars, and also by those working within the field of maritime safety regulation.
APA, Harvard, Vancouver, ISO, and other styles
3

Oliver, Christine. "Systemic reflexivity : building theory for organisational consultancy." Thesis, University of Bedfordshire, 2012. http://hdl.handle.net/10547/567099.

Full text
Abstract:
This dissertation argues for the value of the concept of systemic reflexivity in sense making, orientation and action in systemic practice, and in organisational practice in particular. The concept emerges as a theme through the development of two specific strands of published work from 1992 to 2013, that of Coordinated Management of Meaning Theory (CMM) and Appreciative Inquiry (AI). Both lines of inquiry highlight the moral dimension of practitioners’ conceptualisation and practice. Systemic reflexivity alerts us to the opportunities and constraints system participants make for the system in focus, facilitating exploration of a system’s coherence, through a detailed framework for systemic thinking which links patterns of communication to their narratives of influence and narrative consequences. It provides the conditions for enabling individual and collective responsibility for the ways that communication shapes our social worlds. The concept is illustrated in practice through a range of case studies within the published works.
APA, Harvard, Vancouver, ISO, and other styles
4

Edmonds, Andrew Nicola. "Time series prediction using supervised learning and tools from chaos theory." Thesis, University of Bedfordshire, 1996. http://hdl.handle.net/10547/582141.

Full text
Abstract:
In this work, methods for performing time series prediction on complex real-world time series are examined. In particular, series exhibiting non-linear or chaotic behaviour are selected for analysis. A range of methodologies based on Takens' embedding theorem are considered and compared with more conventional methods. A novel combination of methods for determining the optimal embedding parameters is employed and tried out with multivariate financial time series data and with a complex series derived from an experiment in biotechnology. The results show that this combination of techniques provides accurate results while dramatically improving the time required to produce predictions and analyses, and eliminating a range of parameters that had hitherto been fixed empirically. The architecture and methodology of the prediction software developed are described, along with design decisions and their justification. Sensitivity analyses are employed to justify the use of this combination of methods, and comparisons are made with more conventional predictive techniques and trivial predictors, showing the superiority of the results generated by the work detailed in this thesis.
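The construction underlying the Takens-based methods mentioned above is the delay embedding of a scalar series. The Python sketch below shows only the standard construction, with illustrative names; choosing the embedding dimension m and delay tau is exactly the parameter-selection problem the thesis addresses.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Build the Takens delay-embedding matrix of a scalar time series.

    Row i is the reconstructed state vector
    [x[i], x[i + tau], ..., x[i + (m - 1) * tau]].
    """
    x = np.asarray(x)
    n = len(x) - (m - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this m and tau")
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])
```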
APA, Harvard, Vancouver, ISO, and other styles
5

(9805715), Zhigang Huang. "A recursive algorithm for reliability assessment in water distribution networks with applications of parallel programming techniques." Thesis, 1994. https://figshare.com/articles/thesis/A_recursive_algorithm_for_reliability_assessment_in_water_distribution_networks_with_applications_of_parallel_programming_techniques/13425371.

Full text
Abstract:
This project models the reliability of an urban water distribution network. Reliability is one of the fundamental considerations in the design of urban water distribution networks. The reliability of a network can be modelled by the probability of the connectedness of a stochastic graph. The enumeration of a set of cuts of the graph, and the calculation of the disjoint probability products of the cuts, are two fundamental steps in network reliability assessment. An improved algorithm for the enumeration of all the minimal cutsets of a graph is presented. Based on this, a recursive algorithm for the enumeration of all Buzacott cuts (a particular set of ordered cuts) of a graph has been developed. The final algorithm presented in this thesis incorporates the enumeration of Buzacott cuts and the calculation of the disjoint probability products of the cuts to obtain the network reliability. As a result, it is tightly coupled and very efficient. Experimental results show that this algorithm has a higher efficiency than other reported methods. The parallelism existing in the reliability assessment is investigated. The final algorithm has been implemented as a concurrent computer program. The effectiveness of parallel programming techniques in reducing the computing time required by the reliability assessment is also discussed.
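The two fundamental steps named in the abstract correspond to the standard sum-of-disjoint-products formulation of connectedness reliability. Written in generic notation (not the thesis's own), with F_i the event that every edge of cut C_i has failed:

```latex
% Sum of disjoint probability products over an ordered family of cuts
% C_1, ..., C_r: the unreliability Q is the probability that at least one
% cut fails, decomposed into mutually exclusive terms so probabilities add.
Q \;=\; P\!\left(\bigcup_{i=1}^{r} F_i\right)
  \;=\; \sum_{i=1}^{r} P\!\left(F_i \cap \overline{F_1} \cap \cdots \cap \overline{F_{i-1}}\right),
\qquad R \;=\; 1 - Q.
```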
APA, Harvard, Vancouver, ISO, and other styles
6

(9831926), David Ruxton. "Differential dynamic programming and optimal control of inequality constrained continuous dynamic systems." Thesis, 1991. https://figshare.com/articles/thesis/Differential_dynamic_programming_and_optimal_control_of_inequality_constrained_continuous_dynamic_systems/13430081.

Full text
Abstract:
In this thesis, the development of the differential dynamic programming (DDP) algorithm is extensively reviewed, from its introduction to its present status. Following this review, the DDP algorithm is shown to be readily adapted to handle inequality-constrained continuous optimal control problems. In particular, a new approach using multiplier penalty functions, implemented in conjunction with the DDP algorithm, is introduced and shown to be effective. Emphasis is placed on the practical aspects of implementing and applying DDP algorithm variants. The new DDP and multiplier penalty function variant is then tested and compared with established DDP algorithm variants, as well as another numerical method, before being applied to solve a problem involving the control of a robot arm in the plane.
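The multiplier penalty function device referred to above is commonly the augmented-Lagrangian construction. A generic equality-constrained form is shown below as a hedged reminder (inequalities are usually handled via slacks or a clipped variant); the symbols are generic rather than the thesis's notation.

```latex
% Augmented Lagrangian / multiplier penalty for constraints c(x,u) = 0:
% the quadratic term penalises violation, while the multiplier term
% avoids having to drive the penalty weight \mu to infinity.
L_{A}(x, u; \lambda, \mu) \;=\; \ell(x, u)
   \;+\; \lambda^{\top} c(x, u)
   \;+\; \tfrac{\mu}{2}\,\lVert c(x, u)\rVert^{2}.
```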
APA, Harvard, Vancouver, ISO, and other styles
7

(9811760), Scott Ladley. "An investigation into the application of evolutionary algorithms on highly constrained optimal control problems and the development of a graphical user interface for comprehensive algorithm control and monitoring." Thesis, 2003. https://figshare.com/articles/thesis/An_investigation_into_the_application_of_evolutionary_algorithms_on_highly_constrained_optimal_control_problems_and_the_development_of_a_graphical_user_interface_for_comprehensive_algorithm_control_and_monitoring/19930160.

Full text
Abstract:

In this thesis we investigate how intelligent techniques, such as Evolutionary Algorithms, can be applied to finding solutions to discrete optimal control problems. Also, a detailed investigation is carried out into the design and development of a superior execution environment for Evolutionary Algorithms.

An overview of the basic processes of an Evolutionary Algorithm is given, as well as detailed descriptions of several genetic operators. Several additional operators that may be applied in conjunction with an Evolutionary Algorithm are also studied. These operators include several versions of the simplex method, as well as three distinct hill-climbers, each designed for a specific purpose: local search, escaping local minima, and self-adaptation to a broad range of problems.

The mathematical programming formulation of discrete optimal control problems is used to generate a class of highly constrained problems. Techniques are developed to accurately and rapidly solve these problems, whilst satisfying the equality constraints to machine accuracy.

The improved execution environment for Evolutionary Algorithms proposes the use of a Graphical User Interface for data visualisation, algorithm control and monitoring, as well as a Client/Server network interface for connecting the GUI to remotely run algorithms.
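As a generic illustration of the kind of evolutionary loop with an embedded hill-climbing step discussed in this abstract, the Python sketch below shows one common arrangement. All function names and parameters are illustrative assumptions, not the algorithms implemented in the thesis.

```python
import random

def evolve(init, fitness, crossover, mutate, hill_climb,
           pop_size=50, generations=200, climb_prob=0.1):
    """Generic evolutionary loop with an optional hill-climbing refinement.

    `init` creates a random individual, `fitness` is minimised, and
    `hill_climb` is applied to a fraction of offspring as a local search.
    """
    population = [init() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness)
        parents = scored[: pop_size // 2]             # truncation selection
        offspring = []
        while len(offspring) < pop_size:
            a, b = random.sample(parents, 2)
            child = mutate(crossover(a, b))
            if random.random() < climb_prob:          # occasional local search
                child = hill_climb(child, fitness)
            offspring.append(child)
        population = offspring
    return min(population, key=fitness)
```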

APA, Harvard, Vancouver, ISO, and other styles
8

Nguyen, Van-Tuong. "An implementation of the parallelism, distribution and nondeterminism of membrane computing models on reconfigurable hardware." 2010. http://arrow.unisa.edu.au:8081/1959.8/100802.

Full text
Abstract:
Membrane computing investigates models of computation inspired by certain features of biological cells, especially features arising because of the presence of membranes. Because of their inherent large-scale parallelism, membrane computing models (called P systems) can be fully exploited only through the use of a parallel computing platform. However, it is an open question whether it is feasible to develop an efficient and useful parallel computing platform for membrane computing applications. Such a computing platform would significantly outperform equivalent sequential computing platforms while still achieving acceptable scalability, flexibility and extensibility. To move closer to an answer to this question, I have investigated a novel approach to the development of a parallel computing platform for membrane computing applications that has the potential to deliver a good balance between performance, flexibility, scalability and extensibility. This approach involves the use of reconfigurable hardware and an intelligent software component that is able to configure the hardware to suit the specific properties of the P system to be executed. As part of my investigations, I have created a prototype computing platform called Reconfig-P based on the proposed development approach. Reconfig-P is the only existing computing platform for membrane computing applications able to support both system-level and region-level parallelism. Using an intelligent hardware source code generator called P Builder, Reconfig-P is able to realise an input P system as a hardware circuit in various ways, depending on which aspects of P systems the user wishes to emphasise at the implementation level. For example, Reconfig-P can realise a P system in a rule-oriented manner or in a region-oriented manner. P Builder provides a unified implementation framework within which the various implementation strategies can be supported. The basic principles of this framework conform to a novel design pattern called Content-Form-Strategy. The framework seamlessly integrates the currently supported implementation approaches, and facilitates the inclusion of additional implementation strategies and additional P system features. Theoretical and empirical results regarding the execution time performance and hardware resource consumption of Reconfig-P suggest that the proposed development approach is a viable means of attaining a good balance between performance, scalability, flexibility and extensibility. Most of the existing computing platforms for membrane computing applications fail to support nondeterministic object distribution, a key aspect of P systems that presents several interesting implementation challenges. I have devised an efficient algorithm for nondeterministic object distribution that is suitable for implementation in hardware. Experimental results suggest that this algorithm could be incorporated into Reconfig-P without too significantly reducing its performance or efficiency.
Thesis (PhD Information Technology) -- University of South Australia, 2010
APA, Harvard, Vancouver, ISO, and other styles
9

(10514360), Uttara Vinay Tipnis. "Data Science Approaches on Brain Connectivity: Communication Dynamics and Fingerprint Gradients." Thesis, 2021.

Find full text
Abstract:
The innovations in Magnetic Resonance Imaging (MRI) in recent decades have given rise to large open-source datasets. MRI affords researchers the ability to look at both the structure and the function of the human brain. This dissertation makes use of one of these large open-source datasets, the Human Connectome Project (HCP), to study structural and functional connectivity in the brain.
Communication processes within the human brain at different cognitive states are neither well understood nor completely characterized. We assess communication processes in the human connectome using an ant-colony-inspired cooperative learning algorithm, starting from a source with no a priori information about the network topology and cooperatively searching for the target through a pheromone-inspired model. This framework relies on two parameters, namely pheromone and edge perception, to define the cognizance and subsequent behaviour of the ants on the network and the communication processes happening between the source and the target. Simulations with different configurations allow the identification of path-ensembles that are involved in the communication between node pairs. In order to assess the different communication regimes displayed in the simulations and their associations with functional connectivity, we introduce two network measurements, effective path-length and arrival rate. These measurements are tested as individual and combined descriptors of functional connectivity during different tasks. Finally, different communication regimes are found in different specialized functional networks. This framework may be used as a test-bed for different communication regimes on top of an underlying topology.
The assessment of brain fingerprints has emerged in recent years as an important tool to study individual differences. Studies so far have mainly focused on connectivity fingerprints between different brain scans of the same individual. We extend the concept of brain connectivity fingerprints beyond test/retest and assess fingerprint gradients in young adults by developing an extension of the differential identifiability framework. To do so, we look at the similarity not only between the multiple scans of an individual (subject fingerprint), but also between the scans of monozygotic and dizygotic twins (twin fingerprint). We have carried out this analysis on the 8 fMRI conditions present in the Human Connectome Project -- Young Adult dataset, which we processed into functional connectomes (FCs) and time series parcellated according to the Schaefer atlas scheme, which has multiple levels of resolution. Our differential identifiability results show that the fingerprint gradients based on genetic and environmental similarities are indeed present when comparing FCs for all parcellations and fMRI conditions. Importantly, only when assessing optimally reconstructed FCs do we fully uncover fingerprints present in higher-resolution atlases. We also study the effect of scanning length and parcellation on the subject fingerprint of resting-state FCs. In the pursuit of open science, we have also made the processed and parcellated FCs and time series for all conditions for the ~1200 subjects in the HCP-YA dataset available to the scientific community.
Lastly, we have estimated the effect of genetics and environment on the original and optimally reconstructed FC with an ACE model.
APA, Harvard, Vancouver, ISO, and other styles
10

(8713962), James Ulcickas. "LIGHT AND CHEMISTRY AT THE INTERFACE OF THEORY AND EXPERIMENT." Thesis, 2020.

Find full text
Abstract:
Optics are a powerful probe of chemical structure that can often be linked to theoretical predictions, providing robustness as a measurement tool. Not only do optical interactions like second harmonic generation (SHG), single and two-photon excited fluorescence (TPEF), and infrared absorption provide chemical specificity at the molecular and macromolecular scale, but the ability to image enables mapping heterogeneous behavior across complex systems such as biological tissue. This thesis will discuss nonlinear and linear optics, leveraging theoretical predictions to provide frameworks for interpreting analytical measurement. In turn, the causal mechanistic understanding provided by these frameworks will enable structurally specific quantitative tools with a special emphasis on application in biological imaging. The thesis will begin with an introduction to 2nd order nonlinear optics and the polarization analysis thereof, covering both the Jones framework for polarization analysis and the design of experiment. Novel experimental architectures aimed at reducing 1/f noise in polarization analysis will be discussed, leveraging both rapid modulation in time through electro-optic modulators (Chapter 2), as well as fixed-optic spatial modulation approaches (Chapter 3). In addition, challenges in polarization-dependent imaging within turbid systems will be addressed with the discussion of a theoretical framework to model SHG occurring from unpolarized light (Chapter 4). The application of this framework to thick tissue imaging for analysis of collagen local structure can provide a method for characterizing changes in tissue morphology associated with some common cancers (Chapter 5). In addition to discussion of nonlinear optical phenomena, a novel mechanism for electric dipole allowed fluorescence-detected circular dichroism will be introduced (Chapter 6). Tackling challenges associated with label-free chemically specific imaging, the construction of a novel infrared hyperspectral microscope for chemical classification in complex mixtures will be presented (Chapter 7). The thesis will conclude with a discussion of the inherent disadvantages in taking the traditional paradigm of modeling and measuring chemistry separately and provide the multi-agent consensus equilibrium (MACE) framework as an alternative to the classic meet-in-the-middle approach (Chapter 8). Spanning topics from pure theoretical descriptions of light-matter interaction to full experimental work, this thesis aims to unify these two fronts.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Theory of computation not elsewhere classified"

1

Liao, T. Warren. "A New Efficient and Effective Fuzzy Modeling Method for Binary Classification." In Contemporary Theory and Pragmatic Approaches in Fuzzy Computing Utilization, 41–59. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-1870-1.ch004.

Full text
Abstract:
This paper presents a new fuzzy modeling method that can be classified as a grid partitioning method, in which the domain space is partitioned by the fuzzy equalization method one dimension at a time, followed by the computation of rule weights according to the max-min composition. Five datasets were selected for testing. Among them, three datasets are high-dimensional; for these datasets only selected features are used to control the model size. An enumerative method is used to determine the best combination of fuzzy terms for each variable. The performance of each fuzzy model is evaluated in terms of average test error, average false positives, average false negatives, training error, and the CPU time taken to build the model. The results indicate that this method is best because it produces the lowest average test errors and takes less time to build fuzzy models. The average test errors vary greatly with model size. Generally, large models produce lower test errors than small models regardless of the fuzzy modeling method used; however, the relationship is not monotonic. Therefore, effort must be made to determine which model is best for a given dataset and a chosen fuzzy modeling method.
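The max-min composition mentioned in the abstract is the standard composition of fuzzy relations; in generic notation, not tied to the chapter's specific rule-weight procedure:

```latex
% Max-min composition of fuzzy relations R on X x Y and S on Y x Z:
(R \circ S)(x, z) \;=\; \max_{y \in Y}\, \min\bigl(R(x, y),\, S(y, z)\bigr).
```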
APA, Harvard, Vancouver, ISO, and other styles
2

Whitty, Robin, and Robin Wilson. "Introducing Turing’s mathematics." In The Turing Guide. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198747826.003.0048.

Full text
Abstract:
Alan Turing’s mathematical interests were deep and wide-ranging. From the beginning of his career in Cambridge he was involved with probability theory, algebra (the theory of groups), mathematical logic, and number theory. Prime numbers and the celebrated Riemann hypothesis continued to preoccupy him until the end of his life. As a mathematician, and as a scientist generally, Turing was enthusiastically omnivorous. His collected mathematical works comprise thirteen papers, not all published during his lifetime, as well as the preface from his Cambridge Fellowship dissertation; these cover group theory, probability theory, number theory (analytic and elementary), and numerical analysis. This broad swathe of work is the focus of this chapter. But Turing did much else that was mathematical in nature, notably in the fields of logic, cryptanalysis, and biology, and that work is described in more detail elsewhere in this book. To be representative of Turing’s mathematical talents is a more realistic aim than to be encyclopaedic. Group theory and number theory were recurring preoccupations for Turing, even during wartime; they are represented in this chapter by his work on the word problem and the Riemann hypothesis, respectively. A third preoccupation was with methods of statistical analysis: Turing’s work in this area was integral to his wartime contribution to signals intelligence. I. J. Good, who worked with Turing at Bletchley Park, has provided an authoritative account of this work, updated in the Collected Works. By contrast, Turing’s proof of the central limit theorem from probability theory, which earned him his Cambridge Fellowship, is less well known: he quickly discovered that the theorem had already been demonstrated, the work was never published, and his interest in it was swiftly superseded by questions in mathematical logic. Nevertheless, this was Turing’s first substantial investigation, the first demonstration of his powers, and was certainly influential in his approach to codebreaking, so it makes a fitting first topic for this chapter. Turing’s single paper on numerical analysis, published in 1948, is not described in detail here. It concerned the potential for errors to propagate and accumulate during large-scale computations; as with everything that Turing wrote in relation to computation it was pioneering, forward-looking, and conceptually sound. There was also, incidentally, an appreciation in this paper of the need for statistical analysis, again harking back to Turing’s earliest work.
APA, Harvard, Vancouver, ISO, and other styles
