Academic literature on the topic 'Applied computing not elsewhere classified'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Applied computing not elsewhere classified.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Applied computing not elsewhere classified"

1

CSG-Ed team. "Global issues." ACM SIGCAS Computers and Society 49, no. 2 (January 22, 2021): 9. http://dx.doi.org/10.1145/3447903.3447906.

Abstract:
The growing role that computing will play in addressing the world's pressing global issues has begun to move to center stage, as Big Data for the SDGs (Sustainable Development Goals) is now included among the United Nations' Global Issues. The UN summarizes this Big Data issue as "The volume of data in the world is increasing exponentially. New sources of data, new technologies, and new analytical approaches, if applied responsibly, can allow to better monitor progress toward achievement of the SDGs in a way that is both inclusive and fair" [2]. Elsewhere, we have applauded and argued for computing initiatives, including computer science education, that specifically focus on such "pressing social, environment, and economic problems" [1], and we acknowledge our SIG's commitment to directly tackling such issues.
2

Das, D., and S. Santhakumar. "An Euler correction method for computing two-dimensional unsteady transonic flows." Aeronautical Journal 103, no. 1020 (February 1999): 85–94. http://dx.doi.org/10.1017/s0001924000027780.

Abstract:
An Euler correction method is developed for unsteady, transonic inviscid flows. The strategy of this method is to treat the flow-field behind the shock as rotational flow and elsewhere as irrotational flow. The solution for the irrotational flow is obtained by solving the unsteady full-potential equation using Jameson's rotated time-marching finite-difference scheme. Clebsch's representation of velocity is followed for rotational flow. In this representation the velocities are decomposed into a potential part and a rotational part written in terms of scalar functions. The potential part is computed from the unsteady full-potential equation with appropriate modification based on Clebsch's representation of velocity. The rotational part is obtained analytically from the unsteady momentum equation written in terms of Clebsch variables. This method is applied to compute the unsteady flow-field characteristics for an oscillating NACA 64A010 aerofoil. The results of the present calculation are found to be in good agreement with both the Euler solution and experimental results.
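For readers unfamiliar with the Clebsch representation mentioned in this abstract, it splits the velocity field into a potential part and a rotational part built from scalar functions. One common form is sketched below; the symbols are generic conventions and not necessarily those used by Das and Santhakumar.

```latex
% Clebsch representation: potential part plus rotational part built from scalars.
\mathbf{q} \;=\; \underbrace{\nabla\phi}_{\text{potential part}}
\;+\; \underbrace{\psi\,\nabla\chi}_{\text{rotational part}},
\qquad
\boldsymbol{\omega} \;=\; \nabla\times\mathbf{q} \;=\; \nabla\psi\times\nabla\chi .
```

Behind the shock the scalars carry the shock-generated vorticity; elsewhere the rotational part vanishes and the flow reduces to the full-potential description.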
3

Wu, Yusen, Hongwei Li, and Ahmed Alsaedi. "Center Conditions and Bifurcation of Limit Cycles Created from a Class of Second-Order ODEs." International Journal of Bifurcation and Chaos 29, no. 01 (January 2019): 1950003. http://dx.doi.org/10.1142/s0218127419500032.

Abstract:
In this paper, a class of second-order ODEs is investigated. First of all, a method of computing singular point values is established for this kind of system. Then, two classes of second-order ODEs are studied to illustrate the efficiency of the method, and the center conditions and bifurcation of limit cycles are obtained. In particular, the center conditions of a class of non-polynomial systems are classified by using this method.
4

Han, Daoqi, Songqi Wu, Zhuoer Hu, Hui Gao, Enjie Liu, and Yueming Lu. "A Novel Classified Ledger Framework for Data Flow Protection in AIoT Networks." Security and Communication Networks 2021 (February 19, 2021): 1–11. http://dx.doi.org/10.1155/2021/6671132.

Abstract:
The edge computing node plays an important role in the evolution of the artificial intelligence-empowered Internet of things (AIoTs) that converge sensing, communication, and computing to enhance wireless ubiquitous connectivity, data acquisition, and analysis capabilities. With full connectivity, the issue of data security in the new cloud-edge-terminal network hierarchy of AIoTs comes to the fore, for which blockchain technology is considered as a potential solution. Nevertheless, existing schemes cannot be applied to the resource-constrained and heterogeneous IoTs. In this paper, we consider the blockchain design for the AIoTs and propose a novel classified ledger framework based on lightweight blockchain (CLF-LB) that separates and stores data rights at the source and enables thorough data flow protection in the open and heterogeneous network environment of AIoT. In particular, CLF-LB divides the network into five functional layers for optimal adaptation to AIoTs applications, wherein an intelligent collaboration mechanism is also proposed to enhance the cross-layer operation. Unlike traditional full-function blockchain models, our framework includes novel technical modules, such as block regenesis, iterative reinforcement of proof-of-work, and efficient chain uploading via the system-on-chip system, which are carefully designed to fit the cloud-edge-terminal hierarchy in AIoTs networks. Comprehensive experimental results are provided to validate the advantages of the proposed CLF-LB, showing its potential to address the secrecy issues of data storage and sharing in AIoTs networks.
5

Rodriguez, Diego, Diego Gomez, David Alvarez, and Sergio Rivera. "A Review of Parallel Heterogeneous Computing Algorithms in Power Systems." Algorithms 14, no. 10 (September 23, 2021): 275. http://dx.doi.org/10.3390/a14100275.

Abstract:
The power system expansion and the integration of technologies, such as renewable generation, distributed generation, high voltage direct current, and energy storage, have made power system simulation challenging in multiple applications. The current computing platforms employed for planning, operation, studies, visualization, and the analysis of power systems are reaching their operational limit since the complexity and size of modern power systems result in long simulation times and high computational demand. Time reductions in simulation and analysis lead to the better and further optimized performance of power systems. Heterogeneous computing—where different processing units interact—has shown that power system applications can take advantage of the unique strengths of each type of processing unit, such as central processing units, graphics processing units, and field-programmable gate arrays interacting in on-premise or cloud environments. Parallel Heterogeneous Computing appears as an alternative to reduce simulation times by optimizing multitask execution in parallel computing architectures with different processing units working together. This paper presents a review of Parallel Heterogeneous Computing techniques, how these techniques have been applied in a wide variety of power system applications, how they help reduce the computational time of modern power system simulation and analysis, and the current tendency regarding each application. We present a wide variety of approaches classified by technique and application.
6

Çakır, Mustafa, Akhan Akbulut, and Yusuf Hatay Önen. "Analysis of the use of computational intelligence techniques for air-conditioning systems: A systematic mapping study." Measurement and Control 52, no. 7-8 (June 28, 2019): 1084–94. http://dx.doi.org/10.1177/0020294019858108.

Abstract:
In our systematic mapping study, we examined 289 published works to determine which intelligent computing methods (e.g. Artificial Neural Networks, Machine Learning, and Fuzzy Logic) used by air-conditioning systems can provide energy savings and improve thermal comfort. Our goal was to identify which methods have been used most in research on the topic, which methods of data collection have been employed, and which areas of research have been empirical in nature. We followed the established rules for literature reviews when identifying published works in databases (e.g. the Institute of Electrical and Electronics Engineers database, the Association for Computing Machinery Digital Library, SpringerLink, ScienceDirect, and Wiley Online Library) and classified the identified works by topic. After excluding works according to the predefined criteria, we reviewed the selected works according to the research parameters motivating our study. Results reveal that energy savings is the most frequently examined topic and that intelligent computing methods can be used to provide better indoor environments for occupants, with energy savings of up to 50%. The most common intelligent method used has been artificial neural networks, while sensors have been the tools most used to collect data, followed by searches of databases of experiments, simulations, and surveys accessed to validate the accuracy of findings.
7

Choi, Yoonjo, Namhun Kim, Seunghwan Hong, Junsu Bae, Ilsuk Park, and Hong-Gyoo Sohn. "Critical Image Identification via Incident-Type Definition Using Smartphone Data during an Emergency: A Case Study of the 2020 Heavy Rainfall Event in Korea." Sensors 21, no. 10 (May 20, 2021): 3562. http://dx.doi.org/10.3390/s21103562.

Abstract:
In unpredictable disaster scenarios, it is important to recognize the situation promptly and take appropriate response actions. This study proposes a cloud computing-based data collection, processing, and analysis process that employs a crowd-sensing application. Clustering algorithms are used to define the major damage types, and hotspot analysis is applied to effectively filter critical data from crowdsourced data. To verify the utility of the proposed process, it is applied to Icheon-si and Anseong-si, both in Gyeonggi-do, which were affected by heavy rainfall in 2020. The results show that the types of incident at the damaged site were effectively detected, and images reflecting the damage situation could be classified using the application of the geospatial analysis technique. For 5 August 2020, which was close to the date of the event, the images were classified with a precision of 100% at a threshold of 0.4. For 24–25 August 2020, the image classification precision exceeded 95% at a threshold of 0.5, except for the mudslide mudflow in the Yul area. The location distribution of the classified images showed a distribution similar to that of damaged regions in unmanned aerial vehicle images.
8

el-Yazigi, A., K. Chaleby, and C. R. Martin. "A simplified and rapid test for acetylator phenotyping by use of the peak height ratio of two urinary caffeine metabolites." Clinical Chemistry 35, no. 5 (May 1, 1989): 848–51. http://dx.doi.org/10.1093/clinchem/35.5.848.

Abstract:
We describe a simplified liquid-chromatographic test in which acetylator phenotype is determined by measuring the peak height ratio of two urinary caffeine metabolites, 5-acetylamino-6-formylamino-3-methyluracil and 1-methylxanthine. We applied this test to determine the acetylator phenotypes of 52 subjects who regularly drink coffee, tea, or caffeinated beverages. Also, we determined the acetylator phenotypes of these subjects according to a well-established sulfasalazine test, which yielded identical results. We established the reproducibility of the described test by determining the acetylator phenotypes of 10 additional subjects on two different days separated by a period of two to five weeks. Of the 52 subjects examined by both tests, 40 (76.9%) were classified as slow acetylators, which agrees well with the percentage reported elsewhere for 297 similar subjects from the Saudi population.
9

Conesa, Francesc C., Hector A. Orengo, Agustín Lobo, and Cameron A. Petrie. "An Algorithm to Detect Endangered Cultural Heritage by Agricultural Expansion in Drylands at a Global Scale." Remote Sensing 15, no. 1 (December 22, 2022): 53. http://dx.doi.org/10.3390/rs15010053.

Abstract:
This article presents AgriExp, a remote-based workflow for the rapid mapping and monitoring of archaeological and cultural heritage locations endangered by new agricultural expansion and encroachment. Our approach is powered by the cloud-computing data cataloguing and processing capabilities of Google Earth Engine and it uses all the available scenes from the Sentinel-2 image collection to map index-based multi-aggregate yearly vegetation changes. A user-defined index threshold maps the first per-pixel occurrence of an abrupt vegetation change and returns an updated and classified multi-temporal image aggregate in almost-real-time. The algorithm requires an input vector table such as data gazetteers or heritage inventories, and it performs buffer zonal statistics for each site to return a series of spatial indicators of potential site disturbance. It also returns time series charts for the evaluation and validation of the local to regional vegetation trends and the seasonal phenology. Additionally, we used multi-temporal MODIS, Sentinel-2 and high-resolution Planet imagery for further photo-interpretation of critically endangered sites. AgriExp was first tested in the arid region of the Cholistan Desert in eastern Pakistan. Here, hundreds of archaeological mound surfaces are threatened by the accelerated transformation of barren lands into new irrigated agricultural lands. We have provided the algorithm code with the article to ensure that AgriExp can be exported and implemented with little computational cost by academics and heritage practitioners alike to monitor critically endangered archaeological and cultural landscapes elsewhere.
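A minimal sketch of the core per-pixel operation this abstract describes: finding the first time step at which a vegetation index drops abruptly past a user-defined threshold. This is an illustrative NumPy reimplementation, not the Google Earth Engine code released with the article; the array layout and the threshold value are assumptions.

```python
import numpy as np

def first_abrupt_change(vi_stack, threshold=-0.3):
    """Return, per pixel, the index of the first year-to-year drop in a
    vegetation index that exceeds `threshold` (a negative change), or -1
    if no abrupt drop occurs.

    vi_stack : array of shape (n_years, rows, cols), e.g. yearly NDVI aggregates.
    """
    diff = np.diff(vi_stack, axis=0)            # year-to-year change, (n_years-1, rows, cols)
    abrupt = diff < threshold                   # True where the change is abrupt
    # argmax picks the first True along the time axis (0 when none are True),
    # so pixels that never change abruptly are masked out afterwards.
    first = abrupt.argmax(axis=0) + 1           # +1: the change belongs to the later year
    first[~abrupt.any(axis=0)] = -1
    return first

# Toy example: 6 "years" of a single 2x2 tile.
vi = np.array([[[0.6, 0.6], [0.5, 0.2]],
               [[0.6, 0.6], [0.5, 0.2]],
               [[0.6, 0.1], [0.5, 0.2]],   # pixel (0,1) is cleared in year 2
               [[0.6, 0.1], [0.5, 0.2]],
               [[0.6, 0.1], [0.1, 0.2]],   # pixel (1,0) is cleared in year 4
               [[0.6, 0.1], [0.1, 0.2]]])
print(first_abrupt_change(vi))   # [[-1  2] [ 4 -1]]
```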
10

LIN, SONG-SUN, and TZI-SHENG YANG. "ON THE SPATIAL ENTROPY AND PATTERNS OF TWO-DIMENSIONAL CELLULAR NEURAL NETWORKS." International Journal of Bifurcation and Chaos 12, no. 01 (January 2002): 115–28. http://dx.doi.org/10.1142/s0218127402004206.

Abstract:
This work investigates binary pattern formations of two-dimensional standard cellular neural networks (CNN) as well as the complexity of the binary patterns. The complexity is measured by the exponential growth rate in which the patterns grow as the size of the lattice increases, i.e. spatial entropy. We propose an algorithm to generate the patterns in the finite lattice for general two-dimensional CNN. For the simplest two-dimensional template, the parameter space is split up into finitely many regions which give rise to different binary patterns. Qualitatively, the global patterns are classified for each region. Quantitatively, the upper bound of the spatial entropy is estimated by computing the number of patterns in the finite lattice, and the lower bound is given by observing a maximal set of patterns of a suitable size which can be adjacent to each other.
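For reference, the spatial entropy mentioned in this abstract is the exponential growth rate of the number of admissible patterns as the lattice grows. A standard formulation (notation assumed, not copied from the paper) is:

```latex
% Gamma_{m x n}(U) = number of distinct m x n binary patterns admissible in the set U.
h(\mathcal{U}) \;=\; \lim_{m,n\to\infty} \frac{\ln \Gamma_{m\times n}(\mathcal{U})}{m\,n},
\qquad
h(\mathcal{U})>0 \;\Longleftrightarrow\; \text{the pattern count grows exponentially (spatial chaos).}
```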

Dissertations / Theses on the topic "Applied computing not elsewhere classified"

1

Jones, Christopher Charles Rawlinson. "A study of novel computing methods for solving large electromagnetic hazards problems." Thesis, University of Central Lancashire, 2002. http://clok.uclan.ac.uk/18842/.

Abstract:
The aim of this work is to explore means to improve the speed of the computational electromagnetics (CEM) processing in use for aircraft design and certification work by a factor of 1000 or so. The investigation addresses particularly the set of problems described as electromagnetic hazards, comprising lightning, EMC and the illumination of an aircraft by external radio sources or HIRF (high intensity radiated fields). These are areas which are very much aspects of the engineering of the aircraft, where the requirement for accuracy of simulations is of the order of 6dB as build and test repeatability cannot achieve better than this. Computer simulations of these interactions at the outset of this work were often taking 10 days and more on the largest parallel computers then available in the UK (Cray T3D - 40 GFLOPS nominal peak). Such run times made any form of optimisation impossibly lengthy. While the future offered the certain prospect of more powerful computers, the simulations had to become more comprehensive in their representation of materials and features, geometry of the object, and particularly the representation of wires and cables had to improve radically, and turn-around times for analysis had to be improved for design assessment as well as to make design optimisation by trade-off studies feasible. All of these could easily consume all the advantage that the new computers would give. The investigation has centred around techniques that might be applied via alteration to the most widely used and usable numerical methods in CEM applied to the electromagnetic hazards, and to techniques that might be applied to the manner of their use. In one case, the investigation has explored a particular possibility for minimising the duration of computation and extrapolating the resulting data to the longest time-scales required. Future improvements in the capabilities of radiating boundary conditions to mimic the effect of an infinite boundary at close range will further improve the benefits already established in this work, but this is not yet realisable. However, it has been established that a combination of techniques with some processes devised through this work can and does deliver the performance improvement sought. It has further been shown that issues such as object resonance that could have incurred significant error and distrust of computational results can be satisfactorily overcome within the required accuracy. Four papers have been published arising from this work. Some of these techniques are now in use in routine analyses contributing to BAE SYSTEMS programmes. Plans are in place to incorporate all of the successful techniques and processes.
2

Bratton, Daniel. "Simple and adaptive particle swarms." Thesis, Goldsmiths College (University of London), 2010. http://research.gold.ac.uk/4752/.

Abstract:
The substantial advances that have been made to both the theoretical and practical aspects of particle swarm optimization over the past 10 years have taken it far beyond its original intent as a biological swarm simulation. This thesis details and explains these advances in the context of what has been achieved to this point, as well as what has yet to be understood or solidified within the research community. Taking into account the state of the modern field, a standardized PSO algorithm is defined for benchmarking and comparative purposes both within the work, and for the community as a whole. This standard is refined and simplified over several iterations into a form that does away with potentially undesirable properties of the standard algorithm while retaining equivalent or superior performance on the common set of benchmarks. This refinement, referred to as a discrete recombinant swarm (PSO-DRS), requires only a single user-defined parameter in the positional update equation, and uses minimal additive stochasticity, rather than the multiplicative stochasticity inherent in the standard PSO. After a mathematical analysis of the PSO-DRS algorithm, an adaptive framework is developed and rigorously tested, demonstrating the effects of the tunable particle- and swarm-level parameters. This adaptability shows practical benefit by broadening the range of problems which the PSO-DRS algorithm is well-suited to optimize.
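Since the thesis builds on a standardized PSO baseline, a minimal global-best PSO may help fix ideas. The sketch below is the textbook inertia-weight update with common default parameters, purely for illustration; it is not Bratton's PSO-DRS refinement.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.72, c1=1.49, c2=1.49, seed=0):
    """Minimise f: R^dim -> R with a basic global-best particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # positions
    v = np.zeros_like(x)                               # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()               # global best position
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

sphere = lambda z: float(np.sum(z ** 2))
best_x, best_f = pso(sphere, dim=10)
print(best_f)   # close to 0 for this simple benchmark
```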
3

Jenkins, David William. "Risk assessment applied to consumer products with reference to CE marking machines for use at work." Thesis, Aston University, 2004. http://publications.aston.ac.uk/12230/.

Abstract:
New Approach Directives now govern the health and safety of most products whether destined for workplace or domestic use. These Directives have been enacted into UK law by various specific legislation principally relating to work equipment, machinery and consumer products. This research investigates whether the risk assessment approach used to ensure the safety of machinery may be applied to consumer products. Crucially, consumer products are subject to the Consumer Protection Act (CPA) 1987, where there is no direct reference to 'assessing risk'. This contrasts with the law governing the safety of products used in the workplace, where risk assessment underpins the approach. New Approach Directives are supported by European harmonised standards, and in the case of machinery, further supported by the risk assessment standard, EN 1050. The system regulating consumer product safety is discussed, its key elements identified and a graphical model produced. This model incorporates such matters as conformity assessment, the system of regulation, near miss and accident reporting. A key finding of the research is that New Approach Directives have a common feature of specifying essential performance requirements that provide a hazard prompt-list that can form the basis for a risk assessment (the hazard identification stage). Drawing upon 272 prosecution cases, and with thirty examples examined in detail, this research provides evidence that despite the high degree of regulation, unsafe consumer products still find their way onto the market. The research presents a number of risk assessment tools to help Trading Standards Officers (TSOs) prioritise their work at the initial inspection stage when dealing with subsequent enforcement action.
4

Timperley, Matthew. "The integration of explanation-based learning and fuzzy control in the context of software assurance as applied to modular avionics." Thesis, University of Central Lancashire, 2015. http://clok.uclan.ac.uk/16726/.

Abstract:
A Modular Power Management System (MPMS) is an energy management system intended for highly modular applications, able to adapt to changing hardware intelligently. There is a dearth in the literature on Integrated Modular Avionics (IMA), which has previously not addressed the implications for software operating within this architecture, namely the adaptation of control laws to changing hardware. This work proposes some approaches to address this issue. Control laws may require adaptation to overcome hardware degradation, or system upgrades. There is also a growing interest in the ability to change hardware configurations of UASs (Unmanned Aerial Systems) between missions, to better fit the characteristics of each one. Hardware changes in the aviation industry come with an additional caveat: in order for a software system to be used in aviation it must be certified as part of a platform. This certification process has no clear guidelines for adaptive systems. Adapting to a changing platform, as well as addressing the necessary certification effort, motivated the development of the MPMS. The aim of the work is twofold. Firstly, to modify existing control strategies for new hardware. This is achieved with generalisation and transfer learning. Secondly, to reduce the workload involved with maintaining a safety argument for an adaptive controller. Three areas of work are used to demonstrate the satisfaction of this aim. Explanation-Based Learning (EBL) is proposed for the derivation of new control laws. The EBL domain theory embodies general control strategies, which are specialised to form fuzzy rules. A method for translating explanation structures into fuzzy rules is presented. The generation of specific rules, from a general control strategy, is one way to adapt to controlling a modular platform. A fuzzy controller executes the rules derived by EBL. This maintains fast rule execution as well as the separation of strategy and application. The ability of EBL to generate rules which are useful when executed by a fuzzy controller is demonstrated by an experiment. A domain theory is given to control throttle output, which is used to generate fuzzy rules. These rules have a positive impact on energy consumption in simulated flight. EBL is proposed, for rule derivation, because it focuses on generalisation. Generalisations can apply knowledge from one situation, or hardware, to another. This can be preferable to re-derivation of similar control laws. Furthermore, EBL can be augmented to include analogical reasoning when reaching an impasse. An algorithm which integrates analogy into EBL has been developed as part of this work. The inclusion of analogical reasoning facilitates transfer learning, which furthers the flexibility of the MPMS in adapting to new hardware. The adaptive capability of the MPMS is demonstrated by application to multiple simulated platforms. EBL produces explanation structures. Augmenting these explanation structures with a safety-specific domain theory can produce skeletal safety cases. A technique to achieve this has been developed. Example structures are generated for previously derived fuzzy rules. Generating safety cases from explanation structures can form the basis for an adaptive safety argument.
5

Zhu, Huaiyu. "Neural networks and adaptive computers : theory and methods of stochastic adaptive computation." Thesis, University of Liverpool, 1993. http://eprints.aston.ac.uk/365/.

Abstract:
This thesis studies the theory of stochastic adaptive computation based on neural networks. A mathematical theory of computation is developed in the framework of information geometry, which generalises Turing machine (TM) computation in three aspects - It can be continuous, stochastic and adaptive - and retains the TM computation as a subclass called "data processing". The concepts of Boltzmann distribution, Gibbs sampler and simulated annealing are formally defined and their interrelationships are studied. The concept of "trainable information processor" (TIP) - parameterised stochastic mapping with a rule to change the parameters - is introduced as an abstraction of neural network models. A mathematical theory of the class of homogeneous semilinear neural networks is developed, which includes most of the commonly studied NN models such as back propagation NN, Boltzmann machine and Hopfield net, and a general scheme is developed to classify the structures, dynamics and learning rules. All the previously known general learning rules are based on gradient following (GF), which are susceptible to local optima in weight space. Contrary to the widely held belief that this is rarely a problem in practice, numerical experiments show that for most non-trivial learning tasks GF learning never converges to a global optimum. To overcome the local optima, simulated annealing is introduced into the learning rule, so that the network retains adequate amount of "global search" in the learning process. Extensive numerical experiments confirm that the network always converges to a global optimum in the weight space. The resulting learning rule is also easier to be implemented and more biologically plausible than back propagation and Boltzmann machine learning rules: Only a scalar needs to be back-propagated for the whole network. Various connectionist models have been proposed in the literature for solving various instances of problems, without a general method by which their merits can be combined. Instead of proposing yet another model, we try to build a modular structure in which each module is basically a TIP. As an extension of simulated annealing to temporal problems, we generalise the theory of dynamic programming and Markov decision process to allow adaptive learning, resulting in a computational system called a "basic adaptive computer", which has the advantage over earlier reinforcement learning systems, such as Sutton's "Dyna", in that it can adapt in a combinatorial environment and still converge to a global optimum. The theories are developed with a universal normalisation scheme for all the learning parameters so that the learning system can be built without prior knowledge of the problems it is to solve.
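The key move described in this abstract is injecting simulated annealing into the learning rule so the search can escape local optima. The generic accept/reject schedule below (plain Python, applied to a toy objective rather than network weights) shows the mechanism being referred to; it is a sketch of standard simulated annealing, not Zhu's specific learning rule.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, alpha=0.995, iters=5000, seed=0):
    """Minimise f over R^n with a Metropolis acceptance rule and geometric cooling.
    Worse moves are accepted with probability exp(-delta / T), which is what
    lets the search escape local optima."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, best_f = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]   # random perturbation
        fc = f(cand)
        delta = fc - fx
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x, fx = cand, fc
            if fx < best_f:
                best, best_f = list(x), fx
        t *= alpha                                       # cool down gradually
    return best, best_f

# Toy multi-modal objective with many local minima.
rastrigin = lambda v: 10 * len(v) + sum(vi * vi - 10 * math.cos(2 * math.pi * vi) for vi in v)
print(simulated_annealing(rastrigin, x0=[3.0, -2.5])[1])
```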
6

Rattray, Magnus. "Modelling the dynamics of genetic algorithms using statistical mechanics." Thesis, University of Manchester, 1996. http://publications.aston.ac.uk/598/.

Abstract:
A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. Problems which are most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified. In order to fully test the formalism, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
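The macroscopic quantities tracked by this formalism — cumulants of the fitness distribution and the mean correlation (Hamming distance) within the population — are easy to compute for a concrete population. The short sketch below does so for a random binary population under a simple additive fitness, purely as an illustration of what the theory averages over; the mean Hamming distance is estimated from allele frequencies, assuming two members drawn independently with replacement.

```python
import numpy as np

def macroscopics(pop, fitness):
    """First four cumulants of the fitness distribution plus the mean
    pairwise Hamming distance of a binary population `pop` (shape P x L)."""
    f = fitness(pop)
    k1 = f.mean()
    c = f - k1
    k2 = (c ** 2).mean()
    k3 = (c ** 3).mean()
    k4 = (c ** 4).mean() - 3 * k2 ** 2          # fourth cumulant from central moments
    # Expected Hamming distance between two members drawn with replacement:
    # at each locus the pair differs with probability 2 p (1 - p).
    p = pop.mean(axis=0)
    mean_hamming = (2 * p * (1 - p)).sum()
    return k1, k2, k3, k4, mean_hamming

rng = np.random.default_rng(0)
population = rng.integers(0, 2, size=(100, 64))          # 100 members, 64-bit genotypes
additive_fitness = lambda pop: pop.sum(axis=1).astype(float)
print(macroscopics(population, additive_fitness))
```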
7

Svénsen, Johan F. M. "GTM: the generative topographic mapping." Thesis, Aston University, 1998. http://publications.aston.ac.uk/1245/.

Abstract:
This thesis describes the Generative Topographic Mapping (GTM) --- a non-linear latent variable model, intended for modelling continuous, intrinsically low-dimensional probability distributions, embedded in high-dimensional spaces. It can be seen as a non-linear form of principal component analysis or factor analysis. It also provides a principled alternative to the self-organizing map --- a widely established neural network model for unsupervised learning --- resolving many of its associated theoretical problems. An important, potential application of the GTM is visualization of high-dimensional data. Since the GTM is non-linear, the relationship between data and its visual representation may be far from trivial, but a better understanding of this relationship can be gained by computing the so-called magnification factor. In essence, the magnification factor relates the distances between data points, as they appear when visualized, to the actual distances between those data points. There are two principal limitations of the basic GTM model. The computational effort required will grow exponentially with the intrinsic dimensionality of the density model. However, if the intended application is visualization, this will typically not be a problem. The other limitation is the inherent structure of the GTM, which makes it most suitable for modelling moderately curved probability distributions of approximately rectangular shape. When the target distribution is very different to that, the aim of maintaining an 'interpretable' structure, suitable for visualizing data, may come into conflict with the aim of providing a good density model. The fact that the GTM is a probabilistic model means that results from probability theory and statistics can be used to address problems such as model complexity. Furthermore, this framework provides solid ground for extending the GTM to wider contexts than that of this thesis.
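For readers unfamiliar with the model, the GTM can be summarised in two equations, in the standard formulation of Bishop, Svensén and Williams (notation as commonly used, not copied from the thesis): a latent grid point x_k is mapped into data space through a basis-function expansion, and an isotropic Gaussian noise model turns the mapped grid into a constrained mixture of Gaussians whose parameters W and β are fitted by EM.

```latex
\mathbf{y}(\mathbf{x};\mathbf{W}) = \mathbf{W}\,\boldsymbol{\phi}(\mathbf{x}),
\qquad
p(\mathbf{t}\mid\mathbf{W},\beta) \;=\; \frac{1}{K}\sum_{k=1}^{K}
\left(\frac{\beta}{2\pi}\right)^{\!D/2}
\exp\!\Bigl(-\tfrac{\beta}{2}\,\bigl\lVert \mathbf{W}\boldsymbol{\phi}(\mathbf{x}_k)-\mathbf{t}\bigr\rVert^{2}\Bigr).
```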
8

Csató, Lehel. "Gaussian processes : iterative sparse approximations." Thesis, Aston University, 2002. http://publications.aston.ac.uk/1327/.

Abstract:
In recent years there has been an increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility when compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is solved by showing the existence of a parametrisation to posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation is with a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector (BV) set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on the KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input and thus approximates a batch solution. The resulting sparse learning algorithm is a generic one: for different problems we only change the likelihood. The algorithm is applied to a variety of problems and we examine its performance both on more classical regression and classification tasks and on data-assimilation and simple density estimation problems.
9

Goldingay, Harry J. "Agent Based Models of Competition and Collaboration." Thesis, Aston University, 2010. http://publications.aston.ac.uk/15212/.

Abstract:
Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation. Agents must collect and process the mail batches, without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete. We also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results. Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
10

Das, Gupta Jishu. "Performance issues for VOIP in Access Networks." Thesis, University of Southern Queensland, 2005. https://eprints.qut.edu.au/12724/1/Das_Gupta_MComputing_Dissertation.pdf.

Abstract:
There is a general consensus that the Quality of Service (QoS) of Voice over Internet Protocol (VOIP) is of growing importance for research and study. In this dissertation we investigate the performance of VOIP and the impact of resource limitations on the performance of Access Networks. The impact of VOIP performance in access networks is particularly important in regions where Internet resources are limited and the cost of improving these resources is prohibitive. It is clear that perceived VOIP performance, as measured by mean opinion score in experiments where subjects are asked to rate communication quality, is determined by end to end delay on the communication path, delay variation, packet loss, echo, the coding algorithm in use and noise. These performance indicators can be measured and the contribution in the access network can be estimated. The relation between MOS and technical measurement is less well understood. We investigate the contribution of the access network to the overall performance of VOIP services and the ways in which access networks can be designed to improve VOIP performance. Issues of interest include the choice of coding rate, dynamic variation of coding rate, packet length, methods of controlling echo, and the use of Active Queue Management (AQM) in Access Network routers. Methods for analyzing the impact of the access network on VOIP performance will be surveyed and reviewed. Also, we consider some approaches for improving the performance of VOIP by conducting experiments using the NS2 simulation software, with a view to gaining a better understanding of the design of access networks.

Book chapters on the topic "Applied computing not elsewhere classified"

1

SheelaS, R. Prema, S. Ramya, and B. Thirumahal. "Brain Tumor Detection Using Gray Level Co-Occurrence Matrix Feature Extraction Technique." In Advances in Parallel Computing. IOS Press, 2021. http://dx.doi.org/10.3233/apc210106.

Abstract:
With each passing year, the world has witnessed a rise in the number of brain tumor cases. Brain tumor classification and detection is one of the most critical and strenuous tasks in the field of medical image processing, while human-aided manual detection leads to imperfect prediction and diagnosis. Brain tumors are highly heterogeneous in appearance, and tumor and non-tumor tissues can share similar features, so the extraction of tumor regions from MRI scan images becomes difficult. A Gray Level Co-occurrence Matrix (GLCM) is applied to MRI scan images to detect tumor and non-tumor regions in the brain. The main aim of medical imaging is to extract meaningful information accurately from the images. The method of detecting brain tumors from MRI scan images is often divided into four stages: pre-processing, skull stripping, segmentation, and feature extraction.
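A minimal NumPy version of the GLCM step described in this abstract: build a gray-level co-occurrence matrix for one pixel offset and derive a couple of Haralick-style texture features from it. This is a generic illustration only; the MRI-specific pre-processing, skull stripping and classification are not shown, and the offset, level count and feature choice are assumptions.

```python
import numpy as np

def glcm(image, levels=8, offset=(0, 1), symmetric=True, normed=True):
    """Gray-level co-occurrence matrix for a 2-D image of integer gray levels."""
    dr, dc = offset
    rows, cols = image.shape
    m = np.zeros((levels, levels), dtype=float)
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            m[image[r, c], image[r + dr, c + dc]] += 1     # count co-occurring level pairs
    if symmetric:
        m += m.T
    if normed and m.sum() > 0:
        m /= m.sum()
    return m

def texture_features(m):
    i, j = np.indices(m.shape)
    return {
        "contrast":    float((m * (i - j) ** 2).sum()),
        "homogeneity": float((m / (1.0 + (i - j) ** 2)).sum()),
        "energy":      float((m ** 2).sum()),
    }

# Toy "scan": quantise a random image to 8 gray levels.
rng = np.random.default_rng(0)
img = (rng.random((64, 64)) * 8).astype(int)
print(texture_features(glcm(img)))
```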
2

Aravind Prakash M, Indra Gandhi K, Sriram R, and Amaysingh. "An Effective Comparative Analysis of Data Preprocessing Techniques in Network Intrusion Detection System Using Deep Neural Networks." In Advances in Parallel Computing. IOS Press, 2021. http://dx.doi.org/10.3233/apc210005.

Abstract:
Recently, machine learning algorithms have been utilized for identifying network threats. Threats, otherwise called intrusions, will harm the network severely, and thus must be dealt with cautiously. In the proposed research work, a deep learning model has been applied to recognize and categorize unanticipated and unpredictable cyber-attacks. The UNSW NB-15 dataset has a vital number of features which will be learned by the hidden layers present in the suggested model and classified by the output layer. The suitable quantity of layers, neurons in each layer and the optimizer utilized in the proposed work are obtained through a sequence of trial and error experiments. The concluding model acquired can be utilized for estimating future malicious attacks. There are several data preprocessing techniques available at our disposal. We used two types of techniques in our experiment: 1) Log transformation, MinMaxScaling and factorize technique; and 2) Z-score encoding and dummy encoding technique. In general, the selection of data preprocessing techniques has a direct impact on the output of any machine learning process, and our research attempts to demonstrate this effect.
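A compact sketch of the two preprocessing pipelines compared in the chapter, written with pandas/scikit-learn on a tiny made-up frame (column names and values are placeholders, not the UNSW NB-15 schema): pipeline 1 uses a log transform, min-max scaling and factorised categoricals; pipeline 2 uses z-score standardisation and dummy (one-hot) encoding.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Placeholder data standing in for a flow-record table such as UNSW NB-15.
df = pd.DataFrame({
    "duration": [0.1, 3.2, 0.02, 12.5],
    "bytes":    [120, 98000, 40, 1_500_000],
    "proto":    ["tcp", "udp", "tcp", "icmp"],
    "label":    [0, 1, 0, 1],
})
num_cols, cat_cols = ["duration", "bytes"], ["proto"]

# Pipeline 1: log transform + min-max scaling, categoricals as integer codes.
p1 = df.copy()
p1[num_cols] = np.log1p(p1[num_cols])
p1[num_cols] = MinMaxScaler().fit_transform(p1[num_cols])
p1[cat_cols] = p1[cat_cols].apply(lambda s: pd.factorize(s)[0])

# Pipeline 2: z-score standardisation, categoricals one-hot ("dummy") encoded.
p2 = df.copy()
p2[num_cols] = StandardScaler().fit_transform(p2[num_cols])
p2 = pd.get_dummies(p2, columns=cat_cols)

print(p1.head())
print(p2.head())
```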
3

Andreotti, Daniele, Armando Fella, and Eleonora Luppi. "Simulated Events Production on the Grid for the BaBar Experiment." In Handbook of Research on Grid Technologies and Utility Computing, 226–34. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-184-1.ch022.

Abstract:
The BaBar experiment has been taking data since 1999 to examine the violation of charge and parity (CP) symmetry in the field of high energy physics. Event simulation for this experiment is a compute-intensive task due to the complexity of the Monte-Carlo simulation implemented on the GEANT engine. Data needed as input for the simulation (stored in the ROOT format) are classified into two categories: conditions data describing the detector status when data are recorded, and background trigger data providing the noise signal necessary to obtain a realistic simulation. In this chapter, the grid approach is applied to the BaBar production framework using the INFN-GRID network.
4

Wang, Miao-Ling, and Hsiao-Fan Wang. "Web Mining System for Mobile-Phone Marketing." In Mobile Computing, 2924–35. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-054-7.ch220.

Abstract:
With the ever-increasing and ever-changing flow of information available on the Web, information analysis has never been more important. Web text mining, which includes text categorization, text clustering, association analysis and prediction of trends, can assist us in discovering useful information in an effective and efficient manner. In this chapter, we have proposed a Web mining system that incorporates both online efficiency and off-line effectiveness to provide the “right” information based on users’ preferences. A Bi- Objective Fuzzy c-Means algorithm and information retrieval technique, for text categorization, clustering and integration, was employed for analysis. The proposed system is illustrated via a case involving the Web site marketing of mobile phones. A variety of Web sites exist on the Internet and a common type involves the trading of goods. In this type of Web site, the question to ask is: If we want to establish a Web site that provides information about products, how can we respond quickly and accurately to queries? This is equivalent to asking: How can we design a flexible search engine according to users’ preferences? In this study, we have applied data mining techniques to cope with such problems, by proposing, as an example, a Web site providing information on mobile phones in Taiwan. In order to efficiently provide useful information, two tasks were considered during the Web design phase. One related to off-line analysis: this was done by first carrying out a survey of frequent Web users, students between 15 and 40 years of age, regarding their preferences, so that Web customers’ behavior could be characterized. Then the survey data, as well as the products offered, were classified into different demand and preference groups. The other task was related to online query: this was done through the application of an information retrieval technique that responded to users’ queries. Based on the ideas above the remainder of the chapter is organized as follows: first, we present a literature review, introduce some concepts and review existing methods relevant to our study, then, the proposed Web mining system is presented, a case study of a mobile-phone marketing Web site is illustrated and finally, a summary and conclusions are offered.
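The clustering engine mentioned in this abstract is a fuzzy c-means variant. A plain single-objective fuzzy c-means in NumPy is sketched below to show the membership/centroid updates involved; the bi-objective extension and the retrieval component of the chapter are not reproduced here.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means: returns (centroids, membership matrix).
    x: (n_samples, n_features); m > 1 is the fuzzifier."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                    # memberships sum to 1 per sample
    for _ in range(iters):
        um = u ** m
        centroids = (um.T @ x) / um.sum(axis=0)[:, None]
        d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                            # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        new_u = inv / inv.sum(axis=1, keepdims=True)     # membership update
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return centroids, u

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centers, memberships = fuzzy_c_means(data, n_clusters=2)
print(centers)                       # roughly [0, 0] and [3, 3]
```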
5

Karthikeyan P., Karunakaran Velswamy, Pon Harshavardhanan, Rajagopal R., JeyaKrishnan V., and Velliangiri S. "Machine Learning Techniques Application." In Research Anthology on Architectures, Frameworks, and Integration Strategies for Distributed and Cloud Computing, 1396–417. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-5339-8.ch068.

Abstract:
Machine learning is the part of artificial intelligence that makes machines learn without being expressly programmed. Machine learning applications have built the modern world. Machine learning techniques are mainly classified into three categories: supervised, unsupervised, and semi-supervised. Machine learning is an interdisciplinary field which can be applied in different areas including science, business, and research. Supervised techniques are applied in agriculture, email spam and malware filtering, online fraud detection, optical character recognition, natural language processing, and face detection. Unsupervised techniques are applied in market segmentation, sentiment analysis, and anomaly detection. Deep learning is being utilized for sound, image, video, time series, and text. This chapter covers applications of various machine learning techniques in social media, agriculture, and task scheduling in distributed systems.
6

S., Sivakumar, Sreedevi E., PremaLatha V., and Haritha D. "Parallel Defect Detection Model on Uncertain Data for GPUs Computing by a Novel Ensemble Learning." In Applications of Artificial Intelligence for Smart Technology, 146–63. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-3335-2.ch010.

Abstract:
Defect detection is an important concept in machine learning techniques, and ambiguous datasets make it a challenging issue as software products expand in size and complexity. This chapter presents a novel multi-learner model, ensembled to predict software metrics using classification algorithms, proposes an algorithm applied in a parallel method for detection on ambiguous data using density sampling, and develops an implementation running on both GPUs and multi-core CPUs. Defects in the NASA PROMISE defect dataset are adequately predicted and classified using these models and GPU computing. Compared to traditional learning models, the improved algorithm and its parallel implementation on GPUs show less processing time for the ensemble model than for the decision tree algorithm, and effectively optimize the true positive rate.
7

Cruz-Chávez, Marco Antonio, Abelardo Rodríguez-León, Rafael Rivera-López, Fredy Juárez-Pérez, Carmen Peralta-Abarca, and Alina Martínez-Oropeza. "Grid Platform Applied to the Vehicle Routing Problem with Time Windows for the Distribution of Products." In Logistics Management and Optimization through Hybrid Artificial Intelligence Systems, 52–81. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0297-7.ch003.

Abstract:
Around the world there have recently been new and more powerful computing platforms created that can be used to work with computer science problems. Some of these problems that are dealt with are real problems of the industry; most are classified by complexity theory as hard problems. One such problem is the vehicle routing problem with time windows (VRPTW). The computational Grid is a platform which has recently ventured into the treatment of hard problems to find the best solution for these. This chapter presents a genetic algorithm for the vehicle routing problem with time windows. The algorithm iteratively applies a mutation operator, first of the intelligent type and second of the restricting type. The algorithm takes advantage of Grid computing to increase the exploration and exploitation of the solution space of the problem. The Grid performance is analyzed for a genetic algorithm and a measurement of the latencies that affect the algorithm is studied. The convenience of applying this new computing platform to the execution of algorithms specially designed for Grid computing is presented.
8

Jiménez-Ruano, Adrián, Pere Joan Gelabert, Victor Resco de Dios, Cristina Vega-García, Luis Torres, Jaime Ribalaygua, and Marcos Rodrigues. "Modeling daily natural-caused ignition probability in the Iberian Peninsula." In Advances in Forest Fire Research 2022, 1214–19. Imprensa da Universidade de Coimbra, 2022. http://dx.doi.org/10.14195/978-989-26-2298-9_184.

Abstract:
In the European Mediterranean region, natural-caused wildfires are a small fraction of total ignitions. Lightning strikes are the most common source of non-human fires, being strongly tied to specific synoptic conditions and patterns associated with atmospheric instability, such as dry thunderstorms. Likewise, lightning-related ignitions often associate with dry fuels and dense vegetation layers. In the case of the Iberian Peninsula, the confluence of these factors favors recurrent lightning fires in the eastern Mediterranean mountain ranges. However, under appropriate conditions lightning fires can start elsewhere, holding the potential to propagate over vast distances. In this work, we assessed the likelihood of ignition leveraging a large dataset of lightning strikes and historical fires available in Spain. We trained and tested a machine learning model to evaluate the probability of ignition provided that lightning strikes the ground. Our model was calibrated in the period 2009-2015 using data for mainland Spain plus the Balearic Islands. To build the binary response variable we classified lightning strikes according to whether or not they triggered a fire event. For each lightning strike we extracted a set of covariates relating to fuel moisture conditions, the presence and density of the vegetation layer, and the shape of the relief. The final model was subsequently applied to forecast daily probabilities at 1x1 km resolution for the entire Iberian Peninsula. Although the model was originally calibrated in Spain, we extended the predictions to the entire Iberian Peninsula, which will allow us to validate our outputs in the future against the Portuguese dataset of recent natural-caused fires (larger than 1 ha) from 2001 to 2021. Overall, the model attained a great predictive performance with a median AUC of 0.82. Natural-caused ignitions were triggered mainly under low dead fuel moisture conditions (dFMC 250). Lightning strikes with negative polarity seem to trigger fires more frequently when the mean density of discharges was greater than 5. Finally, natural wildfires usually started at higher elevations (above 500 m.a.s.l.).
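A schematic version of the modelling step described in this abstract: train a binary classifier on per-strike covariates (fuel moisture, vegetation density, terrain) and evaluate it with AUC. The feature names, the synthetic data and the choice of a random-forest learner are placeholders; the authors' actual covariates, model and 1 km daily forecasting step are documented in the chapter.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a table of lightning strikes with covariates.
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(2, 40, n),      # dead fuel moisture (%)
    rng.uniform(0, 1, n),       # vegetation density index
    rng.uniform(0, 2500, n),    # elevation (m)
    rng.uniform(1, 20, n),      # strike density in the surrounding cell
])
# Ignition is more likely when fuels are dry and vegetation is dense (toy rule).
p = 1.0 / (1.0 + np.exp(0.25 * X[:, 0] - 3.0 * X[:, 1] - 0.5))
y = rng.random(n) < p

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]           # per-strike ignition probability
print("AUC:", round(roc_auc_score(y_te, prob), 3))
```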
9

Sahu, Tirath Prasad, and Sarang Khandekar. "A Machine Learning-Based Lexicon Approach for Sentiment Analysis." In Research Anthology on Implementing Sentiment Analysis Across Multiple Disciplines, 836–51. IGI Global, 2022. http://dx.doi.org/10.4018/978-1-6684-6303-1.ch044.

Abstract:
Sentiment analysis can be a very useful approach for the extraction of useful information from text documents. The main idea of sentiment analysis is to capture what people think about a particular online review subject, e.g. product reviews, movie reviews, etc. Sentiment analysis is the process by which these reviews are classified as positive or negative. The web is enriched with a huge amount of reviews which can be analyzed to make them meaningful. This article presents the use of lexicon resources for sentiment analysis of different publicly available reviews. First, the polarity shift of reviews is handled by negations. Intensifiers, punctuation and acronyms are also taken into consideration during the processing phase. Second, words which carry some opinion are extracted; these words are then used for computing a score. Third, machine learning algorithms are applied, and the experimental results show that the proposed model is effective in identifying the sentiments of reviews and opinions.
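A toy version of the lexicon stage the article describes: score a review from word polarities while flipping polarity after a negation and boosting it after an intensifier. The tiny lexicon and rules here are invented for illustration; the article additionally feeds such lexicon-based features into machine learning classifiers.

```python
LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "terrible": -2.0, "boring": -1.0}
NEGATIONS = {"not", "never", "no"}
INTENSIFIERS = {"very": 1.5, "really": 1.3}

def lexicon_score(text):
    """Sum word polarities, flipping after a negation and scaling after an intensifier."""
    score, negate, boost = 0.0, False, 1.0
    for token in text.lower().replace(".", " ").replace(",", " ").split():
        if token in NEGATIONS:
            negate = True
            continue
        if token in INTENSIFIERS:
            boost = INTENSIFIERS[token]
            continue
        polarity = LEXICON.get(token, 0.0)
        if polarity != 0.0:
            score += -polarity * boost if negate else polarity * boost
            negate, boost = False, 1.0        # effects apply to the next opinion word only
    return score

for review in ["The movie was really great", "not a good film, very boring"]:
    label = "positive" if lexicon_score(review) > 0 else "negative"
    print(f"{review!r}: {lexicon_score(review):+.1f} -> {label}")
```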
10

McElroy, Michael B. "Power from the Sun Abundant But Expensive." In Energy and Climate. Oxford University Press, 2016. http://dx.doi.org/10.1093/oso/9780190490331.003.0015.

Abstract:
As discussed in the preceding chapter, wind resources available from nonforested, nonurban, land-based environments in the United States are more than sufficient to meet present and projected future US demand for electricity. Wind resources are comparably abundant elsewhere. As indicated in Table 10.2, a combination of onshore and offshore wind could accommodate prospective demand for electricity for all of the countries classified as top-10 emitters of CO2. Solar energy reaching the Earth's surface averages about 200 W m–2 (Fig. 4.1). If this power source could be converted to electricity with an efficiency of 20%, as little as 0.1% of the land area of the United States (3% of the area of Arizona) could supply the bulk of US demand for electricity. As discussed later in this chapter, the potential source of power from the sun is significant even for sun-deprived countries such as Germany. Wind and solar energy provide potentially complementary sources of electricity in the sense that when the supply from one is low, there is a good chance that it may be offset by a higher contribution from the other. Winds blow strongest typically at night and in winter. The potential supply of energy from the sun, in contrast, is highest during the day and in summer. The source from the sun is better matched thus than wind to respond to the seasonal pattern of demand for electricity, at least for the United States (as indicated in Fig. 10.5). There are two approaches available to convert energy from the sun to electricity. The first involves using photovoltaic (PV) cells, devices in which absorption of radiation results directly in production of electricity. The second is less direct. It requires solar energy to be captured and deployed first to produce heat, with the heat used subsequently to generate steam, the steam applied then to drive a turbine. The sequence in this case is similar to that used to generate electricity in conventional coal, oil, natural gas, and nuclear-powered systems. The difference is that the energy source is light from the sun rather than a carbon-based fossil fuel or fissionable uranium.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Applied computing not elsewhere classified"

1

Siebra, Clauirton, Denise Alencar, Ana Paula Guimarães, and Jefferson Silva. "An Empirical Study about the Vision of a Tutoring Team on the Distance Learning Process." In Workshop sobre Educação em Computação. Sociedade Brasileira de Computação - SBC, 2015. http://dx.doi.org/10.5753/wei.2015.10223.

Full text
Abstract:
Tutors are a fundamental element of the Distance Learning (DL) process. In fact, they complement the role of lecturers, giving closer assistance to DL students. Considering the importance of tutors, this work investigates the DL process from their perspective. To that end, an empirical analysis was carried out via questionnaires and interviews applied to 28 distance and 17 local tutors of a DL computing degree course. The collected information was analyzed and classified in order to highlight the main features, problems, and solutions that have been applied over the past semesters. A list of suggestions to improve the educational environment/tool and the pedagogic method was also produced as a result of this research.
APA, Harvard, Vancouver, ISO, and other styles
2

Seong, Bo-Ok, Jimin Ahn, Myeongjun Son, and Hyeongok Lee. "Three-degree graph and design of an optimal routing algorithm." In 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). AHFE International, 2022. http://dx.doi.org/10.54941/ahfe1001466.

Full text
Abstract:
The importance of learning, as well as of high-performance computers, is growing significantly. In parallel computing, the interconnection of a single memory with multiple processors is referred to as multiprocessing; in a similar sense, multicomputing refers to the connection of memory-equipped processors through communication links. The performance of a multicomputer is closely tied to the structure linking its processors. Let the connection structure of the processors be called an interconnection network. An interconnection network can be modeled as a classical graph consisting of nodes and edges: a processor is represented as a node and a communication link as an edge. When the proposed interconnection networks are categorized by their number of nodes, they can be classified as follows: mesh-class structures with n×k nodes (Torus, Toroidal mesh, Diagonal mesh, Honeycomb mesh), hypercube-class structures with 2^n nodes (Hypercube, Folded hypercube, Twisted cube, de Bruijn), and star-graph-class structures with n! nodes (Star graph, Bubblesort star, Alternating group, Macro-star, Transposition). The mesh-type structure is a planar graph that is widely used in domains such as VLSI circuit design and base-station placement (covering) problems in mobile communication networks. Mesh-class structures are comparatively easy to design and can be applied to algorithmic problems in a practical manner; they are therefore a classical choice when designing a parallel computing network system. This study proposes a novel mesh structure, De3, with degree three, and designs an optimal routing algorithm as well as a parallel path algorithm based on an analysis of its diameter. The address of a node in the De3 graph is expressed with n-bit binary digits, and an edge is denoted with the operator %. We build an interval function that computes the locational property of the corresponding nodes in order to derive an optimal routing path from node u to node v in the De3 graph. We present the optimal routing algorithm based on this interval function, calculating and validating the diameter of the De3 graph. Furthermore, we propose an algorithm that establishes node-disjoint parallel paths, i.e., non-overlapping paths from node u to node v. The outcome of this study is a novel interconnection network structure that is applicable to routing-algorithm optimization while limiting the number of communication links per node to three. These results indicate viable operation of high-performance edge computing systems in a cost-efficient and effective manner.
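The abstract does not specify the De3 edge operator, so the sketch below uses a hypothetical degree-3 adjacency rule over n-bit addresses purely to make the routing idea concrete; only the breadth-first routing logic (optimal for unweighted graphs) is general, and it is not the authors' interval-function algorithm.

```python
# Generic breadth-first routing over a degree-3 interconnection network.
# `neighbors` is a *hypothetical* degree-3 rule over n-bit addresses used
# only to make the sketch runnable; it is NOT the actual De3 edge operator.

from collections import deque

N_BITS = 4  # nodes are n-bit binary addresses, here n = 4

def neighbors(u: int) -> list[int]:
    """Hypothetical degree-3 neighborhood: flip bit 0, flip the top bit,
    and a cyclic left shift of the address."""
    mask = (1 << N_BITS) - 1
    top = 1 << (N_BITS - 1)
    shift = ((u << 1) | (u >> (N_BITS - 1))) & mask
    return [u ^ 1, u ^ top, shift]

def route(src: int, dst: int) -> list[int]:
    """Shortest path from src to dst found by BFS."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            break
        for v in neighbors(u):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    if dst not in parent:
        return []          # unreachable under this adjacency rule
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]

if __name__ == "__main__":
    print(route(0b0000, 0b1011))  # one shortest route under the toy rule
```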
APA, Harvard, Vancouver, ISO, and other styles
3

Felipe Duarte Alves, Luiz, Almir d. De Oliveira Costa Junior, and Jose Anglada Rivera. "Avaliação de Usabilidade do Aplicativo Be a Maker com Alunos de Licenciatura em Computação." In Computer on the Beach. Itajaí: Universidade do Vale do Itajaí, 2022. http://dx.doi.org/10.14210/cotb.v13.p014-020.

Full text
Abstract:
This work presents the results of usability tests of the Be a Maker application, carried out with students of a Degree in Computing course. Be a Maker is an application that aims to bring together several projects involving Educational Robotics (ER), allowing users (teachers or students) to share their practical experiences of using robotics inside or outside the classroom. Usability tests based on the SUS method were applied through an online questionnaire. In general, the data showed good usability results, with an average of 85 points on the SUS scale, classified as Excellent and Acceptable.
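For context, a minimal sketch of SUS scoring as it is commonly defined (odd items contribute response − 1, even items contribute 5 − response, and the sum is scaled by 2.5); the sample responses are made up and are not the study's data.

```python
# System Usability Scale (SUS) scoring as commonly defined:
# odd-numbered items contribute (response - 1), even-numbered items
# contribute (5 - response), and the total is scaled by 2.5 to give 0-100.
# The sample responses below are invented for illustration.

def sus_score(responses):
    """responses: list of 10 Likert answers (1-5), in questionnaire order."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

participants = [
    [5, 1, 5, 2, 4, 1, 5, 1, 5, 2],
    [4, 2, 4, 1, 5, 2, 4, 2, 4, 1],
]
scores = [sus_score(p) for p in participants]
print(scores, "mean =", sum(scores) / len(scores))
```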
APA, Harvard, Vancouver, ISO, and other styles
4

GOWDRIDGE, T., N. DERVILIS, and K. WORDEN. "ON THE APPLICATION OF TOPOLOGICAL DATA ANALYSIS: A Z24 BRIDGE CASE STUDY." In Structural Health Monitoring 2021. Destech Publications, Inc., 2022. http://dx.doi.org/10.12783/shm2021/36304.

Full text
Abstract:
Topological methods are very rarely used in structural health monitoring (SHM), or indeed in structural dynamics generally, especially when considering the structure and topology of observed data. Topological methods can provide a way of proposing new metrics and methods of scrutinising data that might otherwise be overlooked. In this work, a method of quantifying the shape of data, via a field called topological data analysis, is introduced. The main tool within topological data analysis is persistent homology, a method of quantifying the shape of data over a range of length scales. The required background and a method of computing persistent homology are briefly introduced here. Ideas from topological data analysis are then applied to a Z24 Bridge case study, to scrutinise different data partitions classified by the conditions under which the data were collected. A metric from topological data analysis is used to compare the partitions. The results presented demonstrate that the presence of damage alters the manifold shape more significantly than the effects of temperature.
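A minimal sketch of the kind of pipeline the abstract describes, assuming the ripser and persim Python packages; the random point clouds below merely stand in for the Z24 data partitions, and the metric choice (bottleneck distance) is one common option, not necessarily the one used in the paper.

```python
# Minimal persistent-homology sketch: compute persistence diagrams for
# two data partitions and compare them with the bottleneck distance.
# Assumes the `ripser` and `persim` packages; random point clouds stand
# in for the Z24 partitions.

import numpy as np
from ripser import ripser
from persim import bottleneck

rng = np.random.default_rng(0)
partition_a = rng.normal(size=(200, 3))          # e.g. "baseline" features
partition_b = rng.normal(size=(200, 3)) * 1.5    # e.g. "damaged" features

# H0 and H1 persistence diagrams for each partition
dgms_a = ripser(partition_a, maxdim=1)["dgms"]
dgms_b = ripser(partition_b, maxdim=1)["dgms"]

# Compare the H1 (loop) diagrams; a larger distance suggests a larger
# change in the shape of the data manifold between partitions.
d = bottleneck(dgms_a[1], dgms_b[1])
print(f"Bottleneck distance between H1 diagrams: {d:.3f}")
```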
APA, Harvard, Vancouver, ISO, and other styles
5

Ahmed, Rizwan, Gyunyoung Heo, Dong-Keun Cho, and Jongwon Choi. "Characterization of Radioactive Waste From Side Structural Components of a CANDU Reactor for Decommissioning Applications in Korea." In ASME 2010 13th International Conference on Environmental Remediation and Radioactive Waste Management. ASMEDC, 2010. http://dx.doi.org/10.1115/icem2010-40201.

Full text
Abstract:
Reactor core components and structural materials of nuclear power plants to be decommissioned have been irradiated by neutrons of various intensities and spectra. This long-term irradiation produces a large number of radioactive isotopes that serve as a source of radioactivity for thousands of years into the future. Decommissioning of a nuclear reactor is a costly program comprising dismantling, demolition of structures, and waste classification for disposal. The estimation of radionuclides and radiation levels forms an essential part of the whole decommissioning program and can help establish guidelines for waste classification, dismantling, and demolition activities. The ORIGEN2 code has long been used for computing radionuclide concentrations in reactor cores and near-core materials for various burnup-decay cycles, using one-group collapsed cross sections. Since ORIGEN2 assumes a constant flux and constant nuclide capture cross sections in all regions of the core, the uncertainty in its results can grow as the region of interest moves away from the core. This uncertainty can be removed by using a Monte Carlo code, such as MCNP, for the correct calculation of the flux and capture cross sections inside the reactor core and in regions far from it. MCNP has a greater capability to model reactor problems realistically, that is, to incorporate geometrical, compositional, and spectral information. In this paper, the classification of radioactive waste from the side structural components of a CANDU reactor is presented. An MCNP model of the full core was established because of the asymmetric structure of the reactor. Side structural components of total length 240 cm and radius 16.122 cm were modeled as twelve (12) homogenized cells of 20 cm length each along the axial direction. The neutron flux and one-group collapsed cross sections were calculated by MCNP simulation for each cell, and those results were then applied in an ORIGEN2 simulation to estimate the nuclide inventory in the waste. After retrieving the radiation levels of the in-core and ex-core side structural components, the radioactive waste was classified according to international waste-classification standards. The waste from the first and second cells of the side structural components was found to exhibit characteristics of Class C and Class B waste, respectively. The rest of the waste was found to have activity levels corresponding to Class A radioactive waste and is therefore suitable for land disposal in accordance with international standards of waste classification and disposal.
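To illustrate only the final classification step described in this abstract, here is a simplified sketch: the nuclides and class limits are placeholders (not the regulatory tables used in the paper), and the per-nuclide "most restrictive class" rule is a simplification of the sum-of-fractions rule used for real mixtures.

```python
# Illustrative sketch of waste classification: compare per-nuclide
# specific activities against class limits and take the most restrictive
# class triggered. Nuclides and limits are PLACEHOLDERS, and the rule is
# simplified (real classification of mixtures uses sum-of-fractions).

CLASS_LIMITS = {            # hypothetical limits in Ci/m^3 per nuclide
    "Ni-63": {"A": 3.5, "B": 70.0, "C": 700.0},
    "Nb-94": {"A": 0.02, "B": None, "C": 0.2},   # None = no Class B entry
}
ORDER = ["A", "B", "C"]

def classify_waste(activities):
    """activities: {nuclide: specific activity in Ci/m^3} -> waste class."""
    worst = "A"
    for nuclide, a in activities.items():
        limits = CLASS_LIMITS[nuclide]
        cls = None
        for c in ORDER:
            lim = limits[c]
            if lim is not None and a <= lim:
                cls = c
                break
        if cls is None:
            return "GTCC"   # greater than Class C: unsuitable for land disposal
        if ORDER.index(cls) > ORDER.index(worst):
            worst = cls
    return worst

# e.g. a cell near the core vs. a far cell (made-up activities)
print(classify_waste({"Ni-63": 500.0, "Nb-94": 0.15}))  # -> "C"
print(classify_waste({"Ni-63": 1.0,   "Nb-94": 0.01}))  # -> "A"
```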
APA, Harvard, Vancouver, ISO, and other styles
6

Elapolu, Phani Ganesh, Pradip Majumdar, Steven A. Lottes, and Milivoje Kostic. "Development of a Three-Dimensional Iterative Methodology Using a Commercial CFD Code for Flow Scouring Around Bridge Piers." In ASME 2012 Heat Transfer Summer Conference collocated with the ASME 2012 Fluids Engineering Division Summer Meeting and the ASME 2012 10th International Conference on Nanochannels, Microchannels, and Minichannels. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/ht2012-58491.

Full text
Abstract:
One of the major concerns affecting the safety of bridges with foundation supports in riverbeds is the scouring of riverbed material from around bridge supports during floods. Scour is the engineering term for the erosion caused by water around bridge elements such as piers, monopiles, or abutments. Scour holes around a monopile can jeopardize the stability of the whole structure and will require deeper piling or local armoring of the riverbed. About 500,000 bridges in the National Bridge Registry are over waterways. Many of these are considered vulnerable to scour, about five percent are classified as scour critical, and over the last 30 years bridge failures caused by foundation scour have averaged about one every two weeks. It is therefore of great importance to predict scour development correctly for a given bridge and flood conditions. Apart from saving time and money, the integrity of bridges is important for ensuring public safety. Recent advances in computing boundary motion, in combination with mesh morphing to maintain mesh quality in computational fluid dynamics analysis, can be applied to predict scour hole development, analyze the local scour phenomenon, and predict the scour hole shape and size around a pier. The main objective of the present study was to develop and implement a three-dimensional iterative procedure to predict scour hole formation around a cylindrical pier using the mesh morphing capabilities of the STAR-CCM+ commercial CFD code. A computational methodology was developed using Python and Java macros and implemented with a Bash script on a Linux high-performance computing cluster. An implicit unsteady approach was used to obtain the bed shear stresses. The mesh was iteratively deformed toward the equilibrium scour position based on the excess shear stress above the critical shear stress (supercritical shear stress). The model solves the flow field using the Reynolds-Averaged Navier-Stokes (RANS) equations and the standard k–ε turbulence model. The iterative process involves stretching (morphing) the meshed domain after every time step, away from the bottom where the scouring flow parameters are supercritical, and remeshing the relevant computational domain after a certain number of time steps when the morphed mesh compromises the stability of further simulation. The simulation model was validated by comparing results with the limited experimental data available in the literature.
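A toy one-dimensional illustration of the iterative idea in this abstract: lower the bed wherever the computed shear stress exceeds the critical value, recompute, and stop when the bed is everywhere subcritical. The shear-stress "model" below is a made-up surrogate for the CFD solution, not part of the authors' methodology.

```python
# Toy 1D illustration of excess-shear-stress bed morphing: erode the bed
# in proportion to (tau - tau_crit) where shear is supercritical, then
# recompute, until an equilibrium scour shape is reached. The shear model
# is a made-up surrogate for the CFD solution, purely for illustration.

import numpy as np

x = np.linspace(-5.0, 5.0, 201)      # distance from pier centreline (m)
bed = np.zeros_like(x)               # bed elevation change (m), 0 = original bed
tau_crit = 1.0                       # critical shear stress (arbitrary units)
relax = 0.05                         # morphing step size per unit excess stress

def bed_shear(bed):
    """Surrogate for the CFD-computed shear: amplified near the pier and
    relaxed as the local scour hole deepens."""
    amplification = 1.0 + 2.0 * np.exp(-x**2)      # peak at the pier
    return tau_crit * amplification * np.exp(bed)  # deeper hole -> lower shear

for step in range(500):
    tau = bed_shear(bed)
    excess = np.maximum(tau - tau_crit, 0.0)
    if excess.max() < 1e-3:
        break
    bed -= relax * excess            # erode where shear is supercritical

print(f"stopped after {step} steps; max scour depth = {-bed.min():.2f} m")
```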
APA, Harvard, Vancouver, ISO, and other styles
7

He, Qin, Rubin Wang, and Xiaochuan Pan. "This paper presents a two-dimensional histogram shifting technique for reversible data hiding algorithm. In order to avoid the distortion drift caused by hiding data into stereo H.264 video, we choose arbitrary embeddable blocks from 4×4 quantized discrete cosine transform luminance blocks which will not affect their adjacent blocks. Two coefficients in each embeddable block are chosen as a hiding coefficient pair. The selected coefficient pairs are classified into different sets on the basis of their values. Data could be hidden according to the set which the value of the coefficient pair belongs to. When the value of one coefficient may be changed by adding or subtracting 1, two data bits could be hidden by using the proposed method, whereas only one data bit could be embedded by employing the conventional histogram shifting. Experiments show that this two-dimensional histogram shifting method can be used to improve the hiding performance." In 10th International Conference on Software Engineering and Applications (SEAS 2021). AIRCC Publishing Corporation, 2021. http://dx.doi.org/10.5121/csit.2021.110205.

Full text
Abstract:
Arc, a virus-like gene crucial for learning and memory, was discovered by researchers in the field of neurological disorders. Arc mRNA's singly directed path, and its regionally restricted protein binding, offer a potential avenue for investigating how toxic proteins responsible for some memory-deficiency-related diseases are shuttled. The mean time to switching (MTS) is calculated explicitly, quantifying the switching process with statistical methods combined with a Hamiltonian Markov chain (HMC). The model, derived from a predator-prey system with a type II functional response, studies the mechanism of the normal population with an intrinsic rate of increase and the persisters with an instantaneous discovery rate and conversion coefficients. A numerical method is applied for the 2D approximation of the Hamiltonian with intrinsic-noise-induced switching, combined with the geometric minimum action method. In the application of the Hamiltonian Markov chain, the behavior of the conversion (between mRNA and proteins through six states, from off to on) is described with probabilistic conditional logic formulas, and the final concentration is computed with both continuous- and discrete-time Markov chains (CTMC/DTMC) through embedding and switching diffusion. The MTS, the trajectories, and the Hamiltonian dynamics demonstrate the practical and robust advantages of our model in interpreting the switching process of genes (IGFs, Hax, Arc, etc.) with respect to memory deficiency in the aging process, which can be useful in further drug-efficiency tests and disease treatment. Coincidentally, the Hamiltonian is also widely used in describing quantum mechanics and is convenient for computation with time and position information using quantum bits, and in the second model we construct, switching between excitatory and inhibitory neurons, the similarity between qubits and neurons is an interesting object of study as well. In particular, with interactions operated through phase gates, the excitation from the ground state to the excited state is a good analogue of neuron excitation. Beyond the theoretical aspect, the experimental methods in the neuron-switching model are also inspiring for quantum computing. The most basic example is that stimulating the hippocampus can be regarded as identical to spontaneous neural excitation (|g> to |e>), where a pi-pulse is utilized to drive the ground state to the higher state. There thus exists great potential to study the transfer between states with our switching models, both classically and quantum computationally.
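The abstract's six-state Hamiltonian/Markov-chain model is not fully specified here, so the sketch below only illustrates the general concept of estimating a mean time to switching (MTS) for a generic stochastic birth-death switch via the Gillespie algorithm; all rates and the switching threshold are invented for illustration.

```python
# Conceptual illustration only: estimate mean time to switching (MTS)
# for a generic stochastic birth-death "off -> on" switch with the
# Gillespie algorithm. Rates and the threshold are invented and are not
# the model analysed in the paper above.

import random

def simulate_mts(birth=lambda n: 12.0 * n**2 / (25.0 + n**2) + 0.3,
                 death=lambda n: 1.0 * n,
                 threshold=10, n0=0, seed=None):
    """Time for the copy number n to first reach `threshold` (the 'on' state)."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while n < threshold:
        b, d = birth(n), death(n)
        total = b + d
        t += rng.expovariate(total)                 # waiting time to the next event
        n += 1 if rng.random() < b / total else -1  # birth or death event
        n = max(n, 0)
    return t

times = [simulate_mts(seed=i) for i in range(1000)]
print("estimated MTS:", sum(times) / len(times))
```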
APA, Harvard, Vancouver, ISO, and other styles
