Dissertations / Theses on the topic 'Applied computing not elsewhere classified'

Consult the top 28 dissertations / theses for your research on the topic 'Applied computing not elsewhere classified.'

1

Jones, Christopher Charles Rawlinson. "A study of novel computing methods for solving large electromagnetic hazards problems." Thesis, University of Central Lancashire, 2002. http://clok.uclan.ac.uk/18842/.

Full text
Abstract:
The aim of this work is to explore means to improve the speed of the computational electromagnetics (CEM) processing in use for aircraft design and certification work by a factor of 1000 or so. The investigation addresses particularly the set of problems described as electromagnetic hazards, comprising lightning, EMC and the illumination of an aircraft by external radio sources or HIRF (high intensity radiated fields). These are areas which are very much aspects of the engineering of the aircraft, where the required accuracy of simulations is of the order of 6 dB, as build and test repeatability cannot achieve better than this. Computer simulations of these interactions at the outset of this work were often taking 10 days and more on the largest parallel computers then available in the UK (Cray T3D - 40 GFLOPS nominal peak). Such run times made any form of optimisation impossibly lengthy. While the future offered the certain prospect of more powerful computers, the simulations had to become more comprehensive in their representation of materials and features, geometry of the object, and particularly the representation of wires and cables had to improve radically, and turn-around times for analysis had to be improved for design assessment as well as to make design optimisation by trade-off studies feasible. All of these could easily consume all the advantage that the new computers would give. The investigation has centred around techniques that might be applied via alteration to the most widely used and usable numerical methods in CEM applied to the electromagnetic hazards, and to techniques that might be applied to the manner of their use. In one case, the investigation has explored a particular possibility for minimising the duration of computation and extrapolating the resulting data to the longest time-scales required. Future improvements in the capabilities of radiating boundary conditions to mimic the effect of an infinite boundary at close range will further improve the benefits already established in this work, but this is not yet realisable. However, it has been established that a combination of techniques with some processes devised through this work can and does deliver the performance improvement sought. It has further been shown that issues such as object resonance, which could have incurred significant error and distrust of computational results, can be satisfactorily overcome within the required accuracy. Four papers have been published arising from this work. Some of these techniques are now in use in routine analyses contributing to BAE SYSTEMS programmes. Plans are in place to incorporate all of the successful techniques and processes.
APA, Harvard, Vancouver, ISO, and other styles
2

Bratton, Daniel. "Simple and adaptive particle swarms." Thesis, Goldsmiths College (University of London), 2010. http://research.gold.ac.uk/4752/.

Full text
Abstract:
The substantial advances that have been made to both the theoretical and practical aspects of particle swarm optimization over the past 10 years have taken it far beyond its original intent as a biological swarm simulation. This thesis details and explains these advances in the context of what has been achieved to this point, as well as what has yet to be understood or solidified within the research community. Taking into account the state of the modern field, a standardized PSO algorithm is defined for benchmarking and comparative purposes, both within the work and for the community as a whole. This standard is refined and simplified over several iterations into a form that does away with potentially undesirable properties of the standard algorithm while retaining equivalent or superior performance on the common set of benchmarks. This refinement, referred to as a discrete recombinant swarm (PSO-DRS), requires only a single user-defined parameter in the positional update equation and uses minimal additive stochasticity, rather than the multiplicative stochasticity inherent in the standard PSO. After a mathematical analysis of the PSO-DRS algorithm, an adaptive framework is developed and rigorously tested, demonstrating the effects of the tunable particle- and swarm-level parameters. This adaptability shows practical benefit by broadening the range of problems which the PSO-DRS algorithm is well-suited to optimize.
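For context, the multiplicative stochasticity mentioned above refers to the uniform random factors that scale the attraction terms in the canonical PSO update. A minimal sketch of that canonical update (not the PSO-DRS variant; the coefficient values are the commonly quoted constriction-equivalent constants and are assumptions here) is:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.72984, c1=1.496172, c2=1.496172):
    """One velocity/position update for a single particle in canonical PSO.

    x, v, pbest, gbest are equal-length lists (one entry per dimension).
    The random.random() factors are the multiplicative stochasticity that
    PSO-DRS replaces with recombination plus a small additive noise term.
    """
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        vi = w * vi + c1 * random.random() * (pi - xi) + c2 * random.random() * (gi - xi)
        new_v.append(vi)
        new_x.append(xi + vi)
    return new_x, new_v

# One particle in 2-D (illustrative numbers):
x, v = pso_step([0.0, 0.0], [0.1, -0.2], pbest=[1.0, 1.0], gbest=[2.0, -1.0])
```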
APA, Harvard, Vancouver, ISO, and other styles
3

Jenkins, David William. "Risk assessment applied to consumer products with reference to CE marking machines for use at work." Thesis, Aston University, 2004. http://publications.aston.ac.uk/12230/.

Full text
Abstract:
New Approach Directives now govern the health and safety of most products whether destined for workplace or domestic use. These Directives have been enacted into UK law by various specific legislation principally relating to work equipment, machinery and consumer products. This research investigates whether the risk assessment approach used to ensure the safety of machinery may be applied to consumer products. Crucially, consumer products are subject to the Consumer Protection Act (CPA) 1987, where there is no direct reference to 'assessing risk'. This contrasts with the law governing the safety of products used in the workplace, where risk assessment underpins the approach. New Approach Directives are supported by European harmonised standards, and in the case of machinery, further supported by the risk assessment standard, EN 1050. The system regulating consumer product safety is discussed, its key elements identified and a graphical model produced. This model incorporates such matters as conformity assessment, the system of regulation, near miss and accident reporting. A key finding of the research is that New Approach Directives have a common feature of specifying essential performance requirements that provide a hazard prompt-list that can form the basis for a risk assessment (the hazard identification stage). Drawing upon 272 prosecution cases, and with thirty examples examined in detail, this research provides evidence that despite the high degree of regulation, unsafe consumer products still find their way onto the market. The research presents a number of risk assessment tools to help Trading Standards Officers (TSOs) prioritise their work at the initial inspection stage when dealing with subsequent enforcement action.
APA, Harvard, Vancouver, ISO, and other styles
4

Timperley, Matthew. "The integration of explanation-based learning and fuzzy control in the context of software assurance as applied to modular avionics." Thesis, University of Central Lancashire, 2015. http://clok.uclan.ac.uk/16726/.

Full text
Abstract:
A Modular Power Management System (MPMS) is an energy management system intended for highly modular applications, able to adapt to changing hardware intelligently. There is a dearth in the literature on Integrated Modular Avionics (IMA), which has previously not addressed the implications for software operating within this architecture, namely the adaptation of control laws to changing hardware. This work proposes some approaches to address this issue. Control laws may require adaptation to overcome hardware degradation, or system upgrades. There is also a growing interest in the ability to change hardware configurations of UASs (Unmanned Aerial Systems) between missions, to better fit the characteristics of each one. Hardware changes in the aviation industry come with an additional caveat: in order for a software system to be used in aviation it must be certified as part of a platform. This certification process has no clear guidelines for adaptive systems. Adapting to a changing platform, as well as addressing the necessary certification effort, motivated the development of the MPMS. The aim of the work is twofold. Firstly, to modify existing control strategies for new hardware. This is achieved with generalisation and transfer learning. Secondly, to reduce the workload involved with maintaining a safety argument for an adaptive controller. Three areas of work are used to demonstrate the satisfaction of this aim. Explanation-Based Learning (EBL) is proposed for the derivation of new control laws. The EBL domain theory embodies general control strategies, which are specialised to form fuzzy rules. A method for translating explanation structures into fuzzy rules is presented. The generation of specific rules, from a general control strategy, is one way to adapt to controlling a modular platform. A fuzzy controller executes the rules derived by EBL. This maintains fast rule execution as well as the separation of strategy and application. The ability of EBL to generate rules which are useful when executed by a fuzzy controller is demonstrated by an experiment. A domain theory is given to control throttle output, which is used to generate fuzzy rules. These rules have a positive impact on energy consumption in simulated flight. EBL is proposed, for rule derivation, because it focuses on generalisation. Generalisations can apply knowledge from one situation, or hardware, to another. This can be preferable to re-derivation of similar control laws. Furthermore, EBL can be augmented to include analogical reasoning when reaching an impasse. An algorithm which integrates analogy into EBL has been developed as part of this work. The inclusion of analogical reasoning facilitates transfer learning, which furthers the flexibility of the MPMS in adapting to new hardware. The adaptive capability of the MPMS is demonstrated by application to multiple simulated platforms. EBL produces explanation structures. Augmenting these explanation structures with a safety-specific domain theory can produce skeletal safety cases. A technique to achieve this has been developed. Example structures are generated for previously derived fuzzy rules. Generating safety cases from explanation structures can form the basis for an adaptive safety argument.
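As a general illustration of what executing fuzzy rules of this kind looks like in software (the rules, membership breakpoints and consequent values below are invented for illustration and are not taken from the thesis):

```python
def ramp_down(x, a, b):
    """Membership that is 1 below a, 0 above b, linear in between."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def throttle_controller(battery_level):
    """Evaluate two illustrative fuzzy rules and combine them (Sugeno-style weighted average).

    Rule 1: IF battery IS low  THEN throttle = 0.3
    Rule 2: IF battery IS high THEN throttle = 0.9
    """
    low = ramp_down(battery_level, 0.2, 0.6)         # degree to which battery is 'low'
    high = 1.0 - ramp_down(battery_level, 0.4, 0.8)  # degree to which battery is 'high'
    if low + high == 0.0:
        return 0.6  # neutral fallback if no rule fires
    return (low * 0.3 + high * 0.9) / (low + high)

print(throttle_controller(0.25))  # battery mostly 'low' -> throttle close to 0.3
```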
APA, Harvard, Vancouver, ISO, and other styles
5

Zhu, Huaiyu. "Neural networks and adaptive computers : theory and methods of stochastic adaptive computation." Thesis, University of Liverpool, 1993. http://eprints.aston.ac.uk/365/.

Full text
Abstract:
This thesis studies the theory of stochastic adaptive computation based on neural networks. A mathematical theory of computation is developed in the framework of information geometry, which generalises Turing machine (TM) computation in three aspects - it can be continuous, stochastic and adaptive - and retains TM computation as a subclass called "data processing". The concepts of Boltzmann distribution, Gibbs sampler and simulated annealing are formally defined and their interrelationships are studied. The concept of "trainable information processor" (TIP) - a parameterised stochastic mapping with a rule to change the parameters - is introduced as an abstraction of neural network models. A mathematical theory of the class of homogeneous semilinear neural networks is developed, which includes most of the commonly studied NN models such as back propagation NN, Boltzmann machine and Hopfield net, and a general scheme is developed to classify the structures, dynamics and learning rules. All the previously known general learning rules are based on gradient following (GF), which is susceptible to local optima in weight space. Contrary to the widely held belief that this is rarely a problem in practice, numerical experiments show that for most non-trivial learning tasks GF learning never converges to a global optimum. To overcome the local optima, simulated annealing is introduced into the learning rule, so that the network retains an adequate amount of "global search" in the learning process. Extensive numerical experiments confirm that the network always converges to a global optimum in the weight space. The resulting learning rule is also easier to implement and more biologically plausible than back propagation and Boltzmann machine learning rules: only a scalar needs to be back-propagated for the whole network. Various connectionist models have been proposed in the literature for solving various instances of problems, without a general method by which their merits can be combined. Instead of proposing yet another model, we try to build a modular structure in which each module is basically a TIP. As an extension of simulated annealing to temporal problems, we generalise the theory of dynamic programming and Markov decision process to allow adaptive learning, resulting in a computational system called a "basic adaptive computer", which has the advantage over earlier reinforcement learning systems, such as Sutton's "Dyna", in that it can adapt in a combinatorial environment and still converge to a global optimum. The theories are developed with a universal normalisation scheme for all the learning parameters so that the learning system can be built without prior knowledge of the problems it is to solve.
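The thesis's own annealed learning rule is not reproduced here; purely as an illustration of the general mechanism by which simulated annealing escapes local optima, a Metropolis-style acceptance step looks like this (the proposal mechanism and cooling schedule are assumptions):

```python
import math
import random

def anneal_accept(delta_error, temperature):
    """Metropolis acceptance rule used in simulated annealing.

    A proposed weight update that increases the training error (delta_error > 0)
    is still accepted with probability exp(-delta_error / temperature), which is
    what lets the search escape local optima; improvements are always accepted.
    """
    if delta_error <= 0:
        return True
    return random.random() < math.exp(-delta_error / temperature)

# Typical use inside a training loop (schedule and proposal are assumptions):
# if anneal_accept(error(w_proposed) - error(w), T):
#     w = w_proposed
# T *= 0.995  # geometric cooling
```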
APA, Harvard, Vancouver, ISO, and other styles
6

Rattray, Magnus. "Modelling the dynamics of genetic algorithms using statistical mechanics." Thesis, University of Manchester, 1996. http://publications.aston.ac.uk/598/.

Full text
Abstract:
A formalism for modelling the dynamics of Genetic Algorithms (GAs) using methods from statistical mechanics, originally due to Prugel-Bennett and Shapiro, is reviewed, generalized and improved upon. This formalism can be used to predict the averaged trajectory of macroscopic statistics describing the GA's population. These macroscopics are chosen to average well between runs, so that fluctuations from mean behaviour can often be neglected. Where necessary, non-trivial terms are determined by assuming maximum entropy with constraints on known macroscopics. Problems of realistic size are described in compact form and finite population effects are included, often proving to be of fundamental importance. The macroscopics used here are cumulants of an appropriate quantity within the population and the mean correlation (Hamming distance) within the population. Including the correlation as an explicit macroscopic provides a significant improvement over the original formulation. The formalism is applied to a number of simple optimization problems in order to determine its predictive power and to gain insight into GA dynamics. Problems which are most amenable to analysis come from the class where alleles within the genotype contribute additively to the phenotype. This class can be treated with some generality, including problems with inhomogeneous contributions from each site, non-linear or noisy fitness measures, simple diploid representations and temporally varying fitness. The results can also be applied to a simple learning problem, generalization in a binary perceptron, and a limit is identified for which the optimal training batch size can be determined for this problem. The theory is compared to averaged results from a real GA in each case, showing excellent agreement if the maximum entropy principle holds. Some situations where this approximation breaks down are identified. In order to fully test the formalism, an attempt is made on the strongly NP-hard problem of storing random patterns in a binary perceptron. Here, the relationship between the genotype and phenotype (training error) is strongly non-linear. Mutation is modelled under the assumption that perceptron configurations are typical of perceptrons with a given training error. Unfortunately, this assumption does not provide a good approximation in general. It is conjectured that perceptron configurations would have to be constrained by other statistics in order to accurately model mutation for this problem. Issues arising from this study are discussed in conclusion and some possible areas of further research are outlined.
APA, Harvard, Vancouver, ISO, and other styles
7

Svénsen, Johan F. M. "GTM: the generative topographic mapping." Thesis, Aston University, 1998. http://publications.aston.ac.uk/1245/.

Full text
Abstract:
This thesis describes the Generative Topographic Mapping (GTM) --- a non-linear latent variable model, intended for modelling continuous, intrinsically low-dimensional probability distributions, embedded in high-dimensional spaces. It can be seen as a non-linear form of principal component analysis or factor analysis. It also provides a principled alternative to the self-organizing map --- a widely established neural network model for unsupervised learning --- resolving many of its associated theoretical problems. An important, potential application of the GTM is visualization of high-dimensional data. Since the GTM is non-linear, the relationship between data and its visual representation may be far from trivial, but a better understanding of this relationship can be gained by computing the so-called magnification factor. In essence, the magnification factor relates the distances between data points, as they appear when visualized, to the actual distances between those data points. There are two principal limitations of the basic GTM model. The computational effort required will grow exponentially with the intrinsic dimensionality of the density model. However, if the intended application is visualization, this will typically not be a problem. The other limitation is the inherent structure of the GTM, which makes it most suitable for modelling moderately curved probability distributions of approximately rectangular shape. When the target distribution is very different to that, the aim of maintaining an 'interpretable' structure, suitable for visualizing data, may come into conflict with the aim of providing a good density model. The fact that the GTM is a probabilistic model means that results from probability theory and statistics can be used to address problems such as model complexity. Furthermore, this framework provides solid ground for extending the GTM to wider contexts than that of this thesis.
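For readers unfamiliar with the model, in its standard published formulation the GTM maps a regular grid of K latent points x_k into data space through y(x; W) = W φ(x) and models D-dimensional data with the resulting constrained Gaussian mixture (generic notation, not copied from the thesis):

```latex
\[
p(\mathbf{t}\mid W,\beta) \;=\; \frac{1}{K}\sum_{k=1}^{K}
  \left(\frac{\beta}{2\pi}\right)^{D/2}
  \exp\!\left(-\frac{\beta}{2}\,\bigl\lVert W\phi(\mathbf{x}_k)-\mathbf{t}\bigr\rVert^{2}\right)
\]
```

Here W and the inverse noise variance β are fitted by EM, and the magnification factor discussed in the abstract is derived from the derivatives of the mapping y.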
APA, Harvard, Vancouver, ISO, and other styles
8

Csató, Lehel. "Gaussian processes : iterative sparse approximations." Thesis, Aston University, 2002. http://publications.aston.ac.uk/1327/.

Full text
Abstract:
In recent years there has been an increased interest in applying non-parametric methods to real-world problems. Significant research has been devoted to Gaussian processes (GPs) due to their increased flexibility when compared with parametric models. These methods use Bayesian learning, which generally leads to analytically intractable posteriors. This thesis proposes a two-step solution to construct a probabilistic approximation to the posterior. In the first step we adapt the Bayesian online learning to GPs: the final approximation to the posterior is the result of propagating the first and second moments of intermediate posteriors obtained by combining a new example with the previous approximation. The propagation of functional forms is solved by showing the existence of a parametrisation to posterior moments that uses combinations of the kernel function at the training points, transforming the Bayesian online learning of functions into a parametric formulation. The drawback is the prohibitive quadratic scaling of the number of parameters with the size of the data, making the method inapplicable to large datasets. The second step solves the problem of the exploding parameter size and makes GPs applicable to arbitrarily large datasets. The approximation is based on a measure of distance between two GPs, the KL-divergence between GPs. This second approximation is with a constrained GP in which only a small subset of the whole training dataset is used to represent the GP. This subset is called the Basis Vector (BV) set, and the resulting GP is a sparse approximation to the true posterior. As this sparsity is based on the KL-minimisation, it is probabilistic and independent of the way the posterior approximation from the first step is obtained. We combine the sparse approximation with an extension to the Bayesian online algorithm that allows multiple iterations for each input, thus approximating a batch solution. The resulting sparse learning algorithm is a generic one: for different problems we only change the likelihood. The algorithm is applied to a variety of problems and we examine its performance both on classical regression and classification tasks and on data assimilation and a simple density estimation problem.
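To make the "combinations of the kernel function" parametrisation concrete, recall the exact GP regression posterior (standard notation, kernel matrix K, noise variance sigma^2, targets y; this is background, not the thesis's algorithm itself), alongside the sparse form in which the mean is represented only through the BV set:

```latex
\[
\mu(\mathbf{x}_*) = \mathbf{k}_*^{\top}(K+\sigma^2 I)^{-1}\mathbf{y},
\qquad
\sigma^2(\mathbf{x}_*) = k(\mathbf{x}_*,\mathbf{x}_*) - \mathbf{k}_*^{\top}(K+\sigma^2 I)^{-1}\mathbf{k}_*,
\qquad
\mu_{\text{sparse}}(\mathbf{x}) = \sum_{i\in\mathrm{BV}} \alpha_i\, k(\mathbf{x},\mathbf{x}_i)
\]
```

The coefficients alpha_i (together with a low-rank correction to the covariance) are the parameters that the online algorithm updates as each new example arrives.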
APA, Harvard, Vancouver, ISO, and other styles
9

Goldingay, Harry J. "Agent Based Models of Competition and Collaboration." Thesis, Aston University, 2010. http://publications.aston.ac.uk/15212/.

Full text
Abstract:
Swarm intelligence is a popular paradigm for algorithm design. Frequently drawing inspiration from natural systems, it assigns simple rules to a set of agents with the aim that, through local interactions, they collectively solve some global problem. Current variants of a popular swarm based optimization algorithm, particle swarm optimization (PSO), are investigated with a focus on premature convergence. A novel variant, dispersive PSO, is proposed to address this problem and is shown to lead to increased robustness and performance compared to current PSO algorithms. A nature inspired decentralised multi-agent algorithm is proposed to solve a constrained problem of distributed task allocation. Agents must collect and process mail batches, without global knowledge of their environment or communication between agents. New rules for specialisation are proposed and are shown to exhibit improved efficiency and flexibility compared to existing ones. These new rules are compared with a market based approach to agent control. The efficiency (average number of tasks performed), the flexibility (ability to react to changes in the environment), and the sensitivity to load (ability to cope with differing demands) are investigated in both static and dynamic environments. A hybrid algorithm combining both approaches is shown to exhibit improved efficiency and robustness. Evolutionary algorithms are employed, both to optimize parameters and to allow the various rules to evolve and compete. We also observe extinction and speciation. In order to interpret algorithm performance we analyse the causes of efficiency loss, derive theoretical upper bounds for the efficiency, as well as a complete theoretical description of a non-trivial case, and compare these with the experimental results. Motivated by this work we introduce agent "memory" (the possibility for agents to develop preferences for certain cities) and show that not only does it lead to emergent cooperation between agents, but also to a significant increase in efficiency.
APA, Harvard, Vancouver, ISO, and other styles
10

Das, Gupta Jishu. "Performance issues for VOIP in Access Networks." Thesis, University of Southern Queensland, 2005. https://eprints.qut.edu.au/12724/1/Das_Gupta_MComputing_Dissertation.pdf.

Full text
Abstract:
There is a general consensus that the Quality of Service (QoS) of Voice over Internet Protocol (VOIP) is of growing importance for research and study. In this dissertation we investigate the performance of VOIP and the impact of resource limitations on the performance of Access Networks. The impact of VOIP performance in access networks is particularly important in regions where Internet resources are limited and the cost of improving these resources is prohibitive. It is clear that perceived VOIP performance, as measured by mean opinion score (MOS) in experiments where subjects are asked to rate communication quality, is determined by end to end delay on the communication path, delay variation, packet loss, echo, the coding algorithm in use and noise. These performance indicators can be measured and the contribution of the access network can be estimated. The relation between MOS and technical measurement is less well understood. We investigate the contribution of the access network to the overall performance of VOIP services and the ways in which access networks can be designed to improve VOIP performance. Issues of interest include the choice of coding rate, dynamic variation of coding rate, packet length, methods of controlling echo, and the use of Active Queue Management (AQM) in Access Network routers. Methods for analyzing the impact of the access network on VOIP performance are surveyed and reviewed. We also consider some approaches for improving the performance of VOIP through experiments using the NS2 simulation software, with a view to gaining a better understanding of the design of access networks.
APA, Harvard, Vancouver, ISO, and other styles
11

Caon, Maurizio. "Context-aware gestural interaction in the smart environments of the ubiquitous computing era." Thesis, University of Bedfordshire, 2014. http://hdl.handle.net/10547/344619.

Full text
Abstract:
Technology is becoming pervasive and the current interfaces are not adequate for the interaction with the smart environments of the ubiquitous computing era. Recently, researchers have started to address this issue introducing the concept of natural user interface, which is mainly based on gestural interactions. Many issues are still open in this emerging domain and, in particular, there is a lack of common guidelines for coherent implementation of gestural interfaces. This research investigates gestural interactions between humans and smart environments. It proposes a novel framework for the high-level organization of the context information. The framework is conceived to provide the support for a novel approach using functional gestures to reduce the gesture ambiguity and the number of gestures in taxonomies and improve the usability. In order to validate this framework, a proof-of-concept has been developed. A prototype has been developed by implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests have been conducted to assess the gesture recognition accuracy and the usability of the interfaces developed following the proposed framework. The results show that the method provides optimal gesture recognition from very different view-points, whilst the usability tests have yielded high scores. Further investigation of the context information has been performed, tackling the problem of user status. Here, user status is intended as human activity, and a technique based on an innovative application of electromyography is proposed. The tests show that the proposed technique has achieved good activity recognition accuracy. The context is treated also as system status. In ubiquitous computing, the system can adopt different paradigms: wearable, environmental and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms. Moreover, it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than with the other paradigms.
APA, Harvard, Vancouver, ISO, and other styles
12

Almalki, Obaid. "A framework for e-government success from the user's perspective." Thesis, University of Bedfordshire, 2014. http://hdl.handle.net/10547/344620.

Full text
Abstract:
This thesis aims to contribute to a better understanding of e-government portal success by developing an e-government success framework from a user's perspective. The proposed framework is underpinned by relevant theories, such as DeLone and McLean's IS success model, the Technology Acceptance Model (TAM), self-efficacy theory and trust. The culture aspect has also been taken into consideration by adopting the personal values theory introduced by Schwartz (1992). Three data collection methods were used. First, an exploratory study was carried out to explore the main aspects and factors for understanding e-government systems success. Second, a Delphi study was conducted to investigate which of the ten value types are particularly relevant to success or have a significant impact. Third, a survey-based study was carried out to validate empirically the proposed theoretical framework. Results of the exploratory study helped to identify the potential success factors of e-government systems. The results of the Delphi study suggest that four of the ten values, namely self-direction, stimulation, security, and tradition, most likely affect e-government portal success. Structural equation modelling techniques were applied to test the research model using a large-scale survey. The findings of hypothesis testing suggested that e-government portal success (i.e. net benefit) was directly affected by actual use and user satisfaction and indirectly affected by a number of factors concerning system quality, service quality, information quality, perceived risk, and computer self-efficacy. By combining the IS success model and TAM, this study found that system quality, information quality and service quality affected perceived ease of use, but service quality had no effect on perceived usefulness. However, perceived risk seemed to have no effect on attitudes towards using, but a very small negative effect on perceived usefulness. Users' computer skills were found to have no effect on perceived ease of use and only a very small effect on perceived usefulness. This indicates that risk and IT skills play a less significant role in the context of e-government. The research findings confirmed that adoption was not equivalent to success, but it was a necessary precondition for success. In the personal values-attitude-behaviour model, the empirical evidence suggested that Conservation affects attitude towards use which, in turn, affects behavioural intention to re-use. Openness to change had no effect on attitude toward using. The findings provide important implications for e-government research and practice.
APA, Harvard, Vancouver, ISO, and other styles
13

(9189365), Anthony A. Lowe. "The Theory of Applied Mind of Programming." Thesis, 2020.

Find full text
Abstract:

The Theory of Applied Mind of Programming (TAMP) provides a new model for describing how programmers think and learn. Historically, many students have struggled when learning to program. Programming as a discipline lives in logic and reason, but theory and science tell us that people do not always think rationally. TAMP builds upon the groundbreaking work of dual process theory and classical educational theorists (Piaget, Vygotsky, and Bruner) to rethink our assumptions about cognition and learning. Theory guides educators and researchers to improve their practice, not just their work but also their thinking. TAMP provides new theoretical constructs for describing the mental activities of programming, the challenges in learning to program, as well as a guidebook for creating and recognizing the value of theory.

This dissertation is highly nontraditional. It does not include a typical empirical study using a familiar research methodology to guide data collection and analysis. Instead, it leverages existing data, as accumulated over a half-century of computing education research and a century of research into cognition and learning. Since an applicable methodology of theory-building did not exist, this work also defines a new methodology for theory building. The methodology of this dissertation borrows notation from philosophy and methods from grounded theory to define a transparent and rigorous approach to creating applied theories. By revisiting past studies through the lens of new theoretical propositions, theorists can conceive, refine, and internally validate new constructs and propositions to revolutionize how we view technical education.

The takeaway from this dissertation is a set of new theoretical constructs and promising research and pedagogical approaches. TAMP proposes an applied model of Jerome Bruner's mental representations that describe the knowledge and cognitive processes of an experienced programmer. TAMP highlights implicit learning and the role of intuition in decision making across many aspects of programming. This work includes numerous examples of how to apply TAMP and its supporting theories in re-imagining teaching and research to offer alternative explanations for previously puzzling findings on student learning. TAMP may challenge conventional beliefs about applied reasoning and the extent of traditional pedagogy, but it also offers insights on how to promote creative problem-solving in students.


APA, Harvard, Vancouver, ISO, and other styles
14

(9751070), Vaibhav R. Ostwal. "SPINTRONIC DEVICES FROM CONVENTIONAL AND EMERGING 2D MATERIALS FOR PROBABILISTIC COMPUTING." Thesis, 2020.

Find full text
Abstract:

Novel computational paradigms based on non-von Neumann architectures are being extensively explored for modern data-intensive applications and big-data problems. One direction in this context is to harness the intrinsic physics of spintronics devices for the implementation of nanoscale and low-power building blocks of such emerging computational systems. For example, a Probabilistic Spin Logic (PSL) that consists of networks of p-bits has been proposed for neuromorphic computing, Bayesian networks, and for solving optimization problems. In my work, I will discuss two types of device-components required for PSL: (i) p-bits mimicking binary stochastic neurons (BSN) and (ii) compound synapses for implementing weighted interconnects between p-bits. Furthermore, I will also show how the integration of recently discovered van der Waals ferromagnets in spintronics devices can reduce the current densities required by orders of magnitude, paving the way for future low-power spintronics devices.

First, a spin-device with input-output isolation and stable magnets capable of generating tunable random numbers, similar to a BSN, was demonstrated. In this device, spin-orbit torque pulses are used to initialize a nano-magnet with perpendicular magnetic anisotropy (PMA) along its hard axis. After removal of each pulse, the nano-magnet can relax back to either of its two stable states, generating a stream of binary random numbers. By applying a small Oersted field using the input terminal of the device, the probability of obtaining 0 or 1 in binary random numbers (P) can be tuned electrically. Furthermore, our work shows that in the case when two stochastic devices are connected in series, “P” of the second device is a function of “P” of the first p-bit and the weight of the interconnection between them. Such control over correlated probabilities of stochastic devices using interconnecting weights is the working principle of PSL.
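As background, a commonly used behavioural model of a p-bit / binary stochastic neuron outputs +1 or -1 with a probability that is a sigmoidal function of its input; the device above realises this physically via spin-orbit torque, and the coupling weight J in the sketch below is purely an illustrative assumption:

```python
import math
import random

def p_bit(input_current):
    """Behavioural model of a binary stochastic neuron / p-bit.

    Returns +1 or -1; the probability of +1 is (tanh(input) + 1) / 2, i.e. a
    sigmoidal function of the input, which is what 'tuning P electrically'
    corresponds to in the device description above.
    """
    return 1 if random.uniform(-1.0, 1.0) < math.tanh(input_current) else -1

# Two p-bits coupled in series through an illustrative weight J: the second
# p-bit's input depends on the first one's state, so their outputs correlate.
J = 1.0
m1 = p_bit(0.5)
m2 = p_bit(J * m1)
```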

Next, my work focused on compact and energy efficient implementations of p-bits and interconnecting weights using modified spin-devices. It was shown that unstable in-plane magnetic tunneling junctions (MTJs), i.e. MTJs with a low energy barrier, naturally fluctuate between two states (parallel and anti-parallel) without any external excitation, in this way generating binary random numbers. Furthermore, the spin-orbit torque of tantalum is used to control the time spent by the in-plane MTJ in either of its two states, i.e. "P" of the device. In this device, the READ and WRITE paths are separated, since the MTJ state is read by passing a current through the MTJ (READ path) while "P" is controlled by passing a current through the tantalum bar (WRITE path). Hence, a BSN/p-bit is implemented without energy-consuming hard axis initialization of the magnet and Oersted fields. Next, probabilistic switching of stable magnets was utilized to implement a novel compound synapse, which can be used for weighted interconnects between p-bits. In this experiment, an ensemble of nano-magnets was subjected to spin-orbit torque pulses such that each nano-magnet has a finite probability of switching. Hence, when a series of pulses is applied, the total magnetization of the ensemble gradually increases with the number of pulses applied, similar to the potentiation and depression curves of synapses. Furthermore, it was shown that a modified pulse scheme can improve the linearity of the synaptic behavior, which is desired for neuromorphic computing. By implementing both neuronal and synaptic devices using simple nano-magnets, we have shown that PSL can be realized using a modified Magnetic Random Access Memory (MRAM) technology. Note that MRAM technology exists in many current foundries.

To further reduce the current densities required for spin-torque devices, we have fabricated heterostructures consisting of a 2-dimensional semiconducting ferromagnet (Cr2Ge2Te6) and a metal with spin-orbit coupling (tantalum). Because of properties such as clean interfaces, a perfect crystalline nanomagnet structure, magnetic moments sustained down to the mono-layer limit, and low current shunting, 2D ferromagnets require orders of magnitude lower current densities for spin-orbit torque switching than conventional metallic ferromagnets such as CoFeB.

APA, Harvard, Vancouver, ISO, and other styles
15

(10184063), Younghoon Kim. "Approximate Computing: From Circuits to Software." Thesis, 2021.

Find full text
Abstract:
Many modern workloads such as multimedia, recognition, mining, search, vision, etc. possess the characteristic of intrinsic application resilience: The ability to produce acceptable-quality outputs despite their underlying computations being performed in an approximate manner. Approximate computing has emerged as a paradigm that exploits intrinsic application resilience to design systems that produce outputs of acceptable quality with significant performance/energy improvement. The research community has proposed a range of approximate computing techniques spanning across circuits, architecture, and software over the last decade. Nevertheless, approximate computing is yet to be incorporated into mainstream HW/SW design processes largely due to the deviation from the conventional design flow and the lack of runtime approximation controllability by the user.

The primary objective of this thesis is to provide approximate computing techniques across different layers of abstraction that possess the two following characteristics: (i) They can be applied with minimal change to the conventional design flow, and (ii) the approximation is controllable at runtime by the user with minimal overhead. To this end, this thesis proposes three novel approximate computing techniques: Clock overgating which targets HW design at the Register Transfer Level (RTL), value similarity extensions which enhance general-purpose processors with a set of microarchitectural and ISA extensions, and data subsetting which targets SW executing for commodity platforms.

The thesis first explores clock overgating, which extends the concept of clock gating: A conventional low-power technique that turns off the clock to a Flip-Flop (FF) when the value remains unchanged. In contrast to traditional clock gating, in clock overgating the clock signals to selected FFs in the circuit are gated even when the circuit functionality is sensitive to their state. This saves additional power in the clock tree, the gated FFs and in their downstream logic, while a quality loss occurs if the erroneous FF states propagate to the circuit outputs. This thesis develops a systematic methodology to identify an energy-efficient clock overgating configuration for any given circuit and quality constraint. Towards this end, three key strategies for efficiently pruning the large space of possible overgating configurations are proposed: Significance-based overgating, grouping FFs into overgating islands, and utilizing internal signals of the circuit as triggers for overgating. Across a suite of 6 machine learning accelerators, energy benefits of 1.36X on average are achieved at the cost of a very small (<0.5%) loss in classification accuracy.

The thesis also explores value similarity extensions, a set of lightweight micro-architectural and ISA extensions for general-purpose processors that provide performance improvements for computations on data structures with value similarity. The key idea is that programs often contain repeated instructions that are performed on very similar inputs (e.g., neighboring pixels within a homogeneous region of an image). In such cases, it may be possible to skip an instruction that operates on data similar to a previously executed instruction, and approximate the skipped instruction's result with the saved result of the previous one. The thesis provides three key strategies for realizing this approach: Identifying potentially skippable instructions from user annotations in SW, obtaining similarity information for future load values from the data cache line currently being accessed, and a mechanism for saving & reusing results of potentially skippable instructions. As a further optimization, the thesis proposes to replace multiple loop iterations that produce similar results with a specialized instruction sequence. The proposed extensions are modeled on the gem5 architectural simulator, achieving speedup of 1.81X on average across 6 machine-learning benchmarks running on a microcontroller-class in-order processor.

Finally, the thesis explores a data-centric approach to approximate computing called data subsetting that shifts the focus of approximation from computations to data. The key idea is to restrict the application's data accesses to a subset of its elements so that the overall memory footprint becomes smaller. Constraining the accesses to lie within a smaller memory footprint renders the memory accesses more cache-friendly, thereby improving performance. This thesis presents a C++ data structure template called SubsettableTensor, which embodies mechanisms to define an accessible subset of data and redirect accesses away from non-subset elements, for realizing data subsetting in SW. The proposed concept is evaluated on parallel SW implementations of 7 machine learning applications on a 48-core AMD Opteron server. Experimental results indicate that 1.33X-4.44X performance improvement can be achieved within a <0.5% loss in classification accuracy.
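To make the data-subsetting idea concrete, here is a minimal Python analogue of restricting accesses to a subset of elements. It is only a conceptual sketch and does not reproduce the thesis's C++ SubsettableTensor interface or its redirection policy:

```python
class SubsetArray:
    """Conceptual analogue of data subsetting (not the thesis's SubsettableTensor).

    Accesses are redirected into a smaller 'subset' of the underlying data, so
    the active memory footprint (and hence cache pressure) shrinks at the cost
    of returning approximate values for non-subset indices.
    """
    def __init__(self, data, subset_fraction=0.5):
        self.subset_size = max(1, int(len(data) * subset_fraction))
        self.subset = data[:self.subset_size]

    def __getitem__(self, i):
        # Non-subset indices are folded back into the subset (one possible policy).
        return self.subset[i % self.subset_size]

weights = SubsetArray(list(range(1000)), subset_fraction=0.25)
print(weights[10], weights[900])  # index 900 is redirected into the 250-element subset
```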

In summary, the proposed approximation techniques have shown significant efficiency improvements for various machine learning applications in circuits, architecture and SW, underscoring their promise as designer-friendly approaches to approximate computing.
APA, Harvard, Vancouver, ISO, and other styles
16

(10994988), Minglu Li. "ENVIRONMENTAL FACTORS AFFECT SOCIAL ENGINEERING ATTACKS." Thesis, 2021.

Find full text
Abstract:

Social engineering attacks can have serious consequences when it comes to information security. A social engineering attack aims at sensitive personal information by exploiting personality weaknesses and using manipulation techniques. Because the user is often seen as the weakest link, techniques like phishing, baiting, vishing, and deception are used to glean important personal information successfully. This article will analyze the relationship between the environment and social engineering attacks. The data consist of survey responses from 516 people. When it comes to discovering the relationship, there are two parts to the analysis. One is a high-dimensional analysis using multiple algorithms to find a connection between the environment and people's behavior. The other uses a text analysis algorithm to study the pattern of survey questions, which can help discover why certain people have the same tendency in the same scenario. After combining these two, we might show how people have different reactions when dealing with social engineering attacks due to environmental factors.

APA, Harvard, Vancouver, ISO, and other styles
17

(10711986), Michelle E. Coverdale. "The Effect of Choice on Memory and Value for Consumer Products." Thesis, 2021.

Find full text
Abstract:
There is evidence that after a person chooses between two items, the chosen item is more memorable than the unchosen alternative. This is known as the chosen-item effect (Coverdale & Nairne, 2019). We frequently make choices, such as which restaurant to visit for dinner, or which brand of shampoo to buy, and what we choose in these situations can influence what we remember. In the field of consumer behavior, it is believed that memory for brand names and products influences consumer purchasing behaviors. As such, we were interested in investigating whether the chosen-item effect could be extended to memory for brands and product names. If choosing a brand name or product makes it more memorable, then companies can apply the chosen-item effect to improve an item’s memorability and potentially increase sales of that item. In three experiments we investigated whether the chosen-item effect can be extended to memory for products (Experiment 1) and brand names (Experiment 2 & 4b) and found a mnemonic benefit for items that were chosen over those that were not chosen.
In addition to the relationship between choice and memory, there is also a relationship between choice and value. We hypothesized that people would be willing to pay more for items that they have previously chosen, in addition to having better memory for them. We conducted a second set of experiments (Experiments 3 & 4a) to investigate whether the chosen-item effect extends beyond memory to value. We found that items that have previously been chosen were not perceived as being more valuable than those that were not chosen. This finding has theoretical implications for research on the mechanism(s) responsible for the chosen-item effect.
APA, Harvard, Vancouver, ISO, and other styles
18

(8088431), Gopalakrishnan Srinivasan. "Training Spiking Neural Networks for Energy-Efficient Neuromorphic Computing." Thesis, 2019.

Find full text
Abstract:

Spiking Neural Networks (SNNs), widely known as the third generation of artificial neural networks, offer a promising solution to approaching the brain's processing capability for cognitive tasks. With a more biologically realistic perspective on input processing, an SNN performs neural computations using spikes in an event-driven manner. The asynchronous spike-based computing capability can be exploited to achieve improved energy efficiency in neuromorphic hardware. Furthermore, SNNs, on account of spike-based processing, can be trained in an unsupervised manner using Spike Timing Dependent Plasticity (STDP). STDP-based learning rules modulate the strength of a multi-bit synapse based on the correlation between the spike times of the input and output neurons. In order to achieve plasticity with compressed synaptic memory, a stochastic binary synapse is proposed where spike timing information is embedded in the synaptic switching probability. A bio-plausible probabilistic-STDP learning rule consistent with Hebbian learning theory is proposed to train a network of binary as well as quaternary synapses. In addition, a hybrid probabilistic-STDP learning rule incorporating Hebbian and anti-Hebbian mechanisms is proposed to enhance the learnt representations of the stochastic SNN. The efficacy of the presented learning rules is demonstrated for feed-forward fully-connected and residual convolutional SNNs on the MNIST and the CIFAR-10 datasets.
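Purely to illustrate the flavour of a probabilistic STDP rule for a binary synapse (the peak probability, time constant and update policy below are assumptions, not the thesis's rule):

```python
import math
import random

def prob_stdp_update(weight, t_pre, t_post, p_max=0.1, tau=20.0):
    """Probabilistic STDP update for a binary synapse (illustrative constants).

    The pre/post spike-time difference sets a switching *probability*: causal
    ordering (pre before post) tends to switch the synapse ON, anti-causal
    ordering tends to switch it OFF; otherwise the weight is left unchanged.
    """
    dt = t_post - t_pre
    p_switch = p_max * math.exp(-abs(dt) / tau)
    if random.random() < p_switch:
        return 1 if dt > 0 else 0
    return weight
```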

STDP-based learning is limited to shallow SNNs (<5 layers), yielding lower than acceptable accuracy on complex datasets. This thesis proposes a block-wise complexity-aware training algorithm, referred to as BlocTrain, for incrementally training deep SNNs with reduced memory requirements using spike-based backpropagation through time. The deep network is divided into blocks, where each block consists of a few convolutional layers followed by an auxiliary classifier. The blocks are trained sequentially using local errors from the respective auxiliary classifiers. Also, the deeper blocks are trained only on the hard classes determined using the class-wise accuracy obtained from the classifiers of previously trained blocks. Thus, BlocTrain improves the training time and computational efficiency with increasing block depth. In addition, higher computational efficiency is obtained during inference by exiting early for easy class instances and activating the deeper blocks only for hard class instances. The ability of BlocTrain to provide improved accuracy as well as higher training and inference efficiency compared to end-to-end approaches is demonstrated for deep SNNs (up to 11 layers) on the CIFAR-10 and the CIFAR-100 datasets.

Feed-forward SNNs are typically used for static image recognition while recurrent Liquid State Machines (LSMs) have been shown to encode time-varying speech data. Liquid-SNN, consisting of input neurons sparsely connected by plastic synapses to a randomly interlinked reservoir of spiking neurons (or liquid), is proposed for unsupervised speech and image recognition. The strengths of the synapses interconnecting the input and the liquid are trained using STDP, which makes it possible to infer the class of a test pattern without the readout layer typical in standard LSMs. The Liquid-SNN suffers from scalability challenges due to the need to primarily increase the number of neurons to enhance the accuracy. SpiLinC, composed of an ensemble of multiple liquids, where each liquid is trained on a unique input segment, is proposed as a scalable model to achieve improved accuracy. SpiLinC recognizes a test pattern by combining the spiking activity of the individual liquids, each of which identifies unique input features. As a result, SpiLinC offers comparable accuracy to Liquid-SNN with added synaptic sparsity and faster training convergence, which is validated on the digit subset of the TI46 speech corpus and the MNIST dataset.

APA, Harvard, Vancouver, ISO, and other styles
19

(11008509), Nathanael D. Cox. "Two Problems in Applied Topology." Thesis, 2021.

Find full text
Abstract:
In this thesis, we present two main results in applied topology.
In our first result, we describe an algorithm for computing a semi-algebraic description of the quotient map of a proper semi-algebraic equivalence relation given as input. The complexity of the algorithm is doubly exponential in terms of the size of the polynomials describing the semi-algebraic set and equivalence relation.
In our second result, we use the fact that homology groups of a simplicial complex are isomorphic to the space of harmonic chains of that complex to obtain a representative cycle for each homology class. We then establish stability results on the harmonic chain groups.
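The fact used in the second result is the discrete Hodge-theoretic identity that, over the reals, each homology group of a simplicial complex is isomorphic to the kernel of the corresponding combinatorial Laplacian (generic notation; the k-th boundary matrix is written as a partial operator):

```latex
\[
\Delta_k \;=\; \partial_k^{\top}\partial_k \;+\; \partial_{k+1}\partial_{k+1}^{\top},
\qquad
H_k(K;\mathbb{R}) \;\cong\; \ker\Delta_k \;=\; \ker\partial_k \,\cap\, \ker\partial_{k+1}^{\top}
\]
```

A harmonic representative of a homology class is then a cycle lying in the kernel of the Laplacian.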
APA, Harvard, Vancouver, ISO, and other styles
20

Nguyen, Van-Tuong. "An implementation of the parallelism, distribution and nondeterminism of membrane computing models on reconfigurable hardware." 2010. http://arrow.unisa.edu.au:8081/1959.8/100802.

Full text
Abstract:
Membrane computing investigates models of computation inspired by certain features of biological cells, especially features arising because of the presence of membranes. Because of their inherent large-scale parallelism, membrane computing models (called P systems) can be fully exploited only through the use of a parallel computing platform. However, it is an open question whether it is feasible to develop an efficient and useful parallel computing platform for membrane computing applications. Such a computing platform would significantly outperform equivalent sequential computing platforms while still achieving acceptable scalability, flexibility and extensibility. To move closer to an answer to this question, I have investigated a novel approach to the development of a parallel computing platform for membrane computing applications that has the potential to deliver a good balance between performance, flexibility, scalability and extensibility. This approach involves the use of reconfigurable hardware and an intelligent software component that is able to configure the hardware to suit the specific properties of the P system to be executed. As part of my investigations, I have created a prototype computing platform called Reconfig-P based on the proposed development approach. Reconfig-P is the only existing computing platform for membrane computing applications able to support both system-level and region-level parallelism. Using an intelligent hardware source code generator called P Builder, Reconfig-P is able to realise an input P system as a hardware circuit in various ways, depending on which aspects of P systems the user wishes to emphasise at the implementation level. For example, Reconfig-P can realise a P system in a rule-oriented manner or in a region-oriented manner. P Builder provides a unified implementation framework within which the various implementation strategies can be supported. The basic principles of this framework conform to a novel design pattern called Content-Form-Strategy. The framework seamlessly integrates the currently supported implementation approaches, and facilitates the inclusion of additional implementation strategies and additional P system features. Theoretical and empirical results regarding the execution time performance and hardware resource consumption of Reconfig-P suggest that the proposed development approach is a viable means of attaining a good balance between performance, scalability, flexibility and extensibility. Most of the existing computing platforms for membrane computing applications fail to support nondeterministic object distribution, a key aspect of P systems that presents several interesting implementation challenges. I have devised an efficient algorithm for nondeterministic object distribution that is suitable for implementation in hardware. Experimental results suggest that this algorithm could be incorporated into Reconfig-P without too significantly reducing its performance or efficiency.
Thesis (PhDInformationTechnology)--University of South Australia, 2010
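For readers unfamiliar with the behaviour being implemented, a software-level sketch of nondeterministically distributing a region's objects among competing rules until none can fire might look like the following. This is only an illustration of the abstract P system semantics, not the hardware algorithm devised in the thesis:

```python
import random

def distribute_objects(multiset, rules):
    """Nondeterministic, maximally parallel assignment of objects to rules.

    multiset: dict mapping object symbol -> count available in a region.
    rules: list of dicts mapping object symbol -> count consumed per application.
    Randomly chosen applicable rules are applied until none can fire; the
    return value records how many times each rule was applied.
    """
    applications = [0] * len(rules)
    while True:
        applicable = [i for i, r in enumerate(rules)
                      if all(multiset.get(o, 0) >= n for o, n in r.items())]
        if not applicable:
            return applications
        i = random.choice(applicable)          # the nondeterministic choice
        for o, n in rules[i].items():
            multiset[o] -= n
        applications[i] += 1

print(distribute_objects({'a': 5, 'b': 3}, [{'a': 2}, {'a': 1, 'b': 1}]))
```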
APA, Harvard, Vancouver, ISO, and other styles
21

(8815964), Minsuk Koo. "Energy Efficient Neuromorphic Computing: Circuits, Interconnects and Architecture." Thesis, 2020.

Find full text
Abstract:
Neuromorphic computing has gained tremendous interest because of its ability to overcome the limitations of traditional signal processing algorithms in data intensive applications such as image recognition, video analytics, or language translation. The new computing paradigm is built with the goal of achieving high energy efficiency, comparable to biological systems.
To achieve such energy efficiency, there is a need to explore new neuro-mimetic devices, circuits, and architecture, along with new learning algorithms. To that effect, we propose two main approaches:

First, we explore an energy-efficient hardware implementation of a bio-plausible Spiking Neural Network (SNN). The key highlights of our proposed system for SNNs are 1) addressing connectivity issues arising from Network On Chip (NOC)-based SNNs, and 2) proposing stochastic CMOS binary SNNs using a biased random number generator (BRNG). On-chip Power Line Communication (PLC) is proposed to address the connectivity issues in NOC-based SNNs. PLC can use the on-chip power lines augmented with a low-overhead receiver and transmitter to communicate data between neurons that are spatially far apart. We also propose a CMOS 'stochastic-bit' with on-chip stochastic Spike Timing Dependent Plasticity (sSTDP) based learning for memory-compressed binary SNNs. A chip was fabricated in a 90 nm CMOS process to demonstrate memory-efficient reconfigurable on-chip learning using sSTDP training.

Second, we explored coupled oscillatory systems for distance computation and convolution operation. Recent research on nano-oscillators has shown the possibility of using coupled oscillator networks as a core computing primitive for analog/non-Boolean computations. Spin-torque oscillator (STO) can be an attractive candidate for such oscillators because it is CMOS compatible, highly integratable, scalable, and frequency/phase tunable. Based on these promising features, we propose a new coupled-oscillator based architecture for hybrid spintronic/CMOS hardware that computes multi-dimensional norm. The hybrid system composed of an array of four injection-locked STOs and a CMOS detector is experimentally demonstrated. Energy and scaling analysis shows that the proposed STO-based coupled oscillatory system has higher energy efficiency compared to the CMOS-based system, and an order of magnitude faster computation speed in distance computation for high dimensional input vectors.
APA, Harvard, Vancouver, ISO, and other styles
22

(10506350), Amogh Agrawal. "Compute-in-Memory Primitives for Energy-Efficient Machine Learning." Thesis, 2021.

Find full text
Abstract:
Machine Learning (ML) workloads, being memory- and compute-intensive, consume large amounts of power running on conventional computing systems, restricting their implementations to large-scale data centers. Thus, there is a need for building domain-specific hardware primitives for energy-efficient ML processing at the edge. One such approach is in-memory computing, which eliminates frequent and unnecessary data transfers between the memory and the compute units by directly computing on the data where it is stored. Most of the chip area is consumed by on-chip SRAMs in both conventional von Neumann systems (e.g., CPUs/GPUs) and application-specific ICs (e.g., TPUs). Thus, we propose various circuit techniques to enable a range of computations such as bitwise Boolean and arithmetic computations, binary convolution operations, non-Boolean dot-product operations, lookup-table based computations, and spiking neural network implementations - all within standard SRAM memory arrays.

First, we propose X-SRAM, where, by using skewed sense amplifiers, bitwise Boolean operations such as NAND, NOR, XOR, and IMP can be enabled within 6T and 8T SRAM arrays. Moreover, exploiting the decoupled read/write ports in 8T SRAMs, we propose a read-compute-store scheme in which the computed data can be written back into the array simultaneously.
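
The in-array Boolean operations can be understood functionally: with two word lines enabled at once, the shared read bitline sees zero, one, or two discharging cells, and sense amplifiers with skewed reference levels separate those cases. The sketch below models only this logical behaviour with hypothetical thresholds; it is not a circuit-level model of X-SRAM.

```python
def dual_row_read(bit_a, bit_b):
    """Two cells storing bit_a and bit_b share a read bitline; each stored '1'
    contributes one unit of discharge. Sense amplifiers with skewed reference
    levels (illustrative) recover several Boolean functions from one access."""
    discharge = bit_a + bit_b        # 0, 1, or 2 units
    sa_low = discharge >= 1          # fires if at least one cell stores 1
    sa_high = discharge >= 2         # fires only if both cells store 1
    return {
        'OR':   int(sa_low),
        'NOR':  int(not sa_low),
        'AND':  int(sa_high),
        'NAND': int(not sa_high),
        'XOR':  int(sa_low and not sa_high),   # exactly one cell stores 1
    }

for a in (0, 1):
    for b in (0, 1):
        print(a, b, dual_row_read(a, b))
```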

Second, we propose Xcel-RAM, where we show how binary convolutions can be enabled in 10T SRAM arrays for accelerating binary neural networks. We present a charge-sharing approach for performing XNOR operations followed by a population count (popcount), using both analog and digital techniques, and highlight the accuracy-energy tradeoff.
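
The binary-convolution primitive itself is the standard XNOR-popcount mapping of a dot product over {-1,+1} values, which the in-array charge sharing and popcount peripherals evaluate. A minimal reference sketch (the bit packing here is purely illustrative):

```python
def binary_dot(x_bits, w_bits, n):
    """Dot product of two {-1,+1} vectors packed into n-bit integers
    (bit = 1 encodes +1, bit = 0 encodes -1): dot = 2*popcount(XNOR) - n."""
    xnor = ~(x_bits ^ w_bits) & ((1 << n) - 1)
    return 2 * bin(xnor).count('1') - n

# x = [+1, -1, +1, +1], w = [+1, +1, -1, +1]  ->  +1 - 1 - 1 + 1 = 0
print(binary_dot(0b1011, 0b1101, 4))   # prints 0
```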

Third, we take this concept further and propose CASH-RAM to accelerate non-Boolean operations, such as dot products, within standard 8T SRAM arrays by utilizing the parasitic capacitances of bitlines and sourcelines. We analyze the non-idealities that arise due to analog computation and propose a self-compensation technique that reduces the effects of these non-idealities, thereby reducing the errors.

Fourth, we propose ROM-embedded caches, RECache, built from standard 8T SRAMs and useful for lookup-table (LUT) based computations. We show that just by adding an extra word-line (WL) or source-line (SL), the same bit-cell can store a ROM bit as well as the usual RAM bit, while maintaining performance and area efficiency, thereby doubling the memory density. Further, we propose SPARE, an in-memory, distributed processing architecture built on RECache, for accelerating spiking neural networks (SNNs), which often require high-order polynomials and transcendental functions for solving complex neuro-synaptic models.

Finally, we propose IMPULSE, a 10T-SRAM compute-in-memory (CIM) macro specifically designed for state-of-the-art SNN inference. The inherent dynamics of the neuron membrane potential in SNNs allow the processing of sequential learning tasks while avoiding the complexity of recurrent neural networks. The highly sparse, spike-based computations on such spatio-temporal data can be leveraged for energy efficiency. However, the membrane potential incurs additional memory access bottlenecks in current SNN hardware. IMPULSE tries to tackle the above challenges. It consists of a fused weight (WMEM) and membrane potential (VMEM) memory and inherently exploits sparsity in input spikes. We propose staggered data mapping and re-configurable peripherals for handling the different bit-precision requirements of WMEM and VMEM, while supporting multiple neuron functionalities. The proposed macro was fabricated in 65 nm CMOS technology. We demonstrate a sentiment classification task on the IMDB dataset of movie reviews and show that the SNN achieves competitive accuracy with only a fraction of the trainable parameters and effective operations of an LSTM network.
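
The membrane-potential bottleneck comes from the basic leaky integrate-and-fire update that must be evaluated at every timestep: the potential is read, updated with the weighted input spikes, compared against a threshold, and written back. A minimal sketch of that update (parameter values are illustrative):

```python
import numpy as np

def lif_step(v_mem, weights, spikes_in, leak=0.9, v_th=1.0):
    """One timestep of a layer of leaky integrate-and-fire neurons. v_mem must
    be read and written back every timestep, which is the recurring memory
    access that a fused WMEM/VMEM macro targets."""
    v_mem = leak * v_mem + weights @ spikes_in      # sparse spikes_in helps here
    spikes_out = (v_mem >= v_th).astype(np.float32)
    v_mem = np.where(spikes_out > 0, 0.0, v_mem)    # reset neurons that fired
    return v_mem, spikes_out

v = np.zeros(3)
w = np.array([[0.6, 0.3], [0.2, 0.9], [0.5, 0.5]])
v, s = lif_step(v, w, spikes_in=np.array([1.0, 1.0]))
print(v, s)   # [0.9 0.  0. ] [0. 1. 1.]
```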

These circuit explorations to embed computations in standard memory structures show that on-chip SRAMs can do much more than just store data, and can be re-purposed as on-demand accelerators for a variety of applications.
APA, Harvard, Vancouver, ISO, and other styles
23

(11218029), Herschel R. Bowling. "A Forensic Analysis of Microsoft Teams." Thesis, 2021.

Find full text
Abstract:
Digital forensic investigators have a duty to understand the relevant components of the cases they work on. However, with the constant evolution of technologies and the release of new platforms and programs, it is impossible for an investigator to be familiar with every application they encounter. It can also be difficult to know how forensic tools handle certain applications. This is why forensic researchers study and document new and emerging technologies, platforms, and applications, so that investigators have resources to utilize whenever they encounter an unfamiliar element in a case.

In 2017, Microsoft released a new communication platform, Microsoft Teams (Koenigsbauer, 2017). Due to the application's relatively young age, there has not been any significant forensic research relating to Microsoft Teams. As of April 2021, the platform had 145 million daily active users (Wright, 2021), nearly double the number of daily users at the same time in 2020 (Zaveri, 2020). This rapid growth is attributed in part to the need to work from home due to the COVID-19 virus (Zaveri, 2020). Given the size of its user base, it seems likely that forensic investigators will encounter cases where Microsoft Teams is a relevant component but may not have the knowledge required to efficiently investigate the platform.

To help fill this gap, an analysis of data stored at rest by Microsoft Teams was conducted, both on the Windows 10 operating system and on mobile operating systems, namely iOS and Android. Basic functionality that Teams provides, such as messaging, sharing files, and participating in video conferences, was exercised in an isolated testing environment. The devices were then analyzed with both automated forensic tools and manual (non-automated) investigation: Cellebrite UFED was used for the mobile devices, and Magnet AXIOM for the Windows device. Manual investigation recovered, at least partially, the majority of artifacts across all three devices, whereas the forensic tools did not recover many of the artifacts that were found manually. These discovered artifacts, and the results of the tools, are documented in the hope of aiding future investigations.

APA, Harvard, Vancouver, ISO, and other styles
24

(9868160), Wan-Eih Huang. "Image Processing, Image Analysis, and Data Science Applied to Problems in Printing and Semantic Understanding of Images Containing Fashion Items." Thesis, 2020.

Find full text
Abstract:
This thesis aims to address problems in printing and semantic understanding of images.
The first is the development of a halftoning algorithm for multilevel output with unequal-resolution printing pixels. We proposed a design method and implemented several versions of halftone screens; they all show good visual results on a real, low-cost electrophotographic printer.
The second problem is related to printing quality and self-diagnosis. First, we incorporated logistic regression into the detection pipeline for the classification of visible and invisible banding defects. We also proposed a new cost-function-based algorithm that uses synthetic missing bands to estimate the repetition interval of periodic bands for self-diagnosing the failing component; it is much more accurate than the previous method. Second, we addressed this problem with acoustic signals. Due to the scarcity of printer sounds, an acoustic signal augmentation method is needed to help a classifier perform better; the key idea is to mimic the situation that occurs when a component begins to fail.
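
As a toy illustration of the interval-estimation idea, one can score candidate repetition intervals by how well the detected band positions align with a periodic grid, tolerating missing bands; the scoring below is an assumption made purely for illustration and is not the specific cost function developed in the thesis.

```python
import numpy as np

def estimate_interval(band_positions, candidate_periods):
    """Pick the candidate period whose periodic grid best explains the detected
    band positions (some bands may be missing)."""
    best, best_cost = None, np.inf
    for period in candidate_periods:
        # Distance of each detected band from the nearest multiple of `period`.
        residual = np.abs(((band_positions + period / 2) % period) - period / 2)
        cost = residual.mean()
        if cost < best_cost:
            best, best_cost = period, cost
    return best

# Bands roughly every 37.5 mm, with one band missing and a little jitter.
bands = np.array([37.4, 75.1, 150.2, 187.4])
print(estimate_interval(bands, np.arange(20.0, 60.0, 0.5)))   # ~37.5
```
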
The third problem deals with recommendation systems. We explored the similarity metrics in the loss function for a neural matrix factorization network.
The last problem is about image understanding of fashion items. We proposed a weakly supervised framework that includes mask-guided teacher network training and attention-based transfer learning to mitigate the domain gap in datasets and acquire a new dataset with rich annotations.
APA, Harvard, Vancouver, ISO, and other styles
25

(11161374), Emma J. Reid. "Multi-Resolution Data Fusion for Super Resolution of Microscopy Images." Thesis, 2021.

Find full text
Abstract:

Applications in materials and biological imaging are currently limited by the difficulty of collecting high-resolution data over large areas in practical amounts of time. One possible solution to this problem is to collect low-resolution data and apply a super-resolution interpolation algorithm to produce a high-resolution image. However, state-of-the-art super-resolution algorithms are typically designed for natural images, require aligned pairs of high- and low-resolution training data for optimal performance, and do not directly incorporate a data-fidelity mechanism.


We present a Multi-Resolution Data Fusion (MDF) algorithm for accurate interpolation of low-resolution SEM and TEM data by factors of 4x and 8x. This MDF interpolation algorithm achieves these high rates of interpolation by first learning an accurate prior model denoiser for the TEM sample from small quantities of unpaired high-resolution data and then balancing this learned denoiser with a novel mismatched proximal map that maintains fidelity to measured data. The method is based on Multi-Agent Consensus Equilibrium (MACE), a generalization of the Plug-and-Play method, and allows for interpolation at arbitrary resolutions without retraining. We present electron microscopy results at 4x and 8x super resolution that exhibit reduced artifacts relative to existing methods while maintaining fidelity to acquired data and accurately resolving sub-pixel-scale features.
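
The balance between a learned prior and data fidelity in MACE can be sketched with a generic Plug-and-Play ADMM-style loop. The operators below (a moving-average 'denoiser' and an approximate proximal map for a simple averaging/downsampling forward model) are placeholders standing in for the learned denoiser and the mismatched proximal map described above, so this is a sketch of the general framework rather than the MDF algorithm itself.

```python
import numpy as np

def downsample(x, k=2):
    return x.reshape(-1, k).mean(axis=1)

def upsample(y, k=2):
    return np.repeat(y, k)

def prox_data(z, y, rho, k=2, n_iter=20, step=0.1):
    """Approximate proximal map of 0.5*||A x - y||^2 at z, where A is k-fold
    averaging, computed with a few gradient steps (keeps fidelity to y)."""
    x = z.copy()
    for _ in range(n_iter):
        grad = upsample(downsample(x, k) - y, k) / k + rho * (x - z)
        x -= step * grad
    return x

def denoise(x, width=1):
    """Placeholder prior agent: a moving-average smoother standing in for a
    denoiser learned from unpaired high-resolution data."""
    kernel = np.ones(2 * width + 1) / (2 * width + 1)
    return np.convolve(x, kernel, mode='same')

def pnp_superresolve(y, n, rho=0.5, iters=30, k=2):
    """Plug-and-Play ADMM: alternate the data-fidelity and prior agents."""
    x = upsample(y, k)
    v = x.copy()
    u = np.zeros(n)
    for _ in range(iters):
        x = prox_data(v - u, y, rho, k)   # stay consistent with measured data
        v = denoise(x + u)                # impose the (learned) prior
        u = u + x - v                     # consensus update
    return v

truth = np.sin(np.linspace(0, 3 * np.pi, 32))
y = downsample(truth)                     # low-resolution measurement
print(np.round(pnp_superresolve(y, 32), 2))
```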

APA, Harvard, Vancouver, ISO, and other styles
26

(8772923), Chinyi Chen. "Quantum phenomena for next generation computing." Thesis, 2020.

Find full text
Abstract:
With transistor dimensions scaling down to a few atoms, quantum phenomena - like quantum tunneling and entanglement - will dictate the operation and performance of the next generation of electronic devices in the post-CMOS era. While quantum tunneling limits the scaling of the conventional transistor, the Tunneling Field Effect Transistor (TFET) employs band-to-band tunneling for its device operation. This mechanism can reduce the sub-threshold swing (S.S.) below the Boltzmann limit of 60 mV/dec that fundamentally constrains a conventional Si-based metal-oxide-semiconductor field-effect transistor (MOSFET). A smaller S.S. allows TFET operation at a lower supply voltage and, therefore, at lower power than the conventional Si-based MOSFET.
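
For reference, the 60 mV/dec figure follows from the Boltzmann (thermionic) limit on a MOSFET's subthreshold swing at room temperature:

```latex
S.S. = \frac{\partial V_{GS}}{\partial \log_{10} I_{DS}}
     = \ln(10)\,\frac{k_B T}{q}\left(1 + \frac{C_{dep}}{C_{ox}}\right)
     \;\geq\; \ln(10)\,\frac{k_B T}{q} \approx 60\ \text{mV/dec at } T = 300\ \text{K}.
```

Because a TFET injects carriers by band-to-band tunneling rather than by thermionic emission over a barrier, it is not bound by this limit.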

However, the low transmission probability of the band-to-band tunneling mechanism limits the ON-current of a TFET. This can be improved by reducing the body thickness of the device, i.e., by using 2-dimensional (2D) materials, or by utilizing heterojunction designs. In this thesis, two promising methods are proposed to increase the ON-current: one for 2D-material TFETs, and another for III-V heterojunction TFETs.

Maximizing the ON-current in a 2D material TFET by determining an optimum channel thickness, using compact models, is presented. A compact model is derived from rigorous atomistic quantum transport simulations. A new doping profile is proposed for the III-V triple heterojunction TFET to achieve a high ON-current. The optimized ON-current is 325 uA/um at a supply voltage of 0.3 V. The device design is optimized by atomistic quantum transport simulations for a body thickness of 12 nm, which is experimentally feasible.
However, increasing the device's body thickness increases the atomistic quantum transport simulation time. The simulation of a device with a body thickness of over 12 nm is computationally intensive. Therefore, approximate methods like the mode-space approach are employed to reduce the simulation time. In this thesis, the development of the mode-space approximation in modeling the triple heterojunction TFET is also documented.

In addition to the TFETs, quantum computing is an emerging field that utilizes quantum phenomena to facilitate information processing. An extra chapter is devoted to the electronic structure calculations of the Si:P delta-doped layer, using the empirical tight-binding method. The calculations agree with angle-resolved photoemission spectroscopy (ARPES) measurements. The Si:P delta-doped layer is extensively used as contacts in the Phosphorus donor-based quantum computing systems. Understanding its electronic structure paves the way towards the scaling of Phosphorus donor-based quantum computing devices in the future.
APA, Harvard, Vancouver, ISO, and other styles
27

(7464389), Shubham Jain. "IN-MEMORY COMPUTING WITH CMOS AND EMERGING MEMORY TECHNOLOGIES." Thesis, 2019.

Find full text
Abstract:
Modern computing workloads such as machine learning and data analytics perform simple computations on large amounts of data. Traditional von Neumann computing systems, which consist of separate processor and memory subsystems, are inefficient in realizing modern computing workloads due to frequent data transfers between these subsystems that incur significant time and energy costs. In-memory computing embeds computational capabilities within the memory subsystem to alleviate the fundamental processor-memory bottleneck, thereby achieving substantial system-level performance and energy benefits. In this dissertation, we explore a new generation of in-memory computing architectures that are enabled by emerging memory technologies and new CMOS-based memory cells. The proposed designs realize Boolean and non-Boolean computations natively within memory arrays.

For Boolean computing, we leverage the unique characteristics of emerging memories that allow multiple word lines within an array to be simultaneously enabled, opening up the possibility of directly sensing functions of the values stored in multiple rows using a single access. We propose Spin-Transfer Torque Compute-in-Memory (STT-CiM), a design for in-memory computing with modifications to peripheral circuits that leverage this principle to perform logic, arithmetic, and complex vector operations. We address the challenge of reliable in-memory computing under process variations by utilizing error-detecting and error-correcting codes to control errors during CiM operations. We demonstrate how STT-CiM can be integrated within a general-purpose computing system and propose architectural enhancements to processor instruction sets and on-chip buses for in-memory computing.

For non-Boolean computing, we explore crossbar arrays of resistive memory elements, which are known to compactly and efficiently realize a key primitive operation involved in machine learning algorithms, i.e., vector-matrix multiplication. We highlight a key challenge involved in this approach: the actual function computed by a resistive crossbar can deviate substantially from the desired vector-matrix multiplication operation due to a range of device- and circuit-level non-idealities. It is essential to evaluate the impact of the errors introduced by these non-idealities at the application level. There has been no study of the impact of non-idealities on the accuracy of large-scale workloads (e.g., Deep Neural Networks [DNNs] with millions of neurons and billions of synaptic connections), in part because existing device and circuit models are too slow to use in application-level evaluation. We propose a Fast Crossbar Model (FCM) to accurately capture the errors arising due to crossbar non-idealities while being four to five orders of magnitude faster than circuit simulation. We also develop RxNN, a software framework to evaluate DNN inference on resistive crossbar systems. Using RxNN, we evaluate a suite of large-scale DNNs developed for the ImageNet Challenge (ILSVRC). Our evaluations reveal that the errors due to resistive crossbar non-idealities can degrade the overall accuracy of DNNs considerably, motivating the need for compensation techniques. Subsequently, we propose CxDNN, a hardware-software methodology that enables the realization of large-scale DNNs on crossbar systems with minimal degradation in accuracy by compensating for errors due to non-idealities. CxDNN comprises (i) an optimized mapping technique to convert floating-point weights and activations to crossbar conductances and input voltages, (ii) a fast re-training method to recover the accuracy loss due to this conversion, and (iii) low-overhead compensation hardware to mitigate dynamic and hardware-instance-specific errors. Unlike previous efforts that are limited to small networks and require the training and deployment of hardware-instance-specific models, CxDNN presents a scalable compensation methodology that can address large DNNs (e.g., ResNet-50 on ImageNet) and enables a common model to be trained and deployed on many devices.
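
The ideal crossbar operation referred to above is a vector-matrix multiplication: applying voltages on the rows and summing currents along the columns gives I_j = sum_i V_i * G_ij. The sketch below shows this primitive with a crude conductance-variation term added to illustrate how non-idealities pull the result away from the ideal product; the variation model is an illustrative assumption and is not the Fast Crossbar Model.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_vmm(voltages, conductances, sigma=0.0):
    """Ideal crossbar: column current j = sum_i V_i * G_ij. `sigma` adds
    relative device-to-device conductance variation as a crude stand-in for
    crossbar non-idealities."""
    g = conductances * (1.0 + sigma * rng.standard_normal(conductances.shape))
    return voltages @ g

v = np.array([0.1, 0.2, 0.05])           # input voltages (activations)
g = np.array([[1.0, 0.5],                # conductances (weights), arbitrary units
              [0.2, 0.8],
              [0.6, 0.1]])

print(crossbar_vmm(v, g))                # ideal column currents
print(crossbar_vmm(v, g, sigma=0.1))     # with 10% conductance variation
```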

For non-Boolean computing, we also propose TiM-DNN, a programmable hardware accelerator that is specifically designed to execute ternary DNNs. TiM-DNN supports various ternary representations, including unweighted (-1,0,1), symmetric weighted (-a,0,a), and asymmetric weighted (-a,0,b) ternary systems. TiM-DNN is an in-memory accelerator designed using TiM tiles --- specialized memory arrays that perform massively parallel signed vector-matrix multiplications on ternary values per access. TiM tiles are in turn composed of Ternary Processing Cells (TPCs), new CMOS-based memory cells that function as both ternary storage units and signed scalar multiplication units. We evaluate an implementation of TiM-DNN in 32 nm technology using an architectural simulator calibrated with SPICE simulations and RTL synthesis. TiM-DNN achieves a peak performance of 114 TOPs/s, consumes 0.9 W of power, and occupies 1.96 mm2 of chip area, representing a 300X improvement in TOPS/W compared to a state-of-the-art NVIDIA Tesla V100 GPU. In comparison to popular quantized DNN accelerators, TiM-DNN achieves 55.2X-240X and 160X-291X improvements in TOPS/W and TOPS/mm2, respectively.

In summary, the dissertation proposes new in-memory computing architectures as well as addresses the need for scalable modeling frameworks and compensation techniques for resistive crossbar based in-memory computing fabrics. Our evaluations show that in-memory computing architectures are promising for realizing modern machine learning and data analytics workloads, and can attain orders-of-magnitude improvement in system-level energy and performance over traditional von Neumann computing systems.
APA, Harvard, Vancouver, ISO, and other styles
28

(5930528), Joseph W. Balazs. "A Forensic Examination of Database Slack." Thesis, 2021.

Find full text
Abstract:
This research includes an examination and analysis of the phenomenon of database slack. Database forensics is an underexplored subfield of digital forensics, and the lack of research becomes more consequential with every breach and theft of data. Only a small amount of research exists in the literature regarding database slack. This exploratory work examined what partial records of forensic significance can be found in database slack. A series of experiments performed update and delete transactions upon data in a PostgreSQL database, which created database slack. Patterns of hexadecimal indicators for database slack in the file system were found and analyzed. Despite limitations in the experiments, the results indicated that partial records of forensic significance are found in database slack; significantly, such partial records may aid a forensic investigation of a database breach. The details of the hexadecimal patterns of database slack fill gaps in the literature, the impact of log findings on an investigation was shown, and the complexity aspects support existing parts of database forensics research. This research helps to lessen the dearth of work in the area of database forensics as well as database slack.
APA, Harvard, Vancouver, ISO, and other styles