Dissertations / Theses on the topic '080399 Computer Software not elsewhere classified'

Consult the top 15 dissertations / theses for your research on the topic '080399 Computer Software not elsewhere classified.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Soltani, Hamidreza. "Development and application of real-time and interactive software for complex system." Thesis, University of Central Lancashire, 2016. http://clok.uclan.ac.uk/20443/.

Full text
Abstract:
Soft materials have attracted considerable interest in recent years for predicting the characteristics of phase separation and self-assembly in nanoscale structures. A popular method for simulating the dynamic behaviour of particles (e.g. particle tracking) and for studying the effects of simulation parameters is cell dynamic simulation (CDS), a cellular computational technique that can be used to investigate different aspects of the morphological topographies of soft material systems. The acquisition of quantitative data from particles is a critical requirement for better understanding and characterising their dynamic behaviour. To achieve this objective, particle tracking methods that produce quantitative data and focus on different properties and components of the particles are essential. Despite the availability of various particle tracking methods in experimental work, no method is available for uniform computational data. To achieve accurate and efficient computational results for the cell dynamic simulation method and particle tracking, two factors are essential: the computational time-scale and the simulation system size. Implementing such a complex technique with the available sequential algorithms and computing resources while achieving precise results is critical and rather expensive. It is therefore highly desirable to adopt a parallel algorithm and programming model to address these time-consuming, massive computational workloads. Hence, the gaps between experimental and computational work, and the expense of time-consuming computational calculations, need to be addressed through a uniform computational technique for particle tracking and significant enhancements in speed and execution times. The work presented in this thesis details a new particle tracking method, integrating diblock copolymers in the form of spheres with a shear flow, and a novel GPU-based parallel acceleration approach to cell dynamic simulation (CDS). In addition, parallel models and architectures (CPUs and GPUs) were evaluated using a mixture of the OpenMP application programming interface and the CUDA programming model. Finally, this study presents the performance enhancements achieved with GPU-CUDA: approximately 2 times faster than the multi-threaded implementation and 13-14 times faster than optimised sequential processing for the CDS computations/workloads.
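The CDS update referred to above is a stencil operation: every lattice cell is updated from a local map of its own value plus an average over its neighbours, which is the access pattern that maps naturally to one GPU thread per cell in CUDA. Below is a minimal NumPy sketch of an Oono-Puri-style conserved cell dynamics step; the lattice size, coefficients and tanh local map are generic textbook choices, not the thesis's actual implementation.

    import numpy as np

    def neighbour_avg(psi):
        """Isotropic neighbour average (weights sum to 1, so totals are preserved)."""
        nn = sum(np.roll(psi, s, ax) for s in (1, -1) for ax in (0, 1))
        dn = sum(np.roll(np.roll(psi, s0, 0), s1, 1) for s0 in (1, -1) for s1 in (1, -1))
        return nn / 6.0 + dn / 12.0

    def cds_step(psi, A=1.3, D=0.5):
        """One conserved-order-parameter CDS update (Oono-Puri style)."""
        F = A * np.tanh(psi) + D * (neighbour_avg(psi) - psi)  # local map + diffusion
        dpsi = F - psi
        return psi + dpsi - neighbour_avg(dpsi)  # subtracting the average conserves psi

    rng = np.random.default_rng(0)
    psi = 0.1 * rng.standard_normal((256, 256))  # random initial order-parameter field
    for _ in range(1000):
        psi = cds_step(psi)  # each cell depends only on neighbours: one GPU thread per cell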
APA, Harvard, Vancouver, ISO, and other styles
2

Timperley, Matthew. "The integration of explanation-based learning and fuzzy control in the context of software assurance as applied to modular avionics." Thesis, University of Central Lancashire, 2015. http://clok.uclan.ac.uk/16726/.

Full text
Abstract:
A Modular Power Management System (MPMS) is an energy management system intended for highly modular applications, able to adapt intelligently to changing hardware. There is a dearth of literature on Integrated Modular Avionics (IMA) addressing the implications for software operating within this architecture, namely the adaptation of control laws to changing hardware. This work proposes some approaches to address this issue. Control laws may require adaptation to overcome hardware degradation or system upgrades. There is also a growing interest in the ability to change the hardware configuration of UASs (Unmanned Aerial Systems) between missions, to better fit the characteristics of each one. Hardware changes in the aviation industry come with an additional caveat: in order for a software system to be used in aviation, it must be certified as part of a platform. This certification process has no clear guidelines for adaptive systems. Adapting to a changing platform, as well as addressing the necessary certification effort, motivated the development of the MPMS. The aim of the work is twofold: firstly, to modify existing control strategies for new hardware, which is achieved with generalisation and transfer learning; secondly, to reduce the workload involved in maintaining a safety argument for an adaptive controller. Three areas of work are used to demonstrate the satisfaction of this aim. Explanation-Based Learning (EBL) is proposed for the derivation of new control laws. The EBL domain theory embodies general control strategies, which are specialised to form fuzzy rules. A method for translating explanation structures into fuzzy rules is presented. The generation of specific rules from a general control strategy is one way to adapt to controlling a modular platform. A fuzzy controller executes the rules derived by EBL. This maintains fast rule execution as well as the separation of strategy and application. The ability of EBL to generate rules which are useful when executed by a fuzzy controller is demonstrated by an experiment: a domain theory is given to control throttle output, which is used to generate fuzzy rules, and these rules have a positive impact on energy consumption in simulated flight. EBL is proposed for rule derivation because it focuses on generalisation. Generalisations can apply knowledge from one situation, or one hardware configuration, to another. This can be preferable to re-deriving similar control laws. Furthermore, EBL can be augmented to include analogical reasoning when reaching an impasse. An algorithm which integrates analogy into EBL has been developed as part of this work. The inclusion of analogical reasoning facilitates transfer learning, which furthers the flexibility of the MPMS in adapting to new hardware. The adaptive capability of the MPMS is demonstrated by application to multiple simulated platforms. EBL produces explanation structures; augmenting these with a safety-specific domain theory can produce skeletal safety cases, and a technique to achieve this has been developed. Example structures are generated for previously derived fuzzy rules. Generating safety cases from explanation structures can form the basis of an adaptive safety argument.
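As a rough illustration of the execution stage described above, in which a fuzzy controller runs the rules derived by EBL, here is a minimal zero-order Sugeno-style controller in Python. The membership functions, rule base and throttle outputs are hypothetical stand-ins, not the thesis's derived domain theory.

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    # Hypothetical rule base: (membership over energy demand, crisp throttle output).
    RULES = [
        (lambda d: tri(d, -0.1, 0.0, 0.5), 0.2),   # demand LOW  -> throttle 20%
        (lambda d: tri(d, 0.2, 0.5, 0.8), 0.5),    # demand MED  -> throttle 50%
        (lambda d: tri(d, 0.5, 1.0, 1.1), 0.9),    # demand HIGH -> throttle 90%
    ]

    def throttle(demand):
        """Weighted-average (Sugeno) defuzzification over all firing rules."""
        weights = [mu(demand) for mu, _ in RULES]
        if sum(weights) == 0:
            return 0.0
        return sum(w * out for w, (_, out) in zip(weights, RULES)) / sum(weights)

    print(throttle(0.35))  # a demand between LOW and MED blends their outputs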
APA, Harvard, Vancouver, ISO, and other styles
3

Jiang, Feng. "Capturing event metadata in the sky : a Java-based application for receiving astronomical internet feeds : a thesis presented in partial fulfilment of the requirements for the degree of Master of Computer Science in Computer Science at Massey University, Auckland, New Zealand." Massey University, 2008. http://hdl.handle.net/10179/897.

Full text
Abstract:
When an astronomical observer discovers a transient event in the sky, how can the information be immediately shared and delivered to others? Not long ago, people shared what they discovered in the sky by books, telegraphs, and telephones. The new way of transferring event data is via the Internet: information about astronomical events can be packaged and published online as an Internet feed. To receive these packed data, Internet feed listener software is required on a terminal computer. In other applications, the listener would connect to an intelligent robotic telescope network and automatically drive a telescope to capture transient astrophysical phenomena. However, because the technologies for transferring astronomical event data are at an early stage, the only resource available was the Perl-based Internet feed listener developed by the eSTAR team. In this research, a Java-based Internet feed listener was developed that supports more features than the Perl-based application. By exploiting the benefits of Java, the application is able to receive, parse and manage Internet feed data efficiently through a friendly user interface. Keywords: Java, socket programming, VOEvent, real-time astronomy
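The listener developed in the thesis is Java-based; the sketch below shows the same receive-parse-dispatch loop in Python for comparison. The broker address and the 4-byte length-prefixed framing are illustrative assumptions rather than a description of the actual eSTAR protocol.

    import socket
    import struct
    import xml.etree.ElementTree as ET

    def recv_exact(sock, n):
        """Read exactly n bytes from the socket (or raise on a closed connection)."""
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("broker closed the connection")
            buf += chunk
        return buf

    def listen(host, port):
        """Connect to a feed broker and process length-prefixed VOEvent packets."""
        with socket.create_connection((host, port)) as sock:
            while True:
                (length,) = struct.unpack(">I", recv_exact(sock, 4))
                event = ET.fromstring(recv_exact(sock, length))  # parse the XML payload
                print(event.get("ivorn"), event.get("role"))     # dispatch / log the event

    listen("voevent.example.org", 8099)  # hypothetical broker address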
APA, Harvard, Vancouver, ISO, and other styles
4

Blakey, Jeremy Peter. "Database training for novice end users : a design research approach : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Albany, New Zealand." Massey University, 2008. http://hdl.handle.net/10179/880.

Full text
Abstract:
Of all of the desktop software available, that for the implementation of a database is some of the most complex. With an increasing number of computer users having access to this sophisticated software, but with no obvious way to learn the rudiments of data modelling for the implementation of a database, there is a need for a simple, convenient method to improve their understanding. The research described in this thesis represents the first steps in the development of a tool to accomplish this improvement. In a preliminary empirical study, a conceptual model was used to improve novice end users’ understanding of the relational concepts of data organisation and the use of a database software package. The results showed that no conclusions could be drawn about either the artefact used or the method of evaluation. Following the lead of researchers in the fields of both education and information systems, a design research process was developed, consisting of the construction and evaluation of a training artefact. A combination of design research and a design experiment was used in the main study described in this thesis. New to research in information systems, design research is a methodology or set of analytical techniques and perspectives, and this was used to develop a process (the development of an artefact) and a product (the artefact itself). The artefact, once developed, needed to be evaluated for its effectiveness, and this was done using a design experiment. The experiment involved exposing the artefact to a small group of end users in a realistic setting and defining a process for the evaluation of the artefact. The artefact was the tool that would facilitate the improvement of the understanding of data modelling, the vital precursor to the development of a database. The research was conducted among a group of novice end users who were exposed to the artefact, facilitated by an independent person. In order to assess whether there was any improvement in the novices’ understanding of relational data modelling and database concepts, they then completed a post-test. Results confirmed that the artefact, trialled through one iteration, was successful in improving the understanding of these novice end users in the area of data modelling. The combination of design research and design experiment described above gave rise to a new methodology, called, at this early juncture, experimental design research. The successful outcome of this research will lead to further iterations of the design research methodology, leading in turn to the further development of the artefact, which will be both useful and accessible to novice users of personal computers and database software. This research has made the following original contributions: firstly, the use of the design research methodology for the development of the artefact, which proved successful in improving novice users’ understanding of relational data structures; secondly, the novel use of a design experiment in an information systems project, which was used to evaluate the success of the artefact; and finally, the combination of the developed artefact and its successful evaluation using a design experiment, which resulted in the hybrid experimental design research methodology. The success of the implementation of the experimental design research methodology in this information systems project shows much promise for its successful application to similar projects.
APA, Harvard, Vancouver, ISO, and other styles
5

Humphreys, Alison M. (Sal). "Massively Multiplayer Online Games. Productive players and their disruptions to conventional media practices." Thesis, QUT, 2005.

Find full text
Abstract:
This thesis explores how massively multiplayer online games (MMOGs), as an exemplary new media form, disrupt practices associated with more conventional media. These intensely social games exploit the interactivity and networks afforded by new media technologies in ways that generate new challenges for the organisation, control and regulation of media. The involvement of players in constituting these games – through their production of game-play, derivative works and strong social networks that drive the profitability of the games – disrupts some of the key foundations that underlie other publication media. MMOGs represent a new and hybrid form of media – part publication and part service. As such they sit within a number of sometimes contradictory organising and regulatory regimes. This thesis examines the negotiations and struggles for control between players, developers and publishers as issues of ownership, governance and access arise out of the new configurations. Using an ethnographic approach to gather information and insights into the practices of players, developers and publishers, this project identifies the characteristics of the distributed production network in this experiential medium. It explores structural components of successful interactive applications and analyses how the advent of player agency and the shift in authorship has meant a shift in control of the text and the relations that surround it. The integration of social networks into the textual environment, and into the business model of the media publishers, has meant that commerce has become entwined with affect in a new way in this medium. Publishers have moved into the dual role of property managers, for the intellectual property associated with the game content, and community managers. Intellectual property management is usually associated with the reproduction and distribution of finished media products, and this sits uneasily with the performative and mutable form of this medium. Service provision consists of maintaining the game world environment, community management, and providing access for players to other players and to the content generated both by the developers and the other players. Content in an MMOG is identified in this project as both the ‘tangible’ assets of code and artwork, rules and text, and the ‘intangible’ or immaterial assets of affective networks. Players are no longer just consumers of media, or even just active interpreters of media. They are co-producing the media as it is developed. This thesis frames that productiveness as unpaid labour, in an attempt to denaturalise the dominant discourse which casts players as consumers. The regulation of this medium is contentious. Conventional forms of media regulation – such as copyright or content regulation regimes – are inadequate for regulating the hybrid service/publication medium. This thesis explores how the use of contracts as the mechanism which constitutes the formal relations between players, publishers and developers creates challenges to some of the regimes of juridical and political rights held by citizens more generally. This thesis examines the productive practices of players and how the discourses of intellectual property and the discourses of the consumer are mobilised to erase the significance of those productive contributions. It also shows, using a Foucauldian analysis of the power negotiations, that players employ many counter-strategies to circumvent the more formal legal structures of the publishers.
The dialogic relationship between players, developers and publishers is shown to mobilise various discursive constructions of the role of each. The outcome of these ongoing negotiations may well shape future interactive applications and the extent to which their innovative capacities will be available for all stakeholders to develop.
APA, Harvard, Vancouver, ISO, and other styles
6

Glass, Patrick Raymond. "The Effects of Computer Simulation on Reducing the Incidence of Medical Errors Associated with Mass Distribution of Chemoprophylaxis as a Result of a Bioterrorism Event." Thesis, 2019.

Find full text
Abstract:
The objective of this research is to develop a computer simulation model to provide a means to effectively and efficiently reduce medication errors associated with points of distribution (POD) sites by identifying and managing screeners with a high probability of generating errors. Points of distribution sites are used to rapidly distribute chemoprophylaxis to a large population in response to a pandemic event or a bioterrorism attack. Because of the nature of the rapid response, points of distribution sites require the use of peer-trained helpers who volunteer their services. The implication is that peer-trained helpers may have a variety of experience or education levels. These factors increase the risk of medical errors. Reducing medical errors is accomplished by changing the means by which healthcare providers are trained and focusing on a team approach to healthcare delivery. Computer simulations have been used in the past to identify sources of inefficiency and potential for error. Data for the model were collected over the course of two semesters. Of the 349 data points collected in the first semester, only 137 were usable for the purposes of model building. When the experiment was repeated in the second semester, similar results were found. The control simulation was run 20 times, with each screener generating errors with a probability of 0.101 following a Bernoulli distribution. The variable simulation was run 30 times with each screener generating errors with the same probability; however, the researcher identified the screeners generating the errors and immediately stopped them from processing additional agents once they reached five errors. An ANOVA was conducted on the percentage of errors generated from each simulation run. The results of the ANOVA showed a significant difference between individuals within the groups. A simulation model was built to reflect the differences in medical error rates between screeners. By comparing the results of the simulation as the screeners are manipulated in the system, the model can be used to show how medical errors can be reduced in points of distribution sites.
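The simulation logic described can be re-created in a few lines (a conceptual re-creation, not the author's model): screeners commit errors per patient as Bernoulli trials, the intervention pulls a screener from the line after five errors, and a one-way ANOVA compares the two groups of runs. The heterogeneous per-screener error rates, centred near the reported 0.101, are an assumption.

    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(42)

    def run_pod(n_screeners=10, patients_each=200, stop_after=None):
        """Return the overall error percentage for one simulated POD-site run."""
        # Assumption: heterogeneous screener error rates with mean near 0.101.
        rates = rng.uniform(0.02, 0.18, n_screeners)
        errors = processed = 0
        for p in rates:
            e = 0
            for _ in range(patients_each):
                if stop_after is not None and e >= stop_after:
                    break  # error-prone screener identified and pulled from the line
                processed += 1
                if rng.random() < p:  # Bernoulli error on this patient
                    e += 1
            errors += e
        return 100.0 * errors / processed

    control = [run_pod() for _ in range(20)]              # 20 control runs
    managed = [run_pod(stop_after=5) for _ in range(30)]  # 30 intervention runs
    print(np.mean(control), np.mean(managed))
    print(f_oneway(control, managed))  # one-way ANOVA on the error percentages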
APA, Harvard, Vancouver, ISO, and other styles
7

Kim, Younghoon. "Approximate Computing: From Circuits to Software." Thesis, 2021.

Find full text
Abstract:
Many modern workloads such as multimedia, recognition, mining, search, vision, etc. possess the characteristic of intrinsic application resilience: The ability to produce acceptable-quality outputs despite their underlying computations being performed in an approximate manner. Approximate computing has emerged as a paradigm that exploits intrinsic application resilience to design systems that produce outputs of acceptable quality with significant performance/energy improvement. The research community has proposed a range of approximate computing techniques spanning across circuits, architecture, and software over the last decade. Nevertheless, approximate computing is yet to be incorporated into mainstream HW/SW design processes largely due to the deviation from the conventional design flow and the lack of runtime approximation controllability by the user.

The primary objective of this thesis is to provide approximate computing techniques across different layers of abstraction that possess the two following characteristics: (i) they can be applied with minimal change to the conventional design flow, and (ii) the approximation is controllable at runtime by the user with minimal overhead. To this end, this thesis proposes three novel approximate computing techniques: clock overgating, which targets HW design at the Register Transfer Level (RTL); value similarity extensions, which enhance general-purpose processors with a set of microarchitectural and ISA extensions; and data subsetting, which targets SW executing on commodity platforms.

The thesis first explores clock overgating, which extends the concept of clock gating: A conventional low-power technique that turns off the clock to a Flip-Flop (FF) when the value remains unchanged. In contrast to traditional clock gating, in clock overgating the clock signals to selected FFs in the circuit are gated even when the circuit functionality is sensitive to their state. This saves additional power in the clock tree, the gated FFs and in their downstream logic, while a quality loss occurs if the erroneous FF states propagate to the circuit outputs. This thesis develops a systematic methodology to identify an energy-efficient clock overgating configuration for any given circuit and quality constraint. Towards this end, three key strategies for efficiently pruning the large space of possible overgating configurations are proposed: Significance-based overgating, grouping FFs into overgating islands, and utilizing internal signals of the circuit as triggers for overgating. Across a suite of 6 machine learning accelerators, energy benefits of 1.36X on average are achieved at the cost of a very small (<0.5%) loss in classification accuracy.

The thesis also explores value similarity extensions, a set of lightweight micro-architectural and ISA extensions for general-purpose processors that provide performance improvements for computations on data structures with value similarity. The key idea is that programs often contain repeated instructions that are performed on very similar inputs (e.g., neighboring pixels within a homogeneous region of an image). In such cases, it may be possible to skip an instruction that operates on data similar to a previously executed instruction, and approximate the skipped instruction's result with the saved result of the previous one. The thesis provides three key strategies for realizing this approach: Identifying potentially skippable instructions from user annotations in SW, obtaining similarity information for future load values from the data cache line currently being accessed, and a mechanism for saving & reusing results of potentially skippable instructions. As a further optimization, the thesis proposes to replace multiple loop iterations that produce similar results with a specialized instruction sequence. The proposed extensions are modeled on the gem5 architectural simulator, achieving speedup of 1.81X on average across 6 machine-learning benchmarks running on a microcontroller-class in-order processor.
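In purely software terms, the skip-and-reuse mechanism above behaves like approximate memoisation: if the current operand is close enough to the last one, the saved result is returned instead of recomputing. The Python sketch below is only a conceptual analogue of the proposed microarchitectural extensions.

    import math

    def make_approx(fn, tol=1e-2):
        """Wrap fn so calls with near-identical inputs reuse the previous result."""
        last = {"x": None, "y": None}
        def wrapped(x):
            if last["x"] is not None and abs(x - last["x"]) <= tol:
                return last["y"]             # "skip the instruction", reuse saved result
            last["x"], last["y"] = x, fn(x)  # execute and save
            return last["y"]
        return wrapped

    approx_sqrt = make_approx(math.sqrt)
    print(approx_sqrt(16.000), approx_sqrt(16.004))  # second call is approximated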

Finally, the thesis explores a data-centric approach to approximate computing called data subsetting that shifts the focus of approximation from computations to data. The key idea is to restrict the application's data accesses to a subset of its elements so that the overall memory footprint becomes smaller. Constraining the accesses to lie within a smaller memory footprint renders the memory accesses more cache-friendly, thereby improving performance. This thesis presents a C++ data structure template called SubsettableTensor, which embodies mechanisms to define an accessible subset of data and redirect accesses away from non-subset elements, for realizing data subsetting in SW. The proposed concept is evaluated on parallel SW implementations of 7 machine learning applications on a 48-core AMD Opteron server. Experimental results indicate that 1.33X-4.44X performance improvement can be achieved within a <0.5% loss in classification accuracy.
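The thesis realises data subsetting with a C++ SubsettableTensor template; the following Python analogue illustrates the access-redirection idea only. The stride-based subset and the snap-down redirection policy are illustrative assumptions.

    import numpy as np

    class SubsettableArray:
        """Redirect reads away from non-subset elements to shrink the hot footprint."""
        def __init__(self, data, stride):
            self.data = np.asarray(data)
            self.stride = stride  # only indices divisible by stride are "in subset"

        def __getitem__(self, i):
            j = (i // self.stride) * self.stride  # snap down to the nearest subset element
            return self.data[j]                   # fewer distinct addresses: cache-friendly

    weights = np.arange(16, dtype=float)
    sub = SubsettableArray(weights, stride=4)
    print(sub[5], sub[6], sub[7])  # all three reads are redirected to element 4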

In summary, the proposed approximation techniques have shown significant efficiency improvements for various machine learning applications in circuits, architecture and SW, underscoring their promise as designer-friendly approaches to approximate computing.
APA, Harvard, Vancouver, ISO, and other styles
8

Chen, Hanlin. "Adaptive Safety and Cyber Security for Connected and Automated Vehicle System." Thesis, 2021.

Find full text
Abstract:

This dissertation discusses the potential benefits that CAV systems can bring to general well-being, and how threats that lie within the CAV system can affect its performance and functionality.

In particular, this dissertation explores how CAV technology can benefit homeland security and crime investigations involving child abduction. By proposing an initial network design, it offers a solution that enhances the current AMBER Alert system using CAV technology. This dissertation also discusses how CAV technology can aid perception in corner-case driving scenarios and reduce the risk of traffic accidents, by proposing a dataset that covers various corner cases, including different weather and lighting conditions, targeting work zones. The collected data were evaluated and several impact factors were identified.

This dissertation also discusses an attack scenario in which a ROS-based CAV platform was subjected to DoS attacks. We analyzed the system's response after the attack, and discussed the functionality and stability of the system.

Overall, we determined that CAV technology can greatly benefit general well-being, and that threats within the CAV system can undermine those benefits once the system is attacked.

APA, Harvard, Vancouver, ISO, and other styles
10

Mastilovich, Nikola. "Automatisation of programming of a PLC code : a thesis presented in partial fulfilment of the requirements of the degree of Masters of Engineering in Mechatronics." 2010. http://hdl.handle.net/10179/1681.

Full text
Abstract:
Appendix D (CD content) can be found with the print thesis held at Turitea Library, Palmerston North. Contents: empty APCG program; empty RSLogix5000 L5K file; empty RSLogix5000 ACD file; real-life project APCG program (partial); real-life project RSLogix5000 L5K file (partial); real-life project RSLogix5000 ACD file (partial).
A competitive edge is one of the requirements of a successful business, and tools which increase an engineer's productivity and minimize cost can be considered a competitive edge. The objective of this thesis was to design, create, and implement Automatic PLC Code Generator (APCG) software. A secondary objective was to demonstrate that use of the APCG software leads to improved project efficiency and an enhanced profit margin. The APCG software was created using MS Excel and Visual Basic for Applications (VBA) as the platform: MS Excel sheets serve as the user interface, while VBA creates the PLC code from the information entered by the engineer. The PLC code created by the APCG software follows the PLC structure of Realcold Milmech Pty. Ltd., as well as the research 'Automatic generation of PLC code beyond the nominal sequence' by Guttel et al. [1]. The APCG software was used to design and create a PLC code for one of the projects undertaken by Realcold Milmech Pty. Ltd. By using the APCG software, the time to design, create, and test the PLC code was reduced compared to the budgeted time, and the project's profit margin was increased. Based on the results of this thesis it is expected that the APCG software will be useful for programmers who handle a variety of projects on a regular basis, where programming in a modular way is not appropriate.
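APCG itself is implemented in MS Excel/VBA; the fragment below re-creates the core idea, emitting repetitive PLC source text from tabular I/O definitions, as a Python sketch. The device table and rung template are hypothetical, and the output is simplified neutral text rather than actual RSLogix5000 L5K syntax.

    # Tabular definitions an engineer would normally fill in on a spreadsheet.
    DEVICES = [
        {"name": "Conveyor1", "start": "PB_101", "stop": "PB_102", "out": "MTR_01"},
        {"name": "Conveyor2", "start": "PB_201", "stop": "PB_202", "out": "MTR_02"},
    ]

    RUNG = "XIC({start})XIO({stop})OTE({out}); // {name} start/stop"

    def generate_plc_code(devices):
        """Emit one ladder rung of source text per device row."""
        return "\n".join(RUNG.format(**d) for d in devices)

    print(generate_plc_code(DEVICES))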
APA, Harvard, Vancouver, ISO, and other styles
11

Chheta, Monil Vallabhbh. "Design and Implementation of Energy Usage Monitoring and Control Systems Using Modular IIoT Framework." Thesis, 2021.

Find full text
Abstract:

This project aims to develop a cloud-based platform that integrates sensors with business intelligence for real-time energy management at the plant level. It provides facility managers with an energy management platform that allows them to monitor equipment and plant-level energy consumption remotely, receive warnings, identify energy loss due to malfunction, weigh options with quantifiable effects for decision-making, take actions, and assess the outcomes. The objectives consist of:

  1. Developing a generic platform for monitoring the energy consumption of industrial equipment using sensors

  2. Controlling the connected equipment using an actuator

  3. Integrating hardware, cloud, and application algorithms into the platform

  4. Validating the system using an Energy Consumption Forecast scenario

A demo station was created for testing the system, consisting of equipment such as an air compressor, a motor, and a light bulb. The current usage of this equipment is measured using current sensors; temperature, pressure, and CO2 sensors were also used. The current consumption of the equipment was measured over a couple of days. The control system was tested by turning on equipment at random times; turning on a piece of equipment resulted in current consumption, which confirmed that the system was running. Thus, the system worked as expected, and the user could monitor and control the connected equipment remotely.
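A minimal sketch of the monitor-and-warn loop described above, pushing simulated current readings to a cloud endpoint using only the standard library. The endpoint URL, payload fields and the 15 A warning threshold are assumptions for illustration.

    import json
    import time
    import random
    import urllib.request

    ENDPOINT = "https://cloud.example.com/api/readings"  # hypothetical ingestion endpoint
    WARN_AMPS = 15.0

    def read_current_sensor():
        """Stand-in for the real current-sensor driver."""
        return random.uniform(2.0, 20.0)

    while True:
        amps = read_current_sensor()
        payload = {"equipment": "air_compressor", "amps": amps, "ts": time.time()}
        req = urllib.request.Request(
            ENDPOINT, data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"}, method="POST")
        urllib.request.urlopen(req)  # push the reading to the cloud platform
        if amps > WARN_AMPS:
            print(f"WARNING: over-current on air_compressor ({amps:.1f} A)")
        time.sleep(60)  # one reading per minute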

APA, Harvard, Vancouver, ISO, and other styles
12

Frantz, Braiden M. "Active Shooter Mitigation for Open-Air Venues." Thesis, 2021.

Find full text
Abstract:

This dissertation examines the impact of active shooters upon patrons attending large outdoor events. There has been a spike in shooters targeting densely populated spaces in recent years, including open-air venues. The 2019 Gilroy Garlic Festival shooting was selected for replication using AnyLogic modeling software, to test various experiments designed to reduce casualties in the event of an active shooter situation. By validating the model to reproduce the outcomes of the real-world Gilroy Garlic Festival shooting, the researcher established a reliable foundational model for experimental purposes. This active shooter research project identifies the need for rapid response efforts to neutralize the shooter(s) as quickly as possible to minimize casualties. Key findings include the importance of armed officers patrolling event grounds to reduce response time, the need for adequate exits during emergency evacuations, the incorporation of modern technology to identify the shooter's location, and the applicability of a 1:548 police-to-patron ratio.
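The reported staffing ratio translates directly into a sizing rule of thumb; a two-line worked example with a hypothetical attendance figure:

    import math

    patrons = 25_000                     # hypothetical single-day attendance
    officers = math.ceil(patrons / 548)  # apply the 1:548 police-to-patron ratio
    print(officers)                      # -> 46 armed officers patrolling the grounds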

APA, Harvard, Vancouver, ISO, and other styles
13

Wijesinghe, Parami. "Neuro-inspired computing enhanced by scalable algorithms and physics of emerging nanoscale resistive devices." Thesis, 2019.

Find full text
Abstract:

Deep ‘Analog Artificial Neural Networks’ (AANNs) perform complex classification problems with high accuracy. However, they rely on a huge amount of power to perform the calculations, veiling the accuracy benefits. The biological brain, on the other hand, is significantly more powerful than such networks and consumes orders of magnitude less power, indicating some conceptual mismatch. Given that biological neurons are locally connected, communicate using energy-efficient trains of spikes, and behave non-deterministically, incorporating these effects in Artificial Neural Networks (ANNs) may drive us a few steps towards more realistic neural networks.

Emerging devices can offer a plethora of benefits, including power efficiency, faster operation, and low area, in a vast array of applications. For example, memristors and Magnetic Tunnel Junctions (MTJs) are suitable for high-density, non-volatile Random Access Memories when compared with CMOS implementations. In this work, we analyze the possibility of harnessing the characteristics of such emerging devices to achieve neuro-inspired solutions to intricate problems.

We propose how the inherent stochasticity of nanoscale resistive devices can be utilized to realize the functionality of spiking neurons and synapses that can be incorporated in deep stochastic Spiking Neural Networks (SNNs) for image classification problems. While ANNs mainly dwell in the aforementioned classification problem-solving domain, they can be adapted for a variety of other applications. One such neuro-inspired solution is the Cellular Neural Network (CNN) based Boolean satisfiability solver. Boolean satisfiability (k-SAT) is an NP-complete (k≥3) problem that constitutes one of the hardest classes of constraint satisfaction problems. We provide a proof-of-concept hardware-based analog k-SAT solver built using MTJs. The inherent physics of MTJs, enhanced by device-level modifications, is harnessed here to emulate the intricate dynamics of an analog, CNN-based satisfiability (SAT) solver.

Furthermore, in the effort to reach human-level performance in terms of accuracy, increasing the complexity and size of ANNs is crucial. Efficient algorithms for evaluating neural network performance are of significant importance for improving the scalability of networks, in addition to designing hardware accelerators. We propose a scalable approach for evaluating Liquid State Machines (LSMs): a bio-inspired computing model where the inputs are sparsely connected to a randomly interlinked reservoir (or liquid). It has been shown that biological neurons are more likely to be connected to other neurons in close proximity, and tend to be disconnected as the neurons are spatially far apart. Inspired by this, we propose a group of locally connected neuron reservoirs, or an ensemble-of-liquids approach, for LSMs. We analyze how the segmentation of a single large liquid into an ensemble of multiple smaller liquids affects the latency and accuracy of an LSM. In our analysis, we quantify the ability of the proposed ensemble approach to provide an improved representation of the input using the Separation Property (SP) and Approximation Property (AP). Our results illustrate that the ensemble approach enhances class discrimination (quantified as the ratio between the SP and AP), leading to improved accuracy in speech and image recognition tasks when compared to a single large liquid. Furthermore, we obtain performance benefits in terms of improved inference time and reduced memory requirements, due to the lower number of connections and the freedom to parallelize the liquid evaluation process.
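The ensemble-of-liquids idea reduces, structurally, to replacing one large random reservoir matrix with a block-diagonal set of smaller ones. The sketch below uses a generic leaky-tanh rate model purely to illustrate that structural difference; it is not the thesis's spiking LSM, and all sizes are arbitrary.

    import numpy as np
    from scipy.linalg import block_diag

    rng = np.random.default_rng(1)

    def reservoir(n, scale=0.9):
        """Random n-by-n reservoir weight matrix."""
        return scale * rng.standard_normal((n, n)) / np.sqrt(n)

    single = reservoir(400)                                     # one large liquid
    ensemble = block_diag(*(reservoir(100) for _ in range(4)))  # four smaller liquids

    def run(W, steps=50, leak=0.3):
        """Generic leaky-tanh rate dynamics driven by a random scalar input."""
        n = W.shape[0]
        w_in = rng.standard_normal(n)
        x = np.zeros(n)
        for _ in range(steps):
            x = (1 - leak) * x + leak * np.tanh(W @ x + w_in * rng.standard_normal())
        return x

    print(run(single).shape, run(ensemble).shape)
    # The ensemble's blocks never interact, so each small liquid can be evaluated in
    # parallel, and the connection count falls from 400^2 to 4 * 100^2.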

APA, Harvard, Vancouver, ISO, and other styles
14

Dale, Ashley S. "3D Object Detection Using Virtual Environment Assisted Deep Network Training." Thesis, 2021.

Find full text
Abstract:

An RGBZ synthetic dataset consisting of five object classes in a variety of virtual environments and orientations was combined with a small sample of real-world image data and used to train the Mask R-CNN (MR-CNN) architecture in a variety of configurations. When the MR-CNN architecture was initialized with MS COCO weights and the heads were trained with a mix of synthetic and real-world data, F1 scores improved in four of the five classes: the average maximum F1-score over all classes and all epochs for the networks trained with synthetic data is F1* = 0.91, compared to F1 = 0.89 for the networks trained exclusively with real data, and the standard deviation of the maximum mean F1-score for synthetically trained networks is σ*_F1 = 0.015, compared to σ_F1 = 0.020 for the networks trained exclusively with real data. Various backgrounds in synthetic data were shown to have negligible impact on F1 scores, opening the door to abstract backgrounds and minimizing the need for intensive synthetic data fabrication. When the MR-CNN architecture was initialized with MS COCO weights and depth data was included in the training data, the network was shown to rely heavily on the initial convolutional input to feed features into the network, the image depth channel was shown to influence mask generation, and the image color channels were shown to influence object classification. A set of latent variables for a subset of the synthetic dataset was generated with a Variational Autoencoder and then analyzed using Principal Component Analysis and Uniform Manifold Approximation and Projection (UMAP). The UMAP analysis showed no meaningful distinction between real-world and synthetic data, and a small bias towards clustering based on image background.
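The summary statistics quoted above, the mean and standard deviation of the per-class maximum F1 over epochs, are straightforward to compute from a classes-by-epochs score matrix; a short sketch with synthetic numbers in place of the study's real scores:

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic stand-in: one F1 score per class (rows) per training epoch (columns).
    f1 = np.clip(0.85 + 0.05 * rng.standard_normal((5, 40)), 0.0, 1.0)

    max_per_class = f1.max(axis=1)    # best epoch for each of the five classes
    print(max_per_class.mean())       # analogous to the reported F1* = 0.91
    print(max_per_class.std(ddof=1))  # analogous to the reported sigma*_F1 = 0.015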

APA, Harvard, Vancouver, ISO, and other styles
15

McGraw, Jordan M. "Implementation and Analysis of Co-Located Virtual Reality for Scientific Data Visualization." Thesis, 2020.

Find full text
Abstract:
Advancements in virtual reality (VR) technologies have led to overwhelming critique and acclaim in recent years. Academic researchers have already begun to take advantage of these immersive technologies across all manner of settings. Using immersive technologies, educators are able to more easily interpret complex information with students and colleagues. Despite the advantages these technologies bring, some drawbacks still remain. One particular drawback is the difficulty of engaging in immersive environments with others in a shared physical space (i.e., with a shared virtual environment). A common strategy for improving collaborative data exploration has been to use technological substitutions to make distant users feel they are collaborating in the same space. This research, however, is focused on how virtual reality can be used to build upon real-world interactions which take place in the same physical space (i.e., collaborative, co-located, multi-user virtual reality).

In this study we address two primary dimensions of collaborative data visualization and analysis as follows: [1] we detail the implementation of a novel co-located VR hardware and software system; [2] we conduct a formal user experience study of the novel system using the NASA Task Load Index (Hart, 1986) and introduce the Modified User Experience Inventory, a new user study inventory based upon the Unified User Experience Inventory (Tcha-Tokey, Christmann, Loup-Escande, & Richir, 2016), to empirically observe the dependent measures of Workload, Presence, Engagement, Consequence, and Immersion. A total of 77 participants volunteered to join a demonstration of this technology at Purdue University. In groups ranging from two to four, participants shared a co-located virtual environment built to visualize point cloud measurements of exploded supernovae. This study is not experimental but observational. We found moderately high levels of user experience and moderate levels of workload demand in our results. We describe the implementation of the software platform and present user reactions to the technology that was created. These are described in detail within this manuscript.
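For reference, the standard NASA Task Load Index referred to above combines six subscale ratings (0-100) with pairwise-comparison weights summing to 15; a minimal scorer follows, with made-up ratings and weights (the study's Modified User Experience Inventory is a separate instrument).

    def tlx_score(ratings, weights):
        """Weighted NASA-TLX workload score; weights come from 15 pairwise comparisons."""
        assert sum(weights.values()) == 15
        return sum(ratings[k] * weights[k] for k in ratings) / 15.0

    ratings = {"mental": 60, "physical": 20, "temporal": 45,
               "performance": 30, "effort": 55, "frustration": 25}  # hypothetical ratings
    weights = {"mental": 4, "physical": 1, "temporal": 3,
               "performance": 2, "effort": 4, "frustration": 1}     # sums to 15
    print(tlx_score(ratings, weights))  # overall workload on a 0-100 scale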
APA, Harvard, Vancouver, ISO, and other styles