Dissertations / Theses on the topic 'Data management and data science not elsewhere classified'

To see the other types of publications on this topic, follow the link: Data management and data science not elsewhere classified.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 15 dissertations / theses for your research on the topic 'Data management and data science not elsewhere classified.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Woon, Wei Lee. "Analysis of magnetoencephalographic data as a nonlinear dynamical system." Thesis, Aston University, 2002. http://publications.aston.ac.uk/13266/.

Full text
Abstract:
This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study of both the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, as well as the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - by directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc., bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions.
A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis will be to prove that much more effective and powerful analysis of MEG can be achieved if one were to assume the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in the MEG recordings.
Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are high costs and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sunspot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
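The thesis itself does not publish code here, but a common first step in the kind of nonlinear, dynamical-systems treatment described above is time-delay embedding of a single-channel signal. The sketch below uses an arbitrary toy signal and placeholder delay/dimension values; it illustrates the general idea only and is not the author's method.

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Time-delay embedding of a 1-D signal x into `dim` dimensions with lag `tau`.

    Each row of the returned matrix is one reconstructed state vector, in the
    spirit of treating a single-channel recording as observations of an
    underlying dynamical system.
    """
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("signal too short for the chosen dim/tau")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy single-channel "recording": a noisy oscillation standing in for MEG data.
t = np.linspace(0, 10, 2000)
signal = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(t.size)
states = delay_embed(signal, dim=3, tau=20)
print(states.shape)  # (n_states, 3) reconstructed state vectors
```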
APA, Harvard, Vancouver, ISO, and other styles
2

Jiang, Feng. "Capturing event metadata in the sky : a Java-based application for receiving astronomical internet feeds : a thesis presented in partial fulfilment of the requirements for the degree of Master of Computer Science in Computer Science at Massey University, Auckland, New Zealand." Massey University, 2008. http://hdl.handle.net/10179/897.

Full text
Abstract:
When an astronomical observer discovers a transient event in the sky, how can the information be immediately shared and delivered to others? Not long ago, people shared what they discovered in the sky by books, telegraphs, and telephones. The new way of transferring event data is via the Internet. Information about astronomical events can be packaged and published online as an Internet feed, and receiving these packaged data requires Internet feed listener software on a terminal computer. In other applications, the listener could connect to an intelligent robotic telescope network and automatically drive a telescope to capture transient astrophysical phenomena. However, because the technologies for transferring astronomical event data are still at an early stage, the only resource available was the Perl-based Internet feed listener developed by the eSTAR team. In this research, a Java-based Internet feed listener was developed that supports more features than the Perl-based application. By drawing on the strengths of Java, the application is able to receive, parse, and manage Internet feed data efficiently through a user-friendly interface. Keywords: Java, socket programming, VOEvent, real-time astronomy
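As a rough illustration of the feed-listener idea only: the thesis's implementation is in Java, and the host, port, and transport details below are invented placeholders rather than the eSTAR or VOEvent protocol specifics. A minimal Python sketch of the listen-then-parse pattern might look like this.

```python
import socket
import xml.etree.ElementTree as ET

# Hypothetical feed endpoint; real VOEvent brokers use their own transport details.
HOST, PORT = "voevent.example.org", 8099

def listen_once(host, port, bufsize=65536):
    """Connect to a feed, read one packet of XML, and return its root element."""
    with socket.create_connection((host, port), timeout=30) as sock:
        data = sock.recv(bufsize)
    return ET.fromstring(data.decode("utf-8"))

# Usage (would require a live broker):
# root = listen_once(HOST, PORT)
# print(root.tag, root.attrib)
```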
APA, Harvard, Vancouver, ISO, and other styles
3

Gales, Mathis. "Collaborative map-exploration around large table-top displays: Designing a collaboration interface for the Rapid Analytics Interactive Scenario Explorer toolkit." Thesis, Ludwig-Maximilians-University Munich, 2018. https://eprints.qut.edu.au/115909/1/Master_Thesis_Mathis_Gales_final_opt.pdf.

Full text
Abstract:
Sense-making of spatial data on an urban level and large-scale decisions on new infrastructure projects need teamwork from experts with varied backgrounds. Technology can facilitate this collaboration process and magnify the effect of collective intelligence. Therefore, this work explores new useful collaboration interactions and visualizations for map-exploration software with a strong focus on usability. Additionally, for same-time and same-place group work, interactive table-top displays serve as a natural platform. Thus, the second aim of this project is to develop a user-friendly concept for integrating table-top displays with collaborative map-exploration. To achieve these goals, we continuously adapted the user-interface of the map-exploration software RAISE. We adopted a user-centred design approach and a simple iterative interaction design lifecycle model. Alternating between quick prototyping and user-testing phases, new design concepts were assessed and consequently improved or rejected. The necessary data was gathered through continuous dialogue with users and experts, a participatory design workshop, and a final observational study. Adopting a cross-device concept, our final prototype supports sharing information between a user’s personal device and table-top display(s). We found that this allows for a comfortable and practical separation between private and shared workspaces. The tool empowers users to share the current camera-position, data queries, and active layers between devices and with other users. We generalized further findings into a set of recommendations for designing user-friendly tools for collaborative map-exploration. The set includes recommendations regarding the sharing behaviour, the user-interface design, and the idea of playfulness in collaboration.
APA, Harvard, Vancouver, ISO, and other styles
4

(7360664), Gary Lee Johns. "STEM AND DATA: INSTRUCTIONAL DECISION MAKING OF SECONDARY SCIENCE AND MATHEMATICS TEACHERS." Thesis, 2019.

Find full text
Abstract:
This research is focused on the intersection of secondary teachers’ data-use to inform instructional decisions and their teaching of STEM in STEM-focused high schools. Teaching STEM requires presenting more than just the content knowledge of the STEM domains. The methods of inquiry (e.g., scientific inquiry, engineering design) are skills that should be taught as part of STEM activities (e.g., science labs). However, under the data- and standards-based accountability focus of education, it is unclear how data from STEM activities is used in instructional decision-making. While teachers give tremendous weight to the data they collect directly from their observations of their classrooms, it is data from standardized testing that strongly influences practices through accountability mandates. STEM education alters this scenario because, while there is a growing focus on teaching STEM, important aspects of STEM education are not readily standardized. This mixed-methods study will examine the perspectives of 9th through 12th grade science and mathematics teachers, in STEM-focused schools, on data-use and STEM teaching. We developed a framework, adapted from existing frameworks of data-use, to categorize these perspectives and outline contexts influencing them. Through a concurrent triangulation design we will combine quantitative and qualitative data for a comprehensive synthesis of these perspectives.
APA, Harvard, Vancouver, ISO, and other styles
5

(10514360), Uttara Vinay Tipnis. "Data Science Approaches on Brain Connectivity: Communication Dynamics and Fingerprint Gradients." Thesis, 2021.

Find full text
Abstract:
The innovations in Magnetic Resonance Imaging (MRI) in recent decades have given rise to large open-source datasets. MRI affords researchers the ability to look at both structure and function of the human brain. This dissertation will make use of one of these large open-source datasets, the Human Connectome Project (HCP), to study the structural and functional connectivity in the brain.
Communication processes within the human brain at different cognitive states are neither well understood nor completely characterized. We assess communication processes in the human connectome using an ant colony-inspired cooperative learning algorithm, starting from a source with no a priori information about the network topology, and cooperatively searching for the target through a pheromone-inspired model. This framework relies on two parameters, namely pheromone and edge perception, to define the cognizance and subsequent behaviour of the ants on the network and the communication processes happening between source and target. Simulations with different configurations allow the identification of path-ensembles that are involved in the communication between node pairs. In order to assess the different communication regimes displayed in the simulations and their associations with functional connectivity, we introduce two network measurements, effective path-length and arrival rate. These measurements are tested as individual and combined descriptors of functional connectivity during different tasks. Finally, different communication regimes are found in different specialized functional networks. This framework may be used as a test-bed for different communication regimes on top of an underlying topology.
The assessment of brain fingerprints has emerged in recent years as an important tool to study individual differences. Studies so far have mainly focused on connectivity fingerprints between different brain scans of the same individual. We extend the concept of brain connectivity fingerprints beyond test/retest and assess fingerprint gradients in young adults by developing an extension of the differential identifiability framework. To do so, we look at the similarity not only between the multiple scans of an individual (subject fingerprint), but also between the scans of monozygotic and dizygotic twins (twin fingerprint). We have carried out this analysis on the 8 fMRI conditions present in the Human Connectome Project -- Young Adult dataset, which we processed into functional connectomes (FCs) and time series parcellated according to the Schaefer Atlas scheme, which has multiple levels of resolution. Our differential identifiability results show that the fingerprint gradients based on genetic and environmental similarities are indeed present when comparing FCs for all parcellations and fMRI conditions. Importantly, only when assessing optimally reconstructed FCs do we fully uncover fingerprints present in higher resolution atlases. We also study the effect of scanning length and parcellation on the subject fingerprint of resting-state FCs. In the pursuit of open science, we have also made available the processed and parcellated FCs and time series for all conditions for the ~1200 subjects in the HCP-YA dataset to the scientific community.
Lastly, we have estimated the effect of genetics and environment on the original and optimally reconstructed FC with an ACE model.
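As a loose, illustrative sketch of the pheromone-based communication idea described in this abstract: the parameter names, update rules, and stopping criteria below are simplifications invented for illustration, not the dissertation's actual model. The sketch simulates pheromone-biased walks between a source and a target node on a weighted connectivity matrix and reports a toy "effective path length".

```python
import numpy as np

def ant_walk(W, source, target, n_ants=100, evaporation=0.1, rng=None):
    """Pheromone-biased walks from source to target on a weighted adjacency matrix W.

    Each ant moves to a neighbour with probability proportional to edge weight
    times pheromone, deposits pheromone along its path, and pheromone evaporates
    between ants. Returns the mean number of steps taken by successful ants.
    """
    rng = rng or np.random.default_rng(0)
    n = W.shape[0]
    pheromone = np.ones_like(W, dtype=float)
    lengths = []
    for _ in range(n_ants):
        node, steps = source, 0
        while node != target and steps < 10 * n:
            weights = W[node] * pheromone[node]
            if weights.sum() == 0:
                break
            nxt = rng.choice(n, p=weights / weights.sum())
            pheromone[node, nxt] += 1.0   # deposit along the traversed edge
            node, steps = nxt, steps + 1
        if node == target:
            lengths.append(steps)
        pheromone *= (1.0 - evaporation)  # global evaporation
    return np.mean(lengths) if lengths else np.inf

# Example with a random sparse "connectome":
# W = (np.random.default_rng(1).random((50, 50)) > 0.9).astype(float)
# print(ant_walk(W, source=0, target=10))
```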
APA, Harvard, Vancouver, ISO, and other styles
6

(8802305), Tian Qi. "THE IMPACT OF DATA BREACH ON SUPPLIERS' PERFORMANCE: THE CASE OF TARGET." Thesis, 2020.

Find full text
Abstract:
The author investigated the conditions under which competition and contagion effects impact the suppliers of a firm that experiences a data breach. An event study was conducted to analyze the stock prices of 104 suppliers of Target after the large-scale data breach in 2013. The results showed that suppliers with high dependence on Target experienced negative abnormal returns on the day after Target’s announcement, while those with low dependence experienced positive abnormal returns. After regressing the abnormal returns on a set of explanatory variables, the results showed that firms with better operational performance and high information technology capability were less negatively affected. This study suggests that suppliers who rely heavily on one customer company are susceptible to negative shocks from that customer because of the contagion effect. Furthermore, maintaining good performance and investing in information technology can help firms reduce losses from negative events at customer companies.
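For readers unfamiliar with event-study mechanics, the core abnormal-return calculation can be sketched as below. This is a generic market-model estimate; the window length, indexing convention, and variable names are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

def abnormal_return(stock_ret, market_ret, event_idx, est_window=120):
    """Market-model abnormal return on a given event day.

    Fits r_stock = alpha + beta * r_market over an estimation window ending
    just before the event, then subtracts the predicted return on the event day.
    """
    est = slice(event_idx - est_window, event_idx)
    beta, alpha = np.polyfit(market_ret[est], stock_ret[est], 1)
    expected = alpha + beta * market_ret[event_idx]
    return stock_ret[event_idx] - expected

# Toy usage with simulated daily returns:
# rng = np.random.default_rng(0)
# market = rng.normal(0, 0.01, 200); stock = 0.8 * market + rng.normal(0, 0.01, 200)
# print(abnormal_return(stock, market, event_idx=150))
```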
APA, Harvard, Vancouver, ISO, and other styles
7

(11167785), Nicolae Christophe Iovanac. "GENERATIVE, PREDICTIVE, AND REACTIVE MODELS FOR DATA SCARCE PROBLEMS IN CHEMICAL ENGINEERING." Thesis, 2021.

Find full text
Abstract:
Data scarcity is intrinsic to many problems in chemical engineering due to physical constraints or cost. This challenge is acute in chemical and materials design applications, where a lack of data is the norm when trying to develop something new for an emerging application. Addressing novel chemical design under these scarcity constraints takes one of two routes: the traditional forward approach, where properties are predicted based on chemical structure, and the recent inverse approach, where structures are predicted based on required properties. Statistical methods such as machine learning (ML) could greatly accelerate chemical design under both frameworks; however, in contrast to the modeling of continuous data types, molecular prediction has many unique obstacles (e.g., spatial and causal relationships, featurization difficulties) that require further ML methods development. Despite these challenges, this work demonstrates how transfer learning and active learning strategies can be used to create successful chemical ML models in data scarce situations.
Transfer learning is a domain of machine learning under which information learned in solving one task is transferred to help in another, more difficult task. Consider the case of a forward design problem involving the search for a molecule with a particular property target with limited existing data, a situation not typically amenable to ML. In these situations, there are often correlated properties that are computationally accessible. As all chemical properties are fundamentally tied to the underlying chemical topology, and because related properties arise due to related moieties, the information contained in the correlated property can be leveraged during model training to help improve the prediction of the data scarce property. Transfer learning is thus a favorable strategy for facilitating high throughput characterization of low-data design spaces.
Generative chemical models invert the structure-function paradigm, and instead directly suggest new chemical structures that should display the desired application properties. This inversion process is fraught with difficulties but can be improved by training these models with strategically selected chemical information. Structural information contained within this chemical property data is thus transferred to support the generation of new, feasible compounds. Moreover, the transfer learning approach helps ensure that the proposed structures exhibit the specified property targets. Recent extensions also utilize thermodynamic reaction data to help promote the synthesizability of suggested compounds. These transfer learning strategies are well-suited for explorative scenarios where the property values being sought are well outside the range of available training data.
There are situations where property data is so limited that obtaining additional training data is unavoidable. By improving both the predictive and generative qualities of chemical ML models, a fully closed-loop computational search can be conducted using active learning. New molecules in underrepresented property spaces may be iteratively generated by the network, characterized by the network, and used for retraining the network. This allows the model to gradually learn the unknown chemistries required to explore the target regions of chemical space by actively suggesting the new training data it needs. By utilizing active learning, the create-test-refine pathway can be addressed purely in silico. This approach is particularly suitable for multi-target chemical design, where the high dimensionality of the desired property targets exacerbates data scarcity concerns.
The techniques presented herein can be used to improve both predictive and generative performance of chemical ML models. Transfer learning is demonstrated as a powerful technique for improving the predictive performance of chemical models in situations where a correlated property can be leveraged alongside scarce experimental or computational properties. Inverse design may also be facilitated through the use of transfer learning, where property values can be connected with stable structural features to generate new compounds with targeted properties beyond those observed in the training data. Thus, when the necessary chemical structures are not known, generative networks can directly propose them based on function-structure relationships learned from domain data, and this domain data can even be generated and characterized by the model itself for closed-loop chemical searches in an active learning framework. With recent extensions, these models are compelling techniques for looking at chemical reactions and other data types beyond the individual molecule. Furthermore, the approaches are not limited by choice of model architecture or chemical representation and are expected to be helpful in a variety of data scarce chemical applications.
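A minimal sketch of the transfer-learning pattern described in this abstract, assuming a generic feed-forward property predictor: pretrain all weights on an abundant, correlated property, then fine-tune only the output head on the scarce target property. The layer sizes, random data, and training settings are placeholders, not the dissertation's models or datasets.

```python
import torch
import torch.nn as nn

class PropertyNet(nn.Module):
    """Small regressor with a shared body and a task-specific head."""
    def __init__(self, n_features=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                  nn.Linear(64, 32), nn.ReLU())
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.body(x))

def fit(model, X, y, params, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

model = PropertyNet()
# 1) Pretrain all weights on the abundant, correlated property (random stand-in data).
X_big, y_big = torch.randn(5000, 128), torch.randn(5000, 1)
fit(model, X_big, y_big, model.parameters())
# 2) Fine-tune only the head on the scarce target property.
X_small, y_small = torch.randn(40, 128), torch.randn(40, 1)
for p in model.body.parameters():
    p.requires_grad_(False)          # freeze the shared body
fit(model, X_small, y_small, model.head.parameters(), epochs=100)
```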
APA, Harvard, Vancouver, ISO, and other styles
8

(9868160), Wan-Eih Huang. "Image Processing, Image Analysis, and Data Science Applied to Problems in Printing and Semantic Understanding of Images Containing Fashion Items." Thesis, 2020.

Find full text
Abstract:
This thesis aims to address problems in printing and semantic understanding of images.
The first one is developing a halftoning algorithm for multilevel output with unequal resolution printing pixels. We proposed a design method and implemented several versions of halftone screens. They all show good visual results in a real, low-cost electrophotographic printer.
The second problem is related to printing quality and self-diagnosis. First, we incorporated logistic regression for the classification of visible and invisible band defects in the detection pipeline. We also proposed a new cost-function-based algorithm with synthetic missing bands to estimate the repetitive interval of periodic bands for self-diagnosing the failing component; it is much more accurate than the previous method. Second, we addressed this problem with acoustic signals. Due to the scarcity of printer sounds, an acoustic signal augmentation method is needed to help a classifier perform better. The key idea is to mimic the situation that occurs when a component begins to fail.
The third problem deals with recommendation systems. We explored the similarity metrics in the loss function for a neural matrix factorization network.
The last problem is about image understanding of fashion items. We proposed a weakly supervised framework that includes mask-guided teacher network training and attention-based transfer learning to mitigate the domain gap in datasets and acquire a new dataset with rich annotations.
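As a toy illustration of the logistic-regression defect-classification step mentioned above: the two features, the synthetic data, and the labels below are invented placeholders, not the thesis's actual detection pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
features = np.column_stack([
    rng.normal(0.5, 0.2, n),   # e.g. band contrast (hypothetical feature)
    rng.normal(2.0, 0.8, n),   # e.g. band width in pixels (hypothetical feature)
])
# Toy ground truth: high-contrast bands are treated as "visible" defects.
labels = (features[:, 0] + 0.1 * rng.normal(size=n) > 0.5).astype(int)

clf = LogisticRegression().fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```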
APA, Harvard, Vancouver, ISO, and other styles
9

(8067608), Zhi Li. "COPING WITH LIMITED DATA: MACHINE-LEARNING-BASED IMAGE UNDERSTANDING APPLICATIONS TO FASHION AND INKJET IMAGERY." Thesis, 2019.

Find full text
Abstract:
Machine learning has been revolutionizing our approach to image understanding problems. However, due to the unique nature of the problem, finding suitable data or learning from limited data properly is a constant challenge. In this work, we focus on building machine learning pipelines for fashion and inkjet image analysis with limited data.

We first look into the dire issue of missing and incorrect information on online fashion marketplaces. Unlike professional online fashion retailers, sellers on P2P marketplaces tend not to provide correct color category information, which is pivotal for fashion shopping. Therefore, to assist users in providing correct color information, we aim to build an image understanding pipeline that can extract the garment region in a fashion image and match the color of the fashion item to pre-defined color categories on the fashion marketplace. To cope with the lack of suitable data, we propose an autonomous garment color extraction system that uses both clustering and semantic segmentation algorithms to extract and identify fashion garments in the image. In addition, a psychophysical experiment is designed to collect human subjects' color naming schema, and a random forest classifier is trained to give a close prediction of the color label for the fashion item. Our system is able to perform pixel-level segmentation on fashion product portraits and parse human body parts and various fashion categories with human presence.

We also develop an inkjet printing analysis pipeline using a pre-trained neural network. Our pipeline is able to learn to perceive print quality, namely the high-frequency noise level of the test targets, without intensive training. Our research also suggests that, despite being trained on a large-scale dataset for object recognition, features generated by neural networks react to the textural component of an image without any localized features. In addition, we extend our pipeline to printer forensics, where it is able to identify the printer model by examining the inkjet dot pattern at a microscopic level. Overall, the data-driven computer vision approach presents great value and potential for improving future inkjet imaging technology, even when the data source is limited.
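The color-naming step described above could, in spirit, look like the following sketch: a random forest mapping garment color values to predefined category labels. The palette and jittered training samples are invented stand-ins for the psychophysical color-naming data the thesis collected.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical marketplace color categories and representative RGB values.
palette = {"black": (20, 20, 20), "white": (240, 240, 240),
           "red": (200, 30, 40), "blue": (30, 60, 200)}

rng = np.random.default_rng(1)
X, y = [], []
for name, rgb in palette.items():
    X.append(np.clip(rng.normal(rgb, 15, size=(50, 3)), 0, 255))  # jittered samples
    y += [name] * 50
X = np.vstack(X)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[210, 40, 50]]))  # -> likely "red"
```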
APA, Harvard, Vancouver, ISO, and other styles
10

(10292846), Zhipeng Deng. "RECOGNITION OF BUILDING OCCUPANT BEHAVIORS FROM INDOOR ENVIRONMENT PARAMETERS BY DATA MINING APPROACH." Thesis, 2021.

Find full text
Abstract:
Currently, people in North America spend roughly 90% of their time indoors. Therefore, it is important to create comfortable, healthy, and productive indoor environments for the occupants. Unfortunately, our resulting indoor environments are still very poor, especially in multi-occupant rooms. In addition, energy consumption in residential and commercial buildings by HVAC systems and lighting accounts for about 41% of primary energy use in the US. However, the current methods for simulating building energy consumption are often not accurate, and various types of occupant behavior may explain this inaccuracy.
This study first developed artificial neural network models for predicting thermal comfort and occupant behavior in indoor environments. The models were trained by data on indoor environmental parameters, thermal sensations, and occupant behavior collected in ten offices and ten houses/apartments. The models were able to predict similar acceptable air temperature ranges in offices, from 20.6 °C to 25 °C in winter and from 20.6 °C to 25.6 °C in summer. We also found that the comfortable air temperature in the residences was 1.7 °C lower than that in the offices in winter, and 1.7 °C higher in summer. The reason for this difference may be that the occupants of the houses/apartments were responsible for paying their energy bills. The comfort zone obtained by the ANN model using thermal sensations in the ten offices was narrower than the comfort zone in ASHRAE Standard 55, but that using behaviors was wider.
Then this study used the EnergyPlus program to simulate the energy consumption of HVAC systems in office buildings. Measured energy data were used to validate the simulated results. When using the collected behavior from the offices, the difference between the simulated results and the measured data was less than 13%. When a behavioral ANN model was implemented in the energy simulation, the simulation performed similarly. However, energy simulation using constant thermostat set point without considering occupant behavior was not accurate. Further simulations demonstrated that adjusting the thermostat set point and the clothing could lead to a 25% variation in energy use in interior offices and 15% in exterior offices. Finally, energy consumption could be reduced by 30% with thermostat setback control and 70% with occupancy control.
Because of many contextual factors, most previous studies have built data-driven behavior models with limited scalability and generalization capability. This investigation built a policy-based reinforcement learning (RL) model for the behavior of adjusting the thermostat and clothing level. We used Q-learning to train the model and validated it with collected data. After training, the model predicted the behavior with R2 from 0.75 to 0.80 in an office building. This study also transferred the behavior knowledge of the RL model to other office buildings with different HVAC control systems. The transfer learning model predicted with R2 from 0.73 to 0.80. Going from office buildings to residential buildings, the transfer learning model also had an R2 over 0.60. Therefore, the RL model combined with transfer learning was able to predict the building occupant behavior accurately with good scalability, and without the need for data collection.
Unsuitable thermostat settings lead to energy waste and an undesirable indoor environment, especially in multi-occupant rooms. This study aimed to develop an HVAC control strategy in multi-occupant offices using physiological parameters measured by wristbands. We used an ANN model to predict thermal sensation from air temperature, relative humidity, clothing level, wrist skin temperature, skin relative humidity and heart rate. Next, we developed a control strategy to improve the thermal comfort of all the occupants in the room. The control system was smart and could adjust the thermostat set point automatically in real time. We improved the occupants’ thermal comfort level such that over half of the occupants reported feeling neutral, and fewer than 5% still felt uncomfortable. After coupling with occupancy-based control by means of lighting sensors or wristband Bluetooth, the heating and cooling loads were reduced by 90% and 30%, respectively. Therefore, the smart HVAC control system can effectively control the indoor environment for thermal comfort and energy saving.
In future studies, we will first use more advanced sensors to collect more kinds of occupant behavior-related data and will expand the research to occupant behavior related to indoor air quality, noise, and illuminance level. These data will allow behavior to be recognized directly rather than through questionnaire surveys. We will also develop a personalized zonal control system for the multi-occupant office. The number and location of inlet diffusers can be determined by using inverse design.
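A toy sketch of the tabular Q-learning idea behind the thermostat-adjustment behavior model: the states, actions, and comfort-based reward below are illustrative simplifications, not the dissertation's trained model or its data.

```python
import numpy as np

temps = np.arange(18, 29)            # discretized indoor temperatures, 18..28 °C
actions = np.array([-1, 0, +1])      # lower / keep / raise the set point
Q = np.zeros((temps.size, actions.size))
alpha, gamma, eps = 0.1, 0.9, 0.1    # learning rate, discount, exploration rate
rng = np.random.default_rng(0)

def reward(temp_c):
    return -abs(temp_c - 23)         # toy comfort reward peaking around 23 °C

state = 5                            # start at 23 °C
for step in range(5000):
    # epsilon-greedy action selection
    a = rng.integers(actions.size) if rng.random() < eps else int(Q[state].argmax())
    next_state = int(np.clip(state + actions[a], 0, temps.size - 1))
    r = reward(temps[next_state])
    # standard Q-learning update
    Q[state, a] += alpha * (r + gamma * Q[next_state].max() - Q[state, a])
    state = next_state

print("greedy action per temperature:", actions[Q.argmax(axis=1)])
```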
APA, Harvard, Vancouver, ISO, and other styles
11

(9183329), Moonsik Shin. "Essays on Product Innovation and Failures." Thesis, 2020.

Find full text
Abstract:

In this dissertation, I investigate how firms’ various strategic decisions lead to innovation failures. Extant research in the strategic management field has suggested that a firm’s strategic choices determine its innovation trajectories and outcomes. While previous studies have predominantly emphasized firms’ successful innovation outcomes, very little research has been conducted on the antecedents of innovation failures. Although firms’ successful innovation outcomes provide important implications for understanding the source of firms’ competitive advantages, failed innovations would provide us with critical insight into firms’ ability to survive and develop, as they may result in unfavorable consequences such as financial risks and negative impacts on firms’ reputations. In this light, I examine how various strategic choices – such as interorganizational relationships, acquisitions, and internal R&D – affect firms’ innovation trajectories and failures.

In Essay 1, I explore how firms’ decisions to form interorganizational relationships can affect their innovation failures. In particular, I investigate how a venture’s choice to form an investment relationship with a particular venture capitalist (VC) could determine the venture’s innovation failures. I propose that the time pressure that VCs face may elicit negative consequences for their portfolio companies’ innovation quality. In Essay 2, I examine how firms’ efforts to acquire technology and knowledge from external markets through acquisitions could affect their innovation failure rates. I suggest and find that adverse selection and post-acquisition integration problems impose substantial costs on firms pursuing acquisitions, leading them to experience a high rate of innovation failures. In Essay 3, I examine how firms’ efforts to develop new products incrementally affect their innovation failures. I suggest that, due to the path-dependent nature of product development, when firms develop and introduce new products through an incremental approach, they may face the risk of their new products being exposed to failures associated with the products and underlying technologies upon which they are built.


APA, Harvard, Vancouver, ISO, and other styles
12

(11198013), Kevin Wee. "Creation, deconstruction, and evaluation of a biochemistry animation about the role of the actin cytoskeleton in cell motility." Thesis, 2021.

Find full text
Abstract:

External representations (ERs) used in science education are multimodal ensembles consisting of design elements that convey educational meanings to the audience. As an example of a dynamic ER, an animation presents its content features (i.e., scientific concepts) by varying the features’ depiction over time. A production team invited the dissertation author to inspect their creation of a biochemistry animation about the role of the actin cytoskeleton in cell motility and the animation’s implications for learning. To address this, the author developed a four-step methodology entitled the Multimodal Variation Analysis of Dynamic External Representations (MVADER) that deconstructs the animation’s content and design to inspect how each content feature is conveyed via the animation’s design elements.


This dissertation research investigated the actin animation’s educational value and MVADER’s utility in animation evaluation. The research design was guided by descriptive case study methodology and an integrated framework consisting of variation theory, multimodal analysis, and visual analytics. As stated above, the animation was analyzed using MVADER. The development of the actin animation and the content features the production team members intended to convey via the animation were studied by analyzing the communication records between the members, observing the team meetings, and interviewing the members individually. Furthermore, students’ learning experiences from watching the animation were examined via semi-structured interviews coupled with post-storyboarding. Moreover, the instructions of MVADER and its applications in studying the actin animation were reviewed to determine MVADER’s usefulness as an animation evaluation tool.


Findings of this research indicate that the three educators in the production team intended the actin animation to convey forty-three content features to the undergraduate biology students. At least 50% of the students who participated in this study learned thirty-five of these forty-three (>80%) features. Evidence suggests that the animation’s effectiveness in conveying its features was associated with the features’ depiction time, the number of identified design elements applied to depict the features, and the features’ variation of depiction over time.


Additionally, one-third of the student participants made similar mistakes regarding two content features after watching the actin animation: the F-actin elongation and the F-actin crosslink structure in lamellipodia. The analysis reveals potential design flaws in the animation that might have contributed to these common misconceptions. Furthermore, two disruptors to the creation process and the educational value of the actin animation were identified: the vagueness of the learning goals and the designer’s prioritization of the animation’s beauty over its alignment with the learning goals. The vagueness of the learning goals hampered the narration scripting process. On the other hand, the designer’s prioritization of the animation’s aesthetic led to the inclusion of a “beauty shot” in the animation that caused confusion among students.


MVADER was used to examine the content, design, and their relationships in the actin animation across multiple aspects and granularities. The result of MVADER was compared with the students’ learning outcomes from watching the animation to identify the characteristics of the content’s depiction that were constructive or disruptive to learning. These findings led to several practical recommendations for teaching with the actin animation and for creating educational ERs.


To conclude, this dissertation discloses the connections between the creation process, the content and design, and the educational implication of a biochemistry animation. It also introduces MVADER as a novel ER analysis tool to the education research and visualization communities. MVADER can be applied in various formats of static and dynamic ERs and beyond the disciplines of biology and chemistry.

APA, Harvard, Vancouver, ISO, and other styles
13

Beckett, Jason. "Forensic computing : a deterministic model for validation and verification through an ontological examination of forensic functions and processes." 2010. http://arrow.unisa.edu.au:8081/1959.8/93190.

Full text
Abstract:
This dissertation contextualises the forensic computing domain in terms of validation of tools and processes. It explores the current state of forensic computing, comparing it to the traditional forensic sciences. The research then develops a classification system for the discipline's functions to establish an extensible base upon which a validation system is developed.
Thesis (PhD)--University of South Australia, 2010
APA, Harvard, Vancouver, ISO, and other styles
14

(9708611), Zackery Ray Roberson. "Advances in Gas Chromatography and Vacuum UV Spectroscopy: Applications to Fire Debris Analysis & Drugs of Abuse." Thesis, 2021.

Find full text
Abstract:
In forensic chemistry, quicker and more accurate analysis of a sample is always being pursued. Speedy analyses allow the analyst to provide quick turnaround times and potentially decrease the backlogs that are known to be a problem in the field. Accurate analyses are paramount, with the futures and lives of the accused potentially on the line. One of the most common methods of analysis in forensic chemistry laboratories is gas chromatography, chosen for the relative speed and efficiency afforded by this method. Two major routes were pursued to further improve gas chromatography applications in forensic chemistry.
The first route was to decrease separation times for analysis of ignitable liquid residues by using micro-bore wall-coated open-tubular columns. Micro-bore columns are much shorter and have higher separation efficiencies than the standard columns used in forensic chemistry, allowing for faster analysis times while maintaining the expected peak separation. Typical separation times for fire debris samples are between thirty minutes and one hour; the micro-bore columns were able to achieve equivalent performance in three minutes. The reduction in analysis time was demonstrated by analysis of ignitable liquid residues from simulated fire debris exemplars.
The second route looked at a relatively new detector for gas chromatography known as a vacuum ultraviolet (VUV) spectrophotometer. The VUV detector uses traditional UV and far-ultraviolet light to probe the pi and sigma bonds of the gas-phase analytes, as well as Rydberg transitions, to produce spectra that are nearly unique to a compound. Thus far, the only spectra that could not be distinguished were those of enantiomers; even diastereomers have been differentiated. The specificity attained with the VUV detector has achieved differentiation of compounds that mass spectrometry, the most common detection method for chromatography in forensic chemistry labs, has difficulty distinguishing. This specificity has been demonstrated herein by analyzing various classes of drugs of abuse, and applicability to “real world” samples has been demonstrated by analysis of de-identified seized samples.
APA, Harvard, Vancouver, ISO, and other styles
15

(8771429), Ashley S. Dale. "3D OBJECT DETECTION USING VIRTUAL ENVIRONMENT ASSISTED DEEP NETWORK TRAINING." Thesis, 2021.

Find full text
Abstract:

An RGBZ synthetic dataset consisting of five object classes in a variety of virtual environments and orientations was combined with a small sample of real-world image data and used to train the Mask R-CNN (MR-CNN) architecture in a variety of configurations. When the MR-CNN architecture was initialized with MS COCO weights and the heads were trained with a mix of synthetic data and real-world data, F1 scores improved in four of the five classes: the average maximum F1-score of all classes and all epochs for the networks trained with synthetic data is F1* = 0.91, compared to F1 = 0.89 for the networks trained exclusively with real data, and the standard deviation of the maximum mean F1-score for synthetically trained networks is σ*F1 = 0.015, compared to σF1 = 0.020 for the networks trained exclusively with real data. Various backgrounds in synthetic data were shown to have negligible impact on F1 scores, opening the door to abstract backgrounds and minimizing the need for intensive synthetic data fabrication. When the MR-CNN architecture was initialized with MS COCO weights and depth data was included in the training data, the network was shown to rely heavily on the initial convolutional input to feed features into the network, the image depth channel was shown to influence mask generation, and the image color channels were shown to influence object classification. A set of latent variables for a subset of the synthetic dataset was generated with a Variational Autoencoder and then analyzed using Principal Component Analysis and Uniform Manifold Approximation and Projection (UMAP). The UMAP analysis showed no meaningful distinction between real-world and synthetic data, and a small bias towards clustering based on image background.
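The latent-space analysis described above can be sketched generically as follows. Random placeholder vectors stand in for the VAE encodings of real and synthetic images, and the sketch assumes scikit-learn plus the umap-learn package; it is not the thesis's actual analysis code.

```python
import numpy as np
from sklearn.decomposition import PCA
import umap  # provided by the umap-learn package

latents = np.random.randn(500, 64)            # hypothetical VAE latent vectors
is_synthetic = np.random.rand(500) > 0.5      # hypothetical real-vs-synthetic labels

pca_2d = PCA(n_components=2).fit_transform(latents)
umap_2d = umap.UMAP(n_components=2, random_state=42).fit_transform(latents)
# Plot pca_2d / umap_2d colored by is_synthetic to look for real-vs-synthetic clustering.
print(pca_2d.shape, umap_2d.shape)
```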

APA, Harvard, Vancouver, ISO, and other styles
