Dissertations / Theses on the topic 'Computer experimental analysis'

Consult the top 50 dissertations / theses for your research on the topic 'Computer experimental analysis.'

1

Kang, Lulu. "Computer and physical experiments: design, modeling, and multivariate interpolation." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34805.

Abstract:
Many problems in science and engineering are solved through experimental investigations. Because experiments can be costly and time consuming, it is important to design them efficiently so that maximum information about the problem can be obtained. It is equally important to devise efficient statistical methods to analyze the experimental data so that none of the information is lost. This thesis makes contributions to several aspects of the design and analysis of experiments. It consists of two parts: the first part focuses on physical experiments, and the second part on computer experiments. The first part, on physical experiments, contains three works. The first develops Bayesian experimental designs for robustness studies, which can be applied in industry for quality improvement. Existing methods rely on modifying the effect hierarchy principle to give more importance to control-by-noise interactions, which can violate the true effect order of a system because the order should not depend on the objective of an experiment. The proposed Bayesian approach uses a prior distribution to capture the effect hierarchy property and then uses an optimal design criterion to satisfy the robustness objectives. The second work extends this Bayesian approach to blocked experimental designs. The third work proposes a new modeling and design strategy for mixture-of-mixtures experiments and applies it to the optimization of Pringles potato crisps. The proposed model substantially reduces the number of parameters in the existing multiple-Scheffé model and thus helps engineers design much smaller experiments. The second part, on computer experiments, introduces two new methods for analyzing the data. The first is an interpolation method called regression-based inverse distance weighting (RIDW), which is shown to overcome some of the computational and numerical problems associated with kriging, particularly in dealing with large data sets and/or high-dimensional problems. In the second work, we introduce a general nonparametric regression method called kernel sum regression. More importantly, we make an interesting discovery by showing that a particular form of this regression method becomes an interpolation method, which can be used to analyze computer experiments with deterministic outputs.
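As background to the RIDW method described above, the sketch below implements classical inverse distance weighting, the interpolator the regression-based variant builds on. This is a generic textbook version, not the thesis's method: the regression component of RIDW is omitted, and the function name and test values are illustrative.

```python
import numpy as np

def idw_predict(X, y, x0, p=2, eps=1e-12):
    """Classical inverse distance weighting (Shepard's method).

    X  : (n, d) design points;  y : (n,) observed responses
    x0 : (d,) point at which to predict;  p : weight-decay power
    """
    dist = np.linalg.norm(X - x0, axis=1)
    if np.any(dist < eps):            # x0 coincides with a design point,
        return y[np.argmin(dist)]     # so interpolate it exactly
    w = 1.0 / dist**p                 # closer points get larger weights
    return np.dot(w, y) / w.sum()

# Toy usage: interpolate a deterministic response at a new input.
rng = np.random.default_rng(0)
X = rng.random((50, 3))
y = np.sin(X).sum(axis=1)
print(idw_predict(X, y, np.array([0.5, 0.5, 0.5])))
```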
2

Kharechko, Andriy. "Linear and ellipsoidal pattern separation : theoretical aspects and experimental analysis." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/195011/.

Abstract:
This thesis deals with a pattern classification problem, which geometrically implies data separation in some Euclidean feature space. The task is to infer a classifier (a separating surface) from a set or sequence of observations; this classifier is later used to discern observations of different types. In this work, the classification problem is viewed from the perspective of optimization theory: we suggest an optimization problem for the learning model and adapt optimization algorithms for this problem to solve the learning problem. The aim of this research is twofold, so the thesis can be split into two self-contained parts, because it deals with two different types of classifiers, each in a different learning setting. The first part deals with linear classification in the online learning setting and includes analysis of existing polynomial-time algorithms: the ellipsoid algorithm and the perceptron rescaling algorithm. We establish that they are based on different types of the same space dilation technique, and we derive a parametric version of the latter algorithm, which makes it possible to improve its complexity bound and to exploit extra information about the problem. We also carry over some results from information-based complexity theory to the optimization model to suggest tight lower bounds on the learning complexity of this family of problems. To conclude this study, we experimentally test both algorithms on the positive semidefinite constraint satisfaction problem. Numerical results confirm our conjectures on the behaviour of the algorithms as the dimension of the problem grows. In the second part, we shift our focus from linear to ellipsoidal classifiers, which form a subset of second-order decision surfaces, and tackle a pattern separation problem with two concentric ellipsoids, where the inner ellipsoid encloses one class (normally the class of interest, if there is one) and the outer excludes inputs of the other class(es). The classification problem leads to a semidefinite program, which allows us to harness efficient interior-point algorithms for solving it. This part includes analysis of the maximal separation ratio algorithm
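For orientation, here is the classical perceptron update that the perceptron rescaling algorithm analyzed above builds on; the space dilation and rescaling machinery of the thesis is not reproduced, and this minimal sketch assumes linearly separable data with labels in {-1, +1}.

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Classical perceptron: returns w with sign(X @ w) == y on success."""
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:    # misclassified (or on the boundary)
                w += yi * xi          # standard additive correction
                mistakes += 1
        if mistakes == 0:             # a separating hyperplane was found
            break
    return w
```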
3

Woods, Duncan E. "The experimental analysis and computer simulation of bioelectric referencing systems." Thesis, University of Surrey, 1994. http://epubs.surrey.ac.uk/843128/.

Abstract:
The measurement of bioelectric signals, e.g. the electrocardiogram and electromyogram (EMG), from the body is susceptible to noise, artefacts and interference, such as power-line interference. To reduce these effects, differential-input amplifiers are commonly used. Rejection is further enhanced by incorporating a referencing system into the design, such as connecting a third electrode to circuit common or driving the shields. However, there has been no quantitative study of the relative merits of the different systems. For this reason, an experimental analysis and computer simulation were undertaken. In the experimental tests, a rig and instrumentation system were developed to record the surface isometric EMG from biceps brachii. Published referencing systems were used in the trials, and the recorded signals were quantified using repeatable parameters. These were calculated from the frequency domain and included the median frequency and a measure of the spectral power distribution. The results showed a significant reduction in the interference when a third electrode or isolation was used. No further reduction was observed if a more complex circuit, such as a driven referencing system, was used. In the computer simulation, a SPICE model of the recording environment was developed. This included the preamplifier and the interference sources from displacement currents induced into the leads and the body. The different referencing systems were incorporated within this basic structure. The results confirmed those from the experimental work and, by comparison to a 1% tolerance criterion, it was concluded that the driven-right-leg or the three-electrode driven-shield isolated systems were superior. It is concluded that referencing significantly affects interference levels on the bioelectric signal. These results may be used to facilitate the adoption of a standard system in bioelectric amplifiers. The implication for clinical practice is that it may improve the measurement of bioelectric signals.
4

Wood, Duncan E. "The experimental analysis and computer simulation of bioelectric referencing systems." Thesis, University of Surrey, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.308638.

5

Hoeke, Marilynn Carol. "A computer instructional support environment for information literacy: An experimental analysis." Diss., The University of Arizona, 1988. http://hdl.handle.net/10150/184375.

Abstract:
A pretest-treatment-posttest experimental comparison of three individual study support environments for an introductory Management Information Systems (MIS) course indicated a positive trend in student achievement on structured examinations. The three environments studied were a textbook-and-class-notes presentation method and two Computer Instructional Support Environments (CISE): drill and practice, and tutorial. Of these, the drill-and-practice individual study support method displayed a consistently positive effect on examination achievement within each of seven MIS topics. Each topic examined could be classified on the basis of the level of learning objective, and further statistical analysis indicated a strong correlation between the CISE drill-and-practice method and attaining a learning objective level of knowledge. Previous research in CISE implementation has been limited by the assumptions of single learning objective levels and single presentation methods when, in fact, the environment is highly complex. A series of experimental observations for introductory MIS topics, in which the learning objective level of each is identified, compares the three support environments for individual study. Separate statistical analyses, performed on individual topics, indicate a higher level of achievement by student participants in the CISE drill-and-practice environment, both for score improvement and for improvement in the time required to complete the posttest activity. Two ANOVA models examined the relationships between individual study support methods and topics within two classifications of learning objectives. These results indicate a strong relationship between study support method and learning objective level for test score improvement. The pretest-treatment-posttest experimental design used in this analysis may be replicated across additional topics within the introductory MIS course to increase the number of topic observations in each learning objective classification. In addition, the experiment can be performed using the same topics to increase the sample size and further clarify the statistical results.
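A minimal sketch of the kind of group comparison reported above, run on synthetic improvement scores for the three study environments; the group means, sizes and the use of scipy are assumptions for illustration, and the dissertation's actual ANOVA models, which relate method to learning-objective level, are richer than this one-way test.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# Hypothetical posttest score improvements, 20 students per environment.
textbook = rng.normal(5.0, 2.0, 20)
tutorial = rng.normal(5.5, 2.0, 20)
drill    = rng.normal(7.0, 2.0, 20)   # drill-and-practice group

F, p = f_oneway(textbook, tutorial, drill)
print(f"F = {F:.2f}, p = {p:.4f}")    # small p suggests a method effect
```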
6

Boulos, Emile T. "Experimental Analysis of Probabilistic Smart Terrain in Unity." Youngstown State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1525432760403556.

7

Brubaker, Briana. "Experimental Evaluation and Computer Analysis of Multi-Spiral Confinement in Reinforced Concrete Columns." Kansas State University, 2017. http://hdl.handle.net/2097/35324.

Abstract:
Master of Science
Department of Civil Engineering
Asadollah Esmaeily
Bridge and building construction in areas that sustain frequent seismic activity requires the use of heavy lateral steel reinforcement within concrete columns to handle the lateral loads. Multi-spiral lateral reinforcement has recently been introduced to the construction field as an alternative to traditional hoop and tie reinforcement. This report evaluates the data observed in multiple experimental studies on different concrete specimens. These specimens include several rectilinear reinforcement configurations and several multi-spiral configurations in both rectangular and oblong columns. Because multi-spiral reinforcement is a relatively new design, traditional computer programs have yet to include design analysis for this type of reinforcement. Dr. Asad Esmaeily developed the program KSU RC 2.0, which can implement multiple analytical models to evaluate different multi-spiral configurations, as well as traditional hoop and tie confinement, for comparison with experimental data. This report illustrates the comparative data from several different reinforced concrete column models. The data clearly indicate that multi-spiral reinforced columns exhibit higher compressive strength in the axial direction as well as higher ductility when compared to traditional rectilinear reinforcement of similar lateral steel reinforcement ratios. The use of multi-spiral reinforcement is also shown to lower costs, both by reducing the work time needed to install the structures and by lowering the required steel ratio, all while maintaining the structural integrity of the columns.
8

Sahama, Tony. "Some practical issues in the design and analysis of computer experiments." Thesis, Victoria University, Melbourne, 2003. https://eprints.qut.edu.au/60715/1/Sahama_2003compressed.pdf.

Abstract:
Deterministic computer simulations of physical experiments are now common techniques in science and engineering. Often, physical experiments are too time consuming, expensive or impossible to conduct, and complex computer models or codes are used in their place; this gives rise to the study of computer experiments, which are used to investigate many scientific phenomena of this nature. A computer experiment consists of a number of runs of the computer code with different input choices. The design and analysis of computer experiments is a rapidly growing area of statistical experimental design. This thesis investigates some practical issues in the design and analysis of computer experiments and attempts to answer some of the questions faced by experimenters using computer experiments. In particular, the questions of how many runs a computer experiment needs and how a design should be augmented are studied, and attention is given to the case when the response is a function over time.
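Latin hypercube designs are the standard device for choosing the runs of a computer experiment of the kind discussed above. The sketch below is a generic implementation, not code from the thesis, and the design sizes are illustrative.

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """n-run Latin hypercube design on [0, 1)^d.

    Each column stratifies [0, 1) into n equal bins, visits every bin
    exactly once in random order, and jitters the point within its bin.
    """
    rng = rng or np.random.default_rng()
    jitter = rng.random((n, d))
    order = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (order + jitter) / n

design = latin_hypercube(20, 4)   # 20 runs of a 4-input computer code
print(design.shape)
```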
9

Muthitacharoen, Athicha 1976. "An experimental analysis of exception handling services for multi-agent systems." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86529.

Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000.
Includes bibliographical references (leaves 43-45).
by Athicha Muthitacharoen.
S.B. and M.Eng.
10

Hung, Ying. "Contributions to computer experiments and binary time series." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24707.

Abstract:
Thesis (Ph.D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2008.
Committee Chair: C. F. Jeff Wu; Committee Co-Chair: Roshan Joseph Vengazhiyil; Committee Member: Kwok L. Tsui; Committee Member: Ming Yuan; Committee Member: Shreyes N. Melkote
11

Duncan, Tyler Baxter. "Theoretical analysis and experimental investigation of a "tower" heat pipe for desktop computer cooling." free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p1426054.

12

Papaioannou, Ioannis. "Theoretical and experimental analysis for optimizing the performance of a SLPP." Thesis, McGill University, 1985. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63388.

13

Kiraly, Bret D. "An Experimental Application of Formal Concept Analysis to Research Communities." Case Western Reserve University School of Graduate Studies / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=case1228497076.

14

Ganatra, Nirmal Kirtikumar. "Validation of computer-generated results with experimental data obtained for torsional vibration of synchronous motor-driven turbomachinery." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/499.

Abstract:
Torsional vibration is an oscillatory angular twisting motion in the rotating members of a system. It can be quite dangerous in that it cannot be detected as easily as other forms of vibration; hence, the failures it leads to are often abrupt and may cause direct breakage of the shafts of the drive train. The need for sufficient analysis during the design stage of a rotating machine is thus well justified, in order to avoid expensive modifications during later stages of the manufacturing process. In 1998, a project was initiated by the Turbomachinery Research Consortium (TRC) at Texas A&M University, College Station, TX, to develop a suite of computer codes to model torsional vibration of large drive trains. The author had the privilege of developing some modules in Visual Basic for Applications (VBA-Excel) for this suite of torsional vibration analysis codes, now collectively called XLTRC-Torsion. This treatise presents the theory behind torsional vibration analysis using both the transfer matrix approach and the finite element approach, and, in particular, validates the results generated by XLTRC-Torsion from those approaches against experimental data available from tests on a 66,000 HP air compressor.
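To illustrate the kind of computation such a suite performs, here is a minimal undamped torsional natural-frequency calculation for a hypothetical three-inertia drive train, using the matrix eigenvalue formulation that underlies the finite element approach; all inertia and stiffness values are invented and unrelated to the 66,000 HP compressor data.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 3-inertia torsional train: motor - coupling - compressor.
J = np.diag([50.0, 5.0, 80.0])            # polar inertias [kg m^2]
k1, k2 = 2.0e6, 3.5e6                     # shaft stiffnesses [N m/rad]
K = np.array([[ k1,     -k1,      0.0],
              [-k1,  k1 + k2,    -k2 ],
              [ 0.0,    -k2,      k2 ]])

# Undamped free vibration J q'' + K q = 0 gives K v = w^2 J v.
w2, modes = eigh(K, J)
freqs_hz = np.sqrt(np.clip(w2, 0.0, None)) / (2.0 * np.pi)
print(freqs_hz)   # the ~0 Hz entry is the rigid-body rotation mode
```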
15

Ramos-Murguialday, Ander [author], and Niels [academic supervisor] Birbaumer. "Afferent effects on brain computer interfaces : an experimental analysis / Ander Ramos-Murguialday ; supervisor: Niels Birbaumer." Tübingen : Universitätsbibliothek Tübingen, 2011. http://d-nb.info/1161735232/34.

16

Zhang, Yulei. "Computer Experiments with Both Quantitative and Qualitative Inputs." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1408042133.

17

Adiga, Nagesh. "Contributions to variable selection for mean modeling and variance modeling in computer experiments." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43592.

Abstract:
This thesis consists of two parts. The first part reviews Variable Search, a variable selection procedure for mean modeling. The second part deals with variance modeling for robust parameter design in computer experiments. In the first chapter of my thesis, the Variable Search (VS) technique developed by Shainin (1988) is reviewed. VS has received quite a bit of attention from experimenters in industry. It uses the experimenters' knowledge about the process, in terms of good and bad settings and their importance. In this technique, a few experiments are first conducted at the best and worst settings of the variables to ascertain that they are indeed different from each other. Experiments are then conducted sequentially in two stages, namely swapping and capping, to determine the significance of variables one at a time. Finally, after all the significant variables have been identified, the model is fit and the best settings are determined. The VS technique has not been analyzed thoroughly. In this report, we analyze each stage of the method mathematically. Each stage is formulated as a hypothesis test, and its performance is expressed in terms of the model parameters. The performance of the VS technique is expressed as a function of the performances of each stage. Based on this, it is possible to compare its performance with traditional techniques. The second and third chapters of my thesis deal with variance modeling for robust parameter design in computer experiments. Computer experiments based on engineering models might be used to explore process behavior if physical experiments (e.g. fabrication of nanoparticles) are costly or time consuming. Robust parameter design (RPD) is a key technique to improve process repeatability. The absence of replicates in computer experiments (e.g. space-filling designs (SFD)) is a challenge in locating the RPD solution. Recently, there have been studies (e.g. Bates et al. (2005), Chen et al. (2006), Dellino et al. (2010 and 2011), Giovagnoli and Romano (2008)) of RPD issues in computer experiments. The transmitted variance model (TVM) proposed by Shoemaker and Tsui (1993) for physical experiments can be applied in computer simulations. The approaches stated above rely heavily on the estimated mean model because they obtain expressions for variance directly from mean models or by using them to generate replicates. Variance modeling based on some kind of replicates relies on the estimated mean model to a lesser extent. To the best of our knowledge, there is no rigorous research on the variance modeling needed for RPD in computer experiments. We develop procedures for identifying variance models. First, we explore procedures to decide groups of pseudo-replicates for variance modeling. A formal variance change-point procedure is developed to rigorously determine the replicate groups. Next, the variance model is identified and estimated through a three-step variable selection procedure. Properties of the proposed method are investigated under various conditions through analytical and empirical studies. In particular, the impact of correlated responses on performance is discussed.
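As one plausible formalization of the first VS stage described above (checking that the best and worst settings truly differ), the sketch below applies a two-sample t-test to a few synthetic responses; this is an illustration of the hypothesis-test framing, not Shainin's exact statistic.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
# Hypothetical first VS stage: three runs at the presumed best and
# worst factor settings, then a test for a real difference.
best = rng.normal(10.0, 1.0, 3)
worst = rng.normal(6.0, 1.0, 3)

t, p = ttest_ind(best, worst)
print(f"t = {t:.2f}, p = {p:.3f}")   # small p: proceed to swapping/capping
```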
18

Chaaban, Farid B. "Computer aided analysis, modelling and experimental assessment of permanent magnet synchronous machines with rare earth magnets." Thesis, University of Liverpool, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.293710.

19

Wu, Sichao. "Computational Framework for Uncertainty Quantification, Sensitivity Analysis and Experimental Design of Network-based Computer Simulation Models." Diss., Virginia Tech, 2017. http://hdl.handle.net/10919/78764.

Abstract:
When capturing a real-world, networked system using a simulation model, features are usually omitted or represented by probability distributions. Verification and validation (V&V) of such models is an inherent and fundamental challenge. Central to V&V, but also to model analysis and prediction, are uncertainty quantification (UQ), sensitivity analysis (SA) and design of experiments (DOE). In addition, network-based computer simulation models, as compared with models based on ordinary and partial differential equations (ODE and PDE), typically involve a significantly larger volume of more complex data. Efficient use of such models is challenging since it requires a broad set of skills ranging from domain expertise to in-depth knowledge of modeling, programming, algorithmics, high-performance computing, statistical analysis, and optimization. On top of this, the need to support reproducible experiments necessitates complete data tracking and management. Finally, the lack of standardization of simulation model configuration formats presents an extra challenge when developing technology intended to work across models. While there are tools and frameworks that address parts of the challenges above, to the best of our knowledge, none of them accomplishes all this in a model-independent and scientifically reproducible manner. In this dissertation, we present a computational framework called GENEUS that addresses these challenges. Specifically, it incorporates (i) a standardized model configuration format, (ii) a data flow management system with digital library functions helping to ensure scientific reproducibility, and (iii) a model-independent, expandable plugin-type library for efficiently conducting UQ/SA/DOE for network-based simulation models. This framework has been applied to systems ranging from fundamental graph dynamical systems (GDSs) to large-scale socio-technical simulation models with a broad range of analyses, such as UQ and parameter studies for various scenarios. Graph dynamical systems provide a theoretical framework for network-based simulation models and have been studied theoretically in this dissertation. This includes a broad range of stability and sensitivity analyses offering insights into how GDSs respond to perturbations of their key components. This stability-focused, structure-to-function theory was a motivator for the design and implementation of GENEUS. GENEUS, rooted in the framework of GDS, provides modelers, experimentalists, and research groups access to a variety of UQ/SA/DOE methods with robust and tested implementations, without requiring them to have detailed expertise in statistics, data management and computing. Even for research teams having all the skills, GENEUS can significantly increase research productivity.
Ph. D.
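For a flavor of the UQ methods such a framework exposes, here is a generic Monte Carlo uncertainty-propagation sketch; the GENEUS API is not documented in this record, so the function, the toy model and the input distributions are all assumptions.

```python
import numpy as np

def monte_carlo_uq(model, draw_inputs, n=10_000, rng=None):
    """Push random inputs through a simulation model, summarize the output.

    model       : callable mapping one parameter vector to a scalar output
    draw_inputs : callable(rng) returning one random parameter vector
    """
    rng = rng or np.random.default_rng()
    out = np.array([model(draw_inputs(rng)) for _ in range(n)])
    return out.mean(), out.std(), np.percentile(out, [2.5, 97.5])

# Toy stand-in for a network-simulation response surface.
model = lambda x: x[0] ** 2 + 0.5 * np.sin(x[1])
draw = lambda rng: rng.normal([1.0, 0.0], [0.1, 0.3])
print(monte_carlo_uq(model, draw, n=2000))
```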
20

Lasisi, Ramoni Olaoluwa. "Experimental Analysis of the Effects of Manipulations in Weighted Voting Games." DigitalCommons@USU, 2013. https://digitalcommons.usu.edu/etd/1771.

Abstract:
Weighted voting games are classic cooperative games which provide a compact representation for coalition formation models in human societies and multiagent systems. As useful as weighted voting games are in modeling cooperation among players, they are not immune to manipulation (i.e., dishonest behavior) by strategic players that may be present in the games. With the possibility of manipulation, it becomes difficult to establish or maintain trust and, more importantly, to assure fairness in such games. For these reasons, we conduct careful experimental investigations and analyses of the effects of manipulations in weighted voting games, including manipulation by splitting, merging, and annexation. These manipulations involve an agent or some agents misrepresenting their identities in anticipation of gaining more power or obtaining a higher portion of a coalition's profits at the expense of other agents in a game. We also investigate some criteria for evaluating a game's robustness to manipulation. These criteria have been defined on the basis of theoretical and experimental analysis. For manipulation by splitting, we provide empirical evidence to show that the three prominent indices for measuring agents' power, Shapley-Shubik, Banzhaf, and Deegan-Packel, are all susceptible to manipulation when an agent splits into several false identities. We extend a previous result on manipulation by splitting in exact unanimity weighted voting games to the Deegan-Packel index, and we present new results for excess unanimity weighted voting games. We partially resolve an important open problem concerning the bounds on the extent of power that a manipulator may gain when it splits into several false identities in non-unanimity weighted voting games. Specifically, we provide the first three non-trivial bounds for this problem using the Shapley-Shubik and Banzhaf indices. One of the bounds is also shown to be asymptotically tight. Furthermore, experiments on non-unanimity weighted voting games show that the three indices are highly susceptible to manipulation via annexation, while they are less susceptible to manipulation via merging. Given that the problems of calculating the Shapley-Shubik and Banzhaf indices for weighted voting games are NP-complete, we show that, when the manipulators' coalition sizes are restricted to a small constant, manipulators need to do only a polynomial amount of work to find a much improved power gain for both merging and annexation, and we present two enumeration-based pseudo-polynomial algorithms that manipulators can use. Finally, we argue and provide empirical evidence to show that, although finding the optimal beneficial merge is an NP-hard problem for both the Shapley-Shubik and Banzhaf indices, finding a beneficial merge is relatively easy in practice. Also, while it appears that we may be powerless to stop manipulation by merging for a given game, we suggest a measure, termed the quota ratio, that the game designer may be able to control. Thus, we deduce that a high quota ratio decreases the number of beneficial merges.
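For concreteness, the sketch below computes the normalized Banzhaf index of a small weighted voting game by brute-force enumeration; the weights and quota are invented, and the dissertation's experiments rely on the enumeration-based pseudo-polynomial algorithms it describes rather than this naive exponential version.

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf indices by enumerating all coalitions.

    A player is critical in a winning coalition if removing it makes
    the coalition losing. Exponential in n, so small games only.
    """
    n = len(weights)
    swings = [0] * n
    for r in range(1, n + 1):
        for coalition in combinations(range(n), r):
            total = sum(weights[i] for i in coalition)
            if total >= quota:
                for i in coalition:
                    if total - weights[i] < quota:
                        swings[i] += 1
    s = sum(swings)
    return [c / s for c in swings]

# A 4-player game: splitting, merging or annexation changes these values.
print(banzhaf([4, 3, 2, 1], quota=6))
```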
21

Henning, Gustav. "Visualization of neural data : Dynamic representation and analysis of accumulated experimental data." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-166770.

Abstract:
The scientific method is an integral part of the investigation and exploration of hypotheses. Although procedures may vary from one field to the next, most have common identifiable stages. Today, there is no lack of tools that illustrate data in different graphical media. This thesis focuses instead on the type of tools that researchers use to investigate the validity of their hypotheses. When a sufficient amount of data is gathered, it can be presented for analysis in meaningful ways to illustrate patterns or abnormalities that would otherwise go unnoticed when viewed only as raw numbers. However useful static visualization of data can be when presented in a scientific paper, researchers are often overwhelmed by the number of plots and graphs that can be made using only a sliver of data. Therefore, this thesis introduces software whose purpose is to demonstrate the needs of researchers in analyzing data from repeated experiments, in order to speed up the process of recognizing variations between them.
22

Gounder, James Dakshina. "An experimental investigation of non-reacting and reacting spray jets." PhD thesis, School of Aerospace, Mechanical and Mechatronic Engineering, 2009. http://hdl.handle.net/2123/14424.

23

Sehgal, Rahul. "Greedy routing in a graph by aid of its spanning tree: experimental results and analysis." [Kent, Ohio]: Kent State University, 2009. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=kent1232166476.

24

Sadjadee, Sahand. "Meteor framework, a new approach to webdevelopment: an experimental analysis." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-103802.

Abstract:
The traditional definition of a dynamic web application is a collection of programs executed on the server side to provide content for clients. These web applications produce content server-side and deliver it to their clients via multiple pages; the client side is then responsible for rendering the content and performing a limited amount of computation to improve performance and user experience. Meteor is a web framework designed for developing Single Page Applications; compared with traditional web frameworks, it takes a new approach in which most of the computation is done on the client side. The server side is then used primarily for data storage and secondarily for a limited amount of computation, based on the Model View View-Model pattern. This thesis examines how web development is affected by the Meteor framework from different angles by performing an experimental analysis of it. It investigates different attributes of the Meteor framework used to develop a real-world application, and it concludes by presenting the advantages and disadvantages of using the framework.
25

Laitakari, J. (Jaakko). "Computer-assisted quantitative image analysis of cell proliferation, angiogenesis and stromal markers in experimental and laryngeal tumor development." Doctoral thesis, University of Oulu, 2003. http://urn.fi/urn:isbn:9514269497.

Abstract:
Automated quantitative computer-assisted morphometric analysis of the immunohistochemical expression of markers of neoplastic development and progression, in experimentally induced and in human neoplasms, showed very high sensitivity and reproducibility, allowing analysis of large numbers of cell and tissue components. In total, 26 million pixels, 25,000 cells and 1,500 vessels were examined, with sensitivity and reproducibility both exceeding 99%. The total expression of proliferating cell nuclear antigen (PCNA) and p53 increased consistently during 7H-dibenz[c,g]carbazole (DBC)-induced formation of dysplasias and squamous cell carcinomas (SCCs) in hamster lung. In dysplasia, nuclear size and PCNA staining intensity increased; in SCCs, nuclear size decreased. In a retrospective study on archival material of human laryngeal squamous cell carcinomas, the occurrence and location of PCNA-positive cells were specifically related to the degree of differentiation. In SCCs, nuclear size decreased, while shape alterations and PCNA staining intensity increased in relation to the degree of malignancy. In DBC-induced respiratory carcinogenesis, increased collagen matrix synthesis occurred prior to neoplasm development. Among squamous cell carcinomas, in well-differentiated tumors collagen deposition increased, as did fiber size, while in moderately differentiated tumors collagen synthesis and the deposition of new collagen decreased. The increase in transforming growth factor beta expression in differentiated cells and in the matrix was isoform-specific. Increased angiogenesis in laryngeal tumor development occurred in preneoplastic states and in SCCs, inversely related to the degree of differentiation. In well-differentiated neoplasms the vessels lay in the direction of the basement membrane, in moderately differentiated neoplasms vessels lay in the direction of tumor invasion, and in poorly differentiated neoplasms irregular, partly abnormal vessels were intermixed with tumor cells. Small regular vessels predominated in benign conditions and large, irregular vessels in malignant conditions. Experimental models provided the advantage of examining homogeneous, well-characterized neoplasm progression without interfering with the process. Morphometric methods provided detailed information on large numbers of cells, useful for studies of tumor behavior and with potential clinical applications.
26

Kamineni, Surya Bharat. "Experimental Analysis on the Feasibility of Voice Based Symmetric Key Generation for Embedded Devices." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6874.

Abstract:
In this thesis, we present the results of an experimental study on generating a secure cryptographic key from the user's voice, to be shared between two mobile devices. We identified two security threats related to this problem, discussed the challenges of designing the key generation/sharing mechanism, and proposed a new protocol based on Bloom filters that overcomes the two main attacks by an intruder. One is when the attacker places its device in the close vicinity of the location where the user attempts to generate/share the key, in order to derive the key by eavesdropping on the communication messages. The second is when the attacker visually observes the experiment being performed and tries to replicate it to reproduce the key. We present several results that demonstrate the practicality of our proposed technique in the context of communications between smart-phone
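Since the proposed protocol is built on Bloom filters, a minimal generic Bloom filter is sketched below to fix ideas; the thesis's voice-feature encoding and key-agreement steps are not reproduced, and the class design and parameters are illustrative.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-bit array.

    Membership tests may yield false positives but never false negatives.
    """

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8 + 1)

    def _positions(self, item):
        for i in range(self.k):       # derive k positions from SHA-256
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add("voice-feature-42")            # a hypothetical extracted feature
print("voice-feature-42" in bf, "voice-feature-99" in bf)
```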
27

Condict, Nahlah. "EXPERIMENTAL ANALYSIS OF MULTI-PURPOSE UWB RF SYSTEM FOR AD-HOC RADAR SENSOR NETWORK APPLICATIONS." Miami University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=miami1533915320524546.

28

Liu, Kwong Ip. "Digital net experimental designs, function interpolations using low discrepancy sequence and goodness of fit tests by discrepancy." HKBU Institutional Repository, 2007. http://repository.hkbu.edu.hk/etd_ra/807.

29

Sinclair, James A. "Establishing a New Paradigm in Engineering and Technology education: An Experimental Analysis of Multiple Learning Methodologies and Examination of Cognitive Profiles of Continuing Education Students." NSUWorks, 2006. http://nsuworks.nova.edu/gscis_etd/843.

Abstract:
This study consisted of two inter-related components. The first part compared three instructional methods for delivering a Computer Aided Design (CAD) course for adult Continuing Education (CE) students. The second part established a comprehensive cognitive profile matrix of adult continuing education students entering careers in technology. The first part examined the use of three delivery methods, using three randomly selected groups of students: traditional classroom-based training; an instructor-facilitated course presented online; and independent study using a CD-ROM tutorial. The experimental design consisted of three randomly selected sample groups of 20 students. The independent variable in the study was the instructional method. The dependent variables were the academic achievement scores and the satisfaction levels of the participants. As a second component, the study determined a cognitive profile of adult continuing education students. This analysis involved the same group of 60 students and presented a quantitative matrix of learning styles (by way of the bi-polar cognitive profile matrix). After obtaining all of the statistical data, a correlation analysis was performed, comparing the cognitive profiles of students against the instructional methodology. The academic achievement analysis yielded the following results: there was a significant difference between the in-class and online courses, with the in-class method showing higher academic achievement; there was a significant difference between the online course and the CD-ROM-based course, with the CD-ROM method more effective than the online method; and there was no statistically significant difference between the in-class and independent CD-ROM methods. The correlation analysis established that no significant correlation existed between the achievement and learning styles of the students. The results indicated that overall academic achievement within the subject of CAD is equal across all cognitive profile categories, allowing people with different learning styles to achieve their desired levels of academic success and to meet their educational goals. The results for objective course satisfaction indicated that there was no significant difference among the three groups. It was therefore concluded that objective course satisfaction was equal among the three methods of instruction described in this study.
30

Xie, Huizhi. "Some contributions to latin hypercube design, irregular region smoothing and uncertainty quantification." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44770.

Abstract:
In the first part of the thesis, we propose a new class of designs called multi-layer sliced Latin hypercube designs (MLSLHD) for running computer experiments. A general recursive strategy for constructing MLSLHD has been developed. Ordinary Latin hypercube designs and sliced Latin hypercube designs (SLHD) are special cases of MLSLHD with zero and one layer, respectively. A special case of MLSLHD with two layers, the doubly sliced Latin hypercube design (DSLHD), is studied in detail. The doubly sliced structure of DSLHD allows a more flexible batch size than SLHD for collective evaluation of different computer models or batch-sequential evaluation of a single computer model. Both finite-sample and asymptotic sampling properties of DSLHD are examined. Numerical experiments are provided to show the advantage of DSLHD over SLHD both for sequentially evaluating a single computer model and for collectively evaluating different computer models. Other applications of DSLHD include design for Gaussian process modeling with quantitative and qualitative factors, cross-validation, etc. Moreover, we show that the sliced structure, possibly combined with other criteria such as distance-based criteria, can be utilized to sequentially sample from a large spatial data set when we cannot include all the data points for modeling. A data center example is presented to illustrate the idea. The enhanced stochastic evolutionary algorithm is deployed to search for optimal designs. In the second part of the thesis, we propose a new smoothing technique called completely-data-driven smoothing, intended for smoothing over irregular regions. The idea is to replace the penalty term in smoothing splines by its estimate based on a local least squares technique. A closed-form solution for our approach is derived. The implementation is very easy and computationally efficient. With some regularity assumptions on the input region and analytical assumptions on the true function, it can be shown that our estimator achieves the optimal convergence rate in general nonparametric regression. The algorithmic parameter that governs the trade-off between fidelity to the data and smoothness of the estimated function is chosen by generalized cross-validation (GCV). The asymptotic optimality of GCV for choosing the algorithmic parameter in our estimator is proved. Numerical experiments show that our method works well for both regular and irregular region smoothing. The third part of the thesis deals with uncertainty quantification in building energy assessment. In current practice, building simulation is routinely performed with best guesses of input parameters whose true values cannot be known exactly. These guesses affect the accuracy and reliability of the outcomes. There is an increasing need to perform uncertainty analysis of those input parameters that are known to have a significant impact on the final outcome. In this part of the thesis, we focus on uncertainty quantification of two microclimate parameters: the local wind speed and the wind pressure coefficient. The idea is to compare the outcome of the standard model with that of a higher-fidelity model. Statistical analysis is then conducted to build a connection between the two. The explicit form of the statistical models can facilitate the improvement of the corresponding modules in the standard model.
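To make the sliced structure concrete, below is a minimal construction of a one-layer sliced Latin hypercube design, the building block that the multi-layer and doubly sliced designs generalize; it follows the standard bin-permutation scheme, and the sizes are illustrative.

```python
import numpy as np

def sliced_lhd(t, m, d, rng=None):
    """Sliced Latin hypercube: t slices of m runs each on [0, 1)^d.

    The pooled t*m-run design is a Latin hypercube, and each slice,
    after collapsing its levels, is itself an m-run Latin hypercube.
    """
    rng = rng or np.random.default_rng()
    n = t * m
    X = np.empty((n, d))
    for dim in range(d):
        levels = np.empty((t, m), dtype=int)
        for j in range(m):            # spread bin j's t levels over slices
            levels[:, j] = j * t + rng.permutation(t)
        for s in range(t):
            rng.shuffle(levels[s])    # random run order within a slice
            X[s * m:(s + 1) * m, dim] = (levels[s] + rng.random(m)) / n
    return X                          # rows [s*m, (s+1)*m) form slice s

X = sliced_lhd(t=3, m=5, d=2)         # 3 slices of 5 runs, 2 inputs
```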
31

Rekanar, Kaavya. "Text Classification of Legitimate and Rogue online Privacy Policies : Manual Analysis and a Machine Learning Experimental Approach." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13363.

32

Muller, Sandra. "Poetics in the digital age : media-specific analysis of experimental poetry on and off the screen." University of Otago. Department of English, 2009. http://adt.otago.ac.nz./public/adt-NZDU20090501.132423.

Abstract:
As an alternative to print media, digital media make us newly aware of the materiality of experimental poetic texts and require us to account for their media-specific differences. Although already several theoretical models have been put forward to define these differences, so far few poems have been analyzed in terms of their media-specific textual materiality. This thesis seeks to fill this gap in the applied media-specific analysis of experimental poetry. It combines traditional close reading with a media-specific approach in order to investigate the relationship between the physical characteristics and signifying strategies of four experimental poetic texts in various digital and non-digital media. It critically interrogates the specific use of the given medium in each poem, and illustrates that their respective textual materiality cannot be specified in advance based on general assumptions concerning the medium in question. A digital poem is not inherently more innovative than a non-digital poem. Rather, a poem is perceived as innovative if it resists conventional reading strategies by establishing a particularly complex, dynamic, and effectively anomalous sense of textual materiality, which necessarily only emerges from the direct interplay among text, object, and reader.
33

Rothwell, Clayton D. "Recurrence Quantification Models of Human Conversational Grounding Processes: Informing Natural Language Human-Computer Interaction." Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1527591081613424.

34

Giedt, Randy James. "Mitochondrial Network Dynamics in Vascular Endothelial Cells Exposed to Mechanochemical Stimuli: Experimental and Mathematical Analysis." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1333985787.

35

Silversved, Nicklas, and Hampus Runesson. "A comparison of the security in ZigBee and the IEEE 802.15.9 standard and an experimental analysis of communication over IEEE 802.15.4." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157762.

Abstract:
The increasing number of IoT devices used in today's society has led to a demand for better security in order to prevent attackers from gaining access to private information. The IoT spans a wide range of applications, and consequently there are many ways to set up a secure network and manage keys in such networks. This paper presents a comparison between the security model in ZigBee and the new recommended practice for Key Management Protocols (KMPs) defined by the IEEE 802.15.9 standard. We investigate key establishment and transport, together with the vulnerabilities these might bring regarding potential attacks such as DoS and MitM. Since both are built on the IEEE 802.15.4 standard, experimental tests were performed in which we analyze throughput, RTT and packet loss over varying distances and try to determine the maximum transmission range for devices using IEEE 802.15.4 modules. The IEEE 802.15.9 standard works with different KMPs, and depending on the KMP in use we see both similarities and differences with ZigBee regarding key management and possible attacks. Furthermore, we found that an attack on a ZigBee device is more likely to compromise the whole network, while similar attacks would only affect the specific peers in an IEEE 802.15.9 communication. Based on the experiments, we find that open areas, distance and interference have a negative effect on the throughput, RTT and packet loss of the communication.
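As a rough analogue of the RTT and packet-loss measurements described above, here is a generic UDP ping-pong probe; the thesis measured IEEE 802.15.4 radios rather than UDP/IP, so this sketch only illustrates the measurement logic and assumes a peer that echoes datagrams back.

```python
import socket, statistics, time

def measure_rtt(host, port, n=100, payload=b"x" * 32, timeout=1.0):
    """Send n datagrams to an echo peer; return (mean ms, stdev ms, loss)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    rtts, lost = [], 0
    for _ in range(n):
        t0 = time.perf_counter()
        sock.sendto(payload, (host, port))
        try:
            sock.recvfrom(1024)
            rtts.append((time.perf_counter() - t0) * 1e3)
        except socket.timeout:        # unanswered probe counts as loss
            lost += 1
    sock.close()
    if len(rtts) < 2:
        return float("nan"), float("nan"), lost / n
    return statistics.mean(rtts), statistics.stdev(rtts), lost / n
```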
36

Kim, Yoonhwak. "The Effects of Assumption on Subspace Identification using Simulation and Experiment Data." Master's thesis, University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5666.

Abstract:
In the modern dynamic engineering field, experimental dynamics is an important area of study. This area includes structural dynamics, structural control, and structural health monitoring. In experimental dynamics, methods of obtaining measured data have seen a great influx of research effort aimed at accurate and reliable experimental analysis results. A technical challenge is the procurement of informative data that exhibits the desired system information. In many cases, the number of sensors is limited by cost and the difficulty of data archiving. Furthermore, some informative data, such as the input force, is technically difficult to measure, and even when the desired data can be obtained, it may contain substantial noise. As a result, researchers have developed many analytical tools that work with limited informative data. The subspace identification method is one of these tools. It includes three different approaches: Deterministic Subspace Identification (DSI), Stochastic Subspace Identification (SSI), and Deterministic-Stochastic Subspace Identification (DSSI). The subspace identification method is widely used for its fast computational speed and accuracy. Based on the given information, such as output only, input/output, or input/output with noise, DSI, SSI, and DSSI are applied differently under specific assumptions, which can affect the analytical results. The objective of this study is to observe the effect of these assumptions on subspace identification under various data conditions. First, an analytical simulation study is performed using a six-degree-of-freedom mass-damper-spring system created in MATLAB. Various excitation conditions are applied to the simulation test model, and the excitation and response are analyzed using the subspace identification method. For stochastic problems, artificial noise is added to the excitation and the same steps are followed. Through this simulation test, the effects of the assumptions on subspace identification are quantified. Once the effects of the assumptions are studied using the simulation model, the subspace identification method is applied to dynamic response data collected from large-scale 12-story buildings with different foundation types, tested at Tongji University, Shanghai, China. Noise effects are verified using three different excitation types. Furthermore, using DSSI, which gives the most accurate results, the effect of the different foundations on the superstructure is analyzed.
M.S.
Masters
Civil, Environmental, and Construction Engineering
Engineering and Computer Science
Civil Engineering; Structures and Geotechnical Engineering
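As a stand-in for the MATLAB model described in this abstract, the sketch below assembles a six-degree-of-freedom mass-damper-spring chain and reads its natural frequencies off the state-space matrix; every numerical value is invented, and no subspace identification step is shown.

```python
import numpy as np

def chain_matrices(masses, springs, dampers):
    """M, C, K for a fixed-base chain of masses (spring i ties mass i down)."""
    n = len(masses)
    M, C, K = np.diag(masses), np.zeros((n, n)), np.zeros((n, n))
    for i in range(n):
        K[i, i] += springs[i]; C[i, i] += dampers[i]
        if i + 1 < n:                 # coupling to the mass above
            K[i, i] += springs[i + 1]; C[i, i] += dampers[i + 1]
            K[i, i + 1] = K[i + 1, i] = -springs[i + 1]
            C[i, i + 1] = C[i + 1, i] = -dampers[i + 1]
    return M, C, K

M, C, K = chain_matrices([2.0] * 6, [800.0] * 6, [0.5] * 6)
A = np.block([[np.zeros((6, 6)), np.eye(6)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
poles = np.linalg.eigvals(A)          # continuous-time system poles
wn = np.unique(np.round(np.abs(poles), 6))
print(np.sort(wn) / (2 * np.pi))      # undamped natural frequencies [Hz]
```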
37

Mitchell, John C. "A use-wear analysis of selected British Lower Palaeolithic handaxes with special reference to the site of Boxgrove (West Sussex) : a study incorporating optical microscopy, computer aided image analysis and experimental archaeology." Thesis, University of Oxford, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.285553.

38

Rodrigues, Fábio Rodrigo Mandello. "Análise numérica tridimensional e experimental do comportamento mecânico de alças ortodônticas delta." Universidade Tecnológica Federal do Paraná, 2014. http://repositorio.utfpr.edu.br/jspui/handle/1/1012.

Abstract:
This study analyzed numerically and experimentally the mechanical characteristics of orthodontic springs with delta geometry with and without an upper loop. To our knowledge this kind of study of delta springs without an upper loop has not yet been described in the literature. The material used to make the springs was a titanium-molybdenum alloy commonly used in orthodontic applications. Numerical simulations were performed with finite element modeling (FEM) using three-dimensional elements and large-displacement analysis, unlike the analyses found in the literature to date for this type of spring, which are exclusively two-dimensional. The springs were simulated to reflect a real clinical situation with horizontal activation (in the x direction). In the experimental analysis to determine the reactive forces and moments, a platform with strain gauges mounted on a full Wheatstone bridge whose output voltages corresponded to the strain applied to the spring (the test specimen) was used. The voltage variations were read with the aid of a National Instruments data acquisition system, which, when used with the LabVIEW program, provides voltage values which are converted into forces and moments to calibrate the platform. Each value of force and reactive moment was recorded in a table, from zero activation to the maximum value, i.e., just before the yield strength of the material was reached. In addition to the reactive forces and moments, the moment-to-force ratios (M:F) were measured. According to the literature, these ratios define the type of tooth movement in a clinical case. The results show that the reactive forces along the x-axis in the springs without a loop were greater than in the springs with a loop. In contrast, the reactive intrusive/extrusive forces in the y-axis, which were very small and could be neglected in this study because they had little influence on the results, were similar for both types of spring. A curve showing the change in stress along the spring as a function of activation was plotted. This showed that springs without a loop had higher stresses for a given activation value and therefore a greater tendency to deform plastically. The predominant M:F ratio in this study was the Mz/Fx ratio, which is the moment-to-force ratio that produces most types of tooth movements described in the literature and is to date the only moment-to-force ratio described in the literature for the delta spring. The tooth movements in the xy-plane as a result of the Mz/Fx moment-to-force ratio produced by delta springs with and without a loop are root rotation, crown rotation and translation. Based on the values of My/Fx observed, it can be inferred that this type of spring does not produce any tooth movement in the xz-plane. Another factor that is not explored in the literature but that was considered here is the variation in the angle between the extremities of the spring and its impact on the final forces, moments and M:F ratio.
APA, Harvard, Vancouver, ISO, and other styles
39

Pivatto, Amanda Brandenburg. "Análise experimental e computacional de vigas biapoiadas de concreto armado reforçadas com CRFC." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2660.

Full text
Abstract:
CAPES
The lack of maintenance, changes in loading, design and construction deficiencies, and even failures of the constituent materials of a structural member can lead to the need for structural reinforcement. Among the available methods, strengthening with Carbon Fiber Reinforced Polymer (CFRP) stands out. This research examines, through experimental testing and numerical modeling, the structural behavior of simply supported reinforced concrete beams strengthened in bending with CFRP, and analyzes the influence of added anchorage devices on that behavior. The computational model was developed in the commercial software ANSYS using the Finite Element Method. The strength gain from applying one and two layers of CFRP was evaluated against the unstrengthened reference beam. The concrete-reinforcement interface was included in the simulation, since the stresses acting in this region are usually the cause of failure in this type of member; the Cohesive Zone Model was used to represent the interface. In the experimental program, in addition to varying the number of reinforcement layers, the influence of lateral anchorage devices on the behavior of the beams was also evaluated. Good agreement was obtained between the computational models and the experimental results for the ultimate load of each configuration, as well as for the displacements of the strengthened beams. However, the strains obtained from the computational model were higher than the average values measured experimentally. The beams also showed increased stiffness with additional reinforcement layers. Furthermore, the addition of anchorage devices increased the strength of the members, more effectively than increasing the number of layers, for the cases studied in this work.
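The interface behavior mentioned above is commonly expressed as a traction-separation law. The minimal sketch below implements a generic bilinear mode-I cohesive law of the kind used in cohesive zone modeling; the peak traction and opening displacements are placeholder values, not the calibrated parameters of this research.

```python
def bilinear_traction(delta, t_max=3.0, delta0=0.01, delta_f=0.2):
    """Generic bilinear mode-I cohesive law (illustrative parameters):
    traction rises linearly to t_max at opening delta0, then softens
    linearly to zero at the final opening delta_f (full debonding).
    Units are arbitrary here, e.g. MPa and mm."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta0:
        return t_max * delta / delta0                          # elastic branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening branch
    return 0.0                                                 # fully debonded

# The fracture energy is the area under the curve: G_c = 0.5 * t_max * delta_f
print(bilinear_traction(0.005), bilinear_traction(0.1), bilinear_traction(0.3))
```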
APA, Harvard, Vancouver, ISO, and other styles
40

Heimisson, Gudmundur Torfi. "The importance of program-delivered differential reinforcement in the development of classical music auditory discrimination." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Karuri, Stella. "Integration in Computer Experiments and Bayesian Analysis." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/1187.

Full text
Abstract:
Mathematical models are commonly used in science and industry to simulate complex physical processes. These models are implemented by computer codes which are often complex. For this reason, the codes are also expensive in terms of computation time, and this limits the number of simulations in an experiment. The codes are also deterministic, which means that output from a code has no measurement error.

One modelling approach in dealing with deterministic output from computer experiments is to assume that the output is composed of a drift component and systematic errors, the latter modelled as stationary Gaussian stochastic processes. A Bayesian approach is desirable as it takes into account all sources of model uncertainty. Apart from prior specification, one of the main challenges in a complete Bayesian model is integration. We take a Bayesian approach with a Jeffreys prior on the model parameters. To integrate over the posterior, we use two approximation techniques on the log-scaled posterior of the correlation parameters. First, we approximate the Jeffreys prior on the untransformed parameters; this enables us to specify a uniform prior on the transformed parameters, which makes Markov Chain Monte Carlo (MCMC) simulations run faster. For the second approach, we approximate the posterior with a normal density.

A large part of the thesis is focused on the problem of integration. Integration is often a goal in computer experiments and as previously mentioned, necessary for inference in Bayesian analysis. Sampling strategies are more challenging in computer experiments particularly when dealing with computationally expensive functions. We focus on the problem of integration by using a sampling approach which we refer to as "GaSP integration". This approach assumes that the integrand over some domain is a Gaussian random variable. It follows that the integral itself is a Gaussian random variable and the Best Linear Unbiased Predictor (BLUP) can be used as an estimator of the integral. We show that the integration estimates from GaSP integration have lower absolute errors. We also develop the Adaptive Sub-region Sampling Integration Algorithm (ASSIA) to improve GaSP integration estimates. The algorithm recursively partitions the integration domain into sub-regions in which GaSP integration can be applied more effectively. As a result of the adaptive partitioning of the integration domain, the adaptive algorithm varies sampling to suit the variation of the integrand. This "strategic sampling" can be used to explore the structure of functions in computer experiments.
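The core of "GaSP integration" — treat the integrand as a realization of a Gaussian process, so the integral is itself Gaussian and its BLUP serves as the estimate — can be sketched in a few lines. In the sketch below the kernel integrals are approximated by Monte Carlo, and the test integrand, design size and length-scale are illustrative assumptions, not details from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def k(a, b, ls=0.2):
    """Squared-exponential (Gaussian) covariance."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

f = lambda x: np.sin(2 * np.pi * x) + x ** 2     # toy integrand on [0, 1]

X = np.linspace(0.0, 1.0, 8)                     # design points ("simulator runs")
y = f(X)

K = k(X, X) + 1e-10 * np.eye(len(X))             # jitter for numerical stability
u = rng.uniform(0.0, 1.0, 20000)                 # Monte Carlo approximation of the
z = k(X, u).mean(axis=1)                         # kernel integrals z_i = int k(x, X_i) dx

estimate = z @ np.linalg.solve(K, y)             # BLUP of the integral
print(estimate, 1.0 / 3.0)                       # true value: the sine term integrates to 0
```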
APA, Harvard, Vancouver, ISO, and other styles
42

Rodriguez, George IV. "Finite Element Modeling of Delamination Damage in Carbon Fiber Laminates Subject to Low-Velocity Impact and Comparison with Experimental Impact Tests Using Nondestructive Vibrothermography Evaluation." DigitalCommons@CalPoly, 2016. https://digitalcommons.calpoly.edu/theses/1583.

Full text
Abstract:
Carbon fiber reinforced composites are utilized in many design applications where high strength, low weight, and/or high stiffness are required. While composite materials can provide high strength- and stiffness-to-weight ratios, they are also more complicated to analyze due to their inhomogeneous nature. One important failure mode of composite structures is delamination, which is common when composite laminates are subject to impact loading. Various finite element methods for analyzing delamination exist. In this research, a modeling strategy based on contact tiebreak definitions in LS-DYNA® was used. A finite element model of a low-velocity impact event was created to predict delamination in a composite laminate. The relative size and shape of the predicted delamination were found to partially agree with analytical and experimental results for similar impact events, while the force-time plot agreed well with experimental results. A small difference in contact time between the simulation and the experimental testing is likely due to the omission of composite failure modes other than delamination. Experimental impact testing and subsequent vibrothermography analysis showed delamination damage in the locations reported in previous research. This confirmed the validity of vibrothermography as a nondestructive evaluation technique for analyzing post-impact delamination.
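Tiebreak contact definitions typically release a tied interface through a quadratic mixed-mode stress criterion. The sketch below shows that check in isolation; the NFLS/SFLS parameter names follow LS-DYNA convention, but the strength values and the standalone usage are illustrative assumptions, not inputs from this research.

```python
def tiebreak_failed(sigma_n, sigma_s, nfls=50.0, sfls=80.0):
    """Quadratic mixed-mode failure check of the form used by tiebreak
    contacts: the tie releases (delamination initiates) once
        (max(sigma_n, 0) / NFLS)**2 + (sigma_s / SFLS)**2 >= 1,
    where only tensile normal stress contributes. Strengths in MPa
    are placeholders, not values from the thesis."""
    return (max(sigma_n, 0.0) / nfls) ** 2 + (sigma_s / sfls) ** 2 >= 1.0

print(tiebreak_failed(30.0, 60.0))  # False: interface still tied
print(tiebreak_failed(45.0, 60.0))  # True: tie released, delamination grows
```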
APA, Harvard, Vancouver, ISO, and other styles
43

Pekar, Marek. "Modální analýza lopatek oběžného kola vírové turbíny." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2014. http://www.nusl.cz/ntk/nusl-231288.

Full text
Abstract:
The aim of this diploma thesis is to determine and compare the modal properties of four swirl turbine wheels, each with a different geometry. Natural frequencies and mode shapes were obtained by computer modelling using the Ansys software and compared with the results of an experimental modal analysis. Both the computer modelling and the experimental modal analysis were carried out for different boundary conditions and in different environments. The thesis opens with a brief overview of the literature on similar problems, followed by a short introduction to dynamics theory in which the equations of motion for damped and undamped single-degree-of-freedom systems are derived. The second part describes the creation of a geometry model, obtained by reverse engineering, which was subsequently used for the computer-based modelling of the modal parameters. The third part describes the experimental equipment, setup, measurement and data processing. The thesis concludes with a comparison of the results obtained by experimental modal analysis and computer modelling, and evaluates the influence of the boundary conditions and the environment on the natural frequencies.
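For reference, the single-degree-of-freedom relations the abstract alludes to are the standard textbook ones (not an excerpt from the thesis): the equation of motion, natural frequency, damping ratio and damped natural frequency.

```latex
\begin{align}
  m\ddot{x} + c\dot{x} + kx &= 0, \\
  \omega_n &= \sqrt{k/m}, \qquad
  \zeta = \frac{c}{2\sqrt{km}}, \qquad
  \omega_d = \omega_n\sqrt{1 - \zeta^2}.
\end{align}
```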
APA, Harvard, Vancouver, ISO, and other styles
44

Sun, Fangfang. "On A-optimal Designs for Discrete Choice Experiments and Sensitivity Analysis for Computer Experiments." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1345231162.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Drozdzal, Michal. "Sequential image analysis for computer-aided wireless endoscopy." Doctoral thesis, Universitat de Barcelona, 2014. http://hdl.handle.net/10803/145614.

Full text
Abstract:
Wireless Capsule Endoscopy (WCE) is a technique for visualizing the interior of the entire small intestine and thus offers an interesting perspective on intestinal motility. The technique has two major drawbacks: 1) the huge amount of data acquired by WCE makes motility analysis tedious, and 2) since the capsule is the first tool to offer complete inner visualization of the small intestine, the exact importance of the observed events is still an open issue. Therefore, in this thesis, a novel computer-aided system for intestinal motility analysis is presented. The goal of the system is to provide a physician with an easily comprehensible visual description of motility-related intestinal events. To this end, several tools based either on computer vision concepts or on machine learning techniques are presented. A method for transforming the 3D video signal into a holistic image of intestinal motility, called a motility bar, is proposed; the method calculates the mapping from video to image that is optimal from the intestinal motility point of view. To characterize intestinal motility, methods for automatic extraction of motility information from WCE are presented: two based on the motility bar and two based on frame-by-frame analysis. In particular, four algorithms dealing with the problems of intestinal contraction detection, lumen size estimation, intestinal content characterization and wrinkle frame detection are proposed and validated. The results of the algorithms are converted into sequential features using an online statistical test designed to work with multivariate data streams. To this end, we propose a novel formulation of a concentration inequality that is introduced into a robust adaptive windowing algorithm for multivariate data streams. The algorithm is used to obtain a robust representation of segments with constant intestinal motility activity, and the resulting sequential features are shown to be discriminative for the problem of abnormal motility characterization. Finally, we tackle the problem of efficient labeling by incorporating active learning concepts into the problems present in WCE data, proposing two approaches: the first is based on the concepts of sequential learning, and the second adapts partition-based active learning to an error-free labeling scheme. Together, these steps provide an extensive visual description of intestinal motility that an expert can use as a decision support system.
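The adaptive windowing step can be illustrated with a scalar, Hoeffding-style simplification of the idea. The thesis' algorithm is multivariate and relies on its own concentration inequality; everything below (the bound, the confidence level, the toy stream) is an illustrative stand-in.

```python
import numpy as np

def window_cut(stream, delta=0.05):
    """Scan the split points of the current window and cut where the two
    sub-window means differ by more than a Hoeffding-type bound allows,
    signalling a change in the (here, scalar) motility feature."""
    w = np.asarray(stream, dtype=float)
    n = len(w)
    for i in range(2, n - 1):
        a, b = w[:i], w[i:]
        m = 1.0 / (1.0 / len(a) + 1.0 / len(b))             # harmonic sample size
        eps = np.sqrt(np.log(4.0 * n / delta) / (2.0 * m))  # deviation bound
        if abs(a.mean() - b.mean()) > eps:
            return i                                        # drop w[:i], keep w[i:]
    return None

rng = np.random.default_rng(1)
s = np.concatenate([rng.normal(0.2, 0.05, 60), rng.normal(0.8, 0.05, 60)])
print(window_cut(s))  # a split inside the drift region (exact index varies)
```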
APA, Harvard, Vancouver, ISO, and other styles
46

Nagy, Béla. "Valid estimation and prediction inference in analysis of a computer model." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/1561.

Full text
Abstract:
Computer models or simulators are becoming increasingly common in many fields of science and engineering, powered by the phenomenal growth in computer hardware over the past decades. Many of these simulators implement a particular mathematical model as a deterministic computer code, meaning that running the simulator again with the same input gives the same output. Running the code often involves computationally expensive tasks, such as solving complex systems of partial differential equations numerically, and when simulator runs take too long their usefulness is limited. In order to overcome time or budget constraints by making the most of limited computational resources, a statistical methodology has been proposed, known as the "Design and Analysis of Computer Experiments". The main idea is to run the expensive simulator only at a relatively few, carefully chosen design points in the input space and, based on the outputs, construct an emulator (statistical model) that can emulate (predict) the output at new, untried locations at a fraction of the cost. This approach is useful provided that we can measure how much the predictions of the cheap emulator deviate from the real response surface of the original computer model. One way to quantify emulator error is to construct pointwise prediction bands designed to envelope the response surface, and to assert that the true response (simulator output) is enclosed by these envelopes with a certain probability. To be able to make such probabilistic statements, one needs to introduce some kind of randomness; a common strategy, used here, is to model the computer code as a random function, also known as a Gaussian stochastic process. We concern ourselves with smooth response surfaces and use the Gaussian covariance function, which is ideal when the response function is infinitely differentiable. In this thesis, we propose Fast Bayesian Inference (FBI), which is both computationally efficient and can be implemented as a black box. Simulation results show that it can achieve remarkably accurate prediction uncertainty assessments in terms of matching the coverage probabilities of the prediction bands, and the associated reparameterizations can also help parameter uncertainty assessments.
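A minimal emulator of the kind described — a Gaussian process with the Gaussian covariance function, plus pointwise 95% prediction bands — can be sketched as follows. The toy "simulator", the design size and the length-scale are assumptions for illustration, and the FBI machinery itself is not reproduced here.

```python
import numpy as np

def gauss_cov(a, b, ls=0.3):
    """Gaussian (squared-exponential) covariance, suited to smooth codes."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

code = lambda x: np.sin(3 * x) * x      # stand-in for an expensive simulator

X = np.linspace(0.0, 2.0, 7)            # a few carefully chosen design points
y = code(X)

K = gauss_cov(X, X) + 1e-10 * np.eye(len(X))
Xs = np.linspace(0.0, 2.0, 201)         # untried input locations
Ks = gauss_cov(Xs, X)

mean = Ks @ np.linalg.solve(K, y)       # emulator prediction
var = 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
band = 1.96 * np.sqrt(np.clip(var, 0.0, None))   # pointwise 95% half-width

# Compare the emulator's worst error with the widest band half-width
print(float(np.abs(mean - code(Xs)).max()), float(band.max()))
```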
APA, Harvard, Vancouver, ISO, and other styles
47

Hardas, Manas Sudhakar. "SEGMENTATION AND INTEGRATION IN TEXT COMPREHENSION: A MODEL OF CONCEPT NETWORK GROWTH." Kent State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=kent1334593269.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Tamrakar, Anjila. "SPICE: A Software Tool for Studying End-user’s Insecure Cyber Behavior and Personality-traits." ScholarWorks@UNO, 2016. http://scholarworks.uno.edu/td/2236.

Full text
Abstract:
Insecure cyber behavior of end users may expose their computers to cyber-attacks. A first step toward improving their cyber behavior is to identify their tendency toward insecure cyber behavior. Unfortunately, not much work has been done in this area; in particular, the relationship between end users' cyber behavior and their personality traits is much less explored. This paper presents a comprehensive review of SPICE, a newly developed, easily configurable, and flexible software tool for psychologists and cognitive scientists to study the personality traits and insecure cyber behavior of end users. The software utilizes well-established cognitive methods (such as the dot-probe task) to identify a number of personality traits, and further allows researchers to design and conduct experiments and detailed quantitative studies of the cyber behavior of end users. The software collects fine-grained data on users for analysis.
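Dot-probe data are conventionally scored as a reaction-time bias. The toy computation below shows one common scoring scheme; it is not SPICE's actual code, and the numbers are invented.

```python
import numpy as np

# Illustrative dot-probe scoring: the attention bias toward a cue category is
# the mean reaction-time cost of incongruent trials (probe replaces the
# neutral stimulus) relative to congruent trials (probe replaces the cue).
rt_congruent = np.array([412, 398, 405, 421, 388])    # ms, probe at cue location
rt_incongruent = np.array([455, 472, 440, 468, 451])  # ms, probe at neutral location

bias = rt_incongruent.mean() - rt_congruent.mean()
print(f"attention bias score: {bias:.1f} ms")         # positive => vigilance to the cue
```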
APA, Harvard, Vancouver, ISO, and other styles
49

Gupta, Abhishek. "Robust design using sequential computer experiments." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/492.

Full text
Abstract:
Modern engineering design tends to use computer simulations such as Finite Element Analysis (FEA) to replace physical experiments when evaluating a quality response, e.g., the stress level in a phone packaging process. The use of computer models has certain advantages over running physical experiments, such as lower cost, the ease of trying out different design alternatives, and greater impact on product design. However, due to the complexity of FEA codes, it can be computationally expensive to calculate the quality response function over a large number of combinations of design and environmental factors. Traditional experimental design and response surface methodology, which were developed for physical experiments in the presence of random errors, are not very effective in dealing with deterministic FEA simulation outputs. In this thesis, we utilize a spatial statistical method (a kriging model) for analyzing deterministic computer simulation-based experiments. We then devise a sequential strategy that allows us to explore the whole response surface in an efficient way, so that the overall number of computer experiments is markedly reduced compared with traditional response surface methodology. The proposed methodology is illustrated using an electronic packaging example.
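One simple sequential strategy — not necessarily the criterion used in this thesis — is to run the code next wherever the kriging predictive variance is largest, so each expensive run is spent where the current surrogate is least certain. A one-dimensional sketch with an invented stand-in for the FEA response:

```python
import numpy as np

def corr(a, b, theta=10.0):
    """Gaussian correlation commonly used in kriging for deterministic codes."""
    return np.exp(-theta * (a[:, None] - b[None, :]) ** 2)

fea = lambda x: np.sin(6 * x) + 0.5 * x   # stand-in for an FEA quality response

X = list(np.linspace(0.0, 1.0, 4))        # small initial design
grid = np.linspace(0.0, 1.0, 401)

for _ in range(6):                        # sequential refinement
    Xa = np.array(X)
    R = corr(Xa, Xa) + 1e-10 * np.eye(len(Xa))
    r = corr(grid, Xa)
    mse = 1.0 - np.einsum('ij,ij->i', r, np.linalg.solve(R, r.T).T)
    X.append(float(grid[np.argmax(mse)])) # evaluate where kriging is least certain

print(sorted(round(x, 3) for x in X))     # runs spread to fill the design space
```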
APA, Harvard, Vancouver, ISO, and other styles
50

Sánchez, Secades Juan María. "Multiple feature temporal models for the characterization of semantic video contents." Doctoral thesis, Universitat Autònoma de Barcelona, 2003. http://hdl.handle.net/10803/3036.

Full text
Abstract:
The high-level structure of a video can be obtained once we have knowledge about the domain plus a representation of the contents that provides semantic information. In this context, intermediate-level semantic representations are defined in terms of low-level features and the information they convey about the contents of the video. Intermediate-level representations allow us to obtain semantically meaningful clusterings of shots, which are then used together with high-level domain-specific knowledge in order to obtain the structure of the video. Intermediate-level representations are usually domain-dependent as well. The descriptors involved in the representation are specifically tailored for the application, taking into account the requirements of the domain and the knowledge we have about it. This thesis proposes an intermediate-level representation of video contents that allows us to obtain semantically meaningful clusterings of shots. This representation does not depend on the domain, but still provides enough information to obtain the high-level structure of the video by combining the contributions of different low-level image features to the intermediate-level semantics.
Intermediate-level semantics are implicitly supplied by low-level features, given that a specific semantic concept generates some particular combination of feature values. The problem is to bridge the gap between observed low-level features and their corresponding hidden intermediate-level semantic concepts. Computer vision and image processing techniques are used to establish relationships between them. Other disciplines such as filmmaking and semiotics also provide important clues to discover how low-level features are used to create semantic concepts. A proper descriptor of low-level features can provide a representation of their corresponding semantic contents. Particularly, color summarized as a histogram is used to represent the appearance of objects. When this object is the background, color provides information about location. In the same way, the semantics conveyed by a description of motion have been analyzed in this thesis. A summary of motion features as a temporal cooccurrence matrix provides information about camera operation and the type of shot in terms of relative distance of the camera to the subject matter.
The main contribution of this thesis is a representation of visual contents in video based on summarizing the dynamic behavior of low-level features as temporal processes described by Markov chains (MC). The states of the MC are given by the values of an observed low-level feature. Unlike keyframe-based representations of shots, the MC modeling considers information from all the frames. Natural similarity measures such as likelihood ratios and Kullback-Leibler divergence are used to compare MC's, and thus the contents of the shots they represent. In this framework, multiple image features can be combined in the same representation by coupling their corresponding MC's. Different ways of coupling MC's are presented, in particular the one called Coupled Markov Chains (CMC). A method to find the optimal coupling structure in terms of minimal cost and minimal loss of information is detailed in this dissertation; the loss of information is directly related to the loss of accuracy with which the coupled structure represents video contents. During the same process of computing shot representations, the boundaries between shots are detected using the same content modeling and similarity measures.
When color and motion features are combined, the CMC representation provides an intermediate-level semantic descriptor that implicitly contains information about objects (their identities, sizes and motion patterns), camera operation, location, type of shot, temporal relationships between elements of the scene and global activity understood as the amount of action. More complex semantic concepts emerge from the combination of these intermediate-level descriptors, such as a "talking head" that combines a close-up with the skin color of a face. Adding the location component in the News domain, talking heads can be further classified into "anchors" (located in the studio) and "correspondents" (located outdoors). These and many other semantically meaningful categories are discovered when shots represented using the CMC model are clustered in an unsupervised way. Well-defined concepts are given by compact clusters, which can be determined by a measure of their density. High-level domain knowledge can then be defined by simple rules on these salient concepts, which will establish boundaries in the semantic structure of the video. The CMC modeling of video shots unifies the first steps of the video analysis process providing an intermediate-level semantically meaningful representation of contents without prior shot boundary detection.
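To make the Markov-chain representation concrete, the sketch below estimates transition matrices from quantized per-frame feature sequences for two shots and compares them with a Kullback-Leibler divergence rate. The three-state quantization and the toy sequences are illustrative assumptions, and the coupled (CMC) structure is omitted.

```python
import numpy as np

def transition_matrix(states, n):
    """Estimate an MC transition matrix from a quantized feature sequence,
    with Laplace smoothing so that no transition has zero probability."""
    P = np.ones((n, n))
    for s, t in zip(states[:-1], states[1:]):
        P[s, t] += 1
    return P / P.sum(axis=1, keepdims=True)

def kl_rate(P, Q, pi):
    """KL divergence rate between chains P and Q, weighting the rows of the
    divergence by P's empirical state occupancy pi."""
    return float(np.sum(pi[:, None] * P * np.log(P / Q)))

# Two shots: a per-frame color (or motion) feature quantized into 3 states.
shot_a = [0, 0, 1, 1, 1, 2, 2, 1, 1, 0, 0, 1]
shot_b = [2, 2, 2, 1, 2, 2, 0, 2, 2, 2, 1, 2]

P = transition_matrix(shot_a, 3)
Q = transition_matrix(shot_b, 3)
pi = np.bincount(shot_a, minlength=3) / len(shot_a)   # occupancy of shot_a
print(kl_rate(P, Q, pi))   # small value => similar dynamics (likely same shot)
```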
APA, Harvard, Vancouver, ISO, and other styles
