Dissertations / Theses on the topic 'Coke – Testing'

Consult the top 50 dissertations / theses for your research on the topic 'Coke – Testing.'


1

Liu, Chang, Materials Science & Engineering, Faculty of Science, UNSW. "Bulk density and angle of repose of coal." Awarded by: University of New South Wales, Materials Science & Engineering, 2007. http://handle.unsw.edu.au/1959.4/40495.

Abstract:
This thesis reports a study of the effects of size distribution, moisture content and oil addition on the bulk density and angle of repose of coal. The experimental work comprised four stages. The first stage was to develop reliable experimental techniques. The results confirm that the ASTM cubic foot test, properly operated, is reliable for measuring bulk density and angle of repose, although the latter is better measured in a piling process. Stages 2 and 3 investigated the effects of size distribution (characterised by the percentage finer than 3.55 mm in stage 2 and by the mean size d0.5 in stage 3), water content and oil addition on bulk density and angle of repose. For each stage, empirical equations were formulated to predict bulk density and angle of repose. The results indicate that the -3.55 mm fraction in stage 2 does not affect bulk density significantly, while increasing d0.5 first decreases bulk density to a minimum and then increases it. Particle size distribution has little effect on angle of repose. Increasing moisture content significantly decreases bulk density and increases angle of repose, while increasing oil addition significantly increases bulk density and decreases angle of repose. A correlation between bulk density and angle of repose is also observed: the higher the bulk density, the lower the angle of repose. Other variables affecting bulk density and angle of repose include oil type, absorption time, discharging height and external loading; their effects were quantified in stage 4. The results suggest that a higher discharging position or a larger external load increases bulk density significantly, while angle of repose decreases as the discharging height increases. Diesel oil performed better than waste oil in terms of bulk density enhancement. For most of the cases examined, bulk density and angle of repose became stable after roughly 24 hours of oil absorption time.
2

McCallum, Adrian Bruce. "Cone penetration testing in polar snow." Thesis, University of Cambridge, 2012. https://www.repository.cam.ac.uk/handle/1810/244073.

Abstract:
Innovative Cone Penetration Testing (CPT) using adapted commercial CPT equipment was conducted in Antarctica in early 2010 in an attempt to assess the strength of polar snow; additionally, applications of CPT data were considered, particularly in estimating surface bearing capacity. Almost 100 CPT tests were carried out and both qualitative and quantitative analysis of the data was undertaken. Additional supporting testing included snow density assessment, snow strength assessment, extrapolation of CPT data via Ground Penetrating Radar (GPR) and preliminary mini-cone penetrometer testing in Greenland. Analysis of the results revealed that assessing the strength of polar snow via CPT is affected by numerous factors including penetration rate, cone size/shape and snow material properties, particularly compaction of the snow undergoing penetration. A density-dependent relationship between CPT resistance and snow shear strength was established, and methods for estimating surface bearing capacity directly from CPT in homogeneous and layered polar snow were proposed. This work applied existing technology to a new material and shows that CPT can be used efficiently in polar environs to provide estimates of snow shear strength and surface bearing capacity, to depths of 10 m or more.
3

Stratis, Athanasios. "Model-based Testing on Generated C Code." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-28381.

Abstract:
In this master thesis we investigated whether automatically generated C code from Function Block Diagram models can be used as input to the CPAchecker model checker in order to generate test cases automatically. Function Block Diagram is a non-executable programming and modeling language; consequently, it must be transformed into an executable language that can be model checked. A tool that achieves this is MITRAC, a proprietary development tool used in the embedded systems domain for engineering programmable logic controllers. The purpose of this research was to investigate to what extent the C code generated by MITRAC can be reused as input to CPAchecker for automated test case generation. To achieve this we needed to perform certain transformation steps on the existing code. In addition, instrumentation was needed to trigger CPAtiger, an extension of CPAchecker that generates test cases, to achieve maximum condition coverage. We showed that, with the required modifications, it is feasible to reuse the generated C code as input to CPAchecker. We also showed an approach for mapping the generated test cases back to the original Function Block Diagram. We performed mutation analysis to evaluate the quality of the generated test cases in terms of the number of injected faults they detect. Test case generation with CPAchecker could be improved in the future by reducing the number of transformations and instrumentations currently needed; this requires adding support to CPAchecker for C constructs such as structs. Finally, the range of logic coverage criteria usable with CPAchecker could be extended by adding further support for the FQL language.
4

Hansson, Bevin. "Random Testing of Code Generation in Compilers." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-175852.

Abstract:
Compilers are a necessary tool for all software development. As modern compilers are large and complex systems, ensuring that the code they produce is correct is a vital but arduous task. Correctness of the code generation stage (register allocation, instruction scheduling) is especially important. Maintaining full test coverage of a compiler is virtually impossible due to the large input and output domains. We propose that random testing is a highly viable method for testing a compiler. A method is presented to randomly generate a lower level code representation and use it to test the code generation stage of a compiler. This enables targeted testing of some of the most complex components of a modern compiler (register allocation, instruction scheduling) for the first time. The design is implemented in a state-of-the-art optimizing compiler, LLVM, to determine the effectiveness and viability of the method. Three distinct failures are observed during the evaluation phase. We analyze the causes behind these failures and conclude that the methods described in this work have the potential to uncover compiler defects which are not observable with other testing approaches.
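The thesis's generator targets LLVM's low-level intermediate representation, which is not reproduced here. As a minimal sketch of random compiler testing in the same spirit, the loop below generates random arithmetic C functions and differentially compares a compiler against itself at two optimization levels; the use of gcc, the -O0/-O2 pair and the expression grammar are illustrative assumptions, not the thesis's method.

    import os
    import random
    import subprocess
    import tempfile

    def random_expr(depth=0):
        """Build a small random integer expression over the argument x."""
        if depth > 2 or random.random() < 0.3:
            return random.choice(["x", str(random.randint(1, 9))])
        op = random.choice(["+", "-", "*"])
        return f"({random_expr(depth + 1)} {op} {random_expr(depth + 1)})"

    def outputs(expr):
        """Compile the same function at -O0 and -O2 and collect both outputs."""
        src = ("#include <stdio.h>\n"
               f"int f(int x) {{ return {expr}; }}\n"
               "int main(void) { printf(\"%d\\n\", f(7)); return 0; }\n")
        results = []
        with tempfile.TemporaryDirectory() as d:
            c_file = os.path.join(d, "t.c")
            with open(c_file, "w") as fh:
                fh.write(src)
            for opt in ("-O0", "-O2"):
                exe = os.path.join(d, "t" + opt)
                subprocess.run(["gcc", opt, c_file, "-o", exe], check=True)
                run = subprocess.run([exe], capture_output=True, text=True)
                results.append(run.stdout)
        return results

    for _ in range(100):
        expr = random_expr()
        o0, o2 = outputs(expr)
        if o0 != o2:  # disagreement flags a miscompilation candidate
            print("possible code generation defect on:", expr)

The expression depth and constants are bounded so the arithmetic stays within 32-bit range, avoiding undefined behavior that would produce false alarms.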
5

Kurita, Hiroyuki 1958. "Interferometric aspheric surface testing using ray tracing code." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/276944.

Abstract:
Phase shifting interferometry is one of the most promising methods for testing aspheres. However, several problems are encountered when it is applied to an asphere: (1) the very tight fringes produced by a strong asphere exceed the test system's resolution, (2) the test wavefront suffers from system aberrations of the interferometer that cause measurement errors, and (3) the wavefront immediately after reflection does not necessarily represent the shape of the test asphere. This thesis used a high density array sensor to detect the dense fringes. To solve the system aberration and ray retrace problems, it is necessary to combine a ray trace code with phase shifting interferometry. This measurement principle was applied to an aspheric surface whose asphericity was 100 waves: a phase shifting Fizeau interferometer was coupled with an optical design program. The attained accuracy was approximately one-tenth of a wave.
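For readers unfamiliar with the phase shifting technique the thesis builds on, the standard four-step algorithm recovers the wrapped phase from four interferograms taken at 90-degree phase increments. A minimal NumPy sketch, with a synthetic test phase standing in for real fringe data (the frame model and array size are assumptions):

    import numpy as np

    def four_step_phase(I1, I2, I3, I4):
        """Wrapped phase from four frames shifted by 0, 90, 180, 270 degrees."""
        return np.arctan2(I4 - I2, I1 - I3)

    # Synthetic check: a known phase map round-trips through the four frames.
    x = np.linspace(-1.0, 1.0, 256)
    phi = 2 * np.pi * x**2                   # stand-in for an aspheric wavefront
    frames = [1.0 + np.cos(phi + k * np.pi / 2) for k in range(4)]
    recovered = four_step_phase(*frames)
    assert np.allclose(recovered, np.angle(np.exp(1j * phi)))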
6

Krawitz, Ronald Michael. "Code Clone Discovery Based on Functional Behavior." NSUWorks, 2012. http://nsuworks.nova.edu/gscis_etd/201.

Abstract:
Legacy programs are used for many years and experience many cycles of use and maintenance. Source code or source code functionality is frequently replicated within these programs when it is written, as well as when it is maintained. Over time, many different developers with greater or lesser understanding of the source code maintain it. Maintenance developers, when they have limited time or lack understanding of the program, frequently resort to shortcuts that include cutting and pasting existing code and re-implementing functionality instead of refactoring. This means a specific functionality is often repeated several times, sometimes using different source code. Blocks of replicated source code or source code functionality are called code clones. Removing code clones improves the extensibility, maintainability, and reusability of a program, in addition to making the program more easily understood. It is generally accepted that four types of code clones exist. Type-1 and Type-2 code clones are comparatively straightforward to locate, and tools exist to find them. However, Type-3 and Type-4 code clones are very difficult to locate, with only a few specialized tools capable of finding them, and at a lower level of precision. This dissertation presents a new methodology that discovers code clones by studying the functional behavior of blocks of code. Code Clone Discovery based on Functional Behavior (FCD) located code clones by comparing how blocks of code react to various inputs: FCD stimulated the code blocks with the same input patterns and compared the resulting outputs. When a significant portion of the outputs matched, those blocks were declared a code clone candidate, and manual analysis confirmed that those blocks of code were code clones. Since FCD discovers code clones from their black-box behavior, the actual source code syntax is irrelevant, and manual inspection further confirmed that FCD located code clones including Type-3 and Type-4 clones, which are frequently excluded from code clone detection tools. FCD recognizes code clones regardless of whether they use identical code, similar code, or totally dissimilar code. This technique allows for an improvement in software quality and has the potential to significantly reduce the cost of software over its lifetime.
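The core of FCD, declaring clone candidates when blocks produce matching outputs for the same stimuli, fits in a few lines. The sketch below is a toy harness, not the dissertation's tool: the two example blocks, the random integer stimuli and the 95% match threshold are all illustrative assumptions.

    import random

    def io_signature(func, stimuli):
        """Run a code block on fixed stimuli and record its observable outputs."""
        out = []
        for x in stimuli:
            try:
                out.append(func(x))
            except Exception as e:        # crashing is observable behavior too
                out.append(type(e).__name__)
        return out

    def clone_candidates(blocks, trials=200, threshold=0.95):
        stimuli = [random.randint(-1000, 1000) for _ in range(trials)]
        sigs = {name: io_signature(f, stimuli) for name, f in blocks.items()}
        names = sorted(blocks)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                agree = sum(p == q for p, q in zip(sigs[a], sigs[b])) / trials
                if agree >= threshold:    # flag for manual confirmation, as in FCD
                    yield a, b, agree

    # Two syntactically unrelated blocks with identical behavior (a Type-4 pair).
    def sum_to_n_loop(n):
        total = 0
        for i in range(1, abs(n) + 1):
            total += i
        return total

    def sum_to_n_formula(n):
        m = abs(n)
        return m * (m + 1) // 2

    print(list(clone_candidates({"loop": sum_to_n_loop, "formula": sum_to_n_formula})))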
7

Jääskelä, E. (Esa). "Genetic algorithm in code coverage guided fuzz testing." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201601151058.

Abstract:
The security of computers is a growing concern as the number of devices increases. New and more comprehensive testing methods are needed to avoid damage to users and their computers. Fuzzing is a testing method that feeds semi-valid input to the system under test and has long been considered a good method for security testing. However, it usually either does not achieve high code coverage or requires a long set-up process or source code analysis to achieve better coverage. This work presents a genetic algorithm that automatically balances the probabilities of multiple mutators in a fuzzing program. This balancing aims to maximize the code coverage achieved and improve the effectiveness of fuzz testing. After fuzzing two different open source libraries, it was found that the grey-box approach to fuzzing gives better results than pure black-box fuzzing.
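A stripped-down version of the feedback loop described here might look as follows; the toy mutators, the coverage stand-in and the fitness-proportional update rule are assumptions for illustration, since the thesis's actual genetic algorithm and instrumentation are not reproduced in the abstract.

    import random

    MUTATORS = {
        "bit_flip": lambda b: bytes(x ^ 1 for x in b),
        "reverse":  lambda b: b[::-1],
        "truncate": lambda b: b[:max(1, len(b) // 2)],
    }

    def fuzz_round(weights, seed, coverage_of):
        """Pick a mutator by weight, mutate the seed, reward coverage gained."""
        names = list(MUTATORS)
        name = random.choices(names, weights=[weights[n] for n in names])[0]
        mutated = MUTATORS[name](seed)
        weights[name] += coverage_of(mutated)   # fitness-proportional reward
        total = sum(weights.values())
        for n in weights:                       # keep weights a probability mass
            weights[n] /= total
        return mutated

    # Toy stand-in for instrumented coverage: count distinct byte values seen.
    coverage_of = lambda data: len(set(data))

    weights = {n: 1.0 / len(MUTATORS) for n in MUTATORS}
    corpus_entry = b"hello fuzzing"
    for _ in range(1000):
        corpus_entry = fuzz_round(weights, corpus_entry, coverage_of)
    print(weights)   # probabilities drift toward the mutators that paid off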
8

Eid, Walid Khaled. "Scaling effect in cone penetration testing in sand." Diss., Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/49849.

Abstract:
The Cone Penetration Test (CPT) was developed originally in Holland in the 1930s as a device which provides a small scale model of a pile foundation. Early versions were simple cone points for which the only measurement was the thrust required to push the point through the ground. Over the past 20 years, the cone was standardized to a tip area of 10 cm², and an electrical version was produced, which allows for continuous measurement of the cone tip resistance and sleeve friction along with a computer-based data acquisition system. The electrical cone represents a significant step forward for the CPT, since it provides a continuous profile of information that can be used to identify soil type and define important engineering parameters. More recently, the CPT has shown considerable potential for calculation of settlements of footings on sand, determination of pile capacity, assessment of ground pressures, and evaluation of liquefaction potential for cohesionless soils. Along with the widening application of the CPT, new varieties of cone penetrometers have appeared, with different sizes than the standard. Smaller cones are used for instances where relatively small depths of soil need to be penetrated, and larger cones have been developed for penetrating dense and gravelly soils. With the introduction of the new cones, there has been a tendency to assume that the methods for reducing CPT data for the standard sized cone can be extrapolated to the other sizes of cones; that is, it is assumed that there are no scale effects in cones of different sizes. While this may be true, to date, little direct evidence has been produced to support this view, and the issue is an important one from two points of view: 1. The present data analysis technology is based primarily on testing with a standard cone. It is important to know if any changes are needed in this approach, or if the existing methods can be used with confidence for any size cone. 2. If it can be shown that no scale factor exists, then this will allow the use of new, smaller cones in experimentation in modern calibration chambers with the knowledge that the test results are applicable for the cones that are more widely used in practice. The smaller cones offer several advantages in this type of work in that they facilitate the research considerably by reducing the effort involved in sample preparation, and they are less likely to produce results influenced by boundary conditions in the chamber. One of the major objectives of this research is to develop an insight into the issue of the scale factor caused by the use of different sizes of cones. This is accomplished through an experimental program conducted in a new large scale calibration chamber recently constructed at Virginia Tech. Many of the latest developments in cone penetration testing have been forthcoming from testing done in calibration chambers where a soil mass can be placed to a controlled density under known stress conditions. To conduct the experimentation of this work, it was necessary to design, fabricate, and bring to an operational stage a calibration chamber. The Virginia Tech chamber is one of the largest in the world, and a significant portion of the effort involved in this thesis research was devoted to this task. In particular, attention was devoted to the development of a system for placement of a homogeneous soil mass in the chamber, and the implementation of a microcomputer-based data acquisition unit to record and process the test results.
The scale effects investigation was performed using three different sizes of cone penetrometers in a test program conducted in the calibration chamber. Of the three cones, one is smaller than the standard with a tip area of 4.23 cm², one was a standard cone with a tip area of 10 cm², and one was larger than the standard cone with a tip area of 15 cm². A total of 47 tests were carried out in the chamber using two different levels of confining stress and two different sand densities. The test results show that while a scale factor might exist, the degree of its influence on interpreted soil parameters for a practical problem does not appear significant.
Ph. D.
9

Heikkilä, V. (Ville). "Optimizing continuous integration testing using dynamic source code analysis." Master's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201802131229.

Abstract:
The number of different tools and methods available for software testing is massive. Exhaustive testing of a project can easily take days, weeks or even months; therefore, it is generally advisable to prioritize and optimize the tests performed. The optimization method studied in this thesis is Dynamic Source Code Analysis (DSCA). In DSCA, a piece of target software is monitored during testing to find out which parts of the target code get executed. By storing this information, later code changes can trigger re-execution of exactly those stored test cases that exercised the modified parts of the code. The test setup for this project consisted of three open-source software targets, three fuzz testing suites, and the DSCA software. Test runs of different lengths were triggered by code changes of varying size, and the durations of these runs and the sizes of the changes were stored for further analysis. The purpose of this thesis was to create a method for the fuzz testing suite to communicate reliably with the DSCA software, in order to find out how much time can be saved if CI testing is optimized by scanning every source code change to obtain a targeted test set, as opposed to running a comprehensive set of tests after each change. The data analysis demonstrates that using DSCA reduces the average run time of a test by up to 50%.
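The optimization being measured reduces to a reverse coverage map: record which code each test executed, then rerun only the tests that touched the changed lines. A minimal sketch of that selection step, with an assumed (file, line) granularity and invented test names:

    # Coverage from an earlier full run: test -> (file, line) pairs it executed.
    coverage_map = {
        "test_login":  {("auth.c", 10), ("auth.c", 42), ("util.c", 7)},
        "test_logout": {("auth.c", 88)},
        "test_parser": {("parse.c", 5), ("util.c", 7)},
    }

    def select_tests(changed_lines, coverage_map):
        """Return only the tests whose recorded execution touches a change."""
        return sorted(
            test for test, lines in coverage_map.items()
            if lines & changed_lines   # any overlap means the test must rerun
        )

    # A commit touching util.c line 7 selects two of the three tests.
    print(select_tests({("util.c", 7)}, coverage_map))  # ['test_login', 'test_parser']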
10

Wilén, J. (Juhani). "Code change based selective testing in continuous integration environment." Master's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201806062460.

Abstract:
Continuous integration (CI) is a software engineering practice in which new code is integrated into the existing codebase continuously. Integration testing ensures that code changes function as intended together with the other parts of the code. The number of tests tends to grow, and at some point running them all becomes infeasible in the limited time between consecutive test executions. The traditional retest-all approach then breaks down, and test optimization techniques are required. Test selection is one such technique: it selects the tests that are relevant to recent changes in the code. The purpose of this thesis is to analyze existing test selection methods and to implement an initial continuous test selection method in a CI environment that reduces the duration of the integration testing stage and provides faster feedback. The method is intended to be safe, in the sense that no additional faults slip through testing. The test selection is based on changes submitted to the version control system (VCS), which are compared with the source code file coverages of different hardware variants as reported by compilers. In addition, other possible dependencies between variants and code changes, related to test code and interfaces, are investigated. Variants independent of the change can then be skipped, and only change-dependent variants are tested, as sketched below. Initially, the implemented test selection method was used in a single software development branch for testing purposes. The results indicate that the method achieved a slight but statistically significant reduction of integration testing duration at a significance level of 0.05: the mean testing duration decreased by 15.2% and the median by 22.2%. However, the implementation still has some inaccuracies in dependency detection, and further improvements are needed to make the test selection method more efficient.
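The selection rule can be stated compactly: intersect the files changed in a commit with each hardware variant's compiler-reported source file set, and skip variants with no overlap. A sketch under assumed data shapes (the variant names and file sets are invented):

    # Source files each hardware variant builds, as reported by its compiler.
    variant_files = {
        "hw_variant_A": {"radio.c", "common/util.c"},
        "hw_variant_B": {"dsp.c", "common/util.c"},
        "hw_variant_C": {"display.c"},
    }

    def variants_to_test(changed_files, variant_files):
        """Keep a variant only if the commit touches code it actually builds."""
        return sorted(v for v, files in variant_files.items() if files & changed_files)

    # A change to common/util.c requires testing variants A and B; C is skipped.
    print(variants_to_test({"common/util.c"}, variant_files))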
11

Lee, Say Yong. "Centrifuge modelling of cone penetration testing in cohesionless soils." Thesis, University of Cambridge, 1990. https://www.repository.cam.ac.uk/handle/1810/250983.

12

O'Donnell, Jeffrey R. (Jeffrey Robert). "Design, construction, and commissioning of an in-core materials testing facility for slow strain rate testing." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/28129.

13

Akbar, Aziz. "Development of low cost in-situ testing devices." Thesis, University of Newcastle Upon Tyne, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364801.

14

Odia, Osaretin Edwin. "Testing in Software Product Lines." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3853.

Abstract:
This thesis presents research aimed at investigating the different activities involved in the software product line testing process and possible improvements towards developing high quality software product lines at reduced cost and time. The research was performed using Kitchenham's systematic review procedures. The reviews carried out in this research cover several areas relating to software product line testing. The reasons for performing a systematic review in this research are to summarize the existing evidence on testing in the software product line context, to identify gaps in current research, and to suggest areas for further research. The contribution of this thesis is research aimed at revealing the different activities, issues and challenges in software product line testing. The research into the different activities in software product lines led to the proposed SPLIT model for software product line testing. The model helps to clarify the steps and activities involved in the software product line testing process and provides an easy-to-follow map for testers and managers in software product line development organizations. The results mainly concern how testing in software product lines can be improved towards achieving software product line goals. The basic contribution is the proposed model for product line testing and an investigation into, and possible improvements in, issues related to software product line testing activities.
The main purpose of the research as presented in this thesis is to present a clear picture of testing in the context of software product lines, which is quite different from testing a single product. The focus of this thesis is specifically the different steps and activities involved in software product line testing and possible improvements in its activities and issues, towards the goal of developing high quality software product lines at reduced cost and time. For software product lines to achieve their goals, there should be a comprehensive set of testing activities in software product line development: the development activities from performing analyses and creating designs to integrating programs in a software product line context, component testing, and tool support for software product line testing should all be taken into consideration.
15

Liao, Chung-Lon. "Applications of cone, vane and vane-cone to predict stress-strain behaviour of unsaturated cohesive soil." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=72788.

16

Gao, Shanshan, and 高珊珊. "Coring process monitoring for strength of grout, concrete and rock in laboratory testing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B45530361.

17

Falagush, Omar. "Discrete element modelling of cone penetration testing in granular materials." Thesis, University of Nottingham, 2014. http://eprints.nottingham.ac.uk/14134/.

Abstract:
Cone penetration testing (CPT) is one of the most versatile devices for in situ soil testing. With minimal disturbance to the ground, it provides information about soil classification and geotechnical parameters. Several researchers have used different numerical techniques such as strain path methods and finite element methods to study CPT problems. The Discrete Element Method (DEM) is a useful alternative tool for studying cone penetration problems because of its ability to provide micro mechanical insight into the behaviour of granular materials and cone penetration resistance. This study uses three-dimensional DEM to simulate the cone penetration testing of granular materials in a calibration chamber. Due to the geometric symmetry of this study a 90 degree segment of the calibration chamber and the cone penetrometer was initially considered followed by a 30 degree segment to allow for the simulation of smaller particle sizes and to reduce computational time. This research proposes a new particle refinement method, similar to the mesh refinement of finite-element modelling, in the sense that a large number of small particles were brought into contact with the cone tip, while the large particles were distanced further away from the cone, to reduce computational time effectively. Using a radius expansion method for sample preparation and assigning a constant mass to each particle in the sample was found to reduce computational time significantly with little influence on tip resistance. The effects of initial sample conditions and particle friction coefficient were found to have an important influence on the tip resistance. In addition, prohibiting particle rotation was found to increase tip resistance significantly compared to when the particles were permitted to rotate freely. Particle shape in this study was simulated by replacing the spheres with simple two-ball clumps and was found to have an important effect on the tip resistance. DEM simulations of biaxial tests were conducted to investigate the effect of initial sample conditions, particle shape and particle friction coefficient on the stress-strain behaviour of granular materials. All the above mentioned parameters were found to have a significant effect on the stress-strain behaviour of granular materials. Biaxial test simulations were also conducted to obtain basic granular material properties to derive analytical CPT solutions from continuum mechanics principles. Some of the DEM simulation results were found to be in good agreement with the analytical solutions that used a combined cylindrical-spherical cavity expansion method. Particle crushing was simulated during the cone penetration tests by replacing a broken particle with two new equi-sized smaller particles with mass conserved. The results showed considerable reduction in the tip resistance for the crushing model compared to the non-crushing model and this reduction increased as the confining stress increased.
18

Rönn, Mattias. "Symvex : A Symbolic Execution System for Machine Code." Thesis, Linköpings universitet, Databas och informationsteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-124181.

Abstract:
This thesis is part of an ongoing research project at Linköping University. The goal of the thesis work is to design and implement a prototype for a symbolic execution system that scales well to larger programs and is capable of performing symbolic execution on machine code. For this reason we analyzed the current state of symbolic executors that operate on machine code, to see whether an existing implementation could serve as the base for our prototype, and in particular whether any of the existing systems scaled well to large software. We found that none of the existing systems worked well with the real-life software in our evaluation. Furthermore, even if it had been possible to fix one of the existing systems, the time required to figure out the faults in their implementation would most likely have been too great, so we decided to create an implementation of our own from scratch. We did note, however, that some approaches in the existing systems seemed to work better with large software; specifically, saving as little state as possible about the program seemed favorable. Armed with the knowledge gained from the analysis, we implemented a system that compares quite well with the existing systems. Our system was able to execute all the real-life programs used in our tests, but unfortunately had issues with high memory usage for certain programs. We present and discuss several potential ways to mitigate this issue.
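The prototype itself works on machine code and is not reproduced here; as a toy illustration of the underlying idea, where the only state kept per path is its constraint, the sketch below symbolically executes both sides of one branch over a 32-bit word using the z3 solver (the branch condition is invented):

    from z3 import BitVec, Not, Solver, sat

    x = BitVec("x", 32)  # symbolic 32-bit word standing in for an input register

    # Program under test, conceptually:  if (((x * 3) & 0xFF) == 0x2A) bug();
    cond = ((x * 3) & 0xFF) == 0x2A

    for label, constraint in (("bug path", cond), ("normal path", Not(cond))):
        s = Solver()
        s.add(constraint)  # the path constraint is the only state we keep
        if s.check() == sat:
            print(label, "is reachable, e.g. x =", s.model()[x])
        else:
            print(label, "is unreachable")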
19

Nevito, Gomez Javier. "Design, set up, and testing of a matrix acidizing apparatus." Texas A&M University, 2006. http://hdl.handle.net/1969.1/4282.

Abstract:
Well stimulation techniques are applied on a regular basis to enhance productivity and maximize recovery in oil and gas wells. Among these techniques, matrix acidizing is probably the most widely performed job because of its relatively low cost, compared to hydraulic fracturing, and its suitability both to generate extra production capacity and to restore original productivity in damaged wells. The acidizing process leads to increased economic reserves, improving the ultimate recovery in both sandstone and carbonate reservoirs. Matrix acidizing consists of injecting an acid solution into the formation at a pressure below the fracture pressure to dissolve some of the minerals present in the rock, with the primary objective of removing damage near the wellbore, hence restoring the natural permeability and greatly improving well productivity. Reservoir heterogeneity plays a significant role in the success of acidizing treatments because of its influence on damage removal mechanisms, and is strongly related to the dissolution pattern of the matrix. The standard acid treatments are HCl mixtures to dissolve carbonate minerals and HCl-HF formulations to attack plugging minerals, mainly silicates (clays and feldspars). A matrix acidizing apparatus for conducting linear core flooding was built, and an operational procedure for safe, easy, and comprehensive use of the equipment was detailed. The apparatus was capable of reproducing different conditions of flow rate, pressure, and temperature. Extensive preliminary experiments were carried out on core samples of both Berea sandstone and Cream Chalk carbonate to evaluate the effect of rock heterogeneities and treatment conditions on acidizing mechanisms. The results showed that temperature accelerates the reaction of HF-HCl acid mixtures in sandstone acidizing. The use of higher concentrations of HF, particularly at high temperatures, may cause deconsolidation of the matrix, adversely affecting the final stimulation results. It was also seen that the higher the flow rate, the better the permeability response, until a certain optimal flow rate is reached, which appears to be 30 ml/min for Berea sandstone. Highly permeable macroscopic channels were created when acidizing limestone cores with 15% HCl. In carbonate rocks, there is an optimum acid injection rate at which the dominant wormhole system is formed.
20

Scott, Hanna E. T. "A Balance between Testing and Inspections : An Extended Experiment Replication on Code Verification." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-1751.

Abstract:
An experiment replication comparing the performance of traditional structural code testing with inspection meeting preparation using scenario-based reading. The original experiment was conducted by Per Runeson and Anneliese Andrews in 2003 at Washington State University.
21

Malevris, N. "An effective approach for testing program branches and linear code sequences and jumps." Thesis, University of Liverpool, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233799.

22

Morton, Alexander F. "Testing an Original Story in Multiple Artistic Mediums." Digital Commons @ East Tennessee State University, 2015. https://dc.etsu.edu/honors/298.

Abstract:
The Story is one of the oldest forms of communication between humans. Various methods have enhanced and updated the Art in a variety of ways since the concept was created. In modern times, a story can exist in multiple mediums because of the variations that humans use today to tell stories. I present an artistic project that will show my development of an original universe, plot, and characters into a storyline introduction for enjoyable purposes. The belief was that these ideas I created could succeed in multiple formats, but I would need to narrow it down and test what I had created. I chose two different mediums, a Written Narrative and a Video Game, as means to tell my story as much as I could within the time frame. By using the opinions of others, I’ll learn if either project can be successful in telling my story and which method offered the best experience with my particular story ideas to share with an individual.
23

Greig, James William. "Estimating undrained shear strength of clay from cone penetration tests." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/25076.

Abstract:
This paper discusses several proposed methods for estimating undrained shear strength from cone penetration tests. This correlation has been studied in the past; however, most studies have focused only on cone bearing. In addition to discussing these traditional methods, this paper evaluates recently proposed methods of estimating Su from CPT pore pressure data. The results of field vane and cone penetration tests from five Lower Mainland sites are presented in relation to the different proposed correlation techniques. The results show that there is no unique cone factor for estimating Su from CPT for all clays; however, a reasonable estimate of Su can be made by comparing the predictions from several of the proposed methods, and with local correlations these techniques can be quite reliable. The results also show that the estimation of Su from CPT is influenced by various factors relating to the choice of a reference Su, cone design, CPT test procedures and the soil characteristics. In particular, the estimation of Su from CPT is strongly influenced by soil parameters such as stress history, sensitivity and stiffness. Increases in OCR and sensitivity were reflected by increases in the traditional cone factors Nc and Nk. The use of pore pressure data appears to be a promising means of estimating Su from CPT. Expressions have been developed that predict excess pore pressures based on cavity expansion theory and attempt to include the effects of sensitivity, stress history and stiffness. In addition, comparisons between friction sleeve measurements and Su, and a method for estimating sensitivity from friction ratios, are presented. Lastly, recommended procedures for estimating Su from CPT are given.
Faculty of Applied Science, Department of Civil Engineering (graduate thesis)
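The traditional cone-factor correlation at the heart of this thesis is simple to state: Su = (qc − σv0)/Nk, with the cone factor Nk calibrated against a reference test such as the field vane. A one-function sketch (the numbers are illustrative, not values from the five sites):

    def undrained_shear_strength(qc_kpa, sigma_v0_kpa, nk):
        """Classic cone-factor estimate: Su = (qc - total overburden stress) / Nk."""
        return (qc_kpa - sigma_v0_kpa) / nk

    # Illustrative numbers: 600 kPa tip resistance where sigma_v0 is 90 kPa,
    # with a locally calibrated Nk of 15 (Nk rises with OCR and sensitivity,
    # which is why no single cone factor fits all clays).
    print(undrained_shear_strength(600.0, 90.0, 15.0), "kPa")   # 34.0 kPa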
24

Stürmer, Ingo. "Systematic testing of code generation tools: a test suite oriented approach for safeguarding model based code generation." Berlin: Pro Business, 2006. http://deposit.ddb.de/cgi-bin/dokserv?id=2788859&prov=M&dok_var=1&dok_ext=htm.

25

Kalogridis, Georgios. "Preemptive mobile code protection using spy agents." Thesis, Royal Holloway, University of London, 2011. http://repository.royalholloway.ac.uk/items/1863e436-1c99-0cae-8a59-c6bee0c9553e/9/.

Abstract:
This thesis introduces 'spy agents' as a new security paradigm for evaluating trust in remote hosts in mobile code scenarios. In this security paradigm, a spy agent, i.e. a mobile agent which circulates amongst a number of remote hosts, can employ a variety of techniques in order to both appear 'normal' and suggest to a malicious host that it can 'misuse' the agent's data or code without being held accountable. A framework for the operation and deployment of such spy agents is described. Subsequently, a number of aspects of the operation of such agents within this framework are analysed in greater detail. The set of spy agent routes needs to be constructed in a manner that enables hosts to be identified from a set of detectable agent-specific outcomes. The construction of route sets that both reduce the probability of spy agent detection and support identification of the origin of a malicious act is analysed in the context of combinatorial group testing theory. Solutions to the route set design problem are proposed. A number of spy agent application scenarios are introduced and analysed, including: a) the implementation of a mobile code email honeypot system for identifying email privacy infringers, b) the design of sets of agent routes that enable malicious host detection even when hosts collude, and c) the evaluation of the credibility of host classification results in the presence of inconsistent host behaviour. Spy agents can be used in a wide range of applications, and it appears that each application creates challenging new research problems, notably in the design of appropriate agent route sets.
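The connection to combinatorial group testing can be made concrete with the simplest non-adaptive scheme: give each host a distinct nonzero binary codeword over the agent routes, and a single misbehaving host is identified by exactly which agents report misuse. The sketch below assumes one malicious host and no collusion, which is the easy case; the thesis's route designs also cover collusion.

    from math import ceil, log2

    hosts = ["h0", "h1", "h2", "h3", "h4", "h5"]
    n_routes = ceil(log2(len(hosts) + 1))     # 3 routes give 7 nonzero codewords

    # Host i gets codeword i + 1; route r visits the hosts whose bit r is set.
    routes = [
        [h for i, h in enumerate(hosts) if ((i + 1) >> r) & 1]
        for r in range(n_routes)
    ]

    def identify(misused_routes):
        """Decode the misbehaving host from the set of routes reporting misuse."""
        codeword = sum(1 << r for r in misused_routes)
        return hosts[codeword - 1]

    # Host h5 has codeword 6 (binary 110), so it is visited only by routes 1
    # and 2; if exactly those agents are misused, decoding points back to h5.
    print(identify([1, 2]))   # h5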
26

Lapthorn, Andrew Craig. "High Temperature Superconducting Partial Core Transformers." Thesis, University of Canterbury. Electrical and Computer Engineering, 2012. http://hdl.handle.net/10092/7130.

Abstract:
The thesis begins by providing an introduction to transformer theory. An ideal transformer is examined first, followed by full core transformer theory. The partial core transformer is then introduced and compared to the full core design. An introduction to superconductors is then presented, with a simplified theory of superconductivity. High temperature superconductors are then examined, including their physical structure, superconducting properties and the design of the superconducting wire. The early development of high temperature superconducting partial core transformers at the University of Canterbury is then reviewed. Early partial core development is discussed, followed by some material testing at cryogenic temperatures. This work led to the development of the first high temperature superconducting partial core transformer. This transformer failed during testing, and an examination of the failure mechanisms is presented. The results of the failure investigation prompted an alternative winding insulation design, which was implemented in a full core superconducting transformer. The modelling used to design a high temperature superconducting partial core transformer is then presented. Based upon the reverse design method, the modelling is used to determine the components of the Steinmetz equivalent transformer circuit, combining circuit theory and finite element analysis. An ac loss model for high temperature superconductors is also presented. A new 15 kVA, 230-230 V high temperature superconducting partial core transformer was designed, built and tested. The windings are layer wound with first generation Bi2223 high temperature superconductor. The modelling was used to predict the performance of the transformer as well as the ac losses of the high temperature superconductor. A series of electrical tests were performed on the transformer, including open circuit, short circuit, resistive load, overload, ac withstand voltage and fault ride-through tests. The test results are compared with the model. The transformer was found to be 98.2% efficient at rated power with 2.86% voltage regulation.
27

Saxena, Pallavi. "EVALUATING TESTING EFFORT AND ITS CORELATION TO CYCLOMACTIC COMPLEXITY AND CODE COVERAGE." University of Akron / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=akron1438291462.

28

Alfsson, Oskar. "An analysis of Mutation testing and Code coverage during progress of projects." Thesis, Umeå universitet, Institutionen för datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-142408.

Abstract:
In order to deliver high quality software projects, a development team needs a well-developed test suite. Several methods aim to evaluate test suites in some way, such as code coverage and mutation testing. Code coverage describes the degree of source code that a program executes when running a test suite; mutation testing measures the test suite's effectiveness. More development teams use code coverage than mutation testing. With only code coverage being monitored throughout a project, could the development team risk a drop in test suite effectiveness as the codebase gets bigger with each version? In this thesis, a mutation testing tool called PIT is used during the progress of four well known open source projects, to show that mutation testing is an important technique for ensuring continuously high test suite effectiveness, rather than relying on code coverage measurements alone. In general, all projects perform well in both code coverage and test suite effectiveness, with the exception of one project in which the test suite effectiveness drops drastically. This drop shows that all projects are at risk of low test suite effectiveness if they do not use mutation testing techniques.
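The gap between coverage and effectiveness that this thesis measures can be shown in miniature: inject a small fault (a mutant), rerun the suite, and see whether any test fails. The example below is illustrative Python rather than PIT, which mutates JVM bytecode; the function and its suite are invented.

    # Original function and a "conditional boundary" mutant of it.
    def max_orig(a, b):
        return a if a > b else b

    def max_mutant(a, b):
        return a if a >= b else b   # > mutated to >=

    def suite_passes(impl):
        """Run the test suite against an implementation of max."""
        try:
            assert impl(2, 1) == 2
            assert impl(1, 2) == 2
            return True
        except AssertionError:
            return False

    # The suite fully covers both branches of max_orig, yet the mutant also
    # passes: it SURVIVES, exposing a missing a == b test case. Coverage
    # alone would have reported nothing wrong with this suite.
    print(suite_passes(max_orig), suite_passes(max_mutant))   # True True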
29

Dulal, Nabin Raj, and Sabindra Maharjan. "A Comparative Study of Component Based Regression Testing Approaches without Source Code." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4357.

Abstract:
Context: Today, most software products are built with COTS components. When a new version of these components becomes available, it is difficult to perform testing, as the component vendors do not usually provide source code. Various regression testing techniques have been developed, but most rely on source code for change identification, so testers face different challenges in performing effective testing. Objectives: The goal of this research is to find out which approaches are used to identify changes in modified COTS components, to analyze the main characteristics of those approaches, and to investigate how these characteristics can be used in the selection and development of a CBRT approach. Methods: To fulfill the aims of the research, we conducted a systematic literature review of CBRT approaches published from 1993 to 2010, from which 32 relevant papers were identified; data related to our research were extracted from those papers and conclusions drawn. Relevant articles were searched in six scientific databases: IEEE Xplore, ACM Digital Library, SpringerLink, Science Direct, Scopus, and Engineering Village. Furthermore, an online survey based on the characteristics of CBRT approaches was conducted to validate the SLR results. Results: From the systematic literature review we identified 8 characteristics of CBRT approaches: applicability, automation, complexity, behavior model used, coverage criteria, strengths and weaknesses, theory used, and input. We observe that these are the most important characteristics of CBRT approaches and should be considered when selecting or developing a new CBRT approach. The survey results validate our findings, and the survey identified some additional factors. Conclusion: The research develops the state of the art of CBRT approaches towards future research. The results of this thesis will be helpful for researchers as well as practitioners working on CBRT, and can be considered a basis for further study, such as building a framework based on these characteristics to support component based regression testing.
30

Burns, Susan Elizabeth. "Development, adaptation, and interpretation of cone penetrometer sensors for geoenvironmental subsurface characterization." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/23358.

31

Kwiatkowski, Terese Marie. "The miniature electrical cone penetrometer and data acquisition system." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/90934.

Abstract:
The static cone penetrometer is an in-situ testing tool which was originally developed to derive information on soil type and soil strength. More recently, it has found application in liquefaction assessment. Typical cone penetrometers are heavy duty devices which are operated with the assistance of a drill rig; this capacity is not necessary for field studies of liquefaction, since liquefaction usually occurs at relatively shallow depths. This thesis is directed toward the development of a miniature, lightweight cone penetrometer which can be used in earthquake reconnaissance studies related to liquefaction problems. The research for this thesis involved four principal objectives: 1. Develop procedures to automatically acquire and process measurements from a miniature electrical cone; 2. Develop and perform tests in a model soil-filled bin to calibrate the cone; 3. Evaluate the utility and accuracy of the cone results as a means to assess conventional soil properties; and 4. Conduct a preliminary evaluation of the cone results in the context of recently developed methods to predict liquefaction potential. The work on the first objective involved assembling, and writing software for, a microcomputer based data acquisition system. Successful implementation of this system allowed data from the tests to be rapidly processed and displayed. Calibration tests with the cone were carried out in a four foot high model bin which was filled ten times with sand formed to a variety of densities. The sand used is Monterey No. 0/30, a standard material with well known behavioral characteristics under static and dynamic loading. The test results showed the cone to produce consistent data and to readily distinguish the varying density configurations of the sand. Using the results in conventional methods for converting cone data into soil parameters yielded values consistent with those expected. Liquefaction potential predictions were less satisfying, although not unreasonable. Further research is needed in this area, both to check the reliability of the prediction procedures and the ability to achieve the desired objectives.
M.S.
32

Cota, Erika Fernandes. "Reuse-based test planning for core-based systems-on-chip." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2003. http://hdl.handle.net/10183/4180.

Abstract:
Electronic applications are currently developed under the reuse-based paradigm. This design methodology presents several advantages for the reduction of the design complexity, but brings new challenges for the test of the final circuit. The access to embedded cores, the integration of several test methods, and the optimization of the several cost factors are just a few of the several problems that need to be tackled during test planning. Within this context, this thesis proposes two test planning approaches that aim at reducing the test costs of a core-based system by means of hardware reuse and integration of the test planning into the design flow. The first approach considers systems whose cores are connected directly or through a functional bus. The test planning method consists of a comprehensive model that includes the definition of a multi-mode access mechanism inside the chip and a search algorithm for the exploration of the design space. The access mechanism model considers the reuse of functional connections as well as partial test buses, cores transparency, and other bypass modes. The test schedule is defined in conjunction with the access mechanism so that good trade-offs among the costs of pins, area, and test time can be sought. Furthermore, system power constraints are also considered. This expansion of concerns makes it possible an efficient, yet fine-grained search, in the huge design space of a reuse-based environment. Experimental results clearly show the variety of trade-offs that can be explored using the proposed model, and its effectiveness on optimizing the system test plan. Networks-on-chip are likely to become the main communication platform of systemson- chip. Thus, the second approach presented in this work proposes the reuse of the on-chip network for the test of the cores embedded into the systems that use this communication platform. A power-aware test scheduling algorithm aiming at exploiting the network characteristics to minimize the system test time is presented. The reuse strategy is evaluated considering a number of system configurations, such as different positions of the cores in the network, power consumption constraints and number of interfaces with the tester. Experimental results show that the parallelization capability of the network can be exploited to reduce the system test time, whereas area and pin overhead are strongly minimized. In this manuscript, the main problems of the test of core-based systems are firstly identified and the current solutions are discussed. The problems being tackled by this thesis are then listed and the test planning approaches are detailed. Both test planning techniques are validated for the recently released ITC’02 SoC Test Benchmarks, and further compared to other test planning methods of the literature. This comparison confirms the efficiency of the proposed methods.
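As an illustration of power-constrained test scheduling, the sketch below packs core tests into parallel sessions under a power budget; the core names and figures are invented, and this greedy heuristic only stands in for (it does not reproduce) the design-space search algorithms the thesis actually proposes.

```python
from dataclasses import dataclass

@dataclass
class CoreTest:
    name: str
    time: int    # test length in cycles
    power: int   # peak power drawn while under test

def schedule(cores, power_budget):
    """Toy heuristic: longest tests first, packed into parallel sessions
    whose summed power stays under the budget. Returns (total time, sessions).
    Assumes every individual core fits within the budget."""
    pending = sorted(cores, key=lambda c: c.time, reverse=True)
    sessions = []
    while pending:
        session, used = [], 0
        for c in list(pending):
            if used + c.power <= power_budget:
                session.append(c)
                used += c.power
                pending.remove(c)
        if not session:
            raise ValueError("a core exceeds the power budget on its own")
        sessions.append(session)
    total = sum(max(c.time for c in s) for s in sessions)
    return total, sessions

cores = [CoreTest("cpu", 900, 40), CoreTest("dsp", 700, 35),
         CoreTest("mem", 400, 20), CoreTest("uart", 100, 5)]
t, plan = schedule(cores, power_budget=60)
for i, s in enumerate(plan):
    print(f"session {i}: {[c.name for c in s]}")
print("test time:", t)
```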
APA, Harvard, Vancouver, ISO, and other styles
33

Zhao, Lei. "Bench scale apparatus measurement uncertainty and uncertainty effects on measurement of fire characteristics of material systems." Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-050105-182456/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Bossuyt, Bernard J., and Byron B. Snyder. "Software testing tools: analyses of effectiveness on procedural and object-orientated source code." Monterey, Calif.: Naval Postgraduate School; Springfield, Va.: Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA397128.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Kabasele, Philothe Mwamba. "Testing the Matrix Language Frame model with evidence from French-Lingala code-switching." OpenSIUC, 2011. https://opensiuc.lib.siu.edu/theses/616.

Full text
Abstract:
My thesis investigates the universality of the Matrix Language Frame model developed by Myers-Scotton (2002). The work tests the model using bilingual data which display code-switching between French and the low variety of Lingala. The main concern of the work is to test the constraints posited as principles of the model, which claim that the Matrix Language dictates the morphosyntactic frame of a bilingual Complementizer Phrase (CP). In the light of the findings of this study, the MLF model failed to account for a number of situations, such as the Morpheme Order Principle and double morphology, specifically with the outsider late system morphemes.
APA, Harvard, Vancouver, ISO, and other styles
36

Kahle, Nicole L. "The Effects of Core Stability Training on Balance Testing in Young, Healthy Adults." University of Toledo Honors Theses / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=uthonors1245863136.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Snyder, Byron B. "Software testing tools : analyses of effectiveness on procedural and object-orientated source code." Thesis, Monterey, California. Naval Postgraduate School, 2001. http://hdl.handle.net/10945/1938.

Full text
Abstract:
The levels of quality, maintainability, testability, and stability of software can be improved and measured through the use of automated testing tools throughout the software development process. Automated testing tools assist software engineers in gauging the quality of software by automating the mechanical aspects of the software-testing task. Automated testing tools vary in their underlying approach, quality, and ease of use, among other characteristics. Evaluating available tools and selecting the most appropriate suite of tools can be a difficult and time-consuming process. In this thesis, we propose a suite of objective metrics for measuring tool characteristics, as an aid in systematically evaluating and selecting automated testing tools. Future work includes further research into the validity and utility of this suite of metrics, conducting similar research using a larger software project, and incorporating a larger set of tools into similar research.
US Navy (USN) author
APA, Harvard, Vancouver, ISO, and other styles
38

Fletcher, Adam. "Non-destructive testing of the graphite core within an advanced gas-cooled reactor." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/nondestructive-testing-of-the-graphite-core-within-an-advanced-gascooled-reactor(3ca5c904-6860-46b8-8538-4136cb2aedcd).html.

Full text
Abstract:
The aim of this work has been to apply the techniques of non-destructive testing and evaluation to the graphite fuel channel bricks which form the core of an Advanced Gas-Cooled Reactor. Two modes of graphite degradation have been studied: subsurface cracks originating from the keyway corners of the bricks, and the reduction in material density caused by radiolytic oxidation. This work has focused on electromagnetic inspection techniques. Brick cracking has been studied using a multi-frequency eddy current technique with the aim of determining quantitative information. In order to accurately control the crack dimensions, this work has used radially machined slots as an analogue. Two sensor geometries were studied, and it was determined that slots of at least 10 mm through-wall extent could be located. A novel, empirical method of determining the slot size is presented, using a brick machined with a series of reference slots. Machined slots originating from a keyway could be sized to within 2 mm using this method. A parametric 3D finite element study was also carried out on this problem. These simulations could distinguish the location of the slots and had some sensitivity to their size; however, the model was found to be overly sensitive to the specific mesh used. Two new contributions to the inverse problem are presented. The first is a minor extension to the usual adjoint problem, in which one system now contains a gradiometer. The second is a proposed solution to the ambiguous nature of the inner product required by the sensitivity formulation. This solution has been validated with finite element modelling. Density reduction has been studied via its relationship with electrical conductivity, using a technique based on impedance spectroscopy. An inverse eddy current problem has been solved using the regularised Gauss-Newton method to determine the conductivity of the brick over its cross section. The associated forward problem has been solved using the finite element method on a simplified geometry. Tikhonov regularisation has been employed to overcome the ill-posed nature of the inverse problem. This method has been applied to a range of sample and sensor geometries and found to produce excellent results from laboratory data, provided the finite element model is well calibrated. Bore or surface conductivity values can be reproduced to better than 1%, with accuracy reducing with distance from the sensor. The sensitivity of the algorithm to the regularisation parameter has been studied using the L-curve method, and the effect of two regularisation operators has also been examined. A new method of choosing the regularisation parameter a priori is proposed and tested. Data taken during reactor outages produce physically realistic profiles, although the results appear offset from electrical resistivity values measured using the four-point method. The focus of future work should be to remove this effect, which will likely require improvements to the forward model.
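The regularised Gauss-Newton iteration at the heart of such an inverse problem can be written compactly. The NumPy sketch below shows the textbook update for minimising ||y - F(x)||² + λ||Lx||², with a toy forward model standing in for the thesis' calibrated eddy-current finite element model.

```python
import numpy as np

def gauss_newton_tikhonov(F, J, y, x0, lam, L=None, iters=20, tol=1e-8):
    """Regularised Gauss-Newton for y = F(x):
    minimise ||y - F(x)||^2 + lam * ||L x||^2.

    F : forward model, R^n -> R^m (here it would be the eddy-current FEM)
    J : Jacobian of F at x (m x n)
    L : regularisation operator (identity if None)
    This is the generic textbook update, not the thesis' calibrated solver."""
    x = np.asarray(x0, dtype=float)
    Lm = np.eye(x.size) if L is None else L
    LtL = Lm.T @ Lm
    for _ in range(iters):
        r = y - F(x)
        Jx = J(x)
        # Normal equations of the linearised, regularised problem
        A = Jx.T @ Jx + lam * LtL
        b = Jx.T @ r - lam * (LtL @ x)
        dx = np.linalg.solve(A, b)
        x = x + dx
        if np.linalg.norm(dx) < tol * (1 + np.linalg.norm(x)):
            break
    return x

# Toy use: recover x from a mildly nonlinear forward map
F = lambda x: np.array([x[0] + 0.1 * x[1] ** 2, x[1]])
J = lambda x: np.array([[1.0, 0.2 * x[1]], [0.0, 1.0]])
print(gauss_newton_tikhonov(F, J, y=np.array([1.1, 1.0]),
                            x0=np.zeros(2), lam=1e-6))  # -> ~[1., 1.]
```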
APA, Harvard, Vancouver, ISO, and other styles
39

Patil, Aniket V. "Programming QR code scanner, communicating Android devices, and unit testing in fortified cards." Thesis, California State University, Long Beach, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10638699.

Full text
Abstract:

In the contemporary world, where smartphones have become an essential part of our day-to-day lives, Fortified Cards aims to let people monitor the security of their payments using their smartphones. Fortified Cards, as a project, is an endeavor to revolutionize credit or debit card payments using the Quick Response (QR) technology and the International Mobile Equipment Identity (IMEI) number.

The emphasis in the Android application of Fortified Cards is on the QR technology, communication between two Android devices, and testing the application under conditions that could jeopardize its successful operation. The project documentation illustrates the workings of the application graphically with an activity diagram, giving any developer a step-by-step guide and a detailed description of the implementation.
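As a rough illustration of the QR/IMEI binding idea, here is a Python sketch using the `qrcode` package as a stand-in for the Android QR APIs; the payload field names and values are invented for illustration and are not the Fortified Cards format.

```python
import json
import qrcode  # pip install qrcode[pil]; stands in for the Android implementation

def make_payment_qr(card_token, imei, amount, path="payment_qr.png"):
    """Encode a payment request that binds a card token to one device IMEI,
    so a scan from another handset can be rejected server-side.
    Field names are hypothetical, chosen only for this example."""
    payload = json.dumps({"card": card_token, "imei": imei, "amount": amount})
    qrcode.make(payload).save(path)
    return path

print(make_payment_qr("tok_4242", "356938035643809", 19.99))
```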

APA, Harvard, Vancouver, ISO, and other styles
40

Shultz, Jacque. "Authenticating turbocharger performance utilizing ASME performance test code correction methods." Thesis, Kansas State University, 2011. http://hdl.handle.net/2097/8451.

Full text
Abstract:
Master of Science
Department of Mechanical and Nuclear Engineering
Kirby S. Chapman
Continued regulatory pressure necessitates the use of precisely designed turbochargers to create the design trapped equivalence ratio within the large-bore stationary engines used in the natural gas transmission industry. The upgraded turbochargers scavenge the exhaust gases from the cylinder and create the air manifold pressure and back pressure on the engine necessary to achieve a specific trapped mass. This combination serves to achieve the emissions reduction required by regulatory agencies. Many engine owner/operators request that an upgraded turbocharger be tested and verified prior to re-installation on the engine. Verification of the mechanical integrity and airflow performance prior to engine installation is necessary to prevent field hardware iterations. Confirming the as-built turbocharger design specification before transporting the unit to the field can decrease downtime and installation costs. There are, however, technical challenges to overcome in comparing test-cell data to field conditions. This thesis discusses the corrections and testing methodology required to verify on-site turbocharger performance from data collected in a precisely designed testing apparatus. As the litmus test of the testing system, test performance data are corrected to site conditions per the design air specification. Prior to field installation, the turbocharger is fitted with instrumentation to collect field operating data to authenticate the turbocharger testing system and correction methods. The correction method utilized herein is the ASME Performance Test Code 10 (PTC 10) for Compressors and Exhausters, 1997 version.
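To give a flavour of such corrections, the sketch below applies the standard similarity ("standard-day") corrections for speed and mass flow; these are generic turbomachinery corrections, a simplification of, not a substitute for, the full ASME PTC 10 procedure used in the thesis.

```python
import math

def corrected_speed(n_rpm, t_inlet_k, t_ref_k=288.15):
    """Shaft speed corrected to a reference inlet temperature (Mach similarity)."""
    return n_rpm / math.sqrt(t_inlet_k / t_ref_k)

def corrected_mass_flow(m_dot, p_inlet_kpa, t_inlet_k,
                        p_ref_kpa=101.325, t_ref_k=288.15):
    """Mass flow corrected to reference inlet pressure and temperature."""
    return m_dot * math.sqrt(t_inlet_k / t_ref_k) / (p_inlet_kpa / p_ref_kpa)

# Re-express a test-cell operating point at standard-day inlet conditions
print(f"{corrected_speed(22000.0, 305.0):.0f} rpm corrected")
print(f"{corrected_mass_flow(4.2, 98.0, 305.0):.2f} kg/s corrected")
```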
APA, Harvard, Vancouver, ISO, and other styles
41

Vernersson, Susanne. "Penetration Testing in a Web Application Environment." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-8934.

Full text
Abstract:
As the use of web applications increases among a number of different industries, many companies turn to online applications to promote their services. Companies see the great advantages of web applications, such as convenience, low costs and little need for additional hardware or software configuration. Meanwhile, the threats against web applications are scaling up: an attacker needs little experience or knowledge to hack a poorly secured web application, since the service can easily be accessed over the Internet. While common attacks such as cross-site scripting and SQL injection have been around and very much in use for years, the hacker community constantly discovers new exploits, leaving businesses in need of higher security. Penetration testing is a method used to estimate the security of a computer system, network or web application. The aim is to reveal possible vulnerabilities that could be exploited by a malicious attacker and to suggest solutions to the problem at hand. With the right security fixes, a business system can go from being a threat to its users' sensitive data to a secure and functional platform with just a few adjustments. This thesis aims to help the IT security consultants at Combitech AB detect and secure the most common web application exploits that companies suffer from today. By providing Combitech with safe and easy methods to discover and fix the top security deficiencies, the restricted time spent at a client due to budget concerns can be made more efficient thanks to improvements in the internal testing methodology. The project can additionally be of interest to teachers, students and developers who want to know more about web application testing and security as well as common exploit scenarios.
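A minimal probe of the kind such testing automates is sketched below: it sends marker payloads to one query parameter and flags unencoded reflections. The target URL is a placeholder, and this is only a smoke test for systems you are authorised to probe, not a full penetration-testing methodology.

```python
import requests  # assumes the target is a test instance you may legally probe

MARKER = "xss-canary-1337"
PAYLOADS = [f"<script>{MARKER}</script>", "' OR '1'='1"]

def probe_reflected(url, param):
    """Send marker payloads to one query parameter and report responses
    that echo them back unencoded -- a crude reflected-XSS/SQLi smoke test."""
    findings = []
    for p in PAYLOADS:
        r = requests.get(url, params={param: p}, timeout=10)
        if p in r.text:
            findings.append((param, p, r.status_code))
    return findings

# Hypothetical local test target; substitute your own authorised instance
for hit in probe_reflected("http://localhost:8080/search", "q"):
    print("unencoded reflection:", hit)
```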
APA, Harvard, Vancouver, ISO, and other styles
42

Ghafghazi, Mohsen. "Towards comprehensive interpretation of the state parameter from cone penetration testing in cohesionless soils." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/34090.

Full text
Abstract:
The Cone Penetration Test (CPT) is widely used for determining in-situ properties of soil because of its continuous data measurement and repeatability at relatively low cost. The test is even more attractive in cohesionless soils such as sands, silts and most tailings due to difficulties associated with retrieving undisturbed samples in such soils. Behaviour of soils is highly dependent on both density and stress level. The state parameter is widely accepted to represent the soil behaviour encompassing both density and stress effects. Hence, determining the in-situ state parameter from CPT is of great practical interest. The CPT was analysed using a large strain spherical cavity expansion finite element code using a critical state soil model (NorSand) capable of accounting for both elasticity and plastic compressibility. The constitutive model was calibrated through triaxial tests on reconstituted samples. The state parameter was then interpreted from CPT tip resistance, and the results were verified against an extensive database of calibration chamber tests. The efficiency of the method was further investigated by analysing two well documented case histories, confirming that consistent results could be obtained from different in-situ testing methods using the proposed framework. Consequently, cumbersome and expensive testing methods can be substituted by a combination of triaxial testing and finite element analysis producing soil-specific correlations. One of the difficulties in analysing the cone penetration problem is the less-researched effect of high stresses developing around the cone on the behaviour of the soil. A hypothesis was put forward on the particle breakage process at the particle level and its implications for the behaviour of sands at higher stress levels were discussed. A series of triaxial tests were performed, focusing on the effects of particle breakage on the location of the critical state line. The hypothesis was used to explain the observed behaviour. Particle breakage was shown to cause additional compression and a parallel downward shift in the critical state line. The magnitude of the shift was linked to the amount of breakage and it was argued that significant breakage starts after the capacity for compression due to sliding and rolling is exhausted.
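A screening-level version of such a soil-specific correlation can be written as Q_p = k·exp(−m·ψ) and inverted for the state parameter ψ. In the sketch below, k and m are placeholders of realistic magnitude, standing in for the correlation the thesis derives from triaxial testing and cavity-expansion modelling.

```python
import math

def state_parameter(qc_kpa, p_kpa, u0_kpa=0.0, k=25.0, m=8.0):
    """Invert a screening-level relation Q_p = k * exp(-m * psi) for the
    state parameter psi, where Q_p = (q_c - p) / p'.

    k and m are soil-specific fitting constants; the values here are
    illustrative assumptions, not the thesis' calibrated correlation."""
    q_net = (qc_kpa - p_kpa) / (p_kpa - u0_kpa)
    return -math.log(q_net / k) / m

# 8 MPa tip resistance at 100 kPa mean stress -> negative psi (dilatant sand)
print(f"psi ~ {state_parameter(8000.0, 100.0):+.3f}")
```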
APA, Harvard, Vancouver, ISO, and other styles
43

Foulds, Chantal M. (Chantal Marguerite). "Field testing of five legume forages as interseedings in early and late cole crops." Thesis, McGill University, 1991. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=59942.

Full text
Abstract:
Experimental plots were overlaid on commercial fields of broccoli (Brassica oleracea L. var. italica) and cauliflower (Brassica oleracea L. var. botrytis L.) to evaluate legume species as interseedings in vegetable production. White clover (Trifolium repens L.), red clover (T. pratense L.), yellow sweet clover (Melilotus officinalis L.), hairy vetch (Vicia villosa Roth.) and crimson clover (T. incarnatum L.) were seeded 4-5 weeks after an early planting of broccoli and a late planting of cauliflower. Crop yields, forage biomass, weed biomass and percent fall ground cover were recorded.
A dry year, coupled with difficulties in applying the treatments, resulted in low forage biomass. Hairy vetch yielded the most within the early broccoli planting system. High rainfall the next year resulted in high biomass yields. Over the two-year study, hairy vetch and crimson clover emerged as the two most productive species. Significant effects on fall weed biomass were observed with broccoli in the second year of the study, where interseeded plots reduced weed populations by at least 66%. No evidence of weed suppression by interseedings was seen prior to harvest. Crop yields were not affected by interseedings. All interseeded treatments provided the minimum of 30% ground cover required to help reduce erosion.
APA, Harvard, Vancouver, ISO, and other styles
44

Marinuzzi, Natalie Romina. "Location of sinkhole confining breach using groundwater flow patterns derived from cone penetration testing." Master's thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4442.

Full text
Abstract:
Dynamic forces in the hydrologic cycle move underground water through Florida's carbonate rocks, dissolving chemical components of the rocks and leaving behind caves, solution pipes, and other voids that result in a karst terrain. Ravelling is the common subsidence mechanism throughout most of Florida, whereby unconsolidated materials filter downward into voids in the underlying limestone. A cavity in the overburden develops and enlarges over a period of many years; the enlarged cavity is known as a sinkhole. Investigations of sinkhole characteristics and potential involve studying the regional geology and hydrology and mapping historic sinkholes that have occurred in the area. Cone Penetration Test (CPT) soundings, in conjunction with conventional soil borings, are becoming more common in the assessment of subsurface soil conditions in the vicinity of sinkhole-related ground-surface depressions. The penetration resistance data from CPT can determine the presence and extent of ravelled soil zones characteristic of sinkhole features, and the penetration pore water pressure data can be used to determine the integrity of the clay confining unit at each test sounding location. The objective of this study is to identify the likely location of the confining breach at a sinkhole in Seminole County. The methods used in the assessment of the sinkhole's subsurface conditions were the Standard Penetration Test (SPT), which provided information that helped to identify the location of the ravelled zones within the soil profile, and the Cone Penetration Test, which gave the piezometric water levels obtained from the pore pressure dissipation curves. The total head was calculated from the piezometric water levels corresponding to the different elevations. The data exhibited a downward trend in total head, starting at around elevation 50 feet, NGVD, and extending towards lower elevations. The SPT boring log identified a ravelled zone starting at approximately 31 feet. Together, these observations established that the hydraulic head was influenced by the proximity of the ravelled zones, with the head dropping rapidly as the elevation decreased. From the results of this study, it was concluded that the breach in the confining layer started at a depth of 61.8 feet below the ground surface. Potentiometric contour lines at elevation 24.40 feet showed water flowing from the surroundings of the depression towards the approximate location of its center, where the subsurface cavity exists.
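The total-head calculation described here is simply h = z + u/γ_w. The short sketch below applies it to a made-up CPTu profile to show how a downward gradient (head decreasing with depth) would be detected; the elevations and pore pressures are invented for illustration.

```python
GAMMA_W = 9.81  # unit weight of water, kN/m^3

def total_head(elevation_ft, pore_pressure_kpa):
    """Total head h = z + u / gamma_w, converting the abstract's feet
    for elevation to metres (0.3048 m/ft) to match kPa pore pressures."""
    return elevation_ft * 0.3048 + pore_pressure_kpa / GAMMA_W

# Hypothetical dissipation-test results, ordered from high to low elevation
profile = [(50.0, 20.0), (40.0, 35.0), (30.0, 40.0)]  # (elev ft, u kPa)
heads = [total_head(z, u) for z, u in profile]
for (z, _), h in zip(profile, heads):
    print(f"elev {z:4.0f} ft -> total head {h:5.2f} m")
# Monotonically decreasing head with depth suggests flow toward a breach
print("downward gradient" if heads == sorted(heads, reverse=True)
      else "no downward gradient")
```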
M.S.
Department of Civil and Environmental Engineering
Engineering and Computer Science
Civil and Environmental Engineering
APA, Harvard, Vancouver, ISO, and other styles
45

Panella, Eugenia. "Penetration testing come metodologia di indagine sulla sicurezza di sistema nel contesto aziendale odierno." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3233/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

McKnight, Tobin S. "Engineering properties and cone penetration testing of municipal solid waste to predict landfill settlement." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0010520.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

DeMarce, Josephine Marie. "Testing Theoretical Relationships among Alcohol Use, Drinking to Cope, Mood Regulation Expectancies, and Depression." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/26147.

Full text
Abstract:
Participants (N = 164) completed measures of depression, negative mood regulation expectancies, coping motives for alcohol use, alcohol use, and alcohol-related problems, allowing for cross-sectional and prospective examinations of theoretically derived hypotheses regarding motivational models for alcohol use and related problems in a college population. Using hierarchical linear regression techniques, three hypotheses were examined. The hypothesis that lower levels of depression and higher levels of negative mood regulation expectancies would interact to predict drinking to cope was not supported. The hypothesis that drinking to cope would be predictive of alcohol-related problems even when alcohol consumption was controlled for was supported. The creation of two subscales intended to measure objective and subjective alcohol-related problems is explained. There was mixed support for the hypothesis that drinking to cope is more predictive of subjective alcohol-related problems than of objective alcohol-related problems. Findings from the current study provide support for social learning theory and have implications for alcohol intervention programs on college campuses.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
48

Seuring, Markus. "Output space compaction for testing and concurrent checking." PhD thesis, [S.l.]: [s.n.], 2000. http://pub.ub.uni-potsdam.de/2001/0004/seuring.ps.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Khan, Mohammed Salman. "A Topic Modeling approach for Code Clone Detection." UNF Digital Commons, 2019. https://digitalcommons.unf.edu/etd/874.

Full text
Abstract:
This thesis describes the potential benefits of Latent Dirichlet Allocation (LDA) as a technique for code clone detection. The objective is to propose a language-independent, effective, and scalable approach for identifying similar code fragments in relatively large software systems. The main assumption is that the latent topic structure of software artifacts gives an indication of the presence of code clones: artifacts with similar topic distributions can be hypothesized to contain duplicated code fragments. To test this hypothesis, an experimental investigation using multiple datasets from various application domains was conducted. In addition, CloneTM, an LDA-based working prototype for code clone detection, was developed. Results showed that, if calibrated properly, topic modeling can deliver satisfactory performance in capturing different types of code clones, showing particularly good performance in detecting Type III clones. CloneTM also achieved levels of performance comparable to existing practical tools that adopt different clone detection strategies.
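The core idea — compare source files by the similarity of their LDA topic distributions — can be sketched in a few lines with the gensim library. This is an illustration of the approach, not the CloneTM prototype; the tokenisation, topic count and distance threshold are assumptions.

```python
from gensim import corpora, models          # pip install gensim
from gensim.matutils import hellinger

def clone_candidates(docs, num_topics=10, threshold=0.2):
    """Flag pairs of files whose LDA topic distributions are close in
    Hellinger distance -- the intuition behind an LDA-based clone detector.

    docs: {filename: list of identifier/keyword tokens from that file}"""
    names = list(docs)
    texts = [docs[n] for n in names]
    dictionary = corpora.Dictionary(texts)
    corpus = [dictionary.doc2bow(t) for t in texts]
    lda = models.LdaModel(corpus, num_topics=num_topics,
                          id2word=dictionary, random_state=0)
    topics = [lda.get_document_topics(bow, minimum_probability=0.0)
              for bow in corpus]
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            d = hellinger(topics[i], topics[j])
            if d < threshold:
                pairs.append((names[i], names[j], d))
    return pairs

# Hypothetical token streams: a.py and b.py should surface as candidates
files = {"a.py": ["open", "read", "parse", "token"],
         "b.py": ["open", "read", "parse", "token"],
         "c.py": ["socket", "bind", "listen", "accept"]}
print(clone_candidates(files, num_topics=2))
```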
APA, Harvard, Vancouver, ISO, and other styles
50

Norman, Niclas. "Mutation testing as quality assurance in base station software." Thesis, Linköpings universitet, Programvara och system, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-109636.

Full text
Abstract:
Telecom base stations are a critical part of society's information infrastructure. To ensure high-quality base station software, automated testing is an important part of development. Ericsson measures the quality of automated tests with statement coverage, counting the number of statements executed by a test suite. Alone, however, statement coverage does not guarantee test quality. Mutation testing is a technique to improve test quality by injecting faults and verifying that test suites detect them. This thesis investigates whether mutation testing is a viable way to increase the reliability of test suites for base station software at Ericsson. Using the open-source mutation testing tool MiLu, we describe a practical method of using mutation testing that is viable for daily development. We also describe how mutation testing reveals a number of potential errors in the production code that current test suites miss even though they have very good statement coverage.
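A toy example of the mutation-testing loop (MiLu targets C code; the sketch below uses Python's ast module instead) shows the essential idea: inject a small fault, rerun the tests, and check that the mutant is killed.

```python
import ast
import copy
import types

class AddToSub(ast.NodeTransformer):
    """A single classical mutation operator: replace every '+' with '-'."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

SOURCE = "def price(a, b):\n    return a + b\n"

def run_suite(module):
    """A stand-in 'test suite': returns True if all assertions pass."""
    try:
        assert module.price(2, 3) == 5
        return True
    except AssertionError:
        return False

def load(tree):
    """Compile an AST into a throwaway module object."""
    mod = types.ModuleType("mutant")
    exec(compile(tree, "<mutant>", "exec"), mod.__dict__)
    return mod

original = ast.parse(SOURCE)
mutant = AddToSub().visit(copy.deepcopy(original))
ast.fix_missing_locations(mutant)

assert run_suite(load(original))                       # suite passes on real code
print("mutant killed:", not run_suite(load(mutant)))   # True -> fault detected
```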
APA, Harvard, Vancouver, ISO, and other styles