Dissertations / Theses on the topic 'GDI TECHNIQUE'

Consult the top 26 dissertations / theses for your research on the topic 'GDI TECHNIQUE.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Ma, Wenbin. "GDC, a graph drawing application with clustering techniques." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/MQ60460.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jönsson, Tim. "Efficiency determination of automated techniques for GUI testing." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH. Forskningsmiljö Informationsteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-24146.

Full text
Abstract:
Efficiency as a term in software testing is not well defined in the research community; in industry, and specifically in the test-tool industry, it has become a sales pitch without meaning. GUI testing in its manual form is a time-consuming task that testers can find repetitive and tedious, and using human testers to perform a task where focus is hard to keep often ends in defects going unnoticed. The purpose of this thesis is to collect knowledge on efficiency in software testing, focusing on efficiency in GUI testing to keep the scope contained. Part of the purpose is also to test the hypothesis that automated GUI testing is more efficient than traditional, manual GUI testing. To this end, case study research was chosen as the main research method. Within the case study, a theoretical study was performed to gain knowledge of the subject. To gather data for the analysis, a semi-experimental approach was used in which one automated GUI testing technique, Capture & Replay, was tested against a more traditional, manual approach to GUI testing. The results obtained throughout the case study give a definition of efficiency in software testing, as well as three measurements of efficiency: defect detection, repeatability of test cases, and time spent on human interaction. The results also include the findings from the semi-experimental approach, in which the testing tools Squish and TestComplete were used alongside a manual testing approach. The main conclusion is that an automated approach to GUI testing can become more efficient than a manual approach in the long run, when efficiency is determined on the points of defect detection, repeatability, and time.
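To make the three measurements concrete, here is a minimal sketch, with entirely hypothetical numbers, of how they could be compared for a manual and an automated run:

```python
# Hedged sketch: the three efficiency measurements named above (defect
# detection, repeatability, human interaction time) compared for a manual
# and an automated run. All numbers are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class TestRun:
    name: str
    defects_found: int
    runs_possible_per_day: int      # proxy for repeatability
    human_minutes_per_run: float    # time spent on human interaction

manual = TestRun("manual", defects_found=9, runs_possible_per_day=2,
                 human_minutes_per_run=240)
automated = TestRun("Capture & Replay", defects_found=7,
                    runs_possible_per_day=20, human_minutes_per_run=10)

for run in (manual, automated):
    # Defects surfaced per hour of human effort: the automated approach
    # wins in the long run even if it finds fewer defects per run.
    per_hour = run.defects_found * 60 / run.human_minutes_per_run
    print(f"{run.name}: {per_hour:.2f} defects per human-hour, "
          f"{run.runs_possible_per_day} repeatable runs/day")
```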
APA, Harvard, Vancouver, ISO, and other styles
3

Ruffio, Jean-Baptiste, Bruce Macintosh, Jason J. Wang, Laurent Pueyo, Eric L. Nielsen, Robert J. De Rosa, Ian Czekala, et al. "Improving and Assessing Planet Sensitivity of the GPI Exoplanet Survey with a Forward Model Matched Filter." IOP PUBLISHING LTD, 2017. http://hdl.handle.net/10150/624437.

Full text
Abstract:
We present a new matched-filter algorithm for direct detection of point sources in the immediate vicinity of bright stars. The stellar point-spread function (PSF) is first subtracted using a Karhunen-Loève image processing (KLIP) algorithm with angular and spectral differential imaging (ADI and SDI). The KLIP-induced distortion of the astrophysical signal is included in the matched-filter template by computing a forward model of the PSF at every position in the image. To optimize the performance of the algorithm, we conduct extensive planet injection and recovery tests and tune the exoplanet spectral templates and the aggressiveness of the KLIP reduction to maximize the signal-to-noise ratio (S/N) of the recovered planets. We show that only two spectral templates are necessary to recover any young Jovian exoplanet with minimal S/N loss. We also developed a complete pipeline for the automated detection of point-source candidates, the calculation of receiver operating characteristics (ROC), contrast curves based on false positives, and completeness contours. We process in a uniform manner more than 330 data sets from the Gemini Planet Imager Exoplanet Survey and assess GPI's typical sensitivity as a function of the star and the hypothetical companion spectral type. This work allows for the first time a comparison of different detection algorithms at a survey scale, accounting for both planet completeness and false-positive rate. We show that the new forward-model matched filter allows the detection of objects 50% fainter than a conventional cross-correlation technique with a Gaussian PSF template, at the same false-positive rate.
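For orientation, the detection statistic at the heart of any matched filter takes the standard form below; in the paper's forward-model variant the template is the KLIP-distorted planet PSF at each position (this is the textbook expression, not a formula quoted from the paper):

$$\mathrm{S/N}(\boldsymbol{x}) \;=\; \frac{\boldsymbol{m}_{\boldsymbol{x}}^{\top}\,\boldsymbol{\Sigma}^{-1}\,\boldsymbol{d}}{\sqrt{\boldsymbol{m}_{\boldsymbol{x}}^{\top}\,\boldsymbol{\Sigma}^{-1}\,\boldsymbol{m}_{\boldsymbol{x}}}}$$

where $\boldsymbol{d}$ is the processed data, $\boldsymbol{m}_{\boldsymbol{x}}$ the forward-modeled planet signal at position $\boldsymbol{x}$, and $\boldsymbol{\Sigma}$ the noise covariance.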
APA, Harvard, Vancouver, ISO, and other styles
4

Rajkumar, Ved. "Predicting surprises to GDP : a comparison of econometric and machine learning techniques." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/109649.

Full text
Abstract:
Thesis: M. Fin., Massachusetts Institute of Technology, Sloan School of Management, Master of Finance Program, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 35).
This study takes its inspiration from the practice of nowcasting, which involves making short horizon forecasts of specific data items, typically GDP growth in the context of economics. We alter this approach by targeting surprises to GDP growth, where the expectation is defined as the consensus estimate of economists and a surprise is a deviation of the realized value from the expectation. We seek to determine if surprises are predictable at a better than random rate through the use of four statistical techniques: OLS, logit, random forest, and neural network. In addition to evaluating predictability we also seek to compare the four techniques, the former two of which are common in econometric literature and the latter two of which are machine learning algorithms most commonly seen in engineering settings. We find that the neural network technique predicts surprises at an encouraging rate, and while the results are not overwhelmingly positive they do suggest that the model may identify relationships in the data that elude the consensus.
by Ved Rajkumar.
M. Fin.
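A minimal sketch of the four-way model comparison described above, on synthetic placeholder data (the feature set and sample size are assumptions, not the thesis's dataset):

```python
# Hedged sketch: compare the four model families named in the abstract on a
# hypothetical dataset of macro features vs. GDP-surprise direction (up/down).
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                                # hypothetical macro indicators
y = (rng.normal(size=500) + 0.5 * X[:, 0] > 0).astype(int)   # surprise sign

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "OLS (thresholded fit)": LinearRegression(),
    "logit": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    if name.startswith("OLS"):
        pred = (pred > 0.5).astype(int)   # turn the linear fit into a class call
    print(f"{name}: hit rate = {(pred == y_te).mean():.2f}")
```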
APA, Harvard, Vancouver, ISO, and other styles
5

Crosara, Edoardo <1995>. "Gli atteggiamenti linguistici verso il dialetto nell'Alto Vicentino - un'indagine di matched-guise technique in Veneto." Master's Degree Thesis, Università Ca' Foscari Venezia, 2022. http://hdl.handle.net/10579/21770.

Full text
Abstract:
The use of dialectal linguistic systems is certainly one of the most important characteristics of the linguistic repertoire of speakers on the Italian peninsula. Most of the population pairs their mother tongue, which today is in most cases Italian, with a second, dialectal language. Moreover, in some more 'rural' areas the use of dialect remains particularly alive and widespread even in contexts officially reserved for Italian, demonstrating speakers' strong attachment to their local language. The aim of this study is to analyse young people's linguistic attitudes towards their own variety of Veneto dialect and whether the frequency of use of that dialect affects their direct perception of this language. The variety in question is the Alto Vicentino variety of the Agno Valley, in the province of Vicenza. To obtain results useful for answering this question, a sample of young people aged 18 to 30 took an MGT (matched-guise technique) test, in the form of an online questionnaire, in which they were asked to judge voice recordings (guises) in Italian and in dialect produced by the same speakers. These tests revealed a slight but significant preference for certain traits of the dialectal voices precisely among the young people who actively use the dialect.
APA, Harvard, Vancouver, ISO, and other styles
6

BERTOLINI, Cristiano. "Evaluation of GUI testing techniques for system crashing: from real to model-based controlled experiments." Universidade Federal de Pernambuco, 2010. https://repositorio.ufpe.br/handle/123456789/2076.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Mobile applications are becoming increasingly complex, and so is testing them. Graphical user interface (GUI) testing is a current trend and is generally performed by simulating user interactions. Several techniques have been proposed, for which efficiency (execution cost) and effectiveness (ability to find bugs) are the most crucial aspects demanded by industry. However, more systematic evaluations are needed to identify which techniques improve the efficiency and effectiveness of testing such applications. This thesis presents an experimental evaluation of two GUI testing techniques, called DH and BxT, which are used to test mobile-phone applications with a history of real faults. These techniques are executed for a long period of time (a 40 h timeout, for example), trying to identify the critical situations that lead the system to an unexpected situation in which it may not be able to continue normal execution. This situation is called a crash state. The DH technique already existed and is used by the software industry; we propose another, called BxT. In a preliminary evaluation, we compared the effectiveness and efficiency of DH and BxT through a descriptive analysis, and showed that the systematic exploration performed by BxT is a more interesting approach for detecting failures in mobile applications. Based on the preliminary results, we planned and executed a controlled experiment to obtain statistical evidence of their efficiency and effectiveness. Since both techniques are limited by a 40 h timeout, the controlled experiment yields partial results, and we therefore carried out a deeper investigation through survival analysis. Such analysis makes it possible to find the crash probability of an application under both DH and BxT. As controlled experiments are costly, we propose a strategy based on computational experiments using the PRISM language and its model checker to compare GUI testing techniques in general, and DH and BxT in particular. However, the results for DH and BxT have a limitation: the accuracy of the model is not statistically proven. We therefore propose a strategy that consists of using the previous survival-analysis results to calibrate our models. Finally, we used this strategy, with the calibrated models, to evaluate a new GUI testing technique called Hybrid-BxT (or simply H-BxT), a combination of DH and BxT.
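A minimal sketch of the survival-analysis step, assuming hypothetical crash times and using the lifelines library as a stand-in for whatever tooling the thesis actually used:

```python
# Hedged sketch: Kaplan-Meier estimate of an application's crash probability
# over test time, with runs that hit the 40 h timeout treated as censored.
from lifelines import KaplanMeierFitter

# Hypothetical data: hours until crash per run; False = timed out (censored).
durations = [3.5, 12.0, 40.0, 7.2, 40.0, 21.3, 40.0, 5.8]
crashed   = [True, True, False, True, False, True, False, True]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=crashed)

# Survival S(t) is the probability of NOT having crashed by time t;
# the crash probability by the 40 h timeout is therefore 1 - S(40).
print("P(crash by 40 h) =",
      1 - kmf.survival_function_at_times(40.0).iloc[0])
```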
APA, Harvard, Vancouver, ISO, and other styles
7

Jansson, Mattias, and Jimmy Johansson. "Interactive Visualization of Statistical Data using Multidimensional Scaling Techniques." Thesis, Linköping University, Department of Science and Technology, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1716.

Full text
Abstract:

This study has been carried out in cooperation with Unilever and partly with the EC-funded project Smartdoc IST-2000-28137.

In areas of statistics and image processing, both the amount of data and the dimensions are increasing rapidly and an interactive visualization tool that lets the user perform real-time analysis can save valuable time. Real-time cropping and drill-down considerably facilitate the analysis process and yield more accurate decisions.

In the Smartdoc project, there has been a request for a component for smart filtering in multidimensional data sets. As the Smartdoc project aims to develop smart, interactive components to be used on low-end systems, an implementation of the self-organizing map algorithm is used to propose which dimensions to visualize.

Together with Dr. Robert Treloar at Unilever, the SOM Visualizer - an application for interactive visualization and analysis of multidimensional data - has been developed. The analytical part of the application is based on Kohonen’s self-organizing map algorithm. In cooperation with the Smartdoc project, a component has been developed that is used for smart filtering in multidimensional data sets. Microsoft Visual Basic and components from the graphics library AVS OpenViz are used as development tools.
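A minimal sketch of the Kohonen self-organizing map update that underlies the application's analytical part (a generic SOM in Python, not the SOM Visualizer's Visual Basic code):

```python
# Hedged sketch: one training pass of a basic Kohonen self-organizing map.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 5))        # hypothetical 5-dimensional records
grid_h, grid_w = 8, 8
weights = rng.normal(size=(grid_h, grid_w, 5))

# Grid coordinates of every node, used by the neighbourhood function.
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

alpha, sigma = 0.5, 3.0                 # learning rate, neighbourhood radius
for t, x in enumerate(data):
    # Best-matching unit: node whose weight vector is closest to the sample.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # Gaussian neighbourhood around the BMU on the grid.
    grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-grid_dist2 / (2 * sigma**2))[..., None]
    # Move nodes toward the sample, weighted by neighbourhood and decay.
    decay = 1.0 - t / len(data)
    weights += alpha * decay * h * (x - weights)
```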

APA, Harvard, Vancouver, ISO, and other styles
8

Rodrigues, Lanny Anthony, and Srujan Kumar Polepally. "Creating Financial Database for Education and Research: Using WEB SCRAPING Technique." Thesis, Högskolan Dalarna, Mikrodataanalys, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:du-36010.

Full text
Abstract:
The objective of this thesis is to expand the university's microdata database of publicly available corporate information by means of a web scraping mechanism. The tool for this thesis is a web scraper that can access and extract information from websites, using a web application as an interface for client interaction. In our comprehensive work we have demonstrated that the GRI text files consist of approximately 7,227 companies; from this total the data was filtered down to "listed" companies. Among the 2,252 filtered companies, some have no income statement data; hence, we finally collected data on 2,112 companies across 36 different sectors and 13 different countries. The publicly available income statement information for 2016 to 2020 was collected by the GRI microdata department. Collecting such data from a proprietary database may cost more than $24,000 a year, whereas collecting the same data from public sources costs almost nothing, as we discuss further in the thesis.

In this work we are motivated to collect financial data from the annual financial statements or financial reports of businesses, which can be used to measure and investigate the trading costs and price changes of securities, mutual funds, futures, cryptocurrencies, and so forth. Stock exchanges, official statements and various business-related news are additional sources of financial data that people scrape. We are helping small investors and students who require financial statements from numerous companies over several years to assess the state of the economy and of individual firms when deciding whether or not to invest, which is not feasible in a conventional way; hence they can use the web scraping mechanism to extract financial statements from diverse websites and base investment decisions on further research and analysis.

In this thesis work, the outcome of the web scraping is stored in a database. The gathered data in the resulting database can be used for further research, education, and other purposes, with continued use of the web scraping technique.
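A minimal sketch of the scraping step, assuming a hypothetical page URL and table id (the thesis's actual GRI sources are not reproduced here):

```python
# Hedged sketch: fetch a page and pull rows from an income-statement table.
# The URL and the table's id are hypothetical placeholders.
import csv
import requests
from bs4 import BeautifulSoup

url = "https://example.com/company/income-statement"  # placeholder
html = requests.get(url, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

rows = []
table = soup.find("table", {"id": "income-statement"})  # assumed structure
if table is not None:
    for tr in table.find_all("tr"):
        cells = [td.get_text(strip=True) for td in tr.find_all(["th", "td"])]
        if cells:
            rows.append(cells)

# Persist the extracted rows so they can be loaded into the database later.
with open("income_statements.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```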
APA, Harvard, Vancouver, ISO, and other styles
9

Garcia, E. Victor, Thayne Currie, Olivier Guyon, Keivan G. Stassun, Nemanja Jovanovic, Julien Lozi, Tomoyuki Kudo, et al. "SCExAO AND GPI Y JH BAND PHOTOMETRY AND INTEGRAL FIELD SPECTROSCOPY OF THE YOUNG BROWN DWARF COMPANION TO HD 1160." IOP PUBLISHING LTD, 2017. http://hdl.handle.net/10150/623097.

Full text
Abstract:
We present high signal-to-noise ratio, precise YJH photometry and Y band (0.957-1.120 μm) spectroscopy of HD 1160 B, a young substellar companion discovered through the Gemini NICI Planet-Finding Campaign, using the Subaru Coronagraphic Extreme Adaptive Optics instrument and the Gemini Planet Imager. HD 1160 B has typical mid-M dwarf-like infrared colors and a spectral type of M5.5 (+1.0/-0.5), where the blue edge of our Y band spectrum rules out earlier spectral types. Atmospheric modeling suggests HD 1160 B has an effective temperature of 3000-3100 K, a surface gravity of log g ≈ 4-4.5, a radius of 1.55 ± 0.10 R_J, and a luminosity of log L/L☉ = −2.76 ± 0.05. Neither the primary's Hertzsprung-Russell diagram position nor atmospheric modeling of HD 1160 B shows evidence for a subsolar metallicity. Interpretation of the HD 1160 B spectroscopy depends on which stellar system components are used to estimate the age. Considering HD 1160 A, B and C jointly, we derive an age of 80-125 Myr, implying that HD 1160 B straddles the hydrogen-burning limit (70-90 M_J). If we consider HD 1160 A alone, younger ages (20-125 Myr) and a brown dwarf-like mass (35-90 M_J) are possible. Interferometric measurements of the primary, a precise Gaia parallax, and moderate-resolution spectroscopy can better constrain the system's age and how HD 1160 B fits within the context of (sub)stellar evolution.
APA, Harvard, Vancouver, ISO, and other styles
10

Bacha, Rebiha. "De la gestion de données techniques pour l'ingénierie de production : référentiel du domaine et cadre méthodologique pour l'ingénierie des systèmes d'information techniques en entreprise." Phd thesis, Ecole Centrale Paris, 2002. http://tel.archives-ouvertes.fr/tel-00011949.

Full text
Abstract:
This work addresses methodological problems in projects implementing Technical Information Systems (Systèmes d'Information Techniques, SIT) in companies. Its field of application is production engineering. The objective is to improve two aspects of these problems: the specifications produced by the project owner and the prime contractor, and the overall approach to managing such projects. To this end, we propose a domain reference model, built with standard specification diagrams. Oriented toward expressing technical data management (GDT) requirements, the reference model is reusable during the domain-analysis phase of future SIT developments. The process for building the reference model is rationalized; it is intended as a methodological guide for SIT engineering itself. It involves the main stakeholders in an iterative process. The approach is, moreover, contextual, to better grasp the singularities of individual projects, and flexible enough to respect the creative character of engineering work.
APA, Harvard, Vancouver, ISO, and other styles
11

Al, Awadi Wali. "An Assessment of Static and Dynamic malware analysis techniques for the android platform." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2015. https://ro.ecu.edu.au/theses/1635.

Full text
Abstract:
With smartphones becoming an increasingly important part of human life, the security of these devices is very much at stake. The versatility of these phones and their associated applications has fostered an increasing number of mobile malware attacks. The purpose of the research was to answer the following research questions: 1. What are the existing methods for analysing mobile malware? 2. How can methods for analysing mobile malware be evaluated? 3. What would comprise a suitable test bed for analysing mobile malware? The research analyses and compares the various tools and methods available for compromising the Android OS and observing the malware activity before and after its installation onto an Android emulator. Among several available tools and methods, the approach made use of online scanning engines to perform pre-installation analysis of mobile malware, and the AppUse (Android Pentest Platform Unified Standalone Environment) tool to perform post-installation analysis. Both approaches facilitate better analysis of mobile malware before and after it is installed onto the mobile device. This is because, with malware being the root cause of many security breaches, the developed mobile malware analysis allows future security practitioners in this field to determine whether newly developed applications are malicious and, if so, what their effect would be on the target. In addition, the AppUse tool can allow security practitioners to first establish the behaviour of post-installation malware infections on the Android emulator and then effectively eliminate malware from individual systems as well as the Google Play Store. Moreover, mobile malware analysis can help with a successful incident response, assisting with mitigating the loss of intellectual property, personal information and other critical private data. It can strive to limit the damage of a security breach or to reduce the scope of damage of an attack. The basic structure of the research work began with a dynamic analysis, followed by a static analysis: a) mobile malware samples were collected and downloaded from the Contagio website to compromise an Android emulator; b) mobile malware samples were uploaded to five online scanning engines for dynamic, pre-installation analysis; and c) the AppUse tool was implemented and used for static, post-installation analysis, making use of its Android emulator and the JD-GUI and Dex2Jar tools. The findings were that the AppUse methodology used in the research was successful but the outcome was not as anticipated. This was because the malicious applications installed on the Android emulator did not generate the expected behavioural reports, only manifest files in XML format. To overcome this issue, the JD-GUI and Dex2Jar tools were used to manually generate the analysis results from the Android emulator and analyse malware behaviour. The key contribution of this research work is the proposal of a dynamic pre-installation and a static post-installation analysis of ten distinct Android malware samples. To our knowledge, no prior research has been conducted on post-installation mobile malware analysis, and this is the first research that uses the AppUse tool for mobile malware analysis.
APA, Harvard, Vancouver, ISO, and other styles
12

Traore, Papa Silly. "Introduction des techniques numériques pour les capteurs magnétiques GMI (Giant Magneto-Impedance) à haute sensibilité : mise en œuvre et performances." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAT061/document.

Full text
Abstract:
The Giant Magneto-Impedance (GMI) effect is a large change in the impedance of some soft ferromagnetic materials, supplied with an alternating high-frequency excitation current, when they are subjected to an external magnetic field. This thesis presents the design and performance of an original digital architecture for high-sensitivity GMI sensors. The core of the design is a Digital Signal Processor (DSP) which controls two other key elements: a Direct Digital Synthesizer (DDS) and a Software Defined Radio (SDR), i.e. a digital receiver. The choice of these digital concepts is justified by the aim of reducing the conditioning-electronics noise that limits the equivalent magnetic noise level, which characterizes the smallest field measurable by the sensor. The developed conditioning system is associated with the off-diagonal magnetic configuration in order to increase the intrinsic sensitivity of the sensing element. This magnetic configuration consists of an additional pick-up coil wound around the ferromagnetic material. This association also makes it possible to obtain an odd-symmetry characteristic of the sensor response near the zero-field point, and consequently to implement and use the sensor without any magnetic bias field. This choice thus eliminates, or at least minimizes, the problems related to offset cancelling in GMI devices, while validating the advantage of this magnetic configuration, especially regarding the choice of operating point. Modeling of the noise performance of the entire measurement chain, including the digital conditioning, is performed, and a comparison between the expected and measured equivalent magnetic noise levels is carried out. The results yield general optimization laws for a digital GMI sensor. Using these laws, an optimized prototype GMI sensor was designed and implemented on an FPGA. An equivalent magnetic noise level of approximately 1 pT/√Hz in the white-noise region is obtained. Furthermore, this work validates the interest of digital techniques in the realization of high-sensitivity measuring devices.
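As an illustration of what such a DDS-plus-digital-receiver chain computes, here is a minimal software sketch of quadrature (I/Q) demodulation recovering an impedance envelope; all signals and parameters are hypothetical, and this is not the thesis's actual firmware:

```python
# Hedged sketch: digital I/Q demodulation of a GMI element's response, as a
# software stand-in for the DDS + digital-receiver chain described above.
import numpy as np

fs, f0, n = 1_000_000, 100_000, 10_000   # sample rate, excitation freq, samples
t = np.arange(n) / fs

# Hypothetical sensor signal: excitation carrier whose amplitude encodes the
# impedance variation produced by the external magnetic field (50 Hz here).
field_modulation = 1.0 + 0.01 * np.sin(2 * np.pi * 50 * t)
signal = field_modulation * np.cos(2 * np.pi * f0 * t)

# Multiply by quadrature references (the digital receiver's mixers) ...
i_mix = signal * np.cos(2 * np.pi * f0 * t)
q_mix = signal * -np.sin(2 * np.pi * f0 * t)

# ... then low-pass by block averaging to recover the baseband envelope.
block = 1000
i_bb = i_mix.reshape(-1, block).mean(axis=1)
q_bb = q_mix.reshape(-1, block).mean(axis=1)
envelope = 2 * np.hypot(i_bb, q_bb)       # tracks the impedance magnitude
print(envelope.min(), envelope.max())     # ~0.99 .. ~1.01 as the field varies
```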
APA, Harvard, Vancouver, ISO, and other styles
13

Calderan, Francesco. "Gli effetti del rinforzo, stretching e tecnica Graston sulle caratteristiche muscolari al fine di indagare l'incidenza degli infortuni agli ischio crurali nei giocatori di rugby. Trial clinico controllato." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19312/.

Full text
Abstract:
Objective: to verify the effectiveness of a programme of strengthening, stretching and application of the Graston technique on the muscular characteristics of athletes in a rugby team, in order to monitor the incidence of hamstring injuries. Materials and methods: 50 athletes in total, divided into three groups: two intervention groups and one control group. The control group (16 subjects) performed only a warm-up programme; the experimental groups (18 and 16 subjects) performed the warm-up and the strengthening and were treated with Graston instruments. Each athlete was given a questionnaire, and allocation to the groups was based on the answers. The athletes performed initial and final tests to assess hamstring strength and length. Injuries that had occurred in the past and those occurring during the trial were recorded. Results: although the subjects in the experimental groups showed greater changes in muscular characteristics, the differences were not statistically significant (p>0.05). Sixteen athletes had suffered hamstring injuries before the trial and, during the study period, one athlete suffered a recurrence in a particular game situation. Conclusions: from the test results we conclude that no appreciable difference was recorded between the two types of intervention; this is probably attributable to the small number of subjects in the recruited sample and to the short treatment period.
APA, Harvard, Vancouver, ISO, and other styles
14

Dotto, André Carnieletto. "Espectroscopia do solo no Vis-IR: potencial predictivo e desenvolvimento de uma interface gráfica de usuário em R." Universidade Federal de Santa Maria, 2017. http://repositorio.ufsm.br/handle/1/11343.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
This thesis presents a study of the visible and near-infrared (Vis-NIR) spectroscopy technique applied to predicting soil properties. The purpose was to develop quantitative soil information in response to the demands of digital soil mapping, environmental monitoring and agricultural production, and to increase spatial information on soil. Soil spectroscopy has emerged as an alternative that could revolutionize soil monitoring, allowing rapid, low-cost, non-destructive, environmentally friendly, reproducible and repeatable analysis. To improve the efficiency of soil prediction using spectral data, several spectral preprocessing techniques and multivariate models were explored. A graphical user interface (GUI) in R, named Alrad Spectra, was developed to perform preprocessing, multivariate modeling and prediction using spectral data. The objectives were: i) to predict soil properties from spectral data in order to improve soil information, ii) to compare the performance of spectral preprocessing and multivariate calibration methods in the prediction of soil organic carbon, iii) to obtain reliable soil organic carbon predictions, and iv) to develop a graphical user interface that performs spectral preprocessing and prediction of soil properties from spectroscopic data. A total of 595 soil samples were collected in the central region of Santa Catarina State, Brazil. Soil spectral reflectance was obtained using a FieldSpec 3 spectroradiometer over the range 350-2500 nm with 1 nm spectral resolution. The results demonstrate the strong performance of Vis-NIR spectroscopy in predicting soil properties. Soil properties that are directly related to chromophores, such as organic carbon, showed better prediction statistics than particle size. Spectral preprocessing applied to the soil spectra contributes to the development of high-quality prediction models. Comparing different spectral preprocessing techniques for soil organic carbon (SOC) prediction revealed that scatter-corrective preprocessing techniques gave better prediction results than spectral derivatives, and among the scatter-correction techniques, continuum removal is the most suitable preprocessing for SOC prediction. In the calibration modeling, all methods except random forest produced robust predictions, with the support vector machine method standing out. The systematic methodology applied in this study can improve the reliability of SOC estimation by examining how spectral preprocessing techniques and multivariate methods affect prediction performance. The development of an easy-to-use graphical user interface may benefit a large number of users, who can take advantage of this useful chemometric analysis. Alrad Spectra is the first GUI of its kind, and the expectation is that this tool can expand the application of the spectroscopy technique.
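A minimal sketch of the preprocessing-plus-calibration pattern described above, using standard normal variate as a simple stand-in for the scatter correction (the thesis found continuum removal best) and support vector regression for the calibration; the spectra below are synthetic noise, so the score is meaningless except as a demonstration of the pipeline:

```python
# Hedged sketch: scatter-corrective preprocessing (SNV) + SVR calibration.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
spectra = rng.normal(size=(595, 2151)) ** 2   # placeholder 350-2500 nm reflectance
soc = rng.uniform(0.5, 6.0, size=595)         # placeholder SOC values (%)

# SNV: centre and scale each spectrum individually to correct scatter effects.
snv = (spectra - spectra.mean(axis=1, keepdims=True)) / \
      spectra.std(axis=1, keepdims=True)

X_tr, X_te, y_tr, y_te = train_test_split(snv, soc, test_size=0.3,
                                          random_state=0)
model = SVR(C=10.0, epsilon=0.1).fit(X_tr, y_tr)
# On real spectra this R^2 reflects predictive power; on noise it is ~0.
print("R^2 on held-out samples:", model.score(X_te, y_te))
```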
APA, Harvard, Vancouver, ISO, and other styles
15

Vaquerizo, Sánchez Daniel. "Study on Advanced Spray-Guided Gasoline Direct Injection Systems." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/99568.

Full text
Abstract:
Fuel injection systems have been one of the main focal points of engine research, particularly for Diesel engines, where the internal geometry, needle lift and flow behavior are known to affect the external spray and in turn completely determine the combustion process inside the engine. Because of environmental regulation and the development potential of the less efficient Otto engine, a lot of research effort is currently focused on gasoline direct injection systems. GDi engines have the potential to greatly increase fuel economy and comply with pollutant and greenhouse gas emission limits, although many challenges remain. The current thesis studies in detail a modern type of GDi nozzle that was specifically developed for the international research group known as the Engine Combustion Network (ECN). Using state-of-the-art techniques, this hardware has been examined in a wide range of experimental facilities in order to characterize the internal flow and several geometrical and constructive aspects such as needle lift, and to assess how they relate to the effects seen in the external spray. For the internal flow characterization, the goal was to determine the nozzle geometry and needle displacement, to characterize the rate of injection and rate of momentum, and to evaluate the near-nozzle flow. Some of the methodologies applied here have never been applied to a GDi injector before, and many have only been applied rarely. For the internal geometry, needle lift and near-nozzle flow, several advanced x-ray techniques were used at Argonne National Laboratory. For the rate of injection and rate of momentum measurements, the techniques available in CMT-Motores Térmicos were adapted from Diesel spray research and brought to multi-hole GDi injectors. Given the novelty of the techniques used, the particular methodologies and setups are discussed in detail. Despite the high turbulence of the flow, the injector was seen to behave consistently injection to injection, even when studying variation in individual holes. This is attributed to the repetitive behavior of the needle that was observed in the experiments. It was also observed that the stabilized flow has a high-frequency variability that could not be explained by random movement of the needle, but rather by the particular design of the nozzle. The geometrical analysis performed on eight nominally equal nozzles allowed a unique view into the construction of the nozzle and provided insights into the variability of key dimensions. The rate of injection measurements made it possible to study the hydraulic response of the injector to the main variables, such as rail pressure, discharge pressure, fuel temperature and command signal duration. These measurements were combined with the rate of momentum measurements to study the low value of the discharge coefficient, which ultimately was attributed to the low needle lift and the low L/D ratio of the orifices. On the other hand, the study of the external spray led to the identification of an important phenomenon specific to this particular hardware: spray collapse. The extensive experimental campaigns featuring shadowgraph (Schlieren) and Diffused Back Illumination (DBI) visualization techniques allowed the macroscopic characteristics of the spray, and the conditions under which collapse occurs, to be identified and described. Spray collapse arises from a combination of the internal flow, which creates plume interaction, and ambient conditions that promote air entrainment and evaporation. At moderate density and temperature levels the collapse develops, completely modifying the expected trends in the behavior of the plumes.
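For reference, the discharge coefficient discussed above is conventionally defined as the ratio of the measured mass flow rate to the theoretical (Bernoulli) flow through the orifice outlet area; this is the standard textbook definition, not a formula quoted from the thesis:

$$C_d = \frac{\dot{m}}{A_o \sqrt{2\,\rho_f\,\Delta p}}$$

where $\dot{m}$ is the measured rate of injection, $A_o$ the orifice outlet area, $\rho_f$ the fuel density and $\Delta p$ the pressure drop across the injector. A low needle lift throttles the flow upstream of the orifice, which shows up as a reduced $C_d$.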
Vaquerizo Sánchez, D. (2018). Study on Advanced Spray-Guided Gasoline Direct Injection Systems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/99568
APA, Harvard, Vancouver, ISO, and other styles
16

Sarmah, Dipsikha. "Evaluation of Spatial Interpolation Techniques Built in the Geostatistical Analyst Using Indoor Radon Data for Ohio, USA." University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1350048688.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Oliver, Desmond Mark. "Cultural appropriation in Messiaen's rhythmic language." Thesis, University of Oxford, 2016. http://ora.ox.ac.uk/objects/uuid:54799b39-3185-4db8-9111-77a8b284b2e7.

Full text
Abstract:
Bruhn (2008) and Griffiths (1978) have referred in passing to Messiaen's use of non-Western content as an appropriation, but a consideration of its potential moral and aesthetic failings within the scope of modern literature on artistic cultural appropriation is an underexplored topic. Messiaen's first encounter with India came during his student years, by way of a Sanskrit version of Saṅgītaratnākara (c. 1240 CE) written by the thirteenth-century Hindu musicologist Śārṅgadeva. I examine Messiaen's use of Indian deśītālas within a cultural appropriation context. Non-Western music provided a safe space for him to explore the familiar, and served as validation for previously held creative interests, prompting the expansion and development of rhythmic techniques from the unfamiliar. Chapter 1 examines the different forms of artistic cultural appropriation, drawing on the ideas of James O. Young and Conrad G. Brunk (2012) and Bruce H. Ziff and Pratima V. Rao (1997). I consider the impact of power dynamic inequality between 'insider' and 'outsider' cultures. I evaluate the relation between aesthetic errors and authenticity. Chapter 2 considers the internal and external factors and that prompted Messiaen to draw on non-Western rhythm. I examine Messiaen's appropriation of Indian rhythm in relation to Bloomian poetic misreading, and whether his appropriation of Indian rhythm reveals an authentic intention. Chapter 3 analyses Messiaen's interpretation of Śārṅgadeva's 120 deśītālas and its underlying Hindu symbolism. Chapter 4 contextualises Messiaen's Japanese poem Sept haïkaï (1962) in relation to other European Orientalist artworks of the late-nineteenth and early-twentieth centuries, and also in relation to Michael Sullivan's (1987: 209) three-tiered definitions of japonism.
APA, Harvard, Vancouver, ISO, and other styles
18

CHOUDHARY, DIVYA. "NEW REVERSE CARRY PROPAGATE ADDER USING MODIFIED GDI TECHNIQUE." Thesis, 2019. http://dspace.dtu.ac.in:8080/jspui/handle/repository/16711.

Full text
Abstract:
Addition is the most important function in arithmetic and logical operations. Approximate computing can be used to relax the transistor-count, delay and power constraints in VLSI design, which makes approximate adders usable in error-tolerant applications. Existing approximate reverse carry propagate adder designs [1] have proved advantageous in improving these constraints. A new design of reverse carry propagate adder is proposed here using the modified gate diffusion input (GDI) technique [7]. A 4-bit multiplier has also been designed using this RCPFA, and the results were verified with the Xilinx tool. Simulations of the proposed circuit design were carried out in a 45 nm process technology using Cadence Virtuoso. The results indicate 57% and 44% reductions in power and delay, respectively.
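Since the abstract does not spell out the RCPFA logic, the following is only a generic harness, using a common textbook OR-based approximation of the low bits, for the kind of error metrics (error rate, mean error distance) by which approximate adders are usually judged:

```python
# Hedged sketch: measure the error of an approximate adder against exact
# addition. The OR-based low-bit approximation is a textbook stand-in,
# not the RCPFA logic of the thesis.
from itertools import product

WIDTH, APPROX_BITS = 8, 3

def approx_add(a: int, b: int) -> int:
    mask = (1 << APPROX_BITS) - 1
    low = (a | b) & mask   # approximate low bits: bitwise OR, no carry chain
    # Exact addition of the upper parts; the carry out of the low part is
    # deliberately dropped by the approximation.
    high = ((a >> APPROX_BITS) + (b >> APPROX_BITS)) << APPROX_BITS
    return high | low

errors = [abs(approx_add(a, b) - (a + b))
          for a, b in product(range(1 << WIDTH), repeat=2)]
n = len(errors)
print("error rate:", sum(e > 0 for e in errors) / n)
print("mean error distance:", sum(errors) / n)
```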
APA, Harvard, Vancouver, ISO, and other styles
19

Wu, Chuan-Wei, and 武傳威. "Nanocrystalline GDC electrolyte fabricated for SOFCs by hydrothermal-coprecipitation technique." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/43730088718088371868.

Full text
Abstract:
Master's thesis
南台科技大學 (Southern Taiwan University of Technology)
Department of Chemical Engineering (化學工程系)
ROC year 92 (2003)
In this research, gadolinia-doped ceria (GDC) powders were prepared using a hydrothermal-coprecipitation technique, with ammonium cerium nitrate, gadolinium oxide and urea as the raw materials. The hydrothermal-coprecipitation technique combines the hydrothermal and coprecipitation methods: well-crystallized GDC powders were first made by the hydrothermal method and used as the precursor, and the final GDC powders were then obtained by reacting the precursor with the other raw materials in the coprecipitation step. The final powders, sintered at different temperatures, were used as electrolytes for SOFCs. Precursors were prepared at 100 °C to 180 °C for 5 hours, and their characteristics were investigated by X-ray diffraction, energy-dispersive spectroscopy and crystallite size analysis. The results show that crystallinity increased and the composition became more random when the hydrothermal treatment temperature exceeded 120 °C. Final powders made from the different precursors reached relative densities above 96% after sintering at 1400 °C for 2 hours. The highest relative density (more than 99%) was found in the final powder from the 120 °C precursor, about 4-6% higher than that of powders formed without a precursor. The conductivity of the GDC electrolytes was studied as a function of sintering temperature and operating temperature. For the electrolyte formed from the 120 °C precursor, raising the sintering temperature from 1400 °C to 1500 °C increased the conductivity from 2.3×10⁻² S cm⁻¹ to 9.7×10⁻² S cm⁻¹ at a test temperature of 800 °C. Furthermore, the GDC electrolyte still exhibits good conductivity (about 4.1×10⁻² S cm⁻¹) even when the operating temperature is lowered from 800 °C to 700 °C.
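For context, the reported conductivities at 700 °C and 800 °C are conventionally compared through the Arrhenius relation for ionic conduction (a standard expression, not taken from the thesis):

$$\sigma T = \sigma_0 \exp\!\left(-\frac{E_a}{k_B T}\right)$$

where $\sigma$ is the ionic conductivity, $\sigma_0$ a pre-exponential factor, $E_a$ the activation energy and $k_B$ Boltzmann's constant; fitting the measured conductivities at the two operating temperatures to this form is the usual way doped-ceria electrolytes are compared.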
APA, Harvard, Vancouver, ISO, and other styles
20

"A Power System Reliability Evaluation Technique and Education Tool for Wind Energy Integration." Master's thesis, 2012. http://hdl.handle.net/2286/R.I.14543.

Full text
Abstract:
This thesis is focused on the study of wind energy integration and is divided into two segments. The first part of the thesis deals with developing a reliability evaluation technique for a wind-integrated power system. A multiple-partial-outage model is utilized to accurately calculate wind generation availability. A methodology is presented to estimate the outage probability of wind generators while incorporating their reduced power output levels at low wind speeds. Subsequently, power system reliability is assessed by calculating the loss of load probability (LOLP), and the effect of wind integration on the overall system is analyzed. Actual generation and load data of the Texas power system in 2008 are used to construct a test case. To demonstrate the robustness of the method, reliability studies have been conducted for a fairly constant as well as for a largely varying wind generation profile. Further, the case of an increased wind generation penetration level has been simulated, and comments are made about the usability of the proposed method to aid power system planning in scenarios of future expansion of wind energy infrastructure. The second part of this thesis explains the development of a graphical user interface (GUI) to demonstrate the operation of a grid-connected doubly fed induction generator (DFIG). The theory of the DFIG and its back-to-back power converter is described. The GUI illustrates the power flow, the behavior of the electrical circuit and the maximum power point tracking of the machine for a variable wind speed input provided by the user. The tool, although developed on the MATLAB software platform, has been constructed to work as a standalone application on Windows-based computers and enables even non-engineering students to use it. Results of both segments of the thesis are discussed. Remarks are presented about the validity of the reliability technique and the GUI for variable wind speed conditions. Improvements are suggested to enable the use of the reliability technique for a more elaborate system. Recommendations are made about expanding the features of the GUI tool and using it to promote educational interest in renewable power engineering.
M.S. Electrical Engineering 2012
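A minimal sketch of a conventional LOLP calculation via a capacity-outage probability table; the unit list and load profile are hypothetical placeholders, and the thesis's multiple-partial-outage wind model would replace the simple two-state units used here:

```python
# Hedged sketch: loss-of-load probability (LOLP) from a capacity-outage
# probability table built by convolving independent two-state units.
import numpy as np

units = [(100, 0.05), (100, 0.05), (150, 0.08), (200, 0.10)]  # (MW, forced outage rate)
total = sum(cap for cap, _ in units)

# Probability distribution over available capacity, in 1 MW steps.
dist = np.zeros(total + 1)
dist[0] = 1.0
avail = 0
for cap, q in units:
    new = np.zeros(total + 1)
    new[: avail + 1] += dist[: avail + 1] * q                   # unit out
    new[cap : avail + cap + 1] += dist[: avail + 1] * (1 - q)   # unit in
    dist, avail = new, avail + cap

hourly_load = np.random.default_rng(0).uniform(200, 450, size=8760)  # MW
caps = np.arange(total + 1)
# LOLP: expected fraction of hours in which available capacity < load.
lolp = np.mean([dist[caps < load].sum() for load in hourly_load])
print(f"LOLP = {lolp:.4f}")
```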
APA, Harvard, Vancouver, ISO, and other styles
21

Garcia, Christopher. "Study on the Procedural Generation of Visualization from Musical Input using Generative Art Techniques." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-05-9113.

Full text
Abstract:
The purpose of this study was to create a new method for visualizing music. Although many music visualizations already exist, this research focused on creating high-quality, high-complexity animations that cannot be matched by real-time systems. There should be an obvious similarity between the input music and the final animation, based on the music information which the user decides to extract and visualize. This project includes a pipeline for music data extraction and the creation of an editable visualization file. Within the pipeline, a music file is read into a custom analysis tool and time-based data is extracted. This data is output and then read into Autodesk Maya. The user may then manipulate the visualization as they see fit using the tools within Maya and render out a final animation. The default result of this process is a Maya scene file which makes use of the available dynamics systems to warp and contort a jelly-like cube. A variety of other visualizations may be obtained by mapping the data to different object attributes within the Maya interface. When the animation was rendered out and overlaid onto the music, there was a recognizable correlation between elements in the music and the animation in the video. This study shows that an accurate musical visualization may be achieved using this pipeline. Also, any number of different music visualizations may be obtained with relative ease when compared to the manual analysis of a music file or the manual animation of Maya objects to match elements in the music.
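A minimal sketch of the extraction stage, using librosa as an assumed stand-in for the custom analysis tool and writing per-frame keyframe data that could be read into Maya; file names and the frame rate are placeholders:

```python
# Hedged sketch: pull a time-based loudness envelope from a music file and
# write per-frame keyframe data for an animation tool such as Maya.
import csv
import librosa

y, sr = librosa.load("input_song.wav", sr=22050, mono=True)  # placeholder file
rms = librosa.feature.rms(y=y)[0]                 # frame-wise loudness
times = librosa.times_like(rms, sr=sr)            # frame timestamps (seconds)

fps = 24  # animation frame rate assumed for the keyframe export
with open("keyframes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "amplitude"])
    for t, a in zip(times, rms):
        writer.writerow([round(t * fps), float(a)])
```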
APA, Harvard, Vancouver, ISO, and other styles
22

Bendersky, Marina. "Particle-collector interactions in nanoscale heterogeneous systems." 2013. https://scholarworks.umass.edu/dissertations/AAI3556234.

Full text
Abstract:
Particle-surface interactions govern a myriad of interface phenomena that span from technological applications to naturally occurring biological processes. In the present work, particle-collector DLVO interactions are computed with the grid-surface integration (GSI) technique, previously applied to the computation of particle colloidal interactions with anionic surfaces patterned with O(10 nm) cationic patches. The applicability of the GSI technique is extended to account for interactions with collectors covered with topographical and chemical nanoscale heterogeneity. Surface roughness is shown to play a significant role in the decrease of the energy barriers, in accordance with experimental deposition rates that are higher than those predicted by the DLVO theory for smooth surfaces. An energy- and force-averaging technique is presented as a reformulation of the GSI technique, to compute the mean particle interactions with random heterogeneous collectors. A statistical model based on the averaging technique is also developed, to predict the variance of the interactions and the particle adhesion thresholds. Excellent agreement is shown between the models' predictions and results obtained from GSI calculations for a large number of random heterogeneous collectors. Brownian motion effects for particle-collector systems governed by nanoscale heterogeneity are analyzed by introducing stochastic Brownian displacements into the particle trajectory equations. It is shown that for the systems under consideration and the particle sizes usually used in experiments, it is reasonable to neglect the effects of Brownian motion entirely. Computation of appropriately defined Péclet numbers, which quantify the relative importance of shear, colloidal and Brownian forces, validates that conclusion. An algorithm for the discretization of spherical surfaces into small equal-area elements is implemented in conjunction with the GSI technique and mobility matrix calculations of particle velocities, to obtain interactions and dynamic behaviors of patchy particles in the vicinity of uniform flat collectors. The patchy particle and patchy collector systems are compared in detail, through the computation of statistical measures that include adhesion probabilities and maximum residence times per patch. The lessened tendency of the patchy particle to adhere to the uniform collector is attributed to a larger maximum residence time per patch, which precludes interactions with multiple surface nano-features at a given simulated time. Also briefly described are directions for future work, which involve the modeling of two heterogeneous surfaces and of surfaces covered with many types of heterogeneity, such as patches, pillars and spring-like structures that resemble polymer brushes or cellular receptors.
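As background for the Péclet-number argument: one standard way to weigh shear against Brownian forcing for a sphere of radius a in a shear flow is the ratio below. This is a textbook form given only for orientation; the thesis defines its own Péclet numbers, including colloidal contributions.

```latex
% Shear Peclet number: hydrodynamic (shear) forcing vs. Brownian (thermal) forcing
% \mu: fluid viscosity, a: particle radius, \dot{\gamma}: shear rate, k_B T: thermal energy
\mathrm{Pe}_{\mathrm{shear}} = \frac{6 \pi \mu a^{3} \dot{\gamma}}{k_{B} T}
% Pe >> 1: Brownian displacements are negligible relative to convection
```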
APA, Harvard, Vancouver, ISO, and other styles
23

MA, JING-YUAN, and 馬敬媛. "Applying Data Mining Techniques to Explore the Relationship Between Taiwan’s Power Consumption and GDP Growth- A Comparison Before and After Asian Financial Crisis." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/c4z9p4.

Full text
Abstract:
Master's thesis
Kun Shan University (崑山科技大學)
Institute of Information Management (資訊管理研究所)
105 (ROC academic year, i.e., 2016-2017)
Changes in the economic climate have a close connection with, and greatly affect, the demand for electricity. With the improvement of the economy and the increase in national income year by year, electricity consumption has grown alongside them, so the growth of electricity consumption is closely related to the development of industry and commerce. This study examines the efficiency and relevance of electricity consumption and Gross Domestic Product (GDP) in Taiwan before and after the Asian financial crisis using data mining methods. Firstly, we performed a clustering analysis of industrial and commercial electricity consumption and GDP over the 30-year period from ROC years 75 to 104 (1986-2015) using the k-means method. The results show that the k-means method can accurately separate the periods before and after the Asian financial crisis. It is also found that Taiwan began to focus on commercial development after the Asian financial crisis in 1998, and the proportion of GDP from commerce continued to grow; electricity utilization efficiency also improved after the crisis. Finally, we performed a regression analysis of industrial and commercial electricity consumption against real GDP. The results show that, possibly because of triangular trade, industrial electricity consumption accounts for a small proportion of real GDP and its correlation with real GDP is uncertain, whereas commercial electricity consumption accounts for a large proportion of real GDP and the correlation between the two is strong.
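A minimal sketch of the clustering step, assuming yearly feature vectors of industrial consumption, commercial consumption and real GDP; the data below are random placeholders, not the official Taiwanese statistics used in the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical yearly records for ROC years 75-104 (1986-2015):
# [industrial_GWh, commercial_GWh, real_GDP]; random placeholders here.
rng = np.random.default_rng(0)
X = rng.uniform([50_000, 10_000, 3_000], [130_000, 60_000, 17_000], size=(30, 3))

# Standardize so no single unit dominates the distance metric, then cluster
X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_std)

# If the clustering tracks the 1997-98 crisis, the labels should split the
# 30 years into a roughly contiguous pre-crisis and post-crisis group.
print(labels)
```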
APA, Harvard, Vancouver, ISO, and other styles
24

Galantucci, Rosella Alessia. "Innovative methods and techniques for diagnostics and monitoring of architectural heritage, through digital image processing and machine learning approaches." Doctoral thesis, 2022. http://hdl.handle.net/11589/238121.

Full text
Abstract:
In Civil or Building Engineering, the assessment of the state of conservation of a building or an infrastructure is fundamental for monitoring and conservation purposes. This is of paramount importance also in the context of Cultural Heritage, where artefacts are denoted by historic-artistic interest and are often in widespread damaged conditions. Current practice relies on costly and time-consuming technologies and equipment, often requiring invasive actions or difficult applications. In Civil Engineering, several research works address these criticalities through the implementation of digital technologies such as image processing or artificial intelligence. However, most proposals are applied to 2D data, with substantial losses of 3D information, and are often single-defect oriented. In the Cultural Heritage domain, on the contrary, their diffusion is still scarce, with reality-based 3D data used mainly as a reference for geometrical survey. In addition, there is a substantial lack of unified protocols for the acquisition and processing of data aimed at the quantitative inspection of the built heritage. In light of this, the aim of this research is to define, analogously to other fields of Engineering, innovative diagnostic approaches that support the experts in the phase of knowledge, before planning an intervention. The evaluation of the general conditions of a building is proposed through the analysis of reality-based 3D data acquired by means of photogrammetry. Specific mapping and computerized evaluation routines have been created for the detection of various decay morphologies, as defined by sectorial standards, through qualitative and quantitative analysis systems. Digital image processing and machine learning techniques have been adopted and implemented on high-resolution three-dimensional models (dense point clouds, texturized polygonal meshes) to define distinct pipelines tailored to the kind of damage to investigate (geometry-based or colour-based alterations). A scalar strategy was outlined, articulated in different paths and levels of detail. Furthermore, the pipelines have been tested on a plurality of case studies of significant historical-artistic interest, belonging to the territorial, regional or international cultural heritage. Their heterogeneity, in terms of epoch, building type, surface material, architectural components and decay phenomena, allowed the experimental application of the proposed workflows, demonstrating their suitability and adaptability to the building diagnostics domain. Indeed, the exterior walls, portal and pillars of the Palmieri residential palace (XVIII century), portions of a tower in the medieval fortress of Bashtovë (XV century), and the interiors of the Romanesque cathedral of San Corrado (XII/XIII century) were considered because they are all characterized by decay phenomena expressed through changes in geometry and shape, such as stone surface decay (lacks, erosion, alveolization) or static instabilities (crack patterns). Specific areas of the archaeological site of Egnatia (I century B.C.) and internal environments of former convents, such as San Leonardo and Cappuccini (XVI century), were instead selected in light of the chromatic alterations consistently affecting them, mainly ascribable to humidity problems. Those conditions have been monitored over intervals of years, in order to track their evolution over time.
The present research lays out an advancement in the analysis and control of the state of conservation of buildings, especially in the cultural heritage domain, providing an articulated methodological workflow to obtain and collect investigation data for the pre-diagnosis phase. The principal contributions concern the possibility of achieving both qualitative and quantitative insights into different decay morphologies, starting from reality-based 3D data, with remote, non-invasive, semi-automatic procedures that support diagnostic activities. Furthermore, flexibility and scalability are paramount conditions for addressing the peculiarity of the diagnostic process, in the perspective of reduced time and cost requirements, a simplified investigation plan, and a minimized dependence of the results on the technician's expertise.
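To make the colour-based branch of such a pipeline concrete, here is a deliberately simplified sketch that flags dark (potentially damp) regions of a photogrammetric point cloud by thresholding brightness. The file names, the threshold value and the use of Open3D are illustrative assumptions; the thesis itself relies on trained machine-learning models rather than a fixed threshold.

```python
import numpy as np
import open3d as o3d  # assumed available; any point-cloud library would do

# Hypothetical photogrammetric point cloud of a wall surface
pcd = o3d.io.read_point_cloud("wall.ply")
colors = np.asarray(pcd.colors)  # per-point RGB in [0, 1]

# Crude colour-based flag for damp/dark areas: low overall brightness.
# The thesis uses trained classifiers; this fixed threshold is a stand-in.
brightness = colors.mean(axis=1)
suspect = brightness < 0.35

# Repaint suspect points red to produce a decay map for visual inspection
colors[suspect] = [1.0, 0.0, 0.0]
pcd.colors = o3d.utility.Vector3dVector(colors)
o3d.io.write_point_cloud("wall_decay_map.ply", pcd)
```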
APA, Harvard, Vancouver, ISO, and other styles
25

Triggiani, Maurizio. "Integration of machine learning techniques in chemometrics practices." Doctoral thesis, 2022. http://hdl.handle.net/11589/237998.

Full text
Abstract:
Food safety is a key objective in all the development plans of the European Union. To ensure the quality and sustainability of agricultural production (both intensive and extensive), a well-designed analysis strategy is needed. Climate change, precision agriculture, the green revolution and Industry 4.0 are areas of study that need innovative practices and approaches, which are not possible without precise and constant process monitoring. Product quality assessment along the whole supply chain is paramount, and cost reduction is another constant need. Non-targeted Nuclear Magnetic Resonance (NMR) analysis is still a second-choice approach for food analysis and monitoring; one of the problems with this approach is the large amount of information it returns. This kind of data needs a new and improved method of handling and analysis, and classical chemometrics practices are not well suited to this new field of study. In this thesis, we approach the problem of food fingerprinting and discrimination by means of non-targeted NMR spectroscopy combined with modern machine learning algorithms, alongside databases designed for correct and easy access to the data. Beyond its clear benefits, the introduction of machine learning techniques adds a new layer of complexity regarding the need for trusted data sources for algorithm training and integrity; if this kind of approach proves its worth in the global market, we will need not only to create good datasets but also to be prepared to defend against more sophisticated threats, such as adversarial machine learning attacks. By comparing the machine learning results with the classical chemometric approach, we highlight the strengths and weaknesses of both approaches and use them to prepare the framework needed to tackle the challenges of future agricultural production.
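As a toy illustration of pairing binned NMR fingerprints with a machine-learning classifier: the spectra below are random placeholders, and the random forest stands in for whichever models the thesis actually benchmarks against classical chemometrics.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical dataset: each row is a binned 1H-NMR spectrum (integrated
# intensity per chemical-shift bucket), each label a provenance class.
rng = np.random.default_rng(1)
X = rng.random((120, 250))           # 120 samples x 250 spectral bins
y = rng.integers(0, 3, size=120)     # 3 provenance classes

# A random forest as the machine-learning counterpart to classical
# chemometric models such as PLS-DA; cross-validation guards against
# the optimism of a single train/test split.
clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```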
APA, Harvard, Vancouver, ISO, and other styles
26

Bauer, Regine. "Untersuchung des transkriptionellen Mechanismus der Igf2- Überexpression in Patched-assoziierten Tumoren." Doctoral thesis, 2006. http://hdl.handle.net/11858/00-1735-0000-0006-AC37-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles