
Dissertations / Theses on the topic 'Data with gaps'


Consult the top 50 dissertations / theses for your research on the topic 'Data with gaps.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Pinell, Graciela Tejada. "Spatial assessment of data gaps for estimating biomass across the Brazilian Amazon." Instituto Nacional de Pesquisas Espaciais (INPE), 2017. http://urlib.net/sid.inpe.br/mtc-m21b/2017/06.16.22.29.

Full text
Abstract:
The Amazon forest provides fundamental ecosystem services such as biodiversity conservation, water cycling and carbon sequestration. Given the large extent of Brazilian forests, 75% of the Amazon Basin, there is great uncertainty in the aboveground biomass (AGB) carbon stocks stored in the region. Existing AGB estimates differ significantly from one another, and there is an urgent need to improve them to support the National Communications (NC) of Brazil to the United Nations Framework Convention on Climate Change and Reducing Emissions from Deforestation and Degradation (REDD+). Whether for NC, REDD+ or carbon emissions modeling, stakeholders, policy makers and scientists have to decide which AGB product, dataset or combination of data to use, according to its availability, scale and coverage. The purpose of this study was to assess forest AGB spatial data gaps across the Brazilian Amazon. To achieve this goal, we conducted an extensive review and analysis of AGB dataset coverage. Connections among AGB stakeholders were mapped through a social network analysis. We also analyzed the variability of AGB maps within different environmental factor maps (soil, vegetation, topography and climate). Using difference and statistical analyses of AGB maps, and through a spatial multicriteria evaluation, we obtained a forest AGB spatial data gaps map for the Brazilian Amazon. The spatial coverage of AGB field and airborne LiDAR data shows large areas without AGB data and, even though stakeholders are connected, few datasets are available. By quantifying the variability of AGB maps and field data within multiple environmental factors, we provide valuable elements for understanding current AGB data as a function of climate, soils, vegetation and geomorphology. The main differences between AGB maps are found next to rivers (mainly the Amazon River), in Amapá, in northeastern Pará and in the central and northern Amazon states; these areas coincide with areas of higher AGB. The forest AGB spatial data gaps map, which indicates places with no field or LiDAR data and where AGB maps differ the most, shows the priority areas for further AGB assessments in the Brazilian Amazon. This study can be a useful tool on which policy makers and other stakeholders working on AGB can base their decisions when choosing AGB data or products for National Communications, REDD+, or carbon emissions modeling.
2

McHugh, Alyson Elizabeth. "Missing baseline information for British Columbia's forests : can timber cruise data fill some gaps?" Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/778.

Full text
Abstract:
Assessing trends in forest ecosystems requires a thorough understanding of a benchmark or condition against which changes can be measured. Timber cruise information is a valuable source of baseline data and has the potential to be used in monitoring the effectiveness of management actions taken to maintain biodiversity and other societal values during and after harvesting. The objective of this study was to assess the efficacy of using these data as baseline information in FREP (Forest and Range Evaluation Program) Stand Level Biodiversity (SLB) assessments. Using three different data sources (timber cruise data, FREP pre-harvest data, and FREP post-harvest data), I conducted a pre- and post-harvest survey and evaluated trends in indicators within and across seven cutblocks. Mean densities for live and standing dead trees by diameter class, total live and dead trees, functional snags, large trees, tree species composition, coarse woody debris, and a number of qualitative indicators were analyzed. Results indicate that similarities exist between several characteristics within the timber cruise and the pre- and post-harvest FREP data. For example, there was substantial overlap between stand structural characteristics assessed by the three methods. However, some discrepancies were identified. Large trees (live, dead, and live and dead combined) appeared in very small numbers in the timber cruise, and the data were not consistent with pre-harvest FREP data. The number of tree species identified in the FREP data was generally lower than in the timber cruise data, with the species absent from the FREP data generally recorded as rare in the timber cruise. Some important stand structural attributes are not collected under the current timber cruise protocol. This research has identified some possible limitations of using timber cruise statistics as baseline information for FREP SLB monitoring: forests are dynamic, rare forest elements may be misrepresented in all three samples, and some potentially valuable data are currently missing from timber cruise statistics. However, the opportunities that timber cruise data provide as a provincial baseline dataset are immense, and further exploration and study could identify ways to improve the compatibility, efficiency, and utility of these data in FREP Stand Level Biodiversity monitoring.
3

Xiang, Yun. "Ethnic differences in achievement growth: Longitudinal data analysis of math achievement in a hierarchical linear modeling framework." Thesis, Boston College, 2009. http://hdl.handle.net/2345/676.

Full text
Abstract:
Thesis advisor: Henry Braun
Given the call for greater understanding of racial inequality in student achievement in K-12 education, this study contributes a comprehensive, quantitative, longitudinal examination of the achievement gap phenomenon, with particular attention to the organizational characteristics of schools and school districts. Employing data from a large number of districts in a single state, it examines trends in achievement and in achievement growth after the passage of NCLB, focusing on mathematics performance from grade 6 to grade 8. Both a traditional descriptive approach and one employing Hierarchical Linear Models (HLM) were applied and compared. The purpose was not to determine which methodology is superior but to provide complementary perspectives. The comparison between the two approaches revealed similar trends in achievement gaps, but the HLM approach offered a more nuanced description; nonetheless, the results suggest that it is useful to employ both. As to the main question regarding ethnicity, it appears that even when student ethnicity is confounded with other indicators, such as initial score and socio-economic status, it remains an important predictor of both achievement gaps and achievement growth gaps. Moreover, demographic profiles at the school and district levels were also associated with these gaps.
Thesis (PhD) — Boston College, 2009
Submitted to: Boston College. Lynch School of Education
Discipline: Educational Research, Measurement, and Evaluation
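For readers unfamiliar with HLM, a minimal sketch of the kind of two-level growth model the abstract describes (the predictors and notation here are illustrative assumptions, not taken from the thesis):

```latex
% Level 1 (within student): math score of student i at grade t (grades 6-8)
Y_{ti} = \pi_{0i} + \pi_{1i}\,(\mathrm{grade}_{ti} - 6) + e_{ti}
% Level 2 (between students): initial status and growth rate
\pi_{0i} = \beta_{00} + \beta_{01}\,\mathrm{SES}_i + \beta_{02}\,\mathrm{ethnicity}_i + r_{0i}
\pi_{1i} = \beta_{10} + \beta_{11}\,\mathrm{SES}_i + \beta_{12}\,\mathrm{ethnicity}_i + r_{1i}
```

In such a specification, \(\beta_{02}\) would capture the ethnicity gap in grade-6 status and \(\beta_{12}\) the gap in growth rate, conditional on the other predictors.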
4

Gyau-Boakye, Philip. "Filling gaps in hydrological runoff data series in West-Africa = Ergänzung lückenhafter Abflussreihen in West-Afrika /." Bochum : Ruhr-Univ, 1993. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=006430220&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
5

Guttieres, Donovan G. "Closing gaps in global access to biologic medicines : building tools to evaluate innovations in biomanufacturing." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/117892.

Full text
Abstract:
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 127-134).
Low- and middle-income countries (LMICs) are experiencing a growing need for safe, effective, and affordable health services, especially medicines. Such trends are in part due to a continued epidemiologic transition from infectious to chronic, non-communicable diseases (NCDs). Today, NCDs account for a large portion of the total global disease burden: 70% of deaths, according to the World Health Organization (WHO). NCDs are projected to continue to undercut economic productivity and drive up health spending. Many NCDs are effectively treated using biologic therapies: large molecules produced by, or involving, living cells. Recently, some of these therapies have been included on the WHO Model List of Essential Medicines. However, the molecular, manufacturing, regulatory, and supply chain features of biologics lead to relatively higher costs and complexity compared to small-molecule drugs, with implications for widespread access. As part of the Global Action Plan for the Prevention and Control of NCDs 2013-2020, an 80% target for global availability of affordable essential medicines has been set for all public and private providers. In order to reach this target, there is a need to better understand the complex barriers to accessing biologics across the biopharmaceutical value chain. Current gaps in access indicate the potential need to re-orient the biopharmaceutical system in order to meet projected future healthcare demand in terms of quantity, quality, and affordability. There is also growing uncertainty within the biopharmaceutical ecosystem as to the best use of resources, design of policies, and development of technologies that will have the most cost-effective impact on maximizing the supply of and access to such biologics. This research focuses specifically on the manufacturing component of biologics access, providing an analysis of the benefits and risks across different production networks with varying numbers and locations of facilities. A cost modeling tool is presented for quantitatively analyzing different manufacturing design options. This is accomplished by comparing the cost of goods (COGS) and net present cost (NPC) of different scenarios, using trastuzumab (a monoclonal antibody drug used to treat HER-2+ breast cancer) as a case study. Finally, future research questions are presented, aimed at better understanding the drivers of variability in manufacturing cost across manufacturing networks, especially when considering differences in product type, location, regulatory jurisdiction, geopolitical zone, and sociocultural norms. In light of changing global health patterns and increasing demand for quality, affordable care, the thesis presents tools that can be generalized for addressing tradeoffs, short- and long-term effects, and intended and unintended consequences of investments in global health. It holds potential for assessing the impact of various innovations (policies, technologies, organizational structures and more) on complex, dynamic systems and for providing an evidence base to better inform future areas of research, design of policies, and development of technologies.
by Donovan G. Guttieres.
S.M. in Technology and Policy
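To make the kind of scenario comparison described above concrete, a minimal net present cost sketch for two hypothetical manufacturing network designs (all figures, scenario names and the discount rate are invented placeholders; the thesis's COGS/NPC model for trastuzumab production is far more detailed):

```python
# Minimal net-present-cost sketch for comparing manufacturing scenarios.
# All numbers are hypothetical placeholders, not values from the thesis.

def net_present_cost(yearly_costs, discount_rate):
    """Discount a stream of yearly costs back to present value (year 0 first)."""
    return sum(cost / (1 + discount_rate) ** year
               for year, cost in enumerate(yearly_costs))

# Scenario A: one large centralized facility (high capex, lower opex).
centralized = [500e6] + [40e6] * 9   # year-0 build cost, then operating costs
# Scenario B: several smaller regional facilities (lower capex, higher opex).
distributed = [300e6] + [70e6] * 9

rate = 0.08  # assumed discount rate
for name, costs in [("centralized", centralized), ("distributed", distributed)]:
    print(f"{name}: NPC = {net_present_cost(costs, rate) / 1e6:.0f} M$")
```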
6

Hofuku, Yoyoi, Shinya Cho, Tomohiro Nishida, and Susumu Kanemune. "Why is programming difficult? : proposal for learning programming in “small steps” and a prototype tool for detecting “gaps”." Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6445/.

Full text
Abstract:
In this article, we propose a model of the understanding process that learners can use while studying programming. We focus on the "small step" method, in which students learn only a few new concepts per program, to avoid difficulties in learning programming. We also use the model to analyze differences in the order of presentation between several C programming textbooks. We developed a tool to detect "gaps" (places where many concepts must be learned at once in a single program) in programming textbooks.
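To illustrate the idea, a minimal sketch of the gap-detection logic just described (the concept sets and threshold are invented for illustration; the authors' prototype analyzes real C programs):

```python
# Sketch of "gap" detection in a textbook's sequence of example programs:
# a program is flagged when it introduces many new concepts at once.

textbook = [
    ("hello", {"printf"}),
    ("variables", {"printf", "int", "assignment"}),
    ("loops", {"printf", "int", "assignment", "for", "array",
               "function", "pointer"}),  # big jump -> likely a gap
]

THRESHOLD = 3  # max new concepts per program before we call it a gap
known = set()
for name, concepts in textbook:
    new = concepts - known          # concepts not seen in earlier programs
    if len(new) > THRESHOLD:
        print(f"gap at '{name}': {len(new)} new concepts {sorted(new)}")
    known |= concepts
```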
7

Feighan, Kelly. "A Quantitative Analysis of Marital Age Gaps in the U.S. Between 1970 and 2014." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/494818.

Full text
Abstract:
Sociology
Ph.D.
Measuring spouses’ ages allows us to explore larger sociological issues about marriage, such as whether narrowing gaps signal gender progress or if a rise in female-older unions reveals a status change. Using Census and American Community Survey data, I test the merits of beauty-exchange and status homogamy theories as explanations for how heterosexual marital age gaps changed over a 40-year period of social and economic revolution. Analyses address questions about how age gaps compared for people with different characteristics, whether similarly aged couples exhibited greater educational and socio-economic homogamy than others, and if the odds of being in age-heterogamous marriages changed. Chapter 4 provides the historical context of U.S. marriages from 1910 on, and shows that while disadvantaged groups retreated from marriage, the percentage of individuals with greater education and income who married remained high. Age homogamy rose over 100 years due to a decline in marriages involving much-older husbands rather than increases in wife-older unions. Results in Chapter 5 show that mean age gaps decreased significantly over time for first-married individuals by most—but not all—characteristics. Gaps narrowed for those who were White, Black, other race, or of Hispanic origin; from any age group; with zero, one, or two wage earners; with any level of education; and from most types of interracial pairs. One exception was that mean age gaps increased between Asian wives and White husbands, and Asian women’s odds of having a much older husband were higher than the odds for racially homogamous women. Those odds increased over time. Findings lent support for status homogamy theory, since same-age couples showed greater educational homogamy than others in any decade, but showed mixed support for beauty exchange. In 2010-14, the median spousal earnings gap was wider in husband-older marriages than age-homogamous ones; however, the reverse was true in 1980. Women-older first or remarriages exhibited the smallest median earnings gaps in 1980 and 2010-14, and women in these marriages contributed a greater percentage of the family income than other women in 2010-14 (43.6% vs 36.9%, respectively). The odds of being in age-heterogamous unions were significantly higher for persons who were remarried, from older age groups, from certain racial backgrounds, in some interracial marriages, less educated, and from lower SES backgrounds. Age and remarriage showed the greatest impact on odds ratios. While age homogamy increased overall, the odds of being a much older spouse (11+ years older) increased dramatically for remarried men and women between 1970 and 1980, and then remained high in 2010-14. Remarried women’s odds of being the much older wife versus a same-age spouse were 20.7 times that of the odds of first-married women in 2010-14. Other results showed that Black men’s odds of being with a much-older wife compared to one around the same age were about 2.5 times that of the odds of White men in each decade. Hispanic men’s odds of being in a first marriage with a much-older wife versus one of the same age were also twice the odds of White men in 1980 and 2010-14. Analyses demonstrated that marital age gaps have, indeed, changed significantly since the second-wave women’s movement, and that while age homogamy increased, the odds of being age heterogamous also shifted for people with different characteristics.
Temple University--Theses
8

Blomberg, Madeleine. "Biggest Skills Needs & Gaps : Case Study of Sandvik Coromant & Microsoft." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300053.

Full text
Abstract:
In our increasingly digital age, the pace of digital transformation requires continuous learning. Microsoft CEO Satya Nadella put it simply in calling for Microsoft to move from a culture of "know-it-all" to a culture of "learn-it-all". The most valuable thing leaders can do is set an example of lifelong learning and find opportunities to encourage others to do the same, allowing each employee to take responsibility for skilling up [18]. This study identifies prioritized skills and assesses which skills gaps exist for them. A maturity framework is developed to measure the level of skills within three dimensions, "Technical and digital skills", "People and organization skills" and "Strategy skills", and is composed of 30 attributes (Table 1). The study uses Sandvik Coromant as a use case for assessing skills gaps and Microsoft as a use case for how to fill those gaps. It contributes to the manufacturing sector by identifying prioritized skills, empirically establishing a maturity framework and providing an evaluation of Sandvik Coromant's current skills gaps, including how to fill them through programs, tools or initiatives.
9

Nordström, Fanny, and Claudia Järvelä. "Digital Competencies and Data Literacy in Digital Transformations : Experience from the Technology Consultants." Thesis, Uppsala universitet, Företagsekonomiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-450947.

Full text
Abstract:
The digital revolution is challenging both individuals and organizations to become more comfortable using various digital technologies. Digital technologies enable and generate large amounts of data, but people are not very good at interpreting or making sense of it. This study aimed to explore the role of digital competencies and data literacy in digital transformations and to identify the consequences that a lack of digital competencies and data literacy can cause within digital transformation projects. The authors studied the perspectives of technology consultants with experience in digital transformation projects, using an exploratory qualitative research design building on empirical data gathered from semi-structured interviews. The authors found that the technology consultants perceived digital competencies as crucial skills for individuals in digital transformations, while data literacy was not considered a crucial skill in this context. Regarding the consequences of a digital skills gap, the technology consultants pointed to implementation problems, delays, and indirect waste of resources such as money.
10

Irfan, Kamran. "Adaptation of the generic crop model STICS for rice (Oryza sativa L.) using farm data in Camargue." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4355.

Full text
Abstract:
The crop model STICS was adapted for flooded rice, and the model's predictive ability was evaluated by simulating plant biomass at harvest as well as grain yield. The dataset used for this purpose was collected from farmer-managed fields across the Camargue (southern France). We introduce an original procedure for using farm data instead of experimental data for modeling. This work was carried out in four phases: (i) analysis of the initial database of 472 fields, 33 different varieties and 11 physically different soils, covering the whole Camargue between 1984 and 2009; (ii) selection of the options and formalisms relevant to the rice crop; (iii) preparation of the dataset for modeling by eliminating fields in which yields were limited by factors not taken into account by the model; and (iv) parameterization and simulation of the selected target variables. The results of applying STICS to the rice crop were satisfactory for almost 80% of the fields in the calibration dataset. In particular, there was good agreement between simulations and measurements for situations with complete input information. The simulation patterns for both plant biomass and grain yield on the validation dataset were similar to those on the calibration dataset, with slightly reduced simulation quality. More discrepancies were observed in simulations driven by model-calculated dates of the phenological stages than in simulations run using the observed dates of the same stages.
11

Pereira, Cíntia Marques. "Challenges in flex binning ultra high resolution seismic reflection data." Master's thesis, Universidade de Aveiro, 2017. http://hdl.handle.net/10773/21923.

Full text
Abstract:
Master's degree in Geological Engineering
3D seismic reflection data may contain coverage gaps due to operational problems. Gaps in 3D data can adversely affect several data-processing steps, such as velocity analysis, multiple attenuation, stacking and migration. Flex Binning is an expedient method to deal with these gaps: it allows each bin to contain more traces by increasing the effective size of each bin, thereby including traces that also fall into neighboring bins. The challenge is to apply Flex Binning to a very large number of traces, exceeding current software design solutions and hardware capability. Several tests were performed with the Seismic Processing Workshop (SPW) software. During these tests, coding problems and bugs were detected and corrected, such as the inability to apply a grid to the dataset and traces being left out of the grid. Improper indexing was also resolved by a change of software design in the CMP Binning code. Beta testing on SPW allowed advances in applying Flex Binning to large inputs, notably the successful execution of the CMP Binning flow on a 751 GB dataset. Before these tests, the CMP Binning step could apply Flex Binning effectively only to an input of approximately 30 GB.
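To make the method concrete, a minimal sketch of the flex-binning assignment described above (the bin size, flex factor and trace representation are simplified assumptions, not SPW's actual implementation):

```python
# Flex binning sketch: a trace contributes to its nominal CMP bin and,
# when bins are expanded by a flex factor, to neighboring bins as well.
from collections import defaultdict

BIN_SIZE = 25.0   # nominal bin width in metres (assumed)
FLEX = 1.5        # effective bin size = FLEX * BIN_SIZE

def bins_for_trace(x, y):
    """Return all bin indices whose flexed area contains the CMP (x, y)."""
    half = (FLEX * BIN_SIZE) / 2.0
    hits = []
    for ix in range(int((x - half) // BIN_SIZE), int((x + half) // BIN_SIZE) + 1):
        for iy in range(int((y - half) // BIN_SIZE), int((y + half) // BIN_SIZE) + 1):
            cx, cy = (ix + 0.5) * BIN_SIZE, (iy + 0.5) * BIN_SIZE  # bin centre
            if abs(x - cx) <= half and abs(y - cy) <= half:
                hits.append((ix, iy))
    return hits

coverage = defaultdict(list)
for trace_id, (x, y) in enumerate([(12.0, 12.0), (30.0, 14.0), (55.0, 40.0)]):
    for b in bins_for_trace(x, y):
        coverage[b].append(trace_id)   # one trace can help fill several bins
print(dict(coverage))
```

A trace near a bin edge is counted in the adjacent bin too, which is how Flex Binning fills otherwise empty bins at the cost of some spatial smearing.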
12

Lundström, Lukas. "Weather data for building simulation : New actual weather files for North Europe combining observed weather and modeled solar radiation." Thesis, Mälardalens högskola, Akademin för hållbar samhälls- och teknikutveckling, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-16446.

Full text
Abstract:
Dynamic building simulation is increasingly necessary for accurately quantifying potential energy-saving measures in retrofit projects, to comply with new, stricter EU directives implemented in member states' legislation and building codes. For good results the simulation model needs to be accurately calibrated. This requires actual weather data, representative of the climate surrounding the given building, in order to calibrate against actual energy bills for the same period of time. The main objective of this degree project is to combine observed weather data (temperature, humidity, wind etc.) with modeled solar radiation data, utilizing the SMHI STRÅNG model system, and to transform these data into AMY (Actual Meteorological Year) files to be used with building simulation software. This procedure gives actual weather datasets that cover most of the urban and semi-urban area in Northern Europe while keeping the accuracy of observed weather data. A tool called Real-Time Weather Converter was developed to handle data retrieval and merging, filling of missing data points, and creation of the final AMY file. Modeled solar radiation data from STRÅNG had only been validated against a Swedish solar radiation network; validation was now carried out by the author with wider geographic coverage. Validation results show that the STRÅNG model system performs well for Sweden but less so outside of Sweden. Some areas outside of Sweden (mainly Central Europe) show reasonably good results for some periods, but the results are not as consistent in the long run as for Sweden. The missing-data fill scheme developed for the Real-Time Weather Converter performs better than interpolation for data gaps (outdoor temperature) of about 9 to 48 hours. For gaps between 2 and 5 days the fill scheme still gives slightly better results than linear interpolation. Akima spline interpolation performs better than linear interpolation for data gaps (outdoor temperature) in the interval of 2 to about 8 hours. Temperature uncertainty was studied using data from the period 1981-2010 for selected sites. The result, expressed as the SD (standard deviation) of the uncertainty in yearly mean temperature, is about 1 °C for the Nordic countries. On a monthly basis the variation in mean temperature is much stronger (for Nordic countries it ranges from 3.5 to 4.7 °C for winter months), while summer months show less variation (with SD in the range of 1.3 to 1.9 °C). The same pattern is visible at more southern latitudes but with much lower variation, and lower still at sites near coastal areas. For example, coastal Camborne, UK, has an SD of 0.7 to 1.7 °C on a monthly basis and a yearly SD of 0.5 °C. Mean direct irradiance SD for the studied sites ranges from 5 to 19 W/m2 on a yearly basis, while on a monthly basis the SD ranges from 40 to 60 W/m2 for summer months. However, the sample base was small and covered inconsistent time periods, so these numbers can only be seen as indicative. The direct radiation parameter of the commonly used IWEC (International Weather for Energy Calculations) files was found to have a very strong negative bias of about 20 to 40% for Northern Europe. These files should be used with care, especially if solar radiation has a significant impact on the building being modeled. Note that there is also a newer set of files called IWEC2 that can be purchased from ASHRAE; these files do not seem to be systematically biased for Northern Europe but have not been studied in this paper.
The STRÅNG model system does capture the trend, even outside of Sweden, and is thus a very useful source of solar radiation data for model calibration.
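As an illustration of the interpolation comparison reported above, a minimal sketch filling a short gap in an hourly temperature series with linear versus Akima spline interpolation (synthetic data; the thesis's Real-Time Weather Converter uses a more elaborate fill scheme):

```python
# Compare linear vs Akima interpolation for a short gap in hourly temperatures.
import numpy as np
from scipy.interpolate import Akima1DInterpolator

hours = np.arange(24, dtype=float)
temps = 10 + 5 * np.sin(2 * np.pi * (hours - 14) / 24)  # synthetic daily cycle
gap = slice(8, 13)                        # pretend a 5-hour gap in the record
known = np.ones_like(hours, dtype=bool)
known[gap] = False

linear = np.interp(hours[gap], hours[known], temps[known])
akima = Akima1DInterpolator(hours[known], temps[known])(hours[gap])

for h, lin, aki, true in zip(hours[gap], linear, akima, temps[gap]):
    print(f"h={h:4.0f}  linear={lin:5.2f}  akima={aki:5.2f}  true={true:5.2f}")
```

On smooth diurnal data like this, the spline tracks the curvature that linear interpolation cuts off, which matches the abstract's finding for gaps of roughly 2 to 8 hours.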
13

Goodman, Seth. "Filling In The Gaps: Applications Of Deep Learning, Satellite Imagery, And High Performance Computing For The Estimation And Distribution Of Geospatial Data." W&M ScholarWorks, 2020. https://scholarworks.wm.edu/etd/1616444509.

Full text
Abstract:
Many regions around the world suffer from a lack of authoritatively collected data on factors critical to understanding human well-being. This challenges our ability to understand the progress society is making towards reducing poverty, improving lifespans, or otherwise improving livelihoods. A growing body of research is exploring how deep learning algorithms can be used to produce novel estimates of sparse development data, and how access to such data can impact development efforts. This dissertation contributes to this literature in three parts. First, using Landsat 8 satellite imagery and data from the Armed Conflict Location & Event Data Project, convolutional neural networks are trained to predict locations where conflict is likely to result in fatalities within one year. Second, building on the findings of chapter 1, the dissertation explores the potential to extend predictions to a time series using both yearly and six-month intervals. Finally, chapter 3 introduces GeoQuery, a dynamic web application that utilizes a High Performance Computing cluster and novel parallel geospatial data processing methods to overcome challenges associated with integrating and distributing geospatial data within research communities.
14

Manyanga, Taru. "Examining Lifestyle Behaviours and Weight Status of Primary Schoolchildren: Using Mozambique to Explore the Data Gaps in Low- and Middle-Income Countries." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39711.

Full text
Abstract:
The emergence of malnutrition, in all of its forms, and of physical inactivity among children and adolescents as serious public health challenges, especially in resource-limited low- and middle-income countries, is concerning and requires attention. Data on the prevalence of unhealthy weight status and on levels of physical inactivity among children and adolescents in these countries are limited, not systematically collected, and not well documented. Accurate prevalence estimates, and an informed understanding of the relationships among movement behaviours and weight status of children and adolescents, are required to facilitate evidence-informed interventions and public health policies in these countries. The main purposes of this dissertation were to examine relationships between lifestyle behaviours and weight status among primary schoolchildren in Mozambique; to compare body mass indices and movement behaviours of Mozambican schoolchildren to those of children from other countries; and to use these findings to highlight important data gaps that exist in low- and middle-income countries. First, the Active Healthy Kids Global Alliance's Report Card development methodology was used to conduct thorough narrative literature searches and to identify data gaps and research needs, which subsequently informed the research questions and primary data collection. A published protocol developed for the multinational cross-sectional International Study of Childhood Obesity, Lifestyle and the Environment was adopted for primary data collection among urban and rural schoolchildren in Mozambique (n=683) to facilitate data comparability. Anthropometric data (weight, height, percent body fat, bioelectric impedance, mid-upper-arm circumference, waist circumference) and accelerometry data (nocturnal sleep, sedentary time, various intensities of physical activity) were objectively measured by trained personnel. Data about lifestyle behaviours (diet and movement behaviours), demographics and environmental factors (home, neighbourhood, school) associated with child weight status were collected using context-adapted questionnaires. As part of this dissertation, six manuscripts were developed and submitted for publication in peer-reviewed scientific journals. Overall, the narrative literature searches revealed a dearth of information about the prevalence of unhealthy weight status and key lifestyle behaviours among children and adolescents in low- and middle-income countries. Results from the data collected in Mozambique showed overweight/obesity to be an emerging public health concern, especially among urban children (11.4%), while thinness persists and is more prevalent among rural schoolchildren (6.3%). Moderate- to vigorous-intensity physical activity, active transport and mother's body mass index were found to be important modifiable correlates of weight status for Mozambican children. Distinct differences in the prevalence and correlates of lifestyle behaviours (sleep and physical activity) were observed between urban and rural children in Mozambique: mean moderate- to vigorous-intensity physical activity was lower among urban children (82.9±29.5 minutes/day) than among rural children (96.7±31.8 minutes/day). Compared to children from 12 other countries, children from Mozambique had, on average, lower body mass indices, higher daily moderate- to vigorous-intensity physical activity, lower daily sedentary time and comparable sleep duration. For example, rural Mozambican children had lower mean BMI z-scores (-0.5±0.9) than the rest of the sample (0.4±1.3), 46 more minutes of daily moderate- to vigorous-intensity physical activity, and 99 fewer minutes of daily sedentary time than the other children. Furthermore, across study sites, body mass index (positively), minutes of daily moderate- to vigorous-intensity physical activity (negatively), and daily sedentary time (positively) varied linearly with country human development index. Children from the urban Mozambican site closely resembled those from Nairobi, Kenya, on body mass index and movement behaviours, whereas those from rural Mozambique were distinctly different from the rest of the sample on many indicators. Findings from this dissertation highlight the importance of including participants from low-, medium-, high-, and very-high-income countries in multinational studies investigating contextual and environmental factors related to childhood weight status. The findings revealed important differences between urban and rural children, supporting the need to include both in study samples, especially in low- and middle-income countries where the majority of people live in rural areas. Finally, this dissertation demonstrates that despite reported global progress in the availability of data about obesity and related factors among children and adolescents, gaps still exist in low- and middle-income countries and need to be filled.
15

Salem, Nidal Eleanor. "Using Design Thinking to Explore Millennial Segmentation Gaps and Improve Relevancy within Cuyahoga Valley National Park." Kent State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=kent1524496515760127.

Full text
16

Homanen, Malin, and Therese Karlsson. "Kunskapsskillnaderna mellan IT och Redovisning och dess påverkan på redovisningsdatakvalitet : en kvalitativ studie på ett av de största bemanningsföretagen i Sverige och i världen." Thesis, Södertörns högskola, Företagsekonomi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-38932.

Full text
Abstract:
The inevitable dependence on digitization and IT systems in today's operations and organizations requires the current workforce to increase their IT skills in order to integrate and communicate with new computer systems for a more efficient business. This is equally important for financial accountants, who are responsible for the business's financial reporting, since they must be able to ensure that the accounting data produced and delivered using IT is correct and of high quality. A lack of IT skills increases the risk that errors in accounting data go undetected and thus affect accounting data quality. This in turn risks affecting the accounting quality of the final financial reporting. Communication between the departments could also suffer, since the knowledge gaps between them can make it difficult for them to understand each other. The aim of the study is to contribute knowledge about how these differences in knowledge can affect the work of ensuring accounting data quality and to give insight into how this work can be realized in practice. With the help of previous research, an analysis model was developed that illustrates the identified factors and their order of influence on accounting data quality: knowledge gaps → internal control → accounting data quality. The study applies an instrumental case study design with a qualitative research approach. Two focus group interviews were conducted on two different occasions with respondents from the accounting department and the IT department of the same company. Data was transcribed and coded using color coding to clarify the factors that form the basis of the analysis model. A survey of the other employees in each department was conducted to complement and confirm the results of the interviews. The results of the study showed that the knowledge differences have little or no direct impact on accounting data quality, but rather affect internal control, based on external factors that came to light during the analysis. A revised analysis model was developed based on the results, replacing the initial hypothetical model.
17

Diazgranados, Ferrans Silvia. "The Civic Knowledge Gaps in Chile, Colombia and Mexico: An Application of the Oaxaca-Blinder Decomposition Method Using Data From the 2009 International Civic and Citizenship Education Study (ICCS)." Thesis, Harvard University, 2016. http://nrs.harvard.edu/urn-3:HUL.InstRepos:27112704.

Full text
Abstract:
The existence of significant differences in the civic knowledge, civic attitudes and civic skills of young people from different socio-economic (SES) backgrounds represents civic competence gaps that affect their ability to act as personally responsible, participatory and justice-oriented citizens in their society (Carretero et al., 2016; MEN, 2004; Westheimer & Kahne, 2004). Identifying civic competence gaps, their magnitude, and the factors that account for them should be a priority for researchers, policy-makers and educators in Latin America because such gaps can threaten the strength, stability and legitimacy of democracies in the region (Levinson, 2010). I use data from three nationally representative samples of 8th grade students who participated in the 2009 International Civic and Citizenship Education Study (ICCS) to identify civic competence gaps between youth from high and low SES backgrounds in Chile, Colombia and Mexico, using eight measures related to civic competence. I document large gaps in students' civic knowledge in the three countries, and small gaps in their internal sense of political efficacy, their intention to participate in future electoral processes and in legal and illegal protests, and their attitudes toward corruption, authoritarianism and disobeying the law. I do not find gaps in their attitudes toward civil disobedience. I then use the Oaxaca-Blinder method (Oaxaca, 1973; Blinder, 1973) to identify how (1) differences in access to school resources, positive school climates and interactive civic learning opportunities, and (2) differences in the civic knowledge gains that students from different SES backgrounds obtain from equal school resources, school climates and civic learning opportunities, account for the civic knowledge gaps in these countries. Findings suggest that the largest portion of the civic knowledge gap in Chile is due to differences in civic knowledge gains, while in Colombia and Mexico the largest portions are due to differences in access. In all three countries high SES students have significantly more access than low SES students to the school resources, school climate and civic learning opportunities that are associated with higher civic knowledge, and in every case school SES accounts for the largest portion of the explained civic knowledge gaps. Given equal characteristics, low SES students in Colombia and Mexico (but not in Chile) obtain more civic knowledge gains than high SES students from school resources, school climate and civic learning opportunities.
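For reference, the standard two-fold Oaxaca-Blinder decomposition of a mean outcome gap between a high-SES group H and a low-SES group L, in a textbook form that maps onto the abstract's "access" and "gains" components (the dissertation's exact specification may differ):

```latex
\underbrace{\bar{Y}_H - \bar{Y}_L}_{\text{civic knowledge gap}}
  = \underbrace{(\bar{X}_H - \bar{X}_L)^{\top}\hat{\beta}_H}_{\text{explained: differences in access to characteristics } X}
  + \underbrace{\bar{X}_L^{\top}\,(\hat{\beta}_H - \hat{\beta}_L)}_{\text{unexplained: differences in returns (gains) to } X}
```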
18

Yang, Liqiang. "Statistical Inference for Gap Data." NCSU, 2000. http://www.lib.ncsu.edu/theses/available/etd-20001110-173900.

Full text
Abstract:

This thesis research is motivated by a special type of missing data, gap data, which was first encountered in a cardiology study conducted at Duke Medical School. This type of data includes multiple observations of a certain event time (in this medical study the event is the reopening of a certain artery), some of which may have one or more missing periods, called "gaps", before the "first" observed event. For those observations, the observed first event may not be the true first event, because the true first event might have happened in one of the missing gaps. Due to this missing information, estimating the survival function of the true first event becomes very difficult. No research or discussion had been done on this type of data until now. In this thesis, the author introduces a new nonparametric estimating method, currently called the Imputed Empirical Estimating (IEE) method, to solve this problem. According to the simulation studies, the IEE method provides a very good estimate of the survival function of the true first event and significantly outperforms all existing estimating approaches in our simulation studies. Besides the new IEE method, this thesis also explores the maximum likelihood estimate (MLE) in the gap data case. Gap data is introduced as a special type of interval censored data for the first time. The dependence between the censoring interval (in the gap data case, the observed first event time point) and the event (the true first event) makes gap data different from the well-studied regular interval censored data. This thesis points out the only difference between gap data and regular interval censored data, and provides an MLE for gap data under certain assumptions. The third estimating method discussed in this thesis is the Weighted Estimating Equation (WEE) method. The WEE estimate is a popular nonparametric approach currently used in many survival analysis studies. The consistency and asymptotic properties of the WEE estimate applied to gap data are discussed. Finally, in the gap data case, the WEE estimate is shown to be equivalent to the Kaplan-Meier estimate. Numerical examples are provided to illustrate the algorithms of the IEE and MLE approaches. The author also provides an IEE estimate of the survival function based on real-life data from Duke Medical School. A series of simulation studies was conducted to assess the goodness-of-fit of the new IEE estimate; plots and tables of the results are presented in the second chapter of this thesis.
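For context, the Kaplan-Meier estimator to which the WEE estimate is shown to be equivalent, in its standard textbook form (notation is the usual convention, not taken from the thesis):

```latex
\hat{S}(t) = \prod_{i:\; t_i \le t} \left( 1 - \frac{d_i}{n_i} \right)
```

where the \(t_i\) are the distinct observed event times, \(d_i\) is the number of events at \(t_i\), and \(n_i\) is the number at risk just before \(t_i\).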

19

Lejarza, Lander. "Gas Data Acquisition using Arduino." Thesis, Högskolan i Gävle, Elektronik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-24607.

Full text
Abstract:
The aim of this project is to acquire gas data using an Arduino microcontroller. The project is part of a bigger project called the electronic nose, in which mechanical, thermal, electrical, electronic and software parts work together. It continues an earlier project in which mainly the mechanical, thermal and electrical parts were completed. The whole e-Nose project is divided into four main subcategories: general mechanical structure, thermal and piston electric circuit, gas sensors, and software. My work focuses on the last two subcategories. The sample to be measured is placed inside a moving cylinder, which lifts the sample up near the sensors and warms it using a thermal resistance so that it releases more odor. The sensors, synchronized with the piston, acquire the data through the Arduino and send it to the computer to be analysed. The sensors are driven by an Arduino Mega 2560, and the data is transferred to the computer to be analysed with MatLab. To control the measurement, a push button, an LCD display and an LED are used, giving the user full control of the process and an easy interface. Six gas sensors are used, which is enough to differentiate between different kinds of gases: with such variety it is possible to distinguish between combustible gases (methane, propane, LPG etc.), NH3, alcohol and more. The e-Nose is able to measure different gases in more than one way, depending on the program chosen. For a more accurate response, additional sensors combined through a sensor fusion method, or more accurate sensors, would be needed.
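As an illustration of the host side of such a setup, a minimal sketch that logs six comma-separated sensor readings streamed over USB serial (the port name, baud rate and line format are assumptions; the project itself analyses the data in MatLab):

```python
# Host-side logger for an Arduino streaming comma-separated gas sensor values.
# Port, baud rate and the "s1,...,s6" line format are assumptions.
import csv
import time
import serial  # pyserial

PORT, BAUD = "/dev/ttyACM0", 9600

with serial.Serial(PORT, BAUD, timeout=2) as link, \
        open("gas_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["t"] + [f"sensor{i}" for i in range(1, 7)])
    while True:
        line = link.readline().decode("ascii", errors="ignore").strip()
        fields = line.split(",")
        if len(fields) == 6:                 # one reading per sensor
            writer.writerow([time.time()] + fields)
```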
20

Ding, Wenzhong. "Analysis of data from a restricted-entry well /." Access abstract and link to full text, 1989. http://0-wwwlib.umi.com.library.utulsa.edu/dissertations/fullcit/9015983.

Full text
21

Najeh, Houda. "Diagnostic du système bâtiment : nouveaux défis." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT110.

Full text
Abstract:
Fault diagnosis and maintenance of a whole-building system is a complex task to perform. Available building fault detection and diagnosis tools are only capable of performing fault detection using behavioral constraint analysis. The thesis of Mahendra Pratap Singh proposes to use heterogeneous tests with validity constraints in the context of building fault diagnosis, but the proposed approach assumes that the sensors are reliable. Nevertheless, validity constraints are checked with potentially faulty sensors. If these sensors are faulty, the diagnostic result is not guaranteed, and a method is needed to prove the test as well as the global diagnosis. To make a test, data are required from different parts: meteorological, human and physical. However, data gaps are the main sensor fault in buildings. Sensor values are not uniformly sampled, and there is a need to decide from which delay the sensor becomes faulty. The objective of this work is to highlight these challenges as well as to provide a strategy for solving them. Three solutions for diagnosis in buildings are proposed: 1- A level of completeness for better formalizing validity. In this work, we make the hypothesis that there is no precise global model for a building system, but there are contextual models with limited validity. The validity is measured with potentially faulty sensors. The completeness level is proposed as a method to prove whether a test space is fully covered or not, i.e. to assess the level of validity of a test. 2- A confidence level for proving a global diagnosis. A test is characterized by thresholds, i.e. the behavioral constraint is either satisfied or unsatisfied. Uncertainty is related to the validity constraints. Indeed, it is difficult to set a threshold for the level of completeness from which one can say that a test is valid. Diagnostic results are calculated from a set of tests, each one defined by its completeness level. The contribution is to propose a solution to compute the confidence level of a global diagnosis deduced from a set of tests, some of which have a completeness level lower than 1. A method based on fuzzy logic reasoning is used for this purpose. 3- Automatic thresholding for sensor data gap detection. The delay depends on the measured value and the type of sensor. The objective is to identify from which delay a sensor becomes faulty. Two techniques are proposed: a time-series analysis and a statistical approach. Different applications have been studied for validation: an office at the G-SCOP lab, an apartment in Grenoble and a platform at the University of Southern Denmark.
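The third contribution lends itself to a small illustration. The following is a minimal sketch, not the thesis's algorithm: the statistical variant of gap detection estimates a delay threshold from the observed inter-sample delays and flags intervals that exceed it. All data below is synthetic.

```python
import numpy as np

def gap_threshold(timestamps, k=3.0):
    """Estimate a delay threshold beyond which a sensor is flagged faulty.

    A simple statistical rule (the thesis proposes both a time-series and a
    statistical approach; this illustrates the latter): mean inter-sample
    delay plus k standard deviations.
    """
    delays = np.diff(np.sort(np.asarray(timestamps, dtype=float)))
    return delays.mean() + k * delays.std()

def find_gaps(timestamps, threshold):
    """Return (start, end) pairs of intervals whose delay exceeds the threshold."""
    t = np.sort(np.asarray(timestamps, dtype=float))
    d = np.diff(t)
    return [(t[i], t[i + 1]) for i in np.nonzero(d > threshold)[0]]

# Example: a sensor sampled roughly every 60 s, with one injected outage.
ts = np.cumsum(np.r_[0, np.random.normal(60, 5, 200)])
ts[100:] += 3600  # one-hour gap
thr = gap_threshold(ts)
print(f"threshold = {thr:.1f} s, gaps found: {find_gaps(ts, thr)}")
```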
APA, Harvard, Vancouver, ISO, and other styles
22

Sciaraffa, Rocco. "A Reconfigurable Device for GALS Systems." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235712.

Full text
Abstract:
Globally Asynchronous Locally Synchronous (GALS) Field-Programmable Gate Arrays (FPGAs) are composed of standard synchronous reconfigurable logic islands that communicate with each other asynchronously. Past research into fully asynchronous FPGAs has demonstrated high throughput and reliability by adopting dual-rail encoding. GALS FPGAs have been proposed that rely on bundled-data encoding and fixed asynchronous communication between synchronous islands. This thesis proposes a new GALS FPGA architecture with a fully reconfigurable asynchronous fabric that relies on coarse-grained Configurable Logic Blocks (CLBs) to improve the communication capability of the device. Through dedicated datapath elements, asynchronous pipelines are efficiently mapped onto the device. The architecture is presented along with the customized tool flow needed to compile Verilog for this new coarse-grained reconfigurable circuit. The main purpose of this thesis is to map communication-oriented user circuits onto the proposed asynchronous fabric and evaluate their performance. The benchmark circuits target the design of a Network-on-Chip (NoC) router and employ a two-phase bundled-data protocol. The results are obtained through simulation and compared with the performance of the same circuits in a fine-grained classical FPGA style. The proposed architecture achieves up to 3.2x higher throughput and 2.9x lower latency than the classical one. The results show that the coarse-grained style efficiently maps asynchronous communication circuits, and it may be the starting point for future reconfigurable GALS systems. Future work should focus on improving the back-end synthesis and evaluating the FPGA GALS system as a whole.
Globala Asynkrona Lokalt Synkrona (GALS) FPGAer består av standardiserade synkrona rekonfigurerbara logiska öar som kommunicerar med varandra på ett asynkront sätt. Tidigare forskning om helt asynkrona FPGAer har demonstrerat att hög genomströmning och tillförlitlighet kan erhållas med hjälp av så kallad dual-rail-kodning. GALS FPGA har också föreslagits, där man istället förlitar sig på kodad data och fast asynkron kommunikation mellan synkrona öar. Denna avhandling föreslår en ny GALS FPGA-arkitektur med en omkonfigurerbar asynkron struktur, bestående av så kallade Coarse-grained CLBs för att förbättra kommunikationsförmågan på enheten. Genom att datavägarna använder sig av dedikerade element, kan asynkrona pipelines mappas effektivt på enheten. Arkitekturen presenteras liksom det verktygsflöde som behövs för att kompilera Verilog för denna nya grovkornigt omkonfigurerbara krets. Huvudsyftet med denna avhandling är att mappa kommunikationskretsar på den föreslagna asynkrona strukturen och utvärdera dess prestanda. Referenskretsarna som används för utvärdering är en NoC router som använder sig av ett tvåfas kommunikationsprotokoll. Resultaten erhålls genom simulering och jämförs med prestanda av samma krets implementerad i en finkornig klassisk FPGA-stil. Den föreslagna arkitekturen uppnår ca 3.2x högre genomströmning och 2.9x lägre latens än den klassiska. Resultaten visar att en grovkornig stil kan mappa asynkrona kommunikationskretsar på ett effektivt sätt, och att det kan vara en bra utgångspunkt för framtida omkonfigurerbara GALS-system. Framtida arbete bör fokusera på att förbättra back-end-syntesen och att utvärdera FPGA GALS-systemet i sin helhet.
APA, Harvard, Vancouver, ISO, and other styles
23

McCaffrey, Philip D. "Equilibrium structures from gas-phase electron-diffraction data." Thesis, University of Edinburgh, 2007. http://hdl.handle.net/1842/2601.

Full text
Abstract:
For the past 75 years, gas-phase electron diffraction (GED) has remained the most valuable technique for determining structures of small molecules, free from intermolecular interactions. Throughout this period many improvements have been made to both the experimental and theoretical aspects of this technique, leading to the determination of more accurate structures. As the uncertainties associated with many stages of the process have been greatly reduced, errors introduced by assumptions, which were previously neglected, now play an important role in the overall accuracy of the determined structure. This work is focused on two such areas, namely the treatment of vibrational corrections and the vibrational effects on the scattering of individual electrons by multiple atoms. A novel method has been developed which allows the extraction of equilibrium structures (re) from distances obtained directly from GED experiments (ra). In unfavourable cases (such as small molecules with large-amplitude and/or highly anharmonic modes of vibration) traditional methods can introduce errors of comparable size to those obtained from the experiment. The newly developed method, EXPRESS (EXPeriments Resulting in Equilibrium StructureS), overcomes the problems which have plagued previous attempts by exploring a more extensive region of the potential-energy surface (PES), specifically regions relating to the normal modes of vibration. The method has been applied initially to sodium chloride in the gas phase, as this contains dimer molecules with very low-frequency, large-amplitude modes of vibration. The experimentally determined re structure gives good agreement with high-level ab initio calculations. Following this success, the EXPRESS method was then applied to sodium fluoride, sodium bromide and sodium iodide, giving similarly good agreement with theoretical calculations. The regular mixed alkali halide dimers (D2h symmetry) cannot be studied by microwave spectroscopy as they do not have a permanent dipole moment. However, mixed dimers (C2v) and asymmetric dimers (Cs) do not suffer from this constraint. Using insights learned from the ab initio studies of the sodium halides, geometries and dipole moments have been calculated for a range of mixed and asymmetric alkali halide dimers to enable their study by microwave spectroscopy. A multi-dimensional version of the EXPRESS method has been applied to the low-frequency modes of chlorofluoroacetylene and chlorodifluoronitrosomethane to assess the effects of coupling between these modes of vibration in these structurally challenging systems. To obtain re structures of larger molecules, a second method, using molecular dynamics (MD), has been developed and implemented on two test cases: the sodium chloride dimer and octasilsesquioxane. Traditional scattering theory used in GED employs the first-order Born approximation (FBO). However, this ignores any multiple scattering events, which are important for heavier atoms. Using a method similar in nature to EXPRESS, a full vibrational analysis of three-atom scattering has been conducted on tellurium dibromide and tellurium tetrabromide.
APA, Harvard, Vancouver, ISO, and other styles
24

Johnson, Kevin J. "Strategies for chemometric analysis of gas chromatographic data /." Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/8513.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Buba, Ibrahim Muhammad. "Direct estimation of gas reserves using production data." Texas A&M University, 2003. http://hdl.handle.net/1969/153.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Sundin, Daniel. "Natural gas storage level forecasting using temperature data." Thesis, Linköpings universitet, Produktionsekonomi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-169856.

Full text
Abstract:
Even though the theory of storage has historically been a popular framework for explaining commodity futures prices, many authors focus on the oil price link. Past studies have shown increased futures price volatility on Mondays and on days when natural gas storage levels are released, which could both indicate that storage levels and temperature data are incorporated in the prices. In this thesis, the U.S. natural gas storage level change is studied as a function of consumption and production. Consumption and production are further segmented and forecasted separately by modelling inverse problems that are solved by least squares regression using temperature data and time-series analysis. The results indicate that each consumer consumption segment is highly dependent on temperature, with R2-values above 90%. However, modelling each segment entirely by time-series analysis proved to be more efficient due to the lack of flexibility in the polynomials, the limited number of weather stations used, and seasonal patterns beyond the temperatures. Although the forecasting models could not beat analysts' consensus estimates, they identify natural gas storage level drivers and can thus be used to incorporate temperature forecasts when estimating futures prices.
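As an illustration of the segment-level approach, the sketch below regresses a synthetic consumption segment on a heating-degree-day transform of temperature by ordinary least squares. The model form, the 65 F base and all data are illustrative assumptions, not the thesis's exact specification.

```python
import numpy as np

# Minimal sketch: regress one consumer consumption segment on temperature,
# with a heating-degree-day term to capture the nonlinearity around 65 F.
rng = np.random.default_rng(0)
temp = rng.uniform(10, 90, 365)                       # daily mean temperature, F
hdd = np.maximum(65.0 - temp, 0.0)                    # heating degree days
consumption = 5.0 + 0.8 * hdd + rng.normal(0, 2, 365)  # synthetic segment data

X = np.column_stack([np.ones_like(temp), hdd])        # design matrix
beta, *_ = np.linalg.lstsq(X, consumption, rcond=None)
fitted = X @ beta
r2 = 1 - ((consumption - fitted) ** 2).sum() / ((consumption - consumption.mean()) ** 2).sum()
print(f"intercept={beta[0]:.2f}, HDD coefficient={beta[1]:.2f}, R^2={r2:.3f}")
# The storage change would then be forecast as production minus the sum of
# the separately forecasted consumption segments.
```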
APA, Harvard, Vancouver, ISO, and other styles
27

Carvalho, Ana Margarida de Almeida Bastos. "Support of operational processes in the Data Warehouse: the gap between theory and practice." Master's thesis, Instituto Superior de Economia e Gestão, 2007. http://hdl.handle.net/10400.5/3701.

Full text
Abstract:
Mestrado em Gestão de Sistemas de Informação
Data Warehouses have traditionally been accepted as a subject-oriented, integrated, non-volatile, time-variant collection of data in support of management's decisions. They support the decision-making processes of top and middle management and the organization's strategic planning. Their use to support operational data requirements as well has been somewhat controversial, being either supported or criticized by different authors. This thesis focuses on identifying, for a Data Warehouse containing some level of operational activities, the reasons that may be driving this kind of activity into the Data Warehouse. The thesis is made up of two parts. The first part moves from a literature review of Data Warehouse concepts, their characteristics, their usage and their role in organizations to the general opinion concerning the support of operational activities in Data Warehouse environments. The second part describes a single case study used to search for evidence of operational support in a Data Warehouse environment and to identify possible reasons that may be forcing the Data Warehouse to support operational activities. The research findings show that, for the specific case of the organization studied, there is evidence of the support of operational activities in the organization's Data Warehouse, and some evidence was collected concerning the reasons that motivate the localization of these activities. Finally, we discuss the findings and opportunities for further research.
Os Data Warehouses têm sido tradicionalmente aceites como uma coleção de dados orientados por assunto, integrados, não voláteis, com diferentes períodos temporais que suportam a tomada de decisões pela gestão. Suportam o processo de tomada de decisão pela gestão de topo e intermédia e os processos de planeamento estratégico da organização. A sua utilização para suportar igualmente requisitos de dados operacionais tem sido de alguma forma controversa sendo apoiada ou criticada por diferentes autores. Esta tese coloca o enfoque na identificação, para um Data Warehouse suportando um determinado nível de actividades operacionais, das razões que podem estar a desviar este tipo de actividades para o Data Warehouse. A tese é constituída por duas partes. A primeira parte partiu da revisão da literatura sobre os conceitos de Data Warehouse, as suas características, a sua utilização e o seu papel nas organizações, para a opinião geralmente aceite relativamente ao suporte de actividades operacionais em ambientes de Data Warehouse. A segunda parte descreve um único estudo de casos utilizado na busca de evidências de suporte operacional num ambiente de Data Warehouse e na identificação das possíveis razões que podem estar a forçar o Data Warehouse a suportar actividades operacionais. Os resultados da investigação mostram que, para o caso específico da organização estudada, existe evidência do suporte de actividades operacionais no Data Warehouse da organização e foi recolhida alguma evidência relativamente às razões que motivam esta localização de actividades. Por fim, serão analisadas as conclusões e oportunidades futuras de investigação.
APA, Harvard, Vancouver, ISO, and other styles
28

Neal, P. M. "Data acquisition for turbomachinery (MORDAS)." Thesis, University of Oxford, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.282719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Karlsson, Christoffer. "Control of critical data flows : Automated monitoring of insurance data." Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187733.

Full text
Abstract:
EU insurance companies are working to implement the Solvency II directive, which calls for a stronger focus on data quality and information controls. Information controls are procedures that can validate data at rest and data in motion to detect errors and anomalies. In this master thesis, a case study was carried out at AMF, a Swedish pension insurance company, to identify and investigate its critical data flows and the controls performed in the respective flows. One purpose of this project is to help AMF meet the data quality requirements from the Financial Supervisory Authority that it has to fulfill. The thesis was conducted at AMF between September and December 2015 and included tasks such as carrying out interviews, Enterprise Architecture modeling, analysis, prototyping, product evaluation and calculation of a business case. A gap analysis was carried out to analyze the need for change regarding existing information controls at AMF, in which different states of the company are documented and analyzed. The current state corresponds to the present situation at the company, including attributes to be improved, while the future state outlines the target condition that the company wants to achieve. A gap between the current state and the future state is identified, and the elements that make up the gap are presented in the gap description. Lastly, possible remedies for bridging the gap between the current and future state are presented. Furthermore, a prototype of an automated control tool from a company called Infogix has been implemented and analyzed regarding usability, governance and cost. A benefits evaluation was carried out on the information control tool to see whether an investment would be beneficial for AMF, using the PENG method, a Swedish model developed by three senior consultants that has been specially adjusted for the evaluation of IT investments. The evaluation showed that such an investment would become beneficial during the second year after the investment.
Försäkringsbolag i EU arbetar med införandet av Solvens II-direktivet som kräver att företag har ett större fokus på datakvalitet och informationskontroller. I detta examensarbete har en fältstudie utförts på AMF som är ett svenskt pensionsbolag. Arbetet har gått ut på att identifiera och undersöka kritiska dataflöden i företaget samt kontroller som utförs i dessa flöden. Ett syfte med arbetet var att hjälpa AMF att kunna påvisa att man uppfyller krav från finansinspektionen på datakvalitet och spårbarhet. Projektet utfördes under perioden september till december hösten 2015, vilket inkluderade arbetsuppgifter såsom intervjuer, Enterprise Architecture-modellering, implementering av prototyp, produktutvärdering samt kalkylering av ett business case.  En gap-analys har utförts för att analysera behovet av förändringar på de nuvarande informationskontrollerna som finns på AMF, där olika lägen har dokumenterats och analyserats. Nuläget motsvarar hur situationen ser ut på företaget i dagsläget och fokuserar på de attribut som man vill förbättra, medan önskat läge beskriver de mål som företaget vill uppnå. Ett gap mellan nuläge och önskat läge identifieras tillsammans med de faktorer som utgör skillnaden mellan dessa lägen presenteras. Till sist presenteras tänkbara åtgärder för att uppnå önskat läge. Som en del av detta examensarbete har en prototyp av ett automatiserat kontrollverktyg från ett företag som heter Infogix implementerats och utvärderas med avseende på användbarhet, styrning och kostnad. En nyttovärdering har utförts på kontrollverktyget för att undersöka huruvida en investering skulle vara gynnsam för AMF. Nyttovärderingen gjordes med hjälp av PENG, en svensk nyttovärderingsmodell utvecklad av tre ekonomer/IT-konsulter, som har anpassat speciellt för att bedöma IT-investeringar. Värderingen visade på att en sådan investering skulle komma att bli gynnsam under andra året efter att investeringen gjordes.
APA, Harvard, Vancouver, ISO, and other styles
30

Yellapantula, Sudha Ravali. "Synthesizing Realistic Data for Vision Based Drone-to-Drone Detection." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/91460.

Full text
Abstract:
In this thesis, we aimed to build a robust UAV (drone) detection algorithm through which one drone could detect another drone in flight. Though this was a straightforward object detection problem, the biggest challenge we faced for drone detection was the limited amount of drone images available for training. To address this issue, we used Generative Adversarial Networks, CycleGAN to be precise, for the generation of realistic-looking fake images which were indistinguishable from real data. CycleGAN is a classic example of the image-to-image translation technique, and we applied it in our situation, where synthetic images from one domain were transformed into another domain containing real data. The model, once trained, was capable of generating realistic-looking images from synthetic data without the presence of real images. Following this, we employed a state-of-the-art object detection model, YOLO (You Only Look Once), to build a drone detection model that was trained on the generated images. Finally, this model was compared against different datasets in order to evaluate its performance.
Master of Science
In recent years, technologies like deep learning and machine learning have seen many rapid developments. Among their many applications, object detection is one of the most widely used and well-established problems. In our thesis, we deal with a scenario where we have a swarm of drones, and our aim is for one drone to recognize another drone in its field of vision. As there was no drone image dataset readily available, we explored different ways of generating realistic data to address this issue. Finally, we proposed a solution to generate realistic images using deep learning techniques and trained an object detection model on it, evaluating how well it performed against other models.
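For readers unfamiliar with CycleGAN, the sketch below shows the generator objective it trains with: an adversarial term plus a cycle-consistency term. The modules G_ab, G_ba, D_a and D_b are hypothetical stand-ins for the networks; this is the published CycleGAN loss in outline, not the thesis code.

```python
import torch
import torch.nn as nn

# Sketch of the CycleGAN generator objective. G_ab maps the synthetic domain
# to the real domain and G_ba the reverse; D_a and D_b are the corresponding
# discriminators. All four are assumed to be defined elsewhere as nn.Module
# instances taking and returning image tensors.
adv_loss = nn.MSELoss()   # least-squares GAN loss
cyc_loss = nn.L1Loss()    # cycle-consistency loss

def generator_loss(G_ab, G_ba, D_a, D_b, real_a, real_b, lam=10.0):
    fake_b = G_ab(real_a)            # synthetic -> realistic
    fake_a = G_ba(real_b)            # realistic -> synthetic
    pred_b, pred_a = D_b(fake_b), D_a(fake_a)
    # Adversarial terms: fool each discriminator into predicting "real" (1).
    loss_adv = adv_loss(pred_b, torch.ones_like(pred_b)) \
             + adv_loss(pred_a, torch.ones_like(pred_a))
    # Cycle terms: translating there and back should recover the input image.
    loss_cyc = cyc_loss(G_ba(fake_b), real_a) + cyc_loss(G_ab(fake_a), real_b)
    return loss_adv + lam * loss_cyc
```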
APA, Harvard, Vancouver, ISO, and other styles
31

Nicholson, Alexander. "Rapid adaptive programming using image data." Access electronically, 2005. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20051104.151041/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Lam, Yan-ki Jacky. "Developmental normative data for the random gap detection test." Click to view the E-thesis via HKU Scholars Hub, 2005. http://lookup.lib.hku.hk/lookup/bib/B38279289.

Full text
Abstract:
Thesis (B.Sc)--University of Hong Kong, 2005.
"A dissertation submitted in partial fulfilment of the requirements for the Bachelor of Science (Speech and Hearing Sciences), The University of Hong Kong, June 30, 2005." Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
33

Kumar, Gaurev. "Data-driven models for reliability prognostics of gas turbines." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/106960.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, School of Engineering, Center for Computational Engineering, Computation for Design and Optimization Program, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 69-70).
This thesis develops three data-driven models of a commercially operating gas turbine, and applies inference techniques for reliability prognostics. The models focus on capturing feature signals (continuous state) and operating modes (discrete state) that are representative of the remaining useful life of the solid welded rotor. The first model derives its structure from a non-Bayesian parametric hidden Markov model. The second and third models are based on Bayesian nonparametric methods, namely the hierarchical Dirichlet process, and can be viewed as extensions of the first model. For all three approaches, the model structure is first prescribed, parameter estimation procedures are then discussed, and lastly validation and prediction results are presented, using proposed degradation metrics. All three models are trained using five years of data, and prediction algorithms are tested on a sixth year of data. Results indicate that model 3 is superior, since it is able to detect new operating modes, which the other models fail to do. The turbine is based on a sequential combustion design and operates in the 50 Hz wholesale electricity market. The rotor is the most critical asset of the machine and is subject to nonlinear loadings induced from three sources: i) day-to-day variations in total power generated by the turbine; ii) machine trips in high and low loading conditions; iii) downtimes due to scheduled maintenance and inspection events. These sources naturally lead to dynamics, where random (resp. forced) transitions occur due to switching in the operating mode (resp. trip and/or maintenance events). The degradation of the rotor is modeled by measuring the abnormality witnessed by the cooling air temperature within different modes. Generation companies can utilize these indicators for making strategic decisions such as maintenance scheduling and generation planning.
by Gaurev Kumar.
S.M.
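In the spirit of the first, parametric model, a hidden Markov model over feature signals can be fitted with an off-the-shelf library. The sketch below uses hmmlearn on synthetic temperature data and illustrates the modeling idea, not the thesis's implementation.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

# Hidden states stand in for operating modes; emissions are feature signals
# (here a synthetic cooling-air temperature trace with a degraded regime).
rng = np.random.default_rng(1)
healthy = rng.normal(300.0, 2.0, size=(500, 1))
degraded = rng.normal(310.0, 5.0, size=(100, 1))
X = np.vstack([healthy, degraded])

model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
model.fit(X)                     # train on historical data
modes = model.predict(X)         # decode operating modes
print("log-likelihood:", model.score(X))
print("inferred mode means:", model.means_.ravel())
# A drop in per-sample log-likelihood on held-out data, or time spent in an
# abnormal mode, can then serve as a simple degradation metric.
```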
APA, Harvard, Vancouver, ISO, and other styles
34

BARRETO, GISELE DE OLIVEIRA. "MASS BALANCE DATA RECONCILIATION OF THE URUCU- MANAUS GAS PIPELINE." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2015. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=26316@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
PROGRAMA DE SUPORTE À PÓS-GRADUAÇÃO DE INSTS. DE ENSINO
Se por um lado à privatização do setor de energia, que induz acirrada concorrência, tem estimulado a inovação tecnológica e a adoção de mecanismos de incentivos à eficiência operacional, a regulação do mercado introduz mecanismos de controle requerendo maior responsabilidade no uso consciente da energia de sorte a assegurar a eficiência energética e a proteção ambiental. Pressões de organizações ambientalistas internacionais e a crescente demanda por energia explicam a tendência mundial pelo uso de combustíveis fósseis mais limpos. O baixo nível de emissões e resíduos associados ao processo de combustão de gás natural qualifica esta commodity energética como um elemento estratégico para integrar a matriz energética de organizações e países comprometidos com a sustentabilidade global. O impacto econômico associado à medição de gás natural exige uma otimização do controle do balanço de massa no sistema de entrega. A aplicação da Metodologia de Reconciliação de Dados constitui o objetivo deste trabalho. A técnica provou ser uma ferramenta eficaz para a avaliação do balanço de massa em um gasoduto durante o período de operação associado ao transporte de gás natural. A natureza intrínseca do seu algoritmo de cálculo, que leva em conta a redundância nas medições, qualifica a metodologia para aumentar a confiabilidade da medição assim reduzindo a incerteza individual associada a cada grandeza física capaz de interferir na medição e identificar erros grosseiros. Fundamentado na avaliação metrológica do balanço de massa de um gasoduto brasileiro, os resultados do estudo permitem discutir a adequação da técnica proposta de reconciliação de dados. Dentre as conclusões do trabalho, foi possível mostrar que o uso da técnica de tratamento dos dados do gás não contado (unaccounted for gas) pode atingir valores inferiores a 0,3 porcento, comparando-se, assim, à tolerância preconizada em nível internacional.
If, on the one hand, privatization of the energy sector, which induces keen competition, has stimulated technological innovation and the adoption of incentive mechanisms for operational efficiency, regulation of the market introduces control mechanisms requiring greater responsibility in the conscious use of energy, so as to ensure energy efficiency and environmental protection. Pressure from international environmental organizations and the growing demand for energy explain the worldwide tendency toward the use of cleaner fossil fuels. The lower levels of emissions and residues associated with the combustion of natural gas qualify this energy commodity as a strategic element in the energy matrix of organizations and countries committed to global sustainability. The economic impact associated with the measurement of natural gas demands optimization in controlling the mass balance in the delivery system. Application of the Data Reconciliation Methodology constitutes the objective of this work. The technique proved to be an efficient tool for the evaluation of the mass balance in a gas pipeline over the period of operation associated with the transport of natural gas. The intrinsic nature of its calculation algorithm, which takes into account the redundancy of measurements, qualifies the methodology to increase the confidence of measurement, thereby reducing the individual uncertainty associated with each physical quantity capable of interfering with the measurement, and to identify gross errors. Based on the metrological evaluation of the mass balance of a Brazilian pipeline, the results of the study enable discussion of the adequacy of the proposed data reconciliation technique. Among the conclusions of the work, it was possible to demonstrate that the use of the technique in treating unaccounted-for gas data can achieve values lower than 0.3 percent, comparable with the tolerances advocated at the international level.
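The underlying computation can be illustrated with the textbook weighted least-squares form of data reconciliation, in which measurements are minimally adjusted to close a linear mass balance. The sketch below is a generic formulation with made-up meter readings, not the algorithm actually applied to the Urucu-Manaus data.

```python
import numpy as np

def reconcile(measured, sigma, A):
    """Weighted least-squares data reconciliation.

    Adjusts measurements m to satisfy the linear balance A @ x = 0 while
    minimizing sum(((x - m) / sigma)**2). Closed-form solution:
        x = m - V A^T (A V A^T)^-1 A m,   with V = diag(sigma**2).
    """
    m = np.asarray(measured, float)
    V = np.diag(np.asarray(sigma, float) ** 2)
    x = m - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ m)
    return x

# Toy pipeline balance: gas in = gas out 1 + gas out 2.
A = np.array([[1.0, -1.0, -1.0]])          # balance: x0 - x1 - x2 = 0
measured = np.array([100.0, 60.5, 41.0])   # raw meter readings (imbalanced)
sigma = np.array([1.0, 0.5, 0.5])          # meter uncertainties
x = reconcile(measured, sigma, A)
print("unaccounted-for gas before:", measured[0] - measured[1:].sum())
print("reconciled:", x, "residual imbalance:", A @ x)   # imbalance ~ 0
```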
APA, Harvard, Vancouver, ISO, and other styles
35

Alirezaie, Marjan. "Bridging the Semantic Gap between Sensor Data and Ontological Knowledge." Doctoral thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-45908.

Full text
Abstract:
The rapid growth of sensor data can potentially enable a better awareness of the environment for humans. In this regard, the interpretation of data needs to be human-understandable. For this, data interpretation may include semantic annotations that hold the meaning of numeric data. This thesis is about bridging the gap between quantitative data and qualitative knowledge to enrich the interpretation of data. There are a number of challenges which make the automation of the interpretation process non-trivial, including the complexity of sensor data, the amount of available structured knowledge and the inherent uncertainty in data. Under the premise that high-level knowledge is contained in ontologies, this thesis investigates the use of current techniques in ontological knowledge representation and reasoning to confront these challenges. Our research is divided into three phases, where the focus of the first phase is on the interpretation of data for domains which are semantically poor in terms of available structured knowledge. During the second phase, we studied publicly available ontological knowledge for the task of annotating multivariate data; our contribution in this phase concerns applying a diagnostic reasoning algorithm to available ontologies. Our studies during the last phase focused on the design and development of a domain-independent ontological representation model equipped with a non-monotonic reasoning approach for the purpose of annotating time-series data. Our last contribution relates to coupling the OWL-DL ontology with a non-monotonic reasoner. The experimental platforms used for validation consist of a network of sensors which include gas sensors whose generated data is complex. Secondary data sets include time-series medical signals representing physiological data, as well as a number of publicly available ontologies such as the NCBO BioPortal repository.
APA, Harvard, Vancouver, ISO, and other styles
36

Bährecke, Niklas. "Automatic Classification and Visualisation of Gas from Infrared Video Data." Thesis, KTH, Skolan för teknik och hälsa (STH), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-183546.

Full text
Abstract:
Optical gas imaging denotes the visualisation of gases by means of an infrared camera, which allows operators to quickly, easily, and safely scan a large area. It therefore plays a major role in the early detection and repair of gas leaks in various environments within the petrochemical industry, such as processing plants and pipelines, but also in production facilities and hospitals, helping to avert damage to the environment as well as to the health and safety of workers or inhabitants of nearby residential areas. The current generation of thermal gas cameras employs a so-called high-sensitivity mode, based on frame differencing, to increase the visibility of gas plumes. However, this method often results in image degradation through loss of orientation, distortion, and additional noise. Taking the increased prevalence and falling costs of IR gas cameras – entailing an increased number of inexperienced users – into consideration, a more intuitive and user-friendly system to visualise gas constitutes a useful feature for the next generation of IR gas cameras. A system that retains the original infrared video images and highlights the gas cloud, providing the user with a clear and distinct visualisation of gas on the camera's display, would be one example of such a visualisation system. This thesis discusses the design of such an automatic gas detection and visualisation framework based on machine learning and computer vision methods, where moving objects in video images are detected and classified as gas or non-gas based on appearance and spatiotemporal features. The main goal was to conduct a proof-of-concept study of this method, which included gathering examples for training a classifier as well as implementing the framework and evaluating several feature descriptors – both static and dynamic ones – with regard to their classification performance in gas detection in video images. Depending on the application scenario, the methods evaluated in this study are capable of reliably detecting gas.
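The frame-differencing front end that precedes classification can be sketched in a few lines. The snippet below flags moving regions in a hypothetical IR video file with OpenCV, leaving the gas/non-gas classifier as a placeholder comment; it is an illustration of the pipeline's shape, not the thesis implementation.

```python
import cv2

# "ir_video.avi" is a hypothetical input file; thresholds are illustrative.
cap = cv2.VideoCapture("ir_video.avi")
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)                      # frame differencing
    _, mask = cv2.threshold(diff, 15, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)         # merge fragments
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = [cv2.boundingRect(c) for c in contours
                  if cv2.contourArea(c) > 100]          # candidate plumes
    # Each candidate region would be passed to the trained gas/non-gas
    # classifier here, and confirmed detections highlighted on the original
    # IR frame rather than on the degraded difference image.
    prev = gray
cap.release()
```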
APA, Harvard, Vancouver, ISO, and other styles
37

Groero, Jaroslav. "East and West Germany after the Unification: The Wage Gap Analysis." Master's thesis, Vysoká škola ekonomická v Praze, 2014. http://www.nusl.cz/ntk/nusl-193373.

Full text
Abstract:
Under socialism, workers had their wages set by central planners. In my thesis, I use panel data from the SHARELIFE questionnaire to analyze how returns to East German human capital variables changed after the reunification in 1990. I also compare these returns to West German returns to human capital variables. Before 1990, the returns to experience and education were lower in East Germany than in West Germany. After the reunification, East German returns to experience obtained before 1990 and to education decreased. I find a significant decrease in returns to highly educated workers who spent 15 or more years in the East German educational system. East German returns to both human capital variables are smaller than West German ones before the reunification, and the difference is more pronounced after the reunification.
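The comparison of returns rests on a Mincer-style wage regression with East-Germany interaction terms. The sketch below shows that specification on synthetic data; the variable names and data are illustrative, not SHARELIFE.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Interacting education and experience with an East dummy lets the returns
# differ between East and West German workers.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "educ": rng.integers(8, 20, n),       # years of education
    "exper": rng.integers(0, 40, n),      # years of labor-market experience
    "east": rng.integers(0, 2, n),        # 1 = East German worker
})
df["log_wage"] = (0.08 * df.educ + 0.03 * df.exper
                  - df.east * (0.03 * df.educ + 0.02 * df.exper)
                  + rng.normal(0, 0.3, n))

fit = smf.ols("log_wage ~ (educ + exper) * east", data=df).fit()
print(fit.params)   # educ:east and exper:east estimate the East-West
                    # differences in returns to education and experience
```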
APA, Harvard, Vancouver, ISO, and other styles
38

Nandakumar, Neha. "Computational models of natural gas markets for gas-fired generators." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/108213.

Full text
Abstract:
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 69-72).
Climate change is a major factor reforming the world's energy landscape today, and as electricity consumes 40% of total energy, huge efforts are being undertaken to reduce the carbon footprint within the electricity sector. The electric sector has been taking steps to reform the grid: retiring carbon-intensive coal plants, increasing renewable penetration, and introducing cyber elements end-to-end for monitoring, estimating, and controlling devices, systems, and markets. Due to retirements of coal plants, the discovery of shale gas leading to low natural gas prices, and geopolitical motives to reduce dependence on foreign oil, natural gas is becoming a major fuel source for electricity around the United States. In addition, with increasingly intermittent renewable sources in the grid, there is a need for a readily available, clean, and flexible back-up fuel; natural gas is sought after in New England to serve this purpose as a reliable and guaranteed fuel at times when wind turbines and solar panels cannot produce. While research has been conducted advocating natural gas pipeline expansion projects to ensure this reliability, not enough attention has been paid to the overall market structure of the natural gas and electricity infrastructures, which also affects reliable delivery of gas and therefore efficient interdependency between the two infrastructures. This thesis explores the market structures in natural gas and electricity; the interdependence of natural gas and electricity prices as reliance on natural gas grows with the penetration of renewable energy resources (RER), whose intermittencies it complements; possible volatilities in these prices under varying RER penetration rates; and alternatives to existing market structures that improve reliability and reduce volatility in electricity and gas prices. In particular, the thesis will attempt to answer the following two questions: What will the generation mix look like in 2030, and how will this impact gas and electricity prices? How do Gas-Fired Generator (GFG) bids for gas change between 2015 and 2030? In order to answer these questions, a computational model is built using regression analysis tools and an auction model. Price, generation, and demand data from the New England region are used to determine these models.
by Neha Nandakumar.
S.M. in Technology and Policy
APA, Harvard, Vancouver, ISO, and other styles
39

van, Rijswijk David G. "An Automated Script to Acquire Gas Uptake Data from Molecular Simulation of Metal Organic Frameworks." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/22728.

Full text
Abstract:
Worldwide attention has turned to reducing the global carbon footprint. To this end, the scientific community has been working to improve many of the available methods of carbon capture and storage (CCS). CCS involves scrubbing flue gases of greenhouse gases and safely storing them deep underground. MOFs, a family of functionally tunable three-dimensional nanoporous frameworks, have been shown to adsorb gases with great selectivity and capacity. Investigating these frameworks using computational simulations, although faster than in-lab synthetic methods, involves a tedious and meticulous input preparation process which is subject to human error. This thesis presents Dave's Occupancy Automation Package (DOAP), software that provides a means to automatically determine the gas uptake of many three-dimensional frameworks. Given atomic coordinates for a unit simulation cell, the software performs the necessary calculations to construct and execute a Grand Canonical Monte Carlo (GCMC) simulation, determining the gas uptake in a metal organic framework. Additionally, an analysis of different convergence assessment tests for describing the end point of the GCMC simulation is presented.
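At the heart of what such a script automates is the GCMC move itself. The sketch below shows the standard insertion/deletion acceptance rules; the energy change, chemical potential and thermal de Broglie volume are placeholders rather than DOAP's actual interface.

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def accept_insertion(dU, N, V, T, mu, lambda3):
    """Metropolis acceptance for inserting a gas molecule into volume V.

    Standard GCMC rule: min(1, V / (lambda3 * (N + 1)) * exp((mu - dU)/kT)),
    where dU is the trial move's energy change, mu the chemical potential
    and lambda3 the cubed thermal de Broglie wavelength.
    """
    arg = V / (lambda3 * (N + 1)) * np.exp((mu - dU) / (kB * T))
    return np.random.random() < min(1.0, arg)

def accept_deletion(dU, N, V, T, mu, lambda3):
    """Metropolis acceptance for deleting one of the N adsorbed molecules."""
    arg = lambda3 * N / V * np.exp(-(mu + dU) / (kB * T))
    return np.random.random() < min(1.0, arg)

# One trial insertion at 298 K in a 1 nm^3 cell (all numbers are placeholders;
# a real code computes dU from force-field framework-gas interactions).
print(accept_insertion(dU=-1e-21, N=10, V=1e-27, T=298.0,
                       mu=-2e-20, lambda3=1e-31))
```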
APA, Harvard, Vancouver, ISO, and other styles
40

Kahn, Daniel Scott. "The Blake Ridge: a study of multichannel seismic reflection data /." Thesis, Available online, Georgia Institute of Technology, 2004:, 2004. http://etd.gatech.edu/theses/available/etd-06072004-131223/unrestricted/kahn%5Fdaniel%5Fs%5F200405%5Fms.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Nimblett, Jillian Nicole. "Characterizing the accumulation and distribution of gas hydrate in marine sediments using numerical models and seismic data." Diss., Available online, Georgia Institute of Technology, 2004:, 2003. http://etd.gatech.edu/theses/available/etd-04072004-180126/unrestricted/nimblett%5Fjillian%5Fn%5F200312%5Fphd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Nelson, Wade, and Diana Shurtleff. "Bridging The Gap Between Telemetry and the PC." International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615216.

Full text
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada
The explosive use and extensive development of software and hardware for the IBM PC and PC clones over the past few years have positioned the PC as one of many viable alternatives for system designers configuring systems for both data acquisition and data analysis. Hardware abounds for capturing signals to be digitized and analyzed by software developed for the PC. Communication software has improved to the point where system developers can easily link instrumentation devices together to form integrated test environments for analyzing and displaying data. Telemetry systems, notably those developed for lab calibration and ground station environments, are one of many applications which can profit from the rapid development of data acquisition techniques for the PC. Recently developed for the ADS100A telemetry processor is a data acquisition module which allows the system to be linked into the PC world. The MUX-I/O module was designed to allow the PC access to telemetry data acquired through the ADS100A, as well as to provide a method by which data can be input into the telemetry environment from a host PC or an equivalent RS-232 or GPIB interface. Signals captured and digitized by the ADS100A can be passed on to the PC for further processing and/or report generation. Providing interfaces of this form to the PC greatly enhances the functionality and scope of the abilities already provided by the ADS100A as one of the major front-end processors used in telemetry processing today. The MUX-I/O module helps "bridge the gap" between telemetry and the PC in an ever-increasing demand for improving the quantity and quality of processing power required by today's telemetry environment. This paper focuses on two distinct topics: how to transfer data to and from the PC, and what off-the-shelf software is available to provide communication links and analysis of incoming data. Major areas of discussion include software protocols, pre- vs. post-processing, static vs. dynamic processing environments, and the major data analysis and acquisition packages available for the PC today, such as DaDisp and Lotus Measure, which aid the system designer in analyzing and displaying telemetry data. Novel applications of the telemetry-to-PC link are discussed.
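On the PC side, pulling records over such a serial link is straightforward. The sketch below uses pyserial with a hypothetical port name, command set and ASCII record format, since the real ADS100A framing is defined by its own documentation.

```python
import serial  # pip install pyserial

# Hypothetical PC side of an RS-232 telemetry link such as the MUX-I/O
# module provides. Port name, baud rate, commands and record format are
# assumptions for illustration only.
with serial.Serial("COM1", baudrate=9600, timeout=1.0) as port:
    port.write(b"START\r\n")              # hypothetical acquisition command
    samples = []
    for _ in range(100):
        line = port.readline().decode("ascii", errors="replace").strip()
        if not line:
            continue                      # timeout with no data
        channel, value = line.split(",")  # e.g. "TEMP1,23.4"
        samples.append((channel, float(value)))
    port.write(b"STOP\r\n")
print(f"captured {len(samples)} samples for post-processing on the PC")
```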
APA, Harvard, Vancouver, ISO, and other styles
43

Mattila, Marianne. "Synthetic Image Generation Using GANs : Generating Class Specific Images of Bacterial Growth." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176402.

Full text
Abstract:
Mastitis is the most common disease affecting Swedish dairy cows. Automatic image classification can be useful for quickly classifying the bacteria causing this inflammation, in turn making it possible to start treatment more quickly. However, training an automatic classifier relies on the availability of data. Data collection can be a slow process, and GANs are a promising way to generate synthetic data that adds plausible samples to an existing data set. The purpose of this thesis is to explore the usefulness of GANs for generating images of bacteria. This was done by researching the existing literature on the subject, implementing a GAN, and evaluating the generated images. A cGAN capable of generating class-specific bacteria was implemented and then improved. The images generated by the cGAN were evaluated using visual examination, rapid scene categorization, and an expert interview regarding the generated images. While the cGAN was able to replicate certain features of the real images, it failed in crucial aspects such as symmetry and detail. It is possible that other GAN variants may be better suited to the task. Lastly, the results highlight the challenges of evaluating GANs with current evaluation methods.
APA, Harvard, Vancouver, ISO, and other styles
44

Bansal, Reeshidev. "Discrimination and Enhancement of Fracture Signals on Surface Seismic Data." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/33336.

Full text
Abstract:
Fracture patterns control flow and transport properties in a tight gas reservoir and therefore play a great role in siting production wells. Hence, it is very important that the exact location and orientation of fractures or fracture swarms are known. Numerical models show that fractures may be manifested on seismograms as discrete events. A number of data processing workflows were designed and examined to enhance these fracture signals and to suppress the reflections in seismic data. The workflows were first tested on a 2D synthetic data set and then applied to 3D field data from the San Juan Basin in New Mexico. All these workflows combine conventional processing tools, which makes them easily applicable. The use of conventional P-wave data may also make this approach to locating fractures more economical than other currently available technology, which often requires an S-wave survey or computationally intensive inversion of the data. Diode filtering and dip filtering in the common-offset domain yield good results and work very well in the presence of flat reflectors. The NMO-dip filter depends on the NMO velocity of the subsurface, but removes both flat and slightly dipping reflectors without affecting the fracture signals. Prior application of dip-moveout correction (DMO) did not make any difference to the reflections, but added some incoherent noise to the data. The eigenvector filter performed very well on flat or near-flat reflectors and left the fracture signals almost intact, but introduced some incoherent noise in the presence of steeply dipping reflectors. Harlan's scheme and Radon filtering are very sensitive with regard to parameter selection, but perform exceptionally well on flat or near-flat reflectors. The dip filter, eigenvector filter, and Radon filter were also tested on 3D land data. The dip filter and eigenvector filter suppressed strong reflections with slight perturbations to the fracture signals. The Radon filter did not produce a satisfactory result due to the small residual moveout difference between reflectors and fracture signals.
Master of Science
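The eigenvector filtering idea can be illustrated with a generic eigenimage (SVD) filter: flat reflectors dominate the leading singular components of a time-by-trace gather, so removing a low-rank reconstruction suppresses them. The sketch below uses a toy gather and is not the processing flow applied to the San Juan Basin data.

```python
import numpy as np

def eigenimage_filter(gather, k=1):
    """Remove the k most laterally coherent components from a seismic gather.

    Flat reflectors dominate the first singular vectors of a (time x trace)
    matrix; subtracting the rank-k reconstruction suppresses them while
    leaving discrete, localized fracture signals largely intact.
    """
    U, s, Vt = np.linalg.svd(gather, full_matrices=False)
    flat = (U[:, :k] * s[:k]) @ Vt[:k, :]
    return gather - flat

# Toy gather: a flat reflector plus a weak, localized "fracture" event.
nt, nx = 200, 60
gather = np.zeros((nt, nx))
gather[100, :] = 1.0                 # flat reflector across all traces
gather[140, 28:32] = 0.4             # localized fracture signal
filtered = eigenimage_filter(gather, k=1)
print("reflector energy left:", np.abs(filtered[100]).max())
print("fracture signal left: ", np.abs(filtered[140]).max())
```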
APA, Harvard, Vancouver, ISO, and other styles
45

Park, Soyoun. "Penalized method based on representatives and nonparametric analysis of gap data." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37307.

Full text
Abstract:
When there are a large number of predictors and few observations, building a regression model to explain the behavior of a response variable, such as a patient's medical condition, is very challenging. This is a "p ≫ n" variable selection problem encountered often in modern applied statistics and data mining. Chapter one of this thesis proposes a rigorous procedure which groups predictors into clusters of "highly correlated" variables, selects a representative from each cluster, and uses a subset of the representatives for regression modeling. The proposed Penalized method based on Representatives (PR) extends the Lasso to p ≫ n data with highly correlated variables, building a sparse model that is practically interpretable while maintaining prediction quality. Moreover, we provide the PR-Sequential Grouped Regression (PR-SGR) to make computation of the PR procedure efficient. Simulation studies show the proposed method outperforms existing methods such as the Lasso/Lars. A real-life example from a mental health diagnosis illustrates the applicability of the PR-SGR. In the second part of the thesis, we study the analysis of time-to-event data, called gap data, in which missing time intervals (gaps) may occur prior to the first observed event time. If a gap occurs prior to the first observed event, then the first observed event may or may not be the first true event. This incomplete knowledge makes gap data different from the well-studied regular interval-censored data. We propose a Non-Parametric Estimate for the Gap data (NPEG) to estimate the survival function for the first true event time, derive its analytic properties and demonstrate its performance in simulations. We also extend the Imputed Empirical Estimating method (IEE), an existing nonparametric method for gap data with at most one gap, to handle gap data with multiple gaps.
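The cluster-then-represent idea behind PR can be sketched with standard tools: hierarchically cluster predictors on a correlation distance, keep one representative per cluster, and run the Lasso on the representatives. The snippet below illustrates this on synthetic p ≫ n data and is not the PR/PR-SGR algorithm itself.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 50, 200                                # p >> n setting
base = rng.normal(size=(n, 20))               # 20 latent signals
X = np.repeat(base, 10, axis=1) + 0.1 * rng.normal(size=(n, p))
y = 2 * base[:, 0] - base[:, 1] + rng.normal(size=n)

# Cluster predictors on 1 - |correlation|.
corr = np.corrcoef(X, rowvar=False)
dist = np.clip(1 - np.abs(corr), 0.0, None)
Z = linkage(dist[np.triu_indices(p, 1)], method="average")
labels = fcluster(Z, t=0.3, criterion="distance")

# Representative = the member most correlated with its cluster's mean.
reps = []
for c in np.unique(labels):
    idx = np.flatnonzero(labels == c)
    center = X[:, idx].mean(axis=1)
    scores = [abs(np.corrcoef(X[:, j], center)[0, 1]) for j in idx]
    reps.append(idx[int(np.argmax(scores))])

model = LassoCV(cv=5).fit(X[:, reps], y)      # Lasso on representatives only
print(f"{len(reps)} representatives, {np.sum(model.coef_ != 0)} selected")
```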
APA, Harvard, Vancouver, ISO, and other styles
46

Diallo, Ousmane Nasr. "A data analytics approach to gas turbine prognostics and health management." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/42845.

Full text
Abstract:
As a consequence of the recent deregulation of the electrical power production industry, there has been a shift in the traditional ownership of power plants and the way they are operated. To hedge their business risks, many new private entrepreneurs enter into long-term service agreements (LTSA) with third parties for their operation and maintenance activities. As the major LTSA providers, original equipment manufacturers have invested huge amounts of money to develop preventive maintenance strategies to minimize the occurrence of costly unplanned outages resulting from failures of the equipment covered under LTSA contracts. As a matter of fact, a recent study by the Electric Power Research Institute estimates the cost benefit of preventing a failure of a General Electric 7FA or 9FA technology compressor at $10 to $20 million. Therefore, in this dissertation, a two-phase data analytics approach is proposed that uses the existing gas path and vibration monitoring sensor data, first to develop a proactive strategy that systematically detects and validates catastrophic failure precursors so as to avoid the failure, and second to estimate the residual time to failure of the unhealthy items. For the first part of this work, the time-frequency technique of the wavelet packet transform is used to de-noise the noisy sensor data. Next, the time-series signal of each sensor is decomposed in a multi-resolution analysis to extract its features. After that, probabilistic principal component analysis is applied as a data fusion technique to reduce the potentially correlated multi-sensor measurements to a few uncorrelated principal components. The last step of the failure precursor detection methodology, the anomaly detection decision, is itself a multi-stage process. The principal components obtained from the data fusion step are first combined into a one-dimensional reconstructed signal representing the overall health of the monitored systems. Then, two damage indicators of the reconstructed signal are defined and monitored for defects using a statistical process control approach. Finally, the Bayesian evaluation method for hypothesis testing is applied to a computed threshold to test for deviations from the healthy band. To model the residual time to failure, the anomaly severity index and the anomaly duration index are defined as defect characteristics. Two modeling techniques are investigated for the prognostication of the survival time after an anomaly is detected: a deterministic regression approach, and parametric approximation of the non-parametric Kaplan-Meier estimator. It is established that the deterministic regression provides poor predictions. The non-parametric survival data analysis technique of the Kaplan-Meier estimator provides the empirical survivor function of a data set comprised of both non-censored and right-censored data. Though powerful, because no lifetime distribution is assumed a priori, the Kaplan-Meier result lacks the flexibility to be transplanted to other units of a given fleet. The parametric analysis of survival data is performed with two popular failure analysis distributions: the exponential distribution and the Weibull distribution. The conclusion from the parametric analysis of the Kaplan-Meier plot is that the larger the data set, the more accurate the prognostication ability of the residual-time-to-failure model.
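The first, de-noising stage can be illustrated with a simplified discrete-wavelet analogue of the wavelet packet step. The sketch below applies soft universal thresholding to a synthetic trace and stands in for, rather than reproduces, the dissertation's procedure.

```python
import numpy as np
import pywt  # pip install PyWavelets

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet denoising with the universal threshold.

    Decompose, shrink the detail coefficients, reconstruct. The noise scale
    is estimated from the finest detail band (median/0.6745 rule).
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Toy sensor trace: a slow oscillation buried in noise.
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.default_rng(0).normal(size=t.size)
denoised = wavelet_denoise(noisy)
print("RMSE before:", np.sqrt(np.mean((noisy - clean) ** 2)),
      "after:", np.sqrt(np.mean((denoised - clean) ** 2)))
```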
APA, Harvard, Vancouver, ISO, and other styles
47

Moosavi, Seyyed Ali. "TECTAS : bridging the gap between collaborative tagging systems and structured data." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/29554.

Full text
Abstract:
Ontologies are a core building block of the emerging semantic web, and taxonomies, which contain class-subclass relationships between concepts, are a key component of ontologies. A taxonomy that relates the tags in a collaborative tagging system makes the system's underlying structure easier to understand. Automatic construction of taxonomies from various data sources, such as text data and collaborative tagging systems, has been an interesting topic in the field of data mining. This thesis introduces a new algorithm for building a taxonomy of keywords from tags in collaborative tagging systems. The algorithm is also capable of detecting has-a relationships between tags. The proposed method, the TECTAS algorithm, uses association rule mining to detect is-a relationships between tags and can be used in an automatic or semi-automatic framework. The TECTAS algorithm is based on the hypothesis that users tend to assign both "child" and "parent" tags to a resource. The proposed method leverages association rule mining algorithms, bi-gram pruning using search engines, discovery of relationships when pairs of tags have a common child, and lexico-syntactic patterns to detect meronyms. In addition to proposing the TECTAS algorithm, several experiments are reported using four real data sets: Del.icio.us, LibraryThing, CiteULike, and IMDb. Based on these experiments, the following topics are addressed in this thesis: (1) verifying the necessity of building domain-specific taxonomies; (2) analyzing the tagging behavior of users in collaborative tagging systems; (3) verifying the effectiveness of our algorithm compared to previous approaches; (4) using additional quality and richness metrics for the evaluation of automatically extracted taxonomies.
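The co-occurrence hypothesis behind TECTAS can be made concrete in a few lines: if confidence(child -> parent) is high while confidence(parent -> child) is low, the asymmetry suggests an is-a edge. The sketch below computes these confidences over a handful of hypothetical tagged resources; the thresholds are illustrative, and the full TECTAS pipeline adds bi-gram pruning, common-child discovery and meronym detection.

```python
from collections import Counter
from itertools import combinations

resources = [                     # hypothetical tagged resources
    {"python", "programming"}, {"python", "programming", "web"},
    {"java", "programming"}, {"programming"}, {"python"},
    {"java", "programming", "web"}, {"programming", "web"},
]

tag_count = Counter(t for r in resources for t in r)
pair_count = Counter(frozenset(p) for r in resources
                     for p in combinations(sorted(r), 2))

def confidence(a, b):
    """Support of {a, b} divided by support of {a}: an estimate of P(b | a)."""
    return pair_count[frozenset((a, b))] / tag_count[a]

for child, parent in [("python", "programming"), ("java", "programming")]:
    fwd, back = confidence(child, parent), confidence(parent, child)
    if fwd >= 0.6 and back <= 0.5:      # illustrative thresholds
        print(f"{child} is-a {parent} (conf {fwd:.2f} vs {back:.2f})")
```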
APA, Harvard, Vancouver, ISO, and other styles
48

Hitchcock, Jonathan James. "Automated processing and analysis of gas chromatography/mass spectrometry screening data." Thesis, University of Bedfordshire, 2009. http://hdl.handle.net/10547/134940.

Full text
Abstract:
The work presented is a substantial addition to the established methods of analysing the data generated by gas chromatography and low-resolution mass spectrometry. It has applications where these techniques are used on a large scale for screening complex mixtures, including urine samples for sports drug surveillance. The analysis of such data is usually automated to detect peaks in the chromatograms and to search a library of mass spectra of banned or unwanted substances. The measured mass spectra are usually not exactly the same as those in the library, so to avoid false negatives the search must report many doubtful matches. Nearly all the samples in this type of screening are actually negative, so the process of checking the results is tedious and time-consuming. A novel method, called scaled subtraction, takes each scan from the test sample and subtracts a mass spectrum taken from a second, similar sample. The aim is that the signal from any substance common to the two samples will be eliminated. Provided that the second sample does not contain the specified substances, any which are present in the first sample can be more easily detected in the subtracted data. The spectrum being subtracted is automatically scaled to allow for compounds that are common to both samples but present at different concentrations. Scaled subtraction is implemented as part of a systematic approach to preprocessing the data. This includes a new spectrum-based alignment method that precisely adjusts the retention times so that corresponding scans of the second sample can be chosen for the subtraction. The approach includes the selection of samples based on their chromatograms; for this, new measures of similarity or dissimilarity are defined. The thesis presents the theoretical foundation for such measures based on mass spectral similarity. A new type of difference plot can highlight significant differences. The approach has been tested, with the encouraging result that there are fewer than half as many false matches as when the library search is applied to the original data. True matches of compounds of interest are still reported by the library search of the subtracted data.
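The core of scaled subtraction is easily sketched: pick a scale factor so that shared material cancels, subtract, and clip negative intensities. In the snippet below the factor is chosen by least squares, which is an assumption on our part; the thesis's actual scaling rule may differ.

```python
import numpy as np

def scaled_subtraction(scan, reference):
    """Subtract a scaled reference spectrum from a test-sample scan.

    The scale factor alpha is chosen by least squares so that material common
    to both samples cancels even when concentrations differ; negative
    residual intensities are clipped to zero.
    """
    alpha = np.dot(scan, reference) / np.dot(reference, reference)
    return np.clip(scan - alpha * reference, 0.0, None)

# Toy spectra over 100 m/z channels: a shared matrix compound at different
# concentrations, plus a target substance only in the test sample.
rng = np.random.default_rng(2)
common = rng.random(100)
target = np.zeros(100)
target[40] = 5.0
test_scan = 1.8 * common + target + 0.05 * rng.random(100)
reference = 1.0 * common + 0.05 * rng.random(100)

residual = scaled_subtraction(test_scan, reference)
print("strongest residual channel:", residual.argmax())   # ~40, the target
```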
APA, Harvard, Vancouver, ISO, and other styles
49

Pitt, Joseph. "Novel methods to constrain regional greenhouse gas fluxes using aircraft data." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/novel-methods-to-constrain-regional-greenhouse-gas-fluxes-using-aircraft-data(e9aea30c-dd81-43c6-917b-27e22b32352f).html.

Full text
Abstract:
Anthropogenically induced changes to the Earth's climate system are widely accepted to be one of the greatest threats to the sustainable future of humanity. Greenhouse gas emissions constitute the largest driving factor behind these changes, and with annual emissions still increasing, further perturbation is projected. Accurate quantification of these emissions, broken down both spatially and sectorally, is vitally important in guiding effective emission-reduction policy at both national and international levels. This thesis focusses on methods to improve top-down estimates of greenhouse gas emissions within the UK, using data sampled on board the UK atmospheric research aircraft. Novel instrumentation and analytical techniques are presented and evaluated, based on measurements made as part of the GAUGE (Greenhouse gAs UK and Global Emissions) and MAMM (Methane and other greenhouse gases in the Arctic: Measurements, process studies and Modelling) projects. A new quantum cascade laser absorption spectrometer (QCLAS) for measuring CH4 and N2O on board the aircraft has been characterised. Its performance was evaluated over 17 flights during summer 2016, and a sensitivity to changes in aircraft cabin pressure was observed. A new calibration procedure was derived to minimise the effect of this sensitivity on the data, and the impact of this procedure was quantified through analysis of in-flight target cylinder measurements and comparison against simultaneous CH4 measurements made using a previously characterised analyser. The impact of water vapour on the retrievals was also investigated, with superior results obtained by directly including line broadening due to water vapour in the mole fraction retrieval algorithm. Applying the new calibration procedure to the data, total 1-sigma uncertainties of 2.47 ppb for CH4 and 0.54 ppb for N2O were calculated for 1 Hz measurements. The British Isles CH4 flux was derived for a case study on 12 May 2015, using aircraft and ground-based sampling together with local dispersion modelling, global chemical transport modelling and a composite inventory of anthropogenic and natural sources. A new multiple-variable regression technique was used to compare measured and modelled CH4 mole fractions, and to derive scale factors used to estimate posterior fluxes from prior inventory values. The total British Isles CH4 flux was calculated to lie in the range 67 kg/s to 121 kg/s, with a central estimate of 103 kg/s based on an assessment of the most likely apportionment of model uncertainty. A further case study measured CO2, CH4 and CO fluxes from London and surrounding urban areas using a mass balance technique. Relative to a similar study in 2012, fluxes were lower by a factor of ~0.7 for CH4 and ~0.8 for CO, and higher by a factor of ~1.3 for CO2. Likely sources of difference between the derived fluxes, as well as the overall utility of this technique, have been assessed.
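For a sense of how a mass-balance flux estimate of the kind described here is assembled, the following simplified sketch integrates a downwind mole-fraction enhancement over a vertical flight plane, weighted by the wind component perpendicular to that plane. The grid, the air molar density, and the unit conversions are illustrative assumptions, not the thesis's actual processing chain.

```python
import numpy as np

def mass_balance_flux(enhancement_ppb, wind_speed_ms, dx_m, dz_m,
                      air_density_mol_m3=41.6, molar_mass_g=16.04):
    """Estimate an emission rate (kg/s) from a downwind flight plane.

    enhancement_ppb: 2-D array (levels x along-track points) of mole
        fraction above background, in ppb.
    wind_speed_ms: perpendicular wind component per grid cell (m/s).
    dx_m, dz_m: horizontal and vertical grid spacing (m).
    Defaults assume near-surface air density and the molar mass of CH4.
    """
    # Convert the ppb enhancement to moles of trace gas per m^3 of air.
    excess_mol_m3 = enhancement_ppb * 1e-9 * air_density_mol_m3
    # Flux through each cell = concentration * wind speed * cell area,
    # summed over the whole plane.
    flux_mol_s = np.sum(excess_mol_m3 * wind_speed_ms) * dx_m * dz_m
    return flux_mol_s * molar_mass_g / 1000.0  # g/mol -> kg/s

# Toy plume: a uniform 50 ppb CH4 enhancement over a 10 km wide, 1 km
# deep plane, advected at 5 m/s, gives roughly 1.7 kg/s.
enh = np.full((10, 100), 50.0)   # ppb
wind = np.full((10, 100), 5.0)   # m/s
print(mass_balance_flux(enh, wind, dx_m=100.0, dz_m=100.0))
```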
APA, Harvard, Vancouver, ISO, and other styles
50

Pathirathna, Kuruppulage Asela Buddhika. "Gas turbine thermodynamic and performance analysis methods using available catalog data." Thesis, Högskolan i Gävle, Avdelningen för bygg- energi- och miljöteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-17482.

Full text
APA, Harvard, Vancouver, ISO, and other styles