Dissertations / Theses on the topic 'Data warehousing Case studies'

To see the other types of publications on this topic, follow the link: Data warehousing Case studies.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Data warehousing Case studies.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Mathew, Avin D. "Asset management data warehouse data modelling." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/19310/1/Avin_Mathew_Thesis.pdf.

Full text
Abstract:
Data are the lifeblood of an organisation, being employed by virtually all business functions within a firm. Data management, therefore, is a critical process in prolonging the life of a company and determining the success of each of an organisation’s business functions. The last decade and a half has seen data warehousing rising in priority within corporate data management as it provides an effective supporting platform for decision support tools. A cross-sectional survey conducted by this research showed that data warehousing is starting to be used within organisations for their engineering asset management; however, industry uptake is slow and has much room for development and improvement. This conclusion is also evidenced by the lack of systematic scholarly research within asset management data warehousing as compared to data warehousing for other business areas. This research is motivated by the lack of dedicated research into asset management data warehousing and attempts to provide original contributions to the area, focussing on data modelling. Integration is a fundamental characteristic of a data warehouse and facilitates the analysis of data from multiple sources. While several integration models exist for asset management, these cover only select areas of asset management. This research presents a novel conceptual data warehousing data model that integrates the numerous asset management data areas. The comprehensive ethnographic modelling methodology involved a diverse set of inputs (including data model patterns, standards, information system data models, and business process models) that described asset management data. Used as an integrated data source, the conceptual data model was verified by more than 20 experts in asset management and validated against four case studies. A large proportion of asset management data is stored in a relational format due to the maturity and pervasiveness of relational database management systems.
Data warehousing offers the alternative approach of structuring data in a dimensional format, which suggests increased data retrieval speeds in addition to reduced analysis complexity for end users. To investigate the benefits of moving asset management data from a relational to a multidimensional format, this research presents an innovative relational vs. multidimensional model evaluation procedure. To undertake an equitable comparison, the compared multidimensional models are derived from an asset management relational model; as such, this research presents an original multidimensional modelling derivation methodology for asset management relational models. Multidimensional models were derived from the relational models in the asset management data exchange standard, MIMOSA OSA-EAI. The multidimensional and relational models were compared through a series of queries. It was discovered that multidimensional schemas reduced the data size and subsequently the data insertion time, decreased the complexity of query conceptualisation, and improved query execution performance across a range of query types. To facilitate the quicker uptake of these data warehouse multidimensional models within organisations, an alternative modelling methodology was investigated. This research presents an innovative approach of using a case-based reasoning methodology for data warehouse schema design. Using unique case representation and indexing techniques, the system also uses a business vocabulary repository to augment case searching and adaptation. The system was validated through a case study in which multidimensional schema design speed and accuracy were measured. It was found that the case-based reasoning system provided a marginal benefit, with greater benefits gained when confronted with more difficult scenarios.
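The relational-versus-dimensional contrast described above can be sketched with a toy star schema: one fact table joined to small dimension tables, queried with one join per dimension. All table names, columns, and figures below are invented for illustration; they are not drawn from MIMOSA OSA-EAI or the thesis itself.

```python
import sqlite3

# Hypothetical star schema for asset condition readings: a fact table
# keyed to an asset dimension and a date dimension.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_asset (asset_key INTEGER PRIMARY KEY, asset_name TEXT, site TEXT);
CREATE TABLE dim_date  (date_key  INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE fact_reading (
    asset_key INTEGER REFERENCES dim_asset(asset_key),
    date_key  INTEGER REFERENCES dim_date(date_key),
    vibration_mm_s REAL
);
INSERT INTO dim_asset VALUES (1, 'Pump A', 'Plant 1'), (2, 'Pump B', 'Plant 2');
INSERT INTO dim_date  VALUES (1, 2008, 1), (2, 2008, 2);
INSERT INTO fact_reading VALUES (1, 1, 2.4), (1, 2, 3.1), (2, 1, 1.8);
""")

# A typical dimensional query: aggregate the fact table, slicing by
# dimension attributes -- one direct join per dimension, no multi-hop
# join paths as in a normalised relational model.
rows = cur.execute("""
SELECT a.site, d.year, AVG(f.vibration_mm_s)
FROM fact_reading f
JOIN dim_asset a ON a.asset_key = f.asset_key
JOIN dim_date  d ON d.date_key  = f.date_key
GROUP BY a.site, d.year
ORDER BY a.site
""").fetchall()
print(rows)
```

The simpler query conceptualisation the thesis measures comes from exactly this shape: every analysis is "aggregate the facts, grouped by dimension attributes".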
APA, Harvard, Vancouver, ISO, and other styles
2

Mathew, Avin D. "Asset management data warehouse data modelling." Queensland University of Technology, 2008. http://eprints.qut.edu.au/19310/.

Full text
Abstract:
Data are the lifeblood of an organisation, being employed by virtually all business functions within a firm. Data management, therefore, is a critical process in prolonging the life of a company and determining the success of each of an organisation’s business functions. The last decade and a half has seen data warehousing rising in priority within corporate data management as it provides an effective supporting platform for decision support tools. A cross-sectional survey conducted by this research showed that data warehousing is starting to be used within organisations for their engineering asset management; however, industry uptake is slow and has much room for development and improvement. This conclusion is also evidenced by the lack of systematic scholarly research within asset management data warehousing as compared to data warehousing for other business areas. This research is motivated by the lack of dedicated research into asset management data warehousing and attempts to provide original contributions to the area, focussing on data modelling. Integration is a fundamental characteristic of a data warehouse and facilitates the analysis of data from multiple sources. While several integration models exist for asset management, these cover only select areas of asset management. This research presents a novel conceptual data warehousing data model that integrates the numerous asset management data areas. The comprehensive ethnographic modelling methodology involved a diverse set of inputs (including data model patterns, standards, information system data models, and business process models) that described asset management data. Used as an integrated data source, the conceptual data model was verified by more than 20 experts in asset management and validated against four case studies. A large proportion of asset management data is stored in a relational format due to the maturity and pervasiveness of relational database management systems.
Data warehousing offers the alternative approach of structuring data in a dimensional format, which suggests increased data retrieval speeds in addition to reduced analysis complexity for end users. To investigate the benefits of moving asset management data from a relational to a multidimensional format, this research presents an innovative relational vs. multidimensional model evaluation procedure. To undertake an equitable comparison, the compared multidimensional models are derived from an asset management relational model; as such, this research presents an original multidimensional modelling derivation methodology for asset management relational models. Multidimensional models were derived from the relational models in the asset management data exchange standard, MIMOSA OSA-EAI. The multidimensional and relational models were compared through a series of queries. It was discovered that multidimensional schemas reduced the data size and subsequently the data insertion time, decreased the complexity of query conceptualisation, and improved query execution performance across a range of query types. To facilitate the quicker uptake of these data warehouse multidimensional models within organisations, an alternative modelling methodology was investigated. This research presents an innovative approach of using a case-based reasoning methodology for data warehouse schema design. Using unique case representation and indexing techniques, the system also uses a business vocabulary repository to augment case searching and adaptation. The system was validated through a case study in which multidimensional schema design speed and accuracy were measured. It was found that the case-based reasoning system provided a marginal benefit, with greater benefits gained when confronted with more difficult scenarios.
APA, Harvard, Vancouver, ISO, and other styles
3

Haneuse, Sebastian J. P. A. "Ecological studies using supplemental case-control data /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/9595.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bhansali, Neera. "Strategic Alignment in Data Warehouses: Two Case Studies." RMIT University. Business Information Technology, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080108.150431.

Full text
Abstract:
This research investigates the role of strategic alignment in the success of data warehouse implementation. Data warehouse technology is inherently complex and requires significant capital investment and development time, yet many organizations fail to realize its full benefits. While failure to realize benefits has been attributed to numerous causes, ranging from technical to organizational reasons, the underlying strategic alignment issues have not been studied. This research confirms, through two case studies, that the successful adoption of a data warehouse depends on its alignment to business plans and strategy. The research found that the factors critical to the alignment of data warehouses to business strategy and plans are (a) joint responsibility between data warehouse and business managers, (b) alignment between the data warehouse plan and the business plan, (c) business user satisfaction, (d) flexibility in data warehouse planning and (e) technical integration of the data warehouse. In the case studies, the impact of strategic alignment was visible both at implementation and use levels. The key findings from the case studies are that a) Senior management commitment and involvement are necessary for the initiation of the data warehouse project. The awareness and involvement of data warehouse managers in corporate strategies and a high level of joint responsibility between business and data warehouse managers are critical to strategic alignment and successful adoption of the data warehouse. b) Communication of the strategic direction between the business and data warehouse managers is important for the strategic alignment of the data warehouse. Significant knowledge sharing among the stakeholders and frequent communication between the data warehouse managers and users facilitate better understanding of the data warehouse and its successful adoption.
c) User participation in the data warehouse project, perceived usefulness of the data warehouse, ease of use and data quality (accuracy, consistency, reliability and timeliness) were significant factors in strategic alignment of the data warehouse. d) Technology selection based on its ability to address business and user requirements, and the skills and response of the data warehousing team, led to better alignment of the data warehouse to business plans and strategies. e) The flexibility to respond to changes in business needs and flexibility in data warehouse planning are critical to strategic alignment and successful adoption of the data warehouse. Alignment is seen as a process requiring continuous adaptation and coordination of plans and goals. This research provides a pathway for facilitating the successful adoption of a data warehouse. The model developed in this research allows data warehouse professionals to ensure that their projects, when implemented, achieve the strategic goals and business objectives of the organization.
APA, Harvard, Vancouver, ISO, and other styles
5

Godes, David Bradley. "Use of heterogeneous data sources : three case studies." Thesis, Massachusetts Institute of Technology, 1989. http://hdl.handle.net/1721.1/61057.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Sloan School of Management, 1989.
Title as it appears in the M.I.T. Graduate List, June 1989: Integration of heterogeneous data sources--three case studies.
Includes bibliographical references (leaf 159).
by David Bradley Godes.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
6

Perl, Henning [author]. "Security and Data Analysis - Three Case Studies / Henning Perl." Bonn : Universitäts- und Landesbibliothek Bonn, 2017. http://d-nb.info/1149154179/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Connelly, Roxanne. "Social stratification and education : case studies analysing social survey data." Thesis, University of Stirling, 2013. http://hdl.handle.net/1893/18590.

Full text
Abstract:
Social stratification is an enduring influence in contemporary societies which shapes many outcomes over the lifecourse. Social stratification is also a key mechanism by which social inequalities are transmitted from one generation to the next. This thesis presents a set of inter-related case studies which explore social stratification in contemporary Britain. This thesis focuses on the analysis of an appropriate set of large scale social survey datasets, which contain detailed micro-level data. The thesis begins with a detailed review of one area of social survey research practice which has been neglected, namely the measurement and operationalisation of ‘key variables’. Three case studies are then presented which undertake original analyses using five different large-scale social survey resources. Throughout this thesis detailed consideration of the operationalisation of variables is made and a range of statistical modelling approaches are employed to address middle range theories regarding the processes of social stratification. Case study one focuses on cognitive inequalities in the early years of childhood. This case study builds on research which has indicated that social stratification impacts on the cognitive performance of young children. This chapter makes the original contribution of charting the extent of social inequalities in childhood cognitive abilities across three British birth cohorts. There are clear patterns of social inequality within each cohort. Between the cohorts there is also evidence that the association between socio-economic advantage and childhood cognitive capability has remained largely stable over the post-war period, in spite of the raft of policy measures that have been floated to tackle social inequality. Case study two investigates the recent sociological idea that there is a ‘middle’ group of young people who are absent in sociological inquiries.
This chapter sets out to explore the existence of a ‘middle’ group based on their socio-economic characteristics. This case study focuses on school GCSE examination performance, and finds that performance is highly stratified by parental occupational positions. The analysis provided no persuasive evidence of the existence of a ‘middle’, mediocre or ordinary group of young people. The analytical benefits of studying the full attainment spectrum are emphasised, over a priori categorisation. Case study three combines the analysis of intra-generational and inter-generational status attainment perspectives by studying the influences of social origins, educational attainment and cognitive abilities across the occupational lifecourse. This case study tests theoretical ideas regarding the importance of these three areas of influence over time. This case study therefore presents a detailed picture of social stratification processes. The results highlight that much more variation in occupational positions is observed between individuals, rather than across an individual’s lifecourse. The influence of social origins, educational attainment and cognitive ability on occupational positions appear to decrease across an individual’s occupational lifecourse. A brief afterword that showcases a sensitivity analysis is presented at the end of the thesis. This brief exposition is provided to illustrate the potential benefit of undertaking sensitivity analyses when developing research which operationalises key variables in social stratification. It is argued that such an activity is beneficial and informative and should routinely be undertaken within sociological analyses of social surveys. The thesis concludes with a brief reflection on large-scale survey research and statistical modelling and comments on potential areas for future research.
APA, Harvard, Vancouver, ISO, and other styles
8

Lewis, Taariq, and Bryan Long. "Case analysis studies of diffusion models on E-commerce transaction data." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/49772.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management, 2009.
Includes bibliographical references (p. 50).
As online merchants compete in the growing e-commerce markets for customers, attention to data generated from merchant and customer website interactions continues to drive ongoing online analytical innovation. However, successful online sales forecasting arising from historical transaction data still proves elusive for many online retailers. Although there are numerous software and statistical models used in online retail, not many practitioners claim success creating accurate online inventory management or marketing effectiveness forecast models. Thus, online retailers with both online and offline strategies express frustration that although they are able to predict sales in their offline properties, even with substantial online data, they are not as successful with their online stores. This paper attempts to test two analytical approaches to determine whether reliable forecasting can be developed using already established statistical models. Firstly, we use the original Bass Model of Diffusion and modify it for analysis of online retail data. Then, we test the model's forecasting effectiveness to extrapolate expected sales in the following year. As a second method, we use statistical cluster analysis to categorize groups of products into distinct product performance groups. We then analyze those groups for distinct characteristics and then test whether we can forecast new product performance based on the identified group characteristics.
We partnered with a medium-sized online retail e-commerce firm with both online and offline retail channels to provide us with online transaction data. Using a modified Bass Diffusion Model, we were able to fit a sales forecast curve to a sample of products. We then used k-means cluster analysis to partition products into similar groups of sales transaction-behavior, over the period of 1 year. For each group, we tried to identify characteristics which we could use to forecast new product launch behavior. However, lack of accurate, characteristic mapping of products made it difficult to establish confidence in cluster forecasting for some groups with similar curves. With more accurate characteristic mapping of products, we're hopeful that cluster analysis can reasonably forecast new product performance in online retail catalogs.
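The standard Bass diffusion curve that the authors modify can be sketched as follows. The parameter values (market size m, innovation coefficient p, imitation coefficient q) are invented for illustration and are not taken from the retailer's data or the thesis.

```python
import math

def bass_cumulative(t, p, q):
    """Cumulative adoption fraction F(t) of the standard Bass model."""
    e = math.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

def bass_sales(m, p, q, periods):
    """Per-period sales: market size m times the increment in F(t)."""
    F = [bass_cumulative(t, p, q) for t in range(periods + 1)]
    return [m * (F[t] - F[t - 1]) for t in range(1, periods + 1)]

# Invented parameters: m = 1000 units, p = 0.03 (innovation), q = 0.4 (imitation).
sales = bass_sales(1000, 0.03, 0.4, 12)
peak = max(range(len(sales)), key=sales.__getitem__) + 1  # period with highest sales
print(f"peak period {peak}, total sold over 12 periods {sum(sales):.0f}")
```

Fitting the model to transaction data then amounts to choosing m, p and q so this curve best matches observed per-period sales, after which later periods of the curve serve as the forecast.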
by Taariq Lewis [and] Bryan Long.
M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
9

Kang, Sangwook. "Statistical methods for case-control and case-cohort studies with possibly correlated failure time data." Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2007. http://dc.lib.unc.edu/u?/etd,1244.

Full text
Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2007.
Title from electronic title page (viewed Mar. 26, 2008). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Biostatistics, School of Public Health." Discipline: Biostatistics; Department/School: Public Health.
APA, Harvard, Vancouver, ISO, and other styles
10

Koylu, Caglar. "A Case Study In Weather Pattern Searching Using A Spatial Data Warehouse Model." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/2/12609573/index.pdf.

Full text
Abstract:
Data warehousing and Online Analytical Processing (OLAP) technology has been used to access, visualize and analyze multidimensional, aggregated, and summarized data. A large part of such data contains spatial components. These spatial components convey valuable information and must be included in the exploration and analysis phases of a spatial decision support system (SDSS). On the other hand, Geographic Information Systems (GISs) provide a wide range of tools to analyze spatial phenomena and therefore must be included in the analysis phases of a decision support system (DSS). In this regard, this study aims to address the problem of how to design a spatially enabled data warehouse architecture in order to support spatio-temporal data analysis and exploration of multidimensional data. Consequently, in this study, the concepts of OLAP and GISs are synthesized in an integrated fashion to maximize the benefits generated from the strengths of both systems by building a spatial data warehouse model. In this context, a multidimensional spatio-temporal data model is proposed as a result of this synthesis. This model addresses the integration problem of spatial, non-spatial and temporal data and facilitates spatial data exploration and analysis. The model is evaluated by implementing a case study in weather pattern searching.
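The kind of spatio-temporal aggregation such a model supports can be sketched with a toy rollup: observations summarised along a spatial dimension (a coarse grid cell) and a temporal dimension (month). The grid scheme and all values are invented for illustration; this is not the thesis's model.

```python
from collections import defaultdict

# Toy weather observations: (lat, lon, month, temperature_c).
observations = [
    (39.9, 32.8, 1, 2.0), (39.95, 32.85, 1, 4.0),
    (39.9, 32.8, 2, 6.0), (41.0, 29.0, 1, 5.0),
]

def grid_cell(lat, lon, size=1.0):
    """Spatial dimension member: snap a point to a coarse grid cell."""
    return (int(lat // size), int(lon // size))

# Group measures by (spatial cell, month) -- the cube's cells.
cube = defaultdict(list)
for lat, lon, month, temp in observations:
    cube[(grid_cell(lat, lon), month)].append(temp)

# A slice of the cube: mean temperature per (cell, month).
summary = {key: sum(vals) / len(vals) for key, vals in cube.items()}
print(summary)
```

A real spatial data warehouse would store the geometry in the spatial dimension and let a GIS render each aggregated cell, but the aggregation logic is the same shape.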
APA, Harvard, Vancouver, ISO, and other styles
11

Magaia, Luis. "Processing Techniques of Aeromagnetic Data. Case Studies from the Precambrian of Mozambique." Thesis, Uppsala universitet, Geofysik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-183714.

Full text
Abstract:
During 2002-2006, geological field work was carried out in Mozambique. The purpose was to check the preliminary geological interpretations, resolve the problems that arose during the compilation of preliminary geological maps, and collect samples for laboratory studies. In parallel, airborne geophysical data were collected in many parts of the country to support the geological interpretation and compilation of geophysical maps. In the present work, the aeromagnetic data collected in 2004 and 2005 in two small areas northwest of Niassa province and another in the eastern part of Tete province are analysed using Geosoft™. The processing of the aeromagnetic data began with the removal of diurnal variations and corrections for the IGRF model of the Earth in the data set. The effect of height variations on the recorded magnetic field, as well as levelling and interpolation techniques, was also studied. La Porte interpolation proved to be a good tool for interpolating aeromagnetic data using the measured horizontal gradient. Depth estimation techniques are also used to obtain a semi-quantitative interpretation of geological bodies. It was shown that many features in the study areas are located at shallow depth (less than 500 m), while few geological features are located at depths greater than 1000 m. This interpretation could be used to draw conclusions about the geology or be incorporated into further investigations in these areas.
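The two basic corrections mentioned above amount to per-reading subtractions: remove the base-station diurnal variation and the IGRF reference field from each measured total-field value, leaving the residual anomaly. All numbers below are invented for illustration and are not the survey's data.

```python
# Airborne total-field readings and matching base-station diurnal
# variation, both in nanotesla (nT); values are invented.
measured = [48512.3, 48530.1, 48498.7]   # nT, airborne readings
diurnal  = [12.3, 14.1, 10.7]            # nT, base-station variation
igrf_model = 48400.0                     # nT, IGRF reference value for the area

# Residual anomaly = measured - diurnal correction - IGRF reference.
anomaly = [m - d - igrf_model for m, d in zip(measured, diurnal)]
print(anomaly)
```

In practice the IGRF value varies with position, altitude and date rather than being a single constant, but each reading is corrected in exactly this fashion before levelling and interpolation.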
APA, Harvard, Vancouver, ISO, and other styles
12

Jian, Wen. "Analysis of Longitudinal Data in the Case-Control Studies via Empirical Likelihood." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/math_theses/8.

Full text
Abstract:
Case-control studies are primary tools for the study of risk factors (exposures) related to the disease of interest. Case-control studies using longitudinal data are cost- and time-efficient when the disease is rare and assessing the exposure level of risk factors is difficult. Instead of the GEE method, a method using a prospective logistic model for analyzing case-control longitudinal data was proposed, and the semiparametric inference procedure explored, by Park and Kim (2004). In this thesis, we apply an empirical likelihood ratio method to derive the limiting distribution of the empirical likelihood ratio and find a likelihood-ratio-based confidence region for the unknown regression parameters. Our approach does not require estimating the covariance matrices of the parameters. Moreover, the proposed confidence region is adapted to the data set and not necessarily symmetric. Thus, it reflects the nature of the underlying data and hence gives a more representative way to make inferences about the parameter of interest. We compare the empirical likelihood method with the normal approximation based method; simulation results show that the proposed empirical likelihood ratio method performs well in terms of coverage probability.
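In generic notation (mine, not necessarily the thesis's): for a q-dimensional parameter β identified by estimating equations E[g(X, β)] = 0, the empirical likelihood ratio and the Wilks-type result that calibrates the confidence region are

```latex
% Empirical likelihood ratio: maximise the product of weights subject
% to the estimating-equation constraint.
\[
  \mathcal{R}(\beta) \;=\; \max\Bigl\{ \prod_{i=1}^{n} n p_i \;\Bigm|\;
      p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1,\ \sum_{i=1}^{n} p_i\, g(X_i,\beta) = 0 \Bigr\}
\]
% Wilks-type limiting distribution and the resulting confidence region:
\[
  -2 \log \mathcal{R}(\beta_0) \;\xrightarrow{d}\; \chi^2_{q},
  \qquad
  \bigl\{ \beta : -2 \log \mathcal{R}(\beta) \le \chi^2_{q,\,1-\alpha} \bigr\}
\]
```

Because the region is cut directly from the likelihood-ratio surface rather than from a normal approximation, its shape follows the data and no covariance matrix estimate is needed, which is the property the abstract emphasises.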
APA, Harvard, Vancouver, ISO, and other styles
13

Wilkinson, Robert H. "The measurement of university performance using concepts derived from data envelopment analysis (DEA)." Thesis, University of Stirling, 1991. http://hdl.handle.net/1893/2113.

Full text
Abstract:
Performance measurement in higher education is examined in this study; in particular, university performance indicators are reviewed and discussed. It is concluded that appropriate input and output indicators require some form of combination in order to allow practical consideration. The technique of Data Envelopment Analysis (DEA) is reviewed and found to have a number of conceptual drawbacks. The model is considerably developed within the thesis, primarily by the introduction of weight restrictions on the variables. Taken as a whole, the developments, coined the DEAPMAS process, create a technique which can be used to assess cost effectiveness rather than simply efficiency. Data for two examples of subject areas, defined by recognised accounting units, are applied to the program, as inter-university comparison was felt to be impractical at the institutional level due to differing subject mixes. A considerable computer implementation of the developed theory was written and utilised to provide results over a number of data runs for the examples. It was concluded that the results obtained represented a considerable improvement over separate consideration of numerous performance indicators.
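As a rough illustration of the DEA idea (not the DEAPMAS procedure itself): in the degenerate case of a single input and a single output, a unit's CCR efficiency reduces to its output/input ratio scaled by the best observed ratio. All department names and figures below are invented; real DEA with multiple inputs and outputs solves a linear programme per unit, optionally with weight restrictions as the thesis develops.

```python
# Hypothetical accounting units: one input (staff cost) and one
# output (graduates). With one input and one output, the CCR
# efficiency score is each unit's ratio divided by the best ratio.
units = {
    "Dept A": {"staff_cost": 100.0, "graduates": 50.0},
    "Dept B": {"staff_cost": 120.0, "graduates": 72.0},
    "Dept C": {"staff_cost": 80.0,  "graduates": 36.0},
}

ratios = {name: u["graduates"] / u["staff_cost"] for name, u in units.items()}
best = max(ratios.values())
efficiency = {name: r / best for name, r in ratios.items()}

for name, score in sorted(efficiency.items()):
    print(f"{name}: {score:.2f}")  # 1.00 marks a unit on the efficient frontier
```

Weight restrictions matter once there are several inputs and outputs: without them a unit can appear efficient by placing almost all weight on its single most favourable variable, which is one of the conceptual drawbacks the thesis addresses.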
APA, Harvard, Vancouver, ISO, and other styles
14

Smithers, Clay. "Managing Geographic Data as an Asset: A Case Study in Large Scale Data Management." [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002761.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Al-Mashaqbeh, Ibtesam. "Computer applications in higher education : a case study of students' experiences and perceptions." Virtual Press, 2003. http://liblink.bsu.edu/uhtbin/catkey/1263918.

Full text
Abstract:
The purpose of this qualitative study was to describe the educational experiences with computers of nine female international graduate students at Ball State University. Their experiences with computers before they came to the United States, their current use of computers during their study at Ball State University, challenges faced related to the use of computers during their graduate study in the United States, and the support received from the university to help them overcome these barriers were described. Descriptions of ways computers supplemented and enriched the experiences of female international graduate students in the completion of their graduate work at Ball State University were reported. Participants of the present study were nine female international graduate students from Ball State University. They were identified through cooperation with the Center For International Programs, which provided a list of names and e-mail addresses of female international graduate students who were enrolled in graduate studies at Ball State University. Nine female international graduate students were selected from the list. The researcher interviewed each participant for two hours on one occasion.
Following each interview, participants were asked to complete a brief questionnaire to identify age, country of origin, academic program, and length of time spent in the United States. The following conclusions were established based upon this research study: (1) most participants did not use computer applications on a daily basis during their undergraduate study in their native countries; (2) all participants used computer applications on a daily basis during their study at BSU; (3) some participants faced two important academic adjustments at the same time, the adjustment to the English language and the adjustment to the use of computers; (4) most participants received support from friends regarding the use of computers; (5) most participants faced problems regarding their typing skills; (6) using the library web site was a challenge for most participants; (7) all participants believed that the use of computers enriched their experiences during their study at BSU; and (8) all participants used the Self-Learning Theory to improve their computer skills.
Department of Educational Studies
APA, Harvard, Vancouver, ISO, and other styles
16

Leicester, Howard James. "Improving data quality in English healthcare : from case studies to an applied framework." Thesis, City University London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.407556.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Shimazaki, Hiroto. "Application-oriented approaches of geospatial data analysis : case studies on global environmental problems." 京都大学 (Kyoto University), 2009. http://hdl.handle.net/2433/126501.

Full text
Abstract:
Kyoto University (京都大学)
0048
New system, course-based doctorate (新制・課程博士)
Doctor of Engineering (博士(工学))
Degree no. 甲第14926号
工博第3153号
新制||工||1473 (University Library)
27364
UT51-2009-M840
Department of Urban and Environmental Engineering, Graduate School of Engineering, Kyoto University
Examiners: Professor Masayuki Tamura (chief), Associate Professor Yasuto Tachikawa, Associate Professor Junichi Susaki
Qualifies under Article 4, Paragraph 1 of the Degree Regulations
APA, Harvard, Vancouver, ISO, and other styles
18

Ström, Fabian. "Quantitative Requirements Testing for Autonomous Systems-of-Systems : Case Studies in Platooning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-251659.

Full text
Abstract:
A platoon, or a road train, has many different benefits such as safety, fuel efficiency and road utilisation. However, it must also be safe to use and must thus be thoroughly tested. This report looks at the specific use case of emergency braking using three different emergency braking strategies and two models of communication. A cooperative emergency braking strategy, a naive emergency braking strategy and the combination of both are tested. It is found that the combination of both performs the best.
A platoon, or a road train, has many benefits such as fuel efficiency and road utilisation. However, it must also be safe to use and therefore thoroughly tested. This report considers the specific use case of emergency braking using three different emergency braking strategies and two models of communication. A cooperative emergency braking strategy, a naive emergency braking strategy and the combination of the two are tested. The result is that the combination of the two performs best.
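The intuition behind comparing the strategies can be sketched with a toy stopping-distance calculation: a follower that reacts to a broadcast V2V message begins braking sooner than one that must first observe the leader decelerating. All parameters are invented and this is not the thesis's simulation model.

```python
def stopping_distance(v0, reaction_s, decel):
    """Distance travelled during the reaction time plus braking distance
    under constant deceleration, for initial speed v0 (m/s)."""
    return v0 * reaction_s + v0 ** 2 / (2.0 * decel)

v0, decel = 25.0, 6.0  # m/s initial speed, m/s^2 braking deceleration (invented)

# Naive strategy: follower waits until its own sensors detect the
# leader slowing down (longer effective reaction time).
naive = stopping_distance(v0, reaction_s=0.8, decel=decel)

# Cooperative strategy: follower brakes on a broadcast emergency
# message (much shorter reaction time).
cooperative = stopping_distance(v0, reaction_s=0.1, decel=decel)

print(f"naive {naive:.1f} m, cooperative {cooperative:.1f} m")
```

The difference between the two distances is exactly the extra distance covered during the longer reaction time, which is why combining message-based and sensor-based triggers can retain the shorter of the two reaction times while keeping a fallback.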
APA, Harvard, Vancouver, ISO, and other styles
19

Palencia, Arreola Daniel Heriberto. "Arguments for and field experiments in democratizing digital data collection : the case of Flocktracker." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121749.

Full text
Abstract:
Thesis: M.C.P., Massachusetts Institute of Technology, Department of Urban Studies and Planning, 2019
Cataloged from PDF version of thesis.
Includes bibliographical references (pages [127]-131).
Data is becoming increasingly relevant to urban planning, serving as a key input for many conceptions of a "smart city." However, most urban data generation results from top-down processes, driven by government agencies or large companies. This provides limited opportunities for citizens to participate in the ideation and creation of the data used to ultimately gain insights into, and make decisions about, their communities. Digital community data collection can give more inputs to city planners and decision makers while also empowering communities. This thesis derives arguments from the literature about why it would be helpful to have more participation from citizens in data generation and examines digital community mapping as a potential niche for the democratization of digital data collection.
In this thesis, I examine one specific digital data collection technology, Flocktracker, a smartphone-based tool developed to allow users with no technical background to set up and generate their own data collection projects. I define a model of how digital community data collection could be "democratized" with the use of Flocktracker. The model envisions a process in which "seed" projects lead to a spreading of Flocktracker's use across the sociotechnical landscape, eventually producing self-sustaining networks of data collectors in a community. To test the model, the experimental part of this research examines four different experiments using Flocktracker: one in Tlalnepantla, Mexico and three in Surakarta, Indonesia. These experiments are treated as "seed" projects in the democratization model and were set up in partnership with local NGOs.
The experiments were designed to help understand whether citizen participation in digital community mapping events might affect their perceptions about open data and the role of participation in community data collection and whether this participation entices them to create other community datasets on their own, thus starting the democratization process. The results from the experiments reveal the difficulties in motivating community volunteers to participate in technology-based field data collection. While Flocktracker proved easy enough for the partner organizations to create data collection projects, the technology alone does not guarantee participation. The envisioned "democratization" model could not be validated. Each of the experiments had relatively low levels of participation in the community events that were organized.
This low participation, in turn, led to inconclusive findings regarding the effects of community mapping on participants' perceptions and on the organizations themselves. Nonetheless, numerous insights emerge, providing lessons for the technology and how it might be better used in the future to improve digital community mapping events.
by Daniel Heriberto Palencia Arreola.
M.C.P.
M.C.P. Massachusetts Institute of Technology, Department of Urban Studies and Planning
20

Samson, Brian R. "A system for writing interactive engineering programs in APL." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/25108.

Full text
Abstract:
As the use of computers in engineering becomes more significant and widespread, there is a growing need for interactive computer programs which can be used with a minimum of user preparation. This thesis presents and demonstrates a system for writing interactive engineering programs in APL, a programming language. A good interactive program is sensitive to the needs of the user, and generally includes help features, default options, escape features and check features. To include all of these features in a conventionally organized program is complicated and tedious, especially for longer programs with many interaction events between the program and the user. The system presented here makes it fairly simple to include all of the above features, and provides two additional benefits: 1. The logic of the program becomes more prominent, hence easier to follow and check. 2. The program tends to be highly modular in form, making it more readable and easier to test and debug.
Applied Science, Faculty of
Civil Engineering, Department of
Graduate
21

Ortega, Villa Ana Maria. "Semiparametric Varying Coefficient Models for Matched Case-Crossover Studies." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/64181.

Full text
Abstract:
Semiparametric modeling is a combination of the parametric and nonparametric models in which some functions follow a known form and some others follow an unknown form. In this dissertation we made contributions to semiparametric modeling for matched case-crossover data. In matched case-crossover studies, it is generally accepted that the covariates on which a case and associated controls are matched cannot exert a confounding effect on independent predictors included in the conditional logistic regression model. Any stratum effect is removed by the conditioning on the fixed number of sets of the case and controls in the stratum. However, some matching covariates, such as time and/or spatial location, often play an important role as effect modifiers. Failure to include them leads to incorrect statistical estimation, prediction and inference. Hence, in this dissertation, we propose several approaches that will allow the inclusion of time and spatial location as well as other effect modifications such as heterogeneous subpopulations among the data. To address modification due to time, three methods are developed: the first is a parametric approach, the second is a semiparametric penalized approach and the third is a semiparametric Bayesian approach. We demonstrate the advantage of the one-stage semiparametric approaches using both a simulation study and an epidemiological example of a 1-4 bi-directional case-crossover study of childhood aseptic meningitis with drinking water turbidity. To address modifications due to time and spatial location, two methods are developed: the first one is a semiparametric spatial-temporal varying coefficient model for a small number of locations. The second method is a semiparametric spatial-temporal varying coefficient model, and is appropriate when the number of locations among the subjects is medium to large.
We demonstrate the accuracy of these approaches by using simulation studies and, when appropriate, an epidemiological example of a 1-4 bi-directional case-crossover study. Finally, to explore further effect modifications by heterogeneous subpopulations among strata, we propose a nonparametric Bayesian approach constructed with Dirichlet process priors, which clusters subpopulations and assesses heterogeneity. We demonstrate the accuracy of our approach using a simulation study, as well as an example of a 1-4 bi-directional case-crossover study.
Ph. D.
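For the 1:1-matched case, the conditional likelihood underlying such models reduces to ordinary logistic regression on within-pair covariate differences with no intercept. A minimal numpy sketch of that standard reduction, fit to synthetic pairs; this is not the dissertation's semiparametric or Bayesian estimators, only the baseline conditional logistic fit:

```python
import numpy as np

def fit_conditional_logistic(x_case, x_ctrl, lr=0.1, n_iter=500):
    """1:1 matched pairs: maximize sum_i log sigmoid((x_case_i - x_ctrl_i) @ beta)
    by gradient ascent. No intercept: any stratum effect cancels within pairs."""
    d = x_case - x_ctrl                      # within-pair differences
    beta = np.zeros(d.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-d @ beta))  # fitted P(first member is the case)
        beta += lr * d.T @ (1.0 - p) / len(d)
    return beta

# Synthetic pairs: cases have systematically higher exposure on covariate 0.
rng = np.random.default_rng(0)
x_ctrl = rng.normal(size=(200, 2))
x_case = x_ctrl + 0.8 * rng.normal(size=(200, 2))
x_case[:, 0] += 1.0
beta_hat = fit_conditional_logistic(x_case, x_ctrl)
```

The fitted coefficient on covariate 0 comes out positive (an elevated odds ratio for the exposure), while the noise covariate stays near zero.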
22

Davidson, Michael R. (Michael Roy). "Creating markets for wind electricity in China : case studies in energy policy and regulation." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119914.

Full text
Abstract:
Thesis: Ph. D. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2018.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 327-360).
China's rapid economic growth -- largely industrial, energy-intensive, and reliant on coal -- has generated environmental, public health, and governance challenges. While China now leads the world in renewable energy deployment, curtailment (waste) of wind and solar is high and increasing, generating much discussion on the relative contributions of technical inflexibilities and incomplete institutional reforms on integration outcomes. These integration challenges directly affect China's ability to meet long-term environmental and economic objectives. A second, related challenge emerges from how wind integration interacts with China's reinvigoration in 2015 of a three-decade-old process to establish competitive electricity markets. A "standard liberalization prescription" for electricity markets exists internationally, though Chinese policy-makers ignore or under-emphasize many of its elements in current reforms, and some scholars question its general viability in emerging economies. This dissertation examines these interrelated phenomena by analyzing the contributions of diverse causes of wind curtailment, assessing whether current experiments will lead to efficient and politically viable electricity markets, and offering prescriptions on when and how to use markets to address renewable energy integration challenges. To examine fundamentals of the technical system and the impacts of institutional incentives on system outcomes, this dissertation develops a multi-method approach that iterates between engineering models and qualitative case studies of the system operation decision-making process (Chapter 2). These are necessary to capture, respectively, production functions inclusive of physical constraints and costs, and incentive structures of formally specified as well as de facto institutions. 
Interviews conducted over 2013-2016 with key stakeholders in four case provinces/regions with significant wind development inform tracing of the processes of grid and market operations (Chapter 3). A mixed-integer unit commitment and economic dispatch optimization is formulated and, based on the case studies, further tailored by adding several institutions of China's partially-liberalized system (Chapter 4). The model generates a reference picture of three of the systems as well as quantitative contributions of relevant institutions (Chapter 5). Insights from qualitative and quantitative approaches are combined iteratively for more parsimonious findings (Chapter 6). This dissertation disentangles the causes of curtailment, focusing on the directional and relative contributions of institutions, technical issues, and potential interactive effects. Wind curtailment is found to be closely tied to engineering constraints, such as must-run combined heat and power (CHP) in northern winters. However, institutional causes -- inflexibilities in both scheduling and inter-provincial trading -- have a larger impact on curtailment rates. Technical parameters that are currently set administratively at the provincial level (e.g., coal generator minimum outputs) are a third and important leading cause under certain conditions. To assess the impact of China's broader reform of the electricity system on wind curtailment, this dissertation examines in detail "marketizing" experiments. In principle, spot markets for electricity naturally prioritize wind, with near zero marginal cost, thereby contributing to low curtailment. However, China has not yet created a spot market and this dissertation finds that its implementation of other electricity markets in practice operates far from ideal. 
Market designs follow a similar pattern of relying on dual-track prices and out-of-market parameters, which, in the case of electricity, leave several key institutional causes of inefficiency and curtailment untouched. Compared to other sectors with successful marketization occurring when markets "grow out of the plan," all of the major electricity experiments examined show deficiencies in their ability to transition to an efficient market and to cost-effectively integrate wind energy. Although China's setting is institutionally very different, results support implementation of many elements of standard electricity market prescriptions: prioritize regional (inter-provincial) markets, eliminate conflicts of interest in dispatch, and create a consistent central policy on "transition costs" of reducing central planning. Important for China, though overlooked in standard prescriptions: markets are enhanced by clarifying the connection between dispatch and exchange settlement. As is well established, power system efficiency is expected to achieve greatest gains with a short-term merit order dispatch and primarily financial market instruments, though some workable near-term deviations for the Chinese context are proposed. Ambiguous property rights related to generation plans have helped accelerate reforms, but also delay more effective markets from evolving. China shares similarities with the large class of emerging economies undergoing electricity market restructuring, for which this suggests research efforts should disaggregate planning from scheduling institutions, analyze the range of legacy sub-national trade barriers, and prioritize finding "second-best" liberalization options fit to country context in the form and order of institutional reforms.
by Michael R. Davidson.
Ph. D. in Engineering Systems
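The way must-run minimum outputs crowd out zero-marginal-cost wind can be seen in a toy single-period merit-order dispatch. This is a deliberately simplified sketch with made-up numbers, not the dissertation's mixed-integer unit commitment model:

```python
def dispatch(load, wind_avail, thermal_units):
    """Single-period merit order: zero-marginal-cost wind serves load first,
    but each committed thermal unit must run at least at its minimum output.
    thermal_units: list of (marginal_cost, p_min, p_max) for committed units
    (marginal cost is unused here -- only the minimums bind in this toy).
    Returns (wind_used, wind_curtailed)."""
    must_run = sum(p_min for _, p_min, _ in thermal_units)
    wind_used = min(wind_avail, max(0.0, load - must_run))
    return wind_used, wind_avail - wind_used

# Night-time load with high must-run CHP minimums forces curtailment:
chp_units = [(30.0, 400.0, 800.0), (50.0, 100.0, 300.0)]  # ($/MWh, MW, MW)
wind_used, curtailed = dispatch(load=600.0, wind_avail=200.0,
                                thermal_units=chp_units)
# Lowering the administratively set minimums frees room for the same wind:
wind_used2, curtailed2 = dispatch(load=600.0, wind_avail=200.0,
                                  thermal_units=[(30.0, 200.0, 800.0),
                                                 (50.0, 50.0, 300.0)])
```

In the first call half the available wind is curtailed; in the second, relaxing the minimum-output parameters absorbs all of it, mirroring the finding that administratively set technical parameters are a leading cause of curtailment.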
23

Zeynali, Reyhaneh. "Geomatics data acquisition and processing in support of Urban Heat Island studies, case study Bologna." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Find full text
Abstract:
Global warming and changes in Earth’s weather patterns are the main consequences of climate change, and the resulting bioclimatic discomfort poses significant public health problems, especially for the elderly. The thermal characteristics of urban areas are typically poor due to a phenomenon known as the urban heat island (UHI). To study the thermal characteristics of the city of Bologna, mobile temperature measurements were taken by car along a 75-km transect, while fixed temperature measurements were made using 15 existing weather stations and five additional thermometers placed in the city center. Several interpolation models (both global and local interpolators) were applied to correct the mobile measurements using the fixed data. Kriging gave the best result, with a correlation coefficient of 0.99. While there was no meaningful correlation between the corrected temperatures and remote-sensing land surface temperature (LST) data (due to the lack of nocturnal remote-sensing imagery), their correlation with remote-sensing normalized difference vegetation index (NDVI) data was 0.69.
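Ordinary kriging of the general kind used to correct the mobile transect can be sketched in a few lines of numpy; the exponential variogram and its sill/range parameters below are assumptions for illustration, not the model fitted in the thesis:

```python
import numpy as np

def ordinary_kriging(xy_obs, z_obs, xy_new, sill=1.0, vrange=500.0):
    """Ordinary kriging with an assumed exponential variogram
    gamma(h) = sill * (1 - exp(-h / vrange)). With no nugget, the
    predictor honours the observations exactly at their own locations."""
    def gamma(h):
        return sill * (1.0 - np.exp(-h / vrange))
    n = len(xy_obs)
    h_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=2)
    # Kriging system: variogram matrix bordered by a Lagrange-multiplier
    # row/column that forces the weights to sum to one (unbiasedness).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(h_obs)
    A[n, n] = 0.0
    preds = []
    for p in np.atleast_2d(xy_new):
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(xy_obs - p, axis=1))
        w = np.linalg.solve(A, b)[:n]
        preds.append(w @ z_obs)
    return np.array(preds)
```

At an observed station the prediction reproduces the station value; between stations it returns a weighted average of the neighbours.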
24

Chen, Lu. "Semi-parametric analysis of failure time data from case-control family studies on candidate genes /." Thesis, Connect to this title online; UW restricted, 2004. http://hdl.handle.net/1773/9573.

Full text
25

Fitzpatrick, Benjamin. "Ultrahigh dimensional variable selection for interpolation of geostatistical data: Case studies in soil carbon modelling." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/112365/1/Benjamin_Fitzpatrick_Thesis.pdf.

Full text
Abstract:
This thesis explores statistical methodologies for predicting maps of soil carbon levels from small numbers of soil core observations. Each of these methods improves the accuracy of the mapping by discovering and exploiting empirical relationships between soil carbon observations and data on large numbers of potentially related environmental characteristics. In tandem, data visualisation techniques are applied in novel ways to represent the roles of the many environmental characteristics used in these models of soil carbon distributions. This thesis also holds relevance beyond soil carbon mapping to the widespread task of leveraging maps of potentially related, ancillary data when predicting maps from point referenced observations.
26

Granér, Mats. "Essays on trade and productivity : case studies of manufacturing in Chile and Kenya /." Göteborg : Dept. of Economics, School of Economics and Commercial Law (Nationalekonomiska institutionen, Handelshögsk.), 2002. http://www.handels.gu.se/epc/data/html/html/PDF/GranerdissNE.pdf.

Full text
27

Xiong, Zhekun. "Evaluating and predicting urban performance through behavioral patterns in temporal telecom data : a case study in Andorra." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118074.

Full text
Abstract:
Thesis: M.C.P., Massachusetts Institute of Technology, Department of Urban Studies and Planning, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 76-83).
Evaluating and predicting the performance of urban space has always posed a challenge for the design and planning community. The lack of tools and data that can shed light on how human flow is affected by urban spaces has left many design decisions unexplained or unproven. However, with the constant emergence of advanced spatial-temporal analysis methods and the availability of massive datasets, researchers can now better expose human behavioral patterns within dense urban settings. Focusing on the case study area of Andorra, this research experiments with analyzing cell phone Radio Network Controller (RNC) records, which offer higher accuracy and precision, and uses computational data science algorithms such as the Stay Point Detection algorithm and Density-Based Spatial Clustering of Applications with Noise (DBSCAN) to evaluate the performance of urban space. By leveraging regression models for machine learning, the research attempts to match characteristics of human behavioral patterns of clustering, including persistence, size and diversity, with discrete urban features such as urban function, transportation network, natural landscape, and built environment. In this way, the research aims to find evidence-based correlations between urban performance and the design of urban form. On the one hand, the results provide statistical analysis for potential opportunities to improve urban performance in Andorra particularly, and guidance in practice for the urban planning and urban design field. On the other hand, this research explores a novel method to analyze diverse behavioral patterns in large urban populations, and to associate them with discrete urban features, which can potentially be applied to urban spaces of similar scale.
by Zhekun Xiong.
M.C.P.
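A stay-point detection pass of the general kind named above can be sketched in a few lines; the greedy formulation and the distance/duration thresholds are illustrative assumptions, not the exact algorithm or parameters used in the thesis:

```python
from math import hypot

def stay_points(track, d_max=200.0, t_min=600.0):
    """Greedy stay-point detection on a trajectory of (x, y, t) tuples
    (planar metres, seconds): a stay is a run of consecutive points that all
    remain within d_max of the run's first point for at least t_min seconds.
    Returns the (x, y) centroids of detected stays."""
    stays, i, n = [], 0, len(track)
    while i < n:
        j = i + 1
        while j < n and hypot(track[j][0] - track[i][0],
                              track[j][1] - track[i][1]) <= d_max:
            j += 1
        if track[j - 1][2] - track[i][2] >= t_min:
            xs = [p[0] for p in track[i:j]]
            ys = [p[1] for p in track[i:j]]
            stays.append((sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j
        else:
            i += 1
    return stays
```

The resulting stay centroids are the kind of point set that a density-based clusterer such as DBSCAN would then group into recurrent activity locations.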
28

Hansson, Lisbeth. "Statistical Considerations in the Analysis of Matched Case-Control Studies. With Applications in Nutritional Epidemiology." Doctoral thesis, Uppsala University, Department of Information Science, 2001. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-1092.

Full text
Abstract:

The case-control study is one of the most frequently used study designs in analytical epidemiology. This thesis focuses on some methodological aspects in the analysis of the results from this kind of study.

A population-based case-control study was conducted in northern Norway and central Sweden in order to study the associations of several potential risk factors with thyroid cancer. Cases and controls were individually matched and the information on the factors under study was provided by means of a self-completed questionnaire. The analysis was conducted with logistic regression. No association was found with pregnancies, oral contraceptives and hormone replacement after menopause. Early pregnancy and artificial menopause were associated with an increased risk, and cigarette smoking with a decreased risk, of thyroid cancer (paper I). The relation with diet was also examined. High consumption of a fat- and starch-rich diet was associated with an increased risk (paper II).

Conditional and unconditional maximum likelihood estimations of the parameters in a logistic regression were compared through a simulation study. Conditional estimation had higher root mean square error but better model fit than unconditional, especially for 1:1 matching, with relatively little effect of the proportion of missing values (paper III). Two common approaches to handle partial non-response in a questionnaire when calculating nutrient intake from diet variables were compared. In many situations it is reasonable to interpret the omitted self-reports of food consumption as indication of "zero-consumption" (paper IV).

The reproducibility of dietary reports was presented and problems for its measurements and analysis discussed. The most advisable approach to measure repeatability is to look at different correlation methods. Among factors affecting reproducibility frequency and homogeneity of consumption are presumably the most important ones (paper V). Nutrient variables can often have a mixed distribution form and therefore transformation to normality will be troublesome. When analysing nutrients we therefore recommend comparing the result from a parametric test with an analogous distribution-free test. Different methods to transform nutrient variables to achieve normality were discussed (paper VI).

29

Ren, Zheng. "Case Studies on Fractal and Topological Analyses of Geographic Features Regarding Scale Issues." Thesis, Högskolan i Gävle, Samhällsbyggnad, GIS, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-23996.

Full text
Abstract:
Scale is an essential notion in geography and geographic information science (GIScience). However, the complex concepts of scale and traditional Euclidean geometric thinking have created tremendous confusion and uncertainty. Traditional Euclidean geometry uses absolute size, regular shape and direction to describe our surrounding geographic features. In this context, different measuring scales will affect the results of geospatial analysis. For example, if we want to measure the length of a coastline, its length will be different using different measuring scales. Fractal geometry indicates that most geographic features are not measurable because of their fractal nature. In order to deal with such scale issues, topological and scaling analyses are introduced. They focus on the relationships between geographic features instead of geometric measurements such as length, area and slope. A scale change will affect geometric measurements such as length and area but will not affect topological measurements such as connectivity. This study uses three case studies to demonstrate the scale issues of geographic features through fractal analyses. The first case illustrates that the length of the British coastline is fractal and scale-dependent. The length of the British coastline increases with the decreased measuring scale. The yardstick fractal dimension of the British coastline was also calculated. The second case demonstrates that areal geographic features such as the British island are also scale-dependent in terms of area. The box-counting fractal dimension, as an important parameter in fractal analysis, was also calculated. The third case focuses on the scale effects on elevation and the slope of the terrain surface. The relationship between slope value and resolution in this case is not as simple as in the other two cases. Flat and fluctuating areas generate different results.
These three cases all show the fractal nature of the geographic features and indicate the fallacies of scale existing in geography. Accordingly, the fourth case tries to exemplify how topological and scaling analyses can be used to deal with such unsolvable scale issues. The fourth case analyzes the London OpenStreetMap (OSM) streets in a topological approach to reveal the scaling or fractal property of street networks. The fourth case further investigates the ability of the topological metric to predict Twitter user’s presence. The correlation between number of tweets and connectivity of London named natural streets is relatively high and the coefficient of determination r2 is 0.5083.   Regarding scale issues in geography, the specific technology or method to handle the scale issues arising from the fractal essence of the geographic features does not matter. Instead, the mindset of shifting from traditional Euclidean thinking to novel fractal thinking in the field of GIScience is more important. The first three cases revealed the scale issues of geographic features under the Euclidean thinking. The fourth case proved that topological analysis can deal with such scale issues under fractal way of thinking. With development of data acquisition technologies, the data itself becomes more complex than ever before. Fractal thinking effectively describes the characteristics of geographic big data across all scales. It also overcomes the drawbacks of traditional Euclidean thinking and provides deeper insights for GIScience research in the big data era.
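The box-counting estimate used in the second case can be written compactly: cover the feature with square boxes of side s, count the occupied boxes N(s), and take the slope of log N(s) against log s. A short sketch with an illustrative test shape (the scales and shape are assumptions, not the thesis's data):

```python
import numpy as np

def box_counting_dimension(points, scales):
    """Estimate the box-counting dimension of a 2-D point set: count the
    occupied grid boxes N(s) at each box size s, then fit
    log N(s) ~ -D * log s and return D."""
    counts = []
    for s in scales:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return -slope

# Sanity check: a densely sampled straight line should come out close to
# dimension 1 (a coastline-like curve would exceed 1).
t = np.linspace(0.0, 1.0, 20001)
line = np.column_stack([t, t])
d_line = box_counting_dimension(line, scales=[0.02, 0.01, 0.005, 0.0025])
```

The same routine applied to rasterised coastline pixels yields the fractional dimensions that make coastline length scale-dependent.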
30

Xu, Jie. "MINING STATIC AND DYNAMIC STRUCTURAL PATTERNS IN NETWORKS FOR KNOWLEDGE MANAGEMENT: A COMPUTATIONAL FRAMEWORK AND CASE STUDIES." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1151%5F1%5Fm.pdf&type=application/pdf.

Full text
31

Dietrich, Georg [Verfasser], and Frank [Gutachter] Puppe. "Ad Hoc Information Extraction in a Clinical Data Warehouse with Case Studies for Data Exploration and Consistency Checks / Georg Dietrich ; Gutachter: Frank Puppe." Würzburg : Universität Würzburg, 2019. http://d-nb.info/1191102610/34.

Full text
32

Chan, Chun-ying, and 陳俊英. "A case study of how technology is used to create service value." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31266332.

Full text
33

Butts, Kuan. "Design and deploy : iterative methods in adapting mobile technologies for data acquisition : a case study in St. Louis, Missouri." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/97344.

Full text
Abstract:
Thesis: M.C.P., Massachusetts Institute of Technology, Department of Urban Studies and Planning, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 142-147).
Advancements in mobile technology provide the opportunity to explore not only how data gathering (e.g., surveying) can be eased through digital input mechanisms, but also how such devices can bring new resolution to data gathered. This thesis covers the development history of an Android-based application, Flocktracker. Flocktracker incorporates techniques capitalizing on standard modern locational sensors on Android devices, demonstrating how data ranging from vehicle speeds to locations, directions, and on-board conditions can be relatively easily gathered. The research then deploys Flocktracker to explore the spatiotemporal dynamics of onboard security perception, as reported by users, along the 70 bus line in St. Louis. Over a brief, three-day period in March, an on-board survey was implemented via Flocktracker. Based on this field work, the thesis presents aspects of the route data collected (origin-destination, ridership, speed, uploads activity by time of day), as well as a multivariate, ordered logit model of users' reported security perceptions, incorporating additional spatial data (e.g., on crime). Results from this model indicate the user-reported security perceptions relate significantly to highly localized aspects of a route, such as proximity to homicides, public disorder, property crimes, vacancies, vehicle speed, and relative location along the route.
by Kuan Butts.
M.C.P.
34

Tsang, Kwong-ping Loretta, and 曾廣萍. "Offshore office: a strategic move : a post-implementation review of Cathay Pacific Airways Sydney Data Centre move." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1997. http://hub.hku.hk/bib/B3126833X.

Full text
35

Hübert, Juliane. "From 2D to 3D Models of Electrical Conductivity based upon Magnetotelluric Data : Experiences from two Case Studies." Doctoral thesis, Uppsala universitet, Geofysik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-165143.

Full text
Abstract:
Magnetotelluric measurements are among the few geophysical techniques capable of imaging structure in both the shallow subsurface and the entire crust of the Earth. With recent technical and computational advances it has become possible to derive three-dimensional inversion models of the electrical conductivity from magnetotelluric data, thereby overcoming the problems arising from the simplified assumption of two-dimensionality in conventional two-dimensional modeling. The transition from two-dimensional to three-dimensional analysis requires careful reconsideration of the classical challenges of magnetotellurics: galvanic distortion, data errors, model discretization and resolution. This work presents two examples of magnetotelluric investigations where a new three-dimensional inversion algorithm has been applied. The new models are compared with conventional two-dimensional models and combined with the results of other geophysical techniques such as reflection seismics and electrical resistivity tomography. The first case presents magnetotelluric investigations of the Kristineberg mining area in the Skellefte district, northern Sweden. This study is part of a joint geophysical and geological project to investigate the present structure and evolution of the whole district. Together with reflection seismic and surface geological information, a three-dimensional conductivity model, derived through the inversion of magnetotelluric data, was interpreted. A comparison with two-dimensional modeling gives insights into the capabilities and challenges of three-dimensional inversion strategies with respect to data sampling and model resolution. The second case presents a study of remediation monitoring with geophysical methods after an oil blow-out in Trecate, Italy. A three-dimensional conductivity model was derived from radiomagnetotelluric measurements.
In addition, two-dimensional joint inversion of radiomagnetotelluric and electrical tomography data was performed. Compared with electrical resistivity tomography, radiomagnetotelluric data is more sensitive to conductors and the derived three-dimensional inversion model resolves the vadose zone and the underlying aquifer.
36

Al, Rababa'A Abdel Razzaq. "Uncovering hidden information and relations in time series data with wavelet analysis : three case studies in finance." Thesis, University of Stirling, 2017. http://hdl.handle.net/1893/25961.

Full text
Abstract:
This thesis aims to provide new insights into the importance of decomposing aggregate time series data using the Maximal Overlap Discrete Wavelet Transform. In particular, the analysis throughout this thesis involves decomposing aggregate financial time series data into approximation (low-frequency) and detail (high-frequency) components. Following this, information and hidden relations can be extracted for different investment horizons, as matched with the detail components. The first study examines the ability of different GARCH models to forecast stock return volatility in eight international stock markets. The results demonstrate that de-noising the returns improves the accuracy of volatility forecasts regardless of the statistical test employed. After de-noising, the asymmetric GARCH approach tends to be preferred, although that result is not universal. Furthermore, wavelet de-noising is found to be more important at the key 99% Value-at-Risk level compared to the 95% level. The second study examines the impact of fourteen macroeconomic news announcements on the stock and bond return dynamic correlation in the U.S. from the day of the announcement up to sixteen days afterwards. Analysis conducted over the full sample offers very little evidence that macroeconomic news announcements affect the stock-bond return dynamic correlation. However, after controlling for the financial crisis of 2007-2008, several announcements become significant both on the announcement day and afterwards. Furthermore, the study observes that news released early in the day, i.e. before 12 pm, and in the first half of the month exhibits a slower effect on the dynamic correlation than news released later in the month or later in the day. While several announcements exhibit significance in the 2008 crisis period, only CPI and Housing Starts show significant and consistent effects on the correlation outside the 2001, 2008 and 2011 crisis periods.
The final study investigates whether recent returns and the time-scaled return can predict subsequent trading in ten stock markets. The study finds little evidence that recent returns predict subsequent trading, though this predictability is observed more over the long-run horizon. The study also finds a statistical relation between trading and return over the long investment horizons of [8-16] and [16-32] day periods. Yet this relation is mostly a negative one, being positive only for developing countries. It also tends to be economically stronger during bull periods.
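The additive split into approximation (low-frequency) and detail (high-frequency) components described in this abstract can be illustrated with a simplified "à trous" scheme. This is only a sketch: the thesis uses the Maximal Overlap Discrete Wavelet Transform, whereas the function below, with its two-point Haar-like smoothing kernel, is our own simplified stand-in for showing how a series decomposes additively across scales.

```python
import numpy as np

def atrous_decompose(x, levels=3):
    """Additive multiresolution decomposition of a 1-D series using an
    'a trous' scheme with a two-point (Haar-like) smoothing kernel.
    Returns (details, approximation) such that
    sum(details) + approximation reconstructs x exactly."""
    approx = np.asarray(x, dtype=float).copy()
    details = []
    for j in range(levels):
        # at level j the filter taps are spaced 2**j samples apart,
        # so each pass smooths over a progressively coarser scale
        smoothed = 0.5 * (approx + np.roll(approx, 2 ** j))
        details.append(approx - smoothed)  # high-frequency component
        approx = smoothed                  # remaining low-frequency part
    return details, approx

# Example: decompose a noisy oscillating "return" series into four levels
rng = np.random.default_rng(0)
returns = np.sin(np.linspace(0, 8 * np.pi, 128)) + 0.1 * rng.normal(size=128)
details, approx = atrous_decompose(returns, levels=4)

# The decomposition is exactly additive across scales
assert np.allclose(sum(details) + approx, returns)
```

Each entry of `details` isolates fluctuations at one scale (roughly 2-4 day, 4-8 day, ... periods), which is what allows the thesis to match components to different investment horizons.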
APA, Harvard, Vancouver, ISO, and other styles
37

Haberstroh, Charlotte Juliane. "Geographical Information Systems (GIS) Applied to Urban Nutrient Management: Data Scarce Case Studies from Belize and Florida." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6620.

Full text
Abstract:
Nutrient inputs into the environment greatly impact urban ecosystems. Appropriate management strategies are needed to limit eutrophication of surface water bodies and contamination of groundwater. In many existing urban environments, retrofits or complete upgrades are needed for stormwater and/or wastewater infrastructure to manage nutrients. However, sustainable urban nutrient management requires comprehensive baseline data that is often not available. A Framework for Urban Nutrient (FUN) Management for Geographic Information Systems (GIS) was developed to specifically address those areas with limited data access. Using spatial analysis in GIS, it links water quality, land use, and socio-demographics, thereby reducing data collection and field-based surveying efforts. It also presents preliminary results in a visually accessible format, potentially improving how data is shared and discussed amongst diverse stakeholders. This framework was applied to two case studies, one in Orange County, Florida and one in Placencia, Belize. A stormwater pond index (SPI) was developed to evaluate 961 residential wet ponds in Orange County, Florida, where data was available for land use and socio-demographic parameters, but limited for water quality. The SPI consisted of three categories (recreation, aesthetics, education) with a total of 13 indicators and provided a way to score the cultural and ecosystem services of 41 ponds based on available data. Using only three indicators (presence of a fence, Dissolved Oxygen (DO) < 4 mg/l, and water depth < 3 ft), 371 out of 961 stormwater ponds were assessed. Additional criteria based on socio-demographic information (distance to a school, population density, median household income under $50,000, percentage of population below the poverty line, and distance to parks) identified seven wet ponds as optimal for potential intervention to benefit residents and urban nutrient management purposes.
For the second case study, a water quality analysis and impact assessment was performed for the Placencia peninsula and lagoon in Belize. This study had access to water quality data, but limited land use data and very limited socio-demographic data. Since May 2014, water quality samples have been taken from 56 locations and analyzed monthly. For this study, Dissolved Oxygen (DO), Nitrate (NO3-N), Ammonia (NH3), Chemical Oxygen Demand (COD), 5-Day Biochemical Oxygen Demand (BOD5), Escherichia coli (E. coli), and Enterococci were selected to assess the spatial and temporal variation of water quality in the groundwater on the peninsula as well as the surface water in the lagoon, estuaries, and along the coast. A spline interpolation of DO, Nitrate, BOD5, and COD for June 2016 indicated the concentration distribution of those parameters and areas of special concern. A spatial analysis showed that Nitrate and Enterococci very frequently exceeded the effluent limits of Belize throughout the study area, while the other parameters contributed to the identification of key areas of concern. As a high variability of concentrations over time was observed, a temporal analysis was conducted, identifying a link between the water quality data and two temporal impact factors, rainfall and tourism. The two case studies showed the broad and flexible application of the FUN Management framework for GIS and the great advantages GIS offers to reduce costs and resource use.
APA, Harvard, Vancouver, ISO, and other styles
38

Afzal, Samra. "Big data analysis of Customers’ information: A case study of Swedish Energy Company’s strategic communication." Thesis, Uppsala universitet, Medier och kommunikation, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-388552.

Full text
Abstract:
Big data analysis and inbound marketing are interlinked and can play a significant role in identifying the target audience and in producing communication content tailored to that audience's needs for strategic communication campaigns. By bringing the marketing concepts of big data analysis and inbound marketing into the field of strategic communication, this quantitative study attempts to fill a gap in the limited body of knowledge of strategic communication research and practice. The study uses marketing campaigns as case studies to introduce a new strategic communication model, incorporating big data analysis and inbound marketing strategy into the three-stage model of strategic communication presented by Gulbrandsen, I. T., & Just, S. N. in 2016. Big data driven campaigns are used to explain the procedure of target audience selection, key concepts of big data analysis, future opportunities, and practical applications of big data for strategic communication practitioners and researchers, identifying the need for more academic research on, and practical use of, big data analysis and inbound marketing in strategic communication. The study shows that big data analysis has the potential to contribute to the field of strategic and target-oriented communication. Inbound marketing and big data analysis have been used and considered as marketing strategies, but this study attempts to shift attention towards their role in strategic communication; there is therefore a need to study big data analysis and inbound marketing with an open mind, without confining them to particular fields.
APA, Harvard, Vancouver, ISO, and other styles
39

Williams, Rhys E. "Modelling issues in repetitive construction and an approach to schedule updating." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/25145.

Full text
Abstract:
Planning and control of time and other resources are crucial to the construction of large projects. Yet current computerized techniques are unable to model the work patterns by which construction personnel plan a project. Furthermore, these methods are not capable of reflecting the day-to-day changes which must be monitored to control the construction site. The purpose of this thesis is to promote the usability of computerized planning and scheduling by developing the heuristic manner in which construction personnel perceive the project. Site studies conducted in cooperation with Poole Construction Limited and Foundation Company of Canada were performed using a computer scheduling system at the University of British Columbia which contained a prototype model of repetitive work. They provided insight into the process of repetition and rhythm by which projects are planned and into the requirements of the updating process necessary to monitor, and hence control, the project. Two models evolved. The definition of the general repetitive structure was formulated to provide construction personnel with a tool with which to model the process of repetition. The definition of an updating process was formulated capable of monitoring daily progress on a construction site. Work performed with these models has shown them to be realistic in their approach to construction management.
Applied Science, Faculty of
Civil Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
40

Singleton, Demian. "Value-added versus status comparative case studies of the utilization of student achievement data by public school systems /." Connect to resource online, 2009. http://library2.sage.edu/archive/thesis/ED/2009singleton_d.PDF.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Sarzalejo, de Bauduhin Sabrina 1955. "Integration of borehole and seismic data to unravel complex stratigraphy : case studies from the Mannville Group, western Canada." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=115696.

Full text
Abstract:
Understanding the stratigraphic architecture of geologically complex reservoirs, such as the heavy oil deposits of Western Canada, is essential to achieve efficient hydrocarbon recovery. Borehole and 3-D seismic data were integrated to define the stratigraphic architecture and generate 3-dimensional geological models of the Mannville Group in Saskatchewan. The Mannville is a stratigraphically complex unit formed of fluvial to marine deposits. Two areas in west-central and southern Saskatchewan were examined in this study. In west-central Saskatchewan, the area corresponds to a stratigraphically controlled heavy oil reservoir with production from the undifferentiated Dina-Cummings Members of the Lower Cretaceous Mannville Group. The southern area, although non-prospective for hydrocarbons, shares many similarities with time-equivalent strata in areas of heavy oil production. Seismic sequence stratigraphic principles together with log signatures permitted the subdivision of the Mannville into different packages. An initial geological model was generated integrating seismic and well-log data. Multiattribute analysis and neural networks were used to generate a pseudo-lithology or gamma-ray volume. The incorporation of borehole core data into the model and the subsequent integration with the lithological prediction were crucial to capture the distribution of reservoir and non-reservoir deposits in the study area. The ability to visualize the 3-D seismic data in a variety of ways, including arbitrary lines and stratal or horizon slicing techniques, helped the definition of stratigraphic features such as channels and scroll bars that affect fluid flow in hydrocarbon producing areas. Small-scale heterogeneities in the reservoir were not resolved due to the resolution of the seismic data. Although not undertaken in this study, the resulting stratigraphic framework could be used to help construct a static reservoir model.
Because of the small size of the 3-D seismic surveys, horizontal slices through the data volume generally imaged only small portions of the paleogeomorphologic features thought to be present in this area. As such, it was only through the integration of datasets that the geological models were established.
APA, Harvard, Vancouver, ISO, and other styles
42

Islam, Syed R. "Cumulative Impact of Shortest Path, Environment and Fuel Efficiency on Route Choice: Case Studies with Real-Time Data." ScholarWorks@UNO, 2016. http://scholarworks.uno.edu/td/2162.

Full text
Abstract:
Intelligent Transportation Systems (ITS) provide a great platform for planners to reduce environmental externalities from automobiles. We now have access to real-time data. We have been using the shortest path to provide route choices to the user, but we have the potential to add more variables in choosing routes. Real-time data can be used to measure carbon dioxide emissions during a trip; fuel efficiency can also be measured using real-time data. Planners should use this potential of ITS to reduce environmental impact. This paper thus evaluates whether considering the three variables of shortest path, environmental impact and fuel efficiency together, instead of shortest path alone, will change the route choice. It provides case studies on different types of routes and between different origin/destination pairs to evaluate the combined influence of these three variables on route choice.
APA, Harvard, Vancouver, ISO, and other styles
43

Van, Alstyne Audrey May. "Computers in the home curriculum project : an attitude and gender study." Thesis, University of British Columbia, 1991. http://hdl.handle.net/2429/31215.

Full text
Abstract:
Computers are a valuable tool for education. Studies have proven that the computer can assist in the development of a positive self-concept and a positive attitude toward school. Computers can increase student-teacher interaction and achievement by individualizing the learning process. The research clearly documents the dominance of males in the computer field. Home economics educators have the ability to assist individuals and families in using this tool to their best advantage. This research study included 224 students at Sir Charles Tupper School in Vancouver, B.C. The students were thirteen or fourteen years of age and in grade nine or ten. The study was conducted between September 1989 and February 1990. The purpose of this study was to determine if the integration of computers into home economics can encourage attitude changes and promote equitable computer use between male and female students. This study will test the assertion of previous research that indicates females are less interested in computers and less likely to use computers than males. Can females do as well as males and males as well as females when given the opportunity to study personally relevant material under the supervision of a female role model? Of the 224 students in the study, 185 were in the control group and 39 were in the treatment group. The treatment involved participation in the new course, Computers in the Home. This course studies the impact of computers on family life, and explores personal and home computer applications. The survey was designed to assess student attitudes toward the computer and how they may have changed as a result of the course. Student responses to the survey were analyzed using SPSS-X and Chi-Square analyses were performed to determine any significant differences. 
During the period of study, the enrollment patterns in both Computer Science and Computers in the Home refuted the majority of research in that more females than males were enrolled in these computer classes. It was expected and postulated that students enrolled in Computers in the Home would have been exposed to a different experience than those not enrolled. Unfortunately, there was no significant difference between the attitudes of the students enrolled in the course and students not enrolled in Computers in the Home. Although empirical observation throughout the study period led the researcher to believe there were differences, statistical analysis of the survey responses did not support this observation. Males overtly displayed their enjoyment: they were more adventurous, aggressive and curious. Female students were quieter and tended to be more covert toward the machine. Since no difference in attitude was found, this research study has shown that females are as interested in computers, and use them as often, as male students at Sir Charles Tupper School. Although females react differently toward computers, the general trend appears to be moving toward more equitable computer experiences for all.
Education, Faculty of
Curriculum and Pedagogy (EDCP), Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
44

Lam, Hon-yin Hymen, and 林漢賢. "Chargeout system for data processing services: a case study on Standard Chartered Bank, HK." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1987. http://hub.hku.hk/bib/B42574043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Leech, Andrea Dawn. ""What Does This Graph Mean?" Formative Assessment With Science Inquiry to Improve Data Analysis." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1537.

Full text
Abstract:
This study investigated the use of formative assessment to improve three specific data analysis skills within the context of a high school chemistry class: graph interpretation, pattern recognition, and making conclusions based on data. Students need to be able to collect data, analyze that data, and produce accurate scientific explanations (NRC, 2011) if they want to be ready for college and careers after high school. This mixed methods study, performed in a high school chemistry classroom, investigated the impact of the formative assessment process on data analysis skills that require higher order thinking. We hypothesized that the use of evaluative feedback within the formative assessment process would improve specific data analysis skills. The evaluative feedback was given to one group and withheld from the other for the first part of the study. The treatment group had statistically better data analysis skills after evaluative feedback than the control group. While these results are promising, they must be considered preliminary due to a number of limitations involved in this study.
APA, Harvard, Vancouver, ISO, and other styles
46

Macpherson, Robert Allan. "Immigrant integration and the global recession : a case study using Swedish register data." Thesis, University of St Andrews, 2015. http://hdl.handle.net/10023/7598.

Full text
Abstract:
In many immigrant-receiving countries, the increased rate and diversification of immigration has placed immigrant integration high on academic and political agendas. Immigrant integration must also be understood within increasingly complex contexts due to the global recession and new geographies of immigrant settlement. The aim of this thesis is to deepen understanding of immigrant integration processes during the recession by using Sweden as an empirical lens. Using Swedish register data, this thesis examines the registered population during the recent economic boom and bust to explore how the recession may have resulted in differential labour market and migration outcomes between immigrants and natives. The first empirical chapter highlights how long-term processes have produced a spatial, immigrant division of labour that results in differential risks of unemployment during the recession. The second empirical chapter examines internal migration to show that although cyclical patterns of the economy offer some explanation of the differences in experiences between immigrants and natives, long-term, deeper processes are more important in understanding geographies of immigrant integration. The final empirical chapter examines a recent immigrant cohort to show that labour market entry is by no means uniform across time, space and immigrant origin. Conceptually, the thesis shows that existing theories of immigrant integration processes during recessions are underdeveloped and that processes taking place across other temporal and spatial scales offer deeper explanations for the differential outcomes between immigrants and natives. The thesis also reveals what is knowable from register data and how such data allow future research to present a more holistic picture of how various forms of immigrant integration play out across time (economic cycles, lifecourse, generations) and across space (urban and rural areas, old and new immigrant destinations).
This methodological contribution is significant given that social scientists are currently evaluating the relative merits of population censuses versus administrative register data.
APA, Harvard, Vancouver, ISO, and other styles
47

Phipps, Owen Dudley. "The use of a database to improve higher order thinking skills in secondary school biology: a case study." Thesis, Rhodes University, 1994. http://hdl.handle.net/10962/d1003696.

Full text
Abstract:
The knowledge explosion of the last decade has left education in schools far behind. The emphasis in schools must change if they are to prepare students for their future lives. Tertiary institutions as well as commerce and industry need people who have well-developed cognitive skills. A further requirement is that the school leaver must have skills pertaining to information processing. The skills that are required are those which have been labelled higher order thinking skills. The work of Piaget, Thomas and Bloom has led to a better understanding of what these skills actually are. Resnick sees these skills as being: nonalgorithmic; complex; yielding multiple solutions; involving nuanced judgements; involving the application of multiple criteria; involving uncertainty; involving self-regulation of the thinking process; imposing meaning and being effortful. How these can be taught and the implications of doing so are considered by the researcher. The outcome of this consideration is that higher order thinking entails communication skills, reasoning, problem solving and self management. The study takes the form of an investigation of a particular case: whether a Biology field trip could be used as a source of information, which could be handled by a computer, so that higher order thinking skills could be acquired by students. Students were instructed in the use of a Database Management System called PARADOX. The students then went on an excursion to a Rocky Shore habitat to collect data about the biotic and abiotic factors pertaining to that ecosystem. The students worked in groups sorting data and entering it into the database. Once all the data had been entered, the students developed hypotheses and queried the database to obtain evidence to substantiate or disprove their hypotheses. Whilst this was in progress, the researcher obtained data by means of observational field notes, tape recordings, evoked documents and interviews.
The qualitative data was then arranged into classes to see if it showed that the students were using any of the higher order thinking skills. The results showed that the students did use the listed higher order thinking skills whilst working on the database.
APA, Harvard, Vancouver, ISO, and other styles
48

Gibson, Carolyn M. (Carolyn Margaret). "A study of the integration of computers into the writing processes of first-year college composition students /." Thesis, McGill University, 1990. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=74582.

Full text
Abstract:
Twenty first-year management students were observed as they undertook an Effective Written Communication course (EWC) in a microcomputer lab at McGill University. The study focused on the students' adaptation to the computer during a one-semester course and for a two-year period following the course. Results suggest that although students master the basics of word processors with relative ease, they bring entrenched paper and pen habits to the computer lab; habits that are not easily changed. This study further suggests that because student writers in a first-year composition class are often inexperienced writers and computer users, inferences based upon this group may not apply to other populations.
APA, Harvard, Vancouver, ISO, and other styles
49

Ben, Nasr Sana. "Mining and modeling variability from natural language documents : two case studies." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S013/document.

Full text
Abstract:
Domain analysis is the process of analyzing a family of products to identify their common and variable features. This process is generally carried out by experts on the basis of existing informal documentation. When performed manually, this activity is both time-consuming and error-prone. In this thesis, our general contribution is to address mining and modeling variability from informal documentation. We adopt Natural Language Processing (NLP) and data mining techniques to identify features, commonalities, differences and feature dependencies among related products. We investigate the applicability of this idea by instantiating it in two different contexts: (1) reverse engineering Feature Models (FMs) from regulatory requirements in the nuclear domain and (2) synthesizing Product Comparison Matrices (PCMs) from informal product descriptions. In the first case study, we adopt NLP and data mining techniques based on semantic analysis, requirements clustering and association rules to assist experts when constructing feature models from these regulations. The evaluation shows that our approach is able to retrieve 69% of correct clusters without any user intervention. Moreover, feature dependencies show a high predictive capacity: 95% of the mandatory relationships and 60% of the optional relationships are found, and the totality of the requires and excludes relationships are extracted. In the second case study, our proposed approach relies on contrastive analysis technology to mine domain-specific terms from text, information extraction, term clustering and information clustering. Overall, our empirical study shows that the resulting PCMs are compact and exhibit numerous quantitative, comparable pieces of information. The user study shows promising results: our automatic approach retrieves 43% of correct features and 68% of correct values in one step and without any user intervention. We show that there is a potential to complement or even refine the technical information of products.
The main lesson learnt from the two case studies is that the extraction and exploitability of variability knowledge depend on the context, the nature of the variability and the nature of the text.
APA, Harvard, Vancouver, ISO, and other styles
50

Bloomer, Stephen F. "Examination of the potential of seismic reflection data for paleoceanographic studies, case study from the eastern equatorial Pacific Ocean." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0033/NQ65452.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles