
Dissertations / Theses on the topic 'Transparency and data for valuation'

Consult the top 50 dissertations / theses for your research on the topic 'Transparency and data for valuation.'

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Ineza, Kayihura Didier. "Adoption of Artificial Intelligence in Commercial Real Estate : Data Challenges, Transparency and Implications for Property Valuations." Thesis, KTH, Fastigheter och byggande, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-298074.

Full text
Abstract:
Investment decisions in the property market are closely connected to property valuation. Thus, accurate valuation results and deep analysis of the market are essential. Artificial Intelligence (AI) models have been successfully adopted in different fields and markets; the real estate market, however, typically lags in adapting to these changes. Swedish commercial property market arrangements are characterized by increasing confidentiality of certain data types, and as a consequence the adoption of AI valuation models in the Swedish commercial property market is slowed down. This study aims to bridge the gap in existing research by focusing on market actors' behavior in relation to market development and on exploiting the capabilities inherent in adopting AI models in commercial property valuations. A qualitative approach based on interviews with experts was used to achieve the main objective of this study. Results suggest that the AI valuation models used on commercial properties are applied to valuation data rather than to real transaction data. The analysis covers several aspects, including data challenges and data disclosure, the role of government authorities, and market and data perspectives on the application of AI to property valuations. A framework for the implications of AI in property valuation over different time horizons, presented in this study, will help to overcome data challenges and improve the transparency of valuation results. This study is beneficial to various actors in the property market, including government authorities, investors, valuers and researchers.
APA, Harvard, Vancouver, ISO, and other styles
2

Sciuto, Alex. "Data Visualization for Medical Price Education and Transparency." Research Showcase @ CMU, 2015. http://repository.cmu.edu/theses/94.

Full text
Abstract:
The health care system in the United States is changing rapidly. Individual patients are expected to become educated medical consumers, making informed choices and paying for those choices. Many researchers and designers are studying how medical consumers understand their medical care, but there is an opportunity for meaningful design strategies using data visualization to help consumers understand how much they pay for their care. This thesis uses service and user-centered design methods and interactive data visualization to create systems that gather medical prices and display them back to users, all with the goal of creating more educated medical consumers.
APA, Harvard, Vancouver, ISO, and other styles
3

Jackson, Kirsti. "Qualitative methods, transparency, and qualitative data analysis software: Toward an understanding of transparency in motion." Thesis, University of Colorado at Boulder, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3621346.

Full text
Abstract:

This study used in-depth, individual interviews to engage seven doctoral students and a paired member of their dissertation committee in discussions about qualitative research transparency and the use of NVivo, a Qualitative Data Analysis Software (QDAS), in pursuing it. The study also used artifacts (an exemplary qualitative research article of the participant's choice and the student's written dissertation) to examine specific researcher practices within particular contexts. The design and analysis were based on weak social constructionist (Schwandt, 2007), boundary object (Star, 1989; Star & Griesemer, 1989) and boundary-work (Gieryn, 1983, 1999) perspectives to facilitate a focus on: 1) the way transparency was used to coordinate activity in the absence of consensus, and 2) the discursive strategies participants employed to describe various camps (e.g., qualitative and quantitative researchers) and to simultaneously stake claims to their understanding of transparency.

The analysis produced four key findings. First, the personal experiences of handling their qualitative data during analysis influenced the students' pursuit of transparency, long before any consideration of being transparent in the presentation of findings. Second, the students faced unpredictable issues when pursuing transparency, to which they responded in situ, considering a wide range of contextual factors. This was true even when informed by ideal types (Star & Griesemer, 1989) such as the American Educational Research Association (2006) guidelines that provided a framework for pursuing the principle of transparency. Third, the QDAS-enabled visualizations students used while working with NVivo to interpret the data were described as a helpful (and sometimes indispensable) aspect of pursuing transparency. Finally, this situational use of visualizations to pursue transparency was positioned to re-examine, verify, and sometimes challenge their interpretations of their data over time as a form of self-interrogation, with less emphasis on showing their results to an audience. Together, these findings lead to a new conceptualization of transparency in motion: a process of tacking back and forth between the situated practice of transparency and transparency as an ideal type. The study concludes with several proposals for advancing a transparency pedagogy, offered to help qualitative researchers move beyond the often implicit, static, and post-hoc invocations of transparency in their work.

APA, Harvard, Vancouver, ISO, and other styles
4

Nevitt, S. J. "Data sharing and transparency : the impact on evidence synthesis." Thesis, University of Liverpool, 2017. http://livrepository.liverpool.ac.uk/3017585/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hudson, Sara P. "Using contingent valuation data to simulate referendums." Thesis, This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-03302010-020112/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Nicholson, Alexander. "Generalization error estimates and training data valuation." Diss., Pasadena, Calif.: California Institute of Technology, 2002. http://resolver.caltech.edu/CaltechETD:etd-09062005-083717.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Rowley, Steven. "A National Valuation Evidence Database : the future of valuation data provision and collection." Thesis, Northumbria University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.245441.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Pulls, Tobias. "Preserving Privacy in Transparency Logging." Doctoral thesis, Karlstads universitet, Institutionen för matematik och datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-35918.

Full text
Abstract:
The subject of this dissertation is the construction of privacy-enhancing technologies (PETs) for transparency logging, a technology at the intersection of privacy, transparency, and accountability. Transparency logging facilitates the transportation of data from service providers to users of services and is therefore a key enabler for ex-post transparency-enhancing tools (TETs). Ex-post transparency provides information to users about how their personal data have been processed by service providers, and is a prerequisite for accountability: you cannot hold a controller accountable for what is unknown. We present three generations of PETs for transparency logging to which we contributed. We start with early work that defined the setting as a foundation and build upon it to increase both the privacy protections and the utility of the data sent through transparency logging. Our contributions include the first provably secure privacy-preserving transparency logging scheme and a forward-secure append-only persistent authenticated data structure tailored to the transparency logging setting. Applications of our work range from notifications and deriving data disclosures for the Data Track tool (an ex-post TET) to secure evidence storage.
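To make the underlying idea concrete, here is a minimal sketch of an append-only log built as a hash chain, in which each entry commits to its predecessor so retroactive tampering is detectable on verification. This illustrates only the general technique, not the dissertation's provably secure, privacy-preserving or forward-secure constructions; all names in the snippet are hypothetical.

```python
import hashlib
import json
import time

class AppendOnlyLog:
    """Minimal hash-chained log: each entry commits to the previous digest,
    so any retroactive modification breaks the chain on verification."""

    def __init__(self):
        self.entries = []
        self.prev_digest = b"\x00" * 32  # genesis value

    def append(self, message: str) -> str:
        entry = {
            "timestamp": time.time(),
            "message": message,
            "prev": self.prev_digest.hex(),
        }
        digest = hashlib.sha256(
            self.prev_digest + json.dumps(entry, sort_keys=True).encode()
        ).digest()
        self.entries.append((entry, digest))
        self.prev_digest = digest
        return digest.hex()

    def verify(self) -> bool:
        prev = b"\x00" * 32
        for entry, digest in self.entries:
            expected = hashlib.sha256(
                prev + json.dumps(entry, sort_keys=True).encode()
            ).digest()
            if expected != digest:
                return False
            prev = digest
        return True

log = AppendOnlyLog()
log.append("profile data shared with analytics partner")  # hypothetical events
log.append("address record updated by support staff")
assert log.verify()
```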
APA, Harvard, Vancouver, ISO, and other styles
9

Murmann, Patrick. "Towards Usable Transparency via Individualisation." Licentiate thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-71120.

Full text
Abstract:
The General Data Protection Regulation grants data subjects the legal rights of transparency and intervenability. Ex post transparency provides users of data services with insight into how their personal data have been processed, and potentially clarifies what consequences will or may arise due to the processing of their data. Technological artefacts, ex post transparency-enhancing tools (TETs), convey such information to data subjects, provided the TETs are designed to suit the predisposition of their audience. Although usability is a prerequisite for transparency, many of the TETs available to date lack it: their capabilities do not reflect the needs of their end users. The objective of this thesis is therefore to systematically apply the concept of human-centred design to ascertain design principles that demonstrably lead to the implementation of a TET that facilitates ex post transparency and supports intervenability. To this end, we classify the state of the art of usable ex post TETs published in the literature and discuss the gaps therein. Contextualising our findings in the domain of fitness tracking, we investigate to what extent individualisation can help accommodate the needs of users of online mobile health services. We introduce the notion of privacy notifications as a means to inform data subjects about incidences worthy of their attention and examine how far privacy personas reflect the preferences of distinctive groups of recipients. We suggest a catalogue of design guidelines that can serve as a basis for specifying context-sensitive requirements for the implementation of a TET that leverages privacy notifications to facilitate ex post transparency, and which also serve as criteria for the evaluation of a future prototype.

Paper 2 was included in the thesis as a manuscript; it has since been published.

APA, Harvard, Vancouver, ISO, and other styles
10

Bonatti, Piero A., Bert Bos, Stefan Decker, Javier David Fernandez Garcia, Sabrina Kirrane, Vassilios Peristeras, Axel Polleres, and Rigo Wenning. "Data Privacy Vocabularies and Controls: Semantic Web for Transparency and Privacy." CEUR Workshop Proceedings, 2018. http://epub.wu.ac.at/6490/1/SW4SG_2018.pdf.

Full text
Abstract:
Managing privacy and understanding the handling of personal data has turned into a fundamental right, at least for Europeans, since 25 May 2018, with the coming into force of the General Data Protection Regulation (GDPR). Yet, whereas many different tools by different vendors promise to guarantee companies' compliance with the GDPR in terms of consent management and keeping track of the personal data they handle in their processes, interoperability between such tools, as well as uniform user-facing interfaces, will be needed to enable true transparency, user-configurable and -manageable privacy policies, and data portability (as also, implicitly, promised by the GDPR). We argue that such interoperability can be enabled by agreed-upon vocabularies and Linked Data.
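As a rough illustration of the paper's Linked Data argument, the sketch below describes a processing activity against the W3C Data Privacy Vocabulary (DPV) namespace using rdflib (6+, where serialize returns a string). The specific DPV terms and the example namespace are illustrative assumptions, not a vetted profile taken from the paper.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

DPV = Namespace("https://w3id.org/dpv#")  # W3C Data Privacy Vocabulary
EX = Namespace("https://example.org/")    # hypothetical company namespace

g = Graph()
g.bind("dpv", DPV)

# Describe one processing activity; the DPV terms below are illustrative.
handling = EX.newsletterProcessing
g.add((handling, RDF.type, DPV.PersonalDataHandling))
g.add((handling, DPV.hasPurpose, DPV.Marketing))
g.add((handling, DPV.hasLegalBasis, DPV.Consent))
g.add((handling, DPV.hasPersonalDataCategory, DPV.Contact))
g.add((handling, DPV.hasDataController, EX.ACMECorp))

print(g.serialize(format="turtle"))
```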
APA, Harvard, Vancouver, ISO, and other styles
11

Sjöström, Linus, and Carl Nykvist. "How Certificate Transparency Impact the Performance." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-140838.

Full text
Abstract:
Security on the Internet is essential to ensure the privacy of individuals. Today, Transport Layer Security (TLS) and certificates are used to ensure this. But certificates alone are not enough to maintain confidentiality, and therefore a new concept, Certificate Transparency (CT), has been introduced. CT improves security by allowing the analysis of suspicious certificates. Validation by CT uses public logs that can return a Signed Certificate Timestamp (SCT), a promise returned by the log indicating that the certificate will be added to it. A server may then deliver the SCT to a client in three different ways: an X.509v3 extension, Online Certificate Status Protocol (OCSP) stapling, or a TLS extension. For further analysis, we have created a tool to collect data during TLS handshakes and data transfer, including byte information, the certificates themselves, the SCT delivery method and, especially, timing information. From our dataset we see that most websites do not use CT, and the ones that do almost exclusively use the X.509v3 extension to send their SCTs.
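A minimal sketch of inspecting the X.509v3 delivery method described above: fetch a server certificate and list any embedded SCTs using Python's ssl module and the cryptography library (assuming a reasonably recent version that exposes the precertificate SCT extension). This covers only one of the three delivery methods; OCSP stapling and the TLS extension require handshake-level access.

```python
import ssl
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

def embedded_scts(host: str, port: int = 443):
    """Return the SCTs embedded in the server's certificate, if any
    (the X.509v3 delivery method only)."""
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    try:
        ext = cert.extensions.get_extension_for_oid(
            ExtensionOID.PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS
        )
    except x509.ExtensionNotFound:
        return []
    return list(ext.value)

# Example: print log ID and timestamp of each embedded SCT.
for sct in embedded_scts("www.example.com"):
    print(sct.log_id.hex(), sct.timestamp)
```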
APA, Harvard, Vancouver, ISO, and other styles
12

Rahman, Amn. "Improving the transparency of government requests for user data from ICT companies." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104826.

Full text
Abstract:
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 96-107).
In 1968, the US Congress enacted a detailed list of requirements for transparency reporting of wiretaps but with subsequent surveillance statutes with minimal reporting requirements and rapidly evolving Internet technologies, the gap in surveillance transparency grew. The Snowden disclosures in 2013 provided a peek into the surveillance landscape and the central role of ICT companies in fostering it. While attempting to salvage their tarnished reputations and encourage public discussion, several companies began to see an incentive in publishing 'transparency reports', providing statistics on user data requested by the government. Since then, publishing these reports has become a norm in the industry but the reports provide little benefit in bridging the transparency gap. The varying formats, definitions and levels of granularity in the reports and the absence of a governance framework in the industry, prevent the reports from becoming useful tools for stakeholders wishing to inform policy decisions. In addition, new technologies, modern surveillance techniques, and evolving business models have created a set of transparency requirements that is markedly different from the initial set of requirements established under the US Wiretap Act. This thesis identifies the missing elements in the current transparency reports while providing a detailed list of necessary features. In addition, it uncovers the incentives that can be leveraged using available tools to encourage better reporting practices and suggests technical, legal and policy solutions so that transparency reporting may become a useful public policy tool rather than a ritualistic practice.
by Amn Rahman
S.M. in Technology and Policy
APA, Harvard, Vancouver, ISO, and other styles
13

Naude, Stephanus David. "Application of spatial resource data to assist in farmland valuation." Thesis, Stellenbosch : Stellenbosch University, 2011. http://hdl.handle.net/10019.1/18118.

Full text
Abstract:
Thesis (MScAgric) -- Stellenbosch University, 2011.
ENGLISH ABSTRACT: In South Africa more than 80 percent of the total land area is used for agriculture and subsistence livelihoods. A land transaction is generally not a recurring action for most buyers and sellers; since their experience and knowledge are limited, the services of property agents and valuers are sometimes used simply to make more information available. Insufficient information and the inability to observe differences in land productivity give rise to the undervaluation of good land and the overvaluation of poor land. The value of a property plays an important role in the acquisition of a bond; in this context farm valuations are essential, and commercial banks therefore make more use of specialist businesses that have professional valuers available. The advent of the Internet made access to comprehensive information sources easier for property agents and valuers, whose critical time and resources can now be effectively managed through Geographic Information System (GIS) integrated workflow processes. This study aims to develop the blueprint for a farm valuation support system (FVSS) that assists valuers in their application of the comparable sales method by enabling them to do the following: (1) rapid identification of the location of the subject property and transaction properties on an electronic map; (2) comparison of the subject property with the transaction properties in terms of value-contributing attributes that can be expressed in a spatial format, mainly (a) location and (b) land resource quality factors not considered in existing valuation systems that primarily focus on residential property. Interpretation of soil characteristics to determine the suitability of a soil for annual or perennial crops requires the specialized knowledge of soil scientists, knowledge not normally found among property valuers or estate agents. For this reason an algorithm that generates an index value was developed to allow easy comparison of the land of a subject property with that of transaction properties. That this index value reflects the soil suitability of different areas sufficiently accurately was confirmed by soil suitability data of the Breede and Berg River areas, obtained by soil scientists by means of a reconnaissance soil survey. This index value distinguishes the proposed FVSS from other existing property valuation systems and can therefore be used by valuers as a first approximation of a property's soil suitability before doing further fieldwork. A nationwide survey was done among valuers and estate agents that provided information for the design of the proposed FVSS and proved that the need for such a system does exist and that it will be used by valuers.
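The thesis's actual algorithm is not reproduced here, but a toy sketch conveys the idea: compute an area-weighted soil suitability index per property, then use the ratio of indices when adjusting a comparable sale. The suitability classes, weights and prices below are hypothetical.

```python
# Hypothetical suitability classes and weights; not the thesis's algorithm.
SUITABILITY_WEIGHTS = {"high": 1.0, "moderate": 0.6, "marginal": 0.3, "unsuitable": 0.0}

def soil_index(class_fractions: dict) -> float:
    """Area-weighted soil suitability index in [0, 1], given the fraction
    of a property's area falling in each suitability class."""
    return sum(SUITABILITY_WEIGHTS[c] * f for c, f in class_fractions.items())

def adjusted_comparable_price(sale_price_per_ha: float,
                              comp_index: float,
                              subject_index: float) -> float:
    """Naively scale a comparable sale by the ratio of soil indices."""
    return sale_price_per_ha * (subject_index / comp_index)

subject = soil_index({"high": 0.5, "moderate": 0.3, "marginal": 0.2})
comp = soil_index({"high": 0.7, "moderate": 0.3})
print(adjusted_comparable_price(25_000, comp, subject))  # hypothetical R/ha
```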
APA, Harvard, Vancouver, ISO, and other styles
14

Allard, Nathan, and Tobias Hagström. "Modern Housing Valuation : A Machine Learning Approach." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301866.

Full text
Abstract:
The primary goal of this report was to examine and demonstrate how machine learning methods can be used to produce accurate and useful apartment valuation models, focusing on the Stockholm market. Furthermore, the paper analyzed how different attributes, including apartment descriptions, affect the price of an apartment. Accurate and efficient valuation could be useful not only for individual property owners, who want a quick and precise valuation, but also for real estate agencies and tax authorities. To this end, several different methods were applied and evaluated, the best of which achieved a MAPE of 6.37%. Some of the most important features in relation to apartment price included rent, construction year and a wide range of location-related variables. It was concluded that machine learning methods can produce accurate and useful real estate valuation models, outperforming real estate agents' manual appraisals. In addition, location-based features were identified as the most important, whilst bathroom and kitchen condition was not as important as expected. Furthermore, whilst the models developed in this report did not manage to utilize 'agent-written' real estate descriptions, the results indicate that there is valuable information to be extracted, provided a more rigorous pre-processing and analysis of the data is conducted.
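A minimal sketch of the general approach the report describes: a gradient-boosting regressor trained on tabular apartment features and evaluated with MAPE. The file and column names are hypothetical stand-ins, not the report's actual pipeline.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

# Hypothetical dataset with features like those the report names as
# important: rent/fee, construction year, and location variables.
df = pd.read_csv("apartments.csv")  # assumed file
features = ["monthly_fee", "construction_year", "living_area",
            "latitude", "longitude", "floor"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["sold_price"], test_size=0.2, random_state=0)

model = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05)
model.fit(X_train, y_train)

mape = mean_absolute_percentage_error(y_test, model.predict(X_test))
print(f"MAPE: {mape:.2%}")  # the report's best model reached 6.37%
```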
APA, Harvard, Vancouver, ISO, and other styles
15

Dlamini, Majaha. "Data driven urbanism: challenges in implementing open data policy and digital transparency in the City of Cape Town." Master's thesis, Faculty of Science, 2019. https://hdl.handle.net/11427/31689.

Full text
Abstract:
As part of its quest to become the first digital African city, in 2014 the City of Cape Town adopted an open data policy, which was later coupled with an open data portal to make government data available for public access. This was touted as a novel initiative, as the City of Cape Town was the first African city to implement a policy of this nature. The open data initiative aimed at enhancing transparency and accountability as well as promoting inclusive economic participation for its citizens. Open data project managers from the city and external industry experts working on open data initiatives were interviewed to understand the current state of open data within the city and how the city worked with other stakeholders. The study draws on these interviews to present the challenges experienced by the city from the city's official point of view as well as from open data experts working closely with the city. To understand the practical experience of how the city publishes data on its platforms, the study also extensively explored the city's open data portal, and examined the documented open data policy guidelines, contrasting and comparing them with current practical experience. To guide the objectives and analysis of the study, four key themes were adopted from the literature: context, use, data and impact. Context focused on the overall environment in which open data in the city is provided as a public service; use focused on the uses of open data as well as its users; data focused on the types of datasets published on the portal and the technical challenges in publishing them. Lastly, impact looked at the expected benefits and goals of the city's open data policy. Through these themes the study highlighted the ongoing challenges, at various levels, that the city experiences as it implements and develops the open data policy. Overall, it was noted that open data was not a one-off goal: challenges arose continuously while implementing and developing the policy, and various stakeholders within and outside government had to collaborate to effectively meet the required open data standards.
APA, Harvard, Vancouver, ISO, and other styles
16

Poon, Chun-ho, and 潘仲豪. "Efficient occlusion culling and non-refractive transparency rendering for interactive computer visualization." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B2974328X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Hayles, Kelly. "A Property Valuation Model for Rural Victoria." RMIT University. Mathematical and Geospatial Sciences, 2006. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20070221.150256.

Full text
Abstract:
Licensed valuers in the State of Victoria, Australia currently appraise rural land using manual techniques. Manual techniques typically involve site visits to the property and liaison with property owners through interview, and require a valuer experienced in agricultural properties to determine a value. Manual techniques typically take longer to determine a property value than automated techniques, provided appropriate data are available. Manual methods of valuation can be subjective and lead to bias in valuation estimates, especially where valuers have varying levels of experience within a specific regional area. Automation may lend itself to more accurate valuation estimates by providing greater consistency between valuations. Automated techniques presently in use for valuation include artificial neural networks, expert systems, case-based reasoning and multiple regression analysis. The latter technique appears most widely used for valuation. The research aimed to develop a conceptual rural property valuation model, and to develop and evaluate quantitative models for rural property valuation based on the variables identified in the conceptual model. The conceptual model was developed by examining peer research, Valuation Best Practice Standards (a standard in use throughout Victoria for rating valuations) and rural property valuation texts. Using data that are available only digitally and publicly, the research assessed this conceptualisation using properties from four LGAs in the Wellington and Wimmera Catchment Management Authority (CMA) areas in Victoria. Cluster analysis was undertaken to assess whether sub-markets determined statistically can lead to models that are more accurate than sub-markets determined using geographically defined areas. The research is divided into two phases: the 'available data phase' and the 'restricted data phase'. The 'available data phase' used publicly available digital data to build quantitative models to estimate the value of rural properties. The 'restricted data phase' used data that became available near the completion of the research. The research examined the effect of using statistically derived sub-markets as opposed to geographically derived ones for property valuation. Cluster analysis was used during both phases of model development and showed that one of the clusters developed in the available data phase was superior in its model prediction compared to the models produced using geographically derived regions. A number of limitations with the digital property data available for Victoria were found. Although GIS analysis can enable more property characteristics to be derived and measured from existing data, it is reliant on having access to suitable digital data. The research also identified limitations with the metadata elements in use in Victoria (ANZMETA DTD version 1). It is hypothesised that to further refine the models and achieve greater levels of price estimation, additional properties would need to be sourced and added to the current property database. It is suggested that additional research needs to address issues associated with sub-market identification. If the results of additional modelling indicated significantly different levels of price estimation, then these models could be used with manual techniques to evaluate manually derived valuation estimates.
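A brief sketch of the statistically derived sub-market idea: cluster transactions on their attributes with k-means, then fit a separate hedonic regression per cluster instead of per geographic region. All file and feature names are hypothetical.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("rural_sales.csv")  # assumed transaction data
features = ["area_ha", "rainfall_mm", "distance_to_town_km", "soil_index"]

# Statistically derived sub-markets instead of geographic LGA boundaries.
X = StandardScaler().fit_transform(df[features])
df["submarket"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# One hedonic price model per sub-market.
models = {}
for label, group in df.groupby("submarket"):
    m = LinearRegression().fit(group[features], group["sale_price"])
    models[label] = m
    print(label, f"R^2 = {m.score(group[features], group['sale_price']):.2f}")
```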
APA, Harvard, Vancouver, ISO, and other styles
18

Watanabe, Nobuhide 1967. "Business valuation of location-specific infrastructure projects in data-poor regions." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/16750.

Full text
Abstract:
Thesis (S.M. in Urban Studies and Planning; and, S.M. in Real Estate Development)--Massachusetts Institute of Technology, Dept. of Urban Studies and Planning, 2000.
Includes bibliographical references (leaves 56-57).
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
A methodology for determining the financial values (business values) of physical infrastructure projects is presented from the public point of view. The business valuation model in this thesis adopts three concepts from financial modeling: Monte Carlo simulation (probability-generated cash flows), the Capital Asset Pricing Model, and Adjusted Present Value. Using this model, the business value of a hypothetical infrastructure project is simulated 1,000 times and the mean business value is analyzed in terms of the patterns and magnitudes of the simulation. The results from the 1,000 simulations showed large differences between the value derived by this model and those derived by the traditional net present value method. The model also elucidated qualitative information on how levels of government financial support, such as subsidies, tax incentives and revenue guarantees, affect the project's business value by component, and on how a project's contractual framework may affect the business value when private contractors bear key uncertain risks, such as demand changes and construction cost overruns.
by Nobuhide Watanabe.
S.M. in Urban Studies and Planning; and, S.M. in Real Estate Development
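A compressed sketch of the three ingredients the abstract names: probability-generated cash flows via Monte Carlo simulation, a CAPM-derived discount rate for the unlevered flows, and an APV-style addition of a financing side effect such as a subsidy. Every parameter below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N, YEARS = 1_000, 20          # the abstract reports 1,000 simulations

# CAPM cost of capital for the unlevered cash flows (hypothetical inputs).
rf, beta, market_premium = 0.05, 0.8, 0.06
r_unlevered = rf + beta * market_premium

# Probability-generated cash flows: uncertain demand and operating costs.
base_revenue = 12.0                                   # hypothetical units/year
demand = rng.normal(1.0, 0.25, size=(N, YEARS))       # demand multiplier
opex = 4.0 * rng.lognormal(0.0, 0.1, size=(N, YEARS)) # cost overrun risk
cash_flows = base_revenue * demand - opex

years = np.arange(1, YEARS + 1)
npv_unlevered = (cash_flows / (1 + r_unlevered) ** years).sum(axis=1)

# APV: add the financing side effect, e.g. a government subsidy
# discounted at the lower-risk risk-free rate.
subsidy = 0.5 * np.ones(YEARS)
pv_subsidy = (subsidy / (1 + rf) ** years).sum()

apv = npv_unlevered + pv_subsidy - 60.0   # minus hypothetical capex
print(f"mean business value: {apv.mean():.2f}, P(value<0) = {(apv < 0).mean():.1%}")
```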
APA, Harvard, Vancouver, ISO, and other styles
19

Bandrowski, Anita. "Rigor and Transparency i.e., How to prevent the zombie paper Apocalypse." University of Arizona Library (Tucson, AZ), 2016. http://hdl.handle.net/10150/621551.

Full text
Abstract:
Presentation given on October 27, 2016 at Data Reproducibility: Integrity and Transparency program as part of Open Access Week 2016.
The NIH now requires the authentication of Key Biological Resources to be specified in a scored portion of most grant applications, but what does it mean to authenticate? We will discuss what Key Biological Resources are, the ongoing efforts to understand how to authenticate them and, of course, the resources available, including examples. The journal response to authentication will also be discussed, along with practical steps that every researcher can take today to improve the reporting of research in scientific publications.
APA, Harvard, Vancouver, ISO, and other styles
20

Kane, Gregory D. "Accounting data and stock returns across business-cycle associated valuation change periods." Diss., This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-07282008-134006/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Choong, Kwee Keong. "Residual income information dynamics and equity valuation : a study using UK data." Thesis, Imperial College London, 2003. http://hdl.handle.net/10044/1/8707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Morel, Victor. "Enhancing transparency and consent in the internet of things." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI073.

Full text
Abstract:
In an increasingly connected world, the Internet permeates every aspect of our lives. The number of devices connected to the global network is rising, with prospects foreseeing 75 billion devices by 2025. The Internet of Things envisioned twenty years ago is now materializing at a fast pace, but this growth is not without consequence: the increasing number of devices raises the possibility of surveillance to a level never seen before. A major step was taken in 2018 to safeguard privacy, with the introduction of the General Data Protection Regulation (GDPR) in the European Union. It imposes obligations on data controllers regarding the content of information about personal data collection and processing, and regarding the means of communicating this information to data subjects. This information is all the more important in that it is required for consent, which is one of the legal grounds for processing personal data. However, the Internet of Things can pose difficulties for implementing lawful information communication and consent management. The tension between the GDPR's requirements for information and consent and the Internet of Things cannot be easily resolved; it is, however, possible. The goal of this thesis is to provide a solution for information communication and consent management in the Internet of Things from a technological point of view. To do so, we introduce a generic framework for information communication and consent management in the Internet of Things. This framework is composed of a protocol to communicate and negotiate privacy policies, requirements to present information and interact with data subjects, and requirements on the provability of consent. We support the feasibility of this generic framework with different options of implementation. The communication of information and consent through privacy policies can be implemented in two different manners: directly and indirectly. We then propose ways to implement the presentation of information and the provability of consent. A design space is also provided for systems designers, as a guide for choosing between the direct and the indirect implementations. Finally, we present fully functioning prototypes devised to demonstrate the feasibility of the framework's implementations. We illustrate how the indirect implementation of the framework can be developed as a collaborative website named Map of Things, and we sketch the direct implementation, combined with an agent presenting information to data subjects, in the mobile application CoIoT.
APA, Harvard, Vancouver, ISO, and other styles
23

Maus, Benjamin. "Designing Usable Transparency for Mobile Health Research: The impact of transparency enhancing tools on the users’ trust in citizen science apps." Thesis, Malmö universitet, Fakulteten för kultur och samhälle (KS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-21637.

Full text
Abstract:
Medical researchers are exploring the potential of patients' mobile phones and wearables for medical studies. The contribution of volunteers in a form of citizen science, where citizens donate their data for research purposes, can enable studies on a large scale. This research area, known as mobile health, often relies on shared data such as tracked steps or self-reporting forms. Privacy, transparency and trust play a fundamental role in the interaction of users with platforms that agglomerate such medical studies. This project explores the privacy concerns of potential users of mobile health citizen science apps, summarises similar user patterns and analyses the impact of transparency-enhancing tools on the users' trust. In this context, a prototype with different features that aim to increase transparency is designed, tested and evaluated. The results indicate how users perceive the importance and the generated trust of the proposed features, and provide recommendations for data donation platforms.
APA, Harvard, Vancouver, ISO, and other styles
24

Postulka, Aleš. "Zobrazení a úprava informací v Transparency and Consent Framework." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2021. http://www.nusl.cz/ntk/nusl-445505.

Full text
Abstract:
This thesis deals with the development of a multilingual extension for the Mozilla Firefox and Google Chrome web browsers. The purpose of the extension is to enable the automated management of consents to the processing of personal data on websites that use the Transparency and Consent Framework. The extension was developed on the basis of knowledge about this framework and about the legal norms GDPR and the ePrivacy Directive, which deal with the protection of personal data. Knowledge of developing extensions for web browsers using WebExtensions was also applied during the implementation. During testing, consent was successfully enforced on 96.2% of tested websites in Mozilla Firefox; in Google Chrome, success was achieved on 82.1% of tested websites. The banner requiring consent was not displayed on 33% of websites in Mozilla Firefox and on 31.1% of websites in Google Chrome.
APA, Harvard, Vancouver, ISO, and other styles
25

Ågerstrand, Marlene. "Improving the transparency and predictability of environmental risk assessments of pharmaceuticals." Licentiate thesis, KTH, Philosophy, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-24832.

Full text
Abstract:

The risk assessment process and the subsequent risk management measures need to be constantly evaluated, updated and improved. This thesis contributes to that work by considering, and suggesting improvements regarding, aspects like user-friendliness, transparency, accuracy, consistency, data reporting, data selection and data evaluation. The first paper in this thesis reports on an empirical investigation of the motivations, intentions and expectations underlying the development and implementation of a voluntary, industry-owned environmental classification system for pharmaceuticals. The results show that the purpose of the classification system is to provide information; no other risk reduction measures are aimed for. The second paper reports on an evaluation of the accuracy and consistency of the environmental risk assessments conducted within the classification system. The results show that the guideline recommendations were not followed in several cases, and consequently alternative risk ratios could be determined for six of the 36 pharmaceutical substances selected for evaluation in this study. When additional data from the open scientific literature were included, the risk ratio was altered for more than one-third of the risk assessments. Seven of the 36 substances were assessed and classified by more than one risk assessor. In two of the seven cases, different producers classified the same substance into different classification categories. The third paper addresses the question of whether non-standard ecotoxicity data could be used systematically in environmental risk assessments of pharmaceuticals. Four different evaluation methods were used to evaluate nine non-standard studies. The evaluation results from the different methods varied at a surprisingly high rate, and the evaluation of the non-standard data concluded that the reliability of the data was generally low.
APA, Harvard, Vancouver, ISO, and other styles
26

Cyr, J. "The Pitfalls and Promise of Focus Groups as a Data Collection Method." SAGE PUBLICATIONS INC, 2015. http://hdl.handle.net/10150/615820.

Full text
Abstract:
Despite their long trajectory in the social sciences, few systematic works analyze how often and for what purposes focus groups appear in published works. This study fills this gap by undertaking a meta-analysis of focus group use over the last 10 years. It makes several contributions to our understanding of when and why focus groups are used in the social sciences. First, the study explains that focus groups generate data at three units of analysis, namely, the individual, the group, and the interaction. Although most researchers rely upon the individual unit of analysis, the method’s comparative advantage lies in the group and interactive units. Second, it reveals strong affinities between each unit of analysis and the primary motivation for using focus groups as a data collection method. The individual unit of analysis is appropriate for triangulation; the group unit is appropriate as a pretest; and the interactive unit is appropriate for exploration. Finally, it offers a set of guidelines that researchers should adopt when presenting focus groups as part of their research design. Researchers should, first, state the main purpose of the focus group in a research design; second, identify the primary unit of analysis exploited; and finally, list the questions used to collect data in the focus group.
APA, Harvard, Vancouver, ISO, and other styles
27

Ågerstrand, Marlene. "Improving the transparency and predictability of environmental risk assessments of pharmaceuticals." Licentiate thesis, KTH, Filosofi, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-24832.

Full text
Abstract:
The risk assessment process and the subsequent risk management measures need to be constantly evaluated, updated and improved. This thesis contributes to that work by considering, and suggesting improvements regarding, aspects like user-friendliness, transparency, accuracy, consistency, data reporting, data selection and data evaluation. The first paper in this thesis reports on an empirical investigation of the motivations, intentions and expectations underlying the development and implementation of a voluntary, industry-owned environmental classification system for pharmaceuticals. The results show that the purpose of the classification system is to provide information; no other risk reduction measures are aimed for. The second paper reports on an evaluation of the accuracy and consistency of the environmental risk assessments conducted within the classification system. The results show that the guideline recommendations were not followed in several cases, and consequently alternative risk ratios could be determined for six of the 36 pharmaceutical substances selected for evaluation in this study. When additional data from the open scientific literature were included, the risk ratio was altered for more than one-third of the risk assessments. Seven of the 36 substances were assessed and classified by more than one risk assessor. In two of the seven cases, different producers classified the same substance into different classification categories. The third paper addresses the question of whether non-standard ecotoxicity data could be used systematically in environmental risk assessments of pharmaceuticals. Four different evaluation methods were used to evaluate nine non-standard studies. The evaluation results from the different methods varied at a surprisingly high rate, and the evaluation of the non-standard data concluded that the reliability of the data was generally low.
APA, Harvard, Vancouver, ISO, and other styles
28

Zeliha, Işıl Vural. "Sports Data Journalism: Data driven journalistic practices in Spanish newspapers." Doctoral thesis, Universitat Ramon Llull, 2021. http://hdl.handle.net/10803/672394.

Full text
Abstract:
Working with data has always been an important part of journalism, but its combination with technology is an innovation for newspapers. In recent years, newspapers have started to adopt data journalism, and data journalism became a part of newsrooms, in contrast to the traditional journalism environment of Spanish newspapers. This thesis aims to analyse sports data journalism practices in Spain with a quantitative and qualitative approach, combining content analysis of 1068 data journalism articles published by 6 newspapers (Marca, Mundo Deportivo, AS, El Mundo, El Periódico, El País) between 2017 and 2019 with interviews with 15 participants from 6 newspapers (Marca, Mundo Deportivo, AS, El Mundo, El Confidencial, El País). Both the quantitative and the qualitative analysis focus on how data journalism is being adapted in Spain, its current situation and technical features, and the opportunities and threats in its development.
APA, Harvard, Vancouver, ISO, and other styles
29

Vatn, Erik Sæbu, and Trond Ytre-Arne. "Towards the use of qualitative data in the valuation of new technology-based ventures." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for industriell økonomi og teknologiledelse, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-15051.

Full text
Abstract:
This thesis addresses the valuation of new technology-based ventures. Its main contribution is a framework for new technology-based venture valuation based on the use of both empirically identified success criteria and traditional financial valuation theory. The framework is based on the principle that new technology-based venture value is driven by venture success, and that one can therefore assess a venture's value by assessing its performance on criteria indicating success. To our knowledge this is the first framework for new technology-based venture valuation using this principle. In the thesis we conduct a thorough literature review on venture success, and we identify a series of success criteria. We also assess the impact of applying traditional financial theory to the new venture market. A framework is developed based on the theoretical development and the identified criteria. A preliminary empirical investigation to verify the identified factors is also conducted, and an indication of the framework's ability to predict success is assessed. The framework is applied in a case study, the results are compared to reference values, and conclusions on its applicability and value are drawn. The thesis consists of two main parts: the first is a theoretically based article aimed at publication, and the second is a report with a more applied context. Both cover some of the same topics but are aimed at different audiences of readers, and each part can be read separately.
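The framework itself is qualitative, but its principle, valuing a venture through its performance on success criteria, can be illustrated with a toy scorecard that scales a baseline financial valuation. The criteria and weights below are invented for illustration, not taken from the thesis.

```python
# Hypothetical success criteria with weights (not the thesis's list).
CRITERIA = {
    "team_experience": 0.30,
    "market_size": 0.25,
    "technology_readiness": 0.25,
    "partnerships": 0.20,
}

def success_multiplier(scores: dict) -> float:
    """Combine criterion scores (1.0 = on par with an average venture)
    into a single weighted multiplier."""
    return sum(CRITERIA[c] * s for c, s in scores.items())

baseline_value = 4.0e6  # e.g. a DCF or comparables estimate, hypothetical
scores = {"team_experience": 1.4, "market_size": 1.1,
          "technology_readiness": 0.8, "partnerships": 1.0}
print(f"adjusted value: {baseline_value * success_multiplier(scores):,.0f}")
```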
APA, Harvard, Vancouver, ISO, and other styles
30

Towers, Isabel Margaret Falcon. "The valuation of health outcomes data from clinical trials for use in economic evaluation." Thesis, University of Sheffield, 2005. http://etheses.whiterose.ac.uk/6075/.

31

Daud, Muhammad Nasir. "Public sector information management and analysis using GIS in support of property valuation in Malaysia." Thesis, University of Newcastle Upon Tyne, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313266.

32

Spiekermann-Hoff, Sarah, and Jana Korunovska. "Towards a value theory for personal data." Palgrave Macmillan, 2017. http://dx.doi.org/10.1057/jit.2016.4.

Abstract:
Analysts, investors and entrepreneurs have recognized the value of personal data for Internet economics. Personal data is viewed as the "oil" of the digital economy. Yet, ordinary people are barely aware of this. Marketers collect personal data at minimal cost in exchange for free services. But will this be possible in the long term, especially in the face of privacy concerns? Little is known about how users really value their personal data. In this paper, we build a user-centered value theory for personal data. On the basis of a survey experiment with 1269 Facebook users, we identify core constructs that drive the value of volunteered personal data. We find that privacy concerns are less influential than expected and influence data value mainly when people become aware of data markets. In fact, the consciousness of data being a tradable asset is the single most influential factor driving willingness-to-pay for data. Furthermore, we find that people build a sense of psychological ownership of their data and hence value it more. Finally, our value theory helps to unveil market design mechanisms that will influence how personal data markets thrive: First, we observe that a majority of users become reactant if they are consciously deprived of control over their personal data; many drop out of the market. We therefore advise companies to consider user-centered data control tools to keep users participating in personal data markets. Second, we find that in order to create scarcity in the market, centralized IT architectures (reducing multiple data copies) may be beneficial.
33

Jamal, Majd. "Using K-Nearest-Neighbor with valuation metrics to detect similarities between stock performances." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281962.

Abstract:
Algorithmic trading has increased in popularity since the publication of Agent-Human Interactions in the Continuous Double Auction by IBM researchers Das et al. (2001). Today many investors acquire algorithms that act on their behalf on the stock markets. Most such algorithms focus on predicting stock prices and executing transactions when price thresholds are triggered. This project has a different objective and aims to construct a machine learning algorithm that clusters stocks with similar stock performances, and ultimately to test whether such stocks continue to perform similarly in the future. The KNN model succeeds in its mission to cluster stocks with similar market performances. Statistical measurements highlighted a moderate correlation between stocks and their neighbors. Furthermore, some stocks did not continue to perform similarly in the short term, mainly for natural reasons such as management changes or failure to meet market expectations. Such factors mean that stocks can break their established pattern at any time and move in a different direction than expected, which is a substantial limitation when clustering stocks that are expected to perform similarly in the future.
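As an illustration of the approach described above, the following sketch uses scikit-learn's NearestNeighbors to find a stock's closest peers in a space of standardized valuation metrics. The tickers, the choice of metrics (P/E, P/B, dividend yield) and the number of neighbours are assumptions for illustration, not the thesis's actual feature set.

    # Sketch: nearest neighbours of a stock in standardized
    # valuation-metric space. Data are invented.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.neighbors import NearestNeighbors

    tickers = ["AAA", "BBB", "CCC", "DDD", "EEE"]
    metrics = np.array([            # rows: [P/E, P/B, dividend yield]
        [22.0, 3.1, 0.012],
        [21.5, 2.9, 0.014],
        [ 8.0, 0.9, 0.045],
        [ 9.2, 1.1, 0.040],
        [35.0, 6.0, 0.000],
    ])

    X = StandardScaler().fit_transform(metrics)   # put features on equal footing
    knn = NearestNeighbors(n_neighbors=3).fit(X)  # 1st neighbour is the stock itself
    _, idx = knn.kneighbors(X[0:1])
    print([tickers[i] for i in idx[0][1:]])       # nearest peers of "AAA"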
34

Božić, Denis S. M. Massachusetts Institute of Technology. "From haute couture to fast-fashion : evaluating social transparency in global apparel supply chains." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/111237.

Abstract:
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 134-139).
After Rana Plaza collapsed on April 24, 2013, in Dhaka, Bangladesh, and killed more than 1,100 workers, the apparel industry fell under widely publicized scrutiny for its negligent social practices. With consumers and non-governmental organizations aware of these issues and creating public pressure on the industry, many companies are increasingly trying to institute transparency within their supply chains to become socially sustainable. However, transparency so far has not been clearly defined, which makes the process of evaluating transparency difficult and often impractical. The main goal of this thesis is to establish a framework and methodology that can be used by consumers, brands, and regulatory bodies to define and evaluate social transparency in global supply chains. Building on previous research in this field, we first construct a framework that distinguishes external and internal transparency, after which we identify five factors that drive supply chain transparency. An adaptive survey is then designed and used to evaluate both external and internal transparency, while investigating the role of each factor in shaping supply chain transparency. Due to time constraints and data availability, this thesis focuses primarily on external transparency and two factors: legal and political complexity, and supply chain communication. Our quantitative analysis shows that the degree of external transparency increases with the size of brands, which is influenced by legal acts that focus on supply chain transparency. Additionally, our qualitative analysis shows that information asymmetry and the lack of a standardized auditing system have a detrimental effect on external and, ultimately, internal transparency. We therefore argue that socially responsible national legal regimes and the diffusion of technological innovations are necessary to increase the degree of social transparency in global supply chains.
by Denis Bozic.
S.M. in Technology and Policy
35

Frank, Kimberly Elaine 1968. "The effect of growth on the relevance of financial accounting data for stock valuation purposes." Diss., The University of Arizona, 1997. http://hdl.handle.net/10150/288728.

Abstract:
This study investigates the impact of growth on the value relevance of accounting data. Indirect evidence in the contracting literature suggests that differences in value relevance are negatively related to growth, but to date the empirical evidence documenting the relationship is mixed. In contrast to previous research, this study analyzes the effect of growth on value relevance from a security price perspective, using the recently developed Ohlson model, and uses analysts' five-year EPS growth forecasts as the proxy for growth, which frames growth in terms of value to the investor. This study also considers the effect of growth on the persistence of abnormal earnings as well as the incremental information content of book value beyond earnings. The results provide evidence that the value relevance of accounting data is decreasing in growth. These findings are robust to different samples, other growth proxies, and controls for size and the lead-lag structure of prices and earnings. The evidence relating growth to persistence is inconclusive, suggesting that persistence is more sensitive to the characteristics of the individual firms which make up each growth portfolio than to the quality of accounting data. The findings also indicate that the incremental information content of book value is greater for low growth firms than for high growth firms, further supporting the hypothesis that the accounting data of high growth firms does not capture value relevant events as well as the accounting data of low growth firms. Overall, this study contributes to the understanding of cross-sectional differences in the valuation of securities by providing evidence that growth negatively affects the quality of accounting information.
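For readers unfamiliar with the Ohlson framework the study relies on, a hedged sketch of the basic valuation identity — price as book value plus the present value of expected abnormal earnings — might look as follows. The figures are invented, and the study's full specification (including its persistence parameters) is not reproduced here.

    # Sketch: Ohlson-style residual income valuation.
    # Inputs are illustrative, not data from the dissertation.

    def abnormal_earnings(earnings, book_prev, r):
        """Earnings in excess of the required return on opening book value."""
        return earnings - r * book_prev

    def ohlson_value(book_value, expected_abnormal, r):
        """Book value plus present value of forecast abnormal earnings."""
        pv = sum(x / (1 + r) ** t for t, x in enumerate(expected_abnormal, start=1))
        return book_value + pv

    r = 0.10
    forecast = [abnormal_earnings(12.0, 100.0, r),   # year 1
                abnormal_earnings(13.0, 105.0, r)]   # year 2
    print(ohlson_value(book_value=100.0, expected_abnormal=forecast, r=r))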
36

Pohl, Volker [author], and Ernst [academic supervisor] Eberlein. "Valuation of portfolio credit derivatives and data-based default prediction = Bewertung von Portfoliokreditderivaten und datenbasierte Ausfallprediktion." Freiburg : Universität, 2012. http://d-nb.info/1123474451/34.

37

Jabbour, Chadi. "Essays in the economics of Spatial Data Infrastructures (SDI) : business model, service valuation and impact assessment." Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTD018.

Abstract:
The development of spatial data infrastructures (SDIs) is hampered by several barriers: from economic and technical to organizational and financial, the hurdles are numerous. This thesis attempts to answer several questions related to the socio-economic aspects of SDIs, focusing on SDI economic valuation and impact measurement. The aim is fivefold: i) to propose a business model for this particular type of infrastructure in order to arrive at a sustainable financing scheme; ii) to perform an economic valuation of the geospatial information available through the SDI platform, namely high resolution (HR) satellite images; iii) to examine the role of an SDI as an information structure; iv) to identify the economic impacts of an SDI; v) to study the stability of the satellite image markets served through an SDI. A first challenge consisted of bringing the business model field into the implementation of SDIs. The relevance of a two-sided market approach for analyzing SDI dynamics was tested through a platform management process, in order for an SDI to transition to a self-sustaining funding mechanism. We explained how an SDI, through its platform, could ensure continuous interaction between its different components, represented by the developers of spatial data applications and the potential users of such data. The economic valuation questions concerning the SDI needed to be refined in parallel with the reflections on its business model. In our context, we examined the economic value of HR satellite images as perceived by the direct users of an SDI platform. The valuation study assessed the importance of satellite imagery as a support for territorial planning and development economics. In a context of open and distributed innovation within networks, it offered elements for establishing pricing scenarios at a later stage, in order to sustain the SDI platform's business model in the long run. In addition, we examined the role of an SDI as an information structure. We applied our findings to the clear-cut forest monitoring case in France. Based on heterogeneous information received, we elaborated a decision-making policy to help a decision maker better model his decision. An original approach was introduced, articulating two existing theories: the classic method of Blackwell and the entropy theory. We advanced a two-level methodological framework: the choice of the information structure with the most informative power, and the detection of the optimal action. Similarly, considering the clear-cut example, we analyzed the socio-economic impacts of an SDI based on satellite imagery. A detailed analysis of the geospatial information acquired through the SDI made it possible to characterize the public policies involved in this field and to examine the impacts related to the SDI ecosystem. In a second step, some of these impacts were assessed in more detail. Finally, these valuation studies opened a window to examine the stability of market demand through the SDI. Spatial data infrastructures, which constitute the direct link between users and the large Earth Observation (EO) industry, have a leading role in establishing market opportunities. While users are becoming primary key drivers for spatial data technology, they contribute, through their demand for raw data and services, to its development and growth.
We approached the stability of different satellite image markets through two independent French SDIs, using the Records theory. We implemented an innovative method and provided additional elements for a better comprehension of EO data management.
38

Wallace, Amelia. "Protection of Personal Data in Blockchain Technology : An investigation on the compatibility of the General Data Protection Regulation and the public blockchain." Thesis, Stockholms universitet, Institutet för rättsinformatik (IRI), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-167303.

Abstract:
On 25 May 2018 the General Data Protection Regulation, GDPR, came into force in the EU. The regulation strengthened the rights of data subjects in relation to data controllers and processors and gave them more control over their personal data. The recitals of the GDPR state that it was the rapid development in technology and globalisation that brought new challenges for the protection of personal data. Private companies and public authorities were making use of personal data on an unprecedented scale in order to pursue their own activities. The protection should be technologically neutral and not dependent on the technique used. This leads to questions on whether the protection that is offered through the GDPR is de facto applicable to all technologies. One particular technology which has caught the interest of both private companies and public authorities is the blockchain. The public distributed blockchain is completely decentralized, meaning it is the users who decide the rules and its content. There are no intermediaries in power, and transactions of value or other information are sent peer to peer. By using asymmetric cryptography and advanced hash algorithms, the transactions sent in the blockchain are secured. As the interest in and use of blockchain increase, and the GDPR is meant to apply to all techniques, the characteristics of the public blockchain must be analysed under the terms of the GDPR. The thesis examines whether natural persons can be identified in a public blockchain, who is considered data controller and data processor of a public blockchain, and whether the principles of the GDPR can be applied in such a decentralised and publicly distributed technology.
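A toy sketch of the tension the abstract describes: once a transaction is hashed into a chain, it cannot be rectified or erased without invalidating every later block, which is hard to square with GDPR rights of rectification and erasure. The chain structure and data below are invented for illustration only.

    # Sketch: hash-chained records cannot be altered in place.
    import hashlib

    def block_hash(prev_hash, payload):
        """Each block's hash commits to the previous block and its payload."""
        return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

    chain = ["0" * 64]                       # genesis placeholder
    for tx in ["alice->bob:5", "bob->carol:2"]:
        chain.append(block_hash(chain[-1], tx))

    print(chain[-1])   # changing either transaction would change this head hash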
39

Danmo, Emil, and Fredrik Kihlström. "Exploring the Prerequisites to Increase Real Estate Market Transparency in Sweden." Thesis, KTH, Fastigheter och byggande, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264449.

Abstract:
In the 2018 edition of the JLL Global Real Estate Transparency Index (GRETI), Sweden was ranked the 10th most transparent real estate market in the world, categorized as 'Highly Transparent'. For the most part, Sweden has held a similar position since the measurements started in 1999. Transparency in a real estate market generally attracts foreign real estate investments and tenants and increases global competitiveness. It also streamlines work processes in many real estate professions through comprehensive real estate market information and comprehensible legal frameworks, transaction processes and methods of monitoring different sustainability metrics. This study explores the prerequisites for Sweden to attain a better position in the index by increasing its degree of real estate market transparency, with the long-term goal of having Sweden reap more of the benefits of a highly transparent real estate market. This is done in two ways. The first is a critical analysis of the index's methodology, assessing whether ranks and scores within the different index categories are produced fairly. The second is a set of interviews with industry actors to identify the areas where Sweden lags behind more transparent markets, how they would like to see transparency improved in Sweden, the main barriers to implementing projects that would increase real estate market transparency, and ways of overcoming them. An examination of the index methodology shows a methodology that changes from year to year, while indicating a steady increase in real estate market transparency in Sweden. Interview findings support a generally positive view of transparency as facilitating decision making for real estate investments, but the level of preferred transparency differs between net sellers and net buyers. It is therefore questionable whether increasing real estate market transparency would provide significantly increased utility for some market actors with longer investment horizons and market knowledge gained through extensive business networks. The main suggestions for improving real estate transparency in Sweden include data standards, an increased level of data disclosure, and information platforms for such standardized, disclosed data. The study suggests that the main barriers to implementing this could be conceptualized as a Prisoners' dilemma, and that institutional bodies could act as trustworthy partners in further opening up real estate market information.
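The Prisoners' dilemma framing of the data-sharing barrier can be made concrete with a toy payoff matrix: each of two firms chooses to share or withhold market data, and the payoffs are invented so that withholding dominates even though mutual sharing is jointly better. This is an illustration of the game structure the study invokes, not numbers from the study.

    # Sketch: data-sharing as a Prisoners' dilemma (invented payoffs).
    payoffs = {  # (firm_a, firm_b) -> (payoff_a, payoff_b)
        ("share", "share"):       (3, 3),
        ("share", "withhold"):    (0, 4),
        ("withhold", "share"):    (4, 0),
        ("withhold", "withhold"): (1, 1),
    }

    for a in ("share", "withhold"):
        for b in ("share", "withhold"):
            print(f"A {a:8s} / B {b:8s} -> {payoffs[(a, b)]}")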
40

Kriström, Bengt. "Valuing environmental benefits using the contingent valuation method : an econometric analysis." Doctoral thesis, Umeå universitet, Institutionen för nationalekonomi, 1990. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-90578.

Abstract:
The purpose of this study is to investigate methods for assessing the value people place on preserving our natural environments and resources. It focuses on the contingent valuation method, which is a method for directly asking people about their preferences. In particular, the study focuses on the use of discrete response data in contingent valuation experiments. The first part of the study explores the economic theory of the total value of a natural resource, where the principal components of total value are analyzed: use values and non-use values. Our application is a study of the value Swedes attach to the preservation of eleven forest areas that have high recreational value and unique environmental qualities. Six forests were selected on the basis of an official investigation that covers virgin forests and other areas with unique environmental qualities. In addition, five virgin forests were selected. Two types of valuation questions are analyzed, the continuous and the discrete. The first type of question asks directly about willingness to pay, while the second type suggests a price that the respondent may reject or accept. The results of the continuous question suggest an average willingness to pay of about 1,000 SEK per household for preservation of the areas. Further analysis of the data suggests that this value depends on several characteristics of the respondent, such as the respondent's income and whether or not the respondent is an altruist. Two econometric approaches are used to analyze the discrete responses: a flexible parametric approach and a non-parametric approach. In addition, a Bayesian approach is described. It is shown that the results of a contingent valuation experiment may depend to some extent on the choice of the probability model. A re-sampling approach and a Monte Carlo approach are used to shed light on the design of a contingent valuation experiment with discrete responses. The econometric analysis ends with an analysis of the often observed disparity between discrete and continuous valuation questions. A cost-benefit analysis is performed in the final chapter. The purpose of this analysis is to illustrate how the contingent valuation approach may be combined with opportunity cost data to improve the decision basis in the environmental policy domain. This analysis does not give strong support to a cutting alternative. Finally, the results of this investigation are compared with evidence from other studies. The main conclusion of this study is that assessment of people's sentiments towards changes in our natural environments and resources can be a useful supplement to decisions about the proper husbandry of those environments and resources. It also highlights the importance of careful statistical analysis of data gained from contingent valuation experiments.
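A minimal sketch of the discrete (accept/reject) analysis: fit a logit of the yes/no answer on the suggested bid, then recover mean willingness to pay as -intercept/slope, a standard result for the linear-in-bid logistic model. The data are fabricated, and the study's flexible parametric, non-parametric and Bayesian estimators are not reproduced here.

    # Sketch: mean WTP from dichotomous-choice responses (invented data).
    import numpy as np
    import statsmodels.api as sm

    bids   = np.array([100, 100, 500, 500, 1000, 1000, 2000, 2000])
    accept = np.array([  1,   1,   1,   0,    1,    0,    0,    0])

    X = sm.add_constant(bids.astype(float))
    fit = sm.Logit(accept, X).fit(disp=0)
    a, b = fit.params                     # intercept, bid coefficient
    print("mean WTP (SEK):", -a / b)      # linear-utility logit formula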
41

Miller, David B. "Decision support systems for land evaluation : theoretical and practical development." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/24865.

Abstract:
The challenge of resolving land use allocation and policy questions depends to a large degree on the conversion of data into information, and the effective integration of information into the decision process. Land evaluation is one of the fundamental means of generating information for land planning. Information products have, however, been inconsistently and ineffectively used in the decision process. This thesis develops a decision-centered approach to land evaluation as a response to this concern. Included in this development is a description of important theoretical concepts, as well as a practical demonstration of the use of decision support systems as a design approach. Initially, a conceptual model is introduced illustrating the technical and use components of information generation, as well as the adaptive design cycle. Various terms and techniques involved in the technical aspects of land evaluation are reviewed. Decision making concepts, including decision structure, environment, analysis, and criteria, are outlined. Three existing methods of land evaluation are then compared from a use or decision making perspective. After this review of current approaches, Decision Support Systems are introduced as a logical progression towards a decision-centered approach. Decision Support System design is demonstrated using a portion of the Central Fraser Valley Regional District as a case study area, combined with an interactive microcomputer land planning tool (LANDPLAN). The demonstration emphasizes the advantages of the flexible, interactive capabilities of Decision Support Systems in aiding the decision process. Iterative design is also promoted, with several needs identified if a more complete system is to be developed. In particular, data on strategic long-term supply and demand factors are required, as well as continuous rating functions for assessing land performance.
42

Taonezvi, Lovemore. "The recreational value of the Baviaanskloof: a travel cost analysis using count data models." Thesis, Nelson Mandela Metropolitan University, 2017. http://hdl.handle.net/10948/12371.

Abstract:
Despite constituting a mere 2% of the world's surface area, South Africa (SA) is the third most biologically diverse country in the world, which makes it one of the 17 countries that make up the 'Megadiverse Countries' (Sandwith, 2002; Nel & Driver, 2012). Besides its exceptional levels of endemism, according to Boshoff, Cowling and Kerley (2000), three of the 25 internationally recognised biodiversity hotspots are found in SA, namely the Cape Floristic Region, the Succulent Karoo and the Maputaland-Pondoland-Albany centre of endemism. The Baviaanskloof is a very popular tourist destination, which falls within the Cape Floristic Region 'biodiversity hotspot' in the Eastern Cape Province (Myers, 1988; Crane, 2007). Its high biodiversity, numerous archaeological sites, pristine environment, low crime rates, absence of malaria and ease of accessibility make it a perfect destination for recreationists (Clark, 1999; Boshoff et al., 2000). The Baviaanskloof was declared a 'mega reserve' under the Cape Action for People and Environment (CAPE) programme (CSIR, 2000). It consists of privately-owned farm land and a nature reserve called the Baviaanskloof Wilderness Area (BWA). In order to properly manage, conserve and utilise the rich natural resources of the Baviaanskloof, its benefits need to be clearly documented and demonstrated. The aim of this study is to determine the recreational value of the Baviaanskloof, and this was achieved using a non-market valuation technique, namely the travel cost method (TCM). The TCM is used to value recreational assets via the expenditures on travelling to the site, by recognising that visitors to a recreation site pay an implicit price – the cost of travelling to it, including access fees and the opportunity costs of their time (Baker & Ruting, 2014). This method is mostly used to estimate use values for recreation activities and changes in these use values associated with changes in environmental quality or quantity. The greatest advantage of the TCM is that valuation estimates are derived from real economic choices made by individuals in real markets, whereas its inability to estimate non-use values is its major weakness, which limits its application to recreational studies. In estimating the recreational value of the Baviaanskloof, data from 328 respondents were used. Five econometric models, namely a standard Poisson specification, a Poisson specification adjusted for truncation and endogenous stratification (TES Poisson), a standard negative binomial model (NB), a negative binomial model adjusted for truncation and endogenous stratification (NBTES), and a generalised negative binomial model with endogenous stratification (GNBES), were used to estimate the recreational value of the Baviaanskloof. Crucially, all five models established income and total costs as statistically significant determinants of the number of trips to the recreational site, in line with a priori expectations. The GNBES model was observed to fit the data better than the other four models, based on an examination of goodness-of-fit measures in conjunction with the number of statistically significant variables per model. On average, the 328 respondents surveyed were mostly male, highly educated individuals, 39.87 years of age, receiving a gross annual income of ZAR436 372 (USD30 451.64). The mean travel cost was estimated at ZAR1 433.56 (USD100.04) and each travelling party consisted of 4.09 people on average.
Using estimates from the preferred GNBES model, the study estimated the consumer surplus per trip for a recreationist to the Baviaanskloof at ZAR1 759.32 (USD122.78). When this value is multiplied by the average annual number of trips a person takes to the site, a consumer surplus per person of ZAR2 445.46 (USD170.66) is produced. Further aggregation of this value across the annual population of recreationists to the Baviaanskloof (i.e. 18 500) gives a total consumer surplus of ZAR3 157 210 (USD220 321). The study concludes that the Baviaanskloof has a significant recreational value, which can be increased further if policymakers take action to, inter alia, upgrade infrastructure, budget more money for conservation and market the nature reserve in unexploited markets. Since non-use values were not taken into account, and given the impact of on-site sampling on the data set, the recreational value of the Baviaanskloof should be carefully considered in any management or conservation project. More studies of this nature are greatly needed to allow for more comparisons and to increase the credibility of the results of environmental valuation studies in SA.
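The count-data logic can be sketched with a plain Poisson trip-frequency model, in which consumer surplus per trip equals -1/(travel-cost coefficient) under the usual semi-log demand specification. The data below are invented, and the corrections for truncation and endogenous stratification that the study's preferred GNBES model applies are omitted.

    # Sketch: Poisson travel cost model and CS per trip (invented data).
    import numpy as np
    import statsmodels.api as sm

    trips  = np.array([1, 1, 2, 3, 1, 4, 2, 6, 1, 2])
    tcost  = np.array([1800, 1500, 1200, 900, 2000, 600, 1300, 400, 1700, 1100.0])
    income = np.array([300, 420, 380, 500, 250, 610, 450, 700, 330, 480.0])  # kZAR

    X = sm.add_constant(np.column_stack([tcost, income]))
    fit = sm.Poisson(trips, X).fit(disp=0)
    beta_tc = fit.params[1]                    # travel-cost coefficient
    print("CS per trip (ZAR):", -1.0 / beta_tc)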
43

Chen, Jinchun S. M. Massachusetts Institute of Technology. "Evaluation of application of ontology and semantic technology for improving data transparency and regulatory compliance in the global financial industry." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/99020.

Abstract:
Thesis: S.M. in Management Research, Massachusetts Institute of Technology, Sloan School of Management, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 78-79).
In the global financial industry, there are increasing motivations for financial data standardization. The financial crisis in 2008 revealed risk management issues, including risk data aggregation and risk exposure reporting, at many banks and financial institutions. After the crisis, the Dodd-Frank Act required transaction data of derivatives trades to be reported to Swap Data Repositories (SDRs). In addition, the Basel Committee on Banking Supervision (the Basel Committee) issued the Principles for effective risk data aggregation and risk reporting (BCBS 239) in January 2013. These new regulatory requirements aim to enhance financial institutions' data aggregation capabilities and risk management practices. Using ontology and semantic technology would be a plausible way to improve data transparency and meet regulatory compliance. The Office of Financial Research (OFR) has considered a project recommended by the Financial Research Advisory Committee (FRAC) to explore the viability of a comprehensive ontology for solving existing data challenges, such as the Financial Industry Business Ontology (FIBO). FIBO, which could be a credible solution, is an abstract ontology for data that is intended to allow firms to explain the semantics of their data in a standard way, which could permit the automated translation of data from one local standard to another. This thesis studies the new regulatory requirements, analyzes the challenges of implementing these regulations, proposes a possible solution, and evaluates the application of semantic technology and FIBO with a use case. The thesis tries to explain how semantic technology and FIBO could be implemented and how they could benefit risk data management in the financial industry.
by Jinchun Chen.
S.M. in Management Research
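As a rough illustration of the semantic approach described in this abstract, the sketch below uses rdflib to express two banks' local records as RDF triples against one shared vocabulary and then aggregates them with a single SPARQL query. The namespace and property names are illustrative stand-ins, not actual FIBO terms.

    # Sketch: one shared vocabulary lets heterogeneous records be
    # queried and aggregated uniformly. Names are hypothetical.
    from rdflib import Graph, Namespace, Literal, URIRef

    FIN = Namespace("http://example.org/fin#")   # hypothetical ontology
    g = Graph()
    g.add((URIRef("http://example.org/bankA/trade/1"), FIN.notional, Literal(5_000_000)))
    g.add((URIRef("http://example.org/bankB/deal/42"), FIN.notional, Literal(2_500_000)))

    # One query aggregates exposures regardless of each bank's local schema.
    q = """
    SELECT (SUM(?n) AS ?total) WHERE { ?trade <http://example.org/fin#notional> ?n . }
    """
    for row in g.query(q):
        print("aggregated notional:", row.total)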
44

Atiyah, Perla Christina. "Non-market valuation and marine management: using panel data analysis to measure policy impacts on coastal resources." Diss., Restricted to subscribing institutions, 2009. http://proquest.umi.com/pqdweb?did=1835200041&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

45

Aldherwi, Aiman. "Conceptualising a Procurement 4.0 Model for a truly Data Driven Procurement." Thesis, KTH, Hållbar produktionsutveckling (ML), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-297583.

Abstract:
Purpose - Procurement is an integrated part of the supply chain and crucial for the success of manufacturing. Many organisations have already started the digitalisation of their manufacturing processes using Industry 4.0 technologies and are consequently trying to understand how this would impact the procurement function. The research purpose is to conceptualize a Procurement 4.0 model for a truly data driven procurement. Two research questions were proposed to address the model from digital capability and sustainability perspectives. Design/Methodology/Approach - This study is based on a systematic literature review: a method of reviewing the literature and the current research for the purpose of conceptualizing a Procurement 4.0 model. Findings - The findings from the literature review contributed to the development of a proposed Procurement 4.0 model based on Industry 4.0 technologies, applications, mathematical algorithms and procurement process automation. The model contributes to the research field by addressing the gap in the literature concerning the lack of visualization and conceptualization of Procurement 4.0. Originality/Value - The current literature discusses the advantages, implementation and impact of individual or groups of Industry 4.0 technologies and applications on procurement, but lacks visualization of the transformation process of combining the technologies to enable a truly data driven procurement. This research supports the creation of knowledge in this area. Practical Implementation/Managerial Implications - The proposed model can give managers and digital consultants practical knowledge, from an academic perspective, in the area of Procurement 4.0. The knowledge from the literature and the systematic literature review is used to create knowledge on Procurement 4.0 applications and analytics, taking into consideration the importance of visibility, transparency, optimization and the automation of the procurement function and its sustainability.
46

Silveira, Wennergren Tove. "Access and Accountability - A Study of Open Data in Kenya." Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-23366.

Abstract:
This study explores Open Data actors in Kenya, focusing on the issue of transparency and accountability. Drawing on an exploratory quantitative analysis of existing statistical material on usage of the Kenya Open Data Initiative website and 15 qualitative interviews conducted primarily in Nairobi, the study analyses key factors – both enabling and disabling – that shape transparency initiatives connected to Open Data in Kenya. The material is analysed from three perspectives: a) a review based on existing research on the impact and effectiveness of transparency and accountability initiatives; b) an analysis based on theories of human behaviour in connection with transparency and accountability; and c) a critical perspective on power relations based on Michel Foucault's concept of 'governmentality'. The study shows that the Kenya Open Data Initiative has the potential to become an effective transparency and accountability initiative in Kenya, but that its future is heavily dependent on current trends within the political context and fluctuations in power relations. Applying a stronger user perspective and a participatory approach is critical. Open Data is a relatively new area within the governance and development field, and academia can play an important role in enhancing methodology and impact assessments to create more effective and sustainable initiatives and to ensure that future Open Data initiatives are both accessible and constitute a base for accountability.
47

Momen, Nurul. "Towards Measuring Apps' Privacy-Friendliness." Licentiate thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-68569.

Abstract:
Today's phone could be described as a charismatic tool that has the ability to keep human beings captivated for a considerable amount of their precious time. Users remain in an illusory wonderland of free services, while their data becomes the subject of monetization by a genie called big data. In other words, users pay with their personal data, but the price is in a way invisible. Poor means of observing and assessing the consequences of data disclosure hinder users from becoming aware and from taking preventive measures. Mobile operating systems use a permission-based access control mechanism to guard system resources and sensors. Depending on the type, apps require explicit consent from the user in order to gain access to those permissions. Nonetheless, this does not put any constraint on access frequency. Granted privileges allow apps to access users' personal information for an indefinite period of time until being revoked explicitly. Available control tools lack a monitoring facility, which undermines the performance of the access control model and can create privacy risks and non-transparent handling of personal information for the data subject. This thesis argues that app behavior analysis yields information which has the potential to increase transparency, to enhance privacy protection, to raise awareness regarding the consequences of data disclosure, and to assist the user in informed decision making when selecting apps or services. It introduces models and methods, and demonstrates the risks with experimental results. It also takes the risks into account and makes an effort to determine apps' privacy-friendliness based on empirical data from app-behavior analysis.
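A toy sketch of the kind of app-behaviour summary the thesis argues for: counting how often each app touches a permission-guarded resource in an access log. The log format, permission names and the naive score are assumptions for illustration, not the thesis's actual measurement model.

    # Sketch: summarizing permission-access frequency per app.
    from collections import Counter

    access_log = [                     # (app, permission) events, invented
        ("weatherapp", "FINE_LOCATION"),
        ("weatherapp", "FINE_LOCATION"),
        ("weatherapp", "CONTACTS"),
        ("flashlight", "FINE_LOCATION"),
    ]

    counts = Counter(access_log)
    for (app, perm), n in sorted(counts.items()):
        print(f"{app:12s} {perm:15s} accessed {n}x")

    # A naive privacy-unfriendliness score: total accesses per app.
    per_app = Counter(app for app, _ in access_log)
    print("most data-hungry:", per_app.most_common(1)[0])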
48

Ekström, Hagevall Adam, and Carl Wikström. "Increasing Reproducibility Through Provenance, Transparency and Reusability in a Cloud-Native Application for Collaborative Machine Learning." Thesis, Uppsala universitet, Avdelningen för datorteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-435349.

Abstract:
The purpose of this thesis paper was to develop new features in the cloud-native and open-source machine learning platform STACKn, aiming to strengthen the platform's support for conducting reproducible machine learning experiments through provenance, transparency and reusability. Adhering to the definition of reproducibility as the ability of independent researchers to exactly duplicate scientific results with the same material as in the original experiment, two concepts were explored as alternatives for this specific goal: 1) Increased support for standardized textual documentation of machine learning models and their corresponding datasets; and 2) Increased support for provenance to track the lineage of machine learning models by making code, data and metadata readily available and stored for future reference. We set out to investigate to what degree these features could increase reproducibility in STACKn, both when used in isolation and when combined.  When these features had been implemented through an exhaustive software engineering process, an evaluation of the implemented features was conducted to quantify the degree of reproducibility that STACKn supports. The evaluation showed that the implemented features, especially provenance features, substantially increase the possibilities to conduct reproducible experiments in STACKn, as opposed to when none of the developed features are used. While the employed evaluation method was not entirely objective, these features are clearly a good first initiative in meeting current recommendations and guidelines on how computational science can be made reproducible.
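One way to picture the provenance idea described above is to fingerprint the exact code and data behind a model so that an experiment can later be re-run on the same material. The sketch below hashes the artifacts and records assumed metadata fields; it is an illustration, not STACKn's actual schema.

    # Sketch: a minimal provenance record fingerprinting code and data.
    import hashlib, datetime, json

    def sha256_bytes(blob):
        """Stable fingerprint of a code or data artifact."""
        return hashlib.sha256(blob).hexdigest()

    def provenance_record(code_blob, data_blob, params):
        # Field names are illustrative assumptions.
        return {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "code_sha256": sha256_bytes(code_blob),
            "data_sha256": sha256_bytes(data_blob),
            "params": params,
        }

    rec = provenance_record(b"print('train')", b"x,y\n1,2\n", {"lr": 0.01})
    print(json.dumps(rec, indent=2))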
49

Malmberg, Jacob, Öhman Marcus Nystad, and Alexandra Hotti. "Implementing Machine Learning in the Credit Process of a Learning Organization While Maintaining Transparency Using LIME." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-232579.

Abstract:
To determine whether a credit limit for a corporate client should be changed, a financial institution writes a PM containing text and financial data that is then assessed by a credit committee, which decides whether to increase the limit or not. To make this process more efficient, machine learning algorithms were used to classify the credit PMs instead of a committee. Since most machine learning algorithms are black boxes, the LIME framework was used to find the most important features driving the classification. The results of this study show that credit memos can be classified with high accuracy and that LIME can be used to indicate which parts of the memo had the biggest impact. This implies that the credit process could be improved by utilizing machine learning, while maintaining transparency. However, machine learning may disrupt learning processes within the organization.
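A minimal sketch of the explanation step, using the lime package's LimeTextExplainer on a toy scikit-learn text classifier. The memos, labels and pipeline are invented; any classifier exposing predict_proba would do, and this is not the study's actual model.

    # Sketch: LIME highlighting which words drove a classification.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from lime.lime_text import LimeTextExplainer

    memos  = ["strong cash flow low leverage", "weak sales rising debt",
              "stable margins good collateral", "covenant breach heavy losses"]
    labels = [1, 0, 1, 0]                      # 1 = increase limit (assumed)

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(memos, labels)

    explainer = LimeTextExplainer(class_names=["reject", "increase"])
    exp = explainer.explain_instance("rising debt but good collateral",
                                     clf.predict_proba, num_features=4)
    print(exp.as_list())                       # word -> weight on the decision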
50

Muthukumar, Subrahmanyam. "The application of advanced inventory techniques in urban inventory data development to earthquake risk modeling and mitigation in mid-America." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26662.

Abstract:
Thesis (Ph.D)--City Planning, Georgia Institute of Technology, 2009.
Committee Chair: French, Steven P.; Committee Member: Drummond, William; Committee Member: Goodno, Barry; Committee Member: McCarthy, Patrick; Committee Member: Yang, Jiawen. Part of the SMARTech Electronic Thesis and Dissertation Collection.