Dissertations / Theses on the topic 'Vague Set'




Consult the top 50 dissertations / theses for your research on the topic 'Vague Set.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Almadani, Firdos Mohammed. "Modelling and analysing vague geographical places using fuzzy set theory." Thesis, University of Leicester, 2016. http://hdl.handle.net/2381/37352.

Abstract:
Vagueness is an essential part of how humans perceive and understand the geographical world they occupy. It has become increasingly important to acknowledge this in geographical databases and analyses in the field of Geographical Information Science (GIScience). This research tackled the wholly original topic of modelling vague geographical places (objects) using fuzzy set theory, with a view to assessing the implications of routing problems around those vague places. The research focused on modelling the vague places of a number of villages and rural settlements, working with national address databases whose numerous ambiguous characteristics add challenge to the work. It demonstrated how fuzzy set theory can be used to derive approximate boundaries for vague spatial extents (fuzzy footprints) from sets of precise addresses reporting rural settlements, recorded in different databases. It further explored the implications of applying the Travelling Salesman Problem (TSP) to traditional hard village extents versus the modelled fuzzy extents. The methods introduced evaluate the usefulness of fuzzy set theory in modelling and analysing such vague regions. The results imply that the fuzzy model is more efficient than the traditional hard, crisp model at approximating the spatial extent of rural areas. However, the TSP results showed that tours were mostly longer in the fuzzy model than in the traditional crisp model. This is mainly an effect of the scale of rural areas, given the relatively small distances between villages. One challenge for the approach outlined here is to carry the method over into other novel analyses of geographical information based on fuzzy representations of geographical phenomena.
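The fuzzy-footprint step lends itself to a small illustration. The sketch below is not the thesis's actual pipeline (the kernel bandwidth, grid and address data are invented): it derives a [0, 1] membership surface for a vague place from reported address points and takes an alpha-cut to obtain a crisp extent.

```python
# A minimal sketch of a fuzzy footprint: membership is a normalised kernel
# density over address points; an alpha-cut yields a crisp approximation.
import numpy as np

def fuzzy_footprint(addresses, grid_x, grid_y, bandwidth=500.0):
    """Gaussian kernel density over address points, scaled to [0, 1]."""
    xx, yy = np.meshgrid(grid_x, grid_y)
    density = np.zeros_like(xx, dtype=float)
    for px, py in addresses:
        d2 = (xx - px) ** 2 + (yy - py) ** 2
        density += np.exp(-d2 / (2.0 * bandwidth ** 2))
    return density / density.max()          # fuzzy membership in [0, 1]

def alpha_cut(membership, alpha=0.5):
    """Crisp approximation of the vague extent at a chosen alpha level."""
    return membership >= alpha

# Toy usage: addresses reported as belonging to one rural settlement.
rng = np.random.default_rng(0)
pts = rng.normal(loc=[2000.0, 3000.0], scale=400.0, size=(50, 2))
gx = np.linspace(0, 5000, 100)
gy = np.linspace(0, 5000, 100)
mu = fuzzy_footprint(pts, gx, gy)
print("cells with membership >= 0.5:", alpha_cut(mu).sum())
```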
2

Smith, Luke Alexander. "Refining Multivariate Value Set Bounds." Thesis, University of California, Irvine, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3709756.

Abstract:

Over finite fields, if the image of a polynomial map is not the entire field, then its cardinality can be bounded above by a significantly smaller value. Earlier results bound the cardinality of the value set using the degree of the polynomial, but more recent results make use of the powers of all monomials.

In this paper, we explore the geometric properties of the Newton polytope and show how they allow for tighter upper bounds on the cardinality of the multivariate value set. We then explore a method which allows for even stronger upper bounds, regardless of whether one uses the multivariate degree or the Newton polytope to bound the value set. Effectively, this provides an alternate proof of Kosters' degree bound, an improved Newton polytope-based bound, and an improvement of a degree matrix-based result given by Zan and Cao.
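In the univariate case over a prime field, the gap the abstract describes is easy to see by brute force. The sketch below compares an actual value set size against a classical degree-based bound of this type (Wan's bound: a non-permutation polynomial of degree d satisfies |V_f| <= p - ceil((p-1)/d)); the thesis itself treats multivariate maps and Newton polytopes, which this toy does not touch.

```python
# Brute-force value set size over F_p versus Wan's degree bound.
from math import ceil

def value_set_size(coeffs, p):
    """|{f(x) : x in F_p}| for f given by coeffs (lowest degree first)."""
    values = set()
    for x in range(p):
        acc, power = 0, 1
        for c in coeffs:
            acc = (acc + c * power) % p
            power = (power * x) % p
        values.add(acc)
    return len(values)

p, d = 31, 3
f = [1, 0, 0, 1]                           # f(x) = x^3 + 1, degree d = 3
print(value_set_size(f, p), "<=", p - ceil((p - 1) / d))   # 11 <= 21
```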

3

Soh, Bao Lin Pauline. "Test-Set Reading: Value to Mammography." Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/11557.

Abstract:
Purpose: The purpose of this thesis is to understand the relationship between mammographic performance based on actual clinical reading and performance on screen-reading test-sets, and to examine potential causal agents for any lack of correlation. Methods: This study was designed to encompass three facets. The first element investigated the extent to which test-set reading can represent actual clinical reporting in screening mammography. The second element examined how the location where reading takes place and the availability of prior images can impact performance in breast test-set reading. The third element considered the reading workstation monitors and the viewing environment available within BreastScreen New South Wales centres to determine whether consistent reporting conditions were provided to breast screen readers. Results: Moderate or acceptable levels of agreement (W = 0.69–0.73, P < 0.01) were shown between actual clinical reporting and test-set conditions when describing group performance. The agreement was enhanced when prior images were available. The location where reading takes place and the availability of prior images showed acceptable levels of agreement (W = 0.75–0.79, P < 0.001) in group performance, although both factors had a varying impact when examining the results of individual readers. The final aspect demonstrated overall good adherence of reading workstation monitors and the viewing environment to published guidelines. Conclusions: Test-set readings in clinical and laboratory settings can represent radiologic group performance in the clinic to a reasonable level, particularly if prior images are available. If individual efficacy is being examined, some observers do demonstrate differences between test-set and clinical performance, as well as differences between test-set situations, even when viewing conditions generally adhere to international standards.
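The abstract reports agreement as W statistics; assuming these denote Kendall's coefficient of concordance (a standard multi-reader agreement measure, though the abstract does not name it), the computation is compact:

```python
# Kendall's W for k readers ranking n cases (no tied ranks): W = 1 means
# perfect concordance, W = 0 means no agreement beyond chance.
import numpy as np

def kendalls_w(ranks):
    """ranks: k x n array, each row one reader's ranking of n cases."""
    k, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (k ** 2 * (n ** 3 - n))

readers = np.array([[1, 2, 3, 4, 5],
                    [2, 1, 3, 5, 4],
                    [1, 3, 2, 4, 5]])
print(round(kendalls_w(readers), 3))   # fairly concordant toy readers
```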
4

Buquicchio, Luke J. "Variational Open Set Recognition." Digital WPI, 2020. https://digitalcommons.wpi.edu/etd-theses/1377.

Abstract:
In traditional classification problems, all classes in the test set are assumed to also occur in the training set; this is referred to as the closed-set assumption. However, in practice, new classes may occur in the test set, which reduces the performance of machine learning models trained under the closed-set assumption. Machine learning models should be able to accurately classify instances of classes known during training while concurrently recognizing instances of previously unseen classes (the open-set assumption). The open-set assumption is motivated by real-world applications of classifiers, wherein it is improbable that sufficient data can be collected a priori on all possible classes to reliably train for them. For example, motivated by the DARPA WASH project at WPI, a disease classifier trained on data collected prior to the outbreak of COVID-19 might erroneously diagnose patients with the flu rather than the novel coronavirus. State-of-the-art open set methods based on Extreme Value Theory (EVT) fail to adequately model class distributions with unequal variances. We propose the Variational Open-Set Recognition (VOSR) model, which leverages all class-belongingness probabilities to reject unknown instances. To realize the VOSR model, we design a novel Multi-Modal Variational Autoencoder (MMVAE) that learns well-separated Gaussian mixture distributions with equal variances in its latent representation. During training, VOSR maps instances of known classes to high-probability regions of class-specific components. By enforcing a large distance between these latent components during training, VOSR then assumes unknown data lies in the low-probability space between components and uses a multivariate form of Extreme Value Theory to reject unknown instances. Our VOSR framework outperforms state-of-the-art open-set classification methods with a 15% F1 score increase on a variety of benchmark datasets.
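A heavily simplified sketch of the rejection idea follows (illustrative only, not the VOSR model: there is no variational autoencoder and no multivariate EVT here, and the means, variance and threshold are invented). It scores a latent vector against per-class Gaussian components with equal variances and rejects as unknown when every class-belongingness score is too small:

```python
# Reject-or-classify on a latent vector using per-class Gaussian components.
import numpy as np

def classify_or_reject(z, means, sigma=1.0, threshold=1e-3):
    """z: latent vector; means: (C, D) component means; returns class or -1."""
    d2 = ((means - z) ** 2).sum(axis=1)
    dim = means.shape[1]
    dens = np.exp(-d2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2) ** (dim / 2)
    if dens.max() < threshold:
        return -1                      # unknown: between/far from components
    return int(dens.argmax())

means = np.array([[-5.0, 0.0], [5.0, 0.0]])   # well-separated class components
print(classify_or_reject(np.array([-4.8, 0.2]), means))  # 0 (known class)
print(classify_or_reject(np.array([0.0, 0.0]), means))   # -1 (rejected)
```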
5

GOMEZ-CALDERON, JAVIER. "POLYNOMIALS WITH SMALL VALUE SET OVER FINITE FIELDS." Diss., The University of Arizona, 1986. http://hdl.handle.net/10150/183933.

Abstract:
Let K(q) be the finite field with q elements and characteristic p. Let f(x) be a monic polynomial of degree d with coefficients in K(q). Let C(f) denote the number of distinct values of f(x) as x ranges over K(q). It is easy to show that C(f) ≥ ⌊(q − 1)/d⌋ + 1. There is a known characterization of polynomials of degree d < √q for which C(f) = ⌊(q − 1)/d⌋ + 1. The main object of this work is to give a characterization for polynomials of degree d < ⁴√q for which C(f) < 2q/d. Using two well-known theorems, the Hurwitz genus formula and André Weil's theorem (the Riemann Hypothesis for algebraic function fields), it is shown that if d < ⁴√q and C(f) < 2q/d, then f(x) − f(y) factors into at least d/2 absolutely irreducible factors and f(x) has one of the following forms:
f(x − λ) = D_{d,a}(x) + c, with d | (q² − 1);
f(x − λ) = D_{r,a}(···((x² + b₁)² + b₂)² + ··· + b_m), with d | (q² − 1), d = 2ᵐ·r, and (2, r) = 1;
f(x − λ) = (x² + a)^(d/2) + b, with (d/2) | (q − 1);
f(x − λ) = (···((x² + b₁)² + b₂)² + ··· + b_m)ʳ + c, with d | (q − 1), d = 2ᵐ·r;
f(x − λ) = xᵈ + a, with d | (q − 1);
f(x − λ) = x(x³ + ax + b) + c;
f(x − λ) = x(x³ + ax + b)(x² + a) + e;
f(x − λ) = D_{3,a}(x² + c), with c² ≠ 4a;
f(x − λ) = (x³ + a)ⁱ + b, with i = 1, 2, 3, or 4;
f(x − λ) = x³(x³ + a)³ + b;
f(x − λ) = x⁴(x⁴ + a)² + b; or
f(x − λ) = (x⁴ + a)ⁱ + b, with i = 1, 2, or 3,
where D_{d,a}(x) denotes the Dickson polynomial of degree d. Finally, to exhibit other polynomials with small value sets, the following estimate is obtained: C((fᵐ + b)ⁿ) = αq/d + O(√q), where α = (1 − (1 − 1/m)ⁿ)·m and the constant implied in O(√q) is independent of q.
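Since the classified forms are built largely from Dickson polynomials, a quick brute-force experiment shows how few values such a polynomial takes. The recurrence below is the standard one (D₀ = 2, D₁ = x, Dₙ = x·Dₙ₋₁ − a·Dₙ₋₂); the parameters p, d, a are chosen only so that d < ⁴√q as in the theorem.

```python
# Value set of the Dickson polynomial D_{d,a} over F_p, by brute force.
def dickson_value_set(d, a, p):
    """Value set of D_{d,a} over F_p, for d >= 1."""
    values = set()
    for x in range(p):
        d0, d1 = 2 % p, x                 # D_0 = 2, D_1 = x
        for _ in range(d - 1):
            d0, d1 = d1, (x * d1 - a * d0) % p
        values.add(d1)
    return values

p, d, a = 101, 3, 1                       # d < q**(1/4): 3 < 101**0.25 ~ 3.17
v = dickson_value_set(d, a, p)
print(len(v), "distinct values; 2q/d =", round(2 * p / d, 1))  # 67 < 67.3
```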
6

Noviani, Evi. "Shape optimisation for the wave-making resistance of a submerged body." Thesis, Poitiers, 2018. http://www.theses.fr/2018POIT2298/document.

Abstract:
In this thesis, we compute the shape of a fully immersed object with a given area which minimises the wave resistance. The smooth body moves at a constant speed under the free surface of a fluid which is assumed to be inviscid and incompressible. The wave resistance is the drag, i.e. the horizontal component of the force exerted by the fluid on the obstacle. We work with the 2D Neumann-Kelvin equations, which are obtained by linearising the irrotational Euler equations with a free surface. The Neumann-Kelvin problem is formulated as a boundary integral equation based on a fundamental solution which handles the linearised free surface condition. We use a gradient descent method to find a local minimiser of the wave resistance problem. A gradient with respect to the shape is calculated by a boundary variation method. We use a level-set approach to calculate the wave-making resistance and to deal with the displacements of the boundary of the obstacle. We obtain a great variety of optimal shapes depending on the depth of the object and its velocity.
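A minimal sketch of the level-set representation follows (generic, not the thesis's Neumann-Kelvin solver or its shape gradient): the obstacle boundary is the zero level set of φ, and moving it with a normal speed V discretises φₜ + V·|∇φ| = 0.

```python
# One level-set evolution loop: the zero contour of phi is the boundary.
import numpy as np

n, h, dt, V = 101, 0.02, 0.005, 1.0          # grid, spacing, time step, speed
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - 0.5             # signed distance to a circle

for _ in range(20):                          # push the boundary outward
    gx, gy = np.gradient(phi, h)
    phi -= dt * V * np.sqrt(gx**2 + gy**2)

# The zero level set has moved outward by roughly V * 20 * dt = 0.1.
mid = phi[n // 2]
idx = int(np.abs(mid).argmin())
print("approx. new radius:", abs(x[idx]))    # ~0.6 (initial 0.5 + 0.1)
```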
7

Kergadallan, Xavier. "Estimation des niveaux marins extrêmes avec et sans l’action des vagues le long du littoral métropolitain." Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1102/document.

Abstract:
Accurate knowledge of the statistical distribution of extreme sea levels is of the utmost importance for the characterization of flood risks in coastal areas, with particular interest devoted to extreme water levels because they may induce the most dramatic consequences. This research was funded by the French Ministry of Ecology, Sustainable Development and Energy to identify the risk of flooding from the sea in France. The aim is to provide design-level values along the French coasts by a statistical method of extreme value analysis. These levels must include the effect of the three following components: tide, meteorological surge and wave set-up. The principle is as follows: an analysis is carried out at the harbours, where sea level observations are available, then the result is interpolated between the harbours. Different approaches are tested. In particular, the following items are studied:
- the tide-surge dependence, with two different types of dependence: a temporal dependence and an amplitude dependence;
- the interpolation method, comparing a site-by-site analysis (SSA) with a Regional Frequency Analysis (RFA), and a 1-D with a 2-D interpolation;
- the estimation of the wave set-up, based on the state of the art of parametric formulas;
- the surge-wave dependence, with bivariate laws of extreme values.
The final result is two profiles of the 100-year water level: one for the still water level (tide and meteorological surge), and the other for the sea level including the wave set-up. The highest sea levels are located, for the North Sea, English Channel and Atlantic coasts, in the Mont-Saint-Michel Bay (because of the tide), and for the Mediterranean coast around Marseille. The analysis shows that the temporal tide-surge dependence has no significant effect on the estimation of extreme sea levels. In contrast, the model of the amplitude tide-surge dependence shows interesting results for a few harbours. In comparison with the SSA, the RFA tends to smooth the results. RFA estimates are higher along the Mediterranean coast, and similar along the English Channel and Atlantic coasts. RFA would be recommended for estimating return levels outside the SSA validity domain. Because of the small number of observation sites, a 1-D interpolation along a smoothed coastline is preferred. The wave set-up is calculated with the formula of Dean and Walton [2009]. The surge-wave dependence is moderate along the Mediterranean coast. Variations of the dependence factor are more important along the English Channel and Atlantic coasts, with a maximum at the Bay of the Seine and minima at the Mont-Saint-Michel Bay and Calais. Some ideas are provided to improve the methodology in further work.
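The return-level step can be sketched generically (illustrative only: the thesis combines tide, surge and wave set-up with dependence modelling, none of which appears here). With annual maxima in hand, fit a GEV distribution and invert it at exceedance probability 1/100:

```python
# 100-year return level from annual maxima via a GEV fit (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
annual_max = stats.genextreme.rvs(c=-0.1, loc=4.0, scale=0.3,
                                  size=50, random_state=rng)  # fake sea levels (m)

c, loc, scale = stats.genextreme.fit(annual_max)
level_100 = stats.genextreme.isf(1.0 / 100.0, c, loc, scale)
print(f"estimated 100-year sea level: {level_100:.2f} m")
```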
8

Christenson, Nina. "Knowledge, Value and Personal experience : Upper secondary students' resources of supporting reasons when arguing socioscientific issues." Licentiate thesis, Karlstads universitet, Avdelningen för geografi och turism, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-6815.

Abstract:
This thesis focuses on upper secondary students’ use of resources in their supporting reasons when arguing socioscientific issues (SSIs). The skills of argumentation have been emphasized in science education during the past decades and SSIs are proven a good context for learners to enhance skills of argumentation and achieve the goal of scientific literacy. Research has shown that supporting reasons from various resources are embedded in students’ argumentation on SSIs, and also that multi-perspective involvement in reasoning is important for the quality of argumentation. To explore the reasons used by students in arguing about SSIs in this thesis, the SEE-SEP model was adopted as an analytical framework. The SEE-SEP model covers the six subject areas of sociology/culture, economy, environment/ecology, science, ethics/morality and policy, which are connected to the three aspects of knowledge, value and personal experience. Two studies covering four SSIs (global warming, GMO, nuclear power and consumption) explore how students construct arguments on one SSI topic chosen by them. In paper I, I investigated students’ use of resources in their informal argumentation and to what extent students made use of knowledge. The results showed that students used value to a larger extent (67%) than knowledge (27%). I also found that the distribution of supporting reasons generated by students varied from the different SSIs. In paper II, I explored students’ use of resources in relation to students’ study background (science majors and social-science majors) and gender. The results showed that social-science majors and females generated more numbers of reasons and also showed a larger amount of multi-disciplinary resources in their supporting reasons. From the findings of this thesis, the SEE-SEP model was established as a suitable model used to analyze students’ resources of supporting reasons while arguing about SSIs. Furthermore, the potential for applying the SEE-SEP model in teachers’ SSI-teaching and students’ SSI-learning is suggested. The implications to research and teaching are also discussed.
9

Roberts, Creta M. "Promoting generalization of coin value relations with young children via equivalence class formation." Virtual Press, 1999. http://liblink.bsu.edu/uhtbin/catkey/1137578.

Abstract:
Sidman and Tailby (1982) established procedures to analyze the nature of stimulus-to-stimulus relations established by conditional discriminations. Their research describes specific behavioral tests to determine the establishment of properties that define the relations of equivalence. An equivalence relation requires the demonstration of three conditional relations: reflexivity, symmetry, and transitivity. The equivalence stimulus paradigm provides a method to account for novel responding. The research suggests that equivalence relations provide a more efficient and effective approach to the assessment, analysis, and instruction of skills. The present research examined the effectiveness of the formation of an equivalence class in teaching young children coin value relations. The second aspect of the study was to determine whether there was a relationship between equivalence class formation and generalization of the established skills to other settings. Five children, 4 and 5 years old, were selected to participate in the study based on their lack of skills in the area of coin values and purchasing an item with dimes or quarters equaling fifty cents. The experimental task was presented on a Macintosh computer with HyperCard programming. The experimental stimuli consisted of pictures of dimes, quarters, and Hershey candy bars presented in match-to-sample procedures. Two conditional discriminations were taught (if A then B, and if B then C). The formation of an equivalence class was evaluated by testing if C then A. Generalization across settings was tested after the formation of an equivalence class by having the children purchase a Hershey candy bar with dimes at a play store. A multiple-baseline experimental design was used to demonstrate a functional relationship between the formation of an equivalence class and generalization of skills across settings. The present research provides supportive evidence that coin value relations can be taught to young children using equivalence procedures. The study also demonstrated generalization to novel, untaught stimuli across settings after the formation of an equivalence class. A posttest on generalization across settings was conducted 3 months after the study. Long-term stability of equivalence relations was demonstrated by three of the subjects.
Department of Special Education
10

Mastako, Kimberley Allen. "Choice set as an indicator for choice behavior when lanes are managed with value pricing." Texas A&M University, 2003. http://hdl.handle.net/1969.1/1582.

Abstract:
Due to recent pricing studies that have revealed substantial variability in values of time among decision makers with the same socioeconomic characteristics, there is substantial interest in modeling the observed heterogeneity. This study addresses this problem by revealing a previously overlooked connection between choice set and choice behavior. This study estimates a discrete choice model for mode plus route plus time choice, subdivides the population according to empirically formed choice sets, and finds systematic variations among four choice set groups in user preferences for price managed lanes. Rather than assume the same values of the coefficients for all users, the model is separately estimated for each choice set group, and the null hypothesis of no taste variations among them is rejected, suggesting that choice set is an indicator for choice behavior. In the State Route 91 study corridor, the price-managed lanes compete with at least two other congestion-avoiding alternatives. The principal hypothesis is that a person’s willingness to pay depends on whether or not he perceives as personally feasible the option to bypass some congestion in a traditional carpool lane or by traveling outside the peak period. The procedure for estimating the choice sets empirically is predicated on the notion that individuals operate within a wide array of unobservable constraints that can establish the infeasibility of either alternative. The universal choice set includes eight combinations of mode and time and route, wherein there are exactly two alternatives for each. Choice sets are formed from an assumed minimum set, which is expanded to one of three others whenever a non-zero choice probability for either ridesharing, or shoulder period travel, or both is revealed in a person’s history of choice behavior. Based on the test of taste variations, this author finds different values of time across the four choice set groups in the study sample. If these relationships can be validated in other locations, this would make a strong case for modeling choice behavior in value pricing as a function of choice set.
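The thesis's central idea, that the feasible choice set changes predicted behaviour even under fixed preferences, can be illustrated with a toy multinomial logit (the alternatives and utility values below are invented, not estimates from the SR-91 data):

```python
# Multinomial logit probabilities restricted to a feasible choice set.
import numpy as np

def logit_probs(utilities, feasible):
    """Softmax over feasible alternatives only (feasible: boolean mask)."""
    u = np.where(feasible, utilities, -np.inf)
    expu = np.exp(u - u[feasible].max())       # stabilised softmax
    return expu / expu.sum()

# Utilities for: [toll lane, free lane, carpool lane, off-peak travel]
u = np.array([1.0, 0.2, 0.8, 0.5])
everything = np.array([True, True, True, True])
no_carpool_no_offpeak = np.array([True, True, False, False])

print(logit_probs(u, everything))            # demand spread over four options
print(logit_probs(u, no_carpool_no_offpeak)) # toll-lane probability rises
```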
11

Buyukbasaran, Tayyar. "Ranking Units By Target-direction-set Value Efficiency Analysis And Mixed Integer Programming." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12605988/index.pdf.

Abstract:
In this thesis, two methods are proposed for ranking units: target-direction-set value efficiency analysis (TDSVEA) and a mixed integer programming (MIP) technique. Besides its ability to rank units based on the preferences of a decision maker (DM), TDSVEA, which modifies the targeted projection approach of Value Efficiency Analysis (VEA) and Data Envelopment Analysis (DEA), provides important information to the analyzer: targets and the distances of units from these targets, proposed input allocations to reach these targets, the lack of harmony between the DM and the manager of a unit, etc. In the MIP technique, units select criterion weights from a feasible weight space so as to outperform the maximum number of other units; units are then ranked according to this outperforming ability. The mixed integer programs in this technique are simplified by domination and weight-domination relations, and the simplification procedure is itself streamlined using transitivity between relations. Both TDSVEA and the MIP technique are applied to rank research universities, and these rankings are compared to those of other ranking techniques.
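The domination pre-filtering can be sketched directly: if unit i is at least as good as unit j on every criterion and strictly better on at least one, no feasible weight choice lets j outperform i, so that pair needs no MIP. A minimal check (with invented criterion scores):

```python
# Pairwise domination screen used to prune MIP comparisons.
import numpy as np

def dominates(a, b):
    """True if criterion vector a dominates b (higher is better)."""
    return bool(np.all(a >= b) and np.any(a > b))

units = np.array([[0.9, 0.7, 0.8],    # unit 0
                  [0.6, 0.5, 0.7],    # unit 1: dominated by unit 0
                  [0.5, 0.9, 0.6]])   # unit 2: not comparable to 0 or 1

for i in range(len(units)):
    for j in range(len(units)):
        if i != j and dominates(units[i], units[j]):
            print(f"unit {i} dominates unit {j}")   # prints: unit 0 dominates unit 1
```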
12

Tawn, Jonathan Angus. "Extreme value theory with oceanographic applications." Thesis, University of Surrey, 1988. http://epubs.surrey.ac.uk/2882/.

13

Chen, Chen. "SQL Implementation of Value Reduction with Multiset Decision Tables." University of Akron / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=akron1387495607.

14

Matos, Bárbara Cartagena da Silva. "Do sea otters according to prey's nutritional value?" Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/17176.

Abstract:
Master's in Applied Ecology
Optimal foraging theory proposes that the nutritional driver of prey choice and foraging in carnivores is energy gain. In contrast, recent research suggests that carnivores select prey that provides a diet with a specific balance of macronutrients (fat, protein, carbohydrates), rather than the highest energy content. To this effect, the prey choices of sea otters (Enhydra lutris) inhabiting Sitka Sound, in southeast Alaska, were studied during the months of May-August 2016. The goals of this research were to 1) describe the sea otter's diet in Sitka Sound; 2) describe the nutritional value of sea otters' prey items; 3) compare differences in prey choice according to sex; and 4) evaluate and compare prey nutritional value with sea otters' prey choices. Foraging observational data were collected opportunistically from a boat-based platform of opportunity. Sea otters' main prey were captured in arbitrary areas of Sitka Sound and analyzed for percentage of lipids (fat content) and calories (energy density). Prey consumption was significantly different: clams were the most frequently consumed prey (68.6%), followed by sea urchins (14.3%), scallops (5.7%), sea cucumbers (5.7%), crabs (2.9%) and sea stars (2.9%). Also, the results revealed significant diversity in fat content and energy density between sea otter prey specimens. Abalone ranked first in energy density, followed by scallops, while sea urchins recorded the highest lipid content. Prey choice and nutrient intake were not significantly different between male and female sea otters; nevertheless, males consumed more clams than females, while females consumed more sea urchins than males. The work on carnivore nutrition is preliminary, and these results provide a starting point for future work. Answers to such questions not only will have significant implications for managing predator populations and the ecological communities of which they are a part, but will also add important information on predator biology that has been neglected so far. Moreover, community conflicts over the impacts sea otters are having on commercial shellfisheries in southeast Alaska cannot be overlooked. Understanding sea otters' prey choices may provide information and predictions of how fisheries may be affected as the sea otter population grows in this area, in order to help decision makers, policy makers, community members, and commercial fishermen respond accordingly.
15

Hill, Joshua Erin. "On Calculating the Cardinality of the Value Set of a Polynomial (and some related problems)." Thesis, University of California, Irvine, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3646731.

Abstract:

We prove a combinatorial identity that relates the size of the value set of a map with the sizes of various iterated fiber products by this map. This identity is then used as the basis for several algorithms that calculate the size of the value set of a polynomial for a broad class of algebraic spaces, most generally an algorithm to calculate the size of the value set of a suitably well-behaved morphism between "nice" affine varieties defined over a finite field. In particular, these algorithms specialize to the case of calculating the size of the value set of a polynomial, viewed as a map between finite fields. These algorithms operate in deterministic polynomial time for fixed input polynomials (thus a fixed number of variables and polynomial degree), so long as the characteristic of the field grows suitably slowly as compared to the other parameters.

Each of these algorithms also produces a fiber signature for the map, which for each positive integer j, specifies how many points in the image have fibers of cardinality exactly j.
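For a polynomial map on a small prime field, both the value set size and the fiber signature can be read off by brute force; the following is a toy illustration of the definitions, not of the thesis's polynomial-time machinery:

```python
# Fiber signature of a map on F_p: for each j, how many image points have
# fibers of cardinality exactly j.
from collections import Counter

def fiber_signature(f, p):
    fiber_sizes = Counter(f(x) % p for x in range(p))   # image point -> fiber size
    return Counter(fiber_sizes.values())                # fiber size -> count

p = 11
sig = fiber_signature(lambda x: x * x, p)               # squaring map on F_11
print(dict(sig))                # {2: 5, 1: 1}: five values hit twice, 0 hit once
print("value set size:", sum(sig.values()))             # 6
```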

We adapt and analyze the zeta function calculation algorithms due to Lauder-Wan and Harvey, both as point counting algorithms and as algorithms for computation of one or many zeta functions.

These value set cardinality calculation algorithms extend to amortized cost algorithms that offer dramatic computational complexity advantages, when the computational cost is amortized over all the results produced. The last of these amortized algorithms partially answers a conjecture of Wan, as it operates in time that is polynomial in log q per value set cardinality calculated.

For the value set counting algorithms, these are the first such results, and offer a dramatic improvement over any previously known approach.

16

Vollrath, Scott Jeffrey. "A seasonal and regional evaluation of a value added national radar data set for precipitation." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/47454.

17

Mabonesho, Ernest Francis. "Diversification, financial performance and the destruction of corporate value? : an application of fuzzy set analysis." Thesis, University of Strathclyde, 2013. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=20941.

Abstract:
FSA techniques appear to offer valuable complementary theoretical and empirical insights to conventional finance research methods, helping us better understand the financial impact of corporate diversification strategies. FSA can provide a conceptual framework to integrate the often confusing and conflicting theoretical explanations and empirical results of past research. This thesis explores the potential usefulness of FSA in addressing finance research problems or paradoxes that are characterised by large numbers of inter-connected variables, complex causality, and different configurations leading to similar outcomes. Specifically, fuzzy set analysis is applied to cross-sectional data from firms listed in the London Stock Exchange FTSE All-Share index (2001-2010) in order to address a gap in the literature as to "how corporate diversification necessarily and sufficiently leads to favourable financial performance". The results of this research show that there is no simple answer to this question, nor is there a simple theoretical explanation. It appears that a diversification strategy per se is neither a necessary nor a sufficient indicator of favourable or unfavourable financial performance. The FSA results showed multiple configurations of corporate diversification and other firm attributes which are usually, or more often than not, sufficiently associated with favourable firm value, profitability, and risk-return performance. This indicates the presence of complex causality, asymmetric causality, and equifinality in examining determinants of financial performance. The results are partially explained by elements of standalone theories but better explained by the construction of a series of hybrid theoretical frameworks. The usefulness of FSA in helping understand and improve decision-making processes that rely on complex financial or numeric information has been demonstrated, and it is hoped that this research acts as a "stepping stone" to legitimate a new set of analytical techniques for accounting and finance researchers to use. This would help corporate managers/CEOs, analysts, and investors in decision-making processes.
18

Van, der Westhuizen Anriette. "The verification of seat effective amplitude transmissibility (SEAT) value as a reliable metric to evaluate dynamic seat comfort." Thesis, Stellenbosch : University of Stellenbosch, 2004. http://hdl.handle.net/10019.1/16453.

Abstract:
Thesis (MScIng)--University of Stellenbosch, 2004.
A rough road vibration stimulus was reconstructed on a shaker platform to assess the dynamic comfort of seven seats by six human subjects. The virtual seat method was combined with a paired comparison procedure to assess subjective dynamic seat comfort. The psychometric method of constants, a 1-up-1-down Levitt procedure and a 2-up-1-down Levitt procedure were compared experimentally to find the most accurate and efficient paired comparison scheme. A two-track interleaved, 2-up-1-down Levitt procedure was used for the subjective dynamic seat comfort assessment. SEAT value is an objective metric that has been widely used to determine seat vibration isolation efficiency. There was an excellent correlation (R² = 0.97) between the subjective ratings and estimated SEAT values on the seat top when the values were averaged over the six subjects. This study suggests that SEAT values, estimated from the averaged seat-top transmissibility of six carefully selected subjects, could be used to select the best seat for a specific road vibration input.
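In outline, the SEAT value is the ratio of the vibration experienced on the seat to the vibration at the seat base, normally computed from frequency-weighted accelerations (ISO 2631 weighting). The sketch below omits the weighting filter and uses synthetic signals, so it is a simplification of the metric, not the thesis's procedure:

```python
# Simplified (unweighted) SEAT value: RMS on seat over RMS at the base.
import numpy as np

def rms(a):
    return np.sqrt(np.mean(a ** 2))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 10_000)
platform = np.sin(2 * np.pi * 4.0 * t) + 0.3 * rng.standard_normal(t.size)
seat_top = 0.7 * platform              # a seat attenuating the input by 30%

seat_value = 100.0 * rms(seat_top) / rms(platform)
print(f"SEAT value: {seat_value:.0f}%")   # below 100% means the seat isolates
```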
19

Deng, Kefu. "The value and validity of software effort estimation models built from a multiple organization data set." Click here to access this resource online, 2008. http://hdl.handle.net/10292/473.

Abstract:
The objective of this research is to empirically assess the value and validity of a multi-organization data set in the building of prediction models for several ‘local’ software organizations; that is, smaller organizations that might have a few project records but that are interested in improving their ability to accurately predict software project effort. Evidence to date in the research literature is mixed, due not to problems with the underlying research ideas but to limitations in the analytical processes employed:
• the majority of previous studies have used only a single organization as the ‘local’ sample, introducing the potential for bias;
• the degree to which the conclusions of these studies might apply more generally cannot be determined because of a lack of transparency in the data analysis processes used.
It is the aim of this research to provide a more robust and visible test of the utility of the largest multi-organization data set currently available – that from the ISBSG – in terms of enabling smaller-scale organizations to build relevant and accurate models for project-level effort prediction. Stepwise regression is employed to construct ‘local’, ‘global’ and ‘refined global’ models of effort that are then validated against actual project data from eight organizations. The results indicate that local data, that is, data collected for a single organization, is almost always more effective as a basis for constructing a predictive model than data sourced from a global repository. That said, the accuracy of the models produced from the global data set, while worse than that achieved with local data, may be sufficient in the absence of reliable local data – an issue that could be investigated in future research. The study concludes with recommendations for both software engineering practice – in setting out a more dynamic scenario for the management of software development – and research – in terms of implications for the collection and analysis of software engineering data.
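A minimal illustration of the local-versus-global comparison, with synthetic data standing in for ISBSG-style records and simple one-variable least squares standing in for the thesis's stepwise regression:

```python
# Fit effort ~ size locally and on a pooled "global" sample, then compare
# errors on held-out projects from the local organisation.
import numpy as np

rng = np.random.default_rng(3)

def make_org(n, slope, noise):
    size = rng.uniform(50, 500, n)                 # e.g. function points
    effort = slope * size + rng.normal(0, noise, n)
    return size, effort

def fit_line(x, y):
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

loc_x, loc_y = make_org(12, slope=8.0, noise=150.0)    # one small organisation
glb_x, glb_y = make_org(300, slope=5.0, noise=400.0)   # heterogeneous repository
test_x, test_y = make_org(20, slope=8.0, noise=150.0)  # held-out local projects

for name, (a, b) in [("local", fit_line(loc_x, loc_y)),
                     ("global", fit_line(glb_x, glb_y))]:
    mae = np.abs(a * test_x + b - test_y).mean()
    print(f"{name} model MAE: {mae:.0f} person-hours")
```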
20

Homer, Jason B. "Collecting, retrieving and analyzing Knowledge Value Added (KVA) data from U.S. navy vessels afloat." Thesis, Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Sep/09Sep%5FHomer.pdf.

Abstract:
Thesis (M.S. in Information Warfare Systems Engineering)--Naval Postgraduate School, September 2009.
Thesis Advisor(s): Housel, Thomas J.; Bergin, Richard D. Author's subject terms: ROI, return on investment; ROA, return on asset; IT ROI; IT performance; IT valuation; KVA, Knowledge Value Added; public sector finance. Includes bibliographical references (p. 65). Also available in print.
21

Bernauer, Martin K., and Roland Herzog. "Optimal Control of the Classical Two-Phase Stefan Problem in Level Set Formulation." Universitätsbibliothek Chemnitz, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-62014.

Abstract:
Optimal control (motion planning) of the free interface in classical two-phase Stefan problems is considered. The evolution of the free interface is modeled by a level set function. The first-order optimality system is derived on a formal basis. It provides gradient information based on the adjoint temperature and adjoint level set function. Suitable discretization schemes for the forward and adjoint systems are described. Numerical examples verify the correctness and flexibility of the proposed scheme.
22

AHRAM, TAREQ. "INFORMATION RETRIEVAL PERFORMANCE ENHANCEMENT USING THE AVERAGE STANDARD ESTIMATOR AND THE MULTI-CRITERIA DECISION WEIGHTED SET." Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3280.

Abstract:
Information retrieval is much more challenging than traditional small document collection retrieval. The main difference is the importance of correlations between related concepts in complex data structures. These structures have been studied by several information retrieval systems. This research began by performing a comprehensive review and comparison of several techniques of matrix dimensionality estimation and their respective effects on enhancing retrieval performance using singular value decomposition and latent semantic analysis. Two novel techniques have been introduced in this research to enhance intrinsic dimensionality estimation, the Multi-criteria Decision Weighted model to estimate matrix intrinsic dimensionality for large document collections and the Average Standard Estimator (ASE) for estimating data intrinsic dimensionality based on the singular value decomposition (SVD). ASE estimates the level of significance for singular values resulting from the singular value decomposition. ASE assumes that those variables with deep relations have sufficient correlation and that only those relationships with high singular values are significant and should be maintained. Experimental results over all possible dimensions indicated that ASE improved matrix intrinsic dimensionality estimation by including the effect of both singular values magnitude of decrease and random noise distracters. Analysis based on selected performance measures indicates that for each document collection there is a region of lower dimensionalities associated with improved retrieval performance. However, there was clear disagreement between the various performance measures on the model associated with best performance. The introduction of the multi-weighted model and Analytical Hierarchy Processing (AHP) analysis helped in ranking dimensionality estimation techniques and facilitates satisfying overall model goals by leveraging contradicting constrains and satisfying information retrieval priorities. ASE provided the best estimate for MEDLINE intrinsic dimensionality among all other dimensionality estimation techniques, and further, ASE improved precision and relative relevance by 10.2% and 7.4% respectively. AHP analysis indicates that ASE and the weighted model ranked the best among other methods with 30.3% and 20.3% in satisfying overall model goals in MEDLINE and 22.6% and 25.1% for CRANFIELD. The weighted model improved MEDLINE relative relevance by 4.4%, while the scree plot, weighted model, and ASE provided better estimation of data intrinsic dimensionality for CRANFIELD collection than Kaiser-Guttman and Percentage of variance. ASE dimensionality estimation technique provided a better estimation of CISI intrinsic dimensionality than all other tested methods since all methods except ASE tend to underestimate CISI document collection intrinsic dimensionality. ASE improved CISI average relative relevance and average search length by 28.4% and 22.0% respectively. This research provided evidence supporting a system using a weighted multi-criteria performance evaluation technique resulting in better overall performance than a single criteria ranking model. Thus, the weighted multi-criteria model with dimensionality reduction provides a more efficient implementation for information retrieval than using a full rank model.
Ph.D., Department of Industrial Engineering and Management Systems, Engineering and Computer Science
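The abstract does not spell out the ASE rule, so the sketch below substitutes a heuristic in the same spirit, retaining only singular values above their average as significant, to estimate a reduced dimensionality k for latent semantic analysis (the term-document matrix here is random toy data):

```python
# Estimate an LSA truncation rank from the singular value spectrum.
import numpy as np

rng = np.random.default_rng(4)
term_doc = rng.poisson(0.3, size=(200, 80)).astype(float)  # toy term-document matrix

s = np.linalg.svd(term_doc, compute_uv=False)
k = int((s > s.mean()).sum())      # retained "significant" singular values
print(f"{len(s)} singular values, estimated intrinsic dimensionality k = {k}")
```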
23

Hägg, Anna. "Improving the Product Value Flow at Atlas Copco SED Yokohama, Japan." Thesis, KTH, Industriell produktion, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-154993.

Abstract:
Atlas Copco SED, Yokohama develops and assembles surface drill rigs. Products are customized to suit customer needs, which demands make-to-order production with high flexibility. With changing customer demands, difficulties are experienced in estimating accurate future material and production capacity needs. Periodic peaks and valleys in production occur, with frequent changes to the production schedule, and sometimes there is excessive inventory. Management wishes to decrease lead times and improve the system as a whole to better handle these difficulties. In this project, the company's internal product value stream was investigated by following a product all the way from customer order to delivery and examining supporting functions. The purpose was to develop improvement suggestions which, if implemented, could lead to shorter production lead time, increased efficiency (direct work time in relation to total required time) and potential inventory reduction. If this could be achieved, the circumstances for handling frequent changes would improve, as well as the company's future competitiveness. The lean method of Value Stream Mapping (VSM) was used, since it provides an overview of the entire factory, rather than having a process-specific focus, and offers a systematic way of finding the sources of problems and solving them. The results showed that the problems are largely due to external factors, especially customer demand forecast accuracy and overseas suppliers sometimes having long lead times for material; however, production processes being cycled faster than takt and specialization in skills were two further factors, and others were found as well. Suggestions were made in the form of two future states, A and B, where the first takes a more short-term perspective and the second a slightly more long-term one that requires more training and a redistribution of assembly tasks. Both suggestions have the potential to address the problems discovered. Suggestion A allows production to be run with a single scheduled pacemaker process and FIFO (First-In-First-Out) systems, while suggestion B involves the development of a continuous flow in the main line working according to takt. Future state A could decrease lead time by up to 42% and reach an efficiency of 45%. Future state B could decrease lead time by 45%; in addition, it requires three fewer assemblers, who can be reassigned as team leaders or to other needed support functions, and it can reach an efficiency of 70%. Strategic suggestions were made for supply management and handling the forecast which, if implemented, have the potential to decrease the risk of excessive inventory.
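The mismatch the thesis works against can be stated with the basic lean arithmetic: takt time is available production time divided by customer demand, and a process cycled faster than takt builds inventory between stations. A worked example with invented numbers:

```python
# Takt time versus cycle time: cycle < takt signals overproduction risk.
available_minutes_per_day = 8 * 60 - 45        # one shift minus breaks
demand_rigs_per_day = 0.5                      # one rig every two days
takt_minutes = available_minutes_per_day / demand_rigs_per_day
cycle_minutes = 700.0                          # a station finishing early

print(f"takt = {takt_minutes:.0f} min/rig, cycle = {cycle_minutes:.0f} min/rig")
print("overproduction risk:", cycle_minutes < takt_minutes)   # True
```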
24

Stenberg, Johan. "Snapple : A distributed, fault-tolerant, in-memory key-value store using Conflict-Free Replicated Data Types." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188691.

Abstract:
As services grow and receive more traffic, data resilience through replication becomes increasingly important. Modern large-scale Internet services such as Facebook, Google and Twitter serve millions of users concurrently. Replication is a vital component of distributed systems. Eventual consistency and Conflict-Free Replicated Data Types (CRDTs) are suggested as an alternative to strongly consistent systems. This thesis implements and evaluates Snapple, a distributed, fault-tolerant, in-memory key-value database based on CRDTs running on the Java Virtual Machine. Snapple supports two kinds of CRDTs: an optimized implementation of the OR-Set, and version vectors. Performance measurements show that the Snapple system is significantly faster than Riak, a persistent database based on CRDTs, but has 2.5x-5x lower throughput than Redis, a popular in-memory key-value database written in C. Snapple is a prototype implementation, but it might be a viable alternative to Redis if the user wants the consistency guarantees CRDTs provide.
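An OR-Set, the CRDT Snapple optimises, is compact enough to sketch in full (a minimal reference-style implementation, not Snapple's optimised JVM version): adds tag each element with a unique id, removes tombstone only the tags they have observed, and a concurrent add therefore wins over a remove.

```python
# Observed-remove set (OR-Set): merge is a join, add wins over concurrent remove.
import uuid

class ORSet:
    def __init__(self):
        self.adds = set()      # (element, unique_tag) pairs
        self.removes = set()   # tombstoned pairs

    def add(self, element):
        self.adds.add((element, uuid.uuid4().hex))

    def remove(self, element):
        self.removes |= {p for p in self.adds if p[0] == element}

    def merge(self, other):
        self.adds |= other.adds
        self.removes |= other.removes

    def value(self):
        return {e for (e, tag) in self.adds - self.removes}

a, b = ORSet(), ORSet()
a.add("x")
b.merge(a)          # replicate to b
b.remove("x")       # b removes the tag it observed
a.add("x")          # concurrent re-add on a with a fresh tag
a.merge(b); b.merge(a)
print(a.value(), b.value())   # both converge to {'x'}: the add wins
```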
25

Ode, Egena. "Making co-creation work in mobile financial services innovation : what capabilities are needed and what practices work best in developing countries?" Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/making-cocreation-work-in-mobile-financial-services-innovation-what-capabilities-are-needed-and-what-practices-work-best-in-developing-countries(0ad4071d-e58a-41f0-b1e2-50109f47aa46).html.

Full text
Abstract:
This thesis addresses existing shortcomings in the co-creation literature by proposing organisational capabilities that support co-creation in financial service firms. A developing-country perspective is taken, and the context is Nigeria, a West African country. In this thesis, the resource-based view and knowledge-based view are integrated with the dynamic capability perspective to identify capabilities required to manage the dyadic interactions during co-creation. First, a conceptual model is developed through an in-depth literature review, before testing, refining and validating the model through a mixed-method research approach involving both qualitative and quantitative research steps. The conceptual model identifies a set of capabilities - namely the firm's innovation, knowledge management and relational capabilities - and their effect on co-creation practice. The aim of the qualitative research step was to improve the conceptual model through exploratory research. This step involved in-depth interviews (n=9) with key informants and a focus group discussion with users (n=7). In the quantitative step, empirical data were collected via a questionnaire (n=261) using a drop-off-pick-up (DOPU) technique. The data are analysed using structural path analysis, hypothesis testing and model re-specification. The results of the qualitative phase indicate that co-creation in financial services is dependent on regulation, user need and the structure of financial services in Nigeria. The results also confirm the influence of innovation, knowledge management and relational capabilities on co-creation practice. The qualitative findings further show that knowledge management capability emerged as a vital capability upon which other value creation activities in financial service firms depend. These findings were tested and validated in the quantitative phase. In line with the resource-based view (RBV) and the knowledge-based view (KBV), the empirical findings confirm that the firm's resource endowments explain, in part, value co-creation in firms. Principally, the findings of this study show that the capacity of financial service organisations to provide sustainable value creation for their clients and themselves depends on the degree to which they possess specific dynamic capabilities. The findings also show the relative importance of co-creation practices and how they are effective only in certain conditions and specific environments.
APA, Harvard, Vancouver, ISO, and other styles
26

Allen, George Louis. "An Empirical Investigation of the Complementary Value of a Statement of Cash Flows in a Set of Published Financial Statements." Thesis, North Texas State University, 1985. https://digital.library.unt.edu/ark:/67531/metadc331207/.

Full text
Abstract:
This research investigates the complementary value of a statement of cash flows (SCF) in a set of published financial statements. Selected accounting studies and selected parts of communication theory are used to argue the case for treating an SCF as a primary financial statement. Ideas adapted from communication theory are also used to decide key issues involved in developing an SCF. Specifically, the study selects a direct rather than a reconciling format for an SCF; it also defines cash to include currency, bank accounts, and marketable securities and exclude claims to cash such as notes and accounts receivable. The definition of cash limits cash flow to strict receipts and disbursements; it excludes constructive receipts and disbursements.
APA, Harvard, Vancouver, ISO, and other styles
27

Boutin, Guillaume. "Interactions vagues-banquise en zones polaires." Thesis, Brest, 2018. http://www.theses.fr/2018BRES0050/document.

Full text
Abstract:
Sea ice, which covers large expanses of the ocean near the poles, is a key component of the climate system. Global warming is driving its massive melting, especially in the Arctic. Where the sea-ice cover decreases, fetch increases, leading to more energetic sea states and potentially enhanced waves-ice interaction effects in the future. The rapid evolution of sea-ice extent and volume, combined with the intensification of human activities in polar regions, urges us to improve our understanding of waves-ice interactions. Sea ice attenuates waves; they can nevertheless propagate through it and break the ice far into the ice cover. Attenuation depends on ice properties such as floe size and thickness. Once broken, the resulting floes are more likely to drift and melt. In addition, wave attenuation yields a force that pushes the floes in the direction of wave propagation. A simplified representation of sea ice, including a floe size distribution, was incorporated in a wave model; it allowed us to show the important contribution of dissipative mechanisms to wave attenuation, especially those induced by the bending of the ice plates. After validation, the modified wave model was coupled to an ice model. The floe size distribution is exchanged in the coupled framework and used in the computation of lateral melt. The force exerted by the waves on the ice floes is sent from the wave model and is shown to compact the sea ice in summer; this reduces melting and significantly increases the temperature and salinity of the surface ocean close to the ice edge.
APA, Harvard, Vancouver, ISO, and other styles
28

Yi, Dingrong. "Singular value decomposition of Arctic sea ice cover and overlying atmospheric circulation fluctuations." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0005/MQ44321.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Yi, Dingrong 1969. "Singular value decomposition of Arctic Sea ice cover and overlying atmospheric circulation fluctuations." Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=20610.

Full text
Abstract:
The relationship between the Arctic and sub-Arctic sea-ice concentration (SIC) anomalies, particularly those associated with the Greenland and Labrador Seas' "Ice and Salinity Anomalies (ISAs)" occurring during the 1960s/1970s, 1970s/1980s, and 1980s/1990s, and the overlying atmospheric circulation (SLP) fluctuations is investigated using the Empirical Orthogonal Function (EOF) and Singular Value Decomposition (SVD) analysis methods. The data used are monthly SIC and SLP anomalies, which cover the Northern Hemisphere north of 45°N and extend over the 38-year period 1954-1991.
One goal of the thesis is to describe the spatial and temporal variability of SIC and atmospheric circulation on interannual and decadal timescales. Another goal is to investigate the nature and strength of the air-ice interactions. The air-ice interactions are investigated in detail in the first SVD mode of the coupled variability, which is characterized by decadal-to-interdecadal timescales. Subsequently, the nature and strength of the air-ice interactions are studied in the second SVD mode, which shows a long-term trend. The interactions in the third SVD mode which has an interannual timescale are briefly mentioned. (Abstract shortened by UMI.)
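As a sketch of the SVD technique this abstract describes, the snippet below applies singular value decomposition to the cross-covariance of two anomaly fields and extracts the leading coupled mode. The data here are synthetic placeholders, not the SIC and SLP records the thesis analyses:

```python
import numpy as np

# Hypothetical anomaly matrices: rows are months, columns are grid points.
rng = np.random.default_rng(0)
sic = rng.standard_normal((456, 300))   # 38 years x 12 months, 300 grid cells
slp = rng.standard_normal((456, 500))   # same months, 500 grid cells

# Cross-covariance between the two fields, then its SVD: the leading
# singular vectors are the spatial patterns of the coupled variability.
cov = sic.T @ slp / (sic.shape[0] - 1)
u, s, vt = np.linalg.svd(cov, full_matrices=False)

# Expansion coefficients (time series) of the first coupled mode.
pc_sic = sic @ u[:, 0]
pc_slp = slp @ vt[0, :]

# Squared covariance fraction: how much coupled variability mode 1 explains.
scf1 = s[0] ** 2 / np.sum(s ** 2)
print(f"Mode 1 squared covariance fraction: {scf1:.2%}")
```

In practice the expansion-coefficient time series would be correlated with each other and lag-correlated with climate indices to assess the nature and strength of the air-ice interactions.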
APA, Harvard, Vancouver, ISO, and other styles
30

Tenekecioglu, Goksel. "Increasing intermodal transportation in Europe through realizing the value of short sea shipping." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33588.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Ocean Engineering, 2005.
Includes bibliographical references (leaves 87-89).
This thesis describes the role of short sea shipping within the transportation network in the European Union. It examines the existence of externalities relating to congestion, infrastructure, air pollution, noise, and accidents in the transportation sector. It evaluates the level of these externalities and also their effects on the Community. It then explains current attempts to internalize these factors, or incorporate them into the cost of transportation that the user pays. It concludes that current efforts are lacking and do not produce the most beneficial situation for the citizens of Europe. Consequently, the thesis investigates other possible methods of internalization that may produce more advantageous results and analyzes their possible effects on the transportation sector. The value of short sea shipping is examined with regard to the previously mentioned externalities. It concludes that, with the exception of the emission of sulfur dioxide, maritime transportation outperforms other modes of transportation by producing relatively few external effects.

The current status of the short sea shipping industry is then described, followed by a discussion of intermodal transportation and the initiative within the European Community to increase the use of intermodal transportation. Two case studies are then reviewed, which demonstrate the economy of intermodal transportation solutions compared to all-road alternatives. The thesis concludes by summarizing the benefits of short sea shipping. Some of the obstacles which prevent the realization of the full potential of short sea shipping are discussed. Suggestions for improving the current situation are included as well as a description of some of the measures adopted by the European Commission to increase the use of short sea shipping as an alternative to road transportation.
by Goksel Tenekecioglu.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
31

Dennis, Rojas. "Värdeskapande i agil systemutveckling : En komparativ studie mellan mjukvaruverksamheter i Karlskronaregionen och om hur de ser på värdeskapande." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14652.

Full text
Abstract:
This thesis analyses how five software companies interpret and create value in their software processes. The analysis uses the Software Value Map model as a tool for decision-making in value creation. The purpose is to understand how different decisions affect the value outcome of each delivery and product, and how the local companies interpret and deliver value today. By studying economic and decision theories, we understand the importance and impact these have on value creation when products are developed. The results of this study show that the local companies prioritise customer-based value aspects to generate value. There are also similarities and differences between the companies' staff and in how the companies weigh the different aspects that generate value.
APA, Harvard, Vancouver, ISO, and other styles
32

Hasselström, Linus. "The monetary value of marine environmental change." Doctoral thesis, KTH, Miljöstrategisk analys (fms), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-193727.

Full text
Abstract:
The marine ecosystems are fundamental for human welfare. A number of current environmental pressures need attention, and the formulation of management strategies requires information from a variety of analytical dimensions. The linkage between environmental change and resulting implications for human welfare is one such dimension. This thesis presents studies on welfare implications from hypothetical future policies which improve the state of the marine environment. The method for these studies is economic valuation. The studied scenarios concern eutrophication in the Baltic Sea (including the Kattegat) and oil spill risk from shipping in the Lofoten-Vesterålen area in the Arctic Barents Sea. The thesis shows that the economic benefits from undertaking policies to improve or protect the marine environment in these cases are substantial and exceed the costs of taking measures. In addition to providing new monetary estimates, the thesis also provides new insights concerning 1) what type of scenario to use when valuing an environmental improvement and 2) whether there may exist trade-offs between precision in estimates and the level of ambition with respect to survey instrument complexity and econometric models when conducting valuation studies. The findings suggest an end of an era for studies in which the environmental change is unspecified or based on a single environmental indicator while the actual consequences of the suggested measures are more multifaceted. In contrast, relevant scenarios to study are well-specified and holistic. The thesis further reveals that it might not always be worth the effort to go for the most advanced scenario presentation or statistically best-fitting model specifications. This is something that needs to be further discussed among practitioners in order to allocate valuation resources wisely and not waste resources on unnecessarily elegant valuation studies.

APA, Harvard, Vancouver, ISO, and other styles
33

DE, LEO FRANCESCO. "New methodologies for the characterization of extreme sea states: applications in the Mediterranean Sea." Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/999932.

Full text
Abstract:
The sampling of met-ocean variables is crucial for a plethora of applications. In coastal areas, the management of coastal activities and shipping lanes has to account for variations in mean sea level, wave parameters and current velocities, and coastal defences need to be designed according to the severe sea states they will most likely have to face. Similarly, offshore engineering projects are expected to withstand forces driven by waves that might occur, e.g., once in ten thousand years. The assessment of design waves relies on statistical extrapolations that need to be fed with reliable and continuous wave data. Therefore, it is worth broadening as much as possible the ways in which waves are sampled. In this regard, this thesis first addresses the reliability of HF-radar wave measurements, through a practical case study in the Gulf of Naples. Radar data are compared to the outcomes of two numerical models: one providing the wave parameters on a regional scale, and the other specifically developed for the area of investigation at finer resolutions. Both models are first validated against a buoy moored offshore of the gulf (taken as reference), which lies outside the radar domain and therefore cannot be employed for a direct comparison with the radar. The agreement between the models and the HF-radars is evaluated through error indices computed on significant wave height, mean period and mean incoming direction. Results show reasonable consistency between the HF-radar and model measurements, leaving room for further investigation of the use of such devices. The aforementioned study refers to hindcast data provided by the Department of Civil, Chemical and Environmental Engineering of the University of Genoa (Italy). The hindcast was developed with a third-generation wave model defined over the whole Mediterranean Sea, outputting the most significant wave parameters on an hourly basis over the period 1979-2018. Such data, being continuously defined over a long period, also allow reliable analyses of extreme waves at given locations. In particular, beyond the analysis of HF-radar wave measurements, this thesis proposes two insights in the framework of the so-called extreme value analysis (EVA). First, a "bottom-up" approach for the identification and classification of the atmospheric processes producing extreme wave conditions is revisited and applied to several locations selected from the Italian buoy network. A methodology is given for classifying samples of significant wave height peaks into homogeneous subsets, related to the climatic forcing driving the most severe sea states. Subsequently, the study shows how to compute the overall extreme value distribution of significant wave height starting from the distributions fitted to each subset previously detected. From the results obtained, it is concluded that the proposed methodology is capable of identifying clearly differentiated subsets driven by homogeneous atmospheric processes: two well-known cyclonic systems are identified as most likely responsible for the extreme conditions detected at the investigated locations. These systems are analysed in the context of the Mediterranean Sea atmospheric climatology and compared with those identified by previous research in similar frameworks. It is then shown that the high-return-period quantiles of significant wave height are consistent with those resulting from the usual computational scheme of the EVA.

Finally, a simple model for evaluating non-stationarity in extreme waves is discussed, and its possible implications are analysed through practical examples. This model takes advantage of a linear slope estimate that quantifies the rate of change of a given time series while down-weighting possible outliers. The reliability of this slope is tested against two other methods that are not bound by the linear-trend hypothesis, which could otherwise be too limiting an assumption. The approach is applied to series of annual statistics of significant wave height over the whole Mediterranean Sea. Trend tests applied to the series extracted from the hindcast locations show that the modified linear slope is sound and reliable. Hence, it is shown how such an index can be used to assess when a non-stationary EVA is needed instead of the common stationary one, i.e. when significant divergences between the two models may arise. Finally, the linear slope estimates are used to assess the spatial distribution of historical long-term trends in the Mediterranean Sea, showing interesting analogies with previous work at similar locations.
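A minimal sketch of the stationary EVA step mentioned above: fit a Generalized Extreme Value (GEV) distribution to annual maxima of significant wave height and read off return values. The data are synthetic, and the thesis's actual scheme additionally covers subset-wise fits and non-stationarity:

```python
import numpy as np
from scipy import stats

# Hypothetical series of annual maxima of significant wave height (m).
rng = np.random.default_rng(1)
annual_max_hs = stats.genextreme.rvs(c=-0.1, loc=5.0, scale=0.8,
                                     size=40, random_state=rng)

# Fit a GEV distribution to the annual maxima (stationary EVA).
c, loc, scale = stats.genextreme.fit(annual_max_hs)

def return_value(n_years):
    # N-year return value: the quantile exceeded on average once in N years.
    return stats.genextreme.ppf(1.0 - 1.0 / n_years, c, loc=loc, scale=scale)

print(f"100-year Hs: {return_value(100):.2f} m")
```

With only 40 annual maxima, the fitted parameters carry wide confidence intervals; this is why long, continuous hindcast series such as the one described above are so valuable for design-wave assessment.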
APA, Harvard, Vancouver, ISO, and other styles
34

Paskelian, Ohannes. "Government Ownership, Firm Value and Choice of SEO Methods--- Evidence from Privatized Chinese SOEs." ScholarWorks@UNO, 2006. http://scholarworks.uno.edu/td/1050.

Full text
Abstract:
In 1991, the Chinese government started the privatization process. A distinguishing feature of this process is the presence of the government as a major shareholder in the privatized SOEs, which creates a unique ownership structure that affects firm performance and, in turn, the choice of equity issuing method. This dissertation investigates the relation between ownership structure, firm value, and the choice of equity offering method of Chinese semi-privatized former state-owned enterprises. The dissertation consists of two essays. The first essay examines this relationship during the period 1993-1998, when the Chinese stock exchanges were in their infancy. The second essay covers the period 1999-2003, when the Chinese government crafted many laws to modernize its stock exchanges and protect investors. In the first essay, we find that firms with higher government ownership under-perform relative to those with lower government ownership and prefer issuing rights offerings. The market reaction to rights offerings is lower than that to public offerings. Finally, the long-term market and operating performance of firms issuing rights offerings is poorer than that of their matched peer group. In the second essay, we find that 1) firms with higher government ownership still perform worse than firms with lower government ownership; 2) firms with higher government ownership still use rights offerings as the equity issue method; 3) firms with the lowest government ownership issue equity using private placements; 4) the market reaction to the announcement of private placements is positive; and 5) the monitoring provided by the placement buyer has a positive effect on the long-term performance of firms issuing private placements. Our results are consistent with previous findings about the effects of government ownership on firm value. Privatized firms with high government ownership do not necessarily maximize firm value; instead, their managers are more aligned with the political and social agenda of the government. However, firms with low government ownership and high institutional ownership are more profitable. A major contribution of the dissertation is to establish the linkage between ownership, performance, and the choice of equity methods.
APA, Harvard, Vancouver, ISO, and other styles
35

Sekonyela, Malira Patience. "Integrating Lesotho economy into the regional automotive value chain : manufacturing of car-seat covers." Master's thesis, University of Cape Town, 2015. http://hdl.handle.net/11427/17421.

Full text
Abstract:
Includes bibliographical references
The purpose of this study was to analyse the automotive industry in Southern Africa and to assess how best Lesotho can contribute to its supply chain. The analysis was done to better understand the sector, to identify Lesotho's potential to produce car-seat covers for South African automotive assembly plants, and to find the best trade policies and programmes to support value chains in the sector. The plan was to assess the possibility for manufacturers of Lesotho-made automotive components to supply the Original Equipment Manufacturers (OEMs - the main automotive assembly plants), using the South African automotive industry as the entry point for Lesotho components to penetrate the regional automotive value chain. The main focus of this study was the manufacturing of car-seat covers to supply the seven Original Equipment Manufacturers, namely: Volkswagen, BMW, Renault, Toyota, Daimler Chrysler, Ford and Mercedes Benz. The impact of the Motor Industry Development Programme (MIDP) and the Automotive Production and Development Programme (APDP) on the industry was assessed, as was the impact of the APDP on the relocation of component manufacturers to other Southern African Customs Union (SACU) countries, with Lesotho used as a case study. The study set out to find out whether Lesotho firms have the potential to contribute to automotive value chains through the manufacture of car-seat covers.
APA, Harvard, Vancouver, ISO, and other styles
36

Fang, Chin-Lung. "Predictability of Japan/East Sea (JES) system to uncertain initial/lateral boundary conditions and surface winds." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03sep%5FFang.pdf.

Full text
Abstract:
Thesis (M.S. in Meteorology and Physical Oceanography)--Naval Postgraduate School, September 2003.
Thesis advisor(s): Peter C. Chu, Steve Haeger. Includes bibliographical references (p. 73-77). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
37

Bayar, Regzedmaa, and Dolgorsuren Chandmani. "What you see? Value or ...? : A study of life values and lifestyles, and attractiveness of consumers towards advertising posters with value appeals in Umea." Thesis, Umeå University, Umeå School of Business, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-35484.

Full text
Abstract:

This research investigates the relationship between life values and attitudes towards advertising that includes life-value appeals. A quantitative survey using a self-administered questionnaire asked about people's life values, based on Kahle's eight-item scale, and about their attitudes towards advertising posters that we created ourselves. The sample was drawn from students and workers in Umeå; data were collected at the university, at offices and in a shopping mall.

The theoretical review shows that the link between advertising appeals and the consumer-behaviour factors of life values and lifestyle has been addressed by scholars such as Belch, Polay and Kahle. The review also covers influencing factors, types of advertising appeals, and life values and lifestyle activities.

The empirical findings establish a correlation between the ranking of eight life values and the ranking of eight advertising posters featuring those values. In addition, the posters are compared by gender and by lifestyle activities. Our findings confirmed two out of three hypotheses. The first confirmed hypothesis is that consumers' life values are reflected in their choice of advertising posters featuring those values. Second, the choice of value and poster does not differ by gender. The unconfirmed hypothesis is that lifestyle activities relate to the choice of posters.

The results highlight practical implications for advertisers and marketers, helping them understand consumer behaviour towards advertising. Especially in today's booming advertising industry, this enables them to make their advertising more efficient, neither overdoing nor underestimating its effects on customers.

APA, Harvard, Vancouver, ISO, and other styles
38

Rech, Ilirio José. "Formação do valor justo dos ativos biológicos sem mercado ativo: uma análise baseada no valor presente." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/12/12136/tde-19032012-185759/.

Full text
Abstract:
This study contributes to the accounting field by engaging the scientific and academic discussion on the measurement of biological assets, analysing the main elements used to estimate fair value based on present-value concepts. Its objective is a critical analysis of how the fair value of biological assets without an active market is formed, using the fundamentals and techniques of present value. To achieve this goal, exploratory research was adopted as the methodology and a multiple-case study as the research strategy. A case study was carried out in three large companies in the rural sector that produce biological assets, in order to verify how they measure the fair value of those assets. The main elements analysed were the revenues, production costs and discount rates used in the measurement process. The main parameters established on the basis of theoretical and practical concepts were: a) estimate expected production based on the company's past experience, projected over the entire productive phase of the asset, in the case of bearer (fruit-producing) assets, or based on the optimal harvest point for assets that are to be exhausted; b) calculate the selling price of the product using the market price at the date the financial statements are prepared; c) include direct production costs, such as labour and inputs, as deductions from revenue in building the cash flow, at market prices on the measurement date; d) include indirect costs in the same proportion and under the same criteria as those adopted for managerial decision-making, noting that some costs should not enter the cash flow at all, such as the remuneration of the capital invested in production and income taxes on these assets; e) use asset-pricing models such as the CAPM and the SIM to estimate discount rates, with the best results in the tests performed obtained using the CAPM. The multiple-case study found that the companies studied do not adopt the recommended parameters. All companies estimated production based on their own expectations, but differed in how they established the selling price used to obtain gross revenue, adopting either a) the average market price over a prior period or b) prices quoted on the futures market. Regarding costs, the companies adopt the same assumptions used in investment analysis and treat the remuneration of capital invested in land and infrastructure, as well as income tax, as deductions from future cash flow. Regarding discount rates, the companies adopt assumptions based on a WACC adjusted for capital structure, or arbitrary discount rates that are at least unexplained in the reports accessed.
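To make the present-value mechanics concrete, here is a minimal Python sketch of the approach the thesis analyses: discount projected net cash flows from a biological asset at a CAPM-derived rate. All numbers (revenues, costs, beta, rates) are illustrative assumptions, not data from the studied companies:

```python
def capm_rate(risk_free, beta, market_return):
    # CAPM: E[r] = rf + beta * (E[rm] - rf)
    return risk_free + beta * (market_return - risk_free)

def fair_value(revenues, direct_costs, indirect_costs, rate):
    # Net cash flow per period, discounted back to the measurement date.
    return sum(
        (r - dc - ic) / (1 + rate) ** t
        for t, (r, dc, ic) in enumerate(
            zip(revenues, direct_costs, indirect_costs), start=1)
    )

rate = capm_rate(risk_free=0.05, beta=0.9, market_return=0.12)
value = fair_value(
    revenues=[120.0, 130.0, 140.0],     # expected production x market price
    direct_costs=[40.0, 42.0, 44.0],    # labour and inputs at market prices
    indirect_costs=[10.0, 10.0, 10.0],  # allocated on managerial criteria
    rate=rate,
)
print(f"Discount rate: {rate:.2%}, fair value: {value:.2f}")
```

Note that, per the thesis's parameters, the cash flows exclude the remuneration of invested capital and income taxes, which the studied companies (contrary to the recommendation) deducted.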
APA, Harvard, Vancouver, ISO, and other styles
39

Lenain, Luc. "Etudes expérimentales et numériques de la dynamique des vagues et leurs implications pour les échanges océan - atmosphère." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLN033/document.

Full text
Abstract:
Over the last several decades there has been growing recognition from both the oceanographic and atmospheric science communities that, to better understand the coupling between the atmosphere and the ocean, and to reflect that understanding in improved air-sea fluxes of mass (e.g. gases, aerosols), momentum (e.g. generation of waves and currents) and energy (e.g. heat, and kinetic energy for currents and near-surface mixing) in coupled ocean-atmosphere models, surface-wave processes must be taken into account. The underlying physics of the coupling depends on the kinematics and dynamics of the wave field, including processes of wind-wave growth, nonlinear wave-wave interactions, wave-current interactions and wave dissipation, with the last normally considered to be dominated by wave breaking. Here we present a series of numerical studies and field observations demonstrating the importance of the wave field for air-sea interaction processes.
APA, Harvard, Vancouver, ISO, and other styles
40

Abedin, Behnam. "Social entrepreneurs value co-creation in online communities." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/231390/1/Behnam_Abedin_Thesis.pdf.

Full text
Abstract:
This thesis examined how and why social entrepreneurs in Australia use online communities for value co-creation. It specifically investigated social entrepreneurs' motivations to participate in online communities, the barriers that might inhibit them from participating in online communities, and the particular abilities that they need to have in order to participate in online communities. Moreover, this study explored value co-creation activities that social entrepreneurs perform in online communities to create value together for all parties involved and the positive outcomes for them as the result of their participation in online communities.
APA, Harvard, Vancouver, ISO, and other styles
41

Joshi, Neekita. "ASSESSING THE SIGNIFICANCE OF CLIMATE VARIABILITY ON GROUNDWATER RISE AND SEA LEVEL CHANGES." OpenSIUC, 2021. https://opensiuc.lib.siu.edu/dissertations/1908.

Full text
Abstract:
Climate variability is important to understand, as its effects on groundwater are more complex than those on surface water. The climate association between Groundwater Storage (GWS) and sea level changes has been missing from Intergovernmental Panel on Climate Change assessments, demanding a study of their linkage and responses. This dissertation is primarily focused on ongoing issues that previous literature has not addressed. Firstly, the study evaluated the effects of short-term persistence and abrupt shifts in sea level records along the US coast, utilizing popular robust statistical techniques. Secondly, it evaluated the variability in groundwater due to variability in hydroclimatic variables such as sea surface temperature (SST), precipitation, sea level, and terrestrial water storage; a lagged correlation was also analyzed to obtain their teleconnection patterns. Lastly, the relationship between groundwater rise and one of the most common modes of short-term climate variability, ENSO, was obtained. To accomplish these goals, the dissertation was subdivided into three research tasks. The first task attempted to answer a major question: is sea level change affected by the presence of autocorrelation and abrupt shifts? This question reflects the importance of trend and shift detection in sea level analysis. The primary factor driving global sea level rise is often related to climate change. The study investigates changes in sea level along the US coast: sea level records from 59 tide gauges were used to evaluate trend, shift, and persistence using non-parametric statistical tests. The Mann-Kendall and Pettitt's tests were utilized to estimate gradual trends and abrupt shifts, respectively. The study also assessed the presence of autocorrelation in sea level records and examined its effect on both trend and shift along the US coast. Short-term persistence was found at 57 stations, and the trend significance of most stations did not change at the 95% confidence level. A total of 25 stations showed an increasing shift between 1990 and 2000, evaluated from annual sea level records. Results from this task may contribute to understanding sea level variability across the contiguous US. The second task dealt with variability in the Hydrologic Unit Code 03 region, one of the major U.S. watersheds in the southeast, where most of the variability was found to be caused by SST variability in the Pacific and Atlantic Oceans. The SST regions were identified in order to assess their relationship with GWS, sea level, precipitation, and terrestrial water storage. Temporal and spatial variability were obtained utilizing the singular value decomposition statistical method. A gridded GWS anomaly from the Gravity Recovery and Climate Experiment (GRACE) was used to understand the relationship with sea level and SST. The negative pockets of SST were negatively linked with GWS. The identification of teleconnections with groundwater may substantiate temporal patterns of groundwater variability. The results confirmed that the SST regions exhibited El Niño Southern Oscillation patterns, resulting in GWS changes. Moreover, a positive correlation between GWS and sea level was observed on the east coast, in contrast to the southwestern United States. The findings highlight the importance of climate-driven changes in groundwater in explaining changes in sea level.

Therefore, SST could be a good predictor, potentially useful for the early assessment of variability and for groundwater forecasting. The primary goal of the third task is to better understand the effects of ENSO climate patterns on GWS in the South Atlantic-Gulf region. Groundwater issues are complex; most studies have focused on groundwater depletion, while few have emphasized "groundwater rise". This research develops an outline for assessing how climate patterns can affect groundwater fluctuation, which might lead to groundwater rise. The study assessed the effect of ENSO phases on the spatiotemporal variability of groundwater using Spearman rank correlation, and a significant positive correlation between ENSO and GWS was observed. An increasing trend in GWS was detected using the non-parametric Mann-Kendall test, with most grids located in Florida. A positive trend magnitude was also detected using Theil-Sen's slope method, with high magnitudes in the mid-Florida region. The highest GWS anomalies were observed at the peak of El Niño events, and the lowest GWS during La Niña events. Furthermore, most of the stations were above normal groundwater conditions. This study helps close the research gap concerning groundwater rise and ENSO.
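A compact sketch of the two trend tools named above, applied to a synthetic series (the dissertation applies them to GWS and tide-gauge records; the simple variance formula below ignores ties in the data):

```python
import numpy as np
from scipy import stats

def mann_kendall(series):
    """Non-parametric Mann-Kendall trend test (no autocorrelation correction)."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    # S statistic: sum of signs of all pairwise later-minus-earlier differences.
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    # Variance of S under the null hypothesis of no trend (ties ignored).
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))   # two-sided p-value
    return z, p

# Synthetic stand-in for an annual groundwater-storage anomaly series.
rng = np.random.default_rng(2)
gws = np.cumsum(rng.standard_normal(40)) + 0.05 * np.arange(40)

z, p = mann_kendall(gws)
slope, intercept, lo, hi = stats.theilslopes(gws, np.arange(40))
print(f"MK z = {z:.2f}, p = {p:.3f}, Theil-Sen slope = {slope:.3f} per year")
```

The Theil-Sen slope is the median of all pairwise slopes, which is why it down-weights outliers in the way the abstracts above rely on.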
APA, Harvard, Vancouver, ISO, and other styles
42

Burnett, William A. (William Albert). "Pretherapy Religious Value Information its Influence on Stated Perceptions of and Willingness to See a Counselor." Thesis, North Texas State University, 1986. https://digital.library.unt.edu/ark:/67531/metadc330693/.

Full text
Abstract:
This study sought to determine the influence of pretherapy religious value information upon potential clients' (a) perceptions of a counselor, (b) willingness to see a counselor and (c) confidence of counselor helpfulness. Two hundred and ten undergraduate college students volunteered for the study. Subjects were randomly assigned to one of three treatment groups and given varying amounts and types of written information about a counselor. Group 1 received just the counselor's credentials. Group 2 received the same information plus statements about the counselor's beliefs about counseling and his therapeutic approach. Group 3 received the same information as group 2 plus a statement of the counselor's religious values. Subjects then viewed a short video tape of the counselor in a counseling session. Results of statistical treatment of dependent variables indicated that subjects' perceptions of the counselor, willingness to see the counselor, and confidence of counselor helpfulness were not influenced by the written information, including the statement of religious values that the subjects received before viewing the video tape of the counselor. Implications and recommendations for further research are discussed.
APA, Harvard, Vancouver, ISO, and other styles
43

Bazargan-Harandi, Hamid. "Neural network based simulation of sea-state sequences." Thesis, Brunel University, 2006. http://bura.brunel.ac.uk/handle/2438/379.

Full text
Abstract:
The present PhD study, in its first part, uses artificial neural networks (ANNs), an optimization technique called simulated annealing, and statistics to simulate the significant wave height (Hs) and mean zero-up-crossing period (Tz) of 3-hourly sea-states at a location in the North East Pacific, using a proposed distribution called the hepta-parameter spline distribution for the conditional distribution of Hs or Tz given some inputs. Two different seven-network sets of ANNs for the simulation and prediction of Hs and Tz were trained on 20 years of observed Hs's and Tz's. The preceding Hs's and Tz's were the most important inputs given to the networks, but the starting day of the simulated period was also necessary; the code, however, replaced the day with the corresponding time and season. The networks were trained by a simulated annealing algorithm, and the outputs of the two sets of networks were used to calculate the parameters of the probability density function (pdf) of the proposed hepta-parameter distribution. After the seven parameters of the pdf are calculated from the network outputs, the Hs and Tz of the future sea-state are predicted by generating random numbers from the corresponding pdf. In another part of the thesis, vertical piles are studied with the goal of identifying the range of sea-states suitable for safe pile-driving operations. The pile configuration, including the non-linear foundation and the gap between the pile and the pile-sleeve shims, was modelled using the finite element analysis facilities within ABAQUS. Dynamic analyses of the system were performed for a sea-state characterized by Hs and Tz and modelled as a combination of several wave components. A table of safe and unsafe sea-states was generated by repeating the analysis for various sea-states. If the prediction for a particular sea-state is repeated N times, of which n prove to be safe, then the predicted sea-state is safe with a probability of 100(n/N)%. The last part of the thesis deals with Hs return values. The return value is a widely used measure of wave extremes, playing an important role in determining the design wave used in the design of maritime structures. In this part, the Hs return value was calculated, demonstrating another application of the above simulation of future 3-hourly Hs's. The maxima method for calculating return values was applied in a way that avoids the conventional need for unrealistic assumptions. The significant wave height return value was also calculated using the convolution concept, from a model presented by Anderson et al. (2001).
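The following toy Python sketch shows the general shape of simulated-annealing training of a small network, the optimisation approach this thesis uses in place of gradient descent. The architecture, cooling schedule and data are illustrative assumptions, not the thesis's seven-network configuration:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 2))        # stand-in inputs, e.g. previous Hs, Tz
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]      # synthetic target to fit

def predict(w, X):
    hidden = np.tanh(X @ w["w1"] + w["b1"])
    return hidden @ w["w2"] + w["b2"]

def loss(w):
    return float(np.mean((predict(w, X) - y) ** 2))

state = {"w1": rng.standard_normal((2, 8)), "b1": np.zeros(8),
         "w2": rng.standard_normal(8), "b2": 0.0}
state_loss, temp = loss(state), 1.0
for _ in range(5000):
    # Propose a random perturbation of every weight.
    cand = {k: v + 0.05 * rng.standard_normal(np.shape(v))
            for k, v in state.items()}
    cand_loss = loss(cand)
    # Always accept improvements; accept worse moves with a probability
    # that shrinks as the temperature cools (the annealing step).
    if cand_loss < state_loss or rng.random() < np.exp((state_loss - cand_loss) / temp):
        state, state_loss = cand, cand_loss
    temp *= 0.999                          # geometric cooling schedule
print(f"final MSE: {state_loss:.4f}")
```

The appeal of annealing here is that it needs only loss evaluations, so it works even when the objective (such as a likelihood over the hepta-parameter distribution) is awkward to differentiate.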
APA, Harvard, Vancouver, ISO, and other styles
44

Chung, Alexander Quoc Huy. "Emergency Preparedness and Response Planning: A Value-Based Approach to Preparing Coastal Communities for Sea Level Rise." Thesis, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31446.

Full text
Abstract:
Extreme weather events have become a common occurrence, and coastal communities are adversely affected by them. Studies have shown that the changing climate has increased the frequency and severity of storms, surging sea levels, and floods, as seen with Hurricane Sandy (2012) and Typhoon Haiyan (2013). The need to be proactive in preparing for these events, as a means of climate change adaptation and disaster risk reduction, is evident. This study focuses on the formal definition, measurement and simulation of coastal community preparedness and response to severe storm events. Preparedness and response require resources, emergency plans, informed decision making and the ability to cope with unexpected events. A suite of preparedness indicators is developed using a three-level hierarchical framework to construct a coastal community preparedness index that evaluates resources and plans. Informed decision making for emergency management personnel in the Emergency Operations Centre (EOC) is evaluated through a table-top exercise using a five-phase approach. Lastly, decision making under risk is introduced with a storm decision-making simulation model. The study is applied to the case of the breakwater failure in the coastal community of Little Anse, Cape Breton, Nova Scotia.
APA, Harvard, Vancouver, ISO, and other styles
45

Andersson, Elina, and Nicolai Pitz. "Ready, set, live! How Do European Consumers Perceive the Value of Live Video Shopping and What are Their Motivations to Engage in It? A Qualitative Study." Thesis, Umeå universitet, Företagsekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185201.

Full text
Abstract:
The phenomenon of Live Video Shopping (LVS) has gained increased attention in recent years. Since approximately 2017, the Chinese market has brought LVS to the attention of the public. In terms of overall market share, LVS was projected to account for roughly 20% of the overall e-commerce volume in China by 2021. Despite that, within Europe LVS has not seen such rapid growth and still seems to be a neglected domain, both in practice and in academia. Most academic research on LVS has originated in China and taken place in an Asian setting. From a European standpoint, these findings can only serve as a starting point for further research, due to cultural differences. Since most extant research on the topic of LVS adopted a quantitative research design, we could clearly identify a lack of qualitative studies in a European context. Therefore, our study aims to fill this research gap by investigating European consumers' perceived value of LVS and their motivations to engage in it. A qualitative research design is particularly suitable for the exploratory nature of our research. For the collection of data, we conducted three synchronous online focus groups through purposive sampling of European participants. In addition, and as a triangulation of our data sources, Pål Burman, the CEO of the LVS provider Zellma, was interviewed. With this we were able to include different perspectives on the topic; it also helped us increase the validity of the managerial implications. Using thematic analysis, we identified themes that the participants associated with the concept of LVS. The findings suggest that the perceived value of LVS for European consumers is not indisputable. Based on the theoretical concept of perceived value, LVS has to solve the trade-off between give and get components: the time committed to attending an LVS stream has to be compensated by certain benefits, such as discounts, enhanced product information, exclusivity of content, or inspiration. Using the theoretical framework of Uses and Gratifications Theory (UGT), we were able to divide the motivations of European consumers to engage in LVS into three types of gratifications: hedonic, utilitarian, and social. As opposed to prior Asian research on the topic, social gratifications played only a minor role for European consumers. Reasons for interaction between the viewer and the broadcaster were primarily product-related and utilitarian by nature; engagement for the sake of social motives could not be confirmed. Utilitarian gratifications desired by European consumers were mainly connected to obtaining enhanced product information, leading to more informed purchase decisions. Hedonic motivations were, for example, entertainment and inspiration. When connecting UGT to the Technology Acceptance Model (TAM), we were also able to identify consumers' approaches to LVS as a new technology. Crucial here were the perceived ease of use and perceived usefulness of LVS. It was shown that consumers require LVS to be as easy as, or easier than, ordinary e-commerce shopping already is. LVS would further be perceived as useful if it could provide viewers with something unique that older technologies cannot. As a new digital phenomenon, it is expected to serve as a complement to physical and online stores. In fact, LVS might not only serve as a sales platform, but also create long-lasting relationships between a brand and its consumers, with consumer engagement as a main desired outcome.
APA, Harvard, Vancouver, ISO, and other styles
46

Azevedo, Sayuri Unoki de. "Modelagem do public value scorecard como instrumento de avaliação de desempenho para uma organização do terceiro setor." reponame:Repositório Institucional da UFPR, 2013. http://hdl.handle.net/1884/30117.

Full text
Abstract:
Measuring the performance of managers and Third Sector entities contributes to a more feasible and productive environment, because it makes it possible to demonstrate their efficiency with respect to the social goals they aim to achieve. In non-profit entities, such as Third Sector organisations, the management of services requires measuring and identifying the indicators that are fundamental to managerial success. Moore (2003) proposes the Public Value Scorecard instead of the Balanced Scorecard (BSC) for the public sector and non-profit entities. Thus, the Public Value Scorecard used in the Third Sector adopts a set of public values, rather than being grounded in cost or profit information as the Balanced Scorecard is. This study proposes a performance evaluation system based on the Public Value Scorecard conceptual model for a non-profit social organisation. The research draws on Institutional Theory, specifically the first part of Burns and Scapens' (2000) institutionalisation process, called encoding. The study has a qualitative approach, and the research strategy is a case study. Grounded in the performance evaluation construct, the research is divided into three categories: (1) Social Mission; (2) Legitimacy and Support; and (3) Operational Capacity. The Social Mission category has non-financial indicators as its variable, while the other categories have both non-financial and financial indicators as variables. The methodological path consists of: (1) semi-structured interviews based on the Public Value Scorecard performance evaluation model with the organisation's President and Administrative Manager, followed by feedback on the answers, as well as semi-structured interviews with a representative of each area, including accounting, examined through discourse analysis to validate the indicators created; and (2) documentary research to collect performance evaluation data from the organisation's reports, with content analysis. The result of this research is the proposal of a performance evaluation system built for each of the three components of the Public Value Scorecard, which can be evaluated separately and with different weights assigned to each component (Social Mission; Legitimacy and Support; Operational Capacity).
APA, Harvard, Vancouver, ISO, and other styles
47

Ilves, Kristin. "Seaward Landward : Investigations on the archaeological source value of the landing site category in the Baltic Sea region." Doctoral thesis, Uppsala universitet, Arkeologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-172401.

Full text
Abstract:
There is a tendency in archaeology dealing with watercraft landing sites in a wider context to assume a direct relationship between sites in coastal and shore-bound areas and the practise of landing, without any deeper practical or theoretical exploration of the reality of any such relationship. This problem has its origins in the poor archaeological and conceptual definitions of watercraft landing sites obstructing any real understanding of the role of these sites in the maritime cultural landscape. Landing sites are taken for granted and they are undervalued as an archaeological source of explanation; notwithstanding, the concept of the landing site is readily used in archaeology in order to underpin archaeological interpretations on the maritime activities of past societies. In order to break away from the simplified understandings of past water-bound strategies based on the undefined concept of the landing site, this dissertation suggests a definition of watercraft landing sites in a wider social sense as water-bound contact zones; places of social interaction that can be archaeologically identified and investigated. This perspective integrates the understanding of the intentional character of human activity related to watercraft landing with the remaining archaeological traces. Archaeological definitions of landing sites that can be tested against the archaeological data are provided, and thereby, the dissertation contributes with the possibility to archaeologically evaluate and approach the social function of watercraft landing sites. This dissertation demonstrates that there can be an archaeology of landing sites.
APA, Harvard, Vancouver, ISO, and other styles
48

Kretzer, Ursula M. H. "The value of information : the case of pre-auction exploration and development exploration of North Sea oil resources." Thesis, University of Oxford, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260661.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Kaller, Emma, and Lina Söderqvist. "Implementing Agile : A Qualitative Case Study About Agile Project Management at SEB." Thesis, Uppsala universitet, Företagsekonomiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-415898.

Full text
Abstract:
Many organizations turn to agile methods and practices in order to remain competitive in today's rapidly changing environment and in the face of shifting customer demands. Researchers claim that agile practices, such as Agile Project Management (APM), are most successful when implemented throughout the entire organization, in an all-or-nothing approach. However, while many traditional organizations are attempting to adopt agile principles such as APM, few studies have examined to what extent agile methods and practices can succeed in traditional organizations. This research investigates how an APM-team at SEB functions in accordance with agile philosophy and, further, whether the legacy and traditional structures at SEB counteract the APM-team. The study is a single case study, investigating one APM-team at SEB through semi-structured interviews and organizational documents. To answer the research question, a model of analysis was derived to capture the important theoretical concepts. It was found that the investigated APM-team at SEB does not function fully in accordance with agile philosophies, and further that the traditional structures and legacy at SEB hinder the APM-team from working according to agile philosophies. It was also found that the APM-team experienced difficulties with the actual agile way of working, which could affect its ability to work in accordance with said practices. Further research in a broader context is needed to fully understand how traditional organizations can counteract agile initiatives.
APA, Harvard, Vancouver, ISO, and other styles
50

CASTELPIETRA, MARCO. "Metric, geometric and measure theoretic properties of nonsmooth value functions." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2007. http://hdl.handle.net/2108/202601.

Full text
Abstract:
The value function is a focal point in optimal control theory. It is a known fact that the value function can be nonsmooth even for very smooth data, so nonsmooth analysis is a useful tool for studying its regularity. Semiconcavity is a regularity property with close connections to nonsmooth analysis. Under appropriate assumptions, the value function is locally semiconcave. This property is connected with the interior sphere property of its level sets and their perimeters. In this thesis we introduce basic concepts of nonsmooth analysis and their connections with semiconcave functions and sets of finite perimeter. We describe control systems, and we introduce the basic properties of the minimum time function T(x) and of the value function V(x). Then, using the maximum principle, we extend some known results on the interior sphere property of the attainable sets A(t) to the nonautonomous case and to systems with nonconstant running cost L. This property allows us to obtain fine perimeter estimates for some classes of control systems. Finally, these regularity properties of the attainable sets can be extended to the level sets of the value function and, under some controllability assumptions, we also obtain local semiconcavity for V(x). Moreover, we study control systems with state constraints. In constrained systems many of the regularity properties of the value function are lost. In fact, when a trajectory of the control system touches the boundary of the constraint set Ω, a singularity effect occurs. This effect is visible even in the statement of the maximum principle: owing to the times at which a trajectory stays on ∂Ω, a measure boundary term (possibly discontinuous) appears. Thus the value function is no longer semiconcave, even for very simple control systems. We nevertheless recover Lipschitz continuity for the minimum time function and rewrite the constrained maximum principle with an explicit boundary term. We also obtain a kind of interior sphere property, and perimeter estimates for the attainable sets, for some classes of control systems.
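For orientation, the two central objects in this abstract have standard textbook definitions; the LaTeX sketch below states them under the usual assumptions and is not drawn from the thesis itself (the target-set notation \mathcal{K} and trajectory notation y_{x,u} are generic conventions, not necessarily the author's).

% Standard definitions assumed here; not taken from the thesis.
% Minimum time function: least time needed to steer x to a target set K.
\[
  T(x) = \inf \bigl\{ t \ge 0 : \exists\, u(\cdot) \text{ admissible},\ y_{x,u}(t) \in \mathcal{K} \bigr\}
\]
% Linear semiconcavity: u is semiconcave with constant C if, for all x and h,
\[
  u(x+h) + u(x-h) - 2u(x) \le C\,|h|^{2}.
\]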
APA, Harvard, Vancouver, ISO, and other styles