
Doctoral dissertations on the topic "Application scenarios"

Create an accurate reference in APA, MLA, Chicago, Harvard and many other styles


Consult the 50 best doctoral dissertations on the topic "Application scenarios".

The "Add to bibliography" button is available next to each work in the bibliography. Use it, and we will automatically create a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, if the relevant parameters are provided in the metadata.

Browse doctoral dissertations from a wide range of disciplines and compile an appropriate bibliography.

1

Muhammad, Jan. "Application Deployment in Relay Based Edge Computing Scenarios". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20049/.

Full text of the source
Abstract:
Fog computing is an emerging model intended to extend the capacity of the cloud computing platform by pushing Internet of Things (IoT) tasks to the edge of the network. Fog computing and edge computing are similar; both promise to maximize the computing capabilities inside the local network so that computational tasks can be accomplished there. Both paradigms help systems reduce their dependence on cloud-based platforms for data analysis, which usually leads to latency problems. The IoT brings an ever-widening presence of ubiquitous networked devices into business, private and public areas. These networked computing devices offer storage, computation and networking resources instead of simply acting as sensors. Because they are positioned at the edge of the network, these resources can be used to execute IoT applications in a distributed way; this idea is known as fog computing. We propose a corresponding fog computing architecture. The genetic algorithm, a heuristic optimization technique inspired by natural evolution, provides a general framework for solving complicated problems. In this project we treat the deployment of IoT applications over fog resources as an optimization problem. We propose three relay-based edge computing scenarios (centralized, distributed and random) so that application execution can achieve the maximum probability of coverage when these scenarios are used with a relay. Our numerical results illustrate that the relay-based edge computing scenarios increase the probability of coverage, which shows that our approach is effective.
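As a rough illustration of the deployment-as-optimisation idea described in this abstract, the sketch below applies a basic genetic algorithm to assign application modules to fog nodes. The node capacities, module demands and penalty-based fitness function are hypothetical and are not taken from the thesis.

```python
# Minimal genetic-algorithm sketch for placing IoT application modules on fog
# nodes. Fitness, capacities and demands are illustrative only.
import random

NODES = [4, 4, 8]          # hypothetical CPU capacity of three fog nodes
MODULES = [2, 1, 3, 2]     # hypothetical CPU demand of four application modules

def fitness(assign):
    """Reward placements that respect node capacities (higher is better)."""
    load = [0] * len(NODES)
    for module, node in enumerate(assign):
        load[node] += MODULES[module]
    overload = sum(max(0, l - c) for l, c in zip(load, NODES))
    return -overload  # 0 means a feasible deployment

def evolve(pop_size=30, generations=50, mutation_rate=0.1):
    pop = [[random.randrange(len(NODES)) for _ in MODULES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(MODULES))        # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:            # point mutation
                child[random.randrange(len(MODULES))] = random.randrange(len(NODES))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best assignment (module -> node):", best, "fitness:", fitness(best))
```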
APA, Harvard, Vancouver, ISO and other styles
2

Bushehri, Yousef. "Application of the functional scenarios method on alternative settings". Thesis, Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/55066.

Full text of the source
Abstract:
The goals of this study are to set up the framework for analyzing residential buildings using the functional scenarios method and to test the applicability of the method on large-scale projects. The metrics for the analysis are based on guidelines for designing spaces that promote healthy aging. In addition, the study provided an opportunity to develop and refine the method. The result of the analysis shows that the functional scenarios method is applicable to large-scale buildings as effectively as to small-scale buildings; design configurations can be extracted from the results of the analysis to inform future designs. The limitations of the analysis are due to the available resources. Opportunities for continued work include 1) developing standard ways of representing the results of the analysis; and 2) developing a systematic approach for extracting design configurations based on the research questions asked.
APA, Harvard, Vancouver, ISO and other styles
3

Vianney, Hakizamana Jean Marie. "Investigation of Services and Application Scenarios for Inter-Vehicle Communication". Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-995.

Full text of the source
Abstract:

In recent years, the number of vehicles has increased dramatically in Europe, the USA and Japan. This causes high traffic density and makes new safety features crucial for keeping traffic safe. Inter-vehicle communication offers solutions in this field, as cars can communicate with each other. To date, there is no dedicated technology standardised for inter-vehicle communication. This is the reason why car makers, researchers and academics have invested money and time in different research projects so that in the future they may come up with a common solution. Some technologies, like DSRC, CALM, IEEE 802.11 or infrared, are thought to be more reliable than others, according to different authors [9][23].

The technologies described above will help to improve road safety, and application scenarios like lane change, blind merge, or pre- and post-crash situations can be addressed. The position of each car is known through GPS; the speed, heading and other dynamic data of a car are known to all cars in the same vicinity.

In this thesis, a thorough investigation of services and applications related to inter-vehicular communication technology (i.e. car-to-car and car-to-infrastructure or vice versa) will be carried out. The emphasis will be on requirements on the communication system, sensors and user interface in order to make the technology more useful for future vehicle alert systems and to avoid as many of the mentioned scenarios as possible. A rear-end collision can be avoided if the driver is warned within 0 to 5 seconds of a potential accident.

APA, Harvard, Vancouver, ISO and other styles
4

Cranley, Nikki, and Diarmuid Corry. "Analysis and Application Scenarios for Telemetry Data Transmission and Synchronisation over Wireless LAN". International Foundation for Telemetering, 2008. http://hdl.handle.net/10150/606181.

Full text of the source
Abstract:
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California
The use of IEEE 802.11 Wireless LAN (WLAN) technology offers numerous advantages over wired Ethernet, including high bandwidth, device mobility, and the elimination of network wiring within the aircraft. Alongside these benefits, there are certain caveats concerning the ability of current IEEE 802.11 WLAN standards to carry time-sensitive and critical telemetry data. In this paper, the limitations of WLAN for real-time data transmission are experimentally investigated. In particular, it will be shown how the fundamental wireless access mechanism and contention impact the WLAN's ability to carry real-time data. Although the telemetry data rate is constant, the wireless access mechanism causes the WLAN throughput and per-packet delays to vary over time. Moreover, given the increased popularity of the IEEE 1588 Precision Time Protocol (PTP), the ability of the WLAN to provide time synchronisation is investigated. It is shown that asymmetric data loads on the uplink and downlink introduce synchronisation errors. To mitigate some of these issues, this paper will discuss how the QoS-enabling WLAN standard, IEEE 802.11e, can be used to provide differentiated services and prioritised transmission for critical data.
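To illustrate the synchronisation issue mentioned in this abstract, the sketch below computes the standard two-way PTP offset estimate and shows how asymmetric uplink/downlink delays bias it by half the delay difference. All timestamps and delay values are invented for the example; they are not measurements from the paper.

```python
# The standard PTP offset estimate assumes symmetric path delays, so any
# uplink/downlink asymmetry biases it by (d_ms - d_sm) / 2. Numbers are made up.
def ptp_offset(t1, t2, t3, t4):
    """Two-way offset estimate from Sync (t1->t2) and Delay_Req (t3->t4)."""
    return ((t2 - t1) - (t4 - t3)) / 2.0

true_offset = 0.0          # slave clock perfectly aligned (for illustration)
d_ms = 0.004               # master -> slave delay: 4 ms (loaded downlink)
d_sm = 0.001               # slave -> master delay: 1 ms (idle uplink)

t1 = 10.000000             # master sends Sync
t2 = t1 + d_ms + true_offset
t3 = t2 + 0.000500         # slave sends Delay_Req shortly after
t4 = t3 - true_offset + d_sm

estimate = ptp_offset(t1, t2, t3, t4)
print(f"estimated offset: {estimate * 1e3:.3f} ms")
print(f"expected bias (d_ms - d_sm)/2: {(d_ms - d_sm) / 2 * 1e3:.3f} ms")
```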
APA, Harvard, Vancouver, ISO and other styles
5

Hew, Aiwen. "Shaping strategic thinking among geographically dispersed stakeholders with the application of digital scenarios". Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/113984/1/Aiwen_Hew_Thesis.pdf.

Full text of the source
Abstract:
Scenarios have been used effectively over the years as a tool for helping to develop strategies in the face of uncertainty. This investigation involved delivering scenarios online with geographically distributed groups of transportation experts within the Asia-Pacific region and, by using a mixed-methods approach, examined their effectiveness as a way to challenge how participants think about the future. The results show that the perceived usefulness of the digitally delivered scenarios contributed to individual learning that led to a change in mental models, thereby supporting the intended benefits of scenarios to challenge management thinking during the strategy development process. The findings also provide an improved understanding of using digital technologies to engage a wide range of stakeholders in the scenario process who are geographically dispersed and relatively time-poor.
APA, Harvard, Vancouver, ISO and other styles
6

Qian, Kun [Verfasser], and Jana [Akademischer Betreuer] Dittmann. "Context modelling for IT security in selected application scenarios / Kun Qian. Betreuer: Jana Dittmann". Magdeburg : Universitätsbibliothek, 2015. http://d-nb.info/1066295344/34.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
7

Habbecke, Martin [Verfasser]. "Interactive image-based 3D reconstruction techniques for application scenarios at different scales / Martin Habbecke". Aachen : Hochschulbibliothek der Rheinisch-Westfälischen Technischen Hochschule Aachen, 2012. http://d-nb.info/1025882873/34.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
8

Mertens, Ralf. "The Role of Psychophysiology in Forensic Assessments: Deception Detection, ERPs and Virtual Reality Mock Crime Scenarios". Diss., Tucson, Ariz. : University of Arizona, 2006. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1470%5F1%5Fm.pdf&type=application/pdf.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
9

MOSTEFAI, NADIR. "Contribution a la modelisation d'un systeme cooperatif de minirobots mobiles autonomes. Application a des scenarios d'inspiration ethologique". Besançon, 1998. http://www.theses.fr/1998BESA2032.

Full text of the source
Abstract:
This work falls within the multi-agent paradigm, where the agent is, in our case, an autonomous mobile mini-robot. We postulate that, for agents to cooperate, at least one of them must be endowed with a minimal degree of cognitive capability; hence the definition of the ACRAMMA architecture, which is of a cognitive-reactive type. Animal ethology inspired the definition of cooperation scenarios between autonomous mobile mini-robots, in particular the recruitment mechanisms of ants, which underlie an impressive collective success achieved by apparently simple individuals. Different cooperation scenarios were modelled with object Petri nets (OPN). We propose a modular modelling approach in which several modules are defined whose combination generates a global model. To analyse a resulting OPN model, it is transformed into an equivalent ordinary Petri net while ensuring that certain rules are respected so as to preserve the original dynamics of the system. Simulations allowed us to collect results that can help choose the control parameters of the mini-robots upstream of the actual implementation phase. Several experimental trials were carried out to verify the plausibility of the obstacle-avoidance and path-memorisation algorithms, and finally single-file movement after recruitment through implicit communication. The results are numerous and shed light on the gaps that can exist between a simulation and a real case. This work opens perspectives both in modelling and simulation and on the experimental side. The applications are very promising, notably in fields such as home automation.
APA, Harvard, Vancouver, ISO and other styles
10

Hussein, Bilal. "Contribution des "scenarios patterns" et du raisonnement à partir de cas à la modélisation conceptuelle des applications logicielles : application à la gestion bancaire". Valenciennes, 2008. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/240be5bd-824f-4523-9519-5a9dbc055102.

Full text of the source
Abstract:
Situé dans le domaine de l’ingénierie des besoins, l’exposé concerne la contribution de la notion de « scénario pattern » et du Raisonnement à Partir de Cas (RàPC) à la construction automatisée du modèle de conception des applications logicielles complexes. L’objectif est d’optimiser la tâche amont de modélisation conceptuelle du développeur en lui offrant un outil d’aide à la génération automatique du modèle de conception de l’application au format UML. Dans ce but, un Environnement de Modélisation Intelligent (EMI), fondé sur la réutilisation des connaissances métier (« scénarii patterns ») et de leurs patterns de conception (design patterns), a été conçu, réalisé et validé. Les « scénarii patterns » sont décrits dans un langage semi formel muni d’un analyseur lexical, syntaxique et sémantique afin de permettre leur réutilisation. La réutilisation est basée sur le paradigme du RàPC. Ce mode de raisonnement analogique s’appuie sur la remémoration de problèmes passés résolus, les cas sources, pour résoudre de nouveaux problèmes, les problèmes cibles. Les couples {« scénario pattern », pattern de conception} structurent les cas sources capitalisés et réutilisés par l’EMI. Enfin, l’exploitation de l’EMI est illustrée par la réutilisation d’une base de cas sources issue du domaine bancaire
Located in the field of requirements engineering, the thesis concerns the contribution of the concept of "scenario pattern" and of Case-Based Reasoning (CBR) to the automated construction of a design model for complex software applications. The objective is to optimise the developer's upstream conceptual-modelling task by providing a software tool that automatically generates the application's design model in UML format. To this end, an Intelligent Modeling Environment (IME), based on the reuse of business knowledge ("scenario patterns") and their design patterns, was designed, built and validated. The scenario patterns are described in a semi-formal language equipped with a lexical, syntactic and semantic analyzer to enable their reuse. Reuse is based on the CBR paradigm. This analogical mode of reasoning relies on the retrieval of previously solved problems, the stored cases, to solve new problems, the target problems. (Scenario pattern, design pattern) pairs structure the stored cases that are capitalised and reused by the IME. Finally, the exploitation of the IME is illustrated by the reuse of a case base drawn from the banking domain.
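As a loose illustration of the retrieval step in the CBR cycle described above, the sketch below picks the stored (scenario pattern, design pattern) case most similar to a target problem using a weighted feature match. The features, weights and cases are hypothetical and far simpler than the semi-formal scenario-pattern language used in the thesis.

```python
# Minimal case-based-reasoning retrieval sketch: find the stored case whose
# scenario descriptors best match the target problem. All data are invented.
def similarity(target, source, weights):
    """Weighted ratio of matching descriptors between two scenario patterns."""
    score = sum(w for k, w in weights.items() if target.get(k) == source.get(k))
    return score / sum(weights.values())

case_base = [
    ({"domain": "banking", "actor": "customer", "goal": "transfer"}, "Command pattern"),
    ({"domain": "banking", "actor": "advisor", "goal": "report"}, "Observer pattern"),
    ({"domain": "insurance", "actor": "customer", "goal": "claim"}, "State pattern"),
]
weights = {"domain": 0.5, "actor": 0.2, "goal": 0.3}

target = {"domain": "banking", "actor": "customer", "goal": "report"}
best_case = max(case_base, key=lambda c: similarity(target, c[0], weights))
print("retrieved design pattern:", best_case[1],
      "(similarity %.2f)" % similarity(target, best_case[0], weights))
```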
APA, Harvard, Vancouver, ISO and other styles
11

Hiebel, Markus. "Development and application of a method to calculate optimal recycling rates with the help of cost-benefit scenarios /". Stuttgart : Fraunhofer-IRB-Verl, 2007. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=016033623&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
12

Lochmann, Gerrit [Verfasser], and Stefan [Gutachter] Müller. "Latency Reduction for Real-Time Rendering and its Application to VR Training Scenarios / Gerrit Lochmann ; Gutachter: Stefan Müller". Koblenz, 2021. http://d-nb.info/1234452707/34.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
13

Ciesielski, Anna [Verfasser], and Karen [Akademischer Betreuer] Pittel. "The calibration of economic growth : an application to carbon emission scenarios and to the DICE model / Anna Ciesielski ; Betreuer: Karen Pittel". München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2018. http://d-nb.info/1153712148/34.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
14

Ali, Syed Mahtab. "Climate change and water management impacts on land and water resources". Thesis, Curtin University, 2007. http://hdl.handle.net/20.500.11937/202.

Full text of the source
Abstract:
This study evaluated the impacts of shallow and deep open drains on groundwater levels and drain performance under varying climate scenarios and irrigation application rates. The MIKE SHE model used for this study is an advanced and fully spatially distributed hydrological model. Three drain depths, climates and irrigation application rates were considered. The drain depths included 0, 1 and 2 m deep drains. The annual rainfall and meteorological data were collected from the study area from 1976 to 2004 and analysed to identify the typical wet, average and dry years within the record. Similarly, the three irrigation application rates included 0, 10 and 16 ML/ha-annum. Altogether, twenty-seven scenarios (3 drain depths, 3 climates and 3 irrigation application rates) were simulated. The observed soil physical and hydrological data were used to calibrate and validate the model. The mean square error (R²) of the simulated and observed water table data varied from 0.7 to 0.87. Once validated, the MIKE SHE model was used to evaluate the effectiveness of 1 and 2 metre deep drains. The simulated water table depth, unsaturated zone deficit, exchange between unsaturated and saturated zones, drain outflow and overland flow were used to analyse their performance. The modelling results showed that waterlogging was extensive and prolonged during winter months under the no drainage and no irrigation scenario. In the wet climate scenario, the duration of waterlogging was longer than in the average climate scenario during the winter months. In the dry climate scenario no waterlogging occurred during the high rainfall period. The water table reached the soil surface during the winter season in the case of the wet and average climates. For the dry climate, the water table was about 0.9 metres below the soil surface during winter. One and 2 metre deep drains lowered the water table by up to 0.9 and 1.8 metres in winter for the wet climate when there was no irrigation application. One metre deep drains proved effective in controlling the water table during wet and average climates without application of irrigation water. One metre deep drains were more effective in controlling waterlogging in wet, average and dry years when the irrigation application rate was 10 ML/ha-annum. With 16 ML/ha-annum irrigation application, 1 metre deep drains did not perform as efficiently as 2 metre deep drains in controlling the water table and waterlogging. In the dry climate scenario, without irrigation application, 1 metre deep drains were not required as there was not enough flux from rainfall and irrigation to raise the water table and create waterlogging risks. Two metre deep drains lowered the water table to greater depths in the wet, average and dry climate scenarios respectively when no irrigation was applied. They managed the water table better in the wet and average climates with 10 and 16 ML/ha-annum irrigation application rates. Again, in the dry climate without irrigation application, 2 metre deep drains were not required as there was a minimal risk of waterlogging. The recharge to the groundwater table in the no drainage case was far greater than for the 1 and 2 metre deep drainage scenarios.
The recharge was higher in the case of 1 metre deep drains than 2 metre deep drains in the wet and average climates during the winter season. There was no recharge to groundwater with 1 and 2 metre deep drains under the dry climate scenarios and during the summer season without irrigation application, as there was not enough water to move from the ground surface to the unsaturated and saturated zones. When a 10 ML/ha-annum irrigation rate was applied during the wet, average and dry climates respectively, 1 metre deep drains provided enough drainage to manage the recharge into the groundwater table in the dry climate. For the wet and average climate scenarios, given a 10 ML/ha-annum irrigation application rate, 2 metre deep drains managed recharge better than 1 metre deep drains. Two metre deep drains with a 10 ML/ha-annum irrigation application rate led to excessive drainage of water from the saturated zone in the dry climate scenario. Two metre deep drains managed recharge better with a 16 ML/ha-annum irrigation application rate in the wet and average climate scenarios than the 1 metre deep drains. Two metre deep drains again led to excessive drainage of water from the saturated zone in the dry climate. In brief, 1 metre deep drains performed efficiently in the wet and average climate scenarios with and without a 10 ML/ha-annum irrigation application rate. One metre deep drains are not required for the dry climate scenario. Two metre deep drains performed efficiently in the wet and average climate scenarios with a 16 ML/ha-annum irrigation application rate. Two metre deep drains are not required for the dry climate scenario.
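The abstract above describes a 3 x 3 x 3 factorial design (drain depths, climates, irrigation rates) and quotes R² values obtained during calibration. The sketch below simply enumerates such a scenario grid and computes R² for a made-up pair of observed and simulated water-table series; it is illustrative only and does not reproduce any MIKE SHE output.

```python
# Enumerate the 3 x 3 x 3 scenario grid and compute a goodness-of-fit statistic
# for hypothetical observed vs. simulated water-table depths.
from itertools import product

drain_depths = [0, 1, 2]            # m
climates = ["wet", "average", "dry"]
irrigation = [0, 10, 16]            # ML/ha-annum

scenarios = list(product(drain_depths, climates, irrigation))
print(len(scenarios), "scenarios, e.g.", scenarios[0])   # 27 scenarios

def r_squared(observed, simulated):
    """Coefficient of determination between observed and simulated series."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - ss_res / ss_tot

obs = [0.2, 0.5, 0.9, 1.4, 1.1]     # hypothetical water-table depths (m)
sim = [0.3, 0.4, 1.0, 1.3, 1.0]
print("R^2 = %.2f" % r_squared(obs, sim))
```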
APA, Harvard, Vancouver, ISO and other styles
15

Bradley, James. "An application of the Point Surface Energy Balance Model in forcasting ablation of Arctic and Alpine glaciers under varying climate change scenarios /". Leeds : University of Leeds, School of Geography, 2006. http://0-www.leeds.ac.uk.wam.leeds.ac.uk/library/secure/counter/geogbsc/200506/bradley.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
16

Krätzer, Christian [Verfasser], and Jana [Akademischer Betreuer] Dittmann. "Statistical pattern recognition for audio-forensics : empirical investigations on the application scenarios audio steganalysis and microphone forensics / Christian Krätzer. Betreuer: Jana Dittmann". Magdeburg : Universitätsbibliothek, 2013. http://d-nb.info/1054420408/34.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
17

Vujic, Zoran [Verfasser]. "Improvement and verification of steam explosion models and codes for application to accident scenarios in light water reactors / vorgelegt von Zoran Vujic". Stuttgart : IKE, 2008. http://d-nb.info/997049855/34.

Full text of the source
APA, Harvard, Vancouver, ISO and other styles
18

Sionneau, Bernard. "Risque-pays et prospective internationale : theorie et application (la republique socialiste du viet nam)". Paris, CNAM, 2000. http://www.theses.fr/2000CNAM0347.

Full text of the source
Abstract:
The recent and poorly anticipated crises that shook emerging markets and threatened the international financial system between 1994 and 1998 prompted abundant criticism of country-risk professionals, accused of having failed to anticipate them. In these experts' defence, however, one observation is unavoidable: country-risk assessment has become a particularly complex activity. Situated, in a post-Cold-War context of deregulation and globalisation, at the junction of global and national realities, of private logics and sovereign interests, it requires handling numerous factors and actors that interact simultaneously; it also requires mobilising an impressive amount of quantitative and qualitative data concerning these forces. This research was therefore carried out in an attempt to complement traditional country-risk assessment methods by integrating the elements above, but also to dispel the grey areas surrounding a poorly understood professional activity. The general introduction provides semantic clarifications, situates the subject, and presents the objections and demands raised in relation to its treatment. The first part of the thesis reviews country-risk practices. The second part proposes a theory of country risk and a prospective (futures-studies) analysis method for assessing it. In the third part, the proposed single-country approach takes Vietnam as its field of application and validity test. The general conclusion of the thesis underlines the value of the theoretical and methodological approach in view of the evolution of country risk and of the results obtained from the application to the case of Vietnam.
APA, Harvard, Vancouver, ISO and other styles
19

Chen, Xiangtuo. "Statistical Learning Methodology to Leverage the Diversity of Environmental Scenarios in Crop Data : Application to the prediction of crop production at large-scale". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC055.

Full text of the source
Abstract:
La prévision du rendement des cultures est toujours une question primordiale. De nombreuses recherches ont été menées avec cet objectif en utilisant diverses méthodologies. Généralement, les méthodes peuvent être classées en approches basées sur les modèles et en approches basées sur les données.Les approches basées sur les modèles reposent sur la modélisation mécaniste des cultures. Ils décrivent la croissance des cultures en interaction avec leur environnement comme systèmes dynamiques. Comme ces modèles sont basés sur la description mécanique des processus biophysiques, ils impliquent potentiellement un grand nombre de variables d'état et de paramètres, dont l'estimation n'est pas simple. En particulier, les problèmes d'estimation des paramètres résultant sont généralement non linéaires et conduisent à des problèmes d'optimisation non-convexes dans un espace multidimensionnel. De plus, l’acquisition de données est très difficile et nécessite un travail expérimental lourd afin d’obtenir les données appropriées pour l’identification du modèle.D'un autre côté, les approches basées sur les données pour la prévision du rendement nécessitent des données provenant d'un grand nombre de scénarios environnementaux, mais les données sont plus simples à obtenir: (données climatiques et rendement final). Cependant, les perspectives de ce type de modèles se limitent principalement à la prévision de rendement.La première contribution originale de cette thèse consiste à proposer une méthodologie statistique pour calibrer les modèles mécanistes potentiellement complexes, lorsque des ensembles de données avec différents scénarios environnementaux et rendements sont disponibles à grande échelle. Nous l'appellerons Méthodologie d'estimation de paramètres multi-scénarios (MuScPE). Les principales étapes sont les suivantes:Premièrement, nous tirons parti des connaissances préalables sur les paramètres pour leur attribuer des distributions a priori pertinentes et effectuons une analyse de sensibilité globale sur les paramètres du modèle afin de sélectionner les paramètres les plus importants à estimer en priorité.Ensuite, nous mettons en œuvre une méthode d’optimisation efficace non convexe, l’optimisation parallèle des essaims de particules, pour rechercher l’estimateur MAP (maximum a posteriori) des paramètres;Enfin, nous choisissons la meilleure configuration en ce qui concerne le nombre de paramètres estimés par les critères de sélection de modèles. Il y a en effet un compromis à trouver entre d’un côté l'ajustement aux données, et d'un autre côté la variance du modèle et la complexité du problème d'optimisation à résoudre.Cette méthodologie est d'abord testée avec le modèle CORNFLO, un modèle de culture fonctionnel pour le maïs.La seconde contribution de la thèse est la comparaison de cette méthode basée sur un modèle mécaniste avec des méthodes classiques d'apprentissage statistique basées sur les données. 
Nous considérons deux classes de méthodes de régression: d'une part, les méthodes statistiques dérivées de la régression linéaire généralisée qui permettent de simplifier le modèle par réduction dimensionnelle (régressions Ridge et Lasso, Régression par composantes principales ou régression partielle des moindres carrés) et d'autre part les méthode de régression de machine learning basée sur des modèles non-linéaires ou des techniques de ré-échantillonnage comme la forêt aléatoire, le réseau de neurones et la régression SVM.Enfin, une régression pondérée est appliquée pour prédire la production à grande échelle. La production de blé tendre, une culture de grande importance économique en France, est prise en exemple. Les approches basées sur les modèles et sur les données ont également été comparées pour déterminer leur performance dans la réalisation de cet objectif, ce qui est finalement la troisième contribution de cette thèse
Crop yield prediction is a paramount issue in agriculture. Considerable research has been performed with this objective relying on various methodologies. Generally, they can be classified into model-driven approaches and data-driven approaches. The model-driven approaches are based on mechanistic crop modelling. They describe crop growth in interaction with the environment as dynamical systems. Since these models are based on the mechanistic description of biophysical processes, they potentially imply a large number of state variables and parameters, whose estimation is not straightforward. In particular, the resulting parameter estimation problems are typically non-linear, leading to non-convex optimisation problems in multi-dimensional space. Moreover, data acquisition is very challenging and necessitates heavy, specific experimental work in order to obtain the appropriate data for model identification. On the other hand, the data-driven approaches for yield prediction necessitate data from a large number of environmental scenarios, but data that are quite easy to obtain: climatic data and final yield. However, the perspectives of this type of model are mostly limited to prediction purposes. An original contribution of this thesis consists in proposing a statistical methodology for the parameterisation of potentially complex mechanistic models when datasets with different environmental scenarios and large-scale production records are available, named the Multi-scenario Parameter Estimation Methodology (MuScPE). The main steps are the following. First, we take advantage of prior knowledge on the parameters to assign them relevant prior distributions and perform a global sensitivity analysis of the model parameters to screen the most important ones, which will be estimated as a priority. Then, we implement an efficient non-convex optimisation method, parallel particle swarm optimisation, to search for the MAP (maximum a posteriori) estimator of the parameters. Finally, we choose the best configuration regarding the number of estimated parameters by model selection criteria: when more parameters are estimated, the calibrated model can, in theory, explain more of the output variance, but the optimisation also becomes more difficult, which leads to uncertainty in the calibration. This methodology is first tested with the CORNFLO model, a functional crop model for corn. A second contribution of the thesis is the comparison of this model-driven method with classical data-driven methods. For this purpose, according to how they manage model complexity, we consider two classes of regression methods: first, statistical methods derived from generalised linear regression that simplify the model by dimensionality reduction, such as Ridge and Lasso regression, principal components regression or partial least squares regression; second, machine learning regression based on non-linear models or re-sampling techniques, such as random forest, k-nearest neighbour, artificial neural network and support vector machine (SVM) regression. At last, a weighted regression is applied to large-scale yield prediction. Soft wheat production in France is taken as an example. Model-driven and data-driven approaches have also been compared for their performance in achieving this goal, which can be recognised as the third contribution of this thesis.
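To give a concrete flavour of the MAP search step in the MuScPE workflow summarised above, the sketch below runs a basic particle swarm optimisation on a toy two-parameter log-posterior. The objective function and swarm settings are stand-ins, not the CORNFLO calibration itself.

```python
# Minimal particle-swarm sketch of a MAP search. The "log-posterior" is a toy
# stand-in for a crop-model likelihood plus priors; constants are typical defaults.
import random

def log_posterior(theta):
    """Toy objective: a Gaussian bump centred on (2.0, -1.0)."""
    x, y = theta
    return -((x - 2.0) ** 2 + (y + 1.0) ** 2)

def pso(objective, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    best_i = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest_pos, gbest_val = pbest[best_i][:], pbest_val[best_i]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest_pos[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val > pbest_val[i]:                 # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:                # update global best
                    gbest_pos, gbest_val = pos[i][:], val
    return gbest_pos, gbest_val

if __name__ == "__main__":
    theta_map, value = pso(log_posterior)
    print("MAP estimate:", [round(t, 3) for t in theta_map],
          "log-posterior:", round(value, 4))
```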
APA, Harvard, Vancouver, ISO and other styles
20

Ali, Syed Mahtab. "Climate change and water management impacts on land and water resources". Curtin University of Technology, Faculty of Engineering and Computing, Dept. of Civil Engineering, 2007. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=18688.

Full text of the source
Abstract:
This study evaluated the impacts of shallow and deep open drains on groundwater levels and drain performance under varying climate scenarios and irrigation application rates. The MIKE SHE model used for this study is an advanced and fully spatially distributed hydrological model. Three drain depths, climates and irrigation application rates were considered. The drain depths included 0, 1 and 2 m deep drains. The annual rainfall and meteorological data were collected from the study area from 1976 to 2004 and analysed to identify the typical wet, average and dry years within the record. Similarly, the three irrigation application rates included 0, 10 and 16 ML/ha-annum. Altogether, twenty-seven scenarios (3 drain depths, 3 climates and 3 irrigation application rates) were simulated. The observed soil physical and hydrological data were used to calibrate and validate the model. The mean square error (R²) of the simulated and observed water table data varied from 0.7 to 0.87. Once validated, the MIKE SHE model was used to evaluate the effectiveness of 1 and 2 metre deep drains. The simulated water table depth, unsaturated zone deficit, exchange between unsaturated and saturated zones, drain outflow and overland flow were used to analyse their performance. The modelling results showed that waterlogging was extensive and prolonged during winter months under the no drainage and no irrigation scenario. In the wet climate scenario, the duration of waterlogging was longer than in the average climate scenario during the winter months. In the dry climate scenario no waterlogging occurred during the high rainfall period. The water table reached the soil surface during the winter season in the case of the wet and average climates. For the dry climate, the water table was about 0.9 metres below the soil surface during winter.
One and 2 metre deep drains lowered the water table by up to 0.9 and 1.8 metres in winter for the wet climate when there was no irrigation application. One metre deep drains proved effective in controlling the water table during wet and average climates without application of irrigation water. One metre deep drains were more effective in controlling waterlogging in wet, average and dry years when the irrigation application rate was 10 ML/ha-annum. With 16 ML/ha-annum irrigation application, 1 metre deep drains did not perform as efficiently as 2 metre deep drains in controlling the water table and waterlogging. In the dry climate scenario, without irrigation application, 1 metre deep drains were not required as there was not enough flux from rainfall and irrigation to raise the water table and create waterlogging risks. Two metre deep drains lowered the water table to greater depths in the wet, average and dry climate scenarios respectively when no irrigation was applied. They managed the water table better in the wet and average climates with 10 and 16 ML/ha-annum irrigation application rates. Again, in the dry climate without irrigation application, 2 metre deep drains were not required as there was a minimal risk of waterlogging. The recharge to the groundwater table in the no drainage case was far greater than for the 1 and 2 metre deep drainage scenarios. The recharge was higher in the case of 1 metre deep drains than 2 metre deep drains in the wet and average climates during the winter season.
There was no recharge to groundwater with 1 and 2 metre deep drains under the dry climate scenarios and during the summer season without irrigation application, as there was not enough water to move from the ground surface to the unsaturated and saturated zones. When a 10 ML/ha-annum irrigation rate was applied during the wet, average and dry climates respectively, 1 metre deep drains provided enough drainage to manage the recharge into the groundwater table in the dry climate. For the wet and average climate scenarios, given a 10 ML/ha-annum irrigation application rate, 2 metre deep drains managed recharge better than 1 metre deep drains. Two metre deep drains with a 10 ML/ha-annum irrigation application rate led to excessive drainage of water from the saturated zone in the dry climate scenario. Two metre deep drains managed recharge better with a 16 ML/ha-annum irrigation application rate in the wet and average climate scenarios than the 1 metre deep drains. Two metre deep drains again led to excessive drainage of water from the saturated zone in the dry climate. In brief, 1 metre deep drains performed efficiently in the wet and average climate scenarios with and without a 10 ML/ha-annum irrigation application rate. One metre deep drains are not required for the dry climate scenario. Two metre deep drains performed efficiently in the wet and average climate scenarios with a 16 ML/ha-annum irrigation application rate. Two metre deep drains are not required for the dry climate scenario.
APA, Harvard, Vancouver, ISO and other styles
21

Rana, Santu. "Multilinear analysis of face image ensembles". Thesis, Curtin University, 2010. http://hdl.handle.net/20.500.11937/1662.

Full text of the source
Abstract:
Machine-based face recognition is an important area of research that has attracted significant attention over the past few decades. Recently, multilinear models of face images have gained prominence as an alternative method for face recognition. Compared with linear techniques, multilinear models offer the advantage of richer, more complex models. Compared with kernel- and manifold-based non-linear techniques, the advantage lies in more intuitive and computationally frugal modelling. In this thesis, we present an in-depth analysis and understanding of different properties associated with multilinear analysis and propose three different face recognition algorithms and a unified framework addressing open issues in face recognition. We first propose a face recognition algorithm that primarily addresses a limitation of the existing multilinear-based algorithms: their inability to handle test images captured under unseen conditions. The algorithm is based on the construction of a new representational basis, multilinear eigenmodes, enabling representation and classification of faces under unseen conditions. Subsequently, we propose a second algorithm to address the high computational complexity of the first algorithm. We define a set of person-specific bases to represent person-specific images under all variations and, based on this, propose an efficient recognition algorithm. Next, we propose a face recognition framework based on an interpretation of multilinear analysis as a factor analysis paradigm. We then reformulate all the multilinear-based algorithms to link them to a single optimisation framework. A theoretical comparison of these algorithms is performed, revealing the fundamental differences between them and their applicability in different face recognition scenarios. Experiments comparing multilinear analysis based methods to the leading linear and non-linear techniques reveal the superiority of our second algorithm on both measures of recognition accuracy and test speed. Next, we address the issue of inadequate training samples that arises in many application scenarios. We introduce a novel "friendly-hostile" paradigm, in which we propose a mechanism to compensate for the low number of training samples of hostile people by learning the structure of face images from a large training set of friendly people. The formulation is built on a novel synthesis paradigm that exploits the unique factorisation properties of multilinear analysis. Experimental results show significant performance gains in comparison to conventional methods. We also discuss issues concerning unbalanced datasets, wherein some people may be under-represented relative to others in the training set. This results in a different a priori bias per class, affecting conventional recognition algorithms. Based on theory and experiments, we demonstrate that our algorithm is not affected by any such imbalance in bias and produces consistent performance in all situations.
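As background to the multilinear modelling discussed above, the sketch below computes a basic higher-order SVD (Tucker-style) decomposition of a small random tensor standing in for a stack of face images indexed by person, condition and pixel. It shows only the generic factorisation, not the specific algorithms proposed in the thesis.

```python
# Basic higher-order SVD: factorise a (people x conditions x pixels) tensor into
# mode matrices plus a core tensor. Random data stand in for face images.
import numpy as np

rng = np.random.default_rng(0)
tensor = rng.standard_normal((5, 3, 64))   # 5 people, 3 conditions, 64 pixels

def unfold(t, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

# Mode matrices: left singular vectors of each unfolding.
factors = [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0]
           for m in range(tensor.ndim)]

# Core tensor: project the data onto all mode matrices.
core = tensor
for m, U in enumerate(factors):
    core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)

# Reconstruction check: multiplying back recovers the original tensor.
recon = core
for m, U in enumerate(factors):
    recon = np.moveaxis(np.tensordot(U, np.moveaxis(recon, m, 0), axes=1), 0, m)
print("reconstruction error:", np.linalg.norm(recon - tensor))
```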
APA, Harvard, Vancouver, ISO and other styles
22

Kabaale, Edward. "A Formal Framework for Software Process Modelling and Verification: A Behavior Engineering and Ontology Approach". Thesis, Griffith University, 2018. http://hdl.handle.net/10072/384794.

Full text of the source
Abstract:
The main objective of software development is to produce software that addresses user needs appropriately. As such, success in its realisation is highly dependent on an explicit software process that aims to describe precisely and unambiguously all activities and tasks undertaken to develop the required software. Software Engineering (SE) standards and reference models provide a set of process life cycle activities and best practices to guide the engineering and production of software products with the right quality within budget and schedule. However, these are usually diverse and described in natural language. While specifying software processes in natural language is straightforward and guarantees a wider understanding, it is difficult to consistently, systematically and automatically monitor and verify whether they have been fully implemented and adhered to in a given software project. Besides, the process of defining and documenting the necessary evidence to comply with SE standard requirements is often subjective, manual and time consuming. With the quick development and diversity of SE standards and reference models for different domains, systematic methods of modelling and verification of software processes are crucial for process analysis, understanding and evolution. Although there is substantial literature on software process formalisation, the existing approaches have some limitations that we address in this thesis, namely: (i) there is hardly any systematic and repeatable approach for the translation of natural language software processes to formal representations; (ii) the current approaches do not accommodate diverse software processes in a unified way during the formalisation process; (iii) the current approaches do not provide a comprehensive formal framework for reasoning on the consistency, completeness and verification of software process descriptions. Following a design science research methodology, we develop and evaluate a framework for a systematic, repeatable and consistent approach to the formalisation and automated verification of software processes that are usually written in natural language and published in formal documents such as SE standards and reference models. Our approach utilises the potential offered by a synergistic representation model based on a graphical and logical formalism. While logical approaches offer mathematically rigorous specification and verification, graphical approaches, on the other hand, encapsulate the use of logical techniques within familiar concepts and notions of the domain, making the approach simple and intuitive for stakeholders to use in software development. Our contribution has four main aspects: 1) a customised metamodel to underpin systematic and faithful formalisation of heterogeneous software processes from diverse SE standards and reference process models; 2) a formalisation approach to consistently and repeatedly formalise software processes; 3) a Software Process Knowledge Base that integrates the various semantic process models that form the nucleus of our approach; and 4) a set of application scenarios that demonstrate and evaluate the quality, utility and efficacy of our approach.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Info & Comm Tech
Science, Environment, Engineering and Technology
Full Text
APA, Harvard, Vancouver, ISO and other styles
23

Stamford, Laurence James. "Life cycle sustainability assessment of electricity generation : a methodology and an application in the UK context". Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/life-cycle-sustainability-assessment-of-electricity-generation-a-methodology-and-an-application-in-the-uk-context(e4d76ed6-7247-4435-81db-505895067dd0).html.

Full text of the source
Abstract:
This research has developed a novel sustainability assessment framework for electricity technologies and scenarios, taking into account techno-economic, environmental and social aspects. The methodology uses a life cycle approach and considers relevant sustainability impacts along the supply chain. The framework is generic and applicable to a range of electricity technologies and scenarios. To test the methodology, sustainability assessments have been carried out first for different technologies and then for a range of possible future electricity scenarios for the UK. The electricity options considered either contribute significantly to the current UK electricity mix or will play a greater role in the future; these are nuclear power (PWR), natural gas (CCGT), wind (offshore), solar (residential PV) and coal power (subcritical pulverised). The results show that no one technology is superior and that certain trade-offs must be made. For example, nuclear and offshore wind power have the lowest life cycle environmental impacts, except for freshwater eco-toxicity, for which gas is the best option; coal and gas are the cheapest options, but both have high global warming potential; PV has relatively low global warming potential but high cost, ozone layer depletion and resource depletion. Nuclear, wind and PV increase certain aspects of energy security but introduce potential grid management problems; nuclear also poses complex risk and intergenerational questions. Five potential future electricity mixes have also been examined within three overarching scenarios, spanning 2020 to 2070, and compared to the present-day UK grid. The scenarios have been guided by three different approaches to climate change: one future in which little action is taken to reduce CO2 emissions (‘65%’), one in which electricity decarbonises by 80% by 2050 in line with the UK’s CO2 reduction target (‘80%’), and one in which electricity is virtually decarbonised (at the point of generation) by 2050, in line with current policy (‘100%’). In order to examine the sustainability implications of these scenarios, the assessment results from the present-day comparison were projected forward to describe each technology in future time periods. Additional data were compiled so that coal with carbon capture and storage (CCS) – a potentially key future technology – could be included. The results of the scenario analyses show that the cost of generating electricity is likely to increase and become more capital-intensive. However, the lower-carbon scenarios are also at least 87% less sensitive to fuel price volatility. Higher penetration of nuclear and renewables generally leads to better environmental performance and more employment, but creates unknown energy storage costs and, in the case of nuclear power and coal CCS, the production of long-lived waste places a burden of management and risk on future generations. Therefore, the choice of the ‘most sustainable’ electricity options now and in the future will depend crucially on the importance placed on different sustainability impacts; this should be acknowledged in future policy and decision making. A good compromise requires strategic government action; to provide guidance, specific recommendations are made for future government policy.
APA, Harvard, Vancouver, ISO and other styles
24

Pehcevski, Jovan, and jovanp@cs rmit edu au. "Evaluation of Effective XML Information Retrieval". RMIT University. Computer Science and Information Technology, 2007. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080104.142709.

Full text of the source
Abstract:
XML is being adopted as a common storage format in scientific data repositories, digital libraries, and on the World Wide Web. Accordingly, there is a need for content-oriented XML retrieval systems that can efficiently and effectively store, search and retrieve information from XML document collections. Unlike traditional information retrieval systems where whole documents are usually indexed and retrieved as information units, XML retrieval systems typically index and retrieve document components of varying granularity. To evaluate the effectiveness of such systems, test collections where relevance assessments are provided according to an XML-specific definition of relevance are necessary. Such test collections have been built during four rounds of the INitiative for the Evaluation of XML Retrieval (INEX). There are many different approaches to XML retrieval; most approaches either extend full-text information retrieval systems to handle XML retrieval, or use database technologies that incorporate existing XML standards to handle both XML presentation and retrieval. We present a hybrid approach to XML retrieval that combines text information retrieval features with XML-specific features found in a native XML database. Results from our experiments on the INEX 2003 and 2004 test collections demonstrate the usefulness of applying our hybrid approach to different XML retrieval tasks. A realistic definition of relevance is necessary for meaningful comparison of alternative XML retrieval approaches. The three relevance definitions used by INEX since 2002 comprise two relevance dimensions, each based on topical relevance. We perform an extensive analysis of the two INEX 2004 and 2005 relevance definitions, and show that assessors and users find them difficult to understand. We propose a new definition of relevance for XML retrieval, and demonstrate that a relevance scale based on this definition is useful for XML retrieval experiments. Finding the appropriate approach to evaluate XML retrieval effectiveness is the subject of ongoing debate within the XML information retrieval research community. We present an overview of the evaluation methodologies implemented in the current INEX metrics, which reveals that the metrics follow different assumptions and measure different XML retrieval behaviours. We propose a new evaluation metric for XML retrieval and conduct an extensive analysis of the retrieval performance of simulated runs to show what is measured. We compare the evaluation behaviour obtained with the new metric to the behaviours obtained with two of the official INEX 2005 metrics, and demonstrate that the new metric can be used to reliably evaluate XML retrieval effectiveness. To analyse the effectiveness of XML retrieval in different application scenarios, we use evaluation measures in our new metric to investigate the behaviour of XML retrieval approaches under the following two scenarios: the ad-hoc retrieval scenario, exploring the activities carried out as part of the INEX 2005 Ad-hoc track; and the multimedia retrieval scenario, exploring the activities carried out as part of the INEX 2005 Multimedia track. For both application scenarios we show that, although different values for retrieval parameters are needed to achieve the optimal performance, the desired textual or multimedia information can be effectively located using a combination of XML retrieval approaches.
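To illustrate the kind of ranked-list effectiveness measurement discussed in this abstract, the sketch below computes plain average precision over an invented run and set of relevance judgements. This is a generic information retrieval measure, not the INEX metrics or the new metric proposed in the thesis.

```python
# Generic ranked-retrieval evaluation sketch: average precision over a ranked
# list of XML elements against a set of judged-relevant elements (all invented).
def average_precision(ranked_ids, relevant_ids):
    """Mean of the precision values at the ranks where relevant items appear."""
    hits, precisions = 0, []
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

run = ["e12", "e7", "e3", "e40", "e5"]       # ranked XML elements returned
qrels = {"e7", "e5", "e9"}                   # elements judged relevant
print("AP = %.3f" % average_precision(run, qrels))
```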
APA, Harvard, Vancouver, ISO and other styles
25

Mejri, Lassaâd. "Une démarche basée sur l'apprentissage automatique pour l'aide a l'évaluation et à la génération de scenarios d'accidents : application à l'analyse de sécurité des systèmes de transport automatisés". Valenciennes, 1995. https://ged.uphf.fr/nuxeo/site/esupversions/25d8a55d-404e-4c70-9361-b6f2a051d706.

Full text of the source
Abstract:
To validate a new automated transport system (ATS), experts evaluate the accident scenarios proposed by the system's manufacturer. To improve the exhaustiveness of the safety analysis, these experts imagine new scenarios not taken into account by the manufacturer, relying on structural and functional analysis methods of the ATS. Our approach consists in exploiting historical scenarios relating to ATSs that have already been validated. To do so, we use machine learning techniques to organise the raw knowledge (scenarios) and to produce structured knowledge in the form of rules. This dissertation proposes an approach based on multi-strategy learning to support the evaluation and generation of new accident scenarios. Multi-strategy learning integrates various reasoning strategies (induction, deduction, analogy, etc.) to solve the problem at hand. To test the chosen approach, a prototype was developed. It demonstrated the feasibility of the proposed approach. It also highlighted that learning is necessary but not sufficient to characterise the experts' approach. It is therefore judicious to combine learning mechanisms with knowledge-acquisition mechanisms in order to identify the experts' reasoning.
APA, Harvard, Vancouver, ISO and other styles
26

Eriksson, Linda, and Lina Simme. "The Application of Futures Studies in Innovation Processes : Scenario methods as a tool to facilitate flexibility and enable future resilient products". Thesis, Linköpings universitet, Institutionen för ekonomisk och industriell utveckling, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-167822.

Full text of the source
Abstract:
Companies are pressured by dynamic markets and the increase in innovation speed, technology change and the shortening of product life cycles. They need to attend to customer demands and ever-changing environmental conditions, policies and regulations set by governments and institutions in order to stay relevant on the market and be allowed to operate. Innovation has therefore become a must, and innovation processes are a central part of companies' operations. Futures studies is presented as a systematic way of studying the future that can contribute to a better understanding of the needed direction of innovations. The aim of the study is to investigate how futures studies can be embodied in the innovation process of manufacturing companies in the rail and road vehicle industry. The structure of an innovation process within this industry is summarized as consisting of three phases: the fuzzy front end, development and maintenance. The innovation process is further divided by the components of the product, and throughout the process there are decision points at which projects are evaluated. The organizational aspects considered to have the most influence on the innovation process concern the company environment and internal knowledge sharing. Futures studies are performed to a moderate extent at different stages of the innovation process and levels of the organization, mainly at the corporate level and in the fuzzy front end. The people involved in these activities are solely employees of the company in question, and the main issue found regarding futures studies activities is that the results of the foresight are not communicated properly across the company. Two ways in which futures studies can be embodied in the innovation process are identified: to create more high-quality ideas and to tune the product during the process according to the future market, with a third way ensuring alignment with the corporate level. A recommendation is presented consisting of a scenario workshop which enables futures studies to be embodied in the innovation process of manufacturing companies. Activities and pointers for before, during and after the workshop are presented. The results of the workshop will further be embodied in the innovation process in three different ways: at the beginning, alongside, and as a basis for the corporate strategy.
APA, Harvard, Vancouver, ISO and other styles
27

Gac, Pierre. "Conception et évaluation d’environnements virtuels pédagogiques : application à la formation professionnelle". Thesis, Angers, 2020. http://www.theses.fr/2020ANGE0016.

Full text of the source
Abstract:
Le système éducatif français revendique une transition vers le numérique dans le cadre des apprentissages. L’utilisation du numérique est un bon moyen pour les acteurs de la formation professionnelle de promouvoir et de proposer de nouvelles méthodes pédagogiques. En effet, ces filières souffrent d’un manque de moyens pour proposer des situations d’apprentissages professionnelles concrètes aux apprenants. Grâce aux avancées techniques, la Réalité Virtuelle (RV) est désormais accessible au plus grand nombre; ces usages permettent de proposer des simulations d’apprentissages multiples et adaptées aux situations professionnelles. Cette thèse porte sur la conception et l’évaluation d’environnements virtuels (EV) dans un contexte de formation professionnelle. La création d’un environnement de formation utilisable en conditions réelles dans les lycées professionnels demande des compétences variées, comme en pédagogie, à propos du métier ciblé ou encore sur le profil des élèves et des enseignants. Nous développons dans ce manuscrit différentes approches permettant de transcrire une situation professionnelle emblématique en un support de formation virtuel pertinent. Pour le développeur, la complexité réside dans le fait que la simulation doit correspondre à la fois aux élèves et aux enseignants. L’industrialisation de la conception d’EVs doit passer d’une part, par une optimisation des procédés de création informatique et d’autre part la considération des attentes pédagogiques des enseignants. Nous proposons une approche de développement informatique d’outils génériques permettant d’adapter rapidement la création de scénarios virtuels dans plusieurs domaines. Parmi ces outils, nous proposons des interactions génériques multisupports ou un assistant virtuel qui fait office de facilitateur dans le monde virtuel. Également, le profil des élèves est à prendre en compte lors de la conception des EVs, notamment vis-à-vis du choix des interactions en procédant à des étapes d’étayages permettant d’enlever les éléments peu pertinents sur le plan de la didactique au profit de tâches pédagogiques plus enrichissantes. Nous développons dans cette thèse différentes approches d’exploitation de la technologie ainsi que des techniques d’évaluation à mettre en place avec les élèves. Ces discussions sont appuyées par des expérimentations menées dans les lycées afin de valider les différentes approches techniques et pédagogiques utilisées
The French educational system is committed to a transition towards digital technology in education. Using digital tools is a good way for vocational training professionals to promote and propose new pedagogical methods. Indeed, vocational training suffers from a lack of means to offer learners concrete professional training situations. Thanks to technological progress, Virtual Reality (VR) is now accessible to the general public, and it allows multiple pedagogical simulations tailored to professional situations. This thesis is about the design and evaluation of virtual environments (VEs) in a vocational training context. Designing a training environment usable in real conditions in vocational schools requires a wide range of skills, such as pedagogy, knowledge of the targeted trade, and familiarity with the profiles of students and teachers. In this manuscript we develop several approaches for transcribing an emblematic professional situation into a relevant virtual training scenario. For the developer, the complexity lies in the fact that the simulation must suit both students and teachers. We propose a generic programming approach, with generic tools that allow the design of new virtual training scenarios to be adapted quickly across several domains. Among these tools, we propose multi-device generic interactions and a virtual assistant that acts as a facilitator in the virtual world. In addition, students' profiles need to be considered when designing VEs, especially regarding the choice of interactions, through scaffolding steps that remove didactically irrelevant elements in favor of more enriching pedagogical tasks. We also develop approaches for exploiting the technology in a real training context, as well as evaluation techniques to be set up with students. These discussions are supported by experiments conducted in vocational schools to validate the technical and pedagogical approaches used
Style APA, Harvard, Vancouver, ISO itp.
28

Wiedemann, Anja. "Development and application of a cellular system simulator for an evaluation of signaling performance and efficiency accepting the challenges of IP-based UMTS radio access network evolution scenarios /". [S.l.] : [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=984796339.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
29

Rocklin, Delphine. "Des modèles et des indicateurs pour évaluer la performance d'aires marines protégées pour la gestion des zones côtières. Application à la Réserve Naturelle des Bouches de Bonifacio (Corse)". Thesis, Montpellier 2, 2010. http://www.theses.fr/2010MON20145/document.

Pełny tekst źródła
Streszczenie:
Les dernières décennies ont été marquées par la surexploitation de nombreux stocks de ressources marines. Pour limiter ce déclin et restaurer les communautés impactées, l'instauration de mesures de gestion s'est révélée nécessaire. Les Aires Marines Protégées (AMP), initialement développées pour protéger les habitats remarquables et la biodiversité associée, sont de plus en plus utilisées en tant qu'outil de gestion spatialisée des activités de pêche. L'objectif de cette thèse était d'évaluer à l'aide d'indicateurs et de modèles prédictifs (i) l'impact de la mise en place de la Réserve Naturelle des Bouches de Bonifacio (Corse) sur les communautés de poissons et (ii) les bénéfices de ce type de gestion pour l'activité de pêche artisanale locale. Nous avons ensuite développé un modèle et testé (iii) les scénarios de gestion les mieux adaptés au maintien de la ressource de langouste rouge, en déclin dans la réserve, et à son exploitation durable. Les données de captures de la pêche artisanale du sud de la Corse nous ont permis de mettre en évidence l'impact de la pêche plaisancière sur la structure et la biomasse des communautés exploitées. Bien que la forte diminution de l'effort de pêche ait pu contribuer à une augmentation des captures par unité d'effort, l'analyse sous forme de groupes de réponse nous a permis de mettre en évidence une augmentation différenciée des captures de la pêche artisanale en fonction de l'intérêt des espèces pour la chasse au harpon. De plus, nous avons remarqué que la réglementation de la RNBB ne semble pas suffire à la protection d'une espèce emblématique, la langouste rouge. Les indicateurs issus de l'outil de modélisation ISIS-Fish nous ont permis de constater la nécessité d'une restriction plus importante de l'accès à cette ressource dans l'objectif d'une pêche durable
These last decades have been characterized by a great development of fishing techniques, contributing to the overexploitation of numerous marine fish stocks. In order to limit this collapse and to restore impacted communities, the implementation of management measures became necessary. Marine Protected Areas (MPAs), initially developed to protect remarkable habitats and their associated biodiversity, are increasingly used as a tool for the spatial management of fishing activities, through adult export and/or larval migration from protected zones to surrounding fisheries. The aim of this PhD was to evaluate, using indicators and predictive models, (i) the impact of the implementation of the Bonifacio Strait Natural Reserve (Corsica) on fish communities and (ii) the benefits of such management measures for the local artisanal fishery. We then developed a model and tested (iii) management scenarios for bringing the spiny lobster resource, in decline in the reserve, back to sustainable exploitation. Artisanal fishery catch data from southern Corsica allowed us to highlight the indirect impact of recreational fishing on the structure and biomass of exploited fish communities. Whereas a decrease in fishing effort may contribute to increasing catches per unit effort (CPUE), the analysis using response groups revealed a differentiated increase of artisanal fishery catches depending on the interest of species for spearfishing. Moreover, we noticed that even if the BSNR legislation benefits many species, it is not sufficient for the emblematic red spiny lobster to recover. Indicators derived from the ISIS-Fish model showed that stronger restrictions on access to this resource are necessary to achieve sustainable fisheries
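As a rough illustration of the catch-per-unit-effort (CPUE) comparison by response group described above, here is a minimal sketch in Python; the catch records, group labels and numbers are invented for the example and are not data from the thesis.

```python
from collections import defaultdict

# Hypothetical catch records: (response_group, period, catch_kg, effort_net_days)
records = [
    ("spearfishing-target", "before", 12.0, 4.0),
    ("spearfishing-target", "after", 30.0, 5.0),
    ("non-target", "before", 8.0, 4.0),
    ("non-target", "after", 10.0, 5.0),
]

# Aggregate catch and effort per (group, period), then derive CPUE = catch / effort.
totals = defaultdict(lambda: [0.0, 0.0])
for group, period, catch, effort in records:
    totals[(group, period)][0] += catch
    totals[(group, period)][1] += effort

for (group, period), (catch, effort) in sorted(totals.items()):
    print(f"{group:20s} {period:6s} CPUE = {catch / effort:.2f} kg per net-day")
```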
Style APA, Harvard, Vancouver, ISO itp.
30

Stach, Robert [Verfasser]. "Mid infrared sensor technologies for environmental and occupational health and safety : advancement and application of infrared spectroscopies for monitoring fuel residues in marine ecosystems,and inhalable particles in mining scenarios / Robert Stach". Ulm : Universität Ulm, 2021. http://d-nb.info/1240314388/34.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
31

Labbas, Mériem. "Modélisation hydrologique de bassins versants périurbains et influence de l'occupation du sol et de la gestion des eaux pluviales : Application au bassin de l'Yzeron (130km2)". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAU006/document.

Pełny tekst źródła
Streszczenie:
Les bassins périurbains, constitués de zones urbaines, agricoles et naturelles, sont des bassins versants complexes à étudier. L'augmentation des surfaces imperméables et les modifications des chemins d'écoulement par les réseaux d'assainissement influencent leur hydrologie. Ces modifications sont notamment liées aux choix de modes de gestion des eaux pluviales : réseaux unitaires, réseaux séparatifs, infiltration à la parcelle, etc. La modélisation hydrologique spatialisée, qui rend compte de l'hétérogénéité des bassins versants, est un outil permettant d'évaluer les différents enjeux en termes d'occupation du sol et de gestion des eaux pluviales. Cependant, peu de modèles ont été construits pour être appliqués aux bassins périurbains, à l'échelle des gestionnaires (~ 100 km2) et pour des simulations sur de longues périodes (> 10 ans). La modélisation hydrologique doit donc être adaptée afin de mieux capter les spécificités des milieux périurbains telles que l'hétérogénéité de l'occupation du sol et la connexion de certaines zones urbaines à un réseau d'assainissement. Ce travail de thèse a consisté à développer un nouvel outil de modélisation adapté à ces problématiques : le modèle distribué horaire J2000P. Ce modèle simule les processus hydrologiques en milieux ruraux et urbains et prend en compte les réseaux d'assainissement, les connexions à ces réseaux et les déversements des déversoirs d'orage (DO). Le modèle a été mis en oeuvre sur le bassin périurbain de l'Yzeron (~ 130 km2), situé à l'ouest de Lyon. L'évaluation, effectuée à l'exutoire de différents sous-bassins de tailles et d'occupations du sol différentes, montre des résultats très encourageants. Le modèle a tendance à sous-estimer le débit mais la dynamique des pics est bien représentée tout comme le déversement des DO. Suite aux résultats de l'évaluation, une analyse de sensibilité « pas à pas » du modèle a été réalisée et différentes hypothèses de fonctionnement du bassin ont été formulées pour améliorer la compréhension du modèle et des processus représentés. Le modèle a ensuite été utilisé pour tester l'impact de modifications de l'occupation des sols et/ou de la gestion des eaux pluviales sur la réponse hydrologique. Le modèle montre que la gestion de l'occupation du sol a moins d'influence sur l'hydrologie du bassin que la gestion du réseau d'assainissement
Growing urbanization and related anthropogenic processes have a high potential to influence hydrological process dynamics. Typical consequences are an increase of surface imperviousness and modifications of water flow paths due to artificial channels and barriers (combined and separated systems, sewer overflow devices, roads, ditches, etc.). Periurban catchments, at the edge of large cities, are especially affected by fast anthropogenic modifications. They usually consist of a combination of natural areas, rural areas with dispersed settlements and urban areas mostly covered by built zones and spots of natural surfaces. Spatialized hydrological modeling tools, simulating the entire hydrological cycle and able to take into account the important heterogeneity of periurban watersheds, can be used to assess the impact of stormwater management practices on their hydrology. We propose a new modeling tool for these issues: the hourly distributed J2000P model. This model simulates the hydrological processes in rural and urban areas and takes into account the sewerage networks, connections to these networks and overflows from sewer overflow devices (SOD). The application site is the Yzeron catchment (~ 130 km2), located in the west of Lyon. The evaluation, conducted at the outlet of different sub-basins with different sizes and land use, shows very encouraging results. The model tends to underestimate the discharge but the dynamics of the peaks and the SOD overflows are well simulated. The model is also used to test the impact of changes in land use and/or stormwater management on the hydrological response. The results show that land use management has less impact on the hydrology of the catchment than stormwater management
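As a rough illustration of the kind of process the model represents (urban runoff routed to a sewer network with an overflow device), here is a toy hourly water balance; the rainfall series, coefficients and capacities are invented and bear no relation to the calibrated J2000P parameters.

```python
# Toy hourly simulation: rainfall on an urban sub-area produces runoff;
# flow above the sewer capacity spills through the sewer overflow device (SOD).
rain_mm = [0.0, 2.0, 8.0, 15.0, 5.0, 0.0]   # hypothetical hourly rainfall
area_km2 = 1.2                               # hypothetical connected urban area
runoff_coeff = 0.7                           # hypothetical imperviousness effect
sewer_capacity_m3 = 6000.0                   # hypothetical hourly sewer capacity

for hour, rain in enumerate(rain_mm):
    # 1 mm of rain over 1 km2 equals 1000 m3 of water
    runoff_m3 = rain * area_km2 * 1000.0 * runoff_coeff
    to_treatment = min(runoff_m3, sewer_capacity_m3)
    overflow = max(0.0, runoff_m3 - sewer_capacity_m3)
    print(f"h{hour}: runoff={runoff_m3:8.0f} m3  to plant={to_treatment:8.0f}  SOD spill={overflow:8.0f}")
```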
Style APA, Harvard, Vancouver, ISO itp.
32

Martín, Campillo Abraham. "Triage applications and communications in emergency scenarios". Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/117616.

Pełny tekst źródła
Streszczenie:
El triatge de víctimes és una de les primeres i més importants tasques a realitzar en arribar a un escenari d'emergència. Aquest procés prioritza l'atenció mèdica a les víctima en base al nivell de les seves lesions. Aquest procés és molt important per a una assignació de recursos eficient i eficaç, sobretot en emergències de gran abast amb un gran nombre de víctimes. El procés de classificació de víctimes tradicional utilitza etiquetes de triatge com a indicador de l'estat de la víctima, una solució que comporta alguns inconvenients: Els metges han d'acostar-se a la víctima per veure el seu estat en l'etiqueta de paper, la pèrdua de l'etiqueta de triatge, etc. Avui dia, la informatització de les etiquetes de classificació és essencial per a una coordinació i atenció a les víctimes més ràpida. No obstant això, els escenaris d'emergència usualment es caracteritzen per la falta de xarxes sense fils disponibles per al seu ús. Xarxes sense fils basades en infraestructura com les xarxes de telefonia mòbil o les xarxes Wi-Fi solen destruir-se o saturar-se a causa d'un gran intent d'utilització o per la mateixa naturalesa de l'emergència. Algunes solucions proposen l'ús de sensors i la creació d'una xarxa de sensors sense fils per transmetre l'estat i la posició de les víctimes o el desplegament de repetidors per crear una MANET completament connectada. No obstant això, en grans emergències, això pot no ser possible a causa de l'extensió d'aquesta o pot no ser viable a causa del temps requerit per desplegar els repetidors. Aquesta tesi analitza les situacions d'emergència des del punt de vista de xarxes i comunicacions. Es proposa un sistema per a la classificació electrònica de víctimes fins i tot en casos sense cap tipus de xarxa disponible gràcies a la utilització de xarxes oportunistes i agents mòbils. També s'analitza el rendiment dels protocols de forwarding a les zones de desastre i es proposen algunes millores per reduir el consum d'energia.
El triaje de víctimas es una de las primeras y más importantes tareas al llegar a un escenario de emergencia. Este proceso prioriza la atención médica a las víctima en base al nivel de sus lesiones. Este proceso es muy importante para una asignación de recursos eficiente y eficaz, sobretodo en emergencias de gran abasto con un gran número de víctimas. El proceso de clasificación de víctimas tradicional utiliza etiquetas de triaje como indicador del estado de la víctima, una solución que con algunos inconvenientes: Los médicos tienen que acercarse a la víctima para ver su estado en la etiqueta de papel, la pérdida de la etiqueta de triaje, etc. Hoy en día, la informatización de las etiquetas de clasificación es esencial para una coordinación y atención a las víctimas más rápida. Sin embargo, los escenarios de emergencia usualmente se caracterizan por la falta de redes inalámbricas disponibles para su uso. Redes inalámbricas basadas en infraestructura como las redes de telefonía móvil o las redes Wi-Fi suelen destruirse o saturarse debido un gran intento de utilización o a la misma naturaleza de la emergencia. Algunas soluciones proponen el uso de sensores y la creación de una red de sensores inalámbricos para transmitir el estado y la posición de las víctimas o el despliegue de repetidores para crear una MANET completamente conectada. Sin embargo, en grandes emergencias, esto puede no ser posible debido a la extensión de esta o puede no ser viable debido al tiempo requerido para desplegar los repetidores. Esta tesis analiza las situaciones de emergencia desde el punto de vista de redes y comunicaciones. Se propone un sistema para la clasificación electrónica de víctimas incluso en casos sin ningún tipo de red disponible gracias a la utilización de redes oportunistas y agentes móviles. También se analiza el rendimiento de los protocolos de forwarding en las zonas de desastre y se proponen algunas mejoras para reducir el consumo de energía.
Triaging victims is the first and foremost task in an emergency scenario. This process prioritizes the medical attention given to victims based on their injuries, which is very important for an efficient and effective resource allocation in mass casualty incidents with a large number of victims. The traditional triage process used paper triage tags as the indicator of a victim's injury level, a solution with some drawbacks: first responders had to go to each victim to see their injury level on the paper triage tag, triage tags could be lost, etc. In today's emergencies, an electronic triage tag is essential for faster coordination and attention to victims. However, emergency scenarios are usually characterized by the lack of wireless networks to rely on. Infrastructure-based wireless networks such as mobile phone networks or Wi-Fi networks are usually destroyed or overused due to the very nature of the emergency. Some solutions propose the use of sensors, creating wireless sensor networks to transmit the injury level and position of the victims, or deploying repeaters to create a fully connected MANET. However, in large emergencies this may not be possible, and the time required to deploy all the repeaters may not be worthwhile. This thesis analyses emergencies from the communication point of view. It proposes a system for the electronic triage of victims and emergency management that works even in worst-case scenarios from the network communications perspective, thanks to the use of opportunistic networks and mobile agents. It also analyses the performance of several forwarding protocols in disaster areas and proposes some improvements to reduce energy consumption.
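To give a flavor of the opportunistic, store-carry-forward communication style the thesis relies on, the following toy sketch exchanges buffered triage messages on contact, with an energy-aware restriction; the message format, threshold and rule are illustrative assumptions, not the protocols evaluated in the thesis.

```python
def on_contact(my_buffer, peer_buffer, my_energy, energy_threshold=0.2):
    """Toy epidemic-style exchange: hand over messages the peer is missing,
    but stop replicating when the node's remaining energy is low."""
    if my_energy < energy_threshold:
        return []                              # energy-aware restriction
    known = {m["id"] for m in peer_buffer}
    missing = [m for m in my_buffer if m["id"] not in known]
    peer_buffer.extend(missing)                # peer now carries the copies too
    return missing

# A first responder's device meets a relay vehicle and forwards triage records.
victims = [{"id": "v1", "triage": "red"}, {"id": "v2", "triage": "yellow"}]
relay_buffer = []
print(on_contact(victims, relay_buffer, my_energy=0.8))
```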
Style APA, Harvard, Vancouver, ISO itp.
33

Vargas, Florez Jorge. "Aide à la conception de chaînes logistiques humanitaires efficientes et résilientes : application au cas des crises récurrentes péruviennes". Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2014. http://www.theses.fr/2014EMAC0008/document.

Pełny tekst źródła
Streszczenie:
Chaque année, plus de 400 catastrophes naturelles frappent le monde. Pour aider les populations touchées, les organisations humanitaires stockent par avance de l’aide d’urgence dans des entrepôts. Cette thèse propose des outils d’aide à la décision pour les aider à localiser et dimensionner ces entrepôts. Notre approche repose sur la construction de scénarios représentatifs. Un scénario représente la survenue d’une catastrophe dont on connaît l’épicentre, la gravité et la probabilité d’occurrence. Cette étape repose sur l’exploitation et l’analyse de bases de données des catastrophes passées. La seconde étape porte sur la propagation géographique de la catastrophe et détermine son impact sur la population des territoires touchés. Cet impact est fonction de la vulnérabilité et de la résilience du territoire. La vulnérabilité mesure la valeur attendue des dégâts alors que la résilience estime la capacité à résister au choc et à se rétablir rapidement. Les deux sont largement déterminées par des facteurs économiques et sociaux, soit structurels (géographie, PIB…) ou politiques (existence d’infrastructure d’aide, normes de construction…). Nous proposons par le biais d’analyses en composantes principales (ACP) d’identifier les facteurs influents de résilience et de vulnérabilité, puis d’estimer le nombre de victimes touchées à partir de ces facteurs. Souvent, les infrastructures (eau, télécommunication, électricité, voies de communication) sont détruits ou endommagés par la catastrophe (ex : Haïti en 2010). La dernière étape a pour objectif d’évaluer les impacts logistiques en ce qui concerne : les restrictions des capacités de transport existant et la destruction de tout ou partie des stocks d’urgence. La suite de l’étude porte sur la localisation et le dimensionnement du réseau d’entrepôt. Nos modèles présentent l’originalité de tenir compte de la dégradation des ressources et infrastructures suite due à la catastrophe (dimension résilience) et de chercher à optimiser le rapport entre les coûts engagés et le résultat obtenu (dimension efficience). Nous considérons d’abord un scénario unique. Le problème est une extension d’un problème de location classique. Puis, nous considérons un ensemble de scénarios probabilisés. Cette approche est indispensable à la considération du caractère très incertain des catastrophes humanitaires. L’ensemble de ces contributions a été confronté à la réalité des faits dans le cadre d’une application au cas des crises récurrentes du Pérou. Ces crises, essentiellement dues aux tremblements de terre et aux inondations (El Niño), imposent la constitution d’un réseau logistique de premiers secours qui soit résilient et efficient
Every year, more than 400 natural disasters hit the world. To assist the affected populations, humanitarian organizations store emergency aid in warehouses in advance. This PhD thesis provides decision-support tools for the location and sizing of humanitarian warehouses. Our approach is based on the design of representative and realistic scenarios. A scenario expresses the occurrence of a disaster whose epicenter, severity and probability of occurrence are known. This step is based on the exploitation and analysis of databases of past disasters. The second step tackles the geographic propagation of the disaster and determines its impact on the population of each affected area. This impact depends on the vulnerability and resilience of the territory. Vulnerability measures the expected damage, while resilience estimates the ability to withstand the shock and recover quickly. Both are largely determined by social and economic factors, whether structural (geography, GDP, etc.) or political (existence of relief infrastructure, presence and strict enforcement of construction standards, etc.). Through Principal Component Analysis (PCA) we identify, for each territory, the influential factors of resilience and vulnerability, and then estimate the number of victims from these factors. Often, infrastructure (water, telecommunications, electricity, communication routes) is destroyed or damaged by the disaster (e.g. Haiti in 2010). The last step aims to assess the logistical impact of the disaster, specifically the limitations on existing transportation capacity and the destruction of all or part of the emergency relief inventories. The remainder of the study focuses on the location and sizing of the warehouse network. The proposed models have the originality of taking into account the degradation of resources and infrastructure caused by the disaster (resilience dimension) and of seeking to optimize the balance between the costs incurred and the results obtained (efficiency dimension). We first consider a single scenario; the problem is then an extension of a classical location problem. We then consider a set of probabilized scenarios, an approach that is essential given the highly uncertain character of humanitarian disasters. All of these contributions have been tested and validated through a real application case: the recurrent Peruvian disasters. These crises, mainly due to earthquakes and floods (El Niño), require the establishment of a first-aid logistics network that is both resilient and efficient
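The location and sizing step described above amounts to choosing warehouse sites that perform well in expectation across probabilistic disaster scenarios. A deliberately simplified enumeration sketch follows; the sites, probabilities and costs are invented, and the thesis formulates richer models that also degrade capacity after the disaster.

```python
from itertools import combinations

sites = {"Lima": 10, "Cusco": 7, "Piura": 6}          # hypothetical fixed opening costs
# Hypothetical scenarios: (probability, {site: cost of serving victims from that site})
scenarios = [
    (0.5, {"Lima": 4, "Cusco": 9, "Piura": 8}),
    (0.3, {"Lima": 12, "Cusco": 5, "Piura": 9}),      # e.g. the Lima warehouse is partly destroyed
    (0.2, {"Lima": 6, "Cusco": 8, "Piura": 3}),
]

def expected_cost(open_sites):
    fixed = sum(sites[s] for s in open_sites)
    # In each scenario, victims are served from the cheapest opened site.
    service = sum(p * min(costs[s] for s in open_sites) for p, costs in scenarios)
    return fixed + service

candidates = [c for k in (1, 2) for c in combinations(sites, k)]
best = min(candidates, key=expected_cost)
print("best set of warehouses:", best, "expected cost:", round(expected_cost(best), 2))
```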
Style APA, Harvard, Vancouver, ISO itp.
34

Sulewski, Joe, John Hamilton, Timothy Darr i Ronald Fernandes. "Web Service Applications in Future T&E Scenarios". International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605923.

Pełny tekst źródła
Streszczenie:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
In this paper, we discuss ways in which web services can be used in future T&E scenarios, from the initial hardware setup to making dynamic configuration changes and data requests. We offer a comparison of this approach to other standards such as SNMP, FTP, and RTSP, describing the pros and cons of each as well as how these standards can be used together for certain applications.
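As a hint of what a web-service-based configuration request might look like (in contrast with SNMP-style polling), here is a minimal HTTP sketch using only the Python standard library; the endpoint, path and JSON fields are purely hypothetical and are not part of any T&E standard discussed in the paper.

```python
import json
import urllib.request

# Hypothetical instrument endpoint exposing its configuration as a web service.
BASE_URL = "http://recorder.example.local:8080/api/v1"

def set_bitrate(channel: int, bitrate_kbps: int) -> dict:
    """Send a dynamic configuration change as a JSON POST and return the parsed reply."""
    payload = json.dumps({"channel": channel, "bitrate_kbps": bitrate_kbps}).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/channels/{channel}/config",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)

# Example (would only succeed against a real or mocked endpoint):
# print(set_bitrate(channel=3, bitrate_kbps=2048))
```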
Style APA, Harvard, Vancouver, ISO itp.
35

Bedia, Jiménez Joaquín. "Downscaling of climate scenarios for wildfire danger assessment: Development and Applications". Doctoral thesis, Universidad de Cantabria, 2015. http://hdl.handle.net/10803/382486.

Pełny tekst źródła
Streszczenie:
El peligro de incendios, desde una perspectiva climática, es el descriptor resultante de la integración de las principales variables atmosféricas que afectan de forma directa al inicio, propagación y dificultad de control de un incendio forestal en un momento determinado. Uno de los más utilizados a nivel mundial es el sistema canadiense, conocido como FWI, acrónimo del inglés Fire Weather Index. En esta tesis se desarrollan escenarios futuros de FWI (e indicadores derivados de éste) a varias escalas espaciales, a partir de diferentes proyecciones de cambio climático y aplicando diversas técnicas de regionalización. Se analizan las relaciones entre peligro de incendios y áreas quemadas a nivel global para la identificación de las zonas del planeta más sensibles al cambio climático, y se analizan algunos aspectos metodológicos clave insuficientemente tratados hasta el momento, como la resolución temporal de las variables de entrada, la aplicación de técnicas de regionalización estadística apropiadas y las ventajas y limitaciones del uso de modelos numéricos para la generación de escenarios de FWI.
From a climatic standpoint, fire danger can be defined as the descriptor resultant after the integration of the main atmospheric variables most directly involved in the ignition, propagation and difficulty of suppression of a forest fire. One of the most popular fire danger indicators worldwide is the Canadian Fire Weather Index (FWI). This PhD Thesis is focused on the generation of future FWI (and other FWI-derived indicators) scenarios at different spatial scales, building upon different future climate projections and downscaling techniques. The relationship between fire danger and burned area is analyzed at a global scale in order to identify the most sensitive areas to climate change. Several key methodological aspects, insufficiently analyzed in previous studies, are addressed such as the time resolution of input variables, the use of adequate statistical downscaling techniques and the advantages and limitations of using numerical model simulations for the generation of FWI scenarios.
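One of the simplest statistical downscaling techniques in this context is a regression linking large-scale predictors to a local fire-danger value. The following least-squares sketch uses synthetic numbers; the predictors, coefficients and data are invented, and the thesis evaluates far more elaborate methods together with the full Canadian FWI system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "large-scale" predictors: temperature, relative humidity, wind speed.
n = 200
X = np.column_stack([
    rng.normal(25, 5, n),    # temperature (degC)
    rng.normal(40, 15, n),   # relative humidity (%)
    rng.normal(4, 2, n),     # wind speed (m/s)
])
# Synthetic local fire-danger index with a known linear structure plus noise.
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 2, n)

# Fit the downscaling regression (intercept + 3 coefficients) by least squares.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted coefficients:", np.round(coef, 2))

# Apply it to a new large-scale state, e.g. taken from a climate-model projection.
new_state = np.array([1.0, 32.0, 25.0, 6.0])
print("downscaled index:", round(float(new_state @ coef), 2))
```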
Style APA, Harvard, Vancouver, ISO itp.
36

Franzmann, Guilherme. "Application of open string field theory to the inflationary scenario /". São Paulo, 2014. http://hdl.handle.net/11449/108901.

Pełny tekst źródła
Streszczenie:
Advisor: Nathan Jacob Berkovits
Committee: Horatiu Stefan Nastase
Committee: Luis Raul Weber Abramo
Resumo: Esta tese consiste em uma revisão da estrutura da Teoria de Campo na Corda, focando nas propriedades clássicas do setor da corda aberta e suas possíveis aplicações para inflação. Portanto, seguiremos a prescrição do Witten e construiremos uma ação para a Teoria de Campo na Corda Aberta Bosônica. Então, percebendo que a teoria tem uma modo taquiônico e motivado pelo cenário de inflação, calcularemos e consideraremos o potencial taquiônico em alguma ordem de aproximação. Como aplicação, tomaremos o campo taquiônico como um candidato possível para o inflaton. A fim de trabalhar com esta proposta, revisaremos primeiramente a teoria inflacionária e estudaremos sua abordagem moderna, considerando apenas suas implicações a nível clássico utilizando a aproximação slow-roll. Finalmente, analisaremos o potencial taquiônico como sendo o potencial do inflaton e exploramos suas consequências. Como suporte, há quatro apêndices contendo alguns aspectos de Teoria de Cordas, Relatividade Geral, Cosmologia e alguns cálculos relevantes que foram omitidos ao longo da tese
Abstract: This thesis consists of a review of the String Field Theory framework, focusing on the classical properties of the open string sector and its possible applications to inflation. We follow Witten's prescription and build an action for Open Bosonic String Field Theory. Then, recognizing that the theory has a tachyonic mode, and motivated by the inflationary scenario, we calculate and consider the tachyonic potential to a certain order of approximation. As an application, we consider the tachyon field as a possible candidate for the inflaton. In order to work with this proposal, we first review inflationary theory and study its modern approach, considering only its classical implications using the slow-roll approximation. Finally, we analyze the tachyonic potential as the inflaton potential and explore its consequences. As support, there are four appendices containing some aspects of String Theory, General Relativity, Cosmology and some relevant calculations that were omitted throughout the thesis
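For reference, the slow-roll approximation invoked in the abstract is usually stated through the potential slow-roll parameters (standard textbook definitions, with $M_{Pl}$ the reduced Planck mass; this is general background, not a result of the thesis):

```latex
\epsilon_V \equiv \frac{M_{Pl}^{2}}{2}\left(\frac{V'(\phi)}{V(\phi)}\right)^{2},
\qquad
\eta_V \equiv M_{Pl}^{2}\,\frac{V''(\phi)}{V(\phi)},
\qquad
\text{with slow roll requiring } \epsilon_V \ll 1 \text{ and } |\eta_V| \ll 1 .
```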
Master's
Style APA, Harvard, Vancouver, ISO itp.
37

Franzmann, Guilherme [UNESP]. "Application of open string field theory to the inflationary scenario". Universidade Estadual Paulista (UNESP), 2014. http://hdl.handle.net/11449/108901.

Pełny tekst źródła
Streszczenie:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Esta tese consiste em uma revisão da estrutura da Teoria de Campo na Corda, focando nas propriedades clássicas do setor da corda aberta e suas possíveis aplicações para inflação. Portanto, seguiremos a prescrição do Witten e construiremos uma ação para a Teoria de Campo na Corda Aberta Bosônica. Então, percebendo que a teoria tem uma modo taquiônico e motivado pelo cenário de inflação, calcularemos e consideraremos o potencial taquiônico em alguma ordem de aproximação. Como aplicação, tomaremos o campo taquiônico como um candidato possível para o inflaton. A fim de trabalhar com esta proposta, revisaremos primeiramente a teoria inflacionária e estudaremos sua abordagem moderna, considerando apenas suas implicações a nível clássico utilizando a aproximação slow-roll. Finalmente, analisaremos o potencial taquiônico como sendo o potencial do inflaton e exploramos suas consequências. Como suporte, há quatro apêndices contendo alguns aspectos de Teoria de Cordas, Relatividade Geral, Cosmologia e alguns cálculos relevantes que foram omitidos ao longo da tese
This thesis consists in a review of the String Field Theory framework, focusing in the classical properties of the open sector and its possible applications for inflation. Therefore, we intend to follow the Witten's prescription and build an action for the Open Bosonic String Field Theory. Then, recognizing that the theory has a tachyonic mode and motivated by the inflationary scenario, we calculate and consider the tachyonic potential in some order of approximation. As an application, we consider the tachyon field as a possible candidate for the inflaton. In order to work with this proposal, we first review the inflationary theory and study its modern approach, considering only its classical implications using the slow-roll approximation. Finally, we analyze the tachyonic potential as being the inflaton potential and explore its consequences. As a support, there are four appendices containing some aspects of String Theory, General Relativity, Cosmology and some relevant calculations that were omitted throughout the thesis
Style APA, Harvard, Vancouver, ISO itp.
38

Martínez, Domínguez Francisco José. "Improving Vehicular ad hoc Network Protocols to Support Safety Applications in Realistic Scenarios". Doctoral thesis, Universitat Politècnica de València, 2011. http://hdl.handle.net/10251/9195.

Pełny tekst źródła
Streszczenie:
The convergence of telecommunications, computing, wireless technology and transportation systems will allow our roads and highways to serve both as a transport platform and as a communications platform. In the very near future, these changes will completely revolutionize how and when we access certain services, communicate, travel, entertain ourselves and navigate. Vehicular ad hoc networks (VANETs) are wireless communication networks that do not require any infrastructure and that enable communication and cooperative driving among vehicles on the road. Vehicles act as communication nodes and relays, forming dynamic networks together with other nearby vehicles in urban and highway environments. The special characteristics of vehicular networks favor the development of attractive and challenging services and applications. In this thesis we focus on safety-related applications. Specifically, we develop and evaluate a novel protocol that improves road safety. Our proposal combines the use of vehicle location information and the characteristics of the scenario map to improve the dissemination of warning messages. In safety applications for vehicular networks, our proposal reduces the broadcast storm problem while maintaining a high effectiveness in disseminating messages to nearby vehicles. Since deploying and evaluating VANETs is costly and hard, simulation-based methodology has emerged as an alternative to real implementation. Unlike previous works, in order to evaluate our proposal in a realistic environment, our simulations carefully account for both vehicle mobility and radio transmission in urban environments, especially when buildings interfere with radio signal propagation. For this purpose, we develop more accurate and realistic VANET simulation tools, improving both the modeling of radio propagation and vehicle mobility, and obtaining a solution that allows real maps to be integrated into the simulation environment. Finally, we evaluate the performance of our proposed protocol using our improved simulation platform, showing the importance of using an adequate simulation environment to obtain more realistic results and draw more meaningful conclusions.
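The dissemination scheme summarized above decides, per vehicle, whether to rebroadcast a warning using position information and features of the scenario map. A toy decision rule in that spirit is sketched below; the thresholds and the junction test are invented placeholders, not the actual protocol proposed in the thesis.

```python
import math

def should_rebroadcast(my_pos, sender_pos, duplicates_heard, at_junction,
                       min_distance_m=80.0, max_duplicates=3):
    """Toy forwarding rule: rebroadcast only if it is likely to add coverage.

    Rebroadcast when the vehicle is far enough from the previous sender
    (distance-based suppression) or located at a street junction, where
    buildings would otherwise block the signal, and only while few duplicate
    copies of the same warning have been overheard (broadcast-storm mitigation).
    """
    if duplicates_heard > max_duplicates:
        return False
    distance = math.dist(my_pos, sender_pos)
    return distance >= min_distance_m or at_junction

# Example: a vehicle 90 m away, one duplicate heard, not at a junction -> forwards.
print(should_rebroadcast((0.0, 0.0), (90.0, 0.0), duplicates_heard=1, at_junction=False))
```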
Martínez Domínguez, FJ. (2010). Improving Vehicular ad hoc Network Protocols to Support Safety Applications in Realistic Scenarios [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/9195
Palancia
Style APA, Harvard, Vancouver, ISO itp.
39

Jabri, Sana. "Génération de scénarios de tests pour la vérification de systèmes complexes et répartis : application au système européen de signalisation ferroviaire (ERTMS)". Phd thesis, Ecole Centrale de Lille, 2010. http://tel.archives-ouvertes.fr/tel-00584308.

Pełny tekst źródła
Streszczenie:
In the 1990s, the European Commission called for the development of a railway control-command and signalling system common to all the networks of the member states: the ERTMS (European Railway Traffic Management System). It is a complex distributed system whose complete deployment is long and costly. The overall objective is to reduce the validation and certification costs associated with the implementation of this new system in Europe. The scientific problem lies in the formal modelling of the specification in order to enable the automatic generation of test scenarios. The scientific challenges addressed in this thesis relate, on the one hand, to the transformation of a semi-formal model into a formal model while preserving the structural and functional properties of the reactive components of the distributed system and, on the other hand, to the coverage of the automatically generated tests. The components are treated as black boxes, and the objective is to test them against the ERTMS specification. We developed a modelling approach based on coupling semi-formal models (UML) with formal models (Petri nets), the coupling being achieved through a model transformation technique. We then developed a method for automatically generating conformance test scenarios from the Petri net models. Test scenarios were considered as a firing sequence of the interpreted Petri net representing the specification, first filtered and then reduced. These scenarios were executed on our ERTMS simulation platform
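Since test scenarios are described as filtered and reduced firing sequences of an interpreted Petri net, a minimal marking-and-firing sketch may help fix ideas; the net, the transition names and the simple enumeration are illustrative only.

```python
# A tiny place/transition net: each transition consumes and produces tokens.
# Hypothetical fragment loosely inspired by a movement-authority exchange.
transitions = {
    "request_MA":   ({"idle": 1},       {"waiting": 1}),
    "grant_MA":     ({"waiting": 1},    {"authorized": 1}),
    "start_moving": ({"authorized": 1}, {"moving": 1}),
}

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Build one firing sequence from the initial marking: this ordered list of
# transitions is the kind of object that would be filtered and reduced into a test scenario.
marking, sequence = {"idle": 1}, []
for name, (pre, post) in transitions.items():
    if enabled(marking, pre):
        marking = fire(marking, pre, post)
        sequence.append(name)
print("firing sequence:", sequence, "final marking:", marking)
```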
Style APA, Harvard, Vancouver, ISO itp.
40

Daud, Malik Imran. "Ontology-based Access Control in Open Scenarios: Applications to Social Networks and the Cloud". Doctoral thesis, Universitat Rovira i Virgili, 2016. http://hdl.handle.net/10803/396179.

Pełny tekst źródła
Streszczenie:
La integració d'Internet a la societat actual ha fet possible compartir fàcilment grans quantitats d'informació electrònica i recursos informàtics (que inclouen maquinari, serveis informàtics, etc.) en entorns distribuïts oberts. Aquests entorns serveixen de plataforma comuna per a usuaris heterogenis (per exemple, empreses, individus, etc.) on es proporciona allotjament d'aplicacions i sistemes d'usuari personalitzades; i on s'ofereix un accés als recursos compartits des de qualsevol lloc i amb menys esforços administratius. El resultat és un entorn que permet a individus i empreses augmentar significativament la seva productivitat. Com ja s'ha dit, l'intercanvi de recursos en entorns oberts proporciona importants avantatges per als diferents usuaris, però, també augmenta significativament les amenaces a la seva privacitat. Les dades electròniques compartides poden ser explotades per tercers (per exemple, entitats conegudes com "Data Brokers"). Més concretament, aquestes organitzacions poden agregar la informació compartida i inferir certes característiques personals sensibles dels usuaris, la qual cosa pot afectar la seva privacitat. Una manera de del.liar aquest problema consisteix a controlar l'accés dels usuaris als recursos potencialment sensibles. En concret, la gestió de control d'accés regula l'accés als recursos compartits d'acord amb les credencials dels usuaris, el tipus de recurs i les preferències de privacitat dels propietaris dels recursos/dades. La gestió eficient de control d'accés és crucial en entorns grans i dinàmics. D'altra banda, per tal de proposar una solució viable i escalable, cal eliminar la gestió manual de regles i restriccions (en la qual, la majoria de les solucions disponibles depenen), atès que aquesta constitueix una pesada càrrega per a usuaris i administradors . Finalment, la gestió del control d'accés ha de ser intuïtiu per als usuaris finals, que en general no tenen grans coneixements tècnics.
La integración de Internet en la sociedad actual ha hecho posible compartir fácilmente grandes cantidades de información electrónica y recursos informáticos (que incluyen hardware, servicios informáticos, etc.) en entornos distribuidos abiertos. Estos entornos sirven de plataforma común para usuarios heterogéneos (por ejemplo, empresas, individuos, etc.) donde se proporciona alojamiento de aplicaciones y sistemas de usuario personalizadas; y donde se ofrece un acceso ubicuo y con menos esfuerzos administrativos a los recursos compartidos. El resultado es un entorno que permite a individuos y empresas aumentar significativamente su productividad. Como ya se ha dicho, el intercambio de recursos en entornos abiertos proporciona importantes ventajas para los distintos usuarios, no obstante, también aumenta significativamente las amenazas a su privacidad. Los datos electrónicos compartidos pueden ser explotados por terceros (por ejemplo, entidades conocidas como “Data Brokers”). Más concretamente, estas organizaciones pueden agregar la información compartida e inferir ciertas características personales sensibles de los usuarios, lo cual puede afectar a su privacidad. Una manera de paliar este problema consiste en controlar el acceso de los usuarios a los recursos potencialmente sensibles. En concreto, la gestión de control de acceso regula el acceso a los recursos compartidos de acuerdo con las credenciales de los usuarios, el tipo de recurso y las preferencias de privacidad de los propietarios de los recursos/datos. La gestión eficiente de control de acceso es crucial en entornos grandes y dinámicos. Por otra parte, con el fin de proponer una solución viable y escalable, es necesario eliminar la gestión manual de reglas y restricciones (en la cual, la mayoría de las soluciones disponibles dependen), dado que ésta constituye una pesada carga para usuarios y administradores. Por último, la gestión del control de acceso debe ser intuitivo para los usuarios finales, que por lo general carecen de grandes conocimientos técnicos.
Thanks to the advent of the Internet, it is now possible to easily share vast amounts of electronic information and computer resources (which include hardware, computer services, etc.) in open distributed environments. These environments serve as a common platform for heterogeneous users (e.g., corporations, individuals, etc.) by hosting customized user applications and systems, providing ubiquitous access to the shared resources and requiring less administrative effort; as a result, they enable users and companies to increase their productivity. Unfortunately, the sharing of resources in open environments has significantly increased the privacy threats to users. Indeed, shared electronic data may be exploited by third parties, such as Data Brokers, which may aggregate, infer and redistribute (sensitive) personal features, thus potentially impairing the privacy of individuals. A way to palliate this problem consists in controlling users' access to potentially sensitive resources. Specifically, access control management regulates access to the shared resources according to the credentials of the users, the type of resource and the privacy preferences of the resource/data owners. The efficient management of access control is crucial in large and dynamic environments such as the ones described above. Moreover, in order to propose a feasible and scalable solution, we need to get rid of the manual management of rules/constraints (on which most available solutions rely), which constitutes a serious burden for users and administrators. Finally, access control management should be intuitive for end users, who usually lack technical expertise and may find access control mechanisms difficult to understand and rigid to apply due to their complex configuration settings.
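At its core, the access control management discussed above maps a requester's credentials and the owner's privacy preferences onto an allow or deny decision for a resource. A deliberately small rule-evaluation sketch follows; the relationship levels and policies are invented, and the thesis derives them from ontologies rather than hard-coded tables.

```python
# Hypothetical trust ordering of relationships in a social network.
RELATION_RANK = {"public": 0, "acquaintance": 1, "friend": 2, "close_friend": 3, "owner": 4}

# Owner-defined privacy preferences per resource: minimum relationship required.
policies = {
    "vacation_photo.jpg": "friend",
    "medical_note.txt": "close_friend",
}

def can_access(resource: str, requester_relation: str) -> bool:
    required = policies.get(resource, "owner")          # default: owner-only
    return RELATION_RANK[requester_relation] >= RELATION_RANK[required]

print(can_access("vacation_photo.jpg", "acquaintance"))  # False
print(can_access("vacation_photo.jpg", "friend"))        # True
```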
Style APA, Harvard, Vancouver, ISO itp.
41

Karl, Hendrickson K. "Development and Application of an Analyst Process Model for a Search Task Scenario". Wright State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=wright1401538470.

Pełny tekst źródła
Style APA, Harvard, Vancouver, ISO itp.
42

Ingeson, Martin. "Long-Term Experience Applications for Augmented Reality - In a Medication Adherence Scenario". Thesis, Umeå universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-149702.

Pełny tekst źródła
Streszczenie:
The aim of this research project was to explore how long-term experience applications (LTEAs) can be designed and implemented for augmented reality (AR) headsets. Personalization, unintrusiveness, planning (short-term and long-term) and context-awareness were identified as particularly important areas when designing and implementing LTEAs for AR headsets. An intelligent system directed towards medication adherence, called the Medication Coach Intelligent Agent (MCIA), was designed, and an approach for designing LTEAs that takes the above-mentioned areas into account is presented. The goal of the MCIA was to increase medication adherence for patients in an intelligent way, and this was achieved by using belief-desire-intention (BDI)-based reasoning algorithms. A concept of intentions and actions is presented in order to be able to plan ahead while still being adaptive to the user's situation. A proof-of-concept prototype was implemented on a Microsoft HoloLens and an evaluation with user tests was conducted. The purpose of the evaluation was to investigate the user experience of using AR headsets and their potential as a tool to address problems related to medication adherence and similar problems related to user behavior. The results showed that people with experience in the use of smart technology were positive towards using AR headsets and that it is possible to implement intelligent systems for AR headsets using today's technology.
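A very small illustration of the belief-desire-intention style of reasoning mentioned above, applied to a reminder decision, is sketched below; the beliefs, thresholds and actions are invented and much simpler than the MCIA's planning of intentions and actions.

```python
from datetime import datetime, timedelta

def medication_coach_step(beliefs):
    """One BDI-style deliberation step: from beliefs about the user, derive the
    intention (action) that best serves the desire of maintaining adherence."""
    overdue = beliefs["now"] - beliefs["last_dose"] > beliefs["dose_interval"]
    if not overdue:
        return "do_nothing"                       # no new intention needed yet
    if beliefs["user_busy"]:
        return "schedule_unintrusive_reminder"    # respect unintrusiveness
    return "show_ar_reminder_now"

beliefs = {
    "now": datetime(2018, 5, 1, 14, 0),
    "last_dose": datetime(2018, 5, 1, 5, 30),
    "dose_interval": timedelta(hours=8),
    "user_busy": False,
}
print(medication_coach_step(beliefs))   # -> show_ar_reminder_now
```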
Style APA, Harvard, Vancouver, ISO itp.
43

Gonçalves, Marcos André. "Streams, Structures, Spaces, Scenarios, and Societies (5S): A Formal Digital Library Framework and Its Applications". Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/29942.

Pełny tekst źródła
Streszczenie:
Digital libraries (DLs) are complex information systems and therefore demand formal foundations lest development efforts diverge and interoperability suffers. In this dissertation, we propose the fundamental abstractions of Streams, Structures, Spaces, Scenarios, and Societies (5S), which allow us to define digital libraries rigorously and usefully. Streams are sequences of arbitrary items used to describe both static and dynamic (e.g., video) content. Structures can be viewed as labeled directed graphs, which impose organization. Spaces are sets with operations that obey certain constraints. Scenarios consist of sequences of events or actions that modify states of a computation in order to accomplish a functional requirement. Societies are sets of entities and activities, and the relationships among them. Together these abstractions provide a formal foundation to define, relate, and unify concepts -- among others, of digital objects, metadata, collections, and services -- required to formalize and elucidate "digital libraries". A digital library theory based on 5S is defined by proposing a formal ontology that defines the fundamental concepts, relationships, and axiomatic rules that govern the DL domain. The ontology is an axiomatic, formal treatment of DLs, which distinguishes it from other approaches that informally define a number of architectural invariants. The applicability, versatility, and unifying power of the 5S theory are demonstrated through its use in a number of distinct applications including: 1) building and interpreting a DL taxonomy; 2) informal and formal analysis of case studies of digital libraries (NDLTD and OAI); 3) utilization as a formal basis for a DL description language, digital library visualization and generation tools, and a log format specific for DLs; and 4) defining a quality model for DLs.
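To make the five abstractions more concrete, here is a minimal typed sketch of how a 5S tuple could be represented in code; the field choices are one possible interpretation for illustration, not the formal definitions given in the dissertation.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Set, Tuple

@dataclass
class DigitalLibrary5S:
    # Streams: sequences of arbitrary items (text, audio, video frames, ...).
    streams: List[List[Any]] = field(default_factory=list)
    # Structures: labeled directed graphs as (nodes, edges, edge labels).
    structures: List[Tuple[Set[str], Set[Tuple[str, str]], Dict[Tuple[str, str], str]]] = field(default_factory=list)
    # Spaces: sets together with constrained operations over them.
    spaces: List[Tuple[Set[Any], Dict[str, Callable]]] = field(default_factory=list)
    # Scenarios: sequences of events/actions fulfilling a functional requirement.
    scenarios: List[List[str]] = field(default_factory=list)
    # Societies: entities/activities and the relationships among them.
    societies: Dict[str, Set[str]] = field(default_factory=dict)

dl = DigitalLibrary5S(scenarios=[["submit_thesis", "assign_metadata", "publish"]])
print(dl.scenarios[0])
```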
Ph. D.
Style APA, Harvard, Vancouver, ISO itp.
44

LOPES, WANDER DE PINHO. "AN APPROACH FOR INTEGRATED APPLICATION OF STRATEGY SCENARIO WITH REAL OPTION VALUATION IN TELECOMMUNICATIONS". PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2004. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=5124@1.

Pełny tekst źródła
Streszczenie:
Certamente uma das arenas competitivas mais marcada pela incerteza é a indústria de telecomunicações. Características intrínsecas ao negócio, como o alto nível de imobilização e a dispersão geográfica, sob a ação de questões voláteis como demanda, câmbio, tecnologia, regulamentação, dentre outras, podem representar grandes impactos nos resultados do negócio. Esta incerteza sempre existiu, porém nos últimos anos tem estado em maior evidência. Diante desta nova necessidade, os métodos adotados atualmente em planejamento estratégico e em análise de investimentos têm se mostrado pouco eficazes. Não é uma questão de usar uma abordagem melhor ou pior, e sim de usar a mais adequada. Quanto a isto, porém, a única certeza é a de que a melhor forma não é tentar encaixar investimentos e estratégias de uma indústria tão dinâmica como a de telecomunicações em um planejamento estático e rígido. A proposta deste trabalho é propor e exemplificar um método para descrição e avaliação de estratégia para telecomunicações, baseado no paradigma de opções reais. Este método é uma visão integrada de ferramentas e métodos conhecidos e utilizados em opções reais, análise de cenários e estratégia. A pesquisa não pretende definir o que é melhor para a discussão de análise de estratégias e investimentos em telecomunicações, mas oferecer uma reflexão mais em linha com as peculiaridades do negócio.
Definitely, one of the most uncertain competitive arenas in corporate business is the telecommunications industry. Its intrinsic characteristics, such as a high degree of immobilization and geographic dispersion, under the effect of volatile issues such as demand, currency rates, technology and regulatory affairs, among others, may have a great impact on business results. This uncertainty has always existed, but in recent times it has become more pronounced. In the face of this new reality, the current methods used in strategic planning and investment analysis have shown limited effectiveness. It is not a matter of adopting a better or worse approach, but of adopting the most appropriate one. Regarding this, however, the only certainty is that the best alternative is not to try to fit the investments and strategies of such a dynamic industry as telecommunications into a static and inflexible plan. The objective of this work is to present and illustrate a method for strategy definition and assessment in telecommunications, based on real options. This method is an integrated view of tools and methods known and employed in real options, scenario analysis and strategy. The research does not intend to define what is best for the analysis and discussion of strategies and investments in telecommunications, but to offer reflections more in line with the singularities of the business.
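The real-options paradigm referenced above is often illustrated with a binomial lattice valuation of an option to invest. A compact Cox-Ross-Rubinstein sketch with invented numbers follows; the project value, volatility and deferral horizon are placeholders, not figures from the dissertation.

```python
import math

def real_option_value(V0, K, r, sigma, T, steps):
    """Value of a (European-style, for simplicity) option to invest:
    a call on project value V0 with investment cost K, valued on a CRR lattice."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))        # up factor
    d = 1.0 / u                                # down factor
    p = (math.exp(r * dt) - d) / (u - d)       # risk-neutral probability
    disc = math.exp(-r * dt)
    # Option payoffs at maturity, then backward induction through the lattice.
    values = [max(V0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

# Hypothetical network roll-out: project worth 100, costs 110, 3 years to defer.
print(round(real_option_value(V0=100, K=110, r=0.05, sigma=0.4, T=3, steps=100), 2))
```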
Style APA, Harvard, Vancouver, ISO itp.
45

Diallo, Abdoulaye. "Inference of insertion and deletion scenarios for ancestral genome reconstruction and phylogenetic analyses: algorithms and biological applications". Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=40771.

Pełny tekst źródła
Streszczenie:
This thesis focuses on algorithms related to ancestral genome reconstruction and phylogenetic analyses. Specifically, it studies insertions and deletions (indels) in genomic sequences and their utility for (1) evolutionary studies of species families, (2) assessment of multiple alignments and phylogenetic tree reconstructions, and (3) functional DNA sequence annotation. Here, the indel scenario reconstruction problem is presented in a likelihood framework, and it can be stated as follows: given a multiple alignment of orthologous sequences and a phylogenetic tree for these sequences, reconstruct the most likely scenario of insertions and deletions capable of explaining the gaps observed in the alignment. This problem, which we call the Indel Maximum Likelihood Problem (IMLP), is an important step toward the reconstruction of ancestral genomic sequences, and is important for studying evolutionary processes, genome function, adaptation and convergence. In this thesis, we first show that the IMLP can be solved using a new type of tree hidden Markov model whose states correspond to single-base evolutionary scenarios and where transitions model dependencies between neighboring columns. The standard Viterbi and Forward-Backward algorithms are optimized to produce the most likely ancestral reconstruction and to compute the level of confidence associated with specific regions of the reconstruction. A heuristic is presented to make the method practical for large data sets, while retaining an extremely high degree of accuracy. The developed methods have been made available to the community through a web interface. Second, we show the utility of the defined indel score for assessing the accuracy of multiple sequence alignments and phylogenetic tree reconstructions. Third, the proposed method is included in the framework of ancestral protein reconstruction of phages under reticulate evolution and evolutionary studies of the carcinogenicity of the Human Papilloma Virus
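For intuition on the Viterbi step mentioned above, here is a minimal Viterbi decoder for a plain sequence HMM with invented "match" and "gap" states and probabilities; the thesis runs the equivalent computation on a tree-structured HMM rather than on a chain.

```python
import math

states = ["match", "gap"]
start = {"match": 0.9, "gap": 0.1}
trans = {"match": {"match": 0.95, "gap": 0.05},
         "gap":   {"match": 0.3,  "gap": 0.7}}
emit = {"match": {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25, "-": 0.0001},
        "gap":   {"A": 0.01, "C": 0.01, "G": 0.01, "T": 0.01, "-": 0.96}}

def viterbi(obs):
    """Return the most likely state path for one sequence, column by column."""
    V = [{s: math.log(start[s]) + math.log(emit[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        scores, ptrs = {}, {}
        for s in states:
            prev, score = max(((p, V[-1][p] + math.log(trans[p][s])) for p in states),
                              key=lambda x: x[1])
            scores[s] = score + math.log(emit[s][o])
            ptrs[s] = prev
        V.append(scores)
        back.append(ptrs)
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for ptrs in reversed(back):
        path.append(ptrs[path[-1]])
    return list(reversed(path))

print(viterbi(list("AC--GT")))   # -> ['match', 'match', 'gap', 'gap', 'match', 'match']
```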
Cette thèse traite d'algorithmes pour la reconstruction de génomes ancestraux et l'analyse phylogénétique. Elle étudie particulièrement les scénarios d'insertion et délétion (indels) dans les séquences génomiques, leur utilité (1) pour l'étude des familles d'espèces, (2) pour l'évaluation des alignements multiples de séquences et la reconstruction phylogénétique, (3) et pour l'annotation de séquences génomiques fonctionnelles. Dans cette thèse, le problème de la reconstruction du scénario d'indels est étudié en utilisant le critère de maximum de vraisemblance. Ce problème peut être défini de la manière suivante: étant donné un alignement multiple de séquences orthologues et un arbre phylogénétique traduisant l'histoire évolutive de ces séquences, reconstruire le scénario d'indels le plus vraisemblable capable d'expliquer les brèches présentes dans l'alignement. Ce problème, dénommé ''Indel Maximum Likelihood Problem (IMLP)'', est une importante étape de la reconstruction de séquences ancestrales. Il est également important pour l'étude des processus évolutifs, des fonctions des gènes, de l'adaptation et de la convergence.Dans une première étape de cette thèse, nous montrons que l'IMLP peut être résolu en utilisant un nouveau type de données combinant un arbre phylogénétique et un modèle de Markov caché. Les états de ce modèle de Markov caché correspondent à un scénario évolutif d'une colonne de l'alignement. Ses transitions modélisent la dépendance entre les colonnes voisines de l'alignement.Les algorithmes standard de Viterbi et de Forward-Backward ont été optimisés pour produire le scénario ancestral le plus vraisemblable et pour calculer le niveau de confiance associé aux prédictions. Dans cette thèse, Nous présentons également une heuristique qui permet d'adapter la méthode à des données de grandes tailles. En second, nous montrons l'utilité du score d'indel dans l'évaluatio
Style APA, Harvard, Vancouver, ISO itp.
46

Yellakonda, Amulya. "DESIGN AND IMPLEMENTATION OF RICH COMMUNICATION SERVICE SCENARIO REPLAYER AND PERFORMANCE EVALUATION OF APPLICATION SERVICE". Thesis, Blekinge Tekniska Högskola, Institutionen för kommunikationssystem, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10846.

Pełny tekst źródła
Streszczenie:
Rich Communication Services (RCS) is a GSM Association (GSMA) program to create inter-operator communication services based on the IP Multimedia Subsystem (IMS). The initiative emerged as the global telecom operators' response to the decline in their revenues and to help them compete with 'Over The Top' (OTT) service providers such as Viber, WhatsApp, etc. RCS is a universal standard, making it inter-operable between multiple service providers, unlike OTT services with closed communities. RCS services use IMS as the underlying architecture, with an RCS stack implemented in an Android background service which offers a high-level API. For the purpose of testing RCS stack functionality, which is usually affected by external dependencies such as third-party vendors or ISP customizations in real telecommunication scenarios, there is a persistent demand for scenario replay tools that can recreate test conditions similar to those experienced in live deployments. There is also a need to evaluate the performance of the service provided by application servers in the network in order to predict the factors affecting the RCS service in general. In this work, we propose a tool to address RCS scenario reproduction in a test environment. The tool is implemented within an automated test environment with full control over interaction with the RCS stack, hence the ability to replay the scenario in a controlled fashion. To achieve this goal, the tool replays the trace interactively with the RCS stack in a stateful manner; it ensures no improper packet generation, which is a critical feature for test environments where protocol semantics accuracy is fundamental. A detailed demonstration of how the tool can be deployed in test environments is given. A novel approach is used to validate the effectiveness of the replayed scenario: the sequence of events and states is compared to that of the recorded scenario using a call-back service to indicate the state. The replayed scenario showed a strong correspondence with the recorded RCS scenario. The thesis also presents a performance evaluation of the application service by considering the request-response times of the network registration procedure. The obtained results show that the average time taken for the registration process is 555 milliseconds, with a few instances showing larger deviations from this average, pointing to faulty server behavior, which is most relevant for developers during debugging.
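The performance figures quoted above come from measuring request-response times of the registration procedure. A generic timing sketch of that style of measurement is shown below; the register() stub stands in for the real IMS registration exchange and produces synthetic latencies.

```python
import random
import statistics
import time

def register():
    """Stand-in for a network registration round trip (random synthetic latency)."""
    time.sleep(random.uniform(0.4, 0.7))   # pretend the server answers in 400-700 ms

samples_ms = []
for _ in range(20):
    t0 = time.perf_counter()
    register()
    samples_ms.append((time.perf_counter() - t0) * 1000.0)

print(f"mean  : {statistics.mean(samples_ms):6.1f} ms")
print(f"stdev : {statistics.stdev(samples_ms):6.1f} ms")
print(f"max   : {max(samples_ms):6.1f} ms   # outliers hint at faulty server behaviour")
```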
Style APA, Harvard, Vancouver, ISO itp.
47

Nake, Magdalena. "Increasing Online Hotel Bookings with the Application of Promotional Cues : A Scenario-Based Experimental Study". Thesis, Linnéuniversitetet, Institutionen för marknadsföring (MF), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-96135.

Pełny tekst źródła
Streszczenie:
Background: Promotional cues are commonly applied on online travel agency websites. The most common cues were identified as scarcity, popularity and pricing, which are said to increase consumers' perceptions of scarcity, popularity and price and thereby enhance booking intention. There is increasing interest in the impact of promotional cues on booking intention in the online booking context; however, the results are inconsistent, and previous studies have looked only at intention to book. By applying the theory of planned behavior model to measure booking intention, new theoretical insights regarding promotional cues and booking intention can be gained. Furthermore, this should help hotel suppliers increase booking intention on their own websites. Purpose: The purpose is to explain how promotional cues found on online hotel booking sites affect the relationship between (1) perceived scarcity, (2) perceived popularity, (3) perceived price and consumers' booking intention, using the theory of planned behavior model. Method: This study took a deductive approach with quantitative data collection. Using a between-subjects experimental design, a fictional online hotel booking scenario was created. In total, data from 379 respondents were collected with a web-based survey, with respondents assigned to six different groups (three treatment groups and three control groups). A manipulation check showed that the perceived-pricing condition did not differ significantly between control and treatment groups, whereas perceived scarcity and perceived popularity did. Correlation and regression analyses were conducted, and several t-tests were used to identify significant differences between variables. Conclusion: A positive moderate relationship was found between perceived scarcity and booking intention when scarcity cues were applied. Implementing popularity cues likewise led to a moderate positive relationship between perceived popularity and booking intention, although the correlation was weaker than for scarcity cues. Implementing scarcity and popularity cues is therefore an effective, though modest, tool for increasing bookings. Hence, it is important to also pay attention to the other factors influencing booking intention and to use the cues merely as a supporting means of increasing bookings.
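The method above relies on a manipulation check and on correlation analysis between perceived cues and booking intention. As a rough illustration under assumed data, the sketch below runs an independent-samples t-test and a Pearson correlation with SciPy; the group sizes, rating scales and values are hypothetical, not the study's dataset.

```python
# Minimal sketch (hypothetical data, not the study's dataset): an
# independent-samples t-test as a manipulation check and a Pearson
# correlation between perceived scarcity and booking intention.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 7-point ratings for control vs. scarcity-cue treatment groups.
control_perceived_scarcity = rng.normal(3.5, 1.0, 60)
treatment_perceived_scarcity = rng.normal(4.4, 1.0, 60)

t_stat, p_value = stats.ttest_ind(treatment_perceived_scarcity,
                                  control_perceived_scarcity)
print(f"manipulation check: t = {t_stat:.2f}, p = {p_value:.4f}")

# Hypothetical relationship between perceived scarcity and booking intention.
booking_intention = 0.5 * treatment_perceived_scarcity + rng.normal(0, 0.8, 60)
r, p = stats.pearsonr(treatment_perceived_scarcity, booking_intention)
print(f"perceived scarcity vs. booking intention: r = {r:.2f}, p = {p:.4f}")
```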
Style APA, Harvard, Vancouver, ISO itp.
48

Kuo, Pei-Chen, i 郭珮晨. "Analysis of Application Scenarios and Business Models of Internet Micro TV". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/14539798668020159294.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Taiwan University
Graduate Institute of Information Management
101
Multimedia websites and mobile devices are increasingly popular, modern consumers are increasingly busy, and the micro era has arrived. How can these characteristics be combined to satisfy consumer needs? This research designs an Internet Micro TV platform and creates value through it. First, we analyse the differences between traditional TV and multimedia websites and examine the relevant application scenarios and characteristics; we then identify the key components of Internet Micro TV so that a system architecture for operating the platform can be designed. Based on this architecture, we identify the development questions the industry faces at different stages and, by addressing them, offer strategic advice for each stage: in the initial stage, operators should focus on content sources and the size of the user base; as users grow, they should seek cooperation partners to mature the platform's operation; and in the long term they must have a complete profit model and business model to keep the value-creation cycle running.
Style APA, Harvard, Vancouver, ISO itp.
49

Yeh, Chia-Wei, i 葉家瑋. "An Application of Decision Tree Generation to the Induction of Management Scenarios". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/34829283316187835863.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Taiwan University
Graduate Institute of Industrial Engineering
104
A large part of domestic industry is rooted in contract manufacturing, but the need to develop brand power has been fully recognized. Secondary brands rooted in contract manufacturing are in a weak position when competing with name brands, so how a secondary brand can pursue the strategic option of forming an alliance with a channel brand to compete with name brands is a significant issue. Many possible future scenarios can then be developed and, through scenario analysis, induced into useful strategies. This paper addresses this as a knowledge-integration problem. First, a product-positioning model is constructed to analyse the feasibility of such a strategy. Then, concepts of knowledge discovery in databases and scenario analysis are applied to construct a database of scenarios. Finally, a decision tree method is used to classify the different scenarios and induce rules from them. The results show that the market scale of the alliance is the determinant factor.
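The abstract above induces strategies by classifying scenarios with a decision tree. As a simplified illustration only, the sketch below fits a small decision tree on hypothetical scenario records and prints the induced rules; the features, data and labels are invented for the example and are not the thesis's scenario database.

```python
# Minimal sketch (hypothetical features and data): classify scenario records
# with a decision tree and inspect the induced rules.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row is one scenario: [alliance market scale, channel brand strength,
# name-brand price pressure], all on a 1-5 scale (hypothetical).
scenarios = [
    [5, 4, 2],
    [4, 3, 4],
    [2, 4, 4],
    [1, 2, 2],
    [5, 5, 1],
    [2, 1, 4],
]
# Outcome label for each scenario: 1 = alliance strategy succeeds, 0 = it fails.
outcomes = [1, 1, 0, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(scenarios, outcomes)

# Print the induced rules; with this toy data the tree splits on market scale,
# which separates the successful from the unsuccessful alliance scenarios.
print(export_text(tree, feature_names=["market_scale",
                                       "channel_strength",
                                       "price_pressure"]))
```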
Style APA, Harvard, Vancouver, ISO itp.
50

Chen, Wei-Hsuan, i 陳瑋旋. "Application of Scenarios to the Impact Assessment of Forest Life Zones in Taiwan". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/76568280958162836716.

Pełny tekst źródła
Streszczenie:
Master's thesis
National Chung Hsing University
Department of Forestry
94
In order to evaluate the impact of climate change under increasing atmospheric CO2 concentrations, we used the Holdridge life zone classification model, the IPCC SRES scenarios and Forest-Grid to simulate and predict future habitat factors and ecoregions in Taiwan. The Holdridge model uses three decision factors, biotemperature, annual precipitation and the potential evapotranspiration ratio, which together determine the life zone classification. The IPCC SRES scenarios comprise four types: A1 (rapid global growth), A2 (regional growth), B1 (global service economy) and B2 (increasing population), used to build models of the future world. Among these four types, "A" favours economic development, "B" favours environmental protection, "1" is globally focused and "2" is regionally focused; combining the symbols A, B, 1 and 2 divides the future global situation into four major types. Taiwan being one region of the world, we simulated temperature and precipitation under an atmospheric CO2 concentration twice that of the industrial revolution (560 ppm), using the temperature and precipitation changes of SRES-A2 and SRES-B2. The simulation covers the climate of the next 30, 60 and 90 years produced by GCMs. The IPCC SRES simulation uses a 96 x 48 grid, of which three cells cover Taiwan. We processed the climate data of every weather station and rainfall station over the past 30 years as a baseline and simulated the projected future changes with SRES. For spatial interpolation, trend-surface interpolation was used for the temperature distribution of Taiwan and kriging for the rainfall distribution. Finally, we obtained the Holdridge life zones under the SRES-A2 and SRES-B2 climate-change scenarios. The procedures were divided into "spatial interpolation before calculation", in which monthly mean temperature and monthly rainfall are interpolated before the raster calculation of annual mean temperature and annual rainfall, and "spatial interpolation after calculation", in which interpolation is performed after the annual values have been calculated. We estimated the differences in temperature, rainfall, potential evapotranspiration ratio and Holdridge life zones with Forest-Grid (Feng and Wu, 2005). The simulation results show that under SRES-A2 temperature rises and rainfall decreases, increasing the area of the "Tropical Moist Forest" and "Tropical Dry Forest" Holdridge life zones in western Taiwan. Under SRES-B2 the temperature rise is smaller than under SRES-A2 but rainfall increases, so the area of "Tropical Dry Forest" increases less than under SRES-A2 and the reduction of "Subtropical Lower Montane Moist Forest" in the mountain areas is also smaller. Under both SRES-A2 and SRES-B2 the Holdridge life zones change first in northeastern and southwestern Taiwan in the short term, and the changes then spread to the whole island in the medium and long term. The tropical forest life zones expand from low to high elevations, while the areas of the cool temperate and subtropical forest life zones shrink at high elevations.
In particular, we found some areas whose life zone the Holdridge model could not define, as well as bare areas above the forest line. The areas of "Cool Temperate Montane Rain Forest" and "Boreal Subalpine Rain Forest" increase, and the vegetation zones shift upward. "Spatial interpolation before calculation" proved to be the better calculation method and agrees with research results on the present situation. These data can provide information for future assessments of the impact of climate change on Taiwan's ecological environment.
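The abstract above classifies grid cells into Holdridge life zones from biotemperature, annual precipitation and the potential evapotranspiration ratio. As a coarse, simplified illustration (not the full Holdridge chart or the thesis's Forest-Grid workflow), the sketch below assigns one cell an approximate temperature belt and humidity province; the breakpoints follow commonly cited Holdridge values but should be treated as approximations.

```python
# Minimal, simplified sketch of a Holdridge-style classification for one grid
# cell. The belts, humidity provinces and the 58.93 constant follow the usual
# description of the Holdridge system, but the breakpoints are approximate and
# this is only an illustration, not the full life-zone chart.

def holdridge_zone(biotemperature_c, annual_precip_mm):
    """Return a coarse (belt, humidity province, PET ratio) for one location."""
    # Potential evapotranspiration ratio in the Holdridge system.
    pet_ratio = biotemperature_c * 58.93 / annual_precip_mm

    if biotemperature_c >= 24:
        belt = "Tropical"
    elif biotemperature_c >= 12:
        belt = "Subtropical / Warm Temperate"
    elif biotemperature_c >= 6:
        belt = "Cool Temperate (Montane)"
    elif biotemperature_c >= 3:
        belt = "Boreal (Subalpine)"
    else:
        belt = "Alpine / Nival"

    # Humidity provinces follow the doubling series of the PET ratio.
    if pet_ratio < 0.25:
        humidity = "Rain Forest"
    elif pet_ratio < 0.5:
        humidity = "Wet Forest"
    elif pet_ratio < 1.0:
        humidity = "Moist Forest"
    elif pet_ratio < 2.0:
        humidity = "Dry Forest"
    else:
        humidity = "Very Dry Forest / Thorn Woodland"
    return belt, humidity, pet_ratio

# Example: a warm, wet lowland cell in western Taiwan (hypothetical values).
print(holdridge_zone(24.5, 1800))
```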
Style APA, Harvard, Vancouver, ISO itp.