Theses on the topic « Efficienza informativa »

To see other types of publications on this topic, follow this link: Efficienza informativa.



Consult the top 50 dissertations for your research on the topic « Efficienza informativa ».


You can also download the full text of the scholarly publication in PDF format and read its abstract online whenever these details are included in the metadata.

Browse dissertations on a wide variety of disciplines and organise your bibliography correctly.

1

Vento, Francesca. « Fondamenti logici del corporate credit rating ed efficienza informativa ». Doctoral thesis, Università degli studi di Trieste, 2009. http://hdl.handle.net/10077/3109.

Full text
Abstract:
2007/2008
Credit intermediaries and credit rating agencies play a decisive role in mitigating the distortive effects of the asymmetric distribution of information to which transactions between lenders and borrowers are intrinsically subject, since only the borrowers have full knowledge of their own actual creditworthiness. In the credit market, banks interpose themselves between surplus and deficit units and take the investors' place in selecting financing alternatives, carrying out on their behalf the collection, analysis and processing of information on the creditworthiness of potential counterparties, and ultimately reducing transaction and information-handling costs. Similarly, in the bond market, where surplus and deficit units interact directly, rating agencies interpose themselves between investors and issuers of debt instruments, but in the different role of "information brokers": specialised operators that collect and process information on an issuer's credit standing and then condense and disseminate it through a judgement expressed by an alphabetic symbol, the credit rating, which communicates simply and immediately the agency's opinion of the issuer's ability to repay principal and pay interest fully and punctually on one or more debt issues. In our view, the real significance of the role played by credit intermediaries and rating agencies in countering information asymmetries between lenders and borrowers, and the actual effectiveness of their intervention in promoting the efficiency of the credit and bond markets, may depend on the nature of the information on which they choose to base their assessments of the creditworthiness of prospective borrowers. In this regard a distinction can be drawn, although not a strictly dichotomous one, between "hard" information - typically quantitative, easily codified, collected impersonally and interpreted objectively and unambiguously - and "soft" information, which is essentially qualitative and descriptive, subject to the evaluator's own interpretation, intrinsically difficult to codify, and therefore acquired only personally and directly. In the light of these differences, a predominant reliance on hard information brings significant advantages in the cost of acquiring and processing information, but entails the inevitable loss of the informational content that only soft inputs can convey. This is the setting of the present work, which analyses the varied morphology of information and examines in which predominant form (hard or soft) the information underlying the credit assessments carried out by credit intermediaries and rating agencies presents itself.
This amounts to investigating how these players position themselves with respect to the trade-off between the costs of handling information, which are minimised as the use of hard information grows, and the completeness of that information, which increases as soft information is added to the evaluator's information set; on this trade-off, as anticipated, we believe the effective reduction of the information asymmetries between lenders and borrowers may depend. We observe that how intensively soft rather than hard information is used in the assessments carried out by banks and by rating agencies is tied to the context in which those assessments originate. For banks, a comparison can be drawn between credit analyses performed under relationship lending and under transactional lending; for rating agencies, between assessments performed within a solicited and within an unsolicited rating process. The relationship established between the evaluator and the firm under analysis - in relationship lending for the bank, or in the solicited rating process for the agency - significantly enriches the set of informational inputs available for the assessment: it allows the acquisition of elements that, although codifiable and objectively interpretable (and therefore hard), cannot be obtained from public sources because of confidentiality or the firm's informational opacity, and it allows the collection of information that cannot be acquired impersonally because of its purely soft nature, which makes it non-codifiable and measurable only through the judgement and perception of the very person charged with gathering it. Herein lies, in our view, the added value of the contribution that credit intermediaries and rating agencies make to reducing information asymmetries between lenders and borrowers: regardless of how transparent the counterparty under analysis is, the inclusion of soft information in their assessments - which is precluded both in the bank's transaction-based lending and in the agency's assignment of unsolicited ratings - is the ultimate determinant of the effectiveness of their intervention in promoting market efficiency.
In the light of these considerations, the work concludes by examining the framework for analysing and evaluating qualitative information developed and implemented at an Italian rating agency, in order to understand which tools a credit rating agency can use, in practice, to address the problem of processing soft information - which arises most acutely in the solicited rating process - and to strike an optimal balance in the trade-off between the cost of handling information and its completeness.
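
As a toy illustration of how hard and soft inputs might be blended into a single rating judgement, here is a minimal sketch; the weights, thresholds, field names and letter scale are invented assumptions and do not reproduce the methodology of any agency discussed in the thesis.

```python
# Toy credit-rating sketch: combine "hard" (quantitative) and "soft"
# (qualitative) inputs into one score and map it to a letter grade.
# All weights, thresholds and field names are illustrative assumptions.

HARD_WEIGHTS = {"interest_coverage": 0.4, "leverage": -0.3, "roa": 0.3}

def combined_score(hard, soft_judgement, soft_weight=0.35):
    """hard: dict of standardised financial ratios (z-scores);
    soft_judgement: analyst's qualitative score in [0, 1]."""
    hard_score = sum(HARD_WEIGHTS[k] * v for k, v in hard.items())
    # Blend the impersonal hard score with the analyst's soft judgement.
    return (1 - soft_weight) * hard_score + soft_weight * (2 * soft_judgement - 1)

def to_letter(score):
    for threshold, letter in [(0.6, "A"), (0.2, "BBB"), (-0.2, "BB"), (-0.6, "B")]:
        if score >= threshold:
            return letter
    return "CCC"

borrower = {"interest_coverage": 0.8, "leverage": -0.5, "roa": 0.4}
print(to_letter(combined_score(borrower, soft_judgement=0.7)))
```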
XXI Ciclo
2

Zhang, Xiang. « Efficiency in Emergency medical service system : An analysis on information flow ». Thesis, Växjö University, School of Mathematics and Systems Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-1620.

Full text
Abstract:

In an information system that includes many information services, we are always seeking ways to enhance efficiency and reusability. An emergency medical service system is a classic information system built on application integration, in which reliable transmission of information flows is essential. Such a system should always be kept running in the best possible condition, with the highest efficiency and reusability, since its efficiency directly affects human life.

The aim of this thesis is to analyse an emergency medical service system in both qualitative and quantitative ways. A further aim is to suggest a method for judging the information flow, by analysing the system's efficiency and the correlations between information flow traffic and the system's applications.

The result is that the system is a main platform integrating five information services. Each of them provides different, independent functions, while all are based on unified information resources. The system efficiency can be judged by a method called Performance Evaluation, and the correlations can be judged by multi-factorial analysis of variance.

3

Li, Haorui. « Efficiency of hospitals : Evaluation of Cambio COSMIC system ». Thesis, Växjö University, School of Mathematics and Systems Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-1625.

Full text
Abstract:

In the modern world, healthcare has become a central concern in human life. People pay attention to health protection and treatment, but at the same time they have to bear the high expenditure of healthcare processes.

It is a serious problem that government income cannot cover the large expense of the healthcare industry. Especially in some developing countries, healthcare has become a problem for national development.

We address this problem directly by examining a system intended to improve the efficiency of the healthcare system: Cambio COSMIC.

The aim of analysing COSMIC in this case study is to find out how the architect designed the system from the stakeholders' requirements so as to succeed in improving the efficiency of the healthcare system. How to measure that success also still needs to be indicated.

4

Gustafsson, Anders, et Peter Wulff. « EFFICIENT HANDOVER OF INFORMATION IN SUPPLY CHAINS ». Thesis, Växjö University, School of Mathematics and Systems Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-2334.

Full text
Abstract:

Information logistics is about moving information between different individuals or systems to the place in time and space where a demand for that information arises. Information is carried by different types of bearers and distributed through different channels in an optimal way. How can this be done without distorting the information at the interfaces between the different participants? This report is about exactly that. The report has resulted in the Information Quality Model (IQM) and the Information Quality Process (IQP). IQM is a model for evaluating information that explains which characteristics affect the efficiency of information; it is built on Aspects – Characteristics – Questions. If, for example, we have the right information and can deliver it to the right place at the right time, but with a poor layout, the efficiency can still be low. We can therefore state that all the stated aspects, with their associated characteristics, must be attended to in order to achieve good quality at the interfaces. IQP is a process that shows how to set up and run processes for collecting and processing information using a method that secures quality. The method consists of an analysis phase, in which frames are created for what should be collected and how, and an execution phase, in which information is filled in according to the rules set up in the analysis phase. Together, IQM and IQP form a whole that illuminates the entire information-handover process in a unique way. The research method is mainly case-based and builds on the authors' empirical experience.



5

Malek, Behzad. « Efficient private information retrieval ». Thesis, University of Ottawa (Canada), 2005. http://hdl.handle.net/10393/26966.

Full text
Abstract:
In this thesis, we study Private Information Retrieval and Oblivious Transfer, two strong cryptographic tools that are widely used in various security-related applications, such as private data-mining schemes and secure function evaluation protocols. The first non-interactive, secure dot-product protocol, widely used in private data-mining schemes, is proposed based on trace functions over finite fields. We further improve the communication overhead of the best previously known Oblivious Transfer protocol from O((log n)^2) to O(log n), where n is the size of the database. Our communication-efficient Oblivious Transfer protocol is a non-interactive, single-database scheme that is generally built on Homomorphic Encryption Functions. We also introduce a new protocol that reduces the computational overhead of Private Information Retrieval protocols. This protocol is shown to be computationally secure for users, depending on the security of the McEliece public-key cryptosystem. The total online computational overhead is the same as in the case where no privacy is required. The computation-saving protocol can be implemented entirely in software, without any need for installing a secure piece of hardware or replicating the database among servers.
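
To make the notion of private information retrieval concrete, here is a minimal two-server, information-theoretic PIR sketch based on XORs of random subsets. This is a textbook illustration only; it is not the single-database, homomorphic-encryption or McEliece-based constructions proposed in the thesis.

```python
import secrets

def xor_bits(bits):
    out = 0
    for b in bits:
        out ^= b
    return out

def pir_query(db_size, index):
    """Client: build queries for two non-colluding servers.
    Each query alone is a uniformly random subset and reveals nothing."""
    q1 = [secrets.randbits(1) for _ in range(db_size)]
    q2 = list(q1)
    q2[index] ^= 1          # the two subsets differ only at the wanted index
    return q1, q2

def pir_answer(database, query):
    """Server: XOR of the selected bits (no knowledge of the target index)."""
    return xor_bits(b for b, q in zip(database, query) if q)

database = [1, 0, 1, 1, 0, 0, 1, 0]    # toy one-bit records
q1, q2 = pir_query(len(database), index=5)
a1 = pir_answer(database, q1)
a2 = pir_answer(database, q2)
assert a1 ^ a2 == database[5]          # client recovers the record privately
```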
6

Kassir, Abdallah. « Communication Efficiency in Information Gathering through Dynamic Information Flow ». Thesis, The University of Sydney, 2014. http://hdl.handle.net/2123/12113.

Full text
Abstract:
This thesis addresses the problem of how to improve the performance of multi-robot information gathering tasks by actively controlling the rate of communication between robots. Examples of such tasks include cooperative tracking and cooperative environmental monitoring. Communication is essential in such systems for both decentralised data fusion and decision making, but wireless networks impose capacity constraints that are frequently overlooked. While existing research has focussed on improving available communication throughput, the aim in this thesis is to develop algorithms that make more efficient use of the available communication capacity. Since information may be shared at various levels of abstraction, another challenge is the decision of where information should be processed based on limits of the computational resources available. Therefore, the flow of information needs to be controlled based on the trade-off between communication limits, computation limits and information value. In this thesis, we approach the trade-off by introducing the dynamic information flow (DIF) problem. We suggest variants of DIF that either consider data fusion communication independently or both data fusion and decision making communication simultaneously. For the data fusion case, we propose efficient decentralised solutions that dynamically adjust the flow of information. For the decision making case, we present an algorithm for communication efficiency based on local LQ approximations of information gathering problems. The algorithm is then integrated with our solution for the data fusion case to produce a complete communication efficiency solution for information gathering. We analyse our suggested algorithms and present important performance guarantees. The algorithms are validated in a custom-designed decentralised simulation framework and through field-robotic experimental demonstrations.
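
A common building block behind decentralised data fusion of the kind discussed above is the information (inverse-covariance) form of a Gaussian estimate, in which fusing independent estimates reduces to addition. The sketch below illustrates only that step, under the assumption of independent estimates; it is not the dynamic information flow algorithms proposed in the thesis.

```python
import numpy as np

def to_information_form(mean, cov):
    Y = np.linalg.inv(cov)        # information matrix
    y = Y @ mean                  # information vector
    return Y, y

def fuse(estimates):
    """Fuse independent Gaussian estimates by summing information terms."""
    Y = sum(e[0] for e in estimates)
    y = sum(e[1] for e in estimates)
    cov = np.linalg.inv(Y)
    return cov @ y, cov           # back to mean/covariance form

# Two robots' independent estimates of the same 2-D target position.
est_a = to_information_form(np.array([1.0, 2.0]), np.diag([0.5, 0.5]))
est_b = to_information_form(np.array([1.2, 1.8]), np.diag([0.2, 0.8]))
mean, cov = fuse([est_a, est_b])
print(mean, np.diag(cov))         # fused estimate is tighter than either input
```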
7

Fernandez, Garcia Javier David, Jürgen Umbrich et Axel Polleres. « BEAR : Benchmarking the Efficiency of RDF Archiving ». Department für Informationsverarbeitung und Prozessmanagement, WU Vienna University of Economics and Business, 2015. http://epub.wu.ac.at/4615/1/BEAR_techReport_022015.pdf.

Full text
Abstract:
There is an emerging demand for techniques addressing the problem of efficiently archiving and (temporally) querying different versions of evolving semantic Web data. While systems for archiving and/or temporal querying are still in their early days, we consider this a good time to discuss benchmarks for evaluating the storage space efficiency of archives, the retrieval functionality they serve, and the performance of various retrieval operations. To this end, we provide a blueprint for benchmarking archives of semantic data by defining a concise set of operators that cover the major aspects of querying of and interacting with such archives. Next, we introduce BEAR, which instantiates this blueprint to serve a concrete set of queries on the basis of real-world evolving data. Finally, we perform an empirical evaluation of current archiving techniques that is meant to serve as a first baseline for future developments in querying archives of evolving RDF data. (authors' abstract)
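
A rough sketch of the kind of retrieval operators such a benchmark exercises (materialising a version, computing a delta between versions, and asking in which versions a statement holds), over a toy in-memory triple store. This is illustrative only; BEAR's actual operators and data model are defined in the report.

```python
# Toy versioned triple archive. Each version is a set of (s, p, o) triples.
versions = [
    {("ex:alice", "ex:knows", "ex:bob")},
    {("ex:alice", "ex:knows", "ex:bob"), ("ex:bob", "ex:age", "30")},
    {("ex:bob", "ex:age", "31")},
]

def materialise(v):
    """Version materialisation: full state of the dataset at version v."""
    return versions[v]

def delta(v1, v2):
    """Delta query: triples added and deleted between two versions."""
    return {"added": versions[v2] - versions[v1],
            "deleted": versions[v1] - versions[v2]}

def version_query(triple):
    """Version query: in which versions does a triple hold?"""
    return [v for v, data in enumerate(versions) if triple in data]

print(delta(1, 2))
print(version_query(("ex:alice", "ex:knows", "ex:bob")))
```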
Series: Working Papers on Information Systems, Information Business and Operations
8

Bengtsson, Conny. « Energianalys av luftbehandling ». Thesis, Högskolan i Halmstad, Sektionen för ekonomi och teknik (SET), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-15234.

Full text
Abstract:
Electricity consumption for ventilation in Sweden has increased by 40% since 1990. Ventilation is an environmental villain, since studies indicate that the electricity consumption of ventilation system fans can be greatly reduced. A reduction in electricity consumption for ventilation can be achieved, for example, by replacing the fan motor with an EC motor or with a direct-driven fan with backward-curved blades. Such measures can reduce power consumption by up to 50% in some applications. The ventilation fans in Sweden consume 12.3 TWh each year, and today 175,000 ventilation units have too high an electricity consumption and lack heat recovery. According to the Swedish Energy Agency the ventilation sector has a large savings potential, since the energy used for ventilation could be lowered by 30% (3.5 TWh), which roughly corresponds to the total annual wind power production in Sweden. The analysis tool developed in this thesis is designed to be user friendly and is intended as an aid when evaluating the functionality and efficiency of air handling systems. The tool analyses measurement data from the air handling system and produces key figures that are then compared with given key figures and guideline values, as well as the authorities' current requirements. The analysis tool was tested on a reference object and worked as intended, but in its current form it should be regarded as a prototype that can be developed further. Some training is required before the tool is used, with regard to electrical safety and measurement accuracy. The thesis also aims to map how large the savings potential is for making building ventilation more energy efficient, and to develop a calculation tool to meet the demand that is likely to arise from property owners who want to save energy.
9

Brennan, Alan. « Efficient Computation of Value of Information ». Thesis, University of Sheffield, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.487609.

Full text
Abstract:
This thesis is concerned with the computation of expected value of information (EVI). The topic is important because EVI methodology is a rational, coherent framework for prioritising and planning the design of biomedical and clinical research studies that represent an enormous expenditure world-wide. At the start of my research few studies existed. During the course of the PhD, my own work and that of other colleagues has been published and the uptake of the developing methods is increasing. The thesis contains a review of the early literature as well as of the emerging studies over the 5 years since my first work was done in 2002 (Chapter 2). Methods to compute partial expected value of perfect information are developed and tested in illustrative cost-utility decision models with non-linear net benefit functions and correlated parameters. Evaluation using nested Monte Carlo simulations is investigated and the number of inner and outer simulations required is explored (Chapter 3). The computation of expected value of sample information using nested Monte Carlo simulations, combined with Bayesian updating of model parameters with conjugate distributions given simulated data, is examined (Chapter 4). In Chapter 5, a novel Bayesian approximation for posterior expectations is developed, then applied and tested in the computation of EVSI for an illustrative model, again with normally distributed parameters. The application is further extended to a non-conjugate proportional hazards Weibull distribution, a common circumstance for clinical trials concerned with survival or time-to-event data (Chapter 6). The application of the Bayesian approximation in the Weibull model is then tested against four other methods for estimating the Bayesian updated Weibull parameters, including the computationally intensive Markov Chain Monte Carlo (MCMC) approach, which could be considered the gold standard (Chapter 7). The result of the methodological developments in this thesis, and of the testing on case studies, is that some new approaches to computing EVI are now available. In many models this will improve the efficiency of computation, making EVI calculations possible in some previously infeasible circumstances. In Chapter 8, I summarise the achievements made in this work, how they relate to those of other scholars, and the research agenda which still faces us. I conclude with the firm hope that EVI methods will begin to provide decision makers with clearer support when deciding on investments in further research.
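
For readers unfamiliar with the quantities involved, the sketch below estimates the overall expected value of perfect information (EVPI) for a toy two-option net-benefit model by plain Monte Carlo. The parameter distributions, willingness-to-pay value and net-benefit functions are invented for illustration and are unrelated to the case studies in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Uncertain model parameters (illustrative priors).
effect = rng.normal(0.6, 0.2, n)      # incremental health effect of treatment B
cost = rng.normal(10_000, 2_000, n)   # incremental cost of treatment B
wtp = 30_000                          # willingness to pay per unit of effect

# Net benefit of each decision option for every simulated parameter set.
nb_a = np.zeros(n)                    # status quo: reference option
nb_b = wtp * effect - cost

# Decide now with current information: pick the option with the best mean.
value_current_info = max(nb_a.mean(), nb_b.mean())
# With perfect information we would pick the best option in every simulation.
value_perfect_info = np.maximum(nb_a, nb_b).mean()

evpi = value_perfect_info - value_current_info
print(f"EVPI per decision: {evpi:.0f}")
```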
10

Daoud, Amjad M. « Efficient data structures for information retrieval ». Diss., Virginia Tech, 1993. http://hdl.handle.net/10919/40031.

Full text
11

Chli, Margarita. « Applying information theory to efficient SLAM ». Thesis, Imperial College London, 2010. http://hdl.handle.net/10044/1/5634.

Full text
Abstract:
The problem of autonomous navigation of a mobile device is at the heart of the more general issue of spatial awareness and is now a well-studied problem in the robotics community. Following a plethora of approaches throughout the history of this research, recently, implementations have been converging towards vision-based methods. While the primary reason for this success is the enormous amount of information content encrypted in images, this is also the main obstacle in achieving faster and better solutions. The growing demand for high-performance systems able to run on affordable hardware pushes algorithms to the limits, imposing the need for more effective approximations within the estimation process. The biggest challenge lies in achieving a balance between two competing goals: the optimisation of time complexity and the preservation of the desired precision levels. The key is in agile manipulation of data, which is the main idea explored in this thesis. Exploiting the power of probabilistic priors in sequential tracking, we conduct a theoretical investigation of the information encoded in measurements and estimates, which provides a deep understanding of the map structure as perceived through the camera lens. Employing information theoretic principles to guide the decisions made throughout the estimation process we demonstrate how this methodology can boost both the efficiency and consistency of algorithms. Focusing on the most challenging processes in a state of the art system, we apply our information theoretic framework to local motion estimation and maintenance of large probabilistic maps. Our investigation gives rise to dynamic algorithms for quality map-partitioning and robust feature mapping in the presence of significant ambiguity and variable camera dynamics. The latter is further explored to achieve scalable performance allowing dense feature matching based on concrete probabilistic decisions.
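
As a small illustration of the information-theoretic quantities that can guide such decisions, the snippet below computes the mutual information between a Gaussian state estimate and a linear measurement of it, I = 0.5 * log(det(H P H^T + R) / det(R)). The matrices are invented; this is only the standard linear-Gaussian formula, not the thesis's SLAM framework itself.

```python
import numpy as np

def measurement_mutual_information(P, H, R):
    """Mutual information (in nats) between state x ~ N(mu, P)
    and measurement z = H x + v, with v ~ N(0, R)."""
    S = H @ P @ H.T + R               # innovation covariance
    return 0.5 * np.log(np.linalg.det(S) / np.linalg.det(R))

P = np.diag([0.5, 0.5, 0.1])          # current state uncertainty
H = np.array([[1.0, 0.0, 0.0],        # we observe the first two state entries
              [0.0, 1.0, 0.0]])
R = 0.05 * np.eye(2)                  # measurement noise

print(measurement_mutual_information(P, H, R))
```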
12

Ableiter, Dirk. « Smart caching for efficient information sharing in distributed information systems ». Thesis, Monterey, Calif. : Naval Postgraduate School, 2008. http://edocs.nps.edu/npspubs/scholarly/theses/2008/Sept/08Sep%5FAbleiter.pdf.

Full text
Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, September 2008.
Thesis Advisor(s): Singh, Gurminder ; Otani, Thomas. "September 2008." Description based on title screen as viewed on October 31, 2008. Includes bibliographical references (p. 45-50). Also available in print.
13

McLafferty, Kevin. « Operational efficiency of industrialised information processing systems ». Thesis, Cardiff Metropolitan University, 2016. http://hdl.handle.net/10369/8253.

Full text
Abstract:
The British economy has always been a trading nation, in terms of goods and, more recently, services. At the heart of national and international trading is London, the hub of a global financial empire that unites the globe on a 24-hour basis. Vast revenues are generated by commercial and investment banking institutions, yet research in this sector has been comparatively scarce. Management researchers have instead gravitated towards the 'back office' operations of high street banks or general insurance company call centres (Seddon, 2008). Research has focused on repetitive clerical activities for customers, and on how these businesses suffer from 'failure demand' and/or 'demand amplification' (Forrester, 1961) created when a customer is forced to re-establish contact with a call centre to have their issue or concerns reworked (when it should have been 'right first time'). Modern commercial and investment banks do not share the repetitive and relatively predictable transactions of call centres and instead represent extreme operations management cases. The workload placed upon commercial and investment banking systems is incredibly high volume, high value and high variety, in terms of both what clients demand and how 'the product' (trades) is executed. At this period in time, the financial collapse of 2008 is still shaping working practices owing to the punitive regulatory environment. Many UK banks are now part-owned by the government, and there is social and political pressure to stimulate improvement in banking operations which, it is thought, will herald the return of the banks to private ownership. This thesis addresses the flow of global "trades" through the operations office and explores how the design and fit of the sociotechnical environment provides effective and efficient trade flow performance. The key research questions emerging from the literature review, which establishes the gap in knowledge, are: 1) How efficient are commercial and investment banking trade processing systems? and 2) What are the enablers and inhibitors of efficient, high-performing industrialised processing systems? To answer these questions, the researcher undertook an in-depth and longitudinal case study at a British bank that was 'benchmarked' as underperforming against its peers (MGT Report, 2011). The case study strategy was executed using an action research and reflective learning approach (cycles of research) to explore the performance and improvement of banking operations management. The findings show that, using systems feedback, the management at the bank were able to develop into a "learning organisation" (Senge, 1990) and improve and enhance the flow of work through the system. The study has resulted in significant gains for the case organisation, and a new model of Rolled Throughput Yield is presented that rests on the key concept of "Information Fidelity". This work contributes to the operations management body of knowledge by exploring "flow" under conditions of high volume and high variety, and from within the under-researched context of commercial and investment banking.
Note: "MGT" is an anonymised commercial and investment banking industry report into operational efficiency and cost performance. The report was commissioned by the participant banks, conducted by "MGT Consultants", and is considered highly confidential. The researcher was given a copy of the report while working with the case, and it formed the catalyst for the research into operational performance. The researcher was unable to obtain "MGT Consultants'" agreement to cite the report directly as part of this study.
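
Rolled Throughput Yield, mentioned above, is conventionally the product of the first-pass yields of each processing step; a minimal illustration follows. The step names and yield values are invented, and the thesis's extended, information-fidelity-based model is not reproduced here.

```python
from math import prod

# First-pass yield of each stage of a trade-processing flow (illustrative).
first_pass_yield = {
    "capture": 0.98,
    "enrichment": 0.95,
    "confirmation": 0.97,
    "settlement": 0.99,
}

rty = prod(first_pass_yield.values())  # probability a trade passes every step cleanly
print(f"Rolled Throughput Yield: {rty:.3f}")
```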
14

Verbeek, Erwin [Verfasser]. « Information Efficiency in Speculative Markets / Erwin Verbeek ». Aachen : Shaker, 2011. http://d-nb.info/1084535610/34.

Full text
15

Bako, Boto [Verfasser]. « Efficient information dissemination in VANETs / Boto Bako ». Ulm : Universität Ulm, 2016. http://d-nb.info/1122195583/34.

Full text
16

PONNA, CHAITANYA, et BODEPUDI RAKESH CHOWDARY. « HOW ENABLE EFFICIENT INFORMATION INTERCHANGES IN VIRTUAL ». Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-20865.

Full text
Abstract:
The Internet is a collection of computer networks; it is the most important networking environment in the world and is used for information interchange in virtual networks. Wireless technologies, such as Wi-Fi and WiMAX, can offer more suitable and easier ways for information interchange in a virtual network. The most well-known ongoing wireless city projects include Wireless Philadelphia, Google Wi-Fi Mountain View, Wireless Taipei City, and the San Francisco Tech Connect project. The Web has limits of interactivity and presentation: its client-server architecture blocks information exchange, and most Web applications are only intended for conventional computers, not for mobile handheld devices. In this period of information exchange on the Web, the new Web 2.0 is suggested. Web 2.0 refers to a perceived or planned second generation of Internet-based services, such as social networking sites, wikis and communication tools, which emphasise online teamwork and sharing among users. A virtual network or online community is a collection of people that may or may not chiefly or originally connect or interact via the Internet. Virtual networks have also become an additional form of communication among people who know each other in real life. Today, the term virtual network can be used loosely for a variety of social groups interacting via the Internet; it does not necessarily mean that there is a strong bond between the members. We discuss validation methods such as internal validity, external validity and reliability for this research, and how they affect it; an interview method has also been used. We use diagrams, models, prototypes and texts to illustrate our results, define all the diagrams, models and prototypes in our own words, and give references for the original data collected from various sources such as the Internet, websites, books and journals.
Program: Magisterutbildning i informatik
17

Shin, Jungmin. « Data transform composition for efficient information integration ». [Gainesville, Fla.] : University of Florida, 2009. http://purl.fcla.edu/fcla/etd/UFE0024907.

Full text
18

Francescato, Riccardo <1993&gt. « Efficient Black-box JTAG Discovery ». Master's Degree Thesis, Università Ca' Foscari Venezia, 2018. http://hdl.handle.net/10579/12266.

Full text
Abstract:
Embedded devices represent the most widespread form of computing device in the world. Almost every consumer product manufactured in the last decades contains an embedded system, e.g., refrigerators, smart bulbs, activity trackers, smart watches and washing machines. These computing devices are also used in safety- and security-critical systems, e.g., autonomous driving cars, cryptographic tokens, avionics and alarm systems. Often, manufacturers do not take much into consideration the attack surface offered by low-level interfaces such as JTAG. In the last decade, the JTAG port has been used by the research community to demonstrate a number of attacks and reverse engineering techniques. Therefore, finding and identifying the JTAG port of a device or a de-soldered integrated circuit (IC) can be the first step required for performing a successful attack. In this work we analyse the design of the JTAG port and develop methods and algorithms aimed at searching for the JTAG port. Specifically, we cover the following topics: i) an introduction to the problem and related attacks; ii) a general description of the JTAG port and its functions; iii) the analysis of the problem and the naive solution; iv) an efficient algorithm based on 4-state GPIO; v) a randomized algorithm using 4-state GPIO; vi) an overview of the problem and the search methods used on PCBs; vii) conclusions and suggestions for proficient use.
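
To illustrate the combinatorial core of black-box JTAG discovery (trying assignments of candidate pins to TCK/TMS/TDI/TDO and probing for a valid IDCODE), here is a naive brute-force sketch. The `probe_idcode` function is a placeholder the reader must implement for their own hardware interface, and this exhaustive search is not the optimised 4-state-GPIO or randomised algorithms developed in the thesis.

```python
from itertools import permutations

def probe_idcode(tck, tms, tdi, tdo):
    """Placeholder: drive the JTAG state machine on the given pins and try to
    shift out a 32-bit IDCODE. Must be implemented for the actual hardware
    (e.g. via a GPIO adapter); here it simply reports failure."""
    return None

def find_jtag(candidate_pins):
    """Naive search over all pin assignments (O(n!) probes); the thesis's
    algorithms aim to reduce the number of probes needed."""
    for tck, tms, tdi, tdo in permutations(candidate_pins, 4):
        idcode = probe_idcode(tck, tms, tdi, tdo)
        # A plausible IDCODE register has its least-significant bit set to 1.
        if idcode is not None and idcode & 1:
            return {"TCK": tck, "TMS": tms, "TDI": tdi, "TDO": tdo,
                    "IDCODE": hex(idcode)}
    return None

print(find_jtag([2, 3, 4, 5, 6]))   # prints None until probe_idcode is implemented
```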
19

Giouvris, Evangelos Thomas. « Issues in asset pricing, liquidity, information efficiency, asymmetric information and trading systems ». Thesis, Durham University, 2006. http://etheses.dur.ac.uk/2940/.

Full text
Abstract:
Market microstructure is a relatively new area in finance which emerged as a result of the inconsistency between actual and expected prices due to a variety of frictions (mainly trading frictions and asymmetric information) and the realisation that the trading process, through which investors' demand is ultimately translated into orders and volumes, is of greater importance in price formation than originally thought. Despite increased research in the areas of liquidity, asset pricing, asymmetric information and trading systems, all subfields of market microstructure, there are a number of questions that remain unanswered, such as the effect of different trading systems on systematic liquidity, informational efficiency or the components of the spread. This thesis aims at shedding light on those questions by providing a detailed empirical investigation of the effect of trading systems on systematic liquidity, pricing, informational efficiency, volatility and bid-ask spread decomposition, mainly with respect to the UK market (FTSE100 and FTSE250) and to a lesser extent with respect to the Greek market. These two markets are at different levels of development/sophistication and are negatively correlated. The aims of this thesis are outlined in chapter one, with chapter two providing a detailed review of the theoretical literature relevant to this study. Chapter three is the first empirical chapter and tests for the presence of a common underlying liquidity factor (systematic liquidity) and its effect on pricing for FTSE100 and FTSE250 stocks under different trading regimes. Results show the presence of commonality for FTSE100 and FTSE250 stocks, although commonality is weaker for FTSE250 stocks and its role in pricing is reduced. Chapter four investigates the same issues with respect to the Greek market, and we find that commonality appears to be stronger in some periods while it is reduced to zero in others. Chapter five focuses on the effect that changes in the trading systems can have on informational efficiency and volatility, primarily with respect to the FTSE100 and FTSE250. Different methodologies and data are employed for this purpose and produce similar results. We find that order-driven markets are more responsive to incoming information when compared to quote-driven markets. Volatility has a greater impact on the spread when the market is quote driven. We also examined whether automated trading increased informational efficiency with respect to the Greek market; the results indicated that the effect of automation was positive. Finally, the last chapter focuses on the effect of different trading systems on the components of the spread and their determinants. Our main finding is that the asymmetric component of the spread is higher in a quote-driven market. Stock volatility also appears to affect the asymmetric component to a greater extent when the market is quote driven. We believe that the main justification for those findings is affirmative quotation.
20

Rydberg, Christoffer. « Time Efficiency of Information Retrieval with Geographic Filtering ». Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-172918.

Full text
Abstract:
This study addresses the question of the time efficiency of two major models within Information Retrieval (IR): the Extended Boolean Model (EBM) and the Vector Space Model (VSM). Both models use the same weighting scheme, based on term frequency-inverse document frequency (tf-idf). The VSM uses a cosine score computation to rank document-query similarity. In the EBM, p-norm scores are used, which rank documents not just by matching terms but also by taking into account the Boolean interconnections between the terms in the query. Additionally, this study investigates how documents with a single geographic affiliation can be retrieved based on features such as the location and geometry of the geographic surface. Furthermore, we want to answer how best to integrate this geographic search with the two IR models previously described. From previous research we conclude that using an index based on Z-Space Filling Curves (Z-SFC) is the best approach for documents containing a single geographic affiliation. When documents are retrieved from the Z-SFC index, there is no guarantee that the retrieved documents are relevant to the search area; it is, however, guaranteed that only the retrieved documents can be relevant. Furthermore, the ranked output of the IR models gives a great advantage to the geographic search, namely that we can focus on documents with high relevance. We intersect the results from one of the IR models with the results from the Z-SFC index and sort the resulting list of documents by relevance. At this point we can iterate over the list, check for intersections between each document's geometry and the search geometry, and only retrieve documents whose geometries are relevant to the search. Since the user is only interested in the top results, we can stop as soon as a sufficient number of results have been obtained. The conclusion of this study is that the VSM is an easy-to-implement, time-efficient retrieval model. It is inferior to the EBM in the sense that it is a rather simple bag-of-words model, while the EBM allows term conjunctions and disjunctions to be specified. The geographic search has been shown to be time efficient and independent of which of the two IR models is used. The gap in efficiency between the VSM and the EBM, however, increases drastically as the query gets longer and more results are obtained. Depending on the requirements of the user, the collection size, the length of queries, etc., the benefits of the EBM might outweigh the downside in performance. For search engines with a big document collection and many users, however, it is likely to be too slow.
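
The Z-space-filling-curve index referred to above rests on Morton encoding: interleaving the bits of the two grid coordinates so that nearby cells tend to receive nearby keys, which makes range filtering cheap. A minimal sketch follows; the grid resolution and the range-filtering step are simplified and this is not the exact index layout used in the thesis.

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of grid coordinates x and y into one Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

def grid_cell(lon, lat, bits=16):
    """Map a lon/lat pair to integer grid coordinates."""
    x = int((lon + 180.0) / 360.0 * ((1 << bits) - 1))
    y = int((lat + 90.0) / 180.0 * ((1 << bits) - 1))
    return x, y

# Index documents by the Z-order key of their (single) geographic point.
docs = {"d1": (18.07, 59.33), "d2": (18.08, 59.32), "d3": (-0.13, 51.51)}
index = sorted((morton_key(*grid_cell(lon, lat)), doc) for doc, (lon, lat) in docs.items())
print(index)   # d1 and d2 (Stockholm) get nearby keys; d3 (London) is far away
```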
21

Verney, Steven P. « Pupillary responses index : information processing efficiency across cultures / ». Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2000. http://wwwlib.umi.com/cr/ucsd/fullcit?p9992386.

Full text
22

Bazargan-Lari, Sina. « Benefits of information technology in improving preconstruction efficiency ». [Gainesville, Fla.] : University of Florida, 2009. http://purl.fcla.edu/fcla/etd/UFE0041262.

Full text
23

Bellato, Carlo <1997&gt. « Information and Theories of Efficiency in Financial Markets ». Master's Degree Thesis, Università Ca' Foscari Venezia, 2022. http://hdl.handle.net/10579/21794.

Full text
Abstract:
The purpose of this thesis is to analyse the state of the art of financial markets, focusing on information and on theories of efficiency. I will present the Efficient Markets Hypothesis developed by Eugene Fama, the cornerstone of every classic model, which states that it is not possible to "beat the market" through asset trading, since all available information is reflected in prices. I will then present the Adaptive Markets Hypothesis, a theory developed by Andrew Lo that tries to complete Fama's theory and to reconcile it with Behavioral Finance. The Adaptive Hypothesis applies the principle of the evolution of species to financial markets: individuals adapt according to market forces and compete with each other in order to survive in the financial environment. Finally, I analyse the current situation of financial markets with regard to information and knowledge, describing the technological changes that have affected financial markets over recent years and the role of media in spreading information, which have created an environment in which the real challenge for investors is to find the appropriate news and to avoid disinformation and fake news.
24

CATENA, MATTEO. « Energy efficiency in large scale information retrieval systems ». Doctoral thesis, Gran Sasso Science Institute, 2017. http://hdl.handle.net/20.500.12571/9645.

Full text
Abstract:
Web search engines are large scale information retrieval systems, which provide easy access to information on the Web. High performance query processing is fundamental for the success of such systems. In fact, web search engines can receive billions of queries per day. Additionally, the issuing users are often impatient and expect sub-second response times to their queries (e.g., 500 ms). For such reasons, search companies adopt distributed query processing strategies to cope with huge volumes of incoming queries and to provide sub-second response times. Web search engines perform distributed query processing on computer clusters composed of thousands of computers and hosted in large data centers. While data center facilities enable large-scale online services, they also raise economical and environmental concerns. Therefore, an important problem to address is how to reduce the energy expenditure of data centers. Moreover, another problem to tackle is how to reduce carbon dioxide emissions and the negative impact of the data centers on the environment. A large part of the energy consumption of a data center can be attributed to inefficiencies in its cooling and power supply systems. However, search companies already adopt state-of-the-art techniques to reduce the energy wastage of such systems, leaving little room for further improvements in those areas. Therefore, new approaches are necessary to mitigate the environmental impact and the energy expenditure of web search engines. One option is to reduce the energy consumption of computing resources to mitigate the energy expenditure and carbon footprint of a search company. In particular, reducing the energy consumption of CPUs represents an attractive venue for web search engines. Currently, CPU core frequencies are typically managed by operating system components, called frequency governors. We propose to delegate the CPU power management from the OS frequency governors to the query processing application. Such search engine-specific governors can reduce a server's power consumption by up to 24%, with only limited (but uncontrollable) drawbacks in the quality of search results with respect to a system running at maximum CPU frequency. Since users can hardly notice response times that are faster than their expectations, we advise that a web search engine should not process queries faster than user expectations and, consequently, we propose the Predictive Energy Saving Online Scheduling (PESOS) algorithm, to select the most appropriate CPU frequency to process a query by its deadline, on a per-core basis. PESOS can reduce the CPU energy consumption of a query processing server by 24% up to 48% when compared to a high performance system running at maximum CPU core frequency. To reduce the carbon footprint of web search engines, another option consists in using green energy to partially power their data centers. Stemming from these observations, we propose a new query forwarding algorithm that exploits both the green energy sources available at different data centers and the differences in market energy prices. The proposed solution maintains a high query throughput, while reducing by up to 25% the energy operational costs of multi-center search engines.
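
The idea of trading CPU frequency against per-query latency targets can be illustrated in a few lines: given a predicted processing time at the highest frequency and a latency budget, pick the lowest available frequency that still meets the deadline. This simplified sketch assumes processing time scales inversely with frequency and uses invented frequency levels; it is not the PESOS algorithm itself.

```python
AVAILABLE_FREQS_GHZ = [1.2, 1.6, 2.0, 2.4, 2.8]   # illustrative P-states

def pick_frequency(predicted_ms_at_max, deadline_ms, freqs=AVAILABLE_FREQS_GHZ):
    """Return the lowest frequency whose (scaled) processing time meets the deadline."""
    f_max = max(freqs)
    for f in sorted(freqs):
        estimated = predicted_ms_at_max * f_max / f   # crude inverse scaling
        if estimated <= deadline_ms:
            return f
    return f_max   # even the highest frequency may miss the deadline

# A cheap query gets a low frequency, an expensive one gets a high frequency.
print(pick_frequency(predicted_ms_at_max=80, deadline_ms=500))
print(pick_frequency(predicted_ms_at_max=400, deadline_ms=500))
```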
25

He, Bing. « Efficient information retrieval from the World Wide Web ». Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ33938.pdf.

Full text
26

Bruch, Jessica, et Monica Bellgran. « Design information for efficient equipment supplier/buyer integration ». Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-14106.

Full text
Abstract:
Purpose - The purpose of this paper is to describe the underlying design information and success factors for production equipment acquisition, in order to support the design of high-performance production systems. Design/methodology/approach - The research strategy employed was an in-depth case study of an industrialization project, together with a questionnaire of 25 equipment suppliers. Findings - The study provides the reader with an insight into the role of design information when acquiring production equipment by addressing questions such as: What type of information is used? How do equipment suppliers obtain information? What factors facilitate a smooth production system acquisition? Research limitations/implications - Limitations are primarily associated with the chosen research methodology, which requires further empirical studies to establish a generic value. Practical implications - The implications are that manufacturing companies have to transfer various types of design information with respect to the content and kind of information. More attention has to be placed on what information is transferred to ensure that equipment suppliers receive all the information needed to design and subsequently build the production equipment. To facilitate integration of equipment suppliers, manufacturing companies should appoint a contact person who can gather, understand and transform relevant design information. Originality/value - External integration of equipment suppliers in production system design by means of design information is an area that has been rarely addressed in academia and industry
27

魯建江 et Kin-kong Loo. « Efficient mining of association rules using conjectural information ». Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31224878.

Full text
28

CARDOSO, EDUARDO TEIXEIRA. « EFFICIENT METHODS FOR INFORMATION EXTRACTION IN NEWS WEBPAGES ». PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2011. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28984@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
We tackle the task of news webpage segmentation, specifically identifying the news title, publication date and story body. While there are very good results in the literature, most of them rely on webpage rendering, which is a very time-consuming step. We focus on scenarios with a high volume of documents, where a short execution time is a must. The chosen approach extends our previous work in the area, combining structural properties with hints of visual presentation styles, computed with a faster method than regular rendering, and machine learning algorithms. In our experiments, we paid special attention to some aspects that are often overlooked in the literature, such as processing time and the generalization of the extraction results to unseen domains. Our approach has been shown to be about an order of magnitude faster than an equivalent full-rendering alternative while retaining a good quality of extraction.
29

Freiwat, Sami, et Lukas Öhlund. « Fuel-Efficient Platooning Using Road Grade Preview Information ». Thesis, Uppsala universitet, Avdelningen för systemteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-270263.

Texte intégral
Résumé :
Platooning is an interesting area which involves the possibility of decreasing the fuel consumption of heavy-duty vehicles (HDVs). By reducing the inter-vehicle spacing in the platoon we can reduce air drag, which in turn reduces fuel consumption. Two fuel-efficient model predictive controllers for HDVs in a platoon have been formulated in this master's thesis, both utilizing road grade preview information. The first controller is based on linear programming (LP) algorithms and the second on quadratic programming (QP). These two platooning controllers are compared with each other and with generic controllers from Scania. The LP controller proved to be more fuel-efficient than the QP controller; the Scania controllers are, however, more fuel-efficient than the LP controller.
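As a rough illustration of how road grade preview can enter an LP-based controller, the toy below minimises a traction-work proxy for fuel over a short horizon with linearised longitudinal dynamics; every vehicle parameter is invented, and this is neither the thesis' controller nor Scania's.

```python
# Minimal LP sketch: choose traction forces u_k over a horizon, given a road-grade
# preview theta[k], so that traction work (a crude fuel proxy) is minimised while
# the speed stays in a band and returns to its initial value. All values are assumed.
import numpy as np
from scipy.optimize import linprog

N, dt, m, g = 20, 1.0, 40000.0, 9.81
v0, v_min, v_max, u_max = 22.0, 19.0, 25.0, 30000.0
theta = 0.02 * np.sin(np.linspace(0, np.pi, N))   # road grade preview [rad] (toy hill)
c_r, drag = 0.006, 2500.0                          # rolling coefficient, linearised drag [N]

# Decision variables x = [u_0..u_{N-1}, v_1..v_N]
c = np.concatenate([dt * np.ones(N), np.zeros(N)])  # minimise sum of traction forces

A_eq = np.zeros((N, 2 * N))
b_eq = np.zeros(N)
for k in range(N):
    F_res = m * g * np.sin(theta[k]) + c_r * m * g * np.cos(theta[k]) + drag
    A_eq[k, k] = -dt / m           # u_k
    A_eq[k, N + k] = 1.0           # v_{k+1}
    if k > 0:
        A_eq[k, N + k - 1] = -1.0  # v_k
    b_eq[k] = -(dt / m) * F_res + (v0 if k == 0 else 0.0)

bounds = [(0.0, u_max)] * N + [(v_min, v_max)] * (N - 1) + [(v0, v0)]  # end at initial speed
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("traction force profile [N]:", res.x[:N].round(1) if res.success else res.message)
```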
Styles APA, Harvard, Vancouver, ISO, etc.
30

Loo, Kin-kong. « Efficient mining of association rules using conjectural information ». Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22505544.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
31

Yu, Xiangshen. « Efficient Interest Forwarding for Vehicular Information-Centric Networks ». Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38252.

Texte intégral
Résumé :
Content distribution in Vehicular Ad-hoc Networks (VANETs) has always been a critical challenge, due to the peculiar characteristics of VANETs, such as high mobility, intermittent connectivity, and dynamic topologies. In fact, traditional host-centric networks have been shown to be unable to handle the increasing demand for content distribution in VANETs. Recently, Information-Centric Networks (ICN) have been proposed for VANETs to cope with the existing issues and improve content delivery. In Vehicular Information-Centric Networks, instead of communicating in a host-to-host pattern and maintaining host-to-host links during the communication, consumers opportunistically send Interest requests to neighbor vehicles, which may have the desired Data packets that can satisfy the Interest packets. However, uncontrolled Interest packet transmissions for content search will result in a waste of resources and diminish the performance of applications in VANETs. In this thesis, we focus on two daunting problems that have limited content distribution in Vehicular Information-Centric Networks when using Vehicle-to-Vehicle (V2V) communication: (i) unreliable content delivery and (ii) the broadcast storm. We propose a suite of protocols, OIFP, LISIC and LOCOS, designed to tackle these and other issues. In the proposed protocols, we have considered different metrics in VANETs that may influence content distribution, such as distance, velocity, directions and the locations of the producers and consumers. By utilizing a small deferred timer, which is the time held by a forwarding vehicle before sending the Interest packet out, priority is given to selected vehicles to forward the Interest packets. Extensive simulations show that all the proposed protocols outperform the vanilla VNDN protocol regarding transmission delay, content satisfaction rate and the average number of Interest transmissions. In addition, we have implemented several related works and compared them with our protocols; the overall performance of the proposed LOCOS protocol outperforms the related works. Moreover, our protocols alleviate the broadcast storm problem and improve the content delivery rate.
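The deferred-timer idea can be illustrated in a few lines: vehicles that would make more progress toward the content producer wait less before rebroadcasting an Interest, and cancel their transmission if a better-placed neighbour forwards first. This is a generic sketch with made-up constants, not the OIFP, LISIC or LOCOS logic itself.

```python
# Distance-based deferred timer (illustrative): more forward progress toward the
# producer -> shorter wait -> higher forwarding priority. Constants are assumptions.
MAX_DEFER_S = 0.005    # upper bound on the defer timer (assumed)
RADIO_RANGE_M = 300.0  # assumed V2V radio range

def defer_timer(d_me_to_producer, d_sender_to_producer):
    # progress made toward the producer if this vehicle forwards instead of the sender
    progress = max(0.0, d_sender_to_producer - d_me_to_producer)
    score = min(progress / RADIO_RANGE_M, 1.0)  # normalise by radio range
    return MAX_DEFER_S * (1.0 - score)          # best-placed vehicle waits the least

# A vehicle cancels its pending transmission if it overhears the same Interest being
# forwarded by a neighbour before its own timer fires.
print(defer_timer(d_me_to_producer=450.0, d_sender_to_producer=700.0))
```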
Styles APA, Harvard, Vancouver, ISO, etc.
32

CARRATINO, LUIGI. « Resource Efficient Large-Scale Machine Learning ». Doctoral thesis, Università degli studi di Genova, 2020. http://hdl.handle.net/11567/1001813.

Texte intégral
Résumé :
Non-parametric models provide a principled way to learn non-linear functions. In particular, kernel methods are accurate prediction tools that rely on solid theoretical foundations. Although they enjoy optimal statistical properties, they have limited applicability in real-world large-scale scenarios because of their stringent computational requirements in terms of time and memory. Indeed, their computational costs scale at least quadratically with the number of points in the dataset, and many of the modern machine learning challenges require training on datasets of millions if not billions of points. In this thesis, we focus on scaling kernel methods, developing novel algorithmic solutions that incorporate budgeted computations. To derive these algorithms we mix ideas from statistics, optimization, and randomized linear algebra. We study the statistical and computational trade-offs for various non-parametric models, the key component to derive numerical solutions with resources tailored to the statistical accuracy allowed by the data. In particular, we study the estimator defined by stochastic gradients and random features, showing how all the free parameters provably govern both the statistical properties and the computational complexity of the algorithm. We then see how to blend the Nyström approximation and preconditioned conjugate gradient to derive a provably statistically optimal solver that can easily scale to datasets of millions of points on a single machine. We also derive a provably accurate leverage score sampling algorithm that can further improve the latter solver. Finally, we see how the Nyström approximation with leverage scores can be used to scale Gaussian processes in a bandit optimization setting, deriving a provably accurate algorithm. The theoretical analysis and the new algorithms presented in this work represent a step towards building a new generation of efficient non-parametric algorithms with minimal time and memory footprints.
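One ingredient mentioned in the abstract, random features, can be sketched compactly: approximate a Gaussian kernel with random Fourier features and solve a cheap linear problem on top. The sketch below is only illustrative (the closed-form ridge step and all hyperparameters are choices of this example, not the estimator studied in the thesis).

```python
# Random Fourier features approximating the Gaussian kernel exp(-gamma * ||x - y||^2),
# followed by ordinary ridge regression on the feature map. Illustrative sketch only.
import numpy as np

def random_fourier_features(X, n_features=300, gamma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def fit_ridge(Z, y, lam=1e-3):
    # closed-form ridge on the feature map: (Z^T Z + lam I) w = Z^T y
    return np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)

X = np.random.randn(2000, 10)
y = np.sin(X[:, 0]) + 0.1 * np.random.randn(2000)
Z = random_fourier_features(X)
w = fit_ridge(Z, y)
pred = Z @ w  # in-sample predictions of the approximate kernel model
```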
Styles APA, Harvard, Vancouver, ISO, etc.
33

Nguyen, Chi Mai. « Efficient Reasoning with Constrained Goal Models ». Doctoral thesis, Università degli studi di Trento, 2017. https://hdl.handle.net/11572/367790.

Texte intégral
Résumé :
Goal models have been widely used in Computer Science to represent software requirements, business objectives, and design qualities. Existing goal modelling techniques, however, have shown limitations of expressiveness and/or tractability in coping with complex real-world problems. In this work, we exploit advances in automated reasoning technologies, notably Satisfiability and Optimization Modulo Theories (SMT/OMT), and we propose and formalize: (i) an extended modelling language for goals, namely the Constrained Goal Model (CGM), which makes explicit the notion of goal refinements and of domain assumptions, allows for expressing preferences between goals and refinements, and allows for associating numerical attributes to goals and refinements for defining constraints and optimization goals over multiple objective functions, refinements and their numerical attributes; (ii) a novel set of automated reasoning functionalities over CGMs, allowing for automatically generating suitable realizations of input CGMs, under user-specified assumptions and constraints, that also maximize preferences and optimize given objective functions. We are also interested in supporting software evolution caused by changing requirements and/or changes in the operational environment of a software system. For example, users of a system may want new functionalities or performance enhancements to cope with a growing user population (requirements evolution). Alternatively, vendors of a system may want to minimize costs in implementing requirements changes (evolution requirements). We propose to use CGMs to represent the requirements of a system and capture requirements changes in terms of incremental operations on a goal model. Evolution requirements are then represented as optimization goals that minimize implementation costs or customer value. We can then exploit reasoning techniques to derive optimal new specifications for an evolving software system. We have implemented these modelling and reasoning functionalities in a tool, named CGM-Tool, using the OMT solver OptiMathSAT as automated reasoning backend. Moreover, we have conducted an experimental evaluation on large CGMs to support the claim that our proposal scales well for goal models with thousands of elements. To assess our framework's usability, we have employed a user-oriented evaluation using an enquiry evaluation method.
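A toy flavour of the automated reasoning involved: encode an OR-refined goal, a soft preference and a numeric objective, and let an OMT-style solver pick a realization. The example below uses Z3's optimizer as a stand-in for OptiMathSAT (the solver CGM-Tool actually uses), and the small goal graph is invented for illustration.

```python
# Tiny OMT-style realization of an OR-refined goal with a preference and a cost
# objective. Z3 is used here only as an illustrative stand-in for OptiMathSAT.
from z3 import Bool, Int, Optimize, Implies, Or, If, sat

root = Bool("DeliverService")
via_cloud = Bool("ViaCloud")
via_onprem = Bool("ViaOnPrem")
cost = Int("cost")

opt = Optimize()
opt.add(root)                                        # the root goal must be satisfied
opt.add(Implies(root, Or(via_cloud, via_onprem)))    # OR-refinement of the root goal
opt.add(cost == If(via_cloud, 10, 0) + If(via_onprem, 25, 0))  # numeric attributes
opt.add_soft(via_cloud, weight=1)                    # soft preference between refinements
opt.minimize(cost)                                   # objective over the realization

if opt.check() == sat:
    print(opt.model())  # a realization satisfying the goal at minimum cost
```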
Styles APA, Harvard, Vancouver, ISO, etc.
34

Mendonca, Costa Javier. « Context-Aware Machine to Machine Communications in Cellular Networks ». Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-143180.

Texte intégral
Résumé :
Cellular network based Machine-to-Machine (M2M) communications have been growing rapidly in recent years, being used in a wide range of services such as security, metering, health, remote control, tracking and so on. A critical issue that needs to be considered in M2M communications is energy efficiency: typically the machines are powered by batteries of low capacity, and it is important to optimize the way the power is consumed. In search of better M2M systems, we propose a context-aware framework for M2M communications in which machine-type communication (MTC) devices dynamically adapt their settings depending on a series of characteristics, such as data reporting mode and quality of service (QoS) features, so that higher energy efficiency is achieved, extending the operating lifetime of the M2M network. Simulations were performed with four commonly used M2M applications: home security, telehealth, climate and smart metering, achieving considerable energy savings and extending the operating lifetime of the network. Thus, it is shown that context plays an important role in the energy efficiency of an M2M system.
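A minimal sketch of what context-aware adaptation of settings could look like in code, assuming an invented profile table and a hypothetical power-saving radio mode; it is not the framework proposed in the thesis.

```python
# Toy context-aware configuration: an MTC device picks its reporting interval and
# radio mode from its application context and QoS needs. All values are invented.
PROFILES = {
    # context          (reporting interval [s], delay-tolerant?, payload priority)
    "home_security":   (5,    False, "high"),
    "telehealth":      (60,   False, "high"),
    "climate":         (900,  True,  "low"),
    "smart_metering":  (3600, True,  "low"),
}

def configure_device(context, battery_level):
    interval, delay_tolerant, priority = PROFILES[context]
    if battery_level < 0.2 and delay_tolerant:
        interval *= 4                     # stretch reporting when energy is scarce
    radio_mode = "power_saving" if delay_tolerant else "connected"
    return {"interval_s": interval, "radio_mode": radio_mode, "priority": priority}

print(configure_device("smart_metering", battery_level=0.15))
```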
Styles APA, Harvard, Vancouver, ISO, etc.
35

Hibraj, Feliks <1995&gt. « Efficient tensor kernel methods for sparse regression ». Master's Degree Thesis, Università Ca' Foscari Venezia, 2020. http://hdl.handle.net/10579/16921.

Texte intégral
Résumé :
Recently, classical kernel methods have been extended by the introduction of suitable tensor kernels so as to promote sparsity in the solution of the underlying regression problem. Indeed, they solve an lp-norm regularization problem, with p = m/(m-1) and m an even integer, which happens to be close to a lasso problem. However, a major drawback of the method is that storing tensors requires a considerable amount of memory, ultimately limiting its applicability. In this work we address this problem by proposing two advances. First, we directly reduce the memory requirement by introducing a new and more efficient layout for storing the data. Second, we use a Nyström-type subsampling approach, which allows for a training phase with a smaller number of data points, so as to reduce the computational cost. Experiments, both on synthetic and real datasets, show the effectiveness of the proposed improvements. Finally, we implement the code in C++ to further speed up the computation.
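The kind of regularized problem referred to above can be written down directly; the sketch below fits a linear model with an lp penalty, p = m/(m-1) for m = 4, using a generic solver. It only illustrates the objective, not the tensor-kernel algorithms or the memory layout proposed in the thesis.

```python
# Least squares with an lp-norm penalty, p = m/(m-1) (here m = 4, so p = 4/3), which
# promotes sparse solutions while remaining differentiable. Solver choice is illustrative.
import numpy as np
from scipy.optimize import minimize

def lp_regression(X, y, lam=0.1, m=4):
    p = m / (m - 1)
    def objective(w):
        resid = X @ w - y
        return 0.5 * resid @ resid + lam * np.sum(np.abs(w) ** p)
    w0 = np.zeros(X.shape[1])
    return minimize(objective, w0, method="L-BFGS-B").x

X = np.random.randn(200, 50)
w_true = np.zeros(50)
w_true[:3] = [2.0, -1.5, 1.0]          # only three informative coefficients
y = X @ w_true + 0.05 * np.random.randn(200)
w_hat = lp_regression(X, y)            # estimated weights concentrate on the first three
```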
Styles APA, Harvard, Vancouver, ISO, etc.
36

Ramos, Gabriel de Oliveira. « Regret minimisation and system-efficiency in route choice ». reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/178665.

Texte intégral
Résumé :
Aprendizagem por reforço multiagente (do inglês, MARL) é uma tarefa desafiadora em que agentes buscam, concorrentemente, uma política capaz de maximizar sua utilidade. Aprender neste tipo de cenário é difícil porque os agentes devem se adaptar uns aos outros, tornando o objetivo um alvo em movimento. Consequentemente, não existem garantias de convergência para problemas de MARL em geral. Esta tese explora um problema em particular, denominado escolha de rotas (onde motoristas egoístas deve escolher rotas que minimizem seus custos de viagem), em busca de garantias de convergência. Em particular, esta tese busca garantir a convergência de algoritmos de MARL para o equilíbrio dos usuários (onde nenhum motorista consegue melhorar seu desempenho mudando de rota) e para o ótimo do sistema (onde o tempo médio de viagem é mínimo). O principal objetivo desta tese é mostrar que, no contexto de escolha de rotas, é possível garantir a convergência de algoritmos de MARL sob certas condições. Primeiramente, introduzimos uma algoritmo de aprendizagem por reforço baseado em minimização de arrependimento, o qual provamos ser capaz de convergir para o equilíbrio dos usuários Nosso algoritmo estima o arrependimento associado com as ações dos agentes e usa tal informação como sinal de reforço dos agentes. Além do mais, estabelecemos um limite superior no arrependimento dos agentes. Em seguida, estendemos o referido algoritmo para lidar com informações não-locais, fornecidas por um serviço de navegação. Ao usar tais informações, os agentes são capazes de estimar melhor o arrependimento de suas ações, o que melhora seu desempenho. Finalmente, de modo a mitigar os efeitos do egoísmo dos agentes, propomos ainda um método genérico de pedágios baseados em custos marginais, onde os agentes são cobrados proporcionalmente ao custo imposto por eles aos demais. Neste sentido, apresentamos ainda um algoritmo de aprendizagem por reforço baseado em pedágios que, provamos, converge para o ótimo do sistema e é mais justo que outros existentes na literatura.
Multiagent reinforcement learning (MARL) is a challenging task, where self-interested agents concurrently learn a policy that maximises their utilities. Learning here is difficult because agents must adapt to each other, which makes their objective a moving target. As a side effect, no convergence guarantees exist for the general MARL setting. This thesis exploits a particular MARL problem, namely route choice (where selfish drivers aim at choosing routes that minimise their travel costs), to deliver convergence guarantees. We are particularly interested in guaranteeing convergence to two fundamental solution concepts: the user equilibrium (UE, when no agent benefits from unilaterally changing its route) and the system optimum (SO, when average travel time is minimum). The main goal of this thesis is to show that, in the context of route choice, MARL can be guaranteed to converge to the UE as well as to the SO under certain conditions. Firstly, we introduce a regret-minimising Q-learning algorithm, which we prove converges to the UE. Our algorithm works by estimating the regret associated with agents' actions and using such information as the reinforcement signal for updating the corresponding Q-values. We also establish a bound on the agents' regret. We then extend this algorithm to deal with non-local information provided by a navigation service. Using such information, agents can improve their regret estimates, thus performing empirically better. Finally, in order to mitigate the effects of selfishness, we also present a generalised marginal-cost tolling scheme in which drivers are charged proportionally to the cost imposed on others. We then devise a toll-based Q-learning algorithm, which we prove converges to the SO and is fairer than existing tolling schemes.
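To fix ideas, a toy version of "regret as the reinforcement signal" for route choice is sketched below, with an invented three-route network and fixed congestion; it is not the algorithm analysed in the thesis and carries none of its convergence guarantees.

```python
# Toy regret-driven route choice for a single learner: the estimated regret of the
# chosen route (its average cost minus the best known average) is fed back into Q.
import random

ROUTES = ["r0", "r1", "r2"]
ALPHA, EPS, EPISODES = 0.1, 0.1, 2000

def travel_cost(route, n_users_on_route):
    base = {"r0": 10.0, "r1": 12.0, "r2": 15.0}[route]
    return base + 0.05 * n_users_on_route      # toy congestion effect

Q = {r: 0.0 for r in ROUTES}
avg_cost = {r: 0.0 for r in ROUTES}            # running cost estimate per route
counts = {r: 0 for r in ROUTES}

for _ in range(EPISODES):
    route = random.choice(ROUTES) if random.random() < EPS else max(Q, key=Q.get)
    cost = travel_cost(route, n_users_on_route=50)      # other drivers held fixed here
    counts[route] += 1
    avg_cost[route] += (cost - avg_cost[route]) / counts[route]
    # estimated regret: how much worse the chosen route is than the best known average
    regret = avg_cost[route] - min(avg_cost[r] for r in ROUTES if counts[r] > 0)
    Q[route] += ALPHA * (-regret - Q[route])   # higher regret -> lower route value

print(max(Q, key=Q.get))  # route with the lowest estimated regret
```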
Styles APA, Harvard, Vancouver, ISO, etc.
37

Costello, Greg. « Price discovery and information diffusion in the Perth housing market 1988-2000 ». UWA Business School, 2004. http://theses.library.uwa.edu.au/adt-WU2005.0034.

Texte intégral
Résumé :
[Truncated abstract] This thesis examines informational efficiency and price discovery processes within the Perth housing market for the period 1988-2000 by utilising a rich source of Western Australian Valuer General’s Office (VGO) data. Fama’s (1970) classification of market efficiency as potentially weak form, semi-strong, or strong form has been a dominant paradigm in tests of market efficiency in many asset markets. While there are some parallels, the results of tests in this thesis suggest there are also limitations in applying this paradigm to housing markets. The institutional structure of housing markets dictates that a deeper recognition of important housing market characteristics is required. Efficiency in housing markets is desirable in that if prices provide accurate signals for purchase or disposition of real estate assets this will facilitate the correct allocation of scarce financial resources for housing services. The theory of efficient markets suggests that it is desirable for information diffusion processes in a large aggregate housing market to facilitate price corrections. In an efficient housing market, these processes can be observed and will enable housing units to be exchanged with an absence of market failure in all price and location segments. Throughout this thesis there is an emphasis on disaggregation of the Perth housing market both by price and location criteria. Results indicate that the Perth housing market is characterised by varying levels of informational inefficiency in both price and location segments and there are some important pricing-size influences.
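As a generic illustration only (not the thesis' methodology), a basic weak-form efficiency check asks whether index returns are serially correlated; the price series below is simulated.

```python
# Weak-form efficiency sanity check: lag-1 autocorrelation of log returns of a
# (simulated) house-price index. Significant autocorrelation is evidence against
# weak-form informational efficiency.
import numpy as np
from scipy import stats

prices = np.cumprod(1 + 0.01 * np.random.randn(156)) * 100   # fake quarterly index
returns = np.diff(np.log(prices))

rho, pvalue = stats.pearsonr(returns[:-1], returns[1:])
print(f"lag-1 autocorrelation = {rho:.3f}, p-value = {pvalue:.3f}")
```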
Styles APA, Harvard, Vancouver, ISO, etc.
38

On, Sai Tung. « Efficient transaction recovery on flash disks ». HKBU Institutional Repository, 2010. http://repository.hkbu.edu.hk/etd_ra/1170.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
39

Malhotra, Suvarcha. « Information capacity and power efficiency in operational transconductance amplifiers ». College Park, Md. : University of Maryland, 2003. http://hdl.handle.net/1903/104.

Texte intégral
Résumé :
Thesis (M.S.) -- University of Maryland, College Park, 2003.
Thesis research directed by: Dept. of Electrical and Computer Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
Styles APA, Harvard, Vancouver, ISO, etc.
40

Cheah, Eng-Tuck. « The implications of information processing efficiency on decision making ». Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/184037/.

Texte intégral
Résumé :
This thesis investigates the implications of information processing efficiency on decision making with respect to the ability of decision makers to process information in a rational and timely manner. In order to examine the different aspects of information efficiency with respect to decision making, three different settings were used. First, attitudes and perceptions held by individual decision makers play an important role in the information processing stage of a decision. Therefore, the first thrust of this thesis seeks to investigate the impact of demographic characteristics of decision makers (socially responsible investors (SRIs)) on their attitudes and perceptions (in relation to their corporate social responsibility (CSR) views). The results show that demographic characteristics are useful predictors of the CSR views held by SRIs. This implies that companies can reduce their cost of capital by attracting the affluent members of the SRI community and increase their CSR rankings by creating diversity in their corporate boardrooms. These efforts, if undertaken by companies, can help increase the share price of the respective companies. Government agencies can also encourage companies to implement CSR agendas that appeal to specific members of the SRI community (clientele effect). Second, the ability of decision makers to process information in a rational manner can be seriously undermined when decision makers are expected to match the different motivations underlying their own or others' objectives with the multiple choices available to them. In the second thrust of the thesis, a state-contingent market with multi-competitor choices (the UK horseracing pari-mutuel betting market) is used to illustrate the discovery of the determinants of demand (day-of-the-week, weekend, public holiday, number of races in the same hour, field size, televised races, flat and jump races, race quality, timing of the race during the day, insider trading, track conditions, bookmakers' over-round and risk attitude of bettors) unique to different groups of decision makers (bettors). The results demonstrate that unique sets of determinants can be used to identify the different types of decision makers (that is, sophisticated and unsophisticated bettors). The discovery of these unique determinants of demand can be used by the respective authorities (British Horseracing Board, Horseracing Betting Levy and Tote boards) in deciding which variables are important to influence the behaviour of the respective decision makers (bettors and horseracing authorities). Third, decision makers ought to be able to arrive at a decision in a timely manner. The third thrust of this thesis investigates the speed of adjustment with respect to the arrival of new and unexpected information in understanding the financial integration process in the Asia Pacific region (APR). Using stock market capitalization as a measure of equity market size, it was also found that more advanced equity markets are more informationally efficient than less advanced equity markets, possibly because the infrastructure which supports information flow enables information to be easily accessed by investors for decision making. The results suggest that a more integrated equity market in the APR can lead to a greater speed of adjustment with respect to information shocks.
Therefore, domestic governments have a role to play in ensuring the necessary infrastructure to facilitate information flow is improved and better integrated with neighbouring equity markets. Finally, the thesis concludes that demographic characteristics play an important role in influencing the rational information processing involved in decision making by individuals. When confronted with choices, decision makers are affected by their various motivations, and those who seek to capitalise on others' decisions need to be aware of these motivations. In addition, the infrastructure on which information flows is essential in influencing the speed at which information is processed.
Styles APA, Harvard, Vancouver, ISO, etc.
41

Wang, Yong. « Diversification, information asymmetry, cost of capital, and production efficiency ». Diss., Temple University Libraries, 2008. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/13948.

Texte intégral
Résumé :
Business Administration
Ph.D.
This study examines how diversification changes firms' key characteristics, which consequently alter firms' value. I focus on this topic because of the mixed findings in the literature about the valuation effect of diversification. This study offers deeper insights into the influence of diversification on important valuation factors that have already been identified in the finance literature. Specifically, it examines whether diversification affects firms' information asymmetry problem, firms' cost of capital and cash flow, and firms' production efficiency. The study looks at both the financial and non-financial industries, and the chapters are arranged in the following order. Firstly, empirical studies show that investors do not value BHCs' pursuit of non-interest income generating activities, and yet these activities have demonstrated a dramatic pace of growth in recent decades. An interesting question is what factors drive the discontent of investors with the diversification endeavors of BHCs in non-interest income activities. The first chapter examines the subject from the viewpoint of information opaqueness, which is unique in the banking industry in terms of its intensity. We propose that increased diversification into non-interest income activities deepens information asymmetry, making BHCs more opaque and curtailing their value as a result. Two important results are obtained in support of this proposition. First, analysts' forecasts are less accurate and more dispersed for BHCs with greater diversity of non-interest income activities, indicating that the information asymmetry problem is more severe for these BHCs. Second, stock market reactions to earnings announcements by these BHCs signaling new information to the market are larger, indicating that more information is revealed to the market by each announcement. These findings indicate that increased diversity of non-interest income activities is associated with more severe information asymmetry between insiders and outsiders and, hence, a lower valuation by shareholders. Secondly, since Lang and Stulz (1994) and Berger and Ofek (1995), the corporate finance literature has taken the position that industrial diversification is associated with a firm value discount. However, the validity and the sources of the diversification discount are still highly debated. In particular, extant studies limit themselves to cash flow effects, totally overlooking the cost of capital as a factor determining firm value. Inspired by Lamont and Polk (2001), the second chapter examines how industrial and international diversification change the conglomerates' cost of capital (equity and debt), and thereby firm value. Our empirical results, based on a sample of Russell 3000 firms over the 1998-2004 period, show that industrial (international) diversification is associated with a lower (higher) firm cost of capital. These findings also hold for firms fully financed with equity. In addition, international diversification is found to be associated with a lower operating cash flow, while industrial diversification does not alter it. These results indicate that industrial (international) diversification is associated with firm value enhancement (destruction). Given that the majority of firms involved in industrial diversification also diversify internationally, failing to separate these two dimensions of diversification may result in mistakenly attributing the diversification discount to industrial diversification.
Thirdly, financial conglomerates have been increasingly diversifying their business into banking, securities, and insurance activities, especially after the Gramm-Leach-Bliley Act (GLBA, 1999). The third chapter examines whether bank holding company (BHC) diversification is associated with improvement in production efficiency. Applying data envelopment analysis (DEA), the Malmquist index of productivity, and total factor productivity change as a decomposed factor of the index, are calculated for a sample of BHCs over the period 1997-2007. The following results are obtained. First, technical efficiency is negatively associated with activity diversification, and the effect is primarily driven by BHCs that did not diversify through Section 20 subsidiaries before GLBA. Second, the degree of change in diversification over time does not affect total factor productivity change but is negatively associated with technical efficiency change over time. This latter effect is also primarily shown for BHCs that did not have Section 20 subsidiaries before GLBA. Therefore, it can be concluded that diversification is on average associated with lower production efficiency of BHCs, especially those BHCs without the first-mover advantage obtained through Section 20 subsidiaries. These chapters explore the possible channels through which diversification could alter firms' valuation. They contribute to the literature by offering further knowledge about the effect of diversification.
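For readers unfamiliar with DEA, the input-oriented CCR efficiency score that underlies the Malmquist index can be computed as a small linear program; the BHC inputs and outputs below are invented and the formulation is the textbook envelopment form, not the chapter's exact specification.

```python
# Input-oriented CCR DEA: for each decision-making unit (DMU) o, minimise theta subject
# to a convex combination of peers using no more than theta times o's inputs while
# producing at least o's outputs. Data are invented for illustration.
import numpy as np
from scipy.optimize import linprog

X = np.array([[5.0, 8.0, 6.0, 9.0],    # inputs  (rows = inputs, cols = DMUs)
              [3.0, 2.0, 4.0, 5.0]])
Y = np.array([[4.0, 6.0, 5.0, 6.0]])   # outputs (rows = outputs, cols = DMUs)

def ccr_efficiency(o):
    n = X.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                        # minimise theta
    A_ub, b_ub = [], []
    for i in range(X.shape[0]):                       # sum_j l_j x_ij <= theta * x_io
        A_ub.append(np.concatenate(([-X[i, o]], X[i, :])))
        b_ub.append(0.0)
    for r in range(Y.shape[0]):                       # sum_j l_j y_rj >= y_ro
        A_ub.append(np.concatenate(([0.0], -Y[r, :])))
        b_ub.append(-Y[r, o])
    bounds = [(None, None)] + [(0, None)] * n         # theta free, lambdas >= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

print([round(ccr_efficiency(o), 3) for o in range(X.shape[1])])  # 1.0 means efficient
```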
Temple University--Theses
Styles APA, Harvard, Vancouver, ISO, etc.
42

Herglotz, Christian [Verfasser]. « Energy Efficient Video Decoding / Christian Herglotz ». München : Verlag Dr. Hut, 2018. http://d-nb.info/1155058011/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
43

Elsken, Thomas [Verfasser], et Frank [Akademischer Betreuer] Hutter. « Efficient and practical neural architecture search ». Freiburg : Universität, 2021. http://d-nb.info/1236500423/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
44

Hansen, Tone. « A Digital Tool to Improve the Efficiency of IT Forensic Investigations ». Thesis, Högskolan i Halmstad, Akademin för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-40232.

Texte intégral
Résumé :
The IT forensic process causing bottlenecks in investigations is an identified issue, with multiple underlying causes – one of the main causes being the lack of expertise among those responsible for ordering IT forensic investigations. The focus of the study is to create and evaluate a potential solution to this problem, aiming to answer research questions related to a suitable architecture, structure and design of a digital tool that would assist individuals in creating IT forensic orders. This work evaluates concepts of such a digital tool. This is done using a grounded theory approach, where a series of test sessions together with the answers from a survey have been examined and analyzed in an iterative process. A low-fidelity prototype is used in the process. The resulting conclusion of the study is a set of concepts, ideas and principles for a digital tool that would aid in the IT forensic ordering process, as well as improve the efficiency of the IT forensic process itself. Future work could involve developing the concept further into a finished product, or using it to improve existing systems and tools, raising the efficiency and quality of the IT forensic process.
Styles APA, Harvard, Vancouver, ISO, etc.
45

Schall, Daniel [Verfasser]. « Energy Efficiency in Database Systems / Daniel Schall ». München : Verlag Dr. Hut, 2015. http://d-nb.info/107151301X/34.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
46

Daniel, Gregory Wayne. « An Evaluation of a Payer-Based Electronic Health Record in an Emergency Department on Quality, Efficiency, and Cost of Care ». Diss., The University of Arizona, 2008. http://hdl.handle.net/10150/195598.

Texte intégral
Résumé :
Background: Health information exchange technologies are currently being implemented in many practice settings with the promise to improve quality, efficiency, and costs of care. The benefits are likely highest in settings where entry into the healthcare system is gained; however, in no setting is the need for timely, accurate, and pertinent information more critical than in the emergency department (ED). This study evaluated the use of a payer-based electronic health record (EHR) in an ED on quality, efficiency, and costs of care among a commercially insured population. Methods: Data came from a large health plan and the ED of a large urban hospital. Visits with the use of a payer-based EHR were identified from claims between 9/1/05 and 2/17/06. A historical comparison sample of visits was identified from 11/1/04 to 3/31/05. Outcomes included return visits, ED duration, use of laboratory and diagnostic imaging, total costs during and in the four weeks after the visit, and prescription drug utilization. Results: A total of 2,288 ED visits were analyzed (779 EHR visits and 1,509 comparison visits). Discharged visits were associated with an 18-minute shorter duration (95% CI: 5-33), whereas the EHR among admitted visits was associated with a 77-minute reduction (95% CI: 28-126). The EHR was also associated with $1,560 (95% CI: $43-$2,910) in savings in total plan payments for the visit among admitted visits. No significant differences were observed in return visits, laboratory or diagnostic imaging services, or total costs over the four-week follow-up. Exploratory analyses suggested that the EHR may be associated with a reduction in the number of prescription drugs used among chronic medication users. Conclusion: The EHR studied was associated with a significant reduction in ED duration. Technologies that can reduce ED lengths of stay can have a substantial impact on the care provided to patients and their satisfaction. The data suggest that the EHR may be associated with lower health plan paid amounts among admitted visits and a reduction in the number of pharmacy claims after the visit among chronic users of prescription drugs. Additional research should be conducted to confirm these findings.
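A bare-bones sketch of the kind of group comparison reported above (EHR versus historical comparison visits on ED duration), using simulated data and a simple Welch t-test rather than the study's actual models.

```python
# Simulated comparison of ED duration (minutes) between EHR and comparison visits.
# Group sizes mirror the abstract; means and spreads are invented.
import numpy as np
from scipy import stats

ehr = np.random.normal(loc=220, scale=60, size=779)
comparison = np.random.normal(loc=240, scale=60, size=1509)

tstat, pvalue = stats.ttest_ind(ehr, comparison, equal_var=False)  # Welch t-test
print(f"mean reduction = {comparison.mean() - ehr.mean():.1f} min, "
      f"t = {tstat:.2f}, p = {pvalue:.3f}")
```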
Styles APA, Harvard, Vancouver, ISO, etc.
47

Duta, Ionut Cosmin. « Efficient and Effective Solutions for Video Classification ». Doctoral thesis, Università degli studi di Trento, 2017. https://hdl.handle.net/11572/369314.

Texte intégral
Résumé :
The aim of this PhD thesis is to make a step forward towards teaching computers to understand videos in a similar way as humans do. In this work we tackle the video classification and/or action recognition tasks. This thesis was completed in a period of transition, with the research community moving from traditional approaches (such as hand-crafted descriptor extraction) to deep learning. Therefore, this thesis captures this transition period; however, unlike image classification, where the state-of-the-art results are dominated by deep learning approaches, for video classification deep learning approaches are not so dominant. As a matter of fact, most of the current state-of-the-art results in video classification are based on a hybrid approach where hand-crafted descriptors are combined with deep features to obtain the best performance. This is due to several factors, such as the fact that video is more complex data than an image, and therefore more difficult to model, and that video datasets are not large enough to train deep models effectively. The pipeline for video classification can be broken down into three main steps: feature extraction, encoding and classification. While the existing techniques for the classification part are more mature, for feature extraction and encoding there is still significant room for improvement. In addition to these main steps, the framework contains some pre/post-processing techniques, such as feature dimensionality reduction, feature decorrelation (for instance using Principal Component Analysis - PCA) and normalization, which can influence considerably the performance of the pipeline. One of the bottlenecks of the video classification pipeline is the feature extraction step, where most of the approaches are extremely computationally demanding, which makes them unsuitable for real-time applications. In this thesis, we tackle this issue, propose different speed-ups to improve the computational cost and introduce a new descriptor that can capture motion information from a video without the need of computing optical flow (which is very expensive to compute). Another important component for video classification is the feature encoding step, which builds the final video representation that serves as input to a classifier. During the PhD, we proposed several improvements over the standard approaches for feature encoding, and we also propose a new approach for encoding deep features. To summarize, the main contributions of this thesis are as follows: (1) We propose several speed-ups for descriptor extraction, providing a version of the standard video descriptors that can run in real time. We also investigate the trade-off between accuracy and computational efficiency;
(2) We provide a new descriptor for extracting information from a video, which is very efficient to compute, being able to extract motion information without the need of extracting the optical flow; (3) We investigate different improvements over the standard encoding approaches for boosting the performance of the video classification pipeline; (4) We propose a new feature encoding approach specifically designed for encoding local deep features, providing a more robust video representation.
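The three-step pipeline (feature extraction, encoding, classification) can be illustrated in miniature with a bag-of-visual-words encoder and a linear classifier; the local descriptors below are random placeholders, and the encoding is deliberately the standard baseline, not the descriptors or encodings contributed by the thesis.

```python
# Minimal feature extraction -> encoding -> classification pipeline. Local features
# are faked as random vectors; in practice they would come from a video descriptor.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def encode_bow(local_feats, codebook):
    words = codebook.predict(local_feats)          # assign descriptors to visual words
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    hist = hist.astype(float)
    return hist / (np.linalg.norm(hist) + 1e-8)    # L2-normalised histogram

rng = np.random.default_rng(0)
videos = [rng.normal(size=(200, 64)) for _ in range(40)]  # 40 videos x 200 descriptors
labels = [i % 2 for i in range(40)]                        # toy binary labels

codebook = KMeans(n_clusters=32, n_init=4, random_state=0).fit(np.vstack(videos))
X = np.array([encode_bow(v, codebook) for v in videos])
clf = LinearSVC().fit(X, labels)                           # final linear classifier
```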
Styles APA, Harvard, Vancouver, ISO, etc.
48

Billerbeck, Bodo, et bodob@cs rmit edu au. « Efficient Query Expansion ». RMIT University. Computer Science and Information Technology, 2006. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20060825.154852.

Texte intégral
Résumé :
Hundreds of millions of users each day search the web and other repositories to meet their information needs. However, queries can fail to find documents due to a mismatch in terminology. Query expansion seeks to address this problem by automatically adding terms from highly ranked documents to the query. While query expansion has been shown to be effective at improving query performance, the gain in effectiveness comes at a cost: expansion is slow and resource-intensive. Current techniques for query expansion use fixed values for key parameters, determined by tuning on test collections. We show that these parameters may not be generally applicable, and, more significantly, that the assumption that the same parameter settings can be used for all queries is invalid. Using detailed experiments, we demonstrate that new methods for choosing parameters must be found. In conventional approaches to query expansion, the additional terms are selected from highly ranked documents returned from an initial retrieval run. We demonstrate a new method of obtaining expansion terms, based on past user queries that are associated with documents in the collection. The most effective query expansion methods rely on costly retrieval and processing of feedback documents. We explore alternative methods for reducing query-evaluation costs, and propose a new method based on keeping a brief summary of each document in memory. This method allows query expansion to proceed three times faster than previously, while approximating the effectiveness of standard expansion. We investigate the use of document expansion, in which documents are augmented with related terms extracted from the corpus during indexing, as an alternative to query expansion. The overheads at query time are small. We propose and explore a range of corpus-based document expansion techniques and compare them to corpus-based query expansion on TREC data. These experiments show that document expansion delivers at best limited benefits, while query expansion, including standard techniques and efficient approaches described in recent work, usually delivers good gains. We conclude that document expansion is unpromising, but it is likely that the efficiency of query expansion can be further improved.
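The baseline the thesis builds on, pseudo-relevance feedback, is easy to sketch: take the top-ranked documents of an initial run and append their most frequent unseen terms to the query. The toy corpus and parameters below are illustrative only.

```python
# Plain pseudo-relevance feedback: expand the query with the most frequent terms from
# the top-ranked documents of an initial retrieval run. Corpus and k values are toys.
from collections import Counter

def expand_query(query_terms, ranked_docs, k_docs=3, n_terms=5):
    counts = Counter()
    for doc in ranked_docs[:k_docs]:                 # top-ranked feedback documents
        for term in doc.lower().split():
            if term not in query_terms:
                counts[term] += 1
    expansion = [t for t, _ in counts.most_common(n_terms)]
    return list(query_terms) + expansion

docs = ["fast query evaluation with inverted indexes",
        "query expansion improves recall using feedback documents",
        "inverted index compression for web search"]
print(expand_query(["query", "expansion"], docs))
```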
Styles APA, Harvard, Vancouver, ISO, etc.
49

Steffinlongo, Enrico <1987&gt. « Efficient security analysis of administrative access control policies ». Doctoral thesis, Università Ca' Foscari Venezia, 2017. http://hdl.handle.net/10579/12917.

Texte intégral
Résumé :
In recent years access control has been a crucial aspect of computer systems, since it is the component responsible for granting users specific permissions, enforcing an administrator-defined policy. This has led to a wide literature proposing and implementing access control models reflecting different system perspectives. Moreover, many analysis techniques have been developed with special attention to scalability, since many security properties have proved hard to verify. In this setting the presented work provides two main contributions. In the first, we study the security of workflow systems built on top of attribute-based access control in the case of collusion among multiple users. We define a formal model for an ARBAC-based workflow system and we state a notion of security against collusion. Furthermore, we propose a scalable static analysis technique for proving the security of a workflow. Finally, we implement it in a prototype tool, showing its effectiveness. In the second contribution, we propose a new model of administrative attribute-based access control (AABAC) where administrative actions are enabled by boolean expressions predicating on user attribute values. Subsequently, we introduce two static analysis techniques for the verification of the reachability problem: one precise but bounded, and one over-approximated. We also give a set of pruning rules in order to reduce the size of the problem, increasing the scalability of the analysis. Finally, we implement the analysis in a tool and show its effectiveness on several realistic case studies.
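In the spirit of the reachability analyses described (though far simpler than the thesis' techniques), the sketch below does a forward search over a single user's boolean attributes under invented administrative rules and asks whether a target attribute can ever be granted.

```python
# Toy forward-reachability check: administrative rules are (condition, attribute) pairs
# that set an attribute to true when the condition holds. The policy is invented.
from collections import deque

RULES = [
    (lambda a: a["employee"],                "trained"),
    (lambda a: a["trained"] and a["vetted"], "admin"),
    (lambda a: a["employee"],                "vetted"),
]

def reachable(initial, target):
    start = tuple(sorted(initial.items()))
    queue, seen = deque([start]), {start}
    while queue:
        state = dict(queue.popleft())
        if state.get(target):
            return True
        for cond, attr in RULES:
            if cond(state) and not state.get(attr):
                nxt = dict(state)
                nxt[attr] = True
                key = tuple(sorted(nxt.items()))
                if key not in seen:
                    seen.add(key)
                    queue.append(key)
    return False

print(reachable({"employee": True, "trained": False, "vetted": False, "admin": False},
                "admin"))  # True: the "admin" attribute is reachable under these rules
```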
Styles APA, Harvard, Vancouver, ISO, etc.
50

Savoia, Marco. « Architetture software per la coordinazione semantica : efficienza ed espressività ». Master's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3046/.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
