Dissertations / Theses on the topic 'Internet Servers'

Consult the top 50 dissertations / theses for your research on the topic 'Internet Servers.'


1

Voigt, Thiemo. "Architectures for Service Differentiation in Overloaded Internet Servers." Doctoral thesis, Uppsala University, 2002. http://publications.uu.se/theses/91-506-1559-9/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pham, Nam. "Data extraction from servers by the Internet Robot." Advisor: Bogdan M. Wilamowski. Auburn, Ala., 2009. http://hdl.handle.net/10415/1781.

Full text
3

Shukla, Amol. "TCP Connection Management Mechanisms for Improving Internet Server Performance." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/1030.

Full text
Abstract:
This thesis investigates TCP connection management mechanisms in order to understand the behaviour and improve the performance of Internet servers during overload conditions such as flash crowds. We study several alternatives for implementing TCP connection establishment, reviewing approaches taken by existing TCP stacks as well as proposing new mechanisms to improve server throughput and reduce client response times under overload. We implement some of these connection establishment mechanisms in the Linux TCP stack and evaluate their performance in a variety of environments. We also evaluate the cost of supporting half-closed connections at the server and assess the impact of an abortive release of connections by clients on the throughput of an overloaded server. Our evaluation demonstrates that connection establishment mechanisms that eliminate the TCP-level retransmission of connection attempts by clients increase server throughput by up to 40% and reduce client response times by two orders of magnitude. Connection termination mechanisms that preclude support for half-closed connections additionally improve server throughput by up to 18%.
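The retransmission problem the abstract targets is easy to see from the client side: a stock TCP stack retransmits SYNs with exponential backoff for tens of seconds while an overloaded server drops them. A minimal illustrative sketch (ours, not from the thesis) of a client that gives up quickly instead of waiting out the kernel's backoff:

```python
import socket

def connect_fail_fast(host, port, timeout=0.5):
    """Try to establish a TCP connection, but give up after `timeout`
    seconds instead of sitting through the kernel's SYN-retransmission
    backoff (which can take tens of seconds against an overloaded server)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return s  # connected socket
    except OSError:  # covers timeout and connection-refused alike
        s.close()
        return None
```

The thesis's server-side mechanisms go further by changing how the listen queue handles connection attempts; this sketch only shows the client-visible symptom being avoided.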
4

Wang, Jun. "High Performance I/O Architectures and File Systems for Internet Servers." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1028116005.

Full text
5

Nand, Alka. "Design and implementation of internet mail servers with embedded data compression." CSUSB ScholarWorks, 1997. https://scholarworks.lib.csusb.edu/etd-project/1482.

Full text
6

Chang, He. "Server selection for heterogeneous cloud video services." HKBU Institutional Repository, 2017. http://repository.hkbu.edu.hk/etd_oa/419.

Full text
Abstract:
Server selection is an important problem in cloud computing, in which cloud service providers direct user demands to servers in one of multiple data centers located in different geographical locations. Existing solutions usually assume homogeneity of cloud services (i.e., all users request the same type of service) and handle user demands on an individual basis, which incurs high computational overhead. In this study, we propose a new and effective server selection scheme that takes the diversity of cloud services into account. We focus on a specific cloud service, online video, and assume that different videos have different bandwidth requirements. We group users into clusters and handle demands on a cluster basis for faster and more efficient processing. In the first part, assuming that user demands and the bandwidth capacities of the servers in the data centers are given, our problem is to assign the user demands to the servers under the bandwidth constraint such that the overall latency (measured by network distance) between the user clusters and the selected servers is minimized. We design a server selection system and formulate this problem as a linear program that can be solved by existing techniques. The system periodically executes our scheme and computes an optimal server selection; user demands are assigned to the servers according to the optimal solution, achieving the minimum overall latency. Simulation results show that our scheme is significantly better than the random algorithm and the YouTube server selection strategy.
In the second part, we additionally take the storage capacities of the servers into account. The problem becomes assigning the user demands to the servers under both bandwidth and storage constraints, such that a weighted combination of the overall latency between the user clusters and the selected servers and the standard deviation of the traffic load across servers is minimized. We again design a server selection system and formulate the problem so that it can be solved by existing techniques. User demands are assigned according to the optimal solution, achieving the two goals of minimum overall latency and balanced traffic load. Simulation results show the influence of different weights for these two goals on the assignment of user demands.
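The cluster-to-server assignment above is formulated in the thesis as a linear program; the greedy heuristic below is our own illustration of the shape of the inputs and the bandwidth constraint, not the author's algorithm:

```python
def assign_clusters(demands, capacity, latency):
    """Greedy sketch: assign each user cluster to the lowest-latency
    server that still has bandwidth left. demands[c] is cluster c's
    bandwidth need, capacity[s] server s's bandwidth budget, and
    latency[c][s] the network distance from cluster c to server s."""
    remaining = list(capacity)
    assignment = {}
    # Serve the largest demands first so big clusters are not stranded.
    for c in sorted(range(len(demands)), key=lambda c: -demands[c]):
        candidates = [s for s in range(len(remaining))
                      if remaining[s] >= demands[c]]
        if not candidates:
            raise ValueError(f"no server can host cluster {c}")
        s = min(candidates, key=lambda s: latency[c][s])
        assignment[c] = s
        remaining[s] -= demands[c]
    return assignment
```

Unlike the LP, a greedy pass gives no optimality guarantee, which is why the thesis relies on an exact solver for the periodic recomputation.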
7

Zhanwen, Li. "Fair Service for High-Concurrent Requests." Thesis, The University of Sydney, 2007. http://hdl.handle.net/2123/1908.

Full text
Abstract:
This thesis presents a new approach to ensuring fair service for highly concurrent requests. Our design uses the advantages of the staged event-driven architecture (SEDA) to support highly concurrent loads and applies control theory to manage system performance. To guarantee that quality of service is delivered fairly to each request, the control system for fairness is built on SEDA as a combination of a global control framework and a set of local self-tuning stages. The global control framework governs the performance of the whole staged network at the top level, coordinating the performance of the stages in the network. Each self-tuning stage under the control framework is built on the thread-pool model and uses automatic control theory to adjust its performance locally in order to meet the overall performance target. The automatic control system in each stage consists of an automatic modeling mechanism and a feedback module, which optimize the controller parameters automatically and guarantee the stage's quality of performance (here, its service rate) at runtime. Based on mathematical proof and simulation results, our designs are implemented in a SEDA-based web server running under dynamic load. Results demonstrate that the performance of the new system in the real world closely matches the theoretical results, and that the design adaptively ensures fair quality of service for highly concurrent requests. Compared to the original SEDA design, our approach is an effective and handy way to significantly enhance SEDA's performance in a variety of aspects, including fairer service, faster convergence, better robustness, higher accuracy and ease of deployment in practical applications.
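The per-stage feedback loop can be pictured as a single control step: measure the stage's service rate, compare with the target, and resize the thread pool. The fixed proportional gain below is our illustration; the thesis builds an automatic modeling mechanism rather than this hand-tuned controller:

```python
def tune_stage(threads, measured_rate, target_rate,
               per_thread_rate, max_threads=64):
    """One proportional-control step for a SEDA-like stage: grow or
    shrink the thread pool so the measured service rate (requests/s)
    tracks the target. per_thread_rate is the assumed throughput
    contribution of one worker thread."""
    error = target_rate - measured_rate
    adjust = round(error / per_thread_rate)  # threads needed to close the gap
    return max(1, min(max_threads, threads + adjust))
```

A real controller would also smooth the measured rate and bound the step size to avoid oscillation, which is part of what the thesis's feedback module handles.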
8

Kappagantula, Sri Kasyap. "Virtualization of Data Centers : Case Study on Server Virtualization." Thesis, Blekinge Tekniska Högskola, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16010.

Full text
Abstract:
Nowadays, data centers use virtualization as a technique to extend independent virtual resources from the available physical hardware. Virtualization is implemented in data centers to maximize the utilization of physical hardware (which significantly reduces energy consumption and operating costs) without affecting the Quality of Service (QoS). The main objectives of this thesis are to study the different network topologies used in data center architecture, to compare the QoS parameters of virtual servers against physical servers, and to identify the better-suited technology for virtualization. The research methodology used in this thesis is qualitative. To measure QoS, we take the latency, packet loss and throughput of virtual servers under different virtualization technologies (KVM, ESXi, Hyper-V, Fusion, and VirtualBox) and compare their performance against the physical server. The work also investigates CPU and RAM utilization and compares the behaviour of physical and virtual servers under different load conditions. The results show that the virtual servers performed better in terms of resource utilization, latency and response times than the physical servers, but factors such as backup and recovery, VM sprawl, capacity planning and building a private cloud must be addressed for virtual data centers to grow. Parameters that affect the performance of virtual servers are identified, and the trade-off between virtual and physical servers is established in terms of QoS. Overall, virtual servers are effective compared to physical servers.
9

Loyauté, Gautier. "Un modèle génératif pour le développement de serveurs Internet." PhD thesis, Université Paris-Est, 2008. http://tel.archives-ouvertes.fr/tel-00470539.

Full text
Abstract:
Internet servers are a particular kind of software. They must answer the requests of a large number of remote clients, support their evolution, and be robust, because they never stop. Concurrency models make it possible to interleave the processing of a large number of clients, but no consensus has emerged on a best model. To abstract away from the concurrency model, I propose a development model for Internet servers. Formal verification tools can increase software safety, but they must be supplied with a simple model of the software; the development model I propose is used to generate both the server and its formal model. The decoding of a client request depends on the concurrency model, so I propose using a parser generator that abstracts away this problem and automates the development of request decoding.
10

Brown, Johan, Alexander Gustafsson Brokås, Niklas Hurtig, and Tobias Johansson. "Designing and implementing a small scale Internet Service Provider." Thesis, Mälardalen University, School of Innovation, Design and Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-7437.

Full text
Abstract:

The objective of this thesis is to design and implement a small scale Internet Service Provider (ISP) for the NetCenter sub-department at Mälardalen University. The ISP is intended to give NetCenter a network separate from the University's network, providing them with a more flexible environment for lab purposes. This will give their students an opportunity to experience a larger backbone with Internet accessibility, which has not been previously available. At the same time it will place the teachers in control of the network in the NetCenter lab premises.

The network is designed with a layered approach including an Internet access layer, a larger core segment and a distribution layer with a separated lab network. It also incorporates both a public and a private server network, housing servers running e.g. Windows Active Directory, external DNS services, monitoring tools and logging applications. The Internet access is achieved by peering with SUNET, providing a full BGP feed.

This thesis report presents the methods, implementations and results involved in successfully creating the NetCenter ISP as both a lab network and an Internet provider, with a few inevitable shortcomings; the most prominent being an incomplete Windows Domain setup.

11

Ouvrier, Gustaf. "Characterizing the HTTPS Trust Landscape : - A Passive View from the Edge." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-161814.

Full text
Abstract:
Our society increasingly relies on the Internet for common services like online banking, shopping, and socializing. Many of these services heavily depend on secure end-to-end transactions to transfer personal, financial, or other sensitive information. At the core of ensuring secure transactions are the TLS/SSL protocol and the "trust" relationships between all involved partners. In this thesis we passively monitor the HTTPS traffic between a campus network and the Internet, and characterize the certificate usage and trust relationships in this complex landscape. By comparing our observations against known vulnerabilities and problems, we provide an overview of the actual security that typical Internet users (such as the people on campus) experience. Our measurements cover both mobile and stationary users, consider the involved trust relationships, and provide insights into how the HTTPS protocol is used and the weaknesses observed in practice.
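One of the basic checks such a monitor applies to an observed certificate is whether it is inside its validity window. A small sketch using Python's `ssl` helpers and the dictionary form returned by `SSLSocket.getpeercert()` (illustrative, not the thesis's measurement code):

```python
import ssl
import time

def cert_is_current(cert, now=None):
    """Return True iff a certificate (dict as returned by
    ssl.SSLSocket.getpeercert()) is inside its validity window,
    one of the simplest checks a passive HTTPS monitor can apply."""
    now = time.time() if now is None else now
    not_before = ssl.cert_time_to_seconds(cert["notBefore"])
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    return not_before <= now <= not_after
```

A full characterization would also walk the chain to a trusted root and check hostname matching and revocation, which is where most of the weaknesses the thesis observes show up.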
12

Šepikas, Antanas. "Internetinis Šachmatų žaidimo serveris." Bachelor's thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20140716_111602-98045.

Full text
Abstract:
Chess has been played since ancient times. Its rules have been modified over the centuries, and its pieces have been made of wood, stone, plastic and glass. Different people in different ages have played the game, yet its essence has remained the same, and far from being forgotten or losing its popularity, chess remains one of the most popular games today. As technology changes, however, it is worth reconsidering how the game could be implemented to make it even more attractive and easier to use. There are currently many online chess servers and applications that let users play chess over the Internet, but such systems often require the user to download an additional program to a computer or smart device, or to install extra plug-ins, which makes them inconvenient to use. The goal of this work is to build a chess game server using the increasingly popular technologies Node.js and HTML5, on which players can play chess in their browsers without installing additional programs or plug-ins. The tasks of the work are: 1. Prepare the requirements and architecture specifications of the system to be developed. 2. Based on these specifications, implement a chess game server that requires no additional plug-ins or programs from the user. 3. Test the developed system and evaluate the system's... [see full text]
The main goal of this project is to develop a system where players can connect to each other and play chess in an Internet browser, with no additional software such as Adobe Flash Player or Microsoft Silverlight beyond the browser itself. This is achieved with the new technologies Node.js and HTML5 WebSockets. The document consists of five parts: the first analyses the technologies mentioned above and explains why they were chosen; the second, the Requirements Specification, captures the system's use cases through UML diagrams and their descriptions; the third, Architectural Requirements, visualizes the system with further UML diagrams, for example sequence diagrams showing how the system handles matchmaking; the fourth, Testing, presents the quality-assurance tests used to determine whether the system is ready to be published; and the last, the User Guide, is unusual compared to other systems in that it is integrated into the system itself, which makes it more accessible and user-friendly through its interactivity. ... [to full text]
13

Žabinskas, Vidas. "Interneto serverių apsaugos priemonių tyrimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2004. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2004~D_20040601_222038-85803.

Full text
Abstract:
By transferring activities to electronic space, every Internet user runs the risk that information accessed and transmitted over the network may be read, captured or tampered with. Preventive protection of personal computers and computer system security is therefore relevant, so that as few security gaps as possible remain in a computer system. Subject of the work: the "PC Security" Internet service website, designed to let users check the security of their personal computer systems themselves. Goal of the work: analysis of computer security measures and development of a computer security testing system. The study analyses the measures that make a system more attack-resistant: rules necessary for network security; information encoding measures; actions that disturb normal system operation; and actions to be undertaken in case of a successful intruder attack. Requirements for the models of the Internet server security and testing system are set out. Observing these requirements, a computer system security testing system was designed and implemented, system testing was carried out, and the system user specifications were described. For flexibility, the testing system offers two check-up options: the user tests the computer's IP himself/herself, or the testing is performed by a system operator who sends a report. This testing system should be very useful for users because the latter would be... [to full text]
14

Jones, Robert M. "Content Aware Request Distribution for High Performance Web Service: A Performance Study." PDXScholar, 2002. https://pdxscholar.library.pdx.edu/open_access_etds/2662.

Full text
Abstract:
The World Wide Web is becoming a basic infrastructure for a variety of services, and increases in audience size and client network bandwidth create service demands that are outpacing server capacity. Web clusters are one solution to this need for high-performance, highly available web server systems. We are interested in load distribution techniques, specifically content-aware Layer-7 algorithms. Layer-7 algorithms allow distribution control based on the specific content requested, which is advantageous for a system that offers highly heterogeneous services. We examine the performance of the Client Aware Policy (CAP) on a Linux/Apache web cluster consisting of a single web switch that directs requests to a pool of dual-processor SMP nodes. We show that the performance advantage of CAP over simple algorithms such as random and round-robin is as high as 29% on our testbed, which serves a mixture of static and dynamic content; under heavily loaded conditions, however, performance decreases to the level of random distribution. In studying SMP versus uniprocessor performance with the same number of processors under CAP distribution, we find that dual-processor SMP nodes at moderate workload levels provide throughput equivalent to the same number of CPUs in a uniprocessor cluster, but as workload increases to a heavily loaded state, the SMP cluster shows reduced throughput compared to a cluster of uniprocessor nodes. We also show that the web cluster's maximum throughput increases linearly with the addition of nodes to the server pool. We conclude that CAP is advantageous over random or round-robin distribution under certain conditions for highly dynamic workloads, and suggest future enhancements that may improve its performance.
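The idea behind content-aware (Layer-7) distribution can be shown in a few lines: the switch inspects the requested URL and routes dynamic requests differently from static ones. This toy dispatcher is our own illustration of the flavour of policy studied here, not the CAP implementation evaluated in the thesis:

```python
import itertools

class ContentAwareSwitch:
    """Toy Layer-7 web switch: dynamic (CGI/PHP-style) requests go
    round-robin to designated application nodes, static files
    round-robin to the remaining nodes."""
    def __init__(self, dynamic_nodes, static_nodes):
        self._dyn = itertools.cycle(dynamic_nodes)
        self._stat = itertools.cycle(static_nodes)

    def route(self, path):
        # Classify by URL alone -- the defining trait of Layer-7 routing.
        is_dynamic = path.endswith((".php", ".cgi")) or "?" in path
        return next(self._dyn if is_dynamic else self._stat)
```

CAP proper refines this by classifying requests into multiple service classes (CPU-bound, disk-bound, etc.) and balancing each class separately.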
15

Abousabea, Emad Mohamed Abd Elrahman. "Optimization algorithms for video service delivery." Thesis, Evry, Institut national des télécommunications, 2012. http://www.theses.fr/2012TELE0030/document.

Full text
Abstract:
The aim of this thesis is to provide optimization algorithms for accessing video services in either unmanaged or managed ways. We study recent statistics about unmanaged video services like YouTube and propose suitable optimization techniques that could enhance file access and reduce its cost. This cost analysis plays an important role in decisions about caching video files and how long to host them on servers. For managed video services, known as IPTV, we conducted experiments on an open-IPTV collaborative architecture between different operators. This model is analyzed in terms of CAPEX and OPEX costs inside the domestic sphere. Moreover, we introduce a dynamic way of optimizing the minimum spanning tree (MST) for multicast IPTV service: under nomadic access, static trees can fail to provide the service efficiently, as bandwidth utilization increases towards the streaming points (the roots of the topologies). Finally, we study reliable security measures for video streaming based on the hash-chain methodology and propose a new hybrid algorithm, comparing the different ways of achieving hash-chain reliability based on generic classifications.
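The hash-chain construction underlying the streaming-security part can be sketched briefly: each value is the hash of its predecessor, and values are later revealed in reverse order so that one known anchor authenticates the rest. A minimal illustration (ours, not the thesis's hybrid algorithm):

```python
import hashlib

def make_hash_chain(seed, length):
    """Build a one-way hash chain h_1 = H(seed), h_i = H(h_{i-1}).
    The sender publishes the last value as an anchor, then reveals
    earlier values one per packet in reverse order."""
    chain = [hashlib.sha256(seed).digest()]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify_link(revealed, anchor):
    """A receiver holding `anchor` accepts `revealed` iff hashing it
    reproduces the anchor -- forging a preimage would require
    breaking the hash function."""
    return hashlib.sha256(revealed).digest() == anchor
```

Reliability schemes of the kind the thesis compares add redundancy so that a lost packet does not break the chain of verifications.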
16

Henek, Jan. "Proxy servery v síti Internet." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-241977.

Full text
Abstract:
The goal of this paper is to analyze the share of proxy servers in cyber attacks conducted over the Internet. For this purpose I used a method that compares each tested IP address against a database of open proxy servers. I assembled a list of IP addresses taken from blacklists of cyber attacks committed in 2015, checked this list with the Proxy checker program developed for this purpose, and compared the addresses against a database of open proxy servers. The measurements demonstrate the inefficacy of this method for the retroactive detection of proxy servers in IP lists of past attacks.
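The cross-referencing method is essentially a set intersection between attacker IPs and a proxy database, and the hit rate is what the measurement reports. A sketch with hypothetical data (not the thesis's Proxy checker):

```python
def proxy_hit_rate(attack_ips, open_proxy_db):
    """Cross-reference attacker IPs against a known open-proxy list.
    Returns the matching addresses and the fraction of distinct
    attacker IPs found in the proxy database."""
    attackers = set(attack_ips)
    hits = attackers & set(open_proxy_db)
    return hits, len(hits) / len(attackers)
```

The thesis's point is that this rate comes out low in practice: open-proxy databases churn quickly, so addresses used in past attacks have often left the database by the time they are checked.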
17

Miliauskas, Evaldas. "Mobilaus įrenginio ir serverio duomenų sinchronizacijos galimybių tyrimas esant nepastoviam interneto ryšiui." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2009. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2009~D_20090824_151955-11668.

Full text
Abstract:
Data synchronization is indispensable nowadays when talking about distributed applications running on mobile devices. When data is synchronized on the mobile device, its users become independent of the data server and can freely work with local data while disconnected from the network. [14] In this work we examine the possible data access architectures: 1. Always offline; 2. Always online; 3. Mixed. We also discuss the particulars of technical implementation, the problems encountered when developing synchronization algorithms, and the information these algorithms need in order to function (ID management, change tracking, fast/slow synchronization, etc.). In the design part, the architecture of an existing system was analysed and the changes required to incorporate the synchronization process into the system's functionality were specified. Finally, we analysed the mixed data access architecture in detail: its purpose, advantages and disadvantages. We also built a simulation model based on the principles of this architecture, using the aggregate method for formalizing complex systems, and analysed its parameters.
Distributed applications are modeled around replicating copies of the same data on many hosts in a network for a variety of reasons. For one, system designers can alleviate the single-server implosion problem by distributing client requests for data across many hosting servers. Second, making data locally available on a host speeds up applications, because they do not block on network input/output while data is transmitted. In this work we have analyzed several data access architectures: 1. Always offline; 2. Always online; 3. Mixed. We have also described technical implementation details, the problems that occur when developing synchronization algorithms, and the information these algorithms need to work correctly (ID handling, change detection, fast/slow synchronization, etc.). Furthermore, we investigated an existing system's architecture and defined the changes needed to incorporate a synchronization process into its functionality. Finally, we performed a detailed analysis of the mixed data access architecture, including its real-world application and its pros and cons. We then built a model based on these architectural principles using an aggregate method for formalizing complex systems, and analyzed the model's parameters in the experimental part.
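The fast/slow distinction mentioned above can be illustrated with a timestamp-based sketch: a fast sync exchanges only entries modified since the last successful sync, with the newer timestamp winning conflicts, while a slow sync would re-send everything. This is our own simplification, not the thesis's aggregate model:

```python
def fast_sync(local, remote, last_sync):
    """One 'fast' synchronisation pass. Both sides map
    key -> (value, modified_at); only entries touched after
    last_sync are exchanged, and the newer timestamp wins."""
    for key in set(local) | set(remote):
        lv = local.get(key)
        rv = remote.get(key)
        if lv and (not rv or lv[1] > rv[1]) and lv[1] > last_sync:
            remote[key] = lv          # push local change to the server
        elif rv and (not lv or rv[1] > lv[1]) and rv[1] > last_sync:
            local[key] = rv           # pull server change to the device
    return local, remote
```

With intermittent connectivity, the device falls back to a slow (full) sync whenever it cannot prove its change log covers the gap since `last_sync`.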
18

YAMAMOTO, Shuichiro, Toshihiro MOTODA, and Takashi HATASHIMA. "An "Interest" Index for WWW Servers and CyberRanking." Institute of Electronics, Information and Communication Engineers, 2000. http://hdl.handle.net/2237/15021.

Full text
19

Bajgl, Vojtěch. "Internet věcí v praxi." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-376903.

Full text
Abstract:
The subject of this diploma thesis is the Internet of Things and its use in applied practice, with a subsequent design of a custom smart-home automation system. It acquaints the reader with the issues of automated homes and smart systems, including a basic overview of the legal regulations that apply to electronic devices. The attributes of miniature computer platforms, development boards, and wireless modules for the design of custom systems are analyzed and compared. The largest part of the thesis is dedicated to the design of a custom system, specifically the central processing module and further modules for controlling various electric home appliances. The main controlling unit of the system is a Raspberry Pi 2B, through which devices in a household can be controlled via a user-friendly web application.
20

Styblík, Zdeněk. "Monitorování služeb v síti internet." Master's thesis, Česká zemědělská univerzita v Praze, 2015. http://www.nusl.cz/ntk/nusl-258445.

Full text
Abstract:
This diploma thesis deals with the design and implementation of a solution for monitoring servers and services over the Internet in combination with the monitoring application Nagios. Common solutions are presented and evaluated first, and serve as the basis for the design and implementation of a custom solution. Some existing building blocks were used, e.g. the HTTP protocol; others were developed, e.g. scripts to automatically update the Nagios configuration. The result of this thesis is an alternative solution for monitoring servers and services.
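Automatically regenerating Nagios configuration, as the scripts mentioned above do, amounts to rendering object-definition blocks from a host list. A hypothetical template (the field values and the `generic-host` template name are our assumptions, not taken from the thesis):

```python
def nagios_host_block(host_name, address):
    """Render a minimal Nagios host definition. A monitoring script
    can regenerate a config file by emitting one such block per
    monitored host and then reloading Nagios."""
    return (
        "define host {\n"
        "    use         generic-host\n"
        f"    host_name   {host_name}\n"
        f"    address     {address}\n"
        "}\n"
    )
```

In practice the generated file is written under Nagios's configuration directory and validated (e.g. with a pre-flight config check) before the daemon is reloaded.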
21

Maluleke, Enock Vongani. "Satellite-based web server." Thesis, Stellenbosch : Stellenbosch University, 2002. http://hdl.handle.net/10019.1/53040.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2002.
ENGLISH ABSTRACT: There is a large variety of telemetry-receiving software currently available for the reception of telemetry information from different satellites. Most of the software used to receive telemetry data is satellite-specific; hence a user-friendly way is needed to make telemetry data easily accessible. A satellite-based web server is aimed at providing telemetry information to any standard web browser, as a way of bringing space-technology awareness to the people. Two different satellite-based web server methods are examined in this thesis. Based on the evaluation, the on-board file server with a ground-based proxy server was proposed for satellite-based web server development. This requires that the file server be ported to the on-board computer of the satellite, while the web proxy server is placed on the ground segment with the communication facilities necessary to talk to the on-board file server. In the absence of a satellite, the satellite-based web server was successfully implemented on two computers, laying a good foundation for implementation on the on-board computer (OBC) of the satellite.
APA, Harvard, Vancouver, ISO, and other styles
22

Cai, Meng. "A plotting tool for Internet based on client/server computing model." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ64076.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Rada, Holger. "Von der Druckerpresse zum Web-Server : Zeitungen und Magazine im Internet /." Berlin : Wissenschaftlicher Verlag, 1999. http://catalogue.bnf.fr/ark:/12148/cb37563275m.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Wu, Huaigu 1975. "Adaptable stateful application server replication." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=115903.

Full text
Abstract:
In recent years, multi-tier architectures have become the standard computing environment for web and enterprise applications. The application server tier is often the heart of the system, embedding the business logic. Adaptability, in particular the capability to adjust to the load submitted to the system and to handle the failure of individual components, is of utmost importance in order to provide 24/7 access and high performance. Replication is a common means to achieve these reliability and scalability requirements. With replication, the application server tier consists of several server replicas; thus, if one replica fails, others can take over. Furthermore, the load can be distributed across the available replicas. Although many replication solutions have been proposed so far, most of them have been developed either for fault tolerance or for scalability. Furthermore, only a few have considered that the application server tier is only one tier in a multi-tier architecture, that this tier maintains state, and that execution in this environment can follow complex patterns. Thus, existing solutions often do not provide correctness beyond some basic application scenarios.
In this thesis we tackle the issue of replicating the application server tier from the ground up and develop a unified solution that provides both fault tolerance and scalability. We first describe a set of execution patterns that capture how requests are typically executed in multi-tier architectures. They consider the flow of execution across the client tier, application server tier, and database tier. In particular, the execution patterns describe how requests are associated with transactions, the fundamental execution units at the application server and database tiers. With these execution patterns in mind, we provide a formal definition of what it means to provide a correct execution across all tiers, even when failures occur and the application server tier is replicated. Informally, a replicated system is correct if it behaves exactly as a non-replicated system that never fails. From there, we propose a set of replication algorithms for fault tolerance that provide correctness for the execution patterns we have identified. The main principle is to let a primary application server replica execute all client requests and to propagate any state changes performed by a transaction to backup replicas at transaction commit time. The challenges arise because requests can be associated in different ways with transactions. We then extend our fault-tolerance solution into a unified solution that provides both fault tolerance and load balancing. In this extended solution, each application server replica is able to execute client requests as a primary and at the same time serves as a backup for other replicas. The framework provides a transparent, truly distributed and lightweight load distribution mechanism that takes advantage of the fault-tolerance infrastructure. Our replication tool is implemented as a plug-in for the JBoss application server and its performance is carefully evaluated, comparing with JBoss's own replication solutions.
The evaluation shows that our protocols have very good performance and compare favorably with existing solutions.
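The commit-time state propagation described above can be sketched as follows (Python; `Primary`, `Replica` and the single-key write set are invented for illustration and stand in for the session state handled by the JBoss plug-in):

```python
class Replica:
    """Holds a copy of the application-server state."""
    def __init__(self):
        self.state = {}

    def apply_changes(self, changes):
        # Backups simply install the write set shipped at commit time.
        self.state.update(changes)


class Primary(Replica):
    """Executes client requests and ships state changes at commit."""
    def __init__(self, backups):
        super().__init__()
        self.backups = backups

    def execute(self, key, value):
        # Execute the request locally, collecting the transaction's write set.
        changes = {key: value}
        self.state.update(changes)
        # Propagate the write set to every backup at commit time,
        # so a backup can take over if the primary fails.
        for backup in self.backups:
            backup.apply_changes(changes)


backups = [Replica(), Replica()]
primary = Primary(backups)
primary.execute("cart:42", ["book"])
```

Shipping only the write set at commit, rather than re-executing requests on backups, is what keeps the backups cheap while the primary serves all traffic.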
APA, Harvard, Vancouver, ISO, and other styles
25

Saidane, Ayda. "Conception et réalisation d'une architecture tolérant les intrusions pour des serveurs internet." Phd thesis, INSA de Toulouse, 2005. http://tel.archives-ouvertes.fr/tel-00009600.

Full text
Abstract:
Connecting critical systems to the Internet raises serious security problems; classical protection techniques are often ineffective in this new context. In this thesis we propose a generic intrusion-tolerant architecture for Internet servers. The architecture is based on the principles of redundancy with diversification, in order to strengthen the system's ability to face attacks: an attack generally targets a particular application on a particular platform and very often proves ineffective against the others. The architecture comprises several redundant and diversified (COTS) web servers, and one or more proxies implementing the intrusion-tolerance policy. The originality of this architecture lies in its adaptability: it uses a variable level of redundancy that adapts to the alert level. We present two variants of the architecture aimed at different target systems. The first is intended for completely static systems, where updates are performed off-line. The second is more generic: it considers completely dynamic systems, where updates are performed in real time, and proposes a solution based on a dedicated proxy that manages access to the database. We demonstrated the feasibility of our architecture by implementing a prototype for the example of an Internet travel agency. The first performance tests were satisfactory: request processing times are acceptable, as are incident response times.
APA, Harvard, Vancouver, ISO, and other styles
26

Kalyadin, Dmitry. "Robot data and control server for Internet-based training on ground robots." [Tampa, Fla.] : University of South Florida, 2007. http://purl.fcla.edu/usf/dc/et/SFE0002111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Kangasharju, Jussi Antti Tapio. "Distribution de l'information sur Internet." Nice, 2002. http://www.theses.fr/2002NICE5724.

Full text
Abstract:
Cette thèse étudie la distribution de contenu sur Internet. Dans la première partie nous étudions les méthodes de redirection des clients. Nous développons une architecture pour localiser les copies d'un objet. Cette architecture est une extension du Domain Name System et peut être mise en place d'une manière incrémentale. Nous présentons une architecture d'annuaire répliqué et montrons comment réaliser le Domain Name System avec cette architecture. Cette architecture permet de stocker des informations qui changent rapidement, elle peut être réalisée de manière incrémentale, et ne nécessite aucun changement logiciel. L'évaluation de performance de cette architecture nous donne des indications sur la durée pendant laquelle on peut cacher l'information. Nous évaluons aussi les performance des méthodes de redirection utilisées par les réseaux de distribution de contenu modernes. Nos résultats montrent que le coût associé à l'ouverture de nouvelles connexions peut limiter sévèrement les performances perçues par l'utilisateur. Dans la deuxième partie nous considérons la réplication d'objets. Nous développons un modèle d'optimisation combinatoire pour répliquer des objets dans un réseau de distribution. Nos résultats montrent que la meilleure performance est obtenue quand la réplication est coordonnée sur tout le réseau. Nous étudions la réplication optimale de contenu dans les réseaux de type peer-to-peer. Nous construisons un modèle et développons plusieurs algorithmes adaptatifs pour répliquer les objets de manière dynamique. Nos résultats montrent que nos algorithmes, combinés avec une politique de remplacement LFU, offrent une performance presque optimale. Nous considérons aussi la distribution de vidéos en couches en utilisant un modèle de "knapsack" stochastique. Nous développons plusieurs heuristiques pour déterminer quelles couches de quelles vidéos doivent être cachées afin de maximiser le revenu
This thesis studies Internet content distribution. In the first part of the thesis we consider client redirection mechanisms. We develop an architecture for locating copies of cached objects. This architecture is a small extension to the Domain Name System and can be deployed incrementally. We present an architecture of an Internet-wide replicated directory service and show how the current Domain Name System can be implemented with this architecture. The key features of this architecture are that it allows us to store rapidly changing information, can be deployed incrementally, and requires no changes to existing software. Our extensive performance evaluation of this architecture provides us with insight on how long the information can be cached. We also evaluate the performance of the redirection mechanisms used by modern content distribution networks. We find that the overhead of opening new connections to new servers can severely limit the user-perceived performance. In the second part of the thesis we consider object replication in content distribution. We develop a combinatorial optimization model for optimally replicating objects in a content distribution network. Our results show that best performance is obtained when replication is coordinated over the whole network. Using the same model we also develop cooperation strategies for peer-to-peer networks. We also consider the problem of optimal content replication in peer-to-peer communities. We formulate this problem as an integer programming problem and develop several adaptive algorithms to replicate objects on-the-fly. Our results indicate that our algorithms combined with a least-frequently-used replacement policy provide near-optimal performance. We also consider the distribution of layered encoded video using a stochastic knapsack model. We develop several heuristics to determine which layers of which videos should be cached in order to maximize the accrued revenue
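The least-frequently-used replacement policy mentioned in the results can be sketched as a small cache (Python; a textbook LFU illustration, not the thesis's adaptive algorithms):

```python
class LFUCache:
    """Evicts the object with the fewest recorded accesses."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}   # key -> object
        self.counts = {}  # key -> access frequency

    def get(self, key):
        if key in self.store:
            self.counts[key] += 1
            return self.store[key]
        return None

    def put(self, key, obj):
        if len(self.store) >= self.capacity and key not in self.store:
            # Evict the least frequently used object to make room.
            victim = min(self.counts, key=self.counts.get)
            del self.store[victim]
            del self.counts[victim]
        self.store[key] = obj
        self.counts[key] = self.counts.get(key, 0) + 1


cache = LFUCache(capacity=2)
cache.put("a", "obj-a")
cache.put("b", "obj-b")
cache.get("a"); cache.get("a")   # "a" is now the popular object
cache.put("c", "obj-c")          # evicts "b", the least used
```

LFU approximates keeping the most popular objects replicated locally, which is why it pairs well with on-the-fly replication decisions.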
APA, Harvard, Vancouver, ISO, and other styles
28

Frías, Castillo Amparo. "Estudis d'usuaris en els serveis personalitzats als mitjans de comunicació a Internet, Els." Doctoral thesis, Universitat de Barcelona, 2007. http://hdl.handle.net/10803/766.

Full text
Abstract:
La investigació es centra en l'àmbit concret dels mitjans de comunicació (diaris, emissores de ràdio i televisió, empreses de press cliping i de seguiment de mitjans) que disposin de pàgina web pròpia. A partir d'aquesta identificació s'ha procedit a analitzar-les per descobrir l'existència d'algun tipus de servei personalitzat d'informació d'actualitat, ofert a partir dels dispositius de telefonia mòbil, agenda electrònica o correu electrònic.

Es presenta l'estat de l'art a Espanya, França i Regne Unit i els estats nord americans de Califòrnia, Florida, Nova York i Texas, en referència als mitjans de comunicació que ofereixen algun tipus de servei personalitzat, i a la vegada, s'han avaluat el contingut informatiu, la gestió del servei, la navegabilitat, i el grau de personalització.

La recopilació dels serveis personalitzats ha permès crear una classificació d'aquests en funció de les possibilitats que el mitjà permet a l'usuari per a la selecció dels continguts informatius, en funció de les seves preferències.

A la recerca s'ha volgut determinar quins mitjans de comunicació han dut a terme un estudi d'usuaris per copsar quines són les necessitats informatives d'aquests, a partir d'una enquesta adreçada als directors dels mitjans que oferien serveis personalitzats.

Els resultats obtinguts mostren una infra utilització dels estudis d'usuaris com a eina adequada per aconseguir el coneixement imprescindible de les necessitats informatives dels usuaris dels serveis personalitzats que ofereixen els mitjans de comunicació a la xarxa. Per finalitzar es presenta una visió prospectiva sobre el futur dels serveis personalitzats a la xarxa.
The following research is focused on the concrete area of the media (newspapers, radio and television stations, press clipping and media monitoring companies) having their own web sites. Starting from this identification, we have proceeded to analyse them in order to find out any sort of personalized current-news service offered through mobile phones, personal digital assistants (PDAs) or electronic mail.

It shows the state of the art in Spain, France, the United Kingdom and the North American states of California, Florida, New York and Texas relating to the media which offer some kind of personalized service. At the same time, the informative content, the service management, the navigability as well as the degree of personalization have been evaluated.

The compilation of these personalized services has made it possible to create a classification of them depending on the possibilities the medium gives the user for selecting informative content according to their preferences.

By means of this research we try to determine which media have carried out a user survey in order to understand their users' informative needs. This was done through a survey of our own addressed to the directors and editors of media offering personalized services.

The results obtained show very limited use of user surveys as a suitable tool for gaining the necessary knowledge of the informative needs of the users of the personalized services offered by the media on the Net. Finally, we present a prospective view of the future of personalized services on the Net.

KEY WORDS: media, personalized services, Net
APA, Harvard, Vancouver, ISO, and other styles
29

Tenzakhti, Fathi. "On the placement of Web replicas in the Internet with server capacity constraints." Thesis, University of Glasgow, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.425318.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Wonka, Richard. "Sicherheitsaspekte des Aufbaus eines Internet-Portal-Systems am Beispiel des Portals zur Informationsethik nethics.net." [S.l. : s.n.], 2003. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10547323.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

O'Daniel, Graham M. "HTTP 1.2: DISTRIBUTED HTTP FOR LOAD BALANCING SERVER SYSTEMS." DigitalCommons@CalPoly, 2010. https://digitalcommons.calpoly.edu/theses/302.

Full text
Abstract:
Content hosted on the Internet must appear robust and reliable to clients relying on such content. As more clients come to rely on content from a source, that source can be subjected to high levels of load. There are a number of solutions, collectively called load balancers, which try to solve the load problem through various means. All of these solutions are workarounds for dealing with problems inherent in the medium by which content is served, thereby limiting their effectiveness. HTTP, or Hypertext Transfer Protocol, is the dominant mechanism behind hosting content on the Internet through websites. The entirety of the Internet has changed drastically over its history, with the invention of new protocols, distribution methods, and technological improvements. However, HTTP has undergone only three versions since its inception in 1991, and all three versions serve content as a text stream that cannot be interrupted to allow for load balancing decisions. We propose a solution that takes existing portions of HTTP, augments them, and includes some new features in order to increase usability and management of serving content over the Internet by allowing redirection of content in-stream. This in-stream redirection introduces a new step into the client-server connection where servers can make decisions while continuing to serve content to the client. Load balancing methods can then use the new version of HTTP to make better decisions when applied to multi-server systems, making load balancing more robust, with more control over the client-server interaction.
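The in-stream redirection step can be illustrated with a toy serving loop (Python; the `REDIRECT` record, the load threshold and the resume index are invented for illustration and are not part of any HTTP specification):

```python
def serve(chunks, load, threshold, alternate):
    """Stream content, but between chunks re-check server load and,
    if overloaded, hand the client off to another server mid-stream."""
    for i, chunk in enumerate(chunks):
        if load() > threshold:
            # Interrupt the stream with a redirection record instead of
            # forcing the client to restart the whole request elsewhere.
            yield ("REDIRECT", alternate, i)  # client resumes at chunk i
            return
        yield ("DATA", chunk)


loads = iter([0.2, 0.9])  # load rises while the response is in flight
out = list(serve(["a", "b", "c"], lambda: next(loads), 0.8, "backup.example"))
```

The point of the resume index is that the overloaded server sheds the connection without the client losing the bytes it already received.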
APA, Harvard, Vancouver, ISO, and other styles
32

Hofer, Reinhard. "Webserver : Betrieb, Sicherheit, Codebeispiele /." Saarbrücken : VDM, Müller, 2006. http://deposit.d-nb.de/cgi-bin/dokserv?id=2872401&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Clay, Lenitra M. "Replication techniques for scalable content distribution in the internet." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/8491.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

De, Barros Serra Antonio. "La gestion de la QoS pour les services Internet au niveau de grappes de serveurs Web." Evry, Institut national des télécommunications, 2005. http://www.theses.fr/2005TELE0004.

Full text
Abstract:
Un très grand effort est fait par la communauté scientifique afin de fourni un support à la QoS dans l'infrastructure réseau. Cependant, traiter la QoS uniquement dans le réseau n'est pas suffisant pour garantir une QoS de bout-en-bout. Même s'il existe déjà des mécanismes qui permettent la différenciation des services au niveau réseau, les utilisateurs sont toujours obligés de partager d'une même façon les ressources disponibles du côté serveur Web. Pour garantir la QoS dans le réseau, un ensemble de principes sont employés (classification du trafic, contrôle d'admission, partitionnement de ressources, etc. ). Une solution possible au problème de la QoS du côté serveur est l'utilisation de ces principes pour la conception, la spécification, la mise en œuvre et l'évaluation d'une architecture de gestion de QoS pour les grappes de serveurs Web. Nous proposons une solution souple, évolutive et peu intrusive qui prend en compte l'hétérogénéité des technologies utilisées pour la mise en œuvre des applications basées sur le Web. Nous proposons aussi un mécanisme pour le contrôle d'admission de requêtes et l'équilibre de charge dans les grappes de serveurs Web, nommé WS-DSAC. Ce mécanisme permet la différenciation de la QoS des utilisateurs et utilise d'une façon efficace les ressources de traitement disponibles, réallouant dynamiquement les ressources entre les classes de QoS existantes
A significant amount of effort has been directed toward the support of Quality of Service (QoS) in the network infrastructure. However, dealing with the network QoS problem alone is not sufficient to guarantee end-to-end QoS. Even though there are mechanisms (DiffServ, IntServ, MPLS, etc.) allowing service differentiation at the network level, users are still forced to share available resources on the web server side in the same way. A possible solution to this problem is to use the network QoS principles (traffic classification, admission control, resource partitioning, etc.) to extend the components of web cluster infrastructures, introducing resource management mechanisms that allow the SLAs (Service Level Agreements) granted to users to be guaranteed. Our contribution relies on the use of those principles for the design, specification and implementation of a QoS management architecture for web server clusters. We propose a flexible, scalable and non-intrusive solution that takes into account the heterogeneity of the technologies used to implement web applications. We also propose an admission control and load balancing mechanism for web server clusters, called WS-DSAC (Web Servers – DiffServ Admission Control), that allows user QoS differentiation and effective use of available resources by dynamically reallocating resources between existing QoS classes and promoting fairness during non-critical moments
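Class-based admission control with dynamic reallocation, as described for WS-DSAC, can be sketched like this (Python; the two class names and the borrowing rule are assumptions for illustration, not the WS-DSAC specification):

```python
class AdmissionController:
    """Admit requests per QoS class within a shared server capacity,
    letting a higher class borrow slots left unused by a lower one."""
    def __init__(self, quotas):
        self.quotas = dict(quotas)              # class -> reserved slots
        self.in_use = {c: 0 for c in quotas}

    def admit(self, qos_class):
        if self.in_use[qos_class] < self.quotas[qos_class]:
            self.in_use[qos_class] += 1
            return True
        # Dynamic reallocation: premium traffic may borrow capacity
        # that the best-effort class is not currently using.
        if qos_class == "premium":
            spare = self.quotas["best_effort"] - self.in_use["best_effort"]
            if spare > 0:
                self.quotas["best_effort"] -= 1
                self.quotas["premium"] += 1
                self.in_use["premium"] += 1
                return True
        return False  # reject, protecting the SLAs of the other classes


ac = AdmissionController({"premium": 1, "best_effort": 2})
decisions = [ac.admit("premium"), ac.admit("premium"),
             ac.admit("best_effort"), ac.admit("best_effort")]
```

Rejecting the last best-effort request is the differentiation at work: the slot it would have used has been reallocated to the premium class.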
APA, Harvard, Vancouver, ISO, and other styles
35

Saito, Yasushi. "Functionally homogeneous clustering : a framework for building scalable data-intensive internet services /." Thesis, Connect to this title online; UW restricted, 2001. http://hdl.handle.net/1773/6936.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Chadim, Pavel. "Zabezpečení komunikace a ochrana dat v Internetu věcí." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2018. http://www.nusl.cz/ntk/nusl-377025.

Full text
Abstract:
This Master's thesis, „Secure communication and data protection in the internet of things“, deals with cryptography and cryptographic libraries, which are compared with each other according to the algorithms and standards they support. The following libraries were compared: OpenSSL, wolfSSL, nanoSSL and MatrixSSL. The practical part of the thesis focuses on benchmarking the performance of individual ciphers and protocols of the OpenSSL and wolfSSL libraries on a Raspberry Pi 2 device. Further, the thesis presents the design of a client-server communication scenario in the Internet of Things (IoT). A simple client-server authentication protocol was implemented and simulated on the Raspberry Pi 2 device.
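A simple client-server authentication protocol of the kind the thesis simulates can be sketched with a nonce-based challenge-response (Python; the pre-shared key and message layout are assumptions, not the thesis's protocol):

```python
import hmac
import hashlib
import os

SHARED_KEY = b"pre-provisioned-device-key"  # assumed pre-shared secret

def server_challenge():
    # The server sends a fresh random nonce so responses cannot be replayed.
    return os.urandom(16)

def client_response(key, nonce):
    # The client proves knowledge of the key without ever transmitting it.
    return hmac.new(key, nonce, hashlib.sha256).digest()

def server_verify(key, nonce, response):
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = server_challenge()
accepted = server_verify(SHARED_KEY, nonce, client_response(SHARED_KEY, nonce))
```

`hmac.compare_digest` is used instead of `==` to avoid leaking information through comparison timing, a detail that matters on slow IoT hardware.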
APA, Harvard, Vancouver, ISO, and other styles
37

Strauß, Harry. "Electronic Commerce - elektronische Bestellsysteme im Internet Realisierung eines datenbankgestützten Produktbestellsystems im World Wide Web mit einem Merchant Server /." [S.l.] : Universität Konstanz , Fakultät für Wirtschaftswissenschaften und Statistik, 1998. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB8500793.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Varma, Nitesh. "Secure Network-Centric Application Access." Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/46318.

Full text
Abstract:
In the coming millennium, the establishment of virtual enterprises will become increasingly common. In the engineering sector, global competition will require corporations to create agile partnerships to use each other's engineering resources in mutually profitable ways. The Internet offers a medium for accessing such resources in a globally networked environment. However, remote access of resources requires a secure and mutually trustable environment, which is lacking in the basic infrastructure on which the Internet is based. Fortunately, efforts are under way to provide the required security services on the Internet. This thesis presents a model for making distributed engineering software tools accessible via the Internet. The model consists of an extensible client-server system interfaced with the engineering software tool on the server side. The system features robust security support based on public-key and symmetric cryptography. The system has been demonstrated by providing Web-based access to a .STL file repair program through a Java-enabled Web browser.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
39

Abdullah, Jan Mirza, and Mahmododfateh Ahsan. "Multi-View Video Transmission over the Internet." Thesis, Linköping University, Department of Electrical Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57903.

Full text
Abstract:
3D television using multiple-view rendering is receiving increasing interest. In this technology a number of video sequences are transmitted simultaneously, providing a larger view of the scene or a stereoscopic viewing experience. With two views, stereoscopic rendering is possible. Nowadays 3D displays are available that are capable of displaying several views simultaneously, and the user is able to see different views by moving his head.

The thesis work aims at implementing a demonstration system with a number of simultaneous views. The system will include two cameras, computers at both the transmitting and receiving ends, and a multi-view display. Besides setting up the hardware, the main task is to implement software so that the transmission can be done over an IP network.

This thesis report includes an overview of and experiences with similar published systems; the implementation of real-time video capture, compression, encoding and transmission over the Internet with the help of socket programming; and finally the multi-view display in 3D format. The report also describes the design considerations regarding video coding and network protocols.
APA, Harvard, Vancouver, ISO, and other styles
40

Szilassy, Martin, and Daniel Örn. "Low Energy GPS Positioning : A device-server approach." Thesis, Linköpings universitet, Reglerteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-118788.

Full text
Abstract:
GPS is widely used for localization and tracking; however, traditional GPS receivers consume too much energy for many applications. This thesis implements and evaluates the performance of a low energy GPS solution, including a working hardware prototype, that reduces energy consumption significantly. The prototype operates for 2 years on a coin cell battery, sampling every minute. The corresponding time for a traditional receiver is 2 days. The main difference is that a traditional receiver requires 30 seconds of data to estimate a position; this solution only requires 2 milliseconds of data, a reduction by a factor of 15 000. The solution consists of a portable device, sampling the GPS signal, and server software that utilizes Doppler navigation and Coarse Time Navigation to estimate positions. The median positioning error is at most 38 meters in our tests. We expect that this solution will enable positioning for billions of devices in the near future.
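The factor-15 000 figure quoted above follows directly from the sampling durations (a back-of-the-envelope check, not the thesis's energy model):

```python
# Figures quoted in the abstract, expressed in milliseconds of GPS signal.
traditional_ms = 30_000   # a traditional receiver tracks for ~30 s per fix
prototype_ms = 2          # the prototype samples only 2 ms per fix

reduction = traditional_ms // prototype_ms
print(reduction)  # 15000
```

Because the radio front end dominates power draw, cutting the on-time per fix by this factor is what stretches battery life from days to years.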
APA, Harvard, Vancouver, ISO, and other styles
41

Hangwei, Qian. "Dynamic Resource Management of Cloud-Hosted Internet Applications." Case Western Reserve University School of Graduate Studies / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=case1338317801.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Sousa, Alexandre José Barbieri de. "IXnet - proposta de Internet alternativa para aplicações sensíveis a atraso." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-02092009-153606/.

Full text
Abstract:
A Internet pode não atender às necessidades exigidas por aplicações sensíveis a atraso tais como voz, video ou até jogos em redes. A distância de saltos entre o cliente e o servidor, o congestionamento dos links e a aleatoriedade do caminho do pacote, podem causar atrasos. Existem motivações e até implementações de infra-estruturas alternativas para atender necessidades não garantidas pela Internet. Um exemplo é o projeto Rede COMEP, que propõe uma rede alternativa a fim de diminuir a distância e aumentar a capacidade entre os usuários (alunos e professores) e o conteúdo. Esta tese propõe uma infra-estrutura de rede de alta velocidade, exclusiva para aplicações sensíveis a atraso. A rede, denominada de IXnet, origina-se de experimentos de roteamento compartilhado por meio do IXP, estudo do tráfego de uma aplicação sensível a atraso e uso de simulador de rede. O uso da IXnet pode ampliar a frente de negócios para as operadoras e usuários que fazem uso de aplicações sensíveis a atraso, melhorando a qualidade na prestação de seus serviços.
The Internet may not meet all the demands of delay-sensitive applications. Voice, video and even gaming applications demand low delay. The hop distance between client and server, link congestion and the randomness of the packet path may cause delays. There is motivation for, and even implementations of, alternative infrastructures to meet needs the Internet does not guarantee. One example is the COMEP Network project, which proposes an alternative network in order to comply with the specific demands of such applications. This thesis proposes a high-speed network infrastructure exclusively for delay-sensitive applications. This network, named IXnet, has its origins in experiments with shared routing through an IXP, traffic studies of a delay-sensitive application and the use of a network simulator. The use of IXnet may expand the business front for operators and users of delay-sensitive applications, improving the quality of the services provided.
APA, Harvard, Vancouver, ISO, and other styles
43

Gaillardon, Philippe. "Accès aux fichiers partagés : une réalisation sous le système VM." Lyon 1, 1988. http://www.theses.fr/1988LYO10064.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Juell, Martin Andreas, and Gaute Larsen Nordhaug. "An approach to rapid development of modern ubiquitous Internet applications : Exploring the benefits of reusable server side components." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-13990.

Full text
Abstract:
Popular Internet applications can grow rapidly to millions of users. This is an important challenge for application developers, as failing to handle increasing load can disrupt an application's popularity surge and cause massive monetary losses. Many popular applications are ubiquitous, meaning they are used not only from web browsers on desktop computers, but also from handheld devices, as well as by other services operating on servers, connecting to the application via an Application Programming Interface (API). For traditionally designed web applications, this ubiquity is hard to achieve, as the difference in architecture creates a barrier to reusability of server-side code. Using a design science research methodology, this report details an approach to solving scalability issues and greatly improving reusability and development speed for modern ubiquitous Internet applications. The crux of the approach is a bare-essentials data access and user management API, whose implementation is intended to serve as the entire server side of the application. For applications that can cope with its reduced feature set, it has several major advantages. API implementations are interchangeable, eliminating vendor lock-in, and also completely reusable across applications, saving development effort. Presentation and application logic is shifted to the client side, reducing server strain, and the API is easily implemented with a modern, hyperscalable data store in a cloud environment, providing great elasticity and scalability. The functionality of the API is derived from an analysis of target applications, and the approach is evaluated through the development of a prototype, a blog application with clients for several platforms. The prototype development process reveals some architectural and practical limitations of the design, but also showcases the power of reusable components when those components are readily available. The approach presented here is not ideal for all types of applications.
However, when applicable, it helps developers save time and overcome these important challenges in application development.
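A bare-essentials data access and user management interface of the kind described above might look like this (Python; the method set is an assumption for illustration, not the thesis's exact API):

```python
import abc

class AppBackendAPI(abc.ABC):
    """Interchangeable server side: any implementation of this
    interface can back any client built against it."""

    # --- user management ---
    @abc.abstractmethod
    def register(self, username, password): ...
    @abc.abstractmethod
    def login(self, username, password): ...  # returns a session token

    # --- data access ---
    @abc.abstractmethod
    def put(self, token, collection, key, value): ...
    @abc.abstractmethod
    def get(self, token, collection, key): ...
    @abc.abstractmethod
    def delete(self, token, collection, key): ...


class InMemoryBackend(AppBackendAPI):
    """Trivial implementation; a cloud data store could be swapped in
    without touching any client code."""
    def __init__(self):
        self.users, self.data = {}, {}

    def register(self, username, password):
        self.users[username] = password

    def login(self, username, password):
        return username if self.users.get(username) == password else None

    def put(self, token, collection, key, value):
        self.data[(collection, key)] = value

    def get(self, token, collection, key):
        return self.data.get((collection, key))

    def delete(self, token, collection, key):
        self.data.pop((collection, key), None)


backend = InMemoryBackend()
backend.register("ada", "secret")
token = backend.login("ada", "secret")
backend.put(token, "posts", "1", {"title": "hello"})
post = backend.get(token, "posts", "1")
```

Because every implementation exposes the same narrow surface, the server side becomes a reusable, replaceable component, which is the core of the approach's reusability argument.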
APA, Harvard, Vancouver, ISO, and other styles
45

Desbiens, Jean-Marc. "Développement de composants d'un logiciel client-serveur pour la poursuite de satellites." Sherbrooke : Université de Sherbrooke, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
46

Hewzulla, Dilshat. "Deriving mathematical significance in palaeontological data from large-scale database technologies." Thesis, University of East London, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.369440.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Abghour, Noreddine. "Schéma d'autorisation pour applications réparties sur Internet." Toulouse, INPT, 2004. http://www.theses.fr/2004INPT014H.

Full text
Abstract:
The development of the Internet and the generalization of its use have led to the emergence of new large-scale distributed applications, electronic commerce for example. These applications raise security problems that are difficult to solve because of the large number of users and machines involved. Solutions exist to secure point-to-point communications and to restrict connections to a subnetwork, but their effectiveness is limited and they are often intrusive with respect to users' privacy. In order to lift these limitations and effectively control the execution of applications distributed over a greater or lesser number of machines on the network, we have developed authorization schemes that are both flexible, so as to control applications of all kinds, and effective, through the application of the least privilege principle: only the operations necessary for the application to function should be authorized. The architecture we propose is organized around distributed authorization servers, tolerant to both accidental faults and intrusions, and reference monitors on each participating site. The authorization servers check whether requests should be authorized and, if so, generate authorization proofs composed of capabilities and coupons that are then checked by the reference monitors. These coupons form an original delegation mechanism that respects the least privilege principle. Given the heterogeneity of the connected systems, it is not feasible to integrate a specific reference monitor into the operating system of each site; this is why the reference monitors are partly implemented in Java smart cards.
APA, Harvard, Vancouver, ISO, and other styles
48

Duquennoy, Simon. "Smews : un système d'exploitation dédié au support d'applications Web en environnement contraint." Thesis, Lille 1, 2010. http://www.theses.fr/2010LIL10184/document.

Full text
Abstract:
The context of this thesis is the extension of Web technologies to ambient computing. The resulting Web of Things enables novel interactions by guaranteeing interoperability at both the network and application levels. We address the design of the software system that acts as a Web server embedded in strongly constrained devices such as smart cards or sensors. State-of-the-art solutions for running a lightweight standard protocol stack at a small memory footprint suffer from poor performance and sacrifice system features. The thesis defended is that by dedicating an operating system to the support of a high-level family of applications, we can produce efficient software that consumes very few resources. We study an architecture based on a macro-kernel integrating hardware management, the communication stack, and the application container, exposing an interface that fits the applications' needs.
APA, Harvard, Vancouver, ISO, and other styles
49

Hjelmberg, Eric, and Henrik Rowell. "Persondetektering i inomhusmiljö med enkla sensorer." Thesis, Linköpings universitet, Datorteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-122908.

Full text
Abstract:
This report describes the work of detecting presence in a room using sensors that are as simple as possible, connected to an Arduino. At the same time, the system uses the same sensors to report the climate in the room. The reader gains an insight into the difficulty of detecting people and into how the chosen sensors work. In addition, the energy consumption of the system is studied. The report concludes by presenting a percentage probability of presence over an Internet connection, based on extensive testing of the sensors' behaviour.
APA, Harvard, Vancouver, ISO, and other styles
50

Self, Lance. "SERVING INTERACTIVE WEB PAGES TO TechSat21 CUSTOMERS." International Foundation for Telemetering, 2001. http://hdl.handle.net/10150/607668.

Full text
Abstract:
International Telemetering Conference Proceedings / October 22-25, 2001 / Riviera Hotel and Convention Center, Las Vegas, Nevada
TechSat21 is an innovative satellite program sponsored by the Air Force Research Laboratory Space Vehicles Directorate and the Air Force Office of Scientific Research. Its mission is to control a cluster of satellites that, when combined, create a “virtual satellite” with which to conduct various experiments in sparse aperture sensing and formation flying. Because TechSat21 customers need to view very large data sets, ranging from payload data to satellite state of health, a modern viewing method using Java Server Pages and Active Server Pages is being developed to meet these interactive, dynamic demands.
APA, Harvard, Vancouver, ISO, and other styles