Doctoral dissertations on the topic "Arquitetura e organização de computadores (eventos)"
Create an accurate citation in APA, MLA, Chicago, Harvard, and many other styles
Browse the top 15 doctoral dissertations on the topic "Arquitetura e organização de computadores (eventos)".
An "Add to bibliography" button is available next to each work in the list. Use it, and we will automatically create a bibliographic citation for the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, when these are available in the metadata.
Browse doctoral dissertations from a wide range of disciplines and build your bibliography.
Ramme, Fernando Luiz Prochnow. "Uma arquitetura cliente/servidor para apoiar a simulação de redes em ambiente de simulação orientado a eventos discretos". [s.n.], 2004. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261803.
Full text available. Master's thesis (dissertação de mestrado), Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
Abstract: The objective of this work is to develop a three-tier client/server architecture for the Hydragyrum environment, an event driven simulation tool, in order to provide a standardized procedure to accomplish the interoperability among several simulation projects created under this environment. As a direct consequence of this architecture, distributed simulations could be done where models can obtain processing gains if there is a cautious balance between the computation size of the processes and the communication needs among processes allocated in different machines
Master's degree (Mestrado) in Electrical Engineering, in the area of Telecommunications and Telematics.
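The discrete-event core that an environment like the one described above builds on can be illustrated with a minimal sketch. Nothing below is taken from Hydragyrum itself; the `Simulator` class and its `schedule`/`run` methods are hypothetical names for the generic pattern: a simulation clock plus a priority queue of timestamped events, processed strictly in time order.

```python
import heapq

class Simulator:
    """Minimal discrete-event simulation core: events are stored in a
    priority queue and processed in timestamp order."""
    def __init__(self):
        self.clock = 0.0
        self._queue = []
        self._seq = 0  # tie-breaker for events scheduled at the same time

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.clock + delay, self._seq, action))
        self._seq += 1

    def run(self):
        while self._queue:
            # Advance the clock to the next event; time never moves backwards.
            self.clock, _, action = heapq.heappop(self._queue)
            action(self)

# Example: a source that emits three packets spaced 1.0 time unit apart.
log = []
def emit(sim, remaining=[3]):
    log.append(sim.clock)
    remaining[0] -= 1
    if remaining[0] > 0:
        sim.schedule(1.0, emit)

sim = Simulator()
sim.schedule(0.0, emit)
sim.run()
# log == [0.0, 1.0, 2.0]
```

In a client/server setting such as the one the thesis proposes, each simulation project would run a loop like this locally, with the middle tier mediating event exchange between projects.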
Cerqueira, Eduardo Coelho. "Implementação de um software discriminador de repasse de eventos para a arquitetura internet". Florianópolis, SC, 2003. http://repositorio.ufsc.br/xmlui/handle/123456789/85901.
Full text available.
This work presents an event forwarding discriminator, the Software Discriminador de Repasse de Eventos (SDRE), built in Java, which applies OSI-model functionality in SNMP environments. A literature review of the two network management architectures is presented, seeking to adapt the OSI/ITU-T X.734 recommendation to the Internet, and current efforts in this area are discussed as motivation. The application is simple and functional, emphasizing the event-reporting technique: it filters events and can select several destination management stations, as a way to decentralize and improve the performance of the Internet network management architecture. Finally, the results of the tests performed, the evaluation and validation of the software, and the simulated and production environments where the discriminator was deployed are presented.
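The core idea of an event forwarding discriminator, in the spirit of ITU-T X.734, can be sketched in a few lines. This is not the SDRE implementation; the `Discriminator` class, its `add_route`/`report` methods, and the event fields are hypothetical, showing only the pattern of filtering events and fanning them out to the management stations whose criteria they match.

```python
class Discriminator:
    """Toy event forwarding discriminator: each route pairs a filter
    predicate with a destination management station."""
    def __init__(self):
        self.routes = []  # list of (predicate, destination) pairs

    def add_route(self, predicate, destination):
        self.routes.append((predicate, destination))

    def report(self, event):
        """Forward the event to every destination whose filter accepts it."""
        delivered = 0
        for predicate, destination in self.routes:
            if predicate(event):
                destination.append(event)
                delivered += 1
        return delivered

nms_a, nms_b = [], []  # two destination management stations
efd = Discriminator()
efd.add_route(lambda e: e["severity"] >= 3, nms_a)     # critical events only
efd.add_route(lambda e: e["type"] == "linkDown", nms_b)

efd.report({"type": "linkDown", "severity": 5})  # matches both routes
efd.report({"type": "login", "severity": 1})     # matches neither, discarded
```

Selecting several destination stations per event is what lets such a discriminator decentralize event reporting, as the abstract describes.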
Torres, Andre Luis de Lucena. "Simulação baseada em atores como ferramenta de ensino de organização e arquitetura de computadores". Universidade Federal da Paraíba, 2012. http://tede.biblioteca.ufpb.br:8080/handle/tede/6065.
Full text available. Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES).
Educational informatics is increasingly present in pedagogical activities. In this new reality, many applications aim to ease the construction of knowledge between teachers and students through dynamic methods, presenting subjects from multiple branches without great effort or unnecessary repetition. In computing, applications that facilitate learning are essential. It has been observed that some introductory concepts in essential subjects are taught at a level of abstraction that harms students of computing courses, who already struggle with hardware-related subjects. The use of simulators in education has likewise become more common. This work presents the results achieved with an extension developed for Ptolemy, an actor-based tool for modeling and simulating concurrent systems. The extension was created to support the teaching-learning process in the undergraduate course on Computer Organization and Architecture.
Prado, Renato Silva de Almeida. "Arquitetura de interface: análise de formas de organização da informação na interação entre pessoas e códigos". Pontifícia Universidade Católica de São Paulo, 2006. https://tede2.pucsp.br/handle/handle/4804.
Full text available. Two important developments mark the communication processes of the last two decades. The first is the mass use of computers on a global scale (the digital medium); the second is the connection between them (the network). Both were accompanied by significant changes in the interface between human and machine. The wide spread of the computer began with the adoption of the graphical interface in place of the command line, together with the assimilation of the mouse. The growth of the internet was likewise driven by an interface change, with the appearance of Mosaic in 1993: the early internet, formed basically of textual information, began to work in a multimedia way, with sound and image alongside text. This project analyzes the unfolding of these two developments, through a reading and analysis of aspects of digital culture and network culture, and raises concepts and characteristics pertinent to these contexts for the development of new interfaces that could represent a new step forward in interaction. More than ten years have passed, and the signs of these changes are ever more evident and intertwined with our social and cultural daily life. The discussion about the need for new interfaces is already significant, as Steven Johnson, Richard Grusin, Jay David Bolter, Lev Manovich, Giselle Beiguelman and Peter Weibel put it. Besides these authors, this work draws heavily on the views of Alexander Galloway and Howard Rheingold. The relevance of this study grows as digital interfaces become present in ever more social layers and activities while having their capacity called into question. Today's graphical interface still retains characteristics defined in the 1970s, when it was developed to work with a quantity of information restricted to a single computer; at the same time, it now accesses and manipulates a much larger quantity of information, coming from and distributed to billions of computers.
Ceron, João Marcelo. "MARS: uma arquitetura para análise de malwares utilizando SDN". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-28022018-105426/.
Full text available. Mechanisms to detect and analyze malicious software are essential to improving security systems, yet current security mechanisms have limited success in detecting sophisticated malware. Beyond evading analysis systems, much malware requires specific conditions to activate its actions on the target system. The flexibility of Software-Defined Networking (SDN) provides an opportunity to develop a malware analysis architecture that detects behavioral deviations in an automated way. This thesis presents a specialized architecture that analyzes malware by managing the analysis environment centrally, including the sandbox and the elements that surround it. The proposed architecture makes it possible to determine the network access policy, handle the configuration of analysis environment resources, and manipulate the network connections performed by the malware. To evaluate the solution, a set of malware was analyzed in two scenarios. In the first, the proposed mechanisms increased the number of behavioral events in 100% of the malware analyzed. In the second, malware designed for IoT devices was analyzed: using the MARS features, it was possible to block attacks, manipulate attack commands, and enable the malware's communication with its botnet controller. The experimental results show that the solution improves the dynamic malware analysis process by providing this configuration flexibility to the analysis environment.
Penteado, Cesar Giacomini. "Arquitetura modular de processador multicore, flexível, segura e tolerante a falhas, para sistemas embarcados ciberfísicos". Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-28022011-155817/.
Full text available. Cyber-physical systems (CPS) couple computation with physical processes. CPS will be used in several areas, forming a new era of systems and devices that can be anywhere and be used by anyone and anything. Applications include highly reliable medical systems and devices, traffic control and safety, advanced automotive systems, process control, energy conservation, environmental control, aviation, instrumentation, control of critical infrastructure, defense systems, manufacturing, and smart structures. The CPS scenario therefore demands embedded systems built on processors with new features beyond I/O processing, power consumption, and communication: future processor architectures should also offer security, fault tolerance, and architectural adaptation and flexibility for varied scenarios. In this context, this thesis proposes a modular multicore processor architecture (AMM) for use in CPS, composed of multicore processors, dedicated hardware, or both. The architecture was verified through simulations using the Modelsim software and integrated-circuit simulation tools, and application programs were run to demonstrate its main features. A prototype of the AMM on FPGA is shown, along with implementation data such as FPGA usage and silicon area, as well as an ASIC prototype of the AMM core. The prototype was analyzed with critical applications used in CPS, exercising security, redundancy, and fault tolerance; these tests suggest that future CPS processors must have those characteristics. The thesis thus shows that these aspects can be included in embedded systems with features dedicated to multicore applications and systems used in CPS.
Rocha, Vladimir Emiliano Moreira. "Uma arquitetura escalável para recuperação e atualização de informações com relação de ordem total". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-04012018-111326/.
Full text available. Since the beginning of the 21st century, we have experienced explosive growth in the generation of information such as photos, audio, and video. Some of this information can be divided into parts related by a total order: a video file, for example, can be divided into ten segments identified with numbers from 1 to 10, and to play the original video from these segments their identifiers must be totally ordered. A structure called a Distributed Hash Table (DHT) has been widely used to efficiently store, update, and retrieve this kind of information in several application domains, such as video on demand and sensor monitoring. However, DHTs encounter scalability issues when one of their members fails to answer requests, resulting in information loss. This work presents MATe, a layered architecture that addresses scalability on two levels: extending the DHT with utility-based agents, and organizing the volume of requests. The first layer manages scalability by creating new agents to distribute requests when an agent's scalability is compromised. The second layer is composed of groups of devices, organized so that only a few of them are chosen to perform requests. The architecture was implemented in two application scenarios where scalability problems arise: (i) sensor monitoring and (ii) video on demand. In both scenarios, the experimental results show that MATe improves scalability compared to original DHT implementations.
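The abstract's video example (segments 1 to 10 scattered over a DHT and reassembled in total order) can be illustrated with a toy sketch. This is not MATe or any real DHT implementation; the `ToyDHT` class, its hashing-by-key node assignment, and the `video:N` keys are illustrative assumptions only.

```python
import hashlib

class ToyDHT:
    """Toy DHT: each key is hashed to pick the node that stores it."""
    def __init__(self, nodes):
        self.nodes = {n: {} for n in nodes}

    def _owner(self, key):
        digest = hashlib.sha1(key.encode()).hexdigest()
        names = sorted(self.nodes)
        return names[int(digest, 16) % len(names)]  # deterministic owner

    def put(self, key, value):
        self.nodes[self._owner(key)][key] = value

    def get(self, key):
        return self.nodes[self._owner(key)].get(key)

dht = ToyDHT(["n1", "n2", "n3"])
for i in range(1, 11):                     # store segments video:1 .. video:10
    dht.put(f"video:{i}", f"segment-{i}")

# Reassembly works only because the segment identifiers are totally
# ordered: fetch the keys in that order regardless of which node holds each.
video = [dht.get(f"video:{i}") for i in range(1, 11)]
```

The scalability problem MATe addresses appears when one of the `nodes` dictionaries above is a real machine that stops answering `get` calls: every segment it owned becomes a gap in the ordered sequence.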
Possignolo, Rafael Trapani. "Projeto de um coprocessador quântico para otimização de algoritmos criptográficos". Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-21062013-101915/.
Full text available. The discovery of Shor's algorithm, which allows polynomial-time factoring of integers, motivated efforts toward the implementation of a quantum computer. Such a computer would be capable of breaking the main public-key cryptosystems in use today (RSA and those based on elliptic curves), which provide security services such as data confidentiality and integrity, source authentication, and the distribution of symmetric session keys. Breaking those cryptosystems requires a large quantum computer (around 2000 qubits); nevertheless, cryptographers have started to look for alternatives, some of which come from quantum mechanics itself. Despite some interesting properties of quantum cryptography, a complete cryptosystem seems intangible, especially because of the digital signatures necessary for authentication. Cryptosystems based on purely classical problems believed not to be tractable by quantum computers, called post-quantum, have thus been proposed. These systems still lack practicality, because of either key size or processing time. Among post-quantum cryptosystems, especially the code-based ones, the highlights are McEliece and Niederreiter. Per se, neither provides digital signatures, but CFS signatures have been proposed as a complement to them. Even if general-purpose quantum computers are still far from reality, it is possible to imagine a small dedicated quantum circuit whose benefits could make the difference in enabling those signatures in a truly post-quantum scenario. In this work, a hybrid quantum/classical architecture is proposed to accelerate post-quantum cryptographic algorithms.
Two quantum coprocessors implementing Grover search are proposed: one to assist the decoding of Goppa codes, in the context of the McEliece and Niederreiter cryptosystems; the other to assist the search for decodable syndromes, in the context of CFS digital signatures. The results show that, in some cases, the quantum coprocessor allows up to a 99.7% reduction in key size and up to a 76.2% acceleration in processing time. As a specific circuit computing a well-defined function, it can be kept small (around 300 qubits, depending on what is accelerated), showing that, if quantum computers come into existence, they will make post-quantum cryptosystems practical before breaking the current cryptosystems. Additionally, some implementation technologies for quantum computers are studied, in particular linear optics and silicon-based technologies, to evaluate their feasibility as candidates for the construction of a complete, personal quantum computer.
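The Grover search that both coprocessors rely on can be demonstrated with a classical state-vector simulation. This is an illustration of the generic algorithm, not of the thesis's circuits: over N items, roughly (pi/4)*sqrt(N) iterations of phase flip plus inversion about the mean concentrate the amplitude on the marked item, versus N/2 classical probes on average.

```python
import math

def grover(n_states, marked):
    """Classically simulate Grover's search: returns the index with the
    largest probability after the standard number of iterations."""
    amp = [1.0 / math.sqrt(n_states)] * n_states        # uniform superposition
    iterations = int(math.floor(math.pi / 4 * math.sqrt(n_states)))
    for _ in range(iterations):
        amp[marked] = -amp[marked]                       # oracle: flip marked phase
        mean = sum(amp) / n_states
        amp = [2 * mean - a for a in amp]                # diffusion: invert about mean
    return max(range(n_states), key=lambda i: amp[i] ** 2)

# For N = 16 the loop runs only 3 times, yet the marked item dominates.
print(grover(16, marked=11))
```

In the thesis's setting, the oracle would encode "is this syndrome decodable" (CFS) or a step of Goppa decoding, which is what makes a small dedicated circuit plausible.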
Viana, Phillip Luiz. "Uma arquitetura de preservação a longo prazo de Big Data com gerenciamento de elasticidade em nuvem". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-05112018-152833/.
Full text available. With the exponential growth in the volume of structured and unstructured data (Big Data) in enterprise storage systems, and the likewise increasing demand to preserve such data due to regulations and audits, the problem arises of long-term preservation of Big Data, and more specifically of how to extend existing systems over time. Recent research projects encompass architectures for the preservation of structured data or for short-term archiving of Big Data, but they lack a model for architectures that support long-term preservation of Big Data with elasticity. This thesis proposes an architecture for the archiving, long-term preservation and retrieval of Big Data with elasticity. A method for creating reference architectures was followed, resulting in a reproducible long-term preservation architecture capable of adapting to growing demand while receiving Big Data continuously. The architecture is compatible with cloud computing and was tested against several storage media, such as magnetic media, cloud and solid state. A comparison between the architecture and other available architectures is also provided.
Junior, Roberto Borges Kerr. "Proposta e desenvolvimento de um algoritmo de associatividade reconfigurável em memórias cache". Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-01102008-135441/.
Full text available. With the constant evolution of processor architectures, the overhead of memory access grows ever larger. To mitigate this problem, processor designers employ several performance techniques, such as cache memories. Caches alone, however, cannot meet all these needs, which is why techniques that use the cache more effectively are important. Addressing this problem, some authors apply reconfigurable computing to improve cache performance. This work analyzes the reconfiguration of the cache associativity algorithm and proposes improvements to this algorithm to make better use of its resources, presenting practical results from simulations with several cache organizations.
Ramos, André Luiz Tietböhl. "Organização de conhecimento e informações para integração de componentes em um arcabouço de projeto orientado para a manufatura". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/134615.
Full text available. This work proposes the use of industry standards to support the application of Design for Manufacturing (DFM) techniques at a comprehensive scale in the design field. The specific aspect considered in the architecture is the definition and structure of the DFM information context. To demonstrate the research concepts, several design activities are implemented in the framework (which focuses on machining processes): a tolerancing model, a cost model based on material-removal processes, a tool-accessibility model that takes the part being designed into account, a machine and tool availability model, and material analysis. The broad needs of design-based frameworks require an architecture able to handle the different contexts in which framework design information is used, or information context concepts. This is relevant because several DFM components and activities should preferably be included in the design process; each may have distinct data and knowledge requirements, which the current information architecture (STEP) can handle only in part, and each may need different ways of understanding DFM information (information context). The framework handles information context concepts through ontologies targeted at the DFM field. A better comprehension and use of the information interfaces intrinsic to the domain is expected, through which more flexible and, information-wise, more effective DFM systems can be obtained.
Kobayashi, Jorge Mamoru. "Entropy: algoritmo de substituição de linhas de cache inspirado na entropia da informação". Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-29112016-102603/.
Full text available. This work presents a study of the cache line replacement problem in microprocessors. Inspired by the information entropy concept stated by Claude E. Shannon in 1948, it proposes a novel heuristic for replacing cache lines, with the goal of capturing the referential locality of programs and reducing the miss rate of cache accesses during program execution. The proposed algorithm, Entropy, employs this entropy heuristic to estimate the chances that a cache line will be referenced again after it has been loaded into the cache; a novel decay function was introduced to optimize its operation. Results show that Entropy reduces the miss rate by up to 50.41% compared to LRU. The work also proposes a hardware implementation that keeps computation and complexity costs comparable to the most widely used algorithm, LRU: for a 2-Mbyte, 8-way set-associative cache, the required storage area is 0.61% of the cache size. The Entropy algorithm was simulated on the SimpleScalar ISA simulator and compared to LRU using the SPEC CPU2000 benchmark programs.
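The LRU baseline that Entropy is measured against can be sketched as a tiny set-associative cache simulator. The Entropy heuristic itself is not reproduced here (the thesis defines it); the function below only shows the kind of miss-rate experiment such a comparison rests on, with hypothetical set/way sizes.

```python
from collections import OrderedDict

def lru_miss_rate(trace, num_sets=4, ways=2):
    """Simulate a set-associative cache with LRU replacement and
    return the miss rate over an address trace."""
    sets = [OrderedDict() for _ in range(num_sets)]
    misses = 0
    for addr in trace:
        s = sets[addr % num_sets]        # trivial set-index function
        if addr in s:
            s.move_to_end(addr)          # hit: refresh recency order
        else:
            misses += 1
            if len(s) >= ways:
                s.popitem(last=False)    # evict the least recently used line
            s[addr] = True
    return misses / len(trace)

# All of these addresses map to set 0, so the 2 ways thrash: 6 of 8
# accesses miss.
print(lru_miss_rate([0, 4, 0, 8, 0, 4, 12, 0]))  # → 0.75
```

A replacement heuristic like Entropy plugs in at the eviction line: instead of evicting by recency alone, it evicts the line its estimator deems least likely to be referenced again.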
Pereira, Fábio Dacêncio. "Proposta e implementação de uma Camada de Integração de Serviços de Segurança (CISS) em SoC e multiplataforma". Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/3/3142/tde-18122009-124154/.
Full text available. Computer networks are increasingly complex environments, equipped with new services, users and infrastructure, and information safety and privacy are fundamental to their evolution. Anonymity, weaknesses and other factors often encourage people to create malicious tools and attack techniques against information and computer systems, generating anything from small inconveniences to moral and financial damage. Intrusion detection combined with other security tools can protect computer systems and prevent malicious attacks and anomalies. Yet, given the complexity and robustness of these systems, security services are not always able to examine and audit the entire information flow, creating points of security failure that can be discovered and exploited. This PhD thesis therefore proposes, designs, implements and evaluates the performance of an Integrated Security Services Layer (ISSL). Several security services were implemented and integrated into the ISSL, such as a firewall, IDS, antivirus, authentication tools, proprietary tools and cryptography services. The main feature of the ISSL is a common structure for storing information about incidents in a computer system. This information is the source of knowledge from which the anomaly detection system embedded in the ISSL can act effectively in prevention and protection, detecting and classifying anomalous situations early. To this end, behavioral models were created based on Hidden Markov Model concepts (MHMM), together with models for the analysis of anomalous sequences. The ISSL was implemented in three versions: (i) a System-on-Chip (SoC), (ii) the JCISS software in Java, and (iii) a simulator. Results such as time performance, occupancy rates, the impact on anomaly detection and implementation details are presented, compared and analyzed in this thesis.
The ISSL obtained significant anomaly detection rates using the MHMM model: over 96% for known attacks; above 80% for attacks observed only partially in time; over 96% for attacks observed as a partial sequence; and over 54% for unknown attacks. The main contributions of the ISSL are the structure for integrating security services and the correlation and analysis of anomalous occurrences, which reduce false positives, enable early detection and classification of abnormalities, and help protect computer systems. Furthermore, solutions were devised to improve detection, such as the sequential model, and features such as subMHMM for real-time learning. Finally, the SoC and Java implementations allowed the ISSL to be evaluated and used in real environments.
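As a rough illustration of how sequence models flag anomalies, the sketch below uses a plain first-order Markov chain, a deliberate simplification of the MHMM behavioral models described above (no hidden states). The training data, event names, and the `1e-6` floor for unseen transitions are all hypothetical.

```python
from collections import defaultdict

def train(sequences):
    """Estimate transition probabilities from normal event sequences."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

def score(model, seq, floor=1e-6):
    """Likelihood of a sequence; transitions never seen in training
    get a tiny floor probability, so anomalous sequences score low."""
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        p *= model.get(a, {}).get(b, floor)
    return p

normal = [["login", "read", "logout"]] * 10
model = train(normal)
# The familiar sequence scores far higher than one with an unseen event.
assert score(model, ["login", "read", "logout"]) > score(model, ["login", "rm-rf", "logout"])
```

A real MHMM adds hidden states and learned emission probabilities, which is what lets the ISSL generalize to the partially observed attacks reported above.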
Luz, Fernando Henrique e. Paula da. "Metodologia para execução de aplicações paralelas baseadas no modelo BSP com tarefas heterogêneas". Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-07072016-143329/.
Full text available. Parallel computing allows for a series of advantages in the execution of large applications, and the effective use of parallel resources is an important aspect of high-performance computing. This work presents a methodology to execute, in an automated way, parallel applications based on the BSP model with heterogeneous tasks, under the assumption that the computation time between iterations does not have high variance. The methodology, entitled ASE, is composed of three stages: Acquisition, Scheduling and Execution. In the Acquisition step, the tasks' processing times are obtained. In the Scheduling step, the methodology finds an arrangement that distributes the tasks so as to maximize execution speed while minimizing resource use, using an algorithm developed in this work. Lastly, in the Execution step, the parallel application runs with the distribution defined in the previous step. The tools used in the methodology were implemented, and a set of tests applying the methodology showed that the objectives were reached.
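The Scheduling step can be illustrated with a standard greedy heuristic. This is not the ASE algorithm (the thesis defines its own); the sketch below assumes measured task times from an Acquisition-like step and applies longest-processing-time-first (LPT) placement, which captures the same goal: since a BSP superstep finishes only when its slowest worker does, balancing worker loads minimizes the superstep's finish time.

```python
import heapq

def schedule(task_times, workers):
    """Greedy LPT scheduling: place each task (longest first) on the
    currently least-loaded worker. Returns (load, worker, tasks) triples."""
    loads = [(0.0, w, []) for w in range(workers)]
    heapq.heapify(loads)
    for t in sorted(task_times, reverse=True):
        load, w, tasks = heapq.heappop(loads)   # least-loaded worker
        tasks.append(t)
        heapq.heappush(loads, (load + t, w, tasks))
    return list(loads)

# Hypothetical measured times for five heterogeneous tasks on two workers.
plan = schedule([8, 7, 6, 5, 4], workers=2)
makespan = max(load for load, _, _ in plan)   # superstep finish time
print(makespan)
```

LPT is only a heuristic (here it yields a makespan of 17 against an optimum of 15), which is one reason a dedicated scheduling algorithm, as in the thesis, can pay off.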
Ribacionka, Francisco. "Algoritmo distribuído para alocação de múltiplos recursos em ambientes distribuídos". Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-06072014-231945/.
Full text available. Consider a distributed system composed of servers, clients and resources, characterizing environments such as computational grids or clouds that offer a large number of distributed resources (CPUs or virtual machines, for instance) used jointly by different types of applications; such environments need a solution for allocating these resources. That allocation support must satisfy all resource requests from these applications, allocate resources efficiently, be fair in the case of simultaneous requests from multiple clients, and answer these requests in finite time. In this context of large-scale distributed systems, this work proposes a distributed algorithm for resource allocation. The algorithm applies fuzzy logic whenever a server is unable to meet a request made by a client, forwarding the request to a remote server, and it uses the concept of logical clocks to ensure fairness in meeting the demands made on all servers that share resources. The algorithm follows a distributed model: a copy runs on each server that shares resources with its clients, and all servers take part in decisions regarding the allocation of resources. The strategy aims to minimize the response time of resource allocation, functioning as a load balancer in a client-server environment with heavy resource demand from clients.
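The fairness mechanism described above can be sketched with Lamport logical clocks. This is not the thesis's algorithm (its fuzzy-logic forwarding is omitted entirely); the `Server` class and its methods are hypothetical, showing only how stamping requests with logical timestamps and granting them in (timestamp, client) order keeps simultaneous clients from being starved.

```python
import heapq

class Server:
    """One replica of a resource-allocation service using Lamport clocks."""
    def __init__(self):
        self.clock = 0
        self.pending = []  # priority queue of (timestamp, client_id)

    def receive(self, client, sent_at):
        # Lamport update rule: local clock jumps past the message's stamp.
        self.clock = max(self.clock, sent_at) + 1
        heapq.heappush(self.pending, (sent_at, client))

    def grant_next(self):
        """Grant the resource to the oldest request; ties break by client id."""
        return heapq.heappop(self.pending)[1]

s = Server()
s.receive("B", sent_at=3)
s.receive("A", sent_at=1)   # older request that arrived later
s.receive("C", sent_at=3)   # same timestamp as B; tie broken by id

print([s.grant_next() for _ in range(3)])  # → ['A', 'B', 'C']
```

Because every replica applies the same (timestamp, client) ordering to the same request set, all servers grant resources in the same total order, which is the property the distributed model above relies on.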