Academic literature on the topic 'Priority map'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Priority map.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Dissertations / Theses on the topic "Priority map"

1

Ouředníková, Lucie. "TIME MANAGEMENT - nástroj nejen pro prokrastinující studenty." Master's thesis, Vysoká škola ekonomická v Praze, 2017. http://www.nusl.cz/ntk/nusl-359472.

Abstract:
This master's thesis introduces the term time management and describes the steps for implementing it successfully in everyday life. Its aim is to describe how familiar students of the University of Economics are with the term and to find out whether studying at the university, and in particular passing the course Management of Personal Development, expands that knowledge. Mind mapping is used to address this aim: the mind maps illustrate the students' ideas about the term. The analysis of the mind maps shows that students of the university already know time management quite well, but that passing Management of Personal Development can considerably expand this knowledge.
2

Hasan, Meqdad, and Rahul Kali. "Method for Autonomous picking of paper reels." Thesis, Högskolan i Halmstad, Intelligenta system (IS-lab), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-16212.

Abstract:
Autonomous forklift handling systems have been one of the most active research areas of recent decades. While fields such as path planning and map building account for most of the work on other types of autonomous vehicles, detecting the objects to be moved and picking them up is one of the most important research problems for autonomous forklifts. In this work we provide an algorithm for detecting the accurate positions of paper reels in a reel warehouse, given a map of the warehouse itself. A second algorithm assigns a picking priority to the reels that need to be picked up. Finally, two further algorithms choose the most appropriate direction for picking the target reel and the safest path to reach it without damaging it. While the last two algorithms give very good results, building a map of an unknown stack of reels by accumulating maps over time remains tricky. In the following pages we describe in detail the steps we followed to develop these algorithms, starting with an overview of the problem background, moving through the methods we used or developed, and ending with the results and the conclusions drawn from this work.
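A rough sketch of the two-stage idea described in this abstract (rank reels by a picking priority, then plan a path that avoids the other reels) is given below; the reel fields, priority weights and grid encoding are invented for illustration and are not the thesis's actual algorithms.

# Hypothetical sketch: rank reels, then find a path on the warehouse grid
# that never crosses a cell occupied by another reel. All values are assumed.
from collections import deque

def rank_reels(reels, w_due=1.0, w_access=0.5):
    """reels: list of dicts with hypothetical keys 'id', 'due_in_h', 'blocked_by'."""
    def score(r):
        # more urgent orders and less-blocked reels score higher (assumed rule)
        return w_due / max(r["due_in_h"], 0.1) - w_access * r["blocked_by"]
    return sorted(reels, key=score, reverse=True)

def safest_path(grid, start, goal):
    """Breadth-first search on a grid where 0 = free cell, 1 = reel/obstacle."""
    rows, cols = len(grid), len(grid[0])
    prev, queue = {start: None}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                queue.append((nr, nc))
    return None  # goal unreachable without touching another reel

reels = [{"id": "A", "due_in_h": 2, "blocked_by": 0},
         {"id": "B", "due_in_h": 8, "blocked_by": 1}]
print([r["id"] for r in rank_reels(reels)])                    # A is picked first
print(safest_path([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))
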
3

Mojžiš, Ľubomír. "Návrh přepínače využitelného v moderních komunikačních sítích." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-220535.

Abstract:
This diploma thesis deals with the switching components of modern communication networks. A switch architecture focused on quality of service is described, and two switch models are designed and simulated in MATLAB-Simulink. The first model is based on classic switching, while the second is controlled by a neural network. On the basis of the designed models, a laboratory exercise suitable for teaching communication networks is also created.
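As an illustration of the kind of priority-aware switching such a model captures, the following Python sketch implements a strict-priority output port; the queue semantics and priority values are assumptions for illustration, not taken from the Simulink design.

# Illustrative strict-priority output port: the most urgent waiting frame is
# always transmitted first, frames of equal priority leave in arrival order.
import heapq
import itertools

class PriorityPort:
    def __init__(self):
        self._heap = []                   # entries: (priority, seq, frame)
        self._seq = itertools.count()     # preserves FIFO order within a class

    def enqueue(self, frame, priority):
        """Lower number = more urgent (e.g. 0 = voice, 7 = bulk data; assumed scale)."""
        heapq.heappush(self._heap, (priority, next(self._seq), frame))

    def transmit_next(self):
        """Serve the highest-priority frame waiting on the port, if any."""
        if not self._heap:
            return None
        _, _, frame = heapq.heappop(self._heap)
        return frame

port = PriorityPort()
port.enqueue("bulk-1", 7)
port.enqueue("voice-1", 0)
port.enqueue("bulk-2", 7)
print(port.transmit_next())   # voice-1 leaves first despite arriving later
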
4

Höller, Yvonne. "A TDMA-MAC Protocol for a Seismic Telemetry-Network with Energy Constraints." International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605941.

Abstract:
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
The requirements of a seismic telemetry network are even more stringent than the well-known constraints of general sensor networks. Existing medium access control (MAC) protocols reduce energy-consuming network activity by limiting costly transmissions and idle listening. In addition, communication patterns must be set up at different priority levels, and critical events must be handled quickly. A protocol is proposed that operates with two parallel sets of time schedules, in a time-division multiple access (TDMA) sense of periodic activity, for listening and for transmitting. Synchronization packets sent from a central base station ensure optimal response times.
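The dual-schedule idea can be sketched as follows; the slot counts, frame length and node assignments are illustrative assumptions, not the protocol's actual parameters.

# Minimal sketch: each node holds a routine TDMA schedule plus a second
# schedule reserved for critical events. All numbers below are assumed.
FRAME_SLOTS = 20                                          # slots per frame
ROUTINE = {3: "node-A", 7: "node-B", 12: "node-C"}        # periodic reporting
CRITICAL = {0: "node-A", 1: "node-B", 2: "node-C"}        # fast event handling

def slot_owner(slot_index, event_pending):
    """Return which node may transmit in this slot of the current frame.

    When an event is pending, the critical schedule takes precedence for its
    reserved slots; otherwise nodes follow the routine schedule and keep the
    radio off in unassigned slots, avoiding idle listening.
    """
    slot = slot_index % FRAME_SLOTS
    if event_pending and slot in CRITICAL:
        return CRITICAL[slot]
    return ROUTINE.get(slot)   # None = radio off

# A sync packet from the base station would realign slot_index at each frame
# boundary; here we just walk the first few slots of one frame.
for s in range(6):
    print(s, slot_owner(s, event_pending=True))
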
5

Švec, Adam. "Optimalizace přepínače v konvergované síti." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-220343.

Abstract:
This thesis, Switch Optimization in a Converged Network, discusses the role of the Ethernet switch in the network. It describes the differences between a switch and a hub and their impact on the size of the collision domain. In a converged network, switching data by priority according to their origin, and thus from a qualitative point of view, is also addressed: e-mail, voice services and multimedia each behave differently. A real device, an L3 switch with quality-of-service support, is presented. The Matlab-Simulink tool is briefly described, and a functional, simplified model of an Ethernet switch is created. The topic is also used as the theme for a laboratory task in the course Services of Telecommunication Networks; a Simulink model and an example laboratory protocol for the created task are attached to the thesis.
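A hypothetical sketch of the origin-based classification step in such a switch is given below; the class numbers and scheduling weights are assumptions for illustration only, not values from the thesis.

# Map each traffic type named in the abstract to a queue with its own weight,
# so a downstream scheduler can serve voice ahead of bulk traffic. Assumed values.
TRAFFIC_CLASSES = {
    "voice":      {"queue": 0, "weight": 5},   # lowest delay tolerance
    "multimedia": {"queue": 1, "weight": 3},
    "email":      {"queue": 2, "weight": 1},   # best effort
}

def classify(frame_type):
    """Return (queue index, scheduling weight) for a frame of the given type."""
    cls = TRAFFIC_CLASSES.get(frame_type, {"queue": 2, "weight": 1})
    return cls["queue"], cls["weight"]

print(classify("voice"))    # (0, 5) -> served most often by the scheduler
print(classify("email"))    # (2, 1)
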
6

Tazir, Mohamed Lamine. "Precise localization in 3D prior map for autonomous driving." Thesis, Université Clermont Auvergne‎ (2017-2020), 2018. http://www.theses.fr/2018CLFAC047/document.

Abstract:
Self-driving vehicles are becoming a tangible reality and will soon share our roads with other vehicles, autonomous or not. For a self-driving car to move safely through its environment, it needs to sense its immediate surroundings and, above all, to localize itself in order to plan a safe trajectory to follow. For tasks such as trajectory planning and navigation, precise localization is therefore of utmost importance: it allows the vehicle to constantly plan and predict an optimal path and to weave through cluttered spaces while avoiding collisions with the other agents sharing the same space. For years, the Global Positioning System (GPS) has been a widespread complementary solution for navigation, but it offers only limited precision (on the order of several meters). Although Differential GPS and Real Time Kinematic (RTK) systems reach considerably better accuracy, they remain sensitive to signal masking and multipath reflections and offer poor reliability in dense urban areas. These deficiencies make such systems unsuitable for hard real-time tasks such as collision avoidance. A prevailing alternative that has recently attracted interest is to load a prior map into the system so that the vehicle has a reliable reference to rely on. Maps facilitate the navigation process and add an extra layer of safety and semantic understanding: the vehicle uses its on-board sensors to compare what it perceives at a given instant with what is stored in memory, and can therefore anticipate and predict its actions accordingly.
The purpose of this thesis is to develop tools allowing accurate localization for the complex navigation tasks outlined above. Localization is mainly performed by matching a 3D prior map with the incoming point clouds as the vehicle moves. Three main objectives are set out, organized in two distinct phases: map building and localization. The first phase is the construction of the map with centimeter accuracy, using static or dynamic laser surveying; the experimental setup and data acquisition campaigns carried out during this work are described in detail. The idea is to build efficient maps that can be updated in the long run, so that the environment representation contained in the 3D models is compact and robust; keeping the map building independent of any dedicated infrastructure is of paramount importance for flexible mapping and localization. To build maps incrementally, we rely on our own implementation of the state-of-the-art iterative closest point (ICP) algorithm, which is then extended with new variants and compared with other implementations available in the literature. However, obtaining accurate maps requires very dense point clouds, which makes them inefficient for real-time use. In this context, the second objective deals with point cloud reduction. The proposed approach uses both the color information and the geometry of the scene: it finds sets of 3D points with the same color in a very small region and replaces each set with a single point. As a result, the volume of the map is significantly reduced, while properties such as the shape and color of the scanned objects are preserved. The third objective is efficient, precise and reliable localization once the maps are built and processed. For this purpose, the online data must be processed accurately and quickly, with low computational effort, while maintaining a coherent model of the explored space; to this end, the Velodyne HDL-32 comes into play. (...)
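The color-and-geometry reduction described above can be sketched roughly as follows; the cell size, color quantization step and centroid rule are assumed parameters for illustration, not the thesis's implementation.

# Collapse points that fall in the same small cell and share (approximately)
# the same color into a single representative point.
from collections import defaultdict

def reduce_cloud(points, cell=0.05, color_step=16):
    """points: iterable of (x, y, z, r, g, b); returns the reduced cloud."""
    buckets = defaultdict(list)
    for x, y, z, r, g, b in points:
        key = (int(x // cell), int(y // cell), int(z // cell),
               r // color_step, g // color_step, b // color_step)
        buckets[key].append((x, y, z, r, g, b))
    reduced = []
    for group in buckets.values():
        n = len(group)
        # replace each same-cell, same-color group by its centroid
        reduced.append(tuple(sum(p[i] for p in group) / n for i in range(6)))
    return reduced

cloud = [(0.01, 0.00, 0.0, 200, 30, 30),
         (0.02, 0.01, 0.0, 201, 31, 29),
         (0.01, 0.00, 0.0, 30, 30, 200)]   # third point: different color
print(len(reduce_cloud(cloud)))            # 2 points survive
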
7

Kammoun, Inès. "Codage spatio-temporel sans connaissance a priori du canal." Paris, ENST, 2004. http://www.theses.fr/2004ENST0026.

Abstract:
Wireless communication systems with multiple transmit and receive antennas promise very high data rates over scattering-rich fading channels. Most of the proposed schemes that achieve these high rates require the channel to be known at the receiver. In practice, channel knowledge is obtained via training sequences, which can significantly decrease the spectral efficiency of the system. We first propose an EM-based maximum a posteriori semi-blind receiver: this iterative, block-by-block receiver uses pilots as well as the unknown data symbols to improve the quality of the channel estimate, with Alamouti's two-branch space-time scheme used at the transmitter. However, estimating the channel is not always feasible or advantageous. We therefore consider space-time transmission schemes that require channel state information neither at the transmitter nor at the receiver; such schemes are referred to as noncoherent. In this context, we design new schemes that lead to efficient encoding and decoding for noncoherent MIMO communication. First, we prove that designing a good noncoherent code is equivalent to packing subspaces on the Grassmann manifold with a distance criterion deduced from the asymptotic pairwise error probability. By studying the existing parameterizations of the Grassmann manifold, we conclude that a new, more general one must be introduced, and we propose an exponential parameterization of this manifold. We also propose a method to simplify the code design criterion on the Grassmann manifold, which leads to a new family of space-time codes suited to the noncoherent case. These codes have several advantages: the number of conveyed information symbols is maximized, and the maximum diversity order is reached using tools similar to those of the coherent case. They achieve a larger spectral efficiency than existing noncoherent codes for similar or better performance. Finally, we propose a simplified decoding procedure for the GLRT (Generalized Likelihood Ratio Test) metric that avoids an exhaustive search.
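For reference, a common textbook form of the GLRT decision rule the abstract refers to is reproduced below, assuming the usual noncoherent block model Y = XH + N (Y the received block, X a candidate codeword, H the unknown channel); the thesis's exact notation and its simplified procedure are not reproduced here.

\[
\hat{X} \;=\; \arg\max_{X \in \mathcal{C}} \;
\operatorname{tr}\!\left\{ Y^{H} X \left( X^{H} X \right)^{-1} X^{H} Y \right\},
\]

which reduces to \(\arg\max_{X}\,\lVert X^{H} Y \rVert_{F}^{2}\) when the codewords are unitary (\(X^{H} X = I\)), i.e. the decoder projects the received block onto each candidate codeword's subspace.
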
8

Katila, Charles Jumaa. "Mac protocols for linear wireless (sensor) networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/7626/.

Abstract:
Wireless sensor networks (WSNs) consist of a large number of sensor nodes characterized by tight power constraints, limited transmission range and limited computational capabilities [1][2]. The cost of these devices is constantly decreasing, making it possible to use large numbers of them in a wide array of commercial, environmental, military and healthcare applications. Some of these applications involve placing the sensors evenly spaced along a straight line, for example along roads, bridges, tunnels, water catchments and pipelines, city drainages, or oil and gas pipelines, forming a special class of networks that we define as Linear Wireless Networks (LWNs). In LWNs, data is transmitted hop by hop from the source to the destination through a route composed of multiple relays. The peculiar topology of LWNs motivates the design of specialized protocols that exploit their linearity in order to increase reliability, communication efficiency, energy savings and network lifetime, and to minimize the end-to-end delay [3]. This thesis presents L-CSMA, a novel contention-based medium access control (MAC) protocol devised specifically for LWNs. The basic idea of L-CSMA is to assign different priorities to nodes based on their position along the line. Priority is assigned in terms of sensing duration: nodes closer to the destination are assigned a shorter sensing time than the other nodes and hence a higher priority. This mechanism speeds up the transmission of packets that are already on the path, making the traffic flow more efficiently. Using the NS-3 simulator, the performance of L-CSMA in terms of packet success rate (the percentage of packets that reach the destination) and throughput is compared with that of the IEEE 802.15.4 MAC protocol, the de facto standard for wireless sensor networks. In general, L-CSMA outperforms the IEEE 802.15.4 MAC protocol.
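The position-based priority rule can be sketched as follows; the base sensing time and per-hop increment are invented values for illustration, not L-CSMA's actual timing.

# Nodes closer to the destination sense the channel for a shorter time before
# transmitting, so packets already in transit win contention. Assumed constants.
def sensing_duration_ms(hops_to_sink, base_ms=1.0, step_ms=0.5):
    """Shorter sensing time (higher priority) for nodes nearer the sink."""
    return base_ms + step_ms * hops_to_sink

# On a short line network, the relay one hop from the sink backs off least:
for hops in range(1, 5):
    print(f"{hops} hop(s) from sink -> sense for {sensing_duration_ms(hops):.1f} ms")
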
9

Morales, Marcelo Aparecido. "Política de priorização de acesso de estações com taxas diferentes para redes 802.11 baseada na SNR." Pontifícia Universidade Católica de Campinas, 2008. http://tede.bibliotecadigital.puc-campinas.edu.br:8080/jspui/handle/tede/493.

Abstract:
The IEEE 802.11 wireless local area network exhibits a MAC anomaly when stations with different bit rates are connected to the same access point: stations with a high SNR achieve worse performance than stations with a low SNR. The condition is aggravated in public wireless LANs (PWLANs), where many users connect at different bit rates. This work proposes a policy that uses the SNR and the local propagation conditions to control each user's contention window. With this policy it is possible to control the transmission rate as a function of the SNR, a capability not offered by standard 802.11 networks.
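One plausible shape of such an SNR-driven contention-window policy is sketched below; the mapping direction and thresholds are assumptions for illustration (here, low-SNR stations receive a larger CWmin so they contend less often) and are not taken from the thesis.

# Hypothetical SNR-to-CWmin mapping with invented thresholds.
def cw_min_for_snr(snr_db, cw_base=15, cw_max=1023):
    """Return the CWmin a station should use, given its measured SNR in dB."""
    if snr_db >= 25:            # near the AP, high PHY rate
        cw = cw_base
    elif snr_db >= 15:
        cw = 2 * cw_base + 1    # 31
    else:                       # far away, low PHY rate: back off more
        cw = 4 * cw_base + 3    # 63
    return min(cw, cw_max)

for snr in (30, 18, 8):
    print(f"SNR {snr} dB -> CWmin {cw_min_for_snr(snr)}")
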
10

Oukili, Ahmed. "Reconstruction statistique 3D à partir d’un faible nombre de projections : application : coronarographie RX rotationnelle." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S109/document.

Abstract:
This thesis addresses the statistical iterative 3D reconstruction of the coronary tree from a very small number of coronary angiograms (5 images). During a rotational X-ray angiographic exam, only the projections corresponding to the same cardiac phase are selected, in order to satisfy the condition of spatial and temporal invariability of the object to be reconstructed (static reconstruction). The limited number of projections complicates the reconstruction, which then becomes an ill-posed inverse problem whose solution requires regularization. To this end, we adopt the Bayesian formalism and consider the reconstruction as the random field maximizing the posterior probability (MAP), composed of a quadratic likelihood term (data fidelity) and a Gibbs prior (a Markovian prior based on a partial interpretation of the object to be reconstructed). Maximizing the MAP criterion with a numerical optimization algorithm allowed us to introduce a smoothing constraint that preserves the edges of the reconstruction, by choosing the potential functions associated with the prior energy appropriately. In this manuscript we discuss in detail the three main components of an efficient MAP statistical reconstruction: (1) the construction of a precise physical model of the acquisition process; (2) the selection of an appropriate prior model; and (3) the definition of an efficient iterative optimization algorithm. This discussion led us to propose two iterative MAP algorithms, MAP-MNR and MAP-ARTUR-GC, which we tested and evaluated on realistic simulated data (patient data from a 64-slice CT acquisition).
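For reference, a generic form of the MAP criterion described above, combining a Gaussian (quadratic) likelihood with a Gibbs prior, is given below; the notation is assumed (f the volume to reconstruct, y the projections, A the projection operator, D a difference operator over cliques c, φ an edge-preserving potential, β the regularization weight) and the thesis's specific weighting and potentials are not reproduced here.

\[
\hat{f}_{\mathrm{MAP}}
  \;=\; \arg\max_{f}\; p(f \mid y)
  \;=\; \arg\min_{f}\;
  \tfrac{1}{2}\,\lVert y - A f \rVert_{2}^{2}
  \;+\; \beta \sum_{c \in \mathcal{C}} \varphi\!\left( [D f]_{c} \right),
\]

where the first term attaches the solution to the measured projections and the second enforces smoothness while the choice of φ preserves edges.
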