Selection of scientific literature on the topic "Saillance 3D"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Contents
Consult the lists of current articles, books, theses, reports, and other scientific sources on the topic "Saillance 3D".
Next to every entry in the bibliography, an "Add to bibliography" option is available. Use it, and your bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scientific publication as a PDF and read its online abstract, whenever the relevant parameters are available in the metadata.
Journal articles on the topic "Saillance 3D"
Coops, Nicholas C., Alexis Achim, Paul Arp, Christopher W. Bater, John P. Caspersen, Jean-François Côté, Jeffery P. Dech, et al. "Progrès dans l'application de la télédétection pour les besoins en matière d'information sur les forêts au Canada : leçons tirées d'une collaboration nationale d'intervenants universitaires, industriels et gouvernementaux". Forestry Chronicle 97, no. 2 (June 2021): 127–47. http://dx.doi.org/10.5558/tfc2021-015.
Dissertations on the topic "Saillance 3D"
Nouri, Anass. "Cartes de saillances et évaluation de la qualité des maillages 3D". Caen, 2016. https://hal.archives-ouvertes.fr/tel-01418334.
The gaze of each human being is attracted to specific areas of 3D objects (which can be represented by meshes). This attraction depends on the degree of saliency of those areas. The first goal of this thesis is to propose an approach for detecting visually salient areas on 3D non-colored meshes. We consider a vertex salient if it strongly stands out from its local neighborhood and if its geometric configuration differs from that of its adjacent vertices. To this end, we characterize the surface of the 3D mesh using a local vertex descriptor in the form of an adaptive patch. This descriptor serves as the basis for similarity computation and is integrated into a weighted multi-scale saliency computation. We also propose an extension of our visual saliency model to 3D colored meshes. Four saliency-based applications were developed after validating the saliency detection results against a pseudo ground truth. The first two concern, respectively, optimal viewpoint selection and adaptive compression of 3D non-colored meshes. The third sharpens the details of a 3D colored mesh, and the fourth adaptively smooths its colors. The second aim is to propose a novel perceptual full-reference metric for the quality assessment of 3D meshes. Given a 3D reference mesh (assumed free of any distortions) and a 3D distorted mesh as inputs, the goal is to assess the perceived quality of the distorted mesh by producing a fidelity score as close as possible to human scores. As visual saliency is pertinent information for our visual system, its use in the pipeline of the quality metric is natural. We use two properties of 3D meshes to evaluate their perceived quality: visual saliency and roughness. The multi-scale saliency map is used to extract the structural information of the 3D mesh, and the roughness map to account for the visual masking effect.
We introduce four comparison functions between two corresponding local neighborhoods in order to estimate the structural differences between them. We combine these functions with a weighted Minkowski sum to obtain a final quality score. The third objective of this thesis is to provide an approach to the difficult problem of no-reference quality assessment of 3D meshes. In contrast to full-reference metrics, this category is considered the thorniest, since the reference version of the 3D mesh is not available. As with the full-reference metric, we assume that the visual quality of a 3D mesh is more affected when its salient areas are affected, and vice versa. We begin by segmenting the mesh into a number of Superfacets, which represent the local patches in this context. We then assign to each vertex of a Superfacet its respective saliency and roughness values. Afterward, we extract four local characteristics from each Superfacet (mean saliency, saliency standard deviation, saliency covariance, and mean roughness). Variations of these four characteristics effectively quantify the distortions that a mesh may undergo. Finally, we perform a learning step based on SVMs (Support Vector Machines) using the constructed feature vector; to move from a vectorial representation to a final quality score, we use an SVM regression scheme.
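The weighted Minkowski pooling described in this abstract can be sketched as follows; the exponent, the weights, and the order of pooling (over neighborhoods first, then over comparison functions) are illustrative assumptions, not the thesis' exact formulation:

```python
import numpy as np

def minkowski_quality_score(diff_maps, weights, p=3.0):
    """Combine per-neighborhood structural-difference maps into one score.

    diff_maps : (k, n) array -- k comparison functions evaluated on n
                corresponding local neighborhoods (values in [0, 1]).
    weights   : (k,) array  -- relative importance of each comparison.
    p         : Minkowski exponent (hypothetical value).
    """
    diff_maps = np.asarray(diff_maps, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                    # normalise the weights
    # Minkowski pooling of each function over the n neighborhoods ...
    pooled = (diff_maps ** p).mean(axis=1) ** (1.0 / p)
    # ... then a weighted Minkowski combination across the k functions.
    return float(np.dot(w, pooled ** p) ** (1.0 / p))
```

A larger exponent p emphasizes a few strong local distortions over many mild ones, which matches the perceptual intuition behind Minkowski pooling.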
Fraihat, Hossam. "Contribution à la perception visuelle multi-résolution de l'environnement 3D : application à la robotique autonome". Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1065/document.
The research work carried out within the framework of this thesis concerns the development of a system for perception and saliency detection in a 3D environment, taking advantage of a pseudo-3D representation. Our contribution and the resulting concept derive from the hypothesis that the depth of an object with respect to the robot is an important factor in the detection of saliency. On this basis, a salient-vision system for the 3D environment has been proposed, designed, and validated on a platform including a robot equipped with a pseudo-3D sensor. The implementation of the aforementioned concept and its design were first validated on the pseudo-3D KINECT vision system. In a second step, the concept and the algorithms were extended to the aforementioned robotic platform. The main contributions of this thesis can be summarized as follows: A) a state of the art of the various sensors for acquiring depth information, as well as different methods for detecting 2D and pseudo-3D saliency; B) a study of a pseudo-3D visual saliency system, building on the development of a robust algorithm for detecting salient objects; C) the implementation of a depth estimation system, in centimeters, for the Pepper robot; D) the implementation of the proposed concepts and methods on the aforementioned platform. The studies carried out and the experimental validations confirmed that the proposed approaches increase the autonomy of robots in a real 3D environment.
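The depth-dependent saliency hypothesis can be illustrated by a minimal fusion of a 2D saliency map with a sensor depth map. The exponential proximity boost and the parameter sigma below are hypothetical choices for the sketch, not the thesis' actual model:

```python
import numpy as np

def pseudo3d_saliency(saliency_2d, depth_m, sigma=1.5):
    """Fuse a 2D saliency map with a depth map (e.g. from a Kinect-like
    pseudo-3D sensor): nearer objects are boosted, following the hypothesis
    that proximity to the robot increases saliency.
    """
    s2d = np.asarray(saliency_2d, dtype=float)
    depth = np.asarray(depth_m, dtype=float)       # depth in meters
    near_boost = np.exp(-depth / sigma)            # in (0, 1], decays with distance
    fused = s2d * near_boost
    return fused / (fused.max() + 1e-12)           # renormalise to [0, 1]
```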
Habibi, Zaynab. "Vers l'assistance à l'exploration pertinente et réaliste d'environnements 3D très denses". Thesis, Amiens, 2015. http://www.theses.fr/2015AMIE0028/document.
In this thesis, we address the issue of navigation in virtual 3D environments, in particular environments made of hundreds of millions of points, which are difficult for a novice to bring under control. The complexity and wealth of detail of the 3D point cloud of the cathedral of Amiens can result in disorientation and irrelevant visualization with existing tools (interfaces). The contributions of this thesis deal with automatic or assisted camera control exploiting 2D visual information from the image and 3D information from the environment. To ensure visual relevance, we propose two methods to pilot the camera: one based on photometric entropy, and a second, representing the major contribution of this thesis, which defines and exploits a saliency-based Gaussian mixture. The visual servoing formalism is used to link the image modelling to the camera's degrees of freedom. Obstacle avoidance, fluidity of motion, and appropriate camera orientation are additional constraints taken into account in two navigation modes: local framing and global exploration. The goal of visual framing is to move the camera by maximizing the saliency-based Gaussian mixture feature, in order to reach a relevant viewpoint for visualizing an object. We test this approach on a synthetic model, a 3D point cloud model, and in a real environment with a robot. Regarding exploration, we first present an automatic camera control exploiting photometric entropy and some constraints to ensure realistic motion. The problem is solved using a hybrid, hierarchical optimization algorithm. We then present a navigation aid system that helps the user explore part or all of the 3D environment. The system is built using the redundancy formalism, taking several constraints into account. These approaches were tested on simple and complex dense 3D point clouds.
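The saliency-based Gaussian mixture can be sketched as a plain isotropic 2D mixture evaluation; fitting the mixture to detected salient regions and the visual-servoing control law are not shown, and all parameters here are illustrative:

```python
import numpy as np

def gaussian_mixture_saliency(points_2d, centers, sigmas, weights):
    """Evaluate a saliency map modelled as an isotropic 2D Gaussian mixture.

    points_2d : (n, 2) image points at which to evaluate the mixture.
    centers   : (k, 2) mixture means (e.g. centers of salient regions).
    sigmas    : (k,)   per-component standard deviations.
    weights   : (k,)   per-component weights (e.g. region saliency mass).
    """
    pts = np.asarray(points_2d, float)[:, None, :]   # (n, 1, 2)
    mu = np.asarray(centers, float)[None, :, :]      # (1, k, 2)
    sig = np.asarray(sigmas, float)[None, :]         # (1, k)
    w = np.asarray(weights, float)[None, :]          # (1, k)
    d2 = ((pts - mu) ** 2).sum(axis=2)               # squared distances (n, k)
    g = w * np.exp(-d2 / (2.0 * sig ** 2)) / (2.0 * np.pi * sig ** 2)
    return g.sum(axis=1)                             # (n,) mixture values
```

A camera controller can then drive the viewpoint so as to maximize the mixture mass falling inside the image frame.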
Wang, Junle. "From 2D to stereoscopic-3D visual saliency: revisiting psychophysical methods and computational modeling". Nantes, 2012. http://www.theses.fr/2012NANT2072.
Visual attention is one of the most important mechanisms deployed in the human visual system to reduce the amount of information that our brain needs to process. A growing amount of effort is being dedicated to the study of visual attention, particularly its computational modeling. In this thesis, we present studies focusing on several aspects of visual attention research. Our work can be divided into two main parts. The first part concerns the ground truths used in studies related to visual attention; the second part contains studies related to the modeling of visual attention for stereoscopic-3D (S-3D) viewing conditions. In the first part, our work starts by assessing the reliability of fixation density maps (FDM) from different eye-tracking databases. We then quantitatively identify the similarities and differences between fixation density maps and visual importance maps, which have been two widely used ground truths for attention-related applications. Next, to address the lack of ground truth in the 3D visual attention modeling community, we conduct a binocular eye-tracking experiment to create a new eye-tracking database for S-3D images. In the second part, we start by examining the impact of depth on visual attention in S-3D viewing conditions. We first introduce a so-called "depth bias" in the viewing of synthetic S-3D content on planar stereoscopic displays. We then extend our study from synthetic stimuli to natural-content S-3D images. We propose a depth-saliency-based model of 3D visual attention, which relies on the depth contrast of the scene. Two different ways of applying depth information in S-3D visual attention models are also compared in our study. Next, we study the difference in center bias between 2D and S-3D viewing conditions, and further integrate the center bias into S-3D visual attention modeling.
Finally, based on the assumption that visual attention can be used, in combination with blur, to improve the Quality of Experience of 3D-TV, we study the influence of blur on depth perception and the relationship between blur and binocular disparity.
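The center-bias term discussed in this abstract is commonly modeled as an anisotropic Gaussian weighting over the image. A minimal sketch follows; the sigma values are illustrative defaults, not the values measured in the thesis:

```python
import numpy as np

def center_bias_weight(h, w, sigma_x=0.3, sigma_y=0.3):
    """Anisotropic Gaussian center-bias map for an h x w image.

    Offsets are normalised by image size, so the sigmas are expressed as
    fractions of the image width/height.
    """
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xn = (xs - (w - 1) / 2.0) / w                    # normalised x offset
    yn = (ys - (h - 1) / 2.0) / h                    # normalised y offset
    return np.exp(-(xn ** 2 / (2 * sigma_x ** 2) + yn ** 2 / (2 * sigma_y ** 2)))
```

Multiplying a raw saliency map by such a weight map is one simple way to integrate the viewing bias into a saliency model.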
Walter, Nicolas. "Détection de primitives par une approche discrète et non linéaire : application à la détection et la caractérisation de points d'intérêt dans les maillages 3D". PhD thesis, Université de Bourgogne, 2010. http://tel.archives-ouvertes.fr/tel-00808216.
Der volle Inhalt der QuelleEl, Haje Noura. „A heterogeneous data-based proposal for procedural 3D cities visualization and generalization“. Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30238.
This thesis project was born from a collaboration between the research team VORTEX / Visual Objects: from Reality to Expression (now REVA: Real Expression Artificial Life) at IRIT (Institute of Research in Computer Science of Toulouse) on the one hand, and education professionals, companies, and public entities on the other. The SCOLA collaborative project is essentially an online learning platform based on the use of serious games in schools. It helps users acquire and track predefined skills. The platform provides teachers with a new, flexible tool that creates pedagogical scenarios and personalizes student records. Several contributions were assigned to IRIT. One of these is to propose a solution for the automatic creation of 3D environments to integrate into the game scenario. This solution aims to spare 3D graphic designers from manually modeling detailed, large 3D environments, which can be very expensive and time-consuming. Various applications and prototypes have been developed to allow users to generate and visualize their own virtual world, primarily from a set of rules. There is, however, no single representation scheme for the virtual world, owing to the heterogeneity and diversity of 3D content design, especially city models. This constraint led us to base our project largely on real 3D urban data rather than custom data predefined by the game designer. Advances in computer graphics, high computing capabilities, and Web technologies have revolutionized data reconstruction and visualization techniques. These techniques are applied in a variety of areas, from video games and simulations to movies that use procedurally generated spaces and character animations. Although modern computer games do not have the same hardware and memory restrictions as older games, procedural generation is frequently used to create unique games, maps, levels, characters, or other random facets on each playthrough.
Currently, the trend is shifting towards GIS (Geographical Information Systems) to create urban worlds, especially after their successful deployment around the world in support of many areas of application. GIS are more specifically dedicated to applications such as simulation, disaster management, and urban planning, with more limited use in games; for example, the latest version of the game "Minecraft" offers maps built from real-world city geodata. [...]
El Sayed, Abdul Rahman. "Traitement des objets 3D et images par les méthodes numériques sur graphes". Thesis, Normandie, 2018. http://www.theses.fr/2018NORMLH19/document.
Skin detection involves detecting the pixels corresponding to human skin in a color image. Faces constitute an important category of stimulus because of the wealth of information they convey: before recognizing any person, it is essential to locate and recognize their face. Most security and biometrics applications rely on the detection of skin regions, for example face detection, filtering of adult 3D objects, and gesture recognition. In addition, saliency detection on 3D meshes is an important preprocessing phase for many computer vision applications. 3D segmentation based on salient regions has been widely used in computer vision applications such as 3D shape matching, object alignment, 3D point-to-point smoothing, web image search, content-based image indexing, video segmentation, and face detection and recognition. Skin detection is a very difficult task for various reasons, generally related to the variability of the shape and color to be detected (different hues from one person to another, different orientations and sizes, lighting conditions), especially for images from the web captured under varying light conditions. There are several known approaches to skin detection: approaches based on geometry and feature extraction, motion-based approaches (background subtraction, difference between two consecutive images, optical flow computation), and color-based approaches. In this thesis, we propose numerical optimization methods for the detection of skin color and of salient regions on 3D meshes and 3D point clouds using a weighted graph. Based on these methods, we provide 3D face detection approaches using linear programming and data mining. We also adapt the proposed methods to the problems of simplifying 3D point clouds and matching 3D objects, and show the robustness and efficiency of our methods through various experimental results.
Finally, we show the stability and robustness of our methods with respect to noise.
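The weighted graphs underlying such methods can be sketched as a k-nearest-neighbour graph over a 3D point cloud with Gaussian edge weights; this is a brute-force O(n²) construction for illustration only, and the thesis' actual graph construction may differ:

```python
import numpy as np

def knn_weighted_graph(points, k=3, sigma=1.0):
    """Build a symmetric k-NN adjacency matrix with Gaussian edge weights
    over a 3D point cloud -- the kind of weighted graph on which numerical
    methods for saliency or skin detection can then operate.
    """
    pts = np.asarray(points, float)
    n = len(pts)
    # All pairwise squared distances (brute force, fine for small clouds).
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]           # skip the point itself
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2.0 * sigma ** 2))
    return np.maximum(W, W.T)                       # symmetrise the graph
```

For large clouds, a spatial index (e.g. a k-d tree) would replace the dense distance matrix.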
Ben Salah, Imeen. "Extraction d'un graphe de navigabilité à partir d'un nuage de points 3D enrichis". Thesis, Normandie, 2019. http://www.theses.fr/2019NORMR070/document.
Cameras have become increasingly common in vehicles, smartphones, and advanced driver assistance systems. The areas of application of these cameras in intelligent transportation systems are becoming more and more varied: pedestrian detection, lane-crossing detection, navigation, and so on. Vision-based navigation has reached a certain maturity in recent years through the use of advanced technologies. Vision-based navigation systems have the considerable advantage of directly using the visual information already existing in the environment, without having to adapt any element of the infrastructure. In addition, unlike systems using GPS, they can be used outdoors and indoors without any loss of precision, which guarantees the superiority of these computer-vision-based systems. A major area of research currently focuses on mapping, an essential step for navigation. This step raises a substantial memory management problem for such systems because of the huge amount of information collected by each sensor. Indeed, the memory space required to hold the map of a small city is measured in tens of gigabytes, or even thousands when one wants to cover large spaces. This makes it impossible to integrate such a map into a mobile system such as a smartphone, a camera embedded in a vehicle, or a robot. The challenge is to develop new algorithms that minimize the memory needed to operate a navigation system using only computer vision. It is in this context that our project develops a new system able to summarize a 3D map resulting from the visual information collected by several sensors. The summary is a set of spherical views that maintain the same level of visibility in all directions, while guaranteeing, at a lower cost, a good level of precision and speed during navigation.
The summary map of the environment will contain geometric, photometric, and semantic information.
Fouquier, Geoffroy. "Optimisation de séquences de segmentation combinant modèle structurel et focalisation de l'attention visuelle : application à la reconnaissance de structures cérébrales dans des images 3D". PhD thesis, Télécom ParisTech, 2010. http://pastel.archives-ouvertes.fr/pastel-00006074.