A selection of scholarly literature on the topic "Scene Graph Generation"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Browse the lists of relevant articles, books, dissertations, conference papers, and other scholarly sources on the topic "Scene Graph Generation".

Next to each work in the list you will find an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, whenever such data are available in the metadata.

Journal articles on the topic "Scene Graph Generation"

1

Khademi, Mahmoud, and Oliver Schulte. "Deep Generative Probabilistic Graph Neural Networks for Scene Graph Generation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 07 (April 3, 2020): 11237–45. http://dx.doi.org/10.1609/aaai.v34i07.6783.

Full text of the source
Abstract:
We propose a new algorithm, called Deep Generative Probabilistic Graph Neural Networks (DG-PGNN), to generate a scene graph for an image. The input to DG-PGNN is an image, together with a set of region-grounded captions and object bounding-box proposals for the image. To generate the scene graph, DG-PGNN constructs and updates a new model, called a Probabilistic Graph Network (PGN). A PGN can be thought of as a scene graph with uncertainty: it represents each node and each edge by a CNN feature vector and defines a probability mass function (PMF) for node-type (object category) of each node and edge-type (predicate class) of each edge. The DG-PGNN sequentially adds a new node to the current PGN by learning the optimal ordering in a Deep Q-learning framework, where states are partial PGNs, actions choose a new node, and rewards are defined based on the ground-truth. After adding a node, DG-PGNN uses message passing to update the feature vectors of the current PGN by leveraging contextual relationship information, object co-occurrences, and language priors from captions. The updated features are then used to fine-tune the PMFs. Our experiments show that the proposed algorithm significantly outperforms the state-of-the-art results on the Visual Genome dataset for scene graph generation. We also show that the scene graphs constructed by DG-PGNN improve performance on the visual question answering task, for questions that need reasoning about objects and their interactions in the scene context.
Styles: APA, Harvard, Vancouver, ISO, etc.
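To make the data structure described in the abstract above more concrete, here is a minimal illustrative sketch of a "scene graph with uncertainty": each node and edge holds a feature vector and a probability mass function over types. The class names, dimensions, and readout rule are assumptions for illustration, not the DG-PGNN implementation.

```python
# Illustrative sketch only: names, dimensions, and update rules are assumptions,
# not the DG-PGNN implementation.
import numpy as np

class PGNNode:
    def __init__(self, feature, num_object_classes):
        self.feature = feature                                   # CNN feature vector for the region
        self.pmf = np.full(num_object_classes, 1.0 / num_object_classes)  # prior over object categories

class PGNEdge:
    def __init__(self, feature, num_predicates):
        self.feature = feature                                   # feature vector for the object pair
        self.pmf = np.full(num_predicates, 1.0 / num_predicates)          # prior over predicate classes

class ProbabilisticGraphNetwork:
    """A scene graph with uncertainty: PMFs over node and edge types."""
    def __init__(self):
        self.nodes, self.edges = [], {}

    def add_node(self, feature, num_object_classes):
        self.nodes.append(PGNNode(feature, num_object_classes))
        return len(self.nodes) - 1

    def add_edge(self, i, j, feature, num_predicates):
        self.edges[(i, j)] = PGNEdge(feature, num_predicates)

    def readout(self):
        # Pick the most probable category/predicate to obtain a discrete scene graph.
        objects = [int(np.argmax(n.pmf)) for n in self.nodes]
        relations = {pair: int(np.argmax(e.pmf)) for pair, e in self.edges.items()}
        return objects, relations

pgn = ProbabilisticGraphNetwork()
a = pgn.add_node(np.random.rand(512), num_object_classes=150)
b = pgn.add_node(np.random.rand(512), num_object_classes=150)
pgn.add_edge(a, b, np.random.rand(512), num_predicates=50)
print(pgn.readout())
```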
2

Hua, Tianyu, Hongdong Zheng, Yalong Bai, Wei Zhang, Xiao-Ping Zhang, and Tao Mei. "Exploiting Relationship for Complex-scene Image Generation." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 2 (May 18, 2021): 1584–92. http://dx.doi.org/10.1609/aaai.v35i2.16250.

Full text of the source
Abstract:
The significant progress on Generative Adversarial Networks (GANs) has facilitated realistic single-object image generation based on language input. However, complex-scene generation (with various interactions among multiple objects) still suffers from messy layouts and object distortions, due to diverse configurations in layouts and appearances. Prior methods are mostly object-driven and ignore their inter-relations that play a significant role in complex-scene images. This work explores relationship-aware complex-scene image generation, where multiple objects are inter-related as a scene graph. With the help of relationships, we propose three major updates in the generation framework. First, reasonable spatial layouts are inferred by jointly considering the semantics and relationships among objects. Compared to standard location regression, we show that relative scales and distances serve as a more reliable target. Second, since the relations between objects significantly influence an object's appearance, we design a relation-guided generator to generate objects reflecting their relationships. Third, a novel scene graph discriminator is proposed to guarantee the consistency between the generated image and the input scene graph. Our method tends to synthesize plausible layouts and objects, respecting the interplay of multiple objects in an image. Experimental results on the Visual Genome and HICO-DET datasets show that our proposed method significantly outperforms prior art in terms of IS and FID metrics. Based on our user study and visual inspection, our method is more effective in generating logical layout and appearance for complex scenes.
Styles: APA, Harvard, Vancouver, ISO, etc.
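The abstract above argues that relative scales and distances are a more reliable layout target than absolute location regression. The toy function below derives such relative quantities from a pair of bounding boxes; the parameterisation is an assumption for illustration and may differ from the paper's.

```python
import math

def relative_layout_targets(box_a, box_b):
    """Toy relative-layout quantities for two boxes given as (x1, y1, x2, y2).
    An assumed parameterisation for illustration, not the paper's exact targets."""
    wa, ha = box_a[2] - box_a[0], box_a[3] - box_a[1]
    wb, hb = box_b[2] - box_b[0], box_b[3] - box_b[1]
    cxa, cya = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    cxb, cyb = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    rel_scale = math.sqrt((wa * ha) / (wb * hb))                       # size of A relative to B
    rel_dist = math.hypot(cxa - cxb, cya - cyb) / math.sqrt(wb * hb)   # centre distance, normalised by B
    return rel_scale, rel_dist

print(relative_layout_targets((10, 10, 50, 60), (30, 20, 120, 140)))
```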
3

Wald, Johanna, Nassir Navab, and Federico Tombari. "Learning 3D Semantic Scene Graphs with Instance Embeddings." International Journal of Computer Vision 130, no. 3 (January 22, 2022): 630–51. http://dx.doi.org/10.1007/s11263-021-01546-9.

Full text of the source
Abstract:
A 3D scene is more than the geometry and classes of the objects it comprises. An essential aspect beyond object-level perception is the scene context, described as a dense semantic network of interconnected nodes. Scene graphs have become a common representation to encode the semantic richness of images, where nodes in the graph are object entities connected by edges, so-called relationships. Such graphs have been shown to be useful in achieving state-of-the-art performance in image captioning, visual question answering and image generation or editing. While scene graph prediction methods have so far focused on images, we propose instead a novel neural network architecture for 3D data, where the aim is to learn to regress semantic graphs from a given 3D scene. With this work, we go beyond object-level perception, by exploring relations between object entities. Our method learns instance embeddings alongside a scene segmentation and is able to predict semantics for object nodes and edges. We leverage 3DSSG, a large-scale dataset based on 3RScan that features scene graphs of changing 3D scenes. Finally, we show the effectiveness of graphs as an intermediate representation on a retrieval task.
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Bauer, Daniel. "Understanding Descriptions of Visual Scenes Using Graph Grammars." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 29, 2013): 1656–57. http://dx.doi.org/10.1609/aaai.v27i1.8498.

Full text of the source
Abstract:
Automatic generation of 3D scenes from descriptions has applications in communication, education, and entertainment, but requires deep understanding of the input text. I propose thesis work on language understanding using graph-based meaning representations that can be decomposed into primitive spatial relations. The techniques used for analyzing text and transforming it into a scene representation are based on context-free graph grammars. The thesis develops methods for semantic parsing with graphs, acquisition of graph grammars, and satisfaction of spatial and world-knowledge constraints during parsing.
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Shao, Tong, and Dapeng Oliver Wu. "Graph-LSTM with Global Attribute for Scene Graph Generation." Journal of Physics: Conference Series 2003, no. 1 (August 1, 2021): 012001. http://dx.doi.org/10.1088/1742-6596/2003/1/012001.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Lin, Bingqian, Yi Zhu, and Xiaodan Liang. "Atom correlation based graph propagation for scene graph generation." Pattern Recognition 122 (February 2022): 108300. http://dx.doi.org/10.1016/j.patcog.2021.108300.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Wang, Ruize, Zhongyu Wei, Piji Li, Qi Zhang, and Xuanjing Huang. "Storytelling from an Image Stream Using Scene Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (April 3, 2020): 9185–92. http://dx.doi.org/10.1609/aaai.v34i05.6455.

Full text of the source
Abstract:
Visual storytelling aims at generating a story from an image stream. Most existing methods tend to represent images directly with the extracted high-level features, which is not intuitive and difficult to interpret. We argue that translating each image into a graph-based semantic representation, i.e., a scene graph, which explicitly encodes the objects and relationships detected within the image, would benefit representing and describing images. To this end, we propose a novel graph-based architecture for visual storytelling by modeling the two-level relationships on scene graphs. In particular, on the within-image level, we employ a Graph Convolution Network (GCN) to enrich local fine-grained region representations of objects on scene graphs. To further model the interaction among images, on the cross-image level, a Temporal Convolution Network (TCN) is utilized to refine the region representations along the temporal dimension. Then the relation-aware representations are fed into a Gated Recurrent Unit (GRU) with an attention mechanism for story generation. Experiments are conducted on the public visual storytelling dataset. Automatic and human evaluation results indicate that our method achieves state-of-the-art performance.
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Chen, Jin, Xiaofeng Ji, and Xinxiao Wu. "Adaptive Image-to-Video Scene Graph Generation via Knowledge Reasoning and Adversarial Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 1 (June 28, 2022): 276–84. http://dx.doi.org/10.1609/aaai.v36i1.19903.

Full text of the source
Abstract:
A scene graph of a video conveys a wealth of information about objects and their relationships in the scene, thus benefiting many downstream tasks such as video captioning and visual question answering. Existing methods of scene graph generation require large-scale training videos annotated with objects and relationships in each frame to learn a powerful model. However, such comprehensive annotation is time-consuming and labor-intensive. On the other hand, it is much easier and less costly to annotate images with scene graphs, so we investigate leveraging annotated images to facilitate training a scene graph generation model for unannotated videos, namely image-to-video scene graph generation. This task presents two challenges: 1) inferring unseen dynamic relationships in videos from static relationships in images due to the absence of motion information in images; 2) adapting objects and static relationships from images to video frames due to the domain shift between them. To address the first challenge, we exploit external commonsense knowledge to infer the unseen dynamic relationship from the temporal evolution of static relationships. We tackle the second challenge by hierarchical adversarial learning to reduce the data distribution discrepancy between images and video frames. Extensive experimental results on two benchmark video datasets demonstrate the effectiveness of our method.
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Jung, Gayoung, Jonghun Lee, and Incheol Kim. "Tracklet Pair Proposal and Context Reasoning for Video Scene Graph Generation." Sensors 21, no. 9 (May 2, 2021): 3164. http://dx.doi.org/10.3390/s21093164.

Full text of the source
Abstract:
Video scene graph generation (ViDSGG), the creation of video scene graphs that helps in deeper and better visual scene understanding, is a challenging task. Segment-based and sliding-window-based methods have been proposed to perform this task. However, they all have certain limitations. This study proposes a novel deep neural network model called VSGG-Net for video scene graph generation. The model uses a sliding window scheme to detect object tracklets of various lengths throughout the entire video. In particular, the proposed model presents a new tracklet pair proposal method that evaluates the relatedness of object tracklet pairs using a pretrained neural network and statistical information. To effectively utilize the spatio-temporal context, low-level visual context reasoning is performed using a spatio-temporal context graph and a graph neural network as well as high-level semantic context reasoning. To improve the detection performance for sparse relationships, the proposed model applies a class weighting technique that adjusts the weight of sparse relationships to a higher level. This study demonstrates the positive effect and high performance of the proposed model through experiments using the benchmark datasets VidOR and VidVRD.
Styles: APA, Harvard, Vancouver, ISO, etc.
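The class-weighting idea mentioned in the abstract above (giving sparse relationship classes higher weight) is commonly realised with inverse-frequency or "effective number of samples" weights. The sketch below shows the generic technique; it is not necessarily VSGG-Net's exact scheme.

```python
import numpy as np

def relationship_class_weights(class_counts, beta=0.999):
    """Generic effective-number-of-samples weighting: rarer relationship classes
    receive larger weights. Illustrative only; the paper's scheme may differ."""
    counts = np.asarray(class_counts, dtype=np.float64)
    effective = (1.0 - np.power(beta, counts)) / (1.0 - beta)   # effective sample count per class
    weights = 1.0 / effective
    return weights * len(counts) / weights.sum()                # normalise so the mean weight is 1

print(relationship_class_weights([10000, 500, 20]))             # the rare class gets the largest weight
```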
10

Li, Shuohao, Min Tang, Jun Zhang, and Lincheng Jiang. "Attentive Gated Graph Neural Network for Image Scene Graph Generation." Symmetry 12, no. 4 (April 2, 2020): 511. http://dx.doi.org/10.3390/sym12040511.

Full text of the source
Abstract:
An image scene graph is a semantic structural representation which not only shows what objects are in the image, but also infers the relationships and interactions among them. Despite the recent success in object detection using deep neural networks, automatically recognizing social relations of objects in images remains a challenging task due to the significant gap between the domains of visual content and social relation. In this work, we translate the scene graph into an Attentive Gated Graph Neural Network which can propagate a message by visual relationship embedding. More specifically, nodes in gated neural networks can represent objects in the image, and edges can be regarded as relationships among objects. In this network, an attention mechanism is applied to measure the strength of the relationship between objects. It can increase the accuracy of object classification and reduce the complexity of relationship classification. Extensive experiments on the widely adopted Visual Genome Dataset show the effectiveness of the proposed method.
Styles: APA, Harvard, Vancouver, ISO, etc.
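As a rough illustration of attention-weighted message passing over a scene graph, as described in the abstract above, the sketch below computes per-neighbour attention scores and mixes the weighted messages into each node's representation. The scoring function, the simple mixing "gate", and the dimensions are simplified assumptions, not the paper's model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_message_pass(node_feats, edges, w_att, mix=0.5):
    """One round of attention-weighted message passing over a scene graph.
    node_feats: (N, D) array; edges: list of (src, dst) pairs; w_att: (2*D,) scoring vector."""
    n, _ = node_feats.shape
    updated = node_feats.copy()
    for dst in range(n):
        srcs = [s for s, t in edges if t == dst]
        if not srcs:
            continue
        scores = np.array([w_att @ np.concatenate([node_feats[s], node_feats[dst]])
                           for s in srcs])
        alpha = softmax(scores)                                  # strength of each relationship
        message = (alpha[:, None] * node_feats[srcs]).sum(axis=0)
        updated[dst] = (1 - mix) * node_feats[dst] + mix * message
    return updated

feats = np.random.rand(3, 4)
print(attentive_message_pass(feats, [(0, 2), (1, 2)], np.random.rand(8)).shape)  # (3, 4)
```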

Dissertations on the topic "Scene Graph Generation"

1

Nguyen, Duc Minh Chau. "Affordance learning for visual-semantic perception." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2021. https://ro.ecu.edu.au/theses/2443.

Full text of the source
Abstract:
Affordance Learning is linked to the study of interactions between robots and objects, including how robots perceive objects by scene understanding. This area has been popular in Psychology, which has recently come to influence Computer Vision. In this way, Computer Vision has borrowed the concept of affordance from Psychology in order to develop Visual-Semantic recognition systems, and to develop the capabilities of robots to interact with objects, in particular. However, existing systems of Affordance Learning are still limited to detecting and segmenting object affordances, which is called Affordance Segmentation. Further, these systems are not designed to develop specific abilities to reason about affordances. For example, a Visual-Semantic system, for captioning a scene, can extract information from an image, such as “a person holds a chocolate bar and eats it”, but does not highlight the affordances: “hold” and “eat”. Indeed, these affordances and others commonly appear within all aspects of life, since affordances usually connect to actions (from a linguistic view, affordances are generally known as verbs in sentences). Due to the above-mentioned limitations, this thesis aims to develop systems of Affordance Learning for Visual-Semantic Perception. These systems can be built using Deep Learning, which has been empirically shown to be efficient for performing Computer Vision tasks. There are two goals of the thesis: (1) study what are the key factors that contribute to the performance of Affordance Segmentation and (2) reason about affordances (Affordance Reasoning) based on parts of objects for Visual-Semantic Perception. In terms of the first goal, the thesis mainly investigates the feature extraction module as this is one of the earliest steps in learning to segment affordances. The thesis finds that the quality of feature extraction from images plays a vital role in improved performance of Affordance Segmentation. With regard to the second goal, the thesis infers affordances from object parts to reason about part-affordance relationships. Based on this approach, the thesis devises an Object Affordance Reasoning Network that can learn to construct relationships between affordances and object parts. As a result, reasoning about affordance becomes achievable in the generation of scene graphs of affordances and object parts. Empirical results, obtained from extensive experiments, show the potential of the system (that the thesis developed) towards Affordance Reasoning from Scene Graph Generation.
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Garrett, Austin J. "Infrastructure for modeling and inference engineering with 3D generative scene graphs." Thesis, Massachusetts Institute of Technology, 2021. https://hdl.handle.net/1721.1/130688.

Full text of the source
Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, February, 2021
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 67-68).
Recent advances in probabilistic programming have enabled the development of probabilistic generative models for visual perception using a rich abstract representation of 3D scene geometry called a scene graph. However, there remain several challenges in the practical implementation of scene graph models, including human-editable specification, visualization, priors, structure inference, hyperparameter tuning, and benchmarking. In this thesis, I describe the development of infrastructure to enable the development and research of scene graph models by researchers and practitioners. A description of a preliminary scene graph model and inference program for 3D scene structure is provided, along with an implementation in the probabilistic programming language Gen. Utilities for visualizing and understanding distributions over scene graphs are developed. Synthetic enumerative tests of the posterior and inference algorithm are conducted, and conclusions drawn for the improvement of the proposed modeling components. Finally, I collect and analyze real-world scene graph data, and use it to optimize model hyperparameters; the preliminary structure inference program is then tested in a structure prediction task with both the unoptimized and optimized models.
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Wang, Tse-Hsien, and 汪澤先. "Interactive Background Scene Generation: Controllable Animation based on Motion Graph." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/20054229214556970499.

Full text of the source
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Networking and Multimedia
Academic year 97 (ROC calendar)
In this paper, an interactive background scene generation and editing system is proposed based on an improved motion graph. By analyzing the motion of an input animation of limited length, our system can synthesize a large number of varied motions to yield a composite scene animation of unlimited length by connecting the input motion pieces through smooth transitions based on a motion graph layer, which is generated by using randomized cuts and further analysis in the time domain. The smooth transitions are obtained by searching for the best path according to specified circumstances. Finally, the result is optimized by repeatedly substituting animation subsequences. The user can interactively specify some physical constraints of the scene on keyframes, such as wind direction or velocity of flow, or even a simple path for a character to follow, and the system will automatically generate continuous and natural motion in accordance with them.
Styles: APA, Harvard, Vancouver, ISO, etc.
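The abstract above mentions connecting motion pieces through smooth transitions found by searching for the best path in a motion graph. Below is a generic sketch of such a search (Dijkstra over transition costs); the thesis's actual cost terms and constraints are not reproduced here.

```python
import heapq

def best_transition_path(transition_cost, start, goal):
    """Dijkstra search over a motion graph: nodes are motion clips, edge weights
    are transition costs (e.g. pose/velocity mismatch). Illustrative only."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nxt, cost in transition_cost.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(heap, (nd, nxt))
    path, node = [goal], goal
    while node != start:                                         # walk back along predecessors
        node = prev[node]
        path.append(node)
    return list(reversed(path))

graph = {"idle": {"sway": 0.2, "gust": 0.9}, "sway": {"gust": 0.3}, "gust": {}}
print(best_transition_path(graph, "idle", "gust"))               # ['idle', 'sway', 'gust']
```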

Books on the topic "Scene Graph Generation"

1

Coolen, A. C. C., A. Annibale, and E. S. Roberts. Introduction. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198709893.003.0001.

Full text of the source
Abstract:
This introductory chapter sets the scene for the material which follows by briefly introducing the study of networks and describing their wide scope of application. It discusses the role of well-specified random graphs in setting network science onto a firm scientific footing, emphasizing the importance of well-defined null models. Non-trivial aspects of graph generation are introduced. An important distinction is made between approaches that begin with a desired probability distribution on the final graph ensembles and approaches where the graph generation process is the main object of interest and the challenge is to analyze the expected topological properties of the generated networks. At the core of the graph generation process is the need to establish a mathematical connection between the stochastic graph generation process and the stationary probability distribution to which these processes evolve.
Styles: APA, Harvard, Vancouver, ISO, etc.
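To illustrate the "well-defined null model" notion discussed in the abstract above, the snippet below samples the simplest such model, an Erdős–Rényi random graph constrained only by its expected edge density; this is a standard construction, not text from the book.

```python
import random

def erdos_renyi(n, p, seed=0):
    """Sample an undirected Erdős–Rényi G(n, p) graph as an adjacency list."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:                                 # each edge appears independently
                adj[i].add(j)
                adj[j].add(i)
    return adj

g = erdos_renyi(100, 0.05)
mean_degree = sum(len(v) for v in g.values()) / len(g)
print(f"mean degree = {mean_degree:.2f} (expected = {0.05 * 99:.2f})")
```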

Book chapters on the topic "Scene Graph Generation"

1

Yang, Jingkang, Yi Zhe Ang, Zujin Guo, Kaiyang Zhou, Wayne Zhang, and Ziwei Liu. "Panoptic Scene Graph Generation." In Lecture Notes in Computer Science, 178–96. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19812-0_11.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Yang, Jianwei, Jiasen Lu, Stefan Lee, Dhruv Batra, and Devi Parikh. "Graph R-CNN for Scene Graph Generation." In Computer Vision – ECCV 2018, 690–706. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-01246-5_41.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Kumar, Vishal, Albert Mundu, and Satish Kumar Singh. "Scene Graph Generation with Geometric Context." In Communications in Computer and Information Science, 340–50. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-11346-8_30.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Su, Xia, Chenglin Wu, Wen Gao, and Weixin Huang. "Interior Layout Generation Based on Scene Graph and Graph Generation Model." In Design Computing and Cognition’20, 267–82. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-90625-2_15.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Khademi, Mahmoud, and Oliver Schulte. "Dynamic Gated Graph Neural Networks for Scene Graph Generation." In Computer Vision – ACCV 2018, 669–85. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-20876-9_42.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Zareian, Alireza, Zhecan Wang, Haoxuan You, and Shih-Fu Chang. "Learning Visual Commonsense for Robust Scene Graph Generation." In Computer Vision – ECCV 2020, 642–57. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58592-1_38.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Zhou, Fangbo, Huaping Liu, Xinghang Li, and Huailin Zhao. "MCTS-Based Robotic Exploration for Scene Graph Generation." In Communications in Computer and Information Science, 403–15. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9247-5_31.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Zhang, Ao, Yuan Yao, Qianyu Chen, Wei Ji, Zhiyuan Liu, Maosong Sun, and Tat-Seng Chua. "Fine-Grained Scene Graph Generation with Data Transfer." In Lecture Notes in Computer Science, 409–24. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19812-0_24.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Wang, Wenbin, Ruiping Wang, Shiguang Shan, and Xilin Chen. "Sketching Image Gist: Human-Mimetic Hierarchical Scene Graph Generation." In Computer Vision – ECCV 2020, 222–39. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58601-0_14.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Herzig, Roei, Amir Bar, Huijuan Xu, Gal Chechik, Trevor Darrell, and Amir Globerson. "Learning Canonical Representations for Scene Graph to Image Generation." In Computer Vision – ECCV 2020, 210–27. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-58574-7_13.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.

Conference papers on the topic "Scene Graph Generation"

1

Garg, Sarthak, Helisa Dhamo, Azade Farshad, Sabrina Musatian, Nassir Navab, and Federico Tombari. "Unconditional Scene Graph Generation." In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.01605.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Guo, Yuyu, Jingkuan Song, Lianli Gao, and Heng Tao Shen. "One-shot Scene Graph Generation." In MM '20: The 28th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3394171.3414025.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Liu, Hengyue, Ning Yan, Masood Mortazavi, and Bir Bhanu. "Fully Convolutional Scene Graph Generation." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.01138.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Khandelwal, Siddhesh, Mohammed Suhail, and Leonid Sigal. "Segmentation-grounded Scene Graph Generation." In 2021 IEEE/CVF International Conference on Computer Vision (ICCV). IEEE, 2021. http://dx.doi.org/10.1109/iccv48922.2021.01558.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Yu, Jing, Yuan Chai, Yujing Wang, Yue Hu, and Qi Wu. "CogTree: Cognition Tree Loss for Unbiased Scene Graph Generation." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/176.

Full text of the source
Abstract:
Scene graphs are a semantic abstraction of images that encourages visual understanding and reasoning. However, the performance of Scene Graph Generation (SGG) is unsatisfactory when faced with biased data in real-world scenarios. Conventional debiasing research mainly studies the problem from the view of balancing the data distribution or learning unbiased models and representations, ignoring the correlations among the biased classes. In this work, we analyze this problem from a novel cognition perspective: automatically building a hierarchical cognitive structure from the biased predictions and navigating that hierarchy to locate the relationships, making the tail relationships receive more attention in a coarse-to-fine mode. To this end, we propose a novel debiasing Cognition Tree (CogTree) loss for unbiased SGG. We first build a cognitive structure CogTree to organize the relationships based on the prediction of a biased SGG model. The CogTree distinguishes remarkably different relationships at first and then focuses on a small portion of easily confused ones. Then, we propose a debiasing loss specifically for this cognitive structure, which supports coarse-to-fine distinction of the correct relationships. The loss is model-agnostic and consistently boosts the performance of several state-of-the-art models. The code is available at: https://github.com/CYVincent/Scene-Graph-Transformer-CogTree.
Styles: APA, Harvard, Vancouver, ISO, etc.
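A schematic sketch of the coarse-to-fine idea described in the abstract above: predicate classes are first grouped, and the loss penalises the model for missing the correct group and then for missing the correct class within that group. The grouping and the loss form here are illustrative assumptions; the exact CogTree loss is in the linked repository.

```python
import numpy as np

def coarse_to_fine_nll(logits, target, groups):
    """Toy coarse-to-fine loss: NLL of the correct group plus NLL of the correct
    class within that group. `groups` maps a group id to a list of class indices."""
    logits = np.asarray(logits, dtype=np.float64)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    gid = next(g for g, cls in groups.items() if target in cls)  # group containing the true class
    group_mass = probs[groups[gid]].sum()
    within = probs[target] / group_mass
    return -np.log(group_mass) - np.log(within)

groups = {0: [0, 1], 1: [2, 3, 4]}                               # e.g. coarse vs. fine-grained predicates
print(coarse_to_fine_nll([2.0, 0.1, -1.0, 0.3, 0.0], target=3, groups=groups))
```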
6

Zhang, Zhichao, Junyu Dong, Qilu Zhao, Lin Qi, and Shu Zhang. "Attention LSTM for Scene Graph Generation." In 2021 6th International Conference on Image, Vision and Computing (ICIVC). IEEE, 2021. http://dx.doi.org/10.1109/icivc52351.2021.9526967.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

He, Yunqing, Tongwei Ren, Jinhui Tang, and Gangshan Wu. "Heterogeneous Learning for Scene Graph Generation." In MM '22: The 30th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3503161.3548356.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Chen, Min, Xinyu Lyu, Yuyu Guo, Jingwei Liu, Lianli Gao, and Jingkuan Song. "Multi-Scale Graph Attention Network for Scene Graph Generation." In 2022 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2022. http://dx.doi.org/10.1109/icme52920.2022.9859970.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Yu, Xiang, Ruoxin Chen, Jie Li, Jiawei Sun, Shijing Yuan, Huxiao Ji, Xinyu Lu, and Chentao Wu. "Zero-Shot Scene Graph Generation with Knowledge Graph Completion." In 2022 IEEE International Conference on Multimedia and Expo (ICME). IEEE, 2022. http://dx.doi.org/10.1109/icme52920.2022.9859944.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Tang, Kaihua, Yulei Niu, Jianqiang Huang, Jiaxin Shi, and Hanwang Zhang. "Unbiased Scene Graph Generation From Biased Training." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.00377.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.