A selection of scholarly literature on the topic "Invariant representation learning"
Consult the lists of relevant articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Invariant representation learning".
Journal articles on the topic "Invariant representation learning"
Zhu, Zheng-Mao, Shengyi Jiang, Yu-Ren Liu, Yang Yu, and Kun Zhang. "Invariant Action Effect Model for Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (June 28, 2022): 9260–68. http://dx.doi.org/10.1609/aaai.v36i8.20913.
Shui, Changjian, Boyu Wang, and Christian Gagné. "On the benefits of representation regularization in invariance based domain generalization." Machine Learning 111, no. 3 (January 1, 2022): 895–915. http://dx.doi.org/10.1007/s10994-021-06080-w.
Hyun, Jaeguk, ChanYong Lee, Hoseong Kim, Hyunjung Yoo, and Eunjin Koh. "Learning Domain Invariant Representation via Self-Regularization." Journal of the Korea Institute of Military Science and Technology 24, no. 4 (August 5, 2021): 382–91. http://dx.doi.org/10.9766/kimst.2021.24.4.382.
Aggarwal, Karan, Shafiq Joty, Luis Fernandez-Luque, and Jaideep Srivastava. "Adversarial Unsupervised Representation Learning for Activity Time-Series." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 834–41. http://dx.doi.org/10.1609/aaai.v33i01.3301834.
Wu, Yue, Hongfu Liu, Jun Li, and Yun Fu. "Improving face representation learning with center invariant loss." Image and Vision Computing 79 (November 2018): 123–32. http://dx.doi.org/10.1016/j.imavis.2018.09.010.
Byrne, Patrick, and Suzanna Becker. "A Principle for Learning Egocentric-Allocentric Transformation." Neural Computation 20, no. 3 (March 2008): 709–37. http://dx.doi.org/10.1162/neco.2007.10-06-361.
Xu, Qi, Liang Yao, Zhengkai Jiang, Guannan Jiang, Wenqing Chu, Wenhui Han, Wei Zhang, Chengjie Wang, and Ying Tai. "DIRL: Domain-Invariant Representation Learning for Generalizable Semantic Segmentation." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 3 (June 28, 2022): 2884–92. http://dx.doi.org/10.1609/aaai.v36i3.20193.
Qin, Cao, Yunzhou Zhang, Yan Liu, Sonya Coleman, Dermot Kerr, and Guanghao Lv. "Appearance-invariant place recognition by adversarially learning disentangled representation." Robotics and Autonomous Systems 131 (September 2020): 103561. http://dx.doi.org/10.1016/j.robot.2020.103561.
Liang, Sen, Zhi-ze Zhou, Yu-dong Guo, Xuan Gao, Ju-yong Zhang, and Hu-jun Bao. "Facial landmark disentangled network with variational autoencoder." Applied Mathematics-A Journal of Chinese Universities 37, no. 2 (June 2022): 290–305. http://dx.doi.org/10.1007/s11766-022-4589-0.
Bradski, Gary, Gail A. Carpenter, and Stephen Grossberg. "Working Memory Networks for Learning Temporal Order with Application to Three-Dimensional Visual Object Recognition." Neural Computation 4, no. 2 (March 1992): 270–86. http://dx.doi.org/10.1162/neco.1992.4.2.270.
Повний текст джерелаДисертації з теми "Invariant representation learning"
Li, Nuo Ph D. Massachusetts Institute of Technology. "Unsupervised learning of invariant object representation in primate visual cortex." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/65288.
Повний текст джерелаCataloged from PDF version of thesis.
Includes bibliographical references.
Visual object recognition (categorization and identification) is one of the most fundamental cognitive functions for our survival. Our visual system has the remarkable ability to convey to us visual object and category information in a manner that is largely tolerant ("invariant") to the exact position, size, pose of the object, illumination, and clutter. The ventral visual stream in non-human primates has solved this problem. At the highest stage of the visual hierarchy, the inferior temporal cortex (IT), neurons have selectivity for objects and maintain that selectivity across variations in the images. A reasonably sized population of these tolerant neurons can support object recognition. However, we do not yet understand how IT neurons construct this neuronal tolerance. The aim of this thesis is to tackle this question and to examine the hypothesis that the ventral visual stream may leverage experience to build its neuronal tolerance. One potentially powerful idea is that time can act as an implicit teacher, in that each object's identity tends to remain temporally stable, so different retinal images of the same object are temporally contiguous. In theory, the ventral stream could take advantage of this natural tendency and learn to associate together the neuronal representations of temporally contiguous retinal images to yield tolerant object selectivity in IT cortex. In this thesis, I report neuronal support for this hypothesis in IT of non-human primates. First, targeted alteration of temporally contiguous experience with object images at different retinal positions rapidly reshaped IT neurons' position tolerance. Second, similar temporal contiguity manipulation of experience with object images at different sizes similarly reshaped IT size tolerance. These experience-induced effects were similar in magnitude, large in size, and grew gradually stronger with increasing visual experience.
Taken together, these studies show that unsupervised, temporally contiguous experience can reshape and build at least two types of IT tolerance, and that it can do so under a wide range of spatiotemporal regimes encountered during natural visual exploration. These results suggest that the ventral visual stream uses temporally contiguous visual experience, via a general unsupervised tolerance learning (UTL) mechanism, to build its invariant object representation.
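The "time as an implicit teacher" idea summarized above has a classic computational counterpart in Földiák-style trace-rule learning. The sketch below is a minimal toy construction of my own under stated assumptions (two competing units, one-hot views, a trace reset between objects, a small symmetry-breaking bias), not a reproduction of the thesis's experiments: because views of the same object arrive in a temporally contiguous block, a decaying trace of recent output activity binds all of them to the same unit.

```python
import numpy as np

# Toy trace-rule sketch of temporal-contiguity ("unsupervised tolerance")
# learning. Object 0 appears as views e0-e2, object 1 as views e3-e5,
# each shown in a temporally contiguous block.
dim, n_views = 6, 3
views = np.eye(dim)

W = np.full((2, dim), 0.1)         # two competing output units
W[0, 0] += 0.05                    # small symmetry-breaking bias (toy assumption)
W[1, 3] += 0.05
alpha, delta = 0.5, 0.5            # learning rate, trace update rate
trace = np.zeros(2)

for epoch in range(20):
    for obj in (0, 1):
        trace[:] = 0.0             # simplification: reset the trace between objects
        for v in range(n_views):
            x = views[obj * n_views + v]
            y = np.zeros(2)
            y[np.argmax(W @ x + trace)] = 1.0       # competition biased by the trace
            trace = (1 - delta) * trace + delta * y
            W += alpha * trace[:, None] * (x - W)   # trace-modulated Hebbian step

# Each unit now responds to all views of one object, i.e. a "tolerant" unit.
winners = [int(np.argmax(W @ views[i])) for i in range(dim)]
print(winners)  # -> [0, 0, 0, 1, 1, 1]
```

Presenting the same views in a temporally scrambled order removes the signal the trace exploits, which mirrors the logic of the temporal-contiguity manipulations described in the abstract.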
Lu, Danni. "Representation Learning Based Causal Inference in Observational Studies." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/102426.
Повний текст джерелаDoctor of Philosophy
Reasoning about cause and effect is an innate human ability. While the drive to understand cause and effect is instinctive, the rigorous reasoning process is usually trained through the observation of countless trials and failures. In this dissertation, we embark on a journey to explore various principles and novel statistical approaches for causal inference in observational studies. Throughout the dissertation, we focus on causal effect estimation, which answers questions like "what if" and "what could have happened". The causal effect of a treatment is measured by comparing the outcomes corresponding to different treatment levels of the same unit, e.g. "what if the unit is treated instead of not treated?". The challenge lies in the fact that i) a unit only receives one treatment at a time, so it is impossible to directly compare outcomes of different treatment levels; ii) comparing outcomes across different units may involve bias due to confounding, as the treatment assignment potentially follows a systematic mechanism. Therefore, deconfounding constitutes the main hurdle in estimating causal effects. This dissertation presents two parallel principles of deconfounding: i) balancing, i.e., comparing differences under similar conditions; ii) contrasting, i.e., extracting invariance under heterogeneous conditions. Chapters 2 and 3 explore causal effects through balancing: the former systematically reviews a classical propensity score weighting approach in a conventional data setting, and the latter presents a novel generative Bayesian framework named Balancing Variational Neural Inference of Causal Effects (BV-NICE) for high-dimensional, complex, and noisy observational data. It incorporates the advanced deep learning techniques of representation learning, adversarial learning, and variational inference. The robustness and effectiveness of the proposed framework are demonstrated through an extensive set of experiments.
Chapter 4 extracts causal effects through contrasting, emphasizing that ascertaining stability is key to causality. A novel causal effect estimation procedure called Risk Invariant Causal Estimation (RICE) is proposed that leverages observed data disparities to enable the identification of stable causal effects. The improved generalizability of RICE is demonstrated on synthetic data with different structures, compared with state-of-the-art models. In summary, this dissertation presents a flexible causal inference framework that acknowledges data uncertainties and heterogeneities. By promoting two different aspects of causal principles and integrating advanced deep learning techniques, the proposed framework shows improved balance for complex covariate interactions, enhanced robustness to unobservable latent confounders, and better generalizability to novel populations.
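To make the "balancing" principle concrete, here is a small simulated example of the classical propensity score weighting approach that Chapter 2 reviews. The numbers and variable names are mine, and this is a deliberately simple stand-in for BV-NICE (which handles high-dimensional data with deep networks): a naive comparison of treated and untreated outcomes is biased by a confounder, while inverse propensity weighting recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

z = rng.integers(0, 2, n)                                        # binary confounder
t = (rng.random(n) < np.where(z == 1, 0.8, 0.2)).astype(float)   # confounded treatment
y = 1.0 * t + 2.0 * z + rng.normal(0, 0.1, n)                    # true causal effect = 1.0

# Naive contrast is biased: treated units disproportionately have z = 1.
naive = y[t == 1].mean() - y[t == 0].mean()                      # ~2.2

# Balancing: estimate the propensity e(z) = P(t = 1 | z) per stratum,
# then reweight so treated and control groups become comparable.
e = np.where(z == 1, t[z == 1].mean(), t[z == 0].mean())
ipw = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))        # ~1.0

print(round(naive, 2), round(ipw, 2))
```

The same reweighting logic underlies more elaborate balancing estimators; what changes in the high-dimensional setting is how the propensity (or a balanced representation) is learned.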
Woodbury, Nathan Scott. "Representation and Reconstruction of Linear, Time-Invariant Networks." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7402.
Повний текст джерелаDupuy, Eric. "Construction d’une notion scientifique et invariant : le cas d'élèves de l'enseignement primaire." Thesis, Bordeaux 2, 2009. http://www.theses.fr/2009BOR21652/document.
Повний текст джерелаThe purpose of this dissertation is to study how scientific conceptions are constructed in the course of experimental activities in physical sciences by young children at school. The study is based on three principal hypotheses: a) The formation of concepts and notions depends on invariant elements. b) The elaboration of thought results from personal reflections, actions and exchanges all anchored in a social dynamic process. c) Representations reveal and organise modes of thinking and their actualisation. In the first stage, the dissertation focuses on the formation of the notion of concept: from the evidencing of invariants to a stable conceptual architecture. Next, it presents the questions raised by the notion of learning and the expected achievement of the learner’s autonomy. Then, it develops a theory of representation, considering the question of the constitution and realisation of knowledge. In a second stage, the dissertation conducts its experimentations within the framework of an observation of classroom situations, the conversion of concrete situations into interpretable data being based on the phenomenological hypothesis from the point of view of constructivist epistemology. One situation refers to the theme of shade, the other to that of electricity: both evidence a complex process of cognitive elaboration, giving rise to conceptions based on a set of invariants. The representations thus reveal and structure the processes of thought. While « childish » items (R1) prove to be numerous, there also often emerge « rationalising » items (R2), either image-based or resting on internal dynamics. Finally, the dissertation demonstrates, in a still empirical way, how certain item combinations evince, so to speak before our very eyes, the child’s process of thinking in action — i.e. « enaction » in the Varela sense of the word
Tacchetti, Andrea. "Learning invariant representations of actions and faces." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113935.
Повний текст джерелаThis electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 125-139).
Recognizing other people and their actions from visual input is a crucial aspect of human perception that allows individuals to respond to social cues. Humans effortlessly identify familiar faces and are able to make fine distinctions between others' behaviors, despite transformations, like changes in viewpoint, lighting or facial expression, that substantially alter the appearance of a visual scene. The ability to generalize across these complex transformations is a hallmark of human visual intelligence, and the neural mechanisms supporting it have been the subject of wide-ranging investigation in systems and computational neuroscience. However, advances in understanding the neural machinery of visual perception have not always translated into precise accounts of the computational principles dictating which representations of sensory input the human visual system has learned to compute, nor of how our visual system acquires the information necessary to support this learning process. Here we present results in support of the hypothesis that invariant discrimination and time continuity might fill these gaps. In particular, we use magnetoencephalography decoding and a dataset of well-controlled, naturalistic videos to study invariant action recognition, and find that representations of action sequences that support invariant recognition can be measured in the human brain. Moreover, we establish a direct link between how well artificial video representations support invariant action recognition and the extent to which they match neural correlation patterns. Finally, we show that representations of visual input that are robust to changes in appearance can be learned by exploiting time continuity in video sequences.
Taken as a whole, our results suggest that supporting invariant discrimination tasks is the computational principle dictating which representations of sensory input are computed by human visual cortex, and that time continuity in visual scenes is sufficient to learn such representations.
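A common way to operationalize "how well a representation supports invariant recognition" is cross-condition decoding: train a simple classifier under one transformation condition and test it under another. The synthetic sketch below is my own construction (not the MEG analysis above); it contrasts a representation that discards view-specific nuisance with one dominated by it.

```python
import numpy as np

rng = np.random.default_rng(1)
n_classes, d_nui = 10, 10

sig = 3.0 * np.eye(n_classes)                          # class-identity signal
nui = rng.normal(0, 5.0, size=(n_classes, 2, d_nui))   # nuisance per (class, view)

def rep(c, view, invariant):
    """Representation of class c seen from a given view."""
    nuisance = np.zeros(d_nui) if invariant else nui[c, view]
    return np.concatenate([sig[c], nuisance])

def cross_view_accuracy(invariant):
    # Nearest-centroid classifier fit on view 0, tested on unseen view 1.
    centroids = np.stack([rep(c, 0, invariant) for c in range(n_classes)])
    preds = [int(np.argmin(((centroids - rep(c, 1, invariant)) ** 2).sum(axis=1)))
             for c in range(n_classes)]
    return float(np.mean([p == c for c, p in enumerate(preds)]))

acc_invariant = cross_view_accuracy(True)    # generalizes across views
acc_entangled = cross_view_accuracy(False)   # fooled by view-specific nuisance
print(acc_invariant, acc_entangled)
```

The gap between the two accuracies is a crude score of invariance, analogous in spirit to asking whether a representation supports recognition across changes in viewpoint.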
Vedaldi, Andrea. "Invariant representations and learning for computer vision." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1676977531&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.
Повний текст джерелаEvans, Benjamin D. "Learning transformation-invariant visual representations in spiking neural networks." Thesis, University of Oxford, 2012. https://ora.ox.ac.uk/objects/uuid:15bdf771-de28-400e-a1a7-82228c7f01e4.
Повний текст джерелаMorère, Olivier André Luc. "Deep learning compact and invariant image representations for instance retrieval." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066406.
Повний текст джерелаImage instance retrieval is the problem of finding an object instance present in a query image from a database of images. Also referred to as particular object retrieval, this problem typically entails determining with high precision whether the retrieved image contains the same object as the query image. Scale, rotation and orientation changes between query and database objects and background clutter pose significant challenges for this problem. State-of-the-art image instance retrieval pipelines consist of two major steps: first, a subset of images similar to the query are retrieved from the database, and second, Geometric Consistency Checks (GCC) are applied to select the relevant images from the subset with high precision. The first step is based on comparison of global image descriptors: high-dimensional vectors with up to tens of thousands of dimensions rep- resenting the image data. The second step is computationally highly complex and can only be applied to hundreds or thousands of images in practical applications. More discriminative global descriptors result in relevant images being more highly ranked, resulting in fewer images that need to be compared pairwise with GCC. As a result, better global descriptors are key to improving retrieval performance and have been the object of much recent interest. Furthermore, fast searches in large databases of millions or even billions of images requires the global descriptors to be compressed into compact representations. This thesis will focus on how to achieve extremely compact global descriptor representations for large-scale image instance retrieval. After introducing background concepts about supervised neural networks, Restricted Boltzmann Machine (RBM) and deep learning in Chapter 2, Chapter 3 will present the design principles and recent work for the Convolutional Neural Networks (CNN), which recently became the method of choice for large-scale image classification tasks. 
Next, an original multistage approach for fusing the outputs of multiple CNNs is proposed. Submitted as part of the ILSVRC 2014 challenge, results show that this approach can significantly improve classification results. The promising performance of CNNs is largely due to their capability to learn appropriate high-level visual representations from the data. Inspired by a stream of recent works showing that the representations learnt on one particular classification task can transfer well to other classification tasks, subsequent chapters will focus on the transferability of representations learnt by CNNs to image instance retrieval…
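The first retrieval stage described above can be sketched in a few lines. The toy example below is my own construction, with random vectors standing in for learned global descriptors: rank the database by descriptor similarity and shortlist the top-k candidates for the expensive geometric consistency checks. As a crude stand-in for descriptor compression, the same ranking is repeated with sign-binarized codes compared by Hamming distance.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_db, k = 128, 1000, 10
db = rng.normal(size=(n_db, d))               # stand-in global descriptors
query = db[42] + 0.1 * rng.normal(size=d)     # same instance, mild perturbation

# Stage 1a: cosine similarity on L2-normalized float descriptors.
dbn = db / np.linalg.norm(db, axis=1, keepdims=True)
qn = query / np.linalg.norm(query)
shortlist = np.argsort(-(dbn @ qn))[:k]       # candidates passed on to GCC

# Stage 1b: compact binary codes, compared by Hamming distance.
db_bits, q_bits = db > 0, query > 0
hamming = (db_bits != q_bits).sum(axis=1)
shortlist_bin = np.argsort(hamming)[:k]

print(42 in shortlist, 42 in shortlist_bin)   # True True
```

The binary variant cuts each descriptor to one bit per dimension; real compact-descriptor pipelines use learned hashing or quantization, but the search structure is the same.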
Li, Muhua 1973. "Learning invariant neuronal representations for objects across visual-related self-actions." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=85565.
Повний текст джерелаIn contrast to the bulk of previous research work on the learning of invariance that focuses on the pure bottom-up visual information, we incorporate visual-related self-action signals such as commands for eye, head or body movements, to actively collect the changing visual information and gate the learning process. This helps neural networks learn certain degrees of invariance in an efficient way. We describe a method that can produce a network with invariance to changes in visual input caused by eye movements and covert attention shifts. Training of the network is controlled by signals associated with eye movements and covert attention shifting. A temporal perceptual stability constraint is used to drive the output of the network towards remaining constant across temporal sequences of saccadic motions and covert attention shifts. We use a four-layer neural network model to perform the position-invariant extraction of local features and temporal integration of invariant presentations of local features. The model is further extended to handle viewpoint invariance over eye, head, and/or body movements. We also study cases of multiple features instead of single features in the retinal images, which need a self-organized system to learn over a set of feature classes. A modified saliency map mechanism with spatial constraint is employed to assure that attention stays as much as possible on the same targeted object in a multiple-object scene during the first few shifts.
We present results on both simulated data and real images, to demonstrate that our network can acquire invariant neuronal representations, such as position and attention shift invariance. We also demonstrate that our method performs well in realistic situations in which the temporal sequence of input data is not smooth, situations in which earlier approaches have difficulty.
Hocke, Jens [Verfasser]. "Representation learning : from feature weighting to invariance / Jens Hocke." Lübeck : Zentrale Hochschulbibliothek Lübeck, 2017. http://d-nb.info/1125057130/34.
Повний текст джерелаКниги з теми "Invariant representation learning"
Visual Cortex and Deep Networks: Learning Invariant Representations. The MIT Press, 2016.
Sejnowski, Terrence J., Tomaso A. Poggio, and Fabio Anselmi. Visual Cortex and Deep Networks: Learning Invariant Representations. MIT Press, 2016.
Cheng, Patricia W., and Hongjing Lu. Causal Invariance as an Essential Constraint for Creating a Causal Representation of the World. Edited by Michael R. Waldmann. Oxford University Press, 2017. http://dx.doi.org/10.1093/oxfordhb/9780199399550.013.9.
Book chapters on the topic "Invariant representation learning"
Shen, Weichao, Yuwei Wu, and Yunde Jia. "Temporal Invariant Factor Disentangled Model for Representation Learning." In Pattern Recognition and Computer Vision, 391–402. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31723-2_33.
Qin, Shizheng, Kangzheng Gu, Lecheng Wang, Lizhe Qi, and Wenqiang Zhang. "Learning Camera-Invariant Representation for Person Re-identification." In Lecture Notes in Computer Science, 125–37. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-30484-3_11.
Aritake, Toshimitsu, and Noboru Murata. "Learning Scale and Shift-Invariant Dictionary for Sparse Representation." In Machine Learning, Optimization, and Data Science, 472–83. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-37599-7_39.
Li, Muhua, and James J. Clark. "Learning of Position-Invariant Object Representation Across Attention Shifts." In Lecture Notes in Computer Science, 57–70. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/978-3-540-30572-9_5.
Shtrosberg, Aviad, Jesus Villalba, Najim Dehak, Azaria Cohen, and Bar Ben-Yair. "Invariant Representation Learning for Robust Far-Field Speaker Recognition." In Statistical Language and Speech Processing, 97–110. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-89579-2_9.
Huang, Junqiang, Xiangwen Kong, and Xiangyu Zhang. "Revisiting the Critical Factors of Augmentation-Invariant Representation Learning." In Lecture Notes in Computer Science, 42–58. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19821-2_3.
Bouajjani, Ahmed, Wael-Amine Boutglay, and Peter Habermehl. "Data-driven Numerical Invariant Synthesis with Automatic Generation of Attributes." In Computer Aided Verification, 282–303. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-13185-1_14.
Iosifidis, Alexandros, Anastasios Tefas, Nikolaos Nikolaidis, and Ioannis Pitas. "Learning Human Identity Using View-Invariant Multi-view Movement Representation." In Lecture Notes in Computer Science, 217–26. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-19530-3_20.
Ghimire, Sandesh, Satyananda Kashyap, Joy T. Wu, Alexandros Karargyris, and Mehdi Moradi. "Learning Invariant Feature Representation to Improve Generalization Across Chest X-Ray Datasets." In Machine Learning in Medical Imaging, 644–53. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-59861-7_65.
Zhao, Qing, Huimin Ma, Ruiqi Lu, Yanxian Chen, and Dong Li. "MVAD-Net: Learning View-Aware and Domain-Invariant Representation for Baggage Re-identification." In Pattern Recognition and Computer Vision, 142–53. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88004-0_12.
Conference papers on the topic "Invariant representation learning"
Du, Xiaoyu, Zike Wu, Fuli Feng, Xiangnan He, and Jinhui Tang. "Invariant Representation Learning for Multimedia Recommendation." In MM '22: The 30th ACM International Conference on Multimedia. New York, NY, USA: ACM, 2022. http://dx.doi.org/10.1145/3503161.3548405.
Du, Wenchao, Hu Chen, and Hongyu Yang. "Learning Invariant Representation for Unsupervised Image Restoration." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020. http://dx.doi.org/10.1109/cvpr42600.2020.01449.
Li, Yi, Cornelia Fermuller, Yiannis Aloimonos, and Hui Ji. "Learning shift-invariant sparse representation of actions." In 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2010. http://dx.doi.org/10.1109/cvpr.2010.5539977.
Li, Haoqi, Ming Tu, Jing Huang, Shrikanth Narayanan, and Panayiotis Georgiou. "Speaker-Invariant Affective Representation Learning via Adversarial Training." In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. http://dx.doi.org/10.1109/icassp40776.2020.9054580.
Ma, Chao, Xiaokang Yang, Chongyang Zhang, and Ming-Hsuan Yang. "Learning a temporally invariant representation for visual tracking." In 2015 IEEE International Conference on Image Processing (ICIP). IEEE, 2015. http://dx.doi.org/10.1109/icip.2015.7350921.
Rayatdoost, Soheil, Yufeng Yin, David Rudrauf, and Mohammad Soleymani. "Subject-Invariant EEG Representation Learning for Emotion Recognition." In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9414496.
Li, Zongmin, Yupeng Zhang, and Yun Bai. "Geometric Invariant Representation Learning for 3D Point Cloud." In 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2021. http://dx.doi.org/10.1109/ictai52525.2021.00235.
Tran, Luan, Xi Yin, and Xiaoming Liu. "Disentangled Representation Learning GAN for Pose-Invariant Face Recognition." In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2017. http://dx.doi.org/10.1109/cvpr.2017.141.
Chen, Jiawei, Janusz Konrad, and Prakash Ishwar. "A Cyclically-Trained Adversarial Network for Invariant Representation Learning." In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, 2020. http://dx.doi.org/10.1109/cvprw50498.2020.00399.
Jeong, Seong-Yun, Ho-Joong Kim, Myeong-Seok Oh, Gun-Hee Lee, and Seong-Whan Lee. "Temporal-Invariant Video Representation Learning with Dynamic Temporal Resolutions." In 2022 18th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). IEEE, 2022. http://dx.doi.org/10.1109/avss56176.2022.9959310.