Journal articles on the topic 'Visual transformation'

To see the other types of publications on this topic, follow the link: Visual transformation.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Visual transformation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Issers, Oxana. "Potential of Transformations in Polycode Internet Meme Within the Event-Related Context of 2020." Vestnik Volgogradskogo gosudarstvennogo universiteta. Serija 2. Jazykoznanije, no. 2 (June 2021): 26–41. http://dx.doi.org/10.15688/jvolsu2.2021.2.3.

Abstract:
The article deals with a specific polycode text that functions in the field of virtual communication – the Internet meme. The focus is on its regular transformations. Considering possible methodological approaches to the analysis of this phenomenon, the author concludes that the methodology of intertextual linguistic theory is the most effective. The essential characteristic of a meme is its transformational potential, which makes it possible to create new meanings and use this communication unit in a wide range of relevant contexts. The polycode structure of a meme determines the spectrum of its potential transformations through changes to its verbal and visual elements. The material for observation was one of the most popular memes of 2020 – "Get up, Natasha, we dropped everything". The author identifies eight meme transformations that can be considered regular: development of the visuals with the addition of a verbal element; contraction of the visuals with the same verbal element or transformation of the verbal element; reframing of the context; pragmatic transformation; intertextual transformation, including the effect of meta-communication; reduction of the image with the same visual markers; reframing with a change of verbal and visual elements; and adaptation of the meme in media texts. The last transformation indicates the successful inclusion of the Internet meme, as an element of networked conversation, in modern discursive practices.
2

Robinett, Warren, and Richard Holloway. "The Visual Display Transformation for Virtual Reality." Presence: Teleoperators and Virtual Environments 4, no. 1 (January 1995): 1–23. http://dx.doi.org/10.1162/pres.1995.4.1.1.

Abstract:
The visual display transformation for virtual reality (VR) systems is typically much more complex than the standard viewing transformation discussed in the literature for conventional computer graphics. The process can be represented as a series of transformations, some of which contain parameters that must match the physical configuration of the system hardware and the user's body. Because of the number and complexity of the transformations, a systematic approach and a thorough understanding of the mathematical models involved are essential. This paper presents a complete model for the visual display transformation for a VR system; that is, the series of transformations used to map points from object coordinates to screen coordinates. Virtual objects are typically defined in an object-centered coordinate system (CS), but must be displayed using the screen-centered CSs of the two screens of a head-mounted display (HMD). This particular algorithm for the VR display computation allows multiple users to independently change position, orientation, and scale within the virtual world, allows users to pick up and move virtual objects, uses the measurements from a head tracker to immerse the user in the virtual world, provides an adjustable eye separation for generating two stereoscopic images, uses the off-center perspective projection required by many HMDs, and compensates for the optical distortion introduced by the lenses in an HMD. The implementation of this framework as the core of the UNC VR software is described, and the values of the UNC display parameters are given. We also introduce the vector-quaternion-scalar (VQS) representation for transformations between 3D coordinate systems, which is specifically tailored to the needs of a VR system. The transformations and CSs presented comprise a complete framework for generating the computer-graphic imagery required in a typical VR system. The model presented here is deliberately abstract in order to be general purpose; thus, issues of system design and visual perception are not addressed. While the mathematical techniques involved are already well known, there are enough parameters and pitfalls that a detailed description of the entire process should be a useful tool for someone interested in implementing a VR system.
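To illustrate the kind of transformation chain this abstract describes – mapping points from object coordinates through world, head, and eye coordinates to screen space – here is a minimal sketch. It is not the authors' UNC implementation or their VQS representation; the matrices, parameter values, and the symmetric projection are illustrative assumptions (real HMDs typically need off-center projections and lens-distortion correction, as the paper notes).

```python
# Minimal sketch (illustrative assumptions only) of a VR viewing-transform chain:
# object -> world -> head -> eye -> clip coordinates, composed as 4x4 matrices.
import numpy as np

def translate(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def perspective(fov_y_deg, aspect, near, far):
    # Symmetric on-axis frustum; real HMDs usually need off-center projections
    # and distortion correction, which this sketch omits.
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

# Hypothetical per-frame parameters (in practice they come from trackers and calibration).
object_to_world = translate(0.0, 1.0, -2.0)    # where the virtual object sits
world_to_head   = translate(0.0, -1.6, 0.0)    # inverse of the tracked head pose
head_to_eye     = translate(-0.032, 0.0, 0.0)  # half of an assumed eye separation (left eye)
eye_to_clip     = perspective(90.0, 1.0, 0.01, 100.0)

point_object = np.array([0.1, 0.2, 0.0, 1.0])  # a vertex in object coordinates
clip = eye_to_clip @ head_to_eye @ world_to_head @ object_to_world @ point_object
print(clip[:3] / clip[3])                      # normalized device coordinates
```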
3

Pedersen, Pia. "Representing transformation through visual experimentation." InfoDesign - Revista Brasileira de Design da Informação 11, no. 3 (December 26, 2014): 273–90. http://dx.doi.org/10.51358/id.v11i3.275.

4

Costa, Marco. "Visual Tension." Perception 49, no. 11 (October 13, 2020): 1213–34. http://dx.doi.org/10.1177/0301006620963753.

Abstract:
Although tension perception is well investigated in the music domain, its determinants in visual displays are still largely unexplored. Furthermore, the distinctive role of tension and arousal in affect theory is still debated. The study aimed to assess how geometrical and graphical transformations of basic visual shapes can affect perceived tension and arousal. The geometrical transformations were angle amplitude, rotation, position within a frame, symmetry, verticality, angularity, size, and regularity in spacing, while the graphical transformation regarded contrast. The sample included 122 participants. Perceived tension was significantly higher in angles with small amplitude, squares that were slightly rotated and not in the upright position, the upper and right areas within a rectangle, angular shapes, high-contrasted graphical transitions, asymmetrical shapes, vertical shapes, and dot patterns with irregular spacing. Overall, there was a moderate correlation between perception of tension and perception of arousal, although in some specific features, tension exhibited a dissociation from arousal, suggesting a distinctive role of tension in affect theory.
5

Miao, Xu, and Rajesh P. N. Rao. "Learning the Lie Groups of Visual Invariance." Neural Computation 19, no. 10 (October 2007): 2665–93. http://dx.doi.org/10.1162/neco.2007.19.10.2665.

Abstract:
A fundamental problem in biological and machine vision is visual invariance: How are objects perceived to be the same despite transformations such as translations, rotations, and scaling? In this letter, we describe a new, unsupervised approach to learning invariances based on Lie group theory. Unlike traditional approaches that sacrifice information about transformations to achieve invariance, the Lie group approach explicitly models the effects of transformations in images. As a result, estimates of transformations are available for other purposes, such as pose estimation and visuomotor control. Previous approaches based on first-order Taylor series expansions of images can be regarded as special cases of the Lie group approach, which utilizes a matrix-exponential-based generative model of images and can handle arbitrarily large transformations. We present an unsupervised expectation-maximization algorithm for learning Lie transformation operators directly from image data containing examples of transformations. Our experimental results show that the Lie operators learned by the algorithm from an artificial data set containing six types of affine transformations closely match the analytically predicted affine operators. We then demonstrate that the algorithm can also recover novel transformation operators from natural image sequences. We conclude by showing that the learned operators can be used to both generate and estimate transformations in images, thereby providing a basis for achieving visual invariance.
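A minimal sketch of the matrix-exponential idea behind this Lie-group approach (an illustration under an assumed operator, not the authors' EM learning algorithm): a transformation generator G is exponentiated and applied to a signal, so that the amount of transformation is controlled by a single parameter.

```python
# Illustrative matrix-exponential (Lie group) transformation of a 1-D signal:
# transformed = expm(theta * G) @ signal, where G is a transformation generator.
# Here G is an assumed discrete-derivative operator, whose exponential approximately
# translates the signal; the paper learns such operators from image data instead.
import numpy as np
from scipy.linalg import expm

n = 32
G = np.zeros((n, n))
for i in range(n):
    G[i, (i + 1) % n] = 0.5    # central-difference approximation of d/dx
    G[i, (i - 1) % n] = -0.5

signal = np.exp(-0.5 * ((np.arange(n) - 10.0) / 2.0) ** 2)  # Gaussian bump near index 10

theta = 3.0                                # amount of transformation
transformed = expm(theta * G) @ signal     # apply the group element exp(theta * G)
print(np.argmax(signal), np.argmax(transformed))  # the bump is shifted by roughly theta samples
```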
6

Földiák, Peter. "Learning Invariance from Transformation Sequences." Neural Computation 3, no. 2 (June 1991): 194–200. http://dx.doi.org/10.1162/neco.1991.3.2.194.

Abstract:
The visual system can reliably identify objects even when the retinal image is transformed considerably by commonly occurring changes in the environment. A local learning rule is proposed, which allows a network to learn to generalize across such transformations. During the learning phase, the network is exposed to temporal sequences of patterns undergoing the transformation. An application of the algorithm is presented in which the network learns invariance to shift in retinal position. Such a principle may be involved in the development of the characteristic shift invariance property of complex cells in the primary visual cortex, and also in the development of more complicated invariance properties of neurons in higher visual areas.
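The local rule described here is commonly referred to as a trace rule. Below is a minimal, hedged sketch (the parameter values, normalization, and toy stimulus are assumptions) of how a single unit exposed to a temporal sequence of shifted patterns can develop weights that respond to all positions in the sweep, i.e., shift invariance.

```python
# Minimal sketch of a trace-style Hebbian rule (assumed parameters, illustrative):
# the unit's activity trace links patterns that follow each other in time, so weights
# grow for all shifted versions of a pattern presented as a temporal sequence.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, eta, delta = 20, 0.05, 0.2     # input size, learning rate, trace decay
w = rng.uniform(0.0, 0.1, n_inputs)      # feedforward weights of one output unit
trace = 0.0

def pattern(position):
    x = np.zeros(n_inputs)
    x[position % n_inputs] = 1.0          # a bar at a given retinal position
    return x

for epoch in range(50):
    for pos in range(5, 15):              # the bar sweeps across positions 5..14
        x = pattern(pos)
        y = float(w @ x)                  # unit activation
        trace = (1 - delta) * trace + delta * y   # temporal trace of activity
        w += eta * trace * x              # Hebbian update gated by the trace
    w /= np.linalg.norm(w)                # keep weights bounded (simple normalization)
    trace = 0.0

print(np.round(w, 2))                     # weights spread across the swept positions
```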
7

Varró, Dániel, Gergely Varró, and András Pataricza. "Designing the automatic transformation of visual languages." Science of Computer Programming 44, no. 2 (August 2002): 205–27. http://dx.doi.org/10.1016/s0167-6423(02)00039-4.

8

Roberts, Phillip. "Control and transformation in early visual culture." Early Popular Visual Culture 12, no. 2 (April 3, 2014): 83–103. http://dx.doi.org/10.1080/17460654.2014.925253.

9

Cai, Yang, Richard Stumpf, Timothy Wynne, Michelle Tomlinson, Daniel Sai Ho Chung, Xavier Boutonnier, Matthias Ihmig, Rafael Franco, and Nathaniel Bauernfeind. "Visual transformation for interactive spatiotemporal data mining." Knowledge and Information Systems 13, no. 2 (March 21, 2007): 119–42. http://dx.doi.org/10.1007/s10115-007-0075-5.

10

Avazpour, Iman, John Grundy, and Hai L. Vu. "Generating Reusable Visual Notations Using Model Transformation." International Journal of Software Engineering and Knowledge Engineering 25, no. 02 (March 2015): 277–305. http://dx.doi.org/10.1142/s0218194015400100.

Abstract:
Visual notations are a key aspect of visual languages. They provide a direct mapping between the intended information and a set of graphical symbols. Visual notations are most often implemented using the low-level syntax of programming languages, which is time-consuming, error-prone, difficult to maintain and hardly human-centric. In this paper we describe an alternative approach to generating visual notations using by-example model transformations. In our new approach, a semantic mapping between model and view is implemented using model transformations. The notations resulting from this approach can be reused by mapping a variety of input data to their model, and can be composed into different visualizations. Our approach is implemented in the CONVErT framework and has been applied to many visualization examples. Three case studies are presented in this paper: visualization of statistical charts, visualization of traffic data, and reuse of the components of a Minard's map visualization. A detailed user study of our approach for reusing notations and generating visualizations is also provided: 80% of the participants agreed that the approach was easy to use, and 87% stated that they quickly learned to use the tool support.
11

Sülzenbrück, Sandra. "The Impact of Visual Feedback Type on the Mastery of Visuo-Motor Transformations." Zeitschrift für Psychologie 220, no. 1 (January 2012): 3–9. http://dx.doi.org/10.1027/2151-2604/a000084.

Abstract:
For the effective use of modern tools, the inherent visuo-motor transformation needs to be mastered. The successful adjustment to and learning of these transformations crucially depends on practice conditions, particularly on the type of visual feedback during practice. Here, a review of empirical research exploring the influence of continuous and terminal visual feedback during practice on the mastery of visuo-motor transformations is provided. Two studies investigating the impact of the type of visual feedback on either direction-dependent visuo-motor gains or the complex visuo-motor transformation of a virtual two-sided lever are presented in more detail. The findings of these studies indicate that the continuous availability of visual feedback supports performance when closed-loop control is possible, but impairs performance when visual input is no longer available. Different approaches to explaining these performance differences due to the type of visual feedback during practice are considered. For example, these differences could reflect a process of re-optimization of motor planning in a novel environment or represent effects of the specificity of practice. Furthermore, differences in the allocation of attention during movements with terminal and continuous visual feedback could account for the observed differences.
12

Sytykh, Olga Leonidovna. "From logos to image: the trends in transformation of modern culture." Философия и культура, no. 2 (February 2021): 1–11. http://dx.doi.org/10.7256/2454-0757.2021.2.35209.

Abstract:
The subject of this study is the transformation of modern culture associated with the visual turn. Using the dialectical method and systemic analysis, the author determines the trends in the transformation of modern culture. The goal of the article is to demonstrate the essence of such changes, their direction, and their impact upon the individual and society. The factors of the transformation, its manifestations, and its consequences are revealed. The works of representatives of philosophy, culturology, sociology, history, etc. comprise the theoretical framework for this research. The empirical base contains the results of a sociological survey conducted in 2020 at two universities in the Altai Krai. The novelty consists in the identification of the trends of cultural transformation under the influence of the visual turn, their positive and negative consequences for the individual and society, and the generalization of numerous manifestations of these trends. The main conclusion of this study is the author's position that the visual turn produces serious and contradictory transformations in society, with both positive and negative tendencies and consequences. Among the positive effects of the visual turn, the author highlights the stimulation of visual thinking, which depends on the functioning of the right hemisphere of the brain. The author also indicates negative consequences of the visual transformation of culture. Perception through images weans people from thinking. Receiving information through images, rather than through speech, increases the clip-like nature of consciousness. This entails the possibility of the formation of an individual who is subject to manipulation. The presentation of images on the Internet and the arrangement of visual series develop the habit, in situations of choice, of expecting "the solution of your problems by someone else". Another consequence of the transition from logos to image is the formation of prerequisites for "easy" immersion into virtual situations.
13

Edwards, Laurie D. "Children's Learning in a Computer Microworld for Transformation Geometry." Journal for Research in Mathematics Education 22, no. 2 (March 1991): 122–37. http://dx.doi.org/10.5951/jresematheduc.22.2.0122.

Abstract:
Twelve middle school students working in pairs used a computer microworld to explore an introductory curriculum in transformation geometry. The microworld linked a symbolic representation (a set of simple Logo commands) with a visual display that showed the effects of each transformation. Worksheets were designed with the objective of encouraging the students to find and express mathematical patterns in the domain. The students were successful in constructing an accurate working understanding of the transformations. There was a tendency for symbolic overgeneralization in some activities, but the students were able to use visual feedback from the microworld and discussions with their partners to correct their own errors.
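For readers unfamiliar with the domain, the sketch below shows the kind of symbolic-to-visual linkage such a microworld provides: short commands that apply 2-D transformations to a figure's vertices. The command names and the Python setting are illustrative assumptions; the study's actual Logo commands are not reproduced here.

```python
# Illustrative sketch (assumed command names, not the study's Logo microworld):
# each symbolic command applies a 2-D transformation to a figure's vertices,
# which a display layer could then draw.
import numpy as np

def translate(points, dx, dy):
    return points + np.array([dx, dy])

def rotate(points, degrees, about=(0.0, 0.0)):
    t = np.radians(degrees)
    r = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    about = np.asarray(about)
    return (points - about) @ r.T + about

def reflect_x(points):
    return points * np.array([1.0, -1.0])   # mirror across the x-axis

triangle = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0]])
print(rotate(translate(triangle, 3.0, 1.0), 90.0))  # commands compose, as on screen
```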
14

Andreallo, Fiona. "The selfie generation: a transformation of visual social relationships." Vista, no. 4 (July 30, 2019): 153–71. http://dx.doi.org/10.21814/vista.3019.

Abstract:
The selfie generation is a term commonly used to describe people born after 1981 because of the supposed proliferation of selfies they take daily. If selfies indeed define a generation of people, then they require close consideration as an evolution of social interaction. This interdisciplinary study focuses on photography as a performance of looking involving social relationships between people. I ask, "How might selfies suggest a transformation of everyday social relationships?" The selfie as active photographic performance is first examined through illustrative ethnographic observation. Then, as a performative photographic object, the selfie is examined as interactive visual communication (Kress & Van Leeuwen, 2006, 2009). Finally, the performative spaces of the selfie in process (from initial performance, to object, and as it is shared and moves between private and public spaces) are examined as relationships of proxemic perception (Hall, 1966). For the selfie generation, private space in social relationships has perhaps evolved not simply because of changes in photographic technology, but also because of new spaces of socialising where private and public contexts are often blurred and unfixed.
15

Smith, Michael A., and J. Douglas Crawford. "Distributed Population Mechanism for the 3-D Oculomotor Reference Frame Transformation." Journal of Neurophysiology 93, no. 3 (March 2005): 1742–61. http://dx.doi.org/10.1152/jn.00306.2004.

Abstract:
Human saccades require a nonlinear, eye orientation–dependent reference frame transformation to transform visual codes to the motor commands for eye muscles. Primate neurophysiology suggests that this transformation is performed between the superior colliculus and brain stem burst neurons, but provides few clues as to how this is done. To understand how the brain might accomplish this, we trained a 3-layer neural net to generate accurate commands for kinematically correct 3-D saccades. The inputs to the network were a 2-D, eye-centered, topographic map of Gaussian visual receptive fields and an efference copy of eye position in 6-dimensional, push–pull “neural integrator” coordinates. The output was an eye orientation displacement command in similar coordinates appropriate to drive brain stem burst neurons. The network learned to generate accurate, kinematically correct saccades, including the eye orientation–dependent tilts in saccade motor error commands required to match saccade trajectories to their visual input. Our analysis showed that the hidden units developed complex, eye-centered visual receptive fields, widely distributed fixed-vector motor commands, and “gain field”–like eye position sensitivities. The latter evoked subtle adjustments in the relative motor contributions of each hidden unit, thereby rotating the population motor vector into the correct correspondence with the visual target input for each eye orientation: a distributed population mechanism for the visuomotor reference frame transformation. These findings were robust; there was little variation across networks with between 9 and 49 hidden units. Because essentially the same observations have been reported in the visuomotor transformations of the real oculomotor system, as well as other visuomotor systems (although interpreted elsewhere in terms of other models), we suggest that the mechanism for visuomotor reference frame transformations identified here is the same solution used in the real brain.
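A toy sketch of the three-layer feedforward architecture this abstract describes (layer sizes, the data format and the training rule are assumptions; the paper's network used a topographic map of Gaussian visual receptive fields and 6-D push-pull eye-position coordinates): an eye-centered visual input plus an eye-position signal is mapped to a motor displacement command, with plain backpropagation standing in for whatever training procedure was actually used.

```python
# Toy three-layer network of the kind described: an eye-centered visual map plus an
# eye-position signal in, a motor displacement command out. Layer sizes, the data
# format and the training rule are assumptions, not the paper's procedure.
import numpy as np

rng = np.random.default_rng(1)
n_visual, n_eye, n_hidden, n_motor = 64, 6, 25, 6

W1 = rng.normal(0.0, 0.1, (n_hidden, n_visual + n_eye))
W2 = rng.normal(0.0, 0.1, (n_motor, n_hidden))

def forward(visual_map, eye_position):
    x = np.concatenate([visual_map, eye_position])
    h = np.tanh(W1 @ x)        # hidden units: where gain-field-like mixing can emerge
    return W2 @ h, h, x

def train_step(visual_map, eye_position, target_command, lr=0.01):
    # One step of plain backpropagation on the squared error (an assumed stand-in
    # for whatever optimization the authors actually used).
    global W1, W2
    y, h, x = forward(visual_map, eye_position)
    err = y - target_command
    W2 -= lr * np.outer(err, h)
    W1 -= lr * np.outer((W2.T @ err) * (1.0 - h**2), x)
    return float(np.mean(err**2))
```

Training would then iterate train_step over (visual map, eye position, kinematically correct displacement) triples generated from a 3-D saccade model.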
16

Gao, Peipei, Feng Liu, Xizhi Sun, Fang Wang, and Jiajun Li. "Rapid non-contact visual measurement method for key dimensions of revolving workpieces." International Journal of Metrology and Quality Engineering 12 (2021): 10. http://dx.doi.org/10.1051/ijmqe/2021008.

Abstract:
Aiming at the rapid non-contact measurement of a revolving workpiece's radial and axial dimensions, a fast and high-precision visual inspection method is presented in this paper. For workpieces with a large axial size, the proposed method establishes the measurement transformation chain using object-image and object-object transformations, thus realizing rapid axial dimensional measurement. For workpieces with a large radial size, the method determines the measurement transformation model based on a two-dimensional target and the measurement correspondence relationship, and further achieves rapid radial dimensional measurement. The experimental results show that the method is effective and can be applied to in situ dimensional measurement of revolving workpieces on high-quality production lines.
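The abstract does not spell out the transformation chain itself; purely as an illustration of the elementary object-image conversion such chains build on, here is a trivial pixel-to-millimetre scaling sketch with assumed numbers.

```python
# Generic object-image (pixel-to-millimetre) conversion of the kind such measurement
# chains build on; all numbers are assumptions, and a real system would add camera
# calibration, distortion correction and the object-object transformations as well.
target_length_mm = 50.0        # known physical length of a calibration feature
target_length_px = 812.0       # the same feature measured in the image, in pixels
mm_per_px = target_length_mm / target_length_px

diameter_px = 1430.5           # workpiece diameter measured in pixels
print(round(diameter_px * mm_per_px, 3))  # estimated diameter in millimetres
```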
17

Huang, Ellen. "An Art of Transformation." Archives of Asian Art 68, no. 2 (October 1, 2018): 133–56. http://dx.doi.org/10.1215/00666637-7162228.

Abstract:
This article examines the phenomenon of yaobian 窯變, or kiln transformations, in late imperial and early modern China as material epistemology and material practice. By providing a genealogical analysis of documentation of yaobian in late imperial texts spanning the twelfth through the nineteenth centuries, the article relates their supernatural connotations to the production of Qing-period Jingdezhen Jun-style wares, variously known as flambé wares or kiln transmutation glazes. The article argues that the significance of such eighteenth-century yaobian porcelain wares lies in the very inexplicability of their craftsmanship and their ability to index both physical transformation and infinite formal transformation for the Qing empire, particularly during the reign of the Qianlong emperor (1736–1795).
18

Olivier, Gerard, and Jean Louis Juan De Mendoza. "Motor Dimension of Visual Mental Image Transformation Processes." Perceptual and Motor Skills 90, no. 3 (June 2000): 1008–26. http://dx.doi.org/10.2466/pms.2000.90.3.1008.

19

Liu, Huan, Qinghua Zheng, Minnan Luo, Xiaojun Chang, Caixia Yan, and Lina Yao. "Memory transformation networks for weakly supervised visual classification." Knowledge-Based Systems 210 (December 2020): 106432. http://dx.doi.org/10.1016/j.knosys.2020.106432.

20

Sukhov, A. O. "Domain-Specific Language for Visual Models Transformation Creation." PROGRAMMNAYA INGENERIA 8, no. 9 (September 14, 2017): 396–406. http://dx.doi.org/10.17587/prin.8.396-406.

21

Makarychev, V. "Visual Servoing Based on Theory of Transformation Group." Procedia Computer Science 103 (2017): 183–89. http://dx.doi.org/10.1016/j.procs.2017.01.054.

22

Putten, Michel J. A. M. van. "WO24 Visual transformation of EEG in the ICU." Clinical Neurophysiology 119 (May 2008): S49—S50. http://dx.doi.org/10.1016/s1388-2457(08)60184-4.

23

Alter-Gartenberg, R. "Nonlinear dynamic range transformation in visual communication channels." IEEE Transactions on Image Processing 5, no. 3 (March 1996): 538–46. http://dx.doi.org/10.1109/83.491328.

24

Rensink, R. A. "The Invariance of Visual Search to Geometric Transformation." Journal of Vision 4, no. 8 (August 1, 2004): 178. http://dx.doi.org/10.1167/4.8.178.

25

Krapp, Holger G. "Sensorimotor Transformation: From Visual Responses to Motor Commands." Current Biology 20, no. 5 (March 2010): R236—R239. http://dx.doi.org/10.1016/j.cub.2010.01.024.

26

Preuss, N., L. R. Harris, and F. W. Mast. "Allocentric visual cues influence mental transformation of bodies." Journal of Vision 13, no. 12 (October 16, 2013): 14. http://dx.doi.org/10.1167/13.12.14.

27

de Lara, Juan, and Hans Vangheluwe. "Automating the transformation-based analysis of visual languages." Formal Aspects of Computing 22, no. 3 (May 19, 2009): 297–326. http://dx.doi.org/10.1007/s00165-009-0114-y.

28

Sun, Carol. "Transformation Parlor." Art Journal 60, no. 3 (September 2001): 42–47. http://dx.doi.org/10.1080/00043249.2001.10792076.

29

Sun, Carol. "Transformation Parlor." Art Journal 60, no. 3 (2001): 42. http://dx.doi.org/10.2307/778136.

30

Kalantar, Negar, and Alireza Borhani. "Studio in Transformation: Transformation in Studio." Journal of Architectural Education 70, no. 1 (January 2, 2016): 107–15. http://dx.doi.org/10.1080/10464883.2016.1122497.

31

Kasik, David J., David Ebert, Guy Lebanon, Haesun Park, and William M. Pottenger. "Data Transformations and Representations for Computation and Visualization." Information Visualization 8, no. 4 (January 2009): 275–85. http://dx.doi.org/10.1057/ivs.2009.27.

Abstract:
At the core of successful visual analytics systems are computational techniques that transform data into concise, human comprehensible visual representations. The general process often requires multiple transformation steps before a final visual representation is generated. This article characterizes the complex raw data to be analyzed and then describes two different sets of transformations and representations. The first set transforms the raw data into more concise representations that improve the performance of sophisticated computational methods. The second transforms internal representations into visual representations that provide the most benefit to an interactive user. The end result is a computing system that enhances an end user's analytic process with effective visual representations and interactive techniques. While progress has been made on improving data transformations and representations, there is substantial room for improvement.
32

Korzina, Maria Igorevna. "Transformation of Visual Communications during the Fourth Industrial Revolution." Общество: философия, история, культура, no. 11 (November 13, 2020): 36–41. http://dx.doi.org/10.24158/fik.2020.11.5.

Abstract:
The paper provides a philosophical analysis of the transformation processes of visual communications in the framework of such a hypothetical phenomenon as the fourth industrial revolution. In the conditions of the digital revolution, visual communication gets an incentive to develop and acquires new properties in the new conditions of development of post-industrial and digital society. As a scientific discipline, visual communication is at the stage of full-scale development and the search for interdisciplinary connections. The author examines the history of visual communications from the Ancient world to the present, identifies their main features at the present stage of development, and analyzes the example of the virtualization of a museum exhibition as a result of the modern transformation of the communication environment.
33

Naito, Eiichi. "Controllability of Motor Imagery and Transformation of Visual Imagery." Perceptual and Motor Skills 78, no. 2 (April 1994): 479–87. http://dx.doi.org/10.2466/pms.1994.78.2.479.

Abstract:
This study examined the relation between the control of motor imagery and the generation and transformation of visual imagery by testing 54 subjects. We used two measures of the Controllability of Motor Imagery test to evaluate the ability to control motor imagery. One was a recognition test, on which the subject imagines seeing another person's movement; the other was a regeneration test, on which the subject imagines moving his or her own body. The former test score was related to the processing time of a mental rotation task, while the latter was not but appeared to reflect sport experience. It was concluded that the two measures of the test could reflect different aspects of imagery, namely observational motor imagery and body-centered motor imagery.
34

Li, Yang, Jianke Zhu, Steven C. H. Hoi, Wenjie Song, Zhefeng Wang, and Hantang Liu. "Robust Estimation of Similarity Transformation for Visual Object Tracking." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 8666–73. http://dx.doi.org/10.1609/aaai.v33i01.33018666.

Abstract:
Most existing correlation filter-based tracking approaches only estimate simple axis-aligned bounding boxes, and very few of them are capable of recovering the underlying similarity transformation. To tackle this challenging problem, in this paper we propose a new correlation filter-based tracker with a novel robust estimation of similarity transformation under large displacements. In order to search such a large 4-DoF space efficiently in real time, we formulate the problem as two 2-DoF sub-problems and apply an efficient block coordinate descent solver to optimize the estimation result. Specifically, we employ an efficient phase correlation scheme to deal with both scale and rotation changes simultaneously in log-polar coordinates. Moreover, a variant of correlation filter is used to predict the translational motion individually. Our experimental results demonstrate that the proposed tracker achieves very promising prediction performance compared with state-of-the-art visual object tracking methods, while still retaining the advantages of high efficiency and simplicity of conventional correlation filter-based tracking methods.
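The scale-and-rotation step described here uses phase correlation in log-polar coordinates (the Fourier–Mellin idea). The sketch below is an assumed, minimal version of that idea, not the authors' tracker: if two patches are aligned at their centres, a rotation and scaling about the centre becomes a translation in log-polar coordinates, which phase correlation can recover.

```python
# Minimal sketch of phase correlation in log-polar coordinates (assumed sizes and
# nearest-neighbour sampling; a real tracker would interpolate and window the patches).
import numpy as np

def log_polar(img, n_angles=180, n_radii=120):
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    radii = np.exp(np.linspace(0.0, np.log(min(cy, cx)), n_radii))
    ys = cy + radii[None, :] * np.sin(thetas[:, None])
    xs = cx + radii[None, :] * np.cos(thetas[:, None])
    return img[np.clip(np.rint(ys).astype(int), 0, h - 1),
               np.clip(np.rint(xs).astype(int), 0, w - 1)]

def phase_correlation(a, b):
    # Peak location of the normalized cross-power spectrum gives the shift between a and b.
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    return np.unravel_index(np.argmax(corr), corr.shape)

# Usage idea: d_angle, d_log_r = phase_correlation(log_polar(patch_now), log_polar(template));
# d_angle maps back to a rotation and d_log_r to a scale factor.
```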
35

Jiang, Xiaowei, and Sato Reika. "Research on Visual Transformation and Development of Huanglongfu Folklore." Research Journal of Applied Sciences, Engineering and Technology 17, no. 1 (February 15, 2020): 18–23. http://dx.doi.org/10.19026/rjaset.17.6030.

36

Mian Qaisar, Saeed. "Isolated Speech Recognition and Its Transformation in Visual Signs." Journal of Electrical Engineering & Technology 14, no. 2 (January 23, 2019): 955–64. http://dx.doi.org/10.1007/s42835-018-00071-z.

37

Taentzer, G. "Visual Modeling of Distributed Object Systems by Graph Transformation." Electronic Notes in Theoretical Computer Science 51, no. 1 (February 2004): 1–15. http://dx.doi.org/10.1016/s1571-0661(04)00212-9.

38

Taentzer, Gabriele. "Visual Modeling of Distributed Object Systems by Graph Transformation." Electronic Notes in Theoretical Computer Science 51 (May 2002): 304–18. http://dx.doi.org/10.1016/s1571-0661(04)80212-3.

39

Chen, Woei-Kae, and Pin-Ying Tu. "VisualTPL: A visual dataflow language for report data transformation." Journal of Visual Languages & Computing 25, no. 3 (June 2014): 210–26. http://dx.doi.org/10.1016/j.jvlc.2013.11.003.

40

Yuan, Liping. "Research on Image Visual Information Coordinate Transformation Model Algorithm." Journal of Physics: Conference Series 1575 (June 2020): 012042. http://dx.doi.org/10.1088/1742-6596/1575/1/012042.

41

Bonin-Font, F., A. Burguera, A. Ortiz, and G. Oliver. "Concurrent visual navigation and localisation using inverse perspective transformation." Electronics Letters 48, no. 5 (2012): 264. http://dx.doi.org/10.1049/el.2011.3577.

42

Basole, Rahul C. "Accelerating Digital Transformation: Visual Insights from the API Ecosystem." IT Professional 18, no. 6 (November 2016): 20–25. http://dx.doi.org/10.1109/mitp.2016.105.

43

Signorile, Vito. "Capitulating to captions: The verbal transformation of visual images." Human Studies 10, no. 3-4 (October 1987): 281–310. http://dx.doi.org/10.1007/bf00157601.

44

Franck, J. P., K. R. Lundgren, and K. A. McGreer. "Visual observation of the martensitic transformation hcp⇆fcc 4He." Physica B+C 139-140 (May 1986): 230–32. http://dx.doi.org/10.1016/0378-4363(86)90564-4.

45

Tomatsu, Saeka, and Tatsuyuki Ohtsuki. "The effect of visual transformation on bimanual circling movement." Experimental Brain Research 166, no. 2 (September 8, 2005): 277–86. http://dx.doi.org/10.1007/s00221-005-2363-1.

46

Dzhafarov, Ehtibar N. "Visual kinematics III. Transformation of spatiotemporal coordinates in motion." Journal of Mathematical Psychology 36, no. 4 (December 1992): 524–46. http://dx.doi.org/10.1016/0022-2496(92)90107-i.

47

Gnadt, James W., R. Martyn Bracewell, and Richard A. Andersen. "Sensorimotor transformation during eye movements to remembered visual targets." Vision Research 31, no. 4 (January 1991): 693–715. http://dx.doi.org/10.1016/0042-6989(91)90010-3.

48

Vujović, Marija, and Tatjana Đukić. "“NARODNE NOVINE”: GENRE, THEMATIC AND VISUAL TRANSFORMATION 1949-2019." MEDIA STUDIES AND APPLIED ETHICS 2, no. 1 (October 8, 2020): 51–67. http://dx.doi.org/10.46630/msae.2.2020.04.

Abstract:
The importance of local media is indisputable for media theorists and media practitioners, despite widespread globalization. Although electronic media have the largest audience and online media are constantly expanding, traditional print media still survive and influence the political and social life of local communities. "Narodne novine" [English: The People's Newspaper] is a daily news and political newspaper with the longest publication history in south-eastern Serbia. By using quantitative and qualitative content analysis, as well as comparative and synthetic research methods, the authors investigate how this newspaper transformed in genre, thematic and visual terms during two socially and historically different periods – the socialist period and the transition period. The research corpus consists of 63 articles from two editions of "Narodne novine" – the newspapers published on May 1st 1949 (28 articles) and 70 years later, on May 1st 2019 (35 articles).
49

Zheng, Weiwei, Huimin Yu, and Zhaohui Lu. "Two-Step Affine Transformation Prediction for Visual Object Tracking." IEEE Access 9 (2021): 36512–21. http://dx.doi.org/10.1109/access.2021.3056469.

50

Jakola, Asgeir S., David Bouget, Ingerid Reinertsen, Anne J. Skjulsvik, Lisa Millgård Sagberg, Hans Kristian Bø, Sasha Gulati, Kristin Sjåvik, and Ole Solheim. "Spatial distribution of malignant transformation in patients with low-grade glioma." Journal of Neuro-Oncology 146, no. 2 (January 2020): 373–80. http://dx.doi.org/10.1007/s11060-020-03391-1.

Abstract:
Background: Malignant transformation represents the natural evolution of diffuse low-grade gliomas (LGG). This is a catastrophic event, causing neurocognitive symptoms, intensified treatment and premature death. However, little is known concerning the spatial distribution of malignant transformation in patients with LGG. Materials and methods: Patients histopathologically diagnosed with LGG and subsequent radiological malignant transformation were identified from two different institutions. We evaluated the spatial distribution of malignant transformation with (1) visual inspection and (2) segmentations of longitudinal tumor volumes. In (1), a radiological transformation site < 2 cm from the tumor on the preceding MRI was defined as local transformation. In (2), overlap with the pretreatment volume after importation into a common space was defined as local transformation. With a centroid model we explored whether there were particular patterns of transformation within relevant subgroups. Results: We included 43 patients in the clinical evaluation, and 36 patients had MRI scans available for longitudinal segmentations. Prior to malignant transformation, residual radiological tumor volumes were > 10 ml in 93% of patients. The transformation site was considered local in 91% of patients by clinical assessment. Patients treated with radiotherapy prior to transformation had a somewhat lower rate of local transformation (83%). Based upon the segmentations, the transformation was local in 92%. We did not observe any particular pattern of transformation in the examined molecular subgroups. Conclusion: Malignant transformation occurs locally and within the T2w hyperintensities in most patients. Although LGG is an infiltrating disease, these data conceptually strengthen the role of loco-regional treatments in patients with LGG.