Journal articles on the topic "Content Based Texture Coding (CBTC)"

Below are 18 of the best scholarly journal articles on the topic "Content Based Texture Coding (CBTC)".

1

Hosur, Prabhudev, and Rolando Carrasco. "Enhanced Frame-Based Video Coding to Support Content-Based Functionalities". International Journal of Computational Intelligence and Applications 06, no. 02 (June 2006): 161–75. http://dx.doi.org/10.1142/s1469026806001939.

Abstract:
This paper presents an enhanced frame-based video coding scheme. The input to the enhanced frame-based video encoder consists of rectangular video frames together with the shapes of arbitrarily shaped objects in those frames. The rectangular frame texture is encoded with a conventional frame-based coding technique, while each video object's shape is encoded using contour-based vertex coding. By exploiting the shape information in the bitstream, several useful content-based functionalities can be achieved at the cost of only a very small bit-rate overhead.
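
A minimal sketch (Python; not from the paper) of why vertex-coded shape information adds only a small overhead to a frame-based bitstream. The container name, the field layout, and the 16 bits per vertex are illustrative assumptions, not values from the publication:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class EnhancedFrame:
    """Hypothetical container mirroring the scheme in the abstract:
    conventionally coded rectangular texture plus vertex-coded shapes."""
    texture_bitstream: bytes                     # frame-based texture coding
    shape_vertices: List[List[Tuple[int, int]]]  # one vertex polygon per object

def shape_overhead_bits(frame: EnhancedFrame, bits_per_vertex: int = 16) -> int:
    """The shape side-information costs only a few bits per polygon vertex."""
    return sum(len(poly) * bits_per_vertex for poly in frame.shape_vertices)

frame = EnhancedFrame(b"<texture payload>", [[(0, 0), (40, 0), (40, 30), (0, 30)]])
print(shape_overhead_bits(frame))  # 64 bits for one 4-vertex object
```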
2

Zhang, Qiuwen, Shuaichao Wei, and Rijian Su. "Low-Complexity Texture Video Coding Based on Motion Homogeneity for 3D-HEVC". Scientific Programming 2019 (January 15, 2019): 1–13. http://dx.doi.org/10.1155/2019/1574081.

Abstract:
The three-dimensional extension of High Efficiency Video Coding (3D-HEVC) is an emerging international video compression standard for multiview video system applications. As in HEVC, a computationally expensive mode decision is performed over all depth levels and prediction modes to select the least rate-distortion (RD) cost for each coding unit (CU). In addition, new tools and inter-component prediction techniques have been introduced to 3D-HEVC to improve the compression efficiency of multiview texture videos. These techniques, despite achieving the highest texture video coding efficiency, involve extremely complex procedures, which limits 3D-HEVC encoders in practical applications. In this paper, a fast texture video coding method based on motion homogeneity is proposed to reduce 3D-HEVC computational complexity. Because the views of a multiview texture video represent the same scene at the same time instant, and the optimal CU depth level and prediction modes are highly content dependent, it is inefficient to exhaustively test all depth levels and prediction modes in 3D-HEVC. A motion homogeneity model of a CU is first built from the motion vectors and prediction modes of the corresponding CUs. Based on this model, we present three efficient texture video coding approaches: fast depth-level range determination, early SKIP/Merge mode decision, and adaptive motion-search range adjustment. Experimental results demonstrate that the proposed overall method can save 56.6% of encoding time with only trivial coding efficiency degradation.
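
The motion-homogeneity shortcut described here can be sketched in a few lines. This is a toy illustration that assumes the variance of neighbouring CUs' motion vectors as the homogeneity measure; the thresholds and the returned decision fields are invented for the example, not the authors' published parameters:

```python
import numpy as np

def motion_homogeneity(mvs: np.ndarray) -> float:
    """Mean per-component variance of the motion vectors (an N x 2 array)
    gathered from spatially/temporally corresponding CUs."""
    return float(np.mean(np.var(mvs, axis=0)))

def decide_cu(mvs: np.ndarray, t_low: float = 1.0, t_high: float = 16.0) -> dict:
    """Map homogeneity to early decisions mirroring the three speed-ups:
    depth-level range, early SKIP/Merge, and motion-search range."""
    h = motion_homogeneity(mvs)
    if h < t_low:    # very homogeneous motion: stop CU splitting early
        return {"max_depth": 1, "early_skip": True, "search_range": 8}
    if h < t_high:   # moderately homogeneous: restrict the depth range
        return {"max_depth": 2, "early_skip": False, "search_range": 32}
    return {"max_depth": 3, "early_skip": False, "search_range": 64}

# Motion vectors of neighbouring CUs, e.g. in quarter-pel units
neighbours = np.array([[1, 0], [1, 1], [2, 0], [1, 0]])
print(decide_cu(neighbours))
```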
3

Chen, Yan-Hong, Chin-Chen Chang, Chia-Chen Lin, and Cheng-Yi Hsu. "Content-Based Color Image Retrieval Using Block Truncation Coding Based on Binary Ant Colony Optimization". Symmetry 11, no. 1 (December 27, 2018): 21. http://dx.doi.org/10.3390/sym11010021.

Abstract:
In this paper, we propose a content-based image retrieval (CBIR) approach using color and texture features extracted from block truncation coding based on binary ant colony optimization (BACOBTC). First, we present a near-optimized common bitmap scheme for BTC. Then, we convert the image into two color quantizers and a bitmap image using BACOBTC. Subsequently, the color and texture features, i.e., the color histogram feature (CHF) and the bit-pattern histogram feature (BHF), are extracted to measure the similarity between a query image and the target images in the database and retrieve the desired image. The performance of the proposed approach was compared with several earlier image-retrieval schemes. The results were evaluated in terms of precision-recall and average retrieval rate, and showed that our approach outperforms the referenced approaches.
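
As a rough illustration of the CHF/BHF idea, here is plain two-level BTC with histogram features in Python. The ant-colony bitmap optimization is omitted, and the block size and histogram bin counts are arbitrary choices for the sketch:

```python
import numpy as np

def btc_block(block: np.ndarray):
    """Classic block truncation coding of one grayscale block: a bitmap
    plus two quantizers that preserve the block statistics."""
    mean = block.mean()
    bitmap = block >= mean
    hi = block[bitmap].mean() if bitmap.any() else mean
    lo = block[~bitmap].mean() if (~bitmap).any() else mean
    return bitmap, hi, lo

def chf_bhf(img: np.ndarray, bs: int = 4) -> np.ndarray:
    """Histogram over the quantizers (CHF-like) concatenated with a
    histogram over the bitmap patterns (BHF-like), both simplified."""
    quant, patterns = [], []
    h, w = img.shape
    for y in range(0, h - bs + 1, bs):
        for x in range(0, w - bs + 1, bs):
            bitmap, hi, lo = btc_block(img[y:y+bs, x:x+bs])
            quant.extend([hi, lo])
            patterns.append(bitmap.astype(np.uint8).ravel())
    chf, _ = np.histogram(quant, bins=16, range=(0, 255), density=True)
    codes = [int("".join(map(str, p)), 2) % 64 for p in patterns]
    bhf, _ = np.histogram(codes, bins=64, range=(0, 64), density=True)
    return np.concatenate([chf, bhf])

query = np.random.randint(0, 256, (64, 64)).astype(float)
target = np.random.randint(0, 256, (64, 64)).astype(float)
print(np.abs(chf_bhf(query) - chf_bhf(target)).sum())  # L1 distance for ranking
```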
4

Dumitras, A., and B. G. Haskell. "An Encoder–Decoder Texture Replacement Method With Application to Content-Based Movie Coding". IEEE Transactions on Circuits and Systems for Video Technology 14, no. 6 (June 2004): 825–40. http://dx.doi.org/10.1109/tcsvt.2004.828336.

5

Wang, Xing-Yuan, and Yahui Lang. "A Fast Fractal Encoding Method Based on Fractal Dimension". Fractals 17, no. 04 (December 2009): 459–65. http://dx.doi.org/10.1142/s0218348x09004491.

Abstract:
In this paper a fast fractal encoding method based on fractal dimension is proposed. Image texture is important content in image analysis and processing; it describes the roughness of irregular surfaces. The fractal dimension of fractal theory can be used to describe image texture, and it agrees with the human visual system: the higher the fractal dimension, the rougher the surface of the corresponding image, and vice versa. During the encoding process, all blocks of the given image are first classified into three classes according to their fractal dimension; each range block then searches for its best match within the corresponding class. The fractal dimension is estimated by differential box counting, which is chosen specifically for texture analysis. Since the search space is reduced and the classification operation is simple and computationally efficient, the encoding speed is improved while the quality of the decoded image is preserved. Experiments show that, compared with the full-search method, the proposed method greatly reduces the encoding time, produces decoded images of good quality, and achieves a stable speedup ratio.
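
The differential box counting mentioned in the abstract can be sketched as follows; the grid sizes, the box-height formula, and the class thresholds at the end are simplified assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def differential_box_count(img: np.ndarray, sizes=(2, 4, 8, 16)) -> float:
    """Estimate the fractal dimension of an intensity surface: for each
    grid size s, count the boxes spanned by each block's min..max range,
    then fit log(count) against log(1/s)."""
    h, w = img.shape
    grey_levels = 256
    counts = []
    for s in sizes:
        box_h = grey_levels * s / max(h, w)   # box height in grey levels
        n = 0
        for y in range(0, h - s + 1, s):
            for x in range(0, w - s + 1, s):
                blk = img[y:y+s, x:x+s]
                n += int(np.ceil((blk.max() + 1) / box_h)
                         - np.floor(blk.min() / box_h))
        counts.append(n)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

img = np.random.randint(0, 256, (64, 64))
fd = differential_box_count(img)
# Classify blocks into three roughness classes (illustrative thresholds)
label = 0 if fd < 2.3 else (1 if fd < 2.6 else 2)
print(fd, label)
```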
6

Han, Xinying, Yang Wu, and Rui Wan. "A Method for Style Transfer from Artistic Images Based on Depth Extraction Generative Adversarial Network". Applied Sciences 13, no. 2 (January 8, 2023): 867. http://dx.doi.org/10.3390/app13020867.

Abstract:
The depth extraction generative adversarial network (DE-GAN) is designed for artistic style transfer. Traditional style transfer models focus on extracting texture and color features from style images through an autoencoding network, mixing texture and color features using high-dimensional coding. In the aesthetics of artworks, the color, texture, shape, and spatial features of the artistic object together constitute the artistic style of the work. In this paper, we propose a multi-feature extractor that extracts color features, texture features, depth features, and shape masks from style images using U-net, a multi-factor extractor, the fast Fourier transform, and the MiDaS depth estimation network. At the same time, an autoencoder structure is used as the core of the content extraction network to generate a network that shares style parameters with the feature extraction network, finally realizing the generation of artwork images in three-dimensional artistic styles. The experimental analysis shows that, compared with other advanced methods, DE-GAN-generated images have higher subjective image quality, and the generated style pictures are more consistent with the aesthetic characteristics of real works of art. The quantitative analysis shows that images generated with the DE-GAN method perform better in terms of structural features, image distortion, image clarity, and texture details.
7

Deep, G., J. Kaur, Simar Preet Singh, Soumya Ranjan Nayak, Manoj Kumar, and Sandeep Kautish. "MeQryEP: A Texture Based Descriptor for Biomedical Image Retrieval". Journal of Healthcare Engineering 2022 (April 11, 2022): 1–20. http://dx.doi.org/10.1155/2022/9505229.

Abstract:
Image texture analysis is a dynamic area of research in computer vision and image processing, with applications ranging from medical image analysis to image segmentation to content-based image retrieval and beyond. "Quinary encoding on mesh patterns (MeQryEP)" is a new approach to extracting texture features for indexing and retrieval of biomedical images, implemented in this work. An extension of a previous study, this research investigates the use of local quinary patterns (LQP) on mesh patterns in three different orientations. To encode the gray-scale relationship between the central pixel and its surrounding neighbors in a two-dimensional (2D) local region of an image, binary and non-binary codings, such as local binary patterns (LBP), local ternary patterns (LTP), and LQP, are used, while the proposed strategy uses three selected directions of mesh patterns to encode the gray-scale relationship between the surrounding neighbors of a given center pixel in a 2D image. An innovative aspect of the proposed method is that it uses mesh image structure quinary pattern features to encode additional spatial structure information, resulting in better retrieval. Analyses on three kinds of benchmark biomedical data sets assess the viability of MeQryEP: LIDC-IDRI-CT and VIA/I-ELCAP-CT are lung image databases based on computed tomography (CT), while OASIS-MRI is a brain database based on magnetic resonance imaging (MRI). The method outperforms state-of-the-art texture extraction methods such as LBP, LQEP, LTP, LMeP, LMeTerP, DLTerQEP, and LQEQryP in terms of average retrieval precision (ARP) and average retrieval rate (ARR).
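
For orientation, here is the binary baseline (plain 8-neighbour LBP) that the quinary mesh descriptor extends; the quinary thresholds and the three mesh orientations of MeQryEP are not reproduced here:

```python
import numpy as np

def lbp_3x3(img: np.ndarray) -> np.ndarray:
    """Basic 8-neighbour local binary pattern: threshold each neighbour
    against the centre pixel and pack the bits into a code (0..255)."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    centre = img[1:h-1, 1:w-1]
    codes = np.zeros_like(centre, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes

img = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
hist, _ = np.histogram(lbp_3x3(img), bins=256, range=(0, 256), density=True)
print(hist.shape)  # a 256-bin texture descriptor usable for retrieval
```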
8

Xu, Jianming, Weichun Liu, Yang Qin, and Guangrong Xu. "Image Super-Resolution Reconstruction Method for Lung Cancer CT-Scanned Images Based on Neural Network". BioMed Research International 2022 (July 18, 2022): 1–10. http://dx.doi.org/10.1155/2022/3543531.

Abstract:
Super-resolution (SR) reconstruction of a single image is an important image synthesis task, especially for medical applications. This paper studies the application of image segmentation to lung cancer images, using deep learning for resolution reconstruction of lung cancer CT scans. At present, the neural networks used for image segmentation and classification suffer from information loss as information passes from one layer to deeper layers. Commonly used loss functions include content-based reconstruction loss and the generative adversarial loss. Sparse-coding single-image super-resolution algorithms can easily produce incorrect geometric structure in the reconstructed image. To solve the excessive smoothness and edge blurring caused by introducing a self-similarity constraint, a two-layer reconstruction framework based on a smooth layer and a texture layer is proposed for the medical application of lung cancer. The method uses a reconstruction model constrained by the number of global nonzero gradients to reconstruct the smooth layer, and the proposed sparse coding method to reconstruct high-resolution texture images. Finally, global and local optimization models are used to further improve the quality of the reconstructed image. An adaptive multiscale remote-sensing image super-resolution reconstruction network is designed: a selective kernel network and an adaptive gating unit are integrated to extract and fuse features and obtain a preliminary reconstruction, and through the proposed dual-drive module, feature-prior-driven loss and task-driven loss are transmitted to the super-resolution network. The proposed work not only improves the subjective visual effect; robustness is also enhanced, with more accurate reconstruction of edges. Statistical evaluators are used to test the viability of the proposed scheme.
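
The sparse-coding texture-layer step can be illustrated with a generic coupled-dictionary sketch (an OMP-style greedy coder); the dictionaries, patch sizes, and sparsity level below are toy assumptions, not the paper's trained model:

```python
import numpy as np

def sparse_code(patch: np.ndarray, D: np.ndarray, k: int = 3):
    """Greedy sparse coding of an LR patch over dictionary D: pick at
    most k atoms, re-solving least squares on the chosen support."""
    r, support, coef = patch.copy(), [], None
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, support], patch, rcond=None)
        r = patch - D[:, support] @ coef
    return support, coef

# Toy coupled dictionaries: column i of D_lr and D_hr is the same atom
rng = np.random.default_rng(0)
D_lr = rng.standard_normal((16, 32))   # 4x4 LR patch atoms
D_hr = rng.standard_normal((64, 32))   # 8x8 HR patch atoms
lr_patch = rng.standard_normal(16)
support, coef = sparse_code(lr_patch, D_lr)
hr_patch = D_hr[:, support] @ coef     # texture-layer HR reconstruction
print(hr_patch.shape)
```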
9

Xing, Qiang, Jie Chen, Jieyu Liu, and Baifeng Song. "A Double Random Matrix Design Model for Fractal Art Patterns Based on Visual Characteristics". Mathematical Problems in Engineering 2022 (August 31, 2022): 1–11. http://dx.doi.org/10.1155/2022/5376587.

Abstract:
This paper adopts a double-random-matrix visual-characteristics method for in-depth research and analysis of the design of fractal art patterns. To meet practical needs in fractal graphic design, a method is proposed to automatically extract the core base pattern from fractal graphic content and generate a four-sided continuous pattern. The method first uses the Canny operator for edge detection to analyze the area of the main pattern. It then uses a gray-level co-occurrence matrix to extract and analyze the graphic texture features, on the basis of which the best splicing method is selected to splice the extracted pattern, realizing two tiling modes: flat-row and staggered four-sided continuous patterns. The method has good practicality, low complexity, and high versatility while preserving the beauty of the generated four-sided continuous pattern; it can assist designers in designing patterns, improve efficiency, and save design costs. In this paper, we also improve existing image segmentation methods, adopting two segmentation approaches, quadtree segmentation and HV segmentation, and proposing a new local codebook selection strategy, and we study how well different methods adapt to images in terms of segmentation method and local codebook selection strategy. Finally, the improved algorithm is tested on standard and real-life image libraries, and the experimental results show that the use of local codebooks significantly improves image coding speed compared with fixed fractal coding. For an image to be retrieved, only the coding operation is needed to obtain its fractal code for similarity matching, which meets the requirement of real-time retrieval. Applying the improved distance formula, the retrieval accuracy obtained on the test gallery is significantly better than that of the grayscale histogram algorithm.
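
The gray-level co-occurrence step can be sketched as follows; the displacement, the quantization to 8 levels, and the two statistics are common defaults assumed for the example, not the paper's settings:

```python
import numpy as np

def glcm_stats(gray: np.ndarray, dx: int = 1, dy: int = 0, levels: int = 8) -> dict:
    """Grey-level co-occurrence matrix for one displacement (dx, dy),
    reduced to the contrast and energy statistics used for texture."""
    q = gray.astype(int) * levels // 256          # quantize to `levels` bins
    h, w = q.shape
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)       # accumulate co-occurrences
    m /= m.sum()
    i, j = np.indices(m.shape)
    return {"contrast": float(((i - j) ** 2 * m).sum()),
            "energy": float((m ** 2).sum())}

tile = np.random.randint(0, 256, (32, 32))
print(glcm_stats(tile))  # low contrast suggests a smoother candidate region
```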
10

Jenks, Robert A., Ashkan Vaziri, Ali-Reza Boloori, and Garrett B. Stanley. "Self-Motion and the Shaping of Sensory Signals". Journal of Neurophysiology 103, no. 4 (April 2010): 2195–207. http://dx.doi.org/10.1152/jn.00106.2009.

Abstract:
Sensory systems must form stable representations of the external environment in the presence of self-induced variations in sensory signals. It is also possible that the variations themselves may provide useful information about self-motion relative to the external environment. Rats have been shown to be capable of fine texture discrimination and object localization based on palpation by facial vibrissae, or whiskers, alone. During behavior, the facial vibrissae brush against objects and undergo deflection patterns that are influenced both by the surface features of the objects and by the animal's own motion. The extent to which behavioral variability shapes the sensory inputs to this pathway is unknown. Using high-resolution, high-speed videography of unconstrained rats running on a linear track, we measured several behavioral variables including running speed, distance to the track wall, and head angle, as well as the proximal vibrissa deflections while the distal portions of the vibrissae were in contact with periodic gratings. The measured deflections, which serve as the sensory input to this pathway, were strongly modulated both by the properties of the gratings and the trial-to-trial variations in head-motion and locomotion. Using presumed internal knowledge of locomotion and head-rotation, gratings were classified using short-duration trials (<150 ms) from high-frequency vibrissa motion, and the continuous trajectory of the animal's own motion through the track was decoded from the low frequency content. Together, these results suggest that rats have simultaneous access to low- and high-frequency information about their environment, which has been shown to be parsed into different processing streams that are likely important for accurate object localization and texture coding.
11

Pande, Sandeep D., and Manna S. R. Chetty. "Linear Bezier Curve Geometrical Feature Descriptor for Image Recognition". Recent Advances in Computer Science and Communications 13, no. 5 (November 5, 2020): 930–41. http://dx.doi.org/10.2174/2213275912666190617155154.

Abstract:
Background: Image retrieval has a significant role in present and upcoming applications of image processing, where images within a desired range of similarity are retrieved for a query image. Representation of image features, accuracy of feature selection, optimal storage size of the feature vector, and efficient methods for obtaining features play a vital role in image retrieval, where features are represented based on the content of an image such as color, texture, or shape. In this work an optimal feature vector based on the control points of a Bezier curve is proposed, which is computation- and storage-efficient. Aim: To develop an effective, storage- and computation-efficient framework for the retrieval and classification of plant leaves. Objective: The primary objective of this work is to develop a new algorithm for control-point extraction based on global monitoring of the edge region, which minimizes false feature extraction; further, to compute a sub-clustering feature value over finer detail components to enhance classification performance; and finally, to develop a new search mechanism using inter- and intra-mapping of feature values to select optimal feature values in the estimation process. Methods: The work starts with a pre-processing stage that outputs the boundary coordinates of the shape present in the input image. The grayscale input image is first converted into a binary image by binarization; then curvature coding is applied to extract the boundary of the leaf image. Gaussian smoothing is applied to the extracted boundary to remove noise and reduce false features. An interpolation method is then used to extract the control points of the boundary. From the extracted control points the Bezier curve points are estimated, and the Fast Fourier Transform (FFT) is applied to the curve points to obtain the feature vector. Finally, a K-NN classifier is used to classify and retrieve the leaf images. Results: The performance of the proposed approach is compared with existing state-of-the-art methods (contour- and curve-based) using accuracy, sensitivity, specificity, recall rate, and processing time as evaluation parameters. The proposed method has high accuracy with acceptable specificity and sensitivity; in terms of sensitivity and specificity the contour method outperforms it, but in terms of accuracy the proposed method outperforms the state-of-the-art methods. Conclusion: This work proposed linear coding of Bezier curve control-point computation for image retrieval. The approach minimizes processing overhead and search delay by reducing the feature vector using a threshold-based selection approach, and it offers distortion suppression and dominant feature extraction simultaneously, minimizing the need for an additional filtering stage. The retrieval accuracy of the developed approach is improved compared with the tangential Bezier curve method and conventional edge- and contour-based coding, with low resource overhead in computing shape features.
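
A sketch of the Bezier-plus-FFT feature pipeline described under Methods; the control points, the sample count, and the number of retained FFT coefficients are illustrative assumptions:

```python
import numpy as np

def bezier_points(control: np.ndarray, n: int = 64) -> np.ndarray:
    """Sample a Bezier curve of arbitrary degree from its control points
    using the de Casteljau construction."""
    t = np.linspace(0.0, 1.0, n)
    pts = np.repeat(control[None, :, :], n, axis=0)      # (n, k, 2)
    while pts.shape[1] > 1:                              # reduce k -> 1
        pts = (1 - t)[:, None, None] * pts[:, :-1] + t[:, None, None] * pts[:, 1:]
    return pts[:, 0, :]

# Hypothetical control points extracted from a leaf boundary
control = np.array([[0, 0], [2, 5], [6, 5], [8, 0]], dtype=float)
curve = bezier_points(control)
# FFT magnitudes of the complex boundary signature as a compact feature
signature = curve[:, 0] + 1j * curve[:, 1]
feature = np.abs(np.fft.fft(signature))[:16]  # keep low-frequency terms
print(feature.shape)  # feed this vector to a K-NN classifier
```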
12

Cherrier, Pierre, Sebastian Lentz, Jana Moser, and Laura Pflug. "Maps under the global condition: a new tool to study the evolution of cartographic language". Abstracts of the ICA 1 (July 15, 2019): 1–4. http://dx.doi.org/10.5194/ica-abs-1-44-2019.

Abstract:
Maps are a means of communication with their own language. This contribution makes a methodological proposal for a tool to analyse the cartographic language of thematic maps and atlases. Based on the work of Jacques Bertin and on approaches from Visual Studies, this methodology works on decoding maps in terms of their basic elements, the signs and graphic objects that compose them. As a tool it should allow comparative research on cartographic productions, both synchronically and diachronically. It suggests two analytical schemes, one for maps and the other for complex map editions, e.g. atlases.

On the example of spatial entities (state territories, natural areas, etc.), the first part of this contribution introduces the semiotic analysis scheme for thematic maps. It shows how to deal systematically with signs, signatures and graphic objects on maps. Such analyses should lay the foundation for comparative approaches, which allow typical patterns in cartography to be detected and elements of cartographic languages to be identified.

We are interested in the cartographic languages of maps used in atlases. To do this we have chosen a quantitative analysis of the visual content: maps, diagrams and images. The quantitative method makes it possible to analyse a large corpus of maps and atlases, and thus to compare contents both diachronically and synchronically, i.e. in time and in space. This is an approach relatively rarely used in cartography: there are few studies that produce a quantitative analysis of cartographic content. Among the existing ones are that of Alexander Kent and especially that of Muehlenhaus on the Goode atlas series, and we follow in the footsteps of these studies. To do this, we decided to adopt a semiological approach to the study of maps. Of course, we cannot talk about maps and semiology without mentioning Jacques Bertin and his book Graphic Semiology: Diagrams, Networks, Maps (1963), in which he tried to define a "grammar" by establishing rules of good cartographic practice, even if the book is not exclusively reduced to the map.

The book itself does not contain any reference, but it can be said that graphic semiology is itself derived from linguistic semiology, developed in particular by Ferdinand de Saussure. However, although Bertin's work has influenced many cartographers in the design of maps, the method has been little used in cartographic analysis itself. Semiology is an approach that has been used mainly in the analysis of images and diagrams rather than in cartography, although it is true that iconographic analysis studies in semiology claim Barthes and Saussure more than Bertin.

The map can also be considered as an image, and several iconographic analysis studies have thus integrated the map as an object of study. This is the case, for example, of Engelhardt who, in his thesis "The Language of Graphics: A Framework for the Analysis of Syntax and Meaning in Maps, Charts and Diagrams" (2002), focuses on several types of iconography, even if the map remains a central element of his analysis. Another example is the work of André Lavarde, who in his article "La flèche : le signe qui anime les schémas" (1996) focuses on the history of the use of the arrow in diagrams, while evoking its use in geographical maps. There are therefore bridges between iconographic and cartographic analysis.

This research is therefore a continuation of the work of Bertin, Muehlenhaus and, to a certain extent, Engelhardt. The coding system we have developed for our cartographic analysis is divided into three parts, themselves divided into several categories. Each category corresponds to a column in the table. From there, there are two ways to fill in the columns: in the first case, by filling in the field with the requested information, such as the title of a map; in the second case, by entering 0, 1, or 2, corresponding to the absence, presence or uncertainty of the requested information. So if the coded map uses the Mercator projection, 1 is entered in the column "map projection: cylindrical projection" and 0 in the column "map projection: compromise".

The table is composed of three parts. The first part concerns the general information of the coded map (image 1): for example, the name of the atlas, the page, and the chapter in which the map is located. Then more general information about the map itself is coded, such as its title, theme, scale, type of projection used, etc. This makes it possible to collect a set of basic data. It should be noted that, as mentioned above, we do not only code maps but also other forms of visual representation of space found in atlases, for example images, satellite photos or diagrams that can represent different geographical areas. If the coded object is not a map, this is specified; there is a category provided for this purpose. When coding such objects, cartography-specific elements, such as map projection, are not taken into account. Not all the columns in our table are intended to be filled for each coded map or image; the codification process is therefore flexible. Although the code does not focus only on maps, they represent the vast majority of the content of the atlases studied, which is why we refer to the "map" rather than to the "visual representation of space". However, even if they are in the minority, it is important in the analysis to take into account representations of space other than cartography.

The second part of the table focuses on the signs used by the maps. First of all, we have chosen to divide them into three categories: symbols related to the point, the line and the surface. These are the three elementary figures of geometry that Bertin calls implantations, and it is from these three types of locations that the different symbols are created. We distinguish between thematic symbols, which illustrate the theme of the map and convey its message, and background symbols, which help the reader to orientate himself in space. This is the case, for example, of the equator's path, which is rarely thematic, but rather serves as a geographical point of reference. Of course, the thematic symbols vary according to the theme of the map. Thus, territorial borders can be considered thematic on a political map, but will be considered background information on a map representing global forest cover. The purpose of this part is to capture as much content as possible on the elements that make a map.

The third and last part of the table refers to visual variables. To be interested in visual variables is to be interested in the interactions between symbols, and it is on this part that we rely most on Bertin's work. We have thus taken five of the seven variables he defined. The orientation and the two dimensions of the plane were excluded from our study because they are constant in cartographic production; it would therefore be irrelevant to record them each time. This is not the case for the remaining components: size, value, texture, shape, and colour. These are elements that may be present in cartography but are not individually necessary. These visual variables form the basic grammar of the "cartographic language". Studying the visual variables is a way for us to observe how the different signs interact with each other and to see how information is conveyed. These visual rules were established in the 1960s, so the relevance of using this framework to study historical maps can be questioned. But Bertin did not design his rules from scratch; he relied on previous mapping practices. It is therefore interesting to observe how often they have been used.

The second analytical scheme deals with map themes and regional structures of atlases. Using principles of Visual Studies, it suggests observing atlases as a whole as cultural products, each subject to a visual programme that determines the frameworks of its expressions and its claim to representativeness. By systematically comparing elements like projection, scale, map themes, regional sequences, etc., one may unveil the specific interpretations of world views contained in an atlas's concept. As some atlases are published in a long series of editions, they become interesting research objects in an evolutionary perspective.

In a diachronic perspective, the coding scheme suggested here, focussing on themes and regional subdivisions of atlases, lays the foundation for longitudinal studies. Both methodological parts should make cartographic and atlas studies more compatible with cultural and historical research approaches.

Taking the example of a few maps from French atlases from the nineteenth century to the early 2000s, the second part of this contribution aims to give an idea of how this methodology can be used to study the evolution of cartographic language over time under the influence of the global condition, how French cartographers faced the challenge of representing a growing interconnected world, and which graphical tools they developed.
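
A minimal sketch of what one row of such a coding table might look like in practice, with the 0/1/2 convention; all column names and values below are hypothetical, invented to mirror the three-part scheme described above:

```python
import csv

# Hypothetical record following the three parts of the coding system:
# (1) general information, (2) signs, (3) visual variables, where
# 0 = absent, 1 = present, 2 = uncertain for the coded fields.
row = {
    "atlas": "Atlas Universel", "page": 12, "title": "Le Monde",
    "is_map": 1,
    "projection_cylindrical": 1, "projection_compromise": 0,
    "symbol_point_thematic": 1, "symbol_line_background": 1,
    "var_size": 1, "var_value": 0, "var_texture": 2,
    "var_shape": 1, "var_colour": 1,
}
with open("map_coding.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=row.keys())
    writer.writeheader()
    writer.writerow(row)  # one coded map per row, ready for comparison
```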
13

"Image Retrieval with Fusion of Thepade’s Sorted Block Truncation Coding n-ary based Color and Local Binary Pattern based Texture Features with Different Color Places". International Journal of Innovative Technology and Exploring Engineering 9, nr 5 (10.03.2020): 28–34. http://dx.doi.org/10.35940/ijitee.e1963.039520.

Abstract:
In recent years there has been gigantic growth in the generation of data. Innovations such as the Internet, social media, and smartphones are the facilitators of this information boom. Since ancient times, images have been treated as an effective mode of communication, and even today most of the data generated is image data. The technology for capturing, storing, and transferring images is well developed, but efficient image retrieval is still a primitive area of research. Content-Based Image Retrieval (CBIR) is one such area where a lot of research is still going on. CBIR systems rely on three aspects of image content, namely texture, shape, and color. Application-specific CBIR systems are effective, whereas generic CBIR systems are still being explored. Previously, descriptors were used to extract shape, color, or texture features individually; the effect of using more than one descriptor is still under research and may yield better results. The paper presents the fusion of TSBTC n-ary (Thepade's Sorted n-ary Block Truncation Coding) global color features and Local Binary Pattern (LBP) local texture features in content-based image retrieval with different color spaces. TSBTC n-ary devises global color features from an image; it is a faster and better technique than Block Truncation Coding and is also rotation and scale invariant. When applied to an image, TSBTC n-ary gives a feature vector based on the color space; when applied to the LBP (Local Binary Patterns) of the image color planes, the obtained feature vector is based on local texture content. Along with RGB, luminance-chromaticity color spaces like YCbCr and Kekre's LUV are also used in the experimentation of the proposed CBIR techniques. The Wang dataset, consisting of 1000 images (10 categories of 100 images each), has been used for exploration of the proposed method. The obtained results show performance improvement using the fusion of TSBTC n-ary global color features and local texture features extracted with TSBTC n-ary applied on Local Binary Patterns (LBP).
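
One reading of TSBTC n-ary from this abstract: sort each colour plane's pixel values and average n equal chunks, yielding n centroids per plane as a global colour feature. The sketch below encodes that reading and is an assumption, not the authors' reference implementation:

```python
import numpy as np

def tsbtc_nary(plane: np.ndarray, n: int = 3) -> np.ndarray:
    """Sorted n-ary BTC feature: sort the plane's pixels and take the
    mean of each of the n equal-sized sorted chunks (n centroids)."""
    v = np.sort(plane.ravel())
    return np.array([chunk.mean() for chunk in np.array_split(v, n)])

img = np.random.randint(0, 256, (64, 64, 3))
feature = np.concatenate([tsbtc_nary(img[..., k]) for k in range(3)])
print(feature)  # 3 planes x n centroids; sorting gives rotation/scale invariance
```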
14

Ramya, R., and C. Kalaiselvan. "Feature Extraction in Content based Image Retrieval". International Journal of Business Intelligent 8, no. 1 (June 11, 2019). http://dx.doi.org/10.20894/ijbi.105.008.001.002.

Abstract:
A technique for Content-Based Image Retrieval (CBIR) generates an image content descriptor by exploiting the advantage of low-complexity Ordered Dither Block Truncation Coding (ODBTC). The quantizers and the bitmap image are the compressed form of an image obtained from the ODBTC technique in the encoding step; decoding is not performed in this method. Two image features, the Color Co-occurrence Feature (CCF) and the Bit Pattern Feature (BPF), are used for indexing the image; these features are obtained directly from the ODBTC encoded data stream. Compared with the BTC image retrieval system and other earlier methods, the experimental results show that the proposed method is superior. ODBTC is suited for image compression, and it is a simple and effective descriptor for indexing images in a CBIR system. Content-based image retrieval is a technique used to extract images on the basis of their content, such as texture, color, shape, and spatial layout. In order to minimize this gap, many concepts have been introduced; images can be stored and retrieved based on various features, and one of the most prominent features is texture.
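
Classic ODBTC replaces the single BTC mean threshold with a dither array spread between the block minimum and maximum. A sketch for 4x4 blocks using a standard Bayer matrix (the matrix choice and block size are assumptions of this example):

```python
import numpy as np

# Standard 4x4 Bayer index matrix, normalized to (0, 1) thresholds
BAYER_4 = (np.array([[ 0,  8,  2, 10],
                     [12,  4, 14,  6],
                     [ 3, 11,  1,  9],
                     [15,  7, 13,  5]]) + 0.5) / 16.0

def odbtc_block(block: np.ndarray):
    """Ordered-dither BTC of one 4x4 block: the bitmap comes from a
    dither-array threshold spread between the block min and max."""
    lo, hi = float(block.min()), float(block.max())
    thresh = lo + BAYER_4 * (hi - lo)
    bitmap = block >= thresh
    return bitmap, hi, lo   # bitmap feeds BPF, (hi, lo) feed CCF

rng = np.random.default_rng(1)
block = rng.integers(0, 256, (4, 4))
bitmap, hi, lo = odbtc_block(block)
print(bitmap.astype(int), hi, lo)
```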
15

"Framework for Color and Texture Feature Fusion in Content Based Image Retrieval using Block Truncation Coding with Color Spaces". International Journal of Engineering and Advanced Technology 9, nr 3 (29.02.2020): 769–74. http://dx.doi.org/10.35940/ijeat.c5242.029320.

Abstract:
With tremendous growth in social media and digital technologies, the generation, storage, and transfer of huge amounts of information over the internet are on the rise. Images, as a visual mode of communication, have prevailed and been widely accepted for ages, and with the growth of the internet the rate at which images are generated is growing exponentially. But the methods used to retrieve images are still slow and inefficient compared to the rate of increase in image databases. To cope with this explosive increase in images, this information age has seen huge research advancement in Content-Based Image Retrieval (CBIR). CBIR systems provide a way of utilizing the three major ways in which content is portrayed in images, namely shape, texture, and color. In a CBIR system, features are extracted from the query image and their similarity with the features stored in the database is computed for retrieval. This provides an objective way of image retrieval, which is more efficient than subjective human annotation. Application-specific CBIR systems have been developed and perform really well, but generic CBIR systems are still underdeveloped. Block Truncation Coding (BTC) has been chosen as the feature extractor: BTC applied directly to the input image provides color-content features, and BTC applied after computing the LBP of the image provides texture-content features. Previous work uses either color, shape, or texture alone; the use of more than one descriptor is still under research and might give better performance. The paper presents a framework for color and texture feature fusion in content-based image retrieval using block truncation coding with color spaces. Experimentation is carried out on the Wang dataset of 1000 images in 10 classes of 100 images each. The obtained results show performance improvement using the fusion of BTC-extracted color features and texture features extracted with BTC applied on Local Binary Patterns (LBP). Conversion from the RGB color space to LUV is done using Kekre's LUV.
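
A compact sketch of the colour-texture fusion this framework describes: BTC quantizer statistics taken once on the raw channels (colour) and once on each channel's LBP map (texture), then concatenated. The block size and the pooling to two values per plane are simplifications for the example:

```python
import numpy as np

def btc_quantizers(plane: np.ndarray, bs: int = 4) -> np.ndarray:
    """Per-block BTC high/low means, averaged over the plane (2 values)."""
    hi, lo = [], []
    h, w = plane.shape
    for y in range(0, h - bs + 1, bs):
        for x in range(0, w - bs + 1, bs):
            blk = plane[y:y+bs, x:x+bs].astype(float)
            m = blk.mean()
            hi.append(blk[blk >= m].mean())
            lo.append(blk[blk < m].mean() if (blk < m).any() else m)
    return np.array([np.mean(hi), np.mean(lo)])

def lbp_map(plane: np.ndarray) -> np.ndarray:
    """8-neighbour LBP codes of the interior pixels (binary baseline)."""
    h, w = plane.shape
    c = plane[1:h-1, 1:w-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate([(-1, -1), (-1, 0), (-1, 1), (0, 1),
                                    (1, 1), (1, 0), (1, -1), (0, -1)]):
        codes |= (plane[1+dy:h-1+dy, 1+dx:w-1+dx] >= c).astype(np.uint8) << bit
    return codes

def fused_descriptor(rgb: np.ndarray) -> np.ndarray:
    """Colour features (BTC on raw channels) fused with texture features
    (BTC on each channel's LBP map), concatenated into one vector."""
    colour = np.concatenate([btc_quantizers(rgb[..., k]) for k in range(3)])
    texture = np.concatenate([btc_quantizers(lbp_map(rgb[..., k])) for k in range(3)])
    return np.concatenate([colour, texture])

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(fused_descriptor(img).shape)  # (12,) fused colour+texture vector
```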
16

Livani, Masoumeh, Hamidreza Saremi, and Mojtaba Rafieian. "Developing a conceptual model for the relationship between rituals and the city based on the Theory of Sense of Place: the case of the historical texture of Gorgan, Iran". urbe. Revista Brasileira de Gestão Urbana 13 (2021). http://dx.doi.org/10.1590/2175-3369.013.e20200139.

Abstract:
The aim of this study is to investigate how the city is influenced by the ritual of Muharram. The main research question is: what is the relationship between the city and the ritual of Muharram? To answer this question, we examined different intangible layers of this ritual heritage. The study is based on the three components of the sense of place. The research method is qualitative, and a context-oriented approach is adopted; the context of the study is the historical texture of the city of Gorgan, Iran. The data were collected through library research and direct observation. Content analysis and data coding were then used to obtain a set of thematic categories. The results suggest that, as a kind of ritual-social behavior, the ritual of Muharram has had remarkable, enduring effects on the city over centuries. The non-urban-development dimension has thus allowed for the formation of a sense of place in the relationship between people and the urban environment through a different process.
17

Campanioni, Chris. "How Bizarre: The Glitch of the Nineties as a Fantasy of New Authorship". M/C Journal 21, no. 5 (December 6, 2018). http://dx.doi.org/10.5204/mcj.1463.

Abstract:
As the ball dropped on 1999, is it any wonder that No Doubt played "It's the End of the World as We Know It" by R.E.M. live on MTV? Any discussion of the Nineties—and its pinnacle moment, Y2K—requires a discussion of both the cover and the glitch, two performative and technological enactments that fomented the collapse between author-reader and user-machine that has, twenty years later, become normalised in today's Post Internet culture. By staging failure and inviting the audience to participate, the glitch and the cover call into question the original and the origin story. This breakdown of normative borders has prompted the convergence of previously demarcated media, genres, and cultures, a constellation from which to recognise a stochastic hybrid form.

The Cover as a Revelation of Collaborative Murmur

Before Sean Parker collaborated with Shawn Fanning to launch Napster on 1 June 1999, networked file distribution existed as cumbersome text-based programs like Internet Relay Chat and Usenet, servers which resembled bulletin boards comprising multiple categories of digitally ripped files. Napster's simple interface, its advanced search filters, and its focus on music and audio files fostered a peer-to-peer network that became the fastest growing website in history, registering 80 million users in less than two years.

In harnessing the transgressive power of the Internet to force a new mode of content sharing, Napster forced traditional providers to rethink what constitutes "content" at a moment which prefigures our current phenomena of "produsage" (Bruns) and the vast popularity of user-generated content. At stake is not just the democratisation of art but troubling the very idea of intellectual property, which is to say, the very concept of ownership.

Long before the Internet was re-routed from military servers and then mainstreamed, Michel Foucault understood the efficacy of anonymous interactions on the level of literature, imagining a culture where discourse would circulate without any need for an author. But what he was asking in 1969 is something we can better answer today, because it seems less germane to call into question the need for an author in a culture in which everyone is writing, producing, and reproducing text, and more effective to think about re-evaluating the notion of a single author, or what it means to write by yourself. One would have to testify to the particular medium we have at our disposal, the Internet's ultimate permissibility, its provocations for collaboration and co-creation. One would have to surrender the idea that authors own anything besides our will to keep producing, and our desire for change; and to modulate means to resist without negating, to alter without omitting, to enable something new to come forward; the unfolding of the text into the anonymity of a murmur.

We should remind ourselves that "to author" all the way down to its Latin roots signifies advising, witnessing, and transferring. We should be reminded that to author something means to forget the act of saying "I," to forget it or to make it recede in the background in service of the other or others, on behalf of a community. The de-centralisation of Web development and programming initiated by Napster informs a poetics of relation, an always-open structure in which, as Édouard Glissant said, "the creator of a text is effaced, or rather, is done away with, to be revealed in the texture of his creation" (25). When a solid melts, it reveals something always underneath, something at the bottom, something inside—something new and something that was always already there. A cover, too, is both a revival and a reworking, an update and an interpretation, a retrospective tribute and a re-version that looks toward the future. In performing the new, the original as singular is called into question, replaced by an increasingly fetishised copy made up of and made by multiples.

Authorial Effacement and the Exigency of the Error

Y2K, otherwise known as the Millennium Bug, was a coding problem, an abbreviation made to save memory space which would disrupt computers during the transition from 1999 to 2000, when it was feared that the new year would become literally unrecognisable. After an estimated $300 billion in upgraded hardware and software was spent to make computers Y2K-compliant, something more extraordinary than global network collapse occurred as midnight struck: nothing.

But what if the machine admits the possibility of accident? Implicit in the admission of any accident is the disclosure of a new condition—something to be heard, to happen, from the Latin ad-cadere, which means to fall. In this drop into non-repetition, the glitch actualises an idea about authorship that necessitates multi-user collaboration; the curtain falls only to reveal the hidden face of technology, which becomes, ultimately, instructions for its re-programming. And even as it deviates, the new form is liable to become mainstreamed into a new fashion. "Glitch's inherently critical moment(um)" (Menkman 8) indicates this potential for technological self-insurgence, while suggesting the broader cultural collapse of generic markers and hierarchies, and its ensuing flow into authorial fluidity.

This feeling of shock, this move "towards the ruins of destructed meaning" (Menkman 29) inherent in any encounter with the glitch, forecasted not the immediate horror of Y2K, but the delayed disasters of 9/11, Hurricane Katrina, the Deepwater Horizon oil spill, the Indian Ocean tsunami, the Sichuan Province earthquake, the global financial crisis, and two international wars that would all follow within the next nine years. If, as Menkman asserts, the glitch, in representing a loss of self-control, "captures the machine revealing itself" (30), what also surfaces is the tipping point that edges us toward a new becoming—not only the inevitability of surrender between machine and user, but their reversibility. Just as crowds stood, transfixed before midnight of the new millennium in anticipation of the error, or its exigency, it's always the glitch I wait for; it's always the glitch I aim to re-create, as if on command. The accidental revelation, or the machine breaking through to show us its insides. Like the P2P network that Napster introduced to culture, every glitch produces feedback, a category of noise (Shannon) influencing the machine's future behaviour whereby potential users might return the transmission.

Re-Orienting the Bizarre in Fantasy and Fiction

It is in the fantasy of dreams, and their residual leakage into everyday life, evidenced so often in David Lynch's Twin Peaks, where we can locate a similar authorial agency. The cult Nineties psycho-noir, and its discontinuous return twenty-six years later, provoke us into reconsidering the science of sleep as the art of fiction, assembling an alternative, interactive discourse from found material.

The turning in and turning into in dreams is often described as an encounter with the "bizarre," a word which indicates our lack of understanding about the peculiar processes that normally happen inside our heads. Dreams are inherently and primarily bizarre, Allan J. Hobson argues, because during REM sleep our noradrenergic and serotonergic systems do not modulate the activated brain, as they do in waking. "The cerebral cortex and hippocampus cannot function in their usual oriented and linear logical way," Hobson writes, "but instead create odd and remote associations" (71). But is it, in fact, that our dreams are "bizarre" or is it that the model itself is faulty—a precept premised on the normative, its dependency upon generalisation and reducibility—what is bizarre if not the ordinary modulations that occur in everyday life?

Recall Foucault's interest not in what a dream means but what a dream does. How it rematerialises in the waking world and its basis in and effect on imagination. Recall recollection itself, or Erin J. Wamsley's "Dreaming and Offline Memory Consolidation." "A 'function' for dreaming," Wamsley writes, "hinges on the difficult question of whether conscious experience in general serves any function" (433). And to think about the dream as a specific mode of experience related to a specific theory of knowledge is to think about a specific form of revelation. It is this revelation, this becoming or coming-to-be, that makes the connection to crowd-sourced content production explicit—dreams serve as an audition or dress rehearsal in which new learning experiences with others are incorporated into the unconscious so that they might be used for production in the waking world. Bert O. States elaborates, linking the function of the dream with the function of the fiction writer "who makes models of the world that carry the imprint and structure of our various concerns. And it does this by using real people, or 'scraps' of other people, as the instruments of hypothetical facts" (28). Four out of ten characters in a dream are strangers, according to Calvin Hall, who is himself a stranger, someone I've never met in waking life or in a dream. But now that I've read him, now that I've written him into this work, he seems closer to me. Twin Peaks's serial lesson for viewers is this—even the people who seem strangers to us can interact with and intervene in our processes of production.

These are the moments in which a beginning takes place. And even if nothing directly follows, this transfer constitutes the hypothesised moment of production, an always-already perhaps, the what-if stimulus of charged possibility; the soil plot, or plot line, for freedom. Twin Peaks is a town in which the bizarre penetrates the everyday so often that eventually the bizarre is no longer bizarre, but just another encounter with the ordinary. Dream sequences are common, but even more common—and more significant—are the moments in which what might otherwise be a dream vision ruptures into real life; these moments propel the narrative.

Exhibit A: A man who hasn't gone outside in a while begins to crumble, falling to the earth when forced to chase after a young girl, who's just stolen the secret journal of another young girl, which he, in turn, had stolen.

B: A horse appears in the middle of the living room after a routine vacuum cleaning and a subtle barely-there transition, a fade-out into a fade-in, what people call a dissolve. No one notices, or thinks to point out its presence. Or maybe they're distracted. Or maybe they've already forgotten. Dissolve. (I keep hitting "Save As." As if renaming something can also transform it.)

C: All the guests at the Great Northern Hotel begin to dance the tango on cue—a musical, without any music.

D: After an accident, a middle-aged woman with an eye patch—she was wearing the eye patch before the accident—believes she's seventeen again. She enrolls in Twin Peaks High School and joins the cheerleading team.

E: A woman pretending to be a Japanese businessman ambles into the town bar to meet her estranged husband, who fails to recognise his cross-dressing, race-swapping wife.

F: A girl with blond hair is murdered, only to come back as another girl, with the same face and a different name. And brown hair. They're cousins.

G: After taking over her dead best friend's Meals on Wheels route, Donna Hayward walks in to meet a boy wearing a tuxedo, sitting on the couch with his fingers clasped: a magician-in-training. "Sometimes things can happen just like this," he says with a snap while the camera cuts to his grandmother, bed-ridden, and the appearance of a plate of creamed corn that vanishes as soon as she announces its name.

H: A woman named Margaret talks to and through a log. The log, cradled in her arms wherever she goes, becomes a key witness.

I: After a seven-minute diegetic dream sequence, which includes a one-armed man, a dwarf, a waltz, a dead girl, a dialogue played backward, and a significantly aged representation of the dreamer, Agent Cooper wakes up and drastically shifts his investigation of a mysterious small-town murder. The dream gives him agency; it turns him from a detective staring at a dead-end to one with a map of clues. The next day, it makes him a storyteller; all the others, sitting tableside in the middle of the woods, become a captive audience. They become readers. They read into his dream to create their own scenarios. Exhibit I. The cycle of imagination spins on.

Images re-direct and obfuscate meaning, a process of over-determination which Foucault says results in "a multiplication of meanings which override and contradict each other" (DAE 34). In the absence of image, the process of imagination prevails. In the absence of story, real drama in our conscious life, we form complex narratives in our sleep—our imaginative unconscious. Sometimes they leak out, become stories in our waking life, if we think to compose them.

"A bargain has been struck," says Harold, an under-5 bit player, later, in an episode called "Laura's Secret Diary." So that she might have the chance to read Laura Palmer's diary, Donna Hayward agrees to talk about her own life, giving Harold the opportunity to write it down in his notebook: his "living novel," the new chapter of which reads, after uncapping his pen and smiling, "Donna Hayward." He flips to the front page and sets a book weight to keep the page in place. He looks over at Donna sheepishly. "Begin."

Donna begins talking about where she was born, the particulars of her father—the lone town doctor—before she interrupts the script and asks her interviewer about his origin story. Not used to people asking him the questions, Harold's mouth drops and he stops writing. He puts his free hand to his chest and clears his throat. (The ambient, wind-chime soundtrack intensifies.) "I grew up in Boston," he finally volunteers. "Well, actually, I grew up in books." He turns his head from Donna to the notebook, writing feverishly, as if he's begun to write his own responses, as the camera cuts back to his subject, Donna, crossing her legs with both hands cupped at her exposed knee, leaning in to tell him: "There's things you can't get in books."

"There's things you can't get anywhere," he returns, pen still in his hand. "When we dream, they can be found in other people."

What is a call to composition if not a call for a response? It is always the audience which makes a work of art, re-framed in our own image, the same way we re-orient ourselves in a dream to negotiate its "inconsistencies." Bizarreness is merely a consequence of linguistic limitations, the overwhelming sensory dream experience which can only be re-framed via a visual representation. And so the relationship between the experience of reading and dreaming is made explicit when we consider the associations internalised in the reader/audience when ingesting a passage of words on a page or on the stage, objects that become mental images and concept pictures, a lens of perception that we may liken to another art form: the film, with its jump-cuts and dissolves, so much like the defamiliarising and dislocating experience of dreaming, especially for the dreamer who wakes. What else to do in that moment but write about it?

Evidence of the bizarre in dreams is only evidence of the capacity of our human consciousness at work in the unconscious; the moment in which imagination and memory come together to create another reality, a spectrum of reality that doesn't posit a binary between waking and sleeping, a spectrum of reality that revels in the moments where the two coalesce, merge, cross-pollinate—and what action glides forward in its wake? Sustained un-hesitation and the wish to stay inside one's self. To be conscious of the world outside the dream means the end of one. To see one's face in the act of dreaming would require the same act of obliteration. Recognition of the other, and of the self, prevents the process from being fulfilled. Creative production and dreaming, like voyeurism, depend on this same lack of recognition, or the recognition of yourself as other. What else is a dream if not a moment of becoming, of substituting or sublimating yourself for someone else?

We are asked to relate a recent dream or we volunteer an account, to a friend or lover. We use the word "seem" in nearly every description, when we add it up or how we fail to. Everything seems to be a certain way. It's not a place but a feeling. James, another character on Twin Peaks, says the same thing, after someone asks him, "Where do you want to go?" but before he hops on his motorcycle and rides off into the unknowable future outside the frame. Everything seems like something else, based on our own associations, our own knowledge of people and things. Offline memory consolidation. Seeming and semblance. An uncertainty of appearing—both happening and seeing. How we mediate—and re-materialise—the dream through text is our attempt to re-capture imagination, to leave off the image and better become it. If, as Foucault says, the dream is always a dream of death, its purpose is a call to creation.

Outside of dreams, something bizarre occurs. We call it novelty or news. We might even bestow it with fame. A man gets on the wrong plane and ends up halfway across the world. A movie is made into the moment of his misfortune. Years later, in real life and in movie time, an Iranian refugee can't even get on the plane; he is turned away by UK immigration officials at Charles de Gaulle, so he spends the next sixteen years living in the airport lounge; when he departs in real life, the movie (The Terminal, 2004) arrives in theaters. Did it take sixteen years to film the terminal exile? How bizarre, how bizarre. OMC's eponymous refrain of the 1996 one-hit wonder, which is another way of saying, an anomaly.

When all things are counted and countable in today's algorithm-rich culture, deviance becomes less of a statistical glitch and more of a testament to human peculiarity; the repressed idiosyncrasies of man before machine but especially the fallible tendencies of mankind within machines—the non-repetition of chance that the Nineties emblematised in the form of its final act. The point is to imagine what comes next; to remember waiting together for the end of the world. There is no need to even open your eyes to see it. It is just a feeling.

References

Bruns, Axel. "Towards Produsage: Futures for User-Led Content Production." Cultural Attitudes towards Technology and Communication 2006: Proceedings of the Fifth International Conference, eds. Fay Sudweeks, Herbert Hrachovec, and Charles Ess. Murdoch: School of Information Technology, 2006. 275-84. <https://eprints.qut.edu.au/4863/1/4863_1.pdf>.
Foucault, Michel. "Dream, Imagination and Existence." Dream and Existence. Ed. Keith Hoeller. Pittsburgh: Review of Existential Psychology & Psychiatry, 1986. 31-78.
———. "What Is an Author?" The Foucault Reader: An Introduction to Foucault's Thought. Ed. Paul Rabinow. New York: Penguin, 1991.
Glissant, Édouard. Poetics of Relation. Trans. Betsy Wing. Ann Arbor: U of Michigan P, 1997.
Hall, Calvin S. The Meaning of Dreams. New York: McGraw Hill, 1966.
Hobson, J. Allan. The Dream Drugstore: Chemically Altered States of Consciousness. Cambridge: MIT Press, 2001.
Menkman, Rosa. The Glitch Moment(um). Amsterdam: Network Notebooks, 2011.
Shannon, Claude Elwood. "A Mathematical Theory of Communication." The Bell System Technical Journal 27 (1948): 379-423.
States, Bert O. "Bizarreness in Dreams and Other Fictions." The Dream and the Text: Essays on Literature and Language. Ed. Carol Schreier Rupprecht. Albany: SUNY P, 1993.
Twin Peaks. Dir. David Lynch. ABC and Showtime. 1990-3 & 2017.
Wamsley, Erin. "Dreaming and Offline Memory Consolidation." Current Neurology and Neuroscience Reports 14.3 (2014): 433.
"Y2K Bug." Encyclopedia Britannica. 18 July 2018. <https://www.britannica.com/technology/Y2K-bug>.
18

Loess, Nicholas. "Augmentation and Improvisation". M/C Journal 16, no. 6 (7 Nov. 2013). http://dx.doi.org/10.5204/mcj.739.

Full text source
Abstract:
Preamble: Medium/Format/Marker

Medium/Format/Marker (M/F/M) was a visual-aural improvisational performance involving myself and the musicians Joe Sorbara and Ben Grossman. It was formed through my work as a PhD candidate at the Improvisation, Community, and Social Practice research initiative at the University of Guelph. The performance was conceived as an attempted intervention against the propensity to reify the “new.” It also sought to address the proliferation of the screen and to question how the increased presence of screens in everyday life has augmented the way in which an audience is conceived and positioned. This conception is in direct conversation with my thesis, a practice-based research project exploring what the experimental combination of intermediality, improvisation, and the cinema might offer towards developing a reflexive approach to “new” media, screen culture, and expanded cinemas.

One of the ways I chose to explore this area involved developing an interface that allowed an audio-visual ensemble to improvise with a film’s audio-visual projection. I experimented with different VJ programs. These programs often utilize digital filters and effects to alter images through real-time mixing and layering, much as a DJ does with sound. I found a program developed by the Chicago-based artist Ontologist called Ontoplayer, which he developed out of his practice as an improvisational video artist. The program works through a dual-channel interface in which two separate digital files can be augmented, their projected tempo determined by the musicians through a MIDI interface.

I conceptualized the performance around the possibility of networking myself with two other musicians via this interface. I approached percussionist Joe Sorbara and multi-instrumentalist Ben Grossman with the idea of using Ontoplayer as a means to improvise with Chris Marker’s La Jetée (1962, 28 mins). The film itself would be projected simultaneously in four different formats: 16mm celluloid, VHS, Blu-ray, and standard-definition video (the format the ensemble improvised with), each projected onto a separate screen. From left to right, the first screen contained the projected version of La Jetée that we improvised with; next to it, its Blu-ray format; next to that, a degraded VHS copy of the film; and next to that, the 16mm print.

The performance materialized through a number of improvisatory experiments. A last-minute experiment, conceived a few hours before the performance, involved placing contact microphones over the motor of a Bell & Howell 16mm projector. The projector was tested in the days leading up to the performance and ran as smoothly as could be expected. It had a nice cacophonous hum that Ben Grossman intended to improvise with, using contact mics attached directly over the projector’s motor, a $5 iPad app, and his hurdy-gurdy.

Fifteen minutes before the performance began, the three of us huddled to discuss how long we’d like to go. We had met briefly the day before to discuss the technical setup of the performance, but not its execution or length. I hadn’t considered duration. Joe broke the silence by asking if we’d be “finding beginnings and endings.” I didn’t know what that entailed, but nodded. We started. I turned on the projector and it immediately started to cough and chew on the forty-year-old 16mm print I had found online. My first impulse was to intervene, to try to save it. The film continued and I sat frozen for a moment.
Joe started playing, and Ben, expecting me to send him the audio track from La Jetée, prompted me to do so. I let the projector go and began. Joe had a digital kick-drum and two contact mics on his drum kit hooked into a MIDI hub, while Ben’s hurdy-gurdy had a contact mic inside it, wired into the hub. The hub hooked into my laptop and allowed an intermedial conversation to emerge between the three of us. While the 16mm, VHS, and Blu-ray formats proceeded relatively unimpeded alongside each other on their respective screens, the fourth screen was where this conversation took place.

I digitally reordered different image sequences from La Jetée. The fact that it’s a film comprised (almost) entirely of still images made this reordering intriguing, in that I was able to control the speed of progressing from each image to the next. The movement from image to image was structured between Ben and Joe’s improvisations and the kinds of effects and filters I had initialized. Ontoplayer has a number of effects and filters that push the base image into more abstract territories (e.g., geometric shapes, over-pixelation) which I was uninterested in exploring. I utilized effects that, to some degree, still kept the representational content of the image intact. The degree to which these effects took hold of the image was determined by whether or not Ben and Joe decided to use the part of their instrument that would trigger them. The decision to linger on an image, colour it differently, or skip ahead in the film’s real-time projection destabilized my sense of where I was in the film. It became an event in the sense that each movement, both visual and aural, was happening with an indeterminate duration.

La Jetée opens with the narrator proclaiming: “this is the story of a man marked by an image from his childhood.” The story is situated around a man in a post-apocalyptic world, haunted by the persistent memory of a woman he saw as a child while standing on the jetty at Orly Airport in Paris. The man was a soldier, now captured and imprisoned in an underground camp. The prison guards have been conducting experiments on the prisoners, attempting to use the prisoners’ memories as a mechanism to send them backwards and forwards in time. The narrator explains, “with the surface of the planet irradiated … The human race was doomed. Space was off limits. The only link with survival passed through time … The purpose of the experiments was to throw emissaries into time, to call the past and future to the aid of the present.”

La Jetée is visually structured as a photomontage, with voice-over narration and diegetic and non-diegetic sound existing as component parts of the whole film. I decided to separate these components, isolating them before the performance as instruments of the film to be improvisationally deployed through the intermedial connection between Ben, Joe, and myself. The resulting projections that emerged from our interface became a kind of improvised “grooving” to La Jetée that restricted the impulse to discriminately place sound beneath and behind the image. I selected images from different points in the film that felt “timely” given the changing dynamic between the three of us. I remember lingering on an image of the woman’s face, her hand against her mouth, her hair blown back by the wind. I looked and listened for the moment when the film would catch and then catch fire. It never came. We let the reel run to the end and continued improvising until we found an ending.
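The essay does not document Ontoplayer’s internals, so what follows is only a hedged sketch of the routing it describes: percussion hits and contact-mic signals arriving over MIDI and steering a video channel’s tempo and degree of treatment. It is written in Python with the mido library; the note number, CC number, class names, and mapping logic are all invented for illustration, not Ontologist’s actual implementation.

# Hypothetical sketch of a MIDI-driven video interface in the spirit of
# the performance described above: percussion hits advance the image
# sequence, a continuous controller sets how strongly an effect bites.
# All numbers and names are assumptions made for illustration.
import mido

KICK_NOTE = 36   # assumed note number for the digital kick-drum
EFFECT_CC = 21   # assumed CC number carrying the contact-mic envelope

class VideoChannel:
    """Stands in for one dual-channel-player lane of still images."""
    def __init__(self, image_paths):
        self.images = image_paths
        self.index = 0
        self.effect_amount = 0.0  # 0.0 = untouched image, 1.0 = fully treated

    def advance(self, velocity):
        # Harder hits skip further through the reordered sequence.
        step = 1 + velocity // 64
        self.index = (self.index + step) % len(self.images)

    def set_effect(self, cc_value):
        # MIDI CC values run 0-127; normalise to 0.0-1.0.
        self.effect_amount = cc_value / 127.0

def run(port_name, channel):
    # Block on incoming MIDI and translate gestures into image movement.
    with mido.open_input(port_name) as port:
        for msg in port:
            if msg.type == "note_on" and msg.note == KICK_NOTE:
                channel.advance(msg.velocity)
            elif msg.type == "control_change" and msg.control == EFFECT_CC:
                channel.set_effect(msg.value)
            # Rendering/projection of channel.images[channel.index]
            # would happen elsewhere, once per frame.

The only design point the sketch tries to honour is the one the abstract insists on: the musicians’ gestures, rather than a fixed timeline, decide when the next still appears and how far its treatment departs from the representational image.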
But the sound of that film catching but never breaking, the intention and tension of the film being near death the entire time, made everything we did more precious, teetering on the brink of failure. We could never have predicted that, and it gave us something I continue to ponder and be thankful for. Celluloid junkies in the room commented on how precipitous the whole thing was, given how rare it is to encounter the sound of celluloid film travelling through a projector inside a cinematic space. An audiophile mused over how there wasn’t any document, his mind adequately blown by how “funky” the projector sounded. With there being no document of the performance, I am left with my own memories. In mining the aftermath of this performance, I hope to find an addendum that considers how improvisation might negotiate with augmentation in ways that speak to Walter Benjamin’s assertion that the “camera, the film, on the one hand, extends our comprehension of the necessities which rule our lives; on the other hand, it manages to assure us of an immense and unexpected field of action” (Benjamin 236-7).

Images to be Determined

I got a job working in a photo lab eight years ago, right around the time digital cameras started becoming not only affordable but technologically comparable alternatives to film cameras. The photo printer in the lab was set up to scan and digitize celluloid filmstrips to allow for digital “touchups” by the technician. It was also hooked into touchscreen media stations that accepted a variety of memory-card formats so that customers could “touch up” their own images. Celluloid film meant that, as long as the format was chemical, touching up images remained the task of the technician. Against the urging of the lab’s manager, I resisted altering other people’s images. It felt like a violation, despite the fact that almost every customer was unaware of this process. They assumed a degree of responsibility for a chemically exposed image. I still got blamed for a lot of bad photography, but an image chemically under- or overexposed was irreparable. Digital cameras changed all of that. I still preferred an evenly exposed celluloid print to a digital one, but the allure was the ability for these images to be augmented.

Augmentation is synonymous with “enhancement,” “prosthesis,” “addition,” “amplification,” “enrichment,” “expansion,” and “extension” (to name a few). For the purposes of this essay, I situate augmentation as an agential act engaging with a static form to purposefully alter its aesthetic and political relation to a reality. To what extent can we say that the digital image is itself an augmentation? If Instagram is any indication, the digital image’s existence is bound by its perpetual augmentation. A digital image is only as good as its capacity to be worked on. The ubiquity of digitally applying lomographic filters to digital images, as a defining step in their distributive chain, is indicative of the discursive impact that remediating the old into the new has on digital forms. These digitally coded filters, used to augment “clear” digital images, are comprised of exaggerated imperfections that existed, to varying degrees, as unforeseen side effects of working with comparatively more unstable celluloid textures. The filtered images themselves are digital distortions of a digital original. The filters augment this original by obscuring one or a number of its components.
Some filters might exaggerate the green values or sharpen a particular quadrant within the frame to coincide with the look of a particular film stock from the past. The discourse of “film” and “vintage” photography has become a synonymous component of the digital aesthetic, discursively warming up what is often considered a cold and disembodied medium. Augmentation works to re-establish a congruous relationship between the filmic and the digital, attempting to reconcile the aesthetic distance between granularity and pixelation. This is ironic, because the process is encapsulated in digitally encoding and applying these filters for the sake of obscuring clarity. Thus, the object is hailed as both clear and clearly manipulable.

Another example, a bit closer to the cinema, is the development of digital video cameras offering RAW, or minimally compressed, file formats for the sole purpose of augmenting the initial recording in post-production workflows, in an attempt to minimize degradation in the image. The colour values and dynamic range of these images are muted, or flattened, so that a human can control their elevation after the fact. To some degree the initial image is, in itself, an augmentation of its filmic relative.

From early experiments with video synthesizers to the present digital coding of film effects, digital images have tantalized video artists and filmmakers with possibility shrouded in instantaneity and malleability. A key problem with this structure remains the unbridled proliferation and expansion of the digital image, set free for the sake of newness. How might improvisation work towards establishing an ethics of augmentation? An ethics of this kind must disrupt the popular notion that the digital image exists beyond analogical constraints. The belief that “if you can imagine it, you can do it” obfuscates the reality that to work with images, whatever their texture, is a negotiation with constraint.

Part of M/F/M’s fruition emerged from a conversation I had with the Canadian animator Pierre Hébert last summer. Now obvious, but for Hébert the first obstacle he needed to overcome as an improviser was developing an instrument he could gig with. Through the act of designing an instrument, I immediately became aware of what wasn’t possible, and so the work leading up to the performance involved attempting to expand the possibilities of that instrument. How might I conceive of my own treatment of images, simultaneously treated by Joe and Ben, as a kind of cinematic extended technique we collaboratively bring into being? Constraint necessitates extension, finding new ways to sound and appear. Constraint is also consistently conceived as shackling progress. In scientific methodologies it is often arbitrarily imposed to steer an experiment in a desired direction. This sort of experimental methodology is in the business of presupposing outcomes, which I feel is often the case with what ultimately becomes the end-result essay of Humanities research. Constraint is an important imposition in improvisation only if the parties involved are willing to find new ways to move in consort with it. The act of improvisation is thus an engagement with the spatio-temporal constraints of performance, politics, memory, texture, and difference. My conception of the cinema is that of an instrument whose past is what I work with to better understand its future.
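To make the preceding description concrete: a “lomographic” filter of the kind sketched above (an exaggerated green channel, a lifted and flattened tone curve, a vignette, grain) reduces to a few lines of array arithmetic. The following is an illustrative sketch in Python with NumPy and Pillow; the parameter values are invented and correspond to no actual film stock or app preset.

# A toy "lomographic" filter: digitally coded imperfections layered onto
# a clean digital original. All parameter values are invented.
import numpy as np
from PIL import Image

def lomo(path, out_path):
    # Load as float RGB in the 0.0-1.0 range.
    img = np.asarray(Image.open(path).convert("RGB")).astype(np.float32) / 255.0
    h, w, _ = img.shape

    # Exaggerate the green channel, in the manner of a cross-processed stock.
    img[..., 1] = np.clip(img[..., 1] * 1.15, 0.0, 1.0)

    # Lift the blacks and compress the highlights: a flattened tone curve
    # that obscures rather than clarifies.
    img = 0.05 + 0.9 * img ** 1.1

    # Vignette: darken toward the corners, an imperfection once produced
    # by cheap plastic lenses, here synthesised as a radial falloff.
    y, x = np.ogrid[:h, :w]
    r = np.sqrt((x - w / 2) ** 2 + (y - h / 2) ** 2)
    r /= np.sqrt((w / 2) ** 2 + (h / 2) ** 2)
    img *= (1.0 - 0.4 * r ** 2)[..., None]

    # Grain: film's chemical noise, simulated as a Gaussian layer.
    img += np.random.normal(0.0, 0.015, img.shape)

    Image.fromarray((np.clip(img, 0, 1) * 255).astype(np.uint8)).save(out_path)

The irony noted above is legible in code like this: every “imperfection” is a deterministic, parameterized operation, so the image is hailed as clear precisely by being clearly manipulable.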
The critic Gene Youngblood, in his landmark book Expanded Cinema, theorized a new conception of the cinema as a global, planetary phenomenon suffused inside a space of intermedia, where immersive, interactive, and interconnected realms necessitated the need to critically conceptualise the cinema in cosmic terms. At around the time of Youngblood’s writing, another practitioner of the cosmic way, the improviser and composer Sun Ra, was staking a similar claim for music’s ability to uplift the species cosmically. Ra’s popular line, “If we came from nowhere here, why can’t we go somewhere there?” (Heble 125), articulated the problematic racial politics of post-WWII America, which fixed African-American identity in a static domain with little room to move upward. The “somewhere there,” to Ra, was a non-space created from “a desire to opt out of the very codes of representation and intelligibility, the very frameworks of interpretation and assumption which have legitimated the workings of dominant culture” (Heble 125). Though Youngblood’s and Ra’s intellectual and creative impulses formed from differing political circumstances, the work and thinking of these two figures remain significant articulations of the need to work from, and towards, the cosmic.

In 2003, Youngblood published a follow-up essay, “Cinema and the Code,” in a reprint of Expanded Cinema. In it, he defines cinema as a “phenomenology of the moving image.” Rather than conceiving of it through any of its particular media, Youngblood advocates for a segregated conception of the cinema:

Just as we separate music from its instruments. Cinema is the art of organizing a stream of audiovisual events in time. It is an event-stream, like music. There are at least four media through which we can practice cinema – film, video, holography, and structured digital code – just as there are many instruments through which we can practice music. (Youngblood cited in Marchessault and Lord 7)

Music and cinema are thus conceived as the exterior consequences of creative and co-creative instrumental experimentation. For Ra and Youngblood, the planetary stakes of this project are infused with the need to manufacture and occupy an imaginative space (if only for a moment) outside of the known. This is not to say that the action itself is transcendental; rather, this outside is the planetary.

For the past year I have been making a documentary with Joe Sorbara on the free-improv scene in Toronto. Listening to musicians talk about improvisation in expansive terms, as an ethereal and ephemeral experience that exists on the brink of failure and is as much an act of memory as of renewal, reverberated with my own feelings surrounding the cinema. Improvisation, to the philosopher Gary Peters, is the “entwinement of preservation and destruction,” which “invites us to make a transition from a closed conception of the past to one that re-thinks it as an endlessly ongoing event or occurrence whereby tradition is re-originated (Benjamin) or re-opened (Heidegger)” (Peters 2). This “entwinement of preservation and destruction” takes me back to my earlier discussion of the ways in which digital photography, in particular lomographically filtered snapshots, is structured through preserving the discursive past of film while destroying its standard. The performance of M/F/M attempted to connect the augmentation of the digital image, and the impact this augmentation has on conceptualizing the past, through an improvisational approach to intermediality.
The issue I have with the determination of images concerns their technological standardization. As long as manufacturers and technicians control this process, the practice of gathering, projecting, and experiencing digital images is predetermined by their commercial obligations. It assures that augmenting the “immense and unexpected field of action” comprising the domain of images is itself a predetermination.

References

Benjamin, Walter. Illuminations. New York: Schocken Books, 1985.

Heble, Ajay. Landing on the Wrong Note. London: Routledge, 2000.

Marchessault, Janine, and Susan Lord. Fluid Screens, Expanded Cinema. Toronto: University of Toronto Press, 2007.

Marker, Chris, dir. La Jetée. Argos Films, 1962.

Peters, Gary. The Philosophy of Improvisation. Chicago: University of Chicago Press, 2009.
