
Dissertations / Theses on the topic 'Digital art Technique'



Consult the top 50 dissertations / theses for your research on the topic 'Digital art Technique.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Mitchell, Joanne. "Precision air entrapment through applied digital and kiln technologies : a new technique in glass art." Thesis, University of Sunderland, 2015. http://sure.sunderland.ac.uk/8548/.

Full text
Abstract:
The motivation for the research was to expand on the creative possibilities of air bubbles in glass, through the application of digital and kiln technologies to formulate and control complex air entrapment, for new configurations in glass art. In comparison to glassblowing, air entrapment in kiln forming glass practice is under-developed and undocumented. This investigation has devised new, replicable techniques to position and manipulate air in kiln-formed glass, termed collectively Kiln-controlled Precision Air Entrapment. As a result of the inquiry, complex assemblages of text and figurative imagery have been produced that allow the articulation of expressive ideas using air voids, which were not previously possible. The research establishes several innovations for air entrapment in glass, as well as forming a technical hypothesis and a practice-based methodology. The research focuses primarily on float glass and the application of CNC abrasive waterjet cutting technology, incorporating computer-aided design and fabrication alongside more conventional glass-forming methods. The 3-axis CNC abrasive waterjet cutting process offers accuracy of cut and complexity of form and scale, across a flat plane of sheet glass. The new method of cleanly fusing layered, waterjet-cut float glass permits the fabrication of artwork containing air entrapment as multilayered, intricate groupings and composite three-dimensional void forms. Kiln-controlled air entrapment presents a number of significant advantages over conventional glassblowing techniques of air entrapment, which are based around the decorative vessel or solid spheroid shaped on the blowing iron. The integration of digital and traditional technologies and the resulting technical glassmaking discoveries in this research advance potential new contexts for air entrapment, in sculptural and architectural glass applications. Contexts range from solid sculptures which explore the internal space of glass to flat-plane panels and hot glass roll-up processes which take air entrapment beyond the limitations of its previous incarnations. The creative potential of Kiln-controlled Precision Air Entrapment for glass art is demonstrated through the development of a body of artworks and their dissemination in the field of practice. Documentation of the findings in the thesis has resulted in a significant body of knowledge which opens up new avenues of understanding for academics, creative practitioners and professionals working with glass.
APA, Harvard, Vancouver, ISO, and other styles
2

Ke, Lijia. "Relating tradition to innovation within the Chinese arts : the application of digital technique to visual art." Thesis, University of West London, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.516337.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wolin, Martin Michael. "Digital high school photography curriculum." CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2414.

Full text
Abstract:
The purpose of this thesis is to create a high school digital photography curriculum that is relevant to real world application and would enable high school students to enter the work force with marketable skills or go on to post secondary education with advanced knowledge in the field of digital imaging.
APA, Harvard, Vancouver, ISO, and other styles
4

Thoisy, Eric de. "La maison du cyborg : apprendre, transmettre, habiter un monde numérique." Thesis, Paris 8, 2019. http://www.theses.fr/2019PA080019.

Full text
Abstract:
Le contexte numérique, à comprendre dans sa double dimension technique et culturelle, produit des nouvelles relations au savoir ; la tradition « livresque » de transmission d’un contenu explicite laisse place à un régime documentaire revalorisant la capacité de l’usager à se saisir d’un système inachevé. Les architectures conçues pour l’apprentissage sont, dans ce contexte, remises en question. Une analyse des relations entre architecture et informatique dans les dernières décennies apporte des éléments de compréhension : l’architecture a été prise comme modèle pour construire l’environnement informatique et, au-delà des emprunts sémantiques, c’est sa responsabilité – la prise en charge de la mémoire – qui semble avoir été déplacée vers l’(architecture) informatique. Le modèle du « théâtre de la mémoire », immobilisant son occupant pour lui donner à voir une signification prédéterminée du monde, s’est alors vu concurrencé par d’autres pensées organisant le déplacement et l’apprentissage. Mais cette grille de lecture est insuffisante, et la problématique est à reformuler dans le cadre proposé par Alan Turing. Le modèle computationnel, mis en relation avec le système logique de Ludwig Wittgenstein, produit des relations renégociées entre calcul et pensée, entre humain et machine. Dans un monde co-occupé par des machines apprenantes, les pratiques de l'apprentissage sont reformulées dans un rapport renouvelé entre un modèle et son usage. Surtout, le déplacement numérique de la notion de signification – de l’explicite vers l’implicite – pourrait constituer alors une fondation pour proposer quelques hypothèses constitutives d’une pensée numérique de l’architecture.
The digital context, understood as both a technical and a cultural phenomenon, produces new relationships to knowledge. The "bookish" paradigm of transmission is being challenged by documentary practices enabling the user to take hold of an uncompleted knowledge structure. Within this framework, there is a strong need for reevaluating physical buildings conceived for learning. The situation can be apprehended by looking at the interactions between architecture and computer sciences during the last decades. Architecture was taken as a model to build the virtual environment and, most importantly, we believe that the historical responsibility of architecture – taking charge of memory – was displaced towards (computer) architecture. But this shift does not replicate the pattern of "the theater of memory" that organizes the transfer of a set of predetermined meanings into the mind of a sedentary inhabitant. Instead, incoming models foster movement and learning. The hypothesis of a "digital caesura" then requires a further reading: the problematic needs to be rephrased within the computational framework built by Alan Turing. We have chosen to embed our argument into Ludwig Wittgenstein's logical system in order to disclose the main features of computational thinking: renewed relations between thinking and calculating, between human and machine. Learning relies on a new kind of balance between the logical model and the use we make of it. Most of all, we will focus on the shift of the concept of meaning, from an explicit existence to an implicit one: this may constitute a relevant "foundation" on which to build hypotheses for a digital thinking of architecture.
APA, Harvard, Vancouver, ISO, and other styles
5

Meintjes, Anthony Arthur. ""From digital to darkroom"." Thesis, Rhodes University, 2001. http://hdl.handle.net/10962/d1007418.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Milton-Smith, Melissa. "A conversation on globalisation and digital art." University of Western Australia. Communication Studies Discipline Group, 2008. http://theses.library.uwa.edu.au/adt-WU2009.0057.

Full text
Abstract:
Globalisation is one of the most important cultural phenomena of our times and yet, one of the least understood. In popular and critical discourse there has been a struggle to articulate its human affects. The tendency to focus upon macro accounts can leave gaps in our understanding of its micro experiences. (As Jonathon Xavier Inda and Renato Rosaldo argue, there is a strong pattern of thinking about globalisation 'principally in terms of very large-scale economic, political, or cultural processes'; see Jonathon Xavier Inda and Renato Rosaldo (eds.), The Anthropology of Globalisation: A Reader, Malden: Blackwell Publishing, 2002, p. 5.) In this thesis, I will describe globalisation as a dynamic matrix of flows. I will argue that globalisation's spatial, temporal, and kinetic re-arrangements have particular impacts upon bodies and consciousnesses, creating contingent and often unquantifiable flows. I will introduce digital art as a unique platform of articulation: a style borne of globalisation's oeuvre, and technically well-equipped to converse with and emulate its affects. By exploring digital art through an historical lens I aim to show how it continues dialogues established by earlier art forms. I will claim that digital art has the capacity to re-centre globalisation around the individual, through sensory and experiential forms that encourage subjective and affective encounters. By approaching it in this way, I will move away from universal theorems in favour of particular accounts. Through exploring a wide array of digital artworks, I will discuss how digital art can capture fleeting experiences and individual expressions. I will closely examine its unique tools of articulation, including immersive, interactive, haptic, and responsive technologies, and analyse the theories and ideas that they converse with. Through this iterative process, I aim to explore how digital art can both facilitate and generate new articulations of globalisation, as an experiential phenomenon.
APA, Harvard, Vancouver, ISO, and other styles
7

Jacobs, Bidhan. "Vers une esthétique du signal. Dynamiques du flou et libérations du code dans les arts filmiques (1990-2010)." Thesis, Paris 3, 2014. http://www.theses.fr/2014PA030089.

Full text
Abstract:
Au cours de la décennie 1990, l'introduction puis l’expansion accélérée du numérique par les industries techniques ont favorisé le développement dans les arts filmiques d’un courant critique spontané et collectif portant sur le signal. Nous élaborons une histoire des techniques et des formes esthétiques critiques du signal, en faisceaux de segments, selon une taxinomie qui prolonge le structuralisme matériologique des années 70 (Malcolm Le Grice, Peter Gidal, Paul Sharits, Anthony McCall). Cette histoire embrasse et met en perspective cinéma, vidéo et numérique, de manière à réorganiser nos conceptions distinctes de ces champs selon le domaine des sciences (duquel dépendent détection, codification et visualisation du signal). Nous proposons une histoire des techniques à rebours sous l’angle particulier de la computation du signal, processus systématique commun à l’ensemble des technologies filmiques et entendu comme un ensemble de règles opératoires propres à un traitement calculatoire de données. Dans la double tradition d’une part de Jean Epstein, Marcel L’Herbier ou Jean Renoir, et de l’autre du structuralisme expérimental (Paul Sharits, Malcolm Le Grice…), de nombreux artistes contemporains, tels Paolo Gioli, Philippe Grandrieux, Peter Tscherkassky, Marylène Negro, Leighton Pierce, Augustin Gimel, Jacques Perconte ou HC Gilje (pour n’en mentionner que quelques uns), ont élaboré une intelligence du signal grâce à deux entreprises critiques simultanées. La première, au registre du dispositif, conteste la technologie programmante et vise les libérations du code ; la seconde, au registre de l’image, conteste les normes de visualité et enrichit les palettes visuelles et sonores du flou. Nous tentons d’établir, formuler et organiser les logiques qui, traversant et déterminant la diversité des initiatives artistiques dont nous observons les spécificités et singularités, relèvent d’un même combat artistique contre la standardisation
During the 1990s, the introduction and then accelerated expansion of digital technology by the technical industries promoted the development of a spontaneous and collective critical current on the signal in the filmic arts. We develop a history of techniques and of the critical aesthetic forms of the signal, in beams of segments, according to a taxonomy that extends the materiological structuralism of the 1970s (Malcolm Le Grice, Peter Gidal, Paul Sharits, Anthony McCall). This history embraces film, video and digital, reorganizing our distinct conceptions of these fields according to the scientific viewpoint (on which detection, codification and display of the signal depend). We propose a backward technological history from the viewpoint of signal computation, a systematic process common to all filmic technologies, understood as a set of operating rules specific to computational data processing. In the double tradition of Jean Epstein, Marcel L'Herbier or Jean Renoir on the one hand, and of experimental structuralism (Paul Sharits, Malcolm Le Grice...) on the other, many contemporary artists such as Paolo Gioli, Philippe Grandrieux, Peter Tscherkassky, Marylène Negro, Leighton Pierce, Augustin Gimel, Jacques Perconte or HC Gilje (to mention just a few) have developed an intelligence of the signal through two simultaneous critical enterprises. The first, on the register of the apparatus, challenges programming technology and aims at the liberation of the code; the second, on the register of the image, challenges the norms of visuality and expands the visual and sound palettes of blur. We attempt to formulate and organize the logics which, crossing and determining the diversity of artistic initiatives whose specificities and singularities we observe, belong to the same artistic battle against standardization.
APA, Harvard, Vancouver, ISO, and other styles
8

McQuade, Patrick John (College of Fine Arts, UNSW). "Visualising the invisible: articulating the inherent features of the digital image." Awarded by: University of New South Wales, Art, 2007. http://handle.unsw.edu.au/1959.4/43307.

Full text
Abstract:
Contemporary digital imaging practice has largely adopted the visual characteristics of its closest mediatic relative, the analogue photograph. In this regard, new media theorist Lev Manovich observes that "Computer software does not produce such images by default. The paradox of digital visual culture is that although all imaging is becoming computer-based, the dominance of photographic and cinematic imagery is becoming even stronger. But rather than being a direct, "natural" result of photo and film technology, these images are constructed on computers" (Manovich 2001: 179). Manovich articulates the disjuncture between the technical processes involved in the digital image creation process and the visual characteristics of the final digital image, with its replication of the visual qualities of the analogue photograph. This research addresses this notion further by exploring the following. What are the defining technical features of these computer-based imaging processes? Could these technical features be used as a basis in developing an alternative aesthetic for the digital image? Why is there a reticence to visually acknowledge these technical features in contemporary digital imaging practice? Are there historic mediated precedents where the inherent technical features of the medium are visually acknowledged in the production of imagery? If these defining technical features of the digital imaging process were visually acknowledged in this image creation process, what would be the outcome? The studio practice component of the research served as a foundation for the author's artistic and aesthetic development, where the intent was to investigate and highlight four technical qualities of the digital image identified through the case studies of three digital artists and other secondary sources. These technical qualities include: the composite RGB colour system of the digital image as it appears on screen; the pixellated microstructure of the digital image; the luminosity of the digital image as it appears on a computer monitor; and the underlying numeric and (ASCII-based) alphanumeric codes of the image file, which enable that most defining feature of the image file, that of programmability. Based on research in the visualization of these numeric and alphanumeric codes, digital images of bacteria, produced through the use of the scanning electron microscope, were chosen as image content for an experimental body of work to draw the conceptual link between these numeric and alphanumeric codes of the image file and the coded genetic sequence of an individual bacterial entity.
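The technical qualities listed in this abstract (the composite RGB system, the pixel microstructure, and the numeric codes that make the image programmable) can be made concrete with a few lines of code. The sketch below is not from the thesis; it simply builds a tiny RGB image with NumPy and prints its per-channel values and raw byte codes.

```python
import numpy as np

# A 2x2 RGB image: every pixel is three 8-bit numbers (the composite RGB system).
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

# The pixellated microstructure: each pixel addressed individually.
for r in range(img.shape[0]):
    for c in range(img.shape[1]):
        print(f"pixel ({r},{c}) -> R={img[r, c, 0]} G={img[r, c, 1]} B={img[r, c, 2]}")

# The underlying numeric codes of the image data: a flat run of bytes,
# which is what makes the image programmable.
print("raw byte codes:", list(img.tobytes()))
```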
APA, Harvard, Vancouver, ISO, and other styles
9

Ge, He. "Flexible Digital Authentication Techniques." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5277/.

Full text
Abstract:
This dissertation investigates authentication techniques in some emerging areas. Specifically, authentication schemes have been proposed that are well-suited for embedded systems, and privacy-respecting pay Web sites. With embedded systems, a person could own several devices which are capable of communication and interaction, but these devices use embedded processors whose computational capabilities are limited as compared to desktop computers. Examples of this scenario include entertainment devices or appliances owned by a consumer, multiple control and sensor systems in an automobile or airplane, and environmental controls in a building. An efficient public key cryptosystem has been devised, which provides a complete solution to an embedded system, including protocols for authentication, authenticated key exchange, encryption, and revocation. The new construction is especially suitable for the devices with constrained computing capabilities and resources. Compared with other available authentication schemes, such as X.509, identity-based encryption, etc., the new construction provides unique features such as simplicity, efficiency, forward secrecy, and an efficient re-keying mechanism. In the application scenario for a pay Web site, users may be sensitive about their privacy, and do not wish their behaviors to be tracked by Web sites. Thus, an anonymous authentication scheme is desirable in this case. That is, a user can prove his/her authenticity without revealing his/her identity. On the other hand, the Web site owner would like to prevent a bunch of users from sharing a single subscription while hiding behind user anonymity. The Web site should be able to detect these possible malicious behaviors, and exclude corrupted users from future service. This dissertation extensively discusses anonymous authentication techniques, such as group signature, direct anonymous attestation, and traceable signature. Three anonymous authentication schemes have been proposed, which include a group signature scheme with signature claiming and variable linkability, a scheme for direct anonymous attestation in trusted computing platforms with sign and verify protocols nearly seven times more efficient than the current solution, and a state-of-the-art traceable signature scheme with support for variable anonymity. These three schemes greatly advance research in the area of anonymous authentication. The authentication techniques presented in this dissertation are based on common mathematical and cryptographical foundations, sharing similar security assumptions. We call them flexible digital authentication schemes.
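None of the dissertation's schemes are reproduced here, but the flavour of proving authenticity without revealing a long-term secret can be illustrated with a classic building block, Schnorr identification over a toy group. The parameters below (p = 23, q = 11, g = 2) are deliberately tiny and insecure; they are assumptions made only so the sketch runs.

```python
import secrets

# Toy Schnorr identification: the prover shows knowledge of x with y = g^x mod p
# without revealing x. Real deployments use large, standardised groups.
p, q, g = 23, 11, 2                # tiny demo parameters: g has prime order q mod p

x = secrets.randbelow(q - 1) + 1   # prover's secret key
y = pow(g, x, p)                   # public key

k = secrets.randbelow(q - 1) + 1   # prover's commitment randomness
t = pow(g, k, p)                   # commitment sent to the verifier

c = secrets.randbelow(q)           # verifier's random challenge

s = (k + c * x) % q                # prover's response

# Verification: g^s must equal t * y^c (mod p)
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("identification accepted")
```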
APA, Harvard, Vancouver, ISO, and other styles
10

Rahaim, Margaret. "Material-digital resistance : toward a tactics of visibility." Thesis, Royal College of Art, 2015. http://researchonline.rca.ac.uk/1685/.

Full text
Abstract:
This research considers the ways in which digital, networked technologies influence contemporary everyday life and creative practice. Through studio practice and writing, I ask how a contemporary condition of everyday life, characterised by the suppression of distance in speed of communication and the ubiquitous presence of surveillant apparatuses, affects the way we understand and use the image. I also consider the role of the digital image in both destabilizing and reinforcing human agency. In the past, tactical creativity was protected by a level of invisibility from the vision of authority, as described by Michel de Certeau. With the spread of networked technologies, that invisibility is no longer possible. I take Vilem Flusser’s methodology of ‘playing against the camera’—a recipe for overcoming the functionalist relationship between human and image technology—as a possible model for establishing my own and identifying other artists’ practices as tactics of visibility. I seek to develop a material consciousness of the digital image based on ontologies that assert the materiality of its processes and effects. In studio work, I blend manual and digital techniques for image-making in order to expose the structure of the digital image. I attempt the work of the apparatus outside the apparatus, by performing digital processes by hand, creating a model of difference and refining a physical sense of the disparity between human and computer scales through the reassertion of the body in a process of making. Using Kendall Walton’s notion of photographic transparency, I make an argument for the affective potency of the ‘poor image’, evidenced in artwork and mass media, as inseparable from its materiality. I fictionalize aspects of this transparency, depicting an impossible reality and allowing me to model present anxieties stemming from the rise of digital image production. I find that transparency and the instantaneity of the digital network are responsible in part for the obfuscation of digital materiality, as well as a confused sense of spatial relationships and personal interconnection. Image quality is politicized by connotations of credibility or agenda as it bends to the need for ever-faster communications. Though certain characteristics of the digital image encourage or sustain an ignorance with regard to its materiality, these characteristics can also be exploited to foreground materiality in art practice that aligns itself with the spirit and purpose, if not the invisibility, of de Certeau’s tactics, and the critical methods of resistance to a programme of technology suggested by Flusser.
APA, Harvard, Vancouver, ISO, and other styles
11

Brisbane, Gareth Charles Beattie. "On information hiding techniques for digital images." Access electronically, 2004. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20050221.122028/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Lim, Anthony Galvin K. C. "Digital compensation techniques for in-phase quadrature (IQ) modulator." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2004. http://theses.library.uwa.edu.au/adt-WU2005.0018.

Full text
Abstract:
[Formulae and special characters can only be approximated here. Please see the pdf version of the abstract for an accurate reproduction.] In an In-phase/Quadrature (IQ) modulator generating Continuous-Phase Frequency-Shift-Keying (CPFSK) signals, shortcomings in the implementation of the analogue reconstruction filters result in the loss of the constant envelope property of the output signal. Ripples in the envelope function cause undesirable spreading of the transmitted signal spectrum into adjacent channels when the signal passes through non-linear elements in the transmission path. This results in the failure of the transmitted signal to meet transmission standards requirements. Therefore, digital techniques compensating for these shortcomings play an important role in enhancing the performance of the IQ modulator. In this thesis, several techniques to compensate for the irregularities in the I and Q channels are presented. The main emphasis is on preserving a constant magnitude and linear phase characteristics in the pass-band of the analogue filters as well as compensating for the imbalances between the I and Q channels. A generic digital pre-compensation model is used, and based on this model, the digital compensation schemes are formulated using control and signal processing techniques. Four digital compensation techniques are proposed and analysed. The first method is based on H2 norm minimization while the second method solves for the pre-compensation filters by posing the problem as one of H∞ optimisation. The third method stems from the well-known principle of Wiener filtering. Note that the digital compensation filters found using these methods are computed off-line. We then proceed by designing adaptive compensation filters that run on-line and use the "live" modulator input data to make the necessary measurements and compensations. These adaptive filters are computed based on the well-known Least-Mean-Square (LMS) algorithm. The advantage of using this approach is that the modulator does not need to be taken off-line in the process of calculating the pre-compensation filters and thus its normal operation is not disrupted. The compensation performance of all methods is studied using computer simulations and practical experiments. The results indicate that the proposed methods are effective and are able to provide substantial compensation for the shortcomings of the analogue reconstruction filters in the I and Q channels. In addition, the adaptive compensation scheme, implemented on a DSP platform, shows that there is significant reduction in side-lobe levels for the compensated signal spectrum.
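As a rough illustration of the adaptive approach mentioned above, the sketch below runs a standard least-mean-square (LMS) update to adapt a short FIR compensation filter towards the inverse of an unknown distortion. It is a generic LMS equalizer on synthetic data, not the IQ-specific scheme of the thesis; the filter length, step size and channel coefficients are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, mu = 5000, 8, 0.01            # samples, filter taps, LMS step size (assumed)

d = rng.standard_normal(N)          # desired (undistorted) reference signal
h = np.array([1.0, 0.4, -0.2])      # assumed distortion to be compensated
x = np.convolve(d, h)[:N]           # distorted signal seen by the compensator

w = np.zeros(L)                     # adaptive compensation filter taps
for n in range(L, N):
    x_vec = x[n - L + 1:n + 1][::-1]    # most recent L input samples
    y = w @ x_vec                       # compensated output sample
    e = d[n] - y                        # error against the reference
    w += mu * e * x_vec                 # LMS weight update

residual = d[L:] - np.convolve(x, w)[L:N]
print("final mean-squared error:", float(np.mean(residual**2)))
```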
APA, Harvard, Vancouver, ISO, and other styles
13

Robins, Michael John. "Local energy feature tracing in digital images and volumes." University of Western Australia. Dept. of Computer Science, 1999. http://theses.library.uwa.edu.au/adt-WU2003.0010.

Full text
Abstract:
Digital image feature detectors often comprise two stages of processing: an initial filtering phase and a secondary search stage. The initial filtering is designed to accentuate specific feature characteristics or suppress spurious components of the image signal. The second stage of processing involves searching the results for various criteria that will identify the locations of the image features. The local energy feature detection scheme combines the squares of the signal convolved with a pair of filters that are in quadrature with each other. The resulting local energy value is proportional to phase congruency which is a measure of the local alignment of the phases of the signals constituent Fourier components. Points of local maximum phase alignment have been shown to correspond to visual features in the image. The local energy calculation accentuates the location of many types of image features, such as lines, edges and ramps and estimates of local energy can be calculated in multidimensional image data by rotating the quadrature filters to several orientations. The second stage search criterion for local energy is to locate the points that lie along the ridges in the energy map that connect the points of local maxima. In three dimensional data the relatively higher energy values will form films between connecting filaments and tendrils. This thesis examines the use of recursive spatial domain filtering to calculate local energy. A quadrature pair of filters which are based on the first derivative of the Gaussian function and its Hilbert transform, are rotated in space using a kernel of basis functions to obtain various orientations of the filters. The kernel is designed to be separable and each term is implemented using a recursive digital filter. Once local energy has been calculated the ridges and surfaces of high energy values are determined using a flooding technique. Starting from the points of local minima we perform an ablative skeletonisation of the higher energy values. The topology of the original set is maintained by examining and preserving the topology of the neighbourhood of each point when considering it for removal. This combination of homotopic skeletonisation and sequential processing of each level of energy values, results in a well located, thinned and connected tracing of the ridges. The thesis contains examples of the local energy calculation using steerable recursive filters and the ridge tracing algorithm applied to two and three dimensional images. Details of the algorithms are contained in the text and details of their computer implementation are provided in the appendices.
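A minimal one-dimensional version of the local energy idea is easy to state in code: band-pass the signal with an even-symmetric filter, take the Hilbert transform of that response as its odd-symmetric quadrature partner, and sum the squares. The sketch below uses a simple difference-of-Gaussians band-pass with assumed scales rather than the recursive steerable filters developed in the thesis; the energy envelope peaks at the step placed in the synthetic signal.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import hilbert

# Synthetic 1D signal with a step edge at sample 200
x = np.zeros(400)
x[200:] = 1.0
x += 0.01 * np.random.default_rng(1).standard_normal(400)

# Response of an even-symmetric band-pass filter (difference of Gaussians, assumed scales)
even = gaussian_filter1d(x, 2.0) - gaussian_filter1d(x, 6.0)

# Response of the odd-symmetric quadrature partner, via the Hilbert transform
odd = np.imag(hilbert(even))

# Local energy: sum of squares of the quadrature pair of responses
energy = even**2 + odd**2
print("energy peaks at sample", int(np.argmax(energy)))   # expected near 200
```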
APA, Harvard, Vancouver, ISO, and other styles
14

Tourabaly, Jamil A. "A jittered-sampling correction technique for ADCs." Connect to thesis, 2008. http://portal.ecu.edu.au/adt-public/adt-ECU2008.0009.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Ingraham, Mark R. "Histological age estimation of the midshaft clavicle using a new digital technique." Thesis, University of North Texas, 2004. https://digital.library.unt.edu/ark:/67531/metadc4604/.

Full text
Abstract:
Histological methods to estimate skeletal age at death, in forensic cases, are an alternative to the more traditional gross morphological methods. Most histological methods utilize counts of bone type within a given field for their estimation. The method presented in this paper uses the percentage area occupied by unremodeled bone to estimate age. The percentage area occupied by unremodeled bone is used in a linear regression model to predict skeletal age at death. Additionally, this method uses digital software to measure area rather than the traditional technique in which a gridded microscope is used to estimate area. The clavicle was chosen as a sample site since it is not a weight bearing bone and has little muscular insertion. These factors reduce the variation seen as a result of differences in lifestyle or activity pattern.
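The statistical core of the method, a linear regression of age at death on the percentage area occupied by unremodeled bone, is straightforward to reproduce in outline. The sketch below fits and applies such a model on synthetic, randomly generated values; the numbers are illustrative only and are not data from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic training set: % area of unremodeled bone vs known age at death.
# Purely illustrative: younger individuals tend to show more unremodeled area.
pct_unremodeled = rng.uniform(5, 80, size=30)
age = 70 - 0.6 * pct_unremodeled + rng.normal(0, 5, size=30)

# Ordinary least-squares fit: age = slope * pct + intercept
slope, intercept = np.polyfit(pct_unremodeled, age, deg=1)

def estimate_age(pct):
    """Predict skeletal age at death from % area of unremodeled bone."""
    return slope * pct + intercept

print(f"age = {slope:.2f} * pct + {intercept:.1f}")
print("estimated age for 25% unremodeled bone:", round(estimate_age(25.0), 1))
```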
APA, Harvard, Vancouver, ISO, and other styles
16

Robbins, Benjamin John. "The detection of 2D image features using local energy." University of Western Australia. Dept. of Computer Science, 1996. http://theses.library.uwa.edu.au/adt-WU2003.0005.

Full text
Abstract:
Accurate detection and localization of two dimensional (2D) image features (or 'key-points') is important for vision tasks such as structure from motion, stereo matching, and line labeling. 2D image features are ideal for these vision tasks because 2D image features are high in information and yet they occur sparsely in typical images. Several methods for the detection of 2D image features have already been developed. However, it is difficult to assess the performance of these methods because no one has produced an adequate definition of corners that encompasses all types of 2D luminance variations that make up 2D image features. The fact that there does not exist a consensus on the definition of 2D image features is not surprising given the confusion surrounding the definition of 1D image features. The general perception of 1D image features has been that they correspond to 'edges' in an image and so are points where the intensity gradient in some direction is a local maximum. The Sobel [68], Canny [7] and Marr-Hildreth [37] operators all use this model of 1D features, either implicitly or explicitly. However, other profiles in an image also make up valid 1D features, such as spike and roof profiles, as well as combinations of all these feature types. Spike and roof profiles can also be found by looking for points where the rate of change of the intensity gradient is locally maximal, as Canny did in defining a 'roof-detector' in much the same way he developed his 'edge-detector'. While this allows the detection of a wider variety of 1D feature profiles, it comes no closer to the goal of unifying these different feature types to an encompassing definition of 1D features. The introduction of the local energy model of image features by Morrone and Owens [45] in 1987 provided a unified definition of 1D image features for the first time. They postulated that image features correspond to points in an image where there is maximal phase congruency in the frequency domain representation of the image. That is, image features correspond to points of maximal order in the phase domain of the image signal. These points of maximal phase congruency correspond to step-edge, roof, and ramp intensity profiles, and combinations thereof. They also correspond to the Mach bands perceived by humans in trapezoidal feature profiles. This thesis extends the notion of phase congruency to 2D image features. As 1D image features correspond to points of maximal 1D order in the phase domain of the image signal, this thesis contends that 2D image features correspond to maximal 2D order in this domain. These points of maximal 2D phase congruency include all the different types of 2D image features, including grey-level corners, line terminations, blobs, and a variety of junctions. Early attempts at 2D feature detection were simple 'corner detectors' based on a model of a grey-level corner in much the same way that early 1D feature detectors were based on a model of step-edges. Some recent attempts have included more complex models of 2D features, although this is basically a more complex a priori judgement of the types of luminance profiles that are to be labeled as 2D features. This thesis develops the 2D local energy feature detector based on a new, unified definition of 2D image features that marks points of locally maximum 2D order in the phase domain representation of the image as 2D image features. The performance of an implementation of 2D local energy is assessed, and compared to several existing methods of 2D feature detection. This thesis also shows that in contrast to most other methods of 2D feature detection, 2D local energy is an idempotent operator. The extension of phase congruency to 2D image features also unifies the detection of image features. 1D and 2D image features correspond to 1D and 2D order in the phase domain representation of the image respectively. This definition imposes a hierarchy of image features, with 2D image features being a subset of 1D image features. This ordering of image features has been implied ever since 1D features were used as candidate points for 2D feature detection by Kitchen [28] and others. Local energy enables the extraction of both 1D and 2D image features in a consistent manner; 2D image features are extracted from the 1D image features using the same operations that are used to extract 1D image features from the input image. The consistent approach to the detection of image features presented in this thesis allows the hierarchy of primitive image features to be naturally extended to higher order image features. These higher order image features can then also be extracted from higher order image data using the same hierarchical approach. This thesis shows how local energy can be naturally extended to the detection of 1D (surface) and higher order image features in 3D data sets. Results are presented for the detection of 1D image features in 3D confocal microscope images, showing superior performance to the 3D extension of the Sobel operator [74].
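A crude way to see how oriented quadrature filtering extends to two dimensions is to compute a 1D local energy along several orientations and sum the results; points that are energetic in more than one direction behave like the 2D features discussed above. The sketch below does this for just the row and column directions of a synthetic corner image, which is far simpler than the thesis's formulation but conveys the principle.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import hilbert

# Synthetic image with a grey-level corner at (64, 64)
img = np.zeros((128, 128))
img[64:, 64:] = 1.0

def oriented_energy(image, axis):
    # Response of an even-symmetric band-pass filter along one axis (assumed scales) ...
    even = (gaussian_filter1d(image, 2.0, axis=axis)
            - gaussian_filter1d(image, 6.0, axis=axis))
    # ... and its odd-symmetric quadrature partner via the Hilbert transform
    odd = np.imag(hilbert(even, axis=axis))
    return even**2 + odd**2

# Sum of local energy over two orientations only (rows and columns, assumed)
energy2d = oriented_energy(img, axis=0) + oriented_energy(img, axis=1)

# 2D features are energetic in more than one orientation, so the corner dominates
peak = np.unravel_index(np.argmax(energy2d), energy2d.shape)
print("strongest 2D energy near", peak)   # expected near (64, 64)
```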
APA, Harvard, Vancouver, ISO, and other styles
17

Kovesi, Peter. "Invariant measures of image features from phase information." University of Western Australia. Dept. of Computer Science, 1996. http://theses.library.uwa.edu.au/adt-WU2003.0006.

Full text
Abstract:
If reliable and general computer vision techniques are to be developed it is crucial that we find ways of characterizing low-level image features with invariant quantities. For example, if edge significance could be measured in a way that was invariant to image illumination and contrast, higher-level image processing operations could be conducted with much greater confidence. However, despite their importance, little attention has been paid to the need for invariant quantities in low-level vision for tasks such as feature detection or feature matching. This thesis develops a number of invariant low-level image measures for feature detection, local symmetry/asymmetry detection, and for signal matching. These invariant quantities are developed from representations of the image in the frequency domain. In particular, phase data is used as the fundamental building block for constructing these measures. Phase congruency is developed as an illumination and contrast invariant measure of feature significance. This allows edges, lines and other features to be detected reliably, and fixed thresholds can be applied over wide classes of images. Points of local symmetry and asymmetry in images give rise to special arrangements of phase, and these too can be characterized by invariant measures. Finally, a new approach to signal matching that uses correlation of local phase and amplitude information is developed. This approach allows reliable phase based disparity measurements to be made, overcoming many of the difficulties associated with scale-space singularities.
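Phase congruency can be read directly off the filter responses used for local energy: it is the local energy normalised by the total amplitude summed over scales, which removes the dependence on illumination and contrast. The sketch below computes this basic, noise-free form in 1D over a few difference-of-Gaussian scales; it follows the general definition rather than Kovesi's full log-Gabor implementation, and the chosen scales are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import hilbert

def phase_congruency_1d(x, scales=((1, 2), (2, 4), (4, 8)), eps=1e-6):
    """Basic phase congruency: local energy divided by the sum of amplitudes over scales."""
    sum_even = np.zeros_like(x, dtype=float)
    sum_odd = np.zeros_like(x, dtype=float)
    sum_amp = np.zeros_like(x, dtype=float)
    for s_small, s_large in scales:
        even = gaussian_filter1d(x, s_small) - gaussian_filter1d(x, s_large)
        odd = np.imag(hilbert(even))          # quadrature partner
        sum_even += even
        sum_odd += odd
        sum_amp += np.sqrt(even**2 + odd**2)  # amplitude at this scale
    energy = np.sqrt(sum_even**2 + sum_odd**2)
    return energy / (sum_amp + eps)

# A step edge gives high phase congruency regardless of its contrast
sig = np.concatenate([np.zeros(200), 0.1 * np.ones(200)])   # low-contrast step
pc = phase_congruency_1d(sig)
print("phase congruency peak %.2f at sample %d" % (pc.max(), pc.argmax()))
```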
APA, Harvard, Vancouver, ISO, and other styles
18

Puhakka, Heli. "From analogue to digital: Drawing the human form by examining creative practices, techniques and experiences of practitioners within immersive technology." Thesis, Queensland University of Technology, 2019. https://eprints.qut.edu.au/134466/1/Heli_Puhakka_Thesis.pdf.

Full text
Abstract:
Advancements in virtual reality (VR) have facilitated a new drawing experience for digital artists, allowing them to engage in embodied human-computer interaction (HCI) while drawing. This project focuses on exploring and understanding how analogue life drawing practices can be redefined in the digital realm of virtual reality. In this practice-led project, the analogue life drawing creative practice is the foundation for making immersive drawing artworks in the virtual environment. This is alongside theoretical research into aesthetic experience, embodiment, disembodiment and presence, in conjunction with semi-structured interviews conducted to understand other drawing practitioners' experiences with immersive drawing.
APA, Harvard, Vancouver, ISO, and other styles
19

Pitsillides, Andreas. "Control structures and techniques for broadband-ISDN communication systems." Swinburne University of Technology, 1993. http://adt.lib.swin.edu.au./public/adt-VSWT20060321.132650.

Full text
Abstract:
A structured organisation of tasks, possibly hierarchical, is necessary in a BISDN network due to the complexity of the system, its large dimension and its physical distribution in space. Feedback (possibly supplemented by feedforward) control has an essential role in the effective and efficient control of BISDN. Additionally, due to the nonstationarity of the network and its complexity, a number of different (dynamic) modelling techniques are required at each level of the hierarchy. Also, to increase the efficiency of the network and allow flexibility in the control actions (by extending the control horizon) the (dynamic) tradeoff between service-rate, buffer-space, cell-delay and cell-loss must be exploited. In this thesis we take account of the above and solve three essential control problems, required for the effective control of BISDN. These solutions are suitable for both stationary and nonstationary conditions. Also, they are suitable for implementation in a decentralised coordinated form, that can form a part of a hierarchical organisation of control tasks. Thus, the control schemes aim for global solutions, yet they are not limited by the propagation delay, which can be high in comparison to the dynamics of the controlled events. Specifically, novel control approaches to the problems of Connection Admission Control (CAC), flow control and service-rate control are developed. We make use of adaptive feedback and adaptive feedforward control methodologies to solve the combined CAC and flow control problem. Using a novel control concept, based on only two groups of traffic (the controllable and uncontrollable group) we formulate a problem aimed at high (unity) utilisation of resources while maintaining quality of service at prescribed levels. Using certain assumptions we have proven that in the long term the regulator is stable and that it converges to zero regulation error. Bounds on operating conditions are also derived, and using simulation we show that high utilisation can be achieved as suggested by the theory, together with robustness for unforeseen traffic connections and disconnections. Even with such a high efficiency and strong properties on the quality of service provided, the only traffic descriptor required from the user is that of the peak rate of the uncontrollable traffic. A novel scheme for the dynamic control of service-rate is formulated, using feedback from the network queues. We use a unified dynamic fluid flow equation to describe the virtual path (VP) and hence formulate two illustrative examples for the control of service-rate (at the VP level). One is a nonlinear optimal multilevel implementation, that features a coordinated decentralised solution. The other is a single level implementation that turns out to be computationally complex. Therefore, for the single level implementation the costate equilibrium solution is also derived. For the optimal policies derived, we discuss their implementation complexity and provide implementable solutions. Their performance is evaluated using simulation. Additionally, using an ad hoc approach we have extended previous published works on the decentralised coordinated control of large scale nonlinear systems to also deal with time-delayed systems.
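The flavour of the feedback element described above can be conveyed with a toy discrete-time fluid-flow model of a single queue: the controller sets the admitted rate of the controllable traffic from measurements of buffer occupancy, trading buffer space against utilisation. This is a generic proportional controller with feedforward on an invented model, not the adaptive schemes developed in the thesis; the capacity, setpoint and gain are assumptions.

```python
import random

C = 100.0        # service rate of the virtual path (cells per step), assumed
x_ref = 50.0     # target buffer occupancy (cells), assumed
kp = 0.5         # feedback gain, assumed
x = 0.0          # current buffer occupancy

random.seed(0)
for k in range(200):
    d = random.uniform(20.0, 60.0)           # uncontrollable traffic this step
    # Feedforward (fill the capacity left by d) plus feedback on buffer occupancy
    u = max(0.0, C - d + kp * (x_ref - x))
    # Fluid-flow queue update over one unit time step
    x = max(0.0, x + (u + d) - C)

print(f"buffer occupancy {x:.1f} cells; link utilisation {min(u + d, C) / C:.0%}")
```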
APA, Harvard, Vancouver, ISO, and other styles
20

Chang, Kuo-Lung. "A Real-Time Merging-Buffering Technique for MIDI Messages." Thesis, University of North Texas, 1991. https://digital.library.unt.edu/ark:/67531/metadc500471/.

Full text
Abstract:
A powerful and efficient algorithm has been designed to deal with the critical timing problem of MIDI messages. This algorithm can convert note events stored in a natural way to MIDI messages dynamically. Only limited memory space (the buffer) is required to finish the conversion work, and the size of the buffer is independent of the size of the original sequence (notes). The algorithm's real-time variable properties suggest not only flexible real-time control of musical aspects, but also expandability to interactive multi-media applications. A compositional environment called MusicSculptor has been implemented in terms of this algorithm.
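The buffering idea can be sketched with a small priority queue: note events stored "naturally" as (start, duration, pitch, velocity) are expanded on the fly into time-ordered note-on/note-off messages, and the buffer only ever holds the pending note-offs rather than the whole sequence. This is a generic illustration in Python, not the MusicSculptor implementation.

```python
import heapq

# Note events in their "natural" form: (start_time, duration, pitch, velocity)
notes = [(0.0, 1.0, 60, 100), (0.5, 0.5, 64, 90), (1.0, 2.0, 67, 80)]

def to_midi_messages(note_events):
    """Yield (time, message, pitch, velocity) in time order using a small buffer."""
    pending_offs = []                      # buffer: only the note-offs not yet emitted
    for start, dur, pitch, vel in sorted(note_events):
        # Flush every buffered note-off that is due before this note-on
        while pending_offs and pending_offs[0][0] <= start:
            t_off, p = heapq.heappop(pending_offs)
            yield (t_off, "note_off", p, 0)
        yield (start, "note_on", pitch, vel)
        heapq.heappush(pending_offs, (start + dur, pitch))
    while pending_offs:                    # flush the remaining note-offs
        t_off, p = heapq.heappop(pending_offs)
        yield (t_off, "note_off", p, 0)

for msg in to_midi_messages(notes):
    print(msg)
```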
APA, Harvard, Vancouver, ISO, and other styles
21

King-Smith, Leah. "Reading the reading : an exegesis on "traces... vestiges... energies... a relic... landmark... stage: New Farm Powerhouse Project"." Thesis, Queensland University of Technology, 2001.

Find full text
Abstract:
As an artist Leah King-Smith has worked for several years in the creative medium of photography exploring notions of multidimensional states of consciousness particularly in reference to the psychic work of Jane Roberts. In this thesis King-Smith presents her creative developments as a technological shift from analogue to digital imaging, and continues, in her Research Project, to follow the same threads of investigation into the use of multi-layering as symbol and expression of simultaneous time. The thesis is two-fold in approach where in the first section, processes, contexts and concepts are presented and in the second section the themes and analysis of the final works are expressed through a creative writing style. The dichotomous method of interpretation in these two sections embodies the antithetical relationship artists experience between reflexive analysis and creative practice as methods of knowledge.
APA, Harvard, Vancouver, ISO, and other styles
22

Dias, Luís Nuno Coelho. "Ideografias dinâmicas - o interface digital como suporte de novas escritas." Master's thesis, Universidade do Porto, Faculdade de Belas Artes, 2000. http://dited.bn.pt:80/29291.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Guideroli, Ilma Carla Zarotti. "Entre mapas, entre espaços = itinerários abertos." [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/283973.

Full text
Abstract:
Advisor: Regina Helena Pereira Johas
Master's dissertation - Universidade Estadual de Campinas, Instituto de Artes
Resumo: A pesquisa Entre Mapas, Entre Espaços: Itinerários Abertos consiste no mapeamento de minha produção plástica no período entre o início de 2008 e o início de 2010. É formada por quatro livros de artista intitulados Lugares Imaginários I, II, III e IV e Espaços Afluentes, composto por dezoito imagens digitais divididas entre as séries Rotas Imaginárias, São Paulo, Intersecções I, Paisagens Mapeadas, UnHorizont e Paisagem Oceânica. Ambos partilham o mesmo campo de investigações, composto de três eixos norteadores que sustentam pensamento e processo criativo. O primeiro destes eixos trata de TranSituAções, conceito que abrange as diferentes maneiras de conceber espaço e lugar, apresentado a partir da reflexão de autores que discutiram este tema. O segundo eixo estrutura a proposição Espaços Empilhados, tratados a partir da definição do virtual segundo Pierre Lévy e do conceito de remix cunhado por Lev Manovich e citado, por sua vez, por autores brasileiros que destrincharam o assunto, como André Lemos e Marcus Bastos. O terceiro e último eixo aborda a questão dos mapas, sendo que a primeira parte compreende uma breve história da cartografia com seu surgimento e características específicas; e a segunda parte consiste na investigação dos mapas deturpados, ou seja, como alguns escritores, artistas e cineastas fizeram uso dos mapas convencionais, tirando deles a função utilitária de localização em relação ao lugar físico. O corpo prático reflexivo aqui apresentado pretende situar meus trabalhos dentro de questionamentos pertinentes ao nosso tempo, e inserir tal pesquisa como resultado de uma investigação que não se encerra, mas continua infinitamente
Abstract: The research Among Maps, Among Spaces: Open Itineraries consists in mapping my artistic practice from the beginning of 2008 until the beginning of 2010. It is composed of four artist's books entitled Imaginary Places I, II, III, and IV, and Affluent Spaces, composed of eighteen digital images divided in the series Imaginary Routes, São Paulo, Intersections I, Mapped Landscapes, UnHorizont, and Oceanic Landscape. Both works share the same investigations, composed of three guiding axes that support thoughts and the creative process. The first axis approaches TranSituActions, a concept that comprises different ways of conceiving space and place, presented from the reflection of authors that investigate this theme. The second axis structures the proposition Heaped Spaces, treated from the definition of the virtual according to Pierre Lévy and the concept of remix, coined by Lev Manovich and cited by Brazilian authors who explored the subject, such as André Lemos and Marcus Bastos. The last axis approaches the question of maps: the first part comprehends a brief history of cartography with its appearance and specific characteristics, and the second one consists of investigating the corrupted maps, that is, how some writers, artists and filmmakers used conventional maps, extracting from them the utilitarian function of localization in relation to a geographic place. The practical-reflexive body presented here intends to place my works inside relevant questions to our time, and introduce such research as a result of an investigation that does not close but continues infinitely.
Master's degree in Arts (Mestre em Artes)
APA, Harvard, Vancouver, ISO, and other styles
24

Wedge, Daniel John. "Video sequence synchronization." University of Western Australia. School of Computer Science and Software Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0084.

Full text
Abstract:
[Truncated abstract] Video sequence synchronization is necessary for any computer vision application that integrates data from multiple simultaneously recorded video sequences. With the increased availability of video cameras as either dedicated devices, or as components within digital cameras or mobile phones, a large volume of video data is available as input for a growing range of computer vision applications that process multiple video sequences. To ensure that the output of these applications is correct, accurate video sequence synchronization is essential. Whilst hardware synchronization methods can embed timestamps into each sequence on-the-fly, they require specialized hardware and it is necessary to set up the camera network in advance. On the other hand, computer vision-based software synchronization algorithms can be used to post-process video sequences recorded by cameras that are not networked, such as common consumer hand-held video cameras or cameras embedded in mobile phones, or to synchronize historical videos for which hardware synchronization was not possible. The current state-of-the-art software algorithms vary in their input and output requirements and camera configuration assumptions. ... Next, I describe an approach that synchronizes two video sequences where an object exhibits ballistic motions. Given the epipolar geometry relating the two cameras and the imaged ballistic trajectory of an object, the algorithm uses a novel iterative approach that exploits object motion to rapidly determine pairs of temporally corresponding frames. This algorithm accurately synchronizes videos recorded at different frame rates and takes few iterations to converge to sub-frame accuracy. Whereas the method presented by the first algorithm integrates tracking data from all frames to synchronize the sequences as a whole, this algorithm recovers the synchronization by locating pairs of temporally corresponding frames in each sequence. Finally, I introduce an algorithm for synchronizing two video sequences recorded by stationary cameras with unknown epipolar geometry. This approach is unique in that it recovers both the frame rate ratio and the frame offset of the two sequences by finding matching space-time interest points that represent events in each sequence; the algorithm does not require object tracking. RANSAC-based approaches that take a set of putatively matching interest points and recover either a homography or a fundamental matrix relating a pair of still images are well known. This algorithm extends these techniques using space-time interest points in place of spatial features, and uses nested instances of RANSAC to also recover the frame rate ratio and frame offset of a pair of video sequences. In this thesis, it is demonstrated that each of the above algorithms can accurately recover the frame rate ratio and frame offset of a range of real video sequences. Each algorithm makes a contribution to the body of video sequence synchronization literature, and it is shown that the synchronization problem can be solved using a range of approaches.
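The core of the last algorithm described above, recovering a frame-rate ratio and frame offset from noisy putative correspondences, can be expressed as a small RANSAC loop over the linear model t2 = a*t1 + b. The sketch below does exactly that on synthetic matches with outliers; it stands in for the space-time interest point matching, which is not shown, and the tolerance and iteration count are assumptions.

```python
import random

random.seed(0)

# Synthetic putative correspondences between frame times of two sequences:
# true model t2 = 2.5 * t1 + 7, plus some gross outliers.
true_a, true_b = 2.5, 7.0
matches = [(t1, true_a * t1 + true_b + random.gauss(0, 0.3)) for t1 in range(0, 100, 3)]
matches += [(random.uniform(0, 100), random.uniform(0, 300)) for _ in range(15)]

def ransac_time_map(matches, iters=500, tol=1.0):
    """RANSAC fit of t2 = a*t1 + b (a: frame-rate ratio, b: frame offset)."""
    best = (None, None, -1)
    for _ in range(iters):
        (x1, y1), (x2, y2) = random.sample(matches, 2)   # minimal sample: two pairs
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(abs(y - (a * x + b)) < tol for x, y in matches)
        if inliers > best[2]:
            best = (a, b, inliers)
    return best

a, b, n = ransac_time_map(matches)
print(f"frame-rate ratio ~ {a:.2f}, frame offset ~ {b:.2f}, inliers: {n}")
```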
APA, Harvard, Vancouver, ISO, and other styles
25

Hill, Evelyn June. "Applying statistical and syntactic pattern recognition techniques to the detection of fish in digital images." University of Western Australia. School of Mathematics and Statistics, 2004. http://theses.library.uwa.edu.au/adt-WU2004.0070.

Full text
Abstract:
This study is an attempt to simulate aspects of human visual perception by automating the detection of specific types of objects in digital images. The success of the methods attempted here was measured by how well results of experiments corresponded to what a typical human’s assessment of the data might be. The subject of the study was images of live fish taken underwater by digital video or digital still cameras. It is desirable to be able to automate the processing of such data for efficient stock assessment for fisheries management. In this study some well known statistical pattern classification techniques were tested and new syntactical/ structural pattern recognition techniques were developed. For testing of statistical pattern classification, the pixels belonging to fish were separated from the background pixels and the EM algorithm for Gaussian mixture models was used to locate clusters of pixels. The means and the covariance matrices for the components of the model were used to indicate the location, size and shape of the clusters. Because the number of components in the mixture is unknown, the EM algorithm has to be run a number of times with different numbers of components and then the best model chosen using a model selection criterion. The AIC (Akaike Information Criterion) and the MDL (Minimum Description Length) were tested. The MDL was found to estimate the numbers of clusters of pixels more accurately than the AIC, which tended to overestimate cluster numbers. In order to reduce problems caused by initialisation of the EM algorithm (i.e. starting positions of mixtures and number of mixtures), the Dynamic Cluster Finding algorithm (DCF) was developed (based on the Dog-Rabbit strategy). This algorithm can produce an estimate of the locations and numbers of clusters of pixels. The Dog-Rabbit strategy is based on early studies of learning behaviour in neurons. The main difference between Dog-Rabbit and DCF is that DCF is based on a toroidal topology which removes the tendency of cluster locators to migrate to the centre of mass of the data set and miss clusters near the edges of the image. In the second approach to the problem, data was extracted from the image using an edge detector. The edges from a reference object were compared with the edges from a new image to determine if the object occurred in the new image. In order to compare edges, the edge pixels were first assembled into curves using an UpWrite procedure; then the curves were smoothed by fitting parametric cubic polynomials. Finally the curves were converted to arrays of numbers which represented the signed curvature of the curves at regular intervals. Sets of curves from different images can be compared by comparing the arrays of signed curvature values, as well as the relative orientations and locations of the curves. Discrepancy values were calculated to indicate how well curves and sets of curves matched the reference object. The total length of all matched curves was used to indicate what fraction of the reference object was found in the new image. The curve matching procedure gave results which corresponded well with what a human being might observe.
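The first stage described above, fitting Gaussian mixtures with different numbers of components and keeping the model that a description-length criterion prefers, maps almost directly onto standard tooling. The sketch below uses scikit-learn's GaussianMixture and its BIC score, which has the same form as an MDL criterion, on synthetic 2D points standing in for segmented fish pixels; it is an illustration of the general procedure, not the thesis's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-in for foreground pixel coordinates: three clusters of points
pts = np.vstack([
    rng.normal([20, 30], 3, size=(200, 2)),
    rng.normal([60, 40], 4, size=(150, 2)),
    rng.normal([45, 80], 2, size=(100, 2)),
])

# Fit mixtures with 1..6 components and keep the lowest BIC (an MDL-style criterion)
best_k, best_bic, best_gmm = None, np.inf, None
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          n_init=3, random_state=0).fit(pts)
    bic = gmm.bic(pts)
    if bic < best_bic:
        best_k, best_bic, best_gmm = k, bic, gmm

print("selected number of clusters:", best_k)
print("cluster means (locations):")
print(np.round(best_gmm.means_, 1))
# best_gmm.covariances_ describes the size and shape of each cluster.
```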
APA, Harvard, Vancouver, ISO, and other styles
26

Masek, Martin. "Hierarchical segmentation of mammograms based on pixel intensity." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2004. http://theses.library.uwa.edu.au/adt-WU2003.0033.

Full text
Abstract:
Mammography is currently used to screen women in targeted risk classes for breast cancer. Computer assisted diagnosis of mammograms attempts to lower the workload on radiologists by either automating some of their tasks or acting as a second reader. The task of mammogram segmentation based on pixel intensity is addressed in this thesis. The mammographic process leads to images where intensity in the image is related to the composition of tissue in the breast; it is therefore possible to segment a mammogram into several regions using a combination of global thresholds, local thresholds and higher-level information based on the intensity histogram. A hierarchical view is taken of the segmentation process, with a series of steps that feed into each other. Methods are presented for segmentation of: 1. image background regions; 2. skin-air interface; 3. pectoral muscle; and 4. segmentation of the database by classification of mammograms into tissue types and determining a similarity measure between mammograms. All methods are automatic. After a detailed analysis of minimum cross-entropy thresholding, multi-level thresholding is used to segment the main breast tissue from the background. Scanning artefacts and high intensity noise are separated from the breast tissue using binary image operations, rectangular labels are identified from the binary image by their shape, the Radon transform is used to locate the edges of tape artefacts, and a filter is used to locate vertical running roller scratching. Orientation of the image is determined using the shape of the breast and properties of the breast tissue near the breast edge. Unlike most existing orientation algorithms, which only distinguish between left facing or right facing breasts, the algorithm developed determines orientation for images flipped upside down or rotated onto their side and works successfully on all images of the testing database. Orientation is an integral part of the segmentation process, as skin-air interface and pectoral muscle extraction rely on it. A novel way to view the skin-line on the mammogram is as two sets of functions, one set with the x-axis along the rows, and the other with the x-axis along the columns. Using this view, a local thresholding algorithm, and a more sophisticated optimisation based algorithm are presented. Using fitted polynomials along the skin-air interface, the error between polynomial and breast boundary extracted by a threshold is minimised by optimising the threshold and the degree of the polynomial. The final fitted line exhibits the inherent smoothness of the polynomial and provides a more accurate estimate of the skin-line when compared to another established technique. The edge of the pectoral muscle is a boundary between two relatively homogenous regions. A new algorithm is developed to obtain a threshold to separate adjacent regions distinguishable by intensity. Taking several local windows containing different proportions of the two regions, the threshold is found by examining the behaviour of either the median intensity or a modified cross-entropy intensity as the proportion changes. Image orientation is used to anchor the window corner in the pectoral muscle corner of the image and straight-line fitting is used to generate a more accurate result from the final threshold. An algorithm is also presented to evaluate the accuracy of different pectoral edge estimates. Identification of the image background and the pectoral muscle allows the breast tissue to be isolated in the mammogram. 
The density and pattern of the breast tissue are correlated with: 1. breast cancer risk; and 2. the difficulty of reading for the radiologist. Computerised density assessment methods have in the past been feature-based, with a number of features extracted from the tissue or its histogram and used as input to a classifier. Here, histogram distance measures have been used to classify mammograms into density types, and also to order the image database according to image similarity. The advantage of histogram distance measures is that they are less reliant on the accuracy of segmentation and the quality of extracted features, as the whole histogram is used to determine distance, rather than reducing it to a set of features. Existing histogram distance measures have been applied, and a new histogram distance measure is presented, showing higher accuracy than other such measures, and also better performance than an established feature-based technique.
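As a rough illustration of two of the building blocks described above, the following Python sketch shows a single-level minimum cross-entropy threshold and one established histogram distance (chi-squared). The multi-level scheme, artefact handling and the new distance measure developed in the thesis are not reproduced here; the function names and parameters are illustrative only.

```python
import numpy as np

def min_cross_entropy_threshold(image, n_bins=256):
    """Pick a global threshold by minimising Li's cross-entropy criterion
    between the image and its two-level approximation."""
    hist, edges = np.histogram(image, bins=n_bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    hist = hist.astype(float)
    best_t, best_score = centres[1], np.inf
    for k in range(1, n_bins - 1):
        w1, w2 = hist[:k].sum(), hist[k:].sum()
        if w1 == 0 or w2 == 0:
            continue
        a1 = (hist[:k] * centres[:k]).sum()   # first moment, lower class
        a2 = (hist[k:] * centres[k:]).sum()   # first moment, upper class
        mu1, mu2 = a1 / w1, a2 / w2
        if mu1 <= 0 or mu2 <= 0:
            continue
        score = -(a1 * np.log(mu1) + a2 * np.log(mu2))
        if score < best_score:
            best_score, best_t = score, centres[k]
    return best_t

def chi2_histogram_distance(img_a, img_b, n_bins=64):
    """One established histogram distance (chi-squared), usable for ordering
    mammograms by tissue similarity."""
    h_a, _ = np.histogram(img_a, bins=n_bins, range=(0, 255), density=True)
    h_b, _ = np.histogram(img_b, bins=n_bins, range=(0, 255), density=True)
    return 0.5 * np.sum((h_a - h_b) ** 2 / (h_a + h_b + 1e-12))
```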
APA, Harvard, Vancouver, ISO, and other styles
27

Wong, Tzu Yen. "Image transition techniques using projective geometry." University of Western Australia. School of Computer Science and Software Engineering, 2009. http://theses.library.uwa.edu.au/adt-WU2009.0149.

Full text
Abstract:
[Truncated abstract] Image transition effects are commonly used on television and in human-computer interfaces. The transition between images creates a perception of continuity which has aesthetic value in special effects and practical value in visualisation. The work in this thesis demonstrates that better image transition effects are obtained by incorporating properties of projective geometry into image transition algorithms. Current state-of-the-art techniques can be classified into two main categories, namely shape interpolation and warp generation. Many shape interpolation algorithms aim to preserve rigidity, but none preserve it with perspective effects. Most warp generation techniques focus on smoothness and lack the rigidity of perspective mapping. The affine transformation, a commonly used mapping between triangular patches, is rigid but not able to model perspective effects. Image transition techniques from the view interpolation community are effective in creating transitions with the correct perspective effect; however, those techniques usually require more feature points and algorithms of higher complexity. The motivation of this thesis is to enable different views of a planar surface to be interpolated with an appropriate perspective effect. The projective geometric relationship which produces the perspective effect can be specified by two quadrilaterals. This problem is equivalent to finding a perspectively appropriate interpolation for projective transformation matrices. I present two algorithms that enable smooth perspective transition between planar surfaces. The algorithms only require four point correspondences on two input images. ...The second algorithm generates transitions between shapes that lie on the same plane which exhibits a strong perspective effect. It recovers the perspective transformation which produces the perspective effect and constrains the transition so that the in-between shapes also lie on the same plane. For general image pairs with multiple quadrilateral patches, I present a novel algorithm that is transitionally symmetrical and exhibits good rigidity. The use of quadrilaterals, rather than triangles, allows an image to be represented by a small number of primitives. This algorithm uses a closed-form force equilibrium scheme to correct the misalignment of the multiple transitional quadrilaterals. I also present an application for my quadrilateral interpolation algorithm in Seitz and Dyer's view morphing technique. This application automates and improves the calculation of the reprojection homography in the postwarping stage of their technique. Finally, I unify different image transition research areas into a common framework; this enables analysis and comparison of the techniques and the quality of their results. I highlight that quantitative measures can greatly facilitate the comparisons among different techniques and present a quantitative measure based on epipolar geometry. This novel quantitative measure enables the quality of transitions between images of a scene from different viewpoints to be quantified by its estimated camera path.
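The projective relationship between two quadrilaterals referred to above is a 3x3 homography fixed by four point correspondences. The sketch below estimates it with the standard direct linear transform and shows the naive matrix blend that lacks perspective correctness; it is not the interpolation algorithm proposed in the thesis, and the function names are illustrative.

```python
import numpy as np

def homography_from_quads(src, dst):
    """Estimate the 3x3 projective transformation mapping the four corners
    of one quadrilateral onto another (standard DLT)."""
    A = []
    for (x, y), (u, v) in zip(np.asarray(src, float), np.asarray(dst, float)):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def naive_matrix_blend(H, t):
    """Linear blend between the identity and H. This is the kind of 'flat'
    interpolation that does not preserve perspective, which motivates the
    algorithms developed in the thesis."""
    return (1 - t) * np.eye(3) + t * H

H = homography_from_quads([(0, 0), (1, 0), (1, 1), (0, 1)],
                          [(0, 0), (2, 0.2), (1.8, 1.3), (-0.1, 1)])
print(naive_matrix_blend(H, 0.5))
```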
APA, Harvard, Vancouver, ISO, and other styles
28

Smith, Jennifer. "Theorizing Digital Narrative: Beginnings, Endings, and Authorship." VCU Scholars Compass, 2012. http://scholarscompass.vcu.edu/etd/316.

Full text
Abstract:
Since its development, critics of electronic literature have touted all that is “new” about the field, commenting on how these works make revolutionary use of non-linear structure, hyperlinks, and user interaction. Scholars of digital narrative have most often focused their critiques within the paradigms of either the text-centric structuralist model of narrativity or post-structuralist models that implicate the text as fundamentally fluid and dependent upon its reader for meaning. But neither of these approaches can account completely for the unique modes in which digital narratives prompt readerly progression, yet still exist as independent creative artifacts marked by purposive design. I argue that, in both practice and theory, we must approach digital-born narratives as belonging to a third, hybrid paradigm. In contrast to standard critical approaches, I interrogate the presumed “newness” of digital narratives to reveal many aspects of these works that hearken to print predecessors and thus confirm classical narratological theories of structure and authorship. Simultaneously, though, I demonstrate that narrative theory must be revised and expanded to account for some of the innovative techniques inherent to digital-born narrative. Across media formats, theories of narrative beginnings, endings, and authorship contribute to understanding of readerly progress and comprehension. My analysis of Leishman’s electronically animated work Deviant: The Possession of Christian Shaw shows how digital narratives extend theories of narrative beginnings, confirming theoretical suitability of existing rules of notice, expectations for mouseover actions, and the role of institutional and authorial antetexts. My close study of Jackson’s hypertext my body: a Wunderkammer likewise informs scholarship on narrative endings, as my body does not provide a neatly linear plot, and thus does not cleanly correspond to theories of endings that revolve around conceptions of instabilities or tensions. Yet I argue that there is still compelling reason to read for narrative closure, and thus narrative coherence, within this and other digital works. Finally, my inquiry into Pullinger and Joseph’s collaboratively written Flight Paths: A Networked Novel firmly justifies the theory of implied authorship in both print and digital environments and confirms the suitability of this construct to a range of texts.
APA, Harvard, Vancouver, ISO, and other styles
29

Novaes, Marcos (Marcos Nogueira). "Multiresolution Signal Cross-correlation." Thesis, University of North Texas, 1994. https://digital.library.unt.edu/ark:/67531/metadc277645/.

Full text
Abstract:
Signal correlation is a digital signal processing technique which has a wide variety of applications, ranging from geophysical exploration to acoustic signal enhancement and beamforming. This dissertation considers the technique from an underwater acoustics perspective, but the algorithms illustrated here can be readily applied to other areas. Although beamforming techniques have been studied for the past fifty years, modern beamforming systems still have difficulty operating in noisy environments, especially in shallow water.
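A minimal example of the core operation, assuming a generic two-sensor delay-and-sum setting rather than the dissertation's specific beamformer: the delay between two sensor signals is estimated from the peak of their cross-correlation.

```python
import numpy as np

def time_delay_estimate(x, y, fs):
    """Estimate the delay of y relative to x (in seconds) from the peak of
    their cross-correlation - the basic step behind delay-and-sum beamforming."""
    x = x - np.mean(x)
    y = y - np.mean(y)
    corr = np.correlate(y, x, mode='full')
    lags = np.arange(-len(x) + 1, len(y))   # lag axis for 'full' mode
    return lags[np.argmax(corr)] / fs

# Example: the second channel lags the first by 10 samples.
fs = 8000.0
x = np.random.randn(1024)
y = np.concatenate([np.zeros(10), x])[:1024]
print(time_delay_estimate(x, y, fs))        # ~10 / 8000 seconds
```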
APA, Harvard, Vancouver, ISO, and other styles
30

Darrington, John Mark. "Real time extraction of ECG fiducial points using shape based detection." University of Western Australia. School of Computer Science and Software Engineering, 2009. http://theses.library.uwa.edu.au/adt-WU2009.0152.

Full text
Abstract:
The electrocardiograph (ECG) is a common clinical and biomedical research tool used for both diagnostic and prognostic purposes. In recent years, computer-aided analysis of the ECG has enabled cardiographic patterns to be found which were hitherto not apparent. Many of these analyses rely upon the segmentation of the ECG into separate time-delimited waveforms. The instants delimiting these segments are called the
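As a generic illustration only (the thesis develops a shape-based detector, which is not reproduced here), a common baseline for extracting one class of fiducial points, the R-peaks, is band-pass filtering followed by peak picking:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs):
    """Locate R-peak fiducial points with a band-pass filter plus simple
    amplitude/distance thresholds (a baseline, not shape-based detection)."""
    # Band-pass roughly covering the QRS energy band (about 5-15 Hz).
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype='band')
    filtered = filtfilt(b, a, ecg)
    # Peaks must exceed an adaptive height and be at least 0.25 s apart.
    height = 0.5 * np.max(np.abs(filtered))
    peaks, _ = find_peaks(filtered, height=height, distance=int(0.25 * fs))
    return peaks   # sample indices of the R-wave fiducial points
```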
APA, Harvard, Vancouver, ISO, and other styles
31

Slack-Smith, Amanda Jennifer, and not supplied. "The practical application of McCloud's horizontal 'Infinite canvas' through the design, composition and creation of an online comic." RMIT University. Computer Science and Information Technology, 2006. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20070205.162540.

Full text
Abstract:
This research examines the application of Scott McCloud's theory of the Infinite canvas, specifically the horizontal example outlined in Reinventing Comics (McCloud, 2000). It focuses on the usability and effectiveness of the Infinite canvas theory when applied as a practical example of a comic outcome for the Internet. This practical application of McCloud's horizontal Infinite canvas model has been achieved by creating a digital comic entitled Sad Reflections, a continuous horizontal narrative that is 20 cm in height and 828 cm in length and was designed to be viewed in a digital environment. This comic incorporates the traditional comic techniques of gutters, time frames, line, and the combination of words and pictures, as outlined by McCloud (1993) in his first theoretical text Understanding Comics. These techniques are used to ensure that the project fulfils the technical criteria used by the comic book industry to create comics. The project also incorporates McCloud's personally devised Infinite canvas techniques of trails, distance pacing, narrative subdivision, sustained rhythm and gradualism as outlined on his website. These new techniques are applied to assess their effectiveness in the creation of the horizontal Infinite canvas and their ability to be integrated with traditional comic techniques. The focus of this project is to examine the strengths and weaknesses of McCloud's Infinite canvas theory when applied to the practical comic outcome of Sad Reflections. Three key questions are used to guide this research. These questions are: 1. Does the application of traditional comic techniques affect the effectiveness of the Infinite canvas when implemented in a horizontal format? 2. Are the new Infinite canvas techniques as outlined by McCloud able to be applied to a horizontal format, and what impact do these techniques have on the process? 3. Is the application of a horizontal Infinite canvas of benefit to future developers of web comics? Based on the outcomes of the above questions, this paper nominates strategies, considerations and suitable production processes for future developers of web comics.
APA, Harvard, Vancouver, ISO, and other styles
32

Cardoso, Silvia Helena dos Santos. "Estrada, paisagem e capim - = fotografias e relatos no Jalapão." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/284439.

Full text
Abstract:
Advisor: Luise Weiss
The Institute of Arts (IA) library copy is accompanied by 2 DVD-Rs
Thesis (doctorate) - Universidade Estadual de Campinas, Instituto de Artes
Resumo: Estrada, Paisagem e Capim - Fotografias e Relatos no Jalapão é uma pesquisa em Poética Visual constituída por viagens - como deslocamento e experiência estética - ao cerrado jalapoeiro, no interior do Estado do Tocantins. A fotografia digital e as anotações se constituem como expressão e desenvolvimento do percurso processual do trabalho realizado. As referências teóricas e visuais contaram com a Antropologia como essência, metodologia e inserção no campo de pesquisa e a Arte como espaço de reflexão e criação para o caminho poético. Diferentes questionamentos surgiram ao longo do desenvolvimento do fazer artístico e acabaram por delimitar o trabalho. Nesta pesquisa, arte, natureza e cultura tornam-se pares no processo de registro e percepção da intuição criativa fotográfica, enfatizando assim, o caráter de "work in progress". Um Livro de Fotografias e um DVD sonorizado com 170 imagens são apresentados como processo e resultado do trabalho poético
Abstract: Road, Landscape and Grass - Photographs and Reports in the Jalapão is a research project in visual poetics consisting of journeys - as displacement and aesthetic experience - to the Brazilian savannah (cerrado) of the Jalapão, in the State of Tocantins, Brazil. Digital photography and written notes constitute the expression and development of the processual course of the work. The theoretical and visual references draw on Anthropology as essence, methodology and means of insertion into the field of research, and on Art as a space of reflection and creation along the poetic path. Different questions arose in the course of the artistic making and ultimately delimited the work. In this research, art, nature and culture become partners in the process of recording and perceiving photographic creative intuition, emphasising its character as a "work in progress". A book of photographs and a DVD with sound, containing 170 images, are presented as the process and result of the poetic work
Doctorate
Visual Arts
Doctor of Arts
APA, Harvard, Vancouver, ISO, and other styles
33

Pedroso, Anderson Antonio. "Vilém flusser : de la philosophie de la photographie à l’univers des images techniques." Thesis, Sorbonne université, 2020. http://www.theses.fr/2020SORUL103.

Full text
Abstract:
La pensée de Vilém Flusser (1920-1991) a donné lieu à une abondante bibliographie critique sur ses apports en matière de communication, où son pari d’une « nouvelle imagination » fait tenir ensemble l’art, la science et la technologie, sous l’égide de sa philosophie de la photographie. L’archéologie de sa pensée, proposée ici, a été construite en tenant compte de l’ancrage disciplinaire de ses travaux dans les théories de la communication et des médias : elle s’est intéressée à la notion de l’art que Flusser développe au cours de son parcours, afin de saisir le déploiement de cette dimension artistique, son statut et ses contours, aussi bien que sa portée. La notion d’art chez Flusser est indissociablement liée à l’histoire : l’expérience traumatisante de l’exil est devenue centrale dans sa pensée critique de tout totalitarisme. L’ensemble de pratiques et de savoirs qui fondent son rapport au monde et qui informent sur l’élaboration de sa pensée sont travaillés par une Kulturgeschichte, vue depuis la perspective d’une « post-histoire ». Mais il y a en particulier une pensée du jeu, ludique, où l’objectif est moins de jouer à l’intérieur des règles établies que de déjouer les règles, de « jouer pour changer le jeu ». Autrement dit, il s’agit d’une pensée radicalement dialogique et polyphonique, dont la Kommunikologie, façonnée comme métathéorie, représente l’essor de Flusser. Sa trajectoire historique et les savoirs qui lui sont associés contribuent à la construction d’une pensée cybernétique qui projette les lignes fondamentales d’une Kunstwissenschaft, dans laquelle il propose une sorte d’iconoclasme sans renoncer aux images
The thought of Vilém Flusser (1920-1991) has given rise to an abundant critical bibliography on his contributions in the field of communication, where his wager on a "new imagination" brings together art, science and technology under the aegis of his philosophy of photography. The archaeology of his thought proposed here is constructed in line with the disciplinary grounding of his work in theories of communication and media: it traces the notion of art that Flusser developed during his career, in order to grasp the deployment of this artistic dimension, its status and its contours, in his thought. Flusser's notion of art is inseparably linked to history: the traumatic experience of exile became central in his critical thinking regarding all totalitarianism. The practices and knowledge that underpin his relationship to the world and guide the development of his thought are worked through by a Kulturgeschichte, seen from the perspective of a "post-history". But there is, in particular, a playful thinking of the game, where the objective is less to play within the established rules than to thwart them, to "play to change the game". In other words, it is a radically dialogical and polyphonic way of thinking, whose Kommunikologie, shaped as a metatheory, directs the development of Flusser's work. His historical trajectory and the knowledge associated with it contribute to the construction of a cybernetic way of thinking that sketches the fundamental lines of a Kunstwissenschaft, in which he proposes a kind of iconoclasm without giving up images
APA, Harvard, Vancouver, ISO, and other styles
34

Mackley, Joshua, and mikewood@deakin edu au. "Extracting fingerprint features using textures." Deakin University. School of Engineering & Technology, 2004. http://tux.lib.deakin.edu.au./adt-VDU/public/adt-VDU20050815.111403.

Full text
Abstract:
Personal identification of individuals is becoming increasingly adopted in society today. Due to the large number of electronic systems that require human identification, faster and more secure identification systems are pursued. Biometrics is based upon the physical characteristics of individuals; of these, the fingerprint is the most common, as used within law enforcement. Fingerprint-based systems have been introduced into society but have not been well received due to relatively high rejection rates and false acceptance rates. This limited acceptance of fingerprint identification systems requires new techniques to be investigated to improve this identification method and the acceptance of the technology within society. Electronic fingerprint identification provides a method of identifying an individual within seconds, quickly and easily. The fingerprint must be captured instantly to allow the system to identify the individual without any technical user interaction, to simplify system operation. The performance of the entire system relies heavily on the quality of the original fingerprint image that is captured digitally. A single fingerprint scan for verification makes it easier for users accessing the system as it replaces the need to remember passwords or authorisation codes. The identification system comprises several components to perform this function, which include a fingerprint sensor, a processor, and feature extraction and verification algorithms. A compact texture feature extraction method will be implemented within an embedded microprocessor-based system for security, performance and cost-effective production over currently available commercial fingerprint identification systems. To perform these functions, various software packages are available for developing programs for Windows-based operating systems, but development must not be constrained to a graphical user interface alone. MATLAB was the software package chosen for this thesis due to its strong mathematical, data analysis and image analysis libraries and capability. MATLAB enables the complete fingerprint identification system to be developed and implemented within a PC environment and also to be exported at a later date directly to an embedded processing environment. The nucleus of the fingerprint identification system is the feature extraction approach presented in this thesis, which uses global texture information unlike the local information used in traditional minutiae-based identification methods. Commercial solid-state sensors such as the type selected for use in this thesis have a limited contact area with the fingertip and therefore only sample a limited portion of the fingerprint. This limits the number of minutiae that can be extracted from the fingerprint and as such limits the number of common singular points between two impressions of the same fingerprint. The application of texture feature extraction will be tested using a variety of fingerprint images to determine the most appropriate format for use within the embedded system. This thesis has focused on designing a fingerprint-based identification system that is highly expandable using the MATLAB environment. The main components that are defined within this thesis are the hardware design, image capture, image processing and feature extraction methods. Selection of the final system components for this electronic fingerprint identification system was determined by using specific criteria to yield the highest performance from an embedded processing environment.
These platforms are very cost-effective and will allow fingerprint-based identification technology to be implemented in more commercial products that can benefit from the security and simplicity of a fingerprint identification system.
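A hedged sketch of the kind of global texture description discussed above, using a small Gabor filter bank; this is in the spirit of filterbank-based fingerprint descriptors rather than the exact method implemented in the thesis, and the parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(frequency, theta, sigma=4.0, size=21):
    """Real part of a Gabor filter tuned to a ridge frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * frequency * xr)

def texture_feature_vector(img, frequency=0.1, n_orientations=8):
    """Global texture descriptor: standard deviation of the Gabor response
    at several orientations across the whole fingerprint image."""
    img = (img - img.mean()) / (img.std() + 1e-8)
    features = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        response = convolve(img, gabor_kernel(frequency, theta))
        features.append(response.std())   # ridge energy along this orientation
    return np.array(features)
```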
APA, Harvard, Vancouver, ISO, and other styles
35

Lamure, Michel. "Espaces abstraits et reconnaissance des formes application au traitement des images digitales /." Grenoble 2 : ANRT, 1987. http://catalogue.bnf.fr/ark:/12148/cb37607029s.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Thorne, Chris. "Origin-centric techniques for optimising scalability and the fidelity of motion, interaction and rendering." University of Western Australia. School of Computer Science and Software Engineering, 2008. http://theses.library.uwa.edu.au/adt-WU2008.0177.

Full text
Abstract:
[Truncated abstract] This research addresses endemic problems in the fields of computer graphics and simulation such as jittery motion, spatial scalability, rendering problems such as z-buffer tearing, the repeatability of physics dynamics and numerical error in positional systems. Designers of simulation and computer graphics software tend to map real-world navigation rules onto the virtual world, expecting to see equivalent virtual behaviour. After all, if computers are programmed to simulate the real world, it is reasonable to expect the virtual behaviour to correspond. However, in computer simulation many behaviours and other computations show measurable problems inconsistent with real-world experience, particularly at large distances from the virtual world origin. Many of these problems, particularly in rendering, can be imperceptible, so users may be oblivious to them, but they are measurable using experimental methods. These effects, generically termed spatial jitter in this thesis, are found in this study to stem from floating point error in positional parameters such as spatial coordinates. This simulation error increases with distance from the coordinate origin and as the simulation progresses through the pipeline. The most common form of simulation error relevant to this study is spatial error, which this thesis finds is calculated not, as might be expected, using numerical relative error propagation rules, but using the rules of geometry. ... The thesis shows that the thinking behind real-world rules, such as for navigation, has to change in order to properly design for optimal-fidelity simulation. Origin-centric techniques, formulae, terms, architecture and processes are all presented as one holistic solution in the form of an optimised simulation pipeline. The results of analysis, experiments and case studies are used to derive a formula for relative spatial error that accounts for potential pathological cases. A formula for spatial error propagation is then derived by using the new knowledge of spatial error to extend numerical relative error propagation mathematics. Finally, analytical results are developed to provide a general mathematical expression for maximum simulation error and how it varies with distance from the origin and the number of mathematical operations performed. We conclude that the origin-centric approach provides a general and optimal solution to spatial jitter. Along with changing the way one thinks about navigation, and the process guidelines and formulae developed in the study, the approach provides a new paradigm for positional computing. This paradigm can improve many aspects of computer simulation in areas such as entertainment, visualisation for education, industry, science, or training. Examples are: spatial scalability; the accuracy of motion, interaction and rendering; and the consistency and predictability of numerical computation in physics. This research also affords potential cost benefits through simplification of software design and code. These cost benefits come from the core techniques for minimising position-dependent error and error propagation, from the resulting simplifications, and from new algorithms that flow naturally out of the core solution.
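The origin-distance effect described above can be seen directly from the spacing of representable float32 values. The snippet below is a minimal demonstration of the problem and of the general origin-centric idea of keeping render-time coordinates small; it is not the thesis's full pipeline, and the magnitudes chosen are illustrative.

```python
import numpy as np

def float32_spacing(distance):
    """Gap between adjacent representable float32 coordinates at a given
    distance from the origin - the source of 'spatial jitter'."""
    return np.spacing(np.float32(distance))

print(float32_spacing(1.0))      # ~1.2e-07 world units near the origin
print(float32_spacing(1.0e7))    # ~1.0 world unit: sub-metre motion is lost

# Origin-centric idea (sketch): keep absolute positions in double precision
# and express render-time coordinates relative to the viewer, so that the
# magnitudes handled in single precision stay small.
viewer = np.array([1.0e7, 0.0, 0.0])
world_point = np.array([1.0e7 + 0.25, 0.0, 0.0])
relative = (world_point - viewer).astype(np.float32)
print(relative[0])               # 0.25 survives intact
```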
APA, Harvard, Vancouver, ISO, and other styles
37

Houlis, Pantazis Constantine. "A novel parametrized controller reduction technique based on different closed-loop configurations." University of Western Australia. School of Electrical, Electronic and Computer Engineering, 2009. http://theses.library.uwa.edu.au/adt-WU2010.0052.

Full text
Abstract:
This thesis is concerned with the approximation of high-order controllers, or the controller reduction problem. We first consider approximating high-order controllers by low-order controllers based on closed-loop system approximation. By approximating the closed-loop system transfer function, we derive a new parametrized double-sided frequency-weighted model reduction problem. The formulas for the input and output weights are derived using three closed-loop system configurations: (i) by placing a controller in cascade with the plant, (ii) by placing a controller in the feedback path, and (iii) by using the linear fractional transformation (LFT) representation. One of the weights will be a function of a free parameter which can be varied in the resultant frequency-weighted model reduction problem. We show that by using standard frequency-weighted model reduction techniques, the approximation error can easily be reduced by varying the free parameter to give more accurate low-order controllers. A method for choosing the free parameter to obtain optimal results is suggested. A number of practical examples are used to show the effectiveness of the proposed controller reduction method. We have then considered the relationships between the closed-loop system configurations, which can be expressed using a classical control block diagram or a modern control block diagram (LFT). Formulas are derived to convert a closed-loop system represented by a classical control block diagram to a closed-loop system represented by a modern control block diagram and vice versa.
APA, Harvard, Vancouver, ISO, and other styles
38

Monjour, Servanne. "La littérature à l’ère photographique : mutations, novations et enjeux (de l’argentique au numérique)." Thèse, Rennes 2, 2015. http://hdl.handle.net/1866/13614.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Sekercioglu, Ahmet, and ahmet@hyperion ctie monash edu au. "Fuzzy logic control techniques and structures for Asynchronous Transfer Mode (ATM) based multimedia networks." Swinburne University of Technology, 1999. http://adt.lib.swin.edu.au./public/adt-VSWT20050411.130014.

Full text
Abstract:
The research presented in this thesis aims to demonstrate that fuzzy logic is a useful tool for developing mechanisms for controlling traffic flow in ATM-based multimedia networks to maintain quality of service (QoS) requirements and maximize resource utilization. The study first proposes a hierarchical, multilevel control structure for ATM networks to exploit the reported strengths of fuzzy logic at various control levels. Then, an extensive development and evaluation is presented for a subset of the proposed control architecture at the congestion control level. An ATM-based multimedia network must have quite sophisticated traffic control capabilities to effectively handle the requirements of a dynamically varying mixture of voice, video and data services while meeting the required levels of performance. Feedback control techniques have an essential role in the effective and efficient management of the resources of ATM networks. However, the development of conventional feedback control techniques relies on the availability of analytical system models. The characteristics of ATM networks and the complexity of service requirements cause analytical modeling to be very difficult, if not impossible. The lack of realistic dynamic explicit models leads to substantial problems in developing control solutions for B-ISDN networks. This limits the ability of conventional techniques to directly address the control objectives for ATM networks. In the literature, several connection admission and congestion control methods for B-ISDN networks have been reported, and these have achieved mixed success. Usually they either assume heavily simplified models, or they are too complicated to implement, being mainly derived using probabilistic (steady-state) models. Fuzzy logic controllers, on the other hand, have been applied successfully to the task of controlling systems for which analytical models are not easily obtainable. Fuzzy logic control is a knowledge-based control strategy that can be utilized when an explicit model of a system is not available or the model itself, if available, is highly complex and nonlinear. In this case, the problem of control system design is based on qualitative and/or empirically acquired knowledge regarding the operation of the system. Representation of qualitative or empirically acquired knowledge in a fuzzy logic controller is achieved by linguistic expressions in the form of fuzzy relational equations. By using fuzzy relational equations, classifications related to system parameters can be derived without explicit description. The thesis presents a new predictive congestion control scheme, Fuzzy Explicit Rate Marking (FERM), which aims to avoid congestion and, by doing so, minimize cell losses, attain high server utilization, and maintain fair use of links. The performance of the FERM scheme is extremely competitive with that of control schemes developed using traditional methods over a considerable period of time. The results of the study demonstrate that fuzzy logic control is a highly effective design tool for this type of problem, relative to traditional methods. When controlled systems are highly nonlinear and complex, it keeps human insight alive and accessible at the lower levels of the control hierarchy, so that higher levels can be built on this understanding.
Additionally, the FERM scheme has been extended to tune itself adaptively (A-FERM), so that continuous automatic tuning of its parameters can be achieved, making it more adaptive to system changes and leading to better utilization of network bandwidth. This achieves a level of robustness that is not exhibited by other congestion control schemes reported in the literature. In this work the focus is on ATM networks rather than IP-based networks. For historical reasons, and due to fundamental philosophical differences in the (earlier) approach to congestion control, research on the control of TCP/IP and ATM-based networks proceeded separately. However, some convergence between them has recently become evident. In the TCP/IP literature, proposals have appeared on active queue management in routers and Explicit Congestion Notification (ECN) for IP. It is reasonably expected that the algorithms developed in this study will be applicable to IP-based multimedia networks as well.
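As a hedged illustration of the style of inference behind an explicit-rate fuzzy controller, the sketch below maps a normalised queue length and queue growth rate to a rate-adjustment factor with a four-rule, Sugeno-style rule base. The actual FERM rule base, membership functions and inputs in the thesis are more elaborate and are not reproduced here.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_rate_factor(queue, growth):
    """Map normalised queue length and growth rate (both in [0, 1]) to a
    rate-adjustment factor using singleton consequents and a weighted
    average defuzzification (illustrative rule base only)."""
    q_low, q_high = tri(queue, -0.5, 0.0, 0.6), tri(queue, 0.4, 1.0, 1.5)
    g_low, g_high = tri(growth, -0.5, 0.0, 0.6), tri(growth, 0.4, 1.0, 1.5)
    rules = [
        (min(q_low, g_low), 1.1),    # queue short, not growing -> allow more
        (min(q_low, g_high), 1.0),   # short but growing        -> hold
        (min(q_high, g_low), 0.9),   # long but draining        -> reduce gently
        (min(q_high, g_high), 0.7),  # long and growing         -> reduce sharply
    ]
    weights = np.array([w for w, _ in rules])
    outputs = np.array([o for _, o in rules])
    return float(np.dot(weights, outputs) / (weights.sum() + 1e-12))

# A three-quarters-full queue that is still growing: sources are told to slow down.
print(fuzzy_rate_factor(0.75, 0.6))   # factor < 1
```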
APA, Harvard, Vancouver, ISO, and other styles
40

Wei, Ming. "A Study of Perceptually Tuned, Wavelet Based, Rate Scalable, Image and Video Compression." Thesis, University of North Texas, 2002. https://digital.library.unt.edu/ark:/67531/metadc3074/.

Full text
Abstract:
In this dissertation, first, we have proposed and implemented a new perceptually tuned, wavelet-based, rate-scalable color image encoding/decoding system based on the human perceptual model. It is based on state-of-the-art research on embedded wavelet image compression techniques and the Contrast Sensitivity Function (CSF) for the Human Visual System (HVS), and extends this scheme to handle optimal bit allocation among multiple bands, such as Y, Cb, and Cr. Our experimental image codec shows very exciting results in compression performance and visual quality compared to the new wavelet-based international still image compression standard, JPEG 2000. On the other hand, our codec also shows significantly better speed performance and comparable visual quality in comparison to the best codec available in rate-scalable color image compression, CSPIHT, which is based on Set Partitioning In Hierarchical Trees (SPIHT) and the Karhunen-Loeve Transform (KLT). Secondly, a novel wavelet-based interframe compression scheme has been developed and put into practice. It is based on the Flexible Block Wavelet Transform (FBWT) that we have developed. FBWT-based interframe compression is very efficient in both compression and speed performance. The compression performance of our video codec is compared with H263+. At the same bit rate, our encoder, being comparable to the H263+ scheme with a slightly lower Peak Signal-to-Noise Ratio (PSNR) value, produces a more visually pleasing result. This implementation also preserves the scalability of the wavelet embedded coding technique. Thirdly, the scheme to handle optimal bit allocation among color bands for still imagery has been modified and extended to accommodate the spatial-temporal sensitivity of the HVS model. The bit allocation among color bands based on Kelly's spatio-temporal CSF model is designed to achieve the perceptual optimum for human eyes. A perceptually tuned, wavelet-based, rate-scalable video encoding/decoding system has been designed and implemented based on this new bit allocation scheme. Finally, to present the potential applications of our rate-scalable video codec, a prototype system for rate-scalable video streaming over the Internet has been designed and implemented to deal with the bandwidth unpredictability of the Internet.
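A minimal sketch of why wavelet coders are naturally rate scalable, using PyWavelets: keeping only the largest coefficients gives a coarse but recognisable reconstruction, with quality improving as more are kept. This simple thresholding stands in for embedded coders such as SPIHT and for the perceptual bit allocation developed in the dissertation; the wavelet, decomposition level and fraction below are illustrative.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_truncate(img, wavelet='bior4.4', levels=3, keep_fraction=0.05):
    """Transform, keep only the largest keep_fraction of coefficients,
    and reconstruct - a crude stand-in for an embedded bitstream cut at
    a given rate."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep_fraction)
    arr = np.where(np.abs(arr) >= thresh, arr, 0.0)
    coeffs_kept = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
    return pywt.waverec2(coeffs_kept, wavelet)

# Example: reconstruct a random test image from only 5% of its coefficients.
img = np.random.rand(256, 256)
approx = wavelet_truncate(img, keep_fraction=0.05)
```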
APA, Harvard, Vancouver, ISO, and other styles
41

Zarate, Orozco Ismael. "Software and Hardware-In-The-Loop Modeling of an Audio Watermarking Algorithm." Thesis, University of North Texas, 2010. https://digital.library.unt.edu/ark:/67531/metadc33221/.

Full text
Abstract:
Due to the accelerated growth in digital music distribution, it has become easy to modify, intercept, and distribute material illegally. To meet the urgent need for copyright protection against piracy, several audio watermarking schemes have been proposed and implemented. These digital audio watermarking schemes have the purpose of embedding inaudible information within the host file to cover copyright and authentication issues. This thesis proposes an audio watermarking model using MATLAB® and Simulink® software for 1K and 2K fast Fourier transform (FFT) lengths. The watermark insertion process is performed in the frequency domain to guarantee the imperceptibility of the watermark to the human auditory system. Additionally, the proposed audio watermarking model was implemented in a Cyclone® II FPGA device from Altera® using the Altera® DSP Builder tool and MATLAB/Simulink® software. To evaluate the performance of the proposed audio watermarking scheme, effectiveness and fidelity performance tests were conducted for the proposed software and hardware-in-the-loop based audio watermarking model.
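A hedged sketch of frequency-domain embedding in the spirit described above; the thesis's exact bin selection, embedding strength and synchronisation scheme may differ, and the parameters here are illustrative.

```python
import numpy as np

def embed_watermark(frame, bits, strength=0.02, start_bin=40):
    """Embed one watermark bit per selected FFT bin of an audio frame by
    nudging the magnitude up or down, leaving the phase untouched."""
    spectrum = np.fft.rfft(frame)
    mags, phases = np.abs(spectrum), np.angle(spectrum)
    for i, bit in enumerate(bits):
        k = start_bin + i
        mags[k] *= (1 + strength) if bit else (1 - strength)
    watermarked = mags * np.exp(1j * phases)
    return np.fft.irfft(watermarked, n=len(frame))

# Example: embed 8 bits into a 1024-sample frame (a 1K FFT, as in the thesis).
frame = 0.1 * np.random.randn(1024)
marked = embed_watermark(frame, [1, 0, 1, 1, 0, 0, 1, 0])
```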
APA, Harvard, Vancouver, ISO, and other styles
42

Kühn, Carol. "Digital sculpture : conceptually motivated sculptural models through the application of three-dimensional computer-aided design and additive fabrication technologies." Thesis, [Bloemfontein] : Central University of Technology, Free State, 2009. http://hdl.handle.net/11462/50.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Baudouin, Olivier. "Problématiques musicales du recours aux techniques de synthèse sonore numérique aux États-Unis et en France de 1957 à 1977." Thesis, Paris 4, 2010. http://www.theses.fr/2010PA040117.

Full text
Abstract:
Le répertoire musical produit au moyen de la synthèse numérique du son durant les vingt premières années de développement de ces techniques (1957-1977) n’avait jamais fait l’objet d’une étude musicologique exhaustive, malgré l’importance de ce domaine en ce qui regarde l’évolution de l’univers sonore contemporain, et les nombreux documents disponibles. Notre méthode, empruntée aux musicologues Leigh Landy et Marc Battier, a consisté à rendre justice à des œuvres et à des pièces souvent traitées de façon isolée ou purement illustrative, en reconstruisant autour d’elles un récit mêlant considérations historiques, analytiques, techniques, esthétiques et culturelles. À travers cette étude, les liens qui unirent art musical, science et technologie numérique dans les années 1960 et 1970 apparaissent ainsi clairement, en renouvelant l’appréhension de répertoires basés sur le matériau sonore
The musical repertoire generated by digital sound synthesis during the first twenty years of development of these techniques (1957-1977) had never been fully studied by musicologists, in spite of the importance of this field for the evolution of the contemporary sonic world and the availability of numerous documents. Our method, based on Leigh Landy's and Marc Battier's ideas, consisted of giving due credit to pieces often dealt with in an isolated or purely illustrative manner, by reconstructing around them a narrative combining historical, analytical, technical, aesthetic, and cultural elements. Thus, the links that bound music, science, and digital technology in the 1960s and 1970s appear clearly, renewing the comprehension of sound-based repertoires
APA, Harvard, Vancouver, ISO, and other styles
44

Serain, Clément. "La conservation-restauration du patrimoine au regard des humanités numériques : enjeux techniques, sociocognitifs et politiques." Thesis, Paris 8, 2019. http://www.theses.fr/2019PA080048.

Full text
Abstract:
En prenant acte des transformations induites par les technologies numériques au sein des institutions patrimoniales, cette thèse propose d’analyser l’impact de ces technologies dans le domaine particulier de la conservation-restauration des collections muséales. Ce travail part de l’hypothèse selon laquelle la discipline de la conservation-restauration, loin d’être neutre, construit notre compréhension et notre perception de la matérialité des objets du patrimoine. Aussi, l’objectif de cette thèse est de montrer en quoi les technologies de l’information et de la communication utilisées au sein de la discipline participent à l’élaboration de cette compréhension et de cette perception et orientent ainsi notre rapport cognitif et sensoriel à la matérialité des objets du patrimoine. Il s’agit également, dans la perspective des humanités numériques, de voir comment le numérique est susceptible de s’adapter aux objectifs de la conservation-restauration tout en reconfigurant aussi les conceptions mêmes des notions de conservation, de restauration, de matérialité, de transmission et de patrimoine. À ce titre, cette thèse s’intéresse aussi à la façon dont les technologies numériques contribuent au partage des savoirs et d’une nouvelle appréhension sensorielle relatifs à la matérialité des objets pour un public plus large que le seul public des spécialistes de la conservation-restauration
Whilst acknowledging the mutations induced by digital technologies within cultural institutions, this thesis proposes to analyse the impact of these technologies in the particular field of conservation and restoration of museum collections. This work assumes that the discipline of conservation-restoration, far from being neutral, builds our understanding and perception of the materiality of cultural heritage objects. As a result, this thesis aims at demonstrating how communication and information technologies, used within the field of conservation-restoration, have a crucial role in the development of that understanding and perception, and how they guide our cognitive and sensory relation to the materiality of cultural objects. Moreover, in the framework of digital humanities, this work also aims at studying how digital tools can adapt to conservation-restoration purposes on the one hand, while reconfiguring how we conceive the very notions of conservation, restoration, materiality, transmission and cultural heritage on the other hand. In this perspective, this thesis also deals with the way digital technologies contribute to the sharing of knowledge and to a new sensory apprehension of the materiality of cultural objects for a wider public than that of conservation-restoration specialists alone
APA, Harvard, Vancouver, ISO, and other styles
45

Karlaputi, Sarada. "Evaluating the Feasibility of Accelerometers in Hand Gestures Recognition." Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc699862/.

Full text
Abstract:
Gesture recognition plays an important role in human-computer interaction for intelligent computing. Major applications such as gaming, robotics and automated homes use gesture recognition techniques, which diminishes the need for mechanical input devices. The main goal of my thesis is to interpret SWAT team gestures using different types of sensors. Accelerometer and flex sensors were explored extensively to build a prototype for soldiers to communicate in the absence of line of sight. Arm movements were recognized by flex sensors and motion gestures by accelerometers. Accelerometers are used to measure acceleration with respect to movement of the sensor in 3D. A flex sensor changes its resistance based on the amount of bend in the sensor. SVM is the classification algorithm used for classification of the samples. LIBSVM (Library for Support Vector Machines) is integrated software for support vector classification, regression and distribution estimation which supports multi-class classification. Sensor data is fed to the WI micro dig to digitize the signal and to transmit it wirelessly to the computing device. Feature extraction and signal windowing were the two major factors contributing to the accuracy of the system. Mean value and standard deviation are the two features considered for accelerometer sensor data classification, and standard deviation is used for the flex sensor analysis for optimum results. Filtering of the signal is done by identifying the different states of the signals, which are continuously sampled.
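A minimal sketch of the windowing, mean/standard-deviation feature extraction and SVM classification pipeline described above, using scikit-learn's LIBSVM-backed SVC on synthetic data in place of the real accelerometer recordings; window sizes and labels are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def window_features(samples, window=50, step=25):
    """Slide a window over tri-axial accelerometer samples (N x 3) and
    compute mean and standard deviation per axis."""
    feats = []
    for start in range(0, len(samples) - window + 1, step):
        w = samples[start:start + window]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
    return np.array(feats)

# Hypothetical training data: two gesture classes with different mean acceleration.
rng = np.random.default_rng(0)
X = np.vstack([window_features(rng.normal(loc=m, size=(200, 3)))
               for m in (0.0, 1.0)])
y = np.repeat([0, 1], len(X) // 2)

clf = SVC(kernel='rbf').fit(X, y)   # LIBSVM-backed classifier
print(clf.predict(window_features(rng.normal(loc=1.0, size=(200, 3)))[:3]))
```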
APA, Harvard, Vancouver, ISO, and other styles
46

Pettit, Elaine J. (Elaine Joyce). "Synthesis of 2-D Images From the Wigner Distribution with Applications to Mammography and Edge Extraction." Thesis, University of North Texas, 1995. https://digital.library.unt.edu/ark:/67531/metadc332685/.

Full text
Abstract:
A new method for the general application of quadratic spatial/spatial-frequency domain filtering to imagery is presented in this dissertation. The major contribution of this research is the development of an original algorithm for approximating the inverse pseudo-Wigner distribution through synthesis of an image in the spatial domain which approximates the result of filtering an original image in the DPWD domain.
APA, Harvard, Vancouver, ISO, and other styles
47

Morita, Yasuhiro. "Study of the effects of background and motion camera on the efficacy of Kalman and particle filter algorithms." Thesis, University of North Texas, 2009. https://digital.library.unt.edu/ark:/67531/metadc12166/.

Full text
Abstract:
This study compares the independent use of two known algorithms (the Kalman filter with background subtraction, and the particle filter) that are commonly deployed in object tracking applications. Object tracking in general is very challenging; it presents numerous problems that need to be addressed by the application in order to facilitate its successful deployment. Such problems range from abrupt object motion during tracking, to changes in the appearance of the scene and the object, as well as object-to-scene occlusions and camera motion, among others. It is important to take into consideration issues such as accounting for the noise associated with the image in question, and the ability to predict, to an acceptable statistical accuracy, the position of the object at a particular time given its current position. This study tackles some of the issues raised above prior to addressing how the use of either of the aforementioned algorithms minimizes, or in some cases eliminates, these negative effects
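For reference, a textbook constant-velocity Kalman filter of the kind paired with background subtraction in such studies; the noise settings below are generic assumptions, not values used in this thesis.

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal 2-D constant-velocity Kalman filter that tracks the centroid
    reported by a background-subtraction detector."""
    def __init__(self, dt=1.0, process_var=1e-2, meas_var=1.0):
        self.x = np.zeros(4)                        # state: [px, py, vx, vy]
        self.P = np.eye(4) * 500.0                  # large initial uncertainty
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * process_var            # process noise
        self.R = np.eye(2) * meas_var               # measurement noise

    def step(self, z):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured centroid z = [px, py]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                           # filtered position
```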
APA, Harvard, Vancouver, ISO, and other styles
48

Evans, Fiona H. "Syntactic models with applications in image analysis." University of Western Australia. Dept. of Mathematics and Statistics, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0001.

Full text
Abstract:
[Truncated abstract] The field of pattern recognition aims to develop algorithms and computer programs that can learn patterns from data, where learning encompasses the problems of recognition, representation, classification and prediction. Syntactic pattern recognition recognises that patterns may be hierarchically structured. Formal language theory is an example of a syntactic approach, and is used extensively in computer languages and speech processing. However, the underlying structure of language and speech is strictly one-dimensional. The application of syntactic pattern recognition to the analysis of images requires an extension of formal language theory. Thus, this thesis extends and generalises formal language theory to apply to data that have possibly multi-dimensional underlying structure and also hierarchic structure . . . As in the case of curves, shapes are modelled as a sequence of local relationships between the curves, and these are estimated using a training sample. Syntactic square detection was extremely successful, detecting 100% of squares in images containing only a single square, and over 50% of the squares in images containing ten squares highly likely to be partially or severely occluded. The detection and classification of polygons was successful, despite a tendency for occluded squares and rectangles to be confused. The algorithm also performed well on real images containing fish. The success of the syntactic approaches for detecting edges, detecting curves, and detecting, classifying and counting occluded shapes is evidence of the potential of syntactic models.
APA, Harvard, Vancouver, ISO, and other styles
49

Ruprecht, Nathan Alexander. "Implementation of Compressive Sampling for Wireless Sensor Network Applications." Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1157614/.

Full text
Abstract:
One of the challenges of utilizing higher frequencies in the RF spectrum, for any number of applications, is the hardware constraint imposed by analog-to-digital converters (ADCs). Since the mid-20th century, we have accepted the Nyquist-Shannon sampling theorem, which states that we need to sample a signal at twice its maximum frequency component in order to reconstruct it. Compressive Sampling (CS) offers a possible solution: sampling below the Nyquist rate and reconstructing using convex programming techniques. There have been significant advancements in CS research and development (most notably since 2004), but still little of it has reached everyday use. This is not for lack of theoretical work and mathematical proof, but because of the shortage of implementation work. There has been little work on hardware aimed at finding the realistic constraints of a working CS system used for digital signal processing (DSP). Any parameters used in a system are usually assumed based on stochastic models, and not optimized towards a specific application. This thesis aims to provide a minimal viable platform for implementing compressive sensing applied to a wireless sensor network (WSN), and to address which parameters of CS theory should be modified depending on the application.
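A toy end-to-end compressive sampling example: a sparse signal is recovered from far fewer random measurements than Nyquist-rate sampling would require. Greedy orthogonal matching pursuit (from scikit-learn) is used here as a stand-in for the convex-programming reconstruction discussed in the thesis, and the problem sizes are illustrative.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n, m, k = 256, 64, 5                        # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse signal

Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
y = Phi @ x                                 # m << n "sub-Nyquist" samples

# Greedy sparse recovery from the compressed measurements.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
print(np.max(np.abs(x - omp.coef_)))        # near-zero reconstruction error
```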
APA, Harvard, Vancouver, ISO, and other styles
50

Talasila, Mahendra. "Implementation of Turbo Codes on GNU Radio." Thesis, University of North Texas, 2010. https://digital.library.unt.edu/ark:/67531/metadc33206/.

Full text
Abstract:
This thesis investigates the design and implementation of turbo codes on the GNU Radio platform. Turbo codes are a class of iterative channel codes which demonstrate strong error-correction capability. A software defined radio (SDR) is a communication system which can implement different modulation schemes and tune to any frequency band by means of software that controls the programmable hardware. An SDR utilizes a general-purpose computer to perform certain signal processing tasks. We implement a turbo coding system using the Universal Software Radio Peripheral (USRP), a widely used SDR platform from Ettus. Detailed configuration and performance comparisons are also provided in this research.
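For orientation, a sketch of the encoder side only: a rate-1/2 recursive systematic convolutional (RSC) constituent encoder with textbook (7, 5) octal generators, combined in the parallel-concatenated arrangement that defines a turbo encoder. This is a generic illustration, not the specific code or GNU Radio implementation developed in the thesis, and the iterative decoder is omitted.

```python
import numpy as np

def rsc_encode(bits):
    """Rate-1/2 RSC encoder, generators (7, 5) octal.
    Returns (systematic, parity) bit arrays."""
    s1 = s2 = 0
    systematic, parity = [], []
    for u in bits:
        fb = u ^ s1 ^ s2          # feedback polynomial 1 + D + D^2 (7)
        p = fb ^ s2               # feedforward polynomial 1 + D^2 (5)
        systematic.append(u)
        parity.append(p)
        s2, s1 = s1, fb           # shift the register
    return np.array(systematic), np.array(parity)

def turbo_encode(bits, interleaver):
    """Parallel concatenation: the same data feeds a second RSC through an
    interleaver; iterative decoding (BCJR/SOVA) is not shown."""
    sys1, par1 = rsc_encode(bits)
    _, par2 = rsc_encode(np.asarray(bits)[interleaver])
    return sys1, par1, par2

msg = np.array([1, 0, 1, 1, 0, 0, 1, 0])
interleaver = np.random.default_rng(0).permutation(len(msg))
print(turbo_encode(msg, interleaver))
```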
APA, Harvard, Vancouver, ISO, and other styles
