Dissertations / Theses on the topic 'Perceptual learning'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Perceptual learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Sowden, Paul Timothy. "On perceptual learning." Thesis, University of Surrey, 1995. http://epubs.surrey.ac.uk/771375/.

Full text
Abstract:
A fundamental concern in Psychology is the extent to which we learn to perceive our world and, further, the degree to which perception remains modifiable even in adulthood. Yet despite the significance of these concerns, perceptual learning has been somewhat sporadically studied, and often only at a phenomenal level. This thesis proposes a new theoretical framework for perceptual learning, and argues that a multiplicity of processes have been examined under this single term. The empirical work reported in this thesis examines a range of these different learning processes, and illustrates methods by which the process/processes underlying a particular phenomenon can be revealed. Extended replications of seminal studies on 'perceptual learning' demonstrate the non-perceptual learning nature of the processes reported in those studies. Further empirical work presents new evidence for the plasticity of human vision on fundamental dimensions of visual processing. These findings suggest that even adults' perceptual experience is modifiable as a result of changes at an early stage of visual processing. Final empirical work considers the types of learning that may occur in the more complex and naturalistic task of detecting features in X-rays, and this leads on to an examination of visual search learning. It is concluded that, given the varied nature of the learning processes identified, a unified theory of perceptual learning may be an unrealistic goal. Instead, a detailed understanding of the different mechanisms underlying each of the identified learning processes is likely to prove more useful. Finally, it is argued that all of the identified processes, previously regarded as perceptual learning, could underlie improvements on complex 'real-world' discrimination tasks. This is illustrated through the application of the theoretical framework, developed in this thesis, to mammographic film reading. It is argued that by isolating and systematically targeting each of the learning processes involved in a task, more effective training programmes could be designed.
APA, Harvard, Vancouver, ISO, and other styles
2

Notman, Leslie. "On perceptual learning, categorical perception and perceptual expertise." Thesis, University of Surrey, 2005. http://epubs.surrey.ac.uk/844066/.

Full text
Abstract:
The empirical work reported in the current thesis set out to explore the relationship between perceptual expertise, categorical perception (CP) and perceptual learning. Evidence to support the idea that the way people organise the world into categories can qualitatively affect their perception of it has been provided by CP research. Recent work indicates that categorisation experience can lead to enhanced sensitivity to diagnostic stimulus features and is consistent with the possibility that, as experts have learned to distinguish among objects, they have also acquired new ways of perceptually structuring the objects to be categorised. Nevertheless, there is debate about whether these effects are really perceptual and if so about the mechanisms and locus of learning. Here, experiments were designed to test whether the process of acquiring perceptual categories drives a perceptual learning process that enhances the discrimination of category relevant features thereby contributing to the development of perceptual expertise. The work therefore sought to test the possibility that category learning could drive changes to early stages of perceptual processing. Two classes of stimuli were used to address these issues. Initial experiments showed that learning to categorise Gabor patches can lead to learned CP effects that are specific to the trained spatial frequency, orientation and retinal location. Experiments using morphed cervical cell stimuli showed that expert cervical screeners have acquired heightened discrimination to cells that cross the normal/abnormal category boundary and that training novices to categorise cells as normal or abnormal can also lead to retinotopically specific learned CP effects. Taken together, the results reported in the current thesis support a general explanation of CP effects arising from categorisation driven perceptual learning at early stages of visual processing. Furthermore, the work speculated that modifications to intra-cortical connections at this stage of processing may underpin the learned CP effects observed.
APA, Harvard, Vancouver, ISO, and other styles
3

Poulter, Damian. "Perceptual learning and consciousness." Thesis, University of Reading, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.413930.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mundy, Matthew Edward. "Perceptual learning in humans." Thesis, Cardiff University, 2006. http://orca.cf.ac.uk/56121/.

Full text
Abstract:
Unsupervised exposure to confusable stimuli facilitates later discrimination between them. It is known that the schedule of exposure is critical to this perceptual learning effect, but several issues remain unresolved: (I) it is not known whether a mechanism of mutual inhibition, taken by some to underpin perceptual learning in rats, is also evident in humans. (II) Although simultaneous presentation of the to-be-discriminated stimuli has been suggested by some to be the most efficient way to promote perceptual learning, the associative mechanisms proposed by others (e.g., that of mutual inhibition) predict the opposite. (III) Perceptual learning has been invoked as the process by which a face becomes familiar but, surprisingly, this idea has received little empirical evaluation. The experimental work reported in this thesis addresses these three issues. Experiments 1 and 2, using flavours as stimuli, reveal that the inhibitory mechanisms that contribute to perceptual learning in rats also contribute to perceptual learning in humans. Experiments 3 and 4 demonstrate a perceptual learning effect using visual stimuli (pictures of human faces), and show that these effects, too, exhibit parallels with studies of perceptual learning with rats. In particular they demonstrate that intermixed exposure results in greater perceptual learning than does blocked exposure. Experiments 5 to 7 indicate that perceptual learning seen following simultaneous exposure is, in turn, superior to that seen following intermixed exposure - implicating a process of stimulus comparison. Experiment 8 confirms that this novel effect is also observed with other visual stimuli, chequerboards, while Experiments 9 and 10 indicate that the face stimuli used exhibit some of the hallmarks of face processing. These findings establish, along with Experiments 3 to 6, that perceptual learning contributes to the process by which a face becomes familiar.
APA, Harvard, Vancouver, ISO, and other styles
5

Honeycutt, Hunter Gibson. "Prenatal Perceptual Experience and Postnatal Perceptual Preferences: Evidence for Attentional-Bias in Perceptual Learning." Thesis, Virginia Tech, 2000. http://hdl.handle.net/10919/36148.

Full text
Abstract:
Previous studies have indicated that concurrent multimodal stimulation can interfere with prenatal perceptual learning. However, the nature and extent of this interference is not well understood. This study further assessed this issue by exposing three groups of bobwhite quail embryos to (a) no unusual prenatal stimulation, (b) a bobwhite maternal call, or (c) a maternal call + light compound in the period prior to hatching. Experiments differed in terms of the types of stimuli presented during postnatal preference tests (Exp 1 = familiar call vs. unfamiliar call; Exp 2 = familiar compound vs. unfamiliar compound; Exp 3 = familiar compound vs. unfamiliar call; Exp 4 = familiar call vs. unfamiliar compound). Embryos receiving no supplemental stimulation showed no preference between stimulus events in all testing conditions. Embryos receiving exposure to a unimodal call preferred the familiar call over the unfamiliar call regardless of the presence or absence of patterned light during testing. Embryos receiving concurrent audio-visual exposure showed no preference between stimulus events in Exp 1 and Exp 4, but did prefer the familiar call when it was paired with light during testing (Exp 2 and 3). These findings suggest that concurrent multimodal stimulation does not interfere with prenatal perceptual learning by overwhelming the young organism's limited attentional capacities. Rather, multimodal stimulation biases what information is attended to during exposure and subsequent testing. Results are discussed within an attentional-bias framework, which maintains that young organisms tend to initially process non-redundant compound events as integrative units rather than processing the components of the compound separately.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
6

Kacelnik, Oliver. "Perceptual learning in sound localization." Thesis, University of Oxford, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.270638.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jones, Peter R. "Mechanisms of auditory perceptual learning." Thesis, University of Nottingham, 2013. http://eprints.nottingham.ac.uk/13376/.

Full text
Abstract:
Practice improves performance on many basic auditory tasks. However, while the phenomenon of auditory perceptual learning is well established, little is known about the mechanisms underlying such improvements. What is learned during auditory perceptual learning? This thesis attempts to answer this question by applying models of performance to behavioural response data, and examining which parameters change with practice. On a simple pure tone discrimination task, learning is shown to primarily represent a reduction in internal noise, with encoding efficiency, attentiveness and bias appearing invariant. In a more complex auditory detection task, learning and development are also shown to involve improvements in listening strategy, with listeners becoming better able to selectively-attend to task-relevant information. Finally, task performance is potentially constrained not just by the strength of the sensory evidence, but also by the efficiency of the wider decision process that the sensory evidence informs. Thus, in the final chapters learning is also shown to involve reductions in both stationary and nonstationary bias. In short, learning is shown to be subserved by multiple mechanisms that: operate in parallel, vary in importance depending on the task demands, and incorporate both sensory and non-sensory processes. The methods of analysis described herein are shown to effectively partition components of perception in normal hearing children and adults, and may help to understand learning processes needed for the rehabilitation of listening difficulties.
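As a rough illustration of the kind of model-based analysis of behavioural response data described in this abstract (and not the thesis's own model), the sketch below uses standard signal-detection formulas to separate sensitivity from response bias; the hit and false-alarm rates are invented for the example.

```python
from scipy.stats import norm

# Minimal signal-detection sketch: sensitivity (d') grows as internal noise
# shrinks, while the criterion c captures response bias. Rates are made up.
def dprime_and_criterion(hit_rate: float, fa_rate: float):
    z_h, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_fa                 # sensitivity
    criterion = -0.5 * (z_h + z_fa)      # bias; 0 means unbiased responding
    return d_prime, criterion

before = dprime_and_criterion(hit_rate=0.70, fa_rate=0.30)
after = dprime_and_criterion(hit_rate=0.85, fa_rate=0.20)
print(f"pre-training  d'={before[0]:.2f}, c={before[1]:.2f}")
print(f"post-training d'={after[0]:.2f}, c={after[1]:.2f}")
```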
APA, Harvard, Vancouver, ISO, and other styles
8

Borrie, Stephanie Anna. "Perceptual learning of dysarthric speech." Thesis, University of Canterbury. Department of Communication Disorders, 2011. http://hdl.handle.net/10092/5480.

Full text
Abstract:
Perceptual learning, when applied to speech, describes experience-evoked adjustments to the cognitive-perceptual processes required for recognising spoken language. It provides the theoretical basis for improved understanding of a speech signal that is initially difficult to perceive. Reduced intelligibility is a frequent and debilitating symptom of dysarthria, a speech disorder associated with neurological disease or injury. The current thesis investigated perceptual learning of dysarthric speech, by jointly considering intelligibility improvements and associated learning mechanisms for listeners familiarised with the neurologically degraded signal. Moderate hypokinetic dysarthria was employed as the test case in the three phases of this programme of research. The initial research phase established strong empirical evidence of improved recognition of dysarthric speech following a familiarisation experience. Sixty normal hearing listeners were randomly assigned to one of three groups and familiarised with passage readings under the following conditions: (1) neurologically intact speech (control) (n = 20), (2) dysarthric speech (passive familiarisation) (n = 20), and (3) dysarthric speech coupled with written information (explicit familiarisation) (n = 20). Subsequent phrase transcription analysis revealed that the intelligibility scores of both groups familiarised with dysarthric speech were significantly higher than those of the control group. Furthermore, performance gains were superior, in both size and longevity, when the familiarisation conditions were explicit. A condition discrepancy in segmentation strategies, in which attention towards syllabic stress contrast cues increased following explicit familiarisation but decreased following passive familiarisation, indicated that performance differences were more than simply magnitude of benefit. Thus, it was speculated that the learning that occurred with passive familiarisation may be qualitatively different to that which occurred with explicit familiarisation. The second phase of the research programme followed up on the initial findings and examined whether the key variable behind the use of particular segmentation strategies was simply the presence or absence of written information during familiarisation. Forty normal hearing listeners were randomly assigned to one of two groups and were familiarised with experimental phrases under either passive (n = 20) or explicit (n = 20) learning conditions. Subsequent phrase transcription analysis revealed that regardless of condition, all listeners utilised syllabic stress contrast cues to segment speech following familiarisation with phrases that emphasised this prosodic perception cue. Furthermore, the study revealed that, in addition to familiarisation condition, intelligibility gains were dependent on the type of the familiarisation stimuli employed. Taken together, the first two research phases demonstrated that perceptual learning of dysarthric speech is influenced by the information afforded within the familiarisation procedure. The final research phase examined the role of indexical information in perceptual learning of dysarthric speech. Forty normal hearing listeners were randomly assigned to one of two groups and were familiarised with dysarthric speech via a training task that emphasised either the linguistic (word identification) (n = 20) or indexical (speaker identification) (n = 20) properties of the signal.
Intelligibility gains for listeners trained to identify indexical information paralleled those achieved by listeners trained to identify linguistic information. Similarly, underlying error patterns were also comparable between the two training groups. Thus, phase three revealed that both indexical and linguistic features of the dysarthric signal are learnable, and can be used to promote subsequent processing of dysarthric speech. In summary, this thesis has demonstrated that listeners can learn to better understand neurologically degraded speech. Furthermore, it has offered insight into how the information afforded by the specific familiarisation procedure is differentially leveraged to improve perceptual performance during subsequent encounters with the dysarthric signal. Thus, this programme of research affords preliminary evidence towards the development of a theoretical framework that exploits perceptual learning for the treatment of dysarthria.
APA, Harvard, Vancouver, ISO, and other styles
9

Gold, Jason Michael. "Signal and noise in perceptual learning." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/NQ63775.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Symonds, Michelle. "Perceptual learning in flavour aversion conditioning." Thesis, University of York, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.282301.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Leonard, Sarah. "Mediated learning in the rat : implications for perceptual learning." Thesis, University of York, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265556.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Leboe, Jason P. "The inferential basis of perceptual performance." Supervised by Bruce Milliken. McMaster University, 2002. (Access restricted to McMaster.)

Find full text
APA, Harvard, Vancouver, ISO, and other styles
13

Barden, Katharine. "Perceptual learning of context-sensitive phonetic detail." Thesis, University of Cambridge, 2011. https://www.repository.cam.ac.uk/handle/1810/241032.

Full text
Abstract:
Although familiarity with a talker or accent is known to facilitate perception, it is not clear what underlies this phenomenon. Previous research has focused primarily on whether listeners can learn to associate novel phonetic characteristics with low-level units such as features or phonemes. However, this neglects the potential role of phonetic information at many other levels of representation. To address this shortcoming, this thesis investigated perceptual learning of systematic phonetic detail relating to higher levels of linguistic structure, including prosodic, grammatical and morphological contexts. Furthermore, in contrast to many previous studies, this research used relatively natural stimuli and tasks, thus maximising its relevance to perceptual learning in ordinary listening situations. This research shows that listeners can update their phonetic representations in response to incoming information and its relation to linguistic-structural context. In addition, certain patterns of systematic phonetic detail were more learnable than others. These findings are used to inform an account of how new information is integrated with prior experience in speech processing, within a framework that emphasises the importance of phonetic detail at multiple levels of representation.
APA, Harvard, Vancouver, ISO, and other styles
14

Jackson, Stephen R. "Implicit and explicit processes in perceptual learning." Thesis, University of Cambridge, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333303.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Hervais-Adelman, Alexis Georges. "The perceptual learning of noise-vocoded speech." Thesis, University of Cambridge, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.611867.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Scahill, Victoria Louise. "Perceptual learning and transfer along a continuum." Thesis, University of Cambridge, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.620585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Grand, Christopher S. "Perceptual and functional categorisation in associative learning." Thesis, Cardiff University, 2007. http://orca.cf.ac.uk/54587/.

Full text
Abstract:
This thesis investigated the theoretical processes that underlie perceptual and functional categorisation: perceptual categorisation refers to the process of forming an integrated representation of a pattern of stimulation and functional categorisation refers to the process of integrating otherwise equivalent patterns of stimulation according to their uses or consequences. Investigation of perceptual categorisation in people and of functional categorisation in rats provided results that place important constraints on the nature of the involvement of elemental and configural processes.
APA, Harvard, Vancouver, ISO, and other styles
18

Zeigler, Derek E. "Concept Learning, Perceptual Fluency, and Expert Classification." Ohio University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1468418263.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Pia, Alex Albert. "Preferred perceptual learning styles of Chinese students." PDXScholar, 1989. https://pdxscholar.library.pdx.edu/open_access_etds/3918.

Full text
Abstract:
The basis for this study was work done by Joy Reid (1987) of Colorado State University. Reid's work analyzed the preferred perceptual learning styles of several groups of English as a Second Language students and one group of American students. The learning styles concept has been established on the theory that students have a particular mode through which they learn best. The learning styles analyzed in this study were: auditory, visual, kinesthetic, tactile, individual, and group. The objectives of this study were to determine the relationships that exist between the preferred perceptual learning styles of P.R.C. and American students and such variables as the country where the student is studying, native language, length of time in the U.S., and sex.
APA, Harvard, Vancouver, ISO, and other styles
20

McGuire, Grant Leese. "Phonetic category learning." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1190065715.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Munson, Cheyenne Michele. "Perceptual learning in speech reveals pathways of processing." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/2747.

Full text
Abstract:
Listeners use perceptual learning to rapidly adapt to manipulated speech input. Examination of this learning process can reveal the pathways used during speech perception. By assessing generalization of perceptually learned categorization boundaries, others have used perceptual learning to help determine whether abstract units are necessary for listeners and models of speech perception. Here we extend this approach to address the inverse issue of specificity. In these experiments we have sought to discover the levels of specificity for which listeners can learn variation in phonetic contrasts. We find that (1) listeners are able to learn multiple voicing boundaries for different pairs of phonemic contrasts relying on the same feature contrast. (2) Listeners generalize voicing boundaries to untrained continua with the same onset as the trained continua, but generalization to continua with different onsets depends on previous experience with other continua sharing this different onset. (3) Listeners can learn different voicing boundaries for continua with the same CV onset, which suggests that boundaries are lexically-specific. (4) Listeners can learn different voicing boundaries for multiple talkers even when they are not given instructions about talkers and their task does not require talker identification. (5) Listeners retain talker-specific boundaries after training on a new boundary for a second talker, but generalize boundaries across talkers when they have no previous experience with a talker. These results were obtained using a new paradigm for unsupervised perceptual learning in speech. They suggest that models of speech perception must be highly flexible in order to accommodate both specificity and generalization of perceptually learned categorization boundaries.
APA, Harvard, Vancouver, ISO, and other styles
22

Stivala, Giada Martina. "Perceptual Web Crawlers." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
Web crawlers are a fundamental component of web application scanners and are used to explore the attack surface of web applications. Crawlers work as follows. First, for each page, they extract URLs and UI elements that may lead to new pages. Then, they use a depth-first or breadth-first tree traversal to explore new pages. In this approach, crawlers cannot distinguish between "terminate user account" and "next page" buttons, and they will click on both without taking into account the consequences of their actions. The goal of this project is to devise a new family of crawlers that builds on client-side code analysis and extends it by inferring the semantics of UI elements from visual clues. The new crawler will be able to identify in real time the types and semantics of UI elements, and it will use the semantics to choose the right action. This project will include the development of a prototype and an evaluation against a selection of real-size web applications.
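A minimal sketch of the crawling policy described in this abstract, not the project's prototype: a breadth-first traversal that consults a semantic classifier before acting on a UI element, so destructive controls are skipped while navigational ones are followed. The `classify_semantics` function is a hypothetical stand-in (a keyword heuristic) for the visual-clue classifier the thesis proposes, and the page graph is invented.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class UIElement:
    label: str     # visible text / visual clue
    target: str    # page reached if the element is activated

@dataclass
class Page:
    url: str
    elements: list = field(default_factory=list)

def classify_semantics(element: UIElement) -> str:
    """Hypothetical stand-in for the proposed visual/semantic classifier."""
    destructive = ("delete", "terminate", "remove", "deactivate")
    return "destructive" if any(w in element.label.lower() for w in destructive) else "navigation"

def crawl(start: Page, pages: dict) -> list:
    """Breadth-first exploration that only follows 'safe' elements."""
    visited, frontier, order = {start.url}, deque([start]), []
    while frontier:
        page = frontier.popleft()
        order.append(page.url)
        for el in page.elements:
            if classify_semantics(el) == "destructive":
                continue                      # do not click; consequences matter
            if el.target not in visited and el.target in pages:
                visited.add(el.target)
                frontier.append(pages[el.target])
    return order

if __name__ == "__main__":
    pages = {
        "/home": Page("/home", [UIElement("Next page", "/page2"),
                                UIElement("Terminate user account", "/deleted")]),
        "/page2": Page("/page2", []),
        "/deleted": Page("/deleted", []),
    }
    print(crawl(pages["/home"], pages))       # ['/home', '/page2']
```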
APA, Harvard, Vancouver, ISO, and other styles
23

Mikheeva, Olga. "Perceptual facial expression representation." Thesis, KTH, Robotik, perception och lärande, RPL, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217307.

Full text
Abstract:
Facial expressions play an important role in such areas as human communication or medical state evaluation. For machine learning tasks in those areas, it would be beneficial to have a representation of facial expressions which corresponds to human similarity perception. In this work, a data-driven approach to representation learning of facial expressions is taken. The methodology is built upon Variational Autoencoders and eliminates appearance-related features from the latent space by using neutral facial expressions as additional inputs. In order to improve the quality of the learned representation, we modify the prior distribution of the latent variable to impose a structure on the latent space that is consistent with human perception of facial expressions. We conduct experiments on two datasets and additionally collected similarity data, show that the human-like topology in the latent representation helps to improve performance on a stereotypical emotion classification task, and demonstrate the benefits of using a probabilistic generative model in exploring the roles of latent dimensions through the generative process.
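A minimal PyTorch sketch (not the author's code) of the conditioning idea in the abstract: the encoder and decoder both see the same person's neutral face, so appearance can be explained away and the latent variable is pushed to encode expression only. Input sizes, layer widths and the standard-normal prior are assumptions; the thesis additionally replaces the prior with a perceptually structured one.

```python
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    def __init__(self, x_dim=1024, z_dim=16, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2 * x_dim, h_dim), nn.ReLU())
        self.mu, self.logvar = nn.Linear(h_dim, z_dim), nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + x_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x_expr, x_neutral):
        h = self.enc(torch.cat([x_expr, x_neutral], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation
        x_hat = self.dec(torch.cat([z, x_neutral], dim=-1))
        return x_hat, mu, logvar

def elbo_loss(x_hat, x_expr, mu, logvar):
    recon = ((x_hat - x_expr) ** 2).sum(dim=-1).mean()            # Gaussian reconstruction
    kl = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp())).sum(dim=-1).mean()
    return recon + kl

# Usage with random stand-in "images" flattened to vectors
model = ConditionalVAE()
x_expr, x_neut = torch.randn(8, 1024), torch.randn(8, 1024)
x_hat, mu, logvar = model(x_expr, x_neut)
print(elbo_loss(x_hat, x_expr, mu, logvar))
```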
APA, Harvard, Vancouver, ISO, and other styles
24

McAuliffe, Michael. "Attention and salience in lexically-guided perceptual learning." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/54152.

Full text
Abstract:
Psychophysical studies of perceptual learning find that perceivers only improve the accuracy of their perception on stimuli similar to what they were trained on. In contrast, speech perception studies of perceptual learning find generalization to novel contexts when words contain a modified ambiguous sound. This dissertation seeks to resolve the apparent conflict between these findings by framing the results in terms of attentional sets. Attention can be oriented towards comprehension of the speaker’s intended meaning or towards perception of a speaker’s pronunciation. Attention is proposed to affect perceptual learning as follows. When attention is oriented towards comprehension, more abstract and less context-dependent representations are updated and the perceiver shows generalized perceptual learning, as seen in the speech perception literature. When attention is oriented towards perception, more finely detailed and more context-dependent representations are updated and the perceiver shows less generalized perceptual learning, similar to what is seen in the psychophysics literature. This proposal is supported by three experiments. The first two implement a standard paradigm for perceptual learning in speech perception. In these experiments, promoting a more perception-oriented attentional set causes less generalized perceptual learning. The final experiment uses a novel paradigm where modified sounds are embedded in sentences during exposure. Perceptual learning is found only when the modified sound is embedded in words that are not predictable from the sentence. When modified sounds are in predictable words, no perceptual learning is observed. To account for this lack of perceptual learning, I hypothesize that sounds in predictable sentences are less reliable than sounds in words in isolation or unpredictable sentences. In the cases where perceptual learning is present, contexts which support comprehension-oriented attentional sets show larger perceptual learning effects than contexts promoting perception-oriented attentional sets. I argue that attentional sets are a key component to the generalization of perceptual learning to new contexts.
Arts, Faculty of
Linguistics, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
25

Weatherholtz, Kodi. "Perceptual learning of systemic cross-category vowel variation." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1429782580.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Shao, Yunming. "Image-based Perceptual Learning Algorithm for Autonomous Driving." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1503302777088283.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Montuori, Luke Michael. "Investigating perceptual learning with textured stimuli in rats." Thesis, Cardiff University, 2015. http://orca.cf.ac.uk/90295/.

Full text
Abstract:
In this thesis I present a series of experiments that aimed to examine the effect of experience on the subsequent discriminability of similar stimuli. It has oft been observed that preexposure to stimuli enhances the rate at which a discrimination with similar stimuli will progress, or will reduce the amount of generalisation that occurs to similar stimuli following training. In animals, this effect has typically been studied using the conditioned taste aversion paradigm. Here, I describe a novel experimental method whereby animals learn to discriminate between textured stimuli, and do so differentially based on their previous experience with textures.
APA, Harvard, Vancouver, ISO, and other styles
28

Sulman, Noah. "The influence of valenced images on perceptual learning." [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002381.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Farrow, Damian. "Expertise and the acquisition of perceptual-motor skill." St. Lucia, Qld, 2002. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe16469.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Ostrovsky, Yuri. "Learning to see: the early stages of perceptual organization." Thesis (Ph.D.), Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/62087.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 2010.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references.
One of the great puzzles of vision science is how, over the course of development, the complex visual array comprising many regions of different colors and luminances is transformed into a sophisticated and meaningful constellation of objects. Gestaltists describe some of the rules that seem to govern a mature parsing of the visual scene, but where do these rules come from? Are they innate--endowed by evolution, or do they come somehow from visual experience? The answer to this question is usually confounded in infant studies as the timelines of maturation and experience are inextricably linked. Here, we describe studies with a special population of late--onset vision patients, which suggest a distinction between those capabilities available innately and those which are crafted via learning from the visual environment. We conclude with a hypothesis, based on these findings and other evidence, that early-available common fate motion cues provide a level of perceptual organization which forms the basis for the learning of subsequent cues.
by Yuri Ostrovsky.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
31

Galliussi, Jessica. "The role of task-irrelevant information in perceptual learning." Doctoral thesis, Università degli Studi di Trieste, 2017. http://hdl.handle.net/11368/2908164.

Full text
Abstract:
In human perception, visual perceptual learning is a well-known effect, showing that the adult neural system can achieve long-term enhanced performance on a visual task as a result of visual experience (Fahle & Poggio, 2002). The mechanisms underlying visual perceptual learning have been debated for decades. Task-relevance, attention and awareness were thought necessary for perceptual learning (Shiu & Pashler, 1992; Ahissar & Hochstein, 1993; Schoups, Vogel, Qian, and Orban, 2001), but this view has been challenged by the discovery of task-irrelevant perceptual learning (TIPL), which occurs for task-irrelevant, unattended and even sub-threshold stimuli (Watanabe, Nanez & Sasaki, 2001). TIPL is a slow phenomenon, because thousands of training trials are necessary in order to observe perceptual learning for task-irrelevant stimuli. However, a fast form of TIPL (fast-TIPL) has recently been studied in the context of perceptual memories, providing evidence of a learning mechanism similar to TIPL, in which task-irrelevant stimuli are better learned when presented at behaviourally relevant points in time (Lin, Pype, Murray & Boynton, 2010). In the present dissertation, the role of task-irrelevant stimuli in visual perceptual learning is examined. The first line of experiments aimed to deepen understanding of the mechanisms underlying visual perceptual learning by investigating whether perceptual learning can be produced by mere exposure to a task-irrelevant, sub-threshold feature, even when, during training, participants attend and perform a task on another feature of a stimulus homologous to that used during the test stages. Additionally, the task-specificity of TIPL was examined. The results provided further evidence about TIPL by corroborating the hypothesis that TIPL can occur even when the training stimuli are homologous to those in pre- and post-test. A further interesting finding was that the visual perceptual learning yielded by the task-irrelevant and sub-threshold feature is task-specific, because it occurs only in the task for which participants received specific training, and is not transferred to another task performed on the same stimulus. Second, it was investigated whether and how modulating the difficulty level of the primary task affects TIPL, using a fast-TIPL paradigm which allows the phenomenon of TIPL to be studied within a single experimental session. In a dual-task condition, the amount of attention towards task-irrelevant stimuli needed for fast-TIPL to be observed was investigated by modulating the attentional and cognitive load of the primary task. The results showed a massive dual-task interference between the processing of primary task stimuli and the processing and encoding of task-irrelevant stimuli: the increase in the attentional and cognitive load required by the primary task produced a complete depletion of attentional resources, such that no resources remained available to process the task-irrelevant stimuli.
APA, Harvard, Vancouver, ISO, and other styles
32

Caruso, Valeria Carmen. "Effects of categorical learning on the auditory perceptual space." Doctoral thesis, SISSA, 2009. http://hdl.handle.net/20.500.11767/3969.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Engelman, William R. "A functional analysis of multiple movements." Diss., Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/28924.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Lavis, Yvonna Marie. "An investigation of the mechanisms responsible for perceptual learning in humans." Psychology, Faculty of Science, University of New South Wales, 2008. http://handle.unsw.edu.au/1959.4/42882.

Full text
Abstract:
Discrimination between similar stimuli is enhanced more by intermixed pre-exposure than by blocked pre-exposure to those stimuli. The salience modulation account of this intermixed-blocked effect proposes that the unique elements of intermixed stimuli are more salient than those of blocked stimuli. The inhibition account proposes that inhibitory links between the unique elements of intermixed stimuli enhance discrimination. The current thesis evaluated the two accounts in their ability to explain this effect in humans. In Experiments 1 and 2, categorisation and same-different judgements were more accurate for intermixed than for blocked stimuli. This indicates that intermixed pre-exposure decreases generalisation and increases discriminability more than does blocked pre-exposure. In Experiments 3–5, same-different judgements were more accurate when at least one of the two stimuli was intermixed. This enhanced discrimination was not confined to two stimuli that had been directly intermixed. These results are better explained by salience modulation than by inhibition. Experiments 6–8 employed dot probe tasks, in which a grid stimulus was followed immediately by a probe. Neither intermixed nor blocked stimuli showed facilitated reaction times when the probe appeared in the location of the unique element. In Experiments 9–11 participants learned to categorise the intermixed unique elements more successfully than the blocked unique elements, but only when the unique elements were presented on a novel background during categorisation. Experiments 6–11 provide weak evidence that the intermixed unique elements are more salient than their blocked counterparts. In Experiment 12, participants were presented with the shape and location of a given unique element, and were required to select the correct colour. Performance was more accurate for intermixed than for blocked unique elements. In Experiment 13, participants learned to categorise intermixed, blocked and novel unique elements. Performance was better for intermixed than for blocked and novel unique elements, which did not differ. None of the proposed mechanisms for salience modulation anticipate these results. The intermixed-blocked effect in human perceptual learning is better explained by salience modulation than by inhibition. However, the salience modulation accounts that have been proposed received little support. An alternative account of salience modulation is considered.
APA, Harvard, Vancouver, ISO, and other styles
35

Bergerud, Donna Burgess. "Textbook adaptations for secondary students with learning disabilities." Thesis, University of Washington (UW restricted), 1987. http://hdl.handle.net/1773/7793.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Hardy, Nicolle Chantelle. "Perceptual Learning Style Modalities: Comparing Latino, Black, and Caucasian Adults." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6854.

Full text
Abstract:
The purpose of this study was to compare the individual learning modalities of Latino, Black, and Caucasian males and females with at least some college education utilizing the Multi-modal Paired Associates Learning Test IV (MMPALT IV). Using the MMPALT IV, 20 participants from each of the three race/ethnicities above the age of 40 were measured in each of the seven perceptual modalities: Visual, Print, Aural, Interactive, Haptic, Kinesthetic, and Olfactory. The MMPALT IV is a performance-based test, which measures a person’s capacity to acquire information through each of the seven learning channels. ANOVA tests (2 x 3) with a follow-up Tukey test were used with race/ethnicity and gender identified as independent variables. The dependent variable was the individual perceptual modality sub-test scores. This study presented four research questions that addressed the following: the strongest modality profile for the participants, identifiable patterns of perceptual modalities within and between the groups, gender differences between learning styles, and consistencies for race/ethnicity with respect to gender. Statistically significant differences were found only in the Kinesthetic sub-test involving Latino participants, where they scored higher than both Black and Caucasian participants. The three highest scoring modalities for the Latino participants were Visual, Print, and Haptic; for the Black participants they were Visual, Interactive, and Print. Caucasian participants scored highest on Visual, Print, and Interactive. Males and females responded similarly. All race/ethnicities responded similarly to previous MMPALT research, with the exception of Kinesthetic, where Latinos performed better than Caucasians and Blacks. Implications for practice would include the incorporation of more interactive activities in a learning environment. Based on the results of this research, instructors may benefit from paying closer attention to kinesthetic activities for Latino students in a learning environment and not over-relying on just traditional methods of teaching. This study was exploratory and was necessary to validate the current revisions to the MMPALT IV. Future research could include modifying some of the sub-tests for more variation between test items, including more warm-up exercises to reduce any possible disorientation, adding languages other than English, and testing other race/ethnicities.
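A minimal sketch of the 2 x 3 factorial analysis described in this abstract (gender by race/ethnicity on a single modality sub-test score), using statsmodels; the scores are simulated, so only the analysis structure, not the data, mirrors the study.

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
groups = [(g, e) for g in ("male", "female") for e in ("Latino", "Black", "Caucasian")]
rows = [{"gender": g, "ethnicity": e, "score": rng.normal(10 + (e == "Latino"), 2)}
        for g, e in groups for _ in range(10)]          # simulated sub-test scores
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction (2 x 3 design)
model = ols("score ~ C(gender) * C(ethnicity)", data=df).fit()
print(anova_lm(model, typ=2))

# Follow-up Tukey HSD on the ethnicity factor
print(pairwise_tukeyhsd(df["score"], df["ethnicity"]))
```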
APA, Harvard, Vancouver, ISO, and other styles
37

Buchholz, Leah Kee. "Perceptual learning of dysarthric speech : effects of familiarization and feedback." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/5404.

Full text
Abstract:
The current study investigated the presence of perceptual learning following familiarization with spastic dysarthric speech secondary to cerebral palsy. The phonemic level of speech perception was examined using the word-initial stop voicing contrast. Stimuli were produced by a male speaker with severe spastic dysarthria. Speech samples were selected from the speaker’s utterances based on negative voice-onset time (prevoicing duration). Stimuli were systematically selected to create the voicing contrast using tokens with either short prevoicing or abnormally long prevoicing durations. Thirty naïve listeners were randomly assigned to one of three familiarization groups: one group was provided written feedback during familiarization, the second group listened to the same stimuli but was not provided with feedback, and the third group listened to different stimuli, which did not contain the voicing contrast. A forced-choice testing format was used to measure listeners’ responses preceding and following familiarization. Results showed changes in listeners’ response patterns following familiarization across the three groups indicating that perceptual learning occurred. Theoretical, clinical, and design implications are explored.
APA, Harvard, Vancouver, ISO, and other styles
38

Astle, Andrew. "A study of perceptual learning effects in human adult amblyopia." Thesis, University of Nottingham, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.537787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Al-Omari, Muhannad A. R. I. "Joint perceptual learning and natural language acquisition for autonomous robots." Thesis, University of Leeds, 2017. http://etheses.whiterose.ac.uk/18860/.

Full text
Abstract:
Understanding how children learn the components of their mother tongue and the meanings of each word has long fascinated linguists and cognitive scientists. Equally, robots face a similar challenge in understanding language and perception to allow for a natural and effortless human-robot interaction. Acquiring such knowledge is a challenging task, unless this knowledge is preprogrammed, which is no easy task either, nor does it solve the problem of language difference between individuals or learning the meaning of new words. In this thesis, the problem of bootstrapping knowledge in language and vision for autonomous robots is addressed through novel techniques in grammar induction and word grounding to the perceptual world. The learning is achieved in a cognitively plausible loosely-supervised manner from raw linguistic and visual data. The visual data is collected using different robotic platforms deployed in real-world and simulated environments and equipped with different sensing modalities, while the linguistic data is collected using online crowdsourcing tools and volunteers. The presented framework does not rely on any particular robot or any specific sensors; rather it is flexible to what the modalities of the robot can support. The learning framework is divided into three processes. First, the perceptual raw data is clustered into a number of Gaussian components to learn the ‘visual concepts’. Second, frequent co-occurrence of words and visual concepts are used to learn the language grounding, and finally, the learned language grounding and visual concepts are used to induce probabilistic grammar rules to model the language structure. In this thesis, the visual concepts refer to: (i) people’s faces and the appearance of their garments; (ii) objects and their perceptual properties; (iii) pairwise spatial relations; (iv) the robot actions; and (v) human activities. The visual concepts are learned by first processing the raw visual data to find people and objects in the scene using state-of-the-art techniques in human pose estimation, object segmentation and tracking, and activity analysis. Once found, the concepts are learned incrementally using a combination of techniques: Incremental Gaussian Mixture Models and a Bayesian Information Criterion to learn simple visual concepts such as object colours and shapes; spatio-temporal graphs and topic models to learn more complex visual concepts, such as human activities and robot actions. Language grounding is enabled by seeking frequent co-occurrence between words and learned visual concepts. Finding the correct language grounding is formulated as an integer programming problem to find the best many-to-many matches between words and concepts. Grammar induction refers to the process of learning a formal grammar (usually as a collection of re-write rules or productions) from a set of observations. In this thesis, Probabilistic Context Free Grammar rules are generated to model the language by mapping natural language sentences to learned visual concepts, as opposed to traditional supervised grammar induction techniques where the learning is only made possible by using manually annotated training examples on large datasets. The learning framework attains its cognitive plausibility from a number of sources. First, the learning is achieved by providing the robot with pairs of raw linguistic and visual inputs in a “show-and-tell” procedure akin to how human children learn about their environment. 
Second, no prior knowledge is assumed about the meaning of words or the structure of the language, except that there are different classes of words (corresponding to observable actions, spatial relations, and objects and their observable properties). Third, the knowledge in both language and vision is obtained in an incremental manner where the gained knowledge can evolve to adapt to new observations without the need to revisit previously seen ones (previous observations). Fourth, the robot learns about the visual world first, then it learns about how it maps to language, which aligns with the findings of cognitive studies on language acquisition in human infants that suggest children come to develop considerable cognitive understanding about their environment in the pre-linguistic period of their lives. It should be noted that this work does not claim to be modelling how humans learn about objects in their environments, but rather it is inspired by it. For validation, four different datasets are used which contain temporally aligned video clips of people or robots performing activities, and sentences describing these video clips. The video clips are collected using four robotic platforms, three robot arms in simple block-world scenarios and a mobile robot deployed in a challenging real-world office environment observing different people performing complex activities. The linguistic descriptions for these datasets are obtained using Amazon Mechanical Turk and volunteers. The analysis performed on these datasets suggest that the learning framework is suitable to learn from complex real-world scenarios. The experimental results show that the learning framework enables (i) acquiring correct visual concepts from visual data; (ii) learning the word grounding for each of the extracted visual concepts; (iii) inducing correct grammar rules to model the language structure; (iv) using the gained knowledge to understand previously unseen linguistic commands; and (v) using the gained knowledge to generate well-formed natural language descriptions of novel scenes.
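A minimal batch sketch of the "visual concept" step described in this abstract, not the thesis's incremental learner: cluster raw perceptual features (random colour vectors standing in for object colours) with Gaussian mixtures and choose the number of concepts by the Bayesian Information Criterion. The feature dimensions and data are invented for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "colour" observations drawn from three underlying concepts
features = np.vstack([rng.normal(loc=c, scale=0.05, size=(50, 3))
                      for c in ([1, 0, 0], [0, 1, 0], [0, 0, 1])])

best_k, best_bic, best_model = None, np.inf, None
for k in range(1, 8):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(features)
    bic = gmm.bic(features)          # lower BIC = better trade-off of fit vs. complexity
    if bic < best_bic:
        best_k, best_bic, best_model = k, bic, gmm

print(f"BIC selects {best_k} visual concepts")   # expected: 3
labels = best_model.predict(features)            # concept assignment per observation
```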
APA, Harvard, Vancouver, ISO, and other styles
40

Coen, Michael Harlan. "Multimodal dynamics : self-supervised learning in perceptual and motor systems." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/34022.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (leaves 178-192).
This thesis presents a self-supervised framework for perceptual and motor learning based upon correlations in different sensory modalities. The brain and cognitive sciences have gathered an enormous body of neurological and phenomenological evidence in the past half century demonstrating the extraordinary degree of interaction between sensory modalities during the course of ordinary perception. We develop a framework for creating artificial perceptual systems that draws on these findings, where the primary architectural motif is the cross-modal transmission of perceptual information to enhance each sensory channel individually. We present self-supervised algorithms for learning perceptual grounding, intersensory influence, and sensorymotor coordination, which derive training signals from internal cross-modal correlations rather than from external supervision. Our goal is to create systems that develop by interacting with the world around them, inspired by development in animals. We demonstrate this framework with: (1) a system that learns the number and structure of vowels in American English by simultaneously watching and listening to someone speak. The system then cross-modally clusters the correlated auditory and visual data.
It has no advance linguistic knowledge and receives no information outside of its sensory channels. This work is the first unsupervised acquisition of phonetic structure of which we are aware, outside of that done by human infants. (2) a system that learns to sing like a zebra finch, following the developmental stages of a juvenile zebra finch. It first learns the song of an adult male and then listens to its own initially nascent attempts at mimicry through an articulatory synthesizer. In acquiring the birdsong to which it was initially exposed, this system demonstrates self-supervised sensorimotor learning. It also demonstrates afferent and efferent equivalence - the system learns motor maps with the same computational framework used for learning sensory maps.
by Michael Harlan Coen.
Ph.D.
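As a loose illustration of the cross-modal correlation idea described in the abstract above (not the system actually built in the thesis), the sketch below uses canonical correlation analysis to find maximally correlated directions in two synthetic "sensory" feature spaces; all data, dimensions and the shared latent factors are invented for the example.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 2))                    # shared "vowel identity" factors
audio = latent @ rng.normal(size=(2, 12)) + 0.1 * rng.normal(size=(300, 12))
video = latent @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(300, 8))

# Find directions in each modality that co-vary, so one channel can
# supervise the other without external labels.
cca = CCA(n_components=2).fit(audio, video)
a_proj, v_proj = cca.transform(audio, video)
corr = [np.corrcoef(a_proj[:, k], v_proj[:, k])[0, 1] for k in range(2)]
print("canonical correlations:", np.round(corr, 2))   # close to 1 for shared factors
```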
APA, Harvard, Vancouver, ISO, and other styles
41

Civile, Ciro. "The face inversion effect and perceptual learning : features and configurations." Thesis, University of Exeter, 2013. http://hdl.handle.net/10871/13564.

Full text
Abstract:
This thesis explores the causes of the face inversion effect, which is a substantial decrement in performance in recognising facial stimuli when they are presented upside down (Yin, 1969). I will provide results from both behavioural and electrophysiological (EEG) experiments to aid in the analysis of this effect. Over the course of six chapters I summarise my work during the four years of my PhD, and propose an explanation of the face inversion effect that is based on the general mechanisms for learning that we also share with other animals. In Chapter 1 I describe and discuss some of the main theories of face inversion. Chapter 2 used behavioural and EEG techniques to test one of the most popular explanations of the face inversion effect, proposed by Diamond and Carey (1986). They proposed that it is the disruption of the expertise needed to exploit configural information that leads to the inversion effect. The experiments reported in Chapter 2 were published in the Proceedings of the 34th annual conference of the Cognitive Science Society. In Chapter 3 I explore other potential causes of the inversion effect, confirming that not only configural information is involved, but also that single-feature orientation information plays an important part in the inversion effect. All the experiments included in Chapter 3 are part of a paper accepted for publication in the Quarterly Journal of Experimental Psychology. Chapter 4 of this thesis went on to attempt to answer the question of whether configural information is really necessary to obtain an inversion effect. All the experiments presented in Chapter 4 are part of a manuscript in preparation for submission to the Quarterly Journal of Experimental Psychology. Chapter 5 includes some of the most innovative experiments from my PhD work. In particular it offers some behavioural and electrophysiological evidence that shows that it is possible to apply an associative approach to face inversion. Chapter 5 is a key component of this thesis because on the one hand it explains the face inversion effect using general mechanisms of perceptual learning (MKM model). On the other hand it also shows that there seems to be something extra needed to explain face recognition entirely. All the experiments included in Chapter 5 were reported in a paper submitted to the Journal of Experimental Psychology: Animal Behaviour Processes. Finally in Chapter 6 I summarise the implications that this work will have for explanations of the face inversion effect and some of the general processes involved in face perception.
APA, Harvard, Vancouver, ISO, and other styles
42

Tang-Wright, Kimmy. "Visual topography and perceptual learning in the primate visual system." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:388b9658-dceb-443a-a19b-c960af162819.

Full text
Abstract:
The primate visual system is organised and wired in a topological manner. From the eye well into extrastriate visual cortex, a preserved spatial representation of the visual world is maintained across many levels of processing. Diffusion-weighted imaging (DWI), together with probabilistic tractography, is a non-invasive technique for mapping connectivity within the brain. In this thesis I probed the sensitivity and accuracy of DWI and probabilistic tractography by quantifying its capacity to detect topological connectivity in the post mortem macaque brain, between the lateral geniculate nucleus (LGN) and primary visual cortex (V1). The results were validated against electrophysiological and histological data from previous studies. Using the methodology developed in this thesis, it was possible to segment the LGN reliably into distinct subregions based on its structural connectivity to different parts of the visual field represented in V1. Quantitative differences in connectivity from magno- and parvocellular subcomponents of the LGN to different parts of V1 could be replicated with this method in post mortem brains. The topological corticocortical connectivity between extrastriate visual area V5/MT and V1 could also be mapped in the post mortem macaque. In vivo DWI scans previously obtained from the same brains have lower resolution and signal-to-noise because of the shorter scan times. Nevertheless, in many cases, these yielded topological maps similar to the post mortem maps. These results indicate that the preserved topology of connection between LGN and V1, and between V5/MT and V1, can be revealed using non-invasive measures of diffusion-weighted imaging and tractography in vivo. In a preliminary investigation using Human Connectome data obtained in vivo, I was not able to segment the retinotopic map in LGN based on connections to V1. This may be because information about the topological connectivity is not carried in the much lower resolution human diffusion data, or because of other methodological limitations. I also investigated the mechanisms of perceptual learning by developing a novel task-irrelevant perceptual learning paradigm designed to adapt neuronal elements early on in visual processing in a certain region of the visual field. There is evidence, although not clear-cut, to suggest that the paradigm elicits task-irrelevant perceptual learning, but that these effects only emerge when practice-related effects are accounted for. When orientation- and location-specific effects on perceptual performance are examined, the largest improvement occurs at the trained location; however, there is also significant improvement at one other 'untrained' location, and there is also a significant improvement in performance for a control group that did not receive any training at any location. The work highlights inherent difficulties in investigating perceptual learning, which relate to the fact that learning likely takes place at both lower and higher levels of processing; however, the paradigm provides a good starting point for comprehensively investigating the complex mechanisms underlying perceptual learning.
APA, Harvard, Vancouver, ISO, and other styles
43

Batson, Melissa Anne. "Task-irrelevant perceptual learning of crossmodal links: specificity and mechanisms." Thesis, Boston University, 2010. https://hdl.handle.net/2144/42191.

Full text
Abstract:
It is clear that in order to perceive the external environment in its entirety, inputs from multiple sensory systems (i.e. modalities) must be combined with regard to each object in the environment. Humans are highly vision-dependent creatures, with a large portion of the human cortex dedicated to visual perception and many multimodal areas proposed to integrate vision with other modalities. Recent studies of multimodal integration have shown crossmodal facilitation (increased performance at short stimulus onset asynchronies, SOAs) and/or inhibition of return (IOR; decreased performance at long SOAs) for detection of a target stimulus in one modality following a location-specific cue in a different modality. It has also been shown that unimodal systems maintain some level of plasticity through adulthood, as revealed through studies of sensory deprivation (i.e. unimodal areas respond to multimodal stimuli), and especially through perceptual learning (PL), a well-defined type of cortical plasticity. Few studies have attempted to investigate the specificity and plasticity of crossmodal effects or the contexts in which multimodal processing is necessary for accurate visual perception. This dissertation addresses these unanswered questions of audiovisual (AV) crossmodal cuing effects by combining findings from unimodal perceptual learning with those of multimodal cuing effects as follows: (1) the short- and long-term effects of audiovisual crossmodal cuing, as well as the plasticity of these effects, were systematically examined using spatially specific audiovisual training to manipulate crossmodal associations using perceptual learning; (2) neural correlates of these plastic crossmodal effects were deduced using monocular viewing tests (discriminating simple and complex stimuli) following monocular and orientation-specific crossmodal perceptual training; and (3) psychophysical boundaries of plasticity within and among these mechanisms as dependent on task/training type and difficulty were determined by varying stimulus salience and looking at post-PL changes in response operating characteristics.
APA, Harvard, Vancouver, ISO, and other styles
44

Olson, Erin (Eric K.). "Loanwords and the perceptual map: a perspective from MaxEnt Learning." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/129120.

Full text
Abstract:
Thesis: Ph. D. in Linguistics, Massachusetts Institute of Technology, Department of Linguistics and Philosophy, September, 2020
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 213-216).
This dissertation examines the predictions of two computational models of grammar within the domain of loanword phonology. These models, formulated within a Maximum Entropy (MaxEnt) framework, have been shown to be successful when simulating the effects that a substantive bias such as the Perceptual Map (PMap) hypothesis of Steriade (2001) may have on a phonological learner. While previous studies have focused primarily on modelling data taken from artificial grammar learning experiments (Wilson, 2006; White, 2013), this dissertation instead models loanword adaptation. Loanword adaptation was chosen as a useful test domain because speakers will often choose to repair phonotactically illicit loanwords in ways that are not attested in their native grammar. It thus provides a wealth of data about how speakers structure their grammar in the absence of overt phonological evidence. To this end, a case study of English loanword adaptation in Cantonese is undertaken.
It will be shown that the patterns of consonant deletion and vowel epenthesis used by speakers of Cantonese to adapt English words are compatible with the PMap, and can be modelled through the MaxEnt learners mentioned above. It will also be shown through a series of computational simulations that Wilson's (2006) learner fails to acquire the grammar necessary to account for the patterns of loanword adaptation, while White's (2013) learner succeeds. This is a result of the way in which the PMap is encoded within these learners. While both encode the PMap as a series of asymmetrical Gaussian distributions on the weights of constraints, Wilson (2006) encodes this asymmetry through the variances, or plasticities, of the distributions, while White (2013) encodes it through the means, or target weights. A grammar which encodes the PMap through asymmetrical plasticities must encounter evidence from the phonology of the language in order to alter the weights of constraints.
However, the loanword phonology of Cantonese crucially lacks such phonological evidence, and Wilson's (2006) model cannot make use of it when establishing constraint asymmetries. White's (2013) model, however, allows constraint asymmetries to be maintained in the absence of overt evidence, and results in more accurate grammars of Cantonese loanword adaptation.
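The contrast drawn in this abstract between the two learners (asymmetry encoded in the prior variances versus in the prior means) can be sketched with the Gaussian prior term of a MaxEnt objective. This is only an illustration under simplifying assumptions: two hypothetical constraints, no data term, and invented numbers; it is not taken from Wilson (2006) or White (2013).

    import numpy as np

    def prior_penalty(w, mu, sigma):
        """Gaussian prior term of the MaxEnt objective:
        sum_i (w_i - mu_i)^2 / (2 * sigma_i^2)."""
        w, mu, sigma = (np.asarray(x, dtype=float) for x in (w, mu, sigma))
        return float(np.sum((w - mu) ** 2 / (2 * sigma ** 2)))

    def weights_without_data(mu, sigma):
        """With no phonological evidence the data term contributes nothing,
        so minimising the objective simply returns the prior means."""
        return np.asarray(mu, dtype=float)

    # Two hypothetical faithfulness constraints: one protecting a perceptually
    # salient contrast, one protecting a less salient contrast.
    # Variance-style encoding: equal target weights, asymmetric plasticities.
    var_mu, var_sigma = [0.0, 0.0], [10.0, 1.0]
    # Mean-style encoding: asymmetric target weights, equal plasticities.
    mean_mu, mean_sigma = [5.0, 1.0], [1.0, 1.0]

    print(weights_without_data(var_mu, var_sigma))    # [0. 0.] -> no asymmetry emerges
    print(weights_without_data(mean_mu, mean_sigma))  # [5. 1.] -> asymmetry preserved
    # Cost, under the mean-style prior, of leaving both weights at zero.
    print(prior_penalty([0.0, 0.0], mean_mu, mean_sigma))  # 13.0

The toy output mirrors the abstract's point: absent overt evidence, asymmetric variances alone leave the constraint weights symmetric, whereas asymmetric target weights maintain the PMap-driven asymmetry.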
APA, Harvard, Vancouver, ISO, and other styles
45

Sanders, Paul D. "An exploratory study of the relationship between perceptual modality strength and music achievement among fifth-grade students /." Full-text version available from OU Domain via ProQuest Digital Dissertations, 1991.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
46

Lui, Catherine Johnston. "The Perceptual Learning Style Preferences of Hispanic Students in Higher Education." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6712.

Full text
Abstract:
This paper addresses the question of whether higher education Hispanic students of different nationalities have different perceptual learning style preferences. Findings from independent-samples t-tests suggest that the country of origin of a Hispanic student's parents has a statistically significant relationship (n=165, p<0.0073) with the student's learning style preferences. ANOVA results also identified a statistically significant relationship between SES and group learning style (p<0.004), and between visual learning style and two factors: age (p<0.011) and family education (p<0.033).
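For readers unfamiliar with the statistics reported here, the sketch below shows how an independent-samples t-test and a one-way ANOVA of the kind cited can be run; the data are simulated and purely illustrative, not the study's survey responses, and the group sizes and means are invented.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical learning-style preference scores for students whose parents
    # come from two different countries of origin.
    group_a = rng.normal(loc=3.8, scale=0.6, size=80)
    group_b = rng.normal(loc=3.5, scale=0.6, size=85)
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # One-way ANOVA across three hypothetical SES bands for a
    # group-learning-style score.
    low, mid, high = (rng.normal(loc=m, scale=0.7, size=55) for m in (3.2, 3.5, 3.9))
    f_stat, p_value = stats.f_oneway(low, mid, high)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")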
APA, Harvard, Vancouver, ISO, and other styles
47

Nagel, Karin Lynne. "Training visual pattern recognition : using worked examples to aid schema acquisition." Thesis, Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/28851.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Lee, Larry Dee. "An investigation on computer-based instructional presentation modes and perceptual learning styles in concept learning." 1999. Digital version accessible at: http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Stanley, Mary Louise. "Invariant relative timing in the learning of a perceptual motor skill." Thesis, University of British Columbia, 1989. http://hdl.handle.net/2429/28130.

Full text
Abstract:
The concept of invariant relative timing has typically been associated with the concept of a generalized motor program. The present study approaches the phenomenon of invariant relative timing from the perspective of learning. The underlying question of concern for this study is "What is learned?". The specific question addressed by the present study is whether relative timing is one of the essential properties of movement that is learned during skill acquisition. In the present experiment, subjects were given extensive practice in learning to visually track and reproduce a criterion waveform using a joystick control for their response. In order to test whether subjects learn the relative timing of a movement, they were transferred to waveforms which were identical to the criterion in terms of relative timing, but different in terms of absolute timing. Measurements were taken on all waveforms in two conditions: 1) in a pursuit tracking condition where subjects were temporally constrained by the stimulus, and 2) in a reproduction condition where subjects' timing was not constrained. Pursuit tracking performance was evaluated using three dependent measures: RMS error, lead-lag index, and variability. Performance in the reproduction condition was subjected to three analyses: 1) an harmonic analysis, which described each response waveform in terms of its phase, frequency, amplitude, and period; 2) proportional interval durations; and 3) proportional interval displacements. The outcome from both conditions gives support to the idea that the invariant relative timing of movement is one of the aspects of a movement that humans learn.
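As a rough illustration of the harmonic analysis and proportional-interval measures described in this abstract, the sketch below extracts the amplitude, frequency and phase of the dominant component of a toy response waveform and computes proportional interval durations. The function name, sampling parameters and interval values are hypothetical, not the thesis's actual analysis.

    import numpy as np

    def dominant_harmonic(signal, sample_rate):
        """Return amplitude, frequency (Hz) and phase (rad) of the strongest
        non-DC component of a response waveform, via a discrete Fourier transform."""
        signal = np.asarray(signal, dtype=float)
        spectrum = np.fft.rfft(signal - signal.mean())
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / sample_rate)
        k = np.argmax(np.abs(spectrum[1:])) + 1      # skip the DC bin
        amplitude = 2 * np.abs(spectrum[k]) / signal.size
        return amplitude, freqs[k], np.angle(spectrum[k])

    # Toy response: a 0.5 Hz joystick movement sampled at 100 Hz for 10 s.
    t = np.arange(0, 10, 0.01)
    response = 2.0 * np.sin(2 * np.pi * 0.5 * t + 0.3)
    print(dominant_harmonic(response, sample_rate=100))

    # Proportional interval durations: each segment's duration as a share of
    # the total movement time; relative timing is preserved when these ratios
    # stay constant even though absolute durations change.
    intervals = np.array([0.4, 0.8, 0.8])            # seconds, hypothetical
    print(intervals / intervals.sum())               # [0.2 0.4 0.4]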
APA, Harvard, Vancouver, ISO, and other styles
50

Knox, Pamela Jane. "Global motion processing, binocular interactions and perceptual learning in human amblyopia." Thesis, Glasgow Caledonian University, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.687405.

Full text
Abstract:
Amblyopia, a developmental disorder of the visual system, is widely known to cause a reduction in optotype acuity, but it can also be associated with disrupted binocular vision, reduced contrast sensitivity and many other subtle high-level visual processing deficits. The initial stages of the work presented in this thesis involved laboratory investigation of the functional deficit in global motion processing that has previously been reported as abnormal in the presence of amblyopia. The key question is whether higher levels of visual processing "inherit" abnormalities from lower levels, or whether additional developmental abnormalities arise as a direct consequence of impoverished visual input. Overall, the results imply a far more complex perceptual change in amblyopia than would be predicted by the well-established losses in resolution and contrast sensitivity. The motivation behind Chapters 5 and 6 stems from the current observation that the recovery of visual function in amblyopia is contingent on even brief periods of correlated binocular vision, suggesting that amblyopia is intrinsically a binocular problem and that it is suppressive mechanisms that render the cortex, which is a structurally binocular system, functionally monocular. Research is now casting doubt on the idea that amblyopes do not possess cortical binocular connections, suggesting an active suppression rather than a deficit of cellular function. Interestingly, this is echoed in the clinical domain where, in cases of de-correlated visual input, strabismus clinical protocols have now established that the correction of refractive error alone can be sufficient to improve acuity, again implying incomplete inhibition mechanisms. The clinical investigations in this thesis involved the validation of a series of psychophysical paradigms in cohorts of juvenile and adult amblyopes (as well as age-matched controls) to establish the degree of binocular interaction present and to explore the potential for treating amblyopia with prolonged viewing of a binocular stimulus adapted to correlate the visual input from both eyes.
APA, Harvard, Vancouver, ISO, and other styles
