
Dissertations / Theses on the topic 'Map scale'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Map scale.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Eldridge, Simon Michael. "The impact of the scale of mapping on soil map quality." University of Canberra. Resource, Environmental & Heritage Sciences, 1997. http://erl.canberra.edu.au./public/adt-AUC20060707.102807.

Full text
Abstract:
It is generally assumed that increased map precision (i.e. map unit homogeneity) and map purity (map unit accuracy) should result from increasing the scale of mapping of the soil resource, since a larger scale should enable a more intricate breakdown of the landscape into landform-facet-based units. This study compared the predictive success of a 1:10K scale soil association map with the 1:25K and 1:100K scale soil landscape maps within the Birrigai area of the Paddy's River catchment, south-west of Canberra, A.C.T. The 1:25K and 1:100K scale soil landscape maps were also evaluated in a second, larger evaluation area in the Paddy's River catchment, which allowed more of the larger soil landscape map units to be evaluated. The 1:25K scale soil map was produced by another author for the A.C.T. Government, and was surveyed at a substantially lower survey intensity than the 1:100K and 1:10K scale soil maps (i.e. only 0.05 observation sites/cm² of published map). These maps were evaluated using a set of randomly located independent evaluation sites in each evaluation area, from which standard Marsman & de Gruijter (1986) measures of map purity were calculated and compared. The strength of soil-landscape relationships within this catchment was determined from a fixed one-way analysis of variance, and from simpler graphical comparisons of the means and standard deviations of the discrete soil data within these landform-based map units. Soil-landscape relationships for the nominal-scale soil data (i.e. class-type data) were evaluated by comparing the Marsman & de Gruijter (1986) homogeneity index ratings among the soil map units. Intensive survey traverses were also carried out in selected soil landscapes to further evaluate the strength of the soil-landscape relationships present.
The results revealed obvious improvements in map quality associated with increasing map scale from 1:100,000 to 1:10,000, including increases in predictive success (map purity), reductions in the extent of map unit impurities, and planning advantages associated with having individual land facets delineated on the 1:10,000 scale map. The respectable purity ratings achieved by the 1:100,000 scale soil landscape map (i.e. an average purity rating of 63%) were largely attributed to the flexibility of the "soil material" approach to soil landscape mapping. The relatively poor performance of the 1:25K consultancy soil landscape map demonstrated that any benefit gained from the more intricate representation of map unit delineations at an increased mapping scale will be drastically reduced if it is not matched by a corresponding increase in the intensity of field investigations. Evaluations of the soil-landscape relationships found that the land facets of the Paddy's River catchment generally failed to delineate areas that were both uniform and unique in respect of their soil properties. Soil-landscape relationships were instead found to be quite complex, applying only to certain land facets and only to certain soil properties. Soil maps with units based on land-surface features were nevertheless recommended, both because landscape factors other than soils are important to land capability ratings and because of the usability of such maps. This study recommended the adoption of a mapping standard of at least two detailed soil profile observations per land facet in each map unit, to ensure a reasonable estimate of the variability and modal soil conditions present, as well as a reliable confirmation of the perceived soil-landscape relationships.
The error usually associated with small-scale mapping was effectively reduced by rapid ground truthing, which involved driving along the major roads dissecting the map area and making brief observations of soil exposures on road batters, although the bias of the road network made such mapping improvements uneven across the map. The major point to come from this study was a re-emphasis that soil spatial variability must be accepted as a real landscape attribute which needs to be accurately described and communicated to land users, and must not be considered some sort of soil mapping failure. That individual facets of the landscape rarely coincide with pockets of uniform and unique soils and soil properties is simply an on-the-ground reality of nature, not a mapping failure. Since other landscape factors (e.g. hillslope gradient) most often dominate the determination of land use suitability and capability, it is better to describe the range and modal state of the soil conditions within such facets effectively than to attempt to extrapolate possible soil boundaries using geostatistical techniques which cut across such land facets and may or may not correlate with real groupings of soil properties, depending on the spatial resolution of the soil variability distribution in the landscape. Even so, the results of this investigation put the validity of the physiographic terrain class mapping model as a predictor of soil traits in question, at least for the more complex landscape settings.
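The purity comparison described in this abstract can be illustrated with a small sketch. This is only an illustration of the basic idea (purity as the fraction of independent evaluation sites whose observed class matches the mapped class), not the full Marsman & de Gruijter (1986) formulation; the data are made up.

```python
# Hypothetical sketch of a map-purity evaluation: purity is taken here as
# the fraction of independent evaluation sites whose field-observed soil
# class matches the class predicted by the map unit they fall in.

def map_purity(predicted, observed):
    """Fraction of evaluation sites where the mapped class matches the observation."""
    if len(predicted) != len(observed):
        raise ValueError("site lists must be the same length")
    hits = sum(p == o for p, o in zip(predicted, observed))
    return hits / len(predicted)

# Illustrative data: 8 evaluation sites, 6 correctly predicted.
mapped   = ["A", "A", "B", "B", "C", "C", "C", "A"]
observed = ["A", "B", "B", "B", "C", "A", "C", "A"]
print(map_purity(mapped, observed))  # 6/8 = 0.75
```

A map comparison like the one in the study would repeat this per map unit and per scale, then compare the averages (e.g. the 63% average purity quoted for the 1:100K map).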
APA, Harvard, Vancouver, ISO, and other styles
2

Miller, Scott N., D. Phillip Guertin, and Lainie R. Levick. "Influences of Map Scale on Drainage Network Representation." Arizona-Nevada Academy of Science, 1999. http://hdl.handle.net/10150/296536.

Full text
3

Jones, Eagle Sunrise. "Large scale visual navigation and community map building." Diss., Restricted to subscribing institutions, 2009. http://proquest.umi.com/pqdweb?did=1905636871&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text
4

Forrest, David. "The application of expert systems to small scale map design." Thesis, University of Glasgow, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284711.

Full text
5

Al-Bairmani, Sukaina. "Synthetic turbulence based on the multi-scale turnover Lagrangian map." Thesis, University of Sheffield, 2017. http://etheses.whiterose.ac.uk/19080/.

Full text
Abstract:
Synthetic turbulence refers to stochastic fields having characteristics of real hydrodynamic turbulent flows; it has been useful in the modelling and simulation of turbulence and in furthering understanding of fundamental properties of turbulent motion. Synthetic turbulence aims to construct the field variables (such as velocity distributions) by simpler processes, reproducing characteristic features of turbulent fluctuations at a reduced computational cost compared with a formal numerical solution of the Navier-Stokes equations. A new approach to synthetic turbulence has recently been proposed, which showed that realistic synthetic isotropic turbulent fields could be generated using the multi-scale turnover Lagrangian map (MTLM). The initial focus of this thesis is on studying the MTLM synthetic fields using the filtering approach. This approach, which has not been pursued so far, sheds new light on the potential applications of the synthetic fields in large eddy simulation (LES) and subgrid-scale (SGS) modelling. Our investigation includes the SGS stresses, the SGS dissipations and related statistics, the SGS scalar variance, and its relations with other quantities (such as the filtered molecular scalar dissipation). It is well known that even if a synthetic field faithfully reproduced the multi-fractal statistics, it might not reproduce the energy flux across the energy spectrum. Therefore, from the LES and SGS modelling perspective, many questions remain unclear, such as the PDF of the SGS dissipation and the amount of back-scattering, among others. They are addressed in this work. It demonstrates that the MTLM can be used to build a synthetic SGS model with a number of good features which many current SGS models (including those for the scalar flux) do not have. We also show that it has advantages in representing the filtered molecular scalar dissipation.
In addition, we generalize the formulation of the MTLM to include the effects of a mean scalar gradient on the scalar field. Our numerical tests provide the necessary proof that the effects of the mean gradient can be captured by the MTLM. Furthermore, we investigate the effects of the input spectra on the statistics of the MTLM fields. We study the effects of the shape of the spectra by using truncated spectra and a model spectrum (the Kovasznay spectrum) as the input. The additional case, and the additional quantities we examine, shed light on how to apply the MTLM technique in simulations, as well as on the robustness of the technique. The constrained MTLM (CMTLM) is a new technique generalizing the MTLM procedure to generate anisotropic synthetic turbulence, in order to model inhomogeneous turbulence, by using the adjoint formulation. Li and Rosales [107] derived the optimality system corresponding to the MTLM map and applied this method to synthesize two Kolmogorov flows. In this thesis, we derive a new optimality system to generate anisotropic synthetic turbulence according to the CMTLM approach, in order to include the effects of solid wall boundaries, which were not taken into account in that study. We consider the difference introduced by the solid wall under impermeable boundary conditions, where the normal component of the velocity field is zero while the tangential components may be non-zero. To accomplish this task, we have modified the CMTLM procedure to generate a reflectionally symmetric synthetic field which serves as a model of the velocity field in a fully developed channel flow. We prove that the MTLM procedure preserves the reflectional symmetries, and derive the adjoint optimality system with reflectional symmetry. We aim to obtain accurate turbulent statistics, and compare our results with computed and experimental results.
The CMTLM procedure formulates the MTLM procedure as an optimization problem, with the initial Gaussian random field as the control and some known velocity field as the target. Thus, to quantify the contributions of the adjoint operator in the modelling process, the effects of the control variable on the cost function gradient and the corresponding adjoint field are examined. Contours of the mean gradients of the cost functions and adjoint fields are computed for three cases, with data taken from synthetic CMTLM Kolmogorov flows and from a synthetic CMTLM velocity field generated with DNS data as the target. Finally, in order to define a new SGS model to simulate interactions between different length scales in turbulence, we combine DNS data with the constrained MTLM method. Three data sets are truncated from DNS data with different degrees of resolution and filtered with a cutoff filter at a large filter scale; these are then used as target fields to synthesize three CMTLM fields. The CMTLM fields are merged with these target fields. Data from the merged fields are used to predict the SGS quantities, and are compared with the exact SGS quantities computed from the DNS field. In addition, the statistical geometry between the SGS and filtered quantities for real and predicted data is also investigated.
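The sharp spectral cutoff filtering that underlies the LES/SGS analysis in this abstract can be sketched in one dimension. This is a generic illustration of cutoff filtering, not code from the thesis; the signal and cutoff wavenumber are made up.

```python
import numpy as np

# Minimal sketch of a sharp spectral (cutoff) filter: Fourier modes with
# |k| above a cutoff are zeroed, splitting a field into a filtered
# (resolved) part and a subgrid-scale (SGS) residual. 1-D for brevity.

def spectral_cutoff_filter(u, k_c):
    """Return the low-pass-filtered field: modes with |k| > k_c removed."""
    u_hat = np.fft.fft(u)
    k = np.fft.fftfreq(u.size, d=1.0 / u.size)  # integer wavenumbers
    u_hat[np.abs(k) > k_c] = 0.0
    return np.real(np.fft.ifft(u_hat))

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(2 * x) + 0.3 * np.sin(17 * x)   # large-scale + small-scale mode
u_filtered = spectral_cutoff_filter(u, k_c=8)
u_sgs = u - u_filtered                      # subgrid-scale residual
# With k_c = 8, only the sin(2x) component survives the filter.
```

Quantities such as the SGS stress in the thesis are built from fields filtered this way (in 3-D) and from the corresponding residuals.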
6

Li, Fang. "An automated generalized system for large scale topographic maps." Thesis, University of Reading, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387080.

Full text
7

Anand, Suchith. "Automatic derivation of schematic maps from large scale digital geographic datasets for mobile GIS." Thesis, University of South Wales, 2006. https://pure.southwales.ac.uk/en/studentthesis/automatic-derivation-of-schematic-maps-from-large-scale-digital-geographic-datasets-for-mobile-gis(653b12bb-7e0c-41a9-aada-e8cf361064a3).html.

Full text
Abstract:
"Mapping is a way of visualizing parts of the world and maps are largely diagrammatic and two dimensional. There is usually a one-to-one correspondence between places in the world and places on the map, but while there are limitless aspects to the world, the cartographer can only select a few to map" (Daniel Dorling, 1996). Map generalization is the process by which small scale maps are derived from large scale maps. This requires the application of operations such as simplification, selection, displacement and amalgamation to map features subsequent to scale reduction. This work is concerned with the problem of effectively rendering large scale datasets on small display devices by developing appropriate map generalization techniques for generating schematic maps. With the advent of high-end miniature technology and large scale digital geographic data products, it is essential to devise proper methodologies and techniques for the automated generation of schematic maps specifically tailored for mobile GIS applications. Schematic maps are diagrammatic representations based on linear abstractions of networks. Transportation networks are the key candidates for schematization, which helps ease the interpretation of information through cartographic abstraction. This study looks at how the simulated annealing optimisation technique can be successfully applied to the automated generation of schematic maps from large scale digital geographic datasets, tailored specifically for mobile GIS applications. The software developed makes use of a simulated annealing based schematic map generator algorithm to generate route maps from the OSCAR® dataset corresponding to a series of user-defined start and end points. The generated schematic route maps are displayed and tested on mobile handheld devices, with promising results for mobile GIS applications.
This work concentrates on the automatic generation of schematic maps, which, in the context of mobile mapping, are seen as being a particularly useful means of displaying routes for way finding type and utility network applications.
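The simulated-annealing schematization described in this abstract can be sketched as follows. This is an illustrative toy (not the thesis's generator): the cost function, move size, and cooling schedule are assumptions, and the only objective modelled is snapping route segments toward octilinear (multiples of 45°) orientations, a common goal of schematic-map generators.

```python
import math
import random

def octilinearity_cost(pts):
    """Sum of each segment's angular deviation from the nearest multiple of 45 deg."""
    cost = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        ang = math.atan2(y1 - y0, x1 - x0)
        cost += min(abs(ang - k * math.pi / 4) for k in range(-4, 5))
    return cost

def anneal(pts, steps=5000, t0=1.0, seed=0):
    """Metropolis-style annealing: nudge one coordinate, accept downhill moves
    always and uphill moves with probability exp(-dE/T) under a cooling T."""
    rng = random.Random(seed)
    cur = [list(p) for p in pts]
    cur_cost = octilinearity_cost(cur)
    best, best_cost = [p[:] for p in cur], cur_cost
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9
        j = rng.randrange(len(cur))
        axis = rng.randrange(2)
        old = cur[j][axis]
        cur[j][axis] = old + rng.uniform(-0.1, 0.1)
        new_cost = octilinearity_cost(cur)
        if new_cost < cur_cost or rng.random() < math.exp((cur_cost - new_cost) / t):
            cur_cost = new_cost                      # accept the move
            if new_cost < best_cost:
                best, best_cost = [p[:] for p in cur], new_cost
        else:
            cur[j][axis] = old                       # reject, restore
    return best, best_cost

route = [(0, 0), (1.0, 0.3), (2.1, 1.4), (3.0, 1.5)]
schematic, cost = anneal(route)
```

A production generator would add further cost terms (displacement from the true geometry, minimum edge lengths, topology preservation), but the accept/reject loop is the same.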
8

Gong, Nan. "Using Map-Reduce for Large Scale Analysis of Graph-Based Data." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-102822.

Full text
Abstract:
As social networks have gained in popularity, maintaining and processing the social network graph information using graph algorithms has become an essential means of discovering potential features of the graph. The escalating size of social networks has made it impossible to process the huge graphs on a single machine at a "real-time" level of execution. This thesis looks into representing and distributing graph-based algorithms using the Map-Reduce model. Graph-based algorithms are discussed at the beginning. Then, several distributed graph computing infrastructures are reviewed, followed by an introduction to Map-Reduce and some graph computation toolkits based on the Map-Reduce model. By reviewing the background and related work, graph-based algorithms are categorized, and the adaptation of graph-based algorithms to the Map-Reduce model is discussed. Two particular algorithms, MCL and DBSCAN, are chosen to be designed using the Map-Reduce model and implemented using Hadoop. A new matrix multiplication method is proposed while designing MCL. DBSCAN is reformulated into a connectivity problem using a filter method, and a Kingdom Expansion Game is proposed to do fast expansion. The scalability and performance of these new designs are evaluated. Conclusions are drawn from the literature study, practical design experience and evaluation data. Some suggestions for designing graph-based algorithms using the Map-Reduce model are also given at the end.
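The Map-Reduce model the thesis builds on can be shown with a minimal in-memory imitation of the three phases (map, shuffle, reduce) applied to graph data. This sketches the generic model only, not the thesis's MCL/DBSCAN designs or Hadoop code; the toy job computes vertex degrees from an edge list.

```python
from collections import defaultdict
from itertools import chain

# Map phase: each mapper sees one edge and emits (vertex, 1) per endpoint.
def mapper(edge):
    u, v = edge
    yield (u, 1)
    yield (v, 1)

# Shuffle phase: group all emitted pairs by key (done by the framework in Hadoop).
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: each reducer sums the values for one key -> vertex degree.
def reducer(key, values):
    return key, sum(values)

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]
grouped = shuffle(chain.from_iterable(mapper(e) for e in edges))
degrees = dict(reducer(k, vs) for k, vs in grouped.items())
print(degrees)  # {'a': 2, 'b': 2, 'c': 3, 'd': 1}
```

Graph algorithms such as MCL map onto this pattern by expressing each iteration (e.g. a sparse matrix multiplication) as one or more map/shuffle/reduce rounds.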
9

Hopfstock, Anja. "A User-Oriented Map Design in the SDI Environment: Using the Example of a European Reference Map at Medium Scale." Doctoral thesis, Verlag des Bundesamtes für Kartographie und Geodäsie, 2010. https://tud.qucosa.de/id/qucosa%3A25665.

Full text
Abstract:
The ever-increasing demand of our information society for reliable Geographic Information (GI) is the moving power for the development and maintenance of Spatial Data Infrastructures (SDI). Consequently, an SDI works to the full benefit of its users if the SDI data collection is accessible and can be used efficiently by all users in spatial problem solving and decision-making. Current development and use of SDI focus on handling geospatial data entirely by means of information technology, while little attention seems to be paid to a user-friendly and understandable presentation of geospatial data. Based on the understanding that GI is the result of human geospatial information processing, it is argued that cartography is essential in the SDI context in order to achieve the objectives of SDI. Specifically, the thesis aimed at exploring the concept of user-oriented map design in relation to SDI and elaborating a methodology for creating effective cartographic representations for SDI-relevant user types. First of all, the SDI concept, its objectives and principles are explored with respect to the human aspect of an SDI, using the example of the current European SDI initiatives. Secondly, in order to determine the role and task of cartography in the SDI context, the conceptual framework of contemporary cartography is reviewed to provide the theoretical and technological framework for a user-oriented map design. Given this, the SDI environment is assessed in relation to cartography with respect to the services providing access to the SDI data collection. Further, an SDI map production framework is elaborated utilising Spiess' concept of the graphic filter as a model for the transformation of SDI data into useful cartographic representations. In addition, the map design strategy by Grünreich provides the starting point for developing the process of map production.
The main tasks are detailed and justified taking into consideration the semiotic-cognitive and action-related concepts underpinning contemporary cartography. The applied research encompasses a case study, which is performed to implement and thus evaluate the proposed methodology. It starts from a use-case scenario in which an international spatial planning team needs to become familiar with the overall geographic characteristics of a European cross-border area. Following the process steps of user-oriented map design in the SDI environment, a map design specification is elaborated and implemented under real-world conditions. The elaborated methodology for creating user-friendly and understandable cartographic representations of geospatial data in the SDI environment is based on the theoretical and technological foundations of contemporary cartography. Map design in the SDI context first of all means establishing a graphic filter that determines the parameters and rules of the cartographic transformation process to be applied. As both applied art and engineering, the design of the graphic filter is a creative process that develops a map design solution enabling SDI users to easily produce their map. It requires, on the one hand, an understanding of map use, the map user and the map use situation, and on the other hand insight into the data used as the source. The case study proves that the elaborated methodology is practicable and functional. Cartographic reverse engineering provides a systematic and pragmatic approach to the cartographic design task. This way, map design solutions can be built upon existing cartographic experience and common traditions, as suggested by the INSPIRE recommendation for portrayal. The resulting design solution constitutes a prototype of a European reference map at medium scale built upon existing cartographic experience and common traditions.
A user-friendly, understandable and comparable presentation of geospatial data in Europe would support the human and institutional potential for cross-border cooperation and collaboration. Besides that, the test implementation shows that tools are available which make it technically feasible and viable to produce a map from geospatial data in the SDI data collection. The research project raises awareness of the human aspect of SDI inherent in its objective of supporting end users in deriving GI and knowledge from the geospatial data gathered in the SDI data collection. The role and task of cartography in the SDI context is to contribute to the initiation, creation and maintenance of portrayal services, in order to facilitate comprehensive access to the underlying geospatial data by means of a user-friendly and understandable graphic interface. For cartography to take effect in SDI development and use, cartographic design knowledge has to be made explicit and operational. It is the responsibility of cartographic professionals to prepare the map design. The wide range of map use contexts requires great flexibility of design variants depending on the dimension of human-map interaction. Therefore, the design of the maps needs to be user-driven to enable efficient map use in the user's task. Besides their function as a graphic interface, maps facilitate a common understanding of the depicted geographic features and phenomena when GI is shared between SDI users. In other words, map design can be regarded as a measure to establish interoperability of geospatial data beyond the technical level. The research work is in the scope of communication cartography, a research domain seeking to deepen the understanding of the role of cartographic expressions in the understanding and communication of GI.
(Translated from the German abstract.) The growing demand of our knowledge society for reliable information about spatial structures and conditions is the driving force behind the development and use of Spatial Data Infrastructures (SDI). An SDI works to the full benefit of society when the data in the SDI are accessible and can be used effectively for insight and decision-making processes. Current SDI development relies on modern information technology for processing geospatial data; in doing so, little attention is paid to a needs-oriented, user-friendly presentation of geospatial data in an appealing visual form. Since geographic information only arises through the user's interaction with the geospatial data, it is the task of cartography to design appropriate map presentations and to provide them at the interface between an SDI and its users. The aim of this dissertation is to develop a methodology for the map production process in an SDI environment and to test it by example. First, the concept, objectives and principles of SDI are presented using the example of the European SDI initiatives and examined with regard to the need for cartographic representations. Then, starting from the demand for understandable and readily interpretable geographic information, the role of cartography in the SDI context is determined; the function and tasks of cartography as well as the underlying concepts and foundations of user-oriented map design are set out. A comparison of existing geospatial data access services with the function of cartography reveals a gap that must be closed in order to meet user requirements. To this end, the overall process for producing maps in the SDI context is described.
In this process, the graphic filter of Spiess (2003) is of particular importance as a model of a knowledge-based system for establishing and applying cartographic design rules. The sub-tasks of cartography within an SDI proposed by Grünreich (2008) provide the starting point for elaborating the sub-processes. The proposed overall process is tested by means of a use case in the European context. This example assumes that an international planning team, in the course of designing a cross-border transport link, requires a clear description of the landscape in the form of a uniformly designed map with complete coverage. The specifications for the map design are determined by applying cartographic reverse engineering to maps recognised as well designed. The previously developed production process is then carried out and discussed, including its application to actual SDI data. The developed methodology for the map production process in the SDI environment is based on the semiotic-cognitive and action-theoretical concepts of contemporary cartography. Map design in the context of SDIs means developing a graphic filter that enables an optimal, needs-oriented visualisation of the geospatial data by means of user-specific parameters and design rules. As the case study shows, the developed methodology makes it possible to design usable and useful map presentations. The application of cartographic reverse engineering allows map presentations to be developed which, as recommended by INSPIRE, correspond to proven cartographic experience and general traditions. The result of the use case is a prototype of a European reference map at a scale of 1:250,000. The uniform and thus comparable presentation across borders supports the planning team in its work.
The practical implementation of the map also shows that functional tools and technologies for rule-based map production from SDI data are available. The dissertation contributes to raising awareness of the human aspect of using a Spatial Data Infrastructure. Cartography's contribution to the use of the geospatial data of an SDI consists in the initiation, design and maintenance of portrayal services, since the usability of the geospatial data is best ensured when cartographic design methods are applied. It is the responsibility of cartographers to attend to the user-facing aspects of this graphic interface, taking contemporary cartographic concepts into account. According to the INSPIRE directive, map-based information is used in numerous activities. Effective visual information processing by the user therefore requires user-oriented map design according to the intended interaction (e.g. communication or analysis). Besides their function as an interface, cartographic representations make spatial structures understandable. Map production in the SDI context is therefore a measure for enabling interoperability of geospatial data beyond the technical level, at the human level. The relevance of this research lies in the field of communication cartography, which seeks to deepen the effectiveness and reliability of communication about spatial structures and conditions.
10

Bamford, Simeon A. "Synaptic rewiring in neuromorphic VLSI for topographic map formation." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/3997.

Full text
Abstract:
A generalised model of biological topographic map development is presented which combines both weight plasticity and the formation and elimination of synapses (synaptic rewiring), as well as both activity-dependent and -independent processes. The question of whether an activity-dependent process can refine a mapping created by an activity-independent process is investigated using a statistical approach to analysing mapping quality. The model is then implemented in custom mixed-signal VLSI. Novel aspects of this implementation include: (1) a distributed and locally reprogrammable address-event receiver, with which large axonal fan-out does not reduce channel capacity; (2) an analogue current-mode circuit for Euclidean distance calculation which is suitable for operation across multiple chips; (3) slow probabilistic synaptic rewiring driven by (pseudo-)random noise; (4) the application of a very-low-current design technique to improving the stability of weights stored on capacitors; (5) exploiting transistor non-ideality to implement partially weight-dependent spike-timing-dependent plasticity; (6) the use of the non-linear capacitance of MOSCAP devices to compensate for other non-linearities. The performance of the chip is characterised, and it is shown that the fabricated chips are capable of implementing the model, resulting in biologically relevant behaviours such as activity-dependent reduction of the spatial variance of receptive fields. Complementing a fast synaptic weight change mechanism with a slow synapse rewiring mechanism is suggested as a method of increasing the stability of learned patterns.
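The "partially weight-dependent spike-timing-dependent plasticity" mentioned in point (5) can be illustrated behaviourally. This is a generic software model of weight-dependent STDP, not the chip's circuit; all constants and the exact form of the weight dependence are assumptions for illustration.

```python
import math

# Illustrative STDP rule: potentiation is scaled by the remaining headroom
# (w_max - w)^mu, making it partially weight-dependent, while depression
# here is weight-independent. Constants are made up for the sketch.
def stdp_update(w, dt, a_plus=0.1, a_minus=0.12, tau=20.0, w_max=1.0, mu=0.5):
    """Return the new weight after a pre/post spike pair separated by dt (ms).

    dt > 0: pre fires before post (causal) -> potentiate.
    dt < 0: post fires before pre (anti-causal) -> depress.
    """
    if dt > 0:
        w += a_plus * ((w_max - w) ** mu) * math.exp(-dt / tau)
    else:
        w -= a_minus * math.exp(dt / tau)
    return min(max(w, 0.0), w_max)

w = 0.5
w = stdp_update(w, dt=5.0)    # causal pair: weight increases
w = stdp_update(w, dt=-5.0)   # anti-causal pair: weight decreases
```

The weight dependence keeps weights away from the rails, which is one reason such rules are attractive for capacitor-stored analogue weights.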
11

Liu, Yue. "Multi-scale 3-D map building and representation for robotics applications and vehicle guidance." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq25437.pdf.

Full text
12

Samperi, Katrina. "Using trails to improve map generation for virtual agents in large scale, online environments." Thesis, University of Birmingham, 2015. http://etheses.bham.ac.uk//id/eprint/5660/.

Full text
Abstract:
This thesis looks at improving the generation of maps for intelligent virtual agents in large scale environments. Virtual environments are growing larger in size and becoming more complex, and a major challenge is providing agents that are able to autonomously generate their own map representations of the environment for use in navigation. Currently, map generation for agents in large scale virtual environments is performed either by hand or requires a lengthy pre-processing step in which the map is built in advance. We are interested in environments where this process is not possible, such as those that encourage user-generated content. We look at improving map generation in these environments by using trails. Trails are a set of observations of how users navigate an environment over time. By observing trails, an agent is able to identify free space in an environment and how to navigate between points without needing to perform any collision checking. We found that trails in a virtual environment are a useful source of information for an agent's map-building process. Trails can be used to improve rapidly exploring random tree and probabilistic roadmap generation, as well as being a source of information for segmenting maps in very large scale environments.
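The core trail idea, that observed user paths certify free space and connectivity without collision checking, can be sketched as a small roadmap builder. This is an illustrative toy, not the thesis's method; the connection radius and data are assumptions.

```python
import math
from itertools import combinations

# Trail points are treated as certified free space: consecutive points on a
# trail were actually traversed, and nearby points from any trails are
# assumed mutually reachable, so no collision checking is performed.
def roadmap_from_trails(trails, connect_radius=1.5):
    """Build an adjacency mapping over all observed trail points."""
    nodes = [p for trail in trails for p in trail]
    graph = {p: set() for p in nodes}
    # Consecutive trail points: connect them directly.
    for trail in trails:
        for a, b in zip(trail, trail[1:]):
            graph[a].add(b)
            graph[b].add(a)
    # Nearby points from any trail: assume mutually reachable.
    for a, b in combinations(set(nodes), 2):
        if math.dist(a, b) <= connect_radius:
            graph[a].add(b)
            graph[b].add(a)
    return graph

trail_1 = [(0, 0), (1, 0), (2, 0)]
trail_2 = [(2, 1), (3, 1)]
g = roadmap_from_trails([trail_1, trail_2])
# (2, 0) and (2, 1) lie within the radius, so the two trails join into one map.
```

A probabilistic-roadmap variant would use these trail points as samples in place of (or alongside) random samples, which is where the collision-check savings come from.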
13

Omori, Y., R. Chown, G. Simard, K. T. Story, K. Aylor, E. J. Baxter, B. A. Benson, et al. "A 2500 deg2 CMB Lensing Map from Combined South Pole Telescope and Planck Data." IOP PUBLISHING LTD, 2017. http://hdl.handle.net/10150/626179.

Full text
Abstract:
We present a cosmic microwave background (CMB) lensing map produced from a linear combination of South Pole Telescope (SPT) and Planck temperature data. The 150 GHz temperature data from the 2500 deg² SPT-SZ survey are combined with the Planck 143 GHz data in harmonic space to obtain a temperature map that has broader ℓ coverage and less noise than either individual map. Using a quadratic estimator technique on this combined temperature map, we produce a map of the gravitational lensing potential projected along the line of sight. We measure the auto-spectrum of the lensing potential, C_L^φφ, and compare it to the theoretical prediction for a ΛCDM cosmology consistent with the Planck 2015 data set, finding a best-fit amplitude of 0.95 +0.06/-0.06 (stat.) +0.01/-0.01 (sys.). The null hypothesis of no lensing is rejected at a significance of 24σ. One important use of such a lensing potential map is in cross-correlations with other dark-matter tracers. We demonstrate this cross-correlation in practice by calculating the cross-spectrum, C_L^φG, between the SPT+Planck lensing map and Wide-field Infrared Survey Explorer (WISE) galaxies. We fit C_L^φG to a power law of the form p(L) = a(L/L₀)^(-b) with a, L₀, and b fixed, and find η^φG = C_L^φG/p(L) = 0.94 +0.04/-0.04, which is marginally lower than, but in good agreement with, η^φG = 1.00 +0.02/-0.01, the best-fit amplitude for the cross-correlation of Planck 2015 CMB lensing and WISE galaxies over ~67% of the sky. The lensing potential map presented here will be used for cross-correlation studies with the Dark Energy Survey, whose footprint nearly completely covers the SPT 2500 deg² field.
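The amplitude fit reported above, η = C_L^φG/p(L) with the power-law template p(L) = a(L/L₀)^(-b) held fixed, reduces to a one-parameter weighted least-squares problem. The sketch below runs on synthetic numbers; every value is made up for illustration.

```python
import numpy as np

def fit_amplitude(L, C_obs, sigma, a, L0, b):
    """Best-fit scaling eta of a fixed power-law template against a
    measured cross-spectrum, via inverse-variance weighted least squares:
    eta = sum(w * t * C) / sum(w * t**2), with t the template."""
    template = a * (L / L0) ** (-b)
    w = 1.0 / sigma ** 2
    return np.sum(w * template * C_obs) / np.sum(w * template ** 2)

# Noiseless synthetic cross-spectrum with a known amplitude of 0.94
L = np.array([100.0, 200.0, 400.0])
C = 0.94 * 2.0 * (L / 100.0) ** (-1.3)
eta = fit_amplitude(L, C, np.ones_like(L), a=2.0, L0=100.0, b=1.3)
```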
APA, Harvard, Vancouver, ISO, and other styles
14

Brookey, Carla M. "Application of Machine Learning and Statistical Learning Methods for Prediction in a Large-Scale Vegetation Map." DigitalCommons@USU, 2017. https://digitalcommons.usu.edu/etd/6962.

Full text
Abstract:
Original analyses of a large vegetation cover dataset from Roosevelt National Forest in northern Colorado were carried out by Blackard (1998) and Blackard and Dean (1998; 2000). They compared the classification accuracies of linear and quadratic discriminant analysis (LDA and QDA) with artificial neural networks (ANN) and obtained an overall classification accuracy of 70.58% for a tuned ANN compared to 58.38% for LDA and 52.76% for QDA. Because there has been tremendous development of machine learning classification methods over the last 35 years in both computer science and statistics, as well as substantial improvements in the speed of computer hardware, I applied five modern machine learning algorithms to the data to determine whether significant improvements in the classification accuracy were possible using one or more of these methods. I found that only a tuned gradient boosting machine had a higher accuracy (71.62%) than the ANN of Blackard and Dean (1998), and the difference in accuracies was only about 1%. Of the other four methods, random forests (RF), support vector machines (SVM), classification trees (CT), and AdaBoosted trees (ADA), a tuned SVM and RF had accuracies of 67.17% and 67.57%, respectively. The partition of the data by Blackard and Dean (1998) was unusual in that the training and validation datasets had equal representation of the seven vegetation classes, even though 85% of the data fell into classes 1 and 2. For the second part of my analyses I randomly selected 60% of the data for the training data and 20% for each of the validation data and test data. On this partition of the data a single classification tree achieved an accuracy of 92.63% on the test data, and RF achieved 83.98%. Unsurprisingly, most of the gains in accuracy were in classes 1 and 2, the largest classes, which also had the highest misclassification rates under the original partition of the data.
By decreasing the size of the training data but maintaining the same relative occurrences of the vegetation classes as in the full dataset, I found that even for a training dataset of the same size as that of Blackard and Dean (1998) a single classification tree was more accurate (73.80%) than the ANN of Blackard and Dean (1998) (70.58%). The final part of my thesis was to explore the possibility that combining several of the machine learning classifiers' predictions could result in higher predictive accuracies. In the analyses I carried out, the answer seems to be that increased accuracies do not occur with a simple voting of five machine learning classifiers.
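The stratified 60/20/20 repartition described above (keeping each class's relative frequency, unlike the balanced partition of the original study) can be sketched as follows; the function and parameter names are ours.

```python
import random

def stratified_split(rows, label_of, fractions=(0.6, 0.2, 0.2), seed=0):
    """Split rows into train/validation/test while preserving each
    class's relative frequency. `label_of` maps a row to its label."""
    rng = random.Random(seed)
    by_class = {}
    for r in rows:
        by_class.setdefault(label_of(r), []).append(r)
    train, val, test = [], [], []
    for members in by_class.values():
        rng.shuffle(members)
        n = len(members)
        cut1 = round(fractions[0] * n)
        cut2 = cut1 + round(fractions[1] * n)
        train.extend(members[:cut1])
        val.extend(members[cut1:cut2])
        test.extend(members[cut2:])
    return train, val, test
```

With an 85/15 class imbalance, the training set keeps the same imbalance instead of forcing equal class counts.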
APA, Harvard, Vancouver, ISO, and other styles
15

Ibáñez, Marcelo Esther. "Evolutionary dynamics of populations with genotype-phenotype map." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/284931.

Full text
Abstract:
In this thesis we develop a multi-scale model of the evolutionary dynamics of a population of cells, which accounts for the mapping between genotype and phenotype as determined by a model of the gene regulatory network. We study topological properties of genotype-phenotype networks obtained from the multi-scale model. Moreover, we study the problem of evolutionary escape and survival taking into account a genotype-phenotype map. An outstanding feature of populations with a genotype-phenotype map is that selective pressures are determined by phenotypes rather than genotypes. Our multi-scale model generates the evolution of a genotype-phenotype network represented by a pseudo-bipartite graph, which allows us to formulate a topological definition of the concepts of robustness and evolvability. We further study the problem of evolutionary escape for cell populations with a genotype-phenotype map, based on a multi-type branching process. We present a comparative analysis between genotype-phenotype networks obtained from the multi-scale model and networks constructed assuming that the genotype space is a regular hypercube, and compare the effects on the probability of escape and the escape rate associated with the evolutionary dynamics between both classes of graphs. We extend the study of evolutionary escape by analysing long-term survival conditioned on escape. Traditional approaches to the study of escape assume that the reproduction number of the escape genotype approaches infinity, and, therefore, survival is a surrogate of escape. Here, we analyse the process of survival upon escape by taking advantage of the fact that the natural setting of the escape problem endows the system with a separation of time scales: an initial, fast-decaying regime, where escape actually occurs, is followed by a much slower dynamics within the neutral network of the escape phenotype.
The probability of survival is analysed in terms of topological features of the neutral network of the escape phenotype.
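The link between escape and survival in branching processes can be made concrete with the classic fixed-point computation below. This is a generic single-type branching sketch, not the thesis's multi-type model: the extinction probability q is the smallest fixed point of the offspring generating function, and the escape/survival probability is 1 - q.

```python
def extinction_probability(offspring_pmf, tol=1e-12):
    """Smallest fixed point q = f(q) of the offspring probability
    generating function f(s) = sum_k p_k * s**k, found by iterating
    from q = 0. Survival probability of the lineage is 1 - q."""
    def f(s):
        return sum(p * s ** k for k, p in offspring_pmf.items())
    q = 0.0
    while True:
        q_next = f(q)
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
```

For a supercritical process (mean offspring > 1) the fixed point lies below 1, so a lineage has a positive chance of escaping extinction; subcritical or critical processes give q = 1.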
APA, Harvard, Vancouver, ISO, and other styles
16

Wang, Huan. "A Large-scale Dynamic Vector and Raster Data Visualization Geographic Information System Based on Parallel Map Tiling." FIU Digital Commons, 2011. http://digitalcommons.fiu.edu/etd/550.

Full text
Abstract:
With exponentially increasing demands on and uses of GIS data visualization systems, in areas such as urban planning, environment and climate change monitoring, weather simulation, and hydrographic gauging, research on and applications of geospatial vector and raster data visualization have become prevalent. However, we observe that current web GIS techniques are merely suitable for static vector and raster data with no dynamic overlay layers. While it is desirable to enable visual exploration of large-scale dynamic vector and raster geospatial data in a web environment, improving the performance between backend datasets and the vector and raster applications remains a challenging technical issue. This dissertation addresses these challenging and unimplemented areas: how to provide a large-scale dynamic vector and raster data visualization service with dynamic overlay layers, accessible from various client devices through a standard web browser, and how to make the large-scale dynamic vector and raster data visualization service as rapid as the static one. To accomplish this, a large-scale dynamic vector and raster data visualization geographic information system based on parallel map tiling, together with a comprehensive performance improvement solution, is proposed, designed and implemented. The solution includes: quadtree-based indexing and parallel map tiling; the Legend String; vector data visualization with dynamic layer overlaying; vector data time-series visualization; an algorithm for vector data rendering; an algorithm for raster data re-projection; an algorithm for eliminating superfluous levels of detail; an algorithm for vector data gridding and re-grouping; and cluster-server-side caching of vector and raster data.
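Quadtree-based tile indexing of the kind listed above is commonly implemented with quadkeys, where each zoom level contributes one base-4 digit, so a tile's key is prefixed by all of its ancestors' keys. The sketch follows the widely used Bing Maps tile-system convention and is not necessarily the dissertation's exact scheme.

```python
def tile_to_quadkey(level, tx, ty):
    """Encode a map tile (zoom level, x, y) as a quadtree key.

    At each level the relevant bit of tx contributes 1 and the bit of
    ty contributes 2, giving a base-4 digit 0..3 per level."""
    digits = []
    for i in range(level, 0, -1):
        mask = 1 << (i - 1)
        d = 0
        if tx & mask:
            d += 1
        if ty & mask:
            d += 2
        digits.append(str(d))
    return "".join(digits)
```

The prefix property is what makes the index useful for parallel tiling: all descendants of a tile share its key as a prefix, so a subtree of tiles can be assigned to one worker by key range.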
APA, Harvard, Vancouver, ISO, and other styles
17

Thorsteinsson, Russell. "WATER CONTAMINATION RISK DURING URBAN FLOODS : Using GIS to map and analyze risk at a local scale." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-18183.

Full text
Abstract:
Water contamination during urban flood events can have a negative impact on human health and the environment. Prior flood studies lack investigation into how GIS can map and analyze this at a large-scale (cadastral) level. This thesis focused on how GIS can help map and analyze water contamination risk in urban areas using LiDAR elevation data at a large-scale (cadastral) level, with symbology and flood classification intervals specifically selected for contamination risk. This was done by first completing a literature review of past research and studies of similar scope. Based on the findings, a method to map and analyze water contamination risk during sea-based flood scenarios was tested in the Näringen district of Gävle, Sweden. This study area was investigated and flood contamination risk maps were produced for two different flood scenarios, illustrating which properties are vulnerable to flooding and at what depth, what their contamination risk is, and whether they are hydrologically connected to the ocean. The findings from this investigation are that this method of examining water contamination risk could be useful to planning officials who are in charge of policies relating to land use. These findings could help guide land-use or hazardous material storage regulations or restrictions. To further research on this topic, it is recommended that similar studies be performed using a more detailed land-use map with information on what type and quantity of possible contaminants are stored on individual properties. Furthermore, flood modeling should be employed in place of the flood mapping conducted in this thesis.
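The hydrological-connectivity test described above, where a cell floods only if it lies below the scenario sea level and is connected to the ocean through other below-sea-level cells, can be sketched as a flood fill over a DEM grid. The function name and the 4-neighbour rule are illustrative assumptions, not the thesis's exact GIS workflow.

```python
from collections import deque

def flood_depths(dem, sea_level, ocean_seeds):
    """Return a grid of water depths for a sea-based flood scenario.

    dem: 2-D list of ground elevations; ocean_seeds: (row, col) cells
    where the ocean touches the grid. A cell floods only if it is below
    sea_level AND reachable from a seed via below-sea-level cells
    (4-neighbour flood fill). Dry or unconnected cells get 0.0."""
    rows, cols = len(dem), len(dem[0])
    depth = [[0.0] * cols for _ in range(rows)]
    seen = set()
    queue = deque(ocean_seeds)
    while queue:
        r, c = queue.popleft()
        if (r, c) in seen or not (0 <= r < rows and 0 <= c < cols):
            continue
        seen.add((r, c))
        if dem[r][c] < sea_level:
            depth[r][c] = sea_level - dem[r][c]
            queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return depth
```

Note how a low-lying cell behind a higher barrier stays dry: that is exactly the "hydrologically connected" distinction the thesis maps per property.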
APA, Harvard, Vancouver, ISO, and other styles
18

Davis, Maia C. "Spatial and Geochemical Characterization of an Anomalous, Map-Scale Dolomite Breccia in the Monterey Formation, Santa Maria Basin, California." Thesis, California State University, Long Beach, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10750906.

Full text
Abstract:

An approximately eighteen-square-kilometer dolomite breccia, mapped by Dibblee and Ehrenspeck in 1988, outcrops at or near the base of the Monterey Formation along the southern margin of the SMB. Although not recognized as such by the original mappers, it marks the location of an extensive detachment surface along which large amounts of fluid flowed, dolomitizing and cementing an undulating fault zone. This surface is key to allowing excess folding of Monterey strata relative to older strata.

The dolomite breccia exposed in the old Grefco Quarry road cut is analyzed in detail using outcrop description, macro- and micro-scale rock fabric description, thin section petrography, X-ray diffraction data, carbon and oxygen isotopes, and trace element geochemistry. Deformation, mineralogy, and isotope signatures are consistent with hydrothermal dolomite (HTD) emplacement from evolved, Monterey-sourced connate fluids that ranged in temperature from 36.6 to 99.5 °C. Clasts of dolomite, Monterey siliceous rocks, and sandstone from underlying formations are locally supported by >35% micritic dolomite and microcrystalline quartz cement in a dilation breccia. A minimum of 128,000-231,000 cm³ of fluid per cm³ of breccia volume was required to deposit the dolomite cements.

APA, Harvard, Vancouver, ISO, and other styles
19

Lenz, Michael [Verfasser], Andreas [Akademischer Betreuer] Schuppert, and Martin [Akademischer Betreuer] Zenke. "A two-scale map of global gene expression for characterising in vitro engineered cells / Michael Lenz ; Andreas Schuppert, Martin Zenke." Aachen : Universitätsbibliothek der RWTH Aachen, 2015. http://d-nb.info/1127143689/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Cebecauer, Matej. "Short-Term Traffic Prediction in Large-Scale Urban Networks." Licentiate thesis, KTH, Transportplanering, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-250650.

Full text
Abstract:
City-wide travel time prediction in real-time is an important enabler for efficient use of the road network. It can be used in traveler information to enable more efficient routing of individual vehicles as well as decision support for traffic management applications such as directed information campaigns or incident management. 3D speed maps have been shown to be a promising methodology for revealing day-to-day regularities of city-level travel times and possibly also for short-term prediction. In this paper, we aim to further evaluate and benchmark the use of 3D speed maps for short-term travel time prediction and to enable scenario-based evaluation of traffic management actions we also evaluate the framework for traffic flow prediction. The 3D speed map methodology is adapted to short-term prediction and benchmarked against historical mean as well as against Probabilistic Principal Component Analysis (PPCA). The benchmarking and analysis are made using one year of travel time and traffic flow data for the city of Stockholm, Sweden. The result of the case study shows very promising results of the 3D speed map methodology for short-term prediction of both travel times and traffic flows. The modified version of the 3D speed map prediction outperforms the historical mean prediction as well as the PPCA method. Further work includes an extended evaluation of the method for different conditions in terms of underlying sensor infrastructure, preprocessing and spatio-temporal aggregation as well as benchmarking against other prediction methods.
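The historical-mean baseline that the 3D speed map method is benchmarked against above can be sketched as follows. The slot structure (weekday, time-of-day, link) is our assumption for illustration, not the thesis's exact setup.

```python
from collections import defaultdict

def historical_mean_predictor(history):
    """Build a baseline predictor: a link's travel time is forecast as
    the mean of past observations in the same (weekday, slot, link)
    bin. `history` holds (weekday, slot, link, travel_time) records."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for weekday, slot, link, tt in history:
        key = (weekday, slot, link)
        sums[key] += tt
        counts[key] += 1

    def predict(weekday, slot, link):
        key = (weekday, slot, link)
        return sums[key] / counts[key] if counts[key] else None

    return predict
```

Methods such as the modified 3D speed maps or PPCA improve on this baseline by exploiting day-to-day regularities shared across the whole network rather than treating each bin independently.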


APA, Harvard, Vancouver, ISO, and other styles
21

Hopfstock, Anja [Verfasser], Manfred F. [Akademischer Betreuer] Buchroithner, Dietmar [Akademischer Betreuer] Grünreich, and Frank [Akademischer Betreuer] Dickmann. "A User-Oriented Map Design in the SDI Environment : Using the Example of a European Reference Map at Medium Scale / Anja Hopfstock. Gutachter: Manfred F. Buchroithner ; Frank Dickmann. Betreuer: Manfred F. Buchroithner ; Dietmar Grünreich." Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2011. http://d-nb.info/1067729011/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Mustafa, Atif. "Nutrient removal with integrated constructed wetlands : microbial ecology and treatment performance evaluation of full-scale integrated constructed wetlands." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/4111.

Full text
Abstract:
Wastewaters from intensive agricultural activities contain high concentrations of nitrogen and phosphorus that contribute to water management problems. During the past few years, there has been considerable interest in the use of constructed wetlands for treating surface water runoff from farmyards. If the contaminated runoff is not treated, this wastewater, along with other non-point sources of pollution, can seriously contaminate surface water and groundwater. Integrated Constructed Wetlands (ICWs) are a type of free-water-surface wetland. They are engineered systems that have been designed, constructed and operated successfully for treating farmyard runoff in the British Isles. However, the long-term treatment performance of these systems, the processes involved in contaminant removal and their impact on associated water bodies are not well known. The aims of this project were to assess the performance of full-scale integrated constructed wetlands and to understand nutrient removal in them. Performance evaluation of these systems, through physical, chemical and microbiological parameters collected for more than 7 years, showed good removal efficiencies compared with the international literature. The monitored nutrient concentrations in groundwater and surface waters indicate that ICW systems did not pollute the receiving waters. The role of plants (Typha latifolia) and sediment in removing nutrients was also assessed. More nitrogen and phosphorus were stored in wetland soils and sediments than in plants. The results demonstrate that the soil component of a mature wetland system is an important and sustainable nutrient storage compartment. A novel molecular toolbox was used to characterise and compare the microbial diversity responsible for nitrogen removal in the sediment and litter components of ICW systems. Diverse populations of nitrogen-removing bacteria were detected. The litter component of the wetland systems supported more diverse nitrogen-removing bacteria than the sediments.
Nitrogen-removing bacteria in the wetland systems appeared to be stochastically assembled from the same source community. The self-organising map model was applied as a prediction tool for the performance of ICWs and to investigate an alternative method of analysing water quality performance indicators. The model performed very well in predicting nutrients and biochemical oxygen demand from easy-to-measure and cost-effective water quality parameters. The results indicate that the model is an appropriate approach to monitoring wastewater treatment processes and can be used to support management of ICWs in real time.
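A self-organising map of the kind applied above learns a low-dimensional grid of prototype vectors from multivariate water-quality samples. The minimal 1-D SOM below is a generic sketch under our own assumptions (unit count, learning-rate and neighbourhood schedules); the thesis's actual SOM configuration is not given in the abstract.

```python
import math
import random

def train_som(data, n_units=4, epochs=50, lr=0.5, seed=1):
    """Train a tiny 1-D self-organising map.

    Each step finds the best-matching unit (BMU) for a sample and pulls
    the BMU, and early in training its grid neighbours, toward the
    sample. Learning rate and neighbourhood radius shrink over time."""
    rng = random.Random(seed)
    dim = len(data[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        radius = max(1.0 * (1 - epoch / epochs), 0.01)
        alpha = lr * (1 - epoch / epochs) + 0.01
        for x in data:
            bmu = min(range(n_units),
                      key=lambda u: sum((units[u][d] - x[d]) ** 2
                                        for d in range(dim)))
            for u in range(n_units):
                h = math.exp(-((u - bmu) ** 2) / (2 * radius ** 2))
                for d in range(dim):
                    units[u][d] += alpha * h * (x[d] - units[u][d])
    return units
```

Once trained, mapping a new sample to its BMU gives the cluster/prediction lookup used in SOM-based performance modelling.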
APA, Harvard, Vancouver, ISO, and other styles
23

Zubík, Tomáš. "Vyhotovení mapových podkladů areálu Metra v Blansku - jižní část." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2019. http://www.nusl.cz/ntk/nusl-400183.

Full text
Abstract:
The diploma thesis deals with the planimetric and altimetric survey of the southern part of the Metra company site in Blansko. The thesis contains a detailed description of the scheduling, the survey work, the calculations, the graphic processing in the GEOSTORE V6® program, and the linking of descriptive information. The result is a printed map at a scale of 1:250 in the S-JTSK coordinate system and the Bpv height system.
APA, Harvard, Vancouver, ISO, and other styles
24

Peck, Riley D. "Seasonal Habitat Selection by Greater Sage Grouse in Strawberry Valley Utah." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/3180.

Full text
Abstract:
This study examined winter habitat use and nesting ecology of greater sage grouse (Centrocercus urophasianus) in Strawberry Valley (SV), Utah located in the north-central part of the state. We monitored sage grouse with the aid of radio telemetry throughout the year, but specifically used information from the winter and nesting periods for this study. Our study provided evidence that sage grouse show fidelity to nesting areas in subsequent years regardless of nest success. We found only 57% of our nests located within the 3 km distance from an active lek typically used to delineate critical nesting habitat. We suggest a more conservative distance of 10 km for our study area. Whenever possible, we urge consideration of nest-area fidelity in conservation planning across the range of greater sage grouse. We also evaluated winter-habitat selection at multiple spatial scales. Sage grouse in our study area selected gradual slopes with high amounts of sagebrush exposed above the snow. We produced a map that identified suitable winter habitat for sage grouse in our study area. This map highlighted core areas that should be conserved and will provide a basis for management decisions affecting Strawberry Valley, Utah.
APA, Harvard, Vancouver, ISO, and other styles
25

Marques, Ana Paula da Silva. "Generalização cartográfica para um Sistema de Navegação e Guia de Rota em Automóvel áudio-dinâmico com múltiplas escalas /." Presidente Prudente : [s.n.], 2011. http://hdl.handle.net/11449/86769.

Full text
Abstract:
Abstract: The aim of this research is to design and implement an automatic multi-scale, audio-dynamic map for an in-car Route Guidance and Navigation System (RGNS). The design was organized in two stages: general composition and auditory-graphic design. The visual-dynamic maps were designed based on cartographic communication principles and visual perception, especially the generalization operators. The study area presents an urban network with different types of roads, nodes, and speed limits. The maps were designed for a small-screen display, and a total of four different scales were employed: 1:10,000, 1:5,000, 1:2,500 and 1:1,000. These scales were chosen according to the media size and the type of tactical task. The maps were derived from an accurate cartographic database at a scale of 1:1,000 by applying generalization techniques such as simplification, displacement, and enhancement. The audio-dynamic representations were produced by taking into account a set of audio-dynamic variables. The voice messages were recorded in a female voice and presented simultaneously with the visual information. The design was implemented in a navigation system available at the Faculty of Sciences and Technology, using the Visual Basic compiler and the MapObjects library. The results of the comparison between the automatic multiple-scale and single-scale systems show that the new system, adapted to the driver's context, can allow the user to receive information according to the tasks performed along the route. The use of generalization techniques made it possible to present an appropriate amount of information on the display, which can contribute to reducing navigational errors and visual demand when compared with a single-scale map ... (Complete abstract: click electronic access below)
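The context-dependent choice among the four design scales above can be sketched as a simple lookup driven by the distance to the next manoeuvre. The distance thresholds below are entirely hypothetical; the abstract does not state the actual switching rule.

```python
# Hypothetical switching rule over the four design scales
# (1:10,000, 1:5,000, 1:2,500, 1:1,000): closer to a turn means a
# larger scale, i.e. more cartographic detail.
SCALES = [(500.0, 1000), (1000.0, 2500), (2000.0, 5000),
          (float("inf"), 10000)]

def scale_for_distance(metres_to_next_turn):
    """Return the scale denominator to display for the given distance
    to the next manoeuvre along the route."""
    for threshold, denominator in SCALES:
        if metres_to_next_turn < threshold:
            return denominator
    return SCALES[-1][1]
```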
Advisor: Mônica Modesta Santos Decanini
Co-advisor: Edmur Azevedo Pugliesi
Committee member: Claudia Robbi Sluter
Committee member: Milton Hirokazu Shimabukuro
Master's degree
APA, Harvard, Vancouver, ISO, and other styles
26

Van, Gaalen Joseph Frank. "Alternative Statistical Methods for Analyzing Geological Phenomena: Bridging the Gap Between Scientific Disciplines." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3424.

Full text
Abstract:
When we consider the nature of the scientific community in conjunction with typical economic circumstances, we find that there are two distinct paths for development. One path involves hypothesis testing and the evolution of strategies linked with iterations in equipment advances. A second, more complicated scenario can involve external influences, whether economic, political, or otherwise, such as the government closure of NASA's space program in 2011, which will no doubt influence research in associated fields. The following chapters are an account of two statistical techniques and the importance of both to the two relatively unrelated geological fields of coastal geomorphology and groundwater hydrology. The first technique applies a multi-dimensional approach to defining groundwater table response based on precipitation in areas where it can reasonably be assumed to be the only recharge. The second technique applies a high-resolution multi-scalar approach to a geologic setting most often restricted to either high resolution locally or low resolution regionally. This technique uses time-frequency analysis to characterize cuspate patterns in LIDAR data, and is introduced using examples from the Atlantic coast of Florida, United States. These techniques permit the efficient study of beachface landforms over many kilometers of coastline at multiple spatial scales. From a LIDAR image, a beach-parallel spatial series is generated. Here, this series is the shore-normal position of a specific elevation (contour line). Well-established time-frequency analysis techniques, wavelet transforms and S-transforms, are then applied to the spatial series. These methods yield results compatible with traditional methods and show that the approach is useful for capturing transitions in cuspate shapes.
To apply this new method, a land-based LIDAR study allowing rapid high-resolution surveying is conducted at Melbourne Beach, Florida and Tairua Beach, New Zealand. Two different terrestrial scanning stations are compared and tested during the course of the field investigation. Significant cusp activity is observed at Melbourne Beach. Morphological observations and sediment analysis are used to study beach cusp morphodynamics at the site. Surveys at Melbourne were run ~500 m alongshore and sediment samples were collected intertidally over a five-day period. Beach cusp location within larger-scale beach morphology is shown to directly influence cusp growth as either predominantly erosional or accretional. Sediment characteristics within the beach cusp morphology are reported coincident with cusp evolution. Variations in particle size distribution kurtosis are exhibited as the cusps evolve; however, no significant correlation is seen between grain size and position between horn and embayment. At the end of the study, a storm resulted in beach cusp destruction and increased sediment sorting. For the former technique, using multi-dimensional studies, a test of a new method for improving forecasting of surficial aquifer system water level changes with rainfall is conducted. The results provide a more rigorous analysis of common predictive techniques and compare them with the results of the tested model. These results show that linear interpretations of response-to-rainfall data require a clarification of how large events distort prediction and how the binning of data can change the interpretation. Analyses show that binning groundwater recharge data, as is typically done in daily format, may be useful for quick interpretation but only describes how fast the system responds to an event, not the frequency of return of such a response.
Without a secure grasp on the nonlinear nature of water table and rainfall data alike, any binning or isolation of specific data carries the potential for aliasing that must be accounted for in an interpretation. The new model is proven capable of supplanting any current linear regression analysis as a more accurate means of prediction through the application of a multivariate technique. Furthermore, results show that in the Florida surficial aquifer system response-to-rainfall ratios exhibit a maxima most often linked with modal stage.
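The beach-parallel spatial series described above, the shore-normal position of a chosen elevation contour along each alongshore row, can be extracted from a gridded elevation surface as follows. The grid orientation (rows alongshore, columns shore-normal, elevation rising landward) and the linear interpolation rule are our assumptions for illustration.

```python
def contour_position_series(elevation_grid, contour):
    """For each alongshore row of an elevation grid, return the
    cross-shore position where the surface first rises through
    `contour`, with linear interpolation between cells; None if the
    contour is never crossed in that row. The resulting 1-D series is
    what wavelet or S-transform analysis is applied to."""
    series = []
    for row in elevation_grid:
        pos = None
        for i in range(len(row) - 1):
            lo, hi = row[i], row[i + 1]
            if lo <= contour < hi:   # upward crossing of the contour
                pos = i + (contour - lo) / (hi - lo)
                break
        series.append(pos)
    return series
```

Cuspate morphology then shows up as quasi-periodic oscillations of this series, whose alongshore wavelength and transitions the time-frequency transforms localise.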
APA, Harvard, Vancouver, ISO, and other styles
27

Nguyen, Anthony Ngoc. "Importance Prioritised Image Coding in JPEG 2000." Queensland University of Technology, 2005. http://eprints.qut.edu.au/16005/.

Full text
Abstract:
Importance prioritised coding is a principle aimed at improving the interpretability (or image content recognition) versus bit-rate performance of image coding systems. This can be achieved by (1) detecting and tracking image content or regions of interest (ROIs) that are crucial to the interpretation of an image, and (2) compressing them in such a manner that enables ROIs to be encoded with higher fidelity and prioritised for dissemination or transmission. Traditional image coding systems prioritise image data according to an objective measure of distortion, and this measure does not correlate well with image quality or interpretability. Importance prioritised coding, on the other hand, aims to prioritise image contents according to an 'importance map', which provides a means for modelling and quantifying the relative importance of parts of an image. In such a coding scheme the importance of parts of an image containing ROIs would be higher than that of other parts. The encoding and prioritisation of ROIs means that the interpretability in these regions would be improved at low bit-rates. An importance prioritised image coder incorporated within the JPEG 2000 international standard for image coding, called IMP-J2K, is proposed to encode and prioritise ROIs according to an 'importance map'. The map can be automatically generated using image processing algorithms that result in a limited number of ROIs, or manually constructed by hand-marking ROIs using a priori knowledge. The proposed importance prioritised coder provides the user of the encoder with great flexibility in defining single or multiple ROIs with arbitrary degrees of importance and prioritising them using IMP-J2K. Furthermore, IMP-J2K codestreams can be reconstructed by generic JPEG 2000 decoders, which is important for interoperability between imaging systems and processes.
The interpretability performance of IMP-J2K was quantitatively assessed using the subjective National Imagery Interpretability Rating Scale (NIIRS). The effect of importance prioritisation on image interpretability was investigated, and a methodology to relate the NIIRS ratings, ROI importance scores and bit-rates was proposed to facilitate NIIRS specifications for importance prioritised coding. In addition, a technique is proposed to construct an importance map by allowing a user of the encoder to use gaze patterns to automatically determine and assign importance to fixated regions (or ROIs) in an image. The importance map can be used by IMP-J2K to bias the encoding of the image to these ROIs, and subsequently to allow a user at the receiver to reconstruct the image as desired by the user of the encoder. Ultimately, with the advancement of automated importance mapping techniques that can reliably predict regions of visual attention, IMP-J2K may play a significant role in matching an image coding scheme to the human visual system.
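The prioritisation principle described in this abstract can be sketched in a few lines: rank image blocks by the mean value of an importance map and schedule the most important blocks first. The sketch below is illustrative only; the function and parameter names are assumptions, not part of IMP-J2K or the JPEG 2000 standard.

```python
import numpy as np

def prioritise_blocks(importance_map, block=8):
    """Return block coordinates sorted by descending mean importance."""
    h, w = importance_map.shape
    scores = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = importance_map[y:y + block, x:x + block]
            scores.append(((y, x), float(tile.mean())))
    # Most important blocks first: these would be encoded at higher fidelity
    # and placed earlier in the codestream.
    scores.sort(key=lambda s: s[1], reverse=True)
    return [pos for pos, _ in scores]

# Toy importance map: a bright ROI in the top-left corner.
imp = np.zeros((16, 16))
imp[:8, :8] = 1.0
order = prioritise_blocks(imp)
print(order[0])  # the ROI block is scheduled first
```

In an actual JPEG 2000 codestream the ordering would drive quality-layer assignment rather than a simple list, but the ranking step is the same idea.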
APA, Harvard, Vancouver, ISO, and other styles
28

Dumont, Marion. "Généralisation de représentations intermédiaires dans une carte topographique multi-échelle pour faciliter la navigation de l'utilisateur." Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1076/document.

Full text
Abstract:
Une carte multi-échelle est un ensemble de cartes à différentes échelles, dans lequel l’utilisateur peut naviguer via un géoportail. Chacune de ces cartes est préalablement construite par généralisation cartographique, processus qui adapte la représentation cartographique à une échelle donnée. Les changements de représentations qu’implique la généralisation entre deux cartes à différentes échelles sont susceptibles de perturber l’utilisateur, rendant sa navigation plus difficile. Nous proposons dans cette thèse d’ajouter des représentations intermédiaires dans une carte multi-échelle existante, pour créer une évolution plus fluide du contenu cartographique au fil des échelles. Alors que de solides connaissances théoriques existent pour la conception cartographique traditionnelle, on ne sait pas encore comment concevoir une carte multi-échelle efficace. Pour formaliser des connaissances à ce sujet, nous avons étudié un panel de seize cartes multi-échelles existantes. Nous avons analysé les systèmes de zoom utilisés ainsi que l’évolution des représentations cartographiques au fil des échelles, en particulier les changements de niveaux d’abstraction pour les objets bâtis et routiers. Nous avons aussi évalué la variation de complexité visuelle du contenu cartographique au fil des échelles, en utilisant des mesures de clutter visuel. Nous avons ainsi identifié les tendances générales en termes de représentations multi-échelles (comme l’application du standard WMTS), certains facteurs que nous considérons comme ayant une influence négative sur la navigation de l’utilisateur (comme l’utilisation d’une même carte à différentes échelles), ainsi que des pratiques intéressantes visant à la faciliter (comme les représentations mixtes). A partir de ces constats, nous avons formulé des hypothèses sur l’influence des variables de construction des représentations intermédiaires sur la fluidité de navigation. 
Nous avons construit un matériel de test à partir d’un extrait de la carte multi-échelle Scan Express de l’IGN, entre les cartes existant au 1 : 25k et au 1 : 100k. Nous avons ainsi produit quatre versions différentes de représentations intermédiaires entre ces deux cartes, implémentant nos différentes hypothèses. Cet exercice nous a permis de mieux cerner les verrous techniques que soulève la production de représentations intermédiaires. Nous avons enfin conduit un test utilisateurs contrôlé, en demandant à 15 participants de réaliser une tâche cartographique sur ces différentes cartes multi-échelles, pour évaluer la pertinence de nos hypothèses
A multi-scale map is a set of maps at different scales, displayed in mapping applications, in which users may navigate by zooming in or out. Each of these maps is produced beforehand by cartographic generalization, which adapts the cartographic representation to a target scale. The representation changes that generalization introduces between maps at different scales may disturb users during their navigation. We assume that adding intermediate representations to an existing multi-scale map may enable a smooth evolution of cartographic content across scales. While solid theoretical knowledge exists for traditional cartography, we still do not know how to design efficient multi-scale maps. To formalize knowledge on this subject, we studied sixteen existing multi-scale maps. We focused on the zoom systems used (zoom levels and display scales) and on the evolution of cartographic representations across scales, in particular for building and road entities. We also analyzed the variation of the visual complexity of the map content across scales, using visual clutter measures. We thus identified general trends in multi-scale representation (e.g. use of the WMTS standard), some potentially disturbing factors (e.g. use of the same map at several scales), but also good practices which may ease user navigation (e.g. mixed representations). Based on these findings, we made hypotheses about the influence of intermediate representation design on user navigation. We built test material from an extract of the Scan Express multi-scale map of the French IGN, between the existing maps at 1:25k and 1:100k scales. We thus produced four different versions of intermediate representations between these two maps, implementing our different hypotheses. In doing so, we highlighted the technical issues raised by the production of intermediate representations. 
Finally, we conducted a controlled user study, asking 15 participants to perform a cartographic task on these different multi-scale maps, to evaluate our hypotheses
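The "visual clutter measures" mentioned in this abstract can take many forms; one simple proxy is edge density, sketched below on a toy grayscale array. This is a hypothetical illustration, not the measure actually used in the thesis.

```python
import numpy as np

def edge_density(img, thresh=0.1):
    """Fraction of pixels whose local gradient magnitude exceeds a threshold."""
    gy, gx = np.gradient(img.astype(float))  # gradients along rows and columns
    mag = np.hypot(gx, gy)
    return float((mag > thresh).mean())

flat = np.zeros((32, 32))                         # empty map: no clutter
busy = np.random.default_rng(1).random((32, 32))  # dense random detail
print(edge_density(flat), "<", edge_density(busy))
```

A clutter score of this kind, computed per zoom level, is what lets one plot how visual complexity evolves across the scales of a multi-scale map.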
APA, Harvard, Vancouver, ISO, and other styles
29

Žůrek, Ondřej. "Hodnocení výkonnosti společnosti s využitím Balanced Scorecard." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2020. http://www.nusl.cz/ntk/nusl-416777.

Full text
Abstract:
The diploma thesis focuses on the evaluation of the performance of a business company using Balanced Scorecard. The theoretical part of the thesis defines important theoretical knowledge taken from scholarly literature, which forms the necessary basis for the following parts. The analytical part is used to introduce the business company and perform strategic analysis. The obtained values and knowledge are then applied in the design part of the work. The design part focuses on the implementation of the Balanced Scorecard model in a business company and the design of other recommendations that enable performance increase and competitive market position improvement.
APA, Harvard, Vancouver, ISO, and other styles
30

Al-Mohandes, Ibrahim. "Energy-Efficient Turbo Decoder for 3G Wireless Terminals." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/838.

Full text
Abstract:
Since its introduction in 1993, the turbo coding error-correction technique has generated a tremendous interest due to its near Shannon-limit performance. Two key innovations of turbo codes are parallel concatenated encoding and iterative decoding. In its IMT-2000 initiative, the International Telecommunication Union (ITU) adopted turbo coding as a channel coding standard for Third-Generation (3G) wireless high-speed (up to 2 Mbps) data services (cdma2000 in North America and W-CDMA in Japan and Europe). For battery-powered hand-held wireless terminals, energy consumption is a major concern. In this thesis, a new design for an energy-efficient turbo decoder that is suitable for 3G wireless high-speed data terminals is proposed. The Log-MAP decoding algorithm is selected for implementation of the constituent Soft-Input/Soft-Output (SISO) decoder; the algorithm is approximated by a fixed-point representation that achieves the best performance/complexity tradeoff. To attain energy reduction, a two-stage design approach is adopted. First, a novel dynamic-iterative technique that is appropriate for both good and poor channel conditions is proposed, and then applied to reduce energy consumption of the turbo decoder. Second, a combination of architectural-level techniques is applied to obtain further energy reduction; these techniques also enhance throughput of the turbo decoder and are area-efficient. The turbo decoder design is coded in the VHDL hardware description language, and then synthesized and mapped to a 0.18 μm CMOS technology using the standard-cell approach. The designed turbo decoder has a maximum data rate of 5 Mb/s (at an upper limit of five iterations) and is 3G-compatible. Results show that the adopted two-stage design approach reduces energy consumption of the turbo decoder by about 65%. 
A prototype for the new turbo codec (encoder/decoder) system is implemented on a Xilinx XC2V6000 FPGA chip; then the FPGA is tested using the CMC Rapid Prototyping Platform (RPP). The test proves correct functionality of the turbo codec implementation, and hence feasibility of the proposed turbo decoder design.
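The Log-MAP algorithm named in this abstract works in the log domain, where the key operation is the Jacobian logarithm max*(a, b) = max(a, b) + ln(1 + e^-|a-b|); fixed-point decoders typically approximate the correction term with a small lookup table. A minimal sketch of the exact operation next to the cheaper Max-Log-MAP simplification (all names here are illustrative):

```python
import math

def max_star(a, b):
    """Exact Jacobian logarithm used by Log-MAP: ln(e^a + e^b)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP approximation: the correction term is omitted."""
    return max(a, b)

exact = max_star(1.0, 0.5)
approx = max_log(1.0, 0.5)
print(exact - approx)  # the correction term, here ln(1 + e^-0.5) ≈ 0.474
```

Dropping the correction term is what trades a small decoding-performance loss for a simpler, lower-energy datapath.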
APA, Harvard, Vancouver, ISO, and other styles
31

Marques, Ana Paula da Silva [UNESP]. "Generalização cartográfica para um Sistema de Navegação e Guia de Rota em Automóvel áudio-dinâmico com múltiplas escalas." Universidade Estadual Paulista (UNESP), 2011. http://hdl.handle.net/11449/86769.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
O objetivo desta pesquisa consiste na elaboração de mapas áudio-dinâmicos em múltiplas escalas automáticas, para um Sistema de Navegação e Guia de Rota em Automóvel (SINGRA). O projeto das representações cartográficas foi dividido em duas fases: projeto de composição geral e projeto áudio-gráfico. Os mapas visuais dinâmicos foram elaborados com base nos princípios da comunicação cartográfica e da percepção visual, com ênfase nas operações de generalização. A área de estudo apresenta uma malha urbana com diferentes tipos de vias, cruzamentos e limites de velocidade. Os mapas foram projetados para serem exibidos em um monitor de pequeno formato (sete polegadas), com alta resolução, e um total de quatro escalas de representação foi determinado: 1/10.000, 1/5.000, 1/2.500 e 1/1.000. Tais escalas foram definidas em função do tamanho da mídia de apresentação e do tipo de tarefa tática. Os mapas generalizados foram obtidos pela aplicação das operações de simplificação, exagero e deslocamento, sobre uma base cartográfica na escala 1/1.000. As representações áudio-dinâmicas foram produzidas a partir de variáveis áudio-dinâmicas. As mensagens de voz foram pré-gravadas na voz feminina, executadas em sincronia com as informações visuais. O projeto foi implementado em um SINGRA disponível na FCT-UNESP, a partir do compilador Visual Basic e da biblioteca MapObjects. Ao comparar o sistema de múltiplas escalas com o de escala única, observa-se que os novos mapas adaptados ao contexto de direção do motorista, podem permitir que o usuário receba a informação de acordo com a tarefa de navegação desenvolvida ao longo da rota...
The aim of this research is to design and implement an automatic multi-scale, audio-dynamic map for an In-Car Route Guidance and Navigation System (RGNS). The design was organized in two stages: general composition and auditory-graphic design. The visual-dynamic maps were designed based on cartographic communication principles and visual perception, with emphasis on generalization operators. The area of study presents an urban network with different types of roads, nodes, and speed limits. The maps were designed for a small-screen display, and a total of four different scales were employed: 1:10,000, 1:5,000, 1:2,500 and 1:1,000. These scales were chosen according to the media size and the type of tactical task. The maps were derived from an accurate cartographic database at a scale of 1:1,000 by applying generalization techniques such as simplification, displacement, and enhancement. The audio-dynamic representations were produced by taking into account a set of audio-dynamic variables. The voice messages were recorded in a female voice and presented in synchrony with the visual information. The design was implemented in a navigation system available in the Faculty of Sciences and Technology, using the Visual Basic compiler and the MapObjects library. The results of the comparison between the automatic multiple-scale and the single-scale system show that the new system, adapted to the driver's context, allows the user to receive information according to the navigation tasks performed along the route. The generalization techniques made it possible to present an appropriate amount of information on the display, which can contribute to reducing navigational errors and visual demand compared with a single-scale map ... (Complete abstract: click electronic access below)
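Of the generalization operations cited (simplification, displacement, enhancement), line simplification is the easiest to sketch. The classic Ramer-Douglas-Peucker operator below is a generic illustration of the technique, not the thesis's actual implementation; the coordinates and tolerance are made up.

```python
import math

def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def douglas_peucker(pts, eps):
    """Drop points closer than eps to the anchor-floater line, recursively."""
    if len(pts) < 3:
        return pts
    i, dmax = max(((k, perp_dist(pts[k], pts[0], pts[-1]))
                   for k in range(1, len(pts) - 1)), key=lambda t: t[1])
    if dmax > eps:
        left = douglas_peucker(pts[:i + 1], eps)
        right = douglas_peucker(pts[i:], eps)
        return left[:-1] + right
    return [pts[0], pts[-1]]

line = [(0, 0), (1, 0.05), (2, -0.04), (3, 1.0), (4, 0)]
simplified = douglas_peucker(line, 0.1)
print(simplified)  # small wiggles removed, the large detour at (3, 1.0) kept
```

Raising `eps` coarsens the line further, which is what a smaller display scale calls for.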
APA, Harvard, Vancouver, ISO, and other styles
32

Berthias, Francis. "Thermalisation dans une nanogoutte d’eau." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1164/document.

Full text
Abstract:
L'évaporation d'une molécule d'eau se traduit par la rupture d'une ou plusieurs liaisons hydrogène. Ces liaisons hydrogène sont à l'origine de nombreuses propriétés remarquables de l'eau. A l'échelle macroscopique, l'eau est connue pour son efficacité à thermaliser un système, tandis qu'au niveau microscopique, un transfert ultrarapide d'énergie vibrationnelle par l'intermédiaire des liaisons hydrogène a été observé. Qu'en est-il à l'échelle d'une nanogoutte lorsque qu'un nombre limité de molécules entre en jeu? Dans l'expérience réalisée auprès du dispositif DIAM de l'IPN Lyon, la relaxation d'une nanogoutte d'eau protonée est observée après excitation électronique d'une de ses molécules. La mise en œuvre d'une méthode d'imagerie de vecteur vitesse associée à la technique COINTOF (COrrelated Ion and Neutral Time-Of-Flight) a permis la mesure de la distribution de vitesse de molécules évaporées d'agrégats d'eau protonés, préalablement sélectionnés en masse et en énergie. La forme des distributions de vitesse mesurées montre que, même pour des nanogouttes composées de quelques molécules d'eau, l'énergie est redistribuée dans la goutte avant évaporation. Pour des nanogouttes contenant moins d'une dizaine de molécules d'eau, les distributions de vitesse mesurées sont proches de celles attendues pour des gouttes macroscopiques. La redistribution statistique de l'énergie apparaît comme un processus de relaxation dominant. Cependant, la mesure de la distribution des vitesses met aussi en évidence une contribution distincte à haute vitesse correspondant à l'éjection d'une molécule avant redistribution complète de l'énergie. Les distributions de vitesse mesurées pour des nanogouttes d'eau lourdes deutérées mettent en évidence une proportion d'événements non ergodiques plus importante que pour l'eau normale. 
Les mesures réalisées avec différents atomes cibles montrent que la proportion d'événements non ergodique diminue avec la diminution de l'énergie déposée dans la nanogoutte
The evaporation of a water molecule results in the rupture of one or more hydrogen bonds. These hydrogen bonds are responsible for many remarkable properties of water. At the macroscopic scale, water is well known for its ability to thermalize a system, while at the microscopic level, an ultrafast transfer of vibrational energy through hydrogen bonds has been observed. What happens at the scale of a nanodroplet, when only a limited number of molecules come into play? In the experiment carried out with the DIAM setup at IPN Lyon, the relaxation of a protonated water nanodroplet is observed after electronic excitation of one of its molecules. The implementation of a velocity vector imaging method associated with the COINTOF technique (Correlated Ion and Neutral Time-Of-Flight) allowed the measurement of the velocity distribution of molecules evaporated from protonated water clusters, previously selected in mass and energy. The shape of the measured velocity distributions shows that, even for nanodroplets composed of only a few water molecules, the energy is redistributed in the droplet before evaporation. For nanodroplets containing fewer than about ten water molecules, the measured velocity distributions are close to those expected for macroscopic droplets. The statistical redistribution of energy appears as the dominant relaxation process. However, the measurement of the velocity distribution also highlights a distinct contribution at high velocity, corresponding to the ejection of a molecule before complete redistribution of the energy. The velocity distributions measured for deuterated heavy-water nanodroplets show a larger proportion of non-ergodic events than for normal water. The measurements carried out with different target atoms show that the proportion of non-ergodic events decreases as the energy deposited in the nanodroplet decreases.
APA, Harvard, Vancouver, ISO, and other styles
33

Macedo, Francis Gomes. "O lugar do mapa no ensino e aprendizagem de Geografia: a questão de escala na formação de professores." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/8/8136/tde-26052015-114455/.

Full text
Abstract:
O presente trabalho visa discutir a questão de escala na prática de ensino de geografia; propondo o ensino de geografia pelo mapa. Num plano secundário, são analisadas falas provenientes de professores de geografia para então sugerir-lhes que o próprio mapa seja um instrumento de mediação que contribua para a modificação de suas práticas. Entre as leituras realizadas, temos na primeira parte a análise de referências sobre a questão de escala, direcionada para conceber o estudo metodológico e cognitivo do mapa, em articulação com a cartografia no ensino de geografia, a exemplo de Oliveira (1978), Simielli (1989) e Martinelli (1991), além de outros estudos contemporâneos. Estes estudos são articulados com determinadas fronteiras teóricas, a exemplo da filosofia da linguagem (Bakhtin), das contribuições sobre a psicologia da aprendizagem (Piaget/Vygotsky/Feuerstein) e, finalmente, da religação de saberes (Morin) na escola, com a intenção de formular uma teoria sobre como propor um novo modelo de formação de professores de geografia no país. Para a elaboração deste trabalho, as técnicas e o método utilizados foram a realização de diversas oficinas de aprendizagem mediada para o desenvolvimento de funções cognitivas por meio da resolução de exercícios de escala, tendo como embasamento a Experiência de Aprendizagem Mediada de Reuven Feuerstein. O estudo desta teoria para a renovação do ensino de geografia se justifica em função da importância do papel do professor no ensino da disciplina na escola, já que a questão da escala é idealizada de forma superficial e fragmentada como um problema; construindo-se um temor por professores e alunos. 
Em outros termos, utiliza-se uma abordagem da questão de escala, buscando aproximações, interfaces, continuidades e descontinuidades evidenciadas em diversos estudos preocupados com o ensino de cartografia na escola; valorizando os processos cognitivos do aluno, para então resgatar as bases históricas do estudo cognitivo do mapa e repensar, assim, uma didática da aprendizagem mediada. Este estudo consiste numa contribuição para pensarmos que não basta o ensino dos mapas na escola como uma linguagem, quando a carência realmente se manifesta sobre a necessidade da mediação da palavra falada, já que o pensamento se revela como precursor da aquisição da linguagem, e não o contrário. Mais ainda, o estudo serve como um balizador de uma alternativa para o ensino e a aprendizagem na escola, em que a criança e o adolescente sejam capazes de utilizar os mapas como instrumentos para realizar processos cognitivos e resolver diversas situações-problema envolvidas em suas rotinas escolares. O aluno é finalmente compreendido como sujeito e autor de conhecimento científico, numa relação de dialogia com o professor.
The present work aims at discussing the use of scales in the teaching practice of Geography. It secondarily aims at analyzing Geography teachers' speeches in order to suggest that maps can be used as an instrument of mediation which contributes to a modification of their practices. Among the readings carried out, we have, in the first part, the analysis of references concerning the question of scales, such as Oliveira (1978), Simielli (1989) and Martinelli (1991), as well as other contemporary studies, which are directed to conceive a methodological and cognitive study of maps in articulation with cartography in Geography teaching. In order to formulate a new theory for the formation of Geography teachers in Brazil, these studies are articulated within certain theoretical boundaries, such as the Philosophy of Language (Bakhtin), the contributions of Learning Psychology (Piaget/Vygotsky/Feuerstein) and, eventually, the reconnection of knowledge (Morin) in the school. The methodology and techniques used in the development of this work focus on the implementation of several mediated learning workshops for the development of cognitive functions through the resolution of scale exercises, having Reuven Feuerstein's Mediated Learning Experience as a basis. The use of this theory to promote a renewal in Geography teaching is justified by the importance of the teacher's role in the teaching of this subject at schools, since the question of scales is dealt with in a superficial and fragmented way and treated as a problem, becoming a source of fear for both teachers and students. In other words, we approach the question of scales searching for the approximations, interfaces, continuities and discontinuities evidenced by many studies concerned with the teaching of cartography at schools. We also highlight students' cognitive processes, and then rescue the historical bases of the cognitive study of maps so as to rethink a didactics of mediated learning. 
This study is a contribution towards the view that teaching maps at school as a language is not enough, when the real need concerns the mediation of the spoken word, since thought proves to be the forerunner of language acquisition, and not the opposite. Moreover, this thesis offers an alternative for the teaching and learning processes at schools, in which children and teenagers are able to use maps as instruments to carry out cognitive processes and solve the problem-situations involved in their school routines. The students are finally understood as subjects and authors of scientific knowledge, in a dialogic relationship with their teachers.
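The scale exercises used in the workshops described above typically ask for conversions between map distance and ground distance. A minimal worked example (the numbers are illustrative, not taken from the thesis):

```python
# Map scale 1:N means 1 unit on the map corresponds to N units on the ground.
def ground_distance_m(map_cm: float, scale_denominator: int) -> float:
    """Ground distance in metres for a distance measured on the map in cm."""
    return map_cm * scale_denominator / 100.0  # convert cm to m

print(ground_distance_m(4, 50_000))  # 4 cm at 1:50,000 -> 2000.0 m (2 km)
```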
APA, Harvard, Vancouver, ISO, and other styles
34

Kadirkamanathan, Mahapathy. "A scale-space approach to segmentation and recognition of cursive script." Thesis, University of Cambridge, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.276741.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Wirth, Henry. "Analysis of large-scale molecular biological data using self-organizing maps." Doctoral thesis, Universitätsbibliothek Leipzig, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-101298.

Full text
Abstract:
Modern high-throughput technologies such as microarrays, next generation sequencing and mass spectrometry provide huge amounts of data per measurement and challenge traditional analyses. New strategies of data processing, visualization and functional analysis are inevitable. This thesis presents an approach which applies a machine learning technique known as self-organizing maps (SOMs). SOMs enable the parallel sample- and feature-centered view of molecular phenotypes combined with strong visualization and second-level analysis capabilities. We developed a comprehensive analysis and visualization pipeline based on SOMs. The unsupervised SOM mapping projects the initially high number of features, such as gene expression profiles, to meta-feature clusters of similar and hence potentially co-regulated single features. This reduction of dimension is attained by the re-weighting of primary information and, in contrast to simple filtering approaches, does not entail a loss of primary information. The meta-data provided by the SOM algorithm are visualized in terms of intuitive mosaic portraits. Sample-specific and common properties shared between samples emerge as a handful of localized spots in the portraits, collecting groups of co-regulated and co-expressed meta-features. These characteristic color patterns reflect the data landscape of each sample and promote immediate identification of (meta-)features of interest. It will be demonstrated that SOM portraits transform large and heterogeneous sets of molecular biological data into an atlas of sample-specific texture maps which can be directly compared in terms of similarities and dissimilarities. Spot-clusters of correlated meta-features can be extracted from the SOM portraits in a subsequent aggregation step. This spot-clustering effectively reduces the dimensionality of the data in two subsequent steps towards a handful of signature modules in an unsupervised fashion. 
Furthermore, we demonstrate that analysis techniques provide enhanced resolution if applied to the meta-features. The improved discrimination power of meta-features in downstream analyses such as hierarchical clustering, independent component analysis or pairwise correlation analysis is ascribed to essentially two facts: firstly, the set of meta-features better represents the diversity of patterns and modes inherent in the data, and secondly, it also possesses better signal-to-noise characteristics than a comparable collection of single features. In addition to the pattern-driven feature selection in the SOM portraits, we apply statistical measures to detect significantly differential features between sample classes. The implementation of scoring measurements supplements the basic SOM algorithm. Further, two variants of functional enrichment analysis are introduced which link sample-specific patterns of the meta-feature landscape with biological knowledge and support functional interpretation of the data based on the 'guilt by association' principle. Finally, case studies selected from different 'OMIC' realms are presented in this thesis. In particular, molecular phenotype data derived from expression microarrays (mRNA, miRNA), sequencing (DNA methylation, histone modification patterns) or mass spectrometry (proteome), and also genotype data (SNP microarrays), are analyzed. It is shown that the SOM analysis pipeline implies strong application capabilities and covers a broad range of potential purposes, ranging from time series and treatment-vs.-control experiments to the discrimination of samples according to genotypic, phenotypic or taxonomic classifications.
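The SOM update rule behind the pipeline described above can be sketched very compactly. The toy below uses a 1-D grid of units and 2-D inputs (the thesis pipeline uses a 2-D grid and expression profiles), pulling the best-matching unit and its grid neighbours towards each sample while the neighbourhood shrinks; all sizes and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
units = rng.random((10, 2))                # 10 units in a 2-D feature space
data = np.array([[0.0, 0.0], [1.0, 1.0]])  # two toy "expression profiles"

def quantization_error(units, data):
    """Mean distance from each sample to its best-matching unit."""
    return float(np.mean([np.min(np.linalg.norm(units - x, axis=1)) for x in data]))

def train(units, data, epochs=50, lr0=0.5, r0=3.0):
    for e in range(epochs):
        decay = 1.0 - e / epochs
        lr, radius = lr0 * decay, max(r0 * decay, 0.1)
        for x in data:
            bmu = int(np.argmin(((units - x) ** 2).sum(axis=1)))  # best match
            d = np.abs(np.arange(len(units)) - bmu)               # grid distance
            h = np.exp(-(d ** 2) / (2 * radius ** 2))             # neighbourhood
            units = units + lr * h[:, None] * (x - units)         # pull towards x
    return units

before = quantization_error(units, data)
units = train(units, data)
after = quantization_error(units, data)
# Training moves units towards the data, so the quantization error drops.
```

On real expression data, each trained unit becomes a "meta-feature": the prototype profile of the single features mapped to it.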
APA, Harvard, Vancouver, ISO, and other styles
36

Khaustova, Darya. "Objective assessment of stereoscopic video quality of 3DTV." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S021/document.

Full text
Abstract:
Le niveau d'exigence minimum pour tout système 3D (images stéréoscopiques) est de garantir le confort visuel des utilisateurs. Le confort visuel est un des trois axes perceptuels de la qualité d'expérience (QoE) 3D qui peut être directement lié aux paramètres techniques du système 3D. Par conséquent, le but de cette thèse est de caractériser objectivement l'impact de ces paramètres sur la perception humaine afin de contrôler la qualité stéréoscopique. La première partie de la thèse examine l'intérêt de prendre en compte l'attention visuelle des spectateurs dans la conception d'une mesure objective de qualité 3D. Premièrement, l'attention visuelle en 2D et 3D sont comparées en utilisant des stimuli simples. Les conclusions de cette première expérience sont validées en utilisant des scènes complexes avec des disparités croisées et décroisées. De plus, nous explorons l'impact de l'inconfort visuel causé par des disparités excessives sur l'attention visuelle. La seconde partie de la thèse est dédiée à la conception d'un modèle objectif de QoE pour des vidéos 3D, basé sur les seuils perceptuels humains et le niveau d'acceptabilité. De plus nous explorons la possibilité d'utiliser la modèle proposé comme une nouvelle échelle subjective. Pour la validation de ce modèle, des expériences subjectives sont conduites présentant aux sujets des images stéréoscopiques fixes et animées avec différents niveaux d'asymétrie. La performance est évaluée en comparant des prédictions objectives avec des notes subjectives pour différents niveaux d'asymétrie qui pourraient provoquer un inconfort visuel
The minimum requirement for any 3D (stereoscopic imaging) system is to guarantee the visual comfort of viewers. Visual comfort is one of the three primary perceptual attributes of 3D QoE, and it can be linked directly with the technical parameters of a 3D system. Therefore, the goal of this thesis is to characterize objectively the impact of these parameters on human perception for stereoscopic quality monitoring. The first part of the thesis investigates whether the visual attention of viewers should be considered when designing an objective 3D quality metric. First, visual attention in 2D and 3D is compared using simple test patterns. The conclusions of this first experiment are validated using complex stimuli with crossed and uncrossed disparities. In addition, we explore the impact of the visual discomfort caused by excessive disparities on visual attention. The second part of the thesis is dedicated to the design of an objective model of 3D video QoE, which is based on human perceptual thresholds and an acceptability level. Additionally, we explore the possibility of using the proposed model as a new subjective scale. For the validation of the proposed model, subjective experiments with fully controlled still and moving stereoscopic images with different types of view asymmetries are conducted. The performance is evaluated by comparing objective predictions with subjective scores for various levels of view discrepancies which might provoke visual discomfort
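Perceptual thresholds of the kind used by such a model are commonly expressed as screen disparity in degrees of visual angle. The sketch below converts a screen disparity to visual angle and checks it against an assumed ±1° comfort limit, a widely quoted rule of thumb; the limit and all numbers are assumptions, not the thresholds calibrated in the thesis.

```python
import math

def disparity_deg(disparity_cm: float, viewing_distance_cm: float) -> float:
    """Screen disparity converted to degrees of visual angle."""
    return math.degrees(2 * math.atan(disparity_cm / (2 * viewing_distance_cm)))

COMFORT_LIMIT_DEG = 1.0  # assumed comfort zone (rule of thumb, not from the thesis)

d = disparity_deg(1.0, 170.0)  # 1 cm of screen disparity seen from 170 cm
print(d < COMFORT_LIMIT_DEG)   # True: within the assumed comfort zone
```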
APA, Harvard, Vancouver, ISO, and other styles
37

Dacosta, Italo. "Practical authentication in large-scale internet applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44863.

Full text
Abstract:
Due to their massive user base and request load, large-scale Internet applications have mainly focused on goals such as performance and scalability. As a result, many of these applications rely on weaker but more efficient and simpler authentication mechanisms. However, as recent incidents have demonstrated, powerful adversaries are exploiting the weaknesses in such mechanisms. While more robust authentication mechanisms exist, most of them fail to address the scale and security needs of these large-scale systems. In this dissertation we demonstrate that by taking into account the specific requirements and threat model of large-scale Internet applications, we can design authentication protocols for such applications that are not only more robust but also have low impact on performance, scalability and existing infrastructure. In particular, we show that there is no inherent conflict between stronger authentication and other system goals. For this purpose, we have designed, implemented and experimentally evaluated three robust authentication protocols: Proxychain, for SIP-based VoIP authentication; One-Time Cookies (OTC), for Web session authentication; and Direct Validation of SSL/TLS Certificates (DVCert), for server-side SSL/TLS authentication. These protocols not only offer better security guarantees, but they also have low performance overheads and do not require additional infrastructure. In so doing, we provide robust and practical authentication mechanisms that can improve the overall security of large-scale VoIP and Web applications.
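The flavour of a scheme like One-Time Cookies can be sketched with Python's standard hmac module: each request carries a MAC bound to the request itself, so a captured token is useless for any other request. This is a hypothetical toy, not the actual OTC protocol, which also handles counter management, expiry and key distribution.

```python
import hashlib
import hmac

def make_token(session_key: bytes, method: str, path: str, counter: int) -> str:
    """MAC over the request line and a per-session counter."""
    msg = f"{method} {path} {counter}".encode()
    return hmac.new(session_key, msg, hashlib.sha256).hexdigest()

def verify(session_key: bytes, method: str, path: str, counter: int, token: str) -> bool:
    expected = make_token(session_key, method, path, counter)
    return hmac.compare_digest(expected, token)  # constant-time comparison

key = b"per-session-shared-key"
token = make_token(key, "GET", "/inbox", 1)
print(verify(key, "GET", "/inbox", 1, token))   # True: same request verifies
print(verify(key, "GET", "/admin", 1, token))   # False: token is request-bound
```

Unlike a static session cookie, replaying this token against a different path or counter fails verification.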
APA, Harvard, Vancouver, ISO, and other styles
38

An, Li. "LARGE-SCALE DATA ANALYSIS OF GENE EXPRESSION MAPS OBTAINED BY VOXELATION." Diss., Temple University Libraries, 2012. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/183204.

Full text
Abstract:
Computer and Information Science
Ph.D.
Gene expression signatures in the mammalian brain hold the key to understanding neural development and neurological diseases, and gene expression profiles have been widely used in functional genomic studies. However, little work in traditional gene expression profiling takes into account the location of a gene's expression in the brain. Gene expression maps, which are obtained by combining voxelation and microarrays, contain spatial information regarding the expression of genes in the mouse brain. We study approaches for identifying the relationship between gene expression maps and gene functions, for mining association rules, and for predicting certain gene functions and functional similarities based on the gene expression maps obtained by voxelation. First, we identified the relationship between gene functions and gene expression maps. On one side, we chose typical genes as queries and aimed at discovering the groups of genes which have gene expression maps similar to the queries. We then studied the relationship between functions and maps by checking the similarities of gene functions in the detected gene groups. The similarity between a pair of gene expression maps was identified by calculating the Euclidean distance between the pair of feature vectors extracted by wavelet transformation from the hemisphere-averaged gene expression maps. Similarities of gene functions were identified by Lin's method based on gene ontology structures. On the other side, we proposed a multiple clustering approach, combined with a hierarchical clustering method, to detect significant clusters of genes which have both similar gene functions and similar gene expression maps. Within each group of similar genes, the gene function similarity was measured by calculating the average pair-wise gene function distance in the group and then ranking it against random cases.
By finding groups of genes similar to typical genes, we were able to improve our understanding of gene expression patterns and gene functions. By doing the multiple clustering, we obtained significant clusters of similar genes with very similar gene functions with respect to their corresponding gene ontologies. The cellular component ontology resulted in prominent clusters expressed in cortex and corpus callosum. The molecular function ontology gave prominent clusters in cortex, corpus callosum and hypothalamus. The biological process ontology resulted in clusters in cortex, hypothalamus and choroid plexus. Clusters from all three ontologies combined were most prominently expressed in cortex and corpus callosum. The experimental results confirm the hypothesis that, for certain genes, genes with similar gene expression maps have similar gene functions. Based on the relationship between gene functions and expression maps, we developed a modified Apriori algorithm to mine association rules among gene functions in the significant clusters. The experimental results show that the detected association rules (frequent itemsets of gene functions) make sense biologically. By inspecting the obtained clusters and the genes having the same frequent itemsets of functions, interesting clues were discovered that provide valuable insight to biological scientists. The discovered association rules can potentially be used to predict gene functions based on similarity of gene expression maps. Moreover, we proposed an efficient approach to identify gene functions. A gene function, or a set of certain gene functions, can potentially be associated with a specific gene expression profile. We named this specific gene expression profile a Functional Expression Profile (FEP) for one function, or a Multiple Functional Expression Profile (MFEP) for a set of functions. We suggested two different ways of finding (M)FEPs, a cluster-based and a non-cluster-based method.
Both of these methods achieved high accuracy in predicting gene functions, each for different kinds of gene functions. Compared to the traditional K-nearest-neighbor method, our approach shows higher accuracy in predicting functions. The visualized gene expression maps of (M)FEPs were in good agreement with anatomical components of the mouse brain. Furthermore, we proposed a supervised learning methodology to predict pair-wise gene functional similarity from gene expression maps. By using a modified AdaBoost algorithm coupled with our proposed weak classifier, we predicted the gene functional similarities between genes to a certain degree. The experimental results showed that as the similarities of gene expression maps increased, the functional similarities increased too. The weights of the features in the model indicated the most significant single voxels and pairs of neighboring voxels, which can be visualized in the expression map image of a mouse brain.
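The map-similarity measure this abstract describes (Euclidean distance between wavelet-derived feature vectors) can be sketched as follows. This is a minimal illustration only: it uses a single-level Haar approximation as the "wavelet transformation" and assumes maps arrive as 2-D arrays of per-voxel expression values; the thesis's actual voxelation data, wavelet depth, and preprocessing are not reproduced here.

```python
import numpy as np

def haar_features(expr_map):
    """Single-level 2-D Haar approximation coefficients as a feature vector.

    expr_map: 2-D array (even dimensions) of per-voxel expression values,
    e.g. a hemisphere-averaged gene expression map.
    """
    rows = (expr_map[0::2, :] + expr_map[1::2, :]) / 2.0   # average row pairs
    approx = (rows[:, 0::2] + rows[:, 1::2]) / 2.0         # average column pairs
    return approx.ravel()

def map_distance(map_a, map_b):
    """Euclidean distance between the wavelet feature vectors of two maps."""
    return float(np.linalg.norm(haar_features(map_a) - haar_features(map_b)))
```

Under this scheme, two genes with identical maps have distance 0, and distances grow as their spatial expression patterns diverge, which is the quantity the clustering steps operate on.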
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
39

Koivula, Matti. "Carabid beetles (Coleoptera, Carabidae) in boreal managed forests : meso-scale ecological patterns in relation to modern forestry." Helsinki : University of Helsinki, 2001. http://ethesis.helsinki.fi/julkaisut/mat/ekolo/vk/koivula/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Scott, Matthew B., and n/a. "Fine-scale ecology of alpine patterned ground, Old Man Range, Central Otago, New Zealand." University of Otago. Department of Botany, 2007. http://adt.otago.ac.nz./public/adt-NZDU20080130.093120.

Full text
Abstract:
This study is an interdisciplinary ecological study addressing the fine-scale relationships between plants, invertebrates and the environment in an alpine ecosystem. Alpine environments are marked by steep environmental gradients and complex habitat mosaics at various spatial scales. Regular forming periglacial patterned ground landforms on the Old Man Range, Central Otago, South Island, New Zealand present an ideal medium for studying plant-invertebrate-environment relationships due to their partitioning of the landscape into discrete units of contrasting environmental conditions, and the existence of some baseline knowledge of the soil, microclimate, vegetation and flora. The study was conducted in three types of patterned ground (hummocks, stripes and solifluction terraces) on the Old Man Range. Each component of the study was sampled at the same spatial scale for comparison. Temperature was recorded in the soil and ground surface from April 2001 to March 2004 in microtopographic subunits (microsites) of each patterned ground landform. Plant species cover was sampled within each microsite; invertebrates were sampled from soil cores taken from the same locations as plant samples in April 2001 and September 2001. The two sampling occasions coincided with autumn before the soil freezes, and winter when maximum freezing was expected. Fine-scale changes in the topographic relief of the patterned ground led to notable differences in the timing and duration of snow. The steepest environmental gradients existed during periods of uneven snow distribution. The soil in exposed or south-facing microsites froze first, beginning in May, and typically froze to more than 40cm depth. Least exposed microsites rarely froze. Within the microtopography, patterns of freezing at specific locations were consistent between years with only minor differences in the timing or depths of freezing; however, notable variation in freezing existed between similar microsites. 
Within the microtopography, different assemblages of organisms were associated with different microsites. In total, 84 plant and lichen species were recorded, grouping into six community types. Species composition was best explained by growing degree-days, freeze-thaw cycles, time frozen and snow-free days; species diversity and richness increased with increasing environmental stress as indicated by freeze-thaw cycles, time frozen and exposure. In total 20,494 invertebrates, representing four Phyla, 12 Classes, 23 Orders and 295 morpho-taxa, were collected from 0.17 m² of soil. Acari, Collembola and Pseudococcidae were the most abundant invertebrates. Over 95% of the invertebrates were found in the plant material and first 10 cm depth of soil. Few significant relationships were found between diversity, richness or abundance of invertebrate taxa and the microsites; however, multivariate analyses identified distinct invertebrate assemblages based on abundance. Invertebrate composition was best explained by recent low temperature and moisture, particularly in winter; however, plant composition also explained invertebrate composition, but more so in autumn. This research has shown that organisms in the alpine environment of the Old Man Range are sensitive to fine-scale changes in their environment. These results have implications as to how historical changes to the ecosystem may have had long-lasting influences on the biota, as well as how a currently changing climate may have further impacts on the composition and distribution of organisms.
APA, Harvard, Vancouver, ISO, and other styles
41

Oakley, Florence. "Generational differences in the frequency and importance of meaningful work." Thesis, University of Canterbury. Department of Management, 2015. http://hdl.handle.net/10092/10931.

Full text
Abstract:
This thesis aimed to investigate generational differences in the frequency and importance of meaningful work in employees based on the 7 facets of the Map of Meaning. Hypotheses were tested through Analysis of Variance of secondary data. 395 participants self-reported levels of meaningful work on the Comprehensive Meaningful Work Scale. Results indicated that Generation Y had significantly lower levels of meaningful work. Generation Y had significantly lower levels of Unity (importance), Serving (frequency and importance), Expressing full potential (frequency), Reality (frequency and importance) and Inspiration (frequency). Significant differences occurred mainly between Generation Y and Baby boomers, with some significant differences between Generation Y and Generation X and no significant differences between Generation X and Baby boomers. Results showed that overall frequency and importance levels were significantly lower for Generation Y. Overall frequency levels were lower than overall importance levels, which suggests that employees’ desire for meaningful work may not be satisfied. In light of this evidence, it is suggested that to improve organisational outcomes such as engagement, retention and performance, managers should provide opportunities for employees to engage in meaningful work with particular focus on Generation Y. Employees themselves should take responsibility to find meaning in their own work and life because engagement in meaningful activities can lead to satisfaction, belonging, fulfilment and a better understanding of one’s purpose in life.
APA, Harvard, Vancouver, ISO, and other styles
42

Selent, Douglas A. "Creating Systems and Applying Large-Scale Methods to Improve Student Remediation in Online Tutoring Systems in Real-time and at Scale." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/308.

Full text
Abstract:
"A common problem shared amongst online tutoring systems is the time-consuming nature of content creation. It has been estimated that an hour of online instruction can take 100-300 hours to create. Several systems have created tools to expedite content creation, such as the Cognitive Tutors Authoring Tool (CTAT) and the ASSISTments builder. Although these tools make content creation more efficient, they all still depend on the efforts of a content creator and/or past historical data. These tools do not take full advantage of the power of the crowd. These issues and challenges faced by online tutoring systems provide an ideal environment to implement a solution using crowdsourcing. I created the PeerASSIST system to provide a solution to the challenges faced with tutoring content creation. PeerASSIST crowdsources the work students have done on problems inside the ASSISTments online tutoring system and redistributes that work as a form of tutoring to their peers who are in need of assistance. Multi-objective multi-armed bandit algorithms are used to distribute student work; they balance exploring which work is good and exploiting the best currently known work. These policies are customized to run in a real-world environment with multiple asynchronous reward functions and an infinite number of actions. Inspired by major companies such as Google, Facebook, and Bing, PeerASSIST is also designed as a platform for simultaneous online experimentation in real-time and at scale. Currently over 600 teachers (grades K-12) are requiring students to show their work. Over 300,000 instances of student work have been collected from over 18,000 students across 28,000 problems. From the student work collected, 2,000 instances have been redistributed to over 550 students who needed help over the past few months. I conducted a randomized controlled experiment to evaluate the effectiveness of PeerASSIST on student performance.
Other contributions include representing learning maps as Bayesian networks to model student performance, creating a machine-learning algorithm to derive student incorrect processes from their incorrect answers and the inputs of the problem, and applying Bayesian hypothesis testing to A/B experiments. We showed that learning maps can be simplified without practical loss of accuracy, and that time-series data is necessary to simplify learning maps if the static data is highly correlated. I also created several interventions to evaluate the effectiveness of the buggy messages generated from the machine-learned incorrect processes. The null results of these experiments demonstrate the difficulty of creating successful tutoring content and suggest that other methods of tutoring content creation (i.e. PeerASSIST) should be explored."
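The explore/exploit balance the abstract attributes to PeerASSIST's bandit policies can be illustrated with a deliberately simplified epsilon-greedy sketch. This is not PeerASSIST's actual multi-objective, asynchronous policy; the class name, arm labels, and reward interpretation (e.g. next-problem correctness after seeing a peer's work) are hypothetical.

```python
import random

class EpsilonGreedyBandit:
    """Minimal epsilon-greedy bandit: each arm is a candidate piece of peer
    work; observed rewards update a running-mean value estimate per arm."""

    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}
        self.values = {a: 0.0 for a in arms}

    def select(self):
        if random.random() < self.epsilon:               # explore: random arm
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)     # exploit: best known

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        # incremental running mean of the rewards seen for this arm
        self.values[arm] += (reward - self.values[arm]) / n
```

A real deployment, as the abstract notes, must also cope with rewards arriving asynchronously and with an effectively unbounded arm set, which is where the customized policies come in.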
APA, Harvard, Vancouver, ISO, and other styles
43

Järvinen, Marko. "Control of plankton and nutrient limitation in small boreal brown-water lakes : evidence from small- and large-scale manipulation experiments." Helsinki : University of Helsinki, 2002. http://ethesis.helsinki.fi/julkaisut/mat/ekolo/vk/jarvinen/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Shannon, Thomas. "Leveraging successful collaborative processes to improve performance outcomes in large-scale event planning Super Bowl, a planned Homeland Security event /." Thesis, Monterey, California : Naval Postgraduate School, 2010. http://edocs.nps.edu/npspubs/scholarly/theses/2010/Mar/10Mar%5FShannon.pdf.

Full text
Abstract:
Thesis (M.A. in Security Studies (Homeland Security and Defense))--Naval Postgraduate School, March 2010.
Thesis Advisor(s): Wollman, Lauren. Second Reader: Joyce, Nola. "March 2010." Description based on title screen as viewed on April 23, 2010. Author(s) subject terms: Event Planning, Super Bowl, Collaborative Process, Security in Special Events, Incident Management, Public Private Collaboration. Includes bibliographical references (p. 79-87). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
45

Col, Stephen M. D. "Fine-scale variability in temperature, salinity, and pH in the upper-ocean and the effects on acoustic transmission loss in the Western Arctic Ocean." Thesis, Monterey, California : Naval Postgraduate School, 2010. http://edocs.nps.edu/npspubs/scholarly/theses/2010/Mar/10Mar%5FCol.pdf.

Full text
Abstract:
Thesis (M.S. in Engineering Acoustics)--Naval Postgraduate School, March 2010.
Thesis Advisor(s): Stanton, Tim ; Kapolka, Daphne. "March 2010." Description based on title screen as viewed on April 28, 2010. Author(s) subject terms: Acoustic propagation, transmission loss, Arctic Ocean, temperature salinity pH variability. Includes bibliographical references (p. 83-88). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
46

Thomas, Nathan James. "Resolving graphic conflict in scale reduced maps : a simulated annealing approach." Thesis, University of South Wales, 2004. https://pure.southwales.ac.uk/en/studentthesis/resolving-graphic-conflict-in-scale-reduced-maps-a-simulated-annealing-approach(6fbc9b78-4e96-473a-9ffe-9b5975207aaa).html.

Full text
Abstract:
This thesis explores the use of the stochastic optimisation technique of simulated annealing for cartographic map generalisation. The technique performs operations of displacement, deletion, reduction and enlargement of multiple map objects in order to resolve graphic conflict resulting from a reduction in map scale. A trial-position approach is adopted in which each of n discrete polygonal objects is assigned k candidate trial positions that represent the original, displaced, reduced and enlarged states of the object. This gives rise to k^n possible distinct map configurations; the expectation is that some of the configurations will contain reduced levels of conflict. Finding the configuration with least conflict by means of an exhaustive search is, however, not practical for realistic values of n and k. The thesis shows that evaluating only a subset of the configurations, using simulated annealing, can result in an effective resolution of graphic conflict in real time. Furthermore, the thesis explores various methods of improving upon the existing simulated annealing work. Firstly, two techniques were developed that improve execution times, using data partitioning and a two-stage annealing strategy. Secondly, an investigation was carried out into the application of high-order feature alignment and the use of a continuous search space. Moreover, the thesis explores the incorporation of additional feature classes into the existing SA framework. A thorough evaluation has been carried out which demonstrates the usefulness of each approach. The research has achieved five key improvements over the original SA technique: a reduction in execution times; greater support for generalisation operators; a solution to maintaining high-order feature alignment; greater support for additional feature classes; and a refinement to the search space resulting in improved graphic display output.
Although the thesis demonstrates the potential of simulated annealing as a means of reducing graphic conflict in scale-reduced maps, there still exists an enormous scope for further work. Additional techniques need to be devised to reduce execution times further for use with on-the-fly generalisation tasks. Other areas for future work include the incorporation of more sophisticated operators and an investigation to determine if SA can be adapted to resolve graphic conflict between linear features.
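The trial-position search described in this abstract can be sketched as a generic simulated-annealing loop over configurations, where config[i] selects one of the k states for object i. The cost function, cooling schedule, and parameter values below are illustrative assumptions, not the thesis's cartographic conflict model.

```python
import math
import random

def anneal(n_objects, k_positions, conflict, t0=1.0, cooling=0.995, steps=5000):
    """Simulated-annealing sketch over trial-position configurations.

    config[i] in 0..k_positions-1 is the chosen trial position (original,
    displaced, reduced or enlarged state) for object i; conflict(config)
    returns a non-negative graphic-conflict score to be minimised."""
    config = [0] * n_objects                  # start from the original states
    cost = conflict(config)
    best, best_cost = list(config), cost
    t = t0
    for _ in range(steps):
        i = random.randrange(n_objects)
        old = config[i]
        config[i] = random.randrange(k_positions)    # propose a new state
        delta = conflict(config) - cost
        # Metropolis rule: always accept improvements; occasionally accept
        # worse configurations so the search can escape local minima.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            cost += delta
            if cost < best_cost:
                best, best_cost = list(config), cost
        else:
            config[i] = old                          # reject: restore state
        t *= cooling                                 # cool the temperature
    return best, best_cost
```

The point the thesis makes is visible here: the loop touches only `steps` of the k^n configurations, yet the acceptance rule tends to settle on low-conflict layouts.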
APA, Harvard, Vancouver, ISO, and other styles
47

Hollinger, David L. "Crop Condition and Yield Prediction at the Field Scale with Geospatial and Artificial Neural Network Applications." Kent State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=kent1310493197.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Samuelson, Heather Marie. "Scales modified for use on board the human centrifuge in the MIT Man Vehicle Lab." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/36703.

Full text
Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2006.
Page 54 blank.
Includes bibliographical references (p. 37-38).
The MIT Man Vehicle Lab (MVL) is currently performing research on the effects of rotational artificial gravity on humans through the use of a short-radius centrifuge. The MVL centrifuge allows subjects to spin in the supine position with their heads at the center of rotation and their feet facing outwards. To collect information regarding actual forces experienced by a subject while on the centrifuge, a set of scales was designed specifically to measure the equivalent of the human body's weight in artificial gravity. These were mounted on board a stair stepper exercise device to measure the forces exerted at the feet of subjects while exercising. Exercise is particularly important in preventing microgravity-induced deconditioning of the body and without exercise a deconditioned subject might not be able to withstand the stress of experiencing artificial gravity. The primary focus of the research is to gain a better understanding of the overall effects resulting from artificial gravity on humans and eventually to alleviate undesirable ones. The Contek WCS-20® bathroom scale was redesigned to fit on board the stair stepper device on the centrifuge and to safely and securely measure the forces exerted by each foot of a subject while exercising.
It was also modified to give continuous force readouts; measurements were made while a subject was performing simple exercises on a stair stepper device in artificial gravity. The subject was spun at 0, 12.5, 23 and 30 rpm on the centrifuge and force measurements were taken at each speed. The forces from the two scales (left foot and right foot) sum to give the subject's total equivalent weight at each rotational speed. In addition, the forces exerted radially by the subject varied as he altered his body's distance from the center of the centrifuge through the series of exercises. These scales can be used to provide foot force data and, from this, a better understanding can be gained of the overall effects of artificial gravity on the human body.
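The equivalent weight these scales measure follows from centripetal acceleration, omega^2 * r, which grows with both spin rate and the feet's distance from the center of rotation. The sketch below assumes a hypothetical foot radius of 2 m purely for illustration, not the MVL centrifuge's actual dimensions.

```python
import math

def artificial_gravity(rpm, radius_m, g=9.81):
    """Centripetal acceleration omega^2 * r at a given spin rate and radius,
    expressed in multiples of Earth gravity g."""
    omega = rpm * 2.0 * math.pi / 60.0   # convert rpm to rad/s
    return omega ** 2 * radius_m / g
```

For example, at 30 rpm with the feet 2 m from the axis, the feet experience roughly twice Earth gravity, while at 0 rpm the radial loading is zero, matching the experiment's baseline condition.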
by Heather Marie Samuelson.
S.B.
APA, Harvard, Vancouver, ISO, and other styles
49

Ecrement, Stephen M. "Amphibian Use of Man-Made Pools Created by Military Activity on Kisatchie National Forest, Louisiana." Ohio University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1406307536.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Boltersdorf, Stefanie Helene [Verfasser], and Willy [Akademischer Betreuer] Werner. "A critical appraisal of accumulative biomonitors to assess and to map sources and rates of atmospheric nitrogen deposition on different regional scales in Germany / Stefanie Helene Boltersdorf ; Betreuer: Willy Werner." Trier : Universität Trier, 2014. http://d-nb.info/1197700579/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles