Dissertations / Theses on the topic 'Digital methods of information processing'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Digital methods of information processing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Hellyar, Mark Tremaine. "Investigation, development and application of knowledge based digital signal processing methods for enhancing human EEGs." Thesis, University of Plymouth, 1991. http://hdl.handle.net/10026.1/1820.

Full text
Abstract:
This thesis details the development of new and reliable techniques for enhancing the human electroencephalogram (EEG). This development has involved the incorporation of adaptive signal processing (ASP) techniques within an artificial intelligence (AI) paradigm, more closely matching the implicit signal analysis capabilities of the EEG expert. The need for EEG enhancement, by removal of ocular artefact (OA), is widely recognised. However, conventional ASP techniques for OA removal fail to differentiate between OAs and some abnormal cerebral waveforms, such as frontal slow waves. OA removal often results in the corruption of these diagnostically important cerebral waveforms. However, the experienced EEG expert is often able to differentiate between OA and abnormal slow waveforms, and between different types of OA. This EEG expert knowledge is integrated with selectable adaptive filters in an intelligent OA removal system (IOARS). The EEG is enhanced by removing OA only when OA is identified, and by applying the OA removal algorithm pre-set for the specific OA type. Extensive EEG data acquisition has provided a database of abnormal EEG recordings from over 50 patients, exhibiting a variety of cerebral abnormalities. Structured knowledge elicitation has provided over 60 production rules for OA identification in the presence of abnormal frontal slow waveforms, and for distinguishing between OA types. The IOARS was implemented on personal computer (PC) based hardware in the PROLOG and C languages. Two-second, 18-channel EEG signal segments are subjected to digital signal processing to extract salient features from the time, frequency, and contextual domains. OA is identified using a forward/backward hybrid inference engine with uncertainty management, using the elicited expert rules and extracted signal features. Evaluation of the system has been carried out using both normal and abnormal patient EEGs, and this shows high agreement (82.7%) in OA identification between the IOARS and an EEG expert. This novel development provides a significant improvement in OA removal and EEG signal enhancement, and will allow more reliable automated EEG analysis. The investigation detailed in this thesis has led to four papers, including one in a special proceedings of the IEE, and has been the subject of several review articles.
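The conventional adaptive-filtering step that this work builds on (and makes conditional on rule-based OA identification) can be illustrated with a least-mean-squares filter driven by an electro-oculogram (EOG) reference channel. This is a minimal sketch on synthetic data, not the IOARS itself; the sampling rate, filter length and step size are placeholder assumptions.

```python
import numpy as np

def lms_oa_removal(eeg, eog, n_taps=5, mu=0.01):
    """Remove ocular artefact from one EEG channel by adaptively
    estimating its EOG contribution with an LMS filter (illustrative only)."""
    w = np.zeros(n_taps)                 # adaptive filter weights
    cleaned = np.zeros_like(eeg)
    for n in range(n_taps, len(eeg)):
        x = eog[n - n_taps:n][::-1]      # most recent EOG samples
        y = w @ x                        # estimated artefact at sample n
        e = eeg[n] - y                   # residual = corrected EEG sample
        w += 2 * mu * e * x              # LMS weight update
        cleaned[n] = e
    return cleaned

# Synthetic demo: cerebral activity plus a scaled, smoothed EOG artefact.
rng = np.random.default_rng(0)
fs, t = 256, np.arange(0, 2, 1 / 256)            # a 2-second segment
eog = np.convolve(rng.standard_normal(t.size), np.ones(20) / 20, mode="same")
cerebral = np.sin(2 * np.pi * 10 * t)            # 10 Hz alpha-like activity
eeg = cerebral + 0.8 * eog
print(np.corrcoef(lms_oa_removal(eeg, eog)[50:], cerebral[50:])[0, 1])
```

The thesis's point is precisely that such a filter, applied unconditionally, would also strip frontal slow waves; the expert rules decide when and with which pre-set parameters a filter like this may run.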
APA, Harvard, Vancouver, ISO, and other styles
2

Ali, Salih Mohamed Sidahmed. "GIS time series mapping of a former South African homeland." Thesis, Cape Peninsula University of Technology, 2016. http://hdl.handle.net/20.500.11838/2506.

Full text
Abstract:
Thesis (MTech (Cartography))--Cape Peninsula University of Technology, 2016.
This case study investigates the change in geographical boundaries by creating a spatio-temporal mapping of Ciskei (one of the so-called Bantustans or Homelands) during the period of apartheid. It examines the reasons for its establishment, and what impact the apartheid land legislation had on the geographical boundaries of Ciskei. GIS technology was used in this study to create a time series animation and a static map to display the spatial change of the Ciskei boundaries. This investigation was split into quantitative and qualitative assessments. The aim of the quantitative assessments was to determine the amount of spatial change of the Ciskei geographic boundary. The qualitative methods were used to investigate the map viewer's understanding of the amount of information in the static and animated maps. The results of the qualitative assessments showed that static and animated maps have their respective advantages in supporting the map viewer's visualization. The importance of this research is to take advantage of time series mapping techniques to study the homeland areas in South Africa and see all the changes that occurred as a result of the period of apartheid legislation. For this research, the following data were gathered: the attribute data and metadata comprised the legislation and laws relating to the land, and the geographic data comprised historical maps and coordinate data.
APA, Harvard, Vancouver, ISO, and other styles
3

Alonso, Miguel Jr. "A method for enhancing digital information displayed to computer users with visual refractive errors via spatial and spectral processing." FIU Digital Commons, 2007. http://digitalcommons.fiu.edu/etd/1112.

Full text
Abstract:
This research pursued the conceptualization, implementation, and verification of a system that enhances digital information displayed on an LCD panel to users with visual refractive errors. The target user groups for this system are individuals who have moderate to severe visual aberrations for which conventional means of compensation, such as glasses or contact lenses, do not improve their vision. This research is based on a priori knowledge of the user's visual aberration, as measured by a wavefront analyzer. With this information it is possible to generate images that, when displayed to this user, will counteract his/her visual aberration. The method described in this dissertation advances the development of techniques for providing such compensation by integrating spatial information in the image as a means to eliminate some of the shortcomings inherent in using display devices such as monitors or LCD panels. Additionally, physiological considerations are discussed and integrated into the method for providing said compensation. In order to provide a realistic sense of the performance of the methods described, they were tested by mathematical simulation in software, as well as by using a single-lens high-resolution CCD camera that models an aberrated eye, and finally with human subjects having various forms of visual aberrations. Experiments were conducted on these systems and the data collected from these experiments were evaluated using statistical analysis. The experimental results revealed that the pre-compensation method resulted in a statistically significant improvement in vision for all of the systems. Although significant, the improvement was not as large as expected for the human subject tests. Further analysis suggests that even under the controlled conditions employed for testing with human subjects, the characterization of the eye may be changing. This would require real-time monitoring of relevant variables (e.g. pupil diameter) and continuous adjustment in the pre-compensation process to yield maximum viewing enhancement.
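The core pre-compensation idea can be sketched as regularized inverse filtering: given a point spread function (PSF) describing the aberration, compute an image that, once blurred by that PSF, approximates the intended image. This is a minimal sketch assuming a Gaussian blur stands in for the wavefront-derived PSF; the dissertation's method additionally handles spatial information and physiological/display constraints not modelled here.

```python
import numpy as np

def precompensate(image, psf, reg=1e-2):
    """Regularized inverse filter: produce an image that, after blurring
    by `psf` (a stand-in for the eye's aberration), approximates `image`."""
    H = np.fft.fft2(psf, s=image.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + reg)      # Tikhonov-regularized inverse
    pre = np.real(np.fft.ifft2(np.fft.fft2(image) * G))
    return np.clip(pre, 0.0, 1.0)                # displays have a limited range

def gaussian_psf(size=15, sigma=3.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0        # simple test target
psf = gaussian_psf()
pre = precompensate(img, psf)
perceived = np.real(np.fft.ifft2(np.fft.fft2(pre) * np.fft.fft2(psf, s=img.shape)))
print(float(np.abs(perceived - img).mean()))             # residual error after "viewing"
```

The clipping step mirrors one of the display-device shortcomings the abstract mentions: an inverse-filtered image can demand intensities the panel cannot produce.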
APA, Harvard, Vancouver, ISO, and other styles
4

Darbhamulla, Lalitha. "A Java image editor and enhancer." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2705.

Full text
Abstract:
The purpose of this project is to develop a Java applet that provides all the tools needed for creating image fantasies. It lets the user pick a template and an image and combine them. The user can then apply image processing techniques such as rotation, zooming, and blurring according to his/her requirements.
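The editing operations named in the abstract are standard image transforms. As a rough illustration only (the original project is a Java applet; this sketch uses Python and Pillow, and the file names are hypothetical):

```python
from PIL import Image, ImageFilter   # Pillow; the original project is a Java applet

def compose_fantasy(template_path, photo_path, angle=15, zoom=1.5, blur_radius=2):
    """Paste a photo onto a template, then apply rotation, zoom and blur --
    the kind of editing operations the applet exposes (illustrative only)."""
    template = Image.open(template_path).convert("RGBA")
    photo = Image.open(photo_path).convert("RGBA")

    # Zoom: scale the photo before placing it.
    w, h = photo.size
    photo = photo.resize((int(w * zoom), int(h * zoom)))

    # Rotate around the centre, keeping the expanded canvas.
    photo = photo.rotate(angle, expand=True)

    # Blur softens the pasted image so it blends with the template.
    photo = photo.filter(ImageFilter.GaussianBlur(blur_radius))

    template.paste(photo, (0, 0), photo)   # use the alpha channel as a mask
    return template

# result = compose_fantasy("template.png", "portrait.png")   # hypothetical files
# result.save("fantasy.png")
```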
APA, Harvard, Vancouver, ISO, and other styles
5

Kokkinou, Eleni. "Image processing methods in digital autoradiography." Thesis, University of Surrey, 2002. http://epubs.surrey.ac.uk/844272/.

Full text
Abstract:
Autoradiography is a common method in biomedical research for detecting and measuring biodistributions of labelled biomolecules within a specimen. The conventional method is based on using film or film-emulsions for the image acquisition. Although film autoradiography is still in widespread use, there are several disadvantages, such as long exposure times, lack of sensitivity, non-linear response of the film and limited dynamic range, that encouraged the development of digital autoradiographic systems. Most of the current digital imaging systems have demonstrated excellent performance as far as the above parameters are concerned, but still cannot match the image resolution performance exhibited by film or film-emulsion. This thesis is focused on developing image processing methods for improving the quality of digital autoradiography images, corrupted by noise and blur, obtained with a hybrid CCD autoradiography system at room temperature. Initially, a novel fixed pattern noise removal method was developed which takes into account the non-ergodic nature of the dark current noise and its dependence on ambient temperature. Empirical formulae were also deduced, as a further improvement of the above method, for adapting the parameters of the noise distribution to ambient temperature shifts. Image restoration approaches were developed using simulated annealing as a global optimisation technique appropriate for removing the noise and blur from high particle flux samples. The performance of the proposed methods for low flux distributed sources (microscales and mouse brain sections) labelled with high energy beta emitters has also been demonstrated at different temperatures and integration times, and compared with images acquired by the conventional film-based method. Key words: digital autoradiography, image restoration, simulated annealing, fixed pattern noise removal.
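In its simplest form, fixed pattern noise correction subtracts a per-pixel dark-current estimate built from exposures taken with no source present. The following is a toy simulation of that basic idea only; the thesis's method additionally models the non-ergodic, temperature-dependent behaviour of the dark current, which this sketch does not.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (128, 128)

# Fixed pattern noise: a per-pixel dark-current offset that repeats frame to frame.
fpn = rng.gamma(shape=2.0, scale=5.0, size=shape)

def acquire(signal, fpn, read_noise=2.0):
    """Simulate one CCD exposure: true signal + fixed pattern + random read noise."""
    return signal + fpn + rng.normal(0.0, read_noise, size=signal.shape)

# Master dark frame: average of many exposures taken with no source present.
master_dark = np.mean([acquire(np.zeros(shape), fpn) for _ in range(50)], axis=0)

# Correct a sample exposure by subtracting the master dark.
sample = np.zeros(shape); sample[40:88, 40:88] = 30.0    # stand-in labelled region
raw = acquire(sample, fpn)
corrected = raw - master_dark

print("RMS error before:", float(np.sqrt(np.mean((raw - sample) ** 2))))
print("RMS error after :", float(np.sqrt(np.mean((corrected - sample) ** 2))))
```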
APA, Harvard, Vancouver, ISO, and other styles
6

Ribeiro, Manassés. "Deep learning methods for detecting anomalies in videos: theoretical and methodological contributions." Universidade Tecnológica Federal do Paraná, 2018. http://repositorio.utfpr.edu.br/jspui/handle/1/3172.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
The anomaly detection in automated video surveillance is a recurrent topic in recent computer vision research. Deep learning (DL) methods have achieved state-of-the-art performance for pattern recognition in images, and the Convolutional Autoencoder (CAE) is one of the most frequently used approaches, capable of capturing the 2D structure of objects. In this work, anomaly detection refers to the problem of finding patterns in images and videos that do not belong to the expected normal concept. Aiming at classifying anomalies adequately, methods for learning relevant representations were examined. For this reason, both the capability of the model to learn features automatically and the effect of fusing hand-crafted features together with raw data were studied. Indeed, for real-world problems, the representation of the normal class is an important issue for detecting anomalies, in which one or more clusters can describe different aspects of normality. For classification purposes, these clusters must be as compact (dense) as possible. This thesis proposes the use of the CAE as a data-driven approach in the context of anomaly detection problems. Methods for feature learning using as input both hand-crafted features and raw data were proposed, and how they affect the classification performance was investigated. This work also introduces a hybrid approach using DL and one-class support vector machine methods, named Convolutional Autoencoder with Compact Embedding (CAE-CE), for enhancing the compactness of normal clusters. Besides, a novel sensitivity-based stop criterion was proposed, and its suitability for anomaly detection problems was assessed. The proposed methods were evaluated using publicly available datasets and compared with state-of-the-art approaches. Two novel benchmarks, designed for video anomaly detection on highways, were introduced. The CAE was shown to be promising as a data-driven approach for detecting anomalies in videos. Results suggest that the CAE can learn spatio-temporal features automatically, and the aggregation of hand-crafted features seems to be valuable for some datasets. Also, overall results suggest that the enhanced compactness introduced by the CAE-CE improved the classification performance for most cases, and the stop criterion based on the sensitivity is a novel approach that seems to be an interesting alternative. Videos were qualitatively analyzed at the visual level, indicating that features learned using both methods (CAE and CAE-CE) are closely correlated to the anomalous events occurring in the frames. In fact, there is much yet to be done towards a more general and formal definition of normality/abnormality, so as to support researchers in devising efficient computational methods to mimic the semantic interpretation of visual scenes by humans.
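The basic CAE-plus-reconstruction-error scheme described here can be sketched as follows. This is a toy PyTorch version with random stand-in data and placeholder hyperparameters, not the thesis's CAE-CE model or its sensitivity-based stop criterion: the autoencoder is trained on "normal" frames only, and frames with a high reconstruction error are flagged as anomalous.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Minimal convolutional autoencoder; anomaly score = reconstruction error."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 8, kernel_size=3, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, kernel_size=2, stride=2),    # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=2, stride=2),    # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model, frames):
    """Per-frame mean squared reconstruction error; high values flag anomalies."""
    model.eval()
    with torch.no_grad():
        recon = model(frames)
        return ((frames - recon) ** 2).mean(dim=(1, 2, 3))

# Toy training loop on "normal" frames only (random stand-in data).
model = ConvAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
normal = torch.rand(32, 1, 64, 64)
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(normal), normal)
    loss.backward()
    opt.step()

print(anomaly_scores(model, torch.rand(4, 1, 64, 64)))
```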
APA, Harvard, Vancouver, ISO, and other styles
7

Ali-Bakhshian, Mohammad. "Digital processing of analog information adopting time-mode signal processing." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=114237.

Full text
Abstract:
As CMOS technologies advance to 22-nm dimensions and below, constructing analog circuits in such advanced processes suffers from many limitations, such as reduced signal swings, sensitivity to thermal noise effects, and loss of accurate switching functions, to name just a few. Time-Mode Signal Processing (TMSP) is a technique that is believed to be well suited to solving many of these challenges. It can be defined as the detection, storage, and manipulation of sampled analog information using time-mode variables. One of the important advantages of TMSP is the ability to realize analog functions using digital logic structures. This technique has a long history of application in electronics; however, due to the lack of some fundamental functions, the use of TM variables has been mostly limited to intermediate-stage processing and has always been associated with voltage/current-to-time and time-to-voltage/current conversion. These conversions necessitate the inclusion of analog blocks that contradict the digital advantage of TMSP. In this thesis, intensive research is presented that provides an appropriate foundation for the development of TMSP as a general processing tool. By proposing the new concept of delay interruption, a completely new asynchronous approach for the manipulation of TM variables is suggested. As a direct result of this approach, practical techniques for the storage, addition and subtraction of time-mode variables are presented. To extend the digital implementation of TMSP to a wider range of applications, the comprehensive design of a unity-gain, dual-path, time-to-time integrator (accumulator) is demonstrated. This integrator is then used to implement a digital second-order delta-sigma modulator. Finally, to demonstrate the advantage of TMSP, a very low power and compact tunable interface for capacitive sensors is presented that is composed of a number of delay blocks combined with typical logic gates. All the proposed theories are supported by experimental results and post-layout simulations. The emphasis on the digital construction of the proposed circuits has been the first priority of this thesis. Implementing the building blocks with a digital structure provides the feasibility of a simple, synthesizable, and reconfigurable design in which affordable circuit calibrations can be adopted to remove the effects of process variations.
APA, Harvard, Vancouver, ISO, and other styles
8

Ollonqvist, K. (Kalle). "Moving from traditional software development methods to agile methods." Bachelor's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201806092553.

Full text
Abstract:
In this literature review, the benefits and challenges of agile software development methods were discussed using existing research and sources. Agile methods were also compared to traditional software development methods. Agile was defined and its four points of value were listed. This provided a basis for discussing the benefits and challenges of agile compared to traditional software development methods. Customer involvement was found to have a positive effect on customer satisfaction in agile. A development team using agile as its development method was able to deliver something of value to the customer faster, which was useful if the company had to race to market. However, a software project developed with waterfall was found to be more predictable, especially if the development team was experienced. This was partly because the customer was not able to change the requirements of the product they had ordered, and partly because of the progressive, onward-moving development process typical of waterfall. Planning is also a big part of waterfall development, and it affects the predictability and measurability of the project as well. One of the biggest challenges when moving to agile from a traditional development method was changing the fundamental mindset of the people working in the organization. In particular, the management style needed to change from command-and-control to leadership-and-collaboration. Another challenge was that customer involvement might become a burden if the customer is continuously changing the requirements. Moving to agile from a traditional development method is not easy, and it could lead to adding more sprints to the software development process than planned.
APA, Harvard, Vancouver, ISO, and other styles
9

Tarczyńska, Anna. "Methods of Text Information Extraction in Digital Videos." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2656.

Full text
Abstract:
Context: The huge number of existing digital video files needs to be indexed to make them available to customers (easier searching). The indexing can be provided by text information extraction. In this thesis we have analysed and compared methods of text information extraction in digital videos. Furthermore, we have evaluated them in a new context proposed by us, namely their usefulness in sports news indexing and information retrieval. Objectives: The objectives of this thesis are as follows: providing a better understanding of the nature of text extraction; performing a systematic literature review on various methods of text information extraction in digital videos of TV sports news; designing and executing an experiment in the testing environment; evaluating available and promising methods of text information extraction from digital video files in the proposed context associated with video sports news indexing and retrieval; and providing an adequate solution in the proposed context described above. Methods: This thesis uses three research methods: a systematic literature review, video content analysis with a checklist, and an experiment. The systematic literature review has been used to study the nature of text information extraction, to establish the methods and challenges, and to specify an effective way of conducting the experiment. The video content analysis has been used to establish the context for the experiment. Finally, the experiment has been conducted to answer the main research question: how useful are the methods of text information extraction for the indexation of video sports news and information retrieval? Results: Through the systematic literature review we identified 29 challenges of the text information extraction methods, and 10 chains between them. We extracted 21 tools and 105 different methods, and analysed the relations between them. Through the video content analysis we specified three groups of probability of text extraction from video, and 14 categories for providing video sports news indexation with the taxonomy hierarchy. We conducted the experiment on three video files, with 127 frames, 8970 characters, and 1814 words, using the only available tool, MoCA. As a result, we reported 10 errors and proposed recommendations for each of them. We evaluated the tool according to the categories mentioned above and identified four advantages and nine disadvantages. Conclusions: It is hard to compare the methods described in the literature, because the tools are not available for testing and they are not compared with each other. Furthermore, the values of the recall and precision measures depend highly on the quality of the text contained in the video; performing the experiments on the same indexed database is therefore necessary. However, text information extraction is time consuming (because of the huge number of frames in video), and even a high character recognition rate gives a low word recognition rate. Therefore, the usefulness of text information extraction for video indexation is still low. Because most of the text information contained in video news is inserted in post-processing, the text extraction could be provided at the root: during the processing of the original video, by the broadcasting company (e.g. by automatically saving inserted text in a separate file). Then text information extraction would not be necessary for managing the new video files.
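A generic frame-sampling and OCR pipeline of the kind evaluated here can be sketched with OpenCV and Tesseract. This is an illustrative assumption-laden stand-in, not the MoCA tool used in the thesis; it assumes the Tesseract engine is installed and the video file name is hypothetical.

```python
import cv2
import pytesseract   # assumes the Tesseract OCR engine is installed

def extract_text(video_path, every_n_frames=25):
    """Sample frames from a video and OCR them; returns {frame_index: text}.
    Illustrative pipeline only -- not the MoCA tool evaluated in the thesis."""
    cap = cv2.VideoCapture(video_path)
    results, idx = {}, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Otsu binarization helps with overlaid caption text.
            _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            text = pytesseract.image_to_string(binary).strip()
            if text:
                results[idx] = text
        idx += 1
    cap.release()
    return results

def word_recognition_rate(extracted, ground_truth):
    """Fraction of ground-truth words recovered -- the kind of measure whose
    dependence on text quality the thesis highlights."""
    found = set(w.lower() for t in extracted.values() for w in t.split())
    truth = [w.lower() for w in ground_truth.split()]
    return sum(w in found for w in truth) / max(len(truth), 1)

# texts = extract_text("sports_news.mp4")          # hypothetical file name
# print(word_recognition_rate(texts, "final score 2 1"))
```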
APA, Harvard, Vancouver, ISO, and other styles
10

Lindström, Fredric. "Digital signal processing methods and algorithms for audio conferencing systems /." Karlskrona : Department of Signal Processing, School of Engineering, Blekinge Institute of Technology, 2007. http://www.bth.se/fou/Forskinfo.nsf/allfirst2/9cc008f2fa400e82c12572bb00331533?OpenDocument.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Gallay, Michal. "Assessing alternative methods for acquiring and processing digital elevation data." Thesis, Queen's University Belfast, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.534738.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Jiang, Yuchong. "Automatic assessment of skeletal maturity by digital image processing methods." Thesis, University of Liverpool, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333501.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Wetzel, Daniel T. "Digital Methods for Cohere-On-Receive Radar Applications." University of Dayton / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1533319129578964.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Habe, Hitoshi. "Geometric information processing methods for elaborating computer vision algorithms." 京都大学 (Kyoto University), 2006. http://hdl.handle.net/2433/136028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Korhonen, J. (Johanna). "Piracy prevention methods in software business." Bachelor's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201605131733.

Full text
Abstract:
There are various forms of piracy in the software business, and many prevention techniques have been developed against them. Forms of software piracy are, for example, cracks and serials, softlifting and hard disk loading, internet piracy and software counterfeiting, mischanneling, reverse engineering, and tampering. There are various prevention methods that target these types of piracy, although all of these methods have been broken. Piracy prevention measures can be divided into ethical, legal, and technical measures. Technical measures include, for example, obfuscation and tamper-proofing. However, relying on a single method does not provide complete protection from attacks against intellectual property, so companies wishing to protect their product should consider combining multiple methods of piracy prevention.
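One of the technical measures named here, tamper-proofing, reduces in its simplest form to an integrity check against a known-good digest. This is only a bare illustration of that idea (the file name is hypothetical); real schemes embed, hide and multiply such checks inside the protected binary, and, as the thesis notes, even those have been broken.

```python
import hashlib
from pathlib import Path

def file_digest(path):
    """SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_tampered(path, expected_digest):
    """Simplest possible integrity check: compare against a known-good digest."""
    return file_digest(path) != expected_digest

# Demo with a temporary file standing in for a protected executable.
demo = Path("protected_module.bin")            # hypothetical file name
demo.write_bytes(b"original program bytes")
good = file_digest(demo)
demo.write_bytes(b"patched program bytes")     # simulate a cracker's modification
print("tampered:", is_tampered(demo, good))
demo.unlink()
```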
APA, Harvard, Vancouver, ISO, and other styles
16

Gilkerson, Paul. "Digital signal processing for time of flight sonar." Thesis, University of Oxford, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343467.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Jha, Sanjay Kumar. "Artificial neural networks for digital signal processing applications." Thesis, University of Strathclyde, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.284097.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Marjamaa-Mankinen, L. (Liisa). "Technology ecosystems and digital business ecosystems for business." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201603251356.

Full text
Abstract:
The purpose of this study was to find out the progress in the research of technology ecosystems and digital business ecosystems and to combine that information for business purposes by utilizing information about business ecosystems. The need for this information emerged at the Department of Information Processing Science in the context of European Union research projects. The information gained is expected to help increase possibilities both for research and for the personal competence to work with enterprises in new kinds of technology environments. The main research question to be answered in this study was: how are technology ecosystems and digital business ecosystems for business perceived and approached in the literature? Instead of a systematic review, a systematic mapping method was selected to structure the selected research areas, to get a broad overview over the two streams of research, and to identify possible research evidence. To answer the main question, the following subquestions were set for both systematic mapping studies: RQ1: Which journals include papers on technology ecosystems / digital business ecosystems for business? RQ2: What are the most investigated topics of technology ecosystems / digital business ecosystems, and how have these changed over time? RQ3: What are the most frequently applied research approaches and methods, and in what study context? Based on structuring the selected research areas according to these subquestions, broad overviews were established presenting the findings. Based on the identification and evaluation of publication channels, the forums for discussion were exposed. Based on the identification of topics and their evolution, the trends of discussion were exposed. Based on the identification of research types, the non-empirical and the empirical research were exposed. The research evidence and solution proposals (from non-empirical research) that were found are discussed, and the need for further research is considered. The main contribution of this mapping study is the identification of different perceptions of two vague concepts, technology ecosystem and digital business ecosystem, and the observation of their convergence and interlacing over time (especially in relation to the scarce research evidence exposed). Recommendations for future research were set based on the empirical research and solution proposals that were found, as well as on the limitations of this study.
APA, Harvard, Vancouver, ISO, and other styles
19

Kim, Younhee. "Towards lower bounds on distortion in information hiding." Fairfax, VA : George Mason University, 2008. http://hdl.handle.net/1920/3403.

Full text
Abstract:
Thesis (Ph.D.)--George Mason University, 2008.
Vita: p. 133. Thesis directors: Zoran Duric, Dana Richards. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science. Title from PDF t.p. (viewed Mar. 17, 2009). Includes bibliographical references (p. 127-132). Also issued in print.
APA, Harvard, Vancouver, ISO, and other styles
20

Kreuzinger, Tobias. "Digital Signal Processing Methods for Source Function Extraction of Piezoelectric Elements." Thesis, Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4792.

Full text
Abstract:
Guided wave techniques have great potential for the structural health monitoring of plate-like components. Previous research has demonstrated the effectiveness of combining laser-ultrasonic techniques with time-frequency representations to experimentally develop the dispersion relationship of a plate; the high fidelity, broad bandwidth and point-like nature of laser ultrasonics are critical for the success of these results. Unfortunately, laser ultrasonic techniques are time and cost intensive, and are impractical for many in-service applications. Therefore this research develops a complementary digital signal processing methodology that uses mounted piezoelectric elements instead of optical devices. This study first characterizes the spatial and temporal effects of oil-coupled and glued piezoelectric sources, and then develops a procedure to interpret and model the distortion caused by their limited bandwidth and finite size. Furthermore, it outlines the inherent difficulties for time and frequency domain considerations. The deconvolution theory for source function extraction in the time and frequency domains in the presence of noise is provided and applied to measured data. These considerations give the background for further studies to develop the dispersion relationship of a plate with fidelity and bandwidth similar to results possible with laser ultrasonics, but made using mounted piezoelectric sources.
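Frequency-domain deconvolution of a source function under noise, of the kind referred to here, can be sketched with a Wiener-style regularized division. This is a minimal one-dimensional sketch on synthetic data, assuming the measured signal is the source convolved with a known system impulse response plus noise; the thesis treats the full transducer/plate problem in both domains.

```python
import numpy as np

def wiener_deconvolve(measured, system_ir, noise_power=1e-3):
    """Estimate the source function from measured = source * system_ir + noise,
    using a Wiener-style regularized division in the frequency domain."""
    n = len(measured)
    H = np.fft.rfft(system_ir, n)
    Y = np.fft.rfft(measured, n)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_power)   # regularized inverse
    return np.fft.irfft(Y * W, n)

# Synthetic demo: a short Gaussian source pulse through a ringing system response.
rng = np.random.default_rng(0)
t = np.arange(1024)
source = np.exp(-0.5 * ((t - 100) / 10.0) ** 2)
system_ir = np.exp(-t / 80.0) * np.sin(2 * np.pi * t / 40.0)
measured = np.convolve(source, system_ir)[: len(t)] + 0.01 * rng.standard_normal(len(t))

estimate = wiener_deconvolve(measured, system_ir)
print(float(np.abs(estimate[:200] - source[:200]).max()))
```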
APA, Harvard, Vancouver, ISO, and other styles
21

Chen, Sha. "Digital image processing-based numerical methods for mechanics of heterogeneous geomaterials." Click to view the E-thesis via HKUTO, 2005. http://sunzi.lib.hku.hk/hkuto/record/B36357765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Chen, Sha, and 陳沙. "Digital image processing-based numerical methods for mechanics of heterogeneous geomaterials." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B36357765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Seifi, Mozhdeh. "Signal processing methods for fast and accurate reconstruction of digital holograms." Phd thesis, Université Jean Monnet - Saint-Etienne, 2013. http://tel.archives-ouvertes.fr/tel-01004605.

Full text
Abstract:
Techniques for fast, 3D, quantitative microscopy are of great interest in many fields. In this context, in-line digital holography has significant potential due to its relatively simple setup (lensless imaging), its three-dimensional character and its temporal resolution. The goal of this thesis is to improve existing hologram reconstruction techniques by employing an "inverse problems" approach. For applications involving objects with parametric shapes, a greedy algorithm was previously proposed which solves the (inherently ill-posed) inversion problem of reconstruction by maximizing the likelihood between a model of holographic patterns and the measured data. The first contribution of this thesis is to reduce the computational costs of this algorithm using a multi-resolution approach (FAST algorithm). For the second contribution, a "matching pursuit" type of pattern recognition approach is proposed for hologram reconstruction of volumes containing parametric objects, or non-parametric objects of a few shape classes. This method finds the set of diffraction patterns closest to the measured data using a diffraction pattern dictionary. The size of the dictionary is reduced by employing a truncated singular value decomposition to obtain a low-cost algorithm. The third contribution of this thesis was carried out in collaboration with the Laboratory of Fluid Mechanics and Acoustics of Lyon (LMFA). The greedy algorithm is used in a real application: the reconstruction and tracking of free-falling, evaporating ether droplets. In all the proposed methods, special attention has been paid to improving the accuracy of reconstruction as well as to reducing the computational costs and the number of parameters to be tuned by the user (so that the proposed algorithms can be used with little or no supervision). A Matlab® toolbox (accessible online) has been developed as part of this thesis.
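The matching-pursuit-with-compressed-dictionary idea can be sketched in one dimension with a random stand-in dictionary. This is a schematic illustration under stated assumptions, not the thesis's reconstruction algorithm: real diffraction-pattern dictionaries are two-dimensional and far more structured, and the truncation rank here is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_atoms, rank = 256, 500, 120

# Dictionary of (here random, unit-norm) stand-in "diffraction pattern" atoms.
D = rng.standard_normal((n_pixels, n_atoms))
D /= np.linalg.norm(D, axis=0)

# Truncated SVD compresses the dictionary so correlation computations are
# cheaper; some selection accuracy is traded for speed.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
D_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
D_low /= np.linalg.norm(D_low, axis=0)

def matching_pursuit(signal, dictionary, n_iter=3):
    """Greedy matching pursuit: repeatedly pick the atom most correlated
    with the residual and subtract its contribution."""
    residual, picks = signal.copy(), []
    for _ in range(n_iter):
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))
        picks.append(k)
        residual = residual - corr[k] * dictionary[:, k]
    return picks, residual

# Synthesize "data" from three known atoms and try to recover them.
true_atoms = [7, 150, 322]
data = D[:, true_atoms] @ np.array([2.0, -1.5, 1.0])
picks, residual = matching_pursuit(data, D_low)
print("true atoms:", sorted(true_atoms), "picked:", sorted(picks),
      "residual norm:", round(float(np.linalg.norm(residual)), 3))
```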
APA, Harvard, Vancouver, ISO, and other styles
24

Hickish, Jack. "Digital signal processing methods for large-N, low-frequency radio telescopes." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:7d983fb3-9411-4906-92cd-70e2c1040b54.

Full text
Abstract:
Current attempts to make precision measurements of the HI power spectrum at high redshifts have led to the construction of several low-frequency, large-N, interferometric arrays. The computational demands of digital correlators required by these arrays present a significant challenge. These demands stem from the treatment of radio telescopes as collections of two-element interferometers, which results in the need to multiply O(N²) pairs of antenna signals in an N-element array. Given the unparalleled flexibility offered by modern digital processing systems, it is apt to consider whether a different way of treating the signals from antennas in an array might be fruitful in current and future radio telescopes. Such methods potentially avoid the unfavourable N² scaling of computation rate with array size. In this thesis I examine the prospect of using direct-imaging methods to map the sky without first generating correlation matrices. These methods potentially provide great computational savings by creating images using efficient, FFT-based algorithms. This thesis details the design and deployment of such a system for the Basic Element of SKA Training II (BEST-2) array in Medicina, Italy. Here the 32-antenna BEST-2 array is used as a test bed for comparison of FX correlation and direct-imaging systems, and to provide a frontend for a real-time transient event detection pipeline. Even in the case of traditional O(N²) correlation methods, signal processing algorithms can be significantly optimized to deliver large performance gains. In this thesis I present a new mechanism for optimizing the cross-correlation operation on Field Programmable Gate Array (FPGA) hardware. This implementation is shown to achieve a 75% reduction in multiplier usage, and has a variety of benefits over existing optimization strategies. Finally, this thesis turns its focus towards The Square Kilometre Array (SKA). When constructed, the SKA will be the world's largest radio telescope and will comprise a variety of arrays targeting different observing frequencies and science goals. The low-frequency component of the SKA (SKA-low) will feature ~250,000 individual antennas, sub-divided into a number of stations. This thesis explores the impact of the station size on the computational requirements of SKA-low, investigating the optimal array configuration and signal processing realizations.
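The conventional FX correlation step whose O(N²) cost motivates this work can be sketched as follows: channelize each antenna's voltage stream with an FFT (the "F" stage), then cross-multiply every antenna pair per channel (the "X" stage). This is a textbook software sketch on synthetic data, not the author's FPGA correlator or the direct-imaging system.

```python
import numpy as np

def fx_correlate(voltages, n_chan=64):
    """Conventional FX correlator: FFT (the 'F' stage) each antenna's voltage
    stream into channels, then cross-multiply ('X') every antenna pair and
    accumulate.  The X stage scales as O(N^2) in the number of antennas."""
    n_ant, n_samp = voltages.shape
    n_spectra = n_samp // n_chan
    # F stage: per-antenna channelization.
    spectra = np.fft.fft(voltages[:, : n_spectra * n_chan]
                         .reshape(n_ant, n_spectra, n_chan), axis=2)
    # X stage: visibility matrix for every channel, averaged over spectra.
    vis = np.einsum("atc,btc->abc", spectra, np.conj(spectra)) / n_spectra
    return vis            # shape (n_ant, n_ant, n_chan)

rng = np.random.default_rng(0)
n_ant = 32                               # BEST-2-sized array, for scale
common = rng.standard_normal(8192)       # a correlated "sky" component
volts = common + 0.5 * rng.standard_normal((n_ant, 8192))
vis = fx_correlate(volts)
print(vis.shape, abs(vis[0, 1]).mean())  # off-diagonal terms show the correlation
```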
APA, Harvard, Vancouver, ISO, and other styles
25

Giddens, Spencer. "Applications of Mathematical Optimization Methods to Digital Communications and Signal Processing." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8601.

Full text
Abstract:
Mathematical optimization is applicable to nearly every scientific discipline. This thesis specifically focuses on optimization applications to digital communications and signal processing. Within the digital communications framework, the channel encoder attempts to encode a message from a source (the sender) in such a way that the channel decoder can utilize the encoding to correct errors in the message caused by the transmission over the channel. Low-density parity-check (LDPC) codes are an especially popular class of codes for this purpose. Following the channel encoder in the digital communications framework, the modulator converts the encoded message bits to a physical waveform, which is sent over the channel and converted back to bits at the demodulator. The modulator and demodulator present special challenges for what is known as the two-antenna problem. The main results of this work are two algorithms related to the development of optimization methods for LDPC codes and the two-antenna problem. Current methods for optimization of LDPC codes analyze the degree distribution pair asymptotically as block length approaches infinity. This effectively ignores the discrete nature of the space of valid degree distribution pairs for LDPC codes of finite block length. While large codes are likely to conform reasonably well to the infinite block length analysis, shorter codes have no such guarantee. Chapter 2 more thoroughly introduces LDPC codes, and Chapter 3 presents and analyzes an algorithm for completely enumerating the space of all valid degree distribution pairs for a given block length, code rate, maximum variable node degree, and maximum check node degree. This algorithm is then demonstrated on an example LDPC code of finite block length. Finally, we discuss how the result of this algorithm can be utilized by discrete optimization routines to form novel methods for the optimization of small block length LDPC codes. In order to solve the two-antenna problem, which is introduced in greater detail in Chapter 2, it is necessary to obtain reliable estimates of the timing offset and channel gains caused by the transmission of the signal through the channel. The timing offset estimator can be formulated as an optimization problem, and an optimization method used to solve it was previously developed. However, this optimization method does not utilize gradient information, and as a result is inefficient. Chapter 4 presents and analyzes an improved gradient-based optimization method that solves the two-antenna problem much more efficiently.
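The flavour of the finite-block-length enumeration can be illustrated with a toy, node-perspective version: for a chosen block length, rate and maximum degrees, list every way of assigning degrees to variable and check nodes such that both sides of the Tanner graph have the same number of edges. This is only a schematic stand-in under those assumptions, not the thesis's algorithm.

```python
def node_count_vectors(n_nodes, degrees):
    """All ways to split `n_nodes` nodes among the allowed `degrees`
    (returned as tuples of counts, one count per degree)."""
    if len(degrees) == 1:
        return [(n_nodes,)]
    out = []
    for c in range(n_nodes + 1):
        for rest in node_count_vectors(n_nodes - c, degrees[1:]):
            out.append((c,) + rest)
    return out

def enumerate_degree_pairs(n, rate, dv_max, dc_max):
    """Enumerate (variable, check) degree-count pairs for an (n, rate) LDPC code
    such that both sides of the Tanner graph have the same number of edges.
    A toy, node-perspective sketch of the enumeration idea."""
    m = round(n * (1 - rate))                  # number of check nodes
    v_degs = list(range(2, dv_max + 1))        # variable node degrees >= 2
    c_degs = list(range(2, dc_max + 1))        # check node degrees >= 2
    pairs = []
    for v in node_count_vectors(n, v_degs):
        v_edges = sum(count * d for count, d in zip(v, v_degs))
        for c in node_count_vectors(m, c_degs):
            if v_edges == sum(count * d for count, d in zip(c, c_degs)):
                pairs.append((dict(zip(v_degs, v)), dict(zip(c_degs, c))))
    return pairs

pairs = enumerate_degree_pairs(n=12, rate=0.5, dv_max=3, dc_max=6)
print(len(pairs), "valid degree distribution pairs; first:", pairs[0])
```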
APA, Harvard, Vancouver, ISO, and other styles
26

MORGAN, KEITH PATRICK. "IMPROVED METHODS OF IMAGE SMOOTHING AND RESTORATION (NONSTATIONARY MODELS)." Diss., The University of Arizona, 1985. http://hdl.handle.net/10150/187959.

Full text
Abstract:
The problems of noise removal, and simultaneous noise removal and deblurring of imagery are common to many areas of science. An approach which allows for the unified treatment of both problems involves modeling imagery as a sample of a random process. Various nonstationary image models are explored in this context. Attention is directed to identifying the model parameters from imagery which has been corrupted by noise and possibly blur, and the use of the model to form an optimal reconstruction of the image. Throughout the work, emphasis is placed on both theoretical development and practical considerations involved in achieving this reconstruction. The results indicate that the use of nonstationary image models offers considerable improvement over traditional techniques.
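The nonstationary-statistics idea in its simplest form is a locally adaptive (Lee-style) smoother: estimate a spatially varying mean and variance, and shrink each pixel toward the local mean in proportion to how much of the local variance is just noise. The following is only that classic baseline on synthetic data, not the dissertation's specific models or estimators.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def locally_adaptive_smooth(noisy, noise_var, window=7):
    """Lee-style adaptive smoother: estimate a nonstationary local mean and
    variance, then shrink each pixel toward the local mean in proportion to
    how much of the local variance is attributable to noise."""
    local_mean = uniform_filter(noisy, size=window)
    local_sq_mean = uniform_filter(noisy ** 2, size=window)
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 1e-12)
    gain = np.maximum(local_var - noise_var, 0.0) / local_var   # ~0 in flat areas, ~1 at edges
    return local_mean + gain * (noisy - local_mean)

rng = np.random.default_rng(0)
clean = np.zeros((128, 128)); clean[:, 64:] = 1.0        # a single vertical edge
noise_var = 0.05
noisy = clean + rng.normal(0.0, np.sqrt(noise_var), clean.shape)
smoothed = locally_adaptive_smooth(noisy, noise_var)
print("MSE noisy:", float(np.mean((noisy - clean) ** 2)),
      "MSE smoothed:", float(np.mean((smoothed - clean) ** 2)))
```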
APA, Harvard, Vancouver, ISO, and other styles
27

Mattila, J. (Juho). "Customer experience management in digital channels with marketing automation." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201601291092.

Full text
Abstract:
Superior customer experience and successful customer experience management can create a competitive advantage that is difficult for competitors to match. However, to manage customer experience successfully, it needs to be considered in the context of multiple marketing channels and multiple customer segments, which requires a lot of resources. In today's difficult financial conditions, the use of technology and marketing automation offers companies a chance to work efficiently across marketing channels and segments. Managing customer relationships and customer experience has been studied quite extensively before. In recent years the scope of the research has moved from customer relationship management (CRM) to customer experience management (CEM). Customer segmentation originates from the 1950s and is one of the fundamental concepts of marketing. Digital marketing and the use of technology in marketing have also been studied quite a lot in the past decades. However, using technology, and especially marketing automation, as a tool to manage customer experience has not been studied. Since customer experience is seen as one of the most important concepts of marketing in the coming years, and companies are constantly looking for ways to enhance their marketing with better use of software, these two concepts make an interesting combination. In this study I have combined customer experience management with marketing automation. My main research question is how customer experience can be managed with marketing automation. The sub-questions are what kinds of channels can be used in digital marketing and how customers can be segmented effectively. I answer these questions by reviewing the research that has already been done in these areas and with a case study of the customer experience and marketing automation of a company called Fintoto. The literature research covers customer experience and its management, segmentation, digital marketing channels, and the use of technology in marketing. Based on this research, a framework is built and then used in the case study to analyse the customer experience and marketing automation of Fintoto. The main result is that customer experience can be managed with marketing automation. The main argument is that, as customer experience happens in the touchpoints where the customer interacts with the company, marketing automation technology allows the response to differ and to be optimal for each customer. The data gathered from customers can be used with marketing automation to tailor the right response or message for each customer.
APA, Harvard, Vancouver, ISO, and other styles
28

Wang, Jue. "Foreground segmentation in images and video : methods, systems, and applications /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/6130.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Liu, Xianhua, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW. "Blind source separation methods and their mechanical applications." Awarded by: University of New South Wales. School of Mechanical and Manufacturing Engineering, 2006. http://handle.unsw.edu.au/1959.4/24961.

Full text
Abstract:
Blind source separation (BSS) is a modern signal processing technique which recovers both the unknown sources and the unknown mixing systems from only measured mixtures of signals. It has applications in diverse fields such as communication, image processing, geological exploration and biomedical signal processing. This project studies the BSS problem, develops separation methods and reveals their potential for mechanical engineering applications. There are two models for blind source separation, corresponding to the two ways that sources are mixed: the instantaneous mixing model and the convolved mixing model. The author carried out a theoretical study of the first model by proposing an idea called Redundant Data Elimination (RDE), which leads to a geometric interpretation of the model, explains that the circular distribution property is the reason why Gaussian signal mixtures cannot be separated, and shows that this idea can improve separation accuracy for unsymmetrically distributed sources. This new idea enabled the evaluation and comparison of two well-known algorithms and the proposal of a simplified algorithm based on joint approximate diagonalization of fourth-order cumulant matrices, which is further developed by determining an optimized parameter value for separation convergence. Also based on the understanding gained from the RDE, an outlier spherical projection method is proposed to improve separation accuracy against outlier errors. Mechanical vibration and acoustic problems belong to the second model. After a theoretical study of the problem and the model, a novel application of the blind least mean square algorithm, using Gray's variable norm as the cost function, is applied to engine vibration data to separate piston slap, fuel injection noise and cylinder pressure effects. Further, the algorithm is combined with a deflation algorithm for successive subtraction of recovered source responses from the measured mixture to enable the recovery of more sources. The algorithms are verified to be successful by simulation, and the separated engine sources are shown to be reasonable by analysing the engine operation and the physical properties of the sources. The author also studied the relationship between these two models, the problems of different approaches for solving the model, such as the frequency domain approach and the Bussgang approach, and set out future research interests.
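The instantaneous mixing model discussed here can be illustrated with an off-the-shelf independent component analysis routine. This is a toy sketch using scikit-learn's FastICA on synthetic non-Gaussian sources; it is not the author's RDE-based, joint-diagonalization or Bussgang algorithms.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4000)

# Two non-Gaussian sources (instantaneous mixing model).
s1 = np.sign(np.sin(2 * np.pi * 13 * t))          # square wave
s2 = ((7 * t) % 1.0) - 0.5                         # sawtooth
S = np.c_[s1, s2]

A = np.array([[1.0, 0.6], [0.4, 1.0]])             # unknown mixing matrix
X = S @ A.T                                        # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                       # recovered sources (up to scale/order)

# Correlation with the true sources shows each component matches one source.
for i in range(2):
    print([abs(np.corrcoef(S_hat[:, i], S[:, j])[0, 1]).round(2) for j in range(2)])
```

The permutation and scaling ambiguity visible in the output is intrinsic to BSS, which is one reason the thesis's geometric (RDE) view of the source distributions is useful.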
APA, Harvard, Vancouver, ISO, and other styles
30

Yung, H. C. "Recursive and concurrent VLSI architectures for digital signal processing." Thesis, University of Newcastle Upon Tyne, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.481423.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Xie, Xuemei, and 謝雪梅. "New design and realization methods for perfect reconstruction nonuniform filter banks." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B31246175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Persson, Per. "Annealing Based Optimization Methods for Signal Processing Applications." Doctoral thesis, Ronneby : Department of Telecommunications and Signal Processing, Blekinge Institute of Technology, 2003. http://www.bth.se/fou/forskinfo.nsf/01f1d3898cbbd490c12568160037fb62/da44274e9f86a54ec1256d260044e0dd!OpenDocument.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Jarrett, David Ward 1963. "Digital image noise smoothing using high frequency information." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276599.

Full text
Abstract:
The goal of digital image noise smoothing is to smooth noise in the image without smoothing edges and other high frequency information. Statistically optimal methods must use accurate statistical models of the image and noise. Subjective methods must also characterize the image. Two methods using high frequency information to augment existing noise smoothing methods are investigated: two component model (TCM) smoothing and second derivative enhancement (SDE) smoothing. TCM smoothing applies an optimal noise smoothing filter to a high frequency residual, extracted from the noisy image using a two component source model. The lower variance and increased stationarity of the residual, compared to the original image, increase this filter's effectiveness. SDE smoothing enhances the edges of the low-pass filtered noisy image with the second derivative, extracted from the noisy image. Both methods are shown to perform better than the methods they augment, through objective (statistical) and subjective (visual) comparisons.
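The SDE idea can be sketched very simply: low-pass filter the noisy image, then add back a second-derivative (Laplacian) term extracted from the noisy image so that the edges blurred by the low-pass filter are restored. This is a rough sketch with placeholder filter widths and weighting, not the thesis's exact formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def sde_smooth(noisy, sigma=1.5, alpha=0.4):
    """Second-derivative-enhancement smoothing (illustrative): smooth with a
    Gaussian low-pass, then subtract a scaled Laplacian of the lightly
    pre-smoothed noisy image to sharpen the edges the low-pass blurred."""
    lowpass = gaussian_filter(noisy, sigma)
    second_derivative = laplace(gaussian_filter(noisy, 1.0))   # noise-limited Laplacian
    return lowpass - alpha * second_derivative

rng = np.random.default_rng(0)
clean = np.zeros((128, 128)); clean[:, 64:] = 1.0
noisy = clean + rng.normal(0.0, 0.2, clean.shape)
result = sde_smooth(noisy)
print("MSE noisy:", float(np.mean((noisy - clean) ** 2)),
      "MSE SDE  :", float(np.mean((result - clean) ** 2)))
```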
APA, Harvard, Vancouver, ISO, and other styles
34

Pasanen, P. (Panu). "Applying usability testing methods into game development:case Casters of Kalderon." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201606032208.

Full text
Abstract:
Small game development companies don't necessarily have strict procedures or methods when it comes to testing their product. This was the case with the Oulu-based game company Meizi Games, who felt the need to conduct a testing activity for their game Casters of Kalderon, which was still in development. Casters of Kalderon was tested through usability testing in order to find out the issues it currently has. These issues would be communicated to the developers by conducting the testing with potential customers, e.g. people who play mobile games. The testing activities were observed and feedback was gathered, and the results were delivered to and discussed with the developers of the game. As Casters of Kalderon is a relatively complex strategy game, the biggest usability issue it has is how poorly it introduces the game's many mechanics to new players. The game currently has a tutorial mechanism that aims to help the player get familiar with the game, but it is poorly implemented, as the test results show. This would often cause confusion and frustration in the testers, who had no previous experience with the game. There were also some usability-related issues that got in the way of the players being able to enjoy the game. Game developers need to pay close attention to the design of tutorials for complex games such as this one. Meizi Games found the usability testing activities to be a feasible way to find out the flaws and issues in their game, and promised to work on the issues found to improve the game in the future.
APA, Harvard, Vancouver, ISO, and other styles
35

Popescu, George. "Digital Signal Processing Methods for Safety Systems Employed in Nuclear Power Industry." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1479815935917872.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Scott-Fleming, Ian Crerar 1955. "DIGITAL SIGNAL PROCESSING METHODS FOR ESTIMATING SOLAR RADIOMETER ZERO AIRMASS INTERCEPT PARAMETERS." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276442.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Celano, Daniel Guiseppe Cyril. "Studies of texture and other methods for segmentation of digital images." Thesis, Royal Holloway, University of London, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.363061.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Sulley, C. E. "A functional multiprocessor system for real-time digital signal processing." Thesis, University of Bath, 1985. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.370454.

Full text
Abstract:
This thesis is concerned primarily with the architecture of Digital Signal Computers. The work is supported by the design, development and application of a novel Digital Signal Computer system, the MAC68. The MAC68 is a Functional Multiprocessor, using two independent processors, one of which executes general-purpose tasks, and the other executes sequences of arithmetic. The particular MAC68 design was arrived at after careful evaluation of existing Digital Signal Computer architectures. MAC68 features are fully evaluated via its application to the Sub-Band Coding of speech, and in particular by the development of a 16Kb/s Sub-band Coder using six sub-bands. MAC68 performance was found to be comparable to that of current DSP micros for basic digital filter tasks, and superior for FFT tasks. The MAC68 architecture is a balance of high-speed arithmetic and general-purpose capabilities, and is likely to have a greater range of application than General-Purpose micros or DSP micros used alone. Suggestions are put forward for MAC68 enhancements utilising state-of-the-art hardware and software technologies. Because of the current widespread use of General-Purpose micros, and because of the possible performance gains to be had with the MAC68-type architecture, it is thought that MAC68 architectural concepts will be of value in the design of future high-performance Digital Signal Computer systems.
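Sub-band coding, the application used to evaluate the MAC68, splits a signal into frequency bands and spends more bits on the perceptually important ones. The following is a toy two-band (Haar) analysis/quantize/synthesize chain in Python, far simpler than the six-band 16Kb/s coder described here and unrelated to the MAC68 hardware itself.

```python
import numpy as np

def analysis(x):
    """Haar two-band analysis: split the signal into low- and high-frequency
    sub-bands, each at half the sample rate."""
    x = x[: len(x) // 2 * 2]
    low = (x[0::2] + x[1::2]) / np.sqrt(2)
    high = (x[0::2] - x[1::2]) / np.sqrt(2)
    return low, high

def synthesis(low, high):
    """Perfect-reconstruction synthesis for the Haar analysis above."""
    x = np.empty(2 * len(low))
    x[0::2] = (low + high) / np.sqrt(2)
    x[1::2] = (low - high) / np.sqrt(2)
    return x

def quantize(band, step):
    return np.round(band / step) * step

fs = 8000
t = np.arange(0, 0.1, 1 / fs)
speechlike = np.sin(2 * np.pi * 300 * t) + 0.2 * np.sin(2 * np.pi * 2500 * t)

low, high = analysis(speechlike)
# Spend more bits (a finer step) on the perceptually dominant low band.
low_q, high_q = quantize(low, 0.01), quantize(high, 0.05)
reconstructed = synthesis(low_q, high_q)
print("SNR (dB):", 10 * np.log10(np.sum(speechlike**2) / np.sum((speechlike - reconstructed)**2)))
```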
APA, Harvard, Vancouver, ISO, and other styles
39

Chambers, Jonathon Arthur. "Digital signal processing algorithms and structures for adaptive line enhancing." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/47797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Leach, Christopher. "Novel Internet based methods for chemical information control." Thesis, Imperial College London, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.300623.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Gunturk, Bahadir K. "Multi-frame information fusion for image and video enhancement." Diss., Available online, Georgia Institute of Technology, 2004:, 2003. http://etd.gatech.edu/theses/available/etd-04072004-180015/unrestricted/gunturk%5Fbahadir%5Fk%5F200312%5Fphd.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Pramila, A. (Anu). "Benchmarking and modernizing print-cam robust watermarking methods on modern mobile phones." Master's thesis, University of Oulu, 2015. http://urn.fi/URN:NBN:fi:oulu-201504021271.

Full text
Abstract:
Digital watermarking has generally been used for copy- and copyright protection, authentication and, more recently, value-added services. It has been restricted to the digital world, but developing methods that can read a watermark from a printed image with a camera phone would make watermark extraction independent of time and place. The aim of this work was to compare print-cam robust watermarking methods and illustrate the problems that emerge when the watermarking methods are implemented on a camera phone. This was achieved by selecting three print-cam robust watermarking methods and implementing them on a camera phone. The robustness of the methods was tested and the results reported. The obtained results showed that some changes are required to make the earliest proposed watermarking methods work on a modern camera phone. The robustness of the methods was in line with the intended applications as well as the camera phone specifications.
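For context, print-cam robust methods build on ordinary digital watermark embedding and detection and add robustness to printing and camera capture. The Python/NumPy sketch below shows only the underlying additive spread-spectrum idea, with hypothetical helper names; it is not one of the three methods compared in the thesis, which additionally handle synchronisation and perspective distortion.

```python
import numpy as np

def embed(image, key, strength=2.0):
    """Add a keyed pseudo-random +/-1 pattern to the cover image (hypothetical helper)."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    return np.clip(image + strength * pattern, 0, 255), pattern

def detect(image, pattern):
    """Correlation detector: a clearly positive score indicates the mark is present."""
    centred = image - image.mean()
    return float(np.sum(centred * pattern) / image.size)

cover = np.random.default_rng(1).integers(0, 256, (256, 256)).astype(float)
marked, pn = embed(cover, key=42)
print(detect(marked, pn), detect(cover, pn))   # the marked score is clearly larger
```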
APA, Harvard, Vancouver, ISO, and other styles
43

Svensson, Stina. "Representing and analyzing 3D digital shape using distance information /." Uppsala : Swedish Univ. of Agricultural Sciences (Sveriges lantbruksuniv.), 2001. http://epsilon.slu.se/avh/2001/91-576-6095-6.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Steggall, Stephen William. "Evolution of Digital Reinstatement Methods Within Private Cadastral Organisations." Queensland University of Technology, 2001. http://eprints.qut.edu.au/15845/.

Full text
Abstract:
Cadastral reinstatement methods within Queensland involve the use of modern digital surveying techniques in combination with traditional non-digital methods of recording and reporting information. This leads to the need to manually enter and re-enter data into a digital format at different stages of a survey. The requirement to lodge survey information with government organisations in a non-digital survey plan format also forces a break in digital data flow throughout the cadastral surveying system, which can only be updated by changes in the lodgement regulations. The private cadastral organisations are predominantly responsible for carrying out the cadastral surveys and the government agencies are primarily responsible for the examination, verification and administration of the cadastral data. These organisations will have no communication link for digital cadastral data until the introduction of digital data lodgement. The digital system within the private cadastral surveying organisations can therefore be considered an independent system, with consideration given to the future introduction of a digital lodgement system at some undefined point. Cadastral surveyors hold large amounts of digital information that is suitable for digital reinstatement systems. This information, if appropriately archived and distributed, has the capacity to meet the needs of reinstatement systems, including serving as an alternative source of the digital information that will eventually be obtained from digital lodgement systems. The existing technology and the private organisation structures are capable of supporting continuous digital data flow and automated systems. This research proposes a process of development for private cadastral organisations to advance from traditional systems to continuous digital data flow and automated processes within their cadastral reinstatement systems. The development process is linked to existing legislation and technology, taking into consideration likely future directions. The current legislative and technological environments within Queensland allow for development towards automated digital systems that will enhance most current cadastral reinstatement systems.
APA, Harvard, Vancouver, ISO, and other styles
45

Gallagher, Mark. "The use of digital signal processing in adaptive HF frequency management." Thesis, University of Hull, 1995. http://hydra.hull.ac.uk/resources/hull:3497.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Nader, Charles. "Enhancing Radio Frequency System Performance by Digital Signal Processing." Licentiate thesis, University of Gävle, Department of Electronics, Mathematics and Natural Sciences, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-7312.

Full text
Abstract:

In this thesis, measurement systems for the characterization of radio frequency power amplifiers are studied. Methods to increase the speed, accuracy and bandwidth, as well as to reduce the sampling requirements and testing cost, are presented. A method for signal shaping aimed at peak-to-average ratio reduction, and the resulting improvements in radio frequency front-end performance, is investigated.

A time domain measurement system intended for fast and accurate measurements and characterization of radio frequency power amplifiers is discussed. An automated, fast and accurate technique for power and frequency sweep measurements is presented. Multidimensional representation of measured figures of merit is evaluated for its importance in the production-testing phase of power amplifiers.

A technique to extend the digital bandwidth of a measurement system is discussed. It is based on the Zhu-Frank generalized sampling theorem, which relaxes the requirements on the sampling rate of the measurement system. Its application to power amplifier behavioral modeling is discussed and evaluated experimentally.

A general method for designing multitone signals for the out-of-band characterization of nonlinear radio frequency modules using harmonic sampling is presented. It is applied to validating power amplifier behavioral models over their out-of-band spectral support when the models are extracted from undersampled data.

A method for unfolding the frequency spectrum of undersampled wideband signals is presented. It is highly relevant to state-of-the-art radio frequency measurement systems, which capture repetitive waveforms at a sampling rate that violates the Nyquist constraint. The method is presented in a compact form, eliminates ambiguities caused by folded frequency spectra lying outside the Nyquist band, and is relevant to calibration.

A convex-optimization-based method for reducing the peak-to-average ratio of orthogonal frequency division multiplexing signals is presented and experimentally validated for a wireless local area network system. Improvements at the radio frequency power amplifier level are investigated with respect to power-added efficiency, output power, and in-band and out-of-band errors. The influence of the power distribution of the excitation signal on power amplifier performance is also evaluated.
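The last paragraph concerns the peak-to-average power ratio (PAPR) of OFDM signals. The Python/NumPy sketch below only computes that ratio for a randomly generated OFDM symbol and contrasts it with naive magnitude clipping; the thesis instead uses a convex optimization formulation, which is not reproduced here, and the sub-carrier count and clipping level are assumptions for the example.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(0)
n_sub = 64                                             # assumed number of sub-carriers
# random QPSK symbols on the sub-carriers; an oversampled IFFT gives the time-domain signal
qpsk = (rng.choice([-1.0, 1.0], n_sub) + 1j * rng.choice([-1.0, 1.0], n_sub)) / np.sqrt(2)
time_signal = np.fft.ifft(qpsk, 4 * n_sub)             # 4x oversampling

# naive PAPR reduction for contrast: clip the magnitude, keep the phase
limit = 1.5 * np.abs(time_signal).mean()
clipped = np.minimum(np.abs(time_signal), limit) * np.exp(1j * np.angle(time_signal))

print(papr_db(time_signal), papr_db(clipped))          # clipping lowers the PAPR,
                                                       # at the cost of signal distortion
```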

APA, Harvard, Vancouver, ISO, and other styles
47

Tsui, Kai-man, and 徐啟民. "New design methods for perfect reconstruction filter banks." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30144991.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Kovesi, Peter. "Invariant measures of image features from phase information." University of Western Australia. Dept. of Computer Science, 1996. http://theses.library.uwa.edu.au/adt-WU2003.0006.

Full text
Abstract:
If reliable and general computer vision techniques are to be developed it is crucial that we find ways of characterizing low-level image features with invariant quantities. For example, if edge significance could be measured in a way that was invariant to image illumination and contrast, higher-level image processing operations could be conducted with much greater confidence. However, despite their importance, little attention has been paid to the need for invariant quantities in low-level vision for tasks such as feature detection or feature matching. This thesis develops a number of invariant low-level image measures for feature detection, local symmetry/asymmetry detection, and for signal matching. These invariant quantities are developed from representations of the image in the frequency domain. In particular, phase data is used as the fundamental building block for constructing these measures. Phase congruency is developed as an illumination and contrast invariant measure of feature significance. This allows edges, lines and other features to be detected reliably, and fixed thresholds can be applied over wide classes of images. Points of local symmetry and asymmetry in images give rise to special arrangements of phase, and these too can be characterized by invariant measures. Finally, a new approach to signal matching that uses correlation of local phase and amplitude information is developed. This approach allows reliable phase based disparity measurements to be made, overcoming many of the difficulties associated with scale-space singularities.
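As a rough illustration of the phase congruency measure described above, the following Python sketch computes a basic one-dimensional phase congruency from a small bank of log-Gabor-style band-pass filters; the filter centres and bandwidth are assumptions, and the thesis works with two-dimensional log-Gabor filters and adds noise compensation, so this shows only the core idea of measuring how well the phases of the band responses agree.

```python
import numpy as np
from scipy.signal import hilbert

def phase_congruency_1d(x, centres=(0.02, 0.04, 0.08, 0.16), sigma=0.55):
    """PC = |sum of complex band responses| / sum of their amplitudes, per sample."""
    n = len(x)
    freqs = np.fft.rfftfreq(n)                   # normalised frequencies 0 .. 0.5
    X = np.fft.rfft(x)
    resp_sum = np.zeros(n, dtype=complex)
    amp_sum = np.zeros(n)
    for f0 in centres:
        # log-Gabor-style band-pass filter centred at f0
        g = np.exp(-(np.log((freqs + 1e-12) / f0) ** 2) / (2 * np.log(sigma) ** 2))
        g[0] = 0.0                               # remove the DC component
        band = np.fft.irfft(X * g, n)            # even-symmetric (real) response
        analytic = hilbert(band)                 # adds the odd-symmetric part
        resp_sum += analytic
        amp_sum += np.abs(analytic)
    return np.abs(resp_sum) / (amp_sum + 1e-12)

x = np.concatenate([np.zeros(128), np.ones(128)])    # a step edge at index 128
pc = phase_congruency_1d(x)
# pc is close to 1 near the step (and near the circular wrap-around), low elsewhere
```

Because the measure is a ratio of amplitudes, it is unchanged when the signal is scaled, which is what gives the illumination and contrast invariance discussed in the abstract.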
APA, Harvard, Vancouver, ISO, and other styles
49

Ou, Shiyan, Christopher S. G. Khoo, and Dion H. Goh. "Automatic multi-document summarization for digital libraries." School of Communication & Information, Nanyang Technological University, 2006. http://hdl.handle.net/10150/106042.

Full text
Abstract:
With the rapid growth of the World Wide Web and online information services, more and more information is available and accessible online. Automatic summarization is an indispensable solution to reduce the information overload problem. Multi-document summarization is useful to provide an overview of a topic and allow users to zoom in for more details on aspects of interest. This paper reports three types of multi-document summaries generated for a set of research abstracts, using different summarization approaches: a sentence-based summary generated by a MEAD summarization system that extracts important sentences using various features, another sentence-based summary generated by extracting research objective sentences, and a variable-based summary focusing on research concepts and relationships. A user evaluation was carried out to compare the three types of summaries. The evaluation results indicated that the majority of users (70%) preferred the variable-based summary, while 55% of the users preferred the research objective summary, and only 25% preferred the MEAD summary.
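As an illustration of the sentence-extraction style of summarization mentioned above, the following Python sketch scores sentences by their mean TF-IDF weight and keeps the top-scoring ones. It is not the MEAD system, which combines several features such as centroid, position and length, and the sentence list is invented purely to exercise the function.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def extractive_summary(sentences, n_keep=2):
    """Keep the n_keep sentences with the highest mean TF-IDF weight, in original order."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    scores = np.asarray(tfidf.mean(axis=1)).ravel()
    keep = sorted(np.argsort(scores)[-n_keep:])
    return [sentences[i] for i in keep]

# invented example sentences, only to demonstrate the call
abstract_sentences = [
    "Automatic summarization reduces the information overload problem.",
    "Multi-document summarization provides an overview of a topic.",
    "Users can zoom in for more details on aspects of interest.",
    "A user evaluation compared three types of summaries.",
]
print(extractive_summary(abstract_sentences))
```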
APA, Harvard, Vancouver, ISO, and other styles
50

Mercier, Wilfred Jean-Baptiste. "Generation of Forest Stand Type Maps Using High-Resolution Digital Imagery." Fogler Library, University of Maine, 2009. http://www.library.umaine.edu/theses/pdf/MercierWJB2009.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
