
Dissertations on the topic "Embedding techniques"


Consult the top 50 dissertations for research on the topic "Embedding techniques".


1

Thanh, Trung Huynh. "On leverage embedding techniques for network alignment." Thesis, Griffith University, 2022. http://hdl.handle.net/10072/416055.

Full text
Abstract:
Networks are natural but powerful structures that capture relationships between different entities in many domains, such as social networks, citation networks, and bioinformatic networks. In many applications that require the analysis of multiple networks, network alignment, the task of recognizing node correspondence across different networks, plays an important role. A well-known application of network alignment is identifying which accounts in different social networks belong to the same person. Given the appeal of network alignment, there is a rich body of research that aims to tackle this problem. However, many research challenges remain, such as enhancing accuracy and improving scalability in the face of the information explosion. With this motivation, within the scope of our PhD work we address three crucial challenges in the network alignment literature, namely (i) enhancing the scalability of network alignment on large-scale graphs, (ii) enhancing the robustness of network alignment under adversarial conditions, and (iii) integrating multi-modal information into network aligners. To do so, we focus on proposing aligner frameworks for different types of input attributed networks, from simple to complex. Each framework attempts to answer all three research questions simultaneously by leveraging embedding techniques, where the input networks are embedded into insightful, low-dimensional vector spaces. This enriches each node's individual context with multi-modal information, thus facilitating the distinction between nodes. The learnt embeddings also enable faster alignment retrieval by direct vector comparison. Our proposed techniques improve upon the state of the art for different types of attributed networks and cover a large range of applications.
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Info & Comm Tech
Institute of Integrated and Intelligent Systems
Science, Environment, Engineering and Technology
Full Text
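The retrieval step the abstract describes, alignment by direct vector comparison, is easy to make concrete. Below is a minimal sketch with synthetic embeddings (our toy illustration, not the author's framework; any node-embedding model could supply the vectors):

import numpy as np

rng = np.random.default_rng(0)
emb_source = rng.normal(size=(5, 16))                 # 5 nodes, 16-dim embeddings
perm = rng.permutation(5)                             # ground-truth correspondence
emb_target = emb_source[perm] + 0.01 * rng.normal(size=(5, 16))  # noisy counterpart network

# Cosine similarity between every source/target node pair.
a = emb_source / np.linalg.norm(emb_source, axis=1, keepdims=True)
b = emb_target / np.linalg.norm(emb_target, axis=1, keepdims=True)
similarity = a @ b.T

# Each source node is aligned to its most similar target node.
alignment = similarity.argmax(axis=1)
print(np.array_equal(perm[alignment], np.arange(5)))  # True: the permutation is recovered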
2

Chochlidakis, Georgios. "Mobility-aware virtual network embedding techniques for next-generation mobile networks." Thesis, King's College London (University of London), 2018. https://kclpure.kcl.ac.uk/portal/en/theses/mobilityaware-virtual-network-embedding-techniques-for-nextgeneration-mobile-networks(174e714f-2a4a-447a-bcd5-d526170377fd).html.

Full text
Abstract:
Network virtualisation has become one of the most prominent solutions for sustainability in the face of the dramatic increase in data demand in next-generation mobile networks. Apart from increasing overall infrastructure utilisation, it also greatly improves the manageability, scalability and robustness of the network. In order to allow multiple virtual networks to coexist in the same substrate network, efficient network sharing techniques are imperative. The main purpose of this work is to provide a holistic optimisation framework for virtual network embedding solutions in which the actual user mobility effect is explicitly considered. First, the main focus is on the study of the mobility effect and the impact of mobility management techniques on the end-to-end communication of the mobile user. A hybrid-distributed mobility management scheme is proposed and compared against the latest mobility management schemes. Then, an optimisation framework for efficient mobility-aware virtual network embedding is proposed and evaluated by comparison with other works from the literature. Moving deeper into the area of virtual network embedding, the focus turns to minimising the end-to-end delay and providing service differentiation, allowing delay-sensitive services to use the formed virtual networks with the minimum possible delay, as opposed to other, more elastic services that use the same substrate network. The last part of this work is the study and analysis of the stochastic nature of the virtual network embedding parameters and the proposal of an optimisation framework for adjustable-robustness virtual network embedding. Driven by the benefits of virtualising the network and its functions, research as well as industry are expected to exploit the merits of this concept to a greater degree than today. The coexistence of multiple tenants will not only greatly change the network industry from a business perspective, but will also emphasise the need for more efficient and flexible network sharing techniques. This work belongs to the initial efforts to embrace and adopt the virtualisation concept in next-generation wireless networks.
3

Obidallah, Waeal. "Multi-Layer Web Services Discovery using Word Embedding and Clustering Techniques." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/41840.

Full text
Abstract:
Web services discovery is the process of finding the right Web services that best match the end-users' functional and non-functional requirements. Artificial intelligence, natural language processing, data mining, and text mining techniques have been applied by researchers in Web services discovery to facilitate the process of matchmaking. This thesis contributes to the area of Web services discovery and recommendation, adopting the Design Science Research Methodology to guide the development of useful knowledge, including design theory and artifacts. The lack of a comprehensive review of Web services discovery and recommendation in the literature motivated us to conduct a systematic literature review. Our main purpose in conducting the systematic literature review was to identify and systematically compare current clustering and association rules techniques for Web services discovery and recommendation by providing answers to various research questions, investigating the prior knowledge, and identifying gaps in the related literature. We then propose a conceptual model and a typology of Web services discovery systems. The conceptual model provides a high-level representation of Web services discovery systems, including their various elements, tasks, and relationships. The proposed typology of Web services discovery systems is composed of five groups of characteristics: storage and location characteristics, formalization characteristics, matchmaking characteristics, automation characteristics, and selection characteristics. We reference the typology to compare Web services discovery methods and architectures from the extant literature by linking them to the five proposed characteristics. We employ the proposed conceptual model with its specified characteristics to design and develop a multi-layer data mining architecture for Web services discovery using word embedding and clustering techniques. The proposed architecture consists of five layers: Web services description and data preprocessing; word embedding and representation; syntactic similarity; semantic similarity; and clustering. In the first layer, we identify the steps to parse and preprocess the Web services documents. Bag of Words with Term Frequency-Inverse Document Frequency and three word-embedding models are employed for Web services representation in the second layer. Then, in the third layer, four distance measures, including Cosine, Euclidean, Minkowski, and Word Mover, are studied to find the similarities between Web services documents. In layer four, WordNet and Normalized Google Distance are employed to represent and find the similarity between Web services documents. Finally, in the fifth layer, three clustering algorithms, including affinity propagation, K-means, and hierarchical agglomerative clustering, are investigated to cluster Web services based on the observed documents' similarities. We demonstrate how each component of the five layers is employed in the process of Web services clustering using randomly selected Web services documents. We conduct an experimental analysis to cluster Web services using a collected dataset of Web services documents and evaluate their clustering performance. Using a ground truth for evaluation purposes, we observe that clusters built based on the word embedding models performed better than those built using the Bag of Words with Term Frequency-Inverse Document Frequency model. Among the three word embedding models, the pre-trained Word2Vec skip-gram model reported higher performance in clustering Web services. Among the three semantic similarity measures, path-based WordNet similarity reported higher clustering performance. Considering the different word representation models and syntactic and semantic similarity measures, the affinity propagation clustering technique performed best in discovering similarities among Web services.
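Layers two, three and five of such a pipeline can be illustrated in a few lines with scikit-learn (a minimal sketch on toy service descriptions, not the thesis's implementation):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import AffinityPropagation

docs = [
    "weather forecast temperature service",
    "currency exchange rate conversion service",
    "city weather conditions and humidity",
    "convert money between currencies",
]
X = TfidfVectorizer().fit_transform(docs)   # layer 2: Bag of Words / TF-IDF representation
sim = cosine_similarity(X)                  # layer 3: syntactic (cosine) similarity
ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(sim)
print(ap.labels_)                           # layer 5: cluster assignments per service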
4

Ojike, Uzoma. "Combining tools and techniques for embedding an ecosystem approach in spatial planning." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/11196.

Full text
Abstract:
Despite the attention garnered by sustainability in the last three decades and the advances in its tools and techniques, we are no closer to attaining sustainability now than we were at the start. This elusiveness has been attributed to, among other things, the lack of a clearly defined global method for evaluating sustainability and poor integration into sector, national and international policies and decision-making. A clear limitation observed in most concepts and methods is their inability to effectively integrate ecological, economic and social sustainability during assessment. Rather, there is a tendency to assess them separately and integrate them after the assessment. This process often leaves loopholes in sustainability assessment, as the trade-offs created often favour economic sustainability but more rarely favour environmental, or even social, sustainability. In order to address this limitation, the Millennium Ecosystem Assessment (MEA) in 2005 recognized that the complex interactions between these ecological, economic and social processes have to be understood, and established a universal valuation concept known as ecosystem services, which can be used in sustainability assessment and Spatial Planning. Ecosystem services are the benefits or services created by the ecosystem which are essential for the daily functioning of humans and economies. This research explores how best to achieve integration of the Ecosystem Approach within environmental/sustainability assessment. It adopts a mixed-method approach that combines existing qualitative techniques (Network Analysis and stakeholder engagement) and quantitative techniques (Geographical Information Systems) within a regeneration case study at the local level (Dartford in North Kent, United Kingdom). The thesis makes recommendations for better integration of an Ecosystem Approach in Spatial Planning and decision-making, and for the ways in which assessment tools and techniques can best be combined.
5

Alain, Martin. "A compact video representation format based on spatio-temporal linear embedding and epitome." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S001/document.

Full text
Abstract:
Efficient video compression is nowadays a critical issue, and is expected to become ever more crucial in the future, with the constantly increasing video traffic and the production of new digital video formats with high resolution, wide color gamut, high dynamic range, or high frame rate. The MPEG standard HEVC is currently one of the most efficient video compression schemes; however, addressing future needs calls for novel and disruptive methods. In fact, the main principles of modern video compression standards rely on concepts designed more than 30 years ago: the reduction of spatial and temporal redundancies through prediction tools, the use of a transform to further reduce the inner correlations of the signal, followed by quantization to remove non-perceptible information, and entropy coding to remove the remaining statistical redundancies. In this thesis, we explore novel methods which aim at further exploiting the natural redundancies occurring in video signals, notably through the use of multi-patch techniques. First, we introduce LLE-based multi-patch methods in order to improve Inter prediction, which are then combined for both Intra and Inter prediction, and are proven efficient over H.264. We then propose epitome-based de-noising methods to improve the performance of existing codecs in an out-of-the-loop scheme. High-quality epitomes are transmitted to the decoder in addition to the coded sequence, and we can then use, at the decoder side, multi-patch de-noising methods relying on the high-quality patches from the epitomes, in order to improve the quality of the decoded sequence. This scheme is shown to be efficient compared to SHVC. Finally, we propose another out-of-the-loop scheme relying on a symmetric clustering of the patches performed at both the encoder and decoder sides. At the encoder side, linear mappings are learned for each cluster between the coded/decoded patches and the corresponding source patches. The linear mappings are then sent to the decoder and applied to the decoded patches in order to improve the quality of the decoded sequence. The proposed scheme improves the performance of HEVC, and is shown to be promising for scalable schemes such as SHVC.
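The per-cluster linear mapping in the last scheme amounts to a least-squares fit between decoded and source patches. A toy single-cluster sketch (synthetic vectors standing in for actual patches, not the thesis's pipeline):

import numpy as np

rng = np.random.default_rng(2)
decoded = rng.normal(size=(500, 16))                  # decoded patches (flattened)
source = decoded @ rng.normal(size=(16, 16)) * 0.9    # corresponding source patches

# One linear mapping per patch cluster, learned by least squares at the encoder
# and applied to decoded patches at the decoder (here: a single toy cluster).
M, *_ = np.linalg.lstsq(decoded, source, rcond=None)
restored = decoded @ M
print(np.mean((restored - source) ** 2))              # near-zero residual on this toy data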
6

Raahemi, Mohammad. "Intelligent Prediction of Stock Market Using Text and Data Mining Techniques." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40934.

Full text
Abstract:
The stock market undergoes many fluctuations on a daily basis. These changes can be challenging to anticipate. Understanding such volatility is beneficial to investors, as it empowers them to make informed decisions to avoid losses and to invest when opportunities are predicted. The objective of this research is to use text mining and data mining techniques to discover the relationship between news articles and stock price fluctuations. There are a variety of sources for news articles, including Bloomberg, Google Finance, Yahoo Finance, Factiva, Thomson Reuters, and Twitter. In our research, we use the Factiva and Intrinio news databases. These databases provide daily analytical articles about the general stock market, as well as daily changes in stock prices. The focus of this research is on understanding the news articles that influence stock prices. We believe that different types of stocks in the market behave differently, and news articles could provide indications of different stock price movements. The goal of this research is to create a framework that uses text mining and data mining algorithms to correlate different types of news articles with stock fluctuations, in order to predict whether to "Buy", "Sell", or "Hold" a specific stock. We train Doc2Vec models on 1 GB of financial news from Factiva to convert news articles into vectors of 100 dimensions. After preprocessing the data, including labeling and balancing, we build five predictive models, namely Neural Networks, SVM, Decision Tree, KNN, and Random Forest, to predict stock movements (Buy, Sell, or Hold). We evaluate the performance of the predictive models in terms of accuracy and area under the ROC curve. We conclude that SVM provides the best performance among the five models for predicting stock movement.
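A minimal sketch of the Doc2Vec-plus-classifier pipeline described above (gensim and scikit-learn assumed; the headlines and labels are invented for illustration, not from the Factiva corpus):

from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.svm import SVC

news = [("fed raises interest rates", "Sell"),
        ("strong quarterly earnings reported", "Buy"),
        ("markets flat ahead of earnings", "Hold"),
        ("rate hike fears hit stocks", "Sell")]
corpus = [TaggedDocument(text.split(), [i]) for i, (text, _) in enumerate(news)]

d2v = Doc2Vec(corpus, vector_size=100, min_count=1, epochs=50)  # 100-dim doc vectors
X = [d2v.infer_vector(text.split()) for text, _ in news]
y = [label for _, label in news]

clf = SVC().fit(X, y)   # SVM, reported above as the best of the five models
print(clf.predict([d2v.infer_vector("earnings beat expectations".split())]))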
7

Senjean, Bruno. "Development of new embedding techniques for strongly correlated electrons : from in-principle-exact formulations to practical approximations." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAF035/document.

Full text
Abstract:
The thesis deals with the development and implementation of new methods for the description of strong electron correlation effects in molecules and solids. After introducing the state of the art in quantum chemistry and in condensed matter physics, a new hybrid method, so-called "site-occupation embedding theory" (SOET), is presented, based on the merging of wavefunction theory and density functional theory (DFT). Different formulations of this theory are described and applied to the one-dimensional Hubbard model. In addition, a novel ensemble density functional theory approach has been derived to extract the fundamental gap exactly. In the latter approach, the infamous derivative discontinuity is reformulated as a derivative of a weight-dependent exchange-correlation functional. Finally, a quantum chemical extension of SOET is proposed, based on a seniority-zero wavefunction completed by a functional of the density matrix and expressed in the natural orbital basis.
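For orientation, the one-dimensional Hubbard model referred to above is conventionally written as (standard textbook form, not notation from the thesis):

H = -t \sum_{i,\sigma} \left( c^{\dagger}_{i\sigma} c_{i+1,\sigma} + \text{h.c.} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow} ,

where t is the hopping amplitude, U the on-site repulsion, and n_{i\sigma} = c^{\dagger}_{i\sigma} c_{i\sigma} the spin-resolved occupation; roughly speaking, SOET treats an embedded fragment of this lattice by wavefunction methods while describing the remaining sites through a functional of the site occupations.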
8

Paoli, Roberto. "Cell culture interfaces for different organ-on-chip applications: from photolithography to rapid-prototyping techniques with sensor embedding." Doctoral thesis, Universitat de Barcelona, 2019. http://hdl.handle.net/10803/668376.

Full text
Abstract:
Although the last 60 years have seen major advances in many scientific and technological inputs of drug research and development, the number of new molecules hitting the market per billion US dollars of R&D spending has declined steadily over the same period. The current scenario highlights the need for new research tools that reduce costly animal and clinical trials while providing better predictions of drug efficacy and safety in humans. A promising approach to improving the current models comes from the field of microfluidics, which studies systems that process or manipulate tiny amounts of fluids using channels with dimensions of tens to hundreds of micrometers. Combining microfluidics with cell culture, scientists gave rise to a new field named "Organ-on-chip" (OOC). Microfluidic OOCs are advanced platforms designed to mimic physiological structures and continuous flow conditions, thus allowing the culture of cells in a friendlier microenvironment. This thesis, titled "Cell culture interfaces for different organ-on-chip applications: from photolithography to rapid-prototyping techniques with sensor embedding", aims to design, simulate and test new OOC devices that reproduce cell culture interfaces under flow conditions. The work focuses on the exploration of novel fabrication techniques which enable rapid prototyping of OOC devices, reducing the costs, time and human labor associated with the fabrication process. The final objective is to demonstrate the viability of the devices as research tools for biological problems, applying them to the tubular kidney and the blood-brain barrier (BBB). To achieve this objective, three device versions have been developed: 1) OOCv1, fabricated by multilayer PDMS soft lithography; 2) OOCv2, fabricated in thermoplastic by layered object manufacturing using both a vinyl cutter and a laser cutter, integrating standard fluidic connectors alone (OOCv2.1) or together with embedded electrodes (OOCv2.2); and 3) OOCv3, using a mixed technique of laser cutting and 3D printing by stereolithography. All devices are fabricated using biocompatible materials with high optical quality and an embedded commercial membrane. The biological experiments with renal tubular epithelial cells, performed on OOCv1 and OOCv2.1 devices, demonstrated the viability of the devices for culturing cells under flow conditions. The study of fatty acid oxidation and accumulation in cells exposed to physiological and diabetogenic oscillating levels of glucose suggests a possible positive role of shear stress in the activation of fatty acid metabolism. The studies were performed using a compact experimental unit with embedded flow control, which significantly reduces the complexity and cost of the fluidic experimental setup. The biological experiments on the BBB confirmed the viability of OOCv2.1 and OOCv2.2 for compartmentalized co-culturing of endothelial cells and pericytes. The formation and recovery of the barrier after disruptive treatment were assessed using different techniques, including immunostaining, fluorescence and live phase-contrast imaging, and electrical impedance spectroscopy. The repeatability of measurements using the electrodes was verified. A model to classify measurements from different timepoints was developed, resulting in an accuracy of 100% on the learning set and 90% on the test set. The results are confirmed by imaging data, which also suggest a critical role of pericytes in the development, maintenance and regulation of the BBB, in accordance with the literature.
9

Antici, Francesco. "Advanced techniques for cross-language annotation projection in legal texts." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23884/.

Full text
Abstract:
Nowadays, the majority of the services we benefit from are provided online, and their use is governed by the users' acceptance of the terms of service. All our data are handled in accordance with the clauses of such documents, and all our behaviour must comply with them. It would therefore be very useful to find automated techniques to ensure the fairness of the document or to inform users about possible threats. The focus of this work is to create resources aimed at the development of such tools in languages other than English, which may lack linguistic resources and annotated corpora. The enormous breakthroughs of recent years in Natural Language Processing techniques have made it possible to create such tools through automated and unsupervised processes. One of the means to achieve that is annotation projection between two parallel corpora. The difficulty and cost of creating ad hoc resources for every language have brought the need to find another way of achieving the goal. This work investigates a cross-language annotation projection technique based on sentence embeddings and similarity metrics to find matches between sentences. Several combinations of methods and algorithms are compared, among which are monolingual and multilingual embedding neural models. The experiments are conducted on two datasets, where the reference language is always English and the projections are evaluated on Italian, German and Polish. The results obtained provide a robust and reliable technique for the task and a good starting point for building multilingual tools.
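A condensed sketch of the matching step (assuming the sentence-transformers package and one of its public multilingual models; the model name and sentences below are illustrative, not necessarily those used in the thesis):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
english = ["You must accept the terms of service.",
           "We may share your data with third parties."]
italian = ["Potremmo condividere i tuoi dati con terze parti.",
           "Devi accettare i termini di servizio."]

en_emb = model.encode(english, normalize_embeddings=True)
it_emb = model.encode(italian, normalize_embeddings=True)
matches = (en_emb @ it_emb.T).argmax(axis=1)  # cosine similarity on normalized vectors
print(matches)  # [1 0]: each English clause is paired with its Italian counterpart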
10

Sakarya, Hatice. "A Contribution To Modern Data Reduction Techniques And Their Applications By Applied Mathematics And Statistical Learning." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612819/index.pdf.

Full text
Abstract:
High-dimensional data arise in areas from digital image processing, gene expression microarrays and neuronal population activities to financial time series. Dimensionality reduction, extracting low-dimensional structure from high dimensions, is a key problem in many areas such as information processing, machine learning, data mining, information retrieval and pattern recognition, where a number of data reduction techniques can be found. In this thesis we give a survey of modern data reduction techniques, representing the state of the art in theory, methods and applications, and introducing the language of mathematics along the way. This needs special care concerning questions of, e.g., how to understand discrete structures as manifolds, how to identify their structure in preparation for dimension reduction, and how to face the complexity of the algorithmic methods. Special emphasis is placed on the Principal Component Analysis, Locally Linear Embedding and Isomap algorithms. These algorithms have been studied by a research group from Vilnius, Lithuania, by Zeev Volkovich of the Software Engineering Department, ORT Braude College of Engineering, Karmiel, and by others. The main purpose of this study is to compare the results of these three algorithms. In making the comparison we focus on the results and on the running time.
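The three algorithms given special emphasis are all available in scikit-learn, so a comparison can be set up in a few lines (an illustrative sketch on a synthetic manifold, not the thesis's experimental protocol):

from sklearn.datasets import make_s_curve
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding, Isomap

X, _ = make_s_curve(n_samples=1000, random_state=0)   # 3-D nonlinear manifold
for name, reducer in [
    ("PCA", PCA(n_components=2)),
    ("LLE", LocallyLinearEmbedding(n_components=2, n_neighbors=12)),
    ("Isomap", Isomap(n_components=2, n_neighbors=12)),
]:
    Y = reducer.fit_transform(X)
    print(name, Y.shape)   # each method returns a 2-D embedding of the same data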
11

Chammem, Afef. "Robust watermarking techniques for stereoscopic video protection." PhD thesis, Institut National des Télécommunications, 2013. http://tel.archives-ouvertes.fr/tel-00917964.

Full text
Abstract:
The explosion in stereoscopic video distribution increases the concerns over its copyright protection. Watermarking can be considered the most flexible property-right protection technology. The applicative challenge of watermarking is to reach a trade-off between the properties of transparency, robustness, data payload and computational cost. While the capture and display of 3D content are based solely on the two left/right views, some alternative representations, like disparity maps, should also be considered during transmission/storage. A specific study of the optimal (with respect to the above-mentioned properties) insertion domain is also required. The present thesis tackles the above-mentioned challenges. First, a new disparity map (3D video-New Three Step Search, 3DV-NTSS) is designed. The performance of 3DV-NTSS was evaluated in terms of visual quality of the reconstructed image and computational cost. When compared with state-of-the-art methods (NTSS and FS-MPEG), average gains of 2 dB in PSNR and 0.1 in SSIM are obtained. The computational cost is reduced by average factors between 1.3 and 13. Second, a comparative study of the main classes of 2D-inherited watermarking methods and of their optimal insertion domains is carried out. Four insertion methods are considered; they belong to the SS, SI and hybrid (Fast-IProtect) families. The experiments brought to light that Fast-IProtect, performed in the new disparity map domain (3DV-NTSS), is generic enough to serve a large variety of applications. The statistical relevance of the results is given by the 95% confidence limits, with underlying relative errors lower than 0.1.
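The SS (spread-spectrum) family referred to above embeds a pseudo-random pattern additively and detects it by correlation. A minimal single-frame sketch (toy data, not the thesis's Fast-IProtect or disparity-domain insertion):

import numpy as np

rng = np.random.default_rng(1)
frame = rng.uniform(0, 255, size=(64, 64))               # stand-in for a view/disparity map
pattern = rng.choice([-1.0, 1.0], size=frame.shape)      # pseudo-random SS watermark
alpha = 2.0                                              # strength: transparency vs robustness

marked = frame + alpha * pattern                         # additive spread-spectrum insertion
mse = np.mean((marked - frame) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)                   # transparency measure (dB)
detection = np.mean((marked - marked.mean()) * pattern)  # correlation detector, close to alpha
print(round(psnr, 1), round(detection, 2))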
12

Van, Huyssteen Rudolph Hendrik. "Comparative evaluation of video watermarking techniques in the uncompressed domain." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/71842.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: Electronic watermarking is a method whereby information can be imperceptibly embedded into electronic media, while ideally being robust against common signal manipulations and intentional attacks to remove the embedded watermark. This study evaluates the characteristics of uncompressed video watermarking techniques in terms of visual characteristics, computational complexity and robustness against attacks and signal manipulations. The foundations of video watermarking are reviewed, followed by a survey of existing video watermarking techniques. Representative techniques from different watermarking categories are identified, implemented and evaluated. Existing image quality metrics are reviewed and extended to improve their performance when comparing these video watermarking techniques. A new metric for the evaluation of inter-frame flicker in video sequences is then developed. A technique for possibly improving the robustness of the implemented discrete Fourier transform technique against rotation is then proposed. It is also shown that it is possible to reduce the computational complexity of watermarking techniques without affecting the quality of the original content, through a modified watermark embedding method. Possible future studies are then recommended with regard to further improving watermarking techniques against rotation.
13

CONTINO, Salvatore. "Study and identification of new molecular descriptors, finalized to the development of Virtual Screening techniques through the use of deep neural networks." Doctoral thesis, Università degli Studi di Palermo, 2022. https://hdl.handle.net/10447/554714.

Full text
14

Al-Nu'aimi, Abdallah Saleem Na. "Design, implementation and performance evaluation of robust and secure watermarking techniques for digital coloured images : designing new adaptive and robust imaging techniques for embedding and extracting 2D watermarks in the spatial and transform domain using imaging and signal processing techniques." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4255.

Full text
Abstract:
The tremendous spread of multimedia via the Internet motivates watermarking as a promising new technology for copyright protection. This work is concerned with the design and development of novel algorithms in the spatial and transform domains for robust and secure watermarking of coloured images. These algorithms are adaptive, content-dependent and compatible with the Human Visual System (HVS). The host channels have the ability to carry a large information payload, with enough capacity to accept multiple watermarks. This work makes several contributions in the area of coloured image watermarking. The most challenging problem is to obtain a robust algorithm that can withstand geometric attacks, which is solved in this work. A very secure algorithm has also been achieved by using double secret keys. In addition, the problem of multiple claims of ownership is solved here using an unusual approach. Furthermore, this work differentiates between terms that usually confuse researchers and lead to misunderstanding in most of the previous algorithms. One of the drawbacks of most previous algorithms is that the watermark consists of a small number of bits without strict meaning. This work overcomes this weakness by using meaningful images and text with large amounts of data. Contrary to what is found in the literature, this work shows that the green channel is better than the blue channel for hosting watermarks. A more general and comprehensive test bed, along with a broad band of performance evaluation, is used to judge the algorithms fairly.
15

Rilk, Albrecht Johannes. "The flicker electroretinogram in phase space: embeddings and techniques." [S.l.] : [s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=969749384.

Full text
16

Gezahagne, Azamed Yehuala. "Qualitative Models of Neural Activity and the Carleman Embedding Technique." Digital Commons @ East Tennessee State University, 2009. https://dc.etsu.edu/etd/1875.

Full text
Abstract:
The two-variable FitzHugh-Nagumo model behaves qualitatively like the four-variable Hodgkin-Huxley space-clamped system and is more mathematically tractable than the Hodgkin-Huxley model, thus allowing the action potential and other properties of the Hodgkin-Huxley system to be more readily visualized. In this thesis, it is shown that the Carleman Embedding Technique can be applied to both the FitzHugh-Nagumo model and Van der Pol's model of nonlinear oscillation, which are both finite nonlinear systems of differential equations. The Carleman technique can thus be used to obtain approximate solutions of the FitzHugh-Nagumo model and to study neural activity such as excitability.
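The embedding itself is easy to demonstrate on a scalar example (our illustration on the logistic equation dx/dt = x(1 - x), simpler than the FitzHugh-Nagumo system but showing the same mechanics): the monomials y_n = x^n satisfy the linear chain dy_n/dt = n(y_n - y_{n+1}), which is truncated and solved as a linear system:

import numpy as np
from scipy.linalg import expm

N = 8                      # truncation order of the Carleman basis y_n = x**n
A = np.zeros((N, N))
for n in range(1, N + 1):  # dy_n/dt = n*(y_n - y_{n+1}); the y_{N+1} term is dropped
    A[n - 1, n - 1] = n
    if n < N:
        A[n - 1, n] = -n

x0, t = 0.2, 1.0
y0 = x0 ** np.arange(1, N + 1)
x_carleman = (expm(A * t) @ y0)[0]                       # first component approximates x(t)
x_exact = x0 * np.exp(t) / (1 - x0 + x0 * np.exp(t))     # closed-form logistic solution
print(x_carleman, x_exact)   # close for moderate t; the truncation error grows with t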
17

Clarke, John-Paul Barrington. "An improved technique to determine the mount embedding impedance of SIS mixers." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/43267.

Full text
18

Dzacka, Charles Nunya. "A Variation of the Carleman Embedding Method for Second Order Systems." Digital Commons @ East Tennessee State University, 2009. https://dc.etsu.edu/etd/1877.

Full text
Abstract:
The Carleman Embedding is a method that allows us to embed a finite-dimensional system of nonlinear differential equations into a system of infinite-dimensional linear differential equations. This technique works well when dealing with first-order nonlinear differential equations. However, for higher-order nonlinear ordinary differential equations, it is difficult to use the Carleman Embedding method. This project examines the Carleman Embedding and a variation of the method which is very convenient to apply to second-order systems of nonlinear equations.
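Concretely (our notation, illustrating the generic setup rather than the thesis's specific variation): a second-order equation \ddot{x} = f(x, \dot{x}) is first reduced to a first-order system, whose monomials obey

x_1 = x, \quad x_2 = \dot{x} \quad\Longrightarrow\quad \dot{x}_1 = x_2, \quad \dot{x}_2 = f(x_1, x_2),

y_{ij} = x_1^i x_2^j \quad\Longrightarrow\quad \dot{y}_{ij} = i \, y_{i-1,\,j+1} + j \, x_1^i x_2^{j-1} f(x_1, x_2).

For polynomial f the last term is again a linear combination of the y_{kl}, so truncating at a finite total degree leaves a finite linear system solvable by standard linear methods.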
19

Bouaziz, Mohamed. "Réseaux de neurones récurrents pour la classification de séquences dans des flux audiovisuels parallèles." Thesis, Avignon, 2017. http://www.theses.fr/2017AVIG0224/document.

Full text
Abstract:
Like TV channels, data streams are represented as sequences of successive events that can exhibit chronological relations (e.g. a series of programs, scenes, etc.). For a targeted channel, broadcast programming follows the rules defined by the channel itself, but can also be affected by the programming of competing ones. In such conditions, the event sequences of parallel streams could provide additional knowledge about the events of a particular stream. In the sphere of machine learning, various methods suited to processing sequential data have been proposed. Long Short-Term Memory (LSTM) recurrent neural networks have proven their worth in many applications dealing with this type of data. Nevertheless, these approaches are designed to handle only a single input sequence at a time. The main contribution of this thesis is the development of approaches that jointly process sequential data derived from multiple parallel streams. The application task of our work, carried out in collaboration with the computer science laboratory of Avignon (LIA) and the EDD company, seeks to predict the genre of a telecast. This prediction can be based on the histories of previous telecast genres in the same channel but also on those belonging to other parallel channels. We propose a telecast genre taxonomy adapted to such automatic processes as well as a dataset containing the parallel history sequences of 4 French TV channels. Two original methods are proposed in this work in order to take parallel stream sequences into account. The first one, namely the Parallel LSTM (PLSTM) architecture, is an extension of the LSTM model. PLSTM simultaneously processes each sequence in a separate recurrent layer and sums the outputs of each of these layers to produce the final output. The second approach, called MSE-SVM, takes advantage of both LSTM and Support Vector Machine (SVM) methods. First, latent feature vectors are independently generated for each input stream, using the output event of the main one. These new representations are then merged and fed to an SVM algorithm. The PLSTM and MSE-SVM approaches proved their ability to integrate parallel sequences by outperforming, respectively, the LSTM and SVM models that only take into account the sequences of the main stream. The two proposed approaches take advantage of the information contained in long sequences. However, they have difficulty dealing with short ones. Though MSE-SVM generally outperforms the PLSTM approach, the problem experienced with short sequences is more pronounced for MSE-SVM. Finally, we propose to extend this approach by feeding in additional information related to each event in the input sequences (e.g. the weekday of a telecast). This extension, named AMSE-SVM, behaves remarkably better with short sequences without affecting performance when processing long ones.
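A compact sketch of the PLSTM idea as described above, one recurrent layer per stream with the outputs summed (PyTorch used for illustration; the dimensions and the shared embedding layer are our assumptions, not the thesis's configuration):

import torch
import torch.nn as nn

class ParallelLSTM(nn.Module):
    """One LSTM per parallel stream; per-stream outputs are summed, then classified."""
    def __init__(self, n_streams, vocab_size, emb_dim, hidden, n_genres):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstms = nn.ModuleList(
            [nn.LSTM(emb_dim, hidden, batch_first=True) for _ in range(n_streams)]
        )
        self.out = nn.Linear(hidden, n_genres)

    def forward(self, streams):          # streams: list of (batch, seq) integer tensors
        summed = 0
        for lstm, seq in zip(self.lstms, streams):
            _, (h, _) = lstm(self.embed(seq))
            summed = summed + h[-1]      # last hidden state of each stream's layer
        return self.out(summed)

model = ParallelLSTM(n_streams=4, vocab_size=50, emb_dim=16, hidden=32, n_genres=10)
x = [torch.randint(0, 50, (8, 12)) for _ in range(4)]   # 4 parallel genre histories
print(model(x).shape)                                   # torch.Size([8, 10])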
20

Alu, Kelechukwu Iroajanma. "Solving the Differential Equation for the Probit Function Using a Variant of the Carleman Embedding Technique." Digital Commons @ East Tennessee State University, 2011. https://dc.etsu.edu/etd/1306.

Full text
Abstract:
The probit function is the inverse of the cumulative distribution function associated with the standard normal distribution. It is of great utility in statistical modelling. The Carleman embedding technique has been shown to be effective in solving first-order and, less efficiently, second-order nonlinear differential equations. In this thesis, we show that solutions to the second-order nonlinear differential equation for the probit function can be approximated efficiently using a variant of the Carleman embedding technique.
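For reference, the differential equation in question follows from differentiating \Phi(w(p)) = p for the probit w = \Phi^{-1} (a standard derivation, stated here for orientation):

w'(p) = \frac{1}{\varphi(w(p))} = \sqrt{2\pi}\, e^{w(p)^2/2}, \qquad w''(p) = w(p) \bigl(w'(p)\bigr)^2, \qquad w(\tfrac{1}{2}) = 0, \quad w'(\tfrac{1}{2}) = \sqrt{2\pi},

where \varphi denotes the standard normal density; the second equation is the second-order nonlinear ODE to which the Carleman variant is applied.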
21

Huddlestone, Grant E. "Implementation and evaluation of two prediction techniques for the Lorenz time series." Thesis, Stellenbosch : Stellenbosch University, 2003. http://hdl.handle.net/10019.1/53459.

Full text
Abstract:
Thesis (MSc)-- Stellenbosch University, 2003.
ENGLISH ABSTRACT: This thesis implements and evaluates two prediction techniques used to forecast deterministic chaotic time series. For a large number of such techniques, the reconstruction of the phase space attractor associated with the time series is required. Embedding is presented as the means of reconstructing the attractor from limited data. Methods for obtaining the minimal embedding dimension and optimal time delay from the false-neighbour heuristic and the average mutual information method are discussed. The first prediction algorithm discussed is based on work by Sauer, which includes applying the singular value decomposition to data obtained from the embedding of the time series being predicted. The second prediction algorithm is based on neural networks. A specific architecture suited to the prediction of deterministic chaotic time series, namely the time-dependent neural network architecture, is discussed and implemented. Adaptations to the backpropagation training algorithm for use with time-dependent neural networks are also presented. Both algorithms are evaluated by means of predictions made for the well-known Lorenz time series. Different embedding and algorithm-specific parameters are used to obtain predicted time series. Actual values corresponding to the predictions are obtained from the Lorenz time series, which aids in evaluating the prediction accuracy. The predicted time series are evaluated in terms of two criteria: prediction accuracy and qualitative behavioural accuracy. Behavioural accuracy refers to the ability of the algorithm to simulate qualitative features of the time series being predicted. It is shown that for both algorithms a choice of embedding dimension greater than the minimum embedding dimension obtained from the false-neighbour heuristic produces greater prediction accuracy. For the neural network algorithm, values of the embedding dimension greater than the minimum embedding dimension satisfy the behavioural criterion adequately, as expected. Sauer's algorithm has the greatest behavioural accuracy for embedding dimensions smaller than the minimal embedding dimension. In terms of the time delay, it is shown that both algorithms have the greatest prediction accuracy for values of the time delay in a small interval around the optimal time delay. The neural network algorithm is shown to have the greatest behavioural accuracy for time delays close to the optimal time delay, and Sauer's algorithm has the best behavioural accuracy for small values of the time delay. Matlab code is presented for both algorithms.
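The attractor reconstruction underlying both algorithms is a time-delay embedding. A minimal sketch for the Lorenz series (illustrative parameter values; the thesis obtains the dimension m and delay tau from the false-neighbour and mutual-information methods):

import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - y) - z, x * y - beta * z]

sol = solve_ivp(lorenz, (0, 50), [1.0, 1.0, 1.0], t_eval=np.arange(0, 50, 0.01))
x = sol.y[0]                      # observe only the x-coordinate

m, tau = 3, 17                    # embedding dimension and time delay (in samples)
vectors = np.column_stack([x[i * tau : len(x) - (m - 1 - i) * tau] for i in range(m)])
print(vectors.shape)              # each row is one point on the reconstructed attractor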
22

Lammersen, Christiane [Verfasser], Christian [Akademischer Betreuer] Sohler, and auf der Heide Friedhelm [Gutachter] Meyer. "Approximation Techniques for Facility Location and Their Applications in Metric Embeddings / Christiane Lammersen. Betreuer: Christian Sohler. Gutachter: Friedhelm Meyer auf der Heide." Dortmund : Universitätsbibliothek Dortmund, 2011. http://d-nb.info/1103231618/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
24

Mieth, Therese [Verfasser], Dorothee [Gutachter] Haroske, Hans [Gutachter] Triebel, and David E. [Gutachter] Edmunds. "Entropy and approximation numbers of weighted Sobolev embeddings : a bracking technique / Therese Mieth ; Gutachter: Dorothee Haroske, Hans Triebel, David E. Edmunds." Jena : Friedrich-Schiller-Universität Jena, 2016. http://d-nb.info/117761152X/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
25

Majed, Aliah. "Sensing-based self-reconfigurable strategies for autonomous modular robotic systems." Electronic Thesis or Diss., Brest, École nationale supérieure de techniques avancées Bretagne, 2022. http://www.theses.fr/2022ENTA0013.

Повний текст джерела
Анотація:
Modular robotic systems (MRSs) have become a highly active research area today. They have the ability to change the perspective on robotic systems, from machines designed to do certain tasks to multipurpose tools capable of accomplishing almost any task. They are used in a wide range of applications, including reconnaissance, rescue missions, space exploration, military tasks, etc. Typically, an MRS is built of "modules", from a few to several hundred or even thousands. Each module involves actuators, sensors, and computational and communication capabilities. Usually, these systems are homogeneous, where all the modules are identical; however, there could be heterogeneous systems that contain different modules to maximize versatility. One of the advantages of these systems is their ability to operate in harsh environments in which contemporary human-in-the-loop working schemes are risky, inefficient and sometimes infeasible. In this thesis, we are interested in self-reconfigurable modular robotics. Such a system uses a set of detectors in order to continuously sense its surroundings, locate its own position, and then transform into a specific shape to perform the required tasks. Consequently, an MRS faces three major challenges. First, it produces a great amount of collected data that overloads the memory storage of the robot. Second, it generates redundant data, which complicates decision-making about the next morphology in the controller. Third, the self-reconfiguration process necessitates massive communication between the modules to reach the target morphology and takes significant processing time to self-reconfigure the robot. Therefore, researchers' strategies often aim to minimize the amount of data collected by the modules without considerable loss in fidelity. The goal of this reduction is first to save storage space in the MRS, and then to facilitate analyzing the data and making decisions about what morphology to use next in order to adapt to new circumstances and perform new tasks. In this thesis, we propose an efficient mechanism for data processing and self-reconfiguration decision-making dedicated to modular robotic systems. More specifically, we focus on data storage reduction, self-reconfiguration decision-making, and efficient communication management between modules in MRSs, with the main goal of ensuring a fast self-reconfiguration process.
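The data-reduction goal described above can be made concrete with the simplest possible filter: keep a sensor reading only when it differs from the last kept reading by more than a tolerance. This dead-band sketch is a generic stand-in rather than the thesis's mechanism, and the threshold value is an assumption.

```python
def reduce_readings(readings, eps=0.05):
    """Keep a reading only if it differs from the last kept value by more than eps."""
    kept, last = [], None
    for r in readings:
        if last is None or abs(r - last) > eps:
            kept.append(r)
            last = r
    return kept

# a slowly drifting distance sensor: most samples are redundant
print(reduce_readings([1.00, 1.01, 1.02, 1.20, 1.21, 1.50]))  # -> [1.0, 1.2, 1.5]
```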
Стилі APA, Harvard, Vancouver, ISO та ін.
26

Chinea, Ríos Mara. "Advanced techniques for domain adaptation in Statistical Machine Translation." Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/117611.

Повний текст джерела
Анотація:
[EN] Statistical Machine Translation is a sub-field of computational linguistics that investigates how to employ computers in the process of translating a text from one human language to another. Statistical machine translation is the most popular approach used to build these automatic translation systems. The quality of such systems depends to a large extent on the translation examples used during the training and adaptation processes of the models. The datasets employed are obtained from a wide variety of sources, and in many cases we may not have at hand the most suitable data for a specific domain. Given this data-scarcity problem, the main idea for solving it is to find those datasets that are most suitable for training or adapting a translation system. In this sense, this thesis proposes a set of data selection techniques that identify the bilingual data most relevant to a task, extracted from a large data collection. As a first step, the data selection techniques are applied to improve the translation quality of systems under the phrase-based paradigm. These techniques are based on the concept of continuous representations of words or sentences in a vector space. The experimental results show that the techniques used are effective for different languages and domains. The Neural Machine Translation paradigm was also applied in this thesis. Within this paradigm, we investigate how the data selection techniques previously validated in the phrase-based paradigm can be applied. The work carried out focused on two different system-adaptation tasks. On the one hand, we investigated how to increase the translation quality of the system by increasing the size of the training set. On the other hand, the data selection method was employed to create a synthetic dataset. The experiments were carried out for different domains, and the translation results obtained are convincing for both tasks. Finally, it should be noted that the techniques developed and presented throughout this thesis can easily be implemented within a real translation scenario.
Chinea Ríos, M. (2019). Advanced techniques for domain adaptation in Statistical Machine Translation [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/117611
TESIS
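A minimal sketch of vector-space data selection in the spirit described above: score each candidate sentence by cosine similarity between its averaged word vectors and the centroid of an in-domain sample, then keep the top-ranked ones. The embedding table, scoring rule and cut-off are illustrative assumptions, not the thesis's exact criterion.

```python
import numpy as np

def sentence_vec(tokens, emb, dim=100):
    """Average the word vectors of a sentence; zero vector if all words are unknown."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def select_data(pool, in_domain, emb, top_k):
    """Rank candidate sentences by cosine similarity to the in-domain centroid."""
    centroid = np.mean([sentence_vec(s, emb) for s in in_domain], axis=0)
    def score(s):
        v = sentence_vec(s, emb)
        denom = np.linalg.norm(v) * np.linalg.norm(centroid)
        return (v @ centroid) / denom if denom else -1.0
    return sorted(pool, key=score, reverse=True)[:top_k]
```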
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Tai, Wei-liang, and 戴維良. "Image Steganography and Reversible Data Embedding Techniques." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/19593007137305362660.

Повний текст джерела
Анотація:
Doctoral dissertation
National Chung Cheng University
Institute of Computer Science and Information Engineering
97
Steganography is the science of data hiding, proposed for undetectable communication. The sender embeds a secret message into a cover medium, such as a digital image, with a slight distortion, so as to enable the receiver to extract the embedded message from the stego medium, i.e., the distorted cover medium. At the same time, the very existence of the embedded message must be impossible for any third party to detect. The main requirement of steganography is statistical undetectability: no attacker should be able to distinguish the cover medium from the stego medium with a success rate better than guessing. In the first and second subjects of this dissertation, we present several steganographic schemes. Inevitably, embedding data changes the cover medium, even though the distortion caused by embedding is imperceptible to the human visual system. Although the distortion is often quite small, it may be unacceptable for medical or legal images or other images of high importance in some applications. To make sure an important image can be completely recovered after the embedded message has been extracted, reversible data hiding, also called lossless data hiding, has been proposed. Reversibility allows the original medium to be completely recovered from the marked medium without distortion after the embedded message has been extracted. Thus, we propose reversible data hiding algorithms in the third and fourth subjects.
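The abstract does not fix a concrete embedding rule, so the following minimal sketch uses the classic least-significant-bit substitution only to make the cover/stego terminology concrete; it is a baseline illustration, not the dissertation's scheme.

```python
import numpy as np

def lsb_embed(cover, bits):
    """Write one message bit into the least significant bit of each pixel."""
    stego = cover.copy().ravel()
    stego[: len(bits)] = (stego[: len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return stego.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Read the first n_bits least significant bits back out."""
    return (stego.ravel()[:n_bits] & 1).tolist()

cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert lsb_extract(lsb_embed(cover, msg), len(msg)) == msg
```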
Стилі APA, Harvard, Vancouver, ISO та ін.
28

Chen, Chang-Chu, and 陳昌助. "Efficient Image Encoding and Information Embedding Techniques." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/98218762506643291291.

Повний текст джерела
Анотація:
Doctoral dissertation
National Chung Cheng University
Institute of Computer Science and Information Engineering
98
Due to the progress made in computer hardware and software, the Internet has become the most popular channel for transmitting various forms of digital media. Since the Internet environment is insecure, covert communication over the network has become an important research topic in recent years. Data hiding is one useful solution to meet this security requirement. However, to save transmission bandwidth and storage space, digital media must first be compressed; recent research has therefore examined hiding secret data in compression codes. In this dissertation, we aim to design data hiding schemes for different compression domains to solve the problems of secure transmission and protection of secret information. For the data compression techniques, we utilize vector quantization (VQ), side match vector quantization (SMVQ) and Lempel-Ziv-Welch (LZW) coding. The encoding process of VQ is computationally complex and time consuming. We present two methods to improve the encoding complexity of traditional full-search VQ: the first uses a two-bound triangle inequality, and the second is based on Torres and Huguet's double test method and Huang et al.'s three fast search methods. For the information embedding techniques, a method for lossy compression codes and a method for lossless compression codes are described. In the first method, data hiding based on side match vector quantization (SMVQ) is proposed to improve the compression rate of VQ-based data hiding schemes. In the second method, we propose a high-capacity data-hiding LZW (HCDH-LZW) method that hides data in LZW compression codes reversibly by shrinking the symbol length. The hiding scheme increases not only the number of symbols available to hide secrets but also the capacity of each symbol. The performance of all the aforementioned methods has been evaluated through experiments, and the results confirm that they are indeed superior to other state-of-the-art methods.
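The triangle-inequality pruning mentioned above can be sketched with a single anchor point (the dissertation's two-bound variant uses more); the function names and anchor choice are illustrative assumptions.

```python
import numpy as np

def vq_encode_pruned(x, codebook, anchor_dists, anchor):
    """Nearest-codeword search with triangle-inequality pruning.

    anchor_dists[i] = distance from codebook[i] to the fixed anchor vector,
    precomputed once per codebook."""
    dx = np.linalg.norm(x - anchor)
    best_i, best_d = -1, np.inf
    # visit codewords whose anchor distance is closest to dx first
    for i in np.argsort(np.abs(anchor_dists - dx)):
        # |d(c_i, a) - d(x, a)| is a lower bound on d(x, c_i)
        if abs(anchor_dists[i] - dx) >= best_d:
            break                      # sorted order: all later bounds are larger
        d = np.linalg.norm(x - codebook[i])
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d

codebook = np.random.rand(256, 16)
anchor = np.zeros(16)
anchor_dists = np.linalg.norm(codebook - anchor, axis=1)
idx, dist = vq_encode_pruned(np.random.rand(16), codebook, anchor_dists, anchor)
```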
Стилі APA, Harvard, Vancouver, ISO та ін.
29

Herath, Samudra Dilrukshi. "Embedding Techniques to Solve Large-scale Entity Resolution." Thesis, 2022. https://hdl.handle.net/2440/135396.

Повний текст джерела
Анотація:
Entity resolution (ER) identifies and links records that belong to the same real-world entities, where an entity refers to any real-world object. It is a primary task in data integration. Accurate and efficient ER substantially impacts various commercial, security, and scientific applications. Often, there are no unique identifiers for entities in datasets/databases that would make the ER task easy. Therefore record matching depends on entity-identifying attributes and approximate matching techniques. The issue of efficiently handling large-scale data remains an open research problem with the increasing volumes and velocities of modern data collections. Fast, scalable, real-time and approximate entity matching techniques that provide high-quality results are in high demand. This thesis proposes solutions to address the challenges of the lack of test datasets and the demand for fast indexing algorithms in large-scale ER. The shortage of large-scale, real-world datasets with ground truth is a primary concern in developing and testing new ER algorithms. Usually, for many datasets, there is no information on the ground truth or 'gold standard' data that specifies whether two records correspond to the same entity or not. Moreover, obtaining test data for ER algorithms that use personal identifying keys (e.g., names, addresses) is difficult due to privacy and confidentiality issues. Towards this challenge, we propose a numerical simulation model that produces realistic large-scale data to test new methods when suitable public datasets are unavailable. One of the important findings of this work is the approximation of vectors that represent entity identification keys and their relationships, e.g., dissimilarities and errors. Indexing techniques reduce the search space and execution time in the ER process. Based on the idea of approximate vectors of entity identification keys, we propose a fast indexing technique (Em-K indexing) suitable for real-time, approximate entity matching in large-scale ER. Our Em-K indexing method provides a quick and accurate block of candidate matches for a querying record by searching an existing reference database. All our solutions are metric-based. We transform metric or non-metric spaces to a lower-dimensional Euclidean space, known as the configuration space, using multidimensional scaling (MDS). This thesis discusses how to modify MDS algorithms to solve various ER problems efficiently. We propose highly efficient and scalable approximation methods that extend the MDS algorithm to large-scale datasets. We empirically demonstrate the improvements of our proposed approaches on several datasets with various parameter settings. The outcomes show that our methods can generate large-scale testing data, perform fast real-time and approximate entity matching, and effectively scale up the mapping capacity of MDS.
Thesis (Ph.D.) -- University of Adelaide, School of Mathematical Sciences, 2022
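The configuration-space mapping at the heart of this thesis can be illustrated with classical (Torgerson) MDS. The textbook eigendecomposition version below is a sketch only; the thesis's contribution is precisely the scalable approximation that avoids this cubic-cost step, and the toy distance matrix is an assumption.

```python
import numpy as np

def classical_mds(D, k):
    """Embed points with pairwise distance matrix D into k-dimensional Euclidean space."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                     # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]                # take the k largest
    L = np.sqrt(np.maximum(w[idx], 0.0))
    return V[:, idx] * L                         # coordinates in configuration space

# four points on a line: a 1-D embedding reproduces their spacing
pts = [0.0, 1.0, 3.0, 6.0]
D = np.abs(np.subtract.outer(pts, pts))
X = classical_mds(D, 1)
```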
Стилі APA, Harvard, Vancouver, ISO та ін.
30

Tseng, Chun-Sen, and 曾春森. "Various Secret Embedding Schemes for Binary and Gray Level Techniques." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/37990572791021308352.

Повний текст джерела
Анотація:
Master's thesis
National Chung Cheng University
Institute of Computer Science and Information Engineering
93
The rapid advancement of the Internet and related technologies has made the Internet the most popular channel for digital data exchange. Generally speaking, digital data traveling around on the Internet can take the form of text messages, images, audio, or video. Despite the convenience the Internet offers for data exchange, one major problem arises: data on the Internet are vulnerable to attackers' tampering, wiretapping, or stealing during transmission. To deal with this problem, steganography has been proposed, which works by hiding a secret message in a widespread cover material to avoid arousing attackers' attention. This security technique, steganography or data hiding, is the main topic of this thesis, and three data hiding schemes are presented. The first steganography scheme uses a binary image as the cover image for hiding secrets. Most cover images in existing schemes are gray-level or color images, because modifying even a single pixel in a binary image is easily detected; hiding data in binary images is therefore a challenging task. The second method is a secret image sharing method based on the (t, n)-threshold. The (t, n)-threshold function shares the secret image among n shadows, and only a collection of t (or more) of the n shadows can reconstruct the secret image. The third scheme is a reversible data hiding scheme for DCT-based compressed images. In reversible data hiding, the receiver can revert the stego-image back to the original medium after extracting the hidden data, so the reconstructed cover image can be used repeatedly.
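The (t, n)-threshold sharing mentioned above is usually realized with Shamir's polynomial scheme; a minimal sketch on a single byte-valued secret follows (the thesis applies the idea to whole images, which this omits). The prime field and parameters are illustrative assumptions.

```python
import random

P = 257  # prime field large enough for byte-valued secrets (illustrative choice)

def make_shares(secret, t, n):
    """Split a byte into n shares; any t of them reconstruct it (Shamir's scheme)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123, t=3, n=5)
assert reconstruct(shares[:3]) == 123   # any 3 of the 5 shares suffice
```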
Стилі APA, Harvard, Vancouver, ISO та ін.
31

Liu, Jin-Rung, and 劉津榮. "Watermarking Scheme for Digital Images Using PSK、FSK Embedding Techniques." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/51278756286329285122.

Повний текст джерела
Анотація:
Master's thesis
National Chin-Yi University of Technology
Department of Electronic Engineering
96
Nowadays, digital multimedia materials can easily be duplicated, manipulated, and broadcast without restriction using readily available tools and equipment. A duplicate of a digital article can be indistinguishable from the original; manipulation and editing of digital copies can deceive even the human eye; and unauthorized distribution can spread over the world via the Internet. Misappropriation of digital assets greatly concerns content owners and creators, especially when the content is made available through the Internet. Without the assurance of proper protection against lost revenue, the owners of digital content will be reluctant to make these assets available. Through the Internet, illicit users and hackers can easily access, tamper with, destroy, and reproduce valuable information, including personal privacy, product designs, industrial techniques, commercial secrets, military security information, and multimedia (music, audio, video and texture). Thus, protection of valuable products has become an important and urgent problem, and developing an effective and secure information hiding method is a key engineering task. Watermarking is a potential method for copyright protection and authentication of multimedia data on the Internet. In this thesis, phase shift keying (PSK) and frequency shift keying (FSK) modulation techniques are used to construct a robust watermarking scheme. In our scheme, amplitude boost (AB) and low amplitude block selection (LABS) are proposed to achieve superior performance in terms of robustness and imperceptibility: AB is employed to increase robustness, while LABS is employed to improve imperceptibility. To demonstrate the effectiveness of the proposed scheme, simulations under various conditions were conducted. The empirical results show that the proposed scheme can sustain most common attacks, including JPEG compression, rotation, resizing, cropping, painting, noising, blurring, etc.
Стилі APA, Harvard, Vancouver, ISO та ін.
32

Lu, Che-Wei, and 呂哲緯. "DWT Watermarking Techniques Based on Translation Map and Embedding Rule." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/15021774649852561337.

Повний текст джерела
Анотація:
Master's thesis
Tamkang University
Department of Computer Science and Information Engineering
98
Information hiding has become an important research issue in recent years, since developing techniques to address unauthorized copying, tampering, and multimedia data delivery through the Internet has become more and more urgent. Information hiding techniques mainly include steganography and digital watermarking. In this thesis, we present two approaches that achieve image authentication and ownership protection, and even tampering detection. The first approach uses the Discrete Wavelet Transform (DWT) as its major component; in order to obtain the best translation maps, we use Particle Swarm Optimization (PSO) for training. The second approach embeds the watermark in the HL and LH subbands of the DWT according to an embedding rule. The experimental results show that DWTMPSO is more efficient in computation time and more robust than the method proposed by V. Aslantas et al. Furthermore, DWTMPSO is not only capable of image authentication and ownership protection but is also able to detect exactly where the image has been tampered with. DWTER, in turn, produces better results than the compared method in terms of stego-image quality, watermark robustness, and time efficiency.
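A minimal additive sketch of embedding into the HL/LH detail subbands, assuming the PyWavelets package is available (pywt's cH/cV coefficients correspond to those subbands under the usual naming convention). The thesis's actual embedding rule and PSO-trained translation maps are not reproduced; the strength `alpha` is an assumption.

```python
import numpy as np
import pywt  # assumes the PyWavelets package is installed

def embed_watermark(img, wm_bits, alpha=4.0):
    """Additively embed a bipolar watermark into the detail subbands of a 1-level Haar DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), "haar")
    wm = 2.0 * np.asarray(wm_bits, dtype=float) - 1.0   # bits 0/1 -> -1/+1
    flat = np.concatenate([cH.ravel(), cV.ravel()])
    flat[: wm.size] += alpha * wm                       # the "embedding rule" here is plain addition
    cH2 = flat[: cH.size].reshape(cH.shape)
    cV2 = flat[cH.size :].reshape(cV.shape)
    return pywt.idwt2((cA, (cH2, cV2, cD)), "haar")

img = np.random.randint(0, 256, (64, 64))
marked = embed_watermark(img, [1, 0, 1, 1, 0])
```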
Стилі APA, Harvard, Vancouver, ISO та ін.
33

Cho, Hsiu-Ying, and 卓秀英. "High-Performance Slow-Wave Transmission lines and Improved De-embedding Techniques." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/56359136905052179804.

Повний текст джерела
Анотація:
Doctoral dissertation
National Chiao Tung University
Institute of Electronics
98
The patterned ground shield (PGS) must be well designed; otherwise it may not be able to improve the quality factor at all. Investigations into different strip lengths, strip spacings and metal layer positions of slot-type floating shields with respect to wavelength, attenuation loss, and characteristic impedance, which have not been conducted before, are performed in this work. In general, the assumption behind lumped-equivalent-circuit-model-based techniques is valid only if the lengths of the DUT devices are much smaller than the distances between the two ports. However, this is not always true for larger DUT devices and may result in over de-embedding when intrinsic device performance is involved; the proposed de-embedding technique can address this problem. The contributions of the interconnection and the via stack become important as frequencies increase, yet currently existing techniques do not account for via stack parasitics. In this dissertation, high-performance transmission lines and improved de-embedding techniques are presented. The slow-wave concept is used to design high-performance transmission lines and to reduce their size. Accurate models that describe the behavior of RF devices are critical for circuit design, and improved parasitic de-embedding techniques are proposed to achieve accurate device characterization. A novel slow-wave transmission line with optimized slot-type floating shields in advanced CMOS technology is presented. Periodic slot-type floating shields are inserted beneath the transmission line to provide substrate shielding and shorten the electromagnetic propagation wavelength. This is the first study that demonstrates how the wavelength, attenuation loss, and characteristic impedance can be adjusted by changing the strip length, the strip spacing, and the metal layer positions of the slot-type floating shields. Wavelength shortening must be achieved with a trade-off between the slow-wave effect and the attenuation loss. Slot-type floating shields with different strip lengths, strip spacings and metal layer positions are analyzed, and it is concluded that the minimum strip length provides the optimal result. A design guideline is established that enables circuit designers to choose the most appropriate slot-type floating shields for optimal circuit performance. Transmission line test structures were fabricated in a 45 nm CMOS process. Both measurement and electromagnetic (EM) simulation were performed up to 50 GHz. Transmission lines are frequently used at half- or quarter-wavelength; with a shortened wavelength, a saving in silicon area of more than 67% can be achieved by using optimized slot-type floating shields. Experimental results demonstrated an effective relative permittivity improved by a factor of more than 9 and a quality factor improved by a factor of more than 6, as compared to conventional transmission lines. A novel transmission line de-embedding technique is presented. With this technique, the left- and right-side ground-signal-ground (GSG) probe pads can be extracted directly using two transmission line test structures of length L and 2L. An additional through structure is designed using via stack de-embedding, which is unique among current de-embedding methods.
The advantages of the proposed method include the following: (1) a smaller silicon area; (2) consideration of the discontinuity between the pad and the interconnect; (3) consideration of substrate coupling and contact effects; (4) the employment of via stack de-embedding; and (5) a solution to the over de-embedding problem. The proposed methodology could be considered a breakthrough in ultra-high frequency de-embedding and should enable more accurate RF models to be developed. In the proposed methodology, intrinsic slow-wave coplanar waveguide (CPW) transmission line structures are placed on the inter-level metallization layers, as they are the most appropriate RF devices for a cascade-based de-embedding method involving the via stack de-embedding technique. Experimental results have demonstrated that attenuation loss and wavelength can be optimized by changing the metal density and the metal layer positions of the floating shields. With a shortened wavelength, a reduction in silicon area of more than 66% can be achieved by using optimized slot-type floating shields located both above and below the CPW structure.
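The L/2L pad extraction and via-stack handling are the dissertation's own contribution and are not reproduced here; the sketch below shows only the generic final step shared by cascade-based de-embedding methods, namely stripping known fixture two-ports from a measured ABCD matrix. All matrix values are illustrative.

```python
import numpy as np

def deembed_cascade(M_meas, M_left, M_right):
    """Remove left/right fixture two-ports (ABCD matrices) by cascade inversion:
    M_meas = M_left @ M_dut @ M_right  =>  M_dut = inv(M_left) @ M_meas @ inv(M_right)."""
    return np.linalg.inv(M_left) @ M_meas @ np.linalg.inv(M_right)

# toy check: a known DUT survives the round trip through identical pads
M_pad = np.array([[1.0, 2.0], [0.01, 1.0]])      # illustrative pad ABCD matrix
M_dut = np.array([[0.9, 5.0], [0.02, 1.1]])
M_meas = M_pad @ M_dut @ M_pad
assert np.allclose(deembed_cascade(M_meas, M_pad, M_pad), M_dut)
```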
Стилі APA, Harvard, Vancouver, ISO та ін.
34

Liu, Tien-Chung, and 劉典忠. "The Study of Digital Images Embedding Techniques and Visual Secret Sharing Schemes." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/97818057453425691880.

Повний текст джерела
Анотація:
Master's thesis
National Chung Cheng University
Institute of Computer Science and Information Engineering
98
With the spread of information technology, people transmit large amounts of digital data over the Internet in their daily lives. Some of these data contain secret information whose security and safety have drawn the attention of scholars. To enhance the protection of secret information, many encryption schemes have been proposed; one scheme that encodes secret information into digital images is the visual cryptography scheme. In this thesis, we propose two image encryption schemes, one non-reversible and one reversible, that embed a large amount of secret information into digital gray-level images, producing so-called stego-images. After simple computation, the receiver can recover the secret data using the stego-images and a small amount of extra information. For digital images that are unimportant, or important only in a local region, the non-reversible data hiding scheme is used. This non-reversible scheme uses a simple exclusive-or (XOR) calculation and can avoid damaging the region of interest (ROI) according to a location map. However, for images such as military images or maps, which are sensitive and intolerant of distortion, the reversible data hiding scheme is used. This reversible scheme can not only extract the secret data from the stego-image but also recover the stego-image to the original one. The receiver only needs a threshold for extracting the secret data from the stego-image; this threshold is produced in the preprocessing phase of the embedding process. Besides, we also propose a visual secret sharing (VSS) scheme for image transmission. The transmitted gray-level image is turned into a corresponding halftone image, and random-like images called shadows are obtained using block distribution. After stacking the set number (or more) of random-like shadows, the receiver can obtain the secret image information. This scheme not only improves security but also ensures that the receiver can obtain the secret information even when some shadows are missing. According to the experimental results, the two proposed schemes demonstrate both high hiding capacity and good image quality, and the proposed visual secret sharing scheme achieves the required security while improving visual quality.
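A minimal (2, 2) construction illustrates the stacking idea behind visual secret sharing; the thesis's (t, n) scheme with halftone shadows generalizes this. The subpixel patterns below are the textbook choice.

```python
import numpy as np

rng = np.random.default_rng(1)

def vss_shares(secret):
    """(2,2) visual secret sharing for a binary image (1 = black).

    Each pixel expands to a 1x2 subpixel pattern; stacking the two shares
    (logical OR) reveals the secret: black pixels become fully black,
    white pixels stay half black."""
    patterns = np.array([[1, 0], [0, 1]])
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=int)
    s2 = np.zeros((h, 2 * w), dtype=int)
    for i in range(h):
        for j in range(w):
            p = patterns[rng.integers(2)]
            s1[i, 2 * j : 2 * j + 2] = p
            # white: same pattern; black: complementary pattern
            s2[i, 2 * j : 2 * j + 2] = (1 - p) if secret[i, j] else p
    return s1, s2

secret = np.array([[0, 1], [1, 0]])
s1, s2 = vss_shares(secret)
stacked = s1 | s2
```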
Стилі APA, Harvard, Vancouver, ISO та ін.
35

Kratochvíl, Jakub. "Dimension Reduction Techniques in Morphometrics." Master's thesis, 2011. http://www.nusl.cz/ntk/nusl-313740.

Повний текст джерела
Анотація:
This thesis centers on dimensionality reduction and its use on landmark-type data, which are often used in anthropology and morphometrics. In particular we focus on non-linear dimensionality reduction methods - locally linear embedding and multidimensional scaling. We introduce a new approach to dimensionality reduction called multipass dimensionality reduction and show that it improves the quality of classification while requiring fewer dimensions for successful classification than the traditional single-pass methods.
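Both reducers named above have standard implementations; the sketch below assumes scikit-learn and uses random stand-in data rather than real landmark coordinates. The chained second pass is one possible reading of "multipass" reduction, not necessarily the thesis's exact procedure.

```python
import numpy as np
from sklearn.manifold import MDS, LocallyLinearEmbedding  # assumes scikit-learn

# stand-in for landmark data: 100 specimens, 10 landmarks x 3 coordinates, flattened
X = np.random.rand(100, 30)

X_lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(X)
X_mds = MDS(n_components=2).fit_transform(X)

# one reading of "multipass": chain reducers, each pass shedding dimensions
X_multi = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(
    MDS(n_components=5).fit_transform(X)
)
```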
Стилі APA, Harvard, Vancouver, ISO та ін.
36

賴宣宏. "A Study on Large-volume Data Embedding and Search Techniques for Information Hiding Applications." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/29276029258879526965.

Повний текст джерела
Анотація:
Master's thesis
National Chiao Tung University
Institute of Computer Science and Engineering
95
With the advance of computer technologies and the popularity of the Internet, more and more data can be transmitted speedily and conveniently over public networks. In this study, we first propose a distortion-free, high-capacity data hiding method for GIF images. This method duplicates colors with high appearance frequencies to fill the unused color entries in the palette and uses the duplicated colors to hide secret message bits. It is found that more bits can be hidden if colors with higher appearance frequencies are duplicated first. For PNG images, we propose a data hiding method for implanting secret messages in web pages. The hiding capability is achieved by changing the transparency values of the alpha channel of the pixels of the foreground PNG image. A scheme that adjusts the intensity values of both foreground and background image pixels to reduce the artifacts caused by data hiding is also proposed. Finally, we propose two fast methods for searching for desired BMP images in databases: one is a non-block-level method that hides comments in images sequentially, while the other is a block-level method that divides images into blocks in which comments are embedded. Good experimental results show the feasibility and applicability of the proposed methods.
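A schematic of the duplicated-palette idea described above, with hypothetical helper names; real GIF palette I/O, frequency counting and the extraction side are omitted, so this is a sketch of the mechanism rather than the thesis's implementation.

```python
def build_stego_palette(palette, freq, n_free):
    """Duplicate the n_free most frequent colors into unused palette slots.

    Returns the extended palette and a map color_index -> (index_for_0, index_for_1)."""
    order = sorted(range(len(palette)), key=lambda i: -freq[i])[:n_free]
    twins = {}
    ext = list(palette)
    for src in order:
        ext.append(palette[src])          # duplicated color, visually identical
        twins[src] = (src, len(ext) - 1)  # choosing either index hides one bit
    return ext, twins

def embed(pixels, twins, bits):
    """Rewrite indexed pixels: wherever a twinned color occurs, pick the index
    that encodes the next message bit (distortion-free: colors are unchanged)."""
    out, k = [], 0
    for p in pixels:
        if p in twins and k < len(bits):
            out.append(twins[p][bits[k]])
            k += 1
        else:
            out.append(p)
    return out
```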
Стилі APA, Harvard, Vancouver, ISO та ін.
37

Yang, Yu-Ze, and 楊渝澤. "The Study on the Application of De-embedding Techniques, Inductors and Pads in Silicon Process." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/85918835108331124049.

Повний текст джерела
Анотація:
Master's thesis
National Central University
Institute of Electrical Engineering
93
Abstract This thesis studies de-embedding techniques applied in silicon-based processes, such as CMOS and silicon-germanium technologies; the de-embedding techniques, parasitic effects, spiral inductors, and pads are discussed in detail. Spiral inductor designs with different shielding techniques are investigated, with the quality factor Q and the self-resonant frequency as the figures of merit. The inductor with a mesh deep-trench shield achieves the best quality factor and the highest self-resonant frequency. Finally, a pad model is investigated and its equivalent circuit model is presented up to 20 GHz. The first part of this thesis introduces de-embedding structures: several de-embedding methods are introduced and meaningful comparisons are given. Complete de-embedding techniques and parameter extraction procedures are presented using straightforward mathematical calculations. The ground-shield de-embedding structure obtains the best results among the methods considered, and a small-signal CMOS device model is demonstrated using these techniques. The second part is a study of spiral inductors. The basic principle of inductance is first introduced. The modern planar spiral inductor used in silicon-based technology is then studied, including its parasitic effects, device model, quality factor, and methods of improvement. Finally, the differences in inductance value and quality factor are compared by measuring inductors with different specifications; the inductor with a mesh deep-trench shield in silicon-germanium obtains the best quality factor. The third part discusses the parasitic effects of the pad, and an improved pad design is proposed: using a guard ring can effectively reduce the resistive loss and the substrate effects. Since the implementation of radio frequency circuits on silicon plays a very important role in recent emerging applications, the demand for precise processes and accurate device models is becoming more and more important. However, silicon processes still have limitations; the goal is to use existing processes, while keeping manufacturing costs down, in a way that allows circuit designs to achieve the greatest efficiency.
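Among the de-embedding methods such a study typically compares, the widely used open-short procedure has a compact closed form. The sketch below assumes measured 2x2 Y-parameter matrices of the DUT and of open/short dummy structures; the toy values are constructed, not measured data.

```python
import numpy as np

def open_short_deembed(Y_meas, Y_open, Y_short):
    """Classic open-short de-embedding on 2x2 Y-parameter matrices:
    strip parallel pad parasitics first, then series interconnect parasitics."""
    Z_corr = np.linalg.inv(Y_meas - Y_open)          # remove parallel parasitics
    Z_series = np.linalg.inv(Y_short - Y_open)       # series parasitics from dummies
    return np.linalg.inv(Z_corr - Z_series)          # intrinsic device Y-parameters

# toy check with a consistent parasitic model: parallel pad + series interconnect
Y_dut = np.array([[0.02, -0.01], [-0.01, 0.02]])
Y_pad = np.array([[0.005, -0.001], [-0.001, 0.005]])
Z_ser = np.array([[2.0, 0.5], [0.5, 2.0]])
Y_open = Y_pad
Y_short = Y_pad + np.linalg.inv(Z_ser)
Y_meas = Y_pad + np.linalg.inv(Z_ser + np.linalg.inv(Y_dut))
assert np.allclose(open_short_deembed(Y_meas, Y_open, Y_short), Y_dut)
```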
Стилі APA, Harvard, Vancouver, ISO та ін.
38

Götz, Kathrin Claudia. "Bond analysis of metal-element interactions in molecules and solids applying embedding and density functional techniques." Doctoral thesis, 2009. https://nbn-resolving.org/urn:nbn:de:bvb:20-opus-39373.

Повний текст джерела
Анотація:
Within this thesis, the analysis, and hence a better comprehension, of the chemical bond within metal-element compounds is the central topic. By use of various DFT methods, a selection of M-E interactions has been modeled and analyzed via Bader's QTAIM, the ELF and NBO techniques. Special focus was set on a series of transition metal borylene and carbene complexes, and on the Li-C bonds as representatives of main-group organometallics. This thesis is therefore split into three parts: (I) an introduction reviewing the quantum chemical machinery as well as the analysis tools applied for the evaluation of chemical bonds; (II) a study of the chemical interactions taking place in transition metal complexes, focusing on borylenes and cognate carbenes; (III) a broad overview of the appropriate modeling and nature of the Li-C bond as well as of the intermolecular interactions in methyllithium.
Стилі APA, Harvard, Vancouver, ISO та ін.
39

Hong, Jung M. "Integration of Micro Patterning Techniques into Volatile Functional Materials and Advanced Devices." 2009. http://hdl.handle.net/1969.1/ETD-TAMU-2009-05-791.

Повний текст джерела
Анотація:
Novel micro-patterning techniques have been developed for patterning volatile functional materials that cannot be processed by conventional photolithography. First, in order to create micro-patterns of volatile materials (such as bio-molecules and organic materials), micro-contact printing and shadow mask methods are investigated. A novel micro-contact printing technique was developed to generate micro-patterns of volatile materials with variable size and density. A PDMS (polydimethylsiloxane) stamp with 2-dimensional pyramidal tip arrays was fabricated by anisotropic silicon etching and PDMS molding. Variable pattern size was achieved through different external pressures on the PDMS stamp. A novel inking process was developed to enhance the uniformity and repeatability of micro-contact printing. Variable pattern density was obtained by alignment using an x-y translational stage and multiple stamping with a z-directional moving part. Second, for direct patterning of small-molecule organic materials (e.g. pentacene), a novel shadow mask method was developed with a simple and accurate alignment system. To obtain accurate dimensions of the patterning windows, a silicon wafer was used for the shadow mask, since conventional semiconductor processing offers accurate and repeatable fabrication. A sphere-ball alignment system was developed for accurate alignment between the shadow mask and the silicon substrate. In this alignment system, four matching pyramidal cavities were fabricated on each side of the shadow mask and the silicon wafer substrate using anisotropic silicon bulk etching. By placing four steel spheres between the matching cavities, the self-alignment system was demonstrated with 2-3 um alignment accuracy in the x-y directions. For OTFT (organic thin film transistor) applications, an organic semiconducting layer was directly deposited and patterned on the substrate using the developed shadow mask method. In addition, novel embedding techniques were developed to enable conventional semiconductor processes, including photolithography, to be applied to small substrates. A polymer embedding method was developed to provide an extended processing area as well as easy handling of the small substrate. As an application, post-CMOS (complementary metal-oxide-semiconductor) integration of a relatively large microstructure, which might even be larger than the substrate, was demonstrated on a VCO (voltage-controlled oscillator) chip. In addition, micro-patterning on an optical fiber was demonstrated by using a silicon wafer holder designed to surround and hold the fiber; a micro Fresnel lens was successfully patterned and integrated on the optical fiber end.
Стилі APA, Harvard, Vancouver, ISO та ін.
40

Sarnari, Alberto Jose. "Numerically Robust Load Flow Techniques in Power System Planning." Thesis, 2019. http://hdl.handle.net/2440/119928.

Повний текст джерела
Анотація:
Since deregulation of the electric power industry, investment in the sector has not kept up with demand. State grids were interconnected to form vast power networks, which increased the overall system's complexity. Conventional generation sources have, in some cases, closed under financial stress caused by the growing penetration of renewable sources and unfavourable government measures. The power system must adapt to a more demanding environment than that for which it was conceived. This thesis investigates the robustness of planning and simulation study tools for the determination of bus voltages and voltage stability limits. It also provides an approach to obtain greater certainty in the determination of voltages where conventional methods fail to be deterministic. Two complementary methods for determining the collapse voltage are developed in this thesis. The first method applies robust Padé approximations to the holomorphic embedding load flow method, while the second method uses the Newton-Raphson numerical calculation method to obtain both the high and low voltage solution branches and the voltage stability limits of power system load buses. The proposed methods have been implemented using MATLAB and demonstrated on a number of IEEE power system test cases. The robust Padé approximation algorithm improves the reliability of solutions of load flow problems when bus voltages are presented in Taylor series form, by converting the series into optimised rational functions. Differences between the classic Padé approximation algorithm and the new robust version, which is based on singular value decomposition (SVD), are described. The new robust approximation method can determine an optimal rational function approximation using the coefficients of a Taylor series expansion. Consequently, the voltage collapse points, as well as the steady-state voltage stability margin, can be calculated with high reliability. Voltage collapse points (i.e. branching points) are identified by using the locations of poles/zeros of a rational function approximation. Numerical examples are devised to illustrate the potential use of the proposed method in practical applications. Use of the Newton-Raphson method, combined with the discrete Fourier transform and robust Padé approximation, enables the calculation of the voltage stability limits and both the high and low voltage solution branches for the load buses of a power system. This can work to the great advantage of existing N-R-based software users, as problems of the initial guess, multiple solutions and Jacobian matrix conditioning when operating close to the voltage collapse point are avoided. The findings are assessed by comparisons with the conventional Newton-Raphson method, the holomorphic embedding load flow method, and the continuation power flow method. This thesis contains a combination of conventional and publication formats, where some introductory materials are included to ensure that the thesis delivers a consistent narrative. For this reason, the first two chapters provide the required background information, research gap identification and contributions, whilst the other chapters provide more detailed work that has not yet been published or summarise the research outcomes and future research directions. Furthermore, publications are listed in their publication formats, complete with statements of the authors' contributions.
Thesis (Ph.D.) -- University of Adelaide, School of Electrical and Electronic Engineering, 2019
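The Taylor-to-rational conversion at the core of the Padé step can be sketched as below. This is a plain least-squares [m/n] Padé fit (np.linalg.lstsq is itself SVD-based); the robust algorithm referenced in the abstract additionally truncates small singular values to suppress spurious pole-zero pairs, which this sketch omits. The example function and orders are illustrative.

```python
import numpy as np
from math import factorial

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n].

    The denominator solves a (possibly ill-conditioned) Toeplitz system,
    handled here in the least-squares sense via SVD (np.linalg.lstsq)."""
    c = np.asarray(c, dtype=float)
    A = np.array([[c[m + j - k] if 0 <= m + j - k else 0.0
                   for k in range(1, n + 1)] for j in range(1, n + 1)])
    rhs = -c[m + 1 : m + n + 1]
    b = np.concatenate([[1.0], np.linalg.lstsq(A, rhs, rcond=None)[0]])
    a = np.array([sum(b[k] * c[i - k] for k in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return a, b   # numerator and denominator coefficients, low order first

# e.g. exp(x): Taylor coeffs 1/k!; the [2/2] approximant reproduces exp well near 0
c = [1 / factorial(k) for k in range(5)]
a, b = pade(c, 2, 2)
x = 0.5
print(np.polyval(a[::-1], x) / np.polyval(b[::-1], x), np.exp(x))
```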
Стилі APA, Harvard, Vancouver, ISO та ін.
41

Lin, Chi-Yuan, and 林基源. "Embedding Genetic Algorithm, Grey Relation and Fuzzy Clustering Techniques into Neural Networks for Search of Optimal Codebook." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/70055091359056400880.

Повний текст джерела
Анотація:
Doctoral dissertation
National Cheng Kung University
Department of Electrical Engineering (MS/PhD Program)
92
A fundamental goal of image compression is to reduce the bit rate for transmission or data storage while maintaining an acceptable fidelity or image quality. Vector quantization (VQ) is a popular method for image compression. The purpose of vector quantization is to create a codebook such that the average distortion between training vectors and their corresponding codevectors in the codebook is minimized. Neural networks are well suited to the problem of image compression due to their massively parallel and distributed architecture. The use of neural networks for vector quantization has a significant advantage: neural networks are highly parallel computing architectures and thus offer the potential for real-time VQ. This dissertation describes the use of neural networks for vector quantization (VQ), with two unsupervised neural-network schemes, based on grey relation and fuzzy clustering, for training the vector quantizer. A powerful feature of these new training algorithms is that the VQ codewords are determined in an adaptive manner, in contrast to the popular LBG training algorithm, which requires that the entire training data be processed in batch mode. In the first proposed grey-based neural network scheme, grey theory is applied to a 2-D competitive Hopfield neural network (GHNN) and a two-layer competitive learning network (GCLN) in order to generate optimal solutions for VQ. In accordance with the degree of similarity between training vectors and codevectors, grey relational analysis is used to measure the relationship between them. In most cases, unsupervised training algorithms attempt to "cluster" or average portions of the training data into representative groups. In the second proposed fuzzy neural network scheme, codebook design is conceptually considered a clustering problem: a neural network model governed by a fuzzy clustering strategy works toward minimizing an objective function defined as the average distortion measure between any two training vectors within the same class. In order to generate feasible results, its implementation combines neural networks and fuzzy clustering with penalty-term methods (FCLN and PFHNN). While the GCLN, GHNN, FCLN and PFHNN algorithms converge to a local optimum, they are not guaranteed to reach the global optimum. The genetic algorithm (GA) is therefore used in an attempt to optimize a specified objective function related to vector quantizer design. The physical processes of competition, selection and reproduction operating in populations are adopted in combination with GCLN and PFHNN to produce a superior Genetic Grey-based Competitive Learning Network (GGCLN) and a Genetic Fuzzy Hopfield Neural Network with penalty term (GFHNN) for codebook design in image compression. Simulation results illustrate that embedding GA, grey relation and fuzzy clustering techniques into neural networks provides an approach for finding a globally optimal or near-optimal codebook for image compression. Color images are widely used in our daily lives, and color image compression and cryptosystems are closely related for secure Internet multimedia applications. In this dissertation, an invisible virtual color image system based on interpolative vector quantization (IVQ) using a spread neural network with penalized fuzzy c-means (PFCM) clustering (SPFNN) is also proposed, with the goal of offering safe exchange of a color stego-image on the Internet.
In the proposed scheme, the secret color image is first compressed by a spread-unsupervised neural network with PFCM based on IVQ; then the block cipher Data Encryption Standard (DES) and the Rivest-Shamir-Adleman (RSA) algorithms are employed to provide a hybrid cryptosystem for secure communication and a convenient environment on the Internet. In the SPFNN, the PFHNN algorithm is modified into a spread neural network in order to generate optimal solutions for color IVQ. The color IVQ indices and sorted codebooks of the secret color image are then encrypted and embedded into the frequency domain of the cover color image by the Hadamard transform (HT). The proposed method has two benefits: one is the high security and convenience offered by the hybrid DES and RSA cryptosystems for exchanging color image data on the Internet; the other is the excellent results obtained using the proposed SPFNN color image compression scheme.
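For reference, the batch LBG baseline that these adaptive schemes are contrasted with can be sketched as follows; the training data and codebook size are illustrative assumptions.

```python
import numpy as np

def lbg(train, k, iters=20, seed=0):
    """Generalized Lloyd / LBG iteration: alternate nearest-codeword assignment
    and centroid update to reduce the average distortion."""
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), k, replace=False)].astype(float)
    for _ in range(iters):
        d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)                       # nearest codeword per vector
        for j in range(k):
            members = train[assign == j]
            if len(members):
                codebook[j] = members.mean(0)      # centroid update
    return codebook

blocks = np.random.rand(1000, 16)   # e.g. 4x4 image blocks, flattened
cb = lbg(blocks, k=64)
```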
Стилі APA, Harvard, Vancouver, ISO та ін.
42

Martinez-Garcia, Isaac, Dutton, Robert W., Howe, Roger, and Wong, S. "In-situ calibration and direct de-embedding of RF integrated circuits and microwave structures using self-compensating techniques." 2010. http://purl.stanford.edu/vd527pg7763.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
43

Mardis, Kristy Lynn. "Studies of rotational-vibrational coupling in coordinate embedding and the methane association reaction and potential energy surface refinement techniques." 1998. http://catalog.hathitrust.org/api/volumes/oclc/40807788.html.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
44

Götz, Kathrin Claudia [Verfasser]. "Bond analysis of metal element interactions in molecules and solids applying embedding and density functional techniques / vorgelegt von Kathrin Claudia Götz." 2009. http://d-nb.info/997741392/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Krishna, Kumar S. "Blind Detection Techniques For Spread Spectrum Audio Watermarking." Thesis, 2007. https://etd.iisc.ac.in/handle/2005/588.

Повний текст джерела
Анотація:
In spread spectrum (SS) watermarking of audio signals, since the watermark acts as additive noise to the host audio signal, the most important challenge is to maintain perceptual transparency. Human perception is a very sensitive apparatus, yet it can be exploited to hide some information reliably. SS watermark embedding has been proposed in which psycho-acoustically shaped pseudo-random sequences are embedded directly into the time-domain audio signal. However, these watermarking schemes use informed detection, in which the original signal is assumed to be available to the watermark detector. Blind detection of psycho-acoustically shaped SS watermarking is not well addressed in the literature. The problem is still interesting because blind detection is more practical for audio signals, and psycho-acoustically shaped watermark embedding offers the maximum possible watermark energy under the requirement of perceptual transparency. In this thesis we study the blind detection of psycho-acoustically shaped SS watermarks in time-domain audio signals. We focus on a class of watermark sequences known as random phase watermarks, where the watermark magnitude spectrum is defined by the perceptual criteria and the randomness of the sequence lies in the phase spectrum. Blind watermark detectors, which do not have access to the original host signal, may seem handicapped, because an approximate watermark has to be re-derived from the watermarked signal. Since the comparison of blind detection with fully informed detection is unfair, a hypothetical detection scheme, denoted semi-blind detection, is used as a reference benchmark. In semi-blind detection, the host signal as such is not available for detection, but it is assumed that sufficient information is available for deriving the exact watermark which could be embedded in the given signal. Some reduction in performance is anticipated in blind detection over semi-blind detection. Our experiments revealed that the statistical performance of the blind detector is better than that of the semi-blind detector. We analyze the watermark-to-host correlation (WHC) of random phase watermarks, and the results indicate that the WHC is higher when a legitimate watermark is present in the audio signal, which leads to better detection performance. Based on these findings, we attempt to harness this increased correlation in order to further improve the performance. The analysis shows that a uniformly distributed phase difference (between the host signal and the watermark) provides the maximum advantage. This property is verified through experimentation over a variety of audio signals. In the second part, the correlated nature of audio signals is identified as a potential threat to reliable blind watermark detection, and audio pre-whitening methods are suggested as a possible remedy. A direct deterministic whitening (DDW) scheme is derived from the frequency-domain analysis of the time-domain correlation process. Our experimental studies reveal that Savitzky-Golay whitening (SGW), which is otherwise inferior to the DDW technique, performs better when the audio signal is predominantly low-pass. The novelty of this work lies in exploiting the complementary nature of the two whitening techniques and combining them to obtain a hybrid whitening (HbW) scheme. In the hybrid scheme, the DDW and SGW techniques are selectively applied based on short-time spectral characteristics of the audio signal.
The hybrid scheme extends the reliability of watermark detection to a wider range of audio signals. We also discuss enhancements to the HbW technique for robustness to temporal offsets and filtering. Robustness of SS watermark blind detection, with hybrid whitening, is determined through a set of experiments and the results are presented. It is seen that the watermarking scheme is robust to common signal processing operations such as additive noise, filtering, lossy compression, etc.
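For a concrete picture of the detection pipeline the abstract describes, the sketch below shows a generic correlation detector with two pre-whitening front ends, selected per frame as in the hybrid scheme. It is a minimal illustration under stated assumptions, not the thesis' actual algorithm: DDW is read here as flattening the magnitude spectrum while keeping the phase, SGW as subtracting a Savitzky-Golay smoothed estimate, and the low-pass test, window sizes, and threshold are placeholder choices.

import numpy as np
from scipy.signal import savgol_filter

def ddw_whiten(x):
    # One plausible reading of direct deterministic whitening: flatten
    # the magnitude spectrum and keep only the phase information.
    X = np.fft.rfft(x)
    X_flat = X / np.maximum(np.abs(X), 1e-12)
    return np.fft.irfft(X_flat, n=len(x))

def sgw_whiten(x, window=31, order=3):
    # Savitzky-Golay whitening read as a residual: subtract a smooth
    # local-polynomial estimate, leaving a whiter high-frequency signal.
    return x - savgol_filter(x, window, order)

def is_lowpass(x, cutoff_fraction=0.25, energy_fraction=0.9):
    # Crude short-time spectral test: is most of the energy below the cutoff?
    mag2 = np.abs(np.fft.rfft(x)) ** 2
    k = int(len(mag2) * cutoff_fraction)
    return mag2[:k].sum() >= energy_fraction * mag2.sum()

def blind_detect(frame, watermark, threshold=0.1):
    # Hybrid whitening: pick the front end from the frame's spectral
    # shape, then decide by normalized correlation with the watermark.
    whiten = sgw_whiten if is_lowpass(frame) else ddw_whiten
    yw = whiten(frame)
    stat = np.dot(yw, watermark) / (np.linalg.norm(yw) * np.linalg.norm(watermark) + 1e-12)
    return stat > threshold

In practice the frame length, cutoff, and threshold would be tuned on real audio; the point is only the structure the abstract implies: whiten first, then correlate.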
Стилі APA, Harvard, Vancouver, ISO та ін.
46

Lee, Chun-Yi, and 李宗嶧. "Image Characteristics Applied to Data Embedding Technique." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/20953993045840470667.

Повний текст джерела
Анотація:
Master's thesis
育達商業技術學院 (Yu Da College of Business and Technology)
資訊管理所 (Graduate Institute of Information Management)
Academic year 95 (ROC calendar, i.e., 2006/07)
After the advent of the personal computer and the internet revolution, digital information came into widespread use, which also made creators' works easy to usurp and republish; intellectual property rights therefore urgently need protection. Moreover, secret information in transit is easily intercepted, stolen, or altered. Under this urgent demand for information security on the internet, image media, whose appearance makes them good disguise carriers, offer an important means of protecting information through image-based disguising and hiding. The purpose of this research is to hide information within images, so as to improve the security of secret information and to support the identification of intellectual property rights. Current studies of hiding information in images face three main problems: the better the visual effect, the smaller the hiding capacity; the larger the hiding capacity, the worse the visual effect; and the more complicated the hiding algorithm, the higher the time cost, which in turn affects both visual effect and capacity. The critical factors are that hiding becomes harder the fewer bits the image records per pixel, the simpler the image's colors, and the more distinct the borders of shapes in the image. This research combines steganography and cryptography in a division-of-labor fashion, so that each performs its own function with the best efficiency. We propose a data embedding technique based on image characteristics that achieves the best visual effect and the greatest hiding capacity simultaneously, and that secures the data through pre-treatment, a hiding algorithm, and post-treatment.
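As a rough illustration of what embedding guided by image characteristics can mean, the sketch below varies the number of least-significant bits used per pixel with the local block variance, so busy regions carry more payload than smooth ones. This is a generic construction under assumed parameters (8x8 blocks, a variance threshold of 100), not the specific algorithm proposed in the thesis.

import numpy as np

def block_capacity(block, low=1, high=3, var_threshold=100.0):
    # Textured (high-variance) blocks tolerate more LSB changes invisibly.
    return high if block.var() > var_threshold else low

def embed_bits(image, bits, block=8):
    # image: 2-D uint8 grayscale array; bits: sequence of 0/1 integers.
    img = image.copy()
    h, w = img.shape
    i = 0
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            k = block_capacity(image[r:r + block, c:c + block])
            for rr in range(r, r + block):
                for cc in range(c, c + block):
                    if i >= len(bits):
                        return img
                    chunk = bits[i:i + k]
                    i += len(chunk)
                    value = int(''.join(map(str, chunk)), 2)
                    mask = (1 << len(chunk)) - 1
                    img[rr, cc] = (int(img[rr, cc]) & ~mask) | value
    return img

Note that a matching extractor must be able to recompute the same per-block capacities; practical adaptive schemes therefore base the decision on data the embedding does not alter, such as the higher bit planes.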
Стилі APA, Harvard, Vancouver, ISO та ін.
47

Chang, Chia-Chin, and 張嘉欽. "Self-Correction of Digital Images Using Self-Embedding Technique." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/32154163751287343330.

Повний текст джерела
Анотація:
Master's thesis
大葉大學 (Da-Yeh University)
資訊工程學系碩士班 (Master's Program, Department of Information Engineering)
Academic year 91 (ROC calendar, i.e., 2002/03)
Digital images transferred over the internet may be modified by attackers. In this thesis, we propose a new self-embedding image watermark that performs error detection and self-recovery on modified images, improving their quality. In the embedding procedure, we use a key-dependent basis transform to take an image from the spatial domain into the frequency domain; the basis is arranged so that the number of zero crossings increases with the row index. The similar-neighboring-block direction codes and an approximation of the original image are extracted and embedded into the image in the frequency domain, yielding an image that carries its own recovery data. The embedded image may then be tampered with by attackers, after which the quality of the modified image needs to be restored. In the recovery procedure, we use the secret key to generate the same basis as in the embedding procedure. With this key-dependent basis, the approximation and the block direction codes can be extracted from the modified image; we use them to detect the erroneous regions and use the similar-neighboring-block direction codes to recover those regions. The experimental results show that the proposed self-healing method can detect and recover the erroneous regions and improve the quality of modified images.
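The self-embedding idea itself fits in a few lines. The sketch below stores a coarse 4-bit approximation of every block in the least-significant bits of a distant block, so that a tampered block can later be approximated from data hidden elsewhere in the image. It deliberately omits the thesis' key-dependent basis transform and block-direction codes; the fixed offset is a stand-in for what would really be a key-driven, secret mapping.

import numpy as np

def embed_self_recovery(img, block=8, shift=17):
    # Hide each block's 4-bit mean in the LSBs of a block `shift`
    # positions away (wrapping around). img: 2-D uint8 grayscale array.
    h, w = img.shape
    cols = w // block
    nb = (h // block) * cols
    out = img.copy() & 0xFE              # clear the LSB plane
    for idx in range(nb):
        r, c = divmod(idx, cols)
        mean4 = int(img[r*block:(r+1)*block, c*block:(c+1)*block].mean()) >> 4
        tr, tc = divmod((idx + shift) % nb, cols)
        # write the 4 bits of mean4 into the first 4 LSBs of the target block
        for b in range(4):
            out[tr*block, tc*block + b] |= (mean4 >> (3 - b)) & 1
    return out

Recovery would reverse the mapping: locate the block that holds the tampered block's bits, read out the 4-bit mean, and paint the damaged region with it.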
Стилі APA, Harvard, Vancouver, ISO та ін.
48

"Qualitative Models of Neural Activity and the Carleman Embedding Technique." East Tennessee State University, 2009. http://etd-submit.etsu.edu/etd/theses/available/etd-0710109-101927/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
49

Chen, Ying-Ching, and 陳映親. "Scalable Secret Image Sharing based on Adaptive Pixel-embedding Technique." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/z482rv.

Повний текст джерела
Анотація:
Master's thesis
逢甲大學 (Feng Chia University)
資訊工程學系 (Department of Information Engineering)
Academic year 106 (ROC calendar, i.e., 2017/18)
Unlike an ordinary secret image sharing technique, a scalable secret image sharing (SSIS) scheme reveals the secret progressively as shares are gathered; in other words, an incomplete set of shadows cannot reconstruct the whole secret image at once. To improve the security of SSIS, Lee and Chen designed a selective scalable secret image sharing mechanism (SSSIS) that draws less attention from malicious attackers. Nevertheless, the quality of Lee and Chen's scheme suffers from the image distortion and storage overhead of static embedding. We therefore introduce the concept of adaptive pixel-embedding into SSSIS, in which the embedded bits are distributed uniformly over the stego image. In addition to human visual perception, experimental results demonstrate the superiority of the new method over related works in terms of two objective indexes: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
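Since the comparison rests on PSNR and SSIM, the sketch below shows how these two indexes are typically computed for 8-bit grayscale images (PSNR = 10·log10(255²/MSE), SSIM via scikit-image). It assumes equally sized numpy arrays and is only the standard evaluation, not anything specific to the proposed scheme.

import numpy as np
from skimage.metrics import structural_similarity

def psnr(original, stego, peak=255.0):
    # Peak signal-to-noise ratio in dB between cover and stego image.
    mse = np.mean((original.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak * peak / mse)

def ssim(original, stego):
    # Structural similarity index for 8-bit grayscale arrays.
    return structural_similarity(original, stego, data_range=255)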
Стилі APA, Harvard, Vancouver, ISO та ін.