Theses / dissertations on the topic "Identification and Authentication Techniques"

Follow this link to see other types of publications on the topic: Identification and Authentication Techniques.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles


See the 50 best works (theses / dissertations) for research on the topic "Identification and Authentication Techniques".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Browse theses / dissertations from a wide range of scholarly disciplines and compile an accurate bibliography.

1

Bhamra, Sukvinder. "Investigating the use and identity of traditional herbal remedies amongst South Asian communities using surveys and biomolecular techniques". Thesis, De Montfort University, 2016. http://hdl.handle.net/2086/12392.

Full text of the source
Abstract:
Herbal medicines (HMs) have been used to supplement, maintain, and treat health conditions, and have inspired the development of many Western pharmaceuticals. Migrant South Asian (SA) communities in the UK have brought with them their own traditional forms of medicine, yet little is known about their current use of HMs in the UK. Consuming HMs alongside conventional Western medicines could affect pharmacological treatment and lead to herb-drug interactions; hence, healthcare professionals (HCPs) should be aware of their patients’ use of HMs. The import of HMs to the UK raises concerns over the quality, safety and regulation of HMs. Deoxyribonucleic acid (DNA) barcoding can be used to discriminate between different species and to identify contaminants and adulterants, and thus can be used for the authentication of HMs. The South Asian Traditional Medicines (SATMED) questionnaire explored the knowledge and use of HMs by diasporic SA communities in the UK. It uncovered a vast range of HMs used by participants, where ingredients were sourced from, the concurrent use of herbal and Western medicines, and how minor ailments were treated. An online survey designed to investigate UK-based practitioners’ views of HMs revealed that HCPs claimed to lack sufficient knowledge of HMs. HCPs said they needed more training on HMs to help them make better-informed decisions. Tulsi (Ocimum tenuiflorum L.) was identified as a culturally and commercially valuable plant, which was used for molecular analysis. A variety of tulsi samples were collected for authentication: community samples from SA families in the UK, commercial samples, and reference specimens. Both the ITS and trnH-psbA regions were successfully used to distinguish between several Ocimum species and to identify a potential species substitution. This research represents the first time that DNA-based methods have been used to authenticate medicinal plant species used by migrant SA communities living in the UK.
The results of this multi-disciplinary study provide a unique contribution to the evolving discipline of ethnopharmacology.
ABNT, Harvard, Vancouver, APA, and other styles
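The DNA-barcoding step the abstract describes (comparing a sample's ITS or trnH-psbA sequence against reference barcodes to assign a species) can be sketched minimally. The sequences and reference table below are made-up placeholders, not real barcode data, and real barcoding uses proper sequence alignment rather than position-by-position comparison:

```python
def pairwise_identity(a: str, b: str) -> float:
    """Fraction of matching positions over the shorter sequence length."""
    n = min(len(a), len(b))
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / n

# Hypothetical reference barcodes per species (illustrative only).
REFERENCES = {
    "Ocimum tenuiflorum": "ATCGGCTAGCTTACGGATCC",
    "Ocimum basilicum":   "ATCGGCTTGCATACGCATCC",
}

def identify(query: str) -> tuple[str, float]:
    """Return the reference species closest to the query, with its identity."""
    best = max(REFERENCES, key=lambda sp: pairwise_identity(query, REFERENCES[sp]))
    return best, pairwise_identity(query, REFERENCES[best])

species, score = identify("ATCGGCTAGCTTACGGATCC")
```

A query identical to one reference scores 1.0 for that species; a sample whose best match is a different species than its label would flag a potential substitution, as in the tulsi study.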
2

Brandão, Luís T. A. N. "The Forge-and-Lose Technique and Other Contributions to Secure Two-Party Computation with Commitments". Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1001.

Full text of the source
Abstract:
This doctoral dissertation presents contributions advancing the state of the art of secure two-party computation (S2PC) — a cryptographic primitive that allows two mutually distrustful parties, with respective private inputs, to evaluate a function of their combined input, while ensuring privacy of inputs and outputs and integrity of the computation, externally indistinguishable from an interaction mediated by a trusted party. The dissertation shows that S2PC can be made more practical by means of innovative cryptographic techniques, namely by engineered use of commitment schemes with special properties, enabling more efficient protocols, with provable security and applicable to make systems more dependable. This is one further step toward establishing S2PC as a practical tool for privacy-preserving applications. The main technical contribution is a new protocol for S2PC of Boolean circuits, based on an innovative technique called forge-and-lose.¹ Building on top of a traditional cut-and-choose of garbled circuits (cryptographic versions of Boolean circuits), the protocol improves efficiency by reducing the needed number of garbled circuits by a factor of approximately 3. This significantly reduces a major communication component of S2PC with malicious parties, for circuits of practical size. The protocol achieves simulatable S2PC-with-commitments, producing random commitments of the circuit input and output bits of both parties. The commitments also enable direct linkage of several S2PCs in a malicious adversarial setting. As a second result, the dissertation describes an improvement to the efficiency of one of the needed sub-protocols: simulatable two-party coin-flipping.¹ The sub-protocol is based on a new universally composable commitment scheme that, for bit-strings of increasing size, can achieve an asymptotic communication-complexity rate arbitrarily close to 1.
The dissertation then discusses how S2PC-with-commitments can enable, in brokered identification systems, a difficult-to-achieve privacy property — a kind of unlinkability.¹ This mitigates a vector of potential mass surveillance by an online central entity (a hub), which is otherwise empowered in systems being developed at nation scale for authentication of citizens. When the hub mediates the authentication of users between identity providers and service providers, an adequate S2PC (e.g., of a block cipher) can prevent the hub from learning user pseudonyms that would allow linking transactions of the same user across different service providers.
¹ Parts of these contributions were previously presented at ASIACRYPT 2013, PETS 2015, and PKC 2016.
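The commitment schemes this dissertation engineers can be illustrated by the simplest textbook variant: a hash-based commitment, hiding (the digest reveals nothing without the randomness) and binding (the committer cannot later open to a different bit). This generic sketch is not the universally composable scheme from the dissertation:

```python
import hashlib
import secrets

def commit(bit: int) -> tuple[bytes, bytes]:
    """Commit to a bit; keep the randomness secret until the reveal phase."""
    randomness = secrets.token_bytes(16)
    digest = hashlib.sha256(randomness + bytes([bit])).digest()
    return digest, randomness

def open_commitment(digest: bytes, randomness: bytes, bit: int) -> bool:
    """Verify that (randomness, bit) opens the earlier commitment."""
    return hashlib.sha256(randomness + bytes([bit])).digest() == digest
```

In a coin-flipping sub-protocol of the kind mentioned above, one party commits to a random bit, the other announces its own bit, and the commitment is then opened, so neither party can bias the XOR of the two bits.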
3

Ge, He. "Flexible Digital Authentication Techniques". Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5277/.

Full text of the source
Abstract:
This dissertation investigates authentication techniques in some emerging areas. Specifically, authentication schemes have been proposed that are well suited for embedded systems and for privacy-respecting pay Web sites. With embedded systems, a person could own several devices which are capable of communication and interaction, but these devices use embedded processors whose computational capabilities are limited compared to desktop computers. Examples of this scenario include entertainment devices or appliances owned by a consumer, multiple control and sensor systems in an automobile or airplane, and environmental controls in a building. An efficient public-key cryptosystem has been devised, which provides a complete solution for an embedded system, including protocols for authentication, authenticated key exchange, encryption, and revocation. The new construction is especially suitable for devices with constrained computing capabilities and resources. Compared with other available authentication schemes, such as X.509, identity-based encryption, etc., the new construction provides unique features such as simplicity, efficiency, forward secrecy, and an efficient re-keying mechanism. In the application scenario of a pay Web site, users may be sensitive about their privacy and do not wish their behavior to be tracked by Web sites. Thus, an anonymous authentication scheme is desirable in this case: a user can prove his/her authenticity without revealing his/her identity. On the other hand, the Web site owner would like to prevent a group of users from sharing a single subscription while hiding behind user anonymity. The Web site should be able to detect these possible malicious behaviors, and exclude corrupted users from future service. This dissertation extensively discusses anonymous authentication techniques, such as group signatures, direct anonymous attestation, and traceable signatures.
Three anonymous authentication schemes have been proposed: a group signature scheme with signature claiming and variable linkability; a scheme for direct anonymous attestation in trusted computing platforms with sign and verify protocols nearly seven times more efficient than the current solution; and a state-of-the-art traceable signature scheme with support for variable anonymity. These three schemes greatly advance research in the area of anonymous authentication. The authentication techniques presented in this dissertation are based on common mathematical and cryptographic foundations and share similar security assumptions. We call them flexible digital authentication schemes.
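All of the schemes above instantiate a challenge-response pattern: the verifier issues a fresh nonce and the prover returns a keyed answer, so responses cannot be replayed. The sketch below uses a shared HMAC key purely for brevity; the dissertation's constructions are public-key and anonymous, which this toy example does not attempt to capture:

```python
import hashlib
import hmac
import secrets

def make_challenge() -> bytes:
    """Verifier picks a fresh random nonce so old responses cannot be replayed."""
    return secrets.token_bytes(16)

def respond(key: bytes, challenge: bytes) -> bytes:
    """Prover answers the challenge; only a holder of the key can do so."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def check(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Verifier recomputes the expected response and compares in constant time."""
    return hmac.compare_digest(respond(key, challenge), response)
```

In a group-signature setting, `respond` would instead produce a signature proving membership in an authorized group without identifying which member signed.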
4

Nastasiu, Dragos-Florin. "Développement de solutions pour l’identification (THID) et l’authentification par des approches non intrusives dans le domaine THz". Electronic Thesis or Diss., Chambéry, 2024. http://www.theses.fr/2024CHAMA007.

Full text of the source
Abstract:
THz imaging is an emerging field, driven by technological advances in THz radiation emission and detection equipment. The main objective of the thesis is to contribute to and improve THz imaging systems, from image reconstruction and analysis to image classification tasks. In the first part of the thesis, we tackle the amplitude-estimation challenge under ideal and multiplicative noise conditions. Multiplicative noise deforms the phase and introduces complex artefacts, such as contour information loss and contrast degradation, that cannot be eliminated using state-of-the-art image reconstruction techniques. In this regard, we introduce five novel reconstruction methods which exploit the phase-diagram representation of signals. Two of the methods are based on phase-diagram matched filtering to estimate the amplitude under both conditions. Another two methods use the concept of dynamic time warping (DTW) to increase the capability to model the multiplicative type of noise. Lastly, we exploit the dynamics of the phase trajectory, described by its curvatures, to reconstruct the image. From this pool of methods, we show throughout the thesis that the curvature-based method efficiently reconstructs the image in both ideal and noisy contexts. Building on efficient image reconstruction, in the second part of the thesis we study image analysis and classification methods that account for the instabilities of real-world imaging systems, such as translations and rotations. In this sense, we propose to use translation- and rotation-invariant wavelet packet decompositions, which provide a unique and optimal representation of an image regardless of whether the image is translated or rotated. Based on the invariant image representations, novel feature extraction techniques are introduced, such as vertical, horizontal, N-directional and N-zonal frameworks.
Additionally, two feature structures are introduced that take into account the frequency partitioning of the wavelet decomposition and are adapted to work with graph neural networks (GNNs) and classic ML classifiers such as k-nearest neighbors (k-NN), support vector machines (SVMs), etc. Overall, our proposed approaches increase the accuracy of all classifiers.
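Dynamic time warping, which two of the reconstruction methods above rely on, aligns two sequences by allowing elastic stretching along the time axis. A standard textbook implementation (not the thesis's phase-diagram variant) looks like this:

```python
import math

def dtw_distance(a, b):
    """Classic O(n*m) dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    # D[i][j] = cost of the best alignment of a[:i] with b[:j]
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # a-sample repeated
                                 D[i][j - 1],      # b-sample repeated
                                 D[i - 1][j - 1])  # one-to-one match
    return D[n][m]
```

Unlike a plain Euclidean comparison, DTW scores `[1, 2, 3]` and `[1, 2, 2, 3]` as identical, which is what makes it useful for modelling noise that locally stretches a signal.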
5

Wong, Chin Man. "Personal identification/authentication by using hand geometry /". View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?COMP%202003%20WONG.

Full text of the source
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 104-109). Also available in electronic version. Access restricted to campus users.
6

Jiang, Feng. "Efficient Public-Key Watermark Techniques for Authentication". Thesis, Purdue University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10618833.

Full text of the source
Abstract:

The security of digital media content has received significant attention as the use of multimedia in today's society has increased. Digital watermarking is widely applied for digital image copyright protection and authentication. The extraction and verification of the watermark can be used for many applications, for example, authenticating the image. In some situations, the authentication should be accessible to all; thus public-key watermarking is necessary.

In addition, many essential image-embedded documents are kept in a physical format and used widely for authentication purposes. These documents include the personal ID, license, passport, immigration document, commercial ticket with identity information, personal medical report, etc.

A digital watermarking system with high embedding capacity, robustness to various attacks, and high extraction efficiency is needed for such practical use. A public-key watermarking system is proposed for such applications. The embedded watermark/message can be extracted and verified publicly using a public key. The watermark extraction process is efficient and blind. The watermark can only be embedded by the document issuer. The embedded watermark is robust not only against common digital signal processing attacks and geometric attacks, but also against the print-scan process. Differing from existing watermarking approaches, the watermark is embedded according to the result of the proposed object weight map detection and automatic object segmentation. Higher watermark robustness and embedding capacity are achieved. Our simulation results demonstrate that the proposed approach is effective and can be applied to various applications.

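For intuition about the embed/extract cycle, the classic least-significant-bit (LSB) baseline hides one watermark bit per pixel while changing each pixel value by at most 1. This baseline is far weaker than the public-key, print-scan-robust scheme proposed above and is shown only to make the mechanics concrete; the pixel values are hypothetical:

```python
def embed_lsb(pixels, bits):
    """Embed watermark bits into the least-significant bits of pixel values."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # clear the LSB, then set it to the bit
    return out

def extract_lsb(pixels, n):
    """Read back the first n embedded bits (blind: no cover image needed)."""
    return [p & 1 for p in pixels[:n]]

cover = [120, 121, 122, 123]   # hypothetical grayscale pixel values
mark = [1, 0, 1, 1]
stego = embed_lsb(cover, mark)
```

Extraction here is blind, as in the proposed system, but a single print-scan pass would destroy these fragile bits, which is why the thesis embeds in segmented object regions instead.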
7

Balisane, Hewa. "Human gait analysis for biometric identification and authentication". Thesis, Manchester Metropolitan University, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.539385.

Full text of the source
Abstract:
The study of biometrics is concerned with any human physiological or behavioural characteristic which is universal, unique and measurable. Biometric systems operate by acquiring biometric data from individuals, extracting feature sets from the data, and comparing these features with an enrolment set in a database. The aim of this research is to compare the performance of gait-based user recognition for children with that for adults. Existing analysis techniques in gait-based recognition using wearable sensors for adults are applied to gait analysis in children. This is the first known study of biometric gait recognition conducted on children (5-16 years old). Results presented here show that the performance degradation for children's walking compared to adult walking is approximately 100%. In comparable settings, a 6.21% equal error rate (EER) was reached for adult gait recognition, whilst for children's walking an EER of 12.69% was achieved. The performance of children's walking whilst carrying an object has also been studied. Results show that carrying an object actually improves performance when walking normally, but when the children were asked to walk faster the walking becomes unstable, resulting in a higher EER. A comparative investigation of the effects of time on gait recognition in children's walking patterns was carried out. The effects of age and gender have also been considered. In addition, children were tested six months apart; with the sensor in the hip position, the performance of gait recognition shows significant variations in EER values. Finally, this thesis offers for the first time a coupled approach in which statistical time-domain and frequency-domain methods are employed to match biometric gait signals. It has been shown that root mean square, crest factor and kurtosis initially obtained matches in the gait signals of children aged 5-16 similar to those of the traditional methods.
Hence these novel methods can be exploited to verify the more established methods resident in gait recognition software.
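The equal error rate (EER) reported above is the operating point at which the false accept rate (FAR) equals the false reject rate (FRR). Given genuine and impostor distance scores, it can be estimated by sweeping a decision threshold; the score lists below are illustrative, not data from the thesis:

```python
def far_frr(genuine, impostor, threshold):
    """Scores are distances: accept when score <= threshold."""
    far = sum(s <= threshold for s in impostor) / len(impostor)
    frr = sum(s > threshold for s in genuine) / len(genuine)
    return far, frr

def equal_error_rate(genuine, impostor):
    """Sweep candidate thresholds and return the rate where FAR ~= FRR."""
    best_gap, best_eer = None, None
    for t in sorted(set(genuine) | set(impostor)):
        far, frr = far_frr(genuine, impostor, t)
        gap = abs(far - frr)
        if best_gap is None or gap < best_gap:
            best_gap, best_eer = gap, (far + frr) / 2
    return best_eer

genuine = [0.1, 0.2, 0.3, 0.4]    # illustrative same-person distances
impostor = [0.35, 0.5, 0.6, 0.7]  # illustrative different-person distances
```

A higher EER, as measured for the children's fast-walking condition, means the genuine and impostor score distributions overlap more, so no threshold separates them cleanly.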
8

Jiang, Weina. "Multi-level image authentication techniques in printing-and-scanning". Thesis, University of Surrey, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.576163.

Full text of the source
Abstract:
Printed media, such as facsimiles, newspapers, documents, magazines and other published works, play an important role in communicating information in today's world. Printed media can be easily manipulated by advanced image editing software. Image authentication techniques are, therefore, indispensable for preventing undesired manipulations and protecting against infringement of copyright. In this thesis, we investigate image authentication for multi-level greyscale and halftone images using digital watermarking, image hashing and digital forensic techniques, including application to the printing-and-scanning process. Digital watermarking is the process of embedding information into the cover image which is used to verify its authenticity. The challenge of digital watermarking is the trade-off between embedding capacity and image imperceptibility. In this thesis, we examine the halftone watermarking algorithm proposed by Fu and Au. We observe that the image's perceptual quality is reduced after watermark embedding due to sharpening distortion and uneven tonality distribution. To optimize the imperceptibility of watermark embedding, we propose an iterative linear-gain halftoning algorithm. Our experiments show that the proposed halftone watermarking algorithm improves image quality significantly, by 6.5% to 12% as measured by Weighted Signal-to-Noise Ratio (WSNR) and by 11% to 23% as measured by Visual Information Fidelity (VIF), compared to Fu and Au's algorithm. While halftone watermarking provides limited robustness against print-and-scan processes, image hashing provides an alternative way to verify the authenticity of the content. Little work has been reported on image hashing for printed media. In this thesis, we develop a novel image hashing algorithm based on the SIFT local descriptor and introduce a normalization procedure to synchronize the printed-and-scanned image.
We compare our proposed hashing algorithm with singular-value-decomposition-based image hashing (SVD-hash) and feature-point-based image hashing (FP-hash) using the average Normalized Hamming Distance (NHD) and the Receiver Operating Characteristic (ROC). The proposed hash algorithm shows a good performance trade-off between robustness and discrimination compared to the SVD-hash and FP-hash algorithms, as quantified by the results obtained via NHD and ROC. Our proposed algorithm is found to be robust against a wide range of content-preserving attacks, including non-geometric attacks, geometric attacks and printing-and-scanning. For our work in digital forensics, we propose in this thesis a statistical approach based on Multi-sized block Benford's Law (MBL), and a texture analysis based on Local Binary Patterns (LBP), to identify the origins of printed documents. We compare the MBL-based and LBP-based approaches to a statistical feature-based approach proposed by Gou et al. The proposed MBL-based approach can identify printers from a relatively diverse set, while it proves less accurate at identifying printers of similar models. The proposed LBP-based approach provides a highly accurate identification rate of approximately 99.4%, with low variance. In particular, our LBP-based approach causes only a 2% mis-identification rate between two identical printers, whereas Gou et al.'s approach causes a 20% mis-identification rate. Our proposed LBP-based approach has also been successfully demonstrated on printed-and-scanned text documents. Moreover, it remains robust against common image processing attacks, including averaging filtering, median filtering, sharpening, rotation, resizing, and JPEG compression, with computational efficiency of the order of O(N). Key words: Authentication, Watermarking, Image Hashing, Perceptual Quality, Local Binary Pattern, Scale Invariant Feature Transform, Printer Identification, Scanner Identification, Sensor
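The normalized Hamming distance (NHD) used above to compare hashes, paired here with a toy average-hash, makes the verification step concrete: perceptually similar images yield a small NHD, different images a large one. This generic perceptual-hash sketch is not the SIFT-based algorithm of the thesis, and the tiny 2x2 "images" are placeholders:

```python
def average_hash(img):
    """Toy perceptual hash: 1 where a pixel is at or above the image mean."""
    flat = [v for row in img for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v >= mean else 0 for v in flat]

def nhd(h1, h2):
    """Normalized Hamming distance between two equal-length bit vectors."""
    assert len(h1) == len(h2)
    return sum(a != b for a, b in zip(h1, h2)) / len(h1)
```

A content-preserving change such as uniform brightening leaves the average-hash bits unchanged (NHD 0), while swapping the bright and dark regions flips every bit (NHD 1), which is the robustness/discrimination trade-off the thesis quantifies with ROC curves.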
9

Parker, William A. "Evaluation of data processing techniques for unobtrusive gait authentication". Thesis, Monterey, California: Naval Postgraduate School, 2014. http://hdl.handle.net/10945/41429.

Full text of the source
Abstract:
Approved for public release; distribution is unlimited.
The growth in smartphone usage has led to increased storage of sensitive data on these easily lost or stolen devices. In order to mitigate the effects of users who ignore, disable, or circumvent authentication measures like passwords, we evaluate a method employing gait as a source of identifying information. This research is based on previously reported methods with a goal of evaluating gait signal processing and classification techniques. This thesis evaluates the performance of four signal normalization techniques (raw signal, zero-scaled, gravity-rotated, and gravity rotated with zero-scaling). Additionally, we evaluate the effect of carrying position on classification. Data was captured from 23 subjects carrying the device in the front pocket, back pocket, and on the hip. Unlike previous research, we analyzed classifier performance on data collected from multiple positions and tested on each individual location, which would be necessary in a robust, deployable system. Our results indicate that restricting device position can achieve the best overall performance using zero-scaling with 6.13% total error rate (TER) on the XY-axis but with a high variance across different axes. Using data from all positions with gravity rotation can achieve 12.6% TER with a low statistical variance.
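Of the four normalization techniques compared above, zero-scaling is the simplest to sketch. Assuming it denotes the usual zero-mean, unit-variance standardization applied per accelerometer axis (an interpretation, not a detail stated in the abstract):

```python
import math

def zero_scale(samples):
    """Standardize one accelerometer axis to zero mean and unit variance."""
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    std = math.sqrt(var) or 1.0   # guard against a constant signal
    return [(s - mean) / std for s in samples]
```

Normalizing each axis this way removes per-session offset and amplitude differences, so a classifier compares the shape of the gait cycle rather than absolute sensor readings.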
10

Bhide, Priyanka. "Design and Evaluation of Accelerometer Based Mobile Authentication Techniques". Thesis, Linköpings universitet, Datorteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-133968.

Full text of the source
Abstract:
Smartphone usage is growing rapidly and is no longer limited to calls and SMS. People use smartphones for online shopping, searching for information on the web, bank transactions, games, and many other applications. Anything is possible with just a smartphone and the internet. Greater use of the smartphone also means keeping more secret information about the user on the phone. As popularity increases, so do the ways to steal or hack phones. Many areas require further investigation in the field of smartphone security and authentication. This thesis work evaluates the scope of different built-in smartphone sensors for mobile authentication techniques. The Android operating system was used in the implementation phase. Android OS has many open-source libraries and services, which were used for sensor identification on the Java Android platform. Two applications using the accelerometer sensor and one using the magnetometer sensor were developed. The two foremost objectives of this thesis work were: 1) to explore the possibilities of sensor-based authentication techniques, and 2) to gauge end users' perceptions and opinions of the applications. Usability testing was conducted to gather users' assessments of the applications. The two methods used for usability testing are named Magical Move and Tapping. Most users showed interest in and inclination towards the tapping application, although some users also expressed inhibitions about both sensor-based methods.
11

Rashid, Rasber Dhahir. "Robust steganographic techniques for secure biometric-based remote authentication". Thesis, University of Buckingham, 2015. http://bear.buckingham.ac.uk/236/.

Full text of the source
Abstract:
Biometrics are widely accepted as the most reliable proof of identity, of entitlement to services, and for crime-related forensics. Using biometrics for remote authentication is becoming an essential requirement for the development of a knowledge-based economy in the digital age. Ensuring the security and integrity of biometric data or templates is critical to the success of deployment, especially because once the data are compromised the whole authentication system is compromised, with serious consequences for identity theft, fraud, and loss of privacy. Protecting biometric data, whether stored in databases or transmitted over an open network channel, is a serious challenge, and cryptography may not be the answer. The main premise of this thesis is that digital steganography can provide alternative security solutions that can be exploited to deal with the biometric transmission problem. The main objective of the thesis is to design, develop and test steganographic tools to support remote biometric authentication. We focus on investigating the selection of biometric feature representations suitable for hiding in natural cover images, and on designing steganography systems that are specific to hiding such biometric data rather than being general purpose. The embedding schemes are expected to have high security characteristics, resistant to several types of steganalysis tools, and to maintain accuracy of recognition post-embedding. We shall limit our investigations to embedding face biometrics, but the same challenges and approaches should help in developing similar embedding schemes for other biometrics. To achieve this, our investigations and proposals proceed in the different directions explained in the rest of this section.
Reviewing the literature on the state of the art in steganography has revealed a rich source of theoretical work and creative approaches that have helped generate a variety of embedding schemes as well as steganalysis tools, but almost all focused on embedding random-looking secrets. The review greatly helped in identifying the main challenges in the field and the main criteria for success, in terms of the difficult-to-reconcile requirements on embedding capacity, efficiency of embedding, robustness against steganalysis attacks, and stego image quality. On the biometrics front, the review revealed another rich source of different face biometric feature vectors. The review helped shape our primary objectives as: (1) identifying a binarised face feature vector with high discriminating power that is susceptible to embedding in images; (2) developing special-purpose content-based steganography schemes that can benefit from the well-defined structure of the face biometric data in the embedding procedure while preserving accuracy, without leaking information about the source biometric data; and (3) conducting sufficient sets of experiments to test the performance of the developed schemes, highlighting the advantages as well as limitations, if any, of the developed system with regard to the above-mentioned criteria. We argue that the well-known LBP histogram face biometric scheme satisfies the desired properties, and we demonstrate that our new, more efficient wavelet-based versions, called LBPH patterns, are much more compact and have improved accuracy. In fact, the wavelet-version schemes reduce the number of features to between 22% and 72% of the original LBP scheme's, guaranteeing better invisibility post-embedding. We then develop two steganographic schemes.
The first, LSB-witness, is a general-purpose scheme that avoids changing the LSB plane, guaranteeing robustness against targeted steganalysis tools, and establishes the viability of using steganography for remote biometric-based recognition. However, it may modify the 2nd LSB of cover pixels as a witness for the presence of the secret bits in the 1st LSB, and thereby has some disadvantages with regard to stego image quality. Our search for a new scheme that exploits the structure of the secret face LBPH patterns for improved stego image quality led to the development of the first content-based steganography scheme. Embedding is guided by searching for similarities between the LBPH patterns and the structure of the cover image's LSB bit-planes, partitioned into 8-bit or 4-bit patterns. We demonstrate the benefits of using a content-based embedding scheme in terms of improved stego image quality, greatly reduced payload, a reduced lower bound on optimal embedding efficiency, and robustness against all targeted steganalysis tools. Our scheme was not, however, robust against the blind (universal) SRM steganalysis tool, although we demonstrated robustness against SRM at low payload when the scheme was modified to restrict embedding to edge and textured pixels. The low payload in this case is sufficient to embed a secret full-face LBPH pattern. Our work opens new exciting opportunities to build successful real applications of content-based steganography and presents plenty of research challenges.
ABNT, Harvard, Vancouver, APA styles, etc.
12

Cetin, Cagri. "Authentication and SQL-Injection Prevention Techniques in Web Applications". Scholar Commons, 2019. https://scholarcommons.usf.edu/etd/7766.

Full text of the source
Abstract:
This dissertation addresses the top two “most critical web-application security risks” by combining two high-level contributions. The first high-level contribution introduces and evaluates collaborative authentication, or coauthentication, a single-factor technique in which multiple registered devices work together to authenticate a user. Coauthentication provides security benefits similar to those of multi-factor techniques, such as mitigating theft of any one authentication secret, without some of the inconveniences of multi-factor techniques, such as having to enter passwords or biometrics. Coauthentication provides additional security benefits, including: preventing phishing, replay, and man-in-the-middle attacks; basing authentications on high-entropy secrets that can be generated and updated automatically; and availability protections against, for example, device misplacement and denial-of-service attacks. Coauthentication is amenable to many applications, including m-out-of-n, continuous, group, shared-device, and anonymous authentications. The principal security properties of coauthentication have been formally verified in ProVerif, and implementations have performed efficiently compared to password-based authentication. The second high-level contribution defines a class of SQL-injection attacks that are based on injecting identifiers, such as table and column names, into SQL statements. An automated analysis of GitHub shows that 15.7% of 120,412 posted Java source files contain code vulnerable to SQL-Identifier Injection Attacks (SQL-IDIAs). We have manually verified that some of the 18,939 Java files identified during the automated analysis are indeed vulnerable to SQL-IDIAs, including deployed Electronic Medical Record software for which SQL-IDIAs enable discovery of confidential patient information. Although prepared statements are the standard defense against SQL injection attacks, existing prepared-statement APIs do not protect against SQL-IDIAs. 
This dissertation therefore proposes and evaluates an extended prepared-statement API to protect against SQL-IDIAs.
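A minimal sketch of the identifier-injection problem and the allow-list defence, using a hypothetical `users` table: parameter markers can bind values but not identifiers such as column names, so identifiers interpolated into the SQL text must be validated separately:

```python
import sqlite3

# Hypothetical schema allow-list: the only identifiers callers may supply.
ALLOWED_COLUMNS = {"name", "email"}

def fetch_sorted(conn, order_col):
    """Values would be bound via '?', but an ORDER BY column is an
    identifier -- check it against the allow-list to block SQL-IDIAs."""
    if order_col not in ALLOWED_COLUMNS:
        raise ValueError("unexpected identifier: " + order_col)
    return conn.execute(
        "SELECT name FROM users ORDER BY " + order_col).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("bob", "b@example.org"), ("alice", "a@example.org")])
```

The dissertation's extended prepared-statement API would make such checks part of the API itself rather than ad-hoc caller code; this sketch only shows the vulnerability class it addresses.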
13

ALI, ARSLAN. "Deep learning techniques for biometric authentication and robust classification". Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2910084.

14

Breedt, Morné. "Integrating biometric authentication into multiple applications". Pretoria : [s.n.], 2005. http://upetd.up.ac.za/thesis/available/etd-08282007-135540.

15

Al-Athamneh, Mohammad Hmoud. "Studies in source identification and video authentication for multimedia forensics". Thesis, Queen's University Belfast, 2017. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.725326.

Abstract:
Nowadays, powerful and easy-to-use editing software, available to almost everyone, allows forgers to create convincing digital forgeries. As multimedia applications that require a certain level of trust in the integrity and authenticity of the data become more common, there is an increasing need to restore some of the lost trustworthiness of digital media. In multimedia forensics, digital signatures and digital watermarking have long been used for video authentication, but these methods have proven to have shortcomings. Their main drawback is that information must generally be inserted at the time of video capture or before video broadcasting, and both techniques require two stages, one at the sender side and one at the receiver side, which is not feasible in some real-world applications. For the problem of source identification, digital fingerprints are usually extracted and then compared with a dataset of possible fingerprints to determine the acquisition device. Photo-Response Non-Uniformity (PRNU), caused by the differing sensitivity of pixels to light, has proven to be a distinctive link between a camera and its images and videos. With this in mind, this thesis proposes several new digital forensic techniques to detect evidence of manipulation in digital video content using blind techniques (Chapters 4 and 5), with no need for pre-embedded watermarks or pre-generated digital signatures. These methods showed potential as reliable techniques for digital video authentication based on local video information. For the problem of determining the source of digital evidence, the thesis proposes a G-PRNU method (Chapter 3) that improves on the accuracy of the PRNU method for digital video source identification and is less computationally expensive. Each proposed method was tested on a dataset of videos, and detailed experimental results are presented.
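The PRNU idea can be sketched as follows, heavily simplified to 1-D signals with a moving-average denoiser standing in for the wavelet filtering used in practice; the synthetic "sensor noise" patterns in the usage are illustrative only:

```python
def residual(signal):
    """Noise residual: signal minus a 3-tap moving-average denoiser
    (a stand-in for the wavelet denoising used in PRNU work)."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - 1), min(len(signal), i + 2)
        out.append(signal[i] - sum(signal[lo:hi]) / (hi - lo))
    return out

def fingerprint(frames):
    """Camera fingerprint: average residual over several frames,
    so scene content averages out and sensor noise remains."""
    res = [residual(f) for f in frames]
    return [sum(col) / len(col) for col in zip(*res)]

def correlation(a, b):
    """Normalised cross-correlation; a high value links a query
    residual to a camera fingerprint."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) *
           sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0
```

Source identification then amounts to correlating a query video's residual against each candidate camera's fingerprint and picking the best match.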
16

Dasun, Weerasinghe P. W. H. "Parameter based identification, authentication and authorization method for mobile services". Thesis, City University London, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.510696.

17

Palombo, Hernan Miguel. "A Comparative Study of Formal Verification Techniques for Authentication Protocols". Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/6008.

Abstract:
Protocol verification is an exciting area of network security that intersects engineering and formal methods. This thesis presents a comparison of formal verification tools for security protocols, assessing their respective strengths and weaknesses with results from several case studies. The tools considered are based on explicit model checking (SPIN), symbolic analysis (ProVerif) and theorem proving (Coq). We formalize and provide models of several well-known authentication and key-establishment protocols in each of the specification languages, and use the tools to find attacks that demonstrate protocol insecurity. We contrast the modelling process in each tool by comparing the features of their modelling languages, the verification effort involved, and the analysis results. Our results show that authentication and key-establishment protocols can be specified in Coq's modelling language with an unbounded number of sessions and an unbounded message space; however, proofs in Coq require human guidance. SPIN runs automated verification with a restricted version of the Dolev-Yao attacker model. ProVerif has several advantages over SPIN and Coq: a tailored specification language and better performance on infinite-state-space analysis.
18

Norrington, Peter. "Novel, robust and cost-effective authentication techniques for online services". Thesis, University of Bedfordshire, 2009. http://hdl.handle.net/10547/134951.

Abstract:
This thesis contributes to the study of the usability and security of visuo-cognitive authentication techniques, particularly those relying on recognition of abstract images, an area little researched. Many usability and security problems with linguistic passwords (including traditional text-based passwords) have been known for decades. Research into visually-based techniques intends to overcome these by using the extensive human capacity for recognising images, and add to the range of commercially viable authentication solutions. The research employs a mixed methodology to develop several contributions to the field. A novel taxonomy of visuo-cognitive authentication techniques is presented. This is based on analysis and synthesis of existing partial taxonomies, combined with new and extensive analysis of features of existing visuo-cognitive and other techniques. The taxonomy advances consistent terminology, and coherent and productive classification (cognometric, locimetric, graphimetric and manipulometric, based respectively on recognition of, location in, drawing of and manipulation of images) and discussion of the domain. The taxonomy is extensible to other classes of cognitive authentication technique (audio-cognitive, spatio-cognitive, biometric and token-based, etc.). A revised assessment process of the usability and security of visuo-cognitive techniques is proposed (employing three major assessment categories – usability, memorability and security), based on analysis, synthesis and refinement of existing models. The revised process is then applied to the features identified in the novel taxonomy to prove the process's utility as a tool to clarify both the what and the why of usability and security issues. The process is also extensible to other classes of authentication technique.
Cognitive psychology experimental methods are employed, producing new results which show with statistical significance that abstract images are harder to learn and recall than face or object images. Additionally, new experiments and a new application of the chi-squared statistic show that users' choices of abstract images are not necessarily random over a group, and thus, like other cognitive authentication techniques, can be attacked by probabilistic dictionaries. A new authentication prototype is designed and implemented, embodying the usability and security insights gained. Testing of this prototype shows good usability and user acceptance, although speed of use remains an issue. A new experiment shows that abstract image authentication techniques are vulnerable to phishing attacks. Further, the testing shows two new results: that abstract image visuo-cognitive techniques are usable on mobile phones; and that such phones are not, currently, necessarily a threat as part of observation attacks on visual passwords.
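The chi-squared analysis of users' image choices can be illustrated with a small sketch testing observed choice counts against a uniform null hypothesis (the counts here are made up):

```python
def chi_squared(observed):
    """Pearson chi-squared statistic against a uniform null hypothesis,
    as used to test whether image choices are random across a group.
    A large statistic (vs. the critical value for len(observed)-1
    degrees of freedom) suggests non-random, attackable choices."""
    expected = sum(observed) / len(observed)
    return sum((o - expected) ** 2 / expected for o in observed)
```

For four images chosen with counts [25, 5, 5, 5], the statistic is 30.0, far above the 5% critical value of about 7.81 for 3 degrees of freedom, so the choices are very unlikely to be uniform; counts [10, 10, 10, 10] give 0.0.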
19

Shcherbina, Anna. "Short tandem repeat (STR) profile authentication via machine learning techniques". Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77020.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 169-171).
Short tandem repeat (STR) DNA profiles have multiple uses in forensic analysis, kinship identification, and human biometrics. However, as biotechnology progresses, there is a growing concern that STR profiles can be created using standard laboratory techniques such as whole genome amplification and molecular cloning. Such technologies can be used to synthesize any STR profile without the need for a physical sample, only knowledge of the desired genetic sequence. Therefore, to preserve the credibility of DNA as a forensic tool, it is imperative to develop means to authenticate STR profiles. The leading technique in the field, methylation analysis, is accurate but also expensive, time-consuming, and degrades the forensic sample so that further analysis is not possible. The realm of machine learning offers techniques to address the need for more effective STR profile authentication. In this work, a set of features were identified at both the channel and profile levels of STR electropherograms. A number of supervised and unsupervised machine learning algorithms were then used to predict whether a given STR electropherogram was authentic or synthesized by laboratory techniques. With the aid of the LNKnet machine learning toolkit, various classifiers were trained with the default set of parameters and the full set of features to quantify their baseline performance. Particular emphasis was placed on detecting profiles generated by Whole Genome Amplification (WGA). A greedy forward-backward search algorithm was implemented to determine the most useful subset of features from the initial group. Though the set of optimal feature values varied by classifier, a trend was observed indicating that the inter-locus imbalance error, stutter count, and range of peak widths for a profile were particularly useful features. These were selected by over two thirds of the classifiers. The signal-to-noise ratio was also a useful feature, selected by seven out of 16 classifiers.
The selected features were in turn used to tune the parameters of machine learning algorithms and to compare their performance. From a set of 16 initial classifiers, the K-nearest neighbors, condensed K-nearest neighbors, multi-layer perceptron, Parzen window, and support vector machine classifiers achieved the best performance. These classification algorithms all attained error rates of approximately ten percent (defined as the percentage of profiles misclassified), with the highest-performing classifier achieving an error rate of less than eight percent. Overall, the classifiers performed well at detecting artificial profiles but had more difficulty accurately distinguishing natural profiles. There were many false positives for the artificial class, since profiles in this category took on a greater range of feature values. Finally, preliminary steps were taken to form classifier committees. However, combining the top performing classifiers via a majority vote did not significantly improve performance. The results of this work demonstrate the feasibility of a completely software-based approach to profile authentication. They confirm that machine learning techniques are a useful tool to trigger further investigation of profile authenticity via more expensive approaches.
by Anna Shcherbina.
M.Eng.
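A toy sketch of the classification step above, using a plain k-nearest-neighbours vote over hypothetical feature vectors (inter-locus imbalance, stutter count, peak-width range); the numbers are invented for illustration, not taken from the thesis data:

```python
def knn_classify(train, query, k=3):
    """k-nearest-neighbour vote over labelled feature vectors using
    squared Euclidean distance; train is a list of (features, label)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(f, query)), label)
        for f, label in train)
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)
```

With vectors like (imbalance, stutter count, peak-width range), a query electropherogram is labelled "natural" or "artificial" by majority vote of its three nearest training profiles.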
20

Kapoor, Gaurav. "Secure ownership transfer and authentication protocols for Radio Frequency Identification (RFID)". [Gainesville, Fla.] : University of Florida, 2008. http://purl.fcla.edu/fcla/etd/UFE0022783.

21

Dooley, John J. "Molecular techniques for rhizobium identification". Thesis, University of Bath, 1997. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.338595.

22

Komanduri, Saranga. "Improving Password Usability with Visual Techniques". Bowling Green State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1194297698.

23

Abbadi, Laith. "Multi-factor Authentication Techniques for Video Applications over the Untrusted Internet". Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23413.

Abstract:
Designing a completely secure and trusted system is a challenge that still needs to be addressed. Currently, there is no online system that is: (i) easy to use, (ii) easy to deploy, (iii) inexpensive, and (iv) completely secure and trusted. The proposed authentication techniques aim to enhance security and trust for video applications in untrustworthy online environments. We propose a transparent multimodal biometric authentication (TMBA) for video conferencing applications. The user is identified based on his/her physiological and behavioral biometrics. The technique is based on a ‘Steps-Free’ method, where the user does not have to perform any specific steps during authentication. The system will authenticate the user in a transparent way. We propose authentication techniques as an additional security layer for various ‘user-to-user’ and ‘user-to-service’ systems. For ‘user-to-user’ video conferencing systems, we propose an authentication and trust establishment procedure to identify users during a video conference. This technique enables users that have never met before to verify the identity of each other, and aims at enhancing the users’ trust in each other. For ‘user-to-service’ video conferencing systems, we propose a transparent multimodal biometric authentication technique for video banking. The technique can be added to online transaction systems as an additional security layer to enhance the security of online transactions, and to resist web attacks, malware, and Man-In-The-Browser (MITB) attacks. In order to have a video banking conference between a user and a bank employee, the user has to be logged in to an online banking session. This requires knowledge-based authentication. Knowledge-based authentication includes a text-based password, the ‘Challenge Questions’ method, and graphical passwords. We analyzed several graphical password schemes in terms of usability and security factors.
A graphical password scheme can be an additional security layer add-on to the proposed multimodal biometric video banking system. The combined techniques provide a multimodal biometric multi-factor continuous authentication system.
24

Maddi, Satyanarayana. "DNA-based food authentication techniques : differentiation of tetraploid and hexaploid wheat". Thesis, Glasgow Caledonian University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.517961.

25

Carrillo, Cassandra M. "Continuous biometric authentication for authorized aircraft personnel : a proposed design". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FCarrillo.pdf.

26

Quinn, Marguerite Claire. "The characterization of olive oils by various chromatographic and spectroscopic techniques". Thesis, University of South Wales, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.265732.

27

Arteaga, Falconi Juan Sebastian. "ECG Authentication for Mobile Device". Thesis, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/30221.

Abstract:
Mobile device users are storing more and more private and often highly sensitive information on their mobiles. Protective measures are thus imperative to ensure that users of mobile devices are appropriately safeguarded. Traditional mobile login methods, like numerical or graphical passwords, are vulnerable to passive attacks: it is common for criminals to gain access to victims' personal information by watching victims enter their passwords on their cellphone screens from a short distance away. With this in mind, a biometric authentication algorithm based on the electrocardiogram (ECG) is proposed. In this system the user only needs to touch the ECG electrodes of the mobile device to gain access. With this authentication mode no one will be able to see the biometric pattern that is used to unlock the devices, which increases protection for users. The algorithm was tested with ten subjects from the MCRlab at the University of Ottawa on different days and under different conditions, using a two-electrode ECG phone case. Several tests were performed in order to reach the best settings for the algorithm to work properly. The final results show that the system has a 1.41% chance of accepting false users and an 81.82% chance of accepting the right users. The algorithm was also tested with 73 subjects from the PhysioNet database and the results were around the same, which confirms the consistency of the algorithm. This is the first approach to mobile authentication using ECG biometric signals and shows a promising future for this technology to be used in mobiles.
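The reported acceptance rates can be understood as threshold statistics over biometric match scores; a minimal sketch with invented scores (not the thesis measurements), where a higher score means a closer ECG match:

```python
def acceptance_rates(genuine, impostor, threshold):
    """True-accept rate (fraction of genuine scores accepted) and
    false-accept rate (fraction of impostor scores accepted) at a
    given decision threshold."""
    tar = sum(s >= threshold for s in genuine) / len(genuine)
    far = sum(s >= threshold for s in impostor) / len(impostor)
    return tar, far
```

Tuning the threshold trades the two rates against each other, which is how operating points such as "1.41% false accepts, 81.82% true accepts" arise.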
28

Wang, Tao. "Wireless Physical Layer Design for Confidentiality and Authentication". Scholar Commons, 2019. https://scholarcommons.usf.edu/etd/7985.

Abstract:
As various wireless techniques have been proposed to achieve fast and efficient data communication, it is becoming increasingly important to protect wireless communications from being undermined by adversaries. A secure and reliable wireless physical layer design is essential and critical to build a solid foundation for upper-layer applications. This dissertation presents two works that explore physical layer features to secure wireless communications, towards data confidentiality and user authentication. The first work builds a reliable wireless communication system to enforce location-restricted service access control. In particular, it proposes a novel technique named pinpoint waveforming to deliver services to users at eligible locations only. The second work develops a secure far proximity identification approach that can determine whether a remote device is far away, thus preventing potential spoofing attacks in long-haul wireless communications. The dissertation lastly describes future work: designing a lightweight encryption scheme to facilitate sensitive data encryption for applications that cannot support expensive cryptographic operations, such as IoT devices.
29

Atilgan, Erdinc Levent. "Target Identification Using Isar Imaging Techniques". Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606765/index.pdf.

Abstract:
A proper time-frequency transform technique suppresses the blurring and smearing effect of the time-varying Doppler shift on the target image. The conventional target imaging method uses the Fourier transform to extract the Doppler shift from the received radar pulse. Since the Doppler shift is time-varying for rotating targets, the constructed images are degraded. In this thesis, the Doppler shift information required for the range-Doppler image of the target is extracted using high-resolution time-frequency transform techniques. The Wigner-Ville Distribution and the Adaptive Gabor Representation, with the Coarse-to-Fine and Matching Pursuit search algorithms, are the techniques examined for the target imaging system. A modified Matching Pursuit algorithm, the Matching Pursuit with Reduced Dictionary, is proposed, which decreases the signal processing time required by the Adaptive Gabor Representation. The Hybrid Matching Pursuit search algorithm is also introduced in this thesis, combining the Coarse-to-Fine algorithm and the Matching Pursuit algorithm to obtain better representation quality of a signal in the time-frequency domain. The stated techniques are applied to sample signals and compared with each other, and their application in the target imaging system is demonstrated on simulated aircraft.
30

Aslan, Mehmet Kadir. "Emitter Identification Techniques In Electronic Warfare". Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607675/index.pdf.

Abstract:
In this study, emitter identification techniques have been investigated and a schema has been proposed to solve the emitter identification problem in Electronic Warfare (EW) systems. A clustering technique, histogram-based deinterleaving techniques and a continuous wavelet transform based deinterleaving technique have been reviewed. A receiver simulator has been developed to test the performance of these techniques and to compare them against each other. To compensate for the disadvantages of these techniques, a schema utilizing their beneficial points has been developed; with the proposed modifications, a resultant schema has been obtained that uses clustering and deinterleaving together. Tests made throughout this study have shown that this combination improves the performance of the emitter identification system. Hence, the proposed schema can be used to identify emitters in real EW systems.
31

Lindgren, David. "Projection techniques for classification and identification /". Linköping : Univ, 2004. http://www.bibl.liu.se/liupubl/disp/disp2005/tek915s.pdf.

32

Liu, Xuefeng. "Vibration-based structural damage identification techniques". Thesis, University of Bristol, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445826.

33

Eyecioglu, Ozmutlu Asli. "Paraphrase identification using knowledge-lean techniques". Thesis, University of Sussex, 2016. http://sro.sussex.ac.uk/id/eprint/65497/.

Abstract:
This research addresses the problem of identification of sentential paraphrases; that is, the ability of an estimator to predict well whether two sentential text fragments are paraphrases. The paraphrase identification task has practical importance in the Natural Language Processing (NLP) community because of the need to deal with the pervasive problem of linguistic variation. Accurate methods for identifying paraphrases should help to improve the performance of NLP systems that require language understanding. This includes key applications such as machine translation, information retrieval and question answering, amongst others. Over the course of the last decade, a growing body of research has been conducted on paraphrase identification and it has become a research area of NLP in its own right. Our objective is to investigate whether techniques concentrating on automated understanding of text, requiring fewer resources, may achieve results comparable to methods employing more sophisticated NLP processing tools and other resources. These techniques, which we call “knowledge-lean”, range from simple, shallow overlap methods based on lexical items or n-grams through to more sophisticated methods that employ automatically generated distributional thesauri. The work begins by focusing on techniques that exploit lexical overlap and text-based statistical techniques that are much less in need of NLP tools. We investigate the question “To what extent can these methods be used for the purpose of a paraphrase identification task?” On the two gold-standard datasets, we obtained competitive results on the Microsoft Research Paraphrase Corpus (MSRPC) and reached state-of-the-art results on the Twitter Paraphrase Corpus, using only n-gram overlap features in conjunction with support vector machines (SVMs).
These techniques do not require any language-specific tools or external resources and appear to perform well without the need to normalise colloquial language such as that found on Twitter. It was natural to extend the scope of the research and to consider experimenting on another language that is poor in resources. The scarcity of available paraphrase data led us to construct our own corpus: we have constructed a paraphrase corpus in Turkish. This corpus is relatively small but provides a representative collection, including a variety of texts. While there is still debate as to whether a binary or fine-grained judgement best suits a paraphrase corpus, we chose to provide data for a sentential textual similarity task by agreeing on fine-grained scoring, knowing that this could be converted to binary scoring, but not the other way around. The correlation between the results from the different corpora is promising; it can therefore be surmised that languages poor in resources can benefit from knowledge-lean techniques. Discovering the strengths of knowledge-lean techniques led us to extend them with a new perspective on techniques that use distributional statistical features of text by representing each word as a vector (word2vec). While recent research focuses on larger fragments of text with word2vec, such as phrases, sentences and even paragraphs, a new approach is presented by introducing vectors of character n-grams that carry the same attributes as word vectors. The proposed method has the ability to capture syntactic as well as semantic relations without semantic knowledge, and proves competitive on the Twitter data compared with more sophisticated methods.
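The knowledge-lean character n-gram overlap idea can be sketched directly; this illustrative Jaccard score is one plausible form of the overlap features fed to an SVM, not necessarily the exact feature set used in the thesis:

```python
def ngrams(text, n):
    """Set of character n-grams: a knowledge-lean representation that
    needs no tokeniser, parser or external resources."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def overlap_score(s1, s2, n=3):
    """Jaccard overlap of character n-grams between two sentences --
    high for paraphrase candidates, low for unrelated pairs."""
    a, b = ngrams(s1, n), ngrams(s2, n)
    return len(a & b) / len(a | b) if a | b else 0.0
```

In a full system, several such scores (different n, word vs. character n-grams) would form the feature vector for the SVM classifier.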
34

Azevedo, João Henrique Albino de. "Aeroelastic studies using system identification techniques". Instituto Tecnológico de Aeronáutica, 2013. http://www.bd.bibl.ita.br/tde_busca/arquivo.php?codArquivo=2864.

Abstract:
The present work is concerned with studying techniques which would allow the identification of a multiple degree of freedom aeroelastic system from a single computational fluid dynamics (CFD) unsteady simulation. This data is then used to generate the root locus for aeroelastic stability analysis of the dynamic system. The system considered in the present work is a NACA 0012 airfoil-based typical section in the transonic regime. The CFD calculations are based on the Euler equations, and the code uses a finite volume formulation for general unstructured grids. A centered spatial discretization with added artificial dissipation is used, and an explicit Runge-Kutta time marching method is employed. Unsteady calculations are performed for several types of excitation on the plunge and pitch degrees of freedom of the dynamic system. These inputs are mostly based on step and orthogonal Walsh functions. System identification techniques are used to allow the splitting of the aerodynamic coefficient time histories into the contributions of each individual mode to the corresponding aerodynamic transfer functions. Such transfer functions are then represented by rational polynomials and used in an aeroelastic stability analysis in the frequency domain. The work compares the results provided for each case and attempts to contribute with guidelines for such analyses.
35

JANAKIRAMAN, KRISHNAMOORTHY. "ENTITY IDENTIFICATION USING DATA MINING TECHNIQUES". University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin989852516.

36

Camara, C. D. J. "Plant identification using model reference techniques". Thesis, University of Cape Town, 1987. http://hdl.handle.net/11427/23544.

37

Frushour, John H. "Design considerations for a computationally-lightweight authentication mechanism for passive RFID tags". Thesis, Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Sep/09Sep%5FFrushour.pdf.

Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, September 2009.
Thesis Advisor(s): Fulp, J.D.; Huffmire, Ted. "September 2009." Description based on title screen as viewed on November 6, 2009. Author(s) subject terms: Passive RFID Systems, Tags, Clock, Electro-magnetic induction, authentication, hash, SHA-1. Includes bibliographical references (p. 59-60). Also available in print.
38

El, Khoury Franjieh. "Modélisation de la sécurisation d’accès aux réseaux par la technique de cryptographie asymétrique en utilisant la reconnaissance de l’iris et la technologie des agents". Thesis, Lyon 1, 2009. http://www.theses.fr/2009LYO10308.

Abstract:
La croissance exponentielle dans l’utilisation du réseau Internet ainsi que l’apparition de nouveaux types d’applications ont augmenté les contraintes du réseau en termes de sécurité. Depuis quelques années, les techniques biométriques ont prouvé une grande précision et fiabilité et ont été utilisées dans plusieurs domaines afin de sécuriser l’accès à différentes ressources. Des solutions intégrant des agents et des systèmes multi-agents (SMA) ont aussi prouvé leur efficacité pour la résolution de nombreux problèmes dans les réeaux. Nous proposons un modèle « IrisCrptoAgentSystem » (ICAS) basé sur la méthode biométrique pour l’authentification utilisant l’iris de l’œil et la méthode de cryptographie asymétrique utilisant l’algorithme « Rivest-Shamir-Adleman » (RSA), et en intégrant des agents. Ce modèle doit assurer un accès sécurisé aux informations et garantir la protection des informations confidentielles. Notre travail porte sur la mise en place de nouvelles méthodes dans le modèle d’authentification biométrique afin de donner plus d’efficacité à notre modèle ICAS. Nous introduisons des aspects prétopologiques dans l’élaboration de la hiérarchie indexée pour classer les gabarits DHVA. Notre approche consiste à améliorer les méthodes relatives à la localisation des contours externe et interne de l’iris
The exponential growth in the use of the Internet as well as the emergence of new types of applications has increased the network’s constraints in terms of security. For the last several years, biometric techniques have proven their applicability and reliability in providing secure access to shared resources in different domains. Furthermore, software agents and multi-agent systems (MAS) have evidently been efficient in resolving several problems in networks. Therefore, the aim of this research is to propose a model “IrisCryptoAgentSystem” (ICAS) that is based on a biometric method for authentication using the iris of the eye and an asymmetric cryptography method using the “Rivest-Shamir-Adleman” (RSA) algorithm in an agent-based architecture. This model should provide secure access to information and ensure the protection of confidential information. Our work therefore focuses on the development of new methods in biometric authentication in order to provide greater efficiency in the ICAS model. We introduce pretopological aspects in the development of the indexed hierarchy to classify DHVA templates. Our approach aims to improve the existing methods for the localization of the external and internal edges of the iris.
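The asymmetric half of the ICAS model is plain RSA. As a hedged, textbook-level sketch of that primitive (tiny primes, no padding, not the thesis's implementation):

```python
# Textbook RSA sketch (illustration only: toy primes, no padding scheme).
# In a model like ICAS, RSA would protect data such as encoded iris templates.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y == g.
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def make_keys(p, q, e=65537):
    n = p * q
    phi = (p - 1) * (q - 1)
    g, d, _ = egcd(e, phi)
    assert g == 1, "e must be coprime with phi(n)"
    return (e, n), (d % phi, n)  # (public key, private key)

def encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)

def decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)

pub, priv = make_keys(61, 53, e=17)  # toy primes; real keys use ~2048-bit moduli
assert decrypt(encrypt(42, pub), priv) == 42
```

Real deployments would use a vetted library with proper padding (e.g. OAEP) rather than raw modular exponentiation.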
39

Henriksson, Michael. "Authentication and Identification of Sensor Nodes to Avoid Unauthorized Access in Sensor Networks". Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279558.

Texto completo da fonte
Resumo:
With the increasing popularity of Internet of Things (IoT) devices and network-connected sensors for easier data collection, security is an aspect that must not be forgotten. When sensitive data, such as personal or private data, is sent over the network without protection, it can more easily be obtained by anyone who wants to get hold of it. This risk increases with the value of the data sent, and the level of security should therefore follow that value. It is therefore important to look at the security aspects of a sensor network and find ways to easily integrate security measures such as authentication, to ensure that only authorized devices and users, rather than malicious devices, access or send data on the network. This thesis focuses on the authentication and identification of devices joining the network, to ensure that only trusted devices are admitted. The protocol in focus is ZigBee, but the proposed solution can be integrated with any protocol. It utilizes a Key Distribution Center (KDC) together with an authentication method based on the Challenge Handshake Authentication Protocol (CHAP) to authenticate new devices before they are allowed into the network. The solution is secure and relatively simple, which makes it easy to integrate with any sensor network.
Med en ökad popularitet av att koppla upp sensorer och apparater mot ett nätverk för att enklare kunna samla in data är säkerhet en aspekt som inte får glömmas bort. När känslig data, så som personlig eller privat data, skickas över nätverket oskyddat kan någon som vill komma åt datan lättare få tag på den. Denna risk ökar med värdet av datan som skickas och en ökning av säkerheten bör därav följa ökningen av värdet på datan. Av denna anledning är det viktigt att se över säkerheten i sensornätverk och finna lösningar som lätt kan integreras med ett sensornätverk. Detta för att säkerställa att endast de sensornoder som har auktoritet kan gå med i, samt skicka data på, nätverket och därmed undvika oönskad åtkomst. Denna avhandling fokuserar på autentisering och identifiering av de noder som ska anslutas till nätverket för att säkerställa att endast pålitliga och auktoriserade noder blir insläppta. Det protokoll som är i fokus i denna avhandling är ZigBee men den föreslagna lösningen kan även integreras med andra protokoll. Den föreslagna lösningen använder sig även av ett Key Distribution Center (KDC) samt en autentiseringsmetod som baseras på Challenge Handshake Authentication Protocol (CHAP) för att autentisera nya noder innan de blir insläppta i nätverket. Denna lösning är säker och relativt enkel vilket gör den enkel att integrera med alla typer av sensornätverk.
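The CHAP-style exchange the thesis builds on can be sketched roughly as follows. The KDC is reduced here to a shared-secret lookup table, SHA-256 stands in for CHAP's original MD5, and all names are hypothetical:

```python
# Minimal CHAP-style challenge-response sketch for node admission.
import hashlib
import os

KDC_SECRETS = {"node-17": b"pre-shared-key"}  # the KDC's secret store (illustrative)

def make_challenge():
    return os.urandom(16)  # fresh random challenge per authentication attempt

def chap_response(secret, challenge):
    # The joining node proves knowledge of the secret without ever sending it.
    return hashlib.sha256(secret + challenge).hexdigest()

def authenticate(node_id, challenge, response):
    secret = KDC_SECRETS.get(node_id)
    return secret is not None and chap_response(secret, challenge) == response

challenge = make_challenge()                              # sent to the joining node
response = chap_response(b"pre-shared-key", challenge)    # computed by the node
assert authenticate("node-17", challenge, response)
assert not authenticate("node-17", challenge, "bogus")
```

Because the challenge is random and single-use, a captured response cannot be replayed to join the network later.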
40

Akif, Omar Zeyad. "Secure authentication procedures based on timed passwords, honeypots, honeywords and multi-factor techniques". Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/16124.

Texto completo da fonte
Resumo:
A time-based password generating technique was adopted and applied to protect sensitive datasets as the first technique used in this thesis. It specifically mitigates attacks and threats by adding time as a part of the password, which is generated using the shift key. This in turn raises the number of possible combinations for the password and enhances the system's security. The Password Quality Indicator (PQI) was implemented to evaluate the security improvement. Results showed that the proposed password technique was up to 200% more secure than traditional methods. The second method, the 'honeypot', is based on web-session management. The authentication process is triggered if the web session is initiated correctly when the first webpage is requested; legitimate users must follow a precise sequence of links to be compatible with the session management saved on the server side. The honeypot presents a sequence of links to lure the attacker into performing the authentication procedure directly from the login box. Compared to conventional methods, the new method was found to improve user security by 200%. Additionally, a multi-factor authentication approach was tested, combining the timed-password and honeypot techniques. The results demonstrated that password strength was enhanced by increasing the number of links and the quantity of dwell-time periods, as a consequence of the added probabilities and complexity. This approach yielded passwords that are 300% more secure than those traditional methods would generate. Finally, a honeywords-generation method (decoy passwords) was also applied to detect attacks against databases of hashed passwords. With the aim of achieving flatness, the original password for each user account was stored with many honeywords in order to confuse and mislead cyber-attackers. 
This technique relies on the proposed generation method to achieve flatness among the stored passwords. A survey involving 820 participants was conducted to quantify how many users were able to recognise the real password among several honeywords. The results showed that the new generation method improved on traditional methods by 89.634% and attained sufficient flatness to confuse attackers.
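The honeyword check described above can be sketched as below. The split between the credential store and a separate "honeychecker" (which holds only the index of the real password) follows the general honeywords idea; the concrete words and the hash function are illustrative only:

```python
# Honeywords sketch: one real password stored among decoys per account.
import hashlib

def h(pw):
    # Stand-in for a proper slow password hash (bcrypt/argon2 in practice).
    return hashlib.sha256(pw.encode()).hexdigest()

sweetwords = ["red42sky", "blue42sky", "red42sea", "green42sky"]  # one real, rest decoys
stored = [h(w) for w in sweetwords]   # what the login server keeps
HONEYCHECKER_INDEX = 2                # real password's position, kept on a separate server

def check_login(attempt):
    hashed = h(attempt)
    if hashed not in stored:
        return "reject"
    if stored.index(hashed) == HONEYCHECKER_INDEX:
        return "accept"
    return "alarm"  # a decoy matched: the hash database has likely been cracked

assert check_login("red42sea") == "accept"
assert check_login("blue42sky") == "alarm"   # attacker guessed a honeyword
assert check_login("wrong") == "reject"
```

"Flatness" means an attacker who cracks the hashes still cannot tell which of the sweetwords is real, so picking one triggers the alarm with high probability.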
41

Alsulaiman, Fawaz Abdulaziz A. "Towards a Continuous User Authentication Using Haptic Information". Thesis, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23946.

Texto completo da fonte
Resumo:
With the advancement in multimedia systems and the increased interest in haptics for interpersonal communication systems, where users can see, show, hear, tell, touch and be touched, mouse and keyboard are no longer the dominant input devices. Touch, speech and vision will soon be the main methods of human-computer interaction. Moreover, as interpersonal communication usage increases, the need for secure user authentication grows. In this research, we examine user identification and verification based on haptic information. We divide our research into three main steps. The first step is to examine a pre-defined task, namely a handwritten signature with haptic information. The user's target in this task is to mimic the legitimate signature in order to be verified. As a second step, we consider user identification and verification based on user drawings. The user's target is predefined; however, there are no restrictions imposed on the order or on the level of detail required for the drawing. Lastly, we examine the feasibility and possibility of distinguishing users based on their haptic interaction through an interpersonal communication system. In this third step, there are no restrictions on user movements; however, a free movement to touch the remote party is expected. In order to achieve our goal, many classification and feature reduction techniques were investigated and some new ones were proposed. Moreover, in this work we utilize evolutionary computing in user verification and identification. Analysis of haptic features and their significance in distinguishing users is hence examined. The results show a utilization of visual features by Genetic Programming (GP) towards identity verification with a probability equal to 50%, while the remaining haptic features were utilized with a probability of approximately 50%. 
Moreover, with a handwritten signature application, a verification success rate of 97.93%, with a False Acceptance Rate (FAR) of 1.28% and a False Rejection Rate (FRR) of 11.54%, is achieved using genetic programming enhanced with a randomly oversampled data set. In addition, with totally free user movement in a haptic-enabled interpersonal communication system, an identification success rate of 83.3% is achieved when a random forest classifier is utilized.
42

Gorti, Bhaskar M. "Techniques for discrete, time domain system identification". Thesis, This resource online, 1991. http://scholar.lib.vt.edu/theses/available/etd-11242009-020121/.

Texto completo da fonte
43

Alkaabi, Juma A. "Improved materials management using automatic identification techniques". Thesis, Loughborough University, 1994. https://dspace.lboro.ac.uk/2134/11155.

Texto completo da fonte
Resumo:
The success of any project depends upon accurate and timely information, and most construction-related companies utilize computer systems for this purpose. However, these systems fail to provide a link to the physical movements of materials. Effective materials management is vital because of the contribution of materials elements to the total cost of a construction project. Despite this, the construction industry has a poor record in materials management. The reasons for this include delays in the delivery of materials to site and poor identification of materials on site. The severity of these problems and their impact on the construction industry highlighted the need for research in this area. At the initial stage, the research studied the current systems of materials management in the manufacturing and construction industries. It was concluded that Automatic Identification offered advantages over other systems for controlling the physical movements of materials and related information. The traditional flow of materials and related information in the construction industry was examined and schematic diagrams were developed. The typical process of materials management through the life cycle of a typical project was determined. Automatic Identification technologies were reviewed with particular emphasis on bar coding techniques. These are widely used in other industries and provide significant benefits. Furthermore, they have greater flexibility than other Automatic Identification techniques. The potential applications of bar coding techniques in construction were identified as a result of a pilot study conducted with a local company. In order to determine the current status of bar coding techniques in the construction industry, a questionnaire survey was undertaken. This survey revealed a high degree of interest from the respondents in implementing these techniques for applications such as material identification, stock control and delivery ticket automation. 
The research developed a generic barcoded delivery ticket, a goods received note and a standard bar code label for product identification. To investigate the feasibility of using bar coding techniques in construction, a case study was conducted with a local company to monitor and control pre-cast concrete beams from production through to delivery to the customer site. The study findings showed considerable benefits could be gained from the implementation of these techniques. To realize the full benefits of bar coding techniques, electronic data interchange (EDI) was also considered. The proposed integration of these techniques produced an improved methodology for materials management. This methodology was validated by a series of interviews, and evaluated during trials with the collaboration of a local company. The main outcomes of the research are:
• A conceptual framework for an improved methodology for managing construction materials using automatic identification and in particular bar coding techniques.
• An understanding of the problems and benefits of the design, implementation, and verification of an Automatic Identification system.
• An examination of how Automatic Identification and Electronic Data Interchange (EDI) could be linked to improve the flow of materials information.
• A generic bar code standard format for Delivery Tickets and Goods Received Notes.
• A generic bar code standard label for product identification throughout the supply chain.
• The identification of potential applications of bar coding techniques in construction.
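The thesis's own generic label format is not reproduced in the abstract. As an illustration of the kind of self-checking structure standard bar code symbologies provide, here is the EAN-13 check-digit computation, one common scheme for product identification labels:

```python
# EAN-13 check digit: weight the first 12 digits 1,3,1,3,... from the left,
# then take the amount needed to round the sum up to a multiple of 10.
def ean13_check_digit(first12):
    digits = [int(c) for c in first12]
    assert len(digits) == 12, "EAN-13 data part is 12 digits"
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return (10 - total % 10) % 10

# Example: data digits 400638133393 yield check digit 1 (full code 4006381333931).
assert ean13_check_digit("400638133393") == 1
```

A scanner recomputes this digit on every read, so a misread label is rejected rather than booked against the wrong material.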
44

Qiao, Guofeng. "Bioimpedence analysis techniques for malignant tissue identification". Thesis, University of Sussex, 2011. http://sro.sussex.ac.uk/id/eprint/7439/.

Texto completo da fonte
Resumo:
The use of bioimpedance techniques for malignancy identification is considered novel, with challenges that need to be overcome. In this thesis, such bioimpedance approaches have been developed for identifying malignancies through a systematic study, ranging from the investigation of the technical challenges affecting an imaging-based breast cancer detection system, to the study of the electrical properties of tissue and cells. Hence, this work provides proof-of-concept for cancer diagnosis based on the electrical signatures that differentiate malignancies from normal tissue, utilising bioimpedance analysis techniques. Further, this work will contribute to the understanding of correlations between electrical properties and biological functions, which will help to extend bioimpedance techniques to wider medical and bioscience applications. Furthermore, this research will also be conducive to investigations of novel devices for cancer diagnosis in the clinic. The Ph.D. work was carried out in two threads. In the first thread, the technical challenges of using Electrical Impedance Mammography (EIM), an imaging modality developed on the basis of bioimpedance techniques for breast cancer diagnosis, were studied, namely 1) effectively reducing measurement errors from electrode contact interfaces, and 2) validating systems by using novel simulation phantoms. In the second thread, bioimpedance spectroscopy (BIS) of tissues and cells was investigated to 1) reveal their electrical properties, 2) identify malignant changes, and 3) establish correlations of electrical properties with biological function changes. By carrying out studies in these two threads, bioimpedance was fully investigated for its applications in cancer detection and diagnosis. This work has made significant contributions to the field of study. It comprises the first systematic study on bioimpedance for cancer identification at the tissue and cellular levels. 
This work has also been pioneering in linking the electrical properties of malignant tissues and cells to the relevant biological changes brought on by the aforementioned malignancies.
45

Patil, Jitendra. "Vehicle identification based on image processing techniques". Thesis, California State University, Long Beach, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=1606096.

Texto completo da fonte
Resumo:

The current project presents a method based on image processing techniques for the identification of moving vehicles as they approach a signaled intersection. A set of fixed cameras located before the intersection monitor the street continuously, taking pictures of the approaching object. A feature extraction algorithm is presented, which identifies a set of features in the image, and calculates a distance metric, measuring the difference of the current image from images stored in a database of vehicles. If the calculated distance metric is very small, then the present vehicle is successfully classified as being the same type as one of the vehicles stored in the database. A successful application of this method was implemented using real-time data. In the particular application presented in this project the vehicle to be identified is an ambulance car, whose images have been previously stored in the vehicle database.
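The distance-metric classification step outlined above can be sketched as a nearest-neighbour lookup against the vehicle database. The feature vectors and threshold below are hypothetical stand-ins for whatever the feature extraction algorithm actually produces:

```python
# Nearest-neighbour matching of an extracted feature vector against a
# database of known vehicles (e.g. the stored ambulance images).
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(features, database, threshold):
    # Return the label of the closest stored vehicle, or None when even the
    # best match has a distance metric that is too large.
    label, best = None, float("inf")
    for name, ref in database.items():
        d = euclidean(features, ref)
        if d < best:
            label, best = name, d
    return label if best < threshold else None

db = {"ambulance": [0.9, 0.1, 0.4], "sedan": [0.2, 0.8, 0.5]}  # illustrative features
assert identify([0.88, 0.12, 0.41], db, threshold=0.2) == "ambulance"
assert identify([0.5, 0.5, 0.9], db, threshold=0.2) is None    # unknown vehicle
```

The threshold encodes "the calculated distance metric is very small" from the abstract: below it the vehicle is classified as the stored type, above it the frame is ignored.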

46

MARINELLI, GIUSEPPE. "Road geometry identification with mobile mapping techniques". Doctoral thesis, Politecnico di Torino, 2015. http://hdl.handle.net/11583/2606568.

Texto completo da fonte
Resumo:
During my doctorate I worked on innovative techniques and technologies for reconstructing the geometry of existing road alignments, such as Mobile Mapping, image analysis and GIS data. Given the very high costs currently required to use commercially available instrumented vehicles for these purposes, the added value of the doctoral work lies in the use of low-cost instruments, which entail substantial work in analysing, processing and correcting data that is decisively affected by the medium/low quality of the instrumentation used. The goal of the research was the development of a recognition algorithm (in the MATLAB environment) capable of returning the as-built geometry of an existing road. Part of the work concerned the analysis and extraction of local curvatures with different approaches (successive local circumferences, local polynomial fitting functions of various degrees and with variable analysis windows), as well as the study of local deflection angles. Using these parameters, the rest of the work first sought a methodology for identifying the different elements that make up the road geometry, and then focused on fitting procedures with various techniques (least squares, robust methods and other algorithms), attempting to extract geometric information such as curvature radii and their centres, lengths and orientations of tangents, and scale factors of the transition curves.
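One of the curvature approaches mentioned, successive local circumferences, amounts to computing the circumradius of consecutive point triples along the surveyed trajectory (local curvature is then 1/R, with collinear points on a tangent giving infinite radius). A minimal sketch with illustrative points:

```python
# Local radius of curvature from three consecutive trajectory points:
# R = abc / (4 * area), where a, b, c are the triangle's side lengths.
import math

def circumradius(p1, p2, p3):
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    area = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2
    return float("inf") if area == 0 else a * b * c / (4 * area)

# Three points sampled on a circle of radius 5 centred at the origin:
pts = [(5 * math.cos(t), 5 * math.sin(t)) for t in (0.0, 0.3, 0.6)]
assert abs(circumradius(*pts) - 5.0) < 1e-9
assert circumradius((0, 0), (1, 1), (2, 2)) == float("inf")  # straight segment
```

Sliding this over a noisy low-cost GPS trace is what makes the filtering and fitting work described in the abstract necessary: raw triples give very unstable radii.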
47

Zhang, Yanbo. "Molecular approach to the authentication of lycium barbarum and its related species". HKBU Institutional Repository, 2000. http://repository.hkbu.edu.hk/etd_ra/227.

Texto completo da fonte
48

Koppikar, Samir Dilip. "Privacy Preserving EEG-based Authentication Using Perceptual Hashing". Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc955127/.

Texto completo da fonte
Resumo:
The use of electroencephalogram (EEG), an electrophysiological monitoring method for recording brain activity, for authentication has attracted the interest of researchers for over a decade. In addition to exhibiting the qualities of biometric-based authentication, EEG signals are revocable, impossible to mimic, and resistant to coercion attacks. However, EEG signals carry a wealth of information about an individual and can reveal private information about the user. This brings significant privacy issues to EEG-based authentication systems, as they have access to raw EEG signals. This thesis proposes a privacy-preserving EEG-based authentication system that preserves the privacy of the user by not revealing the raw EEG signals while still allowing the system to authenticate the user accurately. To that end, perceptual hashing is utilized: instead of raw EEG signals, their perceptually hashed values are used in the authentication process. In addition to describing the authentication process, algorithms to compute the perceptual hash are developed based on two feature extraction techniques. Experimental results show that an authentication system using perceptual hashing can achieve performance comparable to a system that has access to raw EEG signals if enough EEG channels are used in the process. This thesis also presents a security analysis to show that perceptual hashing can prevent information leakage.
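The core idea, matching hashes that are robust to small signal variations rather than matching raw signals, can be sketched as below. The energy-based bit extraction and the Hamming-distance threshold are illustrative assumptions, not the feature extraction techniques developed in the thesis:

```python
# Perceptual-hash sketch: quantize a signal's windowed energies to bits, then
# authenticate by Hamming distance, so the raw signal never leaves the client.
def perceptual_hash(signal, windows=8):
    step = len(signal) // windows
    energies = [sum(x * x for x in signal[i * step:(i + 1) * step])
                for i in range(windows)]
    median = sorted(energies)[windows // 2]
    # One bit per window: is its energy above the median window energy?
    return [1 if e > median else 0 for e in energies]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def authenticate(probe, enrolled, max_distance=1):
    return hamming(perceptual_hash(probe), perceptual_hash(enrolled)) <= max_distance

enrolled = [0.1, 0.2, 2.0, 2.1, 0.1, 0.1, 1.5, 1.4,
            0.2, 0.1, 1.8, 1.9, 0.1, 0.2, 0.1, 0.1]  # stand-in for an EEG channel
probe = [x * 1.05 for x in enrolled]                  # same user, slightly rescaled
assert authenticate(probe, enrolled)
```

Because the hash keeps only the relative pattern of window energies, a uniform gain change leaves the bits unchanged, which is the robustness property that makes the hashed comparison usable for authentication.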
49

Siorat, Catherine. "Techniques d'identification odonto-stomatologique en médecine légale". Bordeaux 2, 1990. http://www.theses.fr/1990BOR25089.

Texto completo da fonte
50

"Face authentication on mobile devices: optimization techniques and applications". 2005. http://library.cuhk.edu.hk/record=b5892581.

Texto completo da fonte
Resumo:
Pun Kwok Ho.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2005.
Includes bibliographical references (leaves 106-111).
Abstracts in English and Chinese.
Chapter 1. --- Introduction --- p.1
Chapter 1.1 --- Background --- p.1
Chapter 1.1.1 --- Introduction to Biometrics --- p.1
Chapter 1.1.2 --- Face Recognition in General --- p.2
Chapter 1.1.3 --- Typical Face Recognition Systems --- p.4
Chapter 1.1.4 --- Face Database and Evaluation Protocol --- p.5
Chapter 1.1.5 --- Evaluation Metrics --- p.7
Chapter 1.1.6 --- Characteristics of Mobile Devices --- p.10
Chapter 1.2 --- Motivation and Objectives --- p.12
Chapter 1.3 --- Major Contributions --- p.13
Chapter 1.3.1 --- Optimization Framework --- p.13
Chapter 1.3.2 --- Real Time Principal Component Analysis --- p.14
Chapter 1.3.3 --- Real Time Elastic Bunch Graph Matching --- p.14
Chapter 1.4 --- Thesis Organization --- p.15
Chapter 2. --- Related Work --- p.16
Chapter 2.1 --- Face Recognition for Desktop Computers --- p.16
Chapter 2.1.1 --- Global Feature Based Systems --- p.16
Chapter 2.1.2 --- Local Feature Based Systems --- p.18
Chapter 2.1.3 --- Commercial Systems --- p.20
Chapter 2.2 --- Biometrics on Mobile Devices --- p.22
Chapter 3. --- Optimization Framework --- p.24
Chapter 3.1 --- Introduction --- p.24
Chapter 3.2 --- Levels of Optimization --- p.25
Chapter 3.2.1 --- Algorithm Level --- p.25
Chapter 3.2.2 --- Code Level --- p.26
Chapter 3.2.3 --- Instruction Level --- p.27
Chapter 3.2.4 --- Architecture Level --- p.28
Chapter 3.3 --- General Optimization Workflow --- p.29
Chapter 3.4 --- Summary --- p.31
Chapter 4. --- Real Time Principal Component Analysis --- p.32
Chapter 4.1 --- Introduction --- p.32
Chapter 4.2 --- System Overview --- p.33
Chapter 4.2.1 --- Image Preprocessing --- p.33
Chapter 4.2.2 --- PCA Subspace Training --- p.34
Chapter 4.2.3 --- PCA Subspace Projection --- p.36
Chapter 4.2.4 --- Template Matching --- p.36
Chapter 4.3 --- Optimization using Fixed-point Arithmetic --- p.37
Chapter 4.3.1 --- Profiling Analysis --- p.37
Chapter 4.3.2 --- Fixed-point Representation --- p.38
Chapter 4.3.3 --- Range Estimation --- p.39
Chapter 4.3.4 --- Code Conversion --- p.42
Chapter 4.4 --- Experiments and Discussions --- p.43
Chapter 4.4.1 --- Experiment Setup --- p.43
Chapter 4.4.2 --- Execution Time --- p.44
Chapter 4.4.3 --- Space Requirement --- p.45
Chapter 4.4.4 --- Verification Accuracy --- p.45
Chapter 5. --- Real Time Elastic Bunch Graph Matching --- p.49
Chapter 5.1 --- Introduction --- p.49
Chapter 5.2 --- System Overview --- p.50
Chapter 5.2.1 --- Image Preprocessing --- p.50
Chapter 5.2.2 --- Landmark Localization --- p.51
Chapter 5.2.3 --- Feature Extraction --- p.52
Chapter 5.2.4 --- Template Matching --- p.53
Chapter 5.3 --- Optimization Overview --- p.54
Chapter 5.3.1 --- Computation Optimization --- p.55
Chapter 5.3.2 --- Memory Optimization --- p.56
Chapter 5.4 --- Optimization Strategies --- p.58
Chapter 5.4.1 --- Fixed-point Arithmetic --- p.60
Chapter 5.4.2 --- Gabor Masks and Bunch Graphs Precomputation --- p.66
Chapter 5.4.3 --- Improving Array Access Efficiency using 1D Array --- p.68
Chapter 5.4.4 --- Efficient Gabor Filter Selection --- p.75
Chapter 5.4.5 --- Fine Tuning System Cache Policy --- p.79
Chapter 5.4.6 --- Reducing Redundant Memory Access by Loop Merging --- p.80
Chapter 5.4.7 --- Maximizing Cache Reuse by Array Merging --- p.90
Chapter 5.4.8 --- Optimization of Trigonometric Functions using Table Lookup. --- p.97
Chapter 5.5 --- Summary --- p.99
Chapter 6. --- Conclusions --- p.103
Chapter 7. --- Bibliography --- p.106