Dissertations / Theses on the topic 'Parse data'
Consult the top 49 dissertations / theses for your research on the topic 'Parse data.'
Mansfield, Martin F. "Design of a generic parse tree for imperative languages." Virtual Press, 1992. http://liblink.bsu.edu/uhtbin/catkey/834617.
Department of Computer Science
Andrén, August, and Patrik Hagernäs. "Data-parallel Acceleration of PARSEC Black-Scholes Benchmark." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-128607.
Alvestad, Gaute Odin, Ole Martin Gausnes, and Ole-Jakob Kråkenes. "Development of a Demand Driven Dom Parser." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2006. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9311.
XML is a tremendously popular markup language for internet applications as well as a storage format. XML documents are usually accessed through an API, and perhaps the most important of these is the W3C DOM. The W3C recommendation defines a number of interfaces through which a developer can access and manipulate XML documents, but it does not prescribe the implementation behind those interfaces. A problem with the W3C DOM approach, however, is that documents are typically loaded into memory as a node tree of objects representing the structure of the XML document. This tree is memory-consuming and can take up to 4-10 times the document size. Lazy processing has been proposed, building the node tree as new parts of the document are accessed; but once the whole document has been accessed, the overhead compared to traditional parsers, in both memory usage and performance, is high. This thesis introduces a new, alternative approach that combines well-known indexing schemes for XML, basic techniques for reducing memory consumption, and principles of memory handling in operating systems. By using a memory cache repository for DOM nodes while simultaneously applying lazy-processing principles, the proposed implementation has full control over memory consumption. The prototype is called the Demand Driven Dom Parser, D3P. It removes the least recently used nodes from memory when the cache exceeds its memory limit, which allows the D3P to process documents with low memory requirements. An advantage of this approach is that the parser can process documents that exceed the size of main memory, which is impossible with traditional approaches. The implementation is evaluated and compared with other implementations, both lazy parsers and traditional parsers that build everything in memory on load.
The proposed implementation performs well when the bottleneck is memory usage, because the user can set the desired amount of memory to be used by the XML node tree. On the other hand, as coverage of the document increases, the time spent processing the node tree grows beyond that of traditional approaches.
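The eviction policy described in this abstract can be sketched as a small LRU cache for parsed nodes. This is a toy illustration of the idea, not the authors' implementation; class and parameter names are invented.

```python
from collections import OrderedDict

class NodeCache:
    """Toy LRU cache for DOM nodes: when the cache exceeds its capacity,
    the least recently used node is evicted, as in the D3P approach
    described above. Sizes are counted in nodes for simplicity."""

    def __init__(self, max_nodes):
        self.max_nodes = max_nodes
        self._nodes = OrderedDict()  # node_id -> node data, oldest first

    def access(self, node_id, load_node):
        # A cache hit marks the node as most recently used.
        if node_id in self._nodes:
            self._nodes.move_to_end(node_id)
            return self._nodes[node_id]
        # A miss re-parses the node on demand, evicting the least
        # recently used entry if the memory limit is exceeded.
        node = load_node(node_id)
        self._nodes[node_id] = node
        if len(self._nodes) > self.max_nodes:
            self._nodes.popitem(last=False)  # drop least recently used
        return node

# Example: the loader stands in for demand-driven parsing of one node.
cache = NodeCache(max_nodes=2)
cache.access("a", lambda i: f"<{i}>")
cache.access("b", lambda i: f"<{i}>")
cache.access("a", lambda i: f"<{i}>")   # refresh "a"
cache.access("c", lambda i: f"<{i}>")   # evicts "b"
print(list(cache._nodes))               # ['a', 'c']
```

Because eviction is bounded by `max_nodes` rather than document size, a document larger than main memory can still be traversed, at the cost of re-parsing evicted nodes on later access.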
Seppecher, Manon. "Mining call detail records to reconstruct global urban mobility patterns for large scale emissions calculation." Electronic Thesis or Diss., Lyon, 2022. http://www.theses.fr/2022LYSET002.
Road traffic contributes significantly to atmospheric emissions in urban areas, a major issue in the fight against climate change. Joint monitoring of road traffic and the related emissions is therefore essential for urban public decision-making, and beyond such monitoring, public authorities need methods for evaluating transport policies against environmental criteria. Coupling traffic models with traffic-related emission models is a suitable response to this need. However, integrating this solution into decision-support tools requires a refined and dynamic characterization of urban mobility. Cell phone data, particularly Call Detail Records, are an interesting alternative to traditional data sources for estimating this mobility: they are rich, massive, and available worldwide. Nevertheless, their use in the literature for systematic traffic characterization has remained limited, owing to low spatial resolution and temporal sampling rates that are sensitive to communication behaviors. This Ph.D. thesis investigates the estimation, from such data and despite their biases, of the traffic variables needed for calculating air emissions (total distances traveled and average traffic speeds). The first significant contribution is to articulate methods for classifying individuals with two distinct approaches to mobility reconstruction. A second contribution is a method for estimating traffic speeds based on the fusion of large amounts of travel data. Finally, we present a complete methodological process of modeling and data processing that relates the methods proposed in this thesis coherently.
Shah, Meelap (Meelap Vijay). "PARTE : automatic program partitioning for efficient computation over encrypted data." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/79239.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 45-47).
Many modern applications outsource their data storage and computation needs to third parties. Although this lifts many infrastructure burdens from the application developer, he must deal with an increased risk of data leakage (i.e., there are more distributed copies of the data, and the third party may be insecure and/or untrustworthy). Often the most practical option is to tolerate this risk, but this is far from ideal, and in the case of highly sensitive data (e.g., medical records, location history) it is unacceptable. We present PARTE, a tool to aid application developers in lowering the risk of data leakage. PARTE statically analyzes a program's source, annotated to indicate the types that will hold sensitive data (i.e., data that should not be leaked), and outputs a partitioned version of the source. One partition operates only on encrypted copies of sensitive data to lower the risk of data leakage and can safely be run by a third party or in an otherwise untrusted environment. The second partition must have plaintext access to sensitive data and therefore should be run in a trusted environment. Program execution flows between the partitions, leveraging third-party resources when the data leakage risk is low. Further, we identify operations which, if efficiently supported by some encryption scheme, would improve the performance of partitioned execution. To demonstrate the feasibility of these ideas, we implement PARTE in Haskell and run it on a web application, hpaste, which allows users to upload and share text snippets. The partitioned hpaste serves web requests 1.2-2.5x slower than the original hpaste. We find this overhead to be moderately high. Moreover, the partitioning does not allow much code to run on encrypted data. We discuss why we feel our techniques did not produce an attractive partitioning and offer insight on new research directions that could yield better results.
by Meelap Shah.
S.M.
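The partitioning decision the abstract describes can be illustrated with a toy routing rule: operations some encryption scheme supports over ciphertexts may run in the untrusted partition, while anything else touching sensitive data must stay in the trusted one. This is a hypothetical sketch, not PARTE's actual static analysis (which works on annotated Haskell source); the type and operation names are invented.

```python
# Types the developer annotates as sensitive, and operations assumed to
# be supported over encrypted data (e.g. equality under a deterministic
# scheme, addition under an additively homomorphic one). Both sets are
# illustrative, not PARTE's real configuration.
SENSITIVE_TYPES = {"MedicalRecord", "LocationHistory"}
ENCRYPTED_OPS = {"equality", "addition"}

def partition_for(op, operand_type):
    """Decide which partition an operation on a value of operand_type
    may execute in."""
    if operand_type not in SENSITIVE_TYPES:
        return "untrusted"        # non-sensitive data may go anywhere
    if op in ENCRYPTED_OPS:
        return "untrusted"        # can run over encrypted copies
    return "trusted"              # needs plaintext access

print(partition_for("equality", "MedicalRecord"))   # untrusted
print(partition_for("sort", "MedicalRecord"))       # trusted
print(partition_for("sort", "PublicPost"))          # untrusted
```

The thesis's observation that little code ends up running on encrypted data corresponds here to `ENCRYPTED_OPS` being small relative to the operations a real program performs.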
Bucciarelli, Stefano. "Un compilatore per un linguaggio per smart contract intrinsecamente tipato." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/19573/.
Dall, Rasmus. "Statistical parametric speech synthesis using conversational data and phenomena." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/29016.
Erozel, Guzen. "Natural Language Interface On A Video Data Model." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606251/index.pdf.
Abel, Donald Randall. "The Parser Converter Loader: An Implementation of the Computational Chemistry Output Language (CCOL)." PDXScholar, 1995. https://pdxscholar.library.pdx.edu/open_access_etds/4926.
Sodhi, Bir Apaar Singh. "DATA MINING: TRACKING SUSPICIOUS LOGGING ACTIVITY USING HADOOP." CSUSB ScholarWorks, 2016. https://scholarworks.lib.csusb.edu/etd/271.
Costa, Galligani Stéphanie. "Le français parlé par des migrants espagnols de longue date : biographie et pratiques langagières." Grenoble 3, 1998. http://www.theses.fr/1998GRE39041.
This study, at the intersection of linguistics, sociolinguistics and language acquisition, focuses on the French language skills, usages and practices of long-term Spanish migrants. It highlights the specificity of their language behaviour with regard to the relationship between their native language and French, the language of the country of migration. The linguistic objectives aim at an analysis of the data collected from four Spanish migrants observed during an interview. A linguistic description of the speech of these bilingual subjects, who received little schooling and no formal French lessons, is undertaken from the recorded interviews. This descriptive phase highlights their language behaviour in French (in terms of code-switching), examined through grammatical categories selected according to the variance of forms. The sociolinguistic objectives embrace more individual aspects of bilingualism, particularly the subjects' vision of their own bilingualism, the place of the languages in their verbal repertoire, their attitudes towards the languages, and their paths of acquisition. The consequences of bilingualism in the context of migration are apprehended largely from the point of view of identity, with regard to the strategies developed by the subjects, such as maintaining their Spanish accent as a sign of recognition of their bilingual identity.
Lopes, Eduarda Escila Ferreira. "O uso dos dispositivos móveis e da internet como parte da cultura escolar de estudantes universitários." Universidade Estadual Paulista (UNESP), 2018. http://hdl.handle.net/11449/154311.
The work presented here comprises theoretical and empirical studies investigating how digital culture interferes with the educational process of higher-education students. It starts from the premise that mobile devices and the internet are an ever-growing presence as resources for school life. In higher education, digital technology is increasingly present as a resource for academic activities and as a product of the digital culture of today's society. The work studies the concepts of culture, digital culture and school culture and, in a second stage, reviews the development of the means of communication and their impacts, up to an understanding of the present day marked by mobile devices, the internet and social networks as used by university students. As a theoretical-methodological procedure, the research is anchored in different authors who have addressed questions of school practices and school culture, such as Marilena Chauí, Pierre Bourdieu, Pierre Lévy, Raymond Williams, Roger Chartier, Anne-Marie Chartier, Bernard Lahire, Marshall McLuhan, Melvin DeFleur, Peter Burke and Asa Briggs, among others. The work also presents studies on the evolution of higher education in Brazil in the face of the expansion driven by public policies, analysing expansion data for public and private universities as well as data on the undergraduate programmes surveyed. The study further covers the preambles of the empirical research, composed of two phases of investigation, one quantitative and the other qualitative. Finally, it presents the results and an analysis of interviews with students of Pedagogy, Biology, Advertising, Journalism and Digital Design at one public and one private institution in the city of Araraquara, in the state of São Paulo.
Pause, Marion [Verfasser], and Karsten [Akademischer Betreuer] Schulz. "Soil moisture retrieval using high spatial resolution Polarimetric L-Band Multi-beam Radiometer (PLMR) data at the field scale / Marion Pause. Betreuer: Karsten Schulz." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2011. http://d-nb.info/1015083978/34.
Israelsson, Sigurd. "Energy Efficiency Analysis of ARM big.LITTLE Global Task Scheduling." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-11296.
Fusco, Elvis. "Modelos conceituais de dados como parte do processo da catalogação : perspectiva de uso dos FRBR no desenvolvimento de catálogos bibliográficos digitais /." Marília : [s.n.], 2010. http://hdl.handle.net/11449/103369.
Committee: Ricardo César Gonçalves Sant'Ana
Committee: José Remo Ferreira Brega
Committee: Virgínia Bentes Pinto
Committee: Alex Sandro Romeu de Souza Poleto
Abstract: The cataloguing process deals with bibliographic records as information carriers, serving as a basis for interoperability between information environments and taking into account diverse information objects and cooperative, heterogeneous databases. Among the main proposals in the cataloguing field are the FRBR - Functional Requirements for Bibliographic Records, which introduce new concepts into cataloguing rules. The FRBR point to a restructuring of bibliographic records so as to reflect the conceptual structure of information persistence and retrieval, taking into account the diversity of users, materials, physical carriers and formats. In this context, the aim of this research is to reflect on and discuss the cataloguing process in the context of catalogue design, starting from a conceptual, logical and persistence architecture for information environments based on the FRBR and on Entity-Relationship Modelling, extended with Object Orientation concepts, and using the computational methodology of Conceptual Data Modelling. It considers the evolution of this area within Information Science with respect to information representation, aiming at the use and interoperability of any information resource, so as to fill the gap between the conceptual design of an application domain and the definition of the metadata schemas of bibliographic record structures. The research argues for the necessity and urgency of re-reading the cataloguing process with the addition of elements from Computer Science, using Descriptive Information Treatment (TDI) methodologies for information representation in the persistence layer of an automated information environment. The research question rests on the presupposition of the existence of a relation of community among... (Complete abstract: follow the electronic access link below)
Doctorate
Almeida, Carolina Porto de. "Ensinando professoras a analisar o comportamento do aluno: análise e interpretação de dados como parte de uma análise de contingências." Pontifícia Universidade Católica de São Paulo, 2009. https://tede2.pucsp.br/handle/handle/16847.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Many studies of functional analysis (or contingency analysis) can be found in the literature, across a diversity of research lines. The present study belongs to a research line that employs the method proposed by Iwata, Dorsey, Slifer, Bauman and Richman (1982/1994), in which environmental events are systematically manipulated to test what happens to the frequency of the behavior of interest over a few sessions. The purpose of the current study was to teach school-teachers with no prior coursework or experience in behavior analysis to perform part of a contingency analysis: the analysis and interpretation of the data generated by applying this method. Three preschool teachers participated, all of whom had students who presented behaviors the teachers considered inappropriate. Data were collected in the school where the participants taught. Fourteen films of 9 minutes each were used, showing, in a simulated situation, a teacher implementing the method of Iwata et al. (1982/1994) with a student who exhibited behaviors considered inappropriate. Seven films showed the student's behavior maintained by a contingency of positive reinforcement (teacher's attention) and the other seven by a contingency of negative reinforcement (escape from academic tasks). A training program was carried out in which the participants observed and recorded, in 30-second intervals, the occurrence or non-occurrence of the student's target (inappropriate) behavior, the antecedent event, and the consequence. They then answered five questions, based on the records made, concerning the analysis and interpretation of the data. The training included three phases: pre-test, training procedure and post-test.
The training procedure used the gradual removal of information: the records and the answers to all the questions were initially presented to the participants as models, and at each new step one item of these models was removed. The pre-test results indicate that the participants made mistakes in the majority of the records of the three-term contingency in the 30-second intervals, and misinterpreted what was maintaining the behavior in the films exhibited. Comparing the pre-test results with those of the post-test, in which the participants made correct records and correct interpretations in almost all items, shows that the training procedure was effective in teaching them to analyze and interpret the recorded data. Given these positive results, and considering that the training lasted at most 8 hours, it is possible to conclude that school-teachers can learn to perform part of a contingency analysis in a relatively short time when appropriately taught.
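The interpretation step the teachers practiced can be sketched as a toy script (labels and data invented for illustration, not the study's materials): each 30-second interval is recorded as an occurrence flag plus the observed consequence, and the maintaining contingency is read off from the consequence that most often follows the target behavior.

```python
from collections import Counter

def interpret(records):
    """records: list of (behavior_occurred, consequence) pairs, one per
    30-second interval. Returns the inferred maintaining contingency."""
    consequences = Counter(c for occurred, c in records if occurred)
    if not consequences:
        return "no occurrences recorded"
    top, _ = consequences.most_common(1)[0]
    return {
        "teacher attention": "positive reinforcement (attention)",
        "task removed": "negative reinforcement (escape)",
    }.get(top, "undetermined")

# A short simulated session: attention follows the behavior most often.
session = [(True, "teacher attention"), (False, None),
           (True, "teacher attention"), (True, "task removed")]
print(interpret(session))   # positive reinforcement (attention)
```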
Pugliese, Francesca. "Il controllo della posta elettronica e dell'utilizzo delle risorse informatiche da parte dei lavoratori nella giurisprudenza dell'Autorità Garante per la protezione dei dati personali." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2010. http://amslaurea.unibo.it/1614/.
Kondo, Daishi. "Preventing information leakage in NDN with name and flow filters." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0233/document.
In recent years, Named Data Networking (NDN) has emerged as one of the most promising future networking architectures. To be adopted at Internet scale, NDN needs to resolve the inherent issues of the current Internet. Since information leakage from an enterprise is a major issue even in the current Internet, and it is crucial to assess this risk before replacing the Internet with NDN completely, this thesis investigates whether a new security threat causing information leakage can arise in NDN. Assuming that (i) a computer is located in an enterprise network based on an NDN architecture, (ii) the computer has already been compromised through a suspicious medium such as a malicious email, and (iii) the company installs a firewall connected to the NDN-based future Internet, this thesis focuses on the situation where the compromised computer (i.e., malware) attempts to send leaked data to an outside attacker. The contributions of this thesis are fivefold. Firstly, it proposes an information leakage attack through a Data and through an Interest in NDN. Secondly, to address this attack, it proposes an NDN firewall which monitors and processes the NDN traffic coming from consumers using a whitelist and a blacklist. Thirdly, it proposes an NDN name filter to classify a name in an Interest as legitimate or not. The name filter can indeed reduce the throughput per Interest, but to sustain the speed of the attack, malware can send numerous Interests within a short period of time. Moreover, malware can even exploit an Interest with an explicit payload in the name (like an HTTP POST message in the Internet), which is out of scope for the proposed name filter and can increase the leakage throughput by adopting a longer payload. Fourthly, to take into account the traffic flowing from a consumer to the NDN firewall, this thesis proposes an NDN flow monitored at the firewall.
Fifthly, to deal with the drawbacks of the name filter, it proposes an NDN flow filter to classify a flow as legitimate or not. The performance evaluation shows that the flow filter complements the name filter and greatly chokes the information leakage throughput.
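As a rough illustration of the name-filter idea (a simplified stand-in, not the classifier built in the thesis; the prefixes and thresholds are invented), a firewall could match each Interest name against whitelisted prefixes and reject names with long or unusual components that might carry an exfiltration payload:

```python
import re

# Illustrative policy: legitimate name prefixes, a cap on component
# length, and a restriction to ordinary name characters.
WHITELIST = ("/example.com/web/", "/example.com/video/")
MAX_COMPONENT_LEN = 32
COMPONENT_RE = re.compile(r"^[a-z0-9._-]+$", re.IGNORECASE)

def is_legitimate(name):
    """Classify an NDN Interest name as legitimate or not."""
    if not name.startswith(WHITELIST):
        return False
    components = [c for c in name.split("/") if c]
    # A very long or oddly formed component may be a smuggled payload.
    return all(len(c) <= MAX_COMPONENT_LEN and COMPONENT_RE.match(c)
               for c in components)

print(is_legitimate("/example.com/web/index.html"))        # True
print(is_legitimate("/evil.example/upload/leaked-data"))   # False
print(is_legitimate("/example.com/web/" + "x" * 200))      # False
```

Per-name checks like this say nothing about how many Interests are sent per second, which is exactly the gap the thesis's flow monitoring and flow filter are meant to close.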
Vrána, Pavel. "Webový portál pro správu a klasifikaci informací z distribuovaných zdrojů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-237043.
Full textCasalprim, Ramonet Montserrat. "L’eficiència del procés educatiu de maternal i primera ensenyança a Andorra." Doctoral thesis, Universitat d'Andorra, 2014. http://hdl.handle.net/10803/284699.
Full text
This thesis studies the efficiency of the public nursery and primary schools of Andorra. The particular setting of the study, in which three public education systems coexist in a single country (the Andorran, the Spanish (congregational and non-congregational) and the French), representing four different organisational environments, makes it possible to offer new contributions to the existing literature on the efficiency of the educational process and to identify organisational factors that may influence school efficiency. The method used to measure efficiency is DEA (Data Envelopment Analysis), introduced by Charnes, Cooper and Rhodes in 1978. In a second stage, the bootstrap technique is applied (Simar & Wilson, 2000). Li's test (Li, 1996) is used to analyse the differences between education systems. The data collection process was laborious and required guaranteeing the anonymity of the schools and of the systems; for this reason the four education systems are identified by the letters A, B, C and D. Before introducing the effect of the environmental variables, the results show that the schools of education system C obtain the highest efficiency scores, while the schools of education system B obtain the lowest. These results can be explained by the distinctive features of the organisational environment of each education system and can provide guidance for improving school efficiency. Some of the distinctive features of education system C are a higher level of management autonomy (Purkey & Smith, 1983), which facilitates the leadership role of the school management (Antunez, 1994), and longer teaching and working hours (Gimenez et al., 2007; Naper, 2010).
After including the effect of the socioeconomic environment of the families and the individual characteristics of the pupils (in terms of motivation and attitude), most of the differences in school efficiency across education systems disappear. This change can be explained by the less favourable environment in which the schools of education system B operate and the more favourable environment of those of system C. As the debate opened by the Coleman Report (Coleman et al., 1966) established, the family environment and the individual characteristics of the pupil are variables that intervene in the educational process, and these results confirm it. Finally, in the analysis of the variables that explain parents' satisfaction, school efficiency was found to be one of them. This last result brings new applications to the efficiency studies carried out so far, which not only make it possible to study policies for improving the performance of public resources but also allow these policies to be oriented towards improving citizens' satisfaction with public services (Roch & Poister, 2006; Van Ryzin et al., 2004).
Puna, Petr. "Extrakce dat z dynamických WWW stránek." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2009. http://www.nusl.cz/ntk/nusl-236686.
Full textSilva, Sónia Sofia Mendes. "Perceções de comportamentos (Des)adequados e relacionamento com pares na escola: um estudo com alunos de 9º ano." Master's thesis, Universidade de Évora, 2016. http://hdl.handle.net/10174/18734.
Full textAsplund, Fredrik. "Parsing of X.509 certificates in a WAP environment." Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1455.
Full text
This master thesis consists of three parts. The first part contains a summary of what is needed to understand the X.509 parser that I have created, a discussion of the technical problems I encountered while programming this parser, and a discussion of its final version. The second part concerns a comparison between the X.509 parser I created and an X.509 parser created "automatically" by a compiler. I tested static memory, allocation of memory during runtime and CPU utilization for both my parser (MP) and the parser whose basic structure was constructed by a compiler (OAP). I discuss the changes made to the parsers involved to make the comparison fair to OAP, the results from the tests, and when circumstances such as time and non-standard content in the project make one way of constructing an X.509 parser better than the other. The last part concerns a WTLS parser (a simpler kind of X.509 parser), which I created.
Raška, Jiří. "Dolování dat v prostředí sociálních sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236409.
Full textMostafa, Mahmoud. "Analyse de sécurité et QoS dans les réseaux à contraintes temporelles." Thesis, Toulouse, INPT, 2011. http://www.theses.fr/2011INPT0074/document.
Full text
QoS and security are two precious objectives for network systems to attain, especially for critical networks with temporal constraints. Unfortunately, they often conflict: while QoS tries to minimize the processing delay, strong security protection requires more processing time and causes traffic delay and QoS degradation. Moreover, real-time systems, QoS and security have often been studied separately and by different communities. In the context of the avionic data network, various domains and heterogeneous applications with different levels of criticality cooperate for the mutual exchange of information, often through gateways. It is clear that this information has different levels of sensitivity in terms of security and QoS constraints. Given this context, the major goal of this thesis is to increase the robustness of the next-generation e-enabled avionic data network with respect to security threats and ruptures in traffic characteristics. From this perspective, we surveyed the literature to establish the state of the art in network security, QoS and applications with time constraints. Then, we studied the next-generation e-enabled avionic data network. This allowed us to draw a map of the field and to understand security threats. Based on this study we identified both the security and the QoS requirements of the next-generation e-enabled avionic data network. In order to satisfy these requirements we proposed the architecture of a QoS-capable integrated security gateway to protect the next-generation e-enabled avionic data network and ensure the availability of critical traffic. To provide a true integration between the different gateway components we built an integrated session table that stores all the needed session information and speeds up packet processing (firewall stateful inspection, NAT mapping, QoS classification and routing).
This necessitated studying existing session table structures and proposing a new structure to fulfill our objective. We also present the processing algorithms needed to access the new integrated session table. In the IPSec VPN component we identified the problem that IPSec ESP encrypted traffic cannot be classified appropriately by QoS edge routers. To overcome this problem, we developed the Q-ESP protocol, which allows the classification of encrypted traffic and combines the security services provided by IPSec ESP and AH. To manage the network traffic wisely, a variety of bandwidth management techniques have been developed. To assess their performance and identify which bandwidth management technique is the most suitable in our context, we performed a delay-based comparison using experimental tests. In the final stage, we benchmarked our implemented security gateway against three commercially available software gateways. The goal of this benchmark test is to evaluate performance and identify problems for future research work. This dissertation is divided into two parts, in French and in English respectively; both parts follow the same structure, the first being an extended summary of the second.
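The thesis does not publish the session table layout; as a toy sketch of the idea of one shared lookup serving all gateway stages, assuming a flow is keyed by the classic 5-tuple and with per-stage fields invented here for illustration:

```python
class SessionTable:
    """Toy integrated session table: one entry per flow, shared by the
    firewall, NAT and QoS stages so each packet needs a single lookup.
    Field names (fw_state, nat_map, qos_class) are illustrative, not the
    thesis's actual structure."""

    def __init__(self):
        self._sessions = {}

    def lookup_or_create(self, src, dst, sport, dport, proto):
        key = (src, dst, sport, dport, proto)
        if key not in self._sessions:
            # first packet of the flow: initialise all per-stage state at once
            self._sessions[key] = {"fw_state": "NEW",
                                   "nat_map": None,
                                   "qos_class": None}
        return self._sessions[key]
```

Each stage then reads and updates its own fields on the same entry instead of keeping a private table, which is the speed-up the integrated design aims at.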
Celis, Carranza Ines Jeovana, Haro Jhajaira Luz Marín, Zavala Edith Rosalina Palomino, Babilonia Fiorella Stefania Villafuerte, and Salazar Karen Villanueva. "Modelo para el manejo del impacto de la identidad cultural dada la adquisición de una empresa peruana por parte de la subsidiaria latinoamericana de una empresa japonesa en el año 2015." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2016. http://hdl.handle.net/10757/618237.
Full textTesis
Piwko, Karel. "Nativní XML rozhraní pro relační databázi." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2010. http://www.nusl.cz/ntk/nusl-235541.
Full textSANTOS, Danilo Abreu. "Recomendação pedagógica para melhoria da aprendizagem em redações." Universidade Federal de Campina Grande, 2015. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/550.
Full text
Online education has grown significantly worldwide in recent decades, becoming a viable option both for those who lack the time to pursue their academic training in person and for those who wish to complement it. There are also those who seek to enter higher education through the National Secondary Education Examination (ENEM) and use this modality of teaching to complement their studies, aiming to fill gaps left by their schooling. The ENEM consists of objective questions (subdivided into four major areas: Languages and Codes; Mathematics; Human Sciences; and Natural Sciences) and a subjective question (the essay). According to data from the Ministry of Education (MEC), more than 50% of the candidates who took the ENEM in 2014 scored below 500 points on the essay. This research uses pedagogical recommendations based on the text genre used by the ENEM, aiming to improve the writing of the dissertative essay. To this end, the online learning environment MeuTutor was used as the experimental tool. The environment has an essay-writing module in which the students' texts are corrected through peer evaluation, a methodology for which research shows that the evaluation results are significant and quite similar to those obtained by expert teachers. However, merely presenting the essay score does not, by itself, guarantee an improvement in the student's writing. Therefore, aiming at a performance gain in essay production, a pedagogical recommendation module was added to MeuTutor, based on 19 profiles resulting from the use of data mining algorithms (DBSCAN and K-Means) on the 2012 ENEM microdata made available by MEC. These profiles were grouped into 6 blocks, each with a set of tasks in the areas of writing, grammar, and textual coherence and agreement.
The validation of these recommendations was carried out in a 3-cycle experiment in which, in each cycle, the student writes an essay, evaluates his or her peers, and carries out the pedagogical recommendation received. Statistical analysis of these data showed that the strategic recommendation model used in this research enabled a measurable gain in the quality of textual production.
Online education has grown significantly in recent years throughout the world, becoming a viable option for those who don't have the time to pursue a traditional technical training or academic degree. In Brazil, people seek to enter higher education through the National Secondary Education Examination (ENEM) and use online education to complement their studies, aiming to remedy gaps in their school formation. The ENEM consists of objective questions (divided into 4 main areas: Languages and Codes; Mathematics; Social Sciences; and Natural Sciences) and a subjective question (the essay). According to the Brazilian Ministry of Education (MEC), more than 50% of the candidates who took the ENEM in 2014 performed below 500 points (out of a maximum of 1000) on their essays. This research uses educational recommendations based on the five official correction criteria for the ENEM essays to improve writing. It employs, as an experimental tool, an online learning environment called MeuTutor, which has an essay writing/correction module. The correction module uses peer evaluation techniques, for which research shows results significantly similar to those obtained by specialists' correction. However, simply displaying the scores for the criteria does not guarantee an improvement in students' writing. To promote that, an educational recommendation module was added to MeuTutor. It is based on 19 profiles obtained by mining the 2012 ENEM data with the DBSCAN and K-Means algorithms; the profiles were grouped into six blocks, each associated with a set of tasks in the areas of writing, grammar and coherence, and textual agreement. The validation of these recommendations was made in an experiment with three cycles, where students would: (1) write the essay; (2) evaluate their peers; (3) perform the pedagogical recommendations received.
From the analysis of these data, it was found that the strategic model of recommendation used in this study enabled a measurable gain in the quality of textual production.
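The profile-mining step applies DBSCAN and K-Means to the ENEM microdata. As a rough illustration of the K-Means half only, here is a minimal one-dimensional version with a deterministic quantile initialisation, run on invented toy scores rather than the real microdata (assumes k >= 2):

```python
def kmeans_1d(points, k, iters=20):
    """Toy 1-D K-Means: cluster scalar scores into k groups and return the
    final cluster centres. Initialises centres at evenly spaced quantiles
    so the result is deterministic."""
    pts = sorted(points)
    centers = [pts[int(i * (len(pts) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pts:
            # assign each point to its nearest centre
            i = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        # recompute each centre as the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers
```

On scores clustered around 100, 500 and 900 it recovers those three centres; the thesis's real pipeline works on multi-dimensional microdata and also relies on DBSCAN for the density-based part.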
Sekhi, Ikram. "Développement d'un alphabet structural intégrant la flexibilité des structures protéiques." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC084/document.
Full text
The purpose of this PhD is to provide a Structural Alphabet (SA) for more accurate characterization of protein three-dimensional (3D) structures, as well as to integrate the increasing protein 3D structure information currently available in the Protein Data Bank (PDB). The SA also takes into consideration the logic behind the structural fragment sequence by using a hidden Markov model (HMM). In this PhD, we describe a new structural alphabet called SAFlex (Structural Alphabet Flexibility), improving the existing HMM-SA27 structural alphabet, in order to take into account the uncertainty of data (missing data in PDB files) and the redundancy of protein structures. The new SAFlex structural alphabet therefore offers a new, rigorous and robust encoding model. This encoding takes the encoding uncertainty into account by providing three encoding options: the maximum a posteriori (MAP), the marginal posterior distribution (POST), and the effective number of letters at each given position (NEFF). SAFlex also builds a consensus encoding from different replicates (multiple chains, monomers and several homomers) of a single protein, and thus allows the detection of structural variability between different chains. The methodological advances and the achievement of the SAFlex alphabet are the main contributions of this PhD. We also present the new PDB parser (SAFlex-PDB) and demonstrate that our parser is of interest in both qualitative (detection of various errors) and quantitative (program optimization and parallelization) terms, by comparing it with two other parsers well known in bioinformatics (Biopython and BioJava). The SAFlex structural alphabet is being made available to the scientific community through a website. The SAFlex web server represents the concrete contribution of this PhD, while the SAFlex-PDB parser represents an important contribution to the proper functioning of the proposed website.
Here, we describe the functions and the interfaces of the SAFlex web server. SAFlex can be used in various ways on a protein tertiary structure in a given PDB-format file: it can encode the 3D structure and identify and predict missing data. Hence, it is, to date, the only alphabet able to encode and predict the missing data in a 3D protein structure. Finally, these improvements are promising for exploring increasing protein redundancy data and obtaining a useful quantification of their flexibility.
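The abstract names NEFF but does not give its formula; a common definition of an "effective number" of states is the exponential of the Shannon entropy of the per-position posterior over alphabet letters, so a sketch under that assumption is:

```python
import math

def neff(posterior):
    """Effective number of letters at one position, computed as exp(H) where
    H is the Shannon entropy (in nats) of the posterior distribution over
    structural-alphabet letters. Ranges from 1 (one certain letter) up to
    the alphabet size (uniform uncertainty)."""
    entropy = -sum(p * math.log(p) for p in posterior if p > 0)
    return math.exp(entropy)
```

Under this definition, a uniform posterior over the 27 HMM-SA27 letters gives NEFF = 27, while a fully certain assignment gives NEFF = 1.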
Malakhovski, Ian. "Sur le pouvoir expressif des structures applicatives et monadiques indexées." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30118.
Full text
It is well known that very simple theoretic constructs such as the Either (type-theoretic equivalent of the logical "or" operator), State (composable state transformers), Applicative (generalized function application), and Monad (generalized sequential program composition) structures (as they are named in Haskell) cover a huge chunk of what is usually needed to elegantly express most computational idioms used in conventional programs. However, it is conventionally argued that there are several classes of commonly used idioms that do not fit well within those structures, the most notable examples being transformations between trees (data types, which are usually argued to require either generalized pattern matching or heavy metaprogramming infrastructure) and exception handling (which is usually argued to require special language and run-time support). This work aims to show that many of those idioms can, in fact, be expressed by reusing those well-known structures with minor (if any) modifications. In other words, the purpose of this work is to apply the KISS (Keep It Stupid Simple) and/or Occam's razor principles to the algebraic structures used to solve common programming problems. Technically speaking, this work aims to show that natural generalizations of the Applicative and Monad type classes of Haskell, combined with the ability to take Cartesian products of them, produce a very simple common framework for expressing many practically useful things, some instances of which are very convenient novel ways to express common programming ideas, while others are usually classified as effect systems. On that latter point, if one is to generalize the presented instances into an approach to the design of effect systems in general, then the overall structure of such an approach can be thought of as an almost syntactic framework on top of which different effect systems adhering to the general structure of the "marriage" framework can be expressed.
(This work does not go too deeply into the latter, since it is mainly motivated by examples that can be immediately applied to Haskell practice.) Note, however, that, after the fact, these technical observations are completely unsurprising: Applicative and Monad are generalizations of functional and linear program composition respectively, so, naturally, Cartesian products of these two structures ought to cover a lot of what programs usually do.
Banerji, Ranajoy. "Optimisation d’une mission spatiale CMB de 4eme génération." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCC199/document.
Full text
The Cosmic Microwave Background (CMB) radiation is a rich and clean source of cosmological information. Study of the CMB over the past few decades has led to the establishment of a "Standard Model" for cosmology and has constrained many of its principal parameters. It has also transformed the field into a highly data-driven domain. Currently, Inflation is the leading paradigm describing the earliest moments of our Universe. It predicts the generation of primordial matter density fluctuations and gravitational waves. The CMB polarisation carries the signature of these gravitational waves in the form of primordial "B-modes". A future generation of CMB polarisation space missions is well suited to observe this signature of Inflation. This thesis focuses on optimising a future CMB space mission that will observe the B-mode signal to reach a sensitivity of r = 0.001. Specifically, I study the optimisation of the scanning strategy and the impact of systematics on the quality of the polarisation measurement.
Poliziani, Cristian. "Analisi dei percorsi ciclabili registrati tramite smartphone sulla rete stradale di Bologna nell'ultimo triennio." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/11247/.
Full textBiagini, Giulio. "Studio delle Problematiche ed Evoluzione dello Streaming Adattivo su HTTP." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15474/.
Full textTsai, Feng-Tse, and 蔡豐澤. "The Research on Improvement of the Tunstall Parse Tree for Data Compression." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/79054446942455141098.
Full text淡江大學
資訊管理學系碩士班
96
There are lossy and lossless techniques for data compression. Most lossless compression techniques are vulnerable to noise: when bit flips occur, serious error propagation is seen in lossless compression techniques such as Huffman coding, arithmetic coding, and Ziv-Lempel coding. By using fixed-length codewords and no adaptive dictionary, Tunstall coding stands out as an error-resilient lossless compression technique. However, the compression ratio of Tunstall coding is not good. The aim of this work is therefore to enhance the compression ratio of Tunstall coding without compromising its error resilience. Traditional Tunstall coding grows a parse tree by iteratively expanding the maximum-probability leaf. On each node expansion, it exhaustively branches on all symbols, which can leave some precious codewords unassigned or waste codewords on low-probability nodes. In this work, three revised versions A, B and C of Tunstall coding are proposed. Versions A and B are based on the traditional Tunstall parse tree and aim to improve the assignment of the complete set of codewords to high-probability nodes via node insertion and node deletion respectively. Version C is based on an infinitely extended parse tree and aims to grow the optimal parse tree by assigning the complete set of codewords only to the top-probability nodes in the infinite tree. For evaluation, the performance of the three revised versions is compared with that of traditional Tunstall coding. The experiments show that on common datasets version C performs better than versions A and B and achieves up to a 6% increase in compression ratio over traditional Tunstall coding. Furthermore, version C is no worse than versions A and B in terms of compression time.
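As a reference point for the traditional scheme the thesis improves on, here is a minimal sketch of classical Tunstall parse-tree construction for a memoryless source: exhaustively branch the maximum-probability leaf until the next expansion would exceed the codeword budget.

```python
import heapq

def tunstall_dictionary(probs, codeword_bits):
    """Build the Tunstall dictionary: the source words (parse-tree leaves)
    that get mapped to fixed-length codewords of `codeword_bits` bits.
    `probs` maps each source symbol to its probability."""
    max_leaves = 2 ** codeword_bits
    n = len(probs)
    # max-heap of leaves ordered by probability (negated for heapq's min-heap)
    heap = [(-p, word) for word, p in probs.items()]
    heapq.heapify(heap)
    leaves = n
    # each expansion replaces one leaf by n children, adding n - 1 leaves
    while leaves + n - 1 <= max_leaves:
        neg_p, word = heapq.heappop(heap)
        for sym, p in probs.items():
            heapq.heappush(heap, (neg_p * p, word + sym))
        leaves += n - 1
    return sorted(word for _, word in heap)
```

With a binary source {a: 0.7, b: 0.3} and 3-bit codewords this yields 8 prefix-free source words, the run of the most probable symbol ("aaaaa") being the longest; mapping long high-probability words to short fixed-length codewords is where the coding gain comes from.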
Panse, Christian [Verfasser]. "Visualizing geo-related data using cartograms / vorgelegt von Christian Panse." 2005. http://d-nb.info/976691841/34.
Full text"A robust unification-based parser for Chinese natural language processing." 2001. http://library.cuhk.edu.hk/record=b5895881.
Full textThesis (M.Phil.)--Chinese University of Hong Kong, 2001.
Includes bibliographical references (leaves 168-175).
Abstracts in English and Chinese.
Chapter 1. --- Introduction --- p.12
Chapter 1.1. --- The nature of natural language processing --- p.12
Chapter 1.2. --- Applications of natural language processing --- p.14
Chapter 1.3. --- Purpose of study --- p.17
Chapter 1.4. --- Organization of this thesis --- p.18
Chapter 2. --- Organization and methods in natural language processing --- p.20
Chapter 2.1. --- Organization of natural language processing system --- p.20
Chapter 2.2. --- Methods employed --- p.22
Chapter 2.3. --- Unification-based grammar processing --- p.22
Chapter 2.3.1. --- Generalized Phrase Structure Grammar (GPSG) --- p.27
Chapter 2.3.2. --- Head-driven Phrase Structure Grammar (HPSG) --- p.31
Chapter 2.3.3. --- Common drawbacks of UBGs --- p.33
Chapter 2.4. --- Corpus-based processing --- p.34
Chapter 2.4.1. --- Drawback of corpus-based processing --- p.35
Chapter 3. --- Difficulties in Chinese language processing and its related works --- p.37
Chapter 3.1. --- A glance at the history --- p.37
Chapter 3.2. --- Difficulties in syntactic analysis of Chinese --- p.37
Chapter 3.2.1. --- Writing system of Chinese causes segmentation problem --- p.38
Chapter 3.2.2. --- Words serving multiple grammatical functions without inflection --- p.40
Chapter 3.2.3. --- Word order of Chinese --- p.42
Chapter 3.2.4. --- The Chinese grammatical word --- p.43
Chapter 3.3. --- Related works --- p.45
Chapter 3.3.1. --- Unification grammar processing approach --- p.45
Chapter 3.3.2. --- Corpus-based processing approach --- p.48
Chapter 3.4. --- Restatement of goal --- p.50
Chapter 4. --- SERUP: Statistical-Enhanced Robust Unification Parser --- p.54
Chapter 5. --- Step One: automatic preprocessing --- p.57
Chapter 5.1. --- Segmentation of lexical tokens --- p.57
Chapter 5.2. --- Conversion of date, time and numerals --- p.61
Chapter 5.3. --- Identification of new words --- p.62
Chapter 5.3.1. --- Proper nouns - Chinese names --- p.63
Chapter 5.3.2. --- Other proper nouns and multi-syllabic words --- p.67
Chapter 5.4. --- Defining smallest parsing unit --- p.82
Chapter 5.4.1. --- The Chinese sentence --- p.82
Chapter 5.4.2. --- Breaking down the paragraphs --- p.84
Chapter 5.4.3. --- Implementation --- p.87
Chapter 6. --- Step Two: grammar construction --- p.91
Chapter 6.1. --- Criteria in choosing a UBG model --- p.91
Chapter 6.2. --- The grammar in details --- p.92
Chapter 6.2.1. --- The PHON feature --- p.93
Chapter 6.2.2. --- The SYN feature --- p.94
Chapter 6.2.3. --- The SEM feature --- p.98
Chapter 6.2.4. --- Grammar rules and features principles --- p.99
Chapter 6.2.5. --- Verb phrases --- p.101
Chapter 6.2.6. --- Noun phrases --- p.104
Chapter 6.2.7. --- Prepositional phrases --- p.113
Chapter 6.2.8. --- "Ba2" and "Bei4" constructions --- p.115
Chapter 6.2.9. --- The terminal node S --- p.119
Chapter 6.2.10. --- Summary of phrasal rules --- p.121
Chapter 6.2.11. --- Morphological rules --- p.122
Chapter 7. --- Step Three: resolving structural ambiguities --- p.128
Chapter 7.1. --- Sources of ambiguities --- p.128
Chapter 7.2. --- The traditional practices: an illustration --- p.132
Chapter 7.3. --- Deficiency of current practices --- p.134
Chapter 7.4. --- A new point of view: Wu (1999) --- p.140
Chapter 7.5. --- Improvement over Wu (1999) --- p.142
Chapter 7.6. --- Conclusion on semantic features --- p.146
Chapter 8. --- Implementation, performance and evaluation --- p.148
Chapter 8.1. --- Implementation --- p.148
Chapter 8.2. --- Performance and evaluation --- p.150
Chapter 8.2.1. --- The test set --- p.150
Chapter 8.2.2. --- Segmentation of lexical tokens --- p.150
Chapter 8.2.3. --- New word identification --- p.152
Chapter 8.2.4. --- Parsing unit segmentation --- p.156
Chapter 8.2.5. --- The grammar --- p.158
Chapter 8.3. --- Overall performance of SERUP --- p.162
Chapter 9. --- Conclusion --- p.164
Chapter 9.1. --- Summary of this thesis --- p.164
Chapter 9.2. --- Contribution of this thesis --- p.165
Chapter 9.3. --- Future work --- p.166
References --- p.168
Appendix I --- p.176
Appendix II --- p.181
Appendix III --- p.183
Aguiar, Daniel José Gomes. "IP network usage accounting: parte I." Master's thesis, 2015. http://hdl.handle.net/10400.13/1499.
Full text
An Internet Service Provider (ISP) is responsible for managing networks made up of thousands of customers, where bandwidth must be strictly controlled to avoid congestion. For that, ISPs need systems capable of analyzing traffic data and assigning suitable Quality of Service (QoS) based on it. NOS Madeira is the leading ISP in Madeira, with thousands of customers throughout the region. The existing bandwidth control system in this company was obsolete, which led to the need to create a new one. The new system, called IP Network Usage Accounting, consists of three subsystems: the IP Mapping System, the Accounting System and the Policy Server System. This report describes the design, implementation and testing of the first subsystem, the IP Mapping System. The IP Mapping System is responsible for collecting the traffic data generated by the customers of NOS Madeira and providing it to the second subsystem (the Accounting System). This, in turn, analyzes the data and sends the results to the third subsystem (the Policy Server System), which applies the QoS corresponding to each client IP.
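The report does not specify the data interchange format between the subsystems; as a toy sketch of the IP Mapping System's core job, assuming traffic records arrive as (customer IP, byte count) pairs, the hand-off to the Accounting System might reduce to a per-IP aggregation:

```python
from collections import defaultdict

def aggregate_usage(records):
    """Sum per-customer-IP byte counts from raw traffic records.
    `records` is an iterable of (ip, nbytes) pairs; the output is the kind
    of summary an accounting stage would consume (shapes are invented here,
    not taken from the report)."""
    totals = defaultdict(int)
    for ip, nbytes in records:
        totals[ip] += nbytes
    return dict(totals)
```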
Chia-Yi, Pan. "An Extended PROMELA Parser Which Generates Compact CCS State Graphs Using Data Flow Analysis for Refactoring Automation." 2002. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0021-1904200715423664.
Full textPan, Chia Yi, and 潘珈逸. "An Extended PROMELA Parser Which Generates Compact CCS State Graphs Using Data Flow Analysis for Refactoring Automation." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/50929352371886929884.
Full text國立臺灣師範大學
資訊教育研究所
91
Automated finite-state verification techniques have matured considerably in the past decades, but state-space explosion remains an obstacle to their use. Theoretical lower bounds on complexity imply that all of the techniques developed to avoid or mitigate state-space explosion depend on models that are "well-formed" in some way, and will usually fail for other models. This further implies that, when analysis is applied to models derived from designs or implementations of actual software systems, a model of the system "as built" is unlikely to be suitable for automated analysis. In particular, compositional hierarchical analysis (where state-space explosion is avoided by simplifying models of subsystems at several levels of abstraction) depends on the modular structure of the model to be analyzed. We describe how as-built finite-state models can be refactored for compositional state-space analysis, applying a series of transformations to produce an equivalent model whose structure exhibits suitable modularity. In this thesis, we adopt Promela as the front-end language to automate refactoring. We select a subset of Promela and add some keywords for refactoring. The extended syntax is called rc-Promela, where "r" stands for "refactor" and "c" stands for "ccs." We build a parser for rc-Promela, use it to construct the AST of an rc-Promela model, and finally apply data flow analysis to the AST to generate compact CCS state graphs for refactoring.
Bhattacharjee, Abhinaba. "A Data Requisition Treatment Instrument For Clinical Quantifiable Soft Tissue Manipulation." Thesis, 2019. http://hdl.handle.net/1805/19009.
Full text
Soft tissue manipulation is a practice widely used by manual therapists from a variety of healthcare disciplines to evaluate and treat neuromusculoskeletal impairments using mechanical stimulation, either by hand massage or with specially designed tools. The practice of a specific approach of targeted pressure application, using distinct rigid mechanical tools to break down adhesions and scar tissue and to improve range of motion for affected joints, is called Instrument-Assisted Soft Tissue Manipulation (IASTM). The efficacy of IASTM has been demonstrated as a means to improve joint mobility, reduce pain, enhance flexibility and restore function. However, unlike techniques such as ultrasound, traction and electrical stimulation, the practice of IASTM involves no standard for objectively characterizing massage with physical parameters. Thus, most IASTM treatments rely on subjective practitioner or patient feedback, which essentially creates a need to quantify therapeutic massage or IASTM treatment with adequate treatment parameters in order to document, better analyze, compare and validate STM treatment as an established, state-of-the-art practice. This thesis focuses on the development and implementation of Quantifiable Soft Tissue Manipulation (QSTM™) technology through the design of an ergonomic, portable and miniaturized wired localized-pressure-applicator medical device (Q1) for characterizing soft tissue manipulation. The dose-load response, in terms of forces in newtons, pitch angle of the device, and massage stroke frequency measured within the stipulated treatment time, is captured in real time to characterize a QSTM session. A QSTM PC software application (Q-WARE©), featuring a treatment record system for individual patients to save and retrieve treatment diagnostics and a real-time graphical visual monitoring system, was developed from scratch on the WINDOWS platform to implement the technology.
Quantitative analysis of STM treatment without visual monitoring demonstrated inter-rater and intra-rater inconsistencies in clinicians' force application, while improved consistency of treatment application was found when using visual monitoring from the QSTM feedback system. The system also discriminated variability in the application of high, medium and low dose-loads and supported stroke frequency analysis during targeted treatment sessions.
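The abstract lists stroke frequency as one of the captured parameters but not how it is computed; a simple estimate, assuming the device timestamps each detected stroke, is completed strokes per elapsed second:

```python
def stroke_frequency(stroke_times):
    """Estimate massage stroke frequency in Hz from a sorted list of stroke
    timestamps (seconds). Uses the completed intervals between the first and
    last stroke; returns 0.0 when fewer than two strokes were detected.
    (Illustrative only; the Q1 device's actual signal processing is not
    described in the abstract.)"""
    if len(stroke_times) < 2:
        return 0.0
    return (len(stroke_times) - 1) / (stroke_times[-1] - stroke_times[0])
```

For instance, five strokes spread evenly over two seconds correspond to a 2 Hz stroke rate.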
FICCADENTI, Valerio. "A rank-size approach to the analysis of socio-economics data." Doctoral thesis, 2018. http://hdl.handle.net/11393/251181.
Full textGUIDI, Arianna. "Il reato a concorso necessario improprio." Doctoral thesis, 2018. http://hdl.handle.net/11393/251080.
Full textGORLA, Sandra. "Metamorfosi e magia nel Roman de Renart. Traduzione e commento delle branches XXII e XXIII." Doctoral thesis, 2018. http://hdl.handle.net/11393/251268.
Full textVALENTE, LAURA. "GREGORIO NAZIANZENO Eij" ejpiskovpou" [carm. II,1,13. II,1,10] Introduzione, testo critico, commento e appendici." Doctoral thesis, 2018. http://hdl.handle.net/11393/251619.
Full textFORMICONI, Cristina. "LÈD: Il Lavoro È un Diritto. Nuove soluzioni all’auto-orientamento al lavoro e per il recruiting online delle persone con disabilità." Doctoral thesis, 2018. http://hdl.handle.net/11393/251119.
Full textFORESI, Elisa. "A Multisectoral Analysis for economic policy: an application for healthcare systems and for labour market composition by skills." Doctoral thesis, 2018. http://hdl.handle.net/11393/251178.
Full textPETRINI, Maria Celeste. "IL MARKETING INTERNAZIONALE DI UN ACCESSORIO-MODA IN MATERIALE PLASTICO ECO-COMPATIBILE: ASPETTI ECONOMICI E PROFILI GIURIDICI. UN PROGETTO PER LUCIANI LAB." Doctoral thesis, 2018. http://hdl.handle.net/11393/251084.
Full textRECCHI, Simonetta. "THE ROLE OF HUMAN DIGNITY AS A VALUE TO PROMOTE ACTIVE AGEING IN THE ENTERPRISES." Doctoral thesis, 2018. http://hdl.handle.net/11393/251122.
Full text