
Dissertations / Theses on the topic 'Extension bases'


Consult the top 50 dissertations / theses for your research on the topic 'Extension bases.'


1

Martin, Exertier François. "Extension orientée objet d'un SGBD relationnel." Grenoble 1, 1991. http://tel.archives-ouvertes.fr/tel-00004715.

Full text
Abstract:
The goal of this work is the design and implementation of a Relational Database Management System (RDBMS) integrating "object" concepts and technology. The principle of our approach is to extend relational domains to abstract data types (ADTs), which amounts to a relatively loose coupling of object concepts and mechanisms with a relational model and system. This raises new modelling and optimization problems that remain to be studied. First, the data model and the characteristics of the extension are defined. The notion of abstract type is introduced to express new domains: an ADT defines a data structure and a set of methods (functions) that constitute its sole manipulation interface. A single-inheritance mechanism is provided. Constructors are available to define the data structure of a type, thereby introducing the notion of complex object. The concept of sharing, associated with object identity, is an important contribution of this work. The language associated with the model is an extension of SQL called ESQL; the language currently available for writing methods is an extension of C. Implementing such a system consists in developing the components needed to support objects and integrating them into an existing RDBMS kernel. This brings out three main modules. The type manager complements the relational catalogue manager and handles ADT definitions. The method manager groups a set of functions ranging from compilation to execution. The object manager handles the storage and manipulation of complex objects (ADT instances); this part notably made it possible to study advanced object-storage techniques.
2

BENMOHAMED, MOHAMED. "Etude des bases de donnees localisees de grande extension." Paris 7, 1993. http://www.theses.fr/1993PA077013.

Full text
Abstract:
One of the characteristics of large-scale localized databases is the steady growth of the data volume as a function of the size of the area covered, the desired richness of the data and the update frequency. To optimize data access, the space must be split into partitions. To support spatio-temporal applications, the history of the data is kept in the database. It is also worthwhile to model operations in the same way as data.
3

Mello, Thiago Castilho de. "Sobre bases normais para extensões galoisianas de corpos." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/55/55135/tde-21052008-150202/.

Full text
Abstract:
Neste trabalho apresentamos várias demonstrações do Teorema da Base Normal para certos tipos de extensões galoisianas de corpos, algumas existenciais e outras construtivas, destacando as diferenças e dificuldades de cada situação. Apresentamos também generalizações de tal teorema e mostramos que toda extensão galoisiana de grau ímpar de corpos admite uma base normal autodual com respeito à forma bilinear traço
In this work we present several proofs of the Normal Basis Theorem for certain kinds of Galois extensions of fields, some of them existential and others constructive, pointing out the differences and difficulties of each situation. We also present generalizations of this theorem and show that every odd-degree Galois extension of fields admits a self-dual normal basis with respect to the trace bilinear form.
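As a brief, illustrative aside (not drawn from the thesis itself), the objects in question can be written out as follows: for a finite Galois extension $L/K$ with group $G=\{\sigma_1,\dots,\sigma_n\}$, the Normal Basis Theorem provides an element $\alpha\in L$ whose conjugates form a basis, and self-duality refers to the trace form:
\[
\{\sigma_1(\alpha),\dots,\sigma_n(\alpha)\} \text{ is a } K\text{-basis of } L, \qquad
\mathrm{Tr}_{L/K}\bigl(\sigma_i(\alpha)\,\sigma_j(\alpha)\bigr)=\delta_{ij}.
\]
For instance, in $\mathbb{F}_8/\mathbb{F}_2$ with $\beta^3=\beta^2+1$, the conjugates $\{\beta,\beta^2,\beta^4\}$ form a normal basis, and a direct check of $\mathrm{Tr}(\beta^{2^i}\beta^{2^j})$ shows that it is self-dual, in line with the odd-degree statement above.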
4

Mopolo-Moke, Gabriel. "Nice-c++ : une extension c++ pour la programmation persistante a partir d'un serveur de bases d'objets." Nice, 1991. http://www.theses.fr/1991NICE4516.

Full text
Abstract:
The Tootsi project is a European research project that started in February 1989 and ran until February 1991. It aims at improving the use of existing servers and data banks. To carry out this project, a number of development tools were deemed necessary, in particular a programming language able to support complex and multimedia types, the notion of persistence, sharing, and certain semantic links such as inheritance and association. The selected language, C++, does not have all of these features. The purpose of this study is to propose, within the Tootsi project, an approach for extending the C++ language towards persistent programming and the handling of new types. Our approach consists not in rewriting or modifying the C++ compiler but, on the contrary, in using the concepts of data abstraction, polymorphism and inheritance already present in this language. In a first part we survey the state of the art in object-oriented programming and persistent programming. This system is defined by: (1) a programming model based on an extension of the C++ object model and type system; (2) interfaces for manipulating objects and meta-objects; (3) an object server for persistence management. The final objective of our work is to allow Nice-C++ to support complex and multimedia types, association links, persistence and sharing, in order to meet the needs of the Tootsi project.
5

Buyukcan, Mehmet. "Preservation And Shelf Life Extension Of Shrimps And Mussels By High Hydrostatic Pressure (HPP)." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/3/12607290/index.pdf.

Full text
Abstract:
Shrimp and mussel samples were cleaned, washed and exposed to steam before freezing. HHP treatment was performed at combinations of 200, 220 and 250 MPa at 25, 30, 40 and 50°C for 10 and 20 minutes. Microbial analyses were performed by analyzing the effect of the treatments on the microbial reduction in the samples. Based on the results of the microbial reduction, the best combinations of HHP treatment were determined as 250 MPa, 50°C, 10 minutes for shrimps and 220 MPa, 50°C, 10 minutes for mussels, where total microbial inactivation was achieved. Storage analysis was performed on the samples treated at the selected HHP combinations and stored at room (25°C) and refrigeration (4°C) temperatures. For the storage analysis, variations in Total Volatile Bases (TVB-N) and pH were measured. According to the results, the shelf life of the shrimps was 10 and 16 days for storage at room and refrigeration temperature, respectively, compared to 4 days for the untreated sample at 4°C. Similarly, the shelf life of the mussel samples was 12 days for storage at room temperature and 18 days for storage at refrigeration temperature, compared to 4 days for the untreated sample at 4°C. HHP, at the parameters studied for shrimps and mussels, can be offered as an alternative to the conventional frozen-food technology currently used in the industry for the preservation of shellfish, since it allows the samples to be handled at lower temperatures in the post-production period, reducing both the energy required and the operational costs without sacrificing quality as measured by microbial reduction, TVB-N and pH.
6

Hmida, Hmida. "Extension des Programmes Génétiques pour l’apprentissage supervisé à partir de très larges Bases de Données (Big data)." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED047.

Full text
Abstract:
Dans cette thèse, nous étudions l'adaptation des Programmes Génétiques (GP) pour surmonter l'obstacle du volume de données dans les problèmes Big Data. GP est une méta‐heuristique qui a fait ses preuves pour les problèmes de classification. Néanmoins, son coût de calcul est un frein à son utilisation avec les larges bases d’apprentissage. Tout d'abord, nous effectuons une revue approfondie enrichie par une étude comparative expérimentale des algorithmes d'échantillonnage utilisés avec GP. Puis, à partir des résultats de l'étude précédente, nous proposons quelques extensions basées sur l'échantillonnage hiérarchique. Ce dernier combine des algorithmes d'échantillonnage actif à plusieurs niveaux et s’est prouvé une solution appropriée pour mettre à l’échelle certaines techniques comme TBS et pour appliquer GP à un problème Big Data (cas de la classification des bosons de Higgs). Par ailleurs, nous formulons une nouvelle approche d'échantillonnage appelée échantillonnage adaptatif, basée sur le contrôle de la fréquence d'échantillonnage en fonction du processus d'apprentissage, selon les schémas fixe, déterministe et adaptatif. Enfin, nous présentons comment transformer une implémentation GP existante (DEAP) en distribuant les évaluations sur un cluster Spark. Nous démontrons comment cette implémentation peut être exécutée sur des clusters à nombre de nœuds réduit grâce à l’échantillonnage. Les expériences montrent les grands avantages de l'utilisation de Spark pour la parallélisation de GP
In this thesis, we investigate the adaptation of GP to overcome the data volume hurdle in Big Data problems. GP is a well-established meta-heuristic for classification problems but is hampered by its computing cost. First, we conduct an extensive review, enriched with an experimental comparative study, of training set sampling algorithms used for GP. Then, based on the results of this study, we propose some extensions based on hierarchical sampling. The latter combines active sampling algorithms on several levels and has proven to be an appropriate solution for scaling up sampling techniques that cannot deal with large datasets (like TBS) and for applying GP to a Big Data problem such as Higgs boson classification. Moreover, we formulate a new sampling approach called "adaptive sampling", based on controlling the sampling frequency depending on the learning process, through fixed, deterministic and adaptive control schemes. Finally, we present how an existing GP implementation (DEAP) can be adapted by distributing evaluations on a Spark cluster. We then demonstrate how this implementation can be run on tiny clusters by sampling. Experiments show the great benefits of using Spark as a parallelization technology for GP.
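As a loose illustration of the sampling-frequency idea summarized above (a sketch under assumed names, not the thesis's DEAP/Spark implementation), a GP-style training loop might refresh its active training subset according to a fixed, deterministic or adaptive scheme:

    import random

    def resample(dataset, size):
        """Draw a fresh training subset (plain random sampling stands in here
        for the active-sampling schemes surveyed in the thesis)."""
        return random.sample(dataset, min(size, len(dataset)))

    def should_resample(generation, scheme, improvement, period=5, threshold=1e-3):
        """Fixed: refresh every generation; deterministic: on a fixed period;
        adaptive: only when the best fitness stops improving."""
        if scheme == "fixed":
            return True
        if scheme == "deterministic":
            return generation % period == 0
        return improvement < threshold          # adaptive scheme

    def evolve(population, dataset, evaluate, vary, generations=50,
               subset_size=100, scheme="adaptive"):
        """Generic loop whose training subset is refreshed according to the
        chosen sampling-frequency scheme."""
        subset = resample(dataset, subset_size)
        prev_best = float("inf")
        for gen in range(generations):
            fitnesses = [evaluate(ind, subset) for ind in population]
            best = min(fitnesses)               # assume lower fitness is better
            if should_resample(gen, scheme, improvement=prev_best - best):
                subset = resample(dataset, subset_size)
            prev_best = best
            population = vary(population, fitnesses)  # selection + crossover/mutation
        return population, prev_best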
7

Tawbi, Chawki. "Adactif : extension d'un SGBD à l'activité par une approche procédurale basée sur les rendez-vous." Toulouse 3, 1996. http://www.theses.fr/1996TOU30262.

Full text
Abstract:
Activity in DBMSs is implemented through Event-Condition-Action (ECA) rules, which are used to specify the active behaviour of the system and its reaction to situations encountered in the database and in its environment. The event specifies when the rule is triggered; the condition checks whether the state of the database requires executing the action, which in turn performs operations in response to the event. These operations may serve to cancel the cause of the event or to carry out its consequences. In the work done during this thesis, we took interest in a language endowed with real-time mechanisms as a source of inspiration for extending a passive DBMS with active capabilities. We drew on Ada tasks and their synchronization mechanism (rendezvous) to implement rules in our active DBMS, named Adactif. Rules in Adactif are considered as active tasks waiting for a rendezvous from the system, namely the triggering event. Each task is in charge of checking a condition and, if it is satisfied, of executing the associated action, described procedurally. Tasks in Adactif were implemented with Unix processes and run in parallel, and synchronization in the system is implemented with Unix pipes. Moreover, since composite events (formed by a combination of events) play an important role in active DBMSs, we propose a detection mechanism for this type of event, also based on the task and rendezvous principle. Thus, a composite event is detected by a rule waiting for a rendezvous with the component events. When the latter are at the rendezvous, and after the composition has been performed, this rule signals the composite event to the system. This approach lets users specify their own composition operators according to the semantics of their application.
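As a minimal sketch of the rendezvous idea only (hypothetical Python, not the Adactif implementation, which relies on Unix processes and pipes directly), a rule can be pictured as a process blocked on a pipe until the system sends it the triggering event:

    from multiprocessing import Process, Pipe

    def condition(event):
        return event.get("table") == "orders"    # condition part of the rule

    def action(event):
        print("reacting to", event)              # action part of the rule

    def rule_task(conn):
        """An ECA rule as a task: block (rendezvous) until the system sends an
        event, check the condition, and run the action if it holds."""
        while True:
            event = conn.recv()                  # rendezvous with the triggering event
            if event is None:                    # shutdown signal
                break
            if condition(event):
                action(event)

    if __name__ == "__main__":
        system_end, rule_end = Pipe()
        rule = Process(target=rule_task, args=(rule_end,))
        rule.start()
        system_end.send({"table": "orders", "op": "insert"})  # event raised by the system
        system_end.send(None)
        rule.join()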
8

LECUBIN, FLORENCE. "Synthese et etude de nouvelles bases heterocycliques : extension des motifs de reconnaissance de l'adn double brin par les oligonucleotides anti-genes." Paris 11, 1999. http://www.theses.fr/1999PA112083.

Full text
Abstract:
The antigene strategy is based on the ability of oligopyrimidine-type oligonucleotides to recognize oligopurine·oligopyrimidine sequences of double-stranded DNA by forming a triple helix. In these systems, sequence recognition and specificity rely on the formation of Hoogsteen-type hydrogen bonds between the bases of the third strand and the donor or acceptor sites of the homopurine strand in the major groove of the double helix. However, an inversion of base pairs (pyrimidine·purine) considerably destabilizes the triple helix. The work presented in this thesis aims to design new oligonucleotide analogues able to recognize any target sequence whatever its nucleoside composition. We synthesized heterocyclic bases capable of recognizing the T·A and C·G base pairs. Their construction involved a Stille-type coupling reaction. We then studied by 1H NMR, in an organic solvent, the recognition of the T·A and C·G pairs by these new bases, which can form three hydrogen bonds. This approach is a quick way to preselect modified bases before undertaking their incorporation into oligonucleotides, which was carried out using serinol units. The determination of the stability of the triplexes formed, by melting-temperature measurements, revealed too great a flexibility of the incorporation system. This result led us to undertake the synthesis of C-nucleosides derived from the heterocyclic bases, in order to preserve the phosphodiester backbone of DNA.
9

Ghederim, Alexandra. "Une extension des modèles sémantiques par un ordre sur les attributs : application à la migration de schémas relationnels vers des schémas orientés objet." Lyon 1, 1996. http://www.theses.fr/1996LYO10303.

Full text
Abstract:
Information-system modelling and database applications are becoming increasingly complex. In this context, where the amount of information grows and diversifies, where user approaches diverge and multiply, and where new technologies take hold, the database schema design process is more and more difficult and laborious. Users increasingly demand systems able to offer them a modelling faithful to their universe and better performance with respect to the functions they must fulfil. For a better modelling of object-oriented database schemas we propose in this thesis an extension of the semantic model Normalized Semantic Graph (Pich90), namely the Normalized Semantic Graph with Order (GSNO). This model adds a complement of formal specification by privileging a subset of attributes and ordering it. It better models the relations between pieces of information and better responds to the user's context. Another very topical aspect of the database field is the recovery of legacy relational databases and their migration to the new object systems. This process inevitably goes through a conceptual transformation between these two logical models, a transformation that often needs additional information. Using this extended semantic model (GSNO) as an intermediate conceptual model, we designed an automatic tool for designing static object-oriented database schemas and for migrating relational database schemas to object-oriented database schemas.
10

Collet, Philippe. "Un modele fonde sur les assertions pour le genie logiciel et les bases de donnees : application au langage oqual, une extension d'eiffel." Nice, 1997. http://www.theses.fr/1997NICE5130.

Full text
Abstract:
Component reuse in an object-oriented approach requires a high level of documentation quality and reliability, which is hard to obtain in a context of constant evolution. The assertion-based approach is a good compromise between formal proofs and development without rigour. Our introduction of quantifications, over collections of instances and over type extensions, significantly increases the current expressiveness of the assertions of the Eiffel language. Since their evolution requires an exploration technique close to the one required for databases, we propose a model and a runtime support common to both domains. To determine the most appropriate evaluation times and techniques for quantifications, we propose a classification of assertions that expresses the semantic intentions of each assertion. We then define Oqual as an extension of Eiffel for expressing logical formulas with quantification, which serve both for assertions and for database query criteria. Input shortcuts, combined with a presentation close to mathematical language, make it easy to write expressive and readable assertions as well as selective nested queries. We study the methodological aspects of our language through concrete examples of specification by assertions and of prototyping with queries. The implementation of a translator from Oqual to Eiffel shows the feasibility of the approach, using a reification-by-necessity technique. More exploratorily, we propose a semi-automatic mechanism for enabling (arming) assertions, based on a perception of how the system under construction evolves, to ease use and improve performance. The results we obtain open multiple perspectives and contribute to bringing the fields of software engineering and databases closer together.
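Purely as an analogy in another language (this is not Oqual or Eiffel syntax; the class and its invariant are made up), a quantified assertion over the set of live instances of a type might look like the following sketch:

    class Account:
        _extension = []                       # all live instances of the type

        def __init__(self, owner, balance):
            self.owner = owner
            self.balance = balance
            Account._extension.append(self)
            self._check_invariant()

        def withdraw(self, amount):
            self.balance -= amount
            self._check_invariant()

        def _check_invariant(self):
            # "For all accounts a, a.balance >= 0": a universally quantified
            # assertion over the extension of the type.
            assert all(a.balance >= 0 for a in Account._extension), \
                "quantified invariant violated: some account has a negative balance"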
11

Bazhar, Youness. "Extension des systèmes de métamodélisation persistant avec la sémantique comportementale." Phd thesis, ISAE-ENSMA Ecole Nationale Supérieure de Mécanique et d'Aérotechique - Poitiers, 2013. http://tel.archives-ouvertes.fr/tel-00939900.

Full text
Abstract:
Model-Driven Engineering (MDE) has attracted great interest thanks to the advantages it offers. In particular, MDE aims to speed up the development process and to ease software maintenance. But with the constant growth in the size of models and their instances, exploiting them with classical tools shows scalability shortcomings. Using databases is one of the proposed solutions to address this problem. In this context, two approaches have been proposed. The first consists in equipping modelling tools with databases dedicated to model storage, called model repositories (e.g. EMFStore). These databases come with exploitation languages limited to querying models and instances. Consequently, these languages offer no capability to perform advanced operations on models such as model transformation or code generation. The second approach, which we follow in our work, consists in defining database-persistent environments dedicated to meta-modelling. These environments are called persistent meta-modelling systems (PMMSs). A PMMS consists of (i) a database dedicated to the storage of meta-models, models and their instances, and (ii) an associated exploitation language with meta-modelling and model-exploitation capabilities. Several PMMSs have been proposed, such as ConceptBase or OntoDB/OntoQL. These PMMSs mainly support the definition of the structural and descriptive semantics of meta-models and models in terms of (meta-)classes, (meta-)attributes, etc. However, they provide limited mechanisms for defining the behavioural semantics needed to exploit models and instances. Indeed, behavioural semantics could be useful to compute derived concepts, perform model transformations, generate source code, etc. We therefore propose in our work to extend PMMSs with the possibility of dynamically introducing operations that can be implemented using heterogeneous mechanisms. These operations can use mechanisms internal to the database management system (e.g. stored procedures) as well as external mechanisms such as web services or external programs (e.g. Java, C++). This extension improves PMMSs by giving them wider functional coverage and greater flexibility. To validate our proposal, it was implemented on the OntoDB/OntoQL prototype and applied in three different contexts: (1) to compute derived concepts in ontology-based databases, (2) to improve an ontology-based database design methodology and finally (3) to transform and analyse models of real-time embedded systems.
12

Karamitros, Mathieu. "Extension de l'outil Monte Carlo généraliste Geant4 pour la simulation de la radiolyse de l'eau dans le cadre du projet Geant4-DNA." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14629/document.

Full text
Abstract:
Ce travail, réalisé dans le cadre du projet Geant4-DNA, consiste à concevoir un prototype pour la simulation des effets chimiques précoces des rayonnements ionisants. Le modèle de simulation étudié repose sur la représentation particule-continuum où toutes les molécules sont explicitement simulées et où le solvant est traité comme un continuum. La méthode proposée par cette thèse a pour but d'améliorer les performances de ce type de simulation. Elle se base sur (1) la combinaison d'une méthode de pas en temps dynamiques avec un processus de pont Brownien pour la prise en compte des réactions chimiques et afin d'éviter une simulation à pas en temps fixe, coûteuse en temps de calcul, et (2) sur la structure de données k-d tree pour la recherche du voisin le plus proche permettant, pour une molécule donnée, une localisation rapide du réactif le plus proche. La précision de l'algorithme est démontrée par la comparaison des rendements radiochimiques en fonction du temps et en fonction du transfert d'énergie linéaire avec des résultats d'autres codes Monte-Carlo et des données expérimentales. A partir de ce prototype, une tentative de prédiction du nombre et du type d'interactions radicaux-ADN a été entreprise basée sur d'une description simplifiée du noyau cellulaire
The purpose of this work, performed under the Geant4-DNA project, is to design a prototype for simulating the early chemical effects of ionizing radiation. The studied simulation model is based on the particle-continuum representation, where all the molecules are explicitly simulated and the solvent is treated as a continuum. The method proposed in this thesis aims at improving the performance of this type of simulation. It is based on (1) a dynamical time-step method with a Brownian bridge process, to account for chemical reactions, which avoids costly fixed time-step simulations, and (2) the k-d tree data structure for quickly locating, for a given molecule, its closest reactants. The accuracy of the algorithm is demonstrated by comparing radiochemical yields, as a function of time and of linear energy transfer, with results obtained from other Monte Carlo codes and with experimental data. Using this prototype, an attempt to predict the number and type of radical attacks on DNA has been made using a simplified description of the cell nucleus.
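To illustrate only the nearest-neighbour ingredient mentioned above (an independent sketch, not Geant4-DNA code; the species names, positions and reaction radius are made up), a k-d tree query for each molecule's closest potential reactant could look like this:

    import numpy as np
    from scipy.spatial import cKDTree

    # Hypothetical 3-D positions (nm) of two reactive species after a time step.
    hydroxyl_positions = np.random.rand(1000, 3) * 100.0   # e.g. OH radicals
    electron_positions = np.random.rand(800, 3) * 100.0    # e.g. solvated electrons

    tree = cKDTree(electron_positions)                     # index one species
    dist, idx = tree.query(hydroxyl_positions, k=1)        # closest partner for each OH

    reaction_radius = 0.5                                  # illustrative threshold (nm)
    candidates = np.flatnonzero(dist < reaction_radius)    # pairs close enough to react
    print(f"{candidates.size} candidate reactions this step")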
13

Elashter, Mouna. "Gestion et extension automatiques du dictionnaire relationnel multilingues de noms propres Prolexbase : mise à jour multilingues et création d'un volume arabe via la Wikipédia." Thesis, Tours, 2017. http://www.theses.fr/2017TOUR4011/document.

Full text
Abstract:
Les bases de données lexicales jouent un grand rôle dans le TAL, mais, elles nécessitent un développement et un enrichissement permanents via l’exploitation des ressources libres du web sémantique, entre autres, l’encyclopédie Wikipédia, DBpedia, Geonames et Yago2. Prolexbase, comporte à ce jour dix langues, trois parmi elles sont bien couvertes : le francais, l’anglais et le polonais. Il a été conçu manuellement et une première tentative semi-automatique a été réalisée par le projet ProlexFeeder (Savary et al. 2013). L’objectif de notre travail était d’élaborer un outil de mise à jour et d’extension automatiques de ce lexique, et l'ajout de la langue arabe. Un système automatique a également été mis en place pour calculer via la Wikipédia l’indice de notoriété des entrées de Prolexbase ; cet indice dépend de la langue et participe, d'une part, à la construction d'un module de Prolexbase pour la langue arabe et, d'autre part, à la révision de la notoriété présente pour les autres langues de la base
Lexical databases play a significant role in natural language processing (NLP); however, they require permanent development and enrichment through the exploitation of free resources from the semantic web, among others Wikipedia, DBpedia, Geonames and Yago2. Prolexbase, the outcome of numerous NLP studies, has ten languages, three of which are well covered: French, English and Polish. It was manually designed; the first semi-automatic attempt was made by the ProlexFeeder project (Savary et al., 2013). The objective of our work was to create an automatic updating and extension tool for Prolexbase, and to introduce the Arabic language. In addition, a fully automatic system has been implemented to calculate, via Wikipedia, the notoriety of the entries of Prolexbase. This notoriety is language-dependent; it is the first step in the construction of an Arabic module of Prolexbase, and it contributes to revising the notoriety already present for the other languages in the database.
14

Brown, Almeshia S. "An Assessment of Virginia Cooperative Extension's New Extension Agent Training Program." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/29890.

Full text
Abstract:
This study is an assessment of the New Extension Agent Training (NEAT) program in Virginia. Although new Extension agents have exceptional subject matter training, they often lack skills needed to be effective Extension professionals (Bennett, 1979). The NEAT program provides a way for new agents to receive hands-on experiences that will facilitate a smooth transition into their respective roles. There are currently no specific data evaluating the NEAT program. Therefore, an evaluation of the program by its participants to determine its importance and effectiveness may be utilized to enhance the effectiveness of the NEAT program. The survey utilized to collect data in the study was developed by the researcher. The instrument was put on a website where participants could access it during a given time frame. The population consisted of new Extension agents, training agents, and administrators who participated in the NEAT program and are currently employed by Virginia Cooperative Extension (VCE). Participants were asked to rate the importance and effectiveness of the NEAT program in facilitating new Extension agents' growth in a series of goals needed for a new agent to be proficient. These goals were then divided into eight competencies as outlined by the National Policy Statement on Staff Training and Development (1968). Participants were asked to provide demographic information and suggestions that would be useful in designing future programs. Data were analyzed using SPSS. The data showed that communication was rated the most important competency while human development was considered the least important. The ratings of the effectiveness of the NEAT program in relation to the eight competencies also showed that respondents rated communication as the most effectively taught competency covered in the NEAT program, and human development as the least effectively taught competency. Significant differences among ratings by position in the NEAT program were measured at the 0.05 alpha level. Significant differences were observed both between new Extension agents and Extension administrators and between Extension training agents and Extension administrators in the importance of a selected competency and the effectiveness of the NEAT program in teaching some of the competencies.
Ph. D.
15

Guianvarc'h, Dominique. "Extension des motifs de reconnaissance de la triple-helice d'acides nucleiques. Synthese et evaluation dans un contexte oligonucleotidique de nouveaux analogues de nucleosides pour la reconnaissance specifique de la paire de bases a_t." Paris 6, 2001. http://www.theses.fr/2001PA066113.

Full text
Abstract:
Triple-helix-forming oligonucleotides (TFOs) are known for their ability to bind in the major groove of oligopyrimidine·oligopurine sequences of double-stranded DNA (dsDNA), through specific Hoogsteen or reverse-Hoogsteen hydrogen bonds with the oligopurine strand. Triple-helix formation has applications in modulating gene expression: this approach constitutes the antigene strategy. However, it is currently limited by the fact that only oligopyrimidine·oligopurine target sequences of dsDNA can be specifically recognized by a TFO. Indeed, an inversion of A·T or G·C base pairs considerably destabilizes the triple helix. The work presented in this thesis aims at the synthesis and the evaluation, in an oligonucleotide context, of nucleoside analogues able to recognize specifically an A·T inversion, in order to extend the DNA recognition motifs of TFOs. Several series of nucleoside analogues were synthesized whose aglycone part can simultaneously use all the hydrogen-bond donor or acceptor sites of the inverted A·T base pair in the major groove of dsDNA. The design of these compounds led us to develop several synthesis routes for C-nucleosides. The latter were then incorporated into oligopyrimidine oligonucleotides in order to determine, through melting-temperature measurements, their ability to restore the stability of a triplex containing an A·T inversion. This study highlighted the remarkable efficiency of one of the synthesized compounds, S, since the Tm value of triplexes containing the A·T*S motif matches that of canonical triplexes. Complementary studies were undertaken to determine whether this analogue S forms specific hydrogen bonds with the A·T base pair according to the initially proposed model.
16

Chakraborty, Olive. "Design and Cryptanalysis of Post-Quantum Cryptosystems." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS283.

Full text
Abstract:
La résolution de systèmes polynomiaux est l’un des problèmes les plus anciens et des plus importants en Calcul Formel et a de nombreuses applications. C’est un problème intrinsèquement difficile avec une complexité, en générale, au moins exponentielle en le nombre de variables. Dans cette thèse, nous nous concentrons sur des schémas cryptographiques basés sur la difficulté de ce problème. Cependant, les systèmes polynomiaux provenant d’applications telles que la cryptographie multivariée, ont souvent une structure additionnelle cachée. En particulier, nous donnons la première cryptanalyse connue du crypto-système « Extension Field Cancellation ». Nous travaillons sur le schéma à partir de deux aspects, d’abord nous montrons que les paramètres de challenge ne satisfont pas les 80bits de sécurité revendiqués en utilisant les techniques de base Gröbner pour résoudre le système algébrique sous-jacent. Deuxièmement, en utilisant la structure des clés publiques, nous développons une nouvelle technique pour montrer que même en modifiant les paramètres du schéma, le schéma reste vulnérable aux attaques permettant de retrouver le secret. Nous montrons que la variante avec erreurs du problème de résolution d’un système d’équations est encore difficile à résoudre. Enfin, en utilisant ce nouveau problème pour concevoir un nouveau schéma multivarié d’échange de clés nous présentons un candidat qui a été soumis à la compétition Post-Quantique du NIST
Polynomial system solving is one of the oldest and most important problems in computational mathematics and has many applications in computer science. It is intrinsically a hard problem, with complexity at least single exponential in the number of variables. In this thesis, we focus on cryptographic schemes based on the hardness of this problem. In particular, we give the first known cryptanalysis of the Extension Field Cancellation cryptosystem. We work on the scheme from two aspects: first, we show that the challenge parameters do not provide the claimed 80 bits of security, by using Gröbner basis techniques to solve the underlying algebraic system. Secondly, using the structure of the public keys, we develop a new technique to show that even altering the parameters of the scheme still leaves it vulnerable to attacks that recover the hidden secret. We show that a noisy variant of the problem of solving a system of equations is still hard to solve. Finally, using this new problem to design a new multivariate key-exchange scheme, we present a candidate that was submitted to the NIST Post-Quantum Cryptography competition.
17

Hossain, Akash. "Forking in valued fields and related structures." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASM019.

Full text
Abstract:
Cette thèse est une contribution à la théorie des modèles des corps valués. On étudie la déviation dans les corps valués, ainsi que certains de leurs réduits. On s'intéresse particulièrement aux corps pseudo-locaux, les ultraproduits de caractéristique résiduelle nulle des corps valués p-adiques. Nous considérons d'abord aux groupes des valeurs des corps valués qui nous intéressent, les groupes Abéliens ordonnés réguliers. Nous y établissons description géométrique de la déviation, ainsi qu'une classification détaillée des extensions globales non-déviantes ou invariantes d'un type donné. Nous démontrons ensuite des principes d'Ax-Kochen-Ershov pour la division et la déviation dans la théorie resplendissante des expansions de suites exactes courtes pures de structures Abéliennes, telles qu'étudiées dans l'article sur la distalité d'Aschenbrenner-Chernikov-Gehret-Ziegler. En particulier, nos résultats s'appliquent aux groupes des termes dominants des (expansions de) corps valués. Pour finir, nous donnons diverses conditions suffisantes pour qu'un ensemble de paramètres soit une base d'extension dans un corps valué Hensélien de caractéristique résiduelle nulle. En particulier, nous démontrons que la déviation coïncide avec la division dans les corps pseudo-locaux de caractéristique résiduelle nulle. Nous discutons aussi des résultats de Ealy-Haskell-Simon sur la déviation pour les extensions séparées de corps valués Henséliens de caractéristique résiduelle nulle. Nous contribuons à la question en démontrant que, dans le cas d'une extension Abhyankar, et avec quelques hypothèses supplémentaires, la non-déviation d'un type dans in corps pseudo-local implique l'existence d'une mesure de Keisler globale invariante dont le support contient ce type, à l'instar des corps pseudo-finis
This thesis is a contribution to the model theory of valued fields. We study forking in valued fields and some of their reducts. We focus particularly on pseudo-local fields, the ultraproducts of residue characteristic zero of the p-adic valued fields. First, we look at the value groups of the valued fields we are interested in, the regular ordered Abelian groups. We establish for these ordered groups a geometric description of forking, as well as a full classification of the global extensions of a given type which are non-forking or invariant. Then, we prove an Ax-Kochen-Ershov principle for forking and dividing in expansions of pure short exact sequences of Abelian structures, as studied by Aschenbrenner-Chernikov-Gehret-Ziegler in their article about distality. This setting applies in particular to the leading-term structure of (expansions of) valued fields. Lastly, we give various sufficient conditions for a parameter set in a Henselian valued field of residue characteristic zero to be an extension base. In particular, we show that forking equals dividing in pseudo-local fields of residue characteristic zero. Additionally, we discuss results by Ealy-Haskell-Simon on forking in separated extensions of Henselian valued fields of residue characteristic zero. We contribute to the question in the setting of Abhyankar extensions, where we show that, under some additional conditions, if a type in a pseudo-local field does not fork, then there exists a global invariant Keisler measure whose support contains that type. This behavior is well known in pseudo-finite fields.
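For readers outside model theory, the standard definitions behind this abstract (general background, not specific to the thesis) are: a formula $\varphi(x,a)$ divides over a set $A$ if there are $k<\omega$ and an $A$-indiscernible sequence $(a_i)_{i<\omega}$ with $a_0=a$ such that $\{\varphi(x,a_i): i<\omega\}$ is $k$-inconsistent; it forks over $A$ if it implies a finite disjunction $\bigvee_{j<m}\psi_j(x,b_j)$ of formulas, each of which divides over $A$. A set $B$ is an extension base when no type over $B$ forks over $B$, i.e. every type over $B$ admits a global non-forking extension.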
18

Jomier, Geneviève. "Bases de données relationnelles : le système PEPIN et ses extensions." Paris 5, 1989. http://www.theses.fr/1989PA05S008.

Full text
Abstract:
This thesis concerns the relational DBMS PEPIN and a number of works it gave rise to. The system was designed to run on a set of micro-computers interconnected by a local area network, so as to build a distributed system composed of database servers and loosely coupled access sites. The layered architecture of the software managing a single-file database made it possible to build a system that is very flexible in its functions, easily adaptable to different application domains by adding, removing, modifying or adapting functions internal or external to the system, very easily portable to different operating systems, and evolutive. Transaction atomicity is ensured by an original and particularly efficient shadow-space mechanism. It makes it possible to perform two-phase commits, and to abort and restart transactions very quickly after a failure that has destroyed the contents of main memory. Following the description of the system, works are presented linking databases and logic, databases and data analysis, and relational databases and object orientation. These works gave rise to extensions of the reference system. The PEPIN system has been used by many research teams, as well as by industry, for the development of new prototypes in very diverse fields, in France and abroad, and for teaching databases in universities and engineering schools.
19

Dosi, Harsh. "Pathway Pioneer: A Web-Based Metabolic Network Layout Extension." DigitalCommons@USU, 2014. https://digitalcommons.usu.edu/etd/2797.

Full text
Abstract:
The number and complexity of genome-scale metabolic networks is increasing as new systems are characterized and existing models are extended. Tools for visualization of network topology and dynamics are not keeping pace and are becoming a bottleneck for advancement. Specifically, visualization tools are not optimized for human comprehension and often produce layouts where important interactions and inherent organization are not apparent. Researchers seek visualizations in which the network is partitioned into functional modules and compartments, arranged in linear, cyclic, or branching schema as appropriate, and most importantly, can be customized to their needs and shared. Challenges include the wide diversity in biological standards, layout schemas, and network formats. This work introduces a web-based tool that provides this functionality as an extension to the existing web-based tool called Pathway Pioneer (www.pathwaypioneer.org). Pathway Pioneer is a dynamic web-based system built as a front-end graphical user interface to the flux balance analysis tool COBRA-py. Full click-and-drag layout editing capabilities are added, allowing each metabolite and reaction to be translated and rotated as connecting edges are automatically redrawn. Initial automated layouts for new models maximize planarity while clustering reactions based on subsystem module and compartment. Users are given maximum flexibility to design specific layouts while details of convention, such as joined in and out reaction edges, disconnected co-factors, and connected metabolites, are automatically handled. Layouts can be shared among researchers and exported to archival Symphony format, along with PDF and PNG images. This tool provides the user with a semi-automatic layout algorithm along with graphical and interactive tools to fully customize the network layout for optimal comprehension. Export capabilities are compatible with COBRA-py and other visualization tools. It provides a platform for shared model development and innovation in the community, sharpening the R&D curve and improving the turn-around time of model reconstruction at the genome scale. Pathway Pioneer provides unique capabilities in customization of metabolic networks that complement and overcome limitations of the growing body of existing tools.
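Since the abstract describes the tool as a front end to the COBRA-py flux balance analysis package, a minimal stand-alone FBA call in COBRApy (independent of Pathway Pioneer; the SBML file name is a placeholder) would look roughly like:

    import cobra

    # Load a genome-scale model from SBML (file name is hypothetical).
    model = cobra.io.read_sbml_model("e_coli_core.xml")

    solution = model.optimize()                     # flux balance analysis (LP solve)
    print("objective value:", solution.objective_value)

    # Per-reaction fluxes, the kind of values a layout tool can overlay on the map.
    for reaction in model.reactions[:5]:
        print(reaction.id, solution.fluxes[reaction.id])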
20

Holmberg, Wilhelm. "Cost-efficient method for lifetime extension of interconnected computer-based systems." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-303014.

Full text
Abstract:
Lifetime and obsolescence of components for computer-based systems pose issues for continued usage and maintenance of the systems. This thesis investigates possible alternatives for lifetime extension of a train identification system used in the Stockholm Metro. Other train identification systems available on the market were surveyed to enable a cost comparison between lifetime extension and system replacement. Methods for extending the lifetime of computer-based systems whose components are obsolete were investigated. Since most system documentation was inaccessible, a reverse-engineering approach was chosen. Using acquired electrical schematics and open-source hardware descriptions, a hardware emulator was developed that is directly compatible with the existing hardware. The total amount of resources used indicates that it is possible to extend the system's lifetime at a low cost compared to the cost of system replacement.
Livslängd och åldrande av komponenter för datorbaserade system utgör problem för fortsatt användande och underhåll av systemen. Den här avhandlingen undersöker möjliga alternativ för livstidsförlängning av ett tågidentifieringssystem som används i Stockholms tunnelbana. Efterforskningar av andra tågidentifieringssystem tillgängliga på marknaden genomfördes för att möjliggöra en kostnadsjämförelse mellan livstidsförlängning och systemutbyte. Metoder för förlängning av livslängd av datorbaserade system, där komponenter är föråldrade, undersöktes. Då stora delar av systemdokumentationen inte var tillgänglig valdes baklängesutveckling som strategi. Genom användande av förvärvade elscheman och öppen-källkod hårdvarubeskrivningar kunde en hårdvaruemulator utvecklas, vilken är direkt kompatibel med befintlig hårdvara. Den totala resursanvändningen indikerar att det är möjligt att förlänga systemets livslängd till en låg kostnad, jämfört med kostnaden för ett systembyte.
21

Graham, Matthew R. "Extensions in model-based system analysis." Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2007. http://wwwlib.umi.com/cr/ucsd/fullcit?p3273192.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2007.
Title from first page of PDF file (viewed August 31, 2007). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 116-123).
22

Cochran, Graham R. "Ohio State University Extension Competency Study: Developing a Competency Model for a 21st Century Extension Organization." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1243620503.

Full text
23

Joshi, Laxman. "Incorporating farmers' knowledge in the planning of interdisciplinary research and extension." Thesis, Bangor University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364125.

Full text
24

Ennaoui, Karima. "Computational aspects of infinite automata simulation and closure system related issues." Thesis, Université Clermont Auvergne‎ (2017-2020), 2017. http://www.theses.fr/2017CLFAC031/document.

Full text
Abstract:
La thèse est consacrée à des problématiques d’algorithmique et de complexité sur deux sujets. Le premier sujet s’intéresse à la composition comportementale des services web. Ce problème a été réduit à la simulation d’un automate par le produit fermé d’un ensemble d’automates. La thèse étudie dans sa première partie la complexité de ce problème en considérant deux paramètres : le nombre des instances considéré de chaque service et la présence des états hybrides : état à la fois intermédiaire et final dans un automate. Le second sujet porte sur les systèmes de fermeture et s’intéresse au calcul de l’extension maximale d’un système de fermeture ainsi qu’à l’énumération des clefs candidates d’une base implicative. On donne un algorithme incrémental polynomial qui génère l’extension maximale d’un treillis codé par une relation binaire. Puis, la notion de key-ideal est définie, en prouvant que leur énumération est équivalente à l’énumération des clefs candidates. Ensuite, on donne un algorithme qui permet de générer les key-ideal minimaux en temps incrémental polynomial et les key-ideal non minimaux en délai polynomial
This thesis investigates complexity and computational issues in two parts. The first concerns an issue related to the web service composition problem: deciding whether the behaviour of a web service can be composed out of an existing repository of web services. This question has been reduced to deciding whether a finite automaton is simulated by the product closure of a set of automata. We study the complexity of this problem considering two parameters: the number of instances considered in the composition and the presence of so-called hybrid states (states that are both intermediate and final). The second part concerns closure systems and two related issues. Maximal extension of a closure system: we give an incremental polynomial algorithm that computes a lattice's maximal extension when the input is a binary relation. Candidate key enumeration: we introduce the notion of key-ideal sets and prove that their enumeration is equivalent to candidate key enumeration. We then give an efficient algorithm that generates all non-minimal key-ideal sets with polynomial delay and all minimal ones in incremental polynomial time.
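As a small side illustration of the closure-system vocabulary used above (a generic sketch, not the algorithms of the thesis; all names are illustrative), the closure of an attribute set under a set of implications, the basic step behind candidate-key computation, can be computed as follows:

    def closure(attributes, implications):
        """Closure of a set of attributes under implications given as (premise, conclusion) pairs."""
        closed = set(attributes)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in implications:
                if premise <= closed and not conclusion <= closed:
                    closed |= conclusion
                    changed = True
        return closed

    def is_candidate_key(candidate, implications, universe):
        """A candidate key is a minimal set whose closure is the whole attribute universe."""
        if closure(candidate, implications) != universe:
            return False
        return all(closure(candidate - {a}, implications) != universe for a in candidate)

    # Example: implications over attributes {a, b, c, d}
    imps = [({"a"}, {"b"}), ({"b", "c"}, {"d"})]
    print(closure({"a", "c"}, imps))                                  # {'a', 'b', 'c', 'd'}
    print(is_candidate_key({"a", "c"}, imps, {"a", "b", "c", "d"}))   # True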
25

Corney, Diane. "Implementation of object-oriented languages based on type extension." Thesis, Queensland University of Technology, 1997.

Find full text
26

Monge, Hernández Carmen Lidia. "La universidad latinoamericana en la sociedad. Análisis de la relación entre universidad y comunidad desde el enfoque de capacidades para el desarrollo humano." Doctoral thesis, Universitat Politècnica de València, 2021. http://hdl.handle.net/10251/166794.

Full text
Abstract:
[ES] La presente tesis explora dos experiencias de extensión universitaria latinoamericanas desde el enfoque de las capacidades para el desarrollo humano, las cuales son: la Universidad Nacional, Costa Rica y la Universidad Nacional de Rosario, Argentina. Nuestro objetivo es comprender la contribución de tales prácticas para transitar hacia una extensión transformadora. A partir de una aproximación de corte cualitativo, identificamos las capacidades más valoradas por las personas participantes (estudiantes y personas de las organizaciones sociales y comunitarias), así como los recursos institucionales y los factores de conversión que intervienen en tales prácticas extensionistas. Desde una perspectiva crítica de la extensión universitaria y sostenida en los paradigmas constructivista e interpretativista, elegimos el estudio de caso, buscando la comprensión de la realidad y del significado de la experiencia para las personas participantes. En el campo metodológico, se desplegaron diferentes métodos cualitativos, tales como la revisión documental, observación participante, entrevistas y grupos de discusión. En cuanto a los resultados, destacamos la recuperación y sistematización de literatura latinoamericana relacionada con la extensión universitaria, reconociendo la fortaleza de la cultura extensionista para una formación integral, así como su contribución social al desarrollo humano sostenible. Por otro lado, se pone en valor las virtudes metodológicas del enfoque de las capacidades en la educación superior que permitió el diseño del primer índice latinoamericano que recoge las capacidades individuales y colectivas más valoradas por los diferentes actores de la extensión universitaria, así como los principales recursos y factores de conversión que posibilitan la expansión de tales capacidades. El índice aquí presentado agrupa seis capacidades estudiantiles ampliadas durante la experiencia extensionista, que evidencian el enfoque integral y transformador de tales prácticas en el ámbito personal, comunitario y profesional, que se encuentra alineado con los valores del desarrollo humano y el ideario de la Reforma de Córdoba. Por otro lado, desde los contextos locales, se destacó la expansión de tres capacidades individuales y cinco colectivas. A partir de tal análisis, se logró visibilizar la confluencia de diferentes factores (personales, sociales y ambientales) en la expansión de esas capacidades, así como proponer transformaciones universitarias y educativas necesarias para transitar hacia una extensión transformadora. Por último, esta investigación aporta una serie de valores, principios y fines para alcanzar una extensión transformadora alineada con el desarrollo humano sostenible. Para promover esta transición, se requiere que las universidades públicas latinoamericanas impulsen y faciliten procesos de co-diseño que promuevan la creación y actualización de normativa institucional y el fomento de una cultura y gestión institucional dirigidas a potenciar la extensión transformadora.
[CA] La present tesi explora dues experiències d'extensió universitària llatinoamericanes des de l'enfocament de les capacitats per al desenvolupament humà, les quals són: la Universitat Nacional, Costa Rica i la Universitat Nacional de Rosario, Argentina. El nostre objectiu és comprendre la contribució d'aquestes pràctiques per transitar cap a una extensió transformadora. A partir d'una aproximació de tall qualitatiu, identifiquem les capacitats més valorades per les persones participants (estudiants i persones de les organitzacions socials i comunitàries), així com els recursos institucionals i els factors de conversió que intervenen en aquestes pràctiques extensionistes. Des d'una perspectiva crítica de l'extensió universitària i sostinguda en els paradigmes constructivista i interpretativista, triem l'estudi de cas, buscant la comprensió de la realitat i del significat de l'experiència per a les persones participants. En el camp metodològic, es van desplegar diferents mètodes qualitatius, com ara la revisió documental, l'observació participant, les entrevistes i els grups de discussió. Pel que fa als resultats, destaquem la recuperació i sistematització de literatura llatinoamericana relacionada amb l'extensió universitària, reconeixent la fortalesa de la cultura extensionista per a una formació integral, així com la seva contribució social al desenvolupament humà sostenible. D'altra banda, es posa en valor les virtuts metodològiques de l'enfocament de les capacitats en l'educació superior que va permetre el disseny del primer índex llatinoamericà que recull les capacitats individuals i col·lectives més valorades pels diferents actors de l'extensió universitària, així com els principals recursos i factors de conversió que possibiliten l'expansió d'aquestes capacitats. L'índex ací presentat agrupa sis capacitats estudiantils ampliades durant l'experiència extensionista, que evidencien l'enfocament integral i transformador de tals pràctiques en l'àmbit personal, comunitari i professional, que es troba alineat amb els valors del desenvolupament humà i l'ideari de la reforma de Còrdova. D'altra banda, des dels contextos locals, es va destacar l'expansió de tres capacitats individuals i cinc col·lectives. A partir d'aquest anàlisi, es va aconseguir visualitzar la confluència de diferents factors (personals, socials i ambientals) en l'expansió d'aquestes capacitats, així com proposar transformacions universitàries i educatives necessàries per transitar cap a una extensió transformadora. Finalment, aquesta investigació aporta una sèrie de valors, principis i fins per assolir una extensió transformadora alineada amb el desenvolupament humà sostenible. Per promoure aquesta transició, es requereix que les universitats públiques llatinoamericanes impulsin i facilitin processos de co-disseny que promoguin la creació i actualització de normativa institucional i el foment d'una cultura i gestió institucional dirigides a potenciar l'extensió transformadora.
[EN] This thesis explores two Latin American university extension experiences from the perspective of human development capabilities: Universidad Nacional, Costa Rica and Universidad Nacional de Rosario, Argentina. Its objective is to understand the contribution of such practices to move towards a transformative extension. From a qualitative approach, the participants' (students and people from social and community organizations) most valued capabilities were identified, along with the institutional resources and conversion factors involved in such extension practices. From a critical perspective of university extension and sustained in the constructivist and interpretivist paradigms, the case studies to understand the reality and meaning of the participants' experience were chosen. Different qualitative methods were deployed in the methodological field, such as documentary review, participant observation, interviews, and discussion groups. As for the results, the recovery and systematization of Latin American literature related to university extension were highlighted, recognizing the extensionist culture's strength for an integral formation and its social contribution to sustainable human development. On the other hand, there is an emphasis on the methodological virtues of the capabilities approach in higher education that allowed the design of the first Latin American index that gathers the individual and collective capabilities most valued by the different actors of university extension, as well as the primary resources and conversion factors that make possible the expansion of such capabilities. The index presented here brings together six student capabilities that were expanded during the extension experience, which show the comprehensive and transformative approach of such practices in the personal, community and professional fields, and aligned with the values of human development and the ideas of the Reforma de Córdoba. On the other hand, the expansion of three individual and five collective capabilities was highlighted from the local contexts. Based on this analysis, the confluence of different factors (personal, social and environmental) in expanding these capabilities was made visible and proposing the university and educational transformations needed to move towards a transformative extension. Finally, this research provides a series of values, principles and goals to achieve a transformative extension aligned with sustainable human development. To promote this transition, Latin American public universities are required to encourage and facilitate co-design processes that promote the creation and updating of institutional regulations and the promotion of institutional culture and management to enhance the transfer of knowledge and skills to the public sector.
Monge Hernández, CL. (2021). La universidad latinoamericana en la sociedad. Análisis de la relación entre universidad y comunidad desde el enfoque de capacidades para el desarrollo humano [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/166794
TESIS
APA, Harvard, Vancouver, ISO, and other styles
27

Turan, Bora. "Analysis for a trusted computing base extension prototype board." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2000. http://handle.dtic.mil/100.2/ADA377677.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering) Naval Postgraduate School, March 2000.
Thesis advisor(s): Irvine, Cynthia E. "March 2000." Includes bibliographical references (p. 101-102). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
28

Apel, Joachim. "Computational Ideal Theory in Finitely Generated Extension Rings." Universität Leipzig, 1998. https://ul.qucosa.de/id/qucosa%3A34526.

Full text
Abstract:
One of the most general extensions of Buchberger's theory of Gröbner bases is the concept of graded structures due to Robbiano and Mora. But in order to obtain algorithmic solutions for the computation of Gröbner bases, it needs additional computability assumptions. In this paper we introduce natural graded structures of finitely generated extension rings and present subclasses of such structures which allow uniform algorithmic solutions of the basic problems in the associated graded ring and, hence, of the computation of Gröbner bases with respect to the graded structure. Among the considered rings there are many of the known generalizations. But, in addition, a wide class of rings appears for the first time in the context of algorithmic Gröbner basis computations. Finally, we discuss which conditions could be changed in order to find further effective Gröbner structures, and it turns out that the most interesting constructive instances of graded structures are covered by our results.
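For orientation, the following is a minimal sketch of an ordinary commutative Gröbner basis computation using SymPy's `groebner` routine; the ideal generators and the lexicographic term order are illustrative assumptions and are not taken from the thesis, whose graded structures generalize this basic setting to finitely generated extension rings.

```python
# Minimal commutative Groebner basis sketch with SymPy, for orientation only;
# the generators and the term order below are illustrative assumptions.
from sympy import groebner, symbols

x, y, z = symbols('x y z')
F = [x**2 + y*z - 1, x*y - z, y**2 + x*z]   # assumed example ideal generators

G = groebner(F, x, y, z, order='lex')
print(G)                                    # Groebner basis w.r.t. lex order
print(G.contains(x**2 + y*z - 1))           # ideal membership via reduction -> True
```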
APA, Harvard, Vancouver, ISO, and other styles
29

Andersson, Anders. "Extensions for Distributed Moving Base Driving Simulators." Licentiate thesis, Linköpings universitet, Institutionen för datavetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-136146.

Full text
Abstract:
Modern vehicles are complex systems. Different design stages for such a complex system include evaluation using models and submodels, hardware-in-the-loop systems and complete vehicles. Once a vehicle is delivered to the market, evaluation continues by the public. One kind of tool that can be used during many stages of a vehicle lifecycle is driving simulators. The use of driving simulators with a human driver is commonly focused on driver behavior. In a high fidelity moving base driving simulator it is possible to provide realistic and repetitive driving situations using distinctive features such as: physical modelling of the driven vehicle, a moving base, a physical cabin interface and an audio and visual representation of the driving environment. A desired but difficult goal to achieve using a moving base driving simulator is to have behavioral validity. In other words, "A driver in a moving base driving simulator should have the same driving behavior as he or she would have during the same driving task in a real vehicle." In this thesis the focus is on high fidelity moving base driving simulators. The main target is to improve behavioral validity, or to maintain it while adding complexity to the simulator. One main assumption in this thesis is that systems closer to the final product provide better accuracy and are perceived better if properly integrated. Thus, the approach in this thesis is to try to ease the incorporation of such systems using combinations of the methods hardware-in-the-loop and distributed simulation. Hardware-in-the-loop is a method where hardware is interfaced into a software-controlled environment/simulation. Distributed simulation is a method where parts of a simulation at physically different locations are connected together. For some simulator laboratories distributed simulation is the only feasible option since some hardware cannot be moved in an easy way. Results presented in this thesis show that a complete vehicle or hardware-in-the-loop test laboratory can successfully be connected to a moving base driving simulator. Further, it is demonstrated that using a framework for distributed simulation eases communication and integration due to standardized interfaces. One identified potential problem is complexity in interface wrappers when integrating hardware-in-the-loop in a distributed simulation framework. From this aspect, it is important to consider the model design and the intersections between software and hardware models. Another important issue discussed is the increased delay in overhead time when using a framework for distributed simulation.
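As a purely illustrative sketch of the kind of standardized federate interface and lockstep data exchange that a distributed-simulation framework provides, the toy below couples a vehicle model and a motion-cueing module in a fixed-step loop. All class names, the shared "bus", and the toy dynamics are assumptions for this sketch and do not reflect the framework or models used in the thesis.

```python
# Illustrative fixed-step co-simulation loop with a standardized federate
# interface; names, the lockstep scheme and the toy dynamics are assumptions,
# not the distributed-simulation framework used in the thesis.
from abc import ABC, abstractmethod

class Federate(ABC):
    @abstractmethod
    def step(self, t: float, dt: float, inputs: dict) -> dict:
        """Advance the local model by dt and return published outputs."""

class VehicleModel(Federate):
    def __init__(self):
        self.speed = 0.0
    def step(self, t, dt, inputs):
        throttle = inputs.get("throttle", 0.0)
        self.speed += dt * (5.0 * throttle - 0.1 * self.speed)  # toy dynamics
        return {"speed": self.speed}

class MotionCueing(Federate):
    def step(self, t, dt, inputs):
        # Toy cueing: map vehicle speed to a platform pitch command
        return {"platform_pitch": 0.02 * inputs.get("speed", 0.0)}

def run(federates, dt=0.01, t_end=1.0):
    bus, t = {"throttle": 0.5}, 0.0
    while t < t_end:
        for f in federates:          # lockstep exchange over a shared "bus"
            bus.update(f.step(t, dt, bus))
        t += dt
    return bus

print(run([VehicleModel(), MotionCueing()]))
```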
APA, Harvard, Vancouver, ISO, and other styles
30

Weiss, Christian. "Games with fuzzy coalitions: concepts based on the Choquet extension." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=968578438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Liu, Enze. "Optimization and Application extension for a Bloom filter based sequence classifier." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142011.

Full text
Abstract:
Nowadays, with the development of sequencing technologies, more sequencing reads are generated and involved in genomics research, which leads to a critical problem: how do people process these data rapidly and accurately? A data structure named the Bloom filter, initially developed in 1970, has been reused and applied more and more in the bioinformatics field for its relatively high storage efficiency and fast access speed. As an application of the Bloom filter technique, the FACS [1] system is a rapid and accurate sequence classifier. However, several bottlenecks have restricted its usage; for instance, it supports neither large query files nor fastq format files. Hence, in this report an improved FACS system will be introduced, which includes a hashing system for FACS; support for large (>2GB) and compressed query files; support for fastq files; and a more user-friendly FACS system. Moreover, the new parallelized FACS system (FACS 2.0) will be introduced and evaluated to prove that FACS 2.0 is at least 10 times faster and equally accurate compared with the original FACS system, Fastq_screen [7] and Deconseq [8] when performing sequence decontamination. Last but not least, the possibility of developing an adapter trimmer based on the FACS system will also be analyzed in this report. Key words: Bloom filter; Decontamination; Adapter trimming; Parallelization; Large query file (compressed and normal) supported
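The following is a minimal sketch of the Bloom-filter membership queries that underlie FACS-style k-mer classification; the filter size, the salted-hash scheme, the k-mer length and the match cutoff mentioned in the comments are assumptions for illustration, not FACS's actual parameters.

```python
# Minimal Bloom-filter sketch illustrating the kind of k-mer membership
# queries behind FACS-style sequence classification; filter size, hash
# scheme and match threshold are assumptions, not FACS's parameters.
import hashlib

class BloomFilter:
    def __init__(self, size_bits=1 << 20, n_hashes=4):
        self.size, self.k = size_bits, n_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, item: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def kmers(seq: str, k: int = 21):
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

# Build a filter from a toy "reference" and score a read by its k-mer hit ratio.
reference = "ACGTACGTGGCTTACGATCGATCGGATCCAGTACGTTAGC" * 3
bf = BloomFilter()
for km in kmers(reference):
    bf.add(km)

read = reference[10:45]                 # a read drawn from the reference
hits = [km in bf for km in kmers(read)]
print(sum(hits) / len(hits))            # call it a match above a cutoff, e.g. 0.8
```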
APA, Harvard, Vancouver, ISO, and other styles
32

Cao, Jun. "A Random-Linear-Extension Test Based on Classic Nonparametric Procedures." Diss., Temple University Libraries, 2009. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/48271.

Full text
Abstract:
Statistics
Ph.D.
Most distribution free nonparametric methods depend on the ranks or orderings of the individual observations. This dissertation develops methods for situations in which only partial information about the ranks is available. A random-linear-extension exact test and an empirical version of the random-linear-extension test are proposed as a new way to compare groups of data with partial orders. The basic computation procedure is to generate all possible permutations constrained by the known partial order using a randomization method similar in nature to multiple imputation. This random-linear-extension test can be simply implemented using a Gibbs Sampler to generate a random sample of complete orderings. Given a complete ordering, standard nonparametric methods, such as the Wilcoxon rank-sum test, can be applied, and the corresponding test statistics and rejection regions can be calculated. As a direct result of our new method, a single p-value is replaced by a distribution of p-values. This is related to some recent work on Fuzzy P-values, which were introduced by Geyer and Meeden in Statistical Science in 2005. A special case is to compare two groups when only two objects can be compared at a time. Three matching schemes (random matching, ordered matching and reverse matching) are introduced and compared with one another. The results described in this dissertation provide some surprising insights into the statistical information in partial orderings.
Temple University--Theses
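As a sketch of the procedure described in the abstract above: sample complete orderings consistent with a known partial order and apply a standard rank-sum test to each, yielding a distribution of p-values rather than a single one. The toy items, groups and partial order are assumptions, and random topological sorting is used here for simplicity instead of the Gibbs sampler mentioned in the abstract (it is not exactly uniform over all linear extensions).

```python
# Sketch of a random-linear-extension test: sample complete orderings
# consistent with a partial order, apply a rank-sum test to each, and
# report the resulting distribution of p-values.  Toy data; random
# topological sorting stands in for the Gibbs sampler of the abstract.
import random
from scipy.stats import ranksums

group = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}   # items and groups
edges = [(0, 3), (1, 4), (0, 5), (2, 4)]                    # (a, b): a precedes b

def random_linear_extension(n, edges):
    preds = {v: {a for a, b in edges if b == v} for v in range(n)}
    placed, order = set(), []
    while len(order) < n:
        ready = [v for v in range(n) if v not in placed and preds[v] <= placed]
        v = random.choice(ready)   # random choice among currently minimal items
        order.append(v)
        placed.add(v)
    return order

pvalues = []
for _ in range(1000):
    order = random_linear_extension(6, edges)
    ranks = {item: r for r, item in enumerate(order, start=1)}
    a = [ranks[i] for i in group if group[i] == "A"]
    b = [ranks[i] for i in group if group[i] == "B"]
    pvalues.append(ranksums(a, b).pvalue)

print(min(pvalues), sum(pvalues) / len(pvalues), max(pvalues))
```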
APA, Harvard, Vancouver, ISO, and other styles
33

Walker, Daniel Harmen. "A knowledge-based systems approach to agroforestry research and extension." Thesis, Bangor University, 1994. https://research.bangor.ac.uk/portal/en/theses/a-knowledgebased-systems-approach-to-agroforestry-research-and-extension(01899f22-8cf4-42be-897e-00e904d186f3).html.

Full text
Abstract:
Agroforestry development programmes frequently rely on knowledge from a number of different sources. In particular, there is a growing recognition amongst development professionals of the value of augmenting partial scientific and professional understanding with the detailed knowledge held by local people. Taking advantage of the complementarity of local, scientific and professional knowledge demands the development of effective mechanisms for accessing, recording and evaluating knowledge on specified topics from each of these sources. The research described in this thesis developed a methodology for the acquisition, synthesis and storage of knowledge. The defining feature of the approach is the explicit representation of knowledge. This is achieved through the application of knowledge-based systems techniques. AKT2 (Agroforestry Knowledge Toolkit), a software toolkit developed in Prolog, an artificial intelligence programming language, provides the user with an environment for the creation, storage and exploration of large knowledge bases containing knowledge on a specified topic from a range of sources. The use of diagramming techniques, familiar to ecologists and resource managers through systems analysis, provides an intuitive and robust interface. This knowledge-based system drives incremental knowledge acquisition based on an iterative evaluation of the knowledge bases created. The iterative approach to knowledge acquisition provides a coherent, consistent and comprehensive, and therefore more useful, record of knowledge. Once created, knowledge bases can be maintained and updated as a record of current knowledge. Techniques for the exploration and evaluation of the knowledge base may be useful in: giving research and extension staff access to a concise and flexible record of the current state of knowledge; providing a resource and mechanisms for use in planning and prioritising research objectives; and providing a resource and mechanisms for the generation of extension materials tailored to the needs of particular clients.
APA, Harvard, Vancouver, ISO, and other styles
34

Lee, Joseph Jiazong. "Extensions of Randomization-Based Methods for Causal Inference." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:17463974.

Full text
Abstract:
In randomized experiments, the random assignment of units to treatment groups justifies many of the traditional analysis methods for evaluating causal effects. Specifying subgroups of units for further examination after observing outcomes, however, may partially nullify any advantages of randomized assignment when data are analyzed naively. Some previous statistical literature has treated all post-hoc analyses homogeneously as entirely invalid and thus uninterpretable. Alternative analysis methods and the extent of the validity of such analyses remain largely unstudied. Here Chapter 1 proposes a novel, randomization-based method that generates valid post-hoc subgroup p-values, provided we know exactly how the subgroups were constructed. If we do not know the exact subgrouping procedure, our method may still place helpful bounds on the significance level of estimated effects. Chapter 2 extends the proposed methodology to generate valid posterior predictive p-values for partially post-hoc subgroup analyses, i.e., analyses that compare existing experimental data --- from which a subgroup specification is derived --- to new, subgroup-only data. Both chapters are motivated by pharmaceutical examples in which subgroup analyses played pivotal and controversial roles. Chapter 3 extends our randomization-based methodology to more general randomized experiments with multiple testing and nuisance unknowns. The results are valid familywise tests that are doubly advantageous, in terms of statistical power, over traditional methods. We apply our methods to data from the United States Job Training Partnership Act (JTPA) Study, where our analyses lead to different conclusions regarding the significance of estimated JTPA effects. In all chapters, we investigate the operating characteristics and demonstrate the advantages of our methods through series of simulations.
Statistics
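The building block behind the randomization-based methods summarized in the abstract above is the randomization (permutation) distribution of a test statistic under re-randomization of treatment labels. The following is a minimal Fisher-style sketch for a difference in means under the sharp null of no effect; the toy data are assumptions, and the thesis's post-hoc subgroup and multiple-testing adjustments are not reproduced.

```python
# Minimal Fisher-style randomization test: re-randomize treatment labels and
# compare the observed mean difference to its randomization distribution under
# the sharp null of no effect.  Toy data; subgroup adjustments are not shown.
import random

treated = [7.1, 6.4, 8.0, 7.7, 6.9]
control = [6.2, 5.9, 6.8, 6.1, 6.5]
outcomes = treated + control

def diff_in_means(assignment, outcomes):
    t = [y for y, a in zip(outcomes, assignment) if a]
    c = [y for y, a in zip(outcomes, assignment) if not a]
    return sum(t) / len(t) - sum(c) / len(c)

observed = sum(treated) / len(treated) - sum(control) / len(control)
labels = [True] * len(treated) + [False] * len(control)

reps, extreme = 10000, 0
for _ in range(reps):
    random.shuffle(labels)
    if abs(diff_in_means(labels, outcomes)) >= abs(observed):
        extreme += 1
print("randomization p-value:", extreme / reps)
```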
APA, Harvard, Vancouver, ISO, and other styles
35

Romano, Tara Lynn. "EVALUATION OF AN ASSETS-BASED YOUTH DEVELOPMENT PROGRAM DESIGNED TO PROVIDE UNDERPRIVILEGED YOUTH WITH EDUCATIONAL AND EMPLOYMENT RESOURCES." NCSU, 2001. http://www.lib.ncsu.edu/theses/available/etd-20010711-202352.

Full text
Abstract:

This study was an evaluation of a 4-H Youth Development Program that provided a series of weekend camps and a weeklong summer camp to underprivileged youth. This program attempted to provide the youth with additional knowledge, skills, and aspirations necessary for a successful educational and employment future. This program took place in Carteret County, NC. The major purposes of this study were: (1) to determine whether or not the youths' knowledge, skills, and aspirations increased due to their participation in this program; and (2) to provide recommendations for the program so that the program may increase its effectiveness and possibly be replicated by other counties in North Carolina. A case study research design was used to gather data for this evaluation, with a variety of different data collected from program staff, local schools, parents and guardians of the program participants, and the participants themselves. Surveys, pre- and post-tests, interviews, and observations were the tools used to collect the data. A control group of inactive participants (who had rarely attended program activities) was used as a comparison for the group of youth that were active program participants. An analysis of the data determined if any trends or patterns existed that supported the program's objectives of increasing the youth's knowledge, skills, and aspirations. The major findings of this study were that: (1) the program, while providing some benefits to the children in terms of support and relationship-building, did not completely achieve its objectives; and (2) a number of recommendations, including increased family involvement in the program, could help to improve and possibly achieve the program's initial objectives.

APA, Harvard, Vancouver, ISO, and other styles
36

Cummings, Gregory Aaron. "Defining the knowledge base of our profession: a look at agricultural and extension education in the 21st century." Texas A&M University, 2003. http://hdl.handle.net/1969.1/2279.

Full text
Abstract:
The profession of agricultural and extension education has increased in complexity in response to the demands of the changing field of agriculture and the need for educators who are responsive to those demands. A standardization of the knowledge base of the profession is seen as necessary in light of geographic mobility, the nationwide emphasis on assessment, and the need for a public relations tool that clearly articulates the concepts forming the framework of agricultural and extension education. In this study a panel of experts consisting of agricultural and extension education leaders nationwide responded to open-ended and Likert-type surveys online as part of a Delphi technique to establish the knowledge base for agricultural and extension education. Three rounds of the Delphi technique were used. A minimum of 13 of the 24 panel members were required to respond to each round. Ninety-five statements were initially generated by 16 panel members in response to an open-ended statement in Round I which asked the participants "What are the articulated understandings, skills, and judgments that serve as the foundation of knowledge ('the body') for professionals in agricultural and extension education?" These statements were presented to the panel members in Round II. Two-thirds of the panelists had to "Strongly Agree" or "Agree" with each item for it to be retained for Round III. Based on the responses of 14 panelists in Round II, 67 items were retained for Round III, and one item was added based on panel input. After Round III, three items were eliminated due to lack of two-thirds achievement of "Strongly Agree" and "Agree" ratings by 17 respondents. Thus, 65 statements established the knowledge base of agricultural and extension education in this study. Among the knowledge base are concepts related to traits of effective educators; management issues; environmental impacts on instruction; curriculum development; learner-based contextual, applied pedagogical strategies; leadership development; communications; assessment strategies; community and collegial connections; integration of technology; critical thinking and problem solving; and teaching as a changing process grounded in sound theory.
APA, Harvard, Vancouver, ISO, and other styles
37

Starr, Cynthia Louise. "A graphical extension for Pascal based on the Graphical Kernel System." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/28402.

Full text
Abstract:
The Graphical Kernel System (GKS), the first international standard in the area of computer graphics, was adopted by the International Standards Organization in 1985. The United Kingdom, France, Germany and the United States have also adopted GKS as a national standard. This thesis examines the feasibility of developing a high-level graphical extension to a general-purpose programming language based on the GKS standard. Because GKS was designed as a subroutine system, programming with it is awkward. The subroutine call provides a low-level mechanism for accessing the graphical capabilities standardized by GKS. EZ/GKS is a high-level graphical extension to the Pascal/VS language implementing the functionality found in GKS level 2A. The level of abstraction for graphics programming is elevated in EZ/GKS through the use of abstract graphical data types. Operations on graphical data types are provided by structured graphical assignments, high-level graphical statements, graphical expressions and system-defined functions. Complex user-defined data types may be constructed from any of the predefined graphical data types in the usual manner provided by Pascal. No major syntactic or semantic difficulties were encountered during the design and implementation of EZ/GKS. Thus, it appears that the GKS standard can indeed be elevated successfully to a high-level graphical extension of a general-purpose programming language.
Science, Faculty of
Computer Science, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
38

Gould, Gwyneth (Gwyneth Michelle). "Damage tolerance based life extension of turbine discs; a PFM approach." Dissertation, Mechanical Engineering, Carleton University, Ottawa, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
39

Krejčí, Jana. "MCDM methods based on pairwise comparison matrices and their fuzzy extension." Doctoral thesis, Università degli studi di Trento, 2017. https://hdl.handle.net/11572/369186.

Full text
Abstract:
Methods based on pairwise comparison matrices (PCMs) form a significant part of multi-criteria decision making (MCDM) methods. These methods are based on structuring pairwise comparisons (PCs) of objects from a finite set of objects into a PCM and deriving priorities of objects that represent the relative importance of each object with respect to all other objects in the set. However, the crisp PCMs are not able to capture uncertainty stemming from subjectivity of human thinking and from incompleteness of information about the problem that are often closely related to MCDM problems. That is why the fuzzy extension of methods based on PCMs has been of great interest. In order to derive fuzzy priorities of objects from a fuzzy PCM (FPCM), standard fuzzy arithmetic is usually applied to the fuzzy extension of the methods originally developed for crisp PCMs. Fuzzy extension of the methods based on PCMs usually consists in simply replacing the crisp PCs in the given model by fuzzy PCs and applying standard fuzzy arithmetic to obtain the desired fuzzy priorities. However, such an approach fails in properly handling uncertainty of preference information contained in the FPCM. Namely, reciprocity of the related PCs of objects in an FPCM and invariance of the given method under permutation of objects are violated when standard fuzzy arithmetic is applied to the fuzzy extension. This leads to distortion of the preference information contained in the FPCM and consequently to false results. Thus, the first research question of the thesis is: "Based on an FPCM of objects, how should fuzzy priorities of these objects be determined so that they reflect properly all preference information available in the FPCM?" This research question is answered by introducing an appropriate fuzzy extension of methods originally developed for crisp PCMs, that is, a fuzzy extension that does not violate reciprocity of the related PCs and invariance under permutation of objects, and that does not lead to a redundant increase of uncertainty of the resulting fuzzy priorities of objects. Fuzzy extension of three different types of PCMs is examined in this thesis: multiplicative PCMs, additive PCMs with additive representation, and additive PCMs with multiplicative representation. In particular, construction of PCMs, verifying consistency, and deriving priorities of objects from PCMs are studied in detail for each type of these PCMs. First, well-known and in practice most often applied methods based on crisp PCMs are reviewed. Afterwards, fuzzy extensions of these methods proposed in the literature are reviewed in detail and their drawbacks regarding the violation of reciprocity of the related PCs and of invariance under permutation of objects are pointed out. It is shown that these drawbacks can be overcome by properly applying constrained fuzzy arithmetic instead of standard fuzzy arithmetic to the computations. In particular, we always have to look at an FPCM as a set of PCMs with different degrees of membership to the FPCM, i.e. we always have to consider only PCs that are mutually reciprocal. Constrained fuzzy arithmetic allows us to impose the reciprocity of the related PCs as a constraint on arithmetic operations with fuzzy numbers, and its appropriate application also guarantees invariance of the methods under permutation of objects.
Finally, new fuzzy extensions of the methods are proposed based on constrained fuzzy arithmetic and it is proved that these methods do not violate the reciprocity of the related PCs and are invariant under permutation of objects. Because of these desirable properties, fuzzy priorities of objects obtained by the methods proposed in this thesis reflect the preference information contained in fuzzy PCMs better in comparison to the fuzzy priorities obtained by the methods based on standard fuzzy arithmetic. Besides the inability to capture uncertainty, methods based on PCMs are also not able to cope with situations where it is not possible or reasonable to obtain complete preference information from DMs. This problem occurs especially in situations involving large-dimensional PCMs. When dealing with incomplete large-dimensional PCMs, compromise between reducing the number of PCs required from the DM and obtaining reasonable priorities of objects is of paramount importance. This leads to the second research question: "How can the amount of preference information required from the DM in a large-dimensional PCM be reduced while still obtaining comparable priorities of objects?" This research question is answered by introducing an efficient two-phase method. Specifically, in the first phase, an interactive algorithm based on the weak-consistency condition is introduced for partially filling an incomplete PCM. This algorithm is designed in such a way that it minimizes the number of PCs required from the DM and provides a sufficient amount of preference information at the same time. The weak-consistency condition allows for providing ranges of possible intensities of preference for every missing PC in the incomplete PCM. Thus, at the end of the first phase, a PCM containing intervals for all PCs that were not provided by the DM is obtained. Afterward, in the second phase, the methods for obtaining fuzzy priorities of objects from fuzzy PCMs proposed in this thesis within the answer to the first research question are applied to derive interval priorities of objects from this incomplete PCM. The obtained interval priorities cover all weakly consistent completions of the incomplete PCM and are very narrow. The performance of the method is illustrated by a real-life case study and by simulations that demonstrate the ability of the algorithm to reduce the number of PCs required from the DM in PCMs of dimension 15 and greater by more than 60% on average while obtaining interval priorities comparable with the priorities obtainable from the hypothetical complete PCMs.
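For reference, the following sketches the row geometric mean method for deriving priorities from a crisp multiplicative PCM, one of the standard crisp methods that the thesis extends; the example matrix is an assumption, and the fuzzy extension with constrained fuzzy arithmetic developed in the thesis is not reproduced here.

```python
# Sketch: priorities from a crisp multiplicative pairwise comparison matrix
# via the row geometric mean method -- one of the standard crisp methods the
# thesis extends to fuzzy PCMs with constrained fuzzy arithmetic (not shown).
# The example matrix is an assumption for illustration.
import math

# Reciprocal 3x3 PCM: A[i][j] = preference intensity of object i over object j.
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

def geometric_mean_priorities(A):
    gm = [math.prod(row) ** (1.0 / len(row)) for row in A]
    s = sum(gm)
    return [g / s for g in gm]

w = geometric_mean_priorities(A)
print([round(x, 3) for x in w])    # normalized priority vector

# Reciprocity check: A[i][j] * A[j][i] == 1 for all pairs.
print(all(abs(A[i][j] * A[j][i] - 1.0) < 1e-9
          for i in range(3) for j in range(3)))
```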
APA, Harvard, Vancouver, ISO, and other styles
40

Krejčí, Jana. "MCDM methods based on pairwise comparison matrices and their fuzzy extension." Doctoral thesis, University of Trento, 2017. http://eprints-phd.biblio.unitn.it/2009/3/Thesis-KrejciJ.pdf.

Full text
Abstract:
Methods based on pairwise comparison matrices (PCMs) form a significant part of multi-criteria decision making (MCDM) methods. These methods are based on structuring pairwise comparisons (PCs) of objects from a finite set of objects into a PCM and deriving priorities of objects that represent the relative importance of each object with respect to all other objects in the set. However, the crisp PCMs are not able to capture uncertainty stemming from subjectivity of human thinking and from incompleteness of information about the problem that are often closely related to MCDM problems. That is why the fuzzy extension of methods based on PCMs has been of great interest. In order to derive fuzzy priorities of objects from a fuzzy PCM (FPCM), standard fuzzy arithmetic is usually applied to the fuzzy extension of the methods originally developed for crisp PCMs. Fuzzy extension of the methods based on PCMs usually consists in simply replacing the crisp PCs in the given model by fuzzy PCs and applying standard fuzzy arithmetic to obtain the desired fuzzy priorities. However, such an approach fails in properly handling uncertainty of preference information contained in the FPCM. Namely, reciprocity of the related PCs of objects in an FPCM and invariance of the given method under permutation of objects are violated when standard fuzzy arithmetic is applied to the fuzzy extension. This leads to distortion of the preference information contained in the FPCM and consequently to false results. Thus, the first research question of the thesis is: "Based on an FPCM of objects, how should fuzzy priorities of these objects be determined so that they reflect properly all preference information available in the FPCM?" This research question is answered by introducing an appropriate fuzzy extension of methods originally developed for crisp PCMs, that is, a fuzzy extension that does not violate reciprocity of the related PCs and invariance under permutation of objects, and that does not lead to a redundant increase of uncertainty of the resulting fuzzy priorities of objects. Fuzzy extension of three different types of PCMs is examined in this thesis: multiplicative PCMs, additive PCMs with additive representation, and additive PCMs with multiplicative representation. In particular, construction of PCMs, verifying consistency, and deriving priorities of objects from PCMs are studied in detail for each type of these PCMs. First, well-known and in practice most often applied methods based on crisp PCMs are reviewed. Afterwards, fuzzy extensions of these methods proposed in the literature are reviewed in detail and their drawbacks regarding the violation of reciprocity of the related PCs and of invariance under permutation of objects are pointed out. It is shown that these drawbacks can be overcome by properly applying constrained fuzzy arithmetic instead of standard fuzzy arithmetic to the computations. In particular, we always have to look at an FPCM as a set of PCMs with different degrees of membership to the FPCM, i.e. we always have to consider only PCs that are mutually reciprocal. Constrained fuzzy arithmetic allows us to impose the reciprocity of the related PCs as a constraint on arithmetic operations with fuzzy numbers, and its appropriate application also guarantees invariance of the methods under permutation of objects.
Finally, new fuzzy extensions of the methods are proposed based on constrained fuzzy arithmetic and it is proved that these methods do not violate the reciprocity of the related PCs and are invariant under permutation of objects. Because of these desirable properties, fuzzy priorities of objects obtained by the methods proposed in this thesis reflect the preference information contained in fuzzy PCMs better in comparison to the fuzzy priorities obtained by the methods based on standard fuzzy arithmetic. Besides the inability to capture uncertainty, methods based on PCMs are also not able to cope with situations where it is not possible or reasonable to obtain complete preference information from DMs. This problem occurs especially in situations involving large-dimensional PCMs. When dealing with incomplete large-dimensional PCMs, compromise between reducing the number of PCs required from the DM and obtaining reasonable priorities of objects is of paramount importance. This leads to the second research question: "How can the amount of preference information required from the DM in a large-dimensional PCM be reduced while still obtaining comparable priorities of objects?" This research question is answered by introducing an efficient two-phase method. Specifically, in the first phase, an interactive algorithm based on the weak-consistency condition is introduced for partially filling an incomplete PCM. This algorithm is designed in such a way that it minimizes the number of PCs required from the DM and provides a sufficient amount of preference information at the same time. The weak-consistency condition allows for providing ranges of possible intensities of preference for every missing PC in the incomplete PCM. Thus, at the end of the first phase, a PCM containing intervals for all PCs that were not provided by the DM is obtained. Afterward, in the second phase, the methods for obtaining fuzzy priorities of objects from fuzzy PCMs proposed in this thesis within the answer to the first research question are applied to derive interval priorities of objects from this incomplete PCM. The obtained interval priorities cover all weakly consistent completions of the incomplete PCM and are very narrow. The performance of the method is illustrated by a real-life case study and by simulations that demonstrate the ability of the algorithm to reduce the number of PCs required from the DM in PCMs of dimension 15 and greater by more than 60% on average while obtaining interval priorities comparable with the priorities obtainable from the hypothetical complete PCMs.
APA, Harvard, Vancouver, ISO, and other styles
41

Kinney, Kimberlee Ann. "Exploration of Facilitators, Barriers and Opportunities for Faith-Based Organizations to Implement Nutrition and Physical Activity Programs and Partner with Virginia's Supplemental Nutrition Assistance Program Education." Diss., Virginia Tech, 2018. http://hdl.handle.net/10919/82927.

Full text
Abstract:
Poor diet and physical inactivity contribute to excessive weight and related diseases in the United States. Given the increasing rates of adult overweight and obesity among Americans, there is a need to develop and implement effective prevention and treatment strategies to decrease the public health burden of obesity-related chronic diseases. Faith-based organizations (FBOs) provide a unique setting and partnership opportunity for delivering evidence-based programs into communities that can be sustained. The federally funded Virginia Supplemental Nutrition Assistance Program Education (SNAP-Ed) delivered through Virginia Tech's Cooperative Extension and Family Nutrition Program, utilizes evidence-based programs to promote healthy eating and physical activity among limited income populations. The Virginia SNAP-Ed Volunteer Led Nutrition Education Initiative uses SNAP-Ed agents and educators to reach limited income populations by training and coordinating volunteers from communities to deliver nutrition education programs. However, these partnerships and training initiatives have been underutilized in FBOs across Virginia. This dissertation research describes four studies conducted to better understand how to facilitate collaborative partnerships and health-promotion programming initiatives between academic/extension educators and FBOs to build capacity and inform future initiatives within VCE. Study one conducted a literature review to examine FBO characteristics and multi-level strategies used to implement nutrition and physical activity interventions. Study two examined VCE SNAP-Ed agents' perspectives on FBO partnerships to deliver health programming. Study three assessed three FBOs and their member health needs to identify policies, systems and environments to support healthy lifestyles. Study four examined the acceptability of Faithful Families, a faith-based nutrition and physical activity program delivered in a rural church, and explored ways to build capacity for program sustainability through input from stakeholder partners. Results across studies yielded information which helped to identify and prioritize strategies for promoting FBO partnerships within VCE and helped to generate questions that merit further investigation to identify specific culturally relevant strategies for promoting health in FBOs. This exploratory body of research contributes to the field by describing relevant opportunities for academic sectors to partner with FBOs using participatory approaches to increase partnership readiness and build capacity to carry out and sustain health programs within faith settings.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
42

Er, Ngurah Agus Sanjaya. "Techniques avancées pour l'extraction d'information par l'exemple." Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0060.

Full text
Abstract:
Searching for information on the Web is generally achieved by constructing a query from a set of keywords and submitting it to a search engine. This traditional method requires the user to have a relatively good knowledge of the domain of the targeted information in order to come up with the correct keywords. The search results, in the form of Web pages, are ranked based on the relevancy of each Web page to the given keywords. For the same set of keywords, the Web pages returned by the search engine would be ranked differently depending on the user. Moreover, finding specific information such as a country and its capital city would require the user to browse through all the returned documents and read their content manually. This is not only time consuming but also requires a great deal of effort. We address in this thesis an alternative method of searching for information, namely by giving examples of the information in question. First, we try to improve the accuracy of search-by-example systems by expanding the given examples syntactically. Next, we use the truth discovery paradigm to rank the returned query results. Finally, we investigate the possibility of expanding the examples semantically by labelling each group of elements of the examples.
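As an illustration of the truth discovery paradigm mentioned above, the following toy iteration alternates between source trustworthiness and claim confidence for a fixed number of rounds; the claims, the update rule and the iteration count are assumptions for this sketch and do not correspond to the thesis's specific algorithm.

```python
# Generic truth-discovery iteration (source trust <-> claim confidence),
# sketched to illustrate the ranking paradigm mentioned in the abstract;
# the toy claims and the simple normalized update rule are assumptions.
claims = {  # source -> claimed answer for one query (toy data)
    "s1": "Paris", "s2": "Paris", "s3": "Lyon", "s4": "Paris", "s5": "Lyon",
}

trust = {s: 0.5 for s in claims}          # initial source trustworthiness
for _ in range(20):
    # Confidence of each candidate value = sum of trust of its supporters.
    conf = {}
    for s, v in claims.items():
        conf[v] = conf.get(v, 0.0) + trust[s]
    total = sum(conf.values())
    conf = {v: c / total for v, c in conf.items()}
    # A source is trusted to the extent that its claims are confident.
    trust = {s: conf[v] for s, v in claims.items()}

print(sorted(conf.items(), key=lambda kv: -kv[1]))  # ranked candidate answers
```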
APA, Harvard, Vancouver, ISO, and other styles
43

Gonzalez, Raul. "Liquidity-Based Extensions of GARCH Models of Stock Volatility." St. Gallen, 2005. http://www.biblio.unisg.ch/org/biblio/edoc.nsf/wwwDisplayIdentifier/03608031001/$FILE/03608031001.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Walker, Ian James Victor. "Fact-based extensions to object-oriented analysis and design." Thesis, Leeds Beckett University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306970.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Thétiot, Franck. "Matériaux moléculaires magnétiques à base d'anions polynitrile : Extension aux matériaux bimétalliques." Brest, 2004. http://www.theses.fr/2004BRES2001.

Full text
Abstract:
Due to their high electronic delocalization and their cyano groups juxtaposed in such a way that they cannot all coordinate to the same metal ion, cyano- and azacyano-carbanions are interesting ligands in the field of molecular materials with magnetic properties. In this context, the reactivities of two original polynitrile anions, [(CN)2CC(O)OEt]- and [(CN)2CC(OEt)C(CN)2]-, with transition metal ions led to the first magnetic polymeric compounds in which these polynitrile anions act as bridging ligands. In order to estimate the influence of a co-ligand in these "binary" molecular systems MII/polynitrile, we then considered the combination of the neutral co-ligands 2,2'-bipyrimidine (bpym) and 1,3-diaminopropane (tn) with different polynitrile anions. These combinations displayed rich and varied structural architectures with magnetic coupling between the metal centers. Based on the studies of these polynitrile coordination compounds, we substituted the diamagnetic polynitrile anions by the paramagnetic hexacyanometallate anions [M(CN)6]3- (MIII = FeIII, CrIII). The combination of these anions with the two-coordinate assembling unit [Cu(tn)]2+ led to extended bimetallic cyano-bridged assemblies with various structural architectures and magnetic properties, including the first two-dimensional ferromagnet [Cu(tn)]3[Cr(CN)6]2·3H2O involving "-Cu-NC-Cr-" linkages.
APA, Harvard, Vancouver, ISO, and other styles
46

Ackah-Nyamike, Edward Ernest. "Expanding the funding base for public agricultural extension delivery in Ghana : an analysis of farmer willingness to pay for extension services." Thesis, University of Reading, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.288736.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Inzinger, Dagmar, and Peter Haiss. "Integration of European Stock Markets. A Review and Extension of Quantity-Based Measures." Europainstitut, WU Vienna University of Economics and Business, 2006. http://epub.wu.ac.at/320/1/document.pdf.

Full text
Abstract:
We examine to what extent Europe's stock markets are integrated, and how this can be measured. We review 54 empirical studies and find an overemphasis on price-based measures and a need for more quantity-based studies. We update the Baele et al. (2004) study on investment funds' equity holdings to March 2006 for ten euro area and four non-euro area countries, provide additional quantity-based evidence, and discuss integration theories. Our results indicate a decline in home bias particularly after the advent of the euro. We conclude that although European stock markets have undergone significant developments, the level of European integration is below expectations and there is a high joint integration with the U.S. (author's abstract)
Series: EI Working Papers / Europainstitut
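For concreteness, a common quantity-based formulation of equity home bias compares the foreign share actually held with the foreign share implied by world market weights. The sketch below and its figures are illustrative assumptions; the exact measure used in the paper may be normalized differently.

```python
# Sketch of a standard quantity-based home-bias measure: 1 minus the ratio
# of the foreign share actually held to the foreign share implied by world
# market weights.  The figures are illustrative assumptions only.
def home_bias(foreign_share_held: float, domestic_weight_in_world: float) -> float:
    foreign_weight_in_world = 1.0 - domestic_weight_in_world
    return 1.0 - foreign_share_held / foreign_weight_in_world

# e.g. a fund holding 40% foreign equity while its domestic market is 10% of
# world capitalization (an unbiased fund would hold 90% foreign equity):
print(round(home_bias(foreign_share_held=0.40, domestic_weight_in_world=0.10), 3))  # ~0.556
```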
APA, Harvard, Vancouver, ISO, and other styles
48

Tian, Meng. "Extension in domain specific code generation with meta-model based aspect weaving." Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/416607/.

Full text
Abstract:
Domain specific code generation improves software productivity and reliability. However, these advantages are lost if the generated code needs to be manually modified or adapted before deployment. Thus, the systematic extensibility of domain specific code generation becomes increasingly important to ensure that these advantages are maintained. However, the traditional extension approaches, like round-trip engineering, have their limitations in supporting certain code customization scenarios. In this thesis, we address this problem with aspect-oriented techniques. We first show that the meta-model and the code generator can be used to derive a domain specific aspect language whose join points are based on domain specific elements. We then show that a corresponding aspect weaver can be derived as well, provided a proper model tracing facility can be made available for the code generator. We demonstrate the viability of our approach on several concrete domain specific code generation case studies, respectively with the AUTOFILTER code generator, the ANTLR parser generator, and the CUP parser generator. We successfully construct a few Java program analysis tools as a result of these case studies.
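A toy sketch of the weaving idea described above: a trace links metamodel elements to locations in generated code, a pointcut selects elements by their metamodel type, and advice is inserted at the traced locations. All names, the trace format and the generated snippet are hypothetical and only illustrate the mechanism, not the derived aspect language or weaver of the thesis.

```python
# Toy sketch of metamodel-based weaving: a trace maps model elements to
# locations in generated code, a pointcut selects elements by metamodel
# type, and advice is inserted at the traced locations.  All names here
# are hypothetical illustrations.
generated = [
    "def parse_expr(tokens):",
    "    node = build_expr(tokens)",
    "    return node",
    "def parse_stmt(tokens):",
    "    node = build_stmt(tokens)",
    "    return node",
]
# Trace: metamodel element -> (element type, line index in generated code)
trace = {"Expr": ("Rule", 1), "Stmt": ("Rule", 4)}

def weave(code, trace, pointcut_type, advice):
    """Insert advice after every traced location whose element type matches."""
    targets = sorted(i for _, (t, i) in trace.items() if t == pointcut_type)
    out, woven = list(code), 0
    for i in targets:
        out.insert(i + 1 + woven, "    " + advice)
        woven += 1
    return out

print("\n".join(weave(generated, trace, "Rule", "log_visit(node)  # woven advice")))
```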
APA, Harvard, Vancouver, ISO, and other styles
49

Manandhar, S. K. "Relational extensions to feature logic : applications to constraint based grammars." Thesis, University of Edinburgh, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.657261.

Full text
Abstract:
This thesis investigates the logical and computational foundations of unification-based or, more appropriately, constraint based grammars. The thesis explores extensions to feature logics (which provide the basic knowledge representation services to constraint based grammars) with multi-valued or relational features. These extensions are useful for knowledge representation tasks that cannot be expressed within current feature logics. The approach bridges the gap between concept languages (such as KL-ONE), which are the mainstay of knowledge representation languages in AI, and feature logics. Various constraints on relational attributes are considered, such as existential membership, universal membership, set descriptions, transitive relations and linear precedence constraints. The specific contributions of this thesis can be summarised as follows: 1. Development of an integrated feature/concept logic 2. Development of a constraint logic for so called partial set descriptions 3. Development of a constraint logic for expressing linear precedence constraints 4. The design of a constraint language CL-ONE that incorporates the central ideas provided by the above study. 5. A methodological study of the application of CL-ONE for constraint based grammars The thesis takes into account current insights in the areas of constraint logic programming, object-oriented languages, computational linguistics and knowledge representation.
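To make the basic machinery concrete, the following is a minimal feature-structure unification sketch in which nested dictionaries stand for feature structures, atoms must match exactly, and, as a nod to the relational extensions studied in the thesis, set-valued (multi-valued) features combine by union. It is an illustration only, with assumed example structures, and does not implement the thesis's constraint logics or the CL-ONE language.

```python
# Minimal feature-structure unification sketch: nested dicts are feature
# structures, atoms unify only if equal, and set-valued features unify by
# accumulating members (union).  Assumed examples; not the thesis's logic.
FAIL = object()

def unify(a, b):
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for k, v in b.items():
            if k in out:
                u = unify(out[k], v)
                if u is FAIL:
                    return FAIL
                out[k] = u
            else:
                out[k] = v
        return out
    if isinstance(a, set) and isinstance(b, set):
        return a | b                    # multi-valued feature: union of members
    return a if a == b else FAIL        # atoms unify only if equal

np_struct = {"cat": "np", "agr": {"num": "sg"}, "mods": {"det"}}
lex_entry = {"cat": "np", "agr": {"num": "sg", "per": "3"}, "mods": {"adj"}}
print(unify(np_struct, lex_entry))
# -> {'cat': 'np', 'agr': {'num': 'sg', 'per': '3'}, 'mods': {'det', 'adj'}}
```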
APA, Harvard, Vancouver, ISO, and other styles
50

Okuno, Akifumi. "Studies on Neural Network-Based Graph Embedding and Its Extensions." Kyoto University, 2020. http://hdl.handle.net/2433/259075.

Full text
APA, Harvard, Vancouver, ISO, and other styles