Dissertations on the topic "Langage Python"

To view other types of publications on this topic, follow the link: Langage Python.

Cite a source in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations for your research on the topic "Langage Python".

Next to each entry in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the publication as a .pdf file and read its abstract online, if these are available in the metadata.

Browse dissertations from a wide range of disciplines and compile an accurate bibliography.

1

Miled, Mahdi. "Ressources et parcours pour l'apprentissage du langage Python : aide à la navigation individualisée dans un hypermédia épistémique à partir de traces." Thesis, Cachan, Ecole normale supérieure, 2014. http://www.theses.fr/2014DENS0045/document.

Abstract:
This research work mainly concerns means of assistance for individualized navigation through an epistemic hypermedia. We have a number of resources that can be formalized as a directed acyclic graph (DAG) called the graph of epistemes. After surveying resource and pathway environments, along with methods of visualization, navigation, tracking, adaptation, and data mining, we present an approach that correlates design and editing activities with those dedicated to the use of, and navigation through, the resources. This approach provides mechanisms for individualizing navigation in an environment intended to be evolutive. We then built prototypes to put the graph of epistemes to the test. One of these prototypes was integrated into an existing platform. This epistemic hypermedia, called HiPPY, provides resources and pathways for learning the Python language. It is based on a graph of epistemes, dynamic navigation, and a personalized knowledge assessment. This prototype was tested in an experiment that allowed us to evaluate the principles introduced and to analyze certain usage patterns.
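The "graph of epistemes" described above is, structurally, a DAG of prerequisite relations between learning resources. As a hedged illustration (the episteme names and dependencies below are invented, not taken from HiPPY), a topological order of such a graph yields a global learning pathway, and the set of epistemes whose prerequisites are already mastered gives candidate next steps for individualized navigation:

```python
from graphlib import TopologicalSorter

# Hypothetical fragment of a "graph of epistemes": each episteme maps to the
# epistemes it depends on, forming a directed acyclic graph (DAG).
epistemes = {
    "variables": [],
    "conditionals": ["variables"],
    "loops": ["conditionals"],
    "functions": ["variables"],
    "recursion": ["functions", "conditionals"],
}

# A global learning pathway is any topological order of the DAG.
pathway = list(TopologicalSorter(epistemes).static_order())
print(pathway[0])  # "variables" has no prerequisite, so it comes first

def reachable(known):
    """Epistemes not yet mastered whose prerequisites are all mastered."""
    return [e for e, deps in epistemes.items()
            if e not in known and all(d in known for d in deps)]

print(sorted(reachable({"variables"})))  # ['conditionals', 'functions']
```

An individualized navigator would rank the `reachable` candidates using traces of the learner's past activity rather than returning them all.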
2

Tesser, Federico. "Solveur parallèle pour l’équation de Poisson sur mailles superposées et hiérarchiques, dans le cadre du langage Python." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0129/document.

Abstract:
Adaptive discretizations are important in compressible/incompressible flow problems, since it is often necessary to resolve details on multiple levels, allowing large regions of space to be modeled with a reduced number of degrees of freedom (and reducing the computational time). There is a wide variety of methods for adaptively discretizing space, but Cartesian grids have often outperformed them even at high resolutions, thanks to their simple and accurate numerical stencils and their superior parallel performance. Such performance and simplicity are generally obtained by applying a finite-difference scheme, but this discretization approach does not offer an easy path to adaptivity. In a finite-volume scheme, by contrast, we can incorporate different types of meshes that are better suited to adaptive refinement, at the cost of more complex stencils but with greater flexibility. The Laplace operator is an essential building block of the Navier-Stokes equations, the model that governs fluid flows, but it also appears in differential equations describing many other physical phenomena, such as electric and gravitational potentials and quantum mechanics. It is therefore a very important differential operator, and the many studies devoted to it prove its relevance. This work presents 2D finite-difference and finite-volume approaches to solving the Laplacian, applying patches of overlapping grids where a finer level is needed and leaving coarser meshes in the rest of the computational domain. These overlapping grids have generic quadrilateral shapes. Specifically, the topics covered are:
1) an introduction to the finite-difference method, the finite-volume method, domain partitioning, and solution approximation;
2) an overview of the different types of meshes used to represent the geometry of a problem discretely, with a focus on the octree data structure, presenting PABLO and PABLitO: the former is an external library used to manage the creation of each grid, load balancing, and internal communications, while the latter is the Python API of that library, written ad hoc for this project;
3) a presentation of the algorithm used to communicate data between meshes (each unaware of the others' existence) using MPI inter-communicators, and a clarification of the monolithic approach applied in building the final matrix of the system to solve, taking into account the diagonal, restriction, and prolongation blocks;
4) a presentation of some results, conclusions, and references.
It is important to underline that everything is done in Python as the programming framework, using Cython for writing PABLitO, MPI4Py for communications between grids, PETSc4py for assembling and solving the system of unknowns, and NumPy for contiguous-memory objects. This language was chosen because Python, easy to learn and understand, is today a significant contender in the numerical-computing and HPC ecosystem, thanks to its clean style, its packages, its compilers and, why not, its architecture-specific optimized versions.
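As a loose illustration of the finite-difference building block mentioned in point 1 (not the thesis's overlapping-grid solver, which relies on PABLitO and PETSc4py), here is a minimal NumPy sketch of the standard second-order 5-point Laplacian stencil on a uniform Cartesian grid; the grid size and test function are arbitrary:

```python
import numpy as np

def laplacian_5pt(u, h):
    """Second-order 5-point finite-difference Laplacian on a uniform grid.

    Boundary values are left at zero; only interior points are computed.
    """
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1]
                       + u[1:-1, 2:] + u[1:-1, :-2]
                       - 4.0 * u[1:-1, 1:-1]) / h**2
    return lap

n = 33
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = X**2 + Y**2                 # exact Laplacian of x^2 + y^2 is 4 everywhere
lap = laplacian_5pt(u, h)
print(np.allclose(lap[1:-1, 1:-1], 4.0))  # True
```

The stencil is exact for quadratics, which is why the interior values come out as exactly 4 here; for general functions the error decays as O(h²).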
3

Monat, Raphaël. "Static type and value analysis by abstract interpretation of Python programs with native C libraries." Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS263.

Abstract:
In this thesis, we aim at designing, both theoretically and experimentally, methods for the automatic detection of potential bugs in software, or the proof of their absence. This detection is done statically, by analyzing programs' source code without running them. We rely on the abstract interpretation framework to derive sound, computable semantics. In particular, we focus on analyzing dynamic programming languages. The target of this work is the analysis of Python programs combined with native C libraries.
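To make the idea of a sound, computable semantics concrete, here is a toy interval abstract domain in Python. This is a textbook illustration of abstract interpretation, not the analyzer developed in the thesis; the class and values are invented:

```python
# Toy interval domain: each abstract value [lo, hi] soundly over-approximates
# a set of concrete integers. Real analyzers combine many richer domains.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Abstract addition: add the bounds component-wise.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def join(self, other):
        # Least upper bound: used to merge the results of two branches.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Abstractly execute "y = x + 1" on one branch and "y = x + 10" on the
# other, for any concrete x in [0, 5]:
x = Interval(0, 5)
y = (x + Interval(1, 1)).join(x + Interval(10, 10))
print(y)  # [1, 15] covers every concrete outcome of either branch
```

Soundness means the result may be imprecise (no concrete run produces 7, yet 7 is in [1, 15]), but it can never miss a real behavior, which is what makes proofs of bug absence possible.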
4

Huth, Jacob. "Modelling Aging in the Visual System & The Convis Python Toolbox." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS140.

Abstract:
In this thesis we investigate aging processes in the visual system from a computational-modelling perspective. We review neural aging phenomena, the basic changes that come with aging, and possible mechanisms connecting causes and effects. The hypotheses we formulate from this review are the input-noise hypothesis, the plasticity hypothesis, the white-matter hypothesis, and the inhibition hypothesis. Since the input-noise hypothesis can explain a number of aging phenomena from a very simple premise, we focus mainly on this theory. Since the size and organization of receptive fields is important for perception and changes in old age, we developed a theory of the interaction between noise and receptive-field structure. We then propose spike-timing-dependent plasticity (STDP) as a possible mechanism that could change receptive-field size in response to input noise. In two separate chapters we investigate approaches to modelling neural data and psychophysical data, examining a contrast gain control mechanism and a simplified cortical model, respectively. Finally, we present convis, a Python toolbox for creating convolutional vision models, which was developed during this thesis. convis can implement the most important models currently used to model the responses of retinal ganglion cells and of cells in the lower visual cortices (V1 and V2).
5

Bleuzé, Alexandre. "Transfert d'apprentissage intra et inter sujets en interfaces cerveau-machine non-invasives." Electronic Thesis or Diss., Université Grenoble Alpes, 2023. http://www.theses.fr/2023GRALS057.

Abstract:
A brain-computer interface (BCI) is a direct link between a brain and a computer, enabling an individual to perform tasks without the involvement of peripheral nerves or muscles. In recent years, BCIs have attracted growing interest, especially in the healthcare sector, because of their potential to help patients. They have been used to help people recover motor function after a stroke or spinal-cord injury, and to help people with degenerative diseases such as amyotrophic lateral sclerosis, who gradually lose the ability to control their limbs and then to communicate. Another factor adding to the appeal of BCIs is their potential to enhance the capabilities of healthy people in areas such as video games. Today, thanks to technological advances in healthcare, the tools needed to set up BCIs, such as electroencephalograms, are becoming more affordable, enabling many more experiments and clinical tests and giving access to a vast amount of data, sometimes freely available on the Internet. This data could make it possible to create models trained on data from many people, increasing the performance of future systems while reducing their calibration time. It would also enable the use of less expensive hardware for equivalent performance, making BCIs more affordable. The main problem today is that the openly available data is very heterogeneous, whether in terms of quality, paradigm, or simply hardware, which makes it very difficult to exploit all of it to extract common features. The aim of this thesis is to find methods for adapting and using open-access data to create machine-learning models that are highly robust because they are trained on data from a wide range of subjects. To this end, we focus on Riemannian geometry, whose use in brain-computer interfaces has recently been shown to be highly effective. More specifically, this original work focuses on the development of transfer-learning methods in the tangent space of the Riemannian manifold. The proposed methods have been evaluated on a large number of databases covering several paradigms: motor imagery, P300 evoked potentials, and steady-state visual evoked potentials. This work has led to the development of a method called Tangent Space Alignment (TSA), which achieves an overall improvement in accuracy of 2.7% over a previously published Riemannian method, Riemannian Procrustes Analysis (RPA); the improvement is even larger for P300 BCIs. Another contribution of this thesis to the scientific community is research into the use of arbitrary mathematical sources in BCI transfer learning. The work shows that little information is lost when aligning to such an arbitrary source and studies the impact on cross-subject accuracy, opening up new alignment possibilities in which normalized, mathematically defined alignment sources are sought instead of data from an existing subject, which may not possess the right mathematical properties to serve as a quality source.
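Tangent-space methods of the kind discussed above map symmetric positive-definite (SPD) covariance matrices to a Euclidean tangent space at a reference point, where ordinary vector operations (and alignment) become possible. The sketch below shows only the generic tangent-space projection, not the TSA method itself; the matrices are toy values:

```python
import numpy as np

def sqrtm_inv(C):
    """Inverse matrix square root of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w ** -0.5) @ V.T

def logm_spd(S):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.log(w)) @ V.T

def tangent_vector(S, C):
    """Project SPD covariance S onto the tangent space at reference C.

    Computes logm(C^{-1/2} S C^{-1/2}); the result is a symmetric matrix
    that can be flattened and fed to any Euclidean classifier.
    """
    iC = sqrtm_inv(C)
    return logm_spd(iC @ S @ iC)

C = np.eye(2)                              # toy reference point
S = np.array([[2.0, 0.3], [0.3, 1.0]])     # toy trial covariance
v = tangent_vector(S, C)
print(np.allclose(tangent_vector(C, C), 0))  # the reference maps to the origin
```

In practice the reference C is typically the (Riemannian) mean of a subject's covariances, and transfer-learning methods align the tangent vectors of different subjects before classification.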
6

Larouche, Tremblay François. "Analyse détaillée du fonctionnement interne du schéma de surface CLASS." Master's thesis, Université Laval, 2014. http://hdl.handle.net/20.500.11794/25359.

Abstract:
Le fonctionnement du schéma de surface canadien CLASS a été analysé en détail, basé sur une démarche de rétroconception. L’impact des multiples variables d’états du modèle sur les termes des bilans énergétique et hydrique a été expliqué. La valeur des albédos et des transmissivités de la canopée augmente en fin de saison lorsque la canopée devient moins dense. Donc, le rayonnement au sol augmente alors que celui à la canopée diminue. La résistance de couche limite de la feuille ralentit les transferts de chaleur sensible et latente à la canopée durant le jour, mais n’a aucune influence la nuit. La résistance aérodynamique au transfert de chaleur est plus élevée le jour que la nuit. Elle influe sur les flux de chaleur sensible et latente à la canopée. La résistance de surface au transfert de chaleur est très élevée le jour et peu élevée la nuit. Elle influe sur les flux de chaleur sensible et latente au sol. La résistance stomatale est très grande la nuit. Elle freine le transfert de chaleur latente durant le jour et n’a aucune influence sur les flux de chaleur sensible. Finalement, on a observé de grandes fluctuations de température et de teneur en eau dans les deux premières couches de sol. Tandis que la troisième couche de sol a montré une réaction très lente aux précipitations et aux variations de température à la surface du sol. Les résultats sont supportés d’explications théoriques très détaillées dans la section théorie.
Canadian Land Surface Scheme
7

Silva, Bruno Hartmann da. "Nano-κ : a Python code for the multiscale modelling of the thermal conductivity." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0212.

Abstract:
Electronic devices are present in almost every aspect of modern society, and their optimisation and control are of paramount importance in the development of new technologies. In addition, environmental concerns about their energy efficiency and lifetime require the testing of alternatives that minimise the human impact on nature. Among the most common materials used in electronic nanodevices are semiconductors, such as silicon (Si) and germanium (Ge). In this context, there is a strong motivation to study phonons, the quanta of crystal-lattice vibration, which are the main carriers of thermal energy in semiconductors. At the macroscale, material properties such as thermal conductivity are usually considered independent of boundary conditions. This is not the case at the nanoscale, where each vibrational mode of the material can behave differently depending on the geometric configuration. A more detailed calculation is therefore required to understand how geometric parameters affect the ability of a nanodevice to conduct heat. Understanding heat conduction at the nanoscale is important to avoid overheating the system and to understand how temperature affects its electrical performance. Computational tools can provide great insight into these effects. In fact, several works have already used numerical calculations to understand the thermal behaviour of nanodevices, but usually with in-house codes that are not open to the community. In this context, this thesis presents Nano-κ, a Python code that solves the Boltzmann transport equation (BTE) in nanodevices using the Monte Carlo method with ab initio data as input. First, the theory behind phonon transport and its computational implementation in Nano-κ are discussed. Then, a sensitivity analysis is performed to verify the effect of the main simulation parameters on the estimated thermal conductivity. The thermal conductivity calculated by Nano-κ is then compared with results from the literature in several thin-film and nanowire settings, which in general show good agreement. In addition, an arbitrary geometry is simulated in two different cases, demonstrating Nano-κ's flexibility and consistency in providing good estimates of heat transfer in nanodevices. The thesis concludes by suggesting possible avenues for improvement in future work.
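As a loose illustration of the Monte Carlo ingredient (not Nano-κ's actual algorithm, which samples ab initio phonon properties), the toy simulation below draws exponential free flights and truncates them at a hypothetical film boundary, showing how a finite thickness suppresses the effective mean free path; all numbers are invented:

```python
import random

random.seed(0)
lam = 100.0        # hypothetical intrinsic phonon mean free path (nm)
thickness = 50.0   # hypothetical film thickness (nm)

def effective_mfp(n=100_000):
    """Monte Carlo estimate of the boundary-limited mean free path."""
    total = 0.0
    for _ in range(n):
        flight = random.expovariate(1.0 / lam)  # intrinsic scattering event
        total += min(flight, thickness)         # the boundary cuts flights short
    return total / n

mfp = effective_mfp()
print(mfp < lam)  # True: boundary scattering reduces the effective mean free path
```

The estimate converges to lam * (1 - exp(-thickness/lam)), about 39 nm here, which is the kind of size effect that makes nanoscale thermal conductivity geometry-dependent.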
8

Hold-Geoffroy, Yannick. "SCOOP : cadriciel de calcul distribué générique." Master's thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/25711.

Abstract:
This paper presents SCOOP, a new Python framework for automatically distributing dynamic task hierarchies focused on simplicity. A task hierarchy refers to tasks that can recursively spawn an arbitrary number of subtasks. The underlying computing infrastructure consists of a simple list of resources. The typical use case is to run the user’s main program under the umbrella of the SCOOP module, where it becomes a root task that can spawn any number of subtasks through the standard “futures” API of Python, and where these subtasks may themselves spawn other subsubtasks, etc. The full task hierarchy is dynamic in the sense that it is unknown until the end of the last running task. SCOOP automatically distributes tasks amongst available resources using dynamic load balancing. A task is nothing more than a Python callable object in conjunction with its arguments. The user need not worry about message passing implementation details; all communications are implicit.
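The abstract notes that subtasks are spawned through Python's standard "futures" interface. Since SCOOP itself must be launched under its own module, the sketch below mimics the same recursive-spawning pattern with the stdlib concurrent.futures instead; the tree-summing task is invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# SCOOP exposes Python's standard "futures" interface; this sketch uses the
# stdlib ThreadPoolExecutor to mimic a dynamic task hierarchy in which any
# task may recursively spawn subtasks (here: summing a nested tree of ints).
pool = ThreadPoolExecutor(max_workers=8)

def tree_sum(node):
    if isinstance(node, int):      # leaf task: no subtasks to spawn
        return node
    # Internal task: spawn one subtask per child, then gather the results.
    children = [pool.submit(tree_sum, child) for child in node]
    return sum(f.result() for f in children)

tree = [1, [2, 3], [[4, 5], 6]]
print(tree_sum(tree))  # 21
```

The full hierarchy is dynamic in exactly the sense the abstract describes: how many subtasks exist is unknown until every task has run; SCOOP's contribution is distributing such hierarchies across remote resources with implicit communication and load balancing.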
9

Combrisson, Etienne. "Décodage des intentions et des exécutions motrices : étude du rôle des oscillations cérébrales via l’apprentissage machine et développement d’outils open-source." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE1327/document.

Abstract:
The execution of a motor task is associated with complex patterns of oscillatory modulations in the brain. However, the specific role of oscillatory phase, amplitude and phase-amplitude coupling (PAC) across the planning and execution stages of goal-directed motor behavior is not yet fully understood. The aim of the first part of this PhD thesis was to address this question by analyzing intracranial EEG data recorded in epilepsy patients during the performance of a delayed center-out task. Using machine learning, we identified functionally relevant oscillatory features via their accuracy in predicting motor states and movement directions. In addition to the established role of oscillatory power, our data-driven approach revealed the prominent role of low-frequency phase as well as significant involvement of PAC in the neuronal underpinnings of motor planning and execution. In parallel to this empirical research, an important portion of this PhD work was dedicated to the development of efficient tools to analyze and visualize electrophysiological brain data. These packages include a feature extraction and classification toolbox (Brainpipe), modular and tensor-based PAC computation tools (Tensorpac) and a versatile brain data visualization GUI (Visbrain). Taken together, this body of research advances our understanding of the role of brain oscillations in goal-directed behavior, and provides efficient open-source packages for the scientific community to replicate and extend this research.
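To illustrate what a PAC metric quantifies, here is a generic sketch of the classic mean-vector-length measure (Tensorpac implements several tensor-based PAC methods; this is not its code, and the synthetic signals are invented for the example):

```python
# Generic mean-vector-length PAC sketch (not Tensorpac code): given the
# instantaneous low-frequency phases and high-frequency amplitudes of a
# signal, measure how strongly amplitude is locked to phase.
import cmath, math

def mean_vector_length(phases, amplitudes):
    n = len(phases)
    # amplitude-weighted phase vectors; their mean is large when the
    # amplitude systematically peaks at a preferred phase
    return abs(sum(a * cmath.exp(1j * p) for p, a in zip(phases, amplitudes)) / n)

# synthetic data: amplitude locked to phase -> strong coupling
phases = [2 * math.pi * i / 100 for i in range(1000)]
coupled = [1 + math.cos(p) for p in phases]
uncoupled = [1.0] * 1000

print(mean_vector_length(phases, coupled) > mean_vector_length(phases, uncoupled))  # → True
```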
10

Häggholm, Petter. "PyRemote : object mobility in the Python programming language." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/31573.

Abstract:
The current trend in computation is one of concurrency and multiprocessors. Large supercomputers have long been eclipsed in popularity by cheaper clusters of small computers. In recent years, desktop processors with multiple cores have become commonplace. A plethora of languages, tools, and techniques for dealing with concurrency already exist. Where concurrency and multiprocessors meet modern, highly dynamic languages, however, is uncharted territory. Traditional distributed systems, however complex, tend to be simplified by assumptions of type consistency. Even in systems where types and not merely instances and primitive objects can be serialised and distributed, it is usually the case that such types are assumed to be static. The Python programming language, as an example of a modern language with highly dynamic types, presents novel challenges, in that classes may be altered at runtime, both through the addition, removal, or modification of attributes such as member variables and methods, and through modifications to the type's inheritance hierarchy. This thesis presents a system called PyRemote which aims to explore some of the issues surrounding type semantics in this environment.
Science, Faculty of
Computer Science, Department of
Graduate
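The runtime mutability described in the abstract can be seen in a few lines of plain Python. This sketch (not PyRemote code; the class names are invented) shows the class-level changes a distributed object system must track:

```python
# Minimal illustration of the dynamism the thesis must handle: class
# attributes and even the inheritance hierarchy can change at runtime,
# so a distributed system cannot assume types are static.
class Base:
    def greet(self):
        return "hello"

class Greeter(Base):
    pass

g = Greeter()
assert g.greet() == "hello"

# methods can be added to a class after instances of it already exist
Greeter.shout = lambda self: self.greet().upper()
assert g.shout() == "HELLO"

class Polite(Base):
    def greet(self):
        return "good day"

# even the inheritance hierarchy can be rewritten at runtime;
# existing instances immediately see the new method resolution order
Greeter.__bases__ = (Polite,)
assert g.greet() == "good day"
```

Any remote copy of `Greeter` made before these mutations would now disagree with the original, which is exactly the kind of type-consistency problem the thesis explores.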
11

Gaska, Benjamin James, and Benjamin James Gaska. "ParForPy: Loop Parallelism in Python." Thesis, The University of Arizona, 2017. http://hdl.handle.net/10150/625320.

Abstract:
Scientists are trending towards usage of high-level programming languages such as Python. The convenience of these languages often has a performance cost. As the amount of data being processed increases, this can make using these languages unfeasible. Parallelism is a means to achieve better performance, but many users are unaware of it, or find it difficult to work with. This thesis presents ParForPy, a means of loop parallelization intended to simplify the usage of parallelism in Python. Discussion is included for determining when parallelism matches well with the problem. Results are given that indicate that ParForPy is both capable of improving program execution time and perceived to be a simpler construct to understand than other techniques for parallelism in Python.
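The abstract does not show ParForPy's actual API; the parallel-for pattern it targets can be sketched with only the standard library (`work` is an invented stand-in for an expensive loop body):

```python
# Generic parallel-for sketch (not ParForPy's interface): independent loop
# iterations are distributed across workers while preserving result order.
from concurrent.futures import ThreadPoolExecutor

def work(x):
    # stand-in for an expensive, independent loop body
    return x * x

data = range(8)

# the sequential loop ...
sequential = [work(x) for x in data]

# ... and its parallel-for equivalent
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(work, data))

assert parallel == sequential  # same results, potentially less wall time
```

For CPU-bound bodies in CPython a process pool (or a GIL-free runtime) is needed for real speed-up; the thread pool here only demonstrates the decomposition.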
12

Daily, Jeffrey Alan. "Gain distributed array computation with python /." Pullman, Wash. : Washington State University, 2009. http://www.dissertations.wsu.edu/Thesis/Spring2009/j_daily_042409.pdf.

Abstract:
Thesis (M.S. in computer science)--Washington State University, May 2009.
Title from PDF title page (viewed on May 26, 2009). "School of Electrical Engineering and Computer Science." Includes bibliographical references (p. 41-44).
13

New, Wesley. "Python based FPGA design-flow." Master's thesis, University of Cape Town, 2016. http://hdl.handle.net/11427/20339.

Abstract:
This dissertation undertakes to establish the feasibility of using MyHDL as a basis on which to develop an FPGA-based DSP tool-flow to target CASPER hardware. MyHDL is an open-source package which enables Python to be used as a hardware definition and verification language. As Python is a high-level language, hardware designers can use it to model and simulate designs without needing detailed knowledge of the underlying hardware. MyHDL has the ability to convert designs to Verilog or VHDL, allowing it to integrate into the more traditional design-flow. The CASPER tool-flow exhibits limitations such as design environment instability and high licensing fees. These shortcomings are addressed by MyHDL. To enable CASPER to take advantage of its powerful features, MyHDL is incorporated into a next-generation tool-flow which enables high-level designs to be fully simulated and implemented on the CASPER hardware architectures.
14

Barcellini, Flore. "Conception de l'artefact, conception du collectif : dynamique d'un processus de conception ouvert et continu dans une communauté de développement de logiciels libres." Phd thesis, Conservatoire national des arts et metiers - CNAM, 2008. http://tel.archives-ouvertes.fr/tel-00350212.

Abstract:
This research concerns the design of Free and Open Source Software (FOSS), viewed as a new form of work organization based on: community-based collectives open to the voluntary participation of users; a continuous design process; and the distribution of design across three activity spaces on the Internet (discussion, documentation and implementation spaces).
The methodological contribution of this work consists in analyzing contextual traces of a design process of the Python project, the Python Enhancement Proposal (PEP), from both a synchronic perspective (centered on online PEP discussions) and a diachronic perspective (centered on the evolution of a PEP proposal across the three activity spaces). We adopted an original methodology combining structural analyses of the project's mailing lists, one usage-oriented and the other design-oriented (e.g. representation of discussions), with content analyses of those lists (e.g. collaborative design activities), as well as analyses of traces from the documentation and implementation spaces, and interviews.
Regarding the organization of artifact design across the three activity spaces, we show that the usage-oriented list and the design-oriented list are specialized in terms of the phases of the design process and the activities that take place in them. We also show the relationships linking actions in the lists (discussion space) with actions in the two other activity spaces (implementation, documentation). Design discussions are focused and marked by moments of quasi-synchronous exchange, reflecting the presence of implicit rules framing the discussions. Finally, the distribution of collaborative design activities and activity sequences is similar to that found in other studies of face-to-face design meetings.
Regarding what constitutes the design collective, we show that the community of Python designers is made up of local design networks bringing together members from various user communities around a core group of developers. In this design collective, participation is based on the roles participants actually play rather than on their status (users vs. developers). Our analysis shows that cognitive roles (generation and evaluation of design solutions) and epistemic roles (clarification) are taken on by all participants, including users. Specific participant profiles nevertheless emerge. The project leader and the people proposing new features (the champions) have a facilitator profile in the design process, characterized by a coordination role, a central interactive role (managing the interaction) in discussions, and sometimes a socio-relational role (interpersonal relations). Boundary-spanner profiles, linking usage and design, emerge as key participants for the performance of the design process. They are characterized by an interactive role based on cross-participation between the usage-oriented and design-oriented lists, and by a central position in discussions. They also have an epistemic role based on contributing specific knowledge about the application domains of the design, and finally a role supporting the champion of the proposal.
These results can inform the specification of tools to foster participation in FOSS projects by overcoming various barriers (e.g. the time cost of joining a project) and by supporting the construction and maintenance of project awareness (awareness of the design process and social awareness).
15

Åkesson, Tobias, and Rasmus Horntvedt. "Java, Python and Javascript, a comparison." Thesis, Högskolan Kristianstad, Fakulteten för naturvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-20007.

Abstract:
With the amount of programming languages currently available there is a high risk of confusion and doubtfulness in aspiring programmers over which to choose. It may be motivating for a beginner to choose "the perfect language" when starting, to avoid learning multiple languages. This thesis compares three popular languages on three separate aspects: their syntax, usefulness in different areas, and performance in terms of speed. Syntax-wise the results varied, with some aspects being very similar across all three languages and completely different in others. In terms of usefulness in specific areas, the languages' flexibility allowed them to develop applications in most fields, while being dominant in different areas. The speed comparison resulted in Python being the slowest across all tests, with Java and Javascript (running inside Nodejs) competing for first place.
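Speed comparisons like this are often scripted with a timing harness. A minimal, generic sketch using the stdlib `timeit` module (the thesis's actual benchmarks are not shown in the abstract, and `workload` is an invented stand-in):

```python
# Minimal micro-benchmark harness sketch: the same workload would be ported
# to each language and timed there; only the Python side is shown.
import timeit

def workload():
    # stand-in benchmark kernel, easy to reproduce in Java or Javascript
    return sum(i * i for i in range(10_000))

# best-of-5 timing reduces scheduling noise in the measurement
best = min(timeit.repeat(workload, number=10, repeat=5))
print(f"10 runs: {best:.4f} s")
```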
16

Eitzen, Benjamin. "GpuPy : efficiently using a GPU with Python." Online access for everyone, 2007. http://www.dissertations.wsu.edu/Thesis/Summer2007/b_eitzen_082307.pdf.

17

Kohli, Manav S. "Assessing the Suitability of Python as a Language for Parallel Programming." Scholarship @ Claremont, 2016. http://scholarship.claremont.edu/cmc_theses/1385.

Abstract:
With diminishing gains in processing power from successive generations of hardware development, there is a new focus on using advances in computer science and parallel programming to build faster, more efficient software. As computers trend toward including multiple and multicore processors, parallel computing serves as a promising option for optimizing the next generation of software applications. However, models for implementing parallel programs remain highly opaque due to their reliance on languages such as Fortran, C, and C++. In this paper I investigate Python as an option for implementing parallel programming techniques in application development. I analyze the efficiency and accessibility of the MPI for Python and IPython Parallel packages by performing a Monte Carlo simulation in parallel and comparing their speeds to the sequential calculation. While MPI for Python offers the core functionality of MPI and C-like syntax in Python, IPython Parallel's architecture provides a truly unique model.
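A hedged sketch of the kind of Monte Carlo experiment described above (not the thesis code: it uses MPI for Python and IPython Parallel, while here a stdlib thread pool stands in for those back-ends, and `pi_parallel` is a name chosen for the example):

```python
# Monte Carlo estimate of pi, decomposed into independent chunks the way
# each MPI rank would draw its own share of samples before a reduction.
import random
from concurrent.futures import ThreadPoolExecutor

def hits(n, seed):
    # count random points falling inside the unit quarter circle
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))

def pi_parallel(n, workers=4):
    chunk = n // workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(hits, chunk, i) for i in range(workers)]
        inside = sum(f.result() for f in futures)
    return 4 * inside / (chunk * workers)

print(pi_parallel(400_000))  # prints an estimate close to 3.14159
```

With CPython threads this only demonstrates the decomposition; real speed-up requires process- or MPI-based workers, which is precisely what the thesis compares.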
18

Tressou, Benjamin. "Contribution à l'homogénéisation des milieux viscoélastiques et introduction du couplage avec la température par extensions d'une approche incrémentale directe." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2016. http://www.theses.fr/2016ESMA0004/document.

Abstract:
This study is devoted to the micromechanical modeling of viscoelastic composites using an incremental approach (IA) due to Lahellec and Suquet (2007). In addition to being based on a rigorous thermodynamic framework, the IA allows solving the heterogeneous viscoelastic problem in the real time domain (i.e. without the Laplace transform). The first aim is to extend the IA application range in terms of local linear viscoelastic laws and microstructures. The second one is to attempt to introduce the coupling effects between viscoelasticity and temperature within the IA framework. First, the IA is coded in Python and the program validated for simple viscoelastic laws, and for microstructures and loading paths already studied in Lahellec and Suquet (2007). The second part focuses on a theoretical generalization of the IA to many internal variables, which are not necessarily deviatoric, and to anisotropic phases. The resulting estimates are progressively validated by confrontation with reference solutions (full-field simulations), and especially the ability of the extended IA to deal with matrices described by generalized Maxwell laws (without and with volumetric anelastic strains). This part ends with a demonstration of the possible association of the IA with three linear homogenization schemes (Mori-Tanaka, Lielens' interpolation, the scheme of Malekmohammadi et al. (2014)) in order to deal with various morphologies (fiber- or particle-reinforced composites, wood strand-based composites). The last part focuses on the coupling between viscoelasticity and temperature within the IA framework. The initial and time-discretized versions of the strongly coupled local problem are formulated. Then, increasing coupling levels are envisioned for a progressive approach to the solving procedure. The thermoelastic coupling alone is first studied (effect of the thermal field on mechanics, without solving the heat equation).
The resulting estimates for a periodic microstructure with elastic, thermoelastic, then thermoviscoelastic fibers in a thermoviscoelastic matrix are successfully compared to reference solutions. At last, the heat equation is simultaneously solved by taking into account the viscoelastic dissipation within the matrix as a source term, in addition to the thermoelastic coupling term. The evolutions of the global temperature and response reveal relevant tendencies.
19

Hoffmann, Peter, Christoph Jacobi, and Sebastian Gimeno-Garcia. "Using Python language for analysing measurements from SABER instrument on TIMED satellite." Universität Leipzig, 2009. https://ul.qucosa.de/id/qucosa%3A16367.

Abstract:
The practical handling and analysis of satellite data is outlined using the programming language Python. The limb sounding technique of the SABER instrument on board the TIMED satellite delivers vertical profiles of kinetic temperature from the stratosphere (∼30 km) up to the lower thermosphere (∼120 km). The procedure may be summarised as follows: in the first step the level 2 data for one month are extracted from the netCDF format and arranged into a new altitude-latitude grid for the ascending and descending orbits. The longitudinal structure is rearranged by applying a decomposition into zonal harmonics. Various cross sections of the data give a good overview of the thermal structure and dynamics of the atmosphere up to 120 km. The monthly values of the zonally averaged temperature are compared to the available data from stratospheric reanalyses up to 60 km as well as the initialized background climatology of general circulation models for the middle atmosphere.
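The zonal-harmonic decomposition mentioned above can be sketched generically: a field sampled at equally spaced longitudes is split into its zonal mean and wave amplitudes via a discrete Fourier transform (a pure-stdlib illustration with a synthetic field, not the paper's code, which would use numpy on the netCDF arrays):

```python
# Decompose a latitude-circle field into zonal mean (wave 0) and the
# amplitudes of zonal wavenumbers 1..max_wave via a discrete Fourier sum.
import cmath, math

def zonal_harmonics(values, max_wave=2):
    # values: field sampled at len(values) equally spaced longitudes
    n = len(values)
    comps = {}
    for k in range(max_wave + 1):
        c = sum(v * cmath.exp(-2j * math.pi * k * i / n)
                for i, v in enumerate(values)) / n
        comps[k] = abs(c) if k == 0 else 2 * abs(c)  # real wave amplitude
    return comps

# synthetic field: 210 K zonal mean plus a 5 K wavenumber-1 perturbation
lons = [2 * math.pi * i / 36 for i in range(36)]
field = [210 + 5 * math.cos(l) for l in lons]
print(zonal_harmonics(field))  # wave 0 ≈ 210, wave 1 ≈ 5, wave 2 ≈ 0
```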
20

Arendáč, Tomáš. "Programovací jazyk Python a účelnosť jeho zaradenia do výučby." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-73162.

Abstract:
This thesis deals with the programming language Python and the suitability of including it in teaching. The work is divided into three main parts. The first part describes the programming language Python and its elementary characteristics and features. The purpose is to introduce its properties to the reader, so that they can judge whether deeper study is worthwhile. Elements of object-oriented programming are covered in the description, too. The second part analyses the programming language Python on the basis of ten criteria defined with respect to the applicability of the language in introductory programming courses. The purpose is to review whether Python is appropriate for these courses. The third part considers the possibilities of teaching Python at the University of Economics in Prague. The main contribution of the thesis is to give an opinion on the fundamental description of the language, to define a framework of criteria, and to pass judgment on the potential use of Python in introductory programming courses.
21

Mo, Eriksson Anton, and Hampus Dunström. "Measuring Architectural Degeneration : In Systems Written in the Interpreted Dynamically Typed Multi-Paradigm Language Python." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-159652.

Abstract:
Architectural degeneration is an ever-present threat to software systems, with no exception based on the domain or tools used. This thesis focuses on architectural degeneration in systems written in multi-paradigm, run-time evaluated languages like Python. A focus on Python in this kind of investigation is, to our knowledge, the first of its kind; the thesis thus investigates whether the methods for measuring architectural degeneration also apply to run-time evaluated languages like Python, as believed by other researchers, who, in contrast to our research, have only studied this phenomenon in systems written in compiled languages such as Java, C, C++ and C#. In our research a tool, PySmell, has been developed to recover architectures and identify the presence of architectural smells in a system. PySmell has been used and evaluated on three different projects: Django, Flask and PySmell itself. The results of PySmell are promising and of great interest, but in need of further investigation and fine-tuning to reach the same level as the architectural recovery tools available for compiled languages. The thesis presents a first step into this new area of detecting architectural degeneration in interpreted languages, revealing issues such as that of extracting dependencies and how that may affect architectural smell detection.
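The abstract singles out dependency extraction as a core difficulty. A minimal static starting point (one possible approach, not PySmell's actual implementation) walks the AST of a module and records imported module names:

```python
# Static import-dependency extraction via the stdlib ast module: parse the
# source, walk the tree, and collect the names of imported modules.
import ast

def module_dependencies(source):
    deps = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            deps.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module)
    return sorted(deps)

src = "import os\nfrom collections import deque\n"
print(module_dependencies(src))  # → ['collections', 'os']
```

Imports performed at run time (e.g. via `importlib` or `__import__` with computed names) escape this static view, which illustrates why dependency extraction is harder in a run-time evaluated language.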
22

Westman, Joakim, and Teodor Marinescu. "C, C++, Java och Python : En prestandajämförelse mellan fyra programmeringsspråk." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2304.

Abstract:
In today’s society computers are playing an increasingly important role. To get a computer to work as intended it has to be programmed. A computer program is written with programming languages. There is an abundance of programming languages available today and there are many differences and similarities between them. The different languages have their advantages and their disadvantages: some of them are intended for fast performance, some to be cheap on memory usage, and some are developed to be easy to program in. In our thesis we have chosen to compare four of today’s most common languages: C, C++, Java and Python. These languages were chosen because we have worked with three of them during our study period (C, C++ and Java). Python was chosen because it is an interpreted language and not a compiled one. It also has a very different syntax compared to the other languages, which makes it interesting. Our comparison, which focuses on performance, has its foundation in the tests we have made, but also in the results of a research survey in which forty software developers from Swedish companies participated. The tests we have made measure the languages’ performance, regarding time, by implementing and running two common algorithms. During these tests we have also chosen to register the amount of memory these algorithms use during runtime. The results we have extracted from our tests and our survey are compiled, and these results are then analysed to be able to compare the four programming languages to each other. The tests that have been done show that Java is the language that performs best, with C and C++ second best and Python performing the worst. Our survey answers, on the other hand, indicate that C and C++ should have outperformed Java.
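Registering both runtime and memory use of an algorithm, as the tests above do, can be sketched with only the standard library (`measure` and `bubble_sort` are invented stand-ins, not the thesis harness):

```python
# Measure wall-clock time and peak allocated memory of one algorithm run.
import time, tracemalloc

def measure(func, *args):
    tracemalloc.start()
    t0 = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak  # value, seconds, peak bytes

def bubble_sort(xs):
    # a typical benchmark algorithm, easy to port across the four languages
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

sorted_xs, secs, peak = measure(bubble_sort, range(300, 0, -1))
print(f"{secs:.4f} s, peak {peak} bytes")
```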
23

Wang, Lingyun. "Qualitative Analysis of the Usability of Three Contemporary Scripting Languages: Perl, Python and Tcl." [Johnson City, Tenn. : East Tennessee State University], 2001. http://etd-submit.etsu.edu/etd/theses/available/etd-0712101-083723/restricted/Wang7242001.pdf.

24

Košulič, Jaroslav. "Univerzální grafický editor jako knihovna a modul pro Python." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235988.

Abstract:
Diagrams, schemes, and graphs in general are widely used for easy-to-read information visualisation. We use them, for example, in school lessons for presenting algorithms, or in technical jobs such as software and hardware development when modelling UML diagrams, database schemes, etc. The Universal Graph Editor project was established two years ago to fill the gap with a software tool providing such a modelling engine. The previous work was resumed in a semester project with the design of dynamic graph drawing (or the drawing of vector graphics in general) and a library for graph manipulation with a C-language interface. This master's thesis continues further by creating a Python module using the developed interface. The documentation and the testing phase conclude the annual work.
25

Conti, Matteo. "Machine Learning Based Programming Language Identification." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20875/.

Abstract:
The advent of the digital era has contributed to the development of new technology sectors, which in turn have created demand for new professional profiles capable of playing a key role in the process of technological innovation. This growing demand has particularly affected the software development sector, following the emergence of new programming languages and new fields in which to apply them. The main component of a piece of software is its source code, which can be represented as an archive of one or more text files containing a series of instructions written in one or more programming languages. Although many of these languages are used in different technology sectors, it often happens that two or more of them share a very similar syntactic and semantic structure. Clearly, this can create confusion when identifying the language within a code fragment, especially if not even the file extension is specified. Indeed, today most of the code available online carries manually specified information about its programming language. In this work we demonstrate that the programming language of a 'generic' source code file can be identified automatically using machine learning algorithms, without any 'a priori' assumption about the extension or any particular information beyond the content of the file. This project follows the line set by earlier research based on the same approach, comparing different feature extraction techniques and classification algorithms with very different characteristics, and seeking to optimize the feature extraction phase for the model under consideration.
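A toy sketch of the underlying idea (the thesis compares real feature extractors and classifiers; this is not its implementation, and the two-language training set is invented): score a snippet against per-language token frequencies learned from labelled examples.

```python
# Tiny content-based language identifier: unigram token counts per language,
# add-one smoothing, and a log-likelihood score for classification.
from collections import Counter
import math, re

def tokens(code):
    # crude lexer: identifiers/keywords plus a few punctuation characters
    return re.findall(r"[A-Za-z_]+|[{}();:=<>#]", code)

def train(samples):
    # samples: {language: [snippet, ...]} -> per-language token counts
    return {lang: Counter(t for s in snips for t in tokens(s))
            for lang, snips in samples.items()}

def classify(model, snippet):
    def score(counts):
        total = sum(counts.values())
        return sum(math.log((counts[t] + 1) / (total + 1)) for t in tokens(snippet))
    return max(model, key=lambda lang: score(model[lang]))

model = train({
    "python": ["def f(x):\n    return x", "import os\nprint(os.name)"],
    "c": ["int main(void) { return 0; }", "#include <stdio.h>"],
})
print(classify(model, "def g(y):\n    return y + 1"))  # → python
```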
26

Silva, Evandro José da. "Um modelo computacional para análise de conformidade de áreas e superfícies de proteção de aeródromos aos critérios da ICAO." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3138/tde-23062017-151532/.

Abstract:
This thesis proposes a computational model for analysing the conformity of aerodrome protection areas and surfaces with the geometric design criteria of ICAO (International Civil Aviation Organization) Annex 14. No open source software with this purpose could be found in the literature. ICAO criteria impose imaginary protection areas and surfaces that start at the vicinity of each runway, leading to a complex set of geometries on the ground and in the airspace. Fixed and movable objects, both inside and outside the aerodrome property limits, are controlled by means of this set of imaginary surfaces. Input data for the proposed model comprise: aerodrome site topography and internal and external boundaries; fixed and movable object positions; aircraft category; approach procedures; and runway system configuration data. The model integrates CAD (Computer Aided Design) and GIS (Geographic Information System) technologies in order to automatically generate georeferenced geometries that take into account a DEM (Digital Elevation Model), internally represented by a TIN (Triangulated Irregular Network) approach. In addition to geometry generation, the proposed model also performs obstacle assessment regarding potential geometric interferences between the protection areas and surfaces and the fixed and movable objects. The model results are output by means of screen plots, an execution console (detected geometric interferences) and KML (Keyhole Markup Language) files, to be exported to virtual globes such as Google Earth. The KML files represent the geometries of the protection areas and surfaces as well as the fixed and movable objects, highlighting detected obstacles. The model was implemented in the Python language and tested for validation, employing both fictitious instances and a real case, related to the Viracopos International Airport (SBKP), in Campinas, Brazil.
The undergone bibliographic search, considering national and international literature, indicates that this research introduces an unprecedented model, filling in a gap in the literature.
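The obstacle check at the heart of such a model can be illustrated with a minimal sketch (not the thesis code): an approach surface is simplified here to a plane rising from the runway threshold at a constant slope, and an object is an obstacle when it pierces that plane. The 2% slope, elevations and distances are invented illustration values.

```python
# Hypothetical sketch of an ICAO-style approach-surface check. The thesis
# model builds full georeferenced 3D geometries over a TIN; this only
# illustrates the core penetration test against one sloped surface.

def approach_surface_elevation(distance_from_threshold, threshold_elevation, slope=0.02):
    """Elevation of the imaginary approach surface at a given distance (metres)."""
    return threshold_elevation + slope * distance_from_threshold

def is_obstacle(object_elevation, distance_from_threshold, threshold_elevation, slope=0.02):
    """An object is an obstacle if it pierces the surface above it."""
    return object_elevation > approach_surface_elevation(
        distance_from_threshold, threshold_elevation, slope)

# An object 1 km from a threshold at 600 m elevation; the surface there is at 620 m.
print(is_obstacle(652.0, 1000.0, 600.0))   # pierces the surface -> True
print(is_obstacle(615.0, 1000.0, 600.0))   # stays below it -> False
```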
27

Bernuz, Fito Efrem. "Cosymlib: a Python library for continuous symmetry measures and its application to problems in structural chemistry." Doctoral thesis, Universitat de Barcelona, 2021. http://hdl.handle.net/10803/672227.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
For many years symmetry has been a useful concept in the study of the spatial organization of atoms in molecules or solids. The presence of symmetry elements in a given molecular structure gives valuable information about its properties and chemical behaviour. However, it has been demonstrated that most molecules in nature tend to adopt shapes which are not fully symmetric, presenting small distortions from the ideal symmetric model structures used to rationalize stereochemical knowledge. Continuous symmetry measures (CSMs) were developed precisely to quantify the amount of asymmetry of a given object by comparing a distorted structure with an ideal symmetric reference. This methodology has been very helpful to classify, for example, the shape of the coordination environment of transition metal atoms in several coordination complexes. In the present thesis, we give an overview of the formalism of CSMs, describing the computational methodologies that have been developed in the past. The main aim of the thesis is the development of Cosymlib, a Python library encompassing all previous algorithms within a unified computational framework that allows seamless computation of different CSMs for a given molecule using a unified format. An extensive discussion of the advantages of applying modern programming techniques, such as object-oriented programming, to the development of a unified computational approach to CSMs is given in a second methodological chapter. Afterwards, the use of the different tools included in Cosymlib for the symmetry analysis of molecular structure is illustrated by applying them to different stereochemical problems related to organometallic coordination complexes, the effect of temperature on the shape of several polyhedral cage molecules, and the effect of temperature and the crystal environment on the shape of phosphate anions.
Durant anys, la simetria ha esdevingut una eina molt útil en l’estudi de l’organització d’àtoms i molècules en sòlids. La presencia d’elements de simetria en una estructura molecular dona informació important sobre les seves propietats i el seu comportament químic. Tot i això, s’ha demostrat que la majoria de molècules a la natura tendeixen a tenir formes que no son totalment simètriques, presentant petites distorsions en el model ideal simètric que s’utilitza per entendre l’estereoquímica d’un compost. Les mesures de simetria en continu (CSMs) es van desenvolupar per tal de quantificar amb precisió com d’asimètric pot ser un objecte comparant una estructura distorsionada amb una referencia amb simetria ideal. Fins ara, aquesta metodologia ha estat útil per classificar, per exemple, la forma de l’entorn de coordinació dels metalls de transició en diversos complexes de coordinació. En aquesta tesis es presenta de manera general el formalisme de les CSMs, descrivint els mètodes computacionals que es van desenvolupar en el passat. L’objectiu principal d’aquesta tesi es el desenvolupament del programa Cosymlib, un llibreria escrita en Python que engloba tots els algoritmes anteriors i els unifica en un sol marc que permet dins del mateix programa calcular diferents CSMs per a una mateixa molècula en un sol format. A més a més, en un segon capítol, s’exemplifica l’avantatge de implementat tècniques de programació modernes, com el llenguatge orientat a objectes, en el desenvolupament d’un programa unificat encarat a les CSMs. Posteriorment, l’ús de les diferents eines que inclou el programa Cosymlib per l’anàlisi de simetria d’una estructura molecular s’il·lustrarà aplicant-lo a diferents problemes d’estereoquímica relacionats amb els complexes de coordinació organometàl·lics, l’efecte de la temperatura en la forma d’un conjunt de molècules polièdriques i, en l’efecte de la temperatura i de l’entorn cristal·lí en la forma de l’anió fosfat.
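The CSM formalism can be sketched in a few lines: the measure compares centred actual vertex positions q_k against an ideal reference p_k as S = 100 · Σ|q_k − p_k|² / Σ|q_k − q_0|², where q_0 is the centroid. The sketch below fixes the vertex pairing and omits the optimisation over rotations, scale and permutations that Cosymlib performs, so it only illustrates the formula, not the library's algorithm.

```python
# Simplified continuous-shape-measure sketch (fixed pairing, 2-D points).
# The real CSM minimises over rotations, scaling and vertex pairings.

def shape_measure(structure, reference):
    def centre(points):
        n = len(points)
        cx = sum(p[0] for p in points) / n
        cy = sum(p[1] for p in points) / n
        return [(p[0] - cx, p[1] - cy) for p in points]

    q = centre(structure)
    p = centre(reference)
    num = sum((qi[0] - pi[0]) ** 2 + (qi[1] - pi[1]) ** 2 for qi, pi in zip(q, p))
    den = sum(qi[0] ** 2 + qi[1] ** 2 for qi in q)
    return 100.0 * num / den

square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
print(shape_measure(square, square))          # identical shapes -> 0.0
distorted = [(1.1, 1), (-1, 1), (-1, -1), (1, -1)]
print(shape_measure(distorted, square) > 0)   # any distortion -> positive measure
```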
28

Valgimigli, Lorenzo. "Job Recommendation Based on Deep Learning Methods for Natural Language Processing." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20467/.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The search for ever more efficient solutions, from the time when human beings lived in caves up to today, has driven us to develop increasingly sophisticated and precise tools. With the arrival of the computer, another great step forward was taken: problems that are complex, yet can be expressed formally, could now be solved quickly. Conversely, there are problems, more or less simple, that are hard to state with any formality, such as understanding a sentence. For this type of problem Artificial Intelligence and Artificial Neural Networks were born, and later, thanks to the arrival of Big Data, Deep Neural Networks. Today they are widely used in various sectors such as banks and hospitals. These new technologies, however, continue to be studied intensively and to impress with the ever better results they achieve. One field where they are applied, and which has received recent attention, is Job Recommendation. It comprises the whole set of technologies and tools used to help a worker find a job and a company find the best candidates for its open positions. This field relies heavily, especially in small organizations, on qualified staff who search for candidates, using some Machine Learning models to simplify the search. This work investigates how the new Deep Neural Network technologies, which have established themselves in various fields such as Natural Language Processing, can also help in the field of Job Recommendation. The idea is to take the best models on NLP tasks and try, with the appropriate modifications, to apply them in a new field with new rules, and finally to evaluate the results obtained and how these models could be applied in practice.
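As a minimal illustration of the matching idea underlying job recommendation (not the deep models of the thesis), candidates and postings can be embedded as vectors and ranked by cosine similarity; here a bag-of-words count vector stands in for a learned transformer embedding, and all texts are invented.

```python
# Toy embedding-based job matcher: rank postings by cosine similarity to a CV.
import math
from collections import Counter

def embed(text):
    """Stand-in for a learned embedding: word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_jobs(cv, jobs):
    cv_vec = embed(cv)
    return sorted(jobs, key=lambda j: cosine(cv_vec, embed(j)), reverse=True)

jobs = ["python developer machine learning",
        "accountant payroll finance",
        "data scientist python deep learning"]
print(rank_jobs("python deep learning engineer", jobs)[0])
# -> "data scientist python deep learning"
```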
29

Parizotto, Giovanna Moreno. "Noções de programação estruturada em Python no ensino de Física: um caminho para o ensino médio por meio da cultura lúdica." Universidade Federal de Goiás, 2017. http://repositorio.bc.ufg.br/tede/handle/tede/7883.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
In this qualitative research with case-study elements, we discuss why the use of notions of structured programming in the Python language constituted a manipulation of play culture for the teaching of Physics in the first year of evening high school. Manipulation, in this case, refers to aspects linked to notions of game and game-related characteristics, recognizing the game as a place of emergence of ludic culture. This theme relates to the researcher's teacher training, as she seeks to enrich the ludic culture of the students with whom she has the greatest didactic difficulty. In the course of the research, the researchers found several characteristics of games during the interventions. This process is discussed in terms of the characteristics of the philosophical game proposed by Brougère (1998) and of the typical behaviours regarded as primary impulses by Caillois (1990) in relation to the term game. We also relate characteristics of the corruption of these primary impulses to the lubricious sense of the term ludic.
Nesta pesquisa qualitativa com elementos de estudo de caso discutimos por que o uso de noções de programação estruturada em linguagem Python constituiu-se como uma manipulação da cultura lúdica para o ensino de Física no primeiro ano do Ensino Médio noturno. Manipulação neste caso, remete-nos a aspectos ligados a noções de jogo e características relacionadas ao jogos, reconhecendo o jogo como lugar de emergência da cultura lúdica. Tal temática está de encontro a formação docente da pesquisadora, que busca enriquecer a cultura lúdica do alunado no qual possui maior dificuldade didática. No decorrer da pesquisa, os pesquisadores encontram várias características dos jogos durante as intervenções. Tal processo é discutido quanto as características do jogo filosófico propostas por Brougère (1998) e também aos comportamentos típicos dos mesmos, tidos como impulsões primárias por Caillois (1990), relacionado ao termo jogo. Relacionamos ainda características de corrupções destas impulsões primárias ao termo lúdico lúbrico.
30

Gennari, Riccardo. "End-to-end Deep Metric Learning con Vision-Language Model per il Fashion Image Captioning." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25772/.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Image captioning is a machine learning task that consists of generating a caption describing the characteristics of an input image. It can be applied, for example, to describe in detail the products for sale on an e-commerce site, improving the website's accessibility and allowing visually impaired customers to make more informed purchases. Generating accurate descriptions for online fashion items is important not only for improving the customers' shopping experience but also for increasing online sales. Beyond the need to present an item's attributes correctly, describing products with the right language can help capture customers' attention. In this thesis, our goal is to develop a system able to generate a caption that describes in detail the input image of a fashion-industry product, be it a piece of clothing or some kind of accessory. In recent years many studies have proposed solutions based on convolutional networks and LSTMs for this purpose. In this project we instead propose an encoder-decoder architecture that uses the Vision Transformer model to encode images and GPT-2 to generate the texts. We also study how deep metric learning techniques applied end-to-end during training affect the metrics and the quality of the captions generated by our model.
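The autoregressive generation step of such an encoder-decoder captioner can be sketched with a greedy decoding loop. A stub scoring function stands in for the real GPT-2 decoder conditioned on Vision Transformer features; the vocabulary and the caption it walks through are invented.

```python
# Greedy decoding loop of an encoder-decoder captioner (stubbed decoder).

VOCAB = ["<eos>", "a", "red", "leather", "handbag"]

def stub_decoder(image_features, prefix):
    """Pretend logits: walk through a fixed caption, then emit <eos>.
    A real decoder would score the whole vocabulary from image features
    and the generated prefix."""
    target = ["a", "red", "leather", "handbag"]
    next_word = target[len(prefix)] if len(prefix) < len(target) else "<eos>"
    return [1.0 if w == next_word else 0.0 for w in VOCAB]

def greedy_caption(image_features, max_len=10):
    caption = []
    for _ in range(max_len):
        logits = stub_decoder(image_features, caption)
        word = VOCAB[max(range(len(VOCAB)), key=lambda i: logits[i])]
        if word == "<eos>":
            break
        caption.append(word)
    return " ".join(caption)

print(greedy_caption(image_features=None))  # -> "a red leather handbag"
```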
31

Catanio, Jonathan Joseph. "Leave the Features: Take the Cannoli." DigitalCommons@CalPoly, 2018. https://digitalcommons.calpoly.edu/theses/1886.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Programming languages like Python, JavaScript, and Ruby are becoming increasingly popular due to their dynamic capabilities. These languages are often much easier to learn than other, statically type checked, languages such as C++ or Rust. Unfortunately, these dynamic languages come at the cost of losing compile-time optimizations. Python is arguably the most popular language for data scientists and researchers in the artificial intelligence and machine learning communities. As this research becomes increasingly popular, and the problems these researchers face become increasingly computationally expensive, questions are being raised about the performance of languages like Python. Language features found in Python, more specifically dynamic typing and run-time modification of object attributes, preclude common static analysis optimizations that often yield improved performance. This thesis attempts to quantify the cost of dynamic features in Python. Namely, the run-time modification of objects and scope as well as the dynamic type system. We introduce Cannoli, a Python 3.6.5 compiler that enforces restrictions on the language to enable opportunities for optimization. The Python code is compiled into an intermediate representation, Rust, which is further compiled and optimized by the Rust pipeline. We show that the analyzed features cause a significant reduction in performance and we quantify the cost of these features for language designers to consider.
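The dynamic features being measured are easy to demonstrate in plain Python: attributes may be added to an object at run time, so a compiler cannot assume a fixed object layout. Declaring `__slots__` is shown here only as the closest built-in analogue of the layout restriction such a compiler enforces, not as Cannoli's actual mechanism.

```python
# Run-time modification of object attributes, and what restricting it looks like.

class Dynamic:
    pass

class Restricted:
    __slots__ = ("x",)   # fixed layout: only 'x' may exist

d = Dynamic()
d.anything = 42          # legal: the object's layout is mutated at run time
print(d.anything)

r = Restricted()
r.x = 1
try:
    r.anything = 42      # forbidden: the layout is fixed
except AttributeError as e:
    print("rejected:", e)
```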
32

Kone, Alassane. "Modelling and Decision Support for a Desertification Issue Using Cellular Automata Approach." Electronic Thesis or Diss., Guyane, 2023. http://www.theses.fr/2023YANE0001.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
La désertification, en tant que problématique majeure affectant la vie sur Terre, a d’énormes conséquences qui dégradent la qualité de vie des hommes, leurs activités quotidiennes et leurs moyens de subsistance. Pour lutter contre son avancée, les organisations internationales ont mis en place des actions pour ralentir ou arrêter son expansion et réduire ses impacts.Cette thèse s’inscrit dans la lutte contre la désertification en modélisant le processus de dégradation des terres conduisant à la désertification. Deux modèles sont développés : le premier combine des automates cellulaires continus et l'évaluation MEDALUS, évaluant la désertification sur la base des indices des facteurs sol, végétation, climat et management. Le deuxième modèle simule la dégradation des terres en utilisant le couple automates cellulaires/Modèle MEDALUS, enrichi par des facteurs anthropiques comme les pratiques d'utilisation des terres, le facteur d'exploitabilité et l’appartenance foncière, formant le Modèle Amélioré de Désertification. Ce modèle sert de base au logiciel DESERTIfication Cellular Automata Software (DESERTICAS), permettant de simuler l'évolution spatio- temporelle de la dégradation des terres. DESERTICAS facilite l'exploration de scénarios de dégradation des terres dans le temps et l'espace.Ces modèles développés intègrent des processus dynamiques dans le modèle MEDALUS à la base statique et permettent d’étendre la notion d’état des automates cellulaires classiques à des états continus. L’identification d’un facteur prédominant permet d’agir sur tout le système conduisant à la désertification. Notre étude met en évidence le management, action humaine, comme facteur prédominant affectant indirectement les autres facteurs. Agir positivement sur le management permet d’interrompre les sources de dégradation, de ralentir ou arrêter la dégradation des terres. 
La théorie du contrôle est également appliquée au modèle d'automates cellulaires développés et permet d’agir sur le facteur prédominant à partir des algorithmes génétiques. En intégrant des actions de protection des terres dans les simulations liées à la désertification, le logiciel DESERTICAS devient un outil d'aide à la décision
Desertification, as a significant challenge impacting life on Earth, has extensive consequences that degrade human life quality, daily activities, and livelihoods. In response, international organizations have implemented actions to slow or stop its progress and reduce its impacts. This thesis focuses on combating desertification by modelling the process of land degradation leading to desertification. Two models are developed: the first combines continuous Cellular Automata and the MEDALUS assessment, evaluating desertification based on soil, vegetation, climate, and management. The second model simulates land degradation using cellular automata approach, enriched with anthropogenic factors like land use practices, exploitability factor and ownership, forming the Enhanced Model of Desertification. This model serves as the basis for DESERTIfication Cellular Automata Software (DESERTICAS), simulating spatio- temporal land degradation evolution. DESERTICAS facilitates scenario exploration by simulating land degradation progression over time and space. The models incorporate dynamic processes into the MEDALUS model, expanding classical Cellular Automata to continuous states. Identifying a predominant factor influencing desertification, management emerges as crucial, affecting other factors indirectly. Positive management actions can interrupt degradation sources, slowing or halting land degradation. The thesis also applies control theory to the Cellular Automata model, aiming to influence the predominant factor using Genetic Algorithms. By integrating land protection actions into desertification simulations, the DESERTICAS software becomes a decision support tool
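A continuous cellular automaton update of the kind described can be sketched as follows. The neighbourhood rule, the coupling constant and the way the management factor enters are invented placeholders for illustration, not the MEDALUS-based update of the actual model.

```python
# Continuous CA sketch: each cell holds a degradation index in [0, 1] and
# drifts toward its neighbourhood mean, scaled by a management factor
# (management = 0 models perfect protection and halts the spread).

def step(grid, management=1.0, coupling=0.5):
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(rows):
        for j in range(cols):
            neigh = [grid[x][y]
                     for x in range(max(0, i - 1), min(rows, i + 2))
                     for y in range(max(0, j - 1), min(cols, j + 2))
                     if (x, y) != (i, j)]
            mean = sum(neigh) / len(neigh)
            drift = coupling * management * (mean - grid[i][j])
            new[i][j] = min(1.0, max(0.0, grid[i][j] + drift))
    return new

grid = [[0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],   # one fully degraded cell
        [0.0, 0.0, 0.0]]
print(step(grid)[0][0] > 0)                   # degradation spreads to neighbours
print(step(grid, management=0.0)[0][0] == 0)  # perfect management halts it
```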
33

Van, Wyk Desmond Eustin. "Virtual human modelling and animation for real-time sign language visualisation." University of the Western Cape, 2008. http://hdl.handle.net/11394/2998.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Magister Scientiae - MSc
This thesis investigates the modelling and animation of virtual humans for real-time sign language visualisation. Sign languages are fully developed natural languages used by Deaf communities all over the world. These languages are communicated in a visual-gestural modality by the use of manual and non-manual gestures and are completely different from spoken languages. Manual gestures include the use of hand shapes, hand movements, hand locations and orientations of the palm in space. Non-manual gestures include the use of facial expressions, eye-gazes, head and upper body movements. Both manual and non-manual gestures must be performed for sign languages to be correctly understood and interpreted. To effectively visualise sign languages, a virtual human system must have models of adequate quality and be able to perform both manual and non-manual gesture animations in real-time. Our goal was to develop a methodology and establish an open framework by using various standards and open technologies to model and animate virtual humans of adequate quality to effectively visualise sign languages. This open framework is to be used in a Machine Translation system that translates from a verbal language such as English to any sign language. Standards and technologies we employed include H-Anim, MakeHuman, Blender, Python and SignWriting. We found it necessary to adapt and extend H-Anim to effectively visualise sign languages. The adaptations and extensions we made to H-Anim include imposing joint rotational limits, developing flexible hands and the addition of facial bones based on the MPEG-4 Facial Definition Parameters facial feature points for facial animation.
By using these standards and technologies, we found that we could circumvent a few difficult problems, such as: modelling high quality virtual humans; adapting and extending H-Anim; creating a sign language animation action vocabulary; blending between animations in an action vocabulary; sharing animation action data between our virtual humans; and effectively visualising South African Sign Language.
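The joint rotational limits mentioned above amount to clamping each joint's rotation into an allowed range before a pose is applied. A minimal sketch of that idea; the joint name and limit values are invented placeholders, not H-Anim data.

```python
# Clamp per-axis Euler angles of a skeleton joint to anatomical limits.

JOINT_LIMITS = {
    # joint name: (min_deg, max_deg) per axis (x, y, z) - illustrative values
    "l_elbow": ((0, 150), (-90, 90), (0, 0)),
}

def clamp_rotation(joint, angles_deg):
    limits = JOINT_LIMITS[joint]
    return tuple(max(lo, min(hi, a)) for a, (lo, hi) in zip(angles_deg, limits))

print(clamp_rotation("l_elbow", (170, 0, 10)))  # -> (150, 0, 0)
```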
South Africa
34

Cavallucci, Martina. "Speech Recognition per l'italiano: Sviluppo e Sperimentazione di Soluzioni Neurali con Language Model." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
E-mail and messaging services have significantly changed human communication, but speech is still the most important method of communication between human beings. Automatic speech recognition (ASR) is therefore particularly relevant, since it provides a transcription of spoken language that can be processed by automated systems. With smart speakers such as Google Home, Alexa or Siri, ASR is already an integral part of many households and is used to play music, answer questions or control other smart devices such as a home automation system. However, ASR can also be found in many other systems, such as dictation systems, speech translators or voice user interfaces. More and more companies understand its potential, especially for improving business processes. This thesis work aims to experiment with neural models for transcribing webinars created by the host company Maggioli, where the internship took place, thus obtaining transcriptions useful for information retrieval and management. To this end, models based on the recent Transformers were used; thanks to self-supervised learning, which learns from unlabelled data, good results could be obtained on datasets with Italian audio and transcriptions, for which far fewer resources are available than for English.
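Self-supervised speech models of the wav2vec 2.0 family are commonly fine-tuned with a CTC objective (an assumption here; the abstract does not name the exact loss), whose greedy decoding collapses repeated per-frame predictions and drops the blank symbol. A minimal sketch with invented frame labels:

```python
# Greedy CTC collapse: merge repeats, remove blanks.

BLANK = "_"

def ctc_collapse(frame_labels):
    out = []
    prev = None
    for label in frame_labels:
        if label != prev and label != BLANK:
            out.append(label)
        prev = label
    return "".join(out)

frames = list("__cc_ii__aa__oo_")   # per-frame argmax labels (invented)
print(ctc_collapse(frames))          # -> "ciao"
```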
35

Gilbert, Andrew. "Supported Programming for Beginning Developers." DigitalCommons@CalPoly, 2019. https://digitalcommons.calpoly.edu/theses/2032.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Testing code is important, but writing test cases can be time consuming, particularly for beginning programmers who are already struggling to write an implementation. We present TestBuilder, a system for test case generation which uses an SMT solver to generate inputs to reach specified lines in a function, and asks the user what the expected outputs would be for those inputs. The resulting test cases check the correctness of the output, rather than merely ensuring the code does not crash. Further, by querying the user for expectations, TestBuilder encourages the programmer to think about what their code ought to do, rather than assuming that whatever it does is correct. We demonstrate, using mutation testing of student projects, that tests generated by TestBuilder perform better than merely compiling the code using Python’s built-in compile function, although they underperform the tests students write when required to achieve 100% test coverage.
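TestBuilder's input generation relies on an SMT solver; the sketch below plainly swaps that for a brute-force search so it stays self-contained, keeping only the shape of the workflow: find an input that reaches a target branch, then attach an expected output (hard-coded here, where TestBuilder would query the user). The toy function under test is invented.

```python
# Simplified stand-in for SMT-based test input generation.

def classify(n):
    if n % 15 == 0:
        return "fizzbuzz"      # suppose we want a test case reaching this line
    return "other"

def find_input_reaching_branch(predicate, search_space):
    """Search for an input satisfying the branch condition
    (an SMT solver would solve the path constraint instead)."""
    for candidate in search_space:
        if predicate(candidate):
            return candidate
    return None

n = find_input_reaching_branch(lambda x: x % 15 == 0 and x > 0, range(1, 100))
print(n)                           # -> 15
assert classify(n) == "fizzbuzz"   # the generated test checks the expected output
```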
36

Scaglione, Emanuel. "BlenderBot 2.0: Studio e Modellazione di un Chatbot basato su Transformers." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Transformers have achieved state-of-the-art results in every area of Natural Language Processing (NLP) and Natural Language Understanding (NLU). In this thesis work, the original architecture and some of the improvements made to it over recent years are studied. In a second phase, a groundbreaking chatbot released in July 2021, called BlenderBot 2.0, is examined. This Facebook bot is capable both of exploiting an easily extendable and replaceable long-term memory to store information about its interlocutors and the outside world, and of performing online searches when faced with questions whose answers it is not sure it knows. All of this was evaluated not only in terms of the quality of the results generated by the models, but also from the standpoint of the resources employed. The goal was to minimize the memory consumption and the time needed to train the models, so that their abilities can be made accessible at scale even on inexpensive hardware, consequently reducing the costs for anyone who wants to rely on them; a great step for individual enthusiasts, but above all for companies interested in deploying them in a production context.
37

Супрун, О. П. "Інтелектуальна технологія обробки природної мови". Master's thesis, Сумський державний університет, 2021. https://essuir.sumdu.edu.ua/handle/123456789/86886.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
38

Bick, Matthew A. "Central Force Optimization - Analysis of Data Structures & Multiplicity Factor." University of Toledo / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1447208248.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
39

Montalti, Giacomo. "Identificazione di farmaci e dispositivi medici equivalenti con tecniche di natural language processing e deep learning." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16828/.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Deep learning is a relatively young field whose potential is still largely unexplored, capable of processing data in an even deeper way, and it is covered in detail in this thesis. This technology has drastically improved on results previously achieved in a great many sectors, enabling, for example, the development of self-driving cars, virtual assistants able to understand a conversation and answer our questions, and medical devices capable of identifying tumour masses with greater precision than humans. In this work, several recent approaches in natural language processing (NLP) and deep learning (DL) are analysed and tested, with the aim of identifying equivalent medical products from their short, unstructured textual descriptions.
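As a toy stand-in for the deep NLP models investigated (plainly a swapped-in technique, not the thesis approach), equivalent products can be matched by token overlap between their descriptions; the catalogue entries below are invented examples.

```python
# Match medical product descriptions by Jaccard token overlap.

def jaccard(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def most_similar(query, catalogue):
    return max(catalogue, key=lambda d: jaccard(query, d))

catalogue = ["paracetamol 500 mg tablets",
             "ibuprofen 200 mg tablets",
             "sterile latex surgical gloves"]
print(most_similar("paracetamol 500mg tablet blister", catalogue))
# -> "paracetamol 500 mg tablets"
```

A learned representation would also capture synonyms and abbreviations ("500mg" vs "500 mg"), which is exactly where simple token overlap fails and deep models help.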
40

Rasocha, David. "Návrh řídicího systému pro malý zkušební stroj." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-417777.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
This thesis focuses on the design of a small testing machine for measuring the tensile strength of materials. Appropriate hardware for driving the motor over serial communication will be used. The main drive is a stepper motor with microstepping. Instructions for the motor are provided by a microcontroller that communicates with an application on a computer. This application will provide all the user functions necessary for operating the device.
41

Sola, Yoann. "Contributions to the development of deep reinforcement learning-based controllers for AUV." Thesis, Brest, École nationale supérieure de techniques avancées Bretagne, 2021. http://www.theses.fr/2021ENTA0015.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
L’environnement marin est un cadre très hostile pour la robotique. Il est fortement non-structuré, très incertain et inclut beaucoup de perturbations externes qui ne peuvent pas être facilement prédites ou modélisées. Dans ce travail, nous allons essayer de contrôler un véhicule sous-marin autonome (AUV) afin d’effectuer une tâche de suivi de points de cheminement, en utilisant un contrôleur basé sur de l’apprentissage automatique. L’apprentissage automatique a permis de faire des progrès impressionnants dans de nombreux domaines différents ces dernières années, et le sous-domaine de l’apprentissage profond par renforcement a réussi à concevoir plusieurs algorithmes très adaptés au contrôle continu de systèmes dynamiques. Nous avons choisi d’implémenter l’algorithme du Soft Actor-Critic (SAC), un algorithme d’apprentissage profond par renforcement régularisé en entropie permettant de simultanément remplir une tâche d’apprentissage et d’encourager l’exploration de l’environnement. Nous avons comparé un contrôleur basé sur le SAC avec un contrôleur Proportionnel-Intégral-Dérivé (PID) sur une tâche de suivi de points de cheminement et en utilisant des métriques de performance spécifiques. Tous ces tests ont été effectués en simulation grâce à l’utilisation de l’UUV Simulator. Nous avons décidé d’appliquer ces deux contrôleurs au RexROV 2, un véhicule sous-marin téléguidé (ROV) de forme cubique et à six degrés de liberté converti en AUV. Grâce à ces tests, nous avons réussi à proposer plusieurs contributions intéressantes telles que permettre au SAC d’accomplir un contrôle de l’AUV de bout en bout, surpasser le contrôleur PID en terme d’économie d’énergie, et réduire la quantité d’informations dont l’algorithme du SAC a besoin. 
De plus nous proposons une méthodologie pour l’entraînement d’algorithmes d’apprentissage profond par renforcement sur des tâches de contrôle, ainsi qu’une discussion sur l’absence d’algorithmes de guidage pour notre contrôleur d’AUV de bout en bout
The marine environment is a very hostile setting for robotics. It is strongly unstructured, very uncertain and includes many external disturbances which cannot be easily predicted or modelled. In this work, we try to control an autonomous underwater vehicle (AUV) in order to perform a waypoint tracking task, using a machine learning-based controller. Machine learning has made impressive progress in many different domains in recent years, and the subfield of deep reinforcement learning has produced several algorithms well suited to the continuous control of dynamical systems. We chose to implement the Soft Actor-Critic (SAC) algorithm, an entropy-regularized deep reinforcement learning algorithm that fulfils a learning task while simultaneously encouraging exploration of the environment. We compared a SAC-based controller with a Proportional-Integral-Derivative (PID) controller on a waypoint tracking task, using specific performance metrics. All tests were performed in simulation thanks to the use of the UUV Simulator. We applied these two controllers to the RexROV 2, a six-degrees-of-freedom, cube-shaped remotely operated underwater vehicle (ROV) converted into an AUV. Through these tests, we propose several contributions, such as making the SAC achieve end-to-end control of the AUV, outperforming the PID controller in terms of energy saving, and reducing the amount of information needed by the SAC algorithm. Moreover, we propose a methodology for training deep reinforcement learning algorithms on control tasks, as well as a discussion about the absence of guidance algorithms for our end-to-end AUV controller.
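The PID baseline referred to above can be sketched in a few lines. The gains, time step and 1-D unit-mass plant below are illustrative choices, not the thesis tuning of the RexROV 2.

```python
# Discrete-time PID controller driving a unit-mass "vehicle" to a waypoint.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def command(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Simulate 20 s of closed-loop motion toward a waypoint at x = 1.0.
pid, x, v, dt = PID(4.0, 0.0, 3.0, 0.05), 0.0, 0.0, 0.05
for _ in range(400):
    u = pid.command(1.0, x)   # control force
    v += u * dt               # unit mass: acceleration = force
    x += v * dt
print(abs(x - 1.0) < 0.05)    # converged near the waypoint
```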
42

Pachev, Ivan. "GPUMap: A Transparently GPU-Accelerated Map Function." DigitalCommons@CalPoly, 2017. https://digitalcommons.calpoly.edu/theses/1704.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
As GPGPU computing becomes more popular, it will be used to tackle a wider range of problems. However, due to the current state of GPGPU programming, programmers are typically required to be familiar with the architecture of the GPU in order to effectively program it. Fortunately, there are software packages that attempt to simplify GPGPU programming in higher-level languages such as Java and Python. However, these software packages do not attempt to abstract the GPU-acceleration process completely. Instead, they require programmers to be somewhat familiar with the traditional GPGPU programming model, which involves some understanding of GPU threads and kernels. In addition, prior to using these software packages, programmers are required to transform the data they would like to operate on into arrays of primitive data. Typically, such software packages restrict the use of object-oriented programming when implementing the code to operate on this data. This thesis presents GPUMap, a proof-of-concept GPU-accelerated map function for Python. GPUMap aims to hide all the details of the GPU from the programmer, and allows the programmer to accelerate programs written in normal Python code that operate on arbitrarily nested objects using a majority of Python syntax. Using GPUMap, certain types of Python programs can be accelerated by up to 100 times over normal Python code. There are also software packages that provide simplified GPU acceleration to distributed computing frameworks such as MapReduce and Spark. Unfortunately, these packages do not provide a completely abstracted GPU programming experience, which conflicts with the purpose of the distributed computing frameworks: to abstract the underlying distributed system.
This thesis also presents the GPU-accelerated RDD (GPURDD), a type of Spark Resilient Distributed Dataset (RDD) which incorporates GPUMap into its map, filter, and foreach methods in order to allow Spark applications to make use of the abstracted GPU acceleration provided by GPUMap.
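The interface idea — a map call that hides all GPU details while accepting arbitrarily nested objects — can be sketched as follows. `gpu_map` here is a hypothetical serial stand-in for illustration, not the real GPUMap implementation, which translates the function and serializes the objects for the GPU:

```python
def gpu_map(func, items):
    """Hypothetical stand-in for a transparently GPU-accelerated map.

    A real implementation (as GPUMap does) would JIT-translate `func`
    and serialize `items` for the GPU; this sketch applies the function
    serially, so the calling code looks identical either way.
    """
    return [func(item) for item in items]


class Particle:
    """An ordinary Python object; nothing is flattened into primitive arrays."""

    def __init__(self, x, y):
        self.x, self.y = x, y


particles = [Particle(i, i * 2) for i in range(4)]
speeds = gpu_map(lambda p: (p.x ** 2 + p.y ** 2) ** 0.5, particles)
```

The point of the abstraction is that the caller never sees threads, kernels, or primitive-array marshalling — the same property GPURDD extends to Spark's map, filter, and foreach.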
43

Lat, Radek. "Nástroj pro automatické kategorizování webových stránek." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236054.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
This diploma thesis describes the design and implementation of a tool for the automatic categorization of web pages. The goal of the tool is to learn, from sample web pages, what each category looks like. The tool should then be able to assign the learned categories to previously unseen web pages. It should support multiple categories and languages. Advanced techniques of machine learning, language detection and data mining were used to develop the tool. It is based on open-source libraries and written in Python 3.3.
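The learning step described above — learning what each category looks like from sample pages — can be sketched with a toy bag-of-words classifier. This is a deliberate simplification (the thesis uses more advanced machine-learning techniques), and the categories and documents are invented:

```python
from collections import Counter


def train(samples):
    """samples: {category: [documents]} -> per-category word counts."""
    return {cat: Counter(w for doc in docs for w in doc.lower().split())
            for cat, docs in samples.items()}


def classify(model, document):
    """Pick the category with the highest add-one-smoothed word likelihood."""
    words = document.lower().split()

    def score(counts):
        total = sum(counts.values()) + len(counts)
        prob = 1.0
        for w in words:
            prob *= (counts[w] + 1) / total
        return prob

    return max(model, key=lambda cat: score(model[cat]))


model = train({
    "sport": ["football match score goal", "tennis match win"],
    "tech": ["python code compiler bug", "web server code"],
})
```

A real tool would add language detection and HTML text extraction before this step, as the abstract indicates.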
44

Volonnino, Chiara. "Dialogo uomo-macchina nella piattaforma robotica Nao Aldebaran." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14264/.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
This thesis focuses on the development of a dialogue application for the humanoid robot Nao Aldebaran that makes human-machine interaction as natural as possible, and therefore suitable for use by anyone without particular IT skills or programming abilities. Nao, developed in 2004 by the French company Aldebaran Robotics, is one of the most famous and best social robots in the world. The study aims at the design and development of a knowledge base, extended and integrated with a set of more complex rules, that makes it possible to converse with the robot on an arbitrary topic, so that the robot can interact and answer questions posed to it fully autonomously. This set of rules is implemented using the QiChat module made available by the Nao SDK, while Python is used for topic management. The requirements and design of the application, created with the aim of allowing any user to hold an informal conversation with the robot, are illustrated in detail.
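A rule of the kind described might look like the following QiChat topic sketch, based on the topic syntax of the NAOqi SDK; the topic name, concept, and phrasings are invented for this example:

```
topic: ~small_talk()
language: enu

concept:(greetings) [hello hi "good morning"]

u:(~greetings) Hello! What would you like to talk about?
u:(what can you do) I can chat about any topic in my knowledge base.
```

Each `u:` rule pairs a recognized user input with the robot's reply; `concept:` groups interchangeable trigger phrases.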
45

Borghesi, Andrea. "Topic Analysis della letteratura scientifica sul tema Computer Chess con Metodi di Text Mining Non Supervisionati." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24252/.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Design and implementation of unsupervised text mining models on a dataset of unstructured data: articles on the history of computer chess. Topics related to Natural Language Processing (NLP) were therefore addressed. In addition, text augmentation techniques were applied to balance the classes of the dataset. The models used include LDA, word embeddings, clustering algorithms and Transformers.
46

Fiordalisi, Saverio. "Modélisation tridimensionelle de la fermeture induite par plasticité lors de la propagation d'une fissure de fatigue dans l'acier 304L." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2014. http://www.theses.fr/2014ESMA0018/document.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
This PhD thesis deals with the problem of fatigue cracking, detected notably in nuclear structures, and is a continuation of work already carried out in the laboratory. The objective of this study is to provide a numerical tool for predicting the phenomenon of plasticity-induced crack closure during the propagation of a fatigue crack in a CT specimen made of 304L stainless steel, taking into account the simultaneous influence of the crack front shape and the crack length. This was first considered through three-dimensional numerical models in ABAQUS, with pre-imposed crack front geometries. The evolutions of the local stress intensity factors (SIF) along the crack fronts and over the whole propagation were compared. Then, a numerical tool using the ABAQUS code and the PYTHON programming language was developed to automatically predict the crack shape evolution as a function of the input data (geometry, loads, boundary conditions, contact definition during propagation, mesh), starting from a straight notch 0.1 mm long. The local effective SIF range ΔKeff was assumed to be the driving force of the propagation. Targeted fatigue tests were carried out to allow a critical comparison with the numerical results, in terms of final crack front shapes under the different imposed loading conditions.
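As a much-simplified, one-dimensional illustration of SIF-driven fatigue crack growth (the thesis works with full 3D ABAQUS models and crack closure, not this), a Paris-law integration can be sketched; the constants below are illustrative placeholders, not calibrated for 304L steel:

```python
import math


def delta_k(delta_sigma, a):
    """SIF range for a centre crack, ΔK = Δσ·√(π·a) (Δσ in MPa, a in m)."""
    return delta_sigma * math.sqrt(math.pi * a)


def grow_crack(a0, delta_sigma, c, m, cycles, dn=1000):
    """Explicit-Euler integration of the Paris law da/dN = C·(ΔK)^m."""
    a = a0
    for _ in range(cycles // dn):
        a += c * delta_k(delta_sigma, a) ** m * dn
    return a


# Illustrative Paris constants only; a 1 mm initial crack under a
# 100 MPa stress range grows to a few millimetres over 500,000 cycles.
a_final = grow_crack(a0=1e-3, delta_sigma=100.0, c=1e-11, m=3.0, cycles=500_000)
```

Crack closure effectively replaces ΔK by the smaller ΔKeff in such a law, which is why predicting closure along the 3D front matters for the propagation shape.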
47

Trombini, Marion. "Couplage endommagement-grandes déformations dans une modélisation multi-échelle pour composites particulaires fortement chargés." Thesis, Chasseneuil-du-Poitou, Ecole nationale supérieure de mécanique et d'aérotechnique, 2015. http://www.theses.fr/2015ESMA0002/document.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
This study is devoted to the multi-scale modelling of highly-filled particulate composites. The estimation method, called the "Morphological Approach" (M.A.), is based on a geometrical and kinematical schematization of the composite which gives access both to the local fields and to the homogenized response. In order to evaluate the predictive capacities of the M.A. in linear elasticity with damage evolution, the M.A. is tested for its ability to account for particle size and interaction effects on the debonding chronology. For that purpose, simple periodic, random monomodal and bimodal numerically generated microstructures are considered. The results are consistent with literature data: debonding of large particles occurs before that of smaller particles, and the higher the particle volume fraction, the sooner the debonding. Then, the objective is to couple two nonlinearities which were studied separately in previous versions of the M.A.: damage by particle/matrix debonding, and finite strains. The localization-homogenization problem is reformulated analytically from first principles, and the void nucleation criterion is extended to the finite-strain context. The resulting, strongly nonlinear problem is solved numerically through a Newton-Raphson algorithm. The underlying solving steps (computation of the tangent matrix, coding in the Python® language) are detailed. Progressive evaluations (sound and damaged materials) validate the numerical implementation, and the size and interaction effects are then reproduced in finite strains.
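The Newton-Raphson scheme mentioned above can be illustrated on a scalar problem; the thesis applies the same iteration, with a tangent matrix in place of the scalar derivative, to the full nonlinear localization-homogenization system:

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 by Newton-Raphson iteration: x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    raise RuntimeError("Newton-Raphson did not converge")


# Example: the positive root of f(x) = x^2 - 2, i.e. sqrt(2).
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

The quadratic convergence of this iteration is what makes computing an exact tangent matrix worthwhile in the coupled damage/finite-strain problem.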
48

Шишко, Артур Юрійович. "Інтелектуальна система кластеризації музикальних творів за жанрами". Master's thesis, КПІ ім. Ігоря Сікорського, 2019. https://ela.kpi.ua/handle/123456789/32182.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
Master's thesis: 62 p., 8 fig., 23 tables, 2 appendices, 14 sources. The object of study is a dataset of musical works, each represented as a set of characteristics, together with the content of these data. The purpose of the work is to analyze information about musical works using data mining tools in order to cluster them by genre. The thesis analyzes the existing tools used in data analysis, identifies their main advantages and disadvantages, and proposes a method for clustering musical works. The architecture of a system that performs this clustering is designed, and the proposed architecture is implemented as a web application. In the future, it is recommended to extend this thesis by taking into account more characteristics of musical works and expanding the geographical scope of the research.
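Clustering works by their characteristics can be sketched with a tiny one-dimensional k-means in pure Python; the "tempo" feature and values are invented for illustration, and a real system would cluster multi-dimensional feature vectors:

```python
def kmeans_1d(values, k, iters=20):
    """Tiny 1-D k-means: returns the final centroids, sorted."""
    # Seed centroids by taking evenly spaced sorted values.
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)


# Hypothetical one-dimensional "tempo" feature for eight tracks:
# four slow pieces and four fast ones fall into two clusters.
tempos = [60, 62, 64, 66, 118, 120, 122, 124]
centroids = kmeans_1d(tempos, k=2)
```

Each cluster found this way would then be interpreted as a genre candidate.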
49

Neto, Dorival Piedade. "On the Generalized Finite Element Method in nonlinear solid mechanics analyses." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/18/18134/tde-20012014-094606/.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The Generalized Finite Element Method (GFEM) is a numerical method based on the Partition of Unity (PU) concept and inspired by both the Partition of Unity Method (PUM) and the hp-Cloud method. In the GFEM, the PU is provided by first-degree Lagrangian interpolation functions, defined over a mesh of elements similar to Finite Element Method (FEM) meshes. In fact, the GFEM can be considered an extension of the FEM in which enrichment functions can be applied in specific regions of the problem domain to improve the solution. This technique has been successfully employed to solve problems presenting discontinuities and singularities, such as those that arise in Fracture Mechanics. However, most publications on the method are related to linear analyses. The present thesis is a contribution to the few studies of nonlinear Solid Mechanics analyses by means of the GFEM. One of its main topics is the derivation of a segment-to-segment generalized contact element based on the mortar method. Material and kinematic nonlinear phenomena are also considered in the numerical models. An object-oriented design was developed for the implementation of a GFEM nonlinear analysis framework written in the Python programming language. The results validate the formulation and demonstrate the gains and possible drawbacks observed for the GFEM nonlinear approach.
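The partition-of-unity property on which the GFEM rests can be checked in a few lines on a one-dimensional mesh: the first-degree (hat) shape functions attached to the nodes sum to one everywhere on the domain, which is what allows enrichment functions to be multiplied in locally. The mesh below is an arbitrary example:

```python
def hat(i, nodes, x):
    """Piecewise-linear FEM shape (hat) function attached to nodes[i]."""
    xi = nodes[i]
    left = nodes[i - 1] if i > 0 else xi
    right = nodes[i + 1] if i < len(nodes) - 1 else xi
    if left < x <= xi:
        return (x - left) / (xi - left)   # rising branch
    if xi <= x < right:
        return (right - x) / (right - xi)  # falling branch
    return 1.0 if x == xi else 0.0


nodes = [0.0, 0.5, 1.0]


def pu_sum(x):
    """The hat functions form a partition of unity: they sum to 1 on [0, 1]."""
    return sum(hat(i, nodes, x) for i in range(len(nodes)))
```

In the GFEM, the approximation at a node then takes the form φ_i(x)·(a_i + b_i·ψ(x)), where ψ is a problem-specific enrichment function.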
50

Samia, Michel. "Databáze XML pro správu slovníkových dat." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-412859.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
Abstract:
The following diploma thesis deals with dictionary data processing, especially data in XML-based formats. First, the reader is acquainted with the linguistic and lexicographical terms used in this work. Then the particular types of lexicographical data formats and specific formats are introduced, and their advantages and disadvantages are discussed. According to previously set criteria, the LMF format was chosen for the design and implementation of a Python application, which focuses especially on intelligently merging several dictionaries into one. After passing all unit tests, this application was used for processing the LMF dictionaries located on the faculty server of the research group for natural language processing. Finally, the advantages and disadvantages of this application are discussed and ways of further use and extension are suggested.
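The merging idea can be sketched with Python's standard `xml.etree` on a simplified lexicon format; the element and attribute names here are invented for illustration, and the real LMF schema is considerably richer:

```python
import xml.etree.ElementTree as ET


def merge_lexicons(xml_a, xml_b):
    """Merge two simplified lexicon documents, deduplicating entries by lemma.

    The <lexicon>/<entry lemma="..."> structure is a simplification,
    not the actual LMF schema.
    """
    root_a = ET.fromstring(xml_a)
    root_b = ET.fromstring(xml_b)
    seen = {entry.get("lemma") for entry in root_a.findall("entry")}
    for entry in root_b.findall("entry"):
        if entry.get("lemma") not in seen:
            root_a.append(entry)
            seen.add(entry.get("lemma"))
    return root_a


merged = merge_lexicons(
    '<lexicon><entry lemma="dog"/><entry lemma="cat"/></lexicon>',
    '<lexicon><entry lemma="cat"/><entry lemma="bird"/></lexicon>',
)
lemmas = [e.get("lemma") for e in merged.findall("entry")]
```

An "intelligent" merge as described in the abstract would additionally reconcile the senses, forms, and other sub-elements of duplicated entries rather than simply keeping the first copy.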

Add to bibliography