Dissertations / Theses on the topic 'Modelli Grafici'


Consult the top 46 dissertations / theses for your research on the topic 'Modelli Grafici.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

NGUYEN, NGOC DUNG. "SELEZIONE DEL MODELLO NEI MODELLI GRAFICI COLORATI PER DATI APPAIATI." Doctoral thesis, Università degli studi di Padova, 2022. http://hdl.handle.net/11577/3449043.

Full text
Abstract:
Un modello grafico gaussiano (GGM) è una famiglia di distribuzioni normali multivariate la cui struttura di indipendenza condizionale viene rappresentata mediante un grafo non orientato. I vertici del grafo corrispondono alle variabili ed ogni arco assente dal grafo implica che il corrispondente elemento della matrice di concentrazione, ossia l'inversa della matrice di varianze e covarianze, è uguale a zero; si veda Lauritzen (1996). I modelli grafici colorati, introdotti da Hojsgaard and Lauritzen (2008), sono una famiglia di modelli grafici gaussiani con ulteriori vincoli di simmetria implementati come vincoli di uguaglianza negli elementi della matrice di concentrazione. L'utilizzo dei modelli grafici colorati fu motivato inizialmente dalla necessità di ridurre il numero di parametri nell'apprendimento di grafi con elevato numero di vertici in presenza di una limitata numerosità campionaria. Vi sono però contesti applicativi nei quali i vincoli di simmetria emergono naturalmente come quesiti scientifici di interesse. Un esempio rilevante è dato dall'apprendimento congiunto di network multipli nel caso di dati appaiati e oggetto di questa tesi è l'applicazione di modelli grafici colorati in questo ambito. Sebbene i vincoli di simmetria implichino naturalmente una riduzione della dimensionalità del modello, il problema dell'apprendimento del modello dai dati è estremamente complesso dato che la dimensione dello spazio di ricerca è molto maggiore rispetto a quella dei tradizionali modelli grafici non colorati. La costruzione di procedure di ricerca che siano efficienti è fondamentale comprendere la struttura dello spazio di ricerca. In questo lavoro noi consideriamo i modelli grafici colorati per dati appaiati (PDCGM) e mostriamo che se si utilizza il tradizionale ordinamento basato sulla relazione di sottomodello (ordinamento model-inclusion), questa famiglia forma un reticolo non-distributivo. Introduciamo quindi una nuova relazione d'ordine, che chiamiamo ordinamento twin. Mostriamo quindi che la famiglia di PDCGM forma un reticolo distributivo rispetto all'ordinamento twin e quindi utilizziamo questa struttura per introdurre una procedura di apprendimento di tipo stepwise. Gabriel (1969) ha introdotto il seguente principio detto principio di coerenza ``in una procedura in cui vengono verificate ipotesi multiple, una qualunque ipotesi non dovrebbe essere accettata quando, al contempo, un'ipotesi implicata da questa viene rifiutata''. Si noti che, per brevità, in questa formulazione utilizziamo il termine ``accettata'' invece del termine più rigoroso ``non-rifiutata''. Si consideri un test di livello $\alpha$ che può essere applicato al confronto di modelli in una procedura di apprendimento. In questo contesto, il principio di coerenza viene solitamente applicato richiedendo che non si deve accettare un qualunque modello quando un modello più generale è rifiutato; si veda, ad esempio, Edwards e Havranek (1987). Quindi, nell'implementazione di una procedura stepwise backward elimination coerente se un modello è rifiutato allora tutti i suoi sottomodelli sono automaticamente rifiutati. Tuttavia, noi mostriamo che per la famiglia di modelli grafici colorati per dati appaiati l'applicazione del principio di coerenza richiede ragionamenti più sofisticati e che l'applicazione automatica di questo principio sulla base del reticolo model inclusion porta ad effettuare dei passi che violano il principio di coerenza. 
Invece, il reticolo basato sulla relazione twin permette di identificare tali passi non coerenti e sostituirli con dei passi che rispettano il principio di coerenza. Questa variazione conferisce inoltre efficienza alla procedura. La procedura è implementata nel linguaggio R, le sue proprietà sono illustrate mediante una serie di applicazioni a dati simulati ed, infine, utilizzata per l'identificazione di un brain network sulla base di dati fMRI.
Gaussian graphical models (GGM) are a family of multivariate normal distributions whose conditional independence structure is represented by an undirected graph, where the vertices represent variables and every missing edge implies that the corresponding entry of the concentration matrix, which is the inverse of the covariance matrix, equals zero; see Lauritzen (1996). Hojsgaard and Lauritzen (2008) introduced colored GGMs which are GGMs with additional symmetry restrictions on the concentration matrix in the form of equality constraints on the parameters, which are depicted on the dependence graph by colorings of edges and vertices. The application of colored GGMs was motivated by the need of reducing the number of parameters when estimating covariance matrices of large dimensions with relatively few observations. On the other hand, there exist applied contexts where symmetry restrictions naturally follow from substantive research hypotheses of interest. A relevant instance is provided by the problem of joint learning of multiple graphical models, where observations come from two or more groups sharing the same variables. The association structure of each group is represented by a network and it is expected that there are similarities between groups. In paired data, the two groups are not independent because two sets of homologous variables are observed on every statistical unit. In this thesis, we focus on the application of colored GGMs to the joint learning of graphical models for paired data that, in the following, we call colored graphical models for paired data (PDCGMs). Although the symmetric restrictions implied by a colored GGM may usefully reduce the model dimensionality, the problem of model identification is much more challenging than in GGMs because both the dimensionality and complexity of the search spaces highly increase. For the construction of efficient model selection methods, it is imperative to understand the structure of model classes. In this work, we consider PDCGMs and show that this class of models forms a non-distributive lattice under the model inclusion order $\preceq_{\mathcal{C}}$. We then introduce a novel partial order $\preceq_{\tau}$ for this class of models and call it the twin order. Such order coincides with the model inclusion if two models are $\preceq_{\mathcal{C}}$ comparable but that also includes a relationship between certain models which are $\preceq_{\mathcal{C}}$ incomparable. We show that the class of PDCGMs forms a distributive lattice under the twin order and then we use this lattice to implement a coherent backward elimination stepwise procedure. Gabriel (1969) introduced the principle of coherence ``in any procedure involving multiple comparisons no hypothesis should be accepted if any hypothesis implied by it is rejected". We remark that we say ``accepted" instead of ``non-rejected". Consider a goodness-of-fit test for testing models at a level $\alpha$ so that for every model we can determine whether the model is rejected or accepted. In this context, the coherence is typically implemented by requiring that we should not accept a model while rejecting a more general model; see Edwards and Havranek (1987). Hence, under this formulation of the coherence, in a greedy search if a model is rejected then all its submodels are considered rejected without further testing. However, we show that the lattice of PDCGMs under model inclusion does not provide a proper implementation of the coherence principle. 
On the other hand, the coherence principle can be properly implemented on the distributive lattice under the twin order. We therefore introduce a backward elimination stepwise procedure with local moves on our distributive lattice which satisfies the coherence principle. This procedure is implemented in the programming language R and its behavior is investigated on simulated data. Finally, the procedure is applied to the identification of a brain network from fMRI data.
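To make the opening statement concrete, here is a minimal Python sketch, independent of the thesis code, of the relation between missing edges and zero entries of the concentration matrix; the covariance matrix below is a made-up toy example in which variables 0 and 2 turn out to be conditionally independent given variable 1.

import numpy as np

# Toy illustration (not from the thesis): in a Gaussian graphical model the
# (i, j) entry of the concentration matrix K = Sigma^{-1} is zero exactly when
# variables i and j are conditionally independent given all the others.
Sigma = np.array([[1.00, 0.50, 0.25],
                  [0.50, 1.00, 0.50],
                  [0.25, 0.50, 1.00]])
K = np.linalg.inv(Sigma)

# Partial correlations rescale K; entries near zero correspond to missing edges.
d = np.sqrt(np.diag(K))
partial_corr = -K / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
print(np.round(partial_corr, 3))   # the (0, 2) entry is (numerically) zero

# A colored model for paired data would additionally impose equality constraints,
# e.g. forcing K[0, 0] == K[1, 1] for a pair of homologous variables.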
2

Liparesi, Andrea. "Sviluppo di modelli multiregressivi per la stima della percentuale annuale di giorni a deflusso nullo in corsi d’acqua a regime intermittente." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016.

Find full text
Abstract:
The aim of this thesis is the construction of multiregressive models that, from the geomorphoclimatic characteristics of the catchment under consideration, can estimate the number of zero-flow days for streams with an intermittent flow regime in the absence of streamflow observations. The work was carried out in several phases: first, an analysis of previous studies to draw information useful for building the multiregressive models, and then the construction of the models themselves through a logistic regression implemented in the R environment. The study area is the Lower Colorado (USA). The construction of the models relied on two verification approaches: a graphical method, in which scatter plots were used to compare the observed values of the percentage of zero-flow days (PDN) with the PDN values estimated by the model, i.e. those obtained from the regional logistic regression using the geomorphoclimatic characteristics; and a set of numerical performance indices summarising the deviations between the observed and estimated PDN values (e.g. NSE, the Nash-Sutcliffe efficiency index, and SSR, the mean squared error). From the analysis of the identified models it can be concluded that: 1) the best descriptors of intermittent flow regimes are the temperature, the drainage area of the catchment and the catchment slope; 2) the NSE and SSR values obtained for the best-performing models are not fully satisfactory for using these models in design practice for prediction in catchments without streamflow data, but they can provide a preliminary indication on which to base further investigation.
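To make the evaluation step concrete, here is a small Python sketch, not taken from the thesis, of a logistic-type regional regression for PDN and of the Nash-Sutcliffe efficiency used to score it; the catchment descriptors and PDN values are invented.

import numpy as np

def nash_sutcliffe(observed, predicted):
    # NSE = 1 means a perfect fit; values <= 0 mean the model does no better
    # than simply predicting the mean of the observations.
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return 1.0 - ((observed - predicted) ** 2).sum() / ((observed - observed.mean()) ** 2).sum()

# Hypothetical descriptors (mean temperature, drainage area, slope) and the
# observed fraction of zero-flow days (PDN) for five invented catchments.
X = np.array([[14.2, 120.0, 0.03],
              [18.9,  45.0, 0.01],
              [21.5,  10.0, 0.08],
              [12.1, 300.0, 0.02],
              [16.0,  80.0, 0.05]])
pdn_obs = np.array([0.10, 0.45, 0.70, 0.05, 0.30])

# Regional logistic-type regression: ordinary least squares on logit(PDN).
logit = np.log(pdn_obs / (1.0 - pdn_obs))
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, logit, rcond=None)
pdn_hat = 1.0 / (1.0 + np.exp(-(A @ coef)))
print("NSE:", round(nash_sutcliffe(pdn_obs, pdn_hat), 3))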
3

Bonantini, Andrea. "Analisi di dati e sviluppo di modelli predittivi per sistemi di saldatura." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24664/.

Full text
Abstract:
This thesis aims to predict the length of the electric arc that forms in the MIG/MAG welding process when a consumable electrode (wire) is brought to a suitable distance from the component to be welded. In this specific case, the wire alloy is aluminium-magnesium. In particular, this work presents the impact of physical quantities such as voltage, current and wire feed speed during the welding process, and how they influence the size of the arc. More precisely, predictive models were built that estimate the arc length from these quantities, following two distinct approaches: black-box and knowledge-driven. Chapter one gives an overview of the state of the art of MIG/MAG welding, introducing the Cebora Group, the data acquisition procedure and the physical model currently used to compute the length of the electric arc. The second chapter presents the data analysis and explains the experimental decisions taken to handle and understand the data; this chapter also assesses the accuracy of Cebora's model by comparing its predictions with the real data. The third chapter is more operational and presents the first neural networks that were built, which follow a black-box approach and include some manipulations of the current. The fourth chapter shifts the attention to the role of the voltage, and new networks are built with a different approach, namely knowledge-driven. The fifth chapter draws the conclusions of this work, examining the positive and negative aspects of the best models obtained.
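As an illustration of the black-box approach mentioned above, the following Python sketch fits a small neural network that maps voltage, current and wire feed speed to an arc length; the data are synthetic and the model is not the one developed in the thesis.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic samples: voltage [V], current [A] and wire feed speed [m/min],
# with a made-up "arc length" target; the real Cebora measurements are not used.
rng = np.random.default_rng(0)
raw = rng.uniform([18.0, 80.0, 4.0], [26.0, 220.0, 12.0], size=(300, 3))
arc_length = 0.9 * raw[:, 0] - 0.02 * raw[:, 1] - 0.3 * raw[:, 2] + rng.normal(0.0, 0.2, 300)

X = (raw - raw.mean(axis=0)) / raw.std(axis=0)   # standardize features for the network
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X[:200], arc_length[:200])
print("held-out R^2:", round(model.score(X[200:], arc_length[200:]), 3))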
4

PENNONI, FULVIA. "Metodi statistici multivariati applicati all'analisi del comportamento dei titolari di carta di credito di tipo revolving." Bachelor's thesis, Universita' degli studi di Perugia, 2000. http://hdl.handle.net/10281/50024.

Full text
Abstract:
Il presente lavoro di tesi illustra un'applicazione dei modelli grafici per il l’analisi del credit scoring comportamentale o behavioural scoring. Quest'ultimo e' definito come: ‘the systems and models that allow lenders to make better decisions in managing existing clients by forcasting their future performance’, secondo Thomas (1999). La classe di modelli grafici presa in considerazione e’ quella dei modelli garfici a catena. Sono dei modelli statistici multivariati che consetono di modellizzare in modo appropriato le relazioni tra le variabili che descrivono il comporatemento dei titoloari della carta. Dato che sono basati su un'espansione log-lineare della funzione di densità delle variabili consentono di rappresentare anche graficamente associazioni orientate, inerenti sottoinsiemi di variabili. Consentono, inoltre, di individuare la struttura che rappresenti in modo più parsimonioso possibile tali relazioni e modellare simultaneamente più di una variabile risposta. Sono utili quando esiste un ordinamento anche parziale tra le variabili che permette di suddividerle in meramente esogene, gruppi d’intermedie tra loro concatenate e di risposta. Nei modelli grafici la struttura d’indipendenza delle variabili viene rappresentata visivamente attraverso un grafo. Nel grafo le variabili sono rappresentate da nodi legati da archi i quali mostrano le dipendenze in probabilità tra le variabili. La mancanza di un arco implica che due nodi sono indipendenti dati gli altri nodi. Tali modelli risultano particolarmente utili per la teoria che li accomuna con i sistemi esperti, per cui una volta selezionato il modello è possibile interrogare il sistema esperto per modellare la distribuzione di probabilità congiunta e marginale delle variabili. Nel primo capitolo vengono presentati i principali modelli statistici adottati nel credit scoring. Il secondo capitolo prende in considerazione le variabili categoriche. Le informazioni sui titolari di carta di credito sono, infatti, compendiate in tabelle di contingenza. Si introducono le nozioni d’indipendenza tra due variabili e di indipendenza condizionata tra più di due variabili. Si elencano alcune misure d’associazione tra variabili, in particolare, si introducono i rapporti di odds che costituiscono la base per la costruzione dei modelli multivariati utilizzati. Nel terzo capitolo vengono illustrati i modelli log-lineari e logistici che appartengono alla famiglia dei modelli lineari generalizzati. Essendo metodi multivariati consentono di studiare l’associazione tra le variabili considerandole simultaneamente. In particolare viene descritta una speciale parametrizzazione log-lineare che permette di tener conto della scala ordinale con cui sono misurate alcune delle variabili categoriche utilizzate. Questa è anche utile per trovare la migliore categorizzazione delle variabili continue. Si richiamano, inoltre, i risultati relativi alla stima di massima verosimiglianza dei parametri dei modelli, accennando anche agli algoritmi numerici iterativi necessari per la risoluzione delle equazioni di verosimiglianza rispetto ai parametri incogniti. Si fa riferimento al test del rapporto di verosimiglianza per valutare la bontà di adattamento del modello ai dati. Il capitolo quarto introduce alla teoria dei grafi, esponendone i concetti principali ed evidenziando alcune proprietà che consentono la rappresentazione visiva del modello mediante il grafo, mettendone in luce i vantaggi interpretativi. 
In tale capitolo si accenna anche al problema derivante dalla sparsità della tabella di contingenza, quando le dimensioni sono elevate. Vengono pertanto descritti alcuni metodi adottati per far fronte a tale problema ponendo l’accento sulle definizioni di collassabilità. Il quinto capitolo illustra un’applicazione dei metodi descritti su un campione composto da circa sessantamila titolari di carta di credito revolving, rilasciata da una delle maggiori società finanziarie italiane operanti nel settore. Le variabili prese in esame sono quelle descriventi le caratteristiche socioeconomiche del titolare della carta, desumibili dal modulo che il cliente compila alla richiesta di finanziamento e lo stato del conto del cliente in due periodi successivi. Ogni mese, infatti, i clienti vengono classificati dalla società in: ‘attivi’, ‘inattivi’ o ‘dormienti’ a seconda di come si presenta il saldo del conto. Lo scopo del lavoro è stato quello di ricercare indipendenze condizionate tra le variabili in particolare rispetto alle due variabili obbiettivo e definire il profilo di coloro che utilizzano maggiormente la carta. Le conclusioni riguardanti le analisi effettuate al capitolo quinto sono riportate nell’ultima sezione. L’appendice descrive alcuni dei principali programmi relativi ai software statistici utilizzati per le elaborazioni.
In this thesis the use of graphical models is proposed for the analysis of credit scoring. In particular, the application concerns behavioural scoring, which is defined by Thomas (1999) as 'the systems and models that allow lenders to make better decisions in managing existing clients by forecasting their future performance'. The multivariate statistical models proposed for the application, named chain graph models, allow us to model in a proper way the relations between the variables describing the behaviour of the credit card holders. They are based on a log-linear expansion of the density function of the variables. They make it possible to depict oriented associations between subsets of variables, to detect the structure that gives the most parsimonious description of the relations between variables, and to model more than one response variable simultaneously. They are useful in particular when there is a partial ordering between the variables such that they can be divided into purely exogenous, intermediate and response variables. In graphical models the independence structure is represented by a graph. The variables are represented by nodes, joined by edges that show the dependence in probability among the variables. A missing edge means that two nodes are independent given the other nodes. This class of models is also very useful because of the theory that links it with expert systems: once the model has been selected, it is possible to link it to an expert system to model the joint and marginal probability distributions of the variables. The first chapter introduces the statistical models most commonly used for credit scoring analysis. The second chapter introduces the categorical variables; the information on the credit card holders is in fact stored in contingency tables. It also illustrates the notions of independence between two variables and of conditional independence among more than two variables. The odds ratio is introduced as a measure of association between two variables; it is the basis of the model formulation. The third chapter introduces the log-linear and logistic models belonging to the family of generalized linear models. Being multivariate methods, they allow the association between variables to be studied by considering them simultaneously. A log-linear parameterization is described in detail; one of its advantages is that it allows the ordinal scale on which some of the categorical variables are measured to be taken into account. This is also useful for finding the best categorization of the continuous variables. The results on maximum likelihood estimation of the model parameters are recalled, as well as the iterative numerical algorithms used to solve the likelihood equations with respect to the unknown parameters. The score test is illustrated as a way to evaluate the goodness of fit of the model to the data. Chapter 4 introduces the main concepts of graph theory, together with the properties that allow the model to be depicted through a graph, showing the interpretative advantages. The sparsity of the contingency table, when there are many cells, is also mentioned, and the collapsibility conditions are considered as well. Finally, Chapter 5 illustrates the application of the proposed methodology to a sample of 70,000 revolving credit card holders, with data released by one of the biggest Italian financial companies operating in this sector.
The variables are the socioeconomic characteristics of the credit card holder, taken from the form filled in by the customer when applying for credit, and the state of the customer's account in two subsequent periods. Every month the company classifies the customers as 'active', 'inactive' or 'dormant' according to the balance. The application of the proposed method was devoted to finding the conditional independences between the variables, in particular with respect to the two responses, which are the balance of the account at two subsequent dates, and thereby to defining the profile of the most frequent users of the revolving credit card. The chapter ends with some concluding remarks. The appendix of the chapter reports the code for the statistical software used.
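As a toy illustration of the odds ratio that the second chapter uses as the building block of the log-linear models, here is a short Python sketch; the table values are hypothetical and not taken from the thesis data.

import numpy as np

# Hypothetical 2x2 contingency table (not from the thesis):
# rows = card status after one month (active / inactive),
# columns = age class of the holder (under 35 / 35 and over).
table = np.array([[310, 190],
                  [140, 260]])

# Odds ratio: the basic association measure behind the log-linear modelling.
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(round(odds_ratio, 2), round(np.log(odds_ratio), 2))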
5

Menkevičius, Saulius. "Objektų savybių modelio grafinis redaktorius." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2006. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2006~D_20060113_215001-30663.

Full text
Abstract:
During the development of federated IS that make use of non-homogeneous databases and data sources, XML documents are often used for data exchange among the local subsystems, while their corresponding XML Schemas are generated using the standard CASE tools of the local systems. The external data schemas of those systems must be specified in a unified common model. It is assumed that the OBJECT PROPERTY (OP) model is used for the semantic integration of the local non-homogeneous subsystems. A graphical editor was developed that can be used to specify relation objects, their identifiers, and complex and multi-valued object attributes. As the semantic expressive power of the OP model can be mapped onto that available in XML, rules have additionally been defined and implemented that transform specific OP model structures into XML Schemas. An algorithm is also specified that can be used to extract tree-like structures from the model. Example transformations are performed that illustrate the process of generating XML Schema documents from sample OP models.
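A toy Python sketch of the kind of transformation described, mapping a hypothetical object-with-properties description to an XML Schema fragment; the helper and its rules are illustrative only and are not the editor developed in the thesis.

from xml.sax.saxutils import escape

def op_to_xsd(obj_name, properties):
    """properties: list of (name, xsd_type, multi_valued) tuples."""
    lines = [f'<xs:complexType name="{escape(obj_name)}">', "  <xs:sequence>"]
    for name, xsd_type, multi in properties:
        max_occurs = ' maxOccurs="unbounded"' if multi else ""
        lines.append(f'    <xs:element name="{escape(name)}" type="{xsd_type}"{max_occurs}/>')
    lines += ["  </xs:sequence>", "</xs:complexType>"]
    return "\n".join(lines)

# Hypothetical relation object with a single-valued and a multi-valued attribute.
print(op_to_xsd("Customer", [("id", "xs:string", False), ("phone", "xs:string", True)]))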
6

Srogis, Andrius. "Automatizuotas grafinio modelio performulavimas į natūralią kalbą." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2013~D_20130826_150207-45443.

Full text
Abstract:
Grafinių modelių projektavimas yra plačiai naudojamas tiek mokslo, tiek verslo srytyse. Pasaulyje naudojama įvairių kalbų, skirtų tiek sistemų architektūrų, tiek verslo procesų projektavimui. Daugumai kalbų yra sukurta įvairių įrankių, leidžiančių jų naudotojams projektuoti įvairius procesus ar statines sistemas. Vienai labiausiai paplitusių kalbų (UML) trūksta metodikos ir įrankių, gebančių korektiškai perteikti natūralia kalba sistemų architektų aprašytus grafinius modelius asmenims, mažai kvalifikuotiems grafinių modelių sudaryme, skaityme. Perteikimas tuo natūralesnis ir labiau suprantamesnis, kuo jis artimesnis natūraliai kalbai. Yra metodikų ir įrankių atliekančių grafinio modelio verbalizavimą, tačiau nėra koncentruotų ties diagramomis UML kalba, kurios geba formuoti ne tik statiką, bet ir dinamiką. Pagrindinis darbo tikslas yra sukurti metodiką ir realizuoti įrankį, kuris gebėtų grafinį modelį išreikštą UML kalba performuluoti natūralia kalba.
Graphical model design is widely used for scientific and enterprise purposes. There are many languages concentrated on designing enterprise processes and static systems. One of the most popular modelling languages (UML) lacks a methodology and tools suitable for correctly reformulating graphical models (formulated in UML) in natural language. The main purpose of reformulating a graphical model in natural language is to make models easier to understand for people who are not specialized in UML. Methodologies and tools capable of reformulating graphical models in natural language already exist, but they are not concentrated on UML and are not capable of reformulating both static and dynamic processes. The main goal of this work is to define a methodology and implement a tool capable of translating a graphical UML model into natural language text.
7

Reichmann, Clemens [Verfasser]. "Grafisch notierte Modell-zu-Modell-Transformationen für den Entwurf eingebetteter elektronischer Systeme / Clemens Reichmann." Aachen : Shaker, 2005. http://d-nb.info/1186587946/34.

Full text
8

Lohikoski, Håkansson Laura, and Elin Rudén. "Optimization of 3D Game Models : A qualitative research study in Unreal Development Kit." Thesis, Södertörns högskola, Institutionen för naturvetenskap, miljö och teknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-22822.

Full text
Abstract:
Our goal with this study is to examine how much optimization of 3D game models can affect the overall performance of a game. After a previous pilot study we decided to use a method in which we worked with a 3D scene that had been made earlier, unconnected to this study. We created two versions of the scene in Unreal Development Kit, one with none of the meshes optimized and a second scene where the meshes are optimized. From these two scenes we recorded the different stats: draw calls, frame rate, milliseconds per frame and visible static mesh elements, as well as memory usage. Comparing these stats from the two scenes, we found that there was a change in the stats. Draw calls and frame rate had dropped in the second scene, as well as the memory usage, which made the game run more smoothly without losing much of its aesthetic quality.
Målet med vår studie var att se hur stor skillnad optimering av 3D-modeller i spel gör för att förbättra spelprestandan. Efter att ha utfört en pilotstudie beslutade vi oss för att använda en tidigare byggd 3D-scen för undersökningen i vår C-uppsats. Vi skapade två versioner av scenen i Unreal Development Kit, en där inga modeller var optimerade och den andra där vi optimerat modellerna. Vi skrev därefter ner statistik från de olika scenerna, nämligen draw calls, frame rate, millisecond per frame och visible static mesh elements liksom minnesanvändning. Efter att ha jämfört resultaten såg vi att det fanns en väsentlig skillnad mellan scenerna prestandamässigt. Både draw calls, frame rate och minnesanvändningen hade minskat efter optimeringen vilket ledde till att spelet kördes smidigare.
9

Geležis, Jonas. "Programinio kodo generavimo iš grafinių veiklos taisyklių modelių galimybių tyrimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2009. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2009~D_20090831_153530-40477.

Full text
Abstract:
Programinio kodo generavimo iš veiklos taisyklių modelių sritis iki šiol yra menkai ištirta ir tai neigiamai veikia veiklos taisyklių koncepcijos plėtrą. Nagrinėjama kodo generavimo iš grafinių IS (informacinių sistemų) reikalavimus atspindinčių modelių problema. Pristatomas modifikuotu Roso metodu grindžiamo veiklos taisyklių modeliavimo IS projektavimo stadijoje metodo tyrimas, siekiant sukurti adekvačią programinio kodo generavimo iš taisyklių diagramų metodiką.
One of the reasons for the relatively slow growth of the business rules approach could be the lack of developments in the field of program code generation from business rules models. In this work, methods for code generation from IS requirements models are analysed. The focus is placed on a rules modelling method for the IS design stage based on a modified Ross method, with the aim of creating an adequate methodology for generating program code from rule diagrams.
10

Goes, Fernando Ferrari de. "Analise espectral de superficies e aplicações em computação grafica." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/275916.

Full text
Abstract:
Orientador: Siome Klein Goldenstein
Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação
Resumo: Em computação gráfica, diversos problemas consistem na análise e manipulação da geometria de superfícies. O operador Laplace-Beltrami apresenta autovalores e autofunções que caracterizam a geometria de variedades, proporcionando poderosas ferramentas para o processamento geométrico. Nesta dissertação, revisamos as propriedades espectrais do operador Laplace-Beltrami e propomos sua aplicação em computação gráfica. Em especial, introduzimos novas abordagens para os problemas de segmentação semântica e geração de atlas em superfícies
Abstract: Many applications in computer graphics consist of the analysis and manipulation of the geometry of surfaces. The Laplace-Beltrami operator has eigenvalues and eigenfunctions which characterize the geometry of manifolds, supporting powerful tools for geometry processing. In this dissertation, we revisit the spectral properties of the Laplace-Beltrami operator and apply them in computer graphics. In particular, we introduce new approaches for the problems of semantic segmentation and atlas generation on surfaces.
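As a small illustration of the spectral machinery the abstract refers to, the following Python sketch computes the spectrum of a combinatorial graph Laplacian, a common discrete stand-in for the Laplace-Beltrami operator; the mesh is replaced by a toy cycle graph and none of this comes from the dissertation.

import numpy as np

n = 8
A = np.zeros((n, n))
for i in range(n):                      # cycle graph as a stand-in for a mesh
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian

eigenvalues, eigenvectors = np.linalg.eigh(L)
# Small eigenvalues capture the coarse shape; their eigenvectors can serve as a
# spectral basis for tasks such as segmentation or parameterization.
print(np.round(eigenvalues, 3))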
Mestrado
Computação Grafica
Mestre em Ciência da Computação
11

Giampaoli, Marco. "Un applicativo Web con tecnologia single-page AJAX, servizi REST e grafica X3D, per la configurazione di modelli CAD tridimensionali." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/5012/.

Full text
Abstract:
In this thesis a web-based system for the configuration of three-dimensional mechanical models was developed. The whole software is based on a multi-tier architecture. The back-end exposes RESTful services that allow a database containing the registry of the models to be queried and provide the interaction with the SolidWorks 3D CAD. The front-end consists of two HTML pages designed as SPAs (Single Page Applications), one for the administrator and one for the end user; they are responsible for the asynchronous calls to the services, for the automatic updating of the interface and for the interaction with three-dimensional images.
12

Romagnoli, Matteo. "Sviluppo e integrazione in ambiente di simulazione di modelli tridimensionali per la rappresentazione grafica della dinamica di velivoli ad ala rotante." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020.

Find full text
Abstract:
Since the development of the first full-scale aircraft, the need has emerged for mathematical and dynamic simulation models able to return reliable estimates of the behaviour that prototypes under test would exhibit in flight. Being able to predict the behaviour that aircraft develop in operational contexts proves to be the winning solution for saving money and for identifying all the manoeuvres that may constitute a hazard, both with regard to aeroelastic and structural problems and, consequently, for the lives of the test pilots and of the future occupants of the vehicle. These simulations initially keep the model complexity to a minimum and are then refined by successive approximations, step by step, with the aim of providing an increasingly accurate prediction of the in-flight behaviour of the aircraft. The purpose of this thesis is to propose the model of a remotely piloted scale helicopter designed in a CAD environment and implemented in Simulink® SimScape. The goal of the work presented here is to schematize the flight mechanics of a helicopter as simply as possible, reducing the actuations to: rotation of the main rotor, rotation of the tail rotor, pitch of the main rotor blades, pitch of the tail rotor blades, and attitude of the aircraft. To do so, there are essentially two fundamental steps: modelling the aircraft in its fundamental components (fuselage, main and tail rotors, blades) and running the simulations in a mathematical environment by means of externally supplied inputs.
13

Rinaldi, Luca. "Leap Aided Modelling (LAM) in Blender." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/10904/.

Full text
Abstract:
Technological evolution and the growing use of computer graphics in several fields are attracting more and more people to the world of 3D modelling. Modelling software, however, often proves inadequate for users without experience, mainly because of navigation and modelling commands that are far from intuitive. From the point of view of human-computer interaction, these programs face a major obstacle: the relation between 2D input devices (such as the mouse) and the manipulation of a 3D scene. The project presented in this thesis is an addon for Blender that allows the Leap Motion device to be used as an aid to surface modelling in computer graphics. The goal of this thesis was to design and implement a user-friendly interface between Leap and Blender, so that the sensors of the former can be used to facilitate and extend the navigation and modelling commands of the latter. The addon developed for Blender implements the concept of LAM (Leap Aided Modelling), thus extending Blender's features for selecting, moving and editing objects in the scene, manipulating the user view and modelling Non Uniform Rational B-Splines (NURBS) curves and surfaces. These extensions were created to make operations that are otherwise driven exclusively by mouse and keyboard faster and simpler.
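For readers unfamiliar with Blender addons, the following minimal Python operator skeleton shows where Leap Motion readings could be mapped to a modelling action; the Leap reading is stubbed with a constant and this is not the LAM addon itself.

import bpy

class OBJECT_OT_leap_translate(bpy.types.Operator):
    """Translate the active object by a (stubbed) Leap Motion hand offset."""
    bl_idname = "object.leap_translate"
    bl_label = "Leap Translate (sketch)"

    def execute(self, context):
        hand_offset = (0.1, 0.0, 0.0)   # stub: a real addon would poll the Leap SDK here
        obj = context.active_object
        if obj is not None:
            obj.location.x += hand_offset[0]
            obj.location.y += hand_offset[1]
            obj.location.z += hand_offset[2]
        return {'FINISHED'}

def register():
    bpy.utils.register_class(OBJECT_OT_leap_translate)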
14

Linde, Eva. "Användarcentrerad grafisk informationsutveckling : En studie på vädersymboler för ett förstärkt användarperspektiv genom mentala modeller." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-77008.

Full text
Abstract:
A user-centred perspective in the development of printed graphics is not a given today. Pride in one's own craft, conflicting interests and poorly adapted tools are the main reasons for the resistance that has grown among graphic designers to embracing user-centred design. The purpose of this material is to explore the possibilities of letting outside users be actively involved in the design process of printed symbols, in this case weather symbols. Mental models, together with user-centred techniques from the field of interaction design, have been used as the theoretical and methodological framework of the study. The study was carried out in three phases, comprising the construction of a symbol library including initial testing, clustering and further weeding out of symbols, and an evaluation of the symbol library with a focus on application. The results show that the user-centred methods established within interaction design should be transferred to the field of information design with caution, because information is handled differently in the two disciplines. Users place greater trust in printed graphic material than in interactive environments and find it harder to deal with ambiguities, since they cannot explore by trial and error. The study also shows that users have different pictures of what weather symbols depict and that mental models as a framework can bring out these differences. With increased knowledge of how users relate to symbols and of the differences in interpretation that exist, a designer can capture how symbols should, and above all should not, look in order to avoid misunderstandings. This makes the information material more effective in its communication and increases the information value for the user in an already information-demanding society.
15

UBERTINI, ALESSIO. "Una metodologia di progettazione integrata mediante l’utilizzo di interfacce grafiche." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2005. http://hdl.handle.net/2108/186.

Full text
Abstract:
L’utilizzo di un ambiente di software per le simulazioni e i test permette già in fase di progettazione di poter valutare la validità delle scelte progettuali operate. L’impiego, quindi, di tali tecniche riduce il numero dei prototipi da realizzare, diminuendo così i costi di progettazione e da la possibilità al progettista di ripetere i test con una facile riconfigurabilità del sistema in caso di modifica del modello originario. I moderni CAD sono integrati con moduli di calcolo specifici e necessari alla progettazione meccanica, in modo da costituire per il progettista uno strumento sempre più completo. Durante il processo di progettazione, comunque, è necessario utilizzare differenti strumenti software ed è quindi necessario che le feature del modello si mantengano inalterate nel passaggio tra un software e l’altro. Attualmente il protocollo API garantisce una conservazione delle caratteristiche del modello e una minimizzazione della possibilità di errore nello scambio dati. L’utilizzo di questo protocollo di scambio ha permesso di realizzare degli applicativi utili nella progettazione meccanica e allo stesso tempo facili da utilizzare. L’interfaccia grafica è di tipo user-friendly, in modo da guidare l’utente nelle singole operazioni riducendo al minimo la possibilità di errore e per permettere a quest’ultimo di interagire in maniera semplice con il software. Questa metodologia ha permesso di sviluppare i seguenti tools per: • la realizzazione di modelli tridimensionali del corpo umano per la riproduzione e l’analisi del movimento; • la modellazione CAD parametrica di modelli di abitacoli di autoveicoli; la catalogazione e la ricostruzione digitale dei reperti archeologici
A software simulation and testing environment allows the soundness of the design choices to be assessed already during the design process. These techniques reduce the number of prototypes to be built, and hence the design costs; moreover, the designer can easily repeat the tests, since the system can be readily reconfigured when the original model is modified. Modern CAD software includes the calculation modules needed for mechanical design, making it an increasingly complete tool for the designer. Often, during the design process, it is necessary to use different kinds of software while keeping the features of the original model unaltered from one program to the next. The API protocol ensures that the features of the original model are preserved and that the possibility of errors in the data exchange is minimized. By means of this data exchange protocol, some software tools were developed that are useful for mechanical design and at the same time easy to use. Their graphical interface is user-friendly: it guides the user through the individual operations, minimizing the risk of error and allowing the user to interact with the software in a simple way. The applications developed allow one to: build 3D models of the human body for the reproduction and analysis of motion; build parametric CAD models of vehicle interiors; catalogue and digitally reconstruct archaeological finds.
16

Henning, Elisa. "Aperfeiçoamento e desenvolvimento dos gráficos combinados Shewhart-Cusum binomiais." reponame:Repositório Institucional da UFSC, 2012. http://repositorio.ufsc.br/xmlui/handle/123456789/94399.

Full text
Abstract:
Tese (doutorado) - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia de Produção, Florianópolis, 2010
Traditional Shewhart control charts are considered effective in detecting large shifts in the mean, the variance or the nonconforming fraction, whereas cumulative sum (CUSUM) control charts are recommended for signalling small and moderate shifts in these parameters. Neither chart performs well in all situations. A possible solution to this problem is to combine multiple charts so as to cover shifts of different magnitudes. Thus, a combined Shewhart-CUSUM chart aims to increase the sensitivity of the CUSUM procedure to larger shifts. This work makes several contributions to the development and improvement of combined Shewhart-CUSUM charts for binomially distributed data. First, based on simulation results, the performance of a combined chart is analysed, assessing whether adding Shewhart limits to an upper one-sided binomial CUSUM chart really increases its sensitivity. The performance of the combined Shewhart-CUSUM chart is also compared with that of the Shewhart-type chart and with CUSUM procedures designed to detect larger shifts. With applications in mind, a methodology for building a combined chart was developed, including the analysis of the required assumptions (goodness of fit, autocorrelation and overdispersion). Finally, this methodology was applied to data adapted from the literature and also to real processes. The work also includes some additional contributions, such as the use of exact (or probabilistic) limits in the Shewhart part of the combined chart and a proposed approximation for the upper limit of the binomial CUSUM. The results show that the combined Shewhart-CUSUM chart increases the sensitivity of a binomial CUSUM chart for shift magnitudes larger than the design shift, and a region was identified in which the combined chart outperforms both individual charts. The results of the applications were satisfactory, validating the proposed methodology. The applications also highlighted practical situations in which the combined chart is more effective than the individual charts.
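A compact Python sketch of the combined scheme the abstract studies: an upper one-sided binomial CUSUM with a Shewhart limit added on top; the reference value, decision limit and data are hypothetical and not taken from the thesis.

import numpy as np

def shewhart_cusum(counts, n, p0, k, h, shewhart_limit):
    """counts: nonconforming items per sample of size n; p0: in-control fraction.
    k: CUSUM reference value, h: CUSUM decision limit, shewhart_limit: count limit."""
    s = 0.0
    signals = []
    for t, x in enumerate(counts):
        s = max(0.0, s + x - n * p0 - k)       # upper CUSUM recursion
        if s > h or x > shewhart_limit:        # either rule may signal
            signals.append(t)
    return signals

rng = np.random.default_rng(1)
in_control = rng.binomial(n=100, p=0.05, size=30)
shifted = rng.binomial(n=100, p=0.12, size=10)     # sustained shift after sample 30
counts = np.concatenate([in_control, shifted])
print(shewhart_cusum(counts, n=100, p0=0.05, k=1.5, h=5.0, shewhart_limit=12))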
17

Jordán, Palomar Isabel. "Protocol to manage heritage-building interventions using Heritage Building Information Modelling (HBIM)." Doctoral thesis, Universitat Politècnica de València, 2019. http://hdl.handle.net/10251/128416.

Full text
Abstract:
[ES] Los proyectos de arquitectura patrimonial conllevan trabajo colaborativo entre diferentes agentes tales como arquitectos, ingenieros, arqueólogos, historiadores, restauradores, propietarios, etc. Tradicionalmente cada disciplina ha trabajado de manera independiente generando información dispersa. El flujo de trabajo en los proyectos patrimoniales presenta problemas relacionados con la desorganización de procesos, la dispersión de información y el uso de herramientas obsoletas. Diferentes organizaciones abogan por usar métodos innovadores para tratar de resolver estos problemas. BIM (Building Information Modelling) se ha postulado como una metodología adecuada para mejorar la gestión del patrimonio arquitectónico. La aplicación de BIM a construcciones históricas, denominada HBIM (Heritage BIM), ha probado tener múltiples ventajas para gestionar proyectos patrimoniales. Sin embargo, la literatura científica pone de manifiesto la necesidad de seguir investigando en los procesos de los proyectos patrimoniales, la implementación práctica de HBIM, la simplificación de la laboriosa tarea de modelado HBIM y la documentación de los proyectos HBIM. La finalidad de esta investigación es el desarrollo de un protocolo que ordene la gestión de proyectos patrimoniales usando HBIM y el diseño de una plataforma web que sincronice la información patrimonial. DSR (Design Science Research) es el método de investigación usado para desarrollar dicho protocolo que ayude a mejorar el flujo de trabajo en los proyectos patrimoniales. Las técnicas de investigación usadas han sido el análisis documental, casos de estudio, entrevistas semiestructuradas y grupos focales. Se analizaron los procesos HBIM y se estudiaron los requerimientos de los agentes patrimoniales. Como resultado, se desarrolló el protocolo BIMlegacy, dividido en ocho pasos y contemplando a todos los agentes que participan en proyectos patrimoniales. Dicho protocolo se aplicó en el caso de estudio de Fixby Hall, en Huddersfield (Reino Unido), y sus resultados fueron expuestos en un workshop interdisciplinar para validar y mejorar el protocolo BIMlegacy. Basado en este protocolo, se desarrolló la plataforma BIMlegacy como herramienta para poder llevar a cabo este flujo de trabajo donde agentes interdisciplinares pueden unificar y sincronizar la información patrimonial. Este innovador sistema en la nube conecta la base de datos intrínseca de los programas HBIM con bases de datos patrimoniales usando un plug in para Revit de Autodesk, una web API, un servidor SQL y un portal web. La plataforma BIMlegacy se diseñó como una web de trabajo, pero también como una web de difusión cultural donde el público generalista puede acceder a cierta información de los monumentos. El protocolo y la plataforma BIMlegacy fueron usados para gestionar el proyecto de Registro de San Juan del Hospital. El protocolo, la plataforma y los resultados del proyecto de San Juan del Hospital fueron expuestos en un grupo focal en Valencia con profesionales para su evaluación científica. La contribución teórica de esta investigación ha sido el descubrimiento de problemas en el modelado HBIM que no habían sido especificados antes, beneficios del HBIM (por ejemplo, el uso de plataformas online o el filtrado de información en sistemas HBIM) y requerimientos para implementar HBIM en la práctica tales como la necesidad de un protocolo simple e intuitivo y de ofrecer entrenamiento específico a los agentes no técnicos. 
Las contribuciones prácticas al conocimiento han sido la creación del protocolo BIMlegacy con la lista de agentes patrimoniales y la integración de procesos tradicionales, el diseño de la plataforma BIMlegacy con la sincronización de la información en tiempo real que permite que los agentes no técnicos puedan participar activamente en los modelos HBIM, el uso de HBIM como una herramienta de gestión, y la aportación de información rigurosa volcada por profe
[CAT] Els projectes d`arquitectura patrimonial comporten treballs col·laboratius entre diferents agents tals com arquitectes , enginyers ,arqueòlegs , historiadors, restauradors , propietaris , etc. Tradicionalment cada disciplina ha treballat de manera independent generant informació dispersa. El flux de treball en els projectes patrimonials presenta problemes relacionats amb la desorganització de processos, la dispersió d'informació i l'ús d'eines obsoletes. Diferents organitzacions promouen fer servir mètodes innovadors per a tractar de resoldre aquests problemes i fer del patrimoni cultural un motor de desenvolupament socioeconòmic. BIM (Building Information Modelling) s'ha postulat com una metodologia adequada per millorar la gestió del patrimoni arquitectònic. L'aplicació de BIM a construccions històriques, anomenada HBIM (Heritage BIM), ha demostrat tenir múltiples avantatges per gestionar projectes patrimonials. No obstant això, la literatura científica posa de manifest la necessitat de seguir investigant en els processos dels projectes patrimonials, la implementació pràctica de HBIM, la simplificació de la laboriosa tasca de modelatge HBIM i la documentació dels projectes HBIM. L'objectiu d'aquesta investigació és el desenvolupament d'un protocol que ordeni la gestió de projectes patrimonials usant HBIM i el disseny d'una plataforma web que sincronitzi la informació patrimonial. DSR (Design Science Research) és el mètode d'investigació utilitzat per desenvolupar aquest protocol que ajudi a millorar el flux de treball en els projectes patrimonials. Les tècniques d'investigació utilitzades han estat l'anàlisi documental, entrevistes semi-estructurades i grups focals. També es van analitzar els processos HBIM i es van estudiar els requeriments dels agents patrimonials. HBIM es va proposar com el model virtual que acull la informació patrimonial i que articula els processos. Com a resultat, es va desenvolupar el protocol BIMlegacy, dividit en vuit fases, contemplant a tots els agents que participen en projectes patrimonials. Aquest protocol es va aplicar en el cas d'estudi real de Fixby Hall, a Huddersfield (Regne Unit), i els seus resultats van ser exposats en un workshop interdisciplinari per validar i millorar el protocol. Basat en aquest protocol, el grup de recerca va desenvolupar la plataforma BIMlegacy com a eina per poder dur a terme aquest flux de treball on agents interdisciplinaris poden unificar i sincronitzar la informació patrimonial. Aquest innovador sistema en el núvol connecta la base de dades intrínseca dels programes HBIM amb les bases de dades patrimonials fent servir un plug-in per Revit d'Autodesk, un web API, un servidor SQL i un portal web. La plataforma BIMlegacy es va dissenyar com un web de treball, però també com un web de difusió cultural on el públic generalista pot accedir a certa informació dels monuments. El protocol i la plataforma BIMlegacy van ser utilitzats per gestionar el projecte de Registre de Sant Joan de l'Hospital. El protocol i la plataforma i els resultats del projecte de Sant Joan van ser exposats en un grup focal amb professionals per a la seva avaluació científica a València. 
La contribució teòrica d'aquesta investigació ha estat el descobriment de problemes en el modelatge HBIM que mai havien estat especificats abans, beneficis del HBIM (per exemple l'ús de plataformes en línia, el filtrat d'informació en sistemes HBIM, la integració de la divulgació cultural amb HBIM) i requeriments per implementar HBIM en la pràctica, com ara la necessitat d'un protocol intuïtiu i simple on oferir entrenament específic als agents no tècnics. Les contribucions pràctiques al coneixement han estat la creació del protocol BIMlegacy amb els agents patrimonials i la integració de processos tradicionals,el disseny de la plataforma BIMlegacy amb la sincronització de la informació a temps real que permet que els agents que no son tècnics pugu
[EN] Heritage architectural projects involve collaborative work between different stakeholders, e.g. architects, engineers, archaeologists, historians, restorers, managers, etc. Traditionally, each discipline works independently, generating dispersed data. The workflow in historic architecture projects presents problems related to the lack of clarity of processes, dispersion of information, and the use of outdated tools. Different heritage organisations have showed interest in innovative methods to resolve those problems. Building Information Modelling (BIM) has emerged as a suitable computerised system to improve the management of heritage projects. BIM application to historic buildings, named Heritage Building Information Modelling (HBIM), has shown benefits in managing heritage projects. The HBIM literature highlights the need for further research in terms of the overall processes of heritage projects, its practical implementation, the need of simplifying the laborious modelling task, and need for better standards of cultural documentation. This investigation aims to develop a protocol for heritage project processes using HBIM and an online work platform prototype where interdisciplinary stakeholders can unify and synchronise heritage information. Design Science Research (DSR) is adopted to develop this protocol. Research techniques used include documentary analysis, case studies, semi-structured interviews, participative workshop, and focus groups. An analysis of HBIM processes and a study of heritage stakeholders' requirements were performed through documentary analysis and semi structured interviews with stakeholders involved with relevant monuments. HBIM is proposed as the virtual model which will hold heritage data and will articulate processes. As a result, a simple and visual HBIM protocol, BIMlegacy, was developed. It is divided in eight steps and it contemplates all the stakeholders involved. BIMlegacy was applied in the Fixby Hall case study and its results were evaluated in a workshop with interdisciplinary stakeholders. An online work platform prototype, also named BIMlegacy, was developed, where interdisciplinary stakeholders can unify and synchronise heritage information. This innovative in-cloud system connects the intrinsic HBIM software database with heritage documentary databases using a Revit Autodesk Plug-in, a web Application Program Interface, a Structured Query Language server, and a web portal. BIMlegacy is an online platform to facilitate working but also a cultural diffusion web where general visitors can access to the information of the monuments. The BIMlegacy protocol and platform were implemented in two case studies Fixby Hall in Huddersfield (United Kingdom) and San Juan del Hospital in Valencia (Spain). BIMlegacy and the results of San Juan project were revealed in a workshop and in a focus group with external professionals for its evaluation. This research contributes within the theoretical knowledge highlighting modelling issues that were unknown before, benefits of using HBIM (a.e. the use of online platforms, filtering the information in HBIM database systems, the integration of cultural divulgation with HBIM) and needs in terms of implementing HBIM in practice such as the importance to have a simple and intuitive protocol to be useful and that the non-designer stakeholders require specific HBIM training. 
The practical contributions are the creation of the BIMlegacy protocol with the list of stakeholders and processes, the design of the BIMlegacy platform with the synchronisation of information in real time allowing the non-technical stakeholders to actively participate in HBIM models, the use of HBIM as management system, and the benefit for society and local communities since the rigorous information uploaded by professionals will be accessible to the public.
Jordán Palomar, I. (2019). Protocol to manage heritage-building interventions using Heritage Building Information Modelling (HBIM) [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/128416
18

Gabriel, João Carlos 1964. "Proposição de novo metodo grafico e modelo matematico para determinação das condições de funcionamento de sistemas de filtração rapida com taxa declinante." [s.n.], 2000. http://repositorio.unicamp.br/jspui/handle/REPOSIP/258359.

Full text
Abstract:
Orientador: Carlos Gomes da Nave Mendes
Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Civil
Resumo: Neste trabalho, tem-se como objetivo propor uma metodologia simples, prática e até mesmo didática para a questão, através da utilização de metodologia gráfica, com a qual é possível determinar e visualizar os diferentes níveis operacionais de água e as taxas de filtração de cada um dos filtros da bateria, em condições normais ou durante a lavagem de um deles. Além disso, a partir desse método gráfico, há a propositura de apresentar uma nova modelação matemática, para a determinação algébrica dos parâmetros de projeto de operação de um sistema de filtração com taxa dec1inante variável (SFTDV). Essa proposta baseia-se no uso de dados de perda de carga total de um filtro limpo, do número de filtros da bateria, da taxa média de filtração desejada, bem como do nível máximo que a água pode atingir na caixa do filtro, de modo que não ocorra seu transbordamento , da taxa máxima de filtração, com manutenção da qualidade do efluente filtrado
Abstract: The objective of this work is to propose a simple, practical and didactic methodology for the problem, using a graphical method, which is a tool to estimate the water level variations that will occur during operation and the filtration rates both during normal operation of the filter bank and when the dirtiest filter is backwashed. Besides this, using this graphical method, a new mathematical model is proposed for the algebraic determination of the design and operation parameters of a variable declining rate filtration system, given the clean-media head loss and turbulent head loss coefficients, the number of filters in the bank, the mean filtration rate, and the highest level that the water can reach without overflowing the filter box, or the highest filtration rate of a clean filter that maintains the effluent water quality. The mathematical equations for a bank of n filters are developed and generalised and, with their use, it is hoped to obtain the optimal design and operation conditions for this system, showing the solution for the longest filter run length while maintaining the effluent water quality.
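As a rough illustration of the kind of computation the abstract describes, the sketch below solves for the common available head and the individual filtration rates of a declining-rate filter battery under an assumed head-loss law; the coefficients and the head-loss form are assumptions for the example, not the thesis's model.

```python
# Illustrative sketch: declining-rate filter battery under a common available head H.
# Assumption: each filter i obeys H = k_lam[i]*q + k_turb*q**2, where k_lam grows as
# the filter clogs and k_turb lumps the turbulent (piping) losses. The individual
# rates must add up to the total plant flow, so we solve for H by bisection.
import math

def filter_rate(H, k_lam, k_turb):
    """Filtration rate of one filter for a given available head H."""
    return (-k_lam + math.sqrt(k_lam**2 + 4.0 * k_turb * H)) / (2.0 * k_turb)

def solve_battery(Q_total, k_lam_list, k_turb, H_max=50.0, tol=1e-9):
    """Find the common head H and per-filter rates so that the rates sum to Q_total."""
    lo, hi = 0.0, H_max
    for _ in range(200):
        H = 0.5 * (lo + hi)
        rates = [filter_rate(H, k, k_turb) for k in k_lam_list]
        if sum(rates) > Q_total:
            hi = H
        else:
            lo = H
        if hi - lo < tol:
            break
    return H, rates

# Example: four filters, the first (dirtiest) has the largest clogging coefficient.
H, rates = solve_battery(Q_total=4.0, k_lam_list=[3.0, 2.0, 1.5, 1.0], k_turb=0.2)
print(f"common head = {H:.3f}, rates = {[round(r, 3) for r in rates]}")
```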
Mestrado
Saneamento
Mestre em Engenharia Civil
APA, Harvard, Vancouver, ISO, and other styles
19

Kurauskas, Valentas. "Du atsitiktinių grafų modeliai." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2013~D_20131216_081809-09247.

Full text
Abstract:
Šioje santraukoje trumpai aprašoma V. Kurausko disertacija. Pristatomos abi disertacijos dalys, įvedami atsitiktinių sankirtų grafų ir digrafų modeliai, apibrėžiamos minorinės grafų klasės, suformuluojami sprendžiami uždaviniai bei pateikiami pagrindiniai rezultatai.
This paper summarizes (in Lithuanian) the doctoral dissertation "On two models of random graphs" (in English) by V. Kurauskas. We introduce the random graph models (random intersection graphs, graphs with disjoint excluded minors) studied in the thesis, overview the problems and state the main results.
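One of the two models studied, the random intersection graph, has a simple generative description; the sketch below is a minimal illustration of the standard binomial variant G(n, m, p) and is not code from the thesis.

```python
# Binomial random intersection graph G(n, m, p): each of n vertices independently
# picks each of m attributes with probability p; two vertices are adjacent exactly
# when they share at least one attribute.
import random

def random_intersection_graph(n, m, p, seed=0):
    rng = random.Random(seed)
    attrs = [{a for a in range(m) if rng.random() < p} for _ in range(n)]
    edges = {(u, v) for u in range(n) for v in range(u + 1, n) if attrs[u] & attrs[v]}
    return attrs, edges

attrs, edges = random_intersection_graph(n=100, m=50, p=0.05)
print(len(edges), "edges")
```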
APA, Harvard, Vancouver, ISO, and other styles
20

Stamenkovic, Andrija. "Grafisk stil i Battle Royale-spel: Realistisk och stiliserad stil." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-20528.

Full text
Abstract:
This study was conducted to find out how players perceive graphical styles in relation to the Battle Royale genre in games. The graphical styles in question are the realistic and the stylised style. Two identical 3D models were created, with the difference that one is realistic while the other is stylised. These 3D models were used in an interview study to answer the research question. The study shows that the respondents perceive the two graphical styles differently, but that they regard the stylised style as the one best suited to the Battle Royale genre. The study also shows a weak tendency among younger participants to see the stylised style as genre-appropriate, and that male participants prioritised that the graphical style should help them perform better in the game. The study was conducted on a small test group, and the qualitative method can be seen as subjective. Further research could generalise the field with a quantitative study.
APA, Harvard, Vancouver, ISO, and other styles
21

Eimersson, Simon, and Michael Tran. "Hantverksutveckling : En deltagande aktionsforskning i det 3D grafiska hantverket." Thesis, Blekinge Tekniska Högskola, Institutionen för teknik och estetik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-10743.

Full text
Abstract:
This bachelor's thesis examines an artist's work in a small production. The aim is to give a better understanding of the 3D graphics craft. To investigate the problem area, we apply participatory action research and place it in the context of a game production. The text presents three practical examples dealing with problematic situations for 3D artists. The study describes how personal expression emerges when difficult situations force one to explore one's craft.
APA, Harvard, Vancouver, ISO, and other styles
22

Martínez-Espejo, Zaragoza Isabel. "PRECISIONES SOBRE EL LEVANTAMIENTO 3D INTEGRADO CON HERRAMIENTAS AVANZADAS, APLICADO AL CONOCIMIENTO Y LA CONSERVACIÓN DEL PATRIMONO ARQUITECTÓNICO." Doctoral thesis, Universitat Politècnica de València, 2014. http://hdl.handle.net/10251/37512.

Full text
Abstract:
The aim of the thesis is to analyse new technologies for integrated architectural surveys, studying the advantages and limitations of each in different architectural contexts, providing a global vision and unifying terminology and methodology in the field of architecture and engineering. The new technologies analysed include laser scanning (both time-of-flight and triangulation), image-based 3D modelling and drone-based photogrammetry, along with their integration with classical surveying techniques. With this goal, some case studies in the field of architectural heritage were examined, using different survey techniques with several advanced applications. The case studies enabled us to analyse and study these techniques, making it quite clear that image-based and range-based modelling techniques must be analysed for their integration rather than compared against each other, since integration is essential for the rendering of models with high levels of morphological and chromatic detail. On the other hand, thanks to the experience of the two different faculties (Architecture in Valencia, Spain, and Civil Engineering in Pisa, Italy), besides the issues of interpretation between the two languages, divergence was found between the terminology used by the different specialists involved in the process, be they engineers (although dealing with different branches), architects or archaeologists. It is obvious that each of these profiles has a different view of architectural heritage, general construction and surveys. The current trend of forming multidisciplinary teams working on architectural heritage leads us to conclude that a unified technical terminology in this field could facilitate understanding and integration between the different figures, thus creating a common code.
Martínez-Espejo Zaragoza, I. (2014). PRECISIONES SOBRE EL LEVANTAMIENTO 3D INTEGRADO CON HERRAMIENTAS AVANZADAS, APLICADO AL CONOCIMIENTO Y LA CONSERVACIÓN DEL PATRIMONO ARQUITECTÓNICO [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/37512
TESIS
APA, Harvard, Vancouver, ISO, and other styles
23

Löfstedt, Joachim. "Tillit och trovärdighet inom webbdesign : En stegvis modell för utvärdering av trovärdighet med utgångspunkt från ett kommunikativt perspektiv." Thesis, Linnaeus University, School of Computer Science, Physics and Mathematics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-8392.

Full text
Abstract:

Denna studie undersöker hur privata vårdföretags webbplatser bör utformas för att förmedla en hög trovärdighet mot användare och potentiella kunder. Antalet privata vårdföretag blir allt fler och det blir allt viktigare att synas och framförallt att synas på rätt sätt. Därför väljer många privata vårdföretag att marknadsföra sig med hjälp av en företagswebbplats för att framhålla sig själva och sina produkter och tjänster.

Utifrån en stegvis modell för att utvärdera trovärdighet har ett antal faktorer identifierats som ligger till grund för hur användare uppfattar trovärdighet i sammanhanget. Genom att studera hur personer från målgruppen upplever trovärdighet på ett antal utvalda webbplatser har en djupare förståelse för de framkomna resultaten uppstått. Resultaten visar på att det bör läggas stor vikt på fokusering av målgrupp, bilder och bildspråk, den grafiska utformningen och informationsstrukturen vid utveckling av webbplatser för privata vårdföretag. Genom att tillämpa de faktorer som studien har resulterat i, på ett privat vårdföretags webbplats, finns det större chans till att användare från målgruppen kommer att uppleva en hög trovärdighet i samband med interaktion.


This study examines how a private healthcare company's website should be designed to convey a high level of credibility to users and potential customers. The number of private healthcare companies is increasing and it is important to be visible and above all to be seen properly. Many private healthcare companies promote themselves through a company website to highlight themselves and their products and services.

Based on a stepwise model for evaluating credibility, a number of factors have been identified that underlie how users perceive trust in this context. By studying how people from the target audience perceive credibility on a number of selected websites, a deeper understanding of the results obtained has emerged. The results show that, when developing websites for private healthcare companies, great emphasis should be placed on targeting the audience, on images and imagery, on the graphic design and on the information architecture. By applying the factors identified in this study to the website of a private healthcare company, there is a greater chance that users from the target group will experience a high level of credibility when interacting with it.

APA, Harvard, Vancouver, ISO, and other styles
24

Melkersson, Oskar, and Adam Wretström. "Grafisk modellering som stöd i förstudiefasen : En aktionsforskning om hur grafiska modeller kan underlätta kommunikation mellan utvecklare ochanvändare i en förstudie." Thesis, Linnéuniversitetet, Institutionen för informatik (IK), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-27734.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Tumanova, Natalija. "Netiesinių matematinių modelių grafuose skaitinė analizė." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120720_121639-03735.

Full text
Abstract:
Disertacijoje nagrinėjami nestacionarių matematinių modelių nestandartinėse srityse skaitiniai sprendimo algoritmai. Uždavinio formulavimo sritis yra šakotosios struktūros (ang. branching structures), kurių išsišakojimo taškuose apibrėžiami tvermės dėsniai. Tvermės dėsnių skaitinė analizė ir nestandartinių kraštinių sąlygų analizė skiria nagrinėjamus uždavinius nuo klasikinių aprašytų literatūroje matematinės fizikos uždavinių. Disertacijoje suformuluoti uždaviniai apima skaitinių algoritmų šakotose struktūrose su skirtingais srautų tvermės dėsniais stabilumo ir konvergavimo tyrimą, lygiagrečiųjų algoritmų sudarymą ir taikymą, skaitinių schemų uždaviniams su nelokaliomis integralinėmis sąlygomis tyrimą. Disertacijoje sprendžiami taikomieji neurono sužadinimo ir impulso relaksacijos lazerio apšviestame puslaidininkyje uždaviniai, netiesinio modelio identifikavimo uždavinys. Disertaciją sudaro įvadas, penki skyriai, rezultatų apibendrinimas, literatūros šaltinių sąrašas ir autorės publikacijų disertacijos tema sąrašas. Įvadiniame skyriuje formuluojama problema, aprašytas tyrimų objektas, darbo aktualumas, formuluojami darbo tikslai ir uždaviniai, aprašoma tyrimų metodika, darbo mokslinis naujumas, darbo rezultatų praktinė reikšmė, pateikti ginamieji teiginiai ir disertacijos struktūra. Pabaigoje pristatomi pranešimai konferencijose disertacijos tema. Pirmajame skyriuje pateikta matematinių modelių nestandartinėse srityse arba su nestandartinėmis sąlygomis apžvalga. Antrajame... [toliau žr. visą tekstą]
The numerical algorithms for non-stationary mathematical models in non-standard domains are investigated in the dissertation. The problem definition domain is represented by branching structures with conjugation equations considered at the branching points. The numerical analysis of the conjugation equations and non-classical boundary conditions distinguishes the considered problems from the classical problems of mathematical physics presented in the literature. The scope of the dissertation covers the investigation of the stability and convergence of numerical algorithms on branching structures with different conjugation equations, the construction and implementation of parallel algorithms, and the investigation of numerical schemes for problems with nonlocal integral conditions. The modelling of neuron excitation and of photo-excited carrier decay in a semiconductor, as well as the problem of identifying a nonlinear model, are also considered in the dissertation.
APA, Harvard, Vancouver, ISO, and other styles
26

Marturelli, Leandro Schaeffer. "Fluxo do Vetor Gradiente e Modelos Deformáveis Out-of-Core para Segmentação e Imagens." Laboratório Nacional de Computação Científica, 2006. http://www.lncc.br/tdmc/tde_busca/arquivo.php?codArquivo=9.

Full text
Abstract:
Limitações de memória principal podem diminuir a performance de aplicativos de segmentação de imagens para grandes volumes ou mesmo impedir seu funcionamento. Nesse trabalho nós integramos o modelo das T-Superfícies com um método de extração de iso-superfícies Out-of-Core formando um esquema de segmentação para imagens de grande volume. A T-Superficie é um modelo deformável paramétrico baseado em uma triangulação do domínio da imagem, um modelo discreto de superfície e um threshold da imagem. Técnicas de extração de isso-superfícies foram implementadas usando o método Out-of-Core que usa estruturas kd-tree, chamadas técnicas de Meta-Células. Usando essas técnicas, apresentamos uma versão Out-of-Core de um método de segmentação baseado nas T-Superfícies e em iso-superfícies. O fluxo do Vetor Gradiente (GVF) é um campo vetorial baseado em equações diferenciais parciais. Esse método é aplicado em conjunto com o modelo das Snakes para segmentação de imagens através de extração de contorno. A idéia principal é usar uma equação de difusão-reação para gerar um novo campo de força externa que deixa o modelo menos sensível a inicialização e melhora a habilidade das Snakes para extrair bordas com concavidades acentuadas. Nesse trabalho, primeiramente serão revistos resultados sobre condições de otimização global do GVF e feitas algumas considerações numéricas. Além disso, serão apresentadas uma análise analítica do GVF e uma análise no domínio da frequência, as quais oferecem elementos para discutir a dependência dos parâmetros do modelo. Ainda, será discutida a solução numérica do GVF baseada no método de SOR. Observamos também que o modelo pode ser estendido para Domínios Multiplamente Conexos e aplicamos uma metodologia de pré-processamento que pode tornar mais eficiente o método.
Main memory limitations can lower the performance of segmentation applications for large images or even make them unfeasible. In this work we integrate the T-Surfaces model and Out-of-Core isosurface generation methods in a general framework for the segmentation of large image volumes. T-Surfaces is a parametric deformable model based on a triangulation of the image domain, a discrete surface model and an image threshold. Isosurface generation techniques have been implemented through an Out-of-Core method that uses a kd-tree structure, called the Meta-Cell technique. Using the Meta-Cell framework, we present an Out-of-Core version of a segmentation method based on T-Surfaces and isosurface extraction. The Gradient Vector Flow (GVF) is an approach based on partial differential equations. This method has been applied together with snake models for image segmentation through boundary extraction. The key idea is to use a diffusion-reaction PDE in order to generate a new external force field that makes snake models less sensitive to initialization and improves the snakes' ability to move into boundary concavities. In this work, we first review basic results about global optimization conditions of the GVF and numerical considerations of the usual GVF schemes. In addition, we present an analytical analysis of the GVF and a frequency-domain analysis, which provide elements to discuss the dependency on the parameter values. We also discuss the numerical solution of the GVF based on an SOR method. We observe that the model can be extended to multiply connected domains, and we apply a preprocessing approach in order to increase the GVF efficiency.
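For reference, the standard GVF diffusion-reaction iteration (Xu and Prince) mentioned in the abstract can be sketched in a few lines of numpy; this is a generic illustration, not the SOR-based solution analysed in the work.

```python
# Minimal numpy sketch of the standard GVF iteration: the field (u, v) is diffused
# while being pulled towards the gradient of the edge map where that gradient is strong.
import numpy as np

def laplacian(a):
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a)

def gvf(edge_map, mu=0.2, dt=0.2, iters=200):
    fy, fx = np.gradient(edge_map)      # gradient of the edge map
    mag2 = fx**2 + fy**2                # squared gradient magnitude
    u, v = fx.copy(), fy.copy()         # initialise the field with the gradient itself
    for _ in range(iters):
        u = u + dt * (mu * laplacian(u) - (u - fx) * mag2)
        v = v + dt * (mu * laplacian(v) - (v - fy) * mag2)
    return u, v

# toy edge map: a bright square on a dark background
img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
gy, gx = np.gradient(img)
u, v = gvf(np.hypot(gx, gy))
```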
APA, Harvard, Vancouver, ISO, and other styles
27

Prateat, Jonathan. "Um estudo sobre a aplicação do design como orientador visual." reponame:Repositório Institucional da UFSC, 2013. https://repositorio.ufsc.br/xmlui/handle/123456789/123078.

Full text
Abstract:
Dissertação (mestrado) - Universidade Federal de Santa Catarina, Centro de Comunicação e Expressão, Programa de Pós-Graduação em Design e Expressão Gráfica, Florianópolis, 2013.
Por meio de base teórica esta pesquisa visa levantar fundamentos que possam levar à compreensão da aplicação do design da informação como orientador visual em interfaces de redes sociais. O estudo fez levantamento de aspectos relativos aos temas redes sociais, design da informação, modelos mentais, usabilidade e cognição. Sobre redes sociais visa dar a compreensão de sua relevância na atual sociedade. Sobre o design da informação foi em busca das formas como ele se aplica a interfaces digitais, bem como seu objetivo nas mesmas. Em modelos mentais foram levantados aspectos que pudessem compreender a relação do usuário com a interface. Usabilidade trouxe pontos que podem ser relevantes para o desenvolvimento de interfaces que se comuniquem com os usuários. Cognição apresentou alguns aspectos de como os usuários captam as informações nas interfaces. Após os estudos teóricos duas redes sociais foram observadas com objetivo de verificar se os estudos podem se aplicar a elas.

Abstract: On a theoretical basis, this research aims to establish fundamentals that can lead to an understanding of the application of information design as a visual guide in social network interfaces. The study surveyed aspects relating to the topics of social networks, information design, mental models, usability and cognition. The discussion of social networks aims to convey their relevance in today's society. Regarding information design, the study examined the ways in which it applies to digital interfaces, as well as its purpose in them. In mental models, aspects were raised that help to understand the user's relationship with the interface. Usability brought points that may be relevant to the development of interfaces that communicate with users. Cognition presented some aspects of how users take in information from interfaces. After the theoretical studies, two social networks were observed in order to verify whether the findings can be applied to them.
APA, Harvard, Vancouver, ISO, and other styles
28

Opletal, Martin. "Vývoj 3D aplikací v prostředí Blender." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-235410.

Full text
Abstract:
The thesis focuses on the evaluation of available tools for 3D application development. A computer game is created in one selected tool (Blender). This game demonstrates Blender's capabilities. The creation process is recorded in a case study for the possible purpose of developing one's own complex games.
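As a small illustration of the scripting side that makes Blender usable as a 3D application development environment, the sketch below uses Blender's Python API (bpy) to create two placeholder objects; it assumes a recent Blender version (2.8x or later), must be run inside Blender, and is not taken from the thesis's case study.

```python
# Minimal sketch of Blender's Python API (bpy): create two placeholder game assets.
import bpy

# add a cube as a stand-in for the player object and rename it
bpy.ops.mesh.primitive_cube_add(size=2.0, location=(0.0, 0.0, 1.0))
player = bpy.context.active_object
player.name = "Player"

# add a ground plane below it
bpy.ops.mesh.primitive_plane_add(size=20.0, location=(0.0, 0.0, 0.0))
bpy.context.active_object.name = "Ground"
```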
APA, Harvard, Vancouver, ISO, and other styles
29

Fontana, Francesco. "Recupero e ripristino di una centrale dismessa sul fiume Vergari a Mesoraca (KR): valutazione della producibilità idroelettrica." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/12595/.

Full text
Abstract:
This thesis develops a hydrological study and a preliminary estimate of the hydroelectric energy production associated with the restoration of a small historic power plant, located on the left bank of the Vergari river at Mesoraca (KR) and disused for several decades. This objective was achieved, despite the absence of streamflow data for the watercourse under study, through a reconstruction of the flow duration curve (FDC). To estimate the FDC in the absence of streamflow records, suitable techniques of regionalisation of hydrological information had to be adopted. In particular, reference was made to the so-called "Graphical Method" (Smakhtin et al., 1997). This technique uses as input the observations of natural discharge collected at existing gauging stations in neighbouring basins together with the hydrological descriptors of those basins. Finally, assuming the reactivation of the plant, the amount of water usable for hydroelectric production was estimated in compliance with the environmental constraints imposed by the Calabria Region (minimum vital flow), together with the plant's energy production and the demand it would be able to meet under different climatic and hydrological conditions. Three scenarios were considered: a hydrologically typical year, describing the average flow regime; a dry year, describing a minimum-production scenario; and an average year referring to a long-term scenario, which accounts for climatic variability over a period of more than a decade.
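To illustrate the kind of calculation such a study involves, the sketch below builds a flow duration curve from a synthetic daily flow series and estimates a plant's producible energy given a minimum environmental flow and a maximum turbine intake; the head, efficiency and flow values are assumptions made for the example only and do not come from the thesis.

```python
# Illustrative sketch: flow duration curve and producible energy for a small plant.
import numpy as np

def flow_duration_curve(daily_flows):
    """Return exceedance probabilities (%) and the flows sorted in decreasing order."""
    q = np.sort(np.asarray(daily_flows))[::-1]
    exceedance = 100.0 * np.arange(1, q.size + 1) / (q.size + 1)
    return exceedance, q

def annual_energy(daily_flows, q_dmv, q_max, head_m, efficiency=0.85):
    rho_g = 1000.0 * 9.81                                   # W per (m3/s * m)
    q_usable = np.clip(np.asarray(daily_flows) - q_dmv, 0.0, q_max)
    power_w = efficiency * rho_g * q_usable * head_m        # instantaneous power per day
    return power_w.sum() * 24.0 / 1e6                       # MWh over the series

flows = np.random.default_rng(1).gamma(shape=2.0, scale=1.5, size=365)  # synthetic m3/s
exc, q = flow_duration_curve(flows)
print("flow exceeded 95% of the time:", round(np.interp(95, exc, q), 2), "m3/s")
print("producible energy:", round(annual_energy(flows, q_dmv=0.5, q_max=4.0, head_m=12.0), 1), "MWh")
```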
APA, Harvard, Vancouver, ISO, and other styles
30

Guixeres, Provinciale Jaime. "Estudio y caracterización de la respuesta fisiologíca y metabólica del niño obeso en reposo y esfuerzo." Doctoral thesis, Universitat Politècnica de València, 2014. http://hdl.handle.net/10251/39342.

Full text
Abstract:
Background: To date there are quite a few studies on the metabolic and cardiovascular response in adults, and somewhat fewer in children, but very few have focused on investigating the cardiorespiratory and metabolic alterations involved in childhood obesity, a true epidemic of the 21st century. New markers need to be established in order to adapt the treatments in this field. Objective: To improve and advance the knowledge needed to propose new methods for estimating the physical activity patterns of obese children in ambulatory and clinical situations, and to study their cardiorespiratory response at rest and during exercise, in order to generate markers that help characterise obese children better and thereby increase the effectiveness of the treatment applied. Methods: Three types of studies were carried out. During the studies, a set of physiological signals (electrocardiogram (ECG), breathing rate (BR), pulse oximetry (SpO2)) and accelerometry signals (ACC) were measured by a new sensing platform which, in its latest version, has been integrated into a smart textile (TIPS shirt). Together with clinical data, the TIPS measurements were correlated with the metabolic consumption and the ventilatory response measured with a calorimeter. In study A, 27 obese children and 29 normal-weight children were measured to determine and analyse basal metabolism. In study B, the energy expenditure of a group of 61 obese children and 31 normal-weight children who completed an exercise test and a series of everyday resting activities was measured. Finally, in study C, 60 obese and 40 normal-weight children completed an exercise and recovery test to analyse the cardiorespiratory response. All the studies were carried out at the Hospital General Universitario de Valencia and at the Max UAB school in Valencia. Results: In study A the measured basal metabolism values were compared with the values predicted by models from the literature, and a high variability was found in the predictions, which advises measuring this parameter directly. In addition, significant differences were observed between the obese group and the normal-weight group in the autonomic response at rest. In study B, the models combining heart rate and accelerometry showed the strongest relationship with the metabolic variables measured by the indirect calorimetry equipment, and in study C significant differences were found in the recovery from maximal exertion of the obese children compared with the normal-weight children, especially in the recovery of parasympathetic tone. Conclusion: This Thesis has shown that the inclusion of certain clinical tests and of new processing techniques for the measurements performed will increase the effectiveness of the intervention in the treatment of obese children. The basal metabolism measurements carried out show that each subject's basal metabolic response needs to be determined by an actual measurement and not through predictive models.
Furthermore, performing an exercise measurement has made it possible to determine how to characterise the subject's individual metabolic and cardiovascular response with ambulatory systems that are less expensive and easier to handle than the calorimeters usually employed in exercise tests, especially by analysing the parameters of autonomic dysfunction that have emerged in this Thesis and by applying the new models for predicting metabolic consumption derived exclusively from an obese population using the cardiac and accelerometry signals. These indicators, extracted from this type of test, will help health professionals adapt and personalise the intervention to be applied in each clinical case.
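As an illustration of the kind of model described above (combining heart rate and accelerometry to predict energy expenditure), the sketch below fits a simple linear model to synthetic data; the data and coefficients are invented for the example and are not the thesis's models, which were fitted to measurements referenced against indirect calorimetry.

```python
# Illustrative sketch: predicting energy expenditure from heart rate and accelerometry.
import numpy as np

rng = np.random.default_rng(0)
n = 200
hr = rng.uniform(80, 170, n)          # heart rate, beats per minute (synthetic)
acc = rng.uniform(0, 3, n)            # accelerometer counts, arbitrary units (synthetic)
ee = 1.5 + 0.04 * (hr - 80) + 1.2 * acc + rng.normal(0, 0.3, n)   # kcal/min (synthetic)

X = np.column_stack([np.ones(n), hr, acc])          # intercept + two predictors
coef, *_ = np.linalg.lstsq(X, ee, rcond=None)       # ordinary least squares fit
pred = X @ coef
r2 = 1 - np.sum((ee - pred) ** 2) / np.sum((ee - ee.mean()) ** 2)
print("coefficients:", np.round(coef, 3), " R^2:", round(r2, 3))
```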
Guixeres Provinciale, J. (2014). Estudio y caracterización de la respuesta fisiologíca y metabólica del niño obeso en reposo y esfuerzo [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/39342
TESIS
APA, Harvard, Vancouver, ISO, and other styles
31

Tumanova, Natalija. "The Numerical Analysis of Nonlinear Mathematical Models on Graphs." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2012. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2012~D_20120720_121648-24321.

Full text
Abstract:
The numerical algorithms for non-stationary mathematical models in non-standard domains are investigated in the dissertation. The problem definition domain is represented by branching structures with conjugation equations considered at the branching points. The numerical analysis of the conjugation equations and non-classical boundary conditions distinguishes the considered problems from the classical problems of mathematical physics presented in the literature. The scope of the dissertation covers the investigation of the stability and convergence of numerical algorithms on branching structures with different conjugation equations, the construction and implementation of parallel algorithms, and the investigation of numerical schemes for problems with nonlocal integral conditions. The modelling of neuron excitation and of photoexcited carrier decay in a semiconductor, as well as the problem of identifying a nonlinear model, are also considered in the dissertation.
Disertacijoje nagrinėjami nestacionarių matematinių modelių nestandartinėse srityse skaitiniai sprendimo algoritmai. Uždavinio formulavimo sritis yra šakotosios strukturos (ang. branching structures), kurių išsišakojimo taškuose apibrežiami tvermės dėsniai. Tvermės dėsnių skaitinė analizė ir nestandartinių kraštinių sąlygų analizė skiria nagrinėjamus uždavinius nuo klasikinių aprašytų literatūroje matematinės fizikos uždaviniu. Disertacijoje suformuluoti uždaviniai apima skaitinių algoritmų šakotose struktūrose su skirtingais srautų tvermės dėsniais stabilumo ir konvergavimo tyrimą, lygiagrečiųjų algoritmų sudarymą ir taikymą, skaitinių schemų uždaviniams su nelokaliomis integralinėmis sąlygomis tyrimą. Disertacijoje sprendžiami taikomieji neurono sužadinimo ir impulso relaksacijos lazerio apšviestame puslaidininkyje uždaviniai, netiesinio modelio identifikavimo uždavinys.
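A toy illustration of the setting described above: an explicit finite-difference heat equation on a star graph of three edges, with continuity and a zero-net-flux (Kirchhoff-type) conservation condition at the junction. This is only a minimal sketch of the kind of branching-structure problem involved, not one of the schemes analysed in the dissertation.

```python
# Explicit heat equation on three edges joined at one node.
import numpy as np

M, h, dt, steps = 20, 1.0 / 20, 1e-4, 2000
edges = [np.zeros(M + 1) for _ in range(3)]   # u[0] is the junction end of each edge
edges[0][:] = 1.0                             # initial heat only on the first branch

for _ in range(steps):
    new = []
    for u in edges:
        un = u.copy()
        un[1:-1] = u[1:-1] + dt / h**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        un[-1] = 0.0                          # Dirichlet condition at the outer ends
        new.append(un)
    # junction: continuity plus zero net outgoing flux, approximated by
    # sum_k (u_k[1] - u_0)/h = 0, i.e. the junction value is the mean of the
    # first interior points of the three branches
    junction = np.mean([u[1] for u in new])
    for u in new:
        u[0] = junction
    edges = new

print("junction temperature:", round(edges[0][0], 4))
```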
APA, Harvard, Vancouver, ISO, and other styles
32

Pozzoli, V. "IL SISTEMA DELL'EDITORIA D'ARTE CONTEMPORANEA NELLA MILANO DEGLI ANNI TRENTA." Doctoral thesis, Università degli Studi di Milano, 2018. http://hdl.handle.net/2434/542250.

Full text
Abstract:
Oggetto della ricerca è stato lo studio e l’analisi del sistema dell’editoria d’arte contemporanea nella Milano degli anni trenta, a partire da una mappatura della produzione libraria specializzata uscita lungo il decennio di cui si sono messe a fuoco forme, meccanismi e protagonisti. Il lavoro ha avuto una frase preliminare di individuazione dei materiali di studio, di strutturazione dell’ambito e dei campi di ricerca, in una prospettiva storiografica sostanzialmente inedita, al confine tra la storia dell’arte e dell’editoria, in cui si intrecciano le dinamiche della promozione artistica e del suo consumo, del mercato editoriale e della filiera del libro. Le peculiarità dell’edizione d’arte, dal suo profilo materiale al pubblico a cui è indirizzata, ne fanno un prodotto con caratteristiche e problematiche distinte nel quadro allargato dell’industria editoriale. Tale specificità negli anni trenta si innesta in un dibattito cruciale sull’identità dell’arte contemporanea, prefigurando un quadro storico nuovo rispetto al periodo precedente in cui si assiste alla significativa fioritura di iniziative editoriali inedite tese alla codificazione e divulgazione dei valori della cultura figurativa del presente. L’intero studio si è fondato sulla mappatura sistematica delle pubblicazioni date alle stampe tra il 1929 e il 1943 – arco cronologico individuato come il più congruente ai fini dell’indagine – condotta sulla base dell’analisi di fonti d’epoca specializzate quali guide bibliografiche, bollettini, cataloghi di vendita dei libri, nonché i registri di carico delle biblioteche di settore. Il recupero, l’esame diretto e la schedatura delle singole edizioni attraverso parametri specifici, messi a punto tenendo conto della natura del libro d’arte e in particolare della centralità delle fotoriproduzioni nella filiera produttiva, ha portato alla realizzazione di un database, confluito in un repertorio organizzato in schede tecniche e indici delle presenze editoriali. I risultati scaturiti da questo ampio censimento hanno orientato la ricerca verso i grandi nodi del sistema produttivo, dei generi letterari emergenti e dei procedimenti di riproduzione e di stampa delle immagini, tra teoria e agganci ai testi, alle fonti a stampa e alla documentazione d’archivio, allargando il complessivo campo di indagine a una comparazione con la rispettiva produzione editoriale italiana e straniera coeva. La tesi si articola dunque in tre grandi parti, introdotte da un tentativo di definizione delle forme del libro d’arte contemporanea e da una verifica dell’andamento della produzione editoriale. Quest’ultima ha messo in luce l’esistenza di una periodizzazione interna agli estremi cronologici legata a doppio filo a una molteplicità di dinamiche in atto che incidono in modo diretto sull’editoria di settore, tra le quali il consolidamento di un nuovo collezionismo, i contestuali svolgimenti sul piano della politica delle arti e i progressi tecnici dell’industria grafica sono solo alcune delle più eloquenti. L’analisi del sistema editoriale si è rivolta anzitutto a definirne gli attori, vale a dire le figure direttamente coinvolte nella filiera produttiva, restituendo per la prima volta una mappa strutturata degli editori, dei fotoincisori, dei tipografi e stampatori. Tra le prerogative del settore spicca infatti la frammentazione dei soggetti imprenditoriali dovuta all’elevato standard di specializzazione richiesto dal libro illustrato, alla cui realizzazione concorrono necessariamente professionalità diverse. 
A monte, nel panorama degli editori è emersa una sensibile diversificazione tradotta in una sostanziale permeabilità al tessuto delle gallerie e del mercato e a quello delle riviste, permettendo di riconsiderare luoghi che per la storiografia vivono separati. Spostando l’obiettivo sulla produzione editoriale, ovvero sui libri oggetto d’indagine, si sono discusse le problematiche connesse alle forme della divulgazione dei nuovi valori figurativi, a fronte di un processo mobile teso a una loro compiuta definizione. Una prospettiva aderente alla mappatura, che facesse leva su uno sguardo d’insieme e sui molteplici aspetti del prodotto librario considerati nella ricerca, ha inteso mettere a fuoco i generi emergenti, dal libro-catalogo, al panorama, alle collane di monografie d’artista, riflettendo sulla loro fortuna, tra scarti, continuità ed elementi innovativi, anche attraverso un confronto con i modelli internazionali. Una parte centrale del lavoro è stata dedicata, infine, alla disamina dei diversi procedimenti fotomeccanici di riproduzione e di stampa impiegati nella realizzazione dei libri, un problema nodale, connaturato alle specificità stesse dell’editoria d’arte – fondata sulle riproduzioni e sulla loro mise en page – che tuttavia, allo stato degli studi, risulta sostanzialmente trascurato dalla storiografia. Attente alle attrezzature tecniche, ai passaggi di lavorazione e ai risultati grafici, le ricerche svolte hanno confermato il valore di questo filone di indagine, mettendo in luce il peso che negli anni trenta le innovazioni tecnologiche nel settore hanno giocato nel determinare non solo la fisicità e la grammatica delle immagini, e dunque la ricezione dell’arte, ma le stesse forme editoriali. Novità significative sono emerse, in particolare, in relazione alla riproduzione a colori e ai suoi rinnovati sistemi in rapida ascesa commerciale, come il fotocolor, di cui è stata ricostruita la prima diffusione. Il repertorio finale delle schede tecniche delle singole pubblicazioni, integrato da indici ed elenchi, tra cui i cataloghi completi delle collezioni editoriali, è presentato in appendice.
The research aimed at studying and analysing the contemporary art publishing system in Milan during the Thirties, on the basis of a mapping of the specialised book production with a major focus on its forms, mechanisms and leading figures. The work spanned a preliminary phase designed to identifying the study materials, to defining the research boundaries and fields in a quite unusual historiographical perspective, poised between history of art and publishing, on a ground where the dynamics related to the artistic promotion and its consumption, to the publishing market and to the book production chain are mutually intertwined. Because of their peculiarities, such as the material profile as well as the target audience, art books prove to be products with distinct features and issues within the publishing industry. In the Thirties, such specificity interact with a crucial debate on the identity of contemporary art, prefiguring a new historical context characterised by the unprecedented development of editorial initiatives aimed at the codification and dissemination of the values of the current figurative culture. The entire study was based on the systematic mapping of publications issued between 1929 and 1943 – a chronological arc identified as the most congruent for the purposes of the investigation – carried out according to the analysis of coeval sources, such as bibliographic guides and bulletins, book sales catalogues, specialised libraries records. The evaluation and cataloguing of each editions took into account specific parameters selected depending on the nature of art books and more specifically considering the central role of photomechanical reproductions in the production chain. All information was gathered in a database converted into an organised repertory with technical entries and indexes of editorial presences. The results of this broad census led the research considering the great issues of the productive system, the emerging literary genres, and the reproduction and printing processes of images, making references to theory as well as to texts, printed and archive sources, until broadening the field of investigation to include a comparison with the related Italian and foreign coeval publishing. The thesis is thus divided into three main parts, introduced by an attempt to define the forms of contemporary art books and by a check of the performance of editorial production. This section shed light on an existing periodization within the chronological extremes closely intertwined with a wide variety of ongoing dynamics directly affecting the publishing sector, among which the consolidation of a new collecting, the contextual developments in the field of art politics and the technological advances of the graphic industry are just some of the most relevant. The analysis of the publishing system chiefly looked to define the players, actually the figures personally involved in the production chain, thus outlining for the first time a structured map of publishers, photoengravers, typographers and printers. In fact, among the prerogatives of the sector, one which undoubtedly stands out is the fragmentation of the entrepreneurial figures ascribable to the high specialization standards which illustrated books require and to the creation of which contribute multiple professional profiles and skills. 
The panorama of the publishers itself revealed a remarkable diversification corresponding to a consistent permeability to the context of the galleries and the art market as well as to the context of magazines, thus making it possible to reconsider places which according to historiography live apart. Shifting the goal to the publishing production, namely to the books object of the investigation, the work addressed the issues related to the forms of dissemination of new figurative values, in response to the coeval ongoing process aiming at their accomplished definition. In keeping with the mapping, and tapping into a comprehensive overview and a wealth of aspects typical of the product book considered in the research, the perspective aimed at highlighting the emerging genres, such as the book-catalogue, the panorama, the series of artist monographs, while reflecting on their fortune, among gaps, drifts, continuity and innovative elements, also based on a comparison with international models. Finally, a key part of the work consisted in examining the photomechanical reproduction and printing processes employed in the production of books: a crucial issue inherently belonging to the specificity of art publishing – based on reproductions and their mise en page – that, however, appears largely overlooked by historiography. Mindful of the technical equipment, the processing steps and the graphic results, the research carried out confirmed the value of this investigation line, while shedding light on the role that the technological innovations achieved in the Thirties played not only in determining the materiality and grammar of images, and hence the reception of art, but the very publishing forms. Significantly new features emerged, in particular, in relation to the colour reproduction and its renovated, quickly booming commercial systems, such as fotocolor, whose first diffusion this work retraced. The appendix presents the final catalogue of books entries as well as indexes and lists, including the complete listing of publishing series.
APA, Harvard, Vancouver, ISO, and other styles
33

Salvador, García Elena. "PROTOCOLO HBIM PARA UNA GESTIÓN EFICIENTE DEL USO PÚBLICO DEL PATRIMONIO ARQUITECTÓNICO." Doctoral thesis, Universitat Politècnica de València, 2020. http://hdl.handle.net/10251/146811.

Full text
Abstract:
[ES] El mayor desafío en la gestión del uso público del patrimonio es establecer una relación sostenible entre patrimonio y turismo, ya que el acceso público, si bien promueve el interés social por su conservación, también representa un riesgo para la preservación de los recursos. La información que generan los equipos multidisciplinares que intervienen en la gestión del uso público generalmente se encuentra incompleta, descoordinada y desactualizada. La falta de una fuente de información fiable genera bajos niveles de eficiencia en la gestión del uso público poniendo en riesgo la preservación de los recursos del impacto de los visitantes y reduciendo el interés social por su conservación. Heritage Building Information Modelling (HBIM) es un sistema de trabajo colaborativo donde los agentes involucrados comparten información geométrica, semántica y documental del bien patrimonial de forma coordinada. HBIM se presenta como oportunidad para mejorar la eficiencia de la gestión del uso público del patrimonio. Considerando el previsible crecimiento del uso de HBIM en España en un futuro próximo, el objetivo de esta investigación es desarrollar, por primera vez, un protocolo HBIM que ayude a los profesionales a implementar HBIM para planificar y gestionar más eficientemente el uso público del patrimonio en sus cuatro ámbitos: la conservación preventiva, la gestión de visitantes, la interpretación del patrimonio y la divulgación del patrimonio. El método de investigación empleado es el Design Science Research (DSR en adelante) o investigación de las Ciencias del Diseño. Así pues, el estudio se inició con la revisión exhaustiva de la literatura científica relativa al uso de HBIM para la gestión del uso público del patrimonio, lo que permitió identificar la laguna del conocimiento actual en esta materia. Para analizar la gestión actual del uso público del patrimonio se tomaron tres casos de estudio y se recogieron datos mediante la técnica de la entrevista semiestructurada y la observación directa de la visita pública. El análisis de la planificación de los cuatro ámbitos del uso público se realizó a partir de los datos obtenidos mediante la técnica de la entrevista semiestructurada y el análisis de documentación técnica específica. Los resultados de estos análisis evidenciaron problemas de ineficiencia en la planificación y gestión del uso público actual. Con el fin de darle una solución a este problema, se desarrolló un Protocolo HBIM para planificar y gestionar el uso público de manera más eficiente. Dos de los aspectos del Protocolo HBIM, la gestión de visitantes y la interpretación del patrimonio, se implementaron satisfactoriamente al caso de estudio del conjunto de San Juan del Hospital de València. Por último, se evaluó la aplicabilidad y utilidad del protocolo con un panel de expertos en la gestión cultural del caso de estudio, en cada ámbito del uso público y en BIM. Los resultados de la implementación del Protocolo HBIM al caso de estudio del conjunto de San Juan del Hospital de València, demuestran por primera vez que HBIM y, en particular, el software Revit puede ser una herramienta útil para analizar, planificar y también para gestionar más eficientemente las visitas públicas de los bienes patrimoniales. 
Este estudio evidencia que la capacidad de HBIM de unificar la información generada por los distintos agentes involucrados en la conservación del patrimonio facilita la toma de decisiones para el diseño del itinerario turístico, la gestión del flujo de visitantes y la determinación de la capacidad de carga recreativa de una manera más integral. Estos resultados han permitido identificar futuras líneas de investigación orientadas a la gestión de visitantes en tiempo real gracias a la vinculación de sensores o dispositivos GPS a los modelos HBIM y encaminadas a refinar el Protocolo HBIM mediante su aplicación a mayores casos de estudio
[CAT] El major repte en la gestió de l'ús públic del patrimoni és establir una relació sostenible entre patrimoni i turisme, ja que l'accés públic, si bé promou l'interès social per la seua conservació, també representa un risc per a la preservació dels recursos. La informació que generen els equips multidisciplinaris que intervenen en la gestió de l'ús públic generalment es troba incompleta, desactualitzada i poc coordinada. L'absència d'una font d'informació fiable genera nivells baixos d'eficiència en la gestió de l'ús públic, posant en risc la preservació dels recursos front a l'impacte dels visitants i reduint l'interès social per la seua conservació. Heritage Building Information Modelling (HBIM) és un sistema de treball col·laboratiu on els agents involucrats comparteixen informació geomètrica, semàntica i documental de cada bé patrimonial de forma coordinada. HBIM es presenta com una oportunitat per a millorar l'eficiència de la gestió de l'ús públic del patrimoni. Considerant el previsible creixement de l'ús d'HBIM en Espanya en un futur pròxim, l'objectiu d'esta investigació és desenvolupar, per primera vegada, un protocol HBIM que ajude als professionals a implementar HBIM per a planificar i gestionar més eficientment l'ús públic del patrimoni en els seus quatre àmbits: la conservació preventiva, la gestió de visitants, la interpretació del patrimoni i la divulgació del patrimoni. El mètode d'investigació empleat és el Design Science Research (DSR en endavant) o investigació de les ciències del disseny. D'aquesta manera, l'estudi es va iniciar amb la revisió exhaustiva de la literatura científica relativa a l'ús de HBIM per a la gestió de l'ús públic del patrimoni, el que va permetre identificar la llacuna del coneixement actual en esta matèria. Per a analitzar la gestió actual de l'ús públic del patrimoni es van prendre tres casos d'estudi i es van recollir dades mitjançant la tècnica de l'entrevista semiestructurada i l'observació directa de la visita pública. L'anàlisi de la planificació dels quatre àmbits de l'ús públic es va realitzar a partir de les dades obtingudes mitjançant la tècnica de l'entrevista semiestructurada i l'anàlisi de documentació tècnica específica. Els resultats d'estos anàlisis van evidenciar problemes d'ineficàcia en la planificació i gestió de l'ús públic actual. Amb la finalitat de donar una solució a este problema, es va desenvolupar un Protocol HBIM per a planificar i gestionar l'ús públic d'una manera més eficient. Dos dels aspectes del Protocol HBIM, la gestió de visitants i la interpretació del patrimoni, es van implementar satisfactòriament en el cas d'estudi del conjunt de Sant Joan de l'Hospital de València. Per últim, es va avaluar l'aplicabilitat i utilitat del protocol amb un panell d'experts en la gestió cultural del cas d'estudi, en cada àmbit de l'ús públic i en BIM. Els resultats de la implementació del Protocol HBIM al cas d'estudi del conjunt de Sant Joan de l'Hospital de València, demostren per primera vegada que HBIM i, en particular, el software Revit pot ser una eina útil per a analitzar, planificar i també per a gestionar més eficientment les visites públiques dels béns patrimonials. Este estudi evidencia que la capacitat d'HBIM d'unificar la informació generada pels distints agents involucrats en la conservació del patrimoni facilita la presa de decisions per al disseny de l'itinerari turístic, la gestió del flux de visitants i la determinació de la capacitat de càrrega recreativa d'una manera més integral. 
Estos resultats han permès identificar futures línies d'investigació orientades a la gestió de visitants en temps real gràcies a la vinculació de sensors o dispositius GPS als models HBIM i encaminades a refinar el Protocol HBIM mitjançant la seua aplicació a casos majors d'estudi.
[EN] The greatest challenge to be overcome in managing the public use of heritage is to establish a sustainable relationship between heritage and tourism, since public access, while promoting social interest in its conservation, also represents a risk for the preservation of the assets. The information generated by the multidisciplinary teams involved in public use management is generally incomplete, uncoordinated and out of date. The lack of a reliable source of information generates low levels of efficiency in such management, which consequently jeopardises the ability to protect the resources against the impact of visitors and reduces social interest in their conservation. Heritage Building Information Modelling (HBIM) is a collaborative work system in which the stakeholders involved share geometric, semantic and documentary information about the heritage asset in a coordinated way. It offers an opportunity to improve the efficiency of the management of the public use of heritage. Bearing in mind the expected growth in the use of HBIM in Spain in the near future, the aim of this research is to develop, for the first time, an HBIM protocol that will help professionals to implement HBIM so as to achieve more efficient planning and management of the public use of heritage in the four areas involved in it, that is, preventative conservation, visitor flow management, heritage interpretation and heritage dissemination. The research method used for this purpose is Design Science Research (hereinafter, DSR). Thus, the study began with a comprehensive review of the literature on the use of HBIM for the management of the public use of heritage, which revealed the knowledge gap that exists in this area. To analyse the current management of the public use of heritage, three case studies were taken and data were collected using the semi-structured interview technique and direct observation of public visitation. The planning of the four areas of public use was analysed based on the data obtained through the semi-structured interviews and the analysis of specific technical documentation. The results of these analyses revealed problems of inefficiency in the current public use planning and management. In order to provide a solution to this problem, an HBIM Protocol was developed that enables public use to be planned and managed more efficiently. Two aspects of the HBIM Protocol, visitor management and heritage interpretation, were successfully implemented in the case study of the San Juan del Hospital ensemble in Valencia. Lastly, the applicability and usefulness of the protocol were evaluated with the collaboration of a panel of experts in the cultural management of the case study, in each area of public use and in BIM. The results from implementing the HBIM Protocol to the case study of the San Juan del Hospital complex in Valencia show for the first time that HBIM and, in particular, the Revit software package can be a useful tool for a more efficient analysis, planning and management of public visitation to heritage assets. This study shows that the capacity of HBIM to unify the information generated by the different stakeholders involved in the conservation of heritage facilitates the decision-making required to design the tourist itinerary, to manage the visitor flows and to determine the recreational carrying capacity in a more comprehensive manner. 
These results have made it possible to identify future lines of research focused on achieving visitor flow management in real time by linking sensors or GPS devices to HBIM models, while also seeking to refine the HBIM Protocol by applying it to larger case studies.
Salvador García, E. (2020). PROTOCOLO HBIM PARA UNA GESTIÓN EFICIENTE DEL USO PÚBLICO DEL PATRIMONIO ARQUITECTÓNICO [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/146811
TESIS
APA, Harvard, Vancouver, ISO, and other styles
34

Kičas, Rolandas. "Gamybinių užsakymų projektavimo ir valdymo programinė įranga." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2005. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2005~D_20050527_144237-30018.

Full text
Abstract:
This master's thesis analyses the principal manufacturing order design and management methods and schemas. They are practically applied in developing software for managing a product's graphical-informational model, feature management, the specification and description of technological data and rules, and the creation and estimation of the product's material list. The developed software is suited to both large and small manufacturing companies that specialise in the production of plastic or wooden windows, doors, garage gates and glass fillings. The system may be adjusted to various manufacturing technologies. The following software development methods and technologies were applied during system design and implementation: the Dynamic System Development Method (DSDM), Object-Oriented (OO) Design and Programming, the Component-Based System Engineering Method (CBSE), Computer-Aided Design/Computer-Aided Manufacturing methods, and the eXtensible Markup Language (XML). The created software was tested with real technological data. The system satisfies its functional and non-functional specification. Experiments show that the product is more flexible than its previous version.
APA, Harvard, Vancouver, ISO, and other styles
35

Ekman, Jonas. "Evaluation of HCI models to control a network system through a graphical user interface." Thesis, KTH, Data- och elektroteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208951.

Full text
Abstract:
SAAB has a project under development for a network system with connected nodes, where the nodes are both information consumers and producers of different communication types. A node is a piece of equipment or an object used by the army, e.g. a soldier, a military hospital or a UAV. The nodes function as part of a mission, e.g. a mission can be "Defend Gotland". The aim of this project is that the user will rank the different missions from the highest priority to the lowest. This affects the network in such a way that communication between the nodes in the highest-ranked mission is prioritised over communication between the underlying missions. A user can rank the missions via the GUI and then set the associated settings for them. Via the GUI the user should be able to work at three different levels. The first is to plan upcoming missions. The second is to see in real time whether the system delivers the desired conditions. The last is to simulate whether the system can deliver the desired conditions. This thesis investigated various HCI models that could be used to create a GUI that reduces the risk of a user configuring the system incorrectly. The study showed that there are no HCI models that take account of misconfigurations, and therefore a new model was created. The new model was used and evaluated by creating a prototype of a GUI for SAAB's project, which was tested on a potential user. The test showed that the new model reduced the risk of misconfigurations.
SAAB har ett projekt för utveckling av ett nätverkssystem med anslutande noder, med noder som kan vara både informationsproducent och konsument för olika kommunikationstyper. En nod är en sak eller ett objekt inom försvaret t.ex. kan det vara en soldat, militärt sjukhus eller en obemannad farkost. Varje nod tillhör ett uppdrag, tex att försvara Gotland. Målet med projektet är att man ska kunna gradera de olika uppdragen och därmed gradera vilken prioritet dessa noder har i nätet. Noder som tillhör ett uppdrag med hög gradering kommer prioriteras över de underliggande uppdragen i nätverket. En användare kan via ett grafiskt användargränssnitt gradera uppdragen och konfigurera tillhörande inställningar. Via det grafiska användargränssnittet kan en användare även planera, gradera och konfigurera inställningar för kommande uppdrag samt simulera om det går att genomföra. Användaren ska även i realtid kunna se om de önskade inställningarna inte kan leva upp till de önskade kraven, och därmed kunna åtgärda detta.  Detta arbete undersökte olika MMI-modeller som kan användas för att skapa ett grafiskt användargränssnitt som minimerar risken att användaren konfigurerar systemet på ett felaktigt sätt. Studien visade sig att det inte finns några MMI modeller som tar hänsyn till felkonfigureringar, och en ny modell skapades. Den nya modellen användes och utvärderas genom att skapa en prototyp av ett grafiskt användargränssnitt för SAAB:s projekt, som testades på en potentiell användare. Testet visade att den nya modellen minskar risken för felkonfigureringar.
APA, Harvard, Vancouver, ISO, and other styles
36

Polowinski, Jan. "Ontology-Driven, Guided Visualisation Supporting Explicit and Composable Mappings." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-229908.

Full text
Abstract:
Data masses on the World Wide Web can hardly be managed by humans or machines. One option is the formal description and linking of data sources using Semantic Web and Linked Data technologies. Ontologies written in standardised languages foster the sharing and linking of data as they provide a means to formally define concepts and relations between these concepts. A second option is visualisation. The visual representation allows humans to perceive information more directly, using the highly developed visual sense. Relatively few efforts have been made on combining both options, although the formality and rich semantics of ontological data make it an ideal candidate for visualisation. Advanced visualisation design systems support the visualisation of tabular, typically statistical data. However, visualisations of ontological data still have to be created manually, since automated solutions are often limited to generic lists or node-link diagrams. Also, the semantics of ontological data are not exploited for guiding users through visualisation tasks. Finally, once a good visualisation setting has been created, it cannot easily be reused and shared. Trying to tackle these problems, we had to answer how to define composable and shareable mappings from ontological data to visual means and how to guide the visual mapping of ontological data. We present an approach that allows for the guided visualisation of ontological data, the creation of effective graphics and the reuse of visualisation settings. Instead of generic graphics, we aim at tailor-made graphics, produced using the whole palette of visual means in a flexible, bottom-up approach. It not only allows for visualising ontologies, but uses ontologies to guide users when visualising data and to drive the visualisation process at various places: First, as a rich source of information on data characteristics, second, as a means to formally describe the vocabulary for building abstract graphics, and third, as a knowledge base of facts on visualisation. This is why we call our approach ontology-driven. We suggest generating an Abstract Visual Model (AVM) to represent and »synthesise« a graphic following a role-based approach, inspired by the one used by J. v. Engelhardt for the analysis of graphics. It consists of graphic objects and relations formalised in the Visualisation Ontology (VISO). A mappings model, based on the declarative RDFS/OWL Visualisation Language (RVL), determines a set of transformations from the domain data to the AVM. RVL allows for composable visual mappings that can be shared and reused across platforms. To guide the user, for example, we discourage the construction of mappings that are suboptimal according to an effectiveness ranking formalised in the fact base and suggest more effective mappings instead. The guidance process is flexible, since it is based on exchangeable rules. VISO, RVL and the AVM are additional contributions of this thesis. Further, we initially analysed the state of the art in visualisation and RDF-presentation comparing 10 approaches by 29 criteria. Our approach is unique because it combines ontology-driven guidance with composable visual mappings. Finally, we compare three prototypes covering the essential parts of our approach to show its feasibility. We show how the mapping process can be supported by tools displaying warning messages for non-optimal visual mappings, e.g., by considering relation characteristics such as »symmetry«. 
In a constructive evaluation, we challenge both the RVL language and the latest prototype by trying to regenerate sketches of graphics we created manually during analysis. We demonstrate how graphics can be varied and how complex mappings can be composed from simple ones. Two thirds of the sketches can be almost or completely specified, and half of them can be almost or completely implemented.
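The abstract describes declarative, composable mappings from data properties to visual means, checked against an effectiveness ranking that is used to warn about sub-optimal choices. The following Python sketch only illustrates that general idea; the ranking values and the names EFFECTIVENESS and check_mapping are assumptions of ours and do not reproduce the actual RVL syntax or the VISO fact base.

# Illustrative sketch: a declarative property-to-visual-channel mapping with
# effectiveness-based guidance. Values and names are assumptions, not RVL/VISO.
from dataclasses import dataclass

# Hypothetical effectiveness ranking of visual channels for quantitative data
# (higher is better), loosely in the spirit of rankings formalised in a fact base.
EFFECTIVENESS = {"position": 1.0, "length": 0.9, "color_hue": 0.4, "shape": 0.2}

@dataclass
class Mapping:
    source_property: str   # e.g. an ontology property such as ex:hasWeight
    visual_channel: str    # e.g. "length" or "color_hue"

def check_mapping(m: Mapping, threshold: float = 0.5) -> None:
    """Warn about sub-optimal mappings and suggest a more effective channel."""
    score = EFFECTIVENESS.get(m.visual_channel, 0.0)
    if score < threshold:
        best = max(EFFECTIVENESS, key=EFFECTIVENESS.get)
        print(f"Warning: mapping {m.source_property} -> {m.visual_channel} "
              f"(effectiveness {score}); consider '{best}' instead.")

check_mapping(Mapping("ex:hasWeight", "color_hue"))  # triggers a suggestion

In the approach itself such guidance is rule-based and exchangeable; the hard-coded threshold above is only a stand-in for that mechanism.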
APA, Harvard, Vancouver, ISO, and other styles
37

GOTTARD, ANNA. "Analisi di processi stocastici interdipendenti mediante modelli grafici di durata: lo studio delle relazioni dinamiche tra lavoro e fecondità." Doctoral thesis, 1998. http://hdl.handle.net/2158/651801.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

BIONDI, LUIGI. "Identifiability of Discrete Hierarchical Models with One Latent Variable." Doctoral thesis, 2016. http://hdl.handle.net/2158/1028810.

Full text
Abstract:
In the thesis we discuss in depth the problem of identifiability for discrete models, showing many concrete situations in which it may fail. After this survey we concentrate on hierarchical log-linear models for binary variables with one hidden variable. These are more general than latent class (LC) models because they may include higher-order interactions. Such models may sometimes be interpreted as discrete undirected graphical models (also called concentration graph models), but they are more general.
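As a reminder of why identifiability can fail for discrete latent-variable models, a standard parameter-counting check (a generic illustration, not an argument taken from the thesis) compares the number of free parameters of a latent class model with the dimension of the observable distribution:

% Latent class model with one binary hidden variable H and k binary observables:
%   p(x_1,\dots,x_k) \;=\; \sum_{h \in \{0,1\}} p(h) \prod_{i=1}^{k} p(x_i \mid h).
% Free parameters: 1 + 2k.   Dimension of the observable distribution: 2^k - 1.
%   k = 2:  1 + 4 = 5 > 3 = 2^2 - 1   (the model cannot be identified from the data);
%   k = 3:  1 + 6 = 7 = 2^3 - 1       (generically identifiable, up to relabelling
%                                      of the latent classes, by Kruskal-type results).

Hierarchical log-linear models with a hidden variable add higher-order interaction terms to this parameterisation, so such counting is only a necessary first check and a dedicated identifiability analysis is needed.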
APA, Harvard, Vancouver, ISO, and other styles
39

Di Puglia Pugliese, Luigi, Francesca Guerriero, and Lucio Grandinetti. "Models and methods for the constrained shortest path problem and its variants." Thesis, 2014. http://hdl.handle.net/10955/592.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Peters, Sascha [Verfasser]. "Modell zur Beschreibung der kreativen Prozesse im Design unter Berücksichtigung der ingenieurtechnischen Semantik / vorgelegt von Sascha Peters." 2004. http://d-nb.info/972416463/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Kryštof, Jan. "Modelem řízená realizace prezentační vrstvy softwarových aplikací." Doctoral thesis, 2010. http://www.nusl.cz/ntk/nusl-249275.

Full text
Abstract:
In recent years, several approaches to modeling user interfaces and the related interaction have been proposed. The resulting model artifacts can in some cases be used for further development and can therefore be utilized in both the design and implementation phases. Model data are recorded in a graphical or textual representation, which also determines the possibilities for subsequent use. Several criteria can be used to judge the suitability of a modeling notation. Judged by interoperability, most approaches do not reach a satisfactory level; the best level is reached by approaches based on the graphical notation of UML. However, the usability of those models is rather low in the implementation phase, where text-based notations are more appropriate. This thesis surveys contemporary notations for modeling graphical user interfaces in software applications and introduces a new approach which employs UML to create models that are appropriate for both the design and implementation phases. The core of the work is a proposed interaction model, which can be used as an alternative to current text-notation approaches and can serve code generation for an arbitrary presentation framework. The interaction model consists of a presentation model and a task-action model. The presentation model can be used to generate the containment hierarchy and layout of user interface components, while the task-action model enables the generation of an XML-compliant descriptor for dynamic flow control. The work also deals with the construction of an MB-UIDE, realized as an extension of an adaptive UML-compliant modeling tool. Obtaining the MB-UIDE in this way represents an easy and inexpensive way to manage the interaction model and to support its integration with models of other layers of the software architecture.
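The interaction model described above couples a presentation model (containment hierarchy and layout of UI components) with a task-action model that yields an XML-compliant descriptor. As a rough, hypothetical illustration of the presentation-model half of that pipeline (the class and element names are our assumptions, not the thesis's UML metamodel), a containment hierarchy could be serialized to an XML descriptor as follows:

# Hypothetical sketch: deriving an XML descriptor from a tiny presentation model.
# Class and element names are illustrative assumptions, not the thesis's metamodel.
import xml.etree.ElementTree as ET
from dataclasses import dataclass, field

@dataclass
class Component:
    kind: str                      # e.g. "window", "panel", "button"
    name: str
    children: list = field(default_factory=list)

def to_xml(component: Component) -> ET.Element:
    """Recursively map the containment hierarchy to an XML element tree."""
    element = ET.Element(component.kind, {"name": component.name})
    for child in component.children:
        element.append(to_xml(child))
    return element

# A minimal presentation model: a window containing a panel with two buttons.
ui = Component("window", "login",
               [Component("panel", "form",
                          [Component("button", "ok"), Component("button", "cancel")])])

print(ET.tostring(to_xml(ui), encoding="unicode"))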
APA, Harvard, Vancouver, ISO, and other styles
42

Marín, Morales Javier. "Modelling human emotions using immersive virtual reality, physiological signals and behavioural responses." Doctoral thesis, 2020. http://hdl.handle.net/10251/148717.

Full text
Abstract:
In recent years the scientific community has significantly increased its use of virtual reality (VR) technologies in human behaviour research. In particular, the use of immersive VR has grown due to the introduction of affordable, high-performance head-mounted displays (HMDs). Among the fields that have strongly emerged in the last decade is affective computing, which combines psychophysiology, computer science, biomedical engineering and artificial intelligence in the development of systems that can automatically recognize emotions. The progress of affective computing is especially important in human behaviour research due to the central role that emotions play in many background processes, such as perception, decision-making, creativity, memory and social interaction. Several studies have tried to develop a reliable methodology to evoke and automatically identify emotional states using objective physiological measures and machine learning methods. However, the majority of previous studies used images, audio or video to elicit emotional states; to the best of our knowledge, no previous research has developed an emotion recognition system using immersive VR. Although some previous studies analysed physiological responses in immersive VR, they did not use machine learning techniques for biosignal processing and classification. Moreover, a crucial concept when using VR for human behaviour research is validity: the capacity to evoke a response from the user in a simulated environment similar to the response that might be evoked in a physical environment. Although some previous studies have used psychological and cognitive dimensions to compare responses in real and virtual environments, few have extended this research to analyse physiological or behavioural responses. Moreover, to our knowledge, this is the first study to compare VR scenarios with their real-world equivalents using physiological measures coupled with machine learning algorithms, and to analyse the ability of VR to transfer and extrapolate insights obtained from VR environments to real environments. The main objective of this thesis is, using psycho-physiological and behavioural responses in combination with machine learning methods, and by performing a direct comparison between a real and a virtual environment, to validate immersive VR as an emotion elicitation tool. To do so we develop an experimental protocol involving emotional 360º environments, an art exhibition in a real museum, and a highly realistic 3D virtualization of the same art exhibition. This thesis provides novel contributions to the use of immersive VR in human behaviour research, particularly in relation to emotions. VR can help in the application of methodologies designed to present more realistic stimuli in the assessment of daily-life environments and situations, thus overcoming the current limitations of affective elicitation, which classically uses images, audio and video. Moreover, the thesis analyses the validity of VR by performing a direct comparison using a highly realistic simulation. We believe that immersive VR will revolutionize laboratory-based emotion elicitation methods. Moreover, its synergy with physiological measurement and machine learning techniques will have a transversal impact on many other research areas, such as architecture, health, assessment, training, education, driving and marketing, and thus open new opportunities for the scientific community. The present dissertation aims to contribute to this progress.
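The thesis combines physiological signals with machine learning to recognise emotional states. The snippet below is only a generic sketch of such an affective-computing pipeline, with invented features and labels and a support-vector classifier chosen purely for illustration; it is not the model actually trained in the thesis.

# Generic affective-computing sketch: classify emotional states from
# physiological features. Feature names, data and classifier choice are
# illustrative assumptions, not the thesis's actual models.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical per-trial features, e.g. mean heart rate, heart-rate variability,
# skin-conductance level and an EEG band-power ratio.
X = rng.normal(size=(120, 4))
y = rng.integers(0, 2, size=120)   # e.g. low vs. high arousal labels

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")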
Marín Morales, J. (2020). Modelling human emotions using immersive virtual reality, physiological signals and behavioural responses [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/148717
APA, Harvard, Vancouver, ISO, and other styles
43

PELLEGATTA, CRISTINA. "Il pensiero rappresentato: il ruolo delle immagini nella scienza e nell'arte. Pensare per immagini e immagini per pensare." Doctoral thesis, 2018. http://hdl.handle.net/11573/1078355.

Full text
Abstract:
Images were and still are an integral part of the spreading of knowledge and have always accompanied human thought, with great interdisciplinary tension. Nowadays, in a reality characterized over the last twenty years by an evident hyper-dosage of visual content, and where the power of images – sometimes true “icons” – seems undeniable, it is necessary to rethink the fundamentals of representation and the ways of creating visual models in order to spread a “culture of the image”. In this context, where, for example, semiologists and philosophers expound their theories on the subject, we need to start a “theory of the image”, beginning from a reinterpretation, set in contemporary reality, of the research of the past century. We should also try to understand what the role of the architect is and will be in such a cultural debate. The great importance that the architect can and must have in this field is rooted in the disciplinary fundamentals of the science of representation. Applying the studies on representation and, more generally, on the visual and graphic sciences to the worlds of science and art would provide the theoretical and practical bases to open a dialogue with those who want to deepen, and participate actively in, the definition of visual models, investigating their experimental aspects. Consequently, in the wide world of the image, the main interest lies in the study of the propositional and constructive aspects of the image in all its facets, that is, the image as it is conceived in representation. We therefore have to investigate the morphological and structural aspects of visual form in order to outline construction and production processes with a clear design intent. This involves an in-depth study of themes belonging to different disciplinary fields, in order to identify and propose not only a key to reading and interpreting images, but also an operating mode, a “way of doing”, built within the discipline of drawing.
APA, Harvard, Vancouver, ISO, and other styles
44

Disler, Pius. "Wie viel Abstraktion erträgt die Lernwirksamkeit?" Doctoral thesis, 2005. http://hdl.handle.net/11858/00-1735-0000-0006-B3C7-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Florez, Montes Frank. "Análisis dinámico del confort en edificios con estrategias de control adaptativo en modos deslizantes." Doctoral thesis, 2020. http://hdl.handle.net/10251/153803.

Full text
Abstract:
In this doctoral thesis, mathematical modeling of thermal zones is used to evaluate the ability of sliding-mode control to regulate the internal temperature of a case study. A lumped-parameter technique is used to represent the enclosed spaces; complemented with experimental measurements and optimization algorithms, it allowed the construction of a simulator that reproduces the conditions of the studied model with an accuracy above 97 % and makes it possible to study the system while introducing disturbances or variations in the model parameters. Initially, reduced-scale models were used to characterize the thermally insulating effect of the Thermo Skold paint on the internal temperature; the effect of the paint on each of the heat-transfer parameters was studied, which made it possible to understand the savings and results obtained experimentally. Subsequently, the reduced-scale models were used to evaluate the sliding-mode control technique, whose effectiveness in maintaining a fixed reference temperature was modeled, simulated and verified experimentally, with an error below 2 %. In the final stage of the thesis, a geodesic dome was used as a case study and was modeled with an electrical circuit proposed for its specific characteristics. Experimental measurements of the thermal conditions of the geodesic dome were taken, and the simulator was fitted to them using the Pattern Search optimization algorithm. With the resulting simulator, the thermal comfort conditions and the cooling needs of the dome were studied, considering different situations and internal loads from occupants and cooling systems.
To the national doctoral scholarship programme, call 727 of Colciencias, for all the resources provided during my studies. Finally, to the Universidad Nacional de Colombia, Manizales campus, and to the Universitat Politècnica de València, whose doctoral programmes in automation engineering and applied mathematics contributed to my training as a researcher.
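The abstract above describes a lumped-parameter (equivalent-circuit) thermal model of an enclosed space whose internal temperature is regulated by sliding-mode control. A minimal sketch of that combination, assuming a single thermal resistance and capacitance and a relay-type sliding-mode law (all parameter values are illustrative, not those identified in the thesis), could look like this:

# Minimal sketch: first-order lumped-parameter thermal zone with a relay-type
# sliding-mode controller. All parameters are illustrative assumptions.
R = 0.05        # thermal resistance to outdoors [K/W]
C = 2.0e5       # thermal capacitance of the zone [J/K]
T_out = 32.0    # outdoor temperature [degC]
T_ref = 24.0    # reference indoor temperature [degC]
P_max = 2000.0  # maximum cooling power [W]

def cooling_power(T: float) -> float:
    """Sliding-mode (relay) law on the surface sigma = T - T_ref: cool when above."""
    sigma = T - T_ref
    return P_max if sigma > 0.0 else 0.0

# Explicit Euler integration of C * dT/dt = (T_out - T) / R - u
dt, T = 1.0, 30.0
for _ in range(int(3600 * 3 / dt)):           # simulate three hours
    u = cooling_power(T)
    T += dt * ((T_out - T) / R - u) / C

print(f"indoor temperature after 3 h: {T:.2f} degC (reference {T_ref} degC)")

In practice the relay law produces chattering around the reference, which is one reason the thesis studies the technique with an experimentally fitted simulator rather than an idealized model like this one.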
Florez Montes, F. (2020). Análisis dinámico del confort en edificios con estrategias de control adaptativo en modos deslizantes [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/153803
APA, Harvard, Vancouver, ISO, and other styles
46

Polowinski, Jan. "Ontology-Driven, Guided Visualisation Supporting Explicit and Composable Mappings." Doctoral thesis, 2016. https://tud.qucosa.de/id/qucosa%3A30593.

Full text
Abstract:
Data masses on the World Wide Web can hardly be managed by humans or machines. One option is the formal description and linking of data sources using Semantic Web and Linked Data technologies. Ontologies written in standardised languages foster the sharing and linking of data as they provide a means to formally define concepts and relations between these concepts. A second option is visualisation. The visual representation allows humans to perceive information more directly, using the highly developed visual sense. Relatively few efforts have been made on combining both options, although the formality and rich semantics of ontological data make it an ideal candidate for visualisation. Advanced visualisation design systems support the visualisation of tabular, typically statistical data. However, visualisations of ontological data still have to be created manually, since automated solutions are often limited to generic lists or node-link diagrams. Also, the semantics of ontological data are not exploited for guiding users through visualisation tasks. Finally, once a good visualisation setting has been created, it cannot easily be reused and shared. Trying to tackle these problems, we had to answer how to define composable and shareable mappings from ontological data to visual means and how to guide the visual mapping of ontological data. We present an approach that allows for the guided visualisation of ontological data, the creation of effective graphics and the reuse of visualisation settings. Instead of generic graphics, we aim at tailor-made graphics, produced using the whole palette of visual means in a flexible, bottom-up approach. It not only allows for visualising ontologies, but uses ontologies to guide users when visualising data and to drive the visualisation process at various places: First, as a rich source of information on data characteristics, second, as a means to formally describe the vocabulary for building abstract graphics, and third, as a knowledge base of facts on visualisation. This is why we call our approach ontology-driven. We suggest generating an Abstract Visual Model (AVM) to represent and »synthesise« a graphic following a role-based approach, inspired by the one used by J. v. Engelhardt for the analysis of graphics. It consists of graphic objects and relations formalised in the Visualisation Ontology (VISO). A mappings model, based on the declarative RDFS/OWL Visualisation Language (RVL), determines a set of transformations from the domain data to the AVM. RVL allows for composable visual mappings that can be shared and reused across platforms. To guide the user, for example, we discourage the construction of mappings that are suboptimal according to an effectiveness ranking formalised in the fact base and suggest more effective mappings instead. The guidance process is flexible, since it is based on exchangeable rules. VISO, RVL and the AVM are additional contributions of this thesis. Further, we initially analysed the state of the art in visualisation and RDF-presentation comparing 10 approaches by 29 criteria. Our approach is unique because it combines ontology-driven guidance with composable visual mappings. Finally, we compare three prototypes covering the essential parts of our approach to show its feasibility. We show how the mapping process can be supported by tools displaying warning messages for non-optimal visual mappings, e.g., by considering relation characteristics such as »symmetry«. 
In a constructive evaluation, we challenge both the RVL language and the latest prototype by trying to regenerate sketches of graphics we created manually during analysis. We demonstrate how graphics can be varied and how complex mappings can be composed from simple ones. Two thirds of the sketches can be almost or completely specified, and half of them can be almost or completely implemented.
[The record also lists the thesis's table of contents: 1 Introduction, 2 Background, 3 Problem Analysis, 4 Analysis of the State of the Art, 5 A Visualisation Ontology – VISO, 6 A VISO-Based Abstract Visual Model – AVM, 7 A Language for RDFS/OWL Visualisation – RVL, 8 The OGVIC Approach, 9 Application, 10 Conclusions, plus appendices.]
APA, Harvard, Vancouver, ISO, and other styles
