
Theses on the topic "Tableau Method"



Consult the top 50 theses for your research on the topic "Tableau Method".


You can also download the full text of each publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Slavětínský, Radek. "Analýza cloudových řešení Business Intelligence pro SME". Master's thesis, Vysoká škola ekonomická v Praze, 2017. http://www.nusl.cz/ntk/nusl-358847.

Full text
Abstract
The thesis is focused on the analysis of presently offered products supporting Business Intelligence (BI) which are affordable for small and medium-sized enterprises (SMEs). Current BI solutions available to SMEs are mostly offered via cloud computing, specifically in the form of Software as a Service (SaaS), as it requires low initial acquisition costs. The objectives of this thesis are to analyse working with cloud BI applications that can be used by SMEs and to compare in detail the most widespread reporting tools distributed as SaaS in the lower price category. The theoretical part provides a description of cloud computing and the BI system. In the practical part, the following products are selected: IBM Watson Analytics, Qlik Sense Cloud, Zoho Reports, Tableau Public and Microsoft Power BI. Practical testing of these applications was based on evaluation of the selected metrics, with weights calculated using Fuller's triangle. These analyses and the gathered information form the basis for the comparison of the selected applications. The contribution of this thesis lies in discovering the strengths and weaknesses of these BI solutions. The output of this thesis can be used as a source for the selection of BI applications for SMEs.
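Fuller's triangle, mentioned above, is a pairwise-comparison weighting scheme: every pair of criteria is compared once, and each criterion's weight is its share of the comparisons it wins. A minimal sketch of the idea follows; the criteria and the stated preferences are invented for illustration and are not the metrics used in the thesis.

```python
# Fuller's-triangle-style weighting: compare every pair of criteria once,
# score a point for the preferred criterion, and take each criterion's share
# of all pairwise comparisons as its weight. (A common variant adds 1 to
# every score so that no criterion ends up with weight zero.)
from itertools import combinations

criteria = ["ease of use", "visualisation", "data connectors", "price"]

# preferences[(a, b)] is the criterion judged more important in the pair (a, b)
preferences = {
    ("ease of use", "visualisation"): "visualisation",
    ("ease of use", "data connectors"): "ease of use",
    ("ease of use", "price"): "price",
    ("visualisation", "data connectors"): "visualisation",
    ("visualisation", "price"): "visualisation",
    ("data connectors", "price"): "price",
}

wins = {c: 0 for c in criteria}
for pair in combinations(criteria, 2):
    wins[preferences[pair]] += 1

total = sum(wins.values())           # equals n*(n-1)/2 comparisons
weights = {c: wins[c] / total for c in criteria}
print(weights)                       # e.g. visualisation gets 3/6 = 0.5
```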
2

Mahieux, Bruno. "Contribution à l'optimisation en masse de structures complexes sous contraintes vibratoires : application à la conception des planches de bord en phase d'études préliminaires". Valenciennes, 1996. http://www.theses.fr/1996VALE0031.

Full text
Abstract
In the context of preliminary-stage instrument panel (dashboard) design, we propose an optimisation strategy for achieving rapid mass savings from a design specification. This specification includes, in particular, vibration criteria. Depending on the study requirements and on the characteristics of an instrument panel, the strategy developed combines substructuring and optimisation techniques. It consists in optimising the modifiable elements (or substructures) separately. To this end, we study the possibilities offered by modal synthesis techniques, then examine the main concepts needed to formulate an optimisation problem. On this basis, we propose a specific development of the computation of parametric sensitivities, based on differentiating a modal synthesis equation. We compare this new approach with standard techniques on an elementary test case. We then define the assumptions for moving from an optimisation of the complete instrument panel to a separate optimisation of its modifiable elements, and specify a model of the local restrictions involved in the optimisation problems of the modifiable elements. We then address the experimental validation of the proposed optimisation strategy on a minimal structure representative of the vibration behaviour of a real instrument panel. In conclusion, we present the optimisation of the instrument panel of the ZX vehicle, the culmination of applying the optimisation strategy to real instrument panels.
3

Vadaparty, Sirisha Lakshmi. "Semantic tableaux program". CSUSB ScholarWorks, 2006. https://scholarworks.lib.csusb.edu/etd-project/2953.

Full text
Abstract
This project created a program that takes predicate calculus formulas and creates a visual Semantic Tableaux truth tree, thereby proving or disproving a conclusion. Formal methods used in developing and verifying software and hardware are mathematically based techniques for describing and reasoning about system properties. Such formal methods provide frameworks within which people specify, develop, and verify systems in a systematic, rather than ad hoc, manner. Formal methods include the more specific activities of program specification, program verification and hardware verification.
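The core of a semantic tableaux checker of the kind described is small enough to sketch. The toy below handles only propositional formulas (the project itself covers predicate calculus and draws the tree): formulas are nested tuples, and a formula is valid when the tableau for its negation closes on every branch.

```python
# Minimal propositional semantic-tableaux checker. Formulas are tuples:
#   ("var", "p"), ("not", f), ("and", f, g), ("or", f, g).
def satisfiable(todo, literals=frozenset()):
    """Expand one branch; True iff some sub-branch stays open."""
    todo, literals = list(todo), set(literals)
    while todo:
        f = todo.pop()
        if f[0] == "var":
            literals.add(("pos", f[1]))
        elif f[0] == "not":
            g = f[1]
            if g[0] == "var":
                literals.add(("neg", g[1]))
            elif g[0] == "not":                    # double negation: not not A => A
                todo.append(g[1])
            elif g[0] == "and":                    # not (A and B) => not A or not B
                todo.append(("or", ("not", g[1]), ("not", g[2])))
            elif g[0] == "or":                     # not (A or B) => not A and not B
                todo.append(("and", ("not", g[1]), ("not", g[2])))
        elif f[0] == "and":                        # alpha rule: add both conjuncts
            todo += [f[1], f[2]]
        elif f[0] == "or":                         # beta rule: the branch splits
            return (satisfiable(todo + [f[1]], literals) or
                    satisfiable(todo + [f[2]], literals))
    # fully expanded: the branch is open iff it has no complementary pair
    return not any(("neg", v) in literals for (s, v) in literals if s == "pos")

def valid(f):
    """A formula is valid iff the tableau for its negation closes everywhere."""
    return not satisfiable([("not", f)])

p = ("var", "p")
print(valid(("or", p, ("not", p))))                # True: p or not p is a tautology
```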
4

Lévy, Michel. "Contribution à l'analyse de la méthode des tableaux". Grenoble 1, 1991. https://theses.hal.science/tel-00340419.

Full text
5

Lévy, Michel. "Contribution à l'analyse de la méthode des tableaux". Université Joseph Fourier (Grenoble), 1991. http://tel.archives-ouvertes.fr/tel-00340419.

Full text
6

Lévy, Michel. "Contribution à l'analyse de la méthode des tableaux". Directed by Laurent Trilling and Joseph Sifakis. S.l. : Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00340419.

Full text
7

Gabrielsson, Jon. "Multivariate methods in tablet formulation". Doctoral thesis, Umeå : Univ, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-268.

Full text
8

Lönroth, Lisa. "Tablet-based interaction methods for VR". Thesis, Linköpings universitet, Institutionen för teknik och naturvetenskap, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-97142.

Full text
Abstract
Interaction with virtual environments has long been a subject of research, and many different techniques and interaction devices have been developed. 3D interaction devices such as wands and gloves are in use, but traditional 2D interaction with a mouse and a keyboard also exists. In this work we explore the possibility of using a TabletPC as an interaction device for virtual environments. Several methods of interaction for tasks like selection and value manipulation were developed, using the human ability of spatial knowledge as an aid. The methods have been developed to suit a pen-based interface, and the special features of the pen have been taken into consideration. Evaluation showed that the developed interaction methods were usable.
9

Halmagrand, Pierre. "Automated deduction and proof certification for the B method". Thesis, Paris, CNAM, 2016. http://www.theses.fr/2016CNAM1064/document.

Full text
Abstract
The B Method is a formal method heavily used in the railway industry to specify and develop safety-critical software. It allows the development of correct-by-construction programs, thanks to a refinement process from an abstract specification to a deterministic implementation of the program. The soundness of the refinement steps depends on the validity of logical formulas called proof obligations, expressed in a specific typed set theory. Typical industrial projects using the B Method generate thousands of proof obligations, thereby relying on automated tools to discharge as many proof obligations as possible. A specific tool, called Atelier B, designed to implement the B Method and provided with a theorem prover, helps users verify the validity of proof obligations, automatically or interactively. Improving the automated verification of proof obligations is a crucial task for the speed and ease of development. The solution developed in our work is to use Zenon, a first-order logic automated theorem prover based on the tableaux method. The particular feature of Zenon is to generate proof certificates, i.e. proof objects that can be verified by external tools. The B Method is based on first-order logic and a specific typed set theory. To improve automated theorem proving in this theory, we extend the proof-search algorithm of Zenon to polymorphism and deduction modulo theory, leading to a new tool called Zenon Modulo, which is the main contribution of our work. The extension to polymorphism allows us to deal with problems combining several sorts, like booleans and integers, and generic axioms, like B set theory axioms, without relying on encodings. Deduction modulo theory is an extension of first-order logic with rewriting both on terms and propositions. It is well suited for proof search in axiomatic theories, as it turns axioms into rewrite rules. This way, we turn proof search among axioms into computation, avoiding unnecessary combinatorial explosion and reducing the size of proofs by recording only their meaningful steps. To certify Zenon Modulo proofs, we choose to rely on Dedukti, a proof checker used as a universal backend to verify proofs coming from different theorem provers, and itself based on deduction modulo theory. This work is part of a larger project called BWare, which gathers academic entities and industrial companies around automated theorem proving for the B Method. These industrial partners provide to BWare a large benchmark of proof obligations coming from real industrial projects using the B Method, allowing us to test our tool Zenon Modulo. The experimental results obtained on this benchmark are particularly conclusive, since Zenon Modulo proves more proof obligations than state-of-the-art first-order provers. In addition, all the proof certificates produced by Zenon Modulo on this benchmark are successfully checked by Dedukti, increasing our confidence in the soundness of our work.
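The key idea behind deduction modulo theory, turning an axiom into a rewrite rule so that proof search becomes computation, can be illustrated with a toy rewriter. The sketch below orients the set-theory equivalences x ∈ A ∪ B ⇔ x ∈ A ∨ x ∈ B and x ∈ A ∩ B ⇔ x ∈ A ∧ x ∈ B left to right; it illustrates only the principle and is not Zenon Modulo's representation or algorithm.

```python
# Toy deduction-modulo rewriter: set-theory axioms become left-to-right
# rewrite rules applied exhaustively to propositions, so the prover never
# has to search through the axioms themselves.
def rewrite(prop):
    """Normalise a proposition by rewriting membership in unions/intersections."""
    if prop[0] == "in":                    # ("in", x, set_expr)
        x, s = prop[1], prop[2]
        if s[0] == "union":                # x in A|B  -->  (x in A) or (x in B)
            return ("or", rewrite(("in", x, s[1])), rewrite(("in", x, s[2])))
        if s[0] == "inter":                # x in A&B  -->  (x in A) and (x in B)
            return ("and", rewrite(("in", x, s[1])), rewrite(("in", x, s[2])))
        return prop
    if prop[0] in ("and", "or"):
        return (prop[0], rewrite(prop[1]), rewrite(prop[2]))
    return prop

goal = ("in", "x", ("union", ("set", "A"), ("inter", ("set", "B"), ("set", "C"))))
print(rewrite(goal))
# ('or', ('in', 'x', ('set', 'A')),
#        ('and', ('in', 'x', ('set', 'B')), ('in', 'x', ('set', 'C'))))
```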
10

Ivarsson, Kristoffer. "Pile foundation, calculation method and design tables according to Eurocode". Thesis, Linköpings universitet, Medie- och Informationsteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-94549.

Full text
Abstract
Because of the transition to a common European standard for building regulations, the Eurocodes, there is a need to update old reports that were written when the old national standards were in use. A pile foundation is needed if the ground beneath a building does not have enough load-bearing capacity. The function of the pile cap is to distribute the load from the construction above onto the piles in the ground. The goal of this thesis is to create design tables with a number of type caps that can be used to quickly gauge the size, quantity of reinforcement steel and load-bearing capacity of a cap without the need to do any calculations. To create the values for the design tables, the cantilever truss model was used. The truss is made up of the strut between the pile head and the compression zone under the wall/pillar, and the tie, which is the reinforcement steel. The choice of this model makes it relatively simple to calculate the height and load-bearing capacity of the cap. The model from the theory part of the thesis is further explained by a calculation example that shows how the model has been implemented to create the design tables. The work on this thesis was carried out at WSP and is based on a handbook in use there.
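For the simplest case, a two-pile cap under a centred load, the cantilever truss model named in the abstract reduces to one equilibrium equation for the tie force. A hedged sketch follows, with illustrative numbers rather than values from the thesis's design tables.

```python
# Strut-and-tie equilibrium for a two-pile cap under a centred wall load.
# All numbers and the lever-arm choice are illustrative assumptions.
N = 1200e3        # design load from the wall, N (1200 kN)
s = 1.0           # centre-to-centre pile spacing, m
d = 0.55          # internal lever arm (effective depth of the cap), m
fyk = 500e6       # characteristic yield strength of reinforcement, Pa
gamma_s = 1.15    # Eurocode partial factor for reinforcing steel

# Each pile carries N/2; the horizontal component of the inclined strut must
# be balanced by the tie (the bottom reinforcement): T = (N/2) * (s/2) / d.
T = N * s / (4 * d)
As = T / (fyk / gamma_s)          # required tie reinforcement area, m^2
print(f"tie force T = {T/1e3:.0f} kN, As = {As*1e6:.0f} mm^2")
```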
11

嚴秀娟 and Sau-kuen Yim. "The analysis of 2-way tables without replicates". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1991. http://hub.hku.hk/bib/B31976839.

Full text
12

Ahmat, Norhayati. "Geometric modelling and shape optimisation of pharmaceutical tablets. Geometric modelling and shape optimisation of pharmaceutical tablets using partial differential equations". Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5702.

Full text
Abstract
Pharmaceutical tablets have been the most dominant form for drug delivery and they need to be strong enough to withstand external stresses due to packaging and loading conditions before use. The strength of the produced tablets, which is characterised by their compressibility and compactibility, is usually determined through a physical prototype. This process is sometimes quite expensive and time consuming. Therefore, simulating this process beforehand can overcome this problem. A technique for shape modelling of pharmaceutical tablets based on the use of Partial Differential Equations is presented in this thesis. The volume and the surface area of the generated parametric tablet in various shapes have been estimated numerically. This work also presents an extended formulation of the PDE method to a higher dimensional space by increasing the number of parameters responsible for describing the surface in order to generate a solid tablet. The shape and size of the generated solid tablets can be changed by exploiting the analytic expressions relating the coefficients associated with the PDE method. The solution of the axisymmetric boundary value problem for a finite cylinder subject to a uniform axial load has been utilised in order to model a displacement component of a compressed PDE-based representation of a flat-faced round tablet. The simulation results, which are analysed using the Heckel model, show that the developed model is capable of predicting the compressibility of pharmaceutical powders since it fits the experimental data accurately. The optimal design of pharmaceutical tablets with particular volume and maximum strength has been obtained using an automatic design optimisation which is performed by combining the PDE method and a standard method for numerical optimisation.
13

Westberg, Annica. "Characterization of mini-tablets : Evaluation of disintegration and dissolution methods". Thesis, Umeå universitet, Farmakologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-157876.

Full text
14

Dibble, Emily. "The interpretation of graphs and tables". Thesis, University of Washington (UW restricted), 1997. http://hdl.handle.net/1773/9101.

Full text
15

Ahmat, Norhayati Binti. "Geometric modelling and shape optimisation of pharmaceutical tablets : geometric modelling and shape optimisation of pharmaceutical tablets using partial differential equations". Thesis, University of Bradford, 2012. http://hdl.handle.net/10454/5702.

Full text
Abstract
Pharmaceutical tablets have been the most dominant form for drug delivery and they need to be strong enough to withstand external stresses due to packaging and loading conditions before use. The strength of the produced tablets, which is characterised by their compressibility and compactibility, is usually determined through a physical prototype. This process is sometimes quite expensive and time consuming. Therefore, simulating this process beforehand can overcome this problem. A technique for shape modelling of pharmaceutical tablets based on the use of Partial Differential Equations is presented in this thesis. The volume and the surface area of the generated parametric tablet in various shapes have been estimated numerically. This work also presents an extended formulation of the PDE method to a higher dimensional space by increasing the number of parameters responsible for describing the surface in order to generate a solid tablet. The shape and size of the generated solid tablets can be changed by exploiting the analytic expressions relating the coefficients associated with the PDE method. The solution of the axisymmetric boundary value problem for a finite cylinder subject to a uniform axial load has been utilised in order to model a displacement component of a compressed PDE-based representation of a flat-faced round tablet. The simulation results, which are analysed using the Heckel model, show that the developed model is capable of predicting the compressibility of pharmaceutical powders since it fits the experimental data accurately. The optimal design of pharmaceutical tablets with particular volume and maximum strength has been obtained using an automatic design optimisation which is performed by combining the PDE method and a standard method for numerical optimisation.
16

Jewad, Raeid. "Novel methods for production of pharmaceutical tablets with complex geometries". Thesis, University of Cambridge, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.614052.

Full text
17

Nanda, Dhruv. "A Method to Enhance the Performance of Synthetic Origin-Destination (O-D) Trip Table Estimation Models". Thesis, Virginia Tech, 1997. http://hdl.handle.net/10919/36793.

Full text
Abstract
The conventional methods of determining Origin-Destination (O-D) trip tables involve elaborate surveys, such as home interviews, requiring considerable time, manpower and funds. To overcome this drawback, a number of theoretical models that synthesize O-D trip tables from link volume data have been developed in the past. The focus of this research was on two of these models, namely The Highway Emulator (THE) and the Linear Programming (LP) model. These models use target/seed tables to guide the modeled trip tables. In an earlier research effort conducted by the Virginia Tech Center for Transportation Research, potential was noted for enhancing these models' performance by using a superior target/seed table. This research study exploits readily available socio-economic/census data and link volume information and proposes a methodology for obtaining improved target/seed tables by performing the trip generation and trip distribution steps. This table was provided as the target to the THE and LP models, and their performance was evaluated using the town of Pulaski as a case study. In addition to measuring the closeness of the output tables to surveyed tables and their capability to replicate observed volumes, their improvement over the case where a structural table is used as the target was also studied.

Tests showed that the use of the superior target/seed table significantly improved the performance of the LP model. However, for THE, mixed trends are seen in terms of different measures of closeness. The sensitivity of the LP model's user parameter, which places a certain degree of belief in the target/seed table, was also analyzed.
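The trip generation and trip distribution steps used to build an improved seed table can be illustrated with a simple gravity model. The zone data and the impedance function below are invented for the example and are not those of the Pulaski case study.

```python
# Gravity-model trip distribution: productions P_i are spread over zones j in
# proportion to attractions A_j weighted by an impedance f(cost_ij).
import numpy as np

P = np.array([500.0, 300.0, 200.0])      # trip productions per zone
A = np.array([400.0, 400.0, 200.0])      # trip attractions per zone
cost = np.array([[2.0, 5.0, 8.0],        # interzonal travel times
                 [5.0, 3.0, 4.0],
                 [8.0, 4.0, 2.0]])

f = cost ** -2.0                         # impedance function f(c) = c^-2
weights = A * f                          # attraction-weighted impedance A_j * f_ij
T = P[:, None] * weights / weights.sum(axis=1, keepdims=True)

print(T.round(1))                        # seed O-D table; row i sums to P_i
```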
18

Löfving, Erik. "Organizing physical flow data : from input-output tables to data warehouses". Linköping : Dept. of Mathematics, Univ, 2005. http://www.bibl.liu.se/liupubl/disp/disp2005/stat5s.pdf.

Full text
19

Lohaka, Hippolyte O. "Making a grouped-data frequency table: Development and examination of the iteration algorithm". Ohio : Ohio University, 2007. http://www.ohiolink.edu/etd/view.cgi?ohiou1194981215.

Full text
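This record carries no abstract, but the construction named in the title is a standard one. A generic sketch of building a grouped-data frequency table follows, using Sturges' rule as a stand-in for the thesis's own iteration algorithm for choosing the classes.

```python
# Grouped-data frequency table with Sturges' rule for the number of classes.
# The data values are invented for the example.
import math

data = [12, 15, 11, 19, 24, 31, 28, 17, 22, 26, 14, 33, 29, 21, 18, 25]

k = 1 + math.ceil(math.log2(len(data)))      # Sturges: 1 + log2(n) classes
lo, hi = min(data), max(data)
width = math.ceil((hi - lo) / k)             # class width

counts = [0] * k
for x in data:
    idx = min((x - lo) // width, k - 1)      # clamp the maximum into the last class
    counts[idx] += 1

for i, c in enumerate(counts):
    a = lo + i * width
    print(f"[{a}, {a + width}): {c}")
```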
20

Masek, Caroline Humphrey. "Adapting the SCS method for estimating runoff in shallow water table environments". Scholar Commons, 2002. http://purl.fcla.edu/fcla/etd/SFE0000040.

Full text
21

Khedri, Shiler. "Markov chain Monte Carlo methods for exact tests in contingency tables". Thesis, Durham University, 2012. http://etheses.dur.ac.uk/5579/.

Full text
Abstract
This thesis is mainly concerned with conditional inference for contingency tables, where the MCMC method is used to take a sample of the conditional distribution. One of the most common models to be investigated in contingency tables is the independence model. The classic test statistics for testing the independence hypothesis, the Pearson and likelihood chi-square statistics, rely on large-sample distributions. The large-sample distribution does not provide a good approximation when the sample size is small. The Fisher exact test is an alternative method which enables us to compute the exact p-value for testing the independence hypothesis. For contingency tables of large dimension, the Fisher exact test is not practical as it requires counting all tables in the sample space. We review some enumeration methods which do not require us to count all tables in the sample space. However, these methods would also fail to compute the exact p-value for contingency tables of large dimensions. Diaconis and Sturmfels (1998) introduced a method based on the Gröbner basis. It is quite complicated to compute the Gröbner basis for contingency tables, as it is different for each individual table, not only for different sizes of table. We also review the method introduced by Aoki and Takemura (2003) using the minimal Markov basis for some particular tables. Bunea and Besag (2000) provided an algorithm using the most fundamental move to make the Markov chain irreducible over the sample space, defining an extra space. The algorithm is only introduced for 2 × J × K tables using the Rasch model. We introduce a direct proof of the irreducibility of the Markov chain achieved by the Bunea and Besag algorithm. This is then used to prove that the Bunea and Besag (2000) approach can be applied to some tables of higher dimensions, such as 3 × 3 × K and 3 × 4 × 4. The efficiency of the Bunea and Besag approach is extensively investigated for many different settings, such as tables of low/moderate/large dimensions, tables with special zero patterns, etc. The efficiency of the algorithms is measured based on the effective sample size of the MCMC sample. We use two different metrics to penalise the effective sample size: the running time of the algorithm and the total number of bits used. These measures are also used to compute the efficiency of an adjustment of the Bunea and Besag algorithm, which shows that it outperforms the original algorithm for some settings.
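The fundamental +1/-1 moves on two-way tables, the simplest case of the Markov bases discussed above, can be sketched compactly. The toy below runs a Metropolis chain over tables with fixed margins, targeting the hypergeometric distribution, to estimate an exact p-value for independence; the thesis itself works with Bunea-and-Besag-style chains on higher-dimensional tables, and this sketch omits burn-in for brevity.

```python
# Diaconis-Sturmfels-style MCMC exact test for independence in a two-way
# contingency table: swap moves preserve all margins, a Metropolis ratio
# targets pi(t) proportional to 1 / prod_ij t_ij! (the hypergeometric law).
import random

def chi2(table, expected):
    return sum((o - e) ** 2 / e
               for row, erow in zip(table, expected)
               for o, e in zip(row, erow))

def exact_pvalue(table, steps=20000, seed=0):
    rng = random.Random(seed)
    t = [row[:] for row in table]
    R, C = len(t), len(t[0])
    rows = [sum(r) for r in t]
    cols = [sum(r[j] for r in t) for j in range(C)]
    n = sum(rows)
    expected = [[rows[i] * cols[j] / n for j in range(C)] for i in range(R)]
    observed, extreme = chi2(table, expected), 0
    for _ in range(steps):
        i1, i2 = rng.sample(range(R), 2)
        j1, j2 = rng.sample(range(C), 2)
        # the move +1 at (i1,j1),(i2,j2) and -1 at (i1,j2),(i2,j1)
        # preserves every row and column sum; skip it if a cell would go negative
        if t[i1][j2] > 0 and t[i2][j1] > 0:
            ratio = (t[i1][j2] * t[i2][j1]) / ((t[i1][j1] + 1) * (t[i2][j2] + 1))
            if rng.random() < ratio:       # Metropolis acceptance
                t[i1][j1] += 1; t[i2][j2] += 1
                t[i1][j2] -= 1; t[i2][j1] -= 1
        extreme += chi2(t, expected) >= observed
    return extreme / steps

print(exact_pvalue([[10, 2], [3, 15]]))    # strong association: tiny p-value
```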
22

Kostov, Belchin Adriyanov. "A principal component method to analyse disconnected frequency tables by means of contextual information". Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/316395.

Full text
Abstract
This thesis arises from the need to deal with open-ended questions answered in different languages in international surveys. For every language, the free answers are encoded in the form of an individuals × words lexical table. An important feature is that the lexical tables, from one language to the other, have neither the row-individuals nor the column-words in common. However, the global analysis and the comparison of the different samples require placing all the words, in any language, in the same space. As a solution, we propose to integrate the answers to the closed questions into the analysis, where the contextual variables are the same for all the samples. This integration plays an essential role by permitting a global analysis. Thus, for every language, we have one lexical table and one categorical/quantitative table, a structure that we call "coupled tables". The global complex data structure is a sequence of "coupled tables". To analyse these data, we adopt a Correspondence Analysis-like approach. We propose a method which combines Multiple Factor Analysis for Contingency Tables, in order to balance the influence of the sets of words in the global analysis, and Correspondence Analysis on a Generalised Aggregated Lexical Table, which places all the words in the same space. The new method is called Multiple Factor Analysis on Generalised Aggregated Lexical Table. The results of an application show that the method provides outputs that are easy to interpret. They allow for studying the similarities/dissimilarities between the words, including when they belong to different languages, insofar as they are associated in similar/different ways with the contextual variables. The methodology can be applied in other fields provided that the data are coded in a sequence of coupled tables.
23

Fichtner, Jason J. "Distribution Tables and Federal Tax Policy: A Scoring Index as a Method for Evaluation". Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/29422.

Full text
Abstract
Distribution tables have become ubiquitous in the tax policy debates surrounding major legislative initiatives to change tax law at the federal level. The fairness of any proposed change to federal tax policy has become one of the most highlighted components of tax policy discussions. The presentation of tax data within distribution tables can hide or omit important information that is required in order to effectively evaluate the merits of any tax legislation. Many producers of distribution tables show only the information necessary to present their policy preferences in the best possible light. The different economic assumptions and presentations of data used by the various groups that release distribution tables have the inherent consequence of providing the public with numerous tables that are often used as political ammunition to influence and shape debate. The purpose of this research is to contribute to the tax policy research literature by exploring the limitations and biases inherent in specific designs of tax distribution tables and in specific methodological approaches to tax distribution analysis. This is done by means of a systematic examination of how different designs and methodologies provide an incomplete picture of a proposed change to federal tax policy. By comparing distribution tables as used by different groups to provide alternative perspectives on various tax proposals, the research shows how the use of tax distribution tables often provides misleading results about the impact of proposed tax legislation in order to influence and shape the issues surrounding a proposed change to federal tax policy. A method for evaluating tax distribution tables is proposed which highlights the deficiencies of design and methodology that characterize the present use of tax distribution tables. An index of questions is provided as part of this research project to serve as a new tool of policy analysis, an index termed the "Tax Distribution Table Scoring Index" (TDTSI). The TDTSI will assist in balancing the different perspectives presented via tax distribution tables by identifying the biases and limitations associated with different methodologies and presentations of data.
24

Daniels, Andries Jerrick. "Development of infrared spectroscopic methods to assess table grape quality". Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/80369.

Full text
Abstract
The two white seedless table grape cultivars Regal Seedless and Thompson Seedless fulfil a very important role in securing foreign income, not only for the South African table grape industry but for the South African economy as a whole. These two cultivars are, however, like so many other white table grape cultivars, prone to browning, especially netlike browning on Regal Seedless and internal browning on Thompson Seedless grapes. This leads to huge financial losses every year, since there is no established way to assess at harvest, during storage or during packaging whether the grapes will eventually turn brown. In other words, there is no well-known protocol for assessing the browning risk of a particular batch of grapes prior to export. Numerous studies have been undertaken to determine the exact cause of browning and how it should be managed but, to date, no chemical or physical parameter has been firmly associated with the phenomenon. The overall aim of this study was thus to find an alternative way to deal with the problem by investigating the potential of near infrared (NIR) spectroscopy as a fast, non-destructive measurement technique to determine the browning potential of whole white seedless table grapes. A secondary aim was the determination of optimal ripeness of table grapes. In this way, harvest maturity and quality-indicative parameters, namely total soluble solids (TSS), titratable acidity (TA), pH, glucose and fructose, which are also associated with the browning phenomenon, were quantified using models based on infrared spectra. Three different techniques, (a) Fourier transform near infrared (FT-NIR), (b) Fourier transform mid infrared (FT-MIR) and (c) Fourier transform mid infrared attenuated total reflectance (FT-MIR ATR) spectroscopy, were investigated to determine these parameters. This was done so that a platform of different technologies would be available to the table grape industry. The grapes used in this study were harvested over two years (2008 and 2009) and were sourced from two different commercial vineyards in the Hex River valley, Western Cape, South Africa. Different crop loads (the total number of bunches on the vines per hectare) were left for Regal Seedless (75 000, 50 000 and 35 000) and for Thompson Seedless (75 000 and 50 000). Three rows were used for Regal Seedless and two rows for Thompson Seedless. Each row had six sections, each of which represented a repetition for each crop load. In 2008 these cultivars were harvested early at 16°Brix, at optimum ripeness (18°Brix) and late at 20°Brix. In 2009 they were harvested twice at the optimum ripeness level. Berries from harvested bunches were crushed and the juice was used to determine the reference values for the different parameters in the laboratory according to their specific methods. The obtained juice was also scanned on the three different instruments. Different software packages (OPUS 6.5 for the FT-NIR and FT-MIR ATR instruments and Unscrambler version 9.2 for the FT-MIR instrument) as well as different spectral pre-processing techniques were evaluated before construction of the models for all the instruments. Partial least squares (PLS) regression was used for the construction of the different calibration models. Different regression statistics, which included the root mean square error of prediction (RMSEP), the coefficient of determination (R²), the residual prediction deviation (RPD) and the bias, were used to evaluate the performance of the developed calibration models.
Calibration models fit for screening purposes were obtained on the FT-NIR and FT-MIR ATR instruments for TSS (11.40 - 21.80°Brix) (R² = 85.92%, RMSEP = 0.71°Brix, RPD = 2.67 and bias = 0.03°Brix), pH (2.94 - 3.9) (R² = 85.00%, RMSEP = 0.08, RPD = 2.59 and bias = -0.01) and TA (4.3 - 13.1 g/L) (R² = 90.77%, RMSEP = 0.48 g/L, RPD = 3.30 and bias = -0.03 g/L). Models for fructose (46.70 - 176.82 g/L) (R² = 74.66%, RMSEP = 9.28 g/L, RPD = 2.00 and bias = 1.10 g/L) and glucose (20.36 - 386.67 g/L) (R² = 70.71%, RMSEP = 11.10 g/L, RPD = 1.87 and bias = 1.64 g/L) were obtained with the FT-NIR and FT-MIR ATR instruments; these were in some instances fit for screening purposes and in other instances unsuitable for quantification purposes. The FT-MIR instrument gave models for all the parameters that were not yet suitable for quantification purposes. The combined spectral ranges used for calibration were often similar for some parameters, namely 12 493 - 5 446.2 for TSS and pH, 6 101.9 - 5 446.2 for TSS, TA and fructose, and 4 601.5 - 4 246.7 for pH and fructose on the FT-NIR instrument, and 2 993.2 - 2 322.3 for pH, TA and glucose and 1 654.3 - 649.4 for pH and glucose on the FT-MIR ATR instrument; sometimes they were adjacent (3 996.6 - 3 661.2, 3 663.5 - 3 327.7 and 3 327.2 - 2 322.3 for TSS and glucose, and 1 988.3 - 1 652.8 and 1 654.3 - 649.4 for TSS, pH and TA), and at other times they were overlapping (1 654.3 - 649.4 and 1 318.8 - 649.4 for pH, TA and fructose on the FT-MIR ATR instrument). This is a very good sign for the transfer of this technology to a handheld device, where adjacent and/or overlapping wavenumbers are crucial. Instruments which have to determine different parameters over large spectral ranges are not only impractical, because the instrument has to be big, but also very expensive. Another advantage of implementing especially FT-NIR spectroscopy as a fast, accurate and inexpensive technique for determining harvest maturity and quality parameters is that no sample preparation is necessary and very little waste (only a few single berries are tested) is produced, a prerequisite which is highly recommended in the green era that we are currently living in. A platform of technologies has now been made available through this study for the determination of the respective parameters in future table grape samples by just taking their spectra on one of the instruments, something that was not previously possible or available to the South African table grape industry. Berries for the browning experiments were scanned on a FT-NIR instrument immediately after harvest (before cold storage) and again after cold storage. Before cold storage they were scanned on each side of the berry; after cold storage they were scanned twice on a brown spot if browning was present and twice on a clear spot, irrespective of whether browning was present or not. Inspection of the berries for the incidence of browning after cold storage revealed that Regal Seedless had a higher incidence of browning (68% in 2008 and 66% in 2009) than Thompson Seedless (21% in 2008 and 25% in 2009). Regal Seedless was also more prone to external browning, specifically netlike browning, whereas Thompson Seedless was more prone to internal browning, despite the different phenotypes of browning that were present on both.
Principal component analysis (PCA) performed on the spectra obtained before and after cold storage revealed that NIR can capture the changes related to cold storage, with the first principal components explaining almost 100% of the variation in the spectra. Classification models, also built using PCA, were based on spectra of berries that remained clear before and after cold storage and of those that turned brown after cold storage. Classification models based on spectra obtained after cold storage (browning present) had a better total accuracy (94% for the training and 87% for the test datasets) than the classification models based on spectra obtained before cold storage (79% for the training and 64% for the test datasets). The implication of this is that the current models will be able to classify berries, in terms of those which have already turned brown and those that remained clear, better after cold storage than before cold storage, which is the critical stage at which we actually want to know whether the berries will turn brown or not. The potential to use NIR spectroscopy to detect browning on white seedless grapes before harvest is, however, still present, since all these models were built using the whole NIR spectrum. No variable selection was done, and all the different browning phenotypes were used together. Further analysis of the data will thus be based on using variable selection techniques like particle swarm optimization (PSO) to select certain wavelengths strongly associated with the browning phenomenon, and on the main types of browning only (netlike on Regal Seedless and internal browning on Thompson Seedless). This study has major implications for the table grape industry, since it is the first time that the possibility of predicting browning with methods other than visual inspection, especially before cold storage, has been shown.
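The PLS workflow and the reported statistics (RMSEP, R², RPD) can be sketched generically. The data below are synthetic stand-ins for the instrument spectra; the thesis's models were of course built on measured FT-NIR/FT-MIR spectra.

```python
# PLS calibration of a reference value (e.g. TSS in °Brix) from spectra-like
# inputs, evaluated with RMSEP, R² and RPD = SD(reference) / RMSEP.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))              # 120 "spectra", 200 wavenumbers
y = X[:, 40] * 1.5 - X[:, 120] + rng.normal(scale=0.2, size=120) + 18.0

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

rmsep = np.sqrt(np.mean((y_te - y_hat) ** 2))
r2 = 1 - np.sum((y_te - y_hat) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
rpd = y_te.std(ddof=1) / rmsep               # RPD around 2.5+ suits screening use
print(f"RMSEP={rmsep:.2f} °Brix, R²={r2:.2%}, RPD={rpd:.2f}")
```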
25

Bergmark, Fabian. "Online aggregate tables : A method for implementing big data analysis in PostgreSQL using real time pre-calculations". Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-207808.

Full text
Abstract
In modern user-centric applications, data gathering and analysis is often of vital importance. Current trends in data management software show that traditional relational databases fail to keep up with the growing data sets. Outsourcing data analysis often means data is locked in with a particular service, making transitions between analysis systems nearly impossible. This thesis implements and evaluates a data analysis framework implemented completely within a relational database. The framework provides a structure for implementations of online algorithms of analytical methods to store precomputed results. The result is an even resource utilization with predictable performance that does not decrease over time. The system keeps all raw data gathered to allow for future exportation. A full implementation of the framework is tested based on the current analysis requirements of the company Shortcut Labs, and performance measurements show no problem with managing data sets of over a billion data points.
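The core idea, maintaining aggregates incrementally at write time so reads stay cheap, is easy to sketch outside the database. The event shape below is invented; the thesis's framework implements this inside PostgreSQL.

```python
# Online aggregate table in miniature: per-key aggregates are updated as each
# event arrives, so read performance stays flat as the raw data grows.
from collections import defaultdict

aggregates = defaultdict(lambda: {"count": 0, "sum": 0.0})
raw_events = []                       # raw data is kept for future exports

def ingest(user_id, value):
    raw_events.append((user_id, value))
    agg = aggregates[user_id]         # O(1) maintenance per event
    agg["count"] += 1
    agg["sum"] += value

for uid, v in [("a", 3.0), ("b", 1.5), ("a", 2.0)]:
    ingest(uid, v)

agg = aggregates["a"]
print(agg["count"], agg["sum"] / agg["count"])   # 2 2.5, read without a scan
```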
26

Mazalek, Alexandra, 1976-. "Media tables : an extensible method for developing multi-user media interaction platforms for shared spaces". Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33882.

Full text
Abstract
Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2005.
Includes bibliographical references (p. 153-157).
As digital entertainment applications evolve, there is a need for new kinds of platforms that can support sociable media interactions for everyday consumers. This thesis demonstrates an extensible method and sensing framework for real-time tracking of multiple objects on an interactive table with an embedded display. This tabletop platform can support many different applications, and is designed to overcome the commercial obstacles of previous single-purpose systems. The approach is supported through the design and implementation of an acoustic-based sensing system that provides a means for managing large numbers of objects and applications across multiple platform instances. The design requires precise and dynamic positioning of multiple objects in order to enable real-time multi-user interactions with media applications. Technical analysis shows the approach to be robust, scalable to various sizes, and accurate to within a few millimeters of tolerance. A qualitative user evaluation of the table within a real-world setting illustrates its usability in the consumer entertainment space for digital media browsing and game play. Our observations revealed different ways of mapping physical interaction objects to the media space, as either generic controls or fixed-function devices, and highlighted the issue of directionality on visual displays that are viewable from different sides.
The thesis suggests that by providing a general-purpose method for shared tabletop display platforms we give application designers the freedom to invent a broad range of media interactions and applications for everyday social environments, such as homes, classrooms and public spaces. Contributions of the thesis include: formulation of an extensible method for media table platforms; development of a novel sensing approach for dynamic object tracking on glass surfaces; a taxonomy of interface design considerations; and prototype designs for media content browsing, digital storytelling and game play applications.
27

Yamashita, Shigeru. "Studies on Logic Synthesis Methods for Look-Up Table based FPGAs". 京都大学 (Kyoto University), 2001. http://hdl.handle.net/2433/150219.

Full text
28

Martins, Fabio Garrido Leal. "Graduation methods under parametric and non-parametric models for select and ultimate tables". Pontifícia Universidade Católica do Rio de Janeiro, 2007. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=11409@1.

Full text
Abstract
This study presents an approach to the main methods of life table construction. It covers traditional graduation techniques for cases with abundant exposure data, as well as a methodology for scarce data. Furthermore, it discusses the construction of select life tables, in particular disability mortality tables. Data from the population of Rio de Janeiro municipal civil servants were used to graduate mortality and disability mortality tables. In addition, a select disability mortality table was constructed based on the INSS urban disability-retired population.
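For readers unfamiliar with parametric graduation, the sketch below fits a Gompertz law to crude mortality rates by linear regression on the log scale; the crude rates are invented toy values, not the thesis data:

```python
# Gompertz graduation sketch: mu_x = B * c**x fitted by linear
# regression of log crude rates on age. Crude rates are invented.
import numpy as np

ages = np.arange(60, 80)
rng = np.random.default_rng(1)
crude = 0.01 * 1.09**(ages - 60) * np.exp(rng.normal(0, 0.08, ages.size))

slope, intercept = np.polyfit(ages, np.log(crude), 1)
B, c = np.exp(intercept), np.exp(slope)
graduated = B * c**ages       # smooth graduated rates
print(f"B={B:.3e}, c={c:.4f}")
```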
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Türk, Serhat y Kristoffer Müller. "Kinetic Art Table : Polar sand plotter". Thesis, KTH, Mekatronik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296307.

Texto completo
Resumen
CNC machines are used in many different applications; in this project, a polar CNC machine was used to draw mesmerizing patterns in fine sand on a table. The construction read G-code and converted it to polar coordinates. The capabilities of the plotter were tested with everything from ODE plots to custom-made patterns and drawings created with Sandify. Although the patterns were drawn properly with only small errors, the ODE plot proved too difficult to draw because it required a smaller magnetic ball and an even more precise system than the one used. The machine generated roughly 33 dB of noise when in use.
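The core of such a polar plotter is the conversion from the Cartesian targets in G-code to the machine's radius and angle coordinates. The following is a minimal sketch with a deliberately naive parser and invented sample moves, not the authors' firmware:

```python
# Naive G-code parsing plus Cartesian-to-polar conversion; sample
# moves are invented and only "G1 X.. Y.." lines are handled.
import math, re

def gcode_to_polar(lines):
    for line in lines:
        m = re.match(r"G1\s+X(-?\d+\.?\d*)\s+Y(-?\d+\.?\d*)", line)
        if m:
            x, y = float(m.group(1)), float(m.group(2))
            yield math.hypot(x, y), math.atan2(y, x)  # radius (mm), angle (rad)

for rho, theta in gcode_to_polar(["G1 X10.0 Y10.0", "G1 X-5 Y0"]):
    print(f"rho={rho:.2f} mm, theta={math.degrees(theta):.1f} deg")
```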
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Knight, Velma E. "A comprehensive analysis of the Method Absolute algorithm for solving transportation problems and the development of the Row Table Method and almost absolute points /". Kutztown University of Pennsylvania, 2001. Access available to Kutztown University faculty, staff, and students only. http://www.kutztown.edu/library/services/remote_access.asp.

Texto completo
Resumen
Thesis (M.S.)--Kutztown University of Pennsylvania, 2001.
Source: Masters Abstracts International, Volume: 45-06, page: 3171. Typescript. Abstract precedes thesis as preliminary leaves [1-3]. Includes bibliographical references (leaves 91-92).
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

ZHENG, SONGHUI. "Qualification des methodes de calculs de fluence dans les reacteurs a eau pressurisee. Amelioration du traitement des sections efficaces par la methode des tables de probabilite". Paris 11, 1993. http://www.theses.fr/1993PA112356.

Texto completo
Resumen
Knowledge of the fluence on the vessel is essential for pressurized water reactors; cross sections and their treatment play an important role in this type of problem. In this study, two benchmarks are interpreted with the TRIPOLI Monte Carlo code in order to qualify the calculation method and the cross sections used. For the treatment of cross sections, the multigroup method is currently the most common, but it raises problems: the choice of weighting functions is difficult, and a large number of groups is needed to describe the fluctuations of the cross sections properly. In this thesis we propose a new method, called the probability table method, for the treatment of cross sections. To qualify it, a one-dimensional Monte Carlo neutron transport program using probability tables was written; the comparison between multigroup calculations and calculations using probability tables shows the advantages of the new method. Probability tables were also introduced into the TRIPOLI code; the results of a benchmark calculation of deep neutron penetration in iron are clearly improved when compared with experiment. Consequently, this new method is of interest for shielding and neutronics calculations.
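To make the probability-table idea concrete, the toy sketch below keeps a small table of (probability, cross-section) pairs for one energy band and samples from it during tracking, instead of using a single group-averaged value; all numbers are invented:

```python
# One energy band represented by a probability table instead of a
# single averaged cross section; all values are invented.
import random

prob_table = [(0.70, 2.1), (0.25, 15.0), (0.05, 120.0)]  # (weight, sigma in barns)

def sample_sigma(table, rng=random):
    u, acc = rng.random(), 0.0
    for weight, sigma in table:
        acc += weight
        if u <= acc:
            return sigma
    return table[-1][1]

one_group_value = sum(w * s for w, s in prob_table)  # what plain averaging would use
print(one_group_value, [sample_sigma(prob_table) for _ in range(5)])
```

Sampling preserves the strong resonance-driven fluctuations of the cross section that a single averaged value smooths away, which is the motivation behind the method.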
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Kolcuoglu, Turusan. "Linearization Of Rf Power Amplifiers With Memoryless Baseband Predistortion Method". Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613213/index.pdf.

Texto completo
Resumen
In modern wireless communication systems, advanced modulation techniques are used to support more users by handling high data rates and to increase the utilization efficiency of the limited RF spectrum. These techniques are sensitive to nonlinear distortion due to their high peak-to-average power ratios. The main source of nonlinear distortion in transmitter topologies is the power amplifier, which determines the overall efficiency and linearity of the transmitter. To increase linearity without sacrificing efficiency, power amplifier linearization techniques may be a choice. The baseband predistortion technique is known to be one of the optimum methods due to its relatively low complexity and its convenience for adaptation. In this thesis, different memoryless baseband signal predistortion methods are investigated and analyzed by simulations. Look-Up Table (LUT) and polynomial approaches are compared, and the LUT approach is found to perform better. Parameters such as indexing, training sequences, and training duration are evaluated. An open-loop testbench is built with a real amplifier, and a different LUT predistortion method based on amplifier modeling is proposed. It is evaluated using a two-tone test and adjacent channel power suppression with 8PSK data. Some Look-Up Table parameters are also re-investigated with the proposed method, and the performance of the proposed method in different amplifier classes is observed. Along with these studies, a list of prerequisites for the design of a predistortion system is determined.
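As a conceptual illustration of memoryless LUT predistortion (not the thesis testbench), the sketch below trains a complex-gain table against a toy compressive amplifier model so that the cascade approximates a linear gain; the PA model, table size, and target gain are all invented:

```python
# Memoryless LUT predistortion against a toy compressive PA model.
import numpy as np

def pa(x):                                   # invented AM/AM-only PA model
    return x / (1.0 + 0.5 * np.abs(x)**2)

N, G = 64, 0.6                               # LUT size and desired linear gain
mags = np.linspace(1e-3, 1.0, N)             # LUT indexed by input magnitude
lut = np.ones(N, dtype=complex)
for _ in range(50):                          # simple iterative training
    err = pa(lut * mags) - G * mags          # want PA(LUT(x)) == G*x
    lut -= 0.5 * err / mags

x = 0.8 * np.exp(1j * 0.3)                   # one test sample
idx = min(int(abs(x) * (N - 1)), N - 1)
y = pa(lut[idx] * x)
print(abs(y), G * abs(x))                    # nearly equal after predistortion
```

The target gain G must sit below the amplifier's saturation level, otherwise no table entry can reach it; real designs also store and correct phase (AM/PM) distortion.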
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Yao, Yonggang. "Statistical Applications of Linear Programming for Feature Selection via Regularization Methods". The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1222035715.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Song, Yongxin. "Study of the dynamic behavior of tablet movement in a rotating drum using discrete element modeling (DEM) method". Morgantown, W. Va. : [West Virginia University Libraries], 2006. https://eidr.wvu.edu/etd/documentdata.eTD?documentid=4681.

Texto completo
Resumen
Thesis (Ph. D.)--West Virginia University, 2006.
Title from document title page. Document formatted into pages; contains xi, 110 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 105-110).
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Lei, Peng. "A Linear Programming Method for Synthesizing Origin-Destination (O-D) Trip Tables from Traffic Counts for Inconsistent Systems". Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/36860.

Texto completo
Resumen
Origin-Destination (O-D) trip tables represent the demand-supply information of each directed zonal pair in a given region during a given period of time. The effort of this research is to develop a linear programming methodology for estimating O-D trip tables based on observed link volumes. In order to emphasize the nature of uncertainty in the data and in the problem, the developed model permits the user's knowledge of path travel time to vary within a bandwidth of values, and accordingly modifies the user-optimality principle. The data on the observed flows might also not be complete and need not be perfectly matched. In addition, a prior trip table can be specified in order to guide the updating process via the model solution. To avoid the excessive computational demands of a complete enumeration of all possible paths between each O-D pair, a Column Generation Algorithm (CGA) is adopted to exploit the special structures of the model. Based on the known capacity of each link, a simple formula is suggested to calculate the cost for links having unknown volumes. An indexed cost is introduced to avoid the consideration of unnecessary passing-through-zone paths, and an algorithm for solving the corresponding minimum-cost-path problem is developed. General principles for the design of an object-oriented code are presented, and some useful programming techniques are suggested for this special problem. Some test results on the related models are presented and compared, and different sensitivity analyses are performed based on different scenarios. Finally, several topics are recommended for future research.
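A stripped-down version of the underlying LP idea (without the bandwidth travel-time constraints, prior table, or column generation of the thesis) can be written directly: choose nonnegative path flows minimizing the total absolute deviation between assigned and observed link volumes, linearized with slack variables. The network and counts below are invented:

```python
# Tiny LP sketch: fit path flows to observed link counts.
import numpy as np
from scipy.optimize import linprog

# link-path incidence: rows = links, columns = paths
A = np.array([[1, 0, 1],
              [1, 0, 0],
              [0, 1, 1],
              [0, 1, 0]], dtype=float)
v = np.array([55.0, 30.0, 50.0, 25.0])          # observed link volumes

n_links, n_paths = A.shape
# variables: [3 path flows x, 4 deviations e]; minimize sum(e)
c = np.concatenate([np.zeros(n_paths), np.ones(n_links)])
# -e <= A x - v <= e  written as two blocks of inequalities
A_ub = np.block([[A, -np.eye(n_links)], [-A, -np.eye(n_links)]])
b_ub = np.concatenate([v, -v])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n_paths + n_links))
print(res.x[:n_paths])   # path flows; O-D flows are sums of flows over paths
```

Column generation enters when the path set is too large to enumerate: paths are added to this LP only when they can improve the objective.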
Master of Science
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Božíková, Barbora. "Analýza spotřebitelských úvěrů pomocí statistických metod". Master's thesis, Vysoká škola ekonomická v Praze, 2016. http://www.nusl.cz/ntk/nusl-264541.

Texto completo
Resumen
Consumer loans are among the loan products provided by banking institutions. This diploma thesis focuses on the possibility of identifying risky clients with consumer loans, using an available data set. The first part of the thesis briefly describes the credit process and the theoretical basis of the statistical methods used in the empirical part. The second part investigates dependencies and describes the structure of the clients. Discriminant analysis was then applied with the aim of identifying sorting criteria that could distinguish risky from unproblematic clients. Finally, the results of the analysis were evaluated and the identified relationships described.
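The discriminant-analysis step described above can be illustrated on synthetic data (the real client data are not public); the features and class sizes below are invented:

```python
# Linear discriminant analysis on synthetic "client" data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
good = rng.normal([35000, 0.25], [8000, 0.10], size=(200, 2))  # income, debt ratio
bad = rng.normal([24000, 0.45], [8000, 0.12], size=(60, 2))
X = np.vstack([good, bad])
y = np.array([0] * 200 + [1] * 60)           # 1 = risky client

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.score(X, y))                       # in-sample accuracy
print(lda.predict([[23000, 0.50]]))          # likely flagged as risky
```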
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Hage, Nohra. "Study of plactic monoids by rewriting methods". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSES065/document.

Texto completo
Resumen
This thesis focuses on the study of plactic monoids by a new approach using methods issued from rewriting theory. These methods are applied to presentations of plactic monoids given in terms of Young tableaux, Kashiwara's crystal bases and the Littelmann path model. We study the syzygy problem for the Knuth presentation of the plactic monoids. Using the homotopical completion procedure that extends Squier's and Knuth-Bendix's completion procedures, we construct coherent presentations of plactic monoids of type A. Such a coherent presentation extends the notion of a presentation of a monoid by a family of generating syzygies, taking into account all the relations among the relations. We make explicit a finite coherent presentation of plactic monoids of type A with the column generators. However, this presentation is not minimal in the sense that many of its generators are superfluous. After applying the homotopical reduction procedure to this presentation, we reduce it to a finite coherent one that extends the Knuth presentation, giving then all the syzygies of the Knuth relations. More generally, we deal with presentations of plactic monoids of any type from the rewriting theory perspective. We construct finite convergent presentations for these monoids in a general way using Littelmann paths. Moreover, we study the latter presentations in terms of Kashiwara's crystal graphs for type C. By introducing the admissible column generators, we obtain a finite convergent presentation of the plactic monoid of type C with explicit relations. This approach should allow us to study the syzygy problem for the presentations of plactic monoids of any type.
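A small, self-contained taste of the plactic congruence that underlies all of this (textbook material, not the thesis machinery): two words related by a Knuth relation insert, via Schensted row insertion, to the same Young tableau:

```python
# Schensted row insertion; equal P-tableaux == plactic equivalence.
def insert(tableau, x):
    tableau = [row[:] for row in tableau]
    for row in tableau:
        bigger = [i for i, y in enumerate(row) if y > x]
        if not bigger:
            row.append(x)
            return tableau
        i = bigger[0]
        row[i], x = x, row[i]        # bump the displaced entry to the next row
    tableau.append([x])
    return tableau

def p_tableau(word):
    t = []
    for letter in word:
        t = insert(t, letter)
    return t

# Knuth relation yxz ~ yzx (x < y <= z), here 2 1 2 ~ 2 2 1:
print(p_tableau([2, 1, 2]))   # [[1, 2], [2]]
print(p_tableau([2, 2, 1]))   # [[1, 2], [2]]
```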
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Mustain, Mahmud. "The application of the shallow seismic reflection method and AVO analysis to identify the water table reflection". Thesis, University of Leicester, 2000. http://hdl.handle.net/2381/30442.

Texto completo
Resumen
A simple mathematical model of a sandstone aquifer has been constructed based on a local example, the Sherwood Sandstone of the East Midlands, UK. Simple seismic reflectivity calculations show that the air-water interface should theoretically produce a detectable seismic reflected wave for sandstone porosities as low as 10%. A synthetic seismic reflection dataset was constructed for a typical field survey geometry, and processed using the Promax system to produce a stacked section. The final section clearly shows the water table reflector. A field dataset from a subsequent survey has also been processed using the same sequence, which imaged a clear reflector at 30 m depth. This is important evidence that the method has uses in identifying the water table as part of progress in shallow seismic reflection surveying. The methods currently employed are (1) to define the optimum field survey and (2) to define the optimum processing sequence, so that the water table reflection can be imaged in a variety of geological situations. The application of Amplitude versus Offset (AVO) analysis to CMP gathers from the field data shows a characteristic increase of amplitude with increasing angle of incidence for super-critical reflection. In this way the water table reflector is clearly identified, with the amplitude increasing by 30% over the range of incident angles from 28° to 34°. AVO analysis has also been applied to other field data with a similar geological setting, but with a lithological reflector over the same super-critical angle. The resulting AVO curve shows a decrease in amplitude of over 90% with increasing offset, clearly differentiating it from the water table reflection. Both water table and lithological results closely agree with theoretical predictions.
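The reflectivity argument in the opening sentences reduces to the normal-incidence reflection coefficient at an impedance contrast. The sketch below uses generic sandstone values assumed purely for illustration, not the thesis measurements:

```python
# Normal-incidence reflection coefficient R = (Z2 - Z1) / (Z2 + Z1),
# with acoustic impedance Z = density * velocity. The densities and
# velocities are assumed, generic values for dry vs saturated sandstone.
def refl(rho1, v1, rho2, v2):
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

print(refl(1800.0, 800.0, 2100.0, 2200.0))   # sizeable positive R -> detectable
```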
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Li, Shih-Hung. "Automated robotic assembly using a vibratory work table : optimal tuning of vibrators based on the Taguchi method". Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/32128.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Ferreira, Juliana Borges. "Simulações da SAR em virtude da exposição por tablets operados próximo à cabeça". Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/143369.

Texto completo
Resumen
The vast majority of the world population is increasingly exposed to electromagnetic radiation from sources that are often located near the body. Electromagnetic radiation is considered a possible carcinogen for people, classification 2B indicated by the World Health Organization-WHO (WHO/IARC, 2011). Due to concerns regarding the risks associated with this exposure, there are regulations suggesting maximum allowed exposure values (ICNIRP, 1998; FCC, 2001). The correct evaluation of radiation doses is therefore relevant. This work aims to assess the results of the calculation of the Specific Absorption Rate (SAR) dose in users exposed to radiation from tablets operating in the Wi-Fi band. The three existing human head models used are a homogeneous SAM phantom and two heterogeneous realistic head models: a male adult and a male child. An additional male child model, developed from computed tomography (CT) images segmented in the AMIRA software, is also used in the simulations. A generic tablet model is used. The dosimetric parameters used for simulation of the SAR are computed using the SEMCAD X software, which is based on the Finite-Difference Time-Domain (FDTD) method. An FDTD code was developed in MATLAB in order to help choose the input SEMCAD X parameters. The distance between the tablet and the head models varies from 50 mm to 300 mm. SAR results are compared with the exposure limits recommended by international standards. Different antenna positions on the tablet are also simulated. Examining the results, it was found that the SAR values are very low and all results are within the psSAR limits recommended by the FCC (1.6 W/kg averaged over 1 g of tissue) and by the ICNIRP (2 W/kg in 10 g of tissue). Comparing the SAR in the SAM model with the SAR in the DUKE model, the SAM model proves to be conservative; however, when compared with the children, the SAM is not conservative.
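For readers unfamiliar with the FDTD method mentioned above, the sketch below shows a bare-bones 1D update loop; real SAR solvers such as SEMCAD X work in 3D with dispersive tissue models, and the grid, source, and material here are toy choices:

```python
# Minimal 1D FDTD (Yee) update loop; all parameters are toy values.
import numpy as np

nx, nt = 200, 400
c0, dx = 3e8, 1e-3
dt = dx / (2 * c0)                      # satisfies the 1D CFL condition
eps = np.full(nx, 8.854e-12)            # vacuum permittivity everywhere
mu0 = 4e-7 * np.pi

E = np.zeros(nx)
H = np.zeros(nx - 1)
for n in range(nt):
    H += dt / (mu0 * dx) * (E[1:] - E[:-1])              # H lives between E nodes
    E[1:-1] += dt / (eps[1:-1] * dx) * (H[1:] - H[:-1])  # update interior E
    E[50] += np.exp(-((n - 60) / 15.0)**2)               # soft Gaussian source
print(float(np.max(np.abs(E))))
```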
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Arora, Namita. "Dynamic estimation of origin-destination trip-tables from real-time traffic volumes using parameter optimization methods". Thesis, Virginia Tech, 1993. http://scholar.lib.vt.edu/theses/available/etd-11102009-020100/.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Binti, Shamjuddin Amnani. "Swelling and disintegration of multi-component polymeric structures". Thesis, University of Leicester, 2018. http://hdl.handle.net/2381/43072.

Texto completo
Resumen
This thesis aims to develop an understanding of the swelling and disintegration of multi-component polymeric structures such as pharmaceutical tablets. The thesis presents a model for the diffusion-driven water uptake, swelling deformation and subsequent disintegration of polymer matrix drug-delivery devices. Hygroscopic swelling occurs when a dry tablet enters a humid environment and absorbs water molecules. Modifying the tablet structure changes the release profile of the drug in the desired manner. Previous research mostly focused on transport problems related to drug release. This study contributes an understanding of the mechanical behaviour of hydrophilic polymer release matrix materials, which are treated as a continuum. Modelling the swelling problem involves concurrent large deformation of the polymer network and diffusion of the solvent through the network. A coupled diffusion-deformation model was created to study the relation between the two physics. The coupled diffusion-deformation model was then used to consider disintegration of the polymer matrix through the inclusion of swelling agents. Two cases are presented to illustrate the application of the model: swelling-controlled and immediate-release drug delivery systems. The study used COMSOL Multiphysics®, a commercial finite element software package, to perform the analysis. Various physics modules (structural mechanics, chemical transport and mathematics) were combined to solve coupled diffusion-deformation-damage boundary value problems. The numerical results were validated using existing experimental data from the literature. The model parameters were varied to investigate the sensitivity of the solution to them. A higher solvent concentration gradient in the matrix produced higher swelling strain and thus increased local stress. Disintegrability was measured by the time taken for the maximum principal stress to reach a given failure value. A higher coefficient of water diffusion allows a greater amount of water ingress into the matrix, and a higher coefficient of hygroscopic swelling generates higher local swelling strain. This study facilitates understanding of the complex phenomena involved in drug release formulation.
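In generic form (notation assumed here, not taken from the thesis), the coupling described above pairs Fickian solvent transport with a swelling strain proportional to the local concentration:

```latex
\begin{align}
  \frac{\partial C}{\partial t} &= \nabla \cdot \left( D \, \nabla C \right), &
  \varepsilon_{s} &= \beta_{h} \left( C - C_{0} \right),
\end{align}
```

where C is the solvent concentration, D the diffusion coefficient, beta_h the coefficient of hygroscopic swelling and C_0 the dry-state concentration; the swelling strain enters the mechanical problem as an eigenstrain, which is what couples the two physics.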
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Ngcobo, Mduduzi Elijah Khulekani. "Resistance to airflow and moisture loss of table grapes inside multi-scale packaging". Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/80192.

Texto completo
Resumen
Thesis (PhD(Agric))--Stellenbosch University, 2013.
ENGLISH ABSTRACT: Postharvest quality of fresh table grapes is usually preserved through cooling using cold air. However, cooling efficiencies are affected by the multi-scale packaging that is commercially used for handling grapes after harvest. There is usually spatial temperature variability of grapes that often results in undesirable quality variations during postharvest handling and marketing. This heterogeneity of grape berry temperature inside multi-packages is largely due to uneven cold airflow patterns that are caused by airflow resistance through multi-package components. The aims of this study were therefore to conduct an in-depth experimental investigation of the contribution of grape multi-packaging components to total airflow resistance, cooling rates and patterns of grapes inside the different commercially used multi-packages, and to assess the effects of these multi-packages on table grape postharvest quality attributes. A comprehensive study of moisture loss from grapes during postharvest storage and handling, as well as a preliminary investigation of the applicability of computational fluid dynamics (CFD) modeling in predicting the transport phenomena of heat and mass transfer of grapes during cooling and cold storage in multi-packages, was included in this study. Total pressure drop through different table grape packages was measured and the percentage contribution of each package component and the fruit bulk was determined. The liner films contributed significantly to total pressure drop for all the package combinations studied, ranging from 40.33±1.15% for micro-perforated liner film to 83.34±2.13% for non-perforated liner film. The total pressure drop through the grape bulk (1.40±0.01% to 9.41±1.23%) was the least compared to the different packaging combinations with different levels of liner perforation. The cooling rates of grapes in the 4.5 kg multi-packaging were significantly (P<0.05) slower than those of grapes in 5 kg punnet multi-packaging, with the 4.5 kg box resulting in a seven-eighths cooling time 30.30-46.14% and 12.69-25.00% longer than that of open-top and clamshell punnet multi-packages, respectively. After 35 days in cold storage at -0.5°C, grape bunches in the 5 kg punnet box combination (open-top and clamshell) had a weight loss of 2.01-3.12%, while the bunches in the 4.5 kg box combination had only 1.08% weight loss. During the investigation of the effect of different carton liners on the cooling rate and quality attributes of ‘Regal seedless’ table grapes in cold storage, the non-perforated liner films maintained relative humidity (RH) close to 100%. This high humidity inside non-perforated liner films resulted in delayed loss of stem quality but significantly (P ≤ 0.05) increased the incidence of SO2 injury and berry drop during storage compared to perforated liners. The perforated liners improved fruit cooling rates but significantly (P ≤ 0.05) reduced RH. The low RH in perforated liners also resulted in an increase in stem dehydration and browning compared to non-perforated liners. The moisture loss rate from grapes packed in non-perforated liner films was significantly (P<0.05) lower than the moisture loss rate from grapes packed in perforated liner films (120 x 2 mm and 36 x 4 mm). The effective moisture diffusivity values for stem parts packed in non-perforated liner films were lower than the values obtained for stem parts stored without packaging liners, and varied from 5.06×10⁻¹⁴ to 1.05×10⁻¹³ m²s⁻¹.
The dehydration rate of stem parts was inversely proportional to the size (diameter) of the stem parts. Dehydration rate of stems exposed (without liners) to circulating cold air was significantly (P<0.05) higher than the dehydration rates of stems packed in non-perforated liner film. Empirical models were successfully applied to describe the dehydration kinetics of the different parts of the stem. The potential of cold storage humidification in reducing grape stem dehydration was investigated. Humidification delayed and reduced the rate of stem dehydration and browning; however, it increased SO2 injury incidence on table grape bunches and caused wetting of the packages. The flow phenomenon during cooling and handling of packed table grapes was also studied using a computational fluid dynamic (CFD) model and validated using experimental results. There was good agreement between measured and predicted results. The result demonstrated clearly the applicability of CFD models to determine optimum table grape packaging and cooling procedures.
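The seven-eighths cooling time reported above is read off a pulp-temperature series as the time at which the fractional unaccomplished temperature difference falls to 1/8; here is a sketch on a synthetic exponential cooling curve (not the thesis data):

```python
# Seven-eighths cooling time from a synthetic pulp-temperature series.
import numpy as np

t = np.arange(0, 600, 5.0)                       # minutes
T_air, T0 = -0.5, 22.0
T = T_air + (T0 - T_air) * np.exp(-t / 90.0)     # toy exponential cooling

Y = (T - T_air) / (T0 - T_air)                   # fractional temperature
print(t[np.argmax(Y <= 0.125)])                  # ~ 90 * ln(8) = 187 min
```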
PPECB and Postharvest Innovation Programme (PHI-2) for their financial support
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Tuswa, Nangamso. "Using coupled atmospheric-unsaturated zone model to quantify groundwater recharge to the Table Mountain Group Aquifer system, George, South Africa". University of the Western Cape, 2019. http://hdl.handle.net/11394/7059.

Texto completo
Resumen
Magister Scientiae - MSc
The current study aimed at providing groundwater recharge estimates in a fractured rock aquifer environment occupied by pine plantations and indigenous forests, in order to improve the understanding of the effect of pine plantation forests on recharge. This was based on the argument that, for the trees to affect recharge, they do not necessarily need to tap directly from the saturated zone, as vegetation may indirectly affect groundwater recharge by intercepting and abstracting the infiltrating water in the vadose zone before it reaches the water table. The study was conducted along the southern Cape coastline of the Western Cape Province in South Africa, 7 km east of George, in an area characterized by the occurrence of the Table Mountain Group aquifer. The research presented in this thesis formed part of a Water Research Commission (WRC) project titled "The Impacts of Commercial Plantation Forests on Groundwater Recharge and Streamflow". To achieve the aim of the current study, three objectives were formulated: i) to characterize the dominantly occurring recharge mechanism, ii) to determine long-term groundwater recharge estimates, and iii) to assess the effect of plantation forests on groundwater recharge. As part of characterizing the dominant recharge mechanism in the area, a conceptual groundwater recharge model of the area was developed to explain the recharge mechanism and facilitate an improved understanding of recharge estimates. The model was based on a theoretical understanding and previous investigations conducted in the study area. Methods such as environmental stable isotopes and hydrochemistry were used to refine the conceptual model by identifying the source of recharge and the dominant recharge mechanism. The occurrence and density of lineaments were used as a proxy to delineate potential recharge zones in the area. Recharge was estimated using the Rainfall Infiltration Breakthrough (RIB) and the Chloride Mass Balance (CMB) methods. Additionally, the effect of plantation forests on recharge was assessed using the HYDRUS-2D numerical model. The recharge estimates derived from the RIB and CMB techniques were verified using the published maps by Vegter (1995).
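The Chloride Mass Balance method named above has the standard form below (symbols as usually defined; the actual input values would come from the study area):

```latex
\begin{equation}
  R = \frac{P \cdot Cl_{P}}{Cl_{GW}}
\end{equation}
```

where R is recharge, P mean annual precipitation, Cl_P the chloride concentration in rainfall, and Cl_GW the chloride concentration in groundwater; chloride behaves conservatively, so its enrichment in groundwater relative to rainfall indicates what fraction of precipitation becomes recharge.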
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Alkhairy, Ibrahim H. "Designing and Encoding Scenario-based Expert Elicitation for Large Conditional Probability Tables". Thesis, Griffith University, 2020. http://hdl.handle.net/10072/390794.

Texto completo
Resumen
This thesis focuses on the general problem of asking experts to assess the likelihood of many scenarios when there is insufficient time to ask about all possible scenarios. The challenge addressed here is one of experimental design: how to choose which scenarios are assessed, and how to use that limited data to extrapolate information about the scenarios that remain unasked. In a mathematical sense, this problem can be constructed as a problem of expert elicitation, where experts are asked to quantify conditional probability tables (CPTs). Experts may be relied on, for example, in the situation when empirical data is unavailable or limited. CPTs are used widely in statistical modelling to describe probabilistic relationships between an outcome and several factors. I consider two broad situations where CPTs are important components of quantitative models. Firstly, experts are often asked to quantify CPTs that form the building blocks of Bayesian Networks (BNs). In one case study, CPTs describe how habitat suitability of feral pigs is related to various environmental factors, such as water quality and food availability. Secondly, CPTs may also support a sensitivity analysis for large computer experiments, by examining how some outcome changes as various factors are changed. Another case study uses CPTs to examine sensitivity to settings, for algorithms available through virtual laboratories, to map the geographic distribution of species such as the koala. An often-encountered problem is the sheer amount of information asked of the expert: the number of scenarios. Each scenario corresponds to a row of the CPT, and concerns a particular combination of factors and the likely outcome. Currently most researchers arrange elicitation of CPTs by keeping the number of rows and columns in the CPT to a minimum, so that they need to ask experts about no more than twenty or so scenarios. However, in some practical problems, CPTs may need to involve more rows and columns, for example involving more than two factors, or factors which can take on more than two or three possible values. Here we propose a new way of choosing scenarios, one that underpins the elicitation strategy, by taking advantage of experimental design to ensure adequate coverage of all scenarios and to make best use of scarce resources like the valuable time of the experts. I show that this can essentially be constructed as a problem of how to better design the choice of scenarios to elicit from a CPT. The main advantages of these designs are that they explore more of the design space compared to usual design choices like the one-factor-at-a-time (OFAT) design that underpins the popular encoding approach embedded in "CPT Calculator". In addition, this work tailors an under-utilized scenario-based elicitation method to ensure that the expert's uncertainty is captured, together with their assessments, of the likelihood of each possible outcome. I adopt the more intuitive Outside-In Elicitation method to elicit the expert's plausible range of assessed values, rather than the more common, reverse-order approach of eliciting their uncertainty around their best guess. Importantly, this plausible range of values is more suitable for input into the new approach proposed for encoding scenario-based elicitation: a Bayesian (rather than a Frequentist) interpretation. Whilst eliciting some scenarios from large CPTs, another challenge arises from the remaining CPT entries that are not elicited.
This thesis shows how to adopt a statistical model not only to interpolate the missing CPT entries but also to quantify the uncertainty for each scenario, which is new for these two situations: BNs and sensitivity analyses. For this purpose, I introduce the use of Bayesian generalized linear models (GLMs). The Bayesian updating framework also enables us to update the results of elicitation by incorporating empirical data. The idea is to utilise scenarios elicited from experts to construct an informative Bayesian "prior" model. Then the prior information (e.g. about scenarios) is combined with the empirical data (e.g. from computer model runs) to update the posterior estimates of plausible outcomes (affecting all scenarios). The main findings showed that Bayesian inference suits the small-data problem of encoding the expert's mental model underlying their assessments, allowing uncertainty to vary about each scenario. In addition, Bayesian inference provides rich feedback to the modeller and experts on the plausible influence of factors on the response, and whether any information was gained on their interactions. That information could be pivotal to designing the next phase of elicitation about habitat requirements or another phase of computer models. In this way, the Bayesian paradigm naturally supports a sequential approach to gradually accruing information about the issue at hand. As summarised above, the novel statistical methodology presented in this thesis also contributes to computer science. Specifically, computation for Bayesian Networks and sensitivity analyses of large computer experiments can be re-designed to be more efficient. Here the expert knowledge is useful to complement the empirical data to inform a more comprehensive analysis.
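One minimal way to realize the interpolation idea (a sketch, not the thesis's models) is a MAP-estimated Bayesian logistic GLM fitted to the elicited scenarios and evaluated at the unelicited rows; the factors, prior, pseudo-sample size, and elicited probabilities below are all invented:

```python
# MAP logistic GLM: interpolate unelicited CPT rows from elicited ones.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 0], [1, 0], [0, 1]], dtype=float)  # elicited factor settings
p = np.array([0.10, 0.45, 0.30])                     # expert's probabilities
X1 = np.column_stack([np.ones(len(X)), X])           # add intercept

def neg_log_post(beta, tau=2.0):
    mu = 1 / (1 + np.exp(-(X1 @ beta)))
    ll = 20 * np.sum(p * np.log(mu) + (1 - p) * np.log(1 - mu))  # pseudo-Binomial(20)
    return -ll + np.sum(beta**2) / (2 * tau**2)      # Gaussian prior -> MAP

beta = minimize(neg_log_post, np.zeros(3)).x
unasked = np.array([1.0, 1.0, 1.0])                  # intercept + both factors on
print(1 / (1 + np.exp(-unasked @ beta)))             # interpolated CPT entry
```

A full Bayesian treatment would sample the posterior rather than take the MAP point, which is what supplies per-scenario uncertainty bands.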
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
School of Info & Comm Tech
Science, Environment, Engineering and Technology
Full Text
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Bršlíková, Jana. "Analýza úmrtnostních tabulek pomocí vybraných vícerozměrných statistických metod". Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-201859.

Texto completo
Resumen
Mortality is historically one of the most important demographic indicators and clearly reflects the maturity of each country. The objective of this diploma thesis is to compare mortality rates in the analyzed countries around the world, over time and among each other, using principal component analysis, which allows the data to be assessed in a different way. The big advantage of this method is the minimal loss of information and the quite understandable interpretation of mortality in each country. The thesis offers several interesting graphical outputs that, for example, confirm the higher mortality rate in Eastern European countries compared to Western European countries, and show that the Czech Republic is the post-communist country where mortality fell most between 1990 and 2010. The source of the data is the Human Mortality Database, and all data were processed in the statistical tool SPSS.
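The analysis pattern described above can be sketched in a few lines: run PCA on a countries-by-age matrix of log death rates and inspect the leading components. The tiny matrix below is fabricated for illustration; the thesis works with the Human Mortality Database:

```python
# PCA on a small, fabricated countries-by-age log-mortality matrix.
import numpy as np
from sklearn.decomposition import PCA

# rows = countries, cols = log death rates at ages 40, 50, 60, 70
rates = np.log(np.array([
    [0.0021, 0.0050, 0.0120, 0.0300],   # "western" profile
    [0.0032, 0.0080, 0.0190, 0.0450],   # "eastern" profile
    [0.0023, 0.0054, 0.0130, 0.0320],
    [0.0030, 0.0076, 0.0185, 0.0430],
]))
pca = PCA(n_components=2).fit(rates)
print(pca.explained_variance_ratio_)   # PC1 typically captures overall level
print(pca.transform(rates)[:, 0])      # country scores separate the two groups
```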
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Chang, Chia-Chi y 張家綺. "An Automatic GUI Generating Method from Hand-Drawn Sketch to Neat Tableau Based on Deep Neural Networks: A Case of Webpage Layout". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/cqdn9k.

Texto completo
Resumen
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
Academic year 107 (2018-19)
This thesis presents a webpage GUI generator that works from hand-drawn sketches of webpage designs. The generator can reduce the labor and time costs of the web development process and allows designers and developers to quickly generate webpage prototypes. There are two main stages: object detection and automatic GUI generation. First, webpage elements are detected in a hand-drawn sketch by a deep neural network. After feature extraction and classification of each webpage element, the bounding box of the element is predicted. The coordinates of the bounding box for each webpage element allow the system to generate a GUI skeleton for webpage development. In the experiments, the automatic GUI generating process takes 2~3 seconds, and the accuracy of generating a neat tableau webpage layout reaches 83.24% on average. The proposed method can be used in webpage development to generate high-fidelity prototypes for clients to preview designs, which reduces miscommunication, saves webpage development costs, and shortens webpage development time.
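A toy version of the generation step (not the thesis system) takes detected element classes and bounding boxes, here hard-coded as if they came from the detector, and emits a crude HTML skeleton ordered top-to-bottom, left-to-right:

```python
# Bounding boxes -> naive HTML skeleton; detections are hard-coded.
detections = [                      # (class, x, y, w, h) in page pixels
    ("header", 0, 0, 800, 80),
    ("button", 650, 20, 120, 40),
    ("image", 40, 120, 300, 200),
    ("paragraph", 380, 120, 380, 200),
]
tag_map = {"header": "<header></header>", "button": "<button>OK</button>",
           "image": "<img src='#'/>", "paragraph": "<p>...</p>"}

ordered = sorted(detections, key=lambda d: (d[2] // 100, d[1]))  # bucket by y, then x
html = "<div>\n" + "\n".join("  " + tag_map[d[0]] for d in ordered) + "\n</div>"
print(html)
```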
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Jakubův, Jan. "Automatické dokazování vět s použitím tableaux metod". Master's thesis, 2006. http://www.nusl.cz/ntk/nusl-270033.

Texto completo
Resumen
In this paper we study the tableau calculus and related methods. We introduce the basic notions and present the tableau calculus for first-order logic. We then present the Beckert and Posegga [5] prover, extend it with a speed-up technique, and compare the results with the original prover. Next we present the tableau T system of Degtyarev and Voronkov [3], designed to handle equality. The main contribution of this paper is the implementation of the T system prover. We also present a library providing tools for working with first-order logic objects in the Python programming language.
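Since tableau provers are the topic of this entry (and of the bibliography as a whole), a self-contained toy helps fix ideas: a propositional tableau prover over atoms, 'neg', 'and' and 'or'. A formula is valid iff the tableau for its negation closes. This is textbook material, not the prover implemented in the thesis:

```python
# Minimal propositional tableau prover; formulas are nested tuples.
def closed(branch):
    return any(("neg", f) in branch for f in branch if isinstance(f, str))

def expand(branch):
    """Apply one tableau rule; return child branches, or None if all literals."""
    for f in branch:
        if not isinstance(f, tuple):
            continue                                   # plain atom
        rest = [g for g in branch if g is not f]
        if f[0] == "and":
            return [rest + [f[1], f[2]]]               # alpha rule
        if f[0] == "or":
            return [rest + [f[1]], rest + [f[2]]]      # beta rule
        if isinstance(f[1], tuple):                    # negated compound
            g = f[1]
            if g[0] == "neg":
                return [rest + [g[1]]]                 # double negation
            if g[0] == "and":                          # de Morgan: branches
                return [rest + [("neg", g[1])], rest + [("neg", g[2])]]
            return [rest + [("neg", g[1]), ("neg", g[2])]]   # neg-or
    return None

def satisfiable(branch):
    if closed(branch):
        return False
    children = expand(branch)
    if children is None:
        return True                                    # open, fully expanded
    return any(satisfiable(b) for b in children)

def valid(formula):
    return not satisfiable([("neg", formula)])

print(valid(("or", "p", ("neg", "p"))))   # True: excluded middle
print(valid(("and", "p", ("neg", "p"))))  # False
```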
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Levy, Michel. "Contribution à l'analyse de la methode des tableaux". PhD thesis, 1991. http://tel.archives-ouvertes.fr/tel-00340419.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Ahmat, Norhayati, Hassan Ugail y Castro Gabriela Gonzalez. "Method of modelling the compaction behaviour of cylindrical pharmaceutical tablets". 2010. http://hdl.handle.net/10454/4976.

Texto completo
Resumen
The mechanisms involved for compaction of pharmaceutical powders have become a crucial step in the development cycle for robust tablet design with required properties. Compressibility of pharmaceutical materials is measured by a force-displacement relationship which is commonly analysed using a well known method, the Heckel model. This model requires the true density and compacted powder mass value to determine the powder mean yield pressure. In this paper, we present a technique for shape modelling of pharmaceutical tablets based on the use of partial differential equations (PDEs). This work also presents an extended formulation of the PDE method to a higher dimensional space by increasing the number of parameters responsible for describing the surface in order to generate a solid tablet. Furthermore, the volume and the surface area of the parametric cylindrical tablet have been estimated numerically. Finally, the solution of the axisymmetric boundary value problem for a finite cylinder subject to a uniform axial load has been utilised in order to model the displacement components of a compressed PDE-based representation of a tablet. The Heckel plot obtained from the developed model shows that the model is capable of predicting the compaction behaviour of pharmaceutical materials since it fits the experimental data accurately.
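The Heckel analysis mentioned in the abstract reduces to a linear fit: plotting ln(1/(1-D)) against compaction pressure P gives a slope K whose reciprocal is the mean yield pressure. A sketch with invented data points:

```python
# Heckel model fit: ln(1/(1-D)) = K*P + A; mean yield pressure = 1/K.
import numpy as np

P = np.array([25.0, 50.0, 75.0, 100.0, 150.0])   # pressure (MPa), invented
D = np.array([0.62, 0.72, 0.79, 0.84, 0.90])     # relative density, invented
y = np.log(1.0 / (1.0 - D))

K, A = np.polyfit(P, y, 1)
print(f"mean yield pressure ~ {1.0 / K:.1f} MPa")
```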
Los estilos APA, Harvard, Vancouver, ISO, etc.