Dissertations / Theses on the topic 'Multipole decomposition'

To see the other types of publications on this topic, follow the link: Multipole decomposition.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Multipole decomposition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Cocle, Roger. "Combining the vortex-in-cell and parallel fast multipole methods for efficient domain decomposition simulations: DNS and LES approaches." Université catholique de Louvain, 2007. http://edoc.bib.ucl.ac.be:81/ETD-db/collection/available/BelnUcetd-08172007-165806/.

Full text
Abstract:
This thesis is concerned with the numerical simulation of high Reynolds number, three-dimensional, incompressible flows in open domains. Many problems treated in Computational Fluid Dynamics (CFD) occur in free space: e.g., external aerodynamics past vehicles, bluff bodies or aircraft; shear flows such as shear layers or jets. These flows are often unsteady, appear chaotic with the presence of a large range of eddies, and are mainly dominated by convection. Lagrangian Vortex Element Methods (VEM) have long been shown to be particularly well suited for simulating such flows. In VEM, two approaches are classically used for solving the Poisson equation. The first is the Biot-Savart approach, where the Poisson equation is solved using Green's functions; the unbounded domain is thus implicitly taken into account. In that case, Parallel Fast Multipole (PFM) solvers are usually used. The second approach is the Vortex-In-Cell (VIC) method, where the Poisson equation is solved on a grid using fast grid solvers. This requires imposing boundary conditions or assuming periodicity. An important difference is that fast grid solvers are much faster than fast multipole solvers. Here we combine these two approaches, taking advantage of each, and eventually obtain an efficient VIC-PFM method to solve incompressible flows in open domains. The major interest of this combination is its computational efficiency: compared to the PFM solver used alone, the VIC-PFM combination is 15 to 20 times faster. The second major advantage is the possibility to run Large Eddy Simulations (LES) at high Reynolds number. Indeed, as part of the operations is done in an Eulerian way (i.e. on the VIC grid), all the existing subgrid scale (SGS) models used in classical Eulerian codes, including the recent "multiscale" models, can be easily implemented.
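The abstract contrasts the two classical Poisson solvers used in vortex methods. Below is a minimal 2-D sketch of both, not taken from the thesis: the regularized kernel, the periodic boundary conditions, and all numerical values are assumptions made here for brevity (the thesis works in 3-D open domains).

```python
# Sketch: the two ways of recovering velocity from vorticity in vortex methods.
import numpy as np

def biot_savart_direct(xp, yp, gamma, xt, yt, eps=1e-6):
    """O(N^2) Biot-Savart summation over point vortices: the kernel
    evaluation that a parallel fast multipole (PFM) solver accelerates."""
    dx = xt[:, None] - xp[None, :]
    dy = yt[:, None] - yp[None, :]
    r2 = dx**2 + dy**2 + eps**2                 # regularized distance
    u = (gamma[None, :] * -dy / (2 * np.pi * r2)).sum(axis=1)
    v = (gamma[None, :] *  dx / (2 * np.pi * r2)).sum(axis=1)
    return u, v

def vic_fft_poisson(omega, L=1.0):
    """Vortex-In-Cell style grid solve: lap(psi) = -omega by FFT
    (periodic boundaries assumed here, unlike the thesis's open domain)."""
    n = omega.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                              # avoid dividing the mean mode
    psi_hat = np.fft.fft2(omega) / k2
    psi_hat[0, 0] = 0.0
    psi = np.real(np.fft.ifft2(psi_hat))
    u = np.gradient(psi, L / n, axis=1)         # u =  d(psi)/dy
    v = -np.gradient(psi, L / n, axis=0)        # v = -d(psi)/dx
    return u, v

# tiny demo: one vortex at the origin induces counter-clockwise swirl
u, v = biot_savart_direct(np.array([0.0]), np.array([0.0]),
                          np.array([1.0]), np.array([0.5]), np.array([0.0]))
print(u, v)                                     # ~ (0, 0.318)
u_grid, v_grid = vic_fft_poisson(np.random.default_rng(0).random((32, 32)))
```

The direct summation is the quadratic-cost kernel a PFM solver accelerates; the FFT solve is the kind of fast grid solver VIC exploits, which is why combining the two pays off.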
APA, Harvard, Vancouver, ISO, and other styles
2

Dal, Forno Massimo. "Theoretical and experimental analysis of interactions between electromagnetic fields and relativistic electrons in vacuum chamber." Doctoral thesis, Università degli studi di Trieste, 2013. http://hdl.handle.net/10077/8570.

Full text
Abstract:
2011/2012
The Free Electron Laser (FEL) is a fourth-generation light source with more stringent specifications than third-generation light sources such as synchrotrons. The so-called emittance and the beam trajectory determine the beam quality and must satisfy stringent requirements in FELs. For example, in the undulator hall, the beam position must be measured with micrometer resolution. Control of the beam position can be achieved using a cavity beam position monitor (Cavity BPM). This thesis describes the research performed on the cavity BPM. Specifically, the electromagnetic design, simulation and optimization of a cavity BPM have been carried out. Subsequently, 25 cavity BPMs were manufactured and installed in the undulator hall of the FERMI@Elettra project. A new RF front-end was set up, and a series of measurements were performed. The second device studied in this PhD is the travelling-wave linear accelerator. Traditional accelerating structures endowed with a single-feed coupler cause degradation of the electron beam properties due to the electromagnetic field asymmetry. A new type of single-feed structure with a movable short circuit is proposed, in which the electric field has been symmetrized. The electromagnetic design, simulation and optimization of the device have been carried out, and a prototype of the accelerating structure has been produced and tuned. The electric field was measured with the bead-pull method. Finally, this thesis describes high-energy RF deflectors (HERFD), which are fundamental diagnostic tools for measuring electron beam properties, in particular the bunch length and the longitudinal phase space.
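As a hedged aside on the measurement principle (not the thesis's actual processing chain): in a standard cavity BPM the dipole-mode signal scales linearly with beam offset and with bunch charge, so it is normalized by a monopole reference cavity. A toy sketch, where the calibration constant and signal values are hypothetical:

```python
# Standard cavity-BPM position estimate (generic form, illustrative numbers).
import numpy as np

def bpm_position(v_dipole, v_reference, k_cal):
    """Beam offset estimate; k_cal (m per unit ratio) is a hypothetical
    calibration constant from a mover- or beam-based calibration."""
    ratio = np.real(v_dipole / v_reference)   # phase-sensitive detection
    return k_cal * ratio

x = bpm_position(v_dipole=0.02 + 0.001j, v_reference=1.0 + 0.0j, k_cal=5e-3)
print(f"estimated offset: {x*1e6:.1f} um")    # -> 100.0 um for these numbers
```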
XXV Ciclo
APA, Harvard, Vancouver, ISO, and other styles
3

Dal, Forno Massimo. "Theoretical and experimental analysis of interactions between electromagnetic fields and relativistic electrons in vacuum chamber." Doctoral thesis, Università degli studi di Trieste, 2013. http://hdl.handle.net/10077/8537.

Full text
Abstract:
2011/2012
The Free Electron Laser (FEL) is a fourth-generation light source with more stringent specifications than third-generation light sources such as synchrotrons. The so-called emittance and the beam trajectory determine the beam quality and must satisfy stringent requirements in FELs. For example, in the undulator hall, the beam position must be measured with micrometer resolution. Control of the beam position can be achieved using a cavity beam position monitor (Cavity BPM). This thesis describes the research performed on the cavity BPM. Specifically, the electromagnetic design, simulation and optimization of a cavity BPM have been carried out. Subsequently, 25 cavity BPMs were manufactured and installed in the undulator hall of the FERMI@Elettra project. A new RF front-end was set up, and a series of measurements were performed. The second device studied in this PhD is the travelling-wave linear accelerator. Traditional accelerating structures endowed with a single-feed coupler cause degradation of the electron beam properties due to the electromagnetic field asymmetry. A new type of single-feed structure with a movable short circuit is proposed, in which the electric field has been symmetrized. The electromagnetic design, simulation and optimization of the device have been carried out, and a prototype of the accelerating structure has been produced and tuned. The electric field was measured with the bead-pull method. Finally, this thesis describes high-energy RF deflectors (HERFD), which are fundamental diagnostic tools for measuring electron beam properties, in particular the bunch length and the longitudinal phase space.
XXV Ciclo
APA, Harvard, Vancouver, ISO, and other styles
4

Laffont, Pierre-Yves. "Intrinsic image decomposition from multiple photographs." Nice, 2012. http://www.theses.fr/2012NICE4060.

Full text
Abstract:
Editing materials and lighting is a common image manipulation task that requires significant expertise to achieve plausible results. Each pixel aggregates the effect of both material and lighting, therefore standard color manipulations are likely to affect both components. Intrinsic image decomposition separates a photograph into independent layers: reflectance, which represents the color of the materials, and illumination, which encodes the effect of lighting at each pixel. In this thesis, we tackle this ill-posed problem by leveraging additional information provided by multiple photographs of the scene. We combine image-guided algorithms with sparse 3D information reconstructed from multi-view stereo in order to constrain the decomposition. We first present an approach to decompose images of outdoor scenes, using photographs captured at a single time of day. This method not only separates reflectance from illumination, but also decomposes the illumination into sun, sky and indirect layers. We then develop a new method to extract lighting information about a scene from only a few images, thus simplifying the capture and calibration steps of our intrinsic decomposition. In the third part of this thesis, we focus on image collections gathered from photo-sharing websites or captured with a moving light source. We exploit the variations of lighting to process complex scenes without user assistance or precise and complete geometry. The methods described in this thesis enable advanced image manipulations such as lighting-aware editing, insertion of virtual objects, and image-based illumination transfer between photographs of a collection.
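The model underlying this decomposition is multiplicative at each pixel: image = reflectance x illumination, usually handled in the log domain. A tiny numpy illustration with synthetic data (the thesis's actual constraints come from multi-view reconstruction, not from oracle knowledge of the illumination layer as here):

```python
# Minimal illustration of the intrinsic-image model: I = R * S per pixel,
# so log I = log R + log S, and a single photo leaves the split ambiguous.
import numpy as np

rng = np.random.default_rng(0)
R = rng.uniform(0.2, 0.9, size=(4, 4))   # synthetic reflectance layer
S = rng.uniform(0.1, 1.0, size=(4, 4))   # synthetic illumination layer
I = R * S                                # the only observed quantity

# Knowing S (e.g., constrained by 3D information and lighting variation,
# as in the thesis) recovers reflectance exactly:
R_hat = I / np.clip(S, 1e-6, None)
assert np.allclose(R_hat, R)
```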
APA, Harvard, Vancouver, ISO, and other styles
5

Rajasekharan, Sabarinath. "The decomposition of multi robot systems : a human motor control perspective." Thesis, University of Reading, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269653.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jang, Young Jae 1974. "Multiple part type decomposition method in manufacturing processing line." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/89318.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; and, (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2001.
"June 2001."
Includes bibliographical references (leaf 67).
by Young Jae Jang.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
7

Conway, Adrian E. "Decomposition methods and computational algorithms for multiple-chain closed queueing networks." Thesis, University of Ottawa (Canada), 1986. http://hdl.handle.net/10393/4973.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dray, Matthew William. "Effects of multiple environmental stressors on litter chemical composition and decomposition." Thesis, Cardiff University, 2014. http://orca.cf.ac.uk/68365/.

Full text
Abstract:
Tree litter is a key basal resource in temperate deciduous woodlands and streams that drain them. Litter decomposition promotes carbon and nutrient cycling, fueling woodland food webs. Research to date has not thoroughly explored how ongoing environmental changes affect this process. This study used microcosm and field experiments to investigate how multiple stressors (urban pollution, elevated atmospheric CO2 and stream acidification) affected litter chemical composition, invertebrate consumption, and terrestrial and aquatic mass loss. Leaf litter chemical composition differed between ambient- and elevated-CO2 litters, and between rural and urban litters, but the direction of these responses was complex and differed between experiments. In microcosms, leaf litter consumption by terrestrial and aquatic invertebrate detritivores was species-specific. After exposure to a woodland floor or headwater streams, urban litter broke down faster than rural litter, while CO2 treatment did little to influence mass loss. The abundance, richness and diversity of terrestrial and aquatic invertebrates associated with leaf litter generally declined from 28 to 112 days in the field. Taxon richness and diversity were generally higher in elevated- than ambient-CO2 leaf litter through time, while urban leaf litter had greater diversity than rural litter after 112 days only. Abundance was greater in the circumneutral than the acid stream. Aside from leaf litter, small woody debris was also affected by CO2 treatment: elevated-CO2 twigs had a greater concentration of nitrogen and lignin, and broke down faster than ambient-CO2 twigs on a woodland floor and in headwater streams. This work highlights the complexity of invertebrate- and ecosystem-scale responses to the effects of multiple environmental stressors, with implications for nutrient cycling and food webs. Urban pollution may have a greater influence on litter chemical composition than CO2 treatment, while effects of growth condition may be more important than stream acidity in influencing decay and invertebrate communities.
APA, Harvard, Vancouver, ISO, and other styles
9

Syrowicz, Diego A. (Syrowicz Gajnaj Diego Ariel) 1976. "Decomposition analysis of a deterministic, multi-part-type, multiple-failure-mode production line." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80128.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
Includes bibliographical references (leaves 103-105).
by Diego A. Syrowicz.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
10

Araki, Sho. "Orthogonal transformation based algorithms for singular value decomposition." Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263784.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Keller, Brian. "Models and Methods for Multiple Resource Constrained Job Scheduling under Uncertainty." Diss., The University of Arizona, 2009. http://hdl.handle.net/10150/193630.

Full text
Abstract:
We consider a scheduling problem where each job requires multiple classes of resources, which we refer to as the multiple resource constrained scheduling problem (MRCSP). Potential applications include team scheduling problems that arise in service industries such as consulting, and operating room scheduling. We focus on two general cases of the problem. The first case considers uncertainty in processing times, due dates, and resource availability and consumption, which we denote as the stochastic MRCSP with uncertain parameters (SMRCSP-U). The second case considers uncertainty in the number of jobs to schedule, which arises in consulting and defense contracting when companies bid on future contracts but may or may not win the bid. We call this problem the stochastic MRCSP with job bidding (SMRCSP-JB). We first provide formulations of each problem under the framework of two-stage stochastic programming with recourse. We then develop solution methodologies for both problems. For the SMRCSP-U, we develop an exact solution method based on the L-shaped method for problems with a moderate number of scenarios. Several algorithmic enhancements are added to improve efficiency. Then, we embed the L-shaped method within a sampling-based solution method for problems with a large number of scenarios. We modify a sequential sampling procedure to allow for approximate solution of integer programs and prove the desired properties. The sampling-based method is applicable to two-stage stochastic integer programs with integer first-stage variables. Finally, we compare the solution methodologies on a set of test problems. For SMRCSP-JB, we utilize the disjunctive decomposition (D2) algorithm for stochastic integer programs with mixed-binary subproblems. We develop several enhancements to the D2 algorithm. First, we explore the use of a cut generation problem restricted to a subspace of the variables, which yields significant computational savings. Then, we examine generating alternative disjunctive cuts based on the generalized upper bound (GUB) constraints that appear in the second stage of the SMRCSP-JB. We establish convergence of all D2 variants and present computational results on a set of instances of SMRCSP-JB.
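For readers unfamiliar with the framework named here, the generic two-stage stochastic program with recourse has the standard form below (generic notation, not the dissertation's exact model): x are first-stage scheduling decisions, y are recourse decisions taken after scenario omega is revealed, and the L-shaped method replaces the expectation with iteratively refined optimality cuts.

```latex
% Generic two-stage stochastic program with recourse (standard form).
\begin{aligned}
\min_{x \in X}\;& c^{\top}x \;+\; \mathbb{E}_{\omega}\!\left[\,Q(x,\omega)\,\right] \\
\text{where}\quad Q(x,\omega) \;=\; \min_{y \ge 0}\;& q(\omega)^{\top}y
\quad \text{s.t.}\quad W\,y \;\ge\; h(\omega) - T(\omega)\,x .
\end{aligned}
```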
APA, Harvard, Vancouver, ISO, and other styles
12

Iseric, Hamza. "BISER: fast characterization of segmental duplication structure in multiple genome assemblies." Thesis, Schloss Dagstuhl -- Leibniz-Zentrum für Informatik, 2021. http://hdl.handle.net/1828/13343.

Full text
Abstract:
The increasing availability of high-quality genome assemblies has raised interest in the characterization of genomic architecture. Major architectural elements, such as common repeats and segmental duplications (SDs), increase genome plasticity, which stimulates further evolution by changing the genomic structure and inventing new genes. Optimal computation of SDs within a genome requires quadratic-time local alignment algorithms that are impractical due to the size of most genomes. Additionally, to perform evolutionary analysis, one needs to characterize SDs in multiple genomes and find relations between those SDs and unique (non-duplicated) segments in other genomes. A naïve approach consisting of multiple sequence alignment would make the optimal solution to this problem even more impractical. Thus there is a need for fast and accurate algorithms to characterize the SD structure in multiple genome assemblies, to better understand the evolutionary forces that shaped the genomes of today. Here we introduce a new approach, BISER, to quickly detect SDs in multiple genomes and identify elementary SDs and core duplicons that drive the formation of such SDs. BISER improves on earlier tools by (i) scaling the detection of SDs with low homology (75%) to multiple genomes while introducing further 10–34× speed-ups over existing tools, and by (ii) characterizing elementary SDs and detecting core duplicons to help trace the evolutionary history of duplications as far back as 300 million years.
Graduate
APA, Harvard, Vancouver, ISO, and other styles
13

Maier, Konradin, and Volker Stix. "A Semi-Automated Approach for Structuring Multi Criteria Decision Problems." Elsevier, 2013. http://dx.doi.org/10.1016/j.ejor.2012.10.018.

Full text
Abstract:
This article seeks to enhance multi criteria decision making by providing a scientific approach for decomposing and structuring decision problems. We propose a process, based on concept mapping, which integrates group creativity techniques, card sorting procedures, quantitative data analysis and algorithmic automation to construct meaningful and complete hierarchies of criteria. The algorithmic aspect is covered by a newly proposed recursive cluster algorithm, which automatically generates hierarchies from card sorting data. Based on comparisons with another basic algorithm, and on engineered and real-case empirical test data, we validate that our process efficiently produces reasonable hierarchies of descriptive elements like goal- or problem-criteria. (authors' abstract)
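A rough sketch of the generic idea, deriving a hierarchy of criteria from card-sorting co-occurrence data, using off-the-shelf agglomerative clustering (the paper's own recursive cluster algorithm differs; the criteria names and co-occurrence values here are invented):

```python
# Turning card-sorting data into a criteria hierarchy: co-occurrence
# (fraction of participants sorting two criteria together) becomes a
# similarity, which standard hierarchical clustering converts to a tree.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

criteria = ["cost", "risk", "usability", "speed", "security"]
co = np.array([[1.0, 0.7, 0.1, 0.2, 0.6],
               [0.7, 1.0, 0.2, 0.1, 0.8],
               [0.1, 0.2, 1.0, 0.7, 0.2],
               [0.2, 0.1, 0.7, 1.0, 0.1],
               [0.6, 0.8, 0.2, 0.1, 1.0]])
dist = squareform(1.0 - co, checks=False)   # similarity -> condensed distance
tree = linkage(dist, method="average")      # agglomerative hierarchy
print(dendrogram(tree, labels=criteria, no_plot=True)["ivl"])
```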
APA, Harvard, Vancouver, ISO, and other styles
14

McKee, Alex Clive Seymoore. "Analytical solutions of orientation aggregation models, multiple solutions and path following with the Adomian decomposition method." Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/7349.

Full text
Abstract:
In this work we apply the Adomian decomposition method to an orientation aggregation problem modelling the time distribution of filaments. We find analytical solutions under certain specific criteria and programmatically implement the Adomian method to two variants of the orientation aggregation model. We extend the utility of the Adomian decomposition method beyond its original capability to enable it to converge to more than one solution of a nonlinear problem and further to be used as a corrector in path following bifurcation problems.
APA, Harvard, Vancouver, ISO, and other styles
15

Metawe, Saad Abdel-Hamid. "The Prediction of Industrial Bond Rating Changes: a Multiple Discriminant Model Versus a Statistical Decomposition Model." Thesis, North Texas State University, 1985. https://digital.library.unt.edu/ark:/67531/metadc332370/.

Full text
Abstract:
The purpose of this study is to investigate the usefulness of statistical decomposition measures in the prediction of industrial bond rating changes. Further, the predictive ability of decomposition measures is compared with multiple discriminant analysis on the same sample. The problem of this study is twofold. It stems in general from the statistical problems associated with current techniques employed in the study of bond ratings and in particular from the lack of attention to the study of bond rating changes. Two main hypotheses are tested in this study. The first is that bond rating changes can be predicted through the use of financial statement data. The second is that decomposition analysis can achieve the same performance as multiple discriminant analysis in duplicating and predicting industrial bond rating changes. To explain and predict industrial bond rating changes, statistical decomposition measures were computed for each company in the sample. Based on these decomposition measures, the two types of analyses performed were (a) a univariate analysis where each decomposition measure was compared with an industry average decomposition measure, and (b) a multivariate analysis where decomposition measures were used as independent variables in a linear probability model. In addition to statistical decomposition analysis, multiple discriminant analysis was used in duplicating and predicting bond rating changes. Finally, a comparison was made between the predictive abilities of decomposition analysis and discriminant analysis.
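Statistical decomposition measures of the kind used in financial-statement decomposition analysis are typically information-theoretic. A minimal sketch of one generic form, the relative entropy between this period's statement composition and last period's (illustrative only, not necessarily the thesis's exact definition):

```python
# Generic decomposition measure: 0 iff the statement's composition is
# unchanged; grows as proportions shift between periods.
import numpy as np

def decomposition_measure(q, p):
    q = np.asarray(q, float); p = np.asarray(p, float)
    q, p = q / q.sum(), p / p.sum()          # balance-sheet proportions
    return float(np.sum(q * np.log(q / p)))  # relative entropy (nats)

# Example: asset-side proportions shifting between two years.
print(decomposition_measure([0.30, 0.50, 0.20], [0.25, 0.55, 0.20]))
```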
APA, Harvard, Vancouver, ISO, and other styles
16

Liu, Xiaohua. "Anion-Peptide Adduct Formation and Decomposition As Studied by Fourier Transform Ion Cyclotron Resonance (FT-ICR) Mass Spectrometry." ScholarWorks@UNO, 2013. http://scholarworks.uno.edu/td/1748.

Full text
Abstract:
A new “best match” model has been developed to account for adduct formation on multiply charged peptides observed in negative ion electrospray mass spectrometry. To obtain a stable adduct, the model necessitates an approximate matching of the apparent gas-phase basicity (GBapp) of a given proton-bearing site on the peptide with the gas-phase basicity (GB) of the anion attaching at that site. Evidence supporting the model is derived from the fact that singly charged adducts were only observed for lower GB anions: HSO4-, I-, CF3COO-. Ions that have medium GBs (NO3-, Br-, H2PO4-) only form adducts having -2 charge states, whereas Cl- (higher GB) can form adducts having -3 charge states. Hydrogen bonds are the main interactions pertinent to the “best match” model; however, ion-ion interactions formed between peptides ([Glu]Fibrinopeptide B, Angiotensin I or [Asn1,Val5]-Angiotensin II) and low GB anions (ClO4- or HSO4-) have been established by CID-MS/MS. Evidence for ion-ion interactions comes especially from product ions formed during the first dissociation step, where, in addition to the expected loss of the anion or neutral acid, other product ions that require covalent bond cleavage (i.e., H2O or NH3 loss) are also observed. In this study, the “best match” model is further supported by the decomposition behavior of adducts formed when Na+/H+ exchange has occurred on peptides. Na+/H+ exchanges were found to occur preferentially at the more acidic sites. Without any Na+/H+ exchange, F- and CH3COO- can hardly form observable adducts with [Glu]Fibrinopeptide B. However, after multiple Na+/H+ exchanges, F- and CH3COO- do form stable adducts. This phenomenon can be rationalized by considering that Na+ cations serve to “block” the highly acidic sites, thereby forcing them to remain overall neutral. This leaves the less acidic protons available to match with higher GB anions. According to the “best match” model, high GB anions will match with high GBapp sites on the peptide, whereas low GB anions will match with low GBapp peptide sites. High charge states readily augment GBapp of the peptide (through-space effect). Na+/H+ exchanges substantially decrease GBapp by neutralizing charged sites, while slightly increasing intrinsic GBs by the inductive effect.
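A toy rendering of the “best match” pairing logic: each anion attaches to the site whose apparent basicity is closest to its own. The GB numbers below are illustrative placeholders, not measured values.

```python
# Pair anions to peptide sites by minimizing total |GB - GBapp| mismatch.
import numpy as np
from scipy.optimize import linear_sum_assignment

gb_anion = np.array([1280.0, 1330.0, 1390.0])  # hypothetical anion GBs (kJ/mol)
gb_site = np.array([1275.0, 1340.0, 1385.0])   # hypothetical site GBapp values

cost = np.abs(gb_anion[:, None] - gb_site[None, :])  # mismatch penalty
rows, cols = linear_sum_assignment(cost)             # best global matching
for a, s in zip(rows, cols):
    print(f"anion {a} -> site {s} (|dGB| = {cost[a, s]:.0f} kJ/mol)")
```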
APA, Harvard, Vancouver, ISO, and other styles
17

Rasheed, Sarbast. "A Multiclassifier Approach to Motor Unit Potential Classification for EMG Signal Decomposition." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/934.

Full text
Abstract:
EMG signal decomposition is the process of resolving a composite EMG signal into its constituent motor unit potential trains (classes), and it can be configured as a classification problem. An EMG signal detected by the tip of an inserted needle electrode is the superposition of the individual electrical contributions of the different motor units that are active during a muscle contraction, plus background interference.
This thesis addresses the process of EMG signal decomposition by developing an interactive classification system, which uses multiple classifier fusion techniques in order to achieve improved classification performance. The developed system combines heterogeneous sets of base classifier ensembles of different kinds and employs either a one-level classifier fusion scheme or a hybrid classifier fusion approach.
The hybrid classifier fusion approach is applied as a two-stage combination process that uses a new aggregator module consisting of two combiners, used in a complementary manner: the first at the abstract level of classifier fusion and the other at the measurement level. Both combiners may be data independent, or the first combiner may be data independent and the second data dependent. For the purpose of experimentation, we used the majority voting scheme as the first combiner, while as the second combiner we used either one of the fixed combination rules, behaving as a data-independent combiner, or the fuzzy integral with the lambda-fuzzy measure, as an implicit data-dependent combiner.
Once the set of motor unit potential trains are generated by the classifier fusion system, the firing pattern consistency statistics for each train are calculated to detect classification errors in an adaptive fashion. This firing pattern analysis allows the algorithm to modify the threshold of assertion required for assignment of a motor unit potential classification individually for each train based on an expectation of erroneous assignments.
The classifier ensembles consist of a set of different versions of the Certainty classifier, a set of classifiers based on the nearest neighbour decision rule: the fuzzy k-NN and the adaptive fuzzy k-NN classifiers, and a set of classifiers that use a correlation measure as an estimation of the degree of similarity between a pattern and a class template: the matched template filter classifiers and its adaptive counterpart. The base classifiers, besides being of different kinds, utilize different types of features and their performances were investigated using both real and simulated EMG signals of different complexities. The feature sets extracted include time-domain data, first- and second-order discrete derivative data, and wavelet-domain data.
Following the so-called overproduce-and-choose strategy for classifier ensemble combination, the developed system allows the construction of a large set of candidate base classifiers and then chooses, from the base classifier pool, subsets of a specified number of classifiers to form candidate classifier ensembles. The system then selects the classifier ensemble having the maximum degree of agreement by exploiting a diversity measure for designing classifier teams. The kappa statistic is used as the diversity measure to estimate the level of agreement between the base classifier outputs, i.e., to measure the degree of decision similarity between the base classifiers. This mechanism of choosing the team's classifiers, based on assessing the classifier agreement throughout all the trains and the unassigned category, is applied during the one-level classifier fusion scheme and the first combiner in the hybrid classifier fusion approach. For the second combiner in the hybrid classifier fusion approach, we also choose team classifiers based on kappa statistics, but by assessing the classifiers' agreement only across the unassigned category and choosing those base classifiers having the minimum agreement.
Performance of the developed classifier fusion system, in both of its variants, i.e., the one-level scheme and the hybrid approach, was evaluated using synthetic simulated signals of known properties as well as real signals, and then compared with the performance of the constituent base classifiers. Across the EMG signal data sets used, the hybrid approach had better average classification performance overall, especially in terms of reducing the number of classification errors.
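Two of the ingredients named above are easy to sketch: Cohen's kappa as the pairwise agreement/diversity measure between base-classifier outputs, and abstract-level majority voting as the first-stage combiner. The labels below are toy data, not EMG classifications.

```python
# Kappa-based agreement between classifiers, plus majority-vote fusion.
import numpy as np

def cohens_kappa(a, b):
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    po = np.mean(a == b)                                          # observed
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in labels)   # by chance
    return (po - pe) / (1.0 - pe)

def majority_vote(predictions):
    """predictions: (n_classifiers, n_samples) array of class labels."""
    preds = np.asarray(predictions)
    return np.array([np.bincount(col).argmax() for col in preds.T])

c1 = [0, 1, 1, 2, 0, 2]; c2 = [0, 1, 2, 2, 0, 2]; c3 = [1, 1, 1, 2, 0, 0]
print("kappa(c1,c2) =", round(cohens_kappa(c1, c2), 3))   # -> 0.75
print("fused:", majority_vote([c1, c2, c3]))
```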
APA, Harvard, Vancouver, ISO, and other styles
18

Kabeya, Kazuhisa III. "Structural Health Monitoring Using Multiple Piezoelectric Sensors and Actuators." Thesis, Virginia Tech, 1998. http://hdl.handle.net/10919/36709.

Full text
Abstract:
A piezoelectric impedance-based structural health monitoring technique was developed at the Center for Intelligent Material Systems and Structures. It has been successfully implemented on several complex structures to detect incipient-type damage such as small cracks or loose connections. However, there are still some problems to be solved before full scale development and commercialization can take place. These include: i) the damage assessment is influenced by ambient temperature change; ii) the sensing area is small; and iii) the ability to identify the damage location is poor. The objective of this research is to solve these problems in order to apply the impedance-based structural health monitoring technique to real structures. First, an empirical compensation technique to minimize the temperature effect on the damage assessment has been developed. The compensation technique utilizes the fact that the temperature change causes vertical and horizontal shifts of the signature pattern in the impedance versus frequency plot, while damage causes somewhat irregular changes. Second, a new impedance-based technique that uses multiple piezoelectric sensor-actuators has been developed which extends the sensing area. The new technique relies on the measurement of electrical transfer admittance, which gives us mutual information between multiple piezoelectric sensor-actuators. We found that this technique increases the sensing region by at least an order of magnitude. Third, a time domain technique to identify the damage location has been proposed. This technique also uses multiple piezoelectric sensors and actuators. The basic idea utilizes the pulse-echo method often used in ultrasonic testing, together with wavelet decomposition to extract traveling pulses from a noisy signal. The results for a one-dimensional structure show that we can determine the damage location to within a spatial resolution determined by the temporal resolution of the data acquisition. The validity of all these techniques has been verified by proof-of-concept experiments. These techniques help bring conventional impedance-based structural health monitoring closer to full scale development and commercialization.
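A common damage metric in impedance-based SHM is the root-mean-square deviation (RMSD) between a baseline and a current impedance signature. The sketch below pairs it with a crude vertical-shift correction in the spirit of the temperature compensation described above (synthetic signatures; the thesis's empirical compensation also handles horizontal frequency shifts):

```python
# RMSD damage metric with a naive temperature (vertical offset) correction.
import numpy as np

def rmsd(baseline, current):
    b, c = np.asarray(baseline), np.asarray(current)
    return float(np.sqrt(np.sum((c - b) ** 2) / np.sum(b ** 2)))

def compensate_vertical_shift(baseline, current):
    """Remove the mean offset a uniform temperature change superimposes."""
    return current - (np.mean(current) - np.mean(baseline))

f = np.linspace(50e3, 60e3, 200)                # frequency band (Hz)
baseline = np.sin(f / 1e3) + 2.0                # mock impedance signature
shifted = baseline + 0.3                        # temperature-like offset
damaged = baseline + 0.05 * np.random.default_rng(1).normal(size=f.size)

print("raw RMSD (temp only):", round(rmsd(baseline, shifted), 4))
print("compensated:", round(rmsd(baseline, compensate_vertical_shift(baseline, shifted)), 4))
print("damage:", round(rmsd(baseline, damaged), 4))
```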
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
19

Mendoza, Michael A. "Water Fat Separation with Multiple-Acquisition Balanced Steady-State Free Precession MRI." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/4304.

Full text
Abstract:
Magnetic resonance imaging (MRI) is an important medical imaging technique for visualizing soft tissue structures in the body. It has the advantages of being noninvasive and, unlike x-ray, does not rely on ionizing radiation for imaging. In traditional hydrogen-based MRI, the strongest measured signals are generated from the hydrogen nuclei contained in water and fat molecules. Reliable and uniform water fat separation can be used to improve medical diagnosis. In many applications the water component is the primary signal of interest, while the fat component represents a signal which can obscure the underlying pathology or other features of interest. In other applications the fat signal is the signal of interest. There currently exist many techniques for water fat separation. Dixon reconstruction techniques take multiple images acquired at select echo times with specific phase properties; linear combinations of these images produce separate water and fat images. In MR imaging, images with high signal-to-noise ratio (SNR) that can be generated in a short time are desired. Balanced steady-state free precession (bSSFP) MRI is a technique capable of producing images with high SNR in a short imaging time, but it suffers from signal voids or banding artifacts due to magnetic field inhomogeneity and susceptibility variations. These signal voids degrade image quality. Several methods have been developed to remove these banding effects; the simplest combine images across multiple bSSFP image acquisitions. This thesis describes a water fat separation technique I developed which combines the advantages of bSSFP with Dixon reconstruction in order to produce robust water fat decomposition with high SNR in a short imaging time, while simultaneously reducing the banding artifacts which traditionally degrade image quality. The algorithm utilizes four phase-cycled bSSFP acquisitions at specific echo times. Phase-sensitive post-processing and a field map are used to prepare the data and reduce the effects of field inhomogeneities. Dixon reconstruction is then used to generate separate water and fat images.
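The classic two-point Dixon recombination that such methods build on is a one-liner (idealized: perfect in-phase/opposed-phase images and no field error; those non-idealities are precisely what the four phase-cycled acquisitions and field map above address):

```python
# Two-point Dixon: water and fat from in-phase and opposed-phase images.
import numpy as np

def dixon_two_point(in_phase, opposed_phase):
    water = 0.5 * (in_phase + opposed_phase)
    fat = 0.5 * (in_phase - opposed_phase)
    return water, fat

W, F = 0.8, 0.3                    # true water/fat signal at one pixel
ip, op = W + F, W - F              # ideal echo-time combinations
water, fat = dixon_two_point(ip, op)
print(water, fat)                  # -> 0.8 0.3
```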
APA, Harvard, Vancouver, ISO, and other styles
20

Chang, Chien Kuang Che. "Automated lung screening system of multiple pathological targets in multislice CT." Thesis, Evry, Institut national des télécommunications, 2011. http://www.theses.fr/2011TELE0021/document.

Full text
Abstract:
This research aims at developing a computer-aided diagnosis (CAD) system for fully automatic detection and classification of pathological lung parenchyma patterns in idiopathic interstitial pneumonias (IIP) and emphysema using multi-detector computed tomography (MDCT). The proposed CAD system is based on 3-D mathematical morphology, texture and fuzzy logic analysis, and can be divided into four stages: (1) a multi-resolution decomposition scheme based on a 3-D morphological filter was exploited to discriminate the lung region patterns at different analysis scales. (2) An additional spatial lung partitioning based on the lung tissue texture was introduced to reinforce the spatial separation between patterns extracted at the same resolution level in the decomposition pyramid. Then, (3) a hierarchic tree structure was exploited to describe the relationship between patterns at different resolution levels, and for each pattern, six fuzzy membership functions were established for assigning a probability of association with a normal tissue or a pathological target. Finally, (4) a decision step exploiting the fuzzy-logic assignments selects the target class of each lung pattern among the following categories: normal (N), emphysema (EM), fibrosis/honeycombing (FHC), and ground glass (GDG). The experimental validation of the developed CAD system allowed defining some specifications related to the recommended values for the number of resolution levels, NRL = 12, and the CT acquisition protocol, including the “LUNG”/“BONPLUS” reconstruction kernel and thin collimations (1.25 mm or less). It also stresses the difficulty of quantitatively assessing the performance of the proposed approach in the absence of a ground truth, notably for volumetric assessment, wide selection of pathology borders, and the distinction between fibrosis and high-density (vascular) structures.
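Stage (1) can be illustrated with a generic morphological multiresolution decomposition: successive grey-level openings with growing structuring elements split an image into per-scale residues. The sketch below is a 2-D toy (the CAD system operates on 3-D CT volumes, and its filter design is its own):

```python
# Granulometry-style morphological pyramid: each level keeps the structures
# removed by the next larger opening; the decomposition sums back exactly.
import numpy as np
from scipy import ndimage

def morphological_pyramid(img, n_levels=4):
    residues, current = [], img.astype(float)
    for level in range(1, n_levels + 1):
        opened = ndimage.grey_opening(current, size=2 * level + 1)
        residues.append(current - opened)   # structures at this scale
        current = opened
    residues.append(current)                # coarse residual
    return residues

img = np.random.default_rng(0).random((64, 64))
levels = morphological_pyramid(img)
print([f"{r.std():.3f}" for r in levels])
assert np.allclose(sum(levels), img)        # decomposition is exact
```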
APA, Harvard, Vancouver, ISO, and other styles
21

RACIOPPI, STEFANO. "CHEMICAL BONDING IN METAL-ORGANIC SYSTEMS: NATURE, STRUCTURES AND PROPERTIES." Doctoral thesis, Università degli Studi di Milano, 2019. http://hdl.handle.net/2434/606271.

Full text
Abstract:
The main purpose of my thesis is the application of theoretical and experimental methods to the study of the nature of the chemical bond and its effect on structures and properties in organometallic systems, like metal carbonyl clusters and Coordination Polymers (CPs) featuring, in some of the cases under study, intrinsic porosity (in the following, PCP for Porous Coordination Polymers or MOFs for Metal-Organic Frameworks). Concerning metal clusters, we worked on high-nuclearity metal carbonyl clusters and, particularly, on those featuring semi-interstitial atoms. The chemical bonding and the related properties in this peculiar class of molecules are still a matter of discussion in the scientific community. Concerning the class of Metal-Organic Frameworks, we focused our attention on azolate-based ligands as building blocks for the synthesis of MOFs, looking at their possible future application as ultra-low dielectric constant materials in electronic devices. Finally, we investigated the structural behavior of Coordination Polymers at non-ambient conditions (high pressure, in the order of 0-8 GPa), to induce new interactions and behaviors such as electric conductivity. This research required the application of a range of theoretical tools, assisted by accurate single-crystal X-ray diffraction experiments under standard and non-standard conditions (low temperature and high pressure). Moreover, a protocol for comparing different energy decomposition methods was developed and successfully applied to investigate the bonding nature in simple and complex systems.
APA, Harvard, Vancouver, ISO, and other styles
22

Okada, Daigo. "Decomposition of a set of distributions in extended exponential family form for distinguishing multiple oligo-dimensional marker expression profiles of single-cell populations and visualizing their dynamics." Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263569.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Matos, Luís Miguel de Oliveira. "Lossless compression algorithms for microarray images and whole genome alignments." Doctoral thesis, Universidade de Aveiro, 2015. http://hdl.handle.net/10773/14273.

Full text
Abstract:
Doctorate in Informatics
Nowadays, in the 21st century, the never-ending expansion of information is a major global concern. The pace at which storage and communication resources are evolving is not fast enough to compensate for this tendency. In order to overcome this issue, sophisticated and efficient compression tools are required. The goal of compression is to represent information with as few bits as possible. There are two kinds of compression, lossy and lossless. In lossless compression, information loss is not tolerated, so the decoded information is exactly the same as the encoded one. On the other hand, in lossy compression some loss is acceptable. In this work we focused on lossless methods. The goal of this thesis was to create lossless compression tools that can be used on two types of data. The first type is known in the literature as microarray images. These images have 16 bits per pixel and a high spatial resolution. The other data type is commonly called Whole Genome Alignments (WGA), particularly as applied to MAF files. Regarding the microarray images, we improved existing microarray-specific methods by using some pre-processing techniques (segmentation and bitplane reduction). Moreover, we also developed a compression method based on pixel value estimates and a mixture of finite-context models. Furthermore, an approach based on binary-tree decomposition was also considered. Two compression tools were developed to compress MAF files. The first one is based on a mixture of finite-context models and arithmetic coding, where only the DNA bases and alignment gaps were considered. The second tool, designated MAFCO, is a complete compression tool that can handle all the information that can be found in MAF files. MAFCO relies on several finite-context models and allows parallel compression/decompression of MAF files.
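A minimal order-k finite-context model, the building block mixed in both compressors above (a sketch with add-alpha smoothing, not the thesis's implementation; an arithmetic coder would then spend about -log2 P bits per symbol):

```python
# Adaptive order-k finite-context model over a DNA-plus-gap alphabet.
from collections import defaultdict
from math import log2

class FiniteContextModel:
    def __init__(self, k, alphabet="ACGT-", alpha=1.0):
        self.k, self.alphabet, self.alpha = k, alphabet, alpha
        self.counts = defaultdict(lambda: defaultdict(int))

    def prob(self, context, symbol):
        c = self.counts[context]
        total = sum(c.values()) + self.alpha * len(self.alphabet)
        return (c[symbol] + self.alpha) / total

    def encode_cost(self, seq):
        """Total ideal code length in bits, updating counts adaptively."""
        bits = 0.0
        for i in range(self.k, len(seq)):
            ctx, sym = seq[i - self.k:i], seq[i]
            bits -= log2(self.prob(ctx, sym))
            self.counts[ctx][sym] += 1
        return bits

fcm = FiniteContextModel(k=2)
print(f"{fcm.encode_cost('ACGTACGTACGTACGT'):.1f} bits")
```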
APA, Harvard, Vancouver, ISO, and other styles
24

Odedo, Victor. "High resolution time reversal (TR) imaging based on spatio-temporal windows." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/high-resolution-time-reversal-tr-imaging-based-on-spatiotemporal-windows(f0589f73-901f-4de2-9886-7045b7f6cfd4).html.

Full text
Abstract:
Through-the-wall Imaging (TWI) is crucial for various applications such as law enforcement, rescue missions and defense. TWI methods aim to provide detailed information about spaces that cannot be seen directly. Current state-of-the-art TWI systems utilise ultra-wideband (UWB) signals to simultaneously achieve wall penetration and high resolution. These TWI systems transmit signals and mathematically back-project the received reflections to image the scenario of interest. However, these systems are diffraction-limited and encounter problems due to multipath signals in the presence of multiple scatterers. Time reversal (TR) methods have become popular for remote sensing because they can take advantage of multipath signals to achieve superresolution (resolution that beats the diffraction limit). The Decomposition Of the Time-Reversal Operator (DORT in its French acronym) and MUltiple SIgnal Classification (MUSIC) methods are both TR techniques which involve taking the Singular Value Decomposition (SVD) of the Multistatic Data Matrix (MDM), which contains the signals received from the target(s) to be located. The DORT and MUSIC imaging methods have generated a lot of interest due to their robustness and ability to locate multiple targets. However, these TR-based methods encounter problems when the targets are behind an obstruction, particularly when the properties of the obstruction are unknown, as is often the case in TWI applications. This dissertation introduces a novel total sub-MDM algorithm that uses the highly acclaimed MUSIC method to image targets hidden behind an obstruction and achieve superresolution. The algorithm utilises spatio-temporal windows to divide the full MDM into sub-MDMs. The summation of all images obtained from each sub-MDM gives a clearer image of a scenario than we can obtain using the full MDM. Furthermore, we propose a total sub-differential-MDM algorithm that uses the MUSIC method to obtain images of moving targets that are hidden behind an obstructing material.
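The core TR-MUSIC step is compact: take the SVD of the MDM and scan a pseudospectrum that peaks where the steering vector falls in the signal subspace. A toy free-space sketch (no wall model; the array geometry, wavelength, target positions and noise level are all invented for illustration):

```python
# TR-MUSIC on a synthetic multistatic data matrix (MDM).
import numpy as np

rng = np.random.default_rng(0)
k = 2 * np.pi / 0.1                          # wavenumber (0.1 m wavelength)
array_x = np.linspace(-0.5, 0.5, 8)          # 8 transceivers on a line
targets = np.array([0.12, -0.31])            # true target x-positions (m)

def green(xa, xt):                           # free-space response at depth 1 m
    r = np.hypot(xa - xt, 1.0)
    return np.exp(1j * k * r) / r

G = np.stack([green(array_x, t) for t in targets], axis=1)   # 8 x 2
K = G @ G.T + 1e-3 * rng.normal(size=(8, 8))                 # MDM + noise

U, s, _ = np.linalg.svd(K)
noise = U[:, len(targets):]                  # noise subspace

scan = np.linspace(-0.5, 0.5, 201)
pseudo = [1.0 / np.linalg.norm(noise.conj().T @ green(array_x, x)) ** 2
          for x in scan]
best = scan[int(np.argmax(pseudo))]
print(f"strongest peak near {best:.2f} m (true targets at -0.31 and 0.12)")
```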
APA, Harvard, Vancouver, ISO, and other styles
25

Sartori, Lauriana Rúbio. "Informação polarimétrica PALSAR/ALOS aplicada à discriminação de espécies e estimação de parâmetros morfológicos de macrófitas /." Presidente Prudente : [s.n.], 2011. http://hdl.handle.net/11449/100252.

Full text
Abstract:
Abstract: The purpose of this work was to evaluate the potential of fully polarimetric PALSAR data to discriminate and map macrophyte species in the Amazon floodplain, more specifically in the Monte Alegre Lake, in the state of Pará, Brazil. Fieldwork was carried out almost simultaneously to the radar acquisition. Three main species were found in the study area: Paspalum repens (PR), Hymenachne amplexicaulis (HA) and Paspalum elephantipes (PE). Macrophyte morphological variables were measured in the field and used to derive other variables, like the biomass. Attributes were calculated from the covariance matrix [C] derived from the SLC (single look complex) data. The polarimetric attributes were analyzed for the three species, and those capable of discriminating them were identified. The following classification approaches were applied: a rule-based classification, model-based classifications (Freeman-Durden and Cloude-Pottier), a statistical-based classification (supervised classification using the Wishart distance measure) and a hybrid classification (Wishart classifier with the input classes based on the H/a plane). Finally, the morphological variable "stem volume" was modeled using multiple regression. The findings suggest that the fully polarimetric image has potential for discriminating plant species, the main attributes being sigma-nought HH, sigma-nought HV and sigma-nought VV, the canopy structure index, the HH-VV polarimetric coherence, the helicity of the third scattering mechanism (τ), the orientation angle of the first scattering mechanism and the scattering type phase of the first mechanism; among the different classifications, only the supervised (Wishart) and the rule-based ones discriminated the species, with overall accuracies of 75.04% and 87.18%, respectively; the stem volume was modeled using the following attributes: biomass index, volumetric scattering ... (Complete abstract click electronic access below)
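The covariance-matrix attributes named in the abstract are simple per-pixel functions of [C]. A hedged sketch (illustrative formulas in the lexicographic basis over (HH, sqrt(2)·HV, VV); calibration constants omitted, values invented):

```python
# Backscatter coefficients and HH-VV coherence from a 3x3 PolSAR matrix [C].
import numpy as np

def polsar_attributes(C):
    sigma_hh = 10 * np.log10(np.real(C[0, 0]))        # sigma-nought HH (dB)
    sigma_hv = 10 * np.log10(np.real(C[1, 1]) / 2.0)  # sigma-nought HV (dB)
    sigma_vv = 10 * np.log10(np.real(C[2, 2]))        # sigma-nought VV (dB)
    coh_hhvv = np.abs(C[0, 2]) / np.sqrt(np.real(C[0, 0]) * np.real(C[2, 2]))
    return sigma_hh, sigma_hv, sigma_vv, coh_hhvv     # HH-VV coherence

C = np.array([[0.40, 0.02 + 0.01j, 0.18 + 0.05j],
              [0.02 - 0.01j, 0.08, 0.01 + 0.00j],
              [0.18 - 0.05j, 0.01 - 0.00j, 0.30]])
print(polsar_attributes(C))
```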
Advisor: Nilton Nobuhiro Imai
Co-advisor: José Cláudio Mura
Committee: Evlyn Marcia Leão de Moraes Novo
Committee: Thiago Sanna Freire Silva
Committee: João Roberto dos Santos
Committee: Vilma Mayumi Tachibana
Doctorate
APA, Harvard, Vancouver, ISO, and other styles
26

Silva, Maria Joseane Cruz da. "Imputação múltipla: comparação e eficiência em experimentos multiambientais." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-08082012-143901/.

Full text
Abstract:
In genotype-by-environment trials, missing values are common, owing to an insufficient quantity of genotypes for application, which complicates, for example, the recommendation of more productive genotypes, because most multivariate statistical techniques require a complete data matrix. Thus, methods that estimate the missing values from the available data, known as data imputation (single and multiple), are applied, taking into consideration the pattern and mechanism of the missing data. The goal of this study is to evaluate the efficiency of distribution-free multiple imputation (IMLD) (BERGAMO et al., 2008; BERGAMO, 2007), compared with the Markov chain Monte Carlo method of multiple imputation (IMMCMC), in imputing missing units present in genotype (25) × environment (7) interaction trials. The data come from a randomized block experiment with Eucalyptus grandis (LAVORANTI, 2003), from which percentages of observations (10%, 20%, 30%) were randomly withdrawn and subsequently imputed by the methods considered. The results obtained for each method show that the relative efficiency at all percentages remained above 90%, being lowest for environment (4) when imputed with IMLD. The general measure of accuracy increased with the amount of missing data when the missing values were imputed with IMMCMC, whereas for the IMLD method these values varied, being lowest at 20% random withdrawal. Among the results found, it is of utmost importance to consider the fact that the IMMCMC method relies on the assumption of normality, whereas the IMLD method has the advantage of imposing no restriction on the distribution of the data or on the mechanisms and patterns of missingness.
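Both IMMCMC and IMLD produce m completed data sets whose analyses must then be pooled; a minimal sketch of the standard pooling step (Rubin's rules) follows. The crude normal-draw imputation below is purely illustrative and is not the IMMCMC or IMLD procedure evaluated in the thesis.

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool m completed-data analyses by Rubin's rules."""
    m = len(estimates)
    q_bar = np.mean(estimates)               # pooled point estimate
    w = np.mean(variances)                   # within-imputation variance
    b = np.var(estimates, ddof=1)            # between-imputation variance
    return q_bar, w + (1 + 1 / m) * b        # estimate and total variance

rng = np.random.default_rng(0)
y = rng.normal(10, 2, size=40)
y[rng.choice(40, size=8, replace=False)] = np.nan    # 20% missing at random
obs = y[~np.isnan(y)]

estimates, variances = [], []
for _ in range(5):                           # m = 5 imputations
    filled = y.copy()
    filled[np.isnan(y)] = rng.normal(obs.mean(), obs.std(ddof=1),
                                     size=np.isnan(y).sum())
    estimates.append(filled.mean())
    variances.append(filled.var(ddof=1) / filled.size)

print(pool_rubin(estimates, variances))
```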
APA, Harvard, Vancouver, ISO, and other styles
27

Judd, Jason D. "Modeling and Analysis of a Feedstock Logistics Problem." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/26810.

Full text
Abstract:
Recently, there has been a surge in the research and application of "green energy" in the United States. This has been driven by the following three objectives: (1) to reduce the nation's reliance on foreign oil, (2) to mitigate emission of greenhouse gas, and (3) to create an economic stimulus within the United States. Switchgrass is the biomass of choice for the Southeastern United States. In this dissertation, we address a feedstock logistics problem associated with the delivery of switchgrass for conversion into biofuel. In order to satisfy the continual demand of biomass at a bioenergy plant, production fields within a 48-km radius of its location are assumed to be attracted into production. The bioenergy plant is expected to receive as many as 50-400 loads of biomass per day. As a result, an industrialized transportation system must be introduced as early as possible in order to remove bottlenecks and reduce the total system cost. Additionally, we assume locating multiple bioenergy plants within a given region for the production of biofuel. We develop mixed integer programming formulations for the feedstock logistics problem that we address and for some related problems, and we solve them either through the use of decomposition-based methods or directly through the use of CPLEX 12.1.0. The feedstock logistics problem that we address spans the entire system, from the growing of switchgrass to the transporting of bio-crude oil, a high energy density intermediate product, to a refinery for conversion into a final product. To facilitate understanding, we present the reader with a case study that includes a preliminary cost analysis of a real-life-based instance in order to provide the reader appropriate insights into the logistics system before applying optimization techniques for its solution. First, we consider the benefits of active versus passive ownership of the production fields. This is followed by a discussion on the selection of baler type, and then, a discussion of contracts between various business entities. The advantages of storing biomass at a satellite storage location (SSL) and interactions between the operations performed at the production field with those performed at the storage locations are then established. We also provide a detailed description of the operations performed at a SSL. Three potential equipment options are presented for transporting biomass from the SSLs to a utilization point, defined in this study as a Bio-crude Plant (BcP). The details of the entire logistics chain are presented in order to highlight the need for making decisions in view of the entire chain rather than basing them on its segments. We model the feedstock logistics problem as a combination of a 2-level facility location-allocation problem and a multiple traveling salesmen problem (mATSP). The 2-level facility location-allocation problem pertains to the allocation of production fields to SSLs and SSLs to one of the multiple bioenergy plants. The mATSP arises because of the need for scheduling unloading operations at the SSLs. To this end, we provide a detailed study of 13 formulations of the mATSP and their reformulations as ATSPs. First, we assume that the SSLs are always full, regardless of when they are scheduled to be unloaded. We then relax this assumption by providing precedence constraints on the availability of the SSLs. This precedence is defined in two different ways and is then effectively modeled utilizing all the formulations for the mATSP and ATSP. 
Given the location of a BcP for the conversion of biomass to bio-crude oil, we develop a feedstock logistics system that relies on the use of SSLs for temporary storage and loading of round bales. Three equipment systems are considered for handling biomass at the SSLs; they are either placed permanently or are mobile, and thereby travel from one SSL to another. We use a mathematical programming-based approach to determine SSLs and equipment routes in order to minimize the total cost incurred. The mathematical program is applied to a real-life production region in South-central Virginia (Gretna, VA), and it clearly reveals the benefits of using SSLs as a part of the logistics system. Finally, we provide a sensitivity analysis on the input parameters that we used. This analysis highlights the key cost factors in the model, and it emphasizes areas where the biggest gains can be achieved for further cost reduction. For a more general scenario, where multiple BcPs have to be located, we use a nested Benders decomposition-based method. First, we prove the validity of using this method. We then employ this method for the solution of a potential real-life instance. Moreover, we successfully solve problems that are more than an order of magnitude larger than those solved directly by CPLEX 12.1.0. Finally, we develop a Benders decomposition-based method for the solution of a problem that gives rise to a binary sub-problem. The difficulty arises because of the sub-problem being an integer program for which the dual solution is not readily available. Our approach consists of first solving the integer sub-problem, and then generating the convex hull at the optimal integer point. We illustrate this approach for an instance for which such a convex hull is readily available, but otherwise, it is too expensive to generate for the entire problem. This special instance is the solution of the mATSP (using Benders decomposition) for which each of the sub-problems is an ATSP. The convex hull for the ATSP is given by the Dantzig, Fulkerson, and Johnson constraints. These constraints at a given integer solution point are only polynomial in number. With the inclusion of these constraints, a linear programming solution and its corresponding dual solution can now be obtained at the optimal integer points. We have proven the validity of using this method. However, the success of our algorithm is limited because of the large number of integer problems that must be solved at every iteration. While the algorithm is theoretically promising, the advantages of the decomposition do not seem to outweigh the additional cost resulting from solving a larger number of decomposed problems.
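Since the dissertation leans heavily on Benders decomposition, a compact sketch of the basic single-level cutting-plane loop may help; the toy data, the continuous master variables, and the complete-recourse assumption are all illustrative simplifications, not the nested, integer-subproblem variants developed above.

```python
import numpy as np
from scipy.optimize import linprog

# toy instance of  min c'y + min_x {d'x : A x >= b - B y, x >= 0},  y in [0, 5]^2
c = np.array([1.0, 1.5]); d = np.array([5.0, 1.0, 1.0])
A = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([4.0, 2.0, 1.0])

cuts = []                                   # optimality cuts: theta >= u'(b - B y)
for it in range(20):
    # master LP over v = [y1, y2, theta] with the cuts accumulated so far
    A_ub = [np.concatenate([-(B.T @ u), [-1.0]]) for u in cuts]
    b_ub = [-(u @ b) for u in cuts]
    res = linprog(np.concatenate([c, [1.0]]),
                  A_ub=np.array(A_ub) if cuts else None,
                  b_ub=np.array(b_ub) if cuts else None,
                  bounds=[(0, 5), (0, 5), (-1e6, None)])
    y, theta = res.x[:2], res.x[2]
    # subproblem dual:  max u'(b - B y)  s.t.  A' u <= d,  u >= 0
    sub = linprog(-(b - B @ y), A_ub=A.T, b_ub=d, bounds=(0, None))
    u, q = sub.x, -sub.fun
    if q <= theta + 1e-8:                   # master's bound matches the true recourse
        break
    cuts.append(u)                          # otherwise add a new optimality cut

print("y* =", y, " total cost =", c @ y + q)
```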
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
28

Neto, Raul Liberato de Lacerda. "Receptores MIMO baseados em algoritmo de decomposição PARAFAC." Universidade Federal do Ceará, 2005. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=2032.

Full text
Abstract:
Alβan Programme of high-level scholarships for Latin America
Este trabalho tem como objetivo a aplicação da análise tensorial para o tratamento de sinais no domínio de comunicações sem fio. Utilizando a decomposição tensorial conhecida como PARAFAC (decomposição por fatores paralelos), um receptor é modelado para um sistema de comunicação sem fio que utiliza uma estrutura MIMO na qual cada antena transmissora possui um código de espalhamento particular, baseado na técnica de múltiplo acesso por divisão de código (CDMA). Nesse trabalho são analisadas duas estruturas receptoras baseadas na decomposição PARAFAC. A primeira é baseada no conhecimento da matriz de códigos de espalhamento e a segunda é baseada no conhecimento da matriz de sequência de treinamento. Duas famílias de códigos são consideradas: códigos de Hadamard-Walsh e códigos de Hadamard-Walsh truncados. Como resultado, foi observado que os receptores propostos apresentaram rápida convergência e foram capazes de eliminar todas as ambiguidades, inclusive aquelas que são intrínsecas à decomposição PARAFAC, que foram observadas em outros trabalhos. Resultados de simulação são apresentados para comparar o desempenho das duas estruturas receptoras em diversas configurações do sistema de comunicação, revelando o impacto dos parâmetros do sistema (número de antenas transmissoras, número de antenas receptoras, tamanho do código e relação sinal-ruído).
This work deals with the application of multi-way analysis to the context of signal processing for wireless communications. A tensor decomposition known as PARAFAC (PARAllel FACtors) is considered in the design of a multiple-input multiple-output (MIMO) receiver for a wireless communication system with spread spectrum codes. We propose two supervised PARAFAC-based receiver structures for joint symbol and channel estimation. The first one is based on the knowledge of the spreading codes and the second on the knowledge of a training sequence per transmit antenna. Two code structures are considered, namely Hadamard-Walsh (HW) and Truncated Hadamard-Walsh (THW) codes. The main advantage of the proposed PARAFAC receivers lies in the fact that they exhibit fast convergence and eliminate all ambiguities inherent to the PARAFAC model. Simulation results are provided to compare the performances of the two receivers for several system configurations, revealing the impact of the number of transmit antennas, the number of receive antennas, the code length and the signal-to-noise ratio on their performance.
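For readers unfamiliar with the tensor machinery, a bare-bones rank-R PARAFAC fit by alternating least squares is sketched below on a synthetic third-order array; it is a generic illustration only, omitting the supervised symbol/channel structure, the spreading-code constraints and the ambiguity analysis discussed above.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product; row (j,k) holds B[j,r]*C[k,r]."""
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

def parafac_als(X, R, n_iter=100, seed=0):
    """Rank-R PARAFAC of a 3-way array X by alternating least squares."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.normal(size=(d, R)) for d in (I, J, K))
    X1 = X.reshape(I, -1)                      # mode-1 unfolding (k fastest)
    X2 = np.moveaxis(X, 1, 0).reshape(J, -1)   # mode-2 unfolding
    X3 = np.moveaxis(X, 2, 0).reshape(K, -1)   # mode-3 unfolding
    for _ in range(n_iter):                    # normal-equation ALS updates
        A = X1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = X2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = X3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# sanity check: recover a noiseless rank-3 tensor
rng = np.random.default_rng(1)
A0, B0, C0 = rng.normal(size=(6, 3)), rng.normal(size=(5, 3)), rng.normal(size=(4, 3))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = parafac_als(X, R=3)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(X - Xhat) / np.linalg.norm(X))   # relative error near zero
```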
APA, Harvard, Vancouver, ISO, and other styles
29

Tolkkinen, M. (Mikko). "Biodiversity and ecosystem functioning in boreal streams:the effects of anthropogenic disturbances and naturally stressful environments." Doctoral thesis, Oulun yliopisto, 2015. http://urn.fi/urn:isbn:9789526209043.

Full text
Abstract:
Abstract The effect of biodiversity loss and change on the functioning of ecosystems is one of the key questions in ecological research. For stream ecosystems, compelling evidence indicates that species diversity may enhance ecosystem functions. However, ecosystem functions are often regulated by the same environmental factors that also shape diversity; thus, a major challenge for ecologists is to separate the effects of biodiversity loss on ecosystem functions from the direct effects of human-induced disturbance. In this doctoral thesis, I studied how decomposer communities and ecosystem functions respond to human disturbances (nutrient enrichment, acidification) and a natural stressor (naturally low water pH). I also studied how human disturbances and natural stressors affect the phylogenetic structure of stream fungal communities. I showed that human disturbance had a strong impact on species dominance patterns by reducing species evenness. Species dominance patterns also explained the variation in decomposition rates. Changes in abiotic variables also had a direct effect on leaf decomposition rates. In the naturally acidic sites, human impact (land drainage) further decreased water pH and increased metal concentrations, thereby reducing leaf decomposition rates, whereas high nutrient concentrations enhanced leaf decomposition. Naturally low pH had no effect on decomposition rates. Decomposer community similarity was higher in drainage-impacted sites, but only in naturally acidic, not in circumneutral, streams. Human-induced disturbance also modified the phylogenetic similarity of fungal decomposer communities, with communities in disturbed sites consisting of more closely related species when compared to those in circumneutral reference sites. Leaf litter decomposition showed greater temporal variation in human-disturbed sites than in reference sites, whereas fungal community variability was similar in disturbed and reference sites. Thus, temporally replicated monitoring may be needed for a reliable assessment of human disturbance in streams. My thesis emphasizes that using both functional and taxonomic measures allows a more comprehensive assessment of biological responses to human disturbance.
Tiivistelmä Biodiversiteetin väheneminen ja siitä seuraava ekosysteemin toiminnan heikkeneminen on eräs keskeisimmistä ekologisista kysymyksistä. Ekosysteemin toiminnot ovat kuitenkin monesti yhteydessä ympäristöolosuhteisiin, joten on vaikea erottaa vähentyneen biodiversiteetin ja ympäristöolojen suhteellista merkitystä ekosysteemien toimintoihin. Tässä väitöskirjatyössäni tutkin, kuinka virtavesien hajottajayhteisöt ja ekosysteemin toiminnot (lehtikarikkeen hajotus) muuttuvat valuma-alueen ihmistoimintojen myötä. Tutkin myös, kuinka luontainen stressi (matala pH) vaikuttaa yhteisöihin ja ekosysteemin toimintoihin. Tarkastelen myös akvaattisten sienten fylogeneettistä rakennetta ihmistoiminnan muuttamissa vesiympäristöissä. Osoitan tutkimuksissani, että ihmistoiminnoilla on vaikutuksia hajottajayhteisöiden kokonaisrunsauden jakautumiseen lajien kesken. Muutamien runsaiden lajien dominoimissa yhteisöissä lehtikarikkeen hajoaminen on tehokkaampaa kuin yhteisöissä, joissa lajien runsauserot ovat pienempiä. Myös ympäristöoloilla on vaikutus lehtikarikkeen hajotukseen. Luontaisesti happamissa puroissa metsäojituksen seurauksena lisääntynyt veden metallipitoisuus ja alhainen pH vähentävät hajotuksen määrää. Toisaalta joen korkea ravinnepitoisuus lisää hajotusta. Lehtikarikkeen hajotus vaihtelee enemmän vuosien välillä ihmistoimintojen muuttamissa virtavesissä kuin luonnontilaisissa vesissä. Toisaalta sieniyhteisöt pysyvät koostumukseltaan samankaltaisina vuosien välillä ihmistoiminnan muuttamissa paikoissa ja referenssipaikoissa. Tämä työ osoittaa, että toiminnallisten ja yhteisöihin perustuvien indikaattorien yhteiskäyttö antaa kokonaisvaltaisimman kuvan ihmistoimintojen vaikutuksesta virtavesien ekosysteemeihin
APA, Harvard, Vancouver, ISO, and other styles
30

Barboteu, Mikaël. "Contact, frottement et techniques de calcul parallèle." Montpellier 2, 1999. http://www.theses.fr/1999MON20047.

Full text
Abstract:
In this work, we developed a mechanical model and numerical methods suited to analyzing the behavior of multi-contact structures, in which contact and friction play an essential role. Indeed, the multiplicity of contact zones between the different deformable bodies of the structure makes the problem large, severely nonlinear and very ill-conditioned. We first gave a continuous formulation of contact, which led, at the discrete level, to the implementation of contact elements. A study of a metal curtain composed of articulated slats highlighted the limits of classical solution methods, which require exorbitant computation time and considerable memory. To remedy this, we developed two numerical techniques adapted to the parallel architecture of the new generation of computers: (1) an element-by-element preconditioner adapted to contact elements was established to reduce the often excessive cost of preconditioning large, ill-conditioned systems; the advantage of this fine-grained parallel technique lies in the low storage it requires, while offering performance comparable to, and sometimes better than, classical preconditioners; (2) the second strategy, based on domain decomposition techniques, led to a solution method with coarse granularity, and therefore better suited to multiprocessor machines. Our contact problem with friction is then treated by a solution scheme coupling a Newton method, to handle the nonlinearity, with the Schur complement method, to solve the nonsymmetric tangent linearized problems.
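As a small illustration of the Schur-complement step mentioned above (here on a symmetric positive-definite toy system rather than the nonsymmetric tangent problems of the thesis), a minimal sketch: interior unknowns are eliminated first, the condensed interface system is solved, and the interior solution is recovered.

```python
import numpy as np

rng = np.random.default_rng(0)
n_i, n_b = 6, 3                                   # interior and interface unknowns
K = rng.normal(size=(n_i + n_b, n_i + n_b))
K = K @ K.T + (n_i + n_b) * np.eye(n_i + n_b)     # SPD stiffness-like matrix
f = rng.normal(size=n_i + n_b)

A_II, A_IB = K[:n_i, :n_i], K[:n_i, n_i:]
A_BI, A_BB = K[n_i:, :n_i], K[n_i:, n_i:]
f_I, f_B = f[:n_i], f[n_i:]

# Schur complement on the interface:  S u_B = g
S = A_BB - A_BI @ np.linalg.solve(A_II, A_IB)
g = f_B - A_BI @ np.linalg.solve(A_II, f_I)
u_B = np.linalg.solve(S, g)                       # interface solve
u_I = np.linalg.solve(A_II, f_I - A_IB @ u_B)     # interior back-substitution

assert np.allclose(K @ np.concatenate([u_I, u_B]), f)
```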
APA, Harvard, Vancouver, ISO, and other styles
31

Sartori, Lauriana Rúbio [UNESP]. "Informação polarimétrica PALSAR/ALOS aplicada à discriminação de espécies e estimação de parâmetros morfológicos de macrófitas." Universidade Estadual Paulista (UNESP), 2011. http://hdl.handle.net/11449/100252.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
O propósito deste trabalho foi avaliar o potencial dos dados PALSAR polarimétricos para discriminar e mapear espécies de macrófitas (vegetação aquática) de uma área alagável da Amazônia, a planície de inundação do Lago Grande de Monte Alegre, no estado do Pará. A coleta de dados foi realizada quase simultaneamente à aquisição dos dados de radar. Três principais espécies de macrófitas foram encontradas na área: Paspalum repens (PR), Hymenachne amplexicaulis (HA) e Paspalum elephantipes (PE). Variáveis morfológicas foram medidas em campo e usadas para derivar outras variáveis tais como a biomassa. Atributos foram gerados a partir da matriz de covariância [C] extraída da imagem ALOS/PALSAR em modo SLC (single look complex). Os atributos polarimétricos foram analisados para as três espécies e identificados aqueles capazes de discriminar as espécies. Foram aplicadas as seguintes abordagens de classificação: baseada em regras, baseada em modelos de decomposição (Decomposições de Freeman-Durden e Cloude-Pottier), baseada em estatística (Classificação supervisionada baseada na distância Wishart) e híbrida (Classificador Wishart com classes de entrada baseadas na decomposição de Cloude-Pottier). Finalmente, a variável morfológica “volume da haste” foi modelada por regressão múltipla em função de alguns atributos polarimétricos. Os resultados sugerem que a imagem polarimétrica banda L possui potencial para discriminar as espécies de macrófitas, sendo os principais atributos para isso sigma zero HH, sigma zero HV e sigma zero VV, índice de estrutura da copa...
The purpose of this work was to evaluate the potential of fully polarimetric PALSAR data to discriminate and map macrophyte species in the Amazon floodplain, more specifically in the Monte Alegre Lake, in the state of Pará, Brazil. Fieldwork was carried out almost simultaneously with the radar acquisition. Three main species were found in the study area: Paspalum repens (PR), Hymenachne amplexicaulis (HA) and Paspalum elephantipes (PE). Macrophyte morphological variables were measured in the field and used to derive other variables, such as biomass. Attributes were calculated from the covariance matrix [C] derived from the SLC (single look complex) data. The polarimetric attributes were analyzed for the three species, and those capable of discriminating them were identified. The following classification approaches were applied: a rule-based classification, model-based classifications (Freeman-Durden and Cloude-Pottier), a statistical classification (supervised classification using the Wishart distance measure) and a hybrid classification (Wishart classifier with input classes based on the H/α plane). Finally, the morphological variable “stem volume” was modeled using multiple regression. The findings suggest that the fully polarimetric image has potential for discriminating plant species, the main attributes being sigma-nought HH, sigma-nought HV and sigma-nought VV, the canopy structure index, the HH-VV polarimetric coherence, the helicity of the third scattering mechanism (τ), the orientation angle of the first scattering mechanism and the scattering type phase of the first mechanism; among the different classifications, only the supervised (Wishart) and the rule-based ones discriminated the species, with overall accuracies of 75.04% and 87.18%, respectively; the stem volume was modeled using the following attributes: biomass index, volumetric scattering ... (Complete abstract click electronic access below)
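A minimal sketch of the eigen-decomposition behind the Cloude-Pottier (H/α) attributes used above; the random coherency matrix is invented for illustration, and log base 3 is the usual convention for fully polarimetric (3-channel) data.

```python
import numpy as np

def cloude_pottier(T):
    """Entropy H and mean alpha angle (degrees) from a 3x3 coherency matrix."""
    w, v = np.linalg.eigh(T)                    # real eigenvalues, ascending
    p = np.clip(w.real, 0, None)
    p = p / p.sum()                             # pseudo-probabilities
    H = -sum(pi * np.log(pi) for pi in p if pi > 0) / np.log(3)
    alphas = np.arccos(np.abs(v[0, :]))         # alpha_i from each eigenvector
    return H, np.degrees(p @ alphas)

# toy coherency matrix averaged from random Pauli scattering vectors
rng = np.random.default_rng(0)
k = rng.normal(size=(100, 3)) + 1j * rng.normal(size=(100, 3))
T = np.einsum('ni,nj->ij', k, k.conj()) / 100   # T = <k k^H>
print(cloude_pottier(T))
```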
APA, Harvard, Vancouver, ISO, and other styles
32

Silva, Marcus Vinícius Amaral e. "Estrutura de renda, consumo e sistema produtivo: mudanças na economia brasileira entre 2000 e 2010." Universidade Federal de Juiz de Fora (UFJF), 2018. https://repositorio.ufjf.br/jspui/handle/ufjf/7188.

Full text
Abstract:
O objetivo desta tese é analisar as mudanças na estrutura de rendimento, ocorridas na economia brasileira entre 2000 e 2010, e sua relação com as alterações nos padrões de consumo e as transformações na estrutura produtiva do país. Para isso, são elaboradas duas matrizes, por meio de um modelo de Matriz de Contabilidade Social (MCS), desagregada para 10 grupos familiares representativos. A estrutura de interdependência de renda entre as famílias é investigada por meio dos multiplicadores inter-relacionais de renda de Miyazawa. Já as mudanças na estrutura produtiva, entre 2000 e 2010, induzida por cada uma das 10 famílias típicas é investigada por meio de uma Análise de Decomposição Estrutural. Os principais resultados alcançados pela aplicação desses dos dois métodos apontam para uma relevante redução na renda absorvida pela última classe familiar, dado um choque exógeno de renda, ao longo do período de análise. Por outro lado, as famílias que fazem parte dos grupos de menor rendimento, tiveram aumento significativo na absorção de renda entre 2000 e 2010. O que pode ser explicado pelas transformações na estrutura de rendimentos, ocorridas principalmente em favor das classes familiares de menor renda, representadas sobretudo pela redução dos indicadores de desigualdade de renda. Isso implica que, políticas de transferência de renda, como o Bolsa Família, e as mudanças no mercado de trabalho, observada principalmente por meio do aumento do salário mínimo real, passaram a gerar maiores benefícios às camadas mais pobres da população. Já a análise de decomposição estrutural indica que os grupos familiares com menor rendimento médio foram aqueles que mais contribuíram para o aumento da produção observada no período. Esse resultado sugere que o crescimento da renda, associado a novos padrões de consumo, está intimamente ligado aos avanços produtivos entre 2000 e 2010.
The aim of this thesis is to analyze the changes in the structure of income that occurred in the Brazilian economy between 2000 and 2010, and their relation to the changes in consumption patterns and the transformations in the country's productive structure. To achieve this objective, two matrices are constructed using a Social Accounting Matrix (SAM) model, disaggregated into 10 representative household groups. The structure of income interdependence among households is investigated through Miyazawa's interrelational income multipliers. The changes in the productive structure between 2000 and 2010 induced by each of the 10 typical families are investigated through a Structural Decomposition Analysis (SDA). The main results point to a significant reduction in the income absorbed by the highest-income household class, given an exogenous income shock, over the period of analysis. On the other hand, the families that are part of the lower-income groups had a significant increase in income absorption between 2000 and 2010. This can be explained by the changes in income structure, mainly in favor of lower-income households, as reflected in the reduction of income inequality indicators. This implies that income transfer policies, such as Bolsa Família, and changes in the labor market, observed mainly through the increase of the real minimum wage, generated greater benefits to the poorest sections of the population. In turn, the structural decomposition analysis indicates that the household groups with the lowest average income were the ones that contributed the most to the production increase observed in the period. This result suggests that income growth and the rise of a new middle class, with new patterns of consumption, are closely linked to the productive advances between 2000 and 2010.
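A minimal sketch of the accounting-multiplier computation underlying Miyazawa-type analysis: with A the matrix of average expenditure propensities of the endogenous accounts, the multiplier matrix is (I − A)⁻¹, and an exogenous income injection Δf propagates as Δx = (I − A)⁻¹ Δf. The 3-account matrix below is invented for illustration; Miyazawa's interrelational multipliers add further structure that is not reproduced here.

```python
import numpy as np

# average expenditure propensities among endogenous accounts (toy numbers)
A = np.array([[0.20, 0.30, 0.10],
              [0.25, 0.10, 0.40],
              [0.15, 0.20, 0.05]])

M = np.linalg.inv(np.eye(3) - A)      # SAM multiplier matrix
df = np.array([1.0, 0.0, 0.0])        # exogenous injection into account 1
print(M @ df)                         # total (direct + indirect) income effects
```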
APA, Harvard, Vancouver, ISO, and other styles
33

Pavelski, Lucas Marcondes. "Otimização evolutiva multiobjetivo baseada em decomposição e assistida por máquinas de aprendizado extremo." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/1254.

Full text
Abstract:
Muitos problemas de otimização reais apresentam mais de uma função-objetivo. Quando os objetivos são conflitantes, estratégias especializadas são necessárias, como é o caso dos algoritmos evolutivos multiobjetivo (MOEAs, do inglês Multi-objective Optimization Evolutionary Algorithms). Entretanto, se a avaliação das funções-objetivo é custosa (alto custo computacional ou econômico) muitos MOEAs propostos são impraticáveis. Uma alternativa pode ser a utilização de um modelo de aprendizado de máquina que aproxima o cálculo do fitness (surrogate) no algoritmo de otimização. Este trabalho propõe e investiga uma plataforma chamada ELMOEA/D que agrega MOEAs do estado da arte baseados em decomposição de objetivos (MOEA/D) e máquinas de aprendizado extremo (ELMs, do inglês Extreme Learning Machines) como modelos surrogate. A plataforma proposta é testada com diferentes variantes do algoritmo MOEA/D e apresenta bons resultados em problemas benchmark, comparada a um algoritmo da literatura que também utiliza MOEA/D mas modelos surrogates baseados em redes com função de base radial. A plataforma ELMOEA/D também é testada no Problema de Predição de Estrutura de Proteínas (PPEP). Apesar dos resultados alcançados pela proposta não serem tão animadores quanto aqueles obtidos nos benchmarks (quando comparados os algoritmos com e sem surrogates), diversos aspectos da proposta e do problema são explorados. Por fim, a plataforma ELMOEA/D é aplicada a uma formulação alternativa do PPEP com sete objetivos e, com estes resultados, várias direções para trabalhos futuros são apontadas.
Many real optimization problems have more than one objective function. When the objectives are in conflict, there is a need for specialized strategies, such as Multi-objective Optimization Evolutionary Algorithms (MOEAs). However, if the function evaluations are expensive (high computational or economic cost), many proposed MOEAs are impractical. An alternative might be the use of a machine learning model to approximate the fitness function (a surrogate) in the optimization algorithm. This work proposes and investigates a framework called ELMOEA/D that aggregates state-of-the-art MOEAs based on decomposition of objectives (MOEA/D) and extreme learning machines as surrogate models. The proposed framework is tested with different MOEA/D variants and shows good results in benchmark problems, compared to a literature algorithm that also encompasses MOEA/D but uses surrogate models based on radial basis function networks. The ELMOEA/D framework is also applied to the protein structure prediction problem (PSPP). Despite the fact that the results achieved by the proposed approach were not as encouraging as the ones achieved in the benchmarks (when the algorithms with and without surrogates are compared), many aspects of both the algorithm and the problem are explored. Finally, the ELMOEA/D framework is applied to an alternative formulation of the PSPP, and the results lead to various directions for future work.
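A minimal sketch of the extreme-learning-machine regression used as the surrogate: hidden weights are random and fixed, and only the output layer is solved by least squares. Fitting a toy 1-D function stands in for approximating the expensive objectives.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))      # random, never trained
    b = rng.normal(size=n_hidden)
    beta = np.linalg.pinv(np.tanh(X @ W + b)) @ y    # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(2 * X[:, 0]) + 0.1 * np.random.default_rng(1).normal(size=200)
print(np.abs(elm_predict(X, *elm_fit(X, y)) - y).mean())   # cheap surrogate fit
```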
APA, Harvard, Vancouver, ISO, and other styles
34

Kapfunde, Goodwell. "Near-capacity sphere decoder based detection schemes for MIMO wireless communication systems." Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/11350.

Full text
Abstract:
The search for the closest lattice point arises in many communication problems, and is known to be NP-hard. The Maximum Likelihood (ML) detector is the optimal detector which yields an optimal solution to this problem, but at the expense of high computational complexity. Existing near-optimal methods used to solve the problem are based on the Sphere Decoder (SD), which searches for lattice points confined in a hyper-sphere around the received point. The SD has emerged as a powerful means of finding the solution to the ML detection problem for MIMO systems. However, the bottleneck lies in the determination of the initial radius. This thesis is concerned with the detection of transmitted wireless signals in Multiple-Input Multiple-Output (MIMO) digital communication systems as efficiently and effectively as possible. The main objective of this thesis is to design efficient ML detection algorithms for MIMO systems based on depth-first search (DFS) algorithms whilst taking into account complexity and bit error rate performance requirements for advanced digital communication systems. The increased capacity and improved link reliability of MIMO systems without sacrificing bandwidth efficiency and transmit power will serve as the key motivation behind the study of MIMO detection schemes. The fundamental principles behind MIMO systems are explored in Chapter 2. A generic framework for linear and non-linear tree search based detection schemes is then presented in Chapter 3. This paves the way for different methods of improving the achievable performance-complexity trade-off for all SD-based detection algorithms. The suboptimal detection schemes, in particular the Minimum Mean Squared Error-Successive Interference Cancellation (MMSE-SIC), will also serve as pre-processing as well as comparison techniques, whilst capacity-approaching Low-Density Parity-Check (LDPC) codes will be employed to evaluate the performance of the proposed SD. Numerical and simulation results show that non-linear detection schemes yield better performance compared to linear detection schemes, however, at the expense of a slight increase in complexity. The first contribution in this thesis is the design of a near-ML-achieving SD algorithm for MIMO digital communication systems that reduces the number of search operations within the sphere-constrained search space at reduced detection complexity in Chapter 4. In this design, the distance between the ML estimate and the received signal is used to control the lower and upper bound radii of the proposed SD to prevent NP-complete problems. The detection method is based on the DFS algorithm and the Successive Interference Cancellation (SIC). The SIC ensures that the effects of dominant signals are effectively removed. Simulation results presented in this thesis show that by employing pre-processing detection schemes, the complexity of the proposed SD can be significantly reduced, though at a marginal performance penalty. The second contribution is the determination of the initial sphere radius in Chapter 5. The new initial radius proposed in this thesis is based on the variable parameter α, which is commonly based on experience and is chosen to ensure that at least one lattice point exists inside the sphere with high probability. Using the variable parameter α, a new noise covariance matrix which incorporates the number of transmit antennas, the energy of the transmitted symbols and the channel matrix is defined. 
The new covariance matrix is then incorporated into the EMMSE model to generate an improved EMMSE estimate. The EMMSE radius is finally found by computing the distance between the sphere centre and the improved EMMSE estimate. This distance can be fine-tuned by varying the variable parameter α. The beauty of the proposed method is that it reduces the complexity of the preprocessing step of the EMMSE to that of the Zero-Forcing (ZF) detector without significant performance degradation of the SD, particularly at low Signal-to-Noise Ratios (SNR). More specifically, it will be shown through simulation results that using the EMMSE preprocessing step will substantially improve performance whenever the complexity of the tree search is fixed or upper bounded. The final contribution is the design of the LRAD-MMSE-SIC based SD detection scheme, which introduces a trade-off between performance and increased computational complexity, in Chapter 6. The Lenstra-Lenstra-Lovász (LLL) algorithm will be utilised to orthogonalise the channel matrix H into a new near-orthogonal channel matrix H̄. The increased computational complexity introduced by the LLL algorithm will be significantly decreased by employing sorted QR decomposition of the transformed channel H̄ into a unitary matrix and an upper triangular matrix which retains the properties of the channel matrix. The SIC algorithm will ensure that the interference due to dominant signals will be minimised, while the LDPC will effectively stop the propagation of errors within the entire system. Through simulations, it will be demonstrated that the proposed detector still approaches the ML performance while requiring much lower complexity compared to the conventional SD.
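A minimal depth-first sphere decoder over a real-valued model, with sorted (Schnorr-Euchner-style) enumeration and pruning; the 4-PAM alphabet and random channel are illustrative, and the EMMSE radius initialization and LLL/SIC preprocessing developed in the thesis are omitted.

```python
import numpy as np

def sphere_decode(y, H, alphabet):
    """Find s in alphabet^n minimizing ||y - H s|| by depth-first search."""
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    best = {"s": None, "d": np.inf}
    s = np.zeros(n)

    def dfs(level, dist):
        if level < 0:                    # a full candidate inside the sphere
            best["s"], best["d"] = s.copy(), dist
            return
        r = z[level] - R[level, level + 1:] @ s[level + 1:]
        for cand in sorted(alphabet, key=lambda a: (r - R[level, level] * a) ** 2):
            d = dist + (r - R[level, level] * cand) ** 2
            if d >= best["d"]:
                break                    # sorted candidates: prune the rest
            s[level] = cand
            dfs(level - 1, d)

    dfs(n - 1, 0.0)
    return best["s"]

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))
s_true = rng.choice([-3, -1, 1, 3], size=4)
y = H @ s_true + 0.05 * rng.normal(size=6)
print(sphere_decode(y, H, [-3, -1, 1, 3]), s_true)
```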
APA, Harvard, Vancouver, ISO, and other styles
35

"Combining the vortex-in-cell and parallel fast multipole methods for efficient domain decomposition simulations : DNS and LES approaches." Université catholique de Louvain, 2007. http://edoc.bib.ucl.ac.be:81/ETD-db/collection/available/BelnUcetd-08172007-165806/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Abduljabbar, Mustafa. "Communication Reducing Approaches and Shared-Memory Optimizations for the Hierarchical Fast Multipole Method on Distributed and Many-core Systems." Diss., 2018. http://hdl.handle.net/10754/630221.

Full text
Abstract:
We present algorithms and implementations that overcome obstacles in the migration of the Fast Multipole Method (FMM), one of the most important algorithms in computational science and engineering, to exascale computing. Emerging architectural approaches to exascale computing are all characterized by data movement rates that are slow relative to the demand of aggregate floating point capability, resulting in performance that is bandwidth limited. Practical parallel applications of FMM are impeded in their scaling by irregularity of domains and dominance of collective tree communication, which is known not to scale well. We introduce novel ideas that improve partitioning of the N-body problem with boundary distribution through a sampling-based mechanism that hybridizes two well-known partitioning techniques, Hashed Octree (HOT) and Orthogonal Recursive Bisection (ORB). To reduce communication cost, we employ two methodologies. First, we directly utilize features available in parallel runtime systems to enable asynchronous computing and overlap it with communication. Second, we present Hierarchical Sparse Data Exchange (HSDX), a new all-to-all algorithm that inherently relieves communication by relaying sparse data in a few steps of neighbor exchanges. HSDX exhibits superior scalability and improves relative performance compared to the default MPI alltoall and other relevant literature implementations. We test this algorithm alongside others on a Cray XC40 tightly coupled with the Aries network and on Intel Many Integrated Core Architecture (MIC), represented by Intel Knights Corner (KNC) and Intel Knights Landing (KNL), as modern shared-memory CPU environments. Tests include comparisons of thoroughly tuned handwritten kernels versus auto-vectorized FMM Particle-to-Particle (P2P) and Multipole-to-Local (M2L) kernels. Scalability of task-based parallelism is assessed with FMM’s tree traversal kernel using different threading libraries. The MIC tests show large performance gains after adopting the prescribed techniques, which are inevitable in a world that is moving towards many-core parallelism.
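For context, the Particle-to-Particle (P2P) kernel mentioned above is the direct O(N²) interaction sum that FMM approximates for nearby particles; a vectorized NumPy version, with a softening term and toy data as an illustrative stand-in for the tuned SIMD kernels, looks like:

```python
import numpy as np

def p2p(targets, sources, charges, soft=1e-12):
    """Direct O(N^2) potential: phi_i = sum_j q_j / |x_i - y_j|."""
    d = targets[:, None, :] - sources[None, :, :]
    r = np.sqrt((d ** 2).sum(axis=-1) + soft)   # softened to avoid division by 0
    return (charges[None, :] / r).sum(axis=1)   # potential at each target

rng = np.random.default_rng(0)
targets = rng.random((500, 3))
sources = rng.random((800, 3))
q = rng.random(800)
phi = p2p(targets, sources, q)
```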
APA, Harvard, Vancouver, ISO, and other styles
37

Lin, Jenq-Jong, and 林正忠. "Multiple star decomposition of complete multigraphs and some decompositions of crowns." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/58221114818367737081.

Full text
Abstract:
Doctorate
National Central University
Graduate Institute of Mathematics
87
This thesis consists of two topics: (1) the decomposition of complete multigraphs into multiple stars and (2) some decompositions of crowns. (1) Decomposition of complete multigraphs into multiple stars: In Chapter 2, we give a criterion for the decomposition of complete multigraphs λKn into multiple stars (not necessarily isomorphic). Further, we give a criterion for the decomposition of 2Kn into isomorphic multiple stars. (2) Some decompositions of crowns: In Chapter 3, we give a criterion for the decomposition of a multicrown into isomorphic stars. As a consequence, we obtain a criterion for the decomposition of the power of a cycle into isomorphic stars. In Chapter 4, we give some sufficient conditions for the decomposition of the crown into cycles. In Chapter 5, we investigate the decomposition numbers of the crown, which include the tree number, biclique number and star arboricity.
APA, Harvard, Vancouver, ISO, and other styles
38

He, Ying active 2013. "Decomposition of multiple attribute preference models." 2013. http://hdl.handle.net/2152/22980.

Full text
Abstract:
This dissertation consists of three research papers on preference models of decision making, all of which adopt an axiomatic approach in which preference conditions are studied so that the models in this dissertation can be verified by checking their conditions at the behavioral level. The first paper, “Utility Functions Representing Preference over Interdependent Attributes”, studies the problem of how to assess a two-attribute utility function when the attributes are interdependent. We consider a situation where the risk aversion on one attribute could be influenced by the level of the other attribute in a two-attribute decision making problem. In this case, the multilinear utility model, and its special cases the additive and multiplicative forms, cannot be applied to assess a subject's preference because utility independence does not hold. We propose a family of preference conditions called nth degree discrete distribution independence that can accommodate a variety of dependencies between two attributes. The special case of second degree discrete distribution independence is equivalent to the utility independence condition. Third degree discrete distribution independence leads to a decomposition formula that contains many other decomposition formulas in the existing literature as special cases. As the decompositions proposed in this research are more general than many existing ones, the study provides a model of preference that has the potential to be used for assessing utility functions more accurately and with relatively little additional effort. The second paper, “On the Axiomatization of the Satiation and Habit Formation Utility Models”, studies the axiomatic foundations of the discounted utility model that incorporates both satiation and habit formation in temporal decision. We propose a preference condition called shifted difference independence to axiomatize a general habit formation and satiation model (GHS). This model allows for a general habit formation and satiation function that contains many functional forms in the literature as special cases. Since the GHS model can be reduced to either a general satiation model (GSa) or a general habit formation model (GHa), our theory also provides approaches to axiomatize both the GSa model and the GHa model. Furthermore, by adding extra preference conditions into our axiomatization framework, we obtain a GHS model with a linear habit formation function and a recursively defined linear satiation function. In the third paper, “Hope, Dread, Disappointment, and Elation from Anticipation in Decision Making”, we propose a model to incorporate both anticipation and disappointment into decision making, where we define hope as anticipating a gain and dread as anticipating a loss. In this model, the anticipation for a lottery is a subjectively chosen outcome for the lottery that influences the decision maker's reference point. The decision maker experiences elation or disappointment when she compares the received outcome with the anticipated outcome. This model captures the trade-off between a utility gain from higher anticipation and a utility loss from higher disappointment. We show that our model contains some existing decision models as its special cases, including disappointment models. We also use our model to explore how a person's attitude toward the future, either optimistic or pessimistic, could mediate the wealth effect on her risk attitude. Finally, we show that our model can be applied to explain the coexistence of demand for gambling and insurance, and it provides unique insights into portfolio choice and advertising decision problems.
APA, Harvard, Vancouver, ISO, and other styles
39

"Structural decomposition of multiple time scale Markov processes." Laboratory for Information and Decision Systems, Massachusetts Institute of Technology], 1987. http://hdl.handle.net/1721.1/3021.

Full text
Abstract:
J.R. Rohlicek, A.S. Willsky.
Caption title. "October 1987."
Includes bibliographical references.
Supported in part by a grant from the Air Force Office of Scientific Research (AFOSR-82-0258) and in part by a grant from the Army Research Office (DAAG-29-84-K005).
APA, Harvard, Vancouver, ISO, and other styles
40

Kao, Huei-Yuan, and 高慧媛. "Multiple Group signature with Threshold authorization and Document Decomposition." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/968su8.

Full text
Abstract:
Master's
National Taiwan University of Science and Technology
Department of Information Management
106
In recent years, many large-scale integrated projects have been launched. A large and complex integrated project document involves multiple departments responsible for different domains, and when the project is running, every department must sign the parts within its professional domain. To simplify the tedious and complicated signing process across departments in a large-scale project document, while ensuring that the entire document is signed and cannot be tampered with, this study was proposed. We propose a multiple group signature with threshold authorization and document decomposition: through document segmentation, each department is responsible for signing its own part. Each department signs the document with a different threshold authorization of elliptic curve digital signatures. The final output of the organization is a multi-group signature composed of multiple elliptic curve digital signatures. The resulting seal only allows external receivers to verify that the signature comes from the organization; it is impossible for receivers to know who within those departments signed. This study complies with the U.S. federal government's Federal Information Processing Standards (FIPS) three security provisions for digital signatures: authentication, integrity, and non-repudiation.
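A minimal sketch of the decompose-and-sign skeleton only, using the third-party 'ecdsa' package (assumed installed); the threshold authorization and the group-signature property of hiding which departments signed require the scheme developed in the thesis and are not implemented here.

```python
import hashlib
from ecdsa import SigningKey, NIST256p   # pip install ecdsa (assumed available)

document = b"integrated project document ..." * 8
segments = [document[i:i + 64] for i in range(0, len(document), 64)]  # crude split

dept_keys = [SigningKey.generate(curve=NIST256p) for _ in segments]   # one per dept
signatures = [sk.sign(hashlib.sha256(seg).digest())
              for sk, seg in zip(dept_keys, segments)]

# an external receiver checks every segment against the published verifying keys
for sk, seg, sig in zip(dept_keys, segments, signatures):
    assert sk.get_verifying_key().verify(sig, hashlib.sha256(seg).digest())
```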
APA, Harvard, Vancouver, ISO, and other styles
41

Lu, Ting-You, and 陸亭佑. "Modified Gram-Schmidt-based QR decomposition Hardware Architecture and Implementation for Multiple-Input Multiple-Output System." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/73480903740726606402.

Full text
Abstract:
Master's
National Yunlin University of Science and Technology
Graduate School of Electronic and Information Engineering
99
In recent years, due to the increasing use of wireless communications in applications such as cell-phone video and multimedia downloads, high-speed wireless communication has become necessary. This thesis uses an improved implementation of QR decomposition for Multiple-Input Multiple-Output Orthogonal Frequency-Division Multiplexing (MIMO-OFDM) detection, based on the modified Gram-Schmidt method. In wireless communication systems, accuracy in detecting the transmitted signals is essential. We present a preprocessing MIMO signal detection circuit architecture and its FPGA implementation based on the Triangular Systolic Array QRD (TSAQRD), which can approach near-ML performance with a fixed computational complexity. The thesis presents dual-mode-process (DMP) hardware, and it works on 8 × 8 16-QAM MIMO systems.
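A minimal NumPy sketch of the modified Gram-Schmidt QR factorization at the heart of the design, as a floating-point reference for what TSAQRD-style hardware computes:

```python
import numpy as np

def mgs_qr(A):
    """QR by modified Gram-Schmidt: deflate the remaining columns immediately."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q, R = np.zeros((m, n)), np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        for j in range(k + 1, n):           # the key difference from classical GS
            R[k, j] = Q[:, k] @ A[:, j]
            A[:, j] -= R[k, j] * Q[:, k]
    return Q, R

H = np.random.default_rng(0).normal(size=(8, 8))   # e.g. an 8x8 MIMO channel
Q, R = mgs_qr(H)
assert np.allclose(Q @ R, H) and np.allclose(Q.T @ Q, np.eye(8))
```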
APA, Harvard, Vancouver, ISO, and other styles
42

Yan, Wei-Jhih, and 顏威志. "Decomposition of a Finite Product of Multiples of Riemann Zeta Values." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/95073135094788108345.

Full text
Abstract:
Master's
National Chung Cheng University
Graduate Institute of Mathematics
103
The classical Euler decomposition theorem expresses a product of two Riemann zeta values as a weighted sum of double Euler sums. Such a decomposition theorem can be generalized to a finite product of Riemann zeta values and of multiple zeta values of height one. In this thesis, we investigate a decomposition of the product in its theoretical form. In particular, we will illustrate its explicit decomposition for the cases of n = 3 and 4 in terms of weighted sums of multiple zeta values.
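For reference, one standard statement of Euler's decomposition, in the convention ζ(j,k) = Σ_{m>n≥1} m⁻ʲn⁻ᵏ (conventions for multiple zeta values vary, so this is offered as an illustration rather than the thesis's exact normalization):

```latex
\[
\zeta(p)\,\zeta(q)
  = \sum_{\substack{j+k=p+q \\ j \ge 2,\; k \ge 1}}
    \left[ \binom{j-1}{p-1} + \binom{j-1}{q-1} \right] \zeta(j,k),
  \qquad p, q \ge 2,
\quad\text{where}\quad
\zeta(j,k) = \sum_{m > n \ge 1} \frac{1}{m^{j} n^{k}} .
\]
```

For p = q = 2 this gives ζ(2)² = 2ζ(2,2) + 4ζ(3,1), consistent with the known values ζ(2,2) = π⁴/120 and ζ(3,1) = π⁴/360.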
APA, Harvard, Vancouver, ISO, and other styles
43

Zhang, Zhenyu, and Stanley B. Gershwin. "Modeling and Analysis of Manufacturing Systems with Multiple-Loop Structures." 2005. http://hdl.handle.net/1721.1/29836.

Full text
Abstract:
Kanban and Constant Work-In-Process (CONWIP) control methods are designed to impose tight controls over inventory, while providing a satisfactory production rate. This paper generalizes systems with kanban or CONWIP control as assembly/disassembly networks with multiple-loop structures. We present a stochastic mathematical model which integrates the information control flows into material flows. Graph theory is used to analyze the multiple-loop structures. An efficient analytical algorithm is developed for evaluating the expected production rate and inventory levels. The performance of the algorithm is reported in terms of accuracy, reliability and speed.
Singapore-MIT Alliance (SMA)
APA, Harvard, Vancouver, ISO, and other styles
44

Abeysundera, Melanie. "Phylogenetic analysis of multiple genes based on spectral methods." 2011. http://hdl.handle.net/10222/14310.

Full text
Abstract:
Multiple gene phylogenetic analysis is of interest since single gene analysis often results in poorly resolved trees. Here the use of spectral techniques for analyzing multi-gene data sets is explored. The protein sequences are treated as categorical time series and a measure of similarity between a pair of sequences, the spectral covariance, is used to build trees. Unlike other methods, the spectral covariance method focuses on the relationship between the sites of genetic sequences. We consider two methods with which to combine the dissimilarity or distance matrices of multiple genes. The first method involves properly scaling the dissimilarity measures derived from different genes between a pair of species and using the mean of these scaled dissimilarity measures as a summary statistic to measure the taxonomic distances across multiple genes. We introduced two criteria for computing scale coefficients which can then be used to combine information across genes, namely the minimum variance (MinVar) criterion and the minimum coefficient of variation squared (MinCV) criterion. The scale coefficients obtained with the MinVar and MinCV criteria can then be used to derive a combined-gene tree from the weighted average of the distance or dissimilarity matrices of multiple genes. The second method is based on the singular value decomposition of a matrix made up of the p-vectors of pairwise distances for k genes. By decomposing such a matrix, we extract the common signal present in multiple genes to obtain a single tree representation of the relationship between a given set of taxa. Influence functions for the components of the singular value decomposition are derived to determine which genes are most influential in determining the combined-gene tree.
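A minimal sketch of the SVD-combination idea: stack the p pairwise distances of k genes as columns and take the leading singular component as the shared signal. The synthetic "genes" below are invented for illustration; the combined vector would then feed a distance-based tree builder such as neighbor joining.

```python
import numpy as np

rng = np.random.default_rng(1)
p, k = 15, 4                                    # 15 taxon pairs, 4 genes
shared = rng.uniform(1, 10, size=p)             # common phylogenetic signal
D = shared[:, None] + 0.3 * rng.normal(size=(p, k))   # noisy per-gene distances

U, s, Vt = np.linalg.svd(D, full_matrices=False)
sign = np.sign(U[:, 0].sum())                   # fix the arbitrary SVD sign
combined = sign * s[0] * U[:, 0] / np.sqrt(k)   # scaled leading component
print(np.corrcoef(combined, shared)[0, 1])      # close to 1 on this toy data
```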
APA, Harvard, Vancouver, ISO, and other styles
45

Ghosh, Shaunak. "Multiple suppression in the t-x-p domain." 2013. http://hdl.handle.net/2152/23207.

Full text
Abstract:
Multiples in seismic data pose serious problems to seismic interpreters for both AVO studies and interpretation of stacked sections. Several methods have been practiced with varying degrees of success to suppress multiples in seismic data. One family of velocity filters for demultiple operations using Radon transforms traditionally faces challenges when the water column is shallow. Additionally, the hyperbolic Radon transform can be computationally expensive. In this thesis, I introduce a novel multiple suppression technique in the t-x-p domain, where p is the local slope of seismic events, that aims at tackling some of the aforementioned limitations, and I discuss the advantages and scope of this approach. The technique involves essentially two steps: the decomposition part and the suppression part. Common Mid-Point (CMP) gathers are taken and transformed from the original t-x space to the extended t-x-p space and eventually to the t0-x-p space, where t0 is the zero-offset traveltime. Multiplying the gather in the extended space by Gaussian tapering filters, formed using the difference between powers of the velocities calculated intrinsically in terms of t0, x and p through analytical relations and powers of the picked primary velocities, and then stacking along the p axis produces gathers with multiples suppressed.
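A minimal sketch of the underlying slant-stack (linear τ-p) mapping from t-x to t-p, using linear interpolation for the time shifts; the thesis's t-x-p construction additionally estimates local slopes and applies the Gaussian tapering filters, which are omitted here.

```python
import numpy as np

def slant_stack(data, dt, offsets, slopes):
    """Map a t-x gather (nt x nx) to t-p by shifting each trace by p*x and stacking."""
    nt = data.shape[0]
    t = np.arange(nt) * dt
    out = np.zeros((nt, len(slopes)))
    for ip, p in enumerate(slopes):
        for ix, x in enumerate(offsets):
            out[:, ip] += np.interp(t + p * x, t, data[:, ix], left=0.0, right=0.0)
    return out

# toy gather: one linear event with slope 0.2 s/km focuses at p = 0.2
dt, offsets = 0.004, np.linspace(0, 2.0, 48)          # time step (s), offsets (km)
data = np.zeros((500, 48))
for ix, x in enumerate(offsets):
    data[int((0.4 + 0.2 * x) / dt), ix] = 1.0
tp = slant_stack(data, dt, offsets, np.linspace(0, 0.4, 41))
```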
APA, Harvard, Vancouver, ISO, and other styles
46

ZHANG, SHI-PENG, and 張世鵬. "A study on the application of decomposition principle to multiple parallelly serial-connected reservoirs." Thesis, 1988. http://ndltd.ncl.edu.tw/handle/58609782880032429542.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Wu, Kuo-Hao, and 吳國豪. "SIMULTANEOUS TEMPLATE ASSIGNMENT AND LAYOUT DECOMPOSITION USING MULTIPLE BCP MATERIALS IN DSA-MP LITHOGRAPHY." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/yvc3a6.

Full text
Abstract:
Master's
National Taiwan University of Science and Technology
Department of Electrical Engineering
105
In sub-10-nm technology nodes, the directed self-assembly technology with multiple patterning lithography (DSA-MP) is a promising solution for contact/via layer fabrication. However, previous studies using multiple patterning with a single block copolymer (BCP) material still suffer from low via manufacturability due to the limited types of feasible guiding templates. To mitigate the problem, multiple patterning in combination with two different BCP materials has been proposed, which contributes to more flexible DSA-compatible pattern matching. In this paper, we propose the first work of simultaneous guiding template assignment and layout decomposition with multiple BCP materials for general via layouts in DSA-MP. An optimal integer linear programming (ILP) formulation and a practical and sophisticated heuristic algorithm are proposed. Experimental results indicate that adopting two different BCP materials can greatly reduce conflict numbers compared with existing works using a single BCP material, and that the proposed heuristic method can efficiently obtain good solutions.
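A heavily simplified ILP sketch of the layout-decomposition flavor of the problem, using the PuLP package (assumed installed): vias are assigned to one of two masks and the number of unresolved same-mask conflicts is minimized. Real DSA-MP template assignment with multiple BCP materials adds template-feasibility variables and costs that are omitted here.

```python
import pulp  # assumes the third-party PuLP package

vias = ["v1", "v2", "v3", "v4"]
conflicts = [("v1", "v2"), ("v2", "v3"), ("v3", "v4"), ("v4", "v1"), ("v1", "v3")]

prob = pulp.LpProblem("mask_assignment", pulp.LpMinimize)
mask = pulp.LpVariable.dicts("mask", vias, cat="Binary")        # 0/1 = two masks
viol = pulp.LpVariable.dicts("viol", conflicts, cat="Binary")   # unresolved conflicts

prob += pulp.lpSum(viol.values())          # minimize the number of violations
for (a, b) in conflicts:                   # same mask on a conflicting pair
    prob += mask[a] + mask[b] - 1 <= viol[(a, b)]
    prob += 1 - mask[a] - mask[b] <= viol[(a, b)]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({v: int(mask[v].value()) for v in vias}, "violations:",
      int(pulp.value(prob.objective)))     # this toy graph has an odd cycle -> 1
```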
APA, Harvard, Vancouver, ISO, and other styles
48

Hasbullah, Hasbullah. "Alleviating the negative effect of salinity on soil respiration by plant residue addition: effect of residue properties, mixing and amendment frequency." Thesis, 2015. http://hdl.handle.net/2440/102766.

Full text
Abstract:
Salinity is a major constraint to crop production and also contributes to land degradation, particularly in arid and semiarid regions. Salinity has negative effects on soil microorganisms, reducing soil respiration, microbial biomass and microbial diversity. One of the main reasons for the negative impact of salinity is the low osmotic potential induced by high salt concentrations in the soil solution which reduces water uptake into cells and can cause water loss from cells. Some microorganisms can adapt to salinity by accumulation of osmolytes which is a significant metabolic burden. Rapidly decomposable plant residues contain high concentrations of easily available compounds which can be utilised by many soil microbes. Slowly decomposable residues on the other hand contain complex compounds which can only be utilised by few microbes, those capable of releasing specialised enzymes to break down these compounds. If salinity inhibits or kills some microbes, the decomposition of rapidly decomposable residues may be less affected than that of slowly decomposable residues because the loss of sensitive microbes can be compensated by a larger number of microbes with the former compared to the latter. If this is true, microbial activity after addition of slowly decomposable residues (high in lignin content and C/N ratio and low in water soluble carbon) should decrease more strongly with increasing salinity than after addition of rapidly decomposable residues. However, most previous studies on respiration in saline soils only used one or two types of plant residues (e.g. cereal or legume shoots). A further factor that may influence the impact of salinity on soil respiration is the frequency of residue addition. Frequent residue addition may provide soil microbes with a continuous supply of nutrients and therefore improve salinity tolerance compared to a single addition where easily available compounds are rapidly depleted. These two assumptions have not been systematically investigated. The aim of this project was to investigate the effect of the chemical composition of added residues, mixing of residues and addition frequency on soil respiration and microbial biomass in soils with different salinity. Three studies were carried out to address the aims in non-saline soil and naturally saline soils with different salinity levels. The aim of the first study was to investigate the impact of salinity on respiration in soil amended with residues differing in chemical composition (lignin concentration, water soluble organic carbon and C/N ratio). Three incubation experiments were conducted in this study. In the first experiment various residue types (shoots of wheat, canola, saltbush and kikuyu, saw dust, eucalyptus leaves) differing in C/N ratio, lignin and water extractable organic carbon concentration, were applied at 2% w/w to a non-saline soil (EC₁﹕₅, 0.1 dS m⁻¹) and three naturally saline soils with EC₁﹕₅ 1, 2.5 and 3.3 dS m⁻¹. Cumulative respiration decreased with increasing salinity but the negative effect of salinity was different among residues. Compared to non-saline soil, respiration was decreased by 20% at EC₁﹕₅ 0.3 dS m⁻¹ when slowly decomposable residues (saw dust or canola shoots) were added, but at EC₁﹕₅ 4 dS m⁻¹ when amended with a rapidly decomposable residue (saltbush). In the second experiment, the influence of residue C/N ratio and lignin content on soil respiration in saline soils was investigated. 
Two residues (canola and saw dust) with high C/N ratios but different lignin content were used. The C/N ratio was adjusted to between 20 and 80 by adding ammonium sulfate. Increasing salinity had a smaller impact on cumulative respiration after addition of residues with a C/N ratio of 20 or 40 compared to residues with a higher C/N ratio. In the third experiment, the effect of the concentration of water-soluble organic C (WEOC) of the residues was determined. WEOC was partially removed by leaching from two residues with high WEOC content (eucalypt leaves and saltbush shoots). Partial WEOC removal decreased cumulative respiration in saline soil compared to the original residues, but increased the negative effect of salinity on respiration only with saltbush shoots. The second study was conducted using the four soils from the first study (EC₁﹕₅ 0.1, 1, 2.5 and 3.3 dS m⁻¹) to compare the impact of single and multiple additions of residues that differ in decomposability on the response of soil respiration to increasing salinity. Two residues with distinct decomposability were used: kikuyu with a C/N ratio of 19 (rapidly decomposable) and canola with a C/N ratio of 82 (slowly decomposable). Both residues were added once or 2-4 times (on days 0, 14, 28 and 42) with a total addition of 10 g C kg⁻¹ soil and incubated for 56 days. Compared to a single addition, repeated addition of the rapidly decomposable residue reduced the negative effect of salinity on cumulative respiration, but this was not the case with slowly decomposable residues. The third study was carried out to investigate the effect of the proportion of rapidly and slowly decomposable residues in a mixture on the impact of salinity on soil respiration. This study included two experiments with two residues differing in decomposability (slowly decomposable saw dust and rapidly decomposable kikuyu grass). In the first experiment, both residues were added alone and in mixtures with different ratios into four soils having EC₁﹕₅ 0.1, 1.0, 2.5 and 3.3 dS m⁻¹. The addition of 25% of rapidly decomposable residues in mixture with slowly decomposable residues was sufficient to decrease the negative impact of salinity on cumulative respiration compared to the slowly decomposable residue alone. In the second experiment, three soils were used (EC₁﹕₅ 0.1, 1.0 and 2.5 dS m⁻¹); residues were added once or 3 times (on days 0, 14 and 28) to achieve a total of 10 g C kg⁻¹ soil, either with sawdust alone, kikuyu alone or both, where the final proportion of kikuyu was 25%, but the order in which the residues were applied differed. The negative effect of salinity on cumulative respiration was smaller when the rapidly decomposable residue was added early, that is, when the soil contained small amounts of slowly decomposable residues. Salinity reduced soil respiration to a greater extent in treatments where rapidly decomposable residue was added to soil containing larger amounts of slowly decomposable residues. It is concluded that rapidly decomposable residues can alleviate salinity stress on soil microbes even if they make up only a small proportion of the residues added. By promoting greater microbial activity in saline soils and providing nutrients, rapidly decomposable residues could also improve plant growth through increased nutrient availability.
Thesis (Ph.D.) (Research by Publication) -- University of Adelaide, School of Agriculture, Food and Wine, 2015.
APA, Harvard, Vancouver, ISO, and other styles
49

Lu, Chi-Hsuan, and 呂霽軒. "Application of constrained independent component analysis and empirical mode decomposition to diagnose synchronous multiple bearing faults." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/x44dsg.

Full text
Abstract:
Master's
National Chung Hsing University
Department of Mechanical Engineering
106
This study investigates the diagnosis of multiple faults that occur concurrently in a bearing through empirical mode decomposition and constrained independent component analysis. The vibration measurements are first decomposed into several intrinsic mode functions through the empirical mode decomposition method. The intrinsic mode functions that present an obvious amplitude modulation phenomenon are selected to synthesize a new signal. The constrained independent component analysis is employed to extract the signal component which is highly correlated with the bearing fault features. The fast Fourier transform is utilized to obtain the frequency-domain features of the faulted signal, and the extracted features are compared with those derived from the theoretical fault characteristics. The time-domain and frequency-domain characteristics of this independent component are quantified for intelligent diagnosis through a support vector machine classifier.
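A minimal sketch of the first stages of such a pipeline with the third-party PyEMD package (assumed installed) and a Hilbert envelope spectrum; the IMF selection rule here (correlation with the raw signal) is a crude stand-in for the amplitude-modulation test, and the constrained-ICA extraction and SVM classification steps are omitted.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD            # pip install EMD-signal (assumed available)

fs = 12_000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
# toy vibration: 80 Hz carrier amplitude-modulated at a 7 Hz "fault" rate
x = (1 + 0.8 * np.sin(2 * np.pi * 7 * t)) * np.sin(2 * np.pi * 80 * t)
x += 0.3 * rng.normal(size=t.size)

imfs = EMD().emd(x)
corr = [abs(np.corrcoef(imf, x)[0, 1]) for imf in imfs]
synth = imfs[np.argsort(corr)[-2:]].sum(axis=0)    # keep the 2 most signal-like IMFs

env = np.abs(hilbert(synth))                       # demodulated envelope
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(env.size, 1 / fs)
print(freqs[spec.argmax()])                        # peak near the 7 Hz fault rate
```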
APA, Harvard, Vancouver, ISO, and other styles
50

Lassaux, G., and Karen E. Willcox. "Model reduction for active control design using multiple-point Arnoldi methods." 2003. http://hdl.handle.net/1721.1/3702.

Full text
Abstract:
A multiple-point Arnoldi method is derived for model reduction of computational fluid dynamic systems. By choosing the number of frequency interpolation points and the number of Arnoldi vectors at each frequency point, the user can select the accuracy and range of validity of the resulting reduced-order model while balancing computational expense. The multiple-point Arnoldi approach is combined with a singular value decomposition approach similar to that used in the proper orthogonal decomposition method. This additional processing of the basis allows a further reduction in the number of states to be obtained, while retaining a significant computational cost advantage over the proper orthogonal decomposition. Results are presented for a supersonic diffuser subject to mass flow bleed at the wall and perturbations in the incoming flow. The resulting reduced-order models capture the required dynamics accurately while providing a significant reduction in the number of states. The reduced-order models are used to generate transfer function data, which are then used to design a simple feedforward controller. The controller is shown to work effectively at maintaining the average diffuser throat Mach number.
Singapore-MIT Alliance (SMA)
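A minimal sketch of the SVD post-processing step described above, in its proper-orthogonal-decomposition form on a toy snapshot matrix; the multiple-point Arnoldi basis generation itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
modes = rng.normal(size=(200, 5))                 # 5 underlying flow structures
snapshots = modes @ rng.normal(size=(5, 40))      # 40 correlated snapshots
snapshots += 0.01 * rng.normal(size=snapshots.shape)

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.999)) + 1       # modes for 99.9% of the energy
Phi = U[:, :r]                                    # reduced basis (r close to 5)

x_reduced = Phi.T @ snapshots[:, 0]               # project a full state
x_approx = Phi @ x_reduced                        # lift back to full dimension
```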
APA, Harvard, Vancouver, ISO, and other styles