To see the other types of publications on this topic, follow the link: Entropy source.

Dissertations / Theses on the topic 'Entropy source'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Entropy source.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Leone, Nicolò. "A quantum entropy source based on Single Photon Entanglement." Doctoral thesis, Università degli studi di Trento, 2022. https://hdl.handle.net/11572/339572.

Full text
Abstract:
In this thesis, I report on how to use Single Photon Entanglement for generating certified quantum random numbers. Single Photon Entanglement is a particular type of entanglement which involves non-contextual correlations between two degrees of freedom of a single photon; in particular, here I consider momentum and polarization. The presence of the entanglement was validated using different attenuated coherent and incoherent sources of light by evaluating the Bell inequality, a well-known entanglement witness. Different non-idealities in the calculation of the inequality are discussed, addressing them both theoretically and experimentally. Then, I discuss how to use Single Photon Entanglement for generating certified quantum random numbers using a semi-device-independent protocol. The protocol is based on a partial characterization of the experimental setup and the violation of the Bell inequality. An analysis of the non-idealities of the devices employed in the experimental setup is also presented. In the last part of the thesis, the integrated photonic version of the previously introduced experiments is discussed: first, it is presented how to generate single photon entangled states exploiting different degrees of freedom with respect to the bulk experiment; second, I discuss how to perform an integrated test of the Bell inequality.
APA, Harvard, Vancouver, ISO, and other styles
2

Hedrich, Tanguy. "Spatio temporal evaluation of entropy-based source localization in magnetoencephalography." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=121574.

Full text
Abstract:
For 30% of patients affected by focal epilepsy, a surgical intervention is considered to remove the zone of the brain triggering the seizures. Interictal spikes are transient epileptic discharges occurring between seizures. During presurgical investigation of patients affected by focal epilepsy, magnetoencephalography (MEG) source localization is used to detect the brain region where interictal spikes occur. The estimation of the size of the brain region generating the epileptic discharge may be a useful piece of information for the estimation of the resection area. Moreover, the interictal spike may propagate along the cortical surface. It is crucial during presurgical investigation to detect such propagation patterns in order to identify the onset of the epileptic discharges. Maximum entropy on the mean (MEM) is a source localization method which allows one to introduce a reference distribution containing prior information about the MEG data. This reference distribution is based on the assumption that the brain is divided into parcels of generators that can be considered as active or not. It has already been demonstrated that this method is sensitive to the spatial extent of the source. Coherent MEM (cMEM) is a version of MEM adding a spatial smoothness constraint within the parcels. In this thesis, MEM and cMEM were compared to a standard source localization technique: the minimum norm estimate (MNE). The first objective of the thesis was to test the performance of these methods in estimating the time course of a simulated epileptic spike. We simulated interictal spikes propagating along the cortical surface. Real MEG background data were used to corrupt noise-free simulations, thus providing realistic simulated data. We demonstrated that MEM and cMEM provided excellent spatial detection scores and temporal accuracy similar to MNE.
We then evaluated the spatial accuracy of the source localization methods using the resolution matrix, which can be seen as the set of all the point spread functions of a source localization method. MEM and cMEM resolution matrices were estimated using Monte Carlo simulations. The results showed that MEM and cMEM have higher theoretical spatial resolution than MNE. Finally, a method was proposed to obtain a map of activation based on a non-parametric statistical analysis. The map was computed using a bootstrap-based analysis in the time-frequency domain. We found that the results varied with the spatial extent of the source. However, the variance of the estimate of the null distribution seemed to be underestimated, as the statistical map lacked specificity.
APA, Harvard, Vancouver, ISO, and other styles
3

Bafrnec, Matúš. "Hodnocení zdrojů entropie v běžných počítačích." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-413100.

Full text
Abstract:
This thesis is focused on entropy sources and their evaluation. It includes a brief introduction to information theory; a description of entropy sources, their parameters and characteristics; and methods of evaluation based on the NIST standard SP 800-90B. The following part of the thesis is dedicated to the description of two created programs and to the evaluation and comparison of entropy sources. Additionally, the last part describes the usage of hash functions in association with entropy sources.
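NIST SP 800-90B estimates the min-entropy of a noise source from its output samples. Its simplest estimator, the Most Common Value estimate, bounds the probability of the likeliest symbol with a confidence interval; a minimal sketch of that single estimator follows (the full standard combines several estimators and additional tests, so this is only illustrative):

```python
import math
from collections import Counter

def mcv_min_entropy(samples):
    """Most Common Value estimate (SP 800-90B, section 6.3.1): a conservative
    lower bound on min-entropy per sample, in bits."""
    n = len(samples)
    p_hat = Counter(samples).most_common(1)[0][1] / n
    # 99% upper confidence bound on the probability of the likeliest value
    p_u = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1 - p_hat) / (n - 1)))
    return -math.log2(p_u)

print(mcv_min_entropy([0, 1] * 5000))  # close to, but just under, 1 bit
print(mcv_min_entropy([7] * 100))      # a degenerate source yields zero
```

The confidence-bound correction is why even a perfectly balanced bit stream is credited slightly less than 1 bit per sample: the estimate must be safe against sampling luck.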
APA, Harvard, Vancouver, ISO, and other styles
4

Blumenthal, Robert E. "A study of Hadamard transform, DPCM, and entropy coding techniques for a realizable hybrid video source coder /." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65518.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Omer, Mohamoud. "Estimation of regularity and synchronism in parallel biomedical time series." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2017. https://www.cris.uns.ac.rs/record.jsf?recordId=101879&source=NDLTD&language=en.

Full text
Abstract:
Objectives: Self-monitoring in health applications has already been recognized as a part of the mobile crowdsensing concept, where subjects, equipped with adequate sensors, share and extract information for personal or common benefit. Limited data transmission resources force a local analysis at wearable devices, but it is incompatible with analytical tools that require stationary and artifact-free data. The key objective of this thesis is to explain a computationally efficient binarized cross-approximate entropy, (X)BinEn, for blind cardiovascular signal processing in environments where energy and processor resources are limited. Methods: The proposed method is a descendant of cross-approximate entropy ((X)ApEn). It operates over binary differentially encoded data series, split into m-sized binary vectors. Hamming distance is used as a distance measure, while a search for similarities is performed over the vector sets, instead of over the individual vectors. The procedure is tested in laboratory rats exposed to shaker and restraint stress and compared to the existing (X)ApEn results. Results: The number of processor operations is reduced. (X)BinEn captures entropy changes similarly to (X)ApEn. The coding coarseness has an adverse effect of reduced sensitivity, but it attenuates parameter inconsistency and binary bias. A special case of (X)BinEn is equivalent to Shannon entropy. A binary conditional m=1 entropy is embedded into the procedure and can serve as a complementary dynamic measure. Conclusion: (X)BinEn can be applied to a single time series as auto-entropy or, more generally, to a pair of time series, as cross-entropy. It is intended for mobile, battery-operated, self-attached sensing devices with limited power and processor resources.
APA, Harvard, Vancouver, ISO, and other styles
6

Taylor, Quinn Carlson. "Analysis and Characterization of Author Contribution Patterns in Open Source Software Development." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/2971.

Full text
Abstract:
Software development is a process fraught with unpredictability, in part because software is created by people. Human interactions add complexity to development processes, and collaborative development can become a liability if not properly understood and managed. Recent years have seen an increase in the use of data mining techniques on publicly-available repository data with the goal of improving software development processes, and by extension, software quality. In this thesis, we introduce the concept of author entropy as a metric for quantifying interaction and collaboration (both within individual files and across projects), present results from two empirical observational studies of open-source projects, identify and analyze authorship and collaboration patterns within source code, demonstrate techniques for visualizing authorship patterns, and propose avenues for further research.
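Author entropy, as described, can be read as the Shannon entropy of a file's per-line authorship distribution, optionally normalized by the maximum possible entropy so that files with different author counts are comparable. A minimal sketch under that reading; the `line_authors` input, one author label per line, is a hypothetical representation of blame data, not the thesis's actual tooling:

```python
import math
from collections import Counter

def author_entropy(line_authors):
    """Shannon entropy (bits) of per-line authorship in one file:
    0 for a single author, log2(k) when k authors contribute equally."""
    counts = Counter(line_authors)
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def normalized_author_entropy(line_authors):
    """Entropy scaled to [0, 1] by the file's maximum possible entropy."""
    k = len(set(line_authors))
    return author_entropy(line_authors) / math.log2(k) if k > 1 else 0.0

# A file with one dominant author and one minor contributor sits between
# the single-author (0) and even-split (1) extremes:
mixed = ["alice"] * 90 + ["bob"] * 10
print(0 < normalized_author_entropy(mixed) < 1)  # True
```

High values flag files where contributions are interleaved among several authors, which is exactly the collaboration signal the thesis mines from repositories.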
APA, Harvard, Vancouver, ISO, and other styles
7

Tamara, Škorić. "Automatsko određivanje i analitička provera parametara uzajamne entropije kardiovaskularnih vremenskih nizova." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2017. https://www.cris.uns.ac.rs/record.jsf?recordId=104844&source=NDLTD&language=en.

Full text
Abstract:
Cross-approximate entropy (XApEn) quantifies the mutual orderliness of two simultaneously recorded time series. Although derived from the firmly established solitary entropies, it has never reached their reputation and deployment. The aim of this thesis is to identify the problems that preclude wider XApEn implementation and to develop a set of solutions. Results were validated using cardiovascular time series, systolic blood pressure and pulse interval, recorded from laboratory animals, as well as signals recorded from healthy human volunteers.
APA, Harvard, Vancouver, ISO, and other styles
8

Williams, Ryan Scott. "Lean Manufacturing as a Source of Competitive Advantage." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2333.

Full text
Abstract:
The productivity advances generated from lean manufacturing are self-evident. Plants that adopt lean are more capable of achieving high levels of quality, shorter lead times, and less waste in the system. While it seems logical that higher levels of productivity and quality, as is common in lean companies, should result in positive financial performance, the research community has failed to establish the financial profitability of lean. Those researchers who have studied the financial returns issue report varying results. The goal of this research was to determine if a connection exists between lean and financial success and to discover why so many researchers are finding mixed results. Information Velocity (IV) was theorized to provide the solidifying link between lean and financial performance. Measured by combining the environmental volatility with a company's leanness, IV measures how fast a company can transmit information from the market into a customer-satisfying product in the hands of the consumer. This study analyzed over 530 publicly-traded manufacturing companies to validate the following hypotheses: 1) there is a positive relationship between leanness and financial returns, 2) there is a negative relationship between environmental volatility and financial returns, and 3) there is a positive relationship between IV and financial returns. Regression models were run in various combinations to determine the effect of lean, environmental instability, environmental unpredictability, and IV on financial performance indicators such as return on sales (ROS), return on assets (ROA), and quarter-closing stock price. The outcome of this study showed that financial rewards do result from lean, which positively affected financial performance in almost all scenarios. Environmental instability always negatively correlated with financial returns, and IV mostly shows a positive effect, but with mixed results. 
Lastly, IV does not explain why researchers find mixed results on the profitability measures of lean. The results of this thesis highlight the significance of implementing lean manufacturing, especially in a dynamic environment. As the instability in the environment increases, profitability decreases. Therefore, an increase in leanness by boosting inventory turns can compensate for the volatility and create enhanced productivity measures and financial results.
APA, Harvard, Vancouver, ISO, and other styles
9

Miodrag, Petković. "Prilog razvoju metode za detekciju napada ometanjem usluge na Internetu." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2018. https://www.cris.uns.ac.rs/record.jsf?recordId=107396&source=NDLTD&language=en.

Full text
Abstract:
In this thesis a new method for DoS attack detection is proposed. This method combines the use of entropy of some characteristic parameters of network traffic and a Takagi-Sugeno-Kang (TSK) neuro-fuzzy model. Entropy has been used because it enables detection of a wide spectrum of network anomalies caused by DoS attacks, while TSK adds new value to the final detection of the start and the end of an attack, increasing the ratio between true and false detections.
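The entropy half of such a detector can be illustrated by computing the Shannon entropy of a traffic feature over time windows; a deviation from the baseline is the anomaly signal that the TSK model then refines into attack start and end points. A hedged sketch with hypothetical IP-address windows (the feature choice and window contents below are illustrative, not the thesis's dataset):

```python
import math
from collections import Counter

def window_entropy(values):
    """Shannon entropy (bits) of one traffic feature within a time window."""
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

# Hypothetical windows of destination IPs: normal traffic is spread over
# many hosts; a flood concentrates on a single victim, collapsing entropy.
normal = ["10.0.0.%d" % (i % 50) for i in range(1000)]
attack = ["10.0.0.7"] * 950 + ["10.0.0.%d" % i for i in range(50)]

print(window_entropy(attack) < window_entropy(normal))  # True
```

The same statistic computed over source IPs typically moves in the opposite direction under spoofed floods, which is why entropy covers a wide spectrum of anomalies.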
APA, Harvard, Vancouver, ISO, and other styles
10

Cavallini, Alessandro Giorgio. "Lean Six Sigma as a Source of Competitive Advantage." Diss., CLICK HERE for online access, 2008. http://contentdm.lib.byu.edu/ETD/image/etd2656.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Kumar, Pankaj. "Chaos in Pulsed Laminar Flow." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/39260.

Full text
Abstract:
Fluid mixing is a challenging problem in laminar flow systems. Chaotic advection can play an important role in enhancing mixing in such flow. In this thesis, different approaches are used to enhance fluid mixing in two laminar flow systems. In the first system, chaos is generated in a flow between two closely spaced parallel circular plates by pulsed operation of fluid extraction and reinjection through singularities in the domain. A singularity through which fluid is injected (or extracted) is called a source (or a sink). In a bounded domain, one source and one sink with equal strength operate together as a source-sink pair to conserve the fluid volume. Fluid flow between two closely spaced parallel plates is modeled as Hele-Shaw flow with the depth-averaged velocity proportional to the gradient of the pressure. So, with the depth-averaged velocity, the flow between the parallel plates can effectively be modeled as two-dimensional potential flow. This thesis discusses pulsed source-sink systems with two source-sink pairs operating alternately to generate zig-zag trajectories of fluid particles in the domain. For reinjection purposes, fluid extracted through a sink-type singularity can either be relocated to a source-type one, or the same sink-type singularity can be activated as a source to reinject it without relocation. Relocation of fluid can be accomplished using either a "first out first in" or a "last out first in" scheme. Both relocation methods add delay to the pulse time of the system. This thesis analyzes mixing in pulsed source-sink systems both with and without fluid relocation. It is shown that a pulsed source-sink system with the "first out first in" scheme generates comparatively more complex fluid flow than pulsed source-sink systems with the "last out first in" scheme. It is also shown that a pulsed source-sink system without fluid relocation can generate complex fluid flow.
In the second system, mixing and transport are analyzed in a two-dimensional Stokes flow system. Appropriate periodic motions of three rods or periodic points in a two-dimensional flow are determined using the Thurston-Nielsen Classification Theorem (TNCT), which also predicts a lower bound on the complexity generated in the fluid flow. This thesis extends the TNCT-based framework by demonstrating that, in a perturbed system with no lower order fixed points, almost invariant sets are natural objects on which to apply the TNCT. In addition, a method is presented to compute line stretching by tracking appropriate motion of finite-size rods. This method accounts for the effect of the rod size in computing the complexity generated in the fluid flow. The last section verifies the existence of almost invariant sets in a two-dimensional flow at finite Reynolds number. The almost invariant set structures move with appropriate periodic motion, validating the application of the TNCT to predict a lower bound on the complexity generated in the fluid flow.
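The depth-averaged Hele-Shaw model described above reduces to two-dimensional potential flow, where each source or sink contributes a logarithmic term to the complex potential w(z) and a passive particle moves with the conjugate of dw/dz. A minimal sketch of this modeling step, with forward-Euler particle tracking; the strengths, positions, and step sizes are illustrative only:

```python
import cmath

def dwdz(z, singularities):
    """Derivative of the complex potential w(z) = sum Q/(2*pi) * log(z - z0):
    Q > 0 is a source, Q < 0 a sink. Returns u - i*v."""
    return sum(q / (2 * cmath.pi * (z - z0)) for q, z0 in singularities)

def advect(z, singularities, dt, steps):
    """Forward-Euler tracking of a passive particle: dz/dt = conj(dw/dz)."""
    for _ in range(steps):
        z += dt * dwdz(z, singularities).conjugate()
    return z

# One source-sink pair of equal strength: a particle released near the
# source at z = -1 drifts toward the sink at z = +1.
pair = [(1.0, -1.0 + 0j), (-1.0, 1.0 + 0j)]
z_end = advect(-0.9 + 0.1j, pair, dt=0.05, steps=40)
```

Pulsing consists of switching which pair in `pair`-like lists is active on each half-cycle; the alternation of the resulting zig-zag trajectories is what produces chaotic advection.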
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
12

Gordan, Mimić. "Nelinearna dinamička analiza fizičkih procesa u žiivotnoj sredini." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2016. https://www.cris.uns.ac.rs/record.jsf?recordId=101258&source=NDLTD&language=en.

Full text
Abstract:
A coupled system of prognostic equations for the ground surface temperature and the deeper-layer temperature was examined. Lyapunov exponents, bifurcation diagrams, the attractor and the domain of solutions were analyzed. Novel information measures based on Kolmogorov complexity, used for the quantification of randomness in time series, were presented. The novel measures were tested on various time series obtained by measuring physical factors of the environment or produced as climate model outputs.
APA, Harvard, Vancouver, ISO, and other styles
13

Mladen, Kovačević. "Error-Correcting Codes in Spaces of Sets and Multisets and Their Applications in Permutation Channels." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2014. https://www.cris.uns.ac.rs/record.jsf?recordId=85935&source=NDLTD&language=en.

Full text
Abstract:
The thesis studies two communication channels and corresponding error-correcting codes. Multiset codes are introduced and their applications described. Properties of entropy and relative entropy are investigated.
APA, Harvard, Vancouver, ISO, and other styles
14

Krein, Jonathan L. "Programming Language Fragmentation and Developer Productivity: An Empirical Study." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2477.

Full text
Abstract:
In an effort to increase both the quality of software applications and the efficiency with which applications can be written, developers often incorporate multiple programming languages into software projects. Although language specialization arguably introduces benefits, the total impact of the resulting language fragmentation (working concurrently in multiple programming languages) on developer performance is unclear. For instance, developers may solve problems more efficiently when they have multiple language paradigms at their disposal. However, the overhead of maintaining efficiency in more than one language may outweigh those benefits. This thesis represents a first step toward understanding the relationship between language fragmentation and programmer productivity. We address that relationship within two different contexts: 1) the individual developer, and 2) the overall project. Using a data-centered approach, we 1) develop metrics for measuring productivity and language fragmentation, 2) select data suitable for calculating the needed metrics, 3) develop and validate statistical models that isolate the correlation between language fragmentation and individual programmer productivity, 4) develop additional methods to mitigate threats to validity within the developer context, and 5) explore limitations that need to be addressed in future work for effective analysis of language fragmentation within the project context using the SourceForge data set. Finally, we demonstrate that within the open source software development community, SourceForge, language fragmentation is negatively correlated with individual programmer productivity.
APA, Harvard, Vancouver, ISO, and other styles
15

Bojan, Mrazovac. "Novo rešenje za detekciju prisustva i kretanja ljudi u prostorijama na osnovu analize signala u bežičnoj senzorskoj mreži." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2016. http://www.cris.uns.ac.rs/record.jsf?recordId=96979&source=NDLTD&language=en.

Full text
Abstract:
Radio irregularity is a common and non-negligible phenomenon that impacts the connectivity and interference in a wireless network by introducing disturbances in the radio signal's propagation pattern. In order to detect the possible presence of a human subject within an existing radio network sensorlessly, this thesis analyzes the irregularity data expressed in the form of received signal strength variation. The received signal strength variation is decomposed into information, amplitude and frequency characteristics. The combined analysis of these three characteristics enables the definition of a robust and cost-effective device-free human presence detection method that can be exploited for various ambient intelligence solutions, requiring the minimum hardware add-ons necessary for the establishment of a user-aware environment.
APA, Harvard, Vancouver, ISO, and other styles
16

Jovica, Tasevski. "Адаптивне бихевиористичке стратегије у интеракцији између човека и машине у контексту медицинске терапије." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2018. https://www.cris.uns.ac.rs/record.jsf?recordId=107588&source=NDLTD&language=en.

Full text
Abstract:
This doctoral dissertation considers selected aspects of the research problem of specification, design, and implementation of conversational robots as assistive tools in therapy for children with cerebral palsy. This dissertation has made the following contributions: (i) It proposes a general architecture for conversational agents that allows for flexible integration of software modules implementing different functionalities. (ii) It introduces and implements an adaptive behavioural strategy that is applied by the robot in interaction with children. (iii) The proposed dialogue strategy is applied and evaluated in interaction between children and the robot MARKO, in realistic therapeutic settings. (iv) Finally, the dissertation proposes an approach to automatic detection of critical changes in human-machine interaction, based on the notion of normalized interactional entropy.
APA, Harvard, Vancouver, ISO, and other styles
17

Dumitru, Corneliu Octavian. "Noise sources in robust uncompressed video watermarking." Phd thesis, Institut National des Télécommunications, 2010. http://tel.archives-ouvertes.fr/tel-00541755.

Full text
Abstract:
This thesis addresses this theoretical bottleneck for natural video. The scientific contributions developed have made it possible: 1. To mathematically refute the Gaussian model generally adopted in the literature to represent channel noise; 2. To establish, for the first time, the stationarity of the random processes representing channel noise, the method developed being independent of the type of data, of its processing, and of the estimation procedure; 3. To propose a methodology for modelling channel noise as a Gaussian mixture, for both the discrete cosine and the discrete wavelet transforms and for a wide set of attacks (filtering, rotation, compression, StirMark, ...). Among other benefits, this approach allows the exact computation of the channel capacity, where the literature previously provided only upper and lower bounds. 4. The technological contributions concern the integration and implementation of these models in the IProtect watermarking method patented by Institut Télécom/ARTEMIS and SFR, with a speed-up in execution time of a factor of 100 with respect to the state of the art.
APA, Harvard, Vancouver, ISO, and other styles
18

Einar, Marcus. "Implementing and Testing Self-Timed Rings on a FPGA as Entropy Sources." Thesis, Linköpings universitet, Informationskodning, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-119724.

Full text
Abstract:
Random number generators are basic building blocks of modern cryptographic systems. Usually pseudo random number generators, carefully constructed deterministic algorithms that generate seemingly random numbers, are used. These are built upon foundations of thorough mathematical analysis and have been subjected to stringent testing to make sure that they can produce pseudo random sequences at a high bit-rate with good statistical properties. A pseudo random number generator must be initiated with a starting value. Since they are deterministic, the same starting value used twice on the same pseudo random number generator will produce the same seemingly random sequence. Therefore it is of utmost importance that the starting value contains enough entropy so that the output cannot be predicted or reproduced in an attack. To generate a high entropy starting value, a true random number generator that uses sampling of some physical non-deterministic phenomenon to generate entropy can be used. These are generally slower than their pseudo random counterparts but in turn need not generate the same amount of random values. In field programmable gate arrays (FPGAs), generating random numbers is not trivial since they are built upon digital logic. A popular technique to generate entropy within an FPGA is to sample jittery clock signals. A quite recent technique proposed to create robust clock signals that contain such jitter is to use self-timed ring oscillators. These are structures in which several events can propagate freely at an evenly spaced phase distribution. In this thesis self-timed rings of six different lengths are implemented on a specific FPGA hardware. The different implementations are tested with the TestU01 test suite. The results show that two of the implementations have a good oscillatory behaviour that is well suited for use as random number generators. Others exhibit unexpected behaviours that are not suited for use in a random number generator. Two of the implemented random generators passed all tests in the TestU01 batteries Alphabit and BlockAlphabit. One of the generators was deemed not fit for use in a random number generator after failing all of the tests. The last three were not subjected to any tests since they did not behave as expected.
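TestU01 is a C library and the FPGA designs themselves cannot be reproduced here, but the underlying idea of harvesting entropy by sampling a jittery oscillator can be caricatured in software (a toy model with invented parameters, not the implemented hardware):

```python
import random

def jittery_ring_bits(n_bits, period=1.0, jitter_sd=0.05,
                      sample_interval=7.37, seed=1234):
    """Toy model of jitter-based entropy extraction: an oscillator's
    half-period fluctuates (Gaussian jitter) and its binary output is
    sampled at a fixed interval incommensurate with the period."""
    rng = random.Random(seed)
    bits = []
    t_edge = 0.0   # time of the next oscillator transition
    level = 0      # current oscillator output level
    t_sample = 0.0
    for _ in range(n_bits):
        t_sample += sample_interval
        # advance the oscillator, accumulating jitter edge by edge
        while t_edge < t_sample:
            t_edge += max(1e-9, rng.gauss(period / 2, jitter_sd))
            level ^= 1
        bits.append(level)
    return bits
```

Because the jitter accumulates over the many edges between samples, the sampled phase becomes unpredictable, which is the physical effect the thesis's self-timed rings are designed to provide robustly.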
APA, Harvard, Vancouver, ISO, and other styles
19

Casseau, Vincent. "An open-source CFD solver for planetary entry." Thesis, University of Strathclyde, 2017. http://digitool.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=28646.

Full text
Abstract:
hy2Foam is a newly-coded open-source two-temperature computational fluid dynamics (CFD) solver that aims at (1) giving open-source access to a state-of-the-art hypersonic CFD solver to students and researchers; and (2) providing a foundation for a future hybrid CFD-DSMC (direct simulation Monte Carlo) code within the OpenFOAM framework. Benchmarking has firstly been performed for zero-dimensional test cases and hy2Foam has then been shown to produce results in good agreement with previously published data for a 2D-axisymmetric Mach 11 nitrogen flow over a blunted cone and with the dsmcFoam code for a series of Fourier cases and a 2D Mach 20 cylinder flow for a binary reacting mixture. This latter case scenario provides a useful basis for other codes to compare against. hy2Foam and dsmcFoam capabilities have eventually been utilised to derive and to test a new set of chemical rates based on quantum-kinetic theory that could be employed for Earth atmospheric re-entry computations.
APA, Harvard, Vancouver, ISO, and other styles
20

CASTRO, JOSÉ FILHO DA COSTA. "OPERATING RESERVE ASSESSMENT IN MULTI-AREA SYSTEMS WITH RENEWABLE SOURCES VIA CROSS ENTROPY METHOD." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2018. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=36076@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
PROGRAMA DE EXCELENCIA ACADEMICA
The spinning reserve is the portion of the operating reserve provided by generators that are synchronized and connected to the transmission network, capable of supplying the demand in the event of generating unit failures, errors in load forecasting, capacity intermittency of renewable sources or any other unexpected factor. Given its stochastic characteristic, this portion of the operating reserve is more adequately evaluated through methods capable of modeling the uncertainties inherent in its design and planning. Based on the loss-of-load risk, it is possible to compare different configurations of the electrical system, ensuring the non-violation of reliability requirements. Systems with high penetration of renewable sources present a more complex behavior due to the number of uncertainties involved, the strong dependence on energy-climatic factors and the variations in the capacity of these sources. In order to evaluate the temporal correlations and to represent the chronology of occurrence of events in the short term, an estimator based on quasi-sequential Monte Carlo simulation is presented. In short-term operation planning studies, the horizon under analysis ranges from minutes to a few hours. In these cases, equipment failures may have low probability and contingencies that cause load shedding may be rare. Considering the rarity of these events, the risk assessments are based on importance sampling techniques. The simulation parameters are obtained by an adaptive numerical process of stochastic optimization, using the concept of Cross Entropy. This thesis presents a methodology for evaluating the amounts of spinning reserve in systems with high penetration of renewable sources, in a multi-area approach. The risk of loss of load is estimated considering failures in the generation and transmission systems, observing the network restrictions and the power exchange limits between the different electric areas.
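The cross-entropy adaptation of importance-sampling parameters described above can be sketched for a toy single-area system (illustrative only; the unit data, the simplified elite set and the parameter names are our own, not the thesis's multi-area model):

```python
import random

def ce_lolp(capacities, q_fail, demand, n=20000, iters=4, seed=7):
    """Cross-entropy importance sampling for the loss-of-load
    probability P(available capacity < demand), with independent
    unit failures. Sampling failure probabilities are tilted so
    that shortfall events become frequent, then corrected by
    likelihood ratios to keep the estimator unbiased."""
    rng = random.Random(seed)
    m = len(capacities)
    p = list(q_fail)          # tilted (sampling) failure probabilities
    estimate = 0.0
    for _ in range(iters):
        hits = []
        for _ in range(n):
            fails = [rng.random() < p[i] for i in range(m)]
            avail = sum(c for c, f in zip(capacities, fails) if not f)
            if avail < demand:
                # likelihood ratio: true density over sampling density
                w = 1.0
                for i, f in enumerate(fails):
                    w *= q_fail[i] / p[i] if f else (1 - q_fail[i]) / (1 - p[i])
                hits.append((w, fails))
        if not hits:
            # no rare events observed yet; inflate failure probabilities
            p = [min(0.5, 2 * v) for v in p]
            continue
        wsum = sum(w for w, _ in hits)
        estimate = wsum / n
        # CE update: weighted conditional failure frequency per unit
        p = [min(0.9, max(q_fail[i],
                          sum(w for w, f in hits if f[i]) / wsum))
             for i in range(m)]
    return estimate
```

After one or two iterations the tilted probabilities concentrate sampling on shortfall states, which is the variance-reduction effect the abstract describes.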
APA, Harvard, Vancouver, ISO, and other styles
21

More, Kristen M. "CONSIDER THE SOURCE: AN INVESTIGATION INTO PSYCHOLOGICAL CONTRACT FORMATION." Ohio : Ohio University, 2006. http://www.ohiolink.edu/etd/view.cgi?ohiou1163691587.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

RIBEIRO, AYRTON SOARES. "INFERRING THE NATURE OF DETERMINISTIC SOURCES OF REAL SERIES THROUGH PERMUTATION ENTROPY AND STATISTICAL COMPLEXITY MEASURES." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2013. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=23644@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
FUNDAÇÃO DE APOIO À PESQUISA DO ESTADO DO RIO DE JANEIRO
The scope of this dissertation is to infer the character of the forces controlling complex systems modeled by Langevin equations, by recourse to information-theory quantifiers. We evaluate in detail the permutation entropy (PE) and the permutation statistical complexity (PSC) measures for two classes of similarity of stochastic models characterized by drifting and reversion properties, respectively, employing them as a framework for the inspection of real series. We found new relevant model parameters for the PE and PSC measures as compared to standard entropy measures. We determine the PE and PSC curves according to these parameters for different permutation orders n and infer the limiting measures for arbitrarily large order. Although the PSC measure presents a strongly scale-dependent behavior, a key n-invariant outcome arises, enabling one to identify the nature (drifting or reversion) of the deterministic sources underlying the complex signal. We conclude by investigating the presence of local trends in stock price series.
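The permutation entropy referred to here follows the standard Bandt-Pompe construction; a minimal normalized version for a scalar series might look like this (an illustrative sketch, not the dissertation's code):

```python
import math
from itertools import permutations

def permutation_entropy(series, order=3):
    """Normalized Bandt-Pompe permutation entropy: the Shannon
    entropy of the ordinal patterns of length `order`, divided
    by its maximum log(order!)."""
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # ordinal pattern = argsort of the window (stable for ties)
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] += 1
    total = sum(counts.values())
    probs = [c / total for c in counts.values() if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))
```

A monotone series uses a single ordinal pattern and scores 0; a series exploring all patterns uniformly scores 1.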
APA, Harvard, Vancouver, ISO, and other styles
23

MORETTI, RICCARDO. "Digital Nonlinear Oscillators: A Novel Class of Circuits for the Design of Entropy Sources in Programmable Logic Devices." Doctoral thesis, Università di Siena, 2021. http://hdl.handle.net/11365/1144376.

Full text
Abstract:
In recent years, cybersecurity is gaining more and more importance. Cryptography is used in numerous applications, such as authentication and encryption of data in communications, access control to restricted or protected areas, electronic payments. It is safe to assume that the presence of cryptographic systems in future technologies will become increasingly pervasive, leading to a greater demand for energy efficiency, hardware reliability, integration, portability, and security. However, this pervasiveness introduces new challenges: the implementation of conventional cryptographic standards approved by NIST requires the achievement of performance in terms of timing, chip area, power and resource consumption that are not compatible with reduced complexity hardware devices, such as IoT systems. In response to this limitation, lightweight cryptography comes into play - a branch of cryptography that provides tailor-made solutions for resource-limited devices. One of the fundamental classes of cryptographic hardware primitives is represented by Random Number Generators (RNGs), that is, systems that provide sequences of integers that are supposed to be unpredictable. The circuits and systems that implement RNGs can be divided into two categories, namely Pseudo Random Number Generators (PRNGs) and True Random Number Generators (TRNGs). PRNGs are deterministic and possibly periodic finite state machines, capable of generating sequences that appear to be random. In other words, a PRNG is a device that generates and repeats a finite random sequence, saved in memory, or generated by calculation. A TRNG, on the other hand, is a device that generates random numbers based on real stochastic physical processes. Typically, a hardware TRNG consists of a mixed-signal circuit that is classified according to the stochastic process on which it is based. 
Specifically, the most used sources of randomness are chaotic circuits, high jitter oscillators, circuits that measure other stochastic processes. A chaotic circuit is an analog or mixed-signal circuit in which currents and voltages vary over time based on certain mathematical properties. The evolution over time of these currents and voltages can be interpreted as the evolution of the state of a chaotic nonlinear dynamical system. Jitter noise can instead be defined as the deviation of the output signal of an oscillator from its true periodicity, which causes uncertainty in its low-high and high-low transition times. Other possible stochastic processes that a TRNG can use may involve radioactive decay, photon detection, or electronic noise in semiconductor devices. TRNG proposals presented in the literature are typically designed in the form of Application Specific Integrated Circuits (ASICs). On the other hand, in recent years more and more researchers are exploring the possibility of designing TRNGs in Programmable Logic Devices (PLDs). A PLD offers, compared to an ASIC, clear advantages in terms of cost and versatility. At the same time, however, there is currently a widespread lack of trust in these PLD-based architectures, particularly due to strong cryptographic weaknesses found in Ring Oscillator-based solutions. The goal of this thesis is to show how this mistrust does not depend on poor performance in cryptographic terms of solutions for the generation of random numbers based on programmable digital technologies, but rather on a still immature approach in the study of TRNG architectures designed on PLDs. During the thesis chapters a new class of nonlinear circuits based on digital hardware is introduced that can be used as entropy sources for TRNGs implemented in PLDs, identified by the denomination of Digital Nonlinear Oscillators (DNOs). 
In Chapter 2 a novel class of circuits that can be used to design entropy sources for True Random Number Generation, called Digital Nonlinear Oscillators (DNOs), is introduced. DNOs constitute nonlinear dynamical systems capable of supporting complex dynamics in the time-continuous domain, although they are based on purely digital hardware. By virtue of this characteristic, these circuits are suitable for their implementation on Programmable Logic Devices. By focusing the analysis on Digital Nonlinear Oscillators implemented in FPGAs, a preliminary comparison is proposed between three different circuit topologies referable to the introduced class, to demonstrate how circuits of this type can have different characteristics, depending on their dynamical behavior and the hardware implementation. In Chapter 3 a methodology for the analysis and design of Digital Nonlinear Oscillators based on the evaluation of their electronics aspects, their dynamical behavior, and the information they can generate is formalized. The presented methodology makes use of different tools, such as figures of merit, simplified dynamical models, advanced numerical simulations and experimental tests carried out through implementation on FPGA. Each of these tools is analyzed both in its theoretical premises and through explanatory examples. In Chapter 4 the analysis and design methodologies of Digital Nonlinear Oscillators formalized in Chapter 3 are used to describe the complete workflow followed for the design of a novel DNO topology. This DNO is characterized by chaotic dynamical behaviors and can achieve high performance in terms of generated entropy, downstream of a reduced hardware complexity and high sampling frequencies. 
By exploiting the simplified dynamical model, the advanced numerical simulations in Cadence Virtuoso and the FPGA implementation, the presented topology is extensively analyzed both from a theoretical point of view (notable circuit sub-elements that make up the topology, bifurcation diagrams, internal periodicities) and from an experimental point of view (generated entropy, source autocorrelation, sensitivity to routing, application of standard statistical tests). In Chapter 5 an algorithm, called Maximum Worst-Case Entropy Selector (MWCES), that aims to identify, within a set of entropy sources, which offers the best performance in terms of worst-case entropy, also known in literature as "min-entropy", is presented. This algorithm is designed to be implemented in low-complexity digital architectures, suitable for lightweight cryptographic applications, thus allowing online maximization of the performance of a random number generation system based on Digital Nonlinear Oscillators. This chapter presents the theoretical premises underlying the algorithm formulation, some notable examples of its generic application and, finally, considerations related to its hardware implementation in FPGA.
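The MWCES algorithm itself is specified in the thesis; as a purely illustrative software analogue, selecting among candidate entropy sources the one with the best worst-case (min-)entropy, estimated by a plug-in estimator from observed symbols, could look like this (names and estimator are our own assumptions):

```python
import math
from collections import Counter

def min_entropy(samples):
    """Plug-in min-entropy estimate (bits/symbol): -log2 of the
    empirical frequency of the most probable symbol."""
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

def select_best_source(sources):
    """Return the index of the source whose observed output
    has the highest worst-case (min-)entropy."""
    return max(range(len(sources)), key=lambda i: min_entropy(sources[i]))
```

Min-entropy is the conservative figure of merit here because it bounds an attacker's best single-guess success probability, unlike Shannon entropy.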
APA, Harvard, Vancouver, ISO, and other styles
24

Wen, Wen. "The implications of incumbent intellectual property strategies for open source software success and commercialization." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44867.

Full text
Abstract:
There has been little understanding of how the existence and exercise of formal intellectual property rights (IPR) such as patents influence the direction of OSS innovation. This dissertation seeks to bridge this gap in prior literature by focusing on two closely related topics. First, it investigates how OSS adoption and production are influenced by IPR enforcement exercised by proprietary incumbents. It suggests that when an IPR enforcement action is filed, user interest and developer activity will be negatively affected in two types of related OSS projects--those that display technology overlap with the litigated OSS and business projects that are specific to a focal litigated platform. The empirical analyses based on data from SourceForge.net strongly support the hypotheses. Second, it examines the impact of royalty-free patent pools contributed by OSS-friendly incumbents on OSS product entry by start-up firms. It argues that increases in the size of the OSS patent pool related to a software segment will facilitate OSS entry by start-up firms into the same segment; further, the marginal effect of the pool on OSS entry will be especially large in software segments where the cumulativeness of innovation is high or where patent ownership in a segment is concentrated. These hypotheses are empirically tested through examining the impacts of a major OSS patent pool--the Patent Commons, established by IBM and a few others in 2005--on OSS entry by 2,054 start-up firms from 1999 to 2009. The empirical results largely support these hypotheses and are robust to adding a variety of controls as well as to GMM instrumental variables estimation.
APA, Harvard, Vancouver, ISO, and other styles
25

Kouichi, Hamza. "Optimisation de réseaux de capteurs pour la caractérisation de source de rejets atmosphériques." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLE020/document.

Full text
Abstract:
The main objective of this study is to define the methods required to optimize a monitoring network designed for atmospheric source characterization. The optimization consists in determining the optimal number and locations of sensors to be deployed in order to respond to such needs. In this context, the optimization is performed for the first time by coupling the data inversion technique named "renormalization" with metaheuristic optimization algorithms. The inversion method was first evaluated for a point source, and then allowed optimality criteria for network design to be defined. In this study, the optimization process was evaluated in experiments carried out on flat terrain without obstacles (DYCE) and in an idealized urban environment (MUST). Three problems were defined and tested based on these experiments. They concern (i) the determination of the optimal network size for source characterization, for which a cost function (standard errors) estimating the gap between observations and modeled data has been minimized; (ii) the optimal design of a network to retrieve an unknown point source for a particular meteorological condition, for which an entropy cost function has been maximized in order to increase the amount of information provided by the network; (iii) the determination of an optimal network to reconstruct an unknown point source for multiple meteorological configurations, for which a generalized entropic cost function that we have defined has been maximized. For all three problems, optimization is carried out within the framework of a combinatorial optimization approach. The determination of the optimal network size (problem 1) was highly sensitive to the experimental conditions (source height and intensity, stability conditions, wind speed and direction, etc.). We noted that network performance is better for dispersion on flat terrain compared to urban environments. We also showed that different network architectures can converge towards the same optimum (approximate or global). For the reconstruction of unknown sources (problems 2 and 3), the entropic cost functions proved robust and yielded optimal networks of reasonable size capable of characterizing different sources for one or multiple meteorological conditions.
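As an illustration of an entropic design criterion of the kind used in problem (ii), one can select, by exhaustive combinatorial search, the sensor subset whose joint detection signatures best discriminate a finite set of candidate sources (a toy discrete formulation of our own, not the thesis's renormalization-based model):

```python
import math
from itertools import combinations
from collections import Counter

def partition_entropy(signatures):
    """Entropy of the partition a sensor subset induces over the
    candidate sources (higher = better source discrimination)."""
    counts = Counter(signatures)
    total = len(signatures)
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

def best_network(detection, k):
    """Exhaustively pick the k sensors whose joint binary detection
    signatures maximize the entropy over candidate sources.
    detection[s][j] = 1 if sensor j detects source s."""
    n_sensors = len(detection[0])

    def score(subset):
        sigs = [tuple(row[j] for j in subset) for row in detection]
        return partition_entropy(sigs)

    return max(combinations(range(n_sensors), k), key=score)
```

A sensor that detects every candidate source contributes nothing to the entropy of the partition, so the search naturally discards uninformative positions, which is the intuition behind maximizing an entropic cost function.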
APA, Harvard, Vancouver, ISO, and other styles
26

Mester, Gretchen S. "An empirical assessment of entry into the green power market /." view abstract or download file of text, 2004. http://wwwlib.umi.com/cr/uoregon/fullcit?p3153794.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2004.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 93-96). Also available for download via the World Wide Web; free to University of Oregon users.
APA, Harvard, Vancouver, ISO, and other styles
27

Strobio, Chen Lin [Verfasser], Wolfgang [Akademischer Betreuer] [Gutachter] Polifke, and Maria [Gutachter] Heckl. "Scattering and Generation of Acoustic and Entropy Waves across Moving and Fixed Heat Sources / Lin Strobio Chen ; Gutachter: Wolfgang Polifke, Maria Heckl ; Betreuer: Wolfgang Polifke." München : Universitätsbibliothek der TU München, 2016. http://d-nb.info/1142376257/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Karmela, Rakić. "Лексиколошки и лексикографски статус актуелних позајмљеница у српском језику." Phd thesis, Univerzitet u Novom Sadu, Filozofski fakultet u Novom Sadu, 2016. http://www.cris.uns.ac.rs/record.jsf?recordId=101223&source=NDLTD&language=en.

Full text
Abstract:
The thesis discusses the position of the current loanwords in the Serbian language from the lexicological and lexicographical point of view. A corpus of 1,000 loanwords excerpted from the relevant press has been collected for this study. After some basic and introductory comments on the subject and the goal of the research, the first part of the thesis deals with theoretical statements concerning loanwords in lexicological and lexicographical literature. The following segment of the study explains the methodology applied in practice and describes the sources of the excerpted lexis. The next is the analytical part, where the excerpted lexis is analysed on the levels of phonology, morphology, word formation, lexicology, semantics and orthography. This shows the adaptability of loanwords at different linguistic levels and the approach of contemporary Serbian lexicography towards this significant segment of the current lexis of the Serbian language. Specific parts of the thesis are devoted to the issue of composite lexemes and to the structure of the lexicon. The study ends with concluding notes, where an overview of the complete thesis is given and attention is drawn to the open questions. The most extensive part is the lexicon of the current loanwords in the Serbian language. This research has shown that the relevant lexicological and lexicographical literature, as well as lexicographical practice, does not adequately cover the quite layered and complex accumulation of foreign lexis in the contemporary Serbian language.
APA, Harvard, Vancouver, ISO, and other styles
29

Bouchet, Laurent. "Observations avec le téléscope SIGMA des candidats trous noirs, sources de rayonnement Gamma." Toulouse 3, 1992. http://www.theses.fr/1992TOU30267.

Full text
Abstract:
The French telescope SIGMA, put into orbit on 1 December 1989, makes it possible for the first time to produce images of the sky in the hard X-ray domain, with an angular precision of a few arc minutes. This dissertation essentially consists of two parts. The first is devoted to the study of the spectral energy response of the instrument, as well as to the development of methods for reconstructing the energy spectrum of the observed objects. The second part focuses on the analysis and interpretation of the spectral data of the sources observed by the SIGMA telescope. In particular, in the context of a study of black hole candidates, the spectral signature and variability of the following objects are discussed: 1E 1740.7-2942 (Galactic Centre), GRS 1758-258, GRS 1124-68 (Nova Muscae), Cygnus X-1 and GX 339-4. Properties common to black hole candidates, obtained from this sample of sources, make it possible to distinguish them from neutron stars in the absence of a measurement of their mass. In the context of the narrow 511 keV emission observed in the Galactic Centre region, we derive from the SIGMA data some constraints on the models proposed to explain this emission.
APA, Harvard, Vancouver, ISO, and other styles
30

Dong, Bin. "Spatial Separation of Sound Sources." Phd thesis, INSA de Lyon, 2014. http://tel.archives-ouvertes.fr/tel-01059653.

Full text
Abstract:
Blind source separation is a promising technique for the identification, localization, and classification of sound sources. The objective of this thesis is to propose methods for separating incoherent sound sources that may overlap in both the spatial and frequency domains, by exploiting spatial information. Such methods are of interest in acoustic applications requiring the identification and classification of sound sources with different physical origins. The basic principle of all the proposed methods proceeds in two steps, the first being the reconstruction of the source field (for instance by near-field acoustic holography) and the second being blind source separation. Specifically, the complex set of sources is first decomposed into a linear combination of spatial basis functions whose coefficients are defined by back-propagating the pressures measured by a microphone array onto the source domain. This leads to a formulation similar, but not identical, to blind source separation. In the second step, these coefficients are separated into decorrelated latent variables assigned to incoherent "virtual sources". It is shown that the latter are defined up to an arbitrary rotation. A unique set of sound sources is finally resolved by searching for the rotation (by conjugate gradient on the Stiefel manifold of unitary matrices) that minimizes certain spatial criteria, such as spatial variance, spatial entropy, or spatial orthogonality. This leads to the proposal of three separation criteria, namely "least spatial variance", "least spatial entropy", and "spatial decorrelation", respectively.
In addition, the condition under which classical decorrelation (principal component analysis) can solve the problem is rigorously established. The same concept of spatial entropy, which is at the heart of this thesis, is also exploited in the definition of a new criterion, the entropic L-curve, which makes it possible to determine the number of sound sources active on the source domain of interest. The idea is to consider the number of sources that achieves the best trade-off between a low spatial entropy (as expected from compact sources) and a low statistical entropy (as expected from a low residual error). The proposed method is validated both on laboratory experiments and on numerical data, and is illustrated by an industrial example concerning the classification of sound sources on the top face of a Diesel engine. The methodology can also separate, very accurately, sources whose amplitudes are 40 dB below the strongest sources. Robustness with respect to the estimation of the number of active sources, the distance between the source domain of interest and the microphone array, and the size of the aperture function is also successfully demonstrated.
APA, Harvard, Vancouver, ISO, and other styles
31

Saulich, Sven. "Generic design and investigation of solar cooling systems." Thesis, Loughborough University, 2013. https://dspace.lboro.ac.uk/2134/13627.

Full text
Abstract:
This thesis presents work on a holistic approach for improving the overall design of solar cooling systems driven by solar thermal collectors. Newly developed methods for thermodynamic optimization of hydraulics and control were used to redesign an existing pilot plant. Measurements taken from the newly developed system show an 81% increase of the Solar Cooling Efficiency (SCEth) factor compared to the original pilot system. In addition to the improvements in system design, new efficiency factors for benchmarking solar cooling systems are presented. The Solar Supply Efficiency (SSEth) factor provides a means of quantifying the quality of solar thermal charging systems relative to the usable heat to drive the sorption process. The product of the SSEth with the already established COPth of the chiller, leads to the SCEth factor which, for the first time, provides a clear and concise benchmarking method for the overall design of solar cooling systems. Furthermore, the definition of a coefficient of performance, including irreversibilities from energy conversion (COPcon), enables a direct comparison of compression and sorption chiller technology. This new performance metric is applicable to all low-temperature heat-supply machines for direct comparison of different types or technologies. The achieved findings of this work led to an optimized generic design for solar cooling systems, which was successfully transferred to the market.
APA, Harvard, Vancouver, ISO, and other styles
32

Vayá, Salort Carlos. "Characterization and processing of atrial fibrillation episodes by convolutive blind source separation algorithms and nonlinear analysis of spectral features." Doctoral thesis, Universitat Politècnica de València, 2010. http://hdl.handle.net/10251/8416.

Full text
Abstract:
Supraventricular arrhythmias, in particular atrial fibrillation (AF), are the cardiac diseases most commonly encountered in routine clinical practice. The prevalence of AF is below 1% in the population under 60 years of age, but increases significantly from the age of 70, approaching 10% in those over 80. Suffering an episode of sustained AF, besides being linked to a higher mortality rate, increases the probability of thromboembolism, myocardial infarction, and stroke. On the other hand, episodes of paroxysmal AF, the kind that terminates spontaneously, are the precursors of sustained AF, which raises great interest in the scientific community in understanding the mechanisms responsible for perpetuating, or leading to the spontaneous termination of, AF episodes. Analysis of the surface ECG is the most widespread non-invasive technique in the medical diagnosis of cardiac pathologies. In order to use the ECG as a tool for studying AF, the atrial activity (AA) must be separated from the other cardioelectric signals. In this respect, Blind Source Separation (BSS) techniques are able to perform a multi-lead statistical analysis with the aim of recovering a set of independent cardioelectric sources, among which the AA is found. When addressing a BSS problem, it is necessary to consider a mixing model of the sources that is as close as possible to reality in order to develop mathematical algorithms that solve it. A viable model is one that assumes linear mixtures. Within the linear mixture model, one can further impose the restriction that the mixtures be instantaneous. This instantaneous linear mixture model is the one used in Independent Component Analysis (ICA).
Vayá Salort, C. (2010). Characterization and processing of atrial fibrillation episodes by convolutive blind source separation algorithms and nonlinear analysis of spectral features [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8416
APA, Harvard, Vancouver, ISO, and other styles
33

Le, Quang Huy Damien. "Spectroscopic measurements of sub-and supersonic plasma flows for the investigation of atmospheric re-entry shock layer radiation." Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22462/document.

Full text
Abstract:
During planetary atmospheric entries, thermochemical non-equilibrium processes in the shock layer limit the reliability of aerothermal environment prediction. To improve prediction accuracy, non-equilibrium kinetic models are being developed. These models are experimentally assessed through comparison with well-characterized non-equilibrium experiments. For this purpose, the present work is dedicated to the thermodynamic characterization of non-equilibrium in high-enthalpy reactive flows. In contrast to common studies that employ short-duration facilities (shock tubes) to investigate shock-layer kinetics, we assess the possibility of producing significant departure from equilibrium using radio-frequency and microwave stationary plasma flows, including supersonic plasma flows where vibrational non-equilibrium is strongly expected. Suitable spectroscopic diagnostics have been applied, allowing future comparisons to be made between the microscopic description of the experiments and theoretical non-equilibrium models.
APA, Harvard, Vancouver, ISO, and other styles
34

Clark, Steven James. "A market entry strategy of Metso for the biomass-based power generation solutions market in South Africa." Thesis, Stellenbosch : Stellenbosch University, 2011. http://hdl.handle.net/10019.1/80475.

Full text
Abstract:
Thesis (MBA)--Stellenbosch University, 2011.
The global energy industry is actively moving toward renewable energy sources in order to meet the ever-increasing demand for energy in a sustainable manner. The South African government, however, has only recently begun creating an environment which is truly conducive to investment into the renewable energy industry. Metso, a Finnish multi-national corporation, has a strong global position in the field of biomass-based power generation for heat, power or combined heat and power applications. The corporation has developed a modular biomass-based power generation solution for power generation in the 3MW to 10MW range, which is highly automated and can essentially operate without the need for extensive human intervention and is known as the Metso Bio-energy Solution. Considering the current state of the South African energy environment, Metso management requested the researcher to investigate the opportunities that exist in the South African market for Metso’s Bio-energy Solution, and to propose a market entry strategy which Metso should follow in order to enter the South African market. In the findings, the researcher observed that South Africa has a clear potential for the development of a bio-energy industry for power generation, although the limited availability of biomass in certain regions and the various harvesting methods in industries such as the forestry and sugar industries do restrict the access to this resource. The municipal solid waste industry appears to be an area of interest as well, although very little information exists regarding the volumes of waste available and sorting practices, which may be required in order to access these resources. Interviews were held with experts in the field of energy, renewable energy and energy policy in order to obtain opinions on the market potential for Metso’s Bio-energy Solution. The general perception of all interviewees was that the technology has its place within the South African energy mix. 
The interviewees, however, did confirm that there currently appears to be a major focus on wind and solar energy in the country, although biomass technology may well be a better solution due to its baseload capabilities. It was found that the local policy environment, the lack of government initiative on renewable energy licensing and unclear tariff structures have all inhibited the proliferation of the renewable energy industry. In many cases, frustration with power outages and policy delays has caused companies to invest in biomass co-firing facilities for their own consumption. The factors for success for biomass-based technologies in the South African market would appear to be directly linked to job creation potential, access to reliable and sustainable biomass resources and access to investment capital, from both private equity and the state. It is the recommendation of the researcher that Metso enters into a joint venture with a large international environmental finance company, which would base their business model on the technology provided by Metso, whilst securing the political and financial support for projects of this nature in the country.
APA, Harvard, Vancouver, ISO, and other styles
35

Lorusso, Marise Miglioli. "COMUNICAÇÃO EM LINHA E RUÍDOS SEMÂNTICOS NA RECUPERAÇÃO DE INFORMAÇÕES EM PESQUISAS CIENTÍFICAS." Universidade Metodista de São Paulo, 2007. http://tede.metodista.br/jspui/handle/tede/800.

Full text
Abstract:
Made available in DSpace on 2016-08-03T12:30:36Z (GMT). No. of bitstreams: 1 CAPA.pdf: 6587 bytes, checksum: 2b1e4893b77f4a03ffa888fb1f5d6d23 (MD5) Previous issue date: 2007-08-14
Scientific researchers need precise information, in good time, to conclude their work. With the advent of the Internet, the process of on-line man-machine communication, mediated by search engines, has become both an aid and a hindrance in the process of information retrieval. Researchers have had to adapt to the way the Internet operates, taking account of idiomatic and terminological differences, and to use tools that provide parameters for obtaining greater pertinence and relevance in the data. The use of intelligent agents to improve results and reduce semantic noise has been proposed as a solution for increasing the precision of search results. The exploratory case study carried out here analyses on-line searching from the standpoint of information theory and proposes two ways of optimizing the communication process with a view to the pertinence and relevance of the data obtained: the first suggests the application of algorithms that use a controlled vocabulary as mediator of the communication process, employing descriptors for on-line retrieval; the second underlines the importance of intelligent agents in the process of man-machine communication.
APA, Harvard, Vancouver, ISO, and other styles
36

Cure, Vellojin Laila Nadime. "Analytical Methods to Support Risk Identification and Analysis in Healthcare Systems." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3054.

Full text
Abstract:
Healthcare systems require continuous monitoring of risk to prevent adverse events. Risk analysis is a time consuming activity that depends on the background of analysts and available data. Patient safety data is often incomplete and biased. This research proposes systematic approaches to monitor risk in healthcare using available patient safety data. The methodologies combine traditional healthcare risk analysis methods with safety theory concepts, in an innovative manner, to allocate available evidence to potential risk sources throughout the system. We propose the use of data mining to analyze near-miss reports and guide the identification of risk sources. In addition, we propose a Maximum-Entropy based approach to monitor risk sources and prioritize investigation efforts accordingly. The products of this research are intended to facilitate risk analysis and allow for timely identification of risks to prevent harm to patients.
APA, Harvard, Vancouver, ISO, and other styles
37

Mbarki, Omar. "Etude des contributions enthalpiques et entropiques à la variation d'énergie libre redox de protéines de transfert d'électrons : cytochromes et protéines fer-soufre." Aix-Marseille 1, 1995. http://www.theses.fr/1995AIX11021.

Full text
Abstract:
In biological electron-transfer systems, the redox potential of a given type of centre varies greatly from one protein to another. To understand the origin of these variations, we measured the intrinsic entropic component ΔS'rc and the enthalpic component ΔH' of the redox free-energy change for a series of c-type and b-type cytochromes and for iron-sulfur proteins, using a non-isothermal potentiometric set-up. In the case of monohaem cytochromes c and b, the observed potential variation of 400 mV is due as much to entropic as to enthalpic effects. Experiments at varying ionic strength performed on cytochrome c553 from D. vulgaris Hildenborough show that the negative value of ΔS'rc is due to solvation effects. These effects are attributed to interactions between the solvent dipoles and the haem propionates, a hypothesis confirmed by the correlation between the solvent exposure of the propionates and the value of ΔS'rc in this series of cytochromes. The degree of solvent exposure of the propionates could also modulate the enthalpic component ΔH'. In the case of the tetrahaem cytochromes c3, large variations of the thermodynamic parameters are observed from one haem to another. These variations are due to the factors identified in monohaem cytochromes, but also to redox interactions between the haems of a single molecule. The results obtained for the iron-sulfur proteins also reveal the existence of significant entropic components. Experiments at varying ionic strength on the 2Fe-2S ferredoxin from the cyanobacterium Spirulina maxima show a complex behaviour whose analysis requires further study.
APA, Harvard, Vancouver, ISO, and other styles
38

Duarte, Leonardo Tomazeli 1982. "Um estudo sobre separação cega de fontes e contribuições ao caso de misturas não-lineares." [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261781.

Full text
Abstract:
Orientadores: João Marcos Travassos Romano, Romis Ribeiro de Faissol Attux
Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação
Made available in DSpace on 2018-08-06T23:03:11Z (GMT). No. of bitstreams: 1 Duarte_LeonardoTomazeli_M.pdf: 2778720 bytes, checksum: ff42018b4aa2d824cd1f001655a42ddf (MD5) Previous issue date: 2006
Abstract: The aim of this work is to study the problem of blind source separation (BSS). In the first part, the classical case in which the mixing system is of a linear nature is considered. Afterwards, the nonlinear extension of the BSS problem is addressed. In particular, an important class of nonlinear models, the post-nonlinear (PNL) models, is emphasized. In order to overcome a problem related to convergence to local minima in the training of a PNL separating system, a novel technique is proposed. The bases of the solution are the application of an evolutionary algorithm in the training stage and the use of an entropy estimator based on order statistics. The efficacy of the proposal is attested by simulations conducted in different scenarios.
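The entropy estimator based on order statistics mentioned in this abstract is, in the BSS literature, typically a spacing estimator in the style of Vasicek. As a hedged sketch (the dissertation's exact estimator is not specified here; the function name and the window-size heuristic are illustrative assumptions), a differential entropy estimate from sorted samples can be computed as follows:

```python
import math
import random

def vasicek_entropy(samples, m=None):
    """Spacing-based differential entropy estimate (Vasicek-style).

    Sorts the sample and averages the log of the normalized spacings
    between order statistics x[i+m] and x[i-m], clamping at the edges.
    """
    n = len(samples)
    if m is None:
        m = max(1, round(math.sqrt(n) / 2))  # common heuristic window size
    x = sorted(samples)
    total = 0.0
    for i in range(n):
        lo = x[max(i - m, 0)]       # clamp lower index at the first sample
        hi = x[min(i + m, n - 1)]   # clamp upper index at the last sample
        total += math.log(n * (hi - lo) / (2 * m))
    return total / n

# Uniform(0, 1) samples have differential entropy 0 (in nats), so the
# estimate should be close to zero for a large sample.
random.seed(0)
est = vasicek_entropy([random.random() for _ in range(5000)])
```

Estimators of this kind avoid fitting an explicit density model, which is one reason they pair well with evolutionary or gradient-based training of separating systems.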
Mestrado
Telecomunicações e Telemática
Mestre em Engenharia Elétrica
APA, Harvard, Vancouver, ISO, and other styles
39

Kovaříková, Lenka. "Hodnocení investičních možností PRE do malých vodních elektráren." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-75487.

Full text
Abstract:
In recent years, greater emphasis has been placed on renewable energy sources that are environmentally friendly. Demand for renewable energy sources is supported by government incentives, which has led to an increasing pace of investment in this sector. Prices for electricity from renewable sources are currently far higher than the prices of conventional electricity; if the current trend continues, it is only a matter of time before they converge. In view of this, the PRE Company decided to invest in hydropower. This thesis deals with identifying potential investment opportunities in small hydropower plants from the PRE point of view and the subsequent evaluation of these variants on the basis of multi-criteria decision making.
APA, Harvard, Vancouver, ISO, and other styles
40

Krysta, Monika. "Modélisation numérique et assimilation de données de la dispersion de radionucléides en champ proche et à l'échelle continentale." Phd thesis, Université Paris XII Val de Marne, 2006. http://tel.archives-ouvertes.fr/tel-00652840.

Full text
Abstract:
The prediction of the consequences of radioactive releases into the atmosphere relies on dispersion models that provide analytical solutions (pX 0.1, developed by IRSN/ECL) or numerical solutions (Polair3D, developed at CEREA) of the advection-diffusion equation. The models rely on meteorological fields of limited resolution and incorporate depletion processes of the radioactive plume whose description is imperfect. To work around these difficulties, this thesis exploits the opportunities offered by coupling models with measurements, known as data assimilation. First, the models used are confronted with observations. The latter are provided by experiments conducted with passive tracers or collected following releases of radionuclides. Dispersion over a scale model of the Bugey power plant in a wind tunnel in the near field, the ETEX-I experiment on the continental scale, and the accidental releases of Chernobyl and Algeciras are studied in this work. Second, the dispersion models are combined with measurements in order to improve the assessment of the consequences of radioactive releases and to invert their sources when the quality of the model allows it. In the near field, the standard variational approach is used. The adjoint of pX 0.1 is built with an automatic differentiation tool. The measurements collected in the wind tunnel are assimilated in order to invert the source rate. In order to obtain better agreement between the model and the measurements, the optimization of the parameters governing the spatial distribution of the plume is then addressed. The benefit of using measurements depends on their information content. Thus, an a posteriori analysis of the measurement network is carried out and possibilities of reducing its size for operational applications are explored.
On the continental scale, despite restricting attention to grid cells containing nuclear sites, the space spanned by the sources is of considerably larger dimension than the observation space. Consequently, the inverse problem of reconstructing the position and temporal profile of the sources is ill-posed. This problem is regularized using the principle of maximum entropy on the mean. New cost functions taking into account the spatio-temporal confinement of accidental sources are constructed in the observation space. The link between the sought source and the measurements is described by the adjoint of Polair3D. The observations assimilated by the method consist of synthetic measurements, either perfect or noisy. A series of experiments focusing on the sensitivities of the method is conducted and their quality is evaluated using an objective indicator. An algorithm for reducing the number of suspected sites for operational applications is tested. Finally, the results of the inversion of the temporal profile of the source of the Algeciras accident are presented.
APA, Harvard, Vancouver, ISO, and other styles
41

Bai, Hong-Yi, and 白宏益. "Detection of Biomagnetic Source by the Method of Maximum Entropy." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/42333064575228684044.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Physics
91
This thesis applies a novel approach to the detection of active sources in the cortex from magnetic field measurements on the scalp in magnetoencephalography (MEG). In order to recover the neuronal activation, an ill-posed inverse problem must be solved. Our approach to the solution is the framework of Maximum Entropy on the Mean. A reference probability measure on the random variables is our means of regularizing the singular nature of the problem. The variables are the intensities of current sources in the cortical region, which incorporate all available prior information that helps in the regularization. We also introduce a hidden Markov random variable as part of the prior information. The Maximum Entropy method is used again within this framework. Finally, we demonstrate how the method is applied in a practical detection of cerebral activity by analysing simulated MEG data.
APA, Harvard, Vancouver, ISO, and other styles
42

Lin, Yu-Chun, and 林昱均. "Entropy Generation Analysis due to Pumping Power and Heat Source of a Microchannel Heat Sink." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/35055450368819137961.

Full text
Abstract:
Master's thesis
Tatung University
Department of Mechanical Engineering
100
In this study, the three-dimensional heat transfer in a microchannel heat sink is analysed numerically. Entropy generation is applied to analyse the overall efficiency of the heat sink and is compared with the overall performance given by thermal resistance. The entropy generation caused by the thermal resistance of the heat sink and by the pressure loss of the fluid flow in the flow passage is calculated. Based on the balance between thermal performance and fluid flow, an optimum geometry of the microchannel heat sink is found. This method is an integrated approach to the performance of a microchannel heat sink that also accounts for the pressure drop. The effects of channel-width ratio and inlet velocity on thermal resistance, pressure drop, and entropy generation are studied under fixed flow rate, fixed pressure drop, and fixed inlet velocity. Through the entropy generation and the optimum geometric configuration for different channel numbers, it is found that the case of N=100 channels is better than the other cases for a fixed flow rate. When the pressure drop is fixed, inappropriate pressure-drop ranges can be excluded; the entropy generation is calculated for comparison with the pressure drop. In addition, the effects of the mean velocity on both flow resistance and thermal resistance are also calculated. Within the range of the present study, it is found that a velocity of 2 m/s yields the minimum entropy generation. The goal of microchannel heat-sink design is to save fluid-flow energy while obtaining the best heat dissipation; entropy generation is therefore a good indicator for analysing the performance and efficiency of a microchannel heat sink.
APA, Harvard, Vancouver, ISO, and other styles
43

Yu-Wei, Wang. "High order joint source channel decoding and conditional entropy encoding novel bridging techniques between performance and complexity /." 2004. http://etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-444/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Huang, Bo-Cheng, and 黃柏誠. "An Efficient Approach to Detect DDoS Attack in Software-defined Networking Architecture based on the Entropy of Source IP Address." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/x3debt.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
105
The pattern of distributed denial-of-service (DDoS) attacks on the Internet has drawn a lot of attention recently. Although the methodology for launching a DDoS attack is simple, the attack is not easy to avoid because of the distributed nature of its packets and the blurry rules of attacking methods. With the emergence of software-defined networking (SDN), there are many flexible applications for defending against network attacks on the Internet. Conventionally, users rely on hardware support to filter traffic and figure out attack behaviour, but they can neither change the settings in the switch nor even take action when they find that an attack is happening. If users want to establish prevention methods, they have to modify the firmware of the switch or use the GUI applications developed by the manufacturers to satisfy their needs. In the past, some researchers have used entropy to detect DDoS attacks and analysed flows according to timestamps. Others have adopted statistical methods to differentiate normal from abnormal flows on different routers and to identify packets sent by attackers. Whether entropy or statistical methods are used, both are effective for detecting attacks in many respects. The proposed research combines chunk-based entropy with statistical methods to construct an improved mathematical model for DDoS attacks in an SDN environment. The proposed method can analyse statistical values in a short time and investigate the difference between normal packets and attacking packets. When an attack happens, we can restore the network by modifying the flow table in the switch. The proposed research conducts attack/defense experiments through an SDN environment, servers and zombies to prove the practicality of the proposed model. With Python scripts customized for the proposed method, we attack victims in our own SDN environment, and the chunk size can be obtained automatically by pretesting.
With an appropriate chunk size, we can detect attacks within just 1 second by observing whether the p-value is smaller than 0.05. With the proposed chunk size and a fast simulation of a normal networking environment, the false alarm rate is about 0.2%.
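The core idea of the abstract — computing entropy over fixed-size chunks of traffic and flagging chunks that deviate from a normal baseline — can be sketched as follows. This is not the thesis's code; the function names, the use of source IPs as the observed feature, and the simple z-score test are illustrative assumptions (the thesis uses a p-value test on statistical values).

```python
import math
from collections import Counter

def chunk_entropy(packets):
    """Shannon entropy (bits) of source addresses in one chunk of packets."""
    counts = Counter(packets)
    total = len(packets)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def is_attack(chunk, baseline_mean, baseline_std, z_threshold=3.0):
    """Flag a chunk whose entropy deviates strongly from the normal baseline.

    A flood from many spoofed sources raises source-IP entropy, so the
    chunk entropy drifts away from what normal traffic produces.
    """
    h = chunk_entropy(chunk)
    return abs(h - baseline_mean) > z_threshold * baseline_std

# Normal traffic: a handful of hosts talking repeatedly.
normal = ["10.0.0.1", "10.0.0.2", "10.0.0.1", "10.0.0.3"] * 16
# Attack traffic: every packet from a different spoofed source.
attack = [f"10.1.{i // 256}.{i % 256}" for i in range(64)]

baseline = chunk_entropy(normal)
alarm = is_attack(attack, baseline_mean=baseline, baseline_std=0.1)
```

In an SDN setting, a controller would run such a check per chunk and, on an alarm, push a blocking rule into the switch's flow table.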
APA, Harvard, Vancouver, ISO, and other styles
45

Samuel, Sindhu. "Digital rights management (DRM) - watermark encoding scheme for JPEG images." Diss., 2008. http://hdl.handle.net/2263/27910.

Full text
Abstract:
The aim of this dissertation is to develop a new algorithm to embed a watermark in JPEG compressed images, using encoding methods. This encompasses the embedding of proprietary information, such as identity and authentication bitstrings, into the compressed material. This watermark encoding scheme combines entropy coding with homophonic coding in order to embed a watermark in a JPEG image. Arithmetic coding was used as the entropy encoder for this scheme. It is often desirable to obtain a robust digital watermarking method that does not distort the digital image, even if this implies that the image is slightly expanded in size before final compression. In this dissertation an algorithm that combines homophonic and arithmetic coding for JPEG images was developed and implemented in software. A detailed analysis of this algorithm is given, together with the compression (in number of bits) obtained when using the newly developed algorithm (homophonic and arithmetic coding). This research shows that homophonic coding can be used to embed a watermark in a JPEG image by using the watermark information for the selection of the homophones. The proposed algorithm can thus be viewed as a ‘key-less’ encryption technique, where an external bitstring is used as a ‘key’ and is embedded intrinsically into the message stream. The algorithm creates JPEG images with minimal distortion, with Peak Signal to Noise Ratios (PSNR) above 35 dB. The resulting increase in the entropy of the file is within the expected 2 bits per symbol. This research consequently provides a unique watermarking technique for images compressed using the JPEG standard.
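The principle described here — letting watermark bits choose among homophones (alternative codewords for the same symbol) — can be illustrated with a toy sketch. This is not the dissertation's arithmetic-coding implementation; the two-homophone alphabet and the function names are invented for illustration.

```python
def embed_watermark(message, watermark_bits, homophones):
    """Encode message symbols, letting successive watermark bits pick
    which homophone (alternative codeword) represents each symbol.
    Returns the codewords and the number of watermark bits embedded."""
    out, i = [], 0
    for sym in message:
        options = homophones[sym]
        if len(options) > 1 and i < len(watermark_bits):
            out.append(options[watermark_bits[i]])
            i += 1
        else:
            out.append(options[0])
    return out, i

def extract_watermark(codewords, homophones):
    """Recover watermark bits by checking which homophone was used."""
    inverse = {cw: (sym, idx) for sym, opts in homophones.items()
               for idx, cw in enumerate(opts)}
    bits = []
    for cw in codewords:
        sym, idx = inverse[cw]
        if len(homophones[sym]) > 1:
            bits.append(idx)
    return bits

# Each symbol has two equally valid codewords (homophones).
homophones = {"a": ["00", "01"], "b": ["10", "11"]}
codewords, n = embed_watermark("abba", [1, 0, 1], homophones)
```

Because either homophone decodes to the same symbol, the cover message is unchanged; only a party who knows the homophone table (and the watermark length) can read the embedded bitstring.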
Dissertation (MEng)--University of Pretoria, 2008.
Electrical, Electronic and Computer Engineering
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
46

Srivastava, Gautam. "Pseudorandom number generators using multiple sources of entropy." Thesis, 2006. http://hdl.handle.net/1828/2096.

Full text
Abstract:
Randomness is an important part of computer science. A large body of work, in both theoretical and practical computer science, is dedicated to the study of whether true 'randomness' is necessary for a variety of applications and protocols to work. One of the main uses for randomness is in the generation of keys, used as a security measure for many cryptographic protocols. Randomness is mainly measured by entropy, a measure of the disorder of a system. Nature is able to provide us with many sources that are high in entropy. However, many cryptographic protocols need sources of randomness that are stronger (higher in entropy) than what is present naturally to ensure security. Therefore, a gap exists between what is available in Nature and what is necessary for provable security. This thesis looks to bridge this gap. Research in pseudorandom number generation has gone on for decades. However, many past constructions were lacking in either documentation or provable security of their methods. The need for a pseudorandom number generator (PRG) with provable security and strong documentation is evident. A new construction of a PRG is introduced. The new construction, labeled XRNG, looks to encompass recent research in the field of extractors along with previously known research in the field of pseudorandom number generation. Extractors, as the name suggests, look to extract close-to-random information from high-entropy sources.
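The extract-then-expand pipeline the abstract describes can be sketched generically. This is not the XRNG construction from the thesis; modelling the extractor as a salted hash and the PRG as counter-mode hashing are textbook stand-ins chosen for the sketch.

```python
import hashlib

def extract(noisy_bytes: bytes, salt: bytes) -> bytes:
    """Extractor stage: condense a high-entropy but biased input into a
    short, nearly uniform seed (modelled here by a salted SHA-256)."""
    return hashlib.sha256(salt + noisy_bytes).digest()

def prg(seed: bytes, n_bytes: int) -> bytes:
    """Expansion stage: stretch a uniform seed into a long pseudorandom
    stream by hashing the seed with an incrementing counter."""
    out = b""
    counter = 0
    while len(out) < n_bytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n_bytes]

# A biased physical measurement (e.g. keystroke timings) feeds the extractor.
seed = extract(b"timings:13,37,41,58", salt=b"demo-salt")
stream = prg(seed, 64)
```

The division of labour mirrors the gap described above: Nature supplies entropy that is high but imperfect; the extractor cleans it up, and the PRG makes it plentiful.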
APA, Harvard, Vancouver, ISO, and other styles
47

Chen, Liang-Ling, and 陳亮霖. "The analysis Job Burnout Sources, Buffers and Communication Approaches with Entropy Theory." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/05330717602730069733.

Full text
Abstract:
Master's thesis
National Kaohsiung First University of Science and Technology
Graduate Institute of Marketing and Distribution Management
91
There have already been many studies about job burnout, but few apply theories from other fields to analyse its characteristics. Therefore, this study explains job burnout by analogy with entropy, a well-known notion in physics, and the researcher aims to show the importance of job burnout to enterprises, individuals and families. For data processing, the Hartley entropy formula replaced each respondent's numeral with an information quantity. This method gives scale data a clear meaning, and the Hartley entropy formula also solves the problem that arises with Likert scales when items are combined to form a variable. The samples of this study are frontline employees in service departments: 162 couples from the electronics and finance industries. The results show that the source of job burnout is the work environment, and that job burnout also influences marital burnout. Besides, the perceived social support of frontline service employees buffers the link between the work environment and job burnout. Based on these results, the researcher suggests that enterprises should not attribute job burnout to individual factors or ignore its influence when frontline employees experience burnout. Enterprises need to survey work environment quality and take more responsibility for the prevention and cure of job burnout, so that they can mitigate its effects.
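The Hartley measure mentioned above has a simple additive property that helps when Likert items are combined. The sketch below states only that mathematical fact; how the thesis actually maps individual responses to information quantities is not reproduced here.

```python
import math

def hartley_entropy(n_outcomes: int) -> float:
    """Hartley entropy H0 = log2(n) bits for n equally possible outcomes."""
    return math.log2(n_outcomes)

# A single 5-point Likert item can carry at most log2(5) bits.
one_item = hartley_entropy(5)

# Combining k items multiplies the outcome space (5**k answer patterns),
# so the Hartley measure adds across items -- one reason it is convenient
# when several Likert items are merged into one variable.
three_items = hartley_entropy(5 ** 3)
```

Because the measure is additive over independent items, a combined variable's information content is just the sum of its items' contributions, avoiding the usual ambiguity of summing raw Likert scores.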
APA, Harvard, Vancouver, ISO, and other styles
48

"Contrast properties of entropic criteria for blind source separation : a unifying framework based on information-theoretic inequalities." Université catholique de Louvain, 2007. http://edoc.bib.ucl.ac.be:81/ETD-db/collection/available/BelnUcetd-02162007-112342/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Shyng, Chii-Wen, and 邢啟文. "The Source of Competitive Advantage: The New Product Entry Strategy and the Marketing Mix Perspective -- Analysis of Part-Time MBA Students in Taiwan and China with a Marketing Operation Strategy Simulation." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/98166433794773012769.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Executive Master Program in Business Management, College of Management
94
A key subject of business management is new product entry strategy. Firms invest huge resources in developing new products and expect them to be the driver that keeps the firm evergreen. This study examines the success factors of new product entry strategy from the perspectives of first-mover advantage and the marketing mix, using data from a computer-based marketing operation strategy simulation run by part-time MBA students from a Taiwanese national university and a Chinese management school. The study explores new product entry strategy and compares the two schools by examining the sources of first-mover advantage, including entry order, success order, periods prior to success, channel strategy, pricing strategy and marketing expense strategy. Although only two of the seven hypotheses are significant, we can still draw the following inferences from the simulation results. First, entry order alone is not sufficient to explain first-mover advantage. Second, the speed with which economies of scale are achieved is a source of first-mover advantage. Third, Taiwanese students become strong first and then big; their strategy and management are flexible. Chinese students become big first and then strong; they aim to take the whole market, so they build up capacity all at once. There are differences in new product entry strategy between the Taiwanese students, who come from a highly competitive business environment, and the Chinese students, who come from an environment of planned economy and world factories.
APA, Harvard, Vancouver, ISO, and other styles
50

Hagenhoferová, Lucie. "K problémům členění na významy v překladovém slovníku." Master's thesis, 2014. http://www.nusl.cz/ntk/nusl-341332.

Full text
Abstract:
This thesis deals with dividing the dictionary entry into several "sub-meanings", i.e. with sense division, and with the closely related ordering of these "sub-meanings", i.e. with sense ordering, with the bilingual passive German-Czech dictionary at the centre of interest. The thesis also deals briefly with discriminating the senses by different means, i.e. with sense discrimination. With these three subjects, which chronologically follow the lexicographic decisions, the matter of equivalence and the understanding of the term "meaning" are correlated. The theoretical part introduces the specifics of sense division and sense ordering in monolingual and bilingual lexicography; the practical part exemplifies the possibilities of sense division and sense ordering with ten chosen substantive lemmas prepared for the Large German-Czech Academic Dictionary currently in progress. For every lemma the most suitable arrangement is suggested and then compared with the corresponding dictionary entry in the source dictionary Duden - Deutsches Universalwörterbuch. The differences between the arrangements of the microstructure illustrate the necessity of revising and eventually modifying the adopted structure...
APA, Harvard, Vancouver, ISO, and other styles
