Dissertations / Theses on the topic 'Optical design technique'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Optical design technique.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Sunay, Ahmet Sertac. "Analysis And Design Of Passive Microwave And Optical Devices Using The Multimode Interference Technique." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12606456/index.pdf.

Abstract:
The Multimode Interference (MMI) mechanism is a powerful tool used in the analysis and design of a certain class of optical, microwave and millimeter-wave devices. The principles of the MMI method and the self-imaging principle are described. Using this method, N×M MMI couplers and MMI splitters/combiners are analyzed. Computer simulations illustrating the multimode interference mechanism are carried out. The MMI approach is used to analyze overmoded rectangular metallic and dielectric slab waveguides and devices. The application of the MMI technique is investigated experimentally using a metallic waveguide structure operating in the X-band. The construction of this structure and the related experimental work are reported.
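To make the self-imaging principle concrete, here is a minimal sketch based on the standard paraxial beat-length formula for a multimode section; the wavelength, core index and effective MMI width are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch: first-order MMI self-imaging lengths (Soldano-style formulas).
def beat_length(n_r, w_e, lam0):
    """Beat length L_pi = pi/(beta0 - beta1) ~ 4*n_r*w_e**2/(3*lam0)."""
    return 4.0 * n_r * w_e**2 / (3.0 * lam0)

lam0 = 1.55e-6           # free-space wavelength [m] (assumed)
n_r, w_e = 3.2, 12e-6    # assumed core index and effective MMI width [m]
L_pi = beat_length(n_r, w_e, lam0)

# For a center-fed (symmetric-interference) MMI, N-fold images appear at
# z = p * 3*L_pi/(4*N), p = 1, 2, ...
for N in (1, 2, 4):
    print(f"{N}-way split at ~{3 * L_pi / (4 * N) * 1e6:.0f} um")
```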
2

Mital, Rashmi. "Design and demonstration of a novel optical true time delay technique using polynomial cells based on White cells." Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1111161542.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xiii, 195 p.; also includes graphics (some col.). Includes bibliographical references (p. 190-195). Available online via OhioLINK's ETD Center.
3

Barutcu, Burcu. "The Design And Production Of Interference Edge Filters With Plasma Ion Assisted Deposition Technique For A Space Camera." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614574/index.pdf.

Abstract:
Interference filters are multilayer thin-film devices. They use interference effects between the incident and reflected radiation waves at each layer interface to select wavelengths. The production of interference filters depends on the precise deposition of thin material layers on substrates that have suitable optical properties. In this thesis, the main target is to design and produce two optical filters (a short-pass filter and a long-pass filter) for the CCDs that will be used in the electronics of a space camera. By means of these filters, it is possible to take images in different bands (RGB and NIR) with two identical CCDs. The filters will be fabricated by the plasma ion-assisted deposition technique.
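As a sketch of the interference mechanism described above, the characteristic-matrix method below computes the reflectance of a simple quarter-wave stack at normal incidence; the layer materials and design wavelength are assumptions for illustration, not the thesis's filter designs.

```python
# Hedged sketch: characteristic-matrix reflectance of a quarter-wave HL stack
# at normal incidence; materials and design wavelength are assumptions.
import numpy as np

def layer_matrix(n, d, lam):
    delta = 2 * np.pi * n * d / lam                 # phase thickness
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def reflectance(stack, n_in, n_sub, lam):
    M = np.eye(2, dtype=complex)
    for n, d in stack:
        M = M @ layer_matrix(n, d, lam)
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

lam0 = 550e-9                                       # design wavelength (assumed)
nH, nL = 2.35, 1.46                                 # e.g. TiO2/SiO2 (assumed)
stack = [(nH, lam0 / (4 * nH)), (nL, lam0 / (4 * nL))] * 8
for lam in (450e-9, 550e-9, 650e-9):
    print(f"R({lam * 1e9:.0f} nm) = {reflectance(stack, 1.0, 1.52, lam):.3f}")
```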
4

Bignell, Allan M. "Photonic bus and photonic mesh networks: design techniques in extremely high speed networks." McMaster only, 1997.

5

Muretto, Giovanni <1976>. "Design and control techniques of optical networks." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2007. http://amsdottorato.unibo.it/403/1/Tesidott.pdf.

Abstract:
The world of communication has changed quickly in the last decade, resulting in a rapid increase in the pace of people's lives. This is due to the explosion of mobile communication and the internet, which has now reached all levels of society. With such pressure for access to communication there is increased demand for bandwidth. Photonic technology is the right solution for high-speed networks that have to supply wide bandwidth to new communication service providers. In particular, this Ph.D. dissertation deals with DWDM optical packet-switched networks. The subject introduces a huge quantity of problems, from the physical layer up to the transport layer; here it is tackled from the network-level perspective. The long-term solution represented by optical packet switching has been fully explored in these years together with the Network Research Group at the Department of Electronics, Computer Science and Systems of the University of Bologna. Several national and international projects supported this research, such as the Network of Excellence (NoE) e-Photon/ONe, funded by the European Commission in the Sixth Framework Programme, and the INTREPIDO project (End-to-end Traffic Engineering and Protection for IP over DWDM Optical Networks), funded by the Italian Ministry of Education, University and Scientific Research. Optical packet switching for DWDM networks is studied at the single-node level as well as at the network level. In particular, the techniques discussed are meant to be implemented in a long-haul transport network that connects local and metropolitan networks around the world. The main issues faced are contention resolution in an asynchronous, variable-packet-length environment, adaptive routing, wavelength conversion and node architecture. Characteristics that a network must assure, such as quality of service and resilience, are also explored at both the node and network level. Results are evaluated mainly via simulation and through analysis.
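A toy illustration of one issue listed above, contention resolution by wavelength conversion: a packet that finds its wavelength busy is converted to any free wavelength and is dropped only when all are busy. All rates and packet-length statistics are assumptions, not the thesis's models.

```python
# Hedged sketch: packet loss at one output port with full wavelength conversion;
# Poisson arrivals and exponential packet lengths are modeling assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_wl, load, n_pkts = 8, 0.7, 50_000
busy_until = np.zeros(n_wl)     # time each wavelength becomes free

t, dropped = 0.0, 0
for _ in range(n_pkts):
    t += rng.exponential(1.0 / (load * n_wl))    # next arrival
    wl = rng.integers(n_wl)                      # packet's native wavelength
    if busy_until[wl] > t:                       # contention: try conversion
        free = np.flatnonzero(busy_until <= t)
        if free.size == 0:
            dropped += 1                         # no conversion target: drop
            continue
        wl = free[0]
    busy_until[wl] = t + rng.exponential(1.0)    # variable packet length
print(f"packet drop fraction: {dropped / n_pkts:.4f}")
```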
6

Feng, Ning-Ning, and Huang Wei-Ping. "Modeling, simulation and design techniques for high-density complex photonic integrated devices and circuits." McMaster only, 2005.

7

Dale, Brian M. "Optimal Design of MR Image Acquisition Techniques." Case Western Reserve University School of Graduate Studies / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=case1081556784.

8

Luís, Ruben Soares. "Design and optimization of optical routing techniques and devices." Doctoral thesis, Universidade de Aveiro, 2007. http://hdl.handle.net/10773/2212.

Abstract:
Doctorate in Electrical Engineering
This work presents three main studies regarding the development and application of advanced optical monitoring systems based on the analysis of asynchronous amplitude histograms, the wavelength conversion of ultra-high bit-rate signals, and the impact of fiber nonlinearities in systems employing advanced transmission techniques. It is shown that asynchronous amplitude histograms may be numerically compared with reference histograms to extract information regarding the quality of, and the noise degrading, the signal under analysis. The proposed method is validated through numerical simulation and experiment. An analytical model to compute the frequency limitations of cross-phase modulation (XPM) in all-optical fiber wavelength converters is proposed and validated using numerical simulation at modulation frequencies exceeding 1 THz. The proposed model allows deriving engineering rules for the dimensioning of wavelength converters using nonlinear optical loop mirrors. A novel filter design to optimize the conversion of XPM-induced phase modulation into intensity modulation is proposed and validated using numerical simulation. Numerical simulation is used to evaluate the impact of fiber nonlinearities on the transmission of 10 Gb/s single-sideband signals in links using concentrated electrical or optical dispersion compensation. It is shown that intra-channel fiber nonlinearities severely degrade the performance. The degradation of 40 Gb/s differential phase-shift keying (DPSK) signals due to XPM with legacy amplitude-shift keying signals is also analyzed. A pump-probe analysis shows that the signal degradation results from XPM-induced intensity modulation. This allows deriving and validating a novel analytical model to estimate the bit-error probability of the XPM-degraded DPSK signals. Finally, an analytical small-perturbations approach to the study of intra-channel fiber nonlinearities in signals with finite extinction ratio is presented. It allows the identification of two new forms of degradation, taking the form of impulses between symbols and of amplitude and temporal jitter.
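A minimal sketch of the histogram-based monitoring idea above: an asynchronous amplitude histogram of the received samples is compared against a set of reference histograms, and the closest reference gives the noise estimate. The two-level signal model, noise grid and chi-square distance are assumptions for illustration.

```python
# Hedged sketch: asynchronous-histogram comparison against references.
import numpy as np

rng = np.random.default_rng(4)
bins = np.linspace(-0.5, 1.5, 65)    # 64 amplitude bins

def histogram(sigma, n=50_000):
    bits = rng.integers(0, 2, n).astype(float)      # asynchronous samples of a
    samples = bits + rng.normal(0.0, sigma, n)      # noisy two-level signal
    h, _ = np.histogram(samples, bins=bins, density=True)
    return h

refs = {s: histogram(s) for s in (0.05, 0.10, 0.15, 0.20)}  # reference set
measured = histogram(0.12)                                  # "unknown" signal

def chi2(h, g, eps=1e-9):
    return np.sum((h - g) ** 2 / (h + g + eps))

best = min(refs, key=lambda s: chi2(measured, refs[s]))
print("estimated noise level:", best)
```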
9

Burcklen, Marie-Anne. "Conception conjointe de l'optique et du traitement pour l'optimisation des systèmes d'imagerie." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLO001.

Abstract:
Imaging systems are now complex instruments where lens, sensor and digital processing interact strongly together. In order to obtain better imaging performance than conventional imaging, it has become necessary to take this interaction into account at the design stage and therefore to optimize the optical and digital parameters jointly. The objective of my thesis is to develop joint optical-digital optimization methods in order to obtain imaging systems with higher performance and lower complexity. I first considered extending the depth of field of an already existing lens. A binary phase mask was inserted in the vicinity of the aperture stop of an f/1.2 lens, and it was optimized jointly with a deconvolution filter using the restored-image-quality criterion. The increase in depth of field was quantified, and modulation transfer function measurements proved experimentally the efficiency of this unconventional imaging system. During this first study only the phase mask was optimized. To further increase the imaging system's efficiency, all the optical parameters need to be optimized. However, optical design is a complex problem in which specific constraints have to be taken into account and for which one needs to use dedicated software; in this thesis I used the Code V optical design software. Since the image quality-based optimization cannot be easily implemented in this type of software, I proposed a new criterion. It is based on classical optical optimization criteria used in Code V that have been modified in order to take deconvolution into account in an implicit manner. This design method was first validated on the optimization of a phase mask for depth-of-field extension of an already existing lens. Results were similar to those given by the image quality-based optimization. This method was then used to enhance a very fast f/0.75 lens: by modifying its optical parameters, the lens was simplified and the image quality was homogenized over the field. Finally, I applied this joint design method to solve the important problem of the thermal sensitivity of an 8-12 µm infrared system. Using this method I designed from scratch several types of short and long focal length athermalized lenses. The obtained lenses are simpler than conventionally athermalized ones while having similar or even higher imaging performance.
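The following sketch illustrates a restored-image-quality criterion of the kind described above, assumed here to be the mean squared error after Wiener deconvolution averaged over a defocus range; the scene spectrum, noise level and toy Gaussian OTFs are illustrative, not data from the thesis.

```python
# Hedged sketch: restored-image MSE criterion for joint optics/DSP design.
import numpy as np

f = np.fft.fftfreq(256)                       # normalized spatial frequency
S = 1.0 / (1e-3 + (2 * np.pi * f) ** 2)       # assumed scene power spectrum
sigma2 = 1e-4                                 # assumed detector noise power

def restored_mse(otfs):
    h_mean = otfs.mean(axis=0)                # one Wiener filter for the range
    W = np.conj(h_mean) * S / (np.abs(h_mean) ** 2 * S + sigma2)
    errs = [np.mean(np.abs(1 - W * h) ** 2 * S + np.abs(W) ** 2 * sigma2)
            for h in otfs]
    return np.mean(errs)                      # average over the defocus range

# Toy OTF family over defocus psi (stand-in for the phase-mask parameter).
otfs = np.array([np.exp(-(f * 40) ** 2 * (1 + psi ** 2)) for psi in (-1, 0, 1)])
print("criterion value:", restored_mse(otfs))
```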
10

Emerton, Neil. "Design and fabrication techniques for surface relief diffractive optical elements." Thesis, Imperial College London, 1986. http://hdl.handle.net/10044/1/38000.

11

Ghazisaeidi, Amirhossein. "Advanced Numerical Techniques for Design and Optimization of Optical Links Employing Nonlinear Semiconductor Optical Amplifiers." Thesis, Université Laval, 2011. http://www.theses.ulaval.ca/2011/27541/27541.pdf.

12

Aydogdu, Selcuk. "Near Infrared Interference Filter Design And The Production With Ion-Assisted Deposition Techniques." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614092/index.pdf.

Abstract:
The near infrared (NIR) region of the electromagnetic (EM) spectrum is defined as the 700 nm to 1400 nm wavelength interval by the International Commission on Illumination (CIE). This wavelength interval is used extensively for target acquisition, night vision, wireless communication, etc. Therefore, filtering the desired portion of the EM spectrum becomes a need for such applications. Interference filters are multilayer optical devices which can be designed and produced for the desired wavelength intervals. The production of near infrared interference filters is a process of depositing thin material layers on suitable substrates. In this thesis, a multilayer NIR filter is designed for a selected wavelength interval by the use of different materials. Then, the transmission quality, thermal stability, dependence of the transmission values on the incoming beam angle, and the performance and durability of the filter are studied.
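As a small illustration of the angle dependence mentioned above, the standard first-order formula below shifts a filter's center wavelength with incidence angle; the design wavelength and effective index are assumed values.

```python
# Hedged sketch: first-order blue-shift of a filter passband with angle,
# lambda(theta) = lambda0 * sqrt(1 - (sin(theta)/n_eff)**2); values assumed.
import numpy as np

lam0, n_eff = 1064e-9, 1.8            # assumed center wavelength / eff. index
for deg in (0, 10, 20, 30):
    lam = lam0 * np.sqrt(1 - (np.sin(np.radians(deg)) / n_eff) ** 2)
    print(f"{deg:>2d} deg -> center {lam * 1e9:.1f} nm")
```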
13

Liu, Cheng. "Advanced system design and signal processing techniques for converged high-speed optical and wireless applications." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49058.

Abstract:
The ever-increasing data traffic demand drives the evolution of telecommunication networks, including the last-mile access networks as well as the long-haul backbone networks. This Ph.D. dissertation focuses on system design and signal processing techniques for next-generation converged optical-wireless access systems and high-speed long-haul coherent optical communication systems. The convergence of high-speed millimeter-wave wireless communications and high-capacity fiber-optic backhaul networks provides tremendous potential to meet the capacity requirements of future access networks. In this work, a cloud-radio-over-fiber access architecture is proposed. The proposed architecture enables a large-scale small-cell system to be deployed in a cost-effective, power-efficient, and flexible way. Based on the proposed architecture, a multi-service reconfigurable small-cell backhaul network is developed and demonstrated experimentally. Additionally, the combination of high-speed millimeter-wave radio and fiber-optic backhaul is investigated. Several novel methods that enable high-spectral-efficiency vector signal transmission in millimeter-wave radio-over-fiber systems are proposed and demonstrated through both theoretical analysis and experimental verification. For long-haul core networks, ultra-high-speed optical communication systems which can support 1 Terabit/s per channel transmission will soon be required to meet the increasing capacity demand in the core networks. Grouping a number of tightly spaced optical subcarriers to form a terabit superchannel has been considered a promising solution to increase channel capacity while minimizing the need for high-level modulation formats and high baud rates. Conventionally, precise spectral control at the transmitter side is required to avoid strong inter-channel interference (ICI) at tight channel spacing. In this work, a novel receiver-side approach based on a “super receiver” architecture is proposed and demonstrated. By jointly detecting and demodulating multiple channels simultaneously, the penalties associated with the limitations of generating ideal spectra can be mitigated. Several joint DSP algorithms are developed for linear ICI cancellation and joint carrier-phase recovery. Performance analysis under different system configurations is conducted to demonstrate the feasibility and robustness of the proposed joint DSP algorithms, and improved system performance is observed with both experimental and simulation data.
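A toy sketch of the linear ICI cancellation idea behind the “super receiver”: jointly detected neighboring subchannels are modeled as y = Hx + n, and a zero-forcing solve removes the linear crosstalk. The coupling matrix and noise level are illustrative assumptions, not the dissertation's algorithms.

```python
# Hedged sketch: zero-forcing ICI cancellation across three jointly detected
# subchannels, y = H x + n; coupling and noise levels are assumptions.
import numpy as np

rng = np.random.default_rng(2)
x = rng.choice([-1, 1], (3, 1000)) + 1j * rng.choice([-1, 1], (3, 1000))  # QPSK
H = np.eye(3) + 0.2 * (np.eye(3, k=1) + np.eye(3, k=-1))  # adjacent-channel ICI
noise = 0.05 * (rng.standard_normal((3, 1000))
                + 1j * rng.standard_normal((3, 1000)))
y = H @ x + noise

x_hat = np.linalg.solve(H, y)          # joint zero-forcing across subchannels
rms = np.sqrt(np.mean(np.abs(x_hat - x) ** 2))
print(f"post-cancellation residual rms error: {rms:.3f}")
```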
14

Liou, K. S. "Improvement in automatic lens design techniques." Thesis, University of Reading, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.355846.

15

Engevik, Erlend L. "Optimal Design of Tidal Power Generator Using Stochastic Optimization Techniques." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for elkraftteknikk, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-27239.

Abstract:
Particle Swarm Optimization (PSO) and Genetic Algorithms (GA) are used to reduce the cost of a permanent magnet synchronous generator with concentrated windings for tidal power applications. Reducing the cost of the electrical machine is one way of making tidal energy more competitive compared to traditional sources of electricity. Hybrid optimization combining PSO or GA with gradient-based algorithms seems to be suited for the design of electrical machines. Results from optimization with Matlab indicate that hybrid GA performs better than hybrid PSO for this kind of optimization problem. Hybrid GA shows better convergence, less variance and shorter computation time than hybrid PSO. Hybrid GA managed to converge to an average generator cost that is 5.2% lower than what was reached by the hybrid PSO. Optimization results show a variance that is 98.6% lower for hybrid GA than for hybrid PSO. Moving from a pure GA optimization to the hybrid version reduced the average cost by 31.2%. Parallel processing features are able to reduce the computation time of each optimization by up to 97% for large problems. The time it took to compute a GA problem with 2500 individuals was reduced from 12 hours to 21 minutes by switching from a single-processor computer to a computer with 48 processor cores. The run time for PSO with 400 particles and 100 iterations went from 18.5 hours to 74 minutes, a 93% reduction.
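A minimal sketch of the hybrid-GA idea reported above: a crude genetic search locates a promising basin, and a gradient-based routine polishes the result. The two-variable cost function is a toy stand-in for the generator cost model.

```python
# Hedged sketch: hybrid GA = genetic global search + gradient polish.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def cost(x):  # toy multimodal "machine cost" over two design variables
    return ((x[0] - 1) ** 2 + (x[1] + 2) ** 2
            + 0.3 * np.sin(5 * x[0]) * np.cos(5 * x[1]))

def ga(pop_size=40, gens=60, lo=-5.0, hi=5.0):
    pop = rng.uniform(lo, hi, (pop_size, 2))
    for _ in range(gens):
        fit = np.apply_along_axis(cost, 1, pop)
        parents = pop[np.argsort(fit)][: pop_size // 2]   # truncation selection
        idx = rng.integers(0, len(parents), pop_size - len(parents))
        children = parents[idx] + rng.normal(0.0, 0.1, (len(idx), 2))  # mutation
        pop = np.vstack([parents, children])
    return pop[np.argmin(np.apply_along_axis(cost, 1, pop))]

x_ga = ga()                                     # global stage
x_opt = minimize(cost, x_ga, method="BFGS").x   # local gradient polish
print(x_ga, "->", x_opt, f"cost {cost(x_opt):.4f}")
```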
16

Wan, Li. "Modeling and optimal design of annular array based ultrasound pulse-echo system." Link to electronic thesis, 2001. http://www.wpi.edu/Pubs/ETD/Available/etd-0418101-100413/.

Abstract:
Thesis (M.S.)--Worcester Polytechnic Institute.
Title from title screen. Keywords: optimal design; modeling; object identification; ultrasound pulse-echo system; annular array. Includes bibliographical references (p. 159-162).
17

Soan, Peter Humphrey. "Transformation techniques in optimal design problems with application to harbour shapes." Thesis, Kingston University, 1990. http://eprints.kingston.ac.uk/20545/.

Abstract:
In this thesis the previously unsolved problem of finding the shape of a harbour which minimises the wave height inside is considered. This is a problem of optimal shape design in which the wave height variation is given by a partial differential equation defined over the harbour region and the optimisation is performed with respect to the shape of the boundary of this same region. A brief review of methods that have been used in the past to solve similar problems is given. It is noted that here, in contrast to most previous approaches which have been iterative, the solution of the partial differential equation and the domain optimisation proceed simultaneously. A simple two-dimensional model of a harbour, with part of the boundary unknown, is formulated in Cartesian co-ordinates. A co-ordinate transformation, in terms of an unknown parameter, is selected which maps all admissible harbour regions onto a semi-circular domain with unit radius. Using the method of Lagrange the shape optimisation problem is then formulated as a nonlinear variational problem over a known, fixed, domain. The equivalence of the transformed and original problems is demonstrated by noting that both lead to the same set of necessary conditions for a solution. The finite element method is applied to the optimisation problem, leading to a set of algebraic equations which are solved using an implementation of the Marquardt Algorithm. The convergence of this method, under certain conditions, is demonstrated. Results for the optimal harbour shape from these calculations are presented. Improvements in the smoothness of the solution with increasing mesh resolution and higher order finite elements are noted. In addition, all solutions show similar general features. Hence it is concluded that the problem has been successfully solved.
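As a sketch of the final solution step described above, the discretized necessary conditions form a nonlinear algebraic system F(z) = 0 that a Levenberg-Marquardt routine can solve; the two-equation residual below is a toy stand-in for the finite element system.

```python
# Hedged sketch: Levenberg-Marquardt solve of a toy nonlinear system.
import numpy as np
from scipy.optimize import least_squares

def residual(z):
    # stand-in for [PDE residuals; shape-optimality conditions]
    return np.array([z[0] ** 2 + z[1] - 1.0,
                     z[0] - np.cos(z[1])])

sol = least_squares(residual, x0=np.array([0.5, 0.5]), method="lm")
print("z* =", sol.x, " |F| =", np.linalg.norm(sol.fun))
```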
18

Howlett, Isela D., Wanglei Han, Michael Gordon, Photini Rice, Jennifer K. Barton, and Raymond K. Kostuk. "Volume holographic imaging endoscopic design and construction techniques." Society of Photo-Optical Instrumentation Engineers (SPIE), 2017. http://hdl.handle.net/10150/624713.

Abstract:
A reflectance volume holographic imaging (VHI) endoscope has been designed for simultaneous in vivo imaging of surface and subsurface tissue structures. Prior utilization of VHI systems has been limited to ex vivo tissue imaging. The VHI system presented in this work is designed for laparoscopic use. It consists of a probe section that relays light from the tissue sample to a handheld unit that contains the VHI microscope. The probe section is constructed from gradient-index (GRIN) lenses that form a 1:1 relay for image collection. The probe has an outer diameter of 3.8 mm and is capable of achieving 228.1 lp/mm resolution with 660-nm Köhler illumination. The handheld optical section operates with a magnification of 13.9 and a field of view of 390 μm × 244 μm. System performance is assessed through imaging of 1951 USAF resolution targets and soft tissue samples. The system has also passed sterilization procedures required for surgical use and has been used in two laparoscopic surgical procedures. © 2017 Society of Photo-Optical Instrumentation Engineers (SPIE).
19

Wang, Y. "Analytical derivatives and other techniques for improving the effectiveness of automatic optical design." Thesis, University of Reading, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.371442.

20

Butt, Sajid Ullah. "Design and modelling of a fixturing system for an optimal balancing of a part family." Paris, ENSAM, 2012. http://www.theses.fr/2012ENAM0022.

Abstract:
Dimensional errors of the parts of a part family cause initial misplacement of the workpiece on the fixture, affecting the final product quality. Even if the part is positioned correctly, the external machining forces and the clamping load cause the part to deviate from this initial position, depending on that external load and the stiffness of the fixture. In this thesis, a comprehensive analytical model of a 3-2-1 fixturing system, consisting of a kinematic and a mechanical model, is proposed. The kinematic model relocates the initially misplaced workpiece in the machine reference frame through the axial advancements of the six locators, considering all the fixturing elements to be rigid. This repositioned part is again displaced from the corrected position by the clamping and machining forces. The mechanical model calculates this displacement of the part, considering the locators and clamps to be elastic. The rigid cuboid baseplate, used to precisely relocate the workpiece, is also considered elastic at the contacts with the locators. The non-linear behavior of the contact deformation is linearized by iterating on the deformation of the locators until the required precision is attained. Using the small-displacement hypothesis with zero friction at the contacts, a Lagrangian formulation enables us to calculate the rigid-body displacement of the workpiece, the deformation of each locator following minimum energy, and the stiffness matrix and mechanical behavior of the fixturing system. This displacement of the workpiece is again compensated by the advancement of the six axial locators calculated through the kinematic model.
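The kinematic step above admits a compact linear sketch: each locator with contact point r_i and unit normal n_i relates the part's small twist (dt, dth) to an axial advancement through n_i · (dt + dth × r_i), giving a 6×6 system for a 3-2-1 scheme. The geometry and misplacement below are assumed examples, not the thesis fixture.

```python
# Hedged sketch: 3-2-1 locating kinematics with assumed geometry.
import numpy as np

# Contact points r_i and unit normals n_i: three Z (base), two Y (side), one X.
r = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
              [0, 0, 0.5], [1, 0, 0.5],
              [0, 0.5, 0.5]], dtype=float)
n = np.array([[0, 0, 1]] * 3 + [[0, 1, 0]] * 2 + [[1, 0, 0]], dtype=float)

# Advancement of locator i for a small part twist (dt, dth):
#   c_i = n_i . dt + (r_i x n_i) . dth
A = np.hstack([n, np.cross(r, n)])                         # 6x6 locating matrix
twist = np.array([1e-3, -2e-3, 0.5e-3, 1e-3, 0.0, 2e-3])   # measured misplacement
c = A @ twist                                              # corrective advancements
print("locator advancements:", np.round(c, 5))
```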
21

Al-Bizri, N. "Aberration theory and design techniques for refracting prism systems." Thesis, University of Reading, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233134.

22

Noble, Christopher Aaron. "Analytical and Numerical Techniques for the Optimal Design of Mineral Separation Circuits." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/23224.

Abstract:
The design of mineral processing circuits is a complex, open-ended process. While several tools and methodologies are available, extensive data collection accompanied by trial-and-error simulation is often the predominant technical measure utilized throughout the process. Unfortunately, this approach often produces sub-optimal solutions while squandering time and financial resources. This work proposes several new and refined methodologies intended to assist during all stages of circuit design. First, an algorithm has been developed to automatically determine circuit analytical solutions from a user-defined circuit configuration. This analytical solution may then be used to rank circuits by traditional derivative-based linear circuit analysis or by one of several newly proposed objective functions, including a yield indicator (the yield score) and a value-based indicator (the moment of inertia). Second, this work presents a four-reactor flotation model which considers both process kinetics and machine carrying capacity. The simulator is suitable for scaling laboratory data to predict full-scale performance. By first using circuit analysis to reduce the number of design alternatives, experimental and simulation efforts may be focused on those configurations which have the best likelihood of enhanced performance while meeting secondary process objectives. Finally, this work verifies the circuit analysis methodology through a virtual experimental analysis of 17 circuit configurations. A hypothetical electrostatic separator was implemented in a dynamic physics-based discrete element modeling environment. The virtual experiment was used to quantify the selectivity of each circuit configuration, and the final results validate the initial circuit analysis projections.
Ph. D.
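A minimal sketch of linear circuit analysis of the kind used above: each separation unit passes a fraction p of a species to its concentrate, and the steady-state flows of an assumed rougher-cleaner circuit (cleaner tails recycled to the rougher feed) solve a small linear system.

```python
# Hedged sketch: steady-state species flows in an assumed rougher-cleaner circuit.
import numpy as np

def circuit_recovery(p):
    # unknowns: f_r (rougher feed), f_c (cleaner feed = rougher concentrate)
    #   f_r = 1 + (1 - p) * f_c      (fresh feed + recycled cleaner tails)
    #   f_c = p * f_r
    A = np.array([[1.0, -(1.0 - p)],
                  [-p, 1.0]])
    f_r, f_c = np.linalg.solve(A, np.array([1.0, 0.0]))
    return p * f_c          # circuit concentrate = cleaner concentrate

for p in (0.2, 0.5, 0.9):   # unit transfer of, e.g., gangue vs. valuable mineral
    print(f"unit p = {p:.1f} -> circuit recovery {circuit_recovery(p):.3f}")
```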
23

Boonnithivorakul, Nattapong. "Optimal Control Design for Polynomial Nonlinear Systems Using Sum of Squares Technique with Guaranteed Local Optimality." OpenSIUC, 2010. https://opensiuc.lib.siu.edu/dissertations/149.

Abstract:
Optimal control design and implementation for nonlinear systems is a topic of much interest. However, unlike for linear systems, an explicit analytical solution for optimal feedback control of nonlinear systems is not available. Numerical techniques, on the other hand, can be used to approximate the solution of the Hamilton-Jacobi-Bellman (HJB) equation to find the optimal control. In this research, a computational approach is developed for finding the optimal control for nonlinear systems with polynomial vector fields, based on the sum of squares technique. The approach follows a four-step procedure to obtain both local and approximate global optimality. In the first step, a locally optimal control is found by using the linearization method and solving the algebraic Riccati equation with respect to the quadratic part of a given performance index. Next, the density function method is used to find a globally stabilizing polynomial nonlinear control for the nonlinear system. In the third step, a corresponding Lyapunov function for the control designed in the previous steps is found based on the Hamilton-Jacobi inequality by using semidefinite programming. Finally, to achieve global optimality, the pair of nonlinear control and Lyapunov function is iteratively updated based on a state-dependent polynomial matrix inequality. Numerical examples illustrate the effectiveness of the design approach.
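A sketch of the first of the four steps above: linearize the polynomial system at the origin and solve the algebraic Riccati equation for the locally optimal gain. The toy plant and weights are assumptions for illustration.

```python
# Hedged sketch of step one: linearization + algebraic Riccati equation (LQR).
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy polynomial plant: x1' = x2, x2' = -x1 + x1**3 + u; Jacobian at 0:
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])   # quadratic part of the performance index

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)       # u = -K @ x stabilizes the linearization
print("local LQR gain K =", K)
```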
24

Vazquez, Javier. "Analysis and design of planar active and passive quasi-optical components using new FDTD techniques." Thesis, Queen Mary, University of London, 2002. http://qmro.qmul.ac.uk/xmlui/handle/123456789/28583.

Abstract:
New quasi-optical sensor technology, based on the millimetre and submillimetre bands of the electromagnetic spectrum, is currently being implemented for many commercial and scientific applications such as remote sensing, astronomy, collision avoidance radar, etc. These novel devices make use of integrated active and passive structures, usually as planar arrays. The electromagnetic design and computer simulation of these new structures requires novel numerical techniques. The Finite Difference Time Domain (FDTD) method is well suited for the electromagnetic analysis of integrated devices using active non-linear elements, but is difficult to use for large and/or periodic structures. A rigorous revision of this popular numerical technique is performed in order to permit FDTD to model practical quasi-optical devices. The system impulse response, or discrete Green's function (DGF), for FDTD is determined as a polynomial, and the FDTD technique is then reformulated as a convolution sum. This alternative algorithm avoids absorbing boundary conditions (ABCs) and can save large amounts of memory when modelling wire or slot structures. Many applications for the DGF can be foreseen, going beyond quasi-optical components. As an example, the exact ABC based on the DGF for FDTD, implemented for a single grid wall, is presented. The problem of time-domain analysis of planar periodic structures modelling only one periodic cell is also investigated. Simple periodic boundary conditions (PBCs) can be implemented for FDTD, but they cannot handle periodic devices (such as phase-shift arrays or dichroic screens) which produce fields periodic on a 4D basis (three spatial dimensions plus time). An extended FDTD scheme is presented which uses Lorentz-type coordinate transformations to reduce the problem to 3D. The analysis of non-linear devices using FDTD is also considered in the thesis. In this case, the non-linear devices are always modelled using an equivalent lumped-element circuit. These circuits are introduced into the FDTD grid by means of the current density, following an iterative implicit algorithm. As a demonstration of the technique, a quasi-optically fed slot ring mixer with integral lens is designed for operation at 650 GHz.
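For reference, below is a minimal 1D version of the conventional FDTD update loop, the algorithm that the DGF approach reformulates as a convolution sum; the grid size, Courant number and source are illustrative choices.

```python
# Hedged sketch: conventional 1D Yee/FDTD leapfrog update (normalized units).
import numpy as np

nz, nt = 400, 800
ez = np.zeros(nz)          # electric field samples
hy = np.zeros(nz - 1)      # magnetic field on the staggered half-grid
S = 1.0                    # 1D Courant number ("magic time step")

for n in range(nt):
    hy += S * np.diff(ez)                            # update H from curl E
    ez[1:-1] += S * np.diff(hy)                      # update E from curl H
    ez[nz // 4] += np.exp(-((n - 60) / 15.0) ** 2)   # soft Gaussian source

print("peak |Ez| =", np.abs(ez).max())
```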
25

Miah, Suruz. "Design and Implementation of Control Techniques for Differential Drive Mobile Robots: An RFID Approach." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23343.

Abstract:
Localization and motion control (navigation) are two major tasks for successful mobile robot navigation. The motion controller determines the appropriate action for the robot's actuators based on its current state in an operating environment. A robot recognizes its environment through sensors and executes physical actions through actuation mechanisms. However, sensory information is noisy, and hence actions generated based on this information may be non-deterministic. Therefore, a mobile robot provides actions to its actuators with a certain degree of uncertainty. Moreover, when no prior knowledge of the environment is available, the problem becomes even more difficult, as the robot has to build a map of its surroundings as it moves in order to determine its position. Skilled navigation of a differential drive mobile robot (DDMR) requires solving these tasks in conjunction, since they are inter-dependent. Having resolved these tasks, mobile robots can be employed in many indoor and outdoor contexts, such as delivering payloads in a dynamic environment, building safety and security, building measurement, research, and driving on highways. This dissertation exploits the emerging Radio Frequency IDentification (RFID) technology for the design and implementation of cost-effective and modular control techniques for navigating a mobile robot in an indoor environment. A successful realization of this process has been addressed with three separate navigation modules. The first module is devoted to the development of an indoor navigation system with a customized RFID reader. This navigation system is pioneered by mounting a multiple-antenna RFID reader on the robot and placing RFID tags in a three-dimensional workspace, where the tags' orthogonal positions on the ground define the desired positions that the robot is supposed to reach. The robot generates control actions based on the information provided by the RFID reader in order to navigate to those pre-defined points. By contrast, the second and third navigation modules employ custom-made RFID tags (instead of the RFID reader) which are attached at different locations in the navigation environment (on the ceiling of an indoor office, or on posts, for instance). The robot's controller generates appropriate control actions for its actuators based on the information provided by the RFID tags in order to reach target positions or to track a pre-defined trajectory in the environment. All three navigation modules were shown to be able to guide a mobile robot in a highly reverberant environment with varying degrees of accuracy.
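A sketch of the kind of differential-drive motion controller such navigation modules build on: a standard go-to-goal law mapping the (here RFID-derived) pose estimate to forward and angular velocities. The gains and pose values are assumptions.

```python
# Hedged sketch: standard go-to-goal law for a differential-drive robot.
import numpy as np

def go_to_goal(pose, goal, k_v=0.5, k_w=2.0):
    """pose = (x, y, theta); returns (v, omega) steering the robot to goal."""
    x, y, th = pose
    dx, dy = goal[0] - x, goal[1] - y
    rho = np.hypot(dx, dy)                            # distance to goal
    alpha = np.arctan2(dy, dx) - th                   # heading error
    alpha = np.arctan2(np.sin(alpha), np.cos(alpha))  # wrap to [-pi, pi]
    return k_v * rho, k_w * alpha

v, w = go_to_goal((0.0, 0.0, 0.0), (1.0, 1.0))
print(f"v = {v:.2f} m/s, omega = {w:.2f} rad/s")
```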
26

Pessoa, Lucio Flavio Cavalcanti. "Nonlinear systems and neural networks with hybrid morphological/rank/linear nodes: optimal design and applications to image processing and pattern recognition." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/13519.

27

Karimli, Nigar. "Parameter Estimation and Optimal Design Techniques to Analyze a Mathematical Model in Wound Healing." TopSCHOLAR®, 2019. https://digitalcommons.wku.edu/theses/3114.

Abstract:
For this project, we use a modified version of a previously developed mathematical model which describes the relationships among matrix metalloproteinases (MMPs), their tissue inhibitors (TIMPs), and the extracellular matrix (ECM). Our ultimate goal is to quantify and understand differences in parameter estimates between patients in order to predict future responses and individualize treatment for each patient. By analyzing parameter confidence intervals and confidence and prediction intervals for the state variables, we develop a parameter space reduction algorithm that results in better future response predictions for each individual patient. Moreover, another subset selection method, namely Structured Covariance Analysis, which considers the identifiability of parameters, is included in this work. Furthermore, to estimate parameters more efficiently and accurately, the standard-error (SE-) optimal design method is employed, which calculates optimal observation times at which clinical data are to be collected. Finally, by combining different parameter subset selection methods with an optimal design problem, different cases for finding both optimal time points and intervals are investigated.
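A toy sketch of the SE-optimal design idea above: choose sampling times that minimize the summed parameter standard errors predicted by the Fisher information matrix, here for an assumed exponential-decay model standing in for the MMP/TIMP/ECM dynamics.

```python
# Hedged sketch: SE-optimal sampling times via the Fisher information matrix.
import numpy as np
from itertools import combinations

def sensitivities(t, a, k):
    e = np.exp(-k * t)                        # toy model y = a * exp(-k t)
    return np.column_stack([e, -a * t * e])   # columns: dy/da, dy/dk

def se_score(times, a=1.0, k=0.5, sigma=0.05):
    S = sensitivities(np.asarray(times, dtype=float), a, k)
    cov = np.linalg.inv(S.T @ S / sigma**2)   # inverse Fisher information
    return np.sqrt(np.diag(cov)).sum()        # summed parameter standard errors

candidates = np.linspace(0.5, 10.0, 20)
best = min(combinations(candidates, 4), key=se_score)
print("best 4 sampling times:", np.round(best, 2))
```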
28

Galvanin, Federico. "Optimal model-based design of experiments in dynamic systems: novel techniques and unconventional applications." Doctoral thesis, Università degli studi di Padova, 2010. http://hdl.handle.net/11577/3427095.

Abstract:
Model-based design of experiments (MBDoE) techniques are a very useful tool for the rapid assessment and development of dynamic deterministic models, providing significant support to the model identification task in a broad range of process engineering applications. These techniques make it possible to maximise the information content of an experimental trial by acting on the settings of an experiment in terms of initial conditions, profiles of the manipulated inputs, and the number and time location of the output measurements. Despite their popularity, standard MBDoE techniques are still affected by some limitations. In fact, when a set of constraints is imposed on the system inputs or outputs, factors like uncertainty in the prior parameter estimates and structural system/model mismatch may lead the design procedure to plan experiments that turn out, in practice, to be suboptimal (i.e. scarcely informative) and/or unfeasible (i.e. violating the constraints imposed on the system). Additionally, standard MBDoE techniques were originally developed considering a discrete acquisition of information, and thus do not consider the possibility that information on the system could be acquired very frequently if the system responses could be recorded in a continuous manner. In this dissertation, three novel MBDoE methodologies are proposed to address the above issues. First, a strategy for the online model-based redesign of experiments is developed, where the manipulated inputs are updated while an experiment is still running. Thanks to intermediate parameter estimations, the information is exploited as soon as it is generated by an experiment, with great benefit in terms of precision and accuracy of the final parameter estimate and of experimental time. Secondly, a general methodology is proposed to formulate and solve the experiment design problem by explicitly taking into account the presence of parametric uncertainty, so as to ensure by design both the feasibility and optimality of an experiment. A prediction of the system responses for the given parameter distribution is used to evaluate and update suitable backoffs from the nominal constraints, which are used in the design session in order to keep the system within a feasible region with a specified probability. Finally, a design criterion particularly suitable for systems where continuous measurements are available is proposed, in order to optimise the information dynamics of the experiment from the very beginning of the trial. This approach allows tailoring the design procedure to the specificity of the measurement system. A further contribution of this dissertation is aimed at assessing the general applicability of both standard and advanced MBDoE techniques in the biomedical area, where unconventional experiment design applications are faced. In particular, two identification problems are considered: one related to optimal drug administration in cancer chemotherapy, and one related to glucose homeostasis models for subjects affected by type 1 diabetes mellitus (T1DM). Particular attention is drawn to the optimal design of clinical tests for the parametric identification of detailed physiological models of T1DM. In this latter case, advanced MBDoE techniques are used to ensure a safe and optimally informative clinical test for model identification. The practicability and effectiveness of a complex approach taking into account simultaneously the redesign-based and the backoff-based MBDoE strategies are also shown. The proposed experiment design procedure provides alternative test protocols that are sufficiently short and easy to carry out, and that allow for a precise, accurate and safe estimation of the model parameters defining the metabolic portrait of a diabetic subject.
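A minimal sketch of the backoff idea described above: Monte Carlo sampling of the uncertain parameters predicts the spread of a constrained response, and the nominal constraint is tightened by a margin so the designed experiment stays feasible with high probability. The model and numbers are illustrative assumptions.

```python
# Hedged sketch: Monte Carlo backoff on an output constraint y <= g_max for a
# toy one-parameter model; distribution and backoff factor are assumptions.
import numpy as np

rng = np.random.default_rng(1)
g_max = 10.0                              # nominal constraint on the response
theta = rng.normal(1.0, 0.15, 2000)       # uncertain model parameter samples

u = 4.0                                   # candidate experiment input
y = theta * u**1.5                        # predicted responses over the samples
backoff = 2.0 * y.std()                   # ~95% margin (assumed factor)
print(f"design against y <= {g_max - backoff:.2f} instead of {g_max:.2f}")
```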
Le moderne tecniche di progettazione ottimale degli esperimenti basata su modello (MBDoE, model-based design of experiments) si sono dimostrate utili ed efficaci per sviluppare e affinare modelli matematici dinamici di tipo deterministico. Queste tecniche consentono di massimizzare il contenuto informativo di un esperimento di identificazione, determinando le condizioni sperimentali più opportune da adottare nella sperimentazione allo scopo di stimare i parametri di un modello nel modo più rapido ed efficiente possibile. Le tecniche MBDoE sono state applicate con successo in svariate applicazioni industriali. Tuttavia, nella loro formulazione standard, esse soffrono di alcune limitazioni. Infatti, quando sussistono vincoli sugli ingressi manipolabili dallo sperimentatore oppure sulle risposte del sistema, l’incertezza nell’informazione preliminare che lo sperimentatore possiede sul sistema fisico (in termini di struttura del modello e precisione nella stima dei parametri) può profondamente influenzare l’efficacia della procedura di progettazione dell’esperimento. Come conseguenza, è possibile che venga progettato un esperimento poco informativo e dunque inadeguato per stimare i parametri del modello in maniera statisticamente precisa ed accurata, o addirittura un esperimento che porta a violare i vincoli imposti sul sistema in esame. Inoltre, le tecniche MBDoE standard non considerano nella formulazione stessa del problema di progettazione la specificità e le caratteristiche del sistema di misura in termini di frequenza, precisione e accuratezza con cui le misure sono disponibili. Nella ricerca descritta in questa Dissertazione sono sviluppate metodologie avanzate di progettazione degli esperimenti con lo scopo di superare tali limitazioni. In particolare, sono proposte tre nuove tecniche per la progettazione ottimale di esperimenti dinamici basata su modello: 1. una tecnica di progettazione in linea degli esperimenti (OMBRE, online model-based redesign of experiments), che consente di riprogettare un esperimento mentre questo è ancora in esecuzione; 2. una tecnica basata sul concetto di “backoff” (arretramento) dai vincoli, per gestire l’incertezza parametrica e strutturale del modello; 3. una tecnica di progettazione che consente di ottimizzare l’informazione dinamica di un esperimento (DMBDoE, dynamic model-based design of experiments) allo scopo di considerare la specificità del sistema di misura disponibile. La procedura standard MBDoE per la progettazione di un esperimento è sequenziale e si articola in tre stadi successivi. Nel primo stadio l’esperimento viene progettato considerando l’informazione preliminare disponibile in termini di struttura del modello e stima preliminare dei parametri. Il risultato della progettazione è una serie di profili ottimali delle variabili manipolabili (ingressi) e l’allocazione ottimale dei tempi di campionamento delle misure (uscite). Nel secondo stadio l’esperimento viene effettivamente condotto, impiegando le condizioni sperimentali progettate e raccogliendo le misure come da progetto. Nel terzo stadio, le misure vengono utilizzate per stimare i parametri del modello. Seguendo questa procedura, l’informazione ottenuta dall’esperimento viene sfruttata solo a conclusione dell’esperimento stesso. La tecnica OMBRE proposta consente invece di riprogettare l’esperimento, e quindi di aggiornare i profili manipolabili nel tempo, mentre l’esperimento è ancora in esecuzione, attuando stime intermedie dei parametri. 
In questo modo l’informazione viene sfruttata progressivamente mano a mano che l’esperimento procede. I vantaggi di questa tecnica sono molteplici. Prima di tutto, la procedura di progettazione diventa meno sensibile, rispetto alla procedura standard, alla qualità della stima preliminare dei parametri. In secondo luogo, essa consente una stima dei parametri statisticamente più soddisfacente, grazie alla possibilità di sfruttare in modo progressivo l’informazione generata dall’esperimento. Inoltre, la tecnica OMBRE consente di ridurre le dimensioni del problema di ottimizzazione, con grande beneficio in termini di robustezza computazionale. In alcune applicazioni, risulta di importanza critica garantire la fattibilità dell’esperimento, ossia l’osservanza dei vincoli imposti sul sistema. Nella Dissertazione è proposta e illustrata una nuova procedura di progettazione degli esperimenti basata sul concetto di “backoff” (arretramento) dai vincoli, nella quale l’effetto dell’incertezza sulla stima dei parametri e/o l’inadeguatezza strutturale del modello vengono inclusi nella formulazione delle equazioni di vincolo grazie ad una simulazione stocastica. Questo approccio porta a ridurre lo spazio utile per la progettazione dell’esperimento in modo tale da assicurare che le condizioni di progettazione siano in grado di garantire non solo l’identificazione dei parametri del modello, ma anche la fattibilità dell’esperimento in presenza di incertezza strutturale e/o parametrica del modello. Nelle tecniche standard di progettazione la formulazione del problema di ottimo prevede che le misure vengano acquisite in maniera discreta, considerando una certa distanza temporale tra misure successive. Di conseguenza, l’informazione attesa dall’esperimento viene calcolata e massimizzata durante la progettazione mediante una misura discreta dell’informazione di Fisher. Tuttavia, nella pratica, sistemi di misura di tipo continuo permetterebbero di seguire la dinamica del processo mediante misurazioni molto frequenti. Per questo motivo viene proposto un nuovo criterio di progettazione (DMBDoE), nel quale l’informazione attesa dall’esperimento viene ottimizzata in maniera continua. Il nuovo approccio consente di generalizzare l’approccio della progettazione includendo le caratteristiche del sistema di misura (in termini di frequenza di campionamento, accuratezza e precisione delle misure) nella formulazione stessa del problema di ottimo. Un ulteriore contributo della ricerca presentata in questa Dissertazione è l’estensione al settore biomedico di tecniche MBDoE standard ed avanzate. I sistemi fisiologici sono caratterizzati da elevata complessità, e spesso da scarsa controllabilità e scarsa osservabilità. Questi elementi rendono particolarmente lunghe e complesse le procedure di identificazione parametrica di modelli fisiologici dettagliati. L’attività di ricerca ha considerato due problemi principali inerenti l’identificazione parametrica di modelli fisiologici: il primo legato a un modello per la somministrazione ottimale di agenti chemioterapici per la cura del cancro, il secondo relativo ai modelli complessi dell’omeostasi glucidica per soggetti affetti da diabete mellito di tipo 1. In quest’ultimo caso, al quale è rivolta attenzione particolare, l’obiettivo principale è identificare il set di parametri individuali del soggetto diabetico. 
Ciò consente di tracciarne un ritratto metabolico, fornendo così un prezioso supporto qualora si intenda utilizzare il modello per sviluppare e verificare algoritmi avanzati per il controllo del diabete di tipo 1. Nella letteratura e nella pratica medica esistono test clinici standard, quali il test orale di tolleranza al glucosio e il test post-prandiale da carico di glucosio, per la diagnostica del diabete e l’identificazione di modelli dell’omeostasi glucidica. Tali test sono sufficientemente brevi e sicuri per il soggetto diabetico, ma si possono rivelare poco informativi quando l’obiettivo è quello di identificare i parametri di modelli complessi del diabete. L’eccitazione fornita durante questi test al sistema-soggetto, in termini di infusione di insulina e somministrazione di glucosio, può infatti essere insufficiente per stimare in maniera statisticamente soddisfacente i parametri del modello. In questa Dissertazione è proposto l’impiego di tecniche MBDoE standard e avanzate per progettare test clinici che permettano di identificare nel modo più rapido ed efficiente possibile il set di parametri che caratterizzano un soggetto affetto da diabete, rispettando durante il test i vincoli imposti sul livello glicemico del soggetto. Partendo dai test standard per l’identificazione di modelli fisiologici del diabete, è così possibile determinare dei protocolli clinici modificati in grado di garantire test clinici altamente informativi, sicuri, poco invasivi e sufficientemente brevi. In particolare, si mostra come un test orale opportunamente modificato risulta altamente informativo per l’identificazione, sicuro per il paziente e di facile implementazione per il clinico. Inoltre, viene evidenziato come l’integrazione di tecniche avanzate di progettazione (quali OMBRE e tecniche basate sul concetto di backoff) è in grado di garantire elevata significatività e sicurezza dei test clinici anche in presenza di incertezza strutturale, oltre che parametrica, del modello. Infine, si mostra come, qualora siano disponibili misure molto frequenti della glicemia, ottimizzare mediante tecniche DMBDoE l’informazione dinamica progressivamente acquisita dal sistema di misura durante il test consente di sviluppare protocolli clinici altamente informativi, ma di durata inferiore, minimizzando così lo stress sul soggetto diabetico. La struttura della Dissertazione è la seguente. Il primo Capitolo illustra lo stato dell’arte delle attuali tecniche di progettazione ottimale degli esperimenti, analizzandone le limitazioni e identificando gli obiettivi della ricerca. Il secondo Capitolo contiene la trattazione matematica necessaria per comprendere la procedure standard di progettazione degli esperimenti. Il terzo Capitolo presenta la nuova tecnica OMBRE per la riprogettazione in linea di esperimenti dinamici. La tecnica viene applicata a due casi di studio, riguardanti un processo di fermentazione di biomassa in un reattore semicontinuo e un processo per la produzione di uretano. Il quarto Capitolo propone e illustra il metodo basato sul concetto di “backoff” per gestire l’effetto dell’incertezza parametrica e strutturale nella formulazione stessa del problema di progettazione. L’efficacia del metodo è verificata su due casi di studio in ambito biomedico. Il primo riguarda l’ottimizzazione dell’infusione di insulina per l’identificazione di un modello dettagliato del diabete mellito di tipo 1; il secondo la somministrazione ottimale di agenti chemioterapici per la cura del cancro. 
The fifth Chapter is entirely devoted to the problem of the optimal design of clinical tests for the identification of a complex physiological model of type 1 diabetes mellitus. Modified clinical protocols are designed by adopting MBDoE techniques in the presence of high parametric mismatch between model and diabetic subject. The sixth Chapter addresses the problem of clinical test design assuming both parametric and structural model uncertainty. The seventh Chapter proposes a new design criterion (DMBDoE) that optimizes the dynamic information that can be acquired from an experiment. The technique is applied to a complex model of type 1 diabetes mellitus and to a biomass fermentation process in a fed-batch reactor. Conclusions and possible future developments are described in the concluding section of the Dissertation.
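As an illustration of the discrete Fisher information measure on which the standard design criteria above rest, the following minimal Python sketch accumulates the expected information matrix over the sampling times and evaluates a D-optimal objective. The sensitivities and the constant-variance assumption are hypothetical placeholders, not taken from the Dissertation.

```python
import numpy as np

def fisher_information(sens, sigma2):
    """Accumulate the discrete Fisher information matrix.

    sens   : (n_samples, n_params) parameter sensitivities dy/dtheta
             evaluated at the sampling times of the designed experiment.
    sigma2 : measurement variance (constant-variance assumption).
    """
    F = np.zeros((sens.shape[1], sens.shape[1]))
    for s in sens:                      # one rank-1 update per sample
        F += np.outer(s, s) / sigma2
    return F

# D-optimal criterion: maximize det(F), i.e. minimize -log det(F)
def d_criterion(sens, sigma2=0.01):
    sign, logdet = np.linalg.slogdet(fisher_information(sens, sigma2))
    return -logdet if sign > 0 else np.inf
```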
APA, Harvard, Vancouver, ISO, and other styles
30

Nguyen, Giang Thach, and thach nguyen@rmit edu au. "Efficient Resonantly Enhanced Mach-Zehnder Optical Modulator on Lithium Niobate." RMIT University. Electrical and Computer Engineering, 2006. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20070118.162330.

Full text
Abstract:
Photonic links have been proposed to transport radio frequency (RF) signals over optical fiber. External optical modulation is commonly used in high performance RF-photonic links. The practical use of optical fiber to transport RF signals is still limited due to high RF signal loss. In order to reduce the RF signal loss, highly efficient modulators are needed. For many applications, modulators with broad bandwidths are required. However, there are applications that require only a narrow bandwidth. For these narrow-band applications, the modulation efficiency can be improved through the resonant enhancement technique at the expense of reduced bandwidth. The aim of this thesis is to investigate highly efficient Mach-Zehnder optical modulators (MZMs) on Lithium Niobate (LiNbO3) with resonant enhancement techniques for narrow-band RF-photonic applications. This work focuses in particular on analyzing the factors that affect the modulation efficiency through resonant enhancement so that the modulator electrode structure can be optimized for maximum modulation efficiency. A parameter study of the effects of the electrode characteristics on the modulation efficiency of resonantly enhanced modulators (RE-MZMs) is provided. From this study, optimum design objectives are identified. Numerical optimization is employed to explore the design trade-offs so that optimal configurations can be found. A sensitivity analysis is carried out to assess the performance of optimal RE-MZMs with respect to variations in fabrication conditions. The results of these investigations indicate that the RE-MZM with a large electrode gap is the optimal design, since it provides high modulation efficiency although the inherent switching voltage is high, and it is the most tolerant to fabrication fluctuations. A highly efficient RE-MZM on X-cut LiNbO3 is practically demonstrated with a resonant enhancement factor of 5 dB compared to an unenhanced modulator with the same electrode structure, and an effective switching voltage of 2 V at 1.8 GHz. The performance of the RF-photonic link using the fabricated RE-MZM is evaluated. Optimization of RE-MZMs for operation at millimeter-wave frequencies is also reported. Factors that limit the modulation efficiency of an RE-MZM at millimeter-wave frequencies are identified. Novel resonant structures that can overcome these limitations are proposed. Preliminary designs indicate that greatly improved modulation efficiency could be expected.
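As a rough illustration of what an enhancement figure like the one quoted above means in voltage terms, the sketch below converts an enhancement expressed in electrical dB into an equivalent reduction of the switching voltage. The conversion rule and the numbers are illustrative assumptions, not the thesis measurements.

```python
import math

def effective_vpi(vpi_unenhanced, enhancement_db):
    """Equivalent switching voltage after resonant enhancement.

    If the enhancement E is expressed in electrical-dB terms, it maps to
    a voltage gain of 10**(E/20), so the effective half-wave voltage
    drops by that factor (illustrative figures, not thesis data).
    """
    return vpi_unenhanced / 10 ** (enhancement_db / 20)

print(effective_vpi(3.6, 5.0))  # ~2.0 V, consistent with a 5 dB boost
```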
APA, Harvard, Vancouver, ISO, and other styles
31

Godoy, Rodrigo Juliani Corrêa de. "Plantwide control: a review and proposal of an augmented hierarchical plantwide control design technique." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-07112017-140120/.

Full text
Abstract:
The problem of designing control systems for entire plants is studied. A review of previous works, available techniques and current research challenges is presented, followed by the description of some theoretical tools to improve plantwide control, including the proposal of an augmented lexicographic multi-objective optimization procedure. With these, an augmented hierarchical plantwide control design technique and an optimal multi-objective technique for integrated control structure selection and controller tuning are proposed. The main contributions of these proposed techniques are the inclusion of system identification and optimal control tuning as part of the plantwide design procedure for improved results, support for multi-objective control specifications, and support for any type of plant and controller. Finally, the proposed techniques are applied to industrial benchmarks to demonstrate and validate their applicability.
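A minimal sketch of the lexicographic multi-objective idea on which the augmented procedure builds: objectives are ranked by priority, and each stage re-optimizes the next objective while freezing the optima of the higher-priority ones. The interface is hypothetical and much simpler than the thesis's augmented formulation.

```python
import numpy as np
from scipy.optimize import minimize

def lexicographic(objectives, x0, tol=1e-6):
    """Lexicographic multi-objective optimization (sketch).

    objectives : callables ordered by decreasing priority.
    Each stage keeps every higher-priority optimum within tol.
    """
    constraints = []
    x = np.asarray(x0, float)
    for f in objectives:
        res = minimize(f, x, constraints=constraints)
        x, fstar = res.x, res.fun
        # freeze this level: f(x) may not exceed its optimum plus tol
        constraints.append({"type": "ineq",
                            "fun": lambda z, f=f, fstar=fstar: fstar + tol - f(z)})
    return x

# toy usage: prioritize tracking error over control effort
x = lexicographic([lambda z: (z[0] - 1) ** 2, lambda z: z[1] ** 2], np.zeros(2))
```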
APA, Harvard, Vancouver, ISO, and other styles
32

Di, Pretoro Alessandro. "Optimal design of flexible, operable and sustainable processes under uncertainty : biorefinery applications." Thesis, Toulouse, INPT, 2020. http://www.theses.fr/2020INPT0073.

Full text
Abstract:
The conventional process design procedure is a well-established standard in process engineering and implies the fulfillment of the residual degrees of freedom by means of optimizations based on several possible criteria. However, the optimality of the solution obtained by this procedure is strictly related to nominal operating conditions and does not account for external perturbations. The purpose of this PhD thesis is therefore to include flexibility in every step of the process design procedure. An ABE biorefinery separation case study has been selected for this purpose. After a detailed literature survey on flexibility, both deterministic and stochastic flexibility indexes were identified and compared on a simple distillation column case study. The problem of the optimal number of stages in equilibrium-staged operations under uncertain conditions was then discussed. The economic and environmental aspects were coupled in a unified procedure in order to assess the best compromise between unit oversizing (i.e. investment costs) and external duty demand (i.e. operating expenses) needed to compensate perturbations of the operating conditions. Due to the highly non-ideal behaviour of the ABE/W mixture, the thermodynamic flexibility was assessed first by means of Residue Curve Mapping, in order to outline the physical boundaries of the uncertain domain within which the separation is feasible. The procedure for a single column was extended to distillation trains as well as to equivalent integrated configurations such as the Dividing Wall Column. Since the successful recovery of at least butanol and acetone is required for the profitability of the process, different configurations of the corresponding distillation column train were designed and compared, accounting for feed composition uncertainty. The indirect configuration was found to be the best compromise due to the high butanol content in the feed. However, distillation trains are considered an outdated solution for the purification of multicomponent mixtures. They have indeed been replaced by integrated solutions resulting in both lower investment and lower operating costs. A Dividing Wall Column was then designed for the same process specifications by means of a feasible-path-based methodology. It consists of the arrangement of a Petlyuk column into a single column shell. A flexibility assessment was then performed on the DWC as well, highlighting both the benefits and the drawbacks of employing an intensified process design solution. The optimal DWC design was then discussed by considering flexibility needs and oversizing costs compared to the classical distillation train configuration. All calculations and simulations performed so far were nonetheless related to steady-state conditions. In order to have a more complete overview of the flexibility indexes, process dynamics was investigated as well. When process dynamics is taken into account, the flexibility assessment results, and thus the required equipment oversizing, necessarily depend on the control configuration. A new "switchability" index was defined by correlating the dynamic and steady-state performances from a flexibility point of view. The rigorous definition of this index was referred to a Model Predictive Control configuration for a simple system, but it can be used to compare any kind of control loop configuration. To describe this latter case study, the same distillation column involved in the steady-state index comparison was simulated in DynSim with PID controllers in order to highlight the influence of dynamics. The final outcome of this thesis work is thus the definition of a comprehensive approach for the multicriteria design of unit operations under uncertain operating conditions, and the investigation of the associated criticalities. Those criteria are, respectively, economics, flexibility, controllability and sustainability.
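The stochastic flexibility indexes compared in the thesis can be illustrated with a Monte Carlo estimate of the probability of feasible operation. The feasibility test and the disturbance model below are toy assumptions, not the thesis's column models.

```python
import numpy as np

def stochastic_flexibility(feasible, sample_uncertainty, n=10_000, seed=0):
    """Monte Carlo estimate of a stochastic flexibility index (sketch).

    feasible(theta) -> bool : True if the design meets all constraints
    under the uncertain-parameter realization theta. The index is the
    estimated probability of feasible operation.
    """
    rng = np.random.default_rng(seed)
    hits = sum(feasible(sample_uncertainty(rng)) for _ in range(n))
    return hits / n

# toy example: feed composition disturbance on a column with a 10% margin
sf = stochastic_flexibility(
    feasible=lambda th: abs(th) < 0.10,
    sample_uncertainty=lambda rng: rng.normal(0.0, 0.05))
print(sf)   # fraction of feasible scenarios
```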
APA, Harvard, Vancouver, ISO, and other styles
33

Giuglea, Alexandru, Guido Belfiore, Mahdi Khafaji, Ronny Henker, Despoina Petousi, Georg Winzer, Lars Zimmermann, and Frank Ellinger. "Comparison of Segmented and Traveling-Wave Electro-Optical Transmitters Based on Silicon Photonics Mach-Zehnder Modulators." Institute of Electrical and Electronics Engineers (IEEE), 2018. https://tud.qucosa.de/id/qucosa%3A35393.

Full text
Abstract:
This paper presents a brief study of the two most commonly used topologies, segmented and traveling-wave, for realizing monolithically integrated electro-optical transmitters consisting of Si-photonics Mach-Zehnder modulators and their electrical drivers. To this end, two new transmitters employing high-swing breakdown-voltage-doubler drivers were designed in the aforementioned topologies and compared with regard to their extinction ratio and DC power consumption at a data rate of 30 Gb/s. It is shown that for the targeted data rate and extinction ratio, a considerably lower power consumption can be achieved with the traveling-wave topology than with its segmented counterpart. The transmitters were realized in a 250 nm SiGe BiCMOS electronic-photonic integrated technology.
APA, Harvard, Vancouver, ISO, and other styles
34

WAN, Li. "Modeling and Optimal Design of Annular Array Based Ultrasound Pulse-Echo System." Digital WPI, 2001. https://digitalcommons.wpi.edu/etd-theses/219.

Full text
Abstract:
The ability to numerically determine the received signal in an ultrasound pulse-echo system is very important for the development of new ultrasound applications, such as tissue characterization, complex object recognition, and identification of surface topology. The output signal from an ultrasound pulse-echo system depends on the transducer geometry and on the reflector shape, location and orientation, among other factors; therefore, only by numerical modeling can the output signal for a given measurement configuration be predicted. This thesis concerns the numerical modeling and optimal design of an annular-array-based ultrasound pulse-echo system for object recognition. Two numerical modeling methods have been implemented and evaluated for calculating the received signal in a pulse-echo system: the simple but computationally demanding Huygens Method, and the computationally more efficient Diffraction Response for Extended Area Method (DREAM). The modeling concept is further extended to pulse-echo systems with planar annular arrays. The optimal design of the ultrasound pulse-echo system is based on an annular array transducer, which gives the flexibility to create a wide variety of insonifying fields and receiver characteristics. As the first step towards solving the optimization problem for general conditions, the problem of optimally identifying two specific reflectors is investigated. Two optimization methods, the straightforward but computationally intensive Global Search Method and the efficient Waveform Alignment Method, have been investigated and compared.
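A minimal sketch of the Huygens approach described above, assuming point-sampled apertures, a single point reflector, and 1/r spherical spreading on each leg of the round trip; the discretization and parameters are hypothetical, not the thesis implementation.

```python
import numpy as np

def huygens_echo(tx_points, rx_points, reflector, c=1540.0, fs=50e6, n=2048):
    """Huygens-style pulse-echo impulse response (sketch).

    The transducer faces are discretized into point sources/receivers;
    each (tx, reflector, rx) path contributes a delayed impulse weighted
    by spherical spreading on both legs.
    """
    h = np.zeros(n)
    for p in tx_points:
        for q in rx_points:
            r1 = np.linalg.norm(p - reflector)
            r2 = np.linalg.norm(reflector - q)
            k = int(round((r1 + r2) / c * fs))   # round-trip delay in samples
            if k < n:
                h[k] += 1.0 / (r1 * r2)          # 1/r spreading, each leg
    return h

# toy usage: 3x3 grid of element points facing a point reflector at 30 mm
pts = [np.array([x, y, 0.0]) for x in (-1e-3, 0, 1e-3) for y in (-1e-3, 0, 1e-3)]
h = huygens_echo(pts, pts, np.array([0.0, 0.0, 0.03]))
```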
APA, Harvard, Vancouver, ISO, and other styles
35

Abushammala, Omran. "Optimal Helical Tube Design for Intensified Heat / Mass Exchangers." Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0091.

Full text
Abstract:
The search for technological solutions aimed at minimizing the size of a device, known as intensification, is a classic objective of process engineering. In this thesis, the intensification possibilities offered by helical tubes are studied for both heat and mass exchangers. The use of helical tubes instead of straight tubes is of interest both in terms of increasing the exchange surface per unit volume between the two fluids circulating in the exchanger and in terms of enhancing transfer through the generation of Dean vortices in the tubes. A set of CFD (Computational Fluid Dynamics) simulations was carried out and compared with experimental results. In the end, on the basis of a systematic approach using correlations, a volume reduction by a factor of 8 was obtained, both for heat exchangers and for membrane contactors.
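The Dean vortices mentioned above are governed by the Dean number, which a one-line function can compute; the flow figures below are illustrative only, not values from the thesis.

```python
def dean_number(reynolds, tube_diameter, coil_diameter):
    """De = Re * sqrt(d/D): inertial/centrifugal balance in a curved tube."""
    return reynolds * (tube_diameter / coil_diameter) ** 0.5

# illustrative values: flow at Re = 2000 in a 4 mm tube wound on a
# 40 mm helix gives De ~ 632, well into the Dean-vortex regime
print(dean_number(2000, 0.004, 0.040))
```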
APA, Harvard, Vancouver, ISO, and other styles
36

Mignard-Debise, Lois. "Tools for the paraxial optical design of light field imaging systems." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0009/document.

Full text
Abstract:
Light field imaging is often presented as a revolution of standard imaging. Indeed, it brings the user more control over the final image, as the spatio-angular dimensions of the light field offer the possibility to change the viewpoint and refocus after the shot, and to compute the scene depth map. However, it complicates the work of the optical designer of the system for two reasons. The first is that there exists a multitude of different light field acquisition devices, each with its own specific design. The second is that there is no model that relates the camera design to its optical properties of acquisition and that would guide the designer in his task. This thesis addresses these observations by proposing a first-order optical model to represent any light field acquisition device. This model abstracts a light field camera as an equivalent array of virtual cameras that exists in object space and performs the same sampling of the scene. The model is used to study and compare several light field cameras as well as a light field microscope setup, which reveals guidelines for the conception of light field optical systems. The simulations of the model are also validated through experimentation with a light field camera and a light field microscope that was constructed in our laboratory.
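A paraxial flavour of such a first-order model can be sketched with ray-transfer (ABCD) matrices: compose main lens, gap and microlens, then find the object plane conjugate to the sensor, i.e. where the B coefficient of the composed matrix vanishes. All distances and focal lengths below are hypothetical, not the systems studied in the thesis.

```python
import numpy as np

def thin_lens(f):  return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])
def space(d):      return np.array([[1.0, d], [0.0, 1.0]])

def conjugate_object_distance(f_main=0.05, gap=0.05, f_micro=5e-4, b=4.5e-4):
    """Scan object distances for the imaging condition B ~ 0 of the
    main lens + microlens chain (paraxial sketch, toy parameters)."""
    ss = np.linspace(0.06, 1.0, 20000)
    Bs = [abs((space(b) @ thin_lens(f_micro) @ space(gap)
               @ thin_lens(f_main) @ space(s))[0, 1]) for s in ss]
    return ss[int(np.argmin(Bs))]

print(conjugate_object_distance())   # ~0.6 m with these toy parameters
```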
APA, Harvard, Vancouver, ISO, and other styles
37

Lopez, Sanchez Francisco Javier. "Optimal design and application of trellis coded modulation techniques defined over the ring of integers." Thesis, Staffordshire University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262313.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Gong, Jinlin. "Modélisation et conception optimale d’un moteur linéaire à induction pour système de traction ferroviaire." Thesis, Ecole centrale de Lille, 2011. http://www.theses.fr/2011ECLI0016/document.

Full text
Abstract:
This thesis focuses on studying the performance of the linear induction motor using finite element analysis, and on optimal design based on a computationally expensive model. The finite element method is used to study the performance of the linear induction motor. Firstly, the 2D finite element model (FEM) is constructed, which allows the longitudinal end effects to be taken into account. The transverse edge effects are taken into account within the 2D model by varying the conductivity of the secondary and by adding the inductance of the winding overhang. Secondly, a coupled model between the magnetic and thermal 3D FEMs is built, which allows both the end effects and the temperature influence to be taken into account. Finally, a test bench is realized to validate the models. The comparison between the different models shows the importance of the coupled model. Optimal design using finite element modeling tools is a complex and time-costly task. Surrogate-model-assisted optimization strategies are therefore studied. Direct surrogate-model-assisted optimization and Efficient Global Optimization are compared. A three-level output space-mapping technique is proposed to reduce the computation time. The optimization results show that the proposed algorithm saves substantial computation time compared to the classical two-level output space mapping. Using the 3D FEM, a multi-objective optimization with progressive improvement of a surrogate model is proposed. The proposed strategy evaluates the FEM in parallel. A 3D Pareto front composed of the finite element model evaluation results is obtained, which supports decision-making in the engineering design.
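For orientation, a minimal two-level output space-mapping loop is sketched below: the coarse model is corrected by the current fine/coarse response mismatch and then optimized, with the fine (FEM-like) model evaluated once per iteration. The model interfaces are hypothetical, and the thesis proposes a three-level variant of this idea.

```python
import numpy as np
from scipy.optimize import minimize

def output_space_mapping(fine, coarse, x0, iters=10):
    """Two-level output space mapping (sketch, hypothetical interface)."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        delta = fine(x) - coarse(x)                 # output correction term
        res = minimize(lambda z: (coarse(z) + delta) ** 2, x)
        if np.allclose(res.x, x, atol=1e-6):
            break
        x = res.x
    return x

# toy usage: the "fine" model is the coarse model plus an unmodeled offset
x_opt = output_space_mapping(lambda z: (z[0] - 2) ** 2 + 0.3,
                             lambda z: (z[0] - 2) ** 2, [0.0])
```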
APA, Harvard, Vancouver, ISO, and other styles
39

Straka, Branislav. "Optická pinzeta pro koherencí řízený holografický mikroskop." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2013. http://www.nusl.cz/ntk/nusl-230594.

Full text
Abstract:
This master's thesis describes and explains the principle of operation of the second-generation coherence-controlled holographic microscope (CCHM2) designed at the Brno University of Technology. It also gives a theoretical description of the operation of an optical trap, together with the calculation of the forces acting in it, ways of measuring the stiffness of the optical trap, and the principle of creating time-shared optical traps. An optical tweezers module connectable to the CCHM2 was designed. The work covers the simulation and optimization of the parameters of the optical system, the mechanical design, the manufacturing documentation, and a current source powering the laser diode that allows the diode output power to be controlled by a controller card connected to a PC; the galvano-optic mirror angle is controlled by the PC card as well. The optical tweezers were designed, manufactured and tested in conjunction with the CCHM2.
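One of the stiffness-measurement methods mentioned above, the equipartition method, reduces to a one-line estimate from the positional variance of the trapped bead. The sketch below uses simulated positions and assumed parameters, purely for illustration.

```python
import numpy as np

def trap_stiffness_equipartition(positions_m, temperature_k=295.0):
    """Trap stiffness from the equipartition theorem (sketch).

    k_trap = k_B * T / var(x): the positional variance of a trapped bead
    along one axis gives the stiffness directly, without force calibration.
    """
    k_b = 1.380649e-23                       # Boltzmann constant, J/K
    return k_b * temperature_k / np.var(positions_m)

# toy usage: simulated bead positions with 20 nm rms fluctuation
rng = np.random.default_rng(1)
x = rng.normal(0.0, 20e-9, 100_000)
print(trap_stiffness_equipartition(x))      # ~1e-5 N/m, a typical order
```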
APA, Harvard, Vancouver, ISO, and other styles
40

Krueger, Jared K. (Jared Keith). "CLOSeSat : Perigee-lowering techniques and preliminary design for a small optical imaging satellite operating in very low earth orbit." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/64565.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 125-126).
The ever-increasing role of intelligence, surveillance, and reconnaissance (ISR) assets in combat may require relatively large numbers of earth observation spacecraft to maintain situational awareness. One way to reduce the cost of such systems is to operate at very low altitudes, thereby minimizing optics size and cost for a given ground resolution. This outside-the-box idea attempts to bridge the gap between high-altitude aerial reconnaissance platforms and traditional LEO satellites. Possible benefits from such a design include enabling a series of cheap, small satellites with improved optical resolution, greater resistance to adversary tracking, and 'quick strike' capability. In this thesis, satellite systems design processes and tools are utilized to analyze advanced concepts of low-perigee systems and reduce the useful perigee boundary of satellite orbits. The feasibility and utility of such designs are evaluated through the use of the Satellite System Design Tool (SSDT), an integrated approach using models and simulations in MATLAB and Satellite Tool Kit (STK). Finally, a potential system design is suggested for a conceptual Continuous Low Orbit Surveillance Satellite (CLOSeSat). The proposed CLOSeSat design utilizes an advanced propulsion system and swooping maneuvers to improve survivability and extend lifetime at operational perigees as low as 160 kilometers, with sustained circular orbits at 240 kilometers. The views expressed in this thesis are those of the author and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the U.S. Government.
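The altitude/optics trade that motivates this concept follows directly from the Rayleigh criterion; the sketch below, with illustrative wavelength and resolution values not taken from the thesis, shows how halving the orbit altitude halves the required aperture.

```python
def required_aperture(altitude_m, resolution_m, wavelength_m=550e-9):
    """Diffraction-limited aperture for a given ground resolution (sketch).

    Rayleigh criterion: D ~ 1.22 * lambda * h / x, so lower altitude
    means smaller, cheaper optics for the same resolution.
    """
    return 1.22 * wavelength_m * altitude_m / resolution_m

print(required_aperture(240e3, 0.5))   # ~0.32 m at 240 km
print(required_aperture(500e3, 0.5))   # ~0.67 m at a typical LEO altitude
```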
by Jared K. Krueger.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
41

Ceschia, Adriano. "Méthodologie de conception optimale de chaines de conversion d’énergie embarquées." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPAST023.

Full text
Abstract:
The research work proposed in this thesis falls within the context of the electrification of embedded systems through the development of a new hybrid power conversion chain, with new energy sources and powertrains. These systems offer many degrees of freedom regarding both the device parameters and the tuning values of the associated control laws. The relevant (technico-economic) optimization of these complex power chains relies on the ability of the search methods to simultaneously combine the main parameters and the technological constraints of each component, the uncertain environmental conditions faced during real use, and the control algorithms together with the global energy management. Their performance depends on the capacity of the design approaches to consider the multiphysics constraints of the real environment and the adequacy of the technologies, topologies and control laws, allowing their constituents to be integrated and associated effectively. In this context, this research work aims at developing tools and methods for the optimization of power architectures and their components (hybrid energy conversion) by integrating the control-command and energy management aspects into the design process. It considers a use case based on a hybrid fuel cell / battery power system. For this purpose, a new nested methodology for complex systems is suggested. It can tackle large search spaces and considers different performance indexes (energy saving, reliability and volume). It simultaneously tunes and designs the energy management and the component sizing by optimizing the main powertrain parameters while respecting the specifications. Technically, it uses two nested loops, combining the performance of the particle swarm optimization (PSO) technique with a rapid optimal control algorithm. This strategy addresses vast search spaces, achieves fast convergence to the global optimal integer design solution, and provides good accuracy and robustness. In order to account for the random nature of real driving cycles (their stochastic character), a real-time energy management strategy (EMS) was introduced based on an extension of the design approach, which increases its availability. Using machine-learning techniques, an estimate of the current driving mode is developed, which guides the online energy management system.
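A minimal sketch of the nested bi-level idea described above: an outer particle swarm over component sizings, with the inner energy-management problem abstracted as a cost callback. The PSO constants are textbook defaults and the interface is hypothetical, not the thesis implementation.

```python
import numpy as np

def bilevel_design(sizing_bounds, inner_cost, n_particles=20, iters=50, seed=0):
    """Nested sizing / energy-management loop (bi-level sketch).

    Outer loop: particle swarm over component sizes. Inner loop:
    inner_cost(sizing) is assumed to solve the energy-management problem
    (e.g. by optimal control) and return, say, fuel consumption.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(sizing_bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([inner_cost(p) for p in x])
    for _ in range(iters):
        g = pbest[np.argmin(pcost)]                      # global best sizing
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([inner_cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
    return pbest[np.argmin(pcost)]

# toy usage: 2 sizing variables, quadratic stand-in for the inner problem
best = bilevel_design([(1.0, 10.0), (0.5, 5.0)],
                      lambda p: (p[0] - 4) ** 2 + (p[1] - 2) ** 2)
```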
APA, Harvard, Vancouver, ISO, and other styles
42

Neradovskiy, Maxim. "Guides d’ondes dans un cristal de niobate de lithium périodiquement polarisé : fabrication et étude par des techniques de microscopie à sonde locale." Thesis, Nice, 2016. http://www.theses.fr/2016NICE4035/document.

Full text
Abstract:
The influence of soft proton exchange (SPE) optical waveguide (WG) fabrication on periodically poled lithium niobate (PPLN) has been investigated. It has been shown that the WG fabrication process can induce the formation of needle-like nanodomains, which can be responsible for the degradation of the nonlinear response of WGs created in PPLN crystals. The evolution of the domain structure (DS) has been studied in congruent lithium niobate (LN) crystals with surface layers modified by three different proton exchange techniques. A significant decrease of the nucleation threshold field and a qualitative change in the nucleation and growth of domain rays have been revealed. The formation of a broad domain boundary and of dendrite domain structures as a result of nanodomains merging in front of the moving rays has been demonstrated. The formation of DS in LN with SPE by electron irradiation of the polar surface covered by electron resist has been investigated. The formation of domains with arbitrary shapes as a result of discrete switching has been revealed. Finally, it has been demonstrated that electron beam irradiation of lithium niobate crystals with a surface resist layer can produce high-quality periodic domain patterns after channel waveguide fabrication. Nonlinear characterizations show that the conversion efficiencies and the phase matching spectra conform to theoretical predictions, indicating that this combination is of great interest for device fabrication. Second harmonic generation with a normalized nonlinear conversion efficiency of up to 48%/(W cm2) has been achieved in such waveguides.
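For orientation, the first-order quasi-phase-matching period that such periodic domain patterns must realize follows from the index mismatch between the fundamental and the second harmonic; the refractive indices below are rough LiNbO3-like values for illustration, not thesis data.

```python
def qpm_period(wavelength_um, n_fund, n_shg):
    """First-order quasi-phase-matching period for SHG (sketch).

    Lambda = lambda / (2 * (n_2w - n_w)); indices are illustrative
    LiNbO3-like values, not measured thesis parameters.
    """
    return wavelength_um / (2.0 * (n_shg - n_fund))

print(qpm_period(1.55, 2.138, 2.177))   # ~19.9 um poling period
```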
APA, Harvard, Vancouver, ISO, and other styles
43

Xia, Liang. "Towards optimal design of multiscale nonlinear structures : reduced-order modeling approaches." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2230/document.

Full text
Abstract:
High-performance heterogeneous materials are increasingly used for their advantageous overall characteristics, which result in superior structural mechanical performance. The pronounced heterogeneities of materials have a significant impact on the structural behavior, so that one needs to account for both the material's microscopic heterogeneities and the constituent behaviors to achieve reliable structural designs. Meanwhile, the fast progress of material science and the latest developments in 3D printing techniques make it possible to generate more innovative, lightweight, and structurally efficient designs through control of the composition and microstructure of the material at the microscopic scale. In this thesis, we have made first attempts towards the topology optimization design of multiscale nonlinear structures, including the design of highly heterogeneous structures, material microstructural design, and simultaneous design of structure and materials. We have primarily developed a multiscale design framework constituted of two key ingredients: multiscale modeling for structural performance simulation, and topology optimization for structural design. With regard to the first ingredient, we employ the first-order computational homogenization method FE2 to bridge the structural and material scales. With regard to the second ingredient, we apply the Bi-directional Evolutionary Structural Optimization (BESO) method to perform topology optimization. In contrast to the conventional nonlinear design of homogeneous structures, this design framework provides an automatic design tool for nonlinear, highly heterogeneous structures in which the underlying material model is governed directly by the realistic microstructural geometry and the microscopic constitutive laws. Note that the FE2 method is extremely expensive in terms of computing time and storage requirements. The dilemma of heavy computational burden is even more pronounced when it comes to topology optimization: the time-consuming multiscale problem must be solved not once, but for many different realizations of the structural topology. Meanwhile, we note that the optimization process requires multiple design loops involving similar or even repeated computations at the microscopic scale. For these reasons, we introduce a third ingredient to the design framework: reduced-order modeling (ROM). We develop an adaptive surrogate model using snapshot Proper Orthogonal Decomposition (POD) and Diffuse Approximation to substitute for the microscopic solutions. The surrogate model is initially built in the first design iteration and updated adaptively in the subsequent design iterations. This surrogate model has shown promising performance in terms of computing-cost reduction and modeling accuracy when applied to the design framework for nonlinear elastic cases. For more severe material nonlinearity, we directly employ an established method, the potential-based Reduced Basis Model Order Reduction (pRBMOR). The key idea of pRBMOR is to approximate the internal variables of the dissipative material by a precomputed reduced basis obtained from snapshot POD. To drastically accelerate the computing procedure, pRBMOR has been implemented with parallelization on modern Graphics Processing Units (GPUs). The implementation of pRBMOR with GPU acceleration enables us to realize the design of multiscale elastoviscoplastic structures, using the previously developed design framework, in realistic computing time and with affordable memory requirements.
So far we have assumed a fixed material microstructure at the microscopic scale. The remaining part of the thesis is dedicated to the simultaneous design of both the macroscopic structure and the microscopic material. Within the previously established multiscale design framework, topology variables and volume constraints are defined at both scales.
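A minimal snapshot-POD sketch of the reduced-basis construction on which both the adaptive surrogate and pRBMOR rely. The snapshots below are random stand-ins; in the thesis they would be sampled microscopic FE solutions.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Snapshot POD: truncated basis capturing a given energy fraction.

    snapshots : (n_dof, n_snap) matrix of sampled solutions.
    Returns the reduced basis Phi with r columns (sketch implementation).
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]

# usage: project a new fine-scale solution onto the reduced space
S = np.random.default_rng(0).normal(size=(1000, 40))   # stand-in snapshots
Phi = pod_basis(S)
u = S[:, 0]
u_reduced = Phi @ (Phi.T @ u)     # reduced-order approximation of u
```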
APA, Harvard, Vancouver, ISO, and other styles
44

Tran, Minh Tue. "Pixel and patch based texture synthesis using image segmentation." University of Western Australia. School of Computer Science and Software Engineering, 2010. http://theses.library.uwa.edu.au/adt-WU2010.0030.

Full text
Abstract:
[Truncated abstract] Texture exists all around us and serves as an important visual cue for the human visual system. Captured within an image, we identify texture by its recognisable visual pattern. It carries extensive information and plays an important role in our interpretation of a visual scene. The subject of this thesis is texture synthesis, which is defined as the creation of a new texture that shares the fundamental visual characteristics of an existing texture such that the new image and the original are perceptually similar. Textures are used in computer graphics, computer-aided design, image processing and visualisation to produce realistic recreations of what we see in the world. For example, the texture on an object communicates its shape and surface properties in a 3D scene. Humans can discriminate between two textures and decide on their similarity in an instant, yet achieving this algorithmically is not a simple process. Textures range in complexity, and developing an approach that consistently synthesises this immense range is a difficult problem to solve and motivates this research. Typically, texture synthesis methods aim to replicate texture by transferring the recognisable repeated patterns from the sample texture to the synthesised output. Feature transferal can be achieved by matching pixels or patches from the sample to the output. As a result, two main approaches, pixel-based and patch-based, have established themselves in the active field of texture synthesis. This thesis contributes to the present knowledge by introducing two novel texture synthesis methods. Both methods use image segmentation to improve synthesis results. ... The sample is segmented and the boundaries of the middle patch are confined to follow segment boundaries. This prevents texture features from being cut off prematurely, a common artifact of patch-based results, and eliminates the need for the patch boundary comparisons that most other patch-based synthesis methods employ. Since no user input is required, this method is simple and straightforward to run. The tiling of pre-computed tile pairs allows outputs that are relatively large compared to the sample size to be generated quickly. Output results show great success for textures with stochastic and semi-stochastic clustered features, but future work is needed to suit more highly structured textures. Lastly, these two texture synthesis methods are applied to the areas of image restoration and image replacement. These two areas of image processing involve replacing parts of an image with synthesised texture and are often referred to as constrained texture synthesis. Images can contain a large amount of complex information; therefore, replacing parts of an image while maintaining image fidelity is a difficult problem to solve. The texture synthesis approaches and constrained synthesis implementations proposed in this thesis achieve successful results comparable with present methods.
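A minimal sketch of the classic pixel-based matching step that this family of methods builds on, simplified to grayscale, an exhaustive search and a causal neighbourhood, with no segmentation guidance; it illustrates the principle, not the thesis's segmentation-based variants.

```python
import numpy as np

def best_match(sample, out, y, x, half=2):
    """Pixel-based texture synthesis step (sketch of the classic idea).

    Finds the sample pixel whose causal neighbourhood (rows above the
    target pixel) best matches the already-synthesised neighbourhood
    around (y, x) under an L2 distance. Interior pixels only.
    """
    patch = out[y - half:y, x - half:x + half + 1]        # causal block
    h, w = sample.shape
    best, best_d = None, np.inf
    for i in range(half, h - half):
        for j in range(half, w - half):
            cand = sample[i - half:i, j - half:j + half + 1]
            d = np.sum((cand - patch) ** 2)
            if d < best_d:
                best, best_d = sample[i, j], d
    return best
```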
APA, Harvard, Vancouver, ISO, and other styles
45

Mehranipornejad, Ebrahim. "Evaluation of AASHTO design specifications for cast-in-place continuous bridge deck using remote sensing technique." [Tampa, Fla] : University of South Florida, 2006. http://purl.fcla.edu/usf/dc/et/SFE0001584.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Roth, Matthias, Jörg Heber, and Klaus Janschek. "System design of programmable 4f phase modulation techniques for rapid intensity shaping: A conceptual comparison." SPIE, 2016. https://tud.qucosa.de/id/qucosa%3A35096.

Full text
Abstract:
The present study analyses three beam shaping approaches with respect to the light-efficient generation of i) patterns and ii) multiple spots by means of a generic optical 4f setup. 4f approaches share the property that, due to the one-to-one relationship between output intensity and input phase, the need for time-consuming iterative calculation can be avoided. The resulting low computational complexity offers a particular advantage compared to the widely used holographic principles and makes them potential candidates for real-time applications. The increasing availability of high-speed phase modulators, e.g. on the basis of MEMS, calls for an evaluation of the performance of these concepts. Our second interest is the applicability of 4f methods to high-power applications. We discuss the variants of 4f intensity shaping by phase modulation from a system-level point of view, which requires the consideration of application-relevant boundary conditions. The discussion includes i) micro-mirror-based phase manipulation combined with amplitude masking in the Fourier plane, ii) the Generalized Phase Contrast (GPC), and iii) matched phase-only correlation filtering combined with GPC. The conceptual comparison relies on comparative figures of merit for energy efficiency, pattern homogeneity, pattern image quality, maximum output intensity and flexibility with respect to the displayable pattern. Numerical simulations illustrate our findings.
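The one-to-one phase-to-intensity mapping of a 4f setup can be sketched as an FFT, Fourier-plane mask, inverse-FFT chain; the grating and the low-pass stop below are placeholders rather than any of the three compared concepts.

```python
import numpy as np

def fourf_output_intensity(phase, fourier_mask):
    """Intensity at the output of a 4f system (scalar, paraxial sketch).

    A unit-amplitude field with programmable input phase is propagated
    lens -> Fourier-plane mask -> lens, modelled as FFT, mask, inverse FFT.
    """
    field_in = np.exp(1j * phase)
    spectrum = np.fft.fftshift(np.fft.fft2(field_in))
    field_out = np.fft.ifft2(np.fft.ifftshift(spectrum * fourier_mask))
    return np.abs(field_out) ** 2

# toy usage: binary pi-phase grating plus a circular low-pass Fourier stop
n = 256
yy, xx = np.mgrid[:n, :n]
phase = np.pi * ((xx // 16) % 2)
mask = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 8) ** 2).astype(float)
I = fourf_output_intensity(phase, mask)
```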
APA, Harvard, Vancouver, ISO, and other styles
47

Falcon, Maimone Rafael. "Co-conception des systemes optiques avec masques de phase pour l'augmentation de la profondeur du champ : evaluation du performance et contribution de la super-résolution." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLO006/document.

Full text
Abstract:
Phase masks are wavefront encoding devices, typically situated at the aperture stop of an optical system to engineer its point spread function (PSF), in a technique commonly known as wavefront coding. These masks can be used to extend the depth of field (DoF) of imaging systems without reducing the light throughput by producing a PSF that is more invariant to defocus; however, the larger the DoF, the more blurred the acquired raw image, so deconvolution has to be applied to the captured images. Thus, the design of the phase masks has to take image processing into account in order to reach the optimal compromise between invariance of the PSF to defocus and capacity to deconvolve the image. This joint design approach was introduced by Cathey and Dowski in 1995, refined in 2002 for continuous-phase DoF-enhancing masks, and generalized by Robinson and Stork in 2007 to correct other optical aberrations. In this thesis we study the different aspects of phase mask optimization for DoF extension, such as the different performance criteria and the relation of these criteria to the different mask parameters. We use the so-called image quality (IQ), a mean-square-error-based criterion defined by Diaz et al., to co-design different phase masks and evaluate their performance. We then compare the relevance of the IQ criterion against other optical design metrics, such as the Strehl ratio, the modulation transfer function (MTF) and others. We focus in particular on binary annular phase masks and their performance under various conditions, such as the desired DoF range, the number of optimization parameters, and the presence of aberrations. We then apply the analysis tools used for the binary phase masks to continuous-phase masks that commonly appear in the literature, such as polynomial-phase masks. We extensively compare these masks to each other and to the binary masks, not only to assess their benefits, but also because by analyzing their differences we can understand their properties. Phase masks function as a low-pass filter on diffraction-limited systems, effectively reducing aliasing. On the other hand, the signal processing technique known as superresolution uses several aliased frames of the same scene to enhance the resolution of the final image beyond the sampling resolution of the original optical system. Practical examples come from the work carried out during a secondment with the industrial partner KLA-Tencor in Leuven, Belgium. At the end of the manuscript we study the relevance of combining this technique with the use of phase masks for DoF extension.
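A standard wavefront-coding illustration, assuming a cubic phase mask (one of the continuous-phase families discussed in the literature, not a specific mask optimized in the thesis): the PSF is computed from the pupil function under defocus, and the cubic term makes it nearly defocus-invariant.

```python
import numpy as np

def psf_with_cubic_mask(alpha, defocus_w20, n=256):
    """PSF of a pupil carrying a cubic phase mask under defocus (sketch).

    Pupil phase: alpha*(x^3 + y^3) (cubic mask) + w20*(x^2 + y^2)
    (defocus), in waves over a unit circular pupil; PSF = |FFT(pupil)|^2.
    The mask strength alpha is the free design parameter.
    """
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    pupil = (X ** 2 + Y ** 2 <= 1).astype(float)
    phase = 2 * np.pi * (alpha * (X ** 3 + Y ** 3) + defocus_w20 * (X ** 2 + Y ** 2))
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field, s=(2 * n, 2 * n)))) ** 2
    return psf / psf.sum()

# compare the in-focus and strongly defocused PSFs: nearly identical shapes
psf_a = psf_with_cubic_mask(alpha=5.0, defocus_w20=0.0)
psf_b = psf_with_cubic_mask(alpha=5.0, defocus_w20=2.0)
```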
APA, Harvard, Vancouver, ISO, and other styles
48

Abouseif, Akram. "Emerging DSP techniques for multi-core fiber transmission systems." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT013.

Full text
Abstract:
Optical communication systems have seen several phases in the last decades. It is predictable that the optical systems as we know will reach the non-linear capacity limits. At the moment, the space is the last degree of freedom to be implemented in order to keep delivering the upcoming capacity demands for the next years. Therefore, intensive researches are conducted to explore all the aspects concerning the deployment of the space-division multiplexing (SDM) system. Several impairments impact the SDM systems as a result from the interaction of the spatial channels which degrades the system performance. In this thesis, we focus on the multi-core fibers (MCFs) as the most promising approach to be the first representative of the SDM system. We present different digital and optical solutions to mitigate the non-unitary effect known as the core dependent loss (CDL). The first part is dedicated to study the performance of the MCF transmission taking into account the propagating impairments that impact the MCF systems. We propose a channel model that helps to identify the MCFs system. The second part is devoted to optical technique to enhance the transmission performance with an optimal solution. After, we introduced digital techniques for further enhancement, the Zero Forcing pre-compensation and the space-time coding for further CDL mitigation. All the simulation results are validated analytically by deriving the error probability upper bounds
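As a rough illustration of the Zero Forcing pre-compensation idea mentioned above, the sketch below pre-multiplies the transmitted symbols by the inverse of a toy MCF channel matrix so that every core sees the same effective gain. The 4-core loss profile, the unitary crosstalk model, and the noise level are illustrative assumptions, not the channel model developed in the thesis.

```python
# Sketch: Zero Forcing pre-compensation over a toy multi-core fiber channel
# with core dependent loss (CDL). All numbers are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_cores = 4

# Toy MCF channel: per-core losses (the non-unitary CDL part) combined with
# a random unitary mixing that mimics inter-core crosstalk.
losses_db = np.array([0.0, -0.5, -1.0, -2.0])
L = np.diag(10 ** (losses_db / 20))
Q, _ = np.linalg.qr(rng.normal(size=(n_cores, n_cores)))
H = L @ Q

# Unit-power QPSK symbols on each core.
bits = rng.integers(0, 2, size=(2, n_cores, 1000))
x = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

# ZF pre-compensation: pre-multiply by H^-1, rescaled so the total transmit
# power is unchanged; the end-to-end channel then becomes c * identity.
Hinv = np.linalg.inv(H)
c = np.sqrt(n_cores) / np.linalg.norm(Hinv)
P = c * Hinv

noise = 0.05 * (rng.normal(size=x.shape) + 1j * rng.normal(size=x.shape))
y = H @ (P @ x) + noise

print(np.mean(np.abs(y - c * x) ** 2, axis=1))  # per-core MSE, now equalized
```

With the pre-compensation in place, the per-core error variances collapse to the noise floor, which is the sense in which ZF removes the CDL spread between cores.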
APA, Harvard, Vancouver, ISO, and other styles
49

Kurth, Mathias. "Contention techniques for opportunistic communication in wireless mesh networks." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2012. http://dx.doi.org/10.18452/16458.

Full text
Abstract:
In the field of wireless communication, tremendous progress can be observed, especially at the lower layers: innovative physical layer (PHY) concepts and technologies are rapidly assimilated in cellular networks. Wireless mesh networks (WMNs), on the other hand, cannot keep up with this pace of innovation at the PHY due to their flat and decentralized architecture. Many innovative PHY technologies rely on multi-user communication, so the established abstraction of the network stack does not map well onto WMNs; in particular, the scheduling problem in WMNs is inherently complex. Surprisingly, carrier sense multiple access (CSMA) in WMNs is asymptotically utility-optimal even though it has low computational complexity and does not involve message exchange. Hence, the question arises whether CSMA and the underlying concept of contention allow advanced PHY technologies to be assimilated into WMNs. In this thesis, we design and evaluate contention protocols based on CSMA for opportunistic communication in WMNs. Opportunistic communication is a technique that relies on multi-user diversity in order to exploit the inherent characteristics of the wireless channel. In particular, we consider opportunistic routing (OR) in memoryless channels and opportunistic scheduling (OS) in slow-fading channels, with the goal of maximizing both throughput and fairness of elastic unicast traffic flows. We present models for congestion control, routing, and contention-based opportunistic communication in WMNs. Using IEEE 802.11 as an example, we illustrate how the cross-layer algorithms can be prototyped within a network simulator. Our evaluation results lead to the conclusion that contention-based opportunistic communication is feasible. Furthermore, the proposed protocols increase both throughput and fairness in comparison to state-of-the-art approaches such as DCF, DSR, ExOR, RBAR, and ETT.
APA, Harvard, Vancouver, ISO, and other styles
50

Price, Nathaniel Bouton. "Conception sous incertitudes de modèles avec prise en compte des tests futurs et des re-conceptions." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEM012/document.

Full text
Abstract:
At the initial design stage, engineers often rely on low-fidelity models that have high uncertainty. In a deterministic, safety-margin-based design approach, uncertainty is implicitly compensated for by using fixed conservative values in place of aleatory variables and by ensuring the design satisfies a safety margin with respect to the design constraints. After an initial design is selected, high-fidelity modeling is performed to reduce epistemic uncertainty and ensure the design achieves the targeted level of safety. High-fidelity modeling is used to calibrate low-fidelity models and to prescribe redesign when tests are not passed. After calibration, the reduced epistemic model uncertainty can be leveraged through redesign to restore safety or improve performance; however, redesign may involve substantial costs or delays. In this work, the possible effects of a future test and redesign are considered while the initial design is optimized using only a low-fidelity model. Chapters 1 and 2 give the context of the work and a literature review. Chapter 3 analyzes the dilemma of whether to start with a more conservative initial design and possibly redesign for performance, or to start with a less conservative initial design and risk redesigning to restore safety. Chapter 4 develops a generalized method for simulating a future test and possible redesign that accounts for spatial correlations in the epistemic model error. Chapter 5 applies the method to the design of a sounding rocket under mixed epistemic model uncertainty and aleatory parameter uncertainty. Chapter 6 concludes the work.
APA, Harvard, Vancouver, ISO, and other styles