Dissertations / Theses on the topic 'Robots Mathematical models'




Consult the top 50 dissertations / theses for your research on the topic 'Robots Mathematical models.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Ma, Ou. "Dynamics of serial-type robotic manipulators." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63771.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhu, Wenkai, and 朱文凯. "Performance optimisation of mobile robots in dynamic environments." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B49617904.

Abstract:
Applications of robot teams have flourished over the past three decades, but the demand for viable systems to coordinate teams of mobile robots in dynamic environments persists. To meet this challenge, this project proposes a performance optimisation system for mobile robots to make team performance more reliable and efficient in dynamic environments. A wide range of applications will benefit from the system, such as logistics, military operations, and disaster rescue. The performance optimisation system comprises three main modules: (1) a task allocation module to assign tasks to robots, (2) a motion planning module to navigate robots, and (3) a graphical simulation module to visualise robot operations and to validate the methodologies of performance optimisation. The task allocation module features a closed-loop bid adjustment mechanism for auctioning tasks to capable robots. Unlike most traditional open-loop methods, each robot evaluates its own performance after completing a task and uses this as feedback correction to improve its future bid prices for similar tasks. Moreover, a series of adjustments are weighted and averaged to damp out drastic deviations due to operational uncertainties. As such, the accuracy of bid prices is improved, and tasks are more likely to be allocated to suitable robots that are expected to perform better by offering more reliable bids. The motion planning module is bio-inspired and intelligent, characterised by detection of imminent neighbours and design flexibility of virtual forces to enhance the responsiveness of robot motions. Firstly, while similar methods unnecessarily require each robot to consider all of its neighbours, the detection of imminent neighbours instead enables each robot, mimicking living creatures, to identify and consider only those imminent neighbours which pose collision dangers. Hence, redundant computations are reduced and undesirable robot movements eliminated.
Secondly, to imitate the responsive motion behaviours of creatures, a virtual force method is adopted. It combines virtual attractive forces that drive the robots towards their targets and, simultaneously, exerts virtual repulsive forces to steer the robots away from one another. To enhance the design flexibility of the virtual forces, a two-section function and, more significantly, a spline-based method are proposed. The shapes of the force curves can be flexibly designed and adjusted to generate smooth forces with desirable magnitudes. Accordingly, robot motions are streamlined and the likelihood of robot collisions is reduced. The graphical simulation module simulates and visualises robot team operations, and validates the proposed methodologies. It effectively emulates the operational scenarios and enables engineers to tackle downstream problems earlier in the design cycle. Furthermore, compared with a physical counterpart, the time and cost of robotic system development in the simulation module are considerably reduced. The performance optimisation system is indeed viable in improving the operational safety and efficiency of robot teams in dynamic environments. It has substantially pushed the frontiers of this field, and may be adapted as an intelligent control software system for practical operations of physical robot teams to benefit various applications.
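The virtual force idea described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the thesis's actual controller: the gains, the safety radius, and the linear repulsion profile are assumptions (the thesis proposes two-section and spline-based force curves instead).

```python
import numpy as np

def virtual_force(pos, target, neighbors, d_safe=1.0, k_att=1.0, k_rep=2.0):
    """Sum a virtual attractive force toward the target with repulsive
    forces from 'imminent' neighbors only (those closer than d_safe)."""
    force = k_att * (target - pos)                    # attraction to target
    for n in neighbors:
        offset = pos - n
        d = np.linalg.norm(offset)
        if 0 < d < d_safe:                            # imminent neighbor only
            # repulsion grows smoothly as the gap closes
            force += k_rep * (d_safe - d) / d_safe * offset / d
    return force

pos = np.array([0.0, 0.0])
target = np.array([5.0, 0.0])
neighbors = [np.array([0.5, 0.1]), np.array([4.0, 4.0])]  # second is not imminent
f = virtual_force(pos, target, neighbors)
```

Only the nearby neighbor deflects the robot; the distant one is ignored entirely, which is the computational saving the abstract describes.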
HKU 3 Minute Thesis Award, 1st Runner-up (2012)
published_or_final_version
Industrial and Manufacturing Systems Engineering
Doctoral
Doctor of Philosophy
3

Sood, Gaurav. "Simulation and control of a hip actuated robotic model for the study of human standing posture." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99794.

Abstract:
Quiet human stance relies on feedback from the eyes, skin, muscles, and inner ear, and the resulting control combines several strategies that enable a person to remain standing. This thesis presents the simulation and control of a hip-actuated robotic model of human standing posture.
The first part of the thesis is devoted to recalling basic elements of the human balance system and to describe the balance strategies it uses to maintain an upright stance. Of the strategies presented, we consider the hip strategy which motivated the formulation of a hip actuated robot. An investigation into the control of nonlinear underactuated robots by linear controllers is done to verify the range and efficiency of the controlled system.
The second part of the thesis includes the investigation of two simplified models of the robot. Results using linear state feedback control are presented. The two models used are compared to clarify the use of one over the other.
We found that, for linear controls, the size of the region of convergence decreased for underactuated systems of increasing complexity. For our four-degree-of-freedom robot, the region of convergence is 2.3 degrees for the actuated joints and 1 degree for the unactuated joints. Our system is Lyapunov stable when the fully simplified model is assumed.
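A minimal sketch of linear state feedback stabilizing an underactuated balance model, in the spirit of the linear controllers discussed above. The single-link inverted pendulum, its parameters, and the LQR weights are illustrative assumptions, not the thesis's four-degree-of-freedom hip model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Linearized single-link inverted pendulum about upright: x = [theta, theta_dot]
g, l, m = 9.81, 1.0, 70.0             # illustrative values, not the thesis's
A = np.array([[0.0, 1.0],
              [g / l, 0.0]])          # upright equilibrium is unstable
B = np.array([[0.0],
              [1.0 / (m * l**2)]])    # joint torque input

Q = np.diag([10.0, 1.0])              # penalize tilt more than tilt rate
R = np.array([[0.01]])
P = solve_continuous_are(A, B, Q, R)  # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)       # LQR gain, u = -K x

eigs = np.linalg.eigvals(A - B @ K)   # closed loop should be stable
```

The open-loop plant has an eigenvalue in the right half-plane; the feedback gain moves all closed-loop eigenvalues into the left half-plane, which is the local (region-of-convergence limited) stabilization the abstract reports.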
4

Guo, Lin 1962. "Controller estimation for the adaptive control of robotic manipulators." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=63860.

5

Yang, Xuedong. "Modeling and control of two-axis belt-drive gantry robots." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/13061.

6

朱國基 and Kwok-kei Chu. "Design and control of a six-legged mobile robot." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31225895.

7

Carey, Mara L. "An enhanced integrated-circuit implementation of muscular contraction." Thesis, Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/15507.

8

Rusaw, Shawn. "Sensor-based motion planning via nonsmooth analysis." Thesis, University of Oxford, 2002. http://ora.ox.ac.uk/objects/uuid:46fa490d-c4ca-45ad-9cd5-b1f11920863d.

Abstract:
In this thesis we present a novel approach to sensor-based motion planning developed using the mathematical tools provided by the field of nonsmooth analysis. The work is based on a broad body of background material developed using the tools of differential topology (smooth analysis), that is limited to simple cases like a point or circular robot. Nonsmooth analysis is required to extend this background work to the case of a polygonal robot moving amidst polygonal obstacles. We present a detailed nonsmooth analysis of the distance function for arbitrary configuration spaces and use this analysis to develop a planner for a rotating and translating polygonal mobile robot. Using the tools of nonsmooth analysis, we then describe a one-dimensional nonsmooth roadmap of the robot's freespace called the Nonsmooth Critical Set + Nonsmooth Generalised Voronoi Graph (NCRIT+NGVG) where the robot is equidistant to a number of obstacles, in a critical configuration or passing between two obstacles. We then use the related field of nonsmooth control theory to develop several provably stable control laws for following and exploring the nonsmooth roadmap. Finally, we implement a motion planner in simulation and for a real polygonal mobile robot, thus verifying the utility and practicality of the nonsmooth roadmap.
9

Ngan, Choi-chik, and 顔才績. "A hidden Markov model approach to force-based contact recognition for intelligent robotic assembly." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2002. http://hub.hku.hk/bib/B31243496.

10

Feng, Jingbin. "Quasi-Static Deflection Compensation Control of Flexible Manipulator." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/4759.

Abstract:
The growing need for high-performance robots in industrial applications has led to designs of lightweight robot arms. However, the lightweight robot arm introduces accuracy and vibration problems. The classical robot design and control method based on the rigid-body assumption is no longer satisfactory for lightweight manipulators. The effects of flexibility of lightweight manipulators have been an active research area in recent years. A new approach to correct the quasi-static position and orientation error of the end-effector of a manipulator with flexible links is studied in this project. In this approach, strain gages are used to monitor the elastic reactions of the flexible links due to the weight of the manipulator and the payload in real time; the errors are then compensated on-line by a control algorithm. Although this approach is designed to work for general loading conditions, only the bending deflection in a plane is investigated in detail. It is found that a minimum of two strain gages per link is needed to monitor the deflection of a robot arm subjected to bending. A mathematical model relating the deflections and strains is developed using Castigliano's theorem of least work. The parameters of the governing equations are obtained using an identification method. With the identification method, the geometric details of the robot arms and the carried load need not be known. The deflections monitored by strain gages are fed back to the kinematic model of the manipulator to find the position and orientation of the end-effector. A control algorithm is developed to compensate for the deflections. The inverse kinematics that includes deflections as variables is solved in closed form. If the deflections at the target position are known, this inverse kinematics will generate the exact joint command for the flexible manipulator.
Since the deflections of the robot arms at the target position are unknown ahead of time, the current deflections at each sampling time are used to predict the deflections at the target position, and the joint command is modified until the required accuracy is obtained. An experiment is set up to verify the mathematical model relating the strains to the deflections. The results of the experiment show good agreement with the model. The compensation control algorithm is first simulated in a computer program. The simulation also shows good convergence. An experimental manipulator with two flexible links is built to prove this approach. The experimental results show that this compensation control improves the position accuracy of the flexible manipulator significantly. In brief, the advantages of this approach are: the deflections can be monitored without measuring the payload directly and without detailed knowledge of the link geometry; the manipulator calibrates itself with minimum human intervention; the compensation control algorithm can be easily integrated with the existing uncompensated rigid-body algorithm; and it is inexpensive and practical to implement on manipulators installed in workplaces.
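The compensation loop described above (measure strain, estimate the droop, correct the joint command until the tip reaches the target) can be sketched on a one-degree-of-freedom toy. The strain model, the strain-to-deflection gain, and the kinematics below are hypothetical stand-ins for the thesis's identified Castigliano-based model.

```python
# Hypothetical 1-DOF illustration: the tip droops by a load-dependent
# deflection estimated from strain readings (delta = c * strain).
c = 0.8                                   # identified strain-to-deflection gain

def measured_strain(q):
    # stand-in for strain gauges: droop grows slightly with reach q
    return 0.05 * (1.0 + 0.1 * q)

def tip_position(q):
    return q - c * measured_strain(q)     # rigid kinematics minus droop

def compensate(target, tol=1e-6, max_iter=50):
    q = target                            # start from the rigid-body command
    for _ in range(max_iter):
        err = target - tip_position(q)
        if abs(err) < tol:
            break
        q += err                          # correct the command by current error
    return q

q_cmd = compensate(1.0)
```

Because each correction shrinks the residual error by a large factor, the commanded joint value converges in a few iterations, mirroring the good convergence the abstract reports for the simulated controller.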
11

Wu, Wencen. "Bio-inspired cooperative exploration of noisy scalar fields." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/48940.

Abstract:
A fundamental problem in mobile robotics is the exploration of unknown fields that might be inaccessible or hostile to humans. Exploration missions of great importance include geological survey, disaster prediction and recovery, and search and rescue. For missions in relatively large regions, mobile sensor networks (MSN) are ideal candidates. The basic idea of MSN is that mobile robots form a sensor network that collects information, meanwhile, the behaviors of the mobile robots adapt to changes in the environment. To design feasible motion patterns and control of MSN, we draw inspiration from biology, where animal groups demonstrate amazingly complex but adaptive collective behaviors to changing environments. The main contributions of this thesis include platform independent mathematical models for the coupled motion-sensing dynamics of MSN and biologically-inspired provably convergent cooperative control and filtering algorithms for MSN exploring unknown scalar fields in both 2D and 3D spaces. We introduce a novel model of behaviors of mobile agents that leads to fundamental theoretical results for evaluating the feasibility and difficulty of exploring a field using MSN. Under this framework, we propose and implement source seeking algorithms using MSN inspired by behaviors of fish schools. To balance the cost and performance in exploration tasks, a switching strategy, which allows the mobile sensing agents to switch between individual and cooperative exploration, is developed. Compared to fixed strategies, the switching strategy brings in more flexibility in engineering design. To reveal the geometry of 3D spaces, we propose a control and sensing co-design for MSN to detect and track a line of curvature on a desired level surface.
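One ingredient of cooperative scalar-field exploration is estimating the local field gradient from a formation's point measurements, which is what enables source seeking. The least-squares sketch below is a generic illustration under an assumed quadratic field, not the thesis's specific cooperative filter.

```python
import numpy as np

def field(p):
    """Unknown scalar field (illustrative): peaks at the source (3, 2)."""
    return -np.linalg.norm(p - np.array([3.0, 2.0])) ** 2

def estimate_gradient(positions, readings):
    """Least-squares gradient estimate from a formation's point samples."""
    center = positions.mean(axis=0)
    A = positions - center                   # offsets from formation center
    b = readings - readings.mean()           # centered field readings
    grad, *_ = np.linalg.lstsq(A, b, rcond=None)
    return grad

# three agents in a small formation near (0, 0); source at (3, 2)
positions = np.array([[0.1, 0.0], [-0.05, 0.1], [-0.05, -0.1]])
grad = estimate_gradient(positions, np.array([field(p) for p in positions]))
```

Near the origin the true gradient of this field is 2·(source − p) ≈ (6, 4); the three-agent estimate recovers it, so steering along `grad` moves the formation toward the source.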
12

Riechel, Andrew T. "Force-Feasible Workspace Analysis and Motor Mount Disturbance Compensation for Point-Mass Cable Robots." Thesis, Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/5243.

Abstract:
Cable-actuated manipulators (or 'cable robots') constitute a relatively new class of robots which use motors, located at fixed remote locations, to manipulate an end-effector by extending or retracting cables. These manipulators possess a number of unique properties which make them well suited to tasks involving high payloads, large workspaces, and dangerous or contaminated environments. However, a number of challenges exist which have limited the mainstream emergence of cable robots. This thesis addresses two of the most important of these issues: workspace analysis and disturbance compensation. Workspace issues are particularly important, as many large-scale applications require the end-effector to operate in regions of a particular shape, and to exert certain minimum forces throughout those regions. The 'Force-Feasible Workspace' represents the set of end-effector positions, for a given robot design, for which the robot can exert a set of required forces on its environment. This can be considered the robot's 'usable' workspace, and an analysis of this workspace's shape for point-mass cable robots is therefore presented to facilitate optimal cable robot design. Numerical simulation results are also presented to validate the analytical results, and to aid visualization of certain complex workspace shapes. Some cable robot applications may require mounting motors to moving bases (e.g., mobile robots) or other surfaces which are subject to disturbances (e.g., helicopters or crane arms). Such disturbances can propagate to the end-effector and cause undesired motion, so the rejection of motor mount disturbances is also of interest. This thesis presents a strategy for measuring these disturbances and compensating for them.
General approaches and implementation issues are explored qualitatively with a simple one-degree-of-freedom prototype (including a strategy for mitigating accelerometer drift), and quantitative simulation results are presented as a proof of concept.
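For a point-mass cable robot, the force-feasibility test described above reduces to asking whether nonnegative (and bounded) cable tensions can produce a required force, since cables can only pull. The sketch below phrases this as a feasibility linear program; the anchor layout and tension limit are illustrative, not from the thesis.

```python
import numpy as np
from scipy.optimize import linprog

def force_feasible(anchors, p, f, t_max=100.0):
    """Check whether a point-mass at p can exert force f with cable
    tensions in [0, t_max]; each cable runs from p to a fixed anchor."""
    # columns of U are unit vectors from p toward each anchor
    U = np.array([(a - p) / np.linalg.norm(a - p) for a in anchors]).T
    # feasibility LP (zero objective): find t with U t = f, 0 <= t <= t_max
    res = linprog(np.zeros(len(anchors)), A_eq=U, b_eq=f,
                  bounds=[(0.0, t_max)] * len(anchors))
    return res.status == 0

anchors = [np.array(a, dtype=float) for a in [(0, 2), (2, 2), (1, 0)]]
p = np.array([1.0, 1.0])
inside = force_feasible(anchors, p, np.array([0.0, 1.0]))
```

Sweeping `p` over a grid and shading the feasible points traces out exactly the kind of force-feasible workspace shape the thesis analyzes.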
13

Yun, Yuan. "Kinematics, dynamics and control analysis for micro positioning and active vibration isolation using parallel manipulators." Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2542954.

14

Scrivens, Jevin Eugene. "The Interactions of Stance Width and Feedback Control Gain: A Modeling Study of Bipedal Postural Control." Diss., Available online, Georgia Institute of Technology, 2007, 2007. http://etd.gatech.edu/theses/available/etd-07082007-202007/.

Abstract:
Thesis (Ph. D.)--Mechanical Engineering, Georgia Institute of Technology, 2008.
Wayne J. Book, Committee Member ; Young-Hui Chang, Committee Member ; T. Richard Nichols, Committee Member ; Lena H. Ting, Committee Co-Chair ; Stephen P. DeWeerth, Committee Co-Chair.
15

Nikolaidis, Stefanos. "Mathematical Models of Adaptation in Human-Robot Collaboration." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1121.

Abstract:
While much work in human-robot interaction has focused on leader-follower teamwork models, the recent advancement of robotic systems that have access to vast amounts of information suggests the need for robots that take into account the quality of human decision making and actively guide people towards better ways of doing their task. This thesis proposes an equal-partners model, where human and robot engage in a dance of inference and action, and focuses on one particular instance of this dance: the robot adapts its own actions by estimating the probability of the human adapting to the robot. We start with a bounded-memory model of human adaptation parameterized by the human's adaptability: the probability of the human switching towards a strategy newly demonstrated by the robot. We then examine more subtle forms of adaptation, where the human teammate adapts to the robot without replicating the robot's policy. We model the interaction as a repeated game, and present an optimal policy computation algorithm whose complexity is linear in the number of robot actions. Integrating these models into robot action selection allows for human-robot mutual adaptation. Human subject experiments in a variety of collaboration and shared-autonomy settings show that mutual adaptation significantly improves human-robot team performance, compared to one-way robot adaptation to the human.
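The adaptability parameter above can be illustrated with a simple discrete Bayesian estimate: the robot watches whether the human switches to its demonstrated strategy and updates a belief over the adaptability. The grid, prior, and observation sequence below are illustrative, not the thesis's model or experimental data.

```python
import numpy as np

def update_belief(belief, alphas, switched):
    """Bayes update: each demo, the human switches with probability alpha."""
    likelihood = alphas if switched else (1.0 - alphas)
    posterior = belief * likelihood
    return posterior / posterior.sum()

alphas = np.linspace(0.05, 0.95, 19)        # candidate adaptability values
belief = np.full(len(alphas), 1.0 / len(alphas))   # uniform prior

observed = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]   # 8 switches in 10 demonstrations
for s in observed:
    belief = update_belief(belief, alphas, switched=bool(s))

estimate = float(alphas @ belief)           # posterior mean adaptability
```

A robot holding this belief can choose whether to demonstrate a better strategy (worthwhile for adaptable humans) or to comply with the human's current strategy, which is the trade-off mutual adaptation manages.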
16

Huang, Yifan. "A Mathematical Model to Study Meshed-Body Worm Robots." Case Western Reserve University School of Graduate Studies / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=case1499877580576043.

17

Khalaf, Abdelbaset Abdelrahem. "Evidence based mathematical maintenance model for medical equipement." Versailles-St Quentin en Yvelines, 2012. http://www.theses.fr/2012VERS0036.

Abstract:
Although medical equipment maintenance has been well planned and executed for more than 30 years, very few studies have been conducted to measure and evaluate its effectiveness in terms of reliability and availability for service delivery. The ongoing unresolved debate in clinical engineering is whether preventive maintenance (PM) is actually necessary and, if so, how often and which tasks need to be performed. A mathematical maintenance modelling approach is used to analyse the survival probability of various types of medical equipment. This approach allows exploring the impact of PM, corrective maintenance (CM), and combined PM/CM on the availability of equipment, and will contribute to the intensified debate regarding PM. Maintenance strategies are analysed and a new failure-cost model is developed, which allows adopting appropriate PM intervals for various types of medical equipment. The analytical model for calculating the number of failures and the costs associated with PM and CM is a significant contribution. The optimisation problem related to preventive maintenance scheduling was solved using a mixed-integer mathematical programming solver and compared to a proposed greedy algorithm. Simulation results based on the survival model show that the greedy algorithm gives the same schedule plan as the mixed-integer approach.
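As a toy illustration of the PM/CM cost trade-off the abstract describes, one can pick a preventive-maintenance interval minimizing expected cost per unit time under a Weibull failure model with minimal repair between PMs. This is a standard textbook formulation with made-up parameters, not the thesis's evidence-based model.

```python
import numpy as np

# Illustrative parameters: Weibull shape/scale (hours) and PM vs CM costs
beta, eta = 2.0, 1000.0
c_pm, c_cm = 100.0, 900.0

def cost_rate(T):
    """Expected cost per unit time for PM every T hours with minimal repair:
    PM cost plus CM cost times the Weibull cumulative hazard, over T."""
    expected_failures = (T / eta) ** beta     # cumulative hazard H(T)
    return (c_pm + c_cm * expected_failures) / T

Ts = np.linspace(100, 3000, 2000)
T_star = Ts[np.argmin([cost_rate(T) for T in Ts])]
```

For beta = 2 the optimum has a closed form, T* = eta * sqrt(c_pm / c_cm) ≈ 333 hours here: maintaining too often wastes PM cost, too rarely incurs corrective repairs, and the grid search lands on the balance point.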
18

Mitka, Darius. "Roboto valdymo sistema." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2004. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2004~D_20040621_172148-76217.

Abstract:
This master's thesis reviews various industrial robot constructions and the parameters that characterize them. Robotic systems and the control of flexible production are discussed. The advantages and disadvantages of various robot drives and their control are analyzed. In the practical part, an original global movement platform for a robot is proposed, and an algorithm for handling two flexible production bays is created. The static characteristics of the linear drive used in the platform are calculated. Using the software package Matlab Simulink, a model of a symmetrical linear induction motor (LIM) is created and its dynamic characteristics are obtained. The concluding part presents conclusions and suggestions.
19

Alnor, Harald. "Statistically robust Pseudo Linear Identification." Thesis, Virginia Tech, 1989. http://hdl.handle.net/10919/44697.

Abstract:

It is common to assume that the noise disturbing measuring devices is of a Gaussian nature. But this assumption is not always fulfilled. A few examples are the cases where the measurement device fails periodically, the data transmission from device to microprocessor fails, or the A/D conversion fails. In these cases the noise will no longer be Gaussian distributed; rather, the noise will be a mixture of Gaussian noise and data not related to the physical process. This poses a problem for estimators derived under the Gaussian assumption, in the sense that these estimators are likely to produce highly biased estimates in a non-Gaussian environment.

This thesis devises a way to robustify the Pseudo Linear Identification algorithm (PLID), a joint parameter and state estimator of Kalman filter type. The PLID algorithm is originally derived under a Gaussian noise assumption. The PLID algorithm is made robust by filtering the measurements through a nonlinear odd symmetric function, called the mb function, and by letting the covariance update depend on how far away the measurement is from the prediction. In the original PLID the measurements are used unfiltered in the covariance calculation.
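The robustification idea (pass the normalized innovation through a bounded odd function and down-weight the covariance update accordingly) can be sketched on a scalar random-walk filter. The Huber-style clip below stands in for the thesis's mb function, whose exact form is not given here, and the filter is a simplified illustration rather than PLID itself.

```python
import numpy as np

def psi(r, c=1.5):
    """Odd, saturating influence function (Huber-style clip)."""
    return np.clip(r, -c, c)

def robust_filter(measurements, q=0.01, r=1.0):
    """Scalar random-walk Kalman filter with a robustified update."""
    x, p = 0.0, 1.0
    for z in measurements:
        p += q                               # predict (random-walk state)
        s = p + r                            # innovation variance
        k = p / s                            # Kalman gain
        resid = (z - x) / np.sqrt(s)         # normalized innovation
        x += k * np.sqrt(s) * psi(resid)     # clipped state update
        # down-weight the covariance update when the residual is clipped;
        # w = 1 (unclipped) recovers the standard p = (1 - k) p update
        w = psi(resid) / resid if resid != 0 else 1.0
        p *= (1.0 - w * k)
    return x

clean = [1.0] * 20
spiked = [1.0] * 10 + [50.0] + [1.0] * 9     # one gross outlier
x_clean = robust_filter(clean)
x_spiked = robust_filter(spiked)
```

On clean data the clip never engages and the filter behaves like an ordinary Kalman filter; the single outlier of 50 moves the estimate only slightly, instead of dragging it far off as an unclipped update would.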
Master of Science

20

Nifong, Nathaniel H. "Learning General Features From Images and Audio With Stacked Denoising Autoencoders." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1550.

Abstract:
One of the most impressive qualities of the brain is its neuro-plasticity. The neocortex has roughly the same structure throughout its whole surface, yet it is involved in a variety of different tasks from vision to motor control, and regions which once performed one task can learn to perform another. Machine learning algorithms which aim to be plausible models of the neocortex should also display this plasticity. One such candidate is the stacked denoising autoencoder (SDA). SDAs have shown promising results in the field of machine perception, where they have been used to learn abstract features from unlabeled data. In this thesis I develop a flexible distributed implementation of an SDA and train it on images and audio spectrograms to experimentally determine properties comparable to neuro-plasticity. Specifically, I compare the visual-auditory generalization of a multi-level denoising autoencoder trained with greedy, layer-wise pre-training (GLWPT) to one trained without it. I test the hypothesis that multi-modal networks will perform better than uni-modal networks due to the greater generality of the features that may be learned. Furthermore, I also test the hypothesis that the magnitude of improvement gained from this multi-modal training is greater when GLWPT is applied than when it is not. My findings indicate that these hypotheses were not confirmed, but that GLWPT still helps multi-modal networks adapt to their second sensory modality.
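A single tied-weight denoising autoencoder, the building block that is stacked to form an SDA, can be written in a few lines of NumPy: corrupt the input with masking noise, then train the network to reconstruct the clean input. The toy patterns, layer sizes, masking rate, and learning rate are all illustrative; the thesis's distributed multi-layer implementation is far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

protos = np.kron(np.eye(4), np.ones((1, 4)))   # four 16-dim block patterns
X = protos[rng.integers(0, 4, size=200)]       # toy binary "images"
d, h, n = X.shape[1], 8, X.shape[0]

W = rng.normal(0.0, 0.1, size=(d, h))          # tied encoder/decoder weights
b, c = np.zeros(h), np.zeros(d)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def reconstruct(V):
    return sigmoid(sigmoid(V @ W + b) @ W.T + c)

err0 = np.mean((reconstruct(X) - X) ** 2)      # error before training

lr = 0.5
for _ in range(300):
    Xc = X * (rng.random(X.shape) > 0.3)       # masking corruption
    H = sigmoid(Xc @ W + b)                    # encode corrupted input
    Y = sigmoid(H @ W.T + c)                   # decode with tied weights
    G = (Y - X) / n                            # sigmoid + cross-entropy grad
    dHpre = (G @ W) * H * (1.0 - H)            # backprop into encoder
    W -= lr * (Xc.T @ dHpre + G.T @ H)         # tied encoder + decoder grads
    b -= lr * dHpre.sum(axis=0)
    c -= lr * G.sum(axis=0)

err1 = np.mean((reconstruct(X) - X) ** 2)      # error after training
```

Stacking means feeding the hidden activations `H` of one trained layer as the input of the next; training each layer this way is the greedy layer-wise pre-training (GLWPT) the abstract evaluates.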
21

Salehzadeh, Nobari Elnaz. "Development of an online progressive mathematical model of needle deflection for application to robotic-assisted percutaneous interventions." Thesis, Imperial College London, 2015. http://hdl.handle.net/10044/1/29478.

Abstract:
A highly flexible multipart needle is under development in the Mechatronics in Medicine Laboratory at Imperial College, with the aim to achieve multi-curvature trajectories inside biological soft tissue, such as to avoid obstacles during surgery. Currently, there is no dedicated software or analytical methodology for the analysis of the needle's behaviour during the insertion process, which is instead described empirically on the basis of experimental trials on synthetic tissue phantoms. This analysis is crucial for needle and insertion trajectory design purposes. It is proposed that a real-time, progressive, mathematical model of the needle deflection during insertion be developed. This model can serve three purposes, namely, offline needle and trajectory design in a forward solution of the model, when the loads acting on needle from the substrate are known; online, real-time identification of the loads that act on the needle in a reverse solution, when the deflections at discrete points along the needle length are known; and the development of a sensitivity matrix, which enables the calculation of the corrective loads that are required to drive the needle back on track, if any deviations occur away from a predefined trajectory. Previously developed mathematical models of needle deflection inside soft tissue are limited to small deflection and linear strain. In some cases, identical tip path and body shape after full insertion of the needle are assumed. Also, the axial load acting on the needle is either ignored or is calculated from empirical formulae, while its inclusion would render the model nonlinear even for small deflection cases. These nonlinearities are a result of the effects of the axial and transverse forces at the tip being co-dependent, restricting the calculation of the independent effects of each on the needle's deflection. 
As such, a model with small deflection assumptions incorporating tip axial forces can be called 'quasi-nonlinear' and a methodology is proposed here to tackle the identification of such axial force in the linear range. During large deflection of the needle, discrepancies between the shape of the needle after the insertion and its tip path, computed during the insertion, also significantly increase, causing errors in a model based on the assumption that they are the same. Some of the models developed to date have also been dependent on existing or experimentally derived material models of soft tissue developed offline, which is inefficient for surgical applications, where the biological soft tissue can change radically and experimentation on the patient is limited. Conversely, a model is proposed in this thesis which, when solved inversely, provides an estimate for the contact stiffness of the substrate in a real-time manner. The study and the proposed model and techniques involved are limited to two dimensional projections of the needle movements, but can be easily extended to the 3-dimensional case. Results which demonstrate the accuracy and validity of the models developed are provided on the basis of simulations and via experimental trials of a multi-part 2D steering needle in gelatine.
22

Morales, Juan Carlos. "Planning Robust Freight Transportation Operations." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/14107.

Abstract:
This research focuses on fleet management in freight transportation systems. Effective management requires effective planning and control decisions. Plans are often generated using estimates of how the system will evolve in the future; during execution, control decisions need to be made to account for differences between actual realizations and estimates. The benefits of minimum-cost plans can be negated by performing costly adjustments during the operational phase. A planning approach that permits effective control during execution is proposed in this dissertation. This approach is inspired by recent work in robust optimization, and is applied to (i) dynamic asset management and (ii) vehicle routing problems. In practice, fleet management planning is usually decomposed into two parts: the problem of repositioning empty units, and the problem of allocating units to customer demands. An alternative integrated dynamic model for asset management problems is proposed. A computational study provides evidence that operating costs and fleet sizes may be significantly reduced with the integrated approach. However, results also illustrate that not considering inherent demand uncertainty generates fragile plans with potentially costly control decisions. A planning approach for the empty repositioning problem is proposed that incorporates demand and supply uncertainty using intervals around nominal forecasted parameters. The intervals define the uncertainty space for which buffers need to be built into the plan in order to make it a robust plan. Computational evidence suggests that this approach is tractable. The traditional approach to the Vehicle Routing Problem with Stochastic Demands (VRPSD) is through cost expectation minimization. Although this approach is useful for building routes with low expected cost, it does not directly consider the maximum potential cost that a vehicle might incur when traversing the tour. Our approach aims at minimizing the maximum cost.
Computational experiments show that our robust optimization approach generates solutions whose expected costs compare favorably to those obtained with the traditional approach, and that also perform better in worst-case scenarios. We also show how the techniques developed for this problem can be used to address the VRPSD with duration constraints.
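The contrast the abstract draws, choosing a route by expected cost versus by worst-case cost under demand uncertainty, can be illustrated with a toy single-vehicle instance (all data invented; this is not the dissertation's model or algorithm):

```python
import itertools
import math

# Toy VRPSD sketch: a vehicle of capacity Q serves customers whose demands are
# only known to lie in intervals. Under the classical recourse policy the
# vehicle returns to the depot to restock when it runs out. We compare the
# visiting order chosen under nominal (midpoint) demands with the one chosen
# under worst-case (upper-bound) demands.
DEPOT = (0.0, 0.0)
CUSTOMERS = {
    1: {"xy": (0.0, 4.0), "lo": 2, "hi": 6},
    2: {"xy": (3.0, 0.0), "lo": 1, "hi": 5},
    3: {"xy": (4.0, 3.0), "lo": 2, "hi": 4},
}
Q = 8  # vehicle capacity

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_cost(order, demand):
    """Travel cost of serving `order`, restocking at the depot when empty."""
    cost, load, pos = 0.0, Q, DEPOT
    for c in order:
        if demand[c] > load:                # recourse: restock before serving c
            cost += dist(pos, DEPOT)
            load, pos = Q, DEPOT
        cost += dist(pos, CUSTOMERS[c]["xy"])
        load -= demand[c]
        pos = CUSTOMERS[c]["xy"]
    return cost + dist(pos, DEPOT)

nominal = {c: (v["lo"] + v["hi"]) / 2 for c, v in CUSTOMERS.items()}
worst = {c: v["hi"] for c, v in CUSTOMERS.items()}

orders = list(itertools.permutations(CUSTOMERS))
plan_nominal = min(orders, key=lambda o: route_cost(o, nominal))
plan_robust = min(orders, key=lambda o: route_cost(o, worst))
```

By construction, `plan_robust` can never do worse than `plan_nominal` when every demand hits its upper bound, which is the robustness trade-off the abstract describes.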
APA, Harvard, Vancouver, ISO, and other styles
23

Landecker, Will. "Interpretable Machine Learning and Sparse Coding for Computer Vision." PDXScholar, 2014. https://pdxscholar.library.pdx.edu/open_access_etds/1937.

Full text
Abstract:
Machine learning offers many powerful tools for prediction. One of these tools, the binary classifier, is often considered a black box. Although its predictions may be accurate, we might never know why the classifier made a particular prediction. In the first half of this dissertation, I review the state of the art of interpretable methods (methods for explaining why); after noting where the existing methods fall short, I propose a new method for a particular type of black box called additive networks. I offer a proof of trustworthiness for this new method (meaning a proof that my method does not "make up" the logic of the black box when generating an explanation), and verify that its explanations are sound empirically. Sparse coding is part of a family of methods that are believed, by many researchers, to not be black boxes. In the second half of this dissertation, I review sparse coding and its application to the binary classifier. Despite the fact that the goal of sparse coding is to reconstruct data (an entirely different goal than classification), many researchers note that it improves classification accuracy. I investigate this phenomenon, challenging a common assumption in the literature. I show empirically that sparse reconstruction is not necessarily the right intermediate goal, when our ultimate goal is classification. Along the way, I introduce a new sparse coding algorithm that outperforms competing, state-of-the-art algorithms for a variety of important tasks.
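The sparse coding the abstract investigates can be sketched with the standard ISTA iteration (a generic textbook algorithm, not the author's new one); dictionary, signal, and the penalty `lam` below are invented for illustration:

```python
import numpy as np

# Minimal sparse-coding sketch: recover a sparse code for a signal over a
# fixed dictionary with ISTA, i.e. proximal gradient descent on
#   0.5 * ||x - D a||^2 + lam * ||a||_1.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
a_true = np.zeros(50)
a_true[[3, 17, 41]] = [1.5, -2.0, 1.0]   # 3-sparse ground truth
x = D @ a_true

def ista(D, x, lam=0.05, iters=500):
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ a - x)            # gradient of the smooth term
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

a_hat = ista(D, x)
```

The reconstruction `D @ a_hat` is close to `x` while `a_hat` concentrates on the true support, which is the "sparse reconstruction" intermediate goal the dissertation then questions as a proxy for classification.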
APA, Harvard, Vancouver, ISO, and other styles
24

Irigoyen, Eizmendi Javier. "Commande en position et force d'un robot manipulateur d'assemblage." Grenoble 2 : ANRT, 1986. http://catalogue.bnf.fr/ark:/12148/cb37598444q.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Friedbaum, Jesse Robert. "Model Predictive Linear Control with Successive Linearization." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7063.

Full text
Abstract:
Robots have been a revolutionizing force in manufacturing in the 20th and 21st centuries but have proven too dangerous around humans to be used in many other fields, including medicine. We describe a new control algorithm for robots, developed by the Brigham Young University Robotics and Dynamics Laboratory, that has shown potential to make robots less dangerous to humans and suitable for work in more applications. We analyze the computational complexity of this algorithm and find that it could be a feasible control method for even the most complicated robots. We also show conditions on a system that guarantee local stability for this control algorithm.
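The general idea named in the title, model predictive control with successive linearization, can be sketched as follows (a generic textbook scheme, not BYU's actual algorithm; the pendulum dynamics, horizons, and weights are invented, and for brevity the Jacobians are used as a linear model about the origin, a common simplification when regulating to the origin):

```python
import numpy as np

# At each step: linearize the nonlinear dynamics about the current state,
# solve an unconstrained finite-horizon quadratic problem for the input
# sequence, and apply only its first element (receding horizon).
dt = 0.05

def f(x, u):
    th, w = x  # damped pendulum: angle and angular rate
    return np.array([th + dt * w, w + dt * (-9.81 * np.sin(th) - 0.1 * w + u)])

def linearize(x, u, eps=1e-6):
    """Jacobians A = df/dx, B = df/du by central finite differences."""
    n = len(x)
    A = np.zeros((n, n))
    for i in range(n):
        d = np.zeros(n); d[i] = eps
        A[:, i] = (f(x + d, u) - f(x - d, u)) / (2 * eps)
    B = (f(x, u + eps) - f(x, u - eps)) / (2 * eps)
    return A, B.reshape(n, 1)

def mpc_step(x, N=30, q=10.0, r=0.01):
    """One controller step: regulate the pendulum toward the origin."""
    A, B = linearize(x, 0.0)
    n = len(x)
    Phi = np.zeros((N * n, N))     # input-to-state map: block (k, j) is A^(k-j) B
    free = np.zeros(N * n)         # free response: A^(k+1) x
    Ak = np.eye(n)
    for k in range(N):
        free[k*n:(k+1)*n] = (Ak @ A) @ x
        col = B
        for j in range(k, -1, -1):
            Phi[k*n:(k+1)*n, j:j+1] = col
            col = A @ col
        Ak = Ak @ A
    # least squares: minimize q*|free + Phi u|^2 + r*|u|^2
    H = np.vstack([np.sqrt(q) * Phi, np.sqrt(r) * np.eye(N)])
    b = np.concatenate([-np.sqrt(q) * free, np.zeros(N)])
    u = np.linalg.lstsq(H, b, rcond=None)[0]
    return u[0]

x = np.array([0.5, 0.0])
for _ in range(150):
    x = f(x, mpc_step(x))          # closed loop: re-linearize every step
```

Re-linearizing at every step is what keeps the quadratic subproblem cheap, which is the computational-feasibility point the abstract makes.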
APA, Harvard, Vancouver, ISO, and other styles
26

North, Ben. "Learning dynamical models for visual tracking." Thesis, University of Oxford, 1998. http://ora.ox.ac.uk/objects/uuid:6ed12552-4c30-4d80-88ef-7245be2d8fb8.

Full text
Abstract:
Using some form of dynamical model in a visual tracking system is a well-known method for increasing robustness and indeed performance in general. Often, quite simple models are used and can be effective, but prior knowledge of the likely motion of the tracking target can often be exploited by using a specially-tailored model. Specifying such a model by hand, while possible, is a time-consuming and error-prone process. Much more desirable is for an automated system to learn a model from training data. A dynamical model learnt in this manner can also be a source of useful information in its own right, and a set of dynamical models can provide discriminatory power for use in classification problems. Methods exist to perform such learning, but are limited in that they assume the availability of 'ground truth' data. In a visual tracking system, this is rarely the case. A learning system must work from visual data alone, and this thesis develops methods for learning dynamical models while explicitly taking account of the nature of the training data --- they are noisy measurements. The algorithms are developed within two tracking frameworks. The Kalman filter is a simple and fast approach, applicable where the visual clutter is limited. The recently-developed Condensation algorithm is capable of tracking in more demanding situations, and can also employ a wider range of dynamical models than the Kalman filter, for instance multi-mode models. The success of the learning algorithms is demonstrated experimentally. When using a Kalman filter, the dynamical models learnt using the algorithms presented here produce better tracking when compared with those learnt using current methods. Learning directly from training data gathered using Condensation is an entirely new technique, and experiments show that many aspects of a multi-mode system can be successfully identified using very little prior information. 
Significant computational effort is required by the implementation of the methods, and there is scope for improvement in this regard. Other possibilities for future work include investigation of the strong links this work has with learning problems in other areas. Most notable is the study of the 'graphical models' commonly used in expert systems, where the ideas presented here promise to give insight and perhaps lead to new techniques.
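The Kalman filter the abstract uses as its simpler tracking framework has the standard predict/update form below (textbook version only; the constant-velocity model and noise levels are invented, and this is not the thesis's learning algorithm):

```python
import numpy as np

# Constant-velocity Kalman filter tracking a noisy 1-D position.
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
C = np.array([[1.0, 0.0]])              # we observe position only
Q = 1e-3 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # measurement noise covariance

def kalman_filter(ys, x0, P0):
    x, P = x0, P0
    out = []
    for y in ys:
        # predict
        x = A @ x
        P = A @ P @ A.T + Q
        # update with measurement y
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (np.atleast_1d(y) - C @ x)
        P = (np.eye(2) - K @ C) @ P
        out.append(x.copy())
    return np.array(out)

rng = np.random.default_rng(1)
true_pos = 0.5 * np.arange(50)                 # constant velocity 0.5
ys = true_pos + rng.normal(0, 0.5, size=50)    # noisy measurements
xs = kalman_filter(ys, x0=np.zeros(2), P0=np.eye(2))
```

The matrices `A`, `C`, `Q`, `R` are exactly the dynamical-model parameters that, in the thesis, must be learnt from the noisy measurements rather than specified by hand.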
APA, Harvard, Vancouver, ISO, and other styles
27

Valentini, Gabriele. "The Best-of-n Problem in Robot Swarms." Doctoral thesis, Universite Libre de Bruxelles, 2016. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/232502.

Full text
Abstract:
Collective decision making can be seen as a means of designing and understanding swarm robotics systems. While decision-making is generally conceived as the cognitive ability of individual agents to select a belief based only on their preferences and available information, collective decision making is a decentralized cognitive process, whereby an ensemble of agents gathers, shares, and processes information as a single organism and makes a choice that is not attributable to any of its individuals. A principled selection of the rules governing this cognitive process allows the designer to define, shape, and foresee the dynamics of the swarm. We begin this monograph by introducing the reader to the topic of collective decision making. We focus on artificial systems for discrete consensus achievement and review the literature of swarm robotics. In this endeavor, we formalize the best-of-n problem—a generalization of the logic underlying several cognitive problems—and define a taxonomy of its possible variants that are of interest for the design of robot swarms. By leveraging this understanding, we identify the building blocks that are essential to achieve a collective decision addressing the best-of-n problem: option exploration, opinion dissemination, modulation of positive feedback, and an individual decision-making mechanism. We show how a modular perspective of a collective decision-making strategy allows for the systematic modeling of the resulting swarm performance. In doing so, we put forward a modular and model-driven design methodology that allows the designer to study the dynamics of a swarm at different levels of abstraction. Subsequently, we employ the proposed design methodology to derive and to study different collective decision-making strategies for the best-of-n problem. We show how the designed strategies can be readily applied to different real-world scenarios by performing two series of robot experiments. 
In the first series, we use a swarm of 100 robots to tackle a site-selection scenario; in the second series, we show instead how the same strategies apply to a collective perception scenario. We conclude with a discussion of our research contributions and provide future directions of research.
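Two of the building blocks identified above, modulation of positive feedback and a majority-rule individual decision mechanism, can be illustrated in a stripped-down best-of-2 simulation (the update scheme and every parameter are simplifications for illustration, not the dissertation's exact strategies):

```python
import random

QUALITY = {"A": 1.0, "B": 0.5}   # option A is objectively better

def simulate(n_agents=100, steps=20000, seed=7):
    random.seed(seed)
    # start from an even split of opinions
    opinions = ["A"] * (n_agents // 2) + ["B"] * (n_agents - n_agents // 2)
    for _ in range(steps):
        i = random.randrange(n_agents)
        # Dissemination: an agent is heard in proportion to how long it
        # advertises its opinion, i.e. to its option's quality
        # (modulation of positive feedback).
        weights = [QUALITY[o] for o in opinions]
        heard = random.choices(range(n_agents), weights=weights, k=3)
        votes = [opinions[j] for j in heard]
        # Individual mechanism: adopt the majority of the sampled group.
        opinions[i] = max(set(votes), key=votes.count)
    return opinions

final = simulate()
share_best = final.count("A") / len(final)
```

Because agents advertising the better option are heard for longer, the positive feedback drives the swarm from an even split to a consensus on the better option, the qualitative behaviour the modular models above are built to predict.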
Doctorate in Engineering Sciences and Technology
APA, Harvard, Vancouver, ISO, and other styles
28

Hu, Nan. "A unified discrepancy-based approach for balancing efficiency and robustness in state-space modeling estimation, selection, and diagnosis." Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/2224.

Full text
Abstract:
Due to its generality and flexibility, the state-space model has become one of the most popular models in modern time domain analysis for the description and prediction of time series data. The model is often used to characterize processes that can be conceptualized as "signal plus noise," where the realized series is viewed as the manifestation of a latent signal that has been corrupted by observation noise. In the state-space framework, parameter estimation is generally accomplished by maximizing the innovations Gaussian log-likelihood. The maximum likelihood estimator (MLE) is efficient when the normality assumption is satisfied. However, in the presence of contamination, the MLE suffers from a lack of robustness. Basu, Harris, Hjort, and Jones (1998) introduced a discrepancy measure (BHHJ) with a non-negative tuning parameter that regulates the trade-off between robustness and efficiency. In this manuscript, we propose a new parameter estimation procedure based on the BHHJ discrepancy for fitting state-space models. As the tuning parameter is increased, the estimation procedure becomes more robust but less efficient. We investigate the performance of the procedure in an illustrative simulation study. In addition, we propose a numerical method to approximate the asymptotic variance of the estimator, and we provide an approach for choosing an appropriate tuning parameter in practice. We justify these procedures theoretically and investigate their efficacy in simulation studies. Based on the proposed parameter estimation procedure, we then develop a new model selection criterion in the state-space framework. The traditional Akaike information criterion (AIC), where the goodness-of-fit is assessed by the empirical log-likelihood, is not robust to outliers. Our new criterion is comprised of a goodness-of-fit term based on the empirical BHHJ discrepancy, and a penalty term based on both the tuning parameter and the dimension of the candidate model. 
We present a comprehensive simulation study to investigate the performance of the new criterion. In instances where the time series data is contaminated, our proposed model selection criterion is shown to perform favorably relative to AIC. Lastly, using the BHHJ discrepancy based on the chosen tuning parameter, we propose two versions of an influence diagnostic in the state-space framework. Specifically, our diagnostics help to identify cases that influence the recovery of the latent signal, thereby providing initial guidance and insight for further exploration. We illustrate the behavior of these measures in a simulation study.
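The robustness mechanism of the BHHJ (density power divergence) estimator can be sketched for a Gaussian location model with known unit variance. For a pure location family the integral term of the BHHJ objective does not depend on the location parameter, so minimizing the discrepancy reduces to maximizing the sum of the densities raised to the tuning parameter; the limit alpha → 0 recovers the MLE (the sample mean), while larger alpha downweights outliers. The data and the grid search below are illustrative, not the manuscript's state-space procedure:

```python
import numpy as np

def bhhj_location(x, alpha, grid_size=4001):
    """Grid-search minimum-BHHJ estimate of a Gaussian location parameter."""
    grid = np.linspace(x.min(), x.max(), grid_size)
    best, best_score = grid[0], -np.inf
    for theta in grid:
        f = np.exp(-0.5 * (x - theta) ** 2) / np.sqrt(2 * np.pi)
        # alpha = 0: log-likelihood (MLE); alpha > 0: downweighted objective
        score = np.sum(np.log(f)) if alpha == 0 else np.sum(f ** alpha)
        if score > best_score:
            best, best_score = theta, score
    return best

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0.0, 1.0, 95),   # clean observations
                       np.full(5, 12.0)])          # 5% gross outliers

mle = data.mean()                       # dragged toward the outliers
robust = bhhj_location(data, alpha=0.5)
```

With 5% contamination at 12, the sample mean shifts by roughly 0.6 while the alpha = 0.5 estimate stays near the true location 0, which is exactly the efficiency-for-robustness trade the tuning parameter regulates.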
APA, Harvard, Vancouver, ISO, and other styles
29

Entschev, Peter Andreas. "Efficient construction of multi-scale image pyramids for real-time embedded robot vision." Universidade Tecnológica Federal do Paraná, 2013. http://repositorio.utfpr.edu.br/jspui/handle/1/720.

Full text
Abstract:
Detectores de pontos de interesse, ou detectores de keypoints, têm sido de grande interesse para a área de visão robótica embarcada, especialmente aqueles que possuem robustez a variações geométricas, como rotação, transformações afins e mudanças em escala. A detecção de características invariáveis a escala é normalmente realizada com a construção de pirâmides de imagens em multiescala e pela busca exaustiva de extremos no espaço de escala, uma abordagem presente em métodos de reconhecimento de objetos como SIFT e SURF. Esses métodos são capazes de encontrar pontos de interesse bastante robustos, com propriedades adequadas para o reconhecimento de objetos, mas são ao mesmo tempo computacionalmente custosos. Nesse trabalho é apresentado um método eficiente para a construção de pirâmides de imagens em sistemas embarcados, como a plataforma BeagleBoard-xM, de forma similar ao método SIFT. O método aqui apresentado tem como objetivo utilizar técnicas computacionalmente menos custosas e a reutilização de informações previamente processadas de forma eficiente para reduzir a complexidade computacional. Para simplificar o processo de construção de pirâmides, o método utiliza filtros binomiais em substituição aos filtros Gaussianos convencionais utilizados no método SIFT original para calcular múltiplas escalas de uma imagem. Filtros binomiais possuem a vantagem de serem implementáveis utilizando notação ponto-fixo, o que é uma grande vantagem para muitos sistemas embarcados que não possuem suporte nativo a ponto-flutuante. A quantidade de convoluções necessária é reduzida pela reamostragem de escalas já processadas da pirâmide. Após a apresentação do método para construção eficiente de pirâmides, é apresentada uma maneira de implementação eficiente do método em uma plataforma SIMD (Single Instruction, Multiple Data, em português, Instrução Única, Dados Múltiplos) – a plataforma SIMD usada é a extensão ARM Neon disponível no processador ARM Cortex-A8 da BeagleBoard-xM. 
Plataformas SIMD em geral são muito úteis para aplicações multimídia, onde normalmente é necessário realizar a mesma operação em vários elementos, como pixels em uma imagem, permitindo que múltiplos dados sejam processados com uma única instrução do processador. Entretanto, a extensão Neon no processador Cortex-A8 não suporta operações em ponto-flutuante, tendo o método sido cuidadosamente implementado de forma a superar essa limitação. Por fim, alguns resultados sobre o método aqui proposto e método SIFT original são apresentados, incluindo seu desempenho em tempo de execução e repetibilidade de pontos de interesse detectados. Com uma implementação direta (sem o uso da plataforma SIMD), é mostrado que o método aqui apresentado necessita de aproximadamente 1/4 do tempo necessário para construir a pirâmide do método SIFT original, ao mesmo tempo em que repete até 86% dos pontos de interesse. Com uma abordagem completamente implementada em ponto-fixo (incluindo a vetorização com a plataforma SIMD) a repetibilidade chega a 92% dos pontos de interesse do método SIFT original, porém, reduzindo o tempo de processamento para menos de 3%.
Interest point detectors, or keypoint detectors, have been of great interest for embedded robot vision for a long time, especially those which provide robustness against geometrical variations, such as rotation, affine transformations and changes in scale. The detection of scale invariant features is normally done by constructing multi-scale image pyramids and performing an exhaustive search for extrema in the scale space, an approach that is present in object recognition methods such as SIFT and SURF. These methods are able to find very robust interest points with suitable properties for object recognition, but at the same time are computationally expensive. In this work we present an efficient method for the construction of SIFT-like image pyramids in embedded systems such as the BeagleBoard-xM. The method we present here aims at using computationally less expensive techniques and reusing already processed information in an efficient manner in order to reduce the overall computational complexity. To simplify the pyramid building process we use binomial filters instead of conventional Gaussian filters used in the original SIFT method to calculate multiple scales of an image. Binomial filters have the advantage of being able to be implemented by using fixed-point notation, which is a big advantage for many embedded systems that do not provide native floating-point support. We also reduce the amount of convolution operations needed by resampling already processed scales of the pyramid. After presenting our efficient pyramid construction method, we show how to implement it in an efficient manner in an SIMD (Single Instruction, Multiple Data) platform -- the SIMD platform we use is the ARM Neon extension available in the BeagleBoard-xM ARM Cortex-A8 processor. 
SIMD platforms in general are very useful for multimedia applications, where normally it is necessary to perform the same operation over several elements, such as pixels in images, enabling multiple data to be processed with a single instruction of the processor. However, the Neon extension in the Cortex-A8 processor does not support floating-point operations, so the whole method was carefully implemented to overcome this limitation. Finally, we provide some comparison results regarding the method we propose here and the original SIFT approach, including performance regarding execution time and repeatability of detected keypoints. With a straightforward implementation (without the use of the SIMD platform), we show that our method takes approximately 1/4 of the time taken to build the entire original SIFT pyramid, while repeating up to 86% of the interest points found with the original method. With a complete fixed-point approach (including vectorization within the SIMD platform) we show that repeatability reaches up to 92% of the original SIFT keypoints while reducing the processing time to less than 3%.
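The fixed-point binomial-filter trick described above can be sketched in pure NumPy (a toy version with invented sizes, not the dissertation's BeagleBoard/Neon implementation): the 5-tap kernel [1, 4, 6, 4, 1]/16 approximates a Gaussian yet needs only integer multiplies and a right shift, and each pyramid level is subsampled from the previous one.

```python
import numpy as np

KERNEL = np.array([1, 4, 6, 4, 1], dtype=np.int32)   # weights sum to 16

def binomial_blur(img):
    """Separable 5x5 binomial blur in integer-only arithmetic."""
    img = img.astype(np.int32)
    pad = np.pad(img, 2, mode="edge")
    H, W = img.shape
    # horizontal pass (weights sum to 16)
    h = sum(KERNEL[k] * pad[:, k:k + W] for k in range(5))
    # vertical pass, then divide by 16*16 with a right shift
    v = sum(KERNEL[k] * h[k:k + H, :] for k in range(5))
    return (v >> 8).astype(np.uint8)

def pyramid(img, levels=4):
    """Blur-and-subsample pyramid: each level reuses the previous one."""
    out = [img]
    for _ in range(levels - 1):
        out.append(binomial_blur(out[-1])[::2, ::2])
    return out

img = (np.arange(64 * 64).reshape(64, 64) % 256).astype(np.uint8)
levels = pyramid(img)
```

Since the accumulated weight is 256, the normalization is a single 8-bit shift, which is why this kernel suits processors without native floating-point support.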
APA, Harvard, Vancouver, ISO, and other styles
30

Duan, Chunming. "A unified decision analysis framework for robust system design evaluation in the face of uncertainty." Diss., This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-06062008-170155/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Kleinfinger, Jean-François. "Modelisation dynamique de robots a chaine : cinematique simple, arborescente, ou fermee, en vue de leur commande." Nantes, 1986. http://www.theses.fr/1986NANT2060.

Full text
Abstract:
The dynamic model is based on three techniques: equations generated automatically by iterative symbolic computation; recursive Newton-Euler formalisms that are linear in the inertial parameters; and the grouping of inertial parameters.
APA, Harvard, Vancouver, ISO, and other styles
32

Cuadros, Bohórquez José Fernando. "Estratégia alternativa de otimização em duas camadas de uma unidade de craqueamento catalítico-FCC : implementação de algoritmos genéticos e metodologia híbrida de otimização." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/266651.

Full text
Abstract:
Advisors: Rubens Maciel Filho, Delba Nisi Cosme Melo
Doctoral thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Química
Resumo: Esta pesquisa teve por finalidade o desenvolvimento de uma metodologia de otimização em duas camadas. A otimização preliminar foi baseada na técnica de planejamento de experimentos junto com a metodologia por superfície de resposta com a finalidade de identificar uma possível região de busca do ponto de operação ótimo, o qual foi obtido através da implementação de métodos híbridos de otimização desenvolvidos mediante associação do modelo determinístico de otimização por programação quadrática sucessiva (SQP) com a técnica dos algoritmos genéticos (GA) no modelo do processo de craqueamento catalítico fluidizado- FCC. Este processo é caracterizado por ser um sistema heterogêneo e não isotérmico, cuja modelagem detalhada engloba as equações de balanço de massa e energia das partículas do catalisador, como também para a fase líquida e gasosa, sendo um dos casos de estudo para a aplicação da metodologia de otimização desenvolvida. Como caso de estudo principal foi considerado o modelo do conversor do processo de FCC desenvolvido por Moro e Odloak (1995). Mediante a metodologia de otimização do processo baseado no uso do modelo determinístico da planta, foram definidas estratégias e políticas operacionais para a operação da unidade de FCC em estudo. Procurou-se alto nível de desempenho e segurança operacional, através da integração das etapas de operação, otimização e controle no contexto de otimização em tempo real do processo. As otimizações foram divididas em quatro etapas: 1) Análises preliminares dos fatores e das variáveis de resposta do modelo do conversor foram realizadas usando a técnica de planejamento de experimentos, com o objetivo de compreender a interação entre elas, assim como obter modelos simplificados das variáveis de resposta. 
A geração dos modelos simplificados é devido à necessidade de ganho no tempo computacional permitindo o conhecimento prévio da região de otimização já que em casos industriais pode não ser possível representar adequadamente o processo por modelos determinísticos; 2) Otimização usando algoritmos genéticos implementados no modelo simplificado da conversão, e no modelo determinístico com e sem restrições; 3) Otimização considerando o método de otimização SQP implementado no modelo simplificado da conversão e no modelo determinístico com restrições; e 4) otimização multi-objetivo do conversor usando x a técnica dos algoritmos genéticos, com o objetivo de maximizar a conversão, assim como a minimização da vazão dos gases de combustão, especificamente o monóxido de carbono (CO). Das otimizações foram obtidos ganhos em torno de 8% na conversão quando comparado com os valores de conversão sem otimização. Finalmente, foi realizada a integração do modelo do processo, com a otimização e o controle, dando como resultado a otimização em tempo real do conversor de FCC. A variável de otimização foi a conversão e, através da implementação do controle por matriz dinâmica com restrições (QDMC), aplicando a metodologia de controle inferencial. As variáveis escolhidas como variável controlada foi a temperatura de reação e como variável manipulada foi a temperatura da alimentação, com perturbações na vazão de alimentação do ar de regeneração. Valores de conversão da ordem de 88% foram atingidos para o esquema de otimização em tempo real, o método de otimização por algoritmo genético apresentou um desempenho satisfatório, com tempos e cargas computacionais razoáveis para implementação desta metodologia, em nível industrial
Abstract: The purpose of this research was to develop a two-layer optimization methodology. The experimental design technique, together with a hybrid optimization methodology obtained by combining sequential quadratic programming (SQP) with genetic algorithms (GA), was implemented in the model of a fluid catalytic cracking (FCC) process developed by Moro and Odloak (1995). This process is a heterogeneous, non-isothermal system whose detailed modelling comprises mass and energy balance equations for the catalyst particles and for the liquid and gaseous phases, which makes this process model a suitable case study for the optimization methodology developed. The optimization methodology, together with the deterministic model of the plant, was applied to define operational strategies and policies for the FCC unit under study, aiming at high performance and operational safety through the integration of the control, operation, and optimization stages in the context of real-time optimization (RTO). The optimizations were divided into four stages: 1) preliminary analysis of the factors and response variables of the converter model was performed using the experimental design technique, aiming to understand their interactions and to obtain simplified models of the response variables to be used as objective functions in the optimization stages; 2) optimization using genetic algorithms was implemented on the simplified conversion model and on the deterministic model, with and without restrictions on the factors; 3) optimization with a local search method, sequential quadratic programming (SQP), was implemented on the simplified model of the process conversion and also on the deterministic model with restrictions. 
As initial estimates, the optimal factor values obtained with the genetic algorithms were used, together with two random points in the search space; and 4) a multi-objective optimization using the genetic algorithm technique was developed in order to maximize conversion and minimize combustion gas emissions, specifically carbon monoxide. By applying this optimization methodology, gains of around 8% in feed conversion were obtained compared with the conversion values without optimization. Finally, the integration of optimization, control, and process modelling was carried out, resulting in the real-time optimization (RTO) of the FCC converter. The variable maximized by the genetic algorithm was the feed conversion, and the control technique implemented was quadratic dynamic matrix control (QDMC) in conjunction with an inferential control methodology. The reaction temperature was taken as the controlled variable and the feed temperature as the manipulated variable, with disturbances in the feed flow of the regeneration air. Feed conversions of the order of 88% were achieved with the real-time optimization scheme, in which the genetic algorithm showed an excellent performance, with computational times and loads reasonable for implementation at an industrial level.
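The two-layer hybrid idea described above can be sketched on a standard test function (all of this is illustrative: the objective is the Rastrigin function rather than the FCC model, and plain gradient descent stands in for the SQP layer used in the thesis):

```python
import numpy as np

# Layer 1: a genetic algorithm explores globally.
# Layer 2: its best individual seeds a derivative-based local refinement.
def f(p):
    x, y = p
    return 20.0 + x**2 + y**2 - 10.0 * (np.cos(2*np.pi*x) + np.cos(2*np.pi*y))

def grad(p):
    x, y = p
    return np.array([2*x + 20*np.pi*np.sin(2*np.pi*x),
                     2*y + 20*np.pi*np.sin(2*np.pi*y)])

def genetic_algorithm(f, lo=-5.12, hi=5.12, pop=40, gens=80, seed=5):
    rng = np.random.default_rng(seed)
    P = rng.uniform(lo, hi, size=(pop, 2))
    for _ in range(gens):
        fit = np.array([f(p) for p in P])
        elite = P[np.argsort(fit)[: pop // 2]]           # truncation selection
        idx = rng.integers(0, len(elite), size=(pop, 2))
        children = elite[idx].mean(axis=1)               # arithmetic crossover
        P = np.clip(children + rng.normal(0, 0.1, children.shape), lo, hi)
    fit = np.array([f(p) for p in P])
    return P[int(np.argmin(fit))]

def local_refine(grad, x, lr=1e-3, iters=2000):
    for _ in range(iters):
        x = x - lr * grad(x)        # gradient descent standing in for SQP
    return x

x_ga = genetic_algorithm(f)         # global layer
x_opt = local_refine(grad, x_ga)    # local layer polishes the GA solution
```

The division of labour mirrors the thesis's scheme: the population-based layer locates a promising basin that a purely local method would miss, and the derivative-based layer then converges quickly inside it.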
Doctorate
Chemical Process Development
Doctor of Chemical Engineering
APA, Harvard, Vancouver, ISO, and other styles
33

Poole, Benjamin Hancock. "A methodology for the robustness-based evaluation of systems-of-systems alternatives using regret analysis." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24648.

Full text
Abstract:
Thesis (Ph.D.)--Aerospace Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Mavris, Dimitri; Committee Member: Bishop, Carlee; Committee Member: McMichael, James; Committee Member: Nixon, Janel; Committee Member: Schrage, Daniel
APA, Harvard, Vancouver, ISO, and other styles
34

Buerger, Johannes Albert. "Fast model predictive control." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:6e296415-f02c-4bc2-b171-3bee80fc081a.

Full text
Abstract:
This thesis develops efficient optimization methods for Model Predictive Control (MPC) to enable its application to constrained systems with fast and uncertain dynamics. The key contribution is an active set method which exploits the parametric nature of the sequential optimization problem and is obtained from a dynamic programming formulation of the MPC problem. This method is first applied to the nominal linear MPC problem and is successively extended to linear systems with additive uncertainty and input constraints or state/input constraints. The thesis discusses both offline (projection-based) and online (active set) methods for the solution of controllability problems for linear systems with additive uncertainty. The active set method uses first-order necessary conditions for optimality to construct parametric programming regions for a particular given active set locally along a line of search in the space of feasible initial conditions. Along this line of search the homotopy of optimal solutions is exploited: a known solution at some given plant state is continuously deformed into the solution at the actual measured current plant state by performing the required active set changes whenever a boundary of a parametric programming region is crossed during the line search operation. The sequence of solutions for the finite horizon optimal control problem is therefore obtained locally for the given plant state. This method overcomes the main limitation of parametric programming methods that have been applied in the MPC context which usually require the offline precomputation of all possible regions. In contrast to this the proposed approach is an online method with very low computational demands which efficiently exploits the parametric nature of the solution and returns exact local DP solutions. The final chapter of this thesis discusses an application of robust tube-based MPC to the nonlinear MPC problem based on successive linearization.
APA, Harvard, Vancouver, ISO, and other styles
35

Alvarado, Christiam Segundo Morales. "Estudo e implementação de métodos de validação de modelos matemáticos aplicados no desenvolvimento de sistemas de controle de processos industriais." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-05092017-092437/.

Full text
Abstract:
A validação de modelos lineares é uma etapa importante em um projeto de Identificação de Sistemas, pois a escolha correta do modelo para representar a maior parte da dinâmica do processo, dentro de um número finito de técnicas de identificação e em torno de um ponto de operação, permite o sucesso no desenvolvimento de controladores preditivos e de controladores robustos. Por tal razão, o objetivo principal desta Tese é o desenvolvimento de um método de validação de modelos lineares, tendo como ferramentas de avaliação os métodos estatísticos, avaliações dinâmicas e análise da robustez do modelo. O componente principal do sistema de validação de modelos lineares proposto é o desenvolvimento de um sistema fuzzy para análise dos resultados obtidos pelas ferramentas utilizadas na etapa de validação. O projeto de Identificação de Sistemas é baseado em dados reais de operação de uma Planta-Piloto de Neutralização de pH, localizada no Laboratório de Controle de Processos Industriais da Escola Politécnica da USP. Para verificar o resultado da validação, todos os modelos são testados em um controlador preditivo do tipo QDMC (Quadratic Dynamic Matrix Control) para seguir uma trajetória de referência. Os critérios utilizados para avaliar o desempenho do controlador QDMC, para cada modelo utilizado, foram a velocidade de resposta do controlador e o índice da mínima variabilidade da variável de processo. Os resultados mostram que a confiabilidade do sistema de validação projetado para malhas com baixa e alta não-linearidade em um processo real, foram de 85,71% e 50%, respectivamente, com relação aos índices de desempenho obtidos pelo controlador QDMC.
Linear model validation is an important stage in a System Identification project because correctly selecting the model that represents most of the process dynamics, from a finite number of identification techniques and around an operating point, enables the successful development of predictive and robust controllers. For this reason, the main objective of this thesis is the development of a linear model validation method that uses statistical methods, dynamic evaluations, and robustness analysis as assessment tools. The main component of the proposed validation system is a fuzzy system that analyzes the results obtained by the tools used in the validation stage. The System Identification project is based on real operating data from a pH neutralization pilot plant located at the Industrial Process Control Laboratory of the Escola Politécnica of the University of São Paulo, Brazil. To verify the validation results, all models are tested in a QDMC (Quadratic Dynamic Matrix Control) predictive controller following a reference trajectory. The criteria used to assess the QDMC controller's performance for each model were the controller's response speed and the minimum variability index of the process variable. The results show that the reliability of the validation system, for loops with low and high nonlinearity in a real process, was 85.71% and 50%, respectively, with respect to the performance indexes obtained by the QDMC controller.
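The dynamic-matrix idea behind the QDMC controller used above can be sketched in its unconstrained form (QDMC additionally solves a quadratic program to respect constraints; the plant, horizons, and tuning below are illustrative, not the pH pilot plant's, and the constant-future-error prediction is a simplification of full DMC's free-response forecast):

```python
import numpy as np

# Step-response coefficients of a stable first-order plant, g_i = 1 - 0.8^i.
N_model = 30
g = 1.0 - 0.8 ** np.arange(1, N_model + 1)

P, M, lam = 10, 3, 0.1    # prediction horizon, control horizon, move penalty

# Dynamic matrix: effect of future input moves on future outputs.
G = np.zeros((P, M))
for i in range(P):
    for j in range(M):
        if i >= j:
            G[i, j] = g[i - j]

def dmc_moves(error):
    """Least-squares input moves for a constant predicted error."""
    e = np.full(P, error)
    return np.linalg.solve(G.T @ G + lam * np.eye(M), G.T @ e)

# Closed loop on the same first-order plant: y_{k+1} = 0.8 y_k + 0.2 u_k.
y, u, setpoint = 0.0, 0.0, 1.0
for _ in range(50):
    du = dmc_moves(setpoint - y)[0]   # receding horizon: apply first move only
    u += du
    y = 0.8 * y + 0.2 * u
```

Because only the first computed move is applied before the prediction is redone, the controller has integral action and drives the output to the setpoint.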
APA, Harvard, Vancouver, ISO, and other styles
36

MacNair, David Luke. "Modeling cellular actuator arrays." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50259.

Full text
Abstract:
This work explores the representations and mathematical modeling of biologically inspired robotic muscles called Cellular Actuator Arrays. These actuator arrays are made of many small interconnected actuation units which work together to provide force, displacement, robustness, and other properties beyond the original actuator's capability. The arrays can also exhibit properties generally associated with biological muscle and can thus provide a test bed for research into the interrelated nature of the nervous system and muscles, kinematics/dynamics experiments to understand balance and synergies, and the building of full-strength, safe muscles for prosthesis, rehabilitation, human force amplification, and humanoid robotics. This thesis focuses on the mathematical tools needed to bridge the gap between the conceptual idea of the cellular actuator array and the engineering design processes needed to build physical robotic muscles. The work explores the representation and notation needed to express complex actuator array topologies, the mathematical modeling needed to represent the complex dynamics of the arrays, and properties to guide the selection of arrays for engineering purposes. The approach is designed to aid automation and simulation of actuator arrays and provide an intuitive base for future controls and physiology work. The work is validated through numerical results using MATLAB's SimMechanics dynamic modeling system and with three physical actuator arrays built using solenoids and shape memory alloy actuators.
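As a loose illustration of why interconnection topology matters in such arrays (our sketch, not the thesis model): for idealized identical units, a series chain accumulates stroke while a parallel bundle accumulates force:

```python
# Idealized aggregation of identical actuation units (our sketch): units in
# series add their strokes at the force of the weakest unit; units in
# parallel add their forces at the stroke of the shortest unit.

def series(units):
    force = min(f for f, _ in units)
    displacement = sum(d for _, d in units)
    return force, displacement

def parallel(units):
    force = sum(f for f, _ in units)
    displacement = min(d for _, d in units)
    return force, displacement

cell = (1.0, 0.002)            # hypothetical unit: 1 N force, 2 mm stroke
print(series([cell] * 10))     # ten units in series: ten times the stroke
print(parallel([cell] * 10))   # ten units in parallel: ten times the force
```

Mixed series/parallel topologies then trade force against displacement, which is one reason a systematic notation for array topologies is useful.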
APA, Harvard, Vancouver, ISO, and other styles
37

Spoida, Peter. "Robust pricing and hedging beyond one marginal." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:0315824b-52f7-4e44-9ac6-0a688c49762c.

Full text
Abstract:
The robust pricing and hedging approach in Mathematical Finance, pioneered by Hobson (1998), makes statements about non-traded derivative contracts by imposing very few assumptions on the underlying financial model and directly using information contained in traded options, typically call or put option prices. These prices are informative about marginal distributions of the asset. Mathematically, the theory of Skorokhod embeddings provides one way to approach robust problems. In this thesis we consider mostly robust pricing and hedging problems of Lookback options (options written on the terminal maximum of an asset) and Convex Vanilla Options (options written on the terminal value of an asset), and extend the analysis predominantly found in the literature on robust problems by two features: firstly, options with multiple maturities are available for trading (mathematically this corresponds to multiple marginal constraints), and secondly, restrictions on the total realized variance of asset trajectories are imposed. Probabilistically, in both cases, we develop new optimal solutions to the Skorokhod embedding problem. More precisely, in Part I we start by constructing an iterated Azéma-Yor type embedding (a solution to the n-marginal Skorokhod embedding problem, see Chapter 2). Subsequently, its implications are presented in Chapter 3. From a Mathematical Finance perspective we obtain explicitly the optimal superhedging strategy for Barrier/Lookback options. From a probability theory perspective, we find the maximum maximum of a martingale which is constrained by finitely many intermediate marginal laws. Further, as a by-product, we discover a new class of martingale inequalities for the terminal maximum of a càdlàg submartingale, see Chapter 4. These inequalities enable us to re-derive the sharp versions of Doob's inequalities. In Chapter 5 a different problem is solved.
Motivated by the fact that in some markets both Vanilla and Barrier options with multiple maturities are traded, we characterize the set of market models in this case. In Part II we incorporate the restriction that the total realized variance of every asset trajectory is bounded by a constant. This was previously suggested by Mykland (2000). We further assume that finitely many put options with one fixed maturity are traded. After introducing the general framework in Chapter 6, we analyse the associated robust pricing and hedging problem for convex Vanilla and Lookback options in Chapters 7 and 8. Robust pricing is achieved through the construction of appropriate Root solutions to the Skorokhod embedding problem. Robust hedging and pathwise duality are obtained by a careful development of dynamic pathwise superhedging strategies. Further, we characterize the existence of market models with a suitable notion of arbitrage.
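For orientation, the classical sharp Doob L^p inequality that these martingale inequalities recover reads, for a non-negative càdlàg submartingale M and p > 1:

```latex
\mathbb{E}\!\left[\,\sup_{t \le T} M_t^{\,p}\right]
\;\le\;
\left(\frac{p}{p-1}\right)^{p} \mathbb{E}\!\left[M_T^{\,p}\right],
\qquad p > 1 .
```

The constant (p/(p-1))^p is sharp; the thesis's pathwise inequalities for the terminal maximum specialize to such bounds.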
APA, Harvard, Vancouver, ISO, and other styles
38

Duncan, Scott Joseph. "Including severe uncertainty into environmentally benign life cycle design using information gap-decision theory." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/22540.

Full text
Abstract:
Thesis (Ph. D.)--Mechanical Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Bras, Bert; Committee Member: Allen, Janet; Committee Member: Chameau, Jean-Lou; Committee Member: McGinnis, Leon; Committee Member: Paredis, Chris.
APA, Harvard, Vancouver, ISO, and other styles
39

Mlot, Nathaniel J. "Fire ant self-assemblages." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50247.

Full text
Abstract:
Fire ants link their legs and jaws together to form functional structures called self-assemblages. Examples include floating rafts, towers, bridges, and bivouacs. We investigate these self-assemblages of fire ants. Our studies are motivated in part by the vision of providing guidance for programmable robot swarms. The goal for such systems is to develop a simple programmable element from which complex patterns or behaviors emerge on the collective level. Intelligence is decentralized, as is the case with social insects such as fire ants. In this combined experimental and theoretical study, we investigate the construction of two fire ant self-assemblages that are critical to the colony’s survival: the raft and the tower. Using time-lapse photography, we record the construction processes of rafts and towers in the laboratory. We identify and characterize individual ant behaviors that we consistently observe during assembly, and incorporate these behaviors into mathematical models of the assembly process. Our models accurately predict both the assemblages’ shapes and growth patterns, thus providing evidence that we have identified and analyzed the key mechanisms for these fire ant self-assemblages. We also develop novel techniques using scanning electron microscopy and micro-computed tomography scans to visualize and quantify the internal structure and packing properties of live linked fire ants. We compare our findings to packings of dead ants and similarly shaped granular material packings to understand how active arranging affects ant spacing and orientation. We find that ants use their legs to increase neighbor spacing and hence reduce their packing density by one-third compared to packings of dead ants. Also, we find that live ants do not align themselves in parallel with nearest neighbors as much as dead ants passively do.
Our main contribution is the development of parsimonious mathematical models of how the behaviors of individuals result in the collective construction of fire ant assemblages. The models posit only simple observed behaviors based on local information, yet their mathematical analysis yields accurate predictions of assemblage shapes and construction rates for a wide range of ant colony sizes.
APA, Harvard, Vancouver, ISO, and other styles
40

Boopathy, Komahan. "Uncertainty Quantification and Optimization Under Uncertainty Using Surrogate Models." University of Dayton / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1398302731.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Jackson, Arthur Rhydon. "Predicting Flavonoid UGT Regioselectivity with Graphical Residue Models and Machine Learning." Digital Commons @ East Tennessee State University, 2009. https://dc.etsu.edu/etd/1820.

Full text
Abstract:
Machine learning is applied to a challenging and biologically significant protein classification problem: the prediction of flavonoid UGT acceptor regioselectivity from primary protein sequence. Novel indices characterizing graphical models of protein residues are introduced. The indices are compared with existing amino acid indices and found to cluster residues appropriately. A variety of models employing the indices are then investigated by examining their performance when analyzed using nearest neighbor, support vector machine, and Bayesian neural network classifiers. Improvements over nearest neighbor classifications relying on standard alignment similarity scores are reported.
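A minimal sketch of the nearest-neighbour idea applied to sequences represented by numeric residue indices (our illustration; the feature values and the 3-O / 7-O regioselectivity labels below are invented):

```python
# 1-nearest-neighbour classification over numeric feature vectors (sketch).
# Each training example pairs a residue-index feature vector with a
# hypothetical regioselectivity label.

def nn_classify(query, examples):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda e: dist(query, e[0]))[1]

train = [((0.2, 1.1), "3-O"), ((0.8, 0.3), "7-O"), ((0.25, 0.9), "3-O")]
print(nn_classify((0.3, 1.0), train))  # the closest neighbours carry "3-O"
```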
APA, Harvard, Vancouver, ISO, and other styles
42

Dias, Moreira De Souza Fillipe. "Semantic Description of Activities in Videos." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6649.

Full text
Abstract:
Description of human activities in videos results not only in detection of actions and objects but also in identification of their active semantic relationships in the scene. Towards this broader goal, we present a combinatorial approach that assumes availability of algorithms for detecting and labeling objects and actions, albeit with some errors. Given these uncertain labels and detected objects, we link them into interpretative structures using domain knowledge encoded with concepts of Grenander’s general pattern theory. Here a semantic video description is built using basic units, termed generators, that represent labels of objects or actions. These generators have multiple out-bonds, each associated with either a type of domain semantics, spatial constraints, temporal constraints or image/video evidence. Generators combine with each other, according to a set of pre-defined combination rules that capture domain semantics, to form larger connected structures known as configurations, which here are used to represent video descriptions. This framework offers a powerful representational scheme for its flexibility in spanning a space of interpretative structures (configurations) of varying sizes and structural complexity. We impose a probability distribution on the configuration space, with inferences generated using a Markov Chain Monte Carlo-based simulated annealing algorithm. The primary advantage of the approach is that it handles known computer vision challenges – appearance variability, errors in object label annotation, object clutter, simultaneous events, temporal dependency encoding, etc. – without the need for an exponentially large (labeled) training data set.
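The MCMC-based simulated annealing over configurations can be caricatured as follows; the Metropolis loop is the standard one, while the label sets and the "semantic compatibility" energy are invented for illustration:

```python
import math
import random

# Generic Metropolis simulated annealing over discrete label assignments
# (sketch). The energy function scores a configuration; lower is better.

def anneal(labels_per_slot, energy, steps=2000, t0=1.0, cooling=0.995, seed=0):
    rng = random.Random(seed)
    state = [rng.choice(ls) for ls in labels_per_slot]
    best, best_e = list(state), energy(state)
    t = t0
    for _ in range(steps):
        i = rng.randrange(len(state))
        cand = list(state)
        cand[i] = rng.choice(labels_per_slot[i])    # propose a local relabeling
        d_e = energy(cand) - energy(state)
        if d_e <= 0 or rng.random() < math.exp(-d_e / t):
            state = cand                            # Metropolis acceptance
            if energy(state) < best_e:
                best, best_e = list(state), energy(state)
        t *= cooling                                # cool the temperature
    return best, best_e

# Invented "semantic compatibility" energy: matched action/object pairs cost 0.
compatible = {("drink", "cup"), ("eat", "sandwich")}
energy = lambda s: 0.0 if tuple(s) in compatible else 1.0
print(anneal([["eat", "drink"], ["cup", "sandwich"]], energy))
```

The annealed temperature schedule lets the sampler escape poor local interpretations early on, then settle into a high-probability configuration.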
APA, Harvard, Vancouver, ISO, and other styles
43

Lin, TsungPo. "An adaptive modeling and simulation environment for combined-cycle data reconciliation and degradation estimation." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24819.

Full text
Abstract:
Thesis (Ph.D.)--Aerospace Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Dimitri Mavris; Committee Member: Erwing Calleros; Committee Member: Hongmei Chen; Committee Member: Mark Waters; Committee Member: Vitali Volovoi.
APA, Harvard, Vancouver, ISO, and other styles
44

Murphy, Jonathan Rodgers. "A robust multi-objective statistical improvement approach to electric power portfolio selection." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45946.

Full text
Abstract:
Motivated by an electric power portfolio selection problem, a sampling method is developed for simulation-based robust design that builds on existing multi-objective statistical improvement methods. It uses a Bayesian surrogate model regressed on both design and noise variables, and makes use of methods for estimating epistemic model uncertainty in environmental uncertainty metrics. Regions of the design space are sequentially sampled in a manner that balances exploration of unknown designs and exploitation of designs thought to be Pareto optimal, while regions of the noise space are sampled to improve knowledge of the environmental uncertainty. A scalable test problem is used to compare the method with design of experiments (DoE) and crossed array methods, and the method is found to be more efficient for restrictive sample budgets. Experiments with the same test problem are used to study the sensitivity of the methods to numbers of design and noise variables. Lastly, the method is demonstrated on an electric power portfolio simulation code.
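One building block behind statistical improvement samplers of this kind is the expected-improvement criterion; the basic single-objective (minimization) version below is a generic sketch, not the thesis's multi-objective formulation:

```python
import math

# Expected improvement of sampling a point whose surrogate prediction is
# N(mu, sigma^2), relative to the best observed value y_best (minimization).

def expected_improvement(mu, sigma, y_best):
    if sigma <= 0.0:
        return max(y_best - mu, 0.0)     # no predictive uncertainty
    z = (y_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (y_best - mu) * cdf + sigma * pdf

# A point predicted slightly worse than the incumbent but very uncertain can
# carry more expected improvement than a safe, barely-better point.
print(expected_improvement(mu=1.1, sigma=0.5, y_best=1.0))
print(expected_improvement(mu=0.99, sigma=0.01, y_best=1.0))
```

This is the exploration/exploitation balance the abstract refers to: the criterion rewards both low predicted values and high predictive uncertainty.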
APA, Harvard, Vancouver, ISO, and other styles
45

Ye, Guoliang. "Model-based ultrasonic temperature estimation for monitoring HIFU therapy." Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:6f4c4f84-3ca6-46f2-a895-ab0aa3d9af51.

Full text
Abstract:
High Intensity Focused Ultrasound (HIFU) is a new cancer thermal therapy method which has recently achieved encouraging results in clinics. However, the lack of temperature monitoring makes it hard to apply widely, safely and efficiently. Conventional ultrasonic temperature estimation based on echo strain suffers from artifacts caused by signal distortion over time, leading to poor estimation and visualization of the 2D temperature map. This thesis presents a novel model-based stochastic framework for ultrasonic temperature estimation, which combines the temperature information from the ultrasound images and a theoretical model of the heat diffusion. Consequently the temperature estimation is more consistent over time and its visualisation is improved. The thesis makes three main contributions: improving the conventional echo strain method of estimating temperature, developing and applying approximate heat models to model temperature, and finally combining the estimation and the models. First, in the echo-strain-based temperature estimation, a robust displacement estimator is introduced to remove displacement outliers caused by signal distortion over time due to the thermo-acoustic lens effect. To transfer the echo strain to temperature more accurately, an experimental method is designed to model their relationship using polynomials. Experimental results on a gelatine phantom show that the accuracy of the temperature estimation is of the order of 0.1 °C. This is better than the previously reported result of 0.5 °C in a rubber phantom. Second, in the temperature modelling, heat models are derived approximately as Gaussian functions, which are mathematically simple. Simulated results demonstrate that the approximate heat models are reasonable.
The simulated temperature result is analytical and hence computed in much less than 1 second, while the conventional simulation of using finite element methods requires about 25 minutes under the same conditions. Finally, combining the estimation and the heat models is the main contribution of this thesis. A 2D spatial adaptive Kalman filter with the predictive step defined by the shape model from the heat models is applied to the temperature map estimated from ultrasound images. It is shown that use of the temperature shape model enables more reliable temperature estimation in the presence of distorted or blurred strain measurements which are typically found in practice. The experimental results on in-vitro bovine liver show that the visualisation on the temperature map over time is more consistent and the iso-temperature contours are clearly visualised.
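The fusion step described, a model-driven prediction corrected by a noisy measurement derived from the ultrasound images, reduces in the scalar case to the standard Kalman update; the temperatures and variances below are invented for illustration:

```python
# Scalar Kalman update (sketch): fuse a model prediction N(x_pred, p_pred)
# with a measurement z of variance r.

def kalman_update(x_pred, p_pred, z, r):
    k = p_pred / (p_pred + r)          # Kalman gain
    x = x_pred + k * (z - x_pred)      # corrected estimate
    p = (1.0 - k) * p_pred             # reduced uncertainty
    return x, p

# Heat model predicts 42.0 C with variance 1.0; the strain-based measurement
# says 44.0 C but is noisy (variance 4.0): the fused estimate stays close to
# the model while still moving toward the data.
x, p = kalman_update(42.0, 1.0, 44.0, 4.0)
print(round(x, 2), round(p, 2))  # 42.4 0.8
```

The thesis applies a 2D spatially adaptive version of this idea, with the Gaussian heat-shape model supplying the prediction step.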
APA, Harvard, Vancouver, ISO, and other styles
46

Rolander, Nathan Wayne. "An Approach for the Robust Design of Data Center Server Cabinets." Thesis, Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7592.

Full text
Abstract:
The complex turbulent flow regimes encountered in many thermal-fluid engineering applications have proven resistant to the effective application of systematic design because of the computational expense of model evaluation and the inherent variability of turbulent systems. In this thesis the integration of the Proper Orthogonal Decomposition (POD) for reduced order modeling of turbulent convection with the application of robust design principles is proposed as a practical design approach. The POD has been used successfully to create low dimensional steady state flow models within a prescribed range of parameters. The underlying foundation of robust design is to determine superior solutions to design problems by minimizing the effects of variation on system performance, without eliminating their causes. The integration of these constructs utilizing the compromise Decision Support Problem (DSP) results in an efficient, effective robust design approach for complex turbulent convective systems. The efficacy of the approach is illustrated through application to the configuration of data center server cabinets. Data centers are computing infrastructures that house large quantities of data processing equipment. The data processing equipment is stored in 2 m high enclosures known as cabinets. The demand for increased computational performance has led to very high power density cabinet design, with a single cabinet dissipating up to 20 kW. The computer servers are cooled by turbulent convection and have unsteady heat generation and cooling air flows, yielding substantial inherent variability, yet must meet some of the most stringent operational requirements of any engineering system. Through variation of the power load distribution and flow parameters, such as the rate of cooling air supplied, thermally efficient configurations that are insensitive to variations in operating conditions are determined.
This robust design approach is applied to three common data center server cabinet designs, at increasing levels of modeling detail and complexity. Results of the application of this approach to the example problems studied show that the resulting thermally efficient configurations are capable of dissipating up to a 50% greater heat load with a 15% decrease in temperature variability using the same cooling infrastructure. These results are validated rigorously, including comparison of detailed CFD simulations with experimentally gathered temperature data from a mock server cabinet. Finally, with the approach validated, augmentations to the approach are considered for multi-scale design, extending the approach's domain of applicability.
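The POD used here for reduced-order flow modeling can be sketched via the snapshot SVD; this is a generic illustration with synthetic data, not the thesis's server-cabinet model:

```python
import numpy as np

# Snapshot POD via SVD (sketch): keep the leading modes that capture a
# target fraction of the fluctuation energy.

def pod_basis(snapshots, energy=0.99):
    mean = snapshots.mean(axis=1, keepdims=True)
    u, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1    # smallest r reaching target
    return mean, u[:, :r]

rng = np.random.default_rng(0)
modes = rng.standard_normal((100, 2))            # synthetic 2-D subspace
data = modes @ rng.standard_normal((2, 50)) + 0.01 * rng.standard_normal((100, 50))
mean, basis = pod_basis(data)
print(basis.shape)  # a handful of modes captures nearly all of the energy
```

Evaluating a design then means projecting onto a few modes rather than re-running a full CFD simulation, which is what makes design exploration affordable.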
APA, Harvard, Vancouver, ISO, and other styles
47

Andréa-Novel, Brigitte d'. "Sur la commande d'une classe de systèmes mécaniques." Paris, ENMP, 1987. http://www.theses.fr/1987ENMP0067.

Full text
Abstract:
Development of control laws, in particular for robotics. Use of the polynomial approach to place transmission zeros with fixed poles. The case of nonlinear systems. Study via algebraic topology. Use of an approach based on immersion and linearizing feedback. Stability problems in the case of large genus.
APA, Harvard, Vancouver, ISO, and other styles
48

Reghelin, Ricardo. "Um modelo de gerenciamento microscópico centralizado de tráfego de veículos inteligentes em um segmento de rodovia." Universidade Tecnológica Federal do Paraná, 2014. http://repositorio.utfpr.edu.br/jspui/handle/1/953.

Full text
Abstract:
This work belongs to the research area of intelligent transportation systems and urban mobility, considering a scenario where the roadside infrastructure is capable of monitoring traffic composed entirely of intelligent vehicles that do not rely on drivers to be guided. The main contribution of this work is the development of a mathematical solution to optimize the centralized microscopic management of intelligent-vehicle traffic on highway segments. An optimization model based on Mixed Integer Linear Programming (MILP) is presented which determines an optimal plan of individual vehicle trajectories in a traffic evolution. The objective is to reduce individual travel times and ensure traffic flow. The model considers essential components of the dynamic highway system, such as the topography of the lane, traffic rules, and the maximum acceleration curve of each vehicle. Many traffic situations are covered, such as overtaking, slopes, obstacles, and speed reducers. The results indicated an average of 20.5 seconds to compute a scenario with 6 vehicles and 11 time intervals. As the MILP model cannot be solved in a computational time acceptable for real application, a heuristic-based simulation algorithm is also proposed which seeks to reduce the computation time at the expense of the optimality of the solution. The algorithm reproduces the behavior of a driver who always tries to maintain a preselected velocity and is therefore forced to overtake other vehicles when blocked along the path. The result of the algorithm has additional importance because it serves as a reference for solving the problem of priority when overtaking. New indicators for the microscopic evaluation of traffic quality are also proposed. Finally, test results from simulations are presented to evaluate and validate the model and the algorithm.
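The discretized trajectory-planning idea can be caricatured with an exhaustive search over per-step speed choices; a real formulation would hand such constraints to a MILP solver, and all numbers here are invented:

```python
from itertools import product

# Toy centralized planner (sketch): each vehicle picks a speed (cells advanced
# per time step) for every step; a plan is feasible only if no two vehicles
# occupy the same cell at the same instant. Cost = sum of arrival times.

def plan(starts, goal, steps, speeds=(0, 1, 2)):
    best = None
    for choice in product(product(speeds, repeat=steps), repeat=len(starts)):
        pos = list(starts)
        ok, arrival = True, [None] * len(starts)
        for t in range(steps):
            pos = [p + v for p, v in zip(pos, (c[t] for c in choice))]
            if len(set(pos)) < len(pos):     # two vehicles in one cell
                ok = False
                break
            for i, p in enumerate(pos):
                if p >= goal and arrival[i] is None:
                    arrival[i] = t + 1
        if ok and all(a is not None for a in arrival):
            cost = sum(arrival)
            if best is None or cost < best[0]:
                best = (cost, choice)
    return best

# Two vehicles starting at cells 0 and 1, both heading past cell 4.
print(plan(starts=[0, 1], goal=4, steps=3)[0])
```

The exhaustive search is exponential in vehicles and steps, which is exactly why the thesis turns to a MILP formulation and, for real-time use, to heuristics.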
APA, Harvard, Vancouver, ISO, and other styles
49

Segal, Aleksandr V. "Iterative Local Model Selection for tracking and mapping." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:8690e0e0-33c5-403e-afdf-e5538e5d304f.

Full text
Abstract:
The past decade has seen great progress in research on large scale mapping and perception in static environments. Real world perception requires handling uncertain situations with multiple possible interpretations: e.g. changing appearances, dynamic objects, and varying motion models. These aspects of perception have been largely avoided through the use of heuristics and preprocessing. This thesis is motivated by the challenge of including discrete reasoning directly into the estimation process. We approach the problem by using Conditional Linear Gaussian Networks (CLGNs) as a generalization of least-squares estimation which allows the inclusion of discrete model selection variables. CLGNs are a powerful framework for modeling sparse multi-modal inference problems, but are difficult to solve efficiently. We propose the Iterative Local Model Selection (ILMS) algorithm as a general approximation strategy specifically geared towards the large scale problems encountered in tracking and mapping. Chapter 4 introduces the ILMS algorithm and compares its performance to traditional approximate inference techniques for Switching Linear Dynamical Systems (SLDSs). These evaluations validate the characteristics of the algorithm which make it particularly attractive for applications in robot perception. Chief among these are reliability of convergence, consistent performance, and a reasonable trade-off between accuracy and efficiency. In Chapter 5, we show how the data association problem in multi-target tracking can be formulated as an SLDS and effectively solved using ILMS. The SLDS formulation allows the addition of further discrete variables which model outliers and clutter in the scene. Evaluations on standard pedestrian tracking sequences demonstrate performance competitive with the state of the art. Chapter 6 applies the ILMS algorithm to robust pose graph estimation. A non-linear CLGN is constructed by introducing outlier indicator variables for all loop closures.
The standard Gauss-Newton optimization algorithm is modified to use ILMS as an inference algorithm in between linearizations. Experiments demonstrate a large improvement over state-of-the-art robust techniques. The ILMS strategy presented in this thesis is simple and general, but still works surprisingly well. We argue that these properties are encouraging for wider applicability to problems in robot perception.
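The alternation between discrete indicator variables and continuous estimation that underlies such robust formulations can be sketched on a toy problem (our illustration, not ILMS itself): refit on the current inliers, then re-label each measurement by its residual:

```python
import statistics

# Alternating discrete/continuous inference (sketch): estimate a scalar from
# measurements with gross outliers by iterating (1) least-squares fit on the
# points currently labelled inliers, (2) relabelling by residual threshold.

def robust_mean(xs, threshold=2.0, iters=10):
    inlier = [True] * len(xs)
    mu = statistics.fmean(xs)
    for _ in range(iters):
        mu = statistics.fmean([x for x, ok in zip(xs, inlier) if ok])
        new = [abs(x - mu) <= threshold for x in xs]
        if new == inlier:
            break                        # labels converged
        inlier = new
    return mu, inlier

data = [0.9, 1.1, 1.0, 0.8, 9.0]         # last point is a gross outlier
mu, inlier = robust_mean(data)
print(round(mu, 2), inlier)  # 0.95 [True, True, True, True, False]
```

In pose graph terms, the continuous step is the Gauss-Newton relinearization and the discrete step flips the loop-closure outlier indicators.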
APA, Harvard, Vancouver, ISO, and other styles
50

Seepersad, Carolyn Conner. "A Robust Topological Preliminary Design Exploration Method with Materials Design Applications." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4868.

Full text
Abstract:
A paradigm shift is underway in which the classical materials selection approach in engineering design is being replaced by the design of material structure and processing paths on a hierarchy of length scales for specific multifunctional performance requirements. In this dissertation, the focus is on designing mesoscopic material and product topology, the geometric arrangement of solid phases and voids on length scales larger than microstructures but smaller than the characteristic dimensions of an overall product. Increasingly, manufacturing, rapid prototyping, and materials processing techniques facilitate tailoring topology with high levels of detail. Fully leveraging these capabilities requires not only computational models but also a systematic, efficient design method for exploring, refining, and evaluating product and material topology and other design parameters for targeted multifunctional performance that is robust with respect to potential manufacturing, design, and operating variations. In this dissertation, the Robust Topological Preliminary Design Exploration Method is presented for designing complex multi-scale products and materials by topologically and parametrically tailoring them for multifunctional performance that is superior to that of standard designs and less sensitive to variations. A comprehensive robust design method is established for topology design applications. It includes computational techniques, guidelines, and a multiobjective decision formulation for evaluating and minimizing the impact of topological and parametric variation on the performance of a preliminary topological design. A method is also established for multifunctional topology design, including thermal topology design techniques and multi-stage, distributed design methods for designing preliminary topologies with built-in flexibility for subsequent modification for enhanced performance in secondary functional domains.
Key aspects of the approach are demonstrated by designing linear cellular alloys, ordered metallic cellular materials with extended prismatic cells, in three applications. Heat exchangers are designed with increased heat dissipation and structural load bearing capabilities relative to conventional heat sinks for microprocessor applications. Cellular materials are designed with structural properties that are robust to dimensional and topological imperfections such as missing cell walls. Finally, combustor liners are designed to increase operating temperatures and efficiencies and reduce harmful emissions for next-generation turbine engines via active cooling and load bearing within topologically and parametrically customized cellular materials.
APA, Harvard, Vancouver, ISO, and other styles