Academic literature on the topic 'Worst-case complexity analysis'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Worst-case complexity analysis.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Worst-case complexity analysis"

1

Szirmay-Kalos, L., and G. Márton. "Worst-case versus average case complexity of ray-shooting." Computing 61, no. 2 (June 1998): 103–31. http://dx.doi.org/10.1007/bf02684409.

2

Kwas, Marek, and Youming Li. "Worst case complexity of multivariate Feynman–Kac path integration." Journal of Complexity 19, no. 6 (December 2003): 730–43. http://dx.doi.org/10.1016/s0885-064x(03)00048-7.

3

Jackowski, Tomasz. "Complexity of multilinear problems in the worst case setting." Journal of Complexity 6, no. 4 (December 1990): 389–408. http://dx.doi.org/10.1016/0885-064x(90)90030-h.

4

Milanese, M., and A. Vicino. "Information-Based Complexity and Nonparametric Worst-Case System Identification." Journal of Complexity 9, no. 4 (December 1993): 427–46. http://dx.doi.org/10.1006/jcom.1993.1028.

5

Plaskota, Leszek. "Worst Case Complexity of Problems with Random Information Noise." Journal of Complexity 12, no. 4 (December 1996): 416–39. http://dx.doi.org/10.1006/jcom.1996.0026.

6

Pemmaraju, Sriram V., and Clifford A. Shaffer. "Analysis of the worst case space complexity of a PR quadtree." Information Processing Letters 49, no. 5 (March 1994): 263–67. http://dx.doi.org/10.1016/0020-0190(94)90065-5.

7

Li, Youming, and Grzegorz W. Wasilkowski. "Worst Case Complexity of Weighted Approximation and Integration over R^d." Journal of Complexity 18, no. 1 (March 2002): 330–45. http://dx.doi.org/10.1006/jcom.2001.0632.

8

Short, Michael. "Bounds on Worst-Case Deadline Failure Probabilities in Controller Area Networks." Journal of Computer Networks and Communications 2016 (2016): 1–12. http://dx.doi.org/10.1155/2016/5196092.

Abstract:
Industrial communication networks like the Controller Area Network (CAN) are often required to operate reliably in harsh environments which expose the communication network to random errors. Probabilistic schedulability analysis can employ rich stochastic error models to capture random error behaviors, but this is most often at the expense of increased analysis complexity. In this paper, an efficient method (of time complexity O(n log n)) to bound the message deadline failure probabilities for an industrial CAN network consisting of n periodic/sporadic message transmissions is proposed. The paper develops bounds for Deadline Minus Jitter Monotonic (DMJM) and Earliest Deadline First (EDF) message scheduling techniques. Both random errors and random bursts of errors can be included in the model. Stochastic simulations and a case study considering DMJM and EDF scheduling of an automotive benchmark message set provide validation of the technique and highlight its application.
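To give a flavour of the probabilistic reasoning such bounds rest on (this is a deliberately simplified illustration, not the paper's DMJM/EDF analysis), suppose each transmission attempt of a message is corrupted independently with probability p_error and that at most max_retries retransmissions fit before its deadline; both names and the independence assumption are ours.

```python
def deadline_miss_bound(p_error: float, max_retries: int) -> float:
    """Toy upper bound on the probability that a single message misses its
    deadline due to random errors alone: a miss requires the first attempt
    and every retransmission that still fits before the deadline to be
    corrupted (attempts assumed independent)."""
    return p_error ** (max_retries + 1)

# Example: 1% of frames corrupted, room for 3 retransmissions before the deadline.
print(deadline_miss_bound(0.01, 3))  # 1e-08
```

The paper's contribution lies in computing such per-message bounds efficiently (in O(n log n) overall) while accounting for interference between the n messages under DMJM and EDF scheduling, which this toy formula ignores.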
9

De Haan, Ronald, Anna Roubickova, and Stefan Szeider. "Parameterized Complexity Results for Plan Reuse." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 224–31. http://dx.doi.org/10.1609/aaai.v27i1.8655.

Abstract:
Planning is a notoriously difficult computational problem of high worst-case complexity. Researchers have been investing significant efforts to develop heuristics or restrictions to make planning practically feasible. Case-based planning is a heuristic approach where one tries to reuse previous experience when solving similar problems in order to avoid some of the planning effort. Plan reuse may offer an interesting alternative to plan generation in some settings. We provide theoretical results that identify situations in which plan reuse is provably tractable. We perform our analysis in the framework of parameterized complexity, which supports a rigorous worst-case complexity analysis that takes structural properties of the input into account in terms of parameters. A central notion of parameterized complexity is fixed-parameter tractability which extends the classical notion of polynomial-time tractability by utilizing the effect of parameters. We draw a detailed map of the parameterized complexity landscape of several variants of problems that arise in the context of case-based planning. In particular, we consider the problem of reusing an existing plan, imposing various restrictions in terms of parameters, such as the number of steps that can be added to the existing plan to turn it into a solution of the planning instance at hand.
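Fixed-parameter tractability, as used in the abstract above, means a running time of the form f(k) · n^O(1), where the super-polynomial part depends only on the parameter k and not on the input size n. The sketch below illustrates the idea on the textbook example of vertex cover parameterized by solution size; it is a standard illustration of the concept, not one of the plan-reuse algorithms studied in the paper.

```python
def has_vertex_cover(edges, k):
    """Bounded search tree for vertex cover: pick any remaining edge (u, v);
    one of its endpoints must be in the cover, so branch on both choices.
    The recursion depth is at most k, so the tree has at most 2^k leaves and
    the total work is O(2^k * |E|) -- fixed-parameter tractable in k."""
    if not edges:
        return True          # nothing left to cover
    if k == 0:
        return False         # edges remain but no budget left
    u, v = edges[0]
    without_u = [(a, b) for (a, b) in edges if u not in (a, b)]
    without_v = [(a, b) for (a, b) in edges if v not in (a, b)]
    return has_vertex_cover(without_u, k - 1) or has_vertex_cover(without_v, k - 1)

print(has_vertex_cover([(1, 2), (2, 3), (3, 4)], 2))  # True, e.g. the cover {2, 4}
```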
10

Kon, Mark, and Leszek Plaskota. "Complexity of Neural Network Approximation with Limited Information: A Worst Case Approach." Journal of Complexity 17, no. 2 (June 2001): 345–65. http://dx.doi.org/10.1006/jcom.2001.0575.


Dissertations / Theses on the topic "Worst-case complexity analysis"

1

Panigrahi, Sunil Kumar, Soubhik Chakraborty, and Jibitesh Mishra. "A Statistical Analysis of Bubble Sort in terms of Serial and Parallel Computation." IJCSN Journal, 2012. http://hdl.handle.net/10150/214089.

Abstract:
In some recent papers, weight-based statistical bounds have arguably explained time complexity better than count-based mathematical bounds. This is definitely true for the average case, where for an arbitrary code it is difficult to identify the pivotal operation or pivotal region in the code for taking the expectation, and/or when the probability distribution over which the expectation is taken becomes unrealistic over the problem domain. In the worst case, they can certify whether a mathematical bound is conservative or not. Here we revisit the results on Bubble sort in sequential mode and make an independent study of the same algorithm in parallel mode using statistical bounds.
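As a point of reference for the operation counts that both the count-based and the statistical bounds reason about, here is a minimal instrumented bubble sort (an illustration only, not the authors' serial/parallel experiment). A reversed input realises the worst case of n(n-1)/2 comparisons and swaps.

```python
def bubble_sort_with_counts(a):
    """Plain bubble sort that also reports the number of comparisons and swaps,
    the operation counts that worst-case and average-case analyses are built on."""
    a = list(a)
    comparisons = swaps = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
    return a, comparisons, swaps

# Worst case: a reversed list of length 10 needs 10*9/2 = 45 comparisons and 45 swaps.
print(bubble_sort_with_counts(list(range(10, 0, -1))))
```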
2

Gurioli, Gianmarco. "Adaptive Regularisation Methods under Inexact Evaluations for Nonconvex Optimisation and Machine Learning Applications." Doctoral thesis, 2021. http://hdl.handle.net/2158/1238314.

Abstract:
The major aim of this research thesis is to handle two main challenges arising when solving unconstrained optimisation problems with second-order methods: the reduction of the per-iteration cost and the stochastic analysis of the resulting non-deterministic algorithms. This is motivated by the fact that second-order procedures can be more efficient than first-order ones on badly scaled and ill-conditioned problems, since they seem to take advantage of curvature information to escape saddle points more easily and to be more robust to the choice of hyperparameters and to parameter tuning, but at the price of a more expensive per-iteration cost, due to the computation of Hessian-vector products. Furthermore, the effort of reducing such a cost with inexact function and/or derivative evaluations, which have to fulfill suitable accuracy requirements, leads to non-deterministic variants of the methods that have to be supported by a stochastic complexity analysis. The thesis builds on a particular class of second-order globally convergent methods based on the Adaptive Cubic Regularisation (ARC) framework, motivated by the fact that its complexity, in terms of the worst-case number of iterations to reach a first-order critical point, has been proved to be optimal. To this purpose, the design, analysis and development of novel variants of ARC methods, employing inexact derivatives and/or function evaluations, are investigated. To start with, a suitable reference version of the ARC method is introduced, obtained by merging existing basic forms of ARC algorithms, in order to set the general background on adaptive cubic regularisation. Having set the scene, we then cope with the need to introduce inexactness in function and derivative computations while conserving optimal complexity. After setting up the finite-sum minimisation framework, this starts with the employment of inexact Hessian information, adaptively chosen, before moving on to an extended framework based on function estimates and approximate derivative evaluations. The stochastic complexity analysis of the presented frameworks is then performed. Finally, numerical tests within the context of supervised learning are reported, ranging from popular machine learning datasets to a real-life machine learning industrial application related to the parametric design of centrifugal pumps.
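For context, the adaptive cubic regularisation step this line of work builds on minimises, at each iteration, a cubically regularised local model of the objective; with exact (or sufficiently accurate) derivatives, ARC is known to need at most O(ε^{-3/2}) iterations to reach a point with gradient norm below ε, the optimal worst-case bound the abstract refers to. A standard statement of the model, where B_k denotes a (possibly inexact) Hessian approximation and σ_k > 0 the adaptive regularisation weight:

```latex
m_k(s) \;=\; f(x_k) \;+\; \nabla f(x_k)^{\top} s \;+\; \tfrac{1}{2}\, s^{\top} B_k\, s \;+\; \tfrac{\sigma_k}{3}\, \lVert s \rVert^{3},
\qquad
x_{k+1} \;=\; x_k + s_k, \quad s_k \approx \operatorname*{arg\,min}_{s}\; m_k(s).
```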

Books on the topic "Worst-case complexity analysis"

1

Kowalski, Marek A., Krzystof A. Sikorski, and Frank Stenger. Selected Topics in Approximation and Computation. Oxford University Press, 1995. http://dx.doi.org/10.1093/oso/9780195080599.001.0001.

Abstract:
Selected Topics in Approximation and Computation addresses the relationship between modern approximation theory and computational methods. The text is a combination of expositions of basic classical methods of approximation leading to popular splines and new explicit tools of computation, including Sinc methods, elliptic function methods, and positive operator approximation methods. It also provides an excellent summary of worst-case analysis in information-based complexity. It relates optimal computational methods to the theory of s-numbers and n-widths. It can serve as a text for senior-graduate courses in computer science and applied mathematics, and also as a reference for professionals.

Book chapters on the topic "Worst-case complexity analysis"

1

Xia, Lirong, and Weiqiang Zheng. "Beyond the Worst Case: Semi-random Complexity Analysis of Winner Determination." In Web and Internet Economics, 330–47. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-22832-2_19.

2

Haslbeck, Maximilian P. L., and Peter Lammich. "For a Few Dollars More." In Programming Languages and Systems, 292–319. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72019-3_11.

Abstract:
We present a framework to verify both functional correctness and worst-case complexity of practically efficient algorithms. We implemented a stepwise refinement approach, using the novel concept of resource currencies to naturally structure the resource analysis along the refinement chain, and allow a fine-grained analysis of operation counts. Our framework targets the LLVM intermediate representation. We extend its semantics from earlier work with a cost model. As a case study, we verify the correctness and O(n log n) worst-case complexity of an implementation of the introsort algorithm, whose performance is on par with the state-of-the-art implementation found in the GNU C++ Library.
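Introsort's O(n log n) worst case comes from capping the quicksort recursion depth (typically at 2⌊log2 n⌋) and heapsorting any slice that hits the cap. The sketch below illustrates that general scheme only; it is not the verified LLVM implementation from the paper, and unlike the real thing it is not in-place.

```python
import heapq
import math

def introsort(a):
    """Quicksort with a depth cap of 2*floor(log2(n)); once the cap is hit,
    the remaining values are heapsorted, keeping the worst case at O(n log n)."""
    def heapsort(xs):
        heapq.heapify(xs)
        return [heapq.heappop(xs) for _ in range(len(xs))]

    def sort(xs, depth):
        if len(xs) <= 1:
            return xs
        if depth == 0:
            return heapsort(xs)
        pivot = xs[len(xs) // 2]
        less = [x for x in xs if x < pivot]
        equal = [x for x in xs if x == pivot]
        greater = [x for x in xs if x > pivot]
        return sort(less, depth - 1) + equal + sort(greater, depth - 1)

    n = len(a)
    return sort(list(a), 2 * int(math.log2(n)) if n > 1 else 0)

print(introsort([5, 1, 4, 1, 3, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```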
3

Benerecetti, Massimo, Daniele Dell’Erba, and Fabio Mogavero. "Solving Mean-Payoff Games via Quasi Dominions." In Tools and Algorithms for the Construction and Analysis of Systems, 289–306. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45237-7_18.

Abstract:
We propose a novel algorithm for the solution of mean-payoff games that merges together two seemingly unrelated concepts introduced in the context of parity games, small progress measures and quasi dominions. We show that the integration of the two notions can be highly beneficial and significantly speeds up convergence to the problem solution. Experiments show that the resulting algorithm performs orders of magnitude better than the asymptotically best solution algorithm currently known, without sacrificing worst-case complexity.
4

Schmid, Stefan, Nicolas Schnepf, and Jiří Srba. "Resilient Capacity-Aware Routing." In Tools and Algorithms for the Construction and Analysis of Systems, 411–29. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72016-2_22.

Abstract:
To ensure high availability, communication networks provide resilient routing mechanisms that quickly change routes upon failures. However, a fundamental algorithmic question underlying such mechanisms is hardly understood: how to verify whether a given network reroutes flows along feasible paths, without violating capacity constraints, for up to k link failures? We chart the algorithmic complexity landscape of resilient routing under link failures, considering shortest path routing based on link weights as deployed, e.g., in the ECMP protocol. We study two models: a pessimistic model where flows interfere in a worst-case manner along equal-cost shortest paths, and an optimistic model where flows are routed in a best-case manner, and we present a complete picture of the algorithmic complexities. We further propose a strategic search algorithm that checks only the critical failure scenarios while still providing correctness guarantees. Our experimental evaluation on a benchmark of Internet and datacenter topologies confirms an improved performance of our strategic search by several orders of magnitude.
5

Albert, Elvira, Samir Genaim, Enrique Martin-Martin, Alicia Merayo, and Albert Rubio. "Lower-Bound Synthesis Using Loop Specialization and Max-SMT." In Computer Aided Verification, 863–86. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81688-9_40.

Abstract:
This paper presents a new framework to synthesize lower bounds on the worst-case cost for non-deterministic integer loops. As in previous approaches, the analysis searches for a metering function that under-approximates the number of loop iterations. The key novelty of our framework is the specialization of loops, which is achieved by restricting their enabled transitions to a subset of the inputs combined with the narrowing of their transition scopes. Specialization allows us to find metering functions for complex loops that could not be handled before or to be more precise than previous approaches. Technically, it is performed (1) by using quasi-invariants while searching for the metering function, (2) by strengthening the loop guards, and (3) by narrowing the space of non-deterministic choices. We also propose a Max-SMT encoding that takes advantage of the use of soft constraints to force the solver to look for more accurate solutions. We show our accuracy gains on benchmarks extracted from the 2020 Termination and Complexity Competition by comparing our results to those obtained by the "Image missing" system.
6

Cocco, Simona, and Rémi Monasson. "Analyzing Search Algorithms with Physical Methods." In Computational Complexity and Statistical Physics. Oxford University Press, 2005. http://dx.doi.org/10.1093/oso/9780195177374.003.0010.

Abstract:
The computational effort needed to deal with large combinatorial structures varies considerably with the task to be performed and the resolution procedure used [425]. The worst-case complexity of a decision or optimization problem is defined as the time required by the best algorithm to treat any possible input to the problem. For instance, the worst-case complexity of the problem of sorting a list of n numbers scales as n log n: there exist several algorithms that can order any list in at most ~ n log n elementary operations, and none with asymptotically fewer operations. Unfortunately, the worst-case complexities of many important computational problems, called NP-complete, are not known. Partitioning a list of n numbers in two sets with equal partial sums is one among hundreds of known NP-complete problems. It is a fundamental conjecture of theoretical computer science that there exists no algorithm capable of partitioning any list of length n, or of solving any other NP-complete problem with inputs of size n, in a time bounded by a polynomial of n. Therefore, when trying to solve such a problem exactly, one necessarily uses algorithms that may take exponential time on some inputs. Quantifying how “frequent” these hard inputs are for a given algorithm is the question answered by the analysis of algorithms. We will present an overview of recent work by physicists to address this point, and more precisely to characterize the average performance—hereafter simply called complexity—of a given algorithm over a distribution of inputs to a computational problem. The history of algorithm analysis by physical methods and ideas is at least as old as the use of computers by physicists. One well-established chapter in this history is the analysis of Monte Carlo sampling algorithms for statistical mechanics models. It is well known that phase transitions, that is, abrupt changes in the physical properties of the model, can imply a dramatic increase in the time necessary for the sampling procedure. This phenomenon is commonly known as critical slowing down. The physicist's insight comes from the analogy between the dynamics of algorithms and the physical dynamics of the system. That analogy is quite natural: in fact many algorithms mimic the physical dynamics.
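As a concrete companion to the partitioning problem mentioned in the abstract, here is a brute-force decision procedure for number partitioning. Its worst-case running time is Θ(2^n), since it may try every assignment of the n numbers to the two sets, and it is precisely the question of how frequent such hard, exponential-time inputs are that the chapter's average-case analysis addresses. The snippet illustrates the problem itself, not the physics-based methods surveyed in the chapter.

```python
from itertools import product

def balanced_partition_exists(nums):
    """Brute force for the NP-complete number-partitioning problem: can nums be
    split into two sets with equal sums? Enumerates all 2^n assignments, so the
    worst-case cost grows exponentially with n."""
    total = sum(nums)
    if total % 2:
        return False
    for signs in product((0, 1), repeat=len(nums)):
        if 2 * sum(x for x, s in zip(nums, signs) if s) == total:
            return True
    return False

print(balanced_partition_exists([3, 1, 1, 2, 2, 1]))  # True: {3, 2} vs {1, 1, 2, 1}
```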
7

Sikorski, Krzysztof A. "Fixed Points- Noncontractive Functions." In Optimal Solution of Nonlinear Equations. Oxford University Press, 2001. http://dx.doi.org/10.1093/oso/9780195106909.003.0007.

Abstract:
In this chapter we consider the approximation of fixed points of noncontractive functions with respect to the absolute error criterion. In this case the functions may have multiple and/or whole manifolds of fixed points. We analyze methods based on sequential function evaluations as information. The simple iteration usually does not converge in this case, and the problem becomes much more difficult to solve. We prove that even in the two-dimensional case the problem has infinite worst-case complexity. This means that no methods exist that solve the problem with arbitrarily small error tolerance for some “bad” functions. In the univariate case the problem is solvable, and a bisection envelope method is optimal. These results are in contrast with the solution under the residual error criterion. The problem then becomes solvable, although with exponential complexity, as outlined in the annotations. Therefore, simplicial and/or homotopy continuation and all methods based on function evaluations exhibit exponential worst-case cost for solving the problem in the residual sense. These results indicate the need for average-case analysis, since for many test functions the existing algorithms computed ε-approximations with cost polynomial in 1/ε.
8

Hema, T., and Micheal Olaolu Arowolo. "Quantum Computing Significance on Multidimensional Data." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 217–30. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-6697-1.ch012.

Abstract:
Quantum computing is an emerging field of study and gains importance due to the fact that with the introduction of quantum computers, many challenges and changes are presented for the existing algorithms. The main reason for this is the exponential speed of such computers. This study analyzes some of the benefits and implications of quantum computing on geometrical problems such as the multidimensional search for window queries with R-Trees. A review of the window query on R-Trees in classical computing is done to consider its adaptability to quantum computers by applying Grover's quantum search algorithm from a theoretical point of view. Thereby, the worst-case query time complexity could be quadratically improved.
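The quadratic improvement referred to here is the standard Grover speedup for unstructured search: over a search space of N candidates, a classical algorithm needs Θ(N) oracle queries in the worst case, while Grover's algorithm needs only O(√N). In symbols,

```latex
T_{\text{classical}}(N) \;=\; \Theta(N)
\qquad\text{versus}\qquad
T_{\text{Grover}}(N) \;=\; O\!\left(\sqrt{N}\right).
```

How this generic query bound interacts with R-tree window queries over multidimensional data is the subject of the chapter.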
9

Sikorski, Krzysztof A. "Introduction." In Optimal Solution of Nonlinear Equations. Oxford University Press, 2001. http://dx.doi.org/10.1093/oso/9780195106909.003.0004.

Abstract:
This monograph is devoted to studying worst-case complexity results and optimal or nearly optimal methods for the approximation of solutions of nonlinear equations, approximation of fixed points, and computation of the topological degree. The methods are “global” in nature. They guarantee that the computed solution is within a specified error from the exact solution for every function in a given class. A common approach in numerical analysis is to study the rate of convergence and/or locally convergent methods that require special assumptions on the location of initial points of iterations to be “sufficiently” close to the actual solutions. This approach is briefly reviewed in the annotations to chapter 2, as well as in section 2.1.6, dealing with the asymptotic analysis of the bisection method. Extensive literature exists describing the iterative approach, with several monographs published over the last 30 years. We do not attempt a complete review of this work. The reader interested in this classical approach should consult the monographs listed in the annotations to chapter 2. We motivate our analysis and introduce basic notions with a simple example of zero finding for a continuous function with different signs at the endpoints of an interval. Example 3.1: We want to approximate a zero of a function f from the class F = {f : [0,1] → R : f(0) < 0 and f(1) > 0, f continuous}. By an approximate solution of this problem we understand any point x = x(f) such that the distance between x and some zero α = α(f) of the function f, f(α) = 0, is at most equal to a given small positive number ε: |x − α| ≤ ε. To compute x we first gather some information on the function f by sampling f at n sequentially chosen points t_i in the interval [0,1]. Then, based on this information, we select x. To minimize the time complexity we must select the minimal number of sampling points that guarantees computing x(f) for any function f in the class F. This minimal number of samples (in the worst case) is called the information complexity of the problem.
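A minimal bisection routine for exactly the class of Example 3.1 (continuous f on [0, 1] with f(0) < 0 < f(1)) makes the information-complexity count concrete: each function evaluation halves the interval known to contain a zero, so roughly log2(1/ε) evaluations suffice for an ε-approximation, and bisection is known to be essentially worst-case optimal for this class. The code is an illustrative sketch, not taken from the book.

```python
import math

def bisection_zero(f, eps=1e-6):
    """Bisection on [0, 1] for continuous f with f(0) < 0 < f(1).
    After ceil(log2(1/eps)) evaluations the bracketing interval has length
    at most eps, so its midpoint lies within eps of some zero of f."""
    a, b = 0.0, 1.0
    steps = math.ceil(math.log2((b - a) / eps))
    for _ in range(steps):
        mid = (a + b) / 2
        if f(mid) < 0:
            a = mid
        else:
            b = mid
    return (a + b) / 2

print(bisection_zero(lambda x: x**3 - 0.25))  # ~0.62996, the real root of x^3 = 0.25
```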
10

Dvořák, Wolfgang, Matthias König, and Stefan Woltran. "Treewidth for Argumentation Frameworks with Collective Attacks." In Computational Models of Argument. IOS Press, 2022. http://dx.doi.org/10.3233/faia220148.

Abstract:
Argumentation is a key formalism to resolve conflicts in incomplete or inconsistent knowledge bases. Argumentation Frameworks (AFs) and extended versions thereof turned out to be a fruitful approach to reason in a flexible and intuitive setting. The addition of collective attacks (we refer to this class of frameworks as SETAFs) enriches the expressiveness and allows for more compact instantiations from knowledge bases, while maintaining the computational complexity of standard argumentation frameworks. This means, however, that standard reasoning tasks are intractable and worst-case runtimes for known standard algorithms can be exponential. In order to still obtain manageable runtimes, we exploit graph properties of these frameworks. In this paper, we initiate a parameterized complexity analysis of SETAFs in terms of the popular graph parameter treewidth. While treewidth is well studied in the context of AFs with their graph structure, it cannot be directly applied to the (directed) hypergraphs representing SETAFs. We thus introduce two generalizations of treewidth based on different graphs that can be associated with SETAFs, i.e., the primal graph and the incidence graph. We show that while some of these notions allow for parameterized tractability results, reasoning remains intractable for other notions, even if we fix the parameter to a small constant.

Conference papers on the topic "Worst-case complexity analysis"

1

Marchetti-Spaccamella, A., A. Pelaggi, and D. Sacca. "Worst-case complexity analysis of methods for logic query implementation." In the sixth ACM SIGACT-SIGMOD-SIGART symposium. New York, New York, USA: ACM Press, 1987. http://dx.doi.org/10.1145/28659.28691.

2

Said, Amir. "Worst-case Analysis of the Low-complexity Symbol Grouping Coding Technique." In 2006 IEEE International Symposium on Information Theory. IEEE, 2006. http://dx.doi.org/10.1109/isit.2006.262028.

3

Wieder, Alexander, and Bjorn B. Brandenburg. "On the Complexity of Worst-Case Blocking Analysis of Nested Critical Sections." In 2014 IEEE Real-Time Systems Symposium (RTSS). IEEE, 2014. http://dx.doi.org/10.1109/rtss.2014.34.

4

Necoara, Ion. "Worst-case computational complexity analysis for embedded MPC based on dual gradient method." In 2014 18th International Conference on System Theory, Control and Computing (ICSTCC). IEEE, 2014. http://dx.doi.org/10.1109/icstcc.2014.6982477.

5

Shi, Ziqiang, and Rujie Liu. "Better Worst-Case Complexity Analysis of the Block Coordinate Descent Method for Large Scale Machine Learning." In 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2017. http://dx.doi.org/10.1109/icmla.2017.00-43.

6

Hausladen, Jürgen, Florian Gerstmayer, Thomas Jerabek, and Martin Horauer. "Integration of Static Worst-Case Execution Time and Stack Usage Analysis for Embedded Systems Software in a Cloud-Based Development Environment." In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/detc2017-67402.

Abstract:
New applications relying on embedded systems technologies often come with an increased number of features and functionalities. For instance, improved safety, reliability, usability or reduced power consumption are commonly encountered aspects. These, in turn, usually come at the cost of increased complexity. Managing the latter can become challenging, especially when looking at (worst-case) execution times or memory usage of embedded systems. In particular, many applications, e.g., safety-critical or real-time applications, require knowledge about the worst-case execution time and stack usage to make a clear statement on important system parameters such as the overall performance or schedulability with regard to critical deadlines. Assessing these properties requires elaborate tool support and profound knowledge and skills of the developers. In this paper, an evaluation of static analysis tools and the required steps to integrate these into an existing development environment is presented. The toolchain is either used offline or deployed within a cloud-based integrated development environment. The cloud approach enables ubiquitous access to the results and a unique visualization across multiple platforms. Additionally, the results are demonstrated along with a small use case.
7

Regli, William C., Satyandra K. Gupta, and Dana S. Nau. "Feature Recognition for Manufacturability Analysis." In ASME 1994 International Computers in Engineering Conference and Exhibition and the ASME 1994 8th Annual Database Symposium collocated with the ASME 1994 Design Technical Conferences. American Society of Mechanical Engineers, 1994. http://dx.doi.org/10.1115/cie1994-0391.

Abstract:
While automated recognition of features has been attempted for a wide range of applications, no single existing approach possesses the functionality required to perform manufacturability analysis. In this paper, we present a methodology for taking a CAD model of a part and extracting a set of machinable features that contains the complete set of alternative interpretations of the part as collections of MRSEVs (Material Removal Shape Element Volumes, a STEP-based library of machining features). The approach handles a variety of features including those describing holes, pockets, slots, and chamfering and filleting operations. In addition, the approach considers accessibility constraints for these features, has a worst-case algorithmic time complexity quadratic in the number of solid modeling operations, and modifies the recognized features to account for available tooling and produce more realistic volumes for manufacturability analysis.
8

Walker, Mark, and Pavel Y. Tabakov. "Design Optimization of Anisotropic Pressure Vessels With Manufacturing Uncertainties Accounted For." In ASME 8th Biennial Conference on Engineering Systems Design and Analysis. ASMEDC, 2006. http://dx.doi.org/10.1115/esda2006-95767.

Abstract:
Accurate optimal design solutions for most engineering structures present considerable difficulties due to the complexity and multi-modality of the functional design space. The situation is made even more complex when potential manufacturing tolerances must be accounted for in the optimization process. The present study provides an in-depth analysis of the problem, and a technique for determining the optimal design of engineering structures, with manufacturing tolerances accounted for, is then proposed and demonstrated. The numerical examples used to demonstrate the technique involve the design optimisation of anisotropic fibre-reinforced laminated pressure vessels. It is assumed that the probability of any tolerance value occurring within the tolerance band, compared with any other, is equal, and thus it is a worst-case scenario approach. A genetic algorithm with fitness sharing, including a micro-genetic algorithm, has been found to be very suitable and has been implemented in the technique.
9

Nicak, Tomas, Herbert Schendzielorz, and Elisabeth Keim. "Analysis of Fracture Mechanics Specimens Made of Inconel 600 Based on Assessment Methods of Different Complexity." In ASME 2009 Pressure Vessels and Piping Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/pvp2009-77195.

Abstract:
Fracture mechanics analysis plays an important role in the safety assessment of nuclear components. Usually the goal of such an analysis is to decide whether a given flaw size in the piping (or any component of the primary circuit) is acceptable or not. The word “acceptable” means that structural integrity of the component is guaranteed with sufficient safety margins up to the end of service life or up to the next in-service inspection (considering the worst-case loads and lower-bound material properties). To fulfil this highly responsible task in practice, several useful Engineering Assessment Methods (EAM) have been established, e.g. the local flow stress concept (Germany), assessment based on the J-integral (RSE-M, France), limit load calculation according to ASME XI (USA), or the two-criteria approach (R6, UK). These EAM are verified by a large number of tests carried out in the past. On a higher level, more advanced assessment methods have been developed during the last years, based on micromechanical models of void nucleation and growth. These advanced micromechanical models are used within the Finite Element Analysis (FEA) and allow the whole crack growth process, from initiation to final failure, to be studied in more detail. In the ductile regime, which is the typical case for application of the aforementioned methods, the crack growth can be divided into three phases: crack initiation, stable crack growth and unstable crack growth. In this paper, methods of different complexity will be applied to analyse fracture mechanics specimens made of Inconel 600. Special focus will be placed on crack growth modelling based on Gurson's porous metal plasticity theory. All performed calculations will be compared with experiments.
10

Tasora, Alessandro, and Dan Negrut. "On Some Properties of the Mechanical Topology That Affect Parallel Solvers." In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-13201.

Abstract:
The efficiency of parallel solvers for large multibody systems is affected by the topology of the network of constraints. In the most general setting, that is, problems involving contacts between large numbers of parts, the mechanical topology cannot be predicted a priori and also changes during the simulation. Depending on the strategy for splitting the computational workload across the processing units, different types of worst-case scenarios can occur. In this paper we discuss a few approaches to the parallelization of multibody solvers, ranging from fine-grained parallelism on GPUs to coarse-grained parallelism in clusters, and we show how their bottlenecks are directly related to some graph properties of the mechanical topology. Drawing on the topological analysis of the constraint network and its splitting, lower bounds on the computational complexity of the solver methods are presented, and some guidelines for limiting the worst-case scenarios in parallel algorithms are put forward.
