A selection of scholarly literature on the topic "Worst-case complexity analysis"

Format your source citations in APA, MLA, Chicago, Harvard, and other styles


Browse the lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Worst-case complexity analysis".

Next to every work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic citation of the selected work in your preferred citation style: APA, MLA, Harvard, Chicago, Vancouver, and so on.

You can also download the full text of a publication as a .pdf file and read its abstract online, whenever these are available in the source metadata.

Journal articles on the topic "Worst-case complexity analysis"

1. Szirmay-Kalos, L., and G. Márton. "Worst-case versus average case complexity of ray-shooting." Computing 61, no. 2 (June 1998): 103–31. http://dx.doi.org/10.1007/bf02684409.

2. Kwas, Marek, and Youming Li. "Worst case complexity of multivariate Feynman–Kac path integration." Journal of Complexity 19, no. 6 (December 2003): 730–43. http://dx.doi.org/10.1016/s0885-064x(03)00048-7.

3. Jackowski, Tomasz. "Complexity of multilinear problems in the worst case setting." Journal of Complexity 6, no. 4 (December 1990): 389–408. http://dx.doi.org/10.1016/0885-064x(90)90030-h.

4. Milanese, M., and A. Vicino. "Information-Based Complexity and Nonparametric Worst-Case System Identification." Journal of Complexity 9, no. 4 (December 1993): 427–46. http://dx.doi.org/10.1006/jcom.1993.1028.

5. Plaskota, Leszek. "Worst Case Complexity of Problems with Random Information Noise." Journal of Complexity 12, no. 4 (December 1996): 416–39. http://dx.doi.org/10.1006/jcom.1996.0026.

6. Pemmaraju, Sriram V., and Clifford A. Shaffer. "Analysis of the worst case space complexity of a PR quadtree." Information Processing Letters 49, no. 5 (March 1994): 263–67. http://dx.doi.org/10.1016/0020-0190(94)90065-5.

7. Li, Youming, and Grzegorz W. Wasilkowski. "Worst Case Complexity of Weighted Approximation and Integration over R^d." Journal of Complexity 18, no. 1 (March 2002): 330–45. http://dx.doi.org/10.1006/jcom.2001.0632.

8. Short, Michael. "Bounds on Worst-Case Deadline Failure Probabilities in Controller Area Networks." Journal of Computer Networks and Communications 2016 (2016): 1–12. http://dx.doi.org/10.1155/2016/5196092.

Abstract:
Industrial communication networks like the Controller Area Network (CAN) are often required to operate reliably in harsh environments which expose the communication network to random errors. Probabilistic schedulability analysis can employ rich stochastic error models to capture random error behaviors, but this is most often at the expense of increased analysis complexity. In this paper, an efficient method (of time complexity O(n log n)) to bound the message deadline failure probabilities for an industrial CAN network consisting of n periodic/sporadic message transmissions is proposed. The paper develops bounds for Deadline Minus Jitter Monotonic (DMJM) and Earliest Deadline First (EDF) message scheduling techniques. Both random errors and random bursts of errors can be included in the model. Stochastic simulations and a case study considering DMJM and EDF scheduling of an automotive benchmark message set provide validation of the technique and highlight its application.
9. De Haan, Ronald, Anna Roubickova, and Stefan Szeider. "Parameterized Complexity Results for Plan Reuse." Proceedings of the AAAI Conference on Artificial Intelligence 27, no. 1 (June 30, 2013): 224–31. http://dx.doi.org/10.1609/aaai.v27i1.8655.

Abstract:
Planning is a notoriously difficult computational problem of high worst-case complexity. Researchers have been investing significant efforts to develop heuristics or restrictions to make planning practically feasible. Case-based planning is a heuristic approach where one tries to reuse previous experience when solving similar problems in order to avoid some of the planning effort. Plan reuse may offer an interesting alternative to plan generation in some settings. We provide theoretical results that identify situations in which plan reuse is provably tractable. We perform our analysis in the framework of parameterized complexity, which supports a rigorous worst-case complexity analysis that takes structural properties of the input into account in terms of parameters. A central notion of parameterized complexity is fixed-parameter tractability which extends the classical notion of polynomial-time tractability by utilizing the effect of parameters. We draw a detailed map of the parameterized complexity landscape of several variants of problems that arise in the context of case-based planning. In particular, we consider the problem of reusing an existing plan, imposing various restrictions in terms of parameters, such as the number of steps that can be added to the existing plan to turn it into a solution of the planning instance at hand.
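
For orientation, here is the standard definition the abstract relies on (an illustrative textbook statement, not a quotation from the paper): a problem with input size n and parameter k is fixed-parameter tractable if some algorithm solves every instance in time

$$ f(k) \cdot n^{O(1)}, $$

where f is a computable function of k alone; any exponential blow-up is thus confined to the parameter rather than the whole input.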
10. Kon, Mark, and Leszek Plaskota. "Complexity of Neural Network Approximation with Limited Information: A Worst Case Approach." Journal of Complexity 17, no. 2 (June 2001): 345–65. http://dx.doi.org/10.1006/jcom.2001.0575.


Dissertations on the topic "Worst-case complexity analysis"

1. Panigrahi, Sunil Kumar, Soubhik Chakraborty, and Jibitesh Mishra. "A Statistical Analysis of Bubble Sort in terms of Serial and Parallel Computation." IJCSN Journal, 2012. http://hdl.handle.net/10150/214089.

Abstract:
In some recent papers, the weight-based statistical bounds have arguably explained time complexity better than the count-based mathematical bounds. This is definitely true for the average case, where for an arbitrary code it is difficult to identify the pivotal operation or pivotal region in the code for taking the expectation, and/or when the probability distribution, over which expectation is taken, becomes unrealistic over the problem domain. In the worst case, it can certify whether a mathematical bound is conservative or not. Here we revisit the results on Bubble sort in sequential mode and make an independent study of the same algorithm in parallel mode using statistical bounds.
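
As a rough illustration of the count-based side of such an analysis (a minimal sketch, not the authors' weight-based statistical bounds), one can instrument bubble sort and compare operation counts on random inputs against the reversed-input worst case:

```python
import random

def bubble_sort_ops(a):
    """Bubble sort a copy of the input; return (comparisons, swaps) performed."""
    a = list(a)
    comparisons = swaps = 0
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swaps += 1
                swapped = True
        if not swapped:              # pass without swaps: already sorted
            break
    return comparisons, swaps

n = 1000
worst = bubble_sort_ops(range(n, 0, -1))    # reversed input: Theta(n^2) worst case
trials = [bubble_sort_ops(random.sample(range(n), n)) for _ in range(20)]
print("worst case (comparisons, swaps):", worst)
print("random-case means:",
      sum(t[0] for t in trials) / len(trials),
      sum(t[1] for t in trials) / len(trials))
```

On random permutations the swap count concentrates around n(n-1)/4, half of the reversed-input worst case n(n-1)/2; this average-versus-worst-case gap is the kind of behaviour the statistical bounds are meant to capture.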
2. Gurioli, Gianmarco. "Adaptive Regularisation Methods under Inexact Evaluations for Nonconvex Optimisation and Machine Learning Applications." Doctoral thesis, 2021. http://hdl.handle.net/2158/1238314.

Abstract:
The major aim of this research thesis is to handle two main challenges arising when solving unconstrained optimisation problems with second-order methods: the reduction of the per-iteration cost and the stochastic analysis of the resulting non-deterministic algorithms. This is motivated by the fact that second-order procedures can be more efficient than first-order ones on badly scaled and ill-conditioned problems, since they seem to take advantage of curvature information to escape more easily from saddle points and to be more robust to the choice of hyperparameters and to parameter tuning, but at the price of a more expensive per-iteration cost, due to the computation of Hessian-vector products. Furthermore, the effort of reducing such a cost with inexact function and/or derivative evaluations, which have to fulfill suitable accuracy requirements, leads to non-deterministic variants of the methods that have to be supported by a stochastic complexity analysis. The thesis builds on a particular class of second-order globally convergent methods based on the Adaptive Cubic Regularisation (ARC) framework, motivated by the fact that its complexity, in terms of the worst-case number of iterations to reach a first-order critical point, has been proved to be optimal. To this purpose, the design, analysis and development of novel variants of ARC methods, employing inexact derivatives and/or function evaluations, are investigated. To start with, a suitable reference version of the ARC method is first introduced, obtained by merging existing basic forms of ARC algorithms, in order to set the general background on adaptive cubic regularisation. Having set the scene, we then cope with the need of introducing inexactness in function and derivative computations while conserving optimal complexity. After setting the finite-sum minimisation framework, this starts with the employment of inexact Hessian information, adaptively chosen, before moving on to an extended framework based on function estimates and approximate derivative evaluations. The stochastic complexity analysis of the presented frameworks is then performed. Finally, numerical tests within the context of supervised learning are reported, ranging from popular machine learning datasets to a real-life machine learning industrial application related to the parametric design of centrifugal pumps.
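
For context, the ARC iteration the abstract refers to can be sketched as follows (a standard formulation from the adaptive cubic regularisation literature, not text from the thesis): at iterate $x_k$, one approximately minimises the cubic model

$$ m_k(s) = f(x_k) + \nabla f(x_k)^{\top} s + \tfrac{1}{2}\, s^{\top} B_k s + \tfrac{\sigma_k}{3}\, \|s\|^3, $$

where $B_k$ is a (possibly inexact) Hessian approximation and the regularisation weight $\sigma_k > 0$ is adapted across iterations. With exact evaluations, this scheme reaches a point with $\|\nabla f(x_k)\| \le \epsilon$ in at most $O(\epsilon^{-3/2})$ iterations, which is the optimal worst-case bound mentioned above.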

Books on the topic "Worst-case complexity analysis"

1. Kowalski, Marek A., Krzysztof A. Sikorski, and Frank Stenger. Selected Topics in Approximation and Computation. Oxford University Press, 1995. http://dx.doi.org/10.1093/oso/9780195080599.001.0001.

Abstract:
Selected Topics in Approximation and Computation addresses the relationship between modern approximation theory and computational methods. The text is a combination of expositions of basic classical methods of approximation leading to popular splines and new explicit tools of computation, including Sinc methods, elliptic function methods, and positive operator approximation methods. It also provides an excellent summary of worst-case analysis in information-based complexity. It relates optimal computational methods with the theory of s-numbers and n-widths. It can serve as a text for senior-graduate courses in computer science and applied mathematics, and also as a reference for professionals.

Book chapters on the topic "Worst-case complexity analysis"

1. Xia, Lirong, and Weiqiang Zheng. "Beyond the Worst Case: Semi-random Complexity Analysis of Winner Determination." In Web and Internet Economics, 330–47. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-22832-2_19.

2. Haslbeck, Maximilian P. L., and Peter Lammich. "For a Few Dollars More." In Programming Languages and Systems, 292–319. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72019-3_11.

Abstract:
We present a framework to verify both functional correctness and worst-case complexity of practically efficient algorithms. We implemented a stepwise refinement approach, using the novel concept of resource currencies to naturally structure the resource analysis along the refinement chain and to allow a fine-grained analysis of operation counts. Our framework targets the LLVM intermediate representation. We extend its semantics from earlier work with a cost model. As a case study, we verify the correctness and O(n log n) worst-case complexity of an implementation of the introsort algorithm, whose performance is on par with the state-of-the-art implementation found in the GNU C++ Library.
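
To make the complexity claim concrete (an illustrative Python sketch of the introsort idea only; the chapter verifies an LLVM-level implementation, not this code): quicksort recursion is capped at about 2·log2 n levels, after which heapsort finishes the range, so no input can trigger quadratic behaviour.

```python
import heapq
from math import log2

def introsort(a):
    """Sort list a in place. Quicksort recursion is depth-limited; on hitting
    the limit we fall back to heapsort, so the worst case stays O(n log n)."""
    def sort(lo, hi, depth):            # sorts the half-open range a[lo:hi]
        if hi - lo <= 16:               # small range: insertion sort
            for i in range(lo + 1, hi):
                x, j = a[i], i
                while j > lo and a[j - 1] > x:
                    a[j] = a[j - 1]
                    j -= 1
                a[j] = x
        elif depth == 0:                # recursion too deep: heapsort fallback
            h = a[lo:hi]
            heapq.heapify(h)
            for i in range(lo, hi):
                a[i] = heapq.heappop(h)
        else:                           # Lomuto partition around a[hi-1]
            p, i = a[hi - 1], lo
            for j in range(lo, hi - 1):
                if a[j] < p:
                    a[i], a[j] = a[j], a[i]
                    i += 1
            a[i], a[hi - 1] = a[hi - 1], a[i]
            sort(lo, i, depth - 1)
            sort(i + 1, hi, depth - 1)
    if a:
        sort(0, len(a), 2 * int(log2(len(a))) + 1)
```

Insertion sort on small ranges is a constant-factor optimisation; the heapsort fallback is what pins the worst case to O(n log n).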
3. Benerecetti, Massimo, Daniele Dell’Erba, and Fabio Mogavero. "Solving Mean-Payoff Games via Quasi Dominions." In Tools and Algorithms for the Construction and Analysis of Systems, 289–306. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45237-7_18.

Abstract:
We propose a novel algorithm for the solution of mean-payoff games that merges together two seemingly unrelated concepts introduced in the context of parity games, small progress measures and quasi dominions. We show that the integration of the two notions can be highly beneficial and significantly speeds up convergence to the problem solution. Experiments show that the resulting algorithm performs orders of magnitude better than the asymptotically best solution algorithm currently known, without sacrificing worst-case complexity.
4. Schmid, Stefan, Nicolas Schnepf, and Jiří Srba. "Resilient Capacity-Aware Routing." In Tools and Algorithms for the Construction and Analysis of Systems, 411–29. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72016-2_22.

Abstract:
To ensure high availability, communication networks provide resilient routing mechanisms that quickly change routes upon failures. However, a fundamental algorithmic question underlying such mechanisms is hardly understood: how to verify whether a given network reroutes flows along feasible paths, without violating capacity constraints, for up to k link failures? We chart the algorithmic complexity landscape of resilient routing under link failures, considering shortest path routing based on link weights as deployed, e.g., in the ECMP protocol. We study two models: a pessimistic model where flows interfere in a worst-case manner along equal-cost shortest paths, and an optimistic model where flows are routed in a best-case manner, and we present a complete picture of the algorithmic complexities. We further propose a strategic search algorithm that checks only the critical failure scenarios while still providing correctness guarantees. Our experimental evaluation on a benchmark of Internet and datacenter topologies confirms an improved performance of our strategic search by several orders of magnitude.
5. Albert, Elvira, Samir Genaim, Enrique Martin-Martin, Alicia Merayo, and Albert Rubio. "Lower-Bound Synthesis Using Loop Specialization and Max-SMT." In Computer Aided Verification, 863–86. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81688-9_40.

Abstract:
This paper presents a new framework to synthesize lower bounds on the worst-case cost of non-deterministic integer loops. As in previous approaches, the analysis searches for a metering function that under-approximates the number of loop iterations. The key novelty of our framework is the specialization of loops, which is achieved by restricting their enabled transitions to a subset of the inputs combined with the narrowing of their transition scopes. Specialization allows us to find metering functions for complex loops that could not be handled before, or to be more precise than previous approaches. Technically, it is performed (1) by using quasi-invariants while searching for the metering function, (2) by strengthening the loop guards, and (3) by narrowing the space of non-deterministic choices. We also propose a Max-SMT encoding that takes advantage of the use of soft constraints to force the solver to look for more accurate solutions. We show our accuracy gains on benchmarks extracted from the 2020 Termination and Complexity Competition by comparing our results to those obtained by the "Image missing" system.
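
For intuition, one common formalisation of metering functions reads as follows (hedged: this is the general notion from the lower-bound literature, and the chapter's definition may differ in details): for a loop with guard g and transitions s → s', a function M on program states is a metering function if

$$ g(s) \;\Rightarrow\; M(s) - M(s') \le 1, \qquad \neg g(s) \;\Rightarrow\; M(s) \le 0, $$

which guarantees that M(s) under-approximates the number of iterations executed from state s.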
6. Cocco, Simona, and Rémi Monasson. "Analyzing Search Algorithms with Physical Methods." In Computational Complexity and Statistical Physics. Oxford University Press, 2005. http://dx.doi.org/10.1093/oso/9780195177374.003.0010.

Abstract:
The computational effort needed to deal with large combinatorial structures varies considerably with the task to be performed and the resolution procedure used [425]. The worst-case complexity of a decision or optimization problem is defined as the time required by the best algorithm to treat any possible input to the problem. For instance, the worst-case complexity of the problem of sorting a list of n numbers scales as n log n: there exist several algorithms that can order any list in at most ~ n log n elementary operations, and none with asymptotically fewer operations. Unfortunately, the worst-case complexities of many important computational problems, called NP-complete, are not known. Partitioning a list of n numbers in two sets with equal partial sums is one among hundreds of known NP-complete problems. It is a fundamental conjecture of theoretical computer science that there exists no algorithm capable of partitioning any list of length n, or of solving any other NP-complete problem with inputs of size n, in a time bounded by a polynomial of n. Therefore, when trying to solve such a problem exactly, one necessarily uses algorithms that may take exponential time on some inputs. Quantifying how “frequent” these hard inputs are for a given algorithm is the question answered by the analysis of algorithms. We will present an overview of recent work by physicists to address this point, and more precisely to characterize the average performance—hereafter simply called complexity—of a given algorithm over a distribution of inputs to a computational problem. The history of algorithm analysis by physical methods and ideas is at least as old as the use of computers by physicists. One well-established chapter in this history is the analysis of Monte Carlo sampling algorithms for statistical mechanics models. It is well known that phase transitions, that is, abrupt changes in the physical properties of the model, can imply a dramatic increase in the time necessary for the sampling procedure. This phenomenon is commonly known as critical slowing down. The physicist's insight comes from the analogy between the dynamics of algorithms and the physical dynamics of the system. That analogy is quite natural: in fact many algorithms mimic the physical dynamics.
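
To make the notion used in this chapter concrete (a standard definition, added here for orientation): the worst-case complexity of an algorithm A is its running time maximised over all inputs of size n,

$$ T_A(n) = \max_{|x| = n} t_A(x), $$

and the worst-case complexity of a problem is the minimum of $T_A(n)$ over all correct algorithms A; for comparison-based sorting this minimum is $\Theta(n \log n)$, which is the bound quoted above.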
7. Sikorski, Krzysztof A. "Fixed Points - Noncontractive Functions." In Optimal Solution of Nonlinear Equations. Oxford University Press, 2001. http://dx.doi.org/10.1093/oso/9780195106909.003.0007.

Abstract:
In this chapter we consider the approximation of fixed points of noncontractive functions with respect to the absolute error criterion. In this case the functions may have multiple and/or whole manifolds of fixed points. We analyze methods based on sequential function evaluations as information. The simple iteration usually does not converge in this case, and the problem becomes much more difficult to solve. We prove that even in the two-dimensional case the problem has infinite worst-case complexity. This means that no methods exist that solve the problem with arbitrarily small error tolerance for some “bad” functions. In the univariate case the problem is solvable, and a bisection envelope method is optimal. These results are in contrast with the solution under the residual error criterion. The problem then becomes solvable, although with exponential complexity, as outlined in the annotations. Therefore, simplicial and/or homotopy continuation and all methods based on function evaluations exhibit exponential worst-case cost for solving the problem in the residual sense. These results indicate the need for average case analysis, since for many test functions the existing algorithms computed ε-approximations with cost polynomial in 1/ε.
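
As a rough illustration of the tractable univariate case (a minimal sketch assuming a continuous f mapping [0,1] into [0,1], not the chapter's formal bisection envelope construction): a fixed point of f is a zero of g(x) = f(x) - x with g(0) >= 0 >= g(1), so bisection on g halves the enclosing interval at every function evaluation.

```python
from math import cos

def fixed_point_bisection(f, eps=1e-6):
    """Approximate a fixed point of a continuous f: [0,1] -> [0,1] to within
    eps, by bisection on g(x) = f(x) - x, which satisfies g(0) >= 0 >= g(1)."""
    lo, hi = 0.0, 1.0
    while hi - lo > 2 * eps:
        mid = (lo + hi) / 2
        if f(mid) - mid >= 0:
            lo = mid              # a fixed point lies in [mid, hi]
        else:
            hi = mid              # a fixed point lies in [lo, mid]
    return (lo + hi) / 2          # within eps of some fixed point of f

print(fixed_point_bisection(cos))   # ~0.739085, the fixed point of cos on [0,1]
```

Each step costs one evaluation of f and halves the interval, so accuracy ε is reached after about log2(1/ε) evaluations; the chapter's negative result is that no analogous worst-case guarantee exists in dimension two or higher under the absolute error criterion.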
8. Hema, T., and Micheal Olaolu Arowolo. "Quantum Computing Significance on Multidimensional Data." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 217–30. IGI Global, 2023. http://dx.doi.org/10.4018/978-1-6684-6697-1.ch012.

Abstract:
Quantum computing is an emerging field of study and gains importance because the introduction of quantum computers presents many challenges and changes for existing algorithms. The main reason for this is the exponential speed of such computers. This study analyzes some of the benefits and implications of quantum computing for geometrical problems such as the multidimensional search for window queries with R-Trees. A review of the window query on R-Trees in classical computing is done to consider its adaptability to quantum computers by applying Grover's quantum search algorithm from a theoretical point of view. Thereby, the worst-case query time complexity could be improved quadratically.
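
For reference (a standard fact about Grover's algorithm, stated as background rather than as content of the chapter): unstructured search over N items requires Θ(N) queries classically in the worst case, whereas Grover's algorithm finds a marked item with high probability after about

$$ \frac{\pi}{4} \sqrt{N} $$

oracle queries; this quadratic speedup is known to be optimal for black-box search, which is the sense in which the worst-case query complexity improves.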
9. Sikorski, Krzysztof A. "Introduction." In Optimal Solution of Nonlinear Equations. Oxford University Press, 2001. http://dx.doi.org/10.1093/oso/9780195106909.003.0004.

Abstract:
This monograph is devoted to studying worst-case complexity results and optimal or nearly optimal methods for the approximation of solutions of nonlinear equations, approximation of fixed points, and computation of the topological degree. The methods are “global” in nature. They guarantee that the computed solution is within a specified error from the exact solution for every function in a given class. A common approach in numerical analysis is to study the rate of convergence and/or locally convergent methods that require special assumptions on the location of initial points of iterations to be “sufficiently” close to the actual solutions. This approach is briefly reviewed in the annotations to chapter 2, as well as in section 2.1.6, dealing with the asymptotic analysis of the bisection method. Extensive literature exists describing the iterative approach, with several monographs published over the last 30 years. We do not attempt a complete review of this work. The reader interested in this classical approach should consult the monographs listed in the annotations to chapter 2. We motivate our analysis and introduce basic notions in a simple example of zero finding for a continuous function with different signs at the endpoints of an interval. Example 3.1. We want to approximate a zero of a function f from the class F = {f : [0,1] → R : f continuous, f(0) < 0, and f(1) > 0}. By an approximate solution of this problem we understand any point x = x(f) such that the distance between x and some zero α = α(f) of the function f, f(α) = 0, is at most equal to a given small positive number ε: |x − α| ≤ ε. To compute x we first gather some information on the function f by sampling f at n sequentially chosen points t_i in the interval [0,1]. Then, based on this information, we select x. To minimize the time complexity we must select the minimal number of sampling points that guarantees computing x(f) for any function f in the class F. This minimal number of samples (in the worst case) is called the information complexity of the problem.
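
To attach a concrete bound to this example (an illustrative computation in the spirit of the monograph, not a quotation): bisection halves the enclosing interval with each sample, so after n function evaluations the midpoint of the remaining interval lies within $2^{-(n+1)}$ of some zero, and the error tolerance ε is met with

$$ n(\varepsilon) = \left\lceil \log_2 \tfrac{1}{2\varepsilon} \right\rceil $$

evaluations; adversary arguments show that no sequential sampling strategy can do better in the worst case over the class F, so this is the information complexity of the problem up to rounding.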
10. Dvořák, Wolfgang, Matthias König, and Stefan Woltran. "Treewidth for Argumentation Frameworks with Collective Attacks." In Computational Models of Argument. IOS Press, 2022. http://dx.doi.org/10.3233/faia220148.

Abstract:
Argumentation is a key formalism to resolve conflicts in incomplete or inconsistent knowledge bases. Argumentation Frameworks (AFs) and extended versions thereof have turned out to be a fruitful approach to reason in a flexible and intuitive setting. The addition of collective attacks (we refer to this class of frameworks as SETAFs) enriches the expressiveness and allows for more compact instantiations from knowledge bases, while maintaining the computational complexity of standard argumentation frameworks. This means, however, that standard reasoning tasks are intractable and worst-case runtimes of known standard algorithms can be exponential. In order to still obtain manageable runtimes, we exploit graph properties of these frameworks. In this paper, we initiate a parameterized complexity analysis of SETAFs in terms of the popular graph parameter treewidth. While treewidth is well studied in the context of AFs with their graph structure, it cannot be directly applied to the (directed) hypergraphs representing SETAFs. We thus introduce two generalizations of treewidth based on different graphs that can be associated with SETAFs, i.e., the primal graph and the incidence graph. We show that while some of these notions allow for parameterized tractability results, reasoning remains intractable for other notions, even if we fix the parameter to a small constant.
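
For background (a standard definition, included for orientation): a tree decomposition of a graph G = (V, E) assigns to each node t of a tree a bag $X_t \subseteq V$ such that every vertex and every edge of G appears in some bag, and for every vertex the bags containing it form a connected subtree; its width is

$$ \max_{t} |X_t| - 1, $$

and the treewidth of G is the minimum width over all tree decompositions. Bounded treewidth often makes otherwise intractable reasoning tasks fixed-parameter tractable, which is the route the chapter generalizes to the hypergraphs underlying SETAFs.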

Conference papers on the topic "Worst-case complexity analysis"

1. Marchetti-Spaccamella, A., A. Pelaggi, and D. Sacca. "Worst-case complexity analysis of methods for logic query implementation." In Proceedings of the Sixth ACM SIGACT-SIGMOD-SIGART Symposium. New York, New York, USA: ACM Press, 1987. http://dx.doi.org/10.1145/28659.28691.

2. Said, Amir. "Worst-case Analysis of the Low-complexity Symbol Grouping Coding Technique." In 2006 IEEE International Symposium on Information Theory. IEEE, 2006. http://dx.doi.org/10.1109/isit.2006.262028.

3. Wieder, Alexander, and Bjorn B. Brandenburg. "On the Complexity of Worst-Case Blocking Analysis of Nested Critical Sections." In 2014 IEEE Real-Time Systems Symposium (RTSS). IEEE, 2014. http://dx.doi.org/10.1109/rtss.2014.34.

4. Necoara, Ion. "Worst-case computational complexity analysis for embedded MPC based on dual gradient method." In 2014 18th International Conference on System Theory, Control and Computing (ICSTCC). IEEE, 2014. http://dx.doi.org/10.1109/icstcc.2014.6982477.

5. Shi, Ziqiang, and Rujie Liu. "Better Worst-Case Complexity Analysis of the Block Coordinate Descent Method for Large Scale Machine Learning." In 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2017. http://dx.doi.org/10.1109/icmla.2017.00-43.

6. Hausladen, Jürgen, Florian Gerstmayer, Thomas Jerabek, and Martin Horauer. "Integration of Static Worst-Case Execution Time and Stack Usage Analysis for Embedded Systems Software in a Cloud-Based Development Environment." In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/detc2017-67402.

Abstract:
New applications relying on embedded systems technologies often come with an increased number of features and functionalities. For instance, improved safety, reliability, usability or reduced power consumption are commonly encountered aspects. Those, in turn, usually come at the cost of increased complexity. Managing the latter can become challenging, especially when looking at (worst-case) execution times or memory usage of embedded systems. In particular, many applications, e.g., safety-critical or real-time applications, require knowledge about the worst-case execution time and stack usage to make a clear statement on important system parameters such as the overall performance or schedulability with regard to critical deadlines. Assessing these properties requires elaborate tool support and profound knowledge and skills of the developers. In this paper, an evaluation of static analysis tools and the steps required to integrate them into an existing development environment is presented. The toolchain is either used offline or deployed within a cloud-based integrated development environment. The cloud approach enables ubiquitous access to the results and a uniform visualization across multiple platforms. Additionally, the results are demonstrated along with a small use case.
7. Regli, William C., Satyandra K. Gupta, and Dana S. Nau. "Feature Recognition for Manufacturability Analysis." In ASME 1994 International Computers in Engineering Conference and Exhibition and the ASME 1994 8th Annual Database Symposium collocated with the ASME 1994 Design Technical Conferences. American Society of Mechanical Engineers, 1994. http://dx.doi.org/10.1115/cie1994-0391.

Abstract:
While automated recognition of features has been attempted for a wide range of applications, no single existing approach possesses the functionality required to perform manufacturability analysis. In this paper, we present a methodology for taking a CAD model of a part and extracting a set of machinable features that contains the complete set of alternative interpretations of the part as collections of MRSEVs (Material Removal Shape Element Volumes, a STEP-based library of machining features). The approach handles a variety of features, including those describing holes, pockets, slots, and chamfering and filleting operations. In addition, the approach considers accessibility constraints for these features, has a worst-case algorithmic time complexity quadratic in the number of solid modeling operations, and modifies the recognized features to account for available tooling and produce more realistic volumes for manufacturability analysis.
8. Walker, Mark, and Pavel Y. Tabakov. "Design Optimization of Anisotropic Pressure Vessels With Manufacturing Uncertainties Accounted For." In ASME 8th Biennial Conference on Engineering Systems Design and Analysis. ASMEDC, 2006. http://dx.doi.org/10.1115/esda2006-95767.

Abstract:
Accurate optimal design solutions for most engineering structures present considerable difficulties due to the complexity and multi-modality of the functional design space. The situation is made even more complex when potential manufacturing tolerances must be accounted for in the optimizing process. The present study provides an in-depth analysis of the problem, and a technique for determining the optimal design of engineering structures, with manufacturing tolerances accounted for, is then proposed and demonstrated. The numerical examples used to demonstrate the technique involve the design optimisation of anisotropic fibre-reinforced laminated pressure vessels. It is assumed that the probability of any tolerance value occurring within the tolerance band, compared with any other, is equal, and thus it is a worst-case scenario approach. A genetic algorithm with fitness sharing, including a micro-genetic algorithm, has been found to be very suitable and is implemented in the technique.
9. Nicak, Tomas, Herbert Schendzielorz, and Elisabeth Keim. "Analysis of Fracture Mechanics Specimens Made of Inconel 600 Based on Assessment Methods of Different Complexity." In ASME 2009 Pressure Vessels and Piping Conference. ASMEDC, 2009. http://dx.doi.org/10.1115/pvp2009-77195.

Abstract:
Fracture mechanics analysis plays an important role in the safety assessment of nuclear components. Usually the goal of such an analysis is to decide whether a given flaw size in the piping (or any component of the primary circuit) is acceptable or not. The word “acceptable” means that structural integrity of the component is guaranteed with sufficient safety margins up to the end of service life or up to the next in-service inspection (considering the worst-case loads and lower-bound material properties). To fulfil this highly responsible task in practice, some useful Engineering Assessment Methods (EAM) have been established, e.g., the local flow stress concept (Germany), assessment based on the J-integral (France, RSE-M), limit load calculation (ASME XI, USA), and the two-criteria approach (R6, UK). These EAM are verified by a large number of tests carried out in the past. On a higher level, more advanced assessment methods have been developed during the last years, based on micromechanical models of void nucleation and growth. These advanced micromechanical models are used within Finite Element Analysis (FEA) and allow the whole crack growth process from initiation to final failure to be studied in more detail. In the ductile regime, which is the typical case for application of the aforementioned methods, crack growth can be divided into three phases: crack initiation, stable crack growth and unstable crack growth. In this paper, methods of different complexity will be applied to analyse fracture mechanics specimens made of Inconel 600. Special focus will be placed on crack growth modelling based on Gurson's porous metal plasticity theory. All performed calculations will be compared with experiments.
10. Tasora, Alessandro, and Dan Negrut. "On Some Properties of the Mechanical Topology That Affect Parallel Solvers." In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-13201.

Abstract:
The efficiency of parallel solvers for large multibody systems is affected by the topology of the network of constraints. In the most general setting, that is, the case of problems involving contacts between large numbers of parts, the mechanical topology cannot be predicted a priori and also changes during the simulation. Depending on the strategy for splitting the computational workload across the processing units, different types of worst-case scenarios can occur. In this paper we discuss a few approaches to the parallelization of multibody solvers, ranging from fine-grained parallelism on GPUs to coarse-grained parallelism in clusters, and we show how their bottlenecks are directly related to some graph properties of the mechanical topology. Drawing on the topological analysis of the constraint network and its splitting, lower bounds on the computational complexity of the solver methods are presented, and some guidelines for limiting the worst-case scenarios in parallel algorithms are put forward.