Selected scientific literature on the topic "Selection et optimisation d'hyperparamètre"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the list of current articles, books, theses, conference proceedings and other scholarly sources relevant to the topic "Selection et optimisation d'hyperparamètre".

Next to each source in the list of references there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read its abstract online, when available in the metadata.

Journal articles on the topic "Selection et optimisation d'hyperparamètre"

1

Grandjean, Martine, Thomas Meyer, Cédric Haon, and Pascale Chenevier. "Selection and Optimisation of Silicon Anodes for All-Solid-State Batteries". ECS Meeting Abstracts MA2022-01, no. 2 (July 7, 2022): 408. http://dx.doi.org/10.1149/ma2022-012408mtgabs.

Abstract:
In the field of energy storage, lithium-ion (Li-ion) technology is currently the most widespread on the market. It is widely integrated in portable electronic devices and is gaining interest in the automobile sector due to the development of electric vehicles. However, its performance is reaching its limits. The solid-state Li-ion battery (SSB) is a very promising technology for next-generation energy storage devices due to the promise of higher energy densities and enhanced safety [1]. SSBs are expected to enable the safe use of lithium metal anodes. However, the high reactivity of lithium metal with solid electrolytes leads to fast degradation of performance. Another issue is dendrite growth during cycling [2]. Consequently, silicon has been identified as a promising alternative anode material. Its great abundance, its reasonable working potential of 0.4 V vs Li/Li+ and its high theoretical specific capacity of 3579 mAh/g are its main advantages. However, due to the formation of an alloy with lithium ions up to the Li15Si4 phase, silicon undergoes a strong volume expansion of almost 300%. Various techniques are described in the literature to limit this phenomenon, mainly with liquid electrolytes, such as the use of nano-sized silicon particles [3]–[5]. Moreover, the reactivity of silicon with the solid electrolyte is little studied in all-solid-state batteries. In this work, the cyclability and the reactivity of two different silicon materials with a sulphide-based solid electrolyte were studied. The silicon materials studied have different morphologies: one consists of commercial micrometric silicon particles (SiMicro) (2-10 µm) and the other of silicon nanowires (SiNWs) (10 nm) synthesized in the laboratory by a chemical growth process [6]. The silicon-based composite consists of 30 wt% silicon, 50 wt% solid electrolyte (Li6PS5Cl) and 20 wt% conductive additive (VGCF). The different powders are mixed by manual grinding for 15 minutes. To study the effect of composite ageing, composites are either used directly for electrochemical characterisation (fresh composite) or after a few weeks of storage in the glove box (aged composite). The entire cell manufacturing process is carried out in a glove box. The electrochemical tests are performed with Biologic MPG2 and VMP3 instruments at room temperature, between -0.6 V and 1 V vs. Li-In, at a cycling rate of C/20. Normalized capacity as a function of the number of cycles for composites of SiNWs and SiMicro with and without ageing is shown in Figure 1 (a–b). The first striking result is the difference in composite ageing between SiNWs and SiMicro. Indeed, the aged composite causes a strong degradation of the cycling stability for SiMicro compared to the fresh composite, with 30% of the initial capacity at the 15th cycle versus 60%. On the contrary, no difference is observed for SiNWs between the two types of composites. It is therefore possible to assume that SiMicro has a higher reactivity with the solid electrolyte than SiNWs. To understand which species are formed during storage of the composite and why SiMicro is more reactive than SiNWs in the composite electrode used, XPS analyses were performed. First the raw materials were analysed, then the composite powders with and without ageing. The largest differences between the different powders could be observed in the S2p spectrum presented in Figure 1 (c–d). The presence of sulphur on the nanowires can be explained either by the synthesis conditions or by storage in a sulphur glove box.
The elements observed in the S2p spectrum are therefore very different depending on whether the composite is made with SiMicro or SiNWs, because of the difference in the materials alone. With regard to the ageing of the composite powder, the spectra obtained are identical for SiNWs before and after ageing, whereas an evolution is observed with SiMicro. Indeed, a doublet characteristic of oxidised sulphur appears after ageing. It is therefore possible to observe by XPS a degradation of the electrolyte for SiMicro, which is in agreement with the electrochemical characterizations. Future work will involve modifying the surface of SiMicro to compare it with the surface of SiNWs obtained after synthesis. [1] A. Kato et al., 2018, doi: 10.1021/acsaem.7b00140. [2] S. Cangaz et al., 2020, doi: 10.1002/aenm.202001320. [3] R. Okuno et al., 2020, doi: 10.1149/1945-7111/abc3ff. [4] C. Keller et al., 2021, doi: 10.3390/nano11020307. [5] J. Sakabe et al., 2018, doi: 10.1038/s42004-018-0026-y. [6] O. Burchak et al., 2019, doi: 10.1039/C9NR03749G.
2

Avendaño, S., B. Villanueva, and J. A. Woolliams. "Optimisation of selection decisions in the UK Meatlinc breed of sheep". Proceedings of the British Society of Animal Science 2002 (2002): 194. http://dx.doi.org/10.1017/s1752756200008504.

Abstract:
Best Linear Unbiased Prediction (BLUP) estimates of breeding values (EBVs) have been routinely used for selection decisions in the UK Meatlinc (ML) population since the early 1990s. This has enabled accurate selection and has allowed higher genetic gains for traits of economic relevance than in other terminal sheep breeds (MLC, 1999). However, concerns regarding increased rates of inbreeding (ΔF) from selecting exclusively on BLUP-EBVs have arisen in this small population. Dynamic rules to maximise genetic merit while ΔF is constrained to a pre-defined level using BLUP EBVs are currently available (e.g. Grundy et al. 1998), and their authors found higher gains than standard BLUP selection at the same ΔF by using these rules. The objective of this study was to investigate the potential of these procedures for optimising selection decisions under constrained inbreeding in the UK ML sheep population.
3

Villanueva, B., R. Pong-Wong, and J. A. Woolliams. "Benefits from marker assisted selection with optimised contributions and prior information on the QTL effect". Proceedings of the British Society of Animal Science 2002 (2002): 57. http://dx.doi.org/10.1017/s1752756200007134.

Abstract:
Studies investigating the value of Marker Assisted Selection (MAS) for increasing genetic gain have compared responses from MAS and conventional schemes obtained with standard truncation selection and have ignored rates of inbreeding, ΔF (e.g. Ruane and Colleau, 1995). On the other hand, research comparing schemes at the same ΔF using optimised selection (Villanueva et al. 1999) has assumed that the effect of the QTL is known without error. This study extends the optimisation methods to include selection on genetic markers rather than on the QTL itself.
4

Hoos, Holger. "Computer-Aided Algorithm Design: Automated Tuning, Configuration, Selection, and Beyond". Proceedings of the International Conference on Automated Planning and Scheduling 20 (May 25, 2021): 268–69. http://dx.doi.org/10.1609/icaps.v20i1.13426.

Abstract:
In this talk, I will introduce computer-aided algorithm design and discuss its main ingredients: design patterns, which provide ways of structuring potentially large spaces of candidate algorithms, and meta-algorithmic optimisation procedures, which are used for finding good designs within these spaces. After explaining how this algorithm design approach differs from and complements related approaches in program synthesis, genetic programming and so-called hyperheuristics, I will illustrate its success using examples from our own work in SAT-based software verification (Hutter et al. 2007), timetabling (Chiarandini, Fawcett, and Hoos 2008) and mixed integer programming (Hutter, Hoos, and Leyton-Brown 2010). Furthermore, I will argue why this approach can be expected to be particularly useful and effective for building better solvers for rich and diverse classes of combinatorial problems, such as planning and scheduling. Finally, I will outline how programming by optimisation, a design paradigm that emphasises the automated construction of performance-optimised algorithms by means of searching large spaces of alternative designs, has the potential to transform the design of high-performance algorithms from a craft based primarily on experience and intuition into a principled and highly effective engineering effort.
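The core loop of such meta-algorithmic optimisation can be illustrated with plain random search over a configuration space. A minimal sketch, assuming a synthetic `evaluate` stand-in for running the target algorithm on a benchmark set; the parameter names and ranges are invented for illustration, not Hoos's actual setup:

```python
import random

# Hypothetical configuration space for a local-search solver;
# parameter names and ranges are illustrative only.
CONFIG_SPACE = {
    "noise": [0.1, 0.2, 0.3, 0.4, 0.5],   # random-walk probability
    "restarts": [10, 100, 1000],          # flips before restart
    "tabu_len": [0, 5, 10],               # tabu tenure
}

def sample_config(rng):
    """Draw one configuration uniformly from the space."""
    return {k: rng.choice(v) for k, v in CONFIG_SPACE.items()}

def evaluate(config):
    """Placeholder for running the configured algorithm on benchmarks
    and returning its mean runtime; here a synthetic score."""
    return (config["noise"] - 0.3) ** 2 + 0.01 * config["tabu_len"] + 1.0 / config["restarts"]

def random_search(budget=100, seed=0):
    """Keep the best of `budget` randomly sampled configurations."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(budget):
        cfg = sample_config(rng)
        score = evaluate(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

if __name__ == "__main__":
    print(random_search())
```

More sophisticated configurators replace the uniform sampler with a model of the configuration-performance landscape, but the outer loop has this same shape.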
5

Saeed, D., G. Carter, and C. Parsons. "A systematic review of interventions to improve medicines optimisation in frail older patients in secondary and acute care settings". International Journal of Pharmacy Practice 29, Supplement_1 (March 26, 2021): i22–i23. http://dx.doi.org/10.1093/ijpp/riab015.026.

Abstract:
Abstract Introduction Frailty is a geriatric syndrome in which physiological systems have decreased reserve and resistance against stressors. Frailty is associated with polypharmacy, inappropriate prescribing and unfavourable clinical outcomes [1,2]. Aim To identify and evaluate studies of interventions designed to optimise the medications of frail older patients, aged 65 years or over, in secondary or acute care settings. Methods The protocol was registered and published on PROSPERO (CRD42019156623). A literature review was conducted across the following databases and trial registries: Medline, Scopus, Embase, Web of Science, Cochrane Library, Cochrane Central Register of Controlled Trials, International Pharmaceutical Abstracts, Cumulative Index to Nursing and Allied Health Literature Plus (CINAHL Plus), ClinicalTrials.gov, International Clinical Trials Registry Platform and Research Registry. All types of randomised controlled trials (RCTs) and non-randomised studies (NRSs) of interventions relating to any aspect of ‘medicines optimisation’, ‘medicines management’ or ‘pharmaceutical care’ for frail older inpatients (aged ≥ 65 years) were included. Eligible studies published in English were identified from the date of inception to October 2020. Screening and selection of titles, abstracts and full texts were followed by data extraction. Risk of bias was assessed using the Cochrane Collaboration RoB 2.0 tool for RCTs and the Risk Of Bias In Non-randomised Studies of Interventions (ROBINS-I) tool for NRSs. Results 36 articles were identified and, of these, three were eligible for inclusion (Figure 1). All included studies were RCTs. Although the included studies examined the effects of different types of interventions on different outcomes, they all concluded that medication optimisation interventions reduced suboptimal prescribing (measured as polypharmacy, inappropriate prescribing, and underuse) among frail older inpatients. The included studies used different tools to assess prescribing appropriateness; one used the STOPP criteria, one used the STOPPFrail criteria and one employed inpatient/outpatient geriatric evaluation and management according to published guidelines and Veterans Affairs (VA) hospital standards. Two of the included studies were assessed as having ‘some concerns’ of bias, and one was judged to be at ‘high risk’ of bias. Due to the heterogeneity of the included studies, a meta-analysis was not possible. Conclusion This systematic review demonstrates that medication optimisation interventions may improve medication appropriateness in frail older inpatients. Limitations include the small number of included studies and the exclusion of non-English-language articles. However, this review highlights the paucity of evidence examining the impact of medication optimisation on quality of prescribing and clinical outcomes for frail older inpatients, including hospitalisation, falls, quality of life and mortality. High-quality studies are needed to address this gap and to outline a framework of medication optimisation for this vulnerable cohort. References 1. Clegg A, Young J, Iliffe S, Rikkert MO, Rockwood K. Frailty in older people. Lancet. 2013;381(9868):752–62. 2. Fried LP, Tangen CM, Walston J, Newman AB, Hirsch C, Gottdiener J, et al. Frailty in older adults: evidence for a phenotype. J Gerontol A Biol Sci Med Sci. 2001;56(3):M146–M156.
6

Garba, Issa, Zakari Seybou Abdourahamane, Abdou Amadou Sanoussi, and Illa Salifou. "Optimisation de l'Evaluation de la Biomasse Fourragère en Zone Sahélienne Grâce à l’Utilisation de la Méthode de Régression Linéaire Multiple en Conjonction Avec la Stratification". European Scientific Journal, ESJ 19, no. 33 (November 30, 2023): 52. http://dx.doi.org/10.19044/esj.2023.v19n33p52.

Abstract:
The aim of this study, conducted in the pastoral zone of Niger, was to optimize the estimation of forage biomass at the scale of the different facies using the Multiple Linear Regression (MLR) method. The data used include field measurements of herbaceous mass between 2001 and 2012, station rainfall data, agrometeorological variables derived from meteorological data of the European Centre for Medium-Range Weather Forecasts (ECMWF) processed via AgroMetShell (AMS), SPOT VEGETATION NDVI satellite images processed with the Vegetation Analysis in Space and Time (VAST) program to obtain biophysical variables from annual decadal NDVI series, and estimated RFE rainfall data from the US Famine Early Warning Systems NETwork (FEWSNET) used to calculate annual rainfall totals. Strata were identified based on the FAO soil map, the ecoregion layer and the country's bioclimatic zones. The model was developed using MLR with a bottom-up variable selection approach based on adjusted R² and root mean square error (RMSE). To assess the model's robustness, leave-one-out cross-validation (LOO-CV) was used to calculate the validation R², and systematic residual diagnostics were carried out to better characterize the model. At the scale of the entire study area (global scale), the MLR produced an adjusted R² of 0.69 and an RMSE of 282 kg DM/ha, with only a slight difference of 2.72 kg DM/ha between the calibration and validation RMSE. Stratification improved model performance, with promising results. Models based on FAO soil types showed high R² values for Ge5-1a, Qc1, Qc7-1a, Ql1-1a and Re35-a. Ecoregions such as Azaouak, Manga1 and Manga2 also performed well. Model parameters by facies were even more promising, with R² ranging from 0.77 to 0.93. This work will have a significant impact in improving the quality of information used to plan development initiatives for protecting Nigerien society from pastoral crises.
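The variable-selection procedure described here is easy to sketch: greedy bottom-up (forward) selection scored by adjusted R², followed by an LOO-CV check of the chosen model. A minimal sketch on synthetic data standing in for the NDVI, rainfall and agrometeorological predictors (sizes and coefficients are invented):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def adjusted_r2(y, y_hat, n_params):
    """Adjusted R² penalizing the number of predictors."""
    n = len(y)
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return 1 - (1 - r2) * (n - 1) / (n - n_params - 1)

def forward_select(X, y):
    """Add the predictor that most improves adjusted R²; stop when none helps."""
    selected, remaining, best_adj = [], list(range(X.shape[1])), -np.inf
    while remaining:
        scores = []
        for j in remaining:
            cols = selected + [j]
            model = LinearRegression().fit(X[:, cols], y)
            scores.append((adjusted_r2(y, model.predict(X[:, cols]), len(cols)), j))
        adj, j_best = max(scores)
        if adj <= best_adj:
            break
        best_adj, selected = adj, selected + [j_best]
        remaining.remove(j_best)
    return selected, best_adj

# Synthetic stand-in for the biomass data: 120 plots, 8 candidate predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=120)

cols, adj = forward_select(X, y)
# LOO-CV R² of the selected model, mirroring the study's validation step.
y_loo = cross_val_predict(LinearRegression(), X[:, cols], y, cv=LeaveOneOut())
r2_loo = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
print(cols, round(adj, 3), round(r2_loo, 3))
```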
7

Miraftab, Mohsen, Ian Rushforth, and Kirill Horoshenkov. "Acoustic Underlay Manufactured from Carpet Tile Wastes". AUTEX Research Journal 6, no. 1 (March 1, 2006): 49–58. http://dx.doi.org/10.1515/aut-2006-060107.

Abstract:
Abstract Carpet waste has successfully been converted into acoustic underlay materials that compete with commercial counterparts both in terms of performance and cost. This paper builds on an earlier paper [Miraftab et al., Autex Res. J. 5(2), 96-105 (2005)] where granular/fibre mixing ratios, binder concentration and particle size distribution were shown to play a major role in maximising the impact sound insulation capabilities of the developed underlays. Product optimisation with respect to particle size, as governed by the aperture dimension and mean effective fibre length, is further explored in this paper, and the developed underlay is compared with a selection of commercially available acoustic underlays. The results show that a 2 mm-aperture screen at the granulating chamber output yields a waste stream with grains in the size range of 0.5-1.0 mm and a mean effective fibre length of 2.75 mm, which was the most suitable to work with and gave rise to samples with the best impact sound reduction performance. The optimised sample of 10 mm recycled underlay (U2) appeared to perform better than most commercial systems tested. The manufactured underlay withstood, and in some instances outperformed its commercial counterparts in, the standard tests required within the BS 5808 protocol. The study concludes that recycling carpet waste to produce quality acoustic underlay with desirable impact sound insulation characteristics is technically feasible and a viable alternative to landfill or incineration.
8

Podolská, K., D. Mazánková, and M. Göböová. "Retrospective Assessment of the Use of Pharmacotherapeutic Agents in Pregnancy with Potential Impact on Neonatal Health". European Pharmaceutical Journal 69, no. 2 (August 1, 2022): 17–25. http://dx.doi.org/10.2478/afpuc-2022-0015.

Abstract:
Abstract This study focuses on the role of a clinical pharmacist in the optimisation of pharmacotherapy for patients during pregnancy and its importance within the hospital sector in Slovakia. Pharmacotherapy in pregnant patients was evaluated retrospectively, with a focus on teratogenicity and appropriate drug selection. The hospital data were collected over 24 months from 22 female patients. The main observed outcome was the health condition of the newborn, expressed as healthy newborn, illness of the newborn, any congenital defect or malformation, spontaneous abortion, or unspecified information about the newborn. Based on a foetal risk assessment of the used therapeutic agents from the Summary of Product Characteristics (SmPC), basal foetal and neonatal risk assessment (Briggs et al., 2017), and recommendations and related past human reports and supporting evidence studies, drugs were divided into two groups: confirmed foetal risk drugs and negative (non-confirmed) foetal risk drugs. A total of 36.3% of the patients used two drugs. Patients most frequently used drugs during the first trimester (81.8%). During pregnancy, the most used drugs were nervous system drugs (25.5%), anti-infective agents (23.6%), and respiratory therapeutic agents (14.5%). Of the 22 patients, 16 (73%) had healthy newborns, despite the use of therapeutic agents with different foetal-risk variations. In the group of therapeutic agents with confirmed risk, a negative effect on the newborn's health was clinically manifested in some cases. Spontaneous abortion was present after the use of norethisterone acetate and valproic acid; an unspecified birth defect was present after the use of interferon β-1a and methylprednisolone sodium succinate. An illness (heart murmur) was present after the use of the monohydrate sodium salt of metamizole. Another illness (Wilms' tumour) was present after the use of budesonide. Unspecified information about the newborn was observed in four cases, after the use of prednisone, allopurinol, nadroparin, and fluvastatin.
9

Mccallum, C., M. Campbell, J. Vines, T. Rapley, and K. Hackett. "SAT0614-HPR IDENTIFYING AND OPTIMISING MULTIPLE INTERVENTION COMPONENTS AND THEIR DELIVERY WITHIN A SELF-MANAGEMENT SMARTPHONE APP FOR PEOPLE WITH SJÖGREN’S SYNDROME: A QUALITATIVE STUDY". Annals of the Rheumatic Diseases 79, Suppl 1 (June 2020): 1267.1–1268. http://dx.doi.org/10.1136/annrheumdis-2020-eular.2283.

Abstract:
Background: Sjögren’s syndrome (SS) is an autoimmune rheumatic disease with diverse symptoms including mental and physical fatigue, dryness, pain and sleep disturbances. These symptoms are interconnected and rarely occur in isolation. Improving symptoms and quality of life requires people with SS to navigate multiple interventions and engage in self-management. Smartphone applications (apps) can deliver multiple cognitive and behaviour-based interventions in users’ everyday lives and are readily accessible. However, delivering several therapeutic interventions together within a single coherent self-management app requires systematic and evidence-based selection of intervention components, and an understanding of existing self-management approaches and their associated challenges for those living with SS. Objectives: To identify theory-based intervention components for inclusion in an SS self-management app. To understand the self-management approaches and challenges of those living with SS to inform in-app component delivery. Methods: First, to identify intervention components for the app, existing interventions that target each of the symptoms of fatigue, dryness, pain and sleep disturbance were identified through a literature search. Their content was coded by the research team using behaviour change techniques and the Theoretical Domains Framework [1]. The content was grouped to form five intervention components which target multiple symptoms. Second, to understand SS self-management approaches and challenges, 13 people living with SS took part in a series of qualitative focus groups (n=6) and design workshops (n=7). Focus groups involved participants discussing their own self-management experiences and approaches (e.g. when and how they employed a variety of techniques). In design workshops participants sketched metaphors to explain these experiences and used craft materials to create “Magic Machines” [2] addressing their self-management challenges. Focus groups and design workshops were audio-recorded, transcribed, thematically analysed as a single data set, and the findings mapped to the self-determination theory [3] dimensions of capability, autonomy, and relatedness. Results: Intervention components identified were: i) SS psychoeducation, ii) relaxation techniques, iii) activity pacing and goal setting, iv) assertiveness and communication skills, and v) sleep and dryness tips. Participants tackled complex symptom patterns (i.e. symptom interrelatedness and flares) using different self-management approaches; reactively (focusing on the most severe symptom) or systematically (one symptom at a time). Knowing which intervention techniques to choose was felt to be challenging; however, the availability of multiple intervention techniques provided a sense of optimism and motivation. Participants were enthusiastic about accessing several intervention techniques via an app, but warned that smartphones and technology can exacerbate mental fatigue and eye dryness. The invisible nature of symptoms, and the highly visible nature of management techniques (e.g. applying eye drops), presented further self-management challenges relating to interactions with other people. Conclusion: Promising components to include in an SS app were identified but should be tested in an optimisation trial. The in-app delivery of component modules should be designed to support diverse self-management approaches, choice and autonomy, yet provide module recommendations and guidance when needed, and be simple to use to reduce mental fatigue and dry eye symptoms. A self-management app should also be designed to enable users to share information about SS with other people. References: [1] Cane J, et al. (2012) Implementation Science, 7(1), 37. [2] Andersen K, & Wakkary R. (2019) CHI Conference on Human Factors in Computing Systems (p. 1-13). [3] Deci E, & Ryan R (2008) Canadian Psychology, 49(3), 182. Acknowledgments: Versus Arthritis (Grant 22026). Disclosure of Interests: None declared.
10

Pilewicz, Tomasz, and Wojciech Sabat. "Behavioural location theory – evolution, tools and future". Kwartalnik Nauk o Przedsiębiorstwie 46, no. 1 (March 15, 2018): 61–68. http://dx.doi.org/10.5604/01.3001.0012.0998.

Abstract:
The behavioural location theory emphasises the importance of bounded rationality and the subjective perception of space in selecting the location for a business activity. The article discusses key concepts from the scope of behavioural location theory. According to the authors, the behavioural location theory is complementary rather than competitive in relation to the neoclassical or modern approach, as it allows explaining the deviations of decision-makers from optimising behaviour. Business location theory has already been discussed in this journal in various contexts, for example in articles by H. Godlewska-Majkowska, K. Kuciński, A. Rutkowska-Górak and A. Kałowski. However, to our knowledge, the behavioural approach has not yet been presented here, and we would like to fill the gap, offer a review of selected authors’ works and concepts from this field and hopefully inspire other scholars to develop this promising research direction. In addition, a quantitative analysis of publications on behavioural location theory will be presented. For the purposes of this article we define the behavioural location theory as the inclusion of psychological and subjective circumstances of the decision makers into location theory, such as bounded rationality, heuristics usage and subjective spatial perception. The behavioural approach seems underutilised in location theory despite its potential to explain many business location decisions which are inconsistent with the profit maximisation principle. According to R. Domański [1995]: "so far it has not been satisfactorily examined how the perception of space influences spatial behaviour of people. Nobody objects that many decisions, at least in part, depend on how people perceive the space surrounding them, how they differentiate it and what value they place on different elements of this space." According to W. Dziemianowicz [1997]: "the assessment of location factors by decision makers most often depends on specific qualities of the business and qualities of the decision maker." Surprisingly, decades have passed since the last important contributions in the field of behavioural location theory. Location theory has its roots in the 19th century, when J. H. von Thuenen offered the agricultural activity location theory in 1826. Interest in location theory revived more than 50 years later, mainly thanks to the works of W. Launhardt [1882] and A. Marshall [1886]. Important dates are also 1909, when A. Weber developed his industrial location theory and proposed the notion of a location factor, and 1933, when the first theory of services location emerged, authored by W. Christaller. The development of location theory then accelerated, with contributions from such authors as A. Loesch [1939], F. Perroux [1964] or P. Krugman [1991]. It can be argued that thanks to P. Krugman location theory entered mainstream economics, which had neglected spatial issues for a long time. Different location theory traditions put the emphasis on different aspects. For example, classical approach theorists indicate minimising production cost as the goal of the location decision maker, while the behavioural approach suggests a satisfactory choice as the goal. According to H. Godlewska-Majkowska, there are five approaches to location theory: classical, neo-classical, structural, behavioural and contemporary. Their focus points are briefly explained in Table 1.
There are three similar but distinct terms related to the business location choice:
• location factors – specific qualities of particular places which have a direct impact on investment volume during the building of the company's plant (or plants) and on the net profitability of business activity run in those places [Godlewska-Majkowska, 2001],
• location virtues – specific qualities of places which cause identical investments to differ, depending on location, in terms of investment volume, total production cost, sales revenue and taxes [Godlewska-Majkowska, 2015],
• location circumstances – internal and external phenomena which transform a location virtue into a location factor. Internal phenomena can be, for example, the industry, size and ownership structure of the business. External phenomena include, among others, economic, environmental and cultural issues [Godlewska-Majkowska, 2013].
Clearly, location requirements differ across sectors. Therefore, location factors are divided into general factors (those applying to all or many sectors) and sector-specific factors (those applying to one or a few sectors). There are also other classifications of location factors. The importance of subjective factors in the location choice is reflected in the classification by Grabow et al. [1995] into soft and hard location factors, following H. Godlewska-Majkowska [2015]. Hard factors are more traditional, have a direct influence on business activity and are easily measurable, while soft factors have an indirect influence on business activity and are difficult to quantify. It is worth noting that the authors of this classification consider both kinds of factors equally important and regard even the soft factors as ones which can be parametrised, measured and compared. Figure 1 presents the classification in more detail. In our view, Grabow et al. [1995] showed excessive scepticism about the measurability of some factors. For example, the local government's attitude towards investors may be measured by places in investment attractiveness rankings, such as 'Gmina na 5!' conducted every year by the Institute of Enterprise at the Collegium of Business Administration at the Warsaw School of Economics. Apart from that, the classification should be revisited, as more than 20 years have passed since its publication, and made more precise because, as H. Godlewska-Majkowska [2015] points out, some factors seem to overlap – social climate is presented as a separate factor from the local government's attitude towards investors, but in fact the former includes the latter. Each business has to choose its location, and the effect of business location selection is called a location decision. A location decision may be the result of a more or less formal procedure. There is a consensus among scholars that the business location decision is important for an entity's economic performance. At the same time, it is acknowledged in the literature that subjective factors (such as bounded rationality) play a non-negligible role in location choice. As R. Domański [2004] outlines, location decision makers "usually have limited knowledge and incomplete information and in many cases the decision maker does not behave like the homo oeconomicus. Sometimes he has limited or biased information about his decision situation and at the same time he assesses the incomplete information in a subjective way. If the situation is complicated, he has to simplify it by using intuitional rules in decision making. He does not try to achieve the optimal result but rather a satisfactory one." Such statements suggest R. Domański finds the bounded rationality model convincing. According to classical, neoclassical and contemporary business location theory the decision maker makes the optimal choice, while heterodox approaches such as behavioural location theory claim that making an optimal choice is impossible. The classical, neoclassical and contemporary theorists assume the decision maker is homo oeconomicus, a person with perfect information about the present and the future, able and willing to make complicated calculations and not prone to psychological biases. Behavioural economics accepts a different set of assumptions about human nature: limited (imperfect) knowledge of the decision maker, limited ability to process that knowledge, and searching for a satisfactory rather than optimal result. A decision maker who behaves in line with those assumptions is purposefully called homo satisfaciendus. Homo satisfaciendus is the concept of the decision maker used in the bounded rationality model created by H. Simon [1955], which is fundamental for behavioural economics, including behavioural location theory. In the model it is assumed that decision makers do not aim to maximise utility from the choice made (making an optimal decision) but rather search for a good enough (satisfactory) option, and once they find such an option they stop searching. In practice, it means that typically a decision maker will accept the first location that meets his minimum criteria, the so-called aspiration level, and will not even check alternative locations. Simon points out that people may use so-called heuristics, which are decision-making patterns simplifying their decision problems, but he did not elaborate on them. The gap was filled by D. Kahneman and A. Tversky [1975], who singled out three famous heuristics: availability, representativeness and anchoring. H. Godlewska-Majkowska [2016] argues that such heuristics are used to assess the location virtues of places which a location decision maker has visited within the business location decision-making process. The bounded rationality model has served as the basis for A. Pred's [1967] behavioural matrix, which linked information availability, the investor's information-processing ability and the profitability of the chosen business location. The general rule is that the more information (or information-processing ability) one has, the more profitable the location one chooses, ceteris paribus. An adapted version of the Pred matrix is presented in Figure 2. Point A represents homo oeconomicus, who has perfect information and a perfect ability to use it, so he or she will choose the optimal location. All other decision makers make suboptimal decisions, and the extreme is reached in point B, where the decision maker has little information and a low ability to process it, so he or she will choose a poor location that may result in a loss.
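The satisficing rule described above is easy to make concrete: accept the first option whose payoff meets the aspiration level instead of scanning everything for the maximum. A minimal sketch with invented payoff numbers, contrasting homo satisfaciendus with homo oeconomicus:

```python
import random

def choose_satisficing(payoffs, aspiration, rng):
    """Homo satisfaciendus: inspect locations in arbitrary order and
    accept the first one whose payoff meets the aspiration level."""
    for i in rng.sample(range(len(payoffs)), len(payoffs)):
        if payoffs[i] >= aspiration:
            return i
    # If nothing meets the aspiration level, fall back to the best found.
    return max(range(len(payoffs)), key=lambda i: payoffs[i])

def choose_optimal(payoffs):
    """Homo oeconomicus: full information, always picks the best payoff."""
    return max(range(len(payoffs)), key=lambda i: payoffs[i])

rng = random.Random(1)
payoffs = [rng.uniform(-10, 100) for _ in range(20)]  # hypothetical site profits
print("satisficing choice:", choose_satisficing(payoffs, aspiration=60, rng=rng))
print("optimal choice:    ", choose_optimal(payoffs))
```

The satisficing chooser typically stops early and rarely lands on the optimum, which is exactly the deviation from optimising behaviour the behavioural theory sets out to explain.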

Theses / dissertations on the topic "Selection et optimisation d'hyperparamètre"

1

Bertrand, Quentin. "Hyperparameter selection for high dimensional sparse learning: application to neuroimaging". Electronic thesis or dissertation, Université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG054.

Abstract:
Due to their non-invasiveness and excellent time resolution, magneto- and electroencephalography (M/EEG) have emerged as tools of choice to monitor brain activity. Reconstructing brain signals from M/EEG measurements can be cast as a high dimensional ill-posed inverse problem. Typical estimators of brain signals involve challenging optimization problems, composed of the sum of a data-fidelity term and a sparsity-promoting term. Because their regularization hyperparameters are notoriously hard to tune, sparsity-based estimators are currently not massively used by practitioners. The goal of this thesis is to provide a simple, fast, and automatic way to calibrate sparse linear models. We first study some properties of coordinate descent: model identification, local linear convergence, and acceleration. Relying on Anderson extrapolation schemes, we propose an effective way to speed up coordinate descent in theory and in practice. We then explore a statistical approach to set the regularization parameter of Lasso-type problems. A closed-form formula can be derived for the optimal regularization parameter of L1-penalized linear regressions; unfortunately, it relies on the true noise level, which is unknown in practice. To remove this dependency, one can resort to estimators for which the regularization parameter does not depend on the noise level. However, they require solving challenging "nonsmooth + nonsmooth" optimization problems. We show that partial smoothing preserves their statistical properties, and we propose an application to M/EEG source localization problems. Finally, we investigate hyperparameter optimization, encompassing held-out and cross-validation hyperparameter selection. This requires tackling bilevel optimization problems with nonsmooth inner problems. Such problems are canonically solved using zeroth-order techniques, such as grid search or random search. We present an efficient technique to solve these challenging bilevel optimization problems using first-order methods.
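The zeroth-order baseline that the thesis improves upon can be stated in a few lines of code: the outer objective is the held-out error, the inner problem is the Lasso fit, and a grid of regularization values is scanned. A minimal sketch with scikit-learn on synthetic data (problem sizes are invented; this shows only the grid-search baseline, not the thesis's first-order bilevel method):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

# Sparse regression problem standing in for an M/EEG-style inverse problem.
X, y = make_regression(n_samples=100, n_features=500, n_informative=10,
                       noise=1.0, random_state=0)

# Zeroth-order (grid search) calibration of the Lasso regularization
# parameter: each grid point solves the inner Lasso problem and is
# scored by cross-validated prediction error (the outer problem).
alphas = np.geomspace(1e-3, 10, 30)
search = GridSearchCV(Lasso(max_iter=10_000), {"alpha": alphas},
                      cv=5, scoring="neg_mean_squared_error")
search.fit(X, y)
print("best alpha:", search.best_params_["alpha"])
```

First-order bilevel methods replace this exhaustive grid with gradient steps on the regularization parameter, which is what makes them attractive when several hyperparameters must be tuned jointly.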
2

Yacoub, Meziane. "Sélection de caractéristiques et optimisation d'architectures dans les systèmes d'apprentissage connexionnistes". Paris 13, 1999. http://www.theses.fr/1999PA132014.

Abstract:
This thesis is devoted to the problem of choosing an architecture whose capacity is suited to the difficulty of the task. We propose a structural approach to controlling the generalization capacity of connectionist learning systems. This approach is based on a relevance measure, named HVS (Heuristic for Variable Selection), which evaluates the importance of each component of the model. The measure is first used for the difficult problem of variable selection, then extended to the optimization of multilayer perceptron architectures. Another use of this measure allowed us to develop a methodology to assist in the choice of an initial architecture for processing temporal sequences. Finally, we show on a real face identification problem that an extension of the HVS relevance measure makes it possible to detect and select discriminative regions for this type of pattern recognition application.
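The abstract does not spell out how HVS scores each input, so the sketch below substitutes a generic relevance ranking (permutation importance on a held-out set) to illustrate the variable-selection loop around a multilayer perceptron; it is only an illustration of the idea, not the HVS measure itself:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic task: 20 inputs, only 5 informative.
X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

# Rank inputs by how much shuffling each one degrades held-out accuracy;
# low-relevance inputs are candidates for pruning. This mirrors the idea
# of scoring each component of the model, not the actual HVS formula.
imp = permutation_importance(net, X_te, y_te, n_repeats=20, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print("inputs by relevance:", ranking[:10])
```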
3

Purba, Abdul Razak. "Optimisation de la sélection récurrente réciproque du palmier à huile (Elaeis guineensis Jacq.) par l'utilisation conjointe des index de sélection et des marqueurs moléculaires". Montpellier, ENSA, 2000. http://www.theses.fr/2000ENSA0018.

Abstract:
The oil palm breeding programme at IOPRI (Indonesian Oil Palm Research Institute) began in the 1970s after the introduction of several selected populations of African origin. A reciprocal recurrent selection (RRS) programme between these origins and the Deli origin (introduced in 1848) was undertaken. This programme required ranking the parents according to their combining ability. However, precise estimation of the genetic value of the parents is made difficult by constraints linked to the biology of the plant and by the unbalanced nature of the available genetic trials. The objective of this thesis was to estimate the genetic parameters of the parents using all available information: pedigree data, agronomic data and molecular marker data. The joint exploitation of all this information allows better management of variability and better control of recombination. AFLPs (amplified fragment length polymorphisms) and isozymes were used to estimate the genetic distance between individuals or between populations, as well as to structure IOPRI's breeding material. Departures from Hardy-Weinberg equilibrium and a separation of the parents into at least three groups were detected in the populations studied. The resulting organization of genetic variability therefore opens a discussion on the strict separation of oil palm populations into two heterotic groups currently applied in the RRS breeding scheme. The agronomic and pedigree data of the parents tested in the first selection cycle at IOPRI were used to improve the estimation of their genetic parameters using the BLUP (best linear unbiased predictor) method. Although the genetic variability of the Deli group is smaller than that of the African group, the divergence of the Deli individuals chosen as parents of the tested hybrids results in a comparable contribution of this group to the genetic variability. The results also demonstrated the consistency of the BLUP method in ranking the parents on the basis of their additive genetic values. Indeed, these values at the adult stage can be linearly explained by those at the young stage, which implies the possibility of simplifying the RRS scheme by carrying out early selection. The correlations between the observed and predicted values of cross performances are in general fairly moderate, but they are workable for selection based on oil yield, which is in fact the objective of oil palm breeders. Moreover, the use of molecular markers makes it possible to predict the performance of crosses as effectively as pedigree data does. Molecular information can therefore be used when pedigree information is unclear or unavailable. Knowledge of the structure and the genetic value of the parents made it possible to choose the best combinations and to eliminate poor crosses from the tests to be carried out. From a practical point of view, these results will make a significant contribution to the production of oil palm seeds and clones.
4

Sarmis, Merdan. "Etude de l'activité neuronale : optimisation du temps de simulation et stabilité des modèles". Thesis, Mulhouse, 2013. http://www.theses.fr/2013MULH3848/document.

Abstract:
Computational neuroscience consists in studying the nervous system through modeling and simulation: characterizing the laws of biology using mathematical models that integrate all known experimental data. From a practical point of view, the more realistic the model, the larger the required computational resources. The trade-off between complexity and accuracy is a well-known problem in the modeling and identification of models. The research conducted in this thesis aims at improving the simulation of mathematical models representing the physical and chemical behavior of synaptic receptors. Models of synaptic receptors are described by ordinary differential equations (ODEs) and are solved with numerical procedures. In order to optimize the performance of the simulations, I implemented various numerical ODE resolution methods. To facilitate the selection of the best solver, a method requiring a minimum amount of information is proposed. This method allows choosing the solver that optimizes the simulation. It demonstrates that the dynamics of a model have a greater influence on solver performance than the kinetic scheme of the model itself. In addition, to characterize pathogenic behavior, a parameter optimization phase is performed. However, some parameter values lead to unstable models. A stability study made it possible to determine the stability of the models for parameters provided by the literature, but also to derive stability constraints on these parameters. Compliance with these constraints guarantees the stability of the models studied during the optimization phase, and therefore the success of the procedure for studying pathogenic models.
5

Rincent, Renaud. "Optimisation des stratégies de génétique d'association et de sélection génomique pour des populations de diversité variable : Application au maïs". Thesis, Paris, AgroParisTech, 2014. http://www.theses.fr/2014AGPT0018/document.

Abstract:
Major progress has been achieved in genotyping technologies, which makes it easier to decipher the relationship between genotype and phenotype. This has contributed to the understanding of the genetic architecture of traits (genome-wide association studies, GWAS) and to better predictions of genetic value to improve breeding efficiency (genomic selection, GS). The objective of this thesis was to define efficient ways of conducting these approaches. We first derived analytically the power of the classical GWAS mixed model and showed that it is lower for markers with a low minor allele frequency, a strong differentiation among population subgroups, and a strong correlation with the markers used for estimating the kinship matrix K. We therefore considered two alternative estimators of K. Simulations showed that these were as efficient as the classical estimators at controlling false positives while providing more power. We confirmed these results on real datasets collected on two maize panels, and could increase by up to 40% the number of detected associations. These panels, genotyped with a 50k SNP array and phenotyped for flowering and biomass traits, were used to characterize the diversity of the Dent and Flint groups and to detect QTLs. In GS, studies have highlighted the importance of the relationship between the calibration set (CS) and the predicted set for the accuracy of predictions. Considering the current low cost of genotyping, we proposed a sampling algorithm for the CS based on the G-BLUP model, which resulted in higher accuracies than other sampling strategies for all the traits considered. It could reach the same accuracy as a randomly sampled CS with half the phenotyping effort.
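The G-BLUP predictor underlying such calibration-set design can be written in a few lines: genetic values of unphenotyped individuals are predicted from a genomic relationship matrix. A minimal sketch with synthetic genotypes (marker counts, set sizes and the variance ratio are invented; the thesis's actual contribution, the CS sampling criterion, is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 1000
M = rng.choice([0.0, 1.0, 2.0], size=(n, p))   # SNP genotype counts
Z = M - M.mean(axis=0)                          # centered markers
K = Z @ Z.T / p                                 # genomic relationship matrix

u = Z @ rng.normal(scale=0.05, size=p)          # simulated true genetic values
y = u + rng.normal(scale=1.0, size=n)           # phenotypes = genetics + noise

cal = np.arange(120)                            # calibration set (phenotyped)
pred = np.arange(120, n)                        # predicted set (unphenotyped)
lam = 1.0                                       # assumed sigma_e^2 / sigma_u^2

# G-BLUP: u_hat_pred = K_pc (K_cc + lam * I)^-1 (y_c - mean(y_c))
A = K[np.ix_(cal, cal)] + lam * np.eye(len(cal))
u_hat = K[np.ix_(pred, cal)] @ np.linalg.solve(A, y[cal] - y[cal].mean())
print("prediction accuracy r =", np.corrcoef(u_hat, u[pred])[0, 1].round(3))
```

A CDmean-style sampler would choose `cal` to maximize the expected reliability of exactly this predictor instead of drawing it at random.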
6

Blanc, Guylaine. "Sélection assistée par marqueurs (SAM) dans un dispositif multiparental connecté - application au maïs et approche par simulations". PhD thesis, INAPG (AgroParisTech), 2006. http://pastel.archives-ouvertes.fr/pastel-00003478.

Abstract:
The advent of molecular markers in the 1980s opened new perspectives for identifying loci involved in the variation of quantitative traits (QTL). Numerous studies, notably theoretical ones, have shown that using markers associated with QTLs in selection (marker-assisted selection, MAS) could provide an efficiency gain over conventional selection. In plant genetics, most QTL detection experiments are carried out in populations derived from a cross between two inbred lines, so that many resources are concentrated on a narrow genetic base. Yet the probability of detecting QTLs is higher in populations with a broad genetic base, involving more than two parents, because genetic diversity is greater. In a multiparental context, connected multiparental populations, i.e. populations derived from crosses sharing a common parent, are of major interest, since the connections between populations make it possible, for a given overall population size, to increase QTL detection power, to compare for each QTL the relative value of several alleles, and to study their possible interactions with the genetic background. In terms of MAS, markers should be particularly useful in such a context for directing crosses between individuals, in order to control recombination between the different parental genomes and to help select individuals that accumulate favourable alleles from the different starting parents. The objective of this programme is therefore to validate the value of a MAS scheme in a connected multiparental design. A diallel cross between four maize lines was used to generate six populations of 150 F2 plants each. QTL detection on this design of 900 individuals was carried out for various traits using MCQTL, which makes it possible to take the connections between populations into account. Comparing the QTLs detected population by population with those detected on the complete design, with and without taking connections into account, shows that the global analysis of the design accounting for the connections between populations provides a substantial gain in power and leads to more precise QTL localization. Based on these results we implemented three cycles of marker-based selection in two schemes with distinct objectives: (i) to obtain earlier-flowering material for the first; (ii) to increase yield while keeping grain moisture at harvest constant for the second. To follow the transmission of parental alleles at the QTLs across generations, a program computing identity-by-descent probabilities adapted to the design was developed. The experimental evaluation of genetic progress showed, after three cycles of selection, a significant gain in earliness of 3 days for the flowering scheme and a significant yield gain of 3.2 quintals for the yield scheme. In parallel, we compared different selection schemes by simulation, based on the experimental design implemented (number and effect of the QTLs, h²
ABNT, Harvard, Vancouver, APA styles, etc.
7

Hamdi, Faiza. "Optimisation et planification de l'approvisionnement en présence du risque de rupture des fournisseurs". Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2017. http://www.theses.fr/2017EMAC0002/document.

Full text of the source
Abstract:
Trade liberalization, the development of low-cost freight transport and the economic rise of emerging countries have made the globalization of supply chains an irreversible phenomenon. While these global chains reduce costs, in return they multiply the risk of disruption, from the procurement stage down to final distribution. This thesis focuses on the upstream stage, and more specifically on the case of a purchasing group that must select suppliers and allocate orders among those retained. Each supplier may fail to deliver its orders for reasons of its own (internal problems, poor quality) or external ones (natural disasters, transport problems). Depending on whether the selected suppliers deliver their orders or not, the operation will generate a profit or a loss. The objective of this thesis is to provide decision-support tools to a decision maker facing this problem, while taking into account that decision maker's attitude toward risk. Stochastic mixed-integer programs are proposed to model the problem. The first part of the work develops a visual decision-support tool that lets a decision maker find a solution maximizing expected profit for a fixed risk of loss. The second part applies the risk-estimation and quantification techniques VaR and CVaR to the problem, to help a decision maker who seeks to minimize the expected cost (using VaR) or the expected cost in the worst cases (using VaR and CVaR). Our results show that, for the decision to be effective, the decision maker must take all possible disruption scenarios into account, whatever their probability of occurrence.
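To illustrate the risk measures used in the second part, here is a minimal Python sketch, not the thesis model: the VaR and CVaR of the loss of a fixed supplier portfolio are estimated by Monte Carlo over independent disruption scenarios. The failure probabilities, quantities, prices, and the assumption that orders are paid for whether delivered or not, are all hypothetical.

```python
# Minimal VaR/CVaR sketch for a supplier portfolio under disruption risk.
import numpy as np

rng = np.random.default_rng(1)

fail_prob = np.array([0.05, 0.10, 0.20])     # assumed disruption probabilities
order_qty = np.array([400.0, 300.0, 300.0])  # units allocated to each supplier
unit_cost = np.array([8.0, 7.5, 7.0])        # purchase cost per unit
unit_price = 12.0                            # resale price per delivered unit

def simulate_losses(n_scenarios=100_000):
    delivered = rng.random((n_scenarios, 3)) > fail_prob  # True = delivers
    revenue = unit_price * (delivered * order_qty).sum(axis=1)
    cost = (unit_cost * order_qty).sum()     # paid regardless (assumption)
    return cost - revenue                    # positive values are losses

def var_cvar(losses, alpha=0.95):
    var = np.quantile(losses, alpha)         # Value-at-Risk at level alpha
    cvar = losses[losses >= var].mean()      # mean loss beyond the VaR
    return var, cvar

print("VaR95 = %.1f, CVaR95 = %.1f" % var_cvar(simulate_losses()))
```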
ABNT, Harvard, Vancouver, APA styles, etc.
8

Landru, Didier. "Aides informatisées à la sélection des matériaux et des procédés dans la conception des pièces de structure". Grenoble INPG, 2000. http://www.theses.fr/2000INPG0012.

Full text of the source
Abstract:
Selecting materials, and the processes used to shape, join and protect them, is a fundamental step in the design of a structural part. We developed and validated design-support methods and tools for finding the optimal combinations of materials, shapes and processes that satisfy a design brief. The search is carried out objectively, in particular for multi-constraint designs (through pre-dimensioning) and multi-objective designs (through value analysis). A module assisting in writing and correcting the design brief, based on expert knowledge of the most common errors, improves the quality of the most complex designs. In parallel, we designed a software tool for finding applications for a given material: it identifies the material's strengths and weaknesses with respect to different application domains and thereby suggests potential applications.
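The screen-then-rank logic at the heart of such tools can be sketched in a few lines of Python; the property table below is hypothetical and the merit index is the classic E^(1/2)/rho for a light, stiff beam, so this illustrates the principle rather than the thesis software.

```python
# Minimal material-selection sketch: screen on hard constraints, then rank
# the survivors by a performance index (here, stiffness per unit weight).
materials = {
    # name: (Young's modulus E [GPa], density rho [kg/m3], cost [$/kg], T_max [C])
    "steel":     (210.0, 7850.0,  0.8, 500),
    "aluminium": ( 70.0, 2700.0,  2.0, 150),
    "CFRP":      (110.0, 1550.0, 30.0, 120),
    "wood":      ( 12.0,  600.0,  0.5,  90),
}

def screen(materials, max_cost, min_tmax):
    """Eliminate materials that violate the hard constraints of the brief."""
    return {n: p for n, p in materials.items()
            if p[2] <= max_cost and p[3] >= min_tmax}

def rank_by_index(cands):
    """Rank surviving candidates by the light-stiff-beam index E**0.5 / rho."""
    index = lambda p: (p[0] ** 0.5) / p[1]
    return sorted(cands, key=lambda n: index(cands[n]), reverse=True)

survivors = screen(materials, max_cost=5.0, min_tmax=100)
print(rank_by_index(survivors))  # -> ['aluminium', 'steel']
```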
ABNT, Harvard, Vancouver, APA styles, etc.
9

Akkouche, Nourredine. "Optimisation du test de production de circuits analogiques et RF par des techniques de modélisation statistique". PhD thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00625469.

Full text of the source
Abstract:
The share of testing in the cost of designing and manufacturing integrated circuits keeps growing, hence the need to optimize this now unavoidable step. This thesis proposes new methods for ordering tests and for reducing the number of tests to be performed. The solution is a test order that detects defective circuits as early as possible, and that can also be used to eliminate redundant tests. These test methods rely on statistical modelling of the circuit under test; the modelling includes several parametric and non-parametric models so as to adapt to all types of circuit. Once the model is validated, the proposed methods generate a large sample containing defective circuits, which allows a better estimation of the test metrics, in particular the defect level. On this basis, a test ordering is built that maximizes early detection of defective circuits. With few tests, a selection-and-evaluation method yields the optimal test order; for circuits with a large number of tests, heuristics such as a decomposition method, genetic algorithms or floating-search methods are used to approach the optimal solution.
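One plausible reading of the early-detection objective is a greedy coverage heuristic, sketched below in Python; the test names and the sets of defective circuits each test catches are hypothetical, and the thesis methods (selection and evaluation, decomposition, genetic algorithms, floating search) are more elaborate than this.

```python
# Minimal sketch: order production tests so that defective circuits in a
# simulated sample are caught as early as possible. `catches[t]` is the set
# of defective-sample indices that test t fails.
def greedy_test_order(catches):
    """Order tests by marginal coverage of still-undetected defectives."""
    remaining = set().union(*catches.values())
    pending = dict(catches)
    order = []
    while pending and remaining:
        # pick the test that detects the most not-yet-caught defectives
        best = max(pending, key=lambda t: len(pending[t] & remaining))
        if not pending[best] & remaining:
            break  # every remaining test is redundant and can be dropped
        order.append(best)
        remaining -= pending.pop(best)
    return order

# toy example: 4 tests over a sample of 6 defective circuits (hypothetical)
catches = {
    "t1": {0, 1, 2},
    "t2": {2, 3},
    "t3": {3, 4, 5},
    "t4": {1, 2},  # fully covered by t1: redundant
}
print(greedy_test_order(catches))  # -> ['t1', 't3']; t2 and t4 add nothing
```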
ABNT, Harvard, Vancouver, APA styles, etc.
10

Pham, Viet Nga. "Programmation DC et DCA pour l'optimisation non convexe/optimisation globale en variables mixtes entières : Codes et Applications". PhD thesis, INSA de Rouen, 2013. http://tel.archives-ouvertes.fr/tel-00833570.

Full text of the source
Abstract:
Based on the theoretical and algorithmic tools of DC programming and DCA, the research in this thesis addresses local and global approaches to nonconvex optimization and global optimization in mixed-integer variables. The thesis has five chapters. The first chapter presents the foundations of DC programming and DCA, together with branch-and-bound (B&B) techniques for global optimization (using DC relaxation to compute lower bounds on the optimal value); it also includes results on exact penalization for mixed-integer programming. The second chapter develops a DCA method for solving an NP-hard class of nonconvex nonlinear mixed-integer programs: these nonconvex problems are first reformulated as DC programs via DC-programming penalty techniques, so that the resulting DC programs can be solved efficiently by well-adapted DCA and B&B. As a first application in financial optimization, we modelled the portfolio-management problem under concave transaction costs and applied DCA and B&B to solve it. The following chapter studies two formulations of the problem of minimizing a discontinuous, nonconvex transaction cost in portfolio management: the first is a DC program obtained by approximating the original objective with a polyhedral DC function, the second an equivalent mixed 0-1 DC program; we present DCA, B&B and a combined DCA-B&B algorithm for solving them. Chapter 4 studies the exact solution of the multi-objective problem in mixed binary variables and presents two concrete applications of the proposed method. The last chapter addresses two challenging problems: bounded-integer linear least squares and Nonnegative Matrix Factorization (NMF). NMF is particularly important owing to its many and varied applications, while major applications of the former arise in telecommunications. Numerical simulations show the robustness, speed (hence scalability), performance and global character of DCA compared with existing methods.
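The DCA iteration itself is easy to show on a toy DC program; the decomposition below is an illustrative example of ours, not one of the thesis codes. With f(x) = g(x) - h(x), g(x) = x², h(x) = |x|, each DCA step takes a subgradient y of h at the current point and solves the convex subproblem min_x g(x) - y·x, which here has the closed form x = y/2; the scheme reaches the global minimizer x = ±1/2.

```python
# Minimal DCA sketch on the toy DC program f(x) = x**2 - abs(x).
import numpy as np

def dca(x0, iters=20, tol=1e-10):
    x = x0
    for _ in range(iters):
        y = np.sign(x) if x != 0 else 1.0  # subgradient of h(x) = |x| at x
        # convex subproblem argmin_x x**2 - y*x has the closed form x = y/2
        x_new = y / 2.0
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

print(dca(x0=3.0))  # -> 0.5, a global minimizer of f(x) = x**2 - abs(x)
```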
ABNT, Harvard, Vancouver, APA styles, etc.