Academic literature on the topic 'Allocation conjointe'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Allocation conjointe.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Allocation conjointe"

1

Ul Hassan Zardari, Noor, and Ian Cordery. "Determining Irrigators Preferences for Water Allocation Criteria Using Conjoint Analysis." Journal of Water Resource and Protection 04, no. 05 (2012): 249–55. http://dx.doi.org/10.4236/jwarp.2012.45027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Huxtable, Richard. "Logical separation? Conjoined twins, slippery slopes and resource allocation." Journal of Social Welfare and Family Law 23, no. 4 (January 2001): 459–71. http://dx.doi.org/10.1080/09649060110079341.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Danesh, Ahmad, Fariba Asghari, Hojjat Zeraati, Kamran Yazdani, Saharnaz Nedjat, Mohammad-Ali Mansournia, Ali Jafarian, and Akbar Fotouhi. "Public preferences for allocation of donated livers for transplantation: A conjoint analysis." Clinical Ethics 11, no. 4 (July 7, 2016): 176–81. http://dx.doi.org/10.1177/1477750916657662.

Full text
Abstract:
Despite the fact that the criteria for allocation of donated livers have been laid down for years, these criteria may not help to select a potential recipient from those with the same medical requirements. This study used the conjoint analysis method to determine the importance of certain non-medical factors from the public's point of view. Through a population-based study, a sample of 899 randomly selected persons filled in a questionnaire in which, for each question, the respondents had to choose one of two hypothetical patients as the recipient of a donor liver, considering their stated characteristics. The collected data were analyzed by means of the conjoint analysis method, and the importance of each characteristic was determined. According to the respondents, the important criteria for allocation of donated livers included younger age, being married or the breadwinner of the family, more than 3-year survival after transplantation, and having no role in causing the illness. Among the selected criteria, financial ability to pay post-operation costs had the least influence on the selection. The findings of this study indicate that the public may value certain social and individual factors in the case of multiple potential recipients with equal medical need for a liver transplant.
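As a rough illustration of how choice-based conjoint data of this kind are analyzed, the sketch below fits a logistic model on attribute differences between the two hypothetical patients in each question. The attribute names, codings and simulated answers are invented for illustration and are not the authors' data or code.

```python
# Choice-based conjoint sketch: respondents choose one of two hypothetical liver
# recipients; a logistic model on attribute differences recovers part-worths.
# All data below are simulated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pairs = 2000
attrs = ["younger age", "married/breadwinner", ">3-year survival", "no role in illness"]

# Binary attribute profiles for patient A and patient B in each question.
A = rng.integers(0, 2, size=(n_pairs, len(attrs)))
B = rng.integers(0, 2, size=(n_pairs, len(attrs)))

# Hypothetical "true" part-worths, used only to simulate respondents' choices.
true_w = np.array([1.2, 0.8, 1.5, 0.6])
p_choose_A = 1.0 / (1.0 + np.exp(-(A - B) @ true_w))
y = (rng.random(n_pairs) < p_choose_A).astype(int)   # 1 = chose patient A

# Fit the choice model on attribute differences and derive relative importances.
model = LogisticRegression(fit_intercept=False).fit(A - B, y)
part_worths = model.coef_.ravel()
importance = np.abs(part_worths) / np.abs(part_worths).sum()
for name, w, imp in zip(attrs, part_worths, importance):
    print(f"{name:>20s}: part-worth {w:+.2f}, relative importance {imp:.0%}")
```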
APA, Harvard, Vancouver, ISO, and other styles
4

Rashtchi, Rozita, Ramy H. Gohary, and Halim Yanikomeroglu. "Conjoint Routing and Resource Allocation in OFDMA-Based D2D Wireless Networks." IEEE Access 6 (2018): 18868–82. http://dx.doi.org/10.1109/access.2018.2816817.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Stepanov, Sergey N., Juvent Ndayikunda, and Margarita G. Kanishcheva. "Resource allocation model for LTE technology with functionality of NB-IoT and reservation." T-Comm 15, no. 11 (2021): 69–76. http://dx.doi.org/10.36724/2072-8735-2021-15-11-69-76.

Full text
Abstract:
The tremendous growth in the volume of multimedia data streams to be collected by multiple video cameras and a large number of sensors or smart meters in Internet of Things applications is one of the main challenges in the transition from 4G to true 5G network systems. The necessity of conjoint servicing of heterogeneous data over the existing infrastructure has been recognized and supported by 3GPP through the standardization and formalization of Narrowband Internet of Things (NB-IoT) technology. NB-IoT is the most promising technology for big data collection in the IoT landscape thanks to its particular characteristics, such as long-range coverage (10 km), highly energy-efficient operation and low-cost radio design. The same spectrum is shared between LTE high-rate end equipment and NB-IoT low-rate end devices. However, the challenge is how to share the available radio resources efficiently between multiple complex devices while giving priority to certain types of data flow. A model of resource sharing for the conjoint servicing of traffic originated both by video surveillance cameras and by sensors is constructed. Access control offering priority to one type of flow is used to provide differentiated servicing of the incoming sessions. The stationary state probabilities of the constructed model are used to determine the main performance measures. The constructed mathematical model can be used to study reservation-based resource allocation and sharing scenarios between LTE and NB-IoT traffic flows over 3GPP LTE with NB-IoT functionality.
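To make the type of model described here concrete, the sketch below builds a toy two-class loss system in which NB-IoT sessions are admitted only if a reservation margin remains free for LTE traffic, and solves the balance equations for the stationary state probabilities. All rates and capacities are invented, and the model is far simpler than the one in the paper.

```python
# Toy two-class resource-sharing model with reservation, solved as a CTMC.
# Class 1 (e.g. LTE video) needs b1 resource units, class 2 (e.g. NB-IoT) needs b2;
# a class-2 session is admitted only if `reserve` units stay free for class 1.
# All parameter values are illustrative and not taken from the paper.
import numpy as np

C, b1, b2, reserve = 10, 2, 1, 2
lam1, lam2, mu1, mu2 = 1.0, 3.0, 1.0, 2.0

states = [(n1, n2) for n1 in range(C // b1 + 1) for n2 in range(C // b2 + 1)
          if n1 * b1 + n2 * b2 <= C]
index = {s: i for i, s in enumerate(states)}
Q = np.zeros((len(states), len(states)))

for (n1, n2), i in index.items():
    used = n1 * b1 + n2 * b2
    if used + b1 <= C:                       # class-1 arrival accepted
        Q[i, index[(n1 + 1, n2)]] += lam1
    if used + b2 <= C - reserve:             # class-2 arrival accepted (reservation)
        Q[i, index[(n1, n2 + 1)]] += lam2
    if n1 > 0:                               # class-1 departure
        Q[i, index[(n1 - 1, n2)]] += n1 * mu1
    if n2 > 0:                               # class-2 departure
        Q[i, index[(n1, n2 - 1)]] += n2 * mu2
np.fill_diagonal(Q, -Q.sum(axis=1))

# Stationary distribution: solve pi Q = 0 with sum(pi) = 1 (least squares).
A = np.vstack([Q.T, np.ones(len(states))])
rhs = np.zeros(len(states) + 1)
rhs[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, rhs, rcond=None)

block1 = sum(pi[i] for (n1, n2), i in index.items() if n1 * b1 + n2 * b2 + b1 > C)
block2 = sum(pi[i] for (n1, n2), i in index.items() if n1 * b1 + n2 * b2 + b2 > C - reserve)
print(f"blocking probability: LTE {block1:.4f}, NB-IoT {block2:.4f}")
```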
APA, Harvard, Vancouver, ISO, and other styles
6

Ghosh, Avijit, and C. Samuel Craig. "An Approach to Determining Optimal Locations for New Services." Journal of Marketing Research 23, no. 4 (November 1986): 354–62. http://dx.doi.org/10.1177/002224378602300405.

Full text
Abstract:
Designing a network of service centers involves a tradeoff between the revenue likely to be generated by providing a high level of service and the cost of operating the service network. The authors develop a procedure for determining the location and design strategies for new services using a modified maximal covering location-allocation model. The network optimization procedure relies on direct assessment of consumer sensitivity to distance and nondistance factors through conjoint analysis and simultaneously determines the network size, location of outlets, price, and operating characteristics.
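A stripped-down version of a maximal covering location model can be sketched as follows. The greedy heuristic, demand points and service radius are invented for illustration and do not reproduce the authors' conjoint-based procedure.

```python
# Toy maximal covering location problem: pick p outlet sites so as to maximize
# the demand covered within a service radius. Greedy heuristic for illustration;
# the paper embeds conjoint-based consumer sensitivities, which are omitted here.
import numpy as np

rng = np.random.default_rng(1)
demand_xy = rng.random((60, 2))            # customer locations (invented)
demand_w = rng.integers(1, 10, size=60)    # demand weights (invented)
sites_xy = rng.random((15, 2))             # candidate outlet locations (invented)
radius, p = 0.25, 3

# cover[j, i] = True if candidate site j covers demand point i.
dist = np.linalg.norm(sites_xy[:, None, :] - demand_xy[None, :, :], axis=2)
cover = dist <= radius

chosen, covered = [], np.zeros(len(demand_w), dtype=bool)
for _ in range(p):
    # Marginal demand gained by opening each remaining site.
    gains = [demand_w[cover[j] & ~covered].sum() if j not in chosen else -1
             for j in range(len(sites_xy))]
    best = int(np.argmax(gains))
    chosen.append(best)
    covered |= cover[best]

print("chosen sites:", chosen)
print(f"covered demand: {demand_w[covered].sum()} / {demand_w.sum()}")
```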
APA, Harvard, Vancouver, ISO, and other styles
7

Sassi, Franco, David McDaid, and Walter Ricciardi. "Conjoint analysis of preferences for cardiac risk assessment in primary care." International Journal of Technology Assessment in Health Care 21, no. 2 (April 2005): 211–18. http://dx.doi.org/10.1017/s0266462305050282.

Full text
Abstract:
Objectives: Many evaluations underestimate the utility associated with diagnostic interventions by failing to capture the nonclinical value of diagnostic information. This is a cause of bias in resource allocation decisions. A study was undertaken to investigate preferences for the assessment of cardiac risk, testing the suitability of conjoint analysis, a multiattribute preference elicitation method, in the field of clinical diagnosis. Methods: Two conjoint analysis models focusing on selected characteristics of cardiac risk assessment in asymptomatic patients 40–50 years of age were applied to elicit preferences for cardiac risk assessment from samples of general practitioners and the general public in the United Kingdom and Italy. Both models were based on rankings of alternative scenarios, and the results were analyzed using multivariate analysis of variance and an ordered probit model. Results: In both countries, members of the public attached at least three times more importance to prognostic value (relative to clinical value) than did general practitioners. Significantly different patterns were found in the two countries with regard to other characteristics of the assessment. Variation within samples was partly associated with personal characteristics. Conclusions: Only a fraction of the value of cardiac risk assessment to individuals and physicians in this study was linked to health outcomes. The study confirmed the appropriateness and validity of conjoint analysis in the assessment of preferences for diagnostic interventions. A wider use of this technique might significantly strengthen the existing evidence base for diagnostic interventions, leading to a more efficient use of health-care resources.
APA, Harvard, Vancouver, ISO, and other styles
8

Hare, Christopher. "LOSS ALLOCATION FOR MATERIALLY ALTERED CHEQUES." Cambridge Law Journal 60, no. 1 (March 2001): 1–58. http://dx.doi.org/10.1017/s0008197301710616.

Full text
Abstract:
IN the conjoined appeals Smith v. Lloyds TSB Group plc; Jones v. Woolwich plc [2000] 3 W.L.R. 1725 the Court of Appeal had the opportunity to consider the single issue of whether the true owner of a cheque or banker’s draft, which it was accepted had been “materially altered”, and so, subject to irrelevant exceptions, avoided within the terms of the Bills of Exchange Act 1882, s. 64, and subsequently converted, is entitled to damages equivalent to the face value of the instrument. In Smith the Insolvency Service drew a cheque crossed “account payee” in favour of the Inland Revenue, on behalf of the claimants, who were the joint liquidators of ILG Travel Ltd. An unknown third party stole the cheque, altered the payee’s name to “Joseph Smitherman” and paid it into an account held in that name with the defendant, Lloyds Bank plc. The cheque was cleared before the fraud was discovered and the claimants sued the collecting bank for its conversion. The facts of Jones were in all material respects identical, save that the claim was brought against the paying bank in respect of a banker’s draft. Applications were made for the summary disposal of both cases. In Smith Blofeld J. struck out the claim against the collecting bank ([2000] 1 W.L.R. 1225), but in Jones Judge Hallgarten Q.C. awarded the claimants the face value of the draft. A unanimous Court of Appeal (Pill, Potter and Stuart-Smith L.JJ.) held that the claimants in both cases were only entitled to nominal damages.
APA, Harvard, Vancouver, ISO, and other styles
9

Duch, Raymond, Laurence S. J. Roope, Mara Violato, Matias Fuentes Becerra, Thomas S. Robinson, Jean-Francois Bonnefon, Jorge Friedman, et al. "Citizens from 13 countries share similar preferences for COVID-19 vaccine allocation priorities." Proceedings of the National Academy of Sciences 118, no. 38 (September 15, 2021): e2026382118. http://dx.doi.org/10.1073/pnas.2026382118.

Full text
Abstract:
How does the public want a COVID-19 vaccine to be allocated? We conducted a conjoint experiment asking 15,536 adults in 13 countries to evaluate 248,576 profiles of potential vaccine recipients who varied randomly on five attributes. Our sample includes diverse countries from all continents. The results suggest that in addition to giving priority to health workers and to those at high risk, the public favors giving priority to a broad range of key workers and to those with lower income. These preferences are similar across respondents of different education levels, incomes, and political ideologies, as well as across most surveyed countries. The public favored COVID-19 vaccines being allocated solely via government programs but were highly polarized in some developed countries on whether taking a vaccine should be mandatory. There is a consensus among the public on many aspects of COVID-19 vaccination, which needs to be taken into account when developing and communicating rollout strategies.
APA, Harvard, Vancouver, ISO, and other styles
10

Kumar, V. V., M. Tripathi, M. K. Pandey, and M. K. Tiwari. "Physical programming and conjoint analysis-based redundancy allocation in multistate systems: A Taguchi embedded algorithm selection and control (TAS&C) approach." Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability 223, no. 3 (June 29, 2009): 215–32. http://dx.doi.org/10.1243/1748006xjrr210.

Full text
Abstract:
Amidst increasing system complexity and technological advancements, the manufacturer aims to win the consumer's trust to maintain his or her permanent goodwill. This expectation directs the manufacturer to address the problem of attaining desired quality and reliability standards; hence, the measure of performance of a system in terms of reliability and utility optimization poses an issue of primary concern. In order to meet the requirement of a reliable and trouble-free product, optimal allocation of all conflicting parameters is essential during the design phase of a system. With this in mind, this paper presents a physical programming and conjoint analysis-based redundancy allocation model (PPCA-RAM) for a multistate series-parallel system. Use of the physical programming approach is the key feature of the proposed algorithm, eliminating the need for multi-objective optimization. The physical programming methodology provides an adequate balance among the various associated performance measures and thus provides an efficient tool for formulating the objective function of a practical redundancy allocation problem. The proposed model is addressed by a novel methodology called Taguchi embedded algorithm selection and control (TAS&C). An illustrative example is presented to validate the efficiency of the proposed model and algorithm. The results obtained are compared with the genetic algorithm (GA), artificial immune system (AIS), and particle swarm optimization (PSO), and TAS&C was seen to significantly outperform the rest.
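For orientation, the sketch below solves a toy version of the underlying redundancy allocation problem (maximize series-parallel system reliability under a cost budget) by exhaustive search. It does not implement physical programming, conjoint analysis or TAS&C, and all numbers are invented.

```python
# Toy redundancy allocation for a series-parallel system: choose the number of
# parallel components per stage to maximize system reliability under a cost budget.
# Exhaustive search for illustration only; the paper uses physical programming and
# a Taguchi-embedded metaheuristic instead. Parameter values are invented.
from itertools import product

r = [0.80, 0.90, 0.85]     # single-component reliability per stage
c = [3.0, 5.0, 4.0]        # component cost per stage
budget, max_redundancy = 30.0, 4

def system_reliability(n):
    # Series of stages, each stage being a parallel group of n[i] identical components.
    rel = 1.0
    for ri, ni in zip(r, n):
        rel *= 1.0 - (1.0 - ri) ** ni
    return rel

best = None
for n in product(range(1, max_redundancy + 1), repeat=len(r)):
    cost = sum(ci * ni for ci, ni in zip(c, n))
    if cost <= budget:
        rel = system_reliability(n)
        if best is None or rel > best[0]:
            best = (rel, n, cost)

rel, n, cost = best
print(f"best allocation {n}: reliability {rel:.4f} at cost {cost:.1f}")
```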
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Allocation conjointe"

1

Oueis, Jessica. "Gestion conjointe de ressources de communication et de calcul pour les réseaux sans fils à base de cloud." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM007/document.

Full text
Abstract:
Cette thèse porte sur le paradigme « Mobile Edge cloud» qui rapproche le cloud des utilisateurs mobiles et qui déploie une architecture de clouds locaux dans les terminaisons du réseau. Les utilisateurs mobiles peuvent désormais décharger leurs tâches de calcul pour qu’elles soient exécutées par les femto-cellules (FCs) dotées de capacités de calcul et de stockage. Nous proposons ainsi un concept de regroupement de FCs dans des clusters de calculs qui participeront aux calculs des tâches déchargées. A cet effet, nous proposons, dans un premier temps, un algorithme de décision de déportation de tâches vers le cloud, nommé SM-POD. Cet algorithme prend en compte les caractéristiques des tâches de calculs, des ressources de l’équipement mobile, et de la qualité des liens de transmission. SM-POD consiste en une série de classifications successives aboutissant à une décision de calcul local, ou de déportation de l’exécution dans le cloud.Dans un deuxième temps, nous abordons le problème de formation de clusters de calcul à mono-utilisateur et à utilisateurs multiples. Nous formulons le problème d’optimisation relatif qui considère l’allocation conjointe des ressources de calculs et de communication, et la distribution de la charge de calcul sur les FCs participant au cluster. Nous proposons également une stratégie d’éparpillement, dans laquelle l’efficacité énergétique du système est améliorée au prix de la latence de calcul. Dans le cas d’utilisateurs multiples, le problème d’optimisation d’allocation conjointe de ressources n’est pas convexe. Afin de le résoudre, nous proposons une reformulation convexe du problème équivalente à la première puis nous proposons deux algorithmes heuristiques dans le but d’avoir un algorithme de formation de cluster à complexité réduite. L’idée principale du premier est l’ordonnancement des tâches de calculs sur les FCs qui les reçoivent. Les ressources de calculs sont ainsi allouées localement au niveau de la FC. Les tâches ne pouvant pas être exécutées sont, quant à elles, envoyées à une unité de contrôle (SCM) responsable de la formation des clusters de calculs et de leur exécution. Le second algorithme proposé est itératif et consiste en une formation de cluster au niveau des FCs ne tenant pas compte de la présence d’autres demandes de calculs dans le réseau. Les propositions de cluster sont envoyées au SCM qui évalue la distribution des charges sur les différentes FCs. Le SCM signale tout abus de charges pour que les FCs redistribuent leur excès dans des cellules moins chargées.Dans la dernière partie de la thèse, nous proposons un nouveau concept de mise en cache des calculs dans l’Edge cloud. Afin de réduire la latence et la consommation énergétique des clusters de calculs, nous proposons la mise en cache de calculs populaires pour empêcher leur réexécution. Ici, notre contribution est double : d’abord, nous proposons un algorithme de mise en cache basé, non seulement sur la popularité des tâches de calculs, mais aussi sur les tailles et les capacités de calculs demandés, et la connectivité des FCs dans le réseau. L’algorithme proposé identifie les tâches aboutissant à des économies d’énergie et de temps plus importantes lorsqu’elles sont téléchargées d’un cache au lieu d’être recalculées. Nous proposons ensuite d’exploiter la relation entre la popularité des tâches et la probabilité de leur mise en cache, pour localiser les emplacements potentiels de leurs copies. 
La méthode proposée est basée sur ces emplacements, et permet de former des clusters de recherche de taille réduite tout en garantissant de retrouver une copie en cache
Mobile Edge Cloud brings the cloud closer to mobile users by moving the cloud computational efforts from the internet to the mobile edge. We adopt a local mobile edge cloud computing architecture, where small cells are empowered with computational and storage capacities. Mobile users’ offloaded computational tasks are executed at the cloud-enabled small cells. We propose the concept of small cells clustering for mobile edge computing, where small cells cooperate in order to execute offloaded computational tasks. A first contribution of this thesis is the design of a multi-parameter computation offloading decision algorithm, SM-POD. The proposed algorithm consists of a series of low complexity successive and nested classifications of computational tasks at the mobile side, leading to local computation, or offloading to the cloud. To reach the offloading decision, SM-POD jointly considers computational tasks, handsets, and communication channel parameters. In the second part of this thesis, we tackle the problem of small cell clusters set up for mobile edge cloud computing for both single-user and multi-user cases. The clustering problem is formulated as an optimization that jointly optimizes the computational and communication resource allocation, and the computational load distribution on the small cells participating in the computation cluster. We propose a cluster sparsification strategy, where we trade cluster latency for higher system energy efficiency. In the multi-user case, the optimization problem is not convex. In order to compute a clustering solution, we propose a convex reformulation of the problem, and we prove that both problems are equivalent. With the goal of finding a lower complexity clustering solution, we propose two heuristic small cells clustering algorithms. The first algorithm is based on resource allocation on the serving small cells where tasks are received, as a first step. Then, in a second step, unserved tasks are sent to a small cell managing unit (SCM) that sets up computational clusters for the execution of these tasks. The main idea of this algorithm is task scheduling at both serving small cells, and SCM sides for higher resource allocation efficiency. The second proposed heuristic is an iterative approach in which serving small cells compute their desired clusters, without considering the presence of other users, and send their cluster parameters to the SCM. SCM then checks for excess of resource allocation at any of the network small cells. SCM reports any load excess to serving small cells that re-distribute this load on less loaded small cells. In the final part of this thesis, we propose the concept of computation caching for edge cloud computing. With the aim of reducing the edge cloud computing latency and energy consumption, we propose caching popular computational tasks for preventing their re-execution. Our contribution here is two-fold: first, we propose a caching algorithm that is based on requests popularity, computation size, required computational capacity, and small cells connectivity. This algorithm identifies requests that, if cached and downloaded instead of being re-computed, will increase the computation caching energy and latency savings. Second, we propose a method for setting up a search small cells cluster for finding a cached copy of the requests computation. The clustering policy exploits the relationship between tasks popularity and their probability of being cached, in order to identify possible locations of the cached copy. 
The proposed method reduces the search cluster size while guaranteeing a minimum cache hit probability
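The offloading trade-off at the heart of this thesis can be sketched with a toy decision rule that compares local execution with offloading to a cloud-enabled small cell in terms of latency and mobile energy. This is not SM-POD, and all parameter values are assumed.

```python
# Toy computation-offloading decision: compare local execution of a task with
# offloading it to a cloud-enabled small cell, using latency and mobile energy.
# Simplified illustration only, not the SM-POD algorithm; all values are invented.
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float        # CPU cycles required by the task
    data_bits: float     # input data to upload if the task is offloaded

def local_cost(task, f_local=1e9, energy_per_cycle=1e-9):
    latency = task.cycles / f_local
    energy = task.cycles * energy_per_cycle
    return latency, energy

def offload_cost(task, rate_bps=20e6, tx_power=0.5, f_cell=4e9):
    tx_time = task.data_bits / rate_bps
    latency = tx_time + task.cycles / f_cell     # upload + remote execution
    energy = tx_power * tx_time                  # mobile side pays only for the upload
    return latency, energy

def decide(task, w_latency=0.5, w_energy=0.5):
    (tl, el), (to, eo) = local_cost(task), offload_cost(task)
    # Weighted sum of latency (s) and energy (J); a real scheme would normalize both.
    local_score = w_latency * tl + w_energy * el
    offload_score = w_latency * to + w_energy * eo
    return ("offload" if offload_score < local_score else "local"), (tl, el), (to, eo)

for task in [Task(cycles=5e8, data_bits=2e6), Task(cycles=5e7, data_bits=20e6)]:
    decision, (tl, el), (to, eo) = decide(task)
    print(f"cycles={task.cycles:.0e}, bits={task.data_bits:.0e}: "
          f"local ({tl:.3f} s, {el:.3f} J) vs offload ({to:.3f} s, {eo:.3f} J) -> {decision}")
```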
APA, Harvard, Vancouver, ISO, and other styles
2

Ben Slimane, Jamila. "Allocation conjointe des canaux de fréquence et des créneaux de temps et routage avec QdS dans les réseaux de capteurs sans fil denses et étendus." Thesis, Université de Lorraine, 2013. http://www.theses.fr/2013LORR0224/document.

Full text
Abstract:
Le thème général du sujet tourne autour de l'optimisation inter-couche des réseaux de capteurs basés sur la technologie ultra large bande ULB (UWB, Ultra Wide Band) moyennant des solutions protocolaires permettant d'un côté de répondre au besoin de qualité de service QdS à critères multiples dans les réseaux de capteurs sans fil et d'autre côté d'assurer le partage et l'allocation efficace les ressources disponibles (spectrale et temporelle) ainsi que l'optimisation de la consommation d'énergie dans des tels réseaux. Le domaine d'application cible choisi dans le présent travail est les systèmes de suivi des patients au sein d'un réseau de capteurs déployé en hôpital intelligent (WHSN, Large-scale Wireless Hospital Sensor Network). Dans ce contexte, nous avons proposé le modèle UWBCAS pour assurer le partage des ressources spectrales entre les PANs. Puis, nous avons conçu et implémenté un protocole MAC multi-canal multi-créneau de temps avec support de qualité de service, PMCMTP, pour assurer une allocation conjointe des canaux de fréquence et des créneaux de temps au sein de chaque réseau PAN. Enfin nous avons proposé l'algorithme JSAR qui traite à la fois les problèmes d'ordonnancement des cycles d'activités des membres du réseau dans le but d'optimiser la consommation d'énergie, d'allocation efficace des canaux de fréquence et des créneaux de temps afin d'améliorer le taux d'utilisation des ressources et les performances du réseau et de routage avec support de QdS à critères multiples afin de répondre aux besoins des applications supportées
The general context of this thesis is the cross-layer optimization of wireless sensor networks based on ultra-wide-band (UWB) technology. The proposed solutions ensure the sharing and efficient allocation of spectral and temporal resources, the optimization of energy consumption and the support of multi-constraint quality of service (QoS). The most challenging issue is providing a tradeoff between resource efficiency and multi-constrained QoS support. For this purpose, we proposed a new Wireless Hospital Sensor Network (WHSN) three-tiered architecture in order to support large-scale deployment and to improve network performance. Then we designed a channel allocation scheme (UWBCAS) and a prioritized multi-channel multi-time-slot MAC protocol (PMCMTP) to enhance network performance and maximize resource utilization. Finally, we proposed a joint duty-cycle scheduling, resource allocation and multi-constrained QoS routing algorithm (JSAR), which simultaneously combines a duty-cycle scheduling scheme for energy saving, a resource allocation scheme for efficient use of frequency channels and time slots, and a heuristic for multi-constrained routing.
APA, Harvard, Vancouver, ISO, and other styles
3

Sharara, Mahdi. "Resource Allocation in Future Radio Access Networks." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG024.

Full text
Abstract:
Cette thèse considère l'allocation des ressources radio et de calcul dans les futurs réseaux d'accès radio et plus précisément dans les réseaux Cloud-RAN (Cloud-Radio Access Networks) ainsi que les réseaux Open-RAN (Open-Radio Access Networks). Dans ces architectures, le traitement en bande de base de plusieurs stations de base est centralisé et virtualisé. Cela permet une meilleure optimisation du réseau et une réduction des dépenses d'investissement et d'exploitation. Dans la première partie de cette thèse, nous considérons un schéma de coordination entre les ordonnanceurs radio et de calcul. Dans le cas où les ressources de calcul ne sont pas suffisantes, l'ordonnanceur de calcul envoie un retour d'information à l'ordonnanceur radio pour mettre à jour les paramètres radio. Bien que cela réduise le débit radio de l'utilisateur, il garantit que la trame sera traitée au niveau de l'ordonnanceur de calcul. Nous modélisons ce schéma de coordination à l'aide de la programmation linéaire en nombres entiers (ILP) avec comme objectifs de maximiser le débit total ainsi que la satisfaction des utilisateurs. Les résultats montrent la capacité de ce schéma de coordination à améliorer différents paramètres, notamment la réduction du gaspillage de puissance de transmission. Ensuite, nous proposons des heuristiques à faible complexité et nous les testons dans un environnement de services multiples avec des exigences différentes. Dans la deuxième partie de cette thèse, nous considérons l'allocation conjointe des ressources radio et de calcul. Les ressources radio et de calcul sont allouées conjointement dans le but de minimiser la consommation énergétique. Le problème est modélisé à l'aide de la programmation linéaire mixte en nombres entiers (MILP), et est ensuite comparé à un autre problème MILP ayant comme objectif de maximiser le débit total. Les résultats montrent que l'allocation conjointe des ressources radio et de calcul est plus efficace que l'allocation séquentielle pour minimiser la consommation énergétique. Enfin, nous proposons un algorithme basé sur la théorie de matching (matching theory) à faible complexité qui pourra être une alternative pour résoudre le problème MILP à haute complexité. Dans la dernière partie de cette thèse, nous étudions l'utilisation des outils de l'apprentissage machine (machine learning). Tout d'abord, nous considérons un modèle d'apprentissage profond (deep learning) qui vise à apprendre comment résoudre le problème de coordination ILP, mais en un temps beaucoup plus court. Ensuite, nous considérons un modèle d'apprentissage par renforcement (reinforcement learning) qui vise à allouer des ressources de calcul aux utilisateurs afin de maximiser le profit de l'opérateur
This dissertation considers radio and computing resource allocation in future radio access networks and more precisely Cloud Radio Access Network (Cloud-RAN) and Open Radio Access Network (Open-RAN). In these architectures, the baseband processing of multiple base stations is centralized and virtualized. This permits better network optimization and allows for saving capital expenditure and operational expenditure. In the first part, we consider a coordination scheme between radio and computing schedulers. In case the computing resources are not sufficient, the computing scheduler sends feedback to the radio scheduler to update the radio parameters. While this reduces the radio throughput of the user, it guarantees that the frame will be processed at the computing scheduler level. We model this coordination scheme using Integer Linear Programming (ILP) with the objectives of maximizing the total throughput and users' satisfaction. The results demonstrate the ability of this scheme to improve different parameters, including the reduction of wasted transmission power. Then, we propose low-complexity heuristics, and we test them in an environment of multiple services with different requirements. In the second part, we consider the joint radio and computing resource allocation. Radio and computing resources are jointly allocated with the aim of minimizing energy consumption. The problem is modeled as a Mixed Integer Linear Programming Problem (MILP) and is compared to another MILP problem that maximizes the total throughput. The results demonstrate the ability of joint allocation to minimize energy consumption in comparison with the sequential allocation. Finally, we propose a low-complexity matching game-based algorithm that can be an alternative for solving the high-complexity MILP problem. In the last part, we investigate the usage of machine learning tools. First, we consider a deep learning model that aims to learn how to solve the coordination ILP problem, but with a much shorter time. Then, we consider a reinforcement learning model that aims to allocate computing resources for users to maximize the operator's profit
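A toy version of the joint radio and computing allocation studied here can be sketched as an exhaustive search over per-user modulation-and-coding choices that minimizes transmit energy under a shared baseband CPU budget. The model and all numbers are invented and much simpler than the MILP formulations of the thesis.

```python
# Toy joint radio/computing allocation for a Cloud-RAN-like setting: each user
# picks a modulation-and-coding scheme (MCS); a higher MCS shortens the airtime
# but needs more baseband CPU to decode. Exhaustive search for the choice that
# minimizes total transmit energy under a shared CPU budget and a per-user
# deadline. Hand-made illustration only, not the thesis' MILP; values invented.
from itertools import product

# (rate in Mbit/s, CPU units to decode) per MCS option.
MCS = [(5.0, 1.0), (10.0, 2.2), (20.0, 5.0)]
users_bits = [4.0, 8.0, 2.0]       # Mbit to deliver per user
tx_power = [0.8, 1.0, 0.6]         # transmit power per user, W
cpu_budget, deadline = 6.0, 0.8    # shared CPU units, per-user deadline in s

best = None
for choice in product(range(len(MCS)), repeat=len(users_bits)):
    airtime = [users_bits[u] / MCS[m][0] for u, m in enumerate(choice)]
    cpu = sum(MCS[m][1] for m in choice)
    if cpu <= cpu_budget and all(t <= deadline for t in airtime):
        energy = sum(p * t for p, t in zip(tx_power, airtime))
        if best is None or energy < best[0]:
            best = (energy, choice, cpu)

if best:
    energy, choice, cpu = best
    print(f"MCS per user: {list(choice)}, CPU used: {cpu:.1f}, energy: {energy:.3f} J")
else:
    print("no feasible joint allocation")
```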
APA, Harvard, Vancouver, ISO, and other styles
4

Soua, Ridha. "Wireless sensor networks in industrial environment : energy efficiency, delay and scalability." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2014. http://tel.archives-ouvertes.fr/tel-00978887.

Full text
Abstract:
Some industrial applications require deterministic and bounded gathering delays. We focus on the joint time slot and channel assignment that minimizes the time of data collection and provides conflict-free schedules. This assignment allows nodes to sleep in any slot where they are not involved in transmissions. Hence, these schedules save the energy budget of sensors. We calculate the minimum number of time slots needed to complete raw-data convergecast for a sink equipped with multiple radio interfaces and heterogeneous node traffic. We also give optimal schedules that achieve the optimal bounds. We then propose MODESA, a centralized joint slot and channel assignment algorithm. We prove the optimality of MODESA in specific topologies. Through simulations, we show that MODESA is better than TMCP, a centralized subtree-based scheduling algorithm. We improve MODESA with different strategies for channel allocation. In addition, we show that the use of multi-path routing reduces the time of data collection. Nevertheless, the joint time slot and channel assignment must be able to adapt to changing traffic demands of the nodes (alarms, additional requests for temporary traffic). We propose AMSA, an adaptive joint time slot and channel assignment based on an incremental technique. To address the issue of scalability, we propose WAVE, a scheduling algorithm for convergecast that operates in centralized or distributed mode. We show the equivalence of the schedules provided by the two modes.
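The kind of conflict-free joint slot and channel assignment discussed here can be illustrated with a small greedy scheduler for raw-data convergecast on a tree. This toy heuristic is not MODESA, AMSA or WAVE, and the topology and parameters are invented.

```python
# Toy joint slot/channel scheduling for raw-data convergecast on a tree.
# Greedy heuristic only, to illustrate the kind of schedule MODESA optimizes.
def convergecast_schedule(parent, packets, n_channels=2, sink_interfaces=2):
    n = len(parent)
    buf = list(packets)                       # packets currently buffered at each node
    schedule = []
    while sum(buf[1:]) > 0:                   # node 0 is the sink
        busy_tx, busy_rx, rx_count = set(), set(), {}
        slot = []
        # Greedy priority: serve the nodes with the largest backlog first.
        for u in sorted(range(1, n), key=lambda v: -buf[v]):
            p = parent[u]
            if buf[u] == 0 or u in busy_rx or p in busy_tx:
                continue                      # no packet, or half-duplex conflict
            cap = sink_interfaces if p == 0 else 1
            if rx_count.get(p, 0) >= cap or len(slot) >= n_channels:
                continue                      # receiver or channel budget exhausted
            slot.append((u, p, len(slot)))    # each parallel link gets its own channel
            busy_tx.add(u)
            busy_rx.add(p)
            rx_count[p] = rx_count.get(p, 0) + 1
        for u, p, _ in slot:                  # move one packet up each scheduled link
            buf[u] -= 1
            buf[p] += 1
        schedule.append(slot)
    return schedule

# Small tree: node 0 is the sink; 1 and 2 are its children; 3, 4 hang below 1; 5 below 2.
parent = [None, 0, 0, 1, 1, 2]
packets = [0, 1, 1, 2, 1, 1]                  # packets generated by each node
sched = convergecast_schedule(parent, packets)
for t, slot in enumerate(sched):
    print(f"slot {t}: " + ", ".join(f"{u}->{p} (ch {c})" for u, p, c in slot))
print(f"collection completed in {len(sched)} slots")
```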
APA, Harvard, Vancouver, ISO, and other styles
5

Zheng, Shuo. "Prise en compte des contraintes de canal dans les schémas de codage vidéo conjoint du source-canal." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT005/document.

Full text
Abstract:
Les schémas de Codage Vidéo Linéaire (CVL) inspirés de SoftCast ont émergé dans la dernière décennie comme une alternative aux schémas de codage vidéo classiques. Ces schémas de codage source-canal conjoint exploitent des résultats théoriques montrant qu'une transmission (quasi-)analogique est plus performante dans des situations de multicast que des schémas numériques lorsque les rapports signal-à-bruit des canaux (C-SNR) diffèrent d'un récepteur à l'autre. Dans ce contexte, les schémas de CVL permettent d'obtenir une qualité de vidéo décodée proportionnelle au C-SNR du récepteur. Une première contribution de cette thèse concerne l'optimisation de la matrice de précodage de canal pour une transmission de type OFDM de flux générés par un CVL lorsque les contraintes de puissance diffèrent d'un sous-canal à l'autre. Ce type de contrainte apparaît sur des canaux DSL, ou dans des dispositifs de transmission sur courant porteur en ligne (CPL). Cette thèse propose une solution optimale à ce problème de type multi-level water filling et nécessitant la solution d'un problème de type Structured Hermitian Inverse Eigenvalue. Trois algorithmes sous-optimaux de complexité réduite sont également proposés. De nombreux résultats de simulation montrent que les algorithmes sous-optimaux ont des performances très proches de l'optimum et réduisent significativement le temps de codage. Le calcul de la matrice de précodage dans une situation de multicast est également abordé. Une seconde contribution principale consiste en la réduction de l'impact du bruit impulsif dans les CVL. Le problème de correction du bruit impulsif est formulé comme un problème d'estimation d'un vecteur creux. Un algorithme de type Fast Bayesian Matching Pursuit (FBMP) est adapté au contexte CVL. Cette approche nécessite de réserver des sous-canaux pour la correction du bruit impulsif, entraînant une diminution de la qualité vidéo en l'absence de bruit impulsif. Un modèle phénoménologique (MP) est proposé pour décrire l'erreur résiduelle après correction du bruit impulsif. Ce modèle permet d'optimiser le nombre de sous-canaux à réserver en fonction des caractéristiques du bruit impulsif. Les résultats de simulation montrent que le schéma proposé améliore considérablement les performances lorsque le flux CVL est transmis sur un canal sujet à du bruit impulsif.
SoftCast-based Linear Video Coding (LVC) schemes have emerged in the last decade as a quasi-analog joint source-channel alternative to classical video coding schemes. Theoretical analyses have shown that analog coding is better than digital coding in a multicast scenario when the channel signal-to-noise ratios (C-SNR) differ among receivers. LVC schemes provide, in such a context, a decoded video quality at different receivers proportional to their C-SNR. This thesis first considers the channel precoding and decoding matrix design problem for LVC schemes under a per-subchannel power constraint. Such a constraint is found, e.g., on Power Line Telecommunication (PLT) channels and is similar to per-antenna power constraints in multi-antenna transmission systems. An optimal design approach is proposed, involving a multi-level water-filling algorithm and the solution of a structured Hermitian Inverse Eigenvalue problem. Three lower-complexity alternative suboptimal algorithms are also proposed. Extensive experiments show that the suboptimal algorithms perform close to the optimal one and can significantly reduce complexity. The precoding matrix design in multicast situations has also been considered. A second main contribution consists in an impulse noise mitigation approach for LVC schemes. Impulse noise identification and correction can be formulated as a sparse vector recovery problem. A Fast Bayesian Matching Pursuit (FBMP) algorithm is adapted to LVC schemes. Subchannel provisioning for impulse noise mitigation is necessary, leading to a nominal video quality decrease in the absence of impulse noise. A phenomenological model (PM) is proposed to describe the impulse noise correction residual. Using the PM, an algorithm to evaluate the optimal number of subchannels to provision is proposed. Simulation results show that the proposed algorithms significantly improve the video quality when the LVC stream is transmitted over channels prone to impulse noise.
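As background for the precoding problem treated in this thesis, the sketch below implements the classic water-filling power allocation under a single total-power constraint. The per-subchannel constrained, multi-level water-filling design of the thesis is more involved and is not reproduced here; channel gains are invented.

```python
# Classic water-filling power allocation across parallel subchannels under a
# total power budget, as a baseline illustration (not the thesis' algorithm).
import numpy as np

def waterfilling(gains, p_total, tol=1e-9):
    """Maximize sum(log2(1 + g_i * p_i)) s.t. sum(p_i) <= p_total, p_i >= 0."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_total + inv.max()          # bracket the water level
    while hi - lo > tol:
        level = 0.5 * (lo + hi)
        power = np.clip(level - inv, 0.0, None)
        if power.sum() > p_total:
            hi = level
        else:
            lo = level
    return np.clip(lo - inv, 0.0, None)

gains = np.array([2.0, 0.9, 0.4, 0.1])         # effective SNR per unit power (invented)
p = waterfilling(gains, p_total=4.0)
rate = np.log2(1.0 + gains * p).sum()
print("power per subchannel:", np.round(p, 3), "sum =", round(p.sum(), 3))
print("achievable sum rate:", round(rate, 3), "bit/s/Hz")
```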
APA, Harvard, Vancouver, ISO, and other styles
6

Soua, Ridha. "Wireless sensor networks in industrial environment : energy efficiency, delay and scalability." Electronic Thesis or Diss., Paris 6, 2014. http://www.theses.fr/2014PA066029.

Full text
Abstract:
Certaines applications industrielles nécessitent des délais de collecte déterministes et bornés, nous nous concentrons sur l'allocation conjointe de slots temporels et de canaux sans conflit qui minimisent la durée de collecte. Cette allocation permet aux noeuds de dormir dans n'importe quel slot où ils ne sont pas impliqués dans des transmissions. Nous calculons le nombre minimal de slots temporels nécessaire pour compléter la collecte de données brute pour un puits équipé de plusieurs interfaces radio et des demandes de trafic hétérogènes. Nous donnons également des ordonnancements optimaux qui permettent d'atteindre ces bornes optimales. Nous proposons ensuite MODESA, un algorithme centralisé d'allocation conjointe de slots et de canaux. Nous montrons l'optimalité de MODESA dans des topologies particulières. Par les simulations, nous montrons que MODESA surpasse TMCP , un ordonnancement centralisé à base de sous-arbre. Nous améliorons MODESA avec différentes stratégies d'allocation de canaux. En outre , nous montrons que le recours à un routage multi-chemins réduit le délai de collecte.Néanmoins, l'allocation conjointe de slot et de canaux doit être capable de s'adapter aux changements des demandes des noeuds (des alarmes, des demandes de trafic supplémentaires temporaires). Nous proposons AMSA , une solution d'assignation conjointe de slots et de canaux basée sur une technique incrémentale. Pour aborder la question du passage à l'échelle, nous proposons, WAVE , une solution d'allocation conjointe de slots et de canaux qui fonctionne en mode centralisé ou distribué. Nous montrons l'équivalence des ordonnancements fournis par les deux modes
Some industrial applications require deterministic and bounded gathering delays. We focus on the joint time slot and channel assignment that minimizes the time of data collection and provides conflict-free schedules. This assignment allows nodes to sleep in any slot where they are not involved in transmissions. Hence, these schedules save the energy budget of sensors. We calculate the minimum number of time slots needed to complete raw-data convergecast for a sink equipped with multiple radio interfaces and heterogeneous node traffic. We also give optimal schedules that achieve the optimal bounds. We then propose MODESA, a centralized joint slot and channel assignment algorithm. We prove the optimality of MODESA in specific topologies. Through simulations, we show that MODESA is better than TMCP, a centralized subtree-based scheduling algorithm. We improve MODESA with different strategies for channel allocation. In addition, we show that the use of multi-path routing reduces the time of data collection. Nevertheless, the joint time slot and channel assignment must be able to adapt to changing traffic demands of the nodes (alarms, additional requests for temporary traffic). We propose AMSA, an adaptive joint time slot and channel assignment based on an incremental technique. To address the issue of scalability, we propose WAVE, a scheduling algorithm for convergecast that operates in centralized or distributed mode. We show the equivalence of the schedules provided by the two modes.
APA, Harvard, Vancouver, ISO, and other styles
7

Kumar, Abhishek. "A tolerance allocation framework using fuzzy comprehensive evaluation and decision support processes." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37212.

Full text
Abstract:
Tolerances play an important role in product fabrication. Tolerances impact the needs of the designer and the manufacturer. Engineering designers are concerned with the impact of tolerances on the variation of the output, while manufacturers are more concerned with the cost of fitting the parts. Traditional tolerance control methods do not take both of these needs into account. In this thesis, the author proposes a framework that overcomes the drawbacks of the traditional tolerance control methods and reduces subjectivity via fuzzy set theory and decision support systems (DSS). The factors that affect the manufacturing cost of a part (geometry, material, etc.) are fuzzy (i.e., subjective) in nature, with no numerical measure. Fuzzy comprehensive evaluation (FCE) is utilized in this thesis as a method of quantifying these fuzzy (i.e., subjective) factors. In the FCE process, the weighted importance of each factor affects the manufacturing cost of the part. There is no systematic method of calculating the importance weights. This brings about a need for decision support in the evaluation of the weighted importance of each factor. The combination of FCE and DSS, in the form of conjoint analysis (CA), is used to reduce subjectivity in the calculation of machining cost. Taguchi's quality loss function is considered in this framework to reduce the variation in the output. The application of the framework is demonstrated with three practical engineering applications. Tolerances are allocated for three assemblies: a friction clutch, an accumulator O-ring seal and a Power Generating Shock Absorber (PGSA), using the proposed framework. The output performances of the PGSA and the clutch are affected by the allocated tolerances. With the proposed framework, a reduction in the variation of output performance is observed for the clutch and the PGSA. The use of CA is also validated by checking the efficiency of the final tolerance calculation with and without the use of CA.
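A minimal fuzzy comprehensive evaluation step, of the kind combined with conjoint analysis in this framework, can be sketched as a weight vector composed with a membership matrix. The factors, grades and all numbers below are assumed for illustration.

```python
# Minimal fuzzy comprehensive evaluation (FCE) sketch for rating how costly a
# part is to machine. In the thesis the factor weights come from conjoint
# analysis; here they are simply assumed. All numbers are illustrative.
import numpy as np

factors = ["geometry complexity", "material machinability", "surface finish"]
grades = ["low cost", "medium cost", "high cost"]

# Membership matrix R: degree to which each factor supports each cost grade.
R = np.array([
    [0.1, 0.3, 0.6],    # geometry: mostly pushes toward high cost
    [0.5, 0.4, 0.1],    # material: fairly easy to machine
    [0.2, 0.5, 0.3],    # surface finish
])

# Importance weights W (sum to 1); assumed here, conjoint-derived in the framework.
W = np.array([0.5, 0.2, 0.3])

# Weighted-average composition B = W . R, then normalization.
B = W @ R
B = B / B.sum()
for g, b in zip(grades, B):
    print(f"{g:>11s}: {b:.2f}")
print("overall assessment:", grades[int(np.argmax(B))])
```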
APA, Harvard, Vancouver, ISO, and other styles
8

Bourg, Salomé. "The evolution of mechanism underlying the allocation of resources and consequences on the shape of trade-offs in multicellular organisms." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1266.

Full text
Abstract:
Pour se développer, survivre ou se reproduire, les organismes ont besoin d’énergie, généralement acquise par l’alimentation. Or cette ressource alimentaire est présente en quantité fluctuante et limitée dans l’environnement, obligeant les êtres vivants à faire des compromis et à répartir leur énergie entre différentes fonctions. Ces compromis évolutifs sont visibles à l’échelle d’une population sous la forme d’une relation négative entre traits appelée trade-off. Les trade-offs ont longtemps été considérés comme étant uniquement la résultante d’une allocation différentielle des ressources et comme étant immuables. Ainsi, allouer davantage d’énergie à un trait, la survie par exemple, réduit nécessairement la part pouvant être redistribuée aux autres traits tel que la fécondité ou la croissance. Or l’allocation différentielle des ressources est un processus régulé par un mécanisme endocrinien, lui-même codé génétiquement et donc capable d’évoluer. L’objectif de cette thèse a été de comprendre, de manière théorique, comment l’évolution du mécanisme endocrinien impacte la forme des trade-offs et comment la forme des trade-offs elle-même évolue. Pour ce faire, j’ai développé des modèles évolutifs où l’allocation des ressources est régie par un système endocrinien. Ce système peut évoluer, sous l’effet de mutations impactant l’expression et la conformation des hormones et des récepteurs qui composent ce système endocrinien. J’ai ainsi pu montrer que les relations négatives entre traits peuvent évoluer et que leur forme dépend fortement d’un paramètre rarement considéré : le coût du stockage. Dans un second temps, j’ai étudié l’impact de la variabilité temporelle dans l’abondance d’une ressource alimentaire sur l’allocation différentielle et les mécanismes endocriniens sous–jacents. Mon projet de thèse comprend un troisième volet, complémentaire à la partie théorique, qui s’attache à tester certaines prédictions empiriquement. J’ai mené une expérience de sélection artificielle dans laquelle je contrôlais la topologie d’un paysage de fitness, permettant de sélectionner des combinaisons de trait n’appartenant pas à la relation phénotypique habituellement observée. Cette expérience, conduite chez Drosophila melanogaster durant 10 générations, a montré que l’évolution peut effectivement se produire dans ce contexte, remettant partiellement en cause notre compréhension des mécanismes sous-tendant l’expression des traits phénotypiques
In order to grow, survive or reproduce, all organisms need energy, usually acquired through diet. However, this food resource is present both in fluctuating and limited quantities in the environment, forcing living beings to compromise and thus to divide their energy between their different functions. These evolutionary compromises, visible at the scale of a population in the form of a negative relationship between traits, are called trade-off. Trade-offs have long been considered as the result of a differential resource allocation and as immutable. Therefore, allocating more energy to a trait such as survival necessarily reduces the amount that can be redistributed to other traits, such as fecundity or growth. It is noteworthy that the differential allocation of resources is a process regulated by an endocrine mechanism, itself genetically coded and thereby able to evolve. The aim of my PhD thesis was to understand, theoretically, (i) how the evolution of the endocrine mechanism impacts the shape of trade-offs and (ii) how the shape of trade-offs itself evolves.To do so, I first developed evolutionary models where the allocation of resources is governed by an endocrine system. This system can evolve under the effect of mutations that impact both the expression and the conformation of hormones and receptors constituting this endocrine system. Thanks to this model, I show that the negative relationships between traits can evolve and that their shape strongly depends on a parameter rarely considered: the cost of storage. In a second step, I studied the impact of temporal variability in food abundance on the endocrine mechanisms responsible for the differential allocation of resources.Lastly, my thesis project includes a component complementary to the theoretical part, which attempts to empirically test certain of the expressed predictions. I conducted an artificial selection experiment in which I controlled the topology of a fitness landscape, thus allowing to select combinations of traits not belonging to the phenotypic relationship usually observed. This experiment, implemented in Drosophila melanogaster for 10 generations, has shown that evolution can indeed occur in this context, thereby partially challenging our understanding of the mechanisms underlying the expression of phenotypic traits
APA, Harvard, Vancouver, ISO, and other styles
9

Parrein, Benoît. "Description multiple de l'information par transformation Mojette." Phd thesis, Université de Nantes, 2001. http://tel.archives-ouvertes.fr/tel-00300613.

Full text
Abstract:
Scalable representation of information has become essential today to cope with the heterogeneity of an interconnected network such as the Internet. To this end, source coding adopts a multi-resolution approach that can progressively deliver the content of a request to a user. However, because these schemes assume end-to-end management of the priorities thus established during transmission, they remain only crudely adapted to environments with packet losses and no guaranteed quality of service.
Multiple description coding offers an alternative to hierarchical transmission of information by breaking the scalability of the source at the edge of the channel. In this thesis, we propose an original multiple description method that provides differentiated protection of each hierarchical level of the source according to the dynamic properties of the transmission channel.
The Mojette transform (an exact discrete Radon transform) is a unitary transform that splits a volume of data into a more or less redundant set of equivalent projections. The evolution of this type of operator, initially used in a continuous space for tomographic reconstruction, extends the concept of image support to that of a geometric buffer for multimedia data. This multiple description coding, generalized to N channels, allows the initial buffer to be reconstructed deterministically from subsets of projections whose number characterizes the protection level. The scheme is particularly well suited to packet transport without extensible integrity control of the transmission channel. In this case, the hierarchy of the source is conveyed in a form transparent to the channel via undifferentiated descriptions.
The coding is evaluated by comparing the resulting rates with those of an MDS (Maximum Distance Separable) code, which provides the optimal solution in terms of the number of symbols needed for decoding. Relaxing the MDS properties into a (1+ε)MDS code with the Mojette transform requires a slight rate increase in exchange for reduced complexity.
Application to image compression schemes concretely validates the possible adaptation of current sources to a best-effort channel. Use in a distributed environment (micro-payment, distributed storage of multimedia data) further illustrates secure sharing of information.
As perspectives of this work, we addressed the integration of this method into a scalable multimedia transmission protocol and studied a probabilistic version of the system.
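For readers unfamiliar with the Mojette transform, the sketch below computes forward Dirac Mojette projections of a small image. The reconstruction from subsets of projections and the (1+ε)MDS packetization discussed in the thesis are not shown, and the data are random.

```python
# Forward Dirac Mojette transform sketch: each projection, defined by a coprime
# direction (p, q), sums the pixels lying on the discrete lines b = p*l - q*k.
# Reconstruction from a sufficient subset of projections is not shown.
import math
import numpy as np

def mojette_projection(image, p, q):
    assert math.gcd(abs(p), abs(q)) == 1       # projection directions must be coprime
    l, k = np.indices(image.shape)             # l = row index, k = column index
    b = p * l - q * k
    b = b - b.min()                            # shift bin indices so they start at 0
    bins = np.zeros(b.max() + 1)
    np.add.at(bins, b, image)                  # accumulate each pixel into its bin
    return bins

rng = np.random.default_rng(42)
img = rng.integers(0, 10, size=(4, 4)).astype(float)
for (p, q) in [(1, 0), (0, 1), (1, 1), (-1, 1)]:
    proj = mojette_projection(img, p, q)
    print(f"(p, q)=({p:>2d},{q}): {len(proj)} bins, sum={proj.sum():.0f}")
print("each projection conserves the total image mass:", img.sum())
```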
APA, Harvard, Vancouver, ISO, and other styles
10

Ouro-Bodi, Ouro-Gnaou. "Les Etats et la protection internationale de l'environnement : la question du changement climatique." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0228/document.

Full text
Abstract:
Le changement climatique est devenu aujourd'hui le fléau environnemental qui préoccupe et mobilise le plus la communauté internationale. L'aboutissement de cette mobilisation générale reste sans doute la mise en place du régime international de lutte contre le changement climatique dont la Convention-cadre des Nations Unies sur le changement climatique et le Protocole de Kyoto constituent les bases juridiques. Ce régime innove en ce qu'il fixe des engagements quantifiés de réduction des émissions de gaz à effet de serre pour les États pollueurs, mais aussi en ce qu'il instaure des mécanismes dits de « flexibilité » dont la mise en oeuvre est assortie d'un contrôle original basé sur un Comité dit de « l'observance ». Mais, en dépit de toute cette production normative, il est regrettable de constater aujourd'hui que le régime international du climat est un véritable échec. En effet, si la mobilisation des États ne fait aucun doute, en revanche, les mêmes États qui ont volontairement accepté de s'engager refusent délibérément d'honorer leurs engagements pour des raisons essentiellement politiques, économiques et stratégiques. Ce travail ambitionne donc de lever le voile sur les causes de cet échec en dressant un bilan mitigé de la première période d'engagement de Kyoto qui a pris fin en 2012, et propose des perspectives pour un régime juridique du climat post-Kyoto efficient et efficace, en mesure d'être à la hauteur des enjeux.
Climate change has become the environmental scourge that most concerns and mobilizes the international community. The outcome of this mobilization is probably the implementation of the international climate change regime, for which the Climate Convention and the Kyoto Protocol are the legal bases. This regime is innovative in that it sets quantified reduction commitments for greenhouse gas (GHG) emissions for polluting states, but also in that it establishes so-called "flexibility" mechanisms whose implementation is accompanied by a control based on a so-called "compliance" Committee. But despite all this normative production, it is regrettable that today the international climate regime is a real failure. Indeed, while the mobilization of states is beyond doubt, the same states that voluntarily agreed to commit deliberately refuse to honour their commitments for essentially political, economic and strategic reasons. This work therefore aims to shed light on the causes of this failure by drawing up a mixed assessment of the first Kyoto commitment period, which ended in 2012, and offers prospects for an efficient and effective post-Kyoto legal climate regime, able to meet the challenges.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Allocation conjointe"

1

Rao, Vithala R., and Henrik Sattler. "Measurement of Price Effects with Conjoint Analysis: Separating Informational and Allocative Effects of Price." In Conjoint Measurement, 47–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-24713-5_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rao, Vithala R., and Henrik Sattler. "Measurement of Price Effects with Conjoint Analysis: Separating Informational and Allocative Effects of Price." In Conjoint Measurement, 47–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/978-3-662-06392-7_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rao, Vithala R., and Henrik Sattler. "Measurement of Price Effects with Conjoint Analysis: Separating Informational and Allocative Effects of Price." In Conjoint Measurement, 47–66. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-662-06395-8_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rao, Vithala R., and Henrik Sattler. "Measurement of Price Effects with Conjoint Analysis: Separating Informational and Allocative Effects of Price." In Conjoint Measurement, 31–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-71404-0_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Daum, Diane L., and Jennifer A. Stoll. "Employee Preferences." In Employee Surveys and Sensing, 153–70. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190939717.003.0010.

Full text
Abstract:
Understanding and delivering on employee preferences results in real business outcomes, such as more effective hiring, decreased attrition, and stronger customer service. The authors begin with an introduction to the literature on employee preferences, especially as related to the employee value proposition (EVP), employer branding, and person–organization and person–job fit. They advocate using direct preference measurement techniques such as ranking, point-allocation exercises, and conjoint surveys that require respondents to make trade-offs that reveal what matters most to them and supplementing these with qualitative techniques such as interviews, focus groups, and open-ended comments to provide additional context. The authors emphasize the importance of using the information collected to ensure that the EVP supports the organization’s strategy and will be credible to employees and candidates, while conveying what differentiates them from talent competitors.
APA, Harvard, Vancouver, ISO, and other styles
6

Annas, George J. "Minerva v. National Health Agency, 53 U.S. 2d 333 (2020)." In Standard Of Care, 218–33. Oxford University Press, New York, NY, 1993. http://dx.doi.org/10.1093/oso/9780195072471.003.0018.

Full text
Abstract:
In 2016 the National Health Agency ("the Agency") promulgated regulations which provided for the allocation of artificial hearts in the United States under the authority of the National Health Insurance Act of 1996 (P.L. 104-602). The regulations prohibited the manufacture, sale, or implantation of an artificial heart without a permit from the Agency; prohibited individual purchasers from being recipients of artificial hearts without a permit from the Agency; and provided that permits to recipients be issued only by the Agency's computer, which would pick qualified applicants at random from a master list. This regulation is challenged by P. Minerva, a thoracic surgeon, and two of her patients, Z. Themis and Z. Dike. Themis did not meet the Agency's qualification standards as he is less than fifteen years old; Dike, while meeting the standards, has not yet been chosen by the computer. The plaintiffs challenge the regulations as a [...] The separation of Siamese twins with conjoined hearts, while a rare event, has received some ethical and legal discussion. Most of it, unfortunately, has been characterized by fuzzy thinking and flawed analogies. Perhaps this is inevitable. Siamese twins have always seemed bizarre to us. Their very name comes from history's most famous such twins, Eng and Chang Bunker. Born in Siam in 1811, they lived until 1874 and were exhibited around the world by P. T. Barnum. The second most famous are Amos and Eddie Smith, fictional Siamese twins who are the subjects of Judith Rossner's challenging novel, Attachments, and who, in the novel, also spent some time with Ringling Brothers Barnum & Bailey.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Allocation conjointe"

1

Kumar, Abhishek, Lorens Goksel, and Seung-Kyum Choi. "Tolerance Allocation of Assemblies Using Fuzzy Comprehensive Evaluation and Decision Support Processes." In ASME 2010 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/detc2010-29023.

Full text
Abstract:
Advancements in manufacturing technology significantly impact the design process. The ability to manufacture assembly components with specific tolerances has increased the need for tolerance allocation. This research proposes a framework that overcomes the drawbacks of the traditional tolerance control methods, and reduces subjectivity by using fuzzy set theory and decision support processes. The combination of fuzzy comprehensive evaluation and conjoint analysis facilitate the reduction of subjectivity in the tolerance control process. The application of the framework is demonstrated with two practical engineering problems. Tolerances are allocated for a clutch assembly and an O-ring seal in an accumulator.
APA, Harvard, Vancouver, ISO, and other styles
2

Belahcène, Khaled, Vincent Mousseau, and Anaëlle Wilczynski. "Combining Fairness and Optimality when Selecting and Allocating Projects." In Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21). California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/6.

Full text
Abstract:
We consider the problem of the conjoint selection and allocation of projects to a population of agents, e.g. students are assigned papers and shall present them to their peers. The selection can be constrained either by quotas over subcategories of projects, or by the preferences of the agents themselves. We explore fairness and optimality issues and refine the analysis of the rank-maximality and popularity optimality concepts. We show that they are compatible with reasonable fairness requirements related to rank-based envy-freeness and can be adapted to select globally good projects according to the preferences of the agents.
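Two of the notions used in this paper can be illustrated with a toy check: the rank signature compared lexicographically for rank-maximality, and a rank-based envy test for a given allocation. The preferences and allocation below are invented, and this is not the authors' algorithm.

```python
# Toy illustration of two notions from the paper: the rank signature used in
# rank-maximality comparisons, and a simple rank-based envy check for a given
# selection/allocation of projects. Invented example data only.
def rank_signature(prefs, alloc):
    """prefs[a] is agent a's ranking (best first); alloc[a] is the project assigned to a."""
    n_ranks = max(len(p) for p in prefs.values())
    sig = [0] * n_ranks
    for a, proj in alloc.items():
        sig[prefs[a].index(proj)] += 1
    return tuple(sig)            # compare lexicographically: more first choices is better

def rank_envy_pairs(prefs, alloc):
    """Pairs (a, b) where agent a ranks b's project strictly above its own."""
    envious = []
    for a, pa in alloc.items():
        for b, pb in alloc.items():
            if a != b and prefs[a].index(pb) < prefs[a].index(pa):
                envious.append((a, b))
    return envious

prefs = {"ann": ["p1", "p2", "p3"], "bob": ["p2", "p1", "p3"], "eve": ["p1", "p3", "p2"]}
alloc = {"ann": "p1", "bob": "p2", "eve": "p3"}
print("rank signature:", rank_signature(prefs, alloc))   # here: (2, 1, 0)
print("rank-based envy pairs:", rank_envy_pairs(prefs, alloc))
```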
APA, Harvard, Vancouver, ISO, and other styles