Scientific literature on the topic "Composition and compatibility of web services"

Create an accurate citation in APA, MLA, Chicago, Harvard, and several other styles


Browse the thematic lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Composition and compatibility of web services".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic citation for the selected source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online when this information is included in the metadata.

Journal articles on the topic "Composition and compatibility of web services"

1. Lécué, Freddy, Alain Léger, and Ramy Ragab Hassen. "Les web services sémantiques, automate et intégration. II. Composition de services web, technologies et plateformes, applications industrielles." Techniques et sciences informatiques 28, no. 2 (February 2009): 263–93. http://dx.doi.org/10.3166/tsi.28.263-293.

2. Rouached, Mohsen, Walid Fdhila, and Claude Godart. "Web Services Compositions Modelling and Choreographies Analysis." International Journal of Web Services Research 7, no. 2 (April 2010): 87–110. http://dx.doi.org/10.4018/jwsr.2010040105.

Abstract:
In Rouached et al. (2006) and Rouached and Godart (2007), the authors described the semantics of WSBPEL by mapping each of the WSBPEL (Arkin et al., 2004) constructs to the Event Calculus (EC) algebra and building a model of the process behaviour. With these mapping rules, the authors describe a modelling approach for a process defined as a single Web service composition. However, this modelling is limited to a local view and can only capture the behaviour of a single process. The authors extend the semantic mapping to include Web service composition interactions by modelling Web service conversations and their choreography. This paper elaborates the models to support a view of interacting Web service compositions, extending the mapping from WSBPEL to EC and including Web service interfaces (WSDL) for use in modelling between services. Verification and validation techniques are also presented, with an automated induction-based theorem prover used as the verification back-end.
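
As an aside for readers new to this line of work, here is a toy sketch (ours, not the authors' mapping rules: the activity names and the simplified Happens/ordering encoding are invented) of the general idea behind mapping a WSBPEL control-flow construct to Event Calculus-style facts:

```python
# Minimal sketch: encoding a BPEL <sequence> of <invoke> activities as
# Event Calculus-style "Happens" facts plus symbolic ordering constraints,
# in the spirit of the WSBPEL -> EC mapping described above.
from dataclasses import dataclass

@dataclass(frozen=True)
class Happens:
    event: str   # e.g. "invoke(checkStock)"
    time: str    # symbolic time point, e.g. "t1"

def map_sequence(activities):
    """Map a BPEL <sequence> to EC facts plus t_i < t_{i+1} ordering axioms."""
    facts = [Happens(f"invoke({a})", f"t{i+1}") for i, a in enumerate(activities)]
    ordering = [(f.time, g.time) for f, g in zip(facts, facts[1:])]  # ti < tj
    return facts, ordering

facts, ordering = map_sequence(["checkStock", "reserveItem", "confirmOrder"])
for f in facts:
    print(f)
print("ordering constraints:", ordering)
```
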
3. Pellier, Damien, and Humbert Fiorino. "Un modèle de composition automatique et distribuée de services web par planification." Revue d'intelligence artificielle 23, no. 1 (February 24, 2009): 13–46. http://dx.doi.org/10.3166/ria.23.13-46.

4. Abraham, Ajith, Sung-Bae Cho, Thomas Hite, and Sang-Yong Han. "Special Issue on Web Services Practices." Journal of Advanced Computational Intelligence and Intelligent Informatics 10, no. 5 (September 20, 2006): 703–4. http://dx.doi.org/10.20965/jaciii.2006.p0703.

Abstract:
Web services – a new breed of self-contained, self-describing, modular applications published, located, and invoked across the Web – handle functions from simple requests to complicated business processes. They are defined as network-based application components with a service-oriented architecture (SOA) using standard interface description languages and uniform communication protocols. SOA enables organizations to grasp and respond to changing trends and to adapt their business processes rapidly without major changes to the IT infrastructure. The Inaugural International Conference on Next-Generation Web Services Practices (NWeSP'05) attracted researchers who are also the world's most respected authorities on the semantic Web, Web-based services, and Web applications and services. NWeSP'05 was held in cooperation with the IEEE Computer Society Task Force on Electronic Commerce, the Technical Committee on Internet, and the Technical Committee on Scalable Computing. This special issue presents eight papers focused on different aspects of Web services and their applications. Papers were selected based on fundamental ideas and concepts rather than the thoroughness of techniques employed. The papers are organized as follows: Taher et al. present the first paper, on a Quality of Service Information and Computational framework (QoS-IC) supporting QoS-based service selection for SOA. The framework's functionality is expanded using a QoS constraints model that establishes an association relationship between different QoS properties and is used to govern QoS-based service selection in the underlying algorithm. Using a prototype implementation, the authors demonstrate how QoS constraints improve QoS-based service selection and save consumers valuable time. Due to the complex infrastructure of web applications, response times perceived by clients may be significantly longer than desired. To overcome some of the current problems, Vilas et al., in the second paper, propose a cache-based extension that enhances the current web services architecture, which is mainly based on program-logic or protocol-dependent optimization. In the third paper, Jo and Yoo present an authorization model for securing XML sources on the Web. One disadvantage of existing access control is that the DOM tree must be loaded into memory while all XML documents are parsed to generate it, so a great deal of memory is consumed by the repetitive tree searches needed to authorize access to every node in the DOM tree, and the complex authorization evaluation process required thus lowers system performance. Existing access control also fails to consider information structure and semantics sufficiently due to basic HTML limitations. The authors overcome some of these limitations in the proposed model. In the fourth paper, Jung and Cho propose a novel behavior-network-based method for Web service composition. The behavior network selects services automatically through internal and external links with environmental information from sensors and goals. An optimal service is selected at each step, resulting in a globally optimal service sequence for achieving preset goals. The authors detail experimental results for the proposed model by comparing them with a rule-based system and user tests.
Kong et al. present an efficient method in the fifth paper for merging heterogeneous ontologies. No ontology-building standard currently exists, and the many ontology-building tools available are based on different ontology languages, mostly focusing on how to create, edit, and infer the ontology efficiently; even ontologies about the same domain differ because ontology experts hold different viewpoints. For these reasons, interoperability between ontologies is very low. The authors propose merging heterogeneous domain ontologies by overcoming some of the above limitations. In the sixth paper, Chen and Che provide a polynomial-time tree pattern query minimization algorithm whose efficiency stems from two key observations: (i) inherent redundant "components" usually exist inside the rudimentary query provided by the user, and (ii) nonredundant nodes may become redundant when constraints such as co-occurrence and required child/descendant are given. They show that the algorithm obtained by first augmenting the input tree pattern using constraints and then applying minimization invariably finds a unique minimal equivalent to the original query. Chen and Che present a polynomial-time algorithm for tree pattern query (TPQ) minimization without XML constraints in the seventh paper. The two-part algorithm is a dynamic programming strategy for finding all matching subtrees within a TPQ; it consists of one part for subtree recognition and a second for subtree deletion. In the last paper, Bagchi et al. present the mobile distributed virtual memory (MDVM) concept and architecture for cellular networks containing server-groups (SG). They detail a two-round randomized distributed algorithm to elect a unique leader and co-leader of the SG that is free of any assumptions about network topology and buffer space limitations, and that is based on dynamically elected coordinators, eliminating single points of failure. As guest editors, we thank all authors featured in this special issue for their contributions and the referees for critically evaluating the papers within the short time allotted. We sincerely believe that readers will share our enjoyment of this special issue and find the information it presents both timely and useful.
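
As a side note, the QoS-based selection pattern summarized above for the first paper can be illustrated with a small sketch (ours, not the QoS-IC framework; all service names, metrics, and weights are invented): filter candidates by hard QoS constraints, then rank the remainder by a weighted score.

```python
# Toy QoS-based service selection: hard-constraint filtering + weighted scoring.
candidates = [
    {"name": "svcA", "latency_ms": 120, "availability": 0.999, "cost": 0.02},
    {"name": "svcB", "latency_ms": 40,  "availability": 0.990, "cost": 0.05},
    {"name": "svcC", "latency_ms": 300, "availability": 0.999, "cost": 0.01},
]

def select(candidates, max_latency, min_availability, weights):
    # Keep only services satisfying the hard QoS constraints.
    feasible = [c for c in candidates
                if c["latency_ms"] <= max_latency
                and c["availability"] >= min_availability]
    # Lower latency/cost is better; higher availability is better.
    def score(c):
        return (weights["latency"] * (1 - c["latency_ms"] / max_latency)
                + weights["availability"] * c["availability"]
                + weights["cost"] * (1 - c["cost"]))
    return max(feasible, key=score) if feasible else None

best = select(candidates, max_latency=200, min_availability=0.99,
              weights={"latency": 0.5, "availability": 0.3, "cost": 0.2})
print(best["name"])  # svcB: svcC fails the latency constraint, svcB outscores svcA
```
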
5. Ramachandra, T. V. "Innovative ecological approaches to ensure clean and adequate water for all." Journal of Environmental Biology 43, no. 3 (May 2, 2022): i–ii. http://dx.doi.org/10.22438/jeb/43/3/editorial.

Abstract:
The Western Ghats, a range of ancient hills, extends between 8° N and 21° N latitude and 73° E and 77° E longitude (from the tip of peninsular India at Kanyakumari to Gujarat). The Western Ghats runs parallel to the west coast of India, covering approximately 160,000 sq. km, which constitutes less than 5% of India's geographical extent. Numerous streams originate in the Western Ghats, draining millions of hectares and ensuring water and food security for 245 million people; hence the range is aptly known as the water tower of peninsular India (Ramachandra and Bharath, 2019; Bharath et al., 2021). The region is endowed with diverse ecological regions depending on altitude, latitude, rainfall, and soil characteristics. The Western Ghats are among the eight hottest hotspots of biodiversity and one of the 36 global biodiversity hotspots, with exceptional endemic flora and fauna. The natural forests of the Western Ghats provide various goods and services and harbour 4,600+ species of flowering plants (38% endemics), 330 butterflies (11% endemics), 156 reptiles (62% endemics), 508 birds (4% endemics), 120 mammals (12% endemics), 289 fishes (41% endemics), and 135 amphibians (75% endemics). The Western Ghats are gifted with enormous natural resource potential, yet sustainable development founded on the prudent management of ecosystems is not yet a reality. Various unplanned developmental programs, proclaimed to function on sustainability principles, have only been disrupting the complex web of life, impacting ecosystems and causing a decline in overall productivity across four major sectors: forestry, fisheries, agriculture, and water (Ramachandra and Bharath, 2019). The prevalence of barren hilltops, conversion of perennial streams to intermittent or seasonal streams, frequent floods and droughts, changes in water quality, soil erosion and sedimentation, the decline of endemic flora and fauna, etc., highlights the consequences of unplanned developmental activities, with a huge loss to the regional economy during the last century. Development goals need to be ecologically, economically, and socially sustainable, which can be achieved through the conservation and prudent management of ecosystems. Sustainability implies equilibrium between society, ecosystem integrity, and the sustenance of natural resources. Water sustenance in streams and rivers depends on the integrity of the catchment (watershed), as vegetation helps retard the velocity of water, allowing impoundment and the recharge of groundwater through infiltration (Ramachandra et al., 2020). As water moves through the terrestrial ecosystem, part of it percolates (recharging groundwater resources and contributing to sub-surface flow during post-monsoon seasons), while another fraction returns to the atmosphere through evaporation and transpiration. Forests with native vegetation act as a sponge, retaining and regulating water transfer between land and the atmosphere. The mechanism by which vegetation controls the flow regime depends on various bio-physiographic characteristics, namely the type of vegetation, species composition, maturity, density, root density and depth, hydro-climatic condition, etc. Roots of vegetation help (i) bind the soil and (ii) improve soil structure by enhancing the stability of aggregates, which provide habitat for diverse microfauna and flora, leading to higher porosity of the soil and thereby creating conduits for infiltration through the soil.
An undisturbed native forest has a consistent hydrologic regime, with sustained flows during lean seasons. Native vegetation with an assemblage of diverse native species helps recharge groundwater, mitigate floods, and support other hydro-ecological processes (Ramachandra et al., 2020; Bharath et al., 2021). Hence, it is necessary to safeguard and maintain native forest patches and to restore existing degraded lands in order to sustain the hydrological regime, which caters to biotic (ecological and societal) demands. A comparative assessment of people's livelihoods against soil water properties and water availability in sub-catchments of four major river basins in the Western Ghats reveals that streams in catchments with more than 60% native vegetation are perennial, with higher soil moisture (Ramachandra et al., 2020). The higher soil moisture, due to water availability during all seasons, facilitates the farming of commercial crops with higher economic returns, unlike for farmers who face water crises during the lean season. In contrast, streams are intermittent (6-8 months of water) in catchments dominated by monoculture plantations and seasonal (4 months, the monsoon period) in catchments with vegetation cover lower than 30%. The study highlights the need to maintain ecosystem integrity to sustain water. Also, the lower incidence of COVID-19 in villages with native forests emphasizes the role of ecosystems in maintaining the health of biota. The need to maintain native vegetation in the catchment, and its potential to support people's livelihoods through water availability at local and regional levels, is evident from revenues of Rs. 2,74,658 ha-1 yr-1 (in villages with perennial streams, where farmers grow cash crops or three crops a year due to water availability), Rs. 1,50,679 ha-1 yr-1 (in villages with intermittent streams), and Rs. 80,000 ha-1 yr-1 (in villages with seasonal streams). Also, crop yield is higher (at least 1.5 to 1.8 times) in agricultural fields near native forests, due to efficient pollination by the diverse pollinators in their vicinity. The study emphasizes the need to maintain the natural flow regime and to manage watersheds prudently so as to (i) sustain higher faunal diversity, (ii) maintain the health of water bodies, and (iii) sustain people's livelihoods with higher revenues. Hence, the premium should be on conserving forests of native species to sustain water and biotic diversity in water bodies, vital for food security. There still exists a chance to restore lost natural ecosystems through appropriate ecological restoration approaches, with location-specific conservation and management practices, to ensure adequate and clean water for all. GDP (Gross Domestic Product), a measure of the current economic well-being of a population based on the market exchange of material well-being, registers resource depletion or degradation only as a positive gain in the economy and does not represent the decline in these assets (wealth) at all. Thus, the existing GDP growth percentages used as yardsticks to measure development and the well-being of citizens in decision-making processes are substantially misleading, yet they continue to be used. Traditional national accounts need to include resource depletion and degradation due to developmental activities and climate change.
The country should move toward adopting a Green GDP by accounting for the environmental consequences of growth in the conventional GDP, which entails monetizing the services provided by ecosystems and the degradation cost of ecosystems, and accounting for costs caused by climate change. Forest ecosystems are under severe threat from anthropogenic pressures, which are mostly related to the GDP. The appraisal of forest ecosystem services and biodiversity can help clarify trade-offs among conflicting environmental, social, and economic goals in the development and implementation of policies, and improve the management of biodiversity. Natural capital accounting and valuation of ecosystem services reveal that forest ecosystems provide (i) provisioning services (timber, fuelwood, food, NTFP, medicines, genetic materials) of Rs. 2,19,494 ha-1 yr-1, (ii) regulating services (global climate regulation through carbon sequestration, soil conservation and soil fertility, water regulation and groundwater recharge, water purification, pollination, waste treatment, air filtration, local climate regulation) of Rs. 3,31,216 ha-1 yr-1, and (iii) cultural services (aesthetic, spiritual, tourism and recreation, education and scientific research) of Rs. 1,04,561 ha-1 yr-1. The total ecosystem supply value (TESV), an aggregation of provisioning, regulating, and cultural services, amounts to Rs. 6,56,172 ha-1 yr-1, and the Net Present Value (NPV) of one hectare of forest amounts to 16.88 million rupees. NPV helps in estimating ecological compensation when forest lands are diverted to other purposes. The recovery of an ecosystem with respect to its health, integrity, and sustainability is evident from an initiative of planting 500 saplings of 49 native species in a degraded two-hectare landscape (dominated by invasive species) in the early 1990s at the Indian Institute of Science campus (Ramachandra et al., 2016); the region has now transformed into a mini forest with numerous benefits, such as improvements in groundwater at 3-6 m (compared to 30-40 m in 1990), a moderated microclimate (with lower temperature), and numerous fauna (including four families of Slender Loris). While confirming the linkages of hydrology, ecology, and biodiversity, the experiment advocates integrated watershed approaches based on sound ecological and engineering protocols to sustain water and ensure adequate water for all. A well-known and successful model of an integrated wetlands ecosystem (a secondary treatment plant integrated with constructed wetlands and an algae pond) at Jakkur Lake in Bangalore (Ramachandra et al., 2018) provides insights into the optimal treatment of wastewater and the mitigation of pollution. Complete removal of nutrients and chemical contaminants happens when partially treated (secondary treated) sewage passes through the constructed wetlands and the algae pond (sedimentation pond) and undergoes bio-physical and chemical processes. The water in the lake is almost potable, with minimal nutrients and microbial counts. This model has been functioning successfully for the last ten years, after interventions to rejuvenate the lake. The system is one of the self-sustainable ways of lake management and benefits all stakeholders: washing, fishing, irrigation, and local people. Wells in the buffer zone (500 m) now have higher water levels and are free of nutrients (nitrate).
A groundwater quality assessment of 25 wells in the same region during 2005 (before the rejuvenation of Jakkur Lake) had shown higher nitrate values. Adopting this model ensures optimal sewage treatment at decentralized levels, and releasing treated water into the lake also provides nutrient-free, clean groundwater. The Jal Shakti Ministry, Government of India, through the Jal Jeevan Mission, has embarked on the noble and novel mission of providing tap water supply to all rural households and to public institutions in villages, such as schools, health centers, panchayat buildings, etc. The success of this program depends on the availability of water. The imminent threat of acute water scarcity due to climate change and global warming necessitates implementing integrated watershed development (planting native species in the watersheds of water bodies), rainwater harvesting (rooftop harvesting at the individual household level, and retaining rainwater in rejuvenated lakes, which also helps recharge groundwater), and the reuse of wastewater through treatment at decentralized levels (a model similar to Jakkur Lake in Bangalore). These prudent management initiatives at decentralized levels throughout the country would aid in achieving the goal of providing clean and adequate water to local communities.

6. Cobanoglu, Cihan, Muhittin Cavusoglu, and Gozde Turktarhan. "A beginner’s guide and best practices for using crowdsourcing platforms for survey research: The Case of Amazon Mechanical Turk (MTurk)." Journal of Global Business Insights 6, no. 1 (March 2021): 92–97. http://dx.doi.org/10.5038/2640-6489.6.1.1177.

Abstract:
Introduction
Researchers around the globe are utilizing crowdsourcing tools to reach respondents for quantitative and qualitative research (Chambers & Nimon, 2019). Many social science and business journals are receiving studies that utilize crowdsourcing tools such as Amazon Mechanical Turk (MTurk), Qualtrics, MicroWorkers, ShortTask, ClickWorker, and Crowdsource (e.g., Ahn & Back, 2019; Ali et al., 2021; Esfahani & Ozturk, 2019; Jeong & Lee, 2017; Zhang et al., 2017). Even though these tools present a great opportunity for collecting large quantities of data quickly, some challenges must also be addressed. The purpose of this guide is to present the basic ideas behind the use of crowdsourcing for survey research and to provide a primer on best practices that will increase validity and reliability.
What is crowdsourcing research?
Crowdsourcing describes the collection of information, opinions, or other types of input from a large number of people, typically via the internet, for which contributors may or may not receive (financial) compensation (Hargrave, 2019; Oxford Dictionary, n.d.). Within the behavioral sciences, crowdsourcing is defined as the use of internet services for hosting research activities and for creating opportunities for a large population of participants. Applications of crowdsourcing techniques have evolved over the decades, establishing the strong informational power of crowds. The advent of Web 2.0 has expanded the possibilities of crowdsourcing, with new online tools such as online reviews, forums, Wikipedia, Qualtrics, or MTurk, but also other platforms such as Crowdflower and Prolific Academic (Peer et al., 2017; Sheehan, 2018). Crowdsourcing platforms in the age of Web 2.0 use remote labor recruited via the internet to assist employers in completing tasks that cannot be left to machines. Key characteristics of crowdsourcing include payment for workers, their recruitment from any location, and the completion of tasks (Behrend et al., 2011). Crowdsourcing also allows a relatively quick collection of data compared to data collection in the field, and participants are rewarded with an incentive, often financial compensation. Crowdsourcing not only offers a large participation pool but also a streamlined process for study design, participant recruitment, and data collection, as well as an integrated participant compensation system (Buhrmester et al., 2011). Also, compared to traditional marketing firms, crowdsourcing makes it easier to detect possible sampling biases (Garrow et al., 2020). Due to advantages such as reduced costs, diversity of participants, and flexibility, crowdsourcing platforms have surged in popularity among researchers.
Advantages
MTurk is one of the most popular crowdsourcing platforms among researchers, allowing Requesters to submit tasks for Workers to complete (Cummings & Sibona, 2017). MTurk has been used as an online crowdsourcing platform for the recruitment of human subjects for research purposes (Paolacci & Chandler, 2014). Research has also shown MTurk to be a reliable and cost-effective tool, capable of providing representative data for research in the behavioral sciences (e.g., Crump et al., 2013; Goodman et al., 2013; Mason & Suri, 2012; Rand, 2012; Simcox & Fiez, 2014). In addition to its use in social science studies, the platform has been used in marketing, hospitality and tourism, psychology, political science, communication, and sociology contexts (Sheehan, 2018).
To illustrate, between 2012 and 2017, more than 40% of the studies published in the Journal of Consumer Research used crowdsourcing websites for their data collection (Goodman & Paolacci, 2017).
Disadvantages
Although researchers have assessed crowdsourcing platforms as reliable and cost-effective for data collection in the behavioral sciences, they are not free of flaws. One disadvantage is the possibility of unsatisfactory data quality. In fact, the virtual setting of the survey means that the investigator is physically separated from the participant, and this lack of monitoring can lead to data quality issues (Sheehan, 2018). In addition, participants in survey research on crowdsourcing platforms are not always who they claim to be, creating issues of trust in the data provided and, ultimately, in the quality of the research findings (McGonagle, 2015; Smith et al., 2016). A recurrent concern with MTurk workers, for instance, is that they are experienced survey takers (Chandler et al., 2015); this experience is mainly acquired by completing dozens of surveys per day, especially when workers face similar items and scales. Smith et al. (2016) identified two types of problems in data collection using MTurk, namely cheaters and speeders. Compared to Qualtrics, which has strict screening and quality-control processes to ensure that participants are who they claim to be, MTurk appears to be less exacting about its workers. However, a downside of data collection with Qualtrics is the more expensive fees: about $5.00 per questionnaire on Qualtrics, against $0.50 to $1.50 on MTurk (Ford, 2017). Hence, few researchers have been able to conduct surveys and compare respondent pools with Qualtrics or other traditional marketing research firms (Garrow et al., 2020). Another challenge in using MTurk arises when trying to collect a desired number of responses from a population targeted to a specific city or area (Ross et al., 2010). The issues inherent in the selection process of MTurk have been the subject of investigation in several studies (e.g., Berinsky et al., 2012; Chandler et al., 2014; 2015; Harms & DeSimone, 2015; Paolacci et al., 2010; Rand, 2012). Feitosa et al. (2015) pointed out that international respondents may still identify themselves as U.S. respondents through the use of fake addresses and accounts; they found that 5% to 10% of participants identifying themselves as U.S. respondents were actually at overseas locations. Moreover, Babin et al. (2016) found that trap questions allowed researchers to uncover that many respondents change their gender, age, career, or income within the course of a single survey. The issues of (a) experienced workers and the quality control of questions and (b) speeders, which for MTurk can be attributed to the platform being the main source of revenue for a given respondent, remain inherent to crowdsourcing platforms used for research purposes.
Best practices
Some best practices can be recommended for the use of crowdsourcing platforms for data collection. Worker IDs can be matched with IDs from previous studies, allowing researchers to exclude responses from workers who answered previous similar studies (Goodman & Paolacci, 2017). Furthermore, researchers can manually assign qualifications on MTurk prior to data collection (Litman et al., 2015; Park & Park, 2020).
When dealing with experienced workers, it is recommended both to use multiple attention checks and to design the survey so that participants are exposed to the stimuli long enough to address the questions properly (Sheehan, 2018). In this sense, shorter surveys are preferable to longer ones, which tax the participant's concentration and may, in turn, adversely impact the quality of their answers. Most importantly, pretest the survey to make sure that all parts work as expected. Researchers should also keep in mind that in the context of MTurk, the primary measurement instrument is the web interface. Thus, to avoid method biases, researchers should consider whether method factors emerge in the latent measurement models (Podsakoff et al., 2012). As such, time-lagged research designs may be preferred, as predictor and criterion variables can be measured at different points in time or administered on different platforms, such as Qualtrics vs. MTurk (Cheung et al., 2017). In general, the use of crowdsourcing platforms, including MTurk, may be appropriate depending on the research question, and the quality of the data relies on the quality-control strategies used by researchers. Trade-offs between various validity types need to be prioritized according to the research objectives (Cheung et al., 2017). From our experience using crowdsourcing tools for our own research, as editorial team members of several journals and chairs of several conferences, we offer the best practices outlined below.
MTurk Worker (Respondent) Selection:
Researchers should consider their study population before using MTurk for data collection, and use the platform only for appropriate study populations. For example, if the study targets restaurant owners or company CEOs, MTurk workers may not be suitable; however, if the target population is diners, hotel guests, grocery shoppers, online shoppers, students, or hourly employees, a sample from MTurk would be suitable. Researchers should use the selection tools in the software: for example, if you target workers from only one country, exclude responses that come from an internet protocol (IP) address outside the targeted country and report the results in the method section. Researchers should consider the demographics of workers on MTurk, which must reflect the study's target population. For example, if the study focuses on baby boomers' use of technology, then the MTurk sample should include only baby boomers; similarly, the gender balance, racial composition, and income of people on MTurk should mirror the target population. Researchers should use multiple screening tools that identify quality respondents and avoid problematic response patterns. For example, MTurk provides the approval rate for respondents, which reflects how often a respondent's work has been rejected for various reasons (e.g., wrong code entered); we recommend requiring a 90% or higher approval rate. Researchers should include screening questions of different types in different places to make sure the respondents are appropriate for the study. One way is to use knowledge-based questions about the subject: for example, rather than asking "How experienced are you with accounting practices?", a supplemental question such as "Which of the following is a component of an income statement?" should be integrated into a different section of the survey.
Survey Validity:
Researchers should conduct a pilot survey with MTurk workers to identify and fix any potential data quality and programming problems before the entire data set is collected. Researchers can estimate the time required to complete the survey from the pilot study; this average time should be used in calculating the incentive payment for workers, such that the payment equals or exceeds the minimum wage in the targeted country. Researchers should build multiple validity-check tools into the survey. One is to ask attention-check questions such as "Please click on 'strongly agree' in this question" or "What is 2+2? Please choose 5" (Cobanoglu et al., 2016). Even though these attention questions are useful and should be implemented, experienced survey takers or bots identify them easily and answer them correctly, then give random answers to other questions. Instead, we recommend building in more involved validity-check questions. One of the best is asking the same question in different places and in different forms: for example, asking the respondent's age at the beginning of the survey and their year of birth at the end is an effective way to check that they are replying honestly. Exclude all those who answer the same question differently, and report the results of these validity checks in the methodology. Cavusoglu (2019) found that almost 20% of surveys were eliminated due to failure of the validity-check questions embedded in different places and forms in his survey. Researchers should also be aware of internet bots, software that runs automated tasks; some respondents use a bot to reply to surveys. To avoid this, use Captcha verification, which forces respondents to perform random tasks such as moving a bar to a certain area, clicking boxes that contain cars, or checking boxes to verify that the person taking the survey is not a bot. Whenever appropriate, researchers should use the time-limit options offered by online survey tools such as Qualtrics to control the time a survey taker must spend before advancing to the next question; we found this a great tool, especially when respondents should watch a video, read a scenario, or look at a picture before responding to other questions. Researchers should collect data on different days and at different times during the week to obtain a more diverse and representative sample.
Data Cleaning:
Researchers should be aware that some respondents do not read questions; they simply select random answers or type nonsense text. To exclude them from the study, manually inspect the data. Exclude anyone who filled out the survey too quickly: we recommend excluding all responses completed in less than 40% of the average completion time. For example, if it takes 10 minutes to fill out a survey, we exclude everyone who finishes in 4 minutes or less. After separating these two groups, we compared them and found that the speeders' (a.k.a. cheaters') data differed significantly from the regular group's. Researchers should always collect more data than needed; our rule of thumb is to collect 30% more. For example, if 500 clean responses are wanted, collect at least 650, so that the targeted number remains after cleaning the data.
Report the process of cleaning data in the method section of your article, showing the editor and reviewers that you have taken steps to increase the validity and reliability of the survey responses. Calculating a conventional response rate for samples recruited via MTurk is not possible; however, it is possible to calculate an active response rate (Ali et al., 2021), obtained by deducting from the raw responses all those removed by screening and validity checks. For example, if you have 1,000 raw responses and you eliminate 100 responses for coming from IP addresses outside the United States and another 100 for failing the validity-check questions, then your active response rate is 800/1000 = 80%.
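
The cleaning rules described in this abstract translate directly into a short script. The sketch below is ours, not the authors' code; the field names and numbers are invented, but it applies the stated 40%-of-average-time speeder cutoff and the active response rate formula:

```python
# Toy data cleaning for a crowdsourced survey: drop speeders (< 40% of the mean
# completion time), failed validity checks, and out-of-scope IPs, then report
# the active response rate (clean / raw).
responses = [
    {"id": 1, "seconds": 610, "passed_checks": True,  "us_ip": True},
    {"id": 2, "seconds": 150, "passed_checks": True,  "us_ip": True},   # speeder
    {"id": 3, "seconds": 540, "passed_checks": False, "us_ip": True},   # failed check
    {"id": 4, "seconds": 580, "passed_checks": True,  "us_ip": False},  # out of scope
    {"id": 5, "seconds": 700, "passed_checks": True,  "us_ip": True},
]

mean_time = sum(r["seconds"] for r in responses) / len(responses)
clean = [r for r in responses
         if r["seconds"] >= 0.4 * mean_time   # exclude speeders
         and r["passed_checks"]               # exclude failed validity checks
         and r["us_ip"]]                      # exclude out-of-scope IPs

print(f"active response rate: {len(clean) / len(responses):.0%}")  # 40%
```
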
7. Temglit, N., H. Aliane, and M. Ahmed Nacer. "Un modèle de composition des services web sémantiques." Revue Africaine de la Recherche en Informatique et Mathématiques Appliquées Volume 11, 2009 - Special... (September 24, 2009). http://dx.doi.org/10.46298/arima.1928.

Abstract:
The work presented here aims to provide a composition model for semantic web services. The model is based on a semantic representation of the domain concepts handled by web services, namely operations and the static concepts used to describe the static properties of web services. Different levels of abstraction are given to the operation concept to allow gradual access to concrete services. Thus, two composition plans of different granularity (abstract and concrete) are generated. This allows plans already constructed to be reused to meet similar needs, even with modified preferences.
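
To make the two-level plan idea concrete, here is a toy sketch (ours; the registry, operation names, and binding rule are invented) of an abstract plan over operation concepts being bound to concrete services under different preferences:

```python
# An abstract plan over operation concepts, later bound to concrete services,
# so the same abstract plan can be reused when preferences change.
abstract_plan = ["BookFlight", "BookHotel", "PayOnline"]

# Registry mapping each abstract operation to concrete candidate services.
registry = {
    "BookFlight": ["AirFranceWS", "LufthansaWS"],
    "BookHotel":  ["AccorWS", "HiltonWS"],
    "PayOnline":  ["PaypalWS", "StripeWS"],
}

def concretize(plan, preferred):
    """Bind each abstract operation to a preferred candidate, else the first one."""
    return [next((s for s in registry[op] if s in preferred), registry[op][0])
            for op in plan]

# Same abstract plan, two preference sets -> two different concrete plans.
print(concretize(abstract_plan, preferred={"LufthansaWS", "HiltonWS", "StripeWS"}))
# ['LufthansaWS', 'HiltonWS', 'StripeWS']
print(concretize(abstract_plan, preferred=set()))
# ['AirFranceWS', 'AccorWS', 'PaypalWS']  (fallback to first candidates)
```
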
8. Ould Mohamed, Mohamed Salem, Amor Keziou, Hassan Fenniri, and Georges Delaunay. "Nouveau critère de séparation aveugle de sources cyclostationnaires au second ordre." Revue Africaine de la Recherche en Informatique et Mathématiques Appliquées Volume 12, 2010 (October 5, 2010). http://dx.doi.org/10.46298/arima.1929.

9. Macklin, James, David Shorthouse, and Falko Glöckler. "I Know Something You Don’t Know: The annotation saga continues…" Biodiversity Information Science and Standards 7 (September 14, 2023). http://dx.doi.org/10.3897/biss.7.112715.

Abstract:
Over the past 20 years, the biodiversity informatics community has pursued components of the digital annotation landscape with varying degrees of success. We provide a historical overview of the theory and of the advancements made through a few key projects, and we identify some of the ongoing challenges and opportunities. The fundamental principles remain unchanged since annotations were first proposed. Someone (or something): (1) has an enhancement to make elsewhere from the source where original data or information are generated or transcribed; (2) wishes to broadcast these statements to the originator and to others who may benefit; and (3) expects persistence, discoverability, and attribution for their contributions alongside the source. The Filtered Push project (Morris et al. 2013) considered several use cases and pioneered the development of services based on the technology of the day. Exchanging data between parties in a universally consistent way necessitated a novel draft standard for data annotations, extending the World Wide Web Consortium's Web Annotation Working Group standard (Sanderson et al. 2013) to be sufficiently informative for a data curator to confidently make a decision. Figure 2 from Morris et al. (2013), reproduced here as Fig. 1, outlines the composition of an annotation data package for a taxonomic identification. The package contains the data object(s) associated with an occurrence, an expression of the motivation(s) for updating, some evidence for an assertion, and a stated expectation for how the receiving entity should take action. The Filtered Push and AnnoSys (Tschöpe et al. 2013) projects also considered implementation strategies involving collection management systems (e.g., Symbiota) and portals (e.g., European Distributed Institute of Taxonomy, EDIT). However, there remain technological barriers preventing these systems from operating at scale, not the least of which is the absence of globally unique, persistent, resolvable identifiers for shared objects and concepts. Major aggregation infrastructures like the Global Biodiversity Information Facility (GBIF) and the Distributed System of Scientific Collections (DiSSCo) rely on data enhancement to improve the quality of their resources and have annotation services in their work plans. More recently, the Digital Extended Specimen (DES) concept (Hardisty et al. 2022) will rely on annotation services as key components of the proposed infrastructure. Recent work on annotation services more generally has considered various new forms of packaging and delivery, such as Frictionless Data (Fowler et al. 2018), Journal Article Tag Suite XML (Agosti et al. 2022), or nanopublications (Kuhn et al. 2018). There is a risk of fragmentation of this landscape and of disenfranchisement of both biological collections and the wider research community if we fail to align the purpose, content, and structure of these packages, or if these fail to remain aligned with FAIR principles. Institutional collection management systems currently represent the canonical data store that provides data to researchers and data aggregators. It is critical that information and/or feedback about the data they release be round-tripped back to them for consideration. However, the sheer volume of annotations that could be generated by both human and machine curation processes will overwhelm local data curators and the systems supporting them.
One solution to this is to create a central annotation store with write and discovery services that best support the needs of all stewards of data. This will require an international consortium of parties with a governance and technical model to assure its sustainability.
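
For illustration, an annotation data package of the kind discussed above might look roughly like the following JSON-LD, loosely modeled on the W3C Web Annotation vocabulary; the target, body fields, and identifiers are invented here, and this is not the Filtered Push schema:

```python
# A hedged sketch of an annotation package proposing a taxonomic re-identification:
# who made it, what it targets, the proposed enhancement, and the expected action.
import json

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "editing",                      # why: propose a correction
    "creator": "https://orcid.org/0000-0000-0000-0000",
    "target": {                                   # what is being annotated
        "source": "https://example.org/occurrence/12345",
        "type": "dwc:Occurrence",
    },
    "body": {                                     # the proposed enhancement
        "dwc:scientificName": "Quercus robur L.",
        "evidence": "Leaf lobing visible in specimen image",
        "expectation": "update",                  # what the receiver should do
    },
}
print(json.dumps(annotation, indent=2))
```
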
10. Soiland-Reyes, Stian, Leyla Jael Castro, Daniel Garijo, Marc Portier, Carole Goble, and Paul Groth. "Updating Linked Data practices for FAIR Digital Object principles." Research Ideas and Outcomes 8 (October 12, 2022). http://dx.doi.org/10.3897/rio.8.e94501.

Abstract:
Background
The FAIR principles (Wilkinson et al. 2016) are fundamental for data discovery, sharing, consumption and reuse; however, their broad interpretation and the many ways to implement them can lead to inconsistencies and incompatibility (Jacobsen et al. 2020). The European Open Science Cloud (EOSC) has been instrumental in maturing and encouraging FAIR practices across a wide range of research areas. Linked Data in the form of RDF (Resource Description Framework) is the common way to implement machine-readability in FAIR; however, the principles do not prescribe RDF or any particular technology (Mons et al. 2017).
FAIR Digital Object
The FAIR Digital Object (FDO) (Schultes and Wittenburg 2019) has been proposed to improve researchers' access to digital objects by formalising their metadata, types and identifiers and exposing their computational operations, making them actionable FAIR objects rather than passive data sources. FDO is a set of principles (Bonino et al. 2019), implementable in multiple ways. Current realisations mostly use the Digital Object Interface Protocol (DOIPv2) (DONA Foundation 2018), with the main implementation being CORDRA. We can consider DOIPv2 a simplified combination of object-oriented (CORBA, SOAP) and document-based (HTTP, FTP) approaches. More recently, the FDO Forum has prepared detailed recommendations, currently open for comments, including a DOIP endorsement and updated FDO requirements. These point out Linked Data as another possible technology stack, which is the focus of this work.
Linked Data
Linked Data (LD) standards, based on the Web architecture, are commonplace in sciences like bioinformatics, chemistry and medical informatics, in particular to publish Open Data as machine-readable resources. LD has become ubiquitous on the general Web: the schema.org vocabulary is used by over 10 million sites for indexing by search engines, and 43% of all websites use JSON-LD. Although LD practices align with FAIR (Hasnain and Rebholz-Schuhmann 2018), they do not fully encompass the active aspects of FDOs. The HTTP protocol is used heavily for applications (e.g. mobile apps and cloud services), with REST APIs of customised JSON structures. Approaches that merge the LD and REST worlds include the Linked Data Platform (LDP), Hydra and Web Payments.
Meeting FDO principles using Linked Data standards
Considering the potential of FDOs when combined with the mature technology stack of LD, here we briefly discuss how the FDO principles in Bonino et al. (2019) can be achieved using existing standards. The general principles (G1–G9) apply well: open standards, with HTTP being stable for 30 years; JSON-LD widely used; FAIR practitioners mainly using RDF; and a clear abstraction between the RDF model and stable bindings available in multiple serialisations. However, when considering the specific principles (FDOF1–FDOF12), we find that additional constraints and best practices need to be established: arbitrary LD resources cannot be assumed to follow FDO principles. This is equivalent to how existing use of DOIP is not FDO-compliant without additional constraints. Namely, persistent identifiers (PIDs) (McMurry et al. 2017) (FDOF1) are common in the LD world (e.g. using http://purl.org/ or https://w3id.org/); however, they don't always have a declared type (FDOF2), or the PID may not even appear in the metadata. URL-based PIDs are resolvable (FDOF3), typically over HTTP using redirections and content negotiation.
One great advantage of RDF is that all attributes are defined semantic artefacts with PIDs (FDOF4), and attributes can be reused across vocabularies. While CRUD operations (FDOF6) are supported by native HTTP operations (GET/PUT/POST/DELETE) as in LDP, there is little consistency on how to define operation interfaces in LD (FDOF5). Existing REST approaches like OpenAPI and URI templates are mature and good candidates, and should be related to defined types to support machine-actionable composition (FDOF7). HTTP error code 410 Gone is used in tombstone pages for removed resources (FDOF12), although 404 Not Found is more frequent. Metadata is resolved to HTTP documents with their own URIs, but these frequently don't have their own PID (FDOF8). RDF-Star and nanopublications (Kuhn et al. 2021) give ways to identify and trace the provenance of individual assertions. Different metadata levels (FDOF9) are frequently developed for LD vocabularies across different communities (FDOF10), such as FHIR for health data, Bioschemas for bioinformatics, and >1000 more specific bio-ontologies. Increased declaration and navigation of profiles is therefore essential for machine-actionability and consistent consumption across FAIR endpoints. Several standards exist for rich collections (FDOF11), e.g. OAI-ORE, DCAT, RO-Crate, LDP. These are used and extended heterogeneously across the Web, but consistent machine-actionable FDOs will need specific choices of core standards and vocabularies. Another challenge arises when multiple PIDs refer to "almost the same" concept in different collections; significant efforts have created manual and automated semantic mappings (Baker et al. 2013, de Mello et al. 2022). Currently the FDO Forum has suggested the use of LDP as a possible alternative for implementing FAIR Digital Objects (Bonino da Silva Santos 2021), which proposes a novel approach of content negotiation with custom media types.
Discussion
The Linked Data stack provides a set of specifications, tools and guidelines that help make the FDO principles a reality. This mature approach can accelerate the uptake of FDO by scholars and existing research infrastructures such as the European Open Science Cloud (EOSC). However, the number of standards and existing metadata vocabularies poses a potential threat to adoption and interoperability. Yet the challenges of agreeing on usage profiles apply equally to DOIP and LD approaches. We have worked with different scientific communities to define RO-Crate (Soiland-Reyes et al. 2022), a lightweight method to package research outputs along with their metadata. While RO-Crate's use of schema.org shows just one possible metadata model, it is powerful enough to express FDOs and familiar to web developers. We have also used FAIR Signposting (Van de Sompel et al. 2022) with HTTP Link: headers as a way to support navigation to the individual core properties of an FDO (PID, type, metadata, licence, bytestream) that does not require heuristics of content negotiation and is agnostic to particular metadata vocabularies and serialisations. We believe that by adopting Linked Data principles, we can accelerate FDO today, and even start building practical ways to assist scientists in efficiently answering topical questions based on knowledge graphs.
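
As an illustration of the FAIR Signposting idea mentioned above, the sketch below (ours; all URLs are invented) shows an HTTP Link header advertising an object's PID, type, metadata, and bytestream, together with a naive parse of it:

```python
# Build a Signposting-style Link header value: each entry points a client to one
# core property of the object (PID, type, metadata document, bytestream).
link_header = ", ".join([
    '<https://doi.org/10.5281/zenodo.0000000> ; rel="cite-as"',   # the PID
    '<https://schema.org/Dataset> ; rel="type"',                   # the type
    '<https://example.org/ds/1/ro-crate-metadata.json> ; rel="describedby" ; type="application/ld+json"',
    '<https://example.org/ds/1/data.csv> ; rel="item" ; type="text/csv"',  # bytestream
])

# A client can split the header back into (target, attributes) pairs:
for part in link_header.split(", <"):
    target, _, attrs = part.strip("<").partition("> ; ")
    print(target, "->", attrs)
```
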

Theses on the topic "Composition and compatibility of web services"

1. Serrai, Walid. "Évaluation de performances de solutions pour la découverte et la composition des services web." Electronic thesis or dissertation, Paris Est, 2020. http://www.theses.fr/2020PESC0032.

Abstract:
Software systems accessible via the web are built from existing, distributed web services that interact by exchanging messages. A web service exposes its functionality through an interface described in a machine-readable format; other systems interact with it, without human intervention, according to a prescribed procedure using the messages of a protocol. Web services can be deployed on cloud platforms. This type of deployment places a large number of services in the same directories, raising several problems: how to manage these services effectively so as to facilitate their discovery for possible composition, and, given a directory, how to define an architecture, or even a data structure, that optimizes service discovery, composition, and management. Service discovery involves finding one or more services that meet the client's criteria. Service composition consists of finding a set of services that can be executed according to a scheme and that satisfy the client's constraints. As the number of services constantly increases, the demand for architectures that offer not only quality of service but also fast response times for discovery, selection, and composition grows ever more intense. These architectures must also be easily manageable and maintainable over time. The exploration of communities and index structures, combined with the use of multi-criteria measures, could offer an effective solution, provided that the data structures, the types of measures, and the appropriate techniques are chosen carefully. In this thesis, solutions are proposed for service discovery, selection, and composition that optimize the search in terms of response time and relevance of the results. The performance of the proposed solutions is evaluated using simulation platforms.
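
As a rough illustration of the index-plus-multi-criteria idea described in this abstract (ours, not the thesis' architecture; the registry, tags, and weights are invented), a capability index can narrow the candidates before a weighted ranking:

```python
# Toy index-backed service discovery: an inverted index over capability tags
# narrows the candidates, then a weighted multi-criteria score ranks them.
from collections import defaultdict

services = {
    "WeatherWS": {"tags": {"weather", "forecast"}, "latency_ms": 80,  "rating": 4.2},
    "ClimateWS": {"tags": {"weather", "climate"},  "latency_ms": 150, "rating": 4.8},
    "GeoWS":     {"tags": {"geocoding"},           "latency_ms": 50,  "rating": 3.9},
}

# Inverted index: capability tag -> candidate services, avoiding a full scan.
index = defaultdict(set)
for name, meta in services.items():
    for tag in meta["tags"]:
        index[tag].add(name)

def discover(tag, w_latency=0.5, w_rating=0.5):
    """Rank the candidates for a tag by a weighted multi-criteria score."""
    def score(name):
        m = services[name]
        return w_latency * (1 - m["latency_ms"] / 1000) + w_rating * (m["rating"] / 5)
    return sorted(index[tag], key=score, reverse=True)

print(discover("weather"))  # ['ClimateWS', 'WeatherWS'] with these weights
```
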
2. Guermouche, Nawal. "Etude des Interactions Temporisées dans la Composition de Services Web." PhD thesis, Université Henri Poincaré - Nancy I, 2010. http://tel.archives-ouvertes.fr/tel-00540646.

Abstract:
The major advantage of Web services is that they rely on Web standards and technologies to interact by exchanging messages. Beyond message sequences, other factors affect the interoperability of Web services, such as the temporal constraints that specify the delays required to exchange messages. The thesis reported in this manuscript studies the impact of these properties on Web service composition. Considering such properties raises several problems, for which we have tried to provide solutions. The first aspect consists of defining a model that accounts for the abstractions needed to analyze and synthesize a composition, namely messages, data, data constraints, temporal properties, and the asynchronous nature of service communications. Based on this model, the second problem is to propose a compatibility analysis approach. This analysis aims to characterize the compatibility or incompatibility of Web services, taking into account the abstractions listed above. We particularly study the impact of temporal properties in a choreography in which the Web services communicate asynchronously. We propose a model-checking-based approach for detecting the timed conflicts that can arise in a choreography. Finally, the last problem we address is that of building a composition that attempts to meet the client's needs while taking temporal aspects into account. The proposed approach is based on generating a mediator that tries, when possible, to work around the timed and untimed incompatibilities that can arise during a collaboration. Mechanisms and algorithms have been developed to achieve these objectives.
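
To give a flavor of what a timed incompatibility is, here is a deliberately simplified sketch (ours; the thesis uses a much richer timed model and model checking, whereas this only compares a consumer's deadline with a provider's guaranteed response time):

```python
# Toy timed compatibility check between two services in a choreography.
from dataclasses import dataclass

@dataclass
class Exchange:
    message: str
    deadline_s: float       # consumer: reply expected within this delay
    response_time_s: float  # provider: guaranteed worst-case response delay

def timed_conflicts(exchanges):
    """Return the exchanges whose provider guarantee violates the consumer deadline."""
    return [e for e in exchanges if e.response_time_s > e.deadline_s]

choreography = [
    Exchange("getQuote", deadline_s=2.0, response_time_s=1.5),
    Exchange("reserve",  deadline_s=1.0, response_time_s=3.0),  # timed conflict
    Exchange("confirm",  deadline_s=5.0, response_time_s=0.5),
]

for e in timed_conflicts(choreography):
    print(f"timed conflict on '{e.message}': needs {e.deadline_s}s, got {e.response_time_s}s")
```
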
3. Bonner, Chantal. "Classification et composition de services Web : une perspective réseaux complexes." Corte, 2011. http://www.theses.fr/2011CORT0008.

Full text
Abstract:
Web services are software building blocks independent of any software or hardware platform; they are deployed within a service-oriented architecture (SOA). Research on Web services mainly focuses on discovery and composition, but the complexity of the structure of the Web services space and its evolution must also be taken into account. This cannot be done without the science of complex systems, and in particular the theory of complex networks. In this thesis, we define a set of composition networks over Web services described syntactically (WSDL) and semantically (SAWSDL). Experimental exploration of these networks reveals the characteristic properties of large real-world graphs (the small-world property and a scale-free degree distribution). It also shows that these networks have a community structure. This result provides an alternative answer to the problem of classifying Web services by domain of interest: communities group not services with similar functionalities, but services that share many interaction relationships. This organization can be used, among other things, to guide composition search algorithms. Furthermore, for the classification of functionally similar services with a view to discovery or substitution, we propose a set of network models for syntactic and semantic representations of Web services, reflecting various degrees of similarity. The topological analysis of these networks reveals a structure in components and an internal organization of the components around elementary patterns. This property allows a two-level characterization of the notion of a community of similar Web services, highlighting the flexibility of this new organizational model. This work opens new perspectives on the issues of service-oriented architecture.
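The kind of interaction network studied here can be approximated in a few lines: the hypothetical sketch below (using the networkx library) links one service to another when an output of the first matches an input of the second, then extracts communities by modularity, which group frequently interacting services rather than functionally similar ones. All service signatures are invented.

```python
# Minimal sketch of a composition network: nodes are services, and an
# edge s1 -> s2 means some output of s1 can feed an input of s2.
# Real networks are extracted from WSDL/SAWSDL descriptions.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

services = {  # name: (inputs, outputs), invented for the example
    "geocode": ({"address"}, {"lat", "lon"}),
    "forecast": ({"lat", "lon"}, {"temperature"}),
    "alerting": ({"temperature"}, {"alert"}),
    "rates": ({"currency_code"}, {"rate"}),
    "currency": ({"amount", "rate"}, {"converted"}),
}

g = nx.DiGraph()
g.add_nodes_from(services)
for s1, (_, outs) in services.items():
    for s2, (ins, _) in services.items():
        if s1 != s2 and outs & ins:  # an output of s1 matches an input of s2
            g.add_edge(s1, s2)

# Communities group services that interact often, not services that are
# functionally similar -- the alternative classification described above.
for i, com in enumerate(greedy_modularity_communities(g.to_undirected())):
    print(f"community {i}: {sorted(com)}")
```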
APA, Harvard, Vancouver, ISO, etc. styles
4

Cherifi, Chantal. « Classification et Composition de Services Web : Une Perspective Réseaux Complexes ». PhD thesis, Université Pascal Paoli, 2011. http://tel.archives-ouvertes.fr/tel-00652852.

Full text
Abstract:
Web services are software building blocks independent of any software or hardware platform; they are deployed within a service-oriented architecture (SOA). Research on Web services mainly focuses on discovery and composition, but the complexity of the structure of the Web services space and its evolution must also be taken into account. This cannot be done without the science of complex systems, and in particular the theory of complex networks. In this thesis, we define a set of composition networks over Web services described syntactically (WSDL) and semantically (SAWSDL). Experimental exploration of these networks reveals the characteristic properties of large real-world graphs (the small-world property and a scale-free degree distribution). It also shows that these networks have a community structure. This result provides an alternative answer to the problem of classifying Web services by domain of interest: communities group not services with similar functionalities, but services that share many interaction relationships. This organization can be used, among other things, to guide composition search algorithms. Furthermore, for the classification of functionally similar services with a view to discovery or substitution, we propose a set of network models for syntactic and semantic representations of Web services, reflecting various degrees of similarity. The topological analysis of these networks reveals a structure in components and an internal organization of the components around elementary patterns. This property allows a two-level characterization of the notion of a community of similar Web services, highlighting the flexibility of this new organizational model. This work opens new perspectives on the issues of service-oriented architecture.
APA, Harvard, Vancouver, ISO, etc. styles
5

Gschwind, Benoît. « Composition automatique et adaptative de services web pour la météorologie ». PhD thesis, École Nationale Supérieure des Mines de Paris, 2009. http://tel.archives-ouvertes.fr/tel-00460604.

Full text
Abstract:
Data and observations are fundamental to the progress of science, and access to them and their sharing are crucial for researchers. In recent years researchers have relied on the Internet and Web services, but this solution is not entirely satisfactory. In meteorology in particular, there is a gap between the needs and the available information, which can be closed by developing tools for composing Web services with one another. Such composition makes it possible to accomplish tasks that no single service could achieve alone. To meet the expressed needs, a composition method must be automatic and adaptive, meaning that a composition must not require human intervention and must take into account the availability of the Web services and the results of their execution. To bridge this gap, my objectives are to propose, formalize, and develop such a method. The first contribution is a formalization of the specific needs of meteorology with respect to Web service composition. This thesis highlights the differences between the Web services used in meteorology and the Web services usually encountered, such as those of e-commerce. My thesis also proposes a new Web service composition method that supports the concatenation of data and provides a way to evaluate the quality of these compositions. Finally, it proposes a method for evaluating composition methods that answers the needs of meteorology.
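A minimal sketch of what "automatic and adaptive" can mean in practice: the hypothetical providers below are tried in order, the plan adapts when an execution fails, and partial results are concatenated into one series. Provider names and data are invented stand-ins for the meteorological services discussed above.

```python
# Adaptive composition: switch provider on failure, concatenate results.
def provider_a(day: str) -> list[float]:
    raise TimeoutError("provider A unavailable")  # simulate an outage

def provider_b(day: str) -> list[float]:
    return [14.2, 15.1, 16.0]  # made-up hourly temperatures

PROVIDERS = [provider_a, provider_b]

def fetch_adaptive(day: str) -> list[float]:
    """Try providers in order; adapt when execution fails."""
    for provider in PROVIDERS:
        try:
            return provider(day)
        except Exception:
            continue  # adapt: move on to the next available service
    return []

# Concatenate the results over several days into one data series.
series = []
for day in ("2009-01-01", "2009-01-02"):
    series += fetch_adaptive(day)
print(series)
```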
APA, Harvard, Vancouver, ISO, etc. styles
6

Mekki, Mohamed-Anis. « Synthèse et compilation de services web sécurisés ». Thesis, Nancy 1, 2011. http://www.theses.fr/2011NAN10123/document.

Full text
Abstract:
Automatic composition of web services is a challenging task. Many works have considered simplified automata models that abstract away from the structure of the messages exchanged by the services. For the domain of secured services, we propose a novel approach to automated composition based on security policies. Given a community of services and a goal service, we reduce the problem of composing the goal from the services in the community to a security problem, in which an intruder we call a mediator must intercept and redirect messages between the service community and a client service until a satisfying state is reached for the latter. We have implemented our algorithm in the validation platform of the AVANTSSAR project and tested the corresponding tool on several case studies. We then present a tool that compiles the obtained trace, which describes the execution of the mediator, into the corresponding runnable code. For that we first compute an executable specification, as prudent as possible, of the mediator's role in the orchestration. This specification is expressed in ASLan, a formal language designed for modeling Web services tied to security policies. Automatic tools can then check that the resulting ASLan specification satisfies required security properties such as secrecy and authentication. If no flaw is found, we compile the specification into a Java servlet that the mediator can use to lead the orchestration.
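The mediator idea can be caricatured in a few lines: a hypothetical intermediary greedily redirects each message to whichever community service can consume it, until the client's goal message is produced. This sketch deliberately ignores the security policies and the ASLan/AVANTSSAR toolchain that the thesis actually relies on; the services and messages are invented.

```python
# Minimal sketch of a mediator routing messages through a community.
community = {
    "translate": ("text_fr", "text_en"),   # name: (consumes, produces)
    "summarize": ("text_en", "summary"),
}

def mediate(start: str, goal: str) -> list[str]:
    """Greedy message routing from `start` until `goal` is produced."""
    trace, current = [], start
    while current != goal:
        for name, (consumed, produced) in community.items():
            if consumed == current:
                trace.append(f"redirect {current} to {name}")
                current = produced
                break
        else:
            raise RuntimeError(f"no service accepts {current}")
    return trace

print(mediate("text_fr", "summary"))
```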
APA, Harvard, Vancouver, ISO, etc. styles
7

Ozanne, Alain. « Interact : un modèle général de contrat pour la garantie des assemblages de composants et services ». PhD thesis, Université Pierre et Marie Curie - Paris VI, 2007. http://tel.archives-ouvertes.fr/tel-00292148.

Full text
Abstract:
To satisfy applications' new needs for flexibility, modularity, adaptability, and distribution, the component and service paradigms have been embodied in well-known frameworks such as J2EE, OSGi, SCA, and Fractal. These frameworks, however, offer few tools for guaranteeing the reliability of applications by reasoning generically about their architectural configuration and the specifications of the participants. In this thesis, I approach the verification of assemblies, and the diagnosis of failures, from the angle of the responsibility-driven contract approach. To do so, I first analyze under which hypotheses different formalisms can be integrated into this approach, and then how the approach can be applied to different architectures. I also study how the practitioners who build such systems could benefit from it. This leads me to present a contract model that integrates and organizes, jointly and uniformly across different architectures, the various properties identified as required for the validity of an assembly. I define the object model that reifies this contractual logic, as well as its implementation as a framework. The framework is validated on the Fractal architecture with two contractual formalisms, one based on assertions and the other on constraints over the valid interaction sequences between participants. A more advanced validation is shown on the example of an instant-communities application.
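A minimal sketch of the assertion-based flavor of contract, assuming an invented binding between a client and a provider: each interaction is checked against a precondition and a postcondition, and a violation names the responsible side, in the spirit of responsibility-driven contracts.

```python
# Wrap a binding so every interaction is checked against its contract.
from typing import Callable

def contracted(pre: Callable[[int], bool],
               post: Callable[[int, int], bool],
               impl: Callable[[int], int]) -> Callable[[int], int]:
    def wrapped(x: int) -> int:
        if not pre(x):
            raise AssertionError("client violated the precondition")
        y = impl(x)
        if not post(x, y):
            raise AssertionError("provider violated the postcondition")
        return y
    return wrapped

# An invented provider component exposing an integer-square-root service.
sqrt_floor = contracted(
    pre=lambda x: x >= 0,                         # client's obligation
    post=lambda x, y: y * y <= x < (y + 1) ** 2,  # provider's obligation
    impl=lambda x: int(x ** 0.5),
)

print(sqrt_floor(10))   # 3
# sqrt_floor(-1) would raise, blaming the client side of the binding.
```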
APA, Harvard, Vancouver, ISO, etc. styles
8

Ben, Njima Cheyma. « Élaboration d'un modèle de découverte et de composition des services web mobiles ». Thesis, Lyon, 2017. http://www.theses.fr/2017LYSE3033.

Full text
Abstract:
Over the last two decades, the Internet has grown exponentially, causing the emergence of web services and applications that meet the different needs of consumers. During the same period, the mobile network industry has become ubiquitous, making most users inseparable from their mobile devices. The combination of mobile technology and web services provides a new paradigm named mobile web services. Consuming web services from mobile devices offers several facilities to users and requires greater manipulation of these services, such as discovery, composition, and execution. Indeed, for users to find services that meet their requirements, a discovery mechanism is needed. Since requests have become not only more complex but also more dynamic, a single service offering simple, primitive functionality is insufficient to satisfy complex requirements, so the combination of multiple services into a composite service is more and more in demand. We thus speak of mobile web service discovery and composition, two paradigms that are mutually linked and complementary. The discovery and composition of web services in a mobile environment raise several challenges that do not exist in a traditional (non-mobile) environment, among them the limited resources of the mobile device, called the static context in this work, and the change of context due mainly to the mobility of the device, called the dynamic context. In this thesis we propose a framework for the composition of mobile web services encompassing two complementary approaches: a first approach, called MobiDisc, addresses the discovery of mobile web services, and a second proposes a solution to the problem of composition in a dynamic context. MobiDisc uses the static context together with QoS properties and user preferences in the semantic descriptions of services and of the user query to increase the accuracy of the discovery process. The composition approach focuses on the dynamic context, which can modify the composition result; the objective is to determine the sensitivity of services to the dynamic context and to generate composition plans ranked according to their overall sensitivity values, letting the user choose the best composition.
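As a hedged illustration of sensitivity-ranked composition plans, the sketch below scores each invented plan by the mean sensitivity of its services to the dynamic context (location, connectivity, battery) and presents the least sensitive plans first; the services, plans, and scores are made up for the example.

```python
# Rank composition plans by their sensitivity to context change.
SENSITIVITY = {  # per-service sensitivity in [0, 1], invented values
    "local-cache": 0.1,
    "nearby-poi": 0.8,   # depends on location, so highly sensitive
    "translator": 0.3,
    "maps": 0.6,
}

plans = [
    ["local-cache", "translator"],
    ["nearby-poi", "maps"],
    ["local-cache", "maps"],
]

def plan_sensitivity(plan: list[str]) -> float:
    """Aggregate sensitivity of a plan; here, a simple mean."""
    return sum(SENSITIVITY[s] for s in plan) / len(plan)

# Present plans from least to most context-sensitive.
for p in sorted(plans, key=plan_sensitivity):
    print(f"{plan_sensitivity(p):.2f}  {' -> '.join(p)}")
```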
APA, Harvard, Vancouver, ISO, etc. styles
9

Djenouhat, Manel Amel. « Un cadre sémantique formel pour la description, sélection et composition des services web ». Thesis, Paris, CNAM, 2017. http://www.theses.fr/2017CNAM1137/document.

Full text
Abstract:
The aim of this thesis is to provide a suitable formal semantic framework that supports the interoperability of the different formalisms already used to describe and deploy a Web service. In other words, we contribute to the development of a rigorous mathematical formalism to describe a complex Web service that may change during execution and coordinate with other services adaptively. To achieve this goal, the steps of description, selection, and composition constitute the three major issues studied in this thesis. We first propose, through the use of the K semantic framework, K-WSDL: a Web service description language endowed with an operational semantics in terms of rewriting rules, which can be executed and analyzed in Maude. We introduce, in a second step, WS-Sim, a new approach based on category theory that evaluates the behavioral equivalence between services by representing each service as a category and establishing formal links (functors) between them. Finally, we present RMop-ECATNet (Refined Meta Open ECATNet), a formal model for the specification of service composition obtained by refining the Mop-ECATNet model introduced by [LB14]. We extend and enrich this model at three distinct levels: structural, behavioral, and implementation.
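To give a flavor of an operational semantics expressed as rewrite rules, in the spirit of rewriting-logic tools such as Maude, the sketch below rewrites an invented service state until no rule applies; K-WSDL's actual rules operate over WSDL-level constructs, so the rules and states here are purely illustrative.

```python
# A service state is a term; rules rewrite it until a normal form.
RULES = [
    # (name, pattern, result): rewrite `pattern` to `result`.
    ("receive", ("idle", "request"), ("busy", "request")),
    ("process", ("busy", "request"), ("busy", "response")),
    ("reply",   ("busy", "response"), ("idle", "done")),
]

def rewrite(state: tuple) -> list[str]:
    """Apply rules until no rule matches; return the rewriting trace."""
    trace = []
    changed = True
    while changed:
        changed = False
        for name, pattern, result in RULES:
            if state == pattern:
                trace.append(f"{name}: {state} => {result}")
                state, changed = result, True
                break
    return trace

print("\n".join(rewrite(("idle", "request"))))
```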
APA, Harvard, Vancouver, ISO, etc. styles
10

Yacoubi, Nadia. « Une nouvelle approche de Découverte et de Composition de Services Web à base de médiation sémantique et de raisonnement déductif : application au domaine informatique ». Paris, CNAM, 2010. http://www.theses.fr/2010CNAM0703.

Full text
Abstract:
The Semantic Web, as an extension of the current Web, enables the emergence of a new generation of Web services, called Semantic Web Services (SWS), in which the meaning of descriptions is well-defined so that computers and people can better work in cooperation. In this thesis, we propose a stratified meta-framework, which we call BioMed, providing service mediation in the bioinformatics domain. Mediation in BioMed is threefold: semantic, to reconcile heterogeneous semantic descriptions; ontological, to deal with heterogeneous bio-ontologies; and deductive, to support the processes of discovery and composition. We propose a semantics-driven approach that describes services with a canonical model and annotates them semantically against an ontological meta-map built to represent the global, shared semantics of a network of biomedical ontologies. Two main classes of processes are addressed: service discovery and composition. Both rely on a Datalog deductive reasoning engine, implemented as a deductive meta-service, that infers implicit semantic properties of services and enlarges the search space of satisfiable services in the discovery process and of composable services in the composition process. In the empirical study, we report the size of the search space with and without reasoning, as well as the quality of the discovered services. Finally, we propose a set of techniques for ranking the search space of satisfiable and composable services in order to identify the best services in terms of non-functional properties.
APA, Harvard, Vancouver, ISO, etc. styles