Scientific literature on the topic "Genetic Testing Market Share"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles

Consult the topical lists of journal articles, books, theses, conference reports, and other academic sources on the topic "Genetic Testing Market Share".

You can also download the full text of a scholarly publication in PDF format and consult its abstract online when this information is included in the metadata.

Journal articles on the topic "Genetic Testing Market Share"

1

MacMinn, R. D., P. L. Brockett, and J. A. Raeburn. "Health Insurance, Genetic Testing and Adverse Selection." Annals of Actuarial Science 2, no. 2 (September 2007): 327–47. http://dx.doi.org/10.1017/s1748499500000385.

Full text
Abstract:
The implications of genetic testing information availability for society, medicine, employment, and individual privacy rights have generated much political debate, legislation and academic research. Part of this debate centres on the ethical and economic considerations resultant from this expanded knowledge, particularly for insurance practices. Within insurance economics, the possibility of adverse selection has been debated and the potential for a ban on an insurer's use of genetic testing has been studied with respect to whether or not such a ban might actually result in insurance market failure due to this adverse selection. Studies have examined the issue using expected loss cost (actuarial or ‘fair’) pricing models, and have not considered either equilibrium (supply and demand) price setting as is present in markets, or the potentially swamping effect of background health care risks facing the insured, having nothing to do with any particular genetic mutation. Here we construct a supply and demand function with both high and low risk individuals in the presence of background health care cost risks, and derive an equilibrium price and market composition to determine whether, if genetic information is allowed for individuals, but this same information is not shared with insurers: (1) is market failure inevitable? (it is not if the background risk is sufficiently high relative to potential genetic risk costs); (2) will equilibrium prices result in all low risk insured exiting the market? (not in the presence of significant background risk); and (3) how much would prices increase and market sales decrease if insurers do not have the same genetic information as the insured? (prices will increase, but not necessarily very much in the presence of background risk, and not as much as that previously estimated in the insurance literature).
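The equilibrium argument summarised above can be illustrated with a small numeric sketch. The Python toy below is not the authors' model; the loss size, risk margin, and risk probabilities are invented purely to show why a sufficiently large background risk keeps low-genetic-risk buyers in a pooled market and limits the premium increase.

```python
# Hypothetical toy model (not the cited paper's): two genetic risk types share a common
# "background" health risk. Each type buys cover if the pooled premium does not exceed its
# reservation price (expected loss plus a small risk margin); the insurer must break even
# on whoever actually buys.

def equilibrium(p_low, p_high, background, share_high, loss=100.0, margin=0.15):
    reservation = lambda p: (p + background) * loss * (1.0 + margin)
    premium = (share_high * p_high + (1.0 - share_high) * p_low + background) * loss
    for _ in range(1000):                          # fixed-point iteration on the premium
        buy_high = reservation(p_high) >= premium
        buy_low = reservation(p_low) >= premium
        if not (buy_high or buy_low):
            return None, buy_low, buy_high         # nobody buys: market failure
        w_high = share_high if buy_high else 0.0
        w_low = (1.0 - share_high) if buy_low else 0.0
        fair = (w_high * (p_high + background) + w_low * (p_low + background)) / (w_high + w_low) * loss
        if abs(fair - premium) < 1e-9:
            return fair, buy_low, buy_high
        premium = fair
    return premium, buy_low, buy_high

for bg in (0.00, 0.05, 0.20):                      # assumed background risk probability
    prem, low_in, high_in = equilibrium(p_low=0.01, p_high=0.10, background=bg, share_high=0.2)
    print(f"background={bg:.2f}  premium={prem}  low-risk buyers stay={low_in}")
```

With these made-up numbers the low-risk group exits when background risk is negligible but stays insured once background risk dominates the genetic differential, which is the qualitative point the abstract makes.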
APA, Harvard, Vancouver, ISO, etc. styles
2

Hendricks-Sturrup, Rachele M., and Christine Y. Lu. "What motivates the sharing of consumer-generated genomic information?" SAGE Open Medicine 8 (January 2020): 205031212091540. http://dx.doi.org/10.1177/2050312120915400.

Full text
Abstract:
Genomic medicine is an emerging practice that followed the completion of the Human Genome Project and that considers genomic information about an individual in the provision of their clinical care. Large and start-up direct-to-consumer genetic testing companies like Ancestry, 23andMe, Luna DNA, and Nebula Genomics have capitalized on findings from the Human Genome Project by offering genetic health testing services to consumers without a clinical intermediary. Genomic medicine is thus further propelled by unprecedented supply and demand market forces driven by direct-to-consumer genetic testing companies. As government entities like the National Human Genome Research Institute question how genomics can be implemented into routine medical practice to prevent disease and improve the health of all members of a diverse community, we believe that stakeholders must first examine how, and in which scenarios, stakeholders can become motivated to share or receive genomic information. In this commentary, we discuss three consumer scenarios: satisfying personal curiosity, providing a social good, and receiving a financial return. We examine these motivations based on recent events and current avenues through which consumers have engaged or can engage in genomic data sharing via private, secure (e.g. centralized genomic databases and de-centralized platforms like blockchain) and public, unsecure platforms (e.g. open platforms that are publicly available online). By examining these scenarios, we can likely determine how various stakeholders, such as consumers, might prefer to extract value from genomic information and how privacy preferences among those stakeholders might vary depending on how they seek to use or share genomic information. From there, one can recommend best practices to promote transparency and uphold privacy standards and expectations among stakeholders engaged in genomic medicine.
APA, Harvard, Vancouver, ISO, etc. styles
3

Van Eenennaam, Alison L., and Daniel J. Drake. "Where in the beef-cattle supply chain might DNA tests generate value?" Animal Production Science 52, no. 3 (2012): 185. http://dx.doi.org/10.1071/an11060.

Full text
Abstract:
DNA information has the potential to generate value for each sector of the beef-cattle industry. The value distribution among sectors (breeding, commercial, feedlot, processing) will differ depending on marketing. The more descendants an animal produces, the more valuable each unit of genetic improvement becomes. Therefore, the value of using DNA testing to increase the accuracy of selection and accelerate the rate of genetic gain is highest in the breeding sector, particularly for replacement stud animals. There is a lesser value associated with increasing the accuracy of yearling commercial bulls. The cost to DNA test commercial sires will likely be incurred by breeders before sale, and must be recouped through higher bull sale prices or increased market share. Commercial farmers could also use DNA tests to improve the accuracy of replacement female selection. This assumes the development of DNA tests that perform well for the low-heritability traits that directly affect maternal performance (e.g. days to calving) in commercial cattle populations. DNA tests may provide the sole source of information for traits that are not routinely measured on commercial farms. In that case, DNA test information will provide new selection criteria to allow for genetic improvement in those traits. As DNA test offerings mature to have improved accuracy for traits of great value to the feedlot (e.g. feed conversion, disease resistance) and processing (e.g. meat quality) sectors, the added value derived from DNA-enabled selection for these traits will need to be efficiently transferred up the beef production chain to incentivise continued investment. The widespread adoption of DNA testing to enhance the accuracy of selection will likely require an approach to share the value realised by downstream sectors of the beef-cattle industry with those upstream sectors incurring DNA collection and testing expenses.
APA, Harvard, Vancouver, ISO, etc. styles
4

Rehan Haider. "Pharmaceutical Market: An Overview." International Journal of Integrative Sciences 2, no. 12 (28 December 2023): 2087–104. http://dx.doi.org/10.55927/ijis.v2i12.6272.

Full text
Abstract:
Drug display has a substantial risk in terms of the investigation, occurrence, outcomes, and distribution of drugs generally. It is a dynamic and error-prone aspect of the healthcare production process. An overview of pharmaceuticals is provided in this abstract, with emphasis on their key traits, challenges, and methods. The pharmaceutical industry offers a vast array of brands, including generic drugs, biologics, vaccines, formula drugs, and investments in corporate cures. This is due to the ongoing advancements in science and biological research, as well as the growing demand for realistic scenarios in a range of medical contexts. One of the trademarks of pharmaceutical presentations (R&D) is the allure of a significant reliance on testing. It takes a long time, money, and uncertainty to bring a new drug to market. It requires stringent preclinical and objective testing, supervisory permissions, and post-shopping follow-ups to ensure security and efficacy. Patents shield innovations and grant pharmaceutical companies a fenestella of uniqueness that helps them recover their large R&D expenses. However, this industry faces many challenges. Increasing healthcare costs, a rise in regulatory investigations, and pressure from payers and governments on prices have all contributed to a cost curb. In addition, there are still problems with research and development because of the rise in complicated diseases, antibiotic resistance, and the need for integrated healthcare. A shift toward biopharmaceuticals, digital healthcare, and precise therapies are just a few of the noteworthy themes that pharmaceutical advertising has helped to foster recently. The COVID-19 pandemic has increased the number of cure cases, revealing the manufacturing industry's susceptibility to pervasive health crises. Drug display is quite competitive, with minor biotech companies and two multinational alliances vying for market share. To expand device portfolios and gain access to new sciences, partnerships, acquisitions, and mergers are the most common approaches.
APA, Harvard, Vancouver, ISO, etc. styles
5

Vaishali R Surjuse. "Stock Price Prediction and Stock Behaviour using GA and LR with NIFTY50 index Based on Daily Charts." Journal of Electrical Systems 20, no. 2s (4 April 2024): 614–24. http://dx.doi.org/10.52783/jes.1528.

Full text
Abstract:
The price of a company's stock, which can increase in lockstep with the price of a single share, is one of the indicators used to measure its performance. Clients or stockholding companies find it challenging to make long-term projections regarding the value of specific stocks due to the unpredictable nature of stock prices. Consequently, there is no business-related subject more talked about than stock market predictions. It is crucial to resolve this issue in a way that benefits buyers and investors because they frequently experience investment losses. Machine learning is useful in developing models for stock value prediction. We are utilizing Python and Linear Regression, one of the Machine Learning statistical techniques for predictive analysis, to create a stock price prediction website in order to address this issue. Our study primarily focuses on the NIFTY50 index's performance in distributed lag with the purpose of predicting stock prices in the Indian stock market. Several useful characteristics of the NIFTY50 lag index were extracted by means of a genetic algorithm. After that, we uncovered hidden correlations between the stock index and a given stock's trend by using the linear regression classifier. For the purpose of testing our approach, we used it to forecast the future of three distinct equities. In comparison to state-of-the-art forecasting methodologies, our experimental results demonstrated an accuracy of 82.55%. For the purpose of predicting daily changes in stock prices, the NIFTY50 stock index proved to be useful.
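As a rough illustration of the pipeline this abstract sketches (a genetic algorithm selecting useful lagged features, followed by a linear-regression predictor), here is a minimal, self-contained Python sketch on synthetic data. It is not the author's code; the GA settings, feature count, and data are invented for demonstration.

```python
# Minimal GA feature selection + linear regression on synthetic "lagged index" data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_samples, n_features = 500, 12                     # e.g. 12 lagged index features (assumed)
X = rng.normal(size=(n_samples, n_features))
true_coef = np.zeros(n_features)
true_coef[[0, 3, 7]] = [0.8, -0.5, 0.3]             # only a few lags actually matter
y = X @ true_coef + 0.1 * rng.normal(size=n_samples)
X_tr, X_va, y_tr, y_va = X[:400], X[400:], y[:400], y[400:]

def fitness(mask):
    """Validation R^2 of a linear regression restricted to the selected features."""
    if not mask.any():
        return -np.inf
    model = LinearRegression().fit(X_tr[:, mask], y_tr)
    return model.score(X_va[:, mask], y_va)

pop = rng.integers(0, 2, size=(30, n_features)).astype(bool)
for generation in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]    # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, n_features)           # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_features) < 0.05        # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected feature indices:", np.flatnonzero(best),
      "validation R^2:", round(fitness(best), 3))
```

The same skeleton applies when the candidate features are real lagged returns of an index; only the data-loading step changes.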
APA, Harvard, Vancouver, ISO, etc. styles
6

Hartmann, Martin, and Peter Vortisch. "A German Passenger Car and Heavy Vehicle Stock Model: Towards an Autonomous Vehicle Fleet." Transportation Research Record: Journal of the Transportation Research Board 2672, no. 46 (17 June 2018): 55–63. http://dx.doi.org/10.1177/0361198118782042.

Full text
Abstract:
Automated vehicles are becoming a reality. Many pilot projects have already begun demonstrating the technological capabilities, as public authorities now allow the testing of automated vehicles in real traffic. To smooth the transition from a conventional to an automated fleet, effective fiscal and regulatory policies must be developed by governmental agencies. But at what rate will automated vehicles actually be adopted, and what automation technology will be available for use in new cars joining the national fleet? A national vehicle stock model can be used to answer these questions and to observe the aggregate impact of governmental policies on individual vehicle purchase decisions. In this paper, we present a passenger car and heavy vehicle stock cohort model that forecasts the diffusion of automation technology in Germany. The model uses national data on vehicle stock and vehicle utilization patterns on German freeways and predicts market shares of generic automation levels in predefined instances of a trend scenario. Results point toward market saturation of automated vehicles beyond 2050, with almost 90% of the passenger car fleet being classified as at least partially automatized by this date. The results also suggest that technology diffusion will be faster in the heavy vehicle fleet than in the passenger car fleet. This implies a positive correlation between emission-linked road user charges for heavy vehicles on the freeway network and the renewal rate of the heavy vehicle fleet. The forecast shares of automated vehicles can be used as an input for traffic flow simulations or as a basis for those infrastructure measures and traffic policies that are sensitive to the share of automated vehicles.
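A stock (cohort) model of the kind this abstract describes can be prototyped in a few lines. The sketch below is a deliberately simplified, hypothetical Python version: constant fleet size, a single survival curve, and a logistic adoption curve for the automated share of new sales. None of the parameters come from the cited study.

```python
# Simplified vehicle stock cohort model: automated share of the fleet over time.
import numpy as np

years = np.arange(2020, 2061)
max_age = 25
survival = np.exp(-np.arange(max_age) / 12.0)        # crude survival weights by vehicle age
survival /= survival.sum()                            # age distribution of a steady-state fleet

def automated_share_of_new_sales(year, midpoint=2035, steepness=0.35):
    """Assumed logistic diffusion of automation technology in new registrations."""
    return 1.0 / (1.0 + np.exp(-steepness * (year - midpoint)))

fleet_share = []
for year in years:
    ages = np.arange(max_age)                         # a vehicle of age a was sold in (year - a)
    fleet_share.append(np.sum(survival * automated_share_of_new_sales(year - ages)))

for year, share in zip(years[::10], fleet_share[::10]):
    print(year, f"automated share of fleet = {share:.2f}")
```

A full model would additionally track fleet growth, scrappage policies, and separate passenger-car and heavy-vehicle cohorts, as the paper does.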
APA, Harvard, Vancouver, ISO, etc. styles
7

Bennett, Nikki E., Silvio Ernesto Mirabal Torres, and Peter B. Gray. "Exploratory content analysis of direct-to-consumer pet genomics: What is being marketed and what are consumers saying?" PLOS ONE 17, no. 1 (7 January 2022): e0261694. http://dx.doi.org/10.1371/journal.pone.0261694.

Full text
Abstract:
Mars Petcare introduced the first direct-to-consumer domestic dog genetic test in 2009 and Basepaws introduced the first direct-to-consumer cat genetic test in 2016. Social science research has evaluated numerous aspects of the human direct-to-consumer market, yet no such exploration has evaluated the occurrence of pet owners pursuing pet genetic tests. Using a mixed methods approach, we conducted an exploratory content analysis of direct-to-consumer pet genetic company webpages and consumer reviews shared on Amazon. Initial data reviews indicated some companies may be key industry players, relative to others. Our results present content frequency for each group (key industry players, all other companies), though the primary themes for each remained the same. Analysis showed genetic companies are primarily sharing product and purchasing information, along with trustworthiness to establish the merit of the company and their products. Companies also used statements directed towards pet owners that are suggestive of both pets and “pet parents” benefiting from the test results. The primary themes identified in consumer reviews involved consumers sharing their perception about the tests (e.g., accuracy), what aspects of the test results they focused on (e.g., breed information), and experiences with using the test (e.g., ease of use). Amazon reviews were primarily positive, though the companies with smaller review numbers had higher percentages of negative and ambiguous sentiments. Of interest, reviews most often indicated tests were being used to determine a pet’s breed identity, while companies most frequently promoted the health advantages of using their products. Reviews revealed some consumers respond to tests by sharing their pet’s results with someone or by altering their pet’s care. Considering these results in addition to the growing popularity of this industry and the advancements of genomic technology, further research is needed to determine the role pet genetic testing may have in society and on human-animal relationships.
APA, Harvard, Vancouver, ISO, etc. styles
8

Twomlow, Steve, Bekele Shiferaw, Peter Cooper, and J. D. H. Keatinge. "Integrating Genetics and Natural Resource Management for Technology Targeting and Greater Impact of Agricultural Research in the Semi-Arid Tropics." Experimental Agriculture 44, no. 2 (April 2008): 235–56. http://dx.doi.org/10.1017/s0014479708006340.

Full text
Abstract:
Good management of natural resources is the key to good agriculture. This is true everywhere – and particularly in the semi-arid tropics, where over-exploitation of fragile or inherently vulnerable agro-ecosystems is leading to land and soil degradation, productivity decline, and increasing hunger and poverty. Modern crop varieties offer high yields, but the larger share of this potential yield can only be realized with good crop management. The International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), working over a vast and diverse mandate area, has learned one key lesson: that technologies and interventions must be matched not only to the crop or livestock enterprise and the biophysical environment, but also with the market and investment environment, including input supply systems and policy. Various Natural Resource Management (NRM) technologies have been developed over the years, but widespread adoption has been limited for various reasons: technical, socio-economic and institutional. To change this, ICRISAT hypothesizes that ‘A research approach, founded on the need to integrate a broad consideration of technical, socio-economic and institutional issues into the generation of agricultural innovations will result in a higher level of adoption and more sustainable and diverse impacts in the rainfed systems of the semi-arid tropics.’ Traditionally, crop improvement and NRM were seen as distinct but complementary disciplines. ICRISAT is deliberately blurring these boundaries to create the new paradigm of IGNRM or Integrated Genetic and Natural Resource Management. Improved varieties and improved resource management are two sides of the same coin. Most farming problems require integrated solutions, with genetic, management-related and socio-economic components. In essence, plant breeders and NRM scientists must integrate their work with that of private and public sector change agents to develop flexible cropping systems that can respond to rapid changes in market opportunities and climatic conditions. The systems approach looks at various components of the rural economy – traditional food grains, new potential cash crops, livestock and fodder production, as well as socio-economic factors such as alternative sources of employment and income. Crucially the IGNRM approach is participatory, with farmers closely involved in technology development, testing and dissemination. ICRISAT has begun to use the IGNRM approach to catalyse technology uptake and substantially improve food security and incomes in smallholder farm communities at several locations in India, Mali, Niger, Vietnam, China, Thailand and Zimbabwe.
APA, Harvard, Vancouver, ISO, etc. styles
9

Ferreira, Talita da Silva, Giovanni M. Pauletti, Eduardo Agostinho Freitas Fernandes, Andrés Figueroa Campos, Alexis Aceituno, Miguel Ángel Cabrera-Pérez, Diego Gutierrez Triana, and Luis Vázquez-Suárez. "Knowledge-based: facilitating access to medicines in Latin America." Brazilian Journal of Health Review 7, no. 2 (30 April 2024): e69323. http://dx.doi.org/10.34119/bjhrv7n2-480.

Full text
Abstract:
The World Health Organization (WHO), with the scientific support of the International Pharmaceutical Federation (FIP), guides the development of multisource pharmaceutical products for market authorization using in vivo bioequivalence studies or, where applicable, in vitro biowaiver strategies based on the Biopharmaceutical Classification System (BCS). A review of the regulatory framework guiding generic medicines approval in Latin American countries revealed that less than 50% of regional health authorities offer a generic medicines development pathway utilizing a BCS-based biowaiver strategy. Aligned with the ONE FIP Strategy to facilitate access to medicines, a regional case study was carried out to implement and harmonize BCS-based biowaiver knowledge in Latin American countries. A steering committee involving regional representatives from health authorities, the pharmaceutical industry, and universities was established to coordinate development activities. A series of digital engagement events were held in Spanish and English with representatives from Latin America to share knowledge on BCS-based regulatory strategy, promote collaborations, and explore the alignment of biowaiver approval and regulatory pathways among Latin American countries. Feedback from diverse Latin American stakeholders demonstrated inconsistent implementation of bioequivalence testing within the region. However, there is support for a synergistic approach among countries to reduce duplication and increase efficiency in market authorization for generic medicines. This includes alignment with the WHO Prequalification of Medicines program as well as the development of a computational database for the classification of active pharmaceutical ingredients to demonstrate therapeutic interchangeability of immediate-release oral dosage forms according to the BCS. FIP-facilitated digital learning opportunities raised awareness of the BCS-based biowaiver regulatory strategy among Latin American stakeholders. They resulted in a plan to continually strengthen collaborative efforts in the region to harmonize regulations relevant to the development of generic medicines and to introduce cost-effective medicinal products that benefit public health.
APA, Harvard, Vancouver, ISO, etc. styles
10

O'Neill, Onora. "Genetic information and insurance: some ethical issues." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 352, no. 1357 (29 August 1997): 1087–93. http://dx.doi.org/10.1098/rstb.1997.0089.

Full text
Abstract:
Life is risky, and insurance provides one of the best developed ways of controlling risks. By pooling, and so transferring risks, those who turn out to suffer antecedently uncertain harms can be assured in advance that they will be helped if those harms arise; they can then plan their lives and activities with confidence that they are less at the mercy of ill fortune. Both publicly organized and commercial insurance can organize the pooling of risk in ways that are beneficial for all concerned. They provide standard ways of securing fundamental ethical values such as solidarity and mutuality. Although policy holders do not know or contract with one another, each benefits from the contribution of others to a shared scheme for pooling and so controlling risk. Although there is a limit to the degree to which commercially-based insurance, where premiums depend on risk level, can go beyond mutuality towards solidarity, in practice it too often achieves a measure of solidarity by taking a broad brush approach to pooling risk. However, the ordinary practices of insurance, and in particular of commercial insurance, also raise ethical questions. These may be put in simple terms by contrasting the way in which an insurance market discriminates between different people, on the basis of characteristics that (supposedly) determine their risk level, and our frequent abhorrence of discrimination, in particular on the basis of religious, racial and gender grounds. Are the discriminations on which insurance practice relies as standard acceptable or not? The increasing availability of genetic information, which testing (of individuals) and screening (of populations) may provide, could lend urgency to these questions. Genetic information may provide a way of obtaining more accurate assessment of individual risks to health and life. This information could be used to discriminate more finely between the risk levels of different individuals, and then to alter the availability and the costs of health, life and unemployment insurance to them. Since all of these forms of insurance bear very directly on the way most people live, it will matter to them how (if at all) insurers take account of genetic information. Will use of this information improve or damage the capacity of insurance to provide confidence in the face of uncertain harms, and help if they happen? Will it discriminate in acceptable or in unacceptable ways? Will it support or damage the sorts of mutuality and solidarity various sorts of insurance schemes have successfully institutionalized?
APA, Harvard, Vancouver, ISO, etc. styles

Theses on the topic "Genetic Testing Market Share"

1

Jansen, Sebastian. "Testing market imperfections via genetic programming." Thesis, 2010. http://d-nb.info/1011299917/34.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
2

Charbonneau, MJM. "Think before you spit ... and share: protecting consumers in Australia's direct-to-consumer health-related genetic testing (DTCGT) space." Thesis, 2020. https://eprints.utas.edu.au/34824/1/Charbonneau_whole_thesis.pdf.

Full text
Abstract:
Traditionally health-related genetic testing was only available within the jurisdictional bounds of a country’s healthcare system, subject to strict requirements for access, funding and actioning. Direct-to-consumer health-related genetic testing (DTCGT) represents a paradigm shift from medical to consumer, as private companies now provide health-related genetic tests and results directly to consumers in commercial transactions, typically conducted online. Since 2007, when US company 23andMe invited the world to spit in a tube, pay a comparatively small fee, and discover its genetic roots and destiny, DTCGT has offered a future of hope, according to its proponents, and fear, according to its critics. DTCGT’s key promise is consumer empowerment – that individuals armed with personal genetic information about their current and future health status will use it to make autonomous, informed decisions about their healthcare and lifestyles. Its critics, however, focus on the potential for consumer harm, especially psychological, in situations where individuals are required to self-interpret complex genetic information provided for bundles of different tests, and then determine for themselves how they feel and what they might do. This research reports on modelling of Australia’s DTCGT and clinical genetic testing (CGT) spaces and the results of an online panel survey. While the research focus is Australia, the online panel was conducted with 2000 respondents, 1000 each from Australia and the United States, with the United States results used to provide context and comparison. Modelling revealed what was initially believed to be a bifurcated system – consumer in the marketplace or patient in the clinic – had the potential for individuals to be both consumer and patient through consumer or company-initiated engagement with healthcare. Given current industry focus on monetisation of genetic data, DTCGT consumers can also assume the role of research participant by allowing use of their data in company research. As such, three distinct regulatory regimes are involved in the DTCGT space – medico-legal, consumer and human research – each affording different regulatory protections enlivened by DTCGT consumers’ roles as consumer, patient and research participant. In the survey component of this research, respondents were presented with sample DTCGT results for two disease predisposition tests and one pharmacogenomics test, randomly allocated into different risk and metabolisation rate treatments, and then asked to both interpret and contextualise results. The construct of ‘match/mismatch’ was developed based on consistency of personal interpretation with DTCGT disease predisposition results presented, and then applied to DTCGT engagement. Analysis demonstrated those who ‘mismatched’ experienced disproportionate emotional distress and engagement, and intended to engage in behaviours unwarranted by actual results when compared to those who ‘matched’, providing evidence of potential consumer harm, especially psychological harm. The potential for harm to overall health was found relative to the pharmacogenomics test, with over one in ten respondents intending to independently alter their medication dosage based on results, and the potential for strain on healthcare resources as almost eight out ten intended to seek expert advice. 
Most notable overall was the similarity in response patterns between Australian and US respondents, suggesting at least a certain amount of 'universality' or response consistency in how individuals engage with DTCGT results. The outcome of this research resulted in two key recommendations. First, given the pivotal role played by interpretation in engagement with DTCGT, the need for genetics education both for the general public and the medical profession was strongly recommended, not just to prepare both for DTCGT but for whatever the accelerating development of genetic tests, treatments and technologies brings. Secondly, with regard to regulatory reform, the recommendation was to do nothing UNTIL key players in the three regulatory spheres are brought together to consider DTCGT and future genetic offerings both in the clinic and the marketplace – from a holistic perspective. Australia’s DTCGT space demonstrates regulatory congestion – too many laws, areas of both overlap and gaps ripe for regulatory avoidance or commercial exploitation, with none totally fit for purpose. The traditional ‘siloed’ approach where each of these spheres regulates within its silo is no longer ‘fit for purpose’. The siloes need to be broken down and regulation developed from a holistic perspective as the window of opportunity to at least stay abreast of the commercialisation of genetics is rapidly closing.
APA, Harvard, Vancouver, ISO, etc. styles
3

Sládková, Petra. "Posouzení efektivity kapitálového trhu a výběr vhodné investiční strategie." Master's thesis, 2010. http://www.nusl.cz/ntk/nusl-53641.

Full text
Abstract:
In my diploma thesis I analyzed the US capital market. I concentrated on five representative branches of this market: biotechnology, the food industry, the car industry, mining and finance. Twelve companies whose shares are quoted on the American capital market were chosen. I tested the efficiency of this capital market, tried to establish the degree of that efficiency, and then matched a suitable investment strategy to it. For the chosen shares I calculated the average return, the standard deviation, the coefficient of variation, and the α and β coefficients. I carried out correlation and runs tests, which were intended to verify the efficiency of the market. Certain anomalies, such as the day-of-the-week effect, the January effect and the size effect, were investigated in more detail. I then considered whether an active or a passive strategy should be used, and concluded that the active strategy is better for investors in times of financial crisis. I also analyzed the P/E ratios of the chosen companies. The tests performed show that the American stock market is efficient in the weak form of the efficient markets hypothesis.
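The runs test mentioned in this abstract is straightforward to reproduce. Below is a small, generic Python implementation of the Wald–Wolfowitz runs test applied to the signs of daily returns on synthetic data; it illustrates the test itself and is not the thesis's code or data.

```python
# Wald-Wolfowitz runs test on the signs of daily returns (a weak-form efficiency check).
import numpy as np

def runs_test(returns):
    signs = returns > np.median(returns)             # binarise the series
    n1, n2 = signs.sum(), (~signs).sum()
    runs = 1 + np.count_nonzero(signs[1:] != signs[:-1])
    expected = 2.0 * n1 * n2 / (n1 + n2) + 1.0
    variance = (2.0 * n1 * n2 * (2.0 * n1 * n2 - n1 - n2)
                / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    z = (runs - expected) / np.sqrt(variance)
    return runs, expected, z

rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.01, size=1000)        # synthetic i.i.d. daily returns
runs, expected, z = runs_test(returns)
print(f"runs={runs} expected={expected:.1f} z={z:.2f}")  # |z| < 1.96: no evidence against randomness
```

On real return series, a significantly small number of runs would indicate trending behaviour and hence a departure from weak-form efficiency.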
APA, Harvard, Vancouver, ISO, etc. styles

Books on the topic "Genetic Testing Market Share"

1

Venkatraman, N. The market share–profitability relationship: Testing temporal stability across business cycles. Cambridge, Mass.: Sloan School of Management, Massachusetts Institute of Technology, 1989.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
2

Catalano, Lisa, and Alice K. Tanner. Test Development and Validation. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780190604929.003.0009.

Full text
Abstract:
The process of developing a genetic test starts with an initial assessment of need and utility, followed by the development and validation of the chosen testing method. Several factors must be taken into account during test development, including existing technology, financial requirements, personnel resources, and market share. This chapter reviews the steps involved in the development and the validation/verification of clinical genetic testing. It discusses the factors that are addressed by clinical genetic testing laboratories during the design and implementation phases of test development. The role of the laboratory genetic counselor in this process is highlighted throughout.
APA, Harvard, Vancouver, ISO, etc. styles
3

The constrained asset share estimation (CASE) method: Testing mean-variance efficiency of the U.S. stock market. Cambridge, Mass.: National Bureau of Economic Research, 1993.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
4

Truth & Aura Associates. Genetic Testing: Markets and Users in Medical, Forensic, Paternity, and Food Safety Application (Kalorama Information Market Intelligence Report). Find/SVP, 2003.

Find full text
APA, Harvard, Vancouver, ISO, etc. styles
5

Petit, Véronique, Kaveri Qureshi, Yves Charbit, and Philip Kreager, eds. The Anthropological Demography of Health. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198862437.001.0001.

Full text
Abstract:
This book provides an integrative framework for the anthropological demography of health, a field of interdisciplinary population research grounded in ethnography and in critical examination of the social, political, and economic histories that have shaped relations between peoples. The field has grown from the 1990s, extending to a remarkable range of key human and policy issues, including: genetic disorders; nutrition; mental health; infant, child and maternal morbidity; malaria; HIV/AIDS; disability and chronic diseases; new reproductive technologies; and population ageing. Collaboration with social, medical, and demographic historians enables these issues to be situated in the evolution of institutional structures and inequalities that shape health and care access. Understanding fertility levels and trends has widened beyond parity and contraception to the many life course risks and alternative healing systems that shape reproductive health. By going beyond conventional demographic and epidemiological methods, and idealised macro/micro-level units, the anthropological demography of health places people’s health-seeking behaviour in a compositional demography based on ethnographic observation of group formation and change over time, and of variance between what people say and do. It tracks family and community networks; class, linguistic, and religious groups; sectoral labour and market distributions; health and healing specialisms; and relations between these bodies and with groups controlling local and national governments. The approach enables examination of how local cultures and experience are translated formally into measures on which survey and clinical programmes rely, thus testing the empirical adequacy of such translations, and leading to revision of concepts of risk and governance.
APA, Harvard, Vancouver, ISO, etc. styles
6

Phillips, Andelka M. Buying your Self on the Internet. Edinburgh University Press, 2019. http://dx.doi.org/10.3366/edinburgh/9781474422598.001.0001.

Full text
Abstract:
The personal genomics industry (aka direct-to-consumer genetic testing) has created a market for genetic tests as consumer services. This has taken genetic testing out of the clinic and into people’s homes. The industry is diverse offering tests for various health conditions and ancestry, as well as more dubious tests, such as ‘peace of mind’ paternity, ‘infidelity’ (or surreptitious testing), child talent, and even matchmaking. It is growing rapidly, but at present many tests are not standardized and the industry has not been subject to specific regulation. As with many other Internet based industries, companies tend to rely on their electronic wrap contracts to govern their relationships with their consumers. This book provides an introduction to the world of personal genomics and examines the rise of the industry and its use of ‘wrap’ contracts, drawing upon the author’s review of the contracts of 71 companies that provide tests for health purposes. It explores the different types of tests available and the issues that this industry raises for law and for society.
APA, Harvard, Vancouver, ISO, etc. styles
7

Maschke, Karen J., and Michael K. Gusmano. Debating Modern Medical Technologies. Praeger, 2018. http://dx.doi.org/10.5040/9798400638398.

Full text
Abstract:
This book analyzes policy fights about what counts as good evidence of safety and effectiveness when it comes to new health care technologies in the United States and what political decisions mean for patients and doctors. Medical technologies often promise to extend and improve quality of life but come with many questions: Are they safe and effective? Are they worth the cost? When should they be allowed on the market, and when should Medicare, Medicaid, and private insurance companies be required to pay for drugs, devices, and diagnostic tests? Using case studies of disputes about the value of mammography screening; genetic testing for disease risk; brain imaging technologies to detect biomarkers associated with Alzheimer’s disease; cell-based therapies; and new, expensive drugs, Maschke and Gusmano illustrate how scientific disagreements about what counts as good evidence of safety and effectiveness are often swept up in partisan fights over health care reform and battles among insurance and health care companies, physicians, and patient advocates. Debating Modern Medical Technologies: The Politics of Safety, Effectiveness, and Patient Access reveals stakeholders’ differing values and interests regarding patient choice, physician autonomy, risk assessment, government intervention in medicine and technology assessment, and scientific innovation as a driver of national and global economies. It will help readers to understand the nature and complexity of past and current policy disagreements and their effects on patients.
APA, Harvard, Vancouver, ISO, etc. styles

Book chapters on the topic "Genetic Testing Market Share"

1

Opeskin, Brian, and David Weisbrot. "Insurance and Genetics: Regulating a Private Market in the Public Interest." In The Moral, Social, and Commercial Imperatives of Genetic Testing and Screening, 125–63. Dordrecht: Springer Netherlands, 2006. http://dx.doi.org/10.1007/978-1-4020-4619-3_6.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
2

Crespo-Herrera, Leonardo A., José Crossa, Mateo Vargas, and Hans-Joachim Braun. "Defining Target Wheat Breeding Environments." In Wheat Improvement, 31–45. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-90673-3_3.

Full text
Abstract:
The main objective of a plant breeding program is to deliver superior germplasm for farmers in a defined set of environments, or a target population of environments (TPE). Historically, CIMMYT has characterized the environments in which the developed germplasm will be grown. The main factors that determine when and where a wheat variety can be grown are flowering time, water availability and the incidence of pests and diseases. A TPE consists of many (population) environments and future years or seasons, that share common variation in the farmers’ fields, it can also be seen as a variable group of future production environments. TPEs can be characterized by climatic, soil and hydrological features, as well as socioeconomic aspects. Whereas the selection environments (SE) are the environments where the breeder does the selection of the lines. The SE are identified for predicting the performance in the TPE, but the SE may not belong to the TPE. The utilization of advanced statistical methods allows the identification of GEI to obtain higher precision when estimating the genetic effects. Multi-environmental testing (MET) is a fundamental strategy for CIMMYT to develop stable high grain yielding germplasm in countries with developing economies. An adequate MET strategy allows the evaluation of germplasm in stress hotspots and the identification of representative and correlated sites; thus, breeders can make better and targeted decisions in terms of crossing, selection and logistic operations.
APA, Harvard, Vancouver, ISO, etc. styles
3

Knight, Amber, and Joshua Miller. "Autonomy in Political Theory." In Prenatal Genetic Testing, Abortion, and Disability Justice, 35–60. Oxford: Oxford University Press, 2023. http://dx.doi.org/10.1093/oso/9780192870957.003.0002.

Full text
Abstract:
Abstract This chapter provides the normative grounding for the book through a brief survey of how autonomy has been conceptualized, valued, and politicized in the Western canon of political thought. Traditionally defined as an individual’s capacity for self-rule, autonomy has been criticized from various schools of thought. Feminist theorists and critical disability scholars have been especially critical of its associations with unrealistic masculinist and ableist conceptions of citizenship. We share these concerns, but our goal in this chapter is to rethink autonomy without giving it up. We do so by endorsing a relational conception of autonomy—one that foregrounds socialization and takes stock of how oppressive social relations negatively impact on the exercise of autonomy in practice—within a perfectionist liberal framework. Ultimately, we show how a relational understanding of reproductive autonomy committed to perfectionist liberal politics is descriptively accurate, is normatively desirable, and has more political purchase than its mainstream liberal predecessor.
APA, Harvard, Vancouver, ISO, etc. styles
4

Liu, Zhenya, Danyuanni Han, and Shixuan Wang. "Testing Bubbles: Exuberance and collapse in the Shanghai A-share stock market." In China's New Sources of Economic Growth: Vol. 1. ANU Press, 2016. http://dx.doi.org/10.22459/cnseg.07.2016.11.

Full text
APA, Harvard, Vancouver, ISO, etc. styles
5

"Genetic Algorithms (GAs) and Stock Trading Systems." In Advances in Computational Intelligence and Robotics, 154–72. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-4105-0.ch009.

Full text
Abstract:
Recent advances and improvements in hardware and software technology have benefitted research in the field of finance. These advances have made research even more interesting, and a large amount of data can be processed with lightning speed. Investors are now eagerly looking for small models and algorithms of lucrative value and with better accuracy. Earlier, the decision to enter or exit the market was taken emotionally, driven by fear rather than by rational thinking. Due to this, a lot of bad decisions were taken, and a huge amount of capital was lost. With the help of this computational system, a computer program will decide when to enter and when to exit the market without any human intervention. The decision here will be taken by testing a large number of permutations and combinations of the various parameters associated with the system. This chapter briefly explains the scope and agenda of various algorithms for computational research in finance.
APA, Harvard, Vancouver, ISO, etc. styles
6

Zhang, Lingwei, Xiaolei Ding, and Biyuan Yang. "The Validity of Quantitative Technical Indicator Factors Based on Back Testing." In Advances in Transdisciplinary Engineering. IOS Press, 2024. http://dx.doi.org/10.3233/atde240052.

Full text
Abstract:
Technical indicator factors can quickly reflect the transformation of current market behavior. The application system in quantitative trading has become increasingly mature in recent years. Back testing is widely used in factor validity tests because of its effectiveness. However, the current research on the effectiveness of technical indicator factors has ignored the adaptability to the model timing strategy, and the use of factors is not differentiated enough. How to carry out reasonable and effective factor validity research has become a difficult problem for many scholars to discuss. This paper first selected representative technical indicators as the research object and crawled the trading data of Chinese A-share listed companies through Python. It then processed the sample data using Python libraries such as Pandas and NumPy. Furthermore, this paper confirms the optimal interval of each factor with the method of back testing. On this basis, it tests the income distribution of each factor and the maximum pullback. It introduces the timing method of the simple moving average for comparison and discusses the feasibility of using a technical indicator strategy to conduct stock selection trading.
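To make the back-testing idea concrete, here is a minimal, generic pandas sketch of back-testing a single technical-indicator factor (a moving-average signal) on synthetic prices, including the one-bar signal lag that avoids look-ahead bias and the maximum drawdown ("maximum pullback") measure the chapter refers to. The window length, data, and absence of costs are assumptions, not values from the chapter.

```python
# Minimal back-test of a moving-average technical indicator factor on synthetic prices.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0003, 0.015, 1000))))

window = 20                                           # assumed indicator interval
signal = (prices > prices.rolling(window).mean()).astype(float)
position = signal.shift(1).fillna(0.0)                # trade on the next bar: no look-ahead
daily_ret = prices.pct_change().fillna(0.0)
strategy_ret = position * daily_ret

equity = (1.0 + strategy_ret).cumprod()
max_drawdown = (equity / equity.cummax() - 1.0).min()

print(f"total return: {equity.iloc[-1] - 1.0:.2%}")
print(f"max drawdown: {max_drawdown:.2%}")
```

Sweeping the `window` parameter over a grid and comparing the resulting return distributions and drawdowns is the simplest version of the "optimal interval" search described above.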
APA, Harvard, Vancouver, ISO, etc. styles
7

Rani K. P., Asha, and Gowrishankar S. "Integration of Advanced Design Patterns in Deep Learning for Agriculture Along With Waste Processing." In Revolutionizing Automated Waste Treatment Systems, 320–54. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-6016-3.ch021.

Full text
Abstract:
Over the past few decades, there has been a tremendous development in machine learning (ML), particularly in the areas of deep learning (DL) and transfer learning (TL). Deep learning has emerged as a powerful approach for solving complex problems in various domains such as computer vision, natural language processing, and speech recognition. At the same time, transfer learning has proven to be an effective technique for leveraging pre-trained deep learning models in new application domains with limited data. Design patterns, that are formalized best practices, offer a way to capture common problems and provide reusable solutions using generic and well-proven machine learning designs. This chapter aims to provide an overview of the advancements in deep learning and transfer learning, while emphasizing the significance of design patterns in addressing common challenges during the design of machine learning applications and systems. This work explores the implementation and results of various machine learning models on the mushroom classification dataset. The dataset comprises descriptions of 23 species of gilled mushrooms, with diverse features like cap shape, color, odor, and more. The goal was to classify mushrooms as edible, poisonous, or of unknown edibility. Among the models considered, the multi-layer perceptron (MLP), recurrent neural network (RNN), long short-term memory (LSTM), autoencoders, and Boltzmann machine were trained and evaluated. The MLP, RNN, and LSTM exhibited exceptional performance, achieving perfect training and testing accuracies of 1.0000. These models successfully learned the underlying patterns and features, resulting in accurate predictions on both training and shown test data. Deep learning can optimize mushroom waste processing by classifying waste types, optimizing composting conditions, and extracting nutrients for reuse, enhancing sustainability and resource recovery in agriculture. It also predicts market demand, automates quality control, and facilitates predictive maintenance, improving efficiency and reducing environmental impact.
APA, Harvard, Vancouver, ISO, etc. styles
8

Deamer, David W. "Integrating Chemistry, Geology, and Life's Origin (coauthored with Bruce Damer)." In Assembling Life. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780190646387.003.0015.

Full text
Abstract:
Chapter 8 recalled John Platt’s recommendation that testing alternative hypotheses is a preferred way to perform research rather than focusing on a single hypothesis. Karl Popper proposed an additional way to evaluate research approaches, which is that a strong hypothesis is one that can be falsified by one or more crucial experiments. This chapter proposes that life can begin with chance ensembles of encapsulated polymers, some of which happen to store genetic information in the linear sequences of their monomers while others catalyze polymerization reactions. These interact in cycles in which genetic polymers guide the synthesis of catalytic polymers, which in turn catalyze the synthesis of the genetic polymers. At first, the cycle occurs in the absence of metabolism, driven solely by the existing chemical energy available in the environment. At a later stage, other polymers incorporated in the encapsulated systems begin to function as catalysts of primitive metabolic reactions described in Chapter 7. The emergence of protocells with metabolic processes that support polymerization of self-reproducing systems of interacting catalytic and genetic polymers marks the final step in the origin of life. The above scenario can be turned into a hypothesis if it can be experimentally tested— or falsified, as described in the epigraph. The goal of falsification tends to be uncomfortable for active researchers. It’s a very human tendency to be delighted with a creative new idea and want to prove it correct. This can be such a strong emotion that some fall in love with their idea and actually hesitate to test it. They begin to dislike colleagues who are critical and skeptical. However, my experience after 50 years of active research is that we need to think of our ideas as mental maps and expect that most of them will not match the real world very well. And so, I say to my students, “When you have a new idea it’s OK to enjoy it and share it with others, but then you must come up with an experiment that lets you discard it.
APA, Harvard, Vancouver, ISO, etc. styles
9

Parrington, John. "Life as a Machine." In Redesigning Life, 209–33. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198766834.003.0010.

Full text
Abstract:
Bacteria are a source of many of the tools used in biotechnology. A technique called the polymerase chain reaction, or PCR, made it possible for the first time to amplify tiny starting amounts of DNA and has revolutionised medical diagnosis, testing of IVF embryos for mutations, and forensic science. PCR involves the repeated generation of DNA from a starting sequence in a cycle, one stage of which occurs at boiling point. Because of this PCR uses a DNA polymerase enzyme purified from an ‘extremophile’ bacterium that lives in hot springs. More recently scientists have constructed artificial bacterial or yeast genomes from scratch. The next step will be to create reconfigured bacteria and yeast with enhanced characteristics for use in agriculture, energy production, or generation of new materials. Some scientists are now seeking to expand the genetic code itself. The DNA code that human beings share with all other species on the planet has four ‘letters’, A, C, G, and T, which pair as A:T and C:G to join the two strands of the DNA double helix. And each particular triplet of DNA letters, for instance CGA, or TGC, codes for a specific amino acid, the 20 different amino acids joining together in a specific sequence to make up a particular protein. Scientists have now developed a new DNA letter pair, X:Y. By introducing this into an artificial bacterial genome, it is becoming possible to create many more amino acids than the current 20 naturally occurring ones, and thereby allowing many new types of proteins.
APA, Harvard, Vancouver, ISO, etc. styles
10

Prabhakaran, Arya, Tincy Varghese, Subodh Gupta, Sunil Kumar Nayak, Vasanthakumaran Kumar, and Prabhashlal P. "Implications of Fish Model in Drug Discovery." In Futuristic Trends in Agriculture Engineering & Food Sciences Volume 3 Book 15, 625–38. Iterative International Publisher, Selfypage Developers Pvt Ltd, 2024. http://dx.doi.org/10.58532/v3bcag15p5ch4.

Full text
Abstract:
A drug is any chemical that can change the way our body works. Drugs are important to human beings because they act as a cure for new diseases or aid in the reduction of previously untreatable conditions. In some cases, it may not be a new drug that is required, but that a previously known drug can be used for a new purpose. Introducing a new drug to market is a complex and time-consuming process that can cost pharmaceutical companies an average of $1.3 billion and ten years of research and development. There are multiple defined stages for this process, each with their own associated challenges, timelines, and costs. Drug development involves extensive testing in animal models to determine if the drug is safe for human trials and if it performs as it should. The reduction of animal testing is one of the goals set out by the FDA in ISO10993, where zebrafish can be considered an alternative organism which can efficiently replace rats and other test animals. With the completion of the zebrafish genome project and the establishment of a robust infrastructure for genetic and physiological studies, the zebrafish system sits poised to take on a larger role in the field of drug development. By contributing to target identification and validation, drug lead discovery and toxicology, the zebrafish might provide a shorter route to developing novel therapies for human disease. In the upcoming years, the advancement of contemporary technology may enable the zebrafish to serve as a significant stand-in for other mammalian models used in pharmaceutical discovery.
APA, Harvard, Vancouver, ISO, etc. styles

Conference proceedings on the topic "Genetic Testing Market Share"

1

Young, Kevin M., and Scott M. Ferguson. "Intelligent Genetic Algorithm Crossover Operators for Market-Driven Design." In ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/detc2016-59534.

Full text
Abstract:
Heuristic algorithms have been adopted as a means of developing solutions for complex problems within the design community. Previous research has looked into the implications of genetic algorithm tuning when applied to solving product line optimization problems. This study investigates the effects of developing informed heuristic operators for product line optimization problems, specifically in regards to optimizing the market share of preference of an automobile product line. Informed crossover operators constitute operators that use problem-related information to inform their actions within the algorithm. For this study, a crossover operator that alters its actions based on the relative market share of preference for each product within product lines was found to be most effective. The presented results indicate a significant improvement in computational efficiency and increases in market share of preference when compared to a standard scattered crossover approach. Future work in this subject will investigate the development of additional informed selection and mutation operators, as well as problem informed schema.
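The "informed" crossover idea described in this abstract can be sketched generically: instead of swapping genes at random, the operator looks at the relative share of preference each product contributes within its parent line and preferentially passes on the stronger products. The snippet below is a hypothetical illustration using a simple multinomial-logit share calculation; it is not the authors' operator, data, or utility model.

```python
# Share-informed crossover for product-line optimisation (illustrative only).
import numpy as np

rng = np.random.default_rng(7)
PART_WORTHS = rng.normal(size=5)                      # assumed consumer part-worths per attribute

def product_shares(line):
    """Multinomial-logit share of preference per product, with a no-buy outside option."""
    utilities = line @ PART_WORTHS
    expu = np.exp(np.append(utilities, 0.0))          # utility 0 for the outside good
    return expu[:-1] / expu.sum()

def informed_crossover(parent_a, parent_b):
    """Child inherits, slot by slot, the product with the higher share of preference."""
    shares_a, shares_b = product_shares(parent_a), product_shares(parent_b)
    pick_a = shares_a >= shares_b
    return np.where(pick_a[:, None], parent_a, parent_b)

# Two parent product lines: 4 products x 5 attribute levels (coded numerically).
parent_a = rng.normal(size=(4, 5))
parent_b = rng.normal(size=(4, 5))
child = informed_crossover(parent_a, parent_b)
print("child line share of preference:", product_shares(child).sum().round(3))
```

In a full product-line GA this operator would replace (or complement) a scattered crossover, which is the comparison the paper reports.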
APA, Harvard, Vancouver, ISO, etc. styles
2

Phelps, James W. "Competitiveness in the US Optics Industry." In Optical Fabrication and Testing. Washington, D.C.: Optica Publishing Group, 1988. http://dx.doi.org/10.1364/oft.1988.wa2.

Full text
Abstract:
There is no question that foreign competition has been successful in taking over the optical production market. In 1984, 63.5% of high technology laser optics were made in North America, with 17.5% produced through off-shore facilities. It has been predicted that by the end of 1988 the North American share will drop to 45% and the Asian share alone will increase to 40.0%.
APA, Harvard, Vancouver, ISO, etc. styles
3

Turner, Callaway, Scott Ferguson, and Joseph Donndelinger. "Exploring Heterogeneity of Customer Preference to Balance Commonality and Market Coverage." In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/detc2011-48581.

Full text
Abstract:
Offering increased variety in a market is one method of capturing greater market share. However, we generally observe diminishing marginal returns in share as the size of the product line is increased. Leveraging commonality is a means of offsetting this constraint as it leads to reductions in manufacturing costs and build complexity. Product platforms strive to capitalize on the naturally occurring phenomena that yield commonality in a product line. The structure of design variable values of individually optimized products creates opportunities for commonality in a bottom-up platform, while a top-down platform discovers opportunities for commonality through similarity in customer preferences. This paper explores the effect of changing the number of products, and commonality between those products, on market share. Results from designing a varying number of products independently are leveraged to create a bottom-up product platform. A top-down product platform approach based on a heterogeneous discrete choice model and a multiobjective genetic algorithm is presented that allows commonality decisions and product configuration to occur simultaneously. Using the platforming techniques presented in this paper, it is shown that the top-down platforming approach allows for better-informed platform design by providing knowledge of the tradeoff between commonality and market share.
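The commonality-versus-share tradeoff discussed here can be illustrated with a toy calculation: forcing two products to share a component pulls their attribute levels toward a compromise, and with heterogeneous customer segments the line typically loses a little share of preference. The segments, part-worths, and attribute codings below are hypothetical and only show the shape of the tradeoff; they do not reproduce the paper's model or results.

```python
# Toy illustration: share of preference lost when two products share a component.
import numpy as np

beta_segments = np.array([[1.5, 0.2],     # segment A cares mostly about attribute 1
                          [0.2, 1.5]])    # segment B cares mostly about attribute 2

def line_share(products):
    """Average, over segments, of the logit probability of buying anything in the line."""
    shares = []
    for beta in beta_segments:
        expu = np.exp(products @ beta)
        shares.append(expu.sum() / (expu.sum() + 1.0))   # outside good has utility 0
    return float(np.mean(shares))

distinct = np.array([[1.0, 0.0],          # product 1 tuned for segment A
                     [0.0, 1.0]])         # product 2 tuned for segment B
common = distinct.copy()
common[:, 0] = distinct[:, 0].mean()      # platform forces a shared first attribute

print(f"share with distinct designs : {line_share(distinct):.3f}")
print(f"share with shared component : {line_share(common):.3f}")
```

Weighing that small loss of share against the cost savings from the shared component is exactly the tradeoff the top-down platforming approach is meant to expose.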
Styles APA, Harvard, Vancouver, ISO, etc.
4

Ponso, Alberto, Angelo Bonfitto, Mario Silvagni et Sara Luciani. « Off-Board Testing Device for Battery Diagnostics and Market Analysis for Battery Reuse ». Dans ASME 2023 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2023. http://dx.doi.org/10.1115/detc2023-116596.

Texte intégral
Résumé :
The growth in electric vehicle market share is one of the main actions taken to fight greenhouse gas emissions, but it also brings new environmental challenges to the table. Because of the high costs connected to the extraction of lithium and other battery raw materials, the environmental risks posed by battery disposal and the intermittent nature of electricity production from renewable sources, batteries which are no longer suitable for traction use can turn from waste into a resource through their recycling for stationary applications. This paper presents a stand-alone device for the diagnostics of battery modules with the following applications: a) it can be used by EV owners for predictive maintenance; b) it enables end-of-line quality control in manufacturing plants after the production of battery cells; c) it can be used to test batteries exhausted for traction use, matching batteries of similar quality for stationary storage units based on second-life batteries. The application of the proposed device can have noticeable impacts on battery manufacturing procedures, accelerating the transition towards EV-based mobility. Diagnostics are performed through artificial intelligence trained on experimental characterization of battery cells. This paper presents, in particular, the hardware selected for a 48 V 25 Ah battery, but the general architecture of the device and the methodology of the training procedure can be extended to any battery size.
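The paper does not publish its matching procedure, but as a hedged illustration of the "matching batteries with similar quality" idea, the sketch below bins tested modules by an estimated state of health so that modules of comparable quality can be grouped into the same second-life storage pack. The 25 Ah nominal capacity echoes the module size mentioned in the abstract; the bin width, the reuse threshold, and all names are invented for illustration.

```python
def state_of_health(measured_capacity_ah, nominal_capacity_ah=25.0):
    """State of health as remaining capacity over nominal capacity (the
    48 V / 25 Ah module from the paper is taken as the nominal size)."""
    return measured_capacity_ah / nominal_capacity_ah

def group_modules_for_second_life(modules, bin_width=0.05, min_soh=0.6):
    """Bin tested modules by state of health so modules of similar quality can
    be assembled into the same stationary-storage pack.  `modules` is a list of
    (module_id, measured_capacity_ah) pairs; the thresholds are assumptions."""
    bins = {}
    for module_id, capacity in modules:
        soh = state_of_health(capacity)
        if soh < min_soh:          # below this, recycle rather than reuse
            bins.setdefault("recycle", []).append(module_id)
            continue
        key = round(soh // bin_width * bin_width, 2)
        bins.setdefault(key, []).append(module_id)
    return bins

# Example: modules returned from traction use with measured capacities in Ah.
tested = [("M01", 24.1), ("M02", 18.6), ("M03", 18.9), ("M04", 13.7)]
print(group_modules_for_second_life(tested))
```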
Styles APA, Harvard, Vancouver, ISO, etc.
5

Li, Meifang, et Mian Li. « Decision Support for Performance Arts Using Support Vector Regression With Genetic and Particle Swarm Algorithms ». Dans ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-47059.

Texte intégral
Résumé :
Different from typical mechanical products, tickets for movies and performing arts can be considered a special type of consumer product. Compared to the widely known single-output problem of box-office receipts prediction in the movie industry, estimating the market share and price for performing arts is still a challenging problem due to high-dimensional datasets and a limited number of samples. This paper describes a data-driven decision support system to help arts managers make strategic decisions, especially on session determination and price setting, considering price discrimination and prediction of the corresponding sales volume. Eight different attributes from the database, with multiple labels in each attribute, are used to accurately and comprehensively represent and classify the characteristics of performing arts in each genre. A web-based influence factor is also defined to quantify the popularity and publicity of performing arts. For this multi-input and multi-output problem, support vector regression (SVR) is employed and its optimal parameters are determined using a genetic algorithm (GA) and particle swarm optimization (PSO), respectively. The price utility axiom, together with the law of demand, is applied to maximize the receipts. Compared to artificial neural networks (ANN), these two optimization-based SVR methods perform much better in terms of effectiveness and reliability.
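To make the tuning step concrete, here is a minimal sketch, assuming scikit-learn's SVR, of a small real-coded genetic algorithm searching over (C, epsilon, gamma) by cross-validated error. It is shown for a single output and synthetic data for brevity; the search ranges, population size, and mutation scheme are assumptions, not the paper's settings (which also include a PSO variant).

```python
import random
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def fitness(params, X, y):
    """Negative cross-validated MSE of an SVR with candidate (C, epsilon, gamma)."""
    C, eps, gamma = params
    model = SVR(C=C, epsilon=eps, gamma=gamma)
    return cross_val_score(model, X, y, cv=3,
                           scoring="neg_mean_squared_error").mean()

def random_params():
    # Log-uniform ranges are assumptions, not the paper's search space.
    return (10 ** random.uniform(-2, 3),    # C
            10 ** random.uniform(-3, 0),    # epsilon
            10 ** random.uniform(-4, 1))    # gamma

def ga_tune_svr(X, y, pop_size=20, generations=10, mutation_sigma=0.3):
    """Very small real-coded GA over SVR hyperparameters (a stand-in for the
    GA/PSO searches used in the paper)."""
    population = [random_params() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=lambda p: fitness(p, X, y), reverse=True)
        parents = scored[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            # Uniform crossover on each gene plus multiplicative log-normal mutation.
            child = tuple(
                max(1e-6, random.choice(pair) * 10 ** random.gauss(0, mutation_sigma))
                for pair in zip(a, b))
            children.append(child)
        population = parents + children
    return max(population, key=lambda p: fitness(p, X, y))

# Synthetic stand-in for the (high-dimensional, small-sample) arts dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))          # eight attributes, as in the paper
y = X @ rng.normal(size=8) + rng.normal(scale=0.1, size=60)
print(ga_tune_svr(X, y, pop_size=10, generations=5))
```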
Styles APA, Harvard, Vancouver, ISO, etc.
6

Ivanova-Kadiri, Ivelina. « Customer Genetic Data for Sustainability and Innovation Management ». Dans 9th International Scientific Conference ERAZ - Knowledge Based Sustainable Development. Association of Economists and Managers of the Balkans, Belgrade, Serbia, 2023. http://dx.doi.org/10.31410/eraz.s.p.2023.169.

Texte intégral
Résumé :
The availability of affordable genetic testing has enabled the collection of vast amounts of genetic data, creating new opportunities for marketing management. The use of genetic data empowers companies to develop personalized products and services and enhance customer relationship management. This, in turn, creates a competitive advantage for boosting companies’ strategic market positioning by enhancing their sustainability and innovation policies. This review paper aims to explore how businesses can leverage genetic data for sustainability and innovation management. The framework presented outlines the integration of genetic data into different stages of sustainable product development thus allowing for precision targeting through responsible innovation management. The paper also examines the potential ethical and legal implications of using genetic data in marketing management.
Styles APA, Harvard, Vancouver, ISO, etc.
7

Foster, Garrett, et Scott Ferguson. « Enhanced Targeted Initial Populations for Multiobjective Product Line Optimization ». Dans ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-13303.

Texte intégral
Résumé :
Initial populations for genetic algorithms are often created using randomly generated designs in an effort to maximize the genetic diversity in the design space. However, research indicates that the inclusion of solutions generated based on domain knowledge (i.e. non-random solutions) can notably improve the performance of the genetic algorithm with respect to solution performance and/or computational cost for convergence. This performance increase is extremely valuable for computationally expensive problems, such as product line optimization. In prior research, the authors demonstrated these improvements for product line design problems where market share of preference was the performance objective. Initial product line solutions were constructed from products that had the largest product-level utility for individual respondents. However, this simple product identification strategy did not adequately scale to accommodate the richer design problem associated with multiple objectives. This paper extends the creation of targeted initial populations to multiobjective product line design problems by using the objectives of the problem, instead of product-level utility, to identify candidate designs. MP3 player and vehicle feature packaging product line design problems are used to demonstrate this approach and assess the improvement from this modification.
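The following is a hypothetical Python sketch of the general idea of a targeted initial population: part of the population is assembled from products that score highest on each optimization objective (rather than on respondent-level utility alone), and the remainder stays random to preserve diversity. Function names, the targeted fraction, and the toy objectives are assumptions, not the authors' implementation.

```python
import random

def targeted_initial_population(candidate_products, objectives, pop_size,
                                line_size, targeted_fraction=0.25):
    """Build a GA initial population of product lines in which a fraction of
    individuals is assembled from products scoring highest on each objective,
    and the remainder is random to preserve diversity.

    `objectives` is a list of functions mapping a single product to a score."""
    n_targeted = int(pop_size * targeted_fraction)
    population = []
    for i in range(n_targeted):
        obj = objectives[i % len(objectives)]
        ranked = sorted(candidate_products, key=obj, reverse=True)
        # Sample from the top products for this objective so the targeted
        # individuals are not all identical.
        pool = ranked[: max(line_size * 3, line_size)]
        population.append(random.sample(pool, line_size))
    while len(population) < pop_size:
        population.append(random.sample(candidate_products, line_size))
    return population

# Tiny demo with two toy objectives (a share proxy and a margin proxy).
random.seed(0)
products = [(random.random(), random.random()) for _ in range(100)]
objs = [lambda p: p[0], lambda p: p[1]]
pop = targeted_initial_population(products, objs, pop_size=20, line_size=3)
print(len(pop), pop[0])
```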
Styles APA, Harvard, Vancouver, ISO, etc.
8

Kale, Sandip, et Jagadeesh Hugar. « Static Strength Design of Small Wind Turbine Blade Using Finite Element Analysis and Testing ». Dans ASME 2015 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/imece2015-53485.

Texte intégral
Résumé :
Today, wind power has become the most widely accepted renewable energy source and contributes a major share of the renewable energy market. Large wind turbines now produce power effectively and deliver performance that satisfies researchers, scientists, investors and governments, and large wind turbine technology has achieved a respectable position across the globe. In addition, small wind technology has started to move toward satisfactory growth, and many experts forecast considerable growth in the coming decades. Small wind turbine technology will be accepted by the market if industry provides small wind turbines with desirable characteristics: self-starting behavior at low wind speed, affordable cost, a maintenance-free system, low weight, and reliable, satisfactory performance in low wind. Users also expect a low-weight tower-top system (and hence a lighter supporting structure), a light and efficient generator, a rotor that converts wind energy to mechanical energy efficiently, and simple component manufacturing. This work is one attempt to design and develop a blade for a small wind turbine in line with these objectives. The blade is the most important element of a wind turbine system, converting wind energy into mechanical energy, and in addition to efficient aerodynamic design, its strength design is important so that it can withstand the various loads acting on it. Wind turbine blade strength has been analyzed by different researchers through static and fatigue testing. The objective of the present work is to perform a static strength test on a newly developed blade of 1.5 m length. This blade uses two new airfoils: a thick airfoil at the root and a thin airfoil for the remaining sections. The loads acting on the blade are calculated using Blade Element Momentum theory at the survival wind speed. The blade is manufactured from glass fiber reinforced plastic, and the properties of the material combination used are determined per ASTM norms. The computational strength analysis is carried out using ANSYS, with the blade treated as a cantilever beam under an equivalent applied load, and the blade is also tested experimentally using strain gauges. Both analyses show that the developed blade can withstand the various loads acting on a wind turbine blade at the survival wind speed.
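The paper's strength analysis relies on Blade Element Momentum loads, ANSYS, and strain-gauge testing; as a much cruder, purely illustrative companion, the sketch below estimates the flapwise root bending stress of a parked, cantilevered blade at survival wind speed by treating the planform as a flat plate. Every number (chord, drag coefficient, section modulus, load centroid) is an assumption, not a value from the paper.

```python
import math

def survival_drag_load(blade_length_m, mean_chord_m, survival_wind_ms,
                       air_density=1.225, drag_coefficient=1.3):
    """Very rough flatwise drag load on a single parked blade at survival wind
    speed, treating the blade planform as a flat plate normal to the wind (an
    illustrative stand-in for the Blade Element Momentum loads in the paper)."""
    planform_area = blade_length_m * mean_chord_m
    return 0.5 * air_density * survival_wind_ms ** 2 * drag_coefficient * planform_area

def root_bending_stress(per_blade_force_n, blade_length_m,
                        section_modulus_m3, load_centroid_fraction=0.5):
    """Flapwise bending stress at the blade root, modelling the blade as a
    cantilever with the resultant load applied at a fraction of its length."""
    moment = per_blade_force_n * blade_length_m * load_centroid_fraction
    return moment / section_modulus_m3

# Illustrative numbers for a 1.5 m GFRP blade (not the paper's values).
force = survival_drag_load(blade_length_m=1.5, mean_chord_m=0.15,
                           survival_wind_ms=40.0)
stress_pa = root_bending_stress(force, blade_length_m=1.5,
                                section_modulus_m3=5.0e-6)
print(f"per-blade load ~ {force:.0f} N, root stress ~ {stress_pa / 1e6:.1f} MPa")
```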
Styles APA, Harvard, Vancouver, ISO, etc.
9

Della Villa, Salvatore A. « Energy Innovation : A Focus on Power Generation Data Capture and Analytics in a Competitive Market ». Dans ASME Turbo Expo 2018 : Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/gt2018-75030.

Texte intégral
Résumé :
Efficiency, operational flexibility, durability, environmental friendliness, and reliability — these words are often used to define expectations and requirements for product performance in the energy market today. Yet, depending on where one looks at the curve of product evolution, these words have very different meanings. One thing remains constant: irrespective of where one looks at performance growth over time, product improvement requires actual field experience data on a number of operating plants and units, to supplement the testing and verification processes of the Original Equipment Manufacturer (OEM). The combination of the field experience with the latest testing, verification, and quality control techniques, followed by the OEM, is essential to achieve the inherent reliability and, ultimately, the life-time expectations set by the market. There is always a question as to the amount of actual field experience data that is necessary to validate product improvement or advancement. Today, the market seems to accept a lower standard of operating time (∼ 8,000 Actual Operating Hours (AOH)), compared with the past (∼ 100,000 AOH) for product validation. If Equivalent Operating Hours (EOH) were the basis for assessing the maturity of a new technology, the times would be lower. Whether the number is 8,000 or 100,000 AOH, the fact is that field experience data represents the intersection of the OEM's design, validated through component and system level testing, coupled with the actual operating and maintenance practice and experience of the owner/operator. It is here that the OEM and owner/operator begin to directly share a common objective: the ability to demonstrate the efficiency, operational flexibility, durability, environmental friendliness and reliability of the product improvement. Consequently, the OEM and the owner/operator have a strong and shared interest in the accuracy, fidelity, and completeness of field data; to ensure that an effective Failure Reporting and Corrective Action (FRACA) process is in place — driving continuous product improvement; to meet market expectations for equipment performance, and to support new product introduction. The focus of this paper is field experience data, past and present, with a specific emphasis on the increasing influence and value that data has in the ever-changing and competitive energy market. The paper will suggest that the present and future need for high-fidelity equipment data (at a component level of detail) is essential not only for supporting engineering efforts for product evolution, but also for supporting effective Operations & Maintenance (O&M) strategies. The paper will advance the notion that the fusion of total plant data, from three primary sources, with the ability to transform, analyze, and act based on integrating subject matter expertise is essential for effectively managing assets for optimum performance and profitability; executing and delivering on the promise of "Big Data" and advanced analytics.
Styles APA, Harvard, Vancouver, ISO, etc.
10

Jung, Sangjin, et Timothy W. Simpson. « Multidisciplinary Analysis and Product Family Optimization of Front-Loading Washing Machines ». Dans ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/detc2016-59520.

Texte intégral
Résumé :
In the past decade, the market share of front-loading washing machines has seen explosive growth in the United States. As a result, many companies are now offering families of front-loading washing machines with a variety of features and options. Understanding the tradeoffs within these product families is challenging as existing research has focused primarily on a single disciplinary analysis (e.g., dynamic analysis, strength analysis); few models exist for cleanliness, reliability, energy efficiency, etc. In this paper, we introduce a new integrated multidisciplinary analysis using simulations, mathematical models, and response surface models based on the ratings of product attributes. In order to determine feasible design solutions for a front-loading washer family, we formulate a product family design problem using deviation functions and a product family penalty function. A multi-objective genetic algorithm is applied to solve the proposed formulation, and the results indicate that designers can successfully determine solutions for the best-performance, most-common, and compromise families of front-loading washers.
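The abstract does not reproduce the deviation or penalty functions, so the sketch below shows one common form such terms can take: a product-family penalty that grows with the spread of each design variable across family members, and a goal-programming style deviation from attribute targets. Both functions and the toy numbers are assumptions for illustration only, not the authors' formulation.

```python
def product_family_penalty(family):
    """Product-family-penalty-style commonality measure: average, over design
    variables, of the normalised spread of that variable across family members.
    0.0 means every member shares identical values (a fully common platform);
    larger values mean less commonality.

    `family` is a list of designs, each a list of design-variable values."""
    n_vars = len(family[0])
    penalty = 0.0
    for j in range(n_vars):
        values = [design[j] for design in family]
        mean = sum(values) / len(values)
        if mean == 0:
            continue
        penalty += (max(values) - min(values)) / abs(mean)
    return penalty / n_vars

def deviation_from_targets(design, targets, tolerances):
    """Goal-programming style deviation: how far each attribute rating falls
    short of its target, normalised by an allowed tolerance."""
    return sum(max(0.0, (t - x) / tol)
               for x, t, tol in zip(design, targets, tolerances))

# Toy family of three washer designs described by four design variables.
family = [[0.45, 1200, 7.5, 0.82], [0.45, 1400, 8.0, 0.86], [0.50, 1400, 9.0, 0.88]]
print(round(product_family_penalty(family), 3))

targets = [0.50, 1400, 9.0, 0.90]
tols = [0.05, 200, 1.0, 0.05]
print([round(deviation_from_targets(d, targets, tols), 2) for d in family])
```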
Styles APA, Harvard, Vancouver, ISO, etc.

Rapports d'organisations sur le sujet "Genetic Testing Market Share"

1

Abbott, Albert G., Doron Holland, Douglas Bielenberg et Gregory Reighard. Structural and Functional Genomic Approaches for Marking and Identifying Genes that Control Chilling Requirement in Apricot and Peach Trees. United States Department of Agriculture, septembre 2009. http://dx.doi.org/10.32747/2009.7591742.bard.

Texte intégral
Résumé :
Structural and functional genomic approaches for marking and identifying genes that control chilling requirement in apricot and peach trees. Specific aims: 1) Identify and characterize the genetic nature of chilling requirement for flowering and dormancy break of vegetative shoots in Prunus germplasm through the utilization of existing apricot (Newe Ya'ar Research Center, ARO) and peach (Clemson University) genetic mapping populations; 2) Use molecular genetic mapping techniques to identify markers flanking genomic regions controlling chilling; 3) Comparatively map the regions controlling chilling requirement in apricot and peach and locate important genomic regions influencing chilling requirement on the Prunus functional genomic database as an initial step for identification of candidate genes; 4) Develop from the functional genomics database a set of markers facilitating the development of cultivars with optimized chilling requirements for improved and sustained fruit production in warm-winter environments. Dormant apricot (Prunus armeniaca L.) and peach [Prunus persica (L.) Batsch] trees require sustained exposure to low, near-freezing temperatures before vigorous floral and vegetative bud break is possible after the resumption of warm temperatures in the spring. The duration of chilling required (the chilling requirement, CR) is determined by the climatic adaptation of the particular cultivar, thus limiting its geographic distribution. This limitation is particularly evident when attempting to introduce superior cultivars to regions with very warm winter temperatures, such as Israel and the coastal southern United States. The physiological mechanism of CR is not understood, and although breeding programs deliberately manipulate CR in apricot and peach crosses, robust markers closely associated with the trait are currently not available. We used segregating populations of apricot (100 F1 individuals, Newe Ya'ar Research Center, ARO) and peach (378 F2 individuals, Clemson University) to discover several discrete genomic loci that regulate CR and blooming date. We used the extensive genomic/genetic resources available for Prunus to successfully combine our apricot and peach genetic data and identify five QTL with strong effects that are conserved between species, as well as several QTL that are unique to each species. We have identified markers in the key major QTL regions for testing in breeding programs, which we are carrying out currently; we have identified an initial set of candidate genes using the peach physical/transcriptome map and whole peach genome sequences, and we are testing these currently to identify key target genes for manipulation in breeding programs. Our collaborative work to date has demonstrated the following: 1) CR in peach and apricot is predominantly controlled by a limited number of QTL loci, seven detected in a peach F2-derived map comprising 65% of the character and 12 in an apricot F1 map comprising 71.6% and 55.6% of the trait in the Perfection and A. 1740 parental maps, respectively, and that peach and apricot appear in our initial maps to share five genomic intervals containing potentially common QTL. 2) Application of common anchor markers of the Prunus/peach physical/genetic map resources has allowed us not only to identify the shared intervals but also to have immediately available some putative candidate gene information from these intervals: the EVG region on LG1 in peach, the TALY1 region in apricot (on LG2 in peach), and several others involved in vernalization pathways (LG1 and LG7). 3) Mapped BAC contigs are easily defined from the complete physical map resources in peach through the common SSR markers that anchor our CR maps in the two species. 4) Sequences of BACs in these regions can be easily mined for additional polymorphic markers to use in MAS applications.
Styles APA, Harvard, Vancouver, ISO, etc.
2

Fridman, Eyal, et Eran Pichersky. Tomato Natural Insecticides : Elucidation of the Complex Pathway of Methylketone Biosynthesis. United States Department of Agriculture, décembre 2009. http://dx.doi.org/10.32747/2009.7696543.bard.

Texte intégral
Résumé :
Plant species synthesize a multitude of specialized compounds to help ward off pests, and these in turn may well serve as an alternative to synthetic pesticides to reduce environmental damage and health risks to humans. The general goal of this research was to perform a genetic and biochemical dissection of the natural-insecticide methylketone pathway that is specific to the glandular trichomes of the wild species of tomato, Solanum habrochaites f. glabratum (accession PI126449). Previous studies conducted by us have demonstrated that these compounds are synthesized de novo as a derivative pathway of fatty acid biosynthesis, and that a key enzyme, designated Methylketone Synthase 1 (MKS1), catalyzes conversion of the intermediate β-ketoacyl-ACPs to the corresponding Cn-1 methylketones. The approach taken in this project was to use an interspecific F2 population, derived from the cross between the cultivated M82 and the wild species PI126449, for three objectives: (i) analyze the association between the allelic status of candidate genes from the fatty acid biosynthesis pathway and the methylketone content in the leaves; (ii) perform bulk segregant analysis of genetic markers along the tomato genome to identify genomic regions that harbor QTLs for 2TD content; (iii) apply differential gene expression analysis using the isolated glands of bulk segregants to identify new genes that are involved in the pathway. The genetic mapping in the interspecific F2 population included approximately 60 genetic markers, including the candidate genes from the FAS pathway and SSR markers spread evenly across the genome. This initial screening identified five loci associated with MK content, including the candidate genes MKS1, ACC and MaCoA:ACP trans. An interesting observation in this genetic analysis was the connection between the shape and content of the glands, i.e. the globularity of the four cells, typical of the wild species, was associated with increased MK in the segregating population. In the next step of the research, transcriptomic analysis of trichomes from high- and low-MK plants was conducted. This analysis identified a new gene, Methylketone Synthase 2 (MKS2), whose protein product shares sequence similarity with the thioesterase superfamily of hot-dog enzymes. Genetic analysis in the segregating population confirmed its association with MK content, and its overexpression in E. coli led to the formation of MK in the medium. Several conclusions are drawn from this research project: (i) the genetic control of MK accumulation in the trichomes is composed of biochemical components in the FAS pathway and its vicinity (MKS1 and MKS2), as well as genetic factors that mediate the morphology of these specialized cells; (ii) the biochemical pathway is now understood to be different from what was hypothesized before, with MKS2 working upstream of MKS1 and serving as the interface between primary (fatty acid) and secondary (MK) metabolism; we are currently testing the possible physical interactions between these two proteins in vitro after the genetic analysis showed clear epistatic interactions; (iii) the regulation of the pathway that leads to specialized metabolism in the wild species is largely mediated by transcription, and one of the achievements of this project is that we were able to isolate and verify the specificity of the MKS1 promoter to the trichomes, which allows manipulation of the pathway in these cells (currently in progress).
The scientific implications of this research project are the advancement of our knowledge of a hitherto unknown biochemical pathway in plants and new leads for studying a new protein family in plants (hot-dog thioesterases). The agricultural and biotechnological implications are: (i) the generation of new genetic markers that could assist in importing this pathway into cultivated tomato, hence enhancing its natural resistance to insect pests; (ii) the discovery of MKS2 adds a new gene for genetic engineering of plants for making new fatty acid-derived compounds. This could be assisted with the use of the isolated and verified MKS1 promoter. The results of this research were summarized in a manuscript that was published in Plant Physiology (cover paper) and a chapter in a proceedings book, and one patent was submitted in the US.
Styles APA, Harvard, Vancouver, ISO, etc.
3

Engel, Charles, Jeffrey Frankel, Kenneth Froot et Anthony Rodrigues. The Constrained Asset Share Estimation (CASE) Method : Testing Mean-Variance Efficiency of the U.S. Stock Market. Cambridge, MA : National Bureau of Economic Research, mars 1993. http://dx.doi.org/10.3386/w4294.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
4

Willis, C., F. Jorgensen, S. A. Cawthraw, H. Aird, S. Lai, M. Chattaway, I. Lock, E. Quill et G. Raykova. A survey of Salmonella, Escherichia coli (E. coli) and antimicrobial resistance in frozen, part-cooked, breaded or battered poultry products on retail sale in the United Kingdom. Food Standards Agency, mai 2022. http://dx.doi.org/10.46756/sci.fsa.xvu389.

Texte intégral
Résumé :
Frozen, breaded, ready-to-cook chicken products have been implicated in outbreaks of salmonellosis. Some of these outbreaks can be large. For example, one outbreak of Salmonella Enteritidis involved 193 people in nine countries between 2018 and 2020, of which 122 cases were in the UK. These ready-to-cook products have a browned, cooked external appearance, which may be perceived as ready-to-eat, leading to mishandling or undercooking by consumers. Continuing concerns about these products led the FSA to initiate a short-term (four month), cross-sectional surveillance study undertaken in 2021 to determine the prevalence of Salmonella spp., Escherichia coli and antimicrobial resistance (AMR) in frozen, breaded or battered chicken products on retail sale in the UK. This study sought to obtain data on AMR levels in Salmonella and E. coli in these products, in line with a number of other FSA-instigated studies of the incidence and nature of AMR in the UK food chain, for example, the systematic review (2016). Between the beginning of April and the end of July 2021, 310 samples of frozen, breaded or battered chicken products containing either raw or partly cooked chicken were collected using representative sampling of retailers in England, Wales, Scotland and Northern Ireland based on market share data. Samples included domestically produced and imported chicken products and were tested for E. coli (including extended-spectrum beta-lactamase (ESBL)-producing, colistin-resistant and carbapenem-resistant E. coli) and Salmonella spp. One isolate of each bacterial type from each contaminated sample was randomly selected for additional AMR testing to determine the minimum inhibitory concentration (MIC) for a range of antimicrobials. More detailed analysis based on Whole Genome Sequencing (WGS) data was used to further characterise Salmonella spp. isolates and allow the identification of potential links with human isolates. Salmonella spp. were detected in 5 (1.6%) of the 310 samples and identified as Salmonella Infantis (in three samples) and S. Java (in two samples). One of the S. Infantis isolates fell into the same genetic cluster as S. Infantis isolates from three recent human cases of infection; the second fell into another cluster containing two recent cases of infection. Countries of origin recorded on the packaging of the five Salmonella-contaminated samples were Hungary (n=1), Ireland (n=2) and the UK (n=2). One S. Infantis isolate was multi-drug resistant (i.e. resistant to three different classes of antimicrobials), while the other Salmonella isolates were each resistant to at least one of the classes of antimicrobials tested. E. coli was detected in 113 samples (36.4%), with counts ranging from <3 to >1100 Most Probable Number (MPN)/g. Almost half of the E. coli isolates (44.5%) were susceptible to all antimicrobials tested. Multi-drug resistance was detected in 20.0% of E. coli isolates. E. coli isolates demonstrating the ESBL (but not AmpC) phenotype were detected in 15 of the 310 samples (4.8%) and the AmpC phenotype alone was detected in two of the 310 chicken samples (0.6%). Polymerase Chain Reaction (PCR) testing showed that five of the 15 (33.3%) ESBL-producing E. coli carried blaCTX-M genes (CTX-M-1, CTX-M-55 or CTX-M-15), which confer resistance to third-generation cephalosporin antimicrobials. One E. coli isolate demonstrated resistance to colistin and was found to possess the mcr-1 gene.
The five Salmonella-positive samples recovered from this study, and 20 similar Salmonella-positive samples from a previous UKHSA (2020/2021) study (which had been stored frozen), were subjected to the cooking procedures described on the sample product packaging for fan-assisted ovens. No Salmonella were detected in any of these 25 samples after cooking. The current survey provides evidence of the presence of Salmonella in frozen, breaded and battered chicken products in the UK food chain, although at a considerably lower incidence than reported in an earlier (2020/2021) study carried out by PHE/UKHSA as part of an outbreak investigation in which Salmonella prevalence was found to be 8.8%. The current survey also provides data on the prevalence of specified AMR bacteria found in the tested chicken products on retail sale in the UK. It will contribute to monitoring trends in AMR prevalence over time within the UK, support comparisons with data from other countries, and provide a baseline against which to monitor the impact of future interventions. While AMR activity was observed in some of the E. coli and Salmonella spp. examined in this study, the risk of acquiring AMR bacteria from consumption of these processed chicken products is low if the products are cooked thoroughly and handled hygienically.
Styles APA, Harvard, Vancouver, ISO, etc.