Dissertations / Theses on the topic 'Longitudinal Value'

To see the other types of publications on this topic, follow the link: Longitudinal Value.

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Longitudinal Value.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Ledden, Lesley. "The dynamic nature of value : a longitudinal study." Thesis, Kingston University, 2009. http://eprints.kingston.ac.uk/20283/.

Full text
Abstract:
Accepting the view that the marketing process is centred on exchange between two parties (Hunt, 2002), it follows that exchange will take place between two (or more) parties when each party trades something of value in return for something of greater value. Consequently the logical conclusion is that value is the cornerstone of marketing (see for example Eggert and Ulaga, 2002; Holbrook, 2005). Perceptions of value can vary over time and experience (Eggert and Ulaga, 2002; Woodall, 2003; Sánchez-Fernandez and Iniesta-Bonillo, 2007). However, even though the temporal nature of value is widely acknowledged, research in this area has been largely overlooked, and while there is limited investigation within the b2b domain (see Flint et al., 2002; Beverland and Lockshin, 2003; Eggert et al., 2006) a literature search has been unable to identify any research that examines actual changes in perceptions of value within consumer research. Consequently, the aim of this study is to empirically examine the temporal stability (i.e., the nature and strength) of the functional relationships between value and its antecedents and outcomes. In order to address the above aim a theoretically grounded model is proposed. Based on common acceptance among researchers (see review by Woodall, 2003) value is conceptualised as the result of a 'trade-off' between benefits (get) and sacrifices (give). However, instead of treating value as a composite higher order construct the behaviours of its two components (get and give) with the following constructs are examined separately: service quality and personal values (terminal and instrumental) are modelled as determinants while satisfaction and intention are the outcomes of value. In addition the impact of knowledge (cognitive; Woodruff, 1997) and emotions (affective; Richins, 1997) as direct determinants of value and additionally as moderators of the value to satisfaction relationship is tested. 
The research was conducted within the Higher Education sector among consumers of postgraduate education at a London business school. To test the temporal stability and pattern of development of the functional relationships between the value components and their nomologically related constructs defined above, data were collected longitudinally from two cohorts at three points in time (i.e., the beginning, middle and end of their studies) via a questionnaire administered in person (Times 1 and 2) and online (Time 3). A total of 34 and 45 usable responses were collected from Cohorts 1 and 2 respectively over the three time points. The data were analysed using Partial Least Squares. Analysis indicates that the give component of value should be separated into money, and time and effort (denoted in this study as give). There is support for knowledge and emotions as direct determinants of the now three value components rather than as moderators of the relationships between these components and satisfaction. Comparisons between the two cohorts reveal a number of significant differences in the relative strength of corresponding relationships. Finally, in terms of the focal interest of this study, there is substantial evidence of the temporal nature of the functional relationships of the value components. Four of the hypothesised relationships are supported only at a single time point, while a number of significant changes in the strength of the functional relationships between the three time points are identified. The research makes the following contributions to the subject matter. It confirms the idiosyncratic nature of the value components in terms of their functional relationships with antecedents and consequences. It highlights the need to consider the location of monetary sacrifice within the give component. 
The existence of a time lag before some determinants have a significant impact on the formation of value is identified. There is tentative evidence to suggest that as consumption progresses, value is formed by a larger number of determinants. For the get component, significant variations in the strength of its functional relationships over time are found to exist.
APA, Harvard, Vancouver, ISO, and other styles
2

McLaren, Josephine Anne. "Can EVA™ create value? : a dynamic longitudinal investigation of three New Zealand companies." Thesis, University of Newcastle upon Tyne, 2012. http://hdl.handle.net/10443/2875.

Full text
Abstract:
When Economic Value Added (EVA™) was first promoted by the patent-holders, Stern Stewart and Company, it was hailed as an innovation in management accounting. The suggestion was that this measure could be used as the basis for the management control system within the firm, covering planning, control, investment decision making and remuneration determination. Many firms introduced the EVA system. New Zealand, in particular, was exposed to the EVA methodology through the publication in 1996 of a Value-Based Reporting Protocol that was recommended for state-owned enterprises. This study adopts a longitudinal perspective to examine the experience of three large ex-nationalised New Zealand companies; two are state-owned enterprises and one is listed. The firms implemented EVA in the late 1990s and continued to use it as the management control system for a period of 10-15 years. The evidence is gathered from a questionnaire conducted in 1999, interviews conducted in 2001 and 2011, and supporting documentary evidence. It covers the entire ‘life cycle’ of EVA, from initial implementation, through its evolution, to its eventual decline. Three different theoretical frameworks are developed from three academic disciplines and applied in an original context to analyse this EVA evidence. The first is the discovery theory framework, drawn from the economics literature. This framework is used to consider whether EVA can be regarded as a discovery process within the organisation, for discovering the source of value that is known to exist in these ex-nationalised firms. The second, from the management literature, is used to investigate whether EVA can be viewed as a management model in the firm. Finally, contingency theory as applied in management accounting is extended to a longitudinal perspective to analyse the variables that were important at each stage of the EVA life cycle. 
A central theme of each framework was the information provided and the incentives created by the measure. The thesis provides original contributions to the evidence on EVA, including why EVA needed to evolve and why it eventually failed. Further contributions are the suggestions for development and extension of each framework and the synthesising of the frameworks. Finally, implications for practitioners and policy makers are considered.
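The core EVA computation referenced in this abstract is a simple residual-income formula: operating profit less a charge for the capital employed. A minimal sketch, with hypothetical figures rather than anything from the thesis:

```python
def eva(nopat, invested_capital, wacc):
    """Economic Value Added: operating profit left after charging
    for the capital employed (EVA = NOPAT - WACC x capital)."""
    return nopat - wacc * invested_capital

# Hypothetical figures in $m: NOPAT 120, invested capital 1000, WACC 9%.
print(eva(120.0, 1000.0, 0.09))   # ~30: positive EVA, value created
print(eva(80.0, 1000.0, 0.09))    # negative: the capital charge is not covered
```

A firm can report a healthy accounting profit and still destroy value once the cost of the capital tied up in the business is charged, which is the incentive argument the measure's promoters made.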
APA, Harvard, Vancouver, ISO, and other styles
3

Lange, Joshua. "Exploring value through international work placements in social entrepreneurial organisations : a multiple case longitudinal study." Thesis, University of Exeter, 2015. http://hdl.handle.net/10871/22106.

Full text
Abstract:
Universities and their partner organisations promise that short-term work placements in social entrepreneurial organisations will increase student employability, leadership skills, and knowledge of socially innovative practice, while providing students with meaningful opportunities to ‘change the world’; yet there is a lack of theory and empirical studies showing what is beneficial and important to students, how students develop, and what influences their development through these cross-cultural and interdisciplinary experiential learning programmes. This is the first study to explore the value of UK and US students participating in international internships and fellowships related to social entrepreneurship from a socioeconomic perspective. For this study, a value heuristic was developed from organisational models in the social entrepreneurship and educational philosophy literature, followed by a qualitative longitudinal multiple case study. Fifteen individual student cases were chosen from two programmes involving two UK and three US universities, taking place in eleven host countries over five distinct data collection intervals. Findings across cases show a broad range of perceived value to students: from research skills and cross-cultural understanding to critical thinking and self-confidence. Findings also show how student perspectives changed as a result of the placement experience and what ‘internal’ and ‘context-embedded’ features of the placements influenced students’ personal and professional lives. However, the ambiguity of social impact measures raises ethical questions about engaging students with limited knowledge, skills, and preparation on projects where they are unprepared to create long-term value for beneficiaries. 
This study contributes to the literature on higher education and international non-profit and business education by: providing an expansive matrix of value to students engaging in international placements; initiating a ‘hybridisation’ theory of personal value; creating a rigorous methodology transferable to similar programmes; outlining embedded features that programme developers can integrate in order to improve their own social and educational impact; raising ethical questions related to theory and practice; and including the researcher’s own multi-continent journey into the substance of the work.
APA, Harvard, Vancouver, ISO, and other styles
4

Print, Carole. "An exploratory longitudinal study of the implementation of shareholder value within three large international companies." Thesis, Henley Business School, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.288764.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Connor, Paul Brereton. "Exploring customer perceived value change in an UK aerospace manufacturing company : a longitudinal case study." Thesis, University of Nottingham, 2013. http://eprints.nottingham.ac.uk/28661/.

Full text
Abstract:
Organisational members need a clearer comprehension of the dynamic nature of value in a constantly changing networked ecosystem of resource integration and mutual service provision, so that the organisational pursuit of competitive advantage for business success becomes achievable through better stakeholder value propositions, which customers especially value in use. Empirical studies indicate that customer perceptions of value are reasonably well documented; however, the same cannot be said of knowledge about changes in customer perceived value, where the paucity of research reflects its embryonic state. This exploratory longitudinal case study in a major blue-chip company setting is the first known research design employing an idiographic interactive approach of three interpretive qualitative methods mixed with descriptive quantitative elements. The UK aerospace industry is considered appropriate for studying the phenomenon of customer perceived value change because of its characteristics, i.e. periods of cyclical change, innovation, new technologies and its long-term relationship-marketing nature. Research findings highlight that customer perceived value changes are determined idiosyncratically and phenomenologically by beneficiaries and manifested at the attribute level of service provision rather than at the higher levels of consequences and values. Different macro-environmental and micro-environmental factors influence the organisational climate continuously, albeit not as critical incidents having a direct impact on individual perceptions of value. Value constructs informed by organisational values and personal values did not change per se; however, thematic recalibrations within each respondent's values-system hierarchy were detected. Exemplar responses reveal that most increases or decreases in value change appear associated with changing positions and roles within the organisation for two individuals, whereas the least change in value came from the remaining exemplar's supplier development role in providing stability and surety of supply. Key marketing implications are the surfacing and management of tacit knowledge of customer perceived value change, facilitated through a common language and leading to enhanced external and internal collaborative relationships.
APA, Harvard, Vancouver, ISO, and other styles
6

Gundogdu, Didem. "The role of social and human capital in assessing firm value : a longitudinal study of UK firms." Thesis, University of Exeter, 2017. http://hdl.handle.net/10871/30194.

Full text
Abstract:
This study examines the role of board social and human capital in assessing the market value of firms in the UK context. As the world economy has shifted from manufacturing to service and knowledge-based economies, attributes such as knowledge, expertise, skills, ability and reputation are increasingly fundamental to the success of business enterprises. There is a growing consensus that these attributes are an increasingly valuable form of capital, asset or resource, despite their intangibility. In accounting, there are a number of problems arising from the accountability of non-physical, non-financial capital. Firstly, some forms of capital and certain assets are neither recognised nor presented in the statement of financial position. Secondly, some accounting practices relating to intangible assets are very conservative, resulting in undervalued assets and overstated liabilities. Consequently, there is an increasing gap between the book value and market value of firms. This gap restricts the relevance of information presented in financial statements and suggests that there is something missing in financial statements. This is the research problem being addressed in this study. While prior literature demonstrates that it has proven difficult to operationalise intangible forms of capital, there has been significant empirical attention and theoretical development in social and human forms. This thesis aims to contribute to accounting theory and practice by exploring the impact that board social and human capital have on firm market value. In light of extant research, it is hypothesised that social and human capital possessed at board level are positively related to the market value of firms. This study employs Ohlson's (1995) residual income valuation model to test the impact of social and human capital using a sample of UK firms listed on the FTSE All Share index for a period of 10 years (2001-2010). 
Social and human capital measures are derived from interlocking directorate ties and detailed biographic information of board directors. This study benefits from Pajek and Ucinet network packages to generate network maps and calculate positional metrics such as centrality and structural hole measures.
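The positional metrics mentioned above can be illustrated without specialist network packages. The sketch below computes normalised degree centrality and Burt's constraint (a standard structural-hole measure of how redundant a node's contacts are) on a toy interlock network; the node labels and edge list are invented, not drawn from the thesis data:

```python
from collections import defaultdict

def adjacency(edges):
    """Undirected adjacency sets from an edge list."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def degree_centrality(edges):
    """Each node's ties as a share of the maximum possible (n - 1)."""
    adj = adjacency(edges)
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def burt_constraint(edges):
    """Burt's constraint: high values mean a node's contacts are
    concentrated and redundant (few structural holes to broker)."""
    adj = adjacency(edges)

    def p(i, j):  # share of i's ties invested in j (unweighted graph)
        return 1.0 / len(adj[i]) if j in adj[i] else 0.0

    constraint = {}
    for i in adj:
        total = 0.0
        for j in adj[i]:
            indirect = sum(p(i, q) * p(q, j) for q in adj[i] if q not in (i, j))
            total += (p(i, j) + indirect) ** 2
        constraint[i] = total
    return constraint

# Toy interlock: director "A" links three otherwise unconnected boards.
ties = [("A", "B"), ("A", "C"), ("A", "D")]
print(degree_centrality(ties))  # "A" is maximally central
print(burt_constraint(ties))    # "A" is least constrained (most brokerage)
```

In the star network above, the broker "A" has constraint 1/3 while each isolated contact has constraint 1.0, which is the intuition behind treating well-connected directors as carriers of social capital.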
APA, Harvard, Vancouver, ISO, and other styles
7

Silva, Rodrigo Alves. "Viabilidade de utilização da teoria de opções reais no processo de avaliação de empresas de telecomunicações." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/18/18157/tde-08102010-084629/.

Full text
Abstract:
The process of valuing companies through formal models and techniques aims to guide management and stakeholder groups towards optimal decision-making. In general, these models are applied under assumptions about the value of the firm's benefits, implying that the value of the business is the value of those benefits. For companies operating in highly competitive markets with a high level of development and use of technologies and innovations, the isolated use of techniques focused on the benefits of current business proves inadequate for assessing the organisation's ability to respond to market variables. From this perspective, the advantages arising from competitive strategies and from minimising potential business losses, achieved chiefly through strategies of flexibility and the generation of new business opportunities, prove to be important value drivers. The value generated by opportunities and flexibility in telecommunications companies is the focus of this research, which aims to establish, through its discussions and tests, the feasibility of incorporating the real options model into the valuation process for companies in the sector. It starts from the assumption that the market, recognising the importance of investment management strategies and company structure for success in generating value, rewards these organisations by assigning value in line with its expectations. Fixed-effects and random-effects panel data models were tested to verify the significance of the explanatory variables that generate potential real option value. The tests demonstrate the statistical significance of these variables, supporting the model. 
The research also positions theoretical studies and surveys of the valuation models addressed in order to contextualise the usefulness of the real options model in valuation processes, and highlights the procedural applicability of the real options model in conjunction with the discounted cash flow technique in valuing companies in the sector. Its goals of discussing and establishing the applicability of the theory are achieved, given the set of empirical and illustrative methods employed.
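The flexibility value discussed in the abstract can be illustrated with a minimal one-period binomial model of the option to defer an investment. This is a generic textbook sketch, not the thesis's model, and all figures are hypothetical:

```python
def deferral_option_value(v, k, u, d, r):
    """One-period binomial value of the option to defer investment:
    the project's value v moves to v*u or v*d next period, and we
    invest (paying cost k) only in the state where it pays off."""
    q = (1 + r - d) / (u - d)                  # risk-neutral up-probability
    payoff_up = max(v * u - k, 0.0)
    payoff_down = max(v * d - k, 0.0)
    return (q * payoff_up + (1 - q) * payoff_down) / (1 + r)

# Hypothetical project: worth 100 today, costs 110 to launch, r = 5%.
npv_now = 100 - 110                            # -10: reject under static DCF
option = deferral_option_value(100, 110, u=1.3, d=0.8, r=0.05)
print(npv_now, round(option, 2))               # the right to wait is worth ~9.52
```

Static discounted cash flow would reject the project outright, while the option to wait and invest only in the favourable state has positive value, which is the basic argument for combining real options with DCF in the valuation process.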
APA, Harvard, Vancouver, ISO, and other styles
8

Deng, Cheng. "Time Series Decomposition Using Singular Spectrum Analysis." Digital Commons @ East Tennessee State University, 2014. https://dc.etsu.edu/etd/2352.

Full text
Abstract:
Singular Spectrum Analysis (SSA) is a method for decomposing and forecasting time series that has recently seen major developments but is not yet routinely included in introductory time series courses. An international conference on the topic was held in Beijing in 2012. The basic SSA method decomposes a time series into trend, seasonal component and noise. However, there are other, more advanced extensions and applications of the method, such as change-point detection or the treatment of multivariate time series. The purpose of this work is to understand the basic SSA method through its application to the monthly average sea temperature at a point on the coast of South America, near where the “El Niño” phenomenon originates, and to artificial time series simulated using harmonic functions. The output of the basic SSA method is then compared with that of other decomposition methods, such as classic seasonal decomposition, X-11 decomposition using moving averages, and seasonal decomposition by Loess (STL), that are included in some time series courses.
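A minimal sketch of the basic SSA steps the abstract describes (embedding, singular value decomposition, and diagonal averaging) might look like this in Python with NumPy; the window length `L` and the simulated series are arbitrary choices, not the thesis's data:

```python
import numpy as np

def ssa_decompose(x, L):
    """Basic SSA: embed the series in an L x K trajectory matrix,
    take its SVD, and turn each rank-1 term back into a series by
    averaging anti-diagonals (Hankelisation)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    components = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])              # elementary matrix
        # averaging each anti-diagonal (l + k = t) yields one additive series
        comp = np.array([Xi[::-1].diagonal(t - L + 1).mean() for t in range(N)])
        components.append(comp)
    return np.array(components)                           # components sum to x

# Trend + annual cycle + noise: leading components capture trend and cycle.
t = np.arange(120)
series = 0.05 * t + np.sin(2 * np.pi * t / 12) \
         + 0.1 * np.random.default_rng(1).normal(size=120)
parts = ssa_decompose(series, L=24)
print(np.allclose(parts.sum(axis=0), series))             # exact additive split
```

In practice the elementary components are then grouped, by inspecting the singular values and reconstructed series, into trend, seasonal and noise groups, which is the output compared here against classical decomposition, X-11 and STL.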
APA, Harvard, Vancouver, ISO, and other styles
9

Hoang, Thinh Quoc. "Exploring Vietnamese first-year English-major students’ motivation: A longitudinal, mixed-methods investigation." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2021. https://ro.ecu.edu.au/theses/2423.

Full text
Abstract:
Learner motivation is recognised as a crucial determinant of successful second language (L2) learning. However, to date, little research has been directed into the motivational dimensions of L2 learning in Vietnam, where English has currently become the most popular foreign language with millions of learners nationwide. Further, there is a limited amount of research internationally that explores the motivational levels and development of L2 students at the transition from school level to higher education. This study aimed to develop a profile of the motivation and learning experiences of a cohort of Vietnamese first-year English-major students over one academic year. As an attempt to integrate the L2 research field with mainstream educational psychology, the study drew theoretically from Eccles et al.’s expectancy-value theory (EVT). This framework, though recognised as one of the most influential motivation theories, has received limited attention in the L2 field. Specifically, the research explored: the EVT constructs of attainment value (personal importance), intrinsic value, utility value, cost, perceived competence, and expectancies for success; their variations across the cohort over one year; their correlations; and their impacts on motivational indicators of English-major choice, English learning effort and willingness to communicate. The study also offers explanations for those variations. Informed by critical realist perspectives, the study adopted a longitudinal, explanatory mixed-methods design. A cohort of 149 first-year English-major students at one Vietnamese university were surveyed three times over one academic year. Drawing on the results of the first survey, a sample of 15 participants exhibiting a range of motivational profiles were recruited to take part in three rounds of individual interviews over the same year. 
Results demonstrated the varied explanatory power of the EVT constructs in understanding Vietnamese English-major students' motivated behaviours. For example, while personal importance and utility value linked to English seemed to be more potent reasons for participants enrolling in an English major, their L2 learning engagement and willingness to communicate in English were linked more strongly to intrinsic value and expectancies for success. The study further revealed different developmental trajectories of student values and beliefs. While the students maintained relatively stable levels of personal importance and utility value, studying English became slightly less interesting to them. Regarding cost dimensions, the participants reported an increase in the opportunity cost they perceived from studying English, while becoming less anxious about speaking the language. For the two competence-related beliefs, while the students perceived an improvement in their English proficiency, they reported decreasing levels of expectancies for success and became more realistic about the potential to improve their English. The participants also reported lower investment in learning effort and less willingness to communicate in English, which paralleled the declines in intrinsic value and expectancy beliefs. Interviews with participants revealed the impacts of different contextual and individual factors, especially teaching and learning activities, on their L2 motivation. Overall, the findings of this study suggested that the expectancy-value model provides a fresh but effective theoretical approach to understanding the motivational patterns of Vietnamese first-year English-major students and is potentially applicable to inquiry into L2 motivation in other contexts. Moreover, this study's findings also contribute to extending current understandings of the EVT constructs. 
Finally, the findings from this study provide valuable insights and suggestions to better support English language learners in Vietnamese tertiary institutions and similar contexts.
APA, Harvard, Vancouver, ISO, and other styles
10

Dalben, Adilson 1965. "Fatores associados à proficiência em leitura e matemática : uma aplicação do modelo linear hierárquico com dados longitudinais do Projeto GERES." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/253935.

Full text
Abstract:
Advisors: Luiz Carlos de Freitas, Dalton Francisco de Andrade
Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Educação
Resumo: Esta pesquisa é um estudo sobre a eficácia e equidade escolar que tem ganhado atenção especial nos países que usam as avaliações em larga escala a serviço da gestão do sistema educativo. No Brasil, que desde a década de 1990 colocou a avaliação educacional como recurso central em suas políticas educacionais, mas coletando dados seccionais, que são muito frágeis para essa finalidade. Essa fragilidade decorre da alta associação que os fatores extraescolares, sobretudo o nível socioeconômico do aluno, têm sobre as medidas de proficiência. Diante disso, foram usados dados longitudinais e a análise foi feita por meio de modelos lineares hierárquicos. Esta pesquisa teve como objetivo principal desenvolver um modelo estatístico capaz de identificar tais fatores para a realidade brasileira, considerando que a aprendizagem é um processo complexo, isto é, ela é influenciada simultaneamente por múltiplos fatores. Foram desenvolvidos modelos de valor agregado que não só identificam tais variáveis, como também caracterizam sua influência em alunos com distintas proficiências no início de cada período de escolarização. A base de dados utilizada nesses modelos foi fornecida pelo Projeto GERES, que, no período de 2005 a 2008, coletou dados dos mesmos alunos de 1ª a 4ª séries de uma amostra de 312 escolas em cinco grandes cidades brasileiras. Foram medidas as proficiências em Leitura e Matemática de 35.538 alunos e coletadas informações de contexto desses alunos, seus familiares, professores, diretores e escola. Após a redução do grande número de informações disponibilizadas pelo Projeto GERES, feita por meio da Análise Fatorial Exploratória (AFE), as variáveis resultantes foram reorganizadas em três arquivos usados para análise em modelos lineares hierárquicos de três níveis. Os resultados encontrados evidenciam uma significativa instabilidade nos efeitos que as variáveis têm sobre a proficiência, tanto em leitura quanto em matemática. 
Ao final da pesquisa, são encontrados alguns fatores que influenciam positivamente e negativamente a proficiência em Leitura e Matemática e outros que afetam especificamente cada uma dessas áreas, indicando que podem colaborar para o aumento da eficácia e da equidade das escolas. No entanto, constatam-se também algumas variáveis que têm comportamentos incoerentes com o esperado e outras com comportamentos opostos nas duas áreas. Assim, dos achados das pesquisas, comprova-se que, com base nos dados utilizados, procedimentos metodológicos e modelos estatísticos adotados, os modelos de valor agregado melhoram a confiabilidade das análises em comparação aos modelos que usam dados seccionais, mas ainda são inviáveis como ferramentas para a gestão do sistema educativo, sobretudo para o uso meritocrático de seus resultados. Dessa forma, esta pesquisa corrobora os achados de outras realizadas no âmbito internacional e permite afirmar que a qualidade da modelagem estatística depende da qualidade dos dados que busca modelar, podendo gerar distorções, estabelecer relações inesperadas ou levar a conclusões equivocadas. Em contrapartida, trata-se de recursos que podem ser usados no sistema educativo, fornecendo dados importantes para a orientação das políticas públicas numa perspectiva de avaliação formativa, com vistas ao melhoramento da qualidade de ensino oferecido pelas escolas e à melhor formação dos profissionais docentes e não-docentes que nelas trabalham
Abstract: This research is a study on school effectiveness and equality in Brazil, adding up to a number of other researches that have drawn special attention in countries that use large-scale evaluations at the service of the education system management. In the Brazil has regarded the educational evaluation as a central resource in national education policies, but using cross-sectional data, which are far more fragile for such purpose. This fragility has derived from the great influence that extra-school factors, particularly the students¿ socioeconomic status, exerts on proficiency measures. Longitudinal data was used in the analyses with hierarchical linear models. The main objective of this research was to develop a statistical model to identify such factors in the Brazilian reality, considering that learning is a complex process, i.e. it is simultaneously influenced by multiple factors. Value-added models were developed not only to identify such variables, but also to characterize their influence on students showing different proficiencies at the beginning of every school term. The data base used in those models was provided by the GERES Project, which collected data of the same students from the 1st to the 4th grade from a sample of 312 schools in five Brazilian cities from 2005 to 2008. Proficiencies of 35,538 students were measured, and information about these students¿ context, family, teachers, principals and school were gathered. After the reduction of the great amount of information made available by the GERES Project by means of Exploratory Factor Analysis (EFA), the resulting variables were reorganized in three files used for analysis in three-level hierarchical linear models. The results evidenced significant instability in the effects that the variables have on proficiency both in Reading and in Mathematics. 
At the end of the research, some factors that influence Reading and Mathematics proficiency either positively or negatively, as well as others that specifically affect one of these areas, were found, indicating that they may contribute to increased school effectiveness and equity. However, some variables behaved inconsistently with expectations, and others behaved in opposite ways in the two areas. Based on the data used, the methodological procedures and the statistical models adopted, the findings show that value-added models improve the reliability of the analyses in comparison with models that use cross-sectional data, but they are still impracticable as tools for education system management, particularly for meritocratic use of their results. Hence, this research corroborates the findings of other studies carried out around the world and allows us to state that the quality of statistical modeling depends on the quality of the data it attempts to model; the models may generate distortions, establish unexpected relationships or lead to misleading conclusions. On the other hand, these resources may be used in the education system to provide important data for guiding public policies from a formative evaluation perspective, with a view to improving the quality of teaching offered by schools and the training of the teaching and non-teaching professionals who work in them.
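The value-added logic described in the abstract can be sketched in a few lines. This is a hypothetical illustration: all data and variable names are invented, and a plain least-squares fit stands in for the three-level hierarchical linear models the study actually used. The idea is that a school's value-added estimate is what remains of its students' outcomes after prior proficiency has been accounted for.

```python
import numpy as np

# Hypothetical value-added sketch: regress end-of-term proficiency on prior
# proficiency, then average the residuals by school. (The study itself used
# three-level hierarchical linear models with many more covariates.)
rng = np.random.default_rng(0)
n_schools, n_students = 20, 50
school = np.repeat(np.arange(n_schools), n_students)
school_effect = rng.normal(0, 5, n_schools)          # true "value added" per school
prior = rng.normal(200, 25, n_schools * n_students)  # proficiency at entry
post = 30 + 0.9 * prior + school_effect[school] + rng.normal(0, 10, prior.size)

# OLS fit of post-test score on prior score (design matrix with intercept)
X = np.column_stack([np.ones_like(prior), prior])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
residuals = post - X @ beta

# A school's value-added estimate is its students' mean residual
value_added = np.array([residuals[school == s].mean() for s in range(n_schools)])
```

The GERES analyses additionally adjust for socioeconomic and contextual factors at the student, class and school levels; this sketch shows only the core adjust-for-intake idea.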
Doutorado
Ensino e Práticas Culturais
Doutor em Educação
APA, Harvard, Vancouver, ISO, and other styles
11

Ghallab, Eman. "Exploring the pedagogical value and challenges of using e-portfolios as a learning tool for nursing students : a single longitudinal qualitative case study with Activity Theory as an analytical framework." Thesis, University of Nottingham, 2017. http://eprints.nottingham.ac.uk/48373/.

Full text
Abstract:
With the advances in technology and computers, e-portfolios have replaced paper-based portfolios in many higher education institutions and professions, including nursing (Skiba, 2005; Garrett and Jackson, 2006). The two share some overlapping benefits, but the most important advantage of e-portfolios over the paper-based version lies in their potential for meaningful interaction and network structure (Butler, 2006). The existing literature therefore suggests that e-portfolios may have some added pedagogical value. Understanding this added value through research can provide valuable data for academic, clinical and administrative staff on how to implement and use e-portfolios effectively to support students' learning. According to the literature review, there was a lack of a detailed and comprehensive study on the benefits and challenges of using e-portfolios for nursing students. Hence, this study explored the pedagogical value of e-portfolios and the contextual challenges that influenced their effectiveness as a learning tool from the perspective of the key stakeholders: students, tutors, and curriculum planners. The study used a qualitative instrumental case study design, collecting data through initial and follow-up interviews, online observation and document analysis over a period of 21 months. The participants included 14 students, 11 tutors, and 6 curriculum planners. The study employed critical realism as its philosophical approach and Activity Theory as its methodological and analytical framework. This study contributes to the literature as the first longitudinal empirical inquiry in nursing education to adopt Activity Theory to explore the pedagogical value of e-portfolios and the contradictions that may influence their perceived value from student and tutor perspectives. 
The findings of this study show that the e-portfolio allowed for a more student-centred approach to learning by providing four affordances. (1) Fostering communication and interaction between the students and their peers and tutors. The findings indicated that the e-portfolio allowed for flexible, instantaneous and ongoing online communication that helped the students stay connected, seek feedback and support, share their work, experiences and any personal information with their tutors, and raise their awareness of any issue they thought would have an impact on their learning and progress, at any time and from anywhere using their mobile devices. It also allowed them to create and participate in different online learning communities with their peers to exchange experiences and ideas, and to learn from and support each other. (2) Supporting formative assessment through facilitating formative feedback and self-assessment. The e-portfolio offered the tutors unlimited and flexible access to any work shared by their students, which enabled them to formatively assess their students' performance and progress towards the desired learning outcomes, spot more quickly those who were struggling and in need of remediation, and provide them with immediate and more personalized feedback tailored to their needs. Moreover, the reflective nature of the e-portfolio and its structure encouraged continuous self-assessment by prompting the students to iteratively look back and reflect on their collection of evidence and experiences. This iterative process allowed them to develop a better understanding of their own strengths and weaknesses, engage in self-development, and make connections between their seemingly disparate learning experiences, and hence see a comprehensive picture of their learning. (3) Scaffolding the reflective process. 
The findings indicated that the e-portfolio, through its inbuilt structured reflective templates, acted as a scaffold for the students who were struggling with writing reflections. These templates were perceived to have made their attempts at reflection easier and to have stimulated them to think more deeply about their experiences. Improvement in their reflective writing was noted by some tutors and students after using the templates for a considerable amount of time. (4) Allowing for multiple modes of presentation and self-expression. The multimedia capabilities of the e-portfolio allowed the students to present evidence of their learning and express their thoughts and feelings in a variety of modes such as videos, images, audio and text. Creating and combining multimedia evidence and materials enabled them to present a more realistic and authentic view of their experiences and to integrate information from different sources to show the multifaceted nature of those experiences. Interestingly, being able to present learning and achievements in multiple modes encouraged the students to view the e-portfolio as a potential tool for applying for jobs after graduation. With regard to the challenges and contradictions, mainly secondary contradictions emerged in the context in which the e-portfolio was used. The majority of these contradictions, for example technical issues, the tutors' perceived complexity and inefficiency of the e-portfolio system, the students' underdeveloped digital literacy, and the limited access to computers on placements, had a short-term or less harmful effect on the use and the perceived value of the e-portfolio. However, contradictions such as the mismatch between the students' expectations and the tutors' perception of the frequency and adequacy of feedback to be given in the e-portfolio appear to have had a more detrimental effect on the students' motivation to use the e-portfolio.
APA, Harvard, Vancouver, ISO, and other styles
12

Sousa, Júnior Severino Cavalcante de [UNESP]. "Persistência da lactação e Influência da estrutura de dados sobre a estimação de parâmetros genéticos para a produção de leite de bovinos da raça Holandesa." Universidade Estadual Paulista (UNESP), 2010. http://hdl.handle.net/11449/104906.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Para o presente estudo foram utilizadas 3.202 primeiras lactações, de vacas da raça Holandesa pertencentes a quatro fazendas da região Sudeste, registradas semanalmente, com o objetivo de verificar a influência da estrutura de dados de produção de leite no decorrer da lactação, sobre os parâmetros genéticos estimados por modelos de regressão aleatória. Foram testados quatro arquivos contendo estruturas diferentes de dados. O arquivo de controles semanais (CS) contava com 122.842 controles, o arquivo mensal (CM) com 30.883 controles, o bimestral (CB) com 15.837 controles, e por fim, o arquivo de controles trimestrais (CT) continha de 12.702 controles. O modelo utilizado foi o de regressão aleatória, e como aleatórios foram considerados os efeitos genético aditivo e o de ambiente permanente de animal. A ordem das funções de covariância para estes dois efeitos foi de sexta ordem para efeito genético aditivo e de sétima ordem para o efeito de ambiente permanente, com variâncias residuais heterogêneas a estrutura de variâncias residuais foi modelada por meio de uma “step function” O modelo teve como efeito fixo os grupos de contemporâneos (GC) comuns para todos os arquivos de dados. Os GC foram compostos por fazenda, mês e ano do controle,e como co-variável a idade da vaca ao parto (regressão linear e quadrática) e o número de dias em lactação (regressão fixa para a média populacional). Todos os arquivos de dados estudados foram analisados incluindo arquivo de genealogia composto por 4.380 animais com 3202 mães e 228 touros. As estimativas de herdabilidades para produção de leite apresentaram tendências semelhantes entre os arquivos de dados analisados, com maior semelhança entre os bancos CS, CM e CB. As estimativas de herdabilidade para produção de leite no dia do controle (PLDC) do banco CT apresentou pequenas diferenças, em relação aos demais...
For this study, 3,202 first lactations of Holstein cows belonging to four farms in the Southeast region of Brazil, recorded weekly, were used, with the objective of verifying the influence of the structure of milk yield data over the course of lactation on the genetic parameters estimated by random regression models. Four files with different data structures were tested: the weekly test-day file (CS) contained 122,842 records; the monthly file (CM), 30,883 records; the bimonthly file (CB), 15,837 records; and the quarterly file (CT), 12,702 records. A random regression model was used, with the additive genetic and animal permanent environmental effects treated as random. The covariance functions were of sixth order for the additive genetic effect and of seventh order for the permanent environmental effect, with heterogeneous residual variances modeled by a step function. The model included as fixed effects the contemporary groups (GC), common to all data files and composed of farm, month and year of the test, and as covariables the cow's age at calving (linear and quadratic regression) and the number of days in milk (fixed regression for the population average). All data files were analyzed with a pedigree file comprising 4,380 animals, with 3,202 dams and 228 sires. The heritability estimates for milk yield showed similar trends across the data files analyzed, with greater similarity among CS, CM and CB. The CT file showed small differences in the heritability estimates for test-day milk yield compared with the others. The CB file yielded estimates of all the genetic parameters analyzed with the same trend and magnitude as CS and CM, suggesting that milk recording with a bimonthly structure is adequate for genetic evaluations based on random regression models. Breeding values (VG) for partial lactation yields were predicted (MRA100, MRA200, MRA300 and MRA90_305) ... (Complete abstract: click electronic access below)
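Random regression test-day models of the kind described here fit lactation curves with orthogonal (Legendre) polynomials of days in milk. The sketch below, with illustrative order and test days (the study used sixth- and seventh-order functions), shows how such covariates are typically built:

```python
import numpy as np
from numpy.polynomial import legendre

# Hypothetical sketch: Legendre-polynomial covariates for days in milk (DIM),
# the basis used by random regression test-day models. The order chosen here
# (3) is illustrative only.
def legendre_covariates(dim, dim_min=5, dim_max=305, order=3):
    """Standardize DIM to [-1, 1] and evaluate Legendre polynomials P_0..P_order."""
    x = 2.0 * (np.asarray(dim, float) - dim_min) / (dim_max - dim_min) - 1.0
    # Column j holds P_j(x); np.eye(...)[j] selects the j-th polynomial
    return np.column_stack(
        [legendre.legval(x, np.eye(order + 1)[j]) for j in range(order + 1)]
    )

# Covariates for four illustrative test days across the lactation
Z = legendre_covariates([5, 60, 155, 305])
```

Each row of `Z` enters the model as the regression covariates of one test-day record, for both the additive genetic and permanent environmental random effects.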
APA, Harvard, Vancouver, ISO, and other styles
13

Schiratti, Jean-Baptiste. "Methods and algorithms to learn spatio-temporal changes from longitudinal manifold-valued observations." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX009/document.

Full text
Abstract:
Dans ce manuscrit, nous présentons un modèle à effets mixtes, formulé dans un cadre Bayésien, permettant d'estimer la progression temporelle d'un phénomène biologique à partir d'observations répétées, à valeurs dans une variété Riemannienne, et obtenues pour un individu ou groupe d'individus. La progression est modélisée par des trajectoires continues dans l'espace des observations, que l'on suppose être une variété Riemannienne. La trajectoire moyenne est définie par les effets mixtes du modèle. Pour définir les trajectoires de progression individuelles, nous avons introduit la notion de variation parallèle d'une courbe sur une variété Riemannienne. Pour chaque individu, une trajectoire individuelle est construite en considérant une variation parallèle de la trajectoire moyenne et en reparamétrisant en temps cette parallèle. Les transformations spatio-temporelles sujet-spécifiques, que sont la variation parallèle et la reparamétrisation temporelle, sont définies par les effets aléatoires du modèle et permettent de quantifier les changements de direction et la vitesse à laquelle les trajectoires sont parcourues. Le cadre de la géométrie Riemannienne permet d'utiliser ce modèle générique avec n'importe quel type de données définies par des contraintes lisses. Une version stochastique de l'algorithme EM, le Monte Carlo Markov Chains Stochastic Approximation EM (MCMC-SAEM), est utilisée pour estimer les paramètres du modèle au sens du maximum a posteriori. L'utilisation du MCMC-SAEM avec un schéma numérique permettant de calculer le transport parallèle est discutée dans ce manuscrit. De plus, le modèle et le MCMC-SAEM sont validés sur des données synthétiques, ainsi qu'en grande dimension. Enfin, nous présentons des résultats obtenus sur différents jeux de données liés à la santé.
We propose a generic Bayesian mixed-effects model to estimate the temporal progression of a biological phenomenon from manifold-valued observations obtained at multiple time points for an individual or group of individuals. The progression is modeled by continuous trajectories in the space of measurements, which is assumed to be a Riemannian manifold. The group-average trajectory is defined by the fixed effects of the model. To define the individual trajectories, we introduced the notion of "parallel variations" of a curve on a Riemannian manifold. For each individual, the individual trajectory is constructed by considering a parallel variation of the average trajectory and reparametrizing this parallel in time. The subject-specific spatiotemporal transformations, namely the parallel variation and the time reparametrization, are defined by the individual random effects and allow us to quantify the changes in direction and the pace at which the trajectories are followed. The framework of Riemannian geometry allows the model to be used with any kind of measurements with smooth constraints. A stochastic version of the Expectation-Maximization algorithm, the Monte Carlo Markov Chains Stochastic Approximation EM algorithm (MCMC-SAEM), is used to produce maximum a posteriori estimates of the parameters. The use of the MCMC-SAEM together with a numerical scheme for the approximation of parallel transport is discussed. In addition, the method is validated on synthetic data and in high-dimensional settings. We also provide experimental results obtained on health data.
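The construction of individual trajectories can be illustrated in the simplest one-dimensional, Euclidean special case, where a parallel variation reduces to a vertical offset and the time reparametrization is affine. All function names and parameter values below are hypothetical:

```python
import numpy as np

# Hypothetical 1-D illustration of the model's idea: each subject's trajectory
# is the group-average curve shifted (a "parallel variation", here just a
# vertical offset in the Euclidean case) and followed at its own pace via an
# affine time reparametrization psi_i(t) = alpha_i * (t - t0 - tau_i) + t0.
def average_trajectory(t, t0=70.0):
    """Group-average progression, modeled as a logistic curve."""
    return 1.0 / (1.0 + np.exp(-(t - t0) / 5.0))

def individual_trajectory(t, offset=0.0, alpha=1.0, tau=0.0, t0=70.0):
    """Parallel variation (offset) of the average curve, time-warped by psi."""
    psi = alpha * (t - t0 - tau) + t0
    return average_trajectory(psi, t0) + offset

t = np.linspace(50.0, 90.0, 5)
y_avg = average_trajectory(t)
y_fast = individual_trajectory(t, alpha=2.0)  # a subject progressing twice as fast
```

The random effects `offset`, `alpha` and `tau` play the roles of the parallel shift, the acceleration factor and the time shift; on a general Riemannian manifold the offset is replaced by parallel transport of a tangent vector along the average trajectory.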
APA, Harvard, Vancouver, ISO, and other styles
14

Mayeli, Nader. "An experimental study of the unrestrained shrinkage of isotropic paper sheets." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/an-experimental-study-of-the-unrestrained-shrinkage-of-isotropicpaper-sheets(6f0e3356-2e17-4433-8303-723a863c11f0).html.

Full text
Abstract:
The influence of several hardwood and softwood pulp fibres on the free shrinkage of isotropic paper sheets was investigated, together with the effects of properties such as density, grammage, Fractional Contact Area (FCA), Water Retention Value (WRV), fines content and fibre morphology. The influence of Lyocell fibre and glycerol on the free shrinkage of isotropic paper sheets is also reported. Experimental results showed that, in general, the free shrinkage of hardwood pulps is a few percent higher than that of softwood pulps at the same density. It was found that although free shrinkage increases with fines content, a high fines content does not imply high shrinkage, and some pulp samples with a higher amount of fines exhibited lower free shrinkage. For all pulps at low densities there is little influence of grammage on free shrinkage, though as density increases a significant dependence is observed. The results showed that the free shrinkage of isotropic paper sheets formed from hardwood pulps is more sensitive to grammage than that of softwood pulps. Interestingly, some pulp samples with the same intrinsic density, WRV and FCA exhibited different free shrinkage over the range of grammages. In addition, some pulp samples with stiffer fibres but a higher amount of fines exhibited higher free shrinkage. Experimental results showed that the longitudinal shrinkage of a fibre is an important parameter, and pulp samples with a higher microfibril angle (MFA) exhibited higher longitudinal shrinkage. Finally, the free shrinkage of isotropic paper sheets was reduced by applying Lyocell fibre and glycerol. Interestingly, adding a small amount of Lyocell fibre, 2%, increased the tensile index, tensile energy absorption (TEA) and modulus, while reducing the free shrinkage by up to 2%. In addition, adding glycerol to the pulp samples not only reduced the free shrinkage of isotropic paper sheets by up to 1.5%, but also slightly improved mechanical properties such as tensile index and stretch.
APA, Harvard, Vancouver, ISO, and other styles
15

Hombourger-Barès, Sabrina. "La contribution du design de l'espace de vente à l'évolution du positionnement de l'enseigne : une analyse longitudinale." Thesis, Dijon, 2014. http://www.theses.fr/2014DIJOE002/document.

Full text
Abstract:
L’une des voies d’innovation plébiscitées par les détaillants pour orchestrer l’évolution de leur proposition de valeur consiste à réviser le design des espaces de vente. Les contributions académiques sur l’expérience en magasin se sont concentrées sur les perceptions des consommateurs ainsi que sur le repérage de pratiques managériales significatives. En prenant pour objet la traduction du repositionnement d’une enseigne par le design expérientiel des espaces de vente, la thèse propose de suivre le processus au cœur même du marché et de mettre à jour les mécanismes qui le sous-tendent. La conduite d’une étude longitudinale de cas enchâssés dévoile une conception holistique, basée sur l’interaction souhaitée du chaland avec le magasin. L’analyse relate l’enchâssement des quatre phases du cycle de vie et permet de recenser pour chaque phase les événements et problématiques associés aux six dimensions du processus. La thèse établit le rôle prégnant de la vision entrepreneuriale du dirigeant, clé de voûte de l’innovation. La proposition de valeur se matérialise par trois composantes gigognes que sont l’intrigue, l’action et le décor. Pour chacune des cinq étapes du parcours-client, des éléments de décor sont implantés pour relayer ou renforcer l’action souhaitée. Ces éléments constituent des mécanismes ou dispositifs destinés à stimuler le système expérientiel du chaland. L’évaluation, qui porte sur la mesure du positionnement perçu et vécu, contribue à ajuster la proposition de valeur au regard de quatre niveaux de cohérence et de la flexibilité du design. Enfin, les logiques de coproduction occasionnent une possible co-destruction de valeur, intentionnelle ou accidentelle
One of the innovative ways favoured by retailers to drive change in their value proposition is to redesign their stores. Academic contributions on the in-store experience have mostly focused on the consumer perspective and on identifying relevant managerial practices. The core of this research studies how the repositioning of a retail brand translates into the experiential design of retail spaces. To this end, the research follows the repositioning process from a managerial perspective and uncovers the mechanisms that underlie it. The longitudinal study of embedded cases reveals a holistic design approach based on the desired interactions between the shopper and the store. The analysis shows the four overlapping phases of the store's life cycle and breaks down the process into six dimensions, each with its own events and issues: vision, plotline, action, decor, assessment and coproduction. The entrepreneurial vision of the leader is the cornerstone of the whole innovation process. The value proposition is embodied by three nested components, namely plotline, action and decor. For each of the five stages of the shopper's journey, elements of the decor are implemented to relay or reinforce the desired action. These are mechanisms or devices meant to stimulate the shopper's experiential system. The assessment, which involves measuring the perceived and experienced positioning, helps to adjust the value proposition in terms of four levels of consistency and the flexibility of the design. Finally, the coproduction of store design between different stakeholders can cause a co-destruction of value, whether intentional or accidental.
APA, Harvard, Vancouver, ISO, and other styles
16

Sousa, Júnior Severino Cavalcante de. "Persistência da lactação e Influência da estrutura de dados sobre a estimação de parâmetros genéticos para a produção de leite de bovinos da raça Holandesa /." Jaboticabal : [s.n.], 2010. http://hdl.handle.net/11449/104906.

Full text
Abstract:
Resumo: Para o presente estudo foram utilizadas 3.202 primeiras lactações, de vacas da raça Holandesa pertencentes a quatro fazendas da região Sudeste, registradas semanalmente, com o objetivo de verificar a influência da estrutura de dados de produção de leite no decorrer da lactação, sobre os parâmetros genéticos estimados por modelos de regressão aleatória. Foram testados quatro arquivos contendo estruturas diferentes de dados. O arquivo de controles semanais (CS) contava com 122.842 controles, o arquivo mensal (CM) com 30.883 controles, o bimestral (CB) com 15.837 controles, e por fim, o arquivo de controles trimestrais (CT) continha de 12.702 controles. O modelo utilizado foi o de regressão aleatória, e como aleatórios foram considerados os efeitos genético aditivo e o de ambiente permanente de animal. A ordem das funções de covariância para estes dois efeitos foi de sexta ordem para efeito genético aditivo e de sétima ordem para o efeito de ambiente permanente, com variâncias residuais heterogêneas a estrutura de variâncias residuais foi modelada por meio de uma "step function" O modelo teve como efeito fixo os grupos de contemporâneos (GC) comuns para todos os arquivos de dados. Os GC foram compostos por fazenda, mês e ano do controle,e como co-variável a idade da vaca ao parto (regressão linear e quadrática) e o número de dias em lactação (regressão fixa para a média populacional). Todos os arquivos de dados estudados foram analisados incluindo arquivo de genealogia composto por 4.380 animais com 3202 mães e 228 touros. As estimativas de herdabilidades para produção de leite apresentaram tendências semelhantes entre os arquivos de dados analisados, com maior semelhança entre os bancos CS, CM e CB. As estimativas de herdabilidade para produção de leite no dia do controle (PLDC) do banco CT apresentou pequenas diferenças, em relação aos demais... (Resumo completo, clicar acesso eletrônico abaixo)
Abstract: For this study, 3,202 first lactations of Holstein cows belonging to four farms in the Southeast region of Brazil, recorded weekly, were used, with the objective of verifying the influence of the structure of milk yield data over the course of lactation on the genetic parameters estimated by random regression models. Four files with different data structures were tested: the weekly test-day file (CS) contained 122,842 records; the monthly file (CM), 30,883 records; the bimonthly file (CB), 15,837 records; and the quarterly file (CT), 12,702 records. A random regression model was used, with the additive genetic and animal permanent environmental effects treated as random. The covariance functions were of sixth order for the additive genetic effect and of seventh order for the permanent environmental effect, with heterogeneous residual variances modeled by a step function. The model included as fixed effects the contemporary groups (GC), common to all data files and composed of farm, month and year of the test, and as covariables the cow's age at calving (linear and quadratic regression) and the number of days in milk (fixed regression for the population average). All data files were analyzed with a pedigree file comprising 4,380 animals, with 3,202 dams and 228 sires. The heritability estimates for milk yield showed similar trends across the data files analyzed, with greater similarity among CS, CM and CB. The CT file showed small differences in the heritability estimates for test-day milk yield compared with the others. The CB file yielded estimates of all the genetic parameters analyzed with the same trend and magnitude as CS and CM, suggesting that milk recording with a bimonthly structure is adequate for genetic evaluations based on random regression models. Breeding values (VG) for partial lactation yields were predicted (MRA100, MRA200, MRA300 and MRA90_305) ... (Complete abstract: click electronic access below)
Orientadora: Lucia Galvão de Albuquerque
Coorientadora: Lenira El Faro Zadra
Banca: Danísio Prado Munari
Banca: Humberto Tonhati
Banca: Maria Eugênia Zerlotti Mercadante
Banca: Claudia Cristina Paro de Paz
Doutor
APA, Harvard, Vancouver, ISO, and other styles
17

Eliasson, Martin, Khawar Malik, and Benjamin Österlund. "A Value Relevant Fundamental Investment Strategy : The use of weighted fundamental signals to improve predictability." Thesis, Uppsala universitet, Företagsekonomiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-145255.

Full text
Abstract:
The aim of this study is to investigate the possibility of improving the investment model defined in Piotroski (2000) and the subsequent research carried out on this model. Our model builds further upon the original fundamental score put forth by Piotroski. This further-developed model is tested in two different contexts: first, a weighted fundamental score is developed that is updated every year in order to control for any changes in the predictive ability of fundamental signals over time; second, the behavior of this score is analyzed in the context of recession and growth cycles of the macroeconomy. Our findings show that the high book-to-market portfolio consists of poorly performing firms, as shown by Fama and French (1995), and is thereby outperformed by both Piotroski's F_score and our own developed scores. The score based on a rolling-window correlation performs a little better than the F_score, but the score based on correlations for prior up and down periods does not. We conclude that improvements have to be made, both to the F_score and to our own developments, to sort winners from losers and obtain an even more profitable zero-investment hedge strategy.
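The weighting idea can be sketched as follows. This is a hypothetical illustration with simulated data: binary fundamental signals (in the spirit of Piotroski's F_score) are weighted by their correlation with past returns, so signals with stronger historical predictive ability contribute more to the score.

```python
import numpy as np

# Hypothetical sketch of a weighted fundamental score: binary signals
# (1 = good, 0 = bad) are weighted by their correlation with past returns,
# estimated on a historical window, then combined into one score per firm.
rng = np.random.default_rng(1)
n_firms, n_signals = 200, 9
signals = rng.integers(0, 2, size=(n_firms, n_signals)).astype(float)

# Simulated past returns used only to estimate the weights (illustrative data;
# earlier signals are built to be more predictive than later ones)
past_returns = signals @ np.linspace(0.05, 0.01, n_signals) + rng.normal(0, 0.05, n_firms)

# Weight each signal by its correlation with past returns, floored at zero
weights = np.array(
    [np.corrcoef(signals[:, j], past_returns)[0, 1] for j in range(n_signals)]
)
weights = np.clip(weights, 0.0, None)
weights = weights / weights.sum()

weighted_score = signals @ weights  # one score per firm, in [0, 1]
```

In the study the weights are re-estimated every year on a rolling window (or on prior up/down periods); here a single estimation window stands in for that procedure.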
APA, Harvard, Vancouver, ISO, and other styles
18

Torku, Thomas K. "Takens Theorem with Singular Spectrum Analysis Applied to Noisy Time Series." Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etd/3013.

Full text
Abstract:
The evolution of big data has led to financial time series becoming increasingly complex, noisy, non-stationary and nonlinear. Takens' theorem can be used to analyze and forecast nonlinear time series, but even small amounts of noise can hopelessly corrupt a Takens approach. In contrast, Singular Spectrum Analysis is an excellent tool for both forecasting and noise reduction. Fortunately, it is possible to combine the Takens approach with Singular Spectrum Analysis (SSA); in fact, estimation of key parameters in Takens' theorem is performed with Singular Spectrum Analysis. In this thesis, we combine the denoising abilities of SSA with the Takens theorem approach to make the manifold reconstruction outcomes of Takens' theorem less sensitive to noise. In particular, in the course of performing SSA on a noisy time series, we branch off into a Takens theorem approach. We apply this approach to a variety of noisy time series.
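A minimal sketch of the combination described above, with illustrative choices of window length, rank, embedding dimension and delay: the series is denoised by basic SSA (trajectory matrix, truncated SVD, anti-diagonal averaging), and the denoised series is then delay-embedded in the sense of Takens' theorem.

```python
import numpy as np

# Illustrative sketch: SSA denoising followed by Takens delay embedding.
# All parameter choices (window, rank, dim, delay) are for illustration only.
def ssa_denoise(x, window=30, rank=2):
    """Rank-r SSA reconstruction of a 1-D series."""
    n = len(x)
    k = n - window + 1
    H = np.column_stack([x[i:i + window] for i in range(k)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]                # rank-r approximation
    # Average over anti-diagonals to recover a series (Hankelization)
    out, counts = np.zeros(n), np.zeros(n)
    for j in range(k):
        out[j:j + window] += H_r[:, j]
        counts[j:j + window] += 1
    return out / counts

def delay_embed(x, dim=3, delay=2):
    """Takens delay-coordinate embedding of a 1-D series."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])

t = np.linspace(0, 8 * np.pi, 400)
noisy = np.sin(t) + 0.3 * np.random.default_rng(2).normal(size=t.size)
clean = ssa_denoise(noisy)          # SSA noise reduction
embedded = delay_embed(clean)       # manifold reconstruction from the denoised series
```

A rank of 2 suffices here because the trajectory matrix of a pure sinusoid has rank 2; for real financial series the rank, window, and embedding parameters must be estimated, which is where the thesis uses SSA itself.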
APA, Harvard, Vancouver, ISO, and other styles
19

Dragset, Ingrid Garli. "Analysis of Longitudinal Data with Missing Values. : Methods and Applications in Medical Statistics." Thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2009. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9945.

Full text
Abstract:

Missing data is a concept used to describe the values that are, for some reason, not observed in datasets. Most standard analysis methods are not feasible for datasets with missing values. The methods handling missing data may result in biased and/or imprecise estimates if methods are not appropriate. It is therefore important to employ suitable methods when analyzing such data. Cardiac surgery is a procedure suitable for patients suffering from different types of heart diseases. It is a physical and psychical demanding surgical operation for the patients, although the mortality rate is low. Health-related quality of life (HRQOL) is a popular and widespread measurement tool to monitor the overall situation of patients undergoing cardiac surgery, especially in elderly patients with naturally limited life expectancies [Gjeilo, 2009]. There has been a growing attention to possible differences between men and women with respect to HRQOL after cardiac surgery. The literature is not consistent regarding this topic. Gjeilo et al. [2008] studied HRQOL in patients before and after cardiac surgery with emphasis on differences between men and women. In the period from September 2004 to September 2005, 534 patients undergoing cardiac surgery at St Olavs Hospital were included in the study. HRQOL were measured by the self-reported questionnaires Short-Form 36 (SF-36) and the Brief Pain Inventory (BPI) before surgery and at six and twelve months follow-up. The SF-36 reflects health-related quality of life measuring eight conceptual domains of health [Loge and Kaasa, 1998]. Some of the patients have not responded to all questions, and there are missing values in the records for about 41% of the patients. Women have more missing values than men at all time points. The statistical analyses performed in Gjeilo et al. [2008] employ the complete-case method, which is the most common method to handle missing data until recent years. 
The complete-case method discards all subjects with unobserved data prior to the analyses. It makes standard statistical analyses accessible and is the default method to handle missing data in several statistical software packages. The complete-case method gives correct estimates only if data are missing completely at random without any relation to other observed or unobserved measurements. This assumption is seldom met, and violations can result in incorrect estimates and decreased efficiency. The focus of this paper is on improved methods to handle missing values in longitudinal data, that is observations of the same subjects at multiple occasions. Multiple imputation and imputation by expectation maximization are general methods that can be applied with many standard analysis methods and several missing data situations. Regression models can also give correct estimates and are available for longitudinal data. In this paper we present the theory of these approaches and application to the dataset introduced above. The results are compared to the complete-case analyses published in Gjeilo et al. [2008], and the methods are discussed with respect to their properties of handling missing values in this setting. The data of patients undergoing cardiac surgery are analyzed in Gjeilo et al. [2008] with respect to gender differences at each of the measurement occasions; Presurgery, six months, and twelve months after the operation. This is done by a two-sample Student's t-test assuming unequal variances. All patients observed at the relevant occasion is included in the analyses. Repeated measures ANOVA are used to determine gender differences in the evolution of the HRQOL-variables. Only patients with fully observed measurements at all three occasions are included in the ANOVA. The methods of expectation maximization (EM) and multiple imputation (MI) are used to obtain plausible complete datasets including all patients. 
EM gives a single imputed dataset that can be analyzed in the same way as the complete-case analysis. MI gives multiple imputed datasets, each of which must be analyzed separately, with the estimates then combined according to a technique called Rubin's rules. Both Student's t-tests and repeated measures ANOVA can be carried out with these imputation methods. The repeated measures ANOVA can be expressed as a regression equation describing the improvement of the HRQOL score over time and the variation between subjects. Mixed regression models (MRM) are well suited to modeling longitudinal data with non-responses, and can be extended beyond the repeated measures ANOVA to fit the data more adequately. Several MRM are fitted to the data of the cardiac surgery patients to display their properties and their advantages over ANOVA. These models are alternatives to the imputation analyses when the aim is to determine gender differences in the improvement of HRQOL after surgery. The imputation methods and mixed regression models are assumed to handle missing data in an adequate way, and they give similar analysis results. These results differ from the complete-case results for some of the HRQOL variables when examining gender differences in the improvement of HRQOL after surgery.
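Rubin's rules, mentioned above, pool the per-dataset point estimates by averaging them, and split the pooled variance into within- and between-imputation components. A minimal sketch with illustrative numbers (not data from the thesis):

```python
import math

def rubins_rules(estimates, variances):
    """Combine point estimates and their variances from m imputed datasets."""
    m = len(estimates)
    q_bar = sum(estimates) / m                              # pooled point estimate
    u_bar = sum(variances) / m                              # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between-imputation variance
    t = u_bar + (1 + 1 / m) * b                             # total variance
    return q_bar, t, math.sqrt(t)

# five analyses of five imputed datasets, pooled into one estimate and standard error
est, var, se = rubins_rules([1.2, 1.0, 1.1, 1.3, 0.9], [0.04, 0.05, 0.04, 0.06, 0.05])
```

The between-imputation term is what distinguishes MI from single imputation: it propagates the uncertainty about the missing values into the final standard error.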

APA, Harvard, Vancouver, ISO, and other styles
20

Chan, Pui-shan, and 陳佩珊. "On the use of multiple imputation in handling missing values in longitudinal studies." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B45009879.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Tavares, Nelsilene Mota Carvalho 1971. "Intervalos de referência condicionais de parâmetros dopplervelocimétricos maternofetais = Conditional reference intervals of materno-fetal Doppler parameters." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/311745.

Full text
Abstract:
Orientadores: Ricardo Barini, Cleisson Fábio Andrioli Peralta
Dissertação (Mestrado) - Universidade Estadual de Campinas, Faculdade de Ciências Médicas
Resumo: Objetivos: Determinar intervalos de referência condicionais (longitudinais) para os valores de índices de pulsatilidade (IP) nos fluxos da artéria umbilical (AU), cerebral média (ACM), ducto venoso (DV) e IP médio das artérias uterinas (AUT), por meio da avaliação de gestantes de baixo risco de uma amostra da população brasileira. Métodos: Estudo observacional descritivo longitudinal realizado de fevereiro de 2010 a maio de 2012, no Hospital da Mulher Prof. Dr. José Aristodemo Pinotti, Centro de Atenção Integral à Saúde da Mulher, Universidade Estadual de Campinas (CAISM-UNICAMP). Exames ultrassonográficos quinzenais foram realizados para obtenção dos IP das AU, AUT, ACM e IP venoso do DV em gestantes de baixo risco da 18ª a 40ª semana de gravidez. Análise estatística: Modelos lineares mistos foram usados para elaboração de intervalos de referência longitudinais (percentis 5, 50 e 95) dos IP dos vasos mencionados. Variáveis maternas e perinatais foram descritas com o uso de medianas e limites (variáveis contínuas), frequências absolutas e relativas (variáveis categóricas). Os IP das porções placentária e abdominal do cordão umbilical foram comparados por meio do teste T para amostras independentes. Valores de p menores do que 0,05 foram considerados significativos. Resultados: Duzentas e três gestantes de baixo risco foram incluídas no estudo. Cento e sessenta e quatro gestantes concluíram o estudo, sendo realizados 1242 exames ultrassonográficos, com mediana de oito exames por paciente (limites de 4 a 12). Houve redução significativa dos IP de todos os vasos estudados com a idade gestacional (IG). As equações obtidas para predição das medianas foram: IP-AU = 1,5602786 - (0,020623 x IG); Logaritmo do IP-ACM = 0,8149111 - (0,004168 x IG) - [0,002543 x (IG - 28,7756)2]; Logaritmo do IP-DV = 0,26691- (0,015414 x IG); IP-AUt = 1,2362403 - (0,014392 x IG). 
Houve diferença significativa entre os IP-AU obtidos nas extremidades placentária e abdominal fetal (p < 0,001). Conclusões: Foram estabelecidos intervalos de referência condicionais (longitudinais) dos principais parâmetros dopplervelocimétricos gestacionais em uma amostra da população brasileira. Estes podem ser mais adequados para o acompanhamento das modificações hemodinâmicas maternofetais em gestações normais ou não, a partir de uma validação
Abstract: Purpose: To determine longitudinal reference intervals of the pulsatility index (PI) of the umbilical artery (UA), middle cerebral artery (MCA), uterine arteries (UTA) and ductus venosus (DV) in low-risk pregnancies of a Brazilian cohort. Methods: Longitudinal observational study performed from February 2010 to May 2012. Obstetric scans were performed fortnightly to measure the PI of the UA, UTA, MCA and DV in low-risk pregnant women between 18 and 40 weeks of gestation. Statistical analysis: Linear mixed models were used for the elaboration of longitudinal reference intervals (5th, 50th and 95th centiles) of these measurements. Maternal and perinatal variables were described using medians and ranges (continuous variables) and absolute and relative frequencies (categorical variables). The PI obtained at the placental and abdominal portions of the umbilical artery were compared using the independent-samples t-test. P-values of less than 0.05 were considered statistically significant. Results: Two hundred and three low-risk pregnancies were included in the study. One hundred and sixty-four pregnant women completed the study and underwent 1242 scans. There was a significant decrease in the PI values of all studied vessels with gestational age (GA). The equations obtained for the prediction of the medians were as follows: PI-UA = 1.5602786 - (0.020623 x GA); Logarithm of the PI-MCA = 0.8149111 - (0.004168 x GA) - [0.002543 x (GA - 28.7756)^2]; Logarithm of the PI-DV = -0.26691 - (0.015414 x GA); PI-UtA = 1.2362403 - (0.014392 x GA). There was a significant difference between the PI-UA obtained at the abdominal and placental ends of the umbilical cord (p < 0.001). Conclusions: Longitudinal reference intervals for the main gestational Doppler parameters were obtained from a Brazilian cohort. Pending further validation, these intervals could be more adequate for the follow-up of materno-fetal haemodynamic modifications in normal and abnormal pregnancies
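The reported median equations can be evaluated directly. A small sketch using the linear coefficients quoted in the abstract, with decimal commas written as points; the logarithmic equations for the MCA and DV are omitted here because the abstract does not state the base of the logarithm:

```python
def median_pi_ua(ga_weeks):
    """Predicted median umbilical artery PI at a gestational age in weeks,
    from the fitted equation reported in the abstract."""
    return 1.5602786 - 0.020623 * ga_weeks

def median_pi_uta(ga_weeks):
    """Predicted median of the mean uterine artery PI."""
    return 1.2362403 - 0.014392 * ga_weeks

# both medians decline linearly with gestational age, as the abstract reports
pi_ua_30 = median_pi_ua(30)    # about 0.94
pi_uta_30 = median_pi_uta(30)  # about 0.80
```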
Mestrado
Saúde Materna e Perinatal
Mestre em Ciências da Saúde
APA, Harvard, Vancouver, ISO, and other styles
22

Fung, David S. "Methods for the estimation of missing values in time series." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2006. https://ro.ecu.edu.au/theses/63.

Full text
Abstract:
A time series is a sequential set of data measured over time. Examples of time series arise in a variety of areas, ranging from engineering to economics. The analysis of time series data constitutes an important area of statistics. Since the data are records taken through time, missing observations in time series data are very common. This occurs because an observation may not be made at a particular time owing to faulty equipment, lost records, or a mistake, which cannot be rectified until later. When one or more observations are missing, it may be necessary to estimate the model and also to obtain estimates of the missing values. By including estimates of missing values, a better understanding of the nature of the data is possible, along with more accurate forecasting. Different series may require different strategies to estimate these missing values. It is necessary to use these strategies effectively in order to obtain the best possible estimates. The objective of this thesis is to examine and compare the effectiveness of various techniques for the estimation of missing values in time series data models.
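One of the simplest strategies that such comparisons typically include is linear interpolation between the nearest observed neighbours. A minimal generic sketch (an illustration of the idea, not code from the thesis):

```python
def interpolate_missing(series):
    """Fill internal missing values (None) by linear interpolation
    between the nearest observed neighbours."""
    filled = list(series)
    for i, v in enumerate(filled):
        if v is not None:
            continue
        # nearest observed index on each side of the gap
        left = next((j for j in range(i - 1, -1, -1) if filled[j] is not None), None)
        right = next((j for j in range(i + 1, len(filled)) if filled[j] is not None), None)
        if left is None or right is None:
            continue  # leading/trailing gaps are left untouched in this sketch
        w = (i - left) / (right - left)
        filled[i] = filled[left] * (1 - w) + filled[right] * w
    return filled

filled = interpolate_missing([1, None, None, 4])  # fills the gap linearly
```

Interpolation only uses local information; model-based approaches (e.g. state-space smoothing) can exploit the series' full dependence structure, which is why such techniques are compared in the first place.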
APA, Harvard, Vancouver, ISO, and other styles
23

Hubbard, Scott. "A longitudinal study of the theory of parallel growth." Oklahoma City : [s.n.], 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
24

Chevallier, Juliette. "Statistical models and stochastic algorithms for the analysis of longitudinal Riemanian manifold valued data with multiple dynamic." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX059/document.

Full text
Abstract:
Par delà les études transversales, étudier l'évolution temporelle de phénomènes connaît un intérêt croissant. En effet, pour comprendre un phénomène, il semble plus adapté de comparer l'évolution des marqueurs de celui-ci au cours du temps plutôt que ceux-ci à un stade donné. Le suivi de maladies neuro-dégénératives s'effectue par exemple par le suivi de scores cognitifs au cours du temps. C'est également le cas pour le suivi de chimiothérapie : plus que par l'aspect ou le volume des tumeurs, les oncologues jugent que le traitement engagé est efficace dès lors qu'il induit une diminution du volume tumoral. L'étude de données longitudinales n'est pas cantonnée aux applications médicales et s'avère fructueuse dans des cadres d'applications variés tels que la vision par ordinateur, la détection automatique d'émotions sur un visage, les sciences sociales, etc. Les modèles à effets mixtes ont prouvé leur efficacité dans l'étude des données longitudinales, notamment dans le cadre d'applications médicales. Des travaux récents (Schiratti et al., 2015, 2017) ont permis l'étude de données complexes, telles que des données anatomiques. L'idée sous-jacente est de modéliser la progression temporelle d'un phénomène par des trajectoires continues dans un espace de mesures, que l'on suppose être une variété riemannienne. Sont alors estimées conjointement une trajectoire moyenne représentative de l'évolution globale de la population, à l'échelle macroscopique, et la variabilité inter-individuelle. Cependant, ces travaux supposent une progression unidirectionnelle et échouent à décrire des situations telles que la sclérose en plaques ou le suivi de chimiothérapie. 
En effet, pour ces pathologies, vont se succéder des phases de progression, de stabilisation et de rémission de la maladie, induisant un changement de la dynamique d'évolution globale. Le but de cette thèse est de développer des outils méthodologiques et algorithmiques pour l'analyse de données longitudinales, dans le cas de phénomènes dont la dynamique d'évolution est multiple et d'appliquer ces nouveaux outils pour le suivi de chimiothérapie. Nous proposons un modèle non-linéaire à effets mixtes dans lequel les trajectoires d'évolution individuelles sont vues comme des déformations spatio-temporelles d'une trajectoire géodésique par morceaux et représentative de l'évolution de la population. Nous présentons ce modèle sous des hypothèses très génériques afin d'englober une grande classe de modèles plus spécifiques. L'estimation des paramètres du modèle géométrique est réalisée par un estimateur du maximum a posteriori dont nous démontrons l'existence et la consistance sous des hypothèses standards. Numériquement, du fait de la non-linéarité de notre modèle, l'estimation est réalisée par une approximation stochastique de l'algorithme EM, couplée à une méthode de Monte-Carlo par chaînes de Markov (MCMC-SAEM). La convergence du SAEM vers les maxima locaux de la vraisemblance observée ainsi que son efficacité numérique ont été démontrées. En dépit de cette performance, l'algorithme SAEM est très sensible à ses conditions initiales. Afin de pallier ce problème, nous proposons une nouvelle classe d'algorithmes SAEM dont nous démontrons la convergence vers des minima locaux. Cette classe repose sur la simulation par une loi approchée de la vraie loi conditionnelle dans l'étape de simulation. Enfin, en se basant sur des techniques de recuit simulé, nous proposons une version tempérée de l'algorithme SAEM afin de favoriser sa convergence vers des minima globaux
Beyond transversal studies, the temporal evolution of phenomena is a field of growing interest. For the purpose of understanding a phenomenon, it appears more suitable to compare the evolution of its markers over time than to do so at a given stage. The follow-up of neurodegenerative disorders is carried out via the monitoring of cognitive scores over time. The same applies for chemotherapy monitoring: rather than by the aspect or size of the tumors, oncologists assess that a given treatment is efficient from the moment it results in a decrease of tumor volume. The study of longitudinal data is not restricted to medical applications and proves successful in various fields of application such as computer vision, automatic detection of facial emotions, social sciences, etc. Mixed effects models have proved their efficiency in the study of longitudinal data sets, especially for medical purposes. Recent works (Schiratti et al., 2015, 2017) allowed the study of complex data, such as anatomical data. The underlying idea is to model the temporal progression of a given phenomenon by continuous trajectories in a space of measurements, which is assumed to be a Riemannian manifold. Then, both a group-representative trajectory and the inter-individual variability are estimated. However, these works assume a unidirectional dynamic and fail to encompass situations like multiple sclerosis or chemotherapy monitoring. Indeed, such diseases follow a chronic course, with phases of worsening, stabilization and improvement, inducing changes in the global dynamic. The thesis is devoted to the development of methodological tools and algorithms suited for the analysis of longitudinal data arising from phenomena that undergo multiple dynamics, and to their application to chemotherapy monitoring. We propose a nonlinear mixed effects model which allows the estimation of a representative piecewise-geodesic trajectory of the global progression, together with spatial and temporal inter-individual variability. 
Particular attention is paid to the estimation of the correlation between the different phases of the evolution. This model provides a generic and coherent framework for studying longitudinal manifold-valued data. Estimation is formulated as a well-defined maximum a posteriori problem which we prove to be consistent under mild assumptions. Numerically, due to the non-linearity of the proposed model, the estimation of the parameters is performed through a stochastic version of the EM algorithm, namely the Markov chain Monte-Carlo stochastic approximation EM (MCMC-SAEM). The convergence of the SAEM algorithm toward local maxima of the observed likelihood has been proved and its numerical efficiency has been demonstrated. However, despite these appealing features, the limit position of this algorithm can strongly depend on its starting position. To cope with this issue, we propose a new version of the SAEM in which we do not sample from the exact distribution in the expectation phase of the procedure. We first prove the convergence of this algorithm toward local maxima of the observed likelihood. Then, in the spirit of simulated annealing, we propose an instantiation of this general procedure to favor convergence toward global maxima: the tempering-SAEM
APA, Harvard, Vancouver, ISO, and other styles
25

Lasley, Chandra Y. "Asian Americans: the mediating effects of family on the longitudinal impact of discrimination on self-esteem and wellbeing." Diss., Kansas State University, 2016. http://hdl.handle.net/2097/32531.

Full text
Abstract:
Doctor of Philosophy
Family Studies and Human Services
Joyce Baptist
The model minority stereotype portrays Asian Americans as resilient, educationally and financially successful, and family-focused, while it downplays the realities of discrimination and its effects on self-esteem. Research suggests that gender roles and immigration experiences are contributing factors to why Asian American women, especially second-generation immigrants, experience greater stress than women of other ethnic groups and Asian American men in general. Considering that most Asian Americans are of East and Southeast Asian heritages influenced by Confucian family values and gender roles, this study examined how these values mediated the association from discrimination to self-esteem during adolescence, and to educational and financial achievement (wellbeing) during adulthood, for second-generation immigrants. Using data from the Children of Immigrants Longitudinal Study (N = 554), results from a partially constrained group-comparison model demonstrated that the Confucian values of familism and family cohesion significantly predicted adolescent self-esteem and adult educational achievement. Men's level of familism endorsement was also uniquely related to experiences with discrimination. Clinical implications and further research directions are discussed.
APA, Harvard, Vancouver, ISO, and other styles
26

Fountain, Jason Morgan. "Differences in Generational Work Values in America and Their Implications for Educational Leadership: A Longitudinal Test of Twenge's Model." Thesis, University of Louisiana at Lafayette, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3622936.

Full text
Abstract:

Three generations of Americans are currently coexisting in the workforce. One of the primary challenges for educational leaders is to understand the similarities and differences in each generation while also educating a new generation of Americans – today's youth. This longitudinal study used data from the General Social Survey to determine if generational work values differ in accord with the five general categories outlined by Twenge.

Several significant differences emerged. First, Millennials rate higher in work ethic than Boomers and GenXers. Additionally, a linear decline from Boomers to Millennials was found in intrinsic values, while Millennials were found to have the highest need for extrinsic values. Finally, a linear decline from Boomers to GenXers to Millennials was evident in relation to social values in the work setting.

The primary implication from this study involves the contradictory nature of Millennials. While they have the highest work ethic, they also rate highest in leisure values and the need for extrinsic values. Further research should be conducted to isolate values pertinent to teachers and a cross-sectional study should be conducted to determine value differences of the current workforce.

APA, Harvard, Vancouver, ISO, and other styles
27

Ferreira, Sabrina Girotto 1978. "Idade gestacional e dopplervelocimetria fetoplacentária e úteroplacentária em relação ao grau placentário de grannum em gestações de baixo risco = estudo longitudinal = Gestational age and fetomaternal doppler parameters according to placenta grannum grading in low-risk pregnancies : a longitudinal study." [s.n.], 2012. http://repositorio.unicamp.br/jspui/handle/REPOSIP/308764.

Full text
Abstract:
Orientadores: Cleisson Fábio Andrioli Peralta, Ricardo Barini
Dissertação (Mestrado) - Universidade Estadual de Campinas, Faculdade de Ciências Médicas
Resumo: Objetivo: Estabelecer intervalos de referência de idade gestacional (IG), índices de pulsatilidade (IP) das artérias umbilical (AU) e média das artérias uterinas (AUT) de acordo com o grau placentário de Grannum. Esses dados servirão como base para estudo em andamento que avalia o impacto do amadurecimento precoce da placenta nos parâmetros dopplervelocimétricos acima mencionados e os resultados perinatais. Sujeitos e métodos: estudo prospectivo longitudinal observacional realizado em hospital universitário terciário, em que 133 gestações de baixo risco foram avaliadas quinzenalmente pela USG, entre 18 e 41 semanas. A classificação placentária de Grannum et al. (1), IP-AU e IPm-AUT foram obtidos em cada exame. Os intervalos de referência (mediana, 5º, 10º, 90º e 95º percentil) de IG, IP-AU e IPm-AUT foram estabelecidos para cada grau placentário. Testes de Mann-Whitney com correção de Bonferroni foram utilizados para comparar os parâmetros acima mencionados entre dois diferentes graus placentários. O valor de p bicaudal menor que 0,05 foi considerado estatisticamente significativo. Resultados: A IG aumentou significativamente com a mudança do grau placentário. IP-AU e IP-AUT reduziram significativamente do grau zero para o um e do grau um para o dois, mas permaneceram estáveis depois disso. Conclusões: Os intervalos de referência dos parâmetros dopplervelocimétricos que refletem implantação normal e função placentária foram elaborados como um primeiro passo que permitirá comparar e testar nossos dados com as publicações existentes para a predição de resultados perinatais em casos de amadurecimento precoce da placenta em gestações de baixo risco
Abstract: Objective: To establish reference intervals of gestational age (GA), umbilical artery (UmbA) and mean uterine artery (mUtA) Doppler pulsatility indexes (PI) according to placental grade. This will serve as a basis for an ongoing study aimed at evaluating the impact of early placental ageing and the abovementioned Doppler parameters on pregnancy outcomes. Materials and Methods: Longitudinal observational study conducted in a tertiary university hospital, where 133 low-risk pregnancies were scanned fortnightly from 18 to 41 weeks. Placental classification according to Grannum et al. (1), UmbA-PI and mUtA-PI were obtained in each scan. Reference intervals (median, 5th, 10th, 90th and 95th centiles) of GA, UmbA-PI and mUtA-PI were established for each placental grade. Mann-Whitney U tests with Bonferroni adjustments were used to compare these parameters between pairs of placental grades. Two-tailed p-values of less than 0.05 were considered statistically significant. Results: GA significantly increased as placental grade changed. UmbA-PI and mUtA-PI significantly decreased between grades zero and one and between grades one and two, but remained stable between grades two and three. Conclusions: Reference intervals of GA and of the Doppler parameters which reflect normal implantation and function of the placenta were elaborated as a first step to allow further testing and comparison of our data with those previously published for the prediction of pregnancy outcomes in cases where early placental ageing is suspected
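The Bonferroni adjustment mentioned above simply divides the significance level by the number of comparisons performed. A minimal sketch with illustrative p-values (not data from the thesis):

```python
def bonferroni(p_values, alpha=0.05):
    """Flag which raw p-values remain significant after a Bonferroni correction
    for the total number of comparisons."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# with three pairwise grade comparisons the per-test threshold is 0.05 / 3
flags = bonferroni([0.01, 0.04, 0.2])  # only the first comparison survives
```

The correction controls the family-wise error rate at alpha, at the cost of reduced power for each individual comparison.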
Mestrado
Saúde Materna e Perinatal
Mestre em Ciências da Saúde
APA, Harvard, Vancouver, ISO, and other styles
28

Loker, Troy. "Character strengths and virtues of young internationally adopted Chinese children : a longitudinal study from preschool to school age." [Tampa, Fla] : University of South Florida, 2009. http://purl.fcla.edu/usf/dc/et/SFE0003270.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Moreno, Betancur Margarita. "Regression modeling with missing outcomes : competing risks and longitudinal data." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA11T076/document.

Full text
Abstract:
Les données manquantes sont fréquentes dans les études médicales. Dans les modèles de régression, les réponses manquantes limitent notre capacité à faire des inférences sur les effets des covariables décrivant la distribution de la totalité des réponses prévues sur laquelle porte l'intérêt médical. Outre la perte de précision, toute inférence statistique requière qu'une hypothèse sur le mécanisme de manquement soit vérifiée. Rubin (1976, Biometrika, 63:581-592) a appelé le mécanisme de manquement MAR (pour les sigles en anglais de « manquant au hasard ») si la probabilité qu'une réponse soit manquante ne dépend pas des réponses manquantes conditionnellement aux données observées, et MNAR (pour les sigles en anglais de « manquant non au hasard ») autrement. Cette distinction a des implications importantes pour la modélisation, mais en général il n'est pas possible de déterminer si le mécanisme de manquement est MAR ou MNAR à partir des données disponibles. Par conséquent, il est indispensable d'effectuer des analyses de sensibilité pour évaluer la robustesse des inférences aux hypothèses de manquement.Pour les données multivariées incomplètes, c'est-à-dire, lorsque l'intérêt porte sur un vecteur de réponses dont certaines composantes peuvent être manquantes, plusieurs méthodes de modélisation sous l'hypothèse MAR et, dans une moindre mesure, sous l'hypothèse MNAR ont été proposées. En revanche, le développement de méthodes pour effectuer des analyses de sensibilité est un domaine actif de recherche. Le premier objectif de cette thèse était de développer une méthode d'analyse de sensibilité pour les données longitudinales continues avec des sorties d'étude, c'est-à-dire, pour les réponses continues, ordonnées dans le temps, qui sont complètement observées pour chaque individu jusqu'à la fin de l'étude ou jusqu'à ce qu'il sorte définitivement de l'étude. 
Dans l'approche proposée, on évalue les inférences obtenues à partir d'une famille de modèles MNAR dits « de mélange de profils », indexés par un paramètre qui quantifie le départ par rapport à l'hypothèse MAR. La méthode a été motivée par un essai clinique étudiant un traitement pour le trouble du maintien du sommeil, durant lequel 22% des individus sont sortis de l'étude avant la fin.Le second objectif était de développer des méthodes pour la modélisation de risques concurrents avec des causes d'évènement manquantes en s'appuyant sur la théorie existante pour les données multivariées incomplètes. Les risques concurrents apparaissent comme une extension du modèle standard de l'analyse de survie où l'on distingue le type d'évènement ou la cause l'ayant entrainé. Les méthodes pour modéliser le risque cause-spécifique et la fonction d'incidence cumulée supposent en général que la cause d'évènement est connue pour tous les individus, ce qui n'est pas toujours le cas. Certains auteurs ont proposé des méthodes de régression gérant les causes manquantes sous l'hypothèse MAR, notamment pour la modélisation semi-paramétrique du risque. Mais d'autres modèles n'ont pas été considérés, de même que la modélisation sous MNAR et les analyses de sensibilité. Nous proposons des estimateurs pondérés et une approche par imputation multiple pour la modélisation semi-paramétrique de l'incidence cumulée sous l'hypothèse MAR. En outre, nous étudions une approche par maximum de vraisemblance pour la modélisation paramétrique du risque et de l'incidence sous MAR. Enfin, nous considérons des modèles de mélange de profils dans le contexte des analyses de sensibilité. Un essai clinique étudiant un traitement pour le cancer du sein de stade II avec 23% des causes de décès manquantes sert à illustrer les méthodes proposées
Missing data are a common occurrence in medical studies. In regression modeling, missing outcomes limit our capability to draw inferences about the covariate effects of medical interest, which are those describing the distribution of the entire set of planned outcomes. In addition to losing precision, the validity of any method used to draw inferences from the observed data will require that some assumption about the mechanism leading to missing outcomes holds. Rubin (1976, Biometrika, 63:581-592) called the missingness mechanism MAR (for “missing at random”) if the probability of an outcome being missing does not depend on missing outcomes when conditioning on the observed data, and MNAR (for “missing not at random”) otherwise. This distinction has important implications regarding the modeling requirements to draw valid inferences from the available data, but generally it is not possible to assess from these data whether the missingness mechanism is MAR or MNAR. Hence, sensitivity analyses should be routinely performed to assess the robustness of inferences to assumptions about the missingness mechanism. In the field of incomplete multivariate data, in which the outcomes are gathered in a vector for which some components may be missing, MAR methods are widely available and increasingly used, and several MNAR modeling strategies have also been proposed. On the other hand, although some sensitivity analysis methodology has been developed, this is still an active area of research. The first aim of this dissertation was to develop a sensitivity analysis approach for continuous longitudinal data with drop-outs, that is, continuous outcomes that are ordered in time and completely observed for each individual up to a certain time-point, at which the individual drops out so that all the subsequent outcomes are missing. 
The proposed approach consists in assessing the inferences obtained across a family of MNAR pattern-mixture models indexed by a so-called sensitivity parameter that quantifies the departure from MAR. The approach was prompted by a randomized clinical trial investigating the benefits of a treatment for sleep-maintenance insomnia, from which 22% of the individuals had dropped out before the study end. The second aim was to build on the existing theory for incomplete multivariate data to develop methods for competing risks data with missing causes of failure. The competing risks model is an extension of the standard survival analysis model in which failures from different causes are distinguished. Strategies for modeling competing risks functionals, such as the cause-specific hazards (CSH) and the cumulative incidence function (CIF), generally assume that the cause of failure is known for all patients, but this is not always the case. Some methods for regression with missing causes under the MAR assumption have already been proposed, especially for semi-parametric modeling of the CSH. But other useful models have received little attention, and MNAR modeling and sensitivity analysis approaches have never been considered in this setting. We propose a general framework for semi-parametric regression modeling of the CIF under MAR using inverse probability weighting and multiple imputation ideas. Also under MAR, we propose a direct likelihood approach for parametric regression modeling of the CSH and the CIF. Furthermore, we consider MNAR pattern-mixture models in the context of sensitivity analyses. In the competing risks literature, a starting point for methodological developments for handling missing causes was a stage II breast cancer randomized clinical trial in which 23% of the deceased women had missing cause of death. We use these data to illustrate the practical value of the proposed approaches
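The pattern-mixture sensitivity idea described above can be illustrated with a toy delta-adjustment: values imputed under MAR are shifted by a sensitivity parameter delta, and the analysis is repeated over a range of deltas to see how far inferences move as the MAR assumption is relaxed. This is only a schematic illustration with hypothetical numbers, not the dissertation's estimator:

```python
def delta_sensitivity(observed, imputed, deltas):
    """For each sensitivity parameter delta, shift the imputed (unobserved)
    outcomes by delta -- a departure from MAR -- and recompute the mean."""
    n = len(observed) + len(imputed)
    results = {}
    for d in deltas:
        total = sum(observed) + sum(v + d for v in imputed)
        results[d] = total / n
    return results

# delta = 0 reproduces the MAR analysis; nonzero deltas probe MNAR departures
res = delta_sensitivity([5.0, 6.0, 7.0], [6.0], deltas=[-1.0, 0.0, 1.0])
```

If the substantive conclusion is stable over a plausible range of deltas, the inference is considered robust to departures from MAR.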
APA, Harvard, Vancouver, ISO, and other styles
30

Ochoa, Banafsheh K. "Maxillary growth in comparison to mandibular growth." Oklahoma City : [s.n.], 2002. http://library.ouhsc.edu/epub/theses/Ochoa-Banafsheh-K.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Ignell, Caroline. "Exploring changes of conceptions, values and beliefs concerning the environment : A longitudinal study of upper secondary school students in business and economics education." Doctoral thesis, Stockholms universitet, Institutionen för pedagogik och didaktik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-147639.

Full text
Abstract:
This thesis examines students' understanding of economic aspects of global environmental problems. The first aim is to identify and characterise changes in business and economics students' conceptions of negative environmental effects and pricing goods and services. The second aim is to identify and characterise changes in students' values, beliefs and personal norms regarding effective solutions to climate change problems. Three studies were carried out with students in Swedish upper secondary schools. The first study used an open-ended questionnaire and is presented in Article I. The second and third studies drew on a longitudinal study, using both qualitative and quantitative research methods, and results are presented in Article II and Article III. Article I shows that students' awareness of environmental issues varies in relation to the type of good. Some goods are seen as more harmful to nature than others; for example, jeans were not perceived as environmentally negative while beef burgers and travel services were to some extent. This indicates that environmental references are often characterised through perceptible aspects of goods' production, i.e. being more expensive because of environmentally friendly production. Furthermore, some understanding of negative externalities was revealed. Interestingly, when considering value aspects of how prices should be set, students more frequently refer to environmental impact. Article II describes changes in students' price and environmental conceptions over the course of a year. It identifies the fragmentary nature of students' everyday thinking in relation to productivity, consumer preference and negative externalities. Differences in conceptions of how prices are linked to negative impact are characterised in terms of basic, partial and complex understandings of productivity as well as basic and partial understandings of consumers' influences. 
Partial conceptions are seen as students’ conceptions in a process of change towards a more scientific understanding of price and negative environmental impact. Most interestingly, the results show that more than one aspect of environmental impact and pricing are simultaneously relevant. This is highlighted by a change from views putting productivity at the centre for how prices are set to include consumers’ preferences when judgmentally describing how prices should be set. The results conclude that students show a broader content knowledge regarding pricing and the environment when including normative preferences. Article III explores changes in students’ value orientations, beliefs regarding efficient solutions to climate change and norms for pro-environmental actions. Small changes are observed regarding the three constructs. Value changes are reported in terms of a small average increase in importance of altruistic, biospheric and egoistic orientations while common individual changes are shown in shifts between weak and strong values. Beliefs regarding efficient climate change solutions are taxes and legislations while changes in market prices are perceived as being least effective. The findings show no direct relations between values and norms hence change in norms is associated with values through changes in beliefs.

At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 3: Manuscript.

APA, Harvard, Vancouver, ISO, and other styles
32

Sakamoto, Kathia. "Construção de curvas de normalidade dos índices dopplervelocimétricos das circulações uteroplacentária, fetoplacentária e fetal." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/5/5139/tde-01062007-122759/.

Full text
Abstract:
INTRODUCTION: Doppler velocimetry is a non-invasive method that allows the evaluation of placental function and of the fetal response to hypoxia in high-risk pregnancies. Many publications have presented reference ranges for the Doppler velocimetric indices of the vessels of obstetric interest. However, the oldest used Doppler ultrasound technology inferior to what is currently available, and others showed methodological flaws: sample size and selection, number of exams performed, or the use of inappropriate statistical methods to analyze the data. OBJECTIVE: The purpose of this study is to construct normal reference ranges for the Doppler velocimetric indices of the uteroplacental (maternal right and left uterine arteries - UAT), fetoplacental (umbilical arteries - UA) and fetal (middle cerebral artery - MCA, descending thoracic aorta - DTA, ductus venosus - DV) circulations. Prospective and longitudinal data were obtained from normal pregnancies, from 14 to 42 gestational weeks, with normal delivery and neonatal outcomes. METHODS: Pregnant women were recruited from low-risk prenatal care units and examined in the Fetal Surveillance Unit of the Obstetric Clinic (Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, SP, Brazil) between April 2000 and May 2004. Two hundred ninety-four patients were recruited and 154 qualified to be part of the sample. A total of 908 Doppler velocimetric evaluations were performed by the same operator, with a success rate greater than 94% for each vessel studied, using ultrasound equipment with pulsatile Doppler and real-time color flow mapping technology. Random-effects (multilevel) models as described by Royston (1995) were used in the analysis of the data. The indices and the gestational age (in weeks) were tested for normal distribution using fitted transformation models and fractional polynomials, respectively. 
When the transformed data showed no improvement in normality, the original values were used. RESULTS: Conditional and unconditional reference intervals at the 90% confidence level were obtained for the indices of each vessel. Since the conditional and unconditional reference-interval curves were very similar, the unconditional intervals were adopted, as they produced smoother curves. CONCLUSION: Normal reference ranges (5th, 50th and 95th percentiles) of the Doppler velocimetric indices S/D ratio, pulsatility index (PI) and resistance index (RI) of the UAT, UA, fetal MCA and DTA, and of the S/a ratio, pulsatility index for veins (IPV) and DV index (IDV) of the DV, were obtained from normal pregnancies between 14 and 42 weeks' gestation, in a prospective and longitudinal study of a Brazilian population.
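The construction described above (fitting the mean and spread of an index against gestational age, then deriving the 5th, 50th and 95th percentiles under normality) can be sketched as follows. This is a minimal illustration only: the quadratic mean and linear-SD forms are placeholder choices, not the fractional polynomials actually selected in the thesis.

```python
import numpy as np

def reference_interval(ga, index):
    """Unconditional 90% reference interval (5th/50th/95th percentiles)
    for a Doppler index as a function of gestational age (GA), assuming
    normality at each GA. A simple polynomial stands in for the
    fractional-polynomial models of Royston (1995)."""
    # Fit the mean as a quadratic in GA (illustrative functional form).
    coef_mean = np.polyfit(ga, index, deg=2)
    mean = np.polyval(coef_mean, ga)
    # Model the residual SD as a linear function of GA:
    # for a normal variable, E|residual| = SD * sqrt(2/pi).
    abs_resid = np.abs(index - mean) * np.sqrt(np.pi / 2.0)
    coef_sd = np.polyfit(ga, abs_resid, deg=1)
    sd = np.polyval(coef_sd, ga)
    z = 1.6449  # normal quantile for the 5th/95th percentiles
    return mean - z * sd, mean, mean + z * sd
```

About 90% of observations should fall between the two outer curves when the assumed forms are adequate.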
APA, Harvard, Vancouver, ISO, and other styles
33

Salmeh, Louay. "L'abandon sportif : des motifs d'abandon aux modèles théoriques : une recherche longitudinale chez les handballeuses et les basketteuses." Phd thesis, Université de Bourgogne, 2011. http://tel.archives-ouvertes.fr/tel-00661589.

Full text
Abstract:
The aim of this thesis was, first, to identify the reasons for dropout among Syrian female handball and basketball players and, second, to test three theoretical models that explain sport dropout. A first study (N = 402) set out to identify the players' dropout motives. The results showed that the players consider certain motives (e.g., becoming pregnant, having a husband who no longer allows his wife to practise sport) to be very important reasons for dropping out. Dropout motives varied with the type and level of the players as well as with age and experience. A second study (N = 328) tested the sport-dropout model of Sarrazin et al. (2002) in order to explain how the perceived coaching climate leads to dropout among these players. The results provided strong support for this model and for the causal chain of Vallerand's (1997) hierarchical model. They showed that the impact of the coach's motivational climate on motivation is mediated by the players' perceptions of competence, relatedness and autonomy. Moreover, motivation is correlated with dropout intentions, which predict actual dropout among these players. A third study (N = 237) tested the sport-commitment model of Guillet et al. (2002). The results could not support this model, but a modified version was validated. In that version, three antecedents predicted the players' commitment (i.e., social constraints, the attractiveness of other activities and perceived benefits), and commitment was negatively related to dropout behaviour. A fourth study (N = 328) focused on the link between the players' gender identity and their persistence in the activities practised. 
The results showed that Androgynous- and Masculine-typed players persisted much longer in their masculine activities than Feminine-typed players. Finally, a fifth study (N = 328) tested a simplified version of the Eccles et al. (1983) model proposed by Guillet et al. (2006). The results partially supported this model, showing that the influence of masculinity on dropout intentions is mediated by the value of the activity and perceived ability; these, however, do not mediate between femininity and dropout intentions. Dropout intentions predicted actual dropout among these players. In conclusion, while dropout motives give an idea of the reasons that probably lead to dropout, the theoretical models provide a better explanation of actual dropout.
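The mediation chains tested in studies 2 and 5 (e.g., motivational climate → perceived competence → motivation → dropout intention) can be illustrated with a minimal product-of-coefficients sketch. The variable roles below are hypothetical stand-ins; the thesis itself tested full structural equation models, not this two-regression shortcut.

```python
import numpy as np

def simple_mediation(x, m, y):
    """Product-of-coefficients mediation estimate (a * b).
    x: predictor (e.g., perceived motivational climate),
    m: mediator (e.g., perceived competence),
    y: outcome (e.g., dropout intention). All 1-D arrays."""
    def slope(pred, out):
        X = np.column_stack([np.ones_like(pred), pred])
        return np.linalg.lstsq(X, out, rcond=None)[0][1]
    a = slope(x, m)  # path a: x -> m
    # Path b: m -> y, controlling for x.
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    c = slope(x, y)  # total effect of x on y
    return {"a": a, "b": b, "indirect": a * b,
            "total": c, "direct": c - a * b}
```

In ordinary least squares the total effect decomposes exactly into direct plus indirect, which is what makes this decomposition interpretable.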
APA, Harvard, Vancouver, ISO, and other styles
34

Ayvazyan, Vigen. "Etude de champs de température séparables avec une double décomposition en valeurs singulières : quelques applications à la caractérisation des propriétés thermophysiques des matérieux et au contrôle non destructif." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14671/document.

Full text
Abstract:
Infrared thermography is a widely used method for the characterization of the thermophysical properties of materials. The advent of laser diodes, which are handy, inexpensive and offer a broad spectrum of characteristics, extends the metrological possibilities of infrared cameras and provides a combination of new powerful tools for thermal characterization and non-destructive evaluation. However, this new dynamic has also brought numerous difficulties that must be overcome, such as the processing of large volumes of noisy data and the low sensitivity of such data to the estimated parameters. This requires revisiting the existing methods of signal processing and adopting new sophisticated mathematical tools for data compression and the processing of relevant information. The new strategies consist in using orthogonal transforms of the signal as prior data-compression tools, which allow measurement noise to be reduced and controlled. Sensitivity analysis, based on the local study of correlations between partial derivatives of the experimental signal, completes these new strategies. A theoretical analogy with Fourier space was developed in order to better understand the "physical" meaning of modal approaches. The response to an instantaneous point source of heat was revisited both numerically and experimentally. By exploiting the separability of temperature fields, a new inversion technique based on a double singular value decomposition of the experimental signal was introduced. In comparison with previous methods, it takes two- or three-dimensional heat diffusion into account and therefore offers better exploitation of the spatial content of infrared images. Numerical and experimental examples allowed us to validate, in a first approach, this new estimation method for longitudinal thermal diffusivities. Non-destructive testing applications based on the new technique are also presented. An old issue, which consists in determining the initial temperature field from noisy data, was approached in a new light. 
The necessity of knowing the thermal diffusivities of an orthotropic medium, and the need to take into account often three-dimensional heat transfer, are complicated issues. The implementation of the double singular value decomposition achieved interesting results given the simplicity of the method. Indeed, modal approaches are statistical methods based on high-volume data processing, presumed more robust to measurement noise, as was indeed observed.
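The core idea, exploiting the separability of the temperature field T(x, t) ≈ f(x)·g(t) so that SVD truncation compresses and denoises the infrared record, can be illustrated with a single-SVD sketch. This is only the underlying principle: the thesis's actual method applies the decomposition twice to handle two- and three-dimensional fields.

```python
import numpy as np

def svd_denoise(T, rank):
    """Truncate the SVD of a space-time temperature record T
    (n_space x n_time) to its leading modes. For a separable field
    T(x, t) = f(x) g(t), rank 1 already captures the signal and
    discards most of the measurement noise."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]
```

The leading left and right singular vectors then play the role of dominant spatial and temporal modes, which is what makes the subsequent parameter estimation tractable on large noisy image stacks.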
APA, Harvard, Vancouver, ISO, and other styles
35

Audulv, Åsa. "Being creative and resourceful : Individuals’ abilities and possibilities for self-management of chronic illness." Doctoral thesis, Mittuniversitetet, Institutionen för hälsovetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-13512.

Full text
Abstract:
Individuals’ self-management styles are crucial to how they manage to live with illness. Commonly investigated factors include social support, self-efficacy, health beliefs, and demographics, but there is a gap in the literature with regard to in-depth studies of how those factors actually influence an individual’s self-management. The aim of this thesis was to investigate the underlying mechanisms of self-management from the perspective of individuals living with chronic illness. Interviews were conducted with 47 individuals with various chronic illnesses, some of them repeatedly over two and a half years (a total of 107 interviews). The material was analysed using constructive grounded theory, content analysis, phenomenography, and interpretive description. The Self-management Support Model identified aspects that influenced participants’ self-management: economic and social situation, social support, views and perspectives on illness, attribution of responsibility, and the ability to integrate self-management into an overall life situation. For example, individuals with a life-oriented or disease-oriented perspective on illness prioritized different aspects of self-management. People who attributed responsibility internally performed a more complex self-management regimen than individuals who attributed it externally. In conclusion, individuals who were creative and resourceful had a better chance of tailoring a self-management regimen that suited them well. People in more disadvantaged positions (e.g., financial strain, limited support, or severe intrusive illness) experienced difficulty in finding a method of self-management that fit their life situation. These findings can inspire healthcare providers to initiate a reflective dialogue about self-management with their patients.
Exploring individuals’ conceptions as a way to understand self-management among people living with long-term medical conditions
APA, Harvard, Vancouver, ISO, and other styles
36

Jakobsson, Anton. "Prognostic value of global longitudinal strain in patients with myocardial infarction and normal LVEF." Thesis, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-380395.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Campbell, Paul. "Analysing and understanding changes in "fixed" characteristics over time: The value of longitudinal data." Phd thesis, 2022. http://hdl.handle.net/1885/274331.

Full text
Abstract:
Over the life-course, people change. Changes span circumstances, demographic characteristics, and individual preferences, attitudes and behaviour. The work in this thesis centres on understanding such change, and in particular, changes in constructs that are often taken to be fixed. It harnesses cross-sectional and longitudinal datasets to demonstrate change, and seeks to link change at the individual level with broader demographic processes. The studies focus on four constructs: migration (intentions, certainty, and outcomes), attitudes to immigration, Indigenous identification, and personality. Each is studied using longitudinal data. In the case of Australian Indigenous identification, the work presented is the first to observe change directly, using a large, nationally representative longitudinal dataset. Identification change has long been cited as a likely source of Indigenous population growth, but had not previously been observed directly. In the case of personality, the Big Five personality traits are lauded for both their predictive validity and their stability across the life-course, though recent research suggests they do change over time. This thesis contributes to a growing body of research assessing the stability and pattern of change of personality traits over the lifespan, assessing change attributable to age, to life events, and to change in circumstance. The studies also consider the relationship between individual-level psychological constructs and broader demographic outcomes, including migration intention, behaviour, and attitudes to immigration. This research shows that dispositional traits demonstrably predict differences in migration intentions, certainty, and outcomes. They also affect the strength of the relationship between migration intention and subsequent migration. Changes at the individual level can have important implications at the population level, and longitudinal change has implications for cross-sectional analysis. 
In quantitatively assessing change in Indigenous identification, this work demonstrates that new identification appears to account for a considerable proportion of the growth in the Indigenous population. The newly identified group also appears to possess different characteristics from those who consistently identified as Indigenous across the two time points. In the case of Indigenous Australians, identification change can drive change in health and socioeconomic outcomes at the population level, as observed in multiple cross-sectional analyses. That is, improvements over time in life outcomes may in part be attributed to changes in the demographic identifying as Indigenous. Changes in personality traits, particularly change driven by life events that are themselves typically predicted by scores on the Big Five, introduce endogeneity between personality and life events, and this should be considered when personality is used to predict life outcomes. The studies utilise several large datasets, each with a unique profile of scope and collection methodology. Attention is given to the relationship between datasets and analysis, and to how analysis can be used to validate data, particularly probabilistically linked longitudinal data. Changes in seemingly fixed attributes can be analysed meaningfully, particularly as more longitudinal datasets become available. The implications of such analysis stretch far, and longitudinal change should be considered in cross-sectional research.
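The wave-to-wave change in a "fixed" characteristic that this thesis measures can be tabulated directly from panel data; a minimal pandas sketch follows, with all column names hypothetical and waves assumed to be coded 1 and 2.

```python
import pandas as pd

def identification_transitions(df, id_col="person_id",
                               wave_col="wave", flag_col="identifies"):
    """Cross-tabulate a characteristic across two waves of a
    longitudinal panel. The off-diagonal cells are exactly the change
    that cross-sectional data cannot reveal."""
    # One row per person, one column per wave.
    wide = df.pivot(index=id_col, columns=wave_col, values=flag_col)
    wide = wide.dropna()  # keep respondents observed in both waves
    return pd.crosstab(wide[1], wide[2],
                       rownames=["wave 1"], colnames=["wave 2"])
```

The (0, 1) cell of the resulting table counts the "newly identified" group, whose size and distinct characteristics drive the population-level effects discussed above.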
APA, Harvard, Vancouver, ISO, and other styles
38

Gerardo, Bianca Salguinho. "Visuoconstructional impairment in Mild Cognitive Impairment: Diagnostic utility and predictive value in the progression to Alzheimer’s disease." Master's thesis, 2018. http://hdl.handle.net/10316/85593.

Full text
Abstract:
Master's dissertation (Mestrado Integrado) in Psychology presented to the Faculdade de Psicologia e de Ciências da Educação
Introduction: Visuoconstruction is a cognitive domain that can be defined as the ability to organize and manually manipulate spatial information in order to create a design or copy a model. As a complex domain, visuoconstruction requires the interaction of different processes, such as visuoperception, visuospatial analysis, fine motor skills, attention and executive functions. In Alzheimer's disease (AD), this domain is frequently assessed through drawing tasks, including both copying and drawing-to-command. Given the complexity of this capacity, and since AD involves the impairment of several brain areas, visuoconstruction can be compromised in the early stages of the disease. Objectives: (1) To assess visuoconstructional impairments in Mild Cognitive Impairment (MCI) and study differences between amnestic single-domain MCI (aMCI) and amnestic multidomain MCI (amMCI) patients; (2) to assess the value of visuoconstruction as a predictor of AD. Methodology: One hundred and eighty-four patients diagnosed with aMCI (n=121) and amMCI (n=63) were followed in a dementia consultation and evaluated annually through an extensive neuropsychological battery. All patients were assessed at the cognitive, functional and psychopathological levels. To assess visuoconstruction, we applied the following tasks: the Pentagon Drawing Test (PDT), the Cube Copying Test (CCT), the Constructional Praxis Task (CPT) and the drawing-to-command condition of the Clock Drawing Test (CDT). The visuospatial domain of the Montreal Cognitive Assessment (MoCA) was also used as a measure of visuoconstruction. Results: Overall, amMCI patients presented worse visuoconstructional abilities than aMCI patients. Although these differences were detected by both copying and drawing-to-command tasks, the less complex copying tasks (the PDT and the two simpler items of the CPT) could not distinguish the two groups (p>.05). 
Regarding the differences between patients who converted (C) and patients who did not convert (NC) to dementia, the C group included significantly more late-onset MCI patients than the NC group (t(135.66)= -4.233, p<.001). Survival analysis showed significantly different patterns of conversion for patients with early onset (EO) and late onset (LO) (χ2(1)=13.416, p<.001), with the latter presenting lower probabilities of remaining stable. Multilevel analysis of EO patients yielded the CDT (χ2(1)=5.019, p=.025) and the interaction effect between the PDT and Time (χ2(1)=6.655, p=.010) as significant predictors of dementia. For LO patients, the predictors yielded were the CDT (χ2(1)=16.677, p<.001) and the visuospatial domain of the MoCA (χ2(1)=4.157, p=.041). In both models, worse visuoconstructional scores implied an increase in the chances of a patient converting versus not converting. Conclusions: Our results suggest that amMCI patients present more pronounced visuoconstructional impairment than aMCI patients. Moreover, visuoconstruction is a significant predictor of AD, with greater deficits leading to an increased probability of converting from MCI to dementia. Since late-onset patients are less likely to remain stable than their early-onset counterparts, late-onset patients with poor visuoconstructive abilities may constitute a group at high risk of conversion. In this sense, visuoconstructive deficits may be used as an important warning sign that a patient may develop dementia.
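The survival comparison reported here (different conversion patterns for early- vs. late-onset patients) rests on the standard Kaplan-Meier product-limit estimator, which can be written compactly; the follow-up times below are made up for illustration, not taken from the study.

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate. time: follow-up times;
    event: 1 = event observed (e.g., conversion to dementia),
    0 = censored. Returns (event_time, survival) pairs."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    order = np.argsort(time)
    time, event = time[order], event[order]
    surv, s = [], 1.0
    for t in np.unique(time[event == 1]):
        at_risk = np.sum(time >= t)          # still under observation
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk          # product-limit update
        surv.append((t, s))
    return surv
```

Fitting the estimator separately to the EO and LO groups and comparing the resulting curves (e.g., with a log-rank test) is the standard route to the kind of χ² statistic quoted above.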
APA, Harvard, Vancouver, ISO, and other styles
39

Pestana, Carolina Alexandre Festas Gomes. "Study of biomarkers in Multiple Myeloma: a statistical approach for longitudinal assessment of extracellular vesicles and its prognostic value." Master's thesis, 2021. http://hdl.handle.net/10451/49084.

Full text
Abstract:
Master's project report in Biostatistics, Faculdade de Ciências, Universidade de Lisboa, 2021
Multiple Myeloma (MM) is a haematological cancer that arises from the proliferation of clonal plasma cells, which accumulate in the bone marrow and secrete monoclonal immunoglobulins (mIg). This proliferation is accompanied by decreased bone-marrow function and by the destruction of bone tissue (Mangan, 2005). MM is almost always preceded by two disease stages: MGUS (Monoclonal Gammopathy of Undetermined Significance) and an intermediate/indolent stage called SMM (Smoldering Multiple Myeloma). In these stages, none of the features that define MM is present, and patients require only periodic medical surveillance (Ho et al., 2020). In most cases the patient is diagnosed from laboratory tests, since symptoms only appear at a later phase. The final diagnosis, however, is made by bone-marrow aspiration and biopsy. These tests, although invasive, are considered the gold standard for diagnosis, since they allow the quantification of clonal plasma cells in the bone marrow; no less aggressive test is yet available in clinical practice to replace them. Currently, the Revised International Staging System (R-ISS) (Palumbo et al., 2015) is the model used to determine the prognosis of newly diagnosed MM patients, corresponding to a disease-aggressiveness score. Through a three-level scale, this risk-stratification model identifies which patients have a better, intermediate or worse prognosis. Despite the constant advances in the MM field, patients with apparently similar prognoses often show quite heterogeneous outcomes, indicating that the current mechanisms, although useful, fail to determine a precise prognosis. 
In order to find new serum biomarkers of this disease that allow constant, less invasive monitoring of the patient's prognosis, new methods using liquid biopsies have been studied (Ferreira et al., 2020). This type of biopsy provides information about the disease from blood samples using a minimally invasive technique. It is in this context that the study of extracellular vesicles, in particular exosomes, arises. Exosomes are particles 30 to 100 nanometres in diameter, produced by several cell types and secreted by them into the extracellular microenvironment. They are known to have several functions and to be involved in various cellular processes, some of them pathological, such as the transformation of healthy cells into malignant cells and the induction of the pre-metastatic niche (Kawano et al., 2015). Several studies have already shown that exosomes isolated from the peripheral blood of MM patients can serve as markers of diagnosis and of disease progression, since exosomes reflect most of the properties of the cells that produced them (Moloudizargari et al., 2019); in this case, the tumour cells. In this study, with a median observation time of about two years, samples were prospectively collected from 102 patients diagnosed and/or treated at the Champalimaud Foundation clinical centre (Lisbon, Portugal), who at study entry had been diagnosed with MM or another monoclonal gammopathy (MGUS or SMM). The main objective of this study was to validate extracellular vesicles not only as a prognostic biomarker but also as a mechanism for regular monitoring and follow-up of patients in clinical practice. The assessment of these extracellular vesicles was carried out through the variable EVs Cargo (Extracellular Vesicles Cargo). 
This variable corresponds to the ratio between protein concentration and particle concentration (μg/10^8 particles) and had never before been studied in this setting, since it usually serves only as a purity measure demonstrating the efficiency of isolation protocols. The variables of interest included demographic variables such as age, gender, diagnosis at study entry, the disease-aggressiveness scores (for MGUS, SMM and MM), the chronological sequence in which the samples were collected, and the number of treatment lines previously received at each sample collection, as well as laboratory variables such as IgG, IgA and IgM levels, kappa and lambda sFLC levels, the sFLC ratio, and levels of LDH, haemoglobin, neutrophils, platelets, beta-2-microglobulin, creatinine, albumin and C-reactive protein. These variables were selected for their clinical impact and relevance, according to the existing literature. Overall survival was defined as the time, in months, from study entry to death from any cause; patients who had not manifested the outcome of interest at the date of last known follow-up were censored at that date. In an initial phase, Kaplan-Meier estimates of the overall survival functions showed that, at study entry, the cohort studied had the same characteristics already demonstrated and verified in the literature: MGUS patients had a higher survival probability than SMM and MM patients, and SMM patients a higher survival probability than MM patients. However, from approximately the median study time onwards, SMM patients showed worse survival than MM patients.
Por outro lado, focando-nos nos doentes MM, verificou-se que, não só existem diferenças relativamente à sobrevivência global dos doentes consoante o nível R-ISS, mas também que o prognóstico piora progressivamente ao longo da escala R-ISS (Palumbo et al., 2015). Finalmente, foi também observado que os doentes que, à entrada no estudo, tivessem já realizado alguma linha de tratamento, apresentam pior prognóstico, sendo que este piora com o aumento do número de linhas de tratamento. Esta observação é concordante com o publicado por Kumar et al. (2004). Na primeira fase deste projeto foi testada a associação da variável de interesse (EVs Cargo) com a sobrevivência global dos doentes, à altura da inclusão no estudo. Contudo, face à não validação dos pressupostos necessários à aplicação de modelos de regressão de Cox, esta variável foi categorizada tendo como base uma metodologia direcionada especificamente para o outcome pretendido: neste caso, a sobrevivência global. Através desta metodologia, foi possível obter um ponto de corte ótimo, que nos permitiu categorizar a variável EVs Cargo em dois grupos: EVs Cargo Low (≤ 0.6 μg/108 partículas) e EVs Cargo High (> 0.6 μg/108 partículas), com uma diferença significativa entre estes dois grupos relativamente à sobrevivência global. De forma interessante, através das estimativas de Kaplan-Meier das funções de sobrevivência global foi também possível observar que, apesar de doentes MM (independentemente do nível EVs Cargo) apresentarem sempre pior prognóstico do que doentes SMM, com o passar do tempo, doentes SMM com EVs Cargo High passam a apresentar pior prognóstico do que qualquer uma das categorias de MM. Este resultado pode ser indicador da necessidade de tratamento de doentes SMM de alto risco: devido ao facto destes doentes não serem tratados, doentes SMM de risco elevado acabam por apresentar, ao longo do tempo, um pior prognóstico do que doentes MM, para qualquer um dos níveis de EVs Cargo. 
Em seguida, tendo em conta a estratificação da variável EVs Cargo, foi desenvolvido um modelo longitudinal que nos permitiu avaliar a significância de alguns fatores na explicação de doentes que, ao longo do tempo, manifestam EVs Cargo High. O que se observou foi que os doentes que ao longo do tempo manifestam níveis reduzidos de IgA bem como níveis elevados de sFLC lambda e um reduzido tempo em resposta (caracterizado pelo tempo entre linhas de tratamento) são doentes que têm maiores chances de apresentar EVs Cargo High e, em consequência, um pior prognóstico. A qualidade do ajustamento do modelo proposto foi verificada pela curva ROC e pela respetiva AUC. Por último, e através da análise diferencial da expressão proteica entre os diversos níveis das variáveis selecionadas como significativas no modelo proposto, foi possível validar os resultados e conclusões obtidos pelo modelo. Para a validação final enquanto biomarcador de prognóstico, foi feita também a comparação entre doentes e dadores saudáveis ao nível da expressão proteica. Esta foi a primeira vez que vesículas extracelulares provenientes de doentes com gamapatias monoclonais foram analisadas usando um estudo clínico observacional, no qual a evolução natural da doença foi sendo seguida ao longo do tempo. Através deste estudo foi possível verificar que, não só as vesículas extracelulares têm potencial para serem usadas como biomarcadores no contexto do MM, como também que a variável EVs Cargo em particular reúne todas as condições para que possa ser usada enquanto variável de prognóstico e como mecanismo de avaliação periódica dos doentes. A utilização desta variável enquanto biomarcador nunca antes tinha sido testada. 
Embora estes resultados careçam de validação numa coorte maior, seguida durante mais tempo, creio que as conclusões obtidas com este estudo têm uma enorme relevância translacional e que podem, futuramente, constituir uma forma de prognóstico e acompanhamento constante das alterações no estado clínico dos doentes.
Over the years there have been significant improvements in the oncological field. We are currently moving towards a more patient-oriented medicine, aiming at constant monitoring and follow-up of the evolution of each patient's condition. In the particular case of MM, we know that patients with similar prognoses may present heterogeneous outcomes, indicating that the current prognostic mechanisms, although useful, are not sufficiently precise. In this sense, the study of extracellular vesicles emerges as a prognostic and follow-up mechanism for patients. In this project, based on a total of 102 patients monitored over a median period of approximately 2 years and using the variable EVs Cargo (protein:particle ratio) as representative of extracellular vesicles, we could show that, at study entry, patients presenting EVs Cargo > 0.6 μg/10^8 particles (high risk) have a significantly worse prognosis than patients with EVs Cargo ≤ 0.6 μg/10^8 particles (low risk). Furthermore, it was possible to observe that high-risk SMM patients have, over time, a worse prognosis than low-risk MM patients and, to some extent, even worse than high-risk MM patients. On the other hand, taking into account the established cut-off, patients who, over time, manifest a depletion of IgA levels, high levels of sFLC lambda and a short time in response have increased odds of presenting high levels of EVs Cargo and, consequently, a worse prognosis. The goodness of fit of the model was verified by the ROC curve and the corresponding AUC. Furthermore, the validity of the results was verified through differential protein-expression analysis between the levels of the variables included in the model, and through protein comparison between healthy donors and patients.
Through this complete approach, it was possible not only to validate the EVs Cargo as a prognostic biomarker but also as a follow-up mechanism of patients throughout the natural course of their disease. Further studies with a longer follow-up period and a different cohort of patients are needed to confirm our results. If these results are validated, this could be a new tool to be used as the first alarm signal of the deterioration of the patient’s health condition, allowing more thorough examinations in such cases.
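The survival comparisons in this abstract rest on Kaplan-Meier estimation with right-censoring. As an illustration only (a minimal sketch with toy numbers, not the thesis's code or data), the product-limit estimator can be written in a few lines of Python:

```python
def kaplan_meier(durations, events):
    """Product-limit estimate of S(t) under right-censoring.

    durations: follow-up time for each subject
    events: 1 if death was observed at that time, 0 if censored
    Returns (time, survival probability) pairs at each event time.
    """
    n_at_risk = len(durations)
    surv = 1.0
    curve = []
    # Walk through subjects in order of follow-up time.
    for t, d in sorted(zip(durations, events)):
        if d == 1:  # an observed death shrinks the survival estimate
            surv *= (n_at_risk - 1) / n_at_risk
            curve.append((t, surv))
        n_at_risk -= 1  # censored subjects simply leave the risk set
    return curve

# Toy cohort: deaths at t=1, 2, 4; censoring at t=3, 5.
print(kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0]))
# ≈ [(1, 0.8), (2, 0.6), (4, 0.3)]
```

Stratifying such curves by a dichotomized marker (here, EVs Cargo Low vs High at the 0.6 μg/10^8 cut-off) and comparing them is the usual first step before any regression modelling.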
APA, Harvard, Vancouver, ISO, and other styles
40

Kundu, Madan Gopal. "Advanced Modeling of Longitudinal Spectroscopy Data." Thesis, 2014. http://hdl.handle.net/1805/5454.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Magnetic resonance (MR) spectroscopy is a neuroimaging technique widely used to quantify the concentration of important metabolites in brain tissue. An imbalance in the concentration of brain metabolites has been found to be associated with the development of neurological impairment, and there has been an increasing trend toward using MR spectroscopy as a diagnostic tool for neurological disorders. We established statistical methodology to analyze data obtained from MR spectroscopy in the context of HIV-associated neurological disorder. First, we developed novel methodology to study the association between a marker of neurological disorder and the MR spectrum from the brain, and how this association evolves over time. The problem fits into the framework of a scalar-on-function regression model with the individual spectrum as the functional predictor. We extended one of the existing cross-sectional scalar-on-function regression techniques to the longitudinal set-up. Advantages of the proposed method include: (1) the ability to model a flexible time-varying association between the response and the functional predictor, and (2) the ability to incorporate prior information. The second part of the research studies the influence of clinical and demographic factors on the progression of brain metabolites over time. In order to understand the influence of these factors in a fully non-parametric way, we proposed the LongCART algorithm to construct regression trees with longitudinal data. Such a regression tree helps to identify smaller subpopulations (characterized by baseline factors) with differential longitudinal profiles, and hence to identify the influence of baseline factors. Advantages of the LongCART algorithm include: (1) it maintains the type-I error rate in determining the best split, (2) it substantially reduces computation time, and (3) it is applicable even when observations are taken at subject-specific time points.
Finally, we carried out an in-depth analysis of longitudinal changes in brain metabolite concentrations in three brain regions, namely white matter, gray matter and basal ganglia, in chronically infected HIV patients enrolled in the HIV Neuroimaging Consortium study. We studied the influence of important baseline factors (clinical and demographic) on these longitudinal profiles of brain metabolites using the LongCART algorithm, in order to identify subgroups of patients at higher risk of neurological impairment.
Partial research support was provided by the National Institutes of Health grants U01-MH083545, R01-CA126205 and U01-CA086368
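The scalar-on-function idea above can be illustrated with a deliberately simplified sketch (simulated data and a plain ridge penalty, not the thesis's actual estimator): the functional predictor is observed on a grid, the scalar response is modelled as y_i = ∫ X_i(t) β(t) dt + ε_i, and β is estimated by regularized least squares on the grid:

```python
import numpy as np

rng = np.random.default_rng(0)

# Functional predictors X_i(t) observed on a common grid of 50 points.
t = np.linspace(0, 1, 50)
dt = t[1] - t[0]
n = 200
X = np.array([rng.normal() * np.sin(np.pi * t) + rng.normal() * np.cos(np.pi * t)
              for _ in range(n)])

# True coefficient function; responses y_i = ∫ X_i(t) β(t) dt + small noise,
# with the integral approximated by a Riemann sum on the grid.
beta_true = np.sin(2 * np.pi * t)
y = X @ beta_true * dt + rng.normal(scale=0.01, size=n)

# Ridge-penalized estimate of β on the grid: minimize ||y - A b||^2 + λ||b||^2.
A = X * dt
lam = 1e-6
beta_hat = np.linalg.solve(A.T @ A + lam * np.eye(len(t)), A.T @ y)

y_hat = A @ beta_hat
print(np.corrcoef(y, y_hat)[0, 1])  # close to 1 when the fit recovers the signal
```

The longitudinal extension in the thesis goes well beyond this: the association is allowed to vary with visit time and prior information can be built into the penalty, neither of which this cross-sectional sketch attempts.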
APA, Harvard, Vancouver, ISO, and other styles
41

Nelson, Dean E. "A simulation study of the hierarchical linear model with serially correlated longitudinal data and missing values /." Diss., 1999. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:9935173.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Almeida, Ana Isabel Silva. "On the work values of entrepreneurs and non-entrepreneurs: A european longitudinal study." Master's thesis, 2014. https://repositorio-aberto.up.pt/handle/10216/75941.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Porfeli, Erik J. "A longitudinal study of a developmental-centextual model of work values during adolescence." 2004. http://www.etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-578/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Almeida, Ana Isabel Silva. "On the work values of entrepreneurs and non-entrepreneurs: A european longitudinal study." Dissertation, 2014. https://repositorio-aberto.up.pt/handle/10216/75941.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

"Changing consumption values in urban China: a longitudinal study of newspaper advertising, 1981-2003." 2004. http://library.cuhk.edu.hk/record=b5891941.

Full text
Abstract:
Tan Jieyu.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2004.
Includes bibliographical references (leaves 106-113).
Abstracts in English and Chinese.
Chapter 1: Introduction, p. 1
Chapter 2: Literature Review, p. 4
Consumption in a modern world, p. 4
Modernization and consumption values, p. 10
Chapter 3: Modernization in Urban China since 1979 and Research Hypotheses and Question, p. 20
Modernization in urban China, p. 20
Previous findings on consumption values in China, p. 28
Research hypotheses and question, p. 33
Chapter 4: Research Methods, p. 35
Content analysis of newspaper ads, p. 35
Interviews with experienced advertising professionals, p. 51
Chapter 5: Research Findings, p. 54
Consumption values manifested in newspaper advertising, p. 54
Findings from interviews: changing consumption values in urban China correspond to modernization, p. 77
Chapter 6: Conclusion and Discussion, p. 87
Concluding remarks, p. 87
Limitations and suggestions for future research, p. 91
Appendix, p. 94
Basic questions for in-depth interviews with advertising professionals, p. 94
Salience trends of consumption values found in newspaper ads, p. 95
References, p. 106
APA, Harvard, Vancouver, ISO, and other styles
46

HUANG, YAN-LING, and 黃彥菱. "Analysis of Longitudinal Data with Censored and Missing Values via Nonlinear Mixed-effects Models." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/23267683758841290881.

Full text
Abstract:
Master's thesis
Feng Chia University
Department of Statistics
Academic year 105 (2016/17)
Repeated measures from clinical trials or biomedical research often exhibit nonlinear profiles, and the nonlinear mixed-effects model (NLMM) has become a popular modelling tool for analyzing such data. However, censored and missing data frequently occur in longitudinal studies due to limitations of the measuring technology, missed visits, loss to follow-up, and so on. This thesis formulates the nonlinear mixed-effects model with censored and missing responses (NLMM-CM), which allows analysts to model longitudinal data in the presence of censored and missing values simultaneously. The nonlinear mixed-effects model with censored values (NLMM-C), the nonlinear mixed-effects model with missing values (NLMM-M) and the standard NLMM, all treated as special cases of the NLMM-CM, are also presented and compared with the proposed NLMM-CM in simulation studies. To carry out maximum likelihood estimation of the model parameters, we provide an efficient expectation conditional maximization (ECM) algorithm. The method is developed under the complete pseudo-data likelihood function, derived using a first-order Taylor expansion around the individual-specific parameters. Real-data examples and simulation studies demonstrate the performance of the proposed methods.
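The E-step idea behind such ECM algorithms, replacing a censored response by its conditional expectation under the current parameters, can be shown on the simplest possible case, far removed from the full NLMM-CM: a normal sample left-censored at a detection limit, with the standard deviation assumed known. All names and numbers below are illustrative, not from the thesis:

```python
import math, random

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def em_censored_mean(obs, cens_limit, n_cens, sigma, iters=200):
    """EM estimate of the mean of N(mu, sigma^2) when n_cens draws are
    only known to lie below cens_limit (left-censoring), sigma known."""
    mu = sum(obs) / len(obs)  # start from the naive observed-only mean
    n = len(obs) + n_cens
    for _ in range(iters):
        a = (cens_limit - mu) / sigma
        # E-step: conditional mean of a censored draw, E[X | X < c],
        # via the inverse Mills ratio for the lower tail.
        e_cens = mu - sigma * norm_pdf(a) / norm_cdf(a)
        # M-step (for mu only): average of observed and imputed values.
        mu = (sum(obs) + n_cens * e_cens) / n
    return mu

random.seed(1)
true_mu, sigma, limit = 10.0, 2.0, 9.0
draws = [random.gauss(true_mu, sigma) for _ in range(5000)]
obs = [x for x in draws if x >= limit]
n_cens = len(draws) - len(obs)
print(em_censored_mean(obs, limit, n_cens, sigma))  # close to 10
```

Note how the naive mean of the observed values alone overestimates the true mean, while the EM iterate corrects for the values hidden below the limit; the NLMM-CM applies the same principle within a far richer nonlinear random-effects likelihood.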
APA, Harvard, Vancouver, ISO, and other styles
47

Carrillo, Garcia Ivan Adolfo. "Analysis of Longitudinal Surveys with Missing Responses." Thesis, 2008. http://hdl.handle.net/10012/3971.

Full text
Abstract:
Longitudinal surveys have emerged in recent years as an important data collection tool for population studies where the primary interest is to examine population changes over time at the individual level. The National Longitudinal Survey of Children and Youth (NLSCY), a large-scale survey with a complex sampling design conducted by Statistics Canada, follows a large group of children and youth over time and collects measurements on various indicators related to their educational, behavioral and psychological development. One of the major objectives of the study is to explore how such development is related to, or affected by, familial, environmental and economic factors. The generalized estimating equation approach, better known as the GEE method, is the most popular statistical inference tool for longitudinal studies. The vast majority of the existing literature on the GEE method, however, uses it in non-survey settings, and issues related to complex sampling designs are ignored. This thesis develops methods for the analysis of longitudinal surveys when the response variable contains missing values. Our methods are built within the GEE framework, with a major focus on using the GEE method when missing responses are handled through hot-deck imputation. We first argue why, and further show how, the survey weights can be incorporated into the so-called pseudo-GEE method under a joint randomization framework. The consistency of the resulting pseudo-GEE estimators with complete responses is established under the proposed framework. The main focus of this research is to extend the proposed pseudo-GEE method to cover cases where the missing responses are imputed through the hot-deck method. Both weighted and unweighted hot-deck imputation procedures are considered, and the consistency of the pseudo-GEE estimators under imputation for missing responses is established for both procedures.
Linearization variance estimators are developed for the pseudo-GEE estimators under the assumption that the finite population sampling fraction is small or negligible, a scenario that often holds for large-scale population surveys. Finite-sample performances of the proposed estimators are investigated through an extensive simulation study. The results show that the pseudo-GEE estimators and the linearization variance estimators perform well under several sampling designs, for both continuous and binary responses.
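Hot-deck imputation, the missing-data device this thesis builds into the pseudo-GEE estimator, fills each missing response with a value drawn from observed "donors" in the same imputation class. A bare-bones unweighted version (hypothetical class labels and values, not the NLSCY variables) might look like:

```python
import random

def hot_deck_impute(values, classes, rng):
    """Replace each None in `values` with a randomly chosen observed
    donor from the same imputation class (unweighted hot deck)."""
    donors = {}
    for v, c in zip(values, classes):
        if v is not None:
            donors.setdefault(c, []).append(v)
    out = []
    for v, c in zip(values, classes):
        if v is None:
            v = rng.choice(donors[c])  # donor drawn within the class
        out.append(v)
    return out

rng = random.Random(42)
values  = [3.1, None, 2.8, None, 5.0, 4.7]
classes = ["A",  "A",  "A",  "B",  "B", "B"]
print(hot_deck_impute(values, classes, rng))
```

The weighted variant studied in the thesis would draw donors with probability proportional to their survey weights rather than uniformly; either way, the imputed file is then fed to the estimating equations, which is precisely why imputation-aware variance estimators are needed.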
APA, Harvard, Vancouver, ISO, and other styles
48

LUO, CHUN-CHIUNG, and 羅鈞瓊. "The longitudinal research in the effect of working values on salary performance of Taiwan youths." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/k48mw3.

Full text
Abstract:
Master's thesis
Fu Jen Catholic University
Department of Business Administration, Executive Master's Program in Management
Academic year 105 (2016/17)
In recent years, Taiwan has faced changes in its industrial environment and the problem of a low birth rate. In terms of human resource management, enterprises therefore need to consider how to manage new-generation young personnel and retain outstanding talent. This research discusses the effect of youths' (aged 16 to 24) working values on their salary level at different time points after they reach 25 years of age. It uses the longitudinal tracking data of the "Panel Study of Family Dynamics" of the Humanities and Social Sciences Research Center, Academia Sinica, collected in Taiwan from 1999 to 2014. In this research, 9,291 samples were used; 643 samples were investigated across 9 time intervals of all databases, but only 92 samples formed the common intersection of the two types of data. The results show that both environmental and self working values positively predict salary level. In group comparison, the group with higher working values maintains a stable salary level even in a poor external environment, and with the recovery in business activity the group with higher self values shows a stronger rebound. Regarding methods of employee incentives, the long-term salary levels suggest that motivating employees through intrinsic motivation is more effective, which can serve as a reference for enterprises' human resource managers in designing incentive strategies.
APA, Harvard, Vancouver, ISO, and other styles
49

Yeh, Ya-Lan, and 葉雅嵐. "Left Atrial Longitudinal Deformation Analysis by Two-Dimensional Speckle Tracking Echocardiography in Dogs with Mitral Valve Disease." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/20959914474773011631.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department and Graduate Institute of Veterinary Medicine
Academic year 100 (2011/12)
Left atrial (LA) remodeling, one of the major pathophysiological results of chronic myxomatous mitral valve disease (MMVD), not only serves as a compensatory mechanism for the prevention of overt pulmonary edema, but also gauges disease severity and the risk of developing left-sided congestive heart failure. Many methods for quantifying LA size and function have been presented, but effective assessment of LA remodeling remains a challenging issue. Recently, the feasibility of quantifying LA longitudinal myocardial deformation dynamics by two-dimensional speckle tracking echocardiography (2D-STE) has been demonstrated in humans; however, there is a paucity of related literature in veterinary medicine. To explore the application of this technique in small animals, we analyzed and compared the parameters of LA longitudinal deformation in dogs with and without myxomatous mitral valve disease. Twenty-one dogs without any cardiac valve anomaly served as controls. Dogs with MMVD were further divided into 3 groups: 13 dogs without obvious related cardiac remodeling (MMVD-1), 9 with obvious cardiac remodeling (MMVD-2), and 7 with clinical signs of heart failure (MMVD-3). The results revealed that LA longitudinal strain was higher in MMVD-1 than in controls, highest in MMVD-2, but declined in MMVD-3. Similar results were found for LA longitudinal strain rate. Significant correlations between these parameters and the corresponding LA emptying fractions suggest that these LA longitudinal deformation parameters may reflect the LA's function in modulating ventricular filling. However, the differences in the parameters among the groups did not reach statistical significance, so assessment of LA remodeling via STE-based longitudinal deformation analysis in small animals may need further refinement before clinical application.
APA, Harvard, Vancouver, ISO, and other styles
50

Hsueh, Hui-Chuan, and 薛惠娟. "Power comparison about repeated measures, hierarchical linear model and regression model in analyzing longitudinal data with missing values." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/12944805989680506107.

Full text
Abstract:
Master's thesis
National Taipei University
Department of Statistics
Academic year 98 (2009/10)
Hierarchical linear models (HLM) and repeated measures (RM) analyses can both be used to analyze longitudinal data, but when there are missing values, fewer data can be used in RM than in HLM. Hua Fang (2006) compared HLM with RM for complete data; in this paper, we compare the power of HLM and RM in the presence of missing values, and also include the traditional regression model in the comparison. This research simulates data under different missingness types, MCAR and MCAR_IT, and analyzes them with HLM, RM with deletion of missing values, RM with imputation of missing values, and the regression model. Type I error probabilities and powers are discussed, and suggestions for choosing among the methods are proposed. The research concludes that imputation is not helpful for RM and that HLM is the better method for analyzing longitudinal data with missing values.
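The data loss that penalizes RM relative to HLM is easy to quantify: under MCAR with per-observation missingness probability p, a subject with T scheduled waves is completely observed, and hence usable by a complete-case RM analysis, with probability (1-p)^T. A small simulation (illustrative parameters only, not the thesis's design) shows the effect:

```python
import random

def simulate_complete_fraction(n_subjects, n_waves, p_missing, rng):
    """Fraction of subjects with no missing wave under MCAR missingness."""
    complete = 0
    for _ in range(n_subjects):
        # Each wave is observed independently with probability 1 - p.
        if all(rng.random() > p_missing for _ in range(n_waves)):
            complete += 1
    return complete / n_subjects

rng = random.Random(0)
frac = simulate_complete_fraction(2000, 5, 0.2, rng)
print(frac)  # near 0.8**5 ≈ 0.33: RM keeps only about a third of subjects
```

HLM, by contrast, uses every observed wave from every subject, which is the intuition behind the paper's conclusion that HLM retains more power than RM when values are missing.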
APA, Harvard, Vancouver, ISO, and other styles