
Dissertations / Theses on the topic 'Continuous and deep sedation'

Consult the top 50 dissertations / theses for your research on the topic 'Continuous and deep sedation.'

1

Blondet, Vanessa. "Les pratiques sédatives en unités de soins palliatifs, entre travail du care et négociation." Thesis, Strasbourg, 2019. https://publication-theses.unistra.fr/restreint/theses_doctorat/2019/Blondet_Vanessa_2019_ED519.pdf.

Full text
Abstract:
What are the different forms of sedation in palliative care units? How is sedation negotiated between caregivers, the patient and the patient's relatives? What do the uses of midazolam, and the negotiation around it, reveal about work in palliative care units in France? This thesis is based on a qualitative study conducted in four different palliative care structures. The work draws on direct and indirect observation within the structures, the tracking of midazolam dose trajectories for 42 patients, and sixty semi-structured interviews. Analysis of this material reveals eight uses of midazolam, five of which are sedative practices. The interviews show that palliative care work aims, in particular, at (re)socialising the end of life. This aim conflicts with the implementation of continuous deep sedation until death, which sometimes leads teams to prefer a more gradual form of sedation.
APA, Harvard, Vancouver, ISO, and other styles
2

Bando, Catherine. "Assisted Death: Historical, Moral and Theological Perspectives of End of Life Options." Digital Commons at Loyola Marymount University and Loyola Law School, 2018. https://digitalcommons.lmu.edu/etd/513.

Full text
Abstract:
The paper explores historical positions on suicide and philosophical, theological, and moral positions on physician-assisted suicide and euthanasia. In 1900, most people died from infectious diseases, which have relatively short periods of morbid decline. With advances in the biomedical sciences, people are living longer, and most people die from chronic diseases, which are usually accompanied by prolonged periods of morbid decline. In addition to living longer, people today are generally more individualist and seek methods to control many aspects of life. While assisted death is rarely used, it represents a means to control end-of-life suffering. The paper demonstrates that there is substantial opposition to assisted death among philosophers, theologians and bioethicists. The paper also argues that improved education about end-of-life palliative alternatives would alleviate fears about end-of-life suffering. The thesis is that the use of palliative alternatives is morally and ethically superior to physician-assisted suicide or euthanasia.
APA, Harvard, Vancouver, ISO, and other styles
3

Treggiari, Miriam Monica. "Randomized trial of light versus deep sedation on mental health after critical illness /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/10928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Le, Dorze Matthieu. "Les facultés éthiques des réanimateurs, l'ajustement et l'alignement." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASR033.

Full text
Abstract:
Intensive care physicians are routinely involved in end-of-life care. Their end-of-life decisions and practices are highly complex, involving many people (the patient, relatives and caregivers), a variety of medical and non-medical factors, and often a number of unknowns and uncertainties. The aim of this doctoral work is to describe and analyse this complexity in order to bring to light, through a normative approach, the ethical faculties that intensive care physicians could develop to act well, or at least as well as possible. The work rests on three methodological pillars: individual experiential knowledge, collective reflection, and a multidisciplinary scientific approach combining surveys with quantitative and qualitative research. It unfolds along two thematic lines, exploring the qualification of "unreasonable obstinacy", continuous deep sedation and the declaration of death in the ordinary context of end of life in intensive care, and how these are reconfigured in the more specific context of controlled donation after circulatory death (Maastricht III). This ethics-in-the-making, structured by the concrete reality of clinical situations, brings out two faculties, adjustment and alignment, constructed and progressively enriched over the course of the reflection. Only institutions attentive to fostering a calm ethical climate can enable intensive care physicians to mobilise these faculties and engage positively with the tensions of end-of-life care and organ donation as objects of constantly renewed ethical inventiveness.
APA, Harvard, Vancouver, ISO, and other styles
5

Conway, Aaron. "Nurse-administered procedural sedation and analgesia in the cardiac catheterisation laboratory: A mixed methods study." Thesis, Australian Catholic University, 2013. https://acuresearchbank.acu.edu.au/download/a71c1257b013741928b98e8cb6c5843c8123a54f7d3ece774609bf0bf0d6c2c2/11420741/64829_downloaded_stream_54.pdf.

Full text
Abstract:
The cardiac catheterisation laboratory (CCL) is a specialised medical radiology facility where both chronic-stable and life-threatening cardiovascular illness is evaluated and treated. Although there are many potential sources of discomfort and distress associated with procedures performed in the CCL, a general anaesthetic is not usually required. For this reason, an anaesthetist is not routinely assigned to the CCL. Instead, to manage pain, discomfort and anxiety during the procedure, nurses administer a combination of sedative and analgesic medications according to direction from the cardiologist performing the procedure. This practice is referred to as nurse-administered procedural sedation and analgesia (PSA). While anecdotal evidence suggested that nurse-administered PSA was commonly used in the CCL, it was clear from the limited information available that current nurse-led PSA administration and monitoring practices varied and that there was contention around some aspects of practice including the type of medications that were suitable to be used and the depth of sedation that could be safely induced without an anaesthetist present. The overall aim of the program of research presented in this thesis was to establish an evidence base for nurse-led sedation practices in the CCL context. A sequential mixed methods design was used over three phases. The objective of the first phase was to appraise the existing evidence for nurse-administered PSA in the CCL. Two studies were conducted. The first study was an integrative review of empirical research studies and clinical practice guidelines focused on nurse-administered PSA in the CCL as well as in other similar procedural settings. This was the first review to systematically appraise the available evidence supporting the use of nurse-administered PSA in the CCL. 
A major finding was that, overall, nurse-administered PSA in the CCL was generally deemed to be safe. However, it was concluded from the analysis of the studies and the guidelines that were included in the review, that the management of sedation in the CCL was impacted by a variety of contextual factors including local hospital policy, workforce constraints and cardiologists' preferences for the type of sedation used. The second study in the first phase was conducted to identify a sedation scale that could be used to monitor level of sedation during nurse-administered PSA in the CCL. It involved a structured literature review and psychometric analysis of scale properties. However, only one scale was found that was developed specifically for the CCL, which had not undergone psychometric testing. Several weaknesses were identified in its item structure. Other sedation scales that were identified were developed for the ICU. Although these scales have demonstrated validity and reliability in the ICU, weaknesses in their item structure precluded their use in the CCL. As findings indicated that no existing sedation scale should be applied to practice in the CCL, recommendations for the development and psychometric testing of a new sedation scale were developed. The objective of the second phase of the program of research was to explore current practice. Three studies were conducted in this phase using both quantitative and qualitative research methods. The first was a qualitative explorative study of nurses' perceptions of the issues and challenges associated with nurse-administered PSA in the CCL. Major themes emerged from analysis of the qualitative data regarding the lack of access to anaesthetists, the limitations of sedative medications, the barriers to effective patient monitoring and the impact that the increasing complexity of procedures has on patients' sedation requirements. 
The second study in Phase Two was a cross-sectional survey of nurse-administered PSA practice in Australian and New Zealand CCLs. This was the first study to quantify the frequency that nurse-administered PSA was used in the CCL setting and to characterise associated nursing practices. It was found that nearly all CCLs utilise nurse-administered PSA (94%). Of note, by characterising nurse-administered PSA in Australian and New Zealand CCLs, several strategies to improve practice, such as setting up protocols for patient monitoring and establishing comprehensive PSA education for CCL nurses, were identified. The third study in Phase Two was a matched case-control study of risk factors for impaired respiratory function during nurse-administered PSA in the CCL setting. Patients with acute illness were found to be nearly twice as likely to experience impaired respiratory function during nurse-administered PSA (OR=1.78; 95%CI=1.19-2.67; p=0.005). These significant findings can now be used to inform prospective studies investigating the effectiveness of interventions for impaired respiratory function during nurse-administered PSA in the CCL. The objective of the third and final phase of the program of research was to develop recommendations for practice. To achieve this objective, a synthesis of findings from the previous phases of the program of research informed a modified Delphi study, which was conducted to develop a set of clinical practice guidelines for nurse-administered PSA in the CCL. 
The clinical practice guidelines that were developed set current best practice standards for pre-procedural patient assessment and risk screening practices as well as the intra and post-procedural patient monitoring practices that nurses who administer PSA in the CCL should undertake in order to deliver safe, evidence-based and consistent care to the many patients who undergo procedures in this setting. In summary, the mixed methods approach that was used clearly enabled the research objectives to be comprehensively addressed in an informed sequential manner, and, as a consequence, this thesis has generated a substantial amount of new knowledge to inform and support nurse-led sedation practice in the CCL context. However, a limitation of the research to note is that the comprehensive appraisal of the evidence conducted, combined with the guideline development process, highlighted that there were numerous deficiencies in the evidence base. As such, rather than being based on high-level evidence, many of the recommendations for practice were produced by consensus. For this reason, further research is required in order to ascertain which specific practices result in the most optimal patient and health service outcomes. Therefore, along with necessary guideline implementation and evaluation projects, post-doctoral research is planned to follow up on the research gaps identified, which are planned to form part of a continuing program of research in this field.
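The matched case-control result above is reported as an odds ratio with a 95% confidence interval (OR=1.78; 95% CI=1.19-2.67). As a side illustration of how such figures are derived, the sketch below computes an odds ratio and its standard Wald confidence interval from a 2×2 table. The counts used are hypothetical and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical illustration counts (not the thesis's data):
or_, lo, hi = odds_ratio_ci(40, 50, 25, 55)
print(f"OR={or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

A confidence interval whose lower bound exceeds 1 (as in the study's 1.19-2.67) indicates a statistically significant association at the 5% level.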
APA, Harvard, Vancouver, ISO, and other styles
6

Conway, Aaron. "Nurse-administered procedural sedation and analgesia in the cardiac catheterisation laboratory : a mixed methods study." Thesis, Australian Catholic University, 2013. https://eprints.qut.edu.au/61474/1/Final_version_thesis_AC_all_pages_24_6_13.pdf.

Full text
Abstract:
The cardiac catheterisation laboratory (CCL) is a specialised medical radiology facility where both chronic-stable and life-threatening cardiovascular illness is evaluated and treated. Although there are many potential sources of discomfort and distress associated with procedures performed in the CCL, a general anaesthetic is not usually required. For this reason, an anaesthetist is not routinely assigned to the CCL. Instead, to manage pain, discomfort and anxiety during the procedure, nurses administer a combination of sedative and analgesic medications according to direction from the cardiologist performing the procedure. This practice is referred to as nurse-administered procedural sedation and analgesia (PSA). While anecdotal evidence suggested that nurse-administered PSA was commonly used in the CCL, it was clear from the limited information available that current nurse-led PSA administration and monitoring practices varied and that there was contention around some aspects of practice including the type of medications that were suitable to be used and the depth of sedation that could be safely induced without an anaesthetist present. The overall aim of the program of research presented in this thesis was to establish an evidence base for nurse-led sedation practices in the CCL context. A sequential mixed methods design was used over three phases. The objective of the first phase was to appraise the existing evidence for nurse-administered PSA in the CCL. Two studies were conducted. The first study was an integrative review of empirical research studies and clinical practice guidelines focused on nurse-administered PSA in the CCL as well as in other similar procedural settings. This was the first review to systematically appraise the available evidence supporting the use of nurse-administered PSA in the CCL. A major finding was that, overall, nurse-administered PSA in the CCL was generally deemed to be safe. 
However, it was concluded from the analysis of the studies and the guidelines that were included in the review, that the management of sedation in the CCL was impacted by a variety of contextual factors including local hospital policy, workforce constraints and cardiologists’ preferences for the type of sedation used. The second study in the first phase was conducted to identify a sedation scale that could be used to monitor level of sedation during nurse-administered PSA in the CCL. It involved a structured literature review and psychometric analysis of scale properties. However, only one scale was found that was developed specifically for the CCL, which had not undergone psychometric testing. Several weaknesses were identified in its item structure. Other sedation scales that were identified were developed for the ICU. Although these scales have demonstrated validity and reliability in the ICU, weaknesses in their item structure precluded their use in the CCL. As findings indicated that no existing sedation scale should be applied to practice in the CCL, recommendations for the development and psychometric testing of a new sedation scale were developed. The objective of the second phase of the program of research was to explore current practice. Three studies were conducted in this phase using both quantitative and qualitative research methods. The first was a qualitative explorative study of nurses’ perceptions of the issues and challenges associated with nurse-administered PSA in the CCL. Major themes emerged from analysis of the qualitative data regarding the lack of access to anaesthetists, the limitations of sedative medications, the barriers to effective patient monitoring and the impact that the increasing complexity of procedures has on patients' sedation requirements. The second study in Phase Two was a cross-sectional survey of nurse-administered PSA practice in Australian and New Zealand CCLs. 
This was the first study to quantify the frequency that nurse-administered PSA was used in the CCL setting and to characterise associated nursing practices. It was found that nearly all CCLs utilise nurse-administered PSA (94%). Of note, by characterising nurse-administered PSA in Australian and New Zealand CCLs, several strategies to improve practice, such as setting up protocols for patient monitoring and establishing comprehensive PSA education for CCL nurses, were identified. The third study in Phase Two was a matched case-control study of risk factors for impaired respiratory function during nurse-administered PSA in the CCL setting. Patients with acute illness were found to be nearly twice as likely to experience impaired respiratory function during nurse-administered PSA (OR=1.78; 95%CI=1.19-2.67; p=0.005). These significant findings can now be used to inform prospective studies investigating the effectiveness of interventions for impaired respiratory function during nurse-administered PSA in the CCL. The objective of the third and final phase of the program of research was to develop recommendations for practice. To achieve this objective, a synthesis of findings from the previous phases of the program of research informed a modified Delphi study, which was conducted to develop a set of clinical practice guidelines for nurse-administered PSA in the CCL. The clinical practice guidelines that were developed set current best practice standards for pre-procedural patient assessment and risk screening practices as well as the intra and post-procedural patient monitoring practices that nurses who administer PSA in the CCL should undertake in order to deliver safe, evidence-based and consistent care to the many patients who undergo procedures in this setting. 
In summary, the mixed methods approach that was used clearly enabled the research objectives to be comprehensively addressed in an informed sequential manner, and, as a consequence, this thesis has generated a substantial amount of new knowledge to inform and support nurse-led sedation practice in the CCL context. However, a limitation of the research to note is that the comprehensive appraisal of the evidence conducted, combined with the guideline development process, highlighted that there were numerous deficiencies in the evidence base. As such, rather than being based on high-level evidence, many of the recommendations for practice were produced by consensus. For this reason, further research is required in order to ascertain which specific practices result in the most optimal patient and health service outcomes. Therefore, along with necessary guideline implementation and evaluation projects, post-doctoral research is planned to follow up on the research gaps identified, which are planned to form part of a continuing program of research in this field.
APA, Harvard, Vancouver, ISO, and other styles
7

Santos, Marcos Eduardo Lera dos. "Sedação em endoscopia digestiva alta: estudo comparativo com uso combinado de propofol e fentanil versus midazolam e fentanil." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/5/5154/tde-23022012-120930/.

Full text
Abstract:
Introduction: the use of sedation is almost universal in routine upper gastrointestinal (GI) endoscopy. The use of propofol appears to be associated with a more comfortable examination and with higher physician and patient satisfaction. However, propofol carries a higher risk of deep sedation and its associated complications. Objective: to compare two sedation regimens for diagnostic upper GI endoscopy, a midazolam group (midazolam and fentanyl) and a propofol group (propofol and fentanyl), with respect to the frequency of deep sedation and, as secondary outcomes, satisfaction, recovery time and complication rates. Methods: a prospective, randomised, blinded experimental study of 200 patients, 100 in the midazolam group and 100 in the propofol group. The OAA/S score and bispectral index (BIS) monitoring were used to assess the level of consciousness. Results: deep sedation occurred in 11% (OAA/S) and 7% (BIS) of the midazolam group, and was significantly more frequent in the propofol group (25% by OAA/S and 19% by BIS). There was good agreement between the OAA/S score and the BIS in both groups (k=0.63 for the midazolam group and k=0.71 for the propofol group). Forty-two per cent of propofol-group patients and 26% of midazolam-group patients required supplemental oxygen (p=0.025). Mean recovery time was 44.13 min in the midazolam group and 28.82 min in the propofol group (p<0.001). Patients were equally satisfied with both regimens, while physicians preferred the propofol/fentanyl combination. No severe sedation-related complications were observed in either group. Conclusion: both regimens produce deep sedation, more frequently with propofol/fentanyl. Both regimens are safe. Sedation induction, recovery and discharge times were shorter with the propofol/fentanyl combination.
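The agreement between the OAA/S score and the BIS reported above (k=0.63 and k=0.71) is a Cohen's kappa, the standard chance-corrected agreement statistic for two raters or instruments. Purely as an illustration of the statistic, with toy labels rather than the study's data:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater1)
    po = sum(x == y for x, y in zip(rater1, rater2)) / n  # observed agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    labels = set(c1) | set(c2)
    pe = sum(c1[l] * c2[l] for l in labels) / n**2  # agreement expected by chance
    return (po - pe) / (1 - pe)

# Toy sedation-depth categories (0 = light, 1 = deep) from two instruments:
oaas = [0, 0, 1, 1, 0, 1, 0, 0]
bis  = [0, 0, 1, 0, 0, 1, 0, 1]
print(round(cohens_kappa(oaas, bis), 2))
```

Values in the 0.61-0.80 range, as in the study, are conventionally read as "substantial" agreement.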
APA, Harvard, Vancouver, ISO, and other styles
8

Ashour, Ashraf Fawzy. "Behaviour and strength of reinforced concrete continuous deep beams." Thesis, University of Cambridge, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.319339.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mickos, Johan. "Design of a Network Library for Continuous Deep Analytics." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-232129.

Full text
Abstract:
Data-intensive stream processing applications have escalated in popularity in recent years, producing numerous designs and implementations for handling unbounded streams of high-volume data. The sheer size and dimensionality of these types of data require multiple machines to sustain processing throughput of hundreds of millions of events per second at low latencies. Advances in the fields of distributed deep learning and stream processing have highlighted networking-specific challenges and requirements such as flow control and scalable communication abstractions. Existing stream processing frameworks, however, address only subsets of these requirements. This thesis proposes a design and implementation, in the Rust programming language, of a modular networking library able to address these requirements together. The design entails protocol framing, buffer management, stream multiplexing, flow control, and stream prioritization. The implemented prototype handles multiplexing of logical streams and credit-based flow control through a flexible application programming interface. The prototype is tested for overall throughput and round-trip latency in a distributed environment, displaying promising results in both categories.
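The thesis's prototype is written in Rust; purely to illustrate the general idea of credit-based flow control (not the library's actual API), the mechanism can be sketched in a few lines: the sender may only transmit while it holds credits, and the receiver returns credits as it drains its buffer, producing back-pressure.

```python
from collections import deque

class CreditChannel:
    """Minimal sketch of credit-based flow control. All names here are
    hypothetical illustrations, not the thesis's Rust implementation."""
    def __init__(self, initial_credits):
        self.credits = initial_credits
        self.buffer = deque()

    def try_send(self, msg):
        if self.credits == 0:
            return False            # back-pressure: sender must wait
        self.credits -= 1
        self.buffer.append(msg)
        return True

    def receive(self):
        msg = self.buffer.popleft()
        self.credits += 1           # draining the buffer returns a credit
        return msg

ch = CreditChannel(initial_credits=2)
assert ch.try_send("a") and ch.try_send("b")
assert not ch.try_send("c")         # credits exhausted -> send rejected
ch.receive()                        # frees a slot, granting a credit back
assert ch.try_send("c")
```

The same idea underlies flow control in protocols such as HTTP/2, where window updates play the role of returned credits.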
APA, Harvard, Vancouver, ISO, and other styles
10

Otero, Maria Jose. "Teaching Children How to Stay Still Using Movies to Provide Continuous Feedback." Thesis, University of North Texas, 2019. https://digital.library.unt.edu/ark:/67531/metadc1609110/.

Full text
Abstract:
External beam radiation therapy is often used as a form of treatment for individuals diagnosed with cancer. However, because staying completely still can be difficult for children, daily sedation is often required to achieve the necessary stillness. In this document, we introduce the development, implementation, and testing of a technology designed to teach healthy children to self-monitor and control their movements. This technology monitored a child's body movement and created a continuous feedback loop, playing a preferred movie based on the amount of body movement observed. Study 1 compares the amount of body movement observed when children were instructed to remain still (instructions alone) to access to a movie contingent on maintained low rates of movement (contingent movie). Study 2 compares the amount of body movement observed in the instructions alone condition with two other conditions: non-contingent access to a movie (non-contingent movie) and contingent movie. Study 3 compares the amount of body movement observed in the instructions alone condition to the contingent movie condition over an extended period of time. Lastly, Study 4 compares the amount of body movement observed when children had previously been taught to stay still using the technology described above, across various days and conditions. Generally, we found three things: a) instructions alone were insufficient to produce the level of control over motion required; b) some control over motion was established in the non-contingent movie condition relative to the instructions alone condition; and c) the combination of movies and feedback contingent on movement was necessary to gain the level of control over body motion necessary to adhere to the medical protocol.
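The core contingency described above (the movie plays only while movement stays low) can be sketched as a simple threshold rule. The threshold and motion trace below are hypothetical illustration values, not the study's apparatus or data.

```python
def contingent_playback(motion_trace, threshold):
    """For each motion sample, the movie plays only while movement
    stays below the threshold (the 'contingent movie' condition).
    Threshold and trace are hypothetical illustration values."""
    states = []
    for motion in motion_trace:
        states.append("play" if motion < threshold else "pause")
    return states

trace = [0.1, 0.2, 0.9, 0.8, 0.1, 0.05]   # arbitrary motion magnitudes
print(contingent_playback(trace, threshold=0.5))
```

Because playback resumes the moment movement drops back below the threshold, the child receives immediate, continuous feedback rather than delayed instruction.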
APA, Harvard, Vancouver, ISO, and other styles
11

Shalookh, Othman H. Zinkaah. "Behaviour of continuous concrete deep beams reinforced with GFRP bars." Thesis, University of Bradford, 2019. http://hdl.handle.net/10454/18381.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Mufida, Miratul Khusna. "Deep learning for continuous parking occupancy forecasting in urban environments." Electronic Thesis or Diss., Valenciennes, Université Polytechnique Hauts-de-France, 2023. http://www.theses.fr/2023UPHF0024.

Full text
Abstract:
Deep learning has been widely adopted in various fields for its ability to extract complex features from large amounts of data. In this thesis, we propose a deep learning-based approach for continuous parking occupancy prediction. We collected a large dataset of parking occupancy data (for both off-street and on-street parking) from various cities in two different countries and used it to train deep neural network models. Our experiments show that the proposed approach outperforms classical and machine learning baseline models in terms of forecast accuracy and real-time performance. Furthermore, our approach can be easily integrated into existing smart parking systems to improve their efficiency and convenience. For city-level deployment, we also propose a framework for sharing models among parking lots by analyzing the similarity of their spatial and temporal profiles. By identifying the relevant spatial and temporal characteristics of each parking lot (its parking profile) and grouping lots accordingly, our approach allows the development of accurate occupancy forecasting models for a set of parking lots, thereby reducing computational costs and improving model transferability. Our experiments demonstrate the effectiveness of the proposed strategy in reducing model deployment costs while maintaining good forecast quality. In conclusion, this work demonstrates the effectiveness of deep learning in addressing the problem of continuous parking occupancy forecasting and highlights its potential for future smart parking applications.
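The model-sharing framework groups lots by profile similarity before training one model per group. A toy sketch of such grouping (greedy, Euclidean distance over average daily occupancy profiles; the thesis's actual similarity analysis may differ):

```python
import numpy as np

def group_parking_lots(profiles, max_distance):
    """Greedily group parking lots whose average daily occupancy profiles
    lie within max_distance of a group representative; one forecasting
    model would then be trained per group instead of per lot."""
    groups = []  # each group is a list of lot indices
    for i, profile in enumerate(profiles):
        for group in groups:
            representative = profiles[group[0]]
            if np.linalg.norm(np.asarray(profile) - np.asarray(representative)) <= max_distance:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups
```

One model per returned group, rather than one per lot, is what reduces deployment cost while keeping forecasts specific to each parking profile.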
APA, Harvard, Vancouver, ISO, and other styles
13

Kramer, Kyle J. "Comparison of Mixtures of Propofol-Remifentanil vs. Propofol-Ketamine for Deep Sedation for Third Molar Extraction Surgery (IRB # 2009H0306)." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1291232805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Weir, Mercedes E. "Assessing Baseline and Post-Discharge Risk Factors in Subjects with and without Sleep Apnea Undergoing Endoscopy with Deep Sedation." VCU Scholars Compass, 2018. https://scholarscompass.vcu.edu/etd/5669.

Full text
Abstract:
Background: Outpatient procedures encompass over 60% of all surgeries in the United States, and the prevalence of obstructive sleep apnea (OSA) remains high among adult ambulatory surgical patients. Ambulatory surgery poses problems for patients with OSA because the narcotics and anesthetics used during surgery can compound the negative effects of OSA, leading to cardiac events, brain hypoxia, and even death. This study was designed to evaluate the prevalence of cardiopulmonary risk factors among post-endoscopy patients with diagnosed and undiagnosed sleep apnea. Methods: The study used a prospective, descriptive cross-sectional design with a pretest-posttest data collection approach, using actigraphy, pulse oximetry and 24-hour ECG monitoring via Bluetooth technology to monitor outpatients undergoing endoscopy with deep propofol sedation. Patients were recruited pre-procedure to obtain a resting baseline ECG and were then monitored continuously for 24 hours post-procedure. A p-value of less than 0.05 was considered statistically significant. The target sample comprised 50 adult outpatients from a Florida suburban endoscopy center. Results: Pulse oximetry and Actigraph scores revealed no difference based on OSA status. The ANOVA for oxygen desaturation events and sleep quality indices reflected no differences across groups. Sleep quality had no measurable influence on adverse events and was similar across groups; participants diagnosed with OSA slept longer than those in the untreated or no-OSA group. Regressions for sleep quality indices reflected no differences among groups. Conclusions: There remains a lack of literature on cardiopulmonary and ECG indicators of cardiac risk in patients with OSA in the 24 hours following discharge from ambulatory surgery. 
This dissertation characterized the ECG at baseline and post-discharge among post-endoscopy outpatients with and without OSA. Further research is recommended.
APA, Harvard, Vancouver, ISO, and other styles
15

Lindholm, Maj-Lis. "Utility of bispectral index (BIS) monitoring during general anesthesia." Stockholm, 2009. http://diss.kib.ki.se/2009/978-91-7409-697-2/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Rodriguez-Garcia, Julio M. "Tunable, continuous-wave semiconductor disk lasers with emission in the deep ultraviolet." Thesis, University of Strathclyde, 2017. http://digitool.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=30464.

Full text
Abstract:
The work presented here is focused on the design, characterization and, ultimately, the first demonstration of a deep-ultraviolet, frequency-tripled semiconductor disk laser (SDL). The construction of such a laser is described, together with a review of the relevant theory and an investigation of the underlying processes in order to improve the system. SDLs based on gallium indium phosphide have attracted intense investigation over the last ten years for their fundamental emission at 660 nm-690 nm (visible red emission), a region previously available to SDLs only by means of nonlinear frequency conversion. Such frequency conversion applied to red SDLs provides access to the highly energetic ultraviolet region of the electromagnetic spectrum but, although conversion to the near ultraviolet has been rather successful, further extension into the deep UV presents a new set of challenges that must be addressed in order to achieve efficient laser emission. 78 μW of deep-ultraviolet emission at 225 nm has been achieved in continuous-wave operation. The output wavelength is tunable over 350 cm⁻¹. This is the shortest wavelength emitted from an SDL to date, and the first implementation of intracavity frequency tripling in a visible SDL.
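The wavelength arithmetic behind the result is direct: third-harmonic generation divides the fundamental wavelength by three, so reaching 225 nm requires a fundamental that lies inside the GaInP red emission band quoted above:

```latex
\lambda_{\mathrm{UV}} = \frac{\lambda_{\mathrm{fund}}}{3}
\quad\Longrightarrow\quad
\lambda_{\mathrm{fund}} = 3 \times 225\,\mathrm{nm} = 675\,\mathrm{nm}
\in [660\,\mathrm{nm},\, 690\,\mathrm{nm}].
```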
APA, Harvard, Vancouver, ISO, and other styles
17

Granger, Nicolas. "Deep-learning for high dimensional sequential observations : application to continuous gesture recognition." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLL002/document.

Full text
Abstract:
This thesis aims to improve the intuitiveness of human-computer interfaces. In particular, machines should try to replicate humans' ability to process streams of information continuously. However, the sub-domain of Machine Learning dedicated to recognition on time series remains beset by numerous challenges. Our studies use gesture recognition as an exemplar application; gestures intermix static body poses and movements in a complex manner, using widely different modalities. The first part of our work compares two state-of-the-art temporal models for continuous sequence recognition, namely Hybrid Neural Network-Hidden Markov Models (NN-HMM) and Bidirectional Recurrent Neural Networks (BDRNN) with gated units. To do so, we reimplemented the two within a shared test-bed which is more amenable to fair comparative work. We propose adjustments to the Neural Network training losses and the Hybrid NN-HMM expressions to accommodate highly imbalanced data classes. Although recent publications tend to prefer BDRNNs, we demonstrate that the Hybrid NN-HMM remains competitive. However, the latter relies significantly on its input layers to model short-term patterns. Finally, we show that the input representations learned via both approaches are largely inter-compatible. The second part of our work studies one-shot learning, which has received relatively little attention so far, in particular for sequential inputs such as gestures. We propose a model built around a Bidirectional Recurrent Neural Network. Its effectiveness is demonstrated on the recognition of isolated gestures from a sign language lexicon. We propose several improvements over this baseline by drawing inspiration from related works and evaluate their performances, exhibiting the different advantages and disadvantages of each.
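One standard form of the loss adjustment mentioned for highly imbalanced classes is per-class inverse-frequency weighting of the frame-wise cross-entropy; a sketch under that assumption (the thesis's exact adjustment may differ):

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Frame-wise cross-entropy in which each frame's loss is scaled by a
    per-class weight (e.g. inverse class frequency), so that abundant
    'no-gesture' frames do not swamp the rare gesture classes."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    weights = np.asarray([class_weights[y] for y in labels])
    picked = probs[np.arange(len(labels)), labels]  # P(true class) per frame
    return float(np.mean(-weights * np.log(picked)))
```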
APA, Harvard, Vancouver, ISO, and other styles
18

Granger, Nicolas. "Deep-learning for high dimensional sequential observations : application to continuous gesture recognition." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLL002.

Full text
Abstract:
This thesis aims to improve the intuitiveness of human-computer interfaces. In particular, machines should try to replicate humans' ability to process streams of information continuously. However, the sub-domain of Machine Learning dedicated to recognition on time series remains beset by numerous challenges. Our studies use gesture recognition as an exemplar application; gestures intermix static body poses and movements in a complex manner, using widely different modalities. The first part of our work compares two state-of-the-art temporal models for continuous sequence recognition, namely Hybrid Neural Network-Hidden Markov Models (NN-HMM) and Bidirectional Recurrent Neural Networks (BDRNN) with gated units. To do so, we reimplemented the two within a shared test-bed which is more amenable to fair comparative work. We propose adjustments to the Neural Network training losses and the Hybrid NN-HMM expressions to accommodate highly imbalanced data classes. Although recent publications tend to prefer BDRNNs, we demonstrate that the Hybrid NN-HMM remains competitive. However, the latter relies significantly on its input layers to model short-term patterns. Finally, we show that the input representations learned via both approaches are largely inter-compatible. The second part of our work studies one-shot learning, which has received relatively little attention so far, in particular for sequential inputs such as gestures. We propose a model built around a Bidirectional Recurrent Neural Network. Its effectiveness is demonstrated on the recognition of isolated gestures from a sign language lexicon. We propose several improvements over this baseline by drawing inspiration from related works and evaluate their performances, exhibiting the different advantages and disadvantages of each.
APA, Harvard, Vancouver, ISO, and other styles
19

Yamamoto, Takahiro S. "New method of all-sky searches for continuous gravitational waves." Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/264642.

Full text
Abstract:
Kyoto University
New-system doctorate by coursework
Doctor of Science
Degree No. 23361 (Kō)
Science Doctorate No. 4732
新制||理||1679 (University Library)
Division of Physics and Astronomy, Graduate School of Science, Kyoto University
(Chief examiner) Professor Takahiro Tanaka; Associate Professor Koutarou Kyutoku; Professor Kouichi Hagino
Qualified under Article 4, Paragraph 1 of the Degree Regulations
Doctor of Science
Kyoto University
DFAM
APA, Harvard, Vancouver, ISO, and other styles
20

Bjuhr, Oscar. "Dynamic Configuration of a Relocatable Driver and Code Generator for Continuous Deep Analytics." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-232079.

Full text
Abstract:
Modern stream processing engines usually use the Java Virtual Machine (JVM) as their execution platform. The JVM increases the portability and safety of applications at the cost of not fully utilising the performance of the physical machine. Running on the JVM also restricts the use of hardware accelerators such as GPUs for computationally heavy analysis of data streams. The Continuous Deep Analytics (CDA) project explores the possibility of a stream processor executing native code directly on the underlying hardware using Rust. Rust is a young programming language which can statically guarantee the absence of memory errors and data races in programs without incurring performance penalties at runtime. Rust is built on top of LLVM, which gives Rust the theoretical ability to compile to a large set of target platforms. Each target platform does, however, require a specifically configured runtime environment for Rust's compiler to work properly. The CDA compiler will run in a distributed setting where the compiler has to be able to relocate to different nodes to handle node failures. Setting up a relocatable Rust compiler in such a setting can be error-prone, and Docker is explored as a solution to this problem. A concurrent, thread-based system is implemented in Scala for building Docker images and compiling Rust in containers. Docker shows potential for enabling easy relocation of the driver without manual configuration, and it has no major effect on Rust's compile time. The large Docker images required to compile Rust are a drawback of the solution: they require substantial network traffic to relocate the driver, so reducing the size of the images would make the solution more responsive.
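The container-based workflow the abstract describes boils down to two shell commands issued wherever the driver lands, wrapped here in Python for testability (the image name, mount paths and the injectable runner are illustrative; the thesis's orchestration is written in Scala):

```python
import subprocess

def compile_rust_in_container(image, source_dir, run=subprocess.run):
    """Build a Docker image holding the Rust toolchain, then compile the
    project inside a throwaway container that mounts the source tree.
    Because the toolchain lives in the image, a relocated driver needs no
    manual host configuration."""
    commands = [
        ["docker", "build", "-t", image, source_dir],
        ["docker", "run", "--rm",
         "-v", f"{source_dir}:/src", "-w", "/src",
         image, "cargo", "build", "--release"],
    ]
    for command in commands:
        run(command, check=True)
    return commands
```

The injectable `run` parameter lets the command sequence be inspected without Docker installed, which is also how a Scala driver could stub the container runtime in tests.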
APA, Harvard, Vancouver, ISO, and other styles
21

Abd, Gaus Yona Falinie. "Artificial intelligence system for continuous affect estimation from naturalistic human expressions." Thesis, Brunel University, 2018. http://bura.brunel.ac.uk/handle/2438/16348.

Full text
Abstract:
The analysis and automatic estimation of affect from human expressions has been acknowledged as an active research topic in the computer vision community. Most reported affect recognition systems, however, only consider subjects performing well-defined, acted expressions under very controlled conditions, so they are not robust enough for real-life recognition tasks with subject variation, acoustic surroundings and illumination change. In this thesis, an artificial intelligence system is proposed to continuously estimate affective behaviour (represented along a continuum, e.g., from -1 to +1) in terms of latent dimensions (e.g., arousal and valence) from naturalistic human expressions. To tackle these issues, feature representation and machine learning strategies are addressed. In feature representation, human expression is represented by modalities such as audio, video, physiological signals and text. Hand-crafted features are extracted from each modality per frame, in order to match the consecutive affect labels. However, the extracted features may be missing information due to factors such as background noise or lighting conditions. The Haar wavelet transform is employed to determine whether a noise-cancellation mechanism in feature space should be considered in the design of the affect estimation system. Beyond hand-crafted features, deep learning features are also analysed layer-wise, at the convolutional and fully connected layers. Convolutional Neural Networks such as AlexNet, VGGFace and ResNet were selected as deep learning architectures for feature extraction on facial expression images. A multimodal fusion scheme is then applied, fusing deep learning features and hand-crafted features together to improve performance. In machine learning strategies, a two-stage regression approach is introduced. In the first stage, baseline regression methods such as Support Vector Regression are applied to estimate each affect dimension per time step. 
In the second stage, a subsequent model such as a Time Delay Neural Network, Long Short-Term Memory network or Kalman filter is proposed to model the temporal relationships between consecutive estimates of each affect dimension. In doing so, the temporal information employed by the subsequent model is not biased by the high variability present in consecutive frames, and at the same time the network can exploit the slowly changing emotional dynamics more efficiently. Following the two-stage regression approach for unimodal affect analysis, the fusion of information from different modalities is elaborated. Continuous emotion recognition in the wild is leveraged by investigating mathematical models for each emotion dimension: Linear Regression, Exponent Weighted Decision Fusion and Multi-Gene Genetic Programming are implemented to quantify the relationship between the modalities. In summary, the research work presented in this thesis reveals a fundamental approach to automatically and continuously estimate affect values from naturalistic human expressions. The proposed system, which consists of feature smoothing, deep learning features, a two-stage regression framework and fusion using mathematical equations between modalities, is demonstrated. It offers a strong basis for the development of artificial intelligence systems for continuous affect estimation and, more broadly, for building real-time emotion recognition systems for human-computer interaction.
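The second stage can be as light as a scalar Kalman filter over the first-stage outputs, treating each per-frame estimate as a noisy measurement of a slowly drifting affect value. A sketch with illustrative variance parameters (the thesis's filter configuration is not specified here):

```python
def kalman_smooth(estimates, process_var=0.01, measurement_var=0.1):
    """Second-stage temporal model: filter the frame-wise first-stage
    affect estimates so the output tracks slow emotional dynamics rather
    than frame-to-frame variability."""
    x, p = estimates[0], 1.0          # state and its variance
    smoothed = [x]
    for z in estimates[1:]:
        p += process_var              # predict: uncertainty grows
        gain = p / (p + measurement_var)
        x += gain * (z - x)           # update toward the new estimate
        p *= 1.0 - gain
        smoothed.append(x)
    return smoothed
```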
APA, Harvard, Vancouver, ISO, and other styles
22

Pérez, Dattari Rodrigo Javier. "Interactive learning with corrective feedback for continuous-action policies based on deep neural networks." Tesis, Universidad de Chile, 2019. http://repositorio.uchile.cl/handle/2250/170535.

Full text
Abstract:
Thesis submitted for the degree of Master of Engineering Sciences, Electrical Engineering
Thesis submitted for the degree of Electrical Civil Engineer
Deep Reinforcement Learning (DRL) has become a powerful methodology for solving complex sequential decision-making problems. However, DRL has several limitations when used in real-world problems (e.g., robotics applications). For instance, long training times (which cannot be accelerated) are required, in contrast to simulated environments, and reward functions can be difficult to specify/model and/or compute. Moreover, transferring policies learned in simulation to the real world is not straightforward (the reality gap). On the other hand, machine learning methods based on transferring human knowledge to an agent have been shown to obtain well-performing policies without necessarily requiring a reward function, while being time-efficient. In this context, this thesis introduces an Interactive Machine Learning (IML) strategy for training policies modeled as Deep Neural Networks (DNNs), based on human corrective feedback, with a method called D-COACH. It combines Deep Learning (DL) with the COrrective Advice Communicated by Humans (COACH) method, whereby non-expert humans can train policies by correcting the actions the agent takes during execution. The D-COACH method has the potential to solve complex problems without requiring large amounts of data or time. Experimental results validate the efficiency of the proposed method on simulated and real-world platforms, in low- and high-dimensional state spaces, showing its ability to learn policies in continuous action spaces effectively. 
The proposed method showed particularly interesting results when policies parameterized with Convolutional Neural Networks (CNNs) were used to solve problems with high-dimensional state spaces, such as pixels from an image. By using CNNs, agents can build valuable representations of the environment's state without the need for feature engineering on the designer's side (which was always necessary in classical Reinforcement Learning (RL)). These properties can be very useful in robotics, since it is common to find applications where the information acquired by the system's sensors is high-dimensional, such as RGB images. Giving robots the ability to learn from high-dimensional data will increase the complexity of the problems they can solve. Throughout this thesis, three variations of D-COACH are proposed and validated. The first introduces a general structure for solving problems with low- and high-dimensional states. The second proposes a variation of the first method for high-dimensional state problems, reducing the time and effort a human needs to train a policy. Finally, the third introduces the use of Recurrent Neural Networks to add memory to agents in problems with partial observability.
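The corrective-feedback update at the heart of D-COACH can be illustrated with the simplest possible policy. Assuming a linear policy a = w·s and a fixed error magnitude e (the thesis uses deep and convolutional network policies), a single correction step looks like:

```python
import numpy as np

def dcoach_step(weights, state, h, error_magnitude, learning_rate):
    """One corrective update: the human signal h (+1 or -1) says the
    executed action should have been larger or smaller; the policy is
    nudged toward the corrected action with a squared-error gradient step."""
    state = np.asarray(state, dtype=float)
    action = float(weights @ state)
    target = action + h * error_magnitude        # human-corrected action
    gradient = (action - target) * state         # grad of 0.5*(action-target)^2
    return weights - learning_rate * gradient
```

With a learning rate of 1 and a one-hot state, a single step makes the policy reproduce the corrected action exactly; smaller learning rates accumulate corrections gradually, which is closer to the interactive setting described above.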
FONDECYT 1161500
APA, Harvard, Vancouver, ISO, and other styles
23

Webb, Colin. "A continuous flow elevator to lift ore vertically for deep mine haulage using a cable disc elevator." Thesis, Federation University Australia, 2020. http://researchonline.federation.edu.au/vital/access/HandleResolver/1959.17/175285.

Full text
Abstract:
Vertical continuous ore haulage with elevators for deep haulage in mining is virtually non-existent. In this research, investigations concentrated on a cable disc elevator. The problem with using a cable disc elevator is the friction between the fixed elevator tube and the moving ore on the disc. This research establishes the friction forces that exist as the elevator cable and discs are elevated up a stationary tube; the focus is then on finding a way to eliminate that friction. The method involved developing three test rigs. Test Rig 1 measures static friction, with the ore placed on a disc in a tube mounted on load cells to measure the resistance as the ore on the disc is lifted by a counterweight; this is relevant for an elevator that has stopped under load. Test Rig 2 measures the dynamic friction in an operational 5-inch elevator, with the tube on the lifting side held stationary by load cells while the cable discs are lifting the ore. Test Rig 3 eliminates friction in the lifting tube by using a pipe conveyor that travels vertically at the same speed as the cable disc elevator to contain the ore on the elevator, so the cable disc elevator does all the ore lifting. The research generated static and dynamic friction results for gravel, granite and coal. The cable tension required for an ore lift of 1000 metres and the maximum hoisting distances for some existing cables are calculated. The implications of this research are that the cable disc elevator has the potential to haul from depths greater than existing elevators, has a small footprint in a mine, and with some further development could eliminate the need for truck haulage in open-cut and underground mining.
Doctor of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
24

Junior, Antonio Paulo Nassar. "Impacto da sedação intermitente ou interrupção diária da sedação em pacientes sob ventilação mecânica." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/5/5169/tde-14122015-155846/.

Full text
Abstract:
Introduction: Daily sedative interruption and intermittent sedation are effective in abbreviating the time on mechanical ventilation. Whether one is superior to the other has not yet been determined. Our aim was to compare daily interruption and intermittent sedation during the mechanical ventilation period in a low nurse staffing intensive care unit (ICU). Methods: Adult patients expected to need mechanical ventilation for more than 24 hours were randomly assigned, in a single center, either to daily interruption of continuous sedative and opioid infusion or to intermittent sedation. In both cases, our goal was to maintain a Sedation Agitation Scale (SAS) level of 3 or 4; that is, patients should be calm, easily arousable or awakened with verbal stimuli or gentle shaking. Primary outcome was ventilator-free days in 28 days. Secondary outcomes were ICU and hospital mortality, incidence of delirium, nurse workload, self-extubation and psychological distress six months after ICU discharge. Results: A total of 60 patients were included. There were no differences in the ventilator-free days in 28 days between daily interruption and intermittent sedation (median: 24 versus 25 days, P = 0.160). There were also no differences in ICU mortality (40 versus 23.3%, P = 0.165), hospital mortality (43.3 versus 30%, P = 0.284), incidence of delirium (30 versus 40%, P = 0.472), self-extubation (3.3 versus 6.7%, P = 0.514), and psychological stress six months after ICU discharge. Also, the nurse workload was not different between groups, but it was reduced on day 5 compared to day 1 in both groups (Nurse Activity Score (NAS) in the intermittent sedation group was 54 on day 1 versus 39 on day 5, P < 0.001; NAS in daily interruption group was 53 on day 1 versus 38 on day 5, P < 0.001). Fentanyl and midazolam total dosages per patient were higher in the daily interruption group. The tidal volume was higher in the intermittent sedation group during the first five days of ICU stay. 
Conclusions: There was no difference in the number of ventilator-free days in 28 days between both groups. Intermittent sedation was associated with lower sedative and opioid doses
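The primary endpoint of this trial, ventilator-free days within 28 days, follows a scoring convention worth making explicit. A minimal sketch, assuming the common ICU-trial convention (not spelled out in the abstract itself) that non-survivors score zero:

```python
def ventilator_free_days(days_on_vent, survived_to_day_28, horizon=28):
    """Ventilator-free days (VFD) over a fixed horizon.

    Assumed convention (common in ICU trials, not stated in the
    abstract): patients who die before the horizon score 0;
    survivors score the horizon minus their days on the ventilator.
    """
    if not survived_to_day_28:
        return 0
    return max(horizon - days_on_vent, 0)
```

Under this convention, a survivor ventilated for 4 days scores 24 VFD, matching the order of magnitude of the medians reported above (24 versus 25 days).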
APA, Harvard, Vancouver, ISO, and other styles
25

Dreißigacker, Christoph [Verfasser]. "Searches for continuous gravitational waves : sensitivity estimation and deep learning as a novel search method / Christoph Dreißigacker." Hannover : Gottfried Wilhelm Leibniz Universität Hannover, 2020. http://d-nb.info/1220422142/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Khatab, Mahmoud A. T. "Behaviour of continuously supported self-compacting concrete deep beams." Thesis, University of Bradford, 2016. http://hdl.handle.net/10454/14628.

Full text
Abstract:
The present research is conducted to investigate the structural behaviour of continuously supported deep beams made with SCC. A series of tests on eight reinforced two-span continuous deep beams made with SCC was performed. The main parameters investigated were the shear span-to-depth ratio, the amount and configuration of web reinforcement and the main longitudinal reinforcement ratio. All beams failed due to a major diagonal crack formed between the applied mid-span load and the intermediate support separating the beam into two blocks: the first one rotated around the end support leaving the rest of the beam fixed on the other two supports. The amount and configuration of web reinforcement had a major effect in controlling the shear capacity of SCC continuous deep beams. The shear provisions of the ACI 318M-11 reasonably predicted the load capacity of SCC continuous deep beams. The strut-and-tie model recommended by different design codes showed conservative results for all SCC continuous deep beams. The ACI Building Code (ACI 318M-11) predictions were more accurate than those of the EC2 and Canadian Code (CSA23.3-04). The proposed effectiveness factor equations for the strut-and-tie model showed accurate predictions compared to the experimental results. The different equations of the effectiveness factor used in upper-bound analysis can reasonably be applied to the prediction of the load capacity of continuously supported SCC deep beams although they were proposed for normal concrete (NC). The proposed three dimensional FE model accurately predicted the failure modes, the load capacity and the load-deflection response of the beams tested.
APA, Harvard, Vancouver, ISO, and other styles
27

Vignau, Benjamin. "Méthodes d'évaluation des systèmes biométriques cardiaques." Electronic Thesis or Diss., Bourges, INSA Centre Val de Loire, 2024. http://www.theses.fr/2024ISAB0011.

Full text
Abstract:
Cette thèse s’intéresse aux systèmes d’authentification continue basés sur les signaux cardiaques, et plus précisément au signal PPG. Ce dernier est très largement utilisé dans les montres connectées et équipements médicaux, et son usage principal est de mesurer la fréquence cardiaque ainsi que le taux d’oxygène dans le sang. Cependant, ce signal est suffisamment précis et complexe pour différencier les individus et créer un nouveau système d’authentification. La mesure de ce signal en continu est déjà largement répandue au sein des populations, ainsi il paraît logique de chercher à l’utiliser pour créer un système d’authentification continue. Afin de créer un tel système, nous avons d’abord effectué un état de l’art. Ce dernier a mis en lumière divers biais méthodologiques au sein de la communauté. L’identification de ces derniers nous a poussés à créer un nouveau système de comparaison ainsi qu’une nouvelle méthode d’analyse des résultats. Nous avons ainsi testé et comparé plus de 250 modèles d’intelligence artificielle capables de reconnaître des individus à l’aide de leurs battements de cœur. Puis, nous avons créé un premier système d’authentification continue basé sur le signal PPG et étudié diverses attaques par corruption. Cette étude nous a permis de déterminer certaines failles de l’apprentissage continu. Enfin, nous avons déterminé une nouvelle méthode pour paramétrer de manière optimale un système d’authentification continue. Cette procédure permet de maximiser l’ergonomie de chaque utilisateur pour un niveau de sécurité défini
This thesis focuses on continuous authentication systems based on cardiac signals, and more specifically on the PPG signal. The latter is widely used in connected watches and medical equipment, and its main purpose is to measure heart rate and oxygen levels in the blood. However, this signal is sufficiently precise and complex to differentiate between individuals and create a new authentication system. The continuous measurement of this signal is already widespread in populations, so it seems logical to seek to use it to create a continuous authentication system. In order to create such a system, we first carried out a review of the state of the art. This review revealed various methodological biases within the community. Identifying these biases prompted us to create a new comparison system and a new method for analyzing the results. We tested and compared over 250 artificial intelligence models capable of recognizing individuals by their heartbeat. We then created a first continuous authentication system based on the PPG signal and studied various corruption attacks. This study enabled us to identify some of the shortcomings of continuous learning. Finally, we determined a new method for optimally parameterizing a continuous authentication system. This procedure maximizes the usability of each user for a given level of security
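The final contribution described above — choosing parameters that maximize usability at a fixed security level — amounts, in its simplest generic form, to threshold selection on biometric match scores. A hedged sketch of that generic idea (the function name, score arrays, and FAR target are illustrative assumptions, not the thesis's actual procedure):

```python
import numpy as np

def pick_threshold(genuine_scores, impostor_scores, max_far=0.01):
    """Pick the most permissive similarity threshold whose false-accept
    rate (FAR, security) stays at or below a target, then report the
    resulting false-reject rate (FRR, usability cost).

    Illustrative only: score distributions and the FAR budget are
    hypothetical, not taken from the thesis.
    """
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    # Candidate thresholds: every observed score, from low to high.
    candidates = np.sort(np.concatenate([genuine, impostor]))
    for t in candidates:  # lowest admissible threshold = best usability
        far = np.mean(impostor >= t)
        if far <= max_far:
            frr = np.mean(genuine < t)
            return t, far, frr
    return None  # no threshold meets the security target
```

With well-separated score distributions, the lowest threshold that meets the FAR budget also yields the lowest FRR, i.e. the best user experience for that security level.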
APA, Harvard, Vancouver, ISO, and other styles
28

Tomczyk, Martyna. "Sédation continue, maintenue jusqu'au décès : quelle communication dans les unités de soins palliatifs en France et en Pologne ? Pour une éthique de la présence à l'autre." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB214/document.

Full text
Abstract:
Cette thèse d’éthique médicale, située dans une approche interdisciplinaire et tentant de s’affranchir de tout préjugé personnel de la part de l’auteur, tend à apporter un éclairage, aussi objectif que possible, sur une question, jusqu’à présent, inexplorée : la communication sur la sédation continue, maintenue jusqu’au décès, dans les unités de soins palliatifs, en France et en Pologne. En premier lieu, une recherche bibliographique a été réalisée et les principales failles des publications existantes ont été mises en exergue dont deux en particulier : le flou terminologique et conceptuel autour de la notion de sédation dans le champ de la médecine palliative, et de la notion de représentation elle-même. Pour que l’objet de recherche soit correctement cerné, deux notions-clés – la notion de sédation continue, maintenue jusqu’au décès et la notion de représentation – ont été d’abord explicitées, puis articulées entre elles. En second lieu, une recherche qualitative fondée sur l’étude de cas multiples a été réalisée sur le terrain, c’est-à-dire dans différentes unités de soins palliatifs, en France et en Pologne. Deux méthodes qualitatives ont été adoptées : l’analyse de dossiers et les entretiens semi-directifs individuels. Ces derniers ont été réalisés auprès des principaux acteurs impliqués dans une situation de communication : les médecins prescripteurs, les infirmières et les proches des malades sédatés. Les malades n’ont pas été interrogés, mais leurs témoignages ont été recueillis via les entretiens avec les professionnels de santé et les proches. Quinze cas complets par pays, soit trente cas en tout, ont été inclus dans l’étude. Les données obtenues ont été analysées à l’aide d’outils linguistiques adaptés. Les résultats ont montré que les représentations de la « sédation continue, maintenue jusqu’au décès », faites par les professionnels de santé, avaient un impact sur la délivrance de l’information. 
Une certaine influence des contextes nationaux a été observée, dans la plupart des cas. En revanche, les souhaits des malades et de leurs proches, quant au contenu de l’information, étaient les mêmes dans les deux pays. Par ailleurs, ce n’était pas tant l’information stricto sensu qui comptait mais une présence bienveillante. Ce besoin de relation nous a amenés à nous interroger sur une certaine universalité de l’être souffrant, malgré ses particularités, d’où notre proposition d’une éthique de la présence à l’autre. Ne devrait-elle pas être intégrée dans le champ de la médecine palliative ou plutôt dans la médecine toute entière, voire dans nos vies de tous les jours ?
This thesis addresses an issue of medical ethics which has not previously been investigated, that of communication concerning continuous sedation until death as practised in palliative care units in France and Poland. Using an interdisciplinary approach, free of any personal preconceptions by the author, it aims to provide an objective insight into the issue. A literature review is performed initially which highlights the main flaws in the existing publications of which there are two in particular: terminological and conceptual confusion around the idea of sedation in palliative medicine and its conceptual representation. In order to properly frame the object of research, two key concepts: continuous sedation until death and representation are first clarified and then linked together. Subsequently, a qualitative multiple-case field study is performed in a number of different palliative care units in France and Poland. Two qualitative methods are used: case analyses and individual semi-structured interviews with the main parties involved in the communication process - prescribing clinicians, nurses and the families and friends of sedated patients. Patients were not directly interviewed but their experiences were accounted for via the interviews with the carers and family members. Thirty completed cases, fifteen per country, are included in the study. The data obtained are analysed using the appropriate linguistic tools. The results show that carers' representations of “continuous sedation until death” influence the delivery of information to patients. The national contexts are seen to exert a certain influence in most cases. However, with regard to the content of information, the wishes of patients and family members are the same in both countries. Moreover, it is less the information itself that counts as much as the caring way it is delivered. 
The emergence from this study of a needful wish to be cared for leads us to question whether, despite individual differences, there is not a universal dimension to the suffering being. This in turn prompts our suggestion of an ethical scope to the presence of the other. Should this not be at the root of palliative medicine and moreover throughout the entire field of medicine? And if that's the case, why not in our everyday lives?
APA, Harvard, Vancouver, ISO, and other styles
29

Sors, Arnaud. "Apprentissage profond pour l'analyse de l'EEG continu." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAS006/document.

Full text
Abstract:
Ces travaux de recherche visent à développer des méthodes d’apprentissage automatique pour l’analyse de l’électroencéphalogramme (EEG) continu. L’EEG continu est une modalité avantageuse pour l’évaluation fonctionnelle des états cérébraux en réanimation ou pour d’autres applications. Cependant son utilisation aujourd’hui demeure plus restreinte qu’elle ne pourrait l’être, car dans la plupart des cas l’interprétation est effectuée visuellement par des spécialistes. Les sous-parties de ce travail s’articulent autour de l’évaluation pronostique du coma post-anoxique, choisie comme application pilote. Un petit nombre d’enregistrements de longue durée a été réalisé, et des enregistrements existants ont été récupérés au CHU Grenoble. Nous commençons par valider l’efficacité des réseaux de neurones profonds pour l’analyse EEG d’échantillons bruts. Nous choisissons à cet effet de travailler sur la classification de stades de sommeil. Nous utilisons un réseau de neurones convolutionnel adapté pour l’EEG que nous entrainons et évaluons sur le jeu de données SHHS (Sleep Heart Health Study). Cela constitue le premier système neuronal à cette échelle (5000 patients) pour l’analyse du sommeil. Les performances de classification atteignent ou dépassent l’état de l’art. En utilisation réelle, pour la plupart des applications cliniques le défi principal est le manque d’annotations adéquates sur les patterns EEG ou sur de courts segments de données (et la difficulté d’en établir). Les annotations disponibles sont généralement de haut niveau (par exemple, le devenir clinique) et sont donc peu nombreuses. Nous recherchons comment apprendre des représentations compactes de séquences EEG de façon non-supervisée/semi-supervisée. Le domaine de l’apprentissage non supervisé est encore jeune. 
Pour se comparer aux travaux existants nous commençons avec des données de type image, et investiguons l’utilisation de réseaux adversaires génératifs (GANs) pour l’apprentissage adversaire non-supervisé de représentations. La qualité et la stabilité de différentes variantes sont évaluées. Nous appliquons ensuite un GAN de Wasserstein avec pénalité sur les gradients à la génération de séquences EEG. Le système, entrainé sur des séquences mono-piste de patients en coma post anoxique, est capable de générer des séquences réalistes. Nous développons et discutons aussi des idées originales pour l’apprentissage de représentations en alignant des distributions dans l’espace de sortie du réseau représentatif.Pour finir, les signaux EEG multipistes ont des spécificités qu’il est souhaitable de prendre en compte dans les architectures de caractérisation. Chaque échantillon d’EEG est un mélange instantané des activités d’un certain nombre de sources. Partant de ce constat nous proposons un système d’analyse composé d’un sous-système d’analyse spatiale suivi d’un sous-système d’analyse temporelle. Le sous-système d’analyse spatiale est une extension de méthodes de séparation de sources construite à l’aide de couches neuronales avec des poids adaptatifs pour la recombinaison des pistes, c’est à dire que ces poids ne sont pas appris mais dépendent de caractéristiques du signal d’entrée. Nous montrons que cette architecture peut apprendre à réaliser une analyse en composantes indépendantes, si elle est entrainée sur une mesure de non-gaussianité. Pour l’analyse temporelle, des réseaux convolutionnels classiques utilisés séparément sur les pistes recombinées peuvent être utilisés
The objective of this research is to explore and develop machine learning methods for the analysis of continuous electroencephalogram (EEG). Continuous EEG is an interesting modality for functional evaluation of cerebral state in the intensive care unit and beyond. Today its clinical use remains more limited than it could be because interpretation is still mostly performed visually by trained experts. In this work we develop automated analysis tools based on deep neural models. The subparts of this work hinge around post-anoxic coma prognostication, chosen as pilot application. A small number of long-duration records were performed and available existing data was gathered from CHU Grenoble. Different components of a semi-supervised architecture that addresses the application are imagined, developed, and validated on surrogate tasks. First, we validate the effectiveness of deep neural networks for EEG analysis from raw samples. For this we choose the supervised task of sleep stage classification from single-channel EEG. We use a convolutional neural network adapted for EEG and we train and evaluate the system on the SHHS (Sleep Heart Health Study) dataset. This constitutes the first neural sleep scoring system at this scale (5000 patients). Classification performance reaches or surpasses the state of the art. In real use for most clinical applications, the main challenge is the lack of (and difficulty of establishing) suitable annotations on patterns or short EEG segments. Available annotations are high-level (for example, clinical outcome) and therefore they are few. We search how to learn compact EEG representations in an unsupervised/semi-supervised manner. The field of unsupervised learning using deep neural networks is still young. To compare to existing work we start with image data and investigate the use of generative adversarial networks (GANs) for unsupervised adversarial representation learning. The quality and stability of different variants are evaluated. 
We then apply gradient-penalized Wasserstein GANs to EEG sequence generation. The system is trained on single-channel sequences from post-anoxic coma patients and is able to generate realistic synthetic sequences. We also explore and discuss original ideas for learning representations through matching distributions in the output space of representative networks. Finally, multichannel EEG signals have specificities that should be accounted for in characterization architectures. Each EEG sample is an instantaneous mixture of the activities of a number of sources. Based on this statement we propose an analysis system made of a spatial analysis subsystem followed by a temporal analysis subsystem. The spatial analysis subsystem is an extension of source separation methods built with a neural architecture with adaptive recombination weights, i.e. weights that are not learned but depend on features of the input. We show that this architecture learns to perform Independent Component Analysis if it is trained on a measure of non-gaussianity. For temporal analysis, standard (shared) convolutional neural networks applied on separate recomposed channels can be used
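The claim above — that the spatial-analysis layers learn Independent Component Analysis when trained on a measure of non-gaussianity — can be illustrated with the most classic such measure, excess kurtosis, which is zero for a Gaussian signal. A minimal sketch only; the abstract does not specify which non-gaussianity objective the thesis actually uses:

```python
import numpy as np

def excess_kurtosis(x):
    """Excess kurtosis of a 1-D signal: a classic non-gaussianity
    measure used as an ICA objective (0 for Gaussian data, negative
    for sub-Gaussian sources such as uniform noise, positive for
    super-Gaussian ones). A stand-in for the thesis's unspecified
    non-gaussianity training measure."""
    x = np.asarray(x, dtype=float)
    x = (x - x.mean()) / x.std()   # standardize to zero mean, unit variance
    return float(np.mean(x**4) - 3.0)
```

Maximizing the absolute value of such a measure over the recombination weights drives the recombined channel toward a maximally non-Gaussian, hence (by the ICA argument) independent, source estimate.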
APA, Harvard, Vancouver, ISO, and other styles
30

Vukotic, Verdran. "Deep Neural Architectures for Automatic Representation Learning from Multimedia Multimodal Data." Thesis, Rennes, INSA, 2017. http://www.theses.fr/2017ISAR0015/document.

Full text
Abstract:
La thèse porte sur le développement d'architectures neuronales profondes permettant d'analyser des contenus textuels ou visuels, ou la combinaison des deux. De manière générale, le travail tire parti de la capacité des réseaux de neurones à apprendre des représentations abstraites. Les principales contributions de la thèse sont les suivantes : 1) Réseaux récurrents pour la compréhension de la parole : différentes architectures de réseaux sont comparées pour cette tâche sur leurs facultés à modéliser les observations ainsi que les dépendances sur les étiquettes à prédire. 2) Prédiction d’image et de mouvement : nous proposons une architecture permettant d'apprendre une représentation d'une image représentant une action humaine afin de prédire l'évolution du mouvement dans une vidéo ; l'originalité du modèle proposé réside dans sa capacité à prédire des images à une distance arbitraire dans une vidéo. 3) Encodeurs bidirectionnels multimodaux : le résultat majeur de la thèse concerne la proposition d'un réseau bidirectionnel permettant de traduire une modalité en une autre, offrant ainsi la possibilité de représenter conjointement plusieurs modalités. L'approche a été étudiée principalement en structuration de collections de vidéos, dans le cadre d'évaluations internationales où l'approche proposée s'est imposée comme l'état de l'art. 4) Réseaux adverses pour la fusion multimodale : la thèse propose d'utiliser les architectures génératives adverses pour apprendre des représentations multimodales en offrant la possibilité de visualiser les représentations dans l'espace des images
In this dissertation, the thesis that deep neural networks are suited for analysis of visual, textual and fused visual and textual content is discussed. This work evaluates the ability of deep neural networks to learn automatic multimodal representations in either unsupervised or supervised manners and brings the following main contributions: 1) Recurrent neural networks for spoken language understanding (slot filling): different architectures are compared for this task with the aim of modeling both the input context and output label dependencies. 2) Action prediction from single images: we propose an architecture that allows us to predict human actions from a single image. The architecture is evaluated on videos, by utilizing solely one frame as input. 3) Bidirectional multimodal encoders: the main contribution of this thesis consists of a neural architecture that translates from one modality to the other and conversely, and offers an improved multimodal representation space where the initially disjoint representations can be translated and fused. This enables improved multimodal fusion of multiple modalities. The architecture was extensively studied and evaluated in international benchmarks within the task of video hyperlinking, where it defined the state of the art today. 4) Generative adversarial networks for multimodal fusion: continuing on the topic of multimodal fusion, we evaluate the possibility of using conditional generative adversarial networks to learn multimodal representations; in addition to providing multimodal representations, generative adversarial networks permit visualizing the learned model directly in the image domain
APA, Harvard, Vancouver, ISO, and other styles
31

Shain, Cory Adam. "Language, time, and the mind: Understanding human language processing using continuous-time deconvolutional regression." The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1619002281033782.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Hamitouche, Ilyes. "Machine learning for determining continuous conformational transitions of biomolecular complexes from single-particle cryo-electron microscopy images." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS047.

Full text
Abstract:
Au cours de mon travail de thèse, j'ai développé trois méthodes basées sur l'apprentissage profond pour extraire la variabilité conformationnelle continue des complexes biomoléculaires à partir d’images de cryo-microscopie électronique de particules isolées. Les trois méthodes suivantes sont décrites dans ce manuscrit de thèse, ainsi que leurs résultats sur des données de test : DeepHEMNMA supervisée, Cryo-ViT supervisée et Cryo-ViT non supervisée. DeepHEMNMA est une méthode de détermination rapide de l'espace conformationnel qui utilise un réseau de neurones convolutifs pour accélérer une méthode précédemment développée pour l'analyse conformationnelle continue, HEMNMA, qui combine une simulation du mouvement calculée par l'analyse des modes normaux (NMA) avec une approche de traitement d'image. Contrairement à DeepHEMNMA, les approches Cryo-ViT apprennent à faire correspondre chaque image à un grand nombre de coordonnées atomiques à l'aide d'un autoencodeur variationnel
During my thesis, I developed three methods based on deep learning to extract continuous conformational variability of biomolecular complexes from single-particle cryo-electron microscopy images. The following three methods are described in this thesis manuscript, along with their results on test data: supervised DeepHEMNMA, supervised Cryo-ViT, and unsupervised Cryo-ViT. DeepHEMNMA is a fast conformational space determination method that uses a convolutional neural network to accelerate a previously developed method for continuous conformational analysis, HEMNMA, which combines a motion simulation computed by normal mode analysis (NMA) with an image processing approach. In contrast to DeepHEMNMA, the Cryo-ViT approaches learn to match each image to a large number of atomic coordinates using a variational autoencoder
APA, Harvard, Vancouver, ISO, and other styles
33

Carvalho, Paulo Henrique Boaventura de. "Sedação em colonoscopia: utilização do propofol em estudo comparativo entre três diferentes modos de administração." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/5/5132/tde-14122015-122427/.

Full text
Abstract:
O uso do propofol em sedação para colonoscopia e outros procedimentos endoscópicos é cada vez mais frequente, devido ao seu rápido início de efeito e curto período de recuperação, com poucos efeitos residuais, o que o torna um anestésico ideal para o uso em condutas médicas realizadas em regime ambulatorial. Seu perfil farmacológico o posiciona como um anestésico adequado a métodos de administração endovenosa contínuos ou titulados, possibilitando maior controle na sua concentração plasmática. Devido à sua alta lipossolubilidade, o propofol difunde-se rapidamente ao sistema nervoso e outros tecidos aonde exercerá seu efeito clínico, intimamente ligado à propofolemia, com diminuição da atividade do sistema nervoso central, que determinará tanto a sedação nos seus diversos níveis, quanto os indesejados efeitos depressores do sistema cardiovascular e respiratório, podendo levar a uma diminuição importante do débito cardíaco e pressão arterial e também a uma depressão central do sistema regulatório da respiração, que pode gerar apneia ou hipoventilação significativas. O presente estudo teve como objetivo avaliar clinicamente, e com dosagem sérica, o propofol em três esquemas diferentes de infusão endovenosa. Foram avaliados aleatoriamente 50 pacientes submetidos à colonoscopia nos Serviços de Endoscopia do Hospital Ana Costa (Santos - SP) e no Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo (São Paulo-SP). Os pacientes foram divididos em três grupos, conforme o esquema de sedação que foi utilizado. O Grupo 1 recebeu fentanil no início, uma dose inicial de propofol de um miligrama por quilo em um minuto na indução, posteriormente recebeu propofol em infusão intermitente de doses fracionadas de 30 mg (bolus) conforme necessidade clínica durante o exame. 
O Grupo 2 recebeu fentanil no início, uma dose inicial de propofol de 1 mg/kg em um minuto na indução, após essa, recebeu propofol contínuo em uma solução diluída a 0,2% em solução glicosada a 5%, em uma dose inicial de 1 gota/kg de peso do paciente, o que equivale a aproximadamente 100 ug/kg/min, controlada manualmente e alterada conforme a necessidade clínica do exame. O Grupo 3 recebeu fentanil no início do exame, e propofol com dose calculada e administrada por bomba eletrônica computadorizada (Diprifusor®) em esquema de infusão contínua alvo controlada, numa dose inicial de indução de 4 ug/mL administrada em um minuto, baixada a 2 ug/mL após a dose inicial completada, e alteradas para mais ou para menos conforme a necessidade clínica do exame. Os pacientes foram monitorizados com eletrocardiografia contínua, pressão arterial não invasiva medida de dois em dois minutos, oximetria de pulso, capnografia de aspiração lateral e índice bispectral (BIS). As dosagens séricas de propofol foram feitas em três amostras de sangue colhidas por paciente. A primeira amostra, cinco minutos após a indução, a segunda ao endoscopista alcançar o ceco durante o exame e a terceira a cinco minutos após a última dose de propofol administrada ou ao término da infusão contínua, no final do exame. Não houve diferença estatística significativa entre os Grupos em relação às características físicas pessoais dos pacientes como: sexo (p = 0,976), estado físico de acordo com a American Society of Anestesiology (ASA) (p = 0,945), idade (p = 0,896), peso (p = 0,340), altura (p = 0,947), índice de massa corpórea (IMC) (p = 0,406), nos parâmetros clínicos observados como menor valor de índice BIS (p = 0,871) e o tempo para alcançá-lo (p = 0,052), tempo médio do exame (p = 0,123) e efeitos adversos observados como a queda da saturação de oxigênio abaixo de 90% (p = 0,054). 
Houve diferença estatisticamente significativa nas pressões arteriais iniciais dos Grupos 2 e 3, que foram ligeiramente elevadas em relação ao Grupo 1 a sistólica (p = 0,008), diastólica (p = 0,018) e média (p = 0,008), porém após a indução, a média das pressões arteriais sistólica (p = 0,440), diastólica (p = 0,960) e média (p = 0,815), e as menores pressões alcançadas não foram estatisticamente diferentes: sistólica (p = 0,656), diastólica (p = 0,783) e média (p = 0,993). Não houve diferença estatística em relação à frequência cardíaca inicial (p = 0,453), média após indução (p = 0,702), e menor frequência cardíaca alcançada (p = 0,788). Houve diferença entre o número de agitações médias entre os Grupos (p = 0,001), sendo maior no Grupo 1, porém este número foi relacionado ao esquema de administração do propofol no Grupo 1, que foi administrado após a indução quando o paciente apresentou algum grau de agitação que necessitou aprofundamento anestésico. Houve queda de saturação de oxigênio em seis pacientes (12%) da amostra avaliada, revertidas em tempo menor que cinco minutos com manobras de elevação da mandíbula do paciente ou utilização de cânula de Guedel para desobstrução das vias aéreas. Antes das quedas na saturação de oxigênio, foram percebidas alterações típicas de obstrução de vias aéreas, hipopneia ou apneia nas ondas de capnografia em 16 pacientes (32%), sendo que, em alguns pacientes por mais de uma vez, demonstrando esse ser um bom parâmetro de monitorização para prevenir hipóxia, não houve diferença entre os Grupos no parâmetro de obstrução de vias aéreas/apneia (p = 0,543). Em relação à propofolemia, o comportamento médio dos pacientes dos três Grupos foi estatisticamente igual ao longo dos momentos de avaliação (p = 0,830), não havendo diferença média estatisticamente significativa entre os Grupos (p = 0,964). Não houve diferença entre o consumo do propofol médio por minuto de exame (p = 0,748). 
Em relação à análise de custos com a administração do propofol, o Grupo 1 apresentou o menor valor médio para as colonoscopias avaliadas com gasto médio de R$ 7,00, o Grupo 2 gastou em média R$ 17,50 e o Grupo 3 gastou em média R$ 112,70 com diferença estatisticamente significativa entre eles (p < 0,001). A conclusão é que os esquemas de administração do propofol testados foram seguros, e houve semelhança entre os Grupos nos parâmetros avaliados incluindo a propofolemia, porém com custos diferenciados entre eles. Em relação ao Grupo 1, devido ao maior número de agitações por minuto este pode ser um bom método para procedimentos mais curtos, para procedimentos mais longos os Grupos 2 e 3 se mostraram mais confortáveis para o responsável pela sedação
The use of propofol sedation for colonoscopies and other endoscopic procedures is increasing due to its rapid onset of effect and short recovery time with few residual effects, which makes it an ideal anesthetic for use in outpatient medical procedures. Its pharmacological profile makes it well suited to continuous or titrated intravenous administration, providing greater control over its plasma levels. Due to its high liposolubility, propofol diffuses rapidly into the central nervous system and other tissues, where it exerts clinical effects closely related to plasma concentration, providing sedation at different levels as well as unwanted depressant effects on the cardiovascular and respiratory systems: it may lead to a significant reduction in cardiac output and blood pressure, and to depression of the central regulation of breathing, which can result in significant apnea or hypoventilation. This study aimed to evaluate propofol clinically and through serum levels in three different intravenous infusion regimens. Fifty patients undergoing colonoscopy at the endoscopy centers of Hospital Ana Costa (Santos, SP) and Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo (São Paulo, SP) were randomly assessed. The patients were divided into three groups according to the sedation scheme used. Group 1 received fentanyl first, then a propofol dose of one milligram per kilogram at induction, over one minute; afterwards they received intermittent infusion of propofol in fractionated 30 mg doses (boluses) according to clinical need during the examination. Group 2 received fentanyl at the start and a starting propofol dose of 1 mg/kg at induction over one minute; after that they received propofol as a 0.2% solution diluted in 5% glucose solution, at an initial rate of 1 drop/kg of patient weight, equivalent to about 100 ug/kg/min, manually controlled and adjusted according to the clinical needs of the examination. 
Group 3 received, at the beginning of the examination, fentanyl and propofol dosed by a target-controlled continuous-infusion electronic device (Diprifusor®): an initial loading target of 4 ug/mL administered over one minute, reduced to 2 ug/mL after the initial dose, and adjusted up or down according to the clinical needs of the examination. Patients were monitored with continuous electrocardiography, non-invasive blood pressure measured every two minutes, pulse oximetry, sidestream capnography and the bispectral index (BIS). Serum propofol levels were measured in three blood samples taken from each patient: the first five minutes after induction, the second when the endoscopist reached the cecum during the examination, and the third five minutes after the last administered dose or the end of the continuous infusion, at the end of the examination. There was no statistically significant difference between groups with respect to the patients' personal physical characteristics, namely sex (p = 0.976), physical status according to the American Society of Anesthesiology (ASA) (p = 0.945), age (p = 0.896), weight (p = 0.340), height (p = 0.947) and body mass index (BMI) (p = 0.406); in the clinical parameters observed, namely the lowest bispectral index (BIS) value reached (p = 0.871), the time to reach it (p = 0.052) and mean procedure time (p = 0.123); or in the adverse effects observed, namely drops in oxygen saturation below 90% (p = 0.054). There was a difference in the average number of episodes of agitation between groups (p = 0.001), higher in Group 1, but this was related to Group 1's propofol administration scheme, as propofol was administered after induction whenever the patient showed agitation requiring deeper anesthesia. 
There was a statistically significant difference in the initial blood pressures of Groups 2 and 3, which were slightly higher than Group 1's: systolic (p = 0.008), diastolic (p = 0.018) and mean (p = 0.008); but after induction the average systolic (p = 0.440), diastolic (p = 0.960) and mean (p = 0.815) pressures, and the lowest pressures reached, systolic (p = 0.656), diastolic (p = 0.783) and mean (p = 0.993), were not statistically different. There was no statistical difference in initial heart rate (p = 0.453), average heart rate after induction (p = 0.702) or lowest heart rate reached (p = 0.788). Oxygen desaturation below 90% occurred in six patients (12%) of the study sample, reversed in less than five minutes with a jaw-thrust maneuver or use of a Guedel cannula for airway clearance. Before these drops in oxygen saturation, capnography waveform changes typical of airway obstruction, hypopnea or apnea were noted in sixteen patients (32%), in some patients more than once, showing this to be a good monitoring parameter to prevent hypoxia; there was no difference between Groups in the airway obstruction/apnea parameter (p = 0.543). Regarding serum propofol, the average behaviour of patients in the three Groups was statistically similar over time (p = 0.830), with no statistically significant mean difference between groups (p = 0.964). There was no difference in average propofol consumption per minute of examination (p = 0.748). Regarding the cost analysis of propofol administration, Group 1 had the lowest average cost for the colonoscopies evaluated, with an average expense of R$ 7.00; Group 2 spent on average R$ 17.50 and Group 3 spent on average R$ 112.70, a statistically significant difference (p < 0.001). The conclusion is that the propofol administration schemes tested were safe and the Groups were similar in the evaluated parameters, including serum propofol, but with different costs among them. 
With respect to Group 1, due to its larger number of episodes of agitation per minute, it may be a good method for shorter procedures; for longer procedures, Groups 2 and 3 were more comfortable for the person responsible for sedation
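The equivalence reported for Group 2, 1 drop/kg/min of a 0.2% propofol solution being roughly 100 ug/kg/min, can be sanity-checked with a short calculation. This sketch assumes a standard macrodrip set delivering 20 drops per mL, a detail the abstract does not state; it is illustrative arithmetic, not clinical guidance.

```python
# Sanity check of Group 2's drip scheme (illustrative only).
DROPS_PER_ML = 20   # assumed standard macrodrip factor (not stated in the abstract)
MG_PER_ML = 2.0     # 0.2% w/v propofol solution = 2 mg per mL

def dose_ug_per_kg_min(drops_per_kg_min):
    """Convert a drip rate in drops/kg/min into a dose in micrograms/kg/min."""
    ug_per_drop = MG_PER_ML * 1000 / DROPS_PER_ML  # 2000 ug/mL over 20 drops = 100 ug
    return drops_per_kg_min * ug_per_drop

print(dose_ug_per_kg_min(1))  # -> 100.0, matching the abstract's ~100 ug/kg/min
```

Under the assumed drip factor, the reported equivalence holds exactly: one drop per kilogram per minute delivers 100 ug/kg/min.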
APA, Harvard, Vancouver, ISO, and other styles
34

Gatto, Lorenzo. "Apprendimento continuo per il riconoscimento di immagini." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15607/.

Full text
Abstract:
In recent years deep learning has attracted great interest from the scientific community, largely thanks to the results obtained since 2012 in computer vision, speech recognition and speech synthesis. The most important deep learning results have been obtained by training machine learning models on static datasets, iterating the training procedure several times over all available data. This contrasts with how humans learn, that is, by seeing data (images, sounds, etc.) only once while still managing to remember the past with a high level of accuracy. The way humans learn is called continual (or continuous) learning. Training approaches that avoid observing a dataset pattern repeatedly suffer from a problem called catastrophic forgetting: the model tends to forget the characteristics of patterns seen in the past, recognizing only patterns similar to those seen recently. Various solutions to the problem have been proposed, but none yet achieves performance similar to that of the cumulative approach, which runs the training procedure iteratively over all available data. The author's contribution was to measure the accuracy of the iCaRL continual learning technique on the CORe50 dataset and to compare the performance obtained with the results obtained by other authors on the same dataset using other techniques. The results show that current approaches to continual learning are still in their infancy, achieving accuracy levels not comparable to those obtainable with the cumulative approach.
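iCaRL's classification rule, nearest-mean-of-exemplars, can be sketched in a few lines of plain Python: each class keeps a small exemplar memory, and prediction picks the class whose exemplar mean is closest to the input. The toy two-dimensional features and labels below are invented for illustration and are unrelated to CORe50's actual data or to iCaRL's deep feature extractor.

```python
# Minimal sketch of iCaRL-style nearest-mean-of-exemplars classification.
def mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

class NearestMeanOfExemplars:
    def __init__(self, memory_per_class=2):
        self.memory_per_class = memory_per_class
        self.exemplars = {}  # class label -> bounded list of feature vectors

    def observe(self, label, features):
        """Incrementally add one sample, keeping a bounded exemplar memory."""
        mem = self.exemplars.setdefault(label, [])
        if len(mem) < self.memory_per_class:
            mem.append(features)

    def predict(self, features):
        # Classify by the nearest class mean over stored exemplars.
        return min(self.exemplars,
                   key=lambda c: dist2(features, mean(self.exemplars[c])))

clf = NearestMeanOfExemplars()
for label, feats in [("a", [0.0, 0.0]), ("a", [0.2, 0.1]), ("b", [1.0, 1.0])]:
    clf.observe(label, feats)
print(clf.predict([0.1, 0.0]))  # -> a
```

Because the exemplar memory is bounded, old classes remain representable as new ones arrive, which is the core idea iCaRL uses to mitigate catastrophic forgetting.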
APA, Harvard, Vancouver, ISO, and other styles
35

Kudrna, Jiří. "Nosná železobetonová konstrukce víceúčelového objektu." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2017. http://www.nusl.cz/ntk/nusl-265738.

Full text
Abstract:
This work deals with the structural solution of a reinforced concrete floor slab and deep beam, and their supports, in a multipurpose building. The structure is designed and assessed in accordance with valid standards. The continuous one-way slab is analyzed by the Finite Element Method; the deep beam is solved with a Strut-and-Tie model.
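As a hedged illustration of the Strut-and-Tie idea mentioned above (not the thesis's actual model), a symmetric deep beam with a single point load can be resolved into two inclined struts and a horizontal tie, whose forces follow from simple statics. The load and angle below are invented example values.

```python
import math

# Toy Strut-and-Tie statics: a deep beam with midspan point load P carried by
# two inclined struts at angle theta to the horizontal tie between supports.
def strut_and_tie_forces(p_kn, theta_deg):
    """Return (strut force, tie force) in kN for a symmetric two-strut model."""
    theta = math.radians(theta_deg)
    strut = (p_kn / 2) / math.sin(theta)  # each strut carries half the load
    tie = strut * math.cos(theta)         # horizontal equilibrium at the node
    return strut, tie

strut, tie = strut_and_tie_forces(p_kn=400.0, theta_deg=45.0)
print(round(strut, 1), round(tie, 1))  # -> 282.8 200.0
```

Flatter strut angles increase both strut and tie forces, which is why codes bound the permissible inclination of struts in such models.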
APA, Harvard, Vancouver, ISO, and other styles
36

Sugianto, Nehemia. "Responsible AI for Automated Analysis of Integrated Video Surveillance in Public Spaces." Thesis, Griffith University, 2021. http://hdl.handle.net/10072/409586.

Full text
Abstract:
Understanding customer experience in real-time can potentially support people’s safety and comfort while in public spaces. Existing techniques, such as surveys and interviews, can only analyse data at specific times. Therefore, organisations that manage public spaces, such as local government or business entities, cannot respond immediately when urgent actions are needed. Manual monitoring through surveillance cameras can enable organisation personnel to observe people. However, fatigue and human distraction during constant observation cannot ensure reliable and timely analysis. Artificial intelligence (AI) can automate people observation and analyse their movement and any related properties in real-time. Analysing people’s facial expressions can provide insight into how comfortable they are in a certain area, while analysing crowd density can inform us of the area’s safety level. By observing the long-term patterns of crowd density, movement, and spatial data, the organisation can also gain insight to develop better strategies for improving people’s safety and comfort. There are three challenges to making an AI-enabled video surveillance system work well in public spaces. First is the readiness of AI models to be deployed in public space settings. Existing AI models are designed to work in generic/particular settings and will suffer performance degradation when deployed in a real-world setting. Therefore, the models require further development to tailor them for the specific environment of the targeted deployment setting. Second is the inclusion of AI continual learning capability to adapt the models to the environment. AI continual learning aims to learn from new data collected from cameras to adapt the models to constant visual changes introduced in the setting. Existing continuous learning approaches require long-term data retention and past data, which then raise data privacy issues. 
Third, most of the existing AI-enabled surveillance systems rely on centralised processing, meaning data are transmitted to a central/cloud machine for video analysis purposes. Such an approach involves data privacy and security risks. Serious data threats, such as data theft, eavesdropping or cyberattack, can potentially occur during data transmission. This study aims to develop an AI-enabled intelligent video surveillance system based on deep learning techniques for public spaces established on responsible AI principles. This study formulates three responsible AI criteria, which become the guidelines to design, develop, and evaluate the system. Based on the criteria, a framework is constructed to scale up the system over time to be readily deployed in a specific real-world environment while respecting people’s privacy. The framework incorporates three AI learning approaches to iteratively refine the AI models within the ethical use of data. First is the AI knowledge transfer approach to adapt existing AI models from generic deployment to specific real-world deployment with limited surveillance datasets. Second is the AI continuous learning approach to continuously adapt AI models to visual changes introduced by the environment without long-period data retention and the need for past data. Third is the AI federated learning approach to limit sensitive and identifiable data transmission by performing computation locally on edge devices rather than transmitting to the central machine. This thesis contributes to the study of responsible AI specifically in the video surveillance context from both technical and non-technical perspectives. It uses three use cases at an international airport as the application context to understand passenger experience in real-time to ensure people’s safety and comfort. A new video surveillance system is developed based on the framework to provide automated people observation in the application context. 
Based on real deployment using the airport's selected cameras, the evaluation demonstrates that the system can provide real-time automated video analysis for three use cases while respecting people's privacy. Based on comprehensive experiments, AI knowledge transfer can be an effective way to address the issue of limited surveillance datasets, by transferring knowledge from similar datasets rather than training from scratch on surveillance datasets. It can be further improved by incrementally transferring knowledge from multiple datasets with smaller gaps rather than in a one-stage process. Learning without Forgetting is a viable approach for AI continuous learning in the video surveillance context. It consistently outperforms fine-tuning and joint-training approaches with lower data retention and without the need for past data. AI federated learning can be a feasible solution to allow continuous learning in the video surveillance context without compromising model accuracy. It can obtain comparable accuracy with quicker training time compared to joint-training.
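The federated learning approach described above, local computation on edge devices with only model updates sent centrally, can be sketched as federated averaging of per-device weight vectors. The device sample counts and weights below are hypothetical placeholders, not values from the thesis.

```python
# Minimal sketch of federated averaging (FedAvg): each edge device trains
# locally, and only its weight vector (never raw video) reaches the server,
# which averages the vectors weighted by local sample counts.
def fed_avg(device_updates):
    """device_updates: list of (num_local_samples, weight_vector) pairs."""
    total = sum(n for n, _ in device_updates)
    dim = len(device_updates[0][1])
    return [sum(n * w[i] for n, w in device_updates) / total for i in range(dim)]

# Hypothetical updates from three cameras' edge devices.
updates = [
    (100, [0.10, 0.30]),
    (300, [0.20, 0.10]),
    (100, [0.10, 0.50]),
]
print([round(v, 2) for v in fed_avg(updates)])  # -> [0.16, 0.22]
```

Weighting by sample count means busier cameras influence the shared model more, while no identifiable footage ever leaves the edge device.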
Thesis (PhD Doctorate)
Doctor of Philosophy (PhD)
Dept Bus Strategy & Innovation
Griffith Business School
Full Text
APA, Harvard, Vancouver, ISO, and other styles
37

Chancan, Leon Marvin Aldo. "The role of motion-and-visual perception in robot place learning and navigation." Thesis, Queensland University of Technology, 2022. https://eprints.qut.edu.au/229769/8/Marvin%20Aldo_Chancan%20Leon_Thesis.pdf.

Full text
Abstract:
This thesis was a step forward in developing new robot learning-based localisation and navigation systems using real world data and simulation environments. Three new methods were proposed to provide new insights on the role of joint motion-and-vision-based end-to-end robot learning in both place recognition and navigation tasks, within modern reinforcement learning and deep learning frameworks. Inspired by biological neural circuits underlying these complex tasks in insect and rat mammalian brains, these methods were shown to be orders of magnitude faster than classical techniques, while setting new state-of-the-art performance standards in terms of accuracy, throughput and latency.
APA, Harvard, Vancouver, ISO, and other styles
38

Fonseca, De Sam Bento Ribeiro Manuel. "Suprasegmental representations for the modeling of fundamental frequency in statistical parametric speech synthesis." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31338.

Full text
Abstract:
Statistical parametric speech synthesis (SPSS) has seen improvements over recent years, especially in terms of intelligibility. Synthetic speech is often clear and understandable, but it can also be bland and monotonous. Proper generation of natural speech prosody is still a largely unsolved problem. This is relevant especially in the context of expressive audiobook speech synthesis, where speech is expected to be fluid and captivating. In general, prosody can be seen as a layer that is superimposed on the segmental (phone) sequence. Listeners can perceive the same melody or rhythm in different utterances, and the same segmental sequence can be uttered with a different prosodic layer to convey a different message. For this reason, prosody is commonly accepted to be inherently suprasegmental. It is governed by longer units within the utterance (e.g. syllables, words, phrases) and beyond the utterance (e.g. discourse). However, common techniques for the modeling of speech prosody - and speech in general - operate mainly on very short intervals, either at the state or frame level, in both hidden Markov model (HMM) and deep neural network (DNN) based speech synthesis. This thesis presents contributions supporting the claim that stronger representations of suprasegmental variation are essential for the natural generation of fundamental frequency for statistical parametric speech synthesis. We conceptualize the problem by dividing it into three sub-problems: (1) representations of acoustic signals, (2) representations of linguistic contexts, and (3) the mapping of one representation to another. The contributions of this thesis provide novel methods and insights relating to these three sub-problems. In terms of sub-problem 1, we propose a multi-level representation of f0 using the continuous wavelet transform and the discrete cosine transform, as well as a wavelet-based decomposition strategy that is linguistically and perceptually motivated. 
In terms of sub-problem 2, we investigate additional linguistic features such as text-derived word embeddings and syllable bag-of-phones and we propose a novel method for learning word vector representations based on acoustic counts. Finally, considering sub-problem 3, insights are given regarding hierarchical models such as parallel and cascaded deep neural networks.
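One of the acoustic representations named above, a discrete cosine transform of the f0 contour, can be sketched directly: a handful of DCT coefficients gives a smooth, low-dimensional code for a suprasegmental unit's f0 shape. The contour values below are invented, and this unnormalized DCT-II/III pair is a generic formulation, not the thesis's exact parameterization.

```python
import math

def dct2(x):
    """Unnormalized DCT-II of a sequence x."""
    n = len(x)
    return [sum(x[t] * math.cos(math.pi * k * (t + 0.5) / n) for t in range(n))
            for k in range(n)]

def idct2(c, keep=None):
    """Inverse of dct2; keeping only the first `keep` coefficients smooths the curve."""
    n = len(c)
    keep = n if keep is None else keep
    return [c[0] / n + (2 / n) * sum(c[k] * math.cos(math.pi * k * (t + 0.5) / n)
                                     for k in range(1, keep))
            for t in range(n)]

f0 = [110.0, 118.0, 130.0, 126.0, 121.0, 112.0]  # toy f0 contour (Hz)
coeffs = dct2(f0)
smooth = idct2(coeffs, keep=3)  # 3 coefficients: a coarse, smooth approximation
full = idct2(coeffs)
print(max(abs(a - b) for a, b in zip(f0, full)) < 1e-9)  # exact inverse -> True
```

Truncating the coefficient list discards fine detail while preserving the overall rise-fall shape, which is what makes such transforms attractive for modeling f0 at the syllable, word, or phrase level.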
APA, Harvard, Vancouver, ISO, and other styles
39

Segeljakt, Klas. "A Scala DSL for Rust code generation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235358.

Full text
Abstract:
Continuous Deep Analytics (CDA) is a new form of analytics with performance requirements exceeding what the current generation of distributed systems can offer. This thesis is part of a five-year project in collaboration between RISE SICS and KTH to develop a next-generation distributed system capable of CDA. The two issues which the system aims to solve are computation sharing and hardware acceleration. The former refers to how BigData and machine learning libraries such as TensorFlow, Pandas and Numpy must collaborate in the most efficient way possible. Hardware acceleration relates to how the back-ends of current-generation general-purpose data processing systems such as Spark and Flink are bottlenecked by the Java Virtual Machine (JVM). As the JVM abstracts over the underlying hardware, its applications become portable but also forfeit the opportunity to fully exploit the available hardware resources. This thesis aims to explore the area of Domain Specific Languages (DSLs) and code generation as a solution to hardware acceleration. The idea is to translate incoming queries to the system into low-level code, tailored to each worker machine's specific hardware. To this end, two Scala DSLs for generating Rust code have been developed for the translation step. Rust is a new, low-level programming language with a unique take on memory management which makes it as safe as Java and as fast as C. Scala is a language well suited to the development of DSLs due to its flexible syntax and semantics. The first DSL is implemented as a string interpolator. The interpolator splices strings of Rust code together, at compile time or runtime, and passes the result to an external process for static checking. The second DSL instead provides an API for constructing an abstract syntax tree, which after construction can be traversed and printed as Rust source code. The API combines three concepts: heterogeneous lists, fluent interfaces, and algebraic data types. 
These allow the user to express advanced Rust syntax such as polymorphic structs, functions, and traits, without sacrificing type safety.
Continuous Deep Analytics (CDA) is a new form of analytics with performance requirements exceeding what the current generation of distributed systems can offer. This thesis is part of a project between RISE SICS and KTH to develop a next-generation distributed system capable of CDA. The system aims to solve two problems: hardware acceleration and computation sharing. The latter concerns how BigData and machine learning systems such as TensorFlow, Pandas and Numpy must be able to collaborate as efficiently as possible. Hardware acceleration relates to how the back-ends of today's distributed computing systems, such as Spark and Flink, are bottlenecked by the Java Virtual Machine. The JVM abstracts over the underlying hardware; as a result its applications become portable, but also give up the opportunity to fully exploit the available hardware resources. This thesis aims to explore the area of Domain Specific Languages (DSLs) and code generation as a solution to hardware acceleration. The idea is to translate incoming queries into low-level code tailored to each worker machine's specific hardware. To this end, two Scala DSLs have been developed for generating Rust code. Rust is a new low-level language with a unique take on memory management that makes it both as safe as Java and as fast as C. Scala is a language well suited to the development of DSLs because of its flexible syntax and semantics. The first DSL is implemented as a string interpolator. The interpolator splices strings of Rust code together, at compile time or runtime, and passes the result to an external process for static checking. The second DSL instead consists of an API for constructing an abstract syntax tree, which can afterwards be traversed and printed as Rust code. The API combines three concepts: heterogeneous lists, fluent interfaces, and algebraic data types. 
These allow the user to express advanced Rust syntax, such as polymorphic structs, functions, and traits, without sacrificing type safety.
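The string-splicing idea behind the first DSL can be illustrated with a minimal Python analogue (the thesis's actual interpolator is written in Scala; the function name and template here are invented for illustration): fragments of Rust source are spliced into a template, and the resulting string would then be handed to an external checker.

```python
# Toy analogue of string-interpolation code generation: build Rust source by
# splicing fragments into a template. All names here are invented examples.
def rust_fn(name, params, ret, body):
    """Generate the source text of a Rust function from its parts."""
    plist = ", ".join(f"{p}: {t}" for p, t in params.items())
    return f"fn {name}({plist}) -> {ret} {{\n    {body}\n}}"

src = rust_fn("add", {"a": "i32", "b": "i32"}, "i32", "a + b")
print(src)
```

A real interpolator adds what this sketch lacks: static checking of the spliced result, which is why the thesis pipes the generated string to an external Rust process rather than trusting the template.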
APA, Harvard, Vancouver, ISO, and other styles
40

Zimmer, Matthieu. "Apprentissage par renforcement développemental." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0008/document.

Full text
Abstract:
Reinforcement learning allows an agent to learn a behavior that has never been previously defined by humans. The agent discovers the environment and the different consequences of its actions through interactions with it: it learns from its own experience, without pre-established knowledge of the goals or the effects of its actions. This thesis examines how deep learning can help reinforcement learning handle continuous spaces and environments with many degrees of freedom, with a view to solving problems closer to reality. Indeed, neural networks scale well and have broad representational power. They make it possible to approximate functions over a continuous space and fit a developmental approach requiring little a priori knowledge of the domain. We investigate how to reduce the experience the agent needs to reach acceptable behavior. To this end, we proposed the Neural Fitted Actor-Critic framework, which defines several data-efficient actor-critic algorithms. We examine how the agent can fully exploit the transitions generated by previous behaviors by integrating off-policy data into the proposed framework. Finally, we study how the agent can learn faster by taking advantage of the development of its body, in particular by progressively increasing the dimensionality of its sensorimotor space
Reinforcement learning allows an agent to learn a behavior that has never been previously defined by humans. The agent discovers the environment and the different consequences of its actions through its interactions: it learns from its own experience, without having pre-established knowledge of the goals or effects of its actions. This thesis tackles how deep learning can help reinforcement learning to handle continuous spaces and environments with many degrees of freedom in order to solve problems closer to reality. Indeed, neural networks have good scalability and representational power. They make it possible to approximate functions on continuous spaces and allow a developmental approach, because they require little a priori knowledge of the domain. We seek to reduce the amount of interaction the agent needs to achieve acceptable behavior. To do so, we proposed the Neural Fitted Actor-Critic framework, which defines several data-efficient actor-critic algorithms. We examine how the agent can fully exploit the transitions generated by previous behaviors by integrating off-policy data into the proposed framework. Finally, we study how the agent can learn faster by taking advantage of the development of its body, in particular by proceeding with a gradual increase in the dimensionality of its sensorimotor space
APA, Harvard, Vancouver, ISO, and other styles
41

Liu, Li. "Modélisation pour la reconnaissance continue de la langue française parlée complétée à l'aide de méthodes avancées d'apprentissage automatique." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAT057/document.

Full text
Abstract:
This doctoral thesis deals with the automatic recognition of French Cued Speech (Langage français Parlé Complété, LPC), the French version of Cued Speech (CS), from video images and without marking the information prior to video recording. To achieve this objective, we extract high-level features from three information flows (lips, hand positions and hand shapes) and fuse these three modalities in an optimal approach for a robust LPC recognition system. In this work, we introduced a deep learning method with convolutional neural networks (CNNs) to extract hand and lip shapes from raw images. An adaptive background mixture model (ABMM) is proposed to obtain the hand position. In addition, two new methods, named Modified Constrained Local Neural Fields (Modified CLNF) and the Adaptive Ellipse Model, were proposed to extract the parameters of the inner lip contour (lip stretching and opening). The former builds on an advanced machine learning method (CLNF) from computer vision. All these methods constitute significant contributions to LPC feature extraction. Furthermore, because the three LPC feature flows are asynchronous, their fusion is an important issue in this thesis. To resolve it, we proposed several approaches, including data-level and model-level fusion strategies with context-dependent HMM modeling. For decoding, we proposed three CNN-HMM architectures. All these architectures are evaluated on a corpus of sentences coded in LPC in continuous speech without any artifice, and the CS recognition performance confirms the effectiveness of our proposed methods. The result is comparable to the state of the art, which used databases where the relevant information was marked beforehand. 
At the same time, we carried out a specific study of the temporal organization of hand movements, revealing that the hand is ahead relative to its place in the sentence. In summary, this doctoral work brings advanced machine learning methods from computer vision, and deep learning methodologies, to CS recognition, an important step towards the general problem of automatically converting CS into audio speech
This PhD thesis deals with automatic continuous Cued Speech (CS) recognition based on the images of subjects, without marking any artificial landmark. In order to realize this objective, we extract high-level features of three information flows (lips, hand positions and shapes), and find an optimal approach to merging them for a robust CS recognition system. We first introduce a novel and powerful deep learning method based on Convolutional Neural Networks (CNNs) for extracting the hand shape/lips features from raw images. The adaptive background mixture models (ABMMs) are also applied to obtain the hand position features for the first time. Meanwhile, based on an advanced machine learning method, Modified Constrained Local Neural Fields (CLNF), we propose the Modified CLNF to extract the inner lips parameters (A and B), as well as another method named the adaptive ellipse model. All these methods make significant contributions to feature extraction in CS. Then, due to the asynchrony problem of the three feature flows (i.e., lips, hand shape and hand position) in CS, their fusion is a challenging issue. In order to resolve it, we propose several approaches including feature-level and model-level fusion strategies combined with the context-dependent HMM. To achieve CS recognition, we propose three tandem CNNs-HMM architectures with different fusion types. All these architectures are evaluated on the corpus without any artifice, and the CS recognition performance confirms the efficiency of our proposed methods. The result is comparable with the state of the art using the corpus with artifices. In parallel, we investigate a specific study about the temporal organization of hand movements in CS, especially about its temporal segmentation, and the evaluations confirm the superior performance of our methods. 
In summary, this PhD thesis brings advanced machine learning methods from computer vision, and deep learning methodologies, to CS recognition work, which makes a significant step towards the general automatic conversion problem of CS to sound. The future work will mainly focus on an end-to-end CNN-RNN system which incorporates a language model, and an attention mechanism for multi-modal fusion
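The feature-level fusion strategy mentioned above can be sketched as per-frame concatenation of the three CS streams; the vectors below are invented placeholders. Note the caveat the abstract raises: this simple scheme assumes the streams are frame-aligned, which the asynchrony of lips and hand in CS does not guarantee.

```python
# Sketch of feature-level fusion for the three CS streams (lips, hand shape,
# hand position): concatenate per-frame feature vectors into one input for a
# downstream context-dependent model. The vectors are invented placeholders.
def fuse_features(lips, hand_shape, hand_pos):
    """Feature-level fusion: one concatenated vector per frame."""
    assert len(lips) == len(hand_shape) == len(hand_pos), "streams must be frame-aligned"
    return [l + s + p for l, s, p in zip(lips, hand_shape, hand_pos)]

frames = fuse_features(
    lips=[[0.1, 0.2], [0.1, 0.3]],   # e.g. lip stretching/opening parameters
    hand_shape=[[1.0], [0.9]],
    hand_pos=[[0.5, 0.5], [0.6, 0.5]],
)
print(len(frames), len(frames[0]))  # -> 2 5
```

Model-level fusion, by contrast, would keep a separate model per stream and combine their outputs, sidestepping the frame-alignment assumption baked into this sketch.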
APA, Harvard, Vancouver, ISO, and other styles
42

Zimmer, Matthieu. "Apprentissage par renforcement développemental." Electronic Thesis or Diss., Université de Lorraine, 2018. http://www.theses.fr/2018LORR0008.

Full text
Abstract:
Reinforcement learning allows an agent to learn a behavior that has never been previously defined by humans. The agent discovers the environment and the different consequences of its actions through interactions with it: it learns from its own experience, without pre-established knowledge of the goals or the effects of its actions. This thesis examines how deep learning can help reinforcement learning handle continuous spaces and environments with many degrees of freedom, with a view to solving problems closer to reality. Indeed, neural networks scale well and have broad representational power. They make it possible to approximate functions over a continuous space and fit a developmental approach requiring little a priori knowledge of the domain. We investigate how to reduce the experience the agent needs to reach acceptable behavior. To this end, we proposed the Neural Fitted Actor-Critic framework, which defines several data-efficient actor-critic algorithms. We examine how the agent can fully exploit the transitions generated by previous behaviors by integrating off-policy data into the proposed framework. Finally, we study how the agent can learn faster by taking advantage of the development of its body, in particular by progressively increasing the dimensionality of its sensorimotor space
Reinforcement learning allows an agent to learn a behavior that has never been previously defined by humans. The agent discovers the environment and the consequences of its actions through interaction: it learns from its own experience, without pre-established knowledge of the goals or effects of its actions. This thesis addresses how deep learning can help reinforcement learning handle continuous spaces and environments with many degrees of freedom, in order to solve problems closer to reality. Indeed, neural networks scale well and have strong representational power. They make it possible to approximate functions over continuous spaces and support a developmental approach, because they require little a priori knowledge of the domain. We seek to reduce the amount of interaction the agent needs to achieve acceptable behavior. To do so, we propose the Neural Fitted Actor-Critic framework, which defines several data-efficient actor-critic algorithms. We examine how the agent can fully exploit the transitions generated by previous behaviors by integrating off-policy data into the proposed framework. Finally, we study how the agent can learn faster by taking advantage of the development of its body, in particular by gradually increasing the dimensionality of its sensorimotor space.
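The data-efficient actor-critic idea summarized in this abstract can be illustrated with a minimal, self-contained sketch. The toy task (a one-step continuous-action problem), the Gaussian policy, the step sizes, and all variable names below are illustrative assumptions; this is not the thesis's Neural Fitted Actor-Critic algorithm.

```python
import random

random.seed(0)

# Toy continuous-action task: reward = -(action - 3)^2, optimum at a = 3.
# Actor: Gaussian policy N(mu, sigma^2); critic: a scalar baseline V.
# This sketches the actor-critic principle only, on a made-up problem.

mu, sigma, V = 0.0, 1.0, 0.0
alpha_critic = 0.1

for step in range(2000):
    a = random.gauss(mu, sigma)            # sample an action from the policy
    r = -(a - 3.0) ** 2                    # environment returns a reward
    td_error = r - V                       # advantage w.r.t. the baseline
    V += alpha_critic * td_error           # critic update
    alpha_actor = 0.05 / (1.0 + step / 200.0)   # decaying actor step size
    # Policy-gradient step for the Gaussian mean: d log pi / d mu = (a - mu) / sigma^2
    mu += alpha_actor * td_error * (a - mu) / sigma ** 2

print(round(mu, 2))   # mu should end up near the optimal action 3.0
```

Replacing the scalar `mu` and `V` with neural networks over a state input gives the deep actor-critic setting the abstract discusses.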
APA, Harvard, Vancouver, ISO, and other styles
43

El, Mekdad Fatima. "La rhizodéposition dans les horizons profonds du sol peut-elle permettre de stocker du carbone ?" Electronic Thesis or Diss., Sorbonne université, 2023. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2023SORUS086.pdf.

Full text
Abstract:
L'augmentation des émissions anthropiques de CO2 dans l'atmosphère accélère le changement climatique. Les sols contiennent trois fois plus de carbone que l'atmosphère et constituent donc un réservoir d'importance cruciale pour la régulation du climat. Il existe actuellement une réflexion pour stocker le carbone dans les couches profondes du sol, notamment via la rhizodéposition des plantes. Nous avons donc mené une expérience au CEREEP-Ecotron Ile-de-France pour quantifier les apports, et la persistance, du carbone rhizodéposé par les plantes à l'aide d'un marquage continu au 13C-CO2. Pour ce faire, deux variétés de blé aux systèmes racinaires contrastés ont été plantés dans des mésocosmes et cultivés pendant une saison de croissance complète et sous atmosphère enrichie en 13C. Nos objectifs étaient de quantifier le flux de carbone de l'atmosphère vers le sol et de mesurer sa persistance à court terme. Nos résultats suggèrent que la variété ancienne Plantahof rhizodépose une quantité plus élevée de carbone par rapport à la variété récente Nara notamment en profondeur. Cependant, le carbone apporté au sol par ces deux variétés a conduit à des pertes par minéralisation et des priming effects similaires. Ainsi, le bilan total du carbone était plus affecté par la profondeur du sol que les variétés utilisées dans l'étude. Par ailleurs, j'ai étudié, à partir d'une analyse bibliographique, la distribution selon la profondeur des activités enzymatiques hydrolases et oxydoréductases impliquées dans les cycles du carbone, de l'azote et du phosphore en fonction de la profondeur du sol. Les résultats de cette analyse ont montré que les profils d'activité dépendaient très fortement de la façon dont ces activités étaient exprimées, avec des activités qui diminuent avec la profondeur lorsqu'exprimées par masse de sol alors qu'elles sont plutôt stables, voire augmentent, lorsque exprimé par rapport à la biomasse microbienne. 
Pris dans leur ensemble, ces résultats montrent que la prise en compte du fonctionnement sur l'intégralité de la colonne de sol est indispensable pour comprendre la dynamique du carbone dans les écosystèmes terrestres
Increasing anthropogenic emissions of CO2 to the atmosphere are accelerating climate change. These emissions could be partially compensated by carbon fixation in the oceans, vegetation and soils. In particular, soils contain three times more carbon than the atmosphere, and therefore play a crucial role in climate regulation. It has been suggested that storing carbon in the deep layers of the soil, via plant rhizodeposition, may be a useful avenue to pursue in order to mitigate climate change. We therefore conducted an experiment at CEREEP-Ecotron Ile-de-France to quantify the input and persistence of carbon rhizodeposited by plants, using continuous 13C-CO2 labelling. Two wheat varieties with contrasting root systems were planted in mesocosms and grown for a full growing season in a 13C-enriched atmosphere. Our objectives were to quantify the rooting-dependent flux of carbon from the atmosphere to the soil by isotopic 13C tracing, and to measure its short-term persistence. The results showed that the old variety Plantahof rhizodeposited a larger amount of carbon than the more recent variety Nara, especially at depth. However, the carbon supplied to the soil by these two varieties led to similar amounts of organic C mineralization and similar priming effects. Thus, the total carbon balance was related more to the effect of soil depth than to the varieties used in the study. Furthermore, I carried out a meta-analysis of the distribution of enzymatic activities as a function of soil depth for hydrolases and oxidoreductases involved in the carbon, nitrogen and phosphorus cycles. The results of this analysis showed that the activity profiles depended very strongly on the way these activities were expressed: activities mostly decreased with depth when expressed per unit soil mass, but remained rather stable, or even increased with depth, when expressed per unit microbial biomass.
Taken together, these results show that considering the functioning of the entire soil column is essential to understanding the dynamics of carbon in terrestrial ecosystems.
APA, Harvard, Vancouver, ISO, and other styles
44

Poiron-Guidoni, Nicolas. "Apports des méthodes d’optimisation et du calcul haute performance à la théorie de la modélisation et de la simulation : application à la gestion des ressources halieutiques." Thesis, Corte, 2021. http://www.theses.fr/2021CORT0013.

Full text
Abstract:
Le projet informatique (SiSU) de l’Unité Mixe de Recherche CNRS Science pour l’Environnement conçoit des méthodes d’aide à la décision pour aider à une meilleure gestion des systèmes complexes environnementaux. Ces travaux de thèse s’inscrivent dans ce contexte. Ils ont pour objectif d'étudier les apports de plusieurs types de méthodes informatiques afin d'améliorer nos connaissances sur les systèmes complexes et ainsi de fournir une aide à leur gestion en situation de fortes incertitudes. En effet, les systèmes complexes environnementaux ne peuvent pas toujours être connus et modélisés avec précision. C’est par exemple le cas en biologie halieutique où des méthodes de gestion doivent être proposées malgré un manque de connaissances sur le système observé, dans notre cas d’étude : la pêche côtière Corse. Nos premiers travaux ont porté sur la calibration de modèles, c’est-à-dire le recherche de valeurs de paramètres permettant à nos modèles de représenter au mieux la dynamique du système. Ils ont montré les limites des approches habituelles et la nécessité d’utiliser des approches probabilistes basées sur de grandes quantités de simulations. Elles apportent une aide précieuse quant à l’acquisition de connaissances, notamment en délimitant des ensembles de solutions. Ceux-ci peuvent alors être utilisés dans des méthodes d’optimisation robuste, voire d’optimisation robuste ajustable. Ces approches permettent non seulement de prendre en compte les incertitudes, mais également de quantifier la réduction d’incertitude que de nouvelles années de données pourront apporter, afin de proposer des stratégies de plus en plus précises à long terme. L’optimisation est donc utilisable efficacement à l’échelle des décideurs. Cependant, la petite pêche côtière Corse, est un système sur lequel agissent un grand nombre d’acteurs avec des comportements différents et difficilement prévisibles et contrôlables. 
L’optimisation ne semble pas adaptée à l’étude de cette échelle de par la quantité de paramètres et le nombre infini de transitions stochastiques engendrées. Pour cela, des méthodes basées sur l’apprentissage profond par renforcement ont été proposées. Ces approches nous ont permis dans un premier temps de proposer un modèle gérant à la fois décideurs et pêcheurs, les uns cherchant à réduire l’impact écologique, les autres à maximiser leurs gains. À partir de cela, nous avons pu montrer que de faibles connaissances suffisent pour la maximisation des gains des pêcheurs. De plus, cette approche, couplée à de l’optimisation, a permis d’obtenir des décisions d’instauration de quotas efficaces. Enfin, ce système nous a permis d’étudier l’impact de certains comportements individuels de maximisation des gains au détriment du respect des recommandations des décideurs. Il est alors apparu que des politiques de gestion efficaces et adaptées peuvent permettre de pallier l’impact écologique d’une quantité non négligeable de ces comportements. Ainsi, nous avons pu contribuer de manière théorique à élargir les domaines d’application de la théorie de la modélisation et de la simulation, proposer un ensemble d’outils d’optimisation et d’apprentissage automatique à la gestion de systèmes dynamiques partiellement observables, mais également applicative pour la problématique de la gestion de la pêche en Corse
The computer science project (SiSU) of the CNRS Science for the Environment Joint Research Unit designs decision-support methods to help manage complex environmental systems better. This thesis is part of that context. It studies the contributions of several types of computational methods to improving our knowledge of complex systems, and thus to supporting their management under high uncertainty. Indeed, complex environmental systems cannot always be known and modeled with precision. This is, for example, the case in fisheries biology, where management methods must be proposed despite a lack of knowledge about the observed system; in our case study, the Corsican coastal fishery. Our first work focused on model calibration, i.e. the search for parameter values allowing our models to best represent the dynamics of the system. It showed the limits of the usual approaches and the need to use probabilistic approaches based on large numbers of simulations. These provide valuable help for knowledge acquisition, in particular by delimiting sets of solutions. These sets can then be used in robust optimization methods, or even in adjustable robust optimization. Such approaches make it possible not only to take uncertainty into account, but also to quantify the reduction in uncertainty that new years of data can bring, in order to propose increasingly precise long-term strategies. Optimization can therefore be used effectively at the decision-maker level. However, the small-scale coastal fishery in Corsica is a system in which a large number of actors act with different behaviors that are difficult to predict and control. Optimization does not seem suited to studying this scale, given the number of parameters and the infinite number of stochastic transitions generated. For this, methods based on deep reinforcement learning were proposed.
These approaches first allowed us to propose a model that manages both decision-makers and fishermen, the former seeking to reduce the ecological impact, the latter to maximize their gains. From this, we were able to show that little knowledge is sufficient to maximize the fishermen's gains. Moreover, this approach, coupled with optimization, yielded effective quota-setting decisions. Finally, this system allowed us to study the impact of certain individual behaviors that maximize gains to the detriment of the decision-makers' recommendations. It appeared that effective and well-adapted management policies can mitigate the ecological impact of a significant share of these behaviors. Thus, we contributed theoretically by broadening the application domains of the theory of modeling and simulation, proposed a set of optimization and machine-learning tools for the management of partially observable dynamic systems, and contributed practically to the problem of fisheries management in Corsica.
APA, Harvard, Vancouver, ISO, and other styles
45

Monteil, Hélène. "Development and implementation of the Bio-electro-Fenton process : application to the removal of pharmaceuticals from water A review on efficiency and cost effectiveness of electro- and bio-electro-Fenton processes: application to the treatment of pharmaceutical pollutants in water. Efficient removal of diuretic hydrochlorothiazide from water by electro-Fenton process using BDD anode: a kinetic and degradation pathway study Electro-Fenton treatment of the widely used analgesic tramadol using BDD anode: a kinetic, energetic and degradation pathway study Efficiency of a new pilot scale continuous reactor for wastewater treatment by electrochemical advanced oxidation processes: influence of operating conditions and focus on hydrodynamics Electrochemical advanced oxidation processes combined with a biological treatment for wastewater treatment: a deep understanding on the influence of operating conditions and global efficiency." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC2045.

Full text
Abstract:
Les procédés électrochimiques d'oxydation avancée constituent une technologie efficace pour traiter les produits pharmaceutiques car ils permettent la formation d'oxydants puissants tels que les radicaux hydroxyles capables d'éliminer presque tout type de contaminants organiques grâce à leur très haut pouvoir oxydant. Parmi eux, l’électro-Fenton et l’oxydation anodique sont des méthodes respectueuses de l'environnement car ils n'utilisent aucun réactif chimique (oxydation anodique) ou uniquement l'oxygène de l'air et des ions fer en tant que catalyseur (électro-Fenton).Dans cette thèse, quatre produits pharmaceutiques appartenant à des familles différentes ont été sélectionnés en fonction de leur toxicité et de leur présence dans les eaux de l'environnement. Leur élimination de l'eau a été effectuée par électro-Fenton. Les objectifs de ce travail étaient de déterminer (i) les meilleures conditions opératoires à l’échelle du laboratoire (courant et concentration du catalyseur), (ii) la cinétique de dégradation et de minéralisation et enfin (iii) de proposer une voie de minéralisation basée sur des intermédiaires aromatiques, des acides carboxyliques et des ions libérés dans la solution.Comme ces traitements ont été appliqués avec succès, un réacteur pilote composé alternativement d'anodes en BDD et de cathodes en feutre de carbone, doté d’un système d'aération et fonctionnant en mode continu a été construit pour évaluer la faisabilité d’un changement d’échelle et se diriger vers une pré-industrialisation du procédé. Différentes configurations d'électrodes ont été testées. Le débit et le courant se sont avérés être plus influents sur le taux de minéralisation et sur la consommation d'énergie, respectivement. Pour mieux comprendre le rôle du débit et des configurations, une étude hydrodynamique a été réalisée. 
Le modèle hydrodynamique a été associé à un modèle cinétique de minéralisation afin d'obtenir un modèle permettant de prédire le pourcentage de minéralisation à différentes positions à l'intérieur du réacteur en régime permanent. Ainsi, ce modèle peut aider à optimiser les conditions opératoires et à dimensionner les futurs réacteurs en fonction de l’objectif de minéralisation du traitement (taux de minéralisation élevé, traitement combiné, flux élevé,…).Afin de réduire les coûts opératoires, la combinaison d’un procédé électrochimique et d’un traitement biologique a ensuite été étudiée. Afin d’obtenir un traitement combiné efficace, il a été constaté que le traitement électrochimique devait (i) dégrader l'hydrochlorothiazide (ii) réduire de manière significative la concentration de ses intermédiaires aromatiques car ils inhibent de manière significative l'activité bactérienne, (iii) favoriser la formation de molécules biodégradables telles que les acides carboxyliques. La biodégradation de quatre acides carboxyliques formés lors du traitement par électro-Fenton de l'hydrochlorothiazide a également été étudiée. Il a été démontré qu'ils étaient dégradés de manière séquentielle avec différentes phases de latence et cinétiques de dégradation. Ainsi, pour les minéraliser, un réacteur de type «piston» est recommandé. La combinaison de traitement a ensuite été appliquée à un traitement électrochimique effectué à faible courant avec une anode en BDD et une anode en Platine. Un degré de minéralisation de 38 et 50% a été obtenu par le traitement biologique permettant d'atteindre un taux de minéralisation global de 66 et 85% avec les anodes en BDD et Platine respectivement. Ainsi, cette combinaison de traitement a été un succès, un changement d’échelle du procédé peut alors être envisagé
Electrochemical advanced oxidation processes (EAOPs) constitute an efficient technology to treat pharmaceuticals, as they generate strong oxidants such as hydroxyl radicals, able to remove nearly any type of organic contaminant thanks to their very high oxidation power. Among them, the electro-Fenton and anodic oxidation processes are environmentally friendly methods, as they use no chemical reagent (anodic oxidation) or only oxygen from air and iron ions as a catalyst (electro-Fenton). In this thesis, four pharmaceuticals from different families and structures were selected based on their toxicity and their occurrence in environmental waters, and their removal from water was performed by EAOPs. The objectives of this work were to determine the best operating conditions at lab scale (current and catalyst concentration), investigate the kinetics of degradation and mineralization, and finally propose a mineralization pathway based on the aromatic intermediates, carboxylic acids and ions released to the solution. As these treatments were successfully applied, a lab-scale pilot reactor, composed alternately of BDD anodes and carbon-felt cathodes with a bottom aeration system and working in continuous mode, was built to scale up these processes towards pre-industrialization. Different electrode configurations were tested. The flow rate and the current were found to be the most influential on the mineralization rate and on the energy consumption, respectively. To better understand the role of the flow rate and the configurations, a hydrodynamic study was performed. The hydrodynamic results were combined with a kinetic model of mineralization to obtain a model predicting the percentage of mineralization at different positions inside the reactor at steady state.
Thus, this model can help to optimize the operating conditions and to size future reactors depending on the mineralization objective of the treatment (high mineralization rate, combined treatment, high flow, ...). To reduce operating cost, the combination of an electrochemical process and a biological treatment was then investigated. In this frame, it was found that the electrochemical treatment should (i) degrade the hydrochlorothiazide, (ii) significantly reduce the concentration of its aromatic intermediates, as they were shown to significantly inhibit bacterial activity, and (iii) promote the formation of biodegradable molecules such as carboxylic acids. The biodegradation of four carboxylic acids formed during the lab-scale electro-Fenton treatment of hydrochlorothiazide was also studied. It was demonstrated that they were degraded sequentially, with different lag phases and degradation kinetics. Thus, to mineralize them, a plug-flow type reactor is recommended. The combined treatment was then applied to an electrochemical treatment performed at low current with a BDD anode and a platinum anode. Mineralization degrees of 38% and 50% were obtained by the biological treatment, enabling overall mineralization rates of 66% and 85% to be reached with the BDD and platinum anodes, respectively. Thus, this combined treatment was successful and opens the way for the scale-up of these processes.
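The kind of steady-state prediction this abstract describes (percent mineralization as a function of position in the reactor) can be sketched with a generic first-order decay model in an ideal plug-flow reactor. The rate constant, flow velocity, and function name below are assumptions for illustration, not the thesis's fitted hydrodynamic/kinetic model.

```python
import math

# Generic first-order mineralization in an ideal plug-flow reactor:
# percent mineralized at axial position z is 100 * (1 - exp(-k * z / u)),
# where u is the axial flow velocity. k, u and the sampling position
# below are illustrative assumptions.

def mineralization_percent(k_per_min: float, position_m: float,
                           velocity_m_per_min: float) -> float:
    """Percent of organic carbon mineralized at a given axial position."""
    residence_time_min = position_m / velocity_m_per_min
    return 100.0 * (1.0 - math.exp(-k_per_min * residence_time_min))

# e.g. k = 0.05 min^-1, sampled 1.2 m downstream, flow velocity 0.1 m/min:
print(round(mineralization_percent(0.05, 1.2, 0.1), 1))  # 45.1
```

Such a model makes explicit why the abstract finds flow rate so influential: halving the velocity doubles the residence time and hence the extent of mineralization at any position.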
APA, Harvard, Vancouver, ISO, and other styles
46

Wu, T. J., and 吳賢杰. "Behavior of Two-Span Continuous Deep Beam." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/58466971469446533738.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Ashour, Ashraf F., C. T. Morley, and N. K. Subedi. "Reinforced concrete two-span continuous deep beams." 2002. http://hdl.handle.net/10454/867.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Yang, Keun-Hyeok, and Ashraf F. Ashour. "Load capacity of reinforced concrete continuous deep beams." 2008. http://hdl.handle.net/10454/6245.

Full text
Abstract:
Most codes of practice, such as EC2 and ACI 318-05, recommend the use of strut-and-tie models for the design of reinforced concrete deep beams. However, studies on the validity of strut-and-tie models for continuous deep beams are rare. This paper evaluates the strut-and-tie model specified by ACI 318-05 and a mechanism analysis based on plasticity theory in predicting the load capacity of 75 reinforced concrete continuous deep beams tested in the literature. The influence of the main parameters, namely compressive strength of concrete, shear span-to-overall depth ratio, main longitudinal bottom reinforcement, and shear reinforcement, on the load capacity is also investigated using both methods and the experimental results. Experimental results were closer to the predictions obtained from the mechanism analysis than to those of the strut-and-tie model. The strut-and-tie model greatly overestimated the load capacity of continuous deep beams without shear reinforcement.
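For context, the ACI 318-05 strut-and-tie check evaluated in this paper limits the force in each concrete strut to an effective concrete strength times the strut cross-section, with f_ce = 0.85 β_s f'c (ACI 318-05, Appendix A). A minimal numeric sketch follows; the dimensions and material values are hypothetical, and no strength-reduction factor is applied.

```python
# Minimal sketch of the ACI 318-05 (Appendix A) nominal strut capacity:
# F_ns = f_ce * A_cs, with f_ce = 0.85 * beta_s * f'c.
# All numeric inputs below are made up for illustration.

def strut_capacity_kn(fc_mpa: float, beta_s: float,
                      width_mm: float, thickness_mm: float) -> float:
    """Nominal capacity (kN) of a concrete strut, no phi factor applied."""
    f_ce = 0.85 * beta_s * fc_mpa        # effective strength, MPa (N/mm^2)
    area_mm2 = width_mm * thickness_mm   # strut cross-sectional area
    return f_ce * area_mm2 / 1000.0      # N -> kN

# Bottle-shaped strut with crack-control reinforcement (beta_s = 0.75):
print(round(strut_capacity_kn(30.0, 0.75, 200.0, 150.0), 2))  # 573.75
```

The factor β_s is what distinguishes strut types (e.g. prismatic versus bottle-shaped); the paper's finding is that this kind of check can overestimate capacity for continuous deep beams without shear reinforcement.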
APA, Harvard, Vancouver, ISO, and other styles
49

Yang, Keun-Hyeok, and Ashraf F. Ashour. "Aggregate interlock in lightweight concrete continuous deep beams." 2010. http://hdl.handle.net/10454/7562.

Full text
Abstract:
There are very few, if any, available experimental investigations of aggregate interlock capacity along diagonal cracks in lightweight concrete deep beams. As a result, the shear design provisions of ACI 318-08 and EC 2 for lightweight concrete continuous deep beams, including the modification factor, are generally developed and validated using normal-weight, simply supported deep beam specimens. This paper presents tests of 12 continuous beams made of all-lightweight, sand-lightweight and normal-weight concrete having maximum aggregate sizes of 4, 8, 13 and 19 mm. The load capacities of the beams tested are compared with the predictions of the strut-and-tie models recommended in the ACI 318-08 and EC 2 provisions, including the modification factor for lightweight concrete. The beam load capacity increased with maximum aggregate size, though the aggregate interlock contribution to the load capacity of lightweight concrete deep beams was less than that of normal-weight concrete deep beams. It was also shown that the lightweight concrete modification factor in EC 2 is generally unconservative, while that in ACI 318-08 is conservative for all-lightweight concrete but becomes unconservative for sand-lightweight concrete with a maximum aggregate size above 13 mm. The conservatism of the strut-and-tie models specified in ACI 318-08 and EC 2 decreased with decreasing maximum aggregate size, and was less in lightweight concrete deep beams than in normal-weight concrete deep beams.
APA, Harvard, Vancouver, ISO, and other styles
50

Lin, Yutang, and 林語堂. "Evaluation of pain/nociception of handicapped under deep sedation/general anesthesia by hemodynamic response and electroencephalogram." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/76121409642181097791.

Full text
Abstract:
Master's thesis
National Taiwan University
Department of Dentistry
85
ABSTRACT: Pain was given by God as a warning to protect us from harm; nevertheless, it is a most unpleasant experience. Humans have pursued analgesia since the beginning of history, with progressive success. Some handicapped patients cannot accept dental treatment because of fear of pain, being unable to communicate with people or to describe their pain. It is therefore necessary for such patients to receive treatment under deep sedation or general anesthesia. This differs from general anesthesia in the operating room and involves much restraint; the ultimate goal is thus complete anesthesia with a minimum of anesthetic agents. In this study, we used hemodynamic responses (changes in blood pressure and heart rate) and the electroencephalogram to assess the pain (or nociception) of handicapped patients under deep sedation/general anesthesia. The results revealed significant changes under nociceptive stimulation (p<0.05). In addition, the analgesia of ketamine decreased significantly 16 minutes after an intramuscular dose of 5 mg/kg (p<0.05). Besides, there was no statistically significant difference in the nociceptive reflex across diagnosis, age, or sex.
APA, Harvard, Vancouver, ISO, and other styles