Academic literature on the topic 'Multi-user interpreter'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Multi-user interpreter.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Multi-user interpreter"

1

Costa, Beverley, and Stephen Briggs. "Service-users’ experiences of interpreters in psychological therapy: a pilot study." International Journal of Migration, Health and Social Care 10, no. 4 (December 9, 2014): 231–44. http://dx.doi.org/10.1108/ijmhsc-12-2013-0044.

Abstract:
Purpose – Working across languages is playing an increasingly important role in the delivery of mental health services, notably through psychotherapy and psychological therapies. Growing awareness of the complex processes that ensue in working across languages, including the presence and role of an interpreter, is generating new conceptualisations of practice, but there is a need now to evidence how these impact on service users. The paper aims to discuss these issues. Design/methodology/approach – This paper discusses the model for working with interpretation developed by Mothertongue multi-ethnic counselling service, which conceptualises the therapeutic process as working within triangular relationships consisting of service user, therapist and interpreter. Second, the paper discusses the qualitative, practice-near methods applied in, and findings from a pilot study to evaluate the interpreter's role. Findings – Three patterns of response to interpreters were identified: negative impacts on the therapy, the interpreter as conduit for therapy and the therapist and interpreter jointly demonstrating a shared enterprise. It is concluded that the method and findings of the pilot justify a larger study that will further evaluate the experiences of service users and continue to develop and test conceptualisations for best practice. Originality/value – Working across languages is now recognised as an increasingly important aspect of therapy in contexts where migration has created new demographics. This paper contributes to the discussion of working therapeutically with people with mental health difficulties across languages. Its originality lies, first, in the discussion of a new clinical approach to working with interpreters, and second in the methods used to access the views of service users about their experiences of interpreters.
2

BIZOUARD, M. A., N. ARNAUD, G. BARRAND, M. BARSUGLIA, F. CAVALIER, M. DAVIER, P. HELLO, P. HEUSSE, and T. PRADIER. "DANTE: A PROPOSAL FOR AN INTERACTIVE DATA ANALYSIS TOOL FOR THE VIRGO COLLABORATION." International Journal of Modern Physics D 09, no. 03 (June 2000): 287–91. http://dx.doi.org/10.1142/s0218271800000293.

Abstract:
Dante is an interactive data analysis environment proposed for the VIRGO collaboration. It is based on Open-Scientist, an architecture written in C++ and developed at LAL mainly for the High Energy Physics community. Its modularity and openness guarantee an adaptability to the fast turnover of software technologies. The architecture presents a hierarchical multi-package decomposition in which basic "user" packages (histogramming, fitting, algorithm toolboxes) are independent of facility packages (storage, visualization, command interpreter). The main functionalities of a preliminary implementation of Dante are reviewed.
3

LuperFoy, Susann. "Machine interpretation of bilingual dialogue." Interpreting. International Journal of Research and Practice in Interpreting 1, no. 2 (January 1, 1996): 213–33. http://dx.doi.org/10.1075/intp.1.2.03lup.

Abstract:
This paper examines the role of the dialogue manager component of a machine interpreter. It is a report on one project to design the discourse module for such a voice-to-voice machine translation (MT) system known as the Interpreting Telephone. The theoretical discourse framework that underlies the proposed dialogue manager supports the job of extracting and collecting information from the context, and facilitating human-machine language interaction in a multi-user environment. Empirical support for the dialogue theory and the implementation described herein, comes from an observational study of one human interpreter engaged in a three-way, bilingual telephone conversation. We begin with a brief description of the interpreting telephone research endeavor, then examine the discourse requirements of such a language-processing system, and finally, report on the application of the discourse processing framework to this voice-to-voice machine translation task.
4

Sánchez, Alfredo J., Soraia S. Prietch, Silvia B. Fajardo-Flores, and Laura S. Gaytán-Lugo. "A tangible interface approach to the codesign of a literacy platform for deaf users." Avances en Interacción Humano-Computadora, no. 1 (November 30, 2022): 5. http://dx.doi.org/10.47756/aihc.y7i1.118.

Abstract:
We report initial results of the codesign process of a software platform aimed to support the development of reading and writing skills among deaf students at the elementary level. This platform is one of the main outcomes set out for a broad multi-institutional, multi-disciplinary deaf literacy project. As one of the initial user research activities, we held a co-creation workshop with six deaf participants, one sign language interpreter, and four hearing researchers. In this workshop we explored the application of a design technique intended to enhance participation and communication by relying on low-tech tangible representations of interface components that can be combined to generate interaction designs. Through observation during the workshop and analysis of video recordings we have derived adaptations and adjustments to our approach for its application in upcoming codesign activities.
5

Zekoll, Viktoria, Raquel de los Reyes, and Rudolf Richter. "A Newly Developed Algorithm for Cloud Shadow Detection—TIP Method." Remote Sensing 14, no. 12 (June 18, 2022): 2922. http://dx.doi.org/10.3390/rs14122922.

Abstract:
The masking of cloud shadows in optical satellite imagery is an important step in automated processing chains. A new method (the TIP method) for cloud shadow detection in multi-spectral satellite images is presented and compared to current methods. The TIP method is based on the evaluation of thresholds, indices and projections. Most state-of-the-art methods solely rely on one of these evaluation steps or on a complex working mechanism. Instead, the new method incorporates three basic evaluation steps into one algorithm for easy and accurate cloud shadow detection. Furthermore, the performance of the masking algorithms provided by the software packages ATCOR (“Atmospheric Correction”) and PACO (“Python-based Atmospheric Correction”) is compared with that of the newly implemented TIP method on a set of 20 Sentinel-2 scenes distributed over the globe, covering a wide variety of environments and climates. The algorithms incorporated in each piece of masking software use the class of cloud shadows, but they employ different rules and class-specific thresholds. Classification results are compared to the assessment of an expert human interpreter. The class assignment of the human interpreter is considered as reference or “truth”. The overall accuracies for the class cloud shadows of ATCOR and PACO (including TIP) for difference areas of the selected scenes are 70.4% and 76.6%, respectively. The difference area encompasses the parts of the classification image where the classification maps disagree. User and producer accuracies for the class cloud shadow are strongly scene-dependent, typically varying between 45% and 95%. The experimental results show that the proposed TIP method based on thresholds, indices and projections can obtain improved cloud shadow detection performance.
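The three TIP ingredients named in the abstract (thresholds, indices, projections) can be sketched as a toy mask. The band names, threshold values, and shadow-shift direction below are invented for illustration and are not taken from the paper:

```python
import numpy as np

def shadow_mask_sketch(nir, swir, cloud_mask, shift=(1, 1),
                       nir_thresh=0.12, ratio_thresh=1.0):
    """Toy cloud-shadow mask combining a reflectance threshold,
    a band-ratio index, and a geometric projection of the cloud
    mask; all parameter values are hypothetical."""
    # 1. Threshold: shadows are dark in the NIR band.
    dark = nir < nir_thresh
    # 2. Index: a simple band ratio as a second piece of evidence.
    ratio = nir / np.maximum(swir, 1e-6)
    index_ok = ratio < ratio_thresh
    # 3. Projection: shift the cloud mask along the assumed sun direction.
    projected = np.roll(cloud_mask, shift, axis=(0, 1))
    return dark & index_ok & projected
```

A pixel is flagged only when all three pieces of evidence agree, which mirrors the abstract's point that combining the steps beats relying on any single one.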
6

Povoroznyuk, Roksolana V. "Harmonization of TIQA standards for specialized texts." Linguistics and Culture Review 5, S2 (August 6, 2021): 678–96. http://dx.doi.org/10.21744/lingcure.v5ns2.1411.

Abstract:
This research explores translation and interpreting quality assessment standards (TIQA), selecting those fit for the purpose of specialized translation quality assurance, with the aim to systematize them into a step-by-step framework, referred to as “the TIQA pyramid”, a framework that provides valid and reproducible benchmarks that are endowed with universal features and reflected in codes of ethics and professional standards. The TIQA standards may be subdivided into two major groups: text-oriented and ethical-deontological ones. Such classification is based on the notion of translation quality which is the projection of a translator (interpreter)’s personality (inchoate quality assurance arising out of a system of ethical and deontological precepts), or of textual requirements (choate quality assurance arising out of a system of text-oriented criteria). The “pas-de-trois” in a translated interaction among the commissioner of a specialized translation, its performer and end-user is grounded in the presumably existing mediated communication contract (typically a translation brief). Its positive upshot is manifested in the confidence-imbued multi-party polypragmatic interlingual and intercultural behaviour; the negative, however, is underscored by its implicit nature which leads to the absence of a concerted system of quality criteria, resulting in a lack of satisfaction and mutual trust.
7

SureshKumar, K., B. S. Santhosh Phani Raj, K. B. Sindhush, D. Bhanu Prakash, and S. Manoj Kumar. "Successive Interference Reduction in Multi User MIMO Channels Using DCI." International Journal of Engineering & Technology 7, no. 2.32 (May 31, 2018): 408. http://dx.doi.org/10.14419/ijet.v7i2.32.15727.

Abstract:
Motivated by the work of Dahrouj and Yu, who applied the Han-Kobayashi transmission scheme to mitigate inter-cell interference in a multi-cell multi-user multiple-input single-output interference network, this paper splits messages into private and public parts in a multi-cell multi-user MIMO interference network. In particular, the covariances of the private and public messages are optimized for either the sum rate or the minimal rate. The public and private messages are decoded in sequence using successive decoding. The paper shows how these hard optimization problems can be adequately handled by D.C. optimization over a simple convex set. Theoretical and simulated results show the usefulness of the proposed solutions for diverse types of interference networks. In the proposed system, messages are split into private and public messages; the covariances of the private and public messages are estimated so as to optimize the sum rate and the minimal rate, and a successive decoding algorithm is proposed for decoding both. The optimization problems are solved using the difference of concave functions (D.C.) technique, yielding a potent D.C. optimization framework over the region of the split private and public rates in multi-user multiple-input multiple-output interference networks for improving the sum rate and the minimal user rate. The Han-Kobayashi (HK) rate-splitting scheme is shown to be an effective way to mitigate interference and increase performance.
8

Su, Chang, Xiaohong Guan, Youtian Du, Qian Wang, and Fei Wang. "A fast multi-level algorithm for community detection in directed online social networks." Journal of Information Science 44, no. 3 (March 14, 2017): 392–407. http://dx.doi.org/10.1177/0165551517698305.

Abstract:
The discovery of underlying community structures plays a significant role in online social network (OSN) analysis. Many previous methods suffer from inaccuracy or incompleteness in community descriptions because of the multiple factors affecting OSNs and the high computational complexity caused by the large scale of these networks. We present a new community detection approach that focuses on two aspects. First, it relies on a combination of user interests and cohesiveness in describing community structures. Second, it introduces a multi-level community discovery algorithm for large-scale OSN datasets. The algorithm consists of three steps: (1) network coarsening based on the combination of two categories of properties, (2) stochastic inference to find an initial community assignment over the coarsest network and (3) projection and refinement of this assignment to obtain the final community detection result by solving a semi-supervised learning problem. The combination of user interests and cohesiveness leads to a complete and well-interpreted description of the communities embedded in OSNs, and the multi-level algorithm speeds up the computation process and improves the likelihood of finding the global optimal solution by reducing the parameter space. Experiments conducted on both synthetic and real datasets demonstrate the effectiveness and efficiency of our method.
9

Tallerås, Kim, Jørn Helge B. Dahl, and Nils Pharo. "User conceptualizations of derivative relationships in the bibliographic universe." Journal of Documentation 74, no. 4 (July 9, 2018): 894–916. http://dx.doi.org/10.1108/jd-10-2017-0139.

Abstract:
Purpose Considerable effort is devoted to developing new models for organizing bibliographic metadata. However, such models have been repeatedly criticized for their lack of proper user testing. The purpose of this paper is to present a study on how non-experts in bibliographic systems map the bibliographic universe and, in particular, how they conceptualize relationships between independent but strongly related entities. Design/methodology/approach The study is based on an open concept-mapping task performed to externalize the conceptualizations of 98 novice students. The conceptualizations of the resulting concept maps are identified and analyzed statistically. Findings The study shows that the participants’ conceptualizations have great variety, differing in detail and granularity. These conceptualizations can be categorized into two main groups according to derivative relationships: those that apply a single-entity model directly relating document entities and those (the majority) that apply a multi-entity model relating documents through a high-level collocating node. These high-level nodes seem to be most adequately interpreted either as superwork devices collocating documents belonging to the same bibliographic family or as devices collocating documents belonging to a shared fictional world. Originality/value The findings can guide the work to develop bibliographic standards. Based on the diversity of the conceptualizations, the findings also emphasize the need for more user testing of both conceptual models and the bibliographic end-user systems implementing those models.
10

Zang, Xinming, Zhenqi Guo, Jingai Ma, Yongguang Zhong, and Xiangfeng Ji. "Target-Oriented User Equilibrium Considering Travel Time, Late Arrival Penalty, and Travel Cost on the Stochastic Tolled Traffic Network." Sustainability 13, no. 17 (September 6, 2021): 9992. http://dx.doi.org/10.3390/su13179992.

Abstract:
In this paper, we employ a target-oriented approach to analyze the multi-attribute route choice decision of travelers in the stochastic tolled traffic network, considering the influence of three attributes, which are (stochastic) travel time, (stochastic) late arrival penalty, and (deterministic) travel cost. We introduce a target-oriented multi-attribute travel utility model for this analysis, where each attribute is assigned a target by travelers, and travelers’ objective is to maximize their travel utility that is determined by the achieved targets. Moreover, the interaction between targets is interpreted as complementarity relationship between them, which can further affect their travel utility. In addition, based on this travel utility model, a target-oriented multi-attribute user equilibrium model is proposed, which is formulated as a variational inequality problem and solved with the method of successive average. Target for travel time is determined via travelers’ on-time arrival probability, while targets for late arrival penalty and travel cost are given exogenously. Lastly, we apply the proposed model on the Braess and Nguyen–Dupuis traffic networks, and conduct sensitivity analysis of the parameters, including these three targets and the target interaction between them. The study in this paper can provide a new perspective for travelers’ multi-attribute route choice decision, which can further show some implications for the policy design.
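The method of successive averages (MSA) mentioned in the abstract can be sketched on a toy two-link network. The BPR-style cost function and all parameter values here are invented for illustration and are unrelated to the paper's target-oriented utilities:

```python
import numpy as np

def msa_equilibrium(demand=10.0, iters=2000):
    """Method of successive averages on a hypothetical two-link
    network: assign all demand to the cheapest link, then average
    the assignment into the current flow with step size 1/n."""
    t0 = np.array([1.0, 2.0])    # free-flow travel times (invented)
    cap = np.array([5.0, 8.0])   # link capacities (invented)
    flow = np.array([demand, 0.0])
    for n in range(1, iters + 1):
        cost = t0 * (1 + 0.15 * (flow / cap) ** 4)   # BPR-style link cost
        aon = np.zeros(2)
        aon[np.argmin(cost)] = demand                # all-or-nothing assignment
        flow += (aon - flow) / n                     # MSA averaging step
    return flow, cost
```

At equilibrium the used links end up with (approximately) equal costs, which is the user-equilibrium condition the variational-inequality formulation encodes.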

Book chapters on the topic "Multi-user interpreter"

1

Gao, Song, Yu Liu, Yuhao Kang, and Fan Zhang. "User-Generated Content: A Promising Data Source for Urban Informatics." In Urban Informatics, 503–22. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-8983-6_28.

Abstract:
This chapter summarizes different types of user-generated content (UGC) in urban informatics and then gives a systematic review of their data sources, methodologies, and applications. Case studies in three genres are interpreted to demonstrate the effectiveness of UGC. First, we use geotagged social media data, a type of single-sourced UGC, to extract citizen demographics, mobility patterns, and place semantics associated with various urban functional regions. Second, we bridge UGC and professional-generated content (PGC), in order to take advantage of both sides. The third application links multi-sourced UGC to uncover urban spatial structures and human dynamics. We suggest that UGC data contain rich information in diverse aspects. In addition, analysis of sentiment from geotagged texts and photos, along with the state-of-the-art artificial intelligence methods, is discussed to help understand the linkage between human emotions and surrounding environments. Drawing on the analyses, we summarize a number of future research areas that call for attention in urban informatics.

Conference papers on the topic "Multi-user interpreter"

1

Banas, Ryan, Andrew McDonald, and Tegwyn Perkins. "NOVEL METHODOLOGY FOR AUTOMATION OF BAD WELL LOG DATA IDENTIFICATION AND REPAIR." In 2021 SPWLA 62nd Annual Logging Symposium Online. Society of Petrophysicists and Well Log Analysts, 2021. http://dx.doi.org/10.30632/spwla-2021-0070.

Abstract:
Subsurface analysis-driven field development requires quality data as input into analysis, modelling, and planning. In the case of many conventional reservoirs, pay intervals are often well consolidated and maintain integrity under drilling and geological stresses providing an ideal logging environment. Consequently, editing well logs is often overlooked or dismissed entirely. Petrophysical analysis however is not always constrained to conventional pay intervals. When developing an unconventional reservoir, pay sections may be comprised of shales. The requirement for edited and quality checked logs becomes crucial to accurately assess storage volumes in place. Edited curves can also serve as inputs to engineering studies, geological and geophysical models, reservoir evaluation, and many machine learning models employed today. As an example, hydraulic fracturing model inputs may span over adjacent shale beds around a target reservoir, which are frequently washed out. These washed out sections may seriously impact logging measurements of interest, such as bulk density and acoustic compressional slowness, which are used to generate elastic properties and compute geomechanical curves. Two classifications of machine learning algorithms for identifying outliers and poor-quality data due to bad hole conditions are discussed: supervised and unsupervised learning. The first allows the expert to train a model from existing and categorized data, whereas unsupervised learning algorithms learn from a collection of unlabeled data. Each classification type has distinct advantages and disadvantages. Identifying outliers and conditioning well logs prior to a petrophysical analysis or machine learning model can be a time-consuming and laborious process, especially when large multi-well datasets are considered. 
In this study, a new supervised learning algorithm is presented that utilizes multiple-linear regression analysis to repair well log data in an iterative and automated routine. This technique allows outliers to be identified and repaired whilst improving the efficiency of the log data editing process without compromising accuracy. The algorithm uses sophisticated logic and curve predictions derived via multiple linear regression in order to systematically repair various well logs. A clear improvement in efficiency is observed when the algorithm is compared to other currently used methods. These include manual processing by a petrophysicist and unsupervised outlier detection methods. The algorithm can also be leveraged over multiple wells to produce more generalized predictions. Through a platform created to quickly identify and repair invalid log data, the results are controlled through input and supervision by the user. This methodology is not a direct replacement of an expert interpreter, but complementary by allowing the petrophysicist to leverage computing power, improve consistency, reduce error and improve turnaround time.
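A minimal version of the core idea (predict a log curve from its neighbours by multiple linear regression, then flag and replace large residuals) might look as follows. This is a sketch of the general technique, not the authors' algorithm; the 3-sigma cutoff is an assumption:

```python
import numpy as np

def repair_log(target, predictors, z_thresh=3.0):
    """Hypothetical sketch of regression-based log repair: fit the
    target curve from other logs by ordinary least squares, flag
    samples with large standardized residuals, and splice in the
    regression prediction at those depths."""
    X = np.column_stack([np.ones(len(target)), *predictors])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    pred = X @ beta
    resid = target - pred
    z = (resid - resid.mean()) / resid.std()
    bad = np.abs(z) > z_thresh          # outliers, e.g. washed-out intervals
    repaired = np.where(bad, pred, target)
    return repaired, bad
```

In practice the authors iterate this kind of step and apply additional logic; the sketch only shows the identify-and-replace core.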
2

Solares, Santiago D. "Exploring Dynamic Non-Idealities in Multi-Frequency Atomic Force Microscopy." In ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/detc2012-70098.

Abstract:
Multi-frequency atomic force microscopy (AFM), in which the microcantilever is driven at more than one frequency, has emerged as a promising technique for simultaneous topographical imaging and material property mapping. While enabling significant advantages over traditional tapping-mode AFM, the greater dynamic complexity of multi-frequency AFM also requires deeper understanding on the part of the user in order to properly interpret the results obtained. This paper illustrates this challenge by exploring a few key dynamic non-idealities, which if neglected could lead to errors of interpretation. These non-ideal phenomena offer a unique opportunity for mechanical engineers to make significant contributions to nanoscale science by providing an increased understanding of the imaging process dynamics and by developing mitigation strategies for dynamics-related artifacts.
3

Meyer, Joerg, and Huan T. Nguyen. "Multi-Dimensional Transfer Functions for Tissue Selection in Computed Tomography." In ASME 2008 3rd Frontiers in Biomedical Devices Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/biomed2008-38107.

Abstract:
Computed Tomography (CT) is a widely used 3-D imaging technique. A 3-D volumetric grid is obtained from 2-D cross-sectional images. In order to be useful as a diagnostic tool, voxel-based numerical values that represent X-ray absorption in each voxel must be interpreted and combined to form an image that depicts tissue composition at a particular location. Transfer functions are used to translate measured X-ray absorption data into Hounsfield units, and Hounsfield units into intensities, colors and transparency values. Gradient-based transfer functions are used to highlight material boundaries and interfaces between different tissues. Multi-dimensional transfer functions combine the advantages of regular and gradientbased transfer functions, facilitating a wide spectrum of visual representations. Transfer functions are usually under user control and often difficult to find. Improper transfer functions can create misleading visualizations and may lead to erroneous diagnoses. This article discusses how multi-dimensional transfer functions can be derived that are clinically relevant and meaningful.
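A two-dimensional transfer function of the kind described can be sketched as a lookup table indexed by voxel intensity and gradient magnitude, so that material boundaries (high gradient) can be rendered differently from homogeneous tissue. The table layout and normalization below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def apply_2d_transfer_function(volume, tf, v_bins, g_bins):
    """Map each voxel to RGBA using a 2-D lookup table `tf` of shape
    (v_bins, g_bins, 4), indexed by normalized intensity and
    normalized gradient magnitude."""
    gz, gy, gx = np.gradient(volume.astype(float))
    gmag = np.sqrt(gx**2 + gy**2 + gz**2)
    # Bin intensity and gradient magnitude into table indices.
    vi = np.clip((volume - volume.min()) / (np.ptp(volume) + 1e-9)
                 * (v_bins - 1), 0, v_bins - 1).astype(int)
    gi = np.clip(gmag / (gmag.max() + 1e-9) * (g_bins - 1),
                 0, g_bins - 1).astype(int)
    return tf[vi, gi]          # RGBA volume of shape volume.shape + (4,)
```

Setting the table's opacity high only in the high-gradient columns highlights tissue interfaces, the combination of regular and gradient-based transfer functions the abstract describes.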
4

Slivovsky, Lynne A., and Hong Z. Tan. "A Real-Time Static Posture Classification System." In ASME 2000 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2000. http://dx.doi.org/10.1115/imece2000-2411.

Abstract:
As computing becomes more ubiquitous, there is a need for distributed intelligent human-computer interfaces that can perceive and interpret a user’s actions through sensors that see, hear and feel. A perceptually intelligent interface enables a more natural interaction between a user and a machine in the sense that the user can look at, talk to or touch an object instead of using a machine language. The goal of the present work on a Sensing Chair is to enable a computer to track, in real time, the sitting postures of a user through contact sensors that act like a layer of artificial skin. This is accomplished with surface-mounted pressure distribution sensors placed on the backrest and the seatpan of an office chair. Given the similarity between a pressure distribution map from the contact sensors and a greyscale image, computer vision and pattern recognition algorithms, such as Principal Components Analysis, are applied to the problem of classifying steady-state sitting postures. A real-time multi-user sitting posture classification system has been implemented in our laboratory. The system is trained on pressure distribution data from subjects with varying anthropometrics, and performs at an overall accuracy of 96%. Future work will focus on the modeling of transient postures when a user moves from one steady-state posture to the next. A robust, real-time sitting posture tracking system can lead to many exciting applications such as automatic control of airbag deployment forces, ergonomics of furniture design, and biometric authentication for computer security.
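The PCA-on-pressure-maps pipeline described above can be sketched roughly as follows. The nearest-centroid classifier, class names, and map sizes are assumptions for illustration, not the Sensing Chair's actual implementation:

```python
import numpy as np

class PosturePCA:
    """Sketch of PCA-plus-nearest-centroid classification of pressure
    maps, treating each map like a small greyscale image."""
    def __init__(self, n_components=2):
        self.k = n_components

    def fit(self, maps, labels):
        X = maps.reshape(len(maps), -1).astype(float)
        self.mean = X.mean(axis=0)
        # Principal components from the SVD of the centered data.
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.components = vt[:self.k]
        Z = (X - self.mean) @ self.components.T
        self.classes = sorted(set(labels))
        self.centroids = np.array(
            [Z[np.array(labels) == c].mean(axis=0) for c in self.classes])
        return self

    def predict(self, maps):
        Z = (maps.reshape(len(maps), -1) - self.mean) @ self.components.T
        d = np.linalg.norm(Z[:, None, :] - self.centroids[None], axis=2)
        return [self.classes[i] for i in d.argmin(axis=1)]
```

The projection onto a few principal components is what makes real-time classification of high-dimensional pressure maps feasible.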
5

Jovanoska, Dijana, and Gjorgji Mancheski. "On-Line Big Data Processing Using Python Libraries for Multiple Linear Regression in Complex Environment." In 27th International Scientific Conference Strategic Management and Decision Support Systems in Strategic Management. University of Novi Sad, Faculty of Economics in Subotica, 2022. http://dx.doi.org/10.46541/978-86-7233-406-7_228.

Abstract:
The phenomenon called Big Data today is one of the most significant and least visible consequences of the development of technology and the Internet. The data generated by today's globally connected world is growing at an exponential rate, and it is a real "gold mine" for those users who know how to correctly interpret it and make successful decisions based on it. Data analysis and processing is one of the most important components of a big data system, and in this branch of data science the most popular tool is the Python programming language, which provides its users with a large number of constantly maintained program libraries and development environments. Most importantly for legal entities and individuals, almost all program libraries and functions provided by this programming language come with free licenses and open code, as well as maintained, quality technical documentation, which gives each company significant savings in money and time. This research paper is dedicated to the possibility of determining and creating a multiple regression model of large amounts of data by using Python, on the basis of large amounts of data provided by two market retailers, in order to display a multiple regression model and assess its predictive power. Because the number of variables is large, several models have been made in this research paper and a comparative analysis of the different models has been carried out. The analysis shows that Python is a good tool that can be used repeatedly to select different variants and evaluate the resulting model. A graphical interface, much more acceptable to an end user, can be built for the model; it can be placed on a server on the Internet or on a modern cloud platform and used on demand, and the results can be embedded in end-user interfaces. Models made in this way (with dynamic data extraction) can be used in BI and machine learning processes.
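The kind of multiple linear regression model the paper builds in Python can be sketched with NumPy alone; the data shapes and the R² summary below are illustrative assumptions, not the paper's retail dataset or chosen libraries:

```python
import numpy as np

def fit_multiple_regression(X, y):
    """Ordinary-least-squares multiple linear regression: fit an
    intercept plus one coefficient per predictor column of X, and
    report the coefficient of determination R^2."""
    A = np.column_stack([np.ones(len(y)), X])       # design matrix with intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    return beta, r2
```

Comparing R² (or a held-out error) across candidate predictor subsets is one simple way to carry out the kind of model comparison the abstract describes.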
6

Lima Angelo dos Santos, Laura, Nadege Bize-Forest, Giovanna de Fraga Cameiro, Adna Grazielly Paz de Vasconcelos, and Patrick Pereira Machado. "Unsupervised Facies Pattern Recognition of Brazilian Pre-Salt Carbonates Borehole Images." In 2022 SPWLA 63rd Annual Symposium. Society of Petrophysicists and Well Log Analysts, 2022. http://dx.doi.org/10.30632/spwla-2022-0129.

Full text
Abstract:
We apply our novel automated image interpretation workflow to Brazilian pre-salt ultrasonic borehole image data. We obtain an immediate, unbiased classification of the full data, requiring no further input data beyond the borehole image itself. This interactive solution combines statistical and deep learning algorithms for image embedding to provide data-driven, multi-purpose borehole image interpretation. Borehole images are a source of important information for building static reservoir models. Textures observed in these high-resolution well logs are the results of and provide insights into the different processes that have occurred: from the moment of the deposition until the image acquisition. Each field, reservoir, well, and interval has a unique textural assemblage, consequence of its own depositional facies, diagenetic processes, geomechanics and wellbore conditions or well intervention and completion. Efforts to automate facies interpretation in our industry often rely on applying supervised machine learning models. These supervised algorithms are restricted to executing very specific tasks, based on extensive amounts of consistently labeled data. In the example of depositional geological classification, generating labeled data can be a complex and extensive task, subject to interpreters’ experience – resulting in a low human performance benchmark. The solution proposed here comprises a sequence of five steps:
• Prepare data;
• Apply a first embedding step using statistical methods or convolutional autoencoders;
• Apply PCA or t-SNE techniques as the second embedding step;
• Perform manual or automatic clustering;
• Finally, assign a facies class to each textural group.
This paper discusses applying this innovative workflow to acoustic borehole images of pre-salt carbonates from the Santos basin. Various preprocessing and embedding options were tested and compared to the geological core interpretation.
Using statistics, semi-supervised t-SNE and k-means clustering methods, we divide the data into textural groups and describe these groups according to their distinct geological, diagenetic or geomechanical characteristics. With this new approach, facies are defined based solely on borehole image logs in a fast, consistent and less user-biased form. Ultimately, our innovative workflow allows us to not only gain insights into the depositional, geological and geomechanical processes and their correlation with the pre-salt carbonates reservoir quality, but to establish a more efficient, reliable method for borehole image interpretation in general.
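The clustering step of the workflow (here k-means, one of the methods the authors name) can be sketched on already-embedded points. The naive initialisation and the toy data are illustrative assumptions; the embedding itself (statistics, autoencoder, or t-SNE) is assumed to have already produced the points `Z`:

```python
import numpy as np

def kmeans(Z, k=3, iters=25):
    """Plain k-means over embedded image patches: repeatedly assign
    each point to its nearest centre, then move each centre to the
    mean of its points. Naive initialisation: the first k points."""
    centers = Z[:k].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            pts = Z[assign == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return assign, centers
```

Each resulting cluster would then be inspected and given a facies label, the last step of the workflow.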
APA, Harvard, Vancouver, ISO, and other styles
7

Süyük Makakli, Elif, and Ebru Yücesan. "Spatial Experience Of Physical And Virtual Space." In SPACE International Conferences April 2021. SPACE Studies Publications, 2022. http://dx.doi.org/10.51596/cbp2021.jrvm8060.

Full text
Abstract:
Fictional spaces produced through multidisciplinary research with improving technologies create settings that provoke new questions and yield different answers, broadening the horizons of virtual-space studies, the space concept, design, and experience. Evaluating virtual space as a layer of reality makes it a representation of architectural space that belongs to the physical world. The principal factors that form the physicality of a space, its shape and content, are related to cultural, public, societal, perceptual, and intellectual codes. The space concept can be explained both as a physical, three-dimensional entity and, through human interaction with space, in subjective and abstract dimensions such as the feelings it elicits and other perceptual factors. Spaces designed and produced for human use can be perceived differently and mean different things to different people through human–space interactions. Perceiving, interpreting, and describing a space is a complex process that can only occur by experiencing it.

Although virtual reality emerged as a simulation of physical space, there are increasing attempts today to form emotional and physical connections to such spaces. New technologies used to create new spaces, and descriptions such as virtual reality, virtual space, cyberspace, and hybrid space, are articulated as new layers within the spatial memory accumulated to date. Virtual reality technologies, which can be explained as an interface between humans and machines and which describe different life systems, give one the feeling of being in another space. Although these spaces are virtual, they relate to the space concept because they can be experienced and give the feeling of being somewhere. These settings, which present multi-dimensional spatial experiences by taking humans into a digital reality, are created with computer support and experienced through various electronic tools.

These settings, in which human and machine, organic and non-organic entities meet, are also crucial in design education, as they improve creative processes related to the future, machine–human interaction, and the space concept and its formation. As virtuality, evaluated as a layer of reality, becomes a representation of architectural space belonging to the physical world, it also offers the potential to approach space design in a new way: to affect and improve the perception of creating space, deliver spatial solutions, understand new living conditions, and discover the future by responding to technological improvements. Virtual reality creates a personal space experience that diffracts space and time; improving technologies set these spaces, which simulate reality, as a layer of fact, a reflection, or a representation. The cyber and virtual experiences that have emerged in new media spaces have reduced space's dependency on the physical world through the integration of improving technologies and art.

'SALT Research' within Salt Galata, a monumental building in Galata, İstanbul, and 'Virtual Archive', a media art project by Refik Anadol that questions the virtual-digital space concept, were chosen as experience spaces. The current physical space experience composes the infrastructure of the study, and holistic composition differences between the physical and virtual spaces were emphasized: the virtual space is composed of different elements yet is perceived just like real space. The dataset includes a detailed assessment of two different spaces with similar contexts and contains the physical and virtual space analyses on syntactic, semantic, and pragmatic scales. Volunteer participants emphasized the differences in holistic composition between the two spaces.

They noted that the virtual space differs from the physical space and is composed of different elements, and that the user has a perception of belonging just as in a physical space. The physical space, SALT Research, was evaluated as satisfactory and high-quality in terms of aesthetics and equipment; phrases used to describe it included neat, high spaces, comfort, spaciousness, light, dark areas, tranquillity, silence, acoustic balance, harmony, historical, gripping, transformation, aesthetic and functional, and plain. In contrast, participants saw the Virtual Archive as a new, exciting, different, and innovative experience. The bodily freedom of the virtual space experience was described optimistically: with a brief understanding of the space, participants overcame the difficulties of physical existence that arose when accessing information in this new environment.

Fictional space produced through a multidisciplinary study using improving technologies creates settings where new questions are asked and different answers given, broadening the horizons of virtual-space studies, the space concept, design, and experience. Virtuality evaluated as a layer of reality represents architectural space that belongs to the physical world. Virtual reality technology changes and influences our perceptions of time, dimension, and architecture, as well as the modes of expression and interaction in art and architecture, by taking us into a different universe experienced spiritually and mentally in new space creations. The space experience, through the journey of interpreting and understanding space and architecture, tells different things to each person on each occasion. Perceiving space through the physical space experience and active senses, via intellectual feedback, also affects virtual reality interactions. Different disciplines examine machine, human, space, and future relations in an interdisciplinary environment.
The varieties and opportunities of different designs have a place in architecture and interior architecture. In the future, the integration of physical space, virtual space, and machine intelligence into space design and design education, along with the role and effect of the designer, will continue to be discussed. Today, new representation environments present new evolutions that improve, evaluate, and interpret spatial ideas. Despite changing technologies, humans must exist somewhere, and existence is related to our sensory, emotional, and memorial creations. In this sense, the place of humans and designers will continue to be questioned in the new spaces created.