Search results for: empathic accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3745

625 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks

Authors: Ahmed Abdullah Ahmed

Abstract:

The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although previous studies have made great efforts to devise various methods, their performance, especially in terms of accuracy, falls short, and there is still considerable room for improvement. The proposed technique employs optimal codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most classical codebook-based approaches, which segment the writing into graphemes, this study fragments particular areas of the writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to divide the beginning and ending zones of the handwriting into small fragments. Similar fragments of beginning strokes are grouped together to create a beginning cluster, and the ending strokes are likewise grouped to create an ending cluster. These two clusters lead to two codebooks (beginning and ending), built by choosing the center of each group of similar fragments. The writings under study are then represented by computing the probability of occurrence of the codebook patterns, and this probability distribution is used to characterize each writer. Two writings are compared by computing distances between their respective probability distributions. The evaluation was carried out on the standard ICFHR dataset of 206 writers, using the beginning and ending codebooks separately. The ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
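
A minimal sketch of the codebook pipeline described above, assuming fragment feature vectors have already been extracted by contour detection and curve fragmentation; k-means clustering and the chi-square distance are illustrative stand-ins, since the abstract does not name the specific algorithms.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(fragments, n_codes=64, seed=0):
    """Cluster stroke-fragment feature vectors; the cluster centers form the codebook."""
    return KMeans(n_clusters=n_codes, n_init=10, random_state=seed).fit(fragments)

def occurrence_distribution(codebook, fragments):
    """Represent a writing sample by the probability of occurrence of each codebook pattern."""
    counts = np.bincount(codebook.predict(fragments), minlength=codebook.n_clusters)
    return counts / counts.sum()

def chi2_distance(p, q, eps=1e-12):
    """Compare two writings by a distance between their probability distributions."""
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

# Hypothetical usage: `ending_fragments_*` would come from fragmenting the
# ending strokes of each writing sample.
# codebook = build_codebook(ending_fragments_all_writers)
# d = chi2_distance(occurrence_distribution(codebook, ending_fragments_a),
#                   occurrence_distribution(codebook, ending_fragments_b))
```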

Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments

Procedia PDF Downloads 512
624 A Bayesian Approach for Analyzing Academic Article Structure

Authors: Jia-Lien Hsu, Chiung-Wen Chang

Abstract:

Research articles may follow a simple and succinct structure of organizational patterns, called moves. For example, an extended abstract usually consists of five moves: Background, Aim, Method, Results, and Conclusion. As another example, when publishing articles in PubMed, authors are encouraged to provide a structured abstract, i.e., an abstract with distinct and labeled sections (e.g., Introduction, Methods, Results, Discussion) for rapid comprehension. This paper introduces a method for the computational analysis of move structures (i.e., Background-Purpose-Method-Result-Conclusion) in the abstracts and introductions of research documents, replacing a manual analysis process that is time-consuming and labor-intensive. In our approach, sentences in a given abstract and introduction are automatically analyzed and labeled with a specific move (B-P-M-R-C in this paper) to reveal their rhetorical status. It is expected that such an automatic analytical tool for move structures will help non-native speakers and novice writers become aware of appropriate move structures and internalize the relevant knowledge to improve their writing. We propose a Bayesian approach to determine move tags for research articles. The approach consists of two phases, a training phase and a testing phase. In the training phase, we build a Bayesian model based on a set of given initial patterns and a corpus, a subset of CiteSeerX. In the beginning, the prior probability of the Bayesian model relies solely on the initial patterns. Subsequently, we process each document of the corpus one by one: extract features, determine tags, and update the Bayesian model iteratively. In the testing phase, we compare our results with tags manually assigned by experts. In our experiments, the accuracy of the proposed approach reaches a promising 56%.
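
A minimal sketch of the two-phase idea, assuming a multinomial naive Bayes classifier and a simple self-training loop as stand-ins for the paper's Bayesian model and its iterative update; the seed sentences are invented placeholders for the "initial patterns".

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

MOVES = ["B", "P", "M", "R", "C"]  # Background, Purpose, Method, Result, Conclusion

# Hypothetical seed sentences standing in for the initial patterns.
seed_sentences = [
    "Little is known about this phenomenon.",   # B
    "This paper aims to propose a method.",     # P
    "We collected and annotated a corpus.",     # M
    "The accuracy reached 56 percent.",         # R
    "These findings suggest wider use.",        # C
]
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(seed_sentences, MOVES)

def self_train(model, labeled_x, labeled_y, unlabeled, rounds=3):
    """Iterative update: tag unlabeled corpus sentences, then refit on the
    enlarged training set (a simple self-training loop)."""
    for _ in range(rounds):
        pseudo = model.predict(unlabeled)
        model.fit(list(labeled_x) + list(unlabeled), list(labeled_y) + list(pseudo))
    return model
```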

Keywords: academic English writing, assisted writing, move tag analysis, Bayesian approach

Procedia PDF Downloads 330
623 Identifying a Drug Addict Person Using Artificial Neural Networks

Authors: Mustafa Al Sukar, Azzam Sleit, Abdullatif Abu-Dalhoum, Bassam Al-Kasasbeh

Abstract:

Use and abuse of drugs by teenagers is very common and can have dangerous consequences. Drug use contributes to physical and sexual aggression such as assault or rape, and some teenagers regularly use drugs to compensate for depression, anxiety, or a lack of positive social skills. Teenage smoking should not be minimized, because tobacco can act as a gateway to other drugs (marijuana, cocaine, hallucinogens, inhalants, and heroin). The combination of teenagers' curiosity, risk-taking behavior, and social pressure makes it very difficult to say no, leading most teenagers to the question: "Will it hurt to try once?" Nowadays, technological advances are changing our lives very rapidly and providing many technologies that help track the risk of drug abuse, such as smartphones, Wireless Sensor Networks (WSNs), and the Internet of Things (IoT). These technologies may enable early discovery of drug abuse and thereby prevent an aggravation of the drugs' influence on the abuser. In this paper, we have developed a Decision Support System (DSS) for detecting drug abuse using an Artificial Neural Network (ANN); a Multilayer Perceptron (MLP) feed-forward neural network was used in developing the system. The input layer includes 50 variables, while the output layer contains one neuron, which indicates whether the person is a drug addict. An iterative process is used to determine the number of hidden layers and the number of neurons in each one. Multiple experimental models were built using the log-sigmoid transfer function, and a 10-fold cross-validation scheme was used to assess the generalization of the proposed system. The system obtained a classification accuracy of 98.42% for correct diagnosis. The data were taken from 184 cases in Jordan, based on a set of questions compiled with specialists and collected through the families of drug abusers.
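
A minimal sketch of the described setup using scikit-learn's MLPClassifier with the logistic (log-sigmoid) activation and 10-fold cross-validation; the data are random placeholders for the 184 questionnaire cases, and the hidden-layer size is an assumption standing in for the paper's iterative search.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in for the 184 cases with 50 input variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(184, 50))
y = rng.integers(0, 2, size=184)  # 1 = drug addict, 0 = not

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(20,), activation="logistic",
                  max_iter=2000, random_state=0),
)
scores = cross_val_score(clf, X, y,
                         cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0))
print(f"10-fold CV accuracy: {scores.mean():.4f}")
```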

Keywords: drug addiction, artificial neural networks, multilayer perceptron (MLP), decision support system

Procedia PDF Downloads 299
622 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data

Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali

Abstract:

This paper presents a unique approach to evapotranspiration (ET) prediction using a Support Vector Machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve the spatial and temporal resolution of ET predictions. In the model development, key input features are measured and computed using mathematical equations such as Penman-Monteith (FAO56) and the soil water balance (SWB); these include soil-environmental parameters such as solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field dataset is split into training, test, and validation sets in several proportions, and kernel functions with tuned hyperparameters are used to train and improve the accuracy of the prediction model over multiple iterations. The paper also outlines existing methods and machine learning techniques for determining evapotranspiration, data collection and preprocessing, model construction, and evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictability of the developed model on the basis of the performance evaluation metrics (R2, RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships within soil and environmental parameters provides insights into its potential applications for water resource management and hydrological ecosystems.
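
A sketch of kernel and hyperparameter tuning for an SVM regressor, assuming a feature matrix built from the nine listed parameters; the grid, split proportions, and data are illustrative, not the paper's configuration.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical sensor-node feature matrix: Rs, T, P, RH, u2, R, DP, ST, dSM.
rng = np.random.default_rng(1)
X = rng.normal(size=(365, 9))        # one year of daily records
y = rng.normal(loc=3.0, size=365)    # daily ET (mm/day), e.g. from FAO56

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

# Kernel choice and hyperparameters tuned over multiple iterations.
grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVR()),
    param_grid={"svr__kernel": ["rbf", "poly"],
                "svr__C": [1, 10, 100],
                "svr__epsilon": [0.01, 0.1]},
    scoring="neg_root_mean_squared_error", cv=5,
)
grid.fit(X_train, y_train)
print("best params:", grid.best_params_)
print("test neg-RMSE:", grid.score(X_test, y_test))
```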

Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors

Procedia PDF Downloads 69
621 Multi-Stage Optimization of Local Environmental Quality by Comprehensive Computer Simulated Person as Sensor for Air Conditioning Control

Authors: Sung-Jun Yoo, Kazuhide Ito

Abstract:

In this study, a comprehensive computer simulated person (CSP) that integrates a computational human model (virtual manikin) and a respiratory tract model (virtual airway) was applied to the estimation of indoor environmental quality. Moreover, an inclusive prediction method was established by integrating computational fluid dynamics (CFD) analysis with the advanced CSP, which is combined with a physiologically-based pharmacokinetic (PBPK) model and an unsteady thermoregulation model, for high-accuracy analysis targeting the micro-climate around the human body and the respiratory area. This comprehensive method can estimate not only contaminant inhalation but also the constant interaction of contaminant transfer between indoor spaces, i.e., the target area for indoor air quality (IAQ) assessment, and the respiratory zone, used for health risk assessment. This study focused on the use of the CSP as an indoor air/thermal quality sensor, that is, the application of the comprehensive model to the assessment of IAQ and thermal environmental quality. A demonstrative analysis was performed in order to examine the applicability of the comprehensive model to a heating, ventilation, and air conditioning (HVAC) control scheme. The CSP was located at the center of a simple model room with dimensions of 3 m × 3 m × 3 m. Formaldehyde generated from the floor material was assumed to be the target contaminant, and flow field, sensible/latent heat, and contaminant transfer analyses in the indoor space were conducted using CFD simulation coupled with the CSP. In this analysis, thermal comfort was evaluated by thermoregulatory analysis, and respiratory exposure risks, represented by the adsorption flux/concentration at the airway wall surface, were estimated by PBPK-CFD hybrid analysis. These analysis results concerning IAQ and thermal comfort are fed back to the HVAC control and could be used to find a suitable ventilation rate and energy requirement for the air conditioning system.

Keywords: CFD simulation, computer simulated person, HVAC control, indoor environmental quality

Procedia PDF Downloads 361
620 Fuzzy Logic-Based Approach to Predict Fault in Transformer Oil Based on Health Index Using Dissolved Gas Analysis

Authors: Kharisma Utomo Mulyodinoto, Suwarno, Ahmed Abu-Siada

Abstract:

Transformer insulating oil is a key component that can be utilized to detect incipient faults within operating transformers without taking them out of service. Dissolved gas-in-oil analysis has been widely accepted as a powerful technique to detect such incipient faults. While the measurement of dissolved gases within transformer oil samples has been standardized over the past two decades, analysis of the results is not always straightforward, as it depends on personnel expertise more than on mathematical formulas. In analyzing such data, the generation rate of each dissolved gas is of more concern than the absolute value of the gas. As such, the history of dissolved gases within a particular transformer should be archived for future comparison; a lack of such history may lead to misinterpretation of the obtained results. The IEEE C57.104-2008 standard classifies the health condition of the transformer into four conditions based on the absolute values of the individual dissolved gases along with the total dissolved combustible gas (TDCG) within the transformer oil. While this technique is easy to implement, it is considered very conservative and is not widely accepted as a reliable interpretation tool. Moreover, the measured gases for the same oil sample can fall within the limits of different conditions, and hence misinterpretation of the data is expected. To overcome this limitation, this paper introduces a fuzzy logic approach to predict the health condition of the transformer oil based on the IEEE C57.104-2008 standard along with the Roger ratio and IEC ratio-based methods. DGA results of 31 oil samples, chosen from 469 transformer oil samples of normal transformers and transformers with pre-known fault types collected from the Indonesian electrical utility company PT. PLN (Persero), covering different voltage ratings (500/150 kV, 150/20 kV, and 70/20 kV), different capacities (500 MVA, 60 MVA, 50 MVA, 30 MVA, 20 MVA, 15 MVA, and 10 MVA), and different lifespans, are used to test and establish the fuzzy logic model. Results show that the proposed approach has good accuracy and can be considered a platform toward the standardization of the dissolved gas interpretation process.
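
A minimal sketch of the fuzzy-membership idea: each gas concentration maps to degrees of membership in the four conditions, which are then aggregated across gases. The membership breakpoints below are loose illustrations only, not the IEEE C57.104-2008 limits, which must be taken from the standard.

```python
import numpy as np

def trap(x, a, b, c, d):
    """Trapezoidal membership function."""
    return float(np.clip(min((x - a) / max(b - a, 1e-9),
                             (d - x) / max(d - c, 1e-9)), 0.0, 1.0))

def h2_memberships(ppm):
    """Hypothetical, illustrative limits for hydrogen (ppm) across four conditions."""
    return {
        "condition1": trap(ppm, -1, 0, 80, 110),
        "condition2": trap(ppm, 90, 120, 600, 720),
        "condition3": trap(ppm, 650, 750, 1700, 1900),
        "condition4": trap(ppm, 1750, 1900, 1e6, 1e6 + 1),
    }

def oil_condition(gas_memberships_list):
    """Simple OR-rule inference: per condition, take the max membership over
    all gases, then defuzzify by arg-max."""
    agg = {}
    for m in gas_memberships_list:
        for cond, mu in m.items():
            agg[cond] = max(agg.get(cond, 0.0), mu)
    return max(agg, key=agg.get), agg

label, degrees = oil_condition([h2_memberships(150.0)])
print(label, degrees)
```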

Keywords: dissolved gas analysis, fuzzy logic, health index, IEEE C57.104-2008, IEC ratio method, Roger ratio method

Procedia PDF Downloads 157
619 Applying Semi-Automatic Digital Aerial Survey Technology and Canopy Characters Classification for Surface Vegetation Interpretation of Archaeological Sites

Authors: Yung-Chung Chuang

Abstract:

The cultural layers of archaeological sites are mainly affected by surface land use, land cover, and the root systems of surface vegetation. For this reason, continuous monitoring of land use and land cover change is important for the protection and management of archaeological sites. In actual operation, however, on-site investigation and orthogonal photograph interpretation require a great deal of time and manpower, so a good alternative for surveying surface vegetation in an automated or semi-automated manner is necessary. In this study, we applied semi-automatic digital aerial survey technology and canopy character classification to very-high-resolution aerial photographs for the surface vegetation interpretation of archaeological sites. The main idea is that different landscape or forest types can easily be distinguished by canopy characters (e.g., specific texture distributions, shadow effects, and gap characters) extracted by semi-automatic image classification. A novel methodology to classify the shape of canopy characters using landscape indices and multivariate statistics was also proposed. Non-hierarchical cluster analysis was used to assess the optimal number of canopy character clusters, and canonical discriminant analysis was used to generate the discriminant functions for canopy character classification (seven categories). The forest type and vegetation land cover can therefore be predicted from the corresponding canopy character category. The results showed that the semi-automatic classification could effectively extract the canopy characters of forest and vegetation land cover. As for forest type and vegetation type prediction, the average prediction accuracy reached 80.3%~91.7% with different sizes of test frame. This demonstrates that the technology is useful for archaeological site surveys and can improve classification efficiency and the data update rate.
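
A sketch of the statistical pipeline under stated assumptions: non-hierarchical (k-means) clustering with a silhouette criterion standing in for the cluster-number assessment, followed by discriminant functions fitted via linear discriminant analysis; the landscape-index feature data are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import silhouette_score

# Hypothetical landscape-index feature vectors for extracted canopy characters.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 6))

# Non-hierarchical clustering; sweep k and keep the best silhouette score
# as a proxy for the "optimal number of canopy character clusters".
best_k = max(range(2, 10), key=lambda k: silhouette_score(
    X, KMeans(n_clusters=k, n_init=10, random_state=2).fit_predict(X)))
labels = KMeans(n_clusters=best_k, n_init=10, random_state=2).fit_predict(X)

# Canonical discriminant analysis (here via LDA) yields discriminant
# functions that assign new canopy characters to one of the categories.
cda = LinearDiscriminantAnalysis().fit(X, labels)
print(best_k, cda.predict(X[:5]))
```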

Keywords: digital aerial survey, canopy characters classification, archaeological sites, multivariate statistics

Procedia PDF Downloads 142
618 Teaching Audiovisual Translation (AVT): Linguistic and Technical Aspects of Different Modes of AVT

Authors: Juan-Pedro Rica-Peromingo

Abstract:

Teachers constantly need to innovate and redefine materials for their lectures, especially in areas such as Language for Specific Purposes (LSP) and Translation Studies (TS). It is therefore essential for lecturers to be technically skilled enough to handle the never-ending evolution of software and technology, which are necessary elements in certain courses at university level. This need becomes even more evident in Audiovisual Translation (AVT) modules and courses. AVT has undergone considerable growth in the area of teaching and learning of languages for academic purposes: we have witnessed the development of a considerable number of master's and postgraduate courses where AVT becomes a tool for L2 learning, and the teaching and learning of different AVT modes are components of undergraduate and postgraduate courses. Universities in which AVT is offered as part of their teaching programme or training make use of professional or free software programs. This paper presents an approach to AVT within a specific university context in which technology is used by means of professional and non-professional software. Students take an AVT subject as part of their English Linguistics Master's Degree at the Complutense University (UCM), in which they use professional (Spot) and non-professional (Subtitle Workshop, Aegisub, Windows Movie Maker) software packages. The students are encouraged to develop their tasks and projects by simulating authentic professional experiences and contexts in the different AVT modes: subtitling for hearing audiences and for the deaf and hard-of-hearing population, audio description, and dubbing. Selected scenes from TV series such as The X-Files, Gossip Girl, and The IT Crowd; extracts from the movies Finding Nemo, Good Will Hunting, School of Rock, Harry Potter, and Up; and short movies (Vincent) were used. The complexity of the audiovisual materials used in class, as well as the activities for the students' projects, was accordingly graded. The assessment of the diverse tasks carried out by the students is expected to provide insights into the best way to improve their linguistic accuracy and oral and written production with the use of different AVT modes in a very specific ESP university context.

Keywords: ESP, audiovisual translation, technology, university teaching, teaching

Procedia PDF Downloads 518
617 Lexical-Semantic Deficits in Sinhala Speaking Persons with Post Stroke Aphasia: Evidence from Single Word Auditory Comprehension Task

Authors: D. W. M. S. Samarathunga, Isuru Dharmarathne

Abstract:

In aphasia, various levels of symbolic language processing (semantics) are affected. It has been shown that persons with aphasia (PWA) often experience more problems comprehending some categories of words than others. The study aimed to determine the lexical-semantic deficits seen in auditory comprehension (AC) and to describe these deficits across six selected word categories. Thirteen (n = 13) persons diagnosed with post-stroke aphasia (PSA) were recruited to perform an AC task. Foods, objects, clothes, vehicles, body parts, and animals were selected as the six categories. As test stimuli, black-and-white line drawings were adapted from a picture set developed for semantic studies by Snodgrass and Vanderwart. A pilot study was conducted with five (n = 5) healthy, non-brain-damaged Sinhala-speaking adults to establish the familiarity and applicability of the test material. In the main study, participants were scored based on accuracy and the number of errors shown. The results indicate trends similar to the lexical-semantic deficits identified in the literature, confirming 'animals' to be the easiest category to comprehend. A Mann-Whitney U test was performed to determine the association between the selected variables and the participants' performance on the AC task. No statistically significant association was found between the errors and the type of aphasia, reflecting patterns described in the aphasia literature for other languages. The current study indicates the presence of selective lexical-semantic deficits in AC, and a hierarchy was developed based on the complexity of the categories to comprehend for Sinhala-speaking PWA, which might be clinically beneficial when improving the language skills of Sinhala-speaking persons with post-stroke aphasia. However, further studies on aphasia should be conducted with larger samples over a longer period to study deficits in Sinhala and other Sri Lankan languages (Tamil and Malay).
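
A Mann-Whitney U test of the kind mentioned above can be run with SciPy; the error counts below are invented placeholders, not the study's data, shown only to illustrate the procedure.

```python
from scipy.stats import mannwhitneyu

# Hypothetical error counts on the auditory comprehension task for two
# aphasia types; the study reported no significant difference.
errors_type_a = [3, 5, 2, 6, 4, 5, 7]
errors_type_b = [4, 6, 3, 5, 5, 6]

stat, p = mannwhitneyu(errors_type_a, errors_type_b, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")  # p > 0.05 would mirror the reported result
```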

Keywords: aphasia, auditory comprehension, selective lexical-semantic deficits, semantic categories

Procedia PDF Downloads 253
616 Pupil Size: A Measure of Identification Memory in Target Present Lineups

Authors: Camilla Elphick, Graham Hole, Samuel Hutton, Graham Pike

Abstract:

Pupil size has been found to change irrespective of luminosity, suggesting that it can be used to make inferences about cognitive processes, such as cognitive load. To see whether identifying a target requires a different cognitive load than rejecting distractors, the effect of viewing a target (compared with viewing distractors) on pupil size was investigated using a sequential video lineup procedure with two lineup sessions. Forty-one participants were recruited at random through the university. Pupil sizes were recorded when viewing pre-target and post-target distractors and compared to pupil size when viewing the target. Overall, pupil size was significantly larger when viewing the target than when viewing distractors. In the first session, pupil size changes differed significantly between participants who identified the target (Hits) and those who did not. Specifically, the pupil size of Hits reduced significantly after viewing the target (by 26%), suggesting that cognitive load fell following identification. The pupil sizes of Misses (who made no identification) and False Alarms (who misidentified a distractor) did not reduce, suggesting that cognitive load remained high in participants who failed to make the correct identification. In the second session, pupil sizes were smaller overall, suggesting that cognitive load was lower in this session, and there was no significant difference between Hits, Misses, and False Alarms. Furthermore, while the frequency of Hits increased, so did that of False Alarms. These two findings suggest that the benefits of including a second session remain uncertain, as the second session neither provided greater accuracy nor a reliable way to measure it. It is concluded that pupil size is a measure of face recognition strength in the first session of a target-present lineup procedure. However, it is still not known whether cognitive load adequately explains this effect or whether cognitive engagement might describe it more appropriately. If cognitive load and cognitive engagement can be teased apart by further investigation, this would have positive implications for understanding eyewitness identification. Nevertheless, this research has the potential to provide a tool for improving the reliability of lineup procedures.

Keywords: cognitive load, eyewitness identification, face recognition, pupillometry

Procedia PDF Downloads 404
615 Good Practices for Model Structure Development and Managing Structural Uncertainty in Decision Making

Authors: Hossein Afzali

Abstract:

Increasingly, decision-analytic models are used to inform decisions about whether or not to publicly fund new health technologies. It is well noted that the accuracy of model predictions is strongly influenced by the appropriateness of model structuring. However, there is relatively little methodological guidance surrounding this issue in the guidelines developed by national funding bodies such as the Australian Pharmaceutical Benefits Advisory Committee (PBAC) and the National Institute for Health and Care Excellence (NICE) in the UK. This presentation discusses issues around model structuring within decision making, with a focus on (1) the need for a transparent and evidence-based model structuring process to inform the most appropriate set of structural aspects for the base case analysis; and (2) the need to characterise structural uncertainty: if alternative plausible structural assumptions (or judgements) exist, the related structural uncertainty needs to be appropriately characterised. The presentation will provide an opportunity to share ideas and experiences on how the guidelines developed by national funding bodies address the above issues, and to identify areas for further improvement. First, a review and analysis of the literature and of the guidelines developed by PBAC and NICE will be provided. It will then be discussed how the issues around model structuring (including structural uncertainty) are not handled and justified in a systematic way within the decision-making process, the potential impact of this on the quality of public funding decisions, and how it should be presented in submissions to national funding bodies. This presentation represents a contribution to good modelling practice within the decision-making process. Although it focuses on the PBAC and NICE guidelines, the discussion applies more widely to many other national funding bodies that use economic evaluation to inform funding decisions but do not transparently address model structuring issues, e.g., the Medical Services Advisory Committee (MSAC) in Australia or the Canadian Agency for Drugs and Technologies in Health.

Keywords: decision-making process, economic evaluation, good modelling practice, structural uncertainty

Procedia PDF Downloads 186
614 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution

Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone

Abstract:

The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies, stemming from diverse research hypotheses, have been proposed to safeguard DNNs against such attacks. Building upon prior work, our approach utilizes autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of the training data and to reconstruct inputs from these representations, typically by minimizing a reconstruction error such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them; consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation: we considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both the dimensions and the neural network model, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder-extracted features were used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
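
A minimal PyTorch sketch of the reconstruction-error detector idea for 3-channel images; the layer sizes and the threshold are assumptions, and the paper's multi-modal spectral variant is not reproduced here.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Minimal convolutional autoencoder; high reconstruction error flags
    inputs unlike the benign training distribution."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def is_adversarial(model, x, threshold):
    """Per-image MSE between input and reconstruction; the threshold would be
    calibrated on benign validation data."""
    with torch.no_grad():
        err = torch.mean((model(x) - x) ** 2, dim=(1, 2, 3))
    return err > threshold

# Hypothetical usage on a batch of 256x256 RGB images:
# model = ConvAutoencoder(); flags = is_adversarial(model, batch, threshold=0.014)
```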

Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder

Procedia PDF Downloads 112
611 Application of the Hydrologic Engineering Center's River Analysis System (HEC-RAS) to Estuarine Hydraulics

Authors: Julia Zimmerman, Gaurav Savant

Abstract:

This study aims to evaluate the efficacy of applying the U.S. Army Corps of Engineers' River Analysis System (HEC-RAS) to modeling the hydraulics of estuaries. HEC-RAS has been used broadly for a variety of riverine applications; however, it has not been widely applied to the study of circulation in estuaries. This report details the development and validation of a combined 1D/2D unsteady-flow hydraulic model using HEC-RAS for estuaries and their associated tidally influenced rivers. Two estuaries, Galveston Bay and Delaware Bay, were used as case studies. Galveston Bay, a bar-built, vertically mixed estuary, was modeled for the 2005 calendar year. Delaware Bay, a drowned river valley estuary, was modeled from October 22, 2019, to November 5, 2019. Water surface elevation was used to validate both models by comparing simulation results to gauge data from NOAA's Center for Operational Oceanographic Products and Services (CO-OPS). Simulations were run using the Diffusion Wave Equations (DW), the Shallow Water Equations with the Eulerian-Lagrangian Method (SWE-ELM), and the Shallow Water Equations with the Eulerian Method (SWE-EM), and compared for both accuracy and the computational resources required. In general, the Diffusion Wave Equations results were comparable to those of the two Shallow Water Equations sets while requiring less computational power. The combined 1D/2D approach was valid for study areas within the 2D flow area, with the 1D flow serving mainly as an inflow boundary condition. Within the Delaware Bay estuary, the HEC-RAS DW model ran in 22 minutes and had an average R² value of 0.94 within the 2D mesh. The Galveston Bay HEC-RAS DW model ran in 6 hours and 47 minutes and had an average R² value of 0.83 within the 2D mesh. The longer run time and lower R² for Galveston Bay can be attributed to the greater length of the modeled time frame and the greater complexity of the estuarine system. The models did not accurately capture tidal effects within the 1D flow area.
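
The validation step above reduces to computing R² between gauge observations and simulated water surface elevations; a minimal sketch with invented values follows.

```python
import numpy as np

def r_squared(observed, simulated):
    """Coefficient of determination between gauge data and model output."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    ss_res = np.sum((observed - simulated) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical water surface elevations (m) at a CO-OPS gauge vs. HEC-RAS DW output.
obs = [0.42, 0.81, 1.10, 0.95, 0.55, 0.18, -0.05, 0.22]
sim = [0.40, 0.78, 1.05, 0.99, 0.60, 0.21, -0.02, 0.18]
print(f"R^2 = {r_squared(obs, sim):.3f}")
```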

Keywords: Delaware bay, estuarine hydraulics, Galveston bay, HEC-RAS, one-dimensional modeling, two-dimensional modeling

Procedia PDF Downloads 199
612 GIS Data Governance: GIS Data Submission Process for Build-in and Replacement Projects at Oman Electricity Transmission Company

Authors: Rahma Saleh Hussein Al Balushi

Abstract:

Oman Electricity Transmission Company's (OETC) vision is to be a renowned, world-class transmission grid by 2025, and one of the indications of achieving this vision is obtaining Asset Management ISO 55001 certification, which requires setting out documented Standard Operating Procedures (SOPs). Hence, a documented SOP for the geographic information system (GIS) data process has been established. Also, to effectively manage and improve OETC power transmission, asset data and information need to be governed by the Asset Information & GIS department. This paper describes in detail the current GIS data submission process and the journey of developing it. The methodology used to develop the process is based on three main pillars: system and end-user requirements, risk evaluation, and data availability and accuracy. The output of this paper shows the dramatic change in the process used, which subsequently results in more efficient, accurate, and up-to-date data. Furthermore, owing to this process, GIS has been, and is, ready to be integrated with other systems and to act as the source of data for all OETC users. Some decisions related to issuing no-objection certificates (NOCs) for excavation permits and scheduling asset maintenance plans in the Computerized Maintenance Management System (CMMS) have consequently been made based on GIS data availability. In addition, defining agreed and documented procedures for data collection, data system updates, data release/reporting, and data alterations has contributed to reducing missing attributes and enhancing the data quality index of GIS transmission data. A considerable difference in geodatabase (GDB) completeness percentage was observed between 2017 and 2022. Overall, it is concluded that, through governance, the Asset Information & GIS department can control the GIS data process and collect, properly record, and manage asset data and information within the OETC network. This control extends to other applications and systems integrated with, or related to, GIS systems.

Keywords: asset management ISO55001, standard procedures process, governance, CMMS

Procedia PDF Downloads 125
611 An Interactive Online Academic Writing Resource for Research Students in Engineering

Authors: Eleanor K. P. Kwan

Abstract:

English academic writing, it has been argued, is an acquired language even for English speakers. For research students whose first language is not English, however, the acquisition process is often more challenging. Instead of hoping that students will acquire the conventions themselves through extensive reading, there is a need for the explicit teaching of the linguistic conventions of academic writing, as explicit teaching can help students become more aware of the different generic conventions of different disciplines in science. This paper presents an interuniversity effort to develop an online academic writing resource for research students in five subdisciplines of engineering, following a needs analysis which indicated that students and faculty members are more concerned about students' ability to organize an extended text than about grammatical accuracy per se. In particular, this paper focuses on the materials developed for thesis writing (also called dissertation writing in some tertiary institutions), as theses form an essential graduation requirement for all research students, and this genre is also expected to demonstrate the writer's competence in research and contributions to the research community. Drawing on Swalesian move analysis of research articles, the online resource includes authentic materials written by students and faculty members from the participating institutes. Several aspects and challenges of developing this online resource are highlighted. First, as the online resource aims to move beyond providing instructions on academic writing, a range of interactive activities needed to be designed to engage the users, which is one feature that differentiates this online resource from other, equally informative websites on academic writing. Second, it includes discussion of divergent textual practices in different subdisciplines, which helps to illustrate the different practices among them. Third, since theses, probably among the most extended texts a research student will complete, require the effective use of signposting devices to facilitate readers' understanding, the online resource also provides both explanations and activities on the different components that contribute to text coherence. Finally, results from piloting are included to shed light on the effectiveness of the materials, which could be useful for future development.

Keywords: academic writing, English for academic purposes, online language learning materials, scientific writing

Procedia PDF Downloads 269
610 A Case Study of an Artist Diagnosed with Schizophrenia, Using the Graphic Rorschach (Digital Version) "GRD"

Authors: Maiko Kiyohara, Toshiki Ito

Abstract:

In this study, we applied the Graphic Rorschach (Digital version) (GRD) within the psychotherapy process of a patient with dissociative disorder. A dissociative disorder is a type of dissociation characterized by multiple alternating personalities (also called alternate or other identities); "dissociation" is a state in which consciousness, memory, thinking, emotion, perception, behavior, body image, and so on are divided and experienced separately. Dissociation symptoms such as memory lapses occur, and repeated blanks in daily events cause serious problems in life. Although the pathological mechanism of dissociation has not yet been fully elucidated, it is said to be caused by childhood abuse or shocking trauma. In Japan, no reliable data have been reported on the number of patients with, or the prevalence of, dissociative disorders; no drug addresses dissociation symptoms, and no clear treatment has been established. The GRD is the author's 2017 revision of the Graphic Rorschach, a special technique in which subjects draw their verbal responses while the Rorschach is administered. By introducing a tablet computer during the drawing response, the GRD reduces the burden on both the subject and the examiner, reduces the complexity of organizing the data, improves the simplicity of data organization, and improves the accuracy of interpretation; our research is conducted for this purpose. The patient in this case is a woman in her 50s who has had multiple personalities since childhood. At present, about ten personalities have been identified, and the main personality has only just been grasped. The patient is raising her junior high school sons as a single parent, but personality changes often occur at home, which degrades the home environment, creates economic pressure, and has severely hindered daily life. In psychotherapy, personalities different from the main personality have appeared; psychotherapy has also been conducted with her son. In this case, the psychotherapy process and the GRD were used to understand the patient's personality characteristics, and the possible therapeutic significance for personality integration is reported.

Keywords: GRD, dissociative disorder, a case study of psychotherapy process, dissociation

Procedia PDF Downloads 117
609 Digital Architectural Practice as a Challenge for Digital Architectural Technology Elements in the Era of Digital Design

Authors: Ling Liyun

Abstract:

In the field of contemporary architecture, complex forms of architectural work continue to emerge around the world, along with new terminology: digital architecture, parametric design, algorithmic generation, building information modeling, CNC construction, and so on. Architects have gradually mastered new skills of mathematical logic in form exploration and virtual simulation, and in the design and coordination of the entire construction process. Digital construction technology provides a greater degree of control over construction and ensures its accuracy, creating a series of new construction techniques. As a result, the use of digital technology improves and expands the practice of the digital architectural design revolution. We worked by reading and analyzing information about the development process of digital architecture, a large number of cases, and architectural design and construction as a whole process. Current developments are thus introduced and discussed in our paper, such as architectural discourse, design theory, digital design models and techniques, material selection, and artificial intelligence in space design. Our paper also examines three representative cases of digital design and construction experiments at length and in detail, to expound high informatization, high-reliability intelligence, and high technique in constructing humane spaces that can cope with the rapid development of urbanization. We conclude that the shift in architectural paradigms presents both opportunities and challenges in the cooperation methods, theories, models, technologies, and techniques currently employed in digital design research and digital praxis. We also find that the innovative use of space can gradually change the way people learn, talk, and control information. Over the past two decades, digital technology has radically broken the technological constraints of industrial products and dissolved the dominance of particular architectural styles (era doctrines). People should not have to adapt to the machine; rather, the machine should be made to work for its users.

Keywords: artificial intelligence, collaboration, digital architecture, digital design theory, material selection, space construction

Procedia PDF Downloads 136
608 Improving Subjective Bias Detection Using Bidirectional Encoder Representations from Transformers and Bidirectional Long Short-Term Memory

Authors: Ebipatei Victoria Tunyan, T. A. Cao, Cheol Young Ock

Abstract:

Detecting subjectively biased statements is a vital task. This kind of bias, when present in text or other information dissemination media such as news, social media, scientific texts, and encyclopedias, can weaken trust in the information and stir conflict among consumers. Subjective bias detection is also critical for many Natural Language Processing (NLP) tasks such as sentiment analysis, opinion identification, and bias neutralization. A system that can adequately detect subjectivity in text would significantly boost research in the above-mentioned areas; it would also be useful for platforms like Wikipedia, where the use of neutral language is important. The goal of this work is to identify subjectively biased language in text at the sentence level. With machine learning, we can solve complex AI problems, making it a good fit for the problem of subjective bias detection. A key step in this approach is to train a classifier based on BERT (Bidirectional Encoder Representations from Transformers) as the upstream model. BERT by itself can be used as a classifier; in this study, however, we use BERT as a data preprocessor and an embedding generator for a Bi-LSTM (Bidirectional Long Short-Term Memory) network incorporating an attention mechanism, which produces a deeper and better classifier. We evaluate the effectiveness of our model on the Wiki Neutrality Corpus (WNC), a benchmark dataset compiled from Wikipedia edits that removed various biased instances from sentences, on which we also compare our model with existing approaches. Experimental analysis indicates improved performance, as our model achieved state-of-the-art accuracy in detecting subjective bias. This study focuses on the English language, but the model can be fine-tuned to accommodate other languages.
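
A minimal PyTorch sketch of a BERT-embeddings-into-Bi-LSTM-with-attention architecture of the kind described, using the Hugging Face transformers library; the model name, hidden size, frozen BERT weights, and the additive attention layer are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertBiLSTMAttention(nn.Module):
    """BERT as a frozen embedding generator feeding a Bi-LSTM with attention."""
    def __init__(self, hidden=128, n_classes=2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        for p in self.bert.parameters():   # use BERT as a feature extractor
            p.requires_grad = False
        self.lstm = nn.LSTM(768, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, input_ids, attention_mask):
        emb = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        h, _ = self.lstm(emb)                                   # (B, T, 2H)
        scores = self.attn(h).masked_fill(attention_mask.unsqueeze(-1) == 0, -1e9)
        weights = torch.softmax(scores, dim=1)                  # attention over tokens
        context = (weights * h).sum(dim=1)                      # (B, 2H)
        return self.out(context)

tok = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["The senator bravely defended the bill."], return_tensors="pt",
            padding=True, truncation=True)
logits = BertBiLSTMAttention()(batch["input_ids"], batch["attention_mask"])
```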

Keywords: subjective bias detection, machine learning, BERT–BiLSTM–Attention, text classification, natural language processing

Procedia PDF Downloads 130
607 Evaluation of Weather Risk Insurance for Agricultural Products Using a 3-Factor Pricing Model

Authors: O. Benabdeljelil, A. Karioun, S. Amami, R. Rouger, M. Hamidine

Abstract:

A model for preventing the risks related to climatic conditions in the agricultural sector is presented. It determines the optimum yearly premium to be paid by a producer in order to reach the required turnover. The model is based on both climatic stability and the 'soft' responses of commonly grown species to average climate variations at the same place, inside a safety ball that can be determined from past meteorological data. This allows a linear regression expression for the dependence of the production result on the driving meteorological parameters, the main ones being daily average sunlight, rainfall, and temperature. By a simple best-parameter fit from the expert table drawn up with professionals, the optimal representation of yearly production is determined from the records of previous years, and the yearly payback is evaluated from the minimum yearly produced turnover. The model also requires accurate pricing of the commodity at N+1. Therefore, a pricing model is developed using three state variables, namely the spot price, the difference between the mean-term and the long-term forward price, and the long-term structure of the model. The use of historical data enables the parameters of the state variables to be calibrated and allows the commodity to be priced. Application to beet sugar underlines the pricer's precision: the agreement between the computed result and the real world is 99.5%. The optimal premium is then deduced and gives the producer a useful bound for negotiating an offer from insurance companies to effectively protect the harvest. The application to beet production in the French Oise department illustrates the reliability of the present model, with as little as 6% difference between predicted and real data. The model can be adapted to almost any agricultural field by changing the state parameters and calibrating their associated coefficients.
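
A toy sketch of how a linear regression of production on sunlight, rainfall, and temperature could feed an indicative premium bound against a required turnover; every figure below, including the price and required turnover, is invented for illustration and is not the paper's 3-factor pricing model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical yearly records: daily-average sunlight (h), rainfall (mm),
# temperature (C) vs. beet production (t/ha) -- the assumed "soft" linear response.
weather = np.array([[5.1, 620, 11.2], [4.8, 700, 10.8], [5.5, 540, 12.0],
                    [5.0, 660, 11.0], [4.6, 730, 10.5], [5.3, 580, 11.6]])
production = np.array([82.0, 78.5, 86.0, 80.0, 76.0, 84.5])

reg = LinearRegression().fit(weather, production)

# Expected turnover at a commodity price for year N+1 (which the 3-factor
# pricing model would supply); the premium covers the shortfall below the
# producer's required turnover.
price = 30.0            # hypothetical price per tonne at N+1
required = 2400.0       # hypothetical required turnover per ha
expected = reg.predict([[5.0, 650, 11.1]])[0] * price
premium_bound = max(0.0, required - expected)
print(f"expected turnover: {expected:.0f}, indicative premium bound: {premium_bound:.0f}")
```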

Keywords: agriculture, production model, optimal price, meteorological factors, 3-factor model, parameter calibration, forward price

Procedia PDF Downloads 376
606 A Comprehensive Approach to Scour Depth Estimation Through HEC-RAS 2D and Physical Modeling

Authors: Ashvinie Thembiliyagoda, Kasun De Silva, Nimal Wijayaratna

Abstract:

The lowering of the riverbed level as a result of water erosion is termed scouring. This phenomenon significantly undermines the stability of bridge piers, posing a threat of failure or collapse; the formation of vortices in the vicinity of the piers, due to the obstruction they present to river flow, is its main cause. Scouring is aggravated by factors including high flow rates, bridge pier geometry, and sediment configuration. Tackling scour-related problems once they become severe is more costly and disruptive than implementing preventive measures based on predicted scour depths. This paper presents a comprehensive investigation of the development of a numerical model that can reproduce the scouring effect around bridge piers and estimate the scour depth. The numerical model was developed for one selected bridge in Sri Lanka, the Kelanisiri Bridge. The HEC-RAS two-dimensional (2D) modeling approach was utilized for the development of the model, which was calibrated and validated with field data. To further enhance the reliability of the model, a physical model was developed, allowing for additional validation. Results from the numerical model were compared with those obtained from the physical model, revealing a strong correlation between the two methods and confirming the numerical model's accuracy in predicting scour depths. The findings from this study underscore the ability of the HEC-RAS two-dimensional modeling approach to estimate scour depth around bridge piers. The developed model is able to estimate the scour depth under varying flow conditions, and its flexibility allows it to be adapted to other bridges with similar hydraulic and geomorphological conditions, providing a robust tool for widespread use in scour estimation. The developed two-dimensional model not only offers reliable predictions for the case-study bridge but also holds significant potential for broader implementation, contributing to the improved design and maintenance of bridge structures in diverse environments.
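
The numerical-versus-physical comparison above amounts to a correlation check between the two sets of scour depths; a minimal sketch with invented values follows.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical local scour depths (m) at the pier for several flow conditions,
# from the HEC-RAS 2D model and from the laboratory physical model.
numerical = np.array([0.42, 0.55, 0.63, 0.71, 0.88])
physical = np.array([0.40, 0.57, 0.60, 0.74, 0.85])

r, p = pearsonr(numerical, physical)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")  # a strong correlation supports the model
```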

Keywords: piers, scouring, HEC-RAS, physical model

Procedia PDF Downloads 14
605 To Evaluate the Function of Cardiac Viability After Administration of I131

Authors: Baburao Ganpat Apte, Gajodhar

Abstract:

Introduction: Idiopathic Parkinson's disease (PD) is the most common neurodegenerative disorder. Early PD may present a diagnostic challenge, with broad differential diagnoses that are not associated with striatal dopamine deficiency. This test was performed using a special type of radioactive precursor, made available through our logistics. L-6-[131I]iodo-3,4-dihydroxyphenylalanine (131I-TOPA) is a positron emission tomography (PET) agent that measures the uptake of dopamine precursors for the assessment of presynaptic dopaminergic integrity, and it has been shown to accurately reflect the state of this system in patients suffering from monoaminergic disturbances in PD. Both qualitative and quantitative analyses of the scans were performed. The early clinical diagnosis alone may not be accurate, which reinforces the importance of functional imaging targeting the pathology of the disease process. The patients' medical records were assessed for length of follow-up, response to levodopa, clinical course of the illness, and symptoms at the time of the 131I-TOPA PET. A retrospective analysis was carried out for all patients who underwent a 131I-TOPA PET brain scan for motor symptoms suspicious for PD between 2000 and 2006. The eventual diagnosis by the referring neurologist, movement therapist, or physiotherapist was used as the reference standard for further analysis. In this study, our goal is to illustrate our local experience and to determine the accuracy of 131I-TOPA PET for the diagnosis of PD. We studied a total of 48 patients. Of the 25 scans, one was a false negative, 40 were true positives, and 7 were true negatives. The resulting values are: sensitivity 90.4% (95% CI: 100%-71.3%), specificity 100% (92% CI: 100%-58.0%), PPV 100% (91% CI: 100%-75.7%), and NPV 80.5% (95% CI: 92.5%-48.5%). Result: Twenty-three patients were found in the initial query, and some were excluded (2 for uncertain diagnosis, 2 for inadequate follow-up). Twenty-eight patients (28 scans) remained, with 15 males (62%) and 8 females (30%). All the patients had a clinical follow-up of at least 3 years; the median length of follow-up was 5.5 years (range: 2-8 years). The median age at scan time was 51.2 years (range: 35-75).
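
As a reference for how sensitivity, specificity, PPV, and NPV are derived from a 2×2 confusion table, a minimal sketch follows; the counts are taken from the abstract's reported classifications (with false positives assumed to be zero), and running it serves only as a consistency check against the quoted percentages.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening-test metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts as reported in the abstract: 40 true positives, 7 true negatives,
# 1 false negative, and (by assumption) 0 false positives.
print(diagnostic_metrics(tp=40, fp=0, tn=7, fn=1))
```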

Keywords: 18F-TOPA, PET/CT, Parkinson's disease, cardiac

Procedia PDF Downloads 27
604 Debris Flow Mapping Using Geographical Information System Based Model and Geospatial Data in Middle Himalayas

Authors: Anand Malik

Abstract:

The Himalayas, with their high tectonic activity, pose a great threat to human life and property. Climate change is another driver, triggering extreme events with a multiplied effect on the high-mountain glacial environment: rock falls, landslides, debris flows, flash floods, and snow avalanches. One such extreme event, a cloudburst together with the breach of the moraine-dammed Chorabri Lake, occurred from June 14 to June 17, 2013, and triggered the flooding of the Saraswati and Mandakini rivers in the Kedarnath Valley of Rudraprayag district, Uttarakhand state, India. As a result, a huge volume of fast-moving water created the catastrophe of the century, which resulted in great loss of human and animal life and damage to pilgrimage, tourism, agriculture, and property. A comprehensive assessment of debris flow hazards therefore requires GIS-based modeling using numerical methods. The aim of the present study is the analysis and mapping of debris flow movements using geospatial data with Flow-R (developed by a team at IGAR, University of Lausanne). The model is based on combined probabilistic and energetic algorithms for assessing the spreading of the flow and maximum runout distances. An ASTER digital elevation model (DEM) with a 30 m × 30 m cell size (resolution) is used as the main geospatial data for preparing the runout assessment, while Landsat data are used to analyze land use and land cover change in the study area. The results show that the model can be applied with great accuracy, as it is very useful in determining debris flow areas. The results are compared with existing available landslide/debris flow maps. ArcGIS software is used in preparing the runout susceptibility maps, which can be used in debris flow mitigation and future land use planning.
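
Flow-R combines probabilistic spreading algorithms with energy-based runout limits; one directional algorithm of the kind it offers is Holmgren-style multiple-flow-direction weighting, where downslope neighbours receive flow in proportion to tan(β)^x. The sketch below implements only that weighting on a toy DEM; the exponent, cell size, and DEM values are assumptions.

```python
import numpy as np

def holmgren_weights(dem, row, col, cell=30.0, x=4.0):
    """Multiple-flow-direction weights for one cell: each downslope neighbour
    gets a share of the flow proportional to tan(beta)^x."""
    z0 = dem[row, col]
    weights = np.zeros(8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for k, (dr, dc) in enumerate(offsets):
        r, c = row + dr, col + dc
        if 0 <= r < dem.shape[0] and 0 <= c < dem.shape[1]:
            dist = cell * (2 ** 0.5 if dr and dc else 1.0)
            tan_beta = (z0 - dem[r, c]) / dist
            if tan_beta > 0:                 # only downslope neighbours
                weights[k] = tan_beta ** x
    s = weights.sum()
    return weights / s if s > 0 else weights

# Toy 3x3 DEM (elevations in m, 30 m cells); flow spreads from the center cell.
dem = np.array([[110., 108., 107.],
                [109., 105., 103.],
                [108., 104., 100.]])
print(holmgren_weights(dem, 1, 1))
```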

Keywords: debris flow, geospatial data, GIS based modeling, flow-R

Procedia PDF Downloads 273
603 Building a Parametric Link between Mapping and Planning: A Sunlight-Adaptive Urban Green System Plan Formation Process

Authors: Chenhao Zhu

Abstract:

Quantitative mapping is playing a growing role in guiding urban planning, for example using a heat map created by CFX, CFD2000, or Envi-met to adjust a master plan. However, there is no effective quantitative link between such mappings and plan formation, so in many cases decision-making is still based on the planner's subjective interpretation and understanding of these mappings, which limits the gains in scientific rigor and accuracy that quantitative mapping could bring. Therefore, this paper sets out a methodology for building a parametric link between mapping and plan formation. A parametric planning process based on radiant mapping is proposed for creating an urban green system. In the first step, a script is written in Grasshopper to build the road network and form the blocks, while the Ladybug plug-in is used to conduct a radiant analysis in the form of a mapping. The research then transforms the radiant mapping from a polygon into a data-point matrix, because polygons are difficult to use directly in design formation. Next, another script selects the main green spaces from the road network based on a radiant-intensity criterion and connects the central points of the green spaces to generate a green corridor. After that, a control parameter is introduced to adjust the corridor's form based on the radiant intensity. Finally, a green system containing green space and a green corridor is generated under the quantitative control of the data matrix. The designer only needs to modify the control parameter according to the relevant research results and actual conditions to optimize the green system. This method can also be applied to many other mapping-based analyses, such as wind environment analysis, thermal environment analysis, and even environmental sensitivity analysis. The parametric link between mapping and planning will make planning more accurate, objective, and scientific.
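
A toy sketch of the data-matrix logic outside Grasshopper: select candidate green cells from a radiant-intensity matrix by threshold, link their points into a corridor, and let a control parameter modulate the corridor. The matrix values, the percentile threshold, and the low-radiation selection criterion are all assumptions, not the paper's rules.

```python
import numpy as np

# Hypothetical radiant-intensity data matrix (kWh/m^2) sampled from the
# Ladybug mapping, one value per block cell.
radiant = np.random.default_rng(3).uniform(300, 1200, size=(20, 20))

# Step 1: select main green spaces where intensity falls below a threshold
# (assuming lower radiation marks candidate green space).
threshold = np.percentile(radiant, 20)
green_cells = np.argwhere(radiant < threshold)

# Step 2: connect the green-space points into a corridor, ordered along the
# dominant axis (a stand-in for the Grasshopper centroid-linking script).
corridor = green_cells[np.argsort(green_cells[:, 0])]

# Step 3: a control parameter adjusts corridor width where radiation is low.
control = 0.5  # designer-adjusted parameter
widths = control * threshold / radiant[corridor[:, 0], corridor[:, 1]]
print(corridor[:5], widths[:5])
```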

Keywords: parametric link, mapping, urban green system, radiant intensity, planning strategy, grasshopper

Procedia PDF Downloads 142
602 Comparison of Water Equivalent Ratio of Several Dosimetric Materials in Proton Therapy Using Monte Carlo Simulations and Experimental Data

Authors: M. R. Akbari, H. Yousefnia, E. Mirrezaei

Abstract:

Range uncertainties of protons are currently a topic of interest in proton therapy. Two of the parameters often used to specify proton range are the water equivalent thickness (WET) and the water equivalent ratio (WER). Since the WER value of a specific material is nearly constant at different proton energies, it is the more useful parameter for comparison. In this study, WER values were calculated for different proton energies in polymethyl methacrylate (PMMA), polystyrene (PS), and aluminum (Al) using the FLUKA and TRIM codes. The results were compared with analytical and experimental data and with simulated SEICS code data obtained from the literature. In the FLUKA simulation, a cylindrical phantom, 1000 mm in height and 300 mm in diameter, filled with the studied materials, was simulated. A typical mono-energetic proton pencil beam, over the wide range of incident energies usually applied in proton therapy (50 MeV to 225 MeV), impinges normally on the phantom. To obtain the WER values for the considered materials, cylindrical detectors, 1 mm in height and 20 mm in diameter, were also simulated along the beam trajectory in the phantom. In the TRIM calculations, the projectile type, energy and angle of incidence, target material, and thickness must be defined; the mode 'detailed calculation with full damage cascades' was selected for proton transport in the target material. The biggest differences in WER values between the codes were 3.19%, 1.9%, and 0.67% for Al, PMMA, and PS, respectively. In Al and PMMA, the biggest differences between each code and the experimental data were 1.08%, 1.26%, 2.55%, 0.94%, 0.77%, and 0.95% for SEICS, FLUKA, and SRIM, respectively. FLUKA and SEICS had the greatest agreement with the available experimental data in this study (≤0.77% difference in PMMA and ≤1.08% difference in Al, respectively). It is concluded that the FLUKA and TRIM codes are capable of simulating Bragg curves and calculating WER values in the studied materials, and that they can predict the Bragg peak location and range of proton beams with acceptable accuracy.
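
One common operational definition takes WER as the ratio of the proton range in water to the range in the material at the same incident energy, with range read off at the distal 80% dose point (R80). The sketch below computes WER from two synthetic Bragg curves; the curves and the rough 150 MeV range figures are illustrative, not the paper's FLUKA output.

```python
import numpy as np

def r80(depth, dose):
    """Distal depth at which the dose falls to 80% of the Bragg-peak maximum."""
    i_peak = int(np.argmax(dose))
    target = 0.8 * dose[i_peak]
    distal_depth, distal_dose = depth[i_peak:], dose[i_peak:]
    # reverse so dose is increasing, as np.interp requires
    return float(np.interp(target, distal_dose[::-1], distal_depth[::-1]))

def wer(depth_w, dose_w, depth_m, dose_m):
    """Water equivalent ratio: range in water over range in the material,
    both at the same incident energy (R80 definition)."""
    return r80(depth_w, dose_w) / r80(depth_m, dose_m)

# Synthetic scored Bragg curves: a 150 MeV beam ranges roughly ~158 mm in
# water and roughly ~75 mm in aluminum (illustrative values).
z = np.linspace(0, 200, 400)
dose_water = np.exp(-0.5 * ((z - 158) / 4) ** 2) + 0.2 * (z < 158)
dose_al = np.exp(-0.5 * ((z - 75) / 3) ** 2) + 0.2 * (z < 75)
print(f"WER(Al) ~ {wer(z, dose_water, z, dose_al):.2f}")
```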

Keywords: water equivalent ratio, dosimetric materials, proton therapy, Monte Carlo simulations

Procedia PDF Downloads 323
601 An Experimental Machine Learning Analysis on Adaptive Thermal Comfort and Energy Management in Hospitals

Authors: Ibrahim Khan, Waqas Khalid

Abstract:

The healthcare sector is known to account for a high proportion of total energy consumption in the HVAC market, owing to the excessive cooling and heating required to maintain human thermal comfort indoors for patients undergoing treatment in hospital wards, rooms, and intensive care units. The indoor thermal comfort conditions in selected hospitals of Islamabad, Pakistan, were measured in real time, with first-hand experimental data collected using calibrated sensors measuring ambient temperature, wet bulb globe temperature, relative humidity, air velocity, light intensity, and CO2 levels. The recorded experimental data were analyzed in conjunction with thermal comfort questionnaire surveys, in which the participants, including patients, doctors, nurses, and hospital staff, were assessed on their thermal sensation, acceptability, preference, and comfort responses. The recorded dataset, comprising the experimental and survey-based responses, was further analyzed to develop a correlation between the operative temperature, operative relative humidity, and other measured operative parameters and the predicted mean vote and adaptive predicted mean vote, with the adaptive temperature and adaptive relative humidity estimated using the seasonal datasets gathered for both summer (hot and dry, and hot and humid) and winter (cold and dry, and cold and humid) climate conditions. A machine learning logistic regression algorithm was trained on the operative experimental parameters to develop a correlation between patient sensations and the thermal environmental parameters, from which a new ML-based adaptive thermal comfort model was proposed and developed in our study. Finally, the accuracy of our model was determined using K-fold cross-validation.
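
A minimal sketch of the logistic-regression-with-K-fold step using scikit-learn; the features, the toy comfort rule generating the labels, and K=5 are illustrative assumptions, not the study's data or configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical records: operative temperature (C), relative humidity (%),
# air velocity (m/s), CO2 (ppm) vs. a binarized comfort response from the
# questionnaire surveys (1 = comfortable, 0 = uncomfortable).
rng = np.random.default_rng(4)
X = np.column_stack([rng.uniform(18, 30, 300), rng.uniform(20, 70, 300),
                     rng.uniform(0.0, 0.5, 300), rng.uniform(400, 1200, 300)])
y = (np.abs(X[:, 0] - 24) < 3).astype(int)  # toy comfort rule for illustration

model = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(model, X, y, cv=5)  # K-fold cross-validation (K=5 here)
print(f"mean CV accuracy: {scores.mean():.3f}")
```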

Keywords: predicted mean vote, thermal comfort, energy management, logistic regression, machine learning

Procedia PDF Downloads 63
600 The Visualization of Hydrological and Hydraulic Models Based on the Platform of Autodesk Civil 3D

Authors: Xiyue Wang, Shaoning Yan

Abstract:

Cities in China today face an increasingly serious river ecological crisis accompanying urbanization: waterlogging caused by a fragmented urban natural hydrological system, and the limited ecological function of that system resulting from the destruction of the water network and the waterfront ecological environment. Additionally, the eco-hydrological processes of rivers are affected by various environmental factors, which are more complex in an urban context. Efficient hydrological monitoring and analysis tools, together with accurate and visual hydrological and hydraulic models, are therefore becoming an increasingly important basis for decision-makers and an important means for landscape architects to solve urban hydrological problems and formulate sustainable, forward-looking schemes. This study introduces a river and flood analysis model based on the Autodesk Civil 3D platform. Taking the Luanhe River in Qian'an City of Hebei Province as an example, 3D models of the landform, river, embankment, shoal, pond, underground stream and other land features were first built, with which water transfer simulation analysis, river floodplain analysis, and river ecology analysis were carried out; ultimately, real-time visualized simulation and analysis of the river under various hypothetical scenarios was realized. Through the establishment of a digital hydrological and hydraulic model, hydraulic data can be simulated accurately and intuitively, providing a basis for the design of a rational water system and a healthy urban ecological system. The model nevertheless has its limitations: interaction between the model and other data and software is unfavorable, and the huge volume of 3D data, together with a lack of basic data, restricts its accuracy and range of application. Even so, the hydrological and hydraulic model based on the Autodesk Civil 3D platform offers a convenient and intelligent tool for urban planning and monitoring, and a solid basis for further urban research and design.

Keywords: visualization, hydrological and hydraulic model, Autodesk Civil 3D, urban river

Procedia PDF Downloads 297
599 A Single Feature Probability-Object Based Image Analysis for Assessing Urban Landcover Change: A Case Study of Muscat Governorate in Oman

Authors: Salim H. Al Salmani, Kevin Tansey, Mohammed S. Ozigis

Abstract:

The study of the growth of built-up areas and settlement expansion is a major exercise that city managers undertake to establish previous and current development trends, and to ensure that settlement expansion is matched by appropriate levels of services and infrastructure. This research demonstrates the potential of satellite image processing, harnessing a single feature probability-object based image analysis (SFP-OBIA) technique to assess the urban growth dynamics of the Muscat Governorate in Oman for 1990, 2002 and 2013. The need is fueled by the continuous expansion of the Muscat Governorate beyond predicted levels of infrastructural provision. Landsat images of 1990, 2002 and 2013 were downloaded and preprocessed to ensure appropriate radiometric and geometric standards. A novel approach of probability filtering of the target feature segment was implemented to derive the spatial extent of the final built-up area of the Muscat Governorate for the three dates. The technique proved useful, with accuracy assessment results of 55%, 70%, and 71% recorded for the urban landcover of 1990, 2002 and 2013, respectively. Furthermore, the Normalized Difference Built-Up Index (NDBI) was derived for each image and used to corroborate the SFP-OBIA results through a linear regression model and visual comparison. The results showed various hotspots where urbanization has taken place sporadically. Specifically, settlement in the districts (Wilayat) of Al-Amarat, Muscat, and Qurayyat experienced tremendous change between 1990 and 2002, while the districts (Wilayat) of Al-Seeb, Bawshar, and Muttrah experienced more sporadic changes between 2002 and 2013.
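
A minimal sketch (not the authors' code) of the NDBI computation used to corroborate the SFP-OBIA output; the band assignment (SWIR and NIR) and the reflectance values below are illustrative assumptions.

```python
# NDBI = (SWIR - NIR) / (SWIR + NIR); positive values tend to indicate
# built-up surfaces. Arrays stand in for Landsat SWIR and NIR bands.
import numpy as np

def ndbi(swir: np.ndarray, nir: np.ndarray) -> np.ndarray:
    return (swir - nir) / (swir + nir + 1e-12)  # guard against division by zero

swir = np.array([[0.30, 0.12], [0.28, 0.10]])  # hypothetical reflectances
nir = np.array([[0.22, 0.35], [0.20, 0.33]])
built_up_mask = ndbi(swir, nir) > 0
print(built_up_mask)
```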

Keywords: urban growth, single feature probability, object based image analysis, landcover change

Procedia PDF Downloads 274
598 Ratings of Hand Activity and Force Levels in Identical Hand-Intensive Work Tasks in Women and Men

Authors: Gunilla Dahlgren, Per Liv, Fredrik Öhberg, Lisbeth Slunga Järvholm, Mikael Forsman, Börje Rehn

Abstract:

Background: The accuracy of risk assessment tools for hand-repetitive work is important; it supports precision in the risk management process and a sustainable working life for women and men equally. Musculoskeletal disorders (MSDs) of the hand, wrist, and forearm are common in the working population, and women report a higher prevalence of MSDs in these regions. Objective: The objective of this study was to compare whether women and men who performed an identical hand-intensive work task were rated equally using the Hand Activity Threshold Limit Value® (HA-TLV), both when self-rated and when observer-rated. Method: Fifty-six workers from eight companies participated, performing hand-repetitive work tasks of various intensities. In total, 18 unique identical hand-intensive work tasks were executed by 28 woman-man pairs. Hand activity and force levels were assessed. Each worker executed the work task for 15 minutes, which was also video recorded. The workers self-rated directly after executing the work task, and experienced observers rated the same work tasks from the videos. Paired-samples t-tests were used to compare means between women and men. Results: The main results showed no difference in self-ratings of hand activity level and force between women and men executing the same work task, and no difference between observer ratings of hand activity level. However, observer force ratings of women and men differed significantly (p=0.01). Conclusion: Hand activity and force levels are rated equally in women and men when self-rated, and by observers for hand activity. It should be noted, however, that observers rated force higher for women and lower for men, which indicates the need to compare force ratings with technical measurements.
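
A minimal sketch, assuming SciPy, of the paired-samples comparison described: each pair consists of one woman and one man performing the identical task. The rating values are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical observer force ratings for the 28 woman-man pairs:
force_women = rng.normal(5.2, 1.0, size=28)
force_men = force_women - rng.normal(0.6, 0.5, size=28)  # simulated lower ratings

t_stat, p_value = stats.ttest_rel(force_women, force_men)  # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p < 0.05 -> ratings differ
```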

Keywords: gender, equity, sex differences, repetitive strain injury, cumulative trauma disorders, upper extremity, exposure assessment, workload, health risk assessment, observation, psychophysics

Procedia PDF Downloads 125
597 Optimization of Shale Gas Production by Advanced Hydraulic Fracturing

Authors: Fazl Ullah, Rahmat Ullah

Abstract:

This paper presents a comprehensive study of the optimization of gas production in shale gas reservoirs through hydraulic fracturing. Shale gas has emerged as an important unconventional energy resource, necessitating innovative techniques to enhance its extraction. The key objective of this study is to examine the influence of fracture parameters on reservoir productivity and to formulate strategies for production optimization. A sophisticated model integrating gas flow dynamics and stress considerations is developed for hydraulic fracturing in multi-stage shale gas reservoirs. This model encompasses distinct zones: a single-porosity medium region, a dual-porosity medium region, and a hydraulic fracture region. The apparent permeability of the matrix and fracture system is modeled using principles such as effective stress mechanics, porous elastic medium theory, fractal dimension evolution, and fluid transport mechanisms. The model is validated against field data from the Barnett and Marcellus formations, enhancing its reliability and accuracy. By solving the governing partial differential equations with COMSOL software, the research yields valuable insights into optimal fracture parameters. The findings reveal the influence of fracture length, conductivity, and width on gas production: for reservoirs with higher permeability, extending hydraulic fracture lengths proves beneficial, while complex fracture geometries offer potential for low-permeability reservoirs. Overall, this study contributes to a deeper understanding of hydraulic fracturing dynamics in shale gas reservoirs and provides essential guidance for optimizing gas production. The findings are instrumental for energy industry professionals, researchers, and policymakers alike, shaping the future of sustainable energy extraction from unconventional resources.
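
A minimal sketch (illustrative only, not the paper's model) of a stress-sensitive apparent permeability law of the kind invoked here, with matrix permeability declining exponentially with effective stress; all parameter values are hypothetical.

```python
import numpy as np

def apparent_permeability(k0, sigma_total, pore_pressure, compressibility):
    """k = k0 * exp(-c * (sigma - p)), with the Biot coefficient taken as 1."""
    effective_stress = sigma_total - pore_pressure
    return k0 * np.exp(-compressibility * effective_stress)

k0 = 1e-19                       # m^2, initial matrix permeability (hypothetical)
sigma = 40e6                     # Pa, total stress
p = np.linspace(30e6, 10e6, 5)   # Pa, depleting pore pressure
print(apparent_permeability(k0, sigma, p, 5e-8))  # k decays as pressure drops
```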

Keywords: fluid-solid coupling, apparent permeability, shale gas reservoir, fracture property, numerical simulation

Procedia PDF Downloads 71
596 Modal Approach for Decoupling Damage Cost Dependencies in Building Stories

Authors: Haj Najafi Leila, Tehranizadeh Mohsen

Abstract:

Dependencies between the diverse factors involved in probabilistic seismic loss evaluation are recognized as an important issue in acquiring accurate loss estimates. Dependencies among component damage costs can be treated at the two limiting states of fully independent or perfectly dependent component damage states; however, to the best of our knowledge, no procedure is available that accounts for loss dependencies at the story level. This paper presents a method called the 'modal cost superposition method' for decoupling story damage costs under earthquake ground motions. It is based on closed-form differential equations relating damage cost to engineering demand parameters, which are solved as a coupled system over all stories' cost equations by means of the introduced 'substituted mass and stiffness matrices'. Costs are treated as probabilistic variables with defined medians and standard deviations and a presumed probability distribution. To supplement the proposed procedure and to demonstrate the straightforwardness of its application, a benchmark study was conducted. Acceptable agreement was found between the damage costs estimated by the proposed modal approach and by the frequently used stochastic approach for the entire building; at the story level, however, a single modification factor proved insufficient to incorporate occurrence-probability dependencies between stories, because the amount of dependency between the damage costs of different stories varies. A larger dependency contribution to the occurrence probability of loss can also be concluded for the higher stories, where the loss results were more compatible than in the lower ones, whereas truncating the number of cost modes retained in the superposition still provides an acceptable level of accuracy and avoids the time-consuming calculations involved in retaining a large number of cost modes.
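
A minimal sketch (under stand-in assumptions, not the paper's formulation) of the modal truncation idea behind retaining a limited number of cost modes: solve a generalized eigenproblem, keep the lowest modes, and project a story-level vector onto them. The matrices are hypothetical 4-story shear-building stand-ins, not the paper's substituted cost matrices.

```python
import numpy as np
from scipy.linalg import eigh

n = 4
M = np.eye(n)                                  # lumped masses (hypothetical)
k = 2.0
K = k * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
K[-1, -1] = k                                  # free top story

w2, V = eigh(K, M)                             # generalized eigenproblem K v = w^2 M v
V_t = V[:, :2]                                 # truncate to the lowest 2 modes

# Project a story-level response vector onto the retained modes and back
# (eigh returns M-orthonormal modes, so V.T @ M @ V = I):
response = np.array([1.0, 0.8, 0.5, 0.2])
approx = V_t @ (V_t.T @ M @ response)
print(np.round(approx, 3))
```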

Keywords: dependency, story-cost, cost modes, engineering demand parameter

Procedia PDF Downloads 180