Search results for: text preprocessing
1012 A Clustering-Based Approach for Weblog Data Cleaning
Authors: Amine Ganibardi, Cherif Arab Ali
Abstract:
This paper addresses the data cleaning issue as a part of web usage data preprocessing within the scope of Web Usage Mining. Weblog data recorded by web servers within log files reflect usage activity, i.e., end-users’ clicks and underlying user-agents’ hits. As Web Usage Mining is interested in end-users’ behavior, user-agents’ hits are referred to as noise to be cleaned off before mining. Filtering hits from clicks is not trivial for two reasons: a server records requests interlaced in sequential order regardless of their source or type, and website resources may be set up as requestable interchangeably by end-users and user-agents. The current methods are content-centric, based on filtering heuristics of relevant/irrelevant items in terms of some cleaning attributes, e.g., the website’s resource filetype extensions, the website’s resources pointed to by hyperlinks/URIs, HTTP methods, user-agents, etc. These methods need exhaustive extra-weblog data and prior knowledge of the relevant and/or irrelevant items to be assumed as clicks or hits within the filtering heuristics. Such methods are not appropriate for the dynamic/responsive Web for three reasons: resources may be set up as clickable by end-users regardless of their type, website resources are indexed by frame names without filetype extensions, and web contents are generated and cancelled differently from one end-user to another. In order to overcome these constraints, a clustering-based cleaning method centered on the logging structure is proposed. This method focuses on the statistical properties of the logging structure at the requested and referring resources attribute levels. It is insensitive to logging content and does not need extra-weblog data. The statistical property used captures the structure of the logging feature generated by webpage requests in terms of clicks and hits.
Since a webpage consists of a single URI and several components, this feature results in a single-click-to-multiple-hits ratio in terms of the requested and referring resources. Thus, the clustering-based method is meant to identify two clusters based on the application of an appropriate distance to the frequency matrix at the requested and referring resources levels. As the ratio of clicks to hits is one to many, the clicks cluster is the smaller one in number of requests. Hierarchical Agglomerative Clustering based on a pairwise distance (Gower) and average linkage has been applied to four logfiles of dynamic/responsive websites whose click-to-hit ratios range from 1/2 to 1/15. The optimal clustering set on the basis of average linkage and maximum inter-cluster inertia always results in two clusters. The evaluation of the smallest cluster, referred to as the clicks cluster, in terms of confusion-matrix indicators results in a 97% true positive rate. The content-centric cleaning methods, i.e., conventional and advanced cleaning, resulted in a lower rate of 91%. Thus, the proposed clustering-based cleaning outperforms the content-centric methods for dynamic and responsive web design without the need for any extra-weblog data. Such an improvement in cleaning quality is likely to refine dependent analyses.
Keywords: clustering approach, data cleaning, data preprocessing, weblog data, web usage data
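The clustering step above can be sketched as follows. This is an illustrative outline, not the paper's implementation: it uses Euclidean distance in place of Gower's pairwise distance, and the two-feature frequency matrix is synthetic.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Synthetic frequency matrix: one row per logged request, with
# frequency-based features at the requested/referring resource levels.
# Clicks (page URIs) are rare; hits (page components) are frequent.
rng = np.random.default_rng(0)
clicks = rng.normal(loc=8.0, scale=0.5, size=(10, 2))
hits = rng.normal(loc=1.0, scale=0.5, size=(90, 2))
X = np.vstack([clicks, hits])

# Hierarchical agglomerative clustering with average linkage, cut at
# two clusters (the paper applies Gower's distance; Euclidean here).
Z = linkage(pdist(X), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")

# Because the click-to-hit ratio is one to many, the clicks cluster
# is taken to be the smaller of the two.
sizes = {lab: int(np.sum(labels == lab)) for lab in np.unique(labels)}
clicks_label = min(sizes, key=sizes.get)
print(sizes[clicks_label], sizes)
```

The smaller cluster is then kept as the clicks and the larger discarded as noise, mirroring the cleaning rule described in the abstract.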
Procedia PDF Downloads 170
1011 The Translation of Code-Switching in African Literature: Comparing the Two German Translations of Ngugi Wa Thiongo’s "Petals of Blood"
Authors: Omotayo Olalere
Abstract:
The relevance of code-switching for intercultural communication through literary translation cannot be overemphasized. The translation of code-switching and its implications for translation studies have been studied in the context of African literature. In these cases, code-switching was examined in the more general terms of its usage in source texts and not particularly in Ngugi’s novels and their translations. In addition, the functions of translation and code-switching in the lyrics of some popular African songs have been studied, but such work is related more to oral performance than to written literature. As such, little has been done on the German translation of code-switching in African works. This study intends to fill this lacuna by examining the concept of code-switching in the German translations of Ngugi’s Petals of Blood. The aim is to highlight the significance of code-switching as a phenomenon in this African (Ngugi’s) novel written in English and also to focus on its representation in the two German translations. The target texts to be used are Verbrannte Blueten and Land der flammenden Blueten. “Abrogation” as a concept will play an important role in the analysis of the data. Findings will show that the ideology of a translator plays a huge role in representing the concept of “abrogation” in the translation of code-switching in the selected source text. The study will contribute to knowledge in translation studies by bringing to the limelight the need to foreground aspects of language contact in translation theory and practice, particularly in the African context. Relevant translation theories adopted for the study include Bandia’s (2008) postcolonial theory of translation and Snell-Hornby’s (1988) cultural translation theory.
Keywords: code switching, german translation, ngugi wa thiong’o, petals of blood
Procedia PDF Downloads 95
1010 Predicting Personality and Psychological Distress Using Natural Language Processing
Authors: Jihee Jang, Seowon Yoon, Gaeun Son, Minjung Kang, Joon Yeon Choeh, Kee-Hong Choi
Abstract:
Background: Self-report multiple-choice questionnaires have been widely utilized to quantitatively measure one’s personality and psychological constructs. Despite several strengths (e.g., brevity and utility), self-report multiple-choice questionnaires have considerable limitations in nature. With the rise of machine learning (ML) and natural language processing (NLP), researchers in the field of psychology are widely adopting NLP to assess psychological constructs and predict human behaviors. However, there is a lack of connection between the work being performed in computer science and that in psychology, due to small data sets and unvalidated modeling practices. Aims: The current article introduces the study method and procedure of phase II, which includes the interview questions for the five-factor model (FFM) of personality developed in phase I. This study aims to develop the interview (semi-structured) and open-ended questions for the FFM-based personality assessments, specifically designed with experts in the field of clinical and personality psychology (phase 1), and to collect the personality-related text data using the interview questions and self-report measures on personality and psychological distress (phase 2). The purpose of the study includes examining the relationship between natural language data obtained from the interview questions, measuring the FFM personality constructs, and psychological distress to demonstrate the validity of the natural-language-based personality prediction. Methods: The phase I (pilot) study was conducted on fifty-nine native Korean adults to acquire the personality-related text data from the interview (semi-structured) and open-ended questions based on the FFM of personality. The interview questions were revised and finalized with feedback from the external expert committee, consisting of personality and clinical psychologists.
Based on the established interview questions, a total of 425 Korean adults were recruited using a convenience sampling method via an online survey. The text data collected from interviews were analyzed using natural language processing. The results of the online survey, including demographic data, depression, anxiety, and personality inventories, were analyzed together in the model to predict individuals’ FFM of personality and the level of psychological distress (phase 2).
Keywords: personality prediction, psychological distress prediction, natural language processing, machine learning, the five-factor model of personality
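A minimal sketch of the prediction setup described: mapping interview text to a self-report trait score with a bag-of-words linear model. The texts, scores, and English vocabulary below are invented for illustration; the study works with Korean interview data and far richer NLP features.

```python
import numpy as np
from collections import Counter

# Hypothetical mini-corpus: interview answers paired with a self-report
# score on one FFM trait (e.g., extraversion). All data are made up.
texts = [
    "i love meeting new people and going to parties",
    "i prefer quiet evenings alone with a book",
    "talking with friends gives me a lot of energy",
    "crowds tire me out and i avoid social events",
]
scores = np.array([4.5, 1.8, 4.1, 1.5])  # self-report trait scores

# Bag-of-words vectorization over a shared vocabulary.
vocab = sorted({w for t in texts for w in t.split()})
index = {w: i for i, w in enumerate(vocab)}
X = np.zeros((len(texts), len(vocab)))
for row, t in enumerate(texts):
    for w, c in Counter(t.split()).items():
        X[row, index[w]] = c

# Least-squares linear model mapping word counts to the trait score.
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
pred = X @ coef
```

With more samples than features, the same fit would generalize rather than interpolate; validating such a model against self-report measures is exactly the phase-2 goal described above.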
Procedia PDF Downloads 79
1009 An Inviscid Compressible Flow Solver Based on Unstructured OpenFOAM Mesh Format
Authors: Utkan Caliskan
Abstract:
Two types of numerical codes based on the finite volume method are developed in order to solve the compressible Euler equations and simulate the flow through a forward-facing step channel. Both algorithms use the AUSM+-up (Advection Upstream Splitting Method) scheme for flux splitting and a two-stage Runge-Kutta scheme for time stepping. In this study, the flux calculations differentiate the algorithm based on the OpenFOAM mesh format, called the 'face-based' algorithm, from the basic algorithm, called the 'element-based' algorithm. The face-based algorithm avoids redundant flux computations and is also more flexible with hybrid grids. Moreover, some of OpenFOAM’s preprocessing utilities can be used on the mesh. Parallelization of the face-based algorithm, for which atomic operations are needed due to the shared-memory model, is also presented. For several mesh sizes, a 2.13x speed-up is obtained with the face-based approach over the element-based approach.
Keywords: cell centered finite volume method, compressible Euler equations, OpenFOAM mesh format, OpenMP
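The two-stage Runge-Kutta time stepping can be illustrated on a stripped-down 1-D finite-volume problem. This sketch advects a scalar pulse with a first-order upwind flux, a stand-in for the AUSM+-up Euler fluxes of the actual solver, purely to show the predictor/corrector update.

```python
import numpy as np

a, dx, dt = 1.0, 0.1, 0.05          # wave speed, cell size, time step (CFL 0.5)
x = np.arange(0.0, 10.0, dx)
u = np.where((x > 2.0) & (x < 4.0), 1.0, 0.0)   # square pulse, periodic domain

def residual(u):
    # Finite-volume right-hand side with a first-order upwind flux
    # (F_{i+1/2} = a * u_i for a > 0), periodic boundaries via roll.
    return -a * (u - np.roll(u, 1)) / dx

# Two-stage Runge-Kutta (Heun) stepping: predictor, then corrector.
mass0 = u.sum() * dx
for _ in range(40):
    u_star = u + dt * residual(u)                    # stage 1: forward Euler
    u = 0.5 * (u + u_star + dt * residual(u_star))   # stage 2: averaged update
```

The conservative form keeps the total "mass" of the pulse constant to machine precision, which is the key property a finite-volume Euler solver inherits regardless of the flux function used.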
Procedia PDF Downloads 320
1008 Transferring Cultural Meanings: A Case of Translation Classroom
Authors: Ramune Kasperaviciene, Jurgita Motiejuniene, Dalia Venckiene
Abstract:
Familiarising students with strategies for transferring cultural meanings (intertextual units, culture-specific idioms, culture-specific items, etc.) should be part of a comprehensive translator training programme. The present paper focuses on strategies for transferring such meanings into other languages and explores possibilities for introducing these methods and practice to translation students. The authors (university translation teachers) analyse the means of transferring cultural meanings from English into Lithuanian in a specific travel book, attribute these means to theoretically grounded strategies, and make calculations related to the frequency of adoption of specific strategies; translation students are familiarised with concepts and methods related to transferring cultural meanings and asked to put their theoretical knowledge into practice, i.e. interpret and translate certain culture-specific items from the same source text, and ground their decisions on theory; the comparison of the strategies employed by the professional translator of the source text (as identified by the authors of this study) and by the students is made. As a result, both students and teachers gain valuable experience, and new practices of conducting translation classes for a specific purpose evolve. Conclusions highlight the differences and similarities of non-professional and professional choices, summarise the possibilities for introducing methods of transferring cultural meanings to students, and round up with specific considerations of the impact of theoretical knowledge and the degree of experience on decisions made in the translation process.
Keywords: cultural meanings, culture-specific items, strategies for transferring cultural meanings, translator training
Procedia PDF Downloads 352
1007 Wideband Performance Analysis of C-FDTD Based Algorithms in the Discretization Impoverishment of a Curved Surface
Authors: Lucas L. L. Fortes, Sandro T. M. Gonçalves
Abstract:
In this work, the wideband performance under mesh discretization impoverishment of the Conformal Finite Difference Time-Domain (C-FDTD) approaches developed by Raj Mittra, Supriyo Dey and Wenhua Yu for the Finite Difference Time-Domain (FDTD) method is analyzed. These approaches are a simple and efficient way to optimize the scattering simulation of curved surfaces for dielectric and Perfect Electric Conducting (PEC) structures in the FDTD method, since curved surfaces require dense meshes to reduce the error introduced by surface staircasing. Defined in this work as D-FDTD-Diel and D-FDTD-PEC, these approaches are well known in the literature, but the improvement their application brings has not been quantified broadly regarding wide frequency bands and poorly discretized meshes. Both approaches improve the accuracy of the simulation without requiring dense meshes, also making it possible to explore poorly discretized meshes, which brings a reduction in simulation time and computational expense while retaining a desired accuracy. However, their application presents limitations regarding mesh impoverishment and the desired frequency range. Therefore, the goal of this work is to explore the approaches regarding both wideband and mesh impoverishment performance to provide wider insight into these aspects of FDTD applications. The D-FDTD-Diel approach consists in modifying the electric field update in the cells intersected by the dielectric surface, taking into account the amount of dielectric material within the mesh cell edges. By taking the intersections into account, D-FDTD-Diel provides an accuracy improvement at the cost of computational preprocessing, which is a fair trade-off, since the update modification is quite simple.
Likewise, the D-FDTD-PEC approach consists in modifying the magnetic field update, taking into account the PEC curved surface intersections within the mesh cells and, considering a PEC structure in vacuum, the air portion that fills the intersected cells when updating the magnetic field values. As with D-FDTD-Diel, D-FDTD-PEC provides better accuracy at the cost of computational preprocessing, although with the drawback of having to meet stability criterion requirements. The algorithms are formulated and applied to a PEC and a dielectric spherical scattering surface with meshes presenting different levels of discretization, with Polytetrafluoroethylene (PTFE) as the dielectric, a very common material in coaxial cables and connectors for radiofrequency (RF) and wideband applications. The accuracy of the algorithms is quantified, showing the approaches' wideband performance drop along with the mesh impoverishment. The benefits in computational efficiency, simulation time and accuracy are also shown and discussed, according to the desired frequency range, showing that poorly discretized mesh FDTD simulations can be exploited more efficiently while retaining the desired accuracy. The results obtained provide a broader insight into the limitations of the C-FDTD approaches in poorly discretized and wide-frequency-band simulations for dielectric and PEC curved surfaces, limitations which are not clearly defined or detailed in the literature and are, therefore, a novelty. These approaches are also expected to be applied to the modeling of curved RF components for wideband and high-speed communication devices in future work.
Keywords: accuracy, computational efficiency, finite difference time-domain, mesh impoverishment
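The intuition behind the D-FDTD-Diel update can be sketched with a length-weighted effective permittivity for a cell edge partially inside the dielectric. The linear weighting below is a simplified illustration of the idea, not the exact coefficient modification of the published scheme; the PTFE relative permittivity of 2.1 is a typical handbook value.

```python
# Sketch of the D-FDTD-Diel idea: the update coefficient of an E-field
# component on a cell edge crossed by the dielectric surface is built
# from an effective permittivity weighted by how much of the edge lies
# inside the dielectric (fill_fraction in [0, 1]).
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
EPS_R_PTFE = 2.1          # typical relative permittivity of PTFE

def effective_eps(fill_fraction):
    """Length-weighted permittivity of a partially filled cell edge."""
    return EPS0 * (fill_fraction * EPS_R_PTFE + (1.0 - fill_fraction) * 1.0)

# An edge fully in vacuum, fully in PTFE, and half intersected:
for f in (0.0, 1.0, 0.5):
    print(f, effective_eps(f) / EPS0)
```

An edge entirely in vacuum recovers the standard Yee update, an edge entirely in PTFE the fully dielectric one, and intersected edges interpolate between them, which is what removes the staircasing error without refining the mesh.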
Procedia PDF Downloads 135
1006 Surface Hole Defect Detection of Rolled Sheets Based on Pixel Classification Approach
Authors: Samira Taleb, Sakina Aoun, Slimane Ziani, Zoheir Mentouri, Adel Boudiaf
Abstract:
Rolling is a pressure treatment technique that modifies the shape of steel ingots or billets between rotating rollers. During this process, defects may form on the surface of the rolled sheets and are likely to affect the performance and quality of the finished product. In our study, we developed a method for detecting surface hole defects using a pixel classification approach. This work includes several steps. First, we performed image preprocessing to delimit areas with and without hole defects on the sheet image. Then, we developed the histograms of each area to generate the gray level membership intervals of the pixels that characterize each area. As we noticed an intersection between the characteristics of the gray level intervals of the images of the two areas, we finally performed a learning step based on a series of detection tests to refine the membership intervals of each area, and to choose the defect detection criterion in order to optimize the recognition of the surface hole.
Keywords: classification, defect, surface, detection, hole
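The pixel classification described above can be sketched as a gray-level interval test. The interval bounds here are placeholder assumptions standing in for the intervals the paper refines through its learning step.

```python
import numpy as np

# Interval-based pixel classification: a pixel is labelled 'defect'
# when its gray level falls inside the membership interval learned for
# the hole-defect class. Bounds below are illustrative assumptions.
DEFECT_LO, DEFECT_HI = 0, 60      # dark hole pixels (assumed interval)

def classify(image):
    """Return a boolean mask marking pixels classified as hole defects."""
    return (image >= DEFECT_LO) & (image <= DEFECT_HI)

sheet = np.full((8, 8), 180, dtype=np.uint8)   # bright defect-free surface
sheet[3:5, 3:5] = 20                           # dark 2x2 hole
mask = classify(sheet)
print(int(mask.sum()))  # number of defect pixels
```

Where the two areas' gray-level histograms overlap, as the abstract notes, tightening these bounds on labelled test images is what the learning step optimizes.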
Procedia PDF Downloads 23
1005 Reading Strategies of Generation X and Y: A Survey on Learners' Skills and Preferences
Authors: Kateriina Rannula, Elle Sõrmus, Siret Piirsalu
Abstract:
The mixed-generation classroom is a phenomenon that current higher education establishments face daily while trying to meet the needs of a modern labor market with its emphasis on lifelong learning and retraining. Representatives of mainly the X and Y generations acquiring higher education in one classroom are a challenge to lecturers, considering all the characteristics that differ from one generation to another. The importance of outlining different strategies and considering the needs of the students lies in the necessity for everyone to acquire the maximum of the provided knowledge, as well as to understand each other, in order to study together in one classroom and successfully cooperate in future workplaces. In addition to different generations, there are also learners with different native languages, which has an impact on reading and understanding texts in third languages, including possible translation. The current research aims to investigate, describe and compare reading strategies among representatives of generations X and Y. Hypotheses were formulated: representatives of generations X and Y use different reading strategies, which also differ between first- and third-year students of the aforementioned generations. The current study is an empirical, qualitative one. To achieve the aim of the research, relevant literature was analyzed and a semi-structured questionnaire was conducted among the first- and third-year students of Tallinn Health Care College. The questionnaire consisted of 25 statements on text reading strategies, 3 multiple-choice questions on preferences regarding the design and medium of the text, and three open questions on the translation process when working with a text in the student’s third language. The results of the questionnaire were categorized, analyzed and compared. Both generation X and Y respondents described their reading strategies as 'scanning' and 'surfing'.
Compared to generation X, first-year generation Y learners valued interactivity and nonlinear texts. Students frequently used strategies of skimming, scanning, translating and highlighting together with relevant-thinking and assistance-seeking. Meanwhile, the third-year generation Y students no longer frequently used translating, resourcing and highlighting, while generation X learners still incorporated these strategies. Knowing about the different needs of the generations currently inside the classrooms and on the labor market equips us with tools to provide sustainable education and grants society a workforce that is more flexible and able to move between professions. Future research should be conducted in order to investigate the amount of learning and strategy adoption between generations. As for reading, the main suggestions arising from the research are as follows: make a variety of materials available to students; allow them to select what they want to read; and try to make those materials visually attractive, relevant, and appropriately challenging for learners, considering the differences between generations.
Keywords: generation X, generation Y, learning strategies, reading strategies
Procedia PDF Downloads 180
1004 Continuous FAQ Updating for Service Incident Ticket Resolution
Authors: Kohtaroh Miyamoto
Abstract:
As enterprise computing becomes more and more complex, the costs and technical challenges of IT system maintenance and support are increasing rapidly. One popular approach to managing IT system maintenance is to prepare and use an FAQ (Frequently Asked Questions) system to manage and reuse systems knowledge. Such an FAQ system can help reduce the resolution time for each service incident ticket. However, there is a major problem where over time the knowledge in such FAQs tends to become outdated. Much of the knowledge captured in the FAQ requires periodic updates in response to new insights or new trends in the problems addressed in order to maintain its usefulness for problem resolution. These updates require a systematic approach to define the exact portion of the FAQ and its content. Therefore, we are working on a novel method to hierarchically structure the FAQ and automate the updates of its structure and content. We use structured information and the unstructured text information with the timelines of the information in the service incident tickets. We cluster the tickets by structured category information, by keywords, and by keyword modifiers for the unstructured text information. We also calculate an urgency score based on trends, resolution times, and priorities. We carefully studied the tickets of one of our projects over a 2.5-year time period. After the first 6 months, we started to create FAQs and confirmed they improved the resolution times. We continued observing over the next 2 years to assess the ongoing effectiveness of our method for the automatic FAQ updates. We improved the ratio of tickets covered by the FAQ from 32.3% to 68.9% during this time. Also, the average time reduction of ticket resolution was between 31.6% and 43.9%. Subjective analysis showed more than 75% reported that the FAQ system was useful in reducing ticket resolution times.
Keywords: FAQ system, resolution time, service incident tickets, IT system maintenance
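The urgency score could, for instance, combine the signals named above (trends, resolution times, priorities) as a weighted sum. The weights, caps, and field names below are assumptions for illustration, not the paper's formula.

```python
from dataclasses import dataclass

@dataclass
class TicketCluster:
    trend: float                  # growth rate of tickets on this topic, 0..1
    avg_resolution_hours: float   # typical time to resolve such tickets
    priority: int                 # 1 (low) .. 3 (high)

def urgency(c, w_trend=0.5, w_time=0.3, w_prio=0.2):
    """Weighted urgency score in [0, 1]; weights are assumed values."""
    time_score = min(c.avg_resolution_hours / 48.0, 1.0)  # cap at two days
    prio_score = c.priority / 3.0
    return w_trend * c.trend + w_time * time_score + w_prio * prio_score

hot = TicketCluster(trend=0.9, avg_resolution_hours=72, priority=3)
cold = TicketCluster(trend=0.1, avg_resolution_hours=4, priority=1)
print(round(urgency(hot), 3), round(urgency(cold), 3))
```

Clusters ranked by this score would be the first candidates for an FAQ update, matching the update-prioritization role the score plays in the method.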
Procedia PDF Downloads 340
1003 Understanding Factors that Affect the Prior Knowledge of Deaf and Hard of Hearing Students and their Relation to Reading Comprehension
Authors: Khalid Alasim
Abstract:
The reading comprehension levels of students who are deaf or hard of hearing (DHH) are low compared to those of their hearing peers. One possible reason for these low reading levels is related to the students’ prior knowledge. This study investigated the potential factors that might have affected DHH students’ prior knowledge, including their degree of hearing loss, the presence or absence of family members with a hearing loss, and educational stage (elementary–middle school). The study also examined the contribution of prior knowledge in predicting DHH students’ reading comprehension levels, and investigated the differences in the students’ scores based on the type of questions, including text-explicit (TE), text-implicit (TI), and script-implicit (SI) questions. Thirty-one elementary and middle-school students completed a demographic form and assessment, and descriptive statistics and multiple and simple linear regressions were used to answer the research questions. The findings indicated that the independent variables—degree of hearing loss, presence or absence of family members with hearing loss, and educational stage—explained little of the variance in DHH students’ prior knowledge. Further, the results showed that the DHH students’ prior knowledge affected their reading comprehension. Finally, the results demonstrated that the participants were able to answer more of the TI questions correctly than the TE and SI questions. The study concluded that prior knowledge is important to these students’ reading comprehension, and it is also important for teachers and parents of DHH children to use effective ways to increase their students’ and children’s prior knowledge.
Keywords: reading comprehension, prior knowledge, metacognition, elementary, self-contained classrooms
Procedia PDF Downloads 104
1002 Enhancement of X-Rays Images Intensity Using Pixel Values Adjustments Technique
Authors: Yousif Mohamed Y. Abdallah, Razan Manofely, Rajab M. Ben Yousef
Abstract:
X-ray images are very popular as a first tool for diagnosis. Automating the process of analysis of such images is important in order to help physicians’ procedures. In this practice, teeth segmentation from the radiographic images and feature extraction are essential steps. The main objective of this study was to study correction preprocessing of x-ray images using local adaptive filters in order to evaluate the contrast enhancement pattern in different x-ray images, such as grey color, and to evaluate the usage of a new nonlinear approach for contrast enhancement of soft tissues in x-ray images. The data were analyzed using the MATLAB program to enhance the contrast within the soft tissues, the gray levels in both enhanced and unenhanced images, and the noise variance. The main enhancement techniques used in this study were contrast enhancement filtering and deblurring images using the blind deconvolution algorithm. In this paper, the prominent constraints are, firstly, preservation of the image's overall look; secondly, preservation of the diagnostic content in the image; and thirdly, detection of small low-contrast details in the diagnostic content of the image.
Keywords: enhancement, x-rays, pixel intensity values, MATLAB
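As one concrete example of adjusting pixel intensity values, a linear contrast stretch remaps the observed intensity range onto the full 8-bit scale. This NumPy sketch mirrors only the simplest preprocessing step; the study's adaptive filtering and blind deconvolution are performed in MATLAB.

```python
import numpy as np

def stretch(image):
    """Linearly remap an image's intensity range to 0..255."""
    img = image.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return np.zeros_like(image)
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

# Tiny low-contrast "soft tissue" patch: gray levels cramped into 60..120.
xray = np.array([[60, 80], [100, 120]], dtype=np.uint8)
out = stretch(xray)
print(out)
```

After the stretch the darkest pixel maps to 0 and the brightest to 255, which is why low-contrast soft-tissue detail becomes visible without altering the relative ordering of gray levels.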
Procedia PDF Downloads 488
1001 Machine Learning-Driven Prediction of Cardiovascular Diseases: A Supervised Approach
Authors: Thota Sai Prakash, B. Yaswanth, Jhade Bhuvaneswar, Marreddy Divakar Reddy, Shyam Ji Gupta
Abstract:
Across the globe, there are a lot of chronic diseases, and heart disease stands out as one of the most perilous. Sadly, many lives are lost to this condition, even though early intervention could prevent such tragedies. However, identifying heart disease in its initial stages is not easy. To address this challenge, we propose an automated system aimed at predicting the presence of heart disease using advanced techniques. By doing so, we hope to empower individuals with the knowledge needed to take proactive measures against this potentially fatal illness. Our approach towards this problem involves meticulous data preprocessing and the development of predictive models utilizing classification algorithms such as Support Vector Machines (SVM), Decision Tree, and Random Forest. We assess the efficiency of every model based on metrics like accuracy, ensuring that we select the most reliable option. Additionally, we conduct thorough data analysis to reveal the importance of different attributes. Among the models considered, Random Forest emerges as the standout performer with an accuracy rate of 96.04% in our study.
Keywords: support vector machines, decision tree, random forest
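The accuracy-based model selection described above reduces to comparing held-out predictions. The labels and predictions below are hard-coded stand-ins for the outputs of the trained SVM, Decision Tree, and Random Forest models.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions matching the held-out labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_test = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]          # held-out labels (made up)
predictions = {
    "svm":           [1, 0, 1, 0, 0, 1, 0, 1, 1, 1],  # 8/10 correct
    "decision_tree": [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],  # 8/10 correct
    "random_forest": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],  # 9/10 correct
}

scores = {name: accuracy(y_test, pred) for name, pred in predictions.items()}
best = max(scores, key=scores.get)
print(best, scores[best])
```

In practice, metrics beyond accuracy (recall in particular, since missing a diseased patient is costlier than a false alarm) would enter the same comparison.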
Procedia PDF Downloads 42
1000 Between a Rock and a Hard Place: The Possible Roles of Eternity Clauses in the Member States of the European Union
Authors: Zsuzsa Szakaly
Abstract:
Several constitutions in the European Union have explicit or implicit eternity clauses; their classic roles have been analyzed so far, although new possibilities are emerging in relation to the identity of the constitutions of the Member States. The aim of the study is to look in detail at the practice of the Constitutional Courts of the Member States regarding eternity clauses, where limiting constitutional amendment has practical bearing, and to examine the influence of such practice on Europeanization. Some states apply explicit eternity clauses embedded in the text of the constitution, e.g., Italy, Germany, and Romania. In other states, the Constitutional Court 'unearthed' the implicit eternity clauses from the text of the basic law, e.g., Slovakia and Croatia. By using comparative analysis to examine the explicit or implicit clauses of the concerned constitutions, taking into consideration the new trends in the judicial opinions of the Member States and fresh scientific studies, the main questions are: How to wield the double-edged sword of eternity clauses? To support European integration or to support the sovereignty of the Member State? To help Europeanization or to act against it? Eternity clauses can easily find themselves between a rock and a hard place, the law of the European Union and the law of a Member State, with more possible interpretations. As more and more Constitutional Courts have started to declare elements of their Member States’ constitutional identities, these have begun to interfere with the eternity clauses. Will this trend eventually work against Europeanization? As a result of the research, it can be stated that a lowest common denominator exists in the practice of European Constitutional Courts regarding eternity clauses.
The chance of a European model and the possibility of this model influencing the status quo between the European Union and the Member States will be examined by looking at the answers these courts have found so far.
Keywords: constitutional court, constitutional identity, eternity clause, European Integration
Procedia PDF Downloads 141
999 A Generative Pretrained Transformer-Based Question-Answer Chatbot and Phantom-Less Quantitative Computed Tomography Bone Mineral Density Measurement System for Osteoporosis
Authors: Mian Huang, Chi Ma, Junyu Lin, William Lu
Abstract:
Introduction: Bone health has attracted more attention recently, and an intelligent question and answer (QA) chatbot for osteoporosis is helpful for science popularization. With Generative Pretrained Transformer (GPT) technology developing, we build an osteoporosis corpus dataset and then fine-tune LLaMA, a famous open-source GPT foundation large language model (LLM), on our self-constructed osteoporosis corpus. Evaluated by clinical orthopedic experts, our fine-tuned model outperforms vanilla LLaMA on the osteoporosis QA task in Chinese. Three-dimensional quantitative computed tomography (QCT)-measured bone mineral density (BMD) has been considered more accurate than DXA for BMD measurement in recent years. We develop an automatic phantom-less QCT (PL-QCT) that is more efficient for BMD measurement since it needs no external phantom for calibration. Combined with the LLM on osteoporosis, our PL-QCT provides efficient and accurate BMD measurement for our chatbot users. Material and Methods: We build an osteoporosis corpus containing about 30,000 Chinese publications whose titles are related to osteoporosis. The whole process is done automatically, including crawling literature in .pdf format, localizing text/figure/table regions by a layout segmentation algorithm, and recognizing text by an OCR algorithm. We train our model by continuous pre-training with Low-rank Adaptation (LoRA, rank=10) technology to adapt the LLaMA-7B model to the osteoporosis domain, whose basic principle is to mask the next word in the text and make the model predict that word. The loss function is defined as the cross-entropy between the predicted and ground-truth words. The experiment is implemented on a single NVIDIA A800 GPU for 15 days. Our automatic PL-QCT BMD measurement adopts an AI-associated region-of-interest (ROI) generation algorithm for localizing a vertebrae-parallel cylinder in cancellous bone. With no phantom for BMD calibration, we calculate ROI BMD from the CT-BMD of personal muscle and fat.
Results & Discussion: Clinical orthopaedic experts were invited to design 5 osteoporosis questions in Chinese, evaluating the performance of vanilla LLaMA and our fine-tuned model. Our model outperforms LLaMA on over 80% of these questions, understanding ‘Expert Consensus on Osteoporosis’, ‘QCT for osteoporosis diagnosis’ and ‘Effect of age on osteoporosis’. Detailed results are shown in the appendix. Future work may be done by training a larger LLM on the whole of orthopaedics with more high-quality domain data, or a multi-modal GPT combining and understanding X-ray and medical text for orthopaedic computer-aided diagnosis. However, the GPT model gives unexpected outputs sometimes, such as repetitive text or seemingly normal but wrong answers (called ‘hallucination’). Even when GPT gives correct answers, they cannot be considered valid clinical diagnoses in place of clinical doctors. The PL-QCT BMD system provided by Bone’s QCT (Bone’s Technology (Shenzhen) Limited) achieves 0.1448 mg/cm² (spine) and 0.0002 mg/cm² (hip) mean absolute error (MAE) and linear correlation coefficients R²=0.9970 (spine) and R²=0.9991 (hip) (compared to QCT-Pro (Mindways)) on 155 patients in a three-center clinical trial in Guangzhou, China. Conclusion: This study builds a Chinese osteoporosis corpus and develops a fine-tuned and domain-adapted LLM as well as a PL-QCT BMD measurement system. Our fine-tuned GPT model shows better capability than the LLaMA model on most testing questions on osteoporosis. Combined with our PL-QCT BMD system, we look forward to providing science popularization and early screening for potential osteoporotic patients.
Keywords: GPT, phantom-less QCT, large language model, osteoporosis
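The LoRA principle used in the methods above, freezing the pretrained weight W and learning a rank-r update B·A, can be sketched in NumPy. Dimensions here are shrunk for illustration; LLaMA-7B layers are far larger, and the rank r = 10 matches the study's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 10          # toy layer sizes; rank r = 10

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero init

def adapted_forward(x, scale=1.0):
    # Adapted layer: W x + scale * B (A x). With B = 0 at initialization,
    # the adapter leaves the pretrained model's output unchanged.
    return W @ x + scale * (B @ (A @ x))

x = rng.normal(size=d_in)
trainable = r * (d_in + d_out)         # parameters actually updated
full = d_in * d_out                    # parameters a full fine-tune touches
print(trainable, full)
```

Only A and B receive gradients during the continuous pre-training, which is why a 7B-parameter model can be domain-adapted on a single GPU.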
Procedia PDF Downloads 71
998 Design of a Graphical User Interface for Data Preprocessing and Image Segmentation Process in 2D MRI Images
Authors: Enver Kucukkulahli, Pakize Erdogmus, Kemal Polat
Abstract:
The 2D image segmentation is a significant process in finding a suitable region in medical images such as MRI, PET, CT, etc. In this study, we have focused on 2D MRI images for the image segmentation process. We have designed a GUI (graphical user interface) written in MATLAB for 2D MRI images. The program has two different interfaces covering data pre-processing and image clustering or segmentation. The data pre-processing section offers a median filter, average filter, unsharp mask filter, Wiener filter, and a custom filter (a filter designed by the user in MATLAB). As for image clustering, there are seven different image segmentation algorithms for 2D MR images: PSO (particle swarm optimization), GA (genetic algorithm), Lloyd's algorithm, k-means, the combination of Lloyd's algorithm and k-means, mean shift clustering, and finally BBO (biogeography-based optimization). To find a suitable cluster number in 2D MRI, we have designed a histogram-based cluster estimation method and then supplied these numbers to the image segmentation algorithms to cluster an image automatically. We have also selected the best hybrid method for each 2D MR image thanks to this GUI software.
Keywords: image segmentation, clustering, GUI, 2D MRI
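The automatic pipeline this abstract describes (estimate a cluster number from the intensity histogram, then hand that number to a clustering algorithm) can be sketched roughly as follows. The peak-counting heuristic and the plain intensity k-means are assumptions standing in for the paper's unspecified MATLAB implementation:

```python
import numpy as np

def estimate_clusters(image, bins=64):
    # Crude histogram-based estimate: count local maxima of the smoothed
    # intensity histogram (a stand-in for the paper's estimation method).
    hist, _ = np.histogram(image, bins=bins)
    smooth = np.convolve(hist, np.ones(5) / 5, mode="same")
    peaks = [i for i in range(1, bins - 1)
             if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]]
    return max(2, len(peaks))

def kmeans_segment(image, k, iters=20, seed=0):
    # Plain k-means on pixel intensities; returns a label image.
    pixels = image.reshape(-1, 1).astype(float)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels - centers.T), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels.reshape(image.shape)
```

In the GUI the estimated number would be fed to whichever of the seven algorithms the user selects; k-means is shown here only because it is the simplest of them.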
Procedia PDF Downloads 377
997 Developing an Exhaustive and Objective Definition of Social Enterprise through Computer Aided Text Analysis
Authors: Deepika Verma, Runa Sarkar
Abstract:
One of the prominent debates in the social entrepreneurship literature has been whether entrepreneurial work for social well-being by for-profit organizations can be classified as social entrepreneurship. Of late, the scholarship has reached a consensus: there seems little sense in confining social entrepreneurship to non-profit organizations. Encouraged by this research, a growing number of businesses engaged in filling the social infrastructure gaps in developing countries are calling themselves social enterprises. These organizations are diverse in their ownership, size, objectives, operations, and business models. The lack of a comprehensive definition of social enterprise leads to three issues. Firstly, researchers may face difficulty in creating a database of social enterprises, because the choice of an entity as a social enterprise becomes subjective or rests on pre-defined parameters set by the researcher that are not replicable. Secondly, practitioners who use ‘social enterprise’ in their vision/mission statement(s) may find it difficult to adjust their business models accordingly, especially when they face the dilemma of choosing social well-being over business viability. Thirdly, social enterprise and social entrepreneurship attract a lot of donor funding and venture capital; in the absence of a comprehensive definitional guide, donors or investors may find assigning grants and investments difficult. It therefore becomes necessary to develop an exhaustive and objective definition of social enterprise and to examine whether the understandings of academicians and practitioners about social enterprise match. This paper develops a dictionary of words often associated with social enterprise or (and) social entrepreneurship.
It further compares two lexicographic definitions of social enterprise imputed from the abstracts of academic journal papers and trade publications extracted from the EBSCO database, using the ‘tm’ package in R.
Keywords: EBSCO database, lexicographic definition, social enterprise, text mining
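The lexicographic-definition step boils down to building a term-frequency profile per corpus and comparing the top terms. A minimal Python stand-in for the kind of profile the R ‘tm’ pipeline (Corpus, then term-document matrix) produces from a set of abstracts; the tokenizer and the tiny stopword list are illustrative placeholders:

```python
import re
from collections import Counter

def top_terms(docs, n=5, stopwords=frozenset({"the", "a", "of", "and", "to"})):
    # Term-frequency profile of a corpus after minimal preprocessing
    # (lowercasing, tokenizing, stopword removal).
    counts = Counter()
    for doc in docs:
        counts.update(w for w in re.findall(r"[a-z]+", doc.lower())
                      if w not in stopwords)
    return [w for w, _ in counts.most_common(n)]
```

Running this separately over the academic abstracts and the trade publications yields the two word lists whose overlap and differences constitute the compared definitions.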
Procedia PDF Downloads 397
996 Integrating Deep Learning For Improved State Of Charge Estimation In Electric Bus
Authors: Ms. Hema Ramachandran, Dr. N. Vasudevan
Abstract:
Accurate estimation of the battery State of Charge (SOC) is essential for optimizing the range and performance of modern electric vehicles. This paper focuses on analysing historical driving data from electric buses, with an emphasis on feature extraction and data preprocessing of driving conditions. By selecting relevant parameters, a set of characteristic variables tailored to specific driving scenarios is established. A battery SOC prediction model based on a combination of a bidirectional long short-term memory (LSTM) architecture and a standard fully connected neural network (FCNN) is then proposed, and various hyperparameters are identified and fine-tuned to enhance prediction accuracy. The results indicate that, with optimized hyperparameters, the prediction achieves a Root Mean Square Error (RMSE) of 1.98% and a Mean Absolute Error (MAE) of 1.72%. This approach is expected to improve the efficiency of battery management systems and battery utilization in electric vehicles.
Keywords: long short-term memory (lstm), battery health monitoring, data-driven models, battery charge-discharge cycles, adaptive soc estimation, voltage and current sensing
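The reported error metrics, and the windowing step needed to feed a driving-history series into a (bidirectional) LSTM, can be sketched as follows. The window construction is a generic assumption about the preprocessing, not the paper's exact feature pipeline:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root Mean Square Error, reported in the paper as a percentage of SOC.
    diff = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return float(np.sqrt(np.mean(diff ** 2)))

def mae(y_true, y_pred):
    # Mean Absolute Error.
    diff = np.asarray(y_true, float) - np.asarray(y_pred, float)
    return float(np.mean(np.abs(diff)))

def make_windows(series, window):
    # Turn a SOC/feature time series into a (samples, timesteps) array,
    # the sequential input shape an LSTM-based model expects.
    series = np.asarray(series)
    return np.stack([series[i:i + window]
                     for i in range(len(series) - window)])
```

With RMSE = 1.98% against MAE = 1.72%, the small gap between the two suggests the remaining errors are fairly uniform rather than dominated by a few large outliers (RMSE penalizes outliers more heavily).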
Procedia PDF Downloads 8
995 A Self Beheld the Eyes of the Other: Reflections on Montesquieu's Persian Letters
Authors: Seyed Majid Alavi Shooshtari
Abstract:
As a multi-layered prose piece of artistry and craftsmanship, Charles de Secondat, baron de Montesquieu’s Persian Letters (1721) is a satirical work which records the experiences of two Persian noblemen, Usbek and Rica, traveling through France in the early eighteenth century. Montesquieu creates Persian Letters as a critique of French society, a critical explanation of what was considered 'the Orient' in the period, and an invaluable historical document illustrating the ways Europe and the East understood each other in the first half of the eighteenth century. However, Persian Letters is today considered, in part, an Orientalist text because it presents the culture of the East through stereotypical images. Although, when Montesquieu published Persian Letters, the term Orientalist was a harmless word for people who studied or took an interest in the Orient, the ways in which this Western intellectual exerts his critique of French social and political life through the eyes of Persian protagonists, placing the example of the Orient (the Other) at the service of an ongoing eighteenth-century discourse, do raise some Eastern eyebrows. The Persian side of the novel is considered by some critics a fanciful decor, and the letters sent home are seen as literary props; yet these Eastern men intelligently question the rationality of religious, state, military, and cultural practices and uncover much of the absurdity, irrationality, or frivolity of European life. Drawing on the insight that Montesquieu’s text problematizes the assumption that Orientalism monolithically constructs the Orient as the Other, the present paper examines how the innocent gaze of two Eastern travelers mirrors the ways Europe’s identity defines its Self.
Keywords: montesquieu, persian letters, ‘the orient’, identity politics, self, the other
Procedia PDF Downloads 111
994 Differences in Assessing Hand-Written and Typed Student Exams: A Corpus-Linguistic Study
Authors: Jutta Ransmayr
Abstract:
The digital age has long since arrived at Austrian schools, and both society and educationalists demand that digital means be integrated accordingly into day-to-day school routines. The Austrian school-leaving exam (A-levels) can therefore now be written either by hand or on a computer. However, the choice of writing medium (pen and paper or computer) for written examination papers, which are considered 'high-stakes' exams, raises a number of questions that have not yet been adequately investigated, such as: What effects do the different conditions of text production in the written German A-levels have on normative linguistic accuracy? How do the spelling skills in German A-level papers written with a pen differ from those written on the computer? How is the teachers' assessment related to this? And which practical desiderata for German didactics can be derived from this? These questions were investigated in a trilateral pilot project of the Austrian Center for Digital Humanities (ACDH) of the Austrian Academy of Sciences and the University of Vienna, in cooperation with the Austrian Ministry of Education and the Council for German Orthography. A representative Austrian learner corpus, consisting of around 530 German A-level papers from all over Austria (pen- and computer-written), was set up in order to subject it to a quantitative (corpus-linguistic and statistical) and qualitative investigation of the spelling and punctuation performance of the high school graduates and of the differences between pen- and computer-written papers and their assessments. Relevant studies are currently available mainly from the Anglophone world. These have shown that writing on the computer increases the motivation to write and has positive effects on text length and, in some cases, text quality.
Depending on the writing situation and other technical aids, better results in terms of spelling and punctuation have also been found in computer-written texts as compared to handwritten ones. Studies also point towards a tendency among teachers to rate handwritten texts better than computer-written texts. In this paper, the first comparable results from the German-speaking area are presented. The research shows that, on the one hand, there are significant differences between handwritten and computer-written work with regard to performance in orthography and punctuation. On the other hand, the corpus-linguistic investigation and the subsequent statistical analysis made it clear that not only do the teachers' assessments of the students’ spelling performance vary enormously, but so do the overall assessments of the exam papers; the production medium (pen and paper or computer) also seems to play a decisive role.
Keywords: exam paper assessment, pen and paper or computer, learner corpora, linguistics
Procedia PDF Downloads 171
993 Supporting 'vulnerable' Students to Complete Their Studies During the Economic Crisis in Greece: The Umbrella Program of International Hellenic University
Authors: Rigas Kotsakis, Nikolaos Tsigilis, Vasilis Grammatikopoulos, Evridiki Zachopoulou
Abstract:
During the last decade, Greece has faced an unprecedented financial crisis, affecting various aspects and functions of Higher Education. Besides the restricted funding of academic institutions, students and their families have encountered economic difficulties that undoubtedly influence the effective completion of studies. In this context, a fairly large number of students at the Alexander campus of the International Hellenic University (IHU) delay, interrupt, or even abandon their studies, especially when they come from low-income families, belong to sensitive social or special-needs groups, have different cultural origins, etc. For this reason, a European project named “Umbrella” was initiated, aiming to provide the necessary psychological support and counseling, especially to disadvantaged students, towards the completion of their studies. To this end, a network of academic members (academic staff and students) from IHU, named iMentor, was formed, with members taking on different roles. Specifically, experienced academic staff trained students to serve as intermediate links for the integration and educational support of students who fall into the aforementioned sensitive social groups and face problems completing their studies. The main idea of the project rests on its person-centered character, which facilitates direct student-to-student communication without the intervention of teaching staff. The backbone of the iMentor network is senior students who face no problems in their academic life and volunteered for this project. It should be noted that the Umbrella structure provides substantial and ethical rewards for their engagement. In this context, a well-defined, stringent methodology was implemented to evaluate the extent of the problem at IHU and to detect the profile of the “candidate” disadvantaged students.
The first phase included two steps: (a) data collection and (b) data cleansing/preprocessing. The first step involved collecting data from the Secretary Services of all Schools in IHU, from 1980 to 2019, which resulted in 96,418 records. The data set included the School name, the semester of studies, the student enrollment criteria, the nationality, and the graduation year or the current, up-to-date academic state (still studying, delayed, dropped out, etc.). The second step of the methodology involved data cleansing/preprocessing because of the existence of “noisy” data, missing and erroneous values, etc. Furthermore, several assumptions and grouping actions were imposed to achieve data homogeneity and an easy-to-interpret subsequent statistical analysis. Specifically, the 40-year recording span was limited to the last 15 years (2004-2019), since in 2004 the Greek Technological Institutions were transformed into Higher Education Universities, leading to a stable and unified framework of graduate studies. In addition, data concerning active students were excluded from the analysis, since the initial processing effort focused on detecting the factors/variables that differentiated graduated and deleted students. The final working dataset included 21,432 records with only two categories of students: those that hold a degree and those who abandoned their studies. Findings of the first phase are presented across faculties and further discussed.
Keywords: higher education, students support, economic crisis, mentoring
Procedia PDF Downloads 115
992 A Comparative Study on the Use of Learning Resources in Learning Biochemistry by MBBS Students at Ras Al Khaimah Medical and Health Sciences University, UAE
Authors: B. K. Manjunatha Goud, Aruna Chanu Oinam
Abstract:
The undergraduate medical curriculum is oriented towards training students to undertake the responsibilities of a physician. During the training period, adequate emphasis is placed on inculcating logical and scientific habits of thought; clarity of expression and independence of judgment; and the ability to collect and analyze information and to correlate it. At Ras Al Khaimah Medical and Health Sciences University (RAKMHSU), Biochemistry, a basic medical science subject, is taught in the 1st year of the 5-year medical course with vertical interdisciplinary interaction with all subjects. It needs to be taught and learned adequately so that it can be related to clinical cases or clinical problems in medicine and future diagnostics, and so that students can practice confidently and skillfully in the community. Based on these facts, a study was done to determine the extent of students' usage of library resources and the impact of study materials on their preparation for examinations. It was a comparative cross-sectional study that included 100 1st-year and 80 2nd-year students who had successfully completed the Biochemistry course. The purpose of the study was explained to all students [participants]. Information was collected on a pre-designed, pre-tested, and self-administered questionnaire. The questionnaire was validated by senior faculty and pre-tested on students who were not involved in the study. The study results showed that 80.30% and 93.15% of 1st- and 2nd-year students, respectively, had a clear idea of the course outline given in the course handout or study guide. We also found that a statistically significant number of students agreed that they benefited from the practical sessions and from writing notes during class hours. A high percentage of students [50% and 62.02%] disagreed that reading only the handouts is enough for their examination.
The study also showed that only 35% and 41% of students visited the library on a daily basis for learning, that around 65% of students used lecture notes and textbooks as tools for learning and understanding the subject, and that 45% and 53% of students used library resources (recommended textbooks), as compared to online sources, before examinations. The results presented here show that students perceived that e-learning resources such as PowerPoint presentations, along with textbook reading using the SQ4R technique, had a positive impact on various aspects of their learning in Biochemistry. Students' use of the library has an overall positive impact on the learning process; especially in the medical field it enhances outcomes, and medical students are better equipped to treat patients. It is also true, however, that library use has been in decline, which will affect knowledge and outcomes. In conclusion, students have to be taught how to use the library as a learning tool apart from lecture handouts.
Keywords: medical education, learning resources, study guide, biochemistry
Procedia PDF Downloads 178
991 Hyperspectral Mapping Methods for Differentiating Mangrove Species along Karachi Coast
Authors: Sher Muhammad, Mirza Muhammad Waqar
Abstract:
It is necessary to monitor and identify mangrove types and their spatial extent near coastal areas, because mangroves play an important role in the coastal ecosystem and environmental protection. This research aims at identifying and mapping mangrove types along the Karachi coast, ranging from 24.79 to 24.85 degrees in latitude and 66.91 to 66.97 degrees in longitude, using hyperspectral remote sensing data and techniques. An image acquired in February 2012 by the Hyperion sensor has been used for this research. Image preprocessing includes geometric and radiometric correction, followed by Minimum Noise Fraction (MNF) and Pixel Purity Index (PPI). The output of MNF and PPI has been analyzed by visualizing it in n dimensions for end-member extraction. Well-distributed clusters on the n-dimensional scatter plot have been selected with the region of interest (ROI) tool as end members. These end members have been used as input to the classification techniques applied to identify and map mangrove species: Spectral Angle Mapper (SAM), Spectral Feature Fitting (SFF), and Spectral Information Divergence (SID). Only two types of mangroves, namely Avicennia marina (white mangroves) and Avicennia germinans (black mangroves), have been observed throughout the study area.
Keywords: mangrove, hyperspectral, hyperion, SAM, SFF, SID
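Of the three classifiers listed, SAM is the simplest to illustrate: it scores each pixel by the angle between its spectrum and each end-member spectrum, which makes it insensitive to overall illumination scaling. A minimal sketch (the tiny test cube and end members below are invented for illustration, not Hyperion data):

```python
import numpy as np

def spectral_angle(pixel, endmember):
    # SAM: angle (radians) between a pixel spectrum and an end-member
    # spectrum; a smaller angle means a better spectral match.
    p = np.asarray(pixel, float)
    e = np.asarray(endmember, float)
    cos = np.dot(p, e) / (np.linalg.norm(p) * np.linalg.norm(e))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sam_classify(cube, endmembers):
    # Assign each pixel of an (h, w, bands) cube to the end member
    # with the smallest spectral angle.
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b)
    angles = np.array([[spectral_angle(p, e) for e in endmembers]
                       for p in pixels])
    return angles.argmin(axis=1).reshape(h, w)
```

Because the angle ignores vector magnitude, a brightly and a dimly lit pixel of the same material map to the same end member, which is why SAM is popular for sun-lit coastal scenes.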
Procedia PDF Downloads 362
990 A Pilot Study to Investigate the Use of Machine Translation Post-Editing Training for Foreign Language Learning
Authors: Hong Zhang
Abstract:
The main purpose of this study is to show that machine translation (MT) post-editing (PE) training can help Chinese students learn Spanish as a second language. Our hypothesis is that they might make better use of MT by learning PE skills specific to foreign language learning. We have developed PE training materials based on the data collected in a previous study. The training material included the particular error types found in MT output and the error types that our Chinese students of Spanish could not detect in last year's experiment. This year we performed a pilot study in order to evaluate the effectiveness of the PE training materials and the extent to which PE training helps Chinese students who study the Spanish language. We used screen recording to record the sessions and made note of every action taken by the students. Participants were speakers of Chinese with intermediate knowledge of Spanish. They were divided into two groups: Group A performed PE training and Group B did not. We prepared a Chinese text for both groups; participants first translated it by themselves (human translation), then used Google Translate to translate the text, and were asked to post-edit the raw MT output. Comparing the results of the PE test, Group A could identify and correct the errors faster than Group B, and Group A did especially better on omission, word order, part of speech, terminology, mistranslation, official names, and formal register. From the results of this study, we can see that PE training can help Chinese students learn Spanish as a second language. In the future, we could focus on the students' struggles during their Spanish studies and complete the PE training materials for teaching Chinese students Spanish with machine translation.
Keywords: machine translation, post-editing, post-editing training, Chinese, Spanish, foreign language learning
Procedia PDF Downloads 144
989 Nurse's Interventions in Patients with Dementia During Clinical Practice: A Literature Review
Authors: Helga Martins, Idália Matias
Abstract:
Background: Dementia is an important research topic, since life expectancy worldwide is increasing and people are getting older. The aging of populations has a major impact on the increase in dementia, and nurses play a major role in taking care of these patients. Therefore, the implementation of nursing interventions based on evidence is vital so that we are aware of what can be done in clinical practice to provide patient-centered care to patients with dementia. Aim: To identify nurses' interventions in patients with dementia during clinical practice. Method: Literature review grounded on an electronic search in the EBSCOhost platform (CINAHL Plus with Full Text, MEDLINE with Full Text, and Nursing & Allied Health Collection), using the search terms "dementia" AND "nurs*" AND "interventions" in the abstracts. The inclusion criterion was: original papers published up to June 2021. A total of 153 results were retrieved; after duplicate removal we kept 104, and after applying the inclusion criterion we included 15 studies. This literature review was performed by two independent researchers. Results: A total of 15 studies about nurses' interventions in patients with dementia were included. The major interventions are therapeutic communication strategies; environmental management of stressors involving family/caregivers; strategies to promote patient safety; and assistance in activities of daily living for patients who are clinically deteriorated. Conclusion: Taking care of people with dementia is a complex and demanding task. Nurses are required to have a set of skills and competences in order to provide these nursing interventions. We highlight that awareness is necessary in nursing education regarding the provision of nursing care to patients with dementia.
Keywords: dementia, interventions, nursing, review
Procedia PDF Downloads 157
988 The Development of Congeneric Elicited Writing Tasks to Capture Language Decline in Alzheimer Patients
Authors: Lise Paesen, Marielle Leijten
Abstract:
People diagnosed with probable Alzheimer's disease suffer from an impairment of their language capacities, a gradual impairment which affects both their spoken and written communication. Our study aims at characterising the language decline in DAT patients with the use of congeneric elicited writing tasks. Within these tasks, a descriptive text has to be written based upon images with which the participants are confronted. A randomised set of images allows us to present the participants with a different task on every encounter, thus allowing us to avoid a recognition effect in this iterative study. This method is a revision of previous studies, in which participants were presented with a larger picture depicting an entire scene. In order to create the randomised set of images, existing pictures were adapted following strict criteria (e.g. frequency, AoA, colour, ...). The resulting data set contained 50 images belonging to several categories (vehicles, animals, humans, and objects). A pre-test was constructed to validate the created picture set; most images had been used before in spoken picture naming tasks, hence the same reaction times ought to be triggered in the typed picture naming task. Once validated, the effectiveness of the descriptive tasks was assessed. First, the participants (n=60 students, n=40 healthy elderly) performed a typing task, which provided information about the typing speed of each individual. Secondly, two descriptive writing tasks were carried out, one simple and one complex. The simple task contains 4 images (1 animal, 2 objects, 1 vehicle) and only contains elements with high frequency, a young AoA (<6 years), and fast reaction times. Slow reaction times, a later AoA (≥ 6 years), and low frequency were criteria for the complex task, which uses 6 images (2 animals, 1 human, 2 objects, and 1 vehicle). The data were collected with the keystroke logging programme Inputlog.
Keystroke logging tools log and time-stamp keystroke activity to reconstruct and describe text production processes. The data were analysed using a selection of writing process and product variables, such as general writing process measures, detailed pause analysis, linguistic analysis, and text length. As a covariate, the intrapersonal interkey transition times from the typing task were taken into account. The pre-test indicated that the new images lead to similar or even faster reaction times compared to the original images, so all the images were used in the main study. The texts produced in the description tasks were significantly longer than in previous studies, providing sufficient text and process data for analyses. Preliminary analysis shows that the number of words produced differed significantly between the healthy elderly and the students, as did the mean length of production bursts, even though both groups needed the same time to produce their texts. However, the elderly took significantly more time to produce the complex task than the simple task. Nevertheless, the number of words per minute remained comparable between simple and complex. The pauses within and before words varied, even when taking personal typing abilities (obtained in the typing task) into account.
Keywords: Alzheimer's disease, experimental design, language decline, writing process
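The pause-analysis and interkey-transition variables mentioned above reduce to simple computations over time-stamped keystrokes. A minimal sketch, assuming an Inputlog-style list of (timestamp, key) events; the 2-second pause threshold is a convention in writing-process research, not necessarily this study's setting:

```python
def interkey_intervals(events):
    # `events` is a list of (timestamp_ms, key) pairs, the kind of log
    # a keystroke logger such as Inputlog produces; returns the
    # transition time between each pair of consecutive keystrokes.
    times = [t for t, _ in events]
    return [b - a for a, b in zip(times, times[1:])]

def count_pauses(events, threshold_ms=2000):
    # A "pause" is conventionally an interkey interval at or above a
    # threshold; pauses within vs. before words are distinguished in the
    # full analysis by also inspecting which keys surround the gap.
    return sum(1 for gap in interkey_intervals(events) if gap >= threshold_ms)
```

Using each participant's own typing-task intervals as a covariate, as the study does, corrects these measures for individual motor speed before comparing groups.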
Procedia PDF Downloads 276
987 Investigating Malaysian Prereaders’ Cognitive Processes when Reading English Picture Storybooks: A Comparative Eye-Tracking Experiment
Authors: Siew Ming Thang, Wong Hoo Keat, Chee Hao Sue, Fung Lan Loo, Ahju Rosalind
Abstract:
Numerous studies have explored young learners’ literacy skills in Malaysia, but none has used an eye-tracking device to track their cognitive processes when reading picture storybooks. This study used this method to investigate two groups of prereaders’ cognitive processes under four conditions: (1) a congruent picture was presented, and a matching narration was read aloud by a recorder; (2) children heard a narration about the same characters in the picture but involving a different scene; (3) only a picture with matching text was presented; (4) children only heard the text on the screen being read aloud. The two main objectives of this project are to test which content of pictures helps the prereaders (i.e., young children who have not received any formal reading instruction) understand the narration, and whether children try to create a coherent mental representation from the oral narration and the pictures. The study compares two groups of children from two different kindergartens (Group 1: 15 Chinese children; Group 2: 17 Malay children). The medium of instruction was English. An eye-tracker was used to identify Areas of Interest (AOIs) in each picture and the five target elements, and to calculate the number of fixations and the total time spent fixating on pictures and written texts. Two mixed factorial ANOVAs, with storytelling performance (good, average, or weak) and vocabulary level (low, medium, high) as between-subject variables and the Areas of Interest (AOIs) and display conditions as within-subject variables, were performed on the variables.
Keywords: eye-tracking, cognitive processes, literacy skills, prereaders, visual attention
Procedia PDF Downloads 95
986 Developing the Skills of Reading Comprehension of Learners of English as a Second Language
Authors: Indu Gamage
Abstract:
Though commonly utilized as a language improvement technique, reading has not been fully employed by either language teachers or learners to develop reading comprehension skills in English as a second language. In a Sri Lankan context, this area has to be delved into deeply, as the learners show more propensity to analyze. Reading comprehension is an area that most language teachers and learners struggle with, though it appears easy. Most ESL learners engage in reading tasks without being properly aware of the objective of doing reading comprehension. It is observed that when doing reading tasks, the language learners' concern is more with the meanings of individual words than with the overall comprehension of the given text. The passiveness with which ESL learners engage in reading comprehension makes reading a tedious task, thereby giving the learner a sense of disappointment at the end. Certain reading tasks take the form of translations. The active cognitive participation of the learner, in the form of productive strategies such as predicting, employing schemata, and using contextual clues, seems quite limited. It was hypothesized that the learners' lack of knowledge of the productive strategies of reading was the major obstacle that makes reading comprehension a tedious task for them. This study is based on a group of 30 tertiary students who read English only as a fundamental requirement for their degree. They belonged to the Faculty of Humanities and Social Sciences of the University of Ruhuna, Sri Lanka. Almost all learners hailed from areas where English was hardly used in day-to-day conversation. The study was carried out by means of a questionnaire gathering their opinions on reading, and a test checking whether the learners use productive strategies of reading when doing reading comprehension tasks. The test comprised reading questions covering the major productive strategies for reading.
The results were then analyzed to see the degree of the learners' active engagement in comprehending the text. The findings supported the hypothesis concerning the grounds behind the difficulties related to reading comprehension.
Keywords: reading, comprehension, skills, reading strategies
Procedia PDF Downloads 175
985 Corpus Linguistics as a Tool for Translation Studies Analysis: A Bilingual Parallel Corpus of Students’ Translations
Authors: Juan-Pedro Rica-Peromingo
Abstract:
Nowadays, corpus linguistics has become a key research methodology for Translation Studies, broadening the scope of cross-linguistic studies. The approach used in the study presented here focuses on learners with little or no experience in order to study, at an early stage, general mistakes and errors and the correct or incorrect use of translation strategies, and to improve the translational competence of the students. Led by Sylviane Granger and Marie-Aude Lefer of the Centre for English Corpus Linguistics of the University of Louvain, the MUST corpus (MUltilingual Student Translation Corpus) is an international project which brings together partners from European and worldwide universities and connects Learner Corpus Research (LCR) and Translation Studies (TS). It aims to build a corpus of translations carried out by students, including both direct (L2 > L1) and indirect (L1 > L2) translations, from a great variety of text types, genres, and registers in a wide variety of languages: audiovisual translations (including dubbing and subtitling for the hearing and for the deaf populations), scientific, humanistic, literary, economic, and legal translation texts. This paper focuses on the work carried out by the Spanish team from the Complutense University (UCMA), which is part of the MUST project, and describes the specific features of the corpus built by its members. All the texts used by UCMA are either direct or indirect translations between English and Spanish. Students’ profiles comprise translation trainees, foreign language students with a major in English, engineers studying EFL, and MA students, all of them with different English levels (from B1 to C1); for some of the students, this is their first experience with translation. The MUST corpus is searchable via Hypal4MUST, a web-based interface developed by Adam Obrusnik from Masaryk University (Czech Republic), which includes a translation-oriented annotation system (TAS).
A distinctive feature of the interface is that it aligns source texts and target texts, so we can observe and compare both language structures in detail and study the translation strategies used by students. The initial data point to the kinds of difficulties encountered by the students and reveal the most frequent strategies implemented by the learners according to their level of English, their translation experience, and the text genres. We have also identified common errors in the graduate and postgraduate university students’ translations: transfer errors, lexical errors, grammatical errors, text-specific translation errors, and cultural-related errors. Analyzing all these parameters will provide more material for improving the quality of teaching and of the translations produced by the students.
Keywords: corpus studies, students’ corpus, the MUST corpus, translation studies
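The error-frequency analysis described above can be sketched in a few lines. This is a hedged illustration only: the annotation pairs and category labels below are hypothetical examples, not Hypal4MUST's actual TAS export format.

```python
from collections import Counter

# Hypothetical annotations: (segment_id, error_category) pairs, using
# the error categories named in the abstract. The data are invented
# for illustration, not taken from the MUST corpus.
annotations = [
    (1, "transfer"), (1, "lexical"), (2, "grammatical"),
    (3, "lexical"), (3, "cultural"), (4, "lexical"),
]

# Tally how often each error category occurs across segments.
freq = Counter(category for _segment, category in annotations)
most_common = freq.most_common(1)[0]  # the single most frequent category
```

Grouping the same tallies by student level or text genre (additional fields in each annotation tuple) would reproduce the per-level and per-genre comparisons the abstract describes.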
Procedia PDF Downloads 148
984 Online Yoga Asana Trainer Using Deep Learning
Authors: Venkata Narayana Chejarla, Nafisa Parvez Shaik, Gopi Vara Prasad Marabathula, Deva Kumar Bejjam
Abstract:
Yoga is an advanced, well-recognized practice with roots in Indian philosophy that benefits both the body and the mind. As a regular exercise, it helps people relax and sleep better while also enhancing their balance, endurance, and concentration. Yoga can be learned in a variety of settings, including at home with the aid of books and the internet, as well as in yoga studios under the guidance of an instructor. Self-learning does not teach the proper yoga poses, and performing them without the right instruction can result in serious injuries. We developed "Online Yoga Asana Trainer using Deep Learning" so that people can practice yoga without a teacher. Our project is built with TensorFlow, MoveNet, and Keras. The system uses a dataset from Kaggle that includes 25 different yoga poses. The first stage applies the MoveNet model to extract the 17 body keypoints from the dataset; the next stage covers preprocessing and building a pose classification model using neural networks. The system achieves a 98.3% accuracy rate and is designed to work with live video.
Keywords: yoga, deep learning, MoveNet, TensorFlow, Keras, CNN
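As a rough sketch of the keypoint pipeline, the code below normalizes MoveNet-style keypoints (17 body points, each a (y, x, confidence) triple in [0, 1]) into translation- and scale-invariant features of the kind a pose classifier could consume. The hip-centered, torso-scaled normalization is an assumption for illustration, not the paper's documented preprocessing.

```python
def normalize_keypoints(kps):
    """kps: list of 17 (y, x, score) tuples in MoveNet keypoint order.

    Returns a flat 34-d feature list, centered on the hip midpoint and
    scaled by the torso length (hip midpoint to shoulder midpoint), so
    the same pose yields similar features regardless of where the person
    stands in the frame or how large they appear.
    """
    # Standard MoveNet keypoint indices (assumed here).
    LEFT_SHOULDER, RIGHT_SHOULDER, LEFT_HIP, RIGHT_HIP = 5, 6, 11, 12
    cy = (kps[LEFT_HIP][0] + kps[RIGHT_HIP][0]) / 2
    cx = (kps[LEFT_HIP][1] + kps[RIGHT_HIP][1]) / 2
    sy = (kps[LEFT_SHOULDER][0] + kps[RIGHT_SHOULDER][0]) / 2
    sx = (kps[LEFT_SHOULDER][1] + kps[RIGHT_SHOULDER][1]) / 2
    torso = ((cy - sy) ** 2 + (cx - sx) ** 2) ** 0.5 or 1.0  # avoid /0
    feats = []
    for y, x, _score in kps:
        feats.extend([(y - cy) / torso, (x - cx) / torso])
    return feats
```

The resulting 34-d vectors could then be fed to a small Keras dense network with a 25-way softmax output, one class per pose in the dataset.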
Procedia PDF Downloads 241
983 A Study of Mode Choice Model Improvement Considering Age Grouping
Authors: Young-Hyun Seo, Hyunwoo Park, Dong-Kyu Kim, Seung-Young Kho
Abstract:
The purpose of this study is to provide an improved mode choice model whose parameters account for age groups, from the prime-aged to the elderly. Data from the 2010 Household Travel Survey were used, and improper samples were removed during the analysis. The chosen alternative, date of birth, mode, origin code, destination code, departure time, and arrival time were taken from the Household Travel Survey. By preprocessing the data, travel time, travel cost, mode, and the ratios of people aged 45 to 55 years, 55 to 65 years, and over 65 years were calculated. After this preparation, the mode choice model was estimated in LIMDEP by maximum likelihood. A significance test was conducted for the nine parameters (three age groups for each of three modes), and then repeated for the mode choice model with the significant parameters plus the travel cost and travel time variables. The model estimation shows that, as age increases, the preference for the car decreases and the preference for the bus increases. This study is meaningful in that individual and household characteristics are applied to the aggregate model.
Keywords: age grouping, aging, mode choice model, multinomial logit model
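The multinomial logit model named in the keywords assigns each mode a utility V and the choice probability P(i) = exp(V_i) / Σ_j exp(V_j). The sketch below illustrates this mechanic with placeholder coefficients and alternative-specific constants (b_time, b_cost, and the ASC values are invented for illustration, not the LIMDEP estimates from the study).

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    exps = [math.exp(v) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def utility(travel_time, travel_cost, asc, b_time=-0.05, b_cost=-0.0002):
    # V = ASC + b_time * time + b_cost * cost; coefficients are
    # illustrative placeholders, with negative signs reflecting that
    # longer/costlier trips are less attractive.
    return asc + b_time * travel_time + b_cost * travel_cost

# Hypothetical trip: (travel time in minutes, travel cost, ASC) per mode.
modes = {"car": (30, 3000, 0.0), "bus": (45, 1200, -0.5), "rail": (40, 1350, -0.3)}
V = [utility(t, c, asc) for t, c, asc in modes.values()]
P = mnl_probabilities(V)  # choice probabilities, summing to 1
```

In estimation, the age-group ratios described in the abstract would enter as additional terms in each mode's utility, which is how the model captures the shift from car toward bus as age increases.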
Procedia PDF Downloads 322