Search results for: grammatical errors

853 A Bayesian Multivariate Microeconometric Model for Estimation of Price Elasticity of Demand

Authors: Jefferson Hernandez, Juan Padilla

Abstract:

Estimation of price elasticity of demand is a valuable tool for the task of price setting. Given its relevance, it is an active field for microeconomic and statistical research. Price elasticity in the oil and gas industry, in particular for fuels sold at gas stations, has proven to be a challenging topic given market and state restrictions and the underlying correlation structures between the types of fuels sold by the same gas station. This paper explores the Lotka-Volterra model for the problem of price elasticity estimation in the context of fuels; in addition, multivariate random effects are introduced to deal with errors, e.g., measurement or missing data errors. In order to model the underlying correlation structures, the Inverse-Wishart, Hierarchical Half-t, and LKJ distributions are studied. Model estimation is carried out under the Bayesian paradigm via Markov Chain Monte Carlo (MCMC) algorithms. Simulation studies covering a wide range of situations were performed in order to evaluate parameter recovery for the proposed models and algorithms. Results revealed that the proposed algorithms recovered all model parameters quite well. A real data set was also analyzed to illustrate the proposed approach.
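
As an illustration only (not the authors' code), the sketch below sets up a small bivariate random-effects demand model with an LKJ prior on the correlation structure and samples it by MCMC using PyMC; the data, priors, and variable names are all assumptions.

```python
# A minimal sketch (illustrative assumptions throughout) of a bivariate
# random-effects log-log demand model with an LKJ prior, sampled by MCMC.
import numpy as np
import pymc as pm

# Toy data: log prices and log volumes for two fuel types at S stations
S, T = 20, 50
rng = np.random.default_rng(0)
log_price = rng.normal(0.0, 0.2, size=(S, T, 2))
log_volume = 5.0 - 0.8 * log_price + rng.normal(0.0, 0.1, size=(S, T, 2))

with pm.Model() as model:
    alpha = pm.Normal("alpha", 0.0, 5.0, shape=2)   # intercepts per fuel type
    beta = pm.Normal("beta", 0.0, 2.0, shape=2)     # price elasticities
    # LKJ prior on the 2x2 covariance of station-level random effects
    chol, corr, stds = pm.LKJCholeskyCov(
        "chol", n=2, eta=2.0, sd_dist=pm.Exponential.dist(1.0)
    )
    u = pm.MvNormal("u", mu=np.zeros(2), chol=chol, shape=(S, 2))
    mu = alpha + beta * log_price + u[:, None, :]   # broadcast over time
    sigma_y = pm.Exponential("sigma_y", 1.0)
    pm.Normal("y", mu=mu, sigma=sigma_y, observed=log_volume)
    idata = pm.sample(1000, tune=1000, chains=2)    # short demo run
```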

Keywords: price elasticity, volume, correlation structures, Bayesian models

Procedia PDF Downloads 165
852 A Study on the Acquisition of Chinese Classifiers by Vietnamese Learners

Authors: Quoc Hung Le Pham

Abstract:

In the field of language study, the classifier is an interesting research feature. Among the world's languages, some have classifier systems and some do not. Mandarin Chinese and Vietnamese both have rich classifier systems; however, because of differences in their language systems, cognition, and culture, the syntactic structures of their classifiers are also dissimilar. Mandarin Chinese classifiers must collocate with nouns or verbs, yet as a lexical category they are unlike nouns or verbs, which belong to the open class. Some scholars, however, believe that Mandarin Chinese measure words are similar to those of English and other Indo-European languages: words that hang on structure and word formation (suffixes) and form a closed class. Chinese, Vietnamese, Thai, and other Asian languages belong to the second type of classifier language, the numeral classifier languages, in which a classifier must accompany most expressions of quantity, appears together with deictic, anaphoric, or quantity elements, and is not separated from the noun it modifies. The main syntactic structures of Chinese classifiers are: 'quantity + measure + noun', 'pronoun + measure + noun', 'pronoun + quantity + measure + noun', 'prefix + quantity + measure + noun', 'quantity + adjective + measure + noun', 'quantity (whole number above 10) + duo (多) + measure + noun', and 'quantity (around 10) + measure + duo (多) + noun'. The main syntactic structures of Vietnamese classifiers are: 'quantity + measure + noun', 'measure + noun + pronoun', 'quantity + measure + noun + pronoun', 'measure + noun + prefix + quantity', 'quantity + measure + noun + adjective', 'duo (多) + quantity + measure + noun', 'quantity + measure + adjective + pronoun (the quantity word cannot be 1)', 'measure + adjective + pronoun', and 'measure + pronoun'. Classifiers are commonly used in daily life, so if Chinese learners fail to use this category in a standard way, a negative impact may occur on their verbal communication. The richness of the Chinese classifier system adds to the complexity of its study by foreign learners, especially in the interlanguage of Vietnamese learners. As mentioned above, Vietnamese also has a rich system of classifiers; the basic structural orders of the two languages are similar, but differences remain. These similarities and dissimilarities between the Chinese and Vietnamese classifier systems contribute significantly to the common errors made by Vietnamese students while they acquire Chinese, which are distinct from the errors made by students from other language backgrounds. From a comparative linguistic perspective, this article examines the classifiers commonly used in Chinese and Vietnamese in two respects: semantics and structural form. The comparative study aims to identify the negative transfer from the mother tongue that Vietnamese students may face while learning Chinese classifiers and, through the analysis of a classifier questionnaire, to find the causes and patterns of the errors they make. Preliminary analysis shows that Vietnamese students learning Chinese classifiers made errors such as: overuse of the classifier 'ge' (个); misuse of other classifiers, e.g., '*yi zhang ri ji' (for 'yi pian ri ji'), '*yi zuo fang zi' (for 'yi jian fang zi'), '*si zhang jin pai' (for 'si mei jin pai'); and confusion among the classifiers 'dui, shuang, fu, tao' (对、双、副、套) and 'ke, li' (颗、粒).

Keywords: acquisition, classifiers, negative transfer, Vietnamese learners

Procedia PDF Downloads 452
851 A Comparative Case Study on Teaching Romanian Language to Foreign Students: Swedes in Lund versus Arabs in Alba Iulia

Authors: Lucian Vasile Bagiu, Paraschiva Bagiu

Abstract:

The study is a contrastive essay on language acquisition and learning and follows the outcomes of teaching the Romanian language to foreign students both at Lund University, Sweden (from 2014 to 2017) and at the '1 Decembrie 1918' University in Alba Iulia, Romania (2017-2018). Having employed the same teaching methodology (on campus, same curricula) for the same level of study (beginners' level: A1-A2), the essay focuses on the written exam at the end of the semester. The study examines grammar exercises concerned with: the indefinite and the definite article; the conjugation of verbs in the present indicative; the possessive; verbs in the past tense; the subjunctive; and the degrees of comparison of adjectives. Identifying similar errors made by different groups of foreign students when solving identical grammar exercises is an opportunity to emphasize the major challenges any foreigner has to face and overcome when trying to acquire Romanian. The conclusion draws attention to the complexity of the morphology of the Romanian language in several key elements, which may be insurmountable for a foreign speaker whether the language acquisition takes place in a foreign country or in a Romanian university.

Keywords: Arab students, morphological errors, Romanian language, Swedish students, written exam

Procedia PDF Downloads 258
850 Language in Court: Ideology, Power and Cognition

Authors: Mehdi Damaliamiri

Abstract:

Undoubtedly, the power of language is hardly a new topic; indeed, the persuasive power of language accompanied by ideology has long been recognized in different aspects of life. The two-and-a-half-thousand-year-old Bisitun inscriptions in Iran, proclaiming the victories of the Persian king Darius, are considered by some historians to be an early example of the use of propaganda. Added to this, the modern age is the true cradle of fully-fledged ideologies and the ongoing process of centrifugal ideologization. The most visible work on ideology today within the field of linguistics is Critical Discourse Analysis (CDA). The focus of CDA is on "uncovering injustice, inequality, taking sides with the powerless and suppressed" and making "mechanisms of manipulation, discrimination, demagogy, and propaganda explicit and transparent." One possible way of relating language to ideology is to propose that ideology and language are inextricably intertwined. From this perspective, language is always ideological, and ideology depends on language. All language use involves ideology, and so ideology is ubiquitous, in our everyday encounters as much as in the struggle for power within and between nation-states and social statuses. At the same time, ideology requires language: its key characteristics, its power and pervasiveness, its mechanisms for continuity and for change, all come out of the inner organization of language. The two phenomena are homologous: they share the same evolutionary trajectory. To get a more robust portrait of power and ideology, we need to examine their potential place in structure and consider how such structures pattern in terms of the functional elements which organize meanings in the clause. This draws on the view, now immensely popular, that all grammatical knowledge, including syntactic knowledge, is stored mentally as constructions. When the structure of the clause is taken into account, power and ideology show a preference for Complement over Subject and Adjunct. The Subject is a central interpersonal element in discourse: it is one of the two elements that form the central interactive nub of a proposition. Conceptually, there are countless ways of construing a given event, and linguistically, a variety of grammatical devices are usually available as alternate means of coding a given conception, such as political crime and corruption. In the theory of construal, then, which, like transitivity in Halliday's account, makes options available, Cognitive Linguistics can offer a cognitive account of ideology in language, where ideology is made possible by the choices a language allows for representing the same material situation in different ways. The possibility of promoting alternative construals of the same reality means that any particular choice of representation is always ideologically constrained or motivated and indicates the perspective and interests of the text producer.

Keywords: power, ideology, court, discourse

Procedia PDF Downloads 163
849 Indoor Real-Time Positioning and Mapping Based on Manhattan Hypothesis Optimization

Authors: Linhang Zhu, Hongyu Zhu, Jiahe Liu

Abstract:

This paper investigates a method of indoor real-time positioning and mapping based on the Manhattan world assumption. In indoor environments, relying solely on feature matching techniques or other geometric algorithms for sensor pose estimation inevitably results in cumulative errors, posing a significant challenge to indoor positioning. To address this issue, we adopt the Manhattan world hypothesis to optimize the feature-matching-based camera pose algorithm, which improves the accuracy of camera pose estimation. A special processing method is applied to image data frames that conform to the Manhattan world assumption. When similar data frames appear subsequently, they can be used to eliminate drift in sensor pose estimation, thereby reducing cumulative errors and improving mapping and positioning. Experimental verification shows that our method achieves high-precision real-time positioning in indoor environments and successfully generates maps of them. This provides effective technical support for applications such as indoor navigation and robot control.
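
The following numpy sketch illustrates one plausible form of the Manhattan-alignment step described above (an assumption, not the authors' implementation): a drifted camera rotation is projected onto the nearest proper rotation and then snapped to the dominant Manhattan axes.

```python
# A minimal sketch (assumption, not the authors' implementation) of snapping
# an estimated camera rotation to the nearest Manhattan-consistent frame.
import numpy as np

def nearest_manhattan_rotation(R_est: np.ndarray) -> np.ndarray:
    """Project a drifted rotation estimate onto SO(3) (orthogonal Procrustes),
    then snap each column to its dominant world axis (simplified: assumes the
    argmax picks three distinct axes, which holds for small drift)."""
    U, _, Vt = np.linalg.svd(R_est)
    R = U @ Vt                        # closest orthogonal matrix to R_est
    if np.linalg.det(R) < 0:          # enforce a proper rotation (det = +1)
        U[:, -1] *= -1
        R = U @ Vt
    snapped = np.zeros((3, 3))
    for j in range(3):
        k = np.argmax(np.abs(R[:, j]))      # dominant Manhattan axis
        snapped[k, j] = np.sign(R[k, j])
    return snapped

# Usage: remove accumulated drift on a frame satisfying the assumption
R_drifted = np.array([[0.99, -0.05, 0.02],
                      [0.05,  0.99, -0.03],
                      [-0.02, 0.03,  0.99]])
print(nearest_manhattan_rotation(R_drifted))   # snaps to the identity frame
```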

Keywords: Manhattan world hypothesis, real-time positioning and mapping, feature matching, loopback detection

Procedia PDF Downloads 61
848 Determining Components of Deflection of the Vertical in Owerri West Local Government, Imo State Nigeria Using Least Square Method

Authors: Chukwu Fidelis Ndubuisi, Madufor Michael Ozims, Asogwa Vivian Ndidiamaka, Egenamba Juliet Ngozi, Okonkwo Stephen C., Kamah Chukwudi David

Abstract:

The deflection of the vertical is a quantity used to reduce geodetic measurements related to geoidal networks to the ellipsoidal plane, and it is essential in geoid modeling processes. Computing the deflection-of-the-vertical components of a point in a given area is necessary for evaluating the standard errors along the north-south and east-west directions. Using a combined approach for determining the deflection-of-the-vertical components provides improved results, but it is labor-intensive without an appropriate method. The least squares method makes use of redundant observations in modeling a given set of problems that obey certain geometric conditions. This research aims to compute the deflection-of-the-vertical components for Owerri West Local Government Area of Imo State using the geometric method as the field technique. In this method, a combination of static-mode Global Positioning System (GPS) and precise leveling observations was utilized: the geodetic coordinates of points established within the study area were determined by GPS observation, and the orthometric heights by precise leveling. By least squares, using a MATLAB program, the estimated deflection-of-the-vertical components for the common station were -0.0286 and -0.0001 arc seconds for the north-south and east-west components, respectively. The associated standard errors of the processed vectors of the network were also computed: 5.5911e-005 arc seconds for the north-south component and 1.4965e-004 arc seconds for the east-west component. Therefore, adding the derived deflection-of-the-vertical components to the ellipsoidal model will yield higher observational accuracy, since a purely ellipsoidal model is not tenable for high-quality work owing to its large observational error. It is therefore important to include the determined deflection-of-the-vertical components for Owerri West Local Government in Imo State, Nigeria.
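
A minimal numpy sketch of the least-squares step, assuming a toy design matrix and misclosure vector (the authors used a MATLAB program with real GPS/leveling observations): solve the redundant observation equations for (xi, eta) and derive the standard errors from the reference variance and cofactor matrix.

```python
# Illustrative least-squares adjustment: A x = b with x = (xi, eta).
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [2.0, -1.0]])          # 4 observations, 2 unknowns (redundant)
b = np.array([-0.0285, -0.0002, -0.0288, -0.0570])  # arc seconds (toy values)

x, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
v = A @ x - b                        # residuals
dof = A.shape[0] - A.shape[1]        # degrees of freedom
sigma0_sq = (v @ v) / dof            # a posteriori reference variance
Qxx = np.linalg.inv(A.T @ A)         # cofactor matrix of the unknowns
std_errors = np.sqrt(sigma0_sq * np.diag(Qxx))
print("xi, eta =", x, "| std errors =", std_errors)
```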

Keywords: deflection of vertical, ellipsoidal height, least square, orthometric height

Procedia PDF Downloads 209
847 Energy Detection Based Sensing and Primary User Traffic Classification for Cognitive Radio

Authors: Urvee B. Trivedi, U. D. Dalal

Abstract:

As wireless communication services grow rapidly, the problem of spectrum utilization has gradually become more serious. Cognitive radio, an emerging technology, has come out to solve today's spectrum scarcity problem. To support the spectrum reuse functionality, secondary users are required to sense the radio frequency environment and, once the primary users are found to be active, to vacate the channel within a certain amount of time. Therefore, spectrum sensing is of significant importance. Once sensing is done, different prediction rules apply to classify the traffic pattern of the primary user. Primary users follow two types of traffic patterns: periodic and stochastic ON-OFF patterns. A cognitive radio can learn the patterns in different channels over time. Two types of classification methods are discussed in this paper: one based on edge detection and one using the autocorrelation function. The edge detection method has high accuracy, but it cannot tolerate sensing errors. Autocorrelation-based classification is applicable in real environments, as it can tolerate some sensing errors.
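
The sketch below (illustrative assumptions, not the authors' exact scheme) shows the two ingredients named in the abstract: an FFT-based energy detector and an autocorrelation test that classifies a sensed ON-OFF sequence as periodic or stochastic while tolerating some flipped samples.

```python
import numpy as np

def energy_detect(samples: np.ndarray, threshold: float) -> bool:
    """Declare the channel busy if the average per-sample energy
    (computed in the frequency domain) exceeds the threshold."""
    spectrum = np.fft.fft(samples)
    energy = np.mean(np.abs(spectrum) ** 2) / len(samples)
    return energy > threshold

def classify_pattern(on_off: np.ndarray) -> str:
    """Classify a sensed ON-OFF sequence via its autocorrelation:
    a strong secondary peak suggests a periodic pattern."""
    x = on_off - on_off.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]                           # normalise to zero lag
    return "periodic" if np.max(ac[1:]) > 0.5 else "stochastic"

rng = np.random.default_rng(1)
periodic = np.tile([1, 1, 0, 0], 25)          # period-4 ON-OFF pattern
# flip ~5% of samples to mimic sensing errors
noisy = np.where(rng.random(100) < 0.05, 1 - periodic, periodic)
print(classify_pattern(noisy))                # still classified as periodic
```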

Keywords: cognitive radio (CR), probability of detection (PD), probability of false alarm (PF), primary user (PU), secondary user (SU), fast Fourier transform (FFT), signal to noise ratio (SNR)

Procedia PDF Downloads 345
846 A Stylistic Analysis of the Short Story ‘The Escape’ by Qaisra Shahraz

Authors: Huma Javed

Abstract:

Stylistics is a broad field concerned with both literature and linguistics, which increases its significance. This research aims to analyze Qaisra Shahraz's short story 'The Escape' from a stylistic viewpoint. The focus of the study is on three aspects of the short story: the grammatical category, the lexical category, and figures of speech. The research design of this article is both explorative and descriptive. The analysis of the data shows that the writer has used more nouns in the story than other lexical items, which suggests that the story has a descriptive rather than a narrative style.
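
As a rough illustration of the workflow (an assumption; the author does not name a tool), lexical categories can be counted with an off-the-shelf POS tagger to compare the share of nouns against other word classes:

```python
# Counting lexical categories with NLTK (illustrative text, not the story).
import nltk
from collections import Counter

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

text = "Farida stared at the letter. The escape had begun quietly."
tokens = nltk.word_tokenize(text)
tags = nltk.pos_tag(tokens)                   # Penn Treebank tags
counts = Counter(tag[:2] for _, tag in tags)  # group NN/NNS/NNP... under 'NN'
print(counts.most_common())                   # a high 'NN' share suggests a descriptive style
```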

Keywords: The Escape, stylistics, grammatical category, lexical category, figure of speech

Procedia PDF Downloads 237
845 Computer Assisted Strategies Help to Pharmacist

Authors: Komal Fizza

Abstract:

All around the world, professionals in every field take great support from their computers. Computer-assisted strategies not only increase the efficiency of professionals but, in the case of healthcare, also help in life-saving interventions. This research is aimed at two things: first, to find out whether computer-assisted strategies are useful for pharmacists or not, and second, how much they help a pharmacist make quality interventions. Shifa International Hospital is a 500-bed hospital running an antimicrobial stewardship program. During stewardship rounds, pharmacists observed that many wrong antibiotic doses were being ordered, which were at times overlooked even by other pharmacists. So, with the help of the MIS team, patients were categorized as adult or pediatric depending on their age. Minimum and maximum doses were defined for every antibiotic present in the pharmacy that could be dispensed to a patient. These were linked to the order entry window, so whenever a pharmacist typed an order with a dose below or above the therapeutic limit, an alert was shown to the pharmacist. Each pop-up alert was recorded at the back end along with the antibiotic name, pharmacist ID, date, and time. From 14 January 2015 to 14 March 2015, the software stopped different users 350 times. Of these, 300 were major errors which, had they reached the patient, could have caused substantial harm, while 50 were due to typing errors and minor deviations. The pilot study showed that computer-assisted strategies can be of great help to the pharmacist, improving the efficacy and quality of interventions.
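
A simplified sketch of the order-entry check described above; the antibiotic names, limits, and field names are illustrative assumptions, not the hospital's actual MIS integration.

```python
# Dose-range alert at order entry (illustrative limits, not clinical guidance).
from datetime import datetime

# Per-antibiotic therapeutic limits in mg/day, split by patient category
DOSE_LIMITS = {
    "ceftriaxone": {"adult": (1000, 4000), "pediatric": (250, 2000)},
    "meropenem":   {"adult": (1500, 6000), "pediatric": (300, 3000)},
}
alert_log = []  # back-end record of every alert raised

def check_order(antibiotic: str, dose_mg: float, age_years: float,
                pharmacist_id: str) -> bool:
    """Return True if the dose is within limits; otherwise log an alert."""
    category = "adult" if age_years >= 18 else "pediatric"
    low, high = DOSE_LIMITS[antibiotic][category]
    if low <= dose_mg <= high:
        return True
    alert_log.append({
        "antibiotic": antibiotic,
        "pharmacist": pharmacist_id,
        "timestamp": datetime.now().isoformat(),
    })
    return False

print(check_order("ceftriaxone", 8000, 45, "PH-017"))  # False -> alert logged
```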

Keywords: antibiotics, computer assisted strategies, pharmacist, stewardship

Procedia PDF Downloads 491
844 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure

Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer

Abstract:

The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among others. This highly integrated multimodal behaviour is analysed based on video data, with the aim of uncovering so-called "multimodal gestalts": patterns of linguistic and embodied conduct that recur in specific sequential positions and are employed for specific purposes. Multimodal analyses (and other disciplines using videos) have so far depended on time- and resource-intensive processes of manually transcribing each component from video materials. Automating these tasks requires advanced programming skills, which are often not within the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data, which are suitable for qualitative analysis but not sufficient for making generalized empirical claims derived quantitatively. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated on one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data. The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, and grammatical annotation for adding morphological and syntactic information to the verbal content. In the ongoing instance of VIAN-DH, we focus on gesture extraction (pointing gestures, in particular), making use of existing models created for sign language and adapting them for this specific purpose. In order to view and search the data, VIAN-DH will provide a unified format and enable the import of the main existing formats of annotated video data and the export to other formats used in the field, while integrating different data source formats in such a way that they can be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable parallel search across many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial and potentially revolutionary innovation to the field of IL (one that can also extend to other fields using video materials). It will allow large amounts of data to be processed automatically and quantitative analyses to be implemented, combining them with the qualitative approach. It will facilitate the investigation of correlations between linguistic patterns (lexical or grammatical) and conversational aspects (turn-taking or gestures). Users will be able to automatically transcribe and annotate visual, spoken, and grammatical information from videos, to correlate those different levels, and to perform queries and analyses.
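
As a small illustration of the layer-correlation idea (an assumed data model, not VIAN-DH's actual schema), time-aligned annotation layers can be queried for overlap, e.g., transcribed tokens co-occurring with a detected pointing gesture:

```python
# Correlating two annotation layers by time span (illustrative data model).
from dataclasses import dataclass

@dataclass
class Span:
    start: float   # seconds
    end: float
    label: str

tokens = [Span(0.0, 0.4, "look"), Span(0.4, 0.9, "over"), Span(0.9, 1.3, "there")]
gestures = [Span(0.7, 1.5, "pointing")]

def overlapping(layer_a, layer_b):
    """Yield pairs from two layers whose time spans intersect."""
    for a in layer_a:
        for b in layer_b:
            if a.start < b.end and b.start < a.end:
                yield a, b

for tok, ges in overlapping(tokens, gestures):
    print(f"'{tok.label}' co-occurs with a {ges.label} gesture")
```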

Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition

Procedia PDF Downloads 108
843 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data

Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini

Abstract:

A considerable part of the rainfall data used in hydrological practice is available in aggregated form over constant time intervals. This can produce undesirable effects, such as the underestimation of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is affecting extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d=ta, the estimation of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation error values follow an exponential probability density function; 3) each very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations in Central Italy, may overcome this issue; 5) these relationships should make it possible to improve the Hd estimates and the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
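
The numpy sketch below reproduces the mechanism behind the underestimation (with synthetic data, not the Central Italy records): the true Hd is a maximum over all window positions, while coarsely aggregated data only permits windows aligned to the aggregation interval, so for d = ta the aggregated estimate can only be lower.

```python
import numpy as np

rng = np.random.default_rng(42)
rain_1min = rng.exponential(0.05, size=60 * 24)   # one day of 1-min depths (mm)

d = 60  # duration of interest, in minutes
# True H_d: maximum over every 60-min window, starting at any minute
sliding = np.convolve(rain_1min, np.ones(d), mode="valid")
h_true = sliding.max()
# Coarse H_d with t_a = d: windows aligned to clock hours only
h_coarse = rain_1min.reshape(-1, d).sum(axis=1).max()

print(f"true={h_true:.2f} mm, coarse={h_coarse:.2f} mm, "
      f"underestimate={(1 - h_coarse / h_true) * 100:.1f}%")
```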

Keywords: central Italy, extreme events, rainfall data, underestimation errors

Procedia PDF Downloads 191
842 A Syntactic Approach to Applied and Socio-Linguistics in Arabic Language in Modern Communications

Authors: Adeyemo Abduljeeel Taiwo

Abstract:

This research attempts to create a conducive atmosphere for a phonological and morphological compendium of the Arabic language in Modern Standard Arabic (MSA) for modern-day communication. The research is carried out with the chief aim of a grammatical analysis of two broad fields of Arabic linguistics, namely applied linguistics and socio-linguistics. It draws a pictorial record of applied and socio-linguistics in Arabic phonology and morphology. Thematically, it postulates and contemplates, to a large degree, the theory of concord in contemporary Modern Arabic language acquisition. It utilizes an analytical method while portraying Arabic as a Semitic language that promotes linguistics and syntax among scholars of the field.

Keywords: Arabic language, applied linguistics, socio-linguistics, modern communications

Procedia PDF Downloads 331
841 Efficient Principal Components Estimation of Large Factor Models

Authors: Rachida Ouysse

Abstract:

This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient, as the errors are treated as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild's (1983) approximate factor structure, as an explicit constraint, and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum likelihood type methods, the CnPC method does not require inverting a large covariance matrix and is thus valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and than the generalized PC type estimators, especially for panels with N almost as large as T.
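
For orientation, the sketch below shows the baseline PC estimator that CnPC improves on, with factors taken from the leading eigenvectors of the T x T covariance XX'/(NT); the ridge term stands in for the paper's regularization, whose exact form the abstract does not give.

```python
# Baseline PC factor estimation (the ridge term is a placeholder assumption).
import numpy as np

def pc_factors(X: np.ndarray, r: int, ridge: float = 0.0) -> np.ndarray:
    """X is a T x N panel; returns the T x r estimated factors."""
    T, N = X.shape
    S = X @ X.T / (N * T)                      # T x T covariance, valid for N >= T
    S = S + ridge * np.eye(T)                  # illustrative regularization
    eigvals, eigvecs = np.linalg.eigh(S)       # ascending eigenvalues
    return np.sqrt(T) * eigvecs[:, ::-1][:, :r]   # leading eigenvectors, scaled

rng = np.random.default_rng(7)
T, N, r = 50, 200, 2                           # N > T, as the method permits
F_true = rng.normal(size=(T, r))
L_true = rng.normal(size=(N, r))
X = F_true @ L_true.T + rng.normal(scale=0.5, size=(T, N))
print(pc_factors(X, r).shape)                  # (50, 2)
```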

Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting

Procedia PDF Downloads 150
840 Formulation of a Stress Management Program for Human Error Prevention in Nuclear Power Plants

Authors: Hyeon-Kyo Lim, Tong-il Jang, Yong-Hee Lee

Abstract:

As for any nuclear power plant, human error is one of the most dreaded factors that may result in unexpected accidents. Thus, for accident prevention, it is indispensable to analyze and manage the influence of any factor which may raise the possibility of human errors. Among many factors, stress has been reported to have a significant influence on human performance, and a person's stress level may fluctuate over time. To handle this variability over time, a robust stress management program is required, especially in nuclear power plants. Therefore, to reduce the possibility of human errors, this study aimed to develop a stress management program as part of a Fitness-for-Duty (FFD) program for workers in nuclear power plants. Since the meaning of FFD may differ according to research objectives, an appropriate definition of FFD was established in this study with special reference to human error prevention, and diverse stress factors were elicited for the management of human error susceptibility. In addition, in consideration of conventional FFD management programs, appropriate tests and interventions were introduced over the whole employment cycle, including the selection and screening of workers, job allocation, job rotation, and termination of employment, as well as an Employee Assistance Program (EAP). The results showed that most tools concentrated their weights mainly on common organizational factors such as Demands, Supports, and Relationships, in that order, which were regarded as the major stress factors.

Keywords: human error, accident prevention, work performance, stress, fatigue

Procedia PDF Downloads 326
839 Clinical Outcomes of Toric Implantable Collamer Lens (T-ICL) and Toric Implantable Phakic Contact Lens (IPCL) for Correction of High Myopia with Astigmatism: Comparative Study

Authors: Mohamed Salah El-Din Mahmoud, Heba Radi Atta Allah

Abstract:

Background: Our study assesses the safety profile and efficacy of the toric Implantable Collamer Lens (T-ICL) and the toric implantable phakic contact lens (IPCL) for the correction of high myopia with astigmatism. Methods: A prospective interventional randomized comparative study included 60 myopic eyes divided into two groups: group A, 30 eyes implanted with T-ICL, and group B, 30 eyes implanted with toric IPCL. The refractive results, visual acuity, corneal endothelial cell count, and intraocular pressure (IOP) were evaluated at baseline and at 1, 6, and 9 months post-surgery. Any complications, either during or after surgery, were assessed. Results: A significant reduction in both spherical and cylindrical refractive errors, with good predictability, was reported in both groups compared with preoperative values. Regarding predictability, in the T-ICL group (A), the median spherical and cylindrical errors improved significantly from (-10 D and -4.5 D) preoperatively to (-0.25 D and -0.3 D) at the end of the 9-month follow-up period. Similarly, in the toric IPCL group (B), the median spherical and cylindrical errors improved significantly from (-11 D and -4.5 D) preoperatively to (-0.25 D and -0.3 D) at the end of the 9-month follow-up period. A statistically significant improvement in UCDVA at 9 months postoperatively was found in both groups: the median preoperative LogMAR UCDVA was 1.1 and 1.3 in groups A and B, respectively, improving significantly to 0.2 in both groups by the end of the follow-up period. Regarding IOP, no significant difference was found between the two groups, either preoperatively or during the postoperative period. Regarding the endothelial count, no significant differences were found between the two groups during the preoperative and postoperative follow-up periods. Fortunately, no intra- or postoperative complications such as cataract, keratitis, or lens decentration occurred. Conclusions: Toric IPCL is a suitable alternative to T-ICL for the management of high myopia with astigmatism, especially in developing countries, as it is cheaper and easier to implant than T-ICL. However, data over longer follow-up periods are needed to confirm its safety and stability.

Keywords: T-ICL, Toric IPCL, IOP, corneal endothelium

Procedia PDF Downloads 148
838 Multi-Scale Control Model for Network Group Behavior

Authors: Fuyuan Ma, Ying Wang, Xin Wang

Abstract:

Social networks have become breeding grounds for the rapid spread of rumors and malicious information, posing threats to societal stability and causing significant public harm. Existing research focuses on simulating the spread of information and its impact on users through propagation dynamics and applies methods such as greedy approximation strategies to approximate the optimal control solution at the global scale. However, the greedy strategy at the global scale may fall into locally optimal solutions, and the approximate simulation of information spread may accumulate more errors. Therefore, we propose a multi-scale control model for network group behavior, introducing individual and group scales on top of the greedy strategy’s global scale. At the individual scale, we calculate the propagation influence of nodes based on their structural attributes to alleviate the issue of local optimality. At the group scale, we conduct precise propagation simulations to avoid introducing cumulative errors from approximate calculations without increasing computational costs. Experimental results on three real-world datasets demonstrate the effectiveness of our proposed multi-scale model in controlling network group behavior.
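
A minimal sketch of the propagation machinery involved (illustrative, not the paper's full competitive multi-scale model): one synchronous linear-threshold step, plus a degree-based structural influence score of the kind usable at the individual scale.

```python
import numpy as np

def lt_step(W: np.ndarray, active: np.ndarray, thresholds: np.ndarray):
    """One synchronous linear-threshold update: a node activates when the
    summed weights from its active in-neighbours reach its threshold."""
    influence = W.T @ active            # incoming activation per node
    return active | (influence >= thresholds)

def degree_influence(W: np.ndarray) -> np.ndarray:
    """Structural influence proxy: weighted out-degree of each node."""
    return W.sum(axis=1)

W = np.array([[0.0, 0.6, 0.4],
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.0]])         # row i holds weights to i's neighbours
thresholds = np.array([0.5, 0.5, 0.8])
active = np.array([True, False, False]) # rumor seeded at node 0

active = lt_step(W, active, thresholds) # node 1 activates (0.6 >= 0.5)
active = lt_step(W, active, thresholds) # node 2 activates (0.4 + 0.5 >= 0.8)
print(active, degree_influence(W))
```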

Keywords: influence blocking maximization, competitive linear threshold model, social networks, network group behavior

Procedia PDF Downloads 21
837 True Single SKU Script: Applying the Automated Test to Set Software Properties in a Global Software Development Environment

Authors: Antonio Brigido, Maria Meireles, Francisco Barros, Gaspar Mota, Fernanda Terra, Lidia Melo, Marcelo Reis, Camilo Souza

Abstract:

As the globalization of the software process advances, companies are increasingly committed to improving software development technologies across multiple locations. On the other hand, working with teams distributed across different locations also raises new challenges. In this sense, automated processes can help to improve the quality of process execution. Therefore, this work presents the development of a tool called TSS Script that automates the sample preparation process for carrier requirements validation tests. The objective of the work is to obtain significant gains in execution time and to reduce errors in scenario preparation. To estimate the time gains, executions performed both automatically and manually were timed. In addition, a questionnaire-based survey was conducted to discover new requirements and improvements to include in this automated support. The results show an average saving of 46.67% of the total hours worked on sample preparation. The use of the tool avoids human errors and therefore adds quality and speed to the process. Another relevant factor is that the tester can perform other activities in parallel with sample preparation.

Keywords: Android, GSD, automated testing tool, mobile products

Procedia PDF Downloads 317
836 AI-Based Technologies for Improving Patient Safety and Quality of Care

Authors: Tewelde Gebreslassie Gebreanenia, Frie Ayalew Yimam, Seada Hussen Adem

Abstract:

Patient safety and quality of care are essential goals of health care delivery, but they are often compromised by human errors, system failures, or resource constraints. In a variety of healthcare contexts, artificial intelligence (AI), a quickly developing field, can provide fresh approaches to enhancing patient safety and treatment quality. Artificial Intelligence (AI) has the potential to decrease errors and enhance patient outcomes by carrying out tasks that would typically require human intelligence. These tasks include the detection and prevention of adverse events, monitoring and warning patients and clinicians about changes in vital signs, symptoms, or risks, offering individualized and evidence-based recommendations for diagnosis, treatment, or prevention, and assessing and enhancing the effectiveness of health care systems and services. This study examines the state-of-the-art and potential future applications of AI-based technologies for enhancing patient safety and care quality, as well as the opportunities and problems they present for patients, policymakers, researchers, and healthcare providers. In order to ensure the safe, efficient, and responsible application of AI in healthcare, the paper also addresses the ethical, legal, social, and technical challenges that must be addressed and regulated.

Keywords: artificial intelligence, health care, human intelligence, patient safety, quality of care

Procedia PDF Downloads 78
835 An Approach for Detection Efficiency Determination of High Purity Germanium Detector Using Cesium-137

Authors: Abdulsalam M. Alhawsawi

Abstract:

Estimation of a radiation detector's efficiency plays a significant role in calculating the activity of radioactive samples. Detector efficiency is measured using sources that emit a variety of energies, from low- to high-energy photons, along the energy spectrum. Some photon energies are hard to produce in lab settings, either because check sources are hard to obtain or because the sources have short half-lives. This work aims to develop a method to determine the efficiency of a High Purity Germanium Detector (HPGe) based on the 662 keV gamma-ray photon emitted by Cs-137. Cesium-137 is readily available in most labs with radiation detection and health physics applications and has a long half-life of ~30 years. Several photon efficiencies were calculated using the MCNP5 simulation code. The simulated efficiency of the 662 keV photon was used as a base to calculate other photon efficiencies for a point source and a Marinelli beaker geometry. In the case of a Marinelli beaker filled with water, the efficiency of the 59 keV low-energy photons from Am-241 was estimated with a 9% error compared to the MCNP5-simulated efficiency. The 1.17 and 1.33 MeV high-energy photons emitted by Co-60 had errors of 4% and 5%, respectively. The estimated errors are considered acceptable in calculating the activity of unknown samples, as they fall within the 95% confidence level.
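
The arithmetic of the proposed approach can be sketched as follows (toy efficiency values, not the paper's results): the simulated efficiency curve is anchored to the measured 662 keV efficiency, and the scaled efficiencies then feed the usual activity equation.

```python
# Relative-efficiency scaling anchored at the Cs-137 662 keV line.
eff_662_measured = 0.0215          # measured full-energy-peak efficiency (toy)
eff_662_sim = 0.0220               # MCNP5-simulated efficiency at 662 keV (toy)

# MCNP5-simulated efficiencies at other photon energies (toy values)
eff_sim = {59: 0.0410, 1170: 0.0135, 1330: 0.0122}   # keV -> efficiency

def estimated_efficiency(energy_kev: int) -> float:
    """Scale the simulated curve so it passes through the 662 keV measurement."""
    return eff_sim[energy_kev] * (eff_662_measured / eff_662_sim)

def activity_bq(net_counts: float, live_time_s: float,
                energy_kev: int, gamma_yield: float) -> float:
    """Activity A = C / (eff * yield * t) for a measured photopeak."""
    return net_counts / (estimated_efficiency(energy_kev) * gamma_yield * live_time_s)

print(f"eff(59 keV) ~ {estimated_efficiency(59):.4f}")
print(f"A ~ {activity_bq(12500, 600, 1330, 0.9998):.1f} Bq")  # Co-60 1332 keV line
```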

Keywords: MCNP5, MonteCarlo simulations, efficiency calculation, absolute efficiency, activity estimation, Cs-137

Procedia PDF Downloads 117
834 Agreement Across Borders: Theoretical Templates in the Brain of a New Language Learner

Authors: Sadeq Al Yaari, Ayman Al Yaari, Adham Al Yaari, Montaha Al Yaari, Aayah Al Yaari, Sajedah Al Yaari

Abstract:

Objective: The aim of this study is to investigate how the brain of a new language learner establishes theoretical templates that help it understand grammatical structure. Method: The study recruited fourteen typically developing and achieving participants from eleven nationalities (ages between 23 and 30). Pre- and post-tests were administered, and the results were discussed from a psychoneurolinguistic perspective. Results: Outline results show that, in grammar acquisition, the challenge facing the second language learner lies in establishing the templates relating to abstract nouns. In grammar acquisition, the earlier the better, and fMRI was found to be a practical detector of the brain's theoretical templates.

Keywords: template, brain, imaging technique, grammar acquisition

Procedia PDF Downloads 36
833 Background Knowledge and Reading Comprehension in ELT Classes: A Pedagogical Perspective

Authors: Davoud Ansari Kejal, Meysam Sabour

Abstract:

For a long time, there has been a belief that a reader can easily comprehend a text if he or she is strong enough in vocabulary and grammatical knowledge, but this view does not account for the ability to understand different subjects based on the reader's understanding of the surrounding world, which is called world background knowledge. This paper attempts to investigate the reading comprehension process by applying schema theory as an influential factor in comprehending texts, in order to demonstrate the important role of background knowledge in reading comprehension. Based on the discussion, some teaching methods are suggested for employing world background knowledge in an elaborated teaching of reading comprehension in an active learning environment in EFL classes.

Keywords: background knowledge, reading comprehension, schema theory, ELT classes

Procedia PDF Downloads 457
832 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive samples are an alternative to collecting genetic samples directly. Non-invasive samples are collected without the manipulation of the animal (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that can accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical comparison of their performance. A comparison of methods with datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three different matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two different ones for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed; that is not a surprise, given the similarity between those methods' pairwise likelihood and clustering algorithms. The matches produced by ETLM showed almost no similarity with the genotypes matched by the other methods. ETLM's different clustering algorithm and error model seem to lead to a more rigorous selection, although its processing time and interface friendliness were the worst among the compared methods. The population estimators performed differently depending on the dataset; there was a consensus between the different estimators for only one dataset. BayesN showed both higher and lower estimates when compared with Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity, here meaning different capture rates between individuals. In those examples, tolerance for homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. A broader analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be more appropriate for general use, considering the time/interface/robustness balance. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 143
831 The Effects of the Inference Process in Reading Texts in Arabic

Authors: May George

Abstract:

Inference plays an important role in the learning process, and it can lead to rapid acquisition of a second language. When learning a non-native language, i.e., a critical language like Arabic, students depend on the teacher's support most of the time to learn new concepts. Students focus on memorizing new vocabulary and stress learning all the grammatical rules; hence, they become mechanical and cannot produce the language easily. As a result, relying heavily on the teacher, they are unable to predict the meaning of words in context: they cannot link their prior knowledge or even identify the meaning of words without the teacher's support. This study explores how the teacher guides students' learning during the inference process and what learning processes can direct students' inference.

Keywords: inference, reading, Arabic, language acquisition

Procedia PDF Downloads 531
830 Time Series Forecasting (TSF) Using Various Deep Learning Models

Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan

Abstract:

Time Series Forecasting (TSF) is used to predict target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (RNN, LSTM, GRU, and Transformer) along with a baseline method. The dataset we used is the hourly Beijing Air Quality Dataset from the UCI website, which includes a multivariate time series of many factors measured on an hourly basis over a period of 5 years (2010-14). For each model, we also report on the relationship between performance and the look-back window size and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best look-back window size for predicting 1 hour into the future appears to be one day, while 2 or 4 days perform best for predicting 3 hours into the future.
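
A short sketch of the look-back windowing that the paper varies (sizes follow the abstract's finding of a one-day look-back for 1-hour-ahead prediction; the data here is random, not the Beijing dataset):

```python
import numpy as np

def make_windows(series: np.ndarray, lookback: int, horizon: int):
    """series: (T, F) array. Returns X of shape (M, lookback, F) and
    y of shape (M, F): targets `horizon` steps after each window."""
    X, y = [], []
    for t in range(len(series) - lookback - horizon + 1):
        X.append(series[t:t + lookback])
        y.append(series[t + lookback + horizon - 1])
    return np.stack(X), np.stack(y)

hourly = np.random.default_rng(0).normal(size=(24 * 30, 6))  # 30 days, 6 features
X, y = make_windows(hourly, lookback=24, horizon=1)          # 1-day window, 1 h ahead
print(X.shape, y.shape)                                      # (696, 24, 6) (696, 6)
```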

Keywords: air quality prediction, deep learning algorithms, time series forecasting, look-back window

Procedia PDF Downloads 154
829 Continuous Measurement of Spatial Exposure Based on Visual Perception in Three-Dimensional Space

Authors: Nanjiang Chen

Abstract:

In the backdrop of expanding urban landscapes, accurately assessing spatial openness is critical. Traditional visibility analysis methods grapple with discretization errors and inefficiencies, creating a gap in truly capturing the human experience of space. Addressing these gaps, this paper introduces a distinct continuous visibility algorithm, a leap in measuring urban spaces from a human-centric perspective. This study presents a methodological breakthrough by applying this algorithm to urban visibility analysis. Unlike conventional approaches, this technique allows for a continuous range of visibility assessment, closely mirroring human visual perception. By eliminating the need for predefined subdivisions in ray casting, it offers a more accurate and efficient tool for urban planners and architects. The proposed algorithm not only reduces computational errors but also demonstrates faster processing capabilities, validated through a case study in Beijing's urban setting. Its key distinction lies in its potential to benefit a broad spectrum of stakeholders, ranging from urban developers to public policymakers, aiding in the creation of urban spaces that prioritize visual openness and quality of life. This advancement in urban analysis methods could lead to more inclusive, comfortable, and well-integrated urban environments, enhancing the spatial experience for communities worldwide.

Keywords: visual openness, spatial continuity, ray-tracing algorithms, urban computation

Procedia PDF Downloads 46
828 Preliminary Study of the Phonological Development in Three and Four Year Old Bulgarian Children

Authors: Tsvetomira Braynova, Miglena Simonska

Abstract:

The article presents the results of research on phonological processes in three- and four-year-old children. For the purpose of the study, an authors' test was developed and administered to 120 children. The study included three areas of research: the level of words (96 words), the level of sentence repetition (10 sentences), and the level of generating one's own speech from a picture (15 pictures). The test also gives additional information about the articulation errors of the assessed children. The main purpose of the study is to analyze all phonological processes that occur at this age in Bulgarian children and to identify which are typical and atypical for this age. The results show that the most common phonological errors children make are: substitution of a sound, elision of a sound, metathesis of a sound, elision of a syllable, and elision of consonants clustered in a syllable. All examined children were identified as having an articulation disorder of the bilabial lambdacism type. Measuring the correlation between the average length of repeated speech and the average length of generated speech, the analysis shows that the more words a child can repeat in the "repeated speech" part, the more words they can be expected to generate in the "sentence generation" part. The results of this study show that the word-naming task provides sufficient and representative information to assess a child's phonology.

Keywords: assessment, phonology, articulation, speech-language development

Procedia PDF Downloads 186
827 Efficient Model Order Reduction of Descriptor Systems Using Iterative Rational Krylov Algorithm

Authors: Muhammad Anwar, Ameen Ullah, Intakhab Alam Qadri

Abstract:

This study presents a technique utilizing the Iterative Rational Krylov Algorithm (IRKA) to reduce the order of large-scale descriptor systems. Descriptor systems, which incorporate differential and algebraic components, pose unique challenges in Model Order Reduction (MOR). The proposed method partitions the descriptor system into polynomial and strictly proper parts to minimize approximation errors, applying IRKA exclusively to the strictly proper component. This approach circumvents the unbounded errors that arise when IRKA is applied directly to the entire system. A comparative analysis demonstrates the high accuracy of the reduced model and a significant reduction in computational burden. The reduced model enables more efficient simulations and streamlined controller designs. The study highlights the effectiveness of IRKA-based MOR in optimizing the performance of complex systems across various engineering applications. The proposed methodology offers a promising solution for reducing the complexity of large-scale descriptor systems while maintaining their essential characteristics and facilitating their analysis, simulation, and control design.
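
For reference, a compact numpy sketch of the basic IRKA iteration for a SISO system (A, b, c), which the paper applies to the strictly proper part of the descriptor system; initialization, convergence handling, and the complex-shift case are simplified.

```python
import numpy as np

def irka(A, b, c, r, iters=50, tol=1e-8):
    """Basic IRKA loop: build rational Krylov bases at the current shifts,
    project, then mirror the reduced poles to get the next shifts."""
    n = A.shape[0]
    shifts = np.linspace(0.1, 1.0, r)            # simple initial points
    for _ in range(iters):
        V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, b) for s in shifts])
        W = np.column_stack([np.linalg.solve((s * np.eye(n) - A).T, c) for s in shifts])
        V, _ = np.linalg.qr(V)
        W, _ = np.linalg.qr(W)
        W = W @ np.linalg.inv(W.T @ V).T          # enforce W^T V = I (oblique projection)
        Ar = W.T @ A @ V
        # mirror the reduced poles (real-part simplification for this sketch)
        new_shifts = np.sort(-np.linalg.eigvals(Ar).real)
        if np.max(np.abs(new_shifts - shifts)) < tol:
            break
        shifts = new_shifts
    return Ar, W.T @ b, V.T @ c                   # reduced (Ar, br, cr)

rng = np.random.default_rng(3)
A = -np.diag(rng.uniform(0.5, 10.0, 40))          # stable toy test system
b = rng.normal(size=(40, 1))
c = rng.normal(size=(40, 1))
Ar, br, cr = irka(A, b, c, r=6)
print(Ar.shape)                                   # (6, 6)
```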

Keywords: model order reduction, descriptor systems, iterative rational Krylov algorithm, interpolatory model reduction, computational efficiency, projection methods, H₂-optimal model reduction

Procedia PDF Downloads 31
826 A Numerical Investigation of Total Temperature Probes Measurement Performance

Authors: Erdem Meriç

Abstract:

Measuring the total temperature of an air flow accurately is a very important requirement in the development phases of many industrial products, including gas turbines and rockets. Thermocouples are very practical devices for measuring temperature in such cases, but in high-speed and high-temperature flows, the temperature of the thermocouple junction may deviate considerably from the real flow total temperature due to the heat transfer mechanisms of convection, conduction, and radiation. To avoid errors in total temperature measurement, special probe designs, which are experimentally characterized, are used. In this study, a validation case, the experimental characterization of a specific class of total temperature probes, is selected from the literature, and a numerical conjugate heat transfer analysis methodology is developed to study the total temperature probe flow field and solid temperature distribution. The validated conjugate heat transfer methodology is used to investigate flow structures inside and around the probe, and the effects of probe design parameters, such as the ratio between inlet and outlet hole areas and the probe tip geometry, on measurement accuracy. Lastly, a thermal model is constructed to account for errors in total temperature measurement for this specific class of probes in different operating conditions. The outcomes of this work can guide experimentalists in designing a very accurate total temperature probe and in quantifying the possible error for their specific case.
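
The central quantity of such an error model can be sketched directly (standard compressible-flow relations; the numbers are illustrative, not the paper's):

```python
# Recovery factor relating the measured junction temperature to the
# true total temperature of the flow.
GAMMA = 1.4  # ratio of specific heats for air

def total_temperature(t_static: float, mach: float) -> float:
    """Isentropic relation: Tt = Ts * (1 + (gamma - 1)/2 * M^2)."""
    return t_static * (1.0 + 0.5 * (GAMMA - 1.0) * mach ** 2)

def recovery_factor(t_measured: float, t_static: float, t_total: float) -> float:
    """r = (T_measured - Ts) / (Tt - Ts); r = 1 means a perfect probe."""
    return (t_measured - t_static) / (t_total - t_static)

ts, mach = 288.15, 0.8
tt = total_temperature(ts, mach)             # ~325.0 K
t_probe = 323.5                              # junction reading with losses (toy)
print(f"Tt = {tt:.1f} K, recovery factor r = {recovery_factor(t_probe, ts, tt):.3f}")
```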

Keywords: conjugate heat transfer, recovery factor, thermocouples, total temperature probes

Procedia PDF Downloads 134
825 I Don’t Want to Have to Wait: A Study Into the Origins of Rule Violations at Rail Pedestrian Level Crossings

Authors: James Freeman, Andry Rakotonirainy

Abstract:

Train-pedestrian collisions are common and, compared to other types of rail crossing accidents, are the most likely to result in severe injuries and fatalities. However, limited research has focused on understanding why some pedestrians break level crossing rules, which limits the development of effective countermeasures. As a result, this study undertook a deeper exploration of the origins of risky pedestrian behaviour through structured interviews. A total of 40 pedestrians who admitted to either intentionally breaking crossing rules or making crossing errors participated in an in-depth telephone interview. Thematic qualitative analysis revealed that participants were more likely to report deliberately breaking rules (rather than making errors), particularly after the train had passed the crossing as compared to before it arrived. The predominant reasons for such behaviours were identified as calculated risk taking, impatience, poor knowledge of the rules, and a low perceived likelihood of detection. The findings have direct implications for the development of effective countermeasures to improve crossing safety (and manage risk), such as increasing surveillance and transit officer presence, as well as installing barriers that either deter or physically prevent pedestrians from violating crossing rules. This paper further outlines the study findings with regard to the development of countermeasures and provides directions for future research in this area.

Keywords: crossings, mistakes, risk, violations

Procedia PDF Downloads 415
824 Wrong Site Surgery Should Not Occur In This Day And Age!

Authors: C. Kuoh, C. Lucas, T. Lopes, I. Mechie, J. Yoong, W. Yoong

Abstract:

For all surgeons, there is one preventable but still frequently occurring complication: wrong site surgery. It can have potentially catastrophic, irreversible, or even fatal consequences for patients. With the exponential development of microsurgery and the use of advanced technological tools, operating on the wrong side, anatomical part, or even person is seen as the most visible and destructive of all surgical errors, and perhaps the error most dreaded by clinicians, as it threatens their licenses and arouses feelings of guilt. Despite the implementation of the WHO surgical safety checklist more than a decade ago, the incidence of wrong-site surgery remains relatively high, leading to tremendous physical and psychological repercussions for those involved, as well as a financial burden for healthcare institutions. In this presentation, the authors explore the various factors, a combination of environmental and human, which can lead to wrong site surgery, and evaluate their impact on patients, practitioners, their families, and the medical industry. Major contributing factors to these "never events" include deviations from checklists, excessive workload, and poor communication. Two real-life cases are discussed, and systems that can be implemented to prevent these errors are highlighted alongside lessons learnt from other industries. The authors suggest that reinforcing a culture of speaking up, implementing professional training for clinicians, and greater patient involvement can potentially improve safety in surgery and electrosurgery.

Keywords: wrong side surgery, never events, checklist, workload, communication

Procedia PDF Downloads 184