Search results for: pivot language translation approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16554

10464 The Capacity of Mel Frequency Cepstral Coefficients for Speech Recognition

Authors: Fawaz S. Al-Anzi, Dia AbuZeina

Abstract:

Speech recognition makes an important contribution to promoting new technologies in human-computer interaction. Today, there is a growing need to employ speech technology in daily life and business activities. However, speech recognition is a challenging task that requires several stages before the desired output is obtained. Among the components of automatic speech recognition (ASR) is the feature extraction process, which parameterizes the speech signal to produce the corresponding feature vectors. The feature extraction process aims at approximating the linguistic content conveyed by the input speech signal. In the speech processing field, there are several methods to extract speech features; however, Mel Frequency Cepstral Coefficients (MFCC) is the most popular technique. It has long been observed that MFCC dominates well-known recognizers such as the Carnegie Mellon University (CMU) Sphinx and the Hidden Markov Model Toolkit (HTK). Hence, this paper focuses on the MFCC method as the standard choice to identify the different speech segments in order to obtain the language phonemes for further training and decoding steps. Owing to its good performance, previous studies show that MFCC dominates Arabic ASR research. In this paper, we demonstrate MFCC as well as the intermediate steps that are performed to obtain these coefficients using the HTK toolkit.
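At the heart of the MFCC pipeline is the mel frequency warping that spaces the filterbank perceptually. The sketch below is illustrative only; the 26-filter count and 8 kHz band edge are typical defaults, not values taken from the paper.

```python
import math

def hz_to_mel(f_hz):
    """Warp a frequency in Hz onto the perceptual mel scale."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_centers(n_filters, f_low, f_high):
    """Center frequencies (Hz) of a triangular mel filterbank:
    equally spaced in mel, hence logarithmically spaced in Hz."""
    m_low, m_high = hz_to_mel(f_low), hz_to_mel(f_high)
    step = (m_high - m_low) / (n_filters + 1)
    return [mel_to_hz(m_low + i * step) for i in range(1, n_filters + 1)]

centers = mel_filter_centers(26, 0.0, 8000.0)
```

The remaining MFCC steps (framing, windowing, power spectrum, log filterbank energies, and the DCT) operate on top of this warping.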

Keywords: speech recognition, acoustic features, mel frequency, cepstral coefficients

Procedia PDF Downloads 242
10463 Parameters Estimation of Multidimensional Possibility Distributions

Authors: Sergey Sorokin, Irina Sorokina, Alexander Yazenin

Abstract:

We present a solution to the Maxmin u/E parameters estimation problem for possibility distributions in the m-dimensional case. Our method is based on a geometrical approach in which a minimal-area enclosing ellipsoid is constructed around the sample. We also demonstrate that the results of well-known algorithms in the fuzzy model identification task can be improved using Maxmin u/E parameters estimation.
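As a lower-dimensional analogue of the enclosing-ellipsoid construction, the smallest circle enclosing a 2D sample can be found naively by checking candidate circles defined by every pair (as diameter) and every triple (circumcircle) of points. This is only an illustration; the paper's estimator works with ellipsoids in m dimensions.

```python
import itertools, math

def circle_from_two(p, q):
    """Circle with segment pq as diameter."""
    cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    return (cx, cy, math.dist(p, q) / 2)

def circle_from_three(p, q, r):
    """Circumcircle of three points (None if collinear)."""
    ax, ay = p; bx, by = q; cx, cy = r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy, math.dist((ux, uy), p))

def contains(c, pts, eps=1e-9):
    return all(math.dist((c[0], c[1]), p) <= c[2] + eps for p in pts)

def min_enclosing_circle(pts):
    """Smallest circle containing all sample points (naive search)."""
    candidates = [circle_from_two(p, q) for p, q in itertools.combinations(pts, 2)]
    candidates += [c for t in itertools.combinations(pts, 3)
                   if (c := circle_from_three(*t)) is not None]
    best = None
    for c in candidates:
        if contains(c, pts) and (best is None or c[2] < best[2]):
            best = c
    return best

# Unit square: the minimal circle has center (0.5, 0.5), radius sqrt(2)/2.
c = min_enclosing_circle([(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)])
```

Practical implementations use Welzl's expected-linear-time algorithm or, for ellipsoids, Khachiyan's iterative scheme.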

Keywords: possibility distribution, parameters estimation, Maxmin u/E estimator, fuzzy model identification

Procedia PDF Downloads 453
10462 Uplift Segmentation Approach for Targeting Customers in a Churn Prediction Model

Authors: Shivahari Revathi Venkateswaran

Abstract:

Segmenting customers plays a significant role in churn prediction, helping the marketing team with both proactive and reactive customer retention. For reactive retention, the retention team reaches out to customers who have already shown an intent to disconnect by offering special deals. For proactive retention, the marketing team uses a churn prediction model that ranks each customer from 1 to 100, where rank 1 indicates the highest risk of churning (high ranks have a high propensity to churn). The churn prediction model is built using an XGBoost model. With churn ranks alone, however, the marketing team can only reach out to customers based on their individual ranks; it cannot profile different groups of customers or frame different marketing strategies for targeted groups. For this, customers must be grouped into segments based on their profiles, such as demographics and other non-controllable attributes. This helps the marketing team frame different offer groups for the targeted audience and prevent them from disconnecting (proactive retention). For segmentation, machine learning approaches like k-means clustering will not form unique customer segments in which all customers share the same attributes. This paper presents an alternative approach that finds all combinations of unique segments that can be formed from the user attributes and then identifies the segments that show uplift (a churn rate higher than the baseline churn rate). For this, search algorithms such as fast search and recursive search are used. Further, within each segment, customers can be targeted using their individual churn ranks from the churn prediction model. Finally, a UI (user interface) is developed for the marketing team to interactively search for the meaningful segments that are formed, target the right audience for future marketing campaigns, and prevent them from disconnecting.
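The exhaustive segment enumeration can be sketched as below, with segments flagged when their churn rate exceeds the baseline. This is a brute-force illustration with invented toy data; the paper's fast and recursive search algorithms prune this space.

```python
from itertools import combinations, product

def uplift_segments(customers, attributes, churn_key="churned"):
    """Enumerate every segment defined by a combination of attribute values
    and keep those whose churn rate exceeds the baseline (uplift segments)."""
    baseline = sum(c[churn_key] for c in customers) / len(customers)
    segments = []
    for r in range(1, len(attributes) + 1):
        for attrs in combinations(attributes, r):
            values = [sorted({c[a] for c in customers}) for a in attrs]
            for combo in product(*values):
                members = [c for c in customers
                           if all(c[a] == v for a, v in zip(attrs, combo))]
                if not members:
                    continue
                rate = sum(c[churn_key] for c in members) / len(members)
                if rate > baseline:
                    segments.append((dict(zip(attrs, combo)), len(members), rate))
    return baseline, segments

# Toy example: four customers, two hypothetical attributes.
customers = [
    {"region": "N", "plan": "basic", "churned": 1},
    {"region": "N", "plan": "plus", "churned": 0},
    {"region": "S", "plan": "basic", "churned": 1},
    {"region": "S", "plan": "plus", "churned": 0},
]
baseline, segments = uplift_segments(customers, ["region", "plan"])
```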

Keywords: churn prediction modeling, XGBoost model, uplift segments, proactive marketing, search algorithms, retention, k-means clustering

Procedia PDF Downloads 58
10461 Urban Security and Social Sustainability in Cities of Developing Countries

Authors: Taimaz Larimian, Negin Sadeghi

Abstract:

Very little is known about the impact of urban security on the level of social sustainability in the cities of developing countries. Urban security is still struggling to find its position in the social sustainability agenda, despite the significant role of safety and security in different aspects of people's lives. This paper argues that urban safety and security should be better integrated within the social sustainability framework. With this aim, this study investigates the hypothesized relationship between social sustainability and the Crime Prevention through Environmental Design (CPTED) approach at the neighborhood scale. The study proposes a model of the key influential dimensions of CPTED, analyzed into localized factors and sub-factors. These factors are then prioritized using pairwise comparison logic and the fuzzy group Analytic Hierarchy Process (AHP) method in order to determine the relative importance of each factor in achieving social sustainability. The proposed model then investigates social sustainability in six case-study neighborhoods of Isfahan city based on residents' perceptions of safety within their neighborhood. A mixed method of data collection is used: a self-administered questionnaire explores the residents' perceptions of social sustainability in their area of residence, followed by an on-site observation to measure the CPTED construct. In all, 150 respondents from the selected neighborhoods were involved in this research. The model indicates that the CPTED approach has a significant direct influence on increasing social sustainability at the neighborhood scale. According to the findings, among the different dimensions of CPTED, 'activity support' and 'image/management' have the most influence on people's feeling of safety within the studied areas. The model represents a useful design tool for achieving urban safety and security during the development of more socially sustainable and user-friendly urban areas.
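The pairwise-comparison prioritization can be illustrated with crisp AHP using the geometric-mean row method, a common approximation to the principal-eigenvector weights. The study's fuzzy group AHP replaces the crisp judgments below, which are invented purely for illustration, with fuzzy numbers.

```python
import math

def ahp_weights(pairwise):
    """Priority weights from a pairwise-comparison matrix via the
    geometric mean of each row, normalized to sum to one."""
    gms = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gms)
    return [g / total for g in gms]

# Hypothetical judgments: factor A judged 3x as important as B, 5x as C.
matrix = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(matrix)
```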

Keywords: crime prevention through environmental design (CPTED), developing countries, fuzzy analytic hierarchy process (FAHP), social sustainability

Procedia PDF Downloads 291
10460 Fuzzy Control and Pertinence Functions

Authors: Luiz F. J. Maia

Abstract:

This paper presents an approach to fuzzy control, using new pertinence (membership) functions, applied to the case of an inverted pendulum. Appropriate definitions of the pertinence functions of the fuzzy sets make it possible to implement the controller with only one control rule, resulting in a smooth control surface. The fuzzy control system can be implemented with analog devices, affording true real-time performance.
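A generic sketch of how pertinence (membership) functions yield a crisp control signal is shown below; the paper's specific function definitions that enable a single-rule controller are not reproduced here, so the triangular shape and centroid defuzzification are standard textbook choices, not the author's.

```python
def triangular(a, b, c):
    """Return a triangular pertinence (membership) function peaking at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

def centroid_defuzzify(mu, lo, hi, n=1000):
    """Crisp output as the discretized centroid of the membership function."""
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    num = sum(x * mu(x) for x in xs)
    den = sum(mu(x) for x in xs)
    return num / den if den else 0.0

# Symmetric triangle on [-1, 1]: the crisp output is its center, 0.
mu = triangular(-1.0, 0.0, 1.0)
crisp = centroid_defuzzify(mu, -1.0, 1.0)
```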

Keywords: control surface, fuzzy control, inverted pendulum, pertinence functions

Procedia PDF Downloads 431
10459 English 2A Students’ Oral Presentation Errors: Basis for English Policy Revision

Authors: Marylene N. Tizon

Abstract:

English instructors pay attention to the errors committed by students, as errors show whether students have mastered their oral skills and what difficulties they may have in the process of learning the English language. This descriptive quantitative study aimed at identifying and categorizing the oral presentation errors of 118 purposively chosen English 2A students enrolled during the first semester of school year 2013-2014. The analysis was undertaken using the errors committed by the students in their presentations. Errors were marked and classified first as linguistic grammatical errors; all errors were then further categorized under the Surface Structure Errors Taxonomy using frequency and percentage distribution. From the analysis of the data, the researcher found that errors in verb tenses (71, or 16%) and addition errors (167, or 37%) were most frequently uttered by the students, while question and negation errors (12, or 3%) and misordering errors (28, or 7%) were least frequently enunciated. Thus, the respondents in this study most frequently produced tense and addition errors, and least frequently produced question, negation, and misordering errors.
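The frequency-and-percentage tabulation used in the study can be sketched in a few lines; the error labels and counts below are hypothetical, not the study's data.

```python
from collections import Counter

def frequency_percentage(error_labels):
    """Frequency and (rounded) percentage distribution of categorized
    errors, ordered from most to least frequent."""
    counts = Counter(error_labels)
    total = sum(counts.values())
    return {cat: (n, round(100.0 * n / total)) for cat, n in counts.most_common()}

# Hypothetical labels following the surface structure taxonomy.
sample = ["addition"] * 4 + ["tense"] * 3 + ["misordering"] * 2 + ["negation"]
table = frequency_percentage(sample)
```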

Keywords: grammatical error, oral presentation error, surface structure errors taxonomy, descriptive quantitative design, Philippines, Asia

Procedia PDF Downloads 382
10458 Implementation Status of Industrial Training for the Production Engineering Technology Diploma in Universiti Kuala Lumpur Malaysian Spanish Institute (UniKL MSI)

Authors: M. Sazali Said, Rahim Jamian, Shahrizan Yusoff, Shahruzaman Sulaiman, Jum'Azulhisham Abdul Shukor

Abstract:

This case study focuses on the role of Universiti Kuala Lumpur Malaysian Spanish Institute (UniKL MSI) in producing technologists in order to reduce the shortage of skilled workers, especially in the automotive industry. The study therefore seeks to examine the effectiveness of the Technical Education and Vocational Training (TEVT) curriculum of UniKL MSI in producing graduates who could immediately be productively employed by the automotive industry. The approach used in this study is performance evaluation of students attending the Industrial Training Attachment (INTRA). The sample comprises 37 students, 16 university supervisors, and 26 industrial supervisors. The research methodology involves quantitative and qualitative methods of data collection through a triangulation approach. The quantitative data were gathered from the students, university supervisors, and industrial supervisors through a questionnaire, while the qualitative data were obtained from the students and university supervisors through interviews and observation. Both types of data were processed and analyzed to summarize the results in terms of frequency and percentage using a computerized spreadsheet. The results show that industrial supervisors were satisfied with the students' performance, while university supervisors rated the UniKL MSI curriculum as moderately effective in producing graduates with appropriate skills and in meeting industrial needs. During the period of study, several weaknesses in the curriculum were identified for further continuous improvement. Recommendations for curriculum improvement include enhancing the technical skills and competences of students toward fulfilling the needs and demands of the automotive industry.

Keywords: technical education and vocational training (TEVT), industrial training attachment (INTRA), curriculum improvement, automotive industry

Procedia PDF Downloads 355
10457 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions

Authors: Vikrant Gupta, Amrit Goswami

Abstract:

The fixed income market forms the basis of the modern financial market; all other assets in financial markets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little publicly available data and are thus researched far less than equities. Bond price prediction is a complex financial time-series forecasting problem and is considered crucial in the domain of finance. Bond prices are highly volatile and noisy, which makes it very difficult for traditional statistical time-series models to capture the complexity of the series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines, and random forests fail to provide efficient results when tested on highly complex sequences such as stock and bond prices. Hence, to capture these intricate sequence patterns, various deep-learning-based methodologies have been discussed in the literature. In this study, a recurrent-neural-network-based deep learning model using long short-term memory (LSTM) networks for the prediction of corporate bond prices is discussed. LSTM networks have been widely used in the literature for sequence learning tasks in domains such as machine translation and speech recognition. In recent years, studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies, which traditional neural networks fail to capture, thanks to their memory cell.
In this study, a simple LSTM, a stacked LSTM, and a masked LSTM based model are discussed with respect to varying input sequences (three, seven, and 14 days). To facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) is used, which results in an accuracy improvement over the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two counterparts in prediction accuracy. To benchmark the proposed model, the results are compared with traditional time-series models (ARIMA), shallow neural networks, and the three LSTM models discussed above. In summary, our results show that LSTM models provide more accurate results and should be explored further within the asset management industry.
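The gated memory mechanism the abstract describes can be sketched for a scalar state; production models use vector gates with learned weights (e.g., in Keras or PyTorch), so the toy weights and toy price sequence below are purely illustrative.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM time step for scalar input/state. W maps each gate name
    (f: forget, i: input, o: output, g: candidate) to (w_input,
    u_recurrent, bias)."""
    gates = {}
    for name in ("f", "i", "o", "g"):
        w, u, b = W[name]
        pre = w * x + u * h_prev + b
        gates[name] = math.tanh(pre) if name == "g" else sigmoid(pre)
    c = gates["f"] * c_prev + gates["i"] * gates["g"]  # memory cell update
    h = gates["o"] * math.tanh(c)                      # hidden state
    return h, c

# Toy weights; run a short (hypothetical) return sequence through the cell.
W = {k: (0.5, 0.5, 0.0) for k in ("f", "i", "o", "g")}
h, c = 0.0, 0.0
for x in [0.1, -0.2, 0.3]:
    h, c = lstm_step(x, h, c, W)
```

The cell state `c` is what carries long-term information: the forget gate decides how much of it survives each step, which is the mechanism that lets LSTMs learn long-term dependencies.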

Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition

Procedia PDF Downloads 121
10456 Experimental Study of Damage in a Composite Structure by Vibration Analysis: Glass/Polyester

Authors: R. Abdeldjebar, B. Labbaci, L. Missoum, B. Moudden, M. Djermane

Abstract:

The basic components of a composite material make it very sensitive to damage, which requires reliable and efficient damage-detection techniques. This work focuses on damage detection by vibration analysis, whose main objective is to exploit the dynamic response of a structure to detect and understand damage. The experimental results are compared with those predicted by numerical models to confirm the effectiveness of the approach.

Keywords: experimental, composite, vibration analysis, damage

Procedia PDF Downloads 659
10455 Processing Big Data: An Approach Using Feature Selection

Authors: Nikat Parveen, M. Ananthi

Abstract:

Big data is one of the emerging technologies; it collects data from various sensors, and those data are used in many fields. Data retrieval is one of the major issues, as there is a need to extract exactly the data required. In this paper, a large data set is processed using feature selection. Feature selection helps to choose the data that are actually needed to process and execute the task. The key value helps to point out the exact data available in the storage space. Here, the available data is streamed, and R-Center is proposed to achieve this task.
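The abstract does not specify how R-Center works, so as a generic illustration only, a variance-threshold filter is one simple way feature selection can discard columns before heavy processing: near-constant features carry little information for the task.

```python
def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def select_features(rows, names, threshold=0.01):
    """Keep only the feature columns whose variance exceeds the threshold,
    so near-constant columns are dropped before expensive processing."""
    cols = list(zip(*rows))
    keep = [i for i, col in enumerate(cols) if variance(col) > threshold]
    kept_names = [names[i] for i in keep]
    reduced = [[row[i] for i in keep] for row in rows]
    return kept_names, reduced

# Hypothetical sensor rows: the first column never changes and is dropped.
kept, reduced = select_features(
    [[1.0, 5.0], [1.0, 7.0], [1.0, 9.0]], ["constant", "temperature"])
```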

Keywords: big data, key value, feature selection, retrieval, performance

Procedia PDF Downloads 324
10454 Indigenous Understandings of Climate Vulnerability in Chile: A Qualitative Approach

Authors: Rosario Carmona

Abstract:

This article discusses the importance of indigenous people's participation in climate change mitigation and adaptation. Specifically, it analyses different understandings of climate vulnerability among the diverse actors involved in climate change policies in Chile: indigenous people, state officials, and academics. The data were collected through participant observation and interviews conducted between October 2017 and January 2019 in Chile. Following Karen O'Brien, there are two types of vulnerability: outcome vulnerability and contextual vulnerability. How vulnerability to climate change is understood determines the approach taken, which actors are involved, and which knowledge is considered in addressing it. Because climate change is a very complex phenomenon, it is necessary to transform institutions and their responses. To do so, it is fundamental to consider both perspectives and different types of knowledge, particularly those of the most vulnerable, such as indigenous people. Over centuries of coexistence with the environment, indigenous societies have elaborated coping strategies, and some are already adapting to climate change. The indigenous people of Chile are no exception. However, indigenous people tend to be excluded from decision-making processes, and indigenous knowledge is frequently seen as subjective and arbitrary in relation to science. Nevertheless, in recent years indigenous knowledge has gained particular relevance in the academic world, and indigenous actors are gaining prominence in international negotiations. Some mechanisms promote their participation (e.g., the Cancun safeguards, World Bank operational policies, REDD+), though they are not free of difficulties, and since 2016 the parties have been working on a Local Communities and Indigenous Peoples Platform. This paper also explores the incidence of this process in Chile.
Although there is progress in the participation of indigenous people, this participation responds to the operational policies of the funding agencies rather than to a real commitment of the state to this sector. The State of Chile omits a review of the structure that promotes inequality and the exclusion of indigenous people. In this way, climate change policies could be configured as a new mechanism of coloniality that validates a single type of knowledge and leads to new territorial control strategies, which increases vulnerability.

Keywords: indigenous knowledge, climate change, vulnerability, Chile

Procedia PDF Downloads 110
10453 Educating Children Who Are Deaf and Hearing Impaired in Southern Africa: Challenges and Triumphs

Authors: Emma Louise McKinney

Abstract:

There is a global move to integrate children who are Deaf and Hearing Impaired into regular classrooms with their hearing peers within an inclusive education framework. This paper examines the current education situation for children who are Deaf and Hearing Impaired in South Africa, Madagascar, Malawi, Zimbabwe, and Namibia. Qualitative data for this paper were obtained from the author's experience working as the Southern African Education Advisor for an international organization funding disability projects. The paper examines some of the educational challenges facing these children and their teachers, including cultural stigma around disability and deafness, a lack of hearing screening and early identification of deafness, schools located in rural areas, special schools, specialist teacher training, equipment, understanding of how to implement policy, support, appropriate teaching methodologies, and sign language training and proficiency. On the other hand, in spite of these challenges, some teachers are able to provide quality education to children who are Deaf and Hearing Impaired. This paper examines both the challenges and what teachers are doing to overcome them.

Keywords: education of children who are deaf and hearing impaired, Southern African experiences, challenges, triumphs

Procedia PDF Downloads 221
10452 Geostatistical Models to Correct Salinity of Soils from Landsat Satellite Sensor: Application to the Oran Region, Algeria

Authors: Dehni Abdellatif, Lounis Mourad

Abstract:

The new approach of applying spatial geostatistics in materials science, agricultural accuracy, and agricultural statistics permits the management and monitoring of water and groundwater quality in relation to salt-affected soil. Previous experience with data acquisition and spatial-preparation studies on optical and multispectral data has facilitated the integration of correction models of electrical conductivity related to soil temperature (soil horizons). For tomography, this physical parameter has been extracted from calibration of the thermal band (LANDSAT ETM+ band 6) with a radiometric correction. Our study area is the Oran region (north-western Algeria). Different spectral indices are determined, such as the salinity and sodicity index, the Combined Spectral Reflectance Index (CSRI), the Normalized Difference Vegetation Index (NDVI), emissivity, albedo, and the Sodium Adsorption Ratio (SAR). The geostatistical modeling of electrical conductivity (salinity) appears to be a useful decision-support system for estimating corrected electrical resistivity related to the temperature of surface soils, according to conversion models by substitution to the reference temperature of 25°C (at which the hydrochemical data are collected). The brightness temperatures extracted from satellite reflectance (LANDSAT ETM+) are used in consistency models to estimate electrical resistivity. The confusion that arises from the effects of salt stress and water stress is removed, followed by seasonal application of geostatistical analysis with Geographic Information System (GIS) techniques to investigate and monitor the variation of electrical conductivity in the alluvial aquifer of Es-Sénia for the salt-affected soil.
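Two of the conversions mentioned, referencing conductivity to 25°C and computing NDVI, follow standard formulas. The sketch below assumes the common linear compensation coefficient of roughly 2% per °C; the paper's satellite-calibrated models are more involved.

```python
def ec_at_25c(ec_measured, temp_c, alpha=0.02):
    """Linear temperature compensation: convert a conductivity reading
    taken at temp_c to the 25 degC reference (alpha ~ 2%/degC is typical)."""
    return ec_measured / (1.0 + alpha * (temp_c - 25.0))

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from band reflectances."""
    return (nir - red) / (nir + red)

ec25 = ec_at_25c(1.2, 30.0)   # a reading at 30 degC, referenced to 25 degC
veg = ndvi(0.5, 0.1)          # illustrative NIR and red reflectances
```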

Keywords: geostatistical modelling, Landsat, brightness temperature, conductivity

Procedia PDF Downloads 428
10451 A Hierarchical Bayesian Calibration of Data-Driven Models for Composite Laminate Consolidation

Authors: Nikolaos Papadimas, Joanna Bennett, Amir Sakhaei, Timothy Dodwell

Abstract:

Composite modeling of consolidation processes plays an important role in process and part design by indicating the formation of possible unwanted defects prior to expensive experimental iterative trial-and-development programs. Composite materials in their uncured state display complex constitutive behavior, which has received much academic interest, with different models proposed. Errors from modeling and statistical errors arising from this fitting will propagate through any simulation in which the material model is used. A general hyperelastic polynomial representation was proposed, which can be readily implemented in various nonlinear finite element packages; in our case, FEniCS was chosen. The coefficients are assumed uncertain, and therefore the distribution of parameters is learned using Markov Chain Monte Carlo (MCMC) methods. In engineering, the approach often followed is to select a single set of model parameters which, on average, best fits a set of experiments. There are good statistical reasons why this is not a rigorous approach. To overcome these challenges, a hierarchical Bayesian framework was proposed in which the population distribution of model parameters is inferred from an ensemble of experimental tests. The resulting sampled distribution of hyperparameters is approximated using Maximum Entropy methods so that it can be readily sampled when embedded within a stochastic finite element simulation. The methodology is validated and demonstrated on a set of consolidation experiments of AS4/8852 with various stacking sequences. The resulting distributions are then applied to stochastic finite element simulations of the consolidation of curved parts, leading to a distribution of possible model outputs. With this, the paper, as far as the authors are aware, presents the first stochastic finite element implementation in composite process modelling.
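The MCMC parameter-learning step can be illustrated with a minimal random-walk Metropolis sampler on a toy one-dimensional posterior. This sketch omits the paper's hierarchical layer (population-level hyperparameters) and its physical likelihood; the standard-normal target below is purely illustrative.

```python
import math, random

def metropolis(log_post, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis: draws from a distribution known only up
    to its log density, as in Bayesian parameter calibration."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        if math.log(rng.random()) < lp_cand - lp:  # accept w.p. min(1, ratio)
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Toy posterior: standard normal log density (up to an additive constant).
draws = metropolis(lambda t: -0.5 * t * t, 0.0, 5000)
```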

Keywords: data-driven models, material consolidation, stochastic finite elements, surrogate models

Procedia PDF Downloads 132
10450 Evaluating the Location of Effective Product Advertising on Facebook Ads

Authors: Aulia F. Hadining, Atya Nur Aisha, Dimas Kurninatoro Aji

Abstract:

Utilization of social media as a marketing tool is growing rapidly, including among SMEs. Social media allow users to give product evaluations and recommendations to the public and facilitate word-of-mouth marketing communication. One social medium that can be used is Facebook, with Facebook Ads. This study aimed to evaluate the placement location of Facebook Ads in order to obtain an appropriate advertising design. There are three alternative locations: desktop, right-hand column, and mobile. The effectiveness and efficiency of advertising are measured with advertising metrics such as reach, clicks, Cost per Click (CPC), and Unique Click-Through Rate (UCTR). Facebook's Ads Manager was used for seven days, targeted by age (18-24), location (Bandung), language (Indonesian), and keywords. The result was a total reach of 13,999 and 342 clicks. Based on a comparison using ANOVA, there was a significant difference between placement locations on these advertising metrics. The mobile location was chosen as the most successful placement because it produced the lowest CPC, amounting to Rp 691 per click, and a 14% UCTR. The results of this study showed that Facebook Ads is a useful and cost-effective medium for promoting SME products, because ads can be viewed by many people at the same time.
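The placement metrics reduce to simple ratios. In the sketch below, the spend and unique-click figures are assumed for illustration; only the Rp 691 cost per click and the reach/click totals appear in the abstract.

```python
def ad_metrics(spend, reach, clicks, unique_clicks):
    """Core paid-ad metrics used to compare placement locations."""
    return {
        "cost_per_click": spend / clicks if clicks else 0.0,
        "unique_ctr_pct": 100.0 * unique_clicks / reach if reach else 0.0,
    }

# Illustrative figures: spend and unique clicks are hypothetical.
m = ad_metrics(spend=236_322, reach=13_999, clicks=342, unique_clicks=300)
```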

Keywords: marketing communication, social media, Facebook Ads, mobile location

Procedia PDF Downloads 338
10449 English as a Lingua Franca Elicited in ASEAN Accents

Authors: Choedchoo Kwanhathai

Abstract:

This study explores attitudes towards ASEAN Plus One (namely ASEAN plus China) accents of English as a Lingua Franca. The study draws attention to features of ASEAN's diversity of English and specifically examines the extent to which the English accents of three of the ten ASEAN member countries plus one were perceived in terms of correctness, acceptability, pleasantness, and familiarity. Three accents were used for this study: Chinese, Philippine, and Thai. The participants were ninety-eight Thai students enrolled in a foundation course at Suan Dusit Rajabhat University, Bangkok, Thailand. In questionnaires, the students were asked to rank how they perceived each ASEAN Plus One English accent after listening to audio recordings of three stories spoken by the three different ASEAN Plus One English speakers. SPSS was used to analyze the data. Regarding correctness, acceptability, pleasantness, and familiarity, the 98 respondents rated the Thai accent overall at level 3 (X = 2.757, SD = 0.33), the Philippine accent at level 2 (X = 2.326, SD = 16.12), and the Chinese accent at level 3 (X = 3.198, SD = 0.18). Finally, the present study proposes pedagogical implications for teaching regarding awareness of the 'Englishes' of ASEAN, their respective accents, and the linguacultural backgrounds of instructors.

Keywords: English as a lingua franca, English accents, English as an international language, ASEAN plus one, ASEAN English varieties

Procedia PDF Downloads 410
10448 Developing Dynamic Capabilities: The Case of Western Subsidiaries in Emerging Market

Authors: O. A. Adeyemi, M. O. Idris, W. A. Oke, O. T. Olorode, S. O. Alayande, A. E. Adeoye

Abstract:

The purpose of this paper is to investigate the process of capability building at the subsidiary level and the challenges to that process. The relevance of external factors for capability development has not been explicitly addressed in empirical studies, though internal factors, acting as enablers, have been more extensively studied. With reference to external factors, subsidiaries are actively influenced by specific characteristics of the host country, implying a need to become fully immersed in local culture and practices. Specifically, in MNCs there has been a widespread trend in management practice to increase subsidiary autonomy, with subsidiary managers being encouraged to act entrepreneurially and to take advantage of host country specificity. As such, it could be proposed that: P1: The degree to which subsidiary management is connected to the host country will positively influence the capability development process. Dynamic capabilities reside to a large measure with the subsidiary management team, but are impacted by the organizational processes, systems, and structures that the MNC headquarters has designed to manage its business. At the subsidiary level, the weight of the subsidiary in the network, its initiative-taking, and its profile building increase the supportive attention of the headquarters and are relevant to the success of the capability building process. Therefore, our second proposition is that: P2: Subsidiary role and HQ support are relevant elements in capability development at the subsidiary level. Design/Methodology/Approach: This study will adopt the multiple case studies approach, because case study research is relevant when addressing issues without known empirical evidence or with little developed prior theory. The key definitions and literature sources directly connected with the operations of western subsidiaries in emerging markets, such as China, are well established.
A qualitative approach, i.e., case studies of three western subsidiaries, will be adopted. The companies have similar products, they have operations in China, and all of them are mature in their internationalization process. Interviews with key informants, annual reports, press releases, media materials, presentation material to customers and stakeholders, and other company documents will be used as data sources. Findings: Western subsidiaries in emerging markets operate in a way substantially different from those in the West. What are the conditions initiating the outsourcing of operations? The paper will discuss and present two relevant propositions guiding that process. Practical Implications: MNC headquarters should be aware of the potential for capability development at the subsidiary level. This increased awareness could induce headquarters to consider possible ways of encouraging such capability development and how to leverage these capabilities for better MNC headquarters and/or subsidiary performance. Originality/Value: The paper is expected to contribute to the theme of drivers of subsidiary performance, with a focus on emerging markets. In particular, it will show how some external conditions could promote a capability-building process within subsidiaries.

Keywords: case studies, dynamic capability, emerging market, subsidiary

Procedia PDF Downloads 111
10447 A Systematic Review of Patient-Reported Outcomes and Return to Work after Surgical vs. Non-surgical Midshaft Humerus Fracture

Authors: Jamal Alasiri, Naif Hakeem, Saoud Almaslmani

Abstract:

Background: Patients with humeral shaft fractures have two different treatment options. Surgical therapy carries lower risks of non-union, mal-union, and re-intervention than non-surgical therapy. These positive clinical outcomes make the surgical approach a preferable treatment option despite the risks of radial nerve palsy and additional surgery-related risks. We aimed to evaluate patient-reported outcomes and return to work after surgical vs. non-surgical management of humeral shaft fracture. Methods: We searched databases including PubMed, Medline, and the Cochrane Central Register of Controlled Trials from 2010 to January 2022 for randomised controlled trials (RCTs) and cohort studies comparing patient-reported outcome measures and return to work between surgical and non-surgical management of humerus fracture. Results: After carefully evaluating 1,352 articles, we included three RCTs (232 patients) and one cohort study (39 patients). The surgical intervention used plate/nail fixation, while the non-surgical intervention used a splint or brace to manage the fracture. The pooled DASH effects of all three RCTs at six months (MD: -7.5 [-13.20, -1.89], p = 0.009, I2: 44%) and 12 months (MD: -1.32 [-3.82, 1.17], p = 0.29, I2: 0%) were higher in patients treated surgically than non-surgically. The pooled Constant-Murley scores at six months (MD: 7.945 [2.77, 13.10], p = 0.003, I2: 0%) and 12 months (MD: 1.78 [-1.52, 5.09], p = 0.29, I2: 0%) were higher in patients who received non-surgical rather than surgical therapy. However, the pooled analysis of return to work for both groups remained inconclusive. Conclusion: Altogether, we found no significant evidence supporting the clinical benefits of surgical over non-surgical therapy. Thus, the non-surgical approach remains the preferred therapeutic choice for managing humeral shaft fractures due to its fewer side effects.
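Pooled mean differences like those quoted come from inverse-variance weighting. A minimal fixed-effect sketch is below (random-effects pooling and the I2 statistic are omitted); the study values shown are used only to check that a single study pools to itself.

```python
import math

def pooled_mean_difference(studies, z=1.96):
    """Fixed-effect inverse-variance pooling of mean differences, each
    given as (md, ci_lower, ci_upper) at the 95% confidence level."""
    weights, weighted = [], []
    for md, lo, hi in studies:
        se = (hi - lo) / (2 * z)        # recover the SE from the CI width
        w = 1.0 / (se * se)
        weights.append(w)
        weighted.append(w * md)
    pooled = sum(weighted) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - z * se_pooled, pooled + z * se_pooled)

# One study pools to itself; two identical studies narrow the interval.
pooled, ci = pooled_mean_difference([(-7.5, -13.20, -1.89)])
pooled2, ci2 = pooled_mean_difference([(-2.0, -4.0, 0.0), (-2.0, -4.0, 0.0)])
```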

Keywords: shaft humeral fracture, surgical treatment, patient-reported outcomes, return to work, DASH

Procedia PDF Downloads 87
10446 Assessing Student Attitudes toward Graded Readers, MReader and the MReader Challenge

Authors: Catherine Cheetam, Alan Harper, Melody Elliott, Mika Ito

Abstract:

This paper describes a pilot study conducted with English as a foreign language (EFL) students at a private university in Japan who used graded readers and the MReader website, in class or independently, to enhance their English reading skills. Each semester, students who read 100,000 words and passed the corresponding MReader quizzes entered the ‘MReader Challenge,’ a reading contest that recognizes students for their achievement. The study focused on the attitudes of thirty-six EFL students who successfully completed the Challenge in the 2015 spring semester using graded readers and MReader, and on their motivation to continue using English in the future. The attitudes of these students were measured through their responses to statements on a Likert-scale survey. Follow-up semi-structured interviews were conducted with eleven students to gain additional insight into their opinions. The results suggest that reading graded readers promoted intrinsic motivation among a majority of the participants. This study is preliminary and needs to be expanded and continued to assess the lasting impact of the extensive reading program. Limitations and future directions of the study are also summarized and discussed.

Keywords: attitudes, extensive, intrinsic, methodologies, motivation, reading

Procedia PDF Downloads 192
10445 Generating Synthetic Chest X-ray Images for Improved COVID-19 Detection Using Generative Adversarial Networks

Authors: Muneeb Ullah, Daishihan, Xiadong Young

Abstract:

Deep learning plays a crucial role in identifying COVID-19 and preventing its spread. To improve the accuracy of COVID-19 diagnoses, it is important to have access to a sufficient number of training images of chest X-rays (CXRs) depicting the disease. However, there is currently a shortage of such images. To address this issue, this paper introduces COVID-19 GAN, a model that uses generative adversarial networks (GANs) to generate realistic CXR images of COVID-19, which can be used to train identification models. First, a generator model is created that uses digressive channels to generate CXR images of COVID-19. To differentiate between real and fake disease images, an efficient discriminator is developed by combining a dense connectivity strategy with instance normalization, leveraging their feature-extraction capabilities on hazy CXR regions. Lastly, the deep regret gradient penalty technique is utilized to ensure stable training of the model. Using 4,062 COVID-19 CXR images, the COVID-19 GAN model successfully produces 8,124 synthetic CXR images. The generated images outperform those of DCGAN and WGAN in terms of the Fréchet inception distance. Experimental findings suggest that the COVID-19 GAN-generated CXR images possess the characteristic haziness, offering a promising approach to addressing the limited training data available for COVID-19 model training. When the dataset was expanded, CNN-based classification models outperformed other models, yielding higher accuracy rates than those obtained with the initial dataset and other augmentation techniques. Among these models, ImagNet exhibited the best recognition accuracy of 99.70% on the testing set. These findings suggest that the proposed augmentation method addresses overfitting issues in disease identification and can effectively enhance identification accuracy.
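The Fréchet inception distance (FID) used above compares the mean and covariance of feature embeddings of real and generated images. A minimal sketch, simplified to diagonal covariances so it stays in pure Python (the real metric uses full covariance matrices of Inception-network features and a matrix square root):

```python
import math

def fid_diagonal(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal covariances.

    For diagonal covariances the trace term Tr(C1 + C2 - 2*(C1*C2)^(1/2))
    reduces to a per-dimension sum over variances.
    """
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum(v1 + v2 - 2.0 * math.sqrt(v1 * v2)
                   for v1, v2 in zip(var1, var2))
    return mean_term + cov_term

# Identical distributions give distance 0; shifted means give a positive score.
d_same = fid_diagonal([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0])
d_shift = fid_diagonal([0.0, 0.0], [1.0, 1.0], [1.0, 1.0], [1.0, 1.0])
```

A lower FID means the generated-image feature distribution is closer to the real one, which is how the COVID-19 GAN is compared against DCGAN and WGAN.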

Keywords: classification, deep learning, medical images, CXR, GAN

Procedia PDF Downloads 69
10444 Susanne Bier, Lone Scherfig: Transnationalization Strategies

Authors: Ebru Thwaites Diken

Abstract:

This article analyzes the works of two directors in Danish cinema, Susanne Bier and Lone Scherfig, in the context of the transnationalization of Danish cinema. It looks at how the films' narratives negotiate and reconstruct the local / national / regional and the global. Scholars such as Nestingen & Elkington (2005), Hjort (2010), Higbee and Lim (2010), and Bondebjerg and Redvall (2011) address the transnationalism of Danish cinema in terms of production and distribution processes and how filmmaking transcends national boundaries. This paper employs a particular understanding of transnationalism, in terms of how ideas and characters travel, to analyze how storytelling and style have evolved to connect the national, the regional, and the global on the basis of the works of these two directors. Strategies such as Hollywoodization, i.e., a focus on stardom and classical narration, adherence to conventional European genre formulas, and producing Danish films in the English language, have been identifiable in Danish cinema in the period after the 2000s. Susanne Bier and Lone Scherfig are significant for employing several of these strategies simultaneously. For this reason, this article looks at how these two directors have employed these strategies and negotiated cultural boundaries and exchanges.

Keywords: transnational cinema, Danish cinema, Susanne Bier, Lone Scherfig

Procedia PDF Downloads 59
10443 Developing Test Specifications for an Internationalization Course: Environment for Health in Thai Context

Authors: Rungrawee Samawathdana, Aim-Utcha Wattanaburanon

Abstract:

Test specifications for open-book or open-notes exams provide the essential information needed to identify the types of test items and to support the validity of the evaluation process. This article explains the purpose of test specifications and illustrates how to use them to help construct open-book or open-notes exams. The complexity of the course objectives makes the test design challenging.

Keywords: course curriculum, environment for health, internationalization, test specifications

Procedia PDF Downloads 553
10442 DeepLig: A de-novo Computational Drug Design Approach to Generate Multi-Targeted Drugs

Authors: Anika Chebrolu

Abstract:

Mono-targeted drugs can be of limited efficacy against complex diseases. Recently, multi-target drug design has emerged as a promising tool to fight these challenging diseases. However, the scope of current computational approaches for multi-target drug design is limited. DeepLig presents a de-novo drug discovery platform that uses reinforcement learning to generate and optimize novel, potent, and multi-targeted drug candidates against protein targets. DeepLig’s model consists of two networks in interplay: a generative network and a predictive network. The generative network, a Stack-Augmented Recurrent Neural Network, utilizes a stack memory unit to remember and recognize molecular patterns when generating novel ligands from scratch. The generative network passes each newly created ligand to the predictive network, which then uses multiple Graph Attention Networks simultaneously to forecast the average binding affinity of the generated ligand towards multiple target proteins. With each iteration, given feedback from the predictive network, the generative network learns to optimize itself to create molecules with a higher average binding affinity towards multiple proteins. DeepLig was evaluated on its ability to generate multi-target ligands against two distinct proteins, against three distinct proteins, and against two distinct binding pockets on the same protein. In each test case, DeepLig was able to create a library of valid, synthetically accessible, and novel molecules with optimal and equipotent binding energies. We propose that DeepLig provides an effective approach to designing multi-targeted drug therapies that can potentially show higher success rates during in-vitro trials.
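The generate-score-feedback loop described above can be illustrated with a deliberately toy sketch, in which random token strings stand in for ligands and a hand-written scoring function stands in for the predictive network. Everything here is a hypothetical stand-in: the actual DeepLig system uses a stack-augmented RNN generator and graph attention network predictors.

```python
import random

random.seed(0)
ALPHABET = "CNOS"  # toy stand-in for molecular tokens

def predict_avg_affinity(ligand):
    """Stand-in for the predictive network.

    A real predictor would average graph-attention binding-affinity
    estimates over several protein targets; here we just reward a
    particular token composition.
    """
    return ligand.count("N") + 0.5 * ligand.count("O")

def generate(parent):
    """Stand-in for the generative network: mutate one position."""
    i = random.randrange(len(parent))
    return parent[:i] + random.choice(ALPHABET) + parent[i + 1:]

# Feedback loop: keep candidates the predictor scores higher.
best = "CCCCCCCC"
best_score = predict_avg_affinity(best)
for _ in range(200):
    cand = generate(best)
    score = predict_avg_affinity(cand)
    if score > best_score:
        best, best_score = cand, score
```

The design choice mirrored here is the interplay the abstract describes: the generator only improves because the predictor's score is fed back into the selection of what to keep.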

Keywords: drug design, multitargeticity, de-novo, reinforcement learning

Procedia PDF Downloads 71
10441 Multiscale Simulation of Absolute Permeability in Carbonate Samples Using 3D X-Ray Micro Computed Tomography Images Textures

Authors: M. S. Jouini, A. Al-Sumaiti, M. Tembely, K. Rahimov

Abstract:

Characterizing rock properties of carbonate reservoirs is highly challenging because rock heterogeneities are revealed at several length scales. In the last two decades, the Digital Rock Physics (DRP) approach was implemented successfully in sandstone reservoirs in order to understand rock property behaviour at the pore scale. This approach uses 3D X-ray microtomography images to characterize the pore network and to simulate rock properties from these images. Although DRP is able to predict realistic rock property results in sandstone reservoirs, it still suffers from the lack of a clear workflow for carbonate rocks. The main challenge is the integration of properties simulated at different scales in order to obtain the effective rock property of core plugs. In this paper, we propose several approaches to characterize absolute permeability in carbonate core plug samples using a multi-scale numerical simulation workflow. We present a procedure to simulate the porosity and absolute permeability of a carbonate rock sample using textures of micro-computed tomography images. First, we discretize the X-ray micro-CT image into a regular grid. Then, we classify each cell of the grid using a textural parametric model with supervised classification. The main parameters are first- and second-order statistics, such as mean, variance, range, and autocorrelations, computed from sub-bands obtained after wavelet decomposition. Furthermore, we fill the permeability property in each cell using two strategies based on numerical simulation values obtained locally on subsets. Finally, we simulate the effective permeability numerically using a Darcy's-law simulator. Results obtained for the studied carbonate sample show good agreement with the experimentally measured property.
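Once each grid cell carries a permeability value, the effective permeability of the whole sample is commonly bracketed by the arithmetic mean (layers parallel to flow) and the harmonic mean (layers in series), both consequences of Darcy's law. A minimal sketch of these bounds, with made-up cell values rather than the authors' simulator:

```python
def arithmetic_mean(perms):
    """Upper (parallel-flow) bound on effective permeability."""
    return sum(perms) / len(perms)

def harmonic_mean(perms):
    """Lower (series-flow) bound on effective permeability."""
    return len(perms) / sum(1.0 / k for k in perms)

# Hypothetical cell permeabilities (millidarcy) from a classified grid:
cells = [10.0, 100.0, 50.0, 5.0]
k_upper = arithmetic_mean(cells)
k_lower = harmonic_mean(cells)
```

Any full numerical solution of the Darcy flow problem over the grid must land between these two bounds, which makes them a quick sanity check on a multiscale simulation.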

Keywords: multiscale modeling, permeability, texture, micro-tomography images

Procedia PDF Downloads 172
10440 Application of Mathematical Models for Conducting Long-Term Metal Fume Exposure Assessments for Workers in a Shipbuilding Factory

Authors: Shu-Yu Chung, Ying-Fang Wang, Shih-Min Wang

Abstract:

Conducting long-term exposure assessments is important for workers exposed to chemicals with chronic effects. However, such assessments usually face several constraints, including cost, workers' willingness, and interference with work practices, leading to inadequate long-term exposure data in the real world. In this study, an integrated approach was developed for conducting long-term exposure assessment for welding workers in a shipbuilding factory. A laboratory study was conducted to yield the fume generation rates under various operating conditions. The results and the measured environmental conditions were applied to the near-field/far-field (NF/FF) model for predicting long-term fume exposures via Monte Carlo simulation. Then, the predicted long-term concentrations were used to determine the prior distribution in Bayesian decision analysis (BDA). Finally, the resultant posterior distributions were used to assess the long-term exposure and serve as a basis for initiating control strategies for shipbuilding workers. Results show that the NF/FF model was suitable for predicting exposures to the metal constituents contained in welding fume. The resultant posterior distributions could effectively assess the long-term exposures of shipbuilding welders. Welders' long-term Fe, Mn, and Pb exposures were found highly likely to exceed the action level, indicating that preventive measures should be taken immediately to reduce welders' exposures. Though the resultant posterior distribution can only be regarded as the best solution based on the currently available predicting and monitoring data, the proposed integrated approach can be regarded as a feasible solution for conducting long-term exposure assessment in the field.
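The NF/FF model referenced above treats the workspace as two well-mixed zones: at steady state the far-field concentration is G/Q and the near field adds a G/β term, where β is the airflow exchanged between the zones. A minimal steady-state sketch with illustrative values (not measurements from the study; the paper additionally samples these inputs by Monte Carlo):

```python
def nf_ff_steady_state(g, q, beta):
    """Steady-state two-zone (near-field/far-field) concentrations.

    g    : contaminant generation rate (mg/min)
    q    : room supply/exhaust airflow (m^3/min)
    beta : near-field/far-field exchange airflow (m^3/min)
    Returns (C_near, C_far) in mg/m^3.
    """
    c_far = g / q                # far field sees the whole room airflow
    c_near = c_far + g / beta    # near field adds the exchange-limited term
    return c_near, c_far

# Hypothetical welding-fume scenario:
c_nf, c_ff = nf_ff_steady_state(g=10.0, q=20.0, beta=5.0)
```

The near-field concentration always exceeds the far-field one, which is why the model is suited to a welder working close to the fume source.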

Keywords: Bayesian decision analysis, exposure assessment, near field and far field model, shipbuilding industry, welding fume

Procedia PDF Downloads 125
10439 Structured-Ness and Contextual Retrieval Underlie Language Comprehension

Authors: Yao-Ying Lai, Maria Pinango, Ashwini Deo

Abstract:

While grammatical devices are essential to language processing, how comprehension utilizes cognitive mechanisms is less emphasized. This study addresses this issue by probing the complement coercion phenomenon: an entity-denoting complement following verbs like begin and finish receives an eventive interpretation. For example, (1) “The queen began the book” receives an agentive reading like (2) “The queen began [reading/writing/etc.…] the book.” Such sentences engender additional processing cost in real-time comprehension. The traditional account attributes this cost to an operation that coerces the entity-denoting complement to an event, assuming that these verbs require eventive complements. However, on closer examination, examples like “Chapter 1 began the book” undermine this assumption. An alternative, the Structured Individual (SI) hypothesis, proposes that the complement following aspectual verbs (AspV; e.g. begin, finish) is conceptualized as a structured individual, construed as an axis along various dimensions (e.g. spatial, eventive, temporal, informational). The composition of an animate subject and an AspV, as in (1), engenders an ambiguity between an agentive reading along the eventive dimension, like (2), and a constitutive reading along the informational/spatial dimension, like (3) “[The story of the queen] began the book,” in which the subject is interpreted as a subpart of the complement denotation. Comprehenders need to resolve the ambiguity by searching contextual information, resulting in additional cost. To evaluate the SI hypothesis, a questionnaire was employed. Method: Target AspV sentences such as “Shakespeare began the volume.” were preceded by one of the following types of context sentence: (A) Agentive-biasing, in which an event was mentioned (…writers often read…), (C) Constitutive-biasing, in which a constitutive meaning was hinted (Larry owns collections of Renaissance literature.), (N) Neutral context, which allowed both interpretations. Thirty-nine native speakers of English were asked to (i) rate each context-target sentence pair on a 1-to-5 scale (5 = fully understandable), and (ii) choose possible interpretations for the target sentence given the context. The SI hypothesis predicts that comprehension is harder in the Neutral condition than in the biasing conditions because no contextual information is provided to resolve the ambiguity. Also, comprehenders should obtain the specific interpretation corresponding to the context type. Results: The (A) Agentive-biasing and (C) Constitutive-biasing conditions were rated higher than the (N) Neutral condition (p < .001), while all conditions were within the acceptable range (> 3.5 on the 1-to-5 scale). This suggests that when relevant contextual information is lacking, semantic ambiguity decreases comprehensibility. The interpretation task shows that participants selected the biased agentive/constitutive reading for conditions (A) and (C), respectively. For the Neutral condition, the agentive and constitutive readings were chosen equally often. Conclusion: These findings support the SI hypothesis: the meaning of AspV sentences is conceptualized as a parthood relation involving structured individuals. We argue that semantic representation makes reference to spatial structured-ness (an abstracted axis). To obtain an appropriate interpretation, comprehenders utilize contextual information to enrich the conceptual representation of the sentence in question. This study connects semantic structure to human conceptual structure, and provides a processing model that incorporates contextual retrieval.

Keywords: ambiguity resolution, contextual retrieval, spatial structured-ness, structured individual

Procedia PDF Downloads 316
10438 Quantitative Analysis of Camera Setup for Optical Motion Capture Systems

Authors: J. T. Pitale, S. Ghassab, H. Ay, N. Berme

Abstract:

Biomechanics researchers commonly use marker-based optical motion capture (MoCap) systems to extract human body kinematic data. These systems use cameras to detect passive or active markers placed on the subject. The cameras use triangulation methods to form images of the markers, which typically requires each marker to be visible to at least two cameras simultaneously. Cameras in a conventional optical MoCap system are mounted at a distance from the subject, typically on walls, the ceiling, or fixed or adjustable frame structures. To accommodate space constraints, and as portable force measurement systems gain popularity, there is a need for ever smaller capture volumes. When the efficacy of a MoCap system is investigated, it is important to consider the tradeoff among the camera distance from the subject, pixel density, and the field of view (FOV). If cameras are mounted relatively close to a subject, the area corresponding to each pixel decreases, thus increasing the image resolution. However, the cross section of the capture volume also decreases, reducing the visible area; because of this reduction, additional cameras may be required in such applications. On the other hand, mounting cameras relatively far from the subject increases the visible area but reduces the image quality. The goal of this study was to develop a quantitative methodology to investigate marker occlusions and optimize camera placement for a given capture volume and subject postures using three-dimensional computer-aided design (CAD) tools. We modeled a 4.9 m x 3.7 m x 2.4 m (L x W x H) MoCap volume and designed a mounting structure for cameras using SOLIDWORKS (Dassault Systèmes, MA, USA). The FOV was used to generate the capture volume for each camera placed on the structure. A human body model with configurable posture was placed at the center of the capture volume in the CAD environment. We studied three postures: initial contact, mid-stance, and early swing. The human body CAD model was adjusted for each posture based on the range of joint angles. Markers were attached to the model to enable a full-body capture. The cameras were placed around the capture volume at a maximum distance of 2.7 m from the subject. We used the Camera View feature in SOLIDWORKS to generate images of the subject as seen by each camera, and the number of markers visible to each camera was tabulated. The approach presented in this study provides a quantitative method to investigate the efficacy and efficiency of a MoCap camera setup. This approach enables optimization of a camera setup by adjusting the position and orientation of cameras in the CAD environment and quantifying marker visibility. It is also possible to compare different camera setup options on the same quantitative basis. The flexibility of the CAD environment enables accurate representation of the capture volume, including any objects that may cause obstructions between the subject and the cameras. With this approach, it is possible to compare different camera placement options to each other, as well as to optimize a given camera setup based on quantitative results.
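The distance/pixel-density tradeoff described above can be quantified: the linear field width a camera covers grows as 2 * d * tan(FOV/2), so halving the distance halves the footprint of each pixel along each axis. A small sketch with hypothetical camera parameters (the 60-degree FOV and 1280-pixel resolution below are illustrative, not the cameras used in the study):

```python
import math

def pixel_footprint_mm(distance_m, fov_deg, pixels):
    """Linear field width covered by one pixel, in millimetres.

    distance_m : camera-to-subject distance
    fov_deg    : horizontal field of view of the lens
    pixels     : horizontal sensor resolution
    """
    width_m = 2.0 * distance_m * math.tan(math.radians(fov_deg) / 2.0)
    return width_m * 1000.0 / pixels

# Moving the camera from the study's maximum 2.7 m to 1.35 m halves the
# width each pixel covers (hypothetical 60-degree, 1280-pixel camera):
far = pixel_footprint_mm(2.7, 60.0, 1280)
near = pixel_footprint_mm(1.35, 60.0, 1280)
```

This is the quantity that trades off against the shrinking cross section of the capture volume as cameras are moved closer.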

Keywords: motion capture, cameras, biomechanics, gait analysis

Procedia PDF Downloads 299
10437 An E-Maintenance IoT Sensor Node Designed for Fleets of Diverse Heavy-Duty Vehicles

Authors: George Charkoftakis, Panagiotis Liosatos, Nicolas-Alexander Tatlas, Dimitrios Goustouridis, Stelios M. Potirakis

Abstract:

E-maintenance is a relatively new concept, generally referring to maintenance management by monitoring assets over the Internet. One of the key links in the chain of an e-maintenance system is data acquisition and transmission. Specifically for the case of a fleet of heavy-duty vehicles, where the main challenge is the diversity of the vehicles and of the vehicle-embedded self-diagnostic/reporting technologies, the design of the data acquisition and transmission unit is a demanding task. This is clear if one takes into account that a heavy-vehicle fleet may range from vehicles with only a limited number of analog sensors monitored by dashboard light indicators and gauges to vehicles with a plethora of sensors monitored by a vehicle computer producing digital reports. The present work proposes an adaptable Internet of Things (IoT) sensor node that is capable of addressing this challenge. The proposed sensor node architecture is based on the increasingly popular approach of a single-board computer with expansion boards. In the proposed solution, the expansion boards undertake the tasks of position identification by means of a global navigation satellite system (GNSS) receiver, cellular connectivity by means of a 3G/long-term evolution (LTE) modem, connectivity to on-board diagnostics (OBD), and connectivity to analog and digital sensors by means of a novel expansion board design. Specifically, the latter provides eight analog and three digital sensor channels, as well as one on-board temperature/relative-humidity sensor. The device offers a number of adaptability features based on appropriate zero-ohm resistor placement and appropriate value selection for a limited number of passive components. For example, although in the standard configuration four voltage analog channels with constant voltage sources for the power supply of the corresponding sensors are available, up to two of these voltage channels can be converted to power the connected sensors by means of corresponding constant-current source circuits, and all parameters of the analog sensor power supply and matching circuits are fully configurable, offering the advantage of covering a wide variety of industrial sensors. Note that a key feature of the proposed sensor node, ensuring the reliable operation of the connected sensors, is the appropriate supply of external power to the connected sensors and their proper matching to the IoT sensor node. In standard mode, the IoT sensor node communicates with the data center through 3G/LTE, transmitting all digital/digitized sensor data, the IoT device identity, and the position. Moreover, the proposed IoT sensor node offers WiFi connectivity to mobile devices (smartphones, tablets) equipped with an appropriate application for the manual registration of vehicle- and driver-specific information, and these data are also forwarded to the data center. All control and communication tasks of the IoT sensor node are performed by dedicated firmware, programmed in a high-level language (Python) on top of a modern operating system (Linux). Acknowledgment: This research has been co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH—CREATE—INNOVATE (project code: T1EDK-01359, IntelligentLogger).
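The data flow described above (sensor channels, GNSS position, OBD values, and device identity forwarded to the data center) can be sketched as a single telemetry record. All field names below are illustrative assumptions; the abstract does not specify the actual IntelligentLogger message format:

```python
import json
import time

def build_telemetry(device_id, position, analog, digital, obd):
    """Assemble one transmission record for the data center.

    The schema here is hypothetical; it simply groups the data
    sources the sensor node is described as collecting.
    """
    return json.dumps({
        "device_id": device_id,
        "timestamp": int(time.time()),
        "gnss": {"lat": position[0], "lon": position[1]},
        "analog_channels": analog,    # up to 8 channels on the expansion board
        "digital_channels": digital,  # up to 3 channels
        "obd": obd,                   # values read over on-board diagnostics
    })

record = build_telemetry(
    "node-001", (37.98, 23.72),
    analog=[1.2, 0.0, 3.3, 0.0, 0.0, 0.0, 0.0, 0.0],
    digital=[0, 1, 0],
    obd={"rpm": 1450, "coolant_c": 82},
)
```

A record of this shape could be sent over 3G/LTE in standard mode, with the manually registered vehicle/driver fields merged in when a mobile device connects over WiFi.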

Keywords: IoT sensor nodes, e-maintenance, single-board computers, sensor expansion boards, on-board diagnostics

Procedia PDF Downloads 139
10436 Simulation Studies on Phosphate Removal from Laundry Wastewater Using Biochar: Dubinin Approach

Authors: Eric York, James Tadio, Silas Owusu Antwi

Abstract:

Laundry wastewater contains a diverse range of chemical pollutants that can have detrimental effects on human health and the environment. In this study, simulation studies were conducted in Python (Spyder v3.2) to assess the efficacy of biochar in removing PO₄³⁻ from wastewater. Through modeling and simulation, the mechanisms involved in the adsorption of phosphate by biochar were studied by altering variables specific to the phosphate from common laundry detergents, such as aqueous solubility, initial concentration, and temperature, using the Dubinin approach (DA). Results showed that concentrations equilibrated near the highest values for Sugar beet (120 mgL⁻¹), Tailing (85 mgL⁻¹), CaO-rich (50 mgL⁻¹), Eggshell and rice straw (48 mgL⁻¹), Undaria pinnatifida roots (190 mgL⁻¹), Ca-alginate granular beads (240 mgL⁻¹), Laminaria japonica powder (900 mgL⁻¹), Pine sawdust (57 mgL⁻¹), Rice hull (190 mgL⁻¹), Sesame straw (470 mgL⁻¹), Sugar bagasse (380 mgL⁻¹), Miscanthus giganteus (240 mgL⁻¹), Wood biochar (130 mgL⁻¹), Pine (25 mgL⁻¹), Sawdust (6.8 mgL⁻¹), Sewage sludge, Rice husk (12 mgL⁻¹), Corncob (117 mgL⁻¹), and Maize straw (1800 mgL⁻¹), while Peanut, Eucalyptus polybractea, and Crawfish equilibrated at nearly the same concentration. CO₂-activated Thalia, sewage sludge biochar, and Broussonetia papyrifera leaves equilibrated just at the lower concentration. Only Soybean stover exhibited a sharp rise-and-fall peak at mid-concentration, at 2 mgL⁻¹. The modelling results were consistent with experimental findings from the literature, supporting the accuracy, repeatability, and reliability of the simulation study. The simulation provided insights into the adsorption of PO₄³⁻ from wastewater by biochar in terms of the concentration per volume that can ideally be adsorbed under the given conditions. The study suggests that applying the principle experimentally in real wastewater, with all its complexity, is warranted and not far-fetched.
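A common functional form in the Dubinin family is the Dubinin-Radushkevich isotherm, q = qm * exp(-k * ε²), with the Polanyi potential ε = RT * ln(1 + 1/Ce), which links uptake to equilibrium concentration and temperature. A minimal sketch with illustrative parameters (not fitted to any biochar in the study):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def dr_uptake(ce_mg_l, qm, k, temp_k):
    """Dubinin-Radushkevich isotherm: q = qm * exp(-k * eps^2).

    ce_mg_l : equilibrium phosphate concentration (mg/L)
    qm      : maximum adsorption capacity (mg/g)
    k       : activity coefficient related to mean adsorption energy
    temp_k  : temperature (K)
    Parameter values below are illustrative, not fitted values.
    """
    eps = R * temp_k * math.log(1.0 + 1.0 / ce_mg_l)  # Polanyi potential
    return qm * math.exp(-k * eps * eps)

# Uptake rises toward qm as the equilibrium concentration rises:
q_low = dr_uptake(1.0, qm=50.0, k=2e-8, temp_k=298.0)
q_high = dr_uptake(100.0, qm=50.0, k=2e-8, temp_k=298.0)
```

Sweeping Ce and temperature through this form is one way such a simulation can map out the equilibrium plateaus reported for the different biochars.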

Keywords: simulation studies, phosphate removal, biochar, adsorption, wastewater treatment

Procedia PDF Downloads 100
10435 The Effect of Online Self-Assessment Diaries on Academic Achievement

Authors: Zi Yan

Abstract:

The pedagogical value of self-assessment (SA) is widely recognized. However, identifying effective methods to help students develop productive SA practices poses a significant challenge. Since most students do not acquire self-assessment skills intuitively, they need instruction and guidance. This study is a randomized controlled trial testing the effect of online self-assessment diaries on students’ achievement scores compared to a control group. Two groups of secondary school students (N = 59), recruited through convenience sampling, participated in the study. Participants were randomly assigned to one of two conditions: control (n = 31) and online self-assessment diary (n = 28). The participants completed a curriculum-specific pre-test and a baseline survey in the first week of the 10-week study, and a post-test and survey in the tenth week. The results showed that the SA diary intervention had a significantly positive effect on post-intervention language learning scores after controlling for baseline scores. The findings highlight the potential of self-assessment to enhance educational outcomes, with significant implications for educational policies that promote the integration of SA strategies into pedagogical practices.

Keywords: self-assessment, online diary, academic achievement, experimental study

Procedia PDF Downloads 33