Search results for: accelerated learning
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7485


3765 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to terrorist networks elsewhere in the world. Consequently, agencies dedicated to protecting wildlife habitats face a near-intractable task in adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Agencies therefore need predictive tools that are both high-performing and easy for the user to implement, to help them learn how significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area) interact with each other, in the hope of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks that is both easy to train and performs well compared to other models. We demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and produce significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of the adopted prediction models (logistic regression, support vector machines, etc.). Finally, we assess the performance of the model and the accuracy of our imputation methods by training on a real-world data set comprising four years of imputed data and testing on one year of non-imputed data. This paper makes three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research groups at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, which apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching.
This research introduces ensemble methods (random forests and stochastic gradient boosting) and applies them to real-world poaching data gathered by park rangers in Uganda's rainforests. Second, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable with a large number of missing observations. Third, we provide an alternate approach to predicting the probability of observing poaching both by season and by month. The results of this research are very promising. We conclude that by using stochastic gradient boosting to predict observations of non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire season, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to the previous season-level prediction schedules.
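As a rough illustration of the imputation-then-classification pipeline described above, the sketch below masks values at random, fills them with a random-forest-based iterative imputer, and fits a gradient boosting classifier. The data, feature meanings, and all parameters are synthetic stand-ins, not the study's poaching data; this is a scikit-learn sketch, not the authors' implementation.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))  # stand-ins for features such as animal density, terrain
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Punch random holes in the features to mimic missing field observations
mask = rng.random(X.shape) < 0.2
X_missing = X.copy()
X_missing[mask] = np.nan

# Random-forest-based iterative imputation (one of the strategies named above)
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=20, random_state=0),
    max_iter=5, random_state=0)
X_imputed = imputer.fit_transform(X_missing)

# Stochastic-gradient-boosting-style classifier on the completed data
X_tr, X_te, y_tr, y_te = train_test_split(X_imputed, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC on held-out data: {auc:.2f}")
```

In the study itself, multiple imputed data sets and pooled results (and predictive mean matching as an alternative imputer) would replace this single pass.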

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 286
3764 Mitigating Food Insecurity and Malnutrition by Promoting Carbon Farming via a Solar-Powered Enzymatic Composting Bioreactor with Arduino-Based Sensors

Authors: Molin A., De Ramos J. M., Cadion L. G., Pico R. L.

Abstract:

Malnutrition and food insecurity represent significant global challenges affecting millions of individuals, particularly in low-income and developing regions. The researchers created a solar-powered enzymatic composting bioreactor with an Arduino-based monitoring system for pH, humidity, and temperature. It manages mixed municipal solid wastes, incorporating industrial enzymes and whey additives for accelerated composting and a minimized carbon footprint. Within 15 days, the bioreactor yielded 54.54% compost compared to 44.85% from traditional methods, increasing yield by nearly 10%. Tests showed that the bioreactor compost had 4.84% NPK, passing metal analysis standards, while the traditional pit compost had 3.86% NPK; both are suitable for agriculture. Statistical analyses, including ANOVA and Tukey's HSD test, revealed significant differences in agricultural yield across different compost types based on leaf length, width, and number of leaves. The study compared the effects of different composts on the growth of Brassica rapa subsp. chinensis (Pechay) and Brassica juncea (Mustasa). For Pechay, significant effects of compost type on leaf length (F(5,84) = 62.33, η² = 0.79) and leaf width (F(5,84) = 12.35, η² = 0.42) were found. For Mustasa, significant effects of compost type on leaf length (F(4,70) = 20.61, η² = 0.54), leaf width (F(4,70) = 19.24, η² = 0.52), and number of leaves (F(4,70) = 13.17, η² = 0.43) were observed. This study explores the effectiveness of the enzymatic composting bioreactor and its viability in promoting carbon farming as a solution to food insecurity and malnutrition.
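The one-way ANOVA with an η² effect size reported above can be sketched as follows. The leaf-length samples, group means, and sample sizes are invented stand-ins, not the study's measurements (Tukey's HSD pairwise follow-up, e.g. `scipy.stats.tukey_hsd`, would come after a significant F).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical leaf-length samples (cm) under three compost treatments
bioreactor = rng.normal(12.0, 1.0, 15)
pit = rng.normal(10.5, 1.0, 15)
control = rng.normal(9.0, 1.0, 15)

# One-way ANOVA across the treatment groups
f_stat, p_val = stats.f_oneway(bioreactor, pit, control)

# Effect size eta^2 = SS_between / SS_total
groups = [bioreactor, pit, control]
pooled = np.concatenate(groups)
grand = pooled.mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_total = ((pooled - grand) ** 2).sum()
eta_sq = ss_between / ss_total
print(f"F = {f_stat:.2f}, p = {p_val:.4f}, eta^2 = {eta_sq:.2f}")
```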

Keywords: malnutrition, food insecurity, enzymatic composting bioreactor, arduino-based monitoring system, enzymes, carbon farming, whey additive, NPK level

Procedia PDF Downloads 52
3763 Coated Chromium Thin Film on Zirconium for Corrosion Resistance of Nuclear Fuel Rods by Plasma Focus Device

Authors: Amir Raeisdana, Davood Sohrabi, Mojtaba Nohekhan, Ameneh Kargarian, Maryam Ghapanvari, Alireza Aslezaeem

Abstract:

Improving the properties of zirconium by chromium coating and nitrogen implantation is an ideal way to protect nuclear fuel rods against corrosion and secondary hydrogenation. Metallic chromium (Cr) has attracted attention as a potential coating material on zirconium alloys to limit external cladding corrosion. In this research, a high-energy plasma focus device was used to coat chromium onto, and implant nitrogen ions into, a zirconium substrate. This device emits high-energy nitrogen ions of 10 keV-1 MeV with a flux of 10^16 ions/cm^2 per shot toward the target, making it attractive for implantation into substrate materials at room temperature. Six zirconium samples of 2 cm × 2 cm and 1 mm thickness were located at a distance of 20 cm from the place where the pinch is formed. The experiments were carried out at 0.5 mbar nitrogen gas pressure and 15 kV charging voltage. A pure Cr disc was installed on the anode head for sputtering of chromium and deposition on the zirconium substrate. When the pinch plasma column decays due to various instabilities, intense, high-energy nitrogen ions are accelerated toward the zirconium substrate, and sputtered Cr is simultaneously deposited on it. XRD and XRF analyses were used to study the structural properties of the samples. XRF analysis indicates 77.1% Zr and 11.1% Cr on the surface of the sample. The XRD spectra show the formation of ZrN, CrN, and CrZr composites after nitrogen implantation and chromium coating. The XRD spectra show a chromium peak height of 152.80 a.u. for the major sample (θ = 0°) and 92.99 a.u. for the minor sample (θ = 6°), so implantation and coating along the main axis of the device are significantly greater than in other directions.

Keywords: ZrN and CrN and CrZr composites, angular distribution for Cr deposition rate, zirconium corrosion resistance, nuclear fuel rods, plasma focus device

Procedia PDF Downloads 16
3762 Performance Enrichment of Deep Feed Forward Neural Network and Deep Belief Neural Networks for Fault Detection of Automobile Gearbox Using Vibration Signal

Authors: T. Praveenkumar, Kulpreet Singh, Divy Bhanpuriya, M. Saimurugan

Abstract:

This study analysed the classification accuracy for gearbox faults using machine learning techniques. Gearboxes are widely used for mechanical power transmission in rotating machines. Their rotating components, such as bearings, gears, and shafts, tend to wear with prolonged usage, causing fluctuating vibrations. Improving the dependability of mechanical components like a gearbox is hampered by their sealed design, which makes visual inspection difficult. One way of detecting impending failure is to detect a change in the vibration signature. The current study applies various machine learning algorithms to these vibration signals to obtain the fault classification accuracy of an automotive 4-speed synchromesh gearbox. Experimental data in the form of vibration signals were acquired from a 4-speed synchromesh gearbox using a data acquisition system (DAQ). Statistical features were extracted from the acquired vibration signal under various operating conditions, and the extracted features were then given as input to the algorithms for fault classification. Machine learning algorithms such as Support Vector Machines (SVM), Deep Feed Forward Neural Networks (DFFNN), and Deep Belief Networks (DBN) are used for fault classification. A fusion of the DBN and DFFNN classifiers was designed to further enhance the classification accuracy and to reduce the computational complexity. The fault classification accuracy for each algorithm was thoroughly studied, tabulated, and graphically analysed for the fused and individual algorithms. In conclusion, the fusion of the DBN and DFFNN algorithms yielded the best classification accuracy and was selected for fault detection due to its faster computational processing and greater efficiency.
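A minimal sketch of the feature-extraction-then-classification step described above, using simulated vibration windows and an SVM baseline. The signals, the fault model (sparse impulsive spikes), and the chosen statistical features are illustrative assumptions, not the study's data or its DBN/DFFNN fusion.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

def vibration_features(sig):
    """Typical statistical features of one vibration window."""
    rms = np.sqrt(np.mean(sig ** 2))
    return [rms,
            np.max(np.abs(sig)) / rms,  # crest factor
            kurtosis(sig),              # impulsiveness indicator
            skew(sig),
            np.std(sig)]

def simulate(fault):
    """Toy gear-mesh signal; a fault adds sparse impulsive spikes."""
    t = np.linspace(0.0, 1.0, 1024)
    sig = np.sin(2 * np.pi * 30 * t) + 0.3 * rng.normal(size=t.size)
    if fault:
        sig += (rng.random(t.size) < 0.01) * rng.normal(5.0, 1.0, t.size)
    return sig

labels = np.array([0, 1] * 100)  # healthy / faulty windows
X = np.array([vibration_features(simulate(f)) for f in labels])
clf = make_pipeline(StandardScaler(), SVC())
scores = cross_val_score(clf, X, labels, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```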

Keywords: deep belief networks, DBN, deep feed forward neural network, DFFNN, fault diagnosis, fusion of algorithm, vibration signal

Procedia PDF Downloads 106
3761 Integrating Wound Location Data with Deep Learning for Improved Wound Classification

Authors: Mouli Banga, Chaya Ravindra

Abstract:

Wound classification is a crucial step in wound diagnosis. An effective classifier can aid wound specialists in identifying wound types with reduced financial and time investments, facilitating the determination of optimal treatment procedures. This study presents a deep neural network-based classifier that leverages wound images and their corresponding locations to categorize wounds into various classes, such as diabetic, pressure, surgical, and venous ulcers. By incorporating a developed body map, the process of tagging wound locations is significantly enhanced, providing healthcare specialists with a more efficient tool for wound analysis. We conducted a comparative analysis between two prominent convolutional neural network models, ResNet50 and MobileNetV2, utilizing a dataset of 730 images. Our findings reveal that ResNet50 outperforms MobileNetV2, achieving an accuracy of approximately 90%, compared to MobileNetV2's 83%. This disparity highlights the superior capability of ResNet50 in the context of this dataset. The results underscore the potential of integrating deep learning with spatial data to improve the precision and efficiency of wound diagnosis, ultimately contributing to better patient outcomes and reducing healthcare costs.
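The image-plus-location idea can be sketched as a simple early fusion of feature vectors. The 32-d embeddings, the 10-region body map, and the toy labels below are illustrative assumptions; in the study the image features would come from a CNN such as ResNet50 or MobileNetV2, and the classifier head would be trained end to end.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 600
# Stand-ins: a 32-d image embedding per wound, and a body-map location id
img_emb = rng.normal(size=(n, 32))
location = rng.integers(0, 10, size=n)        # 10 hypothetical body-map regions
y = 2 * (img_emb[:, 0] > 0) + (location < 5)  # toy 4-class wound label

loc_onehot = np.eye(10)[location]             # one-hot encode the location
X = np.hstack([img_emb, loc_onehot])          # early fusion: concatenate

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
acc_fused = LogisticRegression(max_iter=2000).fit(X_tr, y_tr).score(X_te, y_te)
acc_image = LogisticRegression(max_iter=2000).fit(
    X_tr[:, :32], y_tr).score(X_te[:, :32], y_te)
print(f"image only: {acc_image:.2f}, image + location: {acc_fused:.2f}")
```

Because the toy label depends on both modalities, the fused classifier recovers it while the image-only one cannot, mirroring the motivation for adding location data.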

Keywords: wound classification, MobileNetV2, ResNet50, multimodal

Procedia PDF Downloads 23
3760 Econophysical Approach on Predictability of Financial Crisis: The 2001 Crisis of Turkey and Argentina Case

Authors: Arzu K. Kamberli, Tolga Ulusoy

Abstract:

Technological developments and the resulting global communication have made the 21st century one in which large amounts of capital can be moved from one end of the world to the other at the push of a button. As a result, capital flows have accelerated, and capital inflows have brought with them crisis contagion. Given irrational human behavior, financial crises have become a fundamental problem for countries around the world and have increased researchers' interest in the causes of crises and the periods in which they occur. The complex nature of financial crises, and their structure that resists linear explanation, have therefore been taken up by the new discipline of econophysics. As is known, although mechanisms for predicting financial crises exist, none is definitive. In this context, this study uses the concept of the electric field from electrostatics to develop an early econophysical approach to global financial crises. The aim is to define a model that can act before a financial crisis occurs, identify financial fragility at an earlier stage, and help public- and private-sector actors, policy makers, and economists with an econophysical approach. The 2001 Turkey crisis was assessed with data from the Turkish Central Bank covering 1992 to 2007, and for the 2001 Argentina crisis, data were taken from the IMF and the Central Bank of Argentina covering 1997 to 2007. As an econophysical method, an analogy is drawn between Gauss's law, used in the calculation of the electric field, and the forecasting of financial crises. Exploiting this analogy, which is based on currency movements and money mobility, the concept of Φ (financial flux) is adopted for crisis pre-warning.
The Φ (financial flux) values obtained from this formula, used here for the first time, were analyzed with MATLAB software, and the Φ values were confirmed to give a pre-warning for both the 2001 Turkey and Argentina crises.
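The abstract does not give the Φ formula, so the sketch below only illustrates the flux analogy: net capital flow across an economy's "boundary" treated like field flux through a closed surface, with a rolling z-score as one possible pre-warning trigger. All numbers, the stress episode, and the alarm rule are invented for illustration and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical monthly capital flows (USD bn) with a stress episode of
# sharp outflows starting at month 40 (all values invented)
inflow = np.abs(rng.normal(10.0, 2.0, 60))
outflow = np.abs(rng.normal(10.0, 2.0, 60))
outflow[40:46] += 15.0

# Gauss's-law analogy: the "financial flux" is the net flow crossing the
# economy's boundary, like electric flux through a closed surface
phi = inflow - outflow

# Illustrative pre-warning rule: flag months whose flux falls far below
# its recent rolling mean
window = 12
z = np.array([(phi[i] - phi[i - window:i].mean()) / (phi[i - window:i].std() + 1e-9)
              for i in range(window, phi.size)])
alarm = np.where(z < -2.5)[0] + window
print("pre-warning months:", alarm.tolist())
```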

Keywords: econophysics, financial crisis, Gauss's Law, physics

Procedia PDF Downloads 150
3759 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks

Authors: Mst Shapna Akter, Hossain Shahriar

Abstract:

One of the most important challenges in the field of software code auditing is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. Large bodies of open-source C and C++ code are now available, making it possible to build a large-scale machine learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies. Moreover, we retain the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning models such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model that can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing only a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as the features, but require longer execution time, as the word embedding algorithm adds complexity to the overall system.
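A toy version of the minimal-representation-plus-classifier pipeline: the regex tokenizer stands in for the intermediate representation, a bag of tokens stands in for the learned embeddings, and the six-function corpus and labels are invented for illustration (a real system would train deep models on millions of functions).

```python
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny invented corpus; 1 = overflow-prone, 0 = bounded counterpart
funcs = [
    'void f(char *s){ char buf[8]; strcpy(buf, s); }',             # unbounded copy
    'void g(char *s){ char buf[8]; strncpy(buf, s, sizeof buf); }',
    'void h(void){ char line[16]; gets(line); }',                  # unbounded read
    'void k(void){ char line[16]; fgets(line, sizeof line, stdin); }',
    'int m(char *s){ char b[4]; sprintf(b, "%s", s); return 0; }',
    'int n(char *s){ char b[4]; snprintf(b, sizeof b, "%s", s); return 0; }',
]
labels = [1, 0, 1, 0, 1, 0]

# A crude "minimal intermediate representation": keep identifiers, drop punctuation
def tokenize(code):
    return re.findall(r"[A-Za-z_]\w*", code)

vec = CountVectorizer(tokenizer=tokenize, token_pattern=None)
X = vec.fit_transform(funcs)
clf = LogisticRegression().fit(X, labels)

pred = clf.predict(vec.transform(['void p(char *t){ char b[8]; strcpy(b, t); }']))
print("vulnerable" if pred[0] == 1 else "safe")
```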

Keywords: cyber security, vulnerability detection, neural networks, feature extraction

Procedia PDF Downloads 83
3758 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction

Authors: Talal Alsulaiman, Khaldoun Khashanah

Abstract:

In this paper, we provide a literature survey on the artificial stock market (ASM) problem. The paper begins by exploring the complexity of the stock market and the need for ASMs. An ASM aims to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level). The variety of patterns at the macro level is a function of the ASM's complexity. The financial market system is a complex system in which the relationship between the micro and macro levels cannot be captured analytically. Computational approaches, such as simulation, are expected to capture this connection; agent-based simulation is the simulation technique commonly used to build ASMs. The paper proceeds by discussing the components of an ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-averse assumption in the construction of agents' attributes. The influence of social networks on the development of agents' interactions is also addressed; network topologies such as small-world, distance-based, and scale-free networks may be utilized to model economic collaborations. In addition, the primary methods for developing agents' learning and adaptive abilities are summarized, including approaches such as genetic algorithms, genetic programming, artificial neural networks, and reinforcement learning. We also discuss the most common statistical properties of stocks (the stylized facts) that are used for the calibration and validation of ASMs. Moreover, we review the major related previous studies and categorize the approaches they employ. Finally, research directions and potential research questions are presented. Future research on ASMs may focus on the macro level, by analyzing market dynamics, or on the micro level, by investigating the wealth distributions of the agents.
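A minimal agent-based market of the kind surveyed here can be sketched with two bounded-rational agent types whose aggregate excess demand moves the price. The demand coefficients, noise scale, and fundamental value are illustrative assumptions, not drawn from any surveyed model.

```python
import numpy as np

rng = np.random.default_rng(5)
T, fundamental = 2000, 100.0
price = np.empty(T)
price[0] = fundamental

# Fundamentalists trade toward the fundamental value; chartists
# extrapolate the latest trend; noise traders add random demand
for t in range(1, T):
    trend = price[t - 1] - price[t - 2] if t > 1 else 0.0
    excess_demand = (0.05 * (fundamental - price[t - 1])  # fundamentalists
                     + 0.04 * trend                       # chartists
                     + rng.normal(scale=0.5))             # noise traders
    price[t] = price[t - 1] + excess_demand               # price impact of demand

returns = np.diff(np.log(price))
kurt = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2 - 3.0
print(f"simulated {T} steps; excess kurtosis of log-returns: {kurt:.2f}")
```

Checking statistics of the simulated returns (kurtosis, autocorrelation of absolute returns, etc.) against empirical stylized facts is exactly the calibration/validation step the survey describes.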

Keywords: artificial stock markets, market dynamics, bounded rationality, agent based simulation, learning, interaction, social networks

Procedia PDF Downloads 351
3757 Education for Sustainability Using PBL on an Engineering Course at the National University of Colombia

Authors: Hernán G. Cortés-Mora, José I. Péna-Reyes, Alfonso Herrera-Jiménez

Abstract:

This article describes the experience of implementing Project-Based Learning (PBL) in an engineering course at the Universidad Nacional de Colombia, with the aim of strengthening the student skills necessary for the exercise of their profession within a sustainability framework. First, we present a literature review of the education-for-sustainability field, emphasizing the skills and knowledge areas required for its development, as well as the commitment of the Faculty of Engineering of the Universidad Nacional de Colombia, and of other engineering faculties in the country, to education for sustainability. The article covers the general aspects of the course, describes how student teams were formed, and recounts their experience during the first semester of 2017. During this period, two groups of students decided to develop their course project to solve a problem faced by a non-governmental organization (NGO) that works with head-of-household mothers in a low-income neighborhood in Bogota (Colombia). Subsequently, we show how sustainability is incorporated into the course, how tools are provided to the students, and how activities are designed to strengthen their abilities, allowing them to incorporate sustainability into their projects while also working on the methodology used to develop those projects. Finally, we present the results obtained by the students, who delivered their project prototypes to the community they were working with, and the conclusions they reached about the course experience.

Keywords: sustainability, project-based learning, engineering education, higher education for sustainability

Procedia PDF Downloads 348
3756 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments

Authors: Rahul Paul, Peter Mctaggart, Luke Skinner

Abstract:

Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection and Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine learning are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It fits 3D catenary models to the captured LiDAR points of each cluster using a least-squares method. An iterative learning process is used to identify candidate conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
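The catenary-fitting step can be sketched as a least-squares fit of the three catenary parameters to noisy synthetic span points. The span geometry, noise level, and parameterisation below are invented for illustration, and the clustering stage that would precede this fit is omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, c, x0, y0):
    """Hanging-conductor profile: y = y0 + c*(cosh((x - x0)/c) - 1)."""
    return y0 + c * (np.cosh((x - x0) / c) - 1.0)

# Synthetic LiDAR returns from a single 80 m span with capture noise
rng = np.random.default_rng(6)
x = np.linspace(0.0, 80.0, 120)  # metres along the span
y_obs = catenary(x, 400.0, 40.0, 15.0) + rng.normal(scale=0.05, size=x.size)

# Least-squares estimation of the catenary constant, low point, and height
popt, _ = curve_fit(catenary, x, y_obs, p0=[200.0, 30.0, 10.0])
c_fit, x0_fit, y0_fit = popt
print(f"fitted catenary constant c = {c_fit:.0f} m, low point x0 = {x0_fit:.1f} m")
```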

Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry

Procedia PDF Downloads 94
3755 Social Change and Cultural Sustainability in the Wake of Digital Media Revolution in South Asia

Authors: Binod C. Agrawal

Abstract:

In modern history, industrial and media merchandising in South Asia from East Asia, Europe, the United States, and other countries of the West is over 200 years old. Hence, continued external technology and media exposure is not a new experience in multi-lingual and multi-religious South Asia, which has evolved cultural means to withstand structural change. In the post-World War II phase, media exposure, especially to telecommunication, film, the Internet, radio, print media, and television, has increased manifold. South Asia did not lose any time in acquiring and adopting digital media, accelerated by the chip revolution, computers, and satellite communication. Digital media penetration and utilization are exceptionally high, though their spread is unequal in intensity, use, and effects. The author argues that industrial and media products are "cultural products" apart from being "technological products"; hence, their influences are most felt in the cultural domain, which may lead to the blunting of unique cultural specifics in multi-cultural, multi-lingual, and multi-religious South Asia. Social scientists, political leaders, and parents have voiced concerns about "cultural domination", "digital media colonization", and "Westernization". Increased digital media access has also opened the doors to pornography and other harmful information, sparking fresh debates and discussions about serious negative, harmful, and undesirable social effects, especially among youth. Within a 'techno-social' perspective, and based on recent research studies, the paper aims to describe and analyse possible socio-economic change due to digital media penetration. Further, the analysis supports the view that the ancient multi-lingual and multi-religious cultures of South Asia, owing to their inner cultural strength, may endure without a process of irreversible structural change setting in.

Keywords: cultural sustainability, digital media effects, digital media impact in South Asia, social change in South Asia

Procedia PDF Downloads 349
3754 Improving the Students’ Writing Skill by Using Brainstorming Technique

Authors: M. Z. Abdul Rofiq Badril Rizal

Abstract:

This research aims to measure the improvement of students' English writing skill through the use of a brainstorming technique. The technique helps students overcome difficulties in generating ideas, leads them to arrange their ideas well, and keeps them focused on the topic being developed in writing. The research method used is classroom action research. The data sources of the research are an English teacher, who acts as an observer, and the 35 students of class X.MIA5. Test results and observations were collected as the data in this research. Based on the results of cycle one, the percentage of students who reached the minimum accomplishment criteria (MAC) was 76.31%. This showed that the research had to continue to cycle two, because its aim had not yet been accomplished: not all of the students' scores had reached the MAC. After the research continued to cycle two and the weaknesses were addressed, the teaching and learning process ran better. In the test conducted at the end of the learning process in cycle two, all of the students reached the minimum score of 76 or above set by the minimum accomplishment criteria. This means the research was successful, and the percentage of students who reached the minimum accomplishment criteria was 100%. The writer therefore concludes that the brainstorming technique is able to improve the students' English writing skill in the tenth grade of SMAN 2 Jember.

Keywords: brainstorming technique, improving, writing skill, knowledge and innovation engineering

Procedia PDF Downloads 361
3753 Autism Spectrum Disorder Classification Algorithm Using Multimodal Data Based on Graph Convolutional Network

Authors: Yuntao Liu, Lei Wang, Haoran Xia

Abstract:

Machine learning has been extensively applied to the development of classification models for autism spectrum disorder (ASD) using neuroimaging data. This paper proposes a fused multimodal classification network based on a graph neural network. First, the brain is segmented into 116 regions of interest using a medical segmentation template (AAL, Anatomical Automatic Labeling). The image features of sMRI and the signal features of fMRI are extracted to build the node and edge embedding representations of the brain graph. We then construct a dynamically updated brain graph neural network and propose a method based on a dynamic adjacency-matrix update mechanism and a learnable graph to further improve the accuracy of autism diagnosis and recognition. On the Autism Brain Imaging Data Exchange I dataset (ABIDE I), we reach a prediction accuracy of 74% between ASD and typically developing (TD) subjects. In addition, to study biomarkers that can help doctors analyze the disease and to improve interpretability, we extract the ROIs with the five largest and five smallest weights. This work provides a meaningful approach to brain disorder identification.
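A single graph-convolution layer over the 116-ROI brain graph can be sketched in plain NumPy. The adjacency matrix, node features, and weights below are random stand-ins for the fMRI/sMRI-derived inputs, and the dynamic adjacency update of the proposed method is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)
n_roi, n_feat, n_hidden = 116, 8, 16  # 116 AAL ROIs; feature sizes are toy values

# Stand-in functional-connectivity graph (e.g. thresholded fMRI correlations)
A = (rng.random((n_roi, n_roi)) > 0.9).astype(float)
A = np.maximum(A, A.T)                # make the adjacency symmetric
np.fill_diagonal(A, 0.0)

X = rng.normal(size=(n_roi, n_feat))  # per-ROI node features (e.g. from sMRI)
W = 0.1 * rng.normal(size=(n_feat, n_hidden))

# One graph-convolution layer: H = ReLU(D^(-1/2) (A + I) D^(-1/2) X W)
A_hat = A + np.eye(n_roi)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
H = np.maximum(A_norm @ X @ W, 0.0)

# Mean-pool the node representations into one subject-level embedding,
# which a downstream classifier would map to an ASD/TD prediction
graph_embedding = H.mean(axis=0)
print(graph_embedding.shape)
```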

Keywords: autism spectrum disorder, brain map, supervised machine learning, graph network, multimodal data, model interpretability

Procedia PDF Downloads 57
3752 A Shared Space: A Pioneering Approach to Interprofessional Education in New Zealand

Authors: Maria L. Ulloa, Ruth M. Crawford, Stephanie Kelly, Joey Domdom

Abstract:

In recent decades, health and social service delivery has become more collaborative and interdisciplinary. Emerging trends suggest the need for an integrative and interprofessional approach to meet the challenges faced by professionals navigating the complexities of health and social service practice environments. Terms such as multidisciplinary practice, interprofessional collaboration, interprofessional education, and transprofessional practice have become the common language used across a range of social service and health providers in western democratic systems. In Aotearoa New Zealand, one example of an interprofessional collaborative approach to curriculum design and delivery in health and social services is the development of an innovative Master of Professional Practice programme. This qualification is the result of a strategic partnership between two tertiary institutions in Wellington: Whitireia New Zealand and the Wellington Institute of Technology (WelTec). The Master of Professional Practice programme was designed and delivered from a collaborative, interprofessional, and relational perspective. Teachers and students in the programme come from a diverse range of cultural, professional, and personal backgrounds and are engaged in courses using a blended learning approach that incorporates the values and pedagogies of interprofessional education. Students are actively engaged in professional practice while undertaking the programme. This presentation describes themes from exploratory qualitative formative observations of engagement in class and online, student assessments, and student research projects, as well as from qualitative interviews with the programme teaching staff.
These formative findings reveal the development of critical practice skills around the common themes of the programme: research and evidence-based practice, education, leadership, working with diversity, and advancing critical reflection on professional identities and interprofessional practice. This presentation will provide evidence of enhanced learning experiences in higher education and learning in multidisciplinary contexts.

Keywords: diversity, exploratory research, interprofessional education, professional identity

Procedia PDF Downloads 297
3751 Design and Implementation of Machine Learning Model for Short-Term Energy Forecasting in Smart Home Management System

Authors: R. Ramesh, K. K. Shivaraman

Abstract:

The main aim of this paper is to handle energy requirements efficiently by merging advanced digital communication and control technologies for smart grid applications. To reduce household load during peak hours, utilities apply several incentives, such as real-time pricing, time-of-use tariffs, and demand response, to residential customers through smart meters. However, these methods are inconvenient in that the user needs to respond manually to prices that vary in real time. To overcome this inconvenience, this paper proposes a convolutional neural network (CNN) combined with a k-means clustering machine learning model that can forecast energy requirements in the short term, i.e., by hour of the day or day of the week. Integrating the proposed technique with a home energy management system based on Bluetooth Low Energy provides predicted values to the user for scheduling appliances in advance. The paper describes in detail the CNN configuration and the k-means clustering algorithm for short-term energy forecasting.
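The clustering side of such an approach can be sketched as k-means over historical daily load profiles, with the assigned centroid used as a naive short-term forecast. The profiles, cluster count, and the 12-hour assignment split are illustrative assumptions, and the CNN stage of the proposed model is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
hours = np.arange(24)

# Invented daily load profiles (kW): weekdays peak in the evening,
# weekends around midday
weekday = 1.0 + 2.0 * np.exp(-0.5 * ((hours - 19) / 2.0) ** 2)
weekend = 1.0 + 2.0 * np.exp(-0.5 * ((hours - 13) / 3.0) ** 2)
profiles = np.array([(weekday if d % 7 < 5 else weekend)
                     + rng.normal(scale=0.1, size=24) for d in range(90)])

# Cluster the historical days; each centroid is a typical consumption pattern
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)

# Short-term forecast: assign a new day to a cluster from its first 12 hours,
# then predict the matching centroid for the remaining hours
new_day = weekday + rng.normal(scale=0.1, size=24)
dists = ((km.cluster_centers_[:, :12] - new_day[:12]) ** 2).sum(axis=1)
cluster = int(dists.argmin())
forecast = km.cluster_centers_[cluster, 12:]
mae = np.abs(forecast - new_day[12:]).mean()
print(f"assigned cluster {cluster}; MAE over remaining hours: {mae:.2f} kW")
```

The scheduler could then shift flexible appliances away from the predicted peak hours of the assigned centroid.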

Keywords: convolutional neural network, fuzzy logic, k-means clustering approach, smart home energy management

Procedia PDF Downloads 299
3750 Learning Chinese Suprasegmentals for a Better Communicative Performance

Authors: Qi Wang

Abstract:

Chinese has become a powerful worldwide language, and millions of learners are studying it all over the world. Chinese is a tone language with unique, meaningful characters, which makes it more difficult for foreign learners to master. As with any foreign language, learners of Chinese first learn the basic Chinese sound structure (the initials and finals, tones, the neutral tone, and tone sandhi). In subsequent studies, it is quite common for teachers to put a lot of effort into drilling and error correction in order to help students pronounce correctly, while ignoring the training of suprasegmental features (e.g., stress and intonation). This paper analyses oral data from our graduating students (two-year program) from 2006-2013 and presents the intonation pattern of our graduates when speaking Chinese as a second language: high and flat, with heavy accents, without lexical stress, and without appropriate stop endings or intonation, which leads to misunderstandings in real contexts of communication and in the official international Chinese tests, e.g., the HSK (Chinese Proficiency Test) and HSKK (HSK Speaking Test). The paper also demonstrates how native Chinese speakers use suprasegmental features strategically across different functions and moods (declarative, interrogative, imperative, exclamatory, and rhetorical intonations) in order to train learners to achieve better communicative performance.

Keywords: second language learning, suprasegmental, communication, HSK (Chinese Proficiency Test)

Procedia PDF Downloads 431
3749 Service Information Integration Platform as Decision Making Tools for the Service Industry Supply Chain-Indonesia Service Integration Project

Authors: Haikal Achmad Thaha, Pujo Laksono, Dhamma Nibbana Putra

Abstract:

Customer service is a core interest of any company in the service sector, whether as the core business or as the service part of its operations. Most of the time, practitioners and previous research in the service industry focus on finding the best business model for the service function, usually deciding between fully in-house customer service, outsourcing, or something in between. Conventionally, making this decision is an important part of the management’s job, and it is a process that takes time and staff effort, during which market conditions and overall company needs may change, causing loss of income and temporary disturbance to the company’s operations. In this paper, we offer a new conceptual model to assist the decision-making process in the service industry. The model features an information platform as the central tool for integrating service industry operations. The result is a service information model that would ideally improve the response time and effectiveness of decision making. It would also help the service industry switch service solutions quickly, through machine learning, as the company grows and its service needs change.

Keywords: service industry, customer service, machine learning, decision making, information platform

Procedia PDF Downloads 614
3748 Steady and Spatio-Temporal Monitoring of Water Quality Feeding Area Southwest of Great Casablanca (Morocco)

Authors: Hicham Maklache, Rajae Delhi, Fatiha Benzha, Mohamed Tahiri

Abstract:

In Morocco, where a semi-arid climate is dominant, industrial and drinking water is supplied primarily from surface water. Morocco currently has 118 multi-purpose dams. While the construction of these works was a necessity to secure, in all seasons, the water essential to the country, it is imperative to monitor and protect the quality of the running water. Most dam reservoirs in use are threatened by eutrophication due to increased terrigenous and anthropogenic pollution, arising from over-fertilization of the water by phosphorus and nitrogen nutrients and accelerated by the uncontrolled growth and aging of microalgae. It should also be noted that the daily practices of citizens with respect to the resource, an essential component of almost all human activities (agriculture, agro-industry, hydropower, etc.), have contributed significantly to the deterioration of water quality despite its treatment in several plants. As a result, the treated water carries a legacy of bad tastes and odors unacceptable to the consumer. The present work reports results on the water quality of the Oum Er Rbia watershed, used to supply drinking water to the whole terraced area connecting the city of Khenifra to the city of Azemmour. The area southwest of Great Casablanca (the metropolis of the kingdom, with about 4 million inhabitants) draws 50% of its water needs from the Sidi Said Maachou dam, the last anchor point of the watershed before it discharges into the Atlantic Ocean. The results were analysed on a spatio-temporal scale and helped to establish a history of water quality monitoring during the 2009-2011 cycles; the study also presents the evolution of quality according to seasonal rhythmicity and rainfall, and gives an overview of the concept of watershed stewardship.

Keywords: crude surface water quality, Oum Er Rbia hydraulic basin, spatio-temporal monitoring, Great Casablanca drink water quality, Morocco

Procedia PDF Downloads 438
3747 Prediction of Survival Rate after Gastrointestinal Surgery Based on The New Japanese Association for Acute Medicine (JAAM Score) With Neural Network Classification Method

Authors: Ayu Nabila Kusuma Pradana, Aprinaldi Jasa Mantau, Tomohiko Akahoshi

Abstract:

Disseminated intravascular coagulation (DIC) following gastrointestinal surgery carries a poor prognosis. It is therefore important to determine the factors that can predict the prognosis of DIC. This study investigates the factors that may influence the outcome of DIC in patients after gastrointestinal surgery. Eighty-one patients were admitted to the intensive care unit after gastrointestinal surgery at Kyushu University Hospital from 2003 to 2021. Acute DIC scores were estimated using the new Japanese Association for Acute Medicine (JAAM) score before surgery and on postoperative days 1, 3, and 7. The acute DIC scores are compared with the Sequential Organ Failure Assessment (SOFA) score, platelet count, lactate level, and a variety of biochemical parameters. The study applies machine learning algorithms to predict the prognosis of DIC after gastrointestinal surgery. The results are expected to serve as an indicator for evaluating patient prognosis, so as to increase life expectancy and reduce mortality among DIC patients after gastrointestinal surgery.
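A minimal sketch of the kind of neural network classifier the study describes, here scikit-learn’s MLPClassifier trained on synthetic stand-ins for the JAAM score, SOFA score, platelet count, and lactate level. The feature ranges and the outcome rule are fabricated for illustration and are not the study’s data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic stand-in for the cohort: 81 hypothetical post-surgical patients.
n = 81
X = np.column_stack([
    rng.integers(0, 9, n),      # JAAM DIC score (0-8)
    rng.integers(0, 20, n),     # SOFA score
    rng.normal(150, 60, n),     # platelet count (x10^9/L)
    rng.gamma(2.0, 1.5, n),     # lactate (mmol/L)
])
# Toy outcome: higher JAAM and SOFA scores lower the survival probability.
logit = 2.0 - 0.35 * X[:, 0] - 0.15 * X[:, 1]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = survived

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
acc = clf.score(scaler.transform(X_te), y_te)
print(f"held-out accuracy: {acc:.2f}")
```

With a cohort this small, the held-out accuracy is noisy; the sketch only illustrates the feature-to-outcome pipeline, not expected clinical performance.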

Keywords: the survival rate, gastrointestinal surgery, JAAM score, neural network, machine learning, disseminated intravascular coagulation (DIC)

Procedia PDF Downloads 250
3746 Iranian Students’ and Teachers’ Perceptions of Effective Foreign Language Teaching

Authors: Mehrnoush Tajnia, Simin Sadeghi-Saeb

Abstract:

Students and teachers have different perceptions of the effectiveness of instruction. Comparing students’ and teachers’ beliefs and finding the mismatches between them can increase L2 students’ satisfaction. Few studies have taken into account the beliefs of both students and teachers on different aspects of pedagogy, or the effect of learners’ level of education and learning context on effective foreign language teacher practices. Therefore, the present study was conducted to compare students’ and teachers’ perceptions of effective foreign language teaching. A sample of 303 learners and 54 instructors from different private language institutes and universities participated in the study. A questionnaire was developed to elicit participants’ beliefs about effective foreign language teaching and learning. The analysis of the results revealed that: a) there is a significant difference between students’ and teachers’ beliefs about effective teacher practices; b) class level influences students’ perceptions of the effective foreign language teacher; and c) with respect to effective teacher practices, there is a significant difference of opinion between learners who study foreign languages at university and those who study them in private institutes. The present paper concludes that finding the gap between students’ and teachers’ beliefs would help both groups enhance their learning and teaching.

Keywords: effective teacher, effective teaching, students’ beliefs, teachers’ beliefs

Procedia PDF Downloads 309
3745 Utilising Sociodrama as Classroom Intervention to Develop Sensory Integration in Adolescents who Present with Mild Impaired Learning

Authors: Talita Veldsman, Elzette Fritz

Abstract:

Many children attending special education present with sensory integration difficulties that hamper their learning and behaviour. These learners can benefit from therapeutic interventions, delivered as part of the classroom curriculum, that address sensory development and allow holistic development to take place. A research study was conducted utilising sociodrama as a therapeutic intervention in the classroom in order to develop sensory integration skills. The use of sociodrama as a therapeutic intervention proved to be a successful multi-disciplinary approach in which education and psychology could build a bridge of growth and integration. The paper describes how sociodrama was used in the classroom and how the sessions were designed. The research followed a qualitative approach and involved six Afrikaans-speaking adolescents, aged 12-14 years, attending a special secondary school. Data collection included observations during the sessions, reflective art journals, semi-structured interviews with the teacher, and informal interviews with the adolescents. The analysis found improved self-confidence, better social relationships, sensory awareness, and self-regulation in the participants after a period of a year.

Keywords: education, sensory integration, sociodrama, classroom intervention, psychology

Procedia PDF Downloads 574
3744 Data Refinement Enhances The Accuracy of Short-Term Traffic Latency Prediction

Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong

Abstract:

Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost that yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. 
It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than of the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
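Conclusion (1), that the median is a more meaningful latency summary than the average because the latter is plagued by outliers, can be illustrated in a few lines of NumPy. The latency values below are synthetic, not the Taiwan Freeway data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 5-minute latency samples (seconds) for one segment: most
# vehicles take about 600 s, while a handful of outliers take far longer.
latencies = rng.normal(600.0, 20.0, 500)
latencies[:10] = rng.uniform(3000.0, 6000.0, 10)  # heavy-tailed outliers

typical = 600.0  # the latency most drivers actually experience
print(abs(np.mean(latencies) - typical))    # pulled far upward by outliers
print(abs(np.median(latencies) - typical))  # barely moves
```

With only 2% of the samples replaced by extreme values, the mean shifts by tens of seconds while the median stays within a few seconds of the typical latency, which is why predicting the median is both more accurate and more meaningful.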

Keywords: data refinement, machine learning, mutual information, short-term latency prediction

Procedia PDF Downloads 166
3743 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage

Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng

Abstract:

Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archaeological fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional way is to extract age-related features designed by experts from macroscopic or radiological images, followed by classification or regression analysis. Those results still have not met the high-level requirements of practice, and a limitation of feature design and manual extraction methods is loss of information, since the features are likely not designed explicitly to capture all the information relevant to age. Deep learning (DL) has recently garnered much interest in imaging and computer vision. It enables learning features that are important without a prior bias or hypothesis and could therefore support AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to a manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before being fed into the networks, all images were augmented with random rotation and vertical flips, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary parameter. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method followed in full the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies; CT data and VR images were used.
The radiation density of the first costal cartilage was recorded from the CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored on the VR images using an eight-stage staging technique. According to the prior study, the optimal models were a decision tree regression model in males and a stepwise multiple linear regression equation in females. Predicted ages of the test set were calculated separately using the different models by sex. A total of 2600 patients (training and validation sets, mean age=45.19 years±14.20 [SD]; test set, mean age=46.57±9.66) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, far better than the manual method’s MAEs of 8.90 and 6.42, respectively. These results show that the ResNeXt DL model outperformed the manual method in AAE based on CT reconstructions of the costal cartilage, and the developed system may be a supportive tool for AAE.
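The evaluation protocol, five-fold cross-validation with 4:1 train/validation splits scored by MAE, can be sketched as below. The ages are simulated and a trivial mean predictor stands in for the ResNeXt network, so only the scaffolding is meant to match the study:

```python
import numpy as np
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
ages = rng.uniform(20.0, 69.99, 2500)  # stand-in for the 2500 patients' ages

# Five-fold CV: each fold yields a 4:1 train/validation split (2000 vs 500).
maes = []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(ages):
    # Placeholder "model": predict the training-set mean age.  The study
    # instead trains a ResNeXt CNN on 224x224 VR reconstructions here.
    pred = np.full(len(val_idx), ages[train_idx].mean())
    maes.append(np.abs(pred - ages[val_idx]).mean())  # mean absolute error

print(f"cross-validated MAE: {np.mean(maes):.2f} years")
```

A mean predictor on uniformly distributed ages gives an MAE around 12.5 years, a useful floor against which the reported DL MAEs of roughly 4 years can be judged.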

Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning

Procedia PDF Downloads 67
3742 Analysis and Detection of Facial Expressions in Autism Spectrum Disorder People Using Machine Learning

Authors: Muhammad Maisam Abbas, Salman Tariq, Usama Riaz, Muhammad Tanveer, Humaira Abdul Ghafoor

Abstract:

Autism Spectrum Disorder (ASD) is a developmental disorder that impairs an individual’s ability to communicate and interact. Affected individuals find it difficult to read facial expressions while communicating or interacting. Facial Expression Recognition (FER) is a method of classifying basic human expressions, i.e., happiness, fear, surprise, sadness, disgust, neutral, and anger, from static and dynamic sources. This paper conducts a comprehensive comparison and proposes an optimal method for a continuing research project: a system that can assist people who have ASD in recognizing facial expressions. The comparison covers three supervised learning algorithms: EigenFace, FisherFace, and LBPH. The JAFFE, CK+, and TFEID (I&II) datasets were used to train and test the algorithms. The results were then evaluated based on variance, standard deviation, and accuracy. The experiments showed that FisherFace has the highest accuracy on all datasets and is considered the best algorithm to implement in our system.
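The FisherFace method the experiments favour is, in essence, PCA for dimensionality reduction followed by Fisher’s linear discriminant analysis. A hedged sketch of that pipeline with scikit-learn (not the OpenCV `cv2.face` implementation the study may have used), on synthetic "face" vectors rather than the JAFFE/CK+/TFEID images:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Toy stand-in for an FER dataset: 8x8 "face" images flattened to 64-d
# vectors, three expression classes, each a fixed template plus noise.
n_per_class, n_classes, d = 30, 3, 64
templates = rng.normal(0, 1, (n_classes, d))
X = np.vstack([t + rng.normal(0, 0.3, (n_per_class, d)) for t in templates])
y = np.repeat(np.arange(n_classes), n_per_class)

# Fisherfaces in essence: PCA reduces dimensionality, then LDA finds the
# directions that best separate the expression classes.
fisherface = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
fisherface.fit(X, y)
print(f"training accuracy: {fisherface.score(X, y):.2f}")
```

The design choice behind Fisherfaces is that PCA alone (EigenFace) maximizes variance regardless of class, whereas adding LDA optimizes for between-class separation, which is consistent with FisherFace winning in the paper's comparison.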

Keywords: autism spectrum disorder, ASD, EigenFace, facial expression recognition, FisherFace, local binary pattern histogram, LBPH

Procedia PDF Downloads 167
3741 Using Deep Learning in Lyme Disease Diagnosis

Authors: Teja Koduru

Abstract:

Untreated Lyme disease can lead to neurological, cardiac, and dermatological complications. Rapid recognition of the erythema migrans (EM) rash, a characteristic symptom of Lyme disease, is therefore crucial to early diagnosis and treatment. In this study, we use the deep learning frameworks TensorFlow and Keras to create deep convolutional neural networks (DCNNs) that detect acute Lyme disease from images of erythema migrans. The study uses a custom database of EM images of varying quality to train a DCNN capable of classifying images as EM rashes vs. non-EM rashes. Images from publicly available sources were mined to create an initial database. Machine-based removal of duplicate images was then performed, followed by a thorough examination of all images by a clinician. The resulting database was combined with images of confounding rashes and normal skin, for a total of 683 images. This database was then used to train a DCNN with an accuracy of 93% when classifying rash images as EM vs. non-EM. Finally, the model was deployed as a web and mobile application to allow rapid identification of EM rashes by both patients and clinicians. This tool could be used for patient prescreening prior to treatment and could lower the mortality rate from Lyme disease.

Keywords: Lyme, untreated Lyme, erythema migrans rash, EM rash

Procedia PDF Downloads 233
3740 Dissolved Black Carbon Accelerates the Photo-Degradation of Polystyrene Microplastics

Authors: Qin Ou, Yanghui Xu, Xintu Wang, Kim Maren Lompe, Gang Liu, Jan Peter Van Der Hoek

Abstract:

Microplastics (MPs) undergo photooxidation under ultraviolet (UV) exposure, which determines their transformation and fate in the environment. Dissolved organic matter (DOM) can interact with MPs and participate in their photo-degradation. As an important DOM component, dissolved black carbon (DBC), widely distributed in aquatic environments, can accelerate or inhibit the sunlight-driven photo-transformation of environmental pollutants. However, the role and underlying mechanism of DBC in the photooxidation of MPs are not clear. Herein, DBC (< 0.45 µm) was extracted from wood biochar and fractionated by molecular weight (i.e., <3 kDa, 3 kDa−30 kDa, and 30 kDa−0.45 µm). The effects of DBC chemical composition (i.e., molecular weight and chemical structure) on the DBC-mediated photo-transformation of polystyrene (PS) MPs were investigated. The results showed that DBC initially inhibited the photo-degradation of MPs due to light shielding: under UV exposure for 6−24 h, the presence of 5 mg/L DBC decreased the carbonyl index of the MPs compared to the control. This inhibitory effect decreased with increasing irradiation time. Notably, DBC initially decreased but then increased the hydroxyl index with aging time, suggesting that the role of DBC may shift from inhibition to acceleration. Among the DBC fractions, the smallest (<3 kDa) significantly accelerated the photooxidation of PS MPs, since it acted as a generator of reactive oxygen species (ROS), especially in promoting the production of ¹O₂, ³DBC*, and •OH. With increasing molecular weight, the accelerating effect of DBC on the degradation of MPs decreased, owing to increased light shielding and a possible decrease in photosensitization ability.
This study thoroughly investigated the critical role of DBC chemical composition in the photooxidation process, which helps to assess the duration of aging and transformation of MPs during long-term weathering in natural waters.

Keywords: microplastics, photo-degradation, dissolved black carbon, molecular weight, photosensitization

Procedia PDF Downloads 75
3739 The Role of Organizational Identity in Disaster Response, Recovery and Prevention: A Case Study of an Italian Multi-Utility Company

Authors: Shanshan Zhou, Massimo Battaglia

Abstract:

Identity plays a critical role when an organization faces disasters. Individuals reflect on their working identities and identify themselves with the group and the organization, which facilitates collective sensemaking under crisis situations and enables coordinated actions to respond to and recover from disasters. In addition, an organization’s identity links it to its regional community, which fosters the mobilization of resources and contributes to rapid recovery. However, identity is also problematic for disaster prevention because of its persistence: an organization’s ego-defense system prohibits the rethinking of its identity, and a rigid identity obstructs disaster prevention. This research aims to tackle the ‘problem’ of identity by studying in depth the case of an Italian multi-utility company that experienced the 2012 Northern Italy earthquakes. Drawing on data from 11 interviews with top managers and key players in the local community, as well as archival materials, we find that the earthquakes triggered a rethinking of the organization’s identity, which was reinforced afterward. This research highlights the importance of identity in disaster response and recovery. More importantly, it explores a solution for overcoming the barrier of ego-defense: transforming the organization into a learning organization that constantly rethinks its identity.

Keywords: community identity, disaster, identity, organizational learning

Procedia PDF Downloads 722
3738 Single Imputation for Audiograms

Authors: Sarah Beaver, Renee Bryce

Abstract:

Audiograms detect hearing impairment, but missing values pose problems. This work explores imputation in an attempt to improve accuracy. It implements Linear Regression, Lasso, Linear Support Vector Regression, Bayesian Ridge, K Nearest Neighbors (KNN), and Random Forest machine learning techniques to impute audiogram frequencies ranging from 125 Hz to 8000 Hz. The data contain patients who had, or were candidates for, cochlear implants. Accuracy is compared across two different nested cross-validation k values. Over 4000 audiograms from 800 unique patients were used. Additionally, training on combined left- and right-ear audiograms is compared with training on single-ear audiograms. The best Random Forest models achieve root mean square error (RMSE) values ranging from 4.74 to 6.37 and R² values from 0.91 to 0.96; the best KNN models achieve RMSE values from 5.00 to 7.72 and R² values from 0.89 to 0.95. Overall, the best imputation models reach R² between 0.89 and 0.96 with RMSE values of less than 8 dB. We also show that classification predictive models perform about two percent better with our best imputation models than with constant imputation.
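A minimal sketch of one such imputation, assuming synthetic audiograms rather than the study's patient data: a Random Forest regressor predicts the threshold at one frequency from the other six, evaluated with RMSE and R².

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic audiograms: thresholds (dB HL) at 7 frequencies, 125 Hz-8000 Hz.
# Neighbouring frequencies are correlated, as in real hearing-loss profiles.
n = 800
base = rng.uniform(10, 80, (n, 1))    # overall hearing level
slope = rng.normal(0, 5, (n, 1))      # high-frequency tilt
freq_idx = np.arange(7).reshape(1, -1)
audiograms = base + slope * freq_idx + rng.normal(0, 3, (n, 7))

# Impute the threshold at one frequency (column 5) from the other six.
X = np.delete(audiograms, 5, axis=1)
y = audiograms[:, 5]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"RMSE: {rmse:.2f} dB, R^2: {r2_score(y_te, pred):.2f}")
```

Because adjacent frequencies carry most of the predictive signal, the imputed threshold lands within a few decibels, mirroring the sub-8 dB RMSE regime the paper reports for its best models.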

Keywords: machine learning, audiograms, data imputations, single imputations

Procedia PDF Downloads 77
3737 Application of Environmental Justice Concept in Urban Planning, The Peri-Urban Environment of Tehran as the Case Study

Authors: Zahra Khodaee

Abstract:

The Environmental Justice (EJ) concept comprises multifaceted movements, community struggles, and discourses in contemporary societies that seek to reduce environmental risks, increase environmental protections, and generally reduce the environmental inequalities suffered by minority and poor communities. The term incorporates ‘environmental racism’ and ‘environmental classism’ and captures the idea that different racial and socioeconomic groups experience differential access to environmental quality. This article explores environmental justice as an urban phenomenon in urban planning and applies it to the peri-urban environment of a metropolis. Tehran’s peri-urban environments, which result from the meeting of the city, village, and nature systems (the ‘city-village junction’), have gradually faced effects such as accelerated environmental decline, changes without a land-use plan, and severe service deficiencies. These problems are instances of environmental injustice, which oblige planners to address them and to apply appropriate strategies and policies, drawing on the theories, techniques, and methods of environmental justice. To this end, we first define environmental justice and determine environmental justice indices for analysing environmental injustice in the case study. We then introduce criteria for selecting the case study at both the micro and macro levels. Qiyamdasht town, a peri-urban settlement of the Tehran metropolis, is chosen and examined; questionnaire analysis using SPSS software shows the existence of environmental injustice there. Finally, the AIDA technique is used to design a strategic plan that reduces environmental injustice in the case study by identifying the preferable scenario for use in policy and decision making.

Keywords: environmental justice, metropolis of Tehran, Qiyamdasht peri-urban settlement, analysis of interconnected decision areas (AIDA)

Procedia PDF Downloads 481
3736 Exploring the Applications of Neural Networks in the Adaptive Learning Environment

Authors: Baladitya Swaika, Rahul Khatry

Abstract:

Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), in which item selection uses statistical methods such as maximum-information selection or selection from the posterior, and ability estimation uses maximum-likelihood (ML) or maximum a posteriori (MAP) estimators. This study aims at combining classical and Bayesian approaches to IRT to create a dataset that is then fed to a neural network, automating the process of ability estimation, and at comparing the result to traditional CAT models designed using IRT. The study uses Python as the base coding language, PyMC for statistical modelling of the IRT, and scikit-learn for the neural network implementation. On creating the models and comparing them, it is found that the neural-network-based model performs 7-10% worse than the IRT model for score estimation. Although it performs worse than the IRT model, the neural network model can be used beneficially in back-ends to reduce time complexity: the IRT model has to re-calculate the ability every time it receives a request, whereas a trained neural network regressor can produce a prediction in a single step. This study also proposes a new kind of framework whereby the neural network model could incorporate feature sets beyond the normal IRT feature set and use a neural network’s capacity for learning unknown functions to give rise to better CAT models. Categorical features such as test type could be learnt and incorporated into IRT functions with the help of techniques like logistic regression, yielding models that may not be trivial to express as equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments.
This study gives a brief overview of how neural networks can be used in adaptive testing, not only to reduce time complexity but also to incorporate newer and better datasets, eventually leading to higher-quality testing.
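The per-request ability estimation that the paper's neural network is trained to replace can be sketched for the two-parameter logistic (2PL) IRT model. The item bank and responses below are simulated, and grid search stands in for a proper optimiser:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, a, b):
    """2PL IRT: probability of a correct response given ability theta,
    item discrimination a and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# A hypothetical 20-item bank (parameters simulated, not calibrated).
a = rng.uniform(0.8, 2.0, 20)   # discriminations
b = rng.normal(0.0, 1.0, 20)    # difficulties

def ml_ability(responses, a, b, grid=np.linspace(-4, 4, 801)):
    """Maximum-likelihood ability estimate by grid search; this per-request
    computation is what a trained neural network regressor would replace
    with a single forward pass."""
    P = p_correct(grid[:, None], a, b)  # shape: grid points x items
    loglik = (responses * np.log(P) + (1 - responses) * np.log(1 - P)).sum(axis=1)
    return grid[np.argmax(loglik)]

# Simulate one examinee and recover their ability from the response pattern.
true_theta = 1.0
responses = (rng.random(20) < p_correct(true_theta, a, b)).astype(float)
print(f"ML ability estimate: {ml_ability(responses, a, b):.2f}")
```

A regressor trained on many (response pattern, ML estimate) pairs amortizes this likelihood maximization into one forward pass, which is the back-end speed-up the study describes.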

Keywords: computer adaptive tests, item response theory, machine learning, neural networks

Procedia PDF Downloads 169