Search results for: evolving learning
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7700

3920 Integrating Wound Location Data with Deep Learning for Improved Wound Classification

Authors: Mouli Banga, Chaya Ravindra

Abstract:

Wound classification is a crucial step in wound diagnosis. An effective classifier can aid wound specialists in identifying wound types with reduced financial and time investments, facilitating the determination of optimal treatment procedures. This study presents a deep neural network-based classifier that leverages wound images and their corresponding locations to categorize wounds into various classes, such as diabetic, pressure, surgical, and venous ulcers. By incorporating a developed body map, the process of tagging wound locations is significantly enhanced, providing healthcare specialists with a more efficient tool for wound analysis. We conducted a comparative analysis between two prominent convolutional neural network models, ResNet50 and MobileNetV2, utilizing a dataset of 730 images. Our findings reveal that ResNet50 outperforms MobileNetV2, achieving an accuracy of approximately 90%, compared to MobileNetV2’s 83%. This disparity highlights the superior capability of ResNet50 in the context of this dataset. The results underscore the potential of integrating deep learning with spatial data to improve the precision and efficiency of wound diagnosis, ultimately contributing to better patient outcomes and reducing healthcare costs.
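
A minimal Keras sketch of the kind of two-branch (image plus location) classifier the abstract describes; the layer sizes, the one-hot body-map encoding, and the four-class output are assumptions, not the authors' exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_classifier(backbone_name="resnet50", n_locations=20, n_classes=4):
    # Pretrained CNN backbone, pooled to one feature vector per image
    if backbone_name == "resnet50":
        backbone = tf.keras.applications.ResNet50(
            include_top=False, pooling="avg", input_shape=(224, 224, 3))
    else:
        backbone = tf.keras.applications.MobileNetV2(
            include_top=False, pooling="avg", input_shape=(224, 224, 3))
    img_in = layers.Input((224, 224, 3))
    loc_in = layers.Input((n_locations,))  # one-hot body-map location tag
    x = layers.concatenate(
        [backbone(img_in), layers.Dense(32, activation="relu")(loc_in)])
    out = layers.Dense(n_classes, activation="softmax")(x)  # ulcer classes
    return Model([img_in, loc_in], out)

# Building both variants enables the ResNet50 vs. MobileNetV2 comparison
model = build_classifier("resnet50")
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```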

Keywords: wound classification, MobileNetV2, ResNet50, multimodal

Procedia PDF Downloads 32
3919 A Preparatory Method for Building Construction Implemented in a Case Study in Brazil

Authors: Aline Valverde Arroteia, Tatiana Gondim do Amaral, Silvio Burrattino Melhado

Abstract:

During the last twenty years, the construction field in Brazil has evolved significantly in response to its market growth and competitiveness. However, this evolution has faced many obstacles, such as cultural barriers and the lack of effort to achieve quality at the construction site. At the same time, much of the information generated during the design and construction phases is lost due to the lack of effective coordination of these activities. Facing this problem, the aim of this research was to implement a French method known by its Portuguese acronym PEO (preparation for building construction), seeking to understand the design management process and its interface with the building construction phase. The research method applied was qualitative, carried out through two case studies in the city of Goiânia, Goiás, Brazil. The research was divided into two stages: a pilot study at Company A and the implementation of PEO at Company B. After the implementation, the results demonstrated the PEO method's effectiveness and feasibility as a booster of quality improvement in design management. The analysis showed that the method aims to improve the design and to reduce the failures, errors, and rework commonly found in the production of buildings. Therefore, it can be concluded that PEO is feasible to apply to real estate and building companies. However, companies need to believe in the contribution they can make to the discovery of design failures in conjunction with other stakeholders, forming a construction team. The results of PEO can be maximized by adopting the principles of simultaneous engineering and inserting new computer technologies that use a three-dimensional model of the building within a BIM process.

Keywords: communication, design and construction interface management, preparation for building construction (PEO), proactive coordination (CPA)

Procedia PDF Downloads 162
3918 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks

Authors: Mst Shapna Akter, Hossain Shahriar

Abstract:

One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited, leading to system compromise, data leakage, or denial of service. Large amounts of open-source C and C++ code are now available, making it possible to build a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies. Moreover, we retain the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model which can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as the features, but require higher execution time, as the word embedding algorithm adds complexity to the overall system.
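
As an illustration of the embed-then-classify pipeline described above, here is a hedged Keras sketch of a BiLSTM classifier over embedded source-code tokens; the vocabulary size, sequence length, and embedding dimension are placeholder assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, Sequential

VOCAB, SEQ_LEN, EMB_DIM = 30000, 500, 100  # assumed sizes

model = Sequential([
    # In practice the embedding matrix would be initialized from GloVe or
    # fastText vectors trained on the minimal intermediate representation.
    layers.Embedding(VOCAB, EMB_DIM, input_shape=(SEQ_LEN,)),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # vulnerable vs. non-vulnerable
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
```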

Keywords: cyber security, vulnerability detection, neural networks, feature extraction

Procedia PDF Downloads 89
3917 The Utilization of Tea Extract within the Realm of the Food Industry

Authors: Raana Babadi Fathipour

Abstract:

Tea, a beverage widely cherished across the globe, has captured the interest of scholars with its recent acknowledgement for possessing noteworthy health advantages. Of particular significance is its proven ability to ward off ailments such as cancer and cardiovascular afflictions. Moreover, within the realm of culinary creations, lipid oxidation poses a significant challenge for food product development. In light of these concerns, the present discourse turns its attention towards exploring diverse methodologies employed in extracting polyphenols from various types of tea leaves and examining their utility within the vast landscape of the ever-evolving food industry. Based on the discoveries unearthed in this comprehensive investigation, it has been determined that the fundamental constituents of tea are polyphenols possessed of intrinsic health-enhancing properties. These include an assortment of catechins, namely epicatechin, epigallocatechin, epicatechin gallate, and epigallocatechin gallate. Moreover, gallic acid, flavonoids, flavonols, and theaflavins have also been detected within this aromatic beverage. Of the myriad components examined rigorously in this study's analysis, catechin emerges as particularly beneficial. Multiple techniques have emerged over time to successfully extract key compounds from tea plants, including solvent-based extraction methodologies, microwave-assisted water extraction approaches, and ultrasound-assisted extraction techniques. In particular, consideration is given to the microwave-assisted water extraction method as a viable scheme which effectively procures valuable polyphenols from tea extracts. This methodology appears adaptable for implementation within sectors such as dairy production, along with the meat and oil industries alike.

Keywords: camellia sinensis, extraction, food application, shelf life, tea

Procedia PDF Downloads 70
3916 Approach to Establish Logistics as a Central Scientific Discipline of Tomorrow's Industry

Authors: Johannes Dregger, Michael Schmidt, Christian Prasse, Michael ten Hompel

Abstract:

Most of today's companies face an increasing need to operate efficiently. Driven by global trends like shorter product cycles, mass customization, and the rising speed of delivery, manufacturing value chains are becoming more and more distributed. Manufacturing processes are becoming highly integrated, e.g. 3D printing. All these changes are affecting companies' organization. They are leading towards individual, small-scale, and ad-hoc logistics processes and structures and, finally, towards a significant increase in the importance of logistics itself, since traditional value chains transform into agile value networks. In the past, logistics has followed manufacturing, but in the future industry, this role allocation might change. With this increase in the logistics practice of companies and businesses, the relevance of logistics research as the methodological foundation of logistics networks and processes is gaining importance. Logistics research is evolving into a central and highly interdisciplinary science for the future industry. Using the example of Germany, this paper discusses ways to establish logistics as a central scientific discipline of the future industry. About three million people work in the logistics sector in Germany; only the automotive and retail industries have more employees. Even though there are numerous logistics degree programs at more than 100 institutions of higher education, a common understanding of logistics as a research discipline is missing. In this paper, an innovative approach is presented, including: identified perspectives on logistics, such as process orientation, IT orientation, or employee orientation; the scientific disciplines relevant to logistics science; a concept for interdisciplinary research approaches to unify the perspectives of the different scientific disciplines on logistics; and the methodological base of logistics science.

Keywords: logistics, logistics science, logistics management, future challenges

Procedia PDF Downloads 314
3915 Machine Learning Prediction of Diabetes Prevalence in the U.S. Using Demographic, Physical, and Lifestyle Indicators: A Study Based on NHANES 2009-2018

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

The aim of this study was to develop a machine learning model to predict diabetes (DM) prevalence in the U.S. population using demographic characteristics, physical indicators, and lifestyle habits, and to analyze how these factors contribute to the likelihood of diabetes. We analyzed data from 23,546 participants aged 20 and older, who were non-pregnant, from the 2009-2018 National Health and Nutrition Examination Survey (NHANES). The dataset included key demographic (age, sex, ethnicity), physical (BMI, leg length, total cholesterol [TCHOL], fasting plasma glucose), and lifestyle indicators (smoking habits). A weighted sample was used to account for NHANES survey design features such as stratification and clustering. A classification machine learning model was trained to predict diabetes status. The target variable was binary (diabetes or non-diabetes) based on fasting plasma glucose measurements. The following models were evaluated: Logistic Regression (baseline), Random Forest Classifier, Gradient Boosting Machine (GBM), and Support Vector Machine (SVM). Model performance was assessed using accuracy, F1-score, AUC-ROC, and precision-recall metrics. Feature importance was analyzed using SHAP values to interpret the contributions of variables such as age, BMI, ethnicity, and smoking status. The Gradient Boosting Machine (GBM) model outperformed the other classifiers with an AUC-ROC score of 0.85. Feature importance analysis revealed the following key predictors. Age: the most significant predictor, with diabetes prevalence increasing with age, peaking around the 60s for males and 70s for females. BMI: higher BMI was strongly associated with a higher risk of diabetes. Ethnicity: Black participants had the highest predicted prevalence of diabetes (14.6%), followed by Mexican-Americans (13.5%) and Whites (10.6%). TCHOL: diabetics had lower total cholesterol levels, particularly among White participants (mean decline of 23.6 mg/dL). Smoking: smoking showed a slight increase in diabetes risk among Whites (0.2%) but had a limited effect in other ethnic groups. Using machine learning models, we identified key demographic, physical, and lifestyle predictors of diabetes in the U.S. population. The results confirm that diabetes prevalence varies significantly across age, BMI, and ethnic groups, with lifestyle factors such as smoking contributing differently by ethnicity. These findings provide a basis for more targeted public health interventions and resource allocation for diabetes management.
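
A minimal scikit-learn sketch of a survey-weighted GBM of the kind described; the file name and column names are hypothetical stand-ins for a preprocessed NHANES extract with numerically encoded categories:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# 'nhanes.csv' is a hypothetical preprocessed extract of NHANES 2009-2018;
# categorical columns (sex, ethnicity, smoking) assumed numerically encoded
df = pd.read_csv("nhanes.csv")
features = ["age", "sex", "ethnicity", "bmi", "leg_length", "tchol", "smoking"]

X_tr, X_te, y_tr, y_te, w_tr, w_te = train_test_split(
    df[features], df["diabetes"], df["survey_weight"],
    test_size=0.2, random_state=0)

gbm = GradientBoostingClassifier(random_state=0)
gbm.fit(X_tr, y_tr, sample_weight=w_tr)  # weights approximate the survey design
print("AUC-ROC:", roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1],
                                sample_weight=w_te))
```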

Keywords: diabetes, NHANES, random forest, gradient boosting machine, support vector machine

Procedia PDF Downloads 8
3914 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction

Authors: Talal Alsulaiman, Khaldoun Khashanah

Abstract:

In this paper, we provide a literature survey on artificial stock markets (ASMs). The paper begins by exploring the complexity of the stock market and the need for ASMs. An ASM aims to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level). The variety of patterns at the macro level is a function of the ASM's complexity. The financial market system is a complex system in which the relationship between the micro and macro levels cannot be captured analytically. Computational approaches, such as simulation, are expected to capture this connection. Agent-based simulation is the simulation technique commonly used to build ASMs. The paper proceeds by discussing the components of an ASM. We consider the role of behavioral finance (BF), alongside the traditional risk-aversion assumption, in the construction of agents' attributes. The influence of social networks on the development of agents' interactions is also addressed; network topologies such as small-world, distance-based, and scale-free networks may be utilized to outline economic collaborations. In addition, the primary methods for developing agents' learning and adaptive abilities are summarized. These incorporate approaches such as genetic algorithms, genetic programming, artificial neural networks, and reinforcement learning. We also discuss the most common statistical properties (the stylized facts) of stocks that are used for the calibration and validation of ASMs. Moreover, we review the major related previous studies and categorize the approaches they utilized. Finally, research directions and potential research questions are discussed. The research directions of ASMs may focus on the macro level, by analyzing market dynamics, or on the micro level, by investigating the wealth distributions of the agents.

Keywords: artificial stock markets, market dynamics, bounded rationality, agent based simulation, learning, interaction, social networks

Procedia PDF Downloads 354
3913 Education for Sustainability Using PBL on an Engineering Course at the National University of Colombia

Authors: Hernán G. Cortés-Mora, José I. Peña-Reyes, Alfonso Herrera-Jiménez

Abstract:

This article describes the implementation experience of Project-Based Learning (PBL) in an engineering course at the Universidad Nacional de Colombia, with the aim of strengthening the student skills necessary for the exercise of their profession under a sustainability framework. Firstly, we present a literature review of the education for sustainability field, emphasizing the skills and knowledge areas required for its development, as well as the commitment of the Faculty of Engineering of the Universidad Nacional de Colombia, and of other engineering faculties in the country, to education for sustainability. The article covers the general aspects of the course, describes how student teams were formed, and recounts their experience during the first semester of 2017. During this period, two groups of students decided to develop their course project around a problem faced by a Non-Governmental Organization (NGO) that works with head-of-household mothers in a low-income neighborhood in Bogotá (Colombia). Subsequently, we show how sustainability is involved in the course, how tools are provided to the students, and how activities are developed to strengthen their abilities, allowing them to incorporate sustainability into their projects while also working on the methodology used to develop those projects. Finally, we present the results obtained by the students, who delivered the prototypes of their projects to the community they were working with, and the conclusions they reached regarding the course experience.

Keywords: sustainability, project-based learning, engineering education, higher education for sustainability

Procedia PDF Downloads 352
3912 Role of IT Systems in Corporate Recruitment: Challenges and Constraints

Authors: Brahim Bellali, Fatima Bellali

Abstract:

The integration of information technology systems (ITS) into a company's human resources processes seems to be the appropriate solution to the problem of evolving and adapting its human resources management practices in order to be both more strategic and more efficient in terms of costs and service quality. In this context, the aim of this work is to study the impact of information technology systems (ITS) on the recruitment process. In this study, we targeted candidates who had been recruited using IT tools. The target population consists of 34 candidates based in Casablanca, Morocco. To collect the data, a questionnaire was drawn up. The survey is based on a data sheet and a questionnaire divided into several sections to make it more structured and comprehensible. The results show that the majority of respondents say that companies are making greater use of online CV libraries and social networks as digital solutions during the recruitment process. The results also show that 50% of candidates say that the use of digital tools by companies would not slow them down when applying for a job and that these IT tools improve manual recruitment processes, while 44.1% think that they facilitate recruitment without any human intervention. The majority of respondents (52.9%) think that social networks are the digital solutions most often used by recruiters in the sourcing phase. The constraints of digital recruitment encountered are the dehumanization of human resources (44.1%) and the limited interaction during remote interviews (44.1%), which leaves no room for informal exchanges. Digital recruitment can be a highly effective strategy for finding qualified candidates in a variety of fields. A few recommendations for optimizing the digital recruitment process are: (1) use online recruitment platforms such as LinkedIn, Twitter, and Facebook; (2) use applicant tracking systems (ATS); and (3) develop a content marketing strategy.

Keywords: IT systems, recruitment, challenges, constraints

Procedia PDF Downloads 34
3911 Clustering and Modelling Electricity Conductors from 3D Point Clouds in Complex Real-World Environments

Authors: Rahul Paul, Peter Mctaggart, Luke Skinner

Abstract:

Maintaining public safety and network reliability are the core objectives of all electricity distributors globally. For many electricity distributors, managing vegetation clearances from their above-ground assets (poles and conductors) is the most important and costly risk mitigation control employed to meet these objectives. Light Detection and Ranging (LiDAR) is widely used by utilities as a cost-effective method to inspect their spatially distributed assets at scale, often captured using high-powered LiDAR scanners attached to fixed-wing or rotary aircraft. The resulting 3D point cloud model is used by these utilities to perform engineering-grade measurements that guide the prioritisation of vegetation cutting programs. Advances in computer vision and machine-learning approaches are increasingly applied to increase automation and reduce inspection costs and time; however, real-world LiDAR capture variables (e.g., aircraft speed and height) create complexity, noise, and missing data, reducing the effectiveness of these approaches. This paper proposes a method for identifying each conductor from LiDAR data via clustering methods that can precisely reconstruct conductors in complex real-world configurations in the presence of high levels of noise. It proposes 3D catenary models for individual clusters, fitted to the captured LiDAR data points using a least-squares method. An iterative learning process is used to identify potential conductor models between pole pairs. The proposed method identifies the optimum parameters of the catenary function and then fits the LiDAR points to reconstruct the conductors.
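
For one clustered conductor, a least-squares catenary fit of the kind described can be sketched as follows; this assumes the cluster's points have already been projected onto the vertical plane of the span (s = horizontal distance, z = height), and the sample points are placeholders:

```python
import numpy as np
from scipy.optimize import curve_fit

def catenary(s, a, s0, z0):
    # z(s) = z0 + a*(cosh((s - s0)/a) - 1): a sets the sag, (s0, z0) the low point
    return z0 + a * (np.cosh((s - s0) / a) - 1.0)

# Placeholder LiDAR points along one span (metres)
s = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
z = np.array([15.0, 13.1, 12.5, 13.0, 14.9])

(a, s0, z0), _ = curve_fit(catenary, s, z, p0=(100.0, s.mean(), z.min()))
print(f"catenary parameter a = {a:.1f} m, lowest point at s = {s0:.1f} m")
```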

Keywords: point cloud, LiDAR data, machine learning, computer vision, catenary curve, vegetation management, utility industry

Procedia PDF Downloads 99
3910 The Art and Science of Trauma-Informed Psychotherapy: Guidelines for Inter-Disciplinary Clinicians

Authors: Daphne Alroy-Thiberge

Abstract:

Trauma-impacted individuals present unique treatment challenges that include high reactivity, hyper- and hypo-arousal, poor adherence to therapy, as well as powerful transference and counter-transference experiences in therapy. This work provides an overview of the clinical tenets most often encountered in trauma-impacted individuals. Further, it provides readily applicable clinical techniques to optimize therapeutic rapport and facilitate accelerated positive mental health outcomes. Finally, integrated neuroscience and clinical evidence-based data are discussed to shed new light on crisis states in trauma-impacted individuals. This knowledge is utilized to provide effective and concrete interventions towards rapid and successful de-escalation of the impacted individual. A highly interactive, adult-learning-principles-based modality is utilized to provide an organic learning experience for participants. The information and techniques learned aim to increase clinical effectiveness, reduce staff injuries and burnout, and significantly enhance positive mental health outcomes and self-determination for the trauma-impacted individuals treated.

Keywords: clinical competencies, crisis interventions, psychotherapy techniques, trauma informed care

Procedia PDF Downloads 109
3909 Improving the Students’ Writing Skill by Using Brainstorming Technique

Authors: M. Z. Abdul Rofiq Badril Rizal

Abstract:

This research aimed to determine the improvement of students' English writing skill through the brainstorming technique. The technique helps students overcome difficulties in generating ideas, leads them to arrange their ideas well, and keeps them focused on the topic developed in writing. The research method used was classroom action research. The data sources were an English teacher, who acted as an observer, and the 35 students of class X.MIA5. Test results and observations were collected as the data in this research. Based on the results of cycle one, the percentage of students who reached the minimum accomplishment criteria (MAC) was 76.31%. This showed that a second cycle was necessary because the aim of the research had not been accomplished: not all of the students' scores had reached the MAC. After the weaknesses were addressed in cycle two, the teaching and learning process ran better. In the test conducted at the end of cycle two, all of the students reached the minimum score of 76 or above, based on the minimum accomplishment criteria. This means the research was successful, and the percentage of students who reached the MAC was 100%. Therefore, the writer concludes that the brainstorming technique is able to improve the students' English writing skill at the tenth grade of SMAN 2 Jember.

Keywords: brainstorming technique, improving, writing skill, knowledge and innovation engineering

Procedia PDF Downloads 367
3908 Autism Spectrum Disorder Classification Algorithm Using Multimodal Data Based on Graph Convolutional Network

Authors: Yuntao Liu, Lei Wang, Haoran Xia

Abstract:

Machine learning has shown extensive applications in the development of classification models for autism spectrum disorder (ASD) using neuroimaging data. This paper proposes a fusion multi-modal classification network based on a graph neural network. First, the brain is segmented into 116 regions of interest (ROIs) using a medical segmentation template (AAL, Anatomical Automatic Labeling). The image features of sMRI and the signal features of fMRI are extracted, and these build the node and edge embedding representations of the brain graph. Then, we construct a dynamically updated brain-graph neural network and propose a method based on a dynamic adjacency-matrix update mechanism and a learnable graph to further improve the accuracy of autism diagnosis and recognition. Based on the Autism Brain Imaging Data Exchange I (ABIDE I) dataset, we reached a prediction accuracy of 74% between ASD and typically developing (TD) subjects. Besides, to study the biomarkers that can help doctors analyze the disease and to improve interpretability, we examined the extracted features by selecting the ROIs with the five largest and five smallest weights. This work provides a meaningful way for brain disorder identification.
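
To make the graph-network machinery concrete, here is a toy NumPy illustration of a single graph-convolution propagation step over a 116-ROI brain graph; the random matrices stand in for the sMRI/fMRI-derived node features, the functional-connectivity adjacency, and the learnable weights:

```python
import numpy as np

n_roi, f_in, f_out = 116, 32, 16
A = np.random.rand(n_roi, n_roi)
A = (A + A.T) / 2                      # symmetric connectivity (placeholder)
A_hat = A + np.eye(n_roi)              # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))  # degree normalization
H = np.random.rand(n_roi, f_in)        # node (ROI) features
W = np.random.rand(f_in, f_out)        # learnable layer weights

# One propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)
H_next = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0)
```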

Keywords: autism spectrum disorder, brain map, supervised machine learning, graph network, multimodal data, model interpretability

Procedia PDF Downloads 67
3907 A Shared Space: A Pioneering Approach to Interprofessional Education in New Zealand

Authors: Maria L. Ulloa, Ruth M. Crawford, Stephanie Kelly, Joey Domdom

Abstract:

In recent decades, health and social service delivery have become more collaborative and interdisciplinary. Emerging trends suggest the need for an integrative and interprofessional approach to meet the challenges faced by professionals navigating the complexities of health and social service practice environments. Terms such as multidisciplinary practice, interprofessional collaboration, interprofessional education, and transprofessional practice have become the common language used across a range of social service and health providers in western democratic systems. In Aotearoa New Zealand, one example of an interprofessional collaborative approach to curriculum design and delivery in health and social service is the development of an innovative Master of Professional Practice programme. This qualification is the result of a strategic partnership between two tertiary institutions: Whitireia New Zealand (NZ) and the Wellington Institute of Technology (Weltec) in Wellington. The Master of Professional Practice programme was designed and delivered from the perspective of a collaborative, interprofessional and relational approach. Teachers and students in the programme come from a diverse range of cultural, professional and personal backgrounds and are engaged in courses using a blended learning approach that incorporates the values and pedagogies of interprofessional education. Students are actively engaged in professional practice while undertaking the programme. This presentation describes the themes of exploratory qualitative formative observations of engagement in class and online, of student assessments and student research projects, as well as of qualitative interviews with the programme teaching staff. These formative findings reveal the development of critical practice skills around the common themes of the programme: research and evidence-based practice, education, leadership, working with diversity, and advancing critical reflection on professional identities and interprofessional practice. This presentation will provide evidence of enhanced learning experiences in higher education and learning in multi-disciplinary contexts.

Keywords: diversity, exploratory research, interprofessional education, professional identity

Procedia PDF Downloads 302
3906 Brand Preferences in Saudi Arabia: Explorative Study in Jeddah

Authors: Badr Alharbi

Abstract:

There is significant debate on the evolution of retail marketing as an economy matures. In penetrating new markets, global brands are efficient at establishing a presence and replacing less effective competitors through superior advertising, pricing, and sometimes quality. However, national brands adapt over time and may either partner with global brands in distribution and services or compete directly, and more efficiently, in the new, open market. This explorative study investigates brand preferences in Saudi Arabia. As a conservative society that is nevertheless highly commercialised, Saudi Arabia's markets could be fragmenting, with consumer preferences and rejections based on country of origin, globalisation, or perhaps regionalisation. To investigate this, an online survey was distributed to Saudis in Jeddah to gather data on their preferences for travel, technology, clothes and accessories, eating out, vehicles, and influential brands. The results from 710 valid responses show distinct regional and national brand preferences among the young Saudi men who contributed to the survey. Apart from a preference for Saudi food providers, airline preferences were for the United Arab Emirates, holiday preferences were for Europe, study and work preferences were for the United States, hotel preferences were United States-based, car preferences were Japanese, and clothing preferences were United States-based. The results were broadly in line with international research findings; however, the study participants differed from previous Arab research findings by describing themselves as innovative in their purchase selections, rarely loyal (with the exception of Apple products), and continually seeking new brand experiences. This survey contributes to an understanding of evolving Saudi consumer preferences.

Keywords: Saudi marketing, globalisation, country of origin, brand preferences

Procedia PDF Downloads 277
3905 Effects of Allowance for Corporate Equity on the Financing Choices of Belgian Small and Medium-Sized Enterprises in a Crisis Context

Authors: O. Colot, M. Croquet, L. Cultrera, Y. Fandja Collince

Abstract:

The objective of our research is to evaluate the impact of the allowance for corporate equity (ACE) on the financial structure of Belgian SMEs in order to highlight the potential existence of fiscal leverage. To limit the biases linked to capital rationing following the financial crisis, we first compare the dynamic evolution of the financial structure of Belgian firms over the period 2006-2015, focusing on three sub-periods: 2006-2008, 2009-2012, and 2013-2015. We then give an international dimension to this comparison by including SMEs from countries adjoining Belgium (France, Germany, the Netherlands, and the United Kingdom) in which there is no ACE. This comparison allows a better understanding of the fiscal advantage linked to the ACE for firms operating in a relatively unstable economic environment following the financial crisis of 2008. This research is relevant given the economic and political context in which Belgium operates and the very uncertain future of the Belgian ACE. The originality of this research is twofold: the long study period and the consideration of the effects of the financial and economic crisis on the financing structure of Belgian SMEs. The results of this research, even though they confirm the existence of positive fiscal leverage of the tax deduction for venture capital on the financing structure of Belgian SMEs, do not allow the extent of this leverage to be clearly quantified. The comparative evolution of financing structures over the period 2006-2015 of Belgian, French, German, Dutch, and English SMEs shows a strong similarity in their overall evolution.

Keywords: allowance for corporate equity, Belgium, financial structure, small and medium sized firms

Procedia PDF Downloads 204
3904 Design and Implementation of Machine Learning Model for Short-Term Energy Forecasting in Smart Home Management System

Authors: R. Ramesh, K. K. Shivaraman

Abstract:

The main aim of this paper is to handle energy requirements in an efficient manner by merging advanced digital communication and control technologies for smart grid applications. In order to reduce users' home load during peak hours, the utility applies several incentives, such as real-time pricing, time-of-use pricing, and demand response for residential customers through smart meters. However, these methods are inconvenient in the sense that users need to respond manually to prices that vary in real time. To overcome this inconvenience, this paper proposes a convolutional neural network (CNN) with a k-means clustering machine learning model, which is able to forecast energy requirements in the short term, i.e., for the hour of the day or the day of the week. Integrating the proposed technique with home energy management based on Bluetooth Low Energy provides predicted values to users for scheduling appliances in advance. This paper describes in detail the CNN configuration and the k-means clustering algorithm for short-term energy forecasting.
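
A hedged sketch of the forecasting pipeline just described: daily load profiles are grouped with k-means, and a small 1D CNN maps a 24-hour window to the next hour's load. The window length, filter counts, cluster count, and random placeholder data are all assumptions:

```python
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans
from tensorflow.keras import layers, Sequential

profiles = np.random.rand(365, 24)  # placeholder daily load profiles (kWh/hour)

# Cluster days into usage patterns; a separate CNN could be trained per cluster
cluster_id = KMeans(n_clusters=4, n_init=10).fit_predict(profiles)

model = Sequential([
    layers.Conv1D(32, kernel_size=3, activation="relu", input_shape=(24, 1)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1),  # predicted next-hour load (kWh)
])
model.compile(optimizer="adam", loss="mse")
```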

Keywords: convolutional neural network, fuzzy logic, k-means clustering approach, smart home energy management

Procedia PDF Downloads 305
3903 Learning Chinese Suprasegmentals for a Better Communicative Performance

Authors: Qi Wang

Abstract:

Chinese has become a powerful worldwide language, and millions of learners are studying it all over the world. Chinese is a tone language with unique meaningful characters, which makes it harder for foreign learners to master. As with any foreign language, learners of Chinese will first learn the basic Chinese sound structure (the initials and finals, tones, the neutral tone, and tone sandhi). It is quite common in the subsequent studies that teachers put a lot of effort into drilling and error correction, in order to help students pronounce correctly, but ignore the training of suprasegmental features (e.g. stress, intonation). This paper analyses oral data from our graduating students (two-year program) from 2006-2013 and presents the intonation pattern of our graduates speaking Chinese as a second language: high and flat, with heavy accents, lacking lexical stress, appropriate stop endings, and intonation, which leads to misunderstanding in different real contexts of communication and in the official international Chinese tests, e.g. the HSK (Chinese Proficiency Test) and HSKK (HSK Speaking Test). This paper also demonstrates how native Chinese speakers use suprasegmental features strategically in different functions and moods (declarative, interrogative, imperative, exclamatory, and rhetorical intonations), in order to train learners to achieve a better communicative performance.

Keywords: second language learning, suprasegmental, communication, HSK (Chinese Proficiency Test)

Procedia PDF Downloads 437
3902 Service Information Integration Platform as Decision Making Tools for the Service Industry Supply Chain-Indonesia Service Integration Project

Authors: Haikal Achmad Thaha, Pujo Laksono, Dhamma Nibbana Putra

Abstract:

Customer service is one of the core interests of a company in the service sector, whether as its core business or as the service part of its operation. Most of the time, practitioners and previous research in the service industry have focused on finding the best business model solution for the service sector, usually deciding between fully in-house customer service, outsourcing, or something in between. Conventionally, making this decision is an important part of the management job, and it is a process that usually takes considerable time and staff effort, during which market conditions and overall company needs may change, causing loss of income and temporary disturbance to the company's operation. In this paper, however, we offer a new concept model to assist the decision-making process in the service industry. This model features an information platform as the central tool to integrate service industry operations. The result is a service information model that would ideally improve the response time and effectiveness of decision making. It will also help the service industry switch service solution systems quickly, through machine learning, when company growth and the required service solutions change.

Keywords: service industry, customer service, machine learning, decision making, information platform

Procedia PDF Downloads 622
3901 Prediction of Survival Rate after Gastrointestinal Surgery Based on the New Japanese Association for Acute Medicine (JAAM) Score with a Neural Network Classification Method

Authors: Ayu Nabila Kusuma Pradana, Aprinaldi Jasa Mantau, Tomohiko Akahoshi

Abstract:

Disseminated intravascular coagulation (DIC) following gastrointestinal surgery has a poor prognosis. Therefore, it is important to determine the factors that can predict the prognosis of DIC. This study will investigate the factors that may influence the outcome of DIC in patients after gastrointestinal surgery. Eighty-one patients were admitted to the intensive care unit after gastrointestinal surgery at Kyushu University Hospital from 2003 to 2021. Acute DIC scores were estimated using the new Japanese Association for Acute Medicine (JAAM) score before surgery and on postoperative days 1, 3, and 7. The acute DIC scores will be compared with the Sequential Organ Failure Assessment (SOFA) score, platelet count, lactate level, and a variety of biochemical parameters. This study applies machine learning algorithms to predict the prognosis of DIC after gastrointestinal surgery. The results of this study are expected to be used as an indicator for evaluating patient prognosis, so as to increase life expectancy and reduce mortality among DIC patients after gastrointestinal surgery.

Keywords: the survival rate, gastrointestinal surgery, JAAM score, neural network, machine learning, disseminated intravascular coagulation (DIC)

Procedia PDF Downloads 260
3900 Iranian Students’ and Teachers’ Perceptions of Effective Foreign Language Teaching

Authors: Mehrnoush Tajnia, Simin Sadeghi-Saeb

Abstract:

Students and teachers have different perceptions of the effectiveness of instruction. Comparing students' and teachers' beliefs and finding the mismatches between them can increase L2 students' satisfaction. Few studies have taken into account the beliefs of both students and teachers on different aspects of pedagogy and the effect of learners' level of education and context on effective foreign language teacher practices. Therefore, the present study was conducted to compare students' and teachers' perceptions of effective foreign language teaching. A sample of 303 learners and 54 instructors from different private language institutes and universities participated in the study. A questionnaire was developed to elicit participants' beliefs on effective foreign language teaching and learning. The analysis of the results revealed that: a) there is a significant difference between students' beliefs about effective teacher practices and teachers' beliefs; b) class level influences students' perception of an effective foreign language teacher; and c) there is a significant difference of opinion, with respect to effective teacher practices, between learners who study foreign languages at university and those who study them at private institutes. The present paper concludes that finding the gap between students' and teachers' beliefs would help both groups to enhance their learning and teaching.

Keywords: effective teacher, effective teaching, students’ beliefs, teachers’ beliefs

Procedia PDF Downloads 317
3899 Utilising Sociodrama as Classroom Intervention to Develop Sensory Integration in Adolescents Who Present with Mildly Impaired Learning

Authors: Talita Veldsman, Elzette Fritz

Abstract:

Many children attending special education present with sensory integration difficulties that hamper their learning and behaviour. These learners can benefit from therapeutic interventions, delivered as part of their classroom curriculum, that address sensory development and allow holistic development to take place. A research study was conducted utilizing sociodrama as a therapeutic intervention in the classroom in order to develop sensory integration skills. The use of sociodrama as a therapeutic intervention proved to be a successful multi-disciplinary approach through which education and psychology could build a bridge of growth and integration. The paper describes how sociodrama was used in the classroom and how these sessions were designed. The research followed a qualitative approach and involved six Afrikaans-speaking adolescents, aged 12-14, attending a special secondary school. Data collection included observations during the sessions, reflective art journals, semi-structured interviews with the teacher, and informal interviews with the adolescents. The analysis found improved self-confidence, better social relationships, sensory awareness, and self-regulation in the participants after a period of a year.

Keywords: education, sensory integration, sociodrama, classroom intervention, psychology

Procedia PDF Downloads 577
3898 Data Refinement Enhances the Accuracy of Short-Term Traffic Latency Prediction

Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong

Abstract:

Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost that yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications to predicting the full set of latencies among the various locations in the freeway system.
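
As a sketch of the modelling setup described above (not the authors' code), an XGBoost regressor on refined median-latency features can be compared against the 15-minutes-ago baseline; the file name and column names are illustrative stand-ins for the lagged latencies, accumulations, and weekday profile:

```python
import pandas as pd
import xgboost as xgb
from sklearn.metrics import mean_squared_error

# 'latency.csv' is a hypothetical refined dataset of 5-minute median latencies
data = pd.read_csv("latency.csv")
features = ["median_lag_5min", "median_lag_15min",
            "total_accumulation", "weekday_profile"]
train, test = data.iloc[:-2000], data.iloc[-2000:]  # chronological split

model = xgb.XGBRegressor(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(train[features], train["median_latency"])

print("model MSE:   ", mean_squared_error(test["median_latency"],
                                          model.predict(test[features])))
print("baseline MSE:", mean_squared_error(test["median_latency"],
                                          test["median_lag_15min"]))
```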

Keywords: data refinement, machine learning, mutual information, short-term latency prediction

Procedia PDF Downloads 169
3897 A Comparative Analysis of the Good Faith Principle in Construction Contracts

Authors: Nadine Rashed, A. Samer Ezeldin, Engy Serag

Abstract:

The principle of good faith plays a critical role in shaping contractual relationships, yet its application varies significantly across different types of construction contracts and legal systems. This paper presents a comparative analysis of how various construction contracts perceive the principle of good faith, a fundamental aspect that influences contractual relationships and project outcomes. The primary objective of this analysis is to examine the differences in the application and interpretation of good faith across key construction contracts, including JCT (Joint Contracts Tribunal), FIDIC (Fédération Internationale des Ingénieurs-Conseils), NEC (New Engineering Contract), and ICE (Institution of Civil Engineers) Contracts. To accomplish this, a mixed-methods approach will be employed, integrating a thorough literature review of current legal frameworks and academic publications with primary data gathered from a structured questionnaire aimed at industry professionals such as contract managers, legal advisors, and project stakeholders. This combined strategy will enable a holistic understanding of the theoretical foundations of good faith in construction contracts and its practical effects in real-world contexts. The findings of this analysis are expected to yield valuable insights into how varying interpretations of good faith can impact project performance, dispute resolution, and collaborative practices within the construction industry. This paper contributes to a deeper understanding of how the principle of good faith is evolving in the construction industry, providing insights for contract drafters, legal practitioners, and project managers seeking to navigate the complexities of contractual obligations across different legal systems.

Keywords: construction contracts, contractual obligations, ethical practices, good faith

Procedia PDF Downloads 22
3896 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage

Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng

Abstract:

Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archeology fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional way is to extract age-related features designed by experts from macroscopic or radiological images, followed by classification or regression analysis. Those results still have not met the high-level requirements for practice, and a limitation of feature design and manual extraction is the loss of information, since the features are likely not designed explicitly for extracting information relevant to age. Deep learning (DL) has recently garnered much interest in image analysis and computer vision. It enables learning features that are important without a prior bias or hypothesis and could be supportive of AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to the manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into the networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its advantages of higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary performance metric. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method completely followed the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used. The radiation density of the first costal cartilage was recorded using CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored based on VR images using an eight-stage staging technique. According to the results of the prior study, the optimal models were the decision tree regression model in males and the stepwise multiple linear regression equation in females. Predicted ages of the test set were calculated separately using different models by sex. A total of 2600 patients (training and validation sets, mean age = 45.19 years ± 14.20 [SD]; test set, mean age = 46.57 ± 9.66) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, which were far better than the MAEs of 8.90 and 6.42, respectively, for the manual method. These results show that DL with the ResNeXt model outperformed the manual method in AAE based on CT reconstruction of the costal cartilage, and the developed system may be a supportive tool for AAE.
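
A minimal PyTorch sketch of the training setup described above; the augmentations mirror the text (the rotation angle is assumed), while the pretrained weights, learning rate, and use of an L1 loss to target MAE are assumptions rather than the authors' reported configuration:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

transform = transforms.Compose([
    transforms.RandomRotation(15),       # "random rotation"; angle is assumed
    transforms.RandomVerticalFlip(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# ResNeXt backbone with a single regression output for the predicted age
model = models.resnext50_32x4d(weights="DEFAULT")
model.fc = nn.Linear(model.fc.in_features, 1)
criterion = nn.L1Loss()                  # L1 loss optimizes MAE directly
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```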

Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning

Procedia PDF Downloads 73
3895 Analysis and Detection of Facial Expressions in Autism Spectrum Disorder People Using Machine Learning

Authors: Muhammad Maisam Abbas, Salman Tariq, Usama Riaz, Muhammad Tanveer, Humaira Abdul Ghafoor

Abstract:

Autism Spectrum Disorder (ASD) refers to a developmental disorder that impairs an individual's communication and interaction ability. Affected individuals find it difficult to read facial expressions while communicating or interacting. Facial Expression Recognition (FER) is a method of classifying basic human expressions, i.e., happiness, fear, surprise, sadness, disgust, neutral, and anger, from static and dynamic sources. This paper conducts a comprehensive comparison and proposes an optimal method for an ongoing research project: a system that can assist people who have Autism Spectrum Disorder (ASD) in recognizing facial expressions. The comparison was conducted on three supervised learning algorithms: EigenFace, FisherFace, and LBPH. The JAFFE, CK+, and TFEID (I & II) datasets were used to train and test the algorithms. The results were then evaluated based on variance, standard deviation, and accuracy. The experiments showed that FisherFace has the highest accuracy for all datasets and is considered the best algorithm to be implemented in our system.
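
The three classifiers compared are all available through OpenCV's contrib face module; a hedged sketch of the FisherFace variant follows, where the image size, label layout, and random placeholder faces are assumptions:

```python
import cv2
import numpy as np

# Placeholder 64x64 grayscale face crops: 10 samples for each of 7 expressions
faces = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(70)]
labels = np.array([i % 7 for i in range(70)])

# Requires the opencv-contrib-python package; EigenFaceRecognizer_create and
# LBPHFaceRecognizer_create can be swapped in for the comparison
recognizer = cv2.face.FisherFaceRecognizer_create()
recognizer.train(faces, labels)

pred_label, confidence = recognizer.predict(faces[0])
print(pred_label, confidence)
```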

Keywords: autism spectrum disorder, ASD, EigenFace, facial expression recognition, FisherFace, local binary pattern histogram, LBPH

Procedia PDF Downloads 174
3894 Using Deep Learning in Lyme Disease Diagnosis

Authors: Teja Koduru

Abstract:

Untreated Lyme disease can lead to neurological, cardiac, and dermatological complications. Rapid diagnosis of the erythema migrans (EM) rash, a characteristic symptom of Lyme disease, is therefore crucial to early diagnosis and treatment. In this study, we utilize deep learning frameworks, including TensorFlow and Keras, to create deep convolutional neural networks (DCNNs) that detect acute Lyme disease from images of erythema migrans. This study uses a custom database of erythema migrans images of varying quality to train a DCNN capable of classifying images of EM rashes vs. non-EM rashes. Images from publicly available sources were mined to create an initial database. Machine-based removal of duplicate images was then performed, followed by a thorough examination of all images by a clinician. The resulting database was combined with images of confounding rashes and regular skin, resulting in a total of 683 images. This database was then used to create a DCNN with an accuracy of 93% when classifying images of rashes as EM vs. non-EM. Finally, this model was converted into a web and mobile application to allow for rapid diagnosis of EM rashes by both patients and clinicians. This tool could be used for patient prescreening prior to treatment and lead to a lower mortality rate from Lyme disease.
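
A small Keras DCNN in the spirit of the model described; the abstract does not give the exact architecture, so the input size, depth, and filter counts below are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, Sequential

model = Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # EM rash vs. non-EM
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```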

Keywords: Lyme, untreated Lyme, erythema migrans rash, EM rash

Procedia PDF Downloads 241
3893 The Role of Organizational Identity in Disaster Response, Recovery and Prevention: A Case Study of an Italian Multi-Utility Company

Authors: Shanshan Zhou, Massimo Battaglia

Abstract:

Identity plays a critical role when an organization faces disasters. Individuals reflect on their working identities and identify themselves with the group and the organization, which facilitates collective sensemaking in crisis situations and enables coordinated action to respond to and recover from disasters. In addition, an organization's identity links it to its regional community, which fosters the mobilization of resources and contributes to rapid recovery. However, identity is also problematic for disaster prevention because of its persistence. An organization's ego-defense system prohibits the rethinking of its identity, and a rigid identity obstructs disaster prevention. This research aims to tackle the 'problem' of identity by studying in depth the case of an Italian multi-utility company that experienced the 2012 Northern Italy earthquakes. Drawing on data from 11 interviews with top managers and key players in the local community, as well as archived materials, we find that the earthquakes triggered a rethinking of the organization's identity, which was reinforced afterward. This research highlights the importance of identity in disaster response and recovery. More importantly, it explores a solution for overcoming the barrier of ego-defense: transforming the organization into a learning organization that constantly rethinks its identity.

Keywords: community identity, disaster, identity, organizational learning

Procedia PDF Downloads 732
3892 Single Imputation for Audiograms

Authors: Sarah Beaver, Renee Bryce

Abstract:

Audiograms detect hearing impairment, but missing values pose problems. This work explores imputation in an attempt to improve accuracy. It implements Linear Regression, Lasso, Linear Support Vector Regression, Bayesian Ridge, K Nearest Neighbors (KNN), and Random Forest machine learning techniques to impute audiogram frequencies ranging from 125 Hz to 8000 Hz. The data contain patients who had or were candidates for cochlear implants. Accuracy is compared across two different nested cross-validation k values. Over 4000 audiograms from 800 unique patients were used. Additionally, training compares models built on combined left- and right-ear audiograms versus single-ear audiograms. The Root Mean Square Error (RMSE) values for the best Random Forest models range from 4.74 to 6.37, and their R² values range from 0.91 to 0.96. The RMSE values for the best KNN models range from 5.00 to 7.72, and their R² values range from 0.89 to 0.95. The best imputation models received R² between 0.89 and 0.96 and RMSE values less than 8 dB. We also show that classification predictive models performed better, by a two percent increase in accuracy, with our best imputation models than with constant imputations.
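
A hedged scikit-learn sketch of the imputation experiment described: mask one frequency, train a Random Forest on the remaining thresholds, and score with RMSE and R². The synthetic data, target frequency, and hyperparameters are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
audiograms = rng.uniform(0, 90, size=(4000, 7))  # placeholder thresholds (dB HL)
freqs = [125, 250, 500, 1000, 2000, 4000, 8000]

target = freqs.index(1000)                       # frequency to impute
X = np.delete(audiograms, target, axis=1)        # the other six thresholds
y = audiograms[:, target]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5,
      "R2:", r2_score(y_te, pred))
```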

Keywords: machine learning, audiograms, data imputations, single imputations

Procedia PDF Downloads 82
3891 Exploring the Applications of Neural Networks in the Adaptive Learning Environment

Authors: Baladitya Swaika, Rahul Khatry

Abstract:

Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), which relies on item selection and ability estimation using the statistical methods of maximum information selection (or selection from the posterior) and maximum-likelihood (ML) or maximum a posteriori (MAP) estimation, respectively. This study aims at combining both classical and Bayesian approaches to IRT to create a dataset which is then fed to a neural network that automates the process of ability estimation, and at comparing it to traditional CAT models designed using IRT. This study uses Python as the base coding language, PyMC for statistical modelling of the IRT, and scikit-learn for the neural network implementations. On creation of the model and on comparison, it is found that the neural-network-based model performs 7-10% worse than the IRT model for score estimation. Although it performs poorly compared to the IRT model, the neural network model can be beneficially used in back-ends to reduce time complexity, as the IRT model would have to re-calculate the ability every time it gets a request, whereas the prediction from a neural network can be done in a single step by an existing trained regressor. This study also proposes a new kind of framework whereby the neural network model could be used to incorporate feature sets other than the normal IRT feature set, exploiting a neural network's capacity to learn unknown functions and giving rise to better CAT models. Categorical features like test type could be learnt and incorporated into IRT functions with the help of techniques like logistic regression, and can be used to learn functions and express them as models that may not be trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments. This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher quality testing.
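
A toy end-to-end sketch of the idea: simulate 2PL IRT response patterns, then train a neural network to map a full response pattern to an ability estimate in a single forward pass. The item parameters, network size, and use of the true abilities as training targets are all assumptions for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_items, n_people = 30, 5000
a = rng.uniform(0.5, 2.0, n_items)     # item discrimination
b = rng.normal(0.0, 1.0, n_items)      # item difficulty
theta = rng.normal(0.0, 1.0, n_people) # true abilities

# 2PL model: P(correct) = 1 / (1 + exp(-a * (theta - b)))
p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
responses = (rng.random((n_people, n_items)) < p).astype(float)

# One trained regressor replaces per-request ML/MAP re-estimation
net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(responses[:4000], theta[:4000])
print("held-out correlation:",
      np.corrcoef(net.predict(responses[4000:]), theta[4000:])[0, 1])
```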

Keywords: computer adaptive tests, item response theory, machine learning, neural networks

Procedia PDF Downloads 175