Search results for: machine learning tools and techniques
15722 Creation of an Integrated Development Environment to Assist and Optimize the Learning of the C and C++ Languages
Authors: Francimar Alves, Marcos Castro, Marllus Lustosa
Abstract:
In the context of teaching computer programming, the choice of tool is very important in the initiation and continuity of learning a programming language. The tools in the literature do not always provide the usability and pedagogical dynamism needed for effective learning, which leads to a fall in productivity and to difficulty for students in learning a particular programming language. The integrated development environments (IDEs) Dev-C++ and Code::Blocks are widely used in introductory undergraduate Computer Science courses for learning the C and C++ languages. However, after several years without maintenance of the Dev-C++ source code, its continued use in the teaching and learning process has led to difficulties, mainly due to the lack of updates by the official developers, which resulted in a sequence of problems when using it in educational settings. Many users, dissatisfied with the Dev-C++ IDE, migrated to the Code::Blocks platform in search of more dynamism in the process of learning the C and C++ languages. Nevertheless, there is still a need for a tool that provides the resources of most software development IDEs in the literature while being more interactive, simple, accurate, and efficient. This motivation led to the creation of the Falcon C++ tool, an IDE with features that turn it into an educational platform, focused primarily on increasing student learning in the early programming and algorithms disciplines that use the C and C++ languages. As a working methodology, field research was used to validate the proposed tool. Test results and interviews with beginner and intermediate students at a post-secondary institution formed the basis of this work, demonstrating a positive impact of the tool on the teaching of programming and showing that the use of the Falcon C++ software is beneficial in the teaching of the C and C++ programming languages.
Keywords: ide, education, learning, development, language
Procedia PDF Downloads 443
15721 Two Different Learning Environments: Arabic International Students Coping with the Australian Learning System
Authors: H. van Rensburg, B. Adcock, B. Al Mansouri
Abstract:
This paper discusses the impact of pedagogical and learning differences on Arabic international students' (AIS) learning when they come to study in Australia. It describes the differences in teaching and learning methods between the students' home countries in the Arabic world and Australia. There are many research papers that discuss the general experiences of international students in western learning systems, including Australia. However, there is little research conducted specifically about AIS learning in Australia. Therefore, data were collected through in-depth, semi-structured interviews with AIS who are learning at an Australian regional university in Queensland. In this way, this paper contributes to filling a gap by reporting on the learning experiences of AIS in Australia and, more specifically, on the AIS' pedagogical experiences. It discusses not only the learning experiences of AIS but also their cultural adaptation, using Oberg's cultural adaptation model. This paper suggests some learning strategies that may benefit AIS and academic lecturers when teaching students from a completely different culture and language.
Keywords: arabic international students, cultural adaptation, learning differences, learning systems
Procedia PDF Downloads 604
15720 Instruction and Learning Design Consideration for the Development of Mobile Learning Application
Authors: M. Sarrab, M. Elbasir
Abstract:
Most of the mobile learning applications currently available are developed for formal education and learning environments. Those applications are characterized by the improvement of the interaction process between instructors and learners to provide more collaboration and flexibility in the learning process. Despite the long history and the large amount of research on instruction design models and mobile learning, there is no complete and well-defined set of steps to follow in designing mobile learning applications. Based on this scenario, this paper focuses on identifying the instruction design phases, considerations, and influencing factors in developing mobile learning applications. This set of instruction design steps (analysis, design, development, implementation, and continuous evaluation) has been built from a literature study focused on standards for learning and on mobile application software quality and guidelines. The effort is part of an Omani-funded research project investigating the development, adoption and dissemination of mobile learning in Oman.
Keywords: instruction design, mobile learning, mobile application
Procedia PDF Downloads 604
15719 Exploring How Online Applications Help Students to Learn Music Virtually: A Study in an Australian Music Academy
Authors: Ali Shah
Abstract:
This paper outlines a case study of using a variety of online strategies in an Australian music academy context during COVID times. The study aimed at exploring how online applications help students to learn music, specifically playing musical instruments, composing songs, and performing virtually. To explore this, music teachers' perceptions and experiences regarding online learning, the teaching strategies they implemented, and the challenges they faced were examined. For the purpose of this study, a qualitative research structure was adopted through the use of three data collection tools. These methods included pre- and post-research individual interviews with teachers and students, analysis of their lesson plans, virtual classroom observations of the teachers followed by the researcher's own reflections, post-observation discussions, and teachers' reflective journals. The findings revealed that teachers had a theoretical understanding of virtual learning, of recent musical applications such as Flowkey, Skoove, and Piano Marvel, and of the benefits of e-learning. While teachers faced challenges in implementing strategies to teach keyboard/piano online, overall, both students and teachers felt the positive impact of online applications and strategies on their learning and felt that modern technology made it possible for anyone to take music lessons at home.
Keywords: music, keyboard, piano, online learning, virtual learning
Procedia PDF Downloads 75
15718 Fault Study and Reliability Analysis of Rotative Machine
Authors: Guang Yang, Zhiwei Bai, Bo Sun
Abstract:
This paper analyzes the influence and harmfulness of the failure modes of a rotative machine according to the FMECA (Failure Mode, Effects, and Criticality Analysis) method and identifies the weak links that affect the reliability of this equipment. Fault tree analysis software is also used for quantitative and qualitative analysis, pointing out the main factors of failure of this equipment. Based on the experimental results, this paper puts forward corresponding measures for prevention and improvement, fundamentally improving the inherent reliability of this rotative machine and providing the basis for the formulation of technical conditions for its safe operation in industrial applications.
Keywords: rotative machine, reliability test, fault tree analysis, FMECA
Procedia PDF Downloads 154
15717 The Use of Videoconferencing in a Task-Based Beginners' Chinese Class
Authors: Sijia Guo
Abstract:
The development of new technologies and the falling cost of high-speed Internet access have made it easier for institutions and language teachers to adopt different ways of communicating with students at a distance. The emergence of web-conferencing applications, which integrate text, chat, audio/video and graphic facilities, offers great opportunities for language learning through a multimodal environment. This paper reports on data elicited from a Ph.D. study of using web-conferencing in the teaching of a first-year Chinese class in order to promote learners' collaborative learning. Firstly, a comparison of four desktop videoconferencing (DVC) tools was conducted to determine the pedagogical value of the videoconferencing tool Blackboard Collaborate. Secondly, the evaluation of 14 campus-based Chinese learners who conducted five one-hour online sessions via the multimodal environment reveals the users' choice of modes and their learning preferences. The findings show that the tasks designed for the web-conferencing environment contributed to the learners' collaborative learning and second language acquisition.
Keywords: computer-mediated communication (CMC), CALL evaluation, TBLT, web-conferencing, online Chinese teaching
Procedia PDF Downloads 309
15716 Fair Federated Learning in Wireless Communications
Authors: Shayan Mohajer Hamidi
Abstract:
Federated Learning (FL) has emerged as a promising paradigm for training machine learning models on distributed data without the need for centralized data aggregation. In the realm of wireless communications, FL has the potential to leverage the vast amounts of data generated by wireless devices to improve model performance and enable intelligent applications. However, the fairness aspect of FL in wireless communications remains largely unexplored. This abstract presents an idea for fair federated learning in wireless communications, addressing the challenges of imbalanced data distribution, privacy preservation, and resource allocation. Firstly, the proposed approach aims to tackle the issue of imbalanced data distribution in wireless networks. In typical FL scenarios, the distribution of data across wireless devices can be highly skewed, resulting in unfair model updates. To address this, we propose a weighted aggregation strategy that assigns higher importance to devices with fewer samples during the aggregation process. By incorporating fairness-aware weighting mechanisms, the proposed approach ensures that each participating device's contribution is proportional to its data distribution, thereby mitigating the impact of data imbalance on model performance. Secondly, privacy preservation is a critical concern in federated learning, especially in wireless communications where sensitive user data is involved. The proposed approach incorporates privacy-enhancing techniques, such as differential privacy, to protect user privacy during the model training process. By adding carefully calibrated noise to the gradient updates, the proposed approach ensures that the privacy of individual devices is preserved without compromising the overall model accuracy. Moreover, the approach considers the heterogeneity of devices in terms of computational capabilities and energy constraints, allowing devices to adaptively adjust the level of privacy preservation to strike a balance between privacy and utility. Thirdly, efficient resource allocation is crucial for federated learning in wireless communications, as devices operate under limited bandwidth, energy, and computational resources. The proposed approach leverages optimization techniques to allocate resources effectively among the participating devices, considering factors such as data quality, network conditions, and device capabilities. By intelligently distributing the computational load, communication bandwidth, and energy consumption, the proposed approach minimizes resource wastage and ensures a fair and efficient FL process in wireless networks. To evaluate the performance of the proposed fair federated learning approach, extensive simulations and experiments will be conducted. The experiments will involve a diverse set of wireless devices, ranging from smartphones to Internet of Things (IoT) devices, operating in various scenarios with different data distributions and network conditions. The evaluation metrics will include model accuracy, fairness measures, privacy preservation, and resource utilization. The expected outcomes of this research include improved model performance, fair allocation of resources, enhanced privacy preservation, and a better understanding of the challenges and solutions for fair federated learning in wireless communications. 
The proposed approach has the potential to revolutionize wireless communication systems by enabling intelligent applications while addressing fairness concerns and preserving user privacy.
Keywords: federated learning, wireless communications, fairness, imbalanced data, privacy preservation, resource allocation, differential privacy, optimization
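To make the two mechanisms described above concrete, the following is a minimal Python sketch of one aggregation round combining fairness-aware weighting with a basic differential-privacy step. The inverse-square-root weighting rule, the clipping norm, and the noise scale are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dp_noisy_update(local_grad, clip_norm=1.0, sigma=0.5, rng=None):
    """Clip a local gradient and add Gaussian noise (basic differential-privacy step)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(local_grad)
    clipped = local_grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=clipped.shape)

def fair_aggregate(updates, sample_counts):
    """Aggregate client updates, giving relatively more weight to data-poor clients.

    Weights proportional to 1/sqrt(n_k) are an illustrative choice to soften the
    dominance of data-rich devices; the abstract does not specify the exact rule.
    """
    weights = 1.0 / np.sqrt(np.asarray(sample_counts, dtype=float))
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# toy example: three clients with very different data volumes
rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(3)]
counts = [5000, 200, 50]
noisy = [dp_noisy_update(u, rng=rng) for u in updates]
global_update = fair_aggregate(noisy, counts)
print(global_update.shape)  # (10,)
```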
Procedia PDF Downloads 75
15715 A Predictive Model for Turbulence Evolution and Mixing Using Machine Learning
Authors: Yuhang Wang, Jorg Schluter, Sergiy Shelyag
Abstract:
The high cost associated with high-resolution computational fluid dynamics (CFD) is one of the main challenges that inhibit the design, development, and optimisation of new combustion systems adapted for renewable fuels. In this study, we propose a physics-guided CNN-based model to predict turbulence evolution and mixing without requiring a traditional CFD solver. The model architecture is built upon U-Net and the inception module, while a physics-guided loss function is designed by introducing two additional physical constraints to allow for the conservation of both mass and pressure over the entire predicted flow fields. Then, the model is trained on the Large Eddy Simulation (LES) results of a natural turbulent mixing layer with two different Reynolds number cases (Re = 3000 and 30000). As a result, the model prediction shows an excellent agreement with the corresponding CFD solutions in terms of both spatial distributions and temporal evolution of turbulent mixing. Such promising model prediction performance opens up the possibilities of doing accurate high-resolution manifold-based combustion simulations at a low computational cost for accelerating the iterative design process of new combustion systems.
Keywords: computational fluid dynamics, turbulence, machine learning, combustion modelling
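As a rough illustration of the physics-guided loss idea (a data term plus penalties for violating the stated conservation constraints), here is a hedged PyTorch-style sketch. The finite-difference divergence form, the mean-pressure constraint, the channel layout, and the weights lam_mass / lam_press are assumptions made for illustration, not the authors' formulation.

```python
import torch
import torch.nn.functional as F

def divergence(u, v, dx=1.0, dy=1.0):
    """Central-difference divergence terms of a 2D velocity field; u, v are (B, 1, H, W)."""
    du_dx = (u[..., :, 2:] - u[..., :, :-2]) / (2 * dx)
    dv_dy = (v[..., 2:, :] - v[..., :-2, :]) / (2 * dy)
    # crop both to the common interior so they can be summed
    return du_dx[..., 1:-1, :], dv_dy[..., :, 1:-1]

def physics_guided_loss(pred, target, lam_mass=0.1, lam_press=0.1):
    """pred/target: (B, 3, H, W) tensors holding u, v velocity and pressure channels."""
    data_loss = F.mse_loss(pred, target)
    u, v, p = pred[:, 0:1], pred[:, 1:2], pred[:, 2:3]
    du_dx, dv_dy = divergence(u, v)
    mass_residual = (du_dx + dv_dy).pow(2).mean()   # div(velocity) ~ 0 for an incompressible field
    # penalise drift of the field-averaged pressure relative to the reference solution
    press_residual = (p.mean(dim=(2, 3)) - target[:, 2:3].mean(dim=(2, 3))).pow(2).mean()
    return data_loss + lam_mass * mass_residual + lam_press * press_residual
```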
Procedia PDF Downloads 91
15714 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models
Authors: V. Mantey, N. Findlay, I. Maddox
Abstract:
The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to manually examine. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that training models until they are overfitting the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires fewer input data. The model developed for this study has also been fine-tuned using existing, open-source, building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
Keywords: building detection, disaster relief, mask-RCNN, satellite mapping
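Since the keywords point to Mask R-CNN, one way to read the "deliberately overfit a pre-trained model on a small regional sample" idea is the standard torchvision fine-tuning recipe run far past the usual stopping point. The sketch below is an assumption about that workflow, not the authors' pipeline; small_regional_loader is a hypothetical data loader and all hyperparameters are illustrative.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def building_detector(num_classes=2):
    """Mask R-CNN with its heads replaced for a background/building problem."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_feat = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
    in_feat_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_feat_mask, 256, num_classes)
    return model

# Deliberately "overfit" to a small regional sample: many epochs, no augmentation,
# no early stopping -- the opposite of normal practice, per the abstract's argument.
model = building_detector()
optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.005)
# for epoch in range(200):                       # far more passes than the tiny dataset "needs"
#     for images, targets in small_regional_loader:   # hypothetical loader of local imagery + vectors
#         losses = model(images, targets)
#         loss = sum(losses.values())
#         optimizer.zero_grad(); loss.backward(); optimizer.step()
```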
Procedia PDF Downloads 169
15713 A Schema of Building an Efficient Quality Gate throughout the Software Development with Tools
Authors: Le Chen
Abstract:
This paper presents an efficient tool platform scheme to ensure quality protection throughout the software development process. The main principle is to manage the information of the requirements, design, development, testing, operation and maintenance processes with proper tools, and to set up the quality standards of each process. Through the tools' display and summary of quality standards, the quality standards can be visualized and made ready for policy decisions, which is called a Quality Gate in this paper. In addition, the tools are also integrated to achieve the exchange and relation of information, which greatly improves operational efficiency. In this paper, the feasibility of the scheme is verified by practical application in development projects, and further improvements to the overall information display and data mining are proposed.
Keywords: efficiency, quality gate, software process, tools
Procedia PDF Downloads 359
15712 Estimation of Synchronous Machine Synchronizing and Damping Torque Coefficients
Authors: Khaled M. EL-Naggar
Abstract:
The synchronizing and damping torque coefficients of a synchronous machine can give a quite clear picture of machine behavior during transients. These coefficients are used as a measure of power system transient stability. In this paper, a crow search optimization algorithm is presented and implemented to study power system stability during transients. The algorithm makes use of the machine responses to perform the stability study in the time domain. The problem is formulated as a dynamic estimation problem. An objective function that minimizes the squared error in the estimated coefficients is designed. The method is tested on a practical system with different study cases. Results are reported and a thorough discussion is presented. The study illustrates that the proposed method can estimate the stability coefficients for critically stable cases where other methods may fail. The tests proved that the proposed tool is an accurate and reliable tool for estimating the machine coefficients for the assessment of power system stability.
Keywords: optimization, estimation, synchronous, machine, crow search
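The linearised model behind these coefficients is usually written ΔTe ≈ Ks·Δδ + Kd·Δω, so the estimation can be posed as choosing Ks and Kd to minimise the squared error against the recorded machine response. Below is a hedged Python sketch of that objective together with a generic Crow Search loop; the bounds, parameter values, and synthetic signals are illustrative assumptions, not the paper's test system.

```python
import numpy as np

def crow_search(objective, bounds, n_crows=20, iters=200, fl=2.0, ap=0.1, seed=0):
    """Minimal Crow Search Algorithm for a box-constrained minimisation problem."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds).T
    dim = lo.size
    pos = rng.uniform(lo, hi, size=(n_crows, dim))
    mem = pos.copy()
    mem_fit = np.array([objective(x) for x in mem])
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.integers(n_crows)
            if rng.random() > ap:                  # crow j unaware: follow its memory
                cand = pos[i] + rng.random() * fl * (mem[j] - pos[i])
            else:                                  # crow j aware: move to a random position
                cand = rng.uniform(lo, hi, size=dim)
            cand = np.clip(cand, lo, hi)
            f = objective(cand)
            pos[i] = cand
            if f < mem_fit[i]:
                mem[i], mem_fit[i] = cand, f
    best = mem_fit.argmin()
    return mem[best], mem_fit[best]

# objective: squared error between the electrical-torque deviation and Ks*d_delta + Kd*d_omega
def make_objective(d_delta, d_omega, d_te):
    return lambda k: float(np.sum((d_te - (k[0] * d_delta + k[1] * d_omega)) ** 2))

# synthetic "machine response" with true Ks = 1.2, Kd = 8.0 (illustration only)
t = np.linspace(0, 5, 500)
d_delta, d_omega = np.sin(2 * t), 2 * np.cos(2 * t)
d_te = 1.2 * d_delta + 8.0 * d_omega
k_hat, err = crow_search(make_objective(d_delta, d_omega, d_te), bounds=[(0, 5), (0, 20)])
print(k_hat, err)
```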
Procedia PDF Downloads 140
15711 Design and Manufacture Detection System for Patient's Unwanted Movements during Radiology and CT Scan
Authors: Anita Yaghobi, Homayoun Ebrahimian
Abstract:
One of the important tools that can help orthopedic doctors diagnose diseases is the imaging scan. Imaging techniques can help physicians see different parts of the body, including the bones, muscles, tendons, nerves, and cartilage. During a CT scan, a patient must remain in the same position from the start to the end of radiation treatment. Patient movements are usually monitored by the technologists through closed circuit television (CCTV) during the scan. If the patient makes a small movement, it is difficult for them to notice it. In the present work, a simple patient movement monitoring device is fabricated that uses an electronic sensing device and continuously monitors the patient's position while the CT scan is in process. The device has been retrospectively tested on 51 patients whose movement and distance were measured. The results show that 25 patients moved 1 cm to 2.5 cm from their initial position during the CT scan. Hence, the device can potentially be used to control and monitor patient movement during CT scans and radiography. In addition, an audible alarm situated at the control panel of the control room is provided with this device to alert the technologists. It is an inexpensive, compact device which can be used with any CT scan machine.
Keywords: CT scan, radiology, X-ray, unwanted movement
Procedia PDF Downloads 460
15710 Students’ Attitudes towards Self-Directed Learning out of Classroom: Indonesian Context
Authors: Silmy A. Humaira'
Abstract:
There is an issue that Asian students, including Indonesian students, tend to behave passively in the classroom and depend on the teachers' instruction. Regarding this statement, this study attempts to address whether Indonesian high school students have initiative and take responsibility for their learning out of the classroom and, if so, why. Therefore, 30 high school students were asked to fill out questionnaires and were interviewed in order to figure out their attitudes towards self-directed learning. The descriptive qualitative research analysis adapted Knowles's (1975) theory of Self-Directed Learning (SDL) to analyze the data. The findings show that the students have the potential to engage in self-directed learning through ICT, but they have difficulties in choosing an appropriate learning strategy, doing self-assessment and conducting self-reflection. Therefore, this study supports the teacher in promoting self-directed learning instruction for successful learning by assisting students in dealing with the aforementioned problems. Furthermore, it is expected to be a beneficial reference which gives new insights into self-directed learning practice in a specific context.
Keywords: ICT, learning autonomy, students' attitudes, self-directed learning
Procedia PDF Downloads 227
15709 Investigation of the Effects of Processing Parameters on PLA Based 3D Printed Tensile Samples
Authors: Saifullah Karimullah
Abstract:
Additive manufacturing techniques are becoming more common with the latest technological advancements. They are poised to bring a revolution in the way products are designed, planned, manufactured, and distributed to end users. Fused deposition modeling (FDM) based 3D printing is one of those promising aspects that have revolutionized prototyping processes. The purpose of this design and study project is to design a customized laboratory-scale FDM-based 3D printer from locally available sources. The primary goal is to design and fabricate this FDM-based 3D printer. After the fabrication, a tensile test specimen was designed in SolidWorks or Creo computer-aided design (CAD) software. A .stl file of the tensile test specimen is generated through slicing software, and the G-codes are loaded via a computer for the test specimen to be printed. Different parameters were studied, such as the printing speed, layer thickness and infill density of the printed object. Some parameters were kept constant, such as temperature, extrusion rate, raster orientation, etc. Different tensile test specimens were printed for different sets of parameters of the FDM-based 3D printer. The tensile test specimens were subjected to tensile tests using a universal testing machine (UTM). Design Expert software was used for the analyses, and different results were obtained from the different tensile test specimens. The best, average and worst specimens were also observed under a compound microscope to investigate the layer bonding between layers.
Keywords: additive manufacturing techniques, 3D printing, CAD software, UTM machine
Procedia PDF Downloads 103
15708 Basic Characteristics and Prospects of Synchronized Stir Welding
Authors: Shoji Matsumoto
Abstract:
Friction Stir Welding (FSW) has been widely used in the automotive, aerospace, and high-tech industries due to the superior mechanical properties it produces after welding. However, to produce a high-quality joint using FSW, it is necessary to secure an advanced tilt angle (usually 1 to 5 degrees) using a dedicated FSW machine, and to use a joint structure and a restraining jig that can withstand the tool pressure applied during the joining process, using a highly rigid processing machine. One issue that has become a challenge in this process is 'productivity and versatility'. To solve this problem, we have in recent years conducted research and development of multi-function machines and robots with FSW tools, which combine cutting/milling and FSW functions in one. However, the narrow process window makes the process prone to welding defects and lacking in repeatability, which limits the use of FSW in fields where precision is required. Another reason why FSW machines are not widely used in the world is their very high cost of ownership.
Keywords: synchronized, stir, welding, friction, traveling speed, synchronized stir welding, friction stir welding
Procedia PDF Downloads 54
15707 The Role of the Constructivist Learning Theory and Collaborative Learning Environment on Wiki Classroom and the Relationship between Them
Authors: Ibraheem Alzahrani
Abstract:
This paper seeks to discover the relationship between the social constructivist learning theory and the collaborative learning environment. This relationship can be identified through an example of the learning environment. Due to its characteristics, a wiki can be used to understand the relationship between constructivist learning theory and the collaborative learning environment. Several pieces of evidence are presented in this paper to support the idea of why the wiki is a suitable method to explore the relationship between social constructivist theory and collaborative learning and their role in learning. Moreover, learning activities in the wiki classroom are discussed in this paper to find out the results of the learners' interaction in the classroom groups, which takes place through two types of communication: synchronous and asynchronous.
Keywords: social constructivist, collaborative, environment, wiki, activities
Procedia PDF Downloads 503
15706 Optimizing Energy Efficiency: Leveraging Big Data Analytics and AWS Services for Buildings and Industries
Authors: Gaurav Kumar Sinha
Abstract:
In an era marked by increasing concerns about energy sustainability, this research endeavors to address the pressing challenge of energy consumption in buildings and industries. This study delves into the transformative potential of AWS services in optimizing energy efficiency. The research is founded on the recognition that effective management of energy consumption is imperative for both environmental conservation and economic viability. Buildings and industries account for a substantial portion of global energy use, making it crucial to develop advanced techniques for analysis and reduction. This study sets out to explore the integration of AWS services with big data analytics to provide innovative solutions for energy consumption analysis. Leveraging AWS's cloud computing capabilities, scalable infrastructure, and data analytics tools, the research aims to develop efficient methods for collecting, processing, and analyzing energy data from diverse sources. The core focus is on creating predictive models and real-time monitoring systems that enable proactive energy management. By harnessing AWS's machine learning and data analytics capabilities, the research seeks to identify patterns, anomalies, and optimization opportunities within energy consumption data. Furthermore, this study aims to propose actionable recommendations for reducing energy consumption in buildings and industries. By combining AWS services with metrics-driven insights, the research strives to facilitate the implementation of energy-efficient practices, ultimately leading to reduced carbon emissions and cost savings. The integration of AWS services not only enhances the analytical capabilities but also offers scalable solutions that can be customized for different building and industrial contexts. The research also recognizes the potential for AWS-powered solutions to promote sustainable practices and support environmental stewardship.
Keywords: energy consumption analysis, big data analytics, AWS services, energy efficiency
Procedia PDF Downloads 64
15705 Websites for Hypothesis Testing
Authors: Frantisek Mosna
Abstract:
E-learning has become an efficient and widespread means of education in all branches of human activity. Statistics is no exception. Unfortunately, the main focus in statistics teaching is usually on substitution into formulas. Suitable websites can simplify and automate calculation and allow more attention and time to be given to the basic principles of statistics, the mathematization of real-life situations and the subsequent interpretation of results. We introduce our own websites for hypothesis testing. Their didactic aspects, the technical possibilities of the individual tools used for creating them, our experience with them, and their advantages and disadvantages are discussed in this paper. These websites do not substitute for common statistical software but significantly improve the teaching of statistics at universities.
Keywords: e-learning, hypothesis testing, PHP, web-sites
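The websites themselves are built in PHP, but the kind of calculation they automate (a two-sample test with its statistic, p-value and decision) can be illustrated with the following Python sketch on made-up data; the groups, effect size and significance level are arbitrary assumptions used only to show the underlying test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)   # e.g. scores under teaching method A
group_b = rng.normal(loc=5.6, scale=1.0, size=30)   # e.g. scores under teaching method B

t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test
alpha = 0.05
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("reject H0" if p_value < alpha else "fail to reject H0")
```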
Procedia PDF Downloads 425
15704 Automatic Identification and Classification of Contaminated Biodegradable Plastics using Machine Learning Algorithms and Hyperspectral Imaging Technology
Authors: Nutcha Taneepanichskul, Helen C. Hailes, Mark Miodownik
Abstract:
Plastic waste has emerged as a critical global environmental challenge, primarily driven by the prevalent use of conventional plastics derived from petrochemical refining and manufacturing processes in modern packaging. While these plastics serve vital functions, their persistence in the environment post-disposal poses significant threats to ecosystems. Addressing this issue necessitates a range of approaches, one of which involves the development of biodegradable plastics designed to degrade under controlled conditions, such as industrial composting facilities. It is imperative to note that compostable plastics are engineered for degradation within specific environments and are not suited for uncontrolled settings, including natural landscapes and aquatic ecosystems. The full benefits of compostable packaging are realized when subjected to industrial composting, preventing environmental contamination and waste stream pollution. Therefore, effective sorting technologies are essential to enhance composting rates for these materials and diminish the risk of contaminating recycling streams. In this study, we leverage hyperspectral imaging technology (HSI) coupled with advanced machine learning algorithms to accurately identify various types of plastics, encompassing conventional variants like Polyethylene terephthalate (PET), Polypropylene (PP), Low density polyethylene (LDPE), High density polyethylene (HDPE) and biodegradable alternatives such as Polybutylene adipate terephthalate (PBAT), Polylactic acid (PLA), and Polyhydroxyalkanoates (PHA). The dataset is partitioned into three subsets: a training dataset comprising uncontaminated conventional and biodegradable plastics, a validation dataset encompassing contaminated plastics of both types, and a testing dataset featuring real-world packaging items in both pristine and contaminated states. Five distinct machine learning algorithms, namely Partial Least Squares Discriminant Analysis (PLS-DA), Support Vector Machine (SVM), Convolutional Neural Network (CNN), Logistic Regression, and Decision Tree Algorithm, were developed and evaluated for their classification performance. Remarkably, the Logistic Regression and CNN models exhibited the most promising outcomes, achieving a perfect accuracy rate of 100% for the training and validation datasets. Notably, the testing dataset yielded an accuracy exceeding 80%. The successful implementation of this sorting technology within recycling and composting facilities holds the potential to significantly elevate recycling and composting rates. As a result, the envisioned circular economy for plastics can be established, thereby offering a viable solution to mitigate plastic pollution.
Keywords: biodegradable plastics, sorting technology, hyperspectral imaging technology, machine learning algorithms
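As a hedged sketch of one of the five classifiers mentioned (logistic regression applied to per-sample spectra), something like the following could be trained once the hyperspectral cube has been reduced to one spectrum per sample; the band count, the synthetic data and the train/test split are stand-ins, since the study's actual dataset is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row per sample, one column per spectral band; y: polymer label.
# Synthetic stand-in data; a real pipeline would read mean spectra extracted from the HSI cube.
rng = np.random.default_rng(0)
n_bands = 224                                   # assumed number of bands, camera-dependent
X = rng.normal(size=(700, n_bands))
y = np.repeat(["PET", "PP", "LDPE", "HDPE", "PBAT", "PLA", "PHA"], 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000))
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```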
Procedia PDF Downloads 80
15703 Vaccination Coverage and Its Associated Factors in India: An ML Approach to Understand the Hierarchy and Inter-Connections
Authors: Anandita Mitro, Archana Srivastava, Bidisha Banerjee
Abstract:
The present paper attempts to analyze the hierarchy and interconnection of factors responsible for the uptake of BCG vaccination in India. The study uses National Family Health Survey (NFHS-5) data, which was conducted during 2019-21. The univariate logistic regression method is used to understand the univariate effects, while the interconnection effects have been studied using the Conditional Inference Tree (CIT), which is a non-parametric Machine Learning (ML) model. The hierarchy of the factors is further established using the Conditional Inference Forest, which is an extension of the CIT approach. The results suggest that BCG vaccination coverage was influenced more by system-level factors and awareness than by education or socio-economic status. Factors such as place of delivery, antenatal care, and postnatal care were crucial, with variations based on delivery location. Region-specific differences were also observed, which could be explained by these factors. Awareness of the disease was less impactful, along with the factors of wealth and urban or rural residence, although awareness did appear to substitute for inadequate ANC. Thus, from the policy point of view, it is revealed that certain subpopulations have a lower prevalence of vaccination, which implies that there is a need for population-specific policy action to achieve a hundred percent coverage.
Keywords: vaccination, NFHS, machine learning, public health
Procedia PDF Downloads 59
15702 Application of Artificial Neural Network in Assessing Fill Slope Stability
Authors: An-Jui. Li, Kelvin Lim, Chien-Kuo Chiu, Benson Hsiung
Abstract:
This paper details the utilization of artificial intelligence (AI) in the field of slope stability whereby quick and convenient solutions can be obtained using the developed tool. The AI tool used in this study is the artificial neural network (ANN), while the slope stability analysis methods are the finite element limit analysis methods. The developed tool allows for the prompt prediction of the safety factors of fill slopes and their corresponding probability of failure (depending on the degree of variation of the soil parameters), which can give the practicing engineer a reasonable basis in their decision making. In fact, the successful use of the Extreme Learning Machine (ELM) algorithm shows that slope stability analysis is no longer confined to the conventional methods of modeling, which at times may be tedious and repetitive during the preliminary design stage where the focus is more on cost saving options rather than detailed design. Therefore, similar ANN-based tools can be further developed to assist engineers in this aspect.
Keywords: landslide, limit analysis, artificial neural network, soil properties
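The Extreme Learning Machine recipe named above (a single hidden layer with random, untrained weights and an analytically solved output layer) is simple enough to sketch directly; the input ranges and the synthetic factor-of-safety relation below are placeholders rather than the study's finite element limit analysis results.

```python
import numpy as np

class ELMRegressor:
    """Minimal Extreme Learning Machine: random hidden layer + least-squares output weights."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)          # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ y         # analytic output weights, no iterative training
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# toy usage: map soil/geometry parameters (cohesion, friction angle, unit weight, slope angle,
# height) to a factor of safety; the relation below is only a synthetic placeholder
rng = np.random.default_rng(1)
X = rng.uniform([5, 20, 16, 20, 5], [40, 40, 22, 60, 30], size=(500, 5))
fos = 1.0 + 0.05 * X[:, 0] + 0.04 * X[:, 1] - 0.02 * X[:, 3] + rng.normal(0, 0.05, 500)
model = ELMRegressor(n_hidden=80).fit(X[:400], fos[:400])
print("test MSE:", np.mean((model.predict(X[400:]) - fos[400:]) ** 2))
```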
Procedia PDF Downloads 207
15701 Roof and Road Network Detection through Object Oriented SVM Approach Using Low Density LiDAR and Optical Imagery in Misamis Oriental, Philippines
Authors: Jigg L. Pelayo, Ricardo G. Villar, Einstine M. Opiso
Abstract:
The advances of aerial laser scanning in the Philippines have opened up entire fields of research in remote sensing and machine vision that aspire to provide accurate and timely information for the government and the public. Rapid mapping of polygonal roads and roof boundaries is one of its uses, offering applications in disaster risk reduction, mitigation and development. The study uses low-density LiDAR data and high-resolution aerial imagery through an object-oriented approach, considering the theoretical concepts of data analysis subjected to a machine learning algorithm to minimize the constraints of feature extraction. Since separating one class from another in distinct regions of a multi-dimensional feature space is non-trivial, computations for fitting distributions were implemented to formulate the learned ideal hyperplane. Customized hybrid features were generated and then used to improve the classifier findings. Supplemental algorithms for filtering and reshaping object features were developed in the rule set to enhance the final product. Several advantages in terms of simplicity, applicability, and process transferability are noticeable in the methodology. The algorithm was tested at different random locations of Misamis Oriental province in the Philippines, demonstrating robust performance with an overall accuracy greater than 89% and potential for semi-automation. The extracted results will become a vital requirement for decision makers, urban planners and even the commercial sector in various assessment processes.
Keywords: feature extraction, machine learning, OBIA, remote sensing
Procedia PDF Downloads 362
15700 Analysis of Business Intelligence Tools in Healthcare
Authors: Avishkar Gawade, Omkar Bansode, Ketan Bhambure, Bhargav Deore
Abstract:
In recent years, a wide range of business intelligence (BI) technologies have been applied to different areas in order to support the decision-making process; BI enables the extraction of knowledge from data stores. BI tools are usually used in the public health field for financial and administrative purposes. BI uses a dashboard in the presentation stage to deliver information to end users. In this paper, we intend to analyze some open source BI tools on the market and their applicability in the clinical sphere, taking into consideration the general characteristics of the clinical environment. A pervasive BI platform was developed using a real case in order to prove the tool's viability. The analysis of the various BI tools is done with the help of several parameters, such as data security, data integration, data quality, reporting and analytics, performance, scalability and cost effectiveness.
Keywords: CDSS, EHR, business intelligence, tools
Procedia PDF Downloads 137
15699 Designing the Lesson Instructional Plans for Exploring the STEM Education and Creative Learning Processes to Students' Logical Thinking Abilities with Different Learning Outcomes in Chemistry Classes
Authors: Pajaree Naramitpanich, Natchanok Jansawang, Panwilai Chomchid
Abstract:
The aim of this study was to compare students' logical thinking abilities when learning through 5-lesson instructional plans designed with two instructional methods, namely STEM Education and the Creative Learning Process (CLP), for developing students' logical thinking abilities. The sample consisted of 90 students from two 11th grade chemistry classes with different learning outcomes at Wapi Phathum School, selected with the cluster random sampling technique. The learning environments were administered to a 45-student experimental group taught with the STEM Education method and a 45-student control group taught with the Creative Learning Process. Five instruments were used with these different learning groups: the 5-lesson instructional plans for the STEM Education and the Creative Learning Process methods, and logical thinking tests on the Mineral issue. The efficiency of the Creative Learning Process (CLP) Model and the STEM Education innovations for each of the five instructional lesson plans was higher than the 80/80 standard criterion, based on the IOC index from the expert educators. The mean scores of students' learning achievement motives were assessed with pre and post techniques and the Logical Thinking Ability Test (LTAT), and dependent t-test analysis showed that the CLP and the STEM groups differed significantly. Students' perceptions of their chemistry classroom environment, measured with the MCI inventories, were also found to differ between the CLP and the STEM methods. Regarding associations between students' perceptions of their chemistry classroom learning environment under the CLP Model and the STEM Education learning designs and their logical thinking abilities toward chemistry, the predictive efficiency of the R2 values indicates that 68% and 76% of the variance in students' logical thinking abilities toward chemistry was accounted for in the control and experimental classroom learning environment groups with the MCI, correlated significantly at the .05 level. The implications of these results are that students' learning by the CLP plays a potentially important role in most of their logical thinking abilities, and that students' perceived abilities and learning achievement in the chemistry groups are differentiated from the outcomes of the STEM Education students.
Keywords: design, the lesson instructional plans, the stem education, the creative learning process, logical thinking ability, different, learning outcome, student, chemistry class
Procedia PDF Downloads 321
15698 Transfer Knowledge From Multiple Source Problems to a Target Problem in Genetic Algorithm
Authors: Terence Soule, Tami Al Ghamdi
Abstract:
To study how to transfer knowledge from multiple source problems to a target problem, we modeled the Transfer Learning (TL) process using Genetic Algorithms as the model solver. TL is the process that aims to transfer learned data from one problem to another problem. The TL process aims to help Machine Learning (ML) algorithms find a solution to the problems. Genetic Algorithms (GA) give researchers access to the information we have about how the old problem was solved. In this paper, we have five different source problems, and we transfer the knowledge to the target problem. We studied different scenarios of the target problem. The results showed that combined knowledge from multiple source problems improves the GA performance. Also, the process of combining knowledge from several problems promotes diversity in the transferred population.
Keywords: transfer learning, genetic algorithm, evolutionary computation, source and target
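A toy reading of this setup (evolve a solution on each source problem, then inject those solutions into the target problem's starting population) is sketched below; the one-max style fitness functions, population sizes and GA operators are illustrative stand-ins, not the tasks actually used in the study.

```python
import random

def evolve(fitness, pop, generations=50, mut_rate=0.05):
    """Very small GA: truncation selection, one-point crossover, bit-flip mutation."""
    def mutate(ind):
        return [1 - g if random.random() < mut_rate else g for g in ind]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]
        children = []
        while len(children) < len(pop):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            children.append(mutate(a[:cut] + b[cut:]))
        pop = children
    return max(pop, key=fitness)

n = 40
random.seed(0)
source_fitnesses = [lambda ind, k=k: sum(ind[:k]) for k in (10, 20, 30, 35, 40)]  # five related source problems
target_fitness = lambda ind: sum(ind)                                              # target problem: one-max

# transfer: solve each source problem, then seed the target population with those solutions
seeds = [evolve(f, [[random.randint(0, 1) for _ in range(n)] for _ in range(30)]) for f in source_fitnesses]
target_pop = seeds + [[random.randint(0, 1) for _ in range(n)] for _ in range(25)]
best = evolve(target_fitness, target_pop)
print(sum(best), "/", n)
```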
Procedia PDF Downloads 140
15697 A Review: Detection and Classification Defects on Banana and Apples by Computer Vision
Authors: Zahow Muoftah
Abstract:
Traditional manual visual grading of fruits has been one of the agricultural industry's major challenges due to its laborious nature as well as inconsistency in the inspection and classification process. The main requirements for computer vision and visual processing are effective techniques for identifying defects and estimating defect areas. Automated defect detection using computer vision and machine learning has emerged as a promising area of research with a high and direct impact on the visual inspection domain. Grading, sorting, and disease detection are important factors in determining the quality of fruits after harvest. Many studies have used computer vision to evaluate the quality level of fruits during post-harvest, and many have been conducted to identify diseases and pests that affect the fruits of agricultural crops. However, most previous studies concentrated solely on the diagnosis of a lesion or disease. This study focuses on a comprehensive review of the detection and classification of defects, pests and diseases of apple and banana fruits by computer vision. As a result, the current article includes research from these domains as well. Finally, various pattern recognition techniques for detecting apple and banana defects are discussed.
Keywords: computer vision, banana, apple, detection, classification
Procedia PDF Downloads 106
15696 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data
Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone
Abstract:
The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability to distinguish between controls and patients using mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to McDonald and Polman, and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired by a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST-toolbox from the corrected T1 and FLAIR. All rsFMRIs were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with high-pass temporal filter (128s), (5) spatial smoothing with a Gaussian kernel of FWHM 8mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidian distance and the mean Euler angle. WM and CSF signal together with 6 motion parameters were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT-toolbox using the Infomax approach with number of components = 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset composed of 37 rows (subjects) and 15 features (mean signal in the network) with the R language. The dataset was randomly split into training (75%) and test sets and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (rfe) for the SVM, to obtain a rank of the most predictive variables. Thus, we built two new classifiers only on the most important features and we evaluated the accuracies (with and without feature selection) on the test set. The classifiers, trained on all the features, showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and rfe-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest value of lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum and working memory networks. These findings, in accordance with the early manifestation of motor/sensorial deficits in MS, could represent an encouraging step toward the translation to clinical diagnosis and prognosis.
Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine
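A hedged scikit-learn analogue of the two feature-selection routes described (Gini importance from the random forest, and recursive feature elimination for the SVM) is shown below; the data are synthetic placeholders with the study's dimensions (37 subjects, 15 network features), and a linear kernel is used inside RFE because coefficient-based ranking requires it, whereas the study's final classifier used an RBF kernel.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# 37 subjects x 15 network-mean features, binary label (0 = control, 1 = early MS); synthetic stand-in
rng = np.random.default_rng(0)
X = rng.normal(size=(37, 15))
y = np.array([0] * 19 + [1] * 18)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
rf_rank = np.argsort(rf.feature_importances_)[::-1]          # Gini-based ranking, best first

rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X_tr, y_tr)
svm_rank = np.argsort(rfe.ranking_)                           # RFE ranking, best first

top = rf_rank[0]                                              # in the study this was the sensori-motor network I
rf_top = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr[:, [top]], y_tr)
print("test accuracy with the single top feature:", rf_top.score(X_te[:, [top]], y_te))
```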
Procedia PDF Downloads 240
15695 Reading and Writing Memories in Artificial and Human Reasoning
Authors: Ian O'Loughlin
Abstract:
Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to ably perform question and answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains, wide-context cues remain elusive in parsing words and sentences, and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable 'memory' elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face, by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory, as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science, researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the arrays of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
Keywords: artificial reasoning, human memory, machine learning, neural networks
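The attractor-network alternative described here is essentially Hopfield-style dynamics: nothing is stored and retrieved; remembering is relaxation to a stable equilibrium of the network. A minimal illustrative sketch of that idea (not any particular model from the literature the author has in mind) follows.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights whose stable states are (ideally) the stored +/-1 patterns."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, steps=20):
    """Iterate the update rule; remembering is settling into an attractor, not a lookup."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 100))    # three "memories" in a 100-unit network
W = train_hopfield(patterns)
noisy = patterns[0].copy()
noisy[:15] *= -1                                  # corrupt 15 of 100 units
print("recovered original:", np.array_equal(recall(W, noisy), patterns[0]))
```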
Procedia PDF Downloads 271
15694 Laban Movement Analysis Using Kinect
Authors: Bernstein Ran, Shafir Tal, Tsachor Rachelle, Studd Karen, Schuster Assaf
Abstract:
Laban Movement Analysis (LMA), developed in the dance community over the past seventy years, is an effective method for observing, describing, notating, and interpreting human movement to enhance communication and expression in everyday and professional life. Many applications that use motion capture data could be significantly leveraged if the Laban qualities were recognized automatically. This paper presents an automated method for recognizing Laban qualities from motion capture skeletal recordings, demonstrated on the output of Microsoft's Kinect V2 sensor.
Keywords: Laban movement analysis, multitask learning, Kinect sensor, machine learning
Procedia PDF Downloads 342
15693 Tool Condition Monitoring of Ceramic Inserted Tools in High Speed Machining through Image Processing
Authors: Javier A. Dominguez Caballero, Graeme A. Manson, Matthew B. Marshall
Abstract:
Cutting tools with ceramic inserts are often used in the machining of many types of superalloy, mainly due to their high strength and thermal resistance. Nevertheless, during the cutting process, the plastic flow wear generated in these inserts enhances and propagates cracks due to high temperature and high mechanical stress. This leads to highly variable failure of the cutting tool. This article explores the relationship between the continuous wear that ceramic SiAlON (solid solutions based on the Si3N4 structure) inserts experience during a high-speed machining process and the evolution of sparks created during the same process. These sparks were analysed through pictures of the cutting process recorded using an SLR camera. Features relating to the intensity and area of the cutting sparks were extracted from the individual pictures using image processing techniques. These features were then related to the ceramic insert's crater wear area.
Keywords: ceramic cutting tools, high speed machining, image processing, tool condition monitoring, tool wear
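A hedged sketch of the kind of per-image feature extraction this implies (threshold the bright spark regions, then measure their area and intensity, to be related later to the measured crater-wear area) might look like the following OpenCV snippet; the threshold value, feature set and file name are illustrative assumptions rather than the authors' processing chain.

```python
import cv2
import numpy as np

def spark_features(frame_bgr, threshold=200):
    """Intensity and area features of bright spark regions in one frame of the cutting process."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    area = int(np.count_nonzero(mask))                               # spark area in pixels
    mean_intensity = float(gray[mask > 0].mean()) if area else 0.0
    n_regions, _ = cv2.connectedComponents(mask)                     # number of separate spark blobs
    return {"area_px": area, "mean_intensity": mean_intensity, "n_sparks": n_regions - 1}

# usage: compute features for every photo, then regress them against measured crater-wear area
# frame = cv2.imread("cutting_frame_0001.jpg")   # hypothetical file name
# print(spark_features(frame))
```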
Procedia PDF Downloads 298