Search results for: machine capacity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6735


5355 Availability Analysis of Process Management in the Equipment Maintenance and Repair Implementation

Authors: Onur Ozveri, Korkut Karabag, Cagri Keles

Abstract:

Production downtime and repair costs caused by machine failures are a major concern in machine-intensive production industries. When more than one machine fails at the same time, the key questions are which machine should be repaired first, how much repair time should be allotted to each, and how to plan the resources needed for the repair. In recent years, the Business Process Management (BPM) technique has brought effective solutions to a range of business problems. Its main strength is that it can improve how a job is done by examining the work of interest in detail. In industry, maintenance and repair are operated as processes, and when a breakdown occurs, the repair work is carried out as a series of process steps. Evaluating the maintenance main process and the repair sub-process with the process management technique was therefore expected to yield a solution. For this reason, the issue was examined in an international manufacturing company and a solution proposal was developed. The purpose of this study is to implement maintenance and repair work integrated with the process management technique and, at the end of the implementation, to analyze maintenance-related parameters such as quality, cost, time, safety, and spare parts. The firm that carried out the application operates in a free zone in Turkey; its core business is producing original equipment technologies, vehicle electrical construction, electronics, safety, and thermal systems for the world's leading light and heavy vehicle manufacturers. In the firm, a project team was first established. The team reviewed the current maintenance process and revised it using process management techniques; the repair process, a sub-process of the maintenance process, was likewise revisited.
In the improved processes, the ABC equipment classification technique was used to decide which machine or machines are given priority in case of failure. This technique prioritizes malfunctioning machines based on their effect on production, product quality, maintenance cost, and job safety. The improved maintenance and repair processes were implemented in the company for three months, and the resulting data were compared with the previous year's data. In conclusion, breakdown maintenance was carried out in a shorter time, at lower cost, and with a lower spare parts inventory.
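The ABC prioritization described above can be sketched as a weighted scoring of the four criteria named in the abstract. The 0-10 criterion scores, the weights, and the 20%/50% class cut-offs below are illustrative assumptions, not the firm's actual values:

```python
def abc_classify(machines, weights=None):
    """Rank machines by a weighted criticality score and assign A/B/C classes.

    `machines` maps a machine name to 0-10 scores for the four criteria in the
    abstract: effect on production, product quality, maintenance cost, and job
    safety. Weights and the 20%/50% class split are illustrative assumptions.
    """
    weights = weights or {"production": 0.4, "quality": 0.3, "cost": 0.2, "safety": 0.1}
    scored = sorted(
        ((name, sum(weights[k] * v for k, v in crit.items()))
         for name, crit in machines.items()),
        key=lambda pair: pair[1], reverse=True,
    )
    n = len(scored)
    classes = {}
    for rank, (name, _score) in enumerate(scored):
        if rank < max(1, round(0.2 * n)):
            classes[name] = "A"   # class A machines are repaired first
        elif rank < max(2, round(0.5 * n)):
            classes[name] = "B"
        else:
            classes[name] = "C"
    return classes
```

On simultaneous failures, class A machines would be repaired before B, and B before C.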

Keywords: ABC equipment classification, business process management (BPM), maintenance, repair performance

Procedia PDF Downloads 185
5354 Keynote Talk: The Role of Internet of Things in the Smart Cities Power System

Authors: Abdul-Rahman Al-Ali

Abstract:

As the number of connected devices grows exponentially, about 50 billion devices are estimated to be connected to the Internet by the year 2020; by the end of this decade, an average of eight connected devices per person worldwide is expected. These 50 billion devices are not only mobile phones and data-browsing gadgets but also machine-to-machine and man-to-machine devices. With such growing numbers of devices, the Internet of Things (IoT) has recently emerged as one of the key technologies. Within smart grid technologies, smart home appliances, Intelligent Electronic Devices (IED), and Distributed Energy Resources (DER) are major IoT objects that can be addressed using IPv6. These objects are called the smart grid Internet of Things (SG-IoT). The SG-IoT generates big data that requires a high-speed computing infrastructure, widespread computer networks, big data storage, software, and platform services. A utility's control and data centers cannot handle such a large number of devices, high-speed processing, and massive data storage. Building a large data center infrastructure takes a long time; it also requires widespread communication networks and a huge capital investment. Maintaining and upgrading control and data center infrastructure and communication networks, as well as updating and renewing software licenses, adds further cost. This can be overcome by utilizing emerging computing paradigms such as cloud computing, which can serve as a smart grid enabler replacing legacy utility data centers. The talk will highlight the role of IoT and cloud computing services, and their deployment models, within smart grid technologies.

Keywords: intelligent electronic devices (IED), distributed energy resources (DER), internet, smart home appliances

Procedia PDF Downloads 311
5353 Modelling and Detecting the Demagnetization Fault in the Permanent Magnet Synchronous Machine Using the Current Signature Analysis

Authors: Yassa Nacera, Badji Abderrezak, Saidoune Abdelmalek, Houassine Hamza

Abstract:

Several kinds of faults can occur in permanent magnet synchronous machine (PMSM) systems: bearing faults, electrical short/open faults, eccentricity faults, and demagnetization faults. A demagnetization fault means that the strength of the permanent magnets (PM) in the PMSM decreases; it causes low output torque, which is undesirable for electric vehicles (EVs). The fault is caused by physical damage, high-temperature stress, an inverse magnetic field, or aging. Motor current signature analysis (MCSA) is a conventional motor fault detection method based on extracting signal features from the stator current. A simulation model of the PMSM under partial and uniform demagnetization faults was established, and different degrees of demagnetization were simulated. The harmonic analyses using the Fast Fourier Transform (FFT) show that a fault diagnosis method based on harmonic analysis is only suitable for partial demagnetization faults of the PMSM and does not apply to uniform demagnetization faults, since uniform demagnetization changes the amplitude of the fundamental without introducing new harmonic components.
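A toy illustration of MCSA-style harmonic analysis: a naive DFT over a synthetic stator current that carries a fractional fault harmonic at f0·(1 − k/p), the characteristic frequency commonly associated with partial demagnetization in the literature. The machine parameters and amplitudes below are illustrative assumptions, not values from the paper's simulation model:

```python
import cmath
import math

def spectrum(signal):
    """Single-sided magnitude spectrum via a naive O(N^2) DFT (fine for a sketch)."""
    n = len(signal)
    mags = []
    for k in range(n // 2):
        s = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        mags.append(2 * abs(s) / n)
    return mags  # bin k corresponds to frequency k * fs / n

# Hypothetical stator phase current: 50 Hz fundamental plus a small component
# at f0 * (1 - 1/p) = 37.5 Hz for an assumed p = 4 pole-pair machine, the kind
# of fractional harmonic a partial demagnetization fault introduces.
fs, f0, p = 400.0, 50.0, 4
fault_f = f0 * (1 - 1 / p)                       # 37.5 Hz
samples = [math.sin(2 * math.pi * f0 * i / fs)
           + 0.1 * math.sin(2 * math.pi * fault_f * i / fs)
           for i in range(800)]                  # 2 s window -> 0.5 Hz bins
mags = spectrum(samples)
fault_bin = int(fault_f * len(samples) / fs)     # bin index of the fault line
```

A uniform demagnetization fault would only shrink the 50 Hz peak, leaving no extra spectral line to detect, which is the limitation the abstract reports.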

Keywords: permanent magnet, diagnosis, demagnetization, modelling

Procedia PDF Downloads 51
5352 Loading and Unloading Scheduling Problem in a Multiple-Multiple Logistics Network: Modelling and Solving

Authors: Yasin Tadayonrad

Abstract:

Most supply chain networks have many nodes, from the suppliers' side to the customers' side, and each node sends/receives raw materials/products to/from other nodes. One of the major concerns in this kind of supply chain network is finding the best schedule for loading/unloading the shipments across the whole network such that all constraints at the source and destination nodes are met and all shipments are delivered on time. One of the main constraints is the loading/unloading capacity of each source/destination node in each time slot (e.g., per week/day/hour). Because of the different characteristics of different products or product groups, the capacity of each node may differ per product group. In most supply chain networks (especially in the fast-moving consumer goods industry), different planners/planning teams work separately at different nodes to determine the loading/unloading timeslots for sending/receiving the shipments. In this paper, a mathematical model is proposed to find the best timeslots for loading/unloading the shipments, minimizing the overall delays subject to the loading/unloading capacity of each node, the required delivery date of each shipment (considering lead times), and the working days of each node. The model was implemented in Python and solved using Python-MIP on a sample data set. Finally, a heuristic algorithm is proposed as a way of improving the solution method, which helps apply the model to larger data sets in real business cases with more nodes and shipments.
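The heuristic idea mentioned at the end can be sketched as a greedy slot assignment that respects per-slot loading and unloading capacities. This is a minimal illustration, not the paper's MIP formulation; transit times are omitted and the data are invented:

```python
def schedule(shipments, load_cap, unload_cap, horizon):
    """Greedy sketch: give each shipment (sorted by due slot) the earliest slot
    with free loading capacity at its source and free unloading capacity at
    its destination. Shipments with no feasible slot stay unscheduled.

    shipments: list of (name, source, dest, due_slot); load_cap/unload_cap map
    a node to its per-slot capacity. Returns the plan and the total delay.
    """
    used_load, used_unload = {}, {}
    plan, total_delay = {}, 0
    for name, src, dst, due in sorted(shipments, key=lambda s: s[3]):
        for t in range(horizon):
            if (used_load.get((src, t), 0) < load_cap[src]
                    and used_unload.get((dst, t), 0) < unload_cap[dst]):
                used_load[(src, t)] = used_load.get((src, t), 0) + 1
                used_unload[(dst, t)] = used_unload.get((dst, t), 0) + 1
                plan[name] = t
                total_delay += max(0, t - due)
                break
    return plan, total_delay
```

An exact MIP would instead minimize total delay over all assignments simultaneously; the greedy pass only gives a feasible warm start.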

Keywords: supply chain management, transportation, multiple-multiple network, timeslots management, mathematical modeling, mixed integer programming

Procedia PDF Downloads 85
5351 Detecting Elderly Abuse in US Nursing Homes Using Machine Learning and Text Analytics

Authors: Minh Huynh, Aaron Heuser, Luke Patterson, Chris Zhang, Mason Miller, Daniel Wang, Sandeep Shetty, Mike Trinh, Abigail Miller, Adaeze Enekwechi, Tenille Daniels, Lu Huynh

Abstract:

Machine learning and text analytics have been used to analyze child abuse, cyberbullying, domestic abuse and domestic violence, and hate speech. However, to the authors' knowledge, no research to date has used these methods to study elder abuse in nursing homes or skilled nursing facilities from field inspection reports. We used machine learning and text analytics methods to analyze 356,000 inspection reports extracted from CMS Form-2567 field inspections of US nursing homes and skilled nursing facilities between 2016 and 2021. Our algorithm detected occurrences of various types of abuse, including physical abuse, psychological abuse, verbal abuse, sexual abuse, and passive and active neglect. For example, to detect physical abuse, our algorithms search for combinations of phrases and words suggesting willful infliction of harm (hitting, pinching or burning, tethering, tying) or consciously ignoring an emergency. To detect occurrences of elder neglect, our algorithm looks for combinations of phrases and words suggesting both passive neglect (neglecting vital needs, allowing malnutrition and dehydration, allowing decubiti, deprivation of information, limitation of freedom, negligence toward safety precautions) and active neglect (intimidation and name-calling, tying the victim up to prevent falls without consent, consciously ignoring an emergency, not calling a physician in spite of indication, stopping important treatments, failure to provide essential care, deprivation of nourishment, leaving a person alone for an inappropriate amount of time, excessive demands in a situation of care). We further compare the prevalence of abuse before and after COVID-19-related restrictions on nursing home visits. We also identified the facilities with the highest number of abuse cases and no abuse-free facilities within a 25-mile radius as the most likely candidates for additional inspections.
We also built an interactive display to visualize the location of these facilities.
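The phrase-and-keyword matching described above can be sketched with simple regular expressions. The tiny lexicon below is an illustrative assumption; the study's actual phrase lists per abuse category are far richer:

```python
import re

# Illustrative keyword lexicon, loosely echoing the examples in the abstract.
LEXICON = {
    "physical_abuse": ["hit", "pinch", "burn", "tether", "tied", "tying"],
    "passive_neglect": ["malnutrition", "dehydration", "decubiti", "left alone"],
    "active_neglect": ["name-calling", "intimidation", "ignored the emergency"],
}

def flag_report(text):
    """Return the abuse categories whose keywords appear in an inspection report."""
    text = text.lower()
    return sorted(cat for cat, words in LEXICON.items()
                  if any(re.search(r"\b" + re.escape(w), text) for w in words))
```

A production pipeline would add negation handling and context windows so that, e.g., "no signs of dehydration" is not flagged.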

Keywords: machine learning, text analytics, elder abuse, elder neglect, nursing home abuse

Procedia PDF Downloads 138
5350 Domestic and Foreign Terrorism: Evaluation of the Breeding Ground

Authors: T. K. Hung

Abstract:

Terrorist acts have occurred across both developed and developing states, with well-identified motivations and causes. For many years, terrorism eradication has been a major topic, yet only passive, reactive measures have been taken. The linkage between where terrorism occurs and where it is bred is not well documented, which explains the passive approach used in counter-terrorism today. This evaluation investigates all post-9/11 terrorism affairs, considering each state's capacity, safety, ease of border access control, religious diversity, and technology access, to measure the state's level as a breeding ground. "Weak" states with poor border access control, resource capacity, and domestic safety are the best breeding grounds for terrorists. Although many attacks were religiously motivated, religious diversity does not predict the breeding ground. Censored technology access, in particular computer-mediated communication, does predict a terrorism breeding ground, moderated by the breeding-ground level of neighboring states.

Keywords: counter-terrorism, lethality, security, terrorism

Procedia PDF Downloads 327
5349 Using the Cloud-Based Watson Deep Learning Platform to Train Models Faster and More Accurately

Authors: Susan Diamond

Abstract:

Machine learning workloads have traditionally been run in high-performance computing (HPC) environments, where users log in to dedicated machines and use the attached GPUs to run training jobs on huge datasets. Training large neural network models is very resource-intensive, and even after exploiting parallelism and accelerators such as GPUs, a single training job can still take days. Consequently, the cost of hardware is a barrier to entry. Even when the upfront cost is not a concern, the lead time to set up such an HPC environment stretches over months, from acquiring the hardware to configuring it with the right firmware and software. Furthermore, scalability is hard to achieve in a rigid traditional lab environment, which is therefore slow to react to dynamic change in the artificial intelligence industry. Watson Deep Learning as a Service is a cloud-based deep learning platform that mitigates the long lead time and high upfront investment in hardware. It enables robust and scalable sharing of resources among the teams in an organization and is designed for on-demand cloud environments. Providing a similar user experience in a multi-tenant cloud environment comes with its own unique challenges regarding fault tolerance, performance, and security. Watson Deep Learning as a Service tackles these challenges and presents a deep learning stack for cloud environments in a secure, scalable, and fault-tolerant manner. It supports a wide range of deep learning frameworks, such as TensorFlow, PyTorch, Caffe, Torch, Theano, and MXNet. These frameworks reduce the effort and skill set required to design, train, and use deep learning models. Deep Learning as a Service is used at IBM by AI researchers in areas including machine translation, computer vision, and healthcare.

Keywords: deep learning, machine learning, cognitive computing, model training

Procedia PDF Downloads 199
5348 Cellular Mobile Telecommunication GSM Radio Base Station Network Planning

Authors: Saeed Alzahrani, Yaser Miaji

Abstract:

The project involves the design and simulation of a mobile cellular telecommunication network using the software tool CelPlanner. The design is mainly concerned with the Global System for Mobile Communications (GSM) and is carried out for a small part of the terrain area allocated to us in Shreveport city. The aim is a network that is cost-effective and that efficiently meets the required Grade of Service (GOS) and Quality of Service (QOS). The expected outcome is a design that gives good coverage of the allocated area with minimum co-channel and adjacent-channel interference. Handover and traffic handling capacity are also taken into consideration and should be adequate for the given area; the traffic handling capacity of the network largely decides whether the designed network is good or bad. The design also takes into account topographical and morphological information.
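GOS in cellular planning is conventionally checked with the Erlang B formula, which relates offered traffic and channel count to blocking probability. A small sketch of that standard calculation (not part of the paper's CelPlanner workflow; the 2% GOS target is a common planning assumption):

```python
def erlang_b(channels, traffic_erlangs):
    """Blocking probability (GOS) from the Erlang B formula, computed with the
    numerically stable recurrence B(0)=1, B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

def channels_for_gos(traffic_erlangs, gos=0.02):
    """Smallest channel count whose blocking probability meets the target GOS."""
    n = 1
    while erlang_b(n, traffic_erlangs) > gos:
        n += 1
    return n
```

For example, a cell offered 5 Erlangs needs 10 traffic channels to stay under 2% blocking, which matches the standard Erlang B tables.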

Keywords: mobile communication, GSM, radio base station, network planning

Procedia PDF Downloads 429
5347 Multi-Vehicle Detection Using Histogram of Oriented Gradients Features and Adaptive Sliding Window Technique

Authors: Saumya Srivastava, Rina Maiti

Abstract:

To achieve better vehicle detection performance in a complex environment, we present an efficient approach for a multi-vehicle detection system using an adaptive sliding window technique. For a given frame, image segmentation is carried out to establish the region of interest. Gradient computation followed by thresholding, denoising, and morphological operations is performed to extract the binary search image. A near-region field and a far-region field are defined to generate hypotheses with the adaptive sliding window technique on the resultant binary search image. For each vehicle candidate, features are extracted using a histogram of oriented gradients, and a pre-trained support vector machine is applied for hypothesis verification. The Kalman filter is then used for tracking the vanishing point. The experimental results show that the method is robust and effective on various roads and driving scenarios; the algorithm was tested on highways and urban roads in India.
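The adaptive part of the sliding window can be sketched as window sizes that grow with distance below the horizon line, so the far-region field gets small windows and the near-region field large ones. Sizes, stride, and growth factor below are illustrative assumptions, not the paper's parameters:

```python
def adaptive_windows(img_h, img_w, horizon_y, base=16, growth=2.0):
    """Generate (x, y, w, h) candidate windows whose size scales with the
    vertical distance below the horizon: rows near the horizon (far field)
    get small windows, rows near the image bottom (near field) large ones."""
    hypotheses = []
    y = horizon_y
    while y < img_h:
        frac = (y - horizon_y) / max(1, img_h - horizon_y)  # 0 at horizon, 1 at bottom
        size = int(base * (1 + growth * frac))
        stride = max(1, size // 2)
        for x in range(0, img_w - size + 1, stride):
            if y + size <= img_h:
                hypotheses.append((x, y, size, size))
        y += stride
    return hypotheses
```

Each hypothesis would then be described by HOG features and passed to the pre-trained SVM for verification.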

Keywords: gradient, vehicle detection, histograms of oriented gradients, support vector machine

Procedia PDF Downloads 112
5346 Seismic Assessment of RC Structures

Authors: Badla Oualid

Abstract:

A great number of existing buildings were designed without seismic design criteria or detailing rules for dissipative structural behavior. It is therefore of critical importance that structures needing seismic retrofitting are correctly identified and that an optimal retrofit is conducted in a cost-effective fashion. Among the available retrofitting techniques, steel braces can be considered one of the most efficient means of upgrading the seismic performance of RC structures. This paper investigates the seismic behavior of RC buildings strengthened with different types of steel braces: X-braced, inverted-V-braced, ZX-braced, and zipper-braced. Static nonlinear pushover analysis has been conducted to estimate the capacity of three-story and six-story buildings with different brace-frame systems and different brace cross-sections. It is found that adding braces enhances the global capacity of the buildings compared with the unbraced case, and that the X and zipper bracing systems performed better, depending on the type and size of the cross-section.

Keywords: seismic design, strengthening, RC frames, steel bracing, pushover analysis

Procedia PDF Downloads 514
5345 Synthesis and Characterization of Thiourea-Formaldehyde Coated Fe3O4 (TUF@Fe3O4) and Its Application for Adsorption of Methylene Blue

Authors: Saad M. Alshehri, Tansir Ahamad

Abstract:

A thiourea-formaldehyde pre-polymer (TUF) was prepared by reacting thiourea and formaldehyde in a basic medium and used as a coating material for magnetite Fe3O4. The synthesized polymer-coated microspheres (TUF@Fe3O4) were characterized using FTIR, TGA, SEM, and TEM. Their BET surface area was up to 1680 m²/g. The adsorption capacity of the TUF@Fe3O4 product was evaluated through its adsorption of methylene blue (MB) in water at different pH values and temperatures. We found that the adsorption process was well described by both the Langmuir and Freundlich isotherm models. The kinetics of MB adsorption onto TUF@Fe3O4 were examined to provide a clearer interpretation of the adsorption rate and uptake mechanism; the overall kinetic data were acceptably explained by a pseudo-second-order rate model. The evaluated ΔG° and ΔH° indicate the spontaneous and exothermic nature of the process, and adsorption takes place with a decrease in entropy (ΔS° is negative). The monolayer capacity for MB was up to 450 mg/g, one of the highest among similar polymeric products, owing to the large BET surface area.
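The monolayer capacity is typically obtained by fitting the linearised Langmuir form Ce/qe = Ce/q_max + 1/(K_L·q_max). A sketch of that standard fit on synthetic data (the data points and K_L below are invented for illustration; only the 450 mg/g capacity echoes the abstract):

```python
def langmuir_fit(ce, qe):
    """Estimate q_max and K_L from the linearised Langmuir isotherm
    Ce/qe = Ce/q_max + 1/(K_L*q_max) by ordinary least squares.

    ce: equilibrium concentrations; qe: equilibrium uptakes (same units as the
    isotherm data, e.g. mg/L and mg/g). Returns (q_max, K_L)."""
    x = ce
    y = [c / q for c, q in zip(ce, qe)]           # Ce/qe is linear in Ce
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    q_max = 1 / slope                              # slope = 1/q_max
    k_l = slope / intercept                        # intercept = 1/(K_L*q_max)
    return q_max, k_l
```

The pseudo-second-order kinetic constants are usually recovered the same way, from the linear t/qt versus t plot.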

Keywords: TGA, FTIR, magnetite, thiourea formaldehyde resin, methylene blue, adsorption

Procedia PDF Downloads 326
5344 In-Service Training to Enhance Community Based Corrections

Authors: Varathagowry Vasudevan

Abstract:

This paper attempts to demonstrate the importance of capacity building for para-professionals in community-based corrections, with in-service training as a responsive methodology for enhancing family and child welfare and best practice. To protect families and children and strengthen best practices, the National Institute of Social Development, with support from the Department of Community-Based Corrections, initiated a Diploma programme in community-based corrections to update the knowledge, skills, attitudes, and mindset of the work supervisors employed by the department, training quality personnel in best practices and fieldwork skills. This study, based on reflective practice, illustrates the effectiveness of the in-service training curriculum as a tool for enhancing the capacities of the relevant officers in Sri Lanka. The data for the study were obtained from participants and the coordinator through classroom discussions and key informant interviews. The study showed that the officers' use of the tailor-made curriculum and field practice manual during training depended heavily on the provision of appropriate administrative facilities, passion, and a teaching methodology that promotes the capacity to adopt best practices. It further demonstrated that a professional social work response, strengthening families within the legal framework, was grounded in the skills imbibed through training in appropriate methodology practiced in the field under guided supervision.

Keywords: capacity building, community-based corrections, in-service training, paraprofessionals

Procedia PDF Downloads 145
5343 External Strengthening of RC Continuous Beams Using FRP Plates: Finite Element Model

Authors: Mohammed A. Sakr, Tarek M. Khalifa, Walid N. Mansour

Abstract:

Fiber-reinforced polymer (FRP) installation is a very effective way to repair and strengthen structures that have become structurally weak over their life span, and the technique has attracted researchers' attention over the last two decades. This paper presents a simple uniaxial nonlinear finite element model (UNFEM) able to accurately estimate the load-carrying capacity, the different failure modes, and the interfacial stresses of reinforced concrete (RC) continuous beams flexurally strengthened with FRP plates externally bonded to the upper and lower fibers. Results of the proposed finite element (FE) model are verified by comparison with experimental measurements available in the literature, and the agreement between numerical and experimental results is very good. Considering the fracture energy of the adhesive is necessary to obtain a realistic load-carrying capacity for continuous RC beams strengthened with FRP. This simple UNFEM can help design engineers model their strengthened structures and solve their problems.

Keywords: continuous beams, debonding, finite element, fibre reinforced polymer

Procedia PDF Downloads 469
5342 Quantification Model for Capability Evaluation of Optical-Based in-Situ Monitoring System for Laser Powder Bed Fusion (LPBF) Process

Authors: Song Zhang, Hui Wang, Johannes Henrich Schleifenbaum

Abstract:

Due to the increasing demand for quality assurance and reliability in additive manufacturing, advanced in-situ monitoring systems are required to detect process anomalies as input for further process control. Optical monitoring systems such as CMOS and NIR cameras have proved effective for monitoring geometrical distortion and abnormal thermal distributions, and many studies and applications focus on their availability for detecting various types of defects. However, the capability of the monitoring setup itself is usually not quantified. In this study, a quantification model for evaluating the capability of monitoring setups for the LPBF machine is presented, based on monitoring data acquired from a designed test artifact, and the design of the relevant test artifacts is discussed. The monitoring setup is evaluated based on its hardware properties, the location of its integration, and the lighting conditions, and the data processing methodology used to quantify the capability in each respect is described. The minimal detectable feature size of the monitoring setup in the application is estimated by quantifying its resolution and accuracy. The quantification model is validated using a CCD camera-based monitoring system for LPBF machines in the laboratory with different setups. The results show that the model quantifies a monitoring system's performance, making it possible to evaluate monitoring systems with the same concept but different setups for the LPBF process, and they indicate directions for improving the setups.
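A first-order sketch of the resolution part of such a capability estimate: the object-space pixel footprint follows from the sensor pixel pitch and the optical magnification, and a defect must span several pixels to be reliably resolved. The 3-pixel criterion and the example numbers are illustrative assumptions, not the paper's model:

```python
def min_detectable_size(pixel_pitch_um, magnification, pixels_required=3):
    """Object-space size (in micrometres) of the smallest detectable defect:
    the pixel footprint on the build plane (pixel pitch / optical
    magnification) times the number of pixels a feature must span to be
    resolved (here 3, an illustrative assumption)."""
    footprint_um = pixel_pitch_um / magnification
    return pixels_required * footprint_um
```

A full capability model would additionally fold in the measured accuracy from the test artifact and the lighting-dependent contrast.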

Keywords: data processing, in-situ monitoring, LPBF process, optical system, quantification model, test artifact

Procedia PDF Downloads 190
5341 Optimization of Moisture Content for Highest Tensile Strength of Instant Soluble Milk Tablet and Flowability of Milk Powder

Authors: Siddharth Vishwakarma, Danie Shajie A., Mishra H. N.

Abstract:

Milk powder is very useful in areas with a low milk supply, but the exact amount to add for one glass of milk is hard to judge, and handling is difficult. Hence the idea of an instant soluble milk tablet, valued for its high solubility and easy handling. The moisture content of the milk tablets is increased by the direct addition of water, with no additives for binding. The variation of the tensile strength of the tablets and of the flowability of the milk powder with moisture content is analyzed and optimized, using response surface methodology, for the highest tablet tensile strength and for flowability above a particular threshold. The flowability value is needed for ease of metering the milk powder as feed in the designed tablet-making machine. The instant soluble nature of the tablets depends purely on their disintegration characteristics in water, whose study is in progress. The optimization results are very useful for the commercialization of milk tablets.

Keywords: flowability, milk powder, response surface methodology, tablet making machine, tensile strength

Procedia PDF Downloads 167
5340 Graph Clustering Unveiled: ClusterSyn - A Machine Learning Framework for Predicting Anti-Cancer Drug Synergy Scores

Authors: Babak Bahri, Fatemeh Yassaee Meybodi, Changiz Eslahchi

Abstract:

In the pursuit of effective cancer therapies, exploring combinatorial drug regimens is crucial for leveraging synergistic interactions between drugs, thereby improving treatment efficacy and overcoming drug resistance. However, identifying synergistic drug pairs is challenging because of the vast combinatorial space and the limitations of experimental approaches. This study introduces ClusterSyn, a machine learning (ML)-powered framework for classifying anti-cancer drug synergy scores. ClusterSyn employs a two-step approach involving drug clustering and synergy score prediction with a fully connected deep neural network. For each cell line in the training dataset, a drug graph is constructed, with nodes representing drugs and edge weights denoting synergy scores between drug pairs. Drugs are clustered using the Markov clustering (MCL) algorithm, and vectors representing the similarity of a drug pair to each cluster are input to the deep neural network for synergy score prediction (synergy or antagonism). The clustering results demonstrate effective grouping of drugs by synergy score, aligning similar synergy profiles. The network's predictions, together with the synergy scores of the two drugs against other drugs in their clusters, are then used to predict the synergy score of the drug pair under consideration. This approach facilitates comparative analysis with clustering- and regression-based methods and reveals the superior performance of ClusterSyn over state-of-the-art methods such as DeepSynergy and DeepDDS on diverse datasets such as O'Neil and ALMANAC. The results highlight the potential of ClusterSyn as a versatile tool for predicting anti-cancer drug synergy scores.
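The MCL step can be sketched in a few lines: column-normalise the weighted drug graph into a stochastic matrix, then alternate expansion (matrix squaring) and inflation (element-wise powering) until clusters emerge. This is a minimal, dense-matrix illustration with invented weights, not ClusterSyn's implementation:

```python
def mcl(adj, inflation=2.0, iters=20):
    """Minimal Markov clustering sketch on a dense matrix of non-negative
    pairwise weights (e.g. synergy scores clipped at zero). Returns the
    clusters as a set of frozensets of node indices."""
    n = len(adj)
    # add self-loops, then column-normalise into a stochastic matrix
    m = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]

    def col_normalise(mat):
        for j in range(n):
            s = sum(mat[i][j] for i in range(n))
            for i in range(n):
                mat[i][j] /= s
        return mat

    m = col_normalise(m)
    for _ in range(iters):
        # expansion: square the matrix (two steps of the random walk)
        p = [[sum(m[i][k] * m[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        # inflation: element-wise power sharpens strong transitions
        m = col_normalise([[p[i][j] ** inflation for j in range(n)] for i in range(n)])
    # read clusters off the rows that retain probability mass
    clusters = set()
    for i in range(n):
        support = frozenset(j for j in range(n) if m[i][j] > 1e-6)
        if support:
            clusters.add(support)
    return clusters
```

On a graph with two disconnected, fully synergistic triples, MCL recovers exactly those two groups.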

Keywords: drug synergy, clustering, prediction, machine learning, deep learning

Procedia PDF Downloads 63
5339 Functional Neural Network for Decision Processing: A Racing Network of Programmable Neurons Where the Operating Model Is the Network Itself

Authors: Frederic Jumelle, Kelvin So, Didan Deng

Abstract:

In this paper, we introduce a model of artificial general intelligence (AGI), the functional neural network (FNN), for modeling human decision-making processes. The FNN is composed of multiple artificial mirror neurons (AMN) racing in the network. Each AMN has a similar structure, programmed independently by the users, and is composed of an intention wheel, a motor core, and a sensory core racing at a specific velocity. We discuss the mathematics of the node formulation and the racing mechanism of multiple nodes in the network, develop the group decision process with fuzzy logic, and show how these conceptual methods translate into practical methods of simulation and operation. Finally, we describe possible future research directions in finance, education, and medicine, including the opportunity to design an intelligent learning agent with applications in AGI. We believe that the FNN has promising potential to transform the way we compute decision-making and to lead to a new generation of AI chips for seamless human-machine interactions (HMI).

Keywords: neural computing, human-machine interaction, artificial general intelligence, decision processing

Procedia PDF Downloads 116
5338 Evaluating Climate Risks to Enhance Resilience in Durban, South Africa

Authors: Cabangile Ncengeni Ngwane, Gerald Mills

Abstract:

Anthropogenic climate change is exacerbating natural hazards such as droughts, heat waves, and sea-level rise. The associated risks are greatest where socio-ecological systems are exposed to these changes and where populations and infrastructure are vulnerable. Identifying the communities at risk and enhancing local resilience are key issues in responding to current and projected climate change. This paper explores the types of risk associated with multiple overlapping hazards in Durban, South Africa, where the social, cultural, and economic dimensions that contribute to exposure and vulnerability are compounded by the city's history of apartheid. As a result, climate change risks are highly concentrated in marginalized communities that have the least adaptive capacity. In this research, a Geographic Information System is used to explore the spatial correspondence among layers representing hazards, exposure, and vulnerability across Durban. This quantitative analysis will allow the authors to identify communities at high risk and to focus the study on the current human-environment relationships that produce risk inequalities. The work will also employ qualitative methods to critically examine policies (including educational practices and financial support systems) and on-the-ground actions designed to improve the adaptive capacity of these communities and meet the UN Sustainable Development Goals. It will contribute to a growing body of literature on disaster risk management, especially as it relates to developing economies where socio-economic inequalities are correlated with ethnicity and race.

Keywords: adaptive capacity, disaster risk reduction, exposure, resilience, South Africa

Procedia PDF Downloads 137
5337 Assessing Online Learning Paths in a Learning Management System Using a Data Mining and Machine Learning Approach

Authors: Alvaro Figueira, Bruno Cabral

Abstract:

Nowadays, students are routinely assessed through online platforms, and educators have completed the transition from paper to digital. The use of a diversified set of question types, ranging from quizzes to open questions, is currently common in most university courses. In many courses today, the evaluation methodology also fosters the students’ online participation in forums, the download and upload of modified files, or even participation in group activities. At the same time, new pedagogical theories that promote the active participation of students in the learning process, together with the systematic use of problem-based learning, are being adopted using an eLearning system for that purpose. However, although these activities can generate a lot of feedback to students, it is usually restricted to the assessment of well-defined online tasks. In this article, we propose an automatic system that informs students of abnormal deviations from a 'correct' learning path in the course. Our approach is based on the observation that obtaining this information earlier in the semester may give students and educators an opportunity to resolve an eventual problem regarding the student’s current online actions towards the course. Our goal is to prevent situations that have a significant probability of leading to a poor grade and, eventually, to failing. In the major learning management systems (LMS) currently available, the interaction between the students and the system itself is registered in log files as registers that mark the beginning of each action performed by the user. Our proposed system uses that logged information to derive new information: the time each student spends on each activity, the time and order of the resources used by the student and, finally, the online resource usage pattern.
Then, using the grades assigned to students in previous years, we built a learning dataset that is used to feed a machine learning meta-classifier. The produced classification model is then used to predict the grade a learning path is heading towards in the current year. This approach serves not only the teacher but also the student, who receives automatic feedback on her current situation with past years as a perspective. Our system can be applied to online courses that use an online platform that stores user actions in a log file and that has access to other students’ evaluations. The system is based on a data mining process over the log files and on a self-feedback machine learning algorithm that works paired with the Moodle LMS.
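The log-derivation step described above can be sketched in a few lines. The snippet below uses a hypothetical log format (the actual Moodle log schema carries more fields) and approximates the time spent on each activity as the gap between a student's consecutive begin-of-action registers:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical begin-of-action registers: (timestamp, student, activity).
# Real Moodle logs carry more fields; this is a minimal sketch.
log = [
    ("2023-10-01 09:00:00", "s1", "quiz_1"),
    ("2023-10-01 09:25:00", "s1", "forum"),
    ("2023-10-01 09:40:00", "s1", "upload"),
]

def time_per_activity(log):
    """Approximate minutes spent per activity: each register only marks the
    beginning of an action, so a duration is the gap until the student's
    next action (the last action has no measurable duration)."""
    by_student = defaultdict(list)
    for ts, student, activity in log:
        by_student[student].append(
            (datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"), activity))
    features = defaultdict(lambda: defaultdict(float))
    for student, events in by_student.items():
        events.sort()
        for (t0, act), (t1, _) in zip(events, events[1:]):
            features[student][act] += (t1 - t0).total_seconds() / 60.0
    return features

feats = time_per_activity(log)
print(feats["s1"]["quiz_1"], feats["s1"]["forum"])  # 25.0 15.0
```

Per-activity durations of this kind, together with the order of resource usage, would form the feature vectors fed to the meta-classifier trained on previous years' grades.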

Keywords: data mining, e-learning, grade prediction, machine learning, student learning path

Procedia PDF Downloads 115
5336 [Keynote Talk]: Water Resources Vulnerability Assessment to Climate Change in a Semi-Arid Basin of South India

Authors: K. Shimola, M. Krishnaveni

Abstract:

This paper examines the vulnerability of water resources in a semi-arid basin using a 4-step approach. The vulnerability assessment framework is developed to study water resources vulnerability and includes the creation of GIS-based vulnerability maps. These maps represent the spatial variability of the vulnerability index. The paper introduces the 4-step approach, which incorporates a new set of indicators. The approach is demonstrated using a framework composed of precipitation data for the 1975–2010 period, temperature data for the 1965–2010 period, hydrological model outputs and a water resources GIS database. The vulnerability assessment is a function of three components: exposure, sensitivity and adaptive capacity. The current water resources vulnerability is assessed using GIS-based spatio-temporal information. The rainfall coefficient of variation, monsoon onset and end dates, rainy days, seasonality indices and temperature are selected for the criterion ‘exposure’. Water yield, groundwater recharge and evapotranspiration (ET) are selected for the criterion ‘sensitivity’. Type of irrigation and storage structures are selected for the criterion ‘adaptive capacity’. These indicators were mapped and integrated in a GIS environment using overlay analysis. The five sub-basins, namely Arjunanadhi, Kousiganadhi, Sindapalli-Uppodai and Vallampatti Odai, fall under a medium vulnerability profile, which indicates that the basin is under moderate water resources stress. The paper also explores the prioritization of sub-basin-wise adaptation strategies to climate change based on the vulnerability indices.
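The overlay step can be mimicked numerically. The sketch below assumes equal weights and min-max normalised indicator scores per sub-basin (the actual study derives these from GIS layers; the raw values here are invented), and combines exposure, sensitivity and inverted adaptive capacity into a vulnerability index:

```python
def minmax(values):
    """Min-max normalise a list of indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def vulnerability_index(exposure, sensitivity, adaptive_capacity):
    """Equal-weight composite: high adaptive capacity lowers vulnerability."""
    e, s, a = minmax(exposure), minmax(sensitivity), minmax(adaptive_capacity)
    return [(ei + si + (1.0 - ai)) / 3.0 for ei, si, ai in zip(e, s, a)]

def classify(vi):
    """Equal-interval classes over [0, 1]."""
    return "low" if vi < 1/3 else ("medium" if vi < 2/3 else "high")

# Hypothetical raw indicator scores for four sub-basins.
exposure = [0.8, 0.5, 0.6, 0.7]     # e.g. rainfall CV, rainy days
sensitivity = [0.4, 0.6, 0.5, 0.5]  # e.g. water yield, ET
adaptive = [0.3, 0.5, 0.4, 0.4]     # e.g. irrigation, storage structures

vi = vulnerability_index(exposure, sensitivity, adaptive)
print([round(v, 3) for v in vi])  # [0.667, 0.333, 0.444, 0.556]
```

In the study, the same aggregation is performed spatially by overlaying raster layers of the indicators in GIS rather than on per-basin scalars.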

Keywords: adaptive capacity, exposure, overlay analysis, sensitivity, vulnerability

Procedia PDF Downloads 306
5335 Constraint-Directed Techniques for Transport Scheduling with Capacity Restrictions of Automotive Manufacturing Components

Authors: Martha Ndeley, John Ikome

Abstract:

In this paper, we expand the scope of constraint-directed techniques to deal with transportation scheduling under capacity restrictions where the scheduling problem includes alternative activities. That is, the scheduling problem consists not only of determining when an activity is to be executed, but also of determining which set of alternative activities is to be executed at every level of transportation, from input to output. Such problems encompass both alternative resource problems and alternative process plan problems. We formulate a constraint-based representation of alternative activities to model problems containing such choices. We then extend existing constraint-directed scheduling heuristic commitment techniques and propagators to reason directly about the fact that an activity does not necessarily have to exist in the final transportation schedule. Tentative results show that an algorithm using a novel texture-based heuristic commitment technique with propagators achieves the best overall performance among the techniques tested.
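The core idea of choosing among alternative activities can be shown with a deliberately tiny brute-force sketch (the paper's constraint propagators avoid this enumeration; the resource names and durations here are invented):

```python
from itertools import product

# Each transport job has alternative activities (resource, duration);
# exactly one alternative per job must exist in the final schedule.
jobs = {
    "load_j1": [("truck_A", 3), ("truck_B", 5)],
    "load_j2": [("truck_A", 4), ("truck_B", 2)],
}

def makespan(assignment):
    """Serialise each resource's activities; makespan = heaviest load."""
    load = {}
    for resource, duration in assignment:
        load[resource] = load.get(resource, 0) + duration
    return max(load.values())

# Enumerate every combination of alternatives and keep the best one.
best = min(product(*jobs.values()), key=makespan)
print(best, makespan(best))  # (('truck_A', 3), ('truck_B', 2)) 3
```

Constraint-directed techniques reach the same conclusion without full enumeration, by propagating the consequences of committing to (or ruling out) an alternative's existence.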

Keywords: production, transportation, scheduling, integrated

Procedia PDF Downloads 346
5334 Discrete State Prediction Algorithm Design with Self Performance Enhancement Capacity

Authors: Smail Tigani, Mohamed Ouzzif

Abstract:

This work presents a discrete quantitative state prediction algorithm with intelligent behavior that makes it able to improve some aspects of its own performance. The specificity of this algorithm is its capacity to rectify its own prediction strategy before the final decision. The auto-rectification mechanism is based on two parallel mathematical models. On the one hand, the algorithm predicts the next state based on an event transition matrix updated after each observation. On the other hand, the algorithm extracts the trend of its residues with a linear regression over the historical residue data points in order to rectify the first decision if needed. For a normal distribution, the interaction between the two models allows the algorithm to self-optimize its performance and make better predictions. A designed key performance indicator, computed during a Monte Carlo simulation, shows the advantages of the proposed approach compared with a traditional one.
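A minimal sketch of the two parallel models (a transition-count prediction plus a least-squares trend over past residues; the state encoding and data are illustrative, not the paper's):

```python
from collections import Counter, defaultdict

def transition_predict(sequence):
    """Model 1: predict the next state as the most frequent successor of
    the last observed state, from transition counts."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(sequence, sequence[1:]):
        counts[prev][nxt] += 1
    return counts[sequence[-1]].most_common(1)[0][0]

def residue_trend(residues):
    """Model 2: least-squares line over historical residues; returns the
    extrapolated next residue, used to rectify the first decision."""
    xs = list(range(len(residues)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(residues) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, residues)) / sxx
    intercept = my - slope * mx
    return intercept + slope * n  # value at the next time step

seq = [0, 1, 2, 0, 1, 2, 0, 1]               # observed discrete states
first_guess = transition_predict(seq)         # Markov-style first decision
drift = residue_trend([0.0, 0.1, 0.2, 0.3])   # residues trending upward
rectified = first_guess + round(drift)        # rectify if the trend is material
print(first_guess, rectified)  # 2 2
```

Here the residue trend (0.4) rounds to zero, so the first decision stands; a stronger drift would shift the final prediction.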

Keywords: discrete state, Markov chains, linear regression, auto-adaptive systems, decision making, Monte Carlo simulation

Procedia PDF Downloads 488
5333 Effective Doping Engineering of Na₃V₂(PO₄)₂F₃ as a High-Performance Cathode Material for Sodium-Ion Batteries

Authors: Ramon Alberto Paredes Camacho, Li Lu

Abstract:

Sustainable batteries are possible through the development of cheaper and greener alternatives, the most feasible of which is epitomized by sodium-ion batteries (SIBs). Na₃V₂(PO₄)₂F₃ (NVPF), an important member of the Na-superionic-conductor (NASICON) family of materials, has recently been in the spotlight due to its interesting electrochemical properties when used as a cathode: namely, a high specific capacity of 128 mAh g⁻¹, a high energy density of 507 Wh kg⁻¹, an elevated working potential at which the vanadium redox couples can be activated (with an average value around 3.9 V), and a small volume variation of less than 2%. These traits grant NVPF an excellent perspective as a cathode material for the next generation of sodium batteries. Unfortunately, because of its low inherent electrical conductivity and a high energy barrier that impedes the mobilization of all the available Na ions per formula unit, the overall electrochemical performance suffers substantial degradation, ultimately obstructing its industrial use. Many approaches have been developed to remedy these issues, among which nanostructural design, carbon coating, and ion doping are the most effective. This investigation focuses on enhancing the electrochemical response of NVPF by doping metal ions into the crystal lattice, substituting vanadium atoms. A facile sol-gel process is employed, with citric acid as the chelator and the carbon source. The optimized conditions circumvent fluorine sublimation, ratifying the material’s purity. One of the reasons behind the large ionic improvement is the attraction of extra Na ions into the crystalline structure due to a charge imbalance produced by the valence of the doped ions (+2), which is lower than that of vanadium (+3). Superior stability (higher than 90% at a current density of 20C) and capacity retention at an extremely high current density of 50C are demonstrated by our doped NVPF. The material continues to retain high capacity values at low and high temperatures.
In addition, a full NVPF//hard carbon cell shows high capacity values and high stability at −20 and 60 °C. Our doping strategy proves to significantly increase the ionic and electronic conductivity of NVPF even under extreme conditions, delivering outstanding electrochemical performance and paving the way for advanced high-potential cathode materials.

Keywords: sodium-ion batteries, cathode materials, NASICON, Na₃V₂(PO₄)₂F₃, ion doping

Procedia PDF Downloads 46
5332 Comparative Analysis of Predictive Models for Customer Churn Prediction in the Telecommunication Industry

Authors: Deepika Christopher, Garima Anand

Abstract:

To determine the best model for churn prediction in the telecom industry, this paper compares 11 machine learning algorithms, namely Logistic Regression, Support Vector Machine, Random Forest, Decision Tree, XGBoost, LightGBM, CatBoost, AdaBoost, Extra Trees, Deep Neural Network, and a Hybrid Model (MLPClassifier). It also aims to pinpoint the top three factors that lead to customer churn and conducts customer segmentation to identify vulnerable groups. According to the data, the Logistic Regression model performs the best, with an F1 score of 0.6215, 81.76% accuracy, 68.95% precision, and 56.57% recall. The top three attributes that cause churn are found to be tenure, Internet Service Fiber optic, and Internet Service DSL; conversely, the three best-performing models in this article are Logistic Regression, Deep Neural Network, and AdaBoost. The K-means algorithm is applied to establish and analyze four different customer clusters. This study effectively identifies customers at risk of churn and may be utilized to develop and execute strategies that lower customer attrition.
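The reported scores are internally consistent: F1 is the harmonic mean of precision and recall, as a quick check using the paper's own figures confirms:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Figures reported for the Logistic Regression model.
p, r = 0.6895, 0.5657
print(round(f1_score(p, r), 4))  # 0.6215
```

The relatively low recall (56.57%) against higher accuracy is typical of imbalanced churn datasets, where the majority non-churn class dominates the accuracy figure.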

Keywords: attrition, retention, predictive modeling, customer segmentation, telecommunications

Procedia PDF Downloads 45
5331 Implementation of Correlation-Based Data Analysis as a Preliminary Stage for the Prediction of Geometric Dimensions Using Machine Learning in the Forming of Car Seat Rails

Authors: Housein Deli, Loui Al-Shrouf, Hammoud Al Joumaa, Mohieddine Jelali

Abstract:

When forming metallic materials, fluctuations in material properties, process conditions, and wear lead to deviations in the component geometry. Several hundred features sometimes need to be measured, especially for functional and safety-relevant components. Due to the large number of features and the accuracy requirements, these can only be measured offline. The risk of producing components outside the tolerances is minimized, but not eliminated, by the statistical evaluation of process capability and by control measurements. The inspection intervals are based on the acceptable risk and come at the expense of productivity, yet remain reactive and, in some cases, considerably delayed. Owing to the considerable progress made in the field of condition monitoring and measurement technology, permanently installed sensor systems, in combination with machine learning and artificial intelligence in particular, offer the potential to independently derive forecasts for component geometry and thus eliminate the risk of defective products actively and preventively. The reliability of forecasts depends on the quality, completeness, and timeliness of the data. Measuring all geometric characteristics is neither sensible nor technically possible. This paper therefore uses the example of car seat rail production to discuss the necessary first step of feature selection and reduction by correlation analysis, as otherwise it would not be possible to forecast components in real time and inline. Four different car seat rails with an average of 130 features were selected and measured using a coordinate measuring machine (CMM). Running such a measuring program alone takes up to 20 minutes. In practice, this results in the risk of faulty production of at least 2000 components that have to be sorted or scrapped if the measurement results are negative.
Over a period of two months, all measurement data (>200 measurements per variant) was collected and evaluated using correlation analysis. As part of this study, the number of characteristics to be measured for all six car seat rail variants was reduced by over 80%. Specifically, direct correlations were proven for almost 100 of an average of 125 characteristics for four different products. A further 10 features correlate via indirect relationships, so that the number of features required for a prediction could be reduced to fewer than 20. A correlation factor of >0.8 was assumed for all correlations.
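The reduction step can be sketched as a greedy filter over Pearson correlations (the feature names and measurements below are invented; the study works on CMM data):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

def reduce_features(features, threshold=0.8):
    """Greedily keep a feature only if it is not strongly correlated
    (|r| > threshold) with any already-kept feature; dropped features can
    later be predicted from the kept ones."""
    kept = []
    for name, values in features.items():
        if all(abs(pearson(values, features[k])) <= threshold for k in kept):
            kept.append(name)
    return kept

# Hypothetical per-component measurements of three geometric characteristics.
features = {
    "rail_width": [10.0, 10.2, 10.1, 10.3],
    "flange_gap": [5.0, 5.1, 5.05, 5.15],   # tracks rail_width exactly
    "hole_depth": [2.0, 1.8, 2.1, 1.9],     # independent of the others
}
print(reduce_features(features))  # ['rail_width', 'hole_depth']
```

With the study's |r| > 0.8 threshold, a redundant characteristic like flange_gap no longer needs to be measured, which is how the measuring program shrinks from roughly 130 characteristics to fewer than 20.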

Keywords: long-term SHM, condition monitoring, machine learning, correlation analysis, component prediction, wear prediction, regression analysis

Procedia PDF Downloads 23
5330 Estimating CO₂ Storage Capacity under Geological Uncertainty Using 3D Geological Modeling of Unconventional Reservoir Rocks in Block nv32, Shenvsi Oilfield, China

Authors: Ayman Mutahar Alrassas, Shaoran Ren, Renyuan Ren, Hung Vo Thanh, Mohammed Hail Hakimi, Zhenliang Guan

Abstract:

The significant effect of CO₂ on the global climate and the environment has gained growing concern worldwide. Enhanced oil recovery (EOR) associated with sequestration of CO₂, particularly into depleted oil reservoirs, is considered a viable approach under financial limitations, since it improves oil recovery from existing reservoirs and strengthens the link between global-scale CO₂ capture and geological sequestration. Consequently, practical measures are required to attain large-scale CO₂ emission reduction. This paper presents an integrated modeling workflow to construct an accurate 3D reservoir geological model for estimating the CO₂ storage capacity under geological uncertainty in an unconventional oil reservoir of the Paleogene Shahejie Formation (Es1) in block Nv32, Shenvsi oilfield, China. To this end, geophysical data, including well logs from twenty-two well locations and seismic data, were combined with geological and engineering data and used to construct the 3D reservoir geological model. The geological modeling focused on four tight reservoir units of the Shahejie Formation (Es1-x1, Es1-x2, Es1-x3, and Es1-x4). The validated 3D reservoir models were subsequently used to calculate the theoretical CO₂ storage capacity in block Nv32, Shenvsi oilfield. Well logs were utilized to predict petrophysical properties such as porosity, permeability, and lithofacies, and indicate that the Es1 reservoir units are mainly sandstone, shale, and limestone, with proportions of 38.09%, 32.42%, and 29.49%, respectively. Well log-based petrophysical results also show that the Es1 reservoir units generally exhibit 2–36% porosity, 0.017 mD to 974.8 mD permeability, and moderate to good net-to-gross ratios.
These estimated values of porosity, permeability, lithofacies, and net-to-gross ratio were upscaled and distributed laterally using the Sequential Gaussian Simulation (SGS) and Sequential Indicator Simulation (SIS) methods to generate the 3D reservoir geological models. The reservoir geological models show lateral heterogeneities in the reservoir properties and lithofacies, and the best reservoir rocks exist in the Es1-x4, Es1-x3, and Es1-x2 units, respectively. In addition, the reservoir volumetrics of the Es1 units in block Nv32 were also estimated based on the petrophysical property models and found to be between 0.554368
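Theoretical capacities of this kind are commonly estimated with the volumetric method; a sketch with purely illustrative inputs (not the block Nv32 figures) is:

```python
def co2_storage_capacity_mt(bulk_volume_m3, ntg, porosity,
                            rho_co2_kg_m3, storage_eff):
    """Volumetric estimate: M = V_bulk * NTG * phi * rho_CO2 * E,
    returned in megatonnes (1 Mt = 1e9 kg)."""
    mass_kg = bulk_volume_m3 * ntg * porosity * rho_co2_kg_m3 * storage_eff
    return mass_kg / 1e9

# Illustrative inputs only: 1 km^3 of bulk rock, 60% net-to-gross,
# 15% porosity, CO2 density of 600 kg/m^3 at reservoir conditions,
# and a 10% storage efficiency factor.
print(co2_storage_capacity_mt(1e9, 0.60, 0.15, 600.0, 0.10))  # 5.4
```

In the workflow above, the bulk volume, net-to-gross, and porosity terms come from the 3D geological model, which is why the capacity estimate inherits the model's geological uncertainty.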

Keywords: CO₂ storage capacity, 3D geological model, geological uncertainty, unconventional oil reservoir, block Nv32

Procedia PDF Downloads 162
5329 Machine Learning and Internet of Thing for Smart-Hydrology of the Mantaro River Basin

Authors: Julio Jesus Salazar, Julio Jesus De Lama

Abstract:

The fundamental objective of hydrological studies applied to the engineering field is to determine the statistically consistent volumes or water flows that, in each case, allow us to size or design a series of elements or structures to effectively manage and develop a river basin. To determine these values, there are several ways of working within the framework of traditional hydrology: (1) study each of the factors that influence the hydrological cycle, (2) study the historical behavior of the hydrology of the area, (3) study the historical behavior of hydrologically similar zones, and (4) other studies (rain simulators or experimental basins). Of course, this range of studies in a given basin is very varied and complex and presents the difficulty of collecting the data in real time. In this complex space, the study of the variables can only be mastered by collecting and transmitting data to decision centers through the Internet of Things and artificial intelligence. Thus, this research work implemented the learning project of the sub-basin of the Shullcas river in the Andean basin of the Mantaro river in Peru. The sensor firmware to collect and communicate hydrological parameter data was programmed and tested in similar basins of the European Union. The machine learning applications were programmed to choose the algorithms that lead to the best solution for the determination of the rainfall-runoff relationship captured in the different polygons of the sub-basin. Tests were carried out in the mountains of Europe, and in the sub-basins of the Shullcas river (Huancayo) and the Yauli river (Jauja), at altitudes close to 5000 m.a.s.l., yielding the following conclusions: to guarantee correct communication, the distance between devices should not exceed 15 km.
To minimize the energy consumption of the devices and avoid collisions between packets, it is advisable to keep distances between 5 and 10 km; in this way, the transmission power can be reduced and a higher bitrate can be used. In case the communication elements of the devices of the network (Internet of Things) installed in the basin do not have good visibility between them, the distance should be reduced to the range of 1–3 km. The energy efficiency of the Atmel microcontrollers present in Arduino boards is not adequate to meet the requirements of system autonomy. To increase the autonomy of the system, it is recommended to use low-consumption components, such as ultra-low-power ARM microcontrollers (e.g., the Cortex-M series), together with high-efficiency DC-DC converters. The machine learning system has initiated the learning of the Shullcas system to generate the best hydrology of the sub-basin. This will improve as the machine learning and the data entered into the big data store converge every second. This will provide services to each of the applications of the complex system to return the best data on the determined flows.

Keywords: hydrology, internet of things, machine learning, river basin

Procedia PDF Downloads 149
5328 LiTa₂PO₈-Based Composite Solid Polymer Electrolytes for High-Voltage Cathodes in Lithium-Metal Batteries

Authors: Kumlachew Zelalem Walle, Chun-Chen Yang

Abstract:

Solid-state lithium metal batteries (SSLMBs) containing polymer and ceramic solid electrolytes have received considerable attention as an alternative to liquid electrolytes in lithium metal batteries (LMBs), offering high safety, excellent energy storage performance, and stability at elevated temperatures. Here, a novel fast Li-ion conducting material, LiTa₂PO₈ (LTPO), was synthesized, and the electrochemical performance of the as-prepared powder and an LTPO-incorporated composite solid polymer electrolyte (LTPO-CPE) membrane was investigated. The as-prepared LTPO powder was homogeneously dispersed in polymer matrices, and a hybrid solid electrolyte membrane was synthesized via a simple solution-casting method. The room-temperature total ionic conductivities (σt) of the LTPO pellet and the LTPO-CPE membrane were 0.14 and 0.57 mS cm⁻¹, respectively. A coin cell with an NCM811 cathode was cycled at 1C between 2.8 and 4.5 V at room temperature, achieving a Coulombic efficiency of 99.3% with a capacity retention of 74.1% after 300 cycles. Similarly, the LFP cathode also delivered excellent performance at 0.5C, with an average Coulombic efficiency of 100% and virtually no capacity loss (maximum specific capacity of 138 mAh g⁻¹ at the 27th cycle and 131.3 mAh g⁻¹ at the 500th). These results demonstrate the feasibility of the high Li-ion conductor LTPO as a filler, and the developed polymer/ceramic hybrid electrolyte has the potential to be a high-performance electrolyte for high-voltage cathodes, which may provide a fresh platform for developing more advanced solid-state electrolytes.

Keywords: Li-ion conductor, lithium-metal batteries, composite solid electrolytes, LiTa₂PO₈, high-voltage cathode

Procedia PDF Downloads 56
5327 Assessing Organizational Resilience Capacity to Flooding: Index Development and Application to Greek Small & Medium-Sized Enterprises

Authors: Antonis Skouloudis, Konstantinos Evangelinos, Walter Leal-Filho, Panagiotis Vouros, Ioannis Nikolaou

Abstract:

Organizational resilience capacity to extreme weather events (EWEs) has attracted growing scholarly attention over the past decade as an essential aspect of business continuity management, with supporting evidence suggesting that it plays a key role in successful responses to adverse situations, crises, and shocks. Small and medium-sized enterprises (SMEs) are more vulnerable to floods than their larger counterparts and are therefore disproportionately affected by such extreme weather events. The limited resources at their disposal and the lack of time and skills all conduce to inadequate preparedness for the challenges posed by floods. SMEs tend to plan in the short term, reacting to circumstances as they arise and focusing on their very survival. Likewise, they have less formalised structures and codified policies, and they are most usually owner-managed, resulting in a command-and-control management culture. Such characteristics leave them with limited opportunities to recover from flooding and to quickly turn their operation around from a loss-making to a profit-making one. Scholars frame the capacity of business entities to be resilient to an EWE disturbance (such as a flash flood) in terms of the rate of recovery and restoration of organizational performance to pre-disturbance conditions; the amount of disturbance (i.e. the threshold level) a business can absorb before losing the structural and/or functional components that will alter or cease its operation; and the extent to which the organization maintains its function (i.e. impact resistance) before performance levels are driven to zero. Nevertheless, while resilience capacity seems to be accepted as an essential trait of firms that effectively transcend uncertain conditions, research deconstructing the enabling conditions and/or inhibitory factors of SMEs' resilience capacity to natural hazards is still sparse, fragmentary, and mostly fuelled by anecdotal evidence or normative assumptions.
Focusing on the individual level of analysis, i.e. the individual enterprise and its endeavours to succeed, the picture emerging from this relatively new research strand delineates the specification of variables, conceptual relationships, and dynamic boundaries of resilience capacity components in an attempt to provide prescriptions for policy-making as well as business management. This study presents the development of a flood resilience capacity index (FRCI) and its application to Greek SMEs. The proposed composite indicator pertains to cognitive, behavioral/managerial, and contextual factors that influence an enterprise’s ability to shape effective responses to flood challenges. Through the proposed indicator-based approach, an analytical framework is set forth that will help standardize such assessments, with the overarching aim of reducing the vulnerability of SMEs to flooding. This will be achieved by identifying the major internal and external attributes explaining resilience capacity, which is particularly important given the limited resources these enterprises have and the fact that they tend to be primary sources of vulnerability in supply chain networks, generating single points of failure (SPOF).
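A composite indicator of this kind is typically a weighted aggregation of normalised sub-scores; a minimal sketch, assuming equal weights across the three factor groups (the actual FRCI weighting and indicator set are part of the study), is:

```python
def frci(cognitive, managerial, contextual, weights=(1/3, 1/3, 1/3)):
    """Weighted aggregate of three factor-group scores, each assumed to be
    already normalised to [0, 1]; higher means higher resilience capacity."""
    scores = (cognitive, managerial, contextual)
    for s in scores:
        if not 0.0 <= s <= 1.0:
            raise ValueError("sub-scores must be normalised to [0, 1]")
    return sum(w * s for w, s in zip(weights, scores))

# Hypothetical SME: strong risk awareness, average management practices,
# weak external context (e.g. located in a flood-prone area).
print(round(frci(0.9, 0.5, 0.2), 4))  # 0.5333
```

Scoring a sample of firms this way is what allows the assessments to be standardized and compared across SMEs, which is the stated aim of the index.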

Keywords: floods, small and medium-sized enterprises, organizational resilience capacity, index development

Procedia PDF Downloads 180
5326 Production of a Sustainable Slow-Release Urea Fertilizer Using Starch and Poly-Vinyl Alcohol

Authors: A. M. H. Shokry, N. S. M. El-Tayeb

Abstract:

The environmental impacts caused by fertilizers call for the adoption of more sustainable technologies in order to increase agricultural production and reduce pollution due to high nutrient emissions. One particular technique has been to coat urea fertilizer granules with less-soluble chemicals that permit the gradual release of nutrients in a slow and controlled manner. The aim of this research is to develop a biodegradable slow-release fertilizer (SRF) with materials that come from sustainable sources: starch and polyvinyl alcohol (PVA). The slow-release behavior and water retention capacity of the coated granules were determined. In addition, the aqueous release and absorbency rates were also tested. Results confirmed that the release rate from coated granules was slower than through plain membranes, and that the water absorption capacity of the coated urea decreased as the PVA content increased. The SRF was also tested and gave positive results that confirmed the integrity of the product.

Keywords: biodegradability, nitrogen-use efficiency, poly-vinyl alcohol, slow-release fertilizer, sustainability

Procedia PDF Downloads 199