Search results for: poverty prediction
2374 Numerical Prediction of Effects of Location of Across-the-Width Laminations on Tensile Properties of Rectangular Wires
Authors: Kazeem K. Adewole
Abstract:
This paper presents the finite element analysis numerical investigation of the effects of the location of across-the-width lamination on the tensile properties of rectangular wires for civil engineering applications. FE analysis revealed that the presence of the mid-thickness across-the-width lamination changes the cup and cone fracture shape exhibited by the lamination-free wire to a V-shaped fracture shape with an opening at the bottom/pointed end of the V-shape at the location of the mid-thickness across-the-width lamination. FE analysis also revealed that the presence of the mid-width across-the-thickness lamination changes the cup and cone fracture shape of the lamination-free wire without an opening to a cup and cone fracture shape with an opening at the location of the mid-width across-the-thickness lamination. The FE fracture behaviour prediction approach presented in this work serves as a tool for failure analysis of wires with lamination at different orientations which cannot be conducted experimentally.
Keywords: across-the-width lamination, tensile properties, lamination location, wire
Procedia PDF Downloads 473
2373 Additive Weibull Model Using Warranty Claim and Finite Element Analysis Fatigue Analysis
Authors: Kanchan Mondal, Dasharath Koulage, Dattatray Manerikar, Asmita Ghate
Abstract:
This paper presents an additive reliability model using warranty data and Finite Element Analysis (FEA) data. Warranty data for any product give insight into its underlying issues and are often used by reliability engineers to build prediction models that forecast the failure rate of parts. But there is one major limitation in using warranty data for prediction: warranty periods constitute only a small fraction of a product's total lifetime, usually covering only the infant-mortality and useful-life zones of the bathtub curve. Predicting from warranty data alone therefore does not generally provide results with the desired accuracy. The failure rate of a mechanical part is driven by random issues initially and by wear-out or usage-related issues at later stages of the lifetime. For better predictability of the failure rate, one needs to explore the failure rate behaviour in the wear-out zone of the bathtub curve. Due to cost and time constraints, it is not always possible to test samples to failure, but FEA fatigue analysis can provide the failure rate behaviour of a part well beyond the warranty period, more quickly and at lower cost. In this work, the authors propose an Additive Weibull Model, which makes use of both warranty and FEA fatigue analysis data for predicting failure rates. It involves modelling two data sets for a part: one with existing warranty claims and the other with fatigue life data. Hazard-rate-based Weibull estimation is used for modelling the warranty data, whereas S-N-curve-based Weibull parameter estimation is used for the FEA data. The two separate Weibull models' parameters are estimated and combined to form the proposed Additive Weibull Model for prediction.
Keywords: bathtub curve, fatigue, FEA, reliability, warranty, Weibull
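The additive idea above — an early-life Weibull hazard fitted to warranty claims plus a wear-out Weibull hazard fitted to FEA fatigue life — can be sketched as follows. The shape/scale parameters are illustrative placeholders, not values from the study:

```python
# Sketch of an additive Weibull hazard: total hazard = early-life (warranty)
# hazard + wear-out (FEA fatigue) hazard. Parameters are hypothetical.

def weibull_hazard(t, beta, eta):
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta-1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def additive_weibull_hazard(t, warranty_params, fatigue_params):
    """Additive model: sum of the warranty-claim Weibull hazard (early life)
    and the FEA-fatigue Weibull hazard (wear-out)."""
    return (weibull_hazard(t, *warranty_params)
            + weibull_hazard(t, *fatigue_params))

# Illustrative parameters: shape < 1 captures early failures from warranty
# claims; shape > 1 captures wear-out from S-N fatigue life.
warranty = (0.8, 5000.0)   # (shape, scale) from warranty data
fatigue = (3.5, 20000.0)   # (shape, scale) from FEA fatigue analysis

h_early = additive_weibull_hazard(100.0, warranty, fatigue)
h_late = additive_weibull_hazard(25000.0, warranty, fatigue)
```

Past the warranty window the wear-out term dominates, which is what the warranty-only model cannot see.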
Procedia PDF Downloads 72
2372 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, the considerable potential for saving quality-control effort can be exploited via the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which this limited data variability makes more difficult. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a six-phase process model describing the life cycle of a data science project. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In this work, the initial CRISP-DM phase, business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification.
The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification supporting the inspection decision are applied. Classification proves clearly superior to regression and achieves promising accuracies.
Keywords: classification, CRISP-DM, machine learning, predictive quality, regression
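The business-understanding question above — frame the leakage volume flow as a regression target, or threshold it into a pass/fail inspection label — can be sketched minimally. The leakage distribution and the specification limit below are invented for illustration:

```python
import random

random.seed(0)

# Hypothetical leakage volume flow measurements (the regression target) and an
# assumed specification limit that turns them into a pass/fail label (the
# classification target). Neither value is from the paper.
SPEC_LIMIT = 2.0

leakage = [random.gauss(1.5, 0.4) for _ in range(200)]
labels = [1 if q > SPEC_LIMIT else 0 for q in leakage]  # 1 = reject

# Business-understanding check: with a tightly controlled process, variance is
# low and rejects are rare, so the classification framing is heavily imbalanced.
reject_rate = sum(labels) / len(labels)
```

The two framings carry the same information at the boundary, but the class imbalance and the accuracy metric chosen in this phase shape everything downstream.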
Procedia PDF Downloads 143
2371 COVID-19 Analysis with Deep Learning Model Using Chest X-Rays Images
Authors: Uma Maheshwari V., Rajanikanth Aluvalu, Kumar Gautam
Abstract:
The COVID-19 disease is a highly contagious viral infection with major worldwide health implications, and the global economy has suffered as a result. The spread of this pandemic disease can be slowed if positive patients are found early, so COVID-19 prediction is beneficial for identifying at-risk patients. Deep learning and machine learning algorithms for COVID prediction using X-rays have the potential to be extremely useful in addressing the scarcity of doctors and clinicians in remote places. In this paper, a convolutional neural network (CNN) with deep layers is presented for recognising COVID-19 patients using real-world datasets. We gathered around 6000 X-ray scan images from various sources and split them into two categories: normal and COVID-impacted. Our model examines chest X-ray images to recognise such patients. Because X-rays are commonly available and affordable, our findings show that X-ray analysis is effective in COVID diagnosis. The predictions performed well, with an average accuracy of 99% on training images and 88% on X-ray test images.
Keywords: deep CNN, COVID-19 analysis, feature extraction, feature map, accuracy
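The core feature-extraction step such a CNN performs — sliding a kernel over the image and applying ReLU to produce a feature map — can be shown on a toy patch. The kernel and intensities below are illustrative, not taken from the model:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 'valid' cross-correlation followed by ReLU — the basic
    feature-map computation inside a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)  # ReLU

# Toy 6x6 "chest X-ray" patch (hypothetical intensities): dark on the left,
# bright on the right. A rising-edge kernel lights up at the boundary.
patch = np.zeros((6, 6))
patch[:, 3:] = 1.0
rising_edge = np.array([[-1.0, 0.0, 1.0]] * 3)
fmap = conv2d_valid(patch, rising_edge)
```

A real deep CNN stacks many such learned kernels; this sketch only shows what one feature map is.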
Procedia PDF Downloads 77
2370 Pattern Recognition Using Feature Based Die-Map Clustering in the Semiconductor Manufacturing Process
Authors: Seung Hwan Park, Cheng-Sool Park, Jun Seok Kim, Youngji Yoo, Daewoong An, Jun-Geol Baek
Abstract:
As big data analysis becomes more important, yield prediction using data from the semiconductor process is essential. In general, yield prediction and analysis of the causes of failure are closely related. The purpose of this study is to analyse patterns that affect the final test results using die-map-based clustering. Much research has been conducted using die data from the semiconductor test process, but such analysis has limitations because the test data are less directly related to the final test results. Therefore, this study proposes a framework for analysis through clustering using more detailed data than existing die data. The study consists of three phases. In the first phase, a die map is created from fail-bit data in each sub-area of the die. In the second phase, clustering using the map data is performed. The third phase finds patterns that affect the final test result. Finally, the proposed three steps are applied to actual industrial data, and the experimental results show potential for field application.
Keywords: die-map clustering, feature extraction, pattern recognition, semiconductor manufacturing process
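The first two phases above (features from a fail-bit die map, then clustering) might be sketched roughly as follows; the feature choices, toy die maps, and the minimal k-means are hypothetical stand-ins, not the authors' pipeline:

```python
import math

def die_features(fail_bits):
    """Phase 1 sketch: summarise a square die fail-bit map as (total fail
    count, fraction of fails on the die edge)."""
    n = len(fail_bits)
    total = sum(sum(row) for row in fail_bits)
    edge = sum(fail_bits[i][j] for i in range(n) for j in range(n)
               if i in (0, n - 1) or j in (0, n - 1))
    return (total, edge / total if total else 0.0)

def kmeans(points, k, iters=10):
    """Phase 2 sketch: minimal k-means over the feature vectors."""
    centroids = list(points[:k])
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            groups[idx].append(p)
        centroids = [tuple(sum(x) / len(g) for x in zip(*g)) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids

# Hypothetical 4x4 die maps: an edge-failure pattern vs. a centre-failure pattern.
edge_die = [[1, 1, 1, 1], [1, 0, 0, 1], [1, 0, 0, 1], [1, 1, 1, 1]]
centre_die = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
feats = [die_features(edge_die), die_features(centre_die)]
cents = kmeans(feats, 2)
```

Phase 3 would then relate cluster membership to the final test results.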
Procedia PDF Downloads 401
2369 Application of Artificial Neural Network for Prediction of Load-Haul-Dump Machine Performance Characteristics
Authors: J. Balaraju, M. Govinda Raj, C. S. N. Murthy
Abstract:
Every industry constantly looks to enhance its day-to-day production and productivity, which is possible only by maintaining its personnel and machinery at an adequate level. Prediction of performance characteristics plays an important role in the performance evaluation of equipment. Analytical and statistical approaches take considerably more time to solve complex problems such as performance estimation compared with software-based approaches. With this in view, the present study deals with Artificial Neural Network (ANN) modelling of a Load-Haul-Dump (LHD) machine to predict performance characteristics such as reliability, availability and preventive maintenance (PM). A feed-forward back-propagation ANN trained with the Levenberg-Marquardt (LM) algorithm has been used. The performance characteristics were computed using Isograph Reliability Workbench 13.0 software, and these computed values were validated against the predicted output responses of the ANN models. Further, recommendations are given to the industry, based on the performed analysis, for improving equipment performance.
Keywords: load-haul-dump, LHD, artificial neural network, ANN, performance, reliability, availability, preventive maintenance
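One of the performance characteristics named above, availability, follows directly from mean time between failures (MTBF) and mean time to repair (MTTR); a minimal sketch with illustrative figures, not values from the study:

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: fraction of time the machine is operable.
    A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical LHD figures: 95 h between failures, 5 h to repair.
a = availability(95.0, 5.0)
```

This is the kind of target value the ANN is trained to reproduce from operating data.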
Procedia PDF Downloads 148
2368 Clinical Prediction Rules for Using Open Kinetic Chain Exercise in Treatment of Knee Osteoarthritis
Authors: Mohamed Aly, Aliaa Rehan Youssef, Emad Sawerees, Mounir Guirgis
Abstract:
Relevance: Osteoarthritis (OA) is the most common degenerative disease, seen in all populations. It causes disability and a substantial socioeconomic burden. Evidence supports exercise as the most effective conservative treatment for patients with OA. Therapists' experience and clinical judgment play a major role in exercise prescription, and scientific evidence in this regard is lacking. The development of clinical prediction rules to identify patients who are most likely to benefit from exercise may help solve this dilemma. Purpose: This study investigated whether body mass index (BMI) and functional ability at baseline can predict patients' response to a selected exercise program. Approach: Fifty-six patients, aged 35 to 65 years, completed an exercise program consisting of open kinetic chain strengthening and passive stretching exercises. The program was given for 3 sessions per week, 45 minutes per session, for 6 weeks. Evaluation: At baseline and post treatment, pain severity was assessed using the numerical pain rating scale, whereas functional ability was assessed by the step test (ST), timed up and go test (TUG) and 50-feet timed walk test (50 FTW). After completing the program, a global rate of change (GROC) score greater than 4 was used to categorise patients as successful or non-successful. Thirty-eight patients (68%) had a successful response to the intervention. Logistic regression showed that BMI and the 50 FTW test were the only significant predictors. Based on the results, patients with a BMI less than 34.71 kg/m2 and a 50 FTW time less than 25.64 sec are 68% to 89% more likely to benefit from the exercise program. Conclusions: Clinicians should consider the described strengthening and flexibility exercise program for patients with a BMI less than 34.7 kg/m2 and a 50 FTW time faster than 25.6 seconds. The validity of these predictors should be investigated for other exercises.
Keywords: clinical prediction rule, knee osteoarthritis, physical therapy exercises, validity
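The reported prediction rule amounts to a simple two-cutoff check; a sketch using the thresholds quoted in the abstract:

```python
def likely_to_benefit(bmi_kg_m2, walk_50ft_sec):
    """Clinical prediction rule from the abstract: patients with BMI below
    34.71 kg/m2 AND a 50-foot timed walk under 25.64 s were reported as
    68-89% more likely to respond to the exercise programme."""
    return bmi_kg_m2 < 34.71 and walk_50ft_sec < 25.64

responder = likely_to_benefit(28.0, 20.0)       # meets both cutoffs
non_responder = likely_to_benefit(36.0, 20.0)   # fails the BMI cutoff
```

The patient values here are made up; only the two thresholds come from the study.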
Procedia PDF Downloads 421
2367 The Application of Artificial Neural Networks for the Performance Prediction of Evacuated Tube Solar Air Collector with Phase Change Material
Authors: Sukhbir Singh
Abstract:
This paper describes the modelling of a novel solar air collector (NSAC) system using an artificial neural network (ANN). The objective of the study is to demonstrate the application of the ANN model to predict the performance of the NSAC with acetamide as a phase change material (PCM) storage. The input data set consists of time, solar intensity and ambient temperature, whereas the outlet air temperature of the NSAC is the output. Experiments were conducted between 9.00 and 24.00 h in June and July 2014 under the prevailing atmospheric conditions of Kurukshetra (a city in India). The experimental results were then used to train a back-propagation neural network (BPNN) to predict the outlet air temperature of the NSAC. The results show that the BPNN is an effective tool for the prediction of responses: the BPNN-predicted results are in 99% agreement with the experimental results.
Keywords: evacuated tube solar air collector, artificial neural network, phase change material, solar air collector
Procedia PDF Downloads 119
2366 The Theory behind Logistic Regression
Authors: Jan Henrik Wosnitza
Abstract:
Logistic regression has developed into a standard approach for estimating conditional probabilities in a wide range of applications, including credit risk prediction. The article at hand contributes to the current literature on logistic regression fourfold: First, it is demonstrated that the binary logistic regression automatically meets its model assumptions under very general conditions. This result explains, at least in part, logistic regression's popularity. Second, the requirement of homoscedasticity in the context of binary logistic regression is theoretically substantiated: the variances among the groups of defaulted and non-defaulted obligors have to be the same across the levels of the aggregated default indicators in order to achieve linear logits. Third, this article sheds some light on the question of why nonlinear logits might be superior to linear logits in the case of a small amount of data. Fourth, an innovative methodology for estimating correlations between obligor-specific log-odds is proposed. In order to crystallize the key ideas, this paper focuses on the example of credit risk prediction; however, the results presented here can easily be transferred to any other field of application.
Keywords: correlation, credit risk estimation, default correlation, homoscedasticity, logistic regression, nonlinear logistic regression
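The linear-logit relationship discussed above can be made concrete with the logit and its inverse: a linear logit means the log-odds of default is an affine function of the indicator. The coefficients below are illustrative only:

```python
import math

def logit(p):
    """Log-odds of a probability p in (0, 1)."""
    return math.log(p / (1 - p))

def sigmoid(z):
    """Inverse of the logit: maps a linear score back to a probability."""
    return 1 / (1 + math.exp(-z))

# Linear logit: log-odds of default is affine in the aggregated default
# indicator x. Coefficients are hypothetical, not estimated from data.
b0, b1 = -2.0, 0.5
p_default = sigmoid(b0 + b1 * 3.0)  # probability of default at x = 3.0
```

A nonlinear logit would replace the affine score b0 + b1*x with a more flexible function of x.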
Procedia PDF Downloads 425
2365 Runoff Simulation by Using WetSpa Model in Garmabrood Watershed of Mazandaran Province, Iran
Authors: Mohammad Reza Dahmardeh Ghaleno, Mohammad Nohtani, Saeedeh Khaledi
Abstract:
Hydrological models are applied to simulate and predict floods in watersheds. WetSpa is a distributed, continuous and physically based model with a daily or hourly time step that describes precipitation, runoff and evapotranspiration processes for both simple and complex contexts. The model uses a modified rational method for runoff calculation; runoff is routed along the flow path using the diffusion-wave equation, which depends on slope, velocity and flow-route characteristics. The Garmabrood watershed is located in Mazandaran province, Iran, spanning coordinates 53° 10´ 55" to 53° 38´ 20" E and 36° 06´ 45" to 36° 25´ 30" N. The catchment area is about 1133 km2, with elevations ranging from 213 m at the outlet to 3136 m and an average slope of 25.77%. Results of the simulations show good agreement between calculated and measured hydrographs at the outlet of the basin. Based on the Nash-Sutcliffe model efficiency coefficient for the calibration period, the model estimated daily hydrographs and maximum flow rate with accuracies of up to 61% and 83.17%, respectively.
Keywords: watershed simulation, WetSpa, runoff, flood prediction
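The Nash-Sutcliffe coefficient used to score the calibration can be computed directly from paired observed and simulated series; the discharge values below are invented for illustration:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance of observations.
    1.0 is a perfect fit; 0.0 means no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - sse / sst

# Hypothetical daily discharges (m3/s) at the basin outlet.
obs = [10.0, 14.0, 20.0, 16.0, 12.0]
sim = [11.0, 13.0, 19.0, 17.0, 12.0]
nse = nash_sutcliffe(obs, sim)
```

Values such as the 61% reported above correspond to NSE = 0.61 over the calibration period.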
Procedia PDF Downloads 335
2364 Virtual Metrology for Copper Clad Laminate Manufacturing
Authors: Misuk Kim, Seokho Kang, Jehyuk Lee, Hyunchang Cho, Sungzoon Cho
Abstract:
In semiconductor manufacturing, virtual metrology (VM) refers to methods that predict the properties of a wafer from machine parameters and sensor data of the production equipment, without performing the costly physical measurement of the wafer properties (Wikipedia). Additional benefits include avoidance of human bias and identification of the important factors affecting process quality, which allows the process to be improved in the future. It is, however, rare to find VM applied to other areas of manufacturing. In this work, we propose to apply VM to copper clad laminate (CCL) manufacturing. CCL is a core element of the printed circuit boards (PCBs) used in smartphones, tablets, digital cameras, and laptop computers. The manufacturing of CCL consists of three processes: treating, lay-up, and pressing. Treating, the most important of the three, puts resin on glass cloth and heats it in a drying oven, producing prepreg for the lay-up process. In this process, three important quality factors are inspected: treated weight (T/W), minimum viscosity (M/V), and gel time (G/T). They are inspected manually, incurring heavy cost in time and money, which makes the process a good candidate for VM. We developed prediction models for the three quality factors T/W, M/V, and G/T, using process variables, raw-material variables, and environment variables. The actual process data were obtained from a CCL manufacturer. A variety of variable selection methods and learning algorithms were employed to find the best prediction model. We obtained prediction models of M/V and G/T with high enough accuracy. They also provided information on "important" predictor variables, some of which the process engineers had already been aware of and the rest of which they had not. The engineers were excited to find the new insights the models revealed and set out to analyse them further for process control implications.
T/W, however, could not be predicted with reasonable accuracy from the given factors. This indicates that the factors currently monitored may not affect T/W; an effort therefore has to be made to find other, currently unmonitored factors in order to understand the process better and improve its quality. In conclusion, the VM application to CCL's treating process was quite successful. The newly built quality prediction model allowed us to reduce the cost associated with actual metrology as well as reveal some insights into the factors affecting the important quality factors, and into the level of our less-than-perfect understanding of the treating process.
Keywords: copper clad laminate, predictive modeling, quality control, virtual metrology
Procedia PDF Downloads 349
2363 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance
Authors: Ammar Alali, Mahmoud Abughaban
Abstract:
Cost reduction and drilling optimization are the goals of many drilling operators. Historically, stuck pipe incidents have been a major segment of non-productive time (NPT) associated costs. Traditionally, stuck pipe problems are treated as part of operations and solved post-sticking; the real key to savings and success, however, is predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilises geophysical data processing techniques and machine learning (ML) algorithms to predict drilling events in real time using surface drilling data with minimal computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses these two physical operations (stacking and flattening) to filter any noise in the signature and to create a robust pre-determined pilot that adheres to the local geology. Once the drilling operation starts, the live surface data in the Wellsite Information Transfer Standard Markup Language (WITSML) are fed into a matrix and aggregated at a similar frequency to the pre-determined signature. The matrix is then correlated with the pre-determined stuck-pipe signature for the field in real time. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant features. The correlation output is interpreted as a probability curve for stuck pipe incidents in real time.
Once this probability passes a user-defined fixed threshold, the second component, cause analysis, alerts the user to the expected incident based on the pre-determined signatures, and a set of recommendations is provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures had been created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. Predicting the stuck pipe problem requires a method that captures geological, geophysical and drilling data and recognises the indicators of the issue at the field and geological-formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe
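A greedy correlation-based selection in the spirit of CFS — rank features by correlation with the class, skip features that are near-duplicates of ones already kept — might look like the sketch below. This is not the authors' implementation, and the drilling channels and readings are hypothetical:

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def cfs_like_select(features, target, redundancy_cut=0.9):
    """Keep features most correlated with the class, skipping any feature
    that is nearly a duplicate (|r| > redundancy_cut) of one already kept."""
    ranked = sorted(features, key=lambda k: -abs(pearson(features[k], target)))
    selected = []
    for name in ranked:
        if all(abs(pearson(features[name], features[s])) <= redundancy_cut
               for s in selected):
            selected.append(name)
    return selected

# Hypothetical surface-drilling channels; 'torque2' duplicates 'torque'.
target = [0, 0, 1, 1, 1, 0]  # 1 = reading preceding a stuck-pipe incident
features = {
    "torque":  [1.0, 1.1, 3.0, 3.2, 2.9, 1.2],
    "torque2": [1.0, 1.1, 3.0, 3.2, 2.9, 1.2],
    "rpm":     [5.0, 4.0, 6.0, 3.0, 5.0, 4.0],
}
picked = cfs_like_select(features, target)
```

The redundant channel is dropped while both informative, non-redundant channels survive.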
Procedia PDF Downloads 225
2362 Cooling Profile Analysis of Hot Strip Coil Using Finite Volume Method
Authors: Subhamita Chakraborty, Shubhabrata Datta, Sujay Kumar Mukherjea, Partha Protim Chattopadhyay
Abstract:
The manufacture of multiphase high-strength steel in a hot strip mill has drawn significant attention due to the possibility of forming low-temperature transformation products of austenite under continuous cooling conditions. In such an endeavour, reliable prediction of the temperature profile of the hot strip coil is essential in order to assess the evolution of microstructure at different locations of the coil, on the basis of the corresponding Continuous Cooling Transformation (CCT) diagram. The temperature distribution profile of the hot strip coil has been determined using the finite volume method (FVM) vis-à-vis the finite difference method (FDM). It is demonstrated that FVM offers greater computational reliability in the estimation of contact pressure distribution, and hence temperature distribution, for curved and irregular profiles, owing to the flexibility in the selection of grid geometry and discrete point positions. Moreover, use of the finite volume concept allows the conservation of mass, momentum and energy to be enforced, leading to enhanced prediction accuracy.
Keywords: simulation, modeling, thermal analysis, coil cooling, contact pressure, finite volume method
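The conservation property that favours FVM can be seen in a one-dimensional sketch: every inter-cell flux is added to one cell and subtracted from its neighbour, so total energy is preserved exactly by construction. The temperatures and coefficient below are illustrative, not from the coil model:

```python
def fvm_step(T, alpha_dt_dx2):
    """One explicit finite-volume update for 1-D conduction with insulated
    ends. flux[i] is the flux into cell i from cell i+1; applying each flux
    with opposite signs to the two cells conserves total energy exactly."""
    flux = [alpha_dt_dx2 * (T[i + 1] - T[i]) for i in range(len(T) - 1)]
    new_T = T[:]
    for i, f in enumerate(flux):
        new_T[i] += f      # energy into cell i
        new_T[i + 1] -= f  # equal and opposite out of cell i+1
    return new_T

# Hypothetical temperatures (deg C) across a coil cross-section: hot core,
# cooler surfaces. One step moves heat outward while conserving the total.
T = [400.0, 650.0, 700.0, 650.0, 400.0]
T1 = fvm_step(T, 0.2)
```

An FDM update written point-wise carries no such built-in guarantee, which is the accuracy argument made above.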
Procedia PDF Downloads 471
2361 Determining Disparities in the Distribution of the Energy Efficiency Resource through the History of Michigan Policy
Authors: M. Benjamin Stacey
Abstract:
Energy efficiency has been increasingly recognised as a high-value resource through state policies that require utility companies to implement efficiency programs. While policymakers have recognised the statewide economic, environmental, and health-related value to residents who rely on this grid-supplied resource, varying interests in energy efficiency between socioeconomic groups stand undifferentiated in most state legislation; instead, the benefits are often assumed to be distributed equitably across these groups. Despite this, these policies are frequently cited by advocacy groups, regulatory bodies and utility companies for their ability to address the negative financial, health and other social impacts of energy poverty in low-income communities. Yet, while most states, like Michigan, require programs that target low-income consumers, there is often no requirement for equitable investment and energy savings for low-income consumers, nor are minimum spending levels on low-income programs stipulated. To further understand the impact of the absence of these factors in legislation, this study examines the distribution of program funds and energy efficiency savings to answer a fundamental energy justice concern: are there disparities in the investment and benefits of energy efficiency programs between socioeconomic groups? This study compiles data covering the history of Michigan's energy efficiency policy implementation from 2010-2016, analysing the energy efficiency portfolios of Michigan's two main energy providers. To make accurate comparisons between the two providers' investments and energy savings in low-income and non-low-income programs, the socioeconomic variation in each utility coverage area was captured and accounted for using GIS and US Census data.
Interestingly, this study found that both providers invested more equitably in natural gas efficiency programs; however, together they invested roughly three times less per household in low-income electricity efficiency programs, which resulted in ten times less electricity savings per household. This study also compares the variation between commission-approved utility plans and actual spending and savings results, with the differing patterns pointing to differing portfolio management strategies between the companies. The study reveals that over the history of the implementation of Michigan's energy efficiency policy, the 35% of Michigan's population who qualify as low income have received substantially disproportionate funding and energy savings under the policy. The study provides an overview of the results from a social perspective, raises concerns about the impact on energy poverty and equity between consumer groups, and is an applicable tool for lawmakers, regulatory agencies, utility portfolio managers, and advocacy groups concerned with addressing issues related to energy poverty.
Keywords: energy efficiency, energy justice, low income, state policy
Procedia PDF Downloads 186
2360 Artificial Neural Network Based Approach in Prediction of Potential Water Pollution Across Different Land-Use Patterns
Authors: M.Rüştü Karaman, İsmail İşeri, Kadir Saltalı, A.Reşit Brohi, Ayhan Horuz, Mümin Dizman
Abstract:
Considerable attention has recently been given to the environmental hazards caused by agricultural chemicals such as excess fertilisers. In this study, a neural network approach was investigated for the prediction of potential nitrate pollution across different land-use patterns, using a properly trained feed-forward multilayered artificial neural network (ANN) computer model. Periodic concentrations of some anions, especially nitrate (NO3-), and cations were detected in drainage waters collected from drain pipes placed in an irrigated tomato field, an unirrigated wheat field, fallow land and pasture land. Soil samples were collected from the irrigated tomato field and the unirrigated wheat field on a grid system with 20 m x 20 m intervals. Site-specific nitrate concentrations in the soil samples were measured for ANN-based simulation of the nitrate leaching potential of the land profiles. In the application of the ANN model, a multilayered feed-forward network was evaluated, and data sets for training, validation and testing containing the measured soil nitrate values were estimated based on spatial variability. For the testing values, an optimal structure of 2-15-1 was obtained (R2 = 0.96, P < 0.01) for the unirrigated field, while an optimal structure of 2-10-1 was obtained (R2 = 0.96, P < 0.01) for the irrigated field. The results showed that the ANN model can be used successfully to predict potential nitrate leaching levels for different land-use patterns. For the most suitable results, however, the model should be calibrated by training with different network structures depending on site-specific soil parameters and varied agricultural management.
Keywords: artificial intelligence, ANN, drainage water, nitrate pollution
Procedia PDF Downloads 309
2359 Proactive Business Approaches in Human Rights: The Implications of Corporate Social Responsibility
Authors: Fatemeh Jalalvand
Abstract:
The critical human rights problems such as extreme poverty, hunger, inequalities and gender discrimination need to be addressed by powerful and influential actors in the world. In today’s globalization, corporations have become one of the potent agents in the society. They are capable of generating economic growth, reducing poverty, and increasing the well-being of individuals, thereby contributing to the betterment of a broad spectrum of human rights. However, the discussion on how business can contribute to human rights has primarily focused on not violating them (reactive approach) rather than improving the conditions and solving the problems of human rights (proactive approach). In particular, the role of corporate social responsibility (CSR) in bringing proactivity of business in human rights has gained less attention. This paper develops a conceptual framework to examine the role of different categories of CSR, including discretionary, ethical, legal, instrumental and political CSR in encouraging the proactive contribution of corporations to the betterment of human rights. The five propositions, related to the conceptual framework, outline the relationships between five categories of CSR and proactivity of corporations in human rights. The findings indicate that discretionary CSR with voluntary nature might not be able to motivate any contribution of business in human rights. Moreover, ethical CSR and legal CSR might lead to reactive strategies of business toward human rights. Meanwhile, the economic incentives behind the notion of instrumental CSR could result in partial proactive engagement of corporations in human rights. Finally, the internal motives as profit and power besides the external duties might lead to the highest level of proactivity of corporations in human rights under the context of political CSR. 
The model developed offers a map for business to adopt proactive human rights strategies more systematically while maintaining key profit drivers like power and profit. In sum, the instrumental and political categories of CSR might lead corporations to improve the conditions of human rights proactively.
Keywords: CSR, human rights, proactive approach, reactive approach
Procedia PDF Downloads 262
2358 Thermal Behaviour of a Low-Cost Passive Solar House in Somerset East, South Africa
Authors: Ochuko K. Overen, Golden Makaka, Edson L. Meyer, Sampson Mamphweli
Abstract:
Low-cost housing provided for people with small incomes in South Africa are characterized by poor thermal performance. This is due to inferior craftsmanship with no regard to energy efficient design during the building process. On average, South African households spend 14% of their total monthly income on energy needs, in particular space heating; which is higher than the international benchmark of 10% for energy poverty. Adopting energy efficient passive solar design strategies and superior thermal building materials can create a stable thermal comfort environment indoors. Thereby, reducing energy consumption for space heating. The aim of this study is to analyse the thermal behaviour of a low-cost house integrated with passive solar design features. A low-cost passive solar house with superstructure fly ash brick walls was designed and constructed in Somerset East, South Africa. Indoor and outdoor meteorological parameters of the house were monitored for a period of one year. The ASTM E741-11 Standard was adopted to perform ventilation test in the house. In summer, the house was found to be thermally comfortable for 66% of the period monitored, while for winter it was about 79%. The ventilation heat flow rate of the windows and doors were found to be 140 J/s and 68 J/s, respectively. Air leakage through cracks and openings in the building envelope was 0.16 m3/m2h with a corresponding ventilation heat flow rate of 24 J/s. The indoor carbon dioxide concentration monitored overnight was found to be 0.248%, which is less than the maximum range limit of 0.500%. The prediction percentage dissatisfaction of the house shows that 86% of the occupants will express the thermal satisfaction of the indoor environment. With a good operation of the house, it can create a well-ventilated, thermal comfortable and nature luminous indoor environment for the occupants. 
Incorporating passive solar design in low-cost housing can be one of the immediate and long-term solutions to the energy crisis facing South Africa.Keywords: energy efficiency, low-cost housing, passive solar design, rural development, thermal comfort
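The ventilation heat flow rates quoted above follow from the standard sensible-heat relation Q = ρ·cp·V̇·ΔT. The sketch below illustrates that relation only; the air properties and the flow and temperature values are assumptions for illustration, not measurements from the study.

```python
# Illustrative sketch: sensible ventilation heat-flow rate Q = rho * cp * V_dot * dT.
# Air properties below are typical assumed values, not measured in the study.
AIR_DENSITY = 1.2  # kg/m^3, dry air near 20 degrees C (assumed)
AIR_CP = 1005.0    # J/(kg K), specific heat of air at constant pressure (assumed)

def ventilation_heat_flow(volume_flow_m3_s: float, delta_t_k: float) -> float:
    """Sensible heat carried by a ventilation air stream, in J/s (W)."""
    return AIR_DENSITY * AIR_CP * volume_flow_m3_s * delta_t_k

# Example: ~0.023 m^3/s of air across a 5 K indoor-outdoor temperature difference
q = ventilation_heat_flow(0.023, 5.0)
```

With these assumed values the result is on the order of the 140 J/s window figure reported above, but the inputs here are hypothetical.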
Procedia PDF Downloads 260
2357 Regional Disparities in Microfinance Distribution: Evidence from Indian States
Authors: Sunil Sangwan, Narayan Chandra Nayak
Abstract:
Over the last few decades, the Indian banking system has achieved remarkable growth in its credit volume. However, one of the most disturbing facts about this growth is the uneven distribution of financial services across regions. With earlier efforts towards financial inclusion targeting the rural poor and the underprivileged having met limited success, the provision of microfinance has, of late, emerged as a supplementary mechanism. There are two prominent modes of microfinance distribution in India, namely the SHG-Bank Linkage Programme (SBLP) and private Microfinance Institutions (MFIs). Ironically, these efforts also seem to have failed to achieve the desired targets, as microfinance services have witnessed a skewed distribution across the states of the country. This study attempts a comparative analysis of the geographical skew of SBLP and MFIs in India and examines the factors influencing their regional distribution. The results indicate that microfinance services are largely concentrated in the southern region, which accounts for about 50% of all microfinance clients and 49% of all microfinance loan portfolios. This is distantly followed by the eastern region, where client outreach is close to 25%. The north-eastern, northern, central, and western regions lag far behind, accounting for only 4%, 4%, 10%, and 7% of client outreach, respectively. The penetration of SHGs is equally skewed, with the southern region accounting for 46% of client outreach and 70% of loan portfolios, followed by the eastern region with 21% of client outreach and 13% of the loan portfolio. The north-eastern, northern, central, and western regions account for 5%, 5%, 10%, and 13% of client outreach and 3%, 3%, 7%, and 4% of loan portfolios, respectively.
The study examines the impact of literacy rate, rural poverty, population density, primary sector share, non-farm activities, loan default behaviour and bank penetration on microfinance penetration. The study is limited to 17 major states of the country over the period 2008-2014. The results of the GMM estimation indicate a significant positive impact of literacy rate, non-farm activities and population density on microfinance penetration across the states, while a rise in loan defaults seems to deter it. Rural poverty shows a significant negative impact on the spread of SBLP, while it has a positive impact on MFI penetration, indicating that the formal financial system adheres to a policy of exclusion, especially towards the poor. MFIs, however, seem to be working as substitutes for banks to fill the gap. The findings of the study point towards enhancing financial literacy, non-farm activities and rural bank penetration, and containing loan defaults, in order to achieve greater microfinance prevalence.Keywords: bank penetration, literacy rate, microfinance, primary sector share, rural non-farm activities, rural poverty
Procedia PDF Downloads 229
2356 Statistical Comparison of Ensemble Based Storm Surge Forecasting Models
Authors: Amin Salighehdar, Ziwen Ye, Mingzhe Liu, Ionut Florescu, Alan F. Blumberg
Abstract:
Storm surge is an abnormal rise of water level caused by a storm. Accurate prediction of a storm surge is a challenging problem. Researchers have developed various ensemble modeling techniques that combine several individual forecasts to produce an overall, presumably better, forecast. Some simple ensemble modeling techniques exist in the literature: for instance, Model Output Statistics (MOS) and running mean-bias removal are widely used in the storm surge prediction domain. However, these methods have drawbacks. MOS, for instance, is based on multiple linear regression and needs a long period of training data. To overcome the shortcomings of these simple methods, researchers have proposed more advanced ones. For instance, ENSURF (Ensemble SURge Forecast) is a multi-model application for sea level forecasting that produces an improved sea level forecast by combining several instances of Bayesian Model Averaging (BMA). An ensemble dressing method is based on identifying the best member forecast and using it for prediction. Our contribution in this paper can be summarized as follows. First, we investigate whether ensemble models perform better than any single forecast; to this end, we present a methodology based on a simple Bayesian selection method to identify the best single forecast. Second, we present several new and simple ways to construct ensemble models, using correlation and standard deviation as weights when combining different forecast models. Third, we use these ensembles to forecast storm surge levels and compare them with several existing models from the literature. We then investigate whether developing a complex ensemble model is indeed needed. To achieve this goal, we use the simple average (one of the simplest and most widely used ensemble models) as a benchmark.
Predicting the peak surge level during a storm, as well as the precise time at which this peak occurs, is crucial; we therefore develop a statistical platform to compare the performance of the various ensemble methods. This statistical analysis is based on the root mean square error of the ensemble forecast during the testing period and on the magnitude and timing of the forecasted peak surge compared to the actual peak and its timing. In this work, we analyze four hurricanes: hurricanes Irene and Lee in 2011, hurricane Sandy in 2012, and hurricane Joaquin in 2015. Since hurricane Irene developed at the end of August 2011 and hurricane Lee started just after Irene at the beginning of September 2011, in this study we consider them as a single contiguous hurricane event. The data set used for this study is generated by the New York Harbor Observing and Prediction System (NYHOPS). We find that even the simplest possible way of creating an ensemble produces results superior to any single forecast. We also show that the ensemble models we propose generally perform better than the simple average ensemble technique.Keywords: Bayesian learning, ensemble model, statistical analysis, storm surge prediction
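The correlation-based weighting described above can be sketched as follows. This is an illustration under assumed data shapes, not the authors' implementation: each member model is weighted by its correlation with observations over a training window, and the testing-period RMSE is the comparison metric.

```python
import numpy as np

# Illustrative sketch (not the authors' code): combine several surge forecasts
# using weights derived from each member's correlation with observations on a
# training window, then compare forecasts by root mean square error (RMSE).
def correlation_weighted_ensemble(train_forecasts, train_obs, test_forecasts):
    """train_forecasts: (n_models, n_train); test_forecasts: (n_models, n_test)."""
    weights = np.array([np.corrcoef(f, train_obs)[0, 1] for f in train_forecasts])
    weights = np.clip(weights, 0.0, None)   # ignore anti-correlated members
    weights = weights / weights.sum()       # normalize to a weighted average
    return weights @ test_forecasts

def rmse(pred, obs):
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))
```

A standard-deviation-based weighting would follow the same pattern with inverse error standard deviations in place of correlations.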
Procedia PDF Downloads 307
2355 The Ability of Forecasting the Term Structure of Interest Rates Based on Nelson-Siegel and Svensson Model
Authors: Tea Poklepović, Zdravka Aljinović, Branka Marasović
Abstract:
Given the importance of the yield curve and its estimation, valid methods for yield curve forecasting are indispensable in cases of scarce security issues and/or weak trading on a secondary market. Therefore in this paper, after estimating weekly yield curves on the Croatian financial market from October 2011 to August 2012 using the Nelson-Siegel and Svensson models, the yield curves are forecasted using a vector autoregressive model and neural networks. In general, it can be concluded that both forecasting methods have good prediction abilities, with forecasts based on the Nelson-Siegel estimation model giving better results, in the sense of lower mean squared error, than those based on the Svensson model. In this case, neural networks also provide slightly better results. Finally, it can be concluded that the most appropriate approach to yield curve prediction is neural networks applied to Nelson-Siegel estimates of the yield curves.Keywords: Nelson-Siegel Model, neural networks, Svensson Model, vector autoregressive model, yield curve
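For reference, the Nelson-Siegel yield curve used in the estimation step has the familiar level/slope/curvature form. The sketch below uses purely illustrative parameter values, not estimates from the Croatian market.

```python
import numpy as np

# Standard Nelson-Siegel yield curve:
#   y(tau) = b0 + b1 * (1 - exp(-tau/lam)) / (tau/lam)
#               + b2 * [(1 - exp(-tau/lam)) / (tau/lam) - exp(-tau/lam)]
# b0 = long-run level, b1 = slope, b2 = curvature, lam = decay parameter.
def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Yield for maturity tau in years (tau may be a scalar or array)."""
    tau = np.asarray(tau, dtype=float)
    x = tau / lam
    loading = (1.0 - np.exp(-x)) / x          # slope factor loading
    return beta0 + beta1 * loading + beta2 * (loading - np.exp(-x))

# Illustrative parameters (assumed, not fitted): 6% long end, upward slope, mild hump
curve = nelson_siegel([0.25, 1, 5, 10], beta0=0.06, beta1=-0.02, beta2=0.01, lam=1.5)
```

As a sanity check on the functional form, the yield tends to beta0 at long maturities and to beta0 + beta1 at the short end.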
Procedia PDF Downloads 330
2354 Photo-Fenton Decolorization of Methylene Blue Adsolubilized on Co2+ -Embedded Alumina Surface: Comparison of Process Modeling through Response Surface Methodology and Artificial Neural Network
Authors: Prateeksha Mahamallik, Anjali Pal
Abstract:
In the present study, Co(II)-adsolubilized surfactant-modified alumina (SMA) was prepared, and methylene blue (MB) degradation was carried out on the Co-SMA surface by a visible-light photo-Fenton process. The entire reaction proceeded on the solid surface, as MB was embedded on the Co-SMA surface. The reaction followed zero-order kinetics. Response surface methodology (RSM) and an artificial neural network (ANN) were used to model the decolorization of MB by the photo-Fenton process as a function of the dose of Co-SMA (10, 20 and 30 g/L), initial concentration of MB (10, 20 and 30 mg/L), concentration of H2O2 (174.4, 348.8 and 523.2 mM) and reaction time (30, 45 and 60 min). The prediction capabilities of the two methodologies (RSM and ANN) were compared on the basis of the correlation coefficient (R²), root mean square error (RMSE), standard error of prediction (SEP) and relative percent deviation (RPD). Owing to its lower RMSE (1.27), SEP (2.06) and RPD (1.17) and higher R² (0.9966), ANN proved more accurate than RSM in predicting decolorization efficiency.Keywords: adsolubilization, artificial neural network, methylene blue, photo-Fenton process, response surface methodology
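The four comparison statistics can be computed as below. Note that SEP and RPD are defined in more than one way in the literature; the formulas here are common choices and may differ from the authors' exact definitions.

```python
import numpy as np

# Common definitions of the four model-comparison metrics (SEP and RPD in
# particular vary across the literature, so treat these as one plausible choice).
def fit_metrics(y_obs, y_pred):
    y, p = np.asarray(y_obs, float), np.asarray(y_pred, float)
    ss_res = np.sum((y - p) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                         # coefficient of determination
    rmse = np.sqrt(np.mean((y - p) ** 2))              # root mean square error
    sep = 100.0 * rmse / y.mean()                      # standard error of prediction, %
    rpd = 100.0 / len(y) * np.sum(np.abs(y - p) / y)   # relative percent deviation
    return {"R2": r2, "RMSE": rmse, "SEP": sep, "RPD": rpd}
```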
Procedia PDF Downloads 252
2353 Analysis of Access to Credit among Rural Farmers in Giwa Local Government Area of Kaduna State, Nigeria
Authors: S. Ibrahim, Bashir Umar
Abstract:
Agricultural credit is very important if sustainable agricultural development is to be achieved in any country of the world. Rural credit has proven to be a powerful instrument for poverty reduction and development in rural areas. Agricultural credit enhances productivity and promotes standards of living by breaking the vicious cycle of poverty among small-scale farmers. This study examined access to credit among rural farmers in the Giwa local government area of Kaduna state. A two-stage sampling procedure was employed to select forty-two (42) respondents for the study. Primary data were collected using a structured questionnaire with the help of well-trained enumerators, and analyzed using simple descriptive statistics. The results revealed that the farmers were predominantly male (57.1%), most (54.7%) were married, and the majority (66.5%) had one level of education or another. Most of the household heads were between the ages of 31 and 50. The majority of the farmers (68.2%) had more than 2 ha of farmland, with at least 5 years of farming experience and an annual farm income of N61,000 to N100,000 (61.9%). The sources of credit available to farmers in the study area were commercial banks (38.1%), cooperative banks (47.6%) and development banks (14.2%) in the formal sector, and relatives (26.1%), personal savings (Adashi scheme) (52.3%) and moneylenders (21.4%) in the informal sector. As regards the amount of credit obtained, 38.1% of the farmers received N50,000-100,000, 50% obtained N100,001-500,000, while 11.9% obtained N500,001-1,000,000. High interest rates, inadequate collateral, complicated procedures and lack of guarantors were the major constraints the farmers encountered in accessing loans.
The study therefore recommends that rural farmers be encouraged to form credit and thrift cooperative societies from which they can access much cheaper credit. Moreover, to ensure that any credit obtained is manageable for the farmers, financial institutions should provide loans with low interest rates, and governmental and non-governmental organizations should simplify the procedures associated with accessing loans.Keywords: analysis, access, credit, farmers
Procedia PDF Downloads 61
2352 Air Dispersion Modeling for Prediction of Accidental Emission in the Atmosphere along Northern Coast of Egypt
Authors: Moustafa Osman
Abstract:
Modeling of air pollutants from accidental releases is performed to quantify the impact of industrial facilities on the ambient air. Mathematical methods are required to predict accidental scenarios, including the probability of failure versus fail-safe operation, and to analyze the consequences so as to quantify the environmental damage and the effects on human health. The initial statement of the mitigation plan supports implementation during production and maintenance periods. Using a number of mathematical methods, the flow rate at which gaseous and liquid pollutants might be accidentally released is determined for various source types: point, line and area sources. These emissions are integrated with meteorological conditions, via simplified stability parameters, to compare dispersion coefficients from non-continuous air pollution plumes. The differences are reflected in concentration levels and in the greenhouse effect as the parcel load is transported over both urban and rural areas. This research reveals that the elevation effect near buildings and other structures is about five times higher than over open terrain. These results agree with Sutton's suggested dispersion coefficients for the different stability classes.Keywords: air pollutants, dispersion modeling, GIS, health effect, urban planning
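The underlying continuous-point-source model is the classical Gaussian plume with ground reflection. In the sketch below the dispersion coefficients σy and σz are passed in directly rather than derived from a stability class (as Pasquill-Gifford or Sutton correlations would do), and all numeric values are illustrative assumptions.

```python
import numpy as np

# Classical Gaussian plume for a continuous point source with ground reflection:
#   C = Q / (2*pi*u*sy*sz) * exp(-y^2/(2*sy^2))
#       * [exp(-(z-H)^2/(2*sz^2)) + exp(-(z+H)^2/(2*sz^2))]
# sy, sz would normally come from stability-class correlations; here they are inputs.
def gaussian_plume(q_g_s, u_m_s, y, z, h_stack, sigma_y, sigma_z):
    """Plume concentration (g/m^3) at crosswind offset y and height z."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h_stack)**2 / (2 * sigma_z**2))
                + np.exp(-(z + h_stack)**2 / (2 * sigma_z**2)))  # image-source term
    return q_g_s / (2 * np.pi * u_m_s * sigma_y * sigma_z) * lateral * vertical

# Illustrative values: 100 g/s release, 5 m/s wind, 50 m stack, sigmas at some downwind x
c = gaussian_plume(q_g_s=100.0, u_m_s=5.0, y=0.0, z=0.0, h_stack=50.0,
                   sigma_y=80.0, sigma_z=40.0)
```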
Procedia PDF Downloads 373
2351 Multi-Faceted Growth in Creative Industries
Authors: Sanja Pfeifer, Nataša Šarlija, Marina Jeger, Ana Bilandžić
Abstract:
The purpose of this study is to explore the different facets of growth among micro, small and medium-sized firms in Croatia and to analyze the differences between models designed for all micro, small and medium-sized firms and those for firms in creative industries. Three growth prediction models were designed and tested, using growth of sales, employment and assets of the company as dependent variables. The key drivers of sales growth are prudent use of cash, industry affiliation and a higher share of intangible assets. Growth of assets depends on retained profits, internal and external sources of financing, and industry affiliation. Growth in employment is closely related to sources of financing, in particular debt, and it occurs less frequently than growth in sales and assets. The findings confirm the assumption that the growth strategies of small and medium-sized enterprises (SMEs) in creative industries differ in specific ways from those of SMEs in general. Interestingly, only 2.2% of growing enterprises achieve growth in employment, assets and sales simultaneously.Keywords: creative industries, growth prediction model, growth determinants, growth measures
Procedia PDF Downloads 330
2350 Graph Clustering Unveiled: ClusterSyn - A Machine Learning Framework for Predicting Anti-Cancer Drug Synergy Scores
Authors: Babak Bahri, Fatemeh Yassaee Meybodi, Changiz Eslahchi
Abstract:
In the pursuit of effective cancer therapies, the exploration of combinatorial drug regimens is crucial for leveraging synergistic interactions between drugs, thereby improving treatment efficacy and overcoming drug resistance. However, identifying synergistic drug pairs poses challenges due to the vast combinatorial space and the limitations of experimental approaches. This study introduces ClusterSyn, a machine learning (ML)-powered framework for classifying anti-cancer drug synergy scores. ClusterSyn employs a two-step approach involving drug clustering and synergy score prediction using a fully connected deep neural network. For each cell line in the training dataset, a drug graph is constructed, with nodes representing drugs and edge weights denoting synergy scores between drug pairs. Drugs are clustered using the Markov clustering (MCL) algorithm, and vectors representing the similarity of drug pairs to each cluster are input into the deep neural network for synergy score prediction (synergy or antagonism). The clustering results demonstrate effective grouping of drugs based on synergy scores, aligning similar synergy profiles. Subsequently, the neural network predictions and the synergy scores of the two drugs with others within their clusters are used to predict the synergy score of the drug pair under consideration. This approach facilitates comparative analysis with clustering- and regression-based methods, revealing the superior performance of ClusterSyn over state-of-the-art methods such as DeepSynergy and DeepDDS on diverse datasets such as O'Neil and ALMANAC. The results highlight the remarkable potential of ClusterSyn as a versatile tool for predicting anti-cancer drug synergy scores.Keywords: drug synergy, clustering, prediction, machine learning, deep learning
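The Markov clustering step can be sketched generically as repeated expansion and inflation of a column-stochastic matrix built from the synergy graph. This is a minimal textbook MCL, not the ClusterSyn code; the edge weights stand in for synergy scores, and the cluster-extraction step at the end is simplified.

```python
import numpy as np

# Minimal Markov clustering (MCL) sketch over a drug-synergy graph.
# Generic illustration only, not the ClusterSyn implementation.
def mcl(adjacency, inflation=2.0, iters=50):
    m = adjacency + np.eye(len(adjacency))  # self-loops stabilize the iteration
    m = m / m.sum(axis=0)                   # column-normalize to a stochastic matrix
    for _ in range(iters):
        m = np.linalg.matrix_power(m, 2)    # expansion: flow along longer paths
        m = m ** inflation                  # inflation: strengthen strong flows
        m = m / m.sum(axis=0)               # re-normalize columns
    # Simplified extraction: each distinct non-negligible row support is a cluster.
    clusters = []
    for row in m:
        members = frozenset(np.nonzero(row > 1e-6)[0])
        if members and members not in clusters:
            clusters.append(members)
    return clusters
```

On a graph made of two disjoint, strongly synergistic drug groups, this recovers the two groups as separate clusters.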
Procedia PDF Downloads 77
2349 Microfinance and Microenterprise Development: Evidence from Bangladesh
Authors: Rahat Dewan
Abstract:
The debate surrounding the efficacy of microfinance and the importance of microenterprise is fierce, lengthy and multifaceted. This paper reviews key issues, theory and evidence surrounding microfinance and microenterprise development for poverty alleviation. We report on a recently completed, large-scale microenterprise development intervention in Bangladesh using the rudimentary data available to us, and also our own qualitative field research. We find reasonable evidence for significant returns to several development outcomes.Keywords: Bangladesh, development, microenterprise, microfinance
Procedia PDF Downloads 232
2348 Comparative Analysis of Predictive Models for Customer Churn Prediction in the Telecommunication Industry
Authors: Deepika Christopher, Garima Anand
Abstract:
To determine the best model for churn prediction in the telecom industry, this paper compares 11 machine learning algorithms, namely Logistic Regression, Support Vector Machine, Random Forest, Decision Tree, XGBoost, LightGBM, CatBoost, AdaBoost, Extra Trees, Deep Neural Network, and a Hybrid Model (MLPClassifier). It also aims to pinpoint the top three factors that lead to customer churn, and conducts customer segmentation to identify vulnerable groups. According to the data, the Logistic Regression model performs best, with an F1 score of 0.6215, 81.76% accuracy, 68.95% precision, and 56.57% recall. The top three attributes that cause churn are found to be tenure, Internet Service Fiber optic, and Internet Service DSL; the top three best-performing models in this article are Logistic Regression, Deep Neural Network, and AdaBoost. The k-means algorithm is applied to establish and analyze four different customer clusters. This study has effectively identified the customers that are at risk of churn, and its results may be utilized to develop and execute strategies that lower customer attrition.Keywords: attrition, retention, predictive modeling, customer segmentation, telecommunications
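As a quick consistency check (not the authors' code), the reported F1 score follows from the reported precision and recall as their harmonic mean:

```python
# F1 is the harmonic mean of precision and recall; the figures below are the
# Logistic Regression results reported in the abstract.
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(0.6895, 0.5657)  # 68.95% precision, 56.57% recall
```

Rounding `f1` to four decimal places reproduces the reported 0.6215.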
Procedia PDF Downloads 56
2347 Implementation of Correlation-Based Data Analysis as a Preliminary Stage for the Prediction of Geometric Dimensions Using Machine Learning in the Forming of Car Seat Rails
Authors: Housein Deli, Loui Al-Shrouf, Hammoud Al Joumaa, Mohieddine Jelali
Abstract:
When forming metallic materials, fluctuations in material properties, process conditions, and wear lead to deviations in the component geometry. Several hundred features sometimes need to be measured, especially in the case of functional and safety-relevant components. These can only be measured offline due to the large number of features and the accuracy requirements. The risk of producing components outside the tolerances is minimized, but not eliminated, by the statistical evaluation of process capability and control measurements. The inspection intervals are based on the acceptable risk and come at the expense of productivity, but remain reactive and, in some cases, considerably delayed. Due to the considerable progress made in the fields of condition monitoring and measurement technology, permanently installed sensor systems, in combination with machine learning and artificial intelligence in particular, offer the potential to independently derive forecasts for component geometry and thus eliminate the risk of defective products, actively and preventively. The reliability of such forecasts depends on the quality, completeness, and timeliness of the data. Measuring all geometric characteristics is neither sensible nor technically possible. This paper therefore uses the example of car seat rail production to discuss the necessary first step of feature selection and reduction by correlation analysis, as otherwise it would not be possible to forecast components in real time and inline. Four different car seat rails with an average of 130 features were selected and measured using a coordinate measuring machine (CMM). Running such a measuring program alone takes up to 20 minutes. In practice, this results in the risk of faulty production of at least 2000 components that have to be sorted or scrapped if the measurement results are negative.
Over a period of 2 months, all measurement data (>200 measurements per variant) were collected and evaluated using correlation analysis. As part of this study, the number of characteristics to be measured for all 6 car seat rail variants was reduced by over 80%. Specifically, direct correlations were proven for almost 100 of an average of 125 characteristics across the 4 different products. A further 10 features correlate via indirect relationships, so that the number of features required for a prediction could be reduced to fewer than 20. A correlation factor >0.8 was assumed for all correlations.Keywords: long-term SHM, condition monitoring, machine learning, correlation analysis, component prediction, wear prediction, regression analysis
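A correlation-based reduction of this kind can be sketched with a simple greedy filter: keep a feature only if it is not strongly correlated (|r| > 0.8, the threshold used in the study) with any feature already kept, so that one representative per correlated group suffices. This is an illustration under assumed data, not the production pipeline.

```python
import pandas as pd

# Illustrative greedy correlation filter (not the production code): keep a
# feature only if its |correlation| with every already-kept feature is at or
# below the threshold, so one representative per correlated group is measured.
def reduce_features(measurements: pd.DataFrame, threshold: float = 0.8):
    corr = measurements.corr().abs()
    kept = []
    for col in corr.columns:
        if all(corr.loc[col, k] <= threshold for k in kept):
            kept.append(col)
    return kept
```

On a toy frame where one column is an exact multiple of another, only one of the pair survives the filter.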
Procedia PDF Downloads 46
2346 Comparison of Different Intraocular Lens Power Calculation Formulas in People With Very High Myopia
Authors: Xia Chen, Yulan Wang
Abstract:
Purpose: To compare the accuracy of the Haigis, SRK/T, T2, Holladay 1, Hoffer Q, Barrett Universal II, Emmetropia Verifying Optical (EVO) and Kane formulas for intraocular lens power calculation in patients with axial length (AL) ≥ 28 mm. Methods: In this retrospective single-center study, 50 eyes of 41 patients with AL ≥ 28 mm that underwent uneventful cataract surgery were enrolled. The actual postoperative refractive results were compared to the predicted refractions calculated with the different formulas (Haigis, SRK/T, T2, Holladay 1, Hoffer Q, Barrett Universal II, EVO and Kane), and the mean absolute prediction errors (MAE) 1 month postoperatively were compared. Results: The MAEs of the formulas were as follows: Haigis (0.509), SRK/T (0.705), T2 (0.999), Holladay 1 (0.714), Hoffer Q (0.583), Barrett Universal II (0.552), EVO (0.463) and Kane (0.441). No significant difference was found among the formulas (P = .122). The Kane and EVO formulas achieved the lowest mean prediction error (PE) and median absolute error (MedAE) (p < 0.05). Conclusion: The Kane and EVO formulas were more successful than the others in predicting IOL power in highly myopic eyes with AL longer than 28 mm in this study.Keywords: cataract, power calculation formulas, intraocular lens, long axial length
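The error statistics used for the comparison can be computed as below; the refraction values in the example are made up for illustration and are not from the study.

```python
import numpy as np

# Mean (signed) prediction error, mean absolute error (MAE), and median absolute
# error (MedAE) between predicted and actual postoperative refraction (diopters).
# The example refractions are hypothetical, not study data.
def prediction_errors(predicted, actual):
    err = np.asarray(actual, float) - np.asarray(predicted, float)
    return {"ME": float(err.mean()),                 # signed prediction error
            "MAE": float(np.abs(err).mean()),
            "MedAE": float(np.median(np.abs(err)))}

stats = prediction_errors(predicted=[-0.50, -0.25, 0.00],
                          actual=[-0.75, -0.30, 0.25])
```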
Procedia PDF Downloads 82
2345 Prediction of Critical Flow Rate in Tubular Heat Exchangers for the Onset of Damaging Flow-Induced Vibrations
Authors: Y. Khulief, S. Bashmal, S. Said, D. Al-Otaibi, K. Mansour
Abstract:
The prediction of the flow rates at which vibration-induced instability takes place in tubular heat exchangers due to cross-flow is of major importance to the performance and service life of such equipment. In this paper, a semi-analytical model for square tube arrays was extended and utilized to study triangular tube patterns. A laboratory test rig with an instrumented test section is used to measure the fluidelastic coefficients used to tune the mathematical model. The test section can be made of any bundle pattern; in this study, two test sections were constructed, for the normal triangular and the rotated triangular tube arrays. The developed scheme is utilized in predicting the onset of flow-induced instability in the two triangular tube arrays, and the results are compared to those obtained for two other bundle configurations. The results for the four tube patterns are viewed in the light of TEMA predictions. The comparison demonstrated that the TEMA guidelines are more conservative in all configurations considered.Keywords: fluid-structure interaction, cross-flow, heat exchangers,
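A common semi-analytical starting point for such predictions, and the form underlying TEMA-style guidance, is a Connors-type stability criterion, V_c/(f·d) = K·sqrt(m·δ/(ρ·d²)). The sketch below assumes this form; the stability constant K and all numeric values are illustrative, not the coefficients measured in the study.

```python
import math

# Connors-type fluidelastic stability criterion (sketch, not the paper's model):
#   V_c / (f * d) = K * sqrt(m * delta / (rho * d^2))
# f = tube natural frequency, d = tube diameter, m = tube mass per unit length,
# delta = logarithmic decrement of damping, rho = fluid density, K = pattern-
# dependent stability constant (value below is a hypothetical placeholder).
def critical_velocity(f_hz, d_m, m_kg_m, delta_log_dec, rho_kg_m3, k_connors):
    reduced = k_connors * math.sqrt(m_kg_m * delta_log_dec / (rho_kg_m3 * d_m**2))
    return reduced * f_hz * d_m  # critical cross-flow gap velocity, m/s

# Illustrative values for a water-side bundle (all assumed)
v_c = critical_velocity(f_hz=40.0, d_m=0.019, m_kg_m=1.2,
                        delta_log_dec=0.03, rho_kg_m3=998.0, k_connors=3.0)
```

A more conservative (lower) K shifts the predicted instability threshold to lower flow rates, which is the sense in which the TEMA guidelines are described above as conservative.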
Procedia PDF Downloads 277