Search results for: meteorological prediction data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25578

24918 Estimation of Coefficient of Discharge of Side Trapezoidal Labyrinth Weir Using Group Method of Data Handling Technique

Authors: M. A. Ansari, A. Hussain, A. Uddin

Abstract:

A side weir is a flow diversion structure provided in the side wall of a channel to divert water from the main channel to a branch channel. The trapezoidal labyrinth weir is a special type of weir in which the crest length is increased to pass a higher discharge. Experimental and numerical studies related to the coefficient of discharge of a trapezoidal labyrinth weir in an open channel are presented in this study. The Group Method of Data Handling (GMDH) with a quadratic polynomial transfer function has been used to predict the coefficient of discharge of the side trapezoidal labyrinth weir. A new model for the coefficient of discharge of the labyrinth weir has also been developed by the regression method, along with generalized models for predicting the coefficient of discharge using the GMDH network. The predictions based on the GMDH model are more satisfactory than those given by the traditional regression equations.
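
To illustrate the GMDH building block mentioned above, the sketch below fits the quadratic polynomial transfer function used for one pair of candidate inputs; the input variables and data are synthetic placeholders, not the weir measurements from the study.

```python
# Minimal sketch of one GMDH building block: the quadratic polynomial
# "partial description" fitted for a pair of candidate inputs.
# Synthetic data; x1, x2 and the target Cd are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(0.2, 1.0, 200)          # e.g. head-to-crest-height ratio (assumed)
x2 = rng.uniform(1.0, 4.0, 200)          # e.g. crest-length magnification (assumed)
cd = 0.6 - 0.1 * x1 + 0.05 * x2 - 0.02 * x1 * x2 + rng.normal(0, 0.01, 200)

# Design matrix of the quadratic polynomial transfer function
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coeffs, *_ = np.linalg.lstsq(A, cd, rcond=None)   # least-squares fit of a0..a5

cd_hat = A @ coeffs
rmse = np.sqrt(np.mean((cd - cd_hat) ** 2))
print("fitted coefficients:", np.round(coeffs, 4))
print("RMSE:", round(rmse, 4))
```

In a full GMDH network, such partial models are fitted for every pair of inputs, the best are retained according to an external selection criterion, and their outputs become the inputs of the next layer.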

Keywords: discharge coefficient, group method of data handling, open channel, side labyrinth weir

Procedia PDF Downloads 146
24917 Enhancing a Recidivism Prediction Tool with Machine Learning: Effectiveness and Algorithmic Fairness

Authors: Marzieh Karimihaghighi, Carlos Castillo

Abstract:

This work studies how Machine Learning (ML) may be used to increase the effectiveness of a criminal recidivism risk assessment tool, RisCanvi. The two key dimensions of this analysis are predictive accuracy and algorithmic fairness. The ML-based prediction models obtained in this study are more accurate at predicting criminal recidivism than the manually created formula used in RisCanvi, achieving AUCs of 0.76 and 0.73 in predicting violent and general recidivism, respectively. However, the improvements are small, and it is observed that algorithmic discrimination can easily be introduced between groups such as nationals vs. foreigners, or young vs. old. It is described how effectiveness and algorithmic fairness objectives can be balanced by applying a method in which a single error disparity, expressed in terms of the generalized false positive rate, is minimized while calibration is maintained across groups. The results show that this bias mitigation procedure can substantially reduce generalized false positive rate disparities across multiple groups. Based on these results, it is proposed that ML-based criminal recidivism risk prediction should not be introduced without applying algorithmic bias mitigation procedures.
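
As a hedged illustration of the fairness metric used above, the sketch below computes the generalized false positive rate (mean predicted risk among non-recidivists) separately per group on synthetic data; the group labels and scores are invented, not RisCanvi outputs.

```python
# Sketch: generalized false positive rate (GFPR) per group, i.e. the mean
# predicted risk among individuals who did not recidivate. Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
group = rng.choice(["national", "foreigner"], size=n)           # protected attribute (assumed)
y_true = rng.binomial(1, 0.3, size=n)                           # 1 = recidivated
score = np.clip(0.3 * y_true + rng.normal(0.35, 0.15, n), 0, 1) # model risk score

def generalized_fpr(scores, labels):
    """Mean predicted score over true negatives."""
    negatives = labels == 0
    return scores[negatives].mean()

for g in np.unique(group):
    mask = group == g
    print(g, round(generalized_fpr(score[mask], y_true[mask]), 3))
# The disparity to be minimized is the gap between these per-group values,
# subject to keeping the scores calibrated within each group.
```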

Keywords: algorithmic fairness, criminal risk assessment, equalized odds, recidivism

Procedia PDF Downloads 136
24916 Top-K Shortest Distance as a Similarity Measure

Authors: Andrey Lebedev, Ilya Dmitrenok, JooYoung Lee, Leonard Johard

Abstract:

The top-k shortest path routing problem is an extension of finding the shortest path in a given network. The shortest path is one of the most essential measures, as it reveals the relation between two nodes in a network. However, in many real-world networks whose diameters are small, top-k shortest paths are more informative, as they contain more information about the network topology. Many variations for computing top-k shortest paths have been studied. In this paper, we apply an efficient top-k shortest distance routing algorithm to the link prediction problem and test its efficacy. We compare the results with other baseline and state-of-the-art methods, as well as with the shortest path. We also propose a top-k distance based graph matching algorithm.
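
A minimal sketch of using top-k shortest distances as a similarity measure is shown below; it relies on networkx's shortest_simple_paths generator, and the aggregation (inverse of the mean of the k shortest lengths) is one simple choice, not necessarily the paper's.

```python
# Sketch: top-k shortest path lengths between two nodes as a similarity
# signal for link prediction (smaller distances => more similar).
from itertools import islice
import networkx as nx

def top_k_distances(G, u, v, k=3):
    """Lengths of the k shortest simple paths between u and v."""
    gen = nx.shortest_simple_paths(G, u, v)
    return [len(p) - 1 for p in islice(gen, k)]   # path length in edges

def similarity(G, u, v, k=3):
    """Inverse of the mean top-k distance (one simple choice of aggregation)."""
    d = top_k_distances(G, u, v, k)
    return 1.0 / (sum(d) / len(d))

G = nx.karate_club_graph()
print(top_k_distances(G, 0, 33, k=3))
print(round(similarity(G, 0, 33, k=3), 3))
```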

Keywords: graph matching, link prediction, shortest path, similarity

Procedia PDF Downloads 342
24915 Sea-Level Rise and Shoreline Retreat in Tainan Coast

Authors: Wen-Juinn Chen, Yi-Phei Chou, Jou-Han Wang

Abstract:

The Tainan coast is suffering from beach erosion, wave overtopping, and lowland flooding; although most of the shoreline has been protected by seawalls, it is still threatened by sea level rise. For developing coastal resources, utilizing coastal land, and drafting an appropriate mitigation strategy, we must first assess the impact of beach erosion under different scenarios of climate change. Here, we have used meteorological data from 1898 to 2012 to show that the Tainan area has indeed suffered the impact of climate change. The results show that the temperature has risen by about 1.7 degrees since 1989. We also analyzed tidal data near the Tainan coast (the Anpin and Junjunn sites), which show sea level rising at a rate of about 4.1–4.8 mm/year; this phenomenon will have serious impacts on the Tainan coastal area and, in particular, will worsen coastal erosion. We therefore used the Bruun rule to calculate the shoreline retreat for every two-decade period after 2012. Wave data and the bottom sand diameter D50 were used to calculate the closure depth required by the Bruun formula, and the active length of the profile was computed from the beach slope and Dean's equilibrium concept. After the analysis, we found that by 2020 the shoreline will have retreated by about 3.0 to 12 meters, with the maximum retreat occurring along the Chigu coast. By 2060, the average shoreline retreat is 22 m, but at Chigu and Tsenwen the shoreline may retreat by about 70 m and reach about 130 m by 2100. This will cause a substantial loss of coastal land to the sea, so protection and mitigation projects must be carried out quickly.
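
For reference, the Bruun rule used above estimates retreat as R = S·L/(B + h), with S the sea-level rise, L the active profile length, h the closure depth and B the berm height. A worked sketch with purely illustrative values follows; the numbers are not the paper's Tainan parameters.

```python
# Sketch of the Bruun rule: shoreline retreat R = S * L / (B + h),
# where S = sea-level rise, L = active profile length, h = closure depth,
# B = berm (or dune) height. All numerical values below are illustrative.
def bruun_retreat(sea_level_rise_m, active_length_m, closure_depth_m, berm_height_m):
    return sea_level_rise_m * active_length_m / (berm_height_m + closure_depth_m)

rate_mm_per_year = 4.5                   # mid-range of the 4.1-4.8 mm/yr tidal trend
years = 2060 - 2012
S = rate_mm_per_year * years / 1000.0    # metres of rise over the period
L = 500.0                                # assumed active profile length (m)
h = 8.0                                  # assumed closure depth (m)
B = 2.0                                  # assumed berm height (m)

print(f"retreat by 2060 ≈ {bruun_retreat(S, L, h, B):.1f} m")
```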

Keywords: sea level rise, shoreline, coastal erosion, climate change

Procedia PDF Downloads 388
24914 Climate Changes in Albania and Their Effect on Cereal Yield

Authors: Lule Basha, Eralda Gjika

Abstract:

This study focuses on analyzing climate change in Albania and its potential effects on cereal yields. Initially, monthly temperatures and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when trying to model cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to the data, relating cereal yield to each independent variable: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and there is a low correlation between factors, so we do not have a multicollinearity problem. Machine-learning methods, such as random forest, are used to predict cereal yield responses to climatic and other variables. Random forest showed high accuracy compared to the other statistical models in the prediction of cereal yield. We found that changes in average temperature negatively affect cereal yield, while the coefficients of fertilizer consumption, arable land, and land under cereal production positively affect production. Our results show that the random forest method is an effective and versatile machine-learning method for cereal yield prediction compared to the other two methods.
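
A compact sketch of the model comparison described above (multiple linear regression, lasso, and random forest) is given below on synthetic stand-in data; the predictor columns only mirror the variables listed in the abstract.

```python
# Sketch comparing multiple linear regression, lasso and random forest for
# yield prediction. Synthetic stand-in data, one row per year (assumed).
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 62   # roughly one observation per year, 1960-2021
X = np.column_stack([
    rng.normal(12, 1, n),      # average temperature (assumed scale)
    rng.normal(90, 20, n),     # average rainfall
    rng.normal(100, 30, n),    # fertilizer consumption
    rng.normal(400, 50, n),    # arable land
    rng.normal(150, 30, n),    # land under cereals
    rng.normal(5, 1, n),       # nitrous oxide emissions
])
y = (-0.8 * X[:, 0] + 0.02 * X[:, 1] + 0.03 * X[:, 2]
     + 0.01 * X[:, 3] + 0.02 * X[:, 4] + rng.normal(0, 1, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in [("linear", LinearRegression()),
                    ("lasso", Lasso(alpha=0.1)),
                    ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name, round(r2_score(y_te, model.predict(X_te)), 3))
```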

Keywords: cereal yield, climate change, machine learning, multiple regression model, random forest

Procedia PDF Downloads 74
24913 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 16
24912 Churn Prediction for Savings Bank Customers: A Machine Learning Approach

Authors: Prashant Verma

Abstract:

Commercial banks are facing immense pressure, including financial disintermediation, interest rate volatility, and digital modes of finance. Retaining an existing customer is 5 to 25 times less expensive than acquiring a new one. This paper explores customer churn prediction based on various statistical and machine learning models and uses under-sampling to improve the predictive power of these models. The results show that, out of the various machine learning models, random forest, which predicts churn with 78% accuracy, is the most powerful model for this scenario. Customer vintage, customer age, average balance, occupation code, population code, average withdrawal amount, and the average number of transactions were found to be the variables with high predictive power for the churn prediction model. The model can be deployed by commercial banks to reduce customer churn and thereby retain the funds kept by savings bank (SB) customers. The article suggests a customized campaign to be initiated by commercial banks to avoid SB customer churn. Hence, by providing better customer satisfaction and experience, commercial banks can limit customer churn and maintain their deposits.
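
The sketch below shows the under-sampling step described above: the majority (non-churn) class is randomly reduced to match the churn class before fitting a random forest. The data and features are synthetic placeholders, not the bank's records.

```python
# Sketch of under-sampling the majority (non-churn) class before fitting a
# random forest. Synthetic data; feature columns stand in for the predictors
# listed in the abstract (vintage, age, balance, withdrawals, transactions).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 1.8).astype(int)  # rare churn class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Under-sample the majority class in the training set to a 1:1 ratio
churn_idx = np.where(y_tr == 1)[0]
stay_idx = rng.choice(np.where(y_tr == 0)[0], size=len(churn_idx), replace=False)
keep = np.concatenate([churn_idx, stay_idx])

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr[keep], y_tr[keep])
print(classification_report(y_te, clf.predict(X_te), digits=3))
```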

Keywords: savings bank, customer churn, customer retention, random forests, machine learning, under-sampling

Procedia PDF Downloads 124
24911 Municipal Solid Waste Management and Analysis of Waste Generation: A Case Study of Bangkok, Thailand

Authors: Pitchayanin Sukholthaman

Abstract:

The enormous amount of waste that has gradually accumulated has caused tremendous adverse impacts around the world. Bangkok, Thailand, is chosen as an urban city of a developing country that has been coping with serious municipal solid waste (MSW) problems due to the vast amount of waste generated and ineffective, improper waste management. Waste generation is the most important factor for successful planning of an MSW management system. Thus, the prediction of MSW generation plays a very important role in understanding MSW distribution and characteristics, to be used for strategic planning issues. This study aims to find the influencing variables that affect the quantity of MSW generated in Bangkok.

Keywords: MSW generation, MSW quantity prediction, MSW management, multiple regression, Bangkok

Procedia PDF Downloads 405
24910 Artificial Intelligence Based Predictive Models for Short Term Global Horizontal Irradiation Prediction

Authors: Kudzanayi Chiteka, Wellington Makondo

Abstract:

The whole world is on a drive to go green owing to the negative effects of burning fossil fuels. Therefore, there is an immediate need to identify and utilise alternative renewable energy sources. Among these energy sources, solar energy is one of the most dominant in Zimbabwe. Solar power plants used to generate electricity are entirely dependent on solar radiation. For planning purposes, solar radiation values should be known in advance so that necessary arrangements can be made to minimise the negative effects of the absence of solar radiation due to cloud cover and other naturally occurring phenomena. This research focused on the prediction of Global Horizontal Irradiation (GHI) values for the sixth day given values for the past five days. Artificial intelligence techniques were used in this research. Three models were developed, based on Support Vector Machines, Radial Basis Function, and Feed-Forward Back-Propagation Artificial Neural Networks. Results revealed that Support Vector Machines give the best results compared to the other two, with a mean absolute percentage error (MAPE) of 2%, a Mean Absolute Error (MAE) of 0.05 kWh/m²/day, a root mean square (RMS) error of 0.15 kWh/m²/day, and a coefficient of determination of 0.990. The other predictive models had prediction accuracies, in terms of MAPE, of 4.5% and 6% for the Radial Basis Function and the Feed-Forward Back-Propagation Artificial Neural Network, respectively, with coefficients of determination of 0.975 and 0.970. It was found that prediction of GHI values for future days is possible using artificial intelligence-based predictive models.
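
A minimal sketch of the prediction setup described above, using a support vector machine on the previous five daily GHI values to predict the sixth day, is given below with a synthetic irradiation series.

```python
# Sketch: predict day-6 global horizontal irradiation (GHI) from the previous
# five daily values with a support vector machine. The GHI series is synthetic;
# units of kWh/m²/day are assumed.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
days = np.arange(730)
ghi = 5.5 + 1.5 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 0.3, days.size)

# Build lagged samples: five past days -> next day
X = np.array([ghi[i:i + 5] for i in range(len(ghi) - 5)])
y = ghi[5:]
split = int(0.8 * len(X))

model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X[:split], y[:split])
pred = model.predict(X[split:])
mae = mean_absolute_error(y[split:], pred)
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"MAE  = {mae:.3f} kWh/m²/day")
print(f"MAPE = {mape:.1f} %")
```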

Keywords: solar energy, global horizontal irradiation, artificial intelligence, predictive models

Procedia PDF Downloads 257
24909 An Improvement of ComiR Algorithm for MicroRNA Target Prediction by Exploiting Coding Region Sequences of mRNAs

Authors: Giorgio Bertolazzi, Panayiotis Benos, Michele Tumminello, Claudia Coronnello

Abstract:

MicroRNAs are small non-coding RNAs that post-transcriptionally regulate the expression levels of messenger RNAs. MicroRNA regulation activity depends on the recognition of binding sites located on mRNA molecules. ComiR (Combinatorial miRNA targeting) is a user-friendly web tool developed to predict the targets of a set of microRNAs, starting from their expression profile. ComiR incorporates miRNA expression in a thermodynamic binding model, and it associates each gene with the probability of being a target of a set of miRNAs. The ComiR algorithms were trained with information regarding binding sites in the 3’UTR region, using a reliable dataset containing the targets of endogenously expressed microRNAs in D. melanogaster S2 cells. This dataset was obtained by comparing the results from two different experimental approaches, i.e., inhibition and immunoprecipitation of the AGO1 protein; this protein is a component of the microRNA-induced silencing complex. In this work, we tested whether including coding region binding sites in the ComiR algorithm improves the performance of the tool in predicting microRNA targets. We focused the analysis on the D. melanogaster species and updated the ComiR underlying database with the currently available releases of mRNA and microRNA sequences. As a result, we find that the ComiR algorithm trained with the information related to the coding regions is more efficient in predicting the microRNA targets than the algorithm trained with 3’UTR information. On the other hand, we show that 3’UTR-based predictions can be seen as complementary to the coding region based predictions, which suggests that both predictions, from the 3'UTR and coding regions, should be considered in a comprehensive analysis. Furthermore, we observed that the lists of targets obtained by analyzing data from one experimental approach only, that is, inhibition or immunoprecipitation of AGO1, are not reliable enough to test the performance of our microRNA target prediction algorithm. Further analysis will be conducted to investigate the effectiveness of the tool with data from other species, provided that validated datasets, as obtained from the comparison of RISC protein inhibition and immunoprecipitation experiments, become available for the same samples. Finally, we propose to upgrade the existing ComiR web tool by including the coding region based trained model, available together with the 3’UTR based one.

Keywords: AGO1, coding region, Drosophila melanogaster, microRNA target prediction

Procedia PDF Downloads 429
24908 Prediction of B-Cell Epitope for 24 Mite Allergens: An in Silico Approach towards Epitope-Based Immune Therapeutics

Authors: Narjes Ebrahimi, Soheila Alyasin, Navid Nezafat, Hossein Esmailzadeh, Younes Ghasemi, Seyed Hesamodin Nabavizadeh

Abstract:

Immunotherapy with allergy vaccines is of great importance in allergen-specific immunotherapy. In recent years, B-cell epitope-based vaccines have attracted considerable attention and the prediction of epitopes is crucial to design these types of allergy vaccines. B-cell epitopes might be linear or conformational. The prerequisite for the identification of conformational epitopes is the information about allergens' tertiary structures. Bioinformatics approaches have paved the way towards the design of epitope-based allergy vaccines through the prediction of tertiary structures and epitopes. Mite allergens are one of the major allergy contributors. Several mite allergens can elicit allergic reactions; however, their structures and epitopes are not well established. So, B-cell epitopes of various groups of mite allergens (24 allergens in 6 allergen groups) were predicted in the present work. Tertiary structures of 17 allergens with unknown structure were predicted and refined with RaptorX and GalaxyRefine servers, respectively. The predicted structures were further evaluated by Rampage, ProSA-web, ERRAT and Verify 3D servers. Linear and conformational B-cell epitopes were identified with Ellipro, Bcepred, and DiscoTope 2 servers. To improve the accuracy level, consensus epitopes were selected. Fifty-four conformational and 133 linear consensus epitopes were predicted. Furthermore, overlapping epitopes in each allergen group were defined, following the sequence alignment of the allergens in each group. The predicted epitopes were also compared with the experimentally identified epitopes. The presented results provide valuable information for further studies about allergy vaccine design.

Keywords: B-cell epitope, immunotherapy, in silico prediction, mite allergens, tertiary structure

Procedia PDF Downloads 145
24907 On the Creep of Concrete Structures

Authors: A. Brahma

Abstract:

Analysis of the deferred deformations of concrete under sustained load shows that creep plays a leading role in the deferred deformations of concrete structures. Knowledge of the creep characteristics of concrete is a necessary starting point in the design of structures for crack control. Such knowledge enables the designer to estimate the probable deformation in prestressed or reinforced concrete, and the appropriate steps can be taken in design to accommodate this movement. In this study, we propose a prediction model that involves the principal parameters acting on the deferred behaviour of concrete structures. For the estimation of the model parameters, the Levenberg-Marquardt method has proven very satisfactory. A comparison between the experimental results and the predictions of the designed model shows that it is well suited to describing the evolution of the creep of concrete structures.
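
As an illustration of the parameter-estimation step, the sketch below fits a simple hyperbolic creep law to synthetic deferred-strain data with the Levenberg-Marquardt algorithm (scipy's curve_fit); both the creep law and the data are stand-ins, not the authors' model.

```python
# Sketch: fitting a creep model to measured deferred strains with the
# Levenberg-Marquardt algorithm (scipy's default for unconstrained curve_fit).
# The hyperbolic creep law is only an illustrative stand-in; data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def creep(t, eps_inf, tau):
    """Creep strain approaching eps_inf with characteristic time tau (days)."""
    return eps_inf * t / (tau + t)

t_days = np.array([1, 3, 7, 14, 28, 90, 180, 365, 730], dtype=float)
measured = creep(t_days, 650e-6, 45.0) + np.random.default_rng(0).normal(0, 8e-6, t_days.size)

params, cov = curve_fit(creep, t_days, measured, p0=[500e-6, 30.0], method="lm")
print("eps_inf =", params[0], "tau =", params[1])
```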

Keywords: concrete structure, creep, modelling, prediction

Procedia PDF Downloads 275
24906 Improved 3D Structure Prediction of Beta-Barrel Membrane Proteins by Using Evolutionary Coupling Constraints, Reduced State Space and an Empirical Potential Function

Authors: Wei Tian, Jie Liang, Hammad Naveed

Abstract:

Beta-barrel membrane proteins are found in the outer membrane of gram-negative bacteria, mitochondria, and chloroplasts. They carry out diverse biological functions, including pore formation, membrane anchoring, enzyme activity, and bacterial virulence. In addition, beta-barrel membrane proteins increasingly serve as scaffolds for bacterial surface display and nanopore-based DNA sequencing. Due to difficulties in experimental structure determination, they are sparsely represented in the protein structure databank and computational methods can help to understand their biophysical principles. We have developed a novel computational method to predict the 3D structure of beta-barrel membrane proteins using evolutionary coupling (EC) constraints and a reduced state space. Combined with an empirical potential function, we can successfully predict strand register at > 80% accuracy for a set of 49 non-homologous proteins with known structures. This is a significant improvement from previous results using EC alone (44%) and using empirical potential function alone (73%). Our method is general and can be applied to genome-wide structural prediction.

Keywords: beta-barrel membrane proteins, structure prediction, evolutionary constraints, reduced state space

Procedia PDF Downloads 592
24905 A Review of Current Knowledge on Assessment of Precast Structures Using Fragility Curves

Authors: E. Akpinar, A. Erol, M.F. Cakir

Abstract:

Precast reinforced concrete (RC) structures are excellent alternatives for the construction world all over the globe, thanks to their rapid erection, easy mounting process, better quality, and reasonable prices. Such structures are particularly popular for industrial buildings. Given the economic importance of such industrial buildings as well as the significance of safety, performance assessment and structural risk analysis are important for them, as for every other type of structure. Fragility curves are powerful tools for damage projection and assessment for any sort of building, including precast structures. In this study, a comparative review of current knowledge on the fragility analysis of industrial precast RC structures is presented and findings from previous studies are compiled. The effects of different structural variables, parameters, and building geometries, as well as soil conditions, on the fragility analysis of precast structures are reviewed. The aim is to briefly present the information available in the literature about the procedure of damage probability prediction, including fragility curves, for such industrial facilities. It is found that determination of the aforementioned structural parameters, as well as selection of the analysis procedure, is critically important for damage prediction of industrial precast RC structures using fragility curves.
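
For reference, fragility curves in such studies are most often expressed in lognormal form, P(DS ≥ ds | IM) = Φ(ln(IM/θ)/β); the sketch below evaluates this form for illustrative median capacities θ and dispersions β, which are not taken from any reviewed study.

```python
# Sketch: a lognormal fragility curve,
# P(damage state reached | IM) = Phi(ln(IM/theta) / beta),
# where theta is the median capacity and beta the lognormal dispersion.
# The parameter values below are illustrative only.
import numpy as np
from scipy.stats import norm

def fragility(im, theta, beta):
    """Probability of reaching a damage state at intensity measure im."""
    return norm.cdf(np.log(im / theta) / beta)

pga = np.linspace(0.05, 1.5, 8)          # peak ground acceleration (g), assumed IM
for ds, (theta, beta) in {"slight": (0.25, 0.5),
                          "moderate": (0.5, 0.5),
                          "collapse": (0.9, 0.6)}.items():
    print(ds, np.round(fragility(pga, theta, beta), 2))
```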

Keywords: damage prediction, fragility curve, industrial buildings, precast reinforced concrete structures

Procedia PDF Downloads 174
24904 Simulation of Surface Runoff in Mahabad Dam Basin, Iran

Authors: Leila Khosravi

Abstract:

A major part of the drinking water in the northwest of Iran is supplied from the Mahabad reservoir, 80 km northwest of Mahabad. This reservoir collects water from a 750 km² catchment which is undergoing accelerated changes due to deforestation and urbanization. The main objective of this study is to develop a catchment modeling platform which translates ongoing land-use changes, soil data, precipitation, and evaporation into the surface runoff of the river discharging into the reservoir, using the Soil and Water Assessment Tool (SWAT) model along with hydro-meteorological records from 1997–2011. A variety of statistical indices were used to evaluate the simulation results for both the calibration and validation periods; among them, the robust Nash–Sutcliffe coefficients were found to be 0.52 and 0.62 in the calibration and validation periods, respectively. This project has developed a reliable modeling platform with the benchmark land physical conditions of the Mahabad dam basin.
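
The Nash–Sutcliffe efficiency quoted above can be computed as in the sketch below; the observed and simulated discharge values are illustrative, not the Mahabad records.

```python
# Sketch: Nash-Sutcliffe efficiency (NSE) used to judge the SWAT simulation,
# computed for an observed/simulated discharge pair (synthetic values here).
import numpy as np

def nash_sutcliffe(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

obs = np.array([12.0, 18.5, 25.1, 30.4, 22.0, 15.3, 10.2])   # m³/s, illustrative
sim = np.array([10.8, 20.1, 23.7, 28.0, 24.5, 14.0, 11.5])
print("NSE =", round(nash_sutcliffe(obs, sim), 2))
```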

Keywords: simulation, surface runoff, Mahabad dam, SWAT model

Procedia PDF Downloads 187
24903 Predicting Recessions with Bivariate Dynamic Probit Model: The Czech and German Case

Authors: Lukas Reznak, Maria Reznakova

Abstract:

Recession of an economy has a profound negative effect on all the stakeholders involved. It follows that timely prediction of recessions has been of utmost interest, both in theoretical research and in practical macroeconomic modelling. The current mainstream of recession prediction is based on standard OLS models of continuous GDP using macroeconomic data. This approach is not suitable for two reasons: the standard continuous models are proving to be obsolete, and the macroeconomic data are unreliable, often revised many years retroactively. The aim of the paper is to explore a different branch of recession forecasting theory and verify the findings on real data from the Czech Republic and Germany. In the paper, the authors present a family of discrete choice probit models with parameters estimated by the method of maximum likelihood. In the basic form, the probits model a univariate series of recessions and expansions in the economic cycle for a given country. The majority of the paper deals with more complex model structures, namely dynamic and bivariate extensions. The dynamic structure models the autoregressive nature of recessions, taking previous economic activity into consideration to predict the development in subsequent periods. Bivariate extensions utilize information from a foreign economy by incorporating the correlation of error terms and thus modelling the dependencies of the two countries. Bivariate models predict a bivariate time series of economic states in both economies and thus enhance the predictive performance. A vital enabler of timely and successful recession forecasting is reliable and readily available data. Leading indicators, namely the yield curve and stock market indices, represent an ideal data base, as this information is available in advance and does not undergo any retroactive revisions. As importantly, the combination of the yield curve and stock market indices reflects a range of macroeconomic and financial-market investors' trends which influence the economic cycle. These theoretical approaches are applied to real data from the Czech Republic and Germany. Two models were identified for each country – one for in-sample and one for out-of-sample predictive purposes. All four followed a bivariate structure, while three contained a dynamic component.
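
A univariate dynamic probit in the spirit described above can be sketched as follows, with the lagged recession state entering as a regressor alongside the leading indicators; the data are simulated, and the bivariate extension with correlated error terms is not shown.

```python
# Sketch of a univariate dynamic probit: the recession indicator is regressed
# on its own lag plus leading indicators (term spread, stock index return).
# All data are simulated; coefficients and scales are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200
spread = rng.normal(0.5, 1.0, T)            # yield-curve term spread
stock = rng.normal(0.5, 2.0, T)             # stock market index return

# Simulate a persistent recession indicator driven by the leading indicators
rec = np.zeros(T, dtype=int)
for t in range(1, T):
    z = -0.5 + 1.5 * rec[t - 1] - 0.8 * spread[t - 1] - 0.2 * stock[t - 1]
    rec[t] = int(z + rng.normal() > 0)

# Dynamic probit: the lagged recession state enters as a regressor
X = sm.add_constant(np.column_stack([rec[:-1], spread[:-1], stock[:-1]]))
model = sm.Probit(rec[1:], X).fit(disp=False)
print(model.params)                          # const, lagged state, spread, stock
print("P(recession next period) =", round(model.predict(X[-1:])[0], 3))
```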

Keywords: bivariate probit, leading indicators, recession forecasting, Czech Republic, Germany

Procedia PDF Downloads 233
24902 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading

Authors: Robert Caulk

Abstract:

A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training data-set and using the parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed in the presentation. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-lib, pandas-ta). The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (Catboost, LightGBM, Sklearn, etc.) as well as common Neural Network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.

Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration

Procedia PDF Downloads 72
24901 Development of Geo-computational Model for Analysis of Lassa Fever Dynamics and Lassa Fever Outbreak Prediction

Authors: Adekunle Taiwo Adenike, I. K. Ogundoyin

Abstract:

Lassa fever is a neglected tropical disease that has become a significant public health issue in Nigeria, the country with the greatest burden in Africa. This paper presents a geo-computational model for the analysis and prediction of Lassa fever dynamics and outbreaks in Nigeria. The model investigates the dynamics of the virus with respect to environmental factors and human populations. It confirms the role of the rodent host in virus transmission and identifies how transmission is affected by climate and human population. The proposed methodology is carried out on a Linux operating system using the OSGeoLive virtual machine for geographical computing, which serves as a base for spatial ecology computing. The model design uses the Unified Modeling Language (UML), and the performance evaluation uses machine learning algorithms such as random forest, fuzzy logic, and neural networks. The study aims to contribute to the control of Lassa fever, which is achievable through the combined efforts of public health professionals and geo-computational and machine learning tools. The research findings will potentially be more readily accepted and utilized by decision-makers for the attainment of Lassa fever elimination.

Keywords: geo-computational model, Lassa fever dynamics, Lassa fever, outbreak prediction, Nigeria

Procedia PDF Downloads 75
24900 Combination of Artificial Neural Network Model and Geographic Information System for Prediction Water Quality

Authors: Sirilak Areerachakul

Abstract:

Water quality has initiated serious management efforts in many countries. Artificial Neural Network (ANN) models have been developed as forecasting tools for predicting water quality trends based on historical data. This study endeavors to automatically classify water quality. The water quality classes are evaluated using six factor indices: pH value (pH), Dissolved Oxygen (DO), Biochemical Oxygen Demand (BOD), Nitrate Nitrogen (NO3N), Ammonia Nitrogen (NH3N), and Total Coliform (T-Coliform). The methodology involves applying data mining techniques using multilayer perceptron (MLP) neural network models. The data consisted of 11 sites along the Saen Saep canal in Bangkok, Thailand, obtained from the Department of Drainage and Sewerage, Bangkok Metropolitan Administration, during 2007-2011. The results show that the multilayer perceptron neural network achieves a high classification accuracy of 94.23% in classifying the water quality of the Saen Saep canal in Bangkok. Subsequently, this encouraging result could be combined with GIS data to improve the classification accuracy significantly.
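
A minimal sketch of an MLP classifier on the six water-quality indices listed above is shown below; the measurements and class labels are synthetic stand-ins for the Saen Saep canal data.

```python
# Sketch: a multilayer perceptron classifying water quality from six indices
# (pH, DO, BOD, NO3N, NH3N, total coliform). Synthetic stand-in data; the
# class labels are derived from an arbitrary DO/BOD balance for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
X = np.column_stack([
    rng.normal(7.2, 0.5, n),     # pH
    rng.normal(4.0, 1.5, n),     # DO (mg/L)
    rng.normal(8.0, 3.0, n),     # BOD (mg/L)
    rng.normal(1.0, 0.5, n),     # NO3N (mg/L)
    rng.normal(2.5, 1.0, n),     # NH3N (mg/L)
    rng.normal(5e4, 2e4, n),     # total coliform (MPN/100 mL)
])
quality = np.digitize(X[:, 2] - X[:, 1], [-1, 3, 7])   # four classes, illustrative rule

X_tr, X_te, y_tr, y_te = train_test_split(X, quality, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
clf.fit(X_tr, y_tr)
print("accuracy:", round(clf.score(X_te, y_te), 3))
```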

Keywords: artificial neural network, geographic information system, water quality, computer science

Procedia PDF Downloads 324
24899 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm

Authors: G. Singer, M. Golan

Abstract:

Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly-used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally-occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used exhibit an ordering form, we propose a method based on ordinal decision-trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data. The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually.

Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension

Procedia PDF Downloads 88
24898 A Comparative Study of Optimization Techniques and Models to Forecasting Dengue Fever

Authors: Sudha T., Naveen C.

Abstract:

Dengue is a serious public health issue that causes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating for a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of sudden disease outbreak control efforts. The National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention are the two U.S. Federal Government agencies from which this study uses environmental data. Based on environmental data that describe changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, which includes handling outliers and missing values to make sure the data are ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. During the third phase of the research, machine learning models like the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, the models' performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE) as metrics. The goal is to select an optimization strategy with the fewest errors, lowest cost, highest productivity, or maximum potential results. Optimization is widely employed in a variety of fields, including engineering, science, management, mathematics, finance, and medicine. An effective optimization method based on harmony search and an integrated genetic algorithm is introduced for input feature selection, and it shows a significant improvement in the model's predictive accuracy. The predictive models built on the Huber Regressor perform best for both optimization and prediction.

Keywords: deep learning model, dengue fever, prediction, optimization

Procedia PDF Downloads 38
24897 Prediction of Compressive Strength in Geopolymer Composites by Adaptive Neuro Fuzzy Inference System

Authors: Mehrzad Mohabbi Yadollahi, Ramazan Demirboğa, Majid Atashafrazeh

Abstract:

Geopolymers are highly complex materials that involve many variables, which makes modeling their properties very difficult. There is no systematic approach to mix design for geopolymers. Since the silica modulus, Na2O content, w/b ratio, and curing time have a great influence on the compressive strength, an ANFIS (adaptive neuro-fuzzy inference system) method has been established for predicting the compressive strength of ground-pumice-based geopolymers, and the possibilities of ANFIS for predicting the compressive strength have been studied. Consequently, ANFIS can be used for geopolymer compressive strength prediction with acceptable accuracy.

Keywords: geopolymer, ANFIS, compressive strength, mix design

Procedia PDF Downloads 822
24896 Prediction of Deformations of Concrete Structures

Authors: A. Brahma

Abstract:

Drying is a phenomenon that accompanies the hardening of hydraulic materials. If it is not prevented, it can lead to significant spontaneous dimensional variations, of which cracking is one consequence. In this context, cracking promotes the transport of aggressive agents into the material, which can affect the durability of concrete structures. Drying shrinkage develops over a long period of almost 30 years, although most of it occurs during the first three years. Drying shrinkage stabilizes when the material is in water balance with the external environment. The drying shrinkage of cementitious materials is due to the formation of capillary tensions in the pores of the material, which has the consequence of drawing the solid walls closer to each other. Knowledge of the shrinkage characteristics of concrete is a necessary starting point in the design of structures for crack control. Such knowledge enables the designer to estimate the probable shrinkage movement in reinforced or prestressed concrete, and the appropriate steps can be taken in design to accommodate this movement. This study is concerned with the modelling of the drying shrinkage of hydraulic materials and the prediction of the rate of spontaneous deformations of hydraulic materials during hardening. The model developed takes into consideration the main factors affecting drying shrinkage. There is good agreement between the drying shrinkage predicted by the developed model and the experimental results. Finally, we show that the developed model correctly describes the evolution of the drying shrinkage of high-performance concretes.

Keywords: drying, hydraulic concretes, shrinkage, modeling, prediction

Procedia PDF Downloads 318
24895 The Characteristics of Settlement Owing to the Construction of Several Parallel Tunnels with Short Distances

Authors: Lojain Suliman, Xinrong Liu, Xiaohan Zhou

Abstract:

Since most tunnels are built in crowded metropolitan settings, the excavation process must take place in highly congested locations, including high-density cities. As a result, tunnels are typically located close together, which leads to more interaction between parallel existing tunnels and, in turn, to more settlement. This research presents an examination of the impact of a large-scale tunnel excavation on two forms of settlement: surface settlement and settlement surrounding the tunnel. The characteristics of the interaction between two and three parallel tunnels have also been studied. The settlement has been evaluated using three primary techniques: theoretical modeling, numerical simulation, and data monitoring. In addition, a parametric investigation of how distance affects the settlement characteristics of closely spaced parallel tunnels has been completed, and it has been observed that the sequence of excavation has an impact on settlement behavior. A comparison of the model test and the numerical simulation shows good agreement in terms of settlement trend and value. However, when compared to the FEM study, the suggested analytical solution exhibits reduced sensitivity in the settlement prediction; for example, the settlement of the small-diameter tunnel does not appear clearly on the settlement curve, while it is notable in the FEM analysis. It is advised, however, that additional studies be conducted in the future employing analytical solutions for settlement prediction for parallel tunnels.

Keywords: settlement, FEM, analytical solution, parallel tunnels

Procedia PDF Downloads 12
24894 Challenges and Implications for Choice of Caesarian Section and Natural Birth in Pregnant Women with Pre-Eclampsia in Western Nigeria

Authors: F. O. Adeosun, I. O. Orubuloye, O. O. Babalola

Abstract:

Although caesarean section has greatly improved obstetric care throughout the world, in developing countries there is a great aversion to it. This study was carried out to examine the rate at which pregnant women with pre-eclampsia choose caesarean section over natural birth. A cross-sectional study was conducted among 500 pre-eclampsia antenatal clients seen at the States University Teaching Hospitals in the last year. The sample selection was purposive. Information on their educational background, beliefs, and attitudes was collected. Data analysis was presented using simple percentages. Of the 500 women studied, 38% favored caesarean section while 62% were against it. About 89% of them understood what a caesarean section is, yet 57.3% of those who understood it would still not choose it as an option. Over 85% of the women believed caesarean section is done for medical reasons. If caesarean section were given as an option for childbirth, 38% would go for it, 29% would try religious intervention, 5.5% would not choose it because of fear, and 27.5% would reject it because they believe it is culturally wrong. The majority of respondents (85%) who favored caesarean delivery are aware of the risk attached to choosing vaginal birth and go the extra mile in sourcing funds for a caesarean section, while over 64% cannot afford the cost of caesarean delivery. It is therefore pertinent to encourage research into prediction methods and prevention of occurrence, since this would assist patients in planning how to finance treatment.

Keywords: caesarean section, choice, cost, pre-eclampsia, prediction methods

Procedia PDF Downloads 303
24893 GCM Based Fuzzy Clustering to Identify Homogeneous Climatic Regions of North-East India

Authors: Arup K. Sarma, Jayshree Hazarika

Abstract:

The north-eastern part of India, which receives heavier rainfall than other parts of the subcontinent, is of great concern nowadays with regard to climate change. High-intensity rainfall of short duration and longer dry spells, occurring due to the impact of climate change, also affect river morphology. In the present study, an attempt is made to delineate the north-eastern region of India into homogeneous clusters based on the fuzzy clustering concept and to compare the resulting clusters obtained using conventional and non-conventional methods of clustering. The concept of clustering is adopted in view of the fact that the impact of climate change can be studied in a homogeneous region without much variation, which can be helpful in studies related to water resources planning and management. Ten IMD (Indian Meteorological Department) stations, situated in various regions of the north-east, have been selected for making the clusters. The results of the Fuzzy C-Means (FCM) analysis show different clustering patterns for different conditions. From the analysis and comparison, it can be concluded that the non-conventional method of using GCM data gives somewhat better results than the others. However, further analysis can be done by taking daily data instead of monthly means to reduce the effect of standardization.
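
The fuzzy c-means algorithm used above can be written out compactly as in the sketch below; the "station" feature vectors are synthetic, and the number of clusters is chosen arbitrarily for illustration.

```python
# Sketch of the fuzzy c-means (FCM) algorithm, written out in NumPy for
# clarity. Each "station" row would hold standardized climatic statistics
# (e.g. mean monthly rainfall/temperature); the data below are synthetic.
import numpy as np

def fcm(X, c=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # fuzzy memberships sum to 1
    for _ in range(n_iter):
        W = U ** m
        centers = W.T @ X / W.sum(axis=0)[:, None]    # weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)             # update memberships
    return centers, U

stations = np.vstack([np.random.default_rng(1).normal(loc, 0.3, (10, 2))
                      for loc in ([0, 0], [3, 0], [0, 3])])
centers, U = fcm(stations, c=3)
print("cluster centres:\n", np.round(centers, 2))
print("membership of first station:", np.round(U[0], 2))
```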

Keywords: climate change, conventional and nonconventional methods of clustering, FCM analysis, homogeneous regions

Procedia PDF Downloads 365
24892 Solar Radiation Time Series Prediction

Authors: Cameron Hamilton, Walter Potter, Gerrit Hoogenboom, Ronald McClendon, Will Hobbs

Abstract:

A model was constructed to predict the amount of solar radiation that will reach the surface of the earth at a given location an hour into the future. This project was supported by the Southern Company to determine at what specific times during a given day of the year solar panels could be relied upon to produce energy in sufficient quantities. Owing to their ability as universal function approximators, artificial neural networks were used to estimate the nonlinear pattern of solar radiation, with measurements of weather conditions collected at the Griffin, Georgia, weather station as inputs. A number of network configurations and training strategies were utilized, though a multilayer perceptron with a variety of hidden nodes trained with the resilient propagation algorithm consistently yielded the most accurate predictions. In addition, a modeled DNI field and adjacent weather station data were used to bolster prediction accuracy. In later trials, the solar radiation field was preprocessed with a discrete wavelet transform with the aim of removing noise from the measurements. The current model provides predictions of solar radiation with a mean square error of 0.0042, though ongoing efforts are being made to further improve the model's accuracy.
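
The wavelet preprocessing step mentioned above can be sketched as follows: decompose the series, soft-threshold the detail coefficients, and reconstruct. The db4 wavelet and the universal threshold are assumptions for illustration, not the paper's choices.

```python
# Sketch of discrete-wavelet-transform denoising of a solar-radiation series:
# decompose, soft-threshold the detail coefficients, reconstruct. Synthetic data.
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.arange(512)
clean = 400 * np.clip(np.sin(2 * np.pi * t / 256), 0, None)   # crude daily irradiance shape
noisy = clean + rng.normal(0, 30, t.size)

coeffs = pywt.wavedec(noisy, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745                 # noise estimate from finest scale
thr = sigma * np.sqrt(2 * np.log(noisy.size))                  # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[:noisy.size]

print("noise std before:", round(np.std(noisy - clean), 1))
print("noise std after :", round(np.std(denoised - clean), 1))
```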

Keywords: artificial neural networks, resilient propagation, solar radiation, time series forecasting

Procedia PDF Downloads 368
24891 Analysis and Prediction of COVID-19 by Using Recurrent LSTM Neural Network Model in Machine Learning

Authors: Grienggrai Rajchakit

Abstract:

As we all know, coronavirus disease has been declared a pandemic by the WHO. It spread all over the world within a few days. To control this spread, maintaining social distance and self-preventive measures by every citizen are the best strategies. As of now, many researchers and scientists are continuing their research to find an effective vaccine. Machine learning models find that the coronavirus disease behaves in an exponential manner. To mitigate the consequences of this pandemic, efficient steps should be taken to analyze the disease. In this paper, a recurrent neural network model is chosen to predict the number of active cases in a particular state. To make this prediction of active cases, we need a database. The COVID-19 database is downloaded from the Kaggle website and analyzed by applying a recurrent LSTM neural network with univariate features to predict the number of active cases of patients suffering from the coronavirus. The downloaded database is divided into training and testing sets for the chosen neural network model. The model is trained with the training dataset and tested with the testing dataset to predict the number of active cases in a particular state; here, we have concentrated on the state of Andhra Pradesh.
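
A minimal sketch of the univariate LSTM setup described above is given below on a synthetic exponential-growth series; in the study the inputs would come from the Kaggle COVID-19 dataset for Andhra Pradesh.

```python
# Sketch of a univariate LSTM on a daily active-case series. Synthetic data;
# the 7-day window and network size are illustrative assumptions.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
days = np.arange(200)
cases = 50 * np.exp(0.03 * days) * (1 + 0.05 * rng.normal(size=days.size))  # rough exponential growth
scaled = (cases - cases.min()) / (cases.max() - cases.min())

window = 7                                            # use the past week to predict the next day
X = np.array([scaled[i:i + window] for i in range(len(scaled) - window)])[..., None]
y = scaled[window:]
split = int(0.8 * len(X))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=30, verbose=0)

pred = model.predict(X[split:], verbose=0).ravel()
pred_cases = pred * (cases.max() - cases.min()) + cases.min()
print("last predicted active cases:", int(pred_cases[-1]))
```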

Keywords: COVID-19, coronavirus, Kaggle, LSTM neural network, machine learning

Procedia PDF Downloads 141
24890 Towards a Sustainable Energy Future: Method Used in Existing Buildings to Implement Sustainable Energy Technologies

Authors: Georgi Vendramin, Aurea Lúcia, Yamamoto, Carlos Itsuo, Souza Melegari, N. Samuel

Abstract:

This article describes the development of a model that uses a method in which openings are represented by single and double glazing. The model is based on heat balance equations, both purely theoretical and derived from empirical data. Simplified equations are derived through a synthesis of the measured data obtained from meteorological stations. The implementation of the model in an integrated building design tool is discussed in this article, to better articulate the requirements of comfort and energy efficiency in architecture and engineering. Sustainability, energy efficiency, and the integration of alternative energy systems and concepts are beginning to be incorporated into designs for new buildings and renovations of existing buildings. Few means have existed to effectively validate the potential performance benefits of these design concepts. A degree-days method was used for an assessment of the energy performance of a building; it showed that the architectural design should always take into account the materials used and the size of the openings. The energy performance was obtained through the model, considering the location of the building, the Central Park Shopping Mall in the city of Cascavel - PR. Climatic data for this location were obtained and, in a second step, the coefficient of total heat loss of the pre-established building was determined, so as to evaluate the thermal comfort and energy performance. This means that openings in buildings in Cascavel - PR installed facing east may be larger, because the glass added to the geometry of the architectural spaces will help the environment conserve energy.
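
The degree-days assessment referred to above can be sketched as follows: daily mean temperatures are compared with a base temperature, and the accumulated degree-days are combined with an overall heat-loss coefficient to estimate demand. All values are illustrative, not the Cascavel data.

```python
# Sketch of the degree-days method: heating and cooling degree-days are
# accumulated from daily mean temperatures against a base temperature, and
# multiplied by the building's overall heat-loss coefficient (UA) to estimate
# energy demand. All numbers below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(365)
t_mean = 21 + 6 * np.sin(2 * np.pi * (days - 30) / 365) + rng.normal(0, 2, days.size)

base = 18.0                                    # base temperature (°C), assumed
hdd = np.clip(base - t_mean, 0, None).sum()    # heating degree-days
cdd = np.clip(t_mean - base, 0, None).sum()    # cooling degree-days

UA = 850.0                                     # overall heat-loss coefficient (W/K), assumed
heating_kwh = UA * hdd * 24 / 1000             # energy over the heating degree-days
print(f"HDD = {hdd:.0f} K·day, CDD = {cdd:.0f} K·day")
print(f"Estimated annual heating demand ≈ {heating_kwh:.0f} kWh")
```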

Keywords: sustainable design, energy modeling, design validation, degree-days methods

Procedia PDF Downloads 398
24889 Scour Depth Prediction around Bridge Piers Using Neuro-Fuzzy and Neural Network Approaches

Authors: H. Bonakdari, I. Ebtehaj

Abstract:

The prediction of scour depth around bridge piers is frequently considered in river engineering. Scour depth estimation around bridge piers is considered one of the key aspects of efficient and optimum bridge structure design. In this study, scour depth around bridge piers is estimated using two methods, namely the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN). The effective parameters in scour depth prediction are determined via dimensional analysis, and the scour depth is subsequently predicted using the ANN and ANFIS methods. In the current study, the methods' performances are compared with the nonlinear regression (NLR) method. The results show that both methods presented in this study outperform existing methods. Moreover, using the ratio of pier length to flow depth, the ratio of the median particle diameter to flow depth, the ratio of pier width to flow depth, the Froude number, and the standard deviation of bed grain size as parameters leads to optimal performance in scour depth estimation.

Keywords: adaptive neuro-fuzzy inference system (ANFIS), artificial neural network (ANN), bridge pier, scour depth, nonlinear regression (NLR)

Procedia PDF Downloads 202