Search results for: flood prediction process
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16970

16520 Estimation of Transition and Emission Probabilities

Authors: Aakansha Gupta, Neha Vadnere, Tapasvi Soni, M. Anbarsi

Abstract:

Protein secondary structure prediction is one of the most important goals pursued by bioinformatics and theoretical chemistry; it is highly important in medicine and biotechnology. Secondary structure prediction can reveal aspects of protein function and support genome analysis, and it is used to annotate sequences, classify proteins, identify domains, and recognize functional motifs. In this paper, we represent protein secondary structure as a mathematical model. Extracting and predicting the secondary structure from the primary structure requires a set of parameters; these parameters specify any constants appearing in the model and provide a mechanism for efficient and accurate use of data. Among the many algorithms available for estimating these model parameters, the most popular is the Expectation Maximization (EM) algorithm. The parameters are estimated from protein datasets such as RS126 using a Bayesian probabilistic method, the dataset being categorical. This work can be extended to compare the efficiency of the EM algorithm with that of other estimation algorithms, leading to a more efficient component for protein secondary structure prediction. It also provides scope for using these parameters to predict the secondary structure of proteins with machine learning techniques such as neural networks and fuzzy logic. The ultimate objective is to achieve greater accuracy than previously reported.
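
As a concrete illustration of the estimation step described above, the following is a minimal sketch (not the authors' implementation) of EM in its Baum-Welch form for the transition matrix A and emission matrix B of a discrete hidden Markov model over a categorical sequence; the toy observation sequence and the choice of three states and three symbols are assumptions for illustration.

```python
import numpy as np

def baum_welch(obs, n_states, n_symbols, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.random((n_states, n_states)); A /= A.sum(1, keepdims=True)
    B = rng.random((n_states, n_symbols)); B /= B.sum(1, keepdims=True)
    pi = np.full(n_states, 1.0 / n_states)
    T = len(obs)
    for _ in range(n_iter):
        # E-step: scaled forward-backward recursions
        alpha = np.zeros((T, n_states)); c = np.zeros(T)
        alpha[0] = pi * B[:, obs[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        beta = np.zeros((T, n_states)); beta[-1] = 1.0
        for t in range(T - 2, -1, -1):
            beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
        gamma = alpha * beta
        gamma /= gamma.sum(1, keepdims=True)
        xi = np.zeros((n_states, n_states))
        for t in range(T - 1):
            num = alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1]
            xi += num / num.sum()
        # M-step: re-estimate pi, A, B from expected counts
        pi = gamma[0]
        A = xi / gamma[:-1].sum(0)[:, None]
        for k in range(n_symbols):
            B[:, k] = gamma[obs == k].sum(0)
        B /= gamma.sum(0)[:, None]
    return pi, A, B

obs = np.array([0, 1, 2, 1, 0, 2, 2, 1, 0, 1])  # toy categorical sequence
pi, A, B = baum_welch(obs, n_states=3, n_symbols=3)
print(A.round(3)); print(B.round(3))
```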

Keywords: model parameters, expectation maximization algorithm, protein secondary structure prediction, bioinformatics

Procedia PDF Downloads 448
16519 Low-Impact Development Strategies Assessment for Urban Design

Authors: Y. S. Lin, H. L. Lin

Abstract:

Climate change and the land-use change caused by urban expansion increase the frequency of urban flooding. To mitigate the increase in runoff volume, low-impact development (LID) is a green approach that reduces impervious surface area and manages stormwater at the source with decentralized, micro-scale control measures. However, current benefit assessment and practical application of LID in Taiwan still tend to address development plans at the community and building-site scales. In urban design, site-based moisture-holding capacity has been the common index for evaluating the effectiveness of LID, which ignores the diversity and complexity of urban built environments, such as different densities, positive and negative spaces, and building volumes. Such inflexible regulations are not only difficult for most developed areas to implement but are also unsuitable for the full range of built-environment types, bringing little benefit to some of them. To strengthen the link between LID and urban design and reduce runoff in coping with urban flooding, this research considers the characteristics of different types of built environments in developing LID strategies. Built environments are classified by cluster analysis based on density measures such as Ground Space Index (GSI), Floor Space Index (FSI), number of floors (L), and Open Space Ratio (OSR), and their impervious surface rates and runoff volumes are analyzed. Flood situations are simulated with a quasi-two-dimensional flood-plain flow model, and the flood-mitigation effectiveness of the different built-environment types under different LID strategies is evaluated. The resulting information can be implemented more precisely in urban design, and it helps to enact LID regulations better suited to each type of built environment.
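
The classification step lends itself to a short sketch: cluster built environments by the four density measures named above (GSI, FSI, floors L, OSR) with k-means. The block values and the choice of two clusters below are invented for illustration.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# hypothetical rows: [GSI, FSI, L, OSR] for a set of urban blocks
blocks = np.array([
    [0.55, 2.2,  4, 0.20],
    [0.30, 4.5, 15, 0.16],
    [0.70, 1.4,  2, 0.21],
    [0.25, 0.8,  3, 0.94],
])
X = StandardScaler().fit_transform(blocks)          # put measures on one scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # built-environment type assigned to each block
```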

Keywords: low-impact development, urban design, flooding, density measures

Procedia PDF Downloads 310
16518 Effect of Genuine Missing Data Imputation on Prediction of Urinary Incontinence

Authors: Suzan Arslanturk, Mohammad-Reza Siadat, Theophilus Ogunyemi, Ananias Diokno

Abstract:

Missing data is a common challenge in statistical analyses of most clinical survey datasets. A variety of methods have been developed to deal with missing values in survey data analysis; imputation is the most commonly used among them. However, to minimize the bias introduced by imputation, one must choose the right imputation technique and apply it to the correct type of missing data. In this paper, we identify different types of missing values: missing data due to skip pattern (SPMD), undetermined missing data (UMD), and genuine missing data (GMD), and apply rough set imputation only to the GMD portion of the missing data. We used rough set imputation to evaluate the effect of such imputation on prediction by generating several simulation datasets based on an existing epidemiological dataset (MESA). To measure how well each dataset lends itself to the prediction model (logistic regression), we used p-values from the Wald test. To evaluate the accuracy of the prediction, we considered the width of the 95% confidence interval for the probability of incontinence. Both imputed and non-imputed simulation datasets were fit to the prediction model, and both turned out to be significant (p-value < 0.05). However, the Wald score shows a better fit for the imputed than for the non-imputed datasets (28.7 vs. 23.4). The average confidence interval width decreased by 10.4% when the imputed dataset was used, meaning higher precision. The results show that using the rough set method to impute GMD improves the predictive capability of the logistic regression. Further studies are required to generalize this conclusion to other clinical survey datasets.
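
The evaluation step can be sketched compactly. The snippet below, on synthetic data rather than the MESA-based simulations, fits the logistic model, reads off per-coefficient Wald statistics, and computes the width of a 95% confidence interval for a predicted probability via the linear predictor.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(500, 3)))
eta_true = X @ np.array([0.2, 0.8, -0.5, 0.3])
y = (rng.random(500) < 1 / (1 + np.exp(-eta_true))).astype(int)

res = sm.Logit(y, X).fit(disp=0)
wald_chi2 = (res.params / res.bse) ** 2          # per-coefficient Wald statistic
print(wald_chi2)

# 95% CI for P(incontinence): CI on the linear predictor, then inverse-logit
x0 = X[0]
eta = x0 @ res.params
se = np.sqrt(x0 @ res.cov_params() @ x0)
expit = lambda z: 1 / (1 + np.exp(-z))
width = expit(eta + 1.96 * se) - expit(eta - 1.96 * se)
print(width)                                     # narrower width = higher precision
```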

Keywords: rough set, imputation, clinical survey data simulation, genuine missing data, predictive index

Procedia PDF Downloads 144
16517 Effects of Changes in LULC on Hydrological Response in Upper Indus Basin

Authors: Ahmad Ammar, Umar Khan Khattak, Muhammad Majid

Abstract:

Empirically based lumped hydrologic models have an extensive track record of use in watershed management and flood-related studies. This study focuses on the impact of LULC change over a 10-year period on discharge in the watershed, using the lumped model HEC-HMS. The Indus above Tarbela region is a source of the main flood events in the middle and lower portions of the Indus because of the rainfall amounts and the topographic setting of the region, and its discharge pattern is influenced by the associated LULC. Landsat TM images were used for the LULC analysis of the watershed, and TRMM satellite daily precipitation data served as the rainfall input. The input variables for model building in HEC-HMS were then calculated from the GIS data collected and pre-processed in HEC-GeoHMS. SCS-CN was used as the loss model, the SCS unit hydrograph method as the transform model, and Muskingum as the routing model. The years 2000 and 2010 were taken for discharge simulation: HEC-HMS was calibrated for the year 2000 and then validated for 2010. The performance of the model was assessed through the calibration and validation process, giving R² = 0.92 in both; the relative bias was -9% for 2000 and -14% for 2010. The results show that over the 10 years the impact of LULC change on discharge has been negligible in the study area overall. One reason is that the proportion of built-up area in the watershed, the main causative factor of change in discharge, is less than 1% of the total area. Locally, however, the impact of development was found to be significant in the built-up area of Mansehra city. The analysis was repeated on the Mansehra city sub-watershed, which has an area of about 16 km² and more than 13% built-up area in 2010. The results showed that with a 40% increase in built-up area in the city from 2000 to 2010, the discharge values increased by about 33 percent, indicating the impact of LULC change on the discharge value.
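
The curve-number relation at the core of the loss computation can be written out directly. The following is a worked sketch of the SCS-CN runoff equation in metric form; the curve number and storm depth are assumed values, not those of the study.

```python
def scs_cn_runoff(P_mm: float, CN: float, ia_ratio: float = 0.2) -> float:
    """Direct runoff depth Q (mm) from event rainfall P (mm) via SCS-CN."""
    S = 25400.0 / CN - 254.0          # potential maximum retention (mm)
    Ia = ia_ratio * S                 # initial abstraction
    if P_mm <= Ia:
        return 0.0                    # all rainfall abstracted, no runoff
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

print(scs_cn_runoff(P_mm=60.0, CN=75.0))  # runoff for an assumed 60 mm storm
```

A higher CN (e.g. for built-up surfaces) shrinks the retention S, which is exactly why the Mansehra city sub-watershed shows a discharge response while the largely undeveloped basin does not.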

Keywords: LULC change, HEC-HMS, Indus Above Tarbela, SCS-CN

Procedia PDF Downloads 484
16516 Uplift Segmentation Approach for Targeting Customers in a Churn Prediction Model

Authors: Shivahari Revathi Venkateswaran

Abstract:

Segmenting customers plays a significant role in churn prediction; it helps the marketing team with proactive and reactive customer retention. For reactive retention, the retention team reaches out to customers who have already shown intent to disconnect by offering special deals. For proactive retention, the marketing team uses a churn prediction model, built with XGBoost, that ranks each customer from 1 to 100, where rank 1 indicates the highest risk of churning/disconnecting (high ranks have a high propensity to churn). With the churn rank alone, however, the marketing team can only reach out to customers individually; profiling groups of customers and framing marketing strategies for targeted groups is not possible. For this, customers must be grouped into segments based on their profiles, such as demographics and other non-controllable attributes, which lets the marketing team frame offer groups for the targeted audience and prevent disconnects (proactive retention). For segmentation, machine learning approaches such as k-means clustering do not form unique customer segments in which all customers share the same attributes. This paper takes an alternative approach: find all combinations of unique segments that can be formed from the user attributes, then identify the segments that show uplift (a churn rate higher than the baseline churn rate), using search algorithms such as fast search and recursive search. Within each segment, customers can then be targeted using their individual churn ranks from the churn prediction model. Finally, a UI (User Interface) was developed for the marketing team to interactively search the resulting segments and target the right audience for future marketing campaigns.
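
A minimal sketch of the segment search follows. It enumerates every combination of a few user attributes on synthetic data (the column names and churn rate are invented) and keeps the segments whose churn rate exceeds the baseline, i.e. the uplift segments; the paper's fast/recursive search would prune this exhaustive enumeration.

```python
from itertools import combinations
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "region": rng.choice(["east", "west"], 5000),
    "tenure_band": rng.choice(["<1y", "1-3y", ">3y"], 5000),
    "plan": rng.choice(["basic", "premium"], 5000),
    "churned": rng.random(5000) < 0.08,
})
baseline = df["churned"].mean()
attrs = ["region", "tenure_band", "plan"]

uplift_segments = []
for r in range(1, len(attrs) + 1):
    for cols in combinations(attrs, r):            # every attribute combination
        g = df.groupby(list(cols))["churned"].agg(["mean", "size"])
        hot = g[(g["mean"] > baseline) & (g["size"] >= 100)]  # uplift + support
        uplift_segments.append(hot)
print(pd.concat(uplift_segments).sort_values("mean", ascending=False))
```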

Keywords: churn prediction modeling, XGBoost model, uplift segments, proactive marketing, search algorithms, retention, k-mean clustering

Procedia PDF Downloads 46
16515 Rain Gauges Network Optimization in Southern Peninsular Malaysia

Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno

Abstract:

Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide, driven by the demand for higher accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimates of areal rainfall and to support flood modelling and prediction; even with lumped models for flood forecasting, a proper gauge network can significantly improve the results. The existing rainfall network in Johor must therefore be optimized and redesigned to meet the level of accuracy required by rainfall data users. The well-known geostatistical variance-reduction method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of rain gauges. The structure of a rain gauge network depends not only on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature, and wind speed data for the monsoon season (November to February) over the period 1975 to 2008. Three semivariogram models, Spherical, Gaussian, and Exponential, were used and their performances compared; cross-validation of the errors showed the exponential model to be the best semivariogram. The proposed method was satisfied by a network of 64 rain gauges with the minimum estimated variance, with 20 of the existing gauges removed and relocated. An existing network may contain redundant stations that make little or no contribution to the quality of the data the network provides, so two cases were considered: in the first, the removed stations were optimally relocated to new locations to investigate their influence on the calculated estimated variance; in the second, all 84 existing stations could be relocated to determine the optimal positions. In both cases the relocations reduced the estimated variance, proving that location plays an important role in determining the optimal network.
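
The optimization loop can be sketched schematically. In the snippet below, simulated annealing selects 64 of 84 hypothetical gauge sites; note that the kriging estimation variance is stubbed with a simple distance-based proxy, whereas a faithful run would evaluate the variance from the fitted exponential semivariogram.

```python
import numpy as np

rng = np.random.default_rng(2)
sites = rng.random((84, 2)) * 100        # hypothetical gauge coordinates (km)
grid = rng.random((400, 2)) * 100        # points where areal rainfall is estimated

def objective(idx):
    # proxy objective: mean squared distance from each grid point to its
    # nearest retained gauge (stands in for the kriging estimation variance)
    d = np.linalg.norm(grid[:, None] - sites[idx][None], axis=2)
    return np.mean(d.min(axis=1) ** 2)

keep = rng.choice(84, 64, replace=False)
best, T = objective(keep), 1.0
for step in range(5000):
    cand = keep.copy()
    out = rng.integers(64)                         # swap one station out...
    pool = np.setdiff1d(np.arange(84), cand)
    cand[out] = rng.choice(pool)                   # ...for one currently unused
    f = objective(cand)
    if f < best or rng.random() < np.exp((best - f) / T):   # Metropolis accept
        keep, best = cand, f
    T *= 0.999                                     # cooling schedule
print(best, np.sort(keep))
```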

Keywords: geostatistics, simulated annealing, semivariogram, optimization

Procedia PDF Downloads 274
16514 Dynamic Process Monitoring of an Ammonia Synthesis Fixed-Bed Reactor

Authors: Bothinah Altaf, Gary Montague, Elaine B. Martin

Abstract:

This study involves the modeling and monitoring of an ammonia synthesis fixed-bed reactor using partial least squares (PLS) and its variants. The process exhibits complex dynamic behavior due to the presence of heat recycling and feed quench. One limitation of a static PLS model in this situation is that it does not account for the process dynamics, hence dynamic PLS was used. Although dynamic PLS showed superior predictive performance, its monitoring scheme was inappropriate, so adaptive PLS was considered. A limitation of adaptive PLS is that non-conforming observations also contribute to the model; therefore, a new adaptive approach, robust adaptive dynamic PLS, was developed. This approach updates a dynamic PLS model and is robust to non-representative data. The developed methodology showed a clear improvement over existing approaches in both the modeling of the reactor and the detection of faults.
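
The dynamic extension of PLS can be illustrated briefly: augment the input matrix with time-lagged copies of the process variables and fit an ordinary PLS regression. The data, lag depth, and component count below are assumptions for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))                    # stand-in process measurements
y = (X[:, :2].sum(axis=1) + rng.normal(scale=0.1, size=300)).reshape(-1, 1)

def lag_matrix(X, lags):
    # stack [x_t, x_{t-1}, ..., x_{t-lags}] row-wise to capture dynamics
    cols = [X[lags - k: len(X) - k] for k in range(lags + 1)]
    return np.hstack(cols)

L = 2
Xd = lag_matrix(X, L)
pls = PLSRegression(n_components=3).fit(Xd, y[L:])
print(pls.score(Xd, y[L:]))                      # R^2 of the dynamic PLS fit
```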

Keywords: ammonia synthesis fixed-bed reactor, dynamic partial least squares modeling, recursive partial least squares, robust modeling

Procedia PDF Downloads 368
16513 Copper Price Prediction Model for Various Economic Situations

Authors: Haidy S. Ghali, Engy Serag, A. Samer Ezeldin

Abstract:

Copper is an essential raw material used in the construction industry. During 2021 and the first half of 2022, the global market suffered a significant fluctuation in copper raw material prices in the aftermath of both the COVID-19 pandemic and the Russia-Ukraine war, which exposed consumers to unexpected financial risk. To that end, this paper develops two ANN-LSTM price prediction models, using Python, that forecast the average monthly copper prices traded on the London Metal Exchange: the first is a multivariate model that forecasts the copper price one month ahead, and the second is a univariate model that predicts the copper prices of the upcoming three months. Historical data of average monthly London Metal Exchange copper prices were collected from January 2009 to July 2022, and potential external factors were identified and employed in the multivariate model. These factors fall under three main categories, covering energy prices and economic indicators of the three major copper-exporting countries, depending on data availability. Before developing the LSTM models, the collected external parameters were analyzed against the copper prices using correlation and multicollinearity tests in R, and then further screened to select the parameters that influence the copper prices. The two LSTM models were then developed, and the dataset was divided into training, validation, and testing sets. The results show that the 3-month prediction model performs better than the 1-month prediction model, but both models can act as prediction tools for diverse economic situations.
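
A hedged Keras sketch of the univariate variant is shown below: a window of monthly prices in, the next month's price out. The window size, layer width, and synthetic price series are assumptions, not the paper's configuration.

```python
import numpy as np
from tensorflow import keras

# toy stand-in for monthly LME copper prices
series = np.cumsum(np.random.default_rng(4).normal(size=200)) + 7000
w = 12                                            # assumed 12-month input window
X = np.array([series[i:i + w] for i in range(len(series) - w)])[..., None]
y = series[w:]

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(w, 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
print(model.predict(X[-1:], verbose=0))           # next-month price estimate
```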

Keywords: copper prices, prediction model, neural network, time series forecasting

Procedia PDF Downloads 86
16512 StockTwits Sentiment Analysis on Stock Price Prediction

Authors: Min Chen, Rubi Gupta

Abstract:

Understanding and predicting stock market movements is a challenging problem. Stock markets are believed to be partially driven by public sentiment, which has led to numerous research efforts to predict market trends using sentiment expressed on social media such as Twitter, with limited success. Recently, the microblogging website StockTwits has become increasingly popular for users to share discussions and sentiment about stocks and the financial market. In this project, we analyze the text content of StockTwits tweets and extract financial sentiment using text featurization and machine learning algorithms. StockTwits tweets are first pre-processed using techniques including stopword removal, special character removal, and case normalization to remove noise. Features are extracted from these preprocessed tweets through a text featurization process using bags of words, N-gram models, TF-IDF (term frequency-inverse document frequency), and latent semantic analysis. Machine learning models are then trained to classify each tweet's sentiment as positive (bullish) or negative (bearish). The correlation between the aggregated daily sentiment and the daily stock price movement is then investigated using Pearson's correlation coefficient. Finally, the sentiment information is applied together with time series stock data to predict stock price movement. Experiments on five companies (Apple, Amazon, General Electric, Microsoft, and Target) over a period of nine months demonstrate the effectiveness of our approach in improving prediction accuracy.
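
The featurization-plus-classification core of the pipeline fits in a few lines. The sketch below uses TF-IDF with unigrams and bigrams feeding a logistic regression classifier; the four example tweets and labels are invented. The daily sentiment/price correlation mentioned above would then be computed with scipy.stats.pearsonr on the aggregated predictions.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = ["$AAPL breaking out, loading calls",
          "selling everything, this market is done",
          "strong earnings, holding long",
          "bearish divergence everywhere, shorting"]
labels = [1, 0, 1, 0]                      # 1 = bullish, 0 = bearish

clf = make_pipeline(
    # lowercase + stopword removal mirror the preprocessing described above
    TfidfVectorizer(lowercase=True, stop_words="english", ngram_range=(1, 2)),
    LogisticRegression(),
).fit(tweets, labels)
print(clf.predict(["great quarter, buying more"]))   # expected: bullish (1)
```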

Keywords: machine learning, sentiment analysis, stock price prediction, tweet processing

Procedia PDF Downloads 129
16511 Comparison between Two Software Packages GSTARS4 and HEC-6 about Prediction of the Sedimentation Amount in Dam Reservoirs and to Estimate Its Efficient Life Time in the South of Iran

Authors: Fatemeh Faramarzi, Hosein Mahjoob

Abstract:

Building dams on rivers for the utilization of water resources disturbs the hydrodynamic equilibrium and causes all or part of the sediment carried by the water to settle in the reservoir. This phenomenon also has significant impacts on the water and sediment flow regime and, in the long term, can cause morphological changes in the environment surrounding the river, reducing the useful life of the reservoir and threatening sustainable development through inefficient management of water resources. In the past, empirical methods were used to predict the amount of sedimentation in dam reservoirs and to estimate their efficient lifetime, but mathematical and computational models, which usually solve the governing equations with the finite element method, are now widely used as a suitable tool in reservoir sedimentation studies. This study compares the results of two software packages, GSTARS4 and HEC-6, in predicting the amount of sedimentation in the Dez dam, southern Iran. Each model provides a one-dimensional, steady-state simulation of sediment deposition and erosion by solving the equations of momentum, flow and sediment continuity, and sediment transport. GSTARS4 (Generalized Sediment Transport Model for Alluvial River Simulation), a one-dimensional mathematical model that simulates bed changes in both longitudinal and transverse directions by using flow tubes in a quasi-two-dimensional scheme, was used to calibrate a period of 47 years and forecast the next 47 years of sedimentation in the Dez dam. This dam is among the highest dams in the world (203 m high), irrigates more than 125,000 hectares of downstream land, and plays a major role in flood control in the region. The input data, including geometry, hydraulic, and sedimentary data, span 1955 to 2003 on a daily basis; to predict future river discharge, the time series data were assumed to repeat after 47 years. The result was very satisfactory in the delta region, where the output from GSTARS4 was almost identical to the 2003 hydrographic profile. Because the Dez reservoir is long (65 km) and large, vertical currents are dominant, making the calculations by the above-mentioned method inaccurate; to solve this problem, the empirical reduction method was used to calculate the sedimentation in the downstream area, which led to very good answers. Thus, combining these two methods yields a very suitable model for sedimentation in the Dez dam over the study period, and the present study demonstrated that the outputs of both methods are essentially the same.

Keywords: Dez Dam, prediction, sedimentation, water resources, computational models, finite element method, GSTARS4, HEC-6

Procedia PDF Downloads 289
16510 The Implication of Disaster Risk Identification to Cultural Heritage-The Scenarios of Flood Risk in Taiwan

Authors: Jieh-Jiuh Wang

Abstract:

Disasters happen frequently today due to global climate change, and cultural heritage conservation should be considered from the perspective of the surrounding environment and large-scale disasters. Most current thinking about disaster prevention for cultural heritage in Taiwan is single-point thinking that emphasizes firefighting, decay prevention, and construction reinforcement while ignoring the environment as a whole. Traditional conservation cannot defend against the increasingly severe and frequent natural disasters caused by climate change, and more and more cultural heritage sites face a high risk of disaster. This study adopts the perspective of risk identification and takes flooding as the main disaster category. Using a geographic information system, it integrates the latest flooding potential data from the National Fire Agency and the Water Resources Agency with basic cultural heritage data to analyze the number and categories of heritage sites that might suffer from disasters. It examines the actual flood risk facing cultural heritage and serves as the basis for future risk measures and disaster-reduction preparations. The study finds a positive relationship between the disaster exposure of national cultural heritage and rainfall intensity. The order of flood impact is historical buildings first, then historical sites designated by municipalities and counties, then national historical sites and relics; traditional settlements and cultural landscapes are not impacted, which may relate to taboo spaces in the traditional culture of site selection (concepts of disaster avoidance). Regionally, cultural heritage in central and northern Taiwan suffers from more severe flood events, while heritage in northern and eastern Taiwan suffers from greater flooding depths.
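
The overlay analysis can be sketched with geopandas: join heritage-site points to flood-potential polygons and tabulate exposure by category and depth class. The file names and column names below are hypothetical, not the agencies' actual data products.

```python
import geopandas as gpd

# hypothetical inputs: point layer of heritage sites with a 'category' field,
# polygon layer of flooding potential with a 'depth_class' field
heritage = gpd.read_file("heritage_sites.shp")
flood = gpd.read_file("flood_potential.shp")

# spatial join: which sites fall inside which flood-depth polygons
exposed = gpd.sjoin(heritage, flood, how="inner", predicate="within")
print(exposed.groupby(["category", "depth_class"]).size())  # exposure table
```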

Keywords: cultural heritage, flood, preventive conservation, risk management

Procedia PDF Downloads 314
16509 Investigation on Remote Sense Surface Latent Heat Temperature Associated with Pre-Seismic Activities in Indian Region

Authors: Vijay S. Katta, Vinod Kushwah, Rudraksh Tiwari, Mulayam Singh Gaur, Priti Dimri, Ashok Kumar Sharma

Abstract:

Seismic activity forms through abrupt slip on faults and tectonic plate movements driven by stress accumulated in the Earth's crust, and its prediction is a very challenging task. We have studied the changes in surface latent heat temperature (SLHT) observed prior to significant earthquakes, which could be considered for short-term earthquake prediction. We analyzed the SLHT variation for an inland earthquake that occurred in Chamba, Himachal Pradesh (32.5 N, 76.1 E, M 4.5, depth 5 km) near the main boundary fault region; the SLHT data were taken from the National Center for Environmental Prediction (NCEP). In this analysis, we calculated the daily variation of surface latent heat temperature (°C) over a 1°x1° area (~120 km on a side), with the pixel covering the earthquake epicenter at the center, for a period of three months before and after the seismic activity. The mean value over that period was used to account for the seasonal effect: the monthly mean was subtracted from each daily value to study the anomalous behavior (ΔSLHT) of SLHT around the earthquakes. The results show that the SLHTs adjacent to the epicenters all reach anomalously high values 3-5 days before the seismic activity. Abundant surface water and groundwater in the epicentral region and its surroundings can provide the necessary conditions for the change in SLHT. To further confirm the reliability of the SLHT anomaly, its physical mechanism must be explored in depth through more earthquake cases.
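
The anomaly definition reduces to one transformation: ΔSLHT is the daily SLHT minus the monthly mean. A pandas sketch on a synthetic series:

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2017-01-01", periods=90, freq="D")
slht = pd.Series(np.random.default_rng(5).normal(30, 2, 90), index=idx)  # toy SLHT

monthly_mean = slht.groupby(slht.index.to_period("M")).transform("mean")
delta_slht = slht - monthly_mean                   # anomalous behaviour, ΔSLHT
print(delta_slht[delta_slht > 2 * delta_slht.std()])  # candidate anomalous days
```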

Keywords: surface latent heat temperature, satellite data, earthquake, magnetic storm

Procedia PDF Downloads 109
16508 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning

Authors: Shayla He

Abstract:

Background and Purpose: According to Chamie (2017), it is estimated that no fewer than 150 million people, about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades; in New York City, the sheltered homeless population increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial in helping states and cities make affordable housing plans and other community service plans ahead of time, to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021 that were impacted by the COVID-19 pandemic was conducted; HP-RNN was also tested on data from Seattle. Methods: The methodology involves four phases. Phase 1 gathered and analyzed raw data on homeless populations and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built, using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of the homeless population; each model was trained and tuned on the New York City dataset, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the best model from Phase 3 was evaluated on data from Seattle that was not part of the training and tuning process. Results: Compared to the Linear Regression based model used by HUD et al. (2019), HP-RNN significantly improved the prediction metrics, raising the Coefficient of Determination (R2) from -11.73 to 0.88 and reducing MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, showing a peak error of 14.5% between the actual and predicted counts. Finally, the modeling results were used to predict the trend during the COVID-19 pandemic, showing good correlation between the actual and predicted homeless population, with a peak error below 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model the time series of homeless-related data, and the model shows a close correlation between the actual and predicted homeless population. There are two major implications. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan affordable housing allocation and other community services ahead of time. Moreover, this prediction can serve as a reference for policymakers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
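
A minimal sketch of the recurrent-model idea in Phase 3 follows: frame the monthly sheltered-population series as supervised windows and fit a small recurrent network. The synthetic counts, window length, and network size are assumptions; the actual HP-RNN configuration is not specified in the abstract.

```python
import numpy as np
from tensorflow import keras

# synthetic monthly counts with an upward trend, loosely echoing the NYC figures
months = np.linspace(0, 1, 240)
counts = 12830 + 50000 * months + np.random.default_rng(6).normal(0, 800, 240)

w = 24                                             # assumed 24-month input window
X = np.array([counts[i:i + w] for i in range(len(counts) - w)])[..., None]
y = counts[w:]

rnn = keras.Sequential([keras.layers.SimpleRNN(16, input_shape=(w, 1)),
                        keras.layers.Dense(1)])
rnn.compile(optimizer="adam", loss="mse")
rnn.fit(X, y, epochs=5, verbose=0)
print(rnn.predict(X[-1:], verbose=0))              # next-month count estimate
```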

Keywords: homeless, prediction, model, RNN

Procedia PDF Downloads 96
16507 Prediction of California Bearing Ratio of a Black Cotton Soil Stabilized with Waste Glass and Eggshell Powder using Artificial Neural Network

Authors: Biruhi Tesfaye, Avinash M. Potdar

Abstract:

The laboratory test process to determine the California bearing ratio (CBR) of black cotton soils is not only expensive but also time-consuming; advance prediction of CBR therefore plays a significant role, as it is applicable in pavement design. In this study, the CBR of treated soil was predicted with an Artificial Neural Network (ANN), a computational tool based on the properties of biological neural systems. To observe CBR values, a combined eggshell and waste glass powder was added to the soil at 4, 8, 12, and 16% of the weight of the soil samples, and the related laboratory tests were conducted to obtain the best model. The maximum CBR value of 5.8 was found at 8% addition of eggshell-waste glass powder. The model was developed with CBR as the output variable, considered as a function of the joint effect of liquid limit, plastic limit, plasticity index, optimum moisture content, and maximum dry density. The best model found was an ANN with 5, 6, and 1 neurons in the input, hidden, and output layers, respectively. The performance of the selected ANN was 0.99996, 4.44E-05, 0.00353, and 0.0067 in terms of the correlation coefficient (R), mean square error (MSE), mean absolute error (MAE), and root mean square error (RMSE), respectively. The research summarized above points to future work on stabilization with waste glass combined with different percentages of eggshell, leading to the economical design of CBR acceptable for pavement sub-base or base, as desired.
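
The selected architecture translates directly into code. Below is a hedged sketch of a 5-6-1 network, five index properties in, CBR out, with a single hidden layer of six neurons; the training rows are invented, not the laboratory data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# columns: LL, PL, PI, OMC (%), MDD (g/cc)  ->  target: CBR (illustrative rows)
X = np.array([[62, 28, 34, 22.0, 1.55],
              [58, 27, 31, 20.5, 1.60],
              [55, 26, 29, 19.8, 1.63],
              [51, 25, 26, 18.9, 1.67]])
y = np.array([3.1, 4.0, 4.9, 5.8])

model = make_pipeline(
    StandardScaler(),                               # scale the five inputs
    MLPRegressor(hidden_layer_sizes=(6,),           # the 5-6-1 topology
                 max_iter=5000, random_state=0),
).fit(X, y)
print(model.predict(X))                             # fitted CBR values
```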

Keywords: CBR, artificial neural network, liquid limit, plastic limit, maximum dry density, OMC

Procedia PDF Downloads 160
16506 Bioengineering System for Prediction and Early Prenosological Diagnostics of Stomach Diseases Based on Energy Characteristics of Bioactive Points with Fuzzy Logic

Authors: Mahdi Alshamasin, Riad Al-Kasasbeh, Nikolay Korenevskiy

Abstract:

We apply mathematical models of the interaction between internal organs and the biologically active points of meridian structures. Among the diseases for which reflex diagnostics are effective are diseases of the stomach. It is shown that fuzzy logic decision-making, based on the reaction energy of biologically active points (acupuncture points), yields good results for the prediction and early prenosological diagnosis of gastrointestinal tract diseases.

Keywords: acupuncture points, fuzzy logic, diagnostically important points (DIP), confidence factors, membership functions, stomach diseases

Procedia PDF Downloads 441
16505 Surveying Coastal Society Perception on Giant Sea Wall Jakarta Development Planning

Authors: Ammar Asfari, Faizah Finur Fithriah, Shighia Ajeng Savitri

Abstract:

As the capital city of Indonesia, Jakarta holds an important role for the country as the seat of the central government. However, its lowland topography causes serious trouble: with an average elevation of 7 meters above sea level, flooding occurs repeatedly in the city. Groundwater exploitation, which causes land subsidence, and sea-level rise driven by global warming make it even worse. The Giant Sea Wall is a project created by Jakarta's government to overcome flooding, inspired by the Saemangeum Seawall in South Korea; in further planning, the Giant Sea Wall is also intended to serve as a water reservoir for Jakarta's inhabitants. This research aims to understand the knowledge and opinions of people living in North Jakarta (Jakarta's coastal area) regarding the Giant Sea Wall development plan, using qualitative analysis with a descriptive approach. The result of this research will be one of the determining factors in the continuation of the Giant Sea Wall Jakarta development plan.

Keywords: descriptive approach, Giant Sea Wall Jakarta, qualitative method analysis, society perception

Procedia PDF Downloads 257
16504 Towards the Prediction of Aesthetic Requirements for Women’s Apparel Product

Authors: Yu Zhao, Min Zhang, Yuanqian Wang, Qiuyu Yu

Abstract:

The prediction of apparel aesthetics is helpful for the development of new types of apparel. This study builds a quantitative relationship between aesthetics and design parameters; in particular, women's pants were studied as a preliminary case, and the relationship was derived through statistical analysis. The contributions of this study include the development of a more personalized apparel design mechanism and the provision of empirical knowledge on aesthetics for the development of other products.

Keywords: aesthetics, crease line, cropped straight leg pants, knee width

Procedia PDF Downloads 165
16503 A Study on Unix Process Crash Based on Efficient Process Management Method

Authors: Guo Haonan, Chen Peiyu, Zhao Hanyu, Burra Venkata Durga Kumar

Abstract:

Unix and Unix-like operating systems are widely used due to their high stability, but they are constrained by the parent-child process structure: because a child process depends on its parent process, the crash of a single process may cause the entire process group, or even the entire system, to fail. Another source of unexpected process termination is a system administrator inadvertently closing the terminal or pseudo-terminal from which an application was launched, causing the application process to terminate unexpectedly. This paper analyzes the reasons for these problems and proposes two solutions.
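
One classic remedy for the terminal-closure failure mode described above is to detach the worker from its controlling terminal. The sketch below shows the double-fork/setsid daemonization idiom in Python (POSIX only); the log path is arbitrary, and this is an illustration of the technique rather than the paper's proposed solution.

```python
import os
import sys
import time

def daemonize():
    if os.fork() > 0:          # first fork: the original parent exits
        sys.exit(0)
    os.setsid()                # new session: no controlling terminal
    if os.fork() > 0:          # second fork: session leader exits, so the
        sys.exit(0)            # daemon can never reacquire a terminal
    os.chdir("/")              # don't pin any mount point
    os.umask(0)

if __name__ == "__main__":
    daemonize()
    # the process now survives the closing of the launching (pseudo-)terminal
    with open("/tmp/daemon.log", "a") as log:
        while True:
            log.write("still alive\n"); log.flush()
            time.sleep(60)
```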

Keywords: process management, daemon, login-bash and non-login bash, process group

Procedia PDF Downloads 107
16502 Enhancing Patch Time Series Transformer with Wavelet Transform for Improved Stock Prediction

Authors: Cheng-yu Hsieh, Bo Zhang, Ahmed Hambaba

Abstract:

Stock market prediction has long been an area of interest for both expert analysts and investors, driven by its complexity and the noisy, volatile conditions it operates under. This research examines the efficacy of combining the Patch Time Series Transformer (PatchTST) with wavelet transforms, specifically focusing on Haar and Daubechies wavelets, in forecasting the adjusted closing price of the S&P 500 index for the following day. By comparing the performance of the augmented PatchTST models with traditional predictive models such as Recurrent Neural Networks (RNNs), Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and Transformers, this study highlights significant enhancements in prediction accuracy. The integration of the Daubechies wavelet with PatchTST notably excels, surpassing other configurations and conventional models in terms of Mean Absolute Error (MAE) and Mean Squared Error (MSE). The success of the PatchTST model paired with Daubechies wavelet is attributed to its superior capability in extracting detailed signal information and eliminating irrelevant noise, thus proving to be an effective approach for financial time series forecasting.
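
The wavelet preprocessing step can be sketched with PyWavelets: decompose the series with a Daubechies wavelet, soft-threshold the detail coefficients, and reconstruct a denoised series for the forecaster. The toy series, decomposition level, and universal threshold below are assumptions, not the paper's settings.

```python
import numpy as np
import pywt

# toy stand-in for the S&P 500 adjusted closing price series
prices = np.cumsum(np.random.default_rng(7).normal(size=512)) + 4000

coeffs = pywt.wavedec(prices, "db4", level=3)          # Daubechies decomposition
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise scale estimate
thr = sigma * np.sqrt(2 * np.log(len(prices)))          # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")                  # input to the forecaster
```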

Keywords: deep learning, financial forecasting, stock market prediction, patch time series transformer, wavelet transform

Procedia PDF Downloads 14
16501 Network Analysis and Sex Prediction based on a full Human Brain Connectome

Authors: Oleg Vlasovets, Fabian Schaipp, Christian L. Mueller

Abstract:

We conduct a network analysis and predict the sex of 1000 participants based on the "connectome", the pairwise Pearson correlations across 436 brain parcels. We solve the non-smooth convex optimization problem known as the Graphical Lasso, where the solution includes a low-rank component. With this solution and a machine learning model for sex prediction, we explain the brain parcel connectivity patterns associated with sex.
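
A minimal sketch of the sparse-precision estimation step is below, using scikit-learn's GraphicalLasso on stand-in data; note that the paper's solver additionally recovers a low-rank latent component, which this off-the-shelf call does not.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(8)
X = rng.normal(size=(1000, 30))            # stand-in for 1000 x 436 parcel data

gl = GraphicalLasso(alpha=0.05).fit(X)
edges = np.abs(gl.precision_) > 1e-4       # nonzeros = conditional dependencies
print(edges.sum() - 30)                    # off-diagonal edge count (double-counted)
```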

Keywords: network analysis, neuroscience, machine learning, optimization

Procedia PDF Downloads 118
16500 Stacking Ensemble Approach for Combining Different Methods in Real Estate Prediction

Authors: Sol Girouard, Zona Kostic

Abstract:

A home is often the largest and most expensive purchase a person makes, and whether the decision leads to a successful outcome is determined by a combination of critical factors. In this paper, we propose a method that efficiently handles all the factors in residential real estate and makes predictions over a feature space of high dimensionality while controlling for overfitting. The proposed method is built on gradient descent and boosting algorithms and uses a mixed optimization technique to improve prediction power. Since a single model usually cannot handle all cases, our approach builds multiple models based on different subsets of the predictors. The algorithm was tested on 3 million homes across the U.S., and the experimental results demonstrate the efficiency of this approach, outperforming techniques currently used in price forecasting. As the real estate market changes daily, the proposed algorithm capitalizes on new events, allowing more efficient predictions.
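
The multiple-models-plus-combiner idea maps naturally onto a stacking ensemble. The sketch below, on synthetic data with illustrative base learners, is a generic rendering of the approach rather than the authors' exact model mix.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import (StackingRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.linear_model import Ridge

# synthetic stand-in for a residential real estate feature matrix
X, y = make_regression(n_samples=2000, n_features=20, noise=10, random_state=0)

stack = StackingRegressor(
    estimators=[("gbm", GradientBoostingRegressor()),      # boosting base model
                ("rf", RandomForestRegressor(n_estimators=100))],
    final_estimator=Ridge(),                               # meta-learner combiner
).fit(X, y)
print(stack.score(X, y))                                   # in-sample R^2
```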

Keywords: real estate prediction, gradient descent, boosting, ensemble methods, active learning, training

Procedia PDF Downloads 252
16499 An Improved Heat Transfer Prediction Model for Film Condensation inside a Tube with Interphacial Shear Effect

Authors: V. G. Rifert, V. V. Gorin, V. V. Sereda, V. V. Treputnev

Abstract:

An analysis of heat transfer design methods for condensation inside plain tubes under the influence of shear stress is presented in this paper. A discrepancy of more than 30-50% between rated heat transfer coefficients and experimental data is noted, and existing theoretical and semi-empirical prediction methods are analyzed. The influence on the heat transfer design of a precise definition of the phase-flow boundaries (especially important for condensation inside horizontal tubes), of the shear stress (friction coefficient), and of the heat flux is shown. The boundary conditions on the parameter values that govern the accuracy of the rating relationships are substantiated. More accurate relationships for heat transfer prediction, which show good agreement with experiments by different authors, are substantiated in this work.

Keywords: film condensation, heat transfer, plain tube, shear stress

Procedia PDF Downloads 221
16498 A Hybrid Model Tree and Logistic Regression Model for Prediction of Soil Shear Strength in Clay

Authors: Ehsan Mehryaar, Seyed Armin Motahari Tabari

Abstract:

Without a doubt, shear strength is the most important property of a soil: the majority of fatal and catastrophic geological accidents are related to shear strength failure, so its prediction is a matter of high importance. However, acquiring the shear strength is usually a cumbersome task that can require complicated laboratory testing, and predicting it from common, easy-to-obtain soil properties can simplify projects substantially. In this paper, a hybrid model based on the classification and regression tree (CART) algorithm and logistic regression is proposed, in which each leaf of the tree is an independent regression model. A database of 189 points for clay soil was collected, including moisture content, liquid limit, plastic limit, clay content, and shear strength. The performance of the developed model was compared with that of existing models and equations using root mean squared error and the coefficient of correlation.
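
The hybrid structure can be sketched as follows: let CART partition the feature space, then fit a separate model in each leaf. Plain linear regression stands in for the per-leaf model here (the shear strength target is continuous), and the data is synthetic.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)
X = rng.uniform(0, 1, size=(189, 4))   # moisture, LL, PL, clay content (scaled)
y = 20 * X[:, 0] + 10 * (X[:, 1] > 0.5) + rng.normal(0, 1, 189)  # shear proxy

tree = DecisionTreeRegressor(max_leaf_nodes=4).fit(X, y)  # CART partition
leaf_of = tree.apply(X)
leaf_models = {leaf: LinearRegression().fit(X[leaf_of == leaf],
                                            y[leaf_of == leaf])
               for leaf in np.unique(leaf_of)}            # one model per leaf

def predict(x):
    leaf = tree.apply(x.reshape(1, -1))[0]                # route to a leaf...
    return leaf_models[leaf].predict(x.reshape(1, -1))[0] # ...then regress

print(predict(X[0]))
```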

Keywords: model tree, CART, logistic regression, soil shear strength

Procedia PDF Downloads 171
16497 Ultimate Strength Prediction of Shear Walls with an Aspect Ratio between One and Two

Authors: Said Boukais, Ali Kezmane, Kahil Amar, Mohand Hamizi, Hannachi Neceur Eddine

Abstract:

This paper presents an analytical study of the behavior of rectangular reinforced concrete walls with an aspect ratio between one and two. Several experiments on such walls were selected for study: a database from various experiments was collected, and nominal wall strengths were calculated using formulas such as those of the ACI (American) and NZS (New Zealand) codes, the Mexican NTCC, and the Wood equation for shear, together with strain-compatibility analysis for flexure. The nominal ultimate wall strengths from the formulas were then compared with the ultimate wall strengths in the database. These formulas vary substantially in functional form and do not account for all the variables that affect the response of walls, and there is substantial scatter in the predicted ultimate strengths. New semi-empirical equations are developed using data from tests of 46 walls, with the objective of predicting the ultimate strength of walls as accurately as possible for all failure modes.

Keywords: prediction, ultimate strength, reinforced concrete walls, walls, rectangular walls

Procedia PDF Downloads 317
16496 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization

Authors: Soheila Sadeghi

Abstract:

Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
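
As a schematic of the PSO wrapper described above, particles can encode ANN hyperparameters while the swarm minimizes validation MSE. In the sketch below the particles carry just a hidden-layer size and a learning rate; the bounds, swarm settings, and data are all assumptions rather than the study's configuration.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=500, n_features=8, noise=5, random_state=0)
Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

def fitness(p):
    units, lr = int(round(p[0])), 10 ** p[1]          # decode a particle
    m = MLPRegressor(hidden_layer_sizes=(units,), learning_rate_init=lr,
                     max_iter=300, random_state=0).fit(Xtr, ytr)
    return mean_squared_error(yva, m.predict(Xva))    # validation MSE

rng = np.random.default_rng(10)
lo, hi = np.array([4, -4.0]), np.array([64, -1.0])    # units, log10(lr) bounds
pos = rng.uniform(lo, hi, size=(10, 2)); vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()]
for _ in range(15):
    r1, r2 = rng.random((2, 10, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)                  # keep particles in bounds
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()]
print(gbest)  # best (hidden units, log10 learning rate) found by the swarm
```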

Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction

Procedia PDF Downloads 8
16495 The Application of Artificial Neural Network for Bridge Structures Design Optimization

Authors: Angga S. Fajar, A. Aminullah, J. Kiyono, R. A. Safitri

Abstract:

This paper discusses the application of ANNs to the optimization of bridge structure design. ANNs have been applied to prediction and optimization in various fields of science. Structural optimization has several benefits, including accelerating the structural design process, saving structural material, and minimizing the self-weight and mass of the structure. In this paper, three types of bridge structure are optimized: a PSC I-girder superstructure, a composite steel-concrete girder superstructure, and an RC bridge pier. A different optimization strategy, implementing the back-propagation method of the ANN, is applied to each bridge structure. Optimal structural weight and an easier design process are achieved within acceptable error.

Keywords: bridge structures, ANN, optimization, back propagation

Procedia PDF Downloads 348
16494 Case Studies of Mitigation Methods against the Impacts of High Water Levels in the Great Lakes

Authors: Jennifer M. Penton

Abstract:

Record high lake levels in 2017 and 2019 (2017 max lake level = 75.81 m; 2018 max lake level = 75.26 m; 2019 max lake level = 75.92 m) combined with a number of severe storms in the Great Lakes region, have resulted in significant wave generation across Lake Ontario. The resulting large wave heights have led to erosion of the natural shoreline, overtopping of existing revetments, backshore erosion, and partial and complete failure of several coastal structures, which in turn have led to further erosion of the shoreline and damaged existing infrastructure. Such impacts can be seen all along the coast of Lake Ontario. Three specific locations have been chosen as case studies for this paper, each addressing erosion and/or flood mitigation methods, such as revetments and sheet piling with increased land levels. Varying site conditions and the resulting shoreline damage are compared herein. The results are reflected in the case-specific design components of the mitigation and adaptation methods and are presented in this paper.

Keywords: erosion mitigation, flood mitigation, great lakes, high water levels

Procedia PDF Downloads 145
16493 Influence of Fermentation Conditions on Humic Acids Production by Trichoderma viride Using an Oil Palm Empty Fruit Bunch as the Substrate

Authors: F. L. Motta, M. H. A. Santana

Abstract:

Humic acids (HA) were produced by a Trichoderma viride strain under submerged fermentation in a medium based on oil palm empty fruit bunch (EFB), and the main process variables were optimized using response surface methodology. A temperature of 40°C and concentrations of 50 g/L EFB, 5.7 g/L potato peptone, and 0.11 g/L (NH4)2SO4 were the optimum levels of the variables that maximize HA production within the physicochemical and biological limits of the process. The optimized conditions led to an experimental HA concentration of 428.4±17.5 mg/L, which validated the statistical model's prediction of 412.0 mg/L. This optimization increased HA production about 7-fold over that previously reported in the literature. Additionally, the time profiles of HA production and fungal growth confirmed our previous finding that HA production occurs preferentially during fungal sporulation. The present study demonstrated that T. viride successfully produced HA via the submerged fermentation of EFB and that the process parameters can be optimized with a statistics-based response surface model. To the best of our knowledge, the present work is the first report on the optimization of HA production from EFB by a biotechnological process, whose feasibility was only suggested in previous works.

Keywords: empty fruit bunch, humic acids, submerged fermentation, Trichoderma viride

Procedia PDF Downloads 273
16492 Roasting Process of Sesame Seeds Modelling Using Gene Expression Programming: A Comparative Analysis with Response Surface Methodology

Authors: Alime Cengiz, Talip Kahyaoglu

Abstract:

The roasting process is of major importance in obtaining the desired aromatic taste of nuts. In this study, two kinds of roasting, vacuum oven and hot air, were applied to hulled sesame seeds. The efficiency of Gene Expression Programming (GEP), a soft computing technique from evolutionary computation that describes cause-and-effect relationships in a data modelling system, and of response surface methodology (RSM) was examined in modelling the roasting processes over a range of temperatures (120-180°C) and times (30-60 min). Color attributes (L*, a*, b*, Browning Index (BI)), textural properties (hardness and fracturability), and moisture content were evaluated and modelled by RSM and GEP. The GEP-based formulations and the RSM approach were compared against experimental results and evaluated by their correlation coefficients. The results showed that both GEP and RSM adequately learned the relation between roasting conditions and the physical and textural parameters of roasted seeds. However, GEP had better prediction performance than RSM, with high correlation coefficients (R2 > 0.92) for all quality parameters. This result indicates that soft computing techniques have a better capability for describing the physical changes occurring in sesame seeds during the roasting process.

Keywords: genetic expression programming, response surface methodology, roasting, sesame seed

Procedia PDF Downloads 389
16491 Epilepsy Seizure Prediction by Effective Connectivity Estimation Using Granger Causality and Directed Transfer Function Analysis of Multi-Channel Electroencephalogram

Authors: Mona Hejazi, Ali Motie Nasrabadi

Abstract:

Epilepsy is a persistent neurological disorder that affects more than 50 million people worldwide; hence there is a need for an efficient prediction model that supports correct diagnosis of epileptic seizures and accurate prediction of their type. In this study we consider how effective connectivity (EC) patterns obtained from intracranial electroencephalographic (EEG) recordings reveal information about the dynamics of the epileptic brain and can be used to predict imminent seizures, enabling patients (and caregivers) to take appropriate precautions. We use this definition because we believe that effective connectivity begins to change near seizure onset, so seizures can be predicted from this feature. Results are reported on the standard Freiburg EEG dataset, which contains data from 21 patients suffering from medically intractable focal epilepsy. Six EEG channels from each patient are considered, and effective connectivity is estimated with the Directed Transfer Function (DTF) and Granger Causality (GC) methods. We concentrate on the standard deviation of effective connectivity over time, and feature changes in five brain frequency sub-bands (alpha, beta, theta, delta, and gamma) are compared. The performance obtained by the proposed scheme in predicting seizures is: an average prediction time of 50 minutes before seizure onset, a maximum sensitivity of approximately 80%, and a false positive rate of 0.33 FP/h. The DTF method proved the more acceptable for predicting epileptic seizures, and in general the best results are obtained in the gamma and beta sub-bands. This research is significantly helpful for clinical applications, especially the exploitation of online portable devices.
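
The pairwise Granger-causality step can be sketched with statsmodels; a full analysis would run this over all channel pairs and the five sub-bands, then track the standard deviation of the connectivity measure over time. The two lagged synthetic channels below are illustrative, not EEG data.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(11)
x = rng.normal(size=1000)
y = np.roll(x, 3) + 0.5 * rng.normal(size=1000)   # y lags x, so x "causes" y

# column 0 is the effect channel, column 1 the candidate cause
res = grangercausalitytests(np.column_stack([y, x]), maxlag=5, verbose=False)
print(res[3][0]["ssr_ftest"])   # F statistic and p-value at lag 3
```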

Keywords: effective connectivity, Granger causality, directed transfer function, epilepsy seizure prediction, EEG

Procedia PDF Downloads 438