Search results for: ocean models
6185 Mathematical Modeling of the Fouling Phenomenon in Ultrafiltration of Latex Effluent
Authors: Amira Abdelrasoul, Huu Doan, Ali Lohi
Abstract:
An efficient and well-planned ultrafiltration process is becoming a necessity for monetary returns in industrial settings. The aim of the present study was to develop a mathematical model for accurate prediction of ultrafiltration membrane fouling by latex effluent, applied to homogeneous and heterogeneous membranes with uniform and non-uniform pore sizes, respectively. Models were also developed for accurate prediction of power consumption that can handle large-scale applications. The model incorporated the fouling attachments as well as the chemical and physical factors in membrane fouling for accurate prediction and scale-up application. Polycarbonate and Polysulfone flat membranes, with a pore size of 0.05 µm and a molecular weight cut-off of 60,000, respectively, were used under a constant feed flow rate and in cross-flow mode in ultrafiltration of the simulated paint effluent. Furthermore, hydrophilic Ultrafilic and hydrophobic PVDF membranes with an MWCO of 100,000 were used to test the reliability of the models. Monodisperse particles of 50 nm and 100 nm in diameter, and a latex effluent with a wide particle size distribution, were utilized to validate the models. The aggregation and the sphericity of the particles had a significant effect on membrane fouling.
Keywords: membrane fouling, mathematical modeling, power consumption, attachments, ultrafiltration
Procedia PDF Downloads 470
6184 Stochastic Richelieu River Flood Modeling and Comparison of Flood Propagation Models: WMS (1D) and SRH (2D)
Authors: Maryam Safrai, Tewfik Mahdi
Abstract:
This article presents the stochastic modeling of the Richelieu River flood in Quebec, Canada, which occurred in the spring of 2011. With the aid of the one-dimensional Watershed Modeling System (WMS, v.10.1) and HEC-RAS (v.4.1) as the flood simulator, the delineation of the probabilistic flooded areas was considered. Based on the Monte Carlo method, WMS (v.10.1) delineated the probabilistic flooded areas with corresponding occurrence percentages. Furthermore, the results of this one-dimensional model were compared with those of a two-dimensional model (SRH-2D) to evaluate the efficiency and precision of each applied model. Based on this comparison, the computational process of the two-dimensional model is longer and more complicated than the brief one-dimensional one. Although two-dimensional models are more accurate than the one-dimensional method, delineation of probabilistic flooded areas based on the Monte Carlo method is achievable with a one-dimensional modeller. The software applied in this case study fully met the research objectives. As a result, flood risk maps of the Richelieu River produced with the two applied models (1D, 2D) can elucidate the flood risk factors in hydrological, hydraulic, and managerial terms.
Keywords: flood modeling, HEC-RAS, model comparison, Monte Carlo simulation, probabilistic flooded area, SRH-2D, WMS
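As a rough illustration of the Monte Carlo delineation described above, the following sketch replaces the WMS/HEC-RAS hydraulic engine with a toy rating curve over a synthetic terrain grid; the distributions, grid values, and the water_level function are illustrative assumptions, not the study's setup.

```python
import numpy as np

# Minimal sketch of Monte Carlo delineation of a probabilistic flooded area.
# The real study drives WMS/HEC-RAS; here a toy stage-discharge relation and a
# synthetic terrain grid stand in for the hydraulic model (all values assumed).
rng = np.random.default_rng(42)

terrain = rng.uniform(28.0, 35.0, size=(50, 50))   # ground elevation grid [m]
n_runs = 1000

def water_level(discharge):
    """Toy rating curve: stage as a function of peak discharge [m3/s]."""
    return 27.0 + 1.2 * np.log(discharge)

flood_counts = np.zeros_like(terrain)
for _ in range(n_runs):
    q_peak = rng.lognormal(mean=6.5, sigma=0.3)     # sampled peak discharge
    stage = water_level(q_peak)
    flood_counts += (terrain < stage)               # cells below the water surface

occurrence_pct = 100.0 * flood_counts / n_runs      # probabilistic flood map
print(f"cells flooded in >50% of runs: {(occurrence_pct > 50).sum()}")
```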
Procedia PDF Downloads 140
6183 Use of Satellite Altimetry and Moderate Resolution Imaging Technology of Flood Extent to Support Seasonal Outlooks of Nuisance Flood Risk along United States Coastlines and Managed Areas
Authors: Varis Ransibrahmanakul, Doug Pirhalla, Scott Sheridan, Cameron Lee
Abstract:
U.S. coastal areas and ecosystems are facing multiple sea level rise threats and effects: heavy rain events, cyclones, and changing wind and weather patterns all influence coastal flooding, sedimentation, and erosion along critical barrier islands, and can strongly impact habitat resiliency and water quality in protected habitats. These impacts are increasing over time and have accelerated the need for new tracking techniques, models, and tools of flood risk to support enhanced preparedness for coastal management and mitigation. To address this issue, the NOAA National Ocean Service (NOS) evaluated new metrics from AVISO/Copernicus satellite altimetry and MODIS IR flood extents to isolate nodes of atmospheric variability indicative of elevated sea level and nuisance flood events. Using de-trended time series of cross-shelf sea surface heights (SSH), we identified specific Self-Organizing Map (SOM) nodes and transitions having the strongest regional association with oceanic spatial patterns (e.g., heightened downwelling-favorable wind stress and enhanced southward coastal transport) indicative of elevated coastal sea levels. Results show the impacts of the inverted barometer effect as well as the effects of surface wind forcing, i.e., Ekman-induced transport along broad expanses of the U.S. eastern coastline. Higher sea levels and corresponding localized flooding are associated with either pattern indicative of enhanced on-shore flow, deepening cyclones, or local-scale winds, generally coupled with increased local to regional precipitation. These findings will support an integration of satellite products and will inform seasonal outlook model development supported through NOAA's Climate Program Office and the NOS Center for Operational Oceanographic Products and Services (CO-OPS). Overall results will prioritize ecological areas and coastal lab facilities at risk based on the number of nuisance floods projected, and will inform coastal management of flood risk around low-lying areas subject to bank erosion.
Keywords: AVISO satellite altimetry SSHA, MODIS IR flood map, nuisance flood, remote sensing of flood
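The SOM clustering step can be sketched in a few lines of plain NumPy; the grid size, learning schedule, and data shapes below are assumptions, and the SSH profiles are synthetic stand-ins for the de-trended altimetry data.

```python
import numpy as np

# Minimal self-organizing map (SOM) sketch for clustering de-trended sea surface
# height (SSH) time series into nodes. All shapes and schedules are assumed.
rng = np.random.default_rng(0)
ssh_anomalies = rng.normal(size=(365, 20))        # 365 days x 20 cross-shelf points

n_nodes, dim = 9, ssh_anomalies.shape[1]          # 3x3 SOM flattened to 9 nodes
weights = rng.normal(scale=0.1, size=(n_nodes, dim))

lr, sigma = 0.5, 1.5
grid = np.array([(i, j) for i in range(3) for j in range(3)], dtype=float)

for epoch in range(50):
    for x in rng.permutation(ssh_anomalies):
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))     # best-matching unit
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)            # grid distance to BMU
        h = np.exp(-d2 / (2 * sigma ** 2))                    # neighborhood kernel
        weights += lr * h[:, None] * (x - weights)
    lr *= 0.95                                                # decay schedules
    sigma = max(0.5, sigma * 0.95)

# Assign each day to a node; node frequencies approximate a pattern climatology.
nodes = np.argmin(((ssh_anomalies[:, None, :] - weights) ** 2).sum(-1), axis=1)
print(np.bincount(nodes, minlength=n_nodes))
```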
Procedia PDF Downloads 144
6182 Designing the Maturity Model of Smart Digital Transformation through the Foundation Data Method
Authors: Mohammad Reza Fazeli
Abstract:
Nowadays, the fourth industrial revolution, known as the digital transformation of industries, is seen as one of the top subjects in the history of structural revolution, having led to the high-tech and tactical dominance of organizations. Despite these benefits, the undefined and non-transparent nature of the after-effects of investing in digital transformation has hindered many organizations from attempting this area of the industry. One of the important frameworks for understanding digital transformation in all organizations is the digital transformation maturity model. This model includes two main parts: digital transformation maturity dimensions and digital transformation maturity stages. Mediating factors of digital maturity and organizational performance at the individual level (e.g., motivations, attitudes) and at the organizational level (e.g., organizational culture) should be considered. For successful technology adoption processes, organizational development and human resources must go hand in hand and be supported by a sound communication strategy. Maturity models are developed to help organizations by providing broad guidance and a roadmap for improvement. However, a systematic review and analysis of the literature showed that none of the 18 maturity models in the field of digital transformation fully meets all the criteria of appropriateness, completeness, clarity, and objectivity. A maturity assessment framework potentially helps systematize assessment processes that create opportunities for change in processes and organizations enabled by digital initiatives, as well as long-term improvements at the project portfolio level. Cultural characteristics reflecting digital culture are not systematically integrated, and specific digital maturity models for the service sector are less clearly presented. It is also clearly evident that research on the maturity of digital transformation as a holistic concept is scarce and needs more attention in future research.
Keywords: digital transformation, organizational performance, maturity models, maturity assessment
Procedia PDF Downloads 107
6181 Series Network-Structured Inverse Models of Data Envelopment Analysis: Pitfalls and Solutions
Authors: Zohreh Moghaddas, Morteza Yazdani, Farhad Hosseinzadeh
Abstract:
Nowadays, data envelopment analysis (DEA) models featuring network structures have gained widespread usage for evaluating the performance of production systems and activities (Decision-Making Units (DMUs)) across diverse fields. By examining the relationships between the internal stages of the network, these models offer valuable insights to managers and decision-makers regarding the performance of each stage and its impact on the overall network. To further empower system decision-makers, the inverse data envelopment analysis (IDEA) model has been introduced. This model allows the estimation of inputs and outputs while keeping the efficiency score unchanged or improved, enabling analysis of the sensitivity of system inputs or outputs according to managers' preferences. This empowers managers to apply their preferences and policies to resources, such as inputs and outputs, and to analyze aspects like production, resource allocation processes, and resource efficiency enhancement within the system. The results obtained can be instrumental in making informed decisions in the future. The main result of this study is an analysis of the infeasibility and incorrect estimation that may arise in the theory and application of the inverse model of data envelopment analysis with network structures. To address these pitfalls, novel protocols are proposed that circumvent these shortcomings effectively. Subsequently, several theoretical and applied problems are examined and resolved through insightful case studies.
Keywords: inverse models of data envelopment analysis, series network, estimation of inputs and outputs, efficiency, resource allocation, sensitivity analysis, infeasibility
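For readers unfamiliar with the underlying optimization, the following sketch solves the plain input-oriented CCR envelopment model with scipy; it is the single-stage building block rather than the series-network inverse formulation of the paper, and the toy input/output data are assumed.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal input-oriented CCR DEA sketch: 5 DMUs, 2 inputs, 1 output (toy data).
X = np.array([[4.0, 2.0], [7.0, 3.0], [8.0, 1.0], [4.0, 4.0], [2.0, 4.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])                           # outputs
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def ccr_efficiency(o):
    # decision vector z = [theta, lambda_1, ..., lambda_n]; minimize theta
    c = np.r_[1.0, np.zeros(n)]
    # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # outputs: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun  # theta* = efficiency score

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```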
Procedia PDF Downloads 52
6180 Interaction between Space Syntax and Agent-Based Approaches for Vehicle Volume Modelling
Authors: Chuan Yang, Jing Bie, Panagiotis Psimoulis, Zhong Wang
Abstract:
Modelling and understanding vehicle volume distribution over the urban network are essential for urban design and transport planning. The space syntax approach has been widely applied as the main conceptual and methodological framework for contemporary vehicle volume models, with the help of the statistical method of multiple regression analysis (MRA). However, the MRA model with space syntax variables shows a limitation in vehicle volume prediction: it cannot account for the crossed effect of urban configurational characteristics and socio-economic factors. The aim of this paper is to construct models that capture the combined impact of street network structure and socio-economic factors. We present a multilevel linear (ML) and an agent-based (AB) vehicle volume model at an urban scale, built on the space syntax theoretical framework. The ML model allows for random effects of urban configurational characteristics in different urban contexts, while the AB model was developed by incorporating transformed space syntax components of the MRA models into the agents' spatial behaviour. The three models were implemented in the same urban environment. The ML model exhibits superiority over the original MRA model in identifying the relative impacts of the configurational characteristics and macro-scale socio-economic factors that shape vehicle movement distribution over the city. Compared with the ML model, the suggested AB model demonstrated the ability to estimate vehicle volume in the urban network considering the combined effects of configurational characteristics and land-use patterns at the street segment level.
Keywords: space syntax, vehicle volume modeling, multilevel model, agent-based model
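A minimal sketch of the random-intercept idea behind the ML model, using statsmodels; the variable names (integration, density, zone) and all data are synthetic assumptions rather than the paper's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Random-intercept multilevel sketch: street segments nested in urban contexts,
# with a space-syntax measure and a socio-economic covariate (all synthetic).
rng = np.random.default_rng(1)
n_zones, n_seg = 8, 40
zone = np.repeat(np.arange(n_zones), n_seg)
integration = rng.normal(size=n_zones * n_seg)       # space-syntax measure
density = rng.normal(size=n_zones * n_seg)           # socio-economic factor
zone_effect = rng.normal(scale=0.8, size=n_zones)[zone]
volume = (5 + 1.5 * integration + 0.7 * density + zone_effect
          + rng.normal(scale=0.5, size=n_zones * n_seg))

df = pd.DataFrame(dict(volume=volume, integration=integration,
                       density=density, zone=zone))

# A random intercept per zone lets configurational effects vary across contexts.
model = smf.mixedlm("volume ~ integration + density", df, groups=df["zone"])
print(model.fit().summary())
```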
Procedia PDF Downloads 146
6179 A Machine Learning Approach for Intelligent Transportation System Management on Urban Roads
Authors: Ashish Dhamaniya, Vineet Jain, Rajesh Chouhan
Abstract:
Traffic management is one of the biggest issues on urban roads in almost all metropolitan cities in India. Speed is one of the critical traffic parameters for effective Intelligent Transportation System (ITS) implementation, as it determines the arrival rate of vehicles at intersections, which are the major points of congestion. The study aimed to leverage Machine Learning (ML) models to produce precise predictions of speed on urban roadway links. The research objective was to assess how categorized traffic volume and road width, serving as variables, influence speed prediction. Four tree-based regression models, namely Decision Tree (DT), Random Forest (RF), Extra Tree (ET), and Extreme Gradient Boost (XGB), are employed for this purpose. The models' performance was validated using test data, and the results demonstrate that Random Forest surpasses the other machine learning techniques and a conventional utility theory-based model in speed prediction. The study is useful for managing urban roadway network performance under mixed traffic conditions and for the effective implementation of ITS.
Keywords: stream speed, urban roads, machine learning, traffic flow
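A sketch of the model comparison with scikit-learn is shown below; the predictors and speeds are synthetic stand-ins, and XGBoost is omitted to keep the example to one library.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-ins for categorized traffic volume, road width, and speed.
rng = np.random.default_rng(7)
n = 500
volume = rng.integers(1, 6, size=n).astype(float)   # categorized traffic volume
width = rng.uniform(7.0, 21.0, size=n)              # road width [m]
speed = 60 - 4.5 * volume + 0.8 * width + rng.normal(scale=3.0, size=n)

X = np.column_stack([volume, width])
X_tr, X_te, y_tr, y_te = train_test_split(X, speed, random_state=0)

models = {
    "Decision Tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "Extra Trees":   ExtraTreesRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```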
Procedia PDF Downloads 70
6178 The Model Establishment and Analysis of TRACE/FRAPTRAN for Chinshan Nuclear Power Plant Spent Fuel Pool
Authors: J. R. Wang, H. T. Lin, Y. S. Tseng, W. Y. Li, H. C. Chen, S. W. Chen, C. Shih
Abstract:
TRACE is developed by the U.S. NRC for nuclear power plant (NPP) safety analysis. In this research, we focus on the establishment and application of TRACE/FRAPTRAN/SNAP models for the Chinshan NPP (BWR/4) spent fuel pool. The geometry of the spent fuel pool is 12.17 m × 7.87 m × 11.61 m. In this study, there are three TRACE/SNAP models: a one-channel, a two-channel, and a multi-channel TRACE/SNAP model. Additionally, a cooling system failure of the spent fuel pool was simulated and analyzed using the above models. According to the analysis results, the peak cladding temperature response was more accurate in the multi-channel TRACE/SNAP model. The results showed that the fuel rods became uncovered 2.7 days after the cooling system failed. In order to estimate detailed fuel rod performance, the FRAPTRAN code was used in this research. According to the FRAPTRAN results, the highest cladding temperature occurred at node 21 of the fuel rod (the highest node being node 23), and the cladding burst roughly 3.7 days after the failure.
Keywords: TRACE, FRAPTRAN, BWR, spent fuel pool
Procedia PDF Downloads 357
6177 Analytical Description of Disordered Structures in Continuum Models of Pattern Formation
Authors: Gyula I. Tóth, Shaho Abdalla
Abstract:
Even though numerical simulations have a significant precursory/supportive role in exploring the disordered phase displaying no long-range order in pattern formation models, studying the stability properties of this phase and determining the order of the ordered-disordered phase transition in these models necessitate an analytical description of the disordered phase. First, we will present the results of a comprehensive statistical analysis of a large number (1,000-10,000) of numerical simulations in the Swift-Hohenberg model, where the bulk disordered (or amorphous) phase is stable. We will show that the average free energy density (over configurations) converges, while the variance of the energy density vanishes with increasing system size in numerical simulations, which suggests that the disordered phase is a thermodynamic phase (i.e., its properties are independent of the configuration in the macroscopic limit). Furthermore, the structural analysis of this phase in Fourier space suggests that the phase can be modeled by a colored isotropic Gaussian noise, where any realization of the noise describes a possible configuration. Based on these results, we developed a general mathematical framework for finding a pool of solutions to partial differential equations in the sense of a continuous probability measure, which we will present briefly. Applying the general idea to the Swift-Hohenberg model, we show that the amorphous phase can be found and its properties can be determined analytically. As the general mathematical framework is not restricted to continuum theories, we hope that the proposed methodology will open a new chapter in studying disordered phases.
Keywords: fundamental theory, mathematical physics, continuum models, analytical description
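The kind of simulation ensemble analyzed above can be reproduced in miniature with a semi-implicit pseudospectral solver for the Swift-Hohenberg equation u_t = ru − (1+∇²)²u − u³; the grid size, r, and time step below are assumed toy values.

```python
import numpy as np

# Minimal 2-D Swift-Hohenberg solver sketch (semi-implicit pseudospectral).
# Linear part treated implicitly in Fourier space, cubic term explicitly.
N, L, r, dt, steps = 128, 64.0, 0.3, 0.5, 2000
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx ** 2 + ky ** 2
lin = r - (1.0 - k2) ** 2                      # linear operator in Fourier space

rng = np.random.default_rng(3)
u = 0.1 * rng.standard_normal((N, N))          # random initial condition

for _ in range(steps):
    nonlin_hat = np.fft.fft2(-u ** 3)          # explicit nonlinear term
    u_hat = (np.fft.fft2(u) + dt * nonlin_hat) / (1.0 - dt * lin)
    u = np.real(np.fft.ifft2(u_hat))

# Energy-density statistics over many independent runs would be gathered here;
# a single run's spatial statistics are shown as a placeholder.
print("mean(u) =", u.mean(), "var(u) =", u.var())
```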
Procedia PDF Downloads 134
6176 Numerical Investigation of the Jacketing Method of Reinforced Concrete Column
Authors: S. Boukais, A. Nekmouche, N. Khelil, A. Kezmane
Abstract:
The first intent of this study is to develop a finite element model that can correctly predict the behavior of the reinforced concrete column. The second aim is to use the finite element model to investigate and evaluate the effect of the strengthening method of jacketing the reinforced concrete column, considering different interface contacts between the old and the new concrete. Four models were evaluated: one assuming perfect contact, and three others using friction coefficients of 0.1, 0.3, and 0.5. The simulation was carried out using the Abaqus software. The obtained results show that the jacketing reinforcement led to a significant increase in the global performance of the simulated reinforced concrete column.
Keywords: strengthening, jacketing, reinforced concrete column, Abaqus, simulation
Procedia PDF Downloads 146
6175 Seismic Hazard Assessment of Offshore Platforms
Authors: F. D. Konstandakopoulou, G. A. Papagiannopoulos, N. G. Pnevmatikos, G. D. Hatzigeorgiou
Abstract:
This paper examines the effects of pile-soil-structure interaction on the dynamic response of offshore platforms under the action of near-fault earthquakes. Two offshore platform models are investigated: one with completely fixed supports and one with piles clamped into deformable layered soil. The soil deformability in the second model is simulated using non-linear springs. These platform models are subjected to near-fault seismic ground motions. The role of the fault mechanism in the platforms' response is additionally investigated, and the study also examines the effects of different angles of incidence of the seismic records on the maximum response of each platform.
Keywords: hazard analysis, offshore platforms, earthquakes, safety
Procedia PDF Downloads 148
6174 A Biometric Template Security Approach to Fingerprints Based on Polynomial Transformations
Authors: Ramon Santana
Abstract:
The use of biometric identifiers in the field of information security, access control to resources, and authentication in ATMs and banking, among others, raises great concern about the safety of biometric data. Eight vulnerabilities have been detected in the general architecture of a biometric system, six of which allow a minutiae template to be obtained in plain text. The main consequence of obtaining minutiae templates is the loss of the biometric identifier for life. To mitigate these vulnerabilities, several models to protect minutiae templates have been proposed; however, vulnerabilities in the cryptographic security of these models still allow biometric data to be obtained in plain text. In order to increase cryptographic security and ease of reversibility, a minutiae template protection model is proposed. The model aims to provide cryptographic protection and facilitate the reversibility of data using two levels of security. The first is the data transformation level, at which data invariant to rotation and translation are generated; this transformation is irreversible. The second is the evaluation level, at which the encryption key is generated and the data are evaluated using a defined evaluation function. The model is aimed at mitigating the known vulnerabilities of the proposed models, basing its security on the impossibility of polynomial reconstruction.
Keywords: fingerprint, template protection, bio-cryptography, minutiae protection
Procedia PDF Downloads 170
6173 Segregation Patterns of Trees and Grass Based on a Modified Age-Structured Continuous-Space Forest Model
Authors: Jian Yang, Atsushi Yagi
Abstract:
Tree-grass coexistence systems are of great importance for forest ecology. Mathematical models have been proposed to study the dynamics of tree-grass coexistence and the stability of these systems. However, few of the models concentrate on the spatial dynamics of tree-grass coexistence. In this study, we modified an age-structured continuous-space population model for forests to obtain an age-structured continuous-space model of tree-grass competition. In the model, for thermal competitions, adult trees can out-compete grass, and grass can out-compete seedlings. We studied the model mathematically to verify that tree-grass coexistence solutions exist. Numerical experiments demonstrated that the fraction of area that trees or grass occupy can affect whether the coexistence is stable. We also varied the mortality of adult trees while the other parameters and the fractions of area occupied by trees and grass were fixed; the results show that the mortality of adult trees is another factor affecting the stability of tree-grass coexistence in this model.
Keywords: population-structured models, stabilities of ecosystems, thermal competitions, tree-grass coexistence systems
Procedia PDF Downloads 160
6172 Comparison of Applicability of Time Series Forecasting Models VAR, ARCH and ARMA in Management Science: Study Based on Empirical Analysis of Time Series Techniques
Authors: Muhammad Tariq, Hammad Tahir, Fawwad Mahmood Butt
Abstract:
Purpose: This study attempts to identify the best forecasting methodologies for time series. The time series forecasting models VAR, ARCH, and ARMA are considered for the analysis. Methodology: Benchmarks, or parameters, such as adjusted R-squared, F-statistics, Durbin-Watson, and the direction of the roots have been critically and empirically analyzed. The empirical analysis consists of time series data on the Consumer Price Index and a closing stock price. Findings: The results show that the VAR model performed better than the other models; both its reliability and significance are highly appreciable. In contrast, the ARCH model showed very poor forecasting results. The results of the ARMA model, however, were ambiguous: the AR roots indicated that the model is stationary, while the MA roots indicated that the model is invertible. Therefore, forecasts made on the basis of the ARMA model would remain doubtful. It is concluded that the VAR model provides the best forecasting results. Practical Implications: This paper provides empirical evidence for the application of time series forecasting models and therefore provides a basis for the application of the best time series forecasting model.
Keywords: forecasting, time series, auto regression, ARCH, ARMA
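The three model families can be fitted as sketched below with statsmodels and the arch package; the series are synthetic stand-ins for the CPI and closing stock price, and the lag orders are assumed rather than tuned.

```python
import numpy as np
import pandas as pd
from arch import arch_model
from statsmodels.tsa.api import VAR
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-ins: a trending CPI-like series and a geometric price walk.
rng = np.random.default_rng(5)
n = 300
cpi = np.cumsum(rng.normal(0.2, 0.5, n)) + 100
price = 50 * np.exp(np.cumsum(rng.normal(0.0, 0.02, n)))
df = pd.DataFrame({"cpi": cpi, "price": price})

# VAR on first differences of the two series
var_fit = VAR(df.diff().dropna()).fit(maxlags=4, ic="aic")
print("VAR selected lag order:", var_fit.k_ar)

# ARMA(1,1) on the differenced price series
arma_fit = ARIMA(df["price"].diff().dropna(), order=(1, 0, 1)).fit()
print("ARMA AIC:", round(arma_fit.aic, 2))

# ARCH(1) on percentage returns of the price series
returns = 100 * df["price"].pct_change().dropna()
arch_fit = arch_model(returns, vol="ARCH", p=1).fit(disp="off")
print(arch_fit.params)
```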
Procedia PDF Downloads 348
6171 The System of Uniform Criteria for the Characterization and Evaluation of Elements of Economic Structure: The Territory, Infrastructure, Processes, Technological Chains, the End Products
Authors: Aleksandr A. Gajour, Vladimir G. Merzlikin, Vladimir I. Veselov
Abstract:
This paper concerns the analysis of the characteristics of industrial and residential heat-energy facilities as part of the thermal envelope of the Earth's surface, for inclusion in a database for economic forecasting. An idealized model of the Earth's surface is discussed. This model makes it possible to obtain the energy equivalent of each element of the terrain and the world ocean. An energy-efficiency criterion of comfortable human existence is introduced. The dynamics of changes in this criterion offer the possibility of simulating potential technogenic catastrophes arising from the spontaneous industrial development of certain areas of the Earth. A calculated model is given, with a confirmed forecast of Gulf Stream freezing in the polar regions in 2011 due to the disturbance of the heat-energy balance by an oil-polluted oceanic subsurface layer. Two opposing trends of human development, under limited and unlimited amounts of heat-energy resources, are analyzed.
Keywords: Earth's surface, heat-energy consumption, energy criteria, technogenic catastrophes
Procedia PDF Downloads 402
6170 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation
Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro
Abstract:
This study aimed to evaluate the implications of block size and testing order for the efficiency and precision of preference estimation for Dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments. Precision was defined as the inverse of the variance of the estimates of treatment means (or effects). The experiment was originally designed to test 16 treatments as a series of 8 Sudoku 16×16 designs, 4 randomized independently and 4 others in the reverse order, to yield balance in testing order. Linear mixed models were fitted to the whole experiment, with 112 testers and all their grades, as well as to partially balanced subgroups, namely: a) the experiment with the four initial EU; b) the experiment with EU 5 to 8; c) the experiment with EU 9 to 12; and d) the experiment with EU 13 to 16. Responses were recorded on a nine-point hedonic scale, and a mixed linear model analysis was assumed, with random tester and treatment effects and a fixed test-order effect. An analysis with a cumulative random-effects probit link model was very similar, with essentially no different conclusions, so for simplicity we present the results under the Gaussian assumption. The R (CRAN) library lme4 and its function lmer (Fit Linear Mixed-Effects Models) were used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with the function clmm (Cumulative Link Mixed Model), were used to check the Bayesian analysis of threshold models and cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating acceptance. However, providing a large number of samples can help to improve sample discrimination.
Keywords: acceptance, block size, mixed linear model, testing order
Procedia PDF Downloads 321
6169 Supervised Machine Learning Approach for Studying the Effect of Different Joint Sets on Stability of Mine Pit Slopes Under the Presence of Different External Factors
Authors: Sudhir Kumar Singh, Debashish Chakravarty
Abstract:
Slope stability analysis is an important aspect of geotechnical engineering. It is also important from a safety and economic point of view, as any slope failure leads to loss of valuable lives and damage to property worth millions. This paper aims at mitigating the risk of slope failure by studying the effect of different joint sets on the stability of mine pit slopes under the influence of various external factors, namely the degree of saturation, rainfall intensity, and seismic coefficients. A supervised machine learning approach has been utilized for making accurate and reliable predictions regarding the stability of slopes based on the value of the Factor of Safety. Numerous cases were studied by analyzing the stability of slopes using the popular Finite Element Method, and the data thus obtained were used as training data for the supervised machine learning models. The input data were trained on different supervised machine learning models, namely Random Forest, Decision Tree, Support Vector Machine, and XGBoost. Distinct test data not present in the training data were used for measuring the performance and accuracy of the different models. Although all models performed well on the test dataset, Random Forest stands out from the others due to its high accuracy of greater than 95%, providing a valuable tool at our disposal that is neither computationally expensive nor time-consuming and is in good accordance with the numerical analysis results.
Keywords: finite element method, geotechnical engineering, machine learning, slope stability
Procedia PDF Downloads 101
6168 Predicting the Impact of Scope Changes on Project Cost and Schedule Using Machine Learning Techniques
Authors: Soheila Sadeghi
Abstract:
In the dynamic landscape of project management, scope changes are an inevitable reality that can significantly impact project performance. These changes, whether initiated by stakeholders, external factors, or internal project dynamics, can lead to cost overruns and schedule delays. Accurately predicting the consequences of these changes is crucial for effective project control and informed decision-making. This study aims to develop predictive models to estimate the impact of scope changes on project cost and schedule using machine learning techniques. The research utilizes a comprehensive dataset containing detailed information on project tasks, including the Work Breakdown Structure (WBS), task type, productivity rate, estimated cost, actual cost, duration, task dependencies, scope change magnitude, and scope change timing. Multiple machine learning models are developed and evaluated to predict the impact of scope changes on project cost and schedule. These models include Linear Regression, Decision Tree, Ridge Regression, Random Forest, Gradient Boosting, and XGBoost. The dataset is split into training and testing sets, and the models are trained using the preprocessed data. Cross-validation techniques are employed to assess the robustness and generalization ability of the models. The performance of the models is evaluated using metrics such as Mean Squared Error (MSE) and R-squared. Residual plots are generated to assess the goodness of fit and identify any patterns or outliers. Hyperparameter tuning is performed to optimize the XGBoost model and improve its predictive accuracy. The feature importance analysis reveals the relative significance of different project attributes in predicting the impact on cost and schedule. Key factors such as productivity rate, scope change magnitude, task dependencies, estimated cost, actual cost, duration, and specific WBS elements are identified as influential predictors. The study highlights the importance of considering both cost and schedule implications when managing scope changes. The developed predictive models provide project managers with a data-driven tool to proactively assess the potential impact of scope changes on project cost and schedule. By leveraging these insights, project managers can make informed decisions, optimize resource allocation, and develop effective mitigation strategies. The findings of this research contribute to improved project planning, risk management, and overall project success.
Keywords: cost impact, machine learning, predictive modeling, schedule impact, scope changes
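The evaluation loop described above (cross-validated scoring, hyperparameter tuning, feature-importance inspection) can be sketched as follows; the dataset is synthetic, the feature names are taken loosely from the abstract, and GridSearchCV over gradient boosting stands in for the XGBoost tuning step.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, cross_val_score

# Synthetic stand-ins for the project dataset and cost-impact target.
rng = np.random.default_rng(11)
n = 400
features = ["productivity_rate", "scope_change_magnitude", "task_dependencies",
            "estimated_cost", "duration"]
X = rng.normal(size=(n, len(features)))
cost_impact = (0.2 * X[:, 0] + 1.4 * X[:, 1] + 0.6 * X[:, 2]
               + 0.9 * X[:, 3] + 0.3 * X[:, 4] + rng.normal(scale=0.4, size=n))

model = GradientBoostingRegressor(random_state=0)
print("CV R^2:", cross_val_score(model, X, cost_impact, cv=5).mean().round(3))

# Hyperparameter tuning over a small assumed grid.
grid = GridSearchCV(model, {"n_estimators": [100, 300],
                            "learning_rate": [0.05, 0.1]}, cv=5)
grid.fit(X, cost_impact)
print("best params:", grid.best_params_)

# Feature-importance inspection on the tuned model.
for name, imp in zip(features, grid.best_estimator_.feature_importances_):
    print(f"{name}: {imp:.3f}")
```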
Procedia PDF Downloads 43
6167 Churn Prediction for Savings Bank Customers: A Machine Learning Approach
Authors: Prashant Verma
Abstract:
Commercial banks are facing immense pressure, including financial disintermediation, interest rate volatility, and digital modes of finance. Retaining an existing customer is 5 to 25 times less expensive than acquiring a new one. This paper explores customer churn prediction based on various statistical and machine learning models and uses under-sampling to improve the predictive power of these models. The results show that, of the various machine learning models, Random Forest, which predicts churn with 78% accuracy, is the most powerful model for this scenario. Customer vintage, customer age, average balance, occupation code, population code, average withdrawal amount, and the average number of transactions were found to be the variables with high predictive power for the churn prediction model. The model can be deployed by commercial banks to prevent customer churn, so that they may retain the funds kept by savings bank (SB) customers. The article suggests a customized campaign to be initiated by commercial banks to avoid SB customer churn. Hence, by providing better customer satisfaction and experience, commercial banks can limit customer churn and maintain their deposits.
Keywords: savings bank, customer churn, customer retention, random forests, machine learning, under-sampling
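A minimal sketch of churn prediction with under-sampling, assuming the imbalanced-learn package and synthetic features named after the reported predictors.

```python
import numpy as np
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for the savings-bank customer data.
rng = np.random.default_rng(2)
n = 5000
X = np.column_stack([
    rng.uniform(0, 30, n),          # customer vintage [years]
    rng.uniform(18, 80, n),         # age
    rng.lognormal(8, 1, n),         # average balance
    rng.integers(0, 50, n),         # average number of transactions
])
churn = (rng.random(n) < 0.08).astype(int)          # ~8% churners

X_tr, X_te, y_tr, y_te = train_test_split(X, churn, stratify=churn,
                                          random_state=0)
# Balance the training set by down-sampling the majority (non-churn) class.
X_bal, y_bal = RandomUnderSampler(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```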
Procedia PDF Downloads 143
6166 A Comparison of Tsunami Impact to Sydney Harbour, Australia at Different Tidal Stages
Authors: Olivia A. Wilson, Hannah E. Power, Murray Kendall
Abstract:
Sydney Harbour is an iconic location with a dense population and low-lying development. On the east coast of Australia, facing the Pacific Ocean, it is exposed to several tsunamigenic trenches. This paper presents a component of the most detailed assessment to date of the potential for earthquake-generated tsunami impact on Sydney Harbour. Models in this study use dynamic tides to account for tide-tsunami interaction. Sydney Harbour's tidal range is 1.5 m, and the spring tides from January 2015 that are used in the modelling for this study are close to the full tidal range. The tsunami wave trains modelled include hypothetical tsunami generated by earthquakes of magnitude Mw 7.5, 8.0, 8.5, and 9.0 from the Puysegur and New Hebrides trenches, as well as representations of the historical 1960 Chilean and 2011 Tohoku events. All wave trains are modelled for the peak wave to coincide with both a low tide and a high tide. A single wave train, representing an Mw 9.0 earthquake at the Puysegur trench, is modelled for peak waves to coincide with every hour across a 12-hour tidal phase. Using the hydrodynamic model ANUGA, results are compared according to the impact parameters of inundation area, depth variation, and current speeds. Results show that both maximum inundation area and depth variation are tide dependent. Maximum inundation area increases when coincident with a higher tide; however, hazardous inundation is only observed for the larger waves modelled: NH90high and P90high. The maximum and minimum depths are deeper on higher tides and shallower on lower tides. The difference between maximum and minimum depths varies across different tidal phases, although the differences are slight. Maximum current speeds are shown to be a significant hazard for Sydney Harbour; however, they do not show consistent patterns according to tide-tsunami phasing. The maximum current speed hazard is shown to be greater in specific locations such as Spit Bridge, a narrow channel with extensive marine infrastructure. The results presented for Sydney Harbour are novel, and the conclusions are consistent with previous modelling efforts in the greater area. It is shown that tide must be a consideration for both tsunami modelling and emergency management planning. Modelling with peak tsunami waves coinciding with a high tide would be a conservative approach; however, it must be considered that maximum current speeds may be higher on other tides.
Keywords: emergency management, Sydney, tide-tsunami interaction, tsunami impact
Procedia PDF Downloads 242
6165 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction
Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan
Abstract:
Background and purpose: Cardiovascular diseases are among the most common diseases in all societies. The most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is growing rapidly, and medical data mining has great potential for transforming these data into information. Using data mining techniques to generate predictive models for identifying those at risk is very helpful for reducing the effects of the disease. The present study aimed to collect data related to the risk factors of myocardial infarction from patients' medical records and to develop predictive models using data mining algorithms. Methods: The present work was an analytical study conducted on a database containing 350 records. The data were related to patients admitted to Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011, and were collected using a four-sectioned data collection form. Data analysis was performed using SPSS and Clementine version 12. Seven predictive algorithms and one algorithm-based model for predicting association rules were applied to the data. Accuracy, precision, sensitivity, specificity, and positive and negative predictive values were determined, and the final model was obtained. Results: Five parameters, including hypertension, DLP, tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network model was found to have the highest sensitivity, indicating its ability to successfully diagnose the disease. Conclusion: Risk prediction models have great potential for facilitating the management of patients with a specific disease. Health interventions or lifestyle changes can therefore be recommended based on these models to improve the health conditions of individuals at risk.
Keywords: decision trees, neural network, myocardial infarction, data mining
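The reported evaluation metrics can be computed from a confusion matrix as sketched below; the label vectors are assumed toy outputs of a classifier such as the neural network model.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Sensitivity, specificity, and predictive values from a confusion matrix.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0])   # 1 = MI, 0 = no MI
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0])   # classifier output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)    # ability to detect true MI cases
specificity = tn / (tn + fp)    # ability to rule out non-cases
ppv = tp / (tp + fp)            # positive predictive value
npv = tn / (tn + fn)            # negative predictive value
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f}")
```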
Procedia PDF Downloads 429
6164 Signs-Only Compressed Row Storage Format for Exact Diagonalization Study of Quantum Fermionic Models
Authors: Michael Danilov, Sergei Iskakov, Vladimir Mazurenko
Abstract:
The present paper describes a high-performance parallel realization of an exact diagonalization solver for quantum-electron models on a shared-memory computing system. The proposed algorithm contains a storage format for efficiently computing the eigenvalues and eigenvectors of a quantum-electron Hamiltonian matrix. The results of test calculations carried out for a 15-site Hubbard model demonstrate a reduction in the required memory and good multiprocessor scalability, while maintaining performance of the same order as compressed row storage.
Keywords: sparse matrix, compressed format, Hubbard model, Anderson model
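The signs-only idea can be illustrated in miniature: when every non-zero of the Hamiltonian equals ±t for a single amplitude t, only a sign per entry needs storing alongside the usual CRS index arrays. The toy matrix below is assumed; the paper's bit-packed parallel version is analogous.

```python
import numpy as np

# Signs-only compressed row storage (CRS) sketch: one sign per non-zero
# instead of a full floating-point value, plus a single common amplitude t.
t = 1.0                                   # common hopping amplitude (assumed)
row_ptr = np.array([0, 2, 4, 6])          # CRS row pointers (3x3 toy matrix)
col_idx = np.array([1, 2, 0, 2, 0, 1])    # CRS column indices
signs = np.array([1, -1, 1, 1, -1, 1], dtype=np.int8)   # one sign per non-zero

def signs_only_matvec(x):
    """y = H @ x with H stored as signs times a common amplitude t."""
    y = np.zeros_like(x)
    for i in range(len(row_ptr) - 1):
        lo, hi = row_ptr[i], row_ptr[i + 1]
        y[i] = t * np.dot(signs[lo:hi], x[col_idx[lo:hi]])
    return y

x = np.array([1.0, 2.0, 3.0])
print(signs_only_matvec(x))   # matrix-vector product, e.g. inside Lanczos
```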
Procedia PDF Downloads 402
6163 Optimizing the Passenger Throughput at an Airport Security Checkpoint
Authors: Kun Li, Yuzheng Liu, Xiuqi Fan
Abstract:
A high security standard and high screening efficiency seem to contradict each other in the airport security check process. Improving efficiency as far as possible while maintaining the same security standard is therefore significantly meaningful. This paper utilizes knowledge of Operations Research and Stochastic Processes to establish mathematical models to explore this problem. We analyze the current airport security check process and use the M/G/1 and M/G/k models from queuing theory to describe it. We find that the least efficient part is the Pre-Check lane, the bottleneck of the queuing system. To improve passenger throughput and reduce the variance of passengers' waiting time, we adjust our models, apply the Monte Carlo method, and put forward three modifications: adjust the ratio of Pre-Check lanes to regular lanes flexibly, determine the optimal number of security screening lines based on a cost analysis, and adjust the distributions of arrival and service times based on Monte Carlo simulation results. We also analyze the impact of cultural differences as a sensitivity analysis. Finally, we give recommendations for the current airport security check process.
Keywords: queue theory, security check, stochastic process, Monte Carlo simulation
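The M/G/1 analysis rests on the Pollaczek-Khinchine formula for the mean waiting time, E[W] = λE[S²]/(2(1−ρ)); the sketch below checks it against a small simulation with assumed arrival and service parameters for one screening lane.

```python
import numpy as np

# Pollaczek-Khinchine mean waiting time for an M/G/1 queue, checked by simulation.
lam = 0.8                     # Poisson arrival rate [passengers/min] (assumed)
mean_s, sd_s = 1.0, 0.5       # general service time: mean and std dev [min]
rho = lam * mean_s            # utilization (must be < 1 for stability)

# E[W] = lam * E[S^2] / (2 * (1 - rho))
es2 = sd_s ** 2 + mean_s ** 2
w_pk = lam * es2 / (2 * (1 - rho))

# Monte Carlo check with lognormal service times matching mean_s and sd_s.
rng = np.random.default_rng(9)
n = 200_000
arrivals = np.cumsum(rng.exponential(1 / lam, n))
sigma2 = np.log(1 + (sd_s / mean_s) ** 2)
service = rng.lognormal(np.log(mean_s) - sigma2 / 2, np.sqrt(sigma2), n)

start = np.empty(n)
free_at = 0.0
for i in range(n):
    start[i] = max(arrivals[i], free_at)   # wait until the single server is free
    free_at = start[i] + service[i]

print(f"P-K mean wait: {w_pk:.3f} min, simulated: {(start - arrivals).mean():.3f}")
```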
Procedia PDF Downloads 200
6162 Application of Signature Verification Models for Document Recognition
Authors: Boris M. Fedorov, Liudmila P. Goncharenko, Sergey A. Sybachin, Natalia A. Mamedova, Ekaterina V. Makarenkova, Saule Rakhimova
Abstract:
In modern economic conditions, the question of whether a signature on a digital document can be correctly recognized, in order to verify an expression of will or confirm a certain operation, is highly relevant. Additional processing complexity lies in the dynamic variability of each individual's signature, as well as in the way the information is processed, because a signature constitutes biometric data. The article discusses the use of artificial intelligence models to improve the quality of signature confirmation in document recognition. An analysis of several possible options for using the model is carried out. The results of the study are given, showing that it is possible to correctly determine the authenticity of a signature on small samples.
Keywords: signature recognition, biometric data, artificial intelligence, neural networks
Procedia PDF Downloads 148
6161 Analog Input Output Buffer Information Specification Modelling Techniques for Single Ended Inter-Integrated Circuit and Differential Low Voltage Differential Signaling I/O Interfaces
Authors: Monika Rawat, Rahul Kumar
Abstract:
Input/Output Buffer Information Specification (IBIS) models are used for describing the analog behavior of the input/output (I/O) buffers of a digital device, and they are widely used to perform signal integrity analysis. Advantages of using IBIS models include their simple structure, IP protection, and fast simulation time with reasonable accuracy. As the design complexity of drivers and receivers increases, capturing the exact behavior of the transistor-level model in the IBIS model becomes an essential task for achieving better accuracy. In this paper, an improvement to the existing methodology of generating IBIS models for complex I/O interfaces such as Inter-Integrated Circuit (I2C) and Low Voltage Differential Signaling (LVDS) is proposed. Furthermore, the accuracy and computational performance of the standard method and the proposed approach with respect to SPICE are presented. These investigations will be useful for further improving the accuracy of IBIS models and for enhancing their wider acceptance.
Keywords: IBIS, signal integrity, open-drain buffer, low voltage differential signaling, behavior modelling, transient simulation
Procedia PDF Downloads 196
6160 An As-Is Analysis and Approach for Updating Building Information Models and Laser Scans
Authors: Rene Hellmuth
Abstract:
Factory planning has the task of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring of the factory building is becoming more important in order to maintain the competitiveness of a factory. Restrictions on new areas, shorter life cycles of products and production technology, as well as a VUCA world (Volatility, Uncertainty, Complexity & Ambiguity) lead to more frequent restructuring measures within a factory. A building information model (BIM) is the planning basis for rebuilding measures and becomes an indispensable data repository for reacting quickly to changes. Its use as a planning basis for restructuring measures in factories only succeeds if the BIM model has adequate data quality. Under this aspect and the industrial requirement, three data quality factors are particularly important for this paper regarding the BIM model: up-to-dateness, completeness, and correctness. The research question is: how can a BIM model be kept up to date with the required data quality, and which visualization techniques can be applied in a short period of time on the construction site during conversion measures? An as-is analysis is made of how BIM models and digital factory models (including laser scans) are currently being kept up to date. Industrial companies are interviewed, and expert interviews are conducted. Subsequently, the results are evaluated, and a procedure is conceived for carrying out cost-effective and time-saving updating processes. The availability of low-cost hardware and the simplicity of the process are important for enabling service personnel from facility management to keep digital factory models (BIM models and laser scans) up to date. The approach includes the detection of changes to the building, the recording of the changed area, and its insertion into the overall digital twin. Finally, an overview of the possibilities for visualizations suitable for construction sites is compiled. An augmented reality application is created based on an updated BIM model of a factory and installed on a tablet. Conversion scenarios with costs and time expenditure are displayed. A user interface is designed in such a way that all relevant conversion information is available at a glance for the respective conversion scenario. A total of three essential research results are achieved: an as-is analysis of current update processes for BIM models and laser scans, the development of a time-saving and cost-effective update process, and the conception and implementation of an augmented reality solution for BIM models suitable for construction sites.
Keywords: building information modeling, digital factory model, factory planning, restructuring
Procedia PDF Downloads 114
6159 Surface-Enhanced Raman Spectroscopy on Gold Nanoparticles in the Kidney Disease
Authors: Leonardo C. Pacheco-Londoño, Nataly J Galan-Freyle, Lisandro Pacheco-Lugo, Antonio Acosta-Hoyos, Elkin Navarro, Gustavo Aroca-Martinez, Karin Rondón-Payares, Alberto C. Espinosa-Garavito, Samuel P. Hernández-Rivera
Abstract:
At the Life Science Research Center at Simon Bolivar University, a primary focus is the diagnosis of various diseases, and the use of gold nanoparticles (Au-NPs) in diverse biomedical applications is continually expanding. In the present study, Au-NPs were employed as substrates for Surface-Enhanced Raman Spectroscopy (SERS) aimed at diagnosing kidney diseases arising from Lupus Nephritis (LN), preeclampsia (PC), and Hypertension (H). Discrimination models for distinguishing patients with and without kidney disease were developed from the SERS signals of urine samples by partial least squares-discriminant analysis (PLS-DA). A comparative study of the Raman signals across the three conditions was conducted, leading to the identification of potential metabolite signals. Model validation was carried out using cross-validation and external validation, and parameters such as sensitivity and specificity were determined; the models showed average values of 0.9 for both parameters. Additionally, it is worth highlighting that this collaborative effort involved two university research centers and two healthcare institutions, ensuring ethical treatment of and informed consent for patient samples.
Keywords: SERS, Raman, PLS-DA, kidney diseases
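A minimal PLS-DA sketch: PLS regression against a binary class label, thresholded at 0.5; the "spectra" are synthetic Gaussian bands standing in for real SERS measurements of urine samples.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic SERS-like spectra: a Gaussian "Raman band" shifted between classes.
rng = np.random.default_rng(4)
wavenumbers = np.linspace(400, 1800, 300)
n_per_class = 40

def spectrum(center):
    return (np.exp(-((wavenumbers - center) ** 2) / (2 * 40 ** 2))
            + 0.05 * rng.normal(size=wavenumbers.size))

X = np.array([spectrum(1000) for _ in range(n_per_class)] +
             [spectrum(1050) for _ in range(n_per_class)])
y = np.array([0] * n_per_class + [1] * n_per_class)   # 0 = control, 1 = disease

pls = PLSRegression(n_components=3)
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()     # cross-validated scores
y_hat = (y_cv > 0.5).astype(int)                      # PLS-DA decision rule

sens = (y_hat[y == 1] == 1).mean()
spec = (y_hat[y == 0] == 0).mean()
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```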
Procedia PDF Downloads 45
6158 Plot Scale Estimation of Crop Biophysical Parameters from High Resolution Satellite Imagery
Authors: Shreedevi Moharana, Subashisa Dutta
Abstract:
The present study focuses on the estimation of crop biophysical parameters, such as crop chlorophyll, nitrogen, and water stress, at plot scale in crop fields. To achieve this, we used high-resolution LISS IV satellite imagery. A new methodology is proposed in this research work: the spectral shape function of the paddy crop is employed to identify the significant wavelengths sensitive to paddy crop parameters. From the shape functions, regression index models were established relating the critical wavelength to the minimum and maximum wavelengths of the multi-spectral high-resolution LISS IV data, and these functional relationships were utilized to develop the index models. From these index models, crop biophysical parameters were estimated and mapped from LISS IV imagery at plot scale at the crop field level. The results showed that the nitrogen content of the paddy crop varied from 2-8%, chlorophyll from 1.5-9%, and water content from 40-90%. It was observed that the variability in the rice agriculture system in India was purely a function of field topography.
Keywords: crop parameters, index model, LISS IV imagery, plot scale, shape function
Procedia PDF Downloads 168
6157 Application of Data Driven Based Models as Early Warning Tools of High Stream Flow Events and Floods
Authors: Mohammed Seyam, Faridah Othman, Ahmed El-Shafie
Abstract:
The early warning of high stream flow events (HSF) and floods is an important aspect of the management of surface water and river systems. This process can be performed using either process-based models or data-driven models such as artificial intelligence (AI) techniques. The main goal of this study is to develop an efficient AI-based model for predicting the real-time hourly stream flow (Q) and to apply it as an early warning tool for HSF and floods in the downstream area of the Selangor River basin, taken here as a paradigm of humid tropical rivers in Southeast Asia. The performance of the AI-based models has been improved through the integration of lag time (Lt) estimation in the modelling process. A total of 8753 patterns of hourly Q, water level, and rainfall records, representing a one-year period (2011), were utilized in the modelling process. Six hydrological scenarios were arranged through hypothetical cases of the input variables to investigate how changes in rainfall intensity at upstream stations can lead to the formation of floods. The initial stream flow was changed for each scenario in order to include a wide range of hydrological situations in this study. The performance evaluation of the developed AI-based model shows that a high correlation coefficient (R) between the observed and predicted Q is achieved. The AI-based model has been successfully employed in early warning through the advance detection of hydrological conditions that could lead to the formation of floods and HSF, represented by three levels of severity (i.e., alert, warning, and danger). Based on the results of the scenarios, reaching the danger level in the downstream area requires high rainfall intensity in at least two upstream areas. According to the results of the applications, it can be concluded that AI-based models are beneficial tools for local authorities for flood control and awareness.
Keywords: floods, stream flow, hydrological modelling, hydrology, artificial intelligence
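The lag-time idea can be sketched by building lagged rainfall and previous-flow features for a small neural network; the series, lag value, and alert thresholds below are assumptions, not the Selangor data.

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.neural_network import MLPRegressor

# Predict hourly stream flow Q from lagged rainfall, water level, and previous
# flow; everything here is a synthetic stand-in for the real hourly records.
rng = np.random.default_rng(6)
n, lag = 2000, 3                       # assumed lag time of 3 hours
rain = rng.gamma(0.3, 2.0, n)                        # hourly rainfall
level = 1.0 + 0.05 * np.convolve(rain, np.ones(6), "same")
q = 5 + 8 * np.roll(rain, lag) + 2 * level + rng.normal(0, 0.5, n)

# Input vector: rainfall at t-lag, current water level, previous flow.
X = np.column_stack([np.roll(rain, lag), level, np.roll(q, 1)])[lag + 1:]
y = q[lag + 1:]

split = int(0.8 * len(y))
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
q_pred = model.predict(X[split:])
print("R^2 on held-out hours:", round(r2_score(y[split:], q_pred), 3))

# Early-warning rule: map predicted Q onto alert/warning/danger thresholds.
thresholds = {"alert": 12.0, "warning": 18.0, "danger": 25.0}   # assumed levels
for name, thr in thresholds.items():
    print(name, "hours:", int((q_pred > thr).sum()))
```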
Procedia PDF Downloads 248
6156 The Prediction of Effective Equation on Drivers' Behavioral Characteristics of Lane Changing
Authors: Khashayar Kazemzadeh, Mohammad Hanif Dasoomi
Abstract:
With the increasing volume of traffic, lane changing plays a crucial role in traffic flow. Lane changing in traffic depends on several factors, including road geometric design, speed, and drivers' behavioral characteristics. A great deal of research has been carried out in these fields. Despite the other significant factors, the drivers' behavioral characteristics of lane changing are emphasized in this paper. This paper predicts the effective equation for lane changing based on personal characteristics, using regression models.
Keywords: effective equation, lane changing, drivers' behavioral characteristics, regression models
Procedia PDF Downloads 450