Search results for: network model
17649 Numerical Analysis of Swirling Chamber Using Improved Delayed Detached Eddy Simulation Turbulence Model
Authors: Hamad M. Alhajeri
Abstract:
The swirling chamber is a promising cooling method for heavily thermally loaded parts, such as turbine blades, because the additional circumferential velocity improves turbulent mixing of the fluid. This paper numerically investigates the effect of the turbulence model on heat convection in the swirling chamber. A grid independence analysis is conducted to obtain the proper grid dimension, and the work is validated against experimental data available in the literature. Flow analysis is carried out using the improved delayed detached eddy simulation turbulence model and the Reynolds-averaged Navier-Stokes k-ɛ turbulence model. The flow characteristics near the exit change when the improved delayed detached eddy simulation model is used.
Keywords: gas turbine, Nusselt number, flow characteristics, heat transfer
Procedia PDF Downloads 200
17648 Numerical Simulation of Wishart Diffusion Processes
Authors: Raphael Naryongo, Philip Ngare, Anthony Waititu
Abstract:
This paper deals with the numerical simulation of Wishart processes for a single-asset risky pricing model whose volatility is described by Wishart affine diffusion processes. The multi-factor specification of volatility makes the model flexible enough to fit stock market data for short or long maturities. The Wishart process is a stochastic process that is a positive semi-definite, matrix-valued generalization of the square-root process. The aim of the study is to model log asset stock returns under the double Wishart stochastic volatility model. The solution of the log-asset return dynamics for bi-Wishart processes is obtained through Euler-Maruyama discretization schemes. The numerical results on the asset returns are compared to the returns of existing models, such as the Heston stochastic volatility model and the double Heston stochastic volatility model.
Keywords: Euler schemes, log-asset return, infinitesimal generator, Wishart diffusion affine processes
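As a rough illustration of the Euler-Maruyama step for a matrix-valued square-root process, the sketch below simulates a single Wishart-type variance path; the parameters (alpha, M, Q) and the eigenvalue clipping are assumptions for illustration, not the paper's double Wishart specification:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)

# Hypothetical 2x2 Wishart-type parameters (illustrative values only)
alpha = 3.0                                 # degrees-of-freedom term
M = np.array([[-1.0, 0.0], [0.0, -1.0]])    # mean reversion
Q = np.array([[0.2, 0.0], [0.05, 0.2]])     # volatility of volatility
X = 0.04 * np.eye(2)                        # initial variance matrix
dt, n_steps = 1e-3, 1000

# Euler-Maruyama for dX = (alpha*Q'Q + MX + XM')dt + sqrt(X) dW Q + Q' dW' sqrt(X)
for _ in range(n_steps):
    dW = rng.normal(scale=np.sqrt(dt), size=(2, 2))
    drift = alpha * Q.T @ Q + M @ X + X @ M.T
    S = np.real(sqrtm(X))
    X = X + drift * dt + S @ dW @ Q + Q.T @ dW.T @ S
    # Euler steps can leave the PSD cone; symmetrize and clip eigenvalues
    X = 0.5 * (X + X.T)
    w, V = np.linalg.eigh(X)
    X = V @ np.diag(np.clip(w, 0.0, None)) @ V.T

print("terminal variance matrix:\n", X)
```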
Procedia PDF Downloads 376
17647 Input-Output Analysis in Laptop Computer Manufacturing
Authors: H. Z. Ulukan, E. Demircioğlu, M. Erol Genevois
Abstract:
The scope of this paper and the aim of the proposed model are to apply monetary input-output (I-O) analysis to point out the importance of reusing know-how and other requirements in order to reduce production costs in a laptop computer manufacturing process. The I-O approach using the monetary input-output model is employed to demonstrate the impacts of different factors in a manufacturing process. A sensitivity analysis showing the correlation between these different factors is also presented. It is expected that the recommended model would have an advantageous effect on the cost minimization process.
Keywords: input-output analysis, monetary input-output model, manufacturing process, laptop computer
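For readers unfamiliar with the mechanics, a minimal Leontief-style computation is sketched below; the technical-coefficient matrix, the sector labels, and the final demand vector are invented placeholders, not figures from the paper:

```python
import numpy as np

# Hypothetical technical coefficients A[i, j]: monetary input from
# sector i needed per unit of output of sector j (illustrative values)
A = np.array([
    [0.10, 0.30, 0.05],   # components
    [0.05, 0.10, 0.20],   # assembly
    [0.02, 0.05, 0.10],   # know-how / services
])
d = np.array([50.0, 120.0, 30.0])  # final demand per sector

# Leontief model: x = A x + d  =>  x = (I - A)^{-1} d
x = np.linalg.solve(np.eye(3) - A, d)
print("total output per sector:", x)

# Simple sensitivity: perturb one coefficient and observe total output
A_pert = A.copy()
A_pert[2, 1] *= 0.9            # 10% less know-how input to assembly
x_pert = np.linalg.solve(np.eye(3) - A_pert, d)
print("change in total output:", x_pert - x)
```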
Procedia PDF Downloads 388
17646 Space Vector PWM and Model Predictive Control for Voltage Source Inverter Control
Authors: Irtaza M. Syed, Kaamran Raahemifar
Abstract:
In this paper, we present a comparative assessment of space vector pulse width modulation (SVPWM) and model predictive control (MPC) for a two-level three-phase (2L-3P) voltage source inverter (VSI). The VSI and its associated system are subjected to both control techniques, and the results are compared. Matlab/Simulink was used to model, simulate, and validate the control schemes. The findings of this study show that MPC is superior to SVPWM in terms of total harmonic distortion (THD) and implementation.
Keywords: voltage source inverter, space vector pulse width modulation, model predictive control, comparison
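A finite-control-set MPC loop for a two-level VSI can be sketched as follows: the eight switching states are enumerated, the load current is predicted one step ahead with a discretized RL model, and the state minimizing the current-tracking cost is applied. The DC-link voltage, load parameters, and cost function are assumptions for illustration, not the authors' Simulink setup:

```python
import numpy as np
from itertools import product

Vdc, R, L, Ts = 400.0, 5.0, 10e-3, 50e-6   # assumed plant parameters

def clarke(v_abc):
    """Amplitude-invariant abc -> alpha-beta transformation."""
    a, b, c = v_abc
    return np.array([(2/3) * (a - 0.5*b - 0.5*c),
                     (1/np.sqrt(3)) * (b - c)])

# The 8 switching states (Sa, Sb, Sc) and their alpha-beta voltages;
# the common-mode component cancels in the Clarke transform.
states = list(product([0, 1], repeat=3))
v_ab = [clarke(Vdc * np.array(s)) for s in states]

def mpc_step(i_meas, i_ref):
    """Pick the switching state minimizing the one-step current error."""
    best, best_cost = None, np.inf
    for s, v in zip(states, v_ab):
        i_pred = i_meas + (Ts / L) * (v - R * i_meas)  # Euler RL model
        cost = np.linalg.norm(i_ref - i_pred)
        if cost < best_cost:
            best, best_cost = s, cost
    return best

print(mpc_step(np.array([0.0, 0.0]), np.array([5.0, 0.0])))
```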
Procedia PDF Downloads 506
17645 Reconfigurable Ubiquitous Computing Infrastructure for Load Balancing
Authors: Khaled Sellami, Lynda Sellami, Pierre F. Tiako
Abstract:
Ubiquitous computing helps make data and services available to users anytime and anywhere, which makes the cooperation of devices a crucial need. In return, such cooperation can overload the devices and/or networks, resulting in network malfunction and suspension of its activities. Our goal in this paper is to propose a device reconfiguration approach that helps reduce energy consumption in ubiquitous environments. The idea is that when high energy consumption is detected, we change the distribution of components across the devices to reduce and/or balance the energy consumption. We also investigate the possibility of detecting high energy consumption of devices or the network based on device abilities. As a result, our idea realizes a reconfiguration of devices aimed at reducing energy consumption and/or load balancing in ubiquitous environments.
Keywords: ubiquitous computing, load balancing, device energy consumption, reconfiguration
Procedia PDF Downloads 274
17644 ARIMA-GARCH, A Statistical Modeling for Epileptic Seizure Prediction
Authors: Salman Mohamadi, Seyed Mohammad Ali Tayaranian Hosseini, Hamidreza Amindavar
Abstract:
In this paper, we provide a procedure to analyze and model the EEG (electroencephalogram) signal as a time series using ARIMA-GARCH to predict an epileptic attack. The heteroskedasticity of the EEG signal is examined through ARCH and GARCH (autoregressive conditional heteroskedasticity and generalized autoregressive conditional heteroskedasticity) tests. The best ARIMA-GARCH model in the AIC sense is utilized to measure the volatility of the EEG from epileptic canine subjects and to forecast future values of the EEG. An ARIMA-only model can perform prediction, but an ARCH or GARCH model acting on the residuals of the ARIMA attains a considerably improved forecast horizon. First, we estimate the best ARIMA model; then, different orders of ARCH and GARCH models are surveyed to determine the best heteroskedastic model of the residuals of that ARIMA. Using the simulated conditional variance of the selected ARCH or GARCH model, we suggest a procedure to predict oncoming seizures. The results indicate that GARCH modeling captures the dynamic changes of variance well before the onset of a seizure. It can be inferred that the prediction capability comes from the ability of the combined ARIMA-GARCH modeling to cover the heteroskedastic nature of EEG signal changes.
Keywords: epileptic seizure prediction, ARIMA, ARCH and GARCH modeling, heteroskedasticity, EEG
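The two-stage fit described here (ARIMA on the series, then GARCH on its residuals) can be sketched with standard Python libraries; the model orders and the synthetic stand-in signal are placeholders, not the orders selected in the study:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

rng = np.random.default_rng(1)
eeg = np.cumsum(rng.standard_normal(2000)) * 0.1   # stand-in for an EEG trace

# Stage 1: fit an ARIMA model (order would be chosen by AIC search)
arima_res = ARIMA(eeg, order=(2, 1, 1)).fit()

# Stage 2: model the conditional variance of the ARIMA residuals
garch_res = arch_model(arima_res.resid, p=1, q=1, mean="Zero").fit(disp="off")

# Forecast conditional variance; under the paper's procedure, a
# sustained rise would flag a possible oncoming seizure
var_forecast = garch_res.forecast(horizon=10).variance.iloc[-1]
print(var_forecast)
```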
Procedia PDF Downloads 403
17643 Multi-Level Meta-Modeling for Enabling Dynamic Subtyping for Industrial Automation
Authors: Zoltan Theisz, Gergely Mezei
Abstract:
Modern industrial automation relies on service-oriented concepts of Internet of Things (IoT) device modeling in order to provide a flexible and extendable environment for a service meta-repository. However, state-of-the-art meta-modeling techniques prefer design-time modeling, which results in heavy use of classes and sometimes unnecessary static subtyping. Although this approach benefits from clear-cut object-oriented design principles, it also seals the model repository against further dynamic extensions. In this paper, a dynamic multi-level modeling approach is introduced that enables dynamic subtyping through a more relaxed partial instantiation mechanism. The approach is demonstrated on a simple sensor network example.
Keywords: meta-modeling, dynamic subtyping, DMLA, industrial automation, Arrowhead
Procedia PDF Downloads 358
17642 Simulation of Uniaxial Ratcheting Behaviors of SA508-3 Steel at Elevated Temperature
Authors: Jun Tian, Yu Yang, Liping Zhang, Qianhua Kan
Abstract:
Experimental results show that SA508-3 steel exhibits a temperature-dependent cyclic softening characteristic and obvious ratcheting behaviors, and dynamic strain aging was observed in the temperature range of 200 ºC to 350 ºC. Based on these observations, a temperature-dependent cyclic plastic constitutive model is proposed by introducing nonlinear cyclic softening and kinematic hardening rules, and dynamic strain aging is also incorporated into the constitutive model. Comparisons between experiments and simulations were carried out to validate the proposed model at elevated temperature.
Keywords: constitutive model, elevated temperature, ratcheting, SA508-3
Procedia PDF Downloads 301
17641 Urban Flood Resilience Comprehensive Assessment of "720" Rainstorm in Zhengzhou Based on Multiple Factors
Authors: Meiyan Gao, Zongmin Wang, Haibo Yang, Qiuhua Liang
Abstract:
Under the background of global climate change and rapid urbanization, the frequency of climate disasters such as extreme precipitation in cities around the world is gradually increasing. In this paper, the Hi-PIMS model is used to simulate the "720" flood in Zhengzhou, the continuous stages of flood resilience are determined, and the urban flood stages are divided. Flood resilience curves under the influence of multiple factors are determined, and urban flood resilience is evaluated by combining the results of the resilience curves. The flood resilience of each urban unit grid is evaluated based on economy, population, road network, hospital distribution, and land use type. First, rainfall data from meteorological stations near Zhengzhou and remote sensing rainfall data from July 17 to 22, 2021, were collected, and the Kriging interpolation method was used to extend the rainfall data over Zhengzhou. Based on the rainfall data, the flood process generated by four rainfall events in Zhengzhou was reproduced. Based on the inundation extent and depth in different areas, the flood process was divided into four stages, absorption, resistance, overload, and recovery, with reference to the 1-in-50-year rainfall standard. At the same time, based on the levels of slope, GDP, population, hospital service area, land use type, road network density, and other factors, the resilience curve was applied to evaluate the urban flood resilience of different regional units, and the differences in the flood processes of different precipitation events in the "720" rainstorm in Zhengzhou were analyzed. Faced with a rainstorm exceeding a 1,000-year return period, most areas quickly enter the overload stage. The influence of each factor differs between areas: some areas with ramps or higher terrain have better resilience and restore normal social order faster, i.e., their recovery stage needs less time, while low-lying areas or special terrain, such as tunnels, enter the overload stage faster under heavy rainfall. As a result, higher levels of flood protection, water level warning systems, and faster emergency response are needed in areas with low resilience and high risk. The building density of built-up areas, the population of densely populated areas, and road network density all have a certain negative impact on urban flood resistance, while the positive impact of slope on flood resilience is very obvious. While hospitals have positive effects on medical treatment, they also bring negative effects, such as population density and asset density, when floods occur; a separate comparison of the unit grids containing hospitals shows that their resilience is low during floods. Therefore, in addition to improving a city's flood resistance capacity, reasonable planning can also increase its flood response capacity. Changes in these influencing factors can further improve urban flood resilience, for example by raising design standards, providing temporary water storage areas when floods occur, training emergency personnel for faster response, and adjusting emergency support equipment.
Keywords: urban flood resilience, resilience assessment, hydrodynamic model, resilience curve
Procedia PDF Downloads 38
17640 Exploring the Energy Model of Cumulative Grief
Authors: Masica Jordan Alston, Angela N. Bullock, Angela S. Henderson, Stephanie Strianse, Sade Dunn, Joseph Hackett, Alaysia Black Hackett, Marcus Mason
Abstract:
The Energy Model of Cumulative Grief was created in 2018. It utilizes historic grief stage theories, and the innovative model is additionally unique due to its focus on cultural responsiveness. The Energy Model of Cumulative Grief helps to train practitioners who work with clients dealing with grief and loss. This paper introduces this innovative model and explores how it positively impacted a convenience sample of 140 practitioners and individuals experiencing grief and loss. Respondents participated in webinars provided by the National Grief and Loss Center of America (NGLCA). Participants in this cross-sectional study completed one of three Grief and Loss Surveys created by the Grief and Loss Centers of America. Data analysis was conducted via SPSS and Survey Hero to examine the survey results. The results indicate that the Energy Model of Cumulative Grief was an effective resource for participants in addressing grief and loss: the majority of participants found the webinars to be helpful and a conduit to higher levels of hope. The findings suggest that the Energy Model of Cumulative Grief is effective in providing culturally responsive grief and loss resources to practitioners and clients. There are far-reaching implications for the use of technology to provide hope, through the Energy Model of Cumulative Grief, to those suffering from grief and loss worldwide.
Keywords: grief, loss, grief energy, grieving brain
Procedia PDF Downloads 82
17639 An Investigation of Influential Factors in Adopting the Cloud Computing in Saudi Arabia: An Application of Technology Acceptance Model
Authors: Shayem Saleh ALresheedi, Lu Song Feng, Abdulaziz Abdulwahab M. Fatani
Abstract:
Cloud computing is an emerging concept in the technological sphere. Its development enables many applications to make information available online and on demand. It is becoming an essential element for businesses due to its ability to diminish the costs of IT infrastructure, and it is being adopted in Saudi Arabia. However, many factors affect its adoption. Several researchers in the field have ignored the TAM model for identifying the relevant factors and their impact on the adoption of cloud computing. This study focuses on evaluating the acceptability of cloud computing and analyzing its influencing factors using the Technology Acceptance Model (TAM) in Saudi Arabia. It suggests a model to examine the influential factors of the TAM model, along with the external factor of technical support, in adopting cloud computing. The proposed model was tested through multiple hypotheses, based on calculation tools and data collected from customers through questionnaires. The findings of the study show that the TAM model, along with external factors, can be applied in measuring the expected adoption of cloud computing. The study presents an investigation of influential factors and further recommendations for adopting cloud computing in Saudi Arabia.
Keywords: cloud computing, acceptability, adoption, determinants
Procedia PDF Downloads 190
17638 Utilization of an Object Oriented Tool to Perform Model-Based Safety Analysis According to Extended Failure System Models
Authors: Royia Soliman, Salma ElAnsary, Akram Amin Abdellatif, Florian Holzapfel
Abstract:
Model-Based Safety Analysis (MBSA) is an approach in which the system and safety engineers share a common system model created using a model-based development process. The model can also be extended with the failure modes of the system components. There are two well-known approaches for adding fault behaviors to system models: the first is to embed the failure behavior directly into the system design; the second is to develop a fault model separately from the system model and combine the two independent models for safety analysis. This paper introduces a hybrid MBSA approach that uses informal abstracted models to investigate failure behaviors, combining concepts such as directed graph traversal, event lists, and constraint satisfaction problems (CSP). The approach is implemented in an object-oriented programming language. Components are abstracted to their failure logic and the relationships between connected components. The implemented approach is tested on various flight control systems, including electrical and multi-domain examples. The tests are analyzed, and a comparison to different approaches is presented.
Keywords: flight control systems, model-based safety analysis, safety assessment analysis, system modelling
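As a toy illustration of the directed-graph-traversal idea (not the authors' tool), the sketch below abstracts components to their connections and propagates a failure event through them; the component names are invented:

```python
from collections import deque

# Hypothetical component graph: edges point from a component to the
# components that depend on its output (illustrative example only)
depends_on_me = {
    "power_bus": ["actuator_ctrl", "sensor"],
    "sensor": ["actuator_ctrl"],
    "actuator_ctrl": ["elevator"],
    "elevator": [],
}

def propagate_failure(source):
    """Breadth-first traversal collecting every component reachable
    from the failed source, i.e. the affected set of the event."""
    affected, queue = {source}, deque([source])
    while queue:
        comp = queue.popleft()
        for nxt in depends_on_me[comp]:
            if nxt not in affected:
                affected.add(nxt)
                queue.append(nxt)
    return affected

print(propagate_failure("power_bus"))
# {'power_bus', 'sensor', 'actuator_ctrl', 'elevator'}
```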
Procedia PDF Downloads 163
17637 Fractal-Wavelet Based Techniques for Improving the Artificial Neural Network Models
Authors: Reza Bazargan lari, Mohammad H. Fattahi
Abstract:
Natural resources management, including water resources, requires reliable estimations of time-variant environmental parameters, and small improvements in the estimation of these parameters would have great effects on managing decisions. Noise reduction using wavelet techniques is an effective approach for pre-processing practical data sets. The predictability enhancement of river flow time series is assessed using fractal approaches before and after applying wavelet-based pre-processing. Time series correlation and persistency, the minimum length sufficient for training the predicting model, and the maximum valid length of predictions were also investigated through a fractal assessment.
Keywords: wavelet, de-noising, predictability, time series fractal analysis, valid length, ANN
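A common wavelet de-noising recipe of the kind referred to here (decompose, threshold the detail coefficients, reconstruct) can be sketched with PyWavelets; the wavelet choice, decomposition level, and threshold rule are assumptions, not the paper's settings:

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 1024)
flow = np.sin(2 * np.pi * 4 * t) + 0.4 * rng.standard_normal(t.size)

# Decompose, soft-threshold the detail coefficients, reconstruct
coeffs = pywt.wavedec(flow, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
thr = sigma * np.sqrt(2 * np.log(flow.size))          # universal threshold
denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]
]
denoised = pywt.waverec(denoised_coeffs, "db4")

clean = np.sin(2 * np.pi * 4 * t)
print("residual std before/after:",
      np.std(flow - clean), np.std(denoised[:t.size] - clean))
```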
Procedia PDF Downloads 368
17636 An Alternative Stratified Cox Model for Correlated Variables in Infant Mortality
Authors: K. A. Adeleke
Abstract:
Often in epidemiological research, introducing a stratified Cox model can account for interactions of some inherent factors with major or noticeable factors. This research work aims at modelling correlated variables in infant mortality in the presence of inherent factors affecting the infant survival function. An alternative semiparametric stratified Cox model is proposed with a view to taking care of multilevel factors that interact with others. The model is applied to infant mortality data from the Nigeria Demographic and Health Survey (NDHS), with multilevel factors (tetanus, polio, and breastfeeding) correlated with the main factors (sex, size, and mode of delivery). Asymptotic properties of the estimators are also studied via simulation. The tested model showed a good fit to the data and performed differently depending on the levels of the interaction with the strata variable Z*. Evidence that the baseline hazard functions and regression coefficients are not the same from stratum to stratum provides a gain in information over the usual Cox model. The simulation results showed that the present method produced better estimates in terms of bias, lower standard errors, and/or mean square errors.
Keywords: stratified Cox, semiparametric model, infant mortality, multilevel factors, confounding variables
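For orientation, fitting a stratified Cox model on survey-style data can be sketched with the lifelines library; the columns and the stratum variable below are placeholders mirroring the factors named in the abstract, and the data are synthetic, not the NDHS records:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame({
    "sex":      rng.integers(0, 2, n),
    "size":     rng.integers(0, 3, n),
    "delivery": rng.integers(0, 2, n),
    "tetanus":  rng.integers(0, 2, n),       # stratum variable
})
df["months"] = rng.exponential(12, n).round(1) + 0.1  # follow-up time
df["died"] = rng.integers(0, 2, n)                    # event indicator

# Stratifying on 'tetanus' lets each level keep its own baseline
# hazard, while sex/size/delivery get pooled regression coefficients
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="died", strata=["tetanus"])
cph.print_summary()
```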
Procedia PDF Downloads 555
17635 Non-Universality in Barkhausen Noise Signatures of Thin Iron Films
Authors: Arnab Roy, P. S. Anil Kumar
Abstract:
We discuss angle-dependent changes in the Barkhausen noise signatures of thin epitaxial Fe films upon altering the angle of the applied field. We observe a sub-critical to critical phase transition in the hysteresis loop of the sample upon increasing the out-of-plane component of the applied field. The observations are discussed in the light of simulations of a 2D Gaussian random field Ising model, with reference to a reducible form of the random anisotropy Ising model.
Keywords: Barkhausen noise, planar Hall effect, random field Ising model, random anisotropy Ising model
Procedia PDF Downloads 387
17634 AER Model: An Integrated Artificial Society Modeling Method for Cloud Manufacturing Service Economic System
Authors: Deyu Zhou, Xiao Xue, Lizhen Cui
Abstract:
With the increasing collaboration among various services and the growing complexity of user demands, more and more factors affect the stable development of the cloud manufacturing service economic system (CMSE). This poses new challenges to the evolution analysis of the CMSE. Many researchers have modeled and analyzed the evolution process of the CMSE from the perspectives of individual learning and internal factors influencing the system, but without considering other important characteristics of the system's individuals (such as heterogeneity and bounded rationality) or the impact of external environmental factors. Therefore, this paper proposes an integrated artificial society model for the cloud manufacturing service economic system, which considers both the characteristics of the system's individuals and the internal and external influencing factors of the system. The model consists of three parts, the agent model, the environment model, and the rules model (Agent-Environment-Rules, AER): (1) the agent model considers important features of the individuals, such as heterogeneity and bounded rationality, based on the adaptive behavior mechanisms of perception, action, and decision-making; (2) the environment model describes the activity space of the individuals (real or virtual environment); (3) the rules model, as the driving force of system evolution, describes the mechanism of the entire system's operation and evolution. Finally, this paper verifies the effectiveness of the AER model through computational and experimental results.
Keywords: cloud manufacturing service economic system (CMSE), AER model, artificial social modeling, integrated framework, computing experiment, agent-based modeling, social networks
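The three-part structure described in the abstract suggests a skeleton like the one below; the class shapes and the toy decision rule are assumptions sketched from the description, not the authors' implementation:

```python
import random

class Agent:
    """Bounded-rational, heterogeneous service provider."""
    def __init__(self, quality, price):
        self.quality, self.price = quality, price   # heterogeneity

    def perceive(self, env):
        return env.demand

    def decide(self, demand):
        # Bounded rationality: noisy partial adjustment toward demand
        self.price += 0.1 * (demand - self.price) + random.gauss(0, 0.05)

class Environment:
    """Activity space shared by the agents."""
    def __init__(self, demand):
        self.demand = demand

class Rules:
    """Drives one evolution step of the whole system."""
    @staticmethod
    def step(agents, env):
        for a in agents:
            a.decide(a.perceive(env))
        env.demand *= random.uniform(0.95, 1.05)    # external shock

agents = [Agent(quality=random.random(), price=1.0) for _ in range(10)]
env = Environment(demand=1.2)
for _ in range(100):
    Rules.step(agents, env)
print(sorted(round(a.price, 2) for a in agents))
```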
Procedia PDF Downloads 78
17633 Improving Post Release Outcomes
Authors: Michael Airton
Abstract:
This case study examines the development of a new service delivery model for prisons that focuses on using NGOs to provide more effective case management and post-release support functions. The model includes the co-design of the service delivery model and innovative commercial agreements that encourage embedded service providers within the prison and continuity of services post release, with outcomes-based payment mechanisms. The collaboration of prison staff, probation and parole officers, and NGOs is critical to the success of the model and its ability to deliver value and positive outcomes in relation to desistance from offending.
Keywords: collaborative service delivery, desistance, non-government organisations, post release support services
Procedia PDF Downloads 389
17632 Application of Adaptive Neural Network Algorithms for Determination of Salt Composition of Waters Using Laser Spectroscopy
Authors: Tatiana A. Dolenko, Sergey A. Burikov, Alexander O. Efitorov, Sergey A. Dolenko
Abstract:
In this study, we perform a comparative analysis of approaches based on neural network algorithms for the effective solution of a complex inverse problem: identifying and determining the individual concentrations of inorganic salts in multicomponent aqueous solutions from their Raman scattering spectra. It is shown that the application of artificial neural networks provides an average accuracy of determination of the concentration of each salt no worse than 0.025 M. The results of a comparative analysis of input data compression methods are presented. It is demonstrated that the use of uniform aggregation of input features decreases the error of determination of the individual concentrations of the components by 16-18% on average.
Keywords: inverse problems, multi-component solutions, neural networks, Raman spectroscopy
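The inverse-problem setup can be sketched as a multi-output regression from spectra to concentrations; the synthetic "spectra", the uniform channel binning used as feature aggregation, and the network size are all assumptions for illustration, not the study's data or architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_samples, n_channels, n_salts = 500, 1024, 3

# Synthetic stand-in for Raman spectra: each salt adds a Gaussian band
conc = rng.uniform(0, 2, size=(n_samples, n_salts))
nu = np.linspace(0, 1, n_channels)
bands = np.stack([np.exp(-((nu - c) / 0.03) ** 2)
                  for c in (0.25, 0.5, 0.75)])
spectra = conc @ bands + 0.01 * rng.standard_normal((n_samples, n_channels))

# Uniform aggregation: average adjacent channels into 64 bins
spectra_binned = spectra.reshape(n_samples, 64, -1).mean(axis=2)

X_tr, X_te, y_tr, y_te = train_test_split(spectra_binned, conc, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
err = np.abs(net.predict(X_te) - y_te).mean()
print(f"mean absolute concentration error: {err:.3f} M")
```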
Procedia PDF Downloads 527
17631 Magnetic Navigation in Underwater Networks
Authors: Kumar Divyendra
Abstract:
Underwater sensor networks (UWSNs) have wide applications in areas such as water quality monitoring and marine wildlife management. A typical UWSN system consists of a set of sensors deployed randomly underwater which communicate with each other using acoustic links; RF communication does not work underwater, and GPS is likewise unavailable. Additionally, autonomous underwater vehicles (AUVs) are deployed to collect data from special nodes called cluster heads (CHs). These CHs aggregate data from their neighboring nodes and forward it to the AUVs over optical links when an AUV is in range. This helps reduce the number of hops covered by data packets and helps conserve energy. We consider a three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater; they attach themselves to the surface using a rod and can only move upwards or downwards using a pump-and-bladder mechanism. We use graph theory concepts to maximize the coverage volume while every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node finds its hop distance from every surface node. We treat these hop distances as coordinates and use them for AUV navigation: an AUV intending to move closer to a node with given coordinates moves hop by hop through the nodes that are closest to it in terms of these coordinates. In the absence of GPS, multiple approaches such as inertial navigation systems (INS), Doppler velocity logs (DVL), and computer-vision-based navigation have been proposed, but these systems have their own drawbacks: INS accumulates error with time, and vision techniques require prior information about the environment. We propose a method that makes use of the Earth's magnetic field values for navigation and combines it with other methods that simultaneously increase the coverage volume of the UWSN. The AUVs are fitted with magnetometers that measure the magnetic intensity (I), horizontal inclination (H), and declination (D). The International Geomagnetic Reference Field (IGRF) is a mathematical model of the Earth's magnetic field, which provides the field values for geographical coordinates on Earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates, and we make use of this model within our work. We combine it with the hop-by-hop movement described earlier so that the AUVs move in a sequence that trains the deep learning predictor as quickly and precisely as possible. We run simulations in MATLAB to prove the effectiveness of our model with respect to other methods described in the literature.
Keywords: clustering, deep learning, network backbone, parallel computing
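The landmark-based coordinate idea (hop distances to each surface node used as a coordinate vector) can be illustrated with a small breadth-first search; the toy topology below is an invented example, not the paper's MATLAB simulation:

```python
from collections import deque

# Hypothetical acoustic connectivity graph; S1 and S2 are surface nodes
links = {
    "S1": ["A"], "S2": ["B"],
    "A": ["S1", "B", "C"], "B": ["S2", "A", "C"], "C": ["A", "B"],
}

def hop_distances(landmark):
    """BFS hop count from one surface landmark to every node."""
    dist, queue = {landmark: 0}, deque([landmark])
    while queue:
        u = queue.popleft()
        for v in links[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

landmarks = ["S1", "S2"]
tables = [hop_distances(s) for s in landmarks]
coords = {n: tuple(t[n] for t in tables) for n in links}
print(coords)   # each node's hop-coordinate vector, e.g. C -> (2, 2)

# An AUV heading for a target picks the neighbor whose coordinate
# vector is closest (L1 distance) to the target's coordinates
def next_hop(current, target):
    return min(links[current],
               key=lambda v: sum(abs(a - b)
                                 for a, b in zip(coords[v], coords[target])))

print(next_hop("S1", "C"))
```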
Procedia PDF Downloads 97
17630 Data Mining Model for Predicting the Status of HIV Patients during Drug Regimen Change
Authors: Ermias A. Tegegn, Million Meshesha
Abstract:
Human Immunodeficiency Virus and Acquired Immunodeficiency Syndrome (HIV/AIDS) is a major cause of death in most African countries, and Ethiopia is one of the most seriously affected countries in sub-Saharan Africa. Previously in Ethiopia, having HIV/AIDS was almost equivalent to a death sentence; with the introduction of antiretroviral therapy (ART), HIV/AIDS has become a chronic but manageable disease. The study focused on a data mining technique to predict the future living status of HIV/AIDS patients at the time of drug regimen change, when patients become toxic to the ART drug combination they are currently taking. The data are taken from the University of Gondar Hospital ART program database. A hybrid methodology is followed to explore the application of data mining to the ART program dataset. Data cleaning, handling of missing values, and data transformation were used for preprocessing. WEKA 3.7.9 data mining tools, classification algorithms, and expertise are utilized as means to address the research problem. Using four different classification algorithms (the J48 classifier, PART rule induction, naïve Bayes, and a neural network) and adjusting their parameters, thirty-two models were built on the pre-processed University of Gondar ART program dataset. The performances of the models were evaluated using the standard metrics of accuracy, precision, recall, and F-measure. The most effective model for predicting the status of HIV patients with drug regimen substitution is a pruned J48 decision tree, with a classification accuracy of 98.01%. This study extracts interesting attributes, such as ever taking Cotrim, ever taking TbRx, CD4 count, age, weight, and gender, to predict the status of drug regimen substitution. The outcome of this study can be used as an assistive tool for clinicians to help them make more appropriate drug regimen substitutions. Future research directions are forwarded to come up with an applicable system in the area of the study.
Keywords: HIV drug regimen, data mining, hybrid methodology, predictive model
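A decision-tree baseline of the kind selected in the study can be sketched with scikit-learn, whose CART tree stands in here for WEKA's J48 (both are pruned decision-tree learners); the synthetic records and the depth-based pruning are assumptions, not the Gondar dataset or the tuned J48:

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(5)
n = 400
X = pd.DataFrame({
    "ever_cotrim": rng.integers(0, 2, n),
    "ever_tbrx":   rng.integers(0, 2, n),
    "cd4_count":   rng.integers(50, 1200, n),
    "age":         rng.integers(1, 70, n),
    "weight":      rng.uniform(5, 90, n).round(1),
    "gender":      rng.integers(0, 2, n),
})
# Synthetic label loosely tied to CD4 count, for demonstration only
y = (X["cd4_count"] + rng.normal(0, 150, n) > 350).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# The depth limit plays the role of J48's pruning here
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, tree.predict(X_te)))
```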
Procedia PDF Downloads 141
17629 Application of an Artificial Neural Network to Determine the Risk of Malignant Tumors from the Images Resulting from the Asymmetry of Internal and External Thermograms of the Mammary Glands
Authors: Amdy Moustapha Drame, Ilya V. Germashev, E. A. Markushevskaya
Abstract:
Among the main problems of medicine is breast cancer, from which a significant number of women around the world die every year. Therefore, the detection of malignant breast tumors is an urgent task. For many years, various technologies have been used to detect these tumors, in particular thermal imaging, in order to determine different stages of breast cancer development. These periodic screening methods are a diagnostic tool for women and may become an alternative to older methods such as mammography. This article proposes a model for the identification of malignant neoplasms of the mammary glands based on the asymmetry of internal and external thermal imaging fields.
Keywords: asymmetry, breast cancer, tumors, deep learning, thermogram, convolutional transformation, classification
Procedia PDF Downloads 59
17628 Speeding up Nonlinear Time History Analysis of Base-Isolated Structures Using a Nonlinear Exponential Model
Authors: Nicolò Vaiana, Giorgio Serino
Abstract:
The nonlinear time history analysis of seismically base-isolated structures can require a significant computational effort when the behavior of each seismic isolator is predicted by the widely used Bouc-Wen differential-equation model. In this paper, a nonlinear exponential model, able to simulate the response of seismic isolation bearings within a relatively large displacement range, is described and adopted in order to reduce the numerical computations and speed up the nonlinear dynamic analysis. Compared to the Bouc-Wen model, the proposed one does not require the numerical solution of a nonlinear differential equation at each time step of the analysis. The seismic response of a 3D base-isolated structure with a lead rubber bearing system subjected to harmonic earthquake excitation is simulated by modeling each isolator using the proposed analytical model. The comparison of the numerical results and computational time with those obtained by modeling the lead rubber bearings using the Bouc-Wen model demonstrates the good accuracy of the proposed model and its capability to significantly reduce the computational effort of the analysis.
Keywords: base isolation, computational efficiency, nonlinear exponential model, nonlinear time history analysis
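To see why the Bouc-Wen reference model is costly, note that it requires integrating one auxiliary ODE per isolator per time step; a minimal sketch of that integration for an imposed displacement history is given below (standard Bouc-Wen form with assumed parameters, not the paper's exponential model or its case-study values):

```python
import numpy as np

# Standard Bouc-Wen hysteresis variable z(t):
# dz/dt = A*xdot - beta*|xdot|*|z|**(n-1)*z - gamma*xdot*|z|**n
A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0       # assumed model parameters
dt = 1e-3
t = np.arange(0, 4, dt)
x = 0.1 * np.sin(2 * np.pi * t)              # imposed isolator displacement
xdot = np.gradient(x, dt)

z = np.zeros_like(t)
for k in range(len(t) - 1):
    zdot = (A * xdot[k]
            - beta * abs(xdot[k]) * abs(z[k]) ** (n - 1) * z[k]
            - gamma * xdot[k] * abs(z[k]) ** n)
    z[k + 1] = z[k] + dt * zdot              # explicit Euler, one ODE per step

# Restoring force of a bilinear-hysteretic isolator (illustrative values)
k_el, alpha_r = 1.0e3, 0.15
force = alpha_r * k_el * x + (1 - alpha_r) * k_el * z
print(force[:5])
```

An algebraic model such as the exponential one proposed in the paper replaces this per-step integration with a direct function evaluation, which is the source of the reported speed-up.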
Procedia PDF Downloads 382
17627 Complex Network Analysis of Seismicity and Applications to Short-Term Earthquake Forecasting
Authors: Kahlil Fredrick Cui, Marissa Pastor
Abstract:
Earthquakes are complex phenomena, exhibiting complex correlations in space, time, and magnitude. Recently, the concept of complex networks has been used to shed light on the statistical and dynamical characteristics of regional seismicity. In this work, we study the relationships and interactions of seismic regions in Chile, Japan, and the Philippines through weighted and directed complex network analysis. Geographical areas are digitized into cells of fixed dimensions, which in turn become the nodes of the network when an earthquake has occurred within them. Nodes are linked if a correlation exists between them, as determined and measured by a correlation metric. The networks are found to be scale-free, exhibiting power-law behavior in the distributions of their different centrality measures: the in- and out-degree and the in- and out-strength. Evidence is also found of preferential interaction between seismically active regions through their degree-degree correlations, suggesting that seismicity is dictated by the activity of a few active regions. The importance of a seismic region to the overall seismicity is measured using a generalized centrality metric, taken to be an indicator of its activity or passivity. The spatial distribution of earthquake activity indicates the areas where strong earthquakes have occurred in the past, while the passivity distribution points toward the likely locations where an earthquake would occur whenever another one happens elsewhere. Finally, we propose a method that projects the location of the next possible earthquake using the generalized centralities coupled with correlations calculated between the latest earthquakes and a geographical point in the future.
Keywords: complex networks, correlations, earthquake, hazard assessment
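A minimal version of the construction, where grid cells become nodes of a weighted directed graph and node strengths are inspected, can be sketched with networkx; the random event catalog and the crude linking rule (consecutive events) are placeholder assumptions, not the paper's correlation metric:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(6)

# Synthetic catalog: event locations on a 10x10 grid of cells
events = rng.integers(0, 10, size=(500, 2))
cells = [tuple(e) for e in events]

# Link consecutive events: a crude stand-in for a correlation metric
G = nx.DiGraph()
for src, dst in zip(cells[:-1], cells[1:]):
    if G.has_edge(src, dst):
        G[src][dst]["weight"] += 1
    else:
        G.add_edge(src, dst, weight=1)

# In-/out-strength = weighted in-/out-degree of each cell
in_strength = dict(G.in_degree(weight="weight"))
out_strength = dict(G.out_degree(weight="weight"))
hubs = sorted(out_strength, key=out_strength.get, reverse=True)[:5]
print("most active source cells:", hubs)
```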
Procedia PDF Downloads 212
17626 Prediction of Coronary Artery Stenosis Severity Based on Machine Learning Algorithms
Authors: Yu-Jia Jian, Emily Chia-Yu Su, Hui-Ling Hsu, Jian-Jhih Chen
Abstract:
The coronary arteries are the major suppliers of myocardial blood flow. When fat and cholesterol are deposited in the coronary arterial wall, narrowing and stenosis of the artery occur, which may lead to myocardial ischemia and eventually infarction. According to the World Health Organization (WHO), an estimated 740 million people died of coronary heart disease in 2015. According to statistics from the Ministry of Health and Welfare in Taiwan, heart disease (excluding hypertensive diseases) ranked second among the top 10 causes of death from 2013 to 2016, and it still shows a growing trend. According to the American Heart Association (AHA), the risk factors for coronary heart disease include age (> 65 years), sex (with a 2:1 male-to-female ratio), obesity, diabetes, hypertension, hyperlipidemia, smoking, family history, lack of exercise, and more. We collected a dataset of 421 patients from a hospital located in northern Taiwan who received coronary computed tomography (CT) angiography. There were 300 males (71.26%) and 121 females (28.74%), with ages ranging from 24 to 92 years and a mean age of 56.3 years. Prior to coronary CT angiography, basic data of the patients, including age, gender, body mass index (BMI), diastolic blood pressure, systolic blood pressure, diabetes, hypertension, hyperlipidemia, smoking, family history of coronary heart disease, and exercise habits, were collected and used as input variables. The output variable of the prediction module is the degree of coronary artery stenosis. In this study, the dataset was randomly divided into 80% as the training set and 20% as the test set. Four machine learning algorithms, namely logistic regression, stepwise logistic regression, neural network, and decision tree, were incorporated to generate prediction results. We used the area under the curve (AUC) and accuracy (Acc.) to compare the four models: the best model is the neural network, followed by stepwise logistic regression, decision tree, and logistic regression, with 0.68 / 79%, 0.68 / 74%, 0.65 / 78%, and 0.65 / 74%, respectively. The sensitivity of the neural network was 27.3% and its specificity 90.8%; stepwise logistic regression had 18.2% sensitivity and 92.3% specificity; the decision tree had 13.6% sensitivity and 100% specificity; and logistic regression had 27.3% sensitivity and 89.2% specificity. Based on the results of this study, we hope to improve the accuracy in the future by refining the model parameters or other methods, and to solve the problem of low sensitivity by adjusting the imbalanced proportion of positive and negative data.
Keywords: decision support, computed tomography, coronary artery, machine learning
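A baseline version of this pipeline (train/test split, a classifier, and AUC/accuracy/sensitivity/specificity reporting) is sketched below with scikit-learn on synthetic records; the features echo the abstract's risk factors, but the data, the model settings, and the class-weighting shown as one remedy for imbalance are all invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix

rng = np.random.default_rng(7)
n = 421
X = np.column_stack([
    rng.integers(24, 93, n),          # age
    rng.integers(0, 2, n),            # sex
    rng.uniform(17, 35, n),           # BMI
    rng.integers(0, 2, n),            # diabetes
    rng.integers(0, 2, n),            # hypertension
    rng.integers(0, 2, n),            # smoking
])
y = (rng.random(n) < 0.25).astype(int)   # stenosis label (imbalanced)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
# class_weight="balanced" is one way to address the imbalance the
# authors mention as a cause of low sensitivity
clf = LogisticRegression(max_iter=1000, class_weight="balanced")
clf.fit(X_tr, y_tr)

prob = clf.predict_proba(X_te)[:, 1]
pred = clf.predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print(f"AUC={roc_auc_score(y_te, prob):.2f}  Acc={accuracy_score(y_te, pred):.2f}")
print(f"sensitivity={tp/(tp+fn):.2f}  specificity={tn/(tn+fp):.2f}")
```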
Procedia PDF Downloads 228
17625 Optimal Location of the I/O Point in the Parking System
Authors: Jing Zhang, Jie Chen
Abstract:
In this paper, we deal with the optimal I/O point location in an automated parking system, in which the S/R machine (storage and retrieval machine) travels independently in the vertical and horizontal directions. Based on the characteristics of the parking system and the basic principles of AS/RS (automated storage and retrieval systems), we obtain a continuous model in units of time. For the single-command cycle under the randomized storage policy, we calculate the probability density function of the system travel time and thus develop the travel time model, and we confirm that the travel time model performs well by comparing it with the discrete case. Finally, we establish the optimal model by minimizing the expected travel time, and it is shown that the optimal location of the I/O point is at the middle of the upper left-hand corner.
Keywords: parking system, optimal location, response time, S/R machine
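Because the S/R machine moves independently in the two directions, the cycle time is governed by the slower axis (Chebyshev travel). Under that assumption and uniformly random storage locations, the expected single-command travel time can be estimated and minimized over candidate I/O positions numerically, as in the sketch below; the rack dimensions, speeds, and boundary search are assumptions, not the paper's closed-form analysis:

```python
import numpy as np

rng = np.random.default_rng(8)
W, H = 60.0, 20.0            # rack width and height (m), assumed
vx, vy = 2.0, 1.0            # horizontal / vertical speeds (m/s), assumed

# Uniformly random storage locations (randomized storage policy)
pts = rng.uniform([0, 0], [W, H], size=(100_000, 2))

def expected_cycle_time(io):
    """Monte Carlo E[travel time] for a single-command cycle
    (I/O -> location -> I/O) with independent x/y motion."""
    t = np.maximum(np.abs(pts[:, 0] - io[0]) / vx,
                   np.abs(pts[:, 1] - io[1]) / vy)
    return 2 * t.mean()

# Search candidate I/O points along the left and bottom rack edges
candidates = ([(0.0, y) for y in np.linspace(0, H, 41)]
              + [(x, 0.0) for x in np.linspace(0, W, 41)])
best = min(candidates, key=expected_cycle_time)
print("best I/O point:", best, "E[T] =", round(expected_cycle_time(best), 2))
```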
Procedia PDF Downloads 408
17624 Research of Applicable Ground Reinforcement Method in Double-Deck Tunnel Junction
Authors: SKhan Park, Seok Jin Lee, Jong Sun Kim, Jun Ho Lee, Bong Chan Kim
Abstract:
Because of the large economic losses caused by traffic congestion in metropolitan areas, various studies on underground network design and construction techniques have been performed in developed countries. In Korea, a study has been performed to develop a versatile double-deck deep-tunnel model. This paper introduces the development of a ground reinforcement method to enable safe tunnel construction in weakened pillar sections, such as tunnel junctions. An applicable ground reinforcement method for the weakened section is proposed, and it is expected that the method will be verified by field application tests.
Keywords: double-deck tunnel, ground reinforcement, tunnel construction, weakened pillar section
Procedia PDF Downloads 406
17623 A Non-Linear Eddy Viscosity Model for Turbulent Natural Convection in Geophysical Flows
Authors: J. P. Panda, K. Sasmal, H. V. Warrior
Abstract:
Eddy viscosity models in turbulence modeling can be classified mainly as linear and nonlinear models. Linear formulations are simple and require fewer computational resources but have the disadvantage that they cannot predict the actual flow pattern in complex geophysical flows, where streamline curvature and swirling motion are predominant. A constitutive equation of Reynolds stress anisotropy is adopted for the formulation of eddy viscosity, including all the possible higher-order terms quadratic in the mean velocity gradients, and a simplified model is developed for actual oceanic flows where only the vertical velocity gradients are important. The new model is incorporated into the one-dimensional General Ocean Turbulence Model (GOTM). Two realistic oceanic test cases (OWS Papa and FLEX'76) have been investigated. The new model predictions match well with the observational data and are better than the predictions of the two-equation k-ε model. The proposed model can be easily incorporated into the three-dimensional Princeton Ocean Model (POM) to simulate a wide range of oceanic processes. Practically, this model can be implemented in coastal regions, where transverse shear induces higher vorticity, and for the prediction of flow in estuaries and lakes, where the depth is comparatively small. The model predictions of marine turbulence and other related data (e.g., sea surface temperature, surface heat flux, and vertical temperature profiles) can be utilized in short-term ocean and climate forecasting and warning systems.
Keywords: eddy viscosity, turbulence modeling, GOTM, CFD
Procedia PDF Downloads 200
17622 Refined Edge Detection Network
Authors: Omar Elharrouss, Youssef Hmamouche, Assia Kamal Idrissi, Btissam El Khamlichi, Amal El Fallah-Seghrouchni
Abstract:
Edge detection is one of the most challenging tasks in computer vision, due to the complexity of detecting edges or boundaries in real-world images that contain objects of different types and scales, like trees and buildings, as well as various backgrounds. It is also a key task for many computer vision applications. Using a set of backbones as well as attention modules, deep-learning-based methods have improved edge detection compared with traditional methods like Sobel and Canny. However, images of complex scenes still represent a challenge for these methods, and the edges detected by existing approaches suffer from unrefined results, with output images containing many erroneous edges. To overcome this, in this paper, a refined edge detection network (RED-Net) is proposed using the mechanism of residual learning. By maintaining the high resolution of edges during the training process and conserving the resolution of the edge image through the network stages, we connect the pooling outputs at each stage with the output of the previous layer. Also, after each layer, we use an affined batch normalization layer as an erosion operation for the homogeneous regions in the image. The proposed method is evaluated on the most challenging datasets, including BSDS500, NYUD, and Multicue. The obtained results outperform existing edge detection networks in terms of performance metrics and quality of the output images.
Keywords: edge detection, convolutional neural networks, deep learning, scale-representation, backbone
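As a generic illustration of the residual-learning mechanism invoked here (not the RED-Net architecture itself, which the abstract does not fully specify), a standard residual convolution block with batch normalization feeding a one-channel edge-map head looks like this in PyTorch:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv-BN-ReLU x2 with an identity skip: the block learns a
    residual correction to its input rather than a full mapping."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))   # skip connection

# Toy edge-map head: residual features -> 1-channel edge probability
net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1),
    ResidualBlock(32),
    ResidualBlock(32),
    nn.Conv2d(32, 1, 1),
    nn.Sigmoid(),
)
edges = net(torch.randn(1, 3, 64, 64))
print(edges.shape)   # torch.Size([1, 1, 64, 64])
```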
Procedia PDF Downloads 102
17621 Improve Student Performance Prediction Using Majority Vote Ensemble Model for Higher Education
Authors: Wade Ghribi, Abdelmoty M. Ahmed, Ahmed Said Badawy, Belgacem Bouallegue
Abstract:
In higher education institutions, the most pressing priority is to improve student performance and retention. Large volumes of student data are used in educational data mining techniques to find new hidden information in students' learning behavior, particularly to uncover early symptoms of at-risk pupils. On the other hand, data with noise, outliers, and irrelevant information may lead to incorrect conclusions. By identifying the features of students' data that have the potential to improve prediction results, comparing and identifying the most appropriate ensemble learning technique after preprocessing the data, and optimizing the hyperparameters, this paper aims to develop a reliable student performance prediction model for higher education institutions. Data were gathered from two different systems, a student information system and an e-learning system, for undergraduate students in the College of Computer Science of a Saudi Arabian state university; the cases of 4413 students were used in this article. The process includes data collection, data integration, data preprocessing (such as cleaning, normalization, and transformation), feature selection, pattern extraction, and, finally, model optimization and assessment. Random forest, bagging, stacking, majority vote, and two types of boosting techniques, AdaBoost and XGBoost, are the ensemble learning approaches, whereas decision tree, support vector machine, and artificial neural network are the supervised learning techniques. Hyperparameters of the ensemble learning systems are fine-tuned to provide enhanced performance and optimal output. The findings imply that combining features of students' behavior from the e-learning and student information systems using majority vote produces better outcomes than the other ensemble techniques.
Keywords: educational data mining, student performance prediction, e-learning, classification, ensemble learning, higher education
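A majority-vote ensemble of the kind compared in the study can be sketched with scikit-learn's VotingClassifier; the base learners and synthetic data below are placeholders for the paper's tuned setup, not its actual configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for merged e-learning + student-information features
X, y = make_classification(n_samples=4413, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

vote = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=6, random_state=0)),
        ("svm", SVC(random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="hard",     # majority vote over the base learners' labels
)
vote.fit(X_tr, y_tr)
print("majority-vote accuracy:", accuracy_score(y_te, vote.predict(X_te)))
```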
Procedia PDF Downloads 105
17620 Cooperative Coevolution for Neuro-Evolution of Feed Forward Networks for Time Series Prediction Using Hidden Neuron Connections
Authors: Ravneil Nand
Abstract:
Cooperative coevolution uses problem decomposition methods to solve a larger problem. Problem decomposition breaks the larger problem down into a number of smaller sub-problems, depending on the method. Different problem decomposition methods have their own strengths and limitations, depending on the neural network used and the application problem. In this paper, we introduce a new problem decomposition method known as hidden-neuron-level decomposition (HNL). The HNL method competes with established problem decomposition methods in time series prediction. The results show that the proposed approach improves the results on some benchmark data sets when compared to the standalone method, and it has competitive results when compared to methods from the literature.
Keywords: cooperative coevolution, feed forward network, problem decomposition, neuron, synapse
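To make the idea of neuron-level decomposition concrete, the sketch below groups a one-hidden-layer network's parameters into one subcomponent per hidden neuron (its incoming weights, bias, and outgoing weights), which a cooperative coevolution loop would evolve as separate subpopulations; this is a generic illustration of the idea, not the paper's HNL algorithm:

```python
import numpy as np

n_in, n_hidden, n_out = 4, 6, 1

def decompose(n_in, n_hidden, n_out):
    """One subcomponent per hidden neuron: its incoming weights,
    bias, and outgoing weights, each a candidate for its own
    evolutionary subpopulation."""
    size = n_in + 1 + n_out
    return [np.random.randn(size) for _ in range(n_hidden)]

def assemble(subcomponents):
    """Rebuild full weight matrices from the per-neuron pieces for
    fitness evaluation (the cooperative step)."""
    W1 = np.stack([s[:n_in] for s in subcomponents])          # hidden x in
    b1 = np.array([s[n_in] for s in subcomponents])           # hidden
    W2 = np.stack([s[n_in + 1:] for s in subcomponents]).T    # out x hidden
    return W1, b1, W2

def forward(x, W1, b1, W2):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h

subs = decompose(n_in, n_hidden, n_out)
print(forward(np.ones(n_in), *assemble(subs)))
```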
Procedia PDF Downloads 333