Search results for: Cox proportional hazard regression
3975 Implementation of a Non-Poissonian Model in a Low-Seismicity Area
Authors: Ludivine Saint-Mard, Masato Nakajima, Gloria Senfaute
Abstract:
In areas with low to moderate seismicity, probabilistic seismic hazard analysis frequently uses a Poisson approach, which assumes independence of events in time and space to determine the annual probability of earthquake occurrence. Nevertheless, countries with high seismic rates, such as Japan, frequently use non-Poissonian models, which assume that the occurrence of the next earthquake depends on the date of the previous one. The objective of this paper is to apply a non-Poissonian model in a region of low to moderate seismicity in order to get feedback on the following questions: can we overcome the lack of data to determine some key parameters, and can we deal with the uncertainties well enough to apply this methodology widely in an industrial context? The Brownian Passage Time model was applied to a fault located in France. We conclude that even if the lack of data can be overcome with some calculations, the amount of uncertainty and the number of scenarios lead to numerous branches in the PSHA, making this method difficult to apply at a large scale across low to moderate seismicity areas and in an industrial context.
Keywords: probabilistic seismic hazard, non-poissonian model, earthquake occurrence, low seismicity
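For readers unfamiliar with the renewal model named in the abstract, the sketch below shows how a Brownian Passage Time (inverse Gaussian) distribution yields the conditional probability of the next event on a fault; the mean recurrence interval, aperiodicity, elapsed time and exposure window are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.integrate import quad

def bpt_pdf(t, mu, alpha):
    # Brownian Passage Time (inverse Gaussian) density with mean recurrence
    # interval mu and aperiodicity alpha
    return np.sqrt(mu / (2.0 * np.pi * alpha**2 * t**3)) * \
           np.exp(-(t - mu)**2 / (2.0 * mu * alpha**2 * t))

def conditional_probability(t_elapsed, dt, mu, alpha):
    # P(next event within dt years | no event during the t_elapsed years since the last one)
    num, _ = quad(bpt_pdf, t_elapsed, t_elapsed + dt, args=(mu, alpha))
    den, _ = quad(bpt_pdf, t_elapsed, np.inf, args=(mu, alpha))
    return num / den

# Illustrative values only: 1,000-year mean recurrence, aperiodicity 0.5,
# 600 years elapsed since the last event, 50-year exposure window.
print(conditional_probability(600.0, 50.0, mu=1000.0, alpha=0.5))
```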
Procedia PDF Downloads 65
3974 Instability Index Method and Logistic Regression to Assess Landslide Susceptibility in County Route 89, Taiwan
Authors: Y. H. Wu, Ji-Yuan Lin, Yu-Ming Liou
Abstract:
This study aims to produce a landslide susceptibility map of County Route 89 at Ren-Ai Township in Nantou County using the Instability Index Method and logistic regression. Seven susceptibility factors, including slope angle, aspect, elevation, distance to fold, distance to river, distance to road and accumulated rainfall, were obtained by GIS based on the Typhoon Toraji landslide area identified by the Industrial Technology Research Institute in 2001. The landslide percentage of each factor is calculated to acquire the weights, and the grid is graded by means of the Instability Index Method. In this study, landslide susceptibility is classified into four grades: high, medium high, medium low and low, in order to determine the advantages and disadvantages of the two models. The precision of each model is verified by the classification error matrix and the SRC curve. The results suggest that the logistic regression model is preferable to the instability index in the assessment of landslide susceptibility and is suitable for landslide prediction and precaution in this area in the future.
Keywords: instability index method, logistic regression, landslide susceptibility, SRC curve
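As a rough illustration of the logistic-regression half of the workflow described above, the following sketch fits a susceptibility model on seven factor columns and reports the classification error matrix and an area-under-curve score; the file name, column names and landslide labels are assumptions, not the study's data.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical grid-cell table; column names follow the seven factors in the abstract.
df = pd.read_csv("grid_cells.csv")  # assumed file
factors = ["slope_angle", "aspect", "elevation", "dist_to_fold",
           "dist_to_river", "dist_to_road", "accumulated_rainfall"]
X = StandardScaler().fit_transform(df[factors])
y = df["landslide"]  # 1 = landslide cell, 0 = stable cell

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print(confusion_matrix(y_te, model.predict(X_te)))           # classification error matrix
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```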
Procedia PDF Downloads 292
3973 How to Reach Net Zero Emissions? On the Permissibility of Negative Emission Technologies and the Danger of Moral Hazards
Authors: Hanna Schübel, Ivo Wallimann-Helmer
Abstract:
In order to reach the goal of the Paris Agreement to not overshoot 1.5°C of warming above pre-industrial levels, various countries, including the UK and Switzerland, have committed themselves to net zero emissions by 2050. The employment of negative emission technologies (NETs) is very likely going to be necessary for meeting these national objectives as well as other internationally agreed climate targets. NETs are methods of removing carbon from the atmosphere and are thus a means of addressing climate change. They range from afforestation to technological measures such as direct air capture and carbon storage (DACCS), where CO2 is captured from the air and stored underground. As with all so-called geoengineering technologies, the development and deployment of NETs are often subject to moral hazard arguments. As these technologies could be perceived as an alternative to mitigation efforts, so the argument goes, they are potentially a dangerous distraction from the main target of mitigating emissions. We think that this is a dangerous argument to make, as it may hinder the development of NETs, which are an essential element of net zero emission targets. In this paper we argue that the moral hazard argument is only problematic if we do not reflect upon which levels of emissions are at stake in meeting net zero emissions. In response to the moral hazard argument, we develop an account of which levels of emissions in given societies should be mitigated and not be the target of NETs, and which levels of emissions can legitimately be a target of NETs. For this purpose, we define four different levels of emissions: the current level of individual emissions, the level individuals emit in order to appear in public without shame, the level of a fair share of individual emissions in the global budget, and finally the baseline of net zero emissions. At each level of emissions there are different subjects to be assigned responsibilities if societies and/or individuals are committed to the target of net zero emissions. We argue that emissions within one’s fair share do not demand individual mitigation efforts. The same holds with regard to individuals and the baseline level of emissions necessary to appear in public in their societies without shame. Individuals are only under a duty to reduce their emissions if they exceed this baseline level. This is different for whole societies. Societies in which appearing in public without shame demands more emissions than the individual fair share are under a duty to foster emission reductions and may not legitimately achieve those reductions by introducing NETs. NETs are legitimate for reducing emissions only below the level of fair shares and for reaching net zero emissions. Since access to the NETs needed to achieve net zero emissions demands technology not affordable to individuals, there is also no full individual responsibility to achieve net zero emissions. This is mainly a responsibility of societies as a whole.
Keywords: climate change, mitigation, moral hazard, negative emission technologies, responsibility
Procedia PDF Downloads 122
3972 Factors Related to Protective Behavior on Indoor Pollution among Pregnant Women in Nakhon Pathom Province, Thailand
Authors: Yuri Teraoka, Cheerawit Rattanapan, Aroonsri Mongkolchati
Abstract:
This cross-sectional analytic study was carried out to determine factors related to protective behavior on indoor pollution among pregnant women in Nakhon Pathom province, Thailand. A total of 319 pregnant women were enrolled at three antenatal care clinics in community hospitals. Data were collected by simple random sampling from April 2015 to May 2015 using a structured self-administered questionnaire delivered by well-trained research assistants. The results showed that around 73% of the pregnant women exhibited a low level of protective behavior on indoor pollution. Chi-square tests and multiple logistic regression were used to examine the factors associated with protective behavior on indoor pollution. After adjusting for confounding factors, this study found that tobacco smoking before pregnancy (AOR=2.15, 95% CI: 0.78-5.95) and low environmental health hazard (AOR=1.94, 95% CI: 1.09-3.49) were significant factors related to protective behavior on indoor pollution among pregnant women (p-value < 0.05). In conclusion, this study suggests that an environmental health education campaign and an environmental implementation program among pregnant women are needed.
Keywords: Thailand, environmental health, protective behavior, pregnant women
Procedia PDF Downloads 367
3971 On the Development of a Homogenized Earthquake Catalogue for Northern Algeria
Authors: I. Grigoratos, R. Monteiro
Abstract:
Regions with a significant percentage of non-seismically designed buildings and reduced urban planning are particularly vulnerable to natural hazards. In this context, the project ‘Improved Tools for Disaster Risk Mitigation in Algeria’ (ITERATE) aims at seismic risk mitigation in Algeria. Past earthquakes in North Algeria caused extensive damage, e.g. the El Asnam 1980 moment magnitude (Mw) 7.1 and Boumerdes 2003 Mw 6.8 earthquakes. This paper addresses a number of proposed developments and considerations made towards further improvement of the seismic hazard component. Specifically, an updated earthquake catalogue (until year 2018) is compiled, and new conversion equations to moment magnitude are introduced. Furthermore, a network-based method for the estimation of the spatial and temporal distribution of the minimum magnitude of completeness is applied. We found relatively large values for Mc, due to the sparse network, and a nonlinear trend between Mw and body-wave (mb) or local magnitude (ML), which are the most common scales reported in the region. Lastly, the resulting b-value of the Gutenberg-Richter distribution is sensitive to the declustering method.
Keywords: conversion equation, magnitude of completeness, seismic events, seismic hazard
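The b-value mentioned at the end of the abstract is commonly estimated with Aki's maximum-likelihood formula for events above the completeness magnitude; a minimal sketch follows, using an illustrative magnitude list rather than the ITERATE catalogue.

```python
import numpy as np

def b_value_aki(magnitudes, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b-value for events at or above completeness mc.
    dm is the magnitude binning width (half-bin correction)."""
    m = np.asarray(magnitudes)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Illustrative catalogue magnitudes (Mw), not the study's data set
mags = np.array([3.1, 3.4, 3.2, 4.0, 3.6, 5.1, 3.3, 3.8, 4.4, 3.5, 6.8, 3.2])
print("b =", round(b_value_aki(mags, mc=3.0), 2))
```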
Procedia PDF Downloads 166
3970 Consumer Health Risk Assessment from Some Heavy Metal Bioaccumulation in Common Carp (Cyprinus Carpio) from Lake Koka, Ethiopia
Authors: Mathewos Temesgen, Lemi Geleta
Abstract:
Lake Koka is one of the Ethiopian Central Rift Valley lakes, where the discharge of domestic, agricultural, and industrial waste from the nearby industrial and agro-industrial activities is very common. The aim of this research was to assess heavy metal bioaccumulation in the edible parts of common carp (Cyprinus carpio) in Lake Koka and the health risks associated with dietary intake of the fish. Three sampling sites were selected randomly for primary data collection. Physicochemical parameters (pH, total dissolved solids, dissolved oxygen and electrical conductivity) were measured in-situ. Four heavy metals (Cd, Cr, Pb, and Zn) in water and their bioaccumulation in the edible parts of the fish were analyzed with flame atomic absorption spectrometry. The mean values of TDS, EC, DO and pH of the lake water were 458.1 mg/L, 905.7 µS/cm, 7.36 mg/L, and 7.9, respectively. The mean concentrations of Zn, Cr, and Cd in the edible part of the fish were 0.18 mg/kg, ND-0.24 mg/kg, and ND-0.03 mg/kg, respectively. Pb was, however, not detected. The amount of Cr in the examined fish muscle was above the level set by FAO, and the accumulation of the metals showed marked differences between sampling sites (p<0.05). The concentrations of Cd and Pb were below the maximum permissible limit. The results also indicated that Cr has the highest transfer factor value and Zn the lowest. The carcinogenic hazard ratio values were below the threshold value (<1) for the edible parts of the fish. The estimated weekly intake of heavy metals from fish muscle ranked as Cr>Zn>Cd, but the values were lower than the reference dose limits for the metals. The carcinogenic risk values indicated a low health risk due to the intake of individual metals from fish. Furthermore, the hazard index of the edible part of the fish was less than unity. Generally, the water quality does not pose a risk to the survival and reproduction of fish, and the heavy metal contents in the edible parts of the fish exhibited low carcinogenic risk through the food chain.
Keywords: bio-accumulation, cyprinus carpio, hazard index, heavy metals, Lake Koka
Procedia PDF Downloads 114
3969 Regret-Regression for Multi-Armed Bandit Problem
Authors: Deyadeen Ali Alshibani
Abstract:
In the literature, the multi-armed bandit problem is treated as a statistical decision model of an agent trying to optimize his decisions while improving his information at the same time. There are several algorithms and models for this problem, along with their applications. In this paper, we evaluate regret-regression by comparing it with the Q-learning method. A simulation on the determination of an optimal treatment regime is presented in detail.
Keywords: optimal, bandit problem, optimization, dynamic programming
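The abstract does not give implementation details, so the sketch below only illustrates the underlying notion of cumulative regret on a simulated multi-armed bandit with an epsilon-greedy agent; it is not the authors' regret-regression estimator, and the reward probabilities are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.8])     # assumed Bernoulli reward probabilities per arm
n_arms, horizon, eps = len(true_means), 5000, 0.1

counts = np.zeros(n_arms)
values = np.zeros(n_arms)                  # running mean reward per arm
regret = 0.0

for t in range(horizon):
    if rng.random() < eps:
        arm = int(rng.integers(n_arms))    # explore
    else:
        arm = int(np.argmax(values))       # exploit the current estimates
    reward = rng.random() < true_means[arm]
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]
    regret += true_means.max() - true_means[arm]   # expected loss of this pull

print("cumulative regret after", horizon, "pulls:", round(regret, 1))
```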
Procedia PDF Downloads 453
3968 The Strengths and Limitations of the Statistical Modeling of Complex Social Phenomenon: Focusing on SEM, Path Analysis, or Multiple Regression Models
Authors: Jihye Jeon
Abstract:
This paper analyzes the conceptual framework of three statistical methods: multiple regression, path analysis, and structural equation models. When establishing a research model for the statistical modeling of complex social phenomena, it is important to know the strengths and limitations of these three statistical models. This study explores the character, strengths, and limitations of each kind of modeling and suggests some strategies for accurately explaining or predicting the causal relationships among variables. In particular, common mistakes in research modeling in studies of depression or mental health are discussed.
Keywords: multiple regression, path analysis, structural equation models, statistical modeling, social and psychological phenomenon
Procedia PDF Downloads 658
3967 QSRR Analysis of 17-Picolyl and 17-Picolinylidene Androstane Derivatives Based on Partial Least Squares and Principal Component Regression
Authors: Sanja Podunavac-Kuzmanović, Strahinja Kovačević, Lidija Jevrić, Evgenija Djurendić, Jovana Ajduković
Abstract:
There are several methods for determining the lipophilicity of biologically active compounds; however, chromatography has been shown to be a very suitable method for this purpose. Chromatographic (C18-RP-HPLC) analysis of a series of 24 17-picolyl and 17-picolinylidene androstane derivatives was carried out. The obtained retention indices (log k, methanol (90%) / water (10%)) were correlated with calculated physicochemical and lipophilicity descriptors. The QSRR analysis was carried out applying principal component regression (PCR) and partial least squares regression (PLS). The PCR and PLS models were selected on the basis of the highest variance and the lowest root mean square error of cross-validation. The obtained PCR and PLS models successfully correlate the calculated molecular descriptors with the log k parameter, indicating the significance of the lipophilicity of the compounds in the chromatographic process. On the basis of the obtained results it can be concluded that the obtained log k parameters of the analyzed androstane derivatives can be considered as their chromatographic lipophilicity. These results are part of project No. 114-451-347/2015-02, financially supported by the Provincial Secretariat for Science and Technological Development of Vojvodina and CMST COST Action CM1105.
Keywords: androstane derivatives, chromatography, molecular structure, principal component regression, partial least squares regression
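A minimal sketch of the PCR/PLS comparison described above, selecting between the two models by root mean square error of cross-validation; the descriptor and log k files and the number of components are assumptions, not the study's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: physicochemical / lipophilicity descriptors, y: measured log k (assumed files)
X = np.loadtxt("descriptors.csv", delimiter=",")   # 24 derivatives x p descriptors
y = np.loadtxt("logk.csv", delimiter=",")

pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
pls = make_pipeline(StandardScaler(), PLSRegression(n_components=3))

# Select the model with the lowest root mean square error of cross-validation (RMSECV)
for name, model in [("PCR", pcr), ("PLS", pls)]:
    mse = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error").mean()
    print(name, "RMSECV =", round(float(np.sqrt(mse)), 3))
```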
Procedia PDF Downloads 277
3966 Detecting Earnings Management via Statistical and Neural Networks Techniques
Authors: Mohammad Namazi, Mohammad Sadeghzadeh Maharluie
Abstract:
Predicting earnings management is vital for capital market participants, financial analysts and managers. The aim of this research is to respond to this query: Is there a significant difference between the regression model and neural network models in predicting earnings management, and which one leads to a superior prediction of it? In approaching this question, a linear regression (LR) model was compared with two neural networks: a Multi-Layer Perceptron (MLP) and a Generalized Regression Neural Network (GRNN). The population of this study includes 94 listed companies in the Tehran Stock Exchange (TSE) market from 2003 to 2011. After the results of all models were acquired, ANOVA was applied to test the hypotheses. In general, the summary of statistical results showed that the precision of the GRNN did not exhibit a significant difference in comparison with the MLP. In addition, the mean squared errors of the MLP and GRNN showed a significant difference from the multivariable LR model. These findings support the notion of nonlinear behavior in earnings management. Therefore, it is more appropriate for capital market participants to analyze earnings management based upon neural network techniques, and not to adopt linear regression models.
Keywords: earnings management, generalized linear regression, neural networks multi-layer perceptron, Tehran stock exchange
Procedia PDF Downloads 424
3965 Minimizing the Impact of Covariate Detection Limit in Logistic Regression
Authors: Shahadut Hossain, Jacek Wesolowski, Zahirul Hoque
Abstract:
In many epidemiological and environmental studies covariate measurements are subject to a detection limit. In most applications, covariate measurements are truncated from below, which is known as left-truncation, because the measuring device used to measure the covariate fails to detect values falling below a certain threshold. In regression analyses, this inflates the bias and the mean squared error (MSE) of the estimators. This paper suggests a response-based regression calibration method to correct the deleterious impact introduced by the covariate detection limit on the estimators of the parameters of a simple logistic regression model. Compared to the maximum likelihood method, the proposed method is computationally simpler, and hence easier to implement. It is robust to violation of the distributional assumption about the covariate of interest. The performance of the proposed method in producing correct inference, compared to the other competing methods, has been investigated through extensive simulations. A real-life application of the method is also shown using data from a population-based case-control study of non-Hodgkin lymphoma.
Keywords: environmental exposure, detection limit, left truncation, bias, ad-hoc substitution
Procedia PDF Downloads 238
3964 Comparative Study of Three Artificial Intelligence Techniques for Rain Domain in Precipitation Forecast
Authors: Nabilah Filzah Mohd Radzuan, Andi Putra, Zalinda Othman, Azuraliza Abu Bakar, Abdul Razak Hamdan
Abstract:
Precipitation forecasts are important to avoid natural disaster incidents, which can cause losses in the affected area. This paper reviews three techniques, logistic regression, decision tree, and random forest, which are used in making precipitation forecasts. Combining these techniques through the vector auto-regression (VAR) model helps in finding the advantages and strengths of each technique in the forecast process. The data set contains variables of the rain domain. Adapting artificial intelligence techniques to the rain domain enables the forecast process to be easier and more systematic for precipitation forecasting.
Keywords: logistic regression, decisions tree, random forest, VAR model
Procedia PDF Downloads 447
3963 Using Seismic and GPS Data for Hazard Estimation in Some Active Regions in Egypt
Authors: Abdel-Monem Sayed Mohamed
Abstract:
Egypt's rapidly growing development is accompanied by an increasing standard of living, particularly in its urban areas. However, there is limited experience in quantifying the sources of risk in Egypt and in designing efficient strategies to ward off the serious impacts of earthquakes. From the historical record and recent instrumental records, there are some seismo-active regions in Egypt where significant earthquakes have occurred in different places. Owing to the special tectonic features of Egypt, the Aswan, Greater Cairo, Red Sea and Sinai Peninsula regions are territories of high seismic risk, which have to be monitored by up-to-date technologies. The investigation and interpretation of the seismic events allow the seismic hazard to be evaluated for disaster prevention and for the safety of densely populated regions and vital national projects such as the High Dam. In addition to the monitoring of recent crustal movements, the most powerful technique of satellite geodesy, GPS, is used, with geodetic networks covering these seismo-active regions. The results from the data sets are compared and combined in order to determine the main characteristics of the deformation and to estimate the hazard for the specified regions. The final compiled output from the seismological and geodetic analysis throws light upon the geodynamic regime of these seismo-active regions and places Aswan and Greater Cairo in the lowest class according to horizontal crustal strain classifications. This work will serve as a basis for the development of so-called catastrophe models and can further be used for catastrophe risk management. It also attempts to evaluate the risk of large catastrophic losses within important regions, including the High Dam, strategic buildings and archaeological sites. Studies on possible earthquake scenarios and losses are a critical issue for decision making in insurance as a part of mitigation measures.
Keywords: b-value, Gumbel distribution, seismic and GPS data, strain parameters
Procedia PDF Downloads 460
3962 Safety of Ports, Harbours, Marine Terminals: Application of Quantitative Risk Assessment
Authors: Dipak Sonawane, Sudarshan Daga, Somesh Gupta
Abstract:
Quantitative risk assessment (QRA) is a precise and consistent approach to defining the likelihood, consequence and severity of a major incident/accident. A variety of hazardous cargoes in bulk, such as hydrocarbons and flammable/toxic chemicals, are handled at various ports. It is well known that most of these operations are hazardous, having the potential to damage property, cause injury or loss of life and, in some cases, threaten environmental damage. In order to ensure adequate safety of life, the environment and property, the application of scientific methods such as QRA is inevitable. By means of these methods, comprehensive hazard identification, risk assessment and appropriate implementation of risk control measures can be carried out. In this paper, the authors, based on their extensive experience in risk analysis for ports and harbours, demonstrate how QRA can be used in practice to minimize and contain risk to tolerable levels. A specific case involving the unloading of hydrocarbon at a port is presented. The exercise provides confidence that the method of QRA, as proposed by the authors, can be used appropriately for the hazard identification and risk assessment of ports and terminals.
Keywords: quantitative risk assessment, hazard assessment, consequence analysis, individual risk, societal risk
Procedia PDF Downloads 80
3961 An Alternative Stratified Cox Model for Correlated Variables in Infant Mortality
Authors: K. A. Adeleke
Abstract:
Often in epidemiological research, introducing a stratified Cox model can account for interactions of some inherent factors with major or noticeable factors. This research work is aimed at modelling correlated variables in infant mortality in the presence of inherent factors affecting the infant survival function. An alternative semiparametric stratified Cox model is proposed with a view to taking care of multilevel factors that interact with others. The model is used as a tool to analyse infant mortality data from the Nigeria Demographic and Health Survey (NDHS), with multilevel factors (tetanus, polio, and breastfeeding) correlated with the main factors (sex, size, and mode of delivery). Asymptotic properties of the estimators are also studied via simulation. The fitted model showed a good fit to the data and performed differently depending on the levels of interaction of the strata variable Z*. Evidence that the baseline hazard functions and regression coefficients are not the same from stratum to stratum provides a gain in information relative to the usual Cox model. Simulation results showed that the present method produced better estimates in terms of bias, lower standard errors, and/or mean squared errors.
Keywords: stratified Cox, semiparametric model, infant mortality, multilevel factors, confounding variables
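A minimal sketch of the stratified Cox idea using the lifelines package, with the multilevel factors from the abstract supplied as strata so that each stratum gets its own baseline hazard; the NDHS file and column names are assumptions, and the authors' alternative semiparametric estimator is not reproduced here.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Assumed NDHS-style child-level records: survival time (months), death indicator,
# main covariates, and the multilevel factors used as strata.
df = pd.read_csv("ndhs_infant.csv")   # hypothetical file

cph = CoxPHFitter()
cph.fit(
    df[["time", "death", "sex", "size", "mode_of_delivery",
        "tetanus", "polio", "breastfeeding"]],
    duration_col="time",
    event_col="death",
    strata=["tetanus", "polio", "breastfeeding"],  # separate baseline hazard per stratum
)
cph.print_summary()   # hazard ratios for sex, size and mode of delivery within strata
```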
Procedia PDF Downloads 558
3960 A Study of User Awareness and Attitudes Towards Civil-ID Authentication in Oman’s Electronic Services
Authors: Raya Al Khayari, Rasha Al Jassim, Muna Al Balushi, Fatma Al Moqbali, Said El Hajjar
Abstract:
This study utilizes linear regression analysis to investigate the correlation between user account passwords and the probability of civil ID exposure, offering statistical insights into civil ID security. The study employs multiple linear regression (MLR) analysis to further investigate the elements that influence consumers’ views of civil ID security. This aims to increase awareness and improve preventive measures. The results obtained from the MLR analysis provide a thorough comprehension and can guide specific educational and awareness campaigns aimed at promoting improved security procedures. In summary, the study’s results offer significant insights for improving existing security measures and developing more efficient tactics to reduce risks related to civil ID security in Oman. By identifying key factors that impact consumers’ perceptions, organizations can tailor their strategies to address vulnerabilities effectively. Additionally, the findings can inform policymakers on potential regulatory changes to enhance civil ID security in the country.
Keywords: civil-id disclosure, awareness, linear regression, multiple regression
Procedia PDF Downloads 60
3959 Policy Implications of Demographic Impacts on COVID-19, Pneumonia, and Influenza Mortality: A Multivariable Regression Approach to Death Toll Reduction
Authors: Saiakhil Chilaka
Abstract:
Understanding the demographic factors that influence mortality from respiratory diseases like COVID-19, pneumonia, and influenza is crucial for informing public health policy. This study utilizes multivariable regression models to assess the relationship between state, sex, and age group on deaths from these diseases using U.S. data from 2020 to 2023. The analysis reveals that age and sex play significant roles in mortality, while state-level variations are minimal. Although the model’s low R-squared values indicate that additional factors are at play, this paper discusses how these findings, in light of recent research, can inform future public health policy, resource allocation, and intervention strategies.
Keywords: COVID-19, multivariable regression, public policy, data science
Procedia PDF Downloads 23
3958 Earthquake Identification to Predict Tsunami in Andalas Island, Indonesia Using Back Propagation Method and Fuzzy TOPSIS Decision Seconder
Authors: Muhamad Aris Burhanudin, Angga Firmansyas, Bagus Jaya Santosa
Abstract:
Earthquakes are natural hazards that can trigger the most dangerous hazard of all, a tsunami. On 26 December 2004, a giant earthquake occurred north-west of Andalas Island. It generated a giant tsunami which struck Sumatra, Bangladesh, India, Sri Lanka, Malaysia and Singapore. More than twenty thousand people died. The occurrence of earthquakes and tsunamis cannot be avoided, but this hazard can be mitigated by earthquake forecasting. Early preparation is the key factor in reducing damages and consequences. We aim to investigate earthquake patterns quantitatively so that the trend can be known. We study earthquakes which occurred on Andalas Island, Indonesia, over the last decade. Andalas is an island with high seismicity, with more than a thousand events occurring in a year, because it lies in the tectonic subduction zone between the Hindia (Indian Ocean) sea plate and the Eurasia plate. A tsunami forecast is needed for mitigation action; thus, a tsunami forecasting method is presented in this work. Neural networks have been used widely in many studies to estimate earthquakes, and it is accepted that earthquakes can be predicted using the backpropagation method. First, an ANN is trained to predict the tsunami of 26 December 2004 using earthquake data recorded before it. After the trained ANN is obtained, we apply it to predict the next earthquake. Not all earthquakes trigger a tsunami; there are certain characteristics of an earthquake that can cause a tsunami, and a wrong decision can cause other problems for society. Therefore, a method is needed to reduce the possibility of a wrong decision. Fuzzy TOPSIS is a statistical method that is widely used as a decision seconder with respect to given parameters. The Fuzzy TOPSIS method can make the best decision on whether an event will cause a tsunami or not. This work combines earthquake prediction using the neural network method with Fuzzy TOPSIS to decide whether the predicted earthquake triggers a tsunami wave or not. The neural network model is capable of capturing non-linear relationships, and Fuzzy TOPSIS is capable of determining the best decision better than other statistical methods in tsunami prediction.
Keywords: earthquake, fuzzy TOPSIS, neural network, tsunami
Procedia PDF Downloads 498
3957 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitude 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The main reason for considering this database stems from the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. The accuracy of the models in predicting intensity measures, the generalization capability of the models for future data, as well as the usability of the models are discussed in the evaluation process. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data is available.
Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
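As a sketch of the non-parametric alternative to linear prediction equations discussed above, the snippet below trains a random forest on magnitude, distance and a site term; the record file and feature names are assumptions, and the random-effects treatment of event and site terms described in the abstract is not reproduced.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Assumed flat file of records: magnitude, hypocentral distance (km), Vs30 (m/s), ln(PGA in g)
data = pd.read_csv("ok_ks_tx_records.csv")   # hypothetical file
X = data[["magnitude", "rhyp_km", "vs30"]]
y = data["ln_pga"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
rf = RandomForestRegressor(n_estimators=500, min_samples_leaf=5, random_state=1)
rf.fit(X_tr, y_tr)

print("R2 on held-out records:", round(r2_score(y_te, rf.predict(X_te)), 3))
print("feature importances:", dict(zip(X.columns, rf.feature_importances_.round(3))))
```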
Procedia PDF Downloads 123
3956 Urban Energy Demand Modelling: Spatial Analysis Approach
Authors: Hung-Chu Chen, Han Qi, Bauke de Vries
Abstract:
Energy consumption in the urban environment has attracted numerous studies in recent decades. However, it is comparatively rare to find work that investigates 3D spatial analysis for urban energy demand modelling. In order to analyze the spatial correlation between urban morphology and energy demand comprehensively, this paper investigates their relation using spatial regression tools, namely the ordinary least squares (OLS) regression and geographically weighted regression (GWR) models. The Normalized Difference Built-up Index (NDBI), Normalized Difference Vegetation Index (NDVI), and building volume describe urban morphology and act as independent variables of the Energy-land use (E-L) model. NDBI and NDVI are used as indices to describe five types of land use: urban area (U), open space (O), artificial green area (G), natural green area (V), and water body (W). Annual electricity, gas demand and energy demand are the dependent variables of the E-L model. The analytical results of the E-L model reveal that energy demand and urban morphology are closely connected; the possible causes and practical uses are discussed. In addition, the spatial analysis methods OLS and GWR are compared.
Keywords: energy demand model, geographically weighted regression, normalized difference built-up index, normalized difference vegetation index, spatial statistics
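The core of GWR is a separate weighted least-squares fit at every location, with weights taken from a distance kernel; the self-contained sketch below illustrates that idea on synthetic data (a dedicated package such as mgwr would normally be used in practice), so the coordinates, predictors and bandwidth are all assumptions.

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """Fit y = X @ beta locally at every observation location.
    coords: (n, 2) x/y locations; X: (n, p) design matrix incl. intercept; y: (n,)."""
    n = len(y)
    betas = np.empty((n, X.shape[1]))
    for i in range(n):
        d2 = np.sum((coords - coords[i])**2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth**2))          # Gaussian kernel weights
        XtW = X.T * w                                   # equivalent to X.T @ diag(w)
        betas[i] = np.linalg.solve(XtW @ X, XtW @ y)    # local weighted least squares
    return betas

# Toy example: demand vs NDBI, NDVI and building volume at 200 assumed grid cells
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
X = np.column_stack([np.ones(200), rng.normal(size=(200, 3))])  # intercept + 3 predictors
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=200)

local_betas = gwr_coefficients(coords, X, y, bandwidth=2.0)
print("spatial range of the first coefficient:",
      local_betas[:, 1].min(), "to", local_betas[:, 1].max())
```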
Procedia PDF Downloads 149
3955 The Association between Acupuncture Treatment and a Decreased Risk of Irritable Bowel Syndrome in Patients with Depression
Authors: Greg Zimmerman
Abstract:
Background: Major depression is a common illness that affects millions of people globally. It is the leading cause of disability and is projected to become the number one cause of the global burden of disease by 2030. Many of those who suffer from depression also suffer from Irritable Bowel Syndrome (IBS). Acupuncture has been shown to help depression. The aim of this study was to investigate the effectiveness of acupuncture in reducing the risk of IBS in patients with depression. Methods: We enrolled patients diagnosed with depression through the Taiwanese National Health Insurance Research Database (NHIRD). Propensity score matching was used to match equal numbers (n=32971) of the acupuncture cohort and no-acupuncture cohort based on characteristics including sex, age, baseline comorbidity, and medication. The Cox regression model was used to compare the hazard ratios (HRs) of IBS in the two cohorts. Results: The basic characteristics of the two groups were similar. The cumulative incidence of IBS was significantly lower in the acupuncture cohort than in the no-acupuncture cohort (Log-rank test, p<0.001). Conclusion: The results provided real-world evidence that acupuncture may have a beneficial effect on IBS risk reduction in patients with depression.
Keywords: acupuncture, depression, irritable bowel syndrome, national health insurance research database, real-world evidence
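A hedged sketch of the matching-then-Cox workflow the abstract describes: a logistic propensity score, 1:1 nearest-neighbour matching, then a Cox model for the IBS hazard ratio; the NHIRD-style file and column names are assumptions, and the matching here is with replacement for simplicity.

```python
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("depression_cohort.csv")        # hypothetical NHIRD extract
covs = ["sex", "age", "comorbidity_score", "medication"]

# 1. Propensity score for receiving acupuncture
ps = LogisticRegression(max_iter=1000).fit(df[covs], df["acupuncture"])
df["pscore"] = ps.predict_proba(df[covs])[:, 1]

# 2. 1:1 nearest-neighbour matching on the propensity score
treated = df[df["acupuncture"] == 1]
control = df[df["acupuncture"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]]).reset_index(drop=True)

# 3. Cox model: follow-up time to IBS, event indicator, exposure and covariates
cph = CoxPHFitter().fit(matched[["followup_years", "ibs", "acupuncture"] + covs],
                        duration_col="followup_years", event_col="ibs")
print(cph.hazard_ratios_["acupuncture"])          # adjusted HR for acupuncture
```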
Procedia PDF Downloads 107
3954 Modeling Aeration of Sharp Crested Weirs by Using Support Vector Machines
Authors: Arun Goel
Abstract:
The present paper investigates the prediction of the air entrainment rate and aeration efficiency of free over-fall jets issuing from a triangular sharp-crested weir by using regression-based modelling. Empirical equations, support vector machine (polynomial and radial basis function) models and linear regression techniques were applied to the triangular sharp-crested weirs, relating the air entrainment rate and the aeration efficiency to the input parameters, namely drop height, discharge, and vertex angle. It was observed that there is good agreement between the measured values and the values obtained using the empirical equations, the support vector machine (polynomial and RBF) models, and the linear regression techniques. The test results demonstrated that the SVM-based (poly and RBF) models also provide acceptable predictions of the measured values with reasonable accuracy, along with the empirical equations and linear regression techniques, in modelling the air entrainment rate and the aeration efficiency of free over-fall jets issuing from a triangular sharp-crested weir. Further, a sensitivity analysis has been performed to study the impact of the input parameters on the output in terms of air entrainment rate and aeration efficiency.
Keywords: air entrainment rate, dissolved oxygen, weir, SVM, regression
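A minimal sketch of the comparison described above, fitting linear regression and SVR models with polynomial and RBF kernels to predict aeration efficiency from drop height, discharge and vertex angle; the data file and hyperparameter values are assumptions, not the study's settings.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

df = pd.read_csv("weir_tests.csv")                 # hypothetical experimental records
X = df[["drop_height", "discharge", "vertex_angle"]]
y = df["aeration_efficiency"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "linear":   make_pipeline(StandardScaler(), LinearRegression()),
    "SVR-poly": make_pipeline(StandardScaler(), SVR(kernel="poly", degree=2, C=10.0)),
    "SVR-rbf":  make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma="scale")),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
    print(name, "RMSE:", round(float(rmse), 4))
```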
Procedia PDF Downloads 436
3953 Use of Regression Analysis in Determining the Length of Plastic Hinge in Reinforced Concrete Columns
Authors: Mehmet Alpaslan Köroğlu, Musa Hakan Arslan, Muslu Kazım Körez
Abstract:
The basic objective of this study is to create a regression-based method that can estimate the length of the plastic hinge, an important design parameter, by making use of the outcomes (lateral load-lateral displacement hysteretic curves) of experimental studies conducted on square reinforced concrete columns. For this aim, the results of 170 different square reinforced concrete column tests were collected from the existing literature. The parameters thought to affect the plastic hinge length, such as cross-section properties, the properties of the materials used, the axial load level, the confinement of the column, and the longitudinal reinforcement bars in the column, were obtained from these 170 tests. In the study, regression analyses for determining the length of the plastic hinge were carried out on the experimental test results and compared with each other. In addition, the outcome of these methods for the determination of the plastic hinge length of reinforced concrete columns has been compared to other methods available in the literature.
Keywords: columns, plastic hinge length, regression analysis, reinforced concrete
Procedia PDF Downloads 480
3952 Seismic Resistant Columns of Buildings against the Differential Settlement of the Foundation
Authors: Romaric Desbrousses, Lan Lin
Abstract:
The objective of this study is to determine how Canadian seismic design provisions affect the column axial load resistance of moment-resisting frame reinforced concrete buildings subjected to differential settlement of their foundation. To do so, two four-storey buildings are designed in accordance with the seismic design provisions of the Canadian Concrete Design Standards. One building is located in Toronto, which is situated in a moderate seismic hazard zone in Canada, and the other in Vancouver, which is in Canada’s highest seismic hazard zone. A finite element model of each building is developed using SAP 2000. A 100 mm settlement is assigned to the base of each building’s center column. The axial load resistance of the column is represented by the demand-capacity ratio. The analysis results show that settlement-induced tensile axial forces have a particularly detrimental effect on the conventional settling columns of the Toronto buildings, which fail at a much smaller settlement than those in the Vancouver buildings. The results also demonstrate that particular care should be taken in the design of columns in short-span buildings.
Keywords: columns, demand, foundation differential settlement, seismic design, non-linear analysis
Procedia PDF Downloads 135
3951 Experimental Study on Recycled Aggregate Pervious Concrete
Authors: Ji Wenzhan, Zhang Tao, Li Guoyou
Abstract:
Concrete is the most widely used building material in the world. At the same time, the world produces a large amount of construction waste each year. Waste concrete is processed and treated, and the recycled aggregate is used to make pervious concrete, which enables the construction waste to be recycled. Pervious concrete has many advantages such as permeability to water, protection of water resources, and so on. This paper tests the recycled aggregate obtained by crushing high-strength waste concrete (TOU) and low-strength waste concrete (PU), and analyzes the effect of porosity, amount of cement, mineral admixture and recycled aggregate on the strength of permeable concrete. The porosity is inversely proportional to the strength, and the amount of cement used is proportional to the strength. The mineral admixture can effectively improve the workability of the mixture. The quality of recycled aggregates had a significant effect on strength. Compared with concrete using "PU" aggregates, the strength of 7d and 28d concrete using "TOU" aggregates increased by 69.0% and 73.3%, respectively. Therefore, the quality of recycled aggregates should be strictly controlled during production, and the mix ratio should be designed according to different use environments and usage requirements. This test prepared a recycled aggregate permeable concrete with a compressive strength of 35.8 MPa, which can be used for light load roads and provides a reference for engineering applications.
Keywords: recycled aggregate, permeable concrete, compressive strength, permeability
Procedia PDF Downloads 227
3950 Measurement Errors and Misclassifications in Covariates in Logistic Regression: Bayesian Adjustment of Main and Interaction Effects and the Sample Size Implications
Authors: Shahadut Hossain
Abstract:
Measurement errors in continuous covariates and/or misclassifications in categorical covariates are common in epidemiological studies. Regression analysis ignoring such mismeasurements seriously biases the estimated main and interaction effects of covariates on the outcome of interest. Thus, adjustments for such mismeasurements are necessary. In this research, we propose a Bayesian parametric framework for eliminating deleterious impacts of covariate mismeasurements in logistic regression. The proposed adjustment method is unified and thus can be applied to any generalized linear and non-linear regression models. Furthermore, adjustment for covariate mismeasurements requires validation data usually in the form of either gold standard measurements or replicates of the mismeasured covariates on a subset of the study population. Initial investigation shows that adequacy of such adjustment depends on the sizes of main and validation samples, especially when prevalences of the categorical covariates are low. Thus, we investigate the impact of main and validation sample sizes on the adjusted estimates, and provide a general guideline about these sample sizes based on simulation studies.
Keywords: measurement errors, misclassification, mismeasurement, validation sample, Bayesian adjustment
Procedia PDF Downloads 409
3949 Quantitative Structure-Activity Relationship Study of Some Quinoline Derivatives as Antimalarial Agents
Authors: M. Ouassaf, S. Belaid
Abstract:
A series of quinoline derivatives with antimalarial activity was subjected to a two-dimensional quantitative structure-activity relationship (2D-QSAR) study. Three models were implemented using multiple linear regression (MLR), partial least squares regression (PLS), and multiple nonlinear regression (MNLR) to see which descriptors are closely related to the biological activity. We also relied on a principal component analysis (PCA). Based on our results, a comparison of the quality of the MLR, PLS, and MNLR models shows that the MNLR model (R = 0.914, R² = 0.835, RCV = 0.853) has substantially better predictive capability: the MNLR approach gives better results than MLR (R = 0.835, R² = 0.752, RCV = 0.601) and PLS (R = 0.742, R² = 0.552, RCV = 0.550). The MNLR model gave statistically significant results and showed good stability to data variation in leave-one-out cross-validation. The obtained results suggest that our proposed MNLR model may be useful to predict the biological activity of quinoline derivatives.
Keywords: antimalarial, quinoline, QSAR, PCA, MLR, MNLR
Procedia PDF Downloads 157
3948 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK
Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick
Abstract:
The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in late January 2020 in the UK, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to develop a predictive machine learning model that could forecast COVID-19 cases within the UK. This study concentrates on the statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID cases registered, new cases encountered on a daily basis, total deaths registered, and patient deaths per day due to coronavirus is collected from the World Health Organisation (WHO). Data preprocessing is carried out to identify any missing values, outliers, or anomalies in the dataset. The data is split in an 8:2 ratio for training and testing purposes to forecast future new COVID cases. Support Vector Machines (SVM), Random Forests, and linear regression algorithms are chosen to study the model performance in the prediction of new COVID-19 cases. The statistical performance of each model in predicting new COVID cases is evaluated using metrics such as the R-squared value and mean squared error. Random Forest outperformed the other two machine learning algorithms with a training accuracy of 99.47% and testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest is 4.05e11, which is lower than that of the other predictive models used in this study. From the experimental analysis, the Random Forest algorithm performs more effectively and efficiently in predicting new COVID cases, which could help the health sector to take relevant control measures against the spread of the virus.
Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest
Procedia PDF Downloads 121
3947 Landslide Hazard Zonation Using Satellite Remote Sensing and GIS Technology
Authors: Ankit Tyagi, Reet Kamal Tiwari, Naveen James
Abstract:
Landslides are a major geo-environmental problem in the Himalaya because of the high ridges, steep slopes, deep valleys, and complex system of streams. They are mainly triggered by rainfall and earthquakes and cause severe damage to life and property. In Uttarakhand, the Tehri reservoir rim area, which is situated in the Lesser Himalaya of the Garhwal hills, was selected for landslide hazard zonation (LHZ). The study utilized different types of data, including geological maps, topographic maps from the Survey of India, Landsat 8, and Cartosat DEM data. This paper presents the use of a weighted overlay method for LHZ using fourteen causative factors. The various data layers generated and co-registered were slope, aspect, relative relief, soil cover, intensity of rainfall, seismic ground shaking, seismic amplification at surface level, lithology, land use/land cover (LULC), normalized difference vegetation index (NDVI), topographic wetness index (TWI), stream power index (SPI), drainage buffer and reservoir buffer. The seismic analysis uses peak horizontal acceleration (PHA) intensity and amplification factors in the evaluation of the landslide hazard index (LHI). Several digital image processing techniques, such as topographic correction, NDVI, and supervised classification, were widely used in the process of terrain factor extraction. Lithological features, LULC, drainage pattern, lineaments, and structural features are extracted using digital image processing techniques. Colour, tone, topography, and stream drainage pattern from the imagery are used to analyse geological features. The slope map, aspect map, and relative relief are created using Cartosat DEM data. The DEM data are also used for detailed drainage analysis, which includes TWI, SPI, drainage buffer, and reservoir buffer. In the weighted overlay method, the comparative importance of the several causative factors is obtained from experience. In this method, after multiplying the influence factor by the corresponding rating of a particular class, the result is reclassified, and the LHZ map is prepared. Further, based on the land-use map developed from remote sensing images, a landslide vulnerability study for the study area is carried out and presented in this paper.
Keywords: weighted overlay method, GIS, landslide hazard zonation, remote sensing
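The weighted overlay step itself reduces to multiplying each rating raster by its influence weight, summing, and reclassifying the result, as in the sketch below; the layers, weights and class breaks are placeholders rather than the study's calibrated values, and only a few of the fourteen factors are shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed rating rasters (class ratings 1-5) for a few of the causative factors,
# all co-registered to the same grid.
slope     = rng.integers(1, 6, size=(100, 100))
rainfall  = rng.integers(1, 6, size=(100, 100))
lithology = rng.integers(1, 6, size=(100, 100))
pha       = rng.integers(1, 6, size=(100, 100))   # seismic ground shaking rating

# Influence weights (fraction of total importance), illustrative and summing to 1
layers  = [slope, rainfall, lithology, pha]
weights = [0.35, 0.30, 0.20, 0.15]

lhi = sum(w * r for w, r in zip(weights, layers))  # landslide hazard index per cell

# Reclassify the continuous index into hazard classes (assumed breaks)
classes = np.digitize(lhi, bins=[2.0, 3.0, 4.0])   # 0 = low ... 3 = very high
print("cells per hazard class:", np.bincount(classes.ravel(), minlength=4))
```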
Procedia PDF Downloads 134
3946 Agile Software Effort Estimation Using Regression Techniques
Authors: Mikiyas Adugna
Abstract:
Effort estimation is among the activities carried out in software development processes, and an accurate estimation model leads to project success. Agile effort estimation is a complex task because of the dynamic nature of software development, and researchers are still conducting studies on it to enhance prediction accuracy. For these reasons, we investigated and propose a model based on LASSO and Elastic Net regression to enhance estimation accuracy. The proposed model has major components: preprocessing, train-test split, training with default parameters, and cross-validation. During the preprocessing phase, the entire dataset is normalized. After normalization, a train-test split is performed on the dataset, setting the training set at 80% and the testing set at 20%. Following the train-test split, the two algorithms (Elastic Net and LASSO regression) are trained in two phases. In the first phase, the two algorithms are trained using their default parameters and evaluated on the testing data. In the second phase, the grid search technique (a grid is used to search for and select optimum tuning parameters) and 5-fold cross-validation are applied to obtain the final trained model. Finally, the final trained model is evaluated using the testing set. The experimental work is applied to an agile story-point dataset of 21 software projects collected from six firms. The results show that both Elastic Net and LASSO regression outperformed the compared approaches. Of the proposed algorithms, LASSO regression achieved better predictive performance, acquiring PRED(8%) and PRED(25%) results of 100.0, an MMRE of 0.0491, an MMER of 0.0551, an MdMRE of 0.0593, an MdMER of 0.063, and an MSE of 0.0007. The results imply that the LASSO regression model is the most acceptable and gives higher estimation performance than the compared models from the literature.
Keywords: agile software development, effort estimation, elastic net regression, LASSO
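A hedged sketch of the two-phase workflow described above (a default fit is implicit in the grid, followed by grid search with 5-fold cross-validation) together with MMRE/PRED-style accuracy metrics; the story-point file, column names and hyperparameter grids are assumptions, not the study's settings.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import ElasticNet, Lasso
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import MinMaxScaler

def mmre(y_true, y_pred):
    # mean magnitude of relative error
    return np.mean(np.abs(y_true - y_pred) / y_true)

def pred(y_true, y_pred, level=0.25):
    # percentage of estimates within `level` of the actual effort, e.g. PRED(25%)
    return np.mean(np.abs(y_true - y_pred) / y_true <= level) * 100

df = pd.read_csv("story_points.csv")                          # hypothetical agile data set
X = MinMaxScaler().fit_transform(df.drop(columns="effort"))   # normalise all features
y = df["effort"].to_numpy()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

for name, est, grid in [("LASSO", Lasso(max_iter=10000), {"alpha": [0.001, 0.01, 0.1, 1.0]}),
                        ("ElasticNet", ElasticNet(max_iter=10000),
                         {"alpha": [0.001, 0.01, 0.1, 1.0], "l1_ratio": [0.2, 0.5, 0.8]})]:
    search = GridSearchCV(est, grid, cv=5, scoring="neg_mean_squared_error").fit(X_tr, y_tr)
    y_hat = search.best_estimator_.predict(X_te)
    print(name, "MMRE:", round(mmre(y_te, y_hat), 4), "PRED(25%):", round(pred(y_te, y_hat), 1))
```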
Procedia PDF Downloads 72