Search results for: empirical
2483 Financial Regulations and Insolvency Risk: Empirical Evidence from Commercial Banks of Pakistan
Authors: Shumaila Zeb
Abstract:
The proposed study aims to investigate the insolvency risk of commercial banks in Pakistan. Furthermore, it empirically estimates the effect of already-implemented financial regulations on the insolvency risk of banks. To carry out the empirical analysis, a balanced bank-level panel dataset covering the period 2008-2016 is used. The Z-score is used for calculating the insolvency risk of each bank. Panel regression is used to investigate the relationship between financial regulations and the insolvency risk of banks. The empirical results reveal that the financial regulations enforced by the State Bank of Pakistan have significant impacts on the insolvency risk of banks. The results further indicate that the loan ratio and the reserve ratio are positively and significantly related to the insolvency risk of banks.
Keywords: insolvency risk, Z-score, financial regulations, banks
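The abstract does not state which Z-score variant is used; a common accounting-based form in the banking literature is Z = (ROA + equity/assets) / sd(ROA), where a higher Z means a greater distance to insolvency. A minimal sketch with purely illustrative figures:

```python
def bank_z_score(roa, equity_to_assets, roa_std):
    """Bank insolvency Z-score: (ROA + E/A) / sd(ROA).

    A higher Z means more buffer before insolvency. The variable names
    and the choice of Z-score variant are assumptions for illustration,
    not taken from the paper.
    """
    return (roa + equity_to_assets) / roa_std

# Example: ROA 1.2%, equity/assets 8%, sd(ROA) 0.9% -> Z of about 10.2
print(bank_z_score(0.012, 0.08, 0.009))
```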
Procedia PDF Downloads 198
2482 A Literature Review of Ergonomics Sitting Studies to Characterize Safe and Unsafe Sitting Behaviors
Authors: Yoonjin Lee, Dongwook Hwang, Juhee Park, Woojin Park
Abstract:
As undesirable sitting posture is known to be a major cause of musculoskeletal disorders in office workers, sitting has attracted attention in occupational health. However, there seems to be no consensus on what constitutes safe and unsafe sitting behaviors. The purpose of this study was to characterize safe and unsafe behaviors based on scientific findings on sitting behavior. The three objectives were as follows: to identify the different sitting behavior measures used in ergonomics studies on safe sitting; for each measure identified, to find available findings or recommendations on safe and unsafe sitting behaviors along with the relevant empirical grounds; and to synthesize the findings or recommendations into characterizations of safe and unsafe behaviors. A systematic review of electronic databases (Google Scholar, PubMed, Web of Science) was conducted for an extensive search on sitting behavior. Key terms included awkward sitting position, sedentary sitting, dynamic sitting, sitting posture, and sitting biomechanics. Each article was systematically abstracted to extract a list of studied sitting behaviors, the measures used to study each behavior, and the presence of empirical evidence on the safety of the sitting behaviors. Finally, safe and unsafe sitting behaviors were characterized based on the knowledge backed by empirical evidence. This characterization is expected to provide useful knowledge for the evaluation of sitting behavior and about the postures to be measured in the development of a sensing chair.
Keywords: sitting position, sitting biomechanics, sitting behavior, unsafe sitting
Procedia PDF Downloads 302
2481 An Empirical Analysis of Euthanasia Issues in Taiwan
Authors: Wen-Shai Hung
Abstract:
This paper examines the factors influencing euthanasia issues in Taiwan. The data used are from the 2015 Survey Research on Attitudes towards the Death Penalty and Related Values in Taiwan, which focused on knowledge, attitudes towards the death penalty, and concepts of social, political, and legal values. The sample ages range from 21 to 94. The method used is probit modelling to examine the influences on euthanasia issues in Taiwan. The main empirical results show that older people, persons with higher educational attainment, and those who favour abolition of the death penalty and do not oppose divorce, abortion, same-sex relationships, or putting down homeless cats or dogs are more likely to approve of the use of euthanasia to end their lives. In contrast, Mainlanders and people who support the death penalty and favour long-term prison sentences are less likely to support the use of euthanasia.
Keywords: euthanasia, homosexual, death penalty, probit model
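A minimal sketch of a probit specification of the kind described, using statsmodels; the covariates and coefficients below are hypothetical stand-ins for the survey variables, not the study's data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
# Hypothetical covariates standing in for the survey variables
age = rng.integers(21, 95, n)
education = rng.integers(0, 20, n)
favors_abolition = rng.integers(0, 2, n)

# Synthetic latent propensity and binary approval outcome
latent = -2.0 + 0.02 * age + 0.05 * education + 0.4 * favors_abolition
approve = (latent + rng.normal(size=n) > 0).astype(int)

X = sm.add_constant(np.column_stack([age, education, favors_abolition]))
res = sm.Probit(approve, X).fit(disp=False)
print(res.summary(xname=["const", "age", "education", "favors_abolition"]))
```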
Procedia PDF Downloads 377
2480 Empirical Acceleration Functions and Fuzzy Information
Authors: Muhammad Shafiq
Abstract:
In accelerated life testing approaches, lifetime data are obtained under conditions considered more severe than the usual operating condition. Classical techniques are based on precise measurements and are used to model the variation among the observations. In fact, there are two types of uncertainty in data: variation among the observations and fuzziness. Analysis techniques that do not consider fuzziness and are based only on precise lifetime observations lead to pseudo-results. This study aimed to examine the behavior of empirical acceleration functions using fuzzy lifetime data. The results showed increased fuzziness in the transformed lifetimes compared to the input data.
Keywords: acceleration function, accelerated life testing, fuzzy number, non-precise data
Procedia PDF Downloads 299
2479 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry
Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine
Abstract:
The aerial photogrammetry of shallow water bottoms has the potential to be an efficient high-resolution survey technique for shallow water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from systematic overestimation of the bottom elevation due to light refraction at the air-water interface. In this study, we present an empirical method to correct for the effect of refraction after the usual SfM-MVS processing, using common software. The presented method utilizes the empirical relation between the measured true depth and the estimated apparent depth to generate an empirical correction factor. This correction factor is then used to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The result shows that the presented method is more effective than two of the existing methods: the method that applies no correction factor and the method that uses the refractive index of water (1.34) as the correction factor. In comparison with the remaining existing method, which adds an offset term after calculating the correction factor, the presented method performs better in Site 2 and worse in Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited. It also suffers from a large negative bias in the correction factor when the estimated apparent water depth is affected by noise, according to our numerical experiment. Overall, the accuracy of a refraction correction method depends on various factors such as the location, image acquisition, and GPS measurement conditions. The most effective method can be selected by statistical selection (e.g., leave-one-out cross-validation).
Keywords: bottom elevation, MVS, river, SfM
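A minimal sketch of the core step, assuming (as the abstract suggests) a correction factor obtained by regressing measured true depth on SfM-MVS apparent depth; the calibration pairs are invented for illustration:

```python
import numpy as np

# Hypothetical calibration pairs: apparent depth from SfM-MVS vs. true depth (m)
apparent = np.array([0.42, 0.65, 0.88, 1.10, 1.31])
true_depth = np.array([0.55, 0.87, 1.18, 1.46, 1.77])

# Least-squares slope through the origin: true ~ f * apparent
f = np.sum(apparent * true_depth) / np.sum(apparent ** 2)
print(f"empirical correction factor = {f:.3f}")  # close to 1.34 here

corrected = f * apparent
rmse = np.sqrt(np.mean((corrected - true_depth) ** 2))
print(f"RMSE on calibration data: {rmse:.3f} m")
```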
Procedia PDF Downloads 299
2478 Normalizing Logarithms of Realized Volatility in an ARFIMA Model
Authors: G. L. C. Yap
Abstract:
Modelling realized volatility from high-frequency returns is popular, as realized volatility is an unbiased and efficient estimator of return volatility. A computationally simple model fits the logarithms of the realized volatilities with a fractionally integrated long-memory Gaussian process. The Gaussianity assumption simplifies parameter estimation using the Whittle approximation. Nonetheless, this assumption may not be met in finite samples, and there may be a need to normalize the financial series. Based on the empirical indices S&P500 and DAX, this paper examines the performance of the linear volatility model pre-treated with normalization compared to its existing counterpart. The empirical results show that by including normalization as a pre-treatment procedure, the forecast performance outperforms the existing model in terms of statistical and economic evaluations.
Keywords: Gaussian process, long-memory, normalization, value-at-risk, volatility, Whittle estimator
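The abstract does not specify which normalization is used; a rank-based inverse normal (van der Waerden) transform is one standard pre-treatment that maps a series toward Gaussianity before Whittle estimation. A sketch under that assumption:

```python
import numpy as np
from scipy import stats

def inverse_normal_transform(x):
    """Rank-based inverse normal transform (an illustrative choice,
    not necessarily the paper's normalization)."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf(ranks / (len(x) + 1))

# Synthetic variance-like series: log of a chi-square stays skewed
rv = np.random.default_rng(1).chisquare(df=5, size=1000)
log_rv = np.log(rv)
z = inverse_normal_transform(log_rv)
print(stats.skew(log_rv).round(2), "->", stats.skew(z).round(2))  # toward 0
```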
Procedia PDF Downloads 354
2477 Modeling Aeration of Sharp Crested Weirs by Using Support Vector Machines
Authors: Arun Goel
Abstract:
The present paper investigates the prediction of the air entrainment rate and aeration efficiency of a free over-fall jet issuing from a triangular sharp-crested weir using regression-based modelling. Empirical equations, support vector machine models (with polynomial and radial basis function kernels), and linear regression techniques were applied to the triangular sharp-crested weirs, relating the air entrainment rate and the aeration efficiency to the input parameters, namely drop height, discharge, and vertex angle. It was observed that there is good agreement between the measured values and the values obtained using the empirical equations, the support vector machine (polynomial and RBF) models, and the linear regression techniques. The test results demonstrated that the SVM-based (polynomial and RBF) models provided acceptable predictions of the measured values with reasonable accuracy, alongside the empirical equations and linear regression techniques, in modelling the air entrainment rate and the aeration efficiency of a free over-fall jet issuing from a triangular sharp-crested weir. A sensitivity analysis was also performed to study the impact of the input parameters on the outputs, i.e., the air entrainment rate and aeration efficiency.
Keywords: air entrainment rate, dissolved oxygen, weir, SVM, regression
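A minimal sketch of SVR with the polynomial and RBF kernels on the three inputs the abstract names (drop height, discharge, vertex angle); the data are synthetic stand-ins for the measurements:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
n = 120
# Hypothetical inputs: drop height (m), discharge (l/s), vertex angle (deg)
X = np.column_stack([rng.uniform(0.2, 1.5, n),
                     rng.uniform(1.0, 10.0, n),
                     rng.uniform(30, 120, n)])
# Synthetic aeration efficiency standing in for the measured values
y = 0.3 * X[:, 0] + 0.02 * X[:, 1] + 0.001 * X[:, 2] + rng.normal(0, 0.02, n)

for kernel, params in [("poly", {"degree": 2, "C": 10.0}),
                       ("rbf", {"gamma": "scale", "C": 10.0})]:
    model = make_pipeline(StandardScaler(), SVR(kernel=kernel, **params))
    model.fit(X, y)
    print(kernel, "R^2 =", round(model.score(X, y), 3))
```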
Procedia PDF Downloads 436
2476 Empirical Analysis of Forensic Accounting Practices for Tackling Persistent Fraud and Financial Irregularities in the Nigerian Public Sector
Authors: Sani AbdulRahman Bala
Abstract:
This empirical study delves into forensic accounting practices within the Nigerian public sector, seeking to quantitatively analyze their efficacy in addressing the persistent challenges of fraud and financial irregularities. With a focus on empirical data, this research employs a robust methodology to assess the current state of fraud in the Nigerian public sector and evaluate the performance of existing forensic accounting measures. Through quantitative analyses, including statistical models and data-driven insights, the study aims to identify patterns, trends, and correlations associated with fraudulent activities. The research objectives include scrutinizing documented fraud cases, examining the effectiveness of established forensic accounting practices, and proposing data-driven strategies for enhancing fraud detection and prevention. Leveraging quantitative methodologies, the study seeks to measure the impact of technological advancements on forensic accounting accuracy and efficiency. Additionally, the research explores collaborative mechanisms among government agencies, regulatory bodies, and the private sector by quantifying the effects of information sharing on fraud prevention. The empirical findings from this study are expected to provide a nuanced understanding of the challenges and opportunities in combating fraud within the Nigerian public sector. The quantitative insights derived from real-world data will contribute to the refinement of forensic accounting strategies, ensuring their effectiveness in addressing the unique complexities of financial irregularities in the public sector. The study's outcomes aim to inform policymakers, practitioners, and stakeholders, fostering evidence-based decision-making and proactive measures for a more resilient and fraud-resistant financial governance system in Nigeria.
Keywords: fraud, financial irregularities, Nigerian public sector, quantitative investigation
Procedia PDF Downloads 622475 Key Success Factors of Customer Relationship Management: An Empirical Study of Tunisian Firms
Authors: Khlif Hamadi
Abstract:
Customer Relationship Management (CRM) has become a main interest of researchers and practitioners, especially in the domains of management and information systems (IS). This paper is an overview of success factors that could facilitate the successful adoption of CRM. Two factors are considered: the organizational climate and the capacity for innovation. The survey was conducted with 200 CRM users. The empirical research is in the positivist paradigm, based on the hypothetico-deductive method. Indeed, the approach adopted is a quantitative approach based on a questionnaire completed by Tunisian companies operating in different sectors of activity. For the data analyses, the structural equation method was used to conduct our exploratory and confirmatory analysis. The results revealed that a creative organizational climate and a high innovation capacity positively influence the success of CRM practice.
Keywords: CRM practices, innovation capacity, organizational climate, structural equations
Procedia PDF Downloads 1172474 Empirical Roughness Progression Models of Heavy Duty Rural Pavements
Authors: Nahla H. Alaswadko, Rayya A. Hassan, Bayar N. Mohammed
Abstract:
Empirical deterministic models have been developed to predict the roughness progression of heavy-duty spray-sealed pavements for a dataset representing rural arterial roads. The dataset provides a good representation of the relevant network and covers a wide range of operating and environmental conditions. A large sample of historical time series data for many pavement sections was collected and prepared for use in multilevel regression analysis. The modelling parameters include road roughness as the performance parameter, and traffic loading, time, initial pavement strength, reactivity level of the subgrade soil, climate condition, and condition of the drainage system as predictor parameters. The purpose of this paper is to report the approaches adopted for model development and validation. The study presents multilevel models that can account for the correlation among time series data of the same section and capture the effect of unobserved variables. Study results show that the models fit the data very well. The contribution and significance of the relevant influencing factors in predicting roughness progression are presented and explained. The paper concludes that the analysis approach used for developing the models confirmed their accuracy and reliability through a good fit to the validation data.
Keywords: roughness progression, empirical model, pavement performance, heavy duty pavement
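A minimal sketch of a multilevel (mixed-effects) specification with a random intercept per pavement section, one standard way to account for the within-section correlation the abstract describes; the data and coefficients are synthetic:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for section in range(40):              # pavement sections = level-2 units
    u = rng.normal(0, 0.3)             # unobserved section effect
    for age in range(1, 11):           # yearly roughness surveys
        iri = 1.5 + 0.08 * age + u + rng.normal(0, 0.1)
        rows.append({"section": section, "age": age, "iri": iri})
df = pd.DataFrame(rows)

# Random intercept per section captures correlation within its time series
res = smf.mixedlm("iri ~ age", df, groups=df["section"]).fit()
print(res.summary())
```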
Procedia PDF Downloads 168
2473 Dynamic Modeling of the Exchange Rate in Tunisia: Theoretical and Empirical Study
Authors: Chokri Slim
Abstract:
The relative failure of simultaneous equation models in the seventies led researchers to turn to other approaches that take into account the dynamics of economic and financial systems. In this paper, we use an approach based on the vector autoregressive model, which has been widely used in recent years. Its popularity is due to its flexible nature and the ease with which it produces models with useful descriptive characteristics; it is also easy to use for testing economic hypotheses. Standard econometric techniques assume that the series studied are stable over time (the stationarity hypothesis). Most economic series do not satisfy this hypothesis, which calls for specific techniques when one wishes to study the relationships that bind them. Cointegration, which characterizes non-stationary (integrated) series for which a linear combination is stationary, is also presented in this paper. Since the work of Johansen, this approach has generally been presented as part of a multivariate analysis, making it possible to specify stable long-term relationships while simultaneously analyzing the short-term dynamics of the variables considered. In the empirical part, we apply these concepts to study the dynamics of the exchange rate in Tunisia, one of the most important economic policy variables for a country open to the outside. According to the results of the empirical study using the cointegration method, there is a cointegration relationship between the exchange rate and its determinants. This relationship shows that the variables have a significant influence in determining the exchange rate in Tunisia.
Keywords: stationarity, cointegration, dynamic models, causality, VECM models
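A minimal sketch of the Johansen test and a VECM fit with statsmodels, on two synthetic I(1) series sharing a common stochastic trend (the actual determinants of the Tunisian exchange rate are not listed in the abstract):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, coint_johansen

rng = np.random.default_rng(4)
n = 300
trend = np.cumsum(rng.normal(size=n))              # common stochastic trend
fx = trend + rng.normal(scale=0.5, size=n)         # stand-in exchange rate
det = 0.8 * trend + rng.normal(scale=0.5, size=n)  # stand-in determinant
data = pd.DataFrame({"fx": fx, "det": det})

jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics :", jres.lr1)
print("95% critical vals:", jres.cvt[:, 1])  # cvt columns are 90/95/99%

vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print("long-run relation (beta):", vecm.beta.ravel())
```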
Procedia PDF Downloads 3652472 Applying Critical Realism to Qualitative Social Work Research: A Critical Realist Approach for Social Work Thematic Analysis Method
Authors: Lynne Soon-Chean Park
Abstract:
Critical Realism (CR) has emerged as an alternative to both the positivist and constructivist perspectives that have long dominated social work research. By unpacking the epistemic weaknesses of these two dogmatic perspectives, CR provides a useful philosophical approach that incorporates both the ontological objectivist and the subjectivist stance. The CR perspective suggests an alternative approach for social work researchers who have long sought to engage with the complex interplay between perceived reality at the empirical level and the objective reality that lies behind empirical events as a causal mechanism. However, despite the usefulness of CR in informing social work research, little practical guidance is available on how CR can inform methodological considerations in social work research studies. This presentation aims to provide a detailed description of CR-informed thematic analysis, drawing examples from doctoral social work research on Korean migrants' experiences and understanding of trust associated with their settlement experience in New Zealand. Because of its theoretical flexibility and accessibility as a qualitative analysis method, thematic analysis can be applied as a method that works both to search for the demi-regularities of the collected data and to identify the causal mechanisms that lie behind the empirical data. In so doing, this presentation seeks to provide a concrete and detailed exemplar for social work researchers wishing to employ CR in their qualitative thematic analysis process.
Keywords: critical realism, data analysis, epistemology, research methodology, social work research, thematic analysis
Procedia PDF Downloads 2122471 Firm's Growth Leading Dimensions of Blockchain Empowered Information Management System: An Empirical Study
Authors: Umang Varshney, Amit Karamchandani, Rohit Kapoor
Abstract:
Practitioners and researchers have realized that blockchain is not limited to currency. Blockchain, as a distributed ledger, can ensure a transparent and traceable supply chain. Thanks to blockchain-enabled IoT, a firm's information management system can now take inputs from other supply chain partners in real time. This study aims to provide empirical evidence on the dimensions responsible for the growth of blockchain-implementing firms and to highlight how the sector (manufacturing or service), the state's regulatory environment, and the choice of blockchain network affect the blockchain's usefulness. This post-adoption study seeks to validate the findings of pre-adoption studies on blockchain. Data will be collected through a survey of managers working in blockchain-implementing firms and analyzed through PLS-SEM.
Keywords: blockchain, information management system, PLS-SEM, firm's growth
Procedia PDF Downloads 1262470 Empirical Research to Improve Performances of Paddy Columnar Dryer
Authors: Duong Thi Hong, Nguyen Van Hung, Martin Gummert
Abstract:
Good practices in mechanical drying can reduce losses in grain quality. Recently, with demands for higher paddy drying capacity in the Mekong River Delta of Vietnam, columnar dryers have been introduced rapidly in this area. To improve the technology, this study investigated and optimized the parameters for drying Jasmine paddy using an empirical cross-flow columnar dryer. The optimum air flow rate and drying temperature were found to be 1-1.5 m³ s⁻¹ t⁻¹ of paddy and 40-42°C, respectively. The investigation also addressed a solution of reversing the drying air to achieve uniformity of grain temperature and quality. The results of this study should be significant for developments in grain drying, contributing to the reduction of post-harvest losses.
Keywords: paddy drying, columnar dryer, air flow rate, drying temperature
Procedia PDF Downloads 3712469 Hydrological Modeling of Watersheds Using the Only Corresponding Competitor Method: The Case of M’Zab Basin, South East Algeria
Authors: Oulad Naoui Noureddine, Cherif ELAmine, Djehiche Abdelkader
Abstract:
Water resources management includes several disciplines; among them, the modeling of the rainfall-runoff relationship is the most important for preventing natural risks. There are several models for studying the rainfall-runoff relationship in watersheds. However, the majority of these models are not applicable to all basins of the world. In this study, a new stochastic method called the Only Corresponding Competitor method (OCC) was used for the hydrological modeling of the M'Zab watershed (South East Algeria) to adapt a few empirical models to any hydrological regime. The results obtained open up a number of perspectives in which it would be interesting to experiment with hydrological models that improve, collectively or separately, the data of a catchment by the OCC method.
Keywords: modelling, optimization, rainfall-runoff relationship, empirical model, OCC
Procedia PDF Downloads 2652468 The Impact of Board Director Characteristics on the Quality of Information Disclosure
Authors: Guo Jinhong
Abstract:
The purpose of this study is to explore the association between board member functions and information disclosure levels. Based on board-of-directors characteristics drawn from the prior literature, a single comprehensive indicator is established as a proxy variable for board functions, and the information disclosure evaluation results published by the Securities and Foundation are used to measure the information disclosure level of each company. This study focuses on companies listed on the Taiwan Stock Exchange from 2006 to 2010 and uses descriptive statistical analysis, univariate analysis, correlation analysis, and ordered probit regression for the empirical analysis. The empirical results show a significant positive correlation between the functions of board members and the level of information disclosure. This study also conducts a sensitivity test and draws similar conclusions, showing that boards with better member functions have higher levels of information disclosure. In addition, this study finds that higher board independence, a lower director shareholding pledge ratio, a higher director shareholding ratio, and directors with rich professional knowledge and practical experience can help improve the level of information disclosure. The empirical results of this study provide strong support for the regulations formulated by the competent authorities in recent years to improve the level of information disclosure.
Keywords: function of board members, information disclosure, securities, foundation
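A minimal sketch of an ordered probit with statsmodels; the covariates mirror factors the abstract names (independence, pledge ratio, shareholding), but the data and the ordinal disclosure grades are synthetic:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(5)
n = 400
X = pd.DataFrame({
    "independence": rng.uniform(0, 1, n),     # board independence
    "pledge_ratio": rng.uniform(0, 0.6, n),   # director shareholding pledge
    "holding_ratio": rng.uniform(0, 0.3, n),  # director shareholding
})
latent = (1.5 * X["independence"] - 2.0 * X["pledge_ratio"]
          + 3.0 * X["holding_ratio"] + rng.normal(size=n))
# Ordinal disclosure grade standing in for the evaluation's ranked categories
grade = pd.cut(latent, [-np.inf, -0.3, 0.5, 1.2, np.inf], labels=False)

res = OrderedModel(grade, X, distr="probit").fit(method="bfgs", disp=False)
print(res.summary())
```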
Procedia PDF Downloads 972467 The Association of Empirical Dietary Inflammatory Index with Musculoskeletal Pains in Elderlies
Authors: Mahshid Rezaei, Zahra Tajari, Zahra Esmaeily, Atefeh Eyvazkhani, Shahrzad Daei, Marjan Mansouri Dara, Mohaddesh Rezaei, Abolghassem Djazayeri, Ahmadreza Dorosti Motlagh
Abstract:
Background: Musculoskeletal pain is one of the most prevalent symptoms in old age. Nutrition and diet are considered important underlying factors that can affect chronic musculoskeletal pain. The purpose of this study was to identify the relationship between the empirical dietary inflammatory index (EDII) and musculoskeletal pain. Method: In this cross-sectional study, 213 elderly individuals were selected from several health centers. Usual dietary intake was evaluated with a valid and reliable 147-item food frequency questionnaire (FFQ). To measure pain intensity, the Visual Analogue Scale (VAS) was used. Multiple linear regression was applied to assess the association between EDII and musculoskeletal pain. Results: The results of the multiple linear regression analysis indicate that a higher EDII score is associated with greater musculoskeletal pain (β = 0.21; 95% CI: 0.24-1.87; P = 0.003). These results remained significant even after adjusting for covariates such as sex, marital status, height, family size, sleep, BMI, physical activity duration, waist circumference, protector use, and medication use (β = 0.16; 95% CI: 0.11-1.04; P = 0.02). Conclusion: The study findings indicate that a more inflammatory diet may have a direct association with musculoskeletal pain in the elderly. However, further investigations are required to confirm these findings.
Keywords: musculoskeletal pain, empirical dietary inflammatory pattern, elderlies, dietary pattern
Procedia PDF Downloads 2112466 Empirical Superpave Mix-Design of Rubber-Modified Hot-Mix Asphalt in Railway Sub-Ballast
Authors: Fernando M. Soto, Gaetano Di Mino
Abstract:
The design of an unmodified bituminous mixture and of three mixtures containing rubber aggregate added by a dry process (RUMAC) was evaluated using an empirical-analytical approach based on experimental findings obtained in the laboratory with the volumetric mix design by gyratory compaction. A reference dense-graded bituminous sub-ballast mixture (3% air voids and 4% bitumen over the total weight of the mix) and three rubberized mixtures by the dry process (1.5 to 3% rubber by total weight and 5-7% binder) were used, applying the Superpave mix design for level 3 (high-traffic) design rail lines. The railway trackbed section analyzed comprised a compacted granular layer of 19 cm, while a thickness of 12 cm was used for the sub-ballast. In order to evaluate the effect of increasing the specimen density (as a percentage of its theoretical maximum specific gravity), this article illustrates the results of several comparative analyses of the influence of varying the binder-rubber percentages in the sub-ballast layer mix design. This work demonstrates that rubberized blends containing crumb and ground rubber in bituminous asphalt mixtures behave at least as well as, and sometimes better than, conventional asphalt materials. Using the same methodology of volumetric compaction, the densification curves resulting from each mixture were studied. The purpose is to obtain an optimal empirical multiplier of the number of gyrations necessary to reach the same compaction energy as in conventional mixtures. Experimental parameters were obtained by adopting an empirical-analytical method and evaluating the results of the gyratory compaction of bituminous mixtures with an HMA and rubber-aggregate blends. Extensive integrated research has been carried out to assess the suitability of rubber-modified hot-mix asphalt mixtures as a sub-ballast layer in railway underlayment trackbeds. Design optimization was conducted for each mixture, and the volumetric properties were analyzed. An improved and complete manufacturing, compaction, and curing process for these blends is also provided. By adopting this compaction-increase parameter, called the 'beta' factor, rubber-modified mixtures are obtained with densification and workability as uniform as in the conventional mixtures. It is found that, considering the usual bearing capacity requirements in rail track, the optimal rubber content is 2% (by weight) or 3.95% (by volumetric substitution), with a binder content of 6%.
Keywords: empirical approach, rubber-asphalt, sub-ballast, superpave mix-design
Procedia PDF Downloads 3682465 Empirical and Indian Automotive Equity Portfolio Decision Support
Authors: P. Sankar, P. James Daniel Paul, Siddhant Sahu
Abstract:
A brief review of empirical studies on stock market decision support methodology indicates that the field is at the threshold of validating the accuracy of traditional models against fuzzy models, artificial neural networks, and decision trees. Many researchers have attempted to compare these models using various data sets worldwide. However, the research community has yet to reach conclusive confidence in the emerging models. This paper uses automotive sector stock prices from the National Stock Exchange (NSE), India, and analyzes them for intra-sectoral support for stock market decisions. The study identifies the significant variables, and their lags, that affect the price of the stocks, using OLS analysis and decision tree classifiers.
Keywords: Indian automotive sector, stock market decisions, equity portfolio analysis, decision tree classifiers, statistical data analysis
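A minimal sketch of a decision tree classifier on lagged returns, the kind of setup the abstract describes; the price series is synthetic and the lag count is an assumption:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)
prices = np.cumsum(rng.normal(0, 1, 600)) + 100  # synthetic stock price series

# Lagged returns as features; next-day direction as the class label
returns = np.diff(prices) / prices[:-1]
lags = 5
X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
y = (returns[lags:] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.25)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("directional accuracy:", round(clf.score(X_te, y_te), 3))
```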
Procedia PDF Downloads 4852464 Retrospection and Introspection on the Three-Decade Sight Translation Research in China—Bibliometric Analysis of CNKI (1987—2015) Relevant Articles
Authors: Wei Deng
Abstract:
Based on sorting and analyzing the related literature on CNKI for nearly three decades (1987—2015), this paper, adopting the method of bibliometrics, summarizes and reviews domestic research on sight translation from three aspects. The analysis yields the following findings: 1) The majority of research has focused on the noumenon of sight translation; the remaining three main research perspectives are, in descending order, sight translation teaching, sight translation skills and other associated skills, and cognitive research on sight translation. 2) Domestic research has increased significantly in the recent five years, but there is much room for improvement in quality. 3) Non-empirical studies account for a higher proportion, while the empirical studies are methodologically unitary and lack triangulated validation. This paper suggests that sight translation is in sore need of a unified definition and of multilingual, even interdisciplinary, cooperation.
Keywords: bibliometric analysis, perspectives, sight translation, tendency
Procedia PDF Downloads 3362463 Empirical Mode Decomposition Based Denoising by Customized Thresholding
Authors: Wahiba Mohguen, Raïs El’hadi Bekka
Abstract:
This paper presents a denoising method called EMD-Custom, based on Empirical Mode Decomposition (EMD) and a modified customized thresholding function (Custom). EMD is applied to adaptively decompose a noisy signal into intrinsic mode functions (IMFs). Then, all the noisy IMFs are thresholded by applying the presented thresholding function to suppress noise and improve the signal-to-noise ratio (SNR). The method was tested on simulated data and a real ECG signal, and the results were compared to EMD-based signal denoising methods using soft and hard thresholding. The results showed the superior performance of the proposed EMD-Custom denoising over the traditional approaches. The performances were evaluated in terms of SNR in dB and Mean Square Error (MSE).
Keywords: customized thresholding, ECG signal, EMD, hard thresholding, soft-thresholding
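A minimal sketch of EMD-based denoising using the PyEMD package (pip install EMD-signal); the paper's customized thresholding function is not reproduced here, so plain soft thresholding with a per-IMF universal threshold stands in for it:

```python
import numpy as np
from PyEMD import EMD  # pip install EMD-signal

def soft_threshold(x, t):
    # Stand-in for the paper's customized thresholding function
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

t_axis = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t_axis)
noisy = clean + 0.4 * np.random.default_rng(7).normal(size=t_axis.size)

denoised = np.zeros_like(noisy)
for imf in EMD().emd(noisy):
    sigma = np.median(np.abs(imf)) / 0.6745        # MAD noise estimate
    denoised += soft_threshold(imf, sigma * np.sqrt(2 * np.log(imf.size)))

snr = 10 * np.log10(np.sum(clean**2) / np.sum((denoised - clean)**2))
print(f"output SNR = {snr:.1f} dB")
```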
Procedia PDF Downloads 302
2461 Use of Statistical Correlations for the Estimation of Shear Wave Velocity from Standard Penetration Test-N-Values: Case Study of Algiers Area
Authors: Soumia Merat, Lynda Djerbal, Ramdane Bahar, Mohammed Amin Benbouras
Abstract:
Along with shear wave velocity, many soil parameters can be associated with the standard penetration test (SPT), a dynamic in situ experiment. SPT-N data and geophysical data often do not exist for the same area. Statistical analysis of the correlation between these parameters is an alternative method to estimate Vₛ conveniently and without additional investigations or data acquisition. Shear wave velocity is a basic engineering tool required to define the dynamic properties of soils. In many instances, engineers opt for empirical correlations between shear wave velocity (Vₛ) and reliable static field test data, such as standard penetration test (SPT) N-values or cone penetration test (CPT) values, to estimate shear wave velocity or dynamic soil parameters. The relation between Vₛ and SPT-N values for the Algiers area is derived using the collected data, and it is also compared with previously suggested formulas for Vₛ determination by measuring the Root Mean Square Error (RMSE) of each model. The Algiers area is situated in a high-seismicity zone (Zone III per RPA 2003, the Algerian seismic design code [Règlement Parasismique Algérien]), which makes the study important for this region. The principal aim of this paper is to compare field measurements from the down-hole test against the empirical models, to show which of the proposed formulas are applicable for predicting shear wave velocity.
Keywords: empirical models, RMSE, shear wave velocity, standard penetration test
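A minimal sketch of fitting a power-law correlation Vs = a·N^b in log space and scoring it by RMSE against one published form (the Imai-Tonouchi correlation Vs = 97·N^0.314 is used here purely for comparison; the data pairs are invented):

```python
import numpy as np

# Hypothetical (SPT-N, Vs) pairs standing in for the Algiers dataset
N = np.array([5, 10, 15, 22, 30, 41, 50])
Vs = np.array([160, 210, 250, 290, 320, 360, 390])  # m/s

# Fit Vs = a * N**b by least squares on the logs
b, log_a = np.polyfit(np.log(N), np.log(Vs), 1)
a = np.exp(log_a)
print(f"fitted: Vs = {a:.1f} * N^{b:.3f}")

def rmse(pred):
    return np.sqrt(np.mean((pred - Vs) ** 2))

print("RMSE, fitted model :", round(rmse(a * N ** b), 1))
print("RMSE, Imai-Tonouchi:", round(rmse(97.0 * N ** 0.314), 1))
```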
Procedia PDF Downloads 338
2460 Estimation of Shear Wave Velocity from Cone Penetration Test for Structured Busan Clays
Authors: Vinod K. Singh, S. G. Chung
Abstract:
The degree of structuration of Busan clays at the Nakdong River mouth was highly influenced by the depositional environment, i.e., the flow of the river stream and marine regression and transgression during the sedimentation process. As a result, the geotechnical properties also vary with depth as the degree of structuration changes. Thus, in-situ tests such as the cone penetration test (CPT) could not properly predict various geotechnical properties using conventional empirical methods. In this paper, the shear wave velocity (Vs) was measured in the field using the seismic dilatometer. Vs was also measured in the laboratory on high-quality undisturbed and remolded samples using the bender element method, to evaluate the degree of structuration. The degree of structuration was quantitatively defined by the modulus ratio of undisturbed to remolded soil samples, which was found to correlate well with the normalized void ratio (e0/eL), where eL is the void ratio at the liquid limit. It is revealed that an empirical method based on the laboratory results that incorporates e0/eL can predict Vs from the field more accurately. Thereafter, a CPT-based empirical method was developed to estimate the shear wave velocity, taking the effect of structuration into consideration. The developed method was found to predict shear wave velocity reasonably well for Busan clays.
Keywords: level of structuration, normalized modulus, normalized void ratio, shear wave velocity, site characterization
Procedia PDF Downloads 235
2459 Empirical Evaluation of Gradient-Based Training Algorithms for Ordinary Differential Equation Networks
Authors: Martin K. Steiger, Lukas Heisler, Hans-Georg Brachtendorf
Abstract:
Deep neural networks and their variants form the backbone of many AI applications. Based on the so-called residual networks, a continuous formulation of such models as ordinary differential equations (ODEs) has proven advantageous, since different techniques can be applied that significantly increase the learning speed while enabling controlled trade-offs with the resulting error. For the evaluation of such models, high-performance numerical differential equation solvers are used, which also provide the gradients required for training. However, whether classical gradient-based methods are even applicable, or which one yields the best results, has not been discussed yet. This paper aims to remedy this situation by providing empirical results for different applications.
Keywords: deep neural networks, gradient-based learning, image processing, ordinary differential equation networks
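A minimal sketch of an ODE network using the torchdiffeq solver package (an assumption; the paper does not name its tooling), showing that gradients flow through the solver, so classical gradient-based optimizers apply directly:

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq

class ODEFunc(nn.Module):
    """Right-hand side f(t, h) of the ODE replacing a residual stack."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(),
                                 nn.Linear(64, dim))

    def forward(self, t, h):
        return self.net(h)

func = ODEFunc()
h0 = torch.randn(32, 16)                       # batch of initial states
t = torch.tensor([0.0, 1.0])
h1 = odeint(func, h0, t, method="dopri5")[-1]  # state at t = 1

loss = h1.pow(2).mean()
loss.backward()                                # gradients reach the parameters
print(h1.shape, func.net[0].weight.grad is not None)
```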
Procedia PDF Downloads 168
2458 Determinants of Economic Growth in Pakistan: A Structural Vector Auto Regression Approach
Authors: Muhammad Ajmair
Abstract:
This empirical study follows the structural vector autoregression (SVAR) approach of the so-called AB-model of Amisano and Giannini (1997) to check the impact of the relevant macroeconomic determinants on economic growth in Pakistan. Before that, the autoregressive distributed lag (ARDL) bounds testing technique and a time-varying parametric approach, along with a general-to-specific approach, were employed to find the relevant significant determinants of economic growth. To the best of our knowledge, no study in the empirical literature has combined ARDL bounds testing and a time-varying parametric approach with a general-to-specific approach, and the current study bridges this gap. Annual data were taken from the World Development Indicators (2014) for the period 1976-2014. The widely used Schwarz and Akaike information criteria were considered for the lag length in each estimated equation. The main findings of the study are that remittances received, gross national expenditures, and inflation are the most relevant positive and significant determinants of economic growth. Based on these empirical findings, we conclude that the government should focus on factors that augment overall economic growth when formulating any policy for the sectors concerned.
Keywords: economic growth, gross national expenditures, inflation, remittances
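A minimal sketch of an AB-model SVAR in statsmodels, with 'E' marking the contemporaneous parameters to be estimated; the three variables and the synthetic data are illustrative stand-ins, not the study's series:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.svar_model import SVAR

rng = np.random.default_rng(8)
n = 200
e = rng.normal(size=(n, 3))
y = np.zeros((n, 3))
for t in range(1, n):                  # stationary synthetic VAR(1)
    y[t] = 0.5 * y[t - 1] + e[t]
data = pd.DataFrame(y, columns=["growth", "remit", "gne"])

# AB-model restrictions: 'E' marks a free parameter (Amisano-Giannini form)
A = np.asarray([[1, 0, 0], ["E", 1, 0], ["E", "E", 1]])
B = np.asarray([["E", 0, 0], [0, "E", 0], [0, 0, "E"]])

res = SVAR(data, svar_type="AB", A=A, B=B).fit(maxlags=2)
print(res.A)   # estimated contemporaneous structure
print(res.B)
```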
Procedia PDF Downloads 199
2457 Ab Initio Spectroscopic Study of the Electronic Properties of the (BaNa)+ Molecular Ion
Authors: Tahani H. Alluhaybi, Leila Mejrissi
Abstract:
In the present theoretical study, we investigate adiabatically the electronic structure of (BaNa)+ using ab initio calculations. We optimized a large atomic GTO basis set for the Na and Ba atoms. The (BaNa)+ molecular ion is treated as a two-electron system thanks to a non-empirical pseudo-potential approach applied to the Ba and Na cores, together with the Core Polarization Potential (CPP) operator. We then performed Full Configuration Interaction (FCI) calculations. Accordingly, we calculated the adiabatic Potential Energy Curves (PECs) and their spectroscopic constants (well depth De, transition energy Te, equilibrium distance Re, vibrational constant ωe, and anharmonic constant ωexe) for 10 electronic states of Σ+ symmetry. We then determined the vibrational level energies and their spacings, and the electric Permanent Dipole Moments (PDMs).
Keywords: ab initio, dipole moment, non-empirical pseudo-potential, potential energy curves, spectroscopic constants, vibrational energy
Procedia PDF Downloads 115
2456 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented in order to solve that issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx, which limits the prediction of purely empirical models to the region in which they were calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed based on steady-state data collected over the entire operating region of the engine and on a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injected (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. The various advantages, such as high accuracy and robustness at different operating conditions, low computational time, and the lower number of data points required for calibration, establish a platform where the model-based approach can be used for the engine calibration and development process. Moreover, this work focuses on establishing a framework for future model development for various other targets, such as soot, Combustion Noise Level (CNL), NO2/NOx ratio, etc.
Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical
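A minimal sketch of one ensemble learner of the kind the abstract mentions, mapping hypothetical in-cylinder features (burned-zone temperature, O2 concentration, trapped fuel mass) to a synthetic NOx target; nothing here reproduces the paper's calibrated model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n = 800
T_burn = rng.uniform(1800, 2600, n)   # burned-zone temperature (K)
o2 = rng.uniform(0.05, 0.21, n)       # O2 mole fraction (-)
fuel = rng.uniform(10, 80, n)         # trapped fuel mass (mg)
X = np.column_stack([T_burn, o2, fuel])

# Synthetic NOx with Arrhenius-like temperature sensitivity (illustrative only)
nox = o2 * fuel * np.exp(-38000.0 / T_burn) * 1e7 + rng.normal(0, 0.5, n)

model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
print("CV R^2:", cross_val_score(model, X, nox, cv=5).round(3))
```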
Procedia PDF Downloads 1142456 Development of Hydrodynamic Drag Calculation and Cavity Shape Generation for Supercavitating Torpedoes
Authors: Sertac Arslan, Sezer Kefeli
Abstract:
In this paper, the supercavitation phenomenon and supercavity shape design parameters are first explained; then, methods for calculating the drag force of high-speed supercavitating torpedoes are investigated with numerical techniques and verified against empirical studies. In order to reach speeds such as 200 or 300 knots for underwater vehicles, the hydrodynamic hull drag force, which is proportional to the density of water (ρ) and the square of the speed, must be reduced. Conventional heavyweight torpedoes can reach up to ~50 knots with classic underwater hydrodynamic techniques. However, to exceed 50 knots and approach speeds of about 200 knots, the hydrodynamic viscous forces must be reduced or eliminated completely. This requirement revives the supercavitation phenomenon, which can be applied to conventional torpedoes. Supercavitation is the use of cavitation effects to create a gas bubble, allowing the torpedo to move at very high speed through the water inside a fully developed cavitation bubble. When the torpedo moves in a cavitation envelope, thanks to a cavitator in the nose section and a solid-fuel rocket engine in the rear section, such torpedoes can be termed supercavitating torpedoes. There are two types of cavitation: natural cavitation and ventilated cavitation. In this study, a disk cavitator is modeled under natural cavitation, and the supercavitation parameters are studied. Moreover, the drag force is calculated for the disk-shaped cavitator with numerical techniques and compared against empirical studies. Drag forces are calculated with computational fluid dynamics methods and different empirical methods, and the numerical calculation method is developed by comparison with the empirical results. In the verification study, the cavitation number (σ), drag coefficient (CD), drag force (D), and cavity wall velocity (U…
Keywords: cavity envelope, CFD, high speed underwater vehicles, supercavitation, supercavity flows
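A widely used empirical relation for a disk cavitator (a textbook fit, not necessarily the one developed in the paper) is CD(σ) = CD0(1 + σ) with CD0 ≈ 0.82; a minimal sketch:

```python
import numpy as np

RHO_WATER = 998.0  # kg/m^3

def disk_cavitator_drag(d_cav, speed, sigma, cd0=0.82):
    """Drag from the common empirical fit CD(sigma) = CD0 * (1 + sigma),
    with CD0 of about 0.82 for a disk; illustrative, not the paper's numerics."""
    area = np.pi * (d_cav / 2.0) ** 2
    return 0.5 * RHO_WATER * speed ** 2 * area * cd0 * (1.0 + sigma)

# Example: 50 mm disk cavitator at 100 m/s (~195 knots), sigma = 0.02
print(f"D = {disk_cavitator_drag(0.05, 100.0, 0.02):.0f} N")  # roughly 8.2 kN
```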
Procedia PDF Downloads 1882455 Comparison of Receiver Operating Characteristic Curve Smoothing Methods
Authors: D. Sigirli
Abstract:
The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal results, which aim to predict the probability of the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values that discriminate the subjects as diseased or non-diseased. There are infinitely many possible decision thresholds along the continuum of test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from variability, and, being a step function, it can assign different false positive rates to one true positive rate value and vice versa. Besides, because the estimated ROC curve is jagged while the true ROC curve is smooth, the empirical curve underestimates the true ROC curve. Since the true ROC curve is assumed to be smooth, several smoothing methods have been explored: using kernel estimates, using log-concave densities, fitting the parameters of a specified density function to the data by maximum-likelihood fitting of univariate distributions, creating a probability distribution by fitting a specified distribution to the data, and using smooth versions of the empirical distribution functions. In the present paper, we propose a smooth ROC curve estimator based on a boundary-corrected kernel function and compare the performance of ROC curve smoothing methods for diagnostic test results coming from different distributions and different sample sizes. We performed a simulation study with 1000 repetitions to compare the performance of the methods under different scenarios. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than the binormal model when the underlying samples were in fact generated from the normal distribution.
Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve
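A minimal sketch of a kernel-smoothed ROC estimate with a plain Gaussian kernel and Silverman bandwidths (the boundary-corrected kernel proposed in the paper is not reproduced here); the test scores are synthetic:

```python
import numpy as np
from scipy.stats import norm

def kernel_cdf(x, grid, h):
    """Gaussian-kernel-smoothed CDF evaluated on a grid of thresholds."""
    return norm.cdf((grid[:, None] - x[None, :]) / h).mean(axis=1)

rng = np.random.default_rng(10)
healthy = rng.normal(0.0, 1.0, 60)   # scores of non-diseased subjects
diseased = rng.normal(1.2, 1.0, 40)  # scores of diseased subjects

grid = np.linspace(-4, 6, 500)
h0 = 1.06 * healthy.std() * healthy.size ** -0.2   # Silverman bandwidths
h1 = 1.06 * diseased.std() * diseased.size ** -0.2

fpr = 1.0 - kernel_cdf(healthy, grid, h0)   # smooth 1 - specificity
tpr = 1.0 - kernel_cdf(diseased, grid, h1)  # smooth sensitivity

auc = np.trapz(tpr[::-1], fpr[::-1])        # area under the smoothed curve
print(f"kernel-smoothed AUC = {auc:.3f}")
```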
Procedia PDF Downloads 152
2454 The Twin Terminal of Pedestrian Trajectory Based on City Intelligent Model (CIM) 4.0
Authors: Chen Xi, Lao Xuerui, Li Junjie, Jiang Yike, Wang Hanwei, Zeng Zihao
Abstract:
To further promote the development of smart cities, the microscopic "nerve endings" of the City Intelligent Model (CIM) are extended to become more sensitive. In this paper, we develop a pedestrian trajectory twin terminal based on CIM and CNN technology. It uses 5G networks, architectural and geoinformatics technologies, and convolutional neural networks, combined with deep learning networks, for human behaviour recognition models, to provide empirical data such as pedestrian flow data and human behavioural characteristics, and ultimately to form spatial performance evaluation criteria and a spatial performance warning system, making the empirical data accurate and intelligent for prediction and decision making.
Keywords: urban planning, urban governance, CIM, artificial intelligence, convolutional neural network
Procedia PDF Downloads 150