Search results for: data interpolating empirical orthogonal function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 29220

28980 Comparative Settlement Analysis beneath Embankments with Empirical Formulas and Settlement Plate Measurements for Reducing Building Cracks around Embankments

Authors: Safitri Nur Wulandari, M. Ivan Adi Perdana, Prathisto L. Panuntun Unggul, R. Dary Wira Mahadika

Abstract:

In road construction on soft soil, a soil improvement method is needed to increase the bearing capacity of the subgrade so that it can withstand traffic loads. Much of the land in Indonesia consists of soft soil, a type of clay with a consistency ranging from very soft to medium stiff, an undrained shear strength Cu < 0.25 kg/cm², or an estimated NSPT value < 5 blows/ft. This study focuses on analyzing the effect of the preloading load (embankment) on the settlement ratio beneath the embankment, which in turn affects building cracks around the embankment. The method used is a superposition method for the embankment load distribution at 27 locations, with undisturbed soil samples taken at several borehole points in Java and Kalimantan, Indonesia. The results are then correlated with settlement plate monitoring in the field using the Asaoka method. The settlement plate monitoring data were taken from an embankment at Ahmad Yani airport in Semarang at 32 points. The value of Cc (compression index) was based on laboratory test results where available; otherwise, Cc was obtained from the empirical formula of Ardhana and Mochtar (1999). The field monitoring showed almost the same results as the empirical formulation, with a standard deviation of 4%, and the empirical result of this analysis is expressed as a linear formula. The empirical linear formula describing the compression effect of an embankment as high as 4.25 m with a slope of 1:8 is 3.1209x + y = 0.0026, for the same analysis with the initial embankment height in the field. The settlement at the edge of the embankment is not zero: at a quarter of the embankment width the settlement ratio averages 0.951, while at the edge it is 0.049. The influence area around the embankment is approximately 1 meter for a 1:8 slope and 7 meters for a 1:2 slope; within this area building cracks can occur, which should be considered in sustainable development.

Keywords: building cracks, influence area, settlement plate, soft soil, empirical formula, embankment

Procedia PDF Downloads 329
28979 POD and Wavelets Application for Aerodynamic Design Optimization

Authors: Bonchan Koo, Junhee Han, Dohyung Lee

Abstract:

The research attempts to evaluate the accuracy and efficiency of a design optimization procedure that combines a wavelets-based solution algorithm with a proper orthogonal decomposition (POD) database management technique. An aerodynamic design procedure calls for high-fidelity computational fluid dynamics (CFD) simulations and the consideration of a large number of flow conditions and design constraints. Even with significant advances in computing power, the current level of integrated design process requires substantial computing time and resources. POD reduces the degrees of freedom of the full system by conducting singular value decomposition on various field simulations. For additional efficiency, an adaptive wavelet technique is also employed during the POD training period. The proposed design procedure was applied to the optimization of wing aerodynamic performance. Throughout the research, it was confirmed that the POD/wavelets design procedure can significantly reduce the total design turnaround time and is able to capture the detailed complex flow features of the full-order analysis.
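
To illustrate the POD step described in the abstract, the following is a minimal sketch (not the authors' implementation) of extracting a reduced basis from flow-field snapshots with the singular value decomposition; the snapshot matrix, retained-energy threshold and field dimensions are placeholder assumptions.

```python
import numpy as np

# Hypothetical snapshot matrix: each column is one flow-field solution
# (e.g., pressure at n grid points) for one design/flow condition.
n_points, n_snapshots = 5000, 40
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((n_points, n_snapshots))  # placeholder data

# Center the snapshots and take the thin SVD; left singular vectors are POD modes.
mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

# Retain the modes capturing, say, 99% of the snapshot energy.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
basis = U[:, :r]                      # reduced-order basis (n_points x r)

# Any new field can be approximated by projecting onto the reduced basis.
new_field = snapshots[:, [0]]
coeffs = basis.T @ (new_field - mean_field)
reconstruction = mean_field + basis @ coeffs
print(r, np.linalg.norm(new_field - reconstruction) / np.linalg.norm(new_field))
```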

Keywords: POD (Proper Orthogonal Decomposition), wavelets, CFD, design optimization, ROM (Reduced Order Model)

Procedia PDF Downloads 449
28978 The Role of Attachment Styles, Gender Schemas, Sexual Self Schemas, and Body Exposures During Sexual Activity in Sexual Function, Marital Satisfaction, and Sexual Self-Esteem

Authors: Hossein Shareh, Farhad Seifi

Abstract:

The present study examined the role of attachment styles, gender schemas, sexual self-schemas, and body image during sexual activity in sexual function, marital satisfaction, and sexual self-esteem. The sample consisted of 765 married women living in Mashhad, selected through snowball sampling. Participants completed the Adult Attachment Scale (AAS), the Bem Sex Role Inventory (BSRI), the Sexual Self-Schema scale (SSS), the Body Exposure during Sexual Activity Questionnaire (BESAQ), the Female Sexual Function Index (FSFI), a short form of the Sexual Self-Esteem Inventory for Women (SSEI-W-SF), and the Enrich marital satisfaction scale. Data were analyzed with Pearson correlation, hierarchical regression, and case analysis using SPSS-19. The results showed significant correlations (p < 0.05) between attachment and sexual function (r = 0.342), marital satisfaction (r = 0.351), and sexual self-esteem (r = 0.292). Significant correlations (p < 0.05) were also observed for sexual self-schema (r = 0.342) and sexual self-esteem (r = 0.31), and between gender schemas and sexual function (r = 0.352). There was a significant inverse correlation (p < 0.05) between body image during sexual activity and sexual performance (r = 0.41). No significant relationship was found between gender schemas, sexual schemas, or body image and marital satisfaction, nor between gender schemas or body image and sexual self-esteem. The regression results showed that sexual function can be predicted from attachment styles, gender schemas, sexual self-schemas, and body exposure during sexual activity; marital satisfaction can be predicted from attachment style and gender schema; and, to some extent, sexual self-esteem can be predicted from attachment style and gender schemas.

Keywords: attachment styles, gender and sexual schemas, body image, sexual function, marital satisfaction, sexual self-esteem

Procedia PDF Downloads 13
28977 Tests for Zero Inflation in Count Data with Measurement Error in Covariates

Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao

Abstract:

In quality-of-life research, health service utilization is an important determinant of medical resource expenditure on colorectal cancer (CRC) care; a better understanding of increased utilization of health services is essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models can be used, which account for overdispersion or excess zero counts. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Moreover, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean 1.33 (expected frequency of zero counts = 156), suggesting that excess zero counts may exist. We therefore study tests for detecting zero inflation in models with measurement error in covariates. Method: Under a classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and the approximate maximum likelihood estimator (AMLE) can be derived accordingly; it is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect the zero-inflation effect in the ZIP model with measurement error. The proposed test is asymptotically standard normal under H0 and is consistent with the test proposed for the zero-inflation effect when there is no measurement error. Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In real data analysis, with or without considering measurement error in covariates, the existing tests and our proposed test all imply that H0 should be rejected with p-value less than 0.001; i.e., the zero-inflation effect is highly significant, and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant, whereas if measurement error is considered, only another covariate is significant. Moreover, the signs of the coefficient estimates for these two covariates differ between the ZIP regression models with and without measurement error. Conclusion: In our study, the ZIP model should be chosen over the Poisson model when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account will give statistically more reliable and precise information.
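
The abstract's comparison of observed and Poisson-expected zero counts can be reproduced with a few lines; the sketch below uses only the figures quoted above, and the crude normal-approximation check shown is an illustration, not the paper's proposed score test (which additionally corrects for covariate measurement error).

```python
import numpy as np
from scipy import stats

# Figures quoted in the abstract: 206 observed zero consultation counts versus 156
# expected under a Poisson distribution with mean 1.33.
mean_consults = 1.33
observed_zeros = 206
p0 = np.exp(-mean_consults)                 # P(Y = 0) under the Poisson model
n = round(156 / p0)                         # implied sample size (about 590 patients)
expected_zeros = n * p0

# Crude normal-approximation check of the zero excess (ignores the variability from
# estimating the Poisson mean and, unlike the paper's score test, does not correct
# for measurement error in the covariates).
z = (observed_zeros - expected_zeros) / np.sqrt(n * p0 * (1 - p0))
print(f"n ~ {n}, expected zeros ~ {expected_zeros:.0f}, z = {z:.2f}, "
      f"one-sided p = {1 - stats.norm.cdf(z):.1e}")
```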

Keywords: count data, measurement error, score test, zero inflation

Procedia PDF Downloads 264
28976 Sea Surface Trend over the Arabian Sea and Its Influence on the South West Monsoon Rainfall Variability over Sri Lanka

Authors: Sherly Shelton, Zhaohui Lin

Abstract:

In recent decades, the inter-annual variability of summer precipitation over India and Sri Lanka has intensified significantly, with an increased frequency of both abnormally dry and wet summers. Prediction of the inter-annual variability of summer precipitation is therefore crucial and urgent for water management and local agricultural scheduling. However, none of the hypotheses put forward so far adequately explains the monsoon variability and the related factors that affect South West Monsoon (SWM) variability in Sri Lanka. This study focuses on identifying the spatial and temporal variability of SWM rainfall from June to September (JJAS) over Sri Lanka and the associated trend. Monthly rainfall records for 19 stations over Sri Lanka covering 1980-2013 are used to investigate long-term trends in SWM rainfall. Linear trends of atmospheric variables are calculated to understand the drivers behind the changes, based on observed precipitation, sea surface temperature (SST), and atmospheric reanalysis products for the 34 years (1980-2013). Empirical orthogonal function (EOF) analysis is applied to characterize the spatial and temporal behaviour of seasonal SWM rainfall variability and to investigate whether the trend pattern is the dominant mode explaining that variability. Both the areal and station-based precipitation over the country show statistically insignificant decreasing trends, except at a few stations. The first two EOFs of seasonal (JJAS) mean rainfall explain 52% and 23% of the total variance, and the first principal component shows positive loadings of SWM rainfall over the whole landmass, with the strongest positive loading in the western/southwestern part of Sri Lanka. There is a negative correlation (r ≤ -0.3) between the SWM rainfall index (SMRI) and SST in the Arabian Sea and the central Indian Ocean, indicating that lower temperatures in these regions are associated with greater rainfall over the country. The study also shows consistent warming throughout the Indian Ocean. The results show that precipitable water over the country is decreasing with time, which contributes to the reduction of precipitation over the area. In addition, evaporation is weakening over the Arabian Sea, the Bay of Bengal, and the Sri Lankan landmass, which reduces the moisture availability required for SWM rainfall over Sri Lanka. At the same time, the weakening of the SST gradient between the Arabian Sea and the Bay of Bengal can weaken the monsoon circulation, ultimately diminishing the SWM over Sri Lanka. The decreasing trends of moisture, moisture transport, zonal wind, and moisture divergence, together with weakening evaporation over the Arabian Sea during the past decades, have an aggravating influence on the decreasing trend of monsoon rainfall over Sri Lanka.
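
As a sketch of the EOF step, the following shows how EOF spatial patterns and principal-component time series can be computed from a rainfall anomaly matrix via the singular value decomposition; the array shapes and data are placeholder assumptions, not the study's station records.

```python
import numpy as np

# Hypothetical seasonal (JJAS) rainfall field: years x stations (or grid points).
n_years, n_stations = 34, 19
rng = np.random.default_rng(42)
rain = rng.gamma(shape=2.0, scale=50.0, size=(n_years, n_stations))  # placeholder data

# Remove the climatological mean at each station to form anomalies.
anom = rain - rain.mean(axis=0)

# SVD of the anomaly matrix: rows of Vt are the EOF spatial patterns,
# columns of U (scaled by s) are the principal-component time series.
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)

eof1, eof2 = Vt[0], Vt[1]          # leading spatial patterns
pc1 = U[:, 0] * s[0]               # leading principal-component time series
print(f"EOF1 explains {explained[0]:.0%}, EOF2 explains {explained[1]:.0%} of variance")
```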

Keywords: Arabian Sea, moisture flux convergence, South West Monsoon, Sri Lanka, sea surface temperature

Procedia PDF Downloads 112
28975 The Determinants of Trade Flow and Potential between Ethiopia and Group of Twenty

Authors: Terefe Alemu

Abstract:

This study examines the determinants of Ethiopia's trade flows and its trade potential with the G20 countries, i.e., whether Ethiopia was overtrading or still has trade potential, using the trade gravity model. The panel data, covering 10 consecutive years from 2010 to 2019, were obtained from the IMF, WDI, the United Nations Population Division, The Heritage Foundation online database, an online distance calculator, and other sources. The random effects model (REM), which is effective for estimating time-invariant data, was used for the empirical analysis. The results, obtained with STATA software, indicate that Ethiopia has trade potential with seven G20 countries, whereas it overtrades with 12 countries and the EU region. The statistically significant (p < 0.05) determinants of bilateral trade flows between Ethiopia and the G20 countries/region were the population of the G20 countries, the gross domestic product of the G20 countries, the gross domestic product of Ethiopia, and the geographical distance between Ethiopia and the G20 countries. The top five G20 exporters to Ethiopia were China, the United States of America, the European Union, India, and South Africa, whereas the top five G20 importers from Ethiopia were the EU, China, the United States of America, Saudi Arabia, and Germany, respectively. The policy implications are, first, that Ethiopia should maintain the consistency of trade flows with overtraded partners and improve trade with undertraded partners through trade policy revision, and second, that focusing on the trade determinants is recommended to improve trade flows.
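
A minimal sketch of a log-linear gravity specification with a partner-level random intercept (a simplified stand-in for the study's random effects model, estimated here with statsmodels rather than STATA) is given below; all variable values are simulated placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: Ethiopia's trade with partner countries over 2010-2019.
rng = np.random.default_rng(7)
rows = []
for i in range(20):
    partner = f"P{i}"
    dist = rng.uniform(2000, 12000)                      # km, time-invariant
    u_p = rng.lognormal(0.0, 0.2)                        # partner-specific effect
    for t in range(2010, 2020):
        gdp_eth = 60e9 * 1.08 ** (t - 2010)
        gdp_p = rng.uniform(0.5e12, 15e12)
        pop_p = rng.uniform(30e6, 1.4e9)
        trade = (1e-4 * gdp_eth**0.6 * gdp_p**0.5 * pop_p**0.2 / dist**1.1
                 * u_p * rng.lognormal(0.0, 0.3))
        rows.append(dict(partner=partner, year=t, trade=trade, gdp_eth=gdp_eth,
                         gdp_p=gdp_p, pop_p=pop_p, dist=dist))
df = pd.DataFrame(rows)

# Log-linear gravity equation with a partner-level random intercept.
model = smf.mixedlm(
    "np.log(trade) ~ np.log(gdp_eth) + np.log(gdp_p) + np.log(pop_p) + np.log(dist)",
    data=df, groups=df["partner"]).fit()
print(model.summary())
```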

Keywords: trade gravity model, trade determinants, G20, international trade, trade potential

Procedia PDF Downloads 181
28974 Adsorption of Chromium Ions from Aqueous Solution by Carbon Adsorbent

Authors: S. Heydari, H. Sharififard, M. Nabavinia, H. Kiani, M. Parvizi

Abstract:

Rapid industrialization has led to increased disposal of heavy metals into the environment. Activated carbon adsorption has proven to be an effective process for the removal of trace metal contaminants from aqueous media. This paper investigated the chromium adsorption efficiency of a commercial activated carbon. Sorption was studied as a function of activated carbon particle size, activated carbon dose, and initial pH of the solution. Adsorption tests for the effects of these factors were designed with the Taguchi approach; following the Taguchi parameter design methodology, an L9 orthogonal array was used. Analysis of the experimental results showed that the most influential factor was the initial pH of the solution. The optimum conditions for chromium adsorption by activated carbon were found to be an initial feed pH of 6, an adsorbent particle size of 0.412 mm, and an activated carbon dose of 6 g/l. Under these conditions, nearly 100% of the chromium ions were adsorbed by the activated carbon after 2 hours.
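
To make the Taguchi design concrete, the sketch below builds the standard L9 orthogonal array for the three factors studied and computes larger-is-better signal-to-noise ratios; apart from the optimum levels quoted above (pH 6, 0.412 mm, 6 g/l), the factor levels and removal percentages are illustrative assumptions.

```python
import numpy as np

# Standard L9 orthogonal array (3 levels; first three columns used for pH,
# particle size and carbon dose). Level indices are 1..3.
L9 = np.array([[1, 1, 1], [1, 2, 2], [1, 3, 3],
               [2, 1, 2], [2, 2, 3], [2, 3, 1],
               [3, 1, 3], [3, 2, 1], [3, 3, 2]])

# Hypothetical factor levels (only pH 6, size 0.412 mm, dose 6 g/l come from the abstract).
levels = {
    "pH":       [2, 4, 6],
    "size_mm":  [0.412, 0.75, 1.0],
    "dose_g_l": [2, 4, 6],
}

# Hypothetical chromium removal (%) for each of the nine runs.
removal = np.array([55, 62, 70, 60, 72, 58, 80, 65, 90], dtype=float)

# Larger-is-better signal-to-noise ratio for each run (single replicate).
sn = -10 * np.log10(1.0 / removal**2)

# Mean S/N per level of each factor: the level with the highest mean S/N is preferred.
for col, name in enumerate(levels):
    means = [sn[L9[:, col] == lvl].mean() for lvl in (1, 2, 3)]
    best = int(np.argmax(means)) + 1
    print(f"{name}: mean S/N by level = {np.round(means, 2)}, best level = {best}")
```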

Keywords: chromium, adsorption, Taguchi method, activated carbon

Procedia PDF Downloads 375
28973 A Kolmogorov-Smirnov Type Goodness-Of-Fit Test of Multinomial Logistic Regression Model in Case-Control Studies

Authors: Chen Li-Ching

Abstract:

The multinomial logistic regression model is widely used for inferring the relationship between risk factors and a disease with multiple categories. Based on the discrepancy between the nonparametric maximum likelihood estimator and the semiparametric maximum likelihood estimator of the cumulative distribution function, this study proposes a Kolmogorov-Smirnov type test statistic to assess the adequacy of the multinomial logistic regression model for case-control data. A bootstrap procedure is presented to calculate the critical value of the proposed test statistic. Empirical type I error rates and powers of the test are evaluated by simulation studies. Some examples illustrate the implementation of the test.
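
The following sketch illustrates the general idea of a Kolmogorov-Smirnov statistic calibrated by parametric bootstrap when parameters are estimated; it is a generic illustration (here for a simple exponential model), not the paper's case-control estimators.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=200)       # placeholder data

def ks_stat(sample):
    """KS distance between the empirical CDF and the fitted exponential CDF."""
    lam = 1.0 / sample.mean()                  # MLE of the rate
    s = np.sort(sample)
    n = len(s)
    ecdf = np.arange(1, n + 1) / n
    fitted = 1.0 - np.exp(-lam * s)
    d_plus = np.max(ecdf - fitted)
    d_minus = np.max(fitted - (ecdf - 1.0 / n))
    return max(d_plus, d_minus)

observed = ks_stat(x)

# Parametric bootstrap: resample from the fitted model and recompute the statistic,
# because the classical KS table is not valid when parameters are estimated.
B = 999
lam_hat = 1.0 / x.mean()
boot = np.array([ks_stat(rng.exponential(scale=1.0 / lam_hat, size=len(x)))
                 for _ in range(B)])
critical = np.quantile(boot, 0.95)
p_value = np.mean(boot >= observed)
print(f"KS = {observed:.3f}, bootstrap 5% critical value = {critical:.3f}, p = {p_value:.3f}")
```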

Keywords: case-control studies, goodness-of-fit test, Kolmogorov-Smirnov test, multinomial logistic regression

Procedia PDF Downloads 431
28972 Social Media Marketing Efforts and Hospital Brand Equity: An Empirical Investigation

Authors: Abrar R. Al-Hasan

Abstract:

Despite the widespread use of social media by consumers and marketers, empirical research investigating its economic value in the healthcare industry still lags. This study explores the impact of social media marketing efforts on a hospital's brand equity and, ultimately, on consumer response. Using social media data from Twitter and Facebook, along with an online and offline survey methodology, the data are analyzed with logistic regression models on a random sample of 728 residents of Kuwait. The results show that social media marketing efforts (SMME), in terms of use and validation, lead to higher hospital brand equity and, in turn, to patient loyalty and patient visits. The study highlights the impact of SMME on hospital brand equity and patient response. Healthcare organizations should guide their marketing efforts to better manage this new way of marketing and communicating with patients in order to enhance consumer loyalty and financial performance.

Keywords: brand equity, healthcare marketing, patient visit, social media, SMME

Procedia PDF Downloads 144
28971 A Sparse Representation Speech Denoising Method Based on Adapted Stopping Residue Error

Authors: Qianhua He, Weili Zhou, Aiwu Chen

Abstract:

A sparse representation speech denoising method based on an adapted stopping residue error is presented in this paper. First, the cross-correlation between the clean speech spectrum and the noise spectrum is analyzed, and an estimation method is proposed. In the denoising method, an over-complete dictionary of the clean speech power spectrum is learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error is adaptively set according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach is applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech is re-synthesised via the inverse Fourier transform using the reconstructed speech spectrum and the noisy speech phase. The experimental results show that the proposed method outperforms conventional methods in terms of both subjective and objective measures.
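
A minimal sketch of orthogonal matching pursuit with a residual-based stopping criterion is given below; the dictionary and signal are simulated placeholders, and the fixed stopping value stands in for the adaptively estimated stopping residue error described above.

```python
import numpy as np

def omp(D, y, stop_residue):
    """Orthogonal matching pursuit: greedily select dictionary atoms until the
    residual norm drops below the (possibly adaptively chosen) stopping value."""
    residual, support = y.copy(), []
    coeffs, sol = np.zeros(D.shape[1]), np.zeros(0)
    while np.linalg.norm(residual) > stop_residue and len(support) < D.shape[1]:
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit all selected atoms jointly (the "orthogonal" step).
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs

# Hypothetical over-complete dictionary (e.g., learned by K-SVD) and a noisy spectrum.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
x_true = np.zeros(256)
x_true[[3, 40, 100]] = [1.5, -2.0, 0.8]
y = D @ x_true + 0.05 * rng.standard_normal(64)

# In the paper the stopping residue error is adapted from the estimated
# cross-correlation and noise spectrum; here a fixed value stands in for it.
x_hat = omp(D, y, stop_residue=0.5)
print("recovered support:", np.nonzero(x_hat)[0])
```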

Keywords: speech denoising, sparse representation, k-singular value decomposition, orthogonal matching pursuit

Procedia PDF Downloads 478
28970 Automated Process Quality Monitoring and Diagnostics for Large-Scale Measurement Data

Authors: Hyun-Woo Cho

Abstract:

Continuous monitoring of industrial plants is one of the tasks necessary to ensure high-quality final products. In terms of monitoring and diagnosis, it is critical to detect incipient abnormal events in manufacturing processes in order to improve the safety and reliability of the operations involved and to reduce the related losses. In this work, a new multivariate statistical online diagnostic method is presented using a case study. To build reference models, an empirical discriminant model is constructed from various past operation runs. When a fault is detected online, an online diagnostic module is initiated. Finally, the status of the current operating conditions is compared with the reference model to make a diagnostic decision. The performance of the presented framework is evaluated using a dataset from complex industrial processes. It is shown that the proposed diagnostic method outperforms other techniques, especially in terms of the incipient detection of faults.

Keywords: data mining, empirical model, on-line diagnostics, process fault, process monitoring

Procedia PDF Downloads 383
28969 A Study on the Performance of 2-PC-D Classification Model

Authors: Nurul Aini Abdul Wahab, Nor Syamim Halidin, Sayidatina Aisah Masnan, Nur Izzati Romli

Abstract:

Principal component methods are widely applied for reducing large sets of variables in various fields, and Fisher's discriminant function is a popular tool for classification. This research focuses on studying the performance of the principal component-Fisher's discriminant function in classifying rice kernels into their defined classes. The data were collected on the smell or odour of the rice kernels using an odour-detection sensor, the Cyranose; 32 variables were captured by this electronic nose (e-nose). The objective of this research is to measure how well a combined model of principal components and a linear discriminant performs as a classification model. The principal component method was used to reduce all 32 variables to a smaller and more manageable set of components, and the reduced components were then used to develop the Fisher's discriminant function. There are 4 defined classes of rice kernel: Aromatic, Brown, Ordinary and Others. Based on the output of the principal component method, the 32 variables were reduced to only 2 components. According to the classification table from the discriminant analysis, 40.76% of the total observations were correctly classified into their classes by the PC-discriminant function, meaning that the classification model misclassified more than 50% of the observations. In conclusion, the Fisher's discriminant function built on 2 components from PCA (2-PC-D) is not satisfactory for classifying the rice kernels into their defined classes.
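
A minimal sketch of the 2-PC-D idea, PCA reduction to two components followed by a linear discriminant, is given below using scikit-learn; the synthetic data merely stand in for the 32-variable e-nose measurements.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Placeholder stand-in for the e-nose data: 32 sensor variables, 4 kernel classes.
X, y = make_classification(n_samples=400, n_features=32, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 2-PC-D: project onto the first two principal components, then apply
# Fisher's linear discriminant on the reduced representation.
model = make_pipeline(PCA(n_components=2), LinearDiscriminantAnalysis())
model.fit(X_train, y_train)
print(f"classification accuracy with 2 components: {model.score(X_test, y_test):.2%}")
```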

Keywords: classification model, discriminant function, principal component analysis, variable reduction

Procedia PDF Downloads 313
28968 The Antecedent Variables of Government Financial Accounting System (SAKD) Implementation and Its Consequences: Empirical Study on the Device of Regional Coordinating Agency for Development of Cross County, City Region III Central Java Province, Indonesia

Authors: Dona Primasari

Abstract:

This study examines the antecedent variables of Government Financial Accounting System (SAKD) implementation and its consequences. The antecedent variables are decentralization of decision making, adaptation, and manager support; the consequences are officer satisfaction and performance. This research is an empirical test that used convenience sampling for data collection. The data were collected from 167 local government officers in the Regional Coordinating Agency for Development of Cross County/City Region III, Central Java Province. Data were analyzed with a Structural Equation Model (SEM) using the AMOS 18.0 program. The results of hypothesis testing indicate that six of the proposed hypotheses are accepted and two are rejected.

Keywords: decentralization of decision making, adaptation officer, manager support, implementation of Government Accounting Financial System (SAKD), satisfaction and performance officer

Procedia PDF Downloads 372
28967 Improved K-Means Clustering Algorithm Using RHadoop with Combiner

Authors: Ji Eun Shin, Dong Hoon Lim

Abstract:

Data clustering is a common technique used in data analysis and has many applications in fields such as artificial intelligence, pattern recognition, economics, ecology, psychiatry and marketing. K-means clustering is a well-known clustering algorithm that aims to assign a set of data points to a predefined number of clusters. In this paper, we implement the K-means algorithm on the MapReduce framework with RHadoop to make the clustering method applicable to large-scale data. RHadoop is a collection of R packages that allow users to manage and analyze data with Hadoop. The main idea is to introduce a combiner as a function of the map output to decrease the amount of data that needs to be processed by the reducers. The experimental results demonstrate that the K-means algorithm using RHadoop scales well and efficiently processes large data sets on commodity hardware. We also show that our K-means algorithm using RHadoop with a combiner is faster than the regular algorithm without a combiner as the size of the data set increases.
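
A single-process sketch of one K-means MapReduce iteration with a combiner is given below; it is written in Python purely to illustrate the data flow (the study itself uses RHadoop/R), and the data and task splits are placeholders.

```python
import numpy as np
from collections import defaultdict

def mapper(points, centers):
    # Emit (nearest-center index, (point, 1)) for every point.
    for p in points:
        k = int(np.argmin(np.linalg.norm(centers - p, axis=1)))
        yield k, (p, 1)

def combiner(mapped):
    # Locally pre-aggregate per cluster: (sum of points, count).
    # This is what shrinks the data shipped to the reducers.
    acc = defaultdict(lambda: [0.0, 0])
    for k, (p, c) in mapped:
        acc[k][0] = acc[k][0] + p
        acc[k][1] += c
    return [(k, (s, c)) for k, (s, c) in acc.items()]

def reducer(combined):
    # Merge partial sums from all map tasks and output new centers.
    acc = defaultdict(lambda: [0.0, 0])
    for k, (s, c) in combined:
        acc[k][0] = acc[k][0] + s
        acc[k][1] += c
    return {k: s / c for k, (s, c) in acc.items()}

rng = np.random.default_rng(0)
data = rng.standard_normal((10_000, 2)) + np.repeat([[0, 0], [5, 5]], 5_000, axis=0)
centers = data[rng.choice(len(data), 2, replace=False)]

for _ in range(10):                                    # each pass = one MapReduce job
    splits = np.array_split(data, 4)                   # pretend map tasks
    combined = [kv for split in splits for kv in combiner(mapper(split, centers))]
    new = reducer(combined)
    centers = np.array([new[k] for k in sorted(new)])
print(np.round(centers, 2))
```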

Keywords: big data, combiner, K-means clustering, RHadoop

Procedia PDF Downloads 408
28966 Performance Analysis of Multichannel OCDMA-FSO Network under Different Pervasive Conditions

Authors: Saru Arora, Anurag Sharma, Harsukhpreet Singh

Abstract:

To meet the growing need for high data rates and bandwidth, various efforts have been made towards efficient communication systems. Optical code division multiple access (OCDMA) over a free space optics (FSO) communication system is an effective approach for providing transmission at high data rates with a low bit error rate and a low amount of multiple access interference. This paper demonstrates an OCDMA over FSO communication system up to a range of 7000 m at a data rate of 5 Gbps. Initially, an 8-user OCDMA-FSO system is simulated, with pseudo-orthogonal codes used for encoding. A simulative analysis of various performance parameters, such as power and core effective area, that affect the bit error rate (BER) of the system is also carried out. The simulative analysis reveals that the transmission length is limited by the multiple access interference (MAI) effect, which arises when the number of users in the system increases.
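
Since the keywords mention the Q-factor, the standard Gaussian-noise relation between Q-factor and BER used in such simulation studies is sketched below; it is an illustrative relation, not output from the authors' OptiSystem model.

```python
import numpy as np
from scipy.special import erfc

def ber_from_q(q):
    """Standard Gaussian-noise approximation relating Q-factor to bit error rate."""
    return 0.5 * erfc(q / np.sqrt(2.0))

for q in (3, 4, 5, 6, 7):
    print(f"Q = {q}: BER ~ {ber_from_q(q):.2e}")
```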

Keywords: FSO, PSO, bit error rate (BER), OptiSystem simulation, multiple access interference (MAI), Q-factor

Procedia PDF Downloads 350
28965 Strategic Decision Making Practice in Croatia: Which Decision Making Style is More Effective?

Authors: Ivana Bulog

Abstract:

Decision making is a vital part of the business world and of any other field of human endeavour. Which direction a business organization takes, and where that direction leads it, depends on a broad range of decisions made by managers across the managerial structure. Strategic decisions are of the greatest importance for organizational success. Although much empirical research has tried to describe and explain their nature and effectiveness, knowledge about strategic decision making is still incomplete. This paper explores the nature of strategic decision making in a particular setting: Croatian companies. The main focus of this research is on the style that decision makers at the strategic management level follow when making decisions of vital importance for their companies. Two main decision-making styles, which describe the way a decision maker collects and processes available information and performs the activities of the strategic decision-making process, were empirically tested: the rational and the intuitive style. Besides analyzing their presence at the strategic management level in Croatian companies, their effectiveness is analyzed as well. The results show that decision makers at the strategic management level follow both styles roughly equally in order to function effectively, and that the intuitive style is more effective when decision outcomes are considered.

Keywords: decision making style, decision making effectiveness, strategic decisions, management sciences

Procedia PDF Downloads 356
28964 Multiple Relaxation Times in the Gibbs Ensemble Monte Carlo Simulation of Phase Separation

Authors: Bina Kumari, Subir K. Sarkar, Pradipta Bandyopadhyay

Abstract:

The autocorrelation function of the density fluctuation is studied in each of the two phases in a Gibbs Ensemble Monte Carlo (GEMC) simulation of phase separation for a square-well potential with various values of its range. We find that the normalized autocorrelation function is described very well as a linear combination of an exponential function with a time scale τ₂ and a stretched exponential function with a time scale τ₁ and an exponent α. The dependence of (α, τ₁, τ₂) on the parameters of the GEMC algorithm and the range of the square-well potential is investigated and interpreted. We also analyse the issue of how to choose the parameters of the GEMC simulation optimally.
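
A minimal sketch of fitting the stated functional form, an exponential plus a stretched exponential, is given below; constraining the weights to sum to one (so the normalized function equals 1 at t = 0) and the synthetic data are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, tau2, tau1, alpha):
    """Normalized autocorrelation: exponential plus stretched-exponential component,
    with weights constrained to sum to one so that the function equals 1 at t = 0."""
    return a * np.exp(-t / tau2) + (1.0 - a) * np.exp(-(t / tau1) ** alpha)

# Placeholder "measured" autocorrelation data standing in for the GEMC time series.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 200.0, 400)
data = model(t, 0.4, 5.0, 60.0, 0.7) + 0.01 * rng.standard_normal(t.size)

p0 = (0.5, 10.0, 50.0, 0.8)                         # initial guess (a, tau2, tau1, alpha)
popt, pcov = curve_fit(model, t, data, p0=p0,
                       bounds=([0, 1e-3, 1e-3, 0.1], [1, 1e3, 1e3, 1]))
a, tau2, tau1, alpha = popt
print(f"a = {a:.2f}, tau2 = {tau2:.1f}, tau1 = {tau1:.1f}, alpha = {alpha:.2f}")
```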

Keywords: autocorrelation function, density fluctuation, GEMC, simulation

Procedia PDF Downloads 167
28963 Determinants of Non-Performing Loans: An Empirical Investigation of Bank-Specific Micro-Economic Factors

Authors: Amir Ikram, Faisal Ijaz, Qin Su

Abstract:

This empirical study explores the determinants of non-performing loans (NPLs) in the small and medium enterprise (SME) sector held by commercial banks. Primary data were collected through a well-structured survey questionnaire from credit analysts/bankers of 42 branches of 9 commercial banks operating in the district of Lahore (Pakistan) for 2014-2015. Selective descriptive analysis and the Pearson chi-square technique were used to illustrate and evaluate the significance of the different variables affecting NPLs. Branch age, duration of the loan, and credit policy were found to be significant determinants of NPLs. The study proposes that bank-specific and SME-specific microeconomic variables directly influence NPLs, while macroeconomic factors act as intermediary variables. A framework exhibiting the causal nexus of NPLs was also drawn on the basis of the empirical findings. The results elaborate the various origins of NPLs and suggest that they are primarily instigated by the loan sanctioning procedure of the financial institution. The paper also underlines the risk management practices adopted by the banks at branch level to avert the risk of loan default. The empirical investigation of bank-specific microeconomic factors of NPLs with respect to Pakistan's economy is the novelty of the study. Broader strategic policy implications are provided for credit analysts and entrepreneurs.

Keywords: commercial banks, microeconomic factors, non-performing loans, small and medium enterprises

Procedia PDF Downloads 237
28962 Monotonicity of the Jensen Functional for f-Divergences via the Zipf-Mandelbrot Law

Authors: Neda Lovričević, Đilda Pečarić, Josip Pečarić

Abstract:

The Jensen functional in its discrete form is related to the Csiszár divergence functional, this time via its monotonicity property. This approach generalizes previously obtained results that made use of interpolating Jensen-type inequalities. The monotonicity property is then combined with the Zipf-Mandelbrot law and applied to f-divergences for probability distributions that originate from the Csiszár divergence functional: the Kullback-Leibler divergence, the Hellinger distance, the Bhattacharyya distance, the chi-square divergence, and the total variation distance. The Zipf-Mandelbrot and Zipf laws are widely used in various scientific and interdisciplinary fields, and the focus here is on the aspect of mathematical inequalities.
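
For reference, the two central objects can be recalled in standard form (the notation below is generic and not necessarily the authors'):

```latex
% Csiszar f-divergence of discrete distributions p and q, for convex f with f(1) = 0:
D_f(\mathbf{p}, \mathbf{q}) = \sum_{i=1}^{n} q_i \, f\!\left(\frac{p_i}{q_i}\right)

% Zipf-Mandelbrot law with N ranks, shift q >= 0 and exponent s > 0:
f(k; N, q, s) = \frac{1/(k+q)^{s}}{H_{N,q,s}}, \qquad
H_{N,q,s} = \sum_{j=1}^{N} \frac{1}{(j+q)^{s}}, \qquad k = 1, \dots, N
```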

Keywords: Jensen functional, monotonicity, Csiszar divergence functional, f-divergences, Zipf-Mandelbrot law

Procedia PDF Downloads 117
28961 Characteristics of the Severe Rollover Crashes in the UAE Using In-Depth Crash Investigation Data

Authors: Yaser E. Hawas, Md. Didarul Alam

Abstract:

Rollover crashes are complex events entailing interactions of driver, road, vehicle, and environmental factors. The primary objective of this paper is to present an empirical approach that can be used to characterise rollover crashes and to identify some of the important factors that may lead to rollovers. Among the studied factors are the vehicle types and the rollover occurrence rate after hitting various barrier types. The analysis indicated that 71% of the rollover crashes occurred after an impact and that the most common type of rollover initiation is "trip/turn over" (nearly 50%). It was also found that light truck vehicles (LTVs) are more likely to roll over than sedans, and that barrier impacts are associated with an increased incidence of rollover.

Keywords: empirical, hitting barrier, in-depth crash investigation, rollover, severe crash

Procedia PDF Downloads 342
28960 A Destination Marketing Study on Capitalising on the Cultural Link between Ireland and North America Using Social Media

Authors: Colm Barcoe, Garvan Whelan

Abstract:

This study examines how a destination marketing organisation can use social media channels to engage the interest of the US and Canadian markets in a way that maximises the number of visits (and revisits) to Ireland. The research reveals how the cultural link between Ireland and North America is exploited through the use of social media strategies. The findings are based on quantitative and qualitative empirical data obtained through a survey of North American holidaymakers in the pre-trip, during-trip and post-trip phases, coupled with in-depth interviews with 20 industry experts who are responsible for implementing relationship marketing strategies for this segment. The qualitative data were analysed using netnography in order to provide insights into the effectiveness of various social media channels in developing cultural links between Ireland and North American tourists. The findings of this investigation extend an under-researched body of literature pertaining to Ireland and North America, and the empirical evidence will be of value to both academics and industry practitioners.

Keywords: Ireland, marketing, North America, relationship, strategies

Procedia PDF Downloads 165
28959 Nonparametric Path Analysis with Truncated Spline Approach in Modeling Rural Poverty in Indonesia

Authors: Usriatur Rohma, Adji Achmad Rinaldo Fernandes

Abstract:

Nonparametric path analysis is a statistical method that does not rely on the assumption that the form of the regression curve is known. The purpose of this study is to determine the best nonparametric truncated spline path function between linear and quadratic polynomial degrees with 1, 2, and 3 knot points, and to assess the significance of the estimate of the best function in a model of the effect of population migration and agricultural economic growth on rural poverty through the unemployment rate, using the t-test statistic at the jackknife resampling stage. The data used in this study are secondary data obtained from statistical publications. The results show that the best nonparametric truncated spline path model is the quadratic polynomial with 3 knot points. In addition, the significance testing of the best truncated spline path function estimate using jackknife resampling shows that all exogenous variables have a significant influence on the endogenous variables.
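
For reference, a quadratic truncated-spline function with three knots, as selected in the study, can be written in the standard truncated power basis (the notation below is generic):

```latex
% Quadratic truncated spline with knots K_1 < K_2 < K_3,
% where (u)_+^2 = u^2 if u > 0 and 0 otherwise:
f(x) = \beta_0 + \beta_1 x + \beta_2 x^2 + \sum_{k=1}^{3} \gamma_k \,(x - K_k)_+^{2}
```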

Keywords: nonparametric path analysis, truncated spline, linear, quadratic, rural poverty, jackknife resampling

Procedia PDF Downloads 13
28958 Positive Affect, Negative Affect, Organizational and Motivational Factor on the Acceptance of Big Data Technologies

Authors: Sook Ching Yee, Angela Siew Hoong Lee

Abstract:

Big data technologies have become a trend for exploiting business opportunities and providing valuable business insights through the analysis of big data. However, many organizations, especially small and medium enterprises (SMEs), have yet to adopt big data technologies. This study uses the technology acceptance model (TAM) and examines several constructs of the TAM together with additional constructs: positive affect, negative affect, an organizational factor and a motivational factor. The conceptual model proposed in the study is tested on the relationship and influence of positive affect, negative affect, the organizational factor and the motivational factor on the intention to use big data technologies. Empirical data are collected for this study by conducting a survey.

Keywords: big data technologies, motivational factor, negative affect, organizational factor, positive affect, technology acceptance model (TAM)

Procedia PDF Downloads 334
28957 Kinetic Model to Interpret Whistler Waves in Multicomponent Non-Maxwellian Space Plasmas

Authors: Warda Nasir, M. N. S. Qureshi

Abstract:

Whistler waves are right-hand circularly polarized waves and are frequently observed in space plasmas. The low-frequency branch of whistler waves, with frequencies of around 100 Hz and known as lion roars, is frequently observed in the magnetosheath. Another feature of the magnetosheath is the observation of flat-top electron distributions with single as well as two electron populations. In the past, lion roars were studied with a kinetic model using the classical bi-Maxwellian distribution function; however, the observations could not be justified on either quantitative or qualitative grounds. We studied whistler waves using a kinetic model with a non-Maxwellian distribution function, namely the generalized (r,q) distribution function, which is the generalized form of the kappa and Maxwellian distribution functions, for single or two electron populations. We compare our results with Cluster observations and find good quantitative and qualitative agreement between them. At times when lion roars are observed (or not observed) in the data and the bi-Maxwellian could not provide sufficient growth (damping) rates, we showed that when the generalized (r,q) distribution function is employed, the resulting growth (damping) rates match the observations.
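
For reference, a commonly quoted core form of the generalized (r,q) distribution is sketched below with the normalization constant omitted; the exact normalization used by the authors is not reproduced here.

```latex
% Core form of the generalized (r,q) distribution (normalization omitted);
% it reduces to the kappa distribution for r = 0 and to the Maxwellian for r = 0, q -> infinity:
f_{r,q}(v) \;\propto\; \left[\, 1 + \frac{1}{q-1}\left(\frac{v^{2}}{\psi^{2}}\right)^{r+1} \right]^{-q},
\qquad q > 1
```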

Keywords: kinetic model, whistler waves, non-maxwellian distribution function, space plasmas

Procedia PDF Downloads 288
28956 A Practical and Efficient Evaluation Function for 3D Model Based Vehicle Matching

Authors: Yuan Zheng

Abstract:

3D model-based vehicle matching provides a new way to perform vehicle recognition, localization and tracking. Its key is to construct an evaluation function, also called a fitness function, to measure the degree of vehicle matching. Existing fitness functions often perform poorly when clutter and occlusion exist in traffic scenarios. In this paper, we present a practical and efficient fitness function. Unlike existing evaluation functions, the proposed fitness function studies the vehicle matching problem from both local and global perspectives, exploiting the pixel gradient information as well as the silhouette information. In view of the discrepancy between the 3D vehicle model and the real vehicle, a weighting strategy is introduced to treat the fitting of the model's wireframes differently. Additionally, a normalization operation for the model's projection is performed to improve the accuracy of the matching. Experimental results on real traffic videos show that the proposed fitness function is efficient and robust to cluttered backgrounds and partial occlusion.

Keywords: 3D-2D matching, fitness function, 3D vehicle model, local image gradient, silhouette information

Procedia PDF Downloads 374
28955 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton

Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani

Abstract:

Extreme values in observational data can occur due to unusual circumstances. Such data can provide important information that cannot be provided by other data, so their existence needs to be investigated further. One method for obtaining extreme data is the block maxima method, and the distribution of extreme data sets taken with the block maxima method is called the extreme value distribution; here it is the Gumbel distribution with two parameters. The maximum likelihood (ML) estimates of the Gumbel distribution parameters cannot be determined exactly in closed form, so a numerical approach is necessary. The purpose of this study is to estimate the Gumbel distribution parameters with the quasi-Newton BFGS method, a numerical method for unconstrained nonlinear optimization that can be used for parameter estimation of the Gumbel distribution, whose distribution function has a double exponential form. The quasi-Newton BFGS method is a development of Newton's method. Newton's method uses the second derivative to calculate the parameter updates at each iteration; it is modified with the addition of a step length to guarantee convergence when the second derivative requires complex calculations. In the quasi-Newton BFGS method, Newton's method is further modified by updating an approximation of the derivatives at each iteration. Parameter estimation of the Gumbel distribution with this numerical approach is performed by finding the parameter values that maximize the likelihood, for which the gradient vector and the Hessian matrix are needed. This research is a theoretical and applied study based on several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and the parameter estimates of the Gumbel distribution. The estimation method is then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. The results indicate that the intensity of the high rainfall events that occurred in Purworejo District has decreased and that the range of rainfall has also decreased.
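
A minimal sketch of the estimation idea, maximizing the Gumbel log-likelihood with a quasi-Newton BFGS routine, is given below using scipy; the simulated data stand in for the Purworejo rainfall maxima, and the log-scale parameterization is an implementation convenience.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder data standing in for rainfall block maxima (the study's actual
# Purworejo data are not reproduced here).
rng = np.random.default_rng(0)
data = rng.gumbel(loc=80.0, scale=25.0, size=200)

def neg_log_likelihood(params, x):
    """Negative log-likelihood of the Gumbel (maxima) distribution with
    location mu and scale beta > 0 (beta parameterized via its logarithm)."""
    mu, log_beta = params
    beta = np.exp(log_beta)
    z = (x - mu) / beta
    return x.size * np.log(beta) + np.sum(z + np.exp(-z))

# Quasi-Newton BFGS maximization of the likelihood, started from moment estimates.
beta0 = np.sqrt(6.0) * data.std() / np.pi
mu0 = data.mean() - 0.5772 * beta0          # 0.5772 ~ Euler-Mascheroni constant
res = minimize(neg_log_likelihood, x0=[mu0, np.log(beta0)], args=(data,), method="BFGS")
mu_hat, beta_hat = res.x[0], np.exp(res.x[1])
print(f"mu = {mu_hat:.2f}, beta = {beta_hat:.2f}, converged: {res.success}")
```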

Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton

Procedia PDF Downloads 301
28954 An Improved K-Means Algorithm for Gene Expression Data Clustering

Authors: Billel Kenidra, Mohamed Benmohammed

Abstract:

Data mining techniques used for clustering are a subject of active research and assist in biological pattern recognition and the extraction of new knowledge from raw data. Clustering means the act of partitioning an unlabeled dataset into groups of similar objects. Each group, called a cluster, consists of objects that are similar to one another and dissimilar to objects of other groups. Several clustering methods are based on partitional clustering. This category attempts to directly decompose the dataset into a set of disjoint clusters, leading to an integer number of clusters that optimizes a given criterion function. The criterion function may emphasize a local or a global structure of the data, and its optimization is an iterative relocation procedure. The K-means algorithm is one of the most widely used partitional clustering techniques. Since K-means is extremely sensitive to the initial choice of centers, and a poor choice may lead to a local optimum that is considerably inferior to the global optimum, we propose a strategy for initializing the K-means centers. The improved K-means algorithm is compared with the original K-means, and the results show that the efficiency is significantly improved.
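
The abstract does not spell out the proposed initialization strategy; as a well-known example of such a strategy, a k-means++-style seeding sketch is given below (an illustration, not the authors' method).

```python
import numpy as np

def kmeans_pp_init(X, k, rng):
    """k-means++ seeding: spread the initial centers out by sampling each new
    center with probability proportional to the squared distance to the nearest
    center chosen so far."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

# Placeholder stand-in for gene expression profiles (samples x genes).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.5, size=(50, 20)) for m in (0.0, 2.0, 4.0)])
print(kmeans_pp_init(X, k=3, rng=rng).shape)
```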

Keywords: microarray data mining, biological pattern recognition, partitional clustering, k-means algorithm, centroid initialization

Procedia PDF Downloads 167
28953 Dual-Polarized Multi-Antenna System for Massive MIMO Cellular Communications

Authors: Naser Ojaroudi Parchin, Haleh Jahanbakhsh Basherlou, Raed A. Abd-Alhameed, Peter S. Excell

Abstract:

In this paper, a multiple-input/multiple-output (MIMO) antenna design with polarization and radiation-pattern diversity is presented for future smartphones. The design consists of four double-fed circular-ring antenna elements located at different edges of the printed circuit board (PCB), with an FR-4 substrate and an overall dimension of 75×150 mm². The antenna elements are fed by 50-Ohm microstrip lines and provide polarization and radiation-pattern diversity owing to the orthogonal placement of their feed lines. A good impedance bandwidth (S11 ≤ -10 dB) of 3.4-3.8 GHz has been obtained for the smartphone antenna array; for S11 ≤ -6 dB, the bandwidth is 3.25-3.95 GHz. More than 3 dB realized gain and 80% total efficiency are achieved for the single-element radiator. The presented design not only provides the required radiation coverage but also generates the polarization diversity characteristic.

Keywords: cellular communications, multiple-input/multiple-output systems, mobile-phone antenna, polarization diversity

Procedia PDF Downloads 119
28952 A Deterministic Approach for Solving the Hull and White Interest Rate Model with Jump Process

Authors: Hong-Ming Chen

Abstract:

This work considers the resolution of the Hull and White interest rate model with a jump process. A deterministic process is adopted to model the random behavior of interest rate variation as deterministic perturbations depending on the time t. The Brownian motion and jump uncertainties are modeled by the piecewise constant function w(t) and the point function θ(t), respectively. It is shown that the interest rate function and the yield function of the Hull and White interest rate model with a jump process can be obtained by solving a nonlinear semi-infinite programming problem. A relaxed cutting plane algorithm is then proposed for solving the resulting optimization problem. The method is calibrated to 3-month U.S. Treasury securities data and is used to analyze several effects on interest rate prices, including interest rate variability and the negative correlation between stock returns and interest rates. The numerical results illustrate that our approach essentially generates yield functions with minimal fitting errors and small oscillation.
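
For reference, the standard one-factor Hull-White short-rate dynamics without jumps are shown below; the paper replaces the stochastic drivers with deterministic perturbations and adds a jump component.

```latex
% Standard Hull-White short-rate dynamics (no jumps), shown for reference:
dr(t) = \bigl(\theta(t) - a\, r(t)\bigr)\, dt + \sigma\, dW(t)
```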

Keywords: optimization, interest rate model, jump process, deterministic

Procedia PDF Downloads 147
28951 Comparison of Applicability of Time Series Forecasting Models VAR, ARCH and ARMA in Management Science: Study Based on Empirical Analysis of Time Series Techniques

Authors: Muhammad Tariq, Hammad Tahir, Fawwad Mahmood Butt

Abstract:

Purpose: This study attempts to identify the best forecasting methodologies among time series models; the VAR, ARCH and ARMA models are considered for the analysis. Methodology: Benchmarks or parameters such as the adjusted R-squared, the F-statistic, the Durbin-Watson statistic, and the location of the roots are critically and empirically analyzed. The empirical analysis uses time series data on the Consumer Price Index and closing stock prices. Findings: The results show that the VAR model performed better than the other models; both the reliability and the significance of the VAR model are highly appreciable. In contrast, the ARCH model showed very poor forecasting results. The results of the ARMA model appeared contradictory: the AR roots indicated that the model is stationary, while the MA roots indicated that the model is invertible, so forecasts made on the basis of the ARMA model would remain doubtful. It is concluded that the VAR model provides the best forecasting results. Practical Implications: This paper provides empirical evidence for the application of time series forecasting models and therefore provides a basis for selecting the best one.
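
A minimal sketch of fitting and forecasting with a VAR, the model found to perform best, is given below using statsmodels; the bivariate series are simulated placeholders for the CPI and closing-price data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Placeholder bivariate series standing in for the CPI and closing-stock-price data.
rng = np.random.default_rng(0)
n = 200
cpi = np.cumsum(rng.normal(0.2, 0.5, n))
price = 0.6 * cpi + np.cumsum(rng.normal(0.0, 1.0, n))
df = pd.DataFrame({"cpi": np.diff(cpi), "price": np.diff(price)})  # work with differences

# Fit a VAR with a fixed lag order of 4 (the order could also be chosen by AIC),
# then produce a 5-step-ahead forecast from the last 4 observations.
res = VAR(df).fit(4)
print(res.forecast(df.values[-res.k_ar:], steps=5))
```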

Keywords: forecasting, time series, auto regression, ARCH, ARMA

Procedia PDF Downloads 319