Search results for: data envelopment analysis (DEA)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 42276

41616 Quantitative Ranking Evaluation of Wine Quality

Authors: A. Brunel, A. Kernevez, F. Leclere, J. Trenteseaux

Abstract:

Today, wine quality is evaluated only by wine experts, each with their own personal tastes, even if they may agree on some common features. Producers therefore do not have any unbiased way to independently assess the quality of their products. A tool is proposed here to evaluate wine quality through an objective ranking based on the variables entering wine elaboration, analysed with the principal component analysis (PCA) method. Actual climatic data are compared by measuring the relative distance between the wines considered, from which the general ranking is derived.
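
A minimal sketch of how such a PCA-based ranking could be computed (the climatic variables are simulated placeholders, and the distance-to-centroid ranking rule is an assumption for illustration, not the authors' exact procedure):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: one row per wine, columns are climatic/elaboration variables
# (e.g., growing-season temperature, rainfall, sunshine hours, harvest sugar level).
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 6))              # 12 wines, 6 variables (placeholder data)
wines = [f"wine_{i}" for i in range(12)]

Xs = StandardScaler().fit_transform(X)    # put variables on a common scale
pca = PCA(n_components=3)
scores = pca.fit_transform(Xs)            # project wines onto principal components

# Rank wines by their distance to a reference point in PC space
# (here the centroid of all wines; an assumption about the ranking rule).
reference = scores.mean(axis=0)
dist = np.linalg.norm(scores - reference, axis=1)
ranking = sorted(zip(wines, dist), key=lambda t: t[1])
for rank, (name, d) in enumerate(ranking, start=1):
    print(rank, name, round(float(d), 3))
```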

Keywords: wine, grape, weather conditions, rating, climate, principal component analysis, metric analysis

Procedia PDF Downloads 320
41615 Demographic Factors Influencing Employees’ Salary Expectations and Labor Turnover

Authors: M. Osipova

Abstract:

Thanks to the development of information technologies, every sphere of the economy is becoming more and more data-centric, as people generate huge datasets containing information on every aspect of their lives. Applying research on such data to human resources management makes it possible to obtain otherwise scarce statistics on the state of the labor market, including salary expectations and the typical career behavior of potential employees, and this information can become a reliable basis for management decisions. The following article presents the results of career behavior research based on freely accessible resume data. The information used in the study is much broader than what is usually collected in human resources surveys, which is why there is enough data for statistically significant results, even for subgroup analysis.

Keywords: human resources management, salary expectations, statistics, turnover

Procedia PDF Downloads 354
41614 Evaluation of Longitudinal Relaxation Time (T1) of Bone Marrow in Lumbar Vertebrae of Leukaemia Patients Undergoing Magnetic Resonance Imaging

Authors: M. G. R. S. Perera, B. S. Weerakoon, L. P. G. Sherminie, M. L. Jayatilake, R. D. Jayasinghe, W. Huang

Abstract:

The aim of this study was to measure and evaluate the longitudinal relaxation times (T1) in the bone marrow of an Acute Myeloid Leukaemia (AML) patient in order to explore the potential of a prognostic biomarker using Magnetic Resonance Imaging (MRI), which would provide a non-invasive prognostic approach to AML. MR image data were collected in the DICOM format, and MATLAB Simulink software was used for image processing and data analysis. For quantitative MRI data analysis, Regions of Interest (ROIs) were drawn on multiple image slices, encompassing the vertebral bodies of L3, L4, and L5. T1 was evaluated using the T1 maps obtained. The estimated mean T1 value of bone marrow was 790.1 ms at 3T. However, the reported T1 value of healthy subjects (946.0 ms) is significantly higher than the present finding. This suggests that the T1 of bone marrow can be considered a potential prognostic biomarker for AML patients.
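
As a rough illustration of the ROI-based evaluation described above (the T1 maps, slice count, and ROI masks are simulated placeholders; only the healthy reference value of 946.0 ms is taken from the abstract):

```python
import numpy as np

# Hypothetical T1 maps (in ms) for several image slices covering L3-L5,
# together with boolean ROI masks drawn over the vertebral bodies.
rng = np.random.default_rng(1)
t1_maps = rng.normal(800.0, 60.0, size=(3, 128, 128))   # 3 slices of 128x128 pixels
roi_masks = np.zeros_like(t1_maps, dtype=bool)
roi_masks[:, 50:80, 40:90] = True        # placeholder ROIs over the vertebral bodies

roi_values = t1_maps[roi_masks]          # pool all ROI pixels across slices
mean_t1 = roi_values.mean()
healthy_t1 = 946.0                       # reference value for healthy marrow (from the abstract)

print(f"Mean bone-marrow T1 in ROI: {mean_t1:.1f} ms")
print(f"Difference from healthy reference: {mean_t1 - healthy_t1:.1f} ms")
```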

Keywords: acute myeloid leukaemia, longitudinal relaxation time, magnetic resonance imaging, prognostic biomarker

Procedia PDF Downloads 531
41613 Internet Addiction among Students: An Empirical Study in Pondicherry University

Authors: Mashood C., Abdul Vahid K., Ashique C. K.

Abstract:

Technology is growing beyond human expectation, and the internet is one of the most sophisticated products of information technology. It has various advantages, such as connecting the world and simplifying tasks that were difficult in the past. At the same time, it also has drawbacks, namely a lack of authenticity and internet addiction. To investigate the problem of internet addiction, a study was conducted among the postgraduate students of Pondicherry University, and 454 samples were collected. The study focused strictly on identifying internet addiction among students and the influence and interdependence of personality on internet addiction among first- and second-year students. Two major analyses were used: Confirmatory Factor Analysis (CFA) to model internet addiction from the observed data, and logistic regression to identify differences between first-year and second-year students with respect to internet addiction. Before the core analysis, the data were subjected to preliminary tests to check model fit. The empirical findings show that the students of Pondicherry University are highly addicted to the internet, but there is no substantial difference between first-year and second-year students in terms of internet addiction.
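
A minimal sketch of the second analysis step, a logistic regression testing whether year of study is associated with internet-addiction status (the data frame, variable names, and cut-off score are hypothetical; the CFA step is not reproduced here):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: addiction score from the questionnaire and year of study.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "addiction_score": rng.normal(55, 12, size=454),
    "year": rng.choice(["first", "second"], size=454),
})
# Dichotomize addiction status at an assumed cut-off score.
df["addicted"] = (df["addiction_score"] >= 50).astype(int)

# Logistic regression: does year of study predict addiction status?
model = smf.logit("addicted ~ C(year)", data=df).fit(disp=0)
print(model.summary())
# A non-significant coefficient for C(year)[T.second] would mirror the finding
# that first- and second-year students do not differ much in internet addiction.
```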

Keywords: internet addiction, students, Pondicherry University, empirical study

Procedia PDF Downloads 459
41612 Churn Prediction for Telecommunication Industry Using Artificial Neural Networks

Authors: Ulas Vural, M. Ergun Okay, E. Mesut Yildiz

Abstract:

Telecommunication service providers demand accurate and precise prediction of customer churn probabilities to increase the effectiveness of their customer relation services. The large amount of customer data owned by the service providers is suitable for analysis by machine learning methods. In this study, expenditure data of customers are analyzed using an artificial neural network (ANN). The ANN model is applied to the data of customers with different billing durations. The proposed model successfully predicts the churn probabilities with 83% accuracy using only three months of expenditure data, and the prediction accuracy increases up to 89% when nine months of data are used. The experiments also show that the accuracy of the ANN model increases when the feature set is extended with information on changes in the bill amounts.
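
A small sketch of the kind of model described, with scikit-learn's MLPClassifier standing in for the authors' network (the expenditure features and churn labels are simulated; the reported 83-89% accuracies come from the abstract, not from this toy example):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

# Hypothetical data: monthly bill amounts for 9 months plus month-to-month changes.
rng = np.random.default_rng(3)
bills = rng.gamma(shape=2.0, scale=30.0, size=(5000, 9))
changes = np.diff(bills, axis=1)                 # extended features: bill-amount changes
X = np.hstack([bills, changes])
churn = (rng.random(5000) < 0.2).astype(int)     # placeholder churn labels

X_tr, X_te, y_tr, y_te = train_test_split(X, churn, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                                    random_state=0))
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```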

Keywords: customer relationship management, churn prediction, telecom industry, deep learning, artificial neural networks

Procedia PDF Downloads 148
41611 Identifying the Goals of a Multicultural Curriculum for the Primary Education Course

Authors: Fatemeh Havas Beigi

Abstract:

The purpose of this study is to identify the objectives of a multicultural curriculum for the primary education period from the perspective of ethnic teachers, education experts, and cultural professionals. The research paradigm is interpretive, the research approach is qualitative, the research strategy is content analysis, and the sampling method is purposive snowball sampling; the sample of informants, Iranian ethnic teachers and experts, was determined by theoretical saturation at 67 people. Semi-structured individual interviews and focus-group interviews were used to collect information. The data were recorded in audio format, and first-cycle and second-cycle coding were used for the analysis. Based on the data analysis, eleven objectives for a multicultural curriculum were identified: attention to ethnic equality, expansion of educational opportunities and justice, peaceful coexistence, education against ethnic and racial discrimination, attention to human value and dignity, acceptance of religious diversity, familiarity with ethnicities and cultures, promotion of teaching-learning, fostering of self-confidence, building of national unity, and development of cultural commonalities.

Keywords: objective, multicultural curriculum, connect, elementary education period

Procedia PDF Downloads 95
41610 A Fully-Automated Disturbance Analysis Vision for the Smart Grid Based on Smart Switch Data

Authors: Bernardo Cedano, Ahmed H. Eltom, Bob Hay, Jim Glass, Raga Ahmed

Abstract:

The deployment of smart grid devices such as smart meters and smart switches (SS) supported by a reliable and fast communications system makes automated distribution possible, and thus, provides great benefits to electric power consumers and providers alike. However, more research is needed before the full utility of smart switch data is realized. This paper presents new automated switching techniques using SS within the electric power grid. A concise background of the SS is provided, and operational examples are shown. Organization and presentation of data obtained from SS are shown in the context of the future goal of total automation of the distribution network. The description of application techniques, the examples of success with SS, and the vision outlined in this paper serve to motivate future research pertinent to disturbance analysis automation.

Keywords: disturbance automation, electric power grid, smart grid, smart switches

Procedia PDF Downloads 309
41609 Development of Risk Management System for Urban Railroad Underground Structures and Surrounding Ground

Authors: Y. K. Park, B. K. Kim, J. W. Lee, S. J. Lee

Abstract:

To assess the risk of underground structures and the surrounding ground, we collect basic data through engineering measurements, exploration, and surveys, and derive the risk through appropriate analysis and assessment of urban railroad underground structures and the surrounding ground, including station inflow. Basic data are obtained from fiber-optic sensors, MEMS sensors, water quantity/quality sensors, a tunnel scanner, ground penetrating radar, and a light weight deflectometer, and are evaluated to determine whether or not they exceed the proper values. Based on these data, we analyze the risk level of urban railroad underground structures and the surrounding ground. We also develop a risk management system to manage these data efficiently and to provide a convenient interface for data input and output.

Keywords: urban railroad, underground structures, ground subsidence, station inflow, risk

Procedia PDF Downloads 336
41608 ESG and Corporate Financial Performance: Empirical Evidence from Vietnam’s Listed Construction Companies

Authors: My Linh Hoang, Van Dung Hoang

Abstract:

Environmental, Social, and Governance (ESG) factors have become a focus for companies globally, as businesses now pursue long-term sustainable goals rather than operating solely for profit maximization. According to recent research in several countries, companies have shown positive financial performance after improving their ESG performance. The construction industry is one of the most crucial components of social and economic development; as a result, consideration of ESG factors is becoming more and more essential for companies in this sector. In Vietnam, the construction industry has been growing rapidly in recent years; however, how ESG factors affect corporate financial performance in general, and the financial performance of construction corporations in particular, has yet to be discussed and studied extensively. This research examines the relationship between ESG factors and financial indicators in construction companies from 2011 to 2021 through panel data analysis of 75 listed construction companies in Vietnam and provides insights into how these companies can better integrate ESG considerations into their operations to enhance their financial performance. The data were analyzed using three main methods: descriptive statistics, correlation coefficient analysis applied to all dependent, explanatory, and control variables, and panel data analysis. In the panel data analysis, the study uses the fixed effects model (FEM) and the random effects model (REM), with the Hausman test used to select the appropriate model; feasible generalized least squares (FGLS) estimation is performed when autocorrelation or heteroscedasticity appears in the model. The findings indicate that maintaining a strong commitment to ESG principles can have a positive impact on financial performance. This is significant for all parties involved, including investors, company managers, decision-makers, and industry regulators.
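
A compact sketch of the estimation strategy described, with fixed effects, random effects, and a manual Hausman test (the variable names, the `linearmodels` dependency, and the simulated panel are assumptions for illustration, not the study's actual data or specification):

```python
import numpy as np
import pandas as pd
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

# Hypothetical balanced panel: 75 firms x 11 years (2011-2021).
rng = np.random.default_rng(4)
idx = pd.MultiIndex.from_product([range(75), range(2011, 2022)], names=["firm", "year"])
df = pd.DataFrame({
    "esg": rng.uniform(0, 100, len(idx)),
    "size": rng.normal(10, 1, len(idx)),
    "leverage": rng.uniform(0, 1, len(idx)),
}, index=idx)
df["roa"] = 0.02 * df["esg"] + rng.normal(0, 5, len(idx))   # placeholder outcome

exog = df[["esg", "size", "leverage"]]
fe = PanelOLS(df["roa"], exog, entity_effects=True).fit()   # fixed effects (FEM)
re = RandomEffects(df["roa"], exog).fit()                   # random effects (REM)

# Hausman test on the slope coefficients common to both models.
common = fe.params.index
diff = fe.params[common] - re.params[common]
cov_diff = fe.cov.loc[common, common] - re.cov.loc[common, common]
stat = float(diff @ np.linalg.inv(cov_diff) @ diff)
pval = stats.chi2.sf(stat, df=len(common))
print(f"Hausman statistic = {stat:.2f}, p-value = {pval:.3f}")
# A small p-value favours the fixed effects model; otherwise random effects.
```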

Keywords: ESG, financial performance, construction company, Vietnam

Procedia PDF Downloads 93
41607 The Relationship Between Artificial Intelligence, Data Science, and Privacy

Authors: M. Naidoo

Abstract:

Artificial intelligence often requires large amounts of good quality data. In important fields such as healthcare, the training of AI systems predominantly relies on health and personal data; however, the usage of this data is complicated by various layers of law and ethics that seek to protect individuals’ privacy rights. This research seeks to establish the challenges AI and data science pose to (i) informational rights, (ii) privacy rights, and (iii) data protection. To solve some of the issues presented, various methods are suggested, such as embedding values in technological development, proper balancing of rights and interests, and others.

Keywords: artificial intelligence, data science, law, policy

Procedia PDF Downloads 106
41606 Cloud-Based Multiresolution Geodata Cube for Efficient Raster Data Visualization and Analysis

Authors: Lassi Lehto, Jaakko Kahkonen, Juha Oksanen, Tapani Sarjakoski

Abstract:

The use of raster-formatted data sets in geospatial analysis is increasing rapidly. At the same time, geographic data are being introduced into disciplines outside the traditional domain of geoinformatics, like climate change, intelligent transport, and immigration studies. These developments call for better methods to deliver raster geodata in an efficient and easy-to-use manner. Data cube technologies have traditionally been used in the geospatial domain for managing Earth Observation data sets that have strict requirements for effective handling of time series. The same approach and methodologies can also be applied in managing other types of geospatial data sets. A cloud service-based geodata cube, called GeoCubes Finland, has been developed to support online delivery and analysis of the most important geospatial data sets with national coverage. The main target group of the service is the academic research institutes in the country. The most significant aspects of the GeoCubes data repository include the use of multiple resolution levels, a cloud-optimized file structure, and a customized, flexible content access API. Input data sets are pre-processed while being ingested into the repository to bring them into a harmonized form in aspects like georeferencing, sampling resolutions, spatial subdivision, and value encoding. All the resolution levels are created using an appropriate generalization method, selected depending on the nature of the source data set. Multiple pre-processed resolutions enable new kinds of online analysis approaches to be introduced. Analysis processes based on interactive visual exploration can be carried out effectively, as the resolution level closest to the visual scale can always be used. In the same way, statistical analysis can be carried out on resolution levels that best reflect the scale of the phenomenon being studied. Access times remain close to constant, independent of the scale applied in the application. The cloud service-based approach applied in the GeoCubes Finland repository enables analysis operations to be performed on the server platform, thus making high-performance computing facilities easily accessible. The developed GeoCubes API supports this kind of approach for online analysis. The use of cloud-optimized file structures in data storage enables fast extraction of subareas. The access API allows the use of vector-formatted administrative areas and user-defined polygons as definitions of subareas for data retrieval. Administrative areas of the country at four levels are readily available from the GeoCubes platform. In addition to direct delivery of raster data, the service also supports a so-called virtual file format, in which only a small text file is first downloaded. The text file contains links to the raster content on the service platform. The actual raster data are downloaded on demand, for the spatial area and resolution level required at each stage of the application. Through the geodata cube approach, pre-harmonized geospatial data sets are made accessible to new categories of inexperienced users in an easy-to-use manner. At the same time, the multiresolution nature of the GeoCubes repository allows expert users to introduce new kinds of interactive online analysis operations.
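
As a rough illustration of reading a subarea from a cloud-optimized raster at a reduced resolution (the URL, band, and offsets are placeholders, and this uses generic `rasterio` reads rather than the actual GeoCubes content API, whose endpoints are not reproduced here):

```python
import rasterio
from rasterio.windows import Window

# Placeholder URL of a cloud-optimized GeoTIFF; the real GeoCubes layer names
# and API endpoints are not reproduced here.
url = "https://example.org/geocubes/elevation_cog.tif"

with rasterio.open(url) as src:
    # Read a 1024x1024-pixel subarea starting at column/row offset (2048, 2048),
    # decimated to a 256x256 output; for a cloud-optimized file this is served
    # from the overview level closest to the requested resolution, so only a
    # small part of the file is actually transferred.
    window = Window(col_off=2048, row_off=2048, width=1024, height=1024)
    data = src.read(1, window=window, out_shape=(256, 256))

print(data.shape, data.dtype)
```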

Keywords: cloud service, geodata cube, multiresolution, raster geodata

Procedia PDF Downloads 139
41605 The Effect of Job Insecurity on Attitude towards Change and Organizational Citizenship Behavior: Moderating Role of Islamic Work Ethics

Authors: Khurram Shahzad, Muhammad Usman

Abstract:

The main aim of this study is to examine the direct and interactive effects of job insecurity and Islamic work ethics on employees’ attitude towards change and organizational citizenship behavior. Design/methodology/approach: The data were collected from 171 male and female university teachers in Pakistan. Self-administered, close-ended questionnaires were used to collect the data, which were analyzed through correlation and regression analysis. Findings: The analysis showed that job insecurity has a strong negative effect on the attitude towards change of university teachers. In contrast, job insecurity has no significant effect on the organizational citizenship behavior of university teachers. Our results also show that Islamic work ethics does not moderate the relationship between job insecurity and attitude towards change, while a strong moderating effect of Islamic work ethics is found on the relationship between job insecurity and organizational citizenship behavior. Originality/value: This study is the first to examine the relationship of job insecurity with employees’ attitude towards change and organizational citizenship behavior with the moderating effect of Islamic work ethics.
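
A brief sketch of the moderation test described, using an interaction term in an OLS regression (the variable names and simulated scales are illustrative assumptions; the original study worked with correlation and regression on survey data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey scales for 171 university teachers.
rng = np.random.default_rng(5)
n = 171
df = pd.DataFrame({
    "job_insecurity": rng.normal(3.0, 0.8, n),
    "iwe": rng.normal(3.5, 0.6, n),          # Islamic work ethics scale
})
df["ocb"] = 3.0 - 0.1 * df["job_insecurity"] + 0.3 * df["iwe"] + rng.normal(0, 0.5, n)

# Moderation: the interaction term job_insecurity:iwe tests whether Islamic work
# ethics changes the strength of the job insecurity -> OCB relationship.
model = smf.ols("ocb ~ job_insecurity * iwe", data=df).fit()
print(model.summary().tables[1])
```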

Keywords: job security, islamic work ethics, attitude towards change, organizational citizenship behavior

Procedia PDF Downloads 476
41604 Utilizing the Principal Component Analysis on Multispectral Aerial Imagery for Identification of Underlying Structures

Authors: Marcos Bosques-Perez, Walter Izquierdo, Harold Martin, Liangdon Deng, Josue Rodriguez, Thony Yan, Mercedes Cabrerizo, Armando Barreto, Naphtali Rishe, Malek Adjouadi

Abstract:

Aerial imagery is a powerful tool when it comes to analyzing temporal changes in ecosystems and extracting valuable information from the observed scene. It allows us to identify and assess various elements such as objects, structures, textures, waterways, and shadows. To extract meaningful information, multispectral cameras capture data across different wavelength bands of the electromagnetic spectrum. In this study, the collected multispectral aerial images were subjected to principal component analysis (PCA) to identify independent and uncorrelated components or features that extend beyond the visible spectrum captured in standard RGB images. The results demonstrate that these principal components contain unique characteristics specific to certain wavebands, enabling effective object identification and image segmentation.
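
A short sketch of applying PCA to a multispectral image by treating each pixel as an observation across the wavelength bands (the band count, image size, and pixel values are placeholders):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical multispectral image: 5 wavelength bands of 256x256 pixels.
rng = np.random.default_rng(6)
bands, height, width = 5, 256, 256
image = rng.normal(size=(bands, height, width))

# Reshape so each pixel is one observation with `bands` features.
pixels = image.reshape(bands, -1).T          # shape (height*width, bands)

pca = PCA(n_components=3)
components = pca.fit_transform(pixels)       # uncorrelated component values per pixel
component_images = components.T.reshape(3, height, width)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
# Each component image highlights structure tied to particular wavebands and can
# be thresholded or clustered for object identification / segmentation.
```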

Keywords: big data, image processing, multispectral, principal component analysis

Procedia PDF Downloads 178
41603 The Analysis of Differential Item and Test Functioning between Sexes by Studying on the Scholastic Aptitude Test 2013

Authors: Panwasn Mahalawalert

Abstract:

The purpose of this research was to analyze the differential item functioning and differential test functioning of the SWUSAT aptitude test, classified by sex. The data used in this research are secondary data from the Srinakharinwirot University Scholastic Aptitude Test 2013 (SWUSAT). The SWUSAT consists of four subtests: a verbal ability test, a number ability test, a reasoning ability test, and a spatial ability test. The data analysis was carried out in two steps: the first step was descriptive statistics, and in the second step differential item functioning (DIF) and differential test functioning (DTF) were analyzed using the DIFAS program. The research results were as follows: DIF and DTF were analyzed for all 10 tests administered in 2013, and gender-related DIF was found in all 10 tests. The percentage of items flagged for DIF ranged from 6.67% to 60%. In five tests most of the flagged items favored the female group, in two tests most favored the male group, and in three tests the number of items favoring the female group equaled the number favoring the male group. For differential test functioning (DTF), eight tests showed only a small level of DTF.
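
For illustration, a common logistic-regression check for uniform DIF on a single dichotomous item is sketched below (the DIFAS program itself is based on Mantel-Haenszel-type statistics; the simulated item responses and matching on total score are assumptions, not the study's data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical item-level data: one dichotomous item, total test score, and sex.
rng = np.random.default_rng(7)
n = 2000
df = pd.DataFrame({
    "total_score": rng.normal(0, 1, n),
    "female": rng.integers(0, 2, n),
})
logit_p = 1.2 * df["total_score"] + 0.4 * df["female"]   # 0.4 = injected uniform DIF
df["item_correct"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Uniform DIF: after matching on ability (total score), does group membership
# still predict the probability of answering the item correctly?
base = smf.logit("item_correct ~ total_score", data=df).fit(disp=0)
dif = smf.logit("item_correct ~ total_score + female", data=df).fit(disp=0)
lr_stat = 2 * (dif.llf - base.llf)           # likelihood-ratio statistic, 1 df
print("group coefficient:", round(dif.params["female"], 3),
      "LR statistic:", round(lr_stat, 2))
```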

Keywords: aptitude test, differential item functioning, differential test functioning, educational measurement

Procedia PDF Downloads 415
41602 A Parallel Approach for 3D-Variational Data Assimilation on GPUs in Ocean Circulation Models

Authors: Rossella Arcucci, Luisa D'Amore, Simone Celestino, Giuseppe Scotti, Giuliano Laccetti

Abstract:

This work is the first step in a rather wide research activity, carried out in collaboration with the Euro Mediterranean Center for Climate Changes, aimed at introducing scalable approaches in ocean circulation models. We discuss the design and implementation of a parallel algorithm for solving the variational Data Assimilation (DA) problem on Graphics Processing Units (GPUs). The algorithm is based on the fully scalable 3DVar DA model, previously proposed by the authors, which uses a domain decomposition approach (we refer to this model as the DD-DA model). We proceed with an incremental porting process consisting of three distinct stages: requirements and source code analysis, incremental development of CUDA kernels, and testing and optimization. Experiments confirm the theoretical performance analysis based on the so-called scale-up factor, demonstrating that the DD-DA model can be suitably mapped on GPU architectures.
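
A toy, single-domain sketch of the 3DVar cost function such a solver minimizes, J(x) = (x-xb)ᵀB⁻¹(x-xb) + (Hx-y)ᵀR⁻¹(Hx-y) (small dense matrices and SciPy's CPU minimizer stand in for the authors' CUDA, domain-decomposed implementation):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n, m = 50, 20                        # state size and number of observations (toy sizes)

xb = rng.normal(size=n)              # background state
H = rng.normal(size=(m, n)) / n      # linear observation operator
y = H @ xb + rng.normal(scale=0.1, size=m)   # synthetic observations

B_inv = np.eye(n) / 1.0              # inverse background-error covariance (assumed diagonal)
R_inv = np.eye(m) / 0.01             # inverse observation-error covariance

def cost_and_grad(x):
    dxb = x - xb
    dy = H @ x - y
    J = dxb @ B_inv @ dxb + dy @ R_inv @ dy
    grad = 2 * (B_inv @ dxb) + 2 * (H.T @ (R_inv @ dy))
    return J, grad

res = minimize(cost_and_grad, xb, jac=True, method="L-BFGS-B")
print("cost at analysis:", round(res.fun, 4))
```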

Keywords: data assimilation, GPU architectures, ocean models, parallel algorithm

Procedia PDF Downloads 413
41601 Cost Efficiency of European Cooperative Banks

Authors: Karolína Vozková, Matěj Kuc

Abstract:

This paper analyzes recent trends in the cost efficiency of European cooperative banks using efficient frontier analysis. Our methodology is based on stochastic frontier analysis, which is run on a set of 649 European cooperative banks using data from 2006 to 2015. Our results show that the average inefficiency of European cooperative banks has been increasing since 2008, that smaller cooperative banks are significantly more efficient than bigger ones over the whole time period, and that the share of net fee and commission income in total income surprisingly seems to have no impact on bank cost efficiency.
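
For reference, a bare-bones normal/half-normal stochastic cost frontier can be fitted by maximum likelihood as sketched below (the cost-function specification and simulated bank data are assumptions; the study's actual frontier model and variables are not reproduced):

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical bank-level data: log total cost explained by two regressors.
rng = np.random.default_rng(9)
n = 649
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
u = np.abs(rng.normal(0, 0.3, n))            # inefficiency (half-normal, raises cost)
v = rng.normal(0, 0.2, n)                    # statistical noise
y = X @ np.array([1.0, 0.7, 0.3]) + v + u    # cost frontier: composite error eps = v + u

def neg_loglik(theta):
    beta, log_sv, log_su = theta[:3], theta[3], theta[4]
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.sqrt(sv**2 + su**2)
    lam = su / sv
    eps = y - X @ beta
    # density of eps for a cost frontier: (2/sigma) * phi(eps/sigma) * Phi(eps*lam/sigma)
    ll = (np.log(2) - np.log(sigma)
          + stats.norm.logpdf(eps / sigma)
          + stats.norm.logcdf(eps * lam / sigma))
    return -ll.sum()

start = np.array([0.0, 0.0, 0.0, np.log(0.2), np.log(0.2)])
res = optimize.minimize(neg_loglik, start, method="BFGS")
print("beta:", np.round(res.x[:3], 3),
      "sigma_v:", round(float(np.exp(res.x[3])), 3),
      "sigma_u:", round(float(np.exp(res.x[4])), 3))
```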

Keywords: cooperative banks, cost efficiency, efficient frontier analysis, stochastic frontier analysis, net fee and commission income

Procedia PDF Downloads 211
41600 Identification and Classification of Fiber-Fortified Semolina by Near-Infrared Spectroscopy (NIR)

Authors: Amanda T. Badaró, Douglas F. Barbin, Sofia T. Garcia, Maria Teresa P. S. Clerici, Amanda R. Ferreira

Abstract:

Food fortification is the intentional addition of a nutrient to a food matrix and has been widely used to overcome the lack of nutrients in the diet or to increase the nutritional value of food. Fortified food must meet the demand of the population, taking into account their habits and the risks that these foods may pose. Wheat and its by-products, such as semolina, have been strongly indicated for use as a food vehicle since they are widely consumed and used in the production of other foods. These products have been strategically used to add nutrients such as fibers. Methods for analyzing and quantifying these kinds of components are destructive and require lengthy sample preparation and analysis. Therefore, the industry has searched for faster and less invasive methods, such as Near-Infrared Spectroscopy (NIR). NIR is a rapid and cost-effective method; however, it is based on indirect measurements, yielding a large amount of data. Therefore, NIR spectroscopy requires calibration with mathematical and statistical tools (chemometrics) to extract analytical information from the corresponding spectra, such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). PCA is well suited to NIR, since it can handle many spectra at a time and can be used for non-supervised classification. An advantage of PCA, which is also a data reduction technique, is that it reduces the spectral data to a smaller number of latent variables for further interpretation. On the other hand, LDA is a supervised method that searches for the canonical variables (CV) with the maximum separation among different categories. In LDA, the first CV is the direction of the maximum ratio between inter- and intra-class variances. The present work used a portable near-infrared (NIR) spectrometer for the identification and classification of pure and fiber-fortified semolina samples. The fiber was added to semolina at two different concentrations, and after spectra acquisition, the data were used in PCA and LDA to identify and discriminate the samples. The results showed that NIR spectroscopy associated with PCA was very effective in identifying pure and fiber-fortified semolina. Additionally, the classification rates of the samples using LDA were between 78.3% and 95% for calibration and between 75% and 95% for cross-validation. Thus, after multivariate analyses such as PCA and LDA, it was possible to verify that NIR associated with chemometric methods is able to identify and classify the different samples in a fast and non-destructive way.
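
A condensed sketch of the chemometric workflow described, combining PCA for dimension reduction with LDA for supervised classification (the spectra and class offsets are simulated placeholders; the real analysis used measured NIR spectra of pure and fiber-fortified semolina):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical NIR spectra: 90 samples x 200 wavelengths, three classes
# (pure semolina and two fiber-fortification levels).
rng = np.random.default_rng(10)
labels = np.repeat([0, 1, 2], 30)
spectra = rng.normal(size=(90, 200)) + labels[:, None] * 0.15   # small class offsets

# PCA compresses the correlated wavelengths into a few latent variables,
# then LDA finds the directions that best separate the classes.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(model, spectra, labels, cv=5)
print("cross-validated accuracy:", np.round(scores.mean(), 3))
```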

Keywords: chemometrics, fiber, linear discriminant analysis, near-infrared spectroscopy, principal component analysis, semolina

Procedia PDF Downloads 213
41599 Effect of Genuine Missing Data Imputation on Prediction of Urinary Incontinence

Authors: Suzan Arslanturk, Mohammad-Reza Siadat, Theophilus Ogunyemi, Ananias Diokno

Abstract:

Missing data is a common challenge in statistical analyses of most clinical survey datasets. A variety of methods have been developed to enable the analysis of survey data with missing values, and imputation is the most commonly used among them. However, in order to minimize the bias introduced by imputation, one must choose the right imputation technique and apply it to the correct type of missing data. In this paper, we have identified different types of missing values: missing data due to skip pattern (SPMD), undetermined missing data (UMD), and genuine missing data (GMD), and applied rough set imputation on only the GMD portion of the missing data. We have used rough set imputation to evaluate the effect of such imputation on prediction by generating several simulation datasets based on an existing epidemiological dataset (MESA). To measure how well each dataset lends itself to the prediction model (logistic regression), we have used p-values from the Wald test. To evaluate the accuracy of the prediction, we have considered the width of the 95% confidence interval for the probability of incontinence. Both imputed and non-imputed simulation datasets were fit to the prediction model, and both turned out to be significant (p-value < 0.05). However, the Wald score shows a better fit for the imputed compared to the non-imputed datasets (28.7 vs. 23.4). The average confidence interval width decreased by 10.4% when the imputed dataset was used, meaning higher precision. The results show that using the rough set method for missing data imputation on GMD improves the predictive capability of the logistic regression. Further studies are required to generalize this conclusion to other clinical survey datasets.
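
A small sketch of the evaluation step, fitting the logistic regression and computing a coefficient's Wald statistic together with the width of a 95% confidence interval for a predicted probability (the simulated data only illustrate the mechanics; the rough set imputation itself and the MESA dataset are not reproduced):

```python
import numpy as np
import statsmodels.api as sm
from scipy.special import expit

def fit_and_evaluate(X, y, x_new):
    """Fit logistic regression; return the Wald chi-square of the first predictor
    and the width of the 95% CI for the predicted probability at x_new."""
    Xc = sm.add_constant(X)
    res = sm.Logit(y, Xc).fit(disp=0)
    wald = (res.params[1] / res.bse[1]) ** 2          # Wald chi-square for predictor 1
    x0 = np.concatenate([[1.0], x_new])
    lin = x0 @ res.params
    se = np.sqrt(x0 @ res.cov_params() @ x0)          # delta method on the linear predictor
    lo, hi = expit(lin - 1.96 * se), expit(lin + 1.96 * se)
    return wald, hi - lo

rng = np.random.default_rng(11)
n = 2000
X = rng.normal(size=(n, 2))
y = (rng.random(n) < expit(0.8 * X[:, 0] - 0.4 * X[:, 1])).astype(int)

wald, width = fit_and_evaluate(X, y, x_new=np.array([0.5, -0.5]))
print(f"Wald chi-square: {wald:.1f}, 95% CI width for predicted probability: {width:.3f}")
```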

Keywords: rough set, imputation, clinical survey data simulation, genuine missing data, predictive index

Procedia PDF Downloads 169
41598 Settlement Analysis of Axially Loaded Bored Piles: A Case History

Authors: M. Mert, M. T. Ozkan

Abstract:

Pile load tests should be applied to check bearing capacity calculations and to determine the settlement of the pile corresponding to the test load. Strain gauges can be installed in a pile in order to determine the shaft resistance of the pile in each soil layer. Detailed results can be obtained by means of strain gauges placed at certain levels in test piles. In the scope of this study, pile load test data obtained from two different projects are examined. Instrumented static pile load tests were performed on a total of 7 bored test piles of different diameters (80 cm, 150 cm, and 200 cm) and different lengths (between 30-76 m) at two different project sites. Settlement analysis of the test piles is carried out using several load transfer methods and the finite element method. Plaxis 3D, a three-dimensional finite element program, is also used for the settlement analysis of the test piles. In this study, the bearing capacities of the test piles are first determined and compared with the strain gauge data required for settlement analysis. Then, settlement values of the test piles are estimated using load transfer methods developed in recent years and the finite element method. The aim of this study is to show the similarities and differences between the results obtained from the settlement analysis methods and the instrumented pile load tests.

Keywords: failure, finite element method, monitoring and instrumentation, pile, settlement

Procedia PDF Downloads 170
41597 A Data Mining Approach for Analysing and Predicting the Bank's Asset Liability Management Based on Basel III Norms

Authors: Nidhin Dani Abraham, T. K. Sri Shilpa

Abstract:

Asset liability management is an important aspect of the banking business. Moreover, today’s banking is governed by Basel III, which strictly regulates counterparty default. This paper focuses on the prediction and analysis of counterparty default risk, a type of risk that occurs when customers fail to repay the amount owed to the lender (a bank or other financial institution). This paper proposes an approach to reduce the counterparty risk occurring in financial institutions using an appropriate data mining technique, and thus predicts the occurrence of non-performing assets (NPA); it also helps in asset building and restructuring quality. Liability management is likewise very important for carrying out banking business, and a suitable technique is required to understand and analyze the depth of a bank’s liabilities. For that purpose, a data mining technique is used to predict the dormant behaviour of various deposit customers. Various models are implemented, and the results for savings bank deposit customers are analyzed. All these data are cleaned using a data cleansing approach from the bank data warehouse.

Keywords: data mining, asset liability management, BASEL III, banking

Procedia PDF Downloads 555
41596 Estimation of Geotechnical Parameters by Comparing Monitoring Data with Numerical Results: Case Study of Arash–Esfandiar-Niayesh Under-Passing Tunnel, Africa Tunnel, Tehran, Iran

Authors: Aliakbar Golshani, Seyyed Mehdi Poorhashemi, Mahsa Gharizadeh

Abstract:

Under-passing tunnels are strongly influenced by the surrounding soils. There are some complexities in specifying real soil behavior, owing to the fact that many uncertainties exist in soil properties and, additionally, that soil constitutive models may be inappropriate. These factors may cause settlements computed in numerical analysis to be incompatible with the values obtained during actual construction. This paper reports a case study on a specific tunnel constructed by NATM. The tunnel has a depth of 11.4 m, a height of 12.2 m, and a width of 14.4 m with 2.5 lanes. The numerical modeling was based on a 2D finite element program, and the soil behavior was modeled with the hardening soil model. According to the field observations, the numerically estimated settlement at the ground surface was approximately four times larger than the measured one after the entire installation of the initial lining, indicating that some unknown factors affect the values. Consequently, the geotechnical parameters are accurately revised by a numerical back-analysis using laboratory and field test data and the obtained monitoring data. The result confirms that the soil parameters are typically conservatively underestimated and, additionally, that the constitutive models cannot be applied properly to all soil conditions.

Keywords: NATM tunnel, initial lining, laboratory test data, numerical back-analysis

Procedia PDF Downloads 361
41595 The Impact of Corporate Social Responsibility and Relationship Marketing on Relationship Maintainer and Customer Loyalty by Mediating Role of Customer Satisfaction

Authors: Anam Bhatti, Sumbal Arif, Mariam Mehar, Sohail Younas

Abstract:

CSR has become one of the important instruments for satisfying customers. The objective of this research is to measure CSR, relationship marketing, and customer satisfaction. In Pakistan, there is not enough research on the effect of CSR and relationship marketing on relationship maintainer and customer loyalty. A deductive approach and a survey method are used as the research approach and research strategy, respectively. The research design is a descriptive, quantitative study. For data collection, a questionnaire with semantic differential and seven-point scales is adopted. Data were collected using the non-probability convenience sampling technique, with a sample size of 400. For confirmatory factor analysis, structural equation modeling, and mediation analysis, regression analysis in the Amos software was used. Strong empirical evidence supports that customers’ perception of CSR performance is highly influenced by their values.

Keywords: CSR, relationship marketing, relationship maintainer, customer loyalty, customer satisfaction

Procedia PDF Downloads 484
41594 The Influence of Intellectual Capital Disclosures on Market Capitalization Growth

Authors: Nyoman Wijana, Chandra Arha

Abstract:

Disclosure of Intellectual Capital (IC) is a presentation of corporate information about assets that are not recorded in the financial statements. Such disclosure is very helpful because it provides information on the company’s intangible assets. In the new economic era, a company’s intangible assets will determine its competitive advantage. This study aimed to examine the effect of IC disclosures on market capitalization growth over the ten-year observation period 2002-2011. The one hundred companies with the largest market capitalization in 2011 were traced back over the last ten years, using data from 2011, 2008, 2005, and 2002. The method used for acquiring the data is content analysis. The analytical method used is Ordinary Least Squares (OLS), and the analysis tool is EViews 7, whose Pooled Least Squares parameter estimation is specifically designed for panel data. The results of the analysis show that disclosure levels affect market capitalization growth inconsistently across the years of observation. The results of this study are expected to motivate public companies in Indonesia to make more voluntary IC disclosures and to encourage regulators to issue comprehensive regulations so that all categories of IC are disclosed by companies.

Keywords: IC disclosures, market capitalization growth, analytical method, OLS

Procedia PDF Downloads 342
41593 Perception-Oriented Model Driven Development for Designing Data Acquisition Process in Wireless Sensor Networks

Authors: K. Indra Gandhi

Abstract:

Wireless Sensor Networks (WSNs) have always been characterized by application-specific sensing, relaying, and collection of information for further analysis. However, software development was not considered a separate entity in this process of data collection, which has posed severe limitations on software development for WSNs. Software development for WSNs is a complex process, since the components involved are data-driven, network-driven, and application-driven in nature. This implies that there is a tremendous need for separation of concerns from the software development perspective. A layered approach for developing the data acquisition design, based on Model Driven Development (MDD), has been proposed, as the sensed data collection process itself varies depending upon the application taken into consideration. This work focuses on the layered view of the data acquisition process so as to ease software development. A metamodel has been proposed that enables reusability and the realization of the software as an adaptable component for WSN systems. Further, observation of users’ perception indicates that the proposed model helps improve programmers’ productivity by realizing the collaborative system involved.

Keywords: data acquisition, model-driven development, separation of concern, wireless sensor networks

Procedia PDF Downloads 435
41592 Dose Evaluations with SNAP/RADTRAD for Loss of Coolant Accidents in a BWR6 Nuclear Power Plant

Authors: Kai Chun Yang, Shao-Wen Chen, Jong-Rong Wang, Chunkuan Shih, Jung-Hua Yang, Hsiung-Chih Chen, Wen-Sheng Hsu

Abstract:

In this study, we build a RADionuclide Transport, Removal And Dose Estimation/Symbolic Nuclear Analysis Package (SNAP/RADTRAD) model of the Kuosheng Nuclear Power Plant, based on the Final Safety Analysis Report (FSAR) and other plant data. It is used to estimate the radiation dose at the Exclusion Area Boundary (EAB), the Low Population Zone (LPZ), and the control room for the ‘release from the containment’ case in a Loss Of Coolant Accident (LOCA). The RADTRAD analysis results show that the evaluated doses at the EAB, the LPZ, and the control room are close to the FSAR data, and all of the doses are lower than the regulatory limits. Finally, we perform a sensitivity analysis and observe that the evaluated doses increase as the intake rate of the control room increases.

Keywords: RADTRAD, radionuclide transport, removal and dose estimation, SNAP, symbolic nuclear analysis package, boiling water reactor, NPP, Kuosheng

Procedia PDF Downloads 343
41591 Moved by Music: The Impact of Music on Fatigue, Arousal and Motivation During Conditioning for High to Elite Level Female Artistic Gymnasts

Authors: Chante J. De Klerk

Abstract:

The potential of music to facilitate superior performance during high to elite level gymnastics conditioning instigated this research. A team of seven gymnasts completed a fixed conditioning programme eight times, alternating the two variable conditions. Four sessions of each condition were conducted: without music (session 1), with music (session 2), without music (3), with music (4), without music (5), and so forth. Quantitative data were collected in both conditions through physiological monitoring of the gymnasts, and administration of the Situational Motivation Scale (SIMS). Statistical analysis of the physiological data made it possible to quantify the presence as well as the magnitude of the musical intervention’s impact on various aspects of the gymnasts' physiological functioning during conditioning. The SIMS questionnaire results were used to evaluate if their motivation towards conditioning was altered by the intervention. Thematic analysis of qualitative data collected through semi-structured interviews revealed themes reflecting the gymnasts’ sentiments towards the data collection process. Gymnast-specific descriptions and experiences of the team as a whole were integrated with the quantitative data to facilitate greater dimension in establishing the impact of the intervention. The results showed positive physiological, motivational, and emotional effects. In the presence of music, superior sympathetic nervous activation, and energy efficiency, with more economic breathing, dominated the physiological data. Fatigue and arousal levels (emotional and physiological) were also conducive to improved conditioning outcomes compared to conventional conditioning (without music). Greater levels of positive affect and motivation emerged in analysis of both the SIMS and interview data sets. Overall, the intervention was found to promote psychophysiological coherence during the physical activity. In conclusion, a strategically constructed musical intervention, designed to accompany a gymnastics conditioning session for high to elite level gymnasts, has ergogenic potential.

Keywords: arousal, fatigue, gymnastics conditioning, motivation, musical intervention, psychophysiological coherence

Procedia PDF Downloads 94
41590 Ethnic and National Determinants in the Process of Building Peace in Afghanistan After the Withdrawal of Western Forces in 2021

Authors: Małgorzata Cichy

Abstract:

Afghanistan is a source of conflicts that affect security on a global scale. The role of ethnic and national determinants in the peacebuilding process in this country remains an extremely important factor in this respect. Research methods include literature and data analysis (scientific literature, documents of governmental and non-governmental organizations, statistical data and media reports), institutional and legal analysis, as well as the decision-making method. The main objective of the research is to provide a comprehensive answer to the question of how ethnic and national factors affect the process of building peace in Afghanistan after 2021 and what impact this has on international security.

Keywords: Afghanistan, Pashtuns, peace, Taliban

Procedia PDF Downloads 97
41589 Performance Analysis of Hierarchical Agglomerative Clustering in a Wireless Sensor Network Using Quantitative Data

Authors: Tapan Jain, Davender Singh Saini

Abstract:

Clustering is a useful mechanism in wireless sensor networks which helps to cope with scalability and data transmission problems. The basic aim of our research work is to provide efficient clustering using hierarchical agglomerative clustering (HAC). If the distance between the sensing nodes is calculated from their locations, the clustering is quantitative HAC. This paper compares the various agglomerative clustering techniques applied in a wireless sensor network using quantitative data. The simulations are done in MATLAB, and the comparisons between the different protocols are made using dendrograms.
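
A minimal sketch of quantitative HAC on node coordinates with a dendrogram, using SciPy in place of the MATLAB toolchain mentioned above (the node positions and cluster count are placeholders):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

# Hypothetical sensor-node coordinates in a 100 m x 100 m field.
rng = np.random.default_rng(12)
coords = rng.uniform(0, 100, size=(40, 2))

# Agglomerative clustering on Euclidean distances between node locations;
# 'average', 'single', 'complete' or 'ward' linkages can be compared.
Z = linkage(coords, method="average", metric="euclidean")
clusters = fcluster(Z, t=5, criterion="maxclust")   # cut the tree into 5 clusters
print("cluster sizes:", np.bincount(clusters)[1:])

# The merge tree itself; with no_plot=False this draws the dendrogram via matplotlib.
tree = dendrogram(Z, no_plot=True)
print("leaf order:", tree["leaves"][:10], "...")
```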

Keywords: routing, hierarchical clustering, agglomerative, quantitative, wireless sensor network

Procedia PDF Downloads 618
41588 Algorithms used in Spatial Data Mining GIS

Authors: Vahid Bairami Rad

Abstract:

Extracting knowledge from spatial data such as GIS data is important for reducing the data and extracting information. Therefore, the development of new techniques and tools that support humans in transforming data into useful knowledge has been the focus of the relatively new and interdisciplinary research area ‘knowledge discovery in databases’. Thus, we introduce a set of database primitives, or basic operations, for spatial data mining which are sufficient to express most of the spatial data mining algorithms from the literature. This approach has several advantages. Similar to the relational standard language SQL, the use of standard primitives will speed up the development of new data mining algorithms and will also make them more portable. We introduce a database-oriented framework for spatial data mining which is based on the concepts of neighborhood graphs and paths. A small set of basic operations on these graphs and paths is defined as database primitives for spatial data mining. Furthermore, techniques to efficiently support the database primitives in a commercial DBMS are presented.
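
As an illustration of the neighborhood-graph idea underlying such primitives, the following sketch builds a distance-based neighborhood graph over point objects (a KD-tree stands in for spatial indexing inside a DBMS; the objects and radius are placeholders):

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical spatial objects represented by point coordinates (e.g., centroids
# of GIS features), plus a neighborhood predicate "within 10 distance units".
rng = np.random.default_rng(13)
points = rng.uniform(0, 100, size=(200, 2))
radius = 10.0

tree = cKDTree(points)
pairs = tree.query_pairs(r=radius)            # neighborhood relation as object pairs

# Adjacency list = the neighborhood graph; paths in this graph are the second primitive.
graph = {i: set() for i in range(len(points))}
for i, j in pairs:
    graph[i].add(j)
    graph[j].add(i)

print("edges:", len(pairs), "average degree:",
      round(sum(len(v) for v in graph.values()) / len(points), 2))
```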

Keywords: spatial data base, knowledge discovery database, data mining, spatial relationship, predictive data mining

Procedia PDF Downloads 462
41587 Partial Least Squares Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
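
A brief sketch of PLS regression on data with far more correlated predictors than observations (simulated in the spirit of NIR calibration or CNA profiles; scikit-learn's PLSRegression stands in for the sparse, penalized variant proposed in the abstract):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Hypothetical high-dimensional, highly correlated data: 60 samples, 500 predictors
# generated from a few shared latent factors plus noise.
rng = np.random.default_rng(14)
latent = rng.normal(size=(60, 4))
X = latent @ rng.normal(size=(4, 500)) + 0.05 * rng.normal(size=(60, 500))
y = latent @ np.array([1.0, -0.5, 0.3, 0.0]) + 0.1 * rng.normal(size=60)

# PLS extracts latent components that maximize covariance with the response,
# which is what makes it workable when predictors outnumber observations.
pls = PLSRegression(n_components=4)
r2 = cross_val_score(pls, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", np.round(r2.mean(), 3))
```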

Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data

Procedia PDF Downloads 51