Search results for: high-dimensional data analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 42068

41468 Hierarchically Modeling Cognition and Behavioral Problems of an Under-Represented Group

Authors: Zhidong Zhang, Zhi-Chao Zhang

Abstract:

This study examines adolescent psychological and behavioral problems. The Achenbach System of Empirically Based Assessment (ASEBA) was used as the instrument. The problem framework consists of internalizing, externalizing, and social behavioral problems, theoretically developed from about 113 items plus relevant background variables. The sample consisted of 1,975 sixth- and seventh-grade students in Northeast China. Stratified random sampling was used to collect the data, meaning that samples were drawn from different school districts, schools, and classes. Because the researchers examined both macro- and micro-level effects, multilevel analysis techniques were used in the data analysis. Part of the results indicated that background variables such as extracurricular activities were directly related to students' internalizing problems.
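
Where the abstract mentions multilevel analysis, a random-intercept model with students nested in schools is the usual starting point. A minimal sketch with statsmodels, assuming hypothetical column names (internalizing, extracurricular, school) and data file:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per student.
df = pd.read_csv("aseba_survey.csv")  # assumed file

# Random-intercept multilevel model: students nested in schools.
# Fixed effect: extracurricular activity; random effect: school.
model = smf.mixedlm("internalizing ~ extracurricular", df, groups=df["school"])
result = model.fit()
print(result.summary())
```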

Keywords: behavioral problems, anxious/depressed problems, internalizing problems, mental health, under-represented groups, empirically-based assessment, hierarchical modeling, ASEBA, multilevel analysis

Procedia PDF Downloads 603
41467 ISMARA: Completely Automated Inference of Gene Regulatory Networks from High-Throughput Data

Authors: Piotr J. Balwierz, Mikhail Pachkov, Phil Arnold, Andreas J. Gruber, Mihaela Zavolan, Erik van Nimwegen

Abstract:

Understanding the key players and interactions in the regulatory networks that control gene expression and chromatin state across different cell types and tissues in metazoans remains one of the central challenges in systems biology. Our laboratory has pioneered a number of methods for automatically inferring core gene regulatory networks directly from high-throughput data by modeling gene expression (RNA-seq) and chromatin state (ChIP-seq) measurements in terms of genome-wide computational predictions of regulatory sites for hundreds of transcription factors and micro-RNAs. These methods have now been completely automated in an integrated webserver called ISMARA that allows researchers to analyze their own data by simply uploading RNA-seq or ChIP-seq data sets and provides results in an integrated web interface as well as in downloadable flat form. For any data set, ISMARA infers the key regulators in the system, their activities across the input samples, the genes and pathways they target, and the core interactions between the regulators. We believe that by empowering experimental researchers to apply cutting-edge computational systems biology tools to their data in a completely automated manner, ISMARA can play an important role in developing our understanding of regulatory networks across metazoans.
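
The core idea of modeling expression in terms of predicted regulatory sites can be pictured as a regularized linear model: expression of gene g in sample s is a sum over motifs of site counts times unknown motif activities. A toy sketch in the spirit of that approach (not ISMARA's actual implementation), with all matrices simulated:

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_samples, n_motifs = 2000, 6, 50

N = rng.poisson(1.0, size=(n_genes, n_motifs)).astype(float)  # predicted sites per promoter
E = rng.normal(size=(n_genes, n_samples))                      # mean-centered expression

# Ridge regression per sample: E ~ N @ A, solving for motif activities A.
lam = 10.0  # regularization strength (assumed)
G = N.T @ N + lam * np.eye(n_motifs)
A = np.linalg.solve(G, N.T @ E)   # shape: (n_motifs, n_samples)

# Rank motifs by the variance of their inferred activity across samples.
top = np.argsort(A.var(axis=1))[::-1][:10]
print("Most variable motif indices:", top)
```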

Keywords: gene expression analysis, high-throughput sequencing analysis, transcription factor activity, transcription regulation

Procedia PDF Downloads 65
41466 Psychometric Properties and Factor Structure of the College Readiness Questionnaire

Authors: Muna Al-Kalbani, Thuwayba Al Barwani, Otherine Neisler, Hussain Alkharusi, David Clayton, Humaira Al-Sulaimani, Mohammad Khan, Hamad Al-Yahmadi

Abstract:

This study describes the psychometric properties and factor structure of the University Readiness Survey (URS). Survey data were collected from a sample of 2,652 students at Sultan Qaboos University. Exploratory factor analysis identified ten significant factors underlying the structure. Confirmatory factor analysis showed a good fit to the data, with indices for the revised model of χ2(df = 1669) = 6093.4, CFI = 0.900, GFI = 0.926, PCLOSE = 1.00, and RMSEA = 0.030, each meeting its conventional threshold. The overall Cronbach's alpha was 0.899, indicating that the instrument scores were reliable. The results imply that the URS is a valid measure of college readiness among Sultan Qaboos University students and that the Arabic version could be used by university counselors to identify students' readiness factors. Nevertheless, further validation of the URS is recommended.
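
The reliability figure quoted above is Cronbach's alpha, which is straightforward to compute from an item-score matrix. A minimal sketch with NumPy, using a simulated (respondents x items) array in place of the actual survey data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2D array, rows = respondents, columns = item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
scores = rng.integers(1, 6, size=(200, 20)).astype(float)  # fake 5-point items
print(round(cronbach_alpha(scores), 3))
```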

Keywords: college readiness, confirmatory factor analysis, reliability, validity

Procedia PDF Downloads 226
41465 A Research on a Historical Architectural Heritage of the Village: Zriba El Olia

Authors: Yosra Ben Salah, Wang Li Jun, Salem Bellil

Abstract:

The village of Hammem Zriba is a little lost paradise set in a beautiful landscape that captures the eye of every visitor. The village is a rich expression of urban, architectural, technical, and vernacular elements, as well as of sociological, spiritual, and religious behaviors. This heritage is degrading and threatened with disappearing soon; thus, action must be taken as soon as possible to preserve it and to record, analyze, and learn from its traditional ways of construction. The strategy of this study is to examine architecture within Berber society over a period of time, as influenced by a particular location and its relationship to social and cultural aspects; the research therefore focuses on the historical, environmental, social, and cultural aspects influencing architecture. The paper is structured around three successive layers: a historical view, a cultural view, and an architectural view covering both the urban and the domestic scale. The research relies on the integration of theoretical and empirical investigations. On the theoretical level, a documentary analysis of secondary data is used, that is, content analysis of relevant documents including books, journals, magazines, archival data, and field surveys and observations. On the empirical level, the traditional ways of planning and house building are analyzed. Three techniques are employed to collect primary data: systematic analysis of architectural drawings, quantitative analysis of housing statistics, and direct observation. Through this research, the technical, architectural, and urban achievements of the Berber people, which form part of general and architectural history, are emphasized. Secondly, the potential for sustainability present in this traditional urban planning and housing is used to formulate guidelines for modern urban and housing development.

Keywords: culture, history, traditional architecture, values

Procedia PDF Downloads 156
41464 Quantitative Ranking Evaluation of Wine Quality

Authors: A. Brunel, A. Kernevez, F. Leclere, J. Trenteseaux

Abstract:

Today, wine quality is evaluated only by wine experts, each with their own personal tastes, even if they may agree on some common features. Producers therefore have no unbiased way to independently assess the quality of their products. A tool is proposed here to evaluate wine quality by an objective ranking based upon the variables entering wine elaboration, analysed using the principal component analysis (PCA) method. Actual climatic data are compared by measuring the relative distances between the considered wines, from which the general ranking is derived.
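
A minimal sketch of that pipeline: standardize the elaboration/climate variables, project them with PCA, and compare wines by their distances in the reduced space. Everything here (the variables, the choice of ranking by distance to the centroid) is an assumption for illustration:

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Hypothetical per-wine variables: rainfall, sunshine hours, mean temperature, ...
X = rng.normal(size=(30, 8))

Z = StandardScaler().fit_transform(X)          # put variables on a common scale
scores = PCA(n_components=3).fit_transform(Z)  # keep the main axes of variation

# Pairwise distances compare wines; here the ranking uses distance to the centroid.
dists = cdist(scores, scores)
print("Distance between wine 0 and wine 1:", round(dists[0, 1], 2))

to_centroid = np.linalg.norm(scores - scores.mean(axis=0), axis=1)
ranking = np.argsort(to_centroid)
print("Wine ranking (closest to centroid first):", ranking)
```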

Keywords: wine, grape, weather conditions, rating, climate, principal component analysis, metric analysis

Procedia PDF Downloads 318
41463 Data Science/Artificial Intelligence: A Possible Panacea for Refugee Crisis

Authors: Avi Shrivastava

Abstract:

In 2021, two heart-wrenching scenes, shown live on television screens across countries, painted a grim picture of refugees. One was of people clinging to an airplane's wings in a desperate attempt to flee war-torn Afghanistan; they ultimately fell to their deaths. The other was of U.S. government authorities separating children from their parents or guardians to deter migrants and refugees from coming to the U.S. These events show the desperation refugees feel when trying to leave their homes in disaster zones. Data, however, paints a grave picture of the current refugee situation and indicates that a bleak future lies ahead for refugees across the globe. Data and information are the two threads that intertwine to weave the fabric of modern society. They are often used interchangeably, but they differ considerably: information analysis reveals rationale and logic, while data analysis reveals patterns. Patterns revealed by data can enable us to create the tools needed to combat the huge problems at hand, and data analysis paints a clear picture that simplifies decision-making. Geopolitical and economic data can be used to predict future refugee hotspots, and accurately predicting the next hotspots will allow governments and relief agencies to prepare better for future refugee crises. The refugee crisis does not have binary answers. Given the emotionally wrenching nature of the ground realities, experts often shy away from stating things as they realistically are, and this hesitancy can cost lives. When decisions are based solely on data, emotions can be removed from the decision-making process. Data also presents irrefutable evidence, tells whether a solution exists, and can respond to a nonbinary crisis with a binary answer, which makes a problem easier to tackle. Data science and A.I. can predict future refugee crises: with the explosion of data in the last decade, driven by the rise of social media platforms, insight into data has helped solve many geopolitical and social problems. Data science can also help solve many issues refugees face while staying in refugee camps or in adopted countries, and this paper looks into the various ways it can do so. A.I.-based chatbots can help refugees seek legal help for finding asylum in the country where they want to settle, and can point them to marketplaces where people willing to help offer assistance. Data science and technology can also help address refugees' many other problems, including food, shelter, employment, security, and assimilation. The refugee problem is among the most challenging for social and political reasons, but data science and machine learning can help prevent refugee crises and solve or alleviate some of the problems refugees face on their journey to a better life.

Keywords: refugee crisis, artificial intelligence, data science, refugee camps, Afghanistan, Ukraine

Procedia PDF Downloads 72
41462 Evaluation of Longitudinal Relaxation Time (T1) of Bone Marrow in Lumbar Vertebrae of Leukaemia Patients Undergoing Magnetic Resonance Imaging

Authors: M. G. R. S. Perera, B. S. Weerakoon, L. P. G. Sherminie, M. L. Jayatilake, R. D. Jayasinghe, W. Huang

Abstract:

The aim of this study was to measure and evaluate the Longitudinal Relaxation Times (T1) in the bone marrow of an Acute Myeloid Leukaemia (AML) patient in order to explore the potential for a prognostic biomarker using Magnetic Resonance Imaging (MRI), which would be a non-invasive prognostic approach to AML. MR image data were collected in DICOM format, and MATLAB Simulink software was used for the image processing and data analysis. For the quantitative MRI data analysis, Regions of Interest (ROIs) were drawn on multiple image slices encompassing the vertebral bodies of L3, L4, and L5. T1 was evaluated using the obtained T1 maps. The estimated mean bone marrow T1 value was 790.1 ms at 3 T. However, the reported T1 value of healthy subjects (946.0 ms) is significantly higher than the present finding. This suggests that bone marrow T1 can be considered a potential prognostic biomarker for AML patients.
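
As an illustration of the quantitative step, the mean T1 inside a drawn ROI can be pulled from a T1 map with a boolean mask. A sketch with pydicom and NumPy, assuming a hypothetical T1-map DICOM file and a rectangular ROI (the study itself used MATLAB):

```python
import numpy as np
import pydicom

ds = pydicom.dcmread("t1_map_slice.dcm")   # assumed T1-map DICOM file
t1_map = ds.pixel_array.astype(float)       # pixel values in ms (assumed scaling)

# Hypothetical rectangular ROI over a vertebral body (row/column bounds).
roi = np.zeros_like(t1_map, dtype=bool)
roi[120:160, 90:130] = True

mean_t1 = t1_map[roi].mean()
print(f"Mean bone-marrow T1 in ROI: {mean_t1:.1f} ms")
```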

Keywords: acute myeloid leukaemia, longitudinal relaxation time, magnetic resonance imaging, prognostic biomarker

Procedia PDF Downloads 531
41461 Internet Addiction among Students: An Empirical Study in Pondicherry University

Authors: Mashood C., Abdul Vahid K., Ashique C. K.

Abstract:

Technology is growing beyond human expectation, and the internet is one of the most sophisticated products of information technology. It has various advantages, such as connecting the world and simplifying tasks that were difficult in the past; simultaneously, it has demerits, among them lack of authenticity and internet addiction. To investigate internet addiction, a study was conducted among the postgraduate students of Pondicherry University, collecting 454 samples. The study focused strictly on identifying internet addiction among students and the influence and interdependence of personality on internet addiction among first-year and second-year students. Two major analyses were used: Confirmatory Factor Analysis (CFA) to model internet addiction from the observed data, and logistic regression to identify differences between first-year and second-year students in internet addiction. Before the core analysis, the data were subjected to preliminary tests to check model fit. The empirical findings show that the students of Pondicherry University are heavily addicted to the internet, but that there is no large difference between first-year and second-year students in internet addiction.
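
The year-group comparison described above can be framed as a logistic regression with year as the binary outcome. A minimal sketch with statsmodels, assuming hypothetical columns second_year (0/1) and addiction_score and an assumed data file:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("internet_addiction_survey.csv")  # assumed file

# Does the addiction score distinguish first- from second-year students?
model = smf.logit("second_year ~ addiction_score", data=df)
result = model.fit()
print(result.summary())

# A coefficient near zero (non-significant) would match the reported finding
# of no large first-year vs. second-year difference.
```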

Keywords: internet addiction, students, Pondicherry University, empirical study

Procedia PDF Downloads 459
41460 Demographic Factors Influencing Employees’ Salary Expectations and Labor Turnover

Authors: M. Osipova

Abstract:

Thanks to the development of information technologies, every sphere of the economy is becoming more and more data-centric, as people generate huge datasets containing information on every aspect of their lives. Applying research on such data to human resources management yields otherwise scarce statistics on the state of the labor market, including salary expectations and the typical career behavior of potential employees, and this information can become a reliable basis for management decisions. The following article presents the results of career behavior research based on freely accessible resume data. The information used for the study is much broader than what is usually collected in human resources surveys, which is why there is enough data for statistically significant results even in subgroup analyses.

Keywords: human resources management, salary expectations, statistics, turnover

Procedia PDF Downloads 349
41459 Churn Prediction for Telecommunication Industry Using Artificial Neural Networks

Authors: Ulas Vural, M. Ergun Okay, E. Mesut Yildiz

Abstract:

Telecommunication service providers demand accurate and precise prediction of customer churn probabilities to increase the effectiveness of their customer relation services. The large amount of customer data owned by the service providers is suitable for analysis by machine learning methods. In this study, expenditure data of customers are analyzed using an artificial neural network (ANN). The ANN model is applied to the data of customers with different billing durations. The proposed model successfully predicts the churn probabilities with 83% accuracy from only three months of expenditure data, and the prediction accuracy increases up to 89% when nine months of data are used. The experiments also show that the accuracy of the ANN model increases on an extended feature set that includes information on changes in the bill amounts.
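
A minimal sketch of such a churn classifier on tabular expenditure features, using scikit-learn's multilayer perceptron; the paper's exact architecture and features are not specified, so the feature layout and sizes here are assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Hypothetical features: 9 monthly bill amounts plus 8 month-to-month changes.
X = rng.normal(size=(5000, 17))
y = (rng.random(5000) < 0.2).astype(int)  # fake churn labels (~20% churners)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_train)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_train), y_train)
print("Test accuracy:", clf.score(scaler.transform(X_test), y_test))
```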

Keywords: customer relationship management, churn prediction, telecom industry, deep learning, artificial neural networks

Procedia PDF Downloads 145
41458 Identifying the Goals of a Multicultural Curriculum for the Primary Education Course

Authors: Fatemeh Havas Beigi

Abstract:

The purpose of this study is to identify the objectives of a multicultural curriculum for the primary education period from the perspective of ethnic teachers, education experts, and cultural professionals. The research paradigm is interpretive, the approach qualitative, and the strategy content analysis; purposive and snowball sampling were used, and the sample of informants (Iranian ethnic teachers and experts) reached theoretical saturation at 67 people. Data were collected through semi-structured individual and focus-group interviews, recorded in audio format, and analyzed using first- and second-cycle coding. Based on the data analysis, eleven objectives for a multicultural curriculum were identified: attention to ethnic equality, expansion of educational opportunities and justice, peaceful coexistence, education against ethnic and racial discrimination, attention to human value and dignity, acceptance of religious diversity, acquaintance with ethnicities and cultures, promotion of teaching-learning, fostering of self-confidence, building of national unity, and development of cultural commonalities.

Keywords: objective, multicultural curriculum, connect, elementary education period

Procedia PDF Downloads 94
41457 A Fully-Automated Disturbance Analysis Vision for the Smart Grid Based on Smart Switch Data

Authors: Bernardo Cedano, Ahmed H. Eltom, Bob Hay, Jim Glass, Raga Ahmed

Abstract:

The deployment of smart grid devices such as smart meters and smart switches (SS) supported by a reliable and fast communications system makes automated distribution possible, and thus, provides great benefits to electric power consumers and providers alike. However, more research is needed before the full utility of smart switch data is realized. This paper presents new automated switching techniques using SS within the electric power grid. A concise background of the SS is provided, and operational examples are shown. Organization and presentation of data obtained from SS are shown in the context of the future goal of total automation of the distribution network. The description of application techniques, the examples of success with SS, and the vision outlined in this paper serve to motivate future research pertinent to disturbance analysis automation.

Keywords: disturbance automation, electric power grid, smart grid, smart switches

Procedia PDF Downloads 309
41456 ESG and Corporate Financial Performance: Empirical Evidence from Vietnam’s Listed Construction Companies

Authors: My Linh Hoang, Van Dung Hoang

Abstract:

Environmental, Social, and Governance (ESG) factors have become a focus for companies globally, as businesses now pursue long-term sustainable goals rather than profit maximization alone. Recent research in several countries shows that companies have achieved positive financial results by improving their ESG performance. The construction industry is one of the most crucial components of social and economic development; as a result, ESG considerations are becoming more and more essential for companies in this sector. In Vietnam, the construction industry has grown rapidly in recent years; however, how ESG factors affect corporate financial performance in general, and construction corporations' financial performance in particular, has yet to be discussed and studied extensively. This research examines the relationship between ESG factors and financial indicators through a panel data analysis of 75 listed construction companies in Vietnam from 2011 to 2021, and provides insights into how these companies can better integrate ESG considerations into their operations to enhance their financial performance. The data were analyzed using three main methods: descriptive statistics, correlation coefficient analysis applied to all dependent, explanatory, and control variables, and panel data analysis. The panel data analysis uses the fixed effects model (FEM) and the random effects model (REM), with the Hausman test used to select the appropriate model; feasible generalized least squares (FGLS) estimation is performed when autocorrelation or heteroskedasticity appears in the model. The findings indicate that maintaining a strong commitment to ESG principles can have a positive impact on financial performance. This is significant for all parties involved, including investors, company managers, decision-makers, and industry regulators.
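
A minimal sketch of the FEM/REM step with the linearmodels package, assuming a hypothetical panel with columns firm, year, roa (financial performance), and esg (ESG score); the Hausman statistic is computed by hand from the two fits:

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS, RandomEffects

df = pd.read_csv("esg_panel.csv").set_index(["firm", "year"])  # assumed file
y, X = df["roa"], df[["esg"]]

fe = PanelOLS(y, X, entity_effects=True).fit()  # fixed effects (FEM)
re = RandomEffects(y, X).fit()                  # random effects (REM)

# Hausman test: a large statistic favors the fixed effects model.
d = fe.params - re.params
v = fe.cov - re.cov
stat = float(d @ np.linalg.inv(v) @ d)
print(fe.params, re.params, f"Hausman statistic: {stat:.2f}", sep="\n")
```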

Keywords: ESG, financial performance, construction company, Vietnam

Procedia PDF Downloads 90
41455 Development of Risk Management System for Urban Railroad Underground Structures and Surrounding Ground

Authors: Y. K. Park, B. K. Kim, J. W. Lee, S. J. Lee

Abstract:

To assess the risk of underground structures and the surrounding ground, we collect basic data by engineering methods of measurement, exploration, and surveys, and derive the risk through appropriate analyses and assessments for urban railroad underground structures and the surrounding ground, including station inflow. Basic data are obtained from fiber-optic sensors, MEMS sensors, water quantity/quality sensors, a tunnel scanner, ground penetrating radar, and a light weight deflectometer, and are evaluated against their proper threshold values. Based on these data, we analyze the risk level of urban railroad underground structures and the surrounding ground, and we develop a risk management system to manage these data efficiently and to provide a convenient interface for data input and output.
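
The evaluation step (checking each reading against its proper value) reduces to a table of thresholds. A toy sketch of that logic, with entirely hypothetical sensor names and limits:

```python
# Hypothetical alarm thresholds per sensor type.
THRESHOLDS = {
    "fiber_optic_strain_ue": 500.0,      # microstrain
    "mems_tilt_deg": 0.5,
    "station_inflow_m3_per_h": 12.0,
}

readings = {
    "fiber_optic_strain_ue": 340.0,
    "mems_tilt_deg": 0.7,
    "station_inflow_m3_per_h": 9.8,
}

for sensor, value in readings.items():
    limit = THRESHOLDS[sensor]
    status = "ALERT" if value > limit else "ok"
    print(f"{sensor}: {value} (limit {limit}) -> {status}")
```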

Keywords: urban railroad, underground structures, ground subsidence, station inflow, risk

Procedia PDF Downloads 336
41454 Cloud-Based Multiresolution Geodata Cube for Efficient Raster Data Visualization and Analysis

Authors: Lassi Lehto, Jaakko Kahkonen, Juha Oksanen, Tapani Sarjakoski

Abstract:

The use of raster-formatted data sets in geospatial analysis is increasing rapidly. At the same time, geographic data are being introduced into disciplines outside the traditional domain of geoinformatics, like climate change, intelligent transport, and immigration studies. These developments call for better methods to deliver raster geodata in an efficient and easy-to-use manner. Data cube technologies have traditionally been used in the geospatial domain for managing Earth Observation data sets that have strict requirements for effective handling of time series. The same approach and methodologies can also be applied in managing other types of geospatial data sets. A cloud service-based geodata cube, called GeoCubes Finland, has been developed to support online delivery and analysis of the most important geospatial data sets with national coverage. The main target group of the service is the academic research institutes in the country. The most significant aspects of the GeoCubes data repository include the use of multiple resolution levels, a cloud-optimized file structure, and a customized, flexible content access API. Input data sets are pre-processed while being ingested into the repository to bring them into a harmonized form in aspects like georeferencing, sampling resolutions, spatial subdivision, and value encoding. All the resolution levels are created using an appropriate generalization method, selected depending on the nature of the source data set. Multiple pre-processed resolutions enable new kinds of online analysis approaches to be introduced. Analysis processes based on interactive visual exploration can be effectively carried out, as the level of resolution closest to the visual scale can always be used. In the same way, statistical analysis can be carried out on resolution levels that best reflect the scale of the phenomenon being studied. Access times remain close to constant, independent of the scale applied in the application. The cloud service-based approach, applied in the GeoCubes Finland repository, enables analysis operations to be performed on the server platform, thus making high-performance computing facilities easily accessible. The developed GeoCubes API supports this kind of approach for online analysis. The use of cloud-optimized file structures in data storage enables the fast extraction of subareas. The access API allows for the use of vector-formatted administrative areas and user-defined polygons as definitions of subareas for data retrieval. Administrative areas of the country at four levels are readily available from the GeoCubes platform. In addition to direct delivery of raster data, the service also supports a so-called virtual file format, in which only a small text file is first downloaded. The text file contains links to the raster content on the service platform. The actual raster data is downloaded on demand, from the spatial area and resolution level required in each stage of the application. Through the geodata cube approach, pre-harmonized geospatial data sets are made accessible to new categories of inexperienced users in an easy-to-use manner. At the same time, the multiresolution nature of the GeoCubes repository makes it easier for expert users to introduce new kinds of interactive online analysis operations.
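
The cloud-optimized file idea the abstract relies on can be illustrated with rasterio: open a cloud-optimized GeoTIFF over HTTP and read only a subarea at a reduced resolution. The URL and coordinates below are placeholders, not the actual GeoCubes API:

```python
import rasterio
from rasterio.windows import from_bounds

url = "https://example.org/geocubes/elevation_cog.tif"  # placeholder URL

with rasterio.open(url) as src:
    # Read only the pixels inside a bounding box (same CRS as the file)...
    window = from_bounds(380000, 6670000, 400000, 6690000, transform=src.transform)
    # ...and request a smaller output shape, letting rasterio use the
    # overview level closest to the requested resolution.
    data = src.read(1, window=window, out_shape=(512, 512))

print(data.shape, data.dtype)
```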

Keywords: cloud service, geodata cube, multiresolution, raster geodata

Procedia PDF Downloads 135
41453 Utilizing the Principal Component Analysis on Multispectral Aerial Imagery for Identification of Underlying Structures

Authors: Marcos Bosques-Perez, Walter Izquierdo, Harold Martin, Liangdon Deng, Josue Rodriguez, Thony Yan, Mercedes Cabrerizo, Armando Barreto, Naphtali Rishe, Malek Adjouadi

Abstract:

Aerial imagery is a powerful tool when it comes to analyzing temporal changes in ecosystems and extracting valuable information from the observed scene. It allows us to identify and assess various elements such as objects, structures, textures, waterways, and shadows. To extract meaningful information, multispectral cameras capture data across different wavelength bands of the electromagnetic spectrum. In this study, the collected multispectral aerial images were subjected to principal component analysis (PCA) to identify independent and uncorrelated components or features that extend beyond the visible spectrum captured in standard RGB images. The results demonstrate that these principal components contain unique characteristics specific to certain wavebands, enabling effective object identification and image segmentation.
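
A compact sketch of the PCA step on a multispectral image: flatten the band dimension, decorrelate with PCA, and reshape the leading components back into image planes for segmentation. The image dimensions and data are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
h, w, bands = 256, 256, 6                 # hypothetical multispectral frame
cube = rng.normal(size=(h, w, bands))

pixels = cube.reshape(-1, bands)          # one row per pixel, one column per band
pca = PCA(n_components=3).fit(pixels)
components = pca.transform(pixels).reshape(h, w, 3)

# Each plane in `components` is an uncorrelated feature image; the first
# typically carries overall brightness, later ones band-specific structure.
print("Explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
```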

Keywords: big data, image processing, multispectral, principal component analysis

Procedia PDF Downloads 177
41452 The Analysis of Differential Item and Test Functioning between Sexes by Studying on the Scholastic Aptitude Test 2013

Authors: Panwasn Mahalawalert

Abstract:

The purposes of this research were to analyze the differential item functioning (DIF) and differential test functioning (DTF) of the SWUSAT aptitude test with respect to sex. The data used in this research are secondary data from the Srinakharinwirot University Scholastic Aptitude Test 2013 (SWUSAT). The SWUSAT consists of four subjects: a verbal ability test, a number ability test, a reasoning ability test, and a spatial ability test. The data analysis proceeded in two steps: first, descriptive statistics were computed; second, DIF and DTF were analyzed using the DIFAS program. The research results for all 10 tests in 2013 were as follows. Gender-related DIF was found in all 10 tests, with the percentage of items showing DIF ranging from 6.67% to 60%. In 5 tests most of the DIF items favored the female group, in 2 tests most favored the male group, and in 3 tests the numbers of items favoring each group were equal. For differential test functioning (DTF), 8 tests showed a small level of DTF.
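
Programs such as DIFAS build on Mantel-Haenszel-type statistics; the common odds ratio for one item, stratified by total score, can be computed directly. A sketch on a simulated 0/1 response vector and group indicator, illustrating the general MH approach rather than DIFAS's exact output:

```python
import numpy as np
import pandas as pd

def mh_odds_ratio(correct, group, total_score):
    """Mantel-Haenszel common odds ratio for one item.
    correct: 0/1 item responses; group: 0 = reference, 1 = focal;
    total_score: matching variable (e.g., total test score)."""
    df = pd.DataFrame({"c": correct, "g": group, "s": total_score})
    num = den = 0.0
    for _, stratum in df.groupby("s"):
        a = ((stratum.g == 0) & (stratum.c == 1)).sum()  # reference correct
        b = ((stratum.g == 0) & (stratum.c == 0)).sum()  # reference incorrect
        c = ((stratum.g == 1) & (stratum.c == 1)).sum()  # focal correct
        d = ((stratum.g == 1) & (stratum.c == 0)).sum()  # focal incorrect
        t = len(stratum)
        num += a * d / t
        den += b * c / t
    return num / den

rng = np.random.default_rng(5)
n = 1000
group = rng.integers(0, 2, n)
score = rng.integers(0, 31, n)                 # fake total scores
correct = (rng.random(n) < 0.5).astype(int)    # fake item responses

or_mh = mh_odds_ratio(correct, group, score)
delta = -2.35 * np.log(or_mh)                  # ETS delta scale
print(f"MH odds ratio: {or_mh:.2f}, ETS delta: {delta:.2f}")
```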

Keywords: aptitude test, differential item functioning, differential test functioning, educational measurement

Procedia PDF Downloads 412
41451 The Relationship Between Artificial Intelligence, Data Science, and Privacy

Authors: M. Naidoo

Abstract:

Artificial intelligence often requires large amounts of good-quality data. In important fields, such as healthcare, the training of AI systems predominantly relies on health and personal data; however, the usage of this data is complicated by various layers of law and ethics that seek to protect individuals' privacy rights. This research seeks to establish the challenges AI and data science pose to (i) informational rights, (ii) privacy rights, and (iii) data protection. To solve some of the issues presented, various methods are suggested, such as embedding values in technological development, properly balancing rights and interests, and others.

Keywords: artificial intelligence, data science, law, policy

Procedia PDF Downloads 106
41450 The Effect of Job Insecurity on Attitude towards Change and Organizational Citizenship Behavior: Moderating Role of Islamic Work Ethics

Authors: Khurram Shahzad, Muhammad Usman

Abstract:

The main aim of this study is to examine the direct and interactive effects of job insecurity and Islamic work ethics on employees' attitude towards change and organizational citizenship behavior. Design/methodology/approach: Data were collected from 171 male and female university teachers in Pakistan using self-administered, close-ended questionnaires, and analyzed through correlation and regression analysis. Findings: The analysis showed that job insecurity has a strong negative effect on the attitude towards change of university teachers, but no significant effect on their organizational citizenship behavior. The results also show that Islamic work ethics does not moderate the relationship between job insecurity and attitude towards change, while a strong moderating effect of Islamic work ethics is found on the relationship between job insecurity and organizational citizenship behavior. Originality/value: This study is the first to examine the relationship of job insecurity with employees' attitude towards change and organizational citizenship behavior with the moderating effect of Islamic work ethics.
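
Moderation of this kind is usually tested as an interaction term in a regression. A minimal sketch with statsmodels, assuming hypothetical columns ocb (organizational citizenship behavior), insecurity, and iwe (Islamic work ethics) and an assumed data file:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("teachers_survey.csv")  # assumed file

# The `insecurity:iwe` term carries the moderation hypothesis: a significant
# interaction coefficient means Islamic work ethics moderates the effect.
model = smf.ols("ocb ~ insecurity * iwe", data=df)
result = model.fit()
print(result.summary())
```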

Keywords: job security, islamic work ethics, attitude towards change, organizational citizenship behavior

Procedia PDF Downloads 475
41449 Cost Efficiency of European Cooperative Banks

Authors: Karolína Vozková, Matěj Kuc

Abstract:

This paper analyzes recent trends in the cost efficiency of European cooperative banks using efficient frontier analysis. Our methodology is based on stochastic frontier analysis, run on a set of 649 European cooperative banks using data from 2006 to 2015. Our results show that the average inefficiency of European cooperative banks has been increasing since 2008, that smaller cooperative banks are significantly more efficient than bigger ones over the whole period, and that the share of net fee and commission income in total income, surprisingly, seems to have no impact on bank cost efficiency.
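
For reference, the core of a stochastic cost frontier is a composed error ε = v + u, with noise v ~ N(0, σv²) and one-sided inefficiency u ~ |N(0, σu²)|; the resulting half-normal log-likelihood can be maximized directly. A compact sketch with SciPy on simulated cross-section data, a simplification of the paper's panel setting:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(6)
n = 649
x = rng.normal(size=n)                       # log output (hypothetical regressor)
u = np.abs(rng.normal(0, 0.3, n))            # inefficiency (raises cost)
v = rng.normal(0, 0.2, n)
y = 1.0 + 0.8 * x + v + u                    # log cost

def neg_loglik(theta):
    b0, b1, ls_v, ls_u = theta
    sv, su = np.exp(ls_v), np.exp(ls_u)      # log-parameterized for positivity
    sigma = np.hypot(sv, su)
    lam = su / sv
    eps = y - b0 - b1 * x                    # cost frontier residual: eps = v + u
    ll = (np.log(2 / sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(lam * eps / sigma))
    return -ll.sum()

res = minimize(neg_loglik, x0=np.array([0.0, 0.0, -1.0, -1.0]), method="BFGS")
b0, b1, ls_v, ls_u = res.x
print(f"beta: {b0:.2f}, {b1:.2f}; sigma_v: {np.exp(ls_v):.2f}, sigma_u: {np.exp(ls_u):.2f}")
```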

Keywords: cooperative banks, cost efficiency, efficient frontier analysis, stochastic frontier analysis, net fee and commission income

Procedia PDF Downloads 211
41448 A Parallel Approach for 3D-Variational Data Assimilation on GPUs in Ocean Circulation Models

Authors: Rossella Arcucci, Luisa D'Amore, Simone Celestino, Giuseppe Scotti, Giuliano Laccetti

Abstract:

This work is the first step in a rather wide research activity, in collaboration with the Euro-Mediterranean Center on Climate Change, aimed at introducing scalable approaches in Ocean Circulation Models. We discuss the design and implementation of a parallel algorithm for solving the Variational Data Assimilation (DA) problem on Graphics Processing Units (GPUs). The algorithm is based on the fully scalable 3DVar DA model previously proposed by the authors, which uses a Domain Decomposition approach (we refer to this model as the DD-DA model). We proceed with an incremental porting process consisting of three distinct stages: requirements and source code analysis, incremental development of CUDA kernels, and testing and optimization. Experiments confirm the theoretical performance analysis based on the so-called scale-up factor, demonstrating that the DD-DA model can be suitably mapped onto GPU architectures.
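
The variational problem being ported is the minimization of the standard 3DVar cost function J(x) = (x - xb)ᵀ B⁻¹ (x - xb) + (y - Hx)ᵀ R⁻¹ (y - Hx). A small NumPy/SciPy sketch of that core on simulated data, without the domain decomposition or GPU offloading that are the paper's contributions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n, m = 50, 20                      # state size, number of observations

xb = rng.normal(size=n)            # background state
H = rng.normal(size=(m, n)) / n    # linear observation operator
y = H @ xb + rng.normal(0, 0.1, m) # observations

B_inv = np.eye(n) / 0.5**2         # background error covariance (diagonal, assumed)
R_inv = np.eye(m) / 0.1**2         # observation error covariance (diagonal, assumed)

def cost(x):
    db = x - xb
    do = y - H @ x
    return db @ B_inv @ db + do @ R_inv @ do

def grad(x):
    return 2 * (B_inv @ (x - xb)) - 2 * (H.T @ (R_inv @ (y - H @ x)))

xa = minimize(cost, xb, jac=grad, method="L-BFGS-B").x  # analysis state
print("Cost at background vs. analysis:", cost(xb), cost(xa))
```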

Keywords: data assimilation, GPU architectures, ocean models, parallel algorithm

Procedia PDF Downloads 412
41447 Identification and Classification of Fiber-Fortified Semolina by Near-Infrared Spectroscopy (NIR)

Authors: Amanda T. Badaró, Douglas F. Barbin, Sofia T. Garcia, Maria Teresa P. S. Clerici, Amanda R. Ferreira

Abstract:

Food fortification is the intentional addition of a nutrient to a food matrix and has been widely used to overcome the lack of nutrients in the diet or to increase the nutritional value of food. Fortified food must meet the demand of the population, taking into account their habits and the risks that these foods may pose. Wheat and its by-products, such as semolina, have been strongly indicated for use as food vehicles, since they are widely consumed and used in the production of other foods. These products have been strategically used to add nutrients such as fibers. Methods of analysis and quantification of such components are destructive and require lengthy sample preparation and analysis. Therefore, the industry has searched for faster and less invasive methods, such as Near-Infrared Spectroscopy (NIR). NIR is a rapid and cost-effective method; however, it is based on indirect measurements and yields a large amount of data. NIR spectroscopy therefore requires calibration with mathematical and statistical tools (chemometrics) to extract analytical information from the corresponding spectra, such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). PCA is well suited to NIR, since it can handle many spectra at a time and can be used for unsupervised classification. An advantage of PCA, which is also a data reduction technique, is that it reduces the spectra to a smaller number of latent variables for further interpretation. LDA, on the other hand, is a supervised method that searches for the canonical variables (CV) with maximum separation among different categories; in LDA, the first CV is the direction of the maximum ratio between inter- and intra-class variances. The present work used a portable near-infrared spectrometer (NIR) for the identification and classification of pure and fiber-fortified semolina samples. Fiber was added to semolina in two different concentrations, and after spectra acquisition the data were used in PCA and LDA to identify and discriminate the samples. The results showed that NIR spectroscopy associated with PCA was very effective in identifying pure and fiber-fortified semolina. Additionally, the classification rates of the samples using LDA were between 78.3% and 95% for calibration and between 75% and 95% for cross-validation. Thus, after multivariate analyses such as PCA and LDA, it was possible to verify that NIR associated with chemometric methods can identify and classify the different samples in a fast and non-destructive way.
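
The PCA-then-LDA chemometric workflow maps directly onto a scikit-learn pipeline. A sketch on simulated spectra, with the number of wavelengths, samples, classes, and class offsets all assumed:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(8)
n_per_class, n_wavelengths = 40, 125   # hypothetical NIR spectra

# Three classes: pure semolina and two fiber concentrations, given small
# class-specific offsets so the toy example is separable.
X = np.vstack([rng.normal(loc=i * 0.05, size=(n_per_class, n_wavelengths))
               for i in range(3)])
y = np.repeat([0, 1, 2], n_per_class)

model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
print("Cross-validated classification rate:", scores.mean().round(3))
```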

Keywords: Chemometrics, fiber, linear discriminant analysis, near-infrared spectroscopy, principal component analysis, semolina

Procedia PDF Downloads 212
41446 Settlement Analysis of Axially Loaded Bored Piles: A Case History

Authors: M. Mert, M. T. Ozkan

Abstract:

Pile load tests should be applied to check bearing capacity calculations and to determine the settlement of the pile under the test load. Strain gauges can be installed in a pile in order to determine the shaft resistance for each soil layer; detailed results can be obtained by means of strain gauges placed at certain levels in the test piles. In the scope of this study, pile load test data obtained from two different projects are examined. Instrumented static pile load tests were applied to a total of 7 bored test piles of different diameters (80 cm, 150 cm, and 200 cm) and different lengths (between 30 and 76 m) at two different project sites. Settlement analysis of the test piles is carried out using load transfer methods and the finite element method; Plaxis 3D, a three-dimensional finite element program, is also used for the settlement analysis. In this study, the bearing capacities of the test piles are first determined and compared with the strain gauge data required for the settlement analysis. Then, settlement values of the test piles are estimated using load transfer methods developed in recent years and the finite element method. The aim of this study is to show the similarities and differences between the results obtained from the settlement analysis methods and from the instrumented pile load tests.
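
The load transfer (t-z) approach the abstract refers to can be sketched as a marching scheme: assume a tip settlement, then integrate resistance and elastic compression segment by segment up the shaft. All soil parameters and transfer curves below are invented for illustration:

```python
import numpy as np

# Hypothetical pile: 30 m long, 0.8 m diameter, concrete modulus 30 GPa.
L, D, E = 30.0, 0.8, 30e9
A, perim = np.pi * D**2 / 4, np.pi * D
n_seg = 60
dz = L / n_seg

def t_z(w):  # bilinear shaft transfer: 80 kPa fully mobilized at 5 mm (assumed)
    return 80e3 * min(w / 0.005, 1.0)

def q_b(w):  # linear tip resistance: 2 MPa mobilized at 25 mm (assumed)
    return 2e6 * min(w / 0.025, 1.0) * A

def head_response(w_tip):
    """March from tip to head: returns (head load, head settlement)."""
    Q, w = q_b(w_tip), w_tip
    for _ in range(n_seg):
        Q_new = Q + t_z(w) * perim * dz          # add mobilized shaft friction
        w += 0.5 * (Q + Q_new) * dz / (E * A)    # elastic segment compression
        Q = Q_new
    return Q, w

for w_tip in [0.002, 0.005, 0.010, 0.020]:
    Q_head, w_head = head_response(w_tip)
    print(f"head load {Q_head/1e3:8.0f} kN at head settlement {w_head*1000:5.2f} mm")
```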

Keywords: failure, finite element method, monitoring and instrumentation, pile, settlement

Procedia PDF Downloads 167
41445 The Impact of Corporate Social Responsibility and Relationship Marketing on Relationship Maintainer and Customer Loyalty by Mediating Role of Customer Satisfaction

Authors: Anam Bhatti, Sumbal Arif, Mariam Mehar, Sohail Younas

Abstract:

CSR has become one of the most important instruments in satisfying customers. The objective of this research is to measure the effects of CSR and relationship marketing on relationship maintenance and customer loyalty, with customer satisfaction as a mediator. In Pakistan, there is not enough research work on the effect of CSR and relationship marketing on relationship maintenance and customer loyalty. A deductive approach and a survey method are used as the research approach and research strategy, respectively; the research design is a descriptive, quantitative study. For data collection, a questionnaire with semantic differential and seven-point scales was adopted. Data were collected from a sample of 400 respondents selected through the non-probability convenience sampling technique. Confirmatory factor analysis, structural equation modeling, and mediation analysis through regression were carried out using AMOS software. Strong empirical evidence supports that the customer's perception of CSR performance is highly influenced by the values.
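
Mediation of the kind hypothesized here (CSR affecting loyalty through satisfaction) is often checked with the classic product-of-coefficients approach. A minimal sketch with statsmodels, using hypothetical column names csr, satisfaction, and loyalty and an assumed data file:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("csr_survey.csv")  # assumed file

# Path a: predictor -> mediator.
a = smf.ols("satisfaction ~ csr", data=df).fit().params["csr"]
# Path b and direct effect c': mediator and predictor -> outcome.
fit_b = smf.ols("loyalty ~ csr + satisfaction", data=df).fit()
b, c_prime = fit_b.params["satisfaction"], fit_b.params["csr"]

indirect = a * b  # effect of csr on loyalty transmitted through satisfaction
print(f"indirect effect: {indirect:.3f}, direct effect: {c_prime:.3f}")
```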

Keywords: CSR, Relationship marketing, Relationship maintainer, Customer loyalty, Customer satisfaction

Procedia PDF Downloads 482
41444 Effect of Genuine Missing Data Imputation on Prediction of Urinary Incontinence

Authors: Suzan Arslanturk, Mohammad-Reza Siadat, Theophilus Ogunyemi, Ananias Diokno

Abstract:

Missing data is a common challenge in statistical analyses of most clinical survey datasets. A variety of methods have been developed to enable the analysis of survey data with missing values, of which imputation is the most commonly used. However, in order to minimize the bias introduced by imputation, one must choose the right imputation technique and apply it to the correct type of missing data. In this paper, we have identified different types of missing values: missing data due to skip pattern (SPMD), undetermined missing data (UMD), and genuine missing data (GMD), and applied rough set imputation to only the GMD portion of the missing data. We have used rough set imputation to evaluate the effect of such imputation on prediction by generating several simulation datasets based on an existing epidemiological dataset (MESA). To measure how well each dataset lends itself to the prediction model (logistic regression), we have used p-values from the Wald test. To evaluate the accuracy of the prediction, we have considered the width of the 95% confidence interval for the probability of incontinence. Both imputed and non-imputed simulation datasets were fit to the prediction model, and both turned out to be significant (p-value < 0.05). However, the Wald score shows a better fit for the imputed than for the non-imputed datasets (28.7 vs. 23.4). The average confidence interval width decreased by 10.4% when the imputed dataset was used, meaning higher precision. The results show that using the rough set method for missing data imputation on GMD improves the predictive capability of the logistic regression. Further studies are required to generalize this conclusion to other clinical survey datasets.
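
Rough-set-style imputation builds on indiscernibility: a record's missing value is filled from records that agree with it on the remaining attributes. A toy sketch of that idea with pandas, a simplification of the full rough set machinery, with invented attribute names:

```python
import pandas as pd

df = pd.DataFrame({
    "age_group": ["40s", "40s", "50s", "40s", "50s"],
    "parity":    ["high", "high", "low", "high", "low"],
    "symptom":   ["yes", None, "no", "yes", None],   # genuine missing values
})

def rough_set_impute(df: pd.DataFrame, target: str) -> pd.DataFrame:
    others = [c for c in df.columns if c != target]
    out = df.copy()
    for i, row in df[df[target].isna()].iterrows():
        # Indiscernible records: identical on all other attributes, target known.
        mask = (df[others] == row[others]).all(axis=1) & df[target].notna()
        if mask.any():
            out.at[i, target] = df.loc[mask, target].mode()[0]
    return out

print(rough_set_impute(df, "symptom"))
```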

Keywords: rough set, imputation, clinical survey data simulation, genuine missing data, predictive index

Procedia PDF Downloads 168
41443 Estimation of Geotechnical Parameters by Comparing Monitoring Data with Numerical Results: Case Study of Arash–Esfandiar-Niayesh Under-Passing Tunnel, Africa Tunnel, Tehran, Iran

Authors: Aliakbar Golshani, Seyyed Mehdi Poorhashemi, Mahsa Gharizadeh

Abstract:

Underpassing tunnels are strongly influenced by the surrounding soils. There are complexities in specifying real soil behavior, owing to the many uncertainties in soil properties and, additionally, to inappropriate soil constitutive models. Such factors may cause settlements in the numerical analysis that are incompatible with the values obtained in actual construction. This paper reports a case study on a tunnel constructed by NATM, with a depth of 11.4 m, a height of 12.2 m, and a width of 14.4 m (2.5 lanes). The numerical modeling was based on a 2D finite element program, and the soil material behavior was modeled with the hardening soil model. According to the field observations, the numerically estimated settlement at the ground surface was approximately four times larger than the measured one after the complete installation of the initial lining, indicating that some unknown factors affect the values. Consequently, the geotechnical parameters were revised by a numerical back-analysis using laboratory and field test data and the obtained monitoring data. The results confirm that the soil parameters are typically conservatively underestimated and, additionally, that the constitutive models cannot be applied properly to all soil conditions.
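
Numerical back-analysis amounts to tuning soil parameters until the model's predicted settlements match the monitoring data. A schematic sketch with SciPy, where forward_model stands in for a call to the finite element analysis; every value and functional form here is hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

measured = np.array([4.1, 6.3, 5.2])  # monitored settlements at 3 points (mm), assumed

def forward_model(params):
    """Placeholder for the FE run: maps soil parameters (e.g., stiffness,
    friction angle) to predicted settlements at the monitoring points."""
    E_ref, phi = params
    return np.array([250.0 / E_ref + 0.02 * phi,
                     380.0 / E_ref + 0.03 * phi,
                     310.0 / E_ref + 0.025 * phi])

def residuals(params):
    return forward_model(params) - measured

fit = least_squares(residuals, x0=[50.0, 30.0], bounds=([10, 20], [200, 45]))
print("Back-analyzed parameters (E_ref, phi):", np.round(fit.x, 2))
```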

Keywords: NATM tunnel, initial lining, laboratory test data, numerical back-analysis

Procedia PDF Downloads 361
41442 The Influence of Intellectual Capital Disclosures on Market Capitalization Growth

Authors: Nyoman Wijana, Chandra Arha

Abstract:

Disclosure of Intellectual Capital (IC) is the presentation of corporate information assets that are not recorded in the financial statements. Such disclosure is very helpful because it informs readers about the company's intangible assets; in the new economic era, a company's intangible assets will determine its competitive advantage. This study examines the effect of IC disclosures on market capitalization growth over the ten years 2002-2011. A sample of the one hundred companies with the largest market capitalization in 2011 was traced back over the previous ten years, with data taken from 2011, 2008, 2005, and 2002. Content analysis was used to acquire the data. The analytical method is Ordinary Least Squares (OLS), and the analysis tool is EViews 7, whose pooled least squares parameter estimation is specifically designed for panel data. The results show that disclosure levels affected market capitalization growth inconsistently across the observation years. The results of this study are expected to motivate public companies in Indonesia to make more voluntary IC disclosures and to encourage regulators to regulate comprehensively, so that all categories of IC must be disclosed by companies.

Keywords: IC disclosures, market capitalization growth, analytical method, OLS

Procedia PDF Downloads 340
41441 A Data Mining Approach for Analysing and Predicting the Bank's Asset Liability Management Based on Basel III Norms

Authors: Nidhin Dani Abraham, T. K. Sri Shilpa

Abstract:

Asset liability management is an important aspect of the banking business. Moreover, today's banking is based on Basel III, which strictly regulates counterparty default. This paper focuses on the prediction and analysis of counterparty default risk, the type of risk that occurs when customers fail to repay the amount owed to the lender (a bank or other financial institution). The paper proposes an approach to reduce the counterparty risk occurring in financial institutions using an appropriate data mining technique, and thus predicts the occurrence of non-performing assets (NPA); it also helps in asset building and restructuring quality. Liability management is likewise very important to the banking business: to know and analyze the depth of a bank's liabilities, a suitable technique is required, and a data mining technique is used to predict the dormant behaviour of various deposit customers. Various models are implemented, and the results for savings deposit customers are analyzed. All these data are cleaned using a data cleansing approach applied to the bank's data warehouse.
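
One plausible concrete form of the data mining step is a tree-based classifier over account activity features. A sketch with scikit-learn on simulated data; the features and the dormancy label are assumptions, not the paper's actual variables:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(9)
# Hypothetical features per deposit account: balance, months since last
# transaction, monthly transaction count, account age (years).
X = np.column_stack([
    rng.lognormal(8, 1, 4000),
    rng.integers(0, 36, 4000),
    rng.poisson(5, 4000),
    rng.uniform(0, 20, 4000),
])
y = (X[:, 1] > 18).astype(int)  # fake "dormant" label driven by inactivity

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print("Dormancy prediction accuracy:", clf.score(X_te, y_te))
```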

Keywords: data mining, asset liability management, BASEL III, banking

Procedia PDF Downloads 552
41440 Dose Evaluations with SNAP/RADTRAD for Loss of Coolant Accidents in a BWR6 Nuclear Power Plant

Authors: Kai Chun Yang, Shao-Wen Chen, Jong-Rong Wang, Chunkuan Shih, Jung-Hua Yang, Hsiung-Chih Chen, Wen-Sheng Hsu

Abstract:

In this study, we build a Symbolic Nuclear Analysis Package / RADionuclide Transport, Removal And Dose Estimation (SNAP/RADTRAD) model of the Kuosheng Nuclear Power Plant, based on the Final Safety Analysis Report (FSAR) and other plant data. It is used to estimate the radiation doses at the Exclusion Area Boundary (EAB), the Low Population Zone (LPZ), and the control room for the 'release from the containment' case in a Loss Of Coolant Accident (LOCA). The RADTRAD analysis shows that the evaluated doses at the EAB, the LPZ, and the control room are close to the FSAR data, and all of the doses are below the regulatory limits. Finally, a sensitivity analysis shows that the evaluated doses increase as the intake rate of the control room increases.

Keywords: RADTRAD, radionuclide transport, removal and dose estimation, SNAP, symbolic nuclear analysis package, boiling water reactor, NPP, Kuosheng

Procedia PDF Downloads 343
41439 Perception-Oriented Model Driven Development for Designing Data Acquisition Process in Wireless Sensor Networks

Authors: K. Indra Gandhi

Abstract:

Wireless Sensor Networks (WSNs) have always been characterized by application-specific sensing, relaying, and collection of information for further analysis. However, software development has not been considered a separate entity in this data collection process, which has imposed severe limitations on software development for WSNs. Software development for WSNs is a complex process, since the components involved are data-driven, network-driven, and application-driven in nature. This implies a tremendous need for separation of concerns from the software development perspective. A layered approach for designing the data acquisition process based on Model Driven Development (MDD) is proposed, as the sensed data collection process itself varies depending upon the application under consideration. This work focuses on the layered view of the data acquisition process so as to ease software development. A metamodel is proposed that enables reusability and realizes software development as an adaptable component for WSN systems. Further, observation of users' perception indicates that the proposed model helps improve the programmer's productivity by realizing the collaborative system involved.
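
A layered, model-driven data acquisition design can be pictured as separate interfaces per concern, composed into one pipeline. The sketch below is purely illustrative of the separation-of-concerns idea; it does not reproduce the paper's actual metamodel:

```python
from abc import ABC, abstractmethod

class SensingLayer(ABC):          # data-driven concern
    @abstractmethod
    def sample(self) -> float: ...

class RoutingLayer(ABC):          # network-driven concern
    @abstractmethod
    def relay(self, value: float) -> float: ...

class ApplicationLayer(ABC):      # application-driven concern
    @abstractmethod
    def consume(self, value: float) -> None: ...

class TemperatureSensor(SensingLayer):
    def sample(self) -> float:
        return 23.5  # stub reading

class DirectRouting(RoutingLayer):
    def relay(self, value: float) -> float:
        return value  # no-op relay for the sketch

class Logger(ApplicationLayer):
    def consume(self, value: float) -> None:
        print(f"received sample: {value}")

# The acquisition pipeline composes the layers without knowing their internals,
# so each concern can be swapped per application, as a layered metamodel intends.
def acquire(sensing: SensingLayer, routing: RoutingLayer, app: ApplicationLayer):
    app.consume(routing.relay(sensing.sample()))

acquire(TemperatureSensor(), DirectRouting(), Logger())
```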

Keywords: data acquisition, model-driven development, separation of concern, wireless sensor networks

Procedia PDF Downloads 434