Search results for: data interpolating empirical orthogonal function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 29461


26131 Prototype Development of Knitted Buoyant Swimming Vest for Children

Authors: Nga-Wun Li, Chu-Po Ho, Kit-Lun Yick, Jin-Yun Zhou

Abstract:

The use of buoyant vests incorporated with swimsuits can develop children’s confidence in the water, particularly for novice swimmers. Consequently, parents tend to purchase buoyant swimming vests for their children to reduce their anxiety in the water. Although conventional buoyant swimming vests provide buoyancy to the wearer, their bulkiness and hardness make children uncomfortable and unwilling to wear them. This study aimed to apply inlay knitting technology to design new functional buoyant swimming vests for children. The prototype consists of a shell and a buoyant knitted layer, which is the main medium providing buoyancy. Polypropylene yarn and 6.4 mm expandable polyethylene (EPE) foam were fabricated in full-needle stitch with inlay knitting technology and then linked by sewing to form the buoyant layer. The shell of the knitted buoyant vest was made of polypropylene circular knitted fabric. The knitted fabric structure of the buoyant swimsuit makes it inherently stretchable, and the arrangement of the inlaid material was designed around body movement to improve the ease with which the swimmer moves. Further, the shoulder seam is placed at the back to minimize irritation to the wearer. Apart from maintaining buoyancy, the prototype reduces the bulkiness and improves the softness of the conventional buoyant swimming vest by taking advantage of the properties of a knitted garment. The results of this study are significant for the development of buoyant swimming vests in both the textile and the fast-growing sportswear industries.

Keywords: knitting technology, buoyancy, inlay, swimming vest, functional garment

Procedia PDF Downloads 101
26130 Study on the Effect of Cabbage (Brassica oleracea) and Ginger (Zingiber officinale) Extracts on Rat Liver Injuries Induced by Carbon Tetrachloride (CCl4)

Authors: Asmaa F. Hamouda, Randa M Shrourou

Abstract:

Cabbage (Brassica oleracea) and ginger (Zingiber officinale) constitute a portion of the regular human diet. The separate effects of cabbage extract (CE) and ginger extract (GE) on liver nitric oxide (NO), malondialdehyde (MDA), as well as serum aspartate aminotransferase (AST), alanine aminotransferase (ALT), total bilirubin, total cholesterol (TC), triglyceride (TG), high-density lipoprotein (HDL) cholesterol, low-density lipoprotein (LDL) cholesterol, thyroid-stimulating hormone (TSH), triiodothyronine (T3), and thyroxine (T4) in rats treated and untreated with carbon tetrachloride (CCl4) were studied. The levels of NO and MDA, as well as serum AST, ALT, total bilirubin, TC, TG, LDL, and TSH, were elevated, while HDL, T3, and T4 declined, in rats treated with CCl4 as compared to controls. Treatment of rats with GE pre, during, and post CCl4 administration improved NO, MDA, as well as serum AST, ALT, total bilirubin, TC, TG, HDL, LDL, TSH, T3, and T4 as compared to CCl4 alone, indicating that GE improves thyroid function and reduces oxidative stress as well as the injuries induced by CCl4. Treatment of rats with CE pre, during, and post CCl4 administration did not improve the thyroid hormone and lipid profile levels as compared to CCl4 alone. These findings suggest that ginger treatment exerts a protective effect against metabolic disorders by decreasing oxidative stress.

Keywords: liver injuries, carbon tetrachloride (CCl4), cabbage (Brassica oleracea), ginger (Zingiber officinale), thyroid function

Procedia PDF Downloads 254
26129 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel

Authors: Hamed Kalhori, Lin Ye

Abstract:

In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel remotely from the impact locations were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g. magnitude) of a stochastic force at a defined location, is extended to identify both the location and the magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are simultaneously exerted at all potential locations, but that the magnitude of all forces except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of the responses resulting from an impact at each potential location. The problem can be categorized as under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations). The under-determined case considered here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. Truncated singular value decomposition (TSVD) and Tikhonov regularization are independently applied to regularize the problem in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and generalized cross validation (GCV) methods. In addition, the effect of different signal window widths on the reconstructed force is examined. It is observed that the impact force generated by the instrumented impact hammer is sensitive to the impact location on the structure, with shapes ranging from a simple half-sine to more complicated profiles. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match the actual forces well in terms of magnitude and duration.
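
As a concrete illustration of the regularized deconvolution step described above, the following minimal Python sketch builds a convolution (Toeplitz) matrix from an assumed impulse response, applies Tikhonov regularization, and scores the reconstruction with the correlation coefficient. The impulse response, force shape, noise level, and regularization parameter are all illustrative assumptions, not values from the study.

```python
# Minimal sketch of Tikhonov-regularized deconvolution for force reconstruction.
# The impulse response h and sensor signal y are synthetic placeholders.
import numpy as np
from scipy.linalg import toeplitz

def tikhonov_deconvolve(y, h, lam):
    """Recover a force history f from y = H f + noise, where H is the
    convolution (Toeplitz) matrix built from the impulse response h."""
    n = len(y)
    H = toeplitz(np.r_[h, np.zeros(n - len(h))], np.zeros(n))
    # Tikhonov solution: minimize ||H f - y||^2 + lam * ||f||^2
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

# Synthetic example: half-sine impact convolved with a decaying oscillation.
t = np.linspace(0, 1, 200)
f_true = np.where(t < 0.1, np.sin(np.pi * t / 0.1), 0.0)   # half-sine force
h = np.exp(-5 * t[:80]) * np.sin(40 * t[:80])               # assumed impulse response
y = np.convolve(f_true, h)[:200] + 0.01 * np.random.randn(200)

f_rec = tikhonov_deconvolve(y, h, lam=1e-2)
corr = np.corrcoef(f_true, f_rec)[0, 1]   # correlation coefficient, as used in the paper
print(f"correlation between true and reconstructed force: {corr:.3f}")
```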

Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction

Procedia PDF Downloads 523
26128 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels

Authors: Joshua Buli, David Pietrowski, Samuel Britton

Abstract:

Processing SAR data usually requires constraints in extent in the Fourier domain as well as approximations and interpolations onto a planar surface to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional planar imagery. The data can be interpolated into a ground plane projection, with or without terrain as a component, to better view SAR data in an image domain comparable to what a human would view and to ease interpretation. An alternate but computationally heavy method that makes use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched filtering, motion compensation, etc.), the data is then range compressed, and lastly, the contribution from each pulse is determined for each specific point in space by searching the time history data for the reflectivity values of each pulse, summed over the entire collection. This results in a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing now allow this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase history data size and 3D point cloud size. Backprojection processing algorithms are embarrassingly parallel, since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for accurate reflectivity representation of a scene. Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes interpolations and modifications of the raw data, maintaining maximum data integrity. This processing will allow SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
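
The per-voxel independence described above can be sketched in a few lines. The Python fragment below accumulates, for every reference voxel, the range-compressed sample of each pulse at that voxel's computed range; the pulse data, platform geometry, sample rate, start range, and voxel cloud are synthetic placeholders, and phase compensation is omitted for brevity. Because each voxel's sum depends only on that voxel, the accumulation maps directly onto one GPU thread per voxel.

```python
# Schematic back-projection loop over range-compressed pulses and 3D reference voxels.
import numpy as np

c = 3e8                      # propagation speed (m/s)
n_pulses, n_bins = 128, 512
fs = 100e6                   # range-sample rate (Hz), assumed
r0 = 900.0                   # range of the first sample of each pulse, assumed

rng = np.random.default_rng(0)
pulses = rng.standard_normal((n_pulses, n_bins)) + 1j * rng.standard_normal((n_pulses, n_bins))
platform = np.c_[np.linspace(-200, 200, n_pulses),
                 np.full(n_pulses, 1000.0),
                 np.full(n_pulses, 500.0)]          # synthetic antenna positions

# Reference voxels: any point cloud or DEM works, since no planar grid is required.
voxels = rng.uniform([-50, -50, 0], [50, 50, 30], size=(10000, 3))

image = np.zeros(len(voxels), dtype=complex)
for p in range(n_pulses):                                  # each voxel's sum is independent,
    d = np.linalg.norm(voxels - platform[p], axis=1)       # so this maps onto GPU threads
    bins = np.clip(((d - r0) * 2 / c * fs).astype(int), 0, n_bins - 1)
    image += pulses[p, bins]                               # accumulate per-voxel reflectivity

reflectivity = np.abs(image)                               # per-3D-point reflectivity magnitude
```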

Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization

Procedia PDF Downloads 58
26127 Computerized Analysis of Phonological Structure of 10,400 Brazilian Sign Language Signs

Authors: Wanessa G. Oliveira, Fernando C. Capovilla

Abstract:

Capovilla and Raphael’s Libras Dictionary documents a corpus of 4,200 Brazilian Sign Language (Libras) signs. Duduchi and Capovilla’s software SignTracking permits users to retrieve signs even when they do not know the corresponding gloss, and to discover the meaning of all 4,200 signs simply by clicking on graphic menus of the sign characteristics (phonemes). Duduchi and Capovilla have discovered that the ease with which any given sign can be retrieved is an inverse function of the average popularity of its component phonemes. Thus, signs composed of rare (distinctive) phonemes are easier to retrieve than those composed of common phonemes. SignTracking offers a means of computing the average popularity of the phonemes that make up each of the 4,200 signs. It provides a precise measure of the degree of ease with which signs can be retrieved and sign meanings can be discovered. Duduchi and Capovilla’s logarithmic model proved valid: the degree to which any given sign can be retrieved is an inverse function of the arithmetic mean of the logarithm of the popularity of each component phoneme. Capovilla, Raphael and Mauricio’s New Libras Dictionary documents a corpus of 10,400 Libras signs. The present analysis revealed the phonological ('DNA') structure of Libras by mapping the incidence of 501 sign phonemes resulting from the layered distribution of five parameters: 163 handshape phonemes (CherEmes-ManusIculi); 34 finger shape phonemes (DactilEmes-DigitumIculi); 55 hand placement phonemes (ArtrotoToposEmes-ArticulatiLocusIculi); 173 movement dimension phonemes (CinesEmes-MotusIculi) pertaining to direction, frequency, and type; and 76 facial expression phonemes (MascarEmes-PersonalIculi).
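
To make the logarithmic retrieval model concrete, the short Python sketch below scores a sign by the arithmetic mean of the log-popularity of its component phonemes, so that signs built from rarer phonemes score lower (i.e., are easier to retrieve). The phoneme labels and counts are invented for illustration and are not drawn from the Libras corpus.

```python
# Toy illustration of the logarithmic retrieval model: retrieval difficulty is the
# mean log-popularity of a sign's phonemes (hypothetical counts, not corpus data).
import math

phoneme_popularity = {            # how many corpus signs use each phoneme (invented)
    "handshape_B": 950, "handshape_claw": 40,
    "location_chest": 700, "location_temple": 60,
    "movement_circular": 300,
}

def retrieval_difficulty(sign_phonemes):
    """Mean log-popularity of the sign's phonemes; lower means easier to retrieve."""
    return sum(math.log(phoneme_popularity[p]) for p in sign_phonemes) / len(sign_phonemes)

common_sign = ["handshape_B", "location_chest", "movement_circular"]
rare_sign = ["handshape_claw", "location_temple", "movement_circular"]
print(retrieval_difficulty(common_sign), retrieval_difficulty(rare_sign))
# the sign built from rarer (more distinctive) phonemes scores lower, i.e. is easier to retrieve
```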

Keywords: Brazilian sign language, lexical retrieval, libras sign, sign phonology

Procedia PDF Downloads 331
26126 A Simulation-Based Investigation of the Smooth-Wall, Radial Gravity Problem of Granular Flow through a Wedge-Shaped Hopper

Authors: A. F. Momin, D. V. Khakhar

Abstract:

Granular materials consist of discrete particles found in nature and in various industries that, when flowing under gravity, behave macroscopically like liquids. A fundamental industrial unit operation is a hopper with inclined walls, or a converging channel, in which material flows downward under gravity and exits the storage bin through the bottom outlet. The simplest form of the flow corresponds to a wedge-shaped, quasi-two-dimensional geometry with smooth walls and a radially directed gravitational force toward the apex of the wedge. These flows were examined using the Mohr-Coulomb criterion in the classic work of Savage (1965), while Ravi Prakash and Rao (1988) used critical state theory. The smooth-wall, radial gravity (SWRG) wedge-shaped hopper is simulated using the discrete element method (DEM) to test the existing theories. DEM simulations involve the solution of Newton's equations, taking particle-particle interactions into account, to compute the stress and velocity fields of the flow in the SWRG system. Our computational results are consistent with the predictions of Savage (1965) and Ravi Prakash and Rao (1988), except for the region near the exit, where both viscous and frictional effects are present. To further understand this behaviour, a parametric analysis of the rheology of wedge-shaped hoppers is carried out by varying the orifice diameter, wedge angle, friction coefficient, and stiffness. The conclusion is that velocity increases as the flow rate increases but decreases as the wedge angle and friction coefficient increase. We observed no substantial changes in velocity due to varying stiffness. It is anticipated that stresses at the exit result from the transfer of momentum during particle collisions; for this reason, relationships between viscosity and shear rate are shown, and all data collapse onto a single curve. In addition, it is demonstrated that viscosity and volume fraction exhibit power-law correlations with the inertial number, and that all the data collapse onto a single curve. A continuum model for describing granular flows is presented using these empirical correlations.

Keywords: discrete element method, gravity flow, smooth-wall, wedge-shaped hoppers

Procedia PDF Downloads 71
26125 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network

Authors: Gulfam Haider, Sana Danish

Abstract:

Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizers profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work aims to address the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation lies in examining the performance of different optimizers when employed in conjunction with the popular activation function, Rectified Linear Unit (ReLU). By incorporating ReLU, known for its favorable properties in prior research, the aim is to bolster the effectiveness of the optimizers under scrutiny. Specifically, we evaluate the adjustment of these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments are conducted using a well-established benchmark dataset for image classification tasks, namely the Canadian Institute for Advanced Research dataset (CIFAR-10). The selected optimizers for investigation encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis encompasses a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state-of-the-art in the field of image classification with convolutional neural networks.
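
A hedged sketch of the experimental setup described above is given below: a small convolutional network with ReLU activations and a softmax output is trained on CIFAR-10 under each of the five optimizers, and its test accuracy is reported. The architecture, epoch count, and batch size are illustrative choices rather than the settings used in the study.

```python
# Compare optimizers on CIFAR-10 with a small ReLU CNN (illustrative configuration).
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def build_cnn():
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])

for name in ["adam", "rmsprop", "adadelta", "adagrad", "sgd"]:
    model = build_cnn()
    model.compile(optimizer=name, loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5, batch_size=128, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    print(f"{name}: test accuracy {acc:.3f}")
```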

Keywords: deep neural network, optimizers, RMSprop, ReLU, stochastic gradient descent

Procedia PDF Downloads 106
26124 Researching Servant Leadership Behaviors of Sport Managers

Authors: Betul Altinok

Abstract:

The aim of this study is to investigate the servant leadership behaviors of sport managers. For this purpose, the servant leadership behaviors of sport managers (N=69) working as deans, school principals, and heads of department in sport sciences faculties, physical education and sport schools, and departments offering physical education and sport programs were investigated via questionnaires administered to the academicians (N=1185) working in these institutions. The Servant Leadership Questionnaire was sent via e-mail to all academicians working in the physical education and sport faculties, schools, and departments of universities in Turkey. The 406 surveys that were returned and accurately completed by academicians were evaluated. In this study, the Servant Leadership Questionnaire developed, and subjected to validity and reliability analysis, by Barbuto and Wheeler (2006) was used to investigate sport managers' servant leadership behaviors. The scale was translated into Turkish, and validity and reliability analyses were then conducted. After the measurement model of the Servant Leadership Questionnaire was verified, the Shapiro-Wilk normality test was applied to the obtained data to determine whether they had a normal distribution, depending on gender, job title, length of service, department, and evaluated manager. The normality test showed that the data did not have a normal distribution (nonparametric). After the normality test, the Mann-Whitney U test was applied at the 0.05 level to determine whether servant leadership scores differed according to gender, and the Kruskal-Wallis test was applied at the 0.05 level to determine whether servant leadership scores differed according to job title, length of service, department, and evaluated manager. The test results showed no differences between the Altruistic Calling (p>0.05), Emotional Healing (p>0.05), Wisdom (p>0.05), Persuasive Mapping (p>0.05), and Organizational Stewardship (p>0.05) sub-dimensions according to gender. Likewise, the test results showed no differences between the Altruistic Calling (p>0.05), Emotional Healing (p>0.05), Wisdom (p>0.05), Persuasive Mapping (p>0.05), and Organizational Stewardship (p>0.05) sub-dimensions according to job title, length of service, department, and evaluated manager. In the light of these results, it can be said that the applied survey is objective and reveals the evaluated managers' servant leadership behaviors. The empirical and practical contribution of this study is that it tests sport managers' servant leadership behaviors in Turkey for the very first time.

Keywords: academicians, management, servant leadership, sport

Procedia PDF Downloads 294
26123 Integration of Knowledge and Metadata for Complex Data Warehouses and Big Data

Authors: Jean Christian Ralaivao, Fabrice Razafindraibe, Hasina Rakotonirainy

Abstract:

This document constitutes a resumption of work carried out in the field of complex data warehouses (DW) relating to the management and formalization of knowledge and metadata. It offers a methodological approach for integrating two concepts, knowledge and metadata, within the framework of a complex DW architecture. The work considers the use of knowledge representation by description logics and the extension of the Common Warehouse Metamodel (CWM) specifications. This is expected to yield gains in the performance of a complex DW. Three essential aspects of this work are expected, including the representation of knowledge in description logics and the translation of this knowledge into consistent UML diagrams, while respecting or extending the CWM specifications and using XML as a pivot. The field of application is large but will be adapted to systems with heterogeneous, complex, and unstructured content, and moreover requiring a great (re)use of knowledge, such as medical data warehouses.

Keywords: data warehouse, description logics, integration, knowledge, metadata

Procedia PDF Downloads 126
26122 Data Analytics in Energy Management

Authors: Sanjivrao Katakam, Thanumoorthi I., Antony Gerald, Ratan Kulkarni, Shaju Nair

Abstract:

With increasing energy costs and their impact on business, sustainability today has evolved from a social expectation to an economic imperative. Therefore, finding methods to reduce cost has become a critical directive for industry leaders. Effective energy management is the only way to cut costs. However, energy management has been a challenge because it requires a change in old habits and in legacy systems followed for decades. Today, exorbitant volumes of energy and operational data are being captured and stored by industries, but they are unable to convert these structured and unstructured data sets into meaningful business intelligence. It must be noted that for quick decisions, organizations must learn to cope with large volumes of operational data in different formats. Energy analytics not only helps in extracting inferences from these data sets, but is also instrumental in the transformation from old approaches to energy management to new ones. This in turn assists in effective decision making for implementation. Organizations require an established corporate strategy for reducing operational costs through visibility and optimization of energy usage. Energy analytics plays a key role in the optimization of operations. The paper describes how energy data analytics is today extensively used in different scenarios, such as reducing operational costs, predicting energy demand, optimizing network efficiency, asset maintenance, improving customer insights, and device data insights. The paper also highlights how analytics helps transform insights obtained from energy data into sustainable solutions. The paper utilizes data from an array of segments such as the retail, transportation, and water sectors.

Keywords: energy analytics, energy management, operational data, business intelligence, optimization

Procedia PDF Downloads 351
26121 Efficient Frequent Itemset Mining Methods over Real-Time Spatial Big Data

Authors: Hamdi Sana, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, there has been a huge increase in the use of spatio-temporal applications where data and queries are continuously moving. As a result, the need to process real-time spatio-temporal data is clear, and real-time stream data management has become a hot topic. The sliding window model and frequent itemset mining over dynamic data are among the most important problems in the context of data mining. The sliding window model is widely used for frequent itemset mining over data streams due to its emphasis on recent data and its bounded memory requirement. Existing methods use the traditional transaction-based sliding window model, where the window size is based on a fixed number of transactions. This model assumes that all transactions arrive at a constant rate, which is not suited to real-time applications, and using it in such applications endangers their performance. Based on these observations, this paper relaxes the notion of window size and proposes the use of a timestamp-based sliding window model. In our proposed frequent itemset mining algorithm, support conditions are used to differentiate frequent from infrequent patterns. Thereafter, a tree is developed to incrementally maintain the essential information. We evaluate our contribution, and the preliminary results are quite promising.
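
The contrast between the two window models can be sketched briefly: in the timestamp-based model below, the window retains whatever transactions arrived in the last `window_span` time units, however many there are, and itemset supports are incremented on arrival and decremented on expiry. The stream, span, and support threshold are invented for illustration, and only 1- and 2-itemsets are counted to keep the sketch short.

```python
# Minimal timestamp-based sliding window for frequent itemset counting (illustrative).
from collections import deque, Counter
from itertools import combinations

window_span = 10.0          # time units of history kept in the window
min_support = 2             # support threshold separating frequent from infrequent

window = deque()            # (timestamp, transaction) pairs currently in the window
counts = Counter()          # itemset -> support within the window

def add_transaction(ts, items):
    window.append((ts, frozenset(items)))
    for k in (1, 2):                                   # count 1- and 2-itemsets for brevity
        for itemset in combinations(sorted(items), k):
            counts[itemset] += 1
    # expire transactions that fell out of the time window
    while window and window[0][0] < ts - window_span:
        _, old = window.popleft()
        for k in (1, 2):
            for itemset in combinations(sorted(old), k):
                counts[itemset] -= 1

stream = [(0.0, {"a", "b"}), (3.0, {"a", "c"}), (12.0, {"a", "b"}), (15.0, {"b", "c"})]
for ts, items in stream:
    add_transaction(ts, items)

frequent = {s: c for s, c in counts.items() if c >= min_support}
print(frequent)
```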

Keywords: real-time spatial big data, frequent itemset, transaction-based sliding window model, timestamp-based sliding window model, weighted frequent patterns, tree, stream query

Procedia PDF Downloads 146
26120 A Model of Teacher Leadership in History Instruction

Authors: Poramatdha Chutimant

Abstract:

The objective of the research was to propose a model of teacher leadership in history instruction for utilization. Everett M. Rogers' Diffusion of Innovations Theory is applied as the theoretical framework. A qualitative method is used in the study, with an interview protocol as the instrument to collect primary data from best-practice teachers awarded by the Office of the National Education Commission (ONEC). Open-ended questions are used in the interview protocol in order to gather varied data. Information on the international context of history instruction serves as the secondary data used to support the summarizing process (content analysis). A dendrogram is the key to interpreting and synthesizing the primary data, with the secondary data serving as supportive material for explanation and elaboration. In-depth interviews are used to collect information from seven experts in the educational field. Finally, the focal point is to validate a draft model in terms of future utilization.

Keywords: history study, nationalism, patriotism, responsible citizenship, teacher leadership

Procedia PDF Downloads 270
26119 The Effect of Institutions on Economic Growth: An Analysis Based on Bayesian Panel Data Estimation

Authors: Mohammad Anwar, Shah Waliullah

Abstract:

This study investigated panel data regression models. The paper used Bayesian and classical methods to study the impact of institutions on economic growth using data from 1990-2014, especially in developing countries. Under both the classical and the Bayesian methodology, two panel data models were estimated: common effects and fixed effects. For the Bayesian approach, prior information is used in this paper, and a normal-gamma prior is adopted for the panel data models. The analysis was carried out with the WinBUGS14 software. The estimated results of the study showed that panel data models are valid models in the Bayesian methodology. In the Bayesian approach, all independent variables had positive and significant effects on the dependent variable. Based on the standard errors of all models, the fixed effect model is the best model in the Bayesian estimation of panel data models. It was also shown that the fixed effect model has the lowest standard error compared with the other models.

Keywords: Bayesian approach, common effect, fixed effect, random effect, Dynamic Random Effect Model

Procedia PDF Downloads 61
26118 Diagnosis of the Heart Rhythm Disorders by Using Hybrid Classifiers

Authors: Sule Yucelbas, Gulay Tezel, Cuneyt Yucelbas, Seral Ozsen

Abstract:

In this study, an attempt was made to identify some heart rhythm disorders from electrocardiography (ECG) data taken from the MIT-BIH arrhythmia database by extracting the required features and presenting them to artificial neural network (ANN), artificial immune system (AIS), artificial neural network based on artificial immune system (AIS-ANN), and particle swarm optimization based artificial neural network (PSO-ANN) classifier systems. The main purpose of this study is to evaluate the performance of the hybrid AIS-ANN and PSO-ANN classifiers with regard to ANN and AIS. For this purpose, the normal sinus rhythm (NSR), atrial premature contraction (APC), sinus arrhythmia (SA), ventricular trigeminy (VTI), ventricular tachycardia (VTK), and atrial fibrillation (AF) data for each of the RR intervals were obtained. These data were then arranged in pairs (NSR-APC, NSR-SA, NSR-VTI, NSR-VTK, and NSR-AF), the discrete wavelet transform was applied to each of the two groups of data in a pair, and two different data sets with 9 and 27 features were obtained from each pair after data reduction. Afterwards, the data were first randomly shuffled within themselves, and then the 4-fold cross-validation method was applied to create the training and testing data. The training and testing accuracy rates and the training times were compared with each other. As a result, the performances of the hybrid classification systems, AIS-ANN and PSO-ANN, were seen to be close to the performance of the ANN system, and the results of the hybrid systems were much better than those of AIS. However, ANN had a much shorter training time than the other systems; in terms of training time, ANN was followed by PSO-ANN, AIS-ANN, and AIS, respectively. The features extracted from the data also affected the classification results significantly.
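
An illustrative pipeline in the spirit of this study is sketched below: discrete wavelet features are extracted from short signal segments and a neural-network classifier is evaluated with 4-fold cross-validation. Synthetic waveforms stand in for the MIT-BIH records, and scikit-learn's MLP replaces the AIS/PSO hybrids, for which no off-the-shelf implementation is assumed here.

```python
# DWT feature extraction + neural-network classification with 4-fold CV (illustrative data).
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def dwt_features(segment, wavelet="db4", level=3):
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    # simple per-band statistics as the reduced feature set
    return np.array([f(c) for c in coeffs for f in (np.mean, np.std, np.max)])

# Synthetic "normal" vs "arrhythmic" segments (placeholders for real annotated beats)
normal = [np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.05 * rng.standard_normal(256) for _ in range(100)]
abnormal = [np.sin(np.linspace(0, 8 * np.pi, 256) ** 1.1) + 0.2 * rng.standard_normal(256) for _ in range(100)]

X = np.array([dwt_features(s) for s in normal + abnormal])
y = np.array([0] * 100 + [1] * 100)

clf = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))
print(cross_val_score(clf, X, y, cv=4).mean())   # 4-fold cross-validation accuracy
```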

Keywords: AIS, ANN, ECG, hybrid classifiers, PSO

Procedia PDF Downloads 429
26117 Mild Auditory Perception and Cognitive Impairment in mid-Trimester Pregnancy

Authors: Tahamina Begum, Wan Nor Azlen Wan Mohamad, Faruque Reza, Wan Rosilawati Wan Rosli

Abstract:

Assessing auditory perception and cognitive function during pregnancy is necessary, as pregnant women need extra attentional effort, mainly for executive function, to maintain their quality of life. This study aimed to investigate the neural correlates of cognitive and behavioral processing during mid-trimester pregnancy. Event-related potentials (ERPs) were studied using a 128-sensor net, and the PAS or COWA (Controlled Oral Word Association), WCST (Wisconsin Card Sorting Test), and RAVLT (Rey Auditory Verbal Learning Test: immediate or interference recall (RAVLTIM), delayed recall (RAVLT DR), and total score (RAVLT TS)) were used for neuropsychological assessment. In total, 18 subjects were recruited (n=9 in each group: control and pregnant). All participants in the pregnant group were within 16-27 weeks of gestation (mid trimester). Age- and education-matched healthy control subjects were recruited into the control group. Participants were given a standardized test of auditory cognitive function in the form of an auditory oddball paradigm during the ERP study. In this paradigm, two different auditory stimuli (standard and target) were used; subjects silently counted only the target stimuli, attending to them while ignoring the standard stimuli. Mean differences between target and standard stimuli were compared across groups. The N100 (auditory sensory ERP component) and P300 (auditory cognitive ERP component) were recorded at the T3, T4, T5, T6, Cz, and Pz electrode sites. An equal number of electrodes showed non-significantly shorter amplitudes of the N100 component (except significantly shorter at T3, P=0.05) and non-significantly longer latencies (except a significantly longer latency at T5, P=0.008) of the N100 component in the pregnant group compared with controls. For the P300 component, most electrode sites showed non-significantly higher amplitudes, and an equal number of sites showed non-significantly shorter latencies, in the pregnant group compared with controls. The neuropsychological results revealed non-significantly higher PAS scores and lower WCST, RAVLTIM, and RAVLT DR scores in the pregnant group compared with controls. The results for the N100 component and the RAVLT scores lead to the conclusion that auditory perception is mildly impaired, and the P300 component indicates very mild cognitive dysfunction with good executive function, in the second trimester of pregnancy.

Keywords: auditory perception, pregnancy, stimuli, trimester

Procedia PDF Downloads 363
26116 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification

Authors: Hung-Sheng Lin, Cheng-Hsuan Li

Abstract:

Over the past few years, kernel-based algorithms have been widely used to extend some linear feature extraction methods, such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE), to their nonlinear versions, kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect the nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function. Moreover, they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and has good performance in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique. For each sample, there are two corresponding nearest proportions of samples: the self-class nearest proportion and the other-class nearest proportion. The term 'nearest proportion' used here considers both local information and more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small-sample situations. Hence, an improved estimator obtained by shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to retaining the advantages of DNP, KDNP surpasses DNP in the experimental results. According to the experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.

Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest feature extraction

Procedia PDF Downloads 330
26115 Prevalence of Occupational Asthma Diagnosed by Specific Challenge Test in 5 Different Working Environments in Thailand

Authors: Sawang Saenghirunvattana, Chao Saenghirunvattana, Maria Christina Gonzales, Wilai Srimuk, Chitchamai Siangpro, Kritsana Sutthisri

Abstract:

Introduction: Thailand is one of the fastest growing countries in Asia. It has emerged from an agricultural into an industrialized economy. Workplaces have shifted from farms to factories, offices, and streets, where employees are exposed to certain chemicals and pollutants causing occupational diseases, particularly asthma. Work-related diseases are a major concern, and many studies have been published demonstrating that certain professions and their exposures elevate the risk of asthma. Workers who exhibit coughing, wheezing, and difficulty breathing are brought to a health care setting where a pulmonary function test (PFT) is performed and, based on the results, they are then diagnosed with asthma. These patients, known to have occupational asthma, eventually recover when removed from exposure to the environment. Our study focused on performing the PFT, or specific challenge test, to diagnose workers with occupational asthma while they executed the test within their workplace, maintaining the environment and their daily exposure to certain levels of chemicals and pollutants. This has provided us with an understanding and a reliable diagnosis of occupational asthma. Objective: To identify the prevalence of Thai workers who develop asthma caused by exposure to pollutants and chemicals from their working environment by conducting interviews and performing the PFT, or specific challenge test, in their workplaces. Materials and Methods: This study was performed from January to March 2015 in Bangkok, Thailand. The percentage of abnormal symptoms among 940 workers in 5 different areas (plastic, fertilizer, and animal food factories, offices, and streets) was collected through a questionnaire. Demographic information, occupational history, and state of health were determined using a questionnaire and checklists. The PFT was executed in their workplaces, and the results were measured and evaluated. Results: The pulmonary function test was performed by 940 participants. The specific challenge test was done in plastic, fertilizer, and animal food factories, in an office environment, and on the streets of Thailand. Of the 100 participants working in the plastic industry, 65% complained of respiratory symptoms; none of them had an abnormal PFT. Of the 200 participants who worked with fertilizers and were exposed to sulfur dioxide, 20% complained of symptoms and 8% had an abnormal PFT. Of the 300 subjects working with animal food, 45% complained of respiratory symptoms and 15% had abnormal PFT results. In the office environment, where there is indoor pollution, 7% of the 140 subjects had symptoms and 4% had an abnormal PFT. Of the 200 workers exposed to traffic pollution, 24% reported respiratory symptoms and 12% had an abnormal PFT. Conclusion: We were able to identify and diagnose participants with occupational asthma through abnormal lung function tests performed at their workplaces. The chemical agents and exposures were determined; therefore, for effective management, workers with occupational asthma were advised to avoid further exposure for a better chance of recovery. Further studies identifying the risk factors and causative agents of asthma in workplaces should be developed to encourage interventional strategies and programs that will prevent occupation-related diseases, particularly asthma.

Keywords: occupational asthma, pulmonary function test, specific challenge test, Thailand

Procedia PDF Downloads 293
26114 Topic Modelling Using Latent Dirichlet Allocation and Latent Semantic Indexing on SA Telco Twitter Data

Authors: Phumelele Kubheka, Pius Owolawi, Gbolahan Aiyetoro

Abstract:

Twitter is one of the most popular social media platforms where users can share their opinions on different subjects. As of 2010, the Twitter platform generates more than 12 terabytes of data daily, roughly 4.3 petabytes in a single year. For this reason, Twitter is a great source for big data mining. Many industries, such as telecommunication companies, can leverage the availability of Twitter data to better understand their markets and make appropriate business decisions. This study performs topic modeling on Twitter data using Latent Dirichlet Allocation (LDA). The obtained results are benchmarked against another topic modeling technique, Latent Semantic Indexing (LSI). The study aims to retrieve topics from a Twitter dataset containing user tweets on South African Telcos. Results from this study show that LSI is much faster than LDA. However, LDA yields better results, with topic coherence higher by 8% for the best-performing model represented in Table 1. A higher topic coherence score indicates better performance of the model.
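
A minimal sketch of this comparison using gensim is shown below: both an LDA and an LSI model are fitted on a bag-of-words corpus and scored with the c_v topic coherence measure. The four tweets are placeholders rather than the South African Telco dataset, and the topic count and pass settings are illustrative.

```python
# Fit LDA and LSI on a tiny placeholder corpus and compare topic coherence.
from gensim import corpora
from gensim.models import LdaModel, LsiModel
from gensim.models.coherencemodel import CoherenceModel

tweets = [
    "network coverage terrible since yesterday",
    "great data bundle promotion this month",
    "customer service never answers the phone",
    "upgraded my contract and the network speed is great",
]
texts = [t.lower().split() for t in tweets]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10, random_state=0)
lsi = LsiModel(corpus, num_topics=2, id2word=dictionary)

# topic coherence (c_v) is the metric used to compare the two models
for name, model in [("LDA", lda), ("LSI", lsi)]:
    cm = CoherenceModel(model=model, texts=texts, dictionary=dictionary, coherence="c_v")
    print(name, cm.get_coherence())
```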

Keywords: big data, latent Dirichlet allocation, latent semantic indexing, telco, topic modeling, twitter

Procedia PDF Downloads 142
26113 Enhance the Power of Sentiment Analysis

Authors: Yu Zhang, Pedro Desouza

Abstract:

Since big data has become substantially more accessible and manageable due to the development of powerful tools for dealing with unstructured data, people are eager to mine information from social media resources that could not be handled in the past. Sentiment analysis, as a novel branch of text mining, has in the last decade become increasingly important in marketing analysis, customer risk prediction and other fields. Scientists and researchers have undertaken significant work in creating and improving their sentiment models. In this paper, we present a concept of selecting appropriate classifiers based on the features and qualities of data sources by comparing the performances of five classifiers with three popular social media data sources: Twitter, Amazon Customer Reviews, and Movie Reviews. We introduced a couple of innovative models that outperform traditional sentiment classifiers for these data sources, and provide insights on how to further improve the predictive power of sentiment analysis. The modelling and testing work was done in R and Greenplum in-database analytic tools.

Keywords: sentiment analysis, social media, Twitter, Amazon, data mining, machine learning, text mining

Procedia PDF Downloads 337
26112 Empirical Examination of High Performance Work System, Organizational Commitment and Organizational Citizenship Behavior: A Mediation Model of Vietnam Organizations

Authors: Giang Vu, Duong Nguyen, Yuan-Ling Chen

Abstract:

Vietnam is a fast-developing country with high economic growth, and Vietnamese organizations strive to utilize high performance work systems (HPWS) to reinforce employee in-role performance. HPWS, a bundle of human resource (HR) practices, is composed of eight sets of HR practices, namely selective staffing, extensive training, internal mobility, employment security, clear job descriptions, result-oriented appraisal, incentive rewards, and participation. However, whether HPWS stimulates employee extra-role behaviors remains understudied in a booming economic context. In this study, we aim to investigate organizational citizenship behavior (OCB) in the Vietnamese context and, as the central issue, disentangle how HPWS elicits OCB in employees. Recently, deliberation on the so-called 'black-box' issue of HPWS has explored the role of employee commitment, suggesting that organizational commitment is a compelling source of employee OCB. We draw upon social exchange theory to predict that when employees perceive organizational investment, such as HPWS, in heightening their abilities, knowledge, and motivation, they are more likely to pay it back with commitment; consequently, they will take the initiative in OCB. Hence, we hypothesize an individual-level framework in which organizational commitment mediates the positive relationship between HPWS and OCB. We collected data on HPWS, organizational commitment, OCB, and demographic variables, all from line managers of Vietnamese firms in Hanoi and Ho Chi Minh City. We conclude with research findings, implications, and future research suggestions.

Keywords: high performance work system, organizational citizenship behavior, organizational commitment, Vietnam

Procedia PDF Downloads 295
26111 Real-Time Big-Data Warehouse: A Next-Generation Enterprise Data Warehouse and Analysis Framework

Authors: Abbas Raza Ali

Abstract:

Big Data technology is gradually becoming a dire need of large enterprises. These enterprises are generating massively large amounts of off-line and streaming data in both structured and unstructured formats on a daily basis. It is a challenging task to effectively extract useful insights from such large-scale datasets; sometimes it even becomes a technology constraint to manage a transactional data history of more than a few months. This paper presents a framework to efficiently manage massively large and complex datasets. The framework has been tested on a communication service provider producing massively large, complex streaming data in binary format. The communication industry is bound by regulators to manage the history of its subscribers' call records, where every action of a subscriber generates a record. Managing and analyzing transactional data also allows service providers to better understand their customers' behavior; for example, deep packet inspection requires transactional internet usage data to explain the internet usage behaviour of subscribers. However, current relational database systems limit service providers to maintaining history only at a semantic level, aggregated per subscriber. The framework addresses these challenges by leveraging Big Data technology, which optimally manages and allows deep analysis of complex datasets. The framework has been applied to offload the service provider's existing Intelligent Network Mediation and relational Data Warehouse onto Big Data. The service provider has a subscriber base of 50+ million with yearly growth of 7-10%. The end-to-end process, which involves binary-to-ASCII decoding of call detail records, stitching of all the interrogations against a call (transformations), and aggregation of all the call records of a subscriber, takes no more than 10 minutes.

Keywords: big data, communication service providers, enterprise data warehouse, stream computing, Telco IN Mediation

Procedia PDF Downloads 163
26110 Programming with Grammars

Authors: Peter M. Maurer

Abstract:

DGL is a context-free grammar-based tool for generating random data. Many types of simulator input data require some computation to be placed in the proper format. For example, it might be necessary to generate ordered triples in which the third element is the sum of the first two elements, or it might be necessary to generate random numbers in some sorted order. Although DGL is universal in computational power, generating these types of data is extremely difficult. To overcome this problem, we have enhanced DGL to include features that permit direct computation within the structure of a context-free grammar. The features have been implemented as special types of productions, preserving the context-free flavor of DGL specifications.
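
The idea of embedding computation in random data generation can be illustrated outside DGL itself. The Python sketch below mimics the two examples in the abstract: ordered triples whose third element is the sum of the first two, and random numbers emitted in sorted order. It illustrates the concept only and does not reproduce DGL's actual production syntax.

```python
# Grammar-style random data generation with "computational" productions (conceptual sketch).
import random

def gen_triple():
    a = random.randint(0, 9)        # ordinary random productions
    b = random.randint(0, 9)
    c = a + b                       # computational production: third element is the sum
    return (a, b, c)

def gen_sorted_numbers(n=5):
    # the second case from the abstract: random numbers emitted in sorted order
    return sorted(random.randint(0, 99) for _ in range(n))

print(gen_triple())
print(gen_sorted_numbers())
```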

Keywords: DGL, Enhanced Context Free Grammars, Programming Constructs, Random Data Generation

Procedia PDF Downloads 133
26109 A Model Architecture Transformation with Approach by Modeling: From UML to Multidimensional Schemas of Data Warehouses

Authors: Ouzayr Rabhi, Ibtissam Arrassen

Abstract:

To provide a complete analysis of the organization and to help decision-making, leaders need relevant data; data warehouses (DW) are designed to meet such needs. However, designing a DW is not trivial, and there is no formal method to derive a multidimensional schema from heterogeneous databases. In this article, we present a model-driven approach to the design of data warehouses. We describe a multidimensional meta-model and also specify a set of transformations starting from a Unified Modeling Language (UML) metamodel. In this approach, the UML metamodel and the multidimensional one are both considered platform-independent models (PIM). The first meta-model is mapped into the second one through transformation rules carried out in the Query/View/Transformation (QVT) language. This proposal is validated through the application of our approach to generating a multidimensional schema of a Balanced Scorecard (BSC) DW. We are interested in the BSC perspectives, which are highly linked to the vision and strategies of an organization.

Keywords: data warehouse, meta-model, model-driven architecture, transformation, UML

Procedia PDF Downloads 145
26108 Creating a Multilevel ESL Learning Community for Adults

Authors: Gloria Chen

Abstract:

When offering conventional level-appropriate ESL classes for adults is not feasible, a multilevel adult ESL class can be formed to benefit those who need to learn English for daily function. This paper examines the rationale, the process, the contents, and the outcomes of a multilevel ESL class for adults. The action research discusses a variety of assessments, lesson plans, and teaching strategies that facilitate lifelong language learning. In small towns where adult ESL learners number only a handful, advanced students and inexperienced students often have to be placed in one class. Such a class might not be viewed as desirable, but with ongoing assessments, careful lesson plans, and purposeful strategies, a multilevel ESL class for adults can overcome the obstacles and help learners reach a higher level of English proficiency. This research explores some hands-on strategies, such as group rotation, cooperative learning, and modifying textbook contents for practical purposes, and evaluates their effectiveness. The data collected in this research include a Needs Assessment (beginning of the class term), a Mid-term Self-Assessment (5 months into the class term), an End-of-term Student Reflection (10 months into the class), and an End-of-term Assessment from the Instructor (10 months into the class). A descriptive analysis of the data explains the practice of this particular learning community and reveals the areas for improvement and enrichment. This research answers the following questions: (1) How do the assessments help both learners and instructors? (2) How do the learning strategies prepare students to become independent, lifelong English learners? (3) How do materials, grouping, and class schedule enhance the learning? The results of the research contribute to the field of language teaching and learning, not limited to English, by (a) examining strategies for conducting a multilevel adult class, (b) involving adult language learners with various backgrounds and learning styles in reflection and feedback, and (c) improving teaching and learning strategies based on the research methods and results. One unique feature of this research is how students can work together with the instructor to form a learning community, seeking and exploring the resources available to them, to become lifelong language learners.

Keywords: adult language learning, assessment, multilevel, teaching strategies

Procedia PDF Downloads 339
26107 Applying (1, T) Ordering Policy in a Multi-Vendor-Single-Buyer Inventory System with Lost Sales and Poisson Demand

Authors: Adel Nikfarjam, Hamed Tayebi, Sadoullah Ebrahimnejad

Abstract:

This paper considers a two-echelon inventory system with a number of warehouses and a single retailer. The retailer replenishes its required items from the warehouses and assembles them into a single final product. We assume that each warehouse supplies only one kind of raw material to the retailer. The demand process for the final product is assumed to be Poisson, and unsatisfied demand for the final product is lost. The retailer applies a one-for-one-period ordering policy, also known as the (1, T) ordering policy. In this policy, the retailer orders from each warehouse a fixed quantity of each item at fixed time intervals, where the fixed quantity equals the amount of the item used in the final product. Since this policy eliminates all demand uncertainty at the upstream echelon, the standard lot sizing model can be applied at all warehouses. In this paper, we calculate the total cost function of the inventory system. Then, based on this function, we present a procedure to obtain the optimal time interval between two consecutive order placements from the retailer to the warehouses, and the optimal order quantities of the warehouses (assuming positive ordering costs at the warehouses). Finally, we present some numerical examples and conduct a numerical sensitivity analysis on the cost parameters.

Keywords: two-echelon supply chain, multi-vendor-single-buyer inventory system, lost sales, Poisson demand, one-for-one-period policy, lot sizing model

Procedia PDF Downloads 297
26106 Effect of Recruitment and Selection on Employee Performance in Hospitality Industries

Authors: Yusuf A. Bako, Olubunmi O. Kolawole

Abstract:

This study sought to establish the effect of recruitment and selection on employee performance in the hospitality industry. The success of any organization in the modern business environment depends on the caliber of the manpower that steers the affairs of the organization. History has shown that recruitment and selection, as a function of human resource management practices, play a pivotal role in determining the level of employee performance in an organization. The hospitality industry has been faced with performance challenges due to unconventional selection and placement practices, such as poor candidate-selection policy, inconsistency in the selection process, sidestepping of employment tests and interviews, godfatherism, and regional bias in the selection process. The overall objective of the study was to determine how recruitment and selection affect employee performance in the hospitality industry in Ogun State, Nigeria. This study adopts a descriptive and inferential research design, with the population drawn from leading hotels in Ogun State, Nigeria. The sample size was 100 employees; a questionnaire was used to collect data, and Cronbach's alpha was used to test the instrument. The results of the study reveal that the correlation between employee performance and recruitment and selection was highly significant.

Keywords: employee performance, human resources management, practices, recruitment, selection

Procedia PDF Downloads 354
26105 Secured Embedding of Patient’s Confidential Data in Electrocardiogram Using Chaotic Maps

Authors: Butta Singh

Abstract:

This paper presents a chaotic map based approach for the secure embedding of a patient's confidential data in an electrocardiogram (ECG) signal. The chaotic map generates predefined locations through the use of selective control parameters. The sample value difference method effectively hides the confidential data in ECG sample pairs at these predefined locations. Evaluation of the proposed method on all 48 records of the MIT-BIH arrhythmia ECG database demonstrates that the embedding does not alter the diagnostic features of the cover ECG. The imperceptibility of the secret data in the stego-ECG is evident through various statistical and clinical performance measures. The statistical metrics comprise the Percentage Root Mean Square Difference (PRD) and the Peak Signal-to-Noise Ratio (PSNR). Further, a comparative analysis between the proposed method and existing approaches was also performed. The results clearly demonstrated the superiority of the proposed method.
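
The two ingredients described above can be sketched as follows: a logistic (chaotic) map seeded with a secret key generates the embedding locations, and a simplified sample-value-difference rule hides one bit per ECG sample pair, with PRD used to check distortion. The map parameter, embedding rule, step size, and synthetic ECG are illustrative assumptions rather than the paper's exact scheme.

```python
# Logistic-map location generation + simplified sample-value-difference embedding (sketch).
import numpy as np

def logistic_locations(key, n_bits, n_samples):
    """Generate n_bits distinct pair locations from the chaotic map x -> r x (1 - x)."""
    x, r = key, 3.99
    locs, seen = [], set()
    while len(locs) < n_bits:
        x = r * x * (1 - x)
        idx = int(x * (n_samples // 2 - 1)) * 2          # even index -> sample pair (idx, idx+1)
        if idx not in seen:
            seen.add(idx)
            locs.append(idx)
    return locs

def embed(ecg, bits, key, delta=0.004):
    stego = ecg.copy()
    for idx, bit in zip(logistic_locations(key, len(bits), len(ecg)), bits):
        diff = stego[idx + 1] - stego[idx]
        # nudge the pair difference so its sign encodes the bit (simplified illustrative rule)
        stego[idx + 1] = stego[idx] + abs(diff) if bit else stego[idx] - abs(diff)
        if abs(diff) < delta:                            # guarantee a detectable difference
            stego[idx + 1] += delta if bit else -delta
    return stego

ecg = 0.5 * np.sin(np.linspace(0, 20 * np.pi, 2000))     # placeholder for an MIT-BIH record
secret_bits = [1, 0, 1, 1, 0, 0, 1]
stego = embed(ecg, secret_bits, key=0.3741)
prd = 100 * np.sqrt(np.sum((ecg - stego) ** 2) / np.sum(ecg ** 2))   # PRD distortion metric
print(f"PRD = {prd:.4f} %")
```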

Keywords: chaotic maps, ECG steganography, data embedding, electrocardiogram

Procedia PDF Downloads 178
26104 Developing a Translator Career Path: Based on the Dreyfus Model of Skills Acquisition

Authors: Noha A. Alowedi

Abstract:

This paper proposes a Translator Career Path (TCP) which is based on the Dreyfus Model of Skills Acquisition as the conceptual framework. In this qualitative study, the methodology to collect and analyze the data takes an inductive approach that draws upon the literature to form the criteria for the different steps in the TCP. This path is based on descriptors of expert translator performance and best employees’ practice documented in the literature. Each translator skill will be graded as novice, advanced beginner, competent, proficient, and expert. Consequently, five levels of translator performance are identified in the TCP as five ranks. The first rank is the intern translator, which is equivalent to the novice level; the second rank is the assistant translator, which is equivalent to the advanced beginner level; the third rank is the associate translator, which is equivalent to the competent level; the fourth rank is the translator, which is equivalent to the proficient level; finally, the fifth rank is the expert translator, which is equivalent to the expert level. The main function of this career path is to guide the processes of translator development in translation organizations. Although it is designed primarily for the need of in-house translators’ supervisors, the TCP can be used in academic settings for translation trainers and teachers.

Keywords: Dreyfus model, translation organization, translator career path, translator development, translator evaluation, translator promotion

Procedia PDF Downloads 360
26103 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical-based passwords have existed for decades. Their major advantage is that they are easier to remember than an alphanumeric password. However, their disadvantage (especially for recognition-based passwords) is the smaller password space, making them more vulnerable to brute force attacks. Graphical passwords are also highly susceptible to the shoulder-surfing effect. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability. The results of the study are significant. We developed a gesture-based password application for data collection. Two modes of data collection were used: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by some other user for a fixed duration of time. Three different duration timers, namely 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic the shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. There were 74, 57, 50, and 44 participants in Session 1, Session 2, Session 3, and Session 4, respectively. In this study, machine learning algorithms were applied to determine whether a person is a genuine user or an imposter based on the password entered. Five different machine learning algorithms were deployed to compare performance in user authentication: namely, Decision Trees, Linear Discriminant Analysis, Naive Bayes Classifier, Support Vector Machines (SVMs) with a Gaussian radial basis kernel function, and K-Nearest Neighbor. Gesture-based password features vary from one entry to the next, making it difficult to distinguish between a creator and an intruder for authentication. For each password entered by the user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions. Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication sessions with timers of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using the five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively. The classification accuracies for Classifier B using the five ML algorithms are 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%, respectively. The classification accuracies for Classifier C using the five ML algorithms are 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%, respectively. SVMs with a Gaussian radial basis kernel outperform the other ML algorithms for gesture-based password authentication. The results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from gesture-based passwords lead to less vulnerable user authentication.
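
The classification step can be sketched concisely: four normalized features per password entry are fed to the five classifier families named above, and cross-validated accuracy is compared. The feature distributions below are randomly generated stand-ins for the collected gesture data, so the printed accuracies are illustrative only.

```python
# Compare the five classifier families on four normalized gesture-password features (synthetic data).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)
# columns: password score, length, speed, size; label: 1 = genuine creator, 0 = imposter
X_genuine = rng.normal([0.8, 40, 1.2, 300], [0.05, 4, 0.1, 25], size=(200, 4))
X_imposter = rng.normal([0.6, 35, 1.6, 340], [0.15, 8, 0.3, 60], size=(200, 4))
X = StandardScaler().fit_transform(np.vstack([X_genuine, X_imposter]))   # normalize all four features
y = np.r_[np.ones(200), np.zeros(200)]

classifiers = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "Naive Bayes": GaussianNB(),
    "SVM (RBF)": SVC(kernel="rbf"),            # Gaussian radial basis kernel, as in the study
    "k-NN": KNeighborsClassifier(),
}
for name, clf in classifiers.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```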

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 89
26102 Comparative Evaluation of EBT3 Film Dosimetry Using Flatbed Scanner, Densitometer and Spectrophotometer Methods and Its Applications in Radiotherapy

Authors: K. Khaerunnisa, D. Ryangga, S. A. Pawiro

Abstract:

Over the past few decades, film dosimetry has become a tool used in various radiotherapy modalities, either for clinical quality assurance (QA) or for dose verification. The response of the film to irradiation is usually expressed as optical density (OD) or net optical density (netOD). Since the film's response to radiation is not linear, the use of film as a dosimeter must go through a calibration process. This study aimed to compare the calibration curve response functions of various measurement methods and densitometers: a flatbed scanner, a point densitometer, and a spectrophotometer. For every response function, a radiochromic film calibration curve is generated for each method, and accuracy, precision, and sensitivity analyses are performed. netOD is obtained by measuring the change in the optical density (OD) of the film before and after irradiation: with the flatbed scanner, ImageJ is used to extract the pixel value of the film on the red channel of the three (RGB) channels; with the point densitometer, the change in OD before and after irradiation is calculated; and with the spectrophotometer, the change in absorbance before and after irradiation is calculated. The results showed that the three calibration methods gave netOD readings with a dose precision below 3% at the 1σ (one sigma) uncertainty level. While the sensitivity of all three methods shows the same trend in the film's response to radiation, the magnitudes of sensitivity differ. The accuracy of the three methods is below 3% for doses of 100 cGy and 200 cGy and above, but for doses below 100 cGy it was found to be above 3% when using the point densitometer and the spectrophotometer. When the three methods are used for clinical implementation, the results of the study show accuracy and precision below 2% for the scanner and spectrophotometer and above 3% for the point densitometer.
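
The netOD computation that the whole comparison rests on is small enough to sketch directly. The Python fragment below converts red-channel pixel values read before and after irradiation into net optical density for the flatbed-scanner route; the 16-bit background level and the pixel values are example numbers, not measured data.

```python
# netOD from red-channel pixel values before and after irradiation (illustrative values).
import numpy as np

def net_optical_density(pv_before, pv_after, pv_background=65535.0):
    """netOD = OD_after - OD_before, with OD = log10(background / pixel value)."""
    od_before = np.log10(pv_background / pv_before)
    od_after = np.log10(pv_background / pv_after)
    return od_after - od_before

# example 16-bit red-channel readings of one film patch before and after a dose
print(net_optical_density(pv_before=42000.0, pv_after=30500.0))
```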

Keywords: calibration methods, EBT3 film dosimetry, flatbed scanner, densitometer, spectrophotometer

Procedia PDF Downloads 119