Search results for: scale invariant feature transforms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7422

7242 Local Spectrum Feature Extraction for Face Recognition

Authors: Muhammad Imran Ahmad, Ruzelita Ngadiran, Mohd Nazrin Md Isa, Nor Ashidi Mat Isa, Mohd Zaizu Ilyas, Raja Abdullah Raja Ahmad, Said Amirul Anwar Ab Hamid, Muzammil Jusoh

Abstract:

This paper presents two techniques, local feature extraction using the image spectrum and low-frequency spectrum modelling using GMM, to capture the underlying statistical information and improve the performance of a face recognition system. Local spectrum features are extracted using overlapping sub-block windows that are mapped onto the face image. For each block, the spatial domain is transformed to the frequency domain using the DFT. Low-frequency coefficients are preserved by applying a rectangular mask on the spectrum of the facial image and discarding the high-frequency coefficients. Low-frequency information is non-Gaussian in the feature space, and by using a combination of several Gaussian functions with different statistical properties, the best feature representation can be modelled with a probability density function. The recognition process is performed using the maximum likelihood value computed from pre-calculated GMM components. The method is tested on the FERET data sets and achieves a 92% recognition rate.
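
A minimal sketch of the block-wise DFT feature extraction and GMM modelling described above, using numpy and scikit-learn; the block size, overlap, and mask size are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def block_spectrum_features(face, block=16, step=8, keep=4):
    """Slide an overlapping block window over a grayscale face image and
    keep only the low-frequency DFT magnitudes of each block."""
    feats = []
    h, w = face.shape
    for r in range(0, h - block + 1, step):
        for c in range(0, w - block + 1, step):
            spec = np.fft.fftshift(np.fft.fft2(face[r:r + block, c:c + block]))
            mid = block // 2
            # rectangular mask around the DC component keeps low frequencies only
            low = np.abs(spec[mid - keep:mid + keep, mid - keep:mid + keep])
            feats.append(low.ravel())
    return np.vstack(feats)

def enroll(train_faces_by_id, n_components=8):
    """One GMM per enrolled subject, fitted on that subject's block features."""
    models = {}
    for subject, faces in train_faces_by_id.items():
        X = np.vstack([block_spectrum_features(f) for f in faces])
        models[subject] = GaussianMixture(n_components=n_components,
                                          covariance_type='diag').fit(X)
    return models

def recognize(models, probe_face):
    """Recognition picks the subject model with the highest total log-likelihood."""
    X = block_spectrum_features(probe_face)
    return max(models, key=lambda s: models[s].score_samples(X).sum())
```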

Keywords: local feature modelling, face recognition system, Gaussian mixture models, FERET

Procedia PDF Downloads 632
7241 Defining the Turbulent Coefficients with the Effect of Atmospheric Stability in the Wake of a Wind Turbine

Authors: Mohammad A. Sazzad, Md M. Alam

Abstract:

Wind energy is one of the cleanest forms of renewable energy. Although the wind industry is growing faster than ever, there are some roadblocks to further improvement. One of the difficulties the industry is facing is insufficient knowledge about wakes within wind farms. Energy is generated in the lowest layer of the atmospheric boundary layer (ABL). The interaction between the wind turbine (WT) blades and the wind introduces a low-speed region which is defined as the wake. This wake region shows different characteristics under each stability condition of the ABL, so it is fundamental to understand this region, which is shaped mainly by turbulence transport and wake shear. Defining the wake recovery length and width is crucial for a wind farm to optimize generation and reduce the waste of power delivered to the grid. Therefore, in order to obtain the turbulent coefficients of velocity and length, this research focused on large eddy simulation (LES) data for the neutral ABL (NABL). According to turbulence theory, if we can express the velocity defect and the Reynolds stress in terms of local length and velocity scales, they become invariant. In our study, the velocity and length coefficients are 0.4867 and 0.4794 respectively, which is close to the theoretical value of 0.5 for the NABL. Some profiles are not fully invariant because, in the presence of thermal and wind shear, the coefficients varied a little from the ideal condition.

Keywords: atmospheric boundary layer, renewable energy, turbulent coefficient, wind turbine, wake

Procedia PDF Downloads 104
7240 Analysis of the Production Efficiency of Black Grass Jelly (Mesona palustris) at Double Scale

Authors: Irvan Adhin Cholilie, Susinggih Wijana, Yusron Sugiarto

Abstract:

The aim of this research is to compare black grass jelly produced at laboratory scale and at double scale. The laboratory-scale production uses 1 kg of black grass jelly with 5 liters of water, while the double scale uses 5 kg of black grass jelly and 75 liters of water. Organoleptic tests performed by 30 (general) panelists on the grass jelly powder samples produced at the laboratory and double scales show no significant differences in color, odor, flavor, and texture. Proximate test results for the powders produced at both scales likewise show no significant differences in any parameter. The black grass jelly powder from the double scale contains 12.25% water, 43.7% carbohydrate, and 5.89% crude fiber, with a yield of 16.28%. The energy efficiencies of the boiling, draining, evaporation, drying, and milling processes are 85.11%, 76.97%, 99.64%, 99.99%, and 99.39%, respectively. The utility requirements per batch are 0.1 m³ of water (Rp 220.50), 20.01 kWh of electricity (Rp 18,569.28), and 30 kg of LPG (Rp 234,000.00), so the total utility cost of the process is Rp 252,789.78 per batch.

Keywords: black grass jelly, powder, mass balance, energy balance, cost

Procedia PDF Downloads 355
7239 Kannada Handwritten Character Recognition by Edge Hinge and Edge Distribution Techniques Using Manhattan and Minimum Distance Classifiers

Authors: C. V. Aravinda, H. N. Prakash

Abstract:

In this paper, we convey the fusion and the state of the art pertaining to South Indian language (SIL) character recognition systems. In the first step, the text is preprocessed and normalized so that identification can be performed correctly. The second step involves extracting relevant and informative features, and the third step implements the classification decision. The three stages involved are thus data acquisition and preprocessing, feature extraction, and classification. We concentrate on two techniques to obtain features: feature extraction and feature selection. The edge-hinge distribution is a feature that characterizes the changes in direction of a script stroke in handwritten text. It is extracted by means of a window that is slid over an edge-detected binary handwriting image. Whenever the mid pixel of the window is on, the two edge fragments (i.e., connected sequences of pixels) emerging from this mid pixel are considered; their directions are measured and stored as pairs, and a joint probability distribution is obtained from a large sample of such pairs. Despite continuous effort, handwriting identification remains a challenging issue because different approaches use different varieties of features. Therefore, our study focuses on handwriting recognition based on feature selection to simplify the feature extraction task, optimize classification system complexity, reduce running time, and improve classification accuracy.
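
A minimal sketch of the edge-hinge extraction described above, assuming an already edge-detected binary image; the window radius, angle quantization, and the approximation of fragment directions by "on" pixels on the window perimeter are illustrative assumptions rather than the authors' exact settings:

```python
import numpy as np

def edge_hinge_distribution(edge_img, radius=3, n_bins=12):
    """Joint distribution of the direction pairs of edge fragments emerging
    from each 'on' pixel, approximated by 'on' pixels on the window perimeter."""
    h, w = edge_img.shape
    hist = np.zeros((n_bins, n_bins))
    # perimeter offsets of the (2*radius+1)^2 window and their quantized angles
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
                        for dx in range(-radius, radius + 1)
                        if max(abs(dy), abs(dx)) == radius]
    bins = [int((np.arctan2(dy, dx) % (2 * np.pi)) / (2 * np.pi) * n_bins) % n_bins
            for dy, dx in offsets]

    ys, xs = np.nonzero(edge_img)
    for y, x in zip(ys, xs):
        if y < radius or x < radius or y >= h - radius or x >= w - radius:
            continue
        hit = [b for (dy, dx), b in zip(offsets, bins) if edge_img[y + dy, x + dx]]
        # every pair of emerging directions contributes one count
        for i in range(len(hit)):
            for j in range(i + 1, len(hit)):
                hist[hit[i], hit[j]] += 1
    total = hist.sum()
    return (hist / total).ravel() if total else hist.ravel()
```

The normalized vector can then be compared between samples with a Manhattan (city-block) or minimum-distance classifier, as named in the title.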

Keywords: word segmentation and recognition, character recognition, optical character recognition, handwritten character recognition, South Indian languages

Procedia PDF Downloads 470
7238 Comparative Study of Isothermal and Cyclic Oxidation on Titanium Alloys

Authors: Poonam Yadav, Dong Bok Lee

Abstract:

Isothermal oxidation at 800°C for 50 h and cyclic oxidation at 600°C and 800°C for 40 h of pure Ti and Ti64 were performed in a muffle furnace. During cyclic oxidation, massive scale spallation occurred, and cracking and peeling of the oxide scale were observed at high temperature, indicating that the oxide scale formed during cyclic oxidation spalled off owing to thermal-shock stresses generated during repetitive oxidation and subsequent cooling. The scale thickness is larger in cyclic oxidation than in the isothermal case, owing to inward diffusion of oxygen through the oxide scales and/or the pores and cracks formed during cyclic oxidation.

Keywords: cyclic, diffusion, isothermal

Procedia PDF Downloads 886
7237 Adjustment and Scale-Up Strategy of Pilot Liquid Fermentation Process of Azotobacter sp.

Authors: G. Quiroga-Cubides, A. Díaz, M. Gómez

Abstract:

The genus Azotobacter has been widely used as a bio-fertilizer due to its significant effects on the stimulation and promotion of plant growth in various agricultural species of commercial interest. In order to obtain a significantly viable cell concentration, a scale-up strategy for a liquid fermentation process (SmF) with two strains of A. chroococcum (named Ac1 and Ac10) was validated and adjusted at laboratory and pilot scale. A batch fermentation process under previously defined conditions was carried out in a 3.5 L Infors® Minifors bioreactor, which served as the baseline for this research. For the purpose of increasing process efficiency, the effect of reducing the stirring speed was evaluated in combination with a fed-batch-type fermentation at laboratory scale. To reproduce the efficiency parameters obtained, a scale-up strategy based on geometric and fluid-dynamic similarity was evaluated. According to the analysis of variance, this scale-up strategy did not have a significant effect on cell concentration in laboratory and pilot fermentations (Tukey, p > 0.05). Regarding air consumption, the fermentation process at pilot scale showed a reduction of 23% versus the baseline. The reduction in energy consumption under laboratory and pilot-scale conditions was 96.9% compared with the baseline.

Keywords: Azotobacter chroococcum, scale-up, liquid fermentation, fed-batch process

Procedia PDF Downloads 412
7236 Scaling-Down an Agricultural Waste Biogas Plant Fermenter

Authors: Matheus Pessoa, Matthias Kraume

Abstract:

Scale-down rules in process engineering help translate industrial-scale parameters to the laboratory scale. Several scale-down rules available in the literature, such as impeller power number, agitation power input, stirrer tip speed, Reynolds number, and cavern development, were investigated in order to stipulate the rotational speed at which an 11 L working-volume lab-scale bioreactor should be operated to reproduce industrial process parameters. Herein, xanthan gum was used as a model fluid with a viscosity representative of the fermentation broth of a hypothetical biogas plant using sewage sludge and sugar beet pulp as substrate, in a vessel with H/D = 1 and central agitation. The results showed that the cavern development strategy was the best method for establishing a rotational speed for bioreactor operation, while the other rules yielded unrealistic values for the purposes of this work.
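
For illustration, several of the stirred-tank criteria named above reduce to simple relations between rotational speed N and impeller diameter D; this is a generic sketch assuming geometric similarity and a constant impeller power number, not the exact correlations or the cavern criterion used in the paper:

```python
def scale_down_speeds(N1, D1, D2):
    """Lab-scale rotational speed N2 (same units as N1) from common criteria.
    N1, D1: full-scale speed and impeller diameter; D2: lab-scale diameter."""
    return {
        # equal power per unit volume: P/V ~ N^3 D^2  ->  N2 = N1 (D1/D2)^(2/3)
        "power_per_volume": N1 * (D1 / D2) ** (2.0 / 3.0),
        # equal impeller tip speed: N D = const       ->  N2 = N1 (D1/D2)
        "tip_speed": N1 * (D1 / D2),
        # equal impeller Reynolds number: N D^2 = const -> N2 = N1 (D1/D2)^2
        "reynolds": N1 * (D1 / D2) ** 2,
    }

if __name__ == "__main__":
    # hypothetical numbers: 2 m industrial impeller at 30 rpm, 0.1 m lab impeller
    for rule, n2 in scale_down_speeds(30 / 60, 2.0, 0.1).items():
        print(f"{rule:>18}: {n2 * 60:8.1f} rpm")
```

The wide spread between these values for a large diameter ratio is exactly why, for a shear-thinning fluid such as xanthan gum, a cavern-based criterion can be the more realistic choice.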

Keywords: anaerobic digestion, cavern development, scale down rules, xanthan gum

Procedia PDF Downloads 452
7235 A New Approach of Preprocessing with SVM Optimization Based on PSO for Bearing Fault Diagnosis

Authors: Tawfik Thelaidjia, Salah Chenikher

Abstract:

Bearing fault diagnosis has attracted significant attention over the past few decades. It consists of two major parts: vibration signal feature extraction and condition classification based on the extracted features. In this paper, feature extraction from faulty bearing vibration signals is performed by combining the signal’s kurtosis with features obtained by preprocessing the vibration signal samples with the Db2 discrete wavelet transform at the fifth level of decomposition. In this way, a 7-dimensional feature vector is obtained for each vibration signal. After feature extraction, the support vector machine (SVM) is applied to automate the fault diagnosis procedure. To improve the classification accuracy for bearing fault prediction, particle swarm optimization (PSO) is employed to simultaneously optimize the SVM kernel function parameter and the penalty parameter. The results have shown the feasibility and effectiveness of the proposed approach.
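
A sketch of one way to build the 7-dimensional vector (signal kurtosis plus one statistic per db2 sub-band of a level-5 decomposition) and train the SVM, assuming PyWavelets and scikit-learn; the per-band energy statistic is an assumption, and a grid search stands in for the paper's PSO tuning of C and gamma:

```python
import numpy as np
import pywt
from scipy.stats import kurtosis
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def bearing_features(signal):
    """7-D feature vector: signal kurtosis + mean energy of the six db2
    sub-bands (cA5, cD5..cD1) from a level-5 discrete wavelet decomposition."""
    coeffs = pywt.wavedec(signal, 'db2', level=5)
    energies = [np.sum(c ** 2) / len(c) for c in coeffs]
    return np.array([kurtosis(signal)] + energies)

def train_classifier(signals, labels):
    X = np.array([bearing_features(s) for s in signals])
    # the paper tunes C and gamma with particle swarm optimization;
    # an exhaustive grid search is used here as a simple stand-in
    grid = {'C': np.logspace(-1, 3, 5), 'gamma': np.logspace(-3, 1, 5)}
    return GridSearchCV(SVC(kernel='rbf'), grid, cv=3).fit(X, labels)
```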

Keywords: condition monitoring, discrete wavelet transform, fault diagnosis, kurtosis, machine learning, particle swarm optimization, roller bearing, rotating machines, support vector machine, vibration measurement

Procedia PDF Downloads 409
7234 Preprocessing and Fusion of Multiple Representations of Finger Vein Patterns Using Conventional and Machine Learning Techniques

Authors: Tomas Trainys, Algimantas Venckauskas

Abstract:

The application of biometric features to cryptography for human identification and authentication is a widely studied and promising area in the development of high-reliability cryptosystems. Biometric cryptosystems are typically designed for pattern recognition, which allows biometric data to be acquired from an individual, feature sets to be extracted, the feature set to be compared against the set stored in the vault, and a result of the comparison to be returned. Preprocessing and fusion of biometric data are the most important phases in generating a feature vector for key generation or authentication. Fusion of biometric features is critical for achieving a higher level of security and helps prevent possible spoofing attacks. The paper focuses on the tasks of initial processing and fusion of multiple representations of finger vein modality patterns. These tasks are solved by applying conventional image preprocessing methods and machine learning techniques, namely a Convolutional Neural Network (CNN) for image segmentation and feature extraction. The article presents a method for generating sets of biometric features from a finger vein network using several instances of the same modality. The extracted feature sets were fused at the feature level. The proposed method was tested and compared with the performance and accuracy results of other authors.

Keywords: bio-cryptography, biometrics, cryptographic key generation, data fusion, information security, SVM, pattern recognition, finger vein method

Procedia PDF Downloads 118
7233 A Comparative Performance of Polyaspartic Acid and Sodium Polyacrylate on Silicate Scale Inhibition

Authors: Ismail Bin Mohd Saaid, Abubakar Abubakar Umar

Abstract:

Despite the successes recorded by alkaline/surfactant/polymer (ASP) flooding as an effective chemical EOR technique, this combined chemical EOR process is not without serious problems, one of which is the scaling of downhole equipment. One of the major issues in the oil industry is how to control scale formation, whether in wellhead equipment, downhole pipelines, or the reservoir formation itself. The best approach to the challenge of oilfield scale formation is the application of scale inhibitors to avert scale formation, and chemical inhibitors have been employed for this purpose. Due to environmental regulations, however, the industry has focused on using green scale inhibitors to mitigate the formation of scales. This paper compares the scale inhibition performance of polyaspartic acid and sodium polyacrylate, both commercial green scale inhibitors, in mitigating silicate scales formed during alkaline/surfactant/polymer flooding under static conditions. Both PASP and TH5000 are non-threshold inhibitors; therefore, their efficiency was seen only in delaying the deposition of the silicate scales.

Keywords: alkaline/surfactant/polymer flooding (ASP), polyaspartic acid (PASP), sodium polyacrylate (SPA)

Procedia PDF Downloads 314
7232 Factor Structure of the Korean Version of Multidimensional Experiential Avoidance Questionnaire (MEAQ)

Authors: Juyeon Lee, Sungeun You

Abstract:

Experiential avoidance is one’s tendency to avoid painful internal experiences, unwanted adverse thoughts, emotions, and physical sensations. The Multidimensional Experiential Avoidance Questionnaire (MEAQ) is a measure of experiential avoidance; the original scale consisted of 62 items with six subfactors including behavioral avoidance, distress aversion, procrastination, distraction/suppression, repression/denial, and distress endurance. The purpose of this study was to examine the factor structure of the MEAQ in a Korean sample. Three hundred community adults and university students aged 18 to 35 participated in an online survey assessing experiential avoidance (MEAQ and Acceptance and Action Questionnaire-II; AAQ-II), depression (Patient Health Questionnaire-9; PHQ-9), anxiety (Generalized Anxiety Disorder-7; GAD-7), negative affect (Positive and Negative Affect Scale; PANAS), neuroticism (Big Five Inventory; BFI), and quality of life (Satisfaction with Life Scale; SWLS). Factor analysis with principal axis factoring and direct oblimin rotation was conducted to examine the subfactors of the MEAQ. Results indicated that the six-factor structure of the original scale was adequate. Eight of the 62 items were removed due to insufficient factor loadings. These items included 3 items of behavioral avoidance (e.g., “When I am hurting, I would do anything to feel better”), 2 items of repression/denial (e.g., “I work hard to keep out upsetting feelings”), and 3 items of distress aversion (e.g., “I prefer to stick to what I am comfortable with, rather than try new activities”). The MEAQ was positively associated with the AAQ-II (r = .47, p < .001), PHQ-9 (r = .37, p < .001), GAD-7 (r = .34, p < .001), PANAS (r = .35, p < .001), and neuroticism (r = .24, p < .001), and negatively correlated with the SWLS (r = -.38, p < .001). Internal consistency was good for the MEAQ total (Cronbach’s α = .90) as well as all six subfactors (Cronbach’s α = .83 to .87). The findings of the study support the multidimensional nature of experiential avoidance and the validity of the MEAQ in a sample of Korean adults.
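
For reference, the internal-consistency statistic reported above (Cronbach's α) can be computed directly from a respondents-by-items score matrix; a minimal numpy sketch:

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, shape (n_respondents, n_items), of item scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# e.g. alpha for one MEAQ subfactor: cronbach_alpha(scores[:, subfactor_columns])
```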

Keywords: avoidance, experiential avoidance, factor structure, MEAQ

Procedia PDF Downloads 338
7231 Effects of Employees’ Training Program on the Performance of Small Scale Enterprises in Oyo State

Authors: Itiola Kehinde Adeniran

Abstract:

The study examined the effect of employees’ training on the performance of small-scale enterprises in Oyo State. A structured questionnaire was used to collect data from 150 respondents selected through a purposive sampling method. Linear regression, with the aid of the Statistical Package for the Social Sciences (SPSS) version 20, was used to analyze the data collected in order to examine the effect of the independent variable, employees’ training, on the dependent variable, the performance (profit) of small-scale enterprises. The result revealed that employees’ training has a significant effect on the performance of small-scale enterprises. It was concluded that the predictor variable (training) explains 55.5% of the variance in enterprise performance (profitability). Therefore, the paper recommends that all small-scale enterprises in Nigeria embrace manpower training and development in order to improve employees’ performance, leading to organizational profitability.

Keywords: training, employee performance, small scale enterprise, organizational profitability

Procedia PDF Downloads 338
7230 Spatial Scale of Clustering of Residential Burglary and Its Dependence on Temporal Scale

Authors: Mohammed A. Alazawi, Shiguo Jiang, Steven F. Messner

Abstract:

Research has long focused on two main spatial aspects of crime: spatial patterns and spatial processes. When analyzing these patterns and processes, a key issue has been to determine the proper spatial scale. In addition, it is important to consider the possibility that these patterns and processes might differ appreciably for different temporal scales and might vary across geographic units of analysis. We examine the spatial-temporal dependence of residential burglary. This dependence is tested at varying geographical scales and temporal aggregations. The analyses are based on recorded incidents of crime in Columbus, Ohio during the 1994-2002 period. We implement point pattern analysis on the crime points using Ripley’s K function. The results indicate that spatial point patterns of residential burglary reveal spatial scales of clustering relatively larger than the average size of census tracts of the study area. Also, spatial scale is independent of temporal scale. The results of our analyses concerning the geographic scale of spatial patterns and processes can inform the development of effective policies for crime control.
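
A minimal sketch of the homogeneous Ripley's K estimator used for detecting the scale of clustering, without edge correction; the inhomogeneous variant listed in the keywords additionally weights each pair by the inverse of the local intensity estimates:

```python
import numpy as np
from scipy.spatial.distance import pdist

def ripley_k(points, radii, area):
    """Naive Ripley's K estimate (no edge correction).
    points: (n, 2) array of event coordinates; area: study-region area."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    d = pdist(points)                 # all pairwise inter-event distances
    lam = n / area                    # estimated intensity
    # each unordered pair counts twice in the double sum over ordered pairs
    return np.array([2.0 * np.sum(d <= r) / (lam * n) for r in radii])

# clustering at scale r is suggested where K(r) exceeds the CSR benchmark pi*r^2
```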

Keywords: inhomogeneous K function, residential burglary, spatial point pattern, spatial scale, temporal scale

Procedia PDF Downloads 310
7229 Produced Water Treatment Using Novel Solid Scale Inhibitors Based on Silver Tungstate Loaded Kit-6: Static and Modeling Evaluation

Authors: R. Hosny, Mahmoud F. Mubarak, Heba M. Salem, Asmaa A. Abdelrahman

Abstract:

Oilfield scaling is a major problem in the oil and gas industry. Scale issues cost the industry millions of dollars in damage and lost production every year, and scale is one of the main causes of global production decline. In this study, solid scale inhibitors based on silver tungstate loaded on KIT-6 were synthesized and evaluated by both static tests and scale-inhibition modeling. The silver tungstate loaded KIT-6 catalysts were synthesized via a simple impregnation method using 3D mesoporous KIT-6 as the support. The synthesized materials were characterized using wide- and low-angle XRD, N2 adsorption–desorption analysis, TGA, FTIR, SEM, and XPS. The scale inhibition efficiency of the synthesized materials was evaluated using a static scale inhibition test. The results demonstrate the potential application of silver tungstate-loaded KIT-6 solid scale inhibitors in the oil and gas industry and will contribute to the development of new and innovative solid scale inhibitors based on silver tungstate-loaded KIT-6. The inhibition efficiency of the scale inhibitor increases, and calcite scale formation decreases, with increasing pH (2 to 8), suggesting that the scale inhibitor is more effective under alkaline conditions. An inhibition efficiency of 99% on calcium carbonate can be achieved at the optimal dosage of 7.5 ppm at 55°C, indicating that the scale inhibitor exhibits relatively good inhibition performance on calcium carbonate. The use of these materials can potentially lead to more efficient and cost-effective solutions for scaling inhibition in various industrial processes.
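
For context, static calcium carbonate inhibition efficiency is commonly reported from residual calcium concentrations after a jar-type precipitation test; the form below is a standard convention stated here as an assumption about the authors' calculation, not their documented formula:

```latex
\mathrm{IE}(\%) \;=\;
\frac{[\mathrm{Ca}^{2+}]_{\text{inhibited}} - [\mathrm{Ca}^{2+}]_{\text{blank}}}
     {[\mathrm{Ca}^{2+}]_{\text{initial}} - [\mathrm{Ca}^{2+}]_{\text{blank}}}
\times 100
```

where the blank is the uninhibited brine after precipitation and the initial value is the calcium concentration before the test.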

Keywords: produced water treatment, solid scale inhibitors, calcite, silver tungstate, 3D mesoporous KIT-6, oilfield scales, adsorption

Procedia PDF Downloads 119
7228 Deproteinization of Moroccan Sardine (Sardina pilchardus) Scales: A Pilot-Scale Study

Authors: F. Bellali, M. Kharroubi, Y. Rady, N. Bourhim

Abstract:

In Morocco, the fish processing industry is an important source of income and generates a large amount of by-products, including skins, bones, heads, guts, and scales. These underutilized resources, particularly scales, contain a large amount of protein and calcium. Sardina pilchardus scales resulting from the processing operation have the potential to be used as raw material for collagen production. Taking into account this strong expectation of the regional fish industry, the upgrading of sardine scales is well justified. In addition, political and societal demands for sustainability and environment-friendly industrial production systems, coupled with the depletion of fish resources, drive this trend forward. Fish scales used as a source of collagen therefore have a wide range of applications in the food, cosmetic, and biomedical industries. The main aim of this study is to isolate and characterize acid-solubilized collagen from the scales of the sardine, Sardina pilchardus. Experimental design methodology was adopted to optimize the collagen extraction process. The first stage of this work investigates the optimal conditions for sardine scale deproteinization using response surface methodology (RSM). The second part focuses on demineralization with HCl solution or EDTA, and the last one establishes the optimum conditions for the isolation of collagen from fish scales by solvent extraction. The advancement from lab scale to pilot scale is a critical stage in technological development. In this study, the optimal deproteinization conditions validated at laboratory scale were employed in the pilot-scale procedure. The deproteinization of fish scales was then demonstrated at pilot scale (2 kg scales, 20 l NaOH), resulting in a protein content of 0.2 mg/ml and a hydroxyproline content of 2.11 mg/l. These results indicate that the pilot scale showed performance similar to that of the lab scale.

Keywords: deproteinization, pilot scale, scale, sardine pilchardus

Procedia PDF Downloads 416
7227 Statistical Analysis of Natural Images after Applying ICA and ISA

Authors: Peyman Sheikholharam Mashhadi

Abstract:

Difficulties in analyzing real-world images within the classical image processing and machine vision framework have motivated researchers to consider biology-based vision. It is a common belief that the mammalian visual cortex has been adapted to the statistics of real-world images through the evolutionary process. There are two well-known successful models of mammalian visual cortical cells: Independent Component Analysis (ICA) and Independent Subspace Analysis (ISA). In this paper, we statistically analyze the dependencies which remain in the components after applying these models to natural images. We also investigate the response of the feature detectors to gratings with various parameters in order to find optimal parameters of the feature detectors. Finally, the selectivity of the feature detectors to phase is considered in both models.
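
A minimal sketch of the ICA part of such an analysis, learning independent components from natural-image patches with scikit-learn (patch size and component count are illustrative assumptions); ISA would additionally group the learned components into subspaces and pool their energies:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import extract_patches_2d

def learn_ica_filters(images, patch_size=(16, 16), n_components=64,
                      patches_per_image=500, seed=0):
    """Learn ICA basis functions (feature detectors) from grayscale natural images."""
    rng = np.random.RandomState(seed)
    patches = np.vstack([
        extract_patches_2d(img, patch_size, max_patches=patches_per_image,
                           random_state=rng).reshape(patches_per_image, -1)
        for img in images
    ]).astype(float)
    patches -= patches.mean(axis=1, keepdims=True)   # remove the DC component
    ica = FastICA(n_components=n_components, max_iter=500)
    responses = ica.fit_transform(patches)            # detector responses per patch
    filters = ica.components_.reshape(n_components, *patch_size)
    return filters, responses
```

The statistical dependencies studied in the paper would then be measured on the `responses` matrix, e.g. on the energies (squared responses) of pairs of learned detectors.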

Keywords: statistics, independent component analysis, independent subspace analysis, phase, natural images

Procedia PDF Downloads 318
7226 A New DIDS Design Based on a Combination Feature Selection Approach

Authors: Adel Sabry Eesa, Adnan Mohsin Abdulazeez Brifcani, Zeynep Orman

Abstract:

Feature selection has been used in many fields such as classification, data mining, and object recognition and has proven to be effective for removing irrelevant and redundant features from the original data set. In this paper, a new design of a distributed intrusion detection system is presented, using a combination feature selection model based on the bees algorithm and a decision tree. The bees algorithm is used as the search strategy to find the optimal subset of features, whereas the decision tree is used to judge the selected features. Both the produced features and the generated rules are used by the Decision Making Mobile Agent to decide whether there is an attack in the network. The Decision Making Mobile Agent migrates through the network, moving from node to node; if it finds an attack on one of the nodes, it alerts the user through the User Interface Agent or takes some action through the Action Mobile Agent. The KDD Cup 99 data set is used to test the effectiveness of the proposed system. The results show that even if only four features are used, the proposed system gives better performance than when all 41 features are used.
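
The wrapper idea (a population-based search proposing feature subsets, with a decision tree scoring each candidate) can be sketched as follows; this is a heavily simplified neighbourhood search standing in for the full bees algorithm, with illustrative parameters and scikit-learn components:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def subset_score(mask, X, y):
    """A decision tree 'judges' each candidate feature subset."""
    if not mask.any():
        return 0.0
    tree = DecisionTreeClassifier(random_state=0)
    return cross_val_score(tree, X[:, mask], y, cv=3).mean()

def bees_like_selection(X, y, n_sites=6, n_recruits=4, n_iter=15, p_on=0.3, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    sites = rng.random((n_sites, n_feat)) < p_on            # random initial subsets
    scores = np.array([subset_score(m, X, y) for m in sites])
    for _ in range(n_iter):
        for i in range(n_sites):
            # recruited bees search the neighbourhood of each site
            for _ in range(n_recruits):
                neigh = sites[i].copy()
                neigh[rng.integers(n_feat)] ^= True          # flip one feature in/out
                s = subset_score(neigh, X, y)
                if s > scores[i]:
                    sites[i], scores[i] = neigh, s
        # the weakest site is abandoned and replaced by a new random scout
        worst = int(np.argmin(scores))
        sites[worst] = rng.random(n_feat) < p_on
        scores[worst] = subset_score(sites[worst], X, y)
    best = int(np.argmax(scores))
    return sites[best], scores[best]
```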

Keywords: distributed intrusion detection system, mobile agent, feature selection, bees algorithm, decision tree

Procedia PDF Downloads 368
7225 An Automatic Feature Extraction Technique for 2D Punch Shapes

Authors: Awais Ahmad Khan, Emad Abouel Nasr, H. M. A. Hussein, Abdulrahman Al-Ahmari

Abstract:

Sheet-metal parts have been widely applied in the electronics, communication, and mechanical industries in recent decades, but the advancement in sheet-metal part design and manufacturing still lags behind the increasing importance of sheet-metal parts in modern industry. This paper presents a methodology for the automatic extraction of some common 2D internal sheet metal features. The features used in this study are taken from the Unipunch™ catalogue. The extraction process starts with data extraction from the STEP file using an object-oriented approach; with the application of suitable algorithms and rules, all features contained in the catalogue are automatically extracted. Since the extracted features include geometry and engineering information, they will be effective for downstream applications such as feature rebuilding and process planning.

Keywords: feature extraction, internal features, punch shapes, sheet metal

Procedia PDF Downloads 587
7224 Improving Fake News Detection Using K-means and Support Vector Machine Approaches

Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy

Abstract:

Fake news and false information are big challenges for all types of media, especially social media. There is a lot of false information, fake likes, views, and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Most information appearing on social media is doubtful and in some cases misleading. It needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to obtain a better detection result with less computation time and complexity, the dimensions need to be reduced. One of the best techniques for reducing data size is the feature selection method. The aim of this technique is to choose a feature subset from the original set to improve classification performance. In this paper, a feature selection method is proposed with the integration of K-means clustering and Support Vector Machine (SVM) approaches, which works in four steps. First, the similarities between all features are calculated. Then, the features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several specific benchmark datasets, and the outcome showed a better classification of false information for our work. The detection performance was improved in two aspects: on the one hand, the detection runtime decreased, and on the other hand, the classification accuracy increased because of the elimination of redundant features and the reduction of dataset dimensions.
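
A sketch of the four-step idea under stated assumptions: features are clustered by the similarity of their (standardized) columns, one representative feature per cluster is kept, and an SVM classifies on the reduced set. The representative-selection rule (closest to the centroid) is an illustrative choice, not necessarily the authors' exact criterion:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def select_features_by_clustering(X, n_clusters=50, seed=0):
    """Steps 1-3: cluster the feature columns and keep the feature closest
    to each cluster centroid."""
    cols = X.T                                              # one row per feature
    cols = (cols - cols.mean(axis=1, keepdims=True)) / (cols.std(axis=1, keepdims=True) + 1e-12)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(cols)
    selected = []
    for k in range(n_clusters):
        members = np.flatnonzero(km.labels_ == k)
        if members.size:
            dists = np.linalg.norm(cols[members] - km.cluster_centers_[k], axis=1)
            selected.append(members[np.argmin(dists)])
    return np.array(sorted(selected))

def train_fake_news_classifier(X, y, n_clusters=50):
    """Step 4: SVM classification on the reduced feature subset."""
    idx = select_features_by_clustering(X, n_clusters)
    return LinearSVC().fit(X[:, idx], y), idx
```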

Keywords: clustering, fake news detection, feature selection, machine learning, social media, support vector machine

Procedia PDF Downloads 145
7223 The Examination of Prospective ICT Teachers’ Attitudes towards Application of Computer Assisted Instruction

Authors: Agâh Tuğrul Korucu, Ismail Fatih Yavuzaslan, Lale Toraman

Abstract:

Nowadays, thanks to the development of technology, the integration of technology into teaching and learning activities is spreading. Increasing technological literacy, which is one of the expected competencies for individuals of the 21st century, is associated with the effective use of technology in education. The most important factor in the effective use of technology in educational institutions is the ICT teacher. The concept of computer-assisted instruction (CAI) refers to the utilization of information and communication technology as a tool that aids teachers in making education more efficient and improving its quality. In CAI, teachers can use computers in different places and at different times according to the available hardware and software facilities and the characteristics of the subject and the students. Analyzing teachers’ use of computers in education is significant because teachers manage the course and are the most important element in students’ comprehension of the topic. Accomplishing computer-assisted instruction efficiently is possible through a positive attitude of teachers. Determining the level of knowledge, attitudes, and behaviors of teachers who receive their professional knowledge from faculties of education, and eliminating any deficiencies, are crucial while teachers are still at the faculty. Therefore, the aim of this paper is to identify prospective ICT teachers' attitudes toward computer-assisted instruction in terms of different variables. The research group consists of 200 prospective ICT teachers studying at Necmettin Erbakan University, Ahmet Keleşoğlu Faculty of Education, CEIT department. The data collection tools of the study are a “personal information form” developed by the researchers to collect demographic data and “the attitude scale related to computer-assisted instruction”. The scale consists of 20 items; 10 of these items are positively worded, while 10 are negatively worded. The Kaiser-Meyer-Olkin (KMO) coefficient of the scale is 0.88, and the Bartlett test significance value is 0.000. The Cronbach’s alpha reliability coefficient of the scale is 0.93. A computer-based statistical software package was used to analyze the collected data; statistical techniques such as descriptive statistics, t-tests, and analysis of variance were utilized. It is determined that the attitudes of prospective instructors towards computers do not differ according to their educational branches. On the other hand, the attitudes of prospective instructors who own computers towards computer-supported education are higher than those of prospective instructors who do not own computers. The departments of students who previously received computer lessons do not affect this situation much. The result is that computer experience positively affects attitude scores regarding computer-supported education.

Keywords: computer based instruction, teacher candidate, attitude, technology based instruction, information and communication technologies

Procedia PDF Downloads 260
7222 Using Self Organizing Feature Maps for Classification in RGB Images

Authors: Hassan Masoumi, Ahad Salimi, Nazanin Barhemmat, Babak Gholami

Abstract:

Artificial neural networks have gained a lot of interest as empirical models for their powerful representational capacity and their multi-input and output mapping characteristics. In fact, most feed-forward networks with nonlinear nodal functions have been proved to be universal approximators. In this paper, we propose a new supervised method for color image classification based on self-organizing feature maps (SOFM). The algorithm is based on competitive learning and partitions the input space using self-organizing feature maps to introduce the concept of local neighborhoods. Our image classification system takes RGB images as input. Experiments with simulated data showed that the separability of classes increased with increasing training time. In addition, the results show that the proposed algorithm is effective for color image classification.
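
A compact sketch of supervised use of a SOM for RGB classification, assuming the third-party MiniSom package; each trained neuron is labelled by the majority class of the training samples it wins, which is a common convention and not necessarily the authors' exact scheme:

```python
import numpy as np
from collections import Counter
from minisom import MiniSom   # third-party package: pip install minisom

def train_som_classifier(X, y, grid=(10, 10), n_iter=5000, seed=0):
    """X: (n_samples, 3) RGB feature vectors scaled to [0, 1]; y: class labels."""
    som = MiniSom(grid[0], grid[1], X.shape[1], sigma=1.0,
                  learning_rate=0.5, random_seed=seed)
    som.train_random(X, n_iter)
    # label every neuron by the majority class of the samples it wins
    votes = {}
    for xi, yi in zip(X, y):
        votes.setdefault(som.winner(xi), Counter())[yi] += 1
    neuron_label = {w: c.most_common(1)[0][0] for w, c in votes.items()}
    return som, neuron_label

def predict(som, neuron_label, X, default=None):
    return [neuron_label.get(som.winner(xi), default) for xi in X]
```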

Keywords: classification, SOFM algorithm, neural network, neighborhood, RGB image

Procedia PDF Downloads 444
7221 Joint Modeling of Longitudinal and Time-To-Event Data with Latent Variable

Authors: Xinyuan Y. Song, Kai Kang

Abstract:

Joint models for analyzing longitudinal and survival data are widely used to investigate the relationship between a failure time process and time-variant predictors. A common assumption in conventional joint models in the survival analysis literature is that all predictors are observable. However, this assumption may not always hold, because unobservable traits, namely latent variables, which are indirectly observable and should be measured through multiple observed variables, are commonly encountered in medical, behavioral, and financial research settings. In this study, a joint modeling approach to deal with this feature is proposed. The proposed model comprises three parts. The first part is a dynamic factor analysis model for characterizing latent variables through multiple observed indicators over time. The second part is a random coefficient trajectory model for describing the individual trajectories of latent variables. The third part is a proportional hazards model for examining the effects of time-invariant predictors and the longitudinal trajectories of time-variant latent risk factors on the hazards of interest. A Bayesian approach coupled with a Markov chain Monte Carlo algorithm is used to perform statistical inference. An application of the proposed joint model to a study from the Alzheimer's Disease Neuroimaging Initiative is presented.
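
In generic notation (the symbols below are illustrative, not the paper's exact specification), the three parts can be written as a measurement model for the latent variable, a random-coefficient trajectory, and a proportional hazards model that shares that trajectory:

```latex
% measurement model: observed indicators load on the latent variable over time
y_{ij}(t) = \mu_j + \lambda_j \, \xi_i(t) + \varepsilon_{ij}(t),
\qquad \varepsilon_{ij}(t) \sim N(0, \psi_j)

% random-coefficient trajectory of the latent variable
\xi_i(t) = b_{0i} + b_{1i}\, t + e_i(t),
\qquad (b_{0i}, b_{1i})^\top \sim N(\mathbf{b}, \Sigma_b)

% proportional hazards model sharing the latent trajectory
h_i(t) = h_0(t) \exp\!\bigl(\boldsymbol{\gamma}^\top \mathbf{x}_i + \beta\, \xi_i(t)\bigr)
```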

Keywords: Bayesian analysis, joint model, longitudinal data, time-to-event data

Procedia PDF Downloads 112
7220 Graph Codes - 2D Projections of Multimedia Feature Graphs for Fast and Effective Retrieval

Authors: Stefan Wagenpfeil, Felix Engel, Paul McKevitt, Matthias Hemmje

Abstract:

Multimedia indexing and retrieval is generally designed and implemented by employing feature graphs. These graphs typically contain a significant number of nodes and edges to reflect the level of detail in feature detection. A higher level of detail increases the effectiveness of the results but also leads to more complex graph structures. However, graph-traversal-based algorithms for similarity are quite inefficient and computationally intensive, especially for large data structures. To deliver fast and effective retrieval, an efficient similarity algorithm, particularly for large graphs, is mandatory. Hence, in this paper, we define a graph projection into a 2D space (Graph Code) as well as the corresponding algorithms for indexing and retrieval. We show that calculations in this space can be performed more efficiently than graph traversals due to a simpler processing model and a high level of parallelization. In consequence, we prove that the effectiveness of retrieval also increases substantially, as Graph Codes facilitate more levels of detail in feature fusion. Thus, Graph Codes provide a significant increase in efficiency and effectiveness (especially for multimedia indexing and retrieval) and can be applied to images, videos, audio, and text information.

Keywords: indexing, retrieval, multimedia, graph algorithm, graph code

Procedia PDF Downloads 129
7219 Correlation of Spirometry with Six Minute Walk Test and Grading of Dyspnoea in COPD Patients

Authors: Anand K. Patel

Abstract:

Background: Patients with COPD have decreased pulmonary function, which in turn reflects on their day-to-day activities. Objectives: To assess the correlation of forced vital capacity (FVC) and forced expiratory volume in one second (FEV1) with the 6-minute walk test (6MWT), and to correlate the Borg rating of perceived exertion scale (Borg scale) and the Modified Medical Research Council (MMRC) dyspnea scale with the 6MWT, FVC, and FEV1. Method: In this prospective study, a total of 72 patients with COPD, diagnosed according to the GOLD guidelines, were enrolled after written consent was taken. They were first asked to rate physical exertion on the Borg scale as well as the MMRC dyspnea scale and then underwent pre- and post-bronchodilator spirometry followed by the 6-minute walk test. The findings were correlated by calculating the Pearson coefficient for each set and obtaining the p-values, with p < 0.05 considered significant. Result: There was a significant correlation between spirometry and the 6MWT, suggesting that patients with lower measurements were unable to walk longer distances. However, FVC had a stronger correlation than FEV1. The MMRC scale had a stronger correlation with the 6MWT compared to the Borg scale. Conclusion: The study suggests that the 6MWT is a better test for monitoring patients with COPD. In spirometry, FVC, rather than FEV1, should be used for monitoring patients with COPD. The MMRC scale shows a stronger correlation than the Borg scale, and we should use it more often.

Keywords: spirometry, 6 minute walk test, MMRC, Borg scale

Procedia PDF Downloads 167
7218 Pantograph-Catenary Contact Force: Features Evaluation for Catenary Diagnostics

Authors: Mehdi Brahimi, Kamal Medjaher, Noureddine Zerhouni, Mohammed Leouatni

Abstract:

Prognostics and Health Management (PHM) is a systems engineering discipline which provides solutions and models for the implementation of predictive maintenance. The approach is based on extracting useful information from monitoring data to assess the “health” state of industrial equipment or an asset. In this paper, we examine multiple features extracted from the pantograph-catenary contact force in order to select the most relevant ones for a diagnostics function. The feature extraction methodology is based on simulation data generated by a pantograph-catenary simulation software called INPAC, together with measurement data. The feature extraction method relies on both statistical and signal processing analyses, while the feature selection method is based on statistical criteria.
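
A sketch of the kind of time-domain statistical features that are typically extracted from a contact-force signal for such a diagnostics function; the exact feature list used with the INPAC and measurement data is not reproduced here:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def contact_force_features(force):
    """Common time-domain statistics of a pantograph-catenary contact force signal."""
    force = np.asarray(force, dtype=float)
    rms = np.sqrt(np.mean(force ** 2))
    return {
        "mean": force.mean(),
        "std": force.std(ddof=1),
        "rms": rms,
        "peak_to_peak": np.ptp(force),
        "crest_factor": np.max(np.abs(force)) / rms,
        "skewness": skew(force),
        "kurtosis": kurtosis(force),
        "pct_non_positive": np.mean(force <= 0),   # possible loss-of-contact indicator
    }
```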

Keywords: catenary/pantograph interaction, diagnostics, Prognostics and Health Management (PHM), quality of current collection

Procedia PDF Downloads 260
7217 Development of an Attitude Scale Towards Social Networking Sites

Authors: Münevver Başman, Deniz Gülleroğlu

Abstract:

The purpose of this study is to develop a scale to determine attitudes towards social networking sites. For this aim, 45 tryout items were administered to 342 students studying at Marmara University, Faculty of Education. The reliability and validity analyses of the scale were conducted with the help of these students. As a result of exploratory factor analysis with varimax rotation, a 41-item structure with three factors (interest, reality, and negative effects) was obtained. The alpha reliability of the scale was .899, and the reliabilities of the factors were .899, .799, and .775, respectively.

Keywords: attitude, reliability, social networking sites, validity

Procedia PDF Downloads 348
7216 Prediction of the Torsional Vibration Characteristics of a Rotor-Shaft System Using Its Scale Model and Scaling Laws

Authors: Jia-Jang Wu

Abstract:

This paper presents the scaling laws that provide the criteria of geometric and dynamic similitude between a full-size rotor-shaft system and its scale model, and that can be used to predict the torsional vibration characteristics of the full-size rotor-shaft system by manipulating the corresponding data of its scale model. The scaling factors, which play fundamental roles in predicting the geometric and dynamic relationships between the full-size rotor-shaft system and its scale model, are first obtained from the equation of motion of torsional free vibration for the free vibration problems of scale and full-size rotor-shaft systems. Then, the scaling factor of the external force (i.e., torque) required for torsional forced vibration problems is determined based on Newton’s second law. Numerical results show that the torsional free and forced vibration characteristics of a full-size rotor-shaft system can be accurately predicted from those of its scale models by using the foregoing scaling factors. For this reason, it is believed that the presented approach will be significant for investigating the relevant phenomena in scale-model tests.
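
To illustrate how such scaling factors arise, consider the torsional equation of motion; under full geometric similarity and the same material for model and prototype (a simplifying assumption made here for illustration, the paper derives its factors in general form), a length scale factor λ gives:

```latex
% torsional equation of motion of a single rotor-shaft element
J\,\ddot{\theta}(t) + c\,\dot{\theta}(t) + k_t\,\theta(t) = T(t),
\qquad k_t = \frac{G I_p}{L}

% same material, geometric scale factor \lambda = L_{\text{model}} / L_{\text{full}}:
J \propto \rho\,\lambda^{5}, \qquad k_t \propto \lambda^{3}
\;\Rightarrow\; \omega = \sqrt{k_t / J} \propto \lambda^{-1}

% torque scale factor from Newton's second law (T = J\ddot{\theta}, time scales as \lambda):
T \propto \lambda^{5} \cdot \lambda^{-2} = \lambda^{3}
```

Under these assumptions, the model's natural frequencies scale as 1/λ and the applied torque as λ³, which is the kind of relationship used to map scale-model data back to the full-size system.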

Keywords: torsional vibration, full-size model, scale model, scaling laws

Procedia PDF Downloads 369
7215 A Nonlinear Feature Selection Method for Hyperspectral Image Classification

Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo

Abstract:

For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon, given the difficulty of collecting training samples. Hence, many studies have developed feature selection methods, such as the F-score and the Hilbert-Schmidt Independence Criterion (HSIC), to improve hyperspectral image classification. However, most of them only consider class separability in the original space, i.e., a linear class separability. In this study, we propose a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with different bandwidths for different features, and it considers both the within-class separability and the between-class separability. A genetic algorithm is applied to tune these bandwidths such that the within-class separability is minimized and the between-class separability is maximized simultaneously. This indicates that the corresponding feature space is more suitable for classification, and the corresponding nonlinear classification boundary can separate the classes very well. These optimal bandwidths also show the importance of the bands for hyperspectral image classification: the reciprocals of the bandwidths can be viewed as weights of the bands. The smaller the bandwidth, the larger the weight of the band, and the more important it is for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset, and all non-background samples were used to form the testing dataset. The support vector machine was applied to classify these testing samples based on the selected feature subsets. According to the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by applying the proposed method, F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas F-score and HSIC select 168 and 217 features, respectively. Moreover, the classification accuracies increase dramatically when using only the first few features: the accuracies with respect to feature subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84168) is close to the highest classification accuracy, 0.8795. For the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set, similar results are obtained. These results illustrate that the proposed method can efficiently find feature subsets to improve hyperspectral image classification. One can first apply the proposed method to determine a suitable feature subset according to specific purposes; then researchers need only use the corresponding sensor bands to obtain the hyperspectral image and classify the samples. This can not only improve classification performance but also reduce the cost of obtaining hyperspectral images.
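
The band-wise kernel underlying such a measure can be written with one bandwidth per spectral band; the notation and the factor of 2 in the denominator are generic conventions, not necessarily the authors' exact formulation. The GA then tunes the bandwidths so that within-class distances in the induced feature space shrink while between-class distances grow, and bands with small bandwidths (large reciprocals) are ranked as more important:

```latex
% generalized RBF kernel with a separate bandwidth \sigma_d per band (feature) d
k_{\boldsymbol{\sigma}}(\mathbf{x}, \mathbf{z})
  = \exp\!\left( -\sum_{d=1}^{D} \frac{(x_d - z_d)^2}{2\sigma_d^2} \right)

% band weights used for ranking and subset selection
w_d = \frac{1}{\sigma_d}, \qquad
w_{(1)} \ge w_{(2)} \ge \dots \ge w_{(D)} \text{ gives the selection order}
```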

Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine

Procedia PDF Downloads 240
7214 A Study on the Development of Social Participation Activity Scale for the Elderly

Authors: Young-Kwang Lee, Eun-Gu Ji, Min-Joo Kim, Seung-Jae Oh

Abstract:

The purpose of this study is to develop a social participation activity scale for the elderly. Following exploratory factor analysis, confirmatory factor analysis was conducted with the maximum likelihood method using bundled items. In conclusion, thirteen items of the social participation activity scale appeared appropriate. Finally, convergent validity and discriminant validity were verified for the scale with acceptable fit. Convergent validity was assessed based on the variance extracted value; in other words, the hypothesis that the variables are the same is rejected, and validity is confirmed. This study extensively considered the measurement items of the social participation activity scale used to measure the social participation activities of the elderly. In the future, it will be meaningful as a tool to verify the effectiveness of services in organizations that provide social welfare services to elderly people, such as comprehensive social welfare centers and comprehensive social welfare centers for the elderly.

Keywords: elderly, social participation, scale development, validity

Procedia PDF Downloads 152
7213 Efficient Human Motion Detection Feature Set by Using Local Phase Quantization Method

Authors: Arwa Alzughaibi

Abstract:

Human motion detection is a challenging task due to a number of factors, including variable appearance and posture and a wide range of illumination conditions and backgrounds. The first requirement of such a model is therefore a reliable feature set that can discriminate between a human and a non-human form with a fair amount of confidence, even under difficult conditions. With richer representations, the classification task becomes easier and improved results can be achieved. The aim of this paper is to investigate reliable and accurate human motion detection models that are able to detect human motion accurately under varying illumination levels and backgrounds. Different sets of features are tried and tested, including Histogram of Oriented Gradients (HOG), Deformable Parts Model (DPM), Local Decorrelated Channel Feature (LDCF), and Aggregate Channel Feature (ACF). We propose an efficient and reliable human motion detection approach by combining Histogram of Oriented Gradients (HOG) and Local Phase Quantization (LPQ) as the feature set and implementing a search pruning algorithm based on optical flow to reduce the number of false positives. Experimental results show that combining the local phase quantization descriptor and the histogram of oriented gradients performs well over a large range of illumination conditions and backgrounds compared with state-of-the-art human detectors. The area under the ROC curve (AUC) of the proposed method reached 0.781 for the UCF dataset and 0.826 for the CDW dataset, which indicates that it performs better than the HOG, DPM, LDCF, and ACF methods.

Keywords: human motion detection, histograms of oriented gradients, local phase quantization

Procedia PDF Downloads 229