Search results for: alpha version SLO kTG
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 609

129 Optimization of Ethanol Fermentation from Pineapple Peel Extract Using Response Surface Methodology (RSM)

Authors: Nadya Hajar, Zainal, S., Atikah, O., Tengku Elida, T. Z. M.

Abstract:

Ethanol has been known for a long time, being perhaps the oldest product obtained through traditional biotechnology fermentation. Agricultural waste as a fermentation substrate is widely discussed as an alternative to edible feedstocks and as a way of utilizing organic material. Pineapple peel, a highly promising substrate, is a by-product of the pineapple processing industry. Bio-ethanol was produced from pineapple (Ananas comosus) peel extract by controlled fermentation without any pretreatment. Saccharomyces ellipsoideus was used as the inoculum in this fermentation process, as it is naturally found on pineapple skin. In this study, the capability of Response Surface Methodology (RSM) for optimizing ethanol production from pineapple peel extract using Saccharomyces ellipsoideus in a batch fermentation process was investigated. The effects of five test variables over defined ranges, inoculum concentration 6-14% (v/v), pH 4.0-6.0, sugar concentration 14-22°Brix, temperature 24-32°C and incubation time 30-54 hours, on ethanol production were evaluated. Data obtained from the experiments were analyzed with the RSM module of MINITAB software (Version 15), whereby an optimum ethanol concentration of 8.637% (v/v) was determined. The optimum conditions were 14% (v/v) inoculum concentration, pH 6, 22°Brix, 26°C and 30 hours of incubation. A regression model significant at the 5% level, with a correlation value of 99.96%, was also obtained.
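
As a rough illustration of the RSM workflow described above (here in Python rather than MINITAB), the sketch below fits a second-order response surface over the five fermentation variables and scans it for the optimum; the design points and yields are placeholders, not the authors' data.

```python
# Sketch: fit a quadratic (second-order) response surface to fermentation
# runs and locate the optimum settings, as RSM tools like MINITAB do.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Columns: inoculum % (v/v), pH, sugar (degrees Brix), temp (C), time (h)
X = np.array([
    [6, 4.0, 14, 24, 30], [14, 6.0, 22, 26, 30], [10, 5.0, 18, 28, 42],
    [14, 4.0, 14, 32, 54], [6, 6.0, 22, 32, 54], [10, 5.0, 18, 24, 42],
])
y = np.array([4.1, 8.6, 6.2, 3.9, 5.5, 5.0])  # ethanol % (v/v), illustrative

quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)

# Crude optimum search: evaluate the fitted surface on a coarse grid
# spanning the ranges given in the abstract.
grids = [np.linspace(lo, hi, 5) for lo, hi in
         [(6, 14), (4, 6), (14, 22), (24, 32), (30, 54)]]
mesh = np.array(np.meshgrid(*grids)).reshape(5, -1).T
best = mesh[model.predict(quad.transform(mesh)).argmax()]
print("predicted optimum settings:", best)
```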

Keywords: Bio-ethanol, pineapple peel extract, Response Surface Methodology (RSM), Saccharomyces ellipsoideus.

128 Noninvasive Brain-Machine Interface to Control Both Mecha TE Robotic Hands Using Emotiv EEG Neuroheadset

Authors: Adrienne Kline, Jaydip Desai

Abstract:

Electroencephalography (EEG) is a noninvasive technique that registers signals originating from the firing of neurons in the brain. The Emotiv EEG Neuroheadset is a consumer product comprising 14 EEG channels and was used to record the reactions of neurons within the brain to two forms of stimuli in 10 participants. These stimuli consisted of auditory and visual formats that provided directions of ‘right’ or ‘left.’ Participants were instructed to raise their right or left arm in accordance with the instruction given. A scenario in OpenViBE was generated to stimulate the participants while recording their data. In OpenViBE, the Graz Motor BCI Stimulator algorithm was configured to govern the duration and number of visual stimuli. Using EEGLAB under the cross-platform MATLAB®, the electrodes most stimulated during the study were identified. Data outputs from EEGLAB were analyzed using IBM SPSS Statistics® Version 20. This aided in determining the electrodes to use in the development of a brain-machine interface (BMI) using real-time EEG signals from the Emotiv EEG Neuroheadset. Signal processing and feature extraction were accomplished via the Simulink® signal processing toolbox. An Arduino™ Duemilanove microcontroller was used to link the Emotiv EEG Neuroheadset and the right and left Mecha TE™ Hands.
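
For readers who want a feel for the kind of processing such a pipeline performs, here is a minimal Python sketch of per-channel band-power feature extraction and left/right classification; the sampling rate, frequency band, and classifier choice are assumptions, since the paper's actual chain runs through OpenViBE, EEGLAB, and Simulink.

```python
# Sketch: band-power features from 14-channel EEG epochs plus a simple
# left/right classifier. All data here are random stand-ins.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 128  # Hz; Emotiv consumer headsets sample at 128 Hz (assumption)

def band_power(epoch, lo=8.0, hi=12.0):
    """Mean mu-band power per channel for one (channels x samples) epoch."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS, axis=-1)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[:, band].mean(axis=-1)

rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 14, 2 * FS))   # 40 fake 2-second epochs
labels = rng.integers(0, 2, size=40)             # 0 = left, 1 = right

features = np.array([band_power(e) for e in epochs])
clf = LinearDiscriminantAnalysis().fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```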

Keywords: Brain-machine interface, EEGLAB, Emotiv EEG Neuroheadset, OpenViBE, Simulink.

127 A Novel Strategy for Oriented Protein Immobilization

Authors: Ching-Wei Tsai, Chih-I Liu, Ruoh-Chyu Ruaana

Abstract:

A new strategy for the oriented immobilization of proteins is proposed. The strategy contains two steps. The first step is to search for a docking site away from the active site on the protein surface. The second step is to find a ligand that is able to grasp the targeted site of the protein. To avoid ligand binding at the active site of the protein, the targeted docking site is selected to carry charges opposite to those near the active site. To enhance ligand-protein binding, both hydrophobic and electrostatic interactions need to be included; the targeted docking site should therefore contain hydrophobic amino acids. The ligand is then selected with the help of molecular docking simulations. The enzyme α-amylase derived from Aspergillus oryzae (TAKA) was taken as an example for oriented immobilization. The active site of TAKA is surrounded by negatively charged amino acids. All possible hydrophobic sites on the surface of TAKA were evaluated by free energy estimation through benzene docking. A hydrophobic site on the opposite side from TAKA's active site was found to be positive in net charge. A possible ligand, 3,3',4,4'-biphenyltetracarboxylic acid (BPTA), was found to catch TAKA by the designated docking site. The BPTA molecules were then grafted onto silica gels, and the affinity of TAKA adsorption and the specific activity of the immobilized enzymes were measured. It was found that TAKA had a dissociation constant as low as 7.0×10⁻⁶ M toward the ligand BPTA on silica gel. An increase in ionic strength had little effect on the adsorption of TAKA, which indicates the existence of hydrophobic interaction between ligands and proteins. The specific activity of the immobilized TAKA was compared with that of TAKA randomly adsorbed on primary-amine-containing silica gel. It was found that the orderly immobilized TAKA has a specific activity twice as high as that of the enzyme randomly adsorbed by ionic interaction.

Keywords: Oriented protein immobilization, molecular docking, ligand design, surface modification.

126 Feature-Based Segmentation of Color Textured Images Using GLCM and Markov Random Field Model

Authors: Dipti Patra, Mridula J

Abstract:

In this paper, we propose a new image segmentation approach for colour textured images. The proposed method consists of two stages. In the first stage, textural features using the gray level co-occurrence matrix (GLCM) are computed for the regions of interest (ROI) considered for each class; the ROI acts as ground truth for the classes. The Ohta model (I1, I2, I3) is the colour model used for segmentation. The statistical mean feature of the I2 component at a certain inter-pixel distance (IPD) was found to be the optimal textural feature for further segmentation. In the second stage, the feature matrix obtained is assumed to be a degraded version of the image labels, and the unknown image labels are modeled as a Markov Random Field (MRF). The labels are estimated under the maximum a posteriori (MAP) criterion using the iterated conditional modes (ICM) algorithm. The performance of the proposed approach is compared with that of existing schemes: JSEG and another scheme that uses GLCM and MRF in RGB colour space. The proposed method is found to outperform the existing ones in terms of segmentation accuracy, with an acceptable rate of convergence. The results are validated on synthetic and real textured images.
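
A minimal sketch of the first stage, computing GLCM texture features (including the mean feature at a chosen inter-pixel distance) for an ROI, might look as follows with scikit-image; the ROI here is random placeholder data standing in for the I2 component.

```python
# Sketch: GLCM texture features for a region of interest. scikit-image
# >= 0.19 names these functions graycomatrix/graycoprops (earlier
# versions spell them "grey...").
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in ROI,
# e.g. the I2 component of the Ohta transform quantized to 8 bits

# Co-occurrence at a chosen inter-pixel distance (IPD) and 0-degree angle.
ipd = 2
glcm = graycomatrix(roi, distances=[ipd], angles=[0], levels=256,
                    symmetric=True, normed=True)

# The GLCM mean feature used by the paper (sum over i of i * p(i, j)):
p = glcm[:, :, 0, 0]
print("glcm mean:", (np.arange(256)[:, None] * p).sum())

# Other standard GLCM properties, for comparison:
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop)[0, 0])
```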

Keywords: Texture Image Segmentation, Gray Level Co-occurrence Matrix, Markov Random Field Model, Ohta colour space, ICM algorithm.

125 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation

Authors: Aicha Majda, Abdelhamid El Hassani

Abstract:

Lung CT image segmentation is a prerequisite for lung CT image analysis. Most conventional methods need post-processing to deal with abnormal lung CT scans, such as those containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor transformations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined from a patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are computed from this new term, and the graph is then created using these weights between its nodes. Finally, the segmentation is completed with the min-cut/max-flow algorithm. Experimental results show that the proposed method is very accurate and efficient, and can directly provide explicit lung regions without any post-processing operations, in contrast to the standard method.
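
A minimal sketch of such a patch-based boundary term, assuming a Gaussian weighting of the mean squared patch difference (the exact form the authors use is not given in the abstract), could look like this:

```python
# Sketch: patch-based n-link weights for a graph-cut segmentation. Each
# weight compares whole patches around two neighboring pixels instead of
# single intensities; sigma and the patch radius are assumptions.
import numpy as np

def patch(img, y, x, r):
    """(2r+1) x (2r+1) patch around (y, x), edge-padded at the borders."""
    padded = np.pad(img, r, mode="edge")
    return padded[y:y + 2 * r + 1, x:x + 2 * r + 1]

def nlink_weight(img, p, q, r=2, sigma=10.0):
    """Boundary penalty between neighboring pixels p and q."""
    d2 = ((patch(img, *p, r) - patch(img, *q, r)) ** 2).mean()
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
ct_slice = rng.integers(0, 256, size=(32, 32)).astype(float)  # fake slice
# Weight between horizontally adjacent pixels (5, 5) and (5, 6):
print(nlink_weight(ct_slice, (5, 5), (5, 6)))
# These weights would populate the n-links of the graph handed to a
# min-cut/max-flow solver (e.g. the PyMaxflow library).
```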

Keywords: Graph cuts, lung CT scan, lung parenchyma segmentation, patch based similarity metric.

124 Upgraded Cuckoo Search Algorithm to Solve Optimisation Problems Using Gaussian Selection Operator and Neighbour Strategy Approach

Authors: Mukesh Kumar Shah, Tushar Gupta

Abstract:

An upgraded cuckoo search algorithm is proposed here to solve optimization problems, based on improvements made to earlier versions of the cuckoo search algorithm. Shortcomings of the earlier versions, such as slow convergence and trapping in local optima, are addressed in the proposed version by random initialization of solutions through an Improved Lambda Iteration Relaxation method, a Random Gaussian Distribution Walk to improve local search, a Greedy Selection step to accelerate convergence to the optimized solution, and a "Study Nearby Strategy" to improve global search performance by avoiding entrapment in local optima. A Crossover Operation is further proposed to generate better solutions. The strategy used in the proposed algorithm shows superiority in terms of convergence speed over several classical algorithms. Three standard algorithms were tested on a 6-generator standard test system, and the results presented clearly demonstrate its superiority over other established algorithms. The algorithm is also capable of handling larger unit systems.
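
The core loop of such a scheme, reduced to the Gaussian-walk local search, greedy selection, and abandonment steps on a toy cost function, might look like this; population size, step scale, and the abandon fraction are assumptions:

```python
# Sketch: Gaussian-walk local search plus greedy selection in a cuckoo-
# search-style loop. The quadratic cost is a toy stand-in for fuel cost.
import numpy as np

def cost(x):
    return np.sum((x - 3.0) ** 2, axis=-1)

rng = np.random.default_rng(0)
n_nests, dim, pa = 15, 6, 0.25          # e.g. 6 generators; pa assumed
nests = rng.uniform(0.0, 10.0, size=(n_nests, dim))

for _ in range(200):
    # Random Gaussian distribution walk around each nest (local search).
    trial = nests + rng.normal(scale=0.5, size=nests.shape)
    # Greedy selection: keep whichever of old/new is better.
    better = cost(trial) < cost(nests)
    nests[better] = trial[better]
    # Abandon the worst pa fraction and re-initialize them (global search).
    worst = np.argsort(cost(nests))[-int(pa * n_nests):]
    nests[worst] = rng.uniform(0.0, 10.0, size=(len(worst), dim))

print("best solution:", nests[np.argmin(cost(nests))])
```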

Keywords: Economic dispatch, Gaussian selection operator, prohibited operating zones, ramp rate limits, upgraded cuckoo search.

123 LOD Exploitation and Fast Silhouette Detection for Shadow Volumes

Authors: Mustafa S. Fawad, Wang Wencheng, Wu Enhua

Abstract:

Shadows add a great amount of realism to a scene, and many algorithms exist to generate them. Recently, shadow volumes (SVs) have made great achievements and earned a valuable position in the gaming industry. In this light, we concentrate on simple but valuable initial steps for further optimization of SV generation, i.e., model simplification and silhouette edge detection and tracking. SV generation usually spends considerable time on computing the boundary silhouettes of an object, and if the object is complex, edge generation becomes much harder and slower. The challenge gets stiffer when real-time shadow generation and rendering are demanded. We investigated a way to use a real-time silhouette edge detection method that takes advantage of spatial and temporal coherence, and to exploit the level-of-detail (LOD) technique for reducing the silhouette edges of the model, using the simplified version of the model for shadow generation and thus speeding up the running time. These steps greatly reduce the execution time of shadow volume generation in real time and are easily adaptable to any of the recently proposed SV techniques. Our main focus is to exploit the LOD and silhouette edge detection techniques, adapting them to further enhance shadow volume generation for real-time rendering.
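
The silhouette test itself is simple: an edge is a silhouette when its two adjacent faces face opposite ways relative to the viewer. A minimal Python sketch, with mesh connectivity supplied by the caller, follows:

```python
# Sketch: silhouette edge detection for a triangle mesh. An edge is a
# silhouette when one adjacent face is front-facing and the other
# back-facing with respect to the eye position.
import numpy as np

def face_normals(verts, faces):
    a, b, c = (verts[faces[:, i]] for i in range(3))
    return np.cross(b - a, c - a)

def silhouette_edges(verts, faces, edge_faces, eye):
    """edge_faces maps each edge (i, j) to its two adjacent face indices."""
    normals = face_normals(verts, faces)
    centers = verts[faces].mean(axis=1)
    facing = np.einsum("ij,ij->i", normals, eye - centers) > 0.0
    return [e for e, (f1, f2) in edge_faces.items()
            if facing[f1] != facing[f2]]

# Tiny two-triangle example (a "tent" sharing edge (1, 2)):
verts = np.array([[0, 0, 0], [1, 0, 0], [0.5, 1, 0], [0.5, 0.5, 1.0]], float)
faces = np.array([[0, 1, 2], [2, 1, 3]])
print(silhouette_edges(verts, faces, {(1, 2): (0, 1)},
                       eye=np.array([0.5, 0.3, 5.0])))
```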

Keywords: LOD, perception, Shadow Volumes, Silhouette Edge, Spatial and Temporal coherence.

122 Fuzzy Relatives of the CLARANS Algorithm With Application to Text Clustering

Authors: Mohamed A. Mahfouz, M. A. Ismail

Abstract:

This paper introduces new algorithms, a fuzzy relative of the CLARANS algorithm (FCLARANS) and fuzzy c-medoids based on randomized search (FCMRANS), for fuzzy clustering of relational data. Unlike the existing fuzzy c-medoids algorithm (FCMdd), in which the within-cluster dissimilarity of each cluster is minimized in each iteration by recomputing new medoids given the current memberships, FCLARANS minimizes the same objective function minimized by FCMdd by changing the current medoids in such a way that the sum of the within-cluster dissimilarities is minimized. Computing new medoids may be affected by noise, because outliers may join the computation of the medoids, while the choice of medoids in FCLARANS is dictated by the location of a predominant fraction of points inside a cluster and is therefore less sensitive to the presence of outliers. In FCMRANS, the step of computing new medoids in FCMdd is modified to be based on randomized search. Furthermore, a new initialization procedure is developed that adds randomness to the initialization procedure used with FCMdd. Both FCLARANS and FCMRANS are compared with the robust and linearized version of fuzzy c-medoids (RFCMdd). Experimental results with different samples of Reuters-21578, Newsgroups (20NG) and generated datasets with noise show that FCLARANS is more robust than both RFCMdd and FCMRANS. Finally, both FCMRANS and FCLARANS are more efficient than RFCMdd, and their outputs are almost the same as those of RFCMdd in terms of classification rate.
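
A minimal sketch of the FCMdd objective on relational data and the CLARANS-style randomized medoid swap that FCLARANS uses to decrease it is given below; the fuzzifier and the number of trial swaps are assumptions.

```python
# Sketch: fuzzy c-medoids objective plus randomized medoid swaps.
import numpy as np

def memberships(D, medoids, m=2.0, eps=1e-12):
    d = D[:, medoids] + eps                 # n x c distances to medoids
    u = (1.0 / d) ** (1.0 / (m - 1.0))
    return u / u.sum(axis=1, keepdims=True)

def objective(D, medoids, m=2.0):
    u = memberships(D, medoids, m)
    return np.sum((u ** m) * D[:, medoids])  # within-cluster dissimilarity

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 2))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # relational data

medoids = list(rng.choice(len(X), size=3, replace=False))
for _ in range(500):                         # randomized search for swaps
    i, cand = rng.integers(3), rng.integers(len(X))
    trial = medoids.copy()
    trial[i] = cand
    if len(set(trial)) == 3 and objective(D, trial) < objective(D, medoids):
        medoids = trial                      # accept only improving swaps
print("medoids:", medoids)
```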

Keywords: Data Mining, Fuzzy Clustering, Relational Clustering, Medoid-Based Clustering, Cluster Analysis, Unsupervised Learning.

121 Hand Gesture Interpretation Using Sensing Glove Integrated with Machine Learning Algorithms

Authors: Aqsa Ali, Aleem Mushtaq, Attaullah Memon, Monna

Abstract:

In this paper, we present a low-cost design for a smart glove that can perform sign language recognition to assist speech-impaired people. Specifically, we have designed and developed an Assistive Hand Gesture Interpreter that recognizes hand movements relevant to the American Sign Language (ASL) and translates them into text for display on a Thin-Film-Transistor Liquid Crystal Display (TFT LCD) screen as well as into synthetic speech. Linear Bayes classifiers and multilayer neural networks have been used to classify 11-element feature vectors obtained from the sensors on the glove into one of 27 classes: the ASL alphabet plus a predefined gesture for space. Three types of features are used: bending, from six bend sensors; orientation in three dimensions, from accelerometers; and contact at vital points, from contact sensors. To gauge the performance of the presented design, a training database was prepared using five volunteers. The accuracy of the current version on the prepared dataset was found to be up to 99.3% for the target user. The solution combines electronics, e-textile technology, sensor technology, embedded systems and machine learning techniques to build a low-cost wearable glove that is accurate, elegant and portable.
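
As a rough sketch of the classification stage, the snippet below trains the two model families named above (a linear Bayes classifier and a multilayer neural network) on 11-element feature vectors; the data, the 6/3/2 sensor split, and all hyperparameters are placeholders.

```python
# Sketch: classifying the glove's 11-element sensor vectors into 27
# gesture classes. Random data stands in for the real recordings.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_classes = 27                              # ASL letters + space gesture
X = rng.standard_normal((n_classes * 40, 11))   # placeholder sensor data
y = np.repeat(np.arange(n_classes), 40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
models = (LinearDiscriminantAnalysis(),                       # linear Bayes
          MLPClassifier(hidden_layer_sizes=(32,), max_iter=600,
                        random_state=0))                      # multilayer NN
for clf in models:
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, "accuracy:", round(clf.score(X_te, y_te), 3))
```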

Keywords: American sign language, assistive hand gesture interpreter, human-machine interface, machine learning, sensing glove.

120 The Effect of Parents' Ethnic Socialization Practices on Ethnic Identity, Self-Esteem and Psychological Adjustment of Multi-Ethnic Children in Malaysia

Authors: Chua Bee Seok, Rosnah Ismail, Jasmine Adela Mutang, Shaziah Iqbal, Nur Farhana Ardillah Aftar, Alfred Chan Huan Zhi, Ferlis Bin Bahari, Lailawati Madlan, Hon Kai Yee

Abstract:

The present study aims to explore how parents' ethnic socialization practices contribute to the ethnic identity development, self-esteem and psychological adjustment of multi-ethnic children in Sabah, Malaysia. A total of 342 multi-ethnic children (age range = 10 to 14 years; mean age = 12.65 years, SD = 0.88) and their parents participated in the study. A modified version of the Multigroup Ethnic Identity Measure (MEIM), the Familial Ethnic Socialization Measure (FESM), the Rosenberg Self-Esteem Scale (RSE) and the Behavioral and Emotional Rating Scale, Second Edition (BERS-2) were used. The results showed that: i) parents' ethnic socialization practice was a strong predictor of the ethnic identity development of multi-ethnic children; ii) parents' ethnic socialization practice was also a significant predictor of the self-esteem of multi-ethnic children; iii) parents' ethnic socialization practice was not a significant predictor of the psychological adjustment of multi-ethnic children. The results showed the implications of parents' ethnic socialization practices for ethnic identity development in successful multi-ethnic families.

Keywords: Ethnic identity development, multi-ethnic children, parents' ethnic socialization practices, psychological adjustment, self-esteem.

119 The Effect of Reducing Superimposed Dead Load on the Lateral Seismic Deformations of Structures

Authors: H. Alnajajra, A. Touqan, M. Dwaikat

Abstract:

The vast majority of Middle East countries are prone to earthquakes. From a seismic hazard point of view, the high superimposed dead load intensity of the partitions and wearing materials of reinforced concrete slabs constructed in these countries can increase the earthquake vulnerability of structures. The primary objective of this paper is to investigate the effect of reducing the superimposed dead load on the lateral seismic deformations of structures, the inter-story drifts and seismic pounding damage. The study utilizes a group of three reinforced concrete structures at three different site conditions. These structures are assumed to be constructed in Nablus city, Palestine, with superimposed dead load values of 1 kN/m2, 3 kN/m2, and 5 kN/m2, respectively. The SAP2000 program, Version 18.1.1, is used to perform response spectrum analyses to obtain the potential lateral seismic deformations of the studied models. Notably, the study finds that, at the same site, the superimposed dead load has only a minor effect on the lateral deflections of the models. This promotes the hypothesis that buildings that failed during earthquakes did so mainly because they were not designed appropriately against gravity loads.

Keywords: Gravity loads, inter-story drifts, lateral seismic deformations, reinforced concrete slabs, response spectrum method, SAP2000, seismic design, seismic pounding, superimposed dead load.

118 Weighted Data Replication Strategy for Data Grid Considering Economic Approach

Authors: N. Mansouri, A. Asadi

Abstract:

A Data Grid is a geographically distributed environment that deals with data-intensive applications in scientific and enterprise computing. Data replication is a common method used to achieve efficient and fault-tolerant data access in Grids. In this paper, a dynamic data replication strategy called Enhanced Latest Access Largest Weight (ELALW), an enhanced version of the Latest Access Largest Weight strategy, is proposed. Replication should be used wisely because the storage capacity of each Grid site is limited; thus, it is important to design an effective strategy for the replica replacement task. ELALW replaces replicas based on the expected number of future requests, the size of the replica, and the number of copies of the file. It also improves access latency by selecting the best replica when various sites hold replicas. The proposed replica selection picks the best replica location from among the many replicas based on response time, which can be determined by considering the data transfer time, the storage access latency, the replica requests waiting in the storage queue and the distance between nodes. Simulation results using OptorSim show that our replication strategy achieves better performance overall than other strategies in terms of job execution time, effective network usage and storage resource usage.
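
One plausible reading of the replica-selection rule, scoring each candidate site by estimated response time and fetching from the cheapest, is sketched below; the cost terms follow the abstract, the numbers are invented, and node distance is folded into the bandwidth term.

```python
# Sketch: response-time-based replica selection across candidate sites.
from dataclasses import dataclass

@dataclass
class ReplicaSite:
    name: str
    bandwidth_mbps: float     # effective link bandwidth to the requester
    storage_latency_s: float  # storage access latency
    queued_requests: int      # requests waiting in the storage queue
    per_request_s: float      # mean service time per queued request

def response_time(site: ReplicaSite, file_mb: float) -> float:
    transfer = file_mb * 8.0 / site.bandwidth_mbps
    queue_wait = site.queued_requests * site.per_request_s
    return transfer + site.storage_latency_s + queue_wait

sites = [
    ReplicaSite("siteA", bandwidth_mbps=100.0, storage_latency_s=0.02,
                queued_requests=12, per_request_s=0.05),
    ReplicaSite("siteB", bandwidth_mbps=40.0, storage_latency_s=0.01,
                queued_requests=1, per_request_s=0.05),
]
best = min(sites, key=lambda s: response_time(s, file_mb=500.0))
print("fetch replica from:", best.name)
```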

Keywords: Data grid, data replication, simulation, replica selection, replica placement.

117 Application of ANN for Estimation of Power Demand of Villages in Sulaymaniyah Governorate

Authors: A. Majeed, P. Ali

Abstract:

Before designing an electrical system, load estimation is necessary for unit sizing and demand-generation balancing. The system could be a stand-alone system for a village, a grid-connected system, or renewable energy integrated into a grid connection; this matters especially because there are non-electrified villages in developing countries. In the classical model, the energy demand was found by multiplying each household appliance by its rating and the duration of its operation; in this paper, information that exists for electrified villages is used to predict demand, as villages have largely similar lifestyles. This paper describes a method for predicting, by artificial neural network (ANN), the average energy consumed in each two-month period by every consumer living in a village. The input data were collected using a regional survey of samples of consumers representing typical types of living, household appliances and energy consumption, and the output data were collected from the administration office of Piramagrun for each corresponding consumer. The results of this study show that the average demand of different consumers from four villages in different months throughout the year is approximately 12 kWh/day, and the model estimates the average demand per day for every consumer with a mean absolute percent error of 11.8%. The MathWorks software package MATLAB, version 7.6.0, which contains the Neural Network Toolbox, was used.
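
A scikit-learn stand-in for the ANN regression described above (the paper used MATLAB's Neural Network Toolbox) might look as follows; the survey feature layout and the data are assumptions.

```python
# Sketch: an ANN regressor mapping survey answers to energy use.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Assumed survey columns: household size, no. of lamps, fridge (0/1),
# TV (0/1), air cooler (0/1), income bracket.
X = rng.uniform(0, 10, size=(200, 6))
y = 60.0 * X[:, 0] + 15.0 * X[:, 1] + rng.normal(0, 20, 200)  # kWh / 2 months

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                   random_state=0))
model.fit(X, y)
mape = np.mean(np.abs(model.predict(X) - y) / np.abs(y)) * 100
print("MAPE on training data: %.1f%%" % mape)
```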

Keywords: Artificial neural network, load estimation, regional survey, rural electrification.

116 Modern Seismic Design Approach for Buildings with Hysteretic Dampers

Authors: Vanessa A. Segovia, Sonia E. Ruiz

Abstract:

The use of energy dissipation systems for seismic applications has increased worldwide; thus, it is necessary to develop practical and modern criteria for their optimal design. Here, a direct displacement-based seismic design approach for frame buildings with hysteretic energy dissipation systems (HEDS) is applied. The building is constituted by two individual structural systems: 1) a main elastic structural frame designed for service loads; and 2) a secondary system, corresponding to the HEDS, that controls the effects of lateral loads. The procedure involves controlling two design parameters: a) the stiffness ratio (α = Kframe/Ktotal system), and b) the strength ratio (γ = Vdamper/Vtotal system). The proposed damage-controlled approach contributes to the design of a more sustainable and resilient building because the structural damage is concentrated in the HEDS. The design displacement spectrum is reduced by means of a recently published damping factor for elastic structural systems with HEDS located in Mexico City. Two limit states are verified: serviceability and near collapse. Instead of the traditional trial-and-error approach, a procedure that allows the designer to establish the preliminary sizes of the structural elements of both systems is proposed. The design methodology is applied to an 8-story steel building with buckling-restrained braces, located in the soft soil of Mexico City. With the aim of choosing the optimal design parameters, a parametric study is developed considering different values of α and γ. The simplified methodology is for preliminary sizing, design, and evaluation of the effectiveness of HEDS, and it constitutes a modern and practical tool that enables the structural designer to select the best design parameters.

Keywords: Damage-controlled buildings, direct displacement-based seismic design, optimal hysteretic energy dissipation systems.

115 Stature Prediction Model Based On Hand Anthropometry

Authors: Arunesh Chandra, Pankaj Chandna, Surinder Deswal, Rajesh Kumar Mishra, Rajender Kumar

Abstract:

The arm length, hand length, hand breadth and middle finger length of 1540 right-handed industrial workers of Haryana state were used to assess the relationship between upper limb dimensions and stature. Initially, the data were analyzed using basic univariate analysis and independent t-tests; then simple and multiple linear regression models were used to estimate stature, using SPSS (version 17). There was a positive correlation between the upper limb measurements (hand length, hand breadth, arm length and middle finger length) and stature (p < 0.01), which was highest for hand length. The accuracy of stature prediction ranged from ±54.897 mm to ±58.307 mm. The use of multiple regression equations gave better results than simple regression equations. This study provides new forensic standards for stature estimation from the upper limb measurements of male industrial workers of Haryana (India). The results indicate that stature can be determined from hand dimensions with accuracy when only the upper limb is available, as may happen in explosions, train or plane crashes, mutilated bodies, and similar cases. The regression formulae derived in this study will be useful to anatomists, archaeologists, anthropologists, design engineers and forensic scientists for reliable prediction of stature using regression equations.
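
In outline, the analysis reduces to regressing stature on one or more limb dimensions, as in this sketch with synthetic placeholder data (the actual coefficients must come from the Haryana sample):

```python
# Sketch: simple vs. multiple linear regression of stature on hand
# measurements. All values below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200
hand_len = rng.normal(19.0, 1.0, n)       # cm
hand_breadth = rng.normal(8.5, 0.5, n)
arm_len = rng.normal(58.0, 2.5, n)
finger_len = rng.normal(11.5, 0.7, n)
stature = 800 + 40 * hand_len + 8 * arm_len + rng.normal(0, 30, n)  # mm

# Simple regression on the single best predictor (hand length):
simple = LinearRegression().fit(hand_len.reshape(-1, 1), stature)
# Multiple regression on all four upper-limb dimensions:
X = np.column_stack([hand_len, hand_breadth, arm_len, finger_len])
multiple = LinearRegression().fit(X, stature)
print("simple R^2:  ", simple.score(hand_len.reshape(-1, 1), stature))
print("multiple R^2:", multiple.score(X, stature))  # should fit better
```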

Keywords: Anthropometric dimensions, Forensic identification, Industrial workers, Stature prediction.

114 Intelligent Recognition of Diabetes Disease via FCM Based Attribute Weighting

Authors: Kemal Polat

Abstract:

In this paper, an attribute weighting method called fuzzy C-means clustering based attribute weighting (FCMAW) is used for classification of a diabetes disease dataset. The aims of this study are to reduce the variance within the attributes of the diabetes dataset and to improve the classification accuracy of classifier algorithms by transforming a non-linearly separable dataset into a more linearly separable one. The Pima Indians Diabetes dataset has two classes, comprising normal subjects (500 instances) and diabetes subjects (268 instances). Fuzzy C-means clustering is an improved version of the K-means clustering method and is one of the most widely used clustering methods in data mining and machine learning applications. In this study, as the first stage, fuzzy C-means clustering was used to find the centers of the attributes in the Pima Indians diabetes dataset, and the dataset was then weighted according to the ratios of the means of the attributes to their centers. Secondly, after the weighting process, classifier algorithms including the support vector machine (SVM) and k-NN (k-nearest neighbor) classifiers were used to classify the weighted Pima Indians diabetes dataset. Experimental results show that the proposed attribute weighting method (FCMAW) obtains very promising results in the classification of the Pima Indians diabetes dataset.
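
One plausible reading of the FCMAW weighting step, running a one-dimensional fuzzy C-means per attribute and rescaling by the ratio of the attribute mean to its center, is sketched below; the cluster count and fuzzifier are assumptions, not values from the paper.

```python
# Sketch: FCM-based attribute weighting under the stated assumptions.
import numpy as np

def fcm_center(x, c=2, m=2.0, iters=50, eps=1e-9):
    """1-D fuzzy C-means; returns the center nearest the attribute mean."""
    rng = np.random.default_rng(0)
    centers = rng.choice(x, size=c, replace=False)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + eps
        u = (1.0 / d) ** (2.0 / (m - 1.0))          # FCM memberships
        u /= u.sum(axis=1, keepdims=True)
        centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return centers[np.argmin(np.abs(centers - x.mean()))]

def fcm_attribute_weighting(X):
    X = np.asarray(X, dtype=float)
    weights = np.array([X[:, j].mean() / fcm_center(X[:, j])
                        for j in range(X.shape[1])])
    return X * weights, weights

rng = np.random.default_rng(1)
X = rng.gamma(2.0, 2.0, size=(100, 8))   # stand-in for the 8 Pima attributes
Xw, w = fcm_attribute_weighting(X)
print("attribute weights:", np.round(w, 3))
```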

Keywords: Fuzzy C-means clustering, Fuzzy C-means clustering based attribute weighting, Pima Indians diabetes dataset, SVM.

113 A Fuzzy-Rough Feature Selection Based on Binary Shuffled Frog Leaping Algorithm

Authors: Javad Rahimipour Anaraki, Saeed Samet, Mahdi Eftekhari, Chang Wook Ahn

Abstract:

Feature selection and attribute reduction are crucial problems and widely used techniques in the fields of machine learning, data mining and pattern recognition, used to overcome the well-known phenomenon of the curse of dimensionality. This paper presents a feature selection method that efficiently carries out attribute reduction, thereby selecting the most informative features of a dataset. It consists of two components: 1) a measure for feature subset evaluation, and 2) a search strategy. For the evaluation measure, we have employed the fuzzy-rough dependency degree (FRDD) of the lower approximation-based fuzzy-rough feature selection (L-FRFS) due to its effectiveness in feature selection. As for the search strategy, a modified version of the binary shuffled frog leaping algorithm (B-SFLA) is proposed. The proposed feature selection method is obtained by hybridizing the B-SFLA with the FRDD. Nine classifiers have been employed to compare the proposed approach with several existing methods over twenty-two datasets, including nine high-dimensional and large ones, from the UCI repository. The experimental results demonstrate that the B-SFLA approach significantly outperforms other metaheuristic methods in terms of the number of selected features and classification accuracy.

Keywords: Binary shuffled frog leaping algorithm, feature selection, fuzzy-rough set, minimal reduct.

112 Integrating Microcontroller-Based Projects in a Human-Computer Interaction Course

Authors: Miguel Angel Garcia-Ruiz, Pedro Cesar Santana-Mancilla, Laura Sanely Gaytan-Lugo

Abstract:

This paper describes the design and application of a short in-class project conducted in Algoma University's Human-Computer Interaction (HCI) course, taught in the Bachelor of Computer Science program. The project was based on the Maker Movement (people using and reusing electronic components and everyday materials to tinker with technology and make interactive applications), in which students applied low-cost and easy-to-use electronic components, the Arduino Uno microcontroller board, software tools, and everyday objects. Students collaborated in small teams, completing hands-on activities with these materials and making an interactive walking cane for blind people. At the end of the course, students filled out a Technology Acceptance Model version 2 (TAM2) questionnaire in which they evaluated the application of microcontroller boards in HCI classes; we also asked them about applying the Maker Movement in HCI classes. Results showed overall positive student opinions and responses about using microcontroller boards in HCI classes. We strongly suggest that every HCI course include practical activities related to tinkering with technology, such as applying microcontroller boards, where students actively and constructively participate in teams to achieve learning objectives.

Keywords: Maker movement, microcontrollers, learning, projects, course, technology acceptance.

111 Organic Agriculture Harmony in Nutrition, Environment and Health: Case Study in Iran

Authors: Sara Jelodarian

Abstract:

Organic agriculture is a kind of living and dynamic agriculture that was introduced in the early 20th century. The fundamental basis of organic agriculture is harmony with nature. This version of farming emphasizes removing growth hormones, chemical fertilizers, toxins, radiation and genetic manipulation, and instead integrating modern scientific techniques (such as biological and microbial control) that lead to the production of healthy food and the preservation of the environment, together with the use of agricultural products such as forage and manure. Government support for markets producing organic products, and the experience of other societies that have succeeded in this field, can help advance the positive and effective aspects of this technology, especially in developing countries. This research projects that by 2030, 25% of global agricultural land will be covered by organic farming. Consequently, Iran, owing to its rich genetic resources and varied climates, can be a pioneer in promoting organic products. In addition, for sustainable farming, a blend of organic and other innovative systems is needed. Important limitations to the acceptance of these systems exist, and a diversity of policy instruments will be required to support their development and implementation. The paper was based on a compilation of reports, issues, books and articles related to the subject, gathered through library study and research. We likewise combined experimental and survey approaches to obtain data.

Keywords: Development, production markets, progress, strategic role, technology.

110 Diversity for Safety and Security of Autonomous Vehicles against Accidental and Deliberate Faults

Authors: Anil Ranjitbhai Patel, Clement John Shaji, Peter Liggesmeyer

Abstract:

The safety and security of Autonomous Vehicles (AVs) is a growing concern: first, due to the increased number of safety-critical functions taken over by automotive embedded systems; second, due to the increased exposure of these software-intensive systems to potential attackers; and third, due to dynamic interaction in an uncertain and unknown environment at runtime, which changes the functional and non-functional properties of the system. Frequently occurring environmental uncertainties, random component failures, and compromised security of the AVs might result in hazardous events, sometimes even in accidents, if left undetected. Beyond these technical issues, we argue that the safety and security of AVs against accidental and deliberate faults are poorly understood and rarely implemented. One possible way to overcome this is through the well-known diversity approach. As an effective approach to increasing safety and security, diversity has been widely used in the aviation, railway, and aerospace industries. This paper therefore proposes a fault-tolerance-by-diversity model that addresses the mitigation of accidental and deliberate faults by applying structural and variant redundancy. The model can be used to design AVs with various types of diversity in hardware- and software-based multi-version systems. The paper evaluates the presented approach on an example from adaptive cruise control, followed by a discussion of the case study with initial findings.
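
The classic building block behind such multi-version designs is variant redundancy with majority voting. A minimal sketch follows, with three hypothetical distance estimators standing in for diverse adaptive-cruise-control components; the voting tolerance is an assumption.

```python
# Sketch: 2-out-of-3 voting over diverse variants of one function, the
# basic masking mechanism of variant redundancy.
from collections import Counter
from typing import Callable, Sequence

def vote(outputs: Sequence[float], tol: float = 0.5) -> float:
    """Return a value that at least two variants agree on (within tol);
    raise if the diverse variants cannot agree."""
    rounded = [round(o / tol) * tol for o in outputs]
    value, count = Counter(rounded).most_common(1)[0]
    if count < 2:
        raise RuntimeError("variants disagree - fail safe")
    return value

def run_redundant(variants: Sequence[Callable[[float], float]],
                  x: float) -> float:
    return vote([v(x) for v in variants])

# Three diverse implementations of the same distance estimate (stand-ins):
radar = lambda t: 25.0 + 0.1 * t
lidar = lambda t: 25.1 + 0.1 * t
camera = lambda t: 40.0                    # faulty / compromised variant
print(run_redundant([radar, lidar, camera], x=1.0))  # 2-of-3 masks the fault
```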

Keywords: Autonomous vehicles, diversity, fault-tolerance, adaptive cruise control, safety, security.

109 Geochemistry of Natural Radionuclides Associated with Acid Mine Drainage (AMD) in a Coal Mining Area in Southern Brazil

Authors: Juliana A. Galhardi, Daniel M. Bonotto

Abstract:

Coal is an important non-renewable energy source and can be associated with radioactive elements. In Figueira city, Paraná state, Brazil, high uranium activity has been recorded near the coal mine that supplies a local thermoelectric power plant. In this context, the radon activity (Rn-222, produced by Ra-226 decay in the U-238 natural series) was evaluated in groundwater, river water and effluents produced by the acid mine drainage in the coal reject dumps. The samples were collected in August 2013 and February 2014 and analyzed at LABIDRO (Laboratory of Isotope and Hydrochemistry), UNESP, Rio Claro city, Brazil, using an alpha spectrometer (AlphaGuard) adjusted to evaluate the mean radon activity concentration in five cycles of 10 minutes. No radon activity concentration exceeded 100 Bq.L-1, a critical value previously established by the World Health Organization. The average radon activity concentration in groundwater was higher than in surface water and in the effluent samples, possibly due to the accumulation of uranium and radium in the aquifer layers, which favors radon trapping. The lower value in the river waters can indicate dilution, and the intermediate value in the effluents may indicate radon absorption in the coal particles of the reject dumps. The results also indicate that the radon activities in the effluents increase with sample acidification, possibly due to greater radium leaching and subsequent radon transport in the drainage flow. The water samples of the Laranjinha River and the Ribeirão das Pedras stream, which respectively supply Figueira city and receive the mining effluent, exhibited higher pH values upstream of the mine, reflecting the acid mine drainage discharge. The radionuclide transport indicates the importance of monitoring activity concentrations in natural waters due to the risks that radioactivity can represent to human health.

Keywords: Radon, radium, acid mine drainage, coal

108 Breast Cancer Survivability Prediction via Classifier Ensemble

Authors: Mohamed Al-Badrashiny, Abdelghani Bellaachia

Abstract:

This paper presents a classifier ensemble approach for predicting the survivability of breast cancer patients using the latest database version of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. The system consists of two main components: a features selection component and a classifier ensemble component. The features selection component divides the features in the SEER database into four groups and then tries to find the most important features among the four groups that maximize the weighted average F-score of a certain classification algorithm. The ensemble component uses three different classifiers, each of which models a different set of features from SEER produced by the features selection module. On top of them, another classifier is used to give the final decision based on the output decisions and confidence scores of the underlying classifiers. Different classification algorithms have been examined; the best setup found uses the decision tree, Bayesian network, and Naïve Bayes algorithms for the underlying classifiers and Naïve Bayes for the classifier ensemble step. The system outperforms all systems published to date when evaluated against exactly the same SEER data (period of 1973-2002), giving an 87.39% weighted average F-score compared with 85.82% and 81.34% for the other published systems. By increasing the data size to cover the whole database (period of 1973-2014), the overall weighted average F-score jumps to 92.4% on the held-out unseen test set.
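
A scikit-learn sketch of the two-level architecture is given below; since scikit-learn has no Bayesian-network classifier, logistic regression stands in for that base learner, and a synthetic dataset replaces SEER.

```python
# Sketch: stacked ensemble with a Naive Bayes combiner over base-model
# probabilities, mirroring the two-level design described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = StackingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=6)),
                ("bn", LogisticRegression(max_iter=1000)),  # BN stand-in
                ("nb", GaussianNB())],
    final_estimator=GaussianNB(),      # combiner sees base probabilities
    stack_method="predict_proba")
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", round(ensemble.score(X_te, y_te), 3))
```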

Keywords: Classifier ensemble, breast cancer survivability, data mining, SEER.

107 Nuclear Medical Image Treatment System Based On FPGA in Real Time

Authors: B. Mahmoud, M.H. Bedoui, R. Raychev, H. Essabbah

Abstract:

We present in this paper an acquisition and treatment system designed for the semi-analog gamma cameras of Sopha Medical Vision (SMVi), taking the SOPHY DS7 as an example. The system consists of a nuclear medical Image Acquisition, Treatment and Display (IATD) chain ensuring the acquisition, the treatment of the signals resulting from the gamma-camera detection head, and the scintigraphic image construction in real time. The chain is composed of an analog treatment board and a digital treatment board; an earlier version of the digital board was designed around a DSP [2]. We describe the designed system and the digital treatment algorithms, in which we have improved the performance and the flexibility. In the new version of the IATD chain presented in this paper, the digital treatment algorithms are implemented in a specific reprogrammable circuit, an FPGA (Field Programmable Gate Array).

Keywords: Nuclear medical image, scintigraphic image, digital treatment, linearity, spectrometry, FPGA.

106 Optimization of the Dental Direct Digital Imaging by Applying the Self-Recognition Technology

Authors: Mina Dabirinezhad, Mohsen Bayat Pour, Amin Dabirinejad

Abstract:

This paper introduces a technology to solve some of the deficiencies of direct digital radiology. Digital radiology is the latest advance in dental imaging and has become an essential part of dentistry. Direct digital radiology has two main parts: an intraoral X-ray machine and a sensor (digital image receptor). Dentists and dental nurses experience difficulties during the image-taking process with the direct digital X-ray machine. For instance, they sometimes need to readjust the sensor in the patient's mouth and take the X-ray image again because of its low quality. Another problem is that the position of the sensor may move in the patient's mouth, producing an unusable image for the dentist; this makes the procedure time-consuming for dentists and dental nurses. On the other hand, taking several X-ray images creates problems for the patient, such as harm to their health and pain in the mouth due to the pressure of the sensor against the jaw. The authors propose a technology to solve the above-mentioned issues, called Self-Recognition Direct Digital Radiology (SDDR). This technology is based on the principle that the intraoral X-ray machine is capable of determining the location of the sensor in the patient's mouth automatically. In addition to solving the aforementioned problems, SDDR technology has a smaller environmental impact than the previous version.

Keywords: Dental direct digital imaging, digital image receptor, digital X-ray machine, environmental impacts.

105 Contribution of Football Club Jerseys towards English Premier League Fans’ Loyalty in Nigeria

Authors: B. O. Diyaolu

Abstract:

The globalization of football, especially among youth, has been rising over the past decade. Nigerian youth displaying football jerseys at every opportunity is one sign of the acceptance of football globalization. Love for English Premier League (EPL) football jerseys is very strong among Nigerian fans, and EPL club jerseys are a common sports product among fans in Nigeria. This study investigates the contribution of football club jerseys to EPL fans' loyalty in Nigeria. A descriptive survey research design was used. The population consists of EPL fans in Nigeria. A simple random sampling technique (fish bowl without replacement) was used to select two states from each of the six geo-political zones. A purposive sampling technique was used to pick eight viewing centres, while an accidental sampling technique was used to pick five vendor stands from each state. An average of 250 respondents was selected from each state, and a total of 3,200 respondents participated in the research. Two research instruments were used: a self-developed structured questionnaire, the Football Jersey Scale (FJS), consisting of 10 items; and the Fans Loyalty Scale (FLS), modified from the Psychological Commitment to Team (PCT) scale and consisting of 20 items. Cronbach's alpha reliability coefficients of 0.72 and 0.75, respectively, were obtained. The hypothesis was tested at the 0.05 significance level. Data were analysed using frequencies, percentage counts, pie charts and multiple regression. The results showed that the b-value of the football club jersey variable is 0.148, the standardized regression coefficient (beta) is 0.089, and t = 4.759, statistically significant at p = 0.000. This signifies a relative contribution of football club jerseys to EPL fans' loyalty in Nigeria. The club jersey, the most outstanding identifier of every club, was found to significantly predict loyalty: the jersey on the body of a fan becomes a site for a declaration of loyalty that is available for social interaction and negotiation. Nigerian local league clubs, in their attempt to keep Nigerian fans loyal, should borrow a leaf from their European counterparts.

Keywords: Club jersey, English Premier League, football fans, Nigeria youth.

104 A Preliminary X-Ray Study on Human-Hair Microstructures for a Health-State Indicator

Authors: Phannee Saengkaew, Weerasak Ussawawongaraya, Sasiphan Khaweerat, Supagorn Rugmai, Sirisart Ouajai, Jiraporn Luengviriya, Sakuntam Sanorpim, Manop Tirarattanasompot, Somboon Rhianphumikarakit

Abstract:

We present a preliminary X-ray study of human-hair microstructures as a health-state indicator, in particular for cancer cases. As an uncomplicated and low-cost X-ray technique, human-hair microstructure was analyzed by wide-angle X-ray diffraction (XRD) and small-angle X-ray scattering (SAXS). The XRD measurements exhibited simple reflections at d-spacings of 28 Å, 9.4 Å and 4.4 Å, corresponding to the periodic distance of the protein matrix of the human-hair macrofibrils, and to the diameter and the repeat spacing of the polypeptide alpha helices of the protofibrils of the human-hair microfibrils, respectively. Compared to the normal cases, the unhealthy cases, including the breast- and ovarian-cancer cases, showed higher normalized ratios of the X-ray diffraction peaks at 9.4 Å and 4.4 Å. This likely resulted from altered distributions of microstructures caused by molecular alteration. In an elemental analysis by X-ray fluorescence (XRF), the normalized quantitative ratios of zinc (Zn)/calcium (Ca) and iron (Fe)/calcium (Ca) were determined. Analogously, both the Zn/Ca and Fe/Ca ratios of the unhealthy cases were higher than those of the normal cases. Combining the structural analysis by XRD with the elemental analysis by XRF showed that the modified fibrous microstructures of the hair samples were related to altered elemental compositions. Therefore, these microstructural and elemental analyses of hair samples may be of benefit in the diagnosis of cancer and genetic diseases, and such early diagnosis could lower the risk of these diseases. However, a high-intensity X-ray source, a high-resolution X-ray detector, and more hair samples are needed to develop this X-ray technique further, and its efficiency could be enhanced by including skin and fingernail samples in the human-hair analysis.

Keywords: Human-hair analysis, XRD, SAXS, breast cancer, health-state indicator

103 A Revisited View to the Paced Auditory Serial Addition Test (PASAT) in Female and Male Normal Subjects

Authors: Javad Razjouyan, Shahriar Gharibzadeh, Ali Fallah, Mehdi Moghaddasi, Mohsen Seyfi, Amir Kasaeian

Abstract:

The Paced Auditory Serial Addition Test (PASAT) has been used as a common research tool in different neurological disorders such as multiple sclerosis. Recently, technology has let researchers introduce a visual version of the test, the Paced Visual Serial Addition Test (PVSAT). In this paper, computerized versions of these two tests are introduced. Besides interpreting the number of correct responses, the software calculates the subjects' reaction times; we hypothesize that paying attention to reaction time may be valuable. For this purpose, sixty-eight female and fifty-eight male normal subjects were enrolled in the study. We investigated the similarity between PASAT3 and PVSAT3 in the number of correct responses and in the new criterion (the average reaction time of each subject). The similarity between the two tests was rejected (p-value = 0.000), which means that these two tests differ. No effect of sex on the tests was found, since the p-values for the difference between PASAT3 and PVSAT3 were the same in both sexes (p-value = 0.000), which means that male and female subjects performed the tests at no different level of performance. The new criterion shows a negative correlation with age, which suggests that aged normal subjects may give the same number of correct responses as young subjects but with more latent responses. This gives evidence for the importance of reaction time.

Keywords: Paced Auditory Serial Addition Test, Paced Visual Serial Addition Test, reaction time.

102 Hybrid Approach for Software Defect Prediction Using Machine Learning with Optimization Technique

Authors: C. Manjula, Lilly Florence

Abstract:

Software technology is developing rapidly, which drives the growth of various industries. Nowadays, software-based applications are widely adopted for business purposes. For any software company, developing reliable software is a challenging task, because a faulty software module may be harmful to the growth of the company and its business. Hence, there is a need for techniques that allow early prediction of software defects. Due to the complexity of manual prediction, automated software defect prediction techniques have been introduced. These techniques are based on learning patterns from previous software versions and finding defects in the current version. They have attracted researchers due to their significant impact on industrial growth through identifying bugs in software. Several studies along these lines have been carried out, but achieving desirable defect prediction performance is still a challenging task. To address this issue, we present a machine learning based hybrid technique for software defect prediction. First, a Genetic Algorithm (GA) is presented in which an improved fitness function is used for better optimization of features in the datasets. These features are then processed through a Decision Tree (DT) classification model. Finally, an experimental study is presented in which the results of the proposed GA-DT based hybrid approach are compared with those of the DT classification technique. The results show that the proposed hybrid approach achieves better classification accuracy.
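
A bare-bones version of the hybrid, a GA evolving binary feature masks whose fitness is the cross-validated accuracy of a decision tree (a stand-in for the paper's improved fitness function, which is not specified in the abstract), can be sketched as follows:

```python
# Sketch: GA-based feature selection feeding a decision tree classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=30, n_informative=6,
                           random_state=0)
rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(20, X.shape[1]))       # binary feature masks

def fitness(mask):
    if not mask.any():
        return 0.0
    tree = DecisionTreeClassifier(random_state=0)
    return cross_val_score(tree, X[:, mask.astype(bool)], y, cv=3).mean()

for _ in range(15):                                    # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]            # truncation selection
    cut = rng.integers(1, X.shape[1], size=10)         # one-point crossover
    kids = np.array([np.r_[parents[i][:c], parents[(i + 1) % 10][c:]]
                     for i, c in enumerate(cut)])
    kids ^= (rng.random(kids.shape) < 0.02).astype(kids.dtype)  # mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```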

Keywords: Decision tree, genetic algorithm, machine learning, software defect prediction.

101 Energy Conscious Builder Design Pattern with C# and Intermediate Language

Authors: Kayun Chantarasathaporn, Chonawat Srisa-an

Abstract:

Design patterns have gained more and more acceptance since their emergence in the software development world a decade ago, and they have become another de facto standard of essential knowledge for object-oriented programming developers. Their target usage, from the beginning, was regular computers, so minimizing power consumption was never a concern. In this decade, however, demand for more complicated software running on mobile devices has grown rapidly, as much higher-performance portable gadgets are continuously supplied to the market. To keep up with time to market, which is a business reason, the branch of software development for power-conscious, battery-powered devices has shifted from specific low-level languages to higher-level ones. Currently, complicated software running on mobile devices is often developed in high-level languages that support OOP concepts, which has brought design patterns to the mobile world. However, using design patterns directly in software development for power-conscious systems is not recommended, because they were not originally designed for such environments. This paper demonstrates a design pattern adapted for power-limited systems. Because there are numerous original design patterns, it is not possible to cover them all at once, so this paper focuses on creating an energy-conscious version of the existing, regular "Builder Pattern" appropriate for developing low-power-consumption software.
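
The paper's energy-conscious version is developed in C# and Intermediate Language; as a language-neutral outline, the Python sketch below shows only the regular Builder structure being adapted, with one assumed power-motivated tweak (accumulating parts in a single buffer and deferring all work to build(), avoiding repeated intermediate allocations).

```python
# Sketch: Builder pattern with lazy, allocation-light construction.
class Message:
    def __init__(self, header: str, body: str, checksum: int):
        self.header, self.body, self.checksum = header, body, checksum

class MessageBuilder:
    """Step-by-step construction; all real work is deferred to build()."""
    def __init__(self):
        self._parts: list[str] = []        # one reusable buffer
        self._header = ""

    def set_header(self, header: str) -> "MessageBuilder":
        self._header = header
        return self

    def add_line(self, line: str) -> "MessageBuilder":
        self._parts.append(line)           # no intermediate strings built
        return self

    def build(self) -> Message:
        body = "\n".join(self._parts)      # single concatenation at the end
        return Message(self._header, body, checksum=sum(map(ord, body)))

msg = (MessageBuilder().set_header("status")
                       .add_line("battery: 73%")
                       .add_line("signal: ok")
                       .build())
print(msg.header, msg.checksum)
```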

Keywords: Design Patterns, Builder Pattern, Low Power Consumption, Object Oriented Programming, Power Conscious System, Software.

100 Scale Time Offset Robust Modulation (STORM) in a Code Division Multiaccess Environment

Authors: David M. Jenkins Jr.

Abstract:

Scale Time Offset Robust Modulation (STORM) [1]-[3] is a high-bandwidth waveform design that adds time scale to embedded reference modulations using only time delay [4]. In an environment where each user has a specific delay and scale, identification of the user with the highest signal power, and of that user's phase, is facilitated by the STORM processor. Both of these parameters are required by an efficient multiuser detection algorithm. In this paper, the STORM modulation approach is evaluated with a direct-sequence spread quadrature phase shift keying (DS-QPSK) system. A misconception about STORM time-scale modulation is that a fine temporal resolution is required at the receiver. STORM is applied to a QPSK code division multiaccess (CDMA) system by modifying the spreading codes: specifically, the in-phase code uses a typical spreading code, and the quadrature code uses a time-delayed and time-scaled version of the in-phase code. Consequently, the same temporal resolution is required in the receiver before and after the application of STORM. In this paper, the bit error performance of STORM in a synchronous CDMA system is evaluated and compared to theory, and the bit error performance of STORM incorporated in a single-user WCDMA downlink is presented to demonstrate the applicability of STORM in a modern communication system.
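
The I/Q code construction described above is easy to sketch: generate an in-phase PN code, then derive the quadrature code as a time-delayed, time-scaled copy of it. The chip count, delay, and scale factor below are arbitrary illustrative values, not parameters from the paper.

```python
# Sketch: quadrature spreading code built as a delayed, scaled copy of
# the in-phase code, the STORM-style modification described above.
import numpy as np

rng = np.random.default_rng(0)
n_chips = 256
i_code = rng.choice([-1.0, 1.0], size=n_chips)   # typical PN spreading code

delay, scale = 8, 1.02                           # embedded reference offsets
t = np.arange(n_chips)
# Sample the in-phase code at scaled-and-delayed instants (linear
# interpolation, wrapping at the code period), then re-quantize to +/-1:
q_vals = np.interp(t * scale + delay, t, i_code, period=n_chips)
q_code = np.where(q_vals >= 0, 1.0, -1.0)

# The I/Q pair then spreads the QPSK symbols; a STORM receiver estimates
# each user's (delay, scale) from the relation between the two codes.
symbols = (1 + 1j) / np.sqrt(2) * np.ones(4)     # placeholder QPSK symbols
chips = np.repeat(symbols, n_chips // 4)
tx = chips.real * i_code + 1j * chips.imag * q_code
print(tx[:4])
```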

Keywords: Pseudonoise coded communication, Cyclic codes, Code division multiaccess
