Search results for: selection sort
145 Selection of Best Band Combination for Soil Salinity Studies Using ETM+ Satellite Images (A Case Study: Nyshaboor Region, Iran)
Authors: S. H. Sanaeinejad, A. Astaraei, P. Mirhoseini Mousavi, M. Ghaemi
Abstract:
One of the main environmental problems affecting extensive areas of the world is soil salinity. Traditional data collection methods are neither sufficient for addressing this environmental problem nor accurate enough for soil studies, and remote sensing data can overcome most of these limitations. Although satellite images are commonly used for such studies, the best calibration between the image data and ground conditions still needs to be established for each specific area. The Neyshaboor area, in northeastern Iran, was selected as the field site for this research. Landsat satellite images of the area were used to prepare suitable training samples for processing and classifying the images. A total of 300 locations were selected randomly in the area for soil sampling, of which 273 were retained for laboratory work and image processing analysis. The electrical conductivity of all samples was measured. Six reflective bands of ETM+ satellite images of the study area, acquired in 2002, were used for soil salinity classification. The classification was carried out using common algorithms based on the best band composition. The results showed that reflective bands 7, 3, 4 and 1 form the best band composition for preparing color composite images. We also found that hybrid classification is a suitable method for identifying and delineating the different salinity classes in the area.
Keywords: Soil salinity, Remote sensing, Image processing, ETM+, Nyshaboor
144 High Specific Speed in Circulating Water Pump Can Cause Cavitation, Noise and Vibration
Authors: Chandra Gupt Porwal
Abstract:
Excessive vibration means increased wear, increased repair effort, poor product selection and quality, and high energy consumption. It may be caused by cavitation or suction/discharge recirculation, which can occur when the net positive suction head available (NPSHA) drops below the net positive suction head required (NPSHR). Cavitation can cause axial surging and, if excessive, will damage mechanical seals, bearings and possibly other pump components, and shorten the life of the impeller. This paper explains Suction Energy (SE), Specific Speed (Ns), Suction Specific Speed (Nss), NPSHA and NPSHR and their significance, the possible causes of cavitation and internal recirculation, its diagnostics, and the remedial measures to arrest and prevent cavitation. A case study is presented highlighting that the root cause of the unwanted noise and vibration was cavitation, induced by high specific speed or inadequate net positive suction head available, which damaged the material surfaces of the impeller and suction bells and degraded machine performance, capacity and efficiency. The author strongly recommends revisiting the technical specifications of circulating water (CW) pumps for future projects, providing an NPSH margin ratio greater than 1.5 and limiting Nss to 8500-9000 for cavitation-free operation.
Keywords: Best efficiency point (BEP), Net positive suction head NPSHA, NPSHR, Specific Speed NS, Suction Specific Speed Nss.
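The abstract cites an NPSH margin ratio above 1.5 and an Nss ceiling of 8500-9000 but does not reproduce the underlying formulas. The following is a minimal sketch assuming the customary US-unit definition of suction specific speed (rpm, gpm, ft); the pump figures are illustrative and not taken from the paper.

```python
import math

def suction_specific_speed(rpm, flow_gpm, npshr_ft):
    """US-unit definition: Nss = N * sqrt(Q) / NPSHR^0.75.
    For double-suction impellers, half of the total flow is normally used."""
    return rpm * math.sqrt(flow_gpm) / npshr_ft ** 0.75

def npsh_margin_ratio(npsha_ft, npshr_ft):
    """Margin ratio = NPSHA / NPSHR; the abstract recommends > 1.5."""
    return npsha_ft / npshr_ft

# Hypothetical circulating-water pump data, for illustration only.
rpm, flow_gpm, npsha_ft, npshr_ft = 590, 100_000, 32.0, 28.0

nss = suction_specific_speed(rpm, flow_gpm, npshr_ft)
ratio = npsh_margin_ratio(npsha_ft, npshr_ft)
print(f"Nss = {nss:.0f}, NPSH margin ratio = {ratio:.2f}")
if nss > 9000 or ratio < 1.5:
    print("Outside the limits recommended in the abstract: cavitation risk.")
```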
143 A Text Mining Technique Using Association Rules Extraction
Authors: Hany Mahgoub, Dietmar Rösner, Nabil Ismail, Fawzy Torkey
Abstract:
This paper describes a text mining technique for automatically extracting association rules from collections of textual documents. The technique, called Extracting Association Rules from Text (EART), relies on keyword features to discover association rules among the keywords labeling the documents. The EART system ignores the order in which words occur and instead focuses on the words and their statistical distributions in the documents. The main contributions of the technique are that it integrates XML technology with an information retrieval weighting scheme (TF-IDF) for keyword/feature selection, which automatically selects the most discriminative keywords for use in association rule generation, and that it applies data mining techniques for association rule discovery. It consists of three phases: a text preprocessing phase (transformation, filtration, stemming and indexing of the documents), an association rule mining (ARM) phase (applying our algorithm for Generating Association Rules based on a Weighting scheme, GARW) and a visualization phase (visualization of the results). Experiments were carried out on web-page news documents related to the outbreak of the bird flu disease. The extracted association rules contain important features and describe the informative news included in the document collection. The performance of the EART system was compared with that of a system using the Apriori algorithm in terms of execution time and the quality of the extracted association rules.
Keywords: Text mining, data mining, association rule mining
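The GARW weighting algorithm itself is not given in the abstract, so the following is only a generic sketch of the two named ingredients: TF-IDF keyword selection followed by support/confidence rules over keyword co-occurrence. The corpus, thresholds and top-k value are illustrative assumptions; scikit-learn is assumed available.

```python
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "bird flu outbreak reported in poultry farms",
    "vaccine research for bird flu strain",
    "poultry trade hit by flu outbreak",
]  # stand-in corpus; the paper used bird-flu news web pages

# 1) TF-IDF keyword selection: keep the top-k terms per document.
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)
terms = vec.get_feature_names_out()
top_k = 3
doc_keywords = [
    {terms[i] for i in row.toarray().ravel().argsort()[-top_k:]}
    for row in X
]

# 2) Keyword-level association rules: support and confidence over documents.
min_support, min_conf, n = 0.3, 0.6, len(doc_keywords)
vocab = set().union(*doc_keywords)
for a, b in combinations(sorted(vocab), 2):
    support = sum(a in kw and b in kw for kw in doc_keywords) / n
    cover_a = sum(a in kw for kw in doc_keywords)
    if cover_a and support >= min_support:
        confidence = support * n / cover_a
        if confidence >= min_conf:
            print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}")
```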
142 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone
Abstract:
Seizures are the main factor affecting the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made using continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is performed manually by epileptologists, and this process is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using a knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an Artificial Neural Network classifier, trained by applying the multilayer perceptron algorithm and by using a software application called Training Builder, which has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data of a single patient retrieved from a publicly available EEG dataset.
Keywords: Artificial Neural Network, Data Mining, Electroencephalogram, Epilepsy, Feature Extraction, Seizure Detection, Signal Processing.
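The Training Builder application and its feature set are not described in detail here, so the following is only a loose sketch of the sliding-window-plus-multilayer-perceptron idea on a synthetic single-channel signal; the window length, features and labels are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs, win = 256, 256                            # 1-second sliding windows at 256 Hz
signal = rng.standard_normal(fs * 600)        # synthetic stand-in for one EEG channel
labels = (rng.random(600) < 0.1).astype(int)  # synthetic seizure/non-seizure marks

def window_features(x):
    # A few simple per-window features; the paper extracts a much richer set.
    return [x.mean(), x.std(), np.abs(np.diff(x)).mean(), np.ptp(x)]

X = np.array([window_features(signal[i * win:(i + 1) * win]) for i in range(600)])
y = labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```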
141 Selecting Negative Examples for Protein-Protein Interaction
Authors: Mohammad Shoyaib, M. Abdullah-Al-Wadud, Oksam Chae
Abstract:
Proteomics is one of the largest areas of research in bioinformatics and medical science. An ambitious goal of proteomics is to elucidate the structure, interactions and functions of all proteins within cells and organisms. Predicting Protein-Protein Interaction (PPI) is one of the crucial and decisive problems in current research. Genomic data offer a great opportunity, and at the same time many challenges, for the identification of these interactions, and many methods have already been proposed in this regard. For in-silico identification, most methods require both positive and negative examples of protein interaction, and the quality of these examples is crucial for the final prediction accuracy. Positive examples are relatively easy to obtain from well known databases, but the generation of negative examples is not a trivial task. Current PPI identification methods generate negative examples based on assumptions that are likely to affect their prediction accuracy. Hence, if more reliable negative examples are used, PPI prediction methods may achieve even higher accuracy. Focusing on this issue, a graph based negative example generation method is proposed that is simple and more accurate than existing approaches. An interaction graph of the protein sequences is created, and the basic assumption is that the longer the shortest path between two protein sequences in the interaction graph, the lower the possibility of their interaction. A well established PPI detection algorithm was employed with our negative examples, and in most cases it increases accuracy by more than 10% in comparison with the original negative pair selection method.
Keywords: Interaction graph, Negative training data, Protein-Protein interaction, Support vector machine.
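A minimal sketch of the stated idea, selecting negative pairs whose shortest-path distance in the interaction graph exceeds a threshold; the toy graph and the distance threshold are illustrative assumptions, and networkx is assumed available.

```python
import itertools
import networkx as nx

# Toy interaction graph; nodes are proteins, edges are known interactions.
G = nx.Graph()
G.add_edges_from([("P1", "P2"), ("P2", "P3"), ("P3", "P4"),
                  ("P4", "P5"), ("P5", "P6"), ("P1", "P7")])

min_distance = 4  # assumed threshold: pairs at least this far apart become negatives
lengths = dict(nx.all_pairs_shortest_path_length(G))

negatives = [
    (u, v) for u, v in itertools.combinations(G.nodes, 2)
    if lengths[u].get(v, float("inf")) >= min_distance   # unreachable pairs also qualify
]
print(negatives)
```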
140 Dynamic Features Selection for Heart Disease Classification
Authors: Walid MOUDANI
Abstract:
The healthcare environment is generally perceived as being information rich yet knowledge poor, and there is a lack of effective analysis tools to discover hidden relationships and trends in the data. In fact, valuable knowledge can be discovered by applying data mining techniques in healthcare systems. This study presents a methodology for extracting significant patterns from coronary heart disease data warehouses for heart attack prediction, a condition that unfortunately continues to be a leading cause of mortality worldwide. For this purpose, we dynamically enumerate the optimal subsets of reduced features of high interest by using the rough sets technique combined with dynamic programming, and we validate the classification using a Random Forest (RF) decision tree to identify risky heart disease cases. This work is based on a large amount of data collected from several clinical institutions, based on the medical profiles of patients. Moreover, experts' knowledge in this field has been taken into consideration in order to define the disease and its risk factors, and to establish significant knowledge relationships among the medical factors. A computer-aided system was developed for this purpose based on a population of 525 adults. The performance of the proposed model is analyzed and evaluated against a set of benchmark techniques applied to this classification problem.
Keywords: Multi-Classifier Decision Tree, Feature Reduction, Dynamic Programming, Rough Sets.
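The rough-set/dynamic-programming reduct search is not specified in the abstract; the sketch below only illustrates the final validation step, a Random Forest evaluated on an assumed reduced feature subset, using synthetic data in place of the 525-patient warehouse.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 525-patient coronary heart disease dataset.
X, y = make_classification(n_samples=525, n_features=20, n_informative=8,
                           random_state=0)

# Suppose the rough-set / dynamic-programming step returned this feature subset.
selected = [0, 2, 3, 7, 11, 15]          # hypothetical reduct, for illustration only
X_reduced = X[:, selected]

rf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(rf, X_reduced, y, cv=5)
print("mean CV accuracy:", scores.mean().round(3))
```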
139 Portfolio Management for Construction Company during Covid-19 Using AHP Technique
Authors: Sareh Rajabi, Salwa Bheiry
Abstract:
Covid-19 has caused extensive financial and non-financial damage to the economy and the community, and the level and severity of the pandemic vary across regions and project types. The Covid-19 virus has recently emerged as one of the most important risk management factors worldwide. Therefore, as part of portfolio management assessment, it is essential to evaluate the severity of such a risk on projects and programs at the portfolio level in order to avoid a risky portfolio. Covid-19 hit South America, parts of Europe and the Middle East particularly hard, and the pandemic affected the whole world through lockdowns, interruptions in supply chain management, health and safety requirements, transportation constraints and commercial impacts. This research therefore proposes the Analytical Hierarchy Process (AHP) to analyze and assess a pandemic such as Covid-19 and its impacts on construction projects. The AHP technique uses four sub-criteria, health and safety, commercial risk, completion risk and contractual risk, to evaluate each project and program. The result provides decision makers with information on which projects carry higher or lower risk under a Covid-19 or similar pandemic scenario, so that they can select the most feasible projects within their portfolio, based on effectively weighted criteria, to match the organization's strategies.
Keywords: Portfolio management, risk management, COVID-19, analytical hierarchy process technique.
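A minimal sketch of the AHP step over the four named sub-criteria: weights from the principal eigenvector of a pairwise comparison matrix, a consistency check, and a weighted project score. The comparison matrix, project scores and criterion directions are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for the four
# sub-criteria: health & safety, commercial, completion, contractual risk.
A = np.array([[1,   3,   2,   4],
              [1/3, 1,   1/2, 2],
              [1/2, 2,   1,   3],
              [1/4, 1/2, 1/3, 1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # criterion weights

# Consistency ratio check (random index RI = 0.90 for a 4x4 matrix).
ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
cr = ci / 0.90
print("weights:", weights.round(3), "CR:", round(cr, 3))

# Score two hypothetical projects on the four criteria (0-1, higher = riskier).
projects = {"Project A": [0.7, 0.4, 0.5, 0.3], "Project B": [0.2, 0.6, 0.3, 0.5]}
for name, scores in projects.items():
    print(name, "risk score:", round(float(np.dot(weights, scores)), 3))
```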
138 A Security Model of Voice Eavesdropping Protection over Digital Networks
Authors: Supachai Tangwongsan, Sathaporn Kassuvan
Abstract:
The purpose of this research is to develop a security model for voice eavesdropping protection over digital networks. The proposed model provides an encryption scheme and a personal secret key exchange between communicating parties, a so-called voice data transformation system, resulting in a truly private conversation. The operation of this system comprises two main steps. The first is the personal secret key exchange, for using the keys in the data encryption process during conversation. The key owner can freely choose the key, so it is recommended to exchange a different key with each conversational party and to record the key for each case in the memory provided in the client device. The next step is to set and record another personal encryption option, taking either all frames or just partial frames, the so-called 1:M figure. Using different personal secret keys and different 1:M settings for different parties, without the intervention of the service operator, poses a significant problem for any eavesdropper attempting to discover the key used during the conversation, especially within a short period of time, so the scheme is a safe and effective protection against voice eavesdropping. The results of the implementation indicate that the system performs its function accurately as designed. In this regard, the proposed system is suitable for effective use in voice eavesdropping protection over digital networks, without any requirement to change presently existing network systems, such as the mobile phone network and VoIP.
Keywords: Computer Security, Encryption, Key Exchange, Security Model, Voice Eavesdropping.
137 Fiber Braggs Grating Sensor Based Instrumentation to Evaluate Postural Balance and Stability on an Unstable Platform
Authors: Chethana K., Guru Prasad A. S., Vikranth H. N., Varun H., Omkar S. N., Asokan S.
Abstract:
This paper describes a novel application of Fiber Bragg Grating (FBG) sensors in the assessment of human postural stability and balance on an unstable platform. In this work, an FBG sensor Stability Analyzing Device (FBGSAD) is developed for the measurement of plantar strain to assess the postural stability of subjects on unstable platforms during different stances, in eyes-open and eyes-closed conditions, on a rocker board. The studies are validated by comparing the Centre of Gravity (CG) variations measured on the lumbar vertebra of subjects using a commercial accelerometer. The results obtained from the developed FBGSAD show qualitative similarities with the data recorded by the commercial accelerometer. The advantage of the FBGSAD is that it simultaneously measures the plantar strain distribution and the postural stability of the subject, along with its inherent benefits such as not requiring an energizing voltage at the sensor, electromagnetic immunity and a simple design, which suit its applicability in biomechanical applications. The developed FBGSAD can serve as a tool/yardstick to mitigate space motion sickness, identify individuals who are susceptible to falls, and qualify subjects for balance and stability, which are important factors in the selection of certain unique professionals such as aircraft pilots, astronauts and cosmonauts.
Keywords: Biomechanics, Fiber Bragg Gratings, Plantar Strain Measurement, Postural Stability Analysis.
136 Multi-Level Air Quality Classification in China Using Information Gain and Support Vector Machine
Authors: Bingchun Liu, Pei-Chann Chang, Natasha Huang, Dun Li
Abstract:
Machine learning and data mining are two important tools for extracting useful information and knowledge from large datasets. In machine learning, classification is a widely used technique to predict qualitative variables and is generally preferred over regression from an operational point of view. Due to the enormous increase in air pollution in various countries, especially China, air quality classification has become one of the most important topics in air quality research and modelling. This study introduces a hybrid classification model based on information theory and the Support Vector Machine (SVM), using the air quality data of four cities in China, namely Beijing, Guangzhou, Shanghai and Tianjin, from Jan 1, 2014 to April 30, 2016. China's Ministry of Environmental Protection classifies daily air quality into six levels, namely Serious Pollution, Severe Pollution, Moderate Pollution, Light Pollution, Good and Excellent, based on the respective Air Quality Index (AQI) values. Using information theory, the information gain (IG) is calculated and feature selection is performed for both categorical features and continuous numeric features. The SVM machine learning algorithm is then applied to the selected features with cross-validation. The final evaluation reveals that the IG and SVM hybrid model performs better than the SVM (alone), Artificial Neural Network (ANN) and K-Nearest Neighbours (KNN) models in terms of accuracy as well as complexity.
Keywords: Machine learning, air quality classification, air quality index, information gain, support vector machine, cross-validation.
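A minimal sketch of the hybrid pipeline on synthetic data: an information-gain-style filter (mutual information is used here as a stand-in) followed by an RBF SVM evaluated with cross-validation. The data, number of retained features and SVM hyperparameters are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 2014-2016 pollutant/meteorology records;
# y would hold the six AQI classes (Excellent ... Serious Pollution).
X, y = make_classification(n_samples=800, n_features=15, n_informative=6,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

model = make_pipeline(
    SelectKBest(mutual_info_classif, k=8),   # information-gain-style selection
    StandardScaler(),
    SVC(kernel="rbf", C=10, gamma="scale"),
)
scores = cross_val_score(model, X, y, cv=5)
print("mean CV accuracy:", scores.mean().round(3))
```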
135 Enhancement of Rice Straw Composting Using UV Induced Mutants of Penicillium Strain
Authors: T. N. M. El Sebai, A. A.Khattab, Wafaa M. Abd-El Rahim, H. Moawad
Abstract:
Fungal mutant strains produced cellulase and xylanase enzymes and induced enhanced hydrolysis of rice straw. The mutants were obtained by exposing a Penicillium strain to UV-light treatments. Screening and selection after the UV-light treatment were carried out using the cellulolytic and xylanolytic clear zone method to select hypercellulolytic and hyperxylanolytic mutants. These mutants were evaluated for their cellulase and xylanase enzyme production as well as their ability to biodegrade rice straw. The mutant 12 UV/1 produced 306.21% and 209.91% of the cellulase and xylanase, respectively, of the original wild type strain, and showed a high capacity for rice straw degradation. The effectiveness of the tested mutant strain and that of the wild strain was compared with respect to enhancing the composting of a rice straw and animal manure mixture. The results showed that the compost produced from the mixture inoculated with the mutant strain (12 UV/1) was the best, compared to the wild strain and the un-inoculated mixture. Analysis of the composted materials showed that the characteristics of the produced compost were close to those of high quality standard compost. The results obtained in the present work suggest that the combination of rice straw and animal manure could be used to enhance the composting of rice straw, particularly when applied with a fungal decomposer accelerating the composting process.
Keywords: Rice straw, composting, UV mutants, Penicillium.
134 Comparative Study of Seismic Isolation as Retrofit Method for Historical Constructions
Authors: Carlos H. Cuadra
Abstract:
Seismic isolation can be used as a retrofit method for historical buildings, with the advantage that minimal intervention on the superstructure is required. However, the selection of isolation devices depends on the weight and stiffness of the upper structure. In this study, two buildings are analyzed to evaluate the applicability of this retrofitting methodology. Both buildings are located in Akita prefecture in the northern part of Japan. One building is a wooden structure that corresponds to the old council meeting hall of Noshiro city. The second building is a brick masonry structure that was used as the house of a foreign mining engineer and is located at Ani town. Ambient vibration measurements were performed on both buildings to estimate their dynamic characteristics. A target period of vibration of 3 seconds is then selected for the isolated systems in order to estimate the required stiffness of the isolation devices. For the wooden structure, which is a light construction, it was found that natural rubber isolators in combination with friction bearings are suitable for seismic isolation. In the case of the masonry building, elastomeric isolators can be used for its seismic isolation. Lumped mass systems are used for the seismic response analysis, and it is verified in both cases that seismic isolation can be used as a retrofitting method for historical construction. However, in the case of the light building, most of the weight corresponds to the reinforced concrete slab that is required to install the isolation devices.
Keywords: Historical building, finite element method, masonry structure, seismic isolation, wooden structure.
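A small worked example of the stiffness estimate implied by the 3-second target period, using the single-degree-of-freedom relation T = 2*pi*sqrt(m/k); the superstructure masses are hypothetical and not taken from the paper.

```python
import math

def required_stiffness(mass_kg, target_period_s):
    """k = 4 * pi^2 * m / T^2, from T = 2*pi*sqrt(m/k) for a lumped-mass system."""
    return 4 * math.pi ** 2 * mass_kg / target_period_s ** 2

# Hypothetical superstructure masses (not from the paper): a light wooden hall
# and a heavier brick masonry house, both isolated to a 3 s target period.
for name, mass in [("wooden hall", 40_000.0), ("masonry house", 250_000.0)]:
    k = required_stiffness(mass, 3.0)
    print(f"{name}: total isolator stiffness ~ {k / 1000:.0f} kN/m")
```

The contrast between the two stiffness values illustrates why the light wooden building needs low-stiffness devices (natural rubber plus friction bearings) while the heavier masonry building can use conventional elastomeric isolators.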
133 Performance Analysis of MIMO Based Multi-User Cooperation Diversity Over Various Fading Channels
Authors: Zuhaib Ashfaq Khan, Imran Khan, Nandana Rajatheva
Abstract:
In this paper, a hybrid FDMA-TDMA access technique is analyzed in a cooperative distributed fashion by introducing and implementing a modified version of the protocol in [1], termed the Power and Cooperation Diversity Gain Protocol (PCDGP). The wireless network consists of two user terminals, two relays and a destination terminal equipped with two antennas. The relays operate in amplify-and-forward (AF) mode with a fixed gain. Two operating modes, a cooperation-gain mode and a power-gain mode, are exploited from the source terminals to the relays, since the system works with a best channel selection scheme. Vertical BLAST (Bell Laboratories Layered Space Time), or V-BLAST, with minimum mean square error (MMSE) nulling is used at the relays to detect the joint signals from the multiple source terminals. The performance is analyzed using the binary phase shift keying (BPSK) modulation scheme and investigated over independent and identically distributed (i.i.d.) Rayleigh, Ricean-K and Nakagami-m fading environments. Simulation results show that the proposed scheme can provide better uplink signal quality for users in a cooperative communication system using the hybrid FDMA-TDMA technique.
Keywords: Cooperation diversity, Best channel selection scheme, MIMO relay networks, V-BLAST, QR decomposition, MMSE.
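A minimal sketch of the MMSE nulling step applied at a relay for two BPSK users over a Rayleigh channel; full V-BLAST additionally performs ordered successive interference cancellation, which is omitted here, and the noise level is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_rx, sigma2 = 2, 2, 0.1           # two source terminals, two receive antennas

H = (rng.standard_normal((n_rx, n_tx)) +
     1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)   # Rayleigh channel
bits = rng.integers(0, 2, n_tx)
x = 2.0 * bits - 1.0                      # BPSK symbols from the two users
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(n_rx) +
                               1j * rng.standard_normal(n_rx))
y = H @ x + noise

# MMSE nulling matrix: W = (H^H H + sigma^2 I)^-1 H^H
W = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(n_tx)) @ H.conj().T
x_hat = W @ y
detected = (x_hat.real > 0).astype(int)
print("sent:", bits, "detected:", detected)
```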
132 A Study of Dose Distribution and Image Quality under an Automatic Tube Current Modulation (ATCM) System for a Toshiba Aquilion 64 CT Scanner Using a New Design of Phantom
Authors: S. Sookpeng, C. J. Martin, D. J. Gentle
Abstract:
Automatic tube current modulation (ATCM) systems are available from all CT manufacturers and are used for the majority of patients. Understanding how the systems work and their influence on patient dose and image quality is important for CT users, in order to gain the most effective use of the systems. In the present study, a new phantom was used for evaluating dose distribution and image quality under ATCM operation for the Toshiba Aquilion 64 CT scanner, using different ATCM options and a fixed mAs technique. A routine chest, abdomen and pelvis (CAP) protocol was selected for the study, and Gafchromic film was used to measure the entrance surface dose (ESD), peripheral dose and central axis dose in the phantom. The results show the dose reductions achievable with various ATCM options, in relation to the target noise. The dose and image noise distributions were more uniform when the ATCM system was used than with the fixed mAs technique. The lower limit set for the tube current affects the modulation, especially for the lower dose options; this limit prevented the tube current from being reduced further, and therefore the lower dose ATCM setting resembled a fixed mAs technique. Selection of a lower tube current limit is likely to reduce doses for smaller patients in scans of the chest and neck regions.
Keywords: Computed Tomography (CT), Automatic Tube Current Modulation (ATCM), Automatic Exposure Control (AEC).
131 Investigation of Flame and Soot Propagation in Non-Air Conditioned Railway Locomotives
Authors: Abhishek Agarwal, Manoj Sarda, Juhi Kaushik, Vatsal Sanjay, Arup Kumar Das
Abstract:
The propagation of fire through a non-air conditioned railway compartment is studied by means of numerical simulations. The coupled computational fire dynamics equations, namely Navier-Stokes, lumped species continuity, overall mass and energy conservation, and heat transfer, are solved using the finite volume based (for radiation) and finite difference based (for all other equations) solver Fire Dynamics Simulator (FDS). A single coupe with an eight-berth occupancy is used to establish the numerical model, followed by the selection of a three-coupe system as the fundamental unit of the locomotive compartment. The Heat Release Rate Per Unit Area (HRRPUA) of the initial fire is varied to cover a wide range of compartment fires. Parameters such as the air inlet velocity relative to the locomotive at the windows, the level of interaction with the ambiance, and the closure of the middle berth are studied through a wide range of numerical simulations. Almost all loss of life and property due to a fire breakout can be attributed to direct or indirect exposure to flames or to the inhalation of toxic gases and the resulting suffocation due to smoke and soot. Therefore, the temporal evolution of fire and smoke is reported for each of the considered cases, which can be used in the present or an extended form to develop guidelines to be followed in case of a fire breakout.
Keywords: Fire dynamics, flame propagation, locomotive fire, soot flow pattern.
130 Game Theory Based Diligent Energy Utilization Algorithm for Routing in Wireless Sensor Network
Authors: X. Mercilin Raajini, R. Raja Kumar, P. Indumathi, V. Praveen
Abstract:
Many cluster based routing protocols have been proposed in the field of wireless sensor networks, in which a group of nodes is formed into clusters. A cluster head is selected from among those nodes based on residual energy, coverage area and number of hops; that cluster head performs data gathering from the various sensor nodes and forwards the aggregated data to the base station or to a relay node (another cluster head), which forwards the packet, along with its own data packet, to the base station. Here, a Game Theory based Diligent Energy Utilization Algorithm (GTDEA) for routing is proposed. In GTDEA, cluster head selection is done with the help of game theory, a decision making process, which selects a cluster head based on three parameters: residual energy (RE), Received Signal Strength Index (RSSI) and Packet Reception Rate (PRR). Finding a feasible path to the destination with minimum utilization of the available energy improves the network lifetime, and this is achieved by the proposed approach. In GTDEA, packets are forwarded to the base station using an inter-cluster routing technique. Simulation results reveal that GTDEA improves the network performance in terms of throughput, lifetime, and power consumption.
Keywords: Cluster head, Energy utilization, Game Theory, LEACH, Sensor network.
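The abstract names the three selection parameters but not the actual game-theoretic payoff, so the following is only a simplified utility-style scoring over RE, RSSI and PRR with assumed weights and toy node readings, not the GTDEA election itself.

```python
# Candidate nodes with residual energy (J), RSSI (dBm) and packet reception rate.
nodes = {
    "n1": {"re": 1.8, "rssi": -62.0, "prr": 0.95},
    "n2": {"re": 2.4, "rssi": -71.0, "prr": 0.90},
    "n3": {"re": 1.1, "rssi": -55.0, "prr": 0.98},
}
weights = {"re": 0.5, "rssi": 0.2, "prr": 0.3}   # assumed utility weights

def normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 1.0 for v in values]

ids = list(nodes)
norm = {p: dict(zip(ids, normalize([nodes[i][p] for i in ids])))
        for p in ("re", "rssi", "prr")}

def utility(i):
    # Higher residual energy, stronger (less negative) RSSI and higher PRR
    # all increase a node's payoff for becoming cluster head.
    return sum(weights[p] * norm[p][i] for p in weights)

cluster_head = max(ids, key=utility)
print("elected cluster head:", cluster_head)
```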
129 Privacy Protection Principles of Omnichannel Approach
Authors: Renata Mekovec, Dijana Peras, Ruben Picek
Abstract:
The advent of the Internet, mobile devices and social media is revolutionizing the experience of retail customers by linking multiple sources through various channels. Omnichannel retailing combines multiple channels to allow customers to seamlessly leverage all the distribution information online and offline while shopping. Therefore, today data are an asset more critical than ever for all organizations. Nonetheless, because of their heterogeneity across platforms, developers currently face difficulties in dealing with personal data. Considering the possibilities of omnichannel communication, this paper presents a channel categorization that could enhance the customer experience of an omnichannel center, called a hyper center. The purpose of this paper is fundamentally to describe the connection between the omnichannel hyper center and the customer, with particular attention to privacy protection. The first phase was finding the most appropriate channels of communication for the hyper center. Consequently, a selection of widely used communication channels was identified and analyzed with regard to the requirements for optimizing the user experience. The evaluation criteria are divided into three groups: general, user profile and channel options. For each criterion, the weight of importance for omnichannel communication was defined. The most important consideration was how the hyper center can identify users while respecting the privacy protection requirements. The study also shows what the customer experience across digital networks would look like, based on an omnichannel approach following privacy protection principles.
Keywords: Personal data, privacy protection, omnichannel communication, retail.
128 Applying Theory of Inventive Problem Solving to Develop Innovative Solutions: A Case Study
Authors: Y. H. Wang, C. C. Hsieh
Abstract:
Good service design can increase organizational revenue and consumer satisfaction while reducing labor and time costs. The problems facing consumers in the original service model of the eyewear and optical industry include the following issues: 1. insufficient information on eyewear products; 2. passive dependence on recommendations and insufficient selection; 3. incomplete records on the progression of vision conditions; 4. lack of complete customer records. This study investigates the case of Kobayashi Optical, applying the Theory of Inventive Problem Solving (TRIZ) to develop innovative solutions for the eyewear and optical industry. The analysis results lead to the following conclusions and management implications: in order to provide customers with improved professional information and recommendations, Kobayashi Optical should establish customer purchasing records. Overall service efficiency can be enhanced by applying data mining techniques to analyze past consumer preferences and purchase histories. Furthermore, Kobayashi Optical should continue to develop a 3D virtual trial service which allows customers to easily browse different frame styles and colors. This 3D virtual trial service will save customer waiting time during peak service periods at stores.
Keywords: Theory of inventive problem solving, service design, augmented reality, eyewear and optical industry.
127 An Overview of Islanding Detection Methods in Photovoltaic Systems
Authors: Wei Yee Teoh, Chee Wei Tan
Abstract:
The issue of unintentional islanding in PV grid interconnection still remains a challenge in grid-connected photovoltaic (PV) systems. This paper provides an overview of commonly used anti-islanding detection methods practically applied in PV grid-connected systems. Anti-islanding methods can generally be classified into four major groups: passive methods, active methods, hybrid methods and communication-based methods. Active methods have been the preferred detection technique over the years due to the very small non-detection zone (NDZ) in small scale distributed generation. Passive methods are comparatively simpler than active methods in terms of circuitry and operation; however, they suffer from a large NDZ that significantly reduces their performance. Communication-based methods inherit the advantages of active and passive methods with reduced drawbacks. The hybrid method, which evolved from the combination of both active and passive methods, has been proven by many researchers to achieve accurate anti-islanding detection. For each of the studied anti-islanding methods, the operation is analyzed while the advantages and disadvantages are compared and discussed. It is difficult to pinpoint a generic method for a specific application, because most of the methods discussed are governed by the nature of the application and system dependent elements. This study concludes that setup and operating cost is the vital factor in anti-islanding method selection, in order to achieve a minimal compromise between cost and system quality.
Keywords: Active method, hybrid method, islanding detection, passive method, photovoltaic (PV), utility method.
126 Heat and Mass Transfer Modelling of Industrial Sludge Drying at Different Pressures and Temperatures
Authors: L. Al Ahmad, C. Latrille, D. Hainos, D. Blanc, M. Clausse
Abstract:
A two-dimensional finite volume axisymmetric model is developed to predict the simultaneous heat and mass transfer during the drying of industrial sludge. The simulations were run using COMSOL Multiphysics 3.5a, and the input parameters of the numerical model were acquired from preliminary experimental work. The results allow correlations to be established describing the evolution of the various parameters as a function of the drying temperature and the sludge water content. The selection and coupling of the equations are validated based on the drying kinetics acquired experimentally over a temperature range of 45-65 °C and an absolute pressure range of 200-1000 mbar. The model, incorporating the heat and mass transfer mechanisms at different operating conditions, yields simulated values of temperature and water content. The simulated results are found to be concordant with the experimental values, but only in the first and last drying stages, where sludge shrinkage is insignificant. Simulated and experimental results show that sludge drying is favored at high temperature and low pressure. As observed experimentally, the drying time is reduced by 68% for drying at 65 °C compared to 45 °C under 1 atm. At 65 °C, a 200-mbar absolute pressure vacuum leads to an additional reduction in drying time estimated at 61%. However, the drying rate is underestimated in the intermediate stage. This underestimation could be improved in the model by considering the shrinkage phenomena that occur during sludge drying.
Keywords: Industrial sludge drying, heat transfer, mass transfer, mathematical modelling.
125 Fracture Control of the Soda-Lime Glass in Laser Thermal Cleavage
Authors: Jehnming Lin
Abstract:
The effects of a contact ball-lens on soda-lime glass in laser thermal cleavage with a cw Nd-YAG laser were investigated in this study. A contact ball-lens was adopted to apply a bending force controlling crack formation in the soda-lime glass during the laser cutting process. The Nd-YAG laser beam (wavelength of 1064 nm) was focused through the ball-lens and transmitted to the soda-lime glass, which was coated with a carbon film on the surface; the bending force from the ball-lens generates a tensile stress state that drives surface cracking. The fracture was controlled by the contact ball-lens, and straight cutting was tested to demonstrate the feasibility. Experimental observations of the crack propagation at the leading edge, main section and trailing edge of the glass sheet were compared under various mechanical and thermal loadings. Further analyses of the stress under various laser powers and contact ball loadings were made to characterize the innovative technology. The results show that the distribution of the side cracks at the leading and trailing edges depends mainly on the boundary condition, contact force, cutting speed and laser power. With increasing mechanical and thermal loading, the region of side cracks can be dramatically reduced with proper selection of the geometrical constraints. Therefore, the application of the contact ball-lens is a possible way to control the fracture in laser cleavage with improved cutting quality.
Keywords: Laser cleavage, controlled fracture, contact ball lens.
124 Correlates of Peer Influence and Resistance to HIV/AIDS Counselling and Testing among Students in Tertiary Institutions in Kano State, Nigeria
Authors: A. S. Haruna, M. U. Tambawal, A. A. Salawu
Abstract:
The psychological impact of peer influence on individual group members can make them resist HIV/AIDS counselling and testing. This study investigated the correlates of peer influence and resistance to HIV/AIDS counselling and testing among students in tertiary institutions in Kano state, Nigeria. To achieve this, three null hypotheses were postulated and tested. A cross-sectional survey design was employed, in which a sample of 1512 was selected from a student population of 104,841 using simple random sampling. A self-developed 20-item scale called the Peer Influence and Psychological Resistance Inventory (PIPRI) was used for data collection. The Pearson Product Moment Correlation (PPMCC), via the test-retest method, was applied to estimate a reliability coefficient of 0.86 for the scale. The data obtained were analyzed using the t-test and PPMCC at the 0.05 level of confidence. Results reveal that 26.3% (397) of the respondents were influenced by their peer group, while 39.8% showed resistance. Also, the t-test and PPMCC statistics were greater than their respective critical values, showing a significant gender difference in peer influence and a difference between peer influence and resistance to HIV/AIDS counselling and testing. However, a positive relationship between peer influence and resistance to HIV/AIDS counselling and testing was found. A major recommendation is the use of reinforcement and social support for positive attitudes and the maintenance of safe behaviour among students who patronize HIV/AIDS counselling.
Keywords: Peer influence, HIV/AIDS counselling and testing, Resistance.
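As a small sketch of the two statistical tests named in the abstract (PPMCC and the t-test at the 0.05 level), using illustrative generated scores rather than the study's data; scipy is assumed available.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Illustrative scores (not the study's data): peer-influence and resistance
# scores from a 20-item scale for a small hypothetical subsample.
peer_influence = rng.normal(50, 10, 120)
resistance = 0.4 * peer_influence + rng.normal(0, 8, 120)

r, p_r = stats.pearsonr(peer_influence, resistance)   # PPMCC
print(f"PPMCC r = {r:.2f}, p = {p_r:.4f}")

# Gender difference in peer influence: independent-samples t-test at alpha = 0.05.
male, female = peer_influence[:60], peer_influence[60:]
t, p_t = stats.ttest_ind(male, female)
print(f"t = {t:.2f}, p = {p_t:.4f}, significant: {p_t < 0.05}")
```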
123 A Spanning Tree for Enhanced Cluster Based Routing in Wireless Sensor Network
Authors: M. Saravanan, M. Madheswaran
Abstract:
Wireless Sensor Network (WSN) clustering architecture enables features such as network scalability, communication overhead reduction, and fault tolerance. After clustering, aggregated data is transferred to the data sink, reducing unnecessary, redundant data transfer. This reduces the number of transmitting nodes and thus saves energy, while also allowing scalability to many nodes, reducing communication overhead, and allowing efficient use of WSN resources. Clustering based routing methods manage network energy consumption efficiently. Building spanning trees for data collection rooted at a sink node is a fundamental data aggregation method in sensor networks. Determining the optimal number of Cluster Heads (CHs) is an NP-hard problem. In this paper, we combine cluster based routing features for cluster formation and CH selection, and use a Minimum Spanning Tree (MST) for intra-cluster communication. The proposed method is based on optimizing the MST using Simulated Annealing (SA). In this work, normalized values of mobility, delay, and remaining energy are considered for finding the optimal MST. Simulation results demonstrate the effectiveness of the proposed method in improving the packet delivery ratio and reducing the end-to-end delay.
Keywords: Wireless sensor network, clustering, minimum spanning tree, genetic algorithm, low energy adaptive clustering hierarchy, simulated annealing.
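A minimal sketch of the MST construction over edge costs combining the three normalized metrics the abstract names; the toy links, metric weights and cost formula are assumptions, and the simulated-annealing refinement is only indicated in a comment rather than implemented.

```python
import networkx as nx

# Toy intra-cluster links with normalized mobility, delay and remaining energy.
links = [
    ("a", "b", 0.2, 0.1, 0.9), ("a", "c", 0.6, 0.4, 0.5),
    ("b", "c", 0.3, 0.2, 0.8), ("b", "d", 0.7, 0.5, 0.4),
    ("c", "d", 0.1, 0.3, 0.7),
]
alpha, beta, gamma = 0.4, 0.3, 0.3       # assumed weighting of the three metrics

G = nx.Graph()
for u, v, mobility, delay, energy in links:
    # Lower mobility and delay, higher remaining energy -> lower edge cost.
    cost = alpha * mobility + beta * delay + gamma * (1.0 - energy)
    G.add_edge(u, v, weight=cost)

mst = nx.minimum_spanning_tree(G, weight="weight")
print(sorted(mst.edges(data="weight")))
# A simulated-annealing loop, as in the paper, would perturb the solution (e.g. the
# metric weights or the tree) and accept worse candidates with a temperature-dependent
# probability to escape local optima.
```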
122 A Metric-Set and Model Suggestion for Better Software Project Cost Estimation
Authors: Murat Ayyıldız, Oya Kalıpsız, Sırma Yavuz
Abstract:
Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure, and you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. The metric-set selection has a vital role in software cost estimation studies, yet its importance has been ignored, especially in neural network based studies. In this study, we explore the reasons for those disappointing results and implement different neural network models using an augmented set of new metrics. The results obtained are compared with previous studies using traditional metrics. To be able to make comparisons, two types of data have been used. The first part of the data is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part was collected according to the new metrics in a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on a Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is the fact that data collection requires time and care. To make more thorough use of the samples collected, the k-fold cross-validation method is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied in software cost estimation studies with success.
Keywords: Software Metrics, Software Cost Estimation, Neural Network.
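A small sketch of the MLP-with-k-fold-cross-validation setup the abstract describes, on synthetic COCOMO'81-style records rather than the study's data; the feature layout, effort relation and MMRE error measure are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for COCOMO'81-style records: rows are projects,
# columns are size (KLOC) and cost-driver metrics; y is effort (person-months).
X = rng.uniform(0, 1, (63, 10))
y = 3.0 * X[:, 0] * 100 + 10 * X[:, 1:].sum(axis=1) + rng.normal(0, 5, 63)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0))

mmre = []                                  # mean magnitude of relative error
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    mmre.append(np.mean(np.abs(pred - y[test_idx]) / np.abs(y[test_idx])))
print("MMRE per fold:", np.round(mmre, 3))
```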
121 On Methodologies for Analysing Sickness Absence Data: An Insight into a New Method
Authors: Xiaoshu Lu, Päivi Leino-Arjas, Kustaa Piha, Akseli Aittomäki, Peppiina Saastamoinen, Ossi Rahkonen, Eero Lahelma
Abstract:
Sickness absence represents a major economic and social issue. The analysis of sick leave data is a recurrent challenge for analysts because of the complexity of the data structure, which is often time dependent, highly skewed and clumped at zero. Ignoring these features when making statistical inference is likely to be inefficient and misguided, and traditional approaches do not address these problems. In this study, we discuss modelling methodologies in terms of statistical techniques for addressing the difficulties with sick leave data. We also introduce and demonstrate a new method by performing a longitudinal assessment of long-term absenteeism, using a large registration dataset as a working example, available from the Helsinki Health Study for municipal employees in Finland during the period 1990-1999. We present a comparative study on model selection and a critical analysis of the temporal trends and the occurrence and degree of long-term sickness absences among municipal employees. The strengths of this working example include the large sample size over a long follow-up period, providing strong evidence in support of the new model. Our main goal is to propose a way to select an appropriate model, to introduce a new methodology for analysing sickness absence data, and to demonstrate the model's applicability to complicated longitudinal data.
Keywords: Sickness absence, longitudinal data, methodologies, mix-distribution model.
120 Analysis of Brain Activities due to Differences in Running Shoe Properties
Authors: K. Okubo, Y. Kurihara, T. Kaburagi, K. Watanabe
Abstract:
Many members of the ever-growing elderly population require exercise, such as running, for health management. One important element of a runner's training is the choice of shoes; shoes are important because they provide the interface between the feet and the road. When we purchase shoes, we may instinctively choose a pair after trying on many different pairs. Selecting shoes instinctively may work, but it does not guarantee a suitable fit for running activities. Therefore, if we could select suitable shoes for each runner from the viewpoint of brain activities, it would be helpful for validating the shoe selection. In this paper, we describe how brain activities show different characteristics during a particular task, corresponding to different properties of shoes. Using five subjects, we performed a verification experiment, considering weight, softness, and flexibility as shoe properties. In order to expose the brain to the differences in shoe properties, the subjects ran for 10 min. Before and after running, the subjects conducted a paced auditory serial addition task (PASAT) as the particular task, and their brain activities during the PASAT were evaluated based on the relative concentration changes of oxyhemoglobin and deoxyhemoglobin, measured by near-infrared spectroscopy (NIRS). When the brain works actively, the oxyhemoglobin and deoxyhemoglobin concentrations change drastically; therefore, we calculate the maximum values of the concentration changes. In order to normalize the relative concentration changes after running, the maximum values after running are divided by the corresponding maximum values before running, giving the evaluation parameters. The classification of the groups of shoes is expressed on a self-organizing map (SOM). As a result, deoxyhemoglobin forms clusters for two of the three types of shoes.
Keywords: Brain activities, NIRS, PASAT, running shoes.
119 Optimized Brain Computer Interface System for Unspoken Speech Recognition: Role of Wernicke Area
Authors: Nassib Abdallah, Pierre Chauvet, Abd El Salam Hajjar, Bassam Daya
Abstract:
In this paper, we propose an optimized brain computer interface (BCI) system for unspoken speech recognition, based on the fact that the construction of unspoken words relies strongly on the Wernicke area, situated in the temporal lobe. Our BCI system has four modules: (i) the EEG acquisition module, based on a non-invasive headset with 14 electrodes; (ii) the preprocessing module to remove noise and artifacts, using the Common Average Reference method; (iii) the feature extraction module, using the Wavelet Packet Transform (WPT); (iv) the classification module, based on a one-hidden-layer artificial neural network. The present study consists of comparing the recognition accuracy of 5 Arabic words when using all the headset electrodes or only the 4 electrodes situated near the Wernicke area, as well as the effect of selecting the subbands produced by the WPT module. After applying the artificial neural network to the produced database, we obtain, on the test dataset, an accuracy of 83.4% with all the electrodes and all the subbands of the 8-level WPT decomposition. However, by using only the 4 electrodes near the Wernicke area and the 6 middle subbands of the WPT, we obtain a large reduction of the dataset size, to approximately 19% of the total dataset, with an accuracy rate of 67.5%. This reduction is particularly important for improving the design of a low-cost and simple-to-use BCI trained for several words.
Keywords: Brain-computer interface, speech recognition, electroencephalography EEG, Wernicke area, artificial neural network.
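A loose sketch of the WPT-feature plus one-hidden-layer network pipeline on synthetic single-channel epochs; the decomposition level (4 instead of the paper's 8), the energy features, the sub-band slice and the labels are all simplifying assumptions, with PyWavelets and scikit-learn assumed available.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_trials, n_samples = 100, 1024          # synthetic EEG epochs, one channel
X_raw = rng.standard_normal((n_trials, n_samples))
y = rng.integers(0, 5, n_trials)         # 5 unspoken words as class labels

def wpt_features(signal, level=4, keep=slice(None)):
    """Energy of each wavelet-packet node at the given level; `keep` selects
    a subset of sub-bands (e.g. the middle ones, as in the paper's idea)."""
    wp = pywt.WaveletPacket(signal, wavelet="db4", maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(n.data ** 2) for n in nodes])
    return energies[keep]

# Keep only the middle sub-bands, mimicking the paper's sub-band selection idea.
X = np.array([wpt_features(x, level=4, keep=slice(4, 12)) for x in X_raw])

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
clf.fit(X[:70], y[:70])
print("test accuracy:", clf.score(X[70:], y[70:]))
```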
118 A Static Android Malware Detection Based on Actual Used Permissions Combination and API Calls
Authors: Xiaoqing Wang, Junfeng Wang, Xiaolan Zhu
Abstract:
The Android operating system has been embraced by most application developers because of its openness and compatibility, which greatly enrich the variety of applications. However, it has become the target of malware attackers due to the lack of strict security supervision mechanisms, leading to the rapid growth of malware and bringing serious safety hazards to users. It is therefore critical to detect Android malware effectively. Generally, the permissions declared in AndroidManifest.xml can reflect the functions and behavior of an application to a large extent. Since the current Android system places no restrictions on the number of permissions an application can request, developers tend to request more permissions than are actually needed in order to ensure the successful running of the application, which results in the abuse of permissions. However, some traditional detection methods only consider the requested permissions and ignore whether they are actually used, which leads to the incorrect identification of some malware. Therefore, this paper puts forward a machine learning detection method based on the combination of actually used permissions and API calls. Several experiments are conducted to evaluate the methodology. The results show that it can detect unknown malware effectively, with a higher true positive rate and accuracy, while maintaining a low false positive rate. The AdaboostM1 (J48) classification algorithm, based on the information gain feature selection algorithm, gives the best detection result, achieving an accuracy of 99.8%, a true positive rate of 99.6% and a false positive rate of 0.
Keywords: Android, permissions combination, API calls, machine learning.
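A minimal sketch of the best-performing combination named in the abstract, approximated with scikit-learn: an information-gain-style filter (mutual information used as a stand-in) feeding AdaBoost over shallow decision trees (a J48-like base learner). The synthetic permission/API features, the number of retained features and the tree depth are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: binary features marking actually-used permissions and
# sensitive API calls per app; y = 1 for malware, 0 for benign.
X, y = make_classification(n_samples=1000, n_features=60, n_informative=15,
                           random_state=0)
X = (X > 0).astype(int)                   # binarize to mimic presence/absence

model = make_pipeline(
    SelectKBest(mutual_info_classif, k=25),                  # information-gain-style
    AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),  # J48-like base learner
                       n_estimators=100, random_state=0),
)
print("mean CV accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))
```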
117 Consumer Behavior and Knowledge on Organic Products in Thailand
Authors: Warunpun Kongsom, Chaiwat Kongsom
Abstract:
The objective of this study was to investigate the awareness, knowledge and consumer behavior regarding organic products in Thailand. A purposive sampling technique was used to identify a sample group of 2,575 consumers over the age of 20 years who intended to or did make purchases from 1) green shops; 2) supermarkets with branches; and 3) green markets. A questionnaire was used for data collection across the country, and descriptive statistics were used for data analysis. The results showed that more than 92% of consumers were aware of organic agriculture but had little knowledge about it. More than 60% of consumers knew that organic agriculture production and processing do not allow the use of chemicals. About 40% of consumers confused the food safety logo with the certified organic logo and were unsure whether GMOs are allowed in organic agricultural practice. In addition, most consumers perceived that organic agricultural products, good agricultural practice (GAP) products, agricultural-chemical-free products, and hydroponic vegetable products share the same standard. In the view of organic consumers, the Organic Thailand label was the most seen and most reliable among the various organic labels; less than 3% of consumers considered the International Federation of Organic Agriculture Movements (IFOAM) Global Organic Mark (GOM) the most seen and reliable. Regarding the behavior of organic consumers, they purchased organic products mainly at supermarkets and green shops (55.4%), one to two times per month, with a total expenditure of about 200 to 400 baht each time. The main reason for buying organic products was safety and freedom from agricultural chemicals. The factors considered in organic product selection were price (29.5%), convenience (22.4%), and a reliable certification system (21.3%). The products in demand were mainly organic rice, vegetables and fruits. Processed organic products were purchased in relatively small quantities.
Keywords: Consumer behavior, consumer knowledge, organic products, Thailand.
116 A Comprehensive Evaluation of Supervised Machine Learning for the Phase Identification Problem
Authors: Brandon Foggo, Nanpeng Yu
Abstract:
Power distribution circuits undergo frequent network topology changes that are often left undocumented. As a result, the documentation of a circuit's connectivity becomes inaccurate with time. The lack of reliable circuit connectivity information is one of the biggest obstacles to modeling, monitoring, and controlling modern distribution systems. To enhance the reliability and efficiency of electric power distribution systems, the circuit's connectivity information must be updated periodically. This paper focuses on one critical component of a distribution circuit's topology: the secondary transformer to phase association. This topology component describes the set of phase lines that feed power to a given secondary transformer (and therefore a given group of power consumers). Determining the documentation of this component is called Phase Identification, and it is typically performed with physical measurements. These measurements can take on the order of several months, but with supervised learning the time required can be reduced significantly. This paper compares several such methods applied to Phase Identification for a large range of real distribution circuits, describes a method of training data selection, describes preprocessing steps unique to the Phase Identification problem, and ultimately describes a method which obtains high accuracy (> 96% in most cases, > 92% in the worst case) using only 5% of the measurements typically used for Phase Identification.
Keywords: Distribution network, machine learning, network topology, phase identification, smart grid.