Search results for: machine side converter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1928

1058 Biometric Technology in Securing the Internet Using Large Neural Network Technology

Authors: B. Akhmetov, A. Doszhanova, A. Ivanov, T. Kartbayev, A. Malygin

Abstract:

The article examines methods of protecting citizens' personal data on the Internet using biometric identity authentication technology. The potential danger of such systems, arising from the threat of losing the underlying biometric templates, is highlighted. To eliminate the threat of compromised biometric templates, the use of large and extra-large neural networks is proposed, which on the one hand authenticate a person by his or her biometrics with high reliability, and on the other hand make the person's biometrics unavailable for observation and analysis. The article also describes in detail the transformation of personal biometric data into an access code. Requirements are formulated for the biometrics-to-code converter with respect to its behaviour on images of the "Insider", a "Stranger", and all "Strangers". The effect of neural network dimensionality on the quality of the converters of biometric secrets into access codes is analysed.

Keywords: Biometric security technologies, conversion of personal biometric data into an access code, electronic signature, large neural networks, quality of "biometrics-to-code" converters, e-government.

1057 Decision Tree Based Scheduling for Flexible Job Shops with Multiple Process Plans

Authors: H.-H. Doh, J.-M. Yu, Y.-J. Kwon, J.-H. Shin, H.-W. Kim, S.-H. Nam, D.-H. Lee

Abstract:

This paper suggests a decision tree based approach for flexible job shop scheduling with multiple process plans, i.e. each job can be processed through alternative operations, each of which can be processed on alternative machines. The main decision variables are: (a) selecting an operation/machine pair; and (b) sequencing the jobs assigned to each machine. As an extension of the priority scheduling approach that selects the best priority rule combination after many simulation runs, this study suggests a decision tree based approach in which a decision tree is used to select a priority rule combination adequate for a specific system state, so that the burden of developing simulation models and carrying out simulation runs can be eliminated. The decision tree based scheduling approach consists of construction and scheduling modules. In the construction module, a decision tree is constructed using a four-stage algorithm, and in the scheduling module, a priority rule combination is selected using the decision tree. To show the performance of the decision tree based approach suggested in this study, a case study was done on a flexible job shop with reconfigurable manufacturing cells and a conventional job shop, and the results are reported by comparing the approach with individual priority rule combinations for the objectives of minimizing total flow time and total tardiness.
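
As a rough illustration of the run-time idea only (not the authors' four-stage construction algorithm), the sketch below trains a decision tree that maps a shop-floor state to a priority-rule combination; the feature names and rule labels are invented for the example.

```python
# Minimal sketch: a decision tree maps a system state to a priority-rule
# combination (features and labels are illustrative only).
from sklearn.tree import DecisionTreeClassifier

# Hypothetical system-state features: [utilisation, queue_length_ratio, due_date_tightness]
X = [
    [0.60, 0.2, 0.9],
    [0.85, 0.7, 0.4],
    [0.90, 0.9, 0.2],
    [0.55, 0.3, 0.8],
]
# Hypothetical best rule combination found offline for each state,
# e.g. "SPT+EDD" = shortest processing time for sequencing, earliest due date for tie-breaks.
y = ["SPT+EDD", "LPT+SLACK", "LPT+SLACK", "SPT+EDD"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# At run time, the scheduler queries the tree with the current shop state
# instead of running new simulation experiments.
current_state = [[0.80, 0.6, 0.5]]
print(tree.predict(current_state))  # -> predicted priority-rule combination
```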

Keywords: Flexible job shop scheduling, Decision tree, Priority rules, Case study.

1056 Influence of a Company’s Dynamic Capabilities on Its Innovation Capabilities

Authors: Lovorka Galetic, Zeljko Vukelic

Abstract:

The advanced concepts of strategic and innovation management in the sphere of company dynamic and innovation capabilities, and achieving their mutual alignment and a synergy effect, are important elements in business today. This paper analyses the theory and empirically investigates the influence of a company's dynamic capabilities on its innovation capabilities. A new multidimensional model of dynamic capabilities is presented, consisting of five factors appropriate to real-time requirements, while innovation capabilities are considered pursuant to the official OECD and Eurostat standards. After an examination of dynamic and innovation capabilities indicated their theoretical links, an empirical study testing the model and examining the influence of a company's dynamic capabilities on its innovation capabilities showed significant results. In the study, a research model was posed to relate company dynamic and innovation capabilities. One side of the model features the variables that are the determinants of dynamic capabilities defined through their factors, while the other side features the determinants of innovation capabilities pursuant to the official standards. With regard to the research model, five hypotheses were set. The study was performed in late 2014 on a representative sample of large and very large Croatian enterprises with a minimum of 250 employees. The research instrument was a questionnaire administered to company top management. For both variables, the position of the company was tested in comparison to industry competitors, on a five-point scale. In order to test the hypotheses, correlation tests were performed to determine whether there is a correlation between each individual factor of company dynamic capabilities and the existence of its innovation capabilities, in line with the research model. The results indicate a strong correlation between a company's possession of dynamic capabilities, as captured by the factors of the new multidimensional model presented in this paper, and its possession of innovation capabilities. Based on the results, all five hypotheses were accepted. Ultimately, it was concluded that there is a strong association between the dynamic and innovation capabilities of a company.

Keywords: Dynamic capabilities, innovation capabilities, competitive advantage, business results.

1055 Hybrid Pulse Width Modulation Techniques for the Reduction of Switching Losses and Voltage Harmonics in Cascaded Multilevel Inverters

Authors: Venkata Reddy Kota

Abstract:

These days, the industrial trend is moving away from heavy and bulky passive components to power converter systems that use more and more semiconductor elements. It is also difficult to connect traditional converters to high and medium voltage. For these reasons, a new family of multilevel inverters has appeared as a solution for working with higher voltage levels. Different modulation topologies such as Sinusoidal Pulse Width Modulation (SPWM) and Selective Harmonic Elimination Pulse Width Modulation (SHE-PWM) are available for multilevel inverters. In this work, different hybrid modulation techniques, which are combinations of fundamental frequency modulation and multilevel sinusoidal modulation, are compared. The main characteristics of these modulations are reduced switching losses, good harmonic performance, and balanced power loss dissipation among the devices. The proposed hybrid modulation schemes are developed and simulated in Matlab/Simulink for a cascaded H-bridge inverter. The results validate the applicability of the proposed schemes for cascaded multilevel inverters.
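
For orientation only, the following sketch shows plain level-shifted (phase-disposition) sinusoidal PWM for a five-level, two-cell cascaded H-bridge. It is a generic baseline, not the hybrid schemes compared in the paper, and all parameters are illustrative.

```python
# Minimal sketch of level-shifted sinusoidal PWM for a 5-level cascaded
# H-bridge (two cells); parameters are illustrative, not the paper's hybrid scheme.
import numpy as np

f_ref, f_carrier = 50.0, 2000.0                      # fundamental and carrier frequency (Hz)
t = np.linspace(0.0, 1.0 / f_ref, 4000, endpoint=False)
m = 0.9                                              # modulation index
ref = m * np.sin(2 * np.pi * f_ref * t)              # sinusoidal reference in [-1, 1]

# Four level-shifted triangular carriers covering [-1, 1] (phase disposition).
def triangle(t, f):
    return 2.0 * np.abs(2.0 * ((t * f) % 1.0) - 1.0) - 1.0

bands = [-1.0, -0.5, 0.0, 0.5]                       # lower edge of each carrier band
carriers = [lo + 0.25 * (triangle(t, f_carrier) + 1.0) for lo in bands]

# Output level = number of carriers the reference exceeds, mapped to -2..+2,
# i.e. the five voltage levels of the two-cell bridge (in units of one cell's DC link).
level = sum((ref > c).astype(int) for c in carriers) - 2
print(np.unique(level))                              # -> [-2 -1  0  1  2]
```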

Keywords: Hybrid PWM techniques, Cascaded Multilevel Inverters, Switching loss minimization.

1054 Optical Verification of an Ophthalmological Examination Apparatus Employing the Electroretinogram Function on Fundus-Related Perimetry

Authors: Naoto Suzuki

Abstract:

In Japan, the most common causes of eyesight loss are glaucoma, diabetic retinopathy, pigmentary retinal degeneration, and age-related macular degeneration. We developed an ophthalmological examination apparatus with a fundus camera, fundus-related perimetry (microperimetry), and electroretinogram (ERG) functions to diagnose a variety of diseases that cause eyesight loss. The experimental apparatus was constructed with the same optical system as a fundus camera. The microperimetry optical system was calculated and added to the experimental apparatus using the optical engineering software OpTaliX-LT 10.8 from the German company Optenso. We also added an Edmund infrared camera (EO-0413), a lens with a 25 mm focal length, a 45° cold mirror, a 12 V/50 W halogen lamp, and an 8-inch monitor. The artificial eye was made of a plano-convex lens, a black spacer, and a hemispherical cup. The hemispherical cup had a small piece of paper at the bottom. The artificial eye was photographed five times using the experimental apparatus. Software was created to display the examination target on the monitor and save examination data using C++Builder 10.2. The retinal fundus was displayed on the monitor with a resolution of 70.4 ± 4.1 and 74.7 ± 6.8 pixels per 1 mm of length and width, respectively. The microperimetry and ERG functions were successfully added to the experimental ophthalmological apparatus. A moving machine was developed to measure the artificial eye's movement. The rear part of the artificial eye was painted black, with white in the central area. It was rotated 10 degrees from one side to the other. The movement was captured five times as motion videos. Three static images were extracted from one of the captured motion videos. The images show the artificial eye facing the center, right, and left directions. The three images were processed using Scilab 6.1.0 and the Image Processing and Computer Vision Toolbox 4.1.2, including trimming, binarization, windowing, deletion of the peripheral area, and morphological operations. To calculate the center of the artificial eye's fundus, a gravity method was added to the program to calculate the gravity position of connected components. From the three images, the image processing could calculate the center position.
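
The gravity-method step can be illustrated with a few lines of NumPy. The paper's pipeline uses Scilab and its IPCV toolbox; this stand-in omits the trimming, windowing and morphological clean-up and uses an invented threshold and test image.

```python
# Minimal sketch of the 'gravity method' step: binarise a grayscale frame and
# compute the centre of gravity of the dark fundus region (NumPy stand-in for
# the Scilab/IPCV pipeline described above; threshold value is illustrative).
import numpy as np

def fundus_centre(gray_frame, threshold=80):
    """gray_frame: 2-D uint8 array; returns (row, col) centre of gravity."""
    mask = gray_frame < threshold          # binarisation: dark pixels belong to the fundus region
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None                        # nothing below threshold
    return rows.mean(), cols.mean()        # centre of gravity of the dark area

# Synthetic test frame: bright background with a dark disc centred at (60, 90).
yy, xx = np.mgrid[0:120, 0:160]
frame = np.full((120, 160), 200, dtype=np.uint8)
frame[(yy - 60) ** 2 + (xx - 90) ** 2 < 30 ** 2] = 30
print(fundus_centre(frame))                # -> approximately (60.0, 90.0)
```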

Keywords: Ophthalmological examination apparatus, microperimetry, electroretinogram, eye movement.

1053 Performance Analysis of Genetic Algorithm with kNN and SVM for Feature Selection in Tumor Classification

Authors: C. Gunavathi, K. Premalatha

Abstract:

Tumor classification is a key area of research in the field of bioinformatics. Microarray technology is commonly used in the study of disease diagnosis using gene expression levels. The main drawback of gene expression data is that it contains thousands of genes and very few samples. Feature selection methods are used to select the informative genes from the microarray, and these methods considerably improve the classification accuracy. In the proposed method, a Genetic Algorithm (GA) is used for effective feature selection. Informative genes are identified based on T-statistics, Signal-to-Noise Ratio (SNR) and F-test values. The initial candidate solutions of the GA are obtained from the top-m informative genes. The classification accuracy of the k-Nearest Neighbor (kNN) method is used as the fitness function for the GA. In this work, kNN and Support Vector Machine (SVM) are used as the classifiers. The experimental results show that the proposed work is suitable for effective feature selection. With the help of the selected genes, the GA-kNN method achieves 100% accuracy in 4 of the datasets and the GA-SVM method in 5 out of 10 datasets. GA with kNN and SVM is thus demonstrated to be an accurate approach for microarray based tumor classification.
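
A minimal sketch of the core loop follows, assuming synthetic data and illustrative GA settings; the paper additionally seeds the population from top-m genes ranked by T-statistics, SNR and F-test, which is omitted here.

```python
# Minimal sketch of GA-based gene selection with kNN accuracy as the fitness
# function (population size, rates and the synthetic data are illustrative).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 40))                     # 60 samples, 40 candidate genes
y = (X[:, 3] + X[:, 17] > 0).astype(int)          # only genes 3 and 17 are informative

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))   # binary chromosomes: 1 = gene selected
for generation in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]  # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])          # one-point crossover
        flip = rng.random(X.shape[1]) < 0.02                # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected genes:", np.flatnonzero(best), "accuracy:", round(fitness(best), 3))
```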

Keywords: F-Test, Gene Expression, Genetic Algorithm, k-Nearest Neighbor, Microarray, Signal-to-Noise Ratio, Support Vector Machine, T-statistics, Tumor Classification.

1052 Rapid Frequency Response Measurement of Power Conversion Products with Coherence-Based Confidence Analysis

Authors: Tomi Roinila, Aki Taskinen, Matti Vilkko

Abstract:

Switched-mode converters now play a significant role in modern society. Their operation is often crucial in various electrical applications affecting everyday life. Therefore, the quality of the converters needs to be reliably verified. Recent studies have shown that the converters can be fully characterized by a set of frequency responses which can be efficiently used to validate their proper operation. Consequently, several methods have been proposed to measure the frequency responses fast and accurately, most often based on correlation techniques. The presented measurement methods are highly sensitive to external errors and system nonlinearities. This fact has often been forgotten and the necessary uncertainty analysis of the measured responses has been neglected. This paper presents a simple approach to analyze the noise and nonlinearities in the frequency-response measurements of switched-mode converters. Coherence analysis is applied to form a confidence interval characterizing the noise and nonlinearities involved in the measurements. The presented method is verified by practical measurements from a high-frequency switched-mode converter.
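
A minimal sketch of the coherence idea, using SciPy's Welch-based estimators on a synthetic excitation/response pair; the low-pass filter standing in for the converter dynamics and the 0.9 coherence threshold are illustrative assumptions.

```python
# Minimal sketch: estimate a frequency response with Welch/CSD estimators and
# use the coherence as a pointwise confidence indicator (signals are synthetic).
import numpy as np
from scipy import signal

fs = 1e5
t = np.arange(0, 1.0, 1 / fs)
x = np.random.default_rng(1).normal(size=t.size)           # broadband excitation
b, a = signal.butter(2, 0.1)                                # stand-in "converter" dynamics
y = signal.lfilter(b, a, x) + 0.3 * np.random.default_rng(2).normal(size=t.size)

f, Pxx = signal.welch(x, fs=fs, nperseg=1024)
_, Pxy = signal.csd(x, y, fs=fs, nperseg=1024)
_, Cxy = signal.coherence(x, y, fs=fs, nperseg=1024)

H = Pxy / Pxx                                               # estimated frequency response
# Low coherence flags bands where noise/nonlinearity makes the estimate unreliable.
unreliable = f[Cxy < 0.9]
print("frequency bins with coherence < 0.9:", unreliable.size, "of", f.size)
```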

Keywords: Switched-mode converters, Frequency analysis, Coherence analysis.

1051 Isolation and Classification of Red Blood Cells in Anemic Microscopic Images

Authors: Jameela Ali Alkrimi, Loay E. George, Azizah Suliman, Abdul Rahim Ahmad, Karim Al-Jashamy

Abstract:

Red blood cells (RBCs) are among the most commonly and intensively studied types of blood cells in cell biology. Anemia, a lack of RBCs, is characterized by a hemoglobin level below normal. In this study, an image processing based methodology was developed to localize and extract RBCs from microscopic images, and a machine learning approach is adopted to classify the localized anemic RBC images. Several textural and geometrical features are calculated for each extracted RBC. The training set of features was analyzed using principal component analysis (PCA). With the proposed method, RBCs were isolated in 4.3 seconds from an image containing 18 to 27 cells. The reasons for using PCA are its low computational complexity and its suitability for finding the most discriminating features, which can lead to accurate classification decisions. Our classifier algorithms yielded accuracy rates of 100%, 99.99%, and 96.50% for the k-nearest neighbor (K-NN) algorithm, support vector machine (SVM), and radial basis function neural network (RBFNN), respectively. Classification was evaluated in terms of sensitivity, specificity, and the kappa statistic. In conclusion, the classification results were obtained within a short time period, and the results improved when PCA was used.
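
The classification stage can be sketched as a PCA-plus-kNN pipeline; the synthetic features below merely stand in for the textural and geometrical RBC descriptors described above, and the class count and component count are assumptions.

```python
# Minimal sketch of the classification stage: PCA on extracted feature vectors
# followed by a kNN classifier (synthetic stand-in features, not real RBC data).
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=25, n_informative=8,
                           n_classes=3, random_state=0)   # 3 hypothetical RBC classes

model = make_pipeline(PCA(n_components=8), KNeighborsClassifier(n_neighbors=5))
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))
```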

Keywords: Red blood cells, pre-processing image algorithms, classification algorithms, principal component analysis (PCA), confusion matrix, kappa statistical parameters, ROC.

1050 Fuzzy Logic Based Maximum Power Point Tracking Designed for 10kW Solar Photovoltaic System with Different Membership Functions

Authors: S. Karthika, K. Velayutham, P. Rathika, D. Devaraj

Abstract:

The electric power supplied by a photovoltaic power generation system depends on the solar irradiation and temperature. The PV system can supply the maximum power to the load at a particular operating point, generally called the maximum power point (MPP), at which the entire PV system operates with maximum efficiency and produces its maximum power. Hence, maximum power point tracking (MPPT) methods are used to maximize the PV array output power by continuously tracking the maximum power point. The proposed MPPT controller is designed for a 10 kW solar PV system installed at Cape Institute of Technology. This paper presents a fuzzy logic based MPPT algorithm. However, instead of one type of membership function, different structures of fuzzy membership functions are used in the FLC design. The proposed controller is combined with the system and the results are obtained for each membership function in the Matlab/Simulink environment. The simulation results are used to decide which membership function is most suitable for this system.
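
To make the fuzzy-MPPT idea concrete, here is a minimal sketch with triangular membership functions and a tiny heuristic rule base; the shapes, scaling and duty-cycle step are invented and do not reproduce the controller designed in the paper.

```python
# Minimal sketch: triangular membership functions and a tiny rule base adjusting
# the converter duty cycle from the change in PV power and voltage (shapes and
# rule table are illustrative, not the controller designed in the paper).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_mppt_step(dP, dV, d_prev, d_step=0.01):
    e = dP / (dV + 1e-9)                       # slope of the P-V curve
    neg = tri(e, -10.0, -5.0, 0.0)             # operating right of the MPP
    zero = tri(e, -1.0, 0.0, 1.0)              # at the MPP
    pos = tri(e, 0.0, 5.0, 10.0)               # operating left of the MPP
    # Weighted-average (centroid-like) defuzzification of the duty-cycle change.
    delta = (neg * (+d_step) + zero * 0.0 + pos * (-d_step)) / (neg + zero + pos + 1e-12)
    return np.clip(d_prev + delta, 0.05, 0.95)

print(fuzzy_mppt_step(dP=2.0, dV=0.5, d_prev=0.5))   # slope > 0 -> decrease duty (boost converter)
```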

Keywords: MPPT, DC-DC Converter, Fuzzy logic controller, Photovoltaic (PV) system.

1049 Expert Solutions to Affordable Housing Finance Challenges in Developing Economies

Authors: Timothy Akinwande, Eddie C. M. Hui

Abstract:

Housing the urban poor has remained a challenge for many years across the world, especially in developing economies, despite the apparent research attention and policy interventions. It is apt to investigate the prevalent affordable housing (AH) provision challenges using unconventional approaches. It is pragmatic to thoroughly examine housing experts to provide supply-side solutions to AH challenges and investigate informal settlers to deduce solutions from AH demand viewpoints. This study, being the supply-side investigation of an ongoing research project, interrogated housing experts to determine significant expert solutions. Focus group discussions and in-depth interviews were conducted with housing experts in Nigeria. Through descriptive, content, and systematic thematic analyses of data, the major finding is that deliberate finance models designed for the urban poor are the most significant housing finance solution in developing economies. Other findings are that adequately implemented rent control policies, deliberate Public-Private Partnership (PPP) approaches like inclusionary housing and land-value capture, and urban renewal programs to enlighten and tutor the urban poor on how to earn more, spend wisely, and invest in their own better housing will effectively solve AH finance challenges. The findings are informative for the best approaches to achieve effective AH finance for the urban poor in Nigeria, which is indispensable for the achievement of sustainable development goals. This research's originality lies in the exploration of experts' opinions in relation to AH finance to produce an equation model of critical solutions to AH finance challenges. The study data are useful resources for future pro-poor housing studies. This study makes housing policy-oriented recommendations toward effective AH for the urban poor in developing countries.

Keywords: Affordable housing, effective affordable housing, housing policy, housing research, sustainable development, urban poor.

1048 Computer-Based Systems for High Speed Vessels Navigators and Engineers Training

Authors: D. E. Gourgoulis, C. G. Yakinthos, M. G. Vassiliadou

Abstract:

With high speed vessels getting ever more sophisticated, travelling at higher and higher speeds and operating in areas of high maritime traffic density, training becomes of the highest priority to ensure that safety levels are maintained and risks are adequately mitigated. Training onboard the actual craft on the actual route still remains the most effective way for crews to gain experience. However, operational experience and incidents during the last 10 years demonstrate the need for supplementary training, whether in the area of simulation or in man-to-man and man-machine interaction. Training and familiarisation of the crew is the most important aspect in preventing incidents. The use of simulator, computer and web based training systems in conjunction with onboard training focusing on critical situations will improve the man-machine interaction and thereby reduce the risk of accidents. Today, both ship simulator and bridge teamwork courses are becoming the norm in order to further improve emergency response and crisis management skills. One of the main causes of accidents is the human factor. An efficient way to reduce human errors is to provide high-quality training to the personnel and to select the navigators carefully.

Keywords: CBT/WBT systems, Human factors.

1047 A TIPSO-SVM Expert System for Efficient Classification of TSTO Surrogates

Authors: Ali Sarosh, Dong Yun-Feng, Muhammad Umer

Abstract:

Fully reusable spaceplanes do not exist as yet. This implies that design qualification for an optimized, highly integrated forebody-inlet configuration of the booster-stage vehicle cannot be based on archival data of other spaceplanes. Therefore, this paper proposes a novel TIPSO-SVM expert system methodology. A non-trivial problem related to the optimization and classification of the hypersonic forebody-inlet configuration in conjunction with the mass-model of the two-stage-to-orbit (TSTO) vehicle is solved. The hybrid-heuristic machine learning methodology is based on the two-step improved particle swarm optimizer (TIPSO) algorithm and a two-step support vector machine (SVM) data classification method. The efficacy of the method is tested by first evolving an optimal configuration for the hypersonic compression system using the TIPSO algorithm and thereafter classifying the results using the two-step SVM method. In the first step, extensive but non-classified mass-model training data for multiple optimized configurations is segregated and pre-classified for learning of the SVM algorithm. In the second step, the TIPSO-optimized mass-model data is classified using the SVM classification. Results showed remarkable improvement in the configuration and mass-model along with the sizing parameters.

Keywords: TIPSO-SVM expert system, TIPSO algorithm, two-step SVM method, aerothermodynamics, mass-modeling, TSTO vehicle.

1046 Optimization of Wire EDM Parameters for Fabrication of Micro Channels

Authors: Gurinder Singh Brar, Sarbjeet Singh, Harry Garg

Abstract:

Wire Electric Discharge Machining (WEDM) is a thermal machining process capable of machining very hard electrically conductive materials irrespective of their hardness. WEDM is widely used to machine micro scale parts with high dimensional accuracy and surface finish. The objective of this paper is to optimize the process parameters of wire EDM to fabricate micro channels and to calculate the surface finish and material removal rate of the micro channels fabricated using wire EDM. The material used is aluminum 6061 alloy. The experiments were performed using a CNC wire cut electric discharge machine. The effect of various WEDM parameters, namely pulse on time (TON) with the levels (100, 150, 200), pulse off time (TOFF) with the levels (25, 35, 45) and current (IP) with the levels (105, 110, 115), was investigated to study the effect on the output parameters, i.e. surface roughness and Material Removal Rate (MRR). Each experiment was conducted under different conditions of pulse on time, pulse off time and peak current. For material removal rate, TON and IP were the most significant process parameters. MRR increases with the increase in TON and IP and decreases with the increase in TOFF. For surface roughness, TON and IP have the maximum effect, and TOFF was found to be less effective.

Keywords: Micro Channels, Wire Electric Discharge Machining (WEDM), Metal Removal Rate (MRR), Surface Finish.

1045 Multi-Stage Multi-Period Production Planning in Wire and Cable Industry

Authors: Mahnaz Hosseinzadeh, Shaghayegh Rezaee Amiri

Abstract:

This paper presents a methodology for the serial production planning problem in the wire and cable manufacturing process that addresses the problem of input-output imbalance in different consecutive stations, with the aim of minimizing the idle time of machines in each stage. To this end, a linear Goal Programming (GP) model is developed, in which four main categories of constraints are considered: the number of runs per machine, the machines' sequences, acceptable inventories of machines at the end of each period, and the necessity of fulfilling the customers' orders. The model is formulated based upon real data obtained from IKO TAK Company, an important supplier of wire and cable for the oil and gas and automotive industries in Iran. By solving the model in the GAMS software, the optimal number of runs, end-of-period inventories, and the minimum possible idle time for each machine are calculated. The application of the numerical results in the target company has shown the efficiency of the proposed model and solution in decreasing the lead time of end product delivery to the customers by 20%. Accordingly, the developed model could be easily applied in wire and cable companies for the purpose of optimal production planning to reduce machine idle time in the manufacturing stages.

Keywords: Serial manufacturing process, production planning, wire and cable industry, goal programming approach.

1044 Estimation of Individual Power of Noise Sources Operating Simultaneously

Authors: Pankaj Chandna, Surinder Deswal, Arunesh Chandra, SK Sharma

Abstract:

Noise has an adverse effect on human health and comfort. Noise not only causes hearing impairment, but also acts as a causal factor for stress and raised systolic pressure. Additionally, it can be a causal factor in work accidents, both by masking hazards and warning signals and by impeding concentration. Industrial workers also suffer psychological and physical stress as well as hearing loss due to industrial noise. This paper proposes an approach to enable engineers to point out quantitatively the noisiest source for modification while multiple machines are operating simultaneously. A model with point sources and spherical radiation in a free field was adopted to formulate the problem. The procedure works very well in ideal cases (point source and free field). However, most industrial noise problems are complicated by the fact that the noise is confined in a room. Reflections from the walls, floor, ceiling, and equipment in a room create a reverberant sound field that alters the sound wave characteristics from those of the free field. The model was therefore validated in a relatively low-absorption room at the NIT Kurukshetra Central Workshop. The validation results showed that the estimated sound powers of the noise sources under simultaneous operating conditions were on the lower side, within error limits of 3.56-6.35%, suggesting that the methodology is suitable for practical implementation in industry. To demonstrate the application of the above analytical procedure for estimating the sound power of noise sources under simultaneous operating conditions, a manufacturing facility (Railway Workshop at Yamunanagar, India) having five sound sources (machines) on its workshop floor is considered in this study. The findings of the case study identified the two most effective candidates (noise sources) for noise control in the Railway Workshop, Yamunanagar, India. The study suggests that modification in the design and/or replacement of these two identified noisiest sources (machines) would be necessary to achieve an effective reduction in noise levels. Further, the estimated data allow engineers to better understand the noise situation of the workplace and to revise the noise map when changes occur in noise levels due to a workplace re-layout.
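
The underlying free-field model can be sketched as a small linear system: each microphone reading is the sum of the unknown machine powers attenuated as point sources, so measuring at as many positions as there are machines lets the individual powers be estimated. The distances and levels below are invented for illustration, and no room reflections are modelled.

```python
# Minimal sketch of the idea: with a point-source, free-field model the
# mean-square pressure at microphone i is a linear combination of the unknown
# source powers W_j attenuated by 1/(4*pi*r_ij^2); measuring at as many points
# as there are machines lets us solve for the individual powers.
import numpy as np

rho_c = 400.0                                 # approx. characteristic impedance of air (Pa*s/m)
r = np.array([[1.0, 3.0, 4.0],                # r[i, j]: distance from mic i to machine j (m)
              [3.5, 1.2, 2.5],
              [4.0, 2.8, 1.1]])
Lp_meas = np.array([92.0, 95.0, 97.0])        # total SPL at each mic, all machines running (dB re 20 uPa)

p_ref = 20e-6
p2_meas = (p_ref ** 2) * 10.0 ** (Lp_meas / 10.0)     # mean-square pressures (Pa^2)
A = rho_c / (4.0 * np.pi * r ** 2)                    # p2 = A @ W for point sources in a free field

W, *_ = np.linalg.lstsq(A, p2_meas, rcond=None)       # acoustic powers of the machines (W)
Lw = 10.0 * np.log10(W / 1e-12)                       # sound power levels (dB re 1 pW)
print(np.round(Lw, 1))
```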

Keywords: Industrial noise, sound power level, multiple noise sources, sources contribution.

1043 Investigation of SSR Characteristics of SSSC With GA Based Voltage Controller

Authors: R. Thirumalaivasan, M.Janaki, Nagesh Prabhu

Abstract:

In this paper, an investigation of the subsynchronous resonance (SSR) characteristics of a hybrid series compensated system and the design of a voltage controller for a three-level 24-pulse Voltage Source Converter based Static Synchronous Series Compensator (SSSC) are presented. The hybrid compensation consists of a series fixed capacitor and an SSSC, which is an active series FACTS controller. The design of the voltage controller for the SSSC is based on damping torque analysis, and a Genetic Algorithm (GA) is adopted for tuning the controller parameters. The SSR characteristics of the SSSC with the constant reactive voltage control mode have been investigated. The results show that the constant reactive voltage control of the SSSC has the effect of reducing the electrical resonance frequency, which detunes the SSR. The analysis of SSR with the SSSC is carried out based on the frequency domain method, eigenvalue analysis and transient simulation. While the eigenvalue and damping torque analyses are based on a D-Q model of the SSSC, the transient simulation considers both the D-Q and a detailed three-phase nonlinear system model using switching functions.

Keywords: FACTS, SSR, SSSC, damping torque, GA.

1042 Artificial Intelligence: A Comprehensive and Systematic Literature Review of Applications and Comparative Technologies

Authors: Z. M. Najmi

Abstract:

Over the years, the question around Artificial Intelligence has always been one with many answers. Whether by means of use in business and industry or complicated algorithmic programming, the management of these technologies has always been the core focus. More recently, these technologies have been questioned in industry and society alike as to whether they have improved human-centred design, assisted choices and objectives, and played a part in systematic processes across the board. With these questions, the answer may lie within AI technologies and the steps needed to remove common human error. Elements such as Machine Learning, Deep Learning, Recommender Systems and Natural Language Processing will all be features to consider moving forward. Our previous intervention with AI applications has resulted in increased productivity, but has also raised concerns for the continuation of traditional human-centred occupations. Emerging technologies such as Augmented Reality and Virtual Reality have all played a part in this during AI's prominent rise. As mentioned, AI has been constantly under the microscope; the benefits and drawbacks may seem endless, but AI is something we must take notice of and adapt into our everyday lives. The aim of this paper is to give an overview of AI and its related technologies. A comprehensive review has been written as a timeline of the developing events and key points in the history of Artificial Intelligence. This research is gathered entirely from secondary sources and academic statements of knowledge, brought together to produce an understanding of the timeline of AI.

Keywords: Artificial Intelligence, Deep Learning, Augmented Reality, Reinforcement Learning, Machine Learning, Supervised Learning.

1041 Intelligent Recognition of Diabetes Disease via FCM Based Attribute Weighting

Authors: Kemal Polat

Abstract:

In this paper, an attribute weighting method called fuzzy C-means clustering based attribute weighting (FCMAW) has been used for the classification of a diabetes disease dataset. The aims of this study are to reduce the variance within the attributes of the diabetes dataset and to improve the classification accuracy of classifier algorithms by transforming a non-linearly separable dataset into a linearly separable one. The Pima Indians Diabetes dataset has two classes, comprising normal subjects (500 instances) and diabetes subjects (268 instances). Fuzzy C-means clustering is an improved version of the K-means clustering method and is one of the most widely used clustering methods in data mining and machine learning applications. In this study, as the first stage, the fuzzy C-means clustering process has been used for finding the centers of the attributes in the Pima Indians diabetes dataset, and the dataset was then weighted according to the ratios of the means of the attributes to their centers. Secondly, after the weighting process, classifier algorithms including the support vector machine (SVM) and k-NN (k-nearest neighbor) classifiers have been used for classifying the weighted Pima Indians diabetes dataset. Experimental results show that the proposed attribute weighting method (FCMAW) has obtained very promising results in the classification of the Pima Indians diabetes dataset.
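
One plausible reading of the FCMAW weighting step is sketched below with a small self-contained NumPy fuzzy C-means; the cluster count, fuzzifier and synthetic data are assumptions, not values taken from the paper.

```python
# Minimal sketch of FCM-based attribute weighting: for each attribute, fuzzy
# C-means finds cluster centers, and the attribute is scaled by the ratio of its
# mean to the center nearest that mean (one possible interpretation).
import numpy as np

def fcm_centers_1d(x, c=2, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                                   # fuzzy memberships, columns sum to 1
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)                # membership-weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=0)
    return centers

def fcm_attribute_weighting(X):
    Xw = X.astype(float).copy()
    for j in range(X.shape[1]):
        col = X[:, j].astype(float)
        centers = fcm_centers_1d(col)
        center = centers[np.argmin(np.abs(centers - col.mean()))]   # center nearest to the mean
        Xw[:, j] = col * (col.mean() / center)                      # weight = mean / center
    return Xw

X = np.random.default_rng(1).normal(loc=[5.0, 120.0, 70.0], scale=[1.0, 30.0, 10.0], size=(100, 3))
print(fcm_attribute_weighting(X).mean(axis=0).round(2))
```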

Keywords: Fuzzy C-means clustering, Fuzzy C-means clustering based attribute weighting, Pima Indians diabetes dataset, SVM.

1040 Development of Genetic-based Machine Learning for Network Intrusion Detection (GBML-NID)

Authors: Wafa' S.Al-Sharafat, Reyadh Naoum

Abstract:

Society has grown to rely on Internet services, and the number of Internet users increases every day. As more and more users become connected to the network, the window of opportunity for malicious users to do their damage becomes very great and lucrative. The objective of this paper is to incorporate different techniques into a classifier system to detect and classify intrusions from normal network packets. Among several techniques, a Steady State Genetic-based Machine Learning Algorithm (SSGBML) is used to detect intrusions, and the Steady State Genetic Algorithm (SSGA), the Simple Genetic Algorithm (SGA), a Modified Genetic Algorithm and the Zeroth Level Classifier system are investigated in this research. SSGA is used as a discovery mechanism instead of SGA, since SGA replaces all old rules with newly produced rules, preventing good old rules from participating in the next rule generation. The Zeroth Level Classifier System plays the role of detector by matching incoming environment messages with classifiers to determine whether the current message is normal or an intrusion, and by receiving feedback from the environment. Finally, in order to attain the best results, the Modified SSGA enhances the discovery engine by using fuzzy logic to optimize the crossover and mutation probabilities. The experiments and evaluations of the proposed method were performed with the KDD 99 intrusion detection dataset.

Keywords: MSSGBML, Network Intrusion Detection, SGA, SSGA.

1039 A Novel SVM-Based OOK Detector in Low SNR Infrared Channels

Authors: J. P. Dubois, O. M. Abdul-Latif

Abstract:

The Support Vector Machine (SVM) is a recent class of statistical classification and regression techniques playing an increasing role in detection problems in various engineering fields, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, SVM is applied to an infrared (IR) binary communication system with different types of channel models, including Ricean multipath fading and a partially developed scattering channel, with additive white Gaussian noise (AWGN) at the receiver. The structure and performance of the SVM in terms of the bit error rate (BER) metric are derived and simulated for these stochastic channel models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared to classical binary maximum likelihood detection using a matched filter driven by On-Off Keying (OOK) modulation. We found that the performance of the SVM is superior to that of the traditional optimal detection schemes used in statistical communication, especially for very low signal-to-noise ratio (SNR) ranges. For large SNR, the performance of the SVM is similar to that of the classical detectors. The implication of these results is that SVM can prove very beneficial to IR communication systems that notoriously suffer from low SNR, at the cost of increased computational complexity.
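
A minimal sketch of the comparison follows, assuming a simplified AWGN-only channel (no Ricean fading or scattering) and illustrative parameters: an SVM trained on received sample vectors versus a simple integrate-and-dump threshold detector.

```python
# Minimal sketch: train an SVM to detect OOK symbols from noisy received
# samples and compare its BER with a threshold decision on the symbol average
# (AWGN-only channel; parameters and noise scaling are illustrative).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_bits, sps, snr_db = 2000, 8, 2                  # bits, samples per symbol, SNR in dB
bits = rng.integers(0, 2, n_bits)
amp = 1.0
sigma = amp * np.sqrt(sps) / (2 * 10 ** (snr_db / 20))   # crude per-sample noise scaling
rx = bits[:, None] * amp + sigma * rng.normal(size=(n_bits, sps))

train = n_bits // 2
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(rx[:train], bits[:train])
ber_svm = np.mean(svm.predict(rx[train:]) != bits[train:])

# Threshold detector on the symbol-averaged (integrate-and-dump) statistic.
stat = rx[train:].mean(axis=1)
ber_thr = np.mean((stat > amp / 2).astype(int) != bits[train:])
print(f"SVM BER: {ber_svm:.4f}   threshold BER: {ber_thr:.4f}")
```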

Keywords: Least square-support vector machine, on-off keying, matched filter, maximum likelihood detector, wireless infrared communication.

1038 Modeling Aeration of Sharp Crested Weirs by Using Support Vector Machines

Authors: Arun Goel

Abstract:

The present paper investigates the prediction of the air entrainment rate and aeration efficiency of free overfall jets issuing from a triangular sharp crested weir by using regression based modelling. Empirical equations, support vector machine models (with polynomial and radial basis function kernels) and linear regression techniques were applied to the triangular sharp crested weirs, relating the air entrainment rate and the aeration efficiency to the input parameters, namely drop height, discharge, and vertex angle. It was observed that there exists a good agreement between the measured values and the values obtained using the empirical equations, the support vector machine (polynomial and RBF) models and the linear regression techniques. The test results demonstrated that the SVM based (polynomial and RBF) models also provided acceptable prediction of the measured values with reasonable accuracy, along with the empirical equations and linear regression techniques, in modelling the air entrainment rate and the aeration efficiency of free overfall jets issuing from a triangular sharp crested weir. Further, a sensitivity analysis has also been performed to study the impact of the input parameters on the output in terms of air entrainment rate and aeration efficiency.

Keywords: Air entrainment rate, dissolved oxygen, regression, SVM, weir.

1037 Digital Control Algorithm Based on Delta-Operator for High-Frequency DC-DC Switching Converters

Authors: Renkai Wang, Tingcun Wei

Abstract:

In this paper, a digital control algorithm based on the delta operator is presented for high-frequency digitally-controlled DC-DC switching converters. The stability and control accuracy of the DC-DC switching converters are improved by using the delta-operator based digital control algorithm without increasing the hardware circuit scale. The design method for the voltage compensator in the delta domain using PID (Proportional-Integral-Derivative) control is given in this paper, and simulation results based on the Simulink platform are provided, which verify the theoretical analysis very well. It can be concluded that the presented delta-operator based control algorithm has better stability and control accuracy, and easier hardware implementation, than existing control algorithms based on the z-operator; therefore it can be used for voltage compensator design in high-frequency digitally-controlled DC-DC switching converters.
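
A minimal sketch of a delta-operator style PID update is given below, with illustrative gains and a toy first-order plant rather than a converter model; it only shows the form of the compensator, not the paper's finite-word-length design.

```python
# Minimal sketch of a PID voltage compensator expressed with the delta operator
# delta(x[k]) = (x[k+1] - x[k]) / Ts, which stays well conditioned at sampling
# rates much higher than the loop bandwidth; gains and Ts are illustrative.
class DeltaPID:
    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, v_ref, v_out):
        error = v_ref - v_out
        self.integral += error * self.ts                    # discrete integral
        delta_e = (error - self.prev_error) / self.ts       # delta-operator derivative
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * delta_e

# Example: regulate a crude first-order model of an output voltage.
pid = DeltaPID(kp=0.5, ki=1000.0, kd=1e-7, ts=1e-6)         # 1 MHz control loop
v_out, v_ref = 0.0, 1.8
for _ in range(5000):
    duty = min(max(pid.update(v_ref, v_out), 0.0), 1.0)
    v_out += (duty * 3.3 - v_out) * 0.01                    # toy plant, not a converter model
print(round(v_out, 3))                                      # converges toward the 1.8 V reference
```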

Keywords: Digitally-controlled DC-DC switching converter, finite word length, control algorithm based on delta-operator, high-frequency, stability.

1036 Auto Rickshaw Impacts with Pedestrians: A Computational Analysis of Post-Collision Kinematics and Injury Mechanics

Authors: A. J. Al-Graitti, G. A. Khalid, P. Berthelson, A. Mason-Jones, R. Prabhu, M. D. Jones

Abstract:

Motor vehicle related pedestrian road traffic collisions are a major road safety challenge, since they are a leading cause of death and serious injury worldwide, contributing to a third of the global disease burden. The auto rickshaw, which is a common form of urban transport in many developing countries, plays a major transport role, both as a vehicle for hire and for private use. The most common auto rickshaws are quite unlike a ‘typical’ four-wheel motor vehicle, being typically characterised by three wheels, a non-tilting sheet-metal body or open frame construction, a canvas roof and side curtains, a small drivers’ cabin, handlebar controls and a passenger space at the rear. Given the propensity, in developing countries, for auto rickshaws to be used in mixed cityscapes, where pedestrians and vehicles share the roadway, the potential for auto rickshaw impacts with pedestrians is relatively high. Whilst auto rickshaws are used in some Western countries, their limited number and spatial separation from pedestrian walkways, as a result of city planning, have not resulted in significant accident statistics. Thus, auto rickshaws have not been subject to the vehicle impact related pedestrian crash kinematic analyses and/or injury mechanics assessment typically associated with motor vehicle development in Western Europe, North America and Japan. This study presents a parametric analysis of auto rickshaw related pedestrian impacts by computational simulation, using a finite element model of an auto rickshaw and an LS-DYNA 50th percentile male Hybrid III Anthropometric Test Device (dummy). Parametric variables include auto rickshaw impact velocity, auto rickshaw impact region (front, centre or offset) and relative pedestrian impact position (front, side and rear). The output data of each impact simulation were correlated against reported injury metrics, namely the Head Injury Criterion (front, side and rear), the Neck Injury Criterion (front, side and rear), and the Abbreviated Injury Scale and its reported risk level, and add greater understanding to the issue of auto rickshaw related pedestrian injury risk. The parametric analyses suggest that pedestrians are subject to a relatively high risk of injury during impacts with an auto rickshaw at velocities of 20 km/h or greater, which in some of the impact simulations may even risk fatalities. The present study provides valuable evidence for informing a series of recommendations and guidelines for making the auto rickshaw safer during collisions with pedestrians. Whilst it is acknowledged that the present research findings are based in the field of safety engineering and may over-represent injury risk compared to real-world accidents, many of the simulated interactions produced injury response values significantly greater than current threshold curves and thus justify their inclusion in the study. To reduce the injury risk level and increase the safety of the auto rickshaw, there should be a reduction in the velocity of the auto rickshaw and/or consideration of engineering solutions, such as retrofitting injury mitigation technologies to those auto rickshaw contact regions which are subject to the greatest risk of producing pedestrian injury.
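
For reference, the Head Injury Criterion mentioned above is conventionally computed as in the sketch below (standard HIC definition with a 15 ms window, i.e. HIC15); the half-sine acceleration pulse is synthetic, not output from the simulations described in the abstract.

```python
# Minimal sketch of the standard Head Injury Criterion,
#   HIC = max over (t1, t2) of (t2 - t1) * [ (1/(t2 - t1)) * integral of a(t) dt ] ** 2.5,
# with the window limited to 15 ms (HIC15); a(t) in g, t in seconds.
import numpy as np

def hic(time_s, accel_g, max_window_s=0.015):
    # cumulative trapezoidal integral of the acceleration trace
    cum = np.concatenate(([0.0], np.cumsum(np.diff(time_s) * 0.5 * (accel_g[1:] + accel_g[:-1]))))
    best = 0.0
    for i in range(len(time_s)):
        for j in range(i + 1, len(time_s)):
            dt = time_s[j] - time_s[i]
            if dt > max_window_s:
                break
            avg = (cum[j] - cum[i]) / dt          # average acceleration over the window (g)
            best = max(best, dt * avg ** 2.5)
    return best

t = np.linspace(0.0, 0.02, 401)                    # 20 ms trace, 50 us resolution
a = 80.0 * np.sin(np.pi * t / 0.01) * (t <= 0.01)  # synthetic 80 g half-sine pulse lasting 10 ms
print(round(hic(t, a), 1))
```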

Keywords: Auto Rickshaw, finite element analysis, injury risk level, LS-DYNA, pedestrian impact.

1035 Identifying Autism Spectrum Disorder Using Optimization-Based Clustering

Authors: Sharifah Mousli, Sona Taheri, Jiayuan He

Abstract:

Autism spectrum disorder (ASD) is a complex developmental condition involving persistent difficulties with social communication, restricted interests, and repetitive behavior. The challenges associated with ASD can interfere with an affected individual's ability to function in social, academic, and employment settings. Although, to the best of our knowledge, there is no effective medication to treat ASD, early intervention can significantly improve an affected individual's overall development. Hence, an accurate diagnosis of ASD at an early phase is essential. The use of machine learning approaches improves and speeds up the diagnosis of ASD. In this paper, we focus on the application of unsupervised clustering methods to ASD, as the large volume of ASD data generated through hospitals, therapy centers, and mobile applications has no pre-existing labels. We conduct a comparative analysis using seven clustering approaches (K-means, agglomerative hierarchical clustering, model-based clustering, fuzzy C-means, affinity propagation, self-organizing maps, and learning vector quantisation) as well as the recently developed optimization-based clustering (COMSEP-Clust) approach. We evaluate the performance of the clustering methods extensively on real-world ASD datasets encompassing different age groups: toddlers, children, adolescents, and adults. Our experimental results suggest that the COMSEP-Clust approach outperforms the other seven methods in recognizing ASD with well-separated clusters.

Keywords: Autism spectrum disorder, clustering, optimization, unsupervised machine learning.

1034 A Novel Machining Signal Filtering Technique: Z-notch Filter

Authors: Nuawi M. Z., Lamin F., Ismail A. R., Abdullah S., Wahid Z.

Abstract:

A filter is used to remove undesirable frequency information from a dynamic signal. This paper shows that the Z-notch filtering technique can be applied to remove noise nuisance from a machining signal. In machining, the noise components were identified from the sound produced by the operation of the machine components themselves, such as the hydraulic system, the motor and the machine environment. By correlating the noise components with the measured machining signal, the components of interest in the measured machining signal, which are less interfered with by the noise, can be extracted. Thus, the filtered signal is more reliable to analyse in terms of noise content compared to the unfiltered signal. Significantly, the I-kaz method, which comprises a three-dimensional graphical representation and the I-kaz coefficient Z∞, could differentiate between the filtered and the unfiltered signal. A bigger scattering space and a higher value of Z∞ demonstrated that the signal was highly interrupted by noise. This method can be utilised as a proactive tool in evaluating the noise content in a signal. The evaluation of noise content is very important, as is its elimination, especially for machining operation fault diagnosis purposes. The Z-notch filtering technique was reliable in extracting the noise component from the measured machining signal with high efficiency. Even though the measured signal was exposed to high noise disruption, the signal generated from the interaction between the cutting tool and the workpiece could still be acquired. Therefore, the noise interruption that could change the original signal features and consequently deteriorate the useful sensory information can be eliminated.
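
The basic idea of removing an identified machine-noise frequency from the measured machining signal can be illustrated with a standard IIR notch filter; this is a generic stand-in for the paper's Z-notch technique, with invented frequencies and amplitudes.

```python
# Minimal sketch: once a noise frequency has been identified from the machine's
# own sound, remove it from the machining signal with a notch filter
# (generic IIR notch; not the paper's Z-notch implementation).
import numpy as np
from scipy import signal

fs = 20_000.0
t = np.arange(0, 0.5, 1 / fs)
cutting = 0.8 * np.sin(2 * np.pi * 400 * t)          # signal of interest (tool-workpiece interaction)
hydraulic_noise = 1.5 * np.sin(2 * np.pi * 2950 * t) # identified noise component
measured = cutting + hydraulic_noise

b, a = signal.iirnotch(w0=2950.0, Q=30.0, fs=fs)     # notch centred on the identified noise frequency
filtered = signal.filtfilt(b, a, measured)

rms = lambda x: np.sqrt(np.mean(x ** 2))
print(f"residual error vs clean signal: {rms(filtered - cutting):.3f} "
      f"(before filtering: {rms(measured - cutting):.3f})")
```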

Keywords: Digital signal filtering, I-kaz method, Machining monitoring, Noise cancelling, Sound.

1033 Least Square-SVM Detector for Wireless BPSK in Multi-Environmental Noise

Authors: J. P. Dubois, Omar M. Abdul-Latif

Abstract:

The Support Vector Machine (SVM) is a statistical learning tool developed from the concept of structural risk minimization (SRM). In this paper, SVM is applied to signal detection in communication systems in the presence of channel noise in various environments, in the form of Rayleigh fading, additive white Gaussian background noise (AWGN), and interference noise generalized as additive colored Gaussian noise (ACGN). The structure and performance of the SVM in terms of the bit error rate (BER) metric are derived and simulated for these advanced stochastic noise models, and the computational complexity of the implementation, in terms of average computational time per bit, is also presented. The performance of the SVM is then compared to a conventional optimal model-based detector for binary signaling driven by binary phase shift keying (BPSK) modulation. We show that the SVM performance is superior to that of conventional matched filter-, innovation filter-, and Wiener filter-driven detectors, even in the presence of random Doppler carrier deviation, especially for low SNR (signal-to-noise ratio) ranges. For large SNR, the performance of the SVM was similar to that of the classical detectors. However, the convergence between SVM and maximum likelihood detection occurred at a higher SNR as the noise environment became more hostile.

Keywords: Colour noise, Doppler shift, innovation filter, least square-support vector machine, matched filter, Rayleigh fading, Wiener filter.

1032 Understanding the Discharge Activities in Transformer Oil under AC and DC Voltage Adopting UHF Technique

Authors: R. Sarathi, G. Koperundevi

Abstract:

The design of converter transformer insulation is a major challenge, since the insulation of these transformers is stressed by both AC and DC voltages. Particle contamination is one of the major problems in insulation structures, as particles generate partial discharges that can lead to major failure of the insulation. Similarly, corona discharges occur in transformer insulation. The partial discharge activity due to particle movement or corona formation in the insulation structure differs under different voltage wave shapes. In the present study, the UHF technique is adopted to understand the discharge activity, and it could be observed that the characteristics of the UHF signals generated under low and high fields are different. In the case of the corona-generated signal, the frequency content of the UHF sensor output lies in the range 0.3-1.2 GHz and does not vary much, apart from an increase in discharge magnitude with increasing applied voltage. It was found that the current signal injected due to partial discharges/corona is about 4 ns in duration, measured over the first half cycle. The wavelet technique is adopted in the present study, as it allows one to identify the frequency content present in the signal at different instants of time. The STD-MRA analysis helps one to identify the frequency band in which the energy content of the UHF signal is maximum.
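
A minimal sketch of the multiresolution energy view that STD-MRA relies on is shown below, using a generic discrete wavelet decomposition of a synthetic UHF-like transient; the wavelet choice, record length and signal are assumptions, not the paper's measurement data.

```python
# Minimal sketch: decompose a synthetic UHF-like transient with a discrete
# wavelet transform and report the energy per decomposition band, which is the
# quantity a band-energy (MRA) analysis inspects.
import numpy as np
import pywt

fs = 10e9                                              # 10 GS/s record (illustrative)
t = np.arange(0, 200e-9, 1 / fs)
burst = np.exp(-t / 20e-9) * np.sin(2 * np.pi * 0.8e9 * t)   # damped 0.8 GHz oscillation
noise = 0.05 * np.random.default_rng(0).normal(size=t.size)
x = burst + noise

coeffs = pywt.wavedec(x, wavelet="db4", level=5)       # [cA5, cD5, cD4, cD3, cD2, cD1]
energies = [float(np.sum(c ** 2)) for c in coeffs]
total = sum(energies)
for name, e in zip(["A5", "D5", "D4", "D3", "D2", "D1"], energies):
    print(f"band {name}: {100 * e / total:5.1f} % of signal energy")
```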

Keywords: Contamination, Insulation, Partial Discharges, Transformer oil, UHF sensors.

1031 Contribution to the Study and Optimal Exploitation of a Solar Power System for a Semi-Arid Zone (Case Study: Ferkene, Algeria)

Authors: D. Dib, W. Guebabi, M. B. Guesmi

Abstract:

The objective of this paper is to contribute to the study of power supply by a solar energy system for the commune of Ferkène, north of the Algerian desert, in a semi-arid area. Optimal exploitation of the system goes through essential study and design stages: the choice of the photovoltaic panel model, the study of its behavior with all the parameters involved in simulation, and the definition of the maximum power point tracking (MPPT) trajectory. Together, these form the essential platform for the design of the solar system set up to supply the town of Ferkène without considering the grid. The characterization of the commune of Ferkène through the collection of geographical, meteorological, demographic and electrical data provides a uniform and important data basis. The results reflect a valid notional model for any attempt to study and design a solar system to supply an arid or semi-arid zone with electrical energy from photovoltaic panels.

Keywords: Solar power, photovoltaic panel, Boost converter, supply, design, electric power, Ferkène, Algeria.

1030 The Internet, its Social and Ethical Problem to the Young and How Curriculum Can Address the Issue

Authors: R. Ramli

Abstract:

The impact of the information revolution is double edged. While it is applauded for its versatility and robust performance, and acclaimed for making life smooth and easy, people are concerned about its dark side, especially for younger generations. The education system should extend its educating role beyond the school to the home. Parents should be included in forming policies on Internet use as well as in curriculum delivery. This paper discusses how the curriculum can be instrumental in addressing the social and ethical issues resulting from the Internet.

Keywords: Curriculum, Ethics, Internet Addiction, Social Issues

1029 Multiple Peaks Tracking Algorithm using Particle Swarm Optimization Incorporated with Artificial Neural Network

Authors: Mei Shan Ngan, Chee Wei Tan

Abstract:

Due to the non-linear characteristics of the photovoltaic (PV) array, PV systems are typically equipped with a maximum power point tracking (MPPT) feature. Moreover, in the case of a PV array under partially shaded conditions, hotspot problems can occur which could damage the PV cells. Partial shading causes multiple peaks in the P-V characteristic curves. This paper presents a hybrid MPPT algorithm based on Particle Swarm Optimization (PSO) and an Artificial Neural Network (ANN) for the detection of the global peak among the multiple peaks, in order to extract the true maximum energy from the PV panel. The PV system consists of the PV array, a dc-dc boost converter controlled by the proposed MPPT algorithm, and a resistive load. The system was simulated using the MATLAB/Simulink package. The simulation results show that the proposed algorithm performs well in detecting the true global peak power. The results of the simulations are analyzed and discussed.
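
A minimal sketch of the PSO stage searching a synthetic multi-peak P-V curve for the global maximum power point is given below; the ANN part and the converter model are omitted, and the curve, swarm size and coefficients are invented for illustration.

```python
# Minimal sketch: PSO searching a synthetic multi-peak P-V curve (as produced
# by partial shading) for the global maximum power point.
import numpy as np

def pv_power(v):
    """Synthetic P-V curve with local peaks and a global peak near 26 V."""
    return (40 * np.exp(-((v - 10) ** 2) / 8)
            + 70 * np.exp(-((v - 26) ** 2) / 6)
            + 55 * np.exp(-((v - 33) ** 2) / 4))

rng = np.random.default_rng(0)
n, iters, w, c1, c2 = 10, 40, 0.6, 1.5, 1.5
pos = rng.uniform(0, 40, n)                     # candidate operating voltages
vel = np.zeros(n)
pbest, pbest_val = pos.copy(), pv_power(pos)
gbest = pbest[np.argmax(pbest_val)]

for _ in range(iters):
    vel = (w * vel + c1 * rng.random(n) * (pbest - pos)
                   + c2 * rng.random(n) * (gbest - pos))
    pos = np.clip(pos + vel, 0, 40)
    val = pv_power(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]

print(f"global MPP found at about {gbest:.1f} V, {pv_power(gbest):.1f} W")
```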

Keywords: Photovoltaic (PV), Partial Shading, Maximum Power Point Tracking (MPPT), Particle Swarm Optimization (PSO) and Artificial Neural Network (ANN)
