Search results for: battery grading algorithm
2456 Automatic Generating CNC-Code for Milling Machine
Authors: Chalakorn Chitsaart, Suchada Rianmora, Mann Rattana-Areeyagon, Wutichai Namjaiprasert
Abstract:
G-code is the main mechanism by which a computer numerical control (CNC) machine controls the tool paths and generates the profile of an object's features. To obtain high accuracy of the surface finish, the CNC machine must operate without interruption. Recently, product design strategies have favoured minor changes that have a low impact on the business and do not consume many resources; the cost and time of designing such changes can be reduced, since the traditional geometric details of the existing models are reused. In order to support this strategy as an alternative channel for the machining operation, this research proposes the automatic generation of codes for the CNC milling operation. Using this technique, the manufacturer can easily change the size and geometric shape of the product during the operation, reducing the time spent setting up or processing the machine. The algorithm, implemented on the MATLAB platform, analyses and evaluates the geometric information of the part, and codes are created rapidly to control the operations of the machine. Compared with codes obtained from CAM, the developed algorithm can quickly generate and simulate the cutting profile of the part.
Keywords: geometric shapes, milling operation, minor changes, CNC machine, G-code, cutting parameters
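A minimal Python sketch of the kind of code generation described above: it emits a G-code contour for a simple rectangular feature whose size can be changed parametrically. The feed rate, spindle speed and cut depth are illustrative placeholders, not values from the paper, and the sketch is not the authors' MATLAB algorithm.

```python
# Minimal sketch: emit G-code for milling one rectangular contour.
# All parameters (feed, spindle speed, cut depth) are illustrative only.
def rectangle_gcode(width, height, depth=-1.0, feed=200.0, safe_z=5.0):
    lines = [
        "G21 ; millimetre units",
        "G90 ; absolute coordinates",
        "M3 S1200 ; spindle on (illustrative speed)",
        f"G0 Z{safe_z:.3f}",
        "G0 X0.000 Y0.000",
        f"G1 Z{depth:.3f} F{feed / 2:.1f} ; plunge",
    ]
    # Trace the rectangle counter-clockwise.
    for x, y in [(width, 0.0), (width, height), (0.0, height), (0.0, 0.0)]:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed:.1f}")
    lines += [f"G0 Z{safe_z:.3f}", "M5 ; spindle off", "M30 ; end of program"]
    return "\n".join(lines)

if __name__ == "__main__":
    print(rectangle_gcode(width=40.0, height=25.0))
```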
Procedia PDF Downloads 349
2455 An Improved Image Steganography Technique Based on Least Significant Bit Insertion
Authors: Olaiya Folorunsho, Comfort Y. Daramola, Joel N. Ugwu, Lawrence B. Adewole, Olufisayo S. Ekundayo
Abstract:
In today's world there is a tremendous rise in the usage of the internet, because almost all communication and information sharing is done over the web. Conversely, unauthorized access to confidential data continues to grow, posing a challenge to information security experts whose major goal is to curtail the menace. One approach to securely deliver data to the rightful destination without modification is steganography, the art of hiding information inside other, innocuous-looking information. This research paper aims at designing a secure algorithm using an image steganographic technique based on Least Significant Bit (LSB) insertion for embedding data into a bitmap (BMP) image in order to enhance security and reliability. In the LSB approach, the basic idea is to replace the least significant bits of the pixels of the cover image with the bits of the message to be hidden, without significantly destroying the properties of the cover image. The system was implemented in the C# programming language on the Microsoft .NET framework. The performance of the proposed system was evaluated through a benchmarking test analysing parameters such as the Mean Squared Error (MSE) and Peak Signal to Noise Ratio (PSNR). The results showed that image steganography performed considerably well in securing data hiding and information transmission over networks.
Keywords: steganography, image steganography, least significant bits, bit map image
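A minimal Python sketch of the LSB insertion idea (the paper's system is written in C#): each message bit overwrites the least significant bit of one pixel byte of the cover image. The cover is assumed to be an 8-bit RGB image saved in a lossless format such as BMP.

```python
# Minimal LSB-embedding sketch (not the authors' C# implementation):
# replace the least significant bit of each pixel byte with one message bit.
import numpy as np
from PIL import Image

def embed_lsb(cover_path, message: bytes, stego_path):
    img = np.array(Image.open(cover_path).convert("RGB"))
    flat = img.flatten()
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("message too large for this cover image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs only
    # Save to a lossless format (e.g. BMP or PNG) so the embedded bits survive.
    Image.fromarray(flat.reshape(img.shape)).save(stego_path)

def extract_lsb(stego_path, n_bytes: int) -> bytes:
    flat = np.array(Image.open(stego_path).convert("RGB")).flatten()
    bits = flat[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()
```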
Procedia PDF Downloads 266
2454 Efficient Utilization of Negative Half Wave of Regulator Rectifier Output to Drive Class D LED Headlamp
Authors: Lalit Ahuja, Nancy Das, Yashas Shetty
Abstract:
LED lighting has been increasingly adopted for vehicles in both domestic and foreign automotive markets. This miniaturized technology offers the best light output, low energy consumption, and cost-efficient driving solutions, which are the need of the hour. In this paper, we present a methodology for driving the highest-class two-wheeler headlamp from the regulator and rectifier (RR) output. Unlike usual LED headlamps, which are battery driven, a low-cost and highly efficient LED Driver Module (LDM) fed directly from the RR is proposed. The positive half of the magneto output is regulated and used to charge the battery that supplies the various peripherals, while conventionally the negative half was used to operate bulb-based exterior lamps. With the advance of battery-driven LED headlamps, this negative half pulse has remained unused in most vehicles. Our system uses the negative half-wave rectified DC output from the RR to provide constant light output at all engine RPMs. The negative rectified DC output of the RR gives us the advantage of a pulsating DC input that periodically goes to zero, which helps us generate a constant DC output equivalent to the required LED load; as the RPM changes, an additional active thermal bypass circuit helps maintain efficiency and limit the thermal rise. The methodology uses the negative half-wave output of the RR together with a linear constant-current driver of significantly higher efficiency. Although the RR output has varied frequency and duty cycles at different engine RPMs, the driver is designed to provide constant current to the LEDs with minimal ripple. LED headlamps usually use a DC-DC switching regulator, which is bulky; with linear regulators we eliminate bulky components and improve the form factor, making the solution both cost-efficient and compact. At present, output ripple-free drivers with few components and low complexity are limited to lower-power LED lamps, and high-efficiency research often focuses on high-power LED applications. This paper presents a method of driving the LED load for both high beam and low beam using the negative half-wave rectified pulsating DC from the RR with minimum components, maintaining high efficiency within the thermal limitations. Linear regulators are ordinarily quite inefficient, with efficiencies typically about 40% and as low as 14%, which leads to poor thermal performance; although they do not require complex and bulky circuitry, powering high-power devices with them is difficult. With a negative half-wave rectified pulsating DC input, however, this efficiency can be improved, because a constant DC output equivalent to the LED load can be generated while minimizing the voltage drop across the linear regulator. Losses are therefore significantly reduced, and an efficiency as high as 75% is achieved. As the RPM changes, the DC voltage increases, which is managed by the active thermal bypass circuitry, resulting in better thermal performance and avoiding the use of bulky and expensive heat sinks. The methodology thus utilizes the unused negative pulsating DC output of the RR to optimize the utilization of the RR output power and provides a cost-efficient solution compared with costly DC-DC drivers.
Keywords: class D LED headlamp, regulator and rectifier, pulsating DC, low cost and highly efficient, LED driver module
Procedia PDF Downloads 67
2453 Effect of an Interface Defect in a Patch/Layer Joint under Dynamic Time Harmonic Load
Authors: Elisaveta Kirilova, Wilfried Becker, Jordanka Ivanova, Tatyana Petrova
Abstract:
The study is a continuation of research on the hygrothermal piezoelectric response of a smart patch/layer joint with an undesirable interface defect (gap) under dynamic time-harmonic mechanical and electrical loads and environmental conditions. In order to find the axial displacements, shear stress and interface debond length in closed analytical form for different positions of the interface gap, a 1D modified shear lag analysis is used. The debond length is represented as a function of many parameters (frequency, magnitude, electric displacement, moisture and temperature, joint geometry, position of the gap along the interface, etc.). A Genetic Algorithm (GA) is then implemented to find the position of the gap along the interface at which a vanishing or minimal debond length is ensured, i.e., the most harmless position for safe operation of the structure. The illustrative example clearly shows that analytical shear-lag solutions and the GA method can be combined successfully to give an effective prognosis of interface shear stress and interface delamination in a patch/layer structure under combined loading with existing defects. To show the effect of the position of the interface gap, all obtained results are given in figures and discussed.
Keywords: genetic algorithm, minimal delamination, optimal gap position, shear lag solution
Procedia PDF Downloads 300
2452 Secure Message Transmission Using Meaningful Shares
Authors: Ajish Sreedharan
Abstract:
Visual cryptography encodes a secret image into shares of random binary patterns. If the shares are printed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of the transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the shares, however, have no visual meaning and hinder the objectives of visual cryptography. In secret message transmission through meaningful shares, the secret message to be transmitted is first converted to a grey-scale image, and (2,2) visual cryptographic shares are generated from this image. The shares are encrypted using a chaos-based image encryption algorithm using the wavelet transform. Two separate colour images of the same size as the shares are taken as cover images, and the respective shares are hidden inside them. Because the encrypted shares are covered by meaningful images, a potential eavesdropper won't know there is a message to be read. The meaningful shares are transmitted through two different transmission media. During decoding, the shares are extracted from the received meaningful images, decrypted using the same chaos-based image encryption algorithm, and combined to regenerate the grey-scale image from which the secret message is obtained.
Keywords: visual cryptography, wavelet transform, meaningful shares, grey scale image
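A simplified Python sketch of (2,2) share generation, using a probabilistic variant without pixel expansion; it is not necessarily the exact scheme, cover hiding or chaos-based encryption used in the paper, but it shows why stacking the two shares reveals the secret.

```python
# Simplified sketch of (2,2) visual cryptography without pixel expansion
# (a probabilistic variant, not necessarily the exact scheme of the paper).
import numpy as np

def make_shares(secret_bw: np.ndarray, rng=None):
    """secret_bw: 2D array, 1 = black pixel, 0 = white pixel."""
    rng = np.random.default_rng(rng)
    share1 = rng.integers(0, 2, size=secret_bw.shape, dtype=np.uint8)
    # For white pixels the shares agree; for black pixels they are complementary,
    # so stacking (logical OR) renders every black pixel fully black.
    share2 = np.where(secret_bw == 1, 1 - share1, share1).astype(np.uint8)
    return share1, share2

def stack(share1, share2):
    return np.maximum(share1, share2)  # superimposing transparencies acts as OR

if __name__ == "__main__":
    secret = (np.random.default_rng(0).random((8, 8)) > 0.5).astype(np.uint8)
    s1, s2 = make_shares(secret)
    recovered = stack(s1, s2)
    # Every black secret pixel is black in the stack; white pixels are black ~50% of the time.
    assert np.all(recovered[secret == 1] == 1)
```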
Procedia PDF Downloads 455
2451 Forecasting Optimal Production Program Using Profitability Optimization by Genetic Algorithm and Neural Network
Authors: Galal H. Senussi, Muamar Benisa, Sanja Vasin
Abstract:
In business today, two of the most important issues for any enterprise are cost minimization and profit maximization; a further issue is how to develop a strong and capable model able to produce the desired forecasts for both. Many studies address these issues using different methods. In this study, we developed a model for multi-criteria production program optimization, integrated with an Artificial Neural Network. Predicting the production cost and profit per unit of a product, dealing with two opposing objectives at the same time, can be extremely difficult, especially when there is a great amount of conflicting information about the production parameters. Feed-forward neural networks are suitable for generalization, which means that the network will generate a proper output in response to input it has never seen: with a small set of examples, the network adjusts its weight coefficients so that each input generates a proper output. This essential characteristic is one of the most important abilities enabling this network to be used in a variety of problems ranging from engineering to finance. As our results show, feed-forward neural networks have a strong capability to map inputs onto desired outputs.
Keywords: project profitability, multi-objective optimization, genetic algorithm, Pareto set, neural networks
Procedia PDF Downloads 445
2450 Developing Artificial Neural Networks (ANN) for Falls Detection
Authors: Nantakrit Yodpijit, Teppakorn Sittiwanchai
Abstract:
The number of older adults is rising rapidly, and the world's population is ageing. Falls are one of the most common and serious health problems in the elderly; they may lead to acute and chronic injuries and to death. Fall-prone individuals are at greater risk of decreased quality of life, lowered productivity and poverty, social problems, and additional health problems. A number of studies on falls prevention using fall detection systems have been conducted, but many of the available technologies are laboratory-based and can incur substantial costs; the use of alternative technologies can potentially reduce these costs. This paper presents the design and development of a new wearable fall detection system using an accelerometer and a gyroscope as motion sensors for detecting body orientation and movement. Algorithms are developed to differentiate between Activities of Daily Living (ADL) and falls by combining threshold-based values with Artificial Neural Networks (ANN). Results indicate that the new threshold-based method combined with a neural network algorithm can reduce the number of false positives (false alarms) and improve the accuracy of the fall detection system.
Keywords: aging, algorithm, artificial neural networks (ANN), fall detection system, motion sensors, threshold
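A minimal Python sketch of the threshold stage only: a fall candidate is flagged when the acceleration magnitude shows a free-fall dip followed shortly by an impact spike. The thresholds, window length and sampling rate are illustrative assumptions rather than the paper's values, and the ANN classification stage is not shown.

```python
# Minimal sketch of the threshold stage only: flag a fall candidate when the
# acceleration magnitude shows a free-fall dip followed by an impact spike.
# The thresholds and window length are illustrative, not the authors' values.
import numpy as np

def fall_candidates(acc_xyz, fs=50.0, free_fall_g=0.4, impact_g=2.5, window_s=1.0):
    """acc_xyz: (N, 3) accelerometer samples in units of g; fs: sample rate in Hz."""
    mag = np.linalg.norm(acc_xyz, axis=1)
    window = int(window_s * fs)
    candidates = []
    for i in np.where(mag < free_fall_g)[0]:          # possible free-fall phase
        if np.any(mag[i : i + window] > impact_g):    # impact shortly afterwards
            candidates.append(i)
    return candidates  # indices to pass on to the ANN classification stage
```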
Procedia PDF Downloads 496
2449 An Improved Data Aided Channel Estimation Technique Using Genetic Algorithm for Massive Multi-Input Multiple-Output
Authors: M. Kislu Noman, Syed Mohammed Shamsul Islam, Shahriar Hassan, Raihana Pervin
Abstract:
With the increasing number of wireless devices and high-bandwidth operations, wireless networks and communications are becoming overcrowded. To cope with this crowded and congested situation, massive MIMO is designed to work with hundreds of low-cost serving antennas at a time while improving spectral efficiency. TDD is used to enable beamforming, a major part of massive MIMO, by transmitting and receiving pilot sequences. All of these benefits are possible only if the channel state information, i.e., the channel estimate, is obtained properly. The common methods used so far to estimate the channel matrix are LS, MMSE and a linear version of MMSE, also proposed in many research works. We have optimized these methods using a genetic algorithm to minimize the mean squared error and to find the best channel matrix among the existing algorithms with less computational complexity. Our simulation results show that the GA performs very well on the existing algorithms in a Rayleigh slow-fading channel with additive white Gaussian noise. We found that GA-optimized LS is better than the existing algorithms, as the GA provides a near-optimal result within a few iterations in terms of MSE versus SNR and computational complexity.
Keywords: channel estimation, LMMSE, LS, MIMO, MMSE
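A baseline Python sketch of the LS estimate that such a GA refinement would start from: the channel matrix is recovered from known pilots and the estimation MSE is computed at a given SNR. The array and pilot dimensions are illustrative.

```python
# Baseline sketch: least-squares (LS) channel estimation from known pilots,
# the quantity a GA-based refinement would start from. Dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rx, n_pilot, snr_db = 8, 64, 16, 10

H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
X = (rng.standard_normal((n_tx, n_pilot)) + 1j * rng.standard_normal((n_tx, n_pilot))) / np.sqrt(2)

signal_power = n_tx  # E|Hx|^2 per receive antenna for unit-power entries
noise_var = signal_power / 10 ** (snr_db / 10)
N = np.sqrt(noise_var / 2) * (rng.standard_normal((n_rx, n_pilot)) + 1j * rng.standard_normal((n_rx, n_pilot)))
Y = H @ X + N

# LS estimate: H_hat = Y X^H (X X^H)^{-1}  (pilot matrix assumed full rank)
H_ls = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)
mse = np.mean(np.abs(H_ls - H) ** 2)
print(f"LS channel-estimation MSE at {snr_db} dB SNR: {mse:.4f}")
```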
Procedia PDF Downloads 191
2448 Logical-Probabilistic Modeling of the Reliability of Complex Systems
Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia
Abstract:
The paper presents logical-probabilistic methods, models and algorithms for the reliability assessment of complex systems, on the basis of which a web application for the structural analysis and reliability assessment of systems was created. The reliability assessment process includes the following stages, which are reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modelling of the shortest paths of successful functioning of the system; 3) description of the system operability condition by a logical function in disjunctive normal form (DNF); 4) transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of the logical elements with probabilistic elements in the ODNF, yielding a reliability estimation polynomial and a quantitative reliability value; 6) calculation of the weights of the elements. Using the logical-probabilistic methods, models and algorithms discussed in the paper, special software was created by means of which a quantitative assessment of the reliability of systems with a complex structure is produced. As a result, structural analysis of systems and the research and design of systems with optimal structure are carried out.
Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability, weight of element
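A Python sketch of the quantity computed at stage 5: the exact system reliability implied by the minimal path sets, here checked by brute-force state enumeration rather than by the ODNF polynomial. The five-element bridge structure is a standard textbook example, not a system from the paper.

```python
# Minimal sketch: exact system reliability from its minimal path sets by state
# enumeration -- a brute-force check of the polynomial that the DNF/ODNF
# orthogonalization procedure produces analytically.
from itertools import product

def system_reliability(minimal_paths, p):
    """minimal_paths: list of sets of component indices; p[i]: reliability of component i."""
    n = len(p)
    total = 0.0
    for state in product([0, 1], repeat=n):                 # 1 = component works
        if any(all(state[i] for i in path) for path in minimal_paths):
            prob = 1.0
            for i, s in enumerate(state):
                prob *= p[i] if s else (1.0 - p[i])
            total += prob
    return total

# Bridge structure: five components, four minimal paths of successful functioning.
paths = [{0, 1}, {2, 3}, {0, 4, 3}, {2, 4, 1}]
print(system_reliability(paths, p=[0.9] * 5))
```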
Procedia PDF Downloads 72
2447 Optimal Sizes of Energy Storage for Economic Operation Management
Authors: Rohalla Moghimi, Sirus Mohammadi
Abstract:
Batteries for the storage of electricity from solar and wind generation farms are a key element in the success of sustainability. In recent years, due to the large-scale integration of Renewable Energy Sources (RESs) such as wind turbines and photovoltaic units into the Micro-Grid (MG), the necessity of Battery Energy Storage (BES) has increased dramatically. The BES has several benefits and advantages in MG-based applications, such as short-term power supply, power quality improvement, facilitating the integration of RES, ancillary services and arbitrage. This paper presents a cost-based formulation to determine the optimal size of the BES in the operation management of the MG. Restrictions such as the power capacity of the Distributed Generators (DGs), the power and energy capacity of the BES, the charge/discharge efficiency of the BES, the operating reserve and load demand satisfaction are also considered. A methodology is proposed for the optimal allocation and economic analysis of the ESS in MGs on the basis of net present value (NPV). As the optimal operation of an MG strongly depends on the arrangement and allocation of its ESS, economic operation strategies and optimal allocation methods for the ESS devices are required for the MG.
Keywords: microgrid, energy storage system, optimal sizing, net present value
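A minimal Python sketch of an NPV comparison across candidate BES sizes. The capital cost, O&M cost, discount rate and annual-saving model are illustrative assumptions only, not figures from the paper.

```python
# Minimal NPV sketch for comparing candidate BES sizes. The cost figures,
# discount rate and annual-saving model are illustrative assumptions only.
def npv(annual_saving, capex, opex_per_year, discount_rate=0.08, years=10):
    """Net present value of installing a battery energy storage system."""
    pv_cash_flows = sum(
        (annual_saving - opex_per_year) / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )
    return pv_cash_flows - capex

# Compare candidate sizes (kWh); savings assumed roughly proportional to size.
for size_kwh in (100, 250, 500, 1000):
    value = npv(annual_saving=45.0 * size_kwh,   # assumed $/kWh-year benefit
                capex=300.0 * size_kwh,          # assumed installed cost, $/kWh
                opex_per_year=5.0 * size_kwh)    # assumed O&M, $/kWh-year
    print(f"{size_kwh:5d} kWh -> NPV = {value:12.0f} $")
```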
Procedia PDF Downloads 556
2446 A New Intelligent, Dynamic and Real Time Management System of Sewerage
Authors: R. Tlili Yaakoubi, H.Nakouri, O. Blanpain, S. Lallahem
Abstract:
The current tools for the real-time management of sewer systems are based on two software components: weather forecasting software and hydraulic simulation software. The former is an important source of imprecision and uncertainty, while the latter requires long decision time steps because of its computation time, so the results obtained generally differ from those expected. The major idea of this project is to change the basic paradigm by approaching the problem from the automatic-control side rather than from the hydrology side. The objective is to make it possible to run a large number of simulations in a very short time (a few seconds), allowing weather forecasts to be replaced by directly using real-time measured rainfall (pluviometric) data. The aim is to reach a system in which decisions are made from reliable data and the error is corrected continuously. A first model of control laws was built and tested with rainfalls of different return periods; the gains obtained in discharged volume vary from 19 to 100%. A new algorithm was then developed to optimize the calculation time and thus overcome the combinatorial problem encountered in our first approach. Finally, this new algorithm was tested with a 16-year rainfall series; the gains obtained are 40% of the total volume discharged to the natural environment and 65% in the number of discharge events.
Keywords: automation, optimization, paradigm, RTC
Procedia PDF Downloads 299
2445 Base Deficit Profiling in Patients with Isolated Blunt Traumatic Brain Injury – Correlation with Severity and Outcomes
Authors: Shahan Waheed, Muhammad Waqas, Asher Feroz
Abstract:
Objectives: To determine the utility of the base deficit in traumatic brain injury for assessing severity and to correlate it with the conventional computed tomography scales for grading the severity of head injury. Methodology: An observational cross-sectional study conducted in a tertiary care facility from 1st January 2010 to 31st December 2012. All patients with isolated traumatic brain injury presenting to the emergency department within 24 hours of the injury were included in the study. Initial Glasgow Coma Scale (GCS) and base deficit values were taken at presentation; the patients were followed during their hospital stay, and CT brain findings were recorded and graded as per the Rotterdam scale, with the findings cross-checked by a radiologist. The Glasgow Outcome Scale (GOS) was taken at the last follow-up, and outcomes were dichotomized into favourable and unfavourable. Continuous variables with normal and non-normal distributions are reported as mean ± SD; categorical variables are presented as frequencies and percentages. The relationship of the base deficit with GCS, GOS, CT brain findings and length of stay was calculated using Spearman's correlation. Results: 154 patients were enrolled in the study. The mean age of the patients was 30 years, and 137 were males. As per the GCS, 34 patients had moderate and 109 had severe brain injuries. 34 percent of the total had an unfavourable outcome, with a mean of 18 ± 14. The correlation between the base deficit and the GCS on presentation was significant at the 0.01 level (0.004). The correlation of the base deficit with the Rotterdam CT brain findings and with length of stay was not significant. Conclusion: The base deficit was found to be a good predictor of the severity of brain injury. There was no association between the severity of injuries on the CT brain as per the Rotterdam scale and the base deficit. Further studies with larger sample sizes are needed to evaluate these associations further.
Keywords: base deficit, traumatic brain injury, Rotterdam, GCS
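A Python sketch of the statistical step (Spearman's rank correlation between base deficit and GCS at presentation), shown with synthetic placeholder data rather than the study data.

```python
# Minimal sketch of the statistical step: Spearman's rank correlation between
# base deficit and GCS at presentation. The arrays below are synthetic
# placeholders, not the study data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
gcs = rng.integers(3, 13, size=154)                      # GCS 3-12 (severe/moderate)
base_deficit = -0.8 * gcs + rng.normal(0, 2, size=154)   # assumed monotonic relationship

rho, p_value = spearmanr(base_deficit, gcs)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```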
Procedia PDF Downloads 443
2444 Applying of an Adaptive Neuro-Fuzzy Inference System (ANFIS) for Estimation of Flood Hydrographs
Authors: Amir Ahmad Dehghani, Morteza Nabizadeh
Abstract:
This paper presents the application of an Adaptive Neuro-Fuzzy Inference System (ANFIS) to flood hydrograph modelling of the Shahid Rajaee reservoir dam located in Iran. This was carried out using 11 flood hydrographs recorded at the Tajan river gauging station. From this dataset, 9 flood hydrographs were chosen to train the model and 2 flood hydrographs to test it. Different architectures of the neuro-fuzzy model, varying in membership function and learning algorithm, were designed and trained with different numbers of epochs. The results were evaluated against the observed hydrographs, and the best model structure was chosen according to the lowest RMSE in each run. To evaluate the efficiency of the neuro-fuzzy model, various statistical indices such as the Nash-Sutcliffe coefficient and flood peak discharge error criteria were calculated. In this simulation, the coordinates of a flood hydrograph, including the peak discharge, were estimated using the discharge values of earlier time steps as input values to the neuro-fuzzy model. These results indicate the satisfactory efficiency of the neuro-fuzzy model for flood simulation, and this performance demonstrates the suitability of the implemented approach for flood management projects.
Keywords: adaptive neuro-fuzzy inference system, flood hydrograph, hybrid learning algorithm, Shahid Rajaee reservoir dam
Procedia PDF Downloads 478
2443 Fault-Detection and Self-Stabilization Protocol for Wireless Sensor Networks
Authors: Ather Saeed, Arif Khan, Jeffrey Gosper
Abstract:
Sensor devices are prone to errors and sudden node failures, which are difficult to detect in a timely manner when deployed in real-time, hazardous, large-scale harsh environments and in medical emergencies. Therefore, the loss of data can be life-threatening when the sensed phenomenon is not disseminated due to sudden node failure, battery depletion or temporary malfunctioning. We introduce a set of partial differential equations for localizing faults, similar to Green’s and Maxwell’s equations used in Electrostatics and Electromagnetism. We introduce a node organization and clustering scheme for self-stabilizing sensor networks. Green’s theorem is applied to regions where the curve is closed and continuously differentiable to ensure network connectivity. Experimental results show that the proposed GTFD (Green’s Theorem fault-detection and Self-stabilization) protocol not only detects faulty nodes but also accurately generates network stability graphs where urgent intervention is required for dynamically self-stabilizing the network.
Keywords: Green’s Theorem, self-stabilization, fault-localization, RSSI, WSN, clustering
Procedia PDF Downloads 75
2442 Optimization and Automation of Functional Testing with White-Box Testing Method
Authors: Reyhaneh Soltanshah, Hamid R. Zarandi
Abstract:
In industries related to computer systems, software testing is necessary for efficiency, despite the time and money it consumes. In embedded system software testing, complete knowledge of the embedded system architecture is necessary to avoid significant costs and damage, and software tests increase the price of the final product. The aim of this article is to provide a method to reduce the time and cost of tests based on the program structure. First, a complete review of eleven white-box test methods based on the 2015 and 2021 versions of ISO/IEC/IEEE 29119 has been carried out. The proposed algorithm is designed using these two versions of the 29119 standard, and some white-box testing methods that are expensive or have little coverage have been removed. White-box test methods were applied to each of the functions according to the 29119 standard, and then the proposed algorithm was applied to the same functions. To speed up the implementation of the proposed method, the Unity framework has been used with some changes; Unity can be used in embedded software testing because it is open source and able to implement white-box test methods. The test items obtained from these two approaches were evaluated using a mathematical ratio, which for the various software examined reduced the test cost by between 50% and 80% and reached the desired result with the minimum number of test items.
Keywords: embedded software, reduce costs, software testing, white-box testing
Procedia PDF Downloads 54
2441 PET Image Resolution Enhancement
Authors: Krzysztof Malczewski
Abstract:
PET is a widely applied scanning procedure in medical-imaging-based research. It delivers measurements of functioning in distinct areas of the human brain while the patient is comfortable, conscious and alert. This article presents a new compressed-sensing-based super-resolution algorithm for improving the image resolution of clinical Positron Emission Tomography (PET) scanners. Motion artifacts are a well-known side effect in PET studies. PET images are acquired over a limited period of time, and because patients cannot hold their breath during PET data gathering, spatial blurring and motion artefacts are the usual result, which may lead to a wrong diagnosis. It is shown that the presented approach improves PET spatial resolution in cases where Compressed Sensing (CS) sequences are used. CS aims at reconstructing signals and images from significantly fewer measurements than were traditionally thought necessary. The application of CS to PET has the potential for significant scan-time reductions, with visible benefits for patients and health-care economics. In this study, the goal is to combine a super-resolution image enhancement algorithm with the CS framework to achieve high-resolution PET output. Both methods emphasize maximizing image sparsity in a known sparse transform domain while minimizing the data-fidelity error.
Keywords: PET, super-resolution, image reconstruction, pattern recognition
Procedia PDF Downloads 371
2440 Frequency Modulation Continuous Wave Radar Human Fall Detection Based on Time-Varying Range-Doppler Features
Authors: Xiang Yu, Chuntao Feng, Lu Yang, Meiyang Song, Wenhao Zhou
Abstract:
Existing two-dimensional micro-Doppler feature extraction ignores the correlation between the spatial and temporal dimensions of the features. By introducing the time dimension into the range-Doppler map, a frequency modulated continuous wave (FMCW) radar human fall detection algorithm based on time-varying range-Doppler features is proposed. Firstly, range-Doppler sequence maps are generated from the echo signals of the continuous motion of the human body collected by the radar. Then the three-dimensional data cube composed of multiple frames of range-Doppler maps is input into a three-dimensional Convolutional Neural Network (3D CNN), and the spatial and temporal features of the time-varying range-Doppler data are extracted by the convolution and pooling layers simultaneously. Finally, the extracted spatial and temporal features are fed into a fully connected layer for classification. The experimental results show that the proposed fall detection algorithm has a detection accuracy of 95.66%.
Keywords: FMCW radar, fall detection, 3D CNN, time-varying range-Doppler features
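A minimal PyTorch sketch of a 3D CNN operating on a range-Doppler cube (frames x Doppler bins x range bins). The layer sizes and input shape are illustrative and do not reproduce the authors' architecture.

```python
# Minimal sketch of a 3D CNN over a range-Doppler cube (frames x Doppler x range).
# Layer sizes and the input shape are illustrative, not the authors' architecture.
import torch
import torch.nn as nn

class FallNet3D(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, n_classes),
        )

    def forward(self, x):          # x: (batch, 1, frames, doppler_bins, range_bins)
        return self.classifier(self.features(x))

if __name__ == "__main__":
    cube = torch.randn(4, 1, 16, 64, 64)   # 4 samples, 16 frames of 64x64 maps
    logits = FallNet3D()(cube)
    print(logits.shape)                    # torch.Size([4, 2])
```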
Procedia PDF Downloads 122
2439 Optimization of Coefficients of Fractional Order Proportional-Integrator-Derivative Controller on Permanent Magnet Synchronous Motors Using Particle Swarm Optimization
Authors: Ali Motalebi Saraji, Reza Zarei Lamuki
Abstract:
Speed control and behavior improvement of permanent magnet synchronous motors (PMSM), which offer reliable performance, low loss, and high power density, especially in industrial drives, are of great importance to researchers. Because of this importance, this paper presents the optimization of the coefficients of a fractional-order proportional-integral-derivative controller using the Particle Swarm Optimization (PSO) algorithm in order to improve the behavior of the PMSM in its speed control loop. The improvement is simulated in MATLAB for the proposed PSO-optimized fractional-order controller and compared with a fractional-order controller optimized by a genetic algorithm and with a full-order controller tuned by a classical optimization method. Simulation results show the performance improvement of the proposed controller with respect to the two other controllers in terms of rise time, overshoot, and settling time.
Keywords: speed control loop of permanent magnet synchronous motor, fractional and full order proportional-integrator-derivative controller, coefficients optimization, particle swarm optimization, improvement of behavior
Procedia PDF Downloads 146
2438 Adaptive Process Monitoring for Time-Varying Situations Using Statistical Learning Algorithms
Authors: Seulki Lee, Seoung Bum Kim
Abstract:
Statistical process control (SPC) is a practical and effective method for quality control. The most important and widely used technique in SPC is the control chart, whose main goal is to detect any assignable changes that affect the quality of the output. Most conventional control charts, such as Hotelling’s T2 chart, are based on the assumption that the quality characteristics follow a multivariate normal distribution. However, in modern complicated manufacturing systems, control chart techniques that can efficiently handle nonnormal processes are required. To overcome the shortcomings of conventional control charts for nonnormal processes, several methods have been proposed that combine statistical learning algorithms and multivariate control charts. Statistical-learning-based control charts, such as support vector data description (SVDD)-based charts and k-nearest-neighbour-based charts, have proven their improved performance in nonnormal situations compared with the T2 chart. Besides nonnormality, time-varying operation is also quite common in real manufacturing fields because of factors such as product and set-point changes, seasonal variations, catalyst degradation, and sensor drift. Traditional control charts, however, cannot accommodate future condition changes of the process because they are formulated from the data recorded in the early stage of the process. In the present paper, we propose an SVDD-based control chart that is capable of adaptively monitoring time-varying and nonnormal processes. We reformulate the SVDD algorithm into a time-adaptive SVDD algorithm by adding a weighting factor that reflects time-varying situations, and we define an updating region for an efficient model-updating structure of the control chart. The proposed control chart simultaneously allows efficient model updates and timely detection of out-of-control signals. The effectiveness and applicability of the proposed chart were demonstrated through experiments with simulated data and real data from the metal frame process in mobile device manufacturing.
Keywords: multivariate control chart, nonparametric method, support vector data description, time-varying process
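A rough Python analogue of the time-adaptive idea using scikit-learn's one-class SVM, which is closely related to SVDD with an RBF kernel: newer samples receive exponentially larger weights when the boundary is fitted. The drift model and weighting scheme are illustrative assumptions, not the paper's formulation.

```python
# Rough analogue of the time-adaptive idea with scikit-learn's OneClassSVM
# (closely related to SVDD with an RBF kernel). Newer samples get exponentially
# larger weights; the drift and weighting scheme are illustrative only.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
n = 300
t = np.arange(n)
X = rng.normal(size=(n, 2)) + 0.01 * t[:, None]       # slowly drifting in-control data

weights = np.exp(-0.01 * (t[-1] - t))                  # recent observations count more
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(X, sample_weight=weights)

# Monitoring statistic: negative decision function (larger = further outside the boundary).
new_points = np.array([[3.0, 2.8], [9.0, 9.0]])        # second point is far out of control
print(-model.decision_function(new_points))
```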
Procedia PDF Downloads 299
2437 Identification of Biological Pathways Causative for Breast Cancer Using Unsupervised Machine Learning
Authors: Karthik Mittal
Abstract:
This study performs an unsupervised machine learning analysis to find clusters of related SNPs which highlight biological pathways that are important for the biological mechanisms of breast cancer. Studying genetic variations in isolation is illogical because these genetic variations are known to modulate protein production and function; the downstream effects of these modifications on biological outcomes are highly interconnected. After extracting the SNPs and their effect on different types of breast cancer using the MRBase library, two unsupervised machine learning clustering algorithms were implemented on the genetic variants: a k-means clustering algorithm and a hierarchical clustering algorithm; furthermore, principal component analysis was executed to visually represent the data. These algorithms specifically used the SNP’s beta value on the three different types of breast cancer tested in this project (estrogen-receptor positive breast cancer, estrogen-receptor negative breast cancer, and breast cancer in general) to perform this clustering. Two significant genetic pathways validated the clustering produced by this project: the MAPK signaling pathway and the connection between the BRCA2 gene and the ESR1 gene. This study provides the first proof of concept showing the importance of unsupervised machine learning in interpreting GWAS summary statistics.
Keywords: breast cancer, computational biology, unsupervised machine learning, k-means, PCA
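A minimal Python sketch of the clustering step: k-means on per-SNP effect sizes (betas) across the three outcomes, with PCA for a two-dimensional view. The beta matrix below is synthetic; in the study it would come from MRBase summary statistics.

```python
# Minimal sketch of the clustering step: k-means on per-SNP effect sizes (betas)
# across outcomes, with PCA for a 2-D view. The beta matrix here is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# rows = SNPs, columns = beta on (ER+ BC, ER- BC, overall BC)
betas = np.vstack([rng.normal(loc, 0.05, size=(50, 3)) for loc in (-0.1, 0.0, 0.15)])

X = StandardScaler().fit_transform(betas)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
coords = PCA(n_components=2).fit_transform(X)

for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} SNPs, "
          f"mean PC1 = {coords[labels == k, 0].mean():+.2f}")
```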
Procedia PDF Downloads 146
2436 A Comparison of South East Asian Face Emotion Classification based on Optimized Ellipse Data Using Clustering Technique
Authors: M. Karthigayan, M. Rizon, Sazali Yaacob, R. Nagarajan, M. Muthukumaran, Thinaharan Ramachandran, Sargunam Thirugnanam
Abstract:
In this paper, a set of irregular and regular ellipse fitting equations, optimized using a genetic algorithm (GA), is applied to the lip and eye features to classify human emotions. Two South East Asian (SEA) faces are considered in this work for the emotion classification. Six emotions and one neutral state are considered as the output, and each subject shows unique characteristics of the lip and eye features for the various emotions. The GA is adopted to optimize the irregular ellipse characteristics of the lip and eye features in each emotion; that is, the top portion of the lip configuration is part of one ellipse and the bottom portion of a different ellipse. Two ellipse-based fitness equations are proposed for the lip configuration, and the relevant parameters that define the emotions are listed. The GA method achieves a reasonably successful classification of emotion, but in some cases the optimized data values of one emotion overlap the ranges of other emotions. In order to overcome this overlapping problem and at the same time improve the classification, a fuzzy c-means (FCM) clustering approach has been implemented. The GA-FCM approach offers reasonably good classification within the ranges of the clusters, which has been demonstrated by applying it to the two SEA subjects, improving the classification rate.
Keywords: ellipse fitness function, genetic algorithm, emotion recognition, fuzzy clustering
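A compact NumPy sketch of the fuzzy c-means stage applied after the GA optimization; the one-dimensional feature values are synthetic placeholders rather than the optimized ellipse parameters from the paper.

```python
# Compact fuzzy c-means (FCM) sketch of the clustering stage used after the GA.
# The 1-D feature values below are synthetic placeholders.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """X: (n_samples, n_features). Returns (centers, membership matrix U of shape (n, c))."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)))     # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

if __name__ == "__main__":
    feats = np.concatenate([np.random.normal(mu, 0.1, 40) for mu in (0.3, 0.6, 0.9)])
    centers, U = fuzzy_c_means(feats.reshape(-1, 1), c=3)
    print(np.sort(centers.ravel()))   # cluster centres near 0.3, 0.6, 0.9
```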
Procedia PDF Downloads 546
2435 The Algorithm to Solve the Extend General Malfatti’s Problem in a Convex Circular Triangle
Authors: Ching-Shoei Chiang
Abstract:
Malfatti's problem is to fit 3 circles into a right triangle such that the 3 circles are tangent to each other and each circle is also tangent to a pair of the triangle's sides. This problem has been extended to any triangle (called the general Malfatti problem). Furthermore, the problem has been extended to have 1+2+…+n circles inside the triangle with special tangency properties among the circles and the triangle sides; we call this the extended general Malfatti problem. For the extended general Malfatti problem, call it Tri(Tn), where Tn is the nth triangular number, there are closed-form solutions for the Tri(T₁) (inscribed circle) and Tri(T₂) (three Malfatti circles) problems. These problems become more complex when n is greater than 2, and algorithms have been proposed to solve the Tri(Tn) problems, n > 2, numerically. With a similar idea, this paper proposes an algorithm to find the radii of circles with the same tangency properties when, instead of the triangle boundary being straight lines, a convex circular arc is used as the boundary; we try to find Tn circles inside this convex circular triangle with the same tangency properties among the circles and the boundary arcs Carc. We call these problems the Carc(Tn) problems. The CPU time taken for the Carc(T16) problem, which finds 136 circles inside a convex circular triangle with the specified tangency properties, is less than one second.
Keywords: circle packing, computer-aided geometric design, geometric constraint solver, Malfatti’s problem
Procedia PDF Downloads 110
2434 Nigeria Energy Security: The Role of Solar Batteries
Authors: Ihugba Okezie A., Oguzie Emeka E.
Abstract:
Nigeria's renewable energy market is expanding due to increased environmental awareness, supportive government policies, and the need for energy diversification. This paper examines the role of solar batteries in enhancing Nigeria's energy security. With growing energy demands and frequent power outages, integrating solar batteries presents a viable solution to stabilize the energy supply. The study investigates the current state of solar battery technology in Nigeria, its economic and environmental benefits, and the challenges to implementation. Through a literature review, case studies, and stakeholder interviews, the paper provides a comprehensive analysis of solar batteries' contribution to a resilient energy future. Key players include Engie SA, TotalEnergies SE, Starsight Energy, Enel SpA, and North-South Power Co. Ltd. Challenges include high upfront costs, inadequate policies, weak infrastructure, and security risks. The paper recommends that the government strengthen policies and encourage investment through tax breaks, subsidies, and other financial incentives.
Keywords: renewable energy, solar batteries, energy security, Nigeria’s electricity generation, job creation
Procedia PDF Downloads 38
2433 Operation Strategy of Multi-Energy Storage System Considering Power System Reliability
Authors: Wook-Won Kim, Je-Seok Shin, Jin-O Kim
Abstract:
As the penetration of Energy Storage Systems (ESS) in the power system increases due to higher performance and lower cost than ever, the ESS is expanding its role to ancillary services as well as the storage of extra energy from intermittent renewable energy resources. For multiple ESSs with different capacities and SOC levels, an optimal schedule of the SOC levels is required in order to use them effectively. This paper proposes an energy allocation method for multiple battery ESSs with a reliability constraint, so that the ESSs can discharge the required energy for as long as possible. A simple but effective method is proposed to satisfy the power needed for the spinning reserve requirement while improving system reliability. A model of the ESS is also proposed, and reliability is evaluated using a combined reliability model that includes the proposed ESS model and the conventional generation model. In the case study, it can be observed that the required power is distributed to each ESS adequately and, accordingly, the SOC is scheduled so as to improve reliability indices such as the Loss of Load Probability (LOLP) and Loss of Load Expectation (LOLE).
Keywords: multiple energy storage system (MESS), energy allocation method, SOC schedule, reliability constraints
Procedia PDF Downloads 368
2432 Investigating Message Timing Side Channel Attacks on Networks on Chip with Ring Topology
Authors: Mark Davey
Abstract:
Communications on a Network on Chip (NoC) produce timing information, i.e., network injection delays, packet traversal times, throughput metrics, and other attributes relating to the traffic being sent across the chip. The security requirements of a platform demand that each node operate with confidentiality, integrity, and availability (ISO 27001). Inherently, a shared NoC interconnect is exposed to analysis of the timing patterns created by contention for the network components, i.e., links and switches/routers. This phenomenon is defined as information leakage, which represents a ‘side channel’ of sensitive information that can be correlated with platform activity. The key algorithm presented in this paper evaluates how an adversary controlling two nodes neighbouring a target node can obtain sensitive information about communication with the target node. The actual information obtained is the period of a periodic task communication, which constitutes a breach of the expected confidentiality of a node operating in a multiprocessor platform. An experimental investigation of the side channel is undertaken to judge the level and significance of the information inferred from NoC access times. Results are presented for a series of expanding task-set scenarios to evaluate the efficacy of the side-channel detection algorithm as the network load increases.
Keywords: embedded systems, multiprocessor, network on chip, side channel
Procedia PDF Downloads 71
2431 Channel Sounding and PAPR Reduction in OFDM for WiMAX Using Software Defined Radio
Authors: B. Siva Kumar Reddy, B. Lakshmi
Abstract:
WiMAX is a high-speed broadband wireless access technology that adopted OFDM/OFDMA techniques to supply higher data rates with high spectral efficiency. However, OFDM suffers from a high Peak-to-Average Power Ratio (PAPR) and is highly affected by synchronization errors. In this paper, the high-PAPR problem is solved by using phase modulation to obtain Constant Envelope Orthogonal Frequency Division Multiplexing (CE-OFDM). Synchronization failures are reduced by employing a frequency lock loop, a polyphase clock synchronizer, a Costas loop and blind equalizers such as the Constant Modulus Algorithm (CMA) equalizer and the Sign Kurtosis Maximization Adaptive Algorithm (SKMAA) equalizer. The WiMAX physical layer is implemented on a Software Defined Radio (SDR) prototype using the USRP N210 as the hardware and GNU Radio as the software platform. An SNR estimation is performed on the signal received through the USRP N210. To characterize wireless propagation in specific environments, a sliding-correlator wireless channel sounding system is designed using the SDR testbed.
Keywords: BER, CMA equalizer, Kurtosis equalizer, GNU Radio, OFDM/OFDMA, USRP N210
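A NumPy sketch of why CE-OFDM removes the PAPR problem: the real OFDM baseband signal phase-modulates a constant-envelope carrier, so the transmitted magnitude is constant. The subcarrier count and modulation index are illustrative, and this is not the GNU Radio flowgraph used in the paper.

```python
# Minimal numpy sketch of the CE-OFDM idea: the real OFDM baseband is used to
# phase-modulate a constant-envelope signal, so |s(t)| is constant. The
# modulation index (h) and subcarrier count are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_sc = 256
qpsk = (rng.choice([-1, 1], n_sc) + 1j * rng.choice([-1, 1], n_sc)) / np.sqrt(2)

ofdm = np.fft.ifft(qpsk) * np.sqrt(n_sc)               # conventional OFDM symbol
papr_db = 10 * np.log10(np.max(np.abs(ofdm) ** 2) / np.mean(np.abs(ofdm) ** 2))

m = ofdm.real / np.std(ofdm.real)                      # normalised real message signal
ce_ofdm = np.exp(1j * 2 * np.pi * 0.5 * m)             # phase modulation, h = 0.5
ce_papr_db = 10 * np.log10(np.max(np.abs(ce_ofdm) ** 2) / np.mean(np.abs(ce_ofdm) ** 2))

print(f"OFDM PAPR    ~ {papr_db:.1f} dB")
print(f"CE-OFDM PAPR ~ {ce_papr_db:.1f} dB (constant envelope)")
```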
Procedia PDF Downloads 349
2430 The Optimum Mel-Frequency Cepstral Coefficients (MFCCs) Contribution to Iranian Traditional Music Genre Classification by Instrumental Features
Authors: M. Abbasi Layegh, S. Haghipour, K. Athari, R. Khosravi, M. Tafkikialamdari
Abstract:
An approach to finding the optimum mel-frequency cepstral coefficients (MFCCs) for the Radif of Mirzâ Ábdollâh, the principal emblem and the heart of Persian music, performed by the most famous Iranian masters on two Iranian stringed instruments, the ‘Tar’ and the ‘Setar’, is proposed. While investigating the variance of the MFCCs for each record in the music database of 1500 gushe of the repertoire, belonging to 12 modal systems (dastgâh and âvâz), we applied the fuzzy c-means clustering algorithm to each of the 12 coefficients and to different combinations of those coefficients. We repeated the experiment while increasing the number of coefficients, but the clustering accuracy remained the same. Therefore, we can conclude that the first 7 MFCCs (V-7MFCC) are enough for classification of the Radif of Mirzâ Ábdollâh. Classical machine learning algorithms such as MLP neural networks, K-Nearest Neighbors (KNN), the Gaussian Mixture Model (GMM), the Hidden Markov Model (HMM) and the Support Vector Machine (SVM) have been employed. Finally, it can be seen that the SVM shows the best performance in this study.
Keywords: radif of Mirzâ Ábdollâh, Gushe, mel frequency cepstral coefficients, fuzzy c-mean clustering algorithm, k-nearest neighbors (KNN), gaussian mixture model (GMM), hidden markov model (HMM), support vector machine (SVM)
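A minimal Python sketch of the feature-extraction step using librosa: only the first 7 MFCCs are kept per recording and summarised into a fixed-length vector for a downstream classifier or clustering stage. The file name is a hypothetical placeholder.

```python
# Minimal sketch of the feature-extraction step: keep only the first 7 MFCCs
# per recording and summarise them for a downstream classifier or clustering.
import librosa
import numpy as np

def first_seven_mfcc(path, n_mfcc=7):
    y, sr = librosa.load(path, sr=None, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (7, n_frames)
    # One fixed-length vector per recording: per-coefficient mean and variance.
    return np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1)])

# feature_vector = first_seven_mfcc("gushe_example.wav")  # hypothetical file name
```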
Procedia PDF Downloads 446
2429 Deep Routing Strategy: Deep Learning based Intelligent Routing in Software Defined Internet of Things.
Authors: Zabeehullah, Fahim Arif, Yawar Abbas
Abstract:
A Software Defined Network (SDN) is a next-generation networking model which simplifies traditional network complexities and improves the utilization of constrained resources. Currently, most SDN-based Internet of Things (IoT) environments use traditional network routing strategies, which work on the basis of a maximum or minimum metric value. However, IoT network heterogeneity, dynamic traffic flow and complexity demand intelligent and self-adaptive routing algorithms, because traditional routing algorithms lack self-adaptation, intelligence and efficient utilization of resources. To some extent, SDN, owing to its flexibility and centralized control, has managed the IoT complexity and heterogeneity, but the Software Defined IoT (SDIoT) still lacks intelligence. To address this challenge, we propose a model called Deep Routing Strategy (DRS), which uses a deep learning algorithm to perform routing in the SDIoT intelligently and efficiently. Our model uses real-time traffic for training and learning. Results demonstrate that the proposed model achieves high accuracy and a low packet loss rate during path selection, outperforms the benchmark routing algorithm (OSPF), and provides encouraging results under highly dynamic traffic flow.
Keywords: SDN, IoT, DL, ML, DRS
Procedia PDF Downloads 110
2428 Speed Control of DC Motor Using Optimization Techniques Based PID Controller
Authors: Santosh Kumar Suman, Vinod Kumar Giri
Abstract:
The goal of this paper is to design a speed controller for a DC motor by choosing PID parameters using genetic algorithms (GAs). The DC motor is extensively used in numerous applications such as steel plants, electric trains, cranes and many more. A DC motor can be represented by a nonlinear model when nonlinearities such as magnetic saturation are considered; to provide effective control, the nonlinearities and uncertainties in the model must be taken into account in the control design. The DC motor is considered as a third-order system, and this paper examines three types of tuning techniques for the PID parameters. A separately excited DC motor has been modelled in MATLAB, and its speed is examined using the proportional, integral and derivative gains (KP, KI, KD) of the PID controller, since classically tuned PID controllers fail to control the drive when the load parameters are changed. The main aim of this paper is to analyse the performance of optimization techniques, viz. the genetic algorithm (GA), for improving the PID controller parameters for speed control of the DC motor, and to list their advantages over traditional tuning strategies. The results obtained from the GA were compared with those obtained from the traditional method, and it was found that the optimization techniques outperform conventional tuning practices for ordinary PID controllers.
Keywords: DC motor, PID controller, optimization techniques, genetic algorithm (GA), objective function, IAE
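A hedged Python sketch of GA-based PID tuning: a small real-coded GA minimises the IAE of a speed step response. The plant is a simplified DC-motor model (electrical plus mechanical dynamics) with assumed parameters, not the third-order system or MATLAB model of the paper.

```python
# Hedged sketch: a small real-coded GA tuning (Kp, Ki, Kd) by minimising the IAE
# of a step response on a simplified DC-motor model with assumed parameters.
import numpy as np

R, L, J, B, KT, KE = 1.0, 0.5, 0.01, 0.1, 0.01, 0.01   # assumed motor constants
DT, T_END, REF, VMAX = 1e-3, 2.0, 1.0, 24.0

def iae(gains):
    kp, ki, kd = gains
    i = w = integ = e_prev = 0.0
    cost = 0.0
    for _ in range(int(T_END / DT)):
        e = REF - w
        integ += e * DT
        v = np.clip(kp * e + ki * integ + kd * (e - e_prev) / DT, -VMAX, VMAX)
        e_prev = e
        di = (v - R * i - KE * w) / L          # electrical dynamics
        dw = (KT * i - B * w) / J              # mechanical dynamics
        i += di * DT
        w += dw * DT
        cost += abs(e) * DT                    # integral of absolute error
    return cost

rng = np.random.default_rng(0)
pop = rng.uniform([0, 0, 0], [100, 200, 5], size=(24, 3))   # random (Kp, Ki, Kd)
for gen in range(30):
    fitness = np.array([iae(ind) for ind in pop])
    parents = pop[np.argsort(fitness)][:12]                 # truncation selection
    kids = []
    while len(kids) < 12:
        a, b = parents[rng.integers(0, 12, 2)]
        child = 0.5 * (a + b) + rng.normal(0, [2, 4, 0.1])  # blend crossover + mutation
        kids.append(np.clip(child, 0, None))
    pop = np.vstack([parents, np.array(kids)])

best = pop[np.argmin([iae(ind) for ind in pop])]
print("best (Kp, Ki, Kd):", np.round(best, 2), " IAE:", round(iae(best), 4))
```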
Procedia PDF Downloads 420
2427 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction
Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach
Abstract:
X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation X-ray imaging, especially for soft tissues in the medical imaging energy range, which can potentially lead to better diagnoses for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires highly coherent X-rays. Many research teams have demonstrated that it is also feasible with a laboratory source, bringing it one step closer to clinical use; nevertheless, the requirement for fine gratings and high-precision stepping motors when using a laboratory source prevents it from being widely used. Recently, a random phase object has been proposed as the analyzer. This method requires a much less demanding experimental setup, although previous studies were done using a particular X-ray source (a liquid-metal-jet micro-focus source) or high-precision stepping motors. We have been working on a much simpler setup requiring only a small modification of a commercial bench-top micro-CT (computed tomography) scanner, introducing a piece of sandpaper as the phase analyzer in front of the X-ray source. However, it needs a suitable algorithm for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with high temporal resolution is particularly challenging. Different reconstruction methods, including neural-network-based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT. A Monte Carlo ray tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, in order to address the issue that neural networks require a large amount of training data to produce high-quality reconstructions.
Keywords: micro-ct, neural networks, reconstruction, speckle-based x-ray phase contrast
Procedia PDF Downloads 257