Search results for: Additive White Gaussian Noise (AWGN)
133 Complex Condition Monitoring System of Aircraft Gas Turbine Engine
Authors: A. M. Pashayev, D. D. Askerov, C. Ardil, R. A. Sadiqov, P. S. Abdullayev
Abstract:
Research shows that the application of probability-statistical methods is unfounded at the early stages of diagnosing the technical condition of an aviation Gas Turbine Engine (GTE), when the flight information is fuzzy, limited and uncertain. Hence, the efficiency of applying the new Soft Computing technology, using Fuzzy Logic and Neural Network methods, at these diagnosing stages is considered. For this purpose, fuzzy multiple linear and non-linear models (fuzzy regression equations) obtained from statistical fuzzy data are trained with high accuracy. To build a more adequate model of the GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analysed. The analysis of changes in the skewness and kurtosis coefficient values characterises the distributions of the GTE work and output parameters, and the multiple linear and non-linear generalised models are identified in the presence of measurement noise using a new recursive Least Squares Method (LSM). The developed GTE condition monitoring system provides stage-by-stage estimation of the engine technical condition. As an application of the given technique, the technical condition of a new operating aviation engine was estimated.
Keywords: Aviation gas turbine engine, technical condition, fuzzy logic, neural networks, fuzzy statistics.
132 Evaluating Generative Neural Attention Weights-Based Chatbot on Customer Support Twitter Dataset
Authors: Sinarwati Mohamad Suhaili, Naomie Salim, Mohamad Nazim Jambli
Abstract:
Sequence-to-sequence (seq2seq) models augmented with attention mechanisms are increasingly important in automated customer service. These models, adept at recognizing complex relationships between input and output sequences, are essential for optimizing chatbot responses. Central to these mechanisms are neural attention weights that determine the model’s focus during sequence generation. Despite their widespread use, there remains a gap in the comparative analysis of different attention weighting functions within seq2seq models, particularly in the context of chatbots utilizing the Customer Support Twitter (CST) dataset. This study addresses this gap by evaluating four distinct attention-scoring functions—dot, multiplicative/general, additive, and an extended multiplicative function with a tanh activation parameter — in neural generative seq2seq models. Using the CST dataset, these models were trained and evaluated over 10 epochs with the AdamW optimizer. Evaluation criteria included validation loss and BLEU scores implemented under both greedy and beam search strategies with a beam size of k = 3. Results indicate that the model with the tanh-augmented multiplicative function significantly outperforms its counterparts, achieving the lowest validation loss (1.136484) and the highest BLEU scores (0.438926 under greedy search, 0.443000 under beam search, k = 3). These findings emphasize the crucial influence of selecting an appropriate attention-scoring function to enhance the performance of seq2seq models for chatbots, particularly highlighting the model integrating tanh activation as a promising approach to improving chatbot quality in customer support contexts.
Keywords: Attention weight, chatbot, encoder-decoder, neural generative attention, score function, sequence-to-sequence.
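A minimal NumPy sketch (not the authors' implementation) of the four attention-scoring functions named in the abstract; the variable names, dimensions, and the exact placement of the tanh in the extended multiplicative score are illustrative assumptions.

```python
import numpy as np

def dot_score(query, keys):
    # query: (d,), keys: (T, d) -> scores: (T,)
    return keys @ query

def general_score(query, keys, W):
    # multiplicative/general: s_t = k_t^T W q, with a learned matrix W of shape (d, d)
    return keys @ (W @ query)

def additive_score(query, keys, W1, W2, v):
    # additive (Bahdanau-style): s_t = v^T tanh(W1 k_t + W2 q)
    return np.tanh(keys @ W1.T + query @ W2.T) @ v

def general_tanh_score(query, keys, W):
    # extended multiplicative function with a tanh activation: s_t = tanh(k_t^T W q)
    return np.tanh(keys @ (W @ query))

def attention_weights(scores):
    # softmax turns scores into attention weights over the encoder states
    e = np.exp(scores - scores.max())
    return e / e.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, T = 8, 5
    q, K = rng.normal(size=d), rng.normal(size=(T, d))
    W = rng.normal(size=(d, d))
    print(attention_weights(general_tanh_score(q, K, W)))
```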
131 Construction Innovation: Support for 3D Printing House
Authors: Andrea Palazzo, Daniel Macek, Veronika Malinova
Abstract:
Contour crafting is a new technological challenge for architects and construction companies. The many advantages it promises make it one of the most interesting solutions for construction in terms of automation of building processes. The technology for 3D printing houses offers many application possibilities, from low-cost construction to being considered by NASA, in visionary projects, as a good solution for building settlements on other planets. Another very important point is that clients, as well as architects, will no longer face many limits in design concerning ideas and creativity. Real estate prices are constantly increasing, and the lack of availability of construction materials, together with the speculation created around it in 2021, is pushing prices to such a level that in the future it will be difficult for developers to find customers for these ultra-expensive homes. Hence, this paper starts with an introduction to 3D printing, which now has the potential to gain an important position in the market and become a valid alternative to the classic construction process. This technology is not only beneficial from an economic point of view but is also a great opportunity to reduce the environmental impact of construction by cutting CO2 emissions. Further on, the article also examines whether, after COP 26 (the 2021 United Nations Climate Change Conference), world governments could push towards building technologies that reduce the waste materials to be disposed of and, at the same time, reduce emissions with the contribution of governmental funds. This paper gives insight into the multiple benefits of 3D printing and emphasizes the importance of finding new solutions for materials that can be used by the printer. Based on the type of material, it will then be possible to assess compatibility with current regulations and how inclined the authorities will be to support this technology. This will help enable the rise and development of this technology in Europe and the rest of the world on actual housing projects and not only on prototypes.
Keywords: Additive manufacturing, building development, building regulation, contour crafting, printing material.
130 Comparison of Composite Programming and Compromise Programming for Aircraft Selection Problem Using Multiple Criteria Decision Making Analysis Method
Authors: C. Ardil
Abstract:
In this paper, composite programming and compromise programming for the aircraft selection problem are compared using the multiple criteria decision making analysis method. The decision making process requires the prior definition and fulfillment of certain factors, especially in complex areas such as aircraft selection. The proposed technique gives more efficient results by extending composite programming and compromise programming, which are widely used in modeling multiple criteria decisions. The proposed model is applied to a practical decision problem for evaluating and selecting aircraft. A selection of aircraft was made based on the proposed approach developed in the field of multiple criteria decision making. The presented model is solved using composite programming and compromise programming. The importance values of the weight coefficients of the criteria are calculated using the mean weight method. The evaluation and ranking of aircraft are carried out using the composite programming and compromise programming methods. In order to determine the stability of the model and the applicability of the developed composite programming and compromise programming approach, the paper analyzes its sensitivity: the first part involves changing the values of the coefficients λ and q. The second part of the sensitivity analysis relates to the application of the different multiple criteria decision making methods, composite programming and compromise programming. In the third part of the sensitivity analysis, the Spearman correlation coefficient of the obtained ranks is calculated, which confirms the applicability of all the proposed approaches.
Keywords: composite programming, compromise programming, additive weighted model, multiplicative weighted model, multiple criteria decision making analysis, MCDMA, aircraft selection
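A hedged sketch (not the paper's exact formulation) of the compromise-programming idea described above: alternatives are ranked by their weighted Lq-distance to the ideal solution, with criterion weights from the mean weight method. The decision matrix, criterion directions, and the value of q are illustrative assumptions.

```python
import numpy as np

def compromise_ranking(X, benefit, q=2.0):
    """X: (m alternatives, n criteria); benefit: bool mask, True = larger is better."""
    m, n = X.shape
    w = np.full(n, 1.0 / n)                               # mean weight method
    ideal = np.where(benefit, X.max(axis=0), X.min(axis=0))
    nadir = np.where(benefit, X.min(axis=0), X.max(axis=0))
    # normalised distance of each alternative from the ideal value per criterion
    d = np.abs(ideal - X) / np.where(ideal != nadir, np.abs(ideal - nadir), 1.0)
    Lq = (np.sum((w * d) ** q, axis=1)) ** (1.0 / q)      # Lq compromise metric
    return np.argsort(Lq), Lq                             # smaller distance = better rank

if __name__ == "__main__":
    # hypothetical aircraft data: range (benefit), fuel burn (cost), price (cost)
    X = np.array([[820.0, 6.8, 2.5],
                  [850.0, 7.2, 2.9],
                  [790.0, 6.5, 2.2]])
    order, scores = compromise_ranking(X, benefit=np.array([True, False, False]))
    print(order, scores.round(3))
```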
129 Antioxidant Capacity and Total Phenolic Content of Aqueous Acetone and Ethanol Extract of Edible Parts of Moringa oleifera and Sesbania grandiflora
Authors: Perumal Siddhuraju, Arumugam Abirami, Gunasekaran Nagarani, Marimuthu Sangeethapriya
Abstract:
Aqueous ethanol and aqueous acetone extracts of Moringa oleifera (outer pericarp of immature fruit and flower) and Sesbania grandiflora white variety (flower and leaf) were examined for radical scavenging capacities and antioxidant activities. The ethanol extract of S. grandiflora (flower and leaf) and the acetone extract of M. oleifera (outer pericarp of immature fruit and flower) contained relatively higher levels of total dietary phenolics than the other extracts. The antioxidant potential of the extracts was assessed by employing different in vitro assays such as the reducing power assay, DPPH˙, ABTS˙+ and ˙OH radical scavenging capacities, an antihemolytic assay by the hydrogen peroxide induced method, and metal chelating ability. Although all the extracts exhibited dose-dependent reducing power activity, the acetone extracts of all the samples were found to have more hydrogen donating ability in the DPPH˙ (2.3% - 65.03%) and hydroxyl radical scavenging systems (21.6% - 77.4%) than the ethanol extracts. The potential for multiple antioxidant activity was evident, as the extracts possessed antihemolytic activity (43.2% to 68.0%) and metal ion chelating potency (45.16 - 104.26 mg EDTA/g sample). The results indicate that the acetone extracts of M. oleifera (outer pericarp of immature fruit and flower) and S. grandiflora (flower and leaf), endowed with polyphenols, could be utilized as natural antioxidants/nutraceuticals.
Keywords: Antioxidant activity, Moringa oleifera, polyphenolics, Sesbania grandiflora, underutilized vegetables.
128 3D Star Skeleton for Fast Human Posture Representation
Authors: Sungkuk Chun, Kwangjin Hong, Keechul Jung
Abstract:
In this paper, we propose an improved 3D star skeleton technique, a skeletonization suitable for human posture representation that reflects the 3D information of human posture. Moreover, the proposed technique is simple and can therefore be performed in real time. Existing skeleton construction techniques, such as distance transformation, Voronoi diagrams, and thinning, focus on the precision of skeleton information. Therefore, those techniques are not applicable to real-time posture recognition, since they are computationally expensive and highly susceptible to boundary noise. Although a 2D star skeleton was proposed to overcome these problems, it also has limitations in describing the 3D information of the posture. To represent human posture effectively, the constructed skeleton should consider the 3D information of the posture. The proposed 3D star skeleton contains 3D data of the human body and focuses on human action and posture recognition. Our 3D star skeleton uses eight projection maps that contain 2D silhouette information and depth data of the human surface. The extremal points can then be extracted as the features of the 3D star skeleton without searching the whole boundary of the object. Therefore, in terms of execution time, our 3D star skeleton is faster than the "greedy" 3D star skeleton that uses all the boundary points on the surface. Moreover, our method can offer a more accurate skeleton of the posture than the existing star skeleton, since the 3D data of the object are taken into account. Additionally, we build a codebook, a collection of representative 3D star skeletons for 7 postures, to recognize which posture a constructed skeleton represents.
Keywords: Computer vision, gesture recognition, skeletonization, human posture representation.
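A simplified 2D sketch of the underlying star-skeleton idea (not the authors' eight-projection-map 3D variant): extremal points are taken as local maxima of the smoothed centroid-to-boundary distance along the silhouette contour. The function name and smoothing window are assumptions.

```python
import numpy as np

def star_skeleton_extremes(boundary, smooth=5):
    """boundary: (N, 2) array of ordered silhouette contour points."""
    centroid = boundary.mean(axis=0)
    dist = np.linalg.norm(boundary - centroid, axis=1)   # centroid-to-boundary distance
    # circular moving-average smoothing to suppress boundary noise
    padded = np.r_[dist[-smooth:], dist, dist[:smooth]]
    d = np.convolve(padded, np.ones(smooth) / smooth, mode="same")[smooth:-smooth]
    # local maxima of the smoothed distance mark extremities (head, hands, feet)
    idx = [i for i in range(len(d)) if d[i] > d[i - 1] and d[i] > d[(i + 1) % len(d)]]
    return boundary[idx], centroid
```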
127 Design and Optimization for a Compliant Gripper with Force Regulation Mechanism
Authors: Nhat Linh Ho, Thanh-Phong Dao, Shyh-Chour Huang, Hieu Giang Le
Abstract:
This paper presents the design and optimization of a compliant gripper. The gripper is constructed based on the concept of a compliant mechanism with flexure hinges. A passive force regulation mechanism is presented to control the grasping force on a micro-sized object instead of using a force sensor. The force regulation mechanism is designed using planar springs. The gripper is expected to provide a large range of displacement to handle objects of various sizes. First, the statics and dynamics of the gripper are investigated using finite element analysis in the ANSYS software. The design parameters of the gripper are then optimized via the Taguchi method. An L9 orthogonal array is used to establish the experimental matrix. Subsequently, the signal-to-noise ratio is analyzed to find the optimal solution. Finally, response surface methodology is employed to model the relationship between the design parameters and the output displacement of the gripper. The design of experiments method is then used for sensitivity analysis to determine the effect of each parameter on the displacement. The results show that the compliant gripper can move with a large displacement of 213.51 mm and that the force regulation mechanism is expected to be used for high-precision positioning systems.
Keywords: Flexure hinge, compliant mechanism, compliant gripper, force regulation mechanism, Taguchi method, response surface methodology, design of experiment.
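A small sketch of the Taguchi signal-to-noise calculation used in this kind of study; the larger-the-better form shown here (appropriate when maximizing output displacement) and the sample data are assumptions, not the authors' measurements.

```python
import numpy as np

def sn_larger_the_better(y):
    # Taguchi larger-the-better S/N ratio: -10*log10(mean(1/y^2))
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_the_better(y):
    # smaller-the-better form, e.g. for minimizing stress or roughness
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

if __name__ == "__main__":
    # hypothetical repeated displacement measurements for one L9 trial
    print(round(sn_larger_the_better([210.2, 213.5, 211.8]), 3))
```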
126 Temporal Signal Processing by Inference Bayesian Approach for Detection of Abrupt Variation of Statistical Characteristics of Noisy Signals
Authors: Farhad Asadi, Hossein Sadati
Abstract:
In fields such as neuroscience, and especially in cognitive modeling of mental processes, handling uncertainty in the temporal structure of a signal is vital. In this paper, Bayesian online inference is constructed for estimating the locations of change points in a signal. The method separates the observed signal into independent series and studies the change and variation of the data regime locally, together with the related statistical characteristics. We give conditions for simulations of the method when the data characteristics of the signals vary, and provide empirical evidence of the performance of the method. It is verified that the correlation between the series around the change-point location and signal characteristics such as the signal-to-noise ratio and the mean value of the signal strongly affects how accurately the change-point location is found. One of the main contributions of this study is the characterization of these influences of the signal's statistical characteristics on finding abrupt variations in the signal. Two simulation structures are used: in the first, one abrupt change with a variable position is considered in a temporal section of the signal; in the second, multiple variations are considered. Finally, the influence of the statistical characteristics on the location of the change point is explained in detail through simulation results with different artificial signals.
Keywords: Time series, fluctuation in statistical characteristics, optimal learning.
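A minimal sketch of the single-change-point case described above, assuming Gaussian segments with unknown means and a known noise level; the posterior over the change-point location is evaluated on a grid with a uniform prior. This is an illustrative simplification, not the authors' online inference scheme.

```python
import numpy as np

def changepoint_posterior(y, sigma=1.0):
    """Posterior P(k | y) that the mean shifts after index k (uniform prior over k)."""
    n = len(y)
    loglik = np.full(n - 1, -np.inf)
    for k in range(1, n):                      # split into y[:k] and y[k:]
        left, right = y[:k], y[k:]
        mu1, mu2 = left.mean(), right.mean()   # segment means (maximised out)
        sse = np.sum((left - mu1) ** 2) + np.sum((right - mu2) ** 2)
        loglik[k - 1] = -sse / (2.0 * sigma**2)
    post = np.exp(loglik - loglik.max())
    return post / post.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y = np.r_[rng.normal(0.0, 1.0, 60), rng.normal(1.5, 1.0, 40)]  # true change at index 60
    print(int(np.argmax(changepoint_posterior(y))) + 1)
```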
125 Experiment Study on the Influence of Tool Materials on the Drilling of Thick Stacked Plate of 2219 Aluminum Alloy
Authors: G. H. Li, M. Liu, H. J. Qi, Q. Zhu, W. Z. He
Abstract:
Drilling and riveting processes are widely used in the assembly of carrier rockets, which makes the efficiency and quality of drilling important factors affecting the assembly process. To address the problems in drilling the thick stacked plates (thickness larger than 10 mm) of a carrier rocket, such as drill breakage, high noise and burrs, an experimental study of the influence of tool material on drilling was carried out. The cutting force was measured by a piezoelectric dynamometer, the aperture was measured with an outline projector, and the burrs were observed and measured with a digital stereo microscope. From these measurements, the effects of tool material on drilling were analyzed in terms of drilling force, hole diameter, and burr formation. The results show that, compared with a carbide drill and a coated carbide drill, the drilling force of high speed steel is larger. However, the application of high speed steel also has some advantages: a higher number of holes can be obtained, the burr height is small, the exit is smooth with fewer slim burrs, and the tool experiences wear but not fracture. Therefore, the high speed steel tool is suitable for drilling thick stacked plates of 2219 aluminum alloy.
Keywords: 2219 aluminum alloy, thick stacked plate, drilling, tool material.
124 Shape Sensing and Damage Detection of Thin-Walled Cylinders Using an Inverse Finite Element Method
Authors: Ionel D. Craiu, Mihai Nedelcu
Abstract:
Thin-walled cylinders are often used by the offshore industry as columns of floating installations. Based on observed strains, the inverse Finite Element Method (iFEM) can reconstruct the deformation of structures, and Structural Health Monitoring uses this approach extensively. However, its accuracy is determined by the number of in-situ strain gauges, and for shell structures with complicated deformation this number can easily become too high for practical use. The Generalized Beam Theory (GBT) can model the complicated deformation of any thin-walled beam member as a linear combination of pre-specified cross-section deformation modes, and it uses bar finite elements as opposed to shell finite elements. Building on these benefits, this paper proposes an iFEM/GBT formulation for the shape sensing of thin-walled cylinders. This method significantly reduces the number of strain gauges compared to the traditional inverse-shell finite elements. Using numerical simulations, dent damage detection is achieved by comparing the strain distributions of the undamaged and damaged members. The effect of noise on strain measurements is also investigated.
Keywords: Damage detection, generalized beam theory, inverse finite element method, shape sensing.
123 Developing a New Vibration Analysis Calculative Method for Esfahan Subway Train and Railways Design, Manufacturing, and Construction
Authors: Omid A. Zargar
Abstract:
The simulated mass-and-spring evaluation method for subway and railway construction and installation systems is widely applied in the rail industry. This kind of design should optimize all related parameters to reduce the amount of vibration in cities, residential areas, historical zones and other critical locations. The finite element method can analyse such applications with excellent accuracy, but a simple, fast and user-friendly evaluation method is always required in subway industrial applications. In addition, process parameter optimization is strongly required in the railway industry to achieve an optimal design of railways with maximum safety, reliability and performance. Furthermore, it is important to reduce vibrations and the related maintenance costs as much as possible. In this paper, a simple but useful simulated mass-and-spring evaluation system is developed for the Esfahan subway construction. In addition, related recent patents and innovations in the worldwide rail industry, such as the suspension-mass tuned vibration reducer, the short-sleeper vibration attenuation fastener and the airtight track vibration-noise reducing fastener, are discussed in detail.
Keywords: Subway construction engineering, natural frequency, operation frequency, vibration analysis, polyurethane layer.
122 Solar Radiation Time Series Prediction
Authors: Cameron Hamilton, Walter Potter, Gerrit Hoogenboom, Ronald McClendon, Will Hobbs
Abstract:
A model was constructed to predict the amount of solar radiation that will make contact with the surface of the earth in a given location an hour into the future. This project was supported by the Southern Company to determine at what specific times during a given day of the year solar panels could be relied upon to produce energy in sufficient quantities. Due to their ability as universal function approximators, an artificial neural network was used to estimate the nonlinear pattern of solar radiation, which utilized measurements of weather conditions collected at the Griffin, Georgia weather station as inputs. A number of network configurations and training strategies were utilized, though a multilayer perceptron with a variety of hidden nodes trained with the resilient propagation algorithm consistently yielded the most accurate predictions. In addition, a modeled direct normal irradiance field and adjacent weather station data were used to bolster prediction accuracy. In later trials, the solar radiation field was preprocessed with a discrete wavelet transform with the aim of removing noise from the measurements. The current model provides predictions of solar radiation with a mean square error of 0.0042, though ongoing efforts are being made to further improve the model’s accuracy.
Keywords: Artificial Neural Networks, Resilient Propagation, Solar Radiation, Time Series Forecasting.
121 PSO Based Weight Selection and Fixed Structure Robust Loop Shaping Control for Pneumatic Servo System with 2DOF Controller
Authors: Randeep Kaur, Jyoti Ohri
Abstract:
This paper proposes a new technique to design a fixed-structure robust loop-shaping controller for a pneumatic servo system. A new method based on a particle swarm optimization (PSO) algorithm for tuning the weighting function parameters to design an H∞ controller is presented. The PSO algorithm is used to minimize the infinity norm of the transfer function of the nominal closed-loop system to obtain the optimal parameters of the weighting functions. The optimal stability margin is used as the objective in PSO for selecting the optimal weighting parameters; it is shown that the proposed method can simplify the design procedure of H∞ control and obtain an optimal robust controller for the pneumatic servo system. In addition, the order of the proposed controller is much lower than that of the conventional robust loop-shaping controller, making it easy to implement in practical work. A two-degree-of-freedom (2DOF) control design procedure is also proposed to improve tracking performance in the face of noise and disturbance. Simulation results demonstrate the advantages of the proposed controller in terms of simple structure and robustness against plant perturbations and disturbances.
Keywords: Robust control, Pneumatic Servosystem, PSO, H∞ control, 2DOF.
120 Vision-Based Collision Avoidance for Unmanned Aerial Vehicles by Recurrent Neural Networks
Authors: Yao-Hong Tsai
Abstract:
Owing to advances in sensor technology, video surveillance has become the main means of security control in every big city in the world. Surveillance is usually used by governments for intelligence gathering, the prevention of crime, the protection of a process, person, group or object, or the investigation of crime. Many surveillance systems based on computer vision technology have been developed in recent years. Moving target tracking is the most common task for an Unmanned Aerial Vehicle (UAV): finding and tracking objects of interest in mobile aerial surveillance for civilian applications. This paper focuses on vision-based collision avoidance for UAVs using recurrent neural networks. First, images from the cameras on the UAV are fused with a deep convolutional neural network. Then, a recurrent neural network is constructed to obtain high-level image features for object tracking and to extract low-level image features for noise reduction. The system distributes the computation of the whole pipeline across local and cloud platforms to efficiently perform object detection, tracking and collision avoidance for multiple UAVs. Experiments on several challenging datasets show that the proposed algorithm outperforms state-of-the-art methods.
Keywords: Unmanned aerial vehicle, object tracking, deep learning, collision avoidance.
119 The Effect of Cow Reproductive Traits on Lifetime Productivity and Longevity
Authors: Lāsma Cielava, Daina Jonkus, Līga Paura
Abstract:
The age at first calving (AFC) is one of the most important factors with a significant impact on cow productivity in different lactations and over the whole life. A belated AFC leads to reduced reproductive performance and is one of the main reasons for reduced longevity. Cows that calved in the period 2001-2007 and finished at least four lactations in this time were included in the database. Data were obtained from 68841 crossbred Holstein Black and White (HM), crossbred Latvian Brown (LB), and Latvian Brown genetic resources (LBGR) cows. Cows were distributed into four groups depending on the age at first calving. The longest lifespan was found for LBGR cows, but they were also characterized by the lowest lifetime milk yield and milk yield per life day. HM breed cows had the shortest lifespan, but over a lifespan of 2862.2 days they produced on average 37916.4 kg of milk, corresponding to 13.2 kg of milk per life day. HM breed cows were also characterized by longer calving intervals (CI) in the first four lactations, while LBGR cows had the shortest CI in the study group. Age at first calving significantly affected the length of the CI in different lactations (p<0.05). HM cows that calved for the first time at >30 months of age had the longest CI of all study groups in the fourth lactation (421.4 days). The LBGR cows were characterized by the shortest CI, with a slight increase in the second and third lactations. Age at first calving had a significant impact on the cows' age at each calving: cows with an age at first calving of <24 months (on average 580.5 days) were 2156.7 days (5.9 years) old at the fifth calving, whereas cows with an age at first calving of >30 months (932.6 days) were 2560.9 days (7.3 years) old at the fifth calving.
Keywords: Age at first calving, calving interval, longevity, milk yield.
118 Jeffrey's Prior for Unknown Sinusoidal Noise Model via Cramer-Rao Lower Bound
Authors: Samuel A. Phillips, Emmanuel A. Ayanlowo, Rasaki O. Olanrewaju, Olayode Fatoki
Abstract:
This paper employs the Jeffrey's prior technique for estimating the periodograms and frequency of a sinusoidal model for unknown noisy time-variant or oscillating events (data) in a Bayesian setting. The non-informative Jeffrey's prior was adopted for the posterior trigonometric function of the sinusoidal model, and Cramer-Rao Lower Bound (CRLB) inference was used to carve out the minimum variance needed to curb the invariance-structure effect for unknown noisy time-observational and repeated circular patterns. An average monthly oscillating temperature series measured in degrees Celsius (°C) from 1901 to 2014 was subjected to the posterior solution of the unknown noisy events of the sinusoidal model via Markov Chain Monte Carlo (MCMC). It was deduced not only that a period of two minutes is required to complete a cycle of temperature change from one particular degree Celsius to another, but also that the sinusoidal model via the CRLB-Jeffrey's prior for unknown noisy events produced a smaller posterior Maximum A Posteriori (MAP) estimate compared to known noisy events.
Keywords: Cramer-Rao Lower Bound (CRLB), Jeffrey's prior, Sinusoidal, Maximum A Posteriori (MAP), Markov Chain Monte Carlo (MCMC), Periodograms.
117 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation
Authors: Aicha Majda, Abdelhamid El Hassani
Abstract:
Lung CT image segmentation is a prerequisite in lung CT image analysis. Most conventional methods need post-processing to deal with abnormal lung CT scans such as lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of the two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor perturbations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined based on the patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are based on the new term. The graph is then created using these weights between its nodes. Finally, the segmentation is completed with the min-cut/max-flow algorithm. Experimental results show that the proposed method is accurate and efficient, and can directly provide explicit lung regions without any post-processing operations, in contrast to the standard method.
Keywords: Graph cuts, lung CT scan, lung parenchyma segmentation, patch-based similarity metric.
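A hedged sketch of a patch-based boundary term of the kind described above (not the authors' exact weighting): the penalty between two neighboring pixels is computed from the squared differences of their surrounding patches instead of from the two pixel intensities alone. The patch size and the Gaussian form of the weight are assumptions.

```python
import numpy as np

def patch_weight(img, p, q, half=2, sigma=25.0):
    """Boundary weight between neighboring interior pixels p and q based on patch similarity."""
    def patch(c):
        r0, c0 = c
        return img[r0 - half:r0 + half + 1, c0 - half:c0 + half + 1].astype(float)
    Pp, Pq = patch(p), patch(q)
    ssd = np.mean((Pp - Pq) ** 2)              # mean squared difference of the two patches
    return np.exp(-ssd / (2.0 * sigma**2))     # high weight = similar patches = costly to cut

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 255, size=(64, 64))  # stand-in for a CT slice
    print(patch_weight(img, (10, 10), (10, 11)))
```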
116 A Trainable Neural Network Ensemble for ECG Beat Classification
Authors: Atena Sajedin, Shokoufeh Zakernejad, Soheil Faridi, Mehrdad Javadi, Reza Ebrahimpour
Abstract:
This paper illustrates the use of a combined neural network model for the classification of electrocardiogram (ECG) beats. We present a trainable neural network ensemble approach to develop a customized electrocardiogram beat classifier in an effort to further improve the performance of ECG processing and to offer individualized health care. We present a three-stage technique for the detection of premature ventricular contractions (PVC) among normal beats and other heart diseases, consisting of denoising, feature extraction and classification. First, we investigate the application of the stationary wavelet transform (SWT) for noise reduction of the ECG signals. The feature extraction module then extracts 10 ECG morphological features and one timing interval feature. A number of multilayer perceptron (MLP) neural networks with different topologies are then designed. The performance of the different combination methods as well as the efficiency of the whole system is presented. Among them, stacked generalization, as the proposed trainable combined neural network model, possesses the highest recognition rate of around 95%. Therefore, this network proves to be a suitable candidate for ECG signal diagnosis systems. ECG samples attributed to the different ECG beat types were extracted from the MIT-BIH arrhythmia database for the study.
115 Optimization of Proton Exchange Membrane Fuel Cell Parameters Based on Modified Particle Swarm Algorithms
Authors: M. Dezvarei, S. Morovati
Abstract:
In recent years, the increasing usage of electrical energy has opened a wide field for investigating new methods of producing clean electricity with high reliability and cost management. Fuel cells are a new, clean generation technology that produces electricity and thermal energy together with high performance and no environmental pollution. With the expansion of fuel cell usage in different industrial networks, the identification and optimization of their parameters is very significant. This paper presents the optimization of proton exchange membrane fuel cell (PEMFC) parameters based on particle swarm optimization modified with real-valued mutation (RVM) and on clonal algorithms. Mathematical equations of this type of fuel cell are presented as the main model structure in the optimization process. Parameters optimized by the clonal and RVM algorithms are compared with the desired values in the presence and absence of measurement noise. This paper shows that these methods can improve the performance of traditional optimization methods. Simulation results are employed to analyze and compare the performance of these methodologies in optimizing the proton exchange membrane fuel cell parameters.
Keywords: Clonal algorithm, proton exchange membrane fuel cell, particle swarm optimization, real-valued mutation.
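A compact sketch of particle swarm optimization with a real-valued mutation step, of the kind referred to above; the fitness function (a placeholder sum-of-squares instead of the PEMFC voltage-error model), swarm size, and mutation rate are assumptions.

```python
import numpy as np

def pso_rvm(fitness, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, pm=0.1):
    rng = np.random.default_rng(0)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        # real-valued mutation: perturb a fraction of coordinates with Gaussian noise
        mask = rng.random(x.shape) < pm
        x = np.clip(x + mask * rng.normal(0.0, 0.1 * (hi - lo), size=x.shape), lo, hi)
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

if __name__ == "__main__":
    bounds = np.array([[-5.0, 5.0]] * 3)                  # hypothetical parameter ranges
    best, best_f = pso_rvm(lambda p: float(np.sum(p**2)), bounds)
    print(best.round(4), round(best_f, 6))
```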
114 Pharmaceutical Microencapsulation Technology for Development of Controlled Release Drug Delivery systems
Authors: Mahmood Ahmad, Asadullah Madni, Muhammad Usman, Abubakar Munir, Naveed Akhtar, Haji M. Shoaib Khan
Abstract:
This article demonstrates the development of a controlled-release system for an NSAID drug, diclofenac sodium, employing different ratios of ethyl cellulose. Diclofenac sodium and ethyl cellulose in different proportions were processed by microencapsulation based on a phase separation technique to formulate microcapsules. The prepared microcapsules were then compressed into tablets to obtain controlled-release oral formulations. In-vitro evaluation was performed by a dissolution test of each preparation in 900 ml of phosphate buffer solution of pH 7.2, maintained at 37 ± 0.5 °C and stirred at 50 rpm. At predetermined time intervals (0, 0.5, 1.0, 1.5, 2, 3, 4, 6, 8, 10, 12, 16, 20 and 24 h), the drug concentration in the collected samples was determined by UV spectrophotometer at 276 nm. The physical characteristics of the diclofenac sodium microcapsules were within the accepted range: they were off-white, free flowing and spherical in shape. The release profile of diclofenac sodium from the microcapsules was found to depend directly on the proportion of ethyl cellulose and the coat thickness. The in-vitro release pattern showed that at drug-to-polymer ratios of 1:1 and 1:2, the percentage of drug released in the first hour was 16.91% and 11.52%, respectively, compared with only 6.87% at a ratio of 1:3. The release pattern followed the Higuchi model. Tablet formulation F2 of the present study was found comparable in release profile to the marketed brand Phlogin-SR, and the microcapsules showed extended release beyond 24 h. Further, a good correlation was found between drug release and the proportion of ethyl cellulose in the microcapsules. Microencapsulation based on coacervation was found to be a good technique for controlling the release of diclofenac sodium in controlled-release formulations.
Keywords: Diclofenac sodium, microencapsulation technology, ethyl cellulose, in-vitro release profile.
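A short sketch of fitting the Higuchi model mentioned above, which relates cumulative drug release Q(t) to the square root of time, Q = kH * sqrt(t); the sample release data are hypothetical, not the study's measurements.

```python
import numpy as np

def higuchi_fit(t_hours, release_percent):
    """Least-squares fit of Q = kH * sqrt(t); returns kH and the coefficient of determination."""
    x = np.sqrt(np.asarray(t_hours, dtype=float))
    q = np.asarray(release_percent, dtype=float)
    kH = np.sum(x * q) / np.sum(x * x)         # slope of the line through the origin
    resid = q - kH * x
    r2 = 1.0 - np.sum(resid**2) / np.sum((q - q.mean())**2)
    return kH, r2

if __name__ == "__main__":
    t = [0.5, 1, 2, 4, 6, 8, 12, 24]           # hypothetical sampling times (h)
    q = [8, 12, 17, 24, 30, 34, 42, 60]        # hypothetical cumulative release (%)
    kH, r2 = higuchi_fit(t, q)
    print(round(kH, 2), round(r2, 3))
```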
113 Aggregation Scheduling Algorithms in Wireless Sensor Networks
Authors: Min Kyung An
Abstract:
In Wireless Sensor Networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental conditions and aggregates the data to a designated destination called a sink node. Important issues concerning data aggregation are time efficiency and energy consumption due to the limited energy, and therefore the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute the minimum latency schedule, that is, a schedule with the minimum number of timeslots, such that the sink node can receive the aggregated data from all the other nodes without any collision or interference. For this problem, two interference models, the graph model and the more realistic physical interference model known as Signal-to-Interference-plus-Noise Ratio (SINR), have been adopted with different power models, uniform power and non-uniform power (with or without power control), and different antenna models, omni-directional and directional antennas. In this survey article, as the problem has proven to be NP-hard, we present and compare several state-of-the-art approximation algorithms in various models on the basis of latency as the performance measure.
Keywords: Data aggregation, convergecast, gathering, approximation, interference, omni-directional, directional.
112 Reliable Damping Measurements of Solid Beams with Special Focus on the Boundary Conditions and Non-Contact Test Set-Ups
Authors: Ferhat Kadioglu, Ahmet Reha Gunay
Abstract:
Correct measurement of a structural damping value is an important issue for the reliable design of components exposed to vibratory and noise conditions. In the vibrating beam technique, the specimens under test inevitably interact with the measuring and exciting devices and with the boundary conditions of the test set-up. The aim of this study is to propose a vibrating beam method that offers non-contact dynamic measurement of solid beam specimens. To evaluate possible effects of the clamped portion of specimens with clamped-free ends on the dynamic values (damping and elastic modulus), the same measuring devices were used, and the results were compared to those obtained with free-free ends. To get a clear idea of the sensitivity of the damping values to the boundary conditions at low, medium and high damping levels, representative materials were subjected to the tests. The results show that specimens with low damping values are especially sensitive to the boundary conditions and that the most reliable structural damping values are obtained for specimens with free-free ends. For the damping values at the low levels, a deviation of about 368% was obtained between the specimens with free-free and clamped-free ends, yet for those having high inherent damping values, comparable results were obtained.
Keywords: Vibrating beam technique, dynamic values, damping, boundary conditions, non-contact measuring systems.
111 The Study of the Intelligent Fuzzy Weighted Input Estimation Method Combined with the Experiment Verification for the Multilayer Materials
Authors: Ming-Hui Lee, Tsung-Chien Chen, Tsu-Ping Yu, Horng-Yuan Jang
Abstract:
The innovative intelligent fuzzy weighted input estimation method (FWIEM) can be applied to the inverse heat conduction problem (IHCP) to estimate the unknown time-varying heat flux of multilayer materials, as presented in this paper. The feasibility of the method is verified by a temperature measurement experiment. The experimental module is designed using a copper sample stacked on four aluminum samples of different thicknesses. The bottoms of the copper samples are heated by a standard heat source, and the temperatures on the tops of the aluminum samples are measured with thermocouples. The temperature measurements are then used as inputs to the presented method to estimate the heat flux at the bottoms of the copper samples. The influence on the estimation of the temperature measurement of samples with different thicknesses, the process noise covariance Q, the weighting factor γ, the sampling time interval Δt, and the spatial discretization interval Δx is investigated through the experimental verification. The results show that this method is efficient and robust in estimating the unknown time-varying heat input of multilayer materials.
Keywords: Multilayer materials, input estimation method, IHCP, heat flux.
110 Broadband PowerLine Communications: Performance Analysis
Authors: Justinian Anatory, Nelson Theethayi, M. M. Kissaka, N. H. Mvungi
Abstract:
The power line channel is proposed as an alternative for broadband data transmission, especially in developing countries like Tanzania [1]. However, the channel is affected by stochastic attenuation and deep notches, which can limit the channel capacity and the achievable data rate. Various studies have characterized the channel without giving exactly the maximum performance and the limitation in data transfer rate, possibly owing to the complexity of the channel models used. In this paper, the channel performance of medium voltage, low voltage and indoor power line channels is presented. In the investigations, orthogonal frequency division multiplexing (OFDM) with phase shift keying (PSK) as the carrier modulation scheme is considered for indoor, medium voltage and low voltage channels with a typical ten branches, and Golay coding is also applied to the medium voltage channel. Deep notches are observed in the channel frequency responses at various frequencies, which can reduce the achievable data rate. Nevertheless, it is observed that a data rate of up to 240 Mbps is realized at a signal-to-noise ratio of about 50 dB for the indoor and low voltage channels, whereas for the medium voltage channel a typical link with ten branches is affected by strong multipath, and coding is required for feasible broadband data transfer.
Keywords: Powerline Communications, branched network, channel model, modulation, channel performance, OFDM.
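A small sketch illustrating how a notched power-line frequency response caps the achievable rate of a multi-carrier (OFDM) link, here using the Shannon bound per sub-carrier rather than the paper's OFDM-PSK analysis; the channel response, bandwidth, and SNR figures are illustrative assumptions, not the measured channels.

```python
import numpy as np

def ofdm_capacity(H, noise_psd, tx_psd, subcarrier_bw):
    """Shannon-bound rate (bit/s) summed over sub-carriers with channel gains H."""
    snr = tx_psd * np.abs(H) ** 2 / noise_psd
    return subcarrier_bw * np.sum(np.log2(1.0 + snr))

if __name__ == "__main__":
    n_sc, bw = 512, 30e6 / 512                 # hypothetical 30 MHz band, 512 sub-carriers
    f = np.arange(n_sc)
    # synthetic magnitude response with deep periodic notches (attenuation in dB -> linear)
    H = 10 ** (-(2.0 + 8.0 * np.abs(np.sin(2 * np.pi * f / 64))) / 20)
    rate = ofdm_capacity(H, noise_psd=1e-5, tx_psd=1.0, subcarrier_bw=bw)
    print(f"{rate / 1e6:.1f} Mbit/s")
```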
109 Image Classification and Accuracy Assessment Using the Confusion Matrix, Contingency Matrix, and Kappa Coefficient
Authors: F. F. Howard, C. B. Boye, I. Yakubu, J. S. Y. Kuma
Abstract:
One way to produce land use and land cover maps is through a procedure known as image classification, using the remote sensing technique. Numerous elements ought to be taken into consideration, including the availability of highly satisfactory Landsat imagery, secondary data and a precise classification process. The goal of this study was to classify and map the land use and land cover of the study area using remote sensing and Geospatial Information System (GIS) analysis. The classification was done using Landsat 8 satellite images acquired in December 2020 covering the study area. The Landsat image was downloaded from the USGS. The Landsat image with 30 m resolution was geo-referenced to the WGS_84 datum and the Universal Transverse Mercator (UTM) Zone 30N coordinate projection system. A radiometric correction was applied to the image to reduce its noise. This study consists of two sections: the Land Use/Land Cover (LULC) classification and the accuracy assessment using the confusion matrix, the contingency matrix and the Kappa coefficient. The LULC classes were vegetation (agriculture) (67.87%), water bodies (0.01%), mining areas (5.24%), forest (26.02%), and settlement (0.88%). An overall accuracy of 97.87% and a Kappa coefficient (K) of 97.3% were obtained for the confusion matrix, while an overall accuracy of 95.7% and a Kappa coefficient of 0.947 were obtained for the contingency matrix. The Kappa coefficients were rated as substantial; hence, the classified image is fit for further research.
Keywords: Confusion matrix, contingency matrix, kappa coefficient, land use/land cover, accuracy assessment.
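A minimal sketch of the overall-accuracy and Kappa-coefficient calculation from a confusion matrix, as used in the accuracy assessment above; the example matrix is hypothetical, not the study's data.

```python
import numpy as np

def accuracy_and_kappa(cm):
    """cm[i, j]: pixels of reference class j assigned to map class i."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed (overall) accuracy
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2    # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    return po, kappa

if __name__ == "__main__":
    cm = np.array([[120,   3,  2],
                   [  4, 260,  5],
                   [  1,   2, 90]])            # hypothetical 3-class confusion matrix
    po, kappa = accuracy_and_kappa(cm)
    print(f"overall accuracy = {po:.3f}, kappa = {kappa:.3f}")
```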
108 Comparison of Compression Ability Using DCT and Fractal Technique on Different Imaging Modalities
Authors: Sumathi Poobal, G. Ravindran
Abstract:
Image compression is one of the most important applications of Digital Image Processing. Advanced medical imaging requires storage of large quantities of digitized clinical data. Due to constrained bandwidth and storage capacity, however, a medical image must be compressed before transmission and storage. There are two types of compression methods, lossless and lossy. In lossless compression, the original image is retrieved without any distortion. In lossy compression, the reconstructed images contain some distortion. The Discrete Cosine Transform (DCT) and Fractal Image Compression (FIC) are types of lossy compression methods. This work shows that lossy compression methods can be chosen for medical image compression without significant degradation of the image quality. In this work, DCT and fractal compression using Partitioned Iterated Function Systems (PIFS) are applied to different imaging modalities such as CT scan, ultrasound, angiogram, X-ray and mammogram. Approximately 20 images are considered in each modality, and the average values of compression ratio and Peak Signal-to-Noise Ratio (PSNR) are computed and studied. The quality of the reconstructed image is judged by the PSNR values. Based on the results, it can be concluded that DCT has higher PSNR values and FIC has a higher compression ratio. Hence, in medical image compression, DCT can be used wherever picture quality is preferred, and FIC is used wherever compression of images for storage and transmission is the priority, without losing picture quality diagnostically.
Keywords: DCT, FIC, PIFS, PSNR.
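A short sketch of the lossy DCT route discussed above: the image is transformed block-wise with an 8x8 DCT, only the largest-magnitude coefficients are kept, and quality is reported as PSNR. The test image, block size, and keep-fraction are assumptions; FIC is not sketched here.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block_dct(img, block=8, keep=0.1):
    """Keep only the largest `keep` fraction of DCT coefficients in each block."""
    out = np.zeros_like(img, dtype=float)
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            b = img[r:r + block, c:c + block].astype(float)
            coeffs = dctn(b, norm="ortho")
            thresh = np.quantile(np.abs(coeffs), 1.0 - keep)
            coeffs[np.abs(coeffs) < thresh] = 0.0
            out[r:r + block, c:c + block] = idctn(coeffs, norm="ortho")
    return out

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(float) - reconstructed) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64))  # stand-in for a medical image
    rec = compress_block_dct(img, keep=0.25)
    print(f"PSNR = {psnr(img, rec):.2f} dB")
```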
107 Night-Time Traffic Light Detection Based On SVM with Geometric Moment Features
Authors: Hyun-Koo Kim, Young-Nam Shin, Sa-gong Kuk, Ju H. Park, Ho-Youl Jung
Abstract:
This paper presents an effective traffic light detection method for night-time conditions. First, candidate blobs of traffic lights are extracted from the RGB color image. The input image is represented in the dominant color domain using the color transform proposed by Ruta, and red- and green-dominant regions are then selected as candidates. After candidate blob selection, we carry out shape filtering for noise reduction using blob information such as length, area, and bounding-box area. A multi-class classifier based on an SVM (Support Vector Machine) is applied to the candidates. Three kinds of features are used: basic features such as blob width, height, center coordinates and area; brightness-based stochastic features; and, in particular, geometric moment values between the candidate region and the adjacent region, which are proposed and used to improve the detection performance. The proposed system is implemented on an Intel Core CPU with 2.80 GHz and 4 GB RAM and tested with urban and rural road videos. Through these tests, we show that the proposed method using PF, BMF, and GMF reaches a detection rate of up to 93% with an average computation time of 15 ms/frame.
Keywords: Night-time traffic light detection, multi-class classification, driving assistance system.
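A brief sketch of the geometric (raw and central) image moments on which such moment features are built; the blob mask is hypothetical, and this is not the exact feature proposed in the paper, which compares moments between the candidate and adjacent regions.

```python
import numpy as np

def raw_moment(mask, p, q):
    """m_pq = sum_x sum_y x^p * y^q * I(x, y) over a binary blob mask."""
    y, x = np.mgrid[:mask.shape[0], :mask.shape[1]]
    return np.sum((x ** p) * (y ** q) * mask)

def central_moment(mask, p, q):
    m00, m10, m01 = raw_moment(mask, 0, 0), raw_moment(mask, 1, 0), raw_moment(mask, 0, 1)
    cx, cy = m10 / m00, m01 / m00              # blob centroid
    y, x = np.mgrid[:mask.shape[0], :mask.shape[1]]
    return np.sum(((x - cx) ** p) * ((y - cy) ** q) * mask)

if __name__ == "__main__":
    mask = np.zeros((20, 20)); mask[5:12, 8:15] = 1.0    # hypothetical candidate blob
    print(raw_moment(mask, 0, 0), central_moment(mask, 2, 0), central_moment(mask, 0, 2))
```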
106 Precision Grinding of Titanium (Ti-6Al-4V) Alloy Using Nanolubrication
Authors: Ahmed A. D. Sarhan, Hong Wan Ping, M. Sayuti
Abstract:
In the current era of competitive machinery production, industries place more emphasis on product quality and cost reduction whilst abiding by pollution-prevention policies. In addressing these concerns, industries are aware that the effectiveness of existing lubrication systems must be improved to achieve power-efficient and pollution-preventing machining processes. As such, this research studies a plausible solution to the issue in grinding titanium alloy (Ti-6Al-4V) by using nanolubrication as an alternative to flood grinding. The aim of this research is to evaluate the optimum condition of grinding force and surface roughness using an MQL lubricating system delivering nano-oil, i.e. normal mineral oil mixed with Silicon Dioxide (SiO2) at different levels of weight concentration. The Taguchi Design of Experiment (DoE) method is carried out using a standard Taguchi orthogonal array L16(43) to find the optimized combination of SiO2 weight concentration, nozzle orientation and MQL pressure. Surface roughness and grinding force are also analyzed using the signal-to-noise (S/N) ratio to determine the best level of each factor tested. Subsequently, the best combination of parameters is tested over a period of time, and the results are compared with the conventional grinding methods of dry and flood conditions. The results show a positive performance of MQL nanolubrication.
Keywords: Grinding, MQL, precision grinding, Taguchi optimization, titanium alloy.
105 Taguchi-Based Optimization of Surface Roughness and Dimensional Accuracy in Wire EDM Process with S7 Heat Treated Steel
Authors: Joseph C. Chen, Joshua Cox
Abstract:
This research focuses on the use of the Taguchi method to reduce the surface roughness and improve the dimensional accuracy of parts machined by Wire Electrical Discharge Machining (EDM) from S7 heat-treated steel. Due to its high impact toughness, the material is a candidate for a wide variety of tooling applications which require high dimensional precision and a desired surface roughness. This paper demonstrates that the Taguchi Parameter Design methodology is able to optimize both dimensional accuracy and surface roughness successfully by investigating seven controllable wire-EDM parameters: pulse on time (ON), pulse off time (OFF), servo voltage (SV), voltage (V), servo feed (SF), wire tension (WT), and wire speed (WS). The temperature of the water in the wire EDM process is investigated as the noise factor in this research. Experimental design and analysis based on L18 Taguchi orthogonal arrays are conducted. This paper demonstrates that the Taguchi-based system enables the wire EDM process to produce (1) high-precision parts with an average dimension of 0.6601 inches, while the desired dimension is 0.6600 inches; and (2) a surface roughness of 1.7322 microns, which is significantly improved from 2.8160 microns.
Keywords: Taguchi parameter design, surface roughness, dimensional accuracy, Wire EDM.
104 Implementation of a Multimodal Biometrics Recognition System with Combined Palm Print and Iris Features
Authors: Rabab M. Ramadan, Elaraby A. Elgallad
Abstract:
With their extensive application, unimodal biometric systems face a diversity of problems such as signal and background noise, distortion, and environmental differences. Therefore, multimodal biometric systems have been proposed to solve these problems. This paper introduces a bimodal biometric recognition system based on features extracted from the human palm print and iris. Palm print biometrics is a fairly new, evolving technology used to identify people by their palm features. The iris is a strong competitor, together with the face and fingerprints, for inclusion in multimodal recognition systems. In this research, we introduce an algorithm for combining the palm- and iris-extracted features using a texture-based descriptor, the Scale Invariant Feature Transform (SIFT). Since the feature sets are non-homogeneous, as features of different biometric modalities are used, these features are concatenated to form a single feature vector. Particle swarm optimization (PSO) is used as a feature selection technique to reduce the dimensionality of the feature vector. The proposed algorithm is applied to the Indian Institute of Technology Delhi (IITD) database, and its performance is compared with various iris recognition algorithms found in the literature.
Keywords: Iris recognition, particle swarm optimization, feature extraction, feature selection, palm print, scale invariant feature transform.