Search results for: Genetic algorithm optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4796


626 The Customization of 3D Last Form Design Based On Weighted Blending

Authors: Shih-Wen Hsiao, Chu-Hsuan Lee, Rong-Qi Chen

Abstract:

The last is regarded as the critical foundation of shoe design and development: it not only determines wearing comfort but also supports shoe styling and manufacturing. To enhance the efficiency and applicability of last development, a computer-aided methodology for customized last form design is proposed in this study. Reverse engineering is applied to scan the last form, minimum-energy criteria are then used to revise surface continuity, and the surface of the last is reconstructed from the feature curves of the scanned last. Once the surface is reconstructed, and building on the proposed last form reconstruction module, the weighted arithmetic mean method is applied to shape morphing of the control mesh of the last, which differs from conventional grading, and a subdivision algorithm is used to create the last mesh surface. In this way, foot-fitting 3D last forms of different sizes are generated from the original form while its functional features are retained. Finally, the practicability of the proposed methodology is verified through case studies.
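
As a rough illustration of the weighted arithmetic mean step described above, the following sketch blends the control vertices of two last meshes of different sizes. The array shapes, weights and vertex layout are hypothetical placeholders, not taken from the paper, and the subdivision stage is omitted.

import numpy as np

# Hypothetical control meshes: two lasts of different sizes, each stored as
# an (n_vertices, 3) array of control-point coordinates in the same topology.
last_small = np.random.rand(200, 3) * 0.9   # stand-in for a scanned small-size last
last_large = np.random.rand(200, 3) * 1.1   # stand-in for a scanned large-size last

def blend_last(mesh_a, mesh_b, w_a, w_b):
    """Weighted arithmetic mean of two control meshes with identical topology."""
    return (w_a * mesh_a + w_b * mesh_b) / (w_a + w_b)

# A customized in-between size is obtained by weighting the two source lasts;
# the blended control mesh would then be fed to a subdivision scheme.
custom_last = blend_last(last_small, last_large, w_a=0.3, w_b=0.7)
print(custom_last.shape)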

Keywords: 3D last design, Customization, Reverse engineering, Weighted morphing, Shape blending.

625 Neural Network Based Icing Identification and Fault Tolerant Control of an A340 Aircraft

Authors: F. Caliskan

Abstract:

This paper presents a Neural Network (NN) identification of icing parameters in an A340 aircraft and a reconfiguration technique to keep the A/C performance close to the performance prior to icing. Five aircraft parameters are assumed to be considerably affected by icing. The off-line training for identifying the clear and iced dynamics is based on the Levenberg-Marquardt backpropagation algorithm. The icing parameters are located in the system matrix. The physical locations of the icing are assumed to be the right and left wings. The reconfiguration is based on the technique known as the control mixer approach or pseudo-inverse technique. This technique generates a new control input vector such that the A/C dynamics are not much affected by icing. In the simulations, the longitudinal and lateral dynamics of an Airbus A340 aircraft model are considered, and the stability derivatives affected by icing are identified. The simulation results show successful NN identification of the icing parameters and reconfigured flight dynamics whose performance is similar to that before icing. In other words, the destabilizing icing effect is compensated.
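
The control-mixer (pseudo-inverse) idea can be sketched as follows: given a nominal input matrix and a degraded (iced) input matrix, a mixing gain redistributes the commanded inputs so that the iced dynamics approximately reproduce the nominal control action. The small matrices below are hypothetical placeholders, not the A340 model.

import numpy as np

# Hypothetical nominal and iced control-effectiveness matrices (states x inputs).
B_nominal = np.array([[1.0, 0.2],
                      [0.1, 0.9],
                      [0.0, 0.5]])
B_iced = 0.7 * B_nominal          # icing assumed to reduce control effectiveness

# Control mixer: choose M so that B_iced @ M approximates B_nominal.
M = np.linalg.pinv(B_iced) @ B_nominal

u_commanded = np.array([0.5, -0.2])        # input computed for the clean aircraft
u_reconfigured = M @ u_commanded           # input actually applied after icing

# The reconfigured input reproduces (approximately) the nominal control action.
print(np.allclose(B_iced @ u_reconfigured, B_nominal @ u_commanded))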

Keywords: Aircraft Icing, Stability Derivatives, Neural Network Identification, Reconfiguration.

624 Defining a Pathway to Zero Energy Building: A Case Study on Retrofitting an Old Office Building into a Net Zero Energy Building for Hot-Humid Climate

Authors: Kwame B. O. Amoah

Abstract:

This paper focuses on retrofitting an old existing office building into a net-zero energy building (NZEB). An existing small office building in Melbourne, Florida, was chosen as a case study to integrate state-of-the-art design strategies and energy-efficient building systems to improve building performance and reduce energy consumption. The study aimed to explore possible ways to maximize energy savings and renewable energy generation sources to cover the building's remaining energy needs necessary to achieve net-zero energy goals. A series of retrofit options were reviewed and adopted, with some significant additional decision considerations. The detailed processes and considerations leading to zero energy are well documented in this study, with lessons learned adequately outlined. Based on building energy simulations, multiple design considerations were investigated, such as emerging state-of-the-art technologies, material selection, improvements to the building envelope, optimization of the HVAC and lighting systems, occupancy load analysis, and the application of renewable energy sources. A comparative analysis of the simulation results was used to determine how specific techniques led to energy savings and cost reductions. The research results indicate that this small office building can reach net-zero energy use after appropriate design modifications and the integration of renewable energy sources.

Keywords: Energy consumption, building energy analysis, energy retrofits, energy-efficiency.

623 Use of Fuzzy Edge Image in Block Truncation Coding for Image Compression

Authors: Amarunnishad T.M., Govindan V.K., Abraham T. Mathew

Abstract:

An image compression method has been developed using a fuzzy edge image together with the basic Block Truncation Coding (BTC) algorithm. The fuzzy edge image has been validated against classical edge detectors, using the well-known Canny edge detector as a reference, prior to its use in the proposed method. The bit plane generated by the conventional BTC method is replaced with a fuzzy bit plane generated by a logical OR operation between the fuzzy edge image and the corresponding conventional BTC bit plane. The input image is encoded with the block mean, the block standard deviation and the fuzzy bit plane. The proposed method has been tested on 8 bits/pixel test images of size 512×512 and found to be superior, with better Peak Signal to Noise Ratio (PSNR), when compared to the conventional BTC and adaptive bit plane selection BTC (ABTC) methods. Raggedness, jagged appearance and ringing artifacts at sharp edges are greatly reduced in the images reconstructed by the proposed method with the fuzzy bit plane.
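
A minimal sketch of the block-level encoding step: the conventional BTC bit plane of a block is OR-ed with the corresponding block of an edge image, and the block is stored as mean, standard deviation and fused bit plane. The 4x4 block size, threshold rule and toy edge mask are assumptions for illustration, not the fuzzy edge detector of the paper.

import numpy as np

def btc_encode_block(block, edge_block):
    """Encode one image block with BTC, OR-ing the BTC bit plane with an edge bit plane."""
    mean, std = block.mean(), block.std()
    btc_plane = block >= mean                 # conventional BTC bit plane
    fused_plane = btc_plane | edge_block      # logical OR with edge information
    return mean, std, fused_plane

def btc_decode_block(mean, std, plane):
    """Reconstruct a block from its mean, standard deviation and bit plane."""
    q, n = plane.sum(), plane.size
    if q in (0, n):                           # flat block: no variation to restore
        return np.full(plane.shape, mean)
    high = mean + std * np.sqrt((n - q) / q)
    low = mean - std * np.sqrt(q / (n - q))
    return np.where(plane, high, low)

block = np.array([[10, 12, 200, 210],
                  [11, 13, 205, 208],
                  [ 9, 14, 198, 202],
                  [10, 15, 201, 207]], dtype=float)
edge_block = np.zeros((4, 4), dtype=bool)
edge_block[:, 2] = True                       # toy edge mask along the intensity jump

mean, std, plane = btc_encode_block(block, edge_block)
print(btc_decode_block(mean, std, plane).round(1))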

Keywords: Image compression, Edge detection, Ground truth image, Peak signal to noise ratio

622 On Mobile Checkpointing using Index and Time Together

Authors: Awadhesh Kumar Singh

Abstract:

Checkpointing is one of the commonly used techniques to provide fault tolerance in distributed systems so that the system can operate even if one or more components have failed. However, mobile computing systems are constrained by low bandwidth, mobility, lack of stable storage, frequent disconnections and limited battery life. Hence, checkpointing protocols with fewer synchronization messages and fewer checkpoints are preferred in mobile environments. There are two different, although not orthogonal, approaches to checkpointing mobile computing systems: time-based and index-based. Our protocol is a fusion of these two approaches, though not the first of its kind. In the present exposition, an index-based checkpointing protocol has been developed which uses time to indirectly coordinate the creation of consistent global checkpoints for mobile computing systems. The proposed algorithm is non-blocking, adaptive, and does not use any control message. Compared to other contemporary checkpointing algorithms, it is computationally more efficient because it takes fewer checkpoints and does not need to compute dependency relationships. A brief account of important and relevant works in both fields, time-based and index-based, has also been included in the presentation.

Keywords: Checkpointing, forced checkpoint, mobile computing, recovery, time-coordinated.

621 Generalized Maximal Ratio Combining as a Supra-optimal Receiver Diversity Scheme

Authors: Jean-Pierre Dubois, Rania Minkara, Rafic Ayoubi

Abstract:

Maximal Ratio Combining (MRC) is considered the most complex combining technique as it requires channel coefficient estimation. It results in the lowest bit error rate (BER) compared to all other combining techniques. However, the BER starts to deteriorate as errors are introduced into the channel coefficient estimates. A novel combining technique, termed Generalized Maximal Ratio Combining (GMRC) with a polynomial kernel, yields a BER identical to MRC with perfect channel estimation and a lower BER in the presence of channel estimation errors. We show that GMRC outperforms the optimal MRC scheme in general and we hereinafter introduce it to the scientific community as a new “supra-optimal” algorithm. Since diversity combining is especially effective in small femto- and pico-cells, internet-associated wireless peripheral systems stand to benefit most from GMRC. As a result, many spin-off applications can be made to IP-based 4th generation networks.
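
For reference, classical MRC weights each diversity branch by the conjugate of its channel estimate before summation; GMRC as described above generalizes the weighting with a polynomial kernel, which is not reproduced here. The sketch shows plain MRC over a toy flat-fading BPSK model with a deliberately imperfect channel estimate; all parameters are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n_branches, n_symbols = 4, 1000

symbols = rng.choice([-1.0, 1.0], size=n_symbols)            # BPSK symbols
h = (rng.standard_normal((n_branches, 1)) +
     1j * rng.standard_normal((n_branches, 1))) / np.sqrt(2)  # Rayleigh channel gains
noise = 0.3 * (rng.standard_normal((n_branches, n_symbols)) +
               1j * rng.standard_normal((n_branches, n_symbols)))
received = h * symbols + noise

h_est = h + 0.05 * rng.standard_normal(h.shape)               # imperfect channel estimate

# Maximal ratio combining: weight each branch by the conjugate channel estimate.
combined = np.sum(np.conj(h_est) * received, axis=0)
decisions = np.where(combined.real >= 0, 1.0, -1.0)

print("BER:", np.mean(decisions != symbols))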

Keywords: Bit error rate, femto-internet cells, generalized maximal ratio combining, signal-to-scattering noise ratio.

620 An Energy Efficient Cluster Formation Protocol with Low Latency In Wireless Sensor Networks

Authors: A. Allirani, M. Suganthi

Abstract:

Data gathering is an essential operation in wireless sensor network applications, and it requires energy-efficient techniques to increase the lifetime of the network. Similarly, clustering is an effective technique to improve the energy efficiency and network lifetime of wireless sensor networks. In this paper, an energy-efficient cluster formation protocol is proposed with the objective of achieving low energy dissipation and latency without sacrificing application-specific quality. The objective is achieved by applying randomized, adaptive, self-configuring cluster formation and localized control for data transfers. It involves application-specific data processing, such as data aggregation or compression. The cluster formation algorithm allows each node to make independent decisions, so as to generate good clusters in the end. Simulation results show that the proposed protocol utilizes minimum energy and latency for cluster formation, thereby reducing the overhead of the protocol.
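
As an illustration of the randomized, self-configuring cluster-formation idea, and not the authors' specific protocol, the sketch below shows a LEACH-style distributed cluster-head election in which each node decides locally using a rotating probability threshold. The network size, head probability and rotation rule are illustrative assumptions.

import random

def elect_cluster_heads(node_ids, p=0.05, round_no=0, past_heads=frozenset()):
    """Randomized, distributed cluster-head election: each node decides locally
    whether to become a head this round (LEACH-style rotating threshold)."""
    threshold = p / (1 - p * (round_no % int(1 / p)))
    heads = []
    for node in node_ids:
        if node in past_heads:           # nodes that recently served skip a cycle
            continue
        if random.random() < threshold:
            heads.append(node)
    return heads

nodes = list(range(100))
heads = elect_cluster_heads(nodes, p=0.05, round_no=3)
print("cluster heads this round:", heads)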

Keywords: Sensor networks, Low latency, Energy sorting protocol, data processing, Cluster formation.

619 Depth Controls of an Autonomous Underwater Vehicle by Neurocontrollers for Enhanced Situational Awareness

Authors: Igor Astrov, Andrus Pedai

Abstract:

This paper focuses on a critical component of situational awareness (SA): the neural control of autonomous constant-depth flight of an autonomous underwater vehicle (AUV). Autonomous constant-depth flight is a challenging but important task for AUVs to achieve a high level of autonomy under adverse conditions. The fundamental requirements for constant-depth flight are knowledge of the depth and a properly designed controller to govern the process. The AUV, named VORAM, is used as a model for the verification of the proposed hybrid control algorithm. Three neural network controllers, namely NARMA-L2 controllers, are designed for fast and stable diving maneuvers of the chosen AUV model. This hybrid control strategy has been verified by simulation of diving maneuvers using the Simulink software package and demonstrated good performance for fast SA in real-time search-and-rescue operations.

Keywords: Autonomous underwater vehicles, depth control, neurocontrollers, situational awareness.

618 An Improved Illumination Normalization based on Anisotropic Smoothing for Face Recognition

Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Seongwon Cho

Abstract:

Robust face recognition under various illumination environments is very difficult and needs to be accomplished for successful commercialization. In this paper, we propose an improved illumination normalization method for face recognition. Illumination normalization based on anisotropic smoothing is known to be effective among illumination normalization methods, but it deteriorates the intensity contrast of the original image and blurs sharp edges. The proposed method improves the previous anisotropic smoothing-based illumination normalization method so that it increases the intensity contrast and enhances the edges while diminishing the effect of illumination variations. As a result of these improvements, face images preprocessed by the proposed illumination normalization method have more distinctive feature vectors (Gabor feature vectors) for face recognition. Through face recognition experiments based on Gabor feature vector similarity, the effectiveness of the proposed illumination normalization method is verified.
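
The anisotropic smoothing used as the starting point above can be sketched with a Perona-Malik style diffusion step, which smooths homogeneous regions while diffusing less across strong gradients. The conduction function, step size and iteration count are illustrative choices, and the contrast and edge enhancements proposed in the paper are not included.

import numpy as np

def anisotropic_smooth(image, n_iter=20, kappa=30.0, step=0.2):
    """Perona-Malik style anisotropic diffusion on a 2D image."""
    img = image.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences to the four neighbours (wrap-around at borders).
        d_n = np.roll(img, -1, axis=0) - img
        d_s = np.roll(img, 1, axis=0) - img
        d_e = np.roll(img, -1, axis=1) - img
        d_w = np.roll(img, 1, axis=1) - img
        # Conduction coefficients shrink where the gradient is large (edges).
        c = lambda d: np.exp(-(d / kappa) ** 2)
        img += step * (c(d_n) * d_n + c(d_s) * d_s + c(d_e) * d_e + c(d_w) * d_w)
    return img

toy = np.zeros((32, 32))
toy[:, 16:] = 100.0
toy += np.random.default_rng(1).normal(0, 5, toy.shape)   # noisy step edge
smoothed = anisotropic_smooth(toy)
print(smoothed[:, 14:18].mean(axis=0).round(1))            # edge largely preserved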

Keywords: Illumination Normalization, Face Recognition, Anisotropic smoothing, Gabor feature vector.

617 Modified Naïve Bayes Based Prediction Modeling for Crop Yield Prediction

Authors: Kefaya Qaddoum

Abstract:

Most greenhouse growers desire a predictable amount of yield in order to accurately meet market requirements. The purpose of this paper is to model a simple but often satisfactory supervised classification method. The original naive Bayes has a serious weakness: it retains redundant predictors. In this paper, a regularization technique is used to obtain a computationally efficient classifier based on naive Bayes. The suggested construction, using an L1 penalty, is capable of removing redundant predictors, where a modification of the LARS algorithm is devised to solve this problem, making the method applicable to a wide range of data. In the experimental section, a study was conducted to examine the effect of redundant and irrelevant predictors and to test the method on a WSG dataset of tomato yields, where there are many more predictors than data points and the pressing need to predict weekly yield is the goal of this approach. Finally, the modified approach is compared with several naive Bayes variants and other classification algorithms (SVM and kNN), and is shown to perform fairly well.
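
The authors' modified LARS procedure is not reproduced here, but the general idea of combining an L1 penalty with naive Bayes can be sketched with off-the-shelf tools: an L1-penalized fit discards redundant predictors, and plain naive Bayes is then trained on the survivors. The synthetic data, penalty strength and two-stage split are illustrative assumptions.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n, p_informative, p_redundant = 200, 5, 45
X_inf = rng.standard_normal((n, p_informative))
X_red = rng.standard_normal((n, p_redundant))      # irrelevant/redundant predictors
X = np.hstack([X_inf, X_red])
y = (X_inf.sum(axis=1) > 0).astype(int)            # target depends only on X_inf

# Step 1: L1-penalised fit drives coefficients of redundant predictors to zero.
lasso = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)

# Step 2: plain naive Bayes on the surviving predictors.
nb = GaussianNB().fit(X[:, selected], y)
print("kept predictors:", selected)
print("training accuracy:", nb.score(X[:, selected], y))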

Keywords: Tomato yields prediction, naive Bayes, redundancy

616 Hypolipidemic and Antioxidant Effects of Black Tea Extract and Quercetin in Atherosclerotic Rats

Authors: Wahyu Widowati, Hana Ratnawati, Tjandrawati Mozefis, Dwiyati Pujimulyani, Yelliantty Yelliantty

Abstract:

Background: Atherosclerosis is the main cause of cardiovascular disease (CVD), with a complex and multifactorial process including atherogenic lipoprotein, oxidized low density lipoprotein (LDL), endothelial dysfunction, plaque stability, vascular inflammation, thrombotic and fibrinolytic disorders, exercise and genetic factors. Epidemiological studies have shown tea consumption to be inversely associated with the development and progression of atherosclerosis. The research objectives: to elucidate the hypolipidemic and antioxidant effects, as well as the ability to improve coronary artery histopathology, of black tea extract (BTE) and quercetin in atherosclerotic rats. Methods: The antioxidant activity was determined using serum Superoxide Dismutase (SOD) activity and the plasma lipid peroxidation product (malondialdehyde, MDA), together with the lipid profile, including total cholesterol, LDL, triglyceride (TG) and High Density Lipoprotein (HDL), of atherosclerotic rats. To induce atherosclerosis, rats were given cholesterol and cholic acid in feed for ten weeks until they showed atherosclerotic symptoms, with narrowed arteries and foam cells in the artery wall. After the rats developed atherosclerosis, the high-cholesterol feed and cholic acid were stopped and the rats were given BTE at 450, 300 or 150 mg/kg body weight (BW) daily or quercetin at 15, 10 or 5 mg/kg BW daily, compared to rats given vitamin E 60 mg/kg BW, simvastatin 2.7 mg/kg BW or probucol 30 mg/kg BW daily, for 21 days (first treatment) and 42 days (second treatment), along with a negative control (normal feed) and a positive control (atherosclerotic rats). Results: BTE and quercetin could lower total cholesterol, triglyceride, LDL and MDA and increase HDL and SOD, comparably with simvastatin and probucol for both the 21-day and 42-day treatments, as well as improve coronary artery histopathology. Conclusions: BTE and quercetin have hypolipidemic and antioxidant effects, and improve coronary artery histopathology in atherosclerotic rats.

Keywords: Black tea, quercetin, atherosclerosis, antioxidant, hypolipidemic, cardiovascular disease.

615 Automatic Lip Contour Tracking and Visual Character Recognition for Computerized Lip Reading

Authors: Harshit Mehrotra, Gaurav Agrawal, M.C. Srivastava

Abstract:

Computerized lip reading has been one of the most actively researched areas of computer vision in the recent past because of its crime-fighting potential and invariance to the acoustic environment. However, several factors like fast speech, bad pronunciation, poor illumination, movement of the face, moustaches and beards make lip reading difficult. In the present work, we propose a solution for automatically tracking the lip contour and recognizing letters of the English language spoken by speakers using the information available from lip movements. A level set method with a contour velocity model is used for tracking the lip contour, and a feature vector of lip movements is then obtained. Character recognition is performed using a modified k nearest neighbor algorithm which assigns more weight to nearer neighbors. The proposed system has been found to have an accuracy of 73.3% for character recognition with speaker lip movements as the only input and without using any speech recognition system in parallel. The approach used in this work is found to serve the purpose of lip reading well when the size of the database is small.
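
The "modified k nearest neighbor which assigns more weight to nearer neighbors" can be sketched as an inverse-distance-weighted vote. The feature vectors and labels below are synthetic placeholders rather than real lip-movement features.

import numpy as np

def weighted_knn_predict(X_train, y_train, x, k=5, eps=1e-9):
    """k-NN vote where each neighbour's vote is weighted by 1/distance,
    so closer neighbours count more."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = {}
    for idx in nearest:
        votes[y_train[idx]] = votes.get(y_train[idx], 0.0) + 1.0 / (dists[idx] + eps)
    return max(votes, key=votes.get)

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(3, 1, (20, 8))])
y_train = np.array(['A'] * 20 + ['B'] * 20)        # stand-ins for letter classes
print(weighted_knn_predict(X_train, y_train, rng.normal(3, 1, 8)))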

Keywords: Contour Velocity Model, Lip Contour Tracking, Lip Reading, Visual Character Recognition.

614 SNR Classification Using Multiple CNNs

Authors: Thinh Ngo, Paul Rad, Brian Kelley

Abstract:

Noise estimation is essential in today's wireless systems for power control, adaptive modulation, interference suppression and quality of service. Deep learning (DL) has already been applied in the physical layer for modulation and signal classification. Unacceptably low accuracy, of less than 50%, is found to undermine the traditional application of DL classification to SNR prediction. In this paper, we use a divide-and-conquer algorithm and a classifier fusion method to simplify SNR classification and therefore enhance DL learning and prediction. Specifically, multiple CNNs are used for classification rather than a single CNN. Each CNN performs a binary classification for a single SNR threshold with two labels: less than, or greater than or equal to, that threshold. Together, multiple CNNs are combined to effectively classify over a range of SNR values from −20 ≤ SNR ≤ 32 dB. We use pre-trained CNNs to predict SNR over a wide range of joint channel parameters including multiple Doppler shifts (0, 60, 120 Hz), power-delay profiles, and signal modulation types (QPSK, 16-QAM, 64-QAM). The approach achieves individual SNR prediction accuracy of 92%, composite accuracy of 70% and prediction convergence one order of magnitude faster than that of traditional estimation.
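
The divide-and-conquer fusion described above, one binary "SNR greater than or equal to threshold" classifier per level combined into a single estimate, can be sketched without the CNNs themselves. Here each trained CNN is replaced by a stub returning a probability, and the fusion simply reports the highest threshold whose classifier still answers "greater than or equal"; the threshold grid, stub and fusion rule are illustrative assumptions, not the paper's exact scheme.

import numpy as np

THRESHOLDS_DB = np.arange(-20, 34, 2)        # one binary classifier per SNR level

def fake_binary_classifier(snr_true, threshold_db, rng):
    """Stand-in for a trained CNN: returns P(SNR >= threshold) for one sample."""
    margin = snr_true - threshold_db
    return 1.0 / (1.0 + np.exp(-margin + rng.normal(0, 0.5)))

def fuse(probabilities, thresholds):
    """Classifier fusion: predicted SNR bin is the highest threshold whose
    binary classifier still reports 'greater than or equal'."""
    exceeded = probabilities >= 0.5
    if not exceeded.any():
        return thresholds[0]
    return thresholds[np.max(np.flatnonzero(exceeded))]

rng = np.random.default_rng(0)
snr_true = 11.0
probs = np.array([fake_binary_classifier(snr_true, t, rng) for t in THRESHOLDS_DB])
print("predicted SNR bin (dB):", fuse(probs, THRESHOLDS_DB))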

Keywords: Classification, classifier fusion, CNN, Deep Learning, prediction, SNR.

613 OCR for Script Identification of Hindi (Devnagari) Numerals using Feature Sub Selection by Means of End-Point with Neuro-Memetic Model

Authors: Banashree N. P., R. Vasanta

Abstract:

Recognition of Indian language scripts is a challenging problem. In Optical Character Recognition (OCR), a character or symbol to be recognized can be a machine-printed or handwritten character/numeral. There are several approaches to the recognition of numerals/characters, depending on the type of features extracted and the different ways of extracting them. This paper proposes a recognition scheme for handwritten Hindi (Devnagari) numerals, one of the most widely used scripts in the Indian subcontinent. Our work focuses on a global feature extraction technique using end-point information extracted from images of isolated numerals. These feature vectors are fed to a neuro-memetic model [18] that has been trained to recognize Hindi numerals. The prototype of the system has been tested on a variety of numeral images. In the proposed scheme, data sets are fed to the neuro-memetic algorithm, which identifies the rule with the highest fitness value (nearly 100%), and the template associated with this rule gives the identified numeral. Experimental results show a recognition rate of 92-97%, compared to other models.

Keywords: OCR, Global Feature, End-Points, Neuro-Memetic model.

612 Study on the Derivatization Process Using N-O-bis-(trimethylsilyl)-trifluoroacetamide, N-(tert-butyldimethylsilyl)-N-methyltrifluoroacetamide, Trimethylsilydiazomethane for the Determination of Fecal Sterols by Gas Chromatography-Mass Spectrometry

Authors: Jingming Wu, Ruikang Hu, Junqi Yue, Zhaoguang Yang, Lifeng Zhang

Abstract:

Fecal sterol has been proposed as a chemical indicator of human fecal pollution even when fecal coliform populations have diminished due to water chlorination or toxic effects of industrial effluents. This paper describes an improved derivatization procedure for simultaneous determination of four fecal sterols, including coprostanol, epicholestanol, cholesterol and cholestanol, using gas chromatography-mass spectrometry (GC-MS), via an optimization study on silylation procedures using N-O-bis-(trimethylsilyl)-trifluoroacetamide (BSTFA) and N-(tert-butyldimethylsilyl)-N-methyltrifluoroacetamide (MTBSTFA), which lead to the formation of trimethylsilyl (TMS) and tert-butyldimethylsilyl (TBS) derivatives, respectively. Two derivatization processes, injection-port derivatization and water bath derivatization (60 °C, 1 h), were inspected and compared. Furthermore, the methylation procedure at 25 °C for 2 h with trimethylsilydiazomethane (TMSD) for fecal sterol analysis was also studied. It was found that most of the TMS derivatives demonstrated the highest sensitivities, followed by the methylated derivatives. For the BSTFA or MTBSTFA derivatization processes, the simple injection-port derivatization process could achieve the same efficiency as the tedious water bath derivatization procedure.

Keywords: Fecal Sterols, Methylation, Silylation, BSTFA, MTBSTFA, TMSD, GC-MS.

611 Forensic Medical Capacities of Research of Saliva Stains on Physical Evidence after Washing

Authors: Saule Mussabekova

Abstract:

Recent advances in genetics have sharply increased the capacity to produce reliable evidence in forensic examinations. Traces of biological origin are thus important sources of information about a crime. Currently, sexual offenses have increased around the world, and among them are those in which the criminals use various detergents to remove traces of their crime. A feature of modern synthetic detergents is the presence of biological additives, namely enzymes, which purposefully destroy stains of biological origin. To study the nature and extent of the impact of modern washing powders on saliva stains on physical evidence, specially prepared test specimens of different types of fabric to which saliva was applied have been examined. Materials and Methods: Washing machines from well-known household appliance manufacturers, with different production characteristics, and advertised brands of washing powder were used for test washing. Over 3,500 experimental samples were tested. After washing, the traces of saliva were identified using modern forensic medical research methods. Results: The dependence of the detection and identification of saliva stains on physical evidence on the washing program, type of washing machine and washing powder used was investigated and revealed. The results of experimental and practical expert studies have shown that in most cases it is not possible to draw conclusions when identifying saliva traces on physical evidence after washing. This is a consequence of the effect of biological additives and other additional factors on traces of saliva during washing. Conclusions: On the basis of the results of the study, the feasibility of examining saliva stains on physical evidence after washing is established. The use of modern molecular genetic methods makes it possible to partially solve the problems arising in the study of laundered evidence. Additional study of physical evidence after washing facilitates the detection and investigation of sexual offenses against women and children.

Keywords: Saliva research, modern synthetic detergents, laundry detergents, forensic medicine.

610 Validation and Application of a New Optimized RP-HPLC-Fluorescent Detection Method for Norfloxacin

Authors: Mahmood Ahmad, Ghulam Murtaza, Sonia Khiljee, Muhammad Asadullah Madni

Abstract:

A new reverse-phase high performance liquid chromatography (RP-HPLC) method with a fluorescence detector (FLD) was developed and optimized for norfloxacin determination in human plasma. The mobile phase specifications, extraction method, and excitation and emission wavelengths were varied for optimization. The HPLC system contained a reverse-phase C18 (5 μm, 4.6 mm × 150 mm) column with the FLD operated at an excitation of 330 nm and emission of 440 nm. The optimized mobile phase consisted of 14% acetonitrile in buffer solution; the aqueous phase, prepared by mixing 2 g of citric acid, 2 g of sodium acetate and 1 mL of triethylamine in 1 L of Milli-Q water, was run at a flow rate of 1.2 mL/min. The standard curve was linear over the range tested (0.156–20 μg/mL) and the coefficient of determination was 0.9978. Aceclofenac sodium was used as the internal standard. A detection limit of 0.078 μg/mL was achieved. The run time was set at 10 minutes; the retention time of norfloxacin was 0.99 min, which shows the rapidness of this method of analysis. The present assay showed good accuracy, precision and sensitivity for norfloxacin determination in human plasma with a new internal standard and can be applied to the pharmacokinetic evaluation of norfloxacin tablets after oral administration in humans.

Keywords: Norfloxacin, Aceclofenac sodium, Method optimization, RP-HPLC method, Fluorescent detection, Calibration curve.

609 Optimization of Double Wishbone Suspension System with Variable Camber Angle by Hydraulic Mechanism

Authors: Mohammad Iman Mokhlespour Esfahani, Masoud Mosayebi, Mohammad Pourshams, Ahmad Keshavarzi

Abstract:

The accuracy of recent multidimensional dynamic vehicle simulations has progressed significantly, providing acceptable results not only for passive vehicles but also for active vehicles normally equipped with advanced electronic components. Recently, one of the subjects receiving attention is increasing vehicle safety at the design stage. Therefore, many efforts have been made to increase vehicle stability, especially in turns. One of the most important of these is adjusting the camber angle in the car suspension system. Optimal control of the camber angle, in addition to improving vehicle stability, is effective for wheel adhesion on the road, reducing rubber abrasion, and improving acceleration and braking. Since an increase or decrease in the camber angle impacts the stability of the vehicle, this paper introduces a car suspension mechanism that can adjust the camber angle and is both practical and inexpensive. To this end, a passive double wishbone suspension system with variable camber angle is introduced, and a variable camber mechanism is designed and analyzed. To study the performance of the designed system, the mechanism is modeled in Visual Nastran software and a kinematic analysis is carried out.

Keywords: Suspension molding, double wishbone, variable camber, hydraulic mechanism

608 Voice Disorders Identification Using Hybrid Approach: Wavelet Analysis and Multilayer Neural Networks

Authors: L. Salhi, M. Talbi, A. Cherif

Abstract:

This paper presents a new strategy for the identification and classification of pathological voices using a hybrid method based on the wavelet transform and neural networks. After speech acquisition from a patient, the speech signal is analysed in order to extract acoustic parameters such as the pitch, the formants, jitter, and shimmer. The obtained results are compared to normal and standard values using a programmable database. Sounds are collected from normal people and patients, and then classified into two different categories. The speech database consists of several pathological and normal voices collected from the national hospital “Rabta-Tunis”. The speech processing algorithm is conducted in a supervised mode, first for discrimination of normal and pathological voices and then for classification between neural and vocal pathologies (Parkinson, Alzheimer, laryngeal, dyslexia, ...). Several simulation results are presented as a function of the disease and compared with the clinical diagnosis in order to obtain an objective evaluation of the developed tool.
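
Two of the acoustic parameters mentioned above, jitter and shimmer, have simple cycle-to-cycle definitions. The sketch below computes them from hypothetical per-cycle period and amplitude arrays rather than from a real speech signal, and the wavelet and neural network stages are not reproduced.

import numpy as np

def jitter_percent(periods):
    """Mean absolute cycle-to-cycle period difference, relative to the mean period."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / periods.mean()

def shimmer_percent(amplitudes):
    """Mean absolute cycle-to-cycle amplitude difference, relative to the mean amplitude."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amplitudes))) / amplitudes.mean()

# Hypothetical pitch periods (seconds) and peak amplitudes for a short sustained vowel.
periods = [0.0080, 0.0081, 0.0079, 0.0082, 0.0080, 0.0083]
amps = [0.91, 0.88, 0.93, 0.87, 0.90, 0.92]
print("jitter  %:", round(jitter_percent(periods), 3))
print("shimmer %:", round(shimmer_percent(amps), 3))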

Keywords: Formants, Neural Networks, Pathological Voices, Pitch, Wavelet Transform.

607 Complexity Analysis of Some Known Graph Coloring Instances

Authors: Jeffrey L. Duffany

Abstract:

Graph coloring is an important problem in computer science and many algorithms are known for obtaining reasonably good solutions in polynomial time. One method of comparing different algorithms is to test them on a set of standard graphs where the optimal solution is already known. This investigation analyzes a set of 50 well-known graph coloring instances according to a set of complexity measures. These instances come from a variety of sources, some representing actual applications of graph coloring (register allocation) and others (mycielski and leighton graphs) theoretically designed to be difficult to solve. The size of the graphs ranged from a low of 11 variables to a high of 864 variables. The method used to solve the coloring problem was the square of the adjacency (i.e., correlation) matrix. The results show that the most difficult graphs to solve were the leighton and the queen graphs. Complexity measures such as density, mobility, deviation from uniform color class size and number of block diagonal zeros are calculated for each graph. The results showed that the most difficult problems have low mobility (in the range of 0.2-0.5) and relatively little deviation from uniform color class size.
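
Two of the quantities used above, the graph density and the square of the adjacency matrix, are straightforward to compute; the small 5-cycle below is a placeholder, and the mobility measure and the full coloring procedure are not reproduced.

import numpy as np

# Adjacency matrix of a small placeholder graph (a 5-cycle).
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]])

n = A.shape[0]
n_edges = A.sum() // 2
density = n_edges / (n * (n - 1) / 2)          # fraction of possible edges present

# Square of the adjacency matrix: entry (i, j) counts walks of length 2,
# i.e. common neighbours of i and j, the quantity exploited by the solver above.
A2 = A @ A
print("density:", density)
print("common-neighbour counts:")
print(A2)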

Keywords: graph coloring, complexity, algorithm.

606 3D Human Reconstruction over Cloud Based Image Data via AI and Machine Learning

Authors: Kaushik Sathupadi, Sandesh Achar

Abstract:

Human action recognition (HAR) modeling is a critical task in machine learning. These systems require better techniques for recognizing body parts and selecting optimal features based on vision sensors to identify complex action patterns efficiently. Still, there is a considerable gap and there are challenges between images and videos, such as brightness, motion variation, and random clutter. This paper proposes a robust approach for classifying human actions over cloud-based image data. First, we apply pre-processing and detection, using human and outer shape detection techniques. Next, we extract valuable information in terms of cues. We extract two distinct features: fuzzy local binary patterns and a sequence representation. Then, we apply a greedy randomized adaptive search procedure for data optimization and dimension reduction, and for classification we use a random forest. We tested our model on two benchmark datasets, AAMAZ and the KTH Multi-view Football dataset. Our HAR framework significantly outperforms the other state-of-the-art approaches and achieves better recognition rates of 91% and 89.6% on the AAMAZ and KTH Multi-view Football datasets, respectively.
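
The final classification stage, a random forest over the extracted feature vectors, can be sketched with an off-the-shelf implementation; the feature matrix below is random placeholder data, not the fuzzy local binary pattern or sequence features of the paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features, n_actions = 400, 64, 5

X = rng.standard_normal((n_samples, n_features))        # stand-in for per-clip features
y = rng.integers(0, n_actions, n_samples)                # stand-in action labels
X[np.arange(n_samples), y] += 3.0                        # make the classes separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))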

Keywords: Computer vision, human motion analysis, random forest, machine learning.

605 Study on the Optimization of Completely Batch Water-using Network with Multiple Contaminants Considering Flow Change

Authors: Jian Du, Shui Hong Hong, Lu Meng, Qing Wei Meng

Abstract:

This work addresses the problem of optimizing a completely batch water-using network with multiple contaminants in which the flow change caused by mass transfer is taken into consideration for the first time. A mathematical technique for optimizing the water-using network is proposed based on a source-tank-sink superstructure. The task is to obtain the freshwater usage, the recycle assignments among water-using units, the wastewater discharge and a steady water-using network configuration by the following steps. First, the operating sequences of the water-using units are determined by time constraints. Next, the superstructure is simplified by eliminating reuse and recycle from water-using units with the maximum concentration of key contaminants. Then, the non-linear programming model is solved with GAMS (General Algebraic Modeling System) for minimum freshwater usage, maximum water recycle and minimum wastewater discharge. Finally, the number of operating periods is calculated to arrive at the steady network configuration. A case study is solved to illustrate the applicability of the proposed approach.

Keywords: Completely batch process, flow change, multiple contaminants, water-using network.

603 Solving a New Mixed-Model Assembly Line Sequencing Problem in an MTO Environment

Authors: N. Manavizadeh, M. Hosseini, M. Rabbani

Abstract:

In recent decades, to supply the various and differing demands of clients, many manufacturers have tended to use the mixed-model assembly line (MMAL) in their production lines, since this policy makes it possible to assemble various models of equivalent goods on the same line under the MTO approach. In this article, we determine the sequence of the MMAL, applying the kitting approach and planning rest time for general workers in order to reduce waste, increase worker effectiveness and apply elements of the lean production approach. This multi-objective sequencing problem is solved for small sizes with GAMS 22.2 and a PSO meta-heuristic on 10 test problems; comparing their results shows that they are very similar. We then determine the important factors in computing the cost, whose improvement reduces that cost. Since this problem is NP-hard for large sizes, we use the particle swarm optimization (PSO) meta-heuristic for solving it. For large sizes, we define some test problems to survey its performance and determine the important factors in calculating the cost, so that by changing or improving them, production at minimum cost becomes possible.
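
A bare-bones continuous PSO loop of the kind used above looks like the sketch below; it minimizes a toy quadratic cost standing in for the assembly-line sequencing objective, whose details are not reproduced, and the swarm size, inertia and acceleration constants are illustrative.

import numpy as np

def pso_minimize(cost, dim, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over a continuous search space."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                              # particle velocities
    p_best = x.copy()
    p_cost = np.apply_along_axis(cost, 1, x)
    g_best = p_best[p_cost.argmin()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = x + v
        costs = np.apply_along_axis(cost, 1, x)
        improved = costs < p_cost
        p_best[improved], p_cost[improved] = x[improved], costs[improved]
        g_best = p_best[p_cost.argmin()].copy()
    return g_best, p_cost.min()

# Toy objective standing in for the sequencing cost.
best, best_cost = pso_minimize(lambda z: np.sum((z - 1.2) ** 2), dim=4)
print(best.round(3), round(best_cost, 6))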

Keywords: Mixed-Model Assembly Line, particle swarm optimization, Multi-objective sequencing problem, MTO system, kit-to-assembly, rest time

602 Application of Medium High Hydrostatic Pressure in Preserving Textural Quality and Safety of Pineapple Compote

Authors: Nazim Uddin, Yohiko Nakaura, Kazutaka Yamamoto

Abstract:

Compote (fruit in syrup) of pineapple (Ananas comosus L. Merrill) is expected to have high market potential as a convenient ready-to-eat (RTE) food worldwide. High hydrostatic pressure (HHP) in combination with low temperature (LT) was applied to the processing of pineapple compote, as was medium HHP (MHHP) in combination with medium-high temperature (MHT), since both processes can enhance liquid impregnation and inactivate microbes. The MHHP+MHT (55 or 65 °C) process, as well as the HHP+LT process, successfully inactivated the microbes in the compote to a non-detectable level. Although the compotes processed by MHHP+MHT or HHP+LT lost their fresh texture in a similar manner to those processed solely by heat, it was indicated that the texture degradations caused by heat were suppressed under MHHP. The degassing process reduced the hardness, while calcium (Ca) contributed to retaining hardness in the MHT and MHHP+MHT processes. Electrical impedance measurements supported the damage due to degassing and heat. The color, Brix, and appearance were not significantly affected by the processing methods. MHHP+MHT and HHP+LT processes may be applicable to produce high-quality, safe RTE pineapple compotes. Further studies on the optimization of packaging and storage conditions will be indispensable for commercialization.

Keywords: Compote of pineapple, ready-to-eat, medium high hydrostatic pressure, postharvest loss, and texture.

601 Semi-automatic Construction of Ontology-based CBR System for Knowledge Integration

Authors: Junjie Gao, Guishi Deng

Abstract:

In order to integrate knowledge in heterogeneous case-based reasoning (CBR) systems, ontology-based CBR systems have become a hot topic. To address the problems facing ontology-based CBR systems, for example nonstandard architectures, deficient reuse of knowledge in legacy CBR systems, and difficult ontology construction, we propose a novel approach for semi-automatically constructing an ontology-based CBR system whose architecture is based on a two-layer ontology. Domain knowledge implied in legacy case bases can be mapped automatically from relational database schemas and knowledge items to the relevant OWL local ontology by a mapping algorithm with low time complexity. By concept clustering based on formal concept analysis, and by computing a concept equation measure and a concept inclusion measure, suggestions about enriching or amending the concept hierarchy of the OWL local ontologies are made automatically, which can aid designers in the semi-automatic construction of the OWL domain ontology. The approach is validated through an application example.

Keywords: OWL ontology, Case-based Reasoning, Formal Concept Analysis, Knowledge Integration

600 Model Predictive Control with Unscented Kalman Filter for Nonlinear Implicit Systems

Authors: Takashi Shimizu, Tomoaki Hashimoto

Abstract:

The class of implicit systems is known to be more general than the class of explicit systems. To establish a control method for such a generalized class of systems, we adopt the model predictive control method, which is a kind of optimal feedback control with a performance index that has a moving initial time and terminal time. However, the model predictive control method is inapplicable to systems whose state variables are not all exactly known, in other words, to systems with limited measurable states. In fact, the state variables of systems are usually measured through outputs; hence, only limited parts of them can be used directly. It is also usual that output signals are disturbed by process and sensor noises. Hence, it is important to establish a state estimation method for nonlinear implicit systems that takes the process noise and sensor noise into consideration. To this purpose, we apply the model predictive control method and the unscented Kalman filter for solving the optimization and estimation problems of nonlinear implicit systems, respectively. The objective of this study is to establish model predictive control with an unscented Kalman filter for nonlinear implicit systems.
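
The unscented transform at the heart of the unscented Kalman filter can be sketched in a few lines: sigma points are propagated through a nonlinear function and re-weighted to recover a mean and covariance. The nonlinear function, dimensions and scaling parameters below are illustrative, and the full predict/update cycle and the implicit-system handling of the paper are omitted.

import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate (mean, cov) through a nonlinear function f via sigma points."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)

    sigma = np.vstack([mean, mean + S.T, mean - S.T])      # 2n + 1 sigma points
    w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)

    y = np.array([f(s) for s in sigma])                    # transformed sigma points
    y_mean = w_m @ y
    diff = y - y_mean
    y_cov = (w_c[:, None] * diff).T @ diff
    return y_mean, y_cov

f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])  # polar -> Cartesian
m, P = np.array([1.0, 0.5]), np.diag([0.01, 0.04])
print(unscented_transform(m, P, f))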

Keywords: Model predictive control, unscented Kalman filter, nonlinear systems, implicit systems.

599 SFE as a Superior Technique for Extraction of Eugenol-Rich Fraction from Cinnamomum tamala Nees (Bay Leaf) - Process Analysis and Phytochemical Characterization

Authors: Sudip Ghosh, Dipanwita Roy, Dipan Chatterjee, Paramita Bhattacharjee, Satadal Das

Abstract:

The highest yield of eugenol-rich fractions from Cinnamomum tamala (bay leaf) leaves was obtained by supercritical carbon dioxide (SC-CO2) extraction, compared to hydro-distillation, organic solvent, liquid CO2 and subcritical CO2 extractions. The SC-CO2 extraction parameters were optimized to obtain an extract with maximum eugenol content. This was achieved using a sample size of 10 g at 55 °C and 512 bar after 60 min at a flow rate of 25.0 cm3/s of gaseous CO2. This extract has the best combination of phytochemical properties, such as phenolic content (1.77 mg gallic acid/g dry bay leaf), reducing power (0.80 mg BHT/g dry bay leaf), antioxidant activity (IC50 of 0.20 mg/ml) and anti-inflammatory potency (IC50 of 1.89 mg/ml). The compounds in this extract were identified by GC-MS analysis and its antimicrobial potency was also evaluated. The MIC values against E. coli, P. aeruginosa and S. aureus were 0.5, 0.25 and 0.5 mg/ml, respectively.

Keywords: Antimicrobial potency, Cinnamomum tamala, eugenol, supercritical carbon dioxide extraction.

598 Degraded Document Analysis and Extraction of Original Text Document: An Approach without Optical Character Recognition

Authors: L. Hamsaveni, Navya Prakash, Suresha

Abstract:

Document image analysis recognizes text and graphics in documents acquired as images. An approach without Optical Character Recognition (OCR) for degraded document image analysis has been adopted in this paper. The technique involves document imaging methods such as image fusing and Speeded Up Robust Features (SURF) detection to identify and extract the degraded regions from a set of document images in order to obtain an original document with complete information. If the captured degraded document image is skewed, it has to be straightened (deskewed) before further processing. A special image storage format known as YCbCr is used as a tool to convert the grayscale image to RGB image format. The presented algorithm is tested on various types of degraded documents such as printed documents, handwritten documents, old script documents and handwritten image sketches in documents. The purpose of this research is to obtain an original document for a given set of degraded documents from the same source.
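
A plain RGB to YCbCr conversion, the color-space step mentioned above, can be written directly from the standard BT.601 coefficients; the matrix below follows that convention, which may differ from the exact transform used by the authors, and the SURF and fusing stages are not reproduced.

import numpy as np

# BT.601 full-range RGB -> YCbCr conversion matrix and chroma offsets.
M = np.array([[ 0.299,     0.587,     0.114   ],
              [-0.168736, -0.331264,  0.5     ],
              [ 0.5,      -0.418688, -0.081312]])
OFFSET = np.array([0.0, 128.0, 128.0])

def rgb_to_ycbcr(rgb):
    """Convert an (H, W, 3) uint8 RGB image to YCbCr (float, same shape)."""
    return rgb.astype(float) @ M.T + OFFSET

rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = (255, 0, 0)        # pure red
rgb[1, 1] = (255, 255, 255)    # white
print(rgb_to_ycbcr(rgb).round(1))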

Keywords: Grayscale image format, image fusing, SURF detection, YCbCr image format.

597 LFC Design of a Deregulated Power System with TCPS Using PSO

Authors: H. Shayeghi, H.A. Shayanfar, A. Jalili

Abstract:

In the LFC problem, the interconnections among areas are the input of disturbances, and therefore it is important to suppress the disturbances by coordination of the governor systems. In contrast, tie-line power flow control by a TCPS located between two areas makes it possible to positively stabilize the system frequency oscillations through the interconnection, which is also expected to provide a new ancillary service for future power systems. Thus, a control strategy based on controlling the phase angle of the TCPS is proposed in this paper to provide an active means of controlling the system frequency. Also, the optimal, robust adjustment of the PID controller parameters under a bilateral contracted scenario, following large step load demands and disturbances with and without TCPS, is investigated by Particle Swarm Optimization (PSO), which has a strong ability to find optimal results. This newly developed control strategy combines the advantages of PSO and TCPS and has a simple structure that is easy to implement and tune. To demonstrate the effectiveness of the proposed control strategy, a three-area restructured power system is considered as a test system under different operating conditions and system nonlinearities. The analysis reveals that the TCPS is quite capable of suppressing the frequency and tie-line power oscillations effectively, compared to the response obtained without TCPS, for a wide range of plant parameter changes, area load demands and disturbances, even in the presence of system nonlinearities.
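
To make the PID-tuning objective concrete, the sketch below simulates a highly simplified single-area frequency-response model under a step load disturbance and returns an ITAE cost for a candidate PID gain set, the kind of fitness a PSO particle would be evaluated on. The model constants and gains are illustrative placeholders, not the three-area deregulated system or the exact cost function studied in the paper.

import numpy as np

def itae_cost(gains, t_end=20.0, dt=0.01, d_load=0.01, M=10.0, D=1.0):
    """ITAE of the frequency deviation of a crude single-area model
    M * d(df)/dt = -D*df - d_load + u, with u from a PID acting on -df."""
    kp, ki, kd = gains
    df, integ, prev_err = 0.0, 0.0, 0.0
    cost = 0.0
    for k in range(int(t_end / dt)):
        err = -df                                  # drive frequency deviation to zero
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        df += dt * (-D * df - d_load + u) / M      # Euler step of the swing-type model
        cost += (k * dt) * abs(df) * dt            # ITAE accumulation
    return cost

# A candidate gain set as a PSO particle would propose it.
print("ITAE:", round(itae_cost((40.0, 20.0, 5.0)), 6))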

Keywords: LFC, TCPS, Deregulated Power System, Power System Control, PSO.
