Search results for: Experimental Study.
11711 Logic Programming and Artificial Neural Networks in Pharmacological Screening of Schinus Essential Oils
Authors: José Neves, M. Rosário Martins, Fátima Candeias, Diana Ferreira, Sílvia Arantes, Júlio Cruz-Morais, Guida Gomes, Joaquim Macedo, António Abelha, Henrique Vicente
Abstract:
Some plants of the genus Schinus have been used in folk medicine as topical antiseptics, digestives, purgatives, diuretics, analgesics or antidepressants, and also for respiratory and urinary infections. The chemical composition of the essential oils of S. molle and S. terebinthifolius has been evaluated and shows high variability according to the part of the plant studied and the geographic and climatic regions. The pharmacological properties, namely the antimicrobial, anti-tumoural and anti-inflammatory activities, are conditioned by the chemical composition of the essential oils. Taking into account the difficulty of inferring the pharmacological properties of Schinus essential oils without a demanding experimental approach, this work focuses on the development of a decision support system, in terms of its knowledge representation and reasoning procedures, under a formal framework based on Logic Programming, complemented with an approach to computing centered on Artificial Neural Networks and the respective Degree-of-Confidence that one has in such an occurrence.
Keywords: Artificial neural networks, essential oils, knowledge representation and reasoning, logic programming, Schinus molle L., Schinus terebinthifolius Raddi.
11710 Comparison of Two Airfoil Sections for Application in Straight-Bladed Darrieus VAWT
Authors: Marco Raciti Castelli, Ernesto Benini
Abstract:
This paper presents a model for the evaluation of the energy performance and aerodynamic forces acting on a small straight-bladed Darrieus-type vertical axis wind turbine as a function of the blade section geometry. It consists of an analytical code coupled to a solid modeling software capable of generating the desired blade geometry from the design geometric parameters. This module is then linked to a finite volume commercial CFD code for the calculation of rotor performance by integration of the aerodynamic forces along the perimeter of each blade over a full period of revolution. After describing and validating the computational model against experimental data, the results of numerical simulations are presented for two candidate airfoil sections, namely a classical symmetrical NACA 0021 blade profile and the recently developed DU 06-W-200 non-symmetric, laminar blade profile. Through a full CFD campaign of analysis, the effects of the blade section on the angle of attack are first investigated, and then the overall rotor torque and power are analyzed as a function of blade azimuthal position, achieving a numerical quantification of the influence of airfoil geometry on overall rotor performance.
Keywords: Wind turbine, NACA 0021, DU 06-W-200.
11709 Electron Density Discrepancy Analysis of Energy Metabolism Coenzymes
Authors: Alan Luo, Hunter N. B. Moseley
Abstract:
Many macromolecular structure entries in the Protein Data Bank (PDB) have a range of regional (localized) quality issues, be it derived from X-ray crystallography, Nuclear Magnetic Resonance (NMR) spectroscopy, or other experimental approaches. However, most PDB entries are judged by global quality metrics like R-factor, R-free, and resolution for X-ray crystallography or backbone phi-psi distribution statistics and average restraint violations for NMR. Regional quality is often ignored when PDB entries are re-used for a variety of structurally based analyses. The binding of ligands, especially ligands involved in energy metabolism, is of particular interest in many structurally focused protein studies. Using a regional quality metric that provides chemically interpretable information from electron density maps, a significant number of outliers in regional structural quality was detected across X-ray crystallographic PDB entries for proteins bound to biochemically critical ligands. In this study, a series of analyses was performed to evaluate both specific and general potential factors that could promote these outliers. In particular, these potential factors were the minimum distance to a metal ion, the minimum distance to a crystal contact, and the isotropic atomic b-factor. To evaluate these potential factors, Fisher’s exact tests were performed, using regional quality criteria of outlier (top 1%, 2.5%, 5%, or 10%) versus non-outlier compared to a potential factor metric above versus below a certain outlier cutoff. The results revealed a consistent general effect from region-specific normalized b-factors but no specific effect from metal ion contact distances and only a very weak effect from crystal contact distance as compared to the b-factor results. These findings indicate that no single specific potential factor explains a majority of the outlier ligand-bound regions, implying that human error is likely as important as these other factors. Thus, all factors, including human error, should be considered when regions of low structural quality are detected. Also, the downstream re-use of protein structures for studying ligand-bound conformations should screen the regional quality of the binding sites. Doing so prevents misinterpretation due to the presence of structural uncertainty or flaws in regions of interest.
Keywords: Biomacromolecular structure, coenzyme, electron density discrepancy analysis, X-ray crystallography.
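As an illustration of the association test described above, the following minimal sketch runs Fisher's exact test on a hypothetical outlier-versus-factor contingency table; the counts are invented placeholders, not the study's data.

```python
# Minimal sketch of the outlier-vs-factor association test described above.
# The counts below are hypothetical placeholders, not data from the study.
from scipy.stats import fisher_exact

# 2x2 contingency table:
#                      factor >= cutoff   factor < cutoff
# outlier (top 5%)            a                  b
# non-outlier                 c                  d
a, b, c, d = 30, 20, 200, 750          # hypothetical counts
odds_ratio, p_value = fisher_exact([[a, b], [c, d]], alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```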
11708 Diagnosis of the Abdominal Aorta Aneurysm in Magnetic Resonance Imaging Images
Authors: W. Kultangwattana, K. Somkantha, P. Phuangsuwan
Abstract:
This paper presents a technique for the diagnosis of abdominal aorta aneurysms in magnetic resonance imaging (MRI) images. First, our technique segments the aorta in MRI images. This is a required step for determining the volume of the aorta, which is the key quantity for diagnosing an abdominal aorta aneurysm. Our proposed technique can determine the volume of the aorta in MRI images using a new external energy for the snakes model, calculated from Laws' texture measures. The new external energy increases the capture range of the snakes model more efficiently than the conventional external energies of snakes models. Second, our technique diagnoses the abdominal aorta aneurysm with a Bayesian classifier, a classification model based on statistical theory. The features for classifying abdominal aorta aneurysms were derived from the aorta contour produced by our snakes model, namely area, perimeter and compactness. We also compare the proposed technique with the traditional snakes model. In our experiments, 30 images were used for training and 20 images for testing, and the results were compared with expert opinion. The experimental results show that our technique achieves an accuracy greater than 95%.
Keywords: Abdominal Aorta Aneurysm, Bayesian Classifier, Snakes Model, Texture Feature.
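To make the feature-plus-classifier step above concrete, here is a small hedged sketch: compactness is assumed to be perimeter squared over 4*pi*area, and a Gaussian naive Bayes classifier stands in for the paper's Bayesian classifier; all values and the class coding are hypothetical.

```python
# Sketch of the contour-based features and Bayesian classification step
# described above; feature values and labels are hypothetical.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def contour_features(area, perimeter):
    """Return the (area, perimeter, compactness) feature vector.
    Compactness is taken here as perimeter^2 / (4*pi*area), an assumption;
    the paper does not spell out its exact definition."""
    compactness = perimeter**2 / (4.0 * np.pi * area)
    return [area, perimeter, compactness]

# Hypothetical training contours: (area in px^2, perimeter in px, label)
train = [(1200, 130, 0), (1350, 140, 0), (2600, 260, 1), (2900, 300, 1)]
X = np.array([contour_features(a, p) for a, p, _ in train])
y = np.array([label for _, _, label in train])

clf = GaussianNB().fit(X, y)   # 0 = normal aorta, 1 = aneurysm (assumed coding)
print(clf.predict([contour_features(2750, 280)]))
```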
11707 Applications for Additive Manufacturing Technology for Reducing the Weight of Body Parts of Gas Turbine Engines
Authors: Liubov A. Magerramova, Mikhail A. Petrov, Vladimir V. Isakov, Liana A. Shcherbinina, Suren G. Gukasyan, Daniil V. Povalyukhin, Olga G. Klimova-Korsmik, Darya V. Volosevich
Abstract:
Aircraft engines are developing along the path of increasing service life, strength, reliability, and safety. The building of gas turbine engine body parts is a complex design and technological task. Particularly complex in design and manufacturing are the casings of the input stages of helicopter gearboxes and the central drives of aircraft engines. Traditional technologies, such as precision casting or isothermal forging, are characterized by significant limitations in parts production. For parts such as housings, additive technologies guarantee spatial freedom and limitless or flexible design. This article presents the results of computational and experimental studies. These investigations justify the applicability of additive technologies (AT) for reducing the weight of aircraft gearbox housing parts by up to 32%. This is possible due to geometrical optimization compared to the classical, less flexible manufacturing methods and as-cast aircraft parts with overestimated safety factors. Using the example of the housing of the input stage of an aircraft gearbox, the layer-by-layer manufacturing of a part, accounting for thermal deformation, was visualized.
Keywords: Additive technologies, gas turbine engines, geometric optimization, weight reduction.
11706 An Adaptive Dimensionality Reduction Approach for Hyperspectral Imagery Semantic Interpretation
Authors: Akrem Sellami, Imed Riadh Farah, Basel Solaiman
Abstract:
With the development of HyperSpectral Imagery (HSI) technology, the spectral resolution of HSI has become denser, resulting in a large number of spectral bands, high correlation between neighboring bands, and high data redundancy. Consequently, semantic interpretation is a challenging task for HSI analysis due to the high dimensionality and the high correlation of the different spectral bands. This work presents a dimensionality reduction approach that overcomes these issues and improves the semantic interpretation of HSI. First, in order to preserve the spatial information, the Tensor Locality Preserving Projection (TLPP) is applied to transform the original HSI. In the second step, knowledge is extracted based on the adjacency graph to describe the different pixels. From the TLPP transformation matrix, a weighted matrix is constructed to rank the different spectral bands based on their contribution scores. The relevant bands are then adaptively selected based on this weighted matrix. The performance of the presented approach has been validated through several experiments, and the obtained results demonstrate its efficiency compared to various existing dimensionality reduction techniques. The experimental results also show that this approach can adaptively select the relevant spectral bands, improving the semantic interpretation of HSI.
Keywords: Band selection, dimensionality reduction, feature extraction, hyperspectral imagery, semantic interpretation.
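A minimal sketch of the band-ranking step follows, assuming the contribution score of a band is the sum of the absolute projection weights in the TLPP transformation matrix; this scoring rule is an illustrative choice, not necessarily the paper's exact weighting.

```python
# Sketch of the adaptive band-selection step: rank spectral bands by a
# contribution score derived from the TLPP transformation matrix.
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_components = 200, 20
W = rng.normal(size=(n_bands, n_components))   # stand-in for the TLPP projection matrix

scores = np.abs(W).sum(axis=1)                 # assumed contribution score per band
k = 30                                         # number of relevant bands to keep
selected = np.argsort(scores)[::-1][:k]        # indices of the top-k bands
print(sorted(selected.tolist()))
```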
11705 CNC Wire-Cut Parameter Optimized Determination of the Stair Shape Workpiece
Authors: Chana Raksiri, Pornchai Chatchaikulsiri
Abstract:
The objective of this research is to optimize the cutting parameters for a stair-shaped workpiece cut by CNC wire-cut EDM (WEDM). The experimental material is SKD-11 steel formed into stair-shaped workpieces with step heights of 10, 20, 30 and 40 mm and a constant thickness of 10 mm, cut on a Sodick AD325L CNC wire-cut EDM machine. The experiments follow a 3^k full factorial design with two factors at three levels, giving nine runs with two replicates. The two selected factors are servo voltage (SV) and servo feed rate (SF), and the response is the cutting thickness error. The study is divided into two experiments. The first experiment identifies the significant factor at the 95% confidence level; SV is found to be the significant factor, and the smallest cutting thickness error, 17 microns, is obtained at an SV value of 46 V. The results also show that the lower the SV value, the smaller the thickness error of the workpiece. The second experiment is then carried out to reduce the cutting thickness error as far as possible by lowering SV. Its results confirm SV as the significant factor at the 95% confidence level, and the smallest cutting thickness error is reduced to 11 microns at an SV value of 36 V.
Keywords: CNC Wire-Cut, Variable Thickness Workpiece, Design of Experiments, Full Factorial Design
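The design described above can be laid out as in the sketch below; the factor levels are assumed for illustration and are not the authors' exact settings.

```python
# Sketch of the 3-level, 2-factor full factorial design (3^2 = 9 runs,
# 2 replicates) used in the study. Factor levels are hypothetical values
# around the reported servo-voltage settings, not the authors' exact levels.
from itertools import product

sv_levels = [36, 41, 46]        # servo voltage (V) -- assumed levels
sf_levels = [4, 8, 12]          # servo feed rate  -- assumed levels
replicates = 2

runs = [(sv, sf, rep)
        for rep in range(1, replicates + 1)
        for sv, sf in product(sv_levels, sf_levels)]

for sv, sf, rep in runs:
    print(f"replicate {rep}: SV = {sv} V, SF = {sf}")
print(f"total runs: {len(runs)}")   # 9 treatment combinations x 2 replicates = 18
```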
11704 Hand Gesture Recognition Based on Combined Features Extraction
Authors: Mahmoud Elmezain, Ayoub Al-Hamadi, Bernd Michaelis
Abstract:
Hand gesture recognition is an active area of research in the vision community, mainly for the purposes of sign language recognition and Human Computer Interaction. In this paper, we propose a system to recognize alphabet characters (A-Z) and numbers (0-9) in real time from stereo color image sequences using Hidden Markov Models (HMMs). Our system is based on three main stages: automatic segmentation and preprocessing of the hand regions, feature extraction, and classification. In the automatic segmentation and preprocessing stage, color and 3D depth maps are used to detect the hands, and the hand trajectory is then tracked using the mean-shift algorithm and a Kalman filter. In the feature extraction stage, combined 3D features of location, orientation and velocity with respect to Cartesian coordinate systems are used, and k-means clustering is employed to build the HMM codewords. In the final, classification stage, the Baum-Welch algorithm is used to fully train the HMM parameters. The gestures of alphabets and numbers are recognized using a Left-Right Banded model in conjunction with the Viterbi algorithm. Experimental results demonstrate that our system can successfully recognize hand gestures with a 98.33% recognition rate.
Keywords: Gesture Recognition, Computer Vision & Image Processing, Pattern Recognition.
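A compact sketch of the decoding stage follows, assuming a Left-Right Banded transition structure and a discrete k-means codebook of observations; the sizes and probabilities are illustrative only, not the trained models from the paper.

```python
# Sketch: Left-Right Banded (LRB) HMM transitions and log-space Viterbi decoding.
import numpy as np

def lrb_transitions(n_states, band=2):
    """Transition matrix where state i may only move forward to states i..i+band-1."""
    A = np.zeros((n_states, n_states))
    for i in range(n_states):
        j_max = min(i + band, n_states)
        A[i, i:j_max] = 1.0 / (j_max - i)
    return A

def viterbi(obs, A, B, pi):
    """Most likely state path for a sequence of codeword indices."""
    n_states, T = A.shape[0], len(obs)
    logA, logB, logpi = np.log(A + 1e-12), np.log(B + 1e-12), np.log(pi + 1e-12)
    delta = np.zeros((T, n_states))
    psi = np.zeros((T, n_states), dtype=int)
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA          # (from_state, to_state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

A = lrb_transitions(n_states=5)
B = np.full((5, 8), 1 / 8)                  # uniform emissions over 8 k-means codewords
pi = np.array([1.0, 0, 0, 0, 0])            # LRB models start in the first state
print(viterbi([0, 3, 3, 5, 7, 7], A, B, pi))
```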
11703 Kinetic Theory Based CFD Modeling of Particulate Flows in Horizontal Pipes
Authors: Pandaba Patro, Brundaban Patro
Abstract:
The numerical simulation of fully developed gas–solid flow in a horizontal pipe is carried out using the Eulerian-Eulerian approach, also known as two-fluid modeling, in which both phases are treated as interpenetrating continua. The solid phase stresses are modeled using the kinetic theory of granular flow (KTGF). The computed results for velocity profiles and pressure drop are compared with experimental data. We observe that the convection and diffusion terms in the granular temperature equation cannot be neglected in gas-solid flow simulations along a horizontal pipe. Particle-wall collisions and lift also play an important role in Eulerian modeling. We also investigated the effect of flow parameters such as gas velocity, particle properties and particle loading on the pressure drop prediction for different pipe diameters. The pressure drop increases with gas velocity and particle loading. The gas velocity has the same effect (pressure drop proportional to U²) as in single-phase flow. With respect to particle diameter, the pressure drop first increases, reaches a peak and then decreases; the peak is a strong function of the pipe bore.
Keywords: CFD, Eulerian modeling, gas solid flow, KTGF.
11702 Mechanical Properties of Enset Fibers Obtained from Different Breeds of Enset Plant
Authors: Diriba T. Balcha, Boris Kulig, Oliver Hensel, Eyassu Woldesenbet
Abstract:
Enset fiber is an agricultural waste available in surplus amounts in Ethiopia. However, it has not been established whether the hypothesized variation in the properties of this fiber, due to the diversity of its source breed, the fiber position within the plant stem, and the chemical treatment duration, makes its application for the development of composite products problematic. Currently, limited data are available on the functional properties of the fiber. Thus, an effort is made in this study to narrow the knowledge gap by characterizing it. The experimental design was prepared using Design-Expert software, and tensile tests were conducted on Enset fiber from 10 breeds: Dego, Dirbo, Gishera, Itine, Siskela, Neciho, Yesherkinke, Tuzuma, Ankogena, and Kucharkia. The effects of 5% NaOH surface treatment duration and of fiber location along and across the plant pseudostem were also investigated. The test results show that the rupture stress variation is not significant among the fibers from the 10 Enset breeds. However, the strain variation is significant, with fiber from the Dego breed showing the highest strain before failure. Surface-treated fibers showed improved rupture strength and elastic modulus per 24 hours of treatment duration. The results also show that chemical treatment can deteriorate the load-bearing capacity of the fiber: the raw fiber has a higher load-bearing capacity than the treated fiber. It was noted that both rupture stress and strain increase from top to bottom along the stem, whereas there is no significant variation across the stem. The elastic modulus variation both along and across the stem was insignificant. The rupture stress, elastic modulus, and strain of Enset fiber are 360.11 ± 181.86 MPa, 12.80 ± 6.85 GPa and 0.04 ± 0.02 mm/mm, respectively. These results show that Enset fiber is comparable to other natural fibers such as abaca, banana, and sisal and can be used as an alternative natural fiber for composite applications. Moreover, the insignificant variation of properties among breeds and across the stem means that all breeds and all leaf sheaths of the Enset plant can be used for fiber extraction. The use of short natural fibers rather than long ones is preferable to reduce the significant variation of properties along the stem or fiber direction. In conclusion, the application of Enset fiber for composite product design and development is mechanically feasible.
Keywords: Agricultural waste, chemical treatment, fiber characteristics, natural fiber.
11701 Improvement of Parallel Compressor Model in Dealing Outlet Unequal Pressure Distribution
Authors: Kewei Xu, Jens Friedrich, Kevin Dwinger, Wei Fan, Xijin Zhang
Abstract:
The Parallel Compressor Model (PCM) is a simplified approach to predicting compressor performance with inlet distortions. In the PCM calculation, it is assumed that the sub-compressors' outlet static pressure is uniform, which simplifies the calculation procedure. However, if the compressor's outlet duct is not long and straight, this assumption frequently induces errors ranging from 10% to 15%. This paper provides a revised PCM calculation method that corrects this error. The revised method employs the energy, momentum and continuity equations to acquire the needed parameters and replaces the equal-static-pressure assumption. Based on the revised method, PCM is applied to two compression systems with different blade types. Their performance under non-uniform inlet conditions is predicted with the revised calculation method, and the predictions are used to evaluate the method's effectiveness. Validating the results against experimental data, it is found that, although some deviation occurs, the calculated results agree well with the experimental data, with errors ranging from 0.1% to 3%. This demonstrates that the revised PCM calculation method has clear advantages in predicting the performance of a distorted compressor with a short exhaust duct.
Keywords: Parallel Compressor Model (PCM), Revised Calculation Method, Inlet Distortion, Outlet Unequal Pressure Distribution.
11700 Mining Network Data for Intrusion Detection through Naïve Bayesian with Clustering
Authors: Dewan Md. Farid, Nouria Harbi, Suman Ahmmed, Md. Zahidur Rahman, Chowdhury Mofizur Rahman
Abstract:
Network security attacks are violations of information security policy that have received much attention from the computational intelligence community in recent decades. Data mining has become a very useful technique for detecting network intrusions by extracting useful knowledge from large volumes of network data or logs. The naïve Bayesian classifier is one of the most popular data mining algorithms for classification, providing an optimal way to predict the class of an unknown example. It has been shown, however, that a single set of probabilities derived from the data is not sufficient to achieve a good classification rate. In this paper, we propose a new learning algorithm for mining network logs to detect network intrusions through a naïve Bayesian classifier, which first clusters the network logs into several groups based on the similarity of logs and then calculates the prior and conditional probabilities for each group of logs. To classify a new log, the algorithm determines which cluster the log belongs to and then uses that cluster's probability set to classify the new log. We tested the performance of our proposed algorithm on the KDD99 benchmark network intrusion detection dataset, and the experimental results show that it improves detection rates as well as reduces false positives for different types of network intrusions.
Keywords: Clustering, detection rate, false positive, naïve Bayesian classifier, network intrusion detection.
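A minimal sketch of the cluster-then-classify idea follows, using k-means plus one Gaussian naive Bayes model per cluster on synthetic stand-in data; the paper works on KDD99 logs and may use categorical conditional probabilities instead.

```python
# Sketch: group logs with k-means, fit a naive Bayes model per cluster,
# and route a new log to its nearest cluster's model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                       # stand-in network-log features
y = (X[:, 0] + X[:, 1] > 0).astype(int)             # stand-in labels: 0 normal, 1 attack

kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)
models = {}
for c in range(kmeans.n_clusters):
    mask = kmeans.labels_ == c
    # Each cluster keeps its own prior and conditional probability estimates.
    models[c] = GaussianNB().fit(X[mask], y[mask])

def classify(log):
    cluster = int(kmeans.predict(log.reshape(1, -1))[0])
    return int(models[cluster].predict(log.reshape(1, -1))[0])

print(classify(rng.normal(size=5)))
```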
11699 MARTI and MRSD: Newly Developed Isolation-Damping Devices with Adaptive Hardening for Seismic Protection of Structures
Authors: Murat Dicleli, Ali Salem Milani
Abstract:
In this paper, a summary of analytical and experimental studies into the behavior of a new hysteretic damper, designed for the seismic protection of structures, is presented. The Multidirectional Torsional Hysteretic Damper (MRSD) is a patented invention in which a symmetrical arrangement of identical cylindrical steel cores is configured so as to yield in torsion while the structure experiences planar movements due to earthquake shaking. The new device has certain desirable properties. Notably, it is characterized by a variable and controllable-via-design post-elastic stiffness. This property results from the MRSD's kinematic configuration, which produces geometric hardening, rather than from a secondary large-displacement effect. Additionally, the new system is capable of reaching high force and displacement capacities and shows high levels of damping and a very stable cyclic response. The device has gone through many stages of design refinement, multiple prototype verification tests and the development of design guidelines and computer codes to facilitate its implementation in practice. The practicality of the new device, which originated in an academic setting, is assured through extensive collaboration with industry in its final design stages, prototyping and verification test programs.
Keywords: Seismic, isolation, damper, adaptive stiffness.
11698 Inferring Hierarchical Pronunciation Rules from a Phonetic Dictionary
Authors: Erika Pigliapoco, Valerio Freschi, Alessandro Bogliolo
Abstract:
This work presents a new phonetic transcription system based on a tree of hierarchical pronunciation rules expressed as context-specific grapheme-phoneme correspondences. The tree is automatically inferred from a phonetic dictionary by incrementally analyzing deeper context levels, eventually representing a minimum set of exhaustive rules that pronounce without errors all the words in the training dictionary and that can be applied to out-of-vocabulary words. The proposed approach improves upon existing rule-tree-based techniques in that it makes use of graphemes, rather than letters, as elementary orthographic units. A new linear algorithm for the segmentation of a word into graphemes is introduced to enable out-of-vocabulary grapheme-based phonetic transcription. Exhaustive rule trees provide a canonical representation of the pronunciation rules of a language that can be used not only to pronounce out-of-vocabulary words, but also to analyze and compare the pronunciation rules inferred from different dictionaries. The proposed approach has been implemented in C and tested on Oxford British English and Basic English. Experimental results show that grapheme-based rule trees represent phonetically sound rules and provide better performance than letter-based rule trees.
Keywords: Automatic phonetic transcription, pronunciation rules, hierarchical tree inference.
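As a hedged illustration of grapheme segmentation, the sketch below uses greedy longest-match over a tiny made-up grapheme inventory; the paper's linear algorithm and its inferred grapheme set may well differ.

```python
# Sketch of greedy longest-match segmentation of a word into graphemes.
GRAPHEMES = {"sh", "ch", "th", "ea", "oo", "ph", "a", "b", "c", "d", "e", "f",
             "g", "h", "i", "k", "l", "m", "n", "o", "p", "r", "s", "t", "u"}
MAX_LEN = max(len(g) for g in GRAPHEMES)

def segment(word):
    """Split a word into graphemes, preferring the longest match at each position."""
    units, i = [], 0
    while i < len(word):
        for size in range(min(MAX_LEN, len(word) - i), 0, -1):
            candidate = word[i:i + size]
            if candidate in GRAPHEMES:
                units.append(candidate)
                i += size
                break
        else:                      # no known grapheme: fall back to a single letter
            units.append(word[i])
            i += 1
    return units

print(segment("sheath"))   # ['sh', 'ea', 'th']
print(segment("phone"))    # ['ph', 'o', 'n', 'e']
```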
11697 Gas Detection via Machine Learning
Authors: Walaa Khalaf, Calogero Pace, Manlio Gaudioso
Abstract:
We present an Electronic Nose (ENose) aimed at identifying the presence of one of two gases, possibly detecting the presence of a mixture of the two. Estimation of the concentrations of the components is also performed for a volatile organic compound (VOC) mixture of methanol and acetone, for the ranges 40-400 and 22-220 ppm (parts per million), respectively. Our system contains 8 sensors, 5 of them being gas sensors (of the TGS class from FIGARO USA, INC., whose sensing element is a tin dioxide (SnO2) semiconductor), the remaining being a temperature sensor (LM35 from National Semiconductor Corporation), a humidity sensor (HIH-3610 from Honeywell), and a pressure sensor (XFAM from Fujikura Ltd.). Our integrated hardware-software system uses machine learning principles and least-squares regression to first identify a new gas sample, or a mixture, and then to estimate the concentrations. In particular, we adopt a training model using the Support Vector Machine (SVM) approach with a linear kernel to teach the system how to discriminate among different gases; we then apply another training model, using least-squares regression, to predict the concentrations. The experimental results demonstrate that the proposed multiclassification and regression scheme is effective in the identification of the tested VOCs of methanol and acetone with 96.61% correctness. The concentration prediction is obtained with 0.979 and 0.964 correlation coefficients for the predicted versus real concentrations of methanol and acetone, respectively.
Keywords: Electronic nose, Least-squares regression, Mixture of gases, Support Vector Machine.
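A minimal sketch of the two-stage scheme on synthetic data follows: a linear-kernel SVM for gas identification followed by least-squares regression for concentration estimation; the sensor readings, class coding, and concentrations are placeholders.

```python
# Sketch: linear-kernel SVM for gas identification, then least-squares
# regression for concentration estimation (synthetic stand-in data).
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 8))                        # 8 sensor channels per sample
gas = rng.integers(0, 2, size=60)                   # 0 = methanol, 1 = acetone (assumed coding)
conc = 40 + 360 * rng.random(60)                    # ppm, synthetic

classifier = SVC(kernel="linear").fit(X, gas)       # stage 1: identify the gas
regressor = LinearRegression().fit(X, conc)         # stage 2: estimate concentration

sample = rng.normal(size=(1, 8))
print("gas class:", int(classifier.predict(sample)[0]),
      "estimated ppm:", float(regressor.predict(sample)[0]))
```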
11696 An Induction Motor Drive System with Intelligent Supervisory Control for Water Networks Including Storage Tank
Authors: O. S. Ebrahim, K. O. Shawky, M. A. Badr, P. K. Jain
Abstract:
This paper describes an efficient, low-cost, high-availability induction motor (IM) drive system with intelligent supervisory control for water distribution networks including a storage tank. To increase the operational efficiency and reduce cost, the IM drive system includes a main pumping unit and an auxiliary voltage source inverter (VSI) fed unit. The main unit comprises a smart star/delta starter, a regenerative fluid clutch, a switched VAR compensator, and a hysteresis liquid-level controller. A three-state energy saving mode (ESM) is defined at no load, and a logic algorithm is developed for the best reduction of energy cost. To reduce voltage sag, the supervisory controller operates the switched VAR compensator upon motor starting. To provide the smart star/delta starter at low cost, a method based on current sensing is developed for interlocking, malfunction detection, and life-cycle counting, and it is used to synthesize an improved fuzzy logic (FL) based availability assessment scheme. Furthermore, a recurrent neural network (RNN) full-state estimator is proposed to provide a sensor fault-tolerant algorithm for the feedback control. The auxiliary unit operates at low flow rates and improves the system efficiency and flexibility for distributed generation during islanding mode. Compared with a doubly-fed IM, the proposed system ensures 30% working throughput under main motor/pump fault conditions, higher efficiency, and a marginal cost difference. This is critically important in the case of water networks. Theoretical analysis, computer simulations, a cost study, and an efficiency evaluation using timely cascaded energy-conservative systems are performed on an IM experimental setup to demonstrate the validity and effectiveness of the proposed drive and control.
Keywords: Artificial Neural Network, ANN, Availability Assessment, Cloud Computing, Energy Saving, Induction Machine, IM, Supervisory Control, Fuzzy Logic, FL, Pumped Storage.
11695 Design and Optimization for a Compliant Gripper with Force Regulation Mechanism
Authors: Nhat Linh Ho, Thanh-Phong Dao, Shyh-Chour Huang, Hieu Giang Le
Abstract:
This paper presents the design and optimization of a compliant gripper. The gripper is constructed based on the concept of a compliant mechanism with flexure hinges. A passive force regulation mechanism is presented to control the grasping force on a micro-sized object instead of using a force sensor. The force regulation mechanism is designed using planar springs. The gripper is expected to achieve a large range of displacement to handle objects of various sizes. First, the statics and dynamics of the gripper are investigated using finite element analysis in the ANSYS software. The design parameters of the gripper are then optimized via the Taguchi method. An L9 orthogonal array is used to establish the experimental matrix. Subsequently, the signal-to-noise ratio is analyzed to find the optimal solution. Finally, the response surface methodology is employed to model the relationship between the design parameters and the output displacement of the gripper. The design of experiments method is then used to analyze the sensitivity and determine the effect of each parameter on the displacement. The results show that the compliant gripper can move with a large displacement of 213.51 mm and that the force regulation mechanism is expected to be used for high-precision positioning systems.
Keywords: Flexure hinge, compliant mechanism, compliant gripper, force regulation mechanism, Taguchi method, response surface methodology, design of experiment.
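A small sketch of the signal-to-noise analysis for an L9 array follows, assuming the larger-the-better criterion since a large output displacement is the desired response; the displacement values are hypothetical.

```python
# Sketch of the S/N analysis step for an L9 orthogonal array.
import numpy as np

# Hypothetical displacement responses (mm) for the 9 runs, two repetitions each.
responses = np.array([
    [150.2, 151.0], [163.5, 162.8], [170.1, 171.4],
    [181.0, 180.2], [192.3, 193.0], [175.6, 176.2],
    [201.4, 200.8], [188.9, 189.5], [212.7, 213.5],
])

# Larger-the-better: S/N = -10 * log10( mean(1 / y^2) )
sn = -10.0 * np.log10(np.mean(1.0 / responses**2, axis=1))
best_run = int(np.argmax(sn))
print("S/N per run (dB):", np.round(sn, 2))
print("best run index:", best_run)
```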
11694 A New Approach for Image Segmentation using Pillar-Kmeans Algorithm
Authors: Ali Ridho Barakbah, Yasushi Kiyoki
Abstract:
This paper presents a new approach to image segmentation by applying the Pillar-Kmeans algorithm. This segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to the image segmentation after initialization by the Pillar algorithm. The Pillar algorithm considers the placement of pillars, which should be located as far apart as possible to withstand the pressure distribution of a roof, as analogous to the placement of centroids within the data distribution. This algorithm is able to optimize K-means clustering for image segmentation in terms of precision and computation time. It designates the initial centroid positions by calculating the accumulated distance metric between each data point and all previously selected centroids, and then selects the data point with the maximum distance as the new initial centroid. In this way, all initial centroids are distributed according to the maximum accumulated distance metric. This paper evaluates the proposed approach for image segmentation by comparing it with the K-means and Gaussian Mixture Model algorithms, using the RGB, HSV, HSL and CIELAB color spaces. The experimental results clarify the effectiveness of our approach in improving segmentation quality in terms of precision and computation time.
Keywords: Image segmentation, K-means clustering, Pillar algorithm, color spaces.
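A hedged sketch of the accumulated-distance seeding described above follows; the outlier handling used in the full Pillar algorithm is omitted, and the seeds would then initialise standard K-means for the segmentation step.

```python
# Sketch of Pillar-style seeding: each new centroid is the data point with the
# maximum accumulated distance to all previously chosen centroids.
import numpy as np

def pillar_seeds(X, k):
    grand_mean = X.mean(axis=0)
    first = int(np.argmax(np.linalg.norm(X - grand_mean, axis=1)))
    chosen = [first]
    acc = np.zeros(len(X))                   # accumulated distance metric
    while len(chosen) < k:
        acc += np.linalg.norm(X - X[chosen[-1]], axis=1)
        acc_masked = acc.copy()
        acc_masked[chosen] = -np.inf         # do not re-select an existing centroid
        chosen.append(int(np.argmax(acc_masked)))
    return X[chosen]

rng = np.random.default_rng(3)
pixels = rng.random((1000, 3))               # e.g. RGB values scaled to [0, 1]
seeds = pillar_seeds(pixels, k=4)
print(seeds)
```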
11693 PointNetLK-OBB: A Point Cloud Registration Algorithm with High Accuracy
Authors: Wenhao Lan, Ning Li, Qiang Tong
Abstract:
To improve the registration accuracy between a source point cloud and a template point cloud when the initial relative deflection angle is too large, a PointNetLK algorithm combined with an oriented bounding box (PointNetLK-OBB) is proposed. In this algorithm, the OBB of a 3D point cloud is used to represent the macro feature of the source and template point clouds. Under the guidance of the iterative closest point algorithm, the OBBs of the source and template point clouds are aligned, and a mirror symmetry effect is produced between them. According to the fitting degree of the source and template point clouds, the mirror symmetry plane is detected, and the optimal rotation and translation of the source point cloud are obtained to complete the 3D point cloud registration task. To verify the effectiveness of the proposed algorithm, a comparative experiment was performed using the publicly available ModelNet40 dataset. The experimental results demonstrate that, compared with PointNetLK, PointNetLK-OBB improves the registration accuracy of the source and template point clouds when the initial relative deflection angle is too large, and the sensitivity to the initial relative position between the source point cloud and template point cloud is reduced. The primary contribution of this paper is the use of PointNetLK to avoid the non-convex problem of traditional point cloud registration and the leveraging of the regularity of the OBB to avoid the local optimization problem in the PointNetLK context.
Keywords: Mirror symmetry, oriented bounding box, point cloud registration, PointNetLK-OBB.
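For reference, a common way to construct an OBB for a point cloud is via principal component analysis; the sketch below follows that construction under stated assumptions and may differ in detail from the paper's own OBB routine.

```python
# Sketch: PCA-based oriented bounding box (OBB) for a 3D point cloud.
import numpy as np

def oriented_bounding_box(points):
    """Return (center, axes, half_extents) of a PCA-based OBB."""
    center = points.mean(axis=0)
    centered = points - center
    cov = np.cov(centered.T)
    _, axes = np.linalg.eigh(cov)          # columns are the principal axes
    local = centered @ axes                # coordinates in the OBB frame
    mins, maxs = local.min(axis=0), local.max(axis=0)
    box_center = center + axes @ ((mins + maxs) / 2.0)
    return box_center, axes, (maxs - mins) / 2.0

rng = np.random.default_rng(4)
cloud = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.2])
c, R, half = oriented_bounding_box(cloud)
print("half extents along OBB axes:", np.round(half, 2))
```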
11692 Development of Fuzzy Logic and Neuro-Fuzzy Surface Roughness Prediction Systems Coupled with Cutting Current in Milling Operation
Authors: Joseph C. Chen, Venkata Mohan Kudapa
Abstract:
Development of two real-time surface roughness (Ra) prediction systems for milling operations was attempted. The systems used not only cutting parameters, such as feed rate and spindle speed, but also the cutting current generated and corrected by a clamp type energy sensor. Two different approaches were developed. First, a fuzzy inference system (FIS), in which the fuzzy logic rules are generated by experts in the milling processes, was used to conduct prediction modeling using current cutting data. Second, a neuro-fuzzy system (ANFIS) was explored. Neuro-fuzzy systems are adaptive techniques in which data are collected on the network, processed, and rules are generated by the system. The inference system then uses these rules to predict Ra as the output. Experimental results showed that the parameters of spindle speed, feed rate, depth of cut, and input current variation could predict Ra. These two systems enable the prediction of Ra during the milling operation with an average of 91.83% and 94.48% accuracy by FIS and ANFIS systems, respectively. Statistically, the ANFIS system provided better prediction accuracy than that of the FIS system.
Keywords: Surface roughness, input current, fuzzy logic, neuro-fuzzy, milling operations.
11691 Modeling and Optimization of Abrasive Waterjet Parameters using Regression Analysis
Authors: Farhad Kolahan, A. Hamid Khajavi
Abstract:
Abrasive waterjet is a novel machining process capable of processing a wide range of hard-to-machine materials. This research addresses the modeling and optimization of the process parameters for this machining technique. To model the process, a set of experimental data has been used to evaluate the effects of various parameter settings in cutting 6063-T6 aluminum alloy. The process variables considered here include nozzle diameter, jet traverse rate, jet pressure and abrasive flow rate. Depth of cut, as one of the most important output characteristics, has been evaluated based on different parameter settings. The Taguchi method and regression modeling are used to establish the relationships between input and output parameters. The adequacy of the model is evaluated using the analysis of variance (ANOVA) technique. The pairwise effects of process parameter settings on the process response outputs are also shown graphically. The proposed model is then embedded into a simulated annealing algorithm to optimize the process parameters. The optimization is carried out for any desired value of depth of cut. The objective is to determine proper levels of process parameters in order to obtain a certain level of depth of cut. Computational results demonstrate that the proposed solution procedure is quite effective in solving such multi-variable problems.
Keywords: AWJ cutting, Mathematical modeling, Simulated Annealing, Optimization
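A hedged sketch of the optimization stage follows: simulated annealing searching the parameter space for a target depth of cut, with a hypothetical stand-in for the fitted regression model (the coefficients and bounds are invented, not the paper's).

```python
# Sketch: simulated annealing over AWJ process parameters for a target depth of cut.
import math
import random

BOUNDS = {"nozzle_d": (0.8, 1.2), "traverse": (100, 500),
          "pressure": (150, 350), "abrasive": (200, 600)}

def depth_model(x):
    """Stand-in regression model for depth of cut (mm)."""
    return (0.004 * x["pressure"] + 0.0015 * x["abrasive"]
            - 0.002 * x["traverse"] + 1.5 * x["nozzle_d"])

def neighbor(x):
    y = dict(x)
    k = random.choice(list(BOUNDS))
    lo, hi = BOUNDS[k]
    y[k] = min(hi, max(lo, y[k] + random.uniform(-0.05, 0.05) * (hi - lo)))
    return y

def anneal(target, iters=5000, T0=1.0, cooling=0.999):
    x = {k: random.uniform(*v) for k, v in BOUNDS.items()}
    cost = abs(depth_model(x) - target)
    T = T0
    for _ in range(iters):
        cand = neighbor(x)
        c = abs(depth_model(cand) - target)
        # Accept improvements always, worse moves with Boltzmann probability.
        if c < cost or random.random() < math.exp((cost - c) / T):
            x, cost = cand, c
        T *= cooling
    return x, cost

best, err = anneal(target=2.0)
print({k: round(v, 3) for k, v in best.items()}, "error:", round(err, 4))
```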
11690 Multi-Layer Multi-Feature Background Subtraction Using Codebook Model Framework
Authors: Yun-Tao Zhang, Jong-Yeop Bae, Whoi-Yul Kim
Abstract:
Background modeling and subtraction in video analysis have been widely used as an effective method for moving object detection in many computer vision applications. Recently, a large number of approaches have been developed to tackle different types of challenges in this field. However, dynamic backgrounds and illumination variations are the most frequently occurring problems in practical situations. This paper presents a two-layer model based on the codebook algorithm incorporating a local binary pattern (LBP) texture measure, targeted at handling dynamic background and illumination variation problems. More specifically, the first layer is designed as a block-based codebook combining an LBP histogram with the mean value of each RGB color channel. Because of the invariance of the LBP features with respect to monotonic gray-scale changes, this layer can produce block-wise detection results with considerable tolerance to illumination variations. A pixel-based codebook is then employed to refine the output of the first layer and further eliminate false positives. As a result, the proposed approach can greatly improve accuracy under dynamic background and illumination changes. Experimental results on several popular background subtraction datasets demonstrate very competitive performance compared to previous models.
Keywords: Background subtraction, codebook model, local binary pattern, dynamic background, illumination changes.
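A minimal sketch of the 8-neighbour LBP code used as the texture feature follows; the paper's block partitioning and codebook construction are not reproduced here, and border pixels are skipped for brevity.

```python
# Sketch: 8-neighbour local binary pattern (LBP) codes and a block histogram.
import numpy as np

def lbp_image(gray):
    """Return LBP codes (0-255) for the interior pixels of a grayscale image."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = gray[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neighbor >= center).astype(np.uint8) << bit)
    return codes

rng = np.random.default_rng(5)
frame = rng.integers(0, 256, size=(6, 8), dtype=np.uint8)
codes = lbp_image(frame)
hist, _ = np.histogram(codes, bins=256, range=(0, 256))   # block LBP histogram
print(codes.shape, hist.sum())
```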
11689 Flow-Through Supercritical Installation for Producing Biodiesel Fuel
Authors: Y. A. Shapovalov, F. M. Gumerov, M. K. Nauryzbaev, S. V. Mazanov, R. A. Usmanov, A. V. Klinov, L. K. Safiullina, S. A. Soshin
Abstract:
A flow-through installation was created and manufactured for the transesterification of fatty acid triglycerides and the production of biodiesel fuel under supercritical fluid conditions. The transesterification of rapeseed oil with ethanol was carried out by varying two parameters, temperature and the alcohol/oil ratio, at a constant pressure of 19 MPa. The kinetics of the yield of fatty acid ethyl esters (FAEE) was determined in the temperature range of 320-380 °C at alcohol/oil molar ratios of 6:1-20:1. The content of the formed FAEE was determined from the kinematic viscosity of the resulting biodiesel fuel using a correlation method. The maximum FAEE yield (about 90%) was obtained within 30 min at an ethanol/oil molar ratio of 12:1 and a temperature of 380 °C. In studying the transesterification of triglycerides, a kinetic model of an isothermal flow reactor was used, and the reaction order realized in the flow reactor was determined. The first order of the reaction was confirmed by data on the conversion of FAEE during the reaction at different temperatures and molar ratios of the initial reagents (ethanol/oil). Using the Arrhenius equation, the values of the effective transesterification reaction rate constants were calculated at different reaction temperatures. In addition, based on the experimental data, the activation energy and the pre-exponential factor of the transesterification reaction were determined.
Keywords: Biodiesel, fatty acid esters, supercritical fluid technology, transesterification.
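A short worked sketch of the Arrhenius analysis follows, fitting ln k against 1/T to recover the activation energy and pre-exponential factor; the rate constants below are hypothetical placeholders, not the study's measured values.

```python
# Sketch of the Arrhenius fit: ln k = ln A - Ea / (R * T).
import numpy as np

R = 8.314  # J/(mol*K)

T = np.array([320.0, 350.0, 380.0]) + 273.15   # reaction temperatures, K
k = np.array([0.018, 0.045, 0.100])            # hypothetical first-order rate constants, 1/min

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R                                # activation energy, J/mol
A = np.exp(intercept)                          # pre-exponential factor, 1/min

print(f"Ea = {Ea / 1000:.1f} kJ/mol, A = {A:.3g} 1/min")
```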
11688 The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensics Handwriting Examination
Authors: O. Abiodun Adeyinka, B. Adeyemo Adesesan
Abstract:
The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When using biometric technology in forensic applications, it is necessary to compute the likelihood ratio (LR) for quantifying the strength of evidence under two competing hypotheses, namely the prosecution and the defense hypotheses, for which a set of assumptions and methods for a given data set must be chosen. It is therefore important to know how repeatable and reproducible our estimated LR is. This paper evaluates the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR are presented so as to avoid an incorrect estimate that could lead to a wrong judgment in a court of law. The estimation of the LR is fundamentally a Bayesian concept, and we used two LR estimators, namely Logistic Regression (LoR) and the Kernel Density Estimator (KDE), in this paper. The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, which are based on a handwriting dataset, show that the LR has different confidence intervals, which implies that the LR cannot be estimated with the same certainty everywhere. Though LoR performed better than KDE when tested on the same dataset, the two LR estimators investigated showed a consistent region in which the LR value can be estimated confidently. These two findings advance our understanding of the LR when used to compute the strength of evidence in forensic handwriting examination.
Keywords: Logistic Regression (LoR), Kernel Density Estimator (KDE), Handwriting, Confidence Interval, Repeatability, Reproducibility.
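A hedged sketch of a score-based LR computed with a kernel density estimator, one of the two estimator families named above, follows; the score distributions are synthetic and the paper's handwriting features and modified-LR details are not reproduced.

```python
# Sketch: score-based likelihood ratio via kernel density estimation.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(6)
same_writer_scores = rng.normal(0.8, 0.10, 200)     # scores under the prosecution hypothesis
diff_writer_scores = rng.normal(0.4, 0.15, 200)     # scores under the defense hypothesis

kde_p = gaussian_kde(same_writer_scores)
kde_d = gaussian_kde(diff_writer_scores)

def likelihood_ratio(score):
    return float(kde_p(score)[0] / kde_d(score)[0])

evidence_score = 0.7
print(f"LR({evidence_score}) = {likelihood_ratio(evidence_score):.2f}")
```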
11687 Evaluation of Buckwheat Genotypes to Different Planting Geometries and Fertility Levels in Northern Transition Zone of Karnataka
Authors: U. K. Hulihalli, Shantveerayya
Abstract:
Buckwheat (Fagopyrum esculentum Moench) is an annual crop belonging to the family Polygonaceae. The cultivated buckwheat species are notable for their exceptional nutritive value. Buckwheat is an important source of carbohydrates, fibre, and macro- and microelements such as K, Ca, Mg, Na and Mn, Zn, Se, and Cu. It also contains rutin, flavonoids, riboflavin, pyridoxine and many amino acids which have beneficial effects on human health, including lowering both blood lipid and sugar levels. Rutin, quercetin and some other polyphenols are potent protective agents against colon and other cancers. Buckwheat has significant nutritive value and plenty of uses. Cultivation of buckwheat in the southern part of India is very limited. Hence, a study was planned with the objective of evaluating the performance of buckwheat genotypes under different planting geometries and fertility levels. The field experiment was conducted at the Main Agriculture Research Station, University of Agricultural Sciences, Dharwad, India, during the 2017 Kharif season. The experiment was laid out in a split-plot design with three replications, with three planting geometries as main plots, two genotypes as sub-plots and three fertility levels as sub-sub-plot treatments. The soil of the experimental site was a Vertisol. Standard procedures were followed to record the observations. The 30*10 cm planting geometry recorded significantly higher seed yield (893 kg ha⁻¹), stover yield (1507 kg ha⁻¹), clusters plant⁻¹ (7.4), seeds cluster⁻¹ (7.9) and 1000-seed weight (26.1 g) compared to the 40*10 cm and 20*10 cm planting geometries. Between the genotypes, significantly higher seed yield (943 kg ha⁻¹) and harvest index (45.1) were observed with genotype IC-79147 compared to genotype PRB-1 (687 kg ha⁻¹ and 34.2, respectively). However, genotype PRB-1 recorded significantly higher stover yield (1344 kg ha⁻¹) than genotype IC-79147 (1173 kg ha⁻¹). Genotype IC-79147 recorded significantly higher clusters plant⁻¹ (7.1), seeds cluster⁻¹ (7.9) and 1000-seed weight (24.5 g) than PRB-1 (5.4, 5.8 and 22.3 g, respectively). Among the fertility levels tried, the fertility level of 60:30 NP kg ha⁻¹ recorded significantly higher seed yield (845 kg ha⁻¹) and stover yield (1359 kg ha⁻¹) compared to 40:20 NP kg ha⁻¹ (808 and 1259 kg ha⁻¹, respectively) and 20:10 NP kg ha⁻¹ (793 and 1144 kg ha⁻¹, respectively). Within the treatment combinations, genotype IC-79147 with the 30*10 cm planting geometry and 60:30 NP kg ha⁻¹ recorded significantly higher seed yield (1070 kg ha⁻¹), clusters plant⁻¹ (10.3), seeds cluster⁻¹ (9.9) and 1000-seed weight (27.3 g) compared to the other treatment combinations.
Keywords: Buckwheat, fertility levels, genotypes, geometry, polyphenols, rutin.
11686 Time/Temperature-Dependent Finite Element Model of Laminated Glass Beams
Authors: Alena Zemanová, Jan Zeman, Michal Šejnoha
Abstract:
The polymer foil used in the manufacturing of laminated glass members behaves in a viscoelastic manner with temperature dependence. This contribution aims at incorporating the time/temperature-dependent behavior of the interlayer into our earlier elastic finite element model for laminated glass beams. The model is based on a refined beam theory: each layer behaves according to the finite-strain shear-deformable formulation by Reissner, and the adjacent layers are connected via Lagrange multipliers ensuring the inter-layer compatibility of the laminated unit. The time/temperature-dependent behavior of the interlayer is accounted for by the generalized Maxwell model and by the time-temperature superposition principle due to Williams, Landel, and Ferry. The resulting system is solved by the Newton method with consistent linearization, and the viscoelastic response is determined incrementally by the exponential algorithm. By comparing the model predictions against available experimental data, we demonstrate that the proposed formulation is reliable and accurately reproduces the behavior of laminated glass units.
Keywords: Laminated glass, finite element method, finite-strain Reissner model, Lagrange multipliers, generalized Maxwell model, Williams-Landel-Ferry equation, Newton method.
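A small sketch of the interlayer model ingredients named above follows, combining the Williams-Landel-Ferry shift factor with a generalized Maxwell (Prony series) relaxation modulus; all material constants below are illustrative placeholders, not the interlayer data used in the paper.

```python
# Sketch: WLF time-temperature shift applied to a Prony-series relaxation modulus.
import numpy as np

C1, C2, T_REF = 8.86, 101.6, 20.0        # WLF constants (assumed), reference temp in deg C

def wlf_shift(T):
    """log10 of the WLF shift factor a_T."""
    return -C1 * (T - T_REF) / (C2 + (T - T_REF))

# Generalized Maxwell model: G(t) = G_inf + sum_i G_i * exp(-t / tau_i)
G_inf = 0.2e6                            # Pa
G_i = np.array([1.5e6, 0.8e6, 0.3e6])    # Pa
tau_i = np.array([0.1, 10.0, 1000.0])    # s, at the reference temperature

def relaxation_modulus(t, T):
    a_T = 10.0 ** wlf_shift(T)           # shift relaxation times to temperature T
    return G_inf + np.sum(G_i * np.exp(-t / (tau_i * a_T)))

print(f"G(10 s, 30 C) = {relaxation_modulus(10.0, 30.0):.3e} Pa")
```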
11685 Internal Structure Formation in High Strength Fiber Concrete during Casting
Authors: Olga Kononova, Andrejs Krasnikovs , Videvuds Lapsa, Jurijs Kalinka, Angelina Galushchak
Abstract:
The post-cracking behavior and load-bearing capacity of steel fiber reinforced high-strength concrete (SFRHSC) depend on the number of fibers crossing (bridging) the weakest crack and on their orientation to the crack surface. When a mould is filled with SFRHSC, the fibers move and rotate with the flow of the concrete matrix until the motion stops at each internal point of the concrete body. By filling the same mould from different ends, SFRHSC samples with different internal structures (and different strengths) can be obtained. Numerical flow simulations (using Newtonian and Bingham flow models) were performed, and the planar motion and rotation of a single fiber in a viscous flow were investigated numerically and experimentally. X-ray pictures of prismatic samples were obtained, and the internal fiber positions and orientations were analyzed. Similarly, the fiber positions and orientations in the cracked cross-section were identified and compared with the numerically simulated ones. A structural SFRHSC fracture model was created based on single-fiber pull-out laws, which were determined experimentally. Model predictions were validated by four-point bending tests on 15x15x60 cm prisms.
Keywords: fibers, orientation, high strength concrete, flow
11684 Biospeckle Supported Fruit Bruise Detection
Authors: Adilson M. Enes, Juliana A. Fracarolli, Inácio M. Dal Fabbro, Silvestre Rodrigues
Abstract:
This research work proposes a study of fruit bruise detection by means of a biospeckle method, selecting papaya (Carica papaya) as the test fruit. Papaya is recognized as a fruit of outstanding nutritional quality, with high contents of vitamin A, calcium, and carbohydrates, and it enjoys high popularity all over the world in terms of consumption and acceptability. The commercialization of papaya faces particular problems associated with bruise generation during harvesting, packing and transportation. Papaya is classified as a climacteric fruit, allowing it to be harvested before maturation is complete. On the one hand, bruise generation is partially controlled because the fruit flesh exhibits high mechanical firmness; on the other hand, mechanical loads can initiate a future bruise at that maturation stage, when it cannot yet be detected by conventional methods. Mechanical damage to the fruit skin leaves an entrance door for microorganisms and pathogens, which cause severe losses of quality attributes. Traditional techniques of fruit quality inspection include total soluble solids determination, mechanical firmness tests and visual inspections, which would hardly meet the requirements of a fully automated process. However, the pertinent literature reveals a new method, named biospeckle, which is based on laser reflectance and the interference phenomenon. The laser biospeckle, or dynamic speckle, is quantified by means of the Moment of Inertia, named after its mechanical counterpart due to the similarity between the defining formulae. Biospeckle techniques are able to quantify the biological activity of living tissues and have been applied to seed viability analysis, vegetable senescence and similar topics. Since biospeckle techniques can monitor tissue physiology, they could also detect changes in the fruit caused by mechanical damage. The proposed technique is non-invasive and able to generate numerical results suitable for automation. The experimental tests associated with this research work included the selection of papaya fruits at different maturation stages, which were submitted to artificial mechanical bruising tests. The damage was visually compared with the frequency maps yielded by the biospeckle technique, and the results were found to be in close agreement.
Keywords: Biospeckle, papaya, mechanical damages, vegetable bruising.
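A minimal sketch of the Moment of Inertia descriptor follows, computed in the usual biospeckle way from the co-occurrence matrix of a time-history speckle pattern (THSP); the data here are synthetic, and the paper's acquisition details are not reproduced.

```python
# Sketch: biospeckle Moment of Inertia from a THSP co-occurrence matrix.
import numpy as np

def inertia_moment(thsp, levels=256):
    """IM = sum_ij M[i, j] * (i - j)^2 over the row-normalized co-occurrence matrix."""
    com = np.zeros((levels, levels))
    for row in thsp:                                 # each row: one pixel's intensity history
        for a, b in zip(row[:-1], row[1:]):
            com[a, b] += 1
    row_sums = com.sum(axis=1, keepdims=True)
    M = np.divide(com, row_sums, out=np.zeros_like(com), where=row_sums > 0)
    i, j = np.indices(M.shape)
    return float(np.sum(M * (i - j) ** 2))

rng = np.random.default_rng(7)
active = rng.integers(0, 256, size=(64, 100))        # rapidly changing intensities
static = np.tile(rng.integers(0, 256, size=(64, 1)), (1, 100))  # unchanging intensities
print("IM active tissue :", round(inertia_moment(active), 1))
print("IM static region :", round(inertia_moment(static), 1))   # ~0 for no activity
```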
11683 Studies of Rule Induction by STRIM from the Decision Table with Contaminated Attribute Values from Missing Data and Noise — In the Case of Critical Dataset Size —
Authors: Tetsuro Saeki, Yuichi Kato, Shoutarou Mizuno
Abstract:
STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from a decision table, which is considered as a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments specifying rules in advance, and by comparison with conventional methods. However, scope for further development remains before STRIM can be applied to the analysis of real-world data sets. The first requirement is to determine the size of the dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity of rule induction from datasets with contaminated attribute values created by missing data and noise, since real-world datasets usually contain such contaminated data. This paper examines the first problem theoretically, in connection with the rule length. The second problem is then examined in a simulation experiment, utilizing the critical dataset size derived from the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values, and hence is applicable to real-world data.
Keywords: Rule induction, decision table, missing data, noise.
11682 Evaluation of the Internal Quality for Pineapple Based on the Spectroscopy Approach and Neural Network
Authors: Nonlapun Meenil, Pisitpong Intarapong, Thitima Wongsheree, Pranchalee Samanpiboon
Abstract:
In Thailand, once pineapples are harvested, they must be classified into two classes based on their sweetness: sweet and unsweet. This paper studies and develops the assessment of the internal quality of pineapples using a low-cost compact spectroscopy sensor, following a spectroscopy approach combined with a Neural Network (NN). During the experiments, Batavia pineapples were utilized, generating 100 samples. The extracted pineapple juice of each sample was used to determine the Soluble Solid Content (SSC), from which the samples were labeled as sweet or unsweet. In terms of experimental equipment, a sensor cover was specifically designed to hold the sensor and light source so as to read the reflectance at a 5 mm depth in the pineapple flesh. Using the spectroscopy sensor, visible and near-infrared reflectance (Vis-NIR) data were collected. The NN was used to classify the pineapple classes. Before the classification step, the preprocessing methods of class balancing, data shuffling, and standardization were applied. The reflectance values at 510 nm and 900 nm of the middle parts of the pineapples were used as features for the NN. With a sequential model and the ReLU activation function, 100% accuracy on the training set and 76.67% accuracy on the test set were achieved. Based on these results, the low-cost compact spectroscopy sensor achieves favorable results in classifying the sweetness of the two classes of pineapples.
Keywords: Spectroscopy, soluble solid content, pineapple, neural network.
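A hedged sketch of a two-feature ReLU classifier on synthetic reflectance data follows; scikit-learn's MLPClassifier stands in for the paper's sequential model, and the reflectance values, labels, and split are placeholders.

```python
# Sketch: two-feature sweetness classifier with shuffling and standardization.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
# Features: reflectance at 510 nm and 900 nm for the middle part of each fruit.
X = rng.random((100, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.75).astype(int)    # 1 = sweet, 0 = unsweet (synthetic rule)

# Preprocessing: shuffle and standardize (class balancing omitted for brevity).
idx = rng.permutation(len(X))
X, y = X[idx], y[idx]
scaler = StandardScaler().fit(X[:80])
X_train, X_test = scaler.transform(X[:80]), scaler.transform(X[80:])
y_train, y_test = y[:80], y[80:]

clf = MLPClassifier(hidden_layer_sizes=(8,), activation="relu",
                    max_iter=2000, random_state=0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```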