Search results for: Hexapod Machine Tool (HMT)
671 Aerodynamic Performance of a Pitching Bio-Inspired Corrugated Airfoil
Authors: Hadi Zarafshani, Shidvash Vakilipour, Shahin Teimori, Sara Barati
Abstract:
In the present study, the aerodynamic performance of a rigid two-dimensional pitching bio-inspired corrugated airfoil was numerically investigated at a Reynolds number of 14000. The OpenFOAM (Open Field Operation and Manipulation) computational fluid dynamics toolbox is used to solve the governing flow equations numerically. The k-ω SST turbulence model with low Reynolds correction (k-ω SST LRC) and the pimpleDyMFoam solver are utilized to simulate the flow field around the pitching bio-airfoil. The lift and drag coefficients of the airfoil are calculated at reduced frequencies k = 1.24-4.96 and angular amplitudes A = 5°-20°. The results show that, at a fixed reduced frequency, the absolute values of the sectional lift and drag coefficients increase with increasing pitching amplitude. At a fixed angular amplitude, the absolute values of the lift and drag coefficients increase as the reduced pitching frequency increases.
Keywords: Bio-inspired pitching airfoils, OpenFOAM, low Reynolds k-ω SST model, lift and drag coefficients.
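As a rough numerical illustration of the quantities reported above, the sketch below (Python, with made-up force values and an assumed chord, free-stream velocity and density) computes a reduced frequency and the sectional lift and drag coefficients; the convention k = πfc/U∞ and every number in it are assumptions for illustration, not values from the study.

```python
import math

# Assumed flow and geometry parameters (illustrative only, not from the paper)
rho = 1.225          # fluid density [kg/m^3]
U_inf = 1.0          # free-stream velocity [m/s]
chord = 0.2          # chord length [m]
span = 1.0           # unit span for a 2D section [m]
f_pitch = 2.0        # pitching frequency [Hz]

# Reduced frequency, using the common convention k = pi * f * c / U_inf
k = math.pi * f_pitch * chord / U_inf

# Hypothetical integrated aerodynamic forces per unit span [N]
lift, drag = 0.45, 0.12

q = 0.5 * rho * U_inf ** 2          # dynamic pressure
Cl = lift / (q * chord * span)      # sectional lift coefficient
Cd = drag / (q * chord * span)      # sectional drag coefficient
print(f"k = {k:.2f}, Cl = {Cl:.3f}, Cd = {Cd:.3f}")
```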
670 Twitter Sentiment Analysis during the Lockdown on New Zealand
Authors: Smah Doeban Almotiri
Abstract:
One of the most common fields of natural language processing (NLP) is sentiment analysis. The feelings expressed in text can be successfully mined for various events using sentiment analysis. Twitter is viewed as a reliable data source for sentiment analytics studies, since people used social media to receive and exchange different types of data on a broad scale during the COVID-19 epidemic. Processing such data may aid in making critical decisions on how to keep the situation under control. The aim of this research is to examine how sentiment differed in a single geographic region during the lockdown at two different times. A total of 1162 tweets related to the COVID-19 pandemic lockdown were analyzed using the keyword hashtags (lockdown, COVID-19); the first sample of tweets was from March 23, 2020, until April 23, 2020, and the second sample, from the following year, was from March 1, 2021, until April 4, 2021. Natural language processing, a form of artificial intelligence, was used to calculate the sentiment value of all the tweets with the AFINN lexicon sentiment analysis method. The findings revealed that the sentiment during both lockdown periods was positive in the samples of this study, which are unique to the specific geographical area of New Zealand. This research suggests applying machine learning sentiment methods such as CrystalFeel and extending the sample by using more tweets collected over a longer period of time.
Keywords: sentiment analysis, Twitter analysis, lockdown, Covid-19, AFINN, NodeJS
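A minimal sketch of AFINN-style lexicon scoring in Python; the tiny word list and example tweets below are illustrative stand-ins for the full AFINN lexicon and the collected data, not the pipeline used in the study.

```python
import re

# Tiny illustrative AFINN-style lexicon (word -> valence in [-5, 5]);
# the real AFINN list contains several thousand scored terms.
lexicon = {"good": 3, "great": 3, "safe": 1, "happy": 3,
           "bad": -3, "sad": -2, "afraid": -2}

def afinn_style_score(text: str) -> int:
    """Sum the valence of every lexicon word found in the tweet."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(lexicon.get(tok, 0) for tok in tokens)

tweets = ["Feeling safe and happy at home during lockdown",
          "So sad and afraid, this lockdown is bad"]
for t in tweets:
    print(afinn_style_score(t), t)
```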
669 N-Grams: A Tool for Repairing Word Order Errors in Ill-formed Texts
Authors: Theologos Athanaselis, Stelios Bakamidis, Ioannis Dologlou, Konstantinos Mamouras
Abstract:
This paper presents an approach for repairing word order errors in English text by reordering the words in a sentence and choosing the version that maximizes the number of trigram hits according to a language model. One possible way of reordering the words is to use all permutations; the problem is that for a sentence of N words the number of permutations is N!. The novelty of this method lies in the use of an efficient confusion-matrix technique for reordering the words. The confusion-matrix technique has been designed to reduce the search space among permuted sentences. The limitation of the search space is achieved using the statistical inference of N-grams. The results of this technique are very interesting and show that the number of permuted sentences can be reduced by 98.16%. For experimental purposes a test set of TOEFL sentences was used, and the results show that more than 95% of them can be repaired using the proposed method.
Keywords: Permutations filtering, Statistical language model N-grams, Word order errors, TOEFL
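The following Python sketch illustrates the underlying idea of scoring candidate word orders by trigram hits. The toy trigram set stands in for a real language model, and exhaustive permutation is feasible here only because the sentence is short; the paper's confusion-matrix filtering is precisely what avoids this N! search.

```python
from itertools import permutations

# Toy "language model": trigrams assumed to have been observed in a corpus
known_trigrams = {("i", "would", "like"), ("would", "like", "to"),
                  ("like", "to", "read"), ("to", "read", "books")}

def trigram_hits(words):
    """Count how many consecutive word triples appear in the trigram set."""
    return sum(tuple(words[i:i + 3]) in known_trigrams
               for i in range(len(words) - 2))

ill_formed = ["like", "i", "would", "read", "to", "books"]
# Exhaustive search over permutations (feasible only for very short sentences)
best = max(permutations(ill_formed), key=trigram_hits)
print(" ".join(best), "->", trigram_hits(best), "trigram hits")
```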
668 A Fuzzy-Rough Feature Selection Based on Binary Shuffled Frog Leaping Algorithm
Authors: Javad Rahimipour Anaraki, Saeed Samet, Mahdi Eftekhari, Chang Wook Ahn
Abstract:
Feature selection and attribute reduction are crucial problems and widely used techniques in the fields of machine learning, data mining and pattern recognition, used to overcome the well-known phenomenon of the curse of dimensionality. This paper presents a feature selection method that efficiently carries out attribute reduction, thereby selecting the most informative features of a dataset. It consists of two components: 1) a measure for feature subset evaluation, and 2) a search strategy. For the evaluation measure, we have employed the fuzzy-rough dependency degree (FRDD) of the lower approximation-based fuzzy-rough feature selection (L-FRFS) due to its effectiveness in feature selection. As the search strategy, a modified version of the binary shuffled frog leaping algorithm (B-SFLA) is proposed. The proposed feature selection method is obtained by hybridizing the B-SFLA with the FRDD. Nine classifiers have been employed to compare the proposed approach with several existing methods over twenty-two datasets, including nine high-dimensional and large ones, from the UCI repository. The experimental results demonstrate that the B-SFLA approach significantly outperforms other metaheuristic methods in terms of the number of selected features and the classification accuracy.
Keywords: Binary shuffled frog leaping algorithm, feature selection, fuzzy-rough set, minimal reduct.
667 Optimized Facial Features-based Age Classification
Authors: Md. Zahangir Alom, Mei-Lan Piao, Md. Shariful Islam, Nam Kim, Jae-Hyeung Park
Abstract:
The evaluation and measurement of human body dimensions are achieved by physical anthropometry. This research was conducted in view of the importance of anthropometric indices of the face in forensic medicine, surgery, and medical imaging. The main goal of this research is to optimize facial feature points by establishing a mathematical relationship among facial features, and to use the optimized feature points for age classification. Since the selected facial feature points are located in the areas of the mouth, nose, eyes and eyebrows in facial images, all desired facial feature points are extracted accurately. According to the proposed method, sixteen Euclidean distances are calculated from the eighteen selected facial feature points, vertically as well as horizontally. The mathematical relationships among the horizontal and vertical distances are established. Moreover, it is also observed that the facial feature distances follow a constant ratio during age progression: the distances between the specified feature points increase with age from childhood onward, but the ratio of the distances does not change (d = 1.618). Finally, according to the proposed mathematical relationship, four independent feature distances related to eight feature points are selected from the sixteen distances and eighteen feature points, respectively. These four feature distances are used for age classification with the Support Vector Machine (SVM)-Sequential Minimal Optimization (SMO) algorithm, achieving around 96% accuracy. The experimental results show that the proposed system is effective and accurate for age classification.
Keywords: 3D Face Model, Face Anthropometrics, Facial Features Extraction, Feature distances, SVM-SMO.
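A minimal sketch of the classification stage, assuming synthetic landmark coordinates and dummy age labels; scikit-learn's SVC (whose libsvm solver is SMO-based) stands in for the SVM-SMO classifier, and the landmark index pairs are purely illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def feature_distances(landmarks):
    """Euclidean distances between selected landmark pairs (indices are illustrative)."""
    pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]   # e.g. eye-eye, nose-mouth, ...
    return [np.linalg.norm(landmarks[a] - landmarks[b]) for a, b in pairs]

rng = np.random.default_rng(0)
# Synthetic landmark sets (8 points with x/y coordinates) and dummy age-group labels
X = np.array([feature_distances(rng.uniform(0, 100, size=(8, 2))) for _ in range(60)])
y = rng.integers(0, 2, size=60)                # 0 = child, 1 = adult (dummy labels)

clf = SVC(kernel="rbf").fit(X[:40], y[:40])    # libsvm's solver is SMO-based
print("held-out accuracy:", clf.score(X[40:], y[40:]))
```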
666 Performance Comparison of Situation-Aware Models for Activating Robot Vacuum Cleaner in a Smart Home
Authors: Seongcheol Kwon, Jeongmin Kim, Kwang Ryel Ryu
Abstract:
We assume an IoT-based smart-home environment in which the on-off status of each of the electrical appliances, including the room lights, can be recognized in real time by monitoring and analyzing smart meter data. At any moment in such an environment, we can recognize what the household or the user is doing by referring to the status data of the appliances. In this paper, we focus on a smart-home service that activates a robot vacuum cleaner at the right time by recognizing the user situation, which requires a situation-aware model that can distinguish the situations that allow vacuum cleaning (Yes) from those that do not (No). As candidate models we learn a few classifiers, such as naïve Bayes, decision tree, and logistic regression, that can map the appliance-status data into Yes and No situations. Our training and test data are obtained from simulations of user behaviors, in which a sequence of user situations such as cooking, eating, dish washing, and so on is generated, with the status of the relevant appliances changed in accordance with the situation changes. During the simulation, both the situation transitions and the resulting appliance statuses are determined stochastically. To compare the performances of the aforementioned classifiers, we obtain their learning curves for different types of users through simulations. The results of our empirical study reveal that naïve Bayes achieves a slightly better classification accuracy than the other compared classifiers.
Keywords: Situation awareness, smart home, IoT, machine learning, classifier.
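A minimal sketch of the classifier comparison on simulated appliance-status data; the on/off matrix, the toy labelling rule and the appliance indices are assumptions made only for illustration.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Simulated on/off status of 10 appliances; label 1 = "vacuuming allowed"
X = rng.integers(0, 2, size=(500, 10))
# Toy rule: cleaning is allowed when the TV (col 0) and stove (col 3) are both off
y = ((X[:, 0] == 0) & (X[:, 3] == 0)).astype(int)

for name, clf in [("naive Bayes", BernoulliNB()),
                  ("decision tree", DecisionTreeClassifier()),
                  ("logistic regression", LogisticRegression())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name:20s} mean accuracy = {scores.mean():.3f}")
```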
665 Face Recognition Using Principal Component Analysis, K-Means Clustering, and Convolutional Neural Network
Authors: Zukisa Nante, Wang Zenghui
Abstract:
Face recognition is the problem of identifying or recognizing individuals in an image. This paper investigates a possible method for solving this problem. The method proposes an amalgamation of Principal Component Analysis (PCA), K-Means clustering, and a Convolutional Neural Network (CNN) for a face recognition system. It is trained and evaluated using the ORL dataset, which consists of 400 face images in 40 classes of 10 images per class. Firstly, PCA enables the use of a smaller network, which reduces the training time of the CNN; redundancy is removed and the variance is preserved with a smaller number of coefficients. Secondly, the K-Means clustering model is trained on the PCA-compressed data, which selects K-Means cluster centers with better characteristics. Lastly, the K-Means characteristics, or features, serve as initial values for the CNN and act as input data. The accuracy and performance of the proposed method were tested in comparison to other face recognition (FR) techniques, namely PCA, Support Vector Machine (SVM), and K-Nearest Neighbour (kNN). During experimentation, our suggested method achieved the highest performance after 90 epochs: 99% accuracy and F1-score, 99% precision, and 99% recall in 463.934 seconds. It outperformed PCA, which obtained 97%, and kNN, which obtained 84%, in the conducted experiments. Therefore, this method proved to be efficient in identifying faces in images.
Keywords: Face recognition, Principal Component Analysis, PCA, Convolutional Neural Network, CNN, Rectified Linear Unit, ReLU, feature extraction.
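A minimal sketch of the PCA-plus-K-Means front end, using random arrays in place of the ORL images; the component and cluster counts follow the abstract, but the data and the hand-off to the CNN stage are only indicated, not implemented.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
faces = rng.random((400, 64 * 64))        # stand-in for 400 flattened ORL face images

pca = PCA(n_components=50)                # compress while preserving most of the variance
faces_pca = pca.fit_transform(faces)

kmeans = KMeans(n_clusters=40, n_init=10, random_state=0).fit(faces_pca)
# Cluster centres / assignments would then initialise or feed the CNN stage
print("explained variance:", pca.explained_variance_ratio_.sum().round(3))
print("first cluster sizes:", np.bincount(kmeans.labels_)[:5], "...")
```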
664 Entrepreneurship Education as a 21st Century Strategy for Economic Growth and Sustainable Development
Authors: M. Fems Kurotimi, Agada Franklin, Godsave Aladei, Opigo Helen
Abstract:
Within the last 30 years, entrepreneurship education (EE) has continued to gain massive interest both in the field of research and among policy makers. This surge in interest can be attributed to the perceived importance EE plays in equipping potential entrepreneurs and in serving as a 21st-century strategy to foster economic growth and development. This paper sets out to ascertain the correlation between EE and economic growth and development. A desk research approach was adopted, in which a multiplicity of literature in the field was studied intensively. The findings reveal that EE indeed has a positive effect on entrepreneurship engagement, thereby fostering economic growth and development. However, some research studies report the contrary: although EE may be able to equip potential entrepreneurs with the requisite entrepreneurial skills and competencies, it will only be successful in producing entrepreneurs if they are internally driven to become entrepreneurs, because we cannot make people what they are not. The findings also reveal that countries that adopted EE early have more innovations inspired by entrepreneurs and are more developed than those that only recently adopted EE as a viable tool for entrepreneurship and economic development.
Keywords: Entrepreneurship, entrepreneurship education, economic development, economic growth, sustainable development.
663 Degraded Document Analysis and Extraction of Original Text Document: An Approach without Optical Character Recognition
Authors: L. Hamsaveni, Navya Prakash, Suresha
Abstract:
Document image analysis recognizes text and graphics in documents acquired as images. An approach without Optical Character Recognition (OCR) for degraded document image analysis has been adopted in this paper. The technique involves document imaging methods such as image fusing and Speeded Up Robust Features (SURF) detection to identify and extract the degraded regions from a set of document images, so as to obtain an original document with complete information. In case the captured degraded document image is skewed, it has to be straightened (deskewed) before further processing. A special image storage format known as YCbCr is used as a tool to convert the grayscale image to the RGB image format. The presented algorithm is tested on various types of degraded documents, such as printed documents, handwritten documents, old script documents and handwritten image sketches in documents. The purpose of this research is to obtain an original document for a given set of degraded documents of the same source.
Keywords: Grayscale image format, image fusing, SURF detection, YCbCr image format.
662 Comparison of Different Neural Network Approaches for the Prediction of Kidney Dysfunction
Authors: Ali Hussian Ali AlTimemy, Fawzi M. Al Naima
Abstract:
This paper presents the prediction of kidney dysfunction using different neural network (NN) approaches. Self-Organizing Maps (SOM), a Probabilistic Neural Network (PNN) and a Multi-Layer Perceptron Neural Network (MLPNN) trained with the Back-Propagation Algorithm (BPA) are used in this study. Six hundred and sixty-three sets of analytical laboratory tests were collected from one of the private clinical laboratories in Baghdad. For each subject, serum urea and serum creatinine levels were analyzed and tested using clinical laboratory measurements. The collected urea and creatinine levels are then used as inputs to the three NN models, in which the training is done by the different neural approaches. SOM is a class of unsupervised networks, whereas PNN and MLPNN are considered supervised networks. These networks are used as classifiers to predict whether a kidney is normal or dysfunctional. The accuracy of prediction, sensitivity and specificity were found for each type of the proposed networks. We conclude that the PNN gives faster and more accurate prediction of kidney dysfunction, and that it works as a promising tool for predicting routine kidney dysfunction from clinical laboratory data.
Keywords: Kidney Dysfunction, Prediction, SOM, PNN, BPNN, Urea and Creatinine levels.
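A minimal sketch of a supervised network trained on urea and creatinine levels, using scikit-learn's MLPClassifier as a stand-in for the MLPNN/BPA setup; the synthetic values and the toy labelling rule are illustrative, not clinical data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 300
urea = rng.normal(30, 15, n).clip(5, 150)          # mg/dL, synthetic values
creatinine = rng.normal(1.0, 0.6, n).clip(0.3, 8)  # mg/dL, synthetic values
X = np.column_stack([urea, creatinine])
# Toy labelling rule standing in for clinical judgement (1 = dysfunction)
y = ((urea > 45) | (creatinine > 1.4)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000))
mlp.fit(X_tr, y_tr)
print("test accuracy:", round(mlp.score(X_te, y_te), 3))
```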
661 Performance Evaluation of Iris Region Detection and Localization for Biometric Identification System
Authors: Chit Su Htwe, Win Htay
Abstract:
Iris recognition technology is the most accurate, fast and least invasive one compared to other biometric techniques using, for example, fingerprints, face, retina, hand geometry, voice or signature patterns. The system developed in this study has the potential to play a key role in areas of high-risk security and can provide organizations with a fast and secure way of granting access to such areas only to authorized personnel. The aim of the paper is to perform iris region detection and localization of the inner and outer iris boundaries. The system was implemented on the Windows platform using the Visual C# programming language, which is an easy and efficient tool for image processing with good performance and accuracy. In particular, the system includes two main parts: the first preprocesses the iris images using Canny edge detection methods and segments the iris region from the rest of the image, and the second determines the location of the iris boundaries by applying the Hough transform. The proposed system was tested on 756 iris images from 60 eyes of the CASIA iris database.
Keywords: Canny, C#, Hough transform, image preprocessing.
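A minimal sketch of the Canny-plus-Hough localization idea in Python/OpenCV (the study itself used Visual C#); the synthetic eye image and all parameter values are assumptions for illustration.

```python
import cv2
import numpy as np

# Synthetic "eye" image: a dark pupil inside a grey iris on a light background
img = np.full((200, 200), 220, np.uint8)
cv2.circle(img, (100, 100), 60, 120, -1)   # iris (outer boundary)
cv2.circle(img, (100, 100), 25, 20, -1)    # pupil (inner boundary)

blur = cv2.GaussianBlur(img, (7, 7), 0)
edges = cv2.Canny(blur, 50, 150)           # edge map used to guide localisation

# Hough circle transform to locate the inner/outer iris boundaries
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                           param1=150, param2=20, minRadius=15, maxRadius=80)
print("edge pixels:", int(edges.sum() // 255))
print("detected circles (x, y, r):",
      np.round(circles[0], 1) if circles is not None else None)
```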
660 Centre Of Mass Selection Operator Based Meta-Heuristic For Unbounded Knapsack Problem
Authors: D. Venkatesan, K. Kannan, S. Raja Balachandar
Abstract:
In this paper a new genetic algorithm based on a heuristic operator and a centre-of-mass selection operator (CMGA) is designed for the unbounded knapsack problem (UKP), which is an NP-hard combinatorial optimization problem. The proposed genetic algorithm is based on a heuristic operator that utilizes problem-specific knowledge. This centre-of-mass operator, when combined with other genetic operators, forms an algorithm competitive with existing ones. Computational results show that the proposed algorithm is capable of obtaining high-quality solutions for standard randomly generated knapsack instances. A comparative study of CMGA with a simple GA, in terms of results for unbounded knapsack instances of size up to 200, shows the superiority of CMGA. Thus CMGA is an efficient tool for solving the UKP, and the algorithm is also competitive with other genetic algorithms.
Keywords: Genetic Algorithm, Unbounded Knapsack Problem, Combinatorial Optimization, Meta-Heuristic, Center of Mass
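A minimal genetic-algorithm sketch for the unbounded knapsack problem; the instance is made up, and plain elitist parent selection stands in for the paper's centre-of-mass selection operator, which is not reproduced here.

```python
import random

values  = [6, 10, 12]       # item values (illustrative instance, not from the paper)
weights = [1, 2, 3]         # item weights
capacity = 15

def fitness(counts):
    """Total value, or 0 if the weight limit is exceeded (items may repeat)."""
    w = sum(c * wt for c, wt in zip(counts, weights))
    return sum(c * v for c, v in zip(counts, values)) if w <= capacity else 0

def random_chromosome():
    return [random.randint(0, capacity // w) for w in weights]

pop = [random_chromosome() for _ in range(30)]
for _ in range(100):                                  # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                                  # keep the best solutions
    children = []
    while len(children) < 20:
        a, b = random.sample(elite, 2)                # simple parent selection
        cut = random.randint(1, len(values) - 1)      # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:                     # mutation
            i = random.randrange(len(child))
            child[i] = random.randint(0, capacity // weights[i])
        children.append(child)
    pop = elite + children

best = max(pop, key=fitness)
print("best item counts:", best, "value:", fitness(best))
```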
659 An Anatomically-Based Model of the Nerves in the Human Foot
Authors: Muhammad Zeeshan UlHaque, Peng Du, Leo K. Cheng, Marc D. Jacobs
Abstract:
Sensory nerves in the foot play an important part in the diagnosis of various neuropathy disorders, especially in diabetes mellitus. However, a detailed description of the anatomical distribution of the nerves is currently lacking. A computational model of the afferent nerves in the foot may be a useful tool for the study of diabetic neuropathy. In this study, we present the development of an anatomically-based model of various major sensory nerves of the sole and dorsal sides of the foot. In addition, we present an algorithm for generating synthetic somatosensory nerve networks in the big-toe region of a right foot model. The algorithm was based on a modified version of the Monte Carlo algorithm, with the capability of varying the intra-epidermal nerve fiber density in different regions of the foot model. Preliminary results from the combined model show the realistic anatomical structure of the major nerves as well as of the smaller somatosensory nerves of the foot. The model may now be developed to investigate the functional outcomes of structural neuropathy in diabetic patients.
Keywords: Diabetic neuropathy, Finite element modeling, Monte Carlo Algorithm, Somatosensory nerve networks
658 Computer Simulation of Low Volume Roads Made from Recycled Materials
Authors: Aleš Florian, Lenka Ševelová
Abstract:
Low volume roads are widely used all over the world. To improve their quality, computer simulation of their behavior is proposed. The FEM model enables the determination of stress and displacement conditions in the pavement and/or in the particular material layers. Different variants of pavement layers, materials used, humidity, as well as loading conditions can be studied. Among others, input information about the material properties of the individual layers made from recycled materials is crucial for obtaining results that are as exact as possible. For this purpose, cyclic-load triaxial testing of the cyclic-load performance of materials is a promising test method. The test is able to simulate real traffic loading on particular materials, taking into account the changes in the horizontal stress conditions produced in particular layers by the crossings of vehicles. The test specimen can also be prepared with different amounts of water. Thus the modulus of elasticity (Young's modulus) of different materials, including recycled ones, can be measured under different conditions of horizontal and vertical stress as well as under different humidity conditions. Using the proposed testing procedure, the modulus of elasticity of the recycled materials used in the newly built low volume road is obtained under different stress and humidity conditions, set to standard, dry and fully saturated levels. The obtained values of the modulus of elasticity are used in the FEA.
Keywords: FEA, FEM, geotechnical materials, low volume roads, pavement, triaxial test, Young modulus.
657 A Psychophysiological Evaluation of an Effective Recognition Technique Using Interactive Dynamic Virtual Environments
Authors: Mohammadhossein Moghimi, Robert Stone, Pia Rotshtein
Abstract:
Recording psychological and physiological correlates of human performance within virtual environments, and interpreting their impact on human engagement, 'immersion' and related emotional or 'affective' states, is both academically and technologically challenging. By exposing participants to an affective, real-time (game-like) virtual environment, designed and evaluated in an earlier study, a psychophysiological database containing the EEG, GSR and heart rate of 30 male and female gamers, exposed to 10 games, was constructed. Some 174 features were subsequently identified and extracted from a number of windows with 28 different lengths (e.g. 2, 3, 5, etc. seconds). After reducing the number of features to 30 using a feature selection technique, K-Nearest Neighbour (KNN) and Support Vector Machine (SVM) methods were employed for the classification process. The classifiers categorised the psychophysiological database into four affective clusters (defined based on a 3-dimensional space of valence, arousal and dominance) and eight emotion labels (relaxed, content, happy, excited, angry, afraid, sad, and bored). The KNN and SVM classifiers achieved average cross-validation accuracies of 97.01% (±1.3%) and 92.84% (±3.67%), respectively. However, no significant differences were found in the classification process based on affective clusters or emotion labels.
Keywords: Virtual reality, affective computing, affective VR, emotion-based affective physiological database.
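A minimal sketch of the window-feature-plus-classifier stage, with synthetic GSR-like windows standing in for the recorded psychophysiological signals; the sampling rate, window length and features are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
fs = 32                                    # assumed sampling rate [Hz]
win_s = 5                                  # one of the window lengths mentioned [s]

def window_features(signal):
    """Simple statistical features from one physiological window (illustrative)."""
    return [signal.mean(), signal.std(), signal.max() - signal.min()]

# Synthetic GSR-like windows for two emotion clusters
X = np.array([window_features(rng.normal(loc=cls, scale=1.0, size=fs * win_s))
              for cls in (0.0, 0.5) for _ in range(80)])
y = np.array([0] * 80 + [1] * 80)

for name, clf in [("KNN", KNeighborsClassifier(5)), ("SVM", SVC())]:
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))
```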
656 Development of EPID-based Real time Dose Verification for Dynamic IMRT
Authors: Todsaporn Fuangrod, Daryl J. O'Connor, Boyd MC McCurdy, Peter B. Greer
Abstract:
An electronic portal imaging device (EPID) has become a means of patient-specific IMRT dose verification for radiotherapy. Research studies have focused on pre- and post-treatment verification; however, there are currently no interventional procedures using EPID dosimetry that measure the dose in real time as a mechanism to ensure that overdoses do not occur and that underdoses are detected as soon as is practically possible. Therefore, an EPID-based real-time dose verification system for dynamic IMRT was developed and implemented in MATLAB/Simulink. The EPID image acquisition was set to continuous acquisition mode at 1.4 images per second. The system defined the time constraint gap, or execution gap, as the image acquisition time, so that every calculation must be completed before the next image capture is completed. In addition, the γ-evaluation method was used for dose comparison, with two types of comparison process monitored: individual-image and cumulative-dose comparison. The outputs of the system are the γ-map, the percentage of points with γ < 1, and the mean γ versus time, all in real time. Two strategies were used to test the system: an error detection test and a clinical data test. The system can monitor the actual dose delivery compared with the treatment plan data or a previous treatment dose delivery, which means a radiation therapist is able to switch off the machine when an error is detected.
Keywords: Real-time dose verification, EPID dosimetry, simulation, dynamic IMRT.
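A minimal 1D sketch of the gamma-evaluation idea (dose difference and distance-to-agreement combined into one index); the 3%/3 mm criteria, spacing and dose profiles are illustrative assumptions, and a clinical implementation would operate on 2D EPID images with interpolation.

```python
import numpy as np

def gamma_1d(ref, measured, spacing_mm, dd=0.03, dta_mm=3.0):
    """Per-point gamma index for 1D dose profiles (global dose-difference normalisation)."""
    x = np.arange(len(ref)) * spacing_mm
    d_max = ref.max()
    gammas = []
    for xi, de in zip(x, measured):
        # Capital-Gamma against every reference point, then take the minimum
        cap = np.sqrt(((x - xi) / dta_mm) ** 2 + ((ref - de) / (dd * d_max)) ** 2)
        gammas.append(cap.min())
    return np.array(gammas)

ref = np.exp(-np.linspace(-3, 3, 61) ** 2)          # synthetic reference profile
measured = ref * 1.02 + 0.005                        # slightly perturbed "EPID" profile
g = gamma_1d(ref, measured, spacing_mm=1.0)
print("gamma pass rate (gamma < 1):", np.mean(g < 1).round(3))
```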
655 Assessing Students’ Attitudinal Response towards the Use of Virtual Reality in a Mandatory English Class at a Women’s University in Japan
Authors: Felix David
Abstract:
The use of virtual reality (VR) technology is still in its infancy. This is especially true in the Japanese educational context, with very little to no exposure to VR technology inside classrooms. Technology is growing and changing rapidly in America, but Japan seems to be lagging behind in integrating VR into its curriculum. The aim of this research was to expose 111 students from Hiroshima Jogakuin University (HJU) to seven classes that involved VR content and to assess the students’ attitudinal responses toward this new technology. The students are all female, and they are taking the “Kiso Eigo/基礎英語” (Foundation English) course, which is mandatory for all first- and second-year students. Two surveys were given, one before and one after the treatment, which in this case means the seven VR classes. These surveys first established that the technical environment could accommodate VR activities in terms of internet connection, VR headsets, and the quality of the smartphones’ screens. Based on the attitudinal responses gathered in this research, VR is perceived by students as “fun” and as useful to “learn about the world” as well as to “learn about English.” This research validates VR as a worthy educational tool, and it should therefore continue to be an integral part of the mandatory English course curriculum at HJU.
Keywords: Virtual Reality, smartphone, English Learning, curriculum.
654 Graphical Environment for Modeling Control Systems in Full Scope Training Simulators
Authors: Guillermo Romero-Jiménez, Víctor Jiménez-Sánchez, Edgardo J. Roldán-Villasana
Abstract:
This paper describes the development of a control system model using a graphical software tool. This control system is part of an operator training simulator developed for the National Training Center for Operators of Ixtapantongo (CNCAOI, acronym according to its name in Spanish) of Mexico's Federal Commission of Electricity (CFE). The Department of Simulation of the Electrical Research Institute (IIE) developed this simulator using as reference Unit I of the El Sauz combined cycle power plant, located in the centre of Mexico. The first step in the project was the development of the gas turbine system and its control system simulator. The turbo-gas simulator was finished and delivered to the CNCAOI in March 2007 for commercial operation. This simulator is a high-fidelity, real-time dynamic simulator, built and tested for accurate operation over the entire load range. The simulator has been used primarily for operator training, although it has also been used for procedure development and the evaluation of plant transients.
Keywords: Operator training, power plant simulator, simulation environment.
653 High Specific Speed in Circulating Water Pump Can Cause Cavitation, Noise and Vibration
Authors: Chandra Gupt Porwal
Abstract:
Excessive vibration means increased wear, increased repair effort, poor product selection and quality, and high energy consumption. It may sometimes be caused by cavitation or suction/discharge recirculation, which can occur only when the net positive suction head available (NPSHA) drops below the net positive suction head required (NPSHR). Cavitation can cause axial surging which, if excessive, will frequently damage mechanical seals, bearings and possibly other pump components, and will shorten the life of the impeller. Efforts have been made in this paper to explain suction energy (SE), specific speed (Ns), suction specific speed (Nss), NPSHA and NPSHR and their significance, the possible causes of cavitation and internal recirculation, its diagnostics, and remedial measures to arrest and prevent cavitation. A case study is presented by the author highlighting that the root cause of unwanted noise and vibration is cavitation, caused by high specific speeds or inadequate net positive suction head available, which results in damage to the material surfaces of the impeller and suction bells and in degradation of the machine's performance, capacity and efficiency. The author strongly recommends revisiting the technical specifications of CW pumps to provide sufficient NPSH margin ratios (>1.5) for future projects, and limiting Nss to 8500-9000 for cavitation-free operation.
Keywords: Best efficiency point (BEP), Net positive suction head NPSHA, NPSHR, Specific Speed NS, Suction Specific Speed Nss.
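A small worked example of the quantities discussed above, in US customary units (rpm, gpm, ft); all input values are assumed for illustration and are not taken from the case study.

```python
# Illustrative calculation of specific speed, suction specific speed and NPSH margin
# for a circulating-water pump; every number below is an assumption, not case-study data.
N = 420.0          # pump speed [rpm]
Q = 50_000.0       # flow per impeller eye [gpm]
H = 60.0           # total head [ft]
NPSHA = 40.0       # net positive suction head available [ft]
NPSHR = 25.0       # net positive suction head required [ft]

Ns  = N * Q ** 0.5 / H ** 0.75        # specific speed (US units convention)
Nss = N * Q ** 0.5 / NPSHR ** 0.75    # suction specific speed
margin_ratio = NPSHA / NPSHR

print(f"Ns = {Ns:.0f}, Nss = {Nss:.0f}, NPSH margin ratio = {margin_ratio:.2f}")
print("margin adequate" if margin_ratio > 1.5 and Nss <= 9000 else "cavitation risk")
```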
652 Material and Parameter Analysis of the PolyJet Process for Mold Making Using Design of Experiments
Authors: A. Kampker, K. Kreisköther, C. Reinders
Abstract:
Since additive manufacturing technologies constantly advance, the use of this technology in mold making seems reasonable. Many manufacturers of additive manufacturing machines, however, do not offer any suggestions on how to parameterize the machine to achieve optimal results for mold making. The purpose of this research is to determine the interdependencies of different materials and parameters within the PolyJet process by using design of experiments (DoE), in order to additively manufacture molds, e.g. for thermoforming and injection molding applications. Therefore, the general requirements of thermoforming molds, such as heat resistance, surface quality and hardness, have been identified. Then, different materials and parameters of the PolyJet process, such as the orientation of the printed part, the layer thickness, the printing mode (matte or glossy), the distance between printed parts and the scaling of parts, have been examined. The multifactorial analysis covers the following properties of the printed samples: tensile strength, tensile modulus, bending strength, elongation at break, surface quality, heat deflection temperature and surface hardness. The key objective of this research is that, by joining the results from the DoE with the requirements of mold making, optimal and tailored molds can be additively manufactured with the PolyJet process. These additively manufactured molds can then be used in prototyping processes, in process testing and in small to medium batch production.
Keywords: Additive manufacturing, design of experiments, mold making, PolyJet.
651 Implementing a Visual Servoing System for Robot Controlling
Authors: Maryam Vafadar, Alireza Behrad, Saeed Akbari
Abstract:
Nowadays, with the emergence of new applications like robot control through image processing, artificial vision for visual servoing is a rapidly growing discipline, and human-machine interaction plays a significant role in controlling the robot. This paper presents a new algorithm based on spatio-temporal volumes for visual servoing, aimed at controlling robots. In this algorithm, after applying the necessary pre-processing to the video frames, a spatio-temporal volume is constructed for each gesture and a feature vector is extracted. These volumes are then analyzed for matching in two consecutive stages. For hand gesture recognition and classification we tested different classifiers, including k-nearest neighbor, learning vector quantization and back-propagation neural networks. We tested the proposed algorithm with the collected data set, and the results showed a correct gesture recognition rate of 99.58 percent. We also tested the algorithm with noisy images, where it showed a correct recognition rate of 97.92 percent.
Keywords: Back-propagation neural network, feature vector, hand gesture recognition, k-nearest neighbor, learning vector quantization neural network, robot control, spatio-temporal volume, visual servoing.
650 The Effect of Mixture Velocity and Droplet Diameter on Oil-water Separator using Computational Fluid Dynamics (CFD)
Authors: M. Abdulkadir, V. Hernandez-Perez
Abstract:
The characteristics of fluid flow and phase separation in an oil-water separator were numerically analysed as part of the work presented herein. Simulations were performed for different velocities and droplet diameters, and the way these parameters can influence the separator geometry was studied. The simulations were carried out using the software package Fluent 6.2, which is designed for the numerical simulation of fluid flow and mass transfer. The model consisted of a cylindrical horizontal separator. A tetrahedral mesh was employed in the computational domain. The two-phase flow condition was simulated with the two-fluid model, taking turbulence effects into consideration using the k-ε model. The results showed that there is a strong dependency of phase separation on mixture velocity and droplet diameter. An increase in mixture velocity will bring about a slowdown in phase separation and, as a consequence, will require a weir of greater height. An increase in droplet diameter will produce better phase separation. The simulations are in agreement with results reported in the literature and show that CFD can be a useful tool in studying a horizontal oil-water separator.
Keywords: CFD, droplet diameter, mixture velocity.
649 Effects of Position and Cut-Out Lengths on the Axial Crushing Behavior of Aluminum Tubes: Experimental and Simulation
Authors: B. Käfer, V. K. Bheemineni, H. Lammer, M. Kotnik, F. O. Riemelmoser
Abstract:
Axial compression tests were performed on circular tubes made of aluminum EN AW 6060 (AlMgSi0.5 alloy) in the T66 state. All the received tubes have a uniform outer diameter of 40 mm and a thickness of 1.5 mm. Two different lengths, 100 mm and 200 mm, are used in the analysis. After performing compression tests on the uniform tube, important crashworthiness parameters such as peak force, average force, crush efficiency and energy absorption are measured. The present paper focuses on increasing the crush efficiency without decreasing the energy absorption of a tube, so a circumferential notch was introduced in the top section of the tube. The effects of the position and cut-out length of the circumferential notch on the crush efficiency are explained in detail with the corresponding deformation modes and force-displacement curves. The numerical simulations were carried out with the software tool ANSYS/LS-DYNA. The numerical results are seen to be in reasonably good agreement with the experimental results.
Keywords: Crash box, Notch triggering, Energy absorption, FEM simulation.
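A minimal sketch of how the crashworthiness parameters named above relate to a force-displacement record; the synthetic curve below is illustrative only and does not reproduce the test data.

```python
import numpy as np

# Synthetic force-displacement record from an axial crush test (illustrative values)
displacement = np.linspace(0.0, 0.07, 200)                       # [m]
force = (20e3 + 8e3 * np.sin(40 * displacement)
         + 15e3 * np.exp(-displacement / 0.005))                  # [N], peak near start

peak_force = force.max()
# Energy absorption = area under the force-displacement curve (trapezoidal rule)
energy_absorbed = float(np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(displacement)))
mean_force = energy_absorbed / displacement[-1]                  # average crush force [N]
crush_efficiency = mean_force / peak_force                       # closer to 1 is better

print(f"peak {peak_force/1e3:.1f} kN, mean {mean_force/1e3:.1f} kN, "
      f"CE = {crush_efficiency:.2f}, EA = {energy_absorbed:.0f} J")
```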
648 A Detailed Experimental Study and Evaluation of Springback under Stretch Bending Process
Authors: A. Soualem
Abstract:
The design of multi-stage deep drawing processes requires the evaluation of many process parameters, such as the intermediate die geometry, the blank shape, the sheet thickness, the blank holder force, friction, lubrication, etc. These process parameters have to be determined for the optimum forming conditions before the process design. In general, sheet metal forming may involve stretching, drawing, or various combinations of these basic modes of deformation. It is important to determine the influence of the process variables in the design of sheet metal working processes. In particular, the punch and die corner radii for deep drawing affect the formability. At the same time, the prediction of sheet metal springback after deep drawing is an important issue to solve for the control of manufacturing processes. Nowadays, the importance of this problem increases because of the use of high-strength steel sheet and also aluminum alloys.
The aim of this paper is to give a better understanding of springback and its effect in various sheet metal forming processes, such as expansion and restrained deep drawing in the cup drawing process, by varying the die radius and lubricant for two commercially available materials, e.g. galvanized steel and aluminum sheet. To achieve these goals, experiments were carried out and compared with other results. The originality of our approach lies in tests performed by adapting a U-type stretch-bending device to a tensile testing machine, with which we studied and quantified the variation of springback.
Keywords: Deep drawing, expansion, restrained deep drawing, springback.
647 Analyzing Current Transformer’s Transient and Steady State Behavior for Different Burden’s Using LabVIEW Data Acquisition Tool
Abstract:
Current transformers (CTs) are used to transform large primary currents into a small secondary current. Since most standard equipment is not designed to handle large primary currents, CTs play an important part in any electrical system for the purposes of metering and protection, both of which are integral to the power system. Nowadays, due to advances in solid state technology, the operating times of protective relays have come down from a few seconds to a few cycles. In such a scenario it therefore becomes important to study the transient response of current transformers, as it plays a vital role in the operation of protective devices.
This paper shows the steady state and transient behavior of current transformers and how it changes with changes in the connected burden. The transient and steady state responses are captured using the data acquisition software LabVIEW. Analysis is performed on real-time data gathered using LabVIEW. The variation of current transformer characteristics with changes in burden is discussed.
Keywords: Accuracy, Accuracy limiting factor, Burden, Current Transformer, Instrument Security factor.
646 Parametric Modeling Approach for Call Holding Times for IP based Public Safety Networks via EM Algorithm
Authors: Badarch Tuyatsetseg
Abstract:
This paper presents parametric probability density models for call holding times (CHTs) to an emergency call center, based on actual data collected over a week in the public Emergency Information Network (EIN) in Mongolia. When the chosen set of candidates from the Gamma distribution family is fitted to the call holding time data, it is observed that the whole area of the empirical CHT histogram is underestimated, due to spikes of higher probability and long tails of lower probability in the histogram. Therefore, we provide a parametric model based on a mixture of lognormal distributions, with explicit analytical expressions, for the modeling of the CHTs of PSNs. Finally, we show that the CHTs for PSNs are fitted reasonably well by a mixture of lognormal distributions via the expectation-maximization (EM) algorithm. This result is significant, as it provides, in an explicit manner, a useful mathematical tool in the form of a mixture of lognormal distributions.
Keywords: Mixture of lognormal distributions, modeling call holding times, public safety network.
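A minimal sketch of fitting a two-component lognormal mixture by EM: since a lognormal mixture in t is a Gaussian mixture in log t, scikit-learn's GaussianMixture can run EM directly on the log-transformed holding times. The synthetic CHT sample is illustrative, not the EIN data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Synthetic call-holding times [s]: a short-call mode and a long-call mode
cht = np.concatenate([rng.lognormal(mean=3.0, sigma=0.4, size=700),
                      rng.lognormal(mean=5.0, sigma=0.6, size=300)])

# EM on log(t): a mixture of lognormals in t is a Gaussian mixture in log(t)
gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(cht).reshape(-1, 1))
for w, mu, var in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
    print(f"weight {w:.2f}: lognormal(mu={mu:.2f}, sigma={np.sqrt(var):.2f})")
```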
645 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes
Authors: Sky Chou, Joseph C. Chen
Abstract:
This paper focuses on using Six Sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study in which the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an injection molding process. To improve this process and keep the product within specifications, the Six Sigma methodology, namely the define, measure, analyze, improve, and control (DMAIC) approach, was implemented in this study. The Six Sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of the cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability index improved dramatically. The purpose of this study is to show that the Six Sigma and Taguchi methodologies can be efficiently used to determine the important factors that improve the process capability index of the injection molding process.
Keywords: Injection molding, shrinkage, six sigma, Taguchi parameter design.
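A small worked example of the process capability indices targeted by the DMAIC cycle; the shrinkage measurements and specification limits below are assumed for illustration.

```python
import numpy as np

# Illustrative shrinkage measurements [%] after a confirmation run (made-up numbers)
shrinkage = np.array([1.48, 1.52, 1.50, 1.49, 1.51, 1.47, 1.53, 1.50, 1.49, 1.51])
LSL, USL = 1.40, 1.60                     # assumed customer specification limits [%]

mu, sigma = shrinkage.mean(), shrinkage.std(ddof=1)
Cp  = (USL - LSL) / (6 * sigma)                     # potential capability
Cpk = min(USL - mu, mu - LSL) / (3 * sigma)         # capability accounting for centring
print(f"Cp = {Cp:.2f}, Cpk = {Cpk:.2f}")            # >= 1.33 is a common capability target
```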
644 Selecting Negative Examples for Protein-Protein Interaction
Authors: Mohammad Shoyaib, M. Abdullah-Al-Wadud, Oksam Chae
Abstract:
Proteomics is one of the largest areas of research in bioinformatics and medical science. An ambitious goal of proteomics is to elucidate the structure, interactions and functions of all proteins within cells and organisms. Predicting protein-protein interactions (PPI) is one of the crucial and decisive problems in current research. Genomic data offer a great opportunity, and at the same time many challenges, for the identification of these interactions. Many methods have already been proposed in this regard. In the case of in-silico identification, most of the methods require both positive and negative examples of protein interaction, and the quality of these examples is crucial for the final prediction accuracy. Positive examples are relatively easy to obtain from well-known databases, but the generation of negative examples is not a trivial task. Current PPI identification methods generate negative examples based on assumptions that are likely to affect their prediction accuracy. Hence, if more reliable negative examples are used, the PPI prediction methods may achieve even greater accuracy. Focusing on this issue, a graph-based negative example generation method is proposed, which is simple and more accurate than the existing approaches. An interaction graph of the protein sequences is created. The basic assumption is that the longer the shortest path between two protein sequences in the interaction graph, the lower the possibility of their interaction. A well-established PPI detection algorithm is employed with our negative examples, and in most cases it increases the accuracy by more than 10% in comparison with the negative pair selection method used in that work.
Keywords: Interaction graph, negative training data, protein-protein interaction, support vector machine.
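A minimal sketch of the shortest-path idea for picking negative pairs, using networkx on a toy interaction graph; the distance threshold and the graph itself are illustrative assumptions, not the datasets used in the study.

```python
import networkx as nx

# Toy protein interaction graph; edges are known (positive) interactions
G = nx.Graph([("P1", "P2"), ("P2", "P3"), ("P3", "P4"),
              ("P4", "P5"), ("P5", "P6"), ("P6", "P7")])

min_distance = 4          # assumed threshold: farther apart -> less likely to interact
lengths = dict(nx.all_pairs_shortest_path_length(G))

# Candidate negative pairs: proteins whose shortest path is long (or infinite)
negatives = [(a, b) for a in G for b in G if a < b
             and lengths[a].get(b, float("inf")) >= min_distance]
print("candidate negative pairs:", negatives)
```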
643 Green Product Design for Mobile Phones
Authors: İlke Bereketli, Müjde Erol Genevois, H. Ziya Ulukan
Abstract:
Nowadays, manufacturers are facing great challenges with regard to the production of green products due to the emerging issue of hazardous substance management (HSM). In particular, environmental legislation pressures have led to increased risk, manufacturing complexity and demand for green components. Green principles have expanded to many departments within the organization, including the supply chain, and green supply chain management (GSCM) has been emerging in the last few years. This idea covers every stage of manufacturing, from the first to the last stage of the life cycle. From the product life cycle perspective, the cycle starts at the design of a product. QFD is a customer-driven product development tool, considered a structured management approach for efficiently translating customer needs into design requirements and parts deployment, as well as manufacturing plans and controls, in order to achieve higher customer satisfaction. This paper develops an Eco-QFD to provide a framework for designing an eco-mobile phone by integrating life cycle analysis (LCA) into QFD throughout the entire product development process.
Keywords: Eco-design, Eco-QFD, EEE, environmental new product development, mobile phone.
642 Web-GIS based Outdoor Education Program for Elementary Schools
Authors: Noriyoshi Hosoya, Kayoko Yamamoto
Abstract:
This study, focusing on the importance of encouraging outdoor activities for children, aims to propose and implement a Web-GIS based outdoor education program for elementary schools, which is then evaluated by users. Specifically, for the purpose of improving outdoor activities in elementary school education, an outdoor education program chiefly using Web-GIS, which provides a good tool for information provision and sharing, is proposed, implemented and then evaluated by users. The conclusions of the study are as follows: (1) An eight-stage Web-GIS based outdoor education program was proposed for the “second school” of an elementary school and was then implemented and evaluated by users (teachers, instructors, students, and their parents). (2) The program generally received a good evaluation, although in the questionnaire survey conducted after the “second school” many students evaluated the degree of discovery negatively, and many parents evaluated the degree of interest negatively. The surveys clearly show that an issue to be solved, from the viewpoint of teachers in particular, is the establishment of GIS and Web-GIS that can easily represent the teaching materials developed by teachers, together with greater recognition of the usefulness of GIS and Web-GIS so that their use becomes widespread.
Keywords: Elementary schools, school education, outdoor education, Web-GIS.