Search results for: Machine translation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1318

118 High Specific Speed in Circulating Water Pump Can Cause Cavitation, Noise and Vibration

Authors: Chandra Gupt Porwal

Abstract:

Excessive vibration means increased wear, increased repair effort, poor product selection and quality, and high energy consumption. It may be caused by cavitation or by suction/discharge recirculation, which can occur when the net positive suction head available (NPSHA) drops below the net positive suction head required (NPSHR). Cavitation can cause axial surging and, if it is excessive, will frequently damage mechanical seals, bearings and possibly other pump components, and will shorten the life of the impeller. This paper explains suction energy (SE), specific speed (Ns), suction specific speed (Nss), NPSHA and NPSHR and their significance, the possible causes of cavitation and internal recirculation, its diagnostics, and remedial measures to arrest and prevent cavitation. A case study is presented in which the root cause of unwanted noise and vibration was cavitation, brought about by a high specific speed and inadequate net positive suction head available, which damaged the material surfaces of the impeller and suction bells and degraded the machine's performance, capacity and efficiency. The author strongly recommends revisiting the technical specifications of CW pumps for future projects to provide NPSH margin ratios greater than 1.5 and to limit Nss to 8500-9000 for cavitation-free operation.
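
For readers unfamiliar with the quantities above, the sketch below applies the two acceptance criteria quoted in the abstract (NPSH margin ratio above 1.5, Nss limited to about 8500-9000). It assumes the common US-customary definition Nss = N * sqrt(Q) / NPSHR^0.75 with N in rpm, Q in gpm and NPSHR in ft; the operating point is hypothetical, not taken from the case study.

```python
# Illustrative check of the cavitation criteria discussed above (not the author's code).
def suction_specific_speed(rpm, flow_gpm, npshr_ft):
    """Suction specific speed Nss in US units (rpm, gpm, ft)."""
    return rpm * flow_gpm ** 0.5 / npshr_ft ** 0.75

def cavitation_risk(rpm, flow_gpm, npsha_ft, npshr_ft,
                    min_margin=1.5, max_nss=9000):
    nss = suction_specific_speed(rpm, flow_gpm, npshr_ft)
    margin = npsha_ft / npshr_ft              # NPSH margin ratio
    return {"Nss": round(nss),
            "NPSH_margin": round(margin, 2),
            "at_risk": margin < min_margin or nss > max_nss}

# Hypothetical circulating-water pump operating point.
print(cavitation_risk(rpm=590, flow_gpm=20000, npsha_ft=32, npshr_ft=26))
```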

Keywords: Best efficiency point (BEP), Net positive suction head (NPSHA, NPSHR), Specific speed (Ns), Suction specific speed (Nss).

117 Material and Parameter Analysis of the PolyJet Process for Mold Making Using Design of Experiments

Authors: A. Kampker, K. Kreisköther, C. Reinders

Abstract:

Since additive manufacturing technologies constantly advance, the use of this technology in mold making seems reasonable. Many manufacturers of additive manufacturing machines, however, do not offer any suggestions on how to parameterize the machine to achieve optimal results for mold making. The purpose of this research is to determine the interdependencies of different materials and parameters within the PolyJet process by using design of experiments (DoE), to additively manufacture molds, e.g. for thermoforming and injection molding applications. Therefore, the general requirements of thermoforming molds, such as heat resistance, surface quality and hardness, have been identified. Then, different materials and parameters of the PolyJet process, such as the orientation of the printed part, the layer thickness, the printing mode (matte or glossy), the distance between printed parts and the scaling of parts, have been examined. The multifactorial analysis covers the following properties of the printed samples: Tensile strength, tensile modulus, bending strength, elongation at break, surface quality, heat deflection temperature and surface hardness. The key objective of this research is that by joining the results from the DoE with the requirements of the mold making, optimal and tailored molds can be additively manufactured with the PolyJet process. These additively manufactured molds can then be used in prototyping processes, in process testing and in small to medium batch production.
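
As a minimal illustration of the experimental planning step, the sketch below enumerates a full-factorial plan over the PolyJet parameters named in the abstract; the factor levels are assumptions for illustration, and the paper's actual design (e.g. a fractional factorial) may differ.

```python
# Sketch of a factorial experiment plan over the PolyJet parameters named above.
from itertools import product

factors = {
    "orientation": ["XY", "XZ", "ZX"],   # build orientation of the printed part
    "layer_um":    [16, 28],             # layer thickness in micrometres (assumed levels)
    "finish":      ["matte", "glossy"],  # printing mode
    "spacing_mm":  [1, 10],              # distance between printed parts
    "scaling_pct": [100, 102],           # scaling of parts
}

runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(runs), "runs in the full factorial")   # 3*2*2*2*2 = 48
print(runs[0])
```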

Keywords: Additive manufacturing, design of experiments, mold making, PolyJet.

116 Implementing a Visual Servoing System for Robot Controlling

Authors: Maryam Vafadar, Alireza Behrad, Saeed Akbari

Abstract:

Nowadays, with the emergence of new applications such as robot control through image processing, artificial vision for visual servoing is a rapidly growing discipline, and human-machine interaction plays a significant role in controlling the robot. This paper presents a new algorithm based on spatio-temporal volumes for visual servoing aimed at robot control. In this algorithm, after the necessary pre-processing of the video frames, a spatio-temporal volume is constructed for each gesture and a feature vector is extracted. These volumes are then analyzed for matching in two consecutive stages. For hand gesture recognition and classification we tested different classifiers, including k-nearest neighbor, learning vector quantization and back-propagation neural networks. We tested the proposed algorithm on the collected data set, and the results showed a correct gesture recognition rate of 99.58 percent. We also tested the algorithm on noisy images, where it achieved a correct recognition rate of 97.92 percent.
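
A sketch of the classification stage only is given below, using k-nearest neighbour (one of the three classifiers compared); the synthetic arrays stand in for the feature vectors extracted from the spatio-temporal volumes, which are not reproduced here.

```python
# Illustrative k-NN classification of gesture feature vectors (synthetic data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_gestures, n_features = 6, 64
# Synthetic stand-in for per-gesture feature vectors.
X = np.vstack([rng.normal(loc=g, scale=1.0, size=(40, n_features))
               for g in range(n_gestures)])
y = np.repeat(np.arange(n_gestures), 40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
print("recognition rate: %.2f%%" % (100 * clf.score(X_te, y_te)))
```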

Keywords: Back propagation neural network, Feature vector, Hand gesture recognition, k-Nearest Neighbor, Learning vector quantization neural network, Robot control, Spatio-temporal volume, Visual servoing

115 A Detailed Experimental Study and Evaluation of Springback under Stretch Bending Process

Authors: A. Soualem

Abstract:

The design of multi-stage deep drawing processes requires the evaluation of many process parameters, such as the intermediate die geometry, the blank shape, the sheet thickness, the blank holder force, friction and lubrication. These process parameters have to be determined for the optimum forming conditions before the process design. In general, sheet metal forming may involve stretching, drawing or various combinations of these basic modes of deformation. It is important to determine the influence of the process variables in the design of a sheet metal working process; in particular, the punch and die corner radii in deep drawing affect formability. At the same time, predicting the springback of sheet metal after deep drawing is an important issue for controlling manufacturing processes. The importance of this problem is increasing with the use of high-strength steel sheet and aluminum alloys.

The aim of this paper is to give a better understanding of springback and its effect in various sheet metal forming processes, such as expansion and restrained deep drawing in the cup drawing process, by varying the die radius and the lubricant for two commercially available materials, galvanized steel and aluminum sheet. To achieve these goals, experiments were carried out and compared with other results. The originality of our approach lies in tests performed by adapting a U-type stretch-bending device to a tensile testing machine, with which we studied and quantified the variation of the springback.

Keywords: Deep drawing, Expansion, Restrained deep drawing, Springback.

114 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using six sigma methodologies to reach the desired shrinkage of a high-density polyethylene (HDPE) part produced by injection molding. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk of an injection molding process. To improve this process and keep the product within specifications, the six sigma define, measure, analyze, improve, and control (DMAIC) approach was implemented in this study. The six sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors are the cooling time, melt temperature, holding time, and metering stroke; the noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal, and with the new settings the process capability index improved dramatically. The purpose of this study is to show that the six sigma and Taguchi methodologies can be used efficiently to determine the important factors that improve the process capability index of an injection molding process.
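
The sketch below computes the two quantities the study optimizes and ranks by: the capability indices Cp/Cpk of the shrinkage and the Taguchi nominal-the-best S/N ratio used to compare L9 runs. Specification limits and measurements are made-up examples, not the customer's values.

```python
# Hedged sketch of Cp/Cpk and the Taguchi nominal-the-best S/N ratio.
import numpy as np

def cp_cpk(samples, lsl, usl):
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

def sn_nominal_the_best(samples):
    """Taguchi S/N ratio (nominal-the-best), higher is better."""
    mu, sigma = np.mean(samples), np.std(samples, ddof=1)
    return 10 * np.log10(mu ** 2 / sigma ** 2)

shrinkage = np.array([1.52, 1.49, 1.55, 1.51, 1.50])   # % shrinkage, hypothetical run
print(cp_cpk(shrinkage, lsl=1.40, usl=1.60))
print(sn_nominal_the_best(shrinkage))
```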

Keywords: Injection molding, shrinkage, six sigma, Taguchi parameter design.

113 Selecting Negative Examples for Protein-Protein Interaction

Authors: Mohammad Shoyaib, M. Abdullah-Al-Wadud, Oksam Chae

Abstract:

Proteomics is one of the largest areas of research in bioinformatics and medical science. An ambitious goal of proteomics is to elucidate the structure, interactions and functions of all proteins within cells and organisms. Predicting protein-protein interaction (PPI) is one of the crucial and decisive problems in current research. Genomic data offer a great opportunity, and at the same time many challenges, for the identification of these interactions, and many methods have already been proposed in this regard. For in silico identification, most methods require both positive and negative examples of protein interaction, and the quality of these examples is crucial for the final prediction accuracy. Positive examples are relatively easy to obtain from well-known databases, but the generation of negative examples is not a trivial task. Current PPI identification methods generate negative examples based on assumptions that are likely to affect their prediction accuracy; hence, if more reliable negative examples are used, PPI prediction methods may achieve even higher accuracy. Focusing on this issue, a graph-based negative example generation method is proposed which is simple and more accurate than existing approaches. An interaction graph of the protein sequences is created, and the basic assumption is that the longer the shortest path between two protein sequences in the interaction graph, the lower the possibility of their interaction. A well-established PPI detection algorithm was run with our negative examples, and in most cases the accuracy increased by more than 10% compared with the negative pair selection method originally used with that algorithm.
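
A minimal sketch of the graph-based idea follows: build the interaction graph from known positive pairs and treat pairs whose shortest path is long (or that are disconnected) as candidate negatives. The path-length cut-off is an assumed value; the paper's exact threshold is not stated in the abstract.

```python
# Sketch: negative PPI example selection by shortest-path distance in the interaction graph.
import itertools
import networkx as nx

positives = [("A", "B"), ("B", "C"), ("C", "D"), ("E", "F")]   # toy positive PPI pairs
G = nx.Graph(positives)

def negative_pairs(graph, min_path_len=3):
    for u, v in itertools.combinations(graph.nodes, 2):
        if graph.has_edge(u, v):
            continue
        try:
            d = nx.shortest_path_length(graph, u, v)
        except nx.NetworkXNoPath:
            d = float("inf")
        if d >= min_path_len:          # far apart -> unlikely to interact
            yield (u, v, d)

print(list(negative_pairs(G)))
```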

Keywords: Interaction graph, Negative training data, Protein-Protein interaction, Support vector machine.

112 Procedure for Impact Testing of Fused Recycled Glass

Authors: David Halley, Tyra Oseng-Rees, Luca Pagano, Juan A Ferriz-Papi

Abstract:

Recycled glass material is made from 100% recycled bottle glass and consumes less energy than re-melt technology. It also uses no additives in the manufacturing process, allowing the recycled glass material, in principle, to return to the recycling stream after end-of-use, contributing to the circular economy with a low ecological impact. The aim of this paper is to investigate a procedure for testing the recycled glass material for impact resistance, so that it can be applied to pavements and other surfaces which are at risk of impact during service. A review of different impact test procedures for construction materials was undertaken, comparing methodologies and international standards applied to other materials such as natural stone, ceramics and glass. A drop weight impact testing machine was designed and manufactured in-house to perform these tests. As a case study, samples of the recycled glass material were manufactured with two different thicknesses and tested. The impact energy was calculated theoretically, giving values of 5 and 10 J, and the results on the material were then discussed. The procedure could be improved by using high-speed video to measure the velocity just before and immediately after impact and hence determine the absorbed energy. The initial results obtained with this procedure were positive, although repeatability needs to be established to obtain a correlation of results and finally validate the procedure. The experiments showed the practicality of this procedure and its applicability to impact testing of the recycled glass material, although further research is needed.
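
As a worked example of the theoretical impact energy used above (E = m*g*h), the snippet below solves for the drop heights that deliver the quoted 5 J and 10 J; the drop mass is an assumed value, since the rig's actual mass is not given here.

```python
# Drop-weight impact energy: E = m * g * h, solved for the drop height.
g = 9.81          # m/s^2
mass = 2.0        # kg, hypothetical drop weight

for energy_J in (5.0, 10.0):
    height_m = energy_J / (mass * g)
    print(f"{energy_J:.0f} J -> drop height {height_m * 1000:.0f} mm")
```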

Keywords: Construction materials, drop weight impact, impact testing, recycled glass.

111 Performance Assessment of Multi-Level Ensemble for Multi-Class Problems

Authors: Rodolfo Lorbieski, Silvia Modesto Nassar

Abstract:

Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition and medical diagnostics. The objective of this article is to analyze an adapted Stacking method for multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training scheme similar to Stacking was used, but with three levels, where the final decision-maker (level 2) is trained by combining the outputs of a pair of meta-classifiers (level 1) from the tree-based and Bayesian families, which are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the level-2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, for three factors: (a) datasets, (b) experiments and (c) levels. To compare the factors, a three-way ANOVA test was run for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only for time. Accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as with the dataset factor. It was concluded that level 2 performed on average above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
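
A minimal sketch of such a nested (three-level) stacking scheme is shown below using scikit-learn: level-0 base classifiers of one family feed a level-1 stacker, and the level-1 stackers are themselves combined by a level-2 final decision-maker. The particular classifiers and dataset are illustrative, not the paper's configuration.

```python
# Three-level stacking sketch: stacking ensembles nested inside a stacking ensemble.
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB, BernoulliNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

level1_tree = StackingClassifier(           # level 1, trained on tree-family level 0
    estimators=[("dt", DecisionTreeClassifier()), ("rf", RandomForestClassifier())],
    final_estimator=LogisticRegression(max_iter=1000))
level1_bayes = StackingClassifier(          # level 1, trained on Bayesian-family level 0
    estimators=[("gnb", GaussianNB()), ("bnb", BernoulliNB())],
    final_estimator=LogisticRegression(max_iter=1000))

level2 = StackingClassifier(                # level 2, the final decision-maker
    estimators=[("trees", level1_tree), ("bayes", level1_bayes)],
    final_estimator=LogisticRegression(max_iter=1000))

X, y = load_iris(return_X_y=True)
print(cross_val_score(level2, X, y, cv=5).mean())
```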

Keywords: Stacking, multi-layers, ensemble, multi-class.

110 A Software Framework for Predicting Oil-Palm Yield from Climate Data

Authors: Mohd. Noor Md. Sap, A. Majid Awan

Abstract:

Intelligent systems based on machine learning techniques, such as classification and clustering, are gaining widespread popularity in real-world applications. This paper presents work on developing a software system for predicting crop yield, for example oil-palm yield, from climate and plantation data. At the core of our system is a method for unsupervised partitioning of data that finds spatio-temporal patterns in climate data using kernel methods, which are well suited to dealing with complex data. This work is inspired by the notion that a non-linear transformation of the data into some high-dimensional feature space increases the possibility of linear separability of the patterns in the transformed space, and therefore simplifies the exploration of the associated structure in the data. Kernel methods implicitly perform such a non-linear mapping of the input data into a high-dimensional feature space by replacing the inner products with an appropriate positive definite function. In this paper we present a robust weighted kernel k-means algorithm incorporating spatial constraints for clustering the data. The proposed algorithm can effectively handle noise, outliers and auto-correlation in the spatial data, enabling effective and efficient data analysis by exploring patterns and structures in the data, and can thus be used for predicting oil-palm yield by analyzing the various factors affecting the yield.
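
A hedged sketch of the core clustering step is given below: weighted kernel k-means assigns each point to the cluster whose implicit feature-space centre is nearest, using only the kernel matrix. The spatial-constraint term of the proposed algorithm is omitted, and the RBF kernel and toy data are illustrative.

```python
# Weighted kernel k-means (core step only, no spatial constraint term).
import numpy as np

def weighted_kernel_kmeans(K, weights, k, n_iter=20, seed=0):
    """K: (n, n) kernel matrix, weights: (n,) nonnegative point weights."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=n)
    for _ in range(n_iter):
        dist = np.empty((n, k))
        for c in range(k):
            mask = labels == c
            w = weights[mask]
            s = w.sum()
            if s == 0:                        # empty cluster: make it unattractive
                dist[:, c] = np.inf
                continue
            Kic = K[:, mask]                  # kernel values to members of cluster c
            term2 = 2 * Kic @ w / s
            term3 = w @ K[np.ix_(mask, mask)] @ w / s ** 2
            dist[:, c] = np.diag(K) - term2 + term3   # squared feature-space distance
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Toy example with an RBF kernel on 2-D points drawn around two centres.
X = np.vstack([np.random.default_rng(1).normal(m, 0.3, size=(20, 2)) for m in (0, 3)])
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)
print(weighted_kernel_kmeans(K, np.ones(len(X)), k=2))
```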

Keywords: Pattern analysis, clustering, kernel methods, spatial data, crop yield

109 Safe and Efficient Deep Reinforcement Learning Control Model: A Hydroponics Case Study

Authors: Almutasim Billa A. Alanazi, Hal S. Tharp

Abstract:

Safe performance and efficient energy consumption are essential factors for designing a control system. This paper presents a reinforcement learning (RL) model that can be applied to control applications to improve safety and reduce energy consumption. As hardware constraints and environmental disturbances are imprecise and unpredictable, conventional control methods may not always be effective in optimizing control designs. However, RL has demonstrated its value in several artificial intelligence (AI) applications, especially in the field of control systems. The proposed model intelligently monitors a system's success by observing the rewards from the environment, with positive rewards counting as a success when the controlled reference is within the desired operating zone. Thus, the model can determine whether the system is safe to continue operating based on the designer/user specifications, which can be adjusted as needed. Additionally, the controller keeps track of energy consumption to improve energy efficiency by enabling the idle mode when the controlled reference is within the desired operating zone, thus reducing the system energy consumption during the controlling operation. Water temperature control for a hydroponic system is taken as a case study for the RL model, adjusting the variance of disturbances to show the model’s robustness and efficiency. On average, the model showed safety improvement by up to 15% and energy efficiency improvements by 35%-40% compared to a traditional RL model.
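
The monitoring logic described above can be pictured with the toy loop below: a positive reward is granted while the controlled reference stays inside the operating zone, a user-specified threshold triggers a safety stop, and idle mode skips actuation (saving energy) while inside the zone. The water-temperature dynamics and all numbers are assumptions, not the paper's model.

```python
# Toy safety/idle-mode monitoring loop in the spirit of the described RL controller.
import random

ZONE = (24.0, 26.0)          # desired water-temperature zone, deg C (hypothetical)
SAFETY_LIMIT = -5.0          # stop if cumulative reward drops below this

temp, total_reward, energy = 20.0, 0.0, 0.0
for step in range(50):
    in_zone = ZONE[0] <= temp <= ZONE[1]
    total_reward += 1.0 if in_zone else -0.1
    if total_reward < SAFETY_LIMIT:
        print("unsafe: stopping controller")
        break
    if in_zone:
        action = 0.0                      # idle mode: no heating/cooling power used
    else:
        action = 1.0 if temp < ZONE[0] else -1.0
        energy += abs(action)
    disturbance = random.gauss(0.0, 0.1)  # imprecise environmental disturbance
    temp += 0.5 * action + disturbance

print(f"final temp {temp:.1f} C, reward {total_reward:.1f}, energy used {energy:.0f}")
```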

Keywords: Control system, hydroponics, machine learning, reinforcement learning.

108 Weighted-Distance Sliding Windows and Cooccurrence Graphs for Supporting Entity-Relationship Discovery in Unstructured Text

Authors: Paolo Fantozzi, Luigi Laura, Umberto Nanni

Abstract:

The problem of entity relation discovery, a well-covered topic in the literature, consists in searching within unstructured sources (typically, text) in order to find connections among entities. The entities can be a whole dictionary, or a specific collection of named items. In many cases machine learning and/or text mining techniques are used for this goal, but these approaches might be unfeasible for computationally challenging problems, such as processing massive data streams. A faster approach consists in collecting the cooccurrences of any two words (entities) in order to create a graph of relations - a cooccurrence graph. Indeed, each cooccurrence highlights some degree of semantic correlation between the words, because related words are more commonly found close to each other than at opposite ends of the text. Some authors have used sliding windows for this problem: they count all the cooccurrences within a sliding window running over the whole text. In this paper we generalise this technique into a Weighted-Distance Sliding Window, where each occurrence of two named items within the window is counted with a weight depending on the distance between the items: a closer distance implies stronger evidence of a relationship. We develop an experiment to support this intuition, applying this technique to a data set consisting of the text of the Bible, split into verses.
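
The following sketch implements the weighted-distance window just described: every pair of entities co-occurring inside the window adds weight 1/distance to the edge between them, so closer pairs contribute stronger evidence. The window size and the 1/d weighting are illustrative choices consistent with, but not necessarily identical to, the paper's.

```python
# Weighted-distance sliding window cooccurrence graph.
from collections import defaultdict

def cooccurrence_graph(tokens, entities, window=10):
    entities = set(entities)
    edges = defaultdict(float)
    positions = [(i, t) for i, t in enumerate(tokens) if t in entities]
    for a in range(len(positions)):
        i, u = positions[a]
        for b in range(a + 1, len(positions)):
            j, v = positions[b]
            d = j - i
            if d > window:
                break
            if u != v:
                edges[tuple(sorted((u, v)))] += 1.0 / d   # weight decays with distance
    return dict(edges)

verse = "in the beginning god created the heaven and the earth".split()
print(cooccurrence_graph(verse, {"god", "heaven", "earth"}, window=6))
```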

Keywords: Cooccurrence graph, entity relation graph, unstructured text, weighted distance.

107 Application of KL Divergence for Estimation of Each Metabolic Pathway Genes

Authors: Shohei Maruyama, Yasuo Matsuyama, Sachiyo Aburatani

Abstract:

Developing methods to estimate gene function is an important task in bioinformatics. One approach to annotation is the identification of the metabolic pathway that a gene is involved in. Since gene expression data reflect various intracellular phenomena, those data are considered to be related to gene function; however, it has been difficult to estimate gene function with high accuracy. The low accuracy of the estimation is considered to be caused by the difficulty of measuring gene expression accurately: even when measured under the same conditions, gene expression values usually vary. In this study, we propose a feature extraction method that focuses on the variability of gene expression in order to estimate a gene's metabolic pathway accurately. First, we estimate the distribution of each gene's expression from replicate data. Next, we calculate the similarity between all gene pairs by KL divergence, a measure of the difference between distributions. Finally, we use the similarity vectors as feature vectors and train a multiclass SVM to identify the genes' metabolic pathways. To evaluate the developed method, we applied it to budding yeast and trained the multiclass SVM to identify seven metabolic pathways. As a result, the accuracy obtained with our method was higher than that obtained from the raw gene expression data. Thus, our method based on KL divergence is useful for identifying a gene's metabolic pathway.
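
A compact sketch of this pipeline follows: fit a distribution (here assumed Gaussian) to each gene's replicates, build the matrix of pairwise symmetrised KL divergences, and use each gene's row as its feature vector for a multiclass SVM. The Gaussian assumption, the symmetrisation and the toy data are illustrative.

```python
# KL-divergence feature vectors from replicate expression data, fed to a multiclass SVM.
import numpy as np
from sklearn.svm import SVC

def kl_gauss(m1, s1, m2, s2):
    """KL divergence between two univariate Gaussians N(m1, s1^2) || N(m2, s2^2)."""
    return np.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2) - 0.5

rng = np.random.default_rng(0)
n_genes, n_reps = 30, 5
expr = rng.normal(rng.uniform(0, 5, n_genes)[:, None], 0.5, size=(n_genes, n_reps))
pathway = rng.integers(0, 3, n_genes)          # toy pathway labels

mu, sd = expr.mean(axis=1), expr.std(axis=1, ddof=1)
D = np.array([[0.5 * (kl_gauss(mu[i], sd[i], mu[j], sd[j]) +
                      kl_gauss(mu[j], sd[j], mu[i], sd[i]))
               for j in range(n_genes)] for i in range(n_genes)])

clf = SVC(kernel="rbf").fit(D, pathway)        # divergence vectors as features
print("training accuracy:", clf.score(D, pathway))
```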

Keywords: Metabolic pathways, gene expression data, microarray, Kullback-Leibler (KL) divergence, support vector machine (SVM), machine learning.

106 Development and Characterization of Re-Entrant Auxetic Fibrous Structures for Application in Ballistic Composites

Authors: Rui Magalhães, Sohel Rana, Raul Fangueiro, Clara Gonçalves, Pedro Nunes, Gustavo Dias

Abstract:

Auxetic fibrous structures and composites with negative Poisson's ratio (NPR) have huge potential for application in ballistic protection due to their high energy absorption and excellent impact resistance. In the present research, re-entrant lozenge auxetic fibrous structures were produced through weft knitting technology using high performance polyamide and para-aramid fibres. Fabric structural parameters (e.g. loop length) and machine parameters (e.g. take down load) were varied in order to investigate their influence on the auxetic behaviour of the produced structures. These auxetic structures were then impregnated with two types of polymeric resins (epoxy and polyester) to produce composite materials, which were subsequently characterized for their auxetic behaviour. It was observed that the knitted fabrics produced using the polyamide yarns exhibited NPR over a wide deformation range, which was strongly dependent on the loop length and take down load. The polymeric composites produced from the auxetic fabrics also showed good auxetic properties, which were superior in the case of the polyester matrix. The experimental results suggest that composites made from these auxetic fibrous structures can be designed to find potential use in body armour for personal protection applications.

Keywords: Auxetic fabrics, high performance, composites, impact resistance, energy absorption.

105 Computing Entropy for Ortholog Detection

Authors: Hsing-Kuo Pao, John Case

Abstract:

Biological sequences from different species are called orthologs if they evolved from a sequence of a common ancestor species and they have the same biological function. Approximations of Kolmogorov complexity or entropy of biological sequences are already well known to be useful in extracting similarity information between such sequences - in the interest, for example, of ortholog detection. As is well known, the exact Kolmogorov complexity is not algorithmically computable. In practice one can approximate it by computable compression methods. However, such compression methods do not provide a good approximation to Kolmogorov complexity for short sequences. Herein is suggested a new approach to overcome the problem that compression approximations may not work well on short sequences. This approach is inspired by new, conditional computations of Kolmogorov entropy. A main contribution of the empirical work described shows the new set of entropy-based machine learning attributes provides good separation between positive (ortholog) and negative (non-ortholog) data - better than with good, previously known alternatives (which do not employ some means to handle short sequences well). Also empirically compared are the new entropy-based attribute set and a number of other, more standard similarity attribute sets commonly used in genomic analysis. The various similarity attributes are evaluated by cross validation, through boosted decision tree induction C5.0, and by Receiver Operating Characteristic (ROC) analysis. The results point to the conclusion: the new, entropy-based attribute set by itself is not the one giving the best prediction; however, it is the best attribute set for use in improving the other, standard attribute sets when conjoined with them.
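
As a generic stand-in for the compression-based entropy estimates discussed above (not the paper's conditional-entropy attributes), the snippet below computes the normalized compression distance NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)) with zlib; on very short sequences such estimates become unreliable, which is exactly the problem the paper addresses.

```python
# Normalized compression distance as a computable proxy for Kolmogorov-based similarity.
import zlib

def C(s: bytes) -> int:
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

seq_a = b"ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG" * 5
seq_b = b"ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGTTAG" * 5   # near-identical, ortholog-like
seq_c = b"GCGCGCTTATATAAAGGCCAACTTGACCAATCGGAAGCT" * 5
print("similar pair   :", round(ncd(seq_a, seq_b), 3))
print("dissimilar pair:", round(ncd(seq_a, seq_c), 3))
```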

Keywords: compression, decision tree, entropy, ortholog, ROC.

104 Noise Reduction in Web Data: A Learning Approach Based on Dynamic User Interests

Authors: Julius Onyancha, Valentina Plekhanova

Abstract:

One of the significant issues facing web users is the amount of noise in web data, which hinders the process of finding useful information related to their dynamic interests. Current research works consider noise to be any data that does not form part of the main web page, and they propose noise web data reduction tools that mainly focus on eliminating noise in relation to the content and layout of web data. This paper argues that not all data forming part of the main web page is of user interest, and that not all noise data is actually noise to a given user. Therefore, learning the noise web data allocated to user requests ensures not only a reduction of the noise level in a web user profile, but also a decrease in the loss of useful information, and hence improves the quality of the web user profile. A Noise Web Data Learning (NWDL) tool/algorithm capable of learning noise web data in a web user profile is proposed. The proposed work considers the elimination of noise data in relation to dynamic user interest. In order to validate the performance of the proposed work, an experimental design is presented, and the results obtained are compared with current algorithms applied in the noise web data reduction process. The experimental results show that the proposed work considers the dynamic change of user interest prior to the elimination of noise data. The proposed work contributes towards improving the quality of a web user profile by reducing the amount of useful information eliminated as noise.

Keywords: Web log data, web user profile, user interest, noise web data learning, machine learning.

103 Assessment Power and Frequency Oscillation Damping Using POD Controller and Proposed FOD Controller

Authors: Yahya Naderi, Tohid Rahimi, Babak Yousefi, Seyed Hossein Hosseini

Abstract:

Today's modern interconnected power system is highly complex in nature, and one of the most important requirements during its operation is reliability and security; power and frequency oscillation damping mechanisms improve reliability. Because the power system stabilizer (PSS) responds slowly to major faults such as a three-phase short circuit, FACTS devices, which can control the network condition very quickly, are becoming popular. However, the capability of FACTS devices during a major fault can only be seen when nonlinear models of the FACTS devices and the power system equipment are applied. To realize this aim, a model of a multi-machine power system with a FACTS controller is developed in MATLAB/Simulink using the SimPowerSystems (SPS) blockset. Among FACTS devices, the static synchronous series compensator (SSSC), which can rapidly change its reactance characteristic from inductive to capacitive, is an effective power flow controller. The tuning of the controller parameters can be performed using different methods, but the capability of the Genetic Algorithm (GA) makes it attractive for this task. In this paper, a POD controller is first used for power oscillation damping; in this configuration, however, the frequency oscillations are not properly damped. An FOD controller tuned using the GA is therefore employed, which damps out the frequency oscillations properly while the power oscillation damping remains satisfactory.
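
As a generic illustration of GA-based parameter tuning of the kind mentioned above, the sketch below uses a real-coded genetic algorithm to search a damping gain that minimises an ITAE-like cost of a toy oscillation; the paper instead tunes POD/FOD parameters on a nonlinear MATLAB/Simulink multi-machine model, so the model, cost and numbers here are placeholders.

```python
# Real-coded GA searching a controller gain that damps a toy 1 Hz oscillation mode.
import numpy as np

t = np.linspace(0, 10, 1000)
dt = t[1] - t[0]

def cost(gain):
    """ITAE of a unit-disturbance oscillation damped in proportion to the gain."""
    zeta = min(max(0.02 + 0.1 * gain, 0.0), 0.9)   # toy map: gain -> damping ratio
    wn = 2 * np.pi * 1.0                            # 1 Hz oscillation mode
    y = np.exp(-zeta * wn * t) * np.cos(wn * np.sqrt(1 - zeta ** 2) * t)
    return np.sum(t * np.abs(y)) * dt + 0.01 * gain ** 2   # ITAE + effort penalty

rng = np.random.default_rng(0)
pop = rng.uniform(0, 10, size=30)                   # initial population of gains
for _ in range(40):
    fitness = np.array([cost(g) for g in pop])
    parents = pop[np.argsort(fitness)[:10]]         # selection: keep the best
    children = rng.choice(parents, 20) + rng.normal(0, 0.3, 20)   # mutation-only variation
    pop = np.concatenate([parents, np.clip(children, 0, 10)])

best = pop[np.argmin([cost(g) for g in pop])]
print(f"tuned gain ~ {best:.2f}, cost {cost(best):.3f}")
```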

Keywords: Power oscillation damping (POD), frequency oscillation damping (FOD), Static synchronous series compensator (SSSC), Genetic Algorithm (GA).

102 Evolution of Web Development Techniques in Modern Technology

Authors: Abdul Basit Kiani, Maryam Kiani

Abstract:

The art of web development in new technologies is a dynamic journey, shaped by the constant evolution of tools and platforms. With the emergence of JavaScript frameworks and APIs, web developers are empowered to craft web applications that are not only robust but also highly interactive. The aim is to provide an overview of the developments in the field. The integration of artificial intelligence (AI) and machine learning (ML) has opened new horizons in web development. Chatbots, intelligent recommendation systems, and personalization algorithms have become integral components of modern websites. These AI-powered features enhance user engagement, provide personalized experiences, and streamline customer support processes, revolutionizing the way businesses interact with their audiences. Lastly, the emphasis on web security and privacy has been a pivotal area of progress. With the increasing incidents of cyber threats, web developers have implemented robust security measures to safeguard user data and ensure secure transactions. Innovations such as HTTPS protocol, two-factor authentication, and advanced encryption techniques have bolstered the overall security of web applications, fostering trust and confidence among users. Hence, recent progress in web development has propelled the industry forward, enabling developers to craft innovative and immersive digital experiences. From responsive design to AI integration and enhanced security, the landscape of web development continues to evolve, promising a future filled with endless possibilities.

Keywords: Web development, software testing, progressive web apps, web and mobile native application.

101 Finite Element Analysis of Raft Foundation on Various Soil Types under Earthquake Loading

Authors: Qassun S. Mohammed Shafiqu, Murtadha A. Abdulrasool

Abstract:

The design of shallow foundations to withstand different dynamic loads has received considerable attention in recent years. Dynamic loads may be due to earthquakes, pile driving, blasting, water waves, and machine vibrations, but predicting the behavior of shallow foundations during earthquakes remains a difficult task for geotechnical engineers. A database of dynamic and static parameters for different soils in seismically active zones in Iraq was prepared from geophysical and geotechnical investigation works. Using this database, an analysis of a typical 3-D soil-raft foundation system under earthquake loading was carried out, together with a parametric study of the influence of several parameters on the dynamic behavior of the raft foundation, such as raft stiffness and damping ratio, as well as the influence of the earthquake acceleration-time records. The results of the parametric study show that the settlement caused by the earthquake can be decreased by about 72% by increasing the raft thickness from 0.5 m to 1.5 m, while a reduction in the maximum bending moment of about 82% was predicted by decreasing the raft thickness from 1.5 m to 0.5 m in all site models. It was also observed that the maximum lateral displacement, maximum vertical settlement and maximum bending moment for a damping ratio of 0% are about 14%, 20%, and 18% higher, respectively, than for a damping ratio of 7.5% in all site models.

Keywords: Shallow foundation, seismic behavior, raft thickness, damping ratio.

100 The Effect of Tool Path Strategy on Surface and Dimension in High Speed Milling

Authors: A. Razavykia, A. Esmaeilzadeh, S. Iranmanesh

Abstract:

Many orthopedic implants, such as proximal humerus implants, require low surface roughness and are often needed at short notice for surgery, so a rapid response from the manufacturer is crucial. The tool path strategy of the milling process has a direct influence on the surface roughness and lead time of a medical implant. High-speed milling is a promising process for improving the machined surface quality, but conventional or super-abrasive grinding is still required, which imposes drawbacks such as additional cost and time. Currently, many CAD/CAM packages offer different tool path strategies for milling free-form surfaces. Nevertheless, users must identify how to choose among the strategies according to the cutting tool geometry, the geometric complexity, and their effects on the machined surface. This study investigates the effect of different tool path strategies for milling a proximal humerus head during a finishing operation on stainless steel 316L. Experiments were performed on a MAHO MH700 S vertical milling machine using four machining strategies, namely spiral outward, spiral inward, radial, and zig-zag. In all cases, the obtained surfaces were analyzed in terms of roughness and dimensional accuracy and compared with those obtained by simulation. The findings provide evidence that surface roughness, dimensional accuracy, and machining time are affected by the chosen tool path strategy.

Keywords: CAD/CAM software, milling, orthopedic implants, tool path strategy.

99 Mining User-Generated Contents to Detect Service Failures with Topic Model

Authors: Kyung Bae Park, Sung Ho Ha

Abstract:

Online user-generated contents (UGC) significantly change the way customers behave (e.g., shop, travel), and the pressing need to handle the overwhelming amount of varied UGC is one of the paramount issues for management. However, current approaches (e.g., sentiment analysis) are often ineffective for leveraging textual information to detect the problems or issues that a particular business suffers from. In this paper, we apply text mining with Latent Dirichlet Allocation (LDA) to a popular online review site dedicated to complaints from users. We find that LDA efficiently detects customer complaints, and that further inspection with a visualization technique is effective for categorizing the problems or issues. As such, management can identify the issues at stake and prioritize them in a timely manner given a limited amount of resources. The findings provide managerial insights into how analytics on social media can help maintain and improve reputation management. Our interdisciplinary approach also yields several insights by applying machine learning techniques in the marketing research domain. On a broader technical note, this paper illustrates the details of how to implement LDA in R, from beginning (data collection) to end (LDA analysis), since such instructions are still largely undocumented; in this regard, it will help lower the barrier for interdisciplinary researchers to conduct related research.
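
The LDA step can be pictured with the minimal Python sketch below (the paper itself walks through an R implementation); it shows only the topic-model fit on a few toy complaint texts, not the data collection or visualization stages.

```python
# Minimal LDA topic-model sketch on toy complaint texts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

complaints = [
    "late delivery and no refund for the damaged package",
    "rude staff at the counter and long waiting time",
    "package arrived damaged, refund was refused",
    "waiting time was terrible and the staff unhelpful",
]

vec = CountVectorizer(stop_words="english").fit(complaints)
dtm = vec.transform(complaints)                          # document-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
```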

Keywords: Latent Dirichlet allocation, R program, text mining, topic model, user generated contents, visualization.

98 Bridge Health Monitoring: A Review

Authors: Mohammad Bakhshandeh

Abstract:

Structural Health Monitoring (SHM) is a crucial and necessary practice that plays a vital role in ensuring the safety and integrity of critical structures, and in particular, bridges. The continuous monitoring of bridges for signs of damage or degradation through Bridge Health Monitoring (BHM) enables early detection of potential problems, allowing for prompt corrective action to be taken before significant damage occurs. Although all monitoring techniques aim to provide accurate and decisive information regarding the remaining useful life, safety, integrity, and serviceability of bridges, understanding the development and propagation of damage is vital for maintaining uninterrupted bridge operation. Over the years, extensive research has been conducted on BHM methods, and experts in the field have increasingly adopted new methodologies. In this article, we provide a comprehensive exploration of the various BHM approaches, including sensor-based, non-destructive testing (NDT), model-based, and artificial intelligence (AI)-based methods. We also discuss the challenges associated with BHM, including sensor placement and data acquisition, data analysis and interpretation, cost and complexity, and environmental effects, through an extensive review of relevant literature and research studies. Additionally, we examine potential solutions to these challenges and propose future research ideas to address critical gaps in BHM.

Keywords: Structural health monitoring, bridge health monitoring, sensor-based methods, machine-learning algorithms, model-based techniques, sensor placement, data acquisition, data analysis.

97 Starch Based Biofilms for Green Packaging

Authors: Roshafima R. Ali, W. A. Wan Abdul Rahman, Rafiziana M. Kasmani, N. Ibrahim

Abstract:

This research focused on the development of a degradable starch-based packaging film with enhanced mechanical properties. A series of low density polyethylene (LDPE)/tapioca starch compounds with various tapioca starch contents were prepared by twin screw extrusion with the addition of maleic anhydride grafted polyethylene as a compatibilizer. Palm cooking oil was used as a processing aid to ease the blown film process, so that the degradable film could be processed on a conventional blown film machine. The characteristics, mechanical properties and biodegradation of the films were studied by Fourier transform infrared (FTIR) spectroscopy and optical measurements, tensile testing, and exposure to a fungal environment, respectively. High starch contents had an adverse effect on the tensile properties of the LDPE/tapioca starch blends; however, the addition of the compatibilizer improved the interfacial adhesion between the two materials and hence the tensile properties of the films. A high starch content was also found to increase the biodegradation rate of the LDPE/tapioca starch films, as demonstrated by exposing the films to a fungal environment: microbial colonies grow on the surface of the LDPE/tapioca starch film, indicating that the granular starch present at the surface of the polymer film is attacked by microorganisms until most of it is assimilated as a carbon source.

Keywords: Degradable polymer, starch based biofilms, blown film extrusion, green food packaging.

96 Temperature Evolution, Microstructure and Mechanical Properties of Heat-Treatable Aluminum Alloy Welded by Friction Stir Welding: Comparison with Tungsten Inert Gas

Authors: Saliha Gachi, Mouloud Aissani, Fouad Boubenider

Abstract:

Friction stir welding (FSW) is a solid-state welding technique that can join materials without melting the plates to be welded. In this work, we demonstrate the potential of FSW for joining the heat-treatable aluminum alloy 2024-T3, which is reputedly difficult to weld by fusion techniques. The FSW joint is then compared with one obtained by a conventional fusion process, tungsten inert gas (TIG) welding. The FSW welds were made using an FSW tool mounted on a milling machine, and single-pass welding was used to fabricate the TIG joint. The two processes were compared in terms of temperature evolution and mechanical and microstructural behavior. The microstructural examination revealed that the FSW weld is composed of four zones: base metal (BM), heat affected zone (HAZ), thermo-mechanically affected zone (TMAZ) and nugget zone (NZ). The NZ exhibits recrystallized, equiaxed, refined grains that give better mechanical properties and good ductility compared to the TIG joint, where the grains in the welded region are larger than in the BM due to the elevated heat input. The microhardness results show that in the FSW weld the TMAZ contains the lowest microhardness values, which increase in the NZ, whereas in the TIG weld the lowest values are located in the NZ.

Keywords: Friction stir welding, tungsten inert gas, aluminum, microstructure.

95 From Primer Generation to Chromosome Identification: A Primer Generation Genotyping Method for Bacterial Identification and Typing

Authors: Wisam H. Benamer, Ehab A. Elfallah, Mohamed A. Elshaari, Farag A. Elshaari

Abstract:

A challenge for laboratories is to provide bacterial identification and antibiotic sensitivity results within a short time; hence, advancement in the required technology is desirable to improve timing, accuracy and quality. Even with the current advances in phenotypic and genotypic methods of bacterial identification, there is a need to develop methods that enhance the accuracy and turnaround time of bacteriology laboratories. The hypothesis introduced here is based on the assumption that the chromosome of any bacterium contains unique sequences that can be used for its identification and typing, and the outcome of a pilot study designed to test this hypothesis is reported in this manuscript. Methods: The complete chromosome sequences of several bacterial species were downloaded to use as search targets for unique sequences. Visual Basic and SQL Server (2014) were used to generate a complete set of 18-base primers, a process that started with the reverse translation of six randomly chosen amino acids to limit the number of generated primers. In addition, software was designed to scan the downloaded chromosomes for matches with the generated primers, and the resulting hits were classified according to the number of similar chromosomal sequences, i.e., unique or otherwise. Results: All primers that had identical or similar sequences in the selected genome sequence(s) were classified according to the number of hits in the chromosome search. Those that were identical to a single site on a single bacterial chromosome were referred to as unique, whereas most of the generated primer sequences were identical to multiple sites on a single or multiple chromosomes. Following scanning, the generated primers were classified based on their ability to differentiate between medically important bacteria, and the initial results look promising. Conclusion: A simple strategy was introduced that starts by generating primers; the primers were used to screen bacterial genomes for matches, and a primer that was uniquely identical to a specific DNA sequence on a specific bacterial chromosome was selected. The identified unique sequence can be used in different molecular diagnostic techniques, possibly to identify bacteria. In addition, a single primer that identifies multiple sites in a single chromosome can be exploited for region or genome identification. Although draft genome sequences of isolates enable high-throughput primer design using alignment strategies, which enhances diagnostic performance in comparison to traditional molecular assays, in this method the generated primers can be used to identify an organism before the draft sequence is completed. The generated primers can also be used to build a bank of easily accessible primers for bacterial identification.
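
A toy illustration of the scanning-and-classification step follows: slide 18-base windows over chromosome sequences, count hits for each candidate primer, and keep those with exactly one hit on exactly one chromosome as unique. The sequences and primers are invented; the study itself used Visual Basic and SQL Server on complete bacterial chromosomes.

```python
# Toy 18-mer scan of chromosome sequences to flag primers with a single unique hit.
from collections import defaultdict

PRIMER_LEN = 18

def hit_table(chromosomes, primers):
    hits = defaultdict(lambda: defaultdict(int))      # primer -> chromosome -> count
    for name, seq in chromosomes.items():
        for i in range(len(seq) - PRIMER_LEN + 1):
            kmer = seq[i:i + PRIMER_LEN]
            if kmer in primers:
                hits[kmer][name] += 1
    return hits

def unique_primers(hits):
    return [p for p, per_chr in hits.items()
            if len(per_chr) == 1 and sum(per_chr.values()) == 1]

chromosomes = {"bug_A": "ATGCGTACGTTAGCATCGATCGGATCCATGCGTACGTT",
               "bug_B": "TTAGCATCGATCGGATCCGGGCCCAAATTTGGGCCCAA"}
primers = {"ATGCGTACGTTAGCATCG", "TTAGCATCGATCGGATCC"}
print(unique_primers(hit_table(chromosomes, primers)))
```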

Keywords: Bacteria chromosome, bacterial identification, sequence, primer generation.

94 The Effect of Alkaline Treatment on Tensile Strength and Morphological Properties of Kenaf Fibres for Yarn Production

Authors: A. Khalina, K. Shaharuddin, M. S. Wahab, M. P. Saiman, H. A. Aisyah

Abstract:

This paper investigates the effect of alkali treatment on the mechanical properties of kenaf (Hibiscus cannabinus) fibre for the development of yarn. Two different fibre sources were used for the yarn production. Kenaf fibres were treated with sodium hydroxide (NaOH) at concentrations of 3, 6, 9, and 12% prior to the fibre opening process and tested for tensile strength and Young's modulus. The selected fibres were then introduced to a fibre opener at three different opening parameters, namely the speeds of the roller feeder, small drum, and big drum. The diameter, surface morphology, and durability of the fibres toward the machine were characterized. The results show that the NaOH concentration has a strong effect on the fibre mechanical properties: the tensile strength and modulus of the treated fibres of both types improved significantly compared with the untreated fibres, especially at the optimum level of 6% NaOH, which was found to be the optimum concentration for the alkaline treatment. The untreated and 6% NaOH-treated fibres were then introduced to the fibre opener, and the treated fibre produced a larger fibre diameter with better surface morphology than the untreated fibre. A higher speed during opening was found to produce a higher yield of opened kenaf fibres.

Keywords: Alkaline treatment, Kenaf fibre, Tensile strength, Yarn production.

93 Rank-Based Chain-Mode Ensemble for Binary Classification

Authors: Chongya Song, Kang Yen, Alexander Pons, Jin Liu

Abstract:

In the field of machine learning, ensembles are commonly employed to improve performance over multiple base classifiers. However, true predictions are often canceled out by false ones during consensus due to a phenomenon called the "curse of correlation", that is, strong interference among the predictions produced by the base classifiers. In addition, existing practices are still not able to effectively mitigate the problem of imbalanced classification. Based on the analysis of our experimental results, we conclude that the two problems are caused by inherent deficiencies in the consensus approach. Therefore, we create an enhanced ensemble algorithm that adopts a designed rank-based chain-mode consensus to overcome the two problems. In order to evaluate the proposed ensemble algorithm, we employ the well-known benchmark data set NSL-KDD (the improved version of the KDDCup99 dataset produced by the University of New Brunswick) to compare the proposed algorithm against 8 common ensemble algorithms. In particular, each compared ensemble classifier uses the same 22 base classifiers, so that the differences in the improvements in accuracy and reliability over the base classifiers can be truly revealed. As a result, the proposed rank-based chain-mode consensus proves to be a more effective ensemble solution than the traditional consensus approach, outperforming the 8 ensemble algorithms by 20% on almost all compared metrics, which include accuracy, precision, recall, F1-score and area under the receiver operating characteristic curve.

Keywords: Consensus, curse of correlation, imbalanced classification, rank-based chain-mode ensemble.

92 Cross Signal Identification for PSG Applications

Authors: Carmen Grigoraş, Victor Grigoraş, Daniela Boişteanu

Abstract:

The standard investigational method for the diagnosis of obstructive sleep apnea syndrome (OSAS) is polysomnography (PSG), which consists of a simultaneous, usually overnight, recording of multiple electro-physiological signals related to sleep and wakefulness. This is an expensive, cumbersome and not readily repeated protocol, and therefore there is a need for simpler and more easily implemented screening and detection techniques. Identification of apnea/hypopnea events in the screening recordings is the key factor for the diagnosis of OSAS. The analysis of a single-lead electrocardiographic (ECG) signal alone for OSAS diagnosis, which may be done with portable devices at the patient's home, has been the challenge of recent years. A novel artificial neural network (ANN) based approach for feature extraction and automatic identification of respiratory events in ECG signals is presented in this paper. A nonlinear principal component analysis (NLPCA) method was considered for feature extraction and a support vector machine for classification/recognition. An alternative representation of the respiratory events by means of a Kohonen-type neural network is also discussed. Our prospective study was based on OSAS patients, male and female, of the Clinical Hospital of Pneumology in Iaşi, Romania, as well as on non-OSAS human subjects. Our computed analysis includes a learning phase based on cross-signal PSG annotation.
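
A hedged sketch of the two stages named above is given below using scikit-learn stand-ins: a nonlinear (kernel) PCA step for feature extraction followed by an SVM classifier. The paper's NLPCA is ANN-based and operates on features derived from single-lead ECG; here random vectors stand in for those per-epoch feature vectors.

```python
# Kernel-PCA feature extraction followed by SVM classification (stand-in for NLPCA + SVM).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(100, 20))    # stand-in: normal-breathing epochs
apnea  = rng.normal(1.5, 1.0, size=(100, 20))    # stand-in: apnea/hypopnea epochs
X = np.vstack([normal, apnea])
y = np.array([0] * 100 + [1] * 100)

model = make_pipeline(KernelPCA(n_components=5, kernel="rbf"), SVC())
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```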

Keywords: Artificial neural networks, feature extraction, obstructive sleep apnea syndrome, pattern recognition, signal processing.

91 Mixed Mode Fracture Analyses Using Finite Element Method of Edge Cracked Heavy Spinning Annulus Pulley

Authors: Bijit Kalita, K. V. N. Surendra

Abstract:

The rotating disk is one of the most indispensable parts of a rotating machine and has found many applications in diverse fields of science and technology. In this paper, we consider the problem of a heavy spinning disk mounted on a rotor system and acted upon by boundary traction. Finite element modelling is used under various loading conditions to determine the mixed mode stress intensity factors. The effect of combined shear and normal traction on the boundary is incorporated in the analysis under the action of gravity. The field near the crack tip is characterized in terms of the stress intensity factor (SIF), with the aim of finding the SIF for a wide range of parameters. The results of finite element analyses carried out on the compressed disk of a belt pulley arrangement using fracture mechanics concepts are shown. A total of one hundred cases of the problem are solved for each variation in the loading arc parameter and crack orientation using finite element models of the disk under compression. Models were prepared and analyzed for the uncracked disk, for a disk with a single crack at different orientations emanating from the shaft hole, and for a disk with a pair of cracks emerging from the same center hole. Curves are plotted for various loading conditions, and finally the crack propagation paths are determined using kink angle concepts.

Keywords: Crack-tip deformations, static loading, stress concentration, stress intensity factor.

90 Analysis on Modeling and Simulink of DC Motor and its Driving System Used for Wheeled Mobile Robot

Authors: Wai Phyo Aung

Abstract:

Wheeled Mobile Robots (WMRs) are built with motors as their wheel-drive machines. Depending on the desired design of the WMR, technicians make use of DC motors for motion control. In this paper, the author analyzes how to choose a DC motor that matches its application, especially for a WMR. The specification of a DC motor suitable for the desired WMR is determined using a MATLAB Simulink model; therefore, this paper mainly focuses on the software application of MATLAB and on control technology. As the driving system of the DC motor, a Peripheral Interface Controller (PIC) based control system is designed, including the assembly software and an H-bridge control circuit. This driving system is used to drive two DC gear motors which control the motion of the WMR. In this analysis, the author mainly focuses on driving the two DC gear motors that control the wheeled mobile robot using the differential drive technique. For the design analysis of the motor driving system, a PIC16F84A is used, and five sensor inputs are tested with five ON/OFF switches. The outputs of the PIC are the commands that drive the two DC gear motors, i.e., the inputs of the H-bridge circuit. In this paper, the control techniques of the PIC microcontroller and the H-bridge circuit and the mechanical assignments of the WMR are combined and analyzed, focusing mainly on the modeling and simulation of the DC motor using MATLAB.
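
For reference, the kind of DC motor model built in Simulink can be written down from the standard armature/rotor equations, L*di/dt = V - R*i - Ke*w and J*dw/dt = Kt*i - b*w; the Python sketch below integrates them for a step voltage. The parameter values are typical small-gear-motor numbers chosen for illustration, not the motor used in the paper.

```python
# Step response of the standard DC motor model (armature current and rotor speed).
import numpy as np
from scipy.integrate import solve_ivp

R, L = 1.0, 0.5e-3        # armature resistance (ohm) and inductance (H)
Ke, Kt = 0.01, 0.01       # back-EMF and torque constants
J, b = 1e-5, 1e-6         # rotor inertia (kg m^2) and viscous friction
V = 12.0                  # step input voltage

def motor(t, x):
    i, w = x
    di = (V - R * i - Ke * w) / L
    dw = (Kt * i - b * w) / J
    return [di, dw]

sol = solve_ivp(motor, [0, 0.5], [0.0, 0.0], max_step=1e-4)
print(f"steady-state speed ~ {sol.y[1, -1]:.0f} rad/s, current ~ {sol.y[0, -1]:.2f} A")
```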

Keywords: Control System Design, DC Motors, Differential Drive, H-bridge control circuit, MATLAB Simulink model, Peripheral Interface Controller (PIC), Wheeled Mobile Robots.

89 A Flute Tracking System for Monitoring the Wear of Cutting Tools in Milling Operations

Authors: Hatim Laalej, Salvador Sumohano-Verdeja, Thomas McLeay

Abstract:

Monitoring of tool wear in milling operations is essential for achieving the desired dimensional accuracy and surface finish of a machined workpiece. Although there are numerous statistical models and artificial intelligence techniques available for monitoring the wear of cutting tools, these techniques cannot pinpoint which cutting edge of the tool, or which insert in the case of indexable tooling, is worn or broken. Currently, the task of monitoring the wear on the tool cutting edges is carried out by the operator, who performs a manual inspection, causing undesirable stoppages of machine tools and consequently costs incurred from lost productivity. The present study is concerned with the development of a flute tracking system to segment signals related to each physical flute of a three-flute cutter used in an end milling operation. The purpose of the system is to monitor the cutting condition of each flute separately in order to determine their progressive wear rates and to predict imminent tool failure. The results of this study clearly show that signals associated with each flute can be effectively segmented using the proposed flute tracking system. Furthermore, the results illustrate that by segmenting the sensor signal by flutes it is possible to investigate the wear of each physical cutting edge of the cutting tool. These findings are significant in that they facilitate the online condition monitoring of a cutting tool for each specific flute without the need for operators or engineers to perform manual inspections of the tool.
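
One possible segmentation scheme consistent with this idea is sketched below: given the spindle speed and the number of flutes, the sensor signal is cut into tooth-passing periods and each period is attributed to a flute in rotation order, after which a per-flute indicator (e.g. RMS) can be tracked. The paper's actual tracking method may differ, and all numbers are illustrative.

```python
# Hypothetical per-flute segmentation of a sensor signal by tooth-passing period.
import numpy as np

fs = 10_000          # sensor sampling rate, Hz
rpm = 3_000          # spindle speed
n_flutes = 3

samples_per_rev = int(fs * 60 / rpm)              # 200 samples per spindle revolution
samples_per_tooth = samples_per_rev // n_flutes   # ~66 samples per flute engagement

signal = np.random.default_rng(0).normal(size=fs)  # stand-in for a force/vibration signal

flute_segments = {f: [] for f in range(n_flutes)}
for start in range(0, len(signal) - samples_per_tooth, samples_per_tooth):
    flute = (start // samples_per_tooth) % n_flutes
    flute_segments[flute].append(signal[start:start + samples_per_tooth])

# Example per-flute wear indicator: RMS of that flute's segments.
for f, segs in flute_segments.items():
    print(f"flute {f}: {len(segs)} segments, RMS {np.sqrt(np.mean(np.square(segs))):.3f}")
```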

Keywords: Tool condition monitoring, tool wear prediction, milling operation, flute tracking.
