Search results for: histogram feature.

61 Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining

Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride

Abstract:

In this work, we use machine learning and data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality, and a comprehensive feature space including demographic information, comorbidities, clinical procedures and laboratory tests is analyzed. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used, and the Model for End-stage Liver Disease (MELD) prediction of mortality serves as a comparator. All of our models statistically significantly outperform the MELD-score model, showing an average 10% improvement in the area under the curve (AUC). The FSM technique alone does not yield significant improvements, owing to the sparsity of the temporal information it requires, but FSM combined with an ensemble machine learning technique further improves model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. Our work applies modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
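
To make the MELD comparison concrete, here is a minimal Python sketch of scoring a learned classifier against a single-score baseline by AUC. The synthetic data, feature count and model choice are illustrative assumptions, not the authors' cohort or pipeline:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2322                                  # cohort size from the abstract
X = rng.normal(size=(n, 20))              # stand-in for demographics, labs, etc.
meld = 2 * X[:, 0] + rng.normal(size=n)   # stand-in for the MELD score
y = (meld + X[:, 1] > 1).astype(int)      # synthetic one-year mortality label

X_tr, X_te, meld_tr, meld_te, y_tr, y_te = train_test_split(
    X, meld, y, test_size=0.3, random_state=0)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
auc_model = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
auc_meld = roc_auc_score(y_te, meld_te)   # a raw risk score can be ranked directly
print(f"model AUC={auc_model:.3f} vs baseline AUC={auc_meld:.3f}")
```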

Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning

60 Fake Account Detection in Twitter Based on Minimum Weighted Feature Set

Authors: Ahmed El Azab, Amira M. Idrees, Mahmoud A. Mahmoud, Hesham Hefny

Abstract:

Social networking sites such as Twitter and Facebook attract over 500 million users across the world, and for those users, social life and even practical life have become interrelated with these platforms. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real-time events. This popularity has led to various problems, including the possibility of exposing users to incorrect information through fake accounts, which results in the spread of malicious content during live events. This situation can cause huge damage in the real world to society in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting fake accounts on Twitter. The study determines the minimized set of the main factors that influence the detection of fake accounts on Twitter, and these factors are then applied using different classification techniques. The results of these techniques are compared, and the most accurate algorithm is selected according to the accuracy of the results. The study has been compared with recent research in the same area, and this comparison demonstrates the accuracy of the proposed approach. We claim that this study can be continuously applied on the Twitter social network to automatically detect fake accounts; moreover, it can be applied to other social network sites such as Facebook with minor changes according to the nature of the social network, as discussed in this paper.
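
A minimal sketch of the selection step, assuming a small illustrative feature set (follower count, friend count, tweet count, account age — assumptions, not the paper's determined factors): several classifiers are cross-validated on the same features and the most accurate one is kept.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.random((500, 4))         # e.g. followers, friends, tweets, account age
y = (X[:, 0] < 0.2).astype(int)  # synthetic "fake" label for illustration

candidates = {
    "tree": DecisionTreeClassifier(random_state=0),
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),
}
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```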

Keywords: Fake account detection, classification algorithms, Twitter account analysis, feature-based techniques.

59 Application of Voltage Stability Indices for Proper Placement of STATCOM under Load Increase Scenario

Authors: A. S. Telang, P. P. Bedekar

Abstract:

In today’s world, electrical energy has become an indispensable component of all aspects of modern human life. Reliability, security and stability are the key aspects of any power system, and failure to meet any of these three results in a great impediment to modern life. Modern power systems are being subjected to heavily stressed conditions leading to voltage stability problems. If voltage stability problems are not mitigated through proper voltage stability assessment methods, cascading events may occur, which may lead to voltage collapse or blackouts. Modern FACTS devices such as the STATCOM are one of the measures to overcome blackout problems. As these devices are very costly, they must be installed at suitable locations, mostly at the weakest bus. Line voltage stability indices such as FVSI, Lmn and LQP play an important role in identifying a weak bus. This paper presents an evaluation of these line stability indices for obtaining reliable information about the closeness of the power system to voltage collapse. PSAT, a user-friendly MATLAB toolbox, provides continuation power flow (CPF) as an important feature, which has been used extensively here for the placement of the STATCOM to assess stability. The novelty of the present work lies in changing the active and reactive loads simultaneously at all the load buses under consideration. MATLAB code was developed for this purpose and tested successfully on various standard IEEE test systems; the results for the standard IEEE 14-bus test system, specifically, are presented in this paper.
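
The three indices are only named in the abstract; the sketch below writes them out in their commonly cited textbook forms, which is an assumption (conventions and symbols vary between papers, so consult the paper for the exact definitions). Values approaching 1.0 flag a line whose receiving-end bus is close to voltage collapse.

```python
import math

def fvsi(z, x, q_r, v_s):
    """Fast Voltage Stability Index: 4 Z^2 Q_r / (V_s^2 X), per unit."""
    return 4 * z**2 * q_r / (v_s**2 * x)

def lmn(x, q_r, v_s, theta, delta):
    """Line stability index Lmn: 4 X Q_r / (V_s sin(theta - delta))^2."""
    return 4 * x * q_r / (v_s * math.sin(theta - delta))**2

def lqp(x, p_s, q_r, v_s):
    """Line stability factor LQP: 4 (X / V_s^2)(X P_s^2 / V_s^2 + Q_r)."""
    return 4 * (x / v_s**2) * (x * p_s**2 / v_s**2 + q_r)

# Example (per-unit): a lightly loaded line stays well below 1.0.
print(fvsi(z=0.06, x=0.05, q_r=0.3, v_s=1.0))
```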

Keywords: Voltage stability analysis, voltage collapse, PSAT, CPF, VSI, FVSI, Lmn, LQP.

58 Elliptical Features Extraction Using Eigen Values of Covariance Matrices, Hough Transform and Raster Scan Algorithms

Authors: J. Prakash, K. Rajesh

Abstract:

In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme consisting of eigenvalues of covariance matrices, the circular Hough transform and Bresenham's raster scan algorithm. The approach uses the fact that the large and small eigenvalues of covariance matrices are associated with the major and minor axial lengths of the ellipse. The centre location of the ellipse is identified using the circular Hough transform (CHT). A sparse matrix technique is used to perform the CHT: since sparse matrices squeeze out zero elements and contain only a small number of nonzero elements, they provide an advantage in matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using the raster scan algorithm, which exploits the geometrical symmetry property. This method does not require the evaluation of tangents or the curvature of edge contours, which are generally very sensitive to noisy working conditions. The proposed method has the advantages of small storage, high speed and accuracy in identifying the feature. The new method has been tested on both synthetic and real images, and several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method, with comparisons against the Hough transform, its variants, and other tangent-based methods, are reported.
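
A minimal sketch of the covariance-eigenvalue step: for edge pixels sampled from an ellipse boundary, the eigenvectors of the point covariance give the axis directions, and the large and small eigenvalues scale with the squared major and minor axis lengths. The exact scale factor depends on how the boundary is sampled, so treat the eigenvalues as a relative measure; the sampling below is an illustrative assumption.

```python
import numpy as np

a, b = 5.0, 2.0                      # ground-truth semi-axes
t = np.linspace(0, 2 * np.pi, 400)
pts = np.stack([a * np.cos(t), b * np.sin(t)], axis=1)

cov = np.cov(pts, rowvar=False)      # 2x2 covariance of boundary pixels
eigvals, eigvecs = np.linalg.eigh(cov)
ratio = np.sqrt(eigvals[1] / eigvals[0])   # sqrt(large/small eigenvalue)
print(f"estimated a/b ratio: {ratio:.2f} (true: {a / b:.2f})")
```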

Keywords: Circular Hough transform, covariance matrix, eigenvalues, ellipse detection, raster scan algorithm.

57 A Case Study on Appearance Based Feature Extraction Techniques and Their Susceptibility to Image Degradations for the Task of Face Recognition

Authors: Vitomir Struc, Nikola Pavesic

Abstract:

Over the past decades, automatic face recognition has become a highly active research area, mainly due to the countless application possibilities in both the private and the public sector. Numerous algorithms have been proposed in the literature to cope with the problem of face recognition; nevertheless, a group of methods commonly referred to as appearance based have emerged as the dominant solution. Many comparative studies concerned with the performance of appearance based methods have already been presented in the literature, often with inconclusive and sometimes contradictory results. No consensus has been reached within the scientific community regarding the relative ranking of the efficiency of appearance based methods for the face recognition task, let alone regarding their susceptibility to appearance changes induced by various environmental factors. To tackle these open issues, this paper assesses the performance of the three dominant appearance based methods: principal component analysis, linear discriminant analysis and independent component analysis, and compares them on an equal footing (i.e., with the same preprocessing procedure, with parameters optimized for the best possible performance, etc.) in face verification experiments on the publicly available XM2VTS database. In addition to the comparative analysis on the XM2VTS database, ten degraded versions of the database are also employed in the experiments to evaluate the susceptibility of the appearance based methods to various image degradations which can occur in "real-life" operating conditions. Our experimental results suggest that linear discriminant analysis ensures the most consistent verification rates across the tested databases.

Keywords: Biometrics, face recognition, appearance based methods, image degradations, the XM2VTS database.

56 Security Analysis of Password Hardened Multimodal Biometric Fuzzy Vault

Authors: V. S. Meenakshi, G. Padmavathi

Abstract:

Biometric techniques are gaining importance for personal authentication and identification compared to traditional authentication methods. Biometric templates are vulnerable to a variety of attacks due to their inherent nature, and when a person's biometric is compromised, his identity is lost: in contrast to a password, a biometric is not revocable. Therefore, securing the stored biometric template is crucial. Crypto-biometric systems are authentication systems which blend the ideas of cryptography and biometrics. The fuzzy vault is a proven crypto-biometric construct used to secure biometric templates; however, the fuzzy vault suffers from certain limitations such as non-revocability and cross matching, and its security is affected by the non-uniform nature of biometric data. A fuzzy vault hardened with a password overcomes these limitations: the password provides an additional layer of security and enhances user privacy. Retina has certain advantages over other biometric traits, and retinal scans are used in high-end security applications such as access control to areas or rooms in military installations, power plants, and other high-risk security areas. This work applies the idea of the fuzzy vault to the retinal biometric template. Multimodal biometric systems perform well compared to single-modal biometric systems, and the proposed multimodal biometric fuzzy vault combines feature points from retina and fingerprint. The combined vault is hardened with a user password to achieve a high level of security, and its security is measured using min-entropy. The proposed password-hardened multi-biometric fuzzy vault is robust against stored biometric template attacks.
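
For reference, min-entropy, the security measure the abstract cites, is H_inf = -log2(max_x P(x)): it is determined entirely by the attacker's single best guess over the value distribution. A minimal sketch, with toy sample strings standing in for vault values:

```python
import math
from collections import Counter

def min_entropy(samples):
    """H_inf = -log2(p_max) over the empirical distribution of samples."""
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

# A uniform 4-symbol source gives 2 bits; skew reduces it.
print(min_entropy("abcd" * 25))   # 2.0 bits
print(min_entropy("aaab" * 25))   # ~0.415 bits
```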

Keywords: Biometric Template Security, Crypto Biometric Systems, Hardening Fuzzy Vault, Min-Entropy.

55 Numerical and Experimental Analyses of a Semi-Active Pendulum Tuned Mass Damper

Authors: H. Juma, F. Al-hujaili, R. Kashani

Abstract:

Modern structures such as floor systems, pedestrian bridges and high-rise buildings have become lighter in mass and more flexible, with negligible damping, and are thus prone to vibration. In this paper, a semi-actively controlled pendulum tuned mass damper (PTMD) is presented that uses air springs as both the restoring (resilient) and energy dissipating (damping) elements; the tuned mass damper (TMD) uses no passive dampers. The proposed PTMD can readily be fine-tuned and re-tuned, via software, without changing any hardware. Almost all existing semi-active systems have the three elements that passive TMDs have, i.e., inertia, resilient, and dissipative elements, with some adjustability built into one or two of these elements. The proposed semi-active air-suspended TMD, on the other hand, is made up of only inertia and resilience elements; a notable feature of this TMD is the absence of a physical damping element in its make-up. The required viscous damping is introduced into the TMD by a semi-active control scheme residing in a micro-controller, which actuates a high-speed proportional valve regulating the flow of air in and out of the air springs. In addition to introducing damping into the TMD, the semi-active control scheme adjusts its stiffness. The focus of this work has been the synthesis and analysis of the control algorithms and strategies to vary the tuning accuracy, introduce damping into the air-suspended PTMD, and enable the PTMD to tune itself. The accelerations of the main structure and the PTMD, as well as the pressure in the air springs, are used as the feedback signals in the control strategies. Numerical simulation and experimental evaluation of the proposed tuned damping system are presented in this paper.

Keywords: Tuned mass damper, air spring, semi-active, vibration control.

54 Effect of Alkaline Activator, Water, Superplasticiser and Slag Contents on the Compressive Strength and Workability of Slag-Fly Ash Based Geopolymer Mortar Cured under Ambient Temperature

Authors: M. Al-Majidi, A. Lampropoulos, A. Cundy

Abstract:

Geopolymer (cement-free) concrete is the most promising green alternative to ordinary Portland cement concrete and other cementitious materials. While a range of different geopolymer concretes have been produced, a common feature of these concretes is a heat curing treatment, which is essential in order to provide sufficient mechanical properties at an early age. However, there are several practical issues with the application of heat curing in large-scale structures. The purpose of this study is to develop cement-free concrete without heat curing treatment. Experimental investigations were carried out in two phases. In the first phase (Phase A), the optimum contents of water, polycarboxylate-based superplasticizer and potassium silicate activator in the mix were determined. In the second phase (Phase B), the effect of incorporating ground granulated blast furnace slag (GGBFS) on the compressive strength of fly ash (FA) and slag based geopolymer mixtures was evaluated. Setting time and workability tests were also conducted alongside the compressive tests. The results showed that as the slag content was increased, the setting time was reduced while the compressive strength was improved; the obtained compressive strength was in the range of 40-50 MPa for the 50% slag replacement mixtures. Furthermore, the results indicated that increasing the water and superplasticizer contents retarded the setting time and slightly reduced the compressive strength, while the compressive strength of the examined mixes increased considerably as the potassium silicate content was increased.

Keywords: Fly ash, geopolymer, potassium silicate, room temperature treatment, slag.

53 Acceleration-Based Motion Model for Visual SLAM

Authors: Daohong Yang, Xiang Zhang, Wanting Zhou, Lei Li

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) is a technology that gathers information about the surrounding environment to ascertain its own position and create a map. It is widely used in computer vision, robotics, and various other fields. Many visual SLAM systems, such as ORB-SLAM3, utilize a constant velocity motion model. This model facilitates the determination of the initial pose of the current frame, thereby enhancing the efficiency and precision of feature matching. However, the constant velocity assumption is often violated in practice, which can result in a significant deviation between the obtained initial pose and the true value, leading to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration that can be applied to most SLAM systems. To describe the camera pose acceleration more accurately, we separate the pose transformation matrix into its rotation matrix and translation vector components, with the rotation matrix represented by a rotation vector. We assume that, over a short period, the changes in rotational angular velocity and translation vector remain constant, and we estimate the initial pose of the current frame based on this assumption. In addition, the error of the constant velocity model is analyzed theoretically. Finally, we apply our proposed approach to the ORB-SLAM3 system and evaluate two sets of sequences from the TUM datasets. The results show that our proposed method yields a more accurate initial pose estimate, improving the accuracy of the ORB-SLAM3 system by 6.61% and 6.46% on the two test sequences, respectively.
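
A hedged sketch of a constant-acceleration pose prediction in the spirit of the abstract: each pose is split into a rotation vector and a translation, and the prediction extrapolates by assuming that their frame-to-frame *changes* are constant over a short window. This is an illustrative re-derivation under those assumptions, not the authors' ORB-SLAM3 modification.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def predict_next_pose(poses):
    """poses: list of (rotvec, translation) world poses for recent frames."""
    (r0, t0), (r1, t1), (r2, t2) = poses[-3:]
    # Relative rotations between consecutive frames, as rotation vectors.
    w1 = (R.from_rotvec(r1) * R.from_rotvec(r0).inv()).as_rotvec()
    w2 = (R.from_rotvec(r2) * R.from_rotvec(r1).inv()).as_rotvec()
    w_next = w2 + (w2 - w1)          # constant change in angular velocity
    v1, v2 = t1 - t0, t2 - t1
    v_next = v2 + (v2 - v1)          # constant change in translation step
    r_next = (R.from_rotvec(w_next) * R.from_rotvec(r2)).as_rotvec()
    return r_next, t2 + v_next
```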

Keywords: Error estimation, constant acceleration motion model, pose estimation, visual SLAM.

52 Implementation of the Personal Emergency Response System

Authors: Ah-young Jeon, In-cheol Kim, Jae-hee Jung, Soo-young Ye, Jae-hyung Kim, Ki-gon Nam, Seoung-wan Baik, Jung-hoon Ro, Gye-rok Jeon

Abstract:

The aged face an increasing risk of falls, as they have more fragile bones than others. When a fall occurs, it is important to detect this emergency state because such events often lead to more serious illness or even death. In this paper, a PDA-based system for detecting emergency situations was implemented using a 3-axis accelerometer, as follows. The signals are acquired from the 3-axis accelerometer and transmitted to the PDA through a Bluetooth module. The system can classify human activity and also detect emergency states such as falls. When a fall occurs, the system generates an alarm on the PDA. If the subject does not respond to the alarm, the system determines whether the current situation is an emergency state and, in the case of an urgent situation, sends information to the emergency center. Three different studies were conducted on 12 experimental subjects, with results indicating good accuracy. The first study was performed to detect posture changes in human daily activity, the second to detect the correct direction of a fall, and the third to check the classification of daily physical activity; each test in the third study lasted at least 1 min. The output acceleration signal was compared and evaluated across various postures after attaching the 3-axis accelerometer module to the chest. The newly developed system has some important features such as portability, convenience and low cost. One of its main advantages is that it is suitable for the home healthcare environment; another is the low manufacturing cost of the device. The implemented system can detect falls accurately, and so can be widely used in emergency situations.
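
A minimal sketch of one common threshold-style fall check on 3-axis accelerometer samples: a fall typically shows a near-free-fall phase (magnitude well below 1 g) followed by an impact spike. The specific thresholds and window length are illustrative assumptions, not the paper's calibrated values.

```python
import math

G = 9.81  # m/s^2

def detect_fall(samples, free_fall=0.4 * G, impact=2.5 * G, window=20):
    """samples: list of (ax, ay, az) tuples at a fixed sampling rate."""
    mags = [math.sqrt(ax**2 + ay**2 + az**2) for ax, ay, az in samples]
    for i, m in enumerate(mags):
        if m < free_fall:  # candidate free-fall phase
            # Impact spike must follow within the window.
            if any(mm > impact for mm in mags[i:i + window]):
                return True
    return False
```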

Keywords: Alarm System, Ambulatory monitoring, Emergency detection, Classification of activity, and 3-axis accelerometer.

51 A Design for Customer Preferences Model by Cluster Analysis of Geometric Features and Customer Preferences

Authors: Yuan-Jye Tseng, Ching-Yen Chen

Abstract:

In the design cycle, a main design task is to determine the external shape of the product. The external shape is one of the key factors that affect customers' preferences and their motivation to buy the product, especially for a consumer electronic product such as a mobile phone, so the relationship between the external shape and customer preferences needs to be studied to enhance the customer's purchase desire and action. In this research, a design for customer preferences model is developed for investigating the relationships between the external shape of a product and customer preferences. In the first stage, the names of the geometric features are collected from the data of the specified internet web pages using the developed text miner and evaluated: the key geometric features are determined as those whose number of occurrences on the web pages is relatively high. For each key geometric feature, the numerical values are then collected from the web pages using the text miner. In the second stage, a cluster analysis model is developed to evaluate the numerical values of the key geometric features and divide the external shapes into several groups, from which several design suggestion cases can be proposed, for example, a large model, a mid-size model, and a mini model when designing a mobile phone. A customer preference index is developed by evaluating the numerical data of each key geometric feature of the design suggestion cases, and the design suggestion case with the top-ranking customer preference index can be selected as the final design of the product. In this paper, an example product of a notebook computer is illustrated, showing that the external shape of a product can be used to drive customer preferences. The presented design for customer preferences model is useful for determining a suitable external shape of the product to increase customer preferences.
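
A minimal sketch of the second-stage clustering, assuming illustrative width/height/thickness columns as the key geometric features (the actual features and data in the paper are mined from web pages): shapes are grouped with k-means into design-suggestion groups such as "large", "mid-size" and "mini".

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Rows: products scraped from web pages; columns: key geometric features
# (e.g. width, height, thickness in mm) -- synthetic for illustration.
features = np.vstack([
    rng.normal([160, 75, 8], 3, size=(30, 3)),   # large models
    rng.normal([140, 68, 7], 3, size=(30, 3)),   # mid-size models
    rng.normal([115, 58, 7], 3, size=(30, 3)),   # mini models
])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
for k in range(3):
    print("group", k, "mean features:", features[labels == k].mean(axis=0).round(1))
```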

Keywords: Cluster analysis, customer preferences, design evaluation, design for customer preferences, product design.

50 Numerical Analysis of Laminar Reflux Condensation from Gas-Vapour Mixtures in Vertical Parallel Plate Channels

Authors: Foad Hassaninejadafarahani, Scott Ormiston

Abstract:

Reflux condensation occurs in vertical channels and tubes when there is an upward core flow of vapour (or a gas-vapour mixture) and a downward flow of the liquid film. Understanding this condensation configuration is crucial in the design of reflux condensers and distillation columns, and in loss-of-coolant safety analyses of nuclear power plant steam generators. The unique feature of this flow is the upward flow of the vapour-gas mixture (or pure vapour), which retards the liquid flow via shear at the liquid-mixture interface. The present model solves the full, elliptic governing equations in both the film and the gas-vapour core flow. The computational mesh is non-orthogonal and dynamically adapts to the phase interface, thus producing a sharp and accurate interface. Shear forces and heat and mass transfer at the interface are accounted for fundamentally. This modelling is a big step beyond current capabilities, as it removes the limitations of previous reflux condensation models, which inherently cannot account for the detailed local balances of shear, mass, and heat transfer at the interface. Discretisation is based on the finite volume method with a co-located variable storage scheme, and an in-house computer code was developed to implement the numerical solution scheme. Detailed results are presented for laminar reflux condensation from steam-air mixtures flowing in vertical parallel plate channels, including velocity and gas mass fraction profiles, as well as axial variations of film thickness.

Keywords: Reflux Condensation, Heat Transfer, Channel, Laminar Flow

49 Comparative in silico and in vitro Study of N-(1-Methyl-2-Oxo-2-N-Methyl Anilino-Ethyl) Benzene Sulfonamide and Its Analogues as an Anticancer Agent

Authors: Pamita Awasthi, Kirna, Shilpa Dogra, Manu Vatsal, Ritu Barthwal

Abstract:

Doxorubicin, also known as Adriamycin, is an anthracycline-class drug used in cancer chemotherapy. It is used in the treatment of non-Hodgkin's lymphoma, multiple myeloma, acute leukemia, breast cancer, lung cancer, endometrial cancer and ovarian cancer. It functions by intercalating DNA and ultimately killing cancer cells. The major side effects of doxorubicin are hair loss, myelosuppression, nausea and vomiting, oesophagitis, diarrhea, heart damage and liver dysfunction. The fact that minor modifications in the structure of a compound can produce large variations in biological activity prompted us to carry out the synthesis of sulfonamide derivatives. Sulfonamide is an important moiety with a broad spectrum of biological activity, including antiviral, antifungal, diuretic, anti-inflammatory, antibacterial and anticancer activities. The structure of the synthesized compound N-(1-methyl-2-oxo-2-N-methyl anilino-ethyl) benzene sulfonamide was confirmed by proton nuclear magnetic resonance (1H NMR), 13C NMR, mass and FTIR spectroscopic tools to assign the positions of all protons and hence the stereochemistry of the molecule. Further, we report the binding potential of the synthesized sulfonamide analogues in comparison with doxorubicin using AutoDock 4.2 software. The computational binding energy (B.E.) and inhibitory constant (Ki) were evaluated for the synthesized compound, in comparison with doxorubicin, against Poly(dA-dT).Poly(dA-dT) and Poly(dG-dC).Poly(dG-dC) sequences. An in vitro cytotoxicity study against human breast cancer cell lines confirms the better anticancer activity of the synthesized compound over the currently used anticancer drug doxorubicin: the IC50 value of the synthesized compound is 7.12 μM, whereas that of doxorubicin is 7.2 μM.

Keywords: Anticancer, Auto Dock, Doxorubicin, Sulfonamide.

48 Analysis of Turkish Government Cultural Portal for Supporting Gastronomy Tourism

Authors: Hilmi Rafet Yüncü

Abstract:

Today, the Internet plays a very important role in promoting products and services all over the world. Companies and destinations in the tourism industry use the Internet to sell and promote their core products directly to potential tourists. Internet technologies have redefined the relationships between tourists, tourism companies, and travel agents: the new relationship allows tourism information and services to be accessed and tapped, and creates new opportunities for the tourism industry, including travel accommodation and tourist destination organizations. Websites are important devices for the marketing of a destination, and most people research a destination via the Internet before arriving. Governments have a considerable role in the process of marketing tourism destinations: they make policies and regulations, and they help to market destinations to potential tourists. Governments have a comprehensive overview of the sector, allowing them to see changes in the tourism market and to design better policies, programs and marketing plans; at the same time, they support the development of alternative tourism in the country through regulations and marketing tools. The aim of this study is to analyse the website of a governmental tourism portal in Turkey to determine its effectiveness for gastronomy tourism. The Turkish government has established a culture portal for foreign and local tourists, which provides local and general information about the tourism attractions of the cities and of Turkey. There are 81 official cities in Turkey, and all of them are analysed to determine how effectively the Turkish government markets gastronomy tourism. A content analysis of the portal's website is conducted with respect to food content, recipes and the gastronomic features of the cities.

Keywords: Content analysis, culture portal, gastronomy tourism, Turkey.

47 A Secure Auditing Framework for Load Balancing in Cloud Environment

Authors: R. Geetha, T. Padmavathy

Abstract:

Security auditing is an important feature to be considered by cloud service customers. It is basically a certification process to audit the controls that deliver the security requirements. Security audits are conducted by trained and qualified staff belonging to an independent auditing organization, and must be carried out against a standard set of security controls. A proper check must be made that the cloud user has adequate reporting and logging facilities within the customer's system, ensuring an appropriate business and operational flow of data through the cloud service. We propose a cloud-based secure auditing framework which enables trusted authorities to securely store their secret data with semi-trusted cloud service providers and selectively share that data with a wide range of data recipients, reducing the key management complexity for data owners and data recipients. Unlike previous cloud-based data frameworks, data owners upload their secret data to the cloud under both static and dynamic auditing schemes. A further feature is that when a data recipient needs to download an individual file, the recipient sends the request to the authority; the data owner retains access control. If the data owner must share the original file with the data recipient, it acknowledges the recipient's request; once the request is acknowledged, the recipient downloads the original file, and the file transfer and download times and dates are monitored by the auditor. In addition to the deduplication concept, reduced cloud storage usage through dynamic document distribution is proposed.

Keywords: Cloud computing, cloud storage auditing, data integrity, key exposure.

46 Computer Aided Design Solution Based on Genetic Algorithms for FMEA and Control Plan in Automotive Industry

Authors: Nadia Belu, Laurentiu M. Ionescu, Agnieszka Misztal

Abstract:

In this paper we propose a computer-aided solution based on Genetic Algorithms that reduces the effort of drafting the FMEA analysis and Control Plan reports required at product launch, and improves the knowledge available to development teams for future projects. The solution allows the design team to enter the data required for the FMEA. The actual analysis is performed using Genetic Algorithms to find an optimum between the RPN risk factor and the production cost. A feature of Genetic Algorithms is that they can be used to find solutions to multi-criteria optimization problems; in our case, the three specific FMEA risk factors are considered together with the production cost. The analysis tool generates final reports for all FMEA processes, and the data obtained in the FMEA reports are automatically integrated, with other entered parameters, into the Control Plan. The solution is implemented as an application running on an intranet on two servers: one containing the analysis and plan-generation engine and the other containing the database where the initial parameters and results are stored. The results can then be used as starting solutions in the synthesis of other projects. The solution was applied to the welding, laser cutting and bending processes used to manufacture chassis for buses. The advantages of the solution are the efficient elaboration of documents in the current project, by automatically generating the FMEA and Control Plan reports using multi-criteria optimization of production, and the building of a solid knowledge base for future projects. The solution we propose is a cheap alternative to other solutions on the market, using Open Source tools in its implementation.
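
A minimal sketch of the multi-criteria idea: a tiny genetic algorithm that searches process options to balance FMEA risk (RPN = severity x occurrence x detection) against production cost. The option table, weights and GA parameters are illustrative assumptions, not the paper's data or implementation.

```python
import random

# One option per process step: (severity, occurrence, detection, cost).
OPTIONS = [[(7, 4, 3, 10), (5, 3, 4, 18), (4, 2, 5, 30)] for _ in range(6)]

def fitness(ind, w_risk=1.0, w_cost=2.0):
    chosen = [OPTIONS[i][g] for i, g in enumerate(ind)]
    rpn = sum(s * o * d for s, o, d, _ in chosen)   # total FMEA risk
    cost = sum(c for _, _, _, c in chosen)          # total production cost
    return -(w_risk * rpn + w_cost * cost)          # higher is better

random.seed(0)
pop = [[random.randrange(3) for _ in OPTIONS] for _ in range(40)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]                              # elitist selection
    children = []
    for _ in range(20):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(OPTIONS))     # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:                   # mutation
            child[random.randrange(len(child))] = random.randrange(3)
        children.append(child)
    pop = parents + children
best = max(pop, key=fitness)
print("best option per step:", best, "fitness:", fitness(best))
```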

Keywords: Automotive industry, control plan, FMEA.

45 Nanomaterial Based Electrochemical Sensors for Endocrine Disrupting Compounds

Authors: Gaurav Bhanjana, Ganga Ram Chaudhary, Sandeep Kumar, Neeraj Dilbaghi

Abstract:

The main sources of endocrine disrupting compounds in the ecosystem are hormones, pesticides, phthalates, flame retardants, dioxins, personal-care products, coplanar polychlorinated biphenyls (PCBs), bisphenol A, and parabens. These endocrine disrupting compounds are responsible for learning disabilities, brain development problems, deformations of the body, cancer, reproductive abnormalities in females and decreased sperm count in human males. Although the discharge of these chemical compounds into the environment cannot be stopped, their amounts can be reduced through proper evaluation and detection techniques. The available techniques for determining these endocrine disrupting compounds mainly include high performance liquid chromatography (HPLC), mass spectrometry (MS) and gas chromatography-mass spectrometry (GC-MS). These techniques are accurate and reliable but have certain limitations: they need skilled personnel, are time-consuming, suffer from interference and require pretreatment steps. Moreover, they are laboratory-bound and require large sample amounts for analysis. In view of these facts, new methods for the detection of endocrine disrupting compounds should be devised that promise high specificity, ultra-sensitivity, cost-effectiveness, efficiency and ease of operation. Nowadays, electrochemical sensors and biosensors modified with nanomaterials are gaining high attention among researchers. The bioelement present in such a system makes the developed sensor selective towards the analyte of interest, while nanomaterials provide a large surface area, high electron transfer capability, enhanced catalytic activity and possibilities for chemical modification. In most cases, the nanomaterials also serve as an electron mediator or electrocatalyst for some analytes.

Keywords: Sensors, endocrine disruptors, nanoparticles, electrochemical, microscopy.

44 Validation on 3D Surface Roughness Algorithm for Measuring Roughness of Psoriasis Lesion

Authors: M.H. Ahmad Fadzil, Esa Prakasa, Hurriyatul Fitriyah, Hermawan Nugroho, Azura Mohd Affandi, S.H. Hussein

Abstract:

Psoriasis is a widespread skin disease affecting up to 2% of the population, with plaque psoriasis accounting for about 80% of cases. It can be identified as a red lesion and, at higher severities, the lesion is usually covered with rough scale. Psoriasis Area Severity Index (PASI) scoring is the gold standard method for measuring psoriasis severity, and scaliness is one of the PASI parameters that must be quantified. The surface roughness of a lesion can be used as a scaliness feature, since scale on the lesion surface makes the lesion rougher. The dermatologist usually assesses severity through the tactile sense, so direct contact between doctor and patient is required, and the doctor may not assess the lesion objectively. In this paper, a digital image analysis technique is developed to objectively determine the scaliness of a psoriasis lesion and provide the PASI scaliness score. The psoriasis lesion is modelled as a rough surface created by superimposing a smooth average (curved) surface with a triangular waveform. For roughness determination, polynomial surface fitting is used to estimate the average surface, followed by a subtraction between the rough and average surfaces to give the elevation surface (surface deviations). The roughness index is calculated by applying the average roughness equation to the height map matrix. The roughness algorithm has been tested on 444 lesion models; in the validation results, only 6 models could not be accepted (percentage error greater than 10%), and these errors were due to scanned image quality. The roughness algorithm was also validated by measuring the roughness of abrasive papers on a flat surface. The Pearson's correlation coefficient between the grade value (G) of the abrasive paper and Ra is -0.9488, which shows a strong relation between G and Ra. The algorithm could be improved by surface filtering, especially to overcome problems with noisy data.
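
A minimal sketch of the roughness pipeline described above: fit a low-order polynomial surface to the height map (the smooth average surface), subtract it to obtain the elevation surface, then apply the average-roughness equation Ra = mean(|deviation|). The polynomial order and the synthetic test surface are illustrative assumptions.

```python
import numpy as np

def roughness_ra(height_map, order=2):
    h, w = height_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x, y, z = xx.ravel(), yy.ravel(), height_map.ravel()
    # Design matrix of polynomial terms x^i * y^j with i + j <= order.
    terms = [x**i * y**j for i in range(order + 1)
                          for j in range(order + 1 - i)]
    A = np.stack(terms, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    deviations = z - A @ coeffs          # elevation surface
    return np.abs(deviations).mean()     # average roughness Ra

# Smooth bump plus a triangular-like ripple, as in the lesion model above.
yy, xx = np.mgrid[0:64, 0:64]
surface = 0.01 * (xx - 32) ** 2 + 0.5 * np.abs((xx % 8) - 4)
print(f"Ra = {roughness_ra(surface):.3f}")
```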

Keywords: Psoriasis, roughness algorithm, polynomial surface fitting.

43 Apoptosis Pathway Targeted by Thymoquinone in MCF7 Breast Cancer Cell Line

Authors: M. Marjaneh, M. Y. Narazah, H. Shahrul

Abstract:

Array-based gene expression analysis is a powerful tool to profile the expression of genes and to generate information on the therapeutic effects of new anti-cancer compounds. The apoptotic effect of thymoquinone was studied in the MCF7 breast cancer cell line using gene expression profiling with cDNA microarrays. The purity and yield of the RNA samples were determined using the RNeasy Plus Mini kit, and the Agilent RNA 6000 NanoLabChip kit evaluated the quantity of the RNA samples. AffinityScript RT oligo-dT promoter primer was used to generate cDNA strands, and T7 RNA polymerase was used to convert cDNA to cRNA. The cRNA samples and human universal reference RNA were labelled with Cy-3-CTP and Cy-5-CTP, respectively, and the data were analysed with the Feature Extraction and GeneSpring software packages. The single-experiment analysis revealed the involvement of 64 pathways with up-regulated genes and 78 pathways with down-regulated genes. The MAPK and p38-MAPK pathways were inhibited due to the up-regulation of the PTPRR gene; the inhibition of p38-MAPK suggested up-regulation of the TGF-ß pathway. Inhibition of p38-MAPK caused up-regulation of TP53 and down-regulation of Bcl2, indicating involvement of the intrinsic apoptotic pathway. Down-regulation of the CARD16 gene, an adaptor molecule, regulated CASP1 and suggested necrosis-like programmed cell death and the involvement of caspases in apoptosis. Furthermore, down-regulation of the GPCR and EGF-EGFR signalling pathways suggested a reduction of ER signalling, while the involvement of the AhR pathway, which controls the cytochrome P450 and glucuronidation pathways, indicated metabolism of thymoquinone. The findings show the differential expression of several genes in apoptosis pathways upon thymoquinone treatment of estrogen receptor-positive breast cancer cells.

Keywords: CARD16, CASP10, cDNA microarray, PTPRR, Thymoquinone.

42 Operational Analysis of Urban Intelligent Transportation System and Strategies for Future Development - Taking Calling Service of Taxi in Wuhan as an Example

Authors: Wang Xu, Yao Yangyang, Lin Ying, Wang Zhenzhen

Abstract:

Intelligent Transportation Systems integrate various modern advanced technologies into the ground transportation system, and they will be the goal of urban transport systems in the future because of their comprehensive effects. However, they also bring some problems, such as project performance assessment, fairness among benefiting groups, and fund management, which are directly related to their operation and implementation. Wuhan has difficulties in organizing transportation because of its natural features (rivers and lakes); therefore, the taxi calling service plays an important role in its transportation. This paper researches the taxi calling service in Wuhan, based on quantitative and qualitative analysis, and systematically analyzes its operations management, including the business model, finance, usage, and user evaluation. As for the business model, the government leads the operation at the initial stage and a third party dominates the operation at the mature stage, which not only eases the pressure on the third party and benefits the spread of the calling service at the initial stage, but also alleviates the financial pressure on the government and improves the efficiency of the operation at the mature stage. As for finance, the analysis shows that this service brings a heavy financial burden for equipment, but this will be alleviated in the future as the service spreads. As for usage, data comparison shows that this service can bring some benefits to taxi drivers, and the temporal and spatial distributions of usage have certain features. As for user evaluation, the paper analyzes the user groups and their reasons for choosing the service. Finally, according to the analysis above, the paper puts forward the potentials, limitations, and future development strategies for the service.

Keywords: Assessment, Calling service of taxi, Operations management, Strategies, Using groups.

41 Exploration of an Environmentally Friendly Form of City Development Combined with a River: An Example of a Four-Dimensional Analysis Based on the Expansion of the City of Jinan across the Yellow River

Authors: Zhaocheng Shang

Abstract:

In order to study the topic of cities crossing rivers, a Four-Dimensional Analysis Method consisting of a timeline, X-axis, Y-axis, and Z-axis is proposed. Policies, plans, and their implications are summarized and researched along the timeline. The X-axis is the direction parallel to the river; the research area was chosen because of its important connection function. It is proposed that more of a surface water network should be built because of the ecological orientation of the research area, and the analysis of groundwater confirms that the proposal is feasible. After the blue water network is settled, the green landscape network surrounded by it can be planned. The direction transversal to the river (Y-axis) should run through the transportation axis so that the urban texture can stretch in an ecological way; therefore, it is suggested that the work of the planning bureau and the river bureau be coordinated. The Z-axis research concerns the section view of the river, especially the Yellow River's special feature of being a perched river. Based on flood control safety demands, river parks could be constructed on the embankment buffer zone, and many kinds of ornamental trees could be used to build the buffer zone. The city crossing the river is a typical case of using landscaping to build a symbiotic relationship between urban landscape architecture and the environment. The local environment should be respected in the process of city expansion, and the planning order of "Benefit - Flood Control Safety" should be replaced by "Flood Control Safety - Landscape Architecture - People - Benefit".

Keywords: Blue-Green landscape network, city crossing river, four-dimensional analysis method, planning order.

40 Load Forecasting in Microgrid Systems with R and Cortana Intelligence Suite

Authors: F. Lazzeri, I. Reiter

Abstract:

Energy production optimization has traditionally been very important for utilities in order to improve resource consumption. However, load forecasting is a challenging task, as there are a large number of relevant variables that must be considered, and several strategies have been used to deal with this complex problem. This is especially true in microgrids, where many elements have to adjust their performance depending on future generation and consumption conditions. The goal of this paper is to present a solution for short-term load forecasting in microgrids, based on three machine learning experiments developed in R, and web services built and deployed with different components of the Cortana Intelligence Suite: Azure Machine Learning, a fully managed cloud service that makes it easy to build, deploy, and share predictive analytics solutions; SQL Database, a Microsoft database service for app developers; and Power BI, a suite of business analytics tools for analyzing data and sharing insights. Our results show that the Boosted Decision Tree and Fast Forest Quantile regression methods can be very useful for predicting hourly short-term consumption in microgrids; moreover, we found that for these types of forecasting models, weather data (temperature, wind, humidity and dew point) can play a crucial role in improving the accuracy of the forecasting solution. Data cleaning and feature engineering methods performed in R and different types of machine learning algorithms (Boosted Decision Tree, Fast Forest Quantile and ARIMA) are presented, and the results and performance metrics discussed.
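
A minimal sketch of the same idea in Python rather than the authors' R/Azure ML pipeline: a boosted-tree hourly load model on hour-of-day, a weather feature, and a 24 h lag of the load. The data generation is synthetic for illustration; the remaining weather columns (wind, humidity, dew point) would be added the same way.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(3)
hours = np.arange(24 * 90)                      # 90 days of hourly data
temp = 20 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
load = 100 + 3 * temp + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

# Feature engineering: hour of day, temperature, and a 24 h lag of the load.
X = np.column_stack([hours % 24, temp, np.roll(load, 24)])[24:]
y = load[24:]

split = len(y) - 24 * 7                         # hold out the final week
model = GradientBoostingRegressor().fit(X[:split], y[:split])
print("MAPE:", mean_absolute_percentage_error(y[split:], model.predict(X[split:])))
```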

Keywords: Time-series, features engineering methods for forecasting, energy demand forecasting, Azure machine learning.

39 LTE Performance Analysis in the City of Bogota Northern Zone for Two Different Mobile Broadband Operators over Qualipoc

Authors: Víctor D. Rodríguez, Edith P. Estupiñán, Juan C. Martínez

Abstract:

The evolution of mobile broadband technologies has allowed user download rates to increase considerably given current services. The evaluation of technical parameters at the link level is of vital importance to validate the quality and veracity of the connection, thus avoiding large losses of data, time and productivity. Some of these failures may occur in the link between the eNodeB (Evolved Node B) and the user equipment (UE), so this link between the end device and the base station must be observed. LTE (Long Term Evolution) is considered one of the IP-oriented mobile broadband technologies that works stably for data and for VoIP (Voice over IP) on devices that support it. This research presents a technical analysis of the connection and channeling processes between the UE and the eNodeB using the TAC (Tracking Area Code) variables, together with an analysis of performance variables (throughput, Signal to Interference and Noise Ratio (SINR)). Three measurement scenarios were proposed in the city of Bogotá using QualiPoc, in which two operators (Operator 1 and Operator 2) were evaluated. Once the data were obtained, an analysis of the variables was performed, determining that the transmission modes obtained vary depending on the BLER (Block Error Rate), throughput and SNR (Signal-to-Noise Ratio) parameters. For both operators, differences in transmission modes were detected, and this is reflected in the quality of the signal. In addition, because the two operators work at different frequencies, it can be seen that Operator 1, despite having spectrum in Band 7 (2600 MHz) together with Operator 2, is reassigning traffic to a lower band, AWS (1700 MHz); the resulting difference in signal quality relative to Operator 2's data connections, and the difference in the transmission modes determined by the eNodeB for Operator 1, are remarkable.

Keywords: BLER, LTE, Network, Qualipoc, SNR.

38 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images

Authors: Amit Kr. Happy

Abstract:

This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visible image (VI) fusion for various applications including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager: while visible images capture reflected radiation in the visible spectrum, thermal images are formed from thermal (IR) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image and a thermal IR camera acquires the thermal source image. In this paper, image fusion algorithms based on the Multi-Scale Transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes an implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. In deploying our image fusion approaches, we observed a challenge common to popular image fusion methods: the high computational cost and complex processing steps that provide accurate fused results also make such algorithms hard to deploy in systems and applications that require real-time operation, high flexibility and low computational capacity. The methods presented in this paper offer good results with minimum time complexity.
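
A hedged sketch of one MST-based fusion scheme consistent with the abstract, in Python/OpenCV rather than the paper's MATLAB implementation: Laplacian pyramids are built for both images, detail levels are fused by a per-pixel max-absolute-coefficient rule and the base level by averaging, then the result is reconstructed. The paper's region-based rule with consistency verification is more elaborate than this per-pixel rule; equally sized single-channel (grayscale) inputs are assumed.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    # Detail levels: each Gaussian level minus the upsampled next level.
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=gp[i].shape[1::-1])
          for i in range(levels)]
    return lp + [gp[-1]]                      # detail levels + coarse base

def fuse(visible, infrared, levels=4):
    lv = laplacian_pyramid(visible, levels)
    li = laplacian_pyramid(infrared, levels)
    # Max-absolute-coefficient rule on detail levels, average on the base.
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(lv[:-1], li[:-1])]
    fused.append((lv[-1] + li[-1]) / 2)
    out = fused[-1]
    for lap in reversed(fused[:-1]):          # reconstruct coarse-to-fine
        out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
    return np.clip(out, 0, 255).astype(np.uint8)
```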

Keywords: Image fusion, IR thermal imager, multi-sensor, Multi-Scale Transform.

37 Text Mining Technique for Data Mining Application

Authors: M. Govindarajan

Abstract:

Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining, or text mining. The decision tree approach is most useful in classification problems: with this technique, a tree is constructed to model the classification process, in two basic steps, building the tree and applying the tree to the database. This paper describes a proposed C5.0 classifier that performs rulesets, cross-validation and boosting on top of the original C5.0 in order to reduce the error ratio. The feasibility and benefits of the proposed approach are demonstrated by means of a medical data set, hypothyroid. The performance of a classifier on the training cases from which it was constructed gives a poor estimate of its accuracy on new cases; by sampling, or by using a separate test file, the classifier is evaluated on cases that were not used to build it. If the cases in hypothyroid.data and hypothyroid.test were shuffled and divided into a new 2772-case training set and a 1000-case test set, C5.0 might construct a different classifier with a lower or higher error rate on the test cases. An important feature of See5 is its ability to generate classifiers called rulesets; the ruleset has an error rate of 0.5% on the test cases. The standard errors of the means provide an estimate of the variability of the results. One way to get a more reliable estimate of predictive accuracy is f-fold cross-validation: the error rate of a classifier produced from all the cases is estimated as the ratio of the total number of errors on the hold-out cases to the total number of cases. The Boost option with x trials instructs See5 to construct up to x classifiers in this manner; trials over numerous datasets, large and small, show that on average 10-classifier boosting reduces the error rate for test cases by about 25%.
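
A minimal sketch of the two evaluation ideas above, using scikit-learn's decision trees and AdaBoost in place of C5.0/See5 (which has no standard Python port), and a bundled dataset as a stand-in for the hypothyroid data: 10-fold cross-validation gives a reliable error estimate, and a roughly 10-classifier boost is compared against a single tree.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # stand-in for hypothyroid.data

single = DecisionTreeClassifier(random_state=0)
boosted = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                             n_estimators=10, random_state=0)

for name, clf in [("single tree", single), ("boosted x10", boosted)]:
    # 10-fold cross-validated error: evaluated only on held-out cases.
    err = 1 - cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name}: cross-validated error = {err:.3f}")
```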

Keywords: C5.0, Error Ratio, text mining, training data, test data.

36 Dynamic Behavior of the Nanostructure of Load-bearing Biological Materials

Authors: M. Qwamizadeh, K. Zhou, Z. Zhang, YW. Zhang

Abstract:

Typical load-bearing biological materials, such as bone, mineralized tendon and shell, are biocomposites made from both organic (collagen) and inorganic (biomineral) materials. This amazing class of materials, with intrinsic, internally designed hierarchical structures, shows mechanical properties far superior to those of the weak components from which they are formed. Extensive investigations concentrating on static loading conditions have been done to study the failure of biological materials; however, most of the damage and failure mechanisms in load-bearing biological materials occur when their structures are exposed to dynamic loading conditions. The main question to be answered here is: what is the relation between the layout and architecture of load-bearing biological materials and their dynamic behavior? In this work, a staggered model has been developed based on the structure of natural materials at the nanoscale, and Finite Element Analysis (FEA) has been used to study the dynamic behavior of the structure of load-bearing biological materials, in order to understand why the staggered arrangement has been selected by nature for the nanocomposite structure of most biological materials. The results showed that staggered structures attenuate stress waves more efficiently than layered structures. Furthermore, such a staggered architecture effectively utilizes the capacity of the biostructure to resist both normal and shear loads. The geometrical parameters of the model, such as the thickness and aspect ratio of the mineral inclusions, were selected from the typical range of experimentally observed feature sizes and layout dimensions of biological materials such as bone and mineralized tendon, and the numerical results were validated against existing theoretical solutions. The findings of the present work emphasize the significant effects of dynamic behavior on the natural evolution of load-bearing biological materials and can help scientists to design bioinspired materials in the laboratory.

Keywords: Load-bearing biological materials, nanostructure, staggered structure, stress wave decay.

35 A Proposed Optimized and Efficient Intrusion Detection System for Wireless Sensor Network

Authors: Abdulaziz Alsadhan, Naveed Khan

Abstract:

In recent years, intrusions on computer networks have become the major security threat; hence, it is important to impede such intrusions. Impeding them relies entirely on their detection, which is the primary concern of any security tool such as an intrusion detection system (IDS). Therefore, it is imperative to detect network attacks accurately. Numerous intrusion detection techniques are available, but the main issue is their performance, which can be improved by increasing the accurate detection rate and reducing the false positive rate. Existing intrusion detection techniques have the limitation of using the raw dataset for classification: the classifier may get confused by redundancy, resulting in incorrect classification. To minimize this problem, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Local Binary Patterns (LBP) can be applied to transform the raw features into a principal feature space and to select features based on their sensitivity, with eigenvalues used to determine that sensitivity. Greedy search, backward elimination, and Particle Swarm Optimization (PSO) can then be used on the selected features to obtain a subset with optimal sensitivity and the highest discriminatory power, and this optimal feature subset is used to perform classification. For classification, a Support Vector Machine (SVM) and a Multilayer Perceptron (MLP) are used due to their proven ability in classification. The Knowledge Discovery and Data Mining (KDD'99) cup dataset was considered as a benchmark for evaluating security detection mechanisms. The proposed approach can provide an optimal intrusion detection mechanism that outperforms existing approaches and has the capability to minimize the number of features and maximize the detection rates.
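
A minimal sketch of the transform-then-classify pipeline: raw features are standardized, projected with PCA (keeping the components with the largest eigenvalues), and classified with an SVM. The swarm/greedy feature-subset search from the abstract is simplified here to PCA's own component ranking, and synthetic data stands in for KDD'99 (which does have 41 features).

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in with KDD'99's 41-feature dimensionality.
X, y = make_classification(n_samples=2000, n_features=41,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = make_pipeline(StandardScaler(), PCA(n_components=10), SVC())
pipe.fit(X_tr, y_tr)
print("detection accuracy:", pipe.score(X_te, y_te))
```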

Keywords: Particle Swarm Optimization (PSO), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Local Binary Patterns (LBP), Support Vector Machine (SVM), Multilayer Perceptron (MLP).

34 Performance Analysis of HSDPA Systems Using Low-Density Parity-Check (LDPC) Coding as Compared to Turbo Coding

Authors: K. Anitha Sheela, J. Tarun Kumar

Abstract:

HSDPA is a new feature introduced in the Release 5 specifications of the 3GPP WCDMA/UTRA standard to realize higher data rates together with lower round-trip times. The HSDPA concept offers an outstanding improvement in packet throughput and also significantly reduces the packet call transfer delay compared to the Release 99 DSCH. Until now, the HSDPA system has used turbo coding, one of the best coding techniques for approaching the Shannon limit. However, the main drawbacks of turbo coding are its high decoding complexity and high latency, which make it unsuitable for some applications such as satellite communications, where the transmission distance itself introduces latency due to the limited speed of light. Hence, in this paper it is proposed to use LDPC coding in place of turbo coding for the HSDPA system, which decreases the latency and decoding complexity, although LDPC coding increases the encoding complexity. While the complexity of the transmitter increases at the NodeB, the end user gains in terms of receiver complexity and bit error rate. In this paper the LDPC encoder is implemented using a sparse parity check matrix H to generate a codeword, and the belief propagation algorithm is used for LDPC decoding. Simulation results show that with LDPC coding the BER drops sharply as the number of iterations increases, with only a small increase in Eb/No, which is not possible with turbo coding; the same BER is achieved using fewer iterations, and hence the latency and receiver complexity decrease with LDPC coding. HSDPA increases the downlink data rate within a cell to a theoretical maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that HSDPA enables include better quality and more reliable, more robust data services: in other words, while realistic data rates are only a few Mbps, the actual quality and number of users achieved will improve significantly.

Keywords: AMC, HSDPA, LDPC, WCDMA, 3GPP.

33 Palestine Smart Tourism Augmented Reality Mobile Application

Authors: Murad Al-Rajab, Sherin Hazboun, Azhar Al-Hamamreh, Nirmeen Odeh, Siham Halaseh

Abstract:

Tourism is considered an important sector for most countries, and maintaining good tourism attractions can promote national economic development. The State of Palestine is historically considered a country rich in archaeological sites. In the city of Bethlehem, for example, the Church of the Nativity is the most important tourist site, but it does not have enough technological development to attract tourists. In this paper, we propose a smart mobile application named "Pal-STAR" (Palestine Smart Tourist Augmented Reality) as an innovative solution which targets tourists and assists them during a visit to the Church of the Nativity. The application uses augmented reality and features a virtual tourist guide showing views of the church while providing historical information in a smart, easy, effective and user-friendly way. The proposed application is compatible with multiple mobile platforms and is considered user-friendly. The findings show that this application will improve the practice of the tourism sector in the Holy Land, increase the number of tourists visiting the Church of the Nativity and facilitate access to historical data that has been difficult to obtain using traditional tourist guidance. The value that tourism adds to a country cannot be denied, and the more technological advances are incorporated in this sector, the better the country's tourism sector can be served. Palestine's economy is heavily dependent on tourism in many of its main cities, despite several limitations, and technological development is needed to enable this sector to flourish. The proposed mobile application would have a positive impact on the development of the tourism sector by creating an augmented reality environment for tourists inside the church, helping them to navigate and learn about holy places in a non-traditional way, using a virtual tourist guide.

Keywords: Smartphones, tourism, tourists guide, augmented reality, Palestine.

32 High Quality Colored Wind Chimes by Anodization on Aluminum Alloy

Authors: Chia-Chih Wei, Yun-Qi Li, Ssu-Ying Chen, Hsuan-Jung Chen, Hsi-Wen Yang, Chih-Yuan Chen, Chien-Chon Chen

Abstract:

In this paper, we use a high-quality anodization technique to make a colored wind chime with a nano-tube-structured anodic film, controlling the length-to-diameter ratio of an aluminum rod and controlling the oxide film structure on the rod's surface by the anodizing method. The experiments used hard anodization to grow an anodic film of controllable thickness on an aluminum alloy surface. The hard anodization film has high hardness, high insulation, high-temperature resistance, good corrosion resistance, color, and suitability for mass production, and can be further applied in transportation, electronic products, the biomedical field, and energy industry applications. This study also provides in-depth research and a detailed discussion of the hard anodizing process for aluminum alloy surfaces, including pre-anodization, anodization, and post-anodization treatments. The anodization experiments used a mixed acid solution of sulfuric acid and oxalic acid as the electrolyte, and controlled the temperature, time, current density, and final voltage to obtain the anodic film. The properties of the resulting anodic film, including its thickness, hardness, insulation, corrosion characteristics, and microstructure, were measured, and the hard anodization efficiency was calculated. In this way, different sound transmission speeds in the aluminum rod, and hence different tones, can be obtained. Another feature of the present work is the use of anodizing, dyeing, laser engraving patterning and electrophoresis methods to make good-quality colored aluminum wind chimes.

Keywords: Anodization, aluminum, wind chime, nano-tube.
