Search results for: Avionics Based Integrity Augmentation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11413

7303 Modeling the Hybrid Battery/Super-Storage System for a Solar Standalone Microgrid

Authors: Astiaj Khoramshahi, Hossein Ahmadi Danesh Ashtiani, Ahmad Khoshgard, Hamidreza Damghani, Leila Damghani

Abstract:

Solar energy systems with various storage options need to be evaluated against the energy requirements of their applications. Modeling and analysis of the storage systems are also necessary to increase the effectiveness of combinations of these systems. In this paper, a MATLAB-based analysis is carried out to evaluate the response of a hybrid energy system considering various renewable energy and energy storage technologies. Three different simulation scenarios are presented. Simulation results for the first scenario show that the battery is effective in smoothing the overall power demand of the studied consumer over a day, but high-frequency transient loads on the grid cannot be effectively cancelled because of the limited response speed of the battery control. Simulation results for the second scenario, using the super storage system, show that sudden changes in demand power are smoothed by the super storage. Most of these sudden changes in power demand are caused by switching consumer loads and by variable solar power (due to clouds passing over the solar array). Simulation results for the third scenario show the effect of the hybrid system for the same consumer and solar array output, leading to the smallest power demand placed on the grid, both overall and at peak times. Compared with the "battery only" scenario, the peak load is significantly reduced through load displacement.
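
The battery/super-storage split described in the first two scenarios amounts to a frequency separation of the demand signal. A minimal sketch using a first-order low-pass filter follows; the time constant, the synthetic load shape and all names are illustrative assumptions, not values from the paper's MATLAB model.

```python
import numpy as np

def split_demand(net_power, dt=1.0, tau=60.0):
    """Split net power demand between battery and super storage.

    A first-order low-pass filter extracts the slow component,
    assigned to the battery; the high-frequency residual goes to
    the super storage. tau (s) is an assumed battery response
    time constant, not a value from the paper.
    """
    alpha = dt / (tau + dt)
    battery = np.zeros_like(net_power)
    for k in range(1, len(net_power)):
        battery[k] = battery[k - 1] + alpha * (net_power[k] - battery[k - 1])
    return battery, net_power - battery

# Illustration: a steady 5 kW load with a short transient spike
t = np.arange(600.0)                           # 10 minutes, 1 s steps
demand = 5.0 + 2.0 * (np.abs(t - 300) < 20)    # kW
batt, sc = split_demand(demand)
print(f"peak battery: {batt.max():.2f} kW, peak super storage: {sc.max():.2f} kW")
```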

Keywords: Storage system, super storage, standalone, microgrid.

7302 Scholar Index for Research Performance Evaluation Using Multiple Criteria Decision Making Analysis

Authors: C. Ardil

Abstract:

This paper presents an objective quantitative methodology for evaluating an individual's scholarly research output using multiple criteria decision analysis. A multiple criteria decision making analysis (MCDMA) methodological process is adopted to build a multiple criteria evaluation model. The scholar index gives significant information about a researcher's productivity and the scholarly impact of his or her publications in a single number (s is the number of publications with at least s citations each); together with the cumulative research citation index, it is included in citation databases to cover the multidimensional complexity of scholarly research performance and to support objective evaluations. The scholar index, one of the publication activity indexes, is analyzed here because it is considered the most appropriate scientometric indicator, smoothing over many drawbacks of assessing scholarly output by merely counting publications (quantity) and citations (quality). Hence, this study uses a set of indicators based on the scholar index for evaluating scholarly researchers. The Google Scholar open science database was used to assess and discuss the scholarly productivity and impact of researchers. Based on the experiment of computing the scholar index and its derivative indexes for a set of researchers on an open research database platform, quantitative methods of assessing scholarly research output were successfully applied to rank researchers. The proposed methodology covers the ranking, the selection of the data on which a scholarly research performance evaluation is based, the analysis of the data, and the presentation of the multiple criteria analysis results.
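
The parenthetical definition quoted above (s is the number of publications with at least s citations each) is directly computable from a citation list. A minimal sketch, with invented citation counts:

```python
def scholar_index(citations):
    """Largest s such that at least s publications have at least
    s citations each -- the definition quoted in the abstract,
    identical in form to the h index."""
    s = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            s = rank
        else:
            break
    return s

# Illustrative record, not real data
print(scholar_index([25, 8, 5, 3, 3, 1]))  # prints 3
```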

Keywords: Multiple Criteria Decision Making Analysis, MCDMA, Research Performance Evaluation, Scholar Index, h index, Science Citation Index, Science Efficiency, Cumulative Citation Index, Scientometrics

7301 Tool Wear of Titanium/Tungsten/Silicon/Aluminum-Based-Coated End Mill Cutters in Milling Hardened Steel

Authors: Tadahiro Wada, Koji Iwamoto

Abstract:

In turning hardened steel, polycrystalline cubic boron nitride (cBN) compacts are widely used due to their higher hardness and higher thermal conductivity. However, in milling hardened steel, fracture of cBN cutting tools readily occurs because of their poor fracture toughness. Therefore, coated cemented carbide tools, which have good fracture toughness and wear resistance, are widely used instead. In this study, hardened steel (ASTM D2, JIS SKD11, 60HRC) was milled with three physical vapor deposition (PVD)-coated cemented carbide end mill cutters in order to determine effective tool materials for cutting hardened steel at high cutting speeds. The coating films used were (Ti,W)N/(Ti,W,Si)N and (Ti,W)N/(Ti,W,Si,Al)N coating films; (Ti,W,Si,Al)N is a new type of coating film. The inner layer of both coating systems is a (Ti,W)N coating film, and the outer layer is a (Ti,W,Si)N or (Ti,W,Si,Al)N coating film, respectively. A commercial (Ti,Al)N-based coating film was also used for comparison. The following results were obtained: (1) In milling hardened steel at a cutting speed of 3.33 m/s, the tool wear width of the (Ti,W)N/(Ti,W,Si,Al)N-coated tool was smaller than that of the (Ti,W)N/(Ti,W,Si)N-coated tool, and also smaller than that of the commercial (Ti,Al)N-coated tool. (2) The tool wear of the (Ti,W)N/(Ti,W,Si,Al)N-coated tool increased with increasing cutting speed. (3) The (Ti,W)N/(Ti,W,Si,Al)N-coated cemented carbide is an effective tool material for high-speed cutting at cutting speeds of up to 3.33 m/s.

Keywords: cutting, physical vapor deposition (PVD) coating system, hardened steel, tool wear

7300 Perforation Analysis of Aluminum Alloy Sheets Subjected to High Loading Rates and Heated Using a Thermal Chamber: Experimental and Numerical Approach

Authors: A. Bendarma, T. Jankowiak, A. Rusinek, T. Lodygowski, M. Klósak, S. Bouslikhane

Abstract:

An analysis of the mechanical characteristics and dynamic behavior of aluminum alloy sheets in perforation tests, based on experiments coupled with numerical simulation, is presented. The impact problems (penetration and perforation) of metallic plates have been of interest for a long time, and experimental, analytical as well as numerical studies have been carried out to analyze the perforation process in detail. Based on these approaches, the ballistic properties of the material have been studied. A laser sensor is used during the experiments to measure the initial and residual velocities, from which the ballistic curve and the ballistic limit are obtained. The energy balance, including the energy absorbed by the aluminum, is also reported together with the ballistic curve and ballistic limit. A high-speed camera helps to estimate the failure time and to calculate the impact force. A wide range of initial impact velocities, from 40 up to 180 m/s, has been covered during the tests. The mass of the conical-nose projectile is 28 g, its diameter is 12 mm, and the thickness of the aluminum sheet is 1.0 mm. The ABAQUS/Explicit finite element code has been used to simulate the perforation process. The ballistic curves obtained numerically were verified against the experimental ones, and the failure patterns are presented using the optimal mesh densities, which provide stable results. Good agreement between the numerical and experimental results is observed.
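
The energy balance mentioned above follows from the projectile's kinetic energy before and after perforation, E_abs = 0.5 * m * (v_i^2 - v_r^2). A minimal sketch using the paper's 28 g projectile mass (the velocity pair is an assumed illustration, not a measured data point):

```python
def absorbed_energy(m_kg, v_impact, v_residual):
    """Energy (J) absorbed by the plate during perforation,
    from the projectile's kinetic-energy balance."""
    return 0.5 * m_kg * (v_impact**2 - v_residual**2)

# 28 g projectile (from the paper); velocities are illustrative
print(f"{absorbed_energy(0.028, 120.0, 95.0):.1f} J")
```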

Keywords: Aluminum alloy, ballistic behavior, failure criterion, numerical simulation.

7299 The MUST ADS Concept

Authors: J-B. Clavel, N. Thiollière, B. Mouginot

Abstract:

The presented work is motivated by a French law regarding nuclear waste management. A new conceptual Accelerator Driven System (ADS) designed for Minor Actinides (MA) transmutation has been assessed by numerical simulation. The MUltiple Spallation Target (MUST) ADS combines high thermal power (up to 1.4 GWth) and high specific power. A 30 mA, 1 GeV proton beam is divided into three secondary beams transmitted onto three liquid lead-bismuth spallation targets. Neutronic and thermal-hydraulic simulations have been performed with the code MURE, based on the Monte-Carlo transport code MCNPX. A methodology has been developed to define the characteristics of the MUST ADS concept according to a specific transmutation scenario. The reference scenario is based on a MA flux (neptunium, americium and curium) coming from European Pressurized Reactors (EPR), and a plutonium multi-reprocessing strategy is accounted for. The MUST ADS reference concept is a sodium cooled fast reactor. The MA fuel at equilibrium is mixed with an MgO inert matrix to limit the core reactivity and improve the fuel thermal conductivity. The fuel is irradiated over five years; five years of cooling and two years for fuel fabrication are also taken into account. The MUST ADS reference concept burns about 50% of the initial MA inventory during a complete cycle. In terms of mass, up to 570 kg/year are transmuted in one unit. The methodology to design the MUST ADS and to calculate the fuel composition at equilibrium is described precisely in the paper. A detailed fuel evolution analysis is performed, and the reference scenario is compared to a scenario where only americium transmutation is performed.

Keywords: Accelerator Driven System, double strata scenario, minor actinides, MUST, transmutation.

7298 Catalytic Decomposition of Potassium Monopersulfate: The Kinetics

Authors: Olga Gimeno, Javier Rivas, Maria Carbajo, Teresa Borralho

Abstract:

Potassium monopersulfate has been decomposed in aqueous solution in the presence of Co(II). The process has been simulated by means of a mechanism based on elementary reactions. Rate constants have been taken from literature reports or, alternatively, assimilated to analogous reactions occurring in Fenton's chemistry. The model has been successfully applied over several operating conditions.

Keywords: Monopersulfate, Oxone®, Sulfate radicals, Water treatment

7297 Automated Video Surveillance System for Detection of Suspicious Activities during Academic Offline Examination

Authors: G. Sandhya Devi, G. Suvarna Kumar, S. Chandini

Abstract:

This research work aims to develop a system that analyzes and identifies students who indulge in malpractice or suspicious activities during an academic offline examination. Automated video surveillance provides an optimal solution, monitoring the students and identifying malpractice events immediately. The work is organized into three modules. The first module performs an impersonation check using a PCA-based face recognition method, cross-checking each student's profile against the database. The presence or absence of a student is also determined in this module by an image registration technique, wherein a grid is formed from all the images registered by the frontal camera at the determined positions. The second module detects facial malpractices, such as a student getting involved in conversation with another or trying to obtain unauthorized information, based on a threshold range evaluated from the state of his or her mouth (open or closed). The third module identifies unauthorized material or gadgets used in the examination hall by training positive samples of the object through various stages; here, a top-view camera feed is analyzed to detect the suspicious activities. The system automatically alerts the administration when any suspicious activity is identified, thereby reducing the error rate caused by manual monitoring. This work is an improvement over our previous work on identifying suspicious activities of examinees in an offline examination.

Keywords: Impersonation, image registration, incrimination, object detection, threshold evaluation.

7296 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies

Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey

Abstract:

Recent perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the Earth's waters are becoming increasingly difficult to determine because of the additional uncertainty related to anthropogenic emissions. The worldwide observed changes in the large-scale hydrological cycle have been related to an increase in the observed temperature over several decades. Although the effect of climate change on hydrology provides a general picture of possible hydrological global change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Of the downscaling techniques, dynamic downscaling is usually based on the use of Regional Climate Models (RCMs), which generate finer-resolution output based on atmospheric physics over a region using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling, by contrast, derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is on the need for statistical downscaling techniques for the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as input to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture and other hydrological variables of interest.
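
A predictor-predictand transfer function of the kind described above can be sketched as an ordinary least-squares regression. Everything below (the synthetic predictor fields, coefficients and future scenario slice) is illustrative, standing in for real reanalysis and GCM data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 30 years of monthly GCM predictors (e.g. mean
# sea-level pressure, specific humidity, geopotential height) and an
# observed station-scale predictand used for calibration.
n_months, n_predictors = 360, 3
X = rng.normal(size=(n_months, n_predictors))
y = X @ np.array([1.5, -0.7, 0.3]) + rng.normal(scale=0.5, size=n_months)

# Calibrate the statistical transfer function by least squares
X1 = np.column_stack([np.ones(n_months), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Apply it to a future GCM scenario slice to project the predictand
X_future = rng.normal(size=(120, n_predictors))
y_future = np.column_stack([np.ones(120), X_future]) @ beta
print("fitted coefficients:", beta.round(2))
print("mean projected value:", y_future.mean().round(2))
```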

Keywords: Climate Change, Downscaling, GCM, RCM.

7295 Growth Performance and Economy of Production of Pullets Fed on Different Energy Based Sources

Authors: O. A. Anjola, M. A. Adejobi, A. Ogunbameru, F. P. Agbaye, R. O. Odunukan

Abstract:

This experiment was conducted for 8 weeks to evaluate the growth performance and economics of pullets fed on different dietary energy sources. A total of 300 Harco Black pullets were used for this experiment. The birds were completely randomized and divided into four dietary treatment groups. Each treatment group had three replicates of twenty-five birds per replicate. Four diets containing maize, spaghetti, noodles, and biscuit were formulated as diets 1, 2, 3 and 4, respectively. Diet 1, containing maize, is the control, while diets 2, 3, and 4 contain spaghetti, noodles, and biscuit waste meal at 100% replacement for maize on a weight-for-weight basis. Performance indices of feed intake, body weight, weight gain, feed conversion ratio (FCR) and economy of production were measured. Blood samples were also collected for haematology and serum biochemistry assessment. The results indicated that the dietary energy source significantly (P < 0.05) affected feed intake, body weight, weight gain, and FCR. The best cost of feed per kilogram of body weight gain was obtained with the spaghetti-based diet (₦559.30). Although the best growth performance was obtained with diet 1 (maize), it can be concluded that spaghetti as a replacement for maize in pullet diets is the most economical and profitable option for production, without any deleterious effects. Blood parameters of the birds were not significantly (p > 0.05) influenced by the dietary energy sources used in this experiment.

Keywords: Growth performance, spaghetti, noodles, biscuit, profit, hematology and serum biochemistry.

7294 Digital Automatic Gain Control Integrated on WLAN Platform

Authors: Emilija Miletic, Milos Krstic, Maxim Piz, Michael Methfessel

Abstract:

In this work we present a solution for DAGC (Digital Automatic Gain Control) in WLAN receivers compatible with the IEEE 802.11a/g standards. Those standards define communication in the 5/2.4 GHz bands using the Orthogonal Frequency Division Multiplexing (OFDM) modulation scheme. The WLAN transceiver that we have used enables gain control over a Low Noise Amplifier (LNA) and a Variable Gain Amplifier (VGA). The control over those signals is performed in our digital baseband processor using a dedicated hardware block, the DAGC. The DAGC is used to automatically control the VGA and LNA in order to achieve a better signal-to-noise ratio, decrease the FER (Frame Error Rate) and hold the average power of the baseband signal close to the desired set point. The DAGC function in the baseband processor is performed in a few steps: measuring the power levels of baseband samples of an RF signal, accumulating the differences between the measured power level and the actual gain setting, adjusting the gain factor based on the accumulation, and applying the adjusted gain factor to the baseband values. Based on measurements of the RSSI signal's dependence on input power, we have concluded that this digital AGC can be implemented by applying a simple linearization of the RSSI. This solution is very simple but also effective, and it reduces the complexity and power consumption of the DAGC. The DAGC was implemented and tested both in FPGA and in ASIC as a part of our WLAN baseband processor. Finally, we have integrated this circuit in a compact WLAN PCMCIA board based on MAC and baseband ASIC chips designed by us.
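
The measure-accumulate-adjust-apply loop described above can be sketched in a few lines. This is a toy model of the control law, not the paper's hardware block; the set point, loop gain and block size are assumed values:

```python
import numpy as np

def dagc(samples, set_point_db=0.0, loop_gain=0.05, block=64):
    """Toy digital AGC: measure block power, accumulate the error
    against the set point, and apply the adjusted gain to the
    baseband samples. Parameters are illustrative assumptions."""
    out = np.empty_like(samples)
    gain_db = 0.0
    for i in range(0, len(samples), block):
        blk = samples[i:i + block]
        power_db = 10 * np.log10(np.mean(np.abs(blk) ** 2) + 1e-12)
        gain_db += loop_gain * (set_point_db - (power_db + gain_db))
        out[i:i + block] = blk * 10 ** (gain_db / 20)
    return out

# A weak complex baseband tone is pulled toward the 0 dB set point
t = np.arange(4096)
x = 0.05 * np.exp(2j * np.pi * 0.01 * t)
y = dagc(x)
print(10 * np.log10(np.mean(np.abs(y[-256:]) ** 2)))  # close to 0 dB
```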

Keywords: WLAN, AGC, RSSI, baseband processor

7293 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material

Authors: S. Boria

Abstract:

In recent years, it has become possible to improve the crashworthiness of an automotive body structure from the beginning of the design stage, thanks to the development of specific optimization tools. It is well known that finite element codes can help the designer investigate the crashing performance of structures under dynamic impact. Therefore, by coupling nonlinear mathematical programming procedures and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, many optimization methods which are based on statistical techniques and utilize estimated models, called meta-models, are quickly spreading. A meta-model is an approximation of a detailed simulation model based on a dataset of inputs identified by the design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various types of meta-modeling techniques, the Kriging method appears excellent in accuracy, robustness and efficiency compared to other ones when applied to crashworthiness optimization. Therefore, such a meta-model was used in this work in order to improve the structural optimization of a bumper for a racing car in composite material subjected to frontal impact. The specific energy absorption represents the objective function to maximize, and the geometrical parameters subjected to some design constraints are the design variables. The LS-DYNA code was interfaced with the LS-OPT tool in order to find the optimized solution through the use of a domain reduction strategy. With the use of the Kriging meta-model, the crashworthiness characteristics of the composite bumper were improved.

Keywords: Composite material, crashworthiness, finite element analysis, optimization.

7292 Silver Modified TiO2/Halloysite Thin Films for Decontamination of Target Pollutants

Authors: Dionisios Panagiotaras, Elias Stathatos, Dimitrios Papoulis

Abstract:

The sol-gel method has been used to fabricate nanocomposite films on glass substrates composed of halloysite clay mineral and nanocrystalline TiO2. The synthesis involves a simple chemical method utilizing a nonionic surfactant molecule as pore-directing agent, along with an acetic acid-based sol-gel route in the absence of water molecules. Thermal treatment of the composite films at 450 °C ensures elimination of the organic material and leads to the formation of TiO2 nanoparticles on the surface of the halloysite nanotubes. Microscopy techniques and porosimetry methods were used to delineate the structural characteristics of the materials. The nanocomposite films produced have no cracks, and an active anatase crystal phase with small crystallite size was deposited on the halloysite nanotubes. The photocatalytic properties of the new materials were examined through the decomposition of the Basic Blue 41 azo dye in solution. These nanotechnology-based composite films show high efficiency for dye discoloration in spite of the different halloysite quantities and the small amount of halloysite/TiO2 catalyst immobilized onto the glass substrates. Moreover, we examined the modification of the halloysite/TiO2 films with silver particles in order to improve their photocatalytic properties. Indeed, the presence of silver nanoparticles enhances the discoloration rate of Basic Blue 41 compared to the efficiencies obtained for unmodified films.

Keywords: Clay mineral, nanotubular Halloysite, Photocatalysis, Titanium Dioxide, Silver modification.

7291 The Importance of Zakat in Struggle against Circle of Poverty and Income Redistribution

Authors: Hasan Bulent Kantarcı

Abstract:

This paper examines how zakat provides fair income redistribution and aids the struggle against poverty. Providing fair income redistribution and combating poverty are among the fundamental tasks performed by countries all over the world. Each country seeks a solution to these problems according to its political, economic and administrative style through various economic and financial policies. The same needs can be addressed through the zakat institution in Islam. Nowadays, we observe different versions of zakat in developed countries: applications such as the negative income tax are merely a different form of zakat, applied in almost the same way but under a changed name. However, the minimum thresholds for zakat (e.g., 85 g of gold or 40 animals) get altered, and various amounts are put into practice. It might be named negative income tax instead of zakat; nonetheless, these applications are based on the Holy Koran and the hadith released 1400 years ago. Besides, considering the savagery and slavery in the world at that time, we can easily recognize the true value of the zakat applied for the first time in the Islamic system. Through zakat, governments are able to transfer income to the poor as a means of enabling them to achieve the minimum standard of living required. With regard to who benefits from zakat, objective and fair criteria were used to determine the beneficiaries, contrary to the notion that this was based on people's own choices. Since zakat is obligatory, the transfers are not forwarded directly but via the government, which requires vast governmental organizations. Through the application of zakat, reduced levels of poverty can be achieved, and fair income redistribution can be ensured.

Keywords: Cycle of poverty, Islamic finance, income redistribution, zakat.

7290 Performance Analysis of Digital Signal Processors Using SMV Benchmark

Authors: Erh-Wen Hu, Cyril S. Ku, Andrew T. Russo, Bogong Su, Jian Wang

Abstract:

Unlike general-purpose processors, digital signal processors (DSP processors) are strongly application-dependent. To meet the needs for diverse applications, a wide variety of DSP processors based on different architectures ranging from the traditional to VLIW have been introduced to the market over the years. The functionality, performance, and cost of these processors vary over a wide range. In order to select a processor that meets the design criteria for an application, processor performance is usually the major concern for digital signal processing (DSP) application developers. Performance data are also essential for the designers of DSP processors to improve their design. Consequently, several DSP performance benchmarks have been proposed over the past decade or so. However, none of these benchmarks seem to have included recent new DSP applications. In this paper, we use a new benchmark that we recently developed to compare the performance of popular DSP processors from Texas Instruments and StarCore. The new benchmark is based on the Selectable Mode Vocoder (SMV), a speech-coding program from the recent third generation (3G) wireless voice applications. All benchmark kernels are compiled by the compilers of the respective DSP processors and run on their simulators. Weighted arithmetic mean of clock cycles and arithmetic mean of code size are used to compare the performance of five DSP processors. In addition, we studied how the performance of a processor is affected by code structure, features of processor architecture and optimization of compiler. The extensive experimental data gathered, analyzed, and presented in this paper should be helpful for DSP processor and compiler designers to meet their specific design goals.
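
The two aggregate measures named above, a weighted arithmetic mean of clock cycles and a plain arithmetic mean of code size, are simple to reproduce per processor. A sketch with invented kernel names, weights and numbers (not actual SMV measurements):

```python
def benchmark_scores(results, weights):
    """Aggregate per-kernel cycle counts with a weighted arithmetic
    mean and code sizes with a plain arithmetic mean. `results`
    maps kernel -> (cycles, code_size); `weights` maps kernel ->
    execution weight. All numbers below are invented placeholders."""
    total_w = sum(weights.values())
    cycles = sum(weights[k] * c for k, (c, _) in results.items()) / total_w
    size = sum(s for _, s in results.values()) / len(results)
    return cycles, size

results = {"lpc_analysis": (1.2e6, 4096),
           "pitch_search": (3.4e6, 2048),
           "codebook_search": (2.1e6, 1024)}
weights = {"lpc_analysis": 0.2, "pitch_search": 0.5, "codebook_search": 0.3}
mean_cycles, mean_size = benchmark_scores(results, weights)
print(f"weighted mean cycles: {mean_cycles:.3g}, mean code size: {mean_size:.0f} B")
```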

Keywords: digital signal processors, DSP benchmark, instruction level parallelism, modified cyclomatic complexity, performance analysis.

7289 Prediction of Product Size Distribution of a Vertical Stirred Mill Based on Breakage Kinetics

Authors: C. R. Danielle, S. Erik, T. Patrick, M. Hugh

Abstract:

In the last decade, there has been an increase in demand for fine grinding due to the depletion of coarse-grained orebodies and an increase in the processing of finely disseminated minerals and complex orebodies. These ores have provided new challenges in concentrator design because fine and ultra-fine grinding is required to achieve acceptable recovery rates. Therefore, the correct design of a grinding circuit is important for minimizing unit costs and increasing product quality. The use of ball mills for grinding in fine size ranges is inefficient, and, therefore, vertical stirred grinding mills are becoming increasingly popular in the mineral processing industry due to their well-known high energy efficiency. This work presents a proposed methodology to predict the product size distribution of a vertical stirred mill using a Bond ball mill. The Population Balance Model (PBM) was used to empirically analyze the performance of a vertical mill and a Bond ball mill. The breakage parameters obtained for both grinding mills are compared to determine the possibility of predicting the product size distribution of a vertical mill based on the results obtained from the Bond ball mill. The biggest advantage of this methodology is that most mineral processing laboratories already have a Bond ball mill to perform the tests suggested in this study. Preliminary results show the possibility of predicting the performance of a laboratory vertical stirred mill using a Bond ball mill.
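
The population balance model underlying this comparison tracks the mass in each size class as it breaks at a selection rate and redistributes to finer classes. A minimal size-discretized sketch with assumed selection and breakage values (in the proposed methodology these parameters would be fitted from the Bond ball mill tests):

```python
import numpy as np

def pbm_step(m, S, B, dt):
    """One explicit Euler step of the size-discretized population
    balance: class i loses mass by breakage at rate S[i] and gains
    the fragments B[i, j] of coarser classes j."""
    loss = S * m
    return m + dt * (B @ loss - loss)

n = 5
S = np.array([0.8, 0.5, 0.3, 0.15, 0.0])   # 1/min; finest class does not break
B = np.zeros((n, n))                        # B[i, j]: fraction of class-j breakage to class i
for j in range(n - 1):
    B[j + 1:, j] = 1.0 / (n - 1 - j)        # fragments spread evenly (toy assumption)

m = np.array([1.0, 0.0, 0.0, 0.0, 0.0])    # all feed in the coarsest class
for _ in range(100):
    m = pbm_step(m, S, B, dt=0.1)
print(m.round(3), "total:", m.sum().round(3))  # mass is conserved
```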

Keywords: Bond ball mill, population balance model, product size distribution, vertical stirred mill.

7288 Pushover Analysis of Masonry Infilled Reinforced Concrete Frames for Performance Based Design for Near Field Earthquakes

Authors: Alok Madan, Ashok Gupta, Arshad K. Hashmi

Abstract:

Non-linear dynamic time history analysis is considered as the most advanced and comprehensive analytical method for evaluating the seismic response and performance of multi-degree-of-freedom building structures under the influence of earthquake ground motions. However, effective and accurate application of the method requires the implementation of advanced hysteretic constitutive models of the various structural components including masonry infill panels. Sophisticated computational research tools that incorporate realistic hysteresis models for non-linear dynamic time-history analysis are not popular among the professional engineers as they are not only difficult to access but also complex and time-consuming to use. In addition, commercial computer programs for structural analysis and design that are acceptable to practicing engineers do not generally integrate advanced hysteretic models which can accurately simulate the hysteresis behavior of structural elements with a realistic representation of strength degradation, stiffness deterioration, energy dissipation and ‘pinching’ under cyclic load reversals in the inelastic range of behavior. In this scenario, push-over or non-linear static analysis methods have gained significant popularity, as they can be employed to assess the seismic performance of building structures while avoiding the complexities and difficulties associated with non-linear dynamic time-history analysis. “Push-over” or non-linear static analysis offers a practical and efficient alternative to non-linear dynamic time-history analysis for rationally evaluating the seismic demands. The present paper is based on the analytical investigation of the effect of distribution of masonry infill panels over the elevation of planar masonry infilled reinforced concrete [R/C] frames on the seismic demands using the capacity spectrum procedures implementing nonlinear static analysis [pushover analysis] in conjunction with the response spectrum concept. An important objective of the present study is to numerically evaluate the adequacy of the capacity spectrum method using pushover analysis for performance based design of masonry infilled R/C frames for near-field earthquake ground motions.

Keywords: Nonlinear analysis, capacity spectrum method, response spectrum, seismic demand, near-field earthquakes.

7287 Development of Nondestructive Imaging Analysis Method Using Muonic X-Ray with a Double-Sided Silicon Strip Detector

Authors: I-Huan Chiu, Kazuhiko Ninomiya, Shin’ichiro Takeda, Meito Kajino, Miho Katsuragawa, Shunsaku Nagasawa, Atsushi Shinohara, Tadayuki Takahashi, Ryota Tomaru, Shin Watanabe, Goro Yabu

Abstract:

In recent years, a nondestructive elemental analysis method based on muonic X-ray measurements has been developed and applied to various samples. Muonic X-rays are emitted after the formation of a muonic atom, which occurs when a negatively charged muon is captured into a muon atomic orbit around the nucleus. Because muonic X-rays have a higher energy than electronic X-rays due to the muon mass, they can be measured without being absorbed by a material. Thus, estimating the two-dimensional (2D) elemental distribution of a sample became possible using an X-ray imaging detector. In this work, we report a nondestructive imaging experiment using muonic X-rays at the Japan Proton Accelerator Research Complex. The irradiated target consisted of a polypropylene material, and a double-sided silicon strip detector, which was developed as an imaging detector for astronomical observation, was employed. A peak corresponding to muonic X-rays from the carbon atoms in the target was clearly observed in the energy spectrum at an energy of 14 keV, and 2D visualizations were successfully reconstructed to reveal the projection image of the target. This result demonstrates the potential of the nondestructive elemental imaging method based on muonic X-ray measurement. To obtain a higher position resolution for imaging smaller targets, a new detector system will be developed to improve the statistical analysis in further research.

Keywords: DSSD, muon, muonic X-ray, imaging, non-destructive analysis

7286 Control of Airborne Aromatic Hydrocarbons over TiO2-Carbon Nanotube Composites

Authors: Joon Y. Lee, Seung H. Shin, Ho H. Chun, Wan K. Jo

Abstract:

Polyvinyl acetate (PVA)-based titania (TiO2)-carbon nanotube composite nanofibers (PVA-TCCNs) with various PVA-to-solvent ratios and PVA-based TiO2 composite nanofibers (PVA-TN) were synthesized using an electrospinning process followed by thermal treatment. The photocatalytic activities of these nanofibers in the degradation of airborne monocyclic aromatics under visible-light irradiation were examined. This study focuses on the application of these photocatalysts to the degradation of the target compounds at sub-part-per-million indoor air concentrations. The characteristics of the photocatalysts were examined using scanning electron microscopy, X-ray diffraction, ultraviolet-visible spectroscopy, and Fourier-transform infrared spectroscopy. For all the target compounds, the PVA-TCCNs showed photocatalytic degradation efficiencies superior to those of the reference PVA-TN. Specifically, the average photocatalytic degradation efficiencies for benzene, toluene, ethyl benzene, and o-xylene (BTEX) obtained using the PVA-TCCNs with a PVA-to-solvent ratio of 0.3 (PVA-TCCN-0.3) were 11%, 59%, 89%, and 92%, respectively, whereas those observed using the PVA-TNs were 5%, 9%, 28%, and 32%, respectively. PVA-TCCN-0.3 displayed the highest photocatalytic degradation efficiency for BTEX, suggesting the presence of an optimal PVA-to-solvent ratio for the synthesis of PVA-TCCNs. The average photocatalytic efficiencies for BTEX decreased from 11% to 4%, 59% to 18%, 89% to 37%, and 92% to 53%, respectively, when the flow rate was increased from 1.0 to 4.0 L min−1. In addition, the average photocatalytic efficiencies for BTEX decreased from 11% to approximately 0%, 59% to 3%, 89% to 7%, and 92% to 13%, respectively, when the input concentration was increased from 0.1 to 1.0 ppm. The prepared PVA-TCCNs were effective for the purification of airborne aromatics at indoor concentration levels, particularly when the operating conditions were optimized.

Keywords: Mixing ratio, nanofiber, polymer, reference photocatalyst.

7285 An Investigation into the Use of an Atomistic, Hermeneutic, Holistic Approach in Education Relating to the Architectural Design Process

Authors: N. Pritchard

Abstract:

Within architectural education, students arrive forearmed with their life experience, knowledge gained from subject-based learning, and their brains and, more specifically, their imaginations. The learning-by-doing that they embark on in studio-based/project-based learning calls for supervision that allows the student to proactively undertake research and experimentation with design solution possibilities. The degree to which this supervision includes direction is subject to debate and differing opinion. It can be argued that if the student is to learn by doing, then design decision-making within the design process needs to be instigated and owned by the student, so that they have the ability to personally reflect on and evaluate those decisions. Within this premise lies the problem that the student's endeavours can become unstructured and unfocused as they work their way into a new and complex activity. A resultant weakness can be that the design activity is compartmented rather than holistic or comprehensive, and the student's reflections are consequently impoverished in terms of providing a positive, informative feedback loop. The construct proffered in this paper is that a supportive 'armature' or 'Heuristic-Framework' can be developed that facilitates a holistic approach and reflective learning. The normal explorations of architectural design comprise analysing the site and context, reviewing building precedents, and assimilating the briefing information. However, the student can still be compromised by not knowing what they need to know. The long-serving triad 'Firmness, Commodity and Delight' provides a broad-brush framework of considerations to explore and integrate into good design. If this were further atomised into subdivisions formed from the disparate aspects of architectural design that need to be considered within the design process, the student could sieve through the facts more methodically and reflectively, considering their interrelationships, conflicts and alliances. The words FACTS and SIEVE hold the acronym of the aspects that form the Heuristic-Framework: Function, Aesthetics, Context, Tectonics, Spatial, Servicing, Infrastructure, Environmental, Value and Ecological issues. The Heuristic could be used as a hermeneutic model, with each aspect of design being focused on and considered in abstraction and then considered in its relation to the other aspects and to the design proposal as a whole. Importantly, the heuristic could be used as a method for gathering information and enhancing the design brief. The more poetic, mysterious, intuitive, unconscious processes should still be able to occur for the student. The Heuristic-Framework should not be seen as comprehensive, prescriptive or formulaic, nor as inhibiting the wide exploration of possibilities and solutions within the architectural design process.

Keywords: Atomistic, hermeneutic, holistic approach, architectural design studio education.

7284 An Analysis of Gamification in the Post-Secondary Classroom

Authors: F. Saccucci

Abstract:

Gamification has now started to take root in the post-secondary classroom. Educators have learned much about gamification to date, but there is still a great deal to learn. One definition of gamification is the ability to engage post-secondary students with games that are fun and correlate to the classroom curriculum. There is no shortage of literature illustrating the advantages of gamification in the classroom. This study is an extension of similar thought, as well as of a previous study in which in-class testing proved, using paired t-tests, that gamification significantly improved students' understanding of the subject material. Gamification in the classroom can range from high-end computer-simulated software to paper-based games, both of which have advantages and disadvantages. This analysis used a paper-based game to highlight certain qualitative advantages of gamification. The paper-based game in this analysis was inexpensive, required little preparation time from the faculty member and consumed approximately 20 minutes of classroom time. Data for the study were collected through in-class student feedback surveys and narrative from the faculty member moderating the game. Students were randomly selected into groups of four. The qualitative advantages identified in this analysis included: 1. Students had a chance to meet, connect with and get to know other students. 2. Students enjoyed the gamification process, given the sense of fun and competition. 3. The post-assessment that followed the simulation game was not part of their grade calculation; it was therefore an opportunity to participate in a low-risk activity whereby students could self-assess their understanding of the subject material. 4. In the view of the students, content knowledge did increase after the gamification process. These qualitative advantages contribute to the argument that gamification should be attempted in today's post-secondary classroom. The analysis also highlighted that eighty (80) percent of the respondents believed the twenty minutes devoted to the gamification process was appropriate, while twenty (20) percent of respondents believed that, rather than scheduling the gamification process and its post-quiz in the last week, a review for the final exam might have been more useful. A follow-up study hopes to determine whether the scheduling of the gamification had any correlation with the percentage of students not wanting to be engaged in the process. It also hopes to determine at what incremental level of time invested in classroom gamification no further material benefit accrues to the student, and whether any correlation exists between respondents preferring not to have the activity at the end of the semester and students not believing that the gamification process increased their curricular knowledge.

Keywords: Gamification, inexpensive, qualitative advantages, post-secondary.

7283 Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach

Authors: Rajvir Kaur, Jeewani Anupama Ginige

Abstract:

With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is transitioning from clinician-oriented to technology-oriented. Many people around the world die of cancer because the disease was not diagnosed at an early stage. Nowadays, computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper carries out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree) and Artificial Neural Networks (ANN). The evaluation is carried out based on the standard evaluation metrics of Precision (P), Recall (R), F1-score and Accuracy. The experimental results show that ANN gave the highest accuracy (99.4%) when tested with the breast cancer dataset. On the other hand, when the ML classifiers are tested with the cervical cancer dataset, the Ensemble (Bagged Tree) technique gives better accuracy (93.1%) than the other classifiers.
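
A minimal sketch of such a comparison, using scikit-learn's bundled breast cancer dataset as a stand-in for the paper's datasets (the ANN and the exact preprocessing are omitted, so these scores will not reproduce the reported 99.4%):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X_train, X_test, y_train, y_test = train_test_split(
    *load_breast_cancer(return_X_y=True), random_state=0)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "LogReg": LogisticRegression(max_iter=5000),
    "Bagged Tree": BaggingClassifier(random_state=0),
}
for name, model in models.items():
    y_pred = model.fit(X_train, y_train).predict(X_test)
    print(f"{name:12s} "
          f"P={precision_score(y_test, y_pred):.3f} "
          f"R={recall_score(y_test, y_pred):.3f} "
          f"F1={f1_score(y_test, y_pred):.3f} "
          f"Acc={accuracy_score(y_test, y_pred):.3f}")
```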

Keywords: Artificial neural networks, breast cancer, cancer dataset, classifiers, cervical cancer, F-score, logistic regression, machine learning, precision, recall, support vector machine.

7282 The DAQ Debugger for iFDAQ of the COMPASS Experiment

Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius

Abstract:

In general, state-of-the-art Data Acquisition Systems (DAQ) in high energy physics experiments must satisfy high requirements in terms of reliability, efficiency and data rate capability. This paper presents the development and deployment of a debugging tool named DAQ Debugger for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. Utilizing a hardware event builder, the iFDAQ is designed to read out data at the experiment's average maximum rate of 1.5 GB/s. In complex software such as the iFDAQ, comprising thousands of lines of code, the debugging process is absolutely essential to reveal all software issues. Unfortunately, conventional debugging of the iFDAQ is not possible during real data taking. The DAQ Debugger is a tool for identifying a problem, isolating its source, and then either correcting the problem or determining a way to work around it. It provides a layer for easy integration into any process and has no impact on process performance. Based on the handling of system signals, the DAQ Debugger represents an alternative to the conventional debuggers provided by most integrated development environments. Whenever a problem occurs, it generates reports containing all the information needed for a deeper investigation and analysis. The DAQ Debugger was fully incorporated into all processes in the iFDAQ during the 2016 run. It helped to reveal the remaining software issues and significantly improved the stability of the system in comparison with the previous run. In the paper, we present the DAQ Debugger from several perspectives and discuss it in detail.
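
The core idea, installing system-signal handlers that generate a report instead of attaching an interactive debugger, can be illustrated with a short Python analogue. The actual iFDAQ tool is a separate implementation; everything below, including the report file name, is an assumption for illustration:

```python
import datetime
import faulthandler
import signal
import traceback

REPORT = "daq_debug_report.txt"   # illustrative file name
report_file = open(REPORT, "a")

# Dump all thread stacks on fatal signals (SIGSEGV, SIGFPE, ...),
# analogous to the report generation described above.
faulthandler.enable(file=report_file, all_threads=True)

def report_and_exit(signum, frame):
    """Handler for catchable signals: append a timestamped stack
    trace, then restore the default action and re-raise so the
    process still terminates as expected."""
    with open(REPORT, "a") as f:
        f.write(f"\n--- signal {signum} at {datetime.datetime.now()} ---\n")
        traceback.print_stack(frame, file=f)
    signal.signal(signum, signal.SIG_DFL)
    signal.raise_signal(signum)

for sig in (signal.SIGTERM, signal.SIGINT):
    signal.signal(sig, report_and_exit)
```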

Keywords: DAQ debugger, data acquisition system, FPGA, system signals, Qt framework.

7281 FACTS Based Stabilization for Smart Grid Applications

Authors: Adel M. Sharaf, Foad H. Gandoman

Abstract:

Nowadays, photovoltaic (PV) farms/parks and large PV smart grid interface schemes are emerging and commonly utilized in renewable energy distributed generation. However, PV hybrid DC-AC schemes using interfacing power electronic converters usually have a negative impact on the power quality and stabilization of a modern electrical network under load excursions and network fault conditions in the smart grid. Consequently, robust FACTS-based interface schemes are required to ensure efficient energy utilization and stabilization of bus voltages, as well as to limit switching/fault inrush current conditions. FACTS devices are also used in smart grid battery interface and storage schemes with PV-battery storage hybrid systems, as an elegant alternative for renewable energy utilization with backup battery storage for electric utility energy and demand-side management, providing the needed energy and power capacity under heavy load conditions. The paper presents a robust PV-Li-Ion battery storage interface scheme for low-voltage distribution/utilization, using FACTS stabilization enhancement and dynamic maximum PV power tracking controllers. Digital simulation and validation of the proposed scheme are done in the MATLAB/Simulink software environment for a low-voltage distribution/utilization system feeding hybrid linear, motorized-inrush and nonlinear loads from a DC-AC interface VSC six-pulse inverter fed from the PV park/farm with a backup Li-Ion storage battery.

Keywords: AC FACTS, Smart grid, Stabilization, PV-Battery Storage, Switched Filter-Compensation (SFC).

7280 Memristor: A Promising Candidate for Neural Circuits in Neuromorphic Computing Systems

Authors: Juhi Faridi, Mohd. Ajmal Kafeel

Abstract:

The advancements in the field of Artificial Intelligence (AI) and technology have led to the evolution of an intelligent era. Neural networks, having computational power and learning ability similar to the brain, are one of the key AI technologies. A neuromorphic computing system (NCS) consists of synaptic devices, neuronal circuits, and a neuromorphic architecture. Memristors are promising candidates for neuromorphic computing systems, but the conductance behavior of the synaptic or neuronal memristor needs to be studied thoroughly in order to connect the device physics to the underlying neuroscience and computer science. Furthermore, more simulation work is needed to utilize existing device properties and to provide guidance for the development of future devices with different performance requirements. This work aims to provide insight into building neuronal circuits using memristors in order to achieve a memristor-based NCS. We shed light on the research conducted on memristors for building analog and digital circuits, so as to motivate research on NCSs built from memristor-based neural circuits for advanced AI applications. This review is a step in that direction: we describe the key findings about memristors and their analog and digital circuit implementations over the years, which can be further utilized in implementing the neuronal circuits of an NCS. The aim is to help electronic circuit designers understand how research on memristors has progressed and how these findings can be used in implementing the neuronal circuits behind recent progress in NCSs.

Keywords: Analog circuits, digital circuits, memristors, neuromorphic computing systems.

7279 Estimation of Geotechnical Parameters by Comparing Monitoring Data with Numerical Results: Case Study of Arash–Esfandiar-Niayesh Under-Passing Tunnel, Africa Tunnel, Tehran, Iran

Authors: Aliakbar Golshani, Seyyed Mehdi Poorhashemi, Mahsa Gharizadeh

Abstract:

Underpassing tunnels are strongly influenced by the surrounding soils. There are complexities in the specification of real soil behavior, owing to the many uncertainties that exist in soil properties and, additionally, to inappropriate soil constitutive models. Such factors may cause settlements in the numerical analysis that are incompatible with the values obtained during actual construction. This paper reports a case study on a specific tunnel constructed by NATM. The tunnel has a depth of 11.4 m, a height of 12.2 m, and a width of 14.4 m with 2.5 lanes. The numerical modeling was based on a 2D finite element program, and the soil behavior was represented by the hardening soil model. According to the field observations, the numerically estimated settlement at the ground surface was approximately four times the measured one after the entire installation of the initial lining, indicating that some unknown factors affect the values. Consequently, the geotechnical parameters are accurately revised by a numerical back-analysis using laboratory and field test data and based on the obtained monitoring data. The obtained results confirm that the soil parameters are typically conservatively underestimated, and additionally that the constitutive models cannot be applied properly to all soil conditions.

Keywords: NATM tunnel, initial lining, field test data, laboratory test data, monitoring data, numerical back-analysis.

7278 Unsupervised Segmentation Technique for Acute Leukemia Cells Using Clustering Algorithms

Authors: N. H. Harun, A. S. Abdul Nasir, M. Y. Mashor, R. Hassan

Abstract:

Leukaemia is a blood cancer that contributes to the increase of the mortality rate in Malaysia each year. There are two main categories of leukaemia: acute and chronic. The production and development of acute leukaemia cells occur rapidly and uncontrollably; therefore, if acute leukaemia cells could be identified quickly and effectively, proper treatment and medicine could be delivered. Due to the requirement of prompt and accurate diagnosis of leukaemia, the current study proposes unsupervised pixel segmentation based on clustering algorithms in order to obtain a fully segmented abnormal white blood cell (blast) in acute leukaemia images. In order to obtain the segmented blast, three clustering algorithms, namely k-means, fuzzy c-means and moving k-means, have been applied to the saturation component image. Then, a median filter and a seeded region growing area extraction algorithm have been applied to smooth the region of the segmented blast and to remove large unwanted regions from the image, respectively. Comparisons among the three clustering algorithms are made in order to measure the performance of each on segmenting the blast area. Based on the good sensitivity values obtained, the results indicate that the moving k-means clustering algorithm successfully produces a fully segmented blast region in acute leukaemia images. Hence, the resulting images could be helpful to haematologists for further analysis of acute leukaemia.
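
A minimal sketch of the clustering step, plain k-means on the saturation channel, with a random array standing in for a stained blood-smear image; the fuzzy c-means and moving k-means variants, the median filtering and the seeded region growing stages are omitted:

```python
import numpy as np

def kmeans_1d(values, k=3, iters=20, seed=0):
    """Plain k-means on a one-dimensional feature (saturation)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def saturation(rgb):
    """HSV saturation channel: (max - min) / max per pixel."""
    mx, mn = rgb.max(axis=2), rgb.min(axis=2)
    return np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-12), 0.0)

rng = np.random.default_rng(1)
img = rng.random((64, 64, 3))        # stand-in for a leukaemia slide image
s = saturation(img).ravel()
labels, centers = kmeans_1d(s, k=3)
# Take the most saturated cluster as the blast candidate region
blast_mask = (labels == np.argmax(centers)).reshape(64, 64)
print("candidate blast fraction:", round(blast_mask.mean(), 3))
```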

Keywords: Acute Leukaemia Images, Clustering Algorithms, Image Segmentation, Moving k-Means.

7277 Injection Molding of Inconel718 Parts for Aerospace Application Using Novel Binder System Based On Palm Oil Derivatives

Authors: R. Ibrahim, M. Azmirruddin, M. Jabir, N. Johari, M. Muhamad, A. R. A. Talib

Abstract:

Inconel718 has been widely used as a superalloy in aerospace applications due to its high strength at elevated temperatures, satisfactory oxidation resistance and hot corrosion resistance. In this study, Inconel718 has been fabricated using the Metal Injection Molding (MIM) process, a cost-effective technique for producing small, complex, precision parts in high volume compared with the conventional machining route. In MIM, the binder system is one of the most important criteria for successfully fabricating Inconel718. Even though the binder system is temporary, failure in its selection and removal will affect the final properties of the sintered parts. Therefore, a binder system based on a palm oil derivative, palm stearin, has been formulated and developed to replace the conventional binder system. The rheological behaviour of the mixture of powder and binder system was properly characterized to ensure successful injection in the injection molding machine. After molding, the binder holds the particles in place. The binder system then has to be removed completely in the debinding step, in which solvent debinding and thermal pyrolysis were used to remove the binder system entirely. The debound part is then sintered to attain the required physical and mechanical properties. The results show that the properties of the final sintered parts fulfill the Metal Powder Industries Federation (MPIF) Standard 35 for MIM parts.

Keywords: Binder system, rheological study, metal injection molding, debinding and sintered parts.

7276 A Geographical Spatial Analysis on the Benefits of Using Wind Energy in Kuwait

Authors: Obaid AlOtaibi, Salman Hussain

Abstract:

Wind energy is associated with many geographical factors, including wind speed, climate change, surface topography and environmental impacts, and with several economic factors, most notably the advancement of wind technology and energy prices. It is among the fastest-growing and least expensive methods of generating electricity, and wind energy generation is directly related to the characteristics of the spatial wind. Therefore, the feasibility study for a wind energy conversion system is based on the value of the energy obtained relative to the initial investment and the cost of operation and maintenance. In Kuwait, wind energy is an appropriate choice as a source of energy generation. It can be used for groundwater extraction in agricultural areas such as Al-Abdali in the north and Al-Wafra in the south, in fresh and brackish groundwater fields, or in remote and isolated locations such as border areas and projects away from conventional electricity services, to take advantage of alternative energy, reduce pollutants, and reduce energy production costs. The study covers the State of Kuwait with the exception of the metropolitan area. Climatic data were obtained from the readings of eight distributed monitoring stations affiliated with the Kuwait Institute for Scientific Research (KISR). The data were used to assess the daily, monthly, quarterly, and annual wind energy available for utilization. The researchers applied a suitability model to the analysis using the ArcGIS program: a spatial analysis model that compares multiple locations based on grading weights to choose the most suitable one. The study criteria are average annual wind speed, land use, land topography, distance from the main road networks, and urban areas. According to these criteria, four proposed locations for wind farm projects are selected based on the weights of the degree of suitability (excellent, good, average, and poor). The area representing the most suitable locations, with an excellent rank (4), is 8% of Kuwait's area, distributed as follows: Al-Shqaya, Al-Dabdeba, Al-Salmi (5.22%); Al-Abdali (1.22%); Umm al-Hayman (0.70%); North Wafra and Al-Shaqeeq (0.86%). The study recommends that decision-makers consider proposed location No. 1 (Al-Shqaya, Al-Dabdaba, and Al-Salmi) as the most suitable location for the future development of wind farms in Kuwait, as this location is economically feasible.
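
The weighted-overlay logic of such a suitability model is straightforward to reproduce outside ArcGIS. The rasters, weights and grading thresholds below are illustrative assumptions, not the study's values:

```python
import numpy as np

# Toy four-criterion suitability model: each raster holds scores
# already reclassified to 1 (poor) .. 4 (excellent); weights sum
# to 1. All values here are illustrative, not the study's.
shape = (100, 100)
rng = np.random.default_rng(42)
criteria = {
    "wind_speed":    (0.40, rng.integers(1, 5, shape)),
    "land_use":      (0.25, rng.integers(1, 5, shape)),
    "topography":    (0.20, rng.integers(1, 5, shape)),
    "road_distance": (0.15, rng.integers(1, 5, shape)),
}
suitability = sum(w * r for w, r in criteria.values())

# Grade the weighted sum back onto the four-level scale
levels = np.digitize(suitability, bins=[1.75, 2.5, 3.25])  # 0=poor .. 3=excellent
print(f"excellent cells: {(levels == 3).mean() * 100:.1f}%")
```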

Keywords: Kuwait, renewable energy, spatial analysis, wind energy.

7275 Profile Calculation in Water Phantom of Symmetric and Asymmetric Photon Beam

Authors: N. Chegeni, M. J. Tahmasebi Birgani

Abstract:

Nowadays, in most radiotherapy departments, the commercial treatment planning systems (TPS) used to calculate dose distributions need to be verified; therefore, quick, easy-to-use and low-cost dose distribution algorithms are desirable for testing and verifying the performance of the TPS. In this paper, we put forth an analytical method to calculate the phantom scatter contribution and the depth dose on the central axis based on the equivalent square concept. This method was then generalized to calculate the profiles at any depth and for several field shapes, regular or irregular, under symmetric and asymmetric photon beam conditions. Varian 2100 C/D and Siemens Primus Plus linacs with 6 and 18 MV photon beams were used for the irradiations. Percentage depth doses (PDDs) were measured for a large number of square fields for both energies, and for 45° wedges, which were employed to obtain the profiles at any depth. To assess the accuracy of the calculated profiles, several profile measurements were carried out for some treatment fields. The calculated and measured profiles were compared by gamma-index calculation. All γ-index calculations were based on a 3% dose criterion and a 3 mm distance-to-agreement (DTA) acceptance criterion. The γ values were less than 1 at most points; however, the maximum γ observed was about 1.10, occurring in the penumbra region in most fields and in the central area for the asymmetric fields. This analytical approach provides a generally quick and fairly accurate algorithm for calculating the dose distribution of some treatment fields in conventional radiotherapy.
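
A global gamma index of this kind combines the dose-difference and distance-to-agreement criteria into a single pass/fail value per point. A minimal one-dimensional sketch (the 3%/3 mm criteria match the paper; the profiles are synthetic):

```python
import numpy as np

def gamma_index(x, d_ref, d_eval, dose_crit=0.03, dta_mm=3.0):
    """1-D global gamma: for each reference point, the minimum over
    evaluated points of sqrt((dx/DTA)^2 + (dD/dose_tol)^2), with the
    dose tolerance taken as 3% of the reference maximum."""
    dose_tol = dose_crit * d_ref.max()
    g = np.empty_like(d_ref)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        term = ((x - xi) / dta_mm) ** 2 + ((d_eval - di) / dose_tol) ** 2
        g[i] = np.sqrt(term.min())
    return g

# Synthetic flat-ish beam profiles; the evaluated one is shifted 1 mm
x = np.linspace(-50, 50, 201)            # mm
ref = np.exp(-(x / 30.0) ** 4)
ev = np.exp(-((x - 1.0) / 30.0) ** 4)
g = gamma_index(x, ref, ev)
print(f"pass rate (gamma <= 1): {100 * (g <= 1).mean():.1f}%")
```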

Keywords: Dose distribution, equivalent field, asymmetric field, irregular field.

7274 Dual-Actuated Vibration Isolation Technology for a Rotary System’s Position Control on a Vibrating Frame: Disturbance Rejection and Active Damping

Authors: Kamand Bagherian, Nariman Niknejad

Abstract:

A vibration isolation technology for precise position control of a rotary system powered by two permanent magnet DC (PMDC) motors is proposed, where this system is mounted on an oscillatory frame. To achieve vibration isolation for this system, an active damping and disturbance rejection (ADDR) technology is presented, which introduces the cooperation of a main and an auxiliary PMDC motor controlled by discrete-time sliding mode control (DTSMC) based schemes. The controller of the main actuator tracks a desired position while the auxiliary actuator simultaneously isolates the induced vibration, as its controller follows a torque trend. To determine this torque trend, a combination of two algorithms is introduced by the ADDR technology. The first torque-trend-producing algorithm rejects the disturbance by counteracting the perturbation, estimated using a model-based observer. The second torque trend applies active variable damping to minimize the oscillation of the output shaft. In this work, the presented technology is implemented on a rotary system with a pendulum attached, mounted on a linear actuator simulating an oscillation-transmitting structure. The obtained results illustrate the functionality of the proposed technology.

Keywords: Vibration isolation, position control, discrete-time nonlinear controller, active damping, disturbance tracking algorithm, oscillation transmitting support, stability robustness.
