Search results for: cut-fill method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18611

14471 Hybrid Structure Learning Approach for Assessing the Phosphate Laundries Impact

Authors: Emna Benmohamed, Hela Ltifi, Mounir Ben Ayed

Abstract:

Bayesian Network (BN) is one of the most efficient classification methods. It is widely used in several fields (e.g., medical diagnostics, risk analysis, bioinformatics research). The BN is defined as a probabilistic graphical model that represents a formalism for reasoning under uncertainty. This classification method has a high-performance rate in the extraction of new knowledge from data. The construction of this model consists of two phases: structure learning and parameter learning. For the structure learning problem, the K2 algorithm is one of the representative data-driven algorithms, which is based on a score-and-search approach. In addition, integrating the expert's knowledge in the structure learning process allows the highest accuracy to be obtained. In this paper, we propose a hybrid approach combining an improvement of the K2 algorithm, called the K2 algorithm for Parents and Children search (K2PC), and the expert-driven method for learning the structure of the BN. The evaluation of the experimental results on well-known benchmarks shows that our K2PC algorithm has better performance in terms of correct structure detection. The real application of our model shows its efficiency in the analysis of the impact of phosphate laundry effluents on the watershed in the Gafsa area (southwestern Tunisia).
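
As a rough illustration of the score-and-search idea behind K2-style structure learning (the K2PC variant itself is not reproduced here), the following sketch greedily adds parents to a node while the Bayesian K2 (Cooper-Herskovits) score improves; the data layout, node ordering, and toy dataset are assumptions for the example.

```python
import math
from itertools import product

def k2_score(data, child, parents, arity):
    """Log K2 (Cooper-Herskovits) score of `child` given a candidate parent set.
    `data` is a list of dicts mapping variable name -> discrete state index,
    `arity[v]` is the number of states of variable v."""
    r = arity[child]
    parent_states = list(product(*[range(arity[p]) for p in parents])) or [()]
    score = 0.0
    for cfg in parent_states:
        rows = [row for row in data
                if all(row[p] == s for p, s in zip(parents, cfg))]
        n_ijk = [sum(1 for row in rows if row[child] == k) for k in range(r)]
        n_ij = sum(n_ijk)
        score += math.lgamma(r) - math.lgamma(n_ij + r)       # (r-1)!/(N_ij+r-1)!
        score += sum(math.lgamma(n + 1) for n in n_ijk)       # prod_k N_ijk!
    return score

def k2_parent_search(data, child, candidates, arity, max_parents=2):
    """Greedy K2-style search: keep adding the parent that most improves the score."""
    parents, best = [], k2_score(data, child, [], arity)
    while len(parents) < max_parents:
        scored = [(k2_score(data, child, parents + [c], arity), c)
                  for c in candidates if c not in parents]
        if not scored:
            break
        s, c = max(scored)
        if s <= best:
            break
        parents, best = parents + [c], s
    return parents, best

# Tiny demonstration with two binary variables where B depends on A (hypothetical data)
data = [{"A": 0, "B": 0}] * 8 + [{"A": 1, "B": 1}] * 8 + [{"A": 0, "B": 1}] * 2
arity = {"A": 2, "B": 2}
print(k2_parent_search(data, "B", ["A"], arity))   # A is selected as a parent of B
```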

Keywords: Bayesian network, classification, expert knowledge, structure learning, surface water analysis

Procedia PDF Downloads 110
14470 Alternative Approach to the Machine Vision System Operating for Solving Industrial Control Issue

Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov

Abstract:

The paper considers an approach to a machine vision operating system combined with a grid of light markers. The approach is used to solve several scientific and technical problems, such as measuring the throughput capacity of an apron feeder that delivers coal from a lining return port to a conveyor in the technology of mining with release of high coal to a conveyor, and prototyping an obstacle detection system for an autonomous vehicle. Primary verification of a method for calculating bulk material volume using three-dimensional modelling, with validation and relative error estimation in laboratory conditions, was carried out. A method for calculating the capacity of an apron feeder based on a machine vision system, together with a simplified three-dimensional model of the measured area, is proposed. The proposed method makes it possible to measure the volume of rock mass moved by an apron feeder using machine vision, and thereby to control, with accuracy sufficient for practical application, the volume of coal produced by the feeder during extraction of high coal by longwall complexes with release to a conveyor. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical operations (addition, subtraction, multiplication, and division), which simplifies software development and widens the range of microcontrollers and microcomputers suitable for calculating feeder capacity. A feature of the obstacle detection problem is that obstacles distort the laser grid, which simplifies their detection. The paper presents algorithms for video camera image processing and for controlling an autonomous vehicle model based on a machine vision obstacle detection system. A sample fragment of obstacle detection at the moment the laser grid is distorted is demonstrated.
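
A minimal sketch of the arithmetic the abstract alludes to: feeder productivity in kg/s can be obtained from the material cross-section reconstructed under the light-marker grid, the feeder speed, and the bulk density. The profile heights, spacing, speed, and density below are illustrative assumptions, not the paper's data.

```python
# Illustrative only: productivity of a feeder from a reconstructed material
# cross-section, using just the basic arithmetic operations the abstract mentions.

def cross_section_area(heights_m, spacing_m):
    """Approximate cross-sectional area (m^2) of the material heap from the
    heights (m) measured under each light-marker line, trapezoidal rule."""
    area = 0.0
    for h0, h1 in zip(heights_m[:-1], heights_m[1:]):
        area += 0.5 * (h0 + h1) * spacing_m
    return area

heights = [0.00, 0.12, 0.25, 0.31, 0.27, 0.14, 0.00]   # m, from the laser grid (assumed)
area = cross_section_area(heights, spacing_m=0.10)      # m^2
speed = 0.35         # m/s, apron feeder speed (assumed)
bulk_density = 900   # kg/m^3, loose coal (assumed)

productivity = area * speed * bulk_density              # kg/s
print(f"area = {area:.3f} m^2, productivity = {productivity:.1f} kg/s")
```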

Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport

Procedia PDF Downloads 97
14469 Suppression Subtractive Hybridization Technique for Identification of the Differentially Expressed Genes

Authors: Tuhina-khatun, Mohamed Hanafi Musa, Mohd Rafii Yosup, Wong Mui Yun, Aktar-uz-Zaman, Mahbod Sahebi

Abstract:

The suppression subtractive hybridization (SSH) method is a valuable tool for identifying differentially regulated genes, such as disease-specific or tissue-specific genes important for cellular growth and differentiation. It is a widely used method for separating DNA molecules that distinguish two closely related DNA samples. SSH is one of the most powerful and popular methods for generating subtracted cDNA or genomic DNA libraries. It is based primarily on a suppression polymerase chain reaction (PCR) technique and combines normalization and subtraction in a single procedure. The normalization step equalizes the abundance of DNA fragments within the target population, and the subtraction step excludes sequences that are common to the populations being compared. This dramatically increases the probability of obtaining low-abundance differentially expressed cDNAs or genomic DNA fragments and simplifies analysis of the subtracted library. The SSH technique is applicable to many comparative and functional genetic studies for the identification of disease-specific, developmental, tissue-specific, or other differentially expressed genes, as well as for the recovery of genomic DNA fragments distinguishing the samples under comparison.

Keywords: suppression subtractive hybridization, differentially expressed genes, disease specific genes, tissue specific genes

Procedia PDF Downloads 417
14468 Fuzzy Total Factor Productivity by Credibility Theory

Authors: Shivi Agarwal, Trilok Mathur

Abstract:

This paper proposes a method to measure total factor productivity (TFP) change by credibility theory for fuzzy input and output variables. TFP change has been widely studied with crisp input and output variables; however, in some cases, the input and output data of decision-making units (DMUs) can only be measured with uncertainty. Such data can be represented as linguistic variables characterized by fuzzy numbers. The Malmquist productivity index (MPI) is widely used to estimate TFP change by calculating the total factor productivity of a DMU for different time periods using data envelopment analysis (DEA). The fuzzy DEA (FDEA) model is solved using credibility theory, and the results of FDEA are used to measure the TFP change for fuzzy input and output variables. Finally, numerical examples are presented to illustrate the proposed method. The suggested methodology can be utilized for performance evaluation of DMUs and helps to assess their level of integration. The methodology can also be applied to rank the DMUs, to identify the DMUs that are lagging behind, and to make recommendations on how they can improve their performance to bring them on par with the other DMUs.
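
For orientation, once DEA has produced the four distance-function (efficiency) scores for a DMU in two periods, the Malmquist index and its usual efficiency-change/technical-change decomposition follow directly, as in the sketch below; the numbers are hypothetical and the fuzzy/credibility part of the paper is not reproduced.

```python
import math

def malmquist_index(d_t_xt, d_t_xt1, d_t1_xt, d_t1_xt1):
    """Malmquist productivity index between periods t and t+1.
    d_a_xb = efficiency (distance function) of the period-b observation
    evaluated against the period-a frontier, as returned by DEA."""
    mpi = math.sqrt((d_t_xt1 / d_t_xt) * (d_t1_xt1 / d_t1_xt))
    eff_change = d_t1_xt1 / d_t_xt                              # catching-up effect
    tech_change = math.sqrt((d_t_xt1 / d_t1_xt1) * (d_t_xt / d_t1_xt))  # frontier shift
    return mpi, eff_change, tech_change

# Hypothetical DEA scores for one DMU in two periods
mpi, ec, tc = malmquist_index(d_t_xt=0.82, d_t_xt1=0.95, d_t1_xt=0.78, d_t1_xt1=0.90)
print(f"MPI = {mpi:.3f} (efficiency change {ec:.3f} x technical change {tc:.3f})")
```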

Keywords: chance-constrained programming, credibility theory, data envelopment analysis, fuzzy data, Malmquist productivity index

Procedia PDF Downloads 345
14467 Design Optimisation of a Novel Cross Vane Expander-Compressor Unit for Refrigeration System

Authors: Y. D. Lim, K. S. Yap, K. T. Ooi

Abstract:

In recent years, environmental issues have been a hot topic worldwide; in particular, concern about the global warming effect caused by conventional, non-environmentally friendly refrigerants has increased. Several studies of more energy-efficient and environmentally friendly refrigeration systems have been conducted in order to tackle the issue. In the search for a better refrigeration system, the CO2 refrigeration system has been proposed as a better option. However, the high throttling loss involved during the expansion process of the refrigeration cycle leads to relatively low efficiency, and thus the system is impractical. In order to improve the efficiency of the refrigeration system, it has been suggested to replace the conventional expansion valve with an expander. On this basis, a new type of expander-compressor combined unit, named the Cross Vane Expander-Compressor (CVEC), was introduced to replace the compressor and the expansion valve of a conventional refrigeration system. A mathematical model was developed to calculate the performance of CVEC, and it was found that the machine is capable of reducing the energy consumption of a refrigeration system by as much as 18%. Apart from energy saving, CVEC is also geometrically simpler and more compact. To further improve its efficiency, an optimization study of the device was carried out. In this report, several design parameters of CVEC were chosen as the variables of the optimization study. The optimization was performed in a simulation program using the complex optimization method, which is a direct-search, multi-variable, constrained optimization method. It was found that the shaft radius, one of the main design parameters, was reduced by around 8%, while the inner cylinder radius remained unchanged at its lower limit after optimization. Furthermore, the port sizes were increased to their upper limits after optimization. The changes in these design parameters resulted in a reduction of around 12% in the total frictional loss and a reduction of 4% in power consumption. Eventually, the optimization study resulted in an improvement of 4% in the mechanical efficiency of CVEC and an improvement of 6% in COP.

Keywords: complex optimization method, COP, cross vane expander-compressor, CVEC, design optimization, direct search, energy saving, improvement, mechanical efficiency, multi variables

Procedia PDF Downloads 356
14466 Development of a Systematic Approach to Assess the Applicability of Silver Coated Conductive Yarn

Authors: Y. T. Chui, W. M. Au, L. Li

Abstract:

Recently, wearable electronic textiles have been emerging in today's market and have developed rapidly because, besides the need for clothing for leisure, fashion wear, and personal protection, there is also a high demand for clothing capable of functioning in this electronic age, for example as interactive interfaces, sensual being and tangible touch, social fabric, material witness, and so on. With the requirement that wearable electronic textiles be comfortable, adorable, and easy to care for, conductive yarn becomes one of the most important fundamental elements within the wearable electronic textile, serving for interconnection between different functional units or for creating a functional unit. The properties of conductive yarns from different companies can vary to a large extent. There are vitally important criteria for selecting conductive yarns, which may directly affect the optimization, prospects, applicability, and performance of the final garment. However, according to the literature review, few studies on commercially available conductive yarns focus on assessment methods that allow a scientific, systematic selection of the material under different conditions. Therefore, in this study, a direction for selecting high-quality conductive yarns is given. The idea is to test the stability and reliability of the conductive yarns according to the problems industrialists would experience with the yarns during each manufacturing process. The assessment system is classified into four stages: 1) yarn stage, 2) fabric stage, 3) apparel stage, and 4) end-user stage. Several tests with clear experimental procedures and parameters are suggested for each stage. This assessment method suggests that optimal conductive yarns should be stable in their properties and resistant to various corrosions at every production stage and during use. It is expected that this demonstration of the assessment method can serve as a pilot study that assesses the stability of Ag/nylon yarns systematically under various conditions, i.e., during mass production with textile industry procedures and from the consumer perspective. It aims to assist industrialists in understanding the qualities and properties of conductive yarns and suggests a few important parameters that they should keep in mind to achieve a higher level of suitability, precision, and controllability.

Keywords: applicability, assessment method, conductive yarn, wearable electronics

Procedia PDF Downloads 522
14465 A Mixed Integer Programming Model for Optimizing the Layout of an Emergency Department

Authors: Farhood Rismanchian, Seong Hyeon Park, Young Hoon Lee

Abstract:

During recent years, demand for healthcare services has dramatically increased. As the demand for healthcare services increases, so does the necessity of constructing new healthcare buildings and redesigning and renovating existing ones. Increasing demand necessitates the use of optimization techniques to improve the overall service efficiency in healthcare settings. However, the high complexity of care processes remains the major challenge to accomplishing this goal. This study proposes a method based on process mining results to address the high complexity of care processes and to find the optimal layout of the various medical centers in an emergency department (ED). The ProM framework is used to discover clinical pathway patterns and relationships between activities. The sequence clustering plug-in is used to remove infrequent events and to derive the process model in the form of a Markov chain. The process mining results serve as an input for the next phase, which consists of the development of the optimization model. Comparison of the current ED design with the one obtained from the proposed method indicates that a carefully designed layout can significantly decrease the distances that patients must travel.
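
To make the link between the two phases concrete, a toy sketch is given below: a Markov-chain-derived flow matrix and a distance matrix combine into the travel-distance objective that a layout model minimizes. This is not the authors' mixed integer program; the tiny instance is solved by brute force and all numbers are hypothetical.

```python
from itertools import permutations

# Hypothetical patient-flow frequencies between 4 ED units (from a Markov model)
flow = [[0, 30, 10, 5],
        [0,  0, 25, 10],
        [0,  0,  0, 20],
        [0,  0,  0,  0]]

# Walking distances (m) between 4 candidate locations
dist = [[0, 10, 20, 30],
        [10, 0, 12, 22],
        [20, 12, 0, 15],
        [30, 22, 15, 0]]

def total_travel(assign):
    """assign[u] = location index of unit u; objective of a QAP-style layout model."""
    return sum(flow[u][v] * dist[assign[u]][assign[v]]
               for u in range(4) for v in range(4))

# Brute force over all assignments (fine for a toy instance; a MIP solver scales further)
best = min(permutations(range(4)), key=total_travel)
print("best layout:", best, "total patient travel:", total_travel(best))
```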

Keywords: mixed integer programming, facility layout problem, process mining, healthcare operation management

Procedia PDF Downloads 327
14464 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU

Authors: Ali Abdul Kadhim, Fue Lien

Abstract:

Solid particle distribution on an impingement surface has been simulated utilizing a graphical processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) model. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, considering all the external forces. Previous models distribute particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes these deficiencies and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model for simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature. The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re=10,000. The simulations were conducted for L/D=2, 4 and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 against the serial code running on a single CPU.
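
A much-reduced 2D illustration of the CA transport idea (moving particle counts to neighbour nodes with probabilities weighted by the local velocity and drawn stochastically) is sketched below. It is an assumption-laden toy, not the D3Q27 GPU implementation described above; the grid size, velocity field, and direction set are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2D grid: particle counts and a prescribed fluid velocity field
N = rng.integers(0, 20, size=(16, 16))                             # particles per node
u = np.stack(np.meshgrid(np.linspace(-1, 1, 16),
                         np.linspace(-1, 1, 16)), axis=-1)         # (x, y) velocity

# Four lattice directions (a D2Q5-like set without the rest direction, for brevity)
dirs = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

def ca_step(N, u):
    """One probabilistic transport step: each particle jumps to a neighbour
    with probability proportional to max(0, u . e_k); otherwise it stays."""
    new = np.zeros_like(N)
    for ix in range(N.shape[0]):
        for iy in range(N.shape[1]):
            w = np.maximum(0.0, dirs @ u[ix, iy])      # direction weights
            p_move = np.concatenate([w, [1.0]])        # last slot = stay put
            p_move /= p_move.sum()
            counts = rng.multinomial(N[ix, iy], p_move)
            for k, (dx, dy) in enumerate(dirs):
                new[(ix + dx) % N.shape[0], (iy + dy) % N.shape[1]] += counts[k]
            new[ix, iy] += counts[-1]                  # particles that stay
    return new

N = ca_step(N, u)
print("total particles conserved:", N.sum())
```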

Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model

Procedia PDF Downloads 191
14463 Mixed Number Algebra and Its Application

Authors: Md. Shah Alam

Abstract:

Mushfiq Ahmad has defined a Mixed Number, which is the sum of a scalar and a Cartesian vector. He has also defined the elementary operations on Mixed Numbers, i.e., the norm of a Mixed Number, the product of two Mixed Numbers, the identity element, and the inverse. It has been observed that Mixed Number algebra is consistent with Pauli matrix algebra and is a handy tool for working with the Dirac electron theory. Its use as a mathematical method in physics has been studied. (1) We have applied Mixed Numbers in quantum mechanics: Mixed Number versions of the displacement operator, the vector differential operator, and the angular momentum operator have been developed, and the Mixed Number method has also been applied to the Klein-Gordon equation. (2) We have applied Mixed Numbers in electrodynamics: Mixed Number versions of Maxwell's equations, the electric and magnetic field quantities, and the Lorentz force have been found. (3) An associative transformation of Mixed Numbers fulfilling the Lorentz invariance requirement has been developed. (4) We have applied Mixed Number algebra as an extension of complex numbers. Mixed Numbers and the quaternions have an isomorphic correspondence, but they differ in algebraic details: the multiplication of unit Mixed Numbers and the multiplication of unit quaternions are different. Since Mixed Numbers have properties similar to those of Pauli matrix algebra, Mixed Number algebra is a more convenient tool for dealing with the Dirac equation.
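
Since the abstract states that Mixed Number algebra is consistent with Pauli matrix algebra, a minimal sketch of the product can be written from the Pauli identity (sigma.A)(sigma.B) = A.B + i sigma.(A x B). The rule and norm used below are therefore assumptions modelled on that identity, not a reproduction of Ahmad's definitions.

```python
import numpy as np

class MixedNumber:
    """(scalar, 3-vector) pair with a product modelled on the Pauli identity
    (a + sigma.A)(b + sigma.B) = ab + A.B + sigma.(aB + bA + i A x B).
    Complex components are allowed so the cross-product term can be carried."""
    def __init__(self, s, v):
        self.s = complex(s)
        self.v = np.asarray(v, dtype=complex)

    def __mul__(self, other):
        s = self.s * other.s + np.dot(self.v, other.v)
        v = self.s * other.v + other.s * self.v + 1j * np.cross(self.v, other.v)
        return MixedNumber(s, v)

    def norm_sq(self):
        # Modelled on the Pauli-algebra invariant s^2 - v.v (an assumption here)
        return self.s * self.s - np.dot(self.v, self.v)

    def __repr__(self):
        return f"MixedNumber({self.s}, {self.v})"

a = MixedNumber(1.0, [1.0, 0.0, 0.0])
b = MixedNumber(2.0, [0.0, 1.0, 0.0])
print(a * b)   # scalar part 2, vector part [2, 1, 1j]
```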

Keywords: mixed number, special relativity, quantum mechanics, electrodynamics, pauli matrix

Procedia PDF Downloads 343
14462 Finite Element Method Analysis of a Modified Rotor 6/4 Switched Reluctance Motor and Comparison with a Brushless Direct Current Motor in Pan-Tilt Applications

Authors: Umit Candan, Kadir Dogan, Ozkan Akin

Abstract:

In this study, the use of a modified rotor 6/4 Switched Reluctance Motor (SRM) and a Brushless Direct Current Motor (BLDC) in pan-tilt systems is compared. Pan-tilt systems are critical mechanisms that enable the precise orientation of cameras and sensors, and their performance largely depends on the characteristics of the motors used. The aim of the study is to determine how the performance of the SRM can be improved through rotor modifications and how these improvements can compete with BLDC motors. Using Finite Element Method (FEM) analyses, the design characteristics and magnetic performance of the 6/4 Switched Reluctance Motor are examined in detail. The modified SRM is found to offer increased torque capacity and efficiency while standing out with its simple construction and robustness. FEM analysis results of SRM indicate that considering its cost-effectiveness and performance improvements achieved through modifications, the SRM is a strong alternative for certain pan-tilt applications. This study aims to provide engineers and researchers with a performance comparison of the modified rotor 6/4 SRM and BLDC motors in pan-tilt systems, helping them make more informed and effective motor selections.

Keywords: reluctance machines, switched reluctance machines, pan-tilt application, comparison, FEM analysis

Procedia PDF Downloads 26
14461 Information Theoretic Approach for Beamforming in Wireless Communications

Authors: Syed Khurram Mahmud, Athar Naveed, Shoaib Arif

Abstract:

Beamforming is a signal processing technique extensively utilized in wireless communications and radar for intensifying the desired signal and minimizing interference through spatial selectivity. In this paper, we present a method for calculating optimal weight vectors for a smart antenna array to achieve a directive pattern during transmission and selective reception in an interference-prone environment. In the proposed scheme, Mutual Information (MI) extrema are evaluated through an energy-constrained objective function, which is based on a-priori information about the interference source and the desired array factor. Signal to Interference plus Noise Ratio (SINR) performance is evaluated for both transmission and reception. In our scheme, MI is presented as an index to identify the trade-off between information gain, SINR, illumination time, and spatial selectivity in an energy-constrained optimization problem. The employed method has lower computational complexity, which is demonstrated through comparative analysis with conventional methods. MI-based beamforming enhances signal integrity in a degraded environment while reducing computational complexity and correlating key performance indicators.
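
For readers unfamiliar with the quantities being optimized, the sketch below computes a conventional baseline: a uniform-linear-array steering vector, MVDR-style weights against a single interferer, and the resulting output SINR. This is a standard reference scheme and not the MI-based method of the paper; the array geometry, angles, and powers are assumed.

```python
import numpy as np

def steering(n_elem, spacing_wl, theta_deg):
    """Steering vector of an n-element uniform linear array (spacing in wavelengths)."""
    k = 2 * np.pi * spacing_wl * np.arange(n_elem)
    return np.exp(1j * k * np.sin(np.deg2rad(theta_deg)))

n, d = 8, 0.5
a_sig = steering(n, d, 10)        # desired direction 10 deg (assumed)
a_int = steering(n, d, -40)       # interferer at -40 deg (assumed)
p_int, p_noise = 10.0, 1.0

# Interference-plus-noise covariance and MVDR-style weights (baseline, not the MI method)
R_in = p_int * np.outer(a_int, a_int.conj()) + p_noise * np.eye(n)
w = np.linalg.solve(R_in, a_sig)
w /= (a_sig.conj() @ w)           # unit gain towards the desired direction

sinr = np.abs(w.conj() @ a_sig) ** 2 / np.real(w.conj() @ R_in @ w)
print(f"output SINR = {10 * np.log10(sinr):.1f} dB")
```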

Keywords: beamforming, interference, mutual information, wireless communications

Procedia PDF Downloads 263
14460 Designing an Effective Accountability Model for Islamic Azad University Using the Qualitative Approach of Grounded Theory

Authors: Davoud Maleki, Neda Zamani

Abstract:

The present study aims to explore an effective accountability model for Islamic Azad University using the qualitative approach of grounded theory. The data of this study were obtained from semi-structured interviews with 25 professors and scholars at the Islamic Azad University of Tehran, who were selected by the theoretical sampling method. In the data analysis, the stepwise method and the Strauss and Corbin (1992) analytical method were used. After identification of the main component (balanced response to stakeholders' needs) and using it to bring the categories together, along with expressions and ideas representing the relationships between the main component and the subcomponents, the revealed components were categorized into the six dimensions of the paradigm model, with the relationships among them, including causal conditions (7 components), the main component (balanced response to stakeholders' needs), strategies (5 components), environmental conditions (5 components), intervention features (4 components), and consequences (3 components). The research findings provide an exploratory model describing the relationships between causal conditions, the main component, accountability strategies, environmental conditions, university environmental features, and consequences.

Keywords: accountability, effectiveness, Islamic Azad University, grounded theory

Procedia PDF Downloads 71
14459 Hydraulic Characteristics of Mine Tailings by Metaheuristics Approach

Authors: Akhila Vasudev, Himanshu Kaushik, Tadikonda Venkata Bharat

Abstract:

Large quantities of mine tailings are produced every year as part of the extraction of phosphates, gold, copper, and other materials. Mine tailings have high water content and very slow dewatering behavior. The efficient design of tailings dams and the economical disposal of these slurries require knowledge of the tailings' consolidation behavior. Large-strain consolidation theory closely predicts the self-weight consolidation of these slurries, as the theory enforces conservation of mass and momentum and treats the hydraulic conductivity as a function of the void ratio. Classical laboratory techniques, such as the settling column test and the seepage consolidation test, are expensive and time-consuming for estimating the variation of hydraulic conductivity with void ratio. Inverse estimation of the constitutive relationships from measured settlement versus time curves is therefore explored. In this work, inverse analysis based on metaheuristic techniques is explored for predicting the hydraulic conductivity parameters of mine tailings from the base excess pore water pressure dissipation curve and the initial conditions of the tailings. The proposed inverse model uses the particle swarm optimization (PSO) algorithm, which is based on the social behavior of animals searching for food sources. The finite-difference numerical solution of the forward analytical model is integrated with the PSO algorithm to solve the inverse problem. The method is tested on synthetic base excess pore pressure dissipation curves generated using the finite difference method. The effectiveness of the method is verified using a base excess pore pressure dissipation curve obtained from a settling column experiment and further confirmed through comparison with available predicted hydraulic conductivity parameters.
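
A minimal sketch of the inverse-analysis loop is given below: a generic particle swarm minimizes the misfit between a measured curve and a forward model. The forward model here is a stand-in exponential decay with made-up parameters, not the finite-difference large-strain consolidation solver used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(params, t):
    """Stand-in forward model: u(t) = u0 * exp(-k * t). In the actual inverse
    problem this would be the finite-difference consolidation solver."""
    u0, k = params
    return u0 * np.exp(-k * t)

t_obs = np.linspace(0, 50, 26)
u_obs = forward([100.0, 0.08], t_obs) + rng.normal(0, 1.0, t_obs.size)   # synthetic data

def misfit(params):
    return np.sum((forward(params, t_obs) - u_obs) ** 2)

# Minimal particle swarm: positions x, velocities v, personal/global bests
lo, hi = np.array([10.0, 0.001]), np.array([200.0, 0.5])
n_part, n_iter, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
x = rng.uniform(lo, hi, (n_part, 2))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([misfit(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_part, 2)), rng.random((n_part, 2))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    f = np.array([misfit(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("recovered (u0, k):", gbest)
```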

Keywords: base excess pore pressure, hydraulic conductivity, large strain consolidation, mine tailings

Procedia PDF Downloads 122
14458 Preparation of Biodiesel by Three Step Method Followed Purification by Various Silica Sources

Authors: Chanchal Mewar, Shikha Gangil, Yashwant Parihar, Virendra Dhakar, Bharat Modhera

Abstract:

Biodiesel was prepared from Karanja oil by a three-step method: saponification, acidification, and esterification. In the first step, saponification was carried out on Karanja oil in the presence of methanol and KOH or NaOH. In the second step, acidification, various acids such as H3PO4, HCl, and H2SO4 were used as acid catalysts. In the third step, esterification was followed by purification with various silica sources, namely Ludox (colloidal silicate) and fumed silica gel. It was found that there was no significant change in density, kinematic viscosity, iodine number, acid value, saponification number, flash point, cloud point, pour point, or cetane number after purification with these adsorbents. The objective of this research is the comparison of the different adsorbents used for the purification of biodiesel. Ludox (colloidal silicate) and fumed silica gel were used as adsorbents for the removal of glycerin from biodiesel and to evaluate the purity of the biodiesel. Furthermore, this study also compared the results with distilled water washing. It was observed that Ludox, fumed silica gel, and distilled water produced yields of about 93%, 91%, and 83%, respectively. The highest yield was obtained with Ludox at 100 oC using H3PO4 as the acid catalyst and NaOH as the base catalyst with methanol, at a 3:1 alcohol-to-oil molar ratio, in 90 min.

Keywords: biodiesel, three step method, purification, silica sources

Procedia PDF Downloads 489
14457 Structural and Magnetic Properties of Calcium Mixed Ferrites Prepared by Co-Precipitation Method

Authors: Sijo S. Thomas, S. Hridya, Manoj Mohan, Bibin Jacob, Hysen Thomas

Abstract:

Ferrites are iron based oxides with technologically significant magnetic properties and have widespread applications in medicine, technology, and industry. There has been a growing interest in the study of magnetic, electrical and structural properties of mixed ferrites. In the present work, structural and magnetic properties of Nickel and Calcium substituted Fe₃O₄ nanoparticles were investigated. NiₓCa₁₋ₓFe₂O₄ nanoparticles (x = 0, 0.1, 0.3, 0.5, 0.7, 0.9) were synthesized by chemical co-precipitation method and the samples were subsequently sintered at 900°C. The magnetic and structural properties of NiₓCa₁₋ₓFe₂O₄ were investigated using Vibrating Sample Magnetometer and X-Ray diffraction. The XRD results revealed that the synthesized particles have nanometer size and it varies from 46-72 nm as the calcium concentration diminishes. The variation is explained based on the increase in the reaction rate with Ni concentration which favors the formation of ultrafine particles of mixed ferrites. VSM results show pure CaFe₂O₄ exhibit paramagnetic behavior with low saturation value. As the concentration of Ca decreases, a transition occurs from paramagnetic state to ferromagnetic state. When the concentration of Ni becomes dominant, magnetic saturation, coercivity, and retentivity become high, indicating near ferromagnetic behavior of the compound.

Keywords: co-precipitation, ferrites, magnetic behavior, structure

Procedia PDF Downloads 227
14456 Four-Electron Auger Process for Hollow Ions

Authors: Shahin A. Abdel-Naby, James P. Colgan, Michael S. Pindzola

Abstract:

A time-dependent close-coupling method is developed to calculate total, double, and triple autoionization rates for hollow atomic ions of four-electron systems. This work was motivated by recent observations of the four-electron Auger process in near K-edge photoionization of C+ ions. The time-dependent close-coupled equations are solved using lattice techniques to obtain a discrete representation of the radial wave functions and all operators on a four-dimensional grid with uniform spacing. Initial excited states are obtained by relaxation of the Schrödinger equation in imaginary time using a Schmidt orthogonalization method involving interior subshells. The radial wave function grids are partitioned over the cores of a massively parallel computer, which is essential due to the large memory required to store the coupled wave functions and the long run times needed to reach convergence of the ionization process. Total, double, and triple autoionization rates are obtained by propagating the time-dependent close-coupled equations in real time, using integration over bound and continuum single-particle states. These states are generated by matrix diagonalization of one-electron Hamiltonians. The total autoionization rate for each L excited state is found to be slightly above the single autoionization rate for the excited configuration obtained using configuration-average distorted-wave theory. As expected, we find the double and triple autoionization rates to be much smaller than the total autoionization rates. Future work can extend this approach to study electron-impact triple ionization of atoms or ions. This work was supported in part by grants from the American University of Sharjah and the US Department of Energy. Computational work was carried out at the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, USA.

Keywords: hollow atoms, autoionization, auger rates, time-dependent close-coupling method

Procedia PDF Downloads 140
14455 Research of the Rotation Magnetic Field Current Driven Effect on Pulsed Plasmoid Acceleration of Electric Propulsion

Authors: X. F. Sun, X. D. Wen, L. J. Liu, C. C. Wu, Y. H. Jia

Abstract:

The field-reversed configuration plasmoid, with its closed magnetic field, has potential for large-thrust, high-power propulsion missions such as deep space exploration due to its high plasma density and large azimuthal current, making it a strong candidate for next-generation electric propulsion technology. Moreover, being electrodeless, it also has a long lifetime. Thus, research on this electric propulsion technology is necessary. The plasmoid is formed and accelerated by applying the rotation magnetic field (RMF) method, and the essence of this technology lies in the generation of azimuthal electron currents driven by the RMF. Therefore, the effect of the RMF-driven current on the plasmoid acceleration efficiency is a key concern. In this paper, the influences of the penetration of the RMF into the plasma, of the relation of the frequency and amplitude of the input RF power to the current strength, and of the RMF antenna configuration on the plasmoid acceleration efficiency are investigated by a two-fluid numerical simulation method. The results show that the radio frequency and input power have a remarkable influence on the formation and acceleration of the plasmoid. These results will provide useful guidance for the development and optimized design of field-reversed configuration plasmoid thrusters.

Keywords: rotation magnetic field, current driven, plasma penetration, electric propulsion

Procedia PDF Downloads 102
14454 Bioconcentration Analysis of Iodine Species in Seaweed (Eucheuma cottonii) from Maluku Marine as Alternative Food Source

Authors: Yeanchon H. Dulanlebit, Nikmans Hattu, Gloria Bora

Abstract:

Seaweed is a type of macroalga that is a good source of iodine and has been widely used as a food and nutrition supplement. One of the iodine species found in marine plants is iodate. An analysis of iodate in seaweed (Eucheuma cottonii) from the coastal areas of Maluku has been carried out. The determination is done using a spectrophotometric method: iodate in the sample is reduced by excess potassium iodide in an acidic solution and then reacted with starch to form a blue complex. The study found that the wavelength of maximum absorption for the determination of the iodate species by spectrophotometry is 570 nm, and this optimum value is used in this research. The iodate contents of seawater from the coastal areas of Ambon Island, Western Seram, and Southeast Maluku are 0.2655, 0.2719, and 0.1760 mg/L, respectively, while those of seaweed from Ambon Island, Western Seram, Southeast Maluku-Taar, Ohoidertawun, and Wab are 6.3122, 6.3293, 6.2333, 3.7406, and 4.4207 mg/kg dry weight. The bioconcentration (enrichment) factor of iodate in seaweed (Eucheuma cottonii) differs among the three sampling clusters: 23.78 for the coastal area of Ambon Island, 23.28 for Western Seram, and 27.26 for Southeast Maluku.
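
The bioconcentration factor is simply the ratio of the tissue concentration to the seawater concentration; the short calculation below reproduces the reported values from the figures in the abstract (mg/kg dry weight over mg/L), averaging the three Southeast Maluku sites. The grouping of sites into clusters is inferred from the abstract.

```python
# Bioconcentration factor = iodate in seaweed (mg/kg dw) / iodate in seawater (mg/L)
seawater = {"Ambon Island": 0.2655, "Western Seram": 0.2719, "Southeast Maluku": 0.1760}
seaweed = {"Ambon Island": [6.3122], "Western Seram": [6.3293],
           "Southeast Maluku": [6.2333, 3.7406, 4.4207]}   # Taar, Ohoidertawun, Wab

for cluster, values in seaweed.items():
    bcf = sum(v / seawater[cluster] for v in values) / len(values)
    print(f"{cluster}: BCF ~ {bcf:.2f}")
# -> about 23.78, 23.28 and 27.26, matching the reported enrichment factors
```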

Keywords: bioconcentration, eucheuma cottonii, iodate, iodine, seaweed

Procedia PDF Downloads 197
14453 Development of Numerical Model to Compute Water Hammer Transients in Pipe Flow

Authors: Jae-Young Lee, Woo-Young Jung, Myeong-Jun Nam

Abstract:

Water hammer is a hydraulic transient problem commonly encountered in the penstocks of hydropower plants. A numerical model was developed to estimate the transient behavior of pressure waves in pipe systems. A computational algorithm is proposed to model the water hammer phenomenon in a pipe system with a pump shutdown at midstream and a sudden valve closure downstream. To predict the pressure head and flow velocity as functions of time resulting from rapid valve closure and pump shutdown, two boundary conditions accounting for pump operation and valve control are implemented at the ends of the pipe as specified equations for the pressure head and flow velocity, based on the method of characteristics. It is shown that the transient flow effects determine the need for protection devices, such as surge tanks, surge relief valves, or air valves, at various points in the system against overpressure and low pressure. The proposed transient model produced reasonably good results for pipeline systems and can be used as an efficient tool for the safety assessment of hydropower plants against water hammer.
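
The sketch below shows the interior-node update of the standard method of characteristics (the C+ and C- compatibility equations) with assumed pipe data and arbitrary initial conditions; the pump and valve boundary conditions that the paper focuses on are omitted.

```python
import numpy as np

# Assumed pipe and fluid data (illustrative, not from the paper)
a, g = 1000.0, 9.81           # wave speed (m/s), gravity (m/s^2)
L, D, f = 500.0, 0.5, 0.02    # pipe length (m), diameter (m), Darcy friction factor
A = np.pi * D**2 / 4
n = 21                        # number of computational nodes
dx = L / (n - 1)
dt = dx / a                   # Courant condition Cr = 1
B = a / (g * A)
R = f * dx / (2 * g * D * A**2)

def interior_update(H, Q):
    """One MOC time step for the interior nodes, from the C+ and C- equations.
    Boundary nodes (pump, valve) need their own equations and are left unchanged here."""
    Hn, Qn = H.copy(), Q.copy()
    for i in range(1, n - 1):
        cp = H[i - 1] + B * Q[i - 1] - R * Q[i - 1] * abs(Q[i - 1])   # along C+
        cm = H[i + 1] - B * Q[i + 1] + R * Q[i + 1] * abs(Q[i + 1])   # along C-
        Hn[i] = 0.5 * (cp + cm)
        Qn[i] = (cp - cm) / (2 * B)
    return Hn, Qn

H = np.full(n, 100.0)   # initial head (m), assumed
Q = np.full(n, 0.2)     # initial discharge (m^3/s), assumed
H, Q = interior_update(H, Q)
print(H[n // 2], Q[n // 2])
```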

Keywords: water hammer, hydraulic transient, pipe systems, characteristics method

Procedia PDF Downloads 123
14452 A PROMETHEE-BELIEF Approach for Multi-Criteria Decision Making Problems with Incomplete Information

Authors: H. Moalla, A. Frikha

Abstract:

Multi-criteria decision aid methods consider decision problems in which numerous alternatives are evaluated on several criteria, and they are ordinarily designed for perfect information. In practice, however, this information requirement is clearly too strict: the imperfect data provided by more or less reliable decision makers usually affect decision results, since any decision is closely linked to the quality and availability of information. In this paper, a PROMETHEE-BELIEF approach is proposed to support multi-criteria decisions based on incomplete information. The approach handles problems with an incomplete decision matrix and unknown weights within the PROMETHEE method. On the basis of belief function theory, our approach first determines the distributions of belief masses based on PROMETHEE's net flows and then calculates the weights. Subsequently, it aggregates the mass distributions associated with each criterion using Murphy's modified combination rule in order to infer a global belief structure. The final ranking of actions is obtained via the pignistic probability transformation. A real-world case study concerning the location of a treatment center for infectious healthcare waste in central Tunisia is used to illustrate the detailed process of the PROMETHEE-BELIEF approach.
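
For reference, the classical PROMETHEE net flows that the belief masses are built on can be computed as below, using the simple "usual" preference function and equal, assumed weights; the belief-function extension itself is not reproduced.

```python
import numpy as np

# Hypothetical evaluation matrix: 4 alternatives x 3 criteria (all to maximize)
X = np.array([[7.0, 60.0, 3.0],
              [8.5, 55.0, 4.0],
              [6.0, 70.0, 2.5],
              [7.5, 65.0, 3.5]])
weights = np.array([0.4, 0.3, 0.3])   # assumed; the paper infers weights instead

def net_flows(X, weights):
    """PROMETHEE net flows with the 'usual' preference function P(d) = 1 if d > 0 else 0."""
    n = X.shape[0]
    pi = np.zeros((n, n))                       # aggregated preference indices
    for a in range(n):
        for b in range(n):
            if a != b:
                pref = (X[a] > X[b]).astype(float)
                pi[a, b] = np.dot(weights, pref)
    phi_plus = pi.sum(axis=1) / (n - 1)         # leaving flow
    phi_minus = pi.sum(axis=0) / (n - 1)        # entering flow
    return phi_plus - phi_minus                 # net flow, used to build belief masses

print("net flows:", net_flows(X, weights))
```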

Keywords: belief function theory, incomplete information, multiple criteria analysis, PROMETHEE method

Procedia PDF Downloads 150
14451 Efficiency in Islamic Banks: Some Empirical Evidences in Indonesian Finance Market

Authors: Ahmed Sameer El Khatib

Abstract:

The aim of the present paper is to examine the revenue efficiency of the Indonesian Islamic banking sector. The study also seeks to investigate the potential internal (bank specific) and external (macroeconomic) determinants that influence the revenue efficiency of Indonesian domestic Islamic banks. We employ the whole gamut of domestic and foreign Islamic banks operating in the Indonesian Islamic banking sector during the period of 2009 to 2018. The level of revenue efficiency is computed by using the Data Envelopment Analysis (DEA) method. Furthermore, we employ a panel regression analysis framework based on the Ordinary Least Square (OLS) method to examine the potential determinants of revenue efficiency. The results indicate that the level of revenue efficiency of Indonesian domestic Islamic banks is lower compared to their foreign Islamic bank counterparts. We find that bank market power, liquidity, and management quality significantly influence the improvement in revenue efficiency of the Indonesian domestic Islamic banks during the period under study. By calculating these efficiency concepts, we can observe the efficiency levels of the domestic and foreign Islamic banks. In addition, by comparing both cost and profit efficiency, we can identify the influence of the revenue efficiency on the banks’ profitability.
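
A small sketch of a revenue-efficiency DEA model is given below: for each bank, maximize the revenue attainable with its observed inputs under constant returns to scale, and take RE = observed revenue / maximum revenue. The bank data and output prices are made up, and this generic model is only an assumption about the kind of formulation used, not the paper's exact specification.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical bank data: 2 inputs (deposits, staff cost), 2 outputs (financing, fee income)
X = np.array([[100.0, 12.0], [80.0, 9.0], [120.0, 15.0], [60.0, 8.0]])   # inputs
Y = np.array([[ 90.0,  8.0], [70.0, 6.0], [100.0, 12.0], [55.0, 4.0]])   # outputs
prices = np.array([1.0, 2.5])   # assumed output prices

def revenue_efficiency(o):
    """RE of DMU o = observed revenue / maximum revenue obtainable with its inputs
    (CRS revenue-maximization model; variables are [y_1..y_s, lambda_1..lambda_n])."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-prices, np.zeros(n)])            # maximize p.y
    A_in = np.hstack([np.zeros((m, s)), X.T])             # sum_j lambda_j x_ij <= x_io
    A_out = np.hstack([np.eye(s), -Y.T])                  # y_r <= sum_j lambda_j y_rj
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([X[o], np.zeros(s)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    return float(prices @ Y[o]) / float(-res.fun)

for o in range(4):
    print(f"bank {o}: revenue efficiency = {revenue_efficiency(o):.3f}")
```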

Keywords: Islamic finance, Islamic banks, revenue efficiency, data envelopment analysis

Procedia PDF Downloads 228
14450 Optimisation of Nitrogen as a Protective Gas via the Alternating Shielding Gas Technique in the Gas Metal Arc Welding Process

Authors: M. P. E. E Silva, A. M. Galloway, A. I. Toumpis

Abstract:

An increasing concern exists in the welding industry regarding faster joining processes. Methods such as alternating between shielding gases such as Ar, CO₂, and He have been able to provide improved penetration of the joint, reduced heat transfer to the workpiece, and increased travel speeds of the welding torch. Nitrogen as a shielding gas is generally not desirable due to its reactive behavior within the arc plasma, being absorbed by the molten pool during the welding process. Below certain amounts, nitrogen is not harmful; however, the nitrogen threshold is reduced during solidification of the joint, and if its subsequent desorption is not completed in time, gas entrapment and blowhole formation may occur. The present study expanded the use of the alternating shielding gas method in the gas metal arc welding (GMAW) process by alternately supplying Ar/5%N₂ and He. Improvements were obtained in terms of joint strength and grain refinement. Microstructural characterization showed porosity-free welds with reduced inclusion formation, while mechanical tests such as tensile and bend tests confirmed the reinforcement of the joint by the addition of nitrogen. Additionally, significant reductions in the final distortion of the workpiece were found after the welding procedure, as well as smaller heat-affected zones and lower weld temperatures.

Keywords: alternating shielding gas method, GMAW, grain refinement, nitrogen, porosity, mechanical testing

Procedia PDF Downloads 98
14449 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire

Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan

Abstract:

Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on a somewhat outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, that is, the residual cross-section of uncharred timber reduced additionally by a so-called zero strength layer. For standard fire exposure, Eurocode 5 gives a fixed value of the zero strength layer, i.e., 7 mm, while for non-standard parametric fires no additional comments or recommendations on the zero strength layer are given. Designers therefore often apply the adopted 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam exposed to a large number of parametric fire curves. The zero strength layer and charring rates are determined on the basis of numerical simulations performed with a recently developed advanced two-step computational model. The first step comprises a hygro-thermal model, which predicts the temperature, moisture, and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner's kinematically exact beam model and accounts for the membrane, shear, and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, according to Eurocode 5, assumed to have a fixed temperature of around 300°C. Based on the performed study and observations, improved values of the charring rates and a new thickness of the zero strength layer for parametric fires are determined. The reduced cross-section method is thus substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and key input parameters of the parametric fire curve (for instance, the opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future.
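
For context, the reduced cross-section method of EN 1995-1-2 for standard fire exposure can be sketched as below, with the notional charring rate, d0 = 7 mm, and k0 as in the code; the member size, charring rate, and exposure pattern are assumed, and the paper's improved parametric-fire values are not reproduced.

```python
def effective_section(b_mm, h_mm, t_min, beta_n=0.7, d0=7.0, sides=3):
    """Reduced cross-section method of EN 1995-1-2 for standard fire exposure (sketch):
    notional char depth d_char,n = beta_n * t, effective depth d_ef = d_char,n + k0 * d0
    with d0 = 7 mm and k0 = min(t/20, 1). A beam charred on 3 sides (both vertical
    faces and the soffit) is assumed."""
    k0 = min(t_min / 20.0, 1.0)
    d_ef = beta_n * t_min + k0 * d0
    b_fi = b_mm - 2 * d_ef if sides >= 3 else b_mm - d_ef
    h_fi = h_mm - d_ef
    return max(b_fi, 0.0), max(h_fi, 0.0)

# Example: 140 x 300 mm glulam beam (beta_n ~ 0.7 mm/min, assumed) after 30 and 60 minutes
for t in (30, 60):
    b_fi, h_fi = effective_section(140, 300, t)
    print(f"t = {t} min: effective section ~ {b_fi:.0f} x {h_fi:.0f} mm")
```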

Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer

Procedia PDF Downloads 153
14448 Studying the Impact of Soil Characteristics in Displacement of Retaining Walls Using Finite Element

Authors: Mojtaba Ahmadabadi, Akbar Masoudi, Morteza Rezai

Abstract:

In this paper, the effect of soil and wall characteristics on the displacement of retaining walls was investigated using the finite element method. Thirty-two different models with different parameters were studied. These analyses can calculate the displacement at any height of the wall for frictional-cohesive soils. The main purpose of this research is to determine the soil characteristics that are most effective in reducing the wall displacement. Comparing the different models showed that, overall, increases in the internal friction angle, the friction angle between the soil and the wall, and the modulus of elasticity reduce the wall displacement, whereas an increase in the unit weight of the soil increases it. Based on the results, all wall displacements were of the overturning type, and the backfill soil was bulging. The greatest effect on reducing the wall displacement is seen for the internal friction angle and the friction angle between the soil and the wall. One of the advantages of this study is that it takes into account all the parameters of the soil and the wall, as well as the displacement distribution in the wall and the backfill soil. Using the finite element method and considering all soil parameters, we investigated the impact of the soil parameters on the wall displacement, with the aim of identifying the conditions that best reduce the wall displacement and of describing the displacement distribution in the wall and the soil.

Keywords: retaining wall, fem, soil and wall interaction, angle of internal friction of the soil, wall displacement

Procedia PDF Downloads 377
14447 Seismic Response of Belt Truss System in Regular RC Frame Structure at the Different Positions of the Storey

Authors: Mohd Raish Ansari, Tauheed Alam Khan

Abstract:

This research paper is a comparative study of a belt truss placed at different storey positions in a regular RC frame structure. The method used in this research is the response spectrum method, with the help of the ETABS software; six models with a belt truss are analyzed. The Indian standard codes used in this work are IS 456:2000, IS 800:2007, IS 875 Part-1, and IS 1893 Part-1:2016. The belt truss members have an I-section made of mild steel. The basic model is the same in all cases; only the position of the belt truss changes, while its dimensions remain constant for all models. The plan area of all models is 24.5 m x 28 m, and the models are G+20, with a ground floor height of 3.5 m and a constant storey height of 3.0 m for all other floors. This comparative work uses several important seismic parameters to check the stability of all models: base shear, fundamental period, storey overturning moment, and maximum storey displacement.

Keywords: belt truss, RC frames structure, ETABS, response spectrum analysis, special moment resisting frame

Procedia PDF Downloads 74
14446 A Hybrid Based Algorithm to Solve the Multi-objective Minimum Spanning Tree Problem

Authors: Boumesbah Asma, Chergui Mohamed El-amine

Abstract:

Since it has been shown that the multi-objective minimum spanning tree problem (MOST) is NP-hard even with two criteria, we propose in this study a hybrid NSGA-II algorithm with an exact mutation operator, which is used only with low probability, to find an approximation of the Pareto front of the problem. In a connected graph G, a spanning tree T of G is a connected and cycle-free subgraph; if k edges of G\T are added to T, we obtain a partial graph H of G inducing a multi-objective spanning tree problem of reduced size compared to the initial one. With a low probability for the mutation operator, an exact method for solving the reduced MOST problem on the graph H is then used to generate several mutated solutions from a spanning tree T. Then, the selection operator of NSGA-II is applied to obtain the Pareto front approximation. Finally, an adaptation of the VNS metaheuristic is called for further improvement of this front; it finds good individuals that balance diversification and intensification during the optimization search process. Experimental comparisons with an exact method show promising results and indicate that the proposed algorithm is efficient.
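
Selection in NSGA-II-style algorithms rests on Pareto dominance; a minimal first-front filter is sketched below for candidate spanning trees summarized by their two objective totals. The objective values are hypothetical, and tree construction, crowding distance, and the exact mutation operator are omitted.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Keep only the nondominated objective vectors (a first-front filter,
    the core test behind NSGA-II's nondominated sorting)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (cost, reliability-penalty) totals of candidate spanning trees
trees = [(12, 7), (10, 9), (15, 4), (11, 8), (13, 4), (10, 10)]
print(nondominated(trees))   # -> [(12, 7), (10, 9), (11, 8), (13, 4)]
```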

Keywords: minimum spanning tree, multiple objective linear optimization, combinatorial optimization, non-sorting genetic algorithm, variable neighborhood search

Procedia PDF Downloads 78
14445 Evaluation of Liquefaction Potential of Fine Grained Soil: Kerman Case Study

Authors: Reza Ziaie Moayed, Maedeh Akhavan Tavakkoli

Abstract:

This research investigates and evaluates the liquefaction potential at a project site in Kerman city using different methods for fine-grained soils. Examination of the damage caused by recent earthquakes shows that fine-grained soils play an essential role in the level of damage caused by soil liquefaction. However, in previous investigations related to liquefaction, limited attention has been paid to evaluating the cyclic resistance ratio of fine-grained soils, especially with the SPT method. Although using the standard penetration test (SPT) to assess the liquefaction potential of fine-grained soil is not common, it can be a helpful method because it is rapid, serviceable, and widely available. In the present study, the liquefaction potential is first determined from the physical properties of the soil obtained from laboratory tests. Then, using the SPT test and the available criteria for evaluating the cyclic resistance ratio and the factor of safety against liquefaction, the correction for fine-grained soils is applied, and the results are compared. The results show that using the SPT test for liquefaction assessment is more accurate than using laboratory tests in most cases, because it incorporates different physical parameters of the soil, which leads to an increase in the resulting N₁(60,cs).
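
As an illustration of how a fines correction raises N₁(60,cs), the sketch below uses one widely cited empirical relation (the Youd et al., 2001 SPT-based fines correction and clean-sand CRR curve). These formulas are a generic reference, not the case-specific criteria applied in the paper, and the input values are assumed.

```python
import math

def n1_60_cs(n1_60, fines_pct):
    """Clean-sand equivalent blow count (Youd et al., 2001 fines correction)."""
    fc = fines_pct
    if fc <= 5:
        alpha, beta = 0.0, 1.0
    elif fc < 35:
        alpha = math.exp(1.76 - 190.0 / fc**2)
        beta = 0.99 + fc**1.5 / 1000.0
    else:
        alpha, beta = 5.0, 1.2
    return alpha + beta * n1_60

def crr_75(n_cs):
    """Clean-sand cyclic resistance ratio (M = 7.5), valid for n_cs < 30."""
    return (1.0 / (34.0 - n_cs) + n_cs / 135.0
            + 50.0 / (10.0 * n_cs + 45.0)**2 - 1.0 / 200.0)

# Example: measured N1(60) = 12 with 30% fines (assumed values)
n_cs = n1_60_cs(12, 30)
print(f"N1(60)cs = {n_cs:.1f}, CRR7.5 = {crr_75(n_cs):.3f}")
```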

Keywords: liquefaction, cyclic resistance ratio, SPT test, clay soil, cohesion soils

Procedia PDF Downloads 87
14444 Video Text Information Detection and Localization in Lecture Videos Using Moments

Authors: Belkacem Soundes, Guezouli Larbi

Abstract:

This paper presents a robust and accurate method for text detection and localization in lecture videos. Frame regions are classified as text or background based on visual feature analysis. However, lecture videos show significant degradation, mainly related to acquisition conditions, camera motion, and environmental changes, resulting in low-quality videos and hence affecting the efficiency of feature extraction and description. Moreover, traditional text detection methods cannot be directly applied to lecture videos. Therefore, robust feature extraction methods dedicated to this specific video genre are required for robust and accurate text detection and extraction. The method consists of a three-step process: slide region detection and segmentation, feature extraction, and non-text filtering. For robust and effective feature extraction, moment functions are used. Two distinct types of moments are used, orthogonal and non-orthogonal: for the orthogonal type, both Zernike and pseudo-Zernike moments are used, whereas for the non-orthogonal type Hu moments are used. Their expressivity and description efficiency are given and discussed. The proposed approach shows that, in general, the orthogonal moments achieve higher accuracy than the non-orthogonal ones, and pseudo-Zernike moments are more effective than Zernike moments, with better computation time.
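
The non-orthogonal Hu descriptor mentioned above can be computed for a candidate region with standard OpenCV routines, as in the short sketch below; this is a generic illustration rather than the authors' full pipeline, and the log-scaling and the synthetic region are conventions assumed for the example.

```python
import cv2
import numpy as np

def hu_descriptor(region_gray):
    """7-element Hu moment descriptor of a grayscale region, log-scaled so the
    components have comparable magnitudes (a common convention, assumed here)."""
    m = cv2.moments(region_gray)
    hu = cv2.HuMoments(m).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Toy example: a synthetic 'text-like' region (bright strokes on a dark background)
region = np.zeros((40, 120), dtype=np.uint8)
cv2.putText(region, "Ax3", (5, 30), cv2.FONT_HERSHEY_SIMPLEX, 1.0, 255, 2)
print(hu_descriptor(region))
```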

Keywords: text detection, text localization, lecture videos, pseudo zernike moments

Procedia PDF Downloads 136
14443 Experimental Studies on the Effect of Premixing Methods in Anaerobic Digestor with Corn Stover

Authors: M. Sagarika, M. Chandra Sekhar

Abstract:

Agricultural residues are produced in large quantities in India and represent an abundant but underutilized source of renewable biomass in agriculture. In India, the amount of crop residues available is estimated to be approximately 686 million tons. Anaerobic digestion is a promising option to utilize the surplus agricultural residues and can produce biogas and digestate. Biogas is mainly methane (CH4), which can be utilized as an energy source in replacement of fossil fuels such as natural gas and oil; digestate, on the other hand, contains high amounts of nutrients and can be employed as fertilizer. Solid-state anaerobic digestion (total solids ≥ 15%) is suitable for agricultural residues, as it reduces problems such as stratification and floating that occur in liquid anaerobic digestion (total solids < 15%). The major concern in solid-state anaerobic digestion is the low mass transfer between feedstock and inoculum, which results in low performance. To resolve this low mass transfer issue, effective mixing of feedstock and inoculum is required. Mechanical mixing with a stirrer during the digestion process can be applied, but it is difficult to stir feedstock with a high solids content and high viscosity. Complete premixing of feedstock and inoculum is an alternative, which is usual in lab-scale studies but may not be affordable due to the high energy demand in large-scale digesters. Developing partial premixing methods may reduce this problem. The current study aims to improve the performance of solid-state anaerobic digestion of corn stover at feedstock-to-inoculum ratios of 3 and 5 by applying partial premixing methods, and to compare the complete premixing method with two partial premixing methods: two alternating layers of feedstock and inoculum, and three alternating layers of feedstock and inoculum with higher inoculum ratios in the top layers. The experimental studies show that the partial premixing method with three alternating layers of feedstock and inoculum yielded good methane production.

Keywords: anaerobic digestion, premixing methods, methane yield, corn stover, volatile solids

Procedia PDF Downloads 221
14442 Water Footprint for the Palm Oil Industry in Malaysia

Authors: Vijaya Subramaniam, Loh Soh Kheang, Astimar Abdul Aziz

Abstract:

The water footprint (WFP) has gained importance due to increasing water scarcity in the world. This study analyses the WFP of an agricultural sector, the oil palm supply chain, which produces oil palm fresh fruit bunches (FFB), crude palm oil, palm kernel, and crude palm kernel oil. The water accounting and vulnerability evaluation (WAVE) method was used; this method analyses the water depletion index (WDI) based on the local blue water scarcity. The main contribution towards the WFP at the plantation was the production of FFB by the crop itself, at 0.23 m³/tonne FFB. At the mill, the burden shifts to the water added during the process, consisting of boiler and process water, which accounted for 6.91 m³/tonne crude palm oil. There was a 33% reduction in the WFP when there was no dilution or water addition after the screw press at the mill. When allocation was performed, the WFP was reduced by 42%, as the burden was shared with the palm kernel and palm kernel shell. At the kernel crushing plant (KCP), the main contributor to the WFP was the palm kernel, which carried the burden from upstream, at 4.96 m³/tonne crude palm kernel oil, followed by the electricity used for the process at 0.33 m³/tonne and transportation of the palm kernel at 0.08 m³/tonne crude palm kernel oil. A comparison of mills with and without biogas capture showed no difference in the WFP, and KCPs operating in the proximity of mills gave only a 6% reduction compared with those operating in the proximity of ports. Both comparisons therefore showed no or insignificant differences, in contrast to previous life cycle assessment studies on the carbon footprint, which showed significant differences; this shows that findings change when only certain impact categories are considered. It can be concluded that the impact of the water used by the oil palm tree is low, due to the practice of no irrigation at the plantations and the high availability of rainfall in Malaysia. This reiterates the importance of planting oil palm in regions with high rainfall all year long, like the tropics. The milling stage had the most significant impact on the WFP; mills should avoid dilution to reduce this impact.

Keywords: life cycle assessment, water footprint, crude palm oil, crude palm kernel oil, WAVE method

Procedia PDF Downloads 154