Search results for: optimal operating parameters
11412 Thermodynamics of Aqueous Solutions of Organic Molecule and Electrolyte: Use Cloud Point to Obtain Better Estimates of Thermodynamic Parameters
Authors: Jyoti Sahu, Vinay A. Juvekar
Abstract:
Electrolytes are often used to bring about salting-in and salting-out of organic molecules and polymers (e.g. polyethylene glycols/proteins) from aqueous solutions. For quantification of these phenomena, a thermodynamic model which can accurately predict the activity coefficient of the electrolyte as a function of temperature is needed. The thermodynamic models available in the literature contain a large number of empirical parameters. These parameters are estimated using the lower/upper critical solution temperature of the solution of the electrolyte/organic molecule at different temperatures. Since the number of parameters is large, inaccuracy can creep in during their estimation, which can affect the reliability of prediction beyond the range in which these parameters are estimated. The cloud point of a solution is related to its free energy through its temperature and composition derivatives. Hence, cloud point measurements can be used for accurate estimation of the temperature and composition dependence of the parameters in the model for free energy. Hence, if we use a two-pronged procedure in which we first use the cloud point of the solution to estimate some of the parameters of the thermodynamic model and determine the rest using osmotic coefficient data, we gain on two counts. First, since the parameters estimated in each of the two steps are fewer, we achieve higher accuracy of estimation. The second and more important gain is that the resulting model parameters are more sensitive to temperature. This is crucial when we wish to use the model outside the temperature window within which the parameter estimation is sought. The focus of the present work is to prove this proposition. We have used electrolyte (NaCl/Na2CO3)-water-organic molecule (iso-propanol/ethanol) as the model system. The Robinson-Stokes-Glueckauf model is modified by incorporating temperature-dependent Flory-Huggins interaction parameters. The Helmholtz free energy expression contains, in addition to the electrostatic and translational entropic contributions, three Flory-Huggins pairwise interaction contributions, viz. water-polymer, water-salt and polymer-salt (w-water, p-polymer, s-salt). These parameters depend both on temperature and concentration. The concentration dependence is expressed in the form of a quadratic expression involving the volume fractions of the interacting species. The temperature dependence is expressed through a prescribed functional form. To obtain the temperature-dependent interaction parameters for the organic molecule-water and electrolyte-water systems, the critical solution temperature of the electrolyte-water-organic molecule system is measured using a cloud point measuring apparatus, and the temperature- and composition-dependent interaction parameters for the electrolyte-water-organic molecule system are estimated from these cloud point measurements. The model is then used to estimate the critical solution temperature (CST) of the electrolyte-water-organic molecule solution. We have experimentally determined the critical solution temperature of different compositions of the electrolyte-water-organic molecule solution and compared the results with the estimates based on our model. The two sets of values show good agreement. On the other hand, when only osmotic coefficients are used for estimation of the free energy model, the CST predicted using the resulting model shows poor agreement with the experiments.
Thus, the importance of the CST data in the estimation of parameters of the thermodynamic model is confirmed through this work. Keywords: concentrated electrolytes, Debye-Hückel theory, interaction parameters, Robinson-Stokes-Glueckauf model, Flory-Huggins model, critical solution temperature
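As an illustration of the two-step estimation idea described above, the sketch below fits a temperature-dependent Flory-Huggins parameter of the assumed form chi(T) = a + b/T to synthetic cloud point data, approximating the cloud point by the spinodal condition of a binary Flory-Huggins mixture. The functional form, the degree of polymerisation N, and all numbers are assumptions for illustration only, not the authors' actual model or measurements.

```python
# Minimal sketch (not the authors' model): estimating the coefficients of a
# temperature-dependent Flory-Huggins parameter, chi(T) = a + b/T, from
# cloud point data, approximating the cloud point by the spinodal condition
# of a binary Flory-Huggins mixture. All numbers are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

N = 10.0  # assumed degree of polymerisation of the organic solute

def chi_spinodal(phi):
    # Binary Flory-Huggins spinodal: 1/(N*phi) + 1/(1-phi) - 2*chi = 0
    return 0.5 * (1.0 / (N * phi) + 1.0 / (1.0 - phi))

# Hypothetical cloud point temperatures measured at several compositions.
phi = np.array([0.05, 0.10, 0.15, 0.20, 0.30])
T_cloud = np.array([282.1, 306.0, 313.6, 316.3, 315.9])  # K, synthetic

# At each cloud point the mixture sits (approximately) on the spinodal, so
# chi(T_cloud) equals chi_spinodal(phi); these pairs are the fitting data.
chi_data = chi_spinodal(phi)

def chi_of_T(T, a, b):
    return a + b / T

(a_fit, b_fit), _ = curve_fit(chi_of_T, T_cloud, chi_data, p0=(0.0, 500.0))
print(f"chi(T) ~ {a_fit:.3f} + {b_fit:.1f}/T")
```

In the two-pronged procedure described in the abstract, a fit of this kind would fix the temperature dependence, while the remaining concentration-dependent coefficients would be estimated from osmotic coefficient data.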
Procedia PDF Downloads 391
11411 A Simple Device for Characterizing High Power Electron Beams for Welding
Authors: Aman Kaur, Colin Ribton, Wamadeva Balachandaran
Abstract:
Electron beam welding, due to its inherent advantages, is being extensively used for material processing where high precision is required. Especially in the aerospace and nuclear industries, there are high quality requirements and the cost of materials and processes is very high, which makes it very important to ensure that the beam quality is maintained and checked prior to carrying out the welds. Although the processes in these industries are highly controlled, even minor changes in the operating parameters of the electron gun can produce variations in the beam quality large enough to result in poor welding. To measure the beam quality, a simple device has been designed that can be used at high powers. The device consists of two slits in the x and y axes which collect a small portion of the beam current when the beam is deflected over the slits. The signals received from the device are processed in data acquisition hardware and the dedicated software developed for the device. The device has been used in controlled laboratory environments to analyse the signals and the weld quality relationships by varying the focus current. The results showed matching trends in the weld dimensions and the beam characteristics. Further experimental work is being carried out to determine the ability of the device and signal processing software to detect subtle changes in the beam quality and to relate these to the physical weld quality indicators. Keywords: electron beam welding, beam quality, high power, weld quality indicators
Procedia PDF Downloads 324
11410 Determination the Effects of Physico-Chemical Parameters on Groundwater Status by Water Quality Index
Authors: Samaneh Abolli, Mahdi Ahmadi Nasab, Kamyar Yaghmaeian, Mahmood Alimohammadi
Abstract:
The quality of drinking water, in addition to the presence of physicochemical parameters, depends on the type and geographical location of the water sources. In this study, groundwater quality was investigated by sampling the total dissolved solids (TDS), electrical conductivity (EC), total hardness (TH), Cl, Ca²⁺, and Mg²⁺ parameters at 13 sites, and 40 water samples were sent to the laboratory. Electrometric, titration, and spectrophotometric methods were used. In the next step, the water quality index (WQI) was used to investigate the impact and weight of each parameter in the groundwater. The results showed that only the mean of the magnesium ion (40.88 mg/l) was lower than the World Health Organization (WHO) guideline. Interpreting the WQI based on the WHO guidelines showed that 21, 11, and 7 samples were of very poor, poor, and average quality, respectively, and one sample had excellent quality. Among the studied parameters, the means of EC (2,087.49 mS/cm) and Cl (1,015.87 mg/l) exceeded the global and national limits. Classification of the water quality by TH gave very hard (87.5%), hard (7.5%), and moderate (5%) categories, respectively. Based on the geographical distribution, the drinking water index at sites 4 and 11 did not have acceptable quality. Chloride ion was identified as the responsible pollutant and the most important ion raising the index. Statistical tests and Spearman correlation showed significant, direct correlations (p < 0.05, r > 0.7) between TDS, EC, and chloride; between EC and chloride; and among TH, Ca²⁺, and Mg²⁺. Keywords: water quality index, groundwater, chloride, GIS, Garmsar
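For readers unfamiliar with how a WQI is aggregated, the sketch below computes the widely used weighted arithmetic water quality index from a handful of parameters. The permissible limits, unit weights, and sample concentrations are illustrative assumptions, not the values used in this study.

```python
# Minimal sketch of a weighted arithmetic Water Quality Index (WQI).
# Standards and the sample below are hypothetical, for illustration only.

standards = {          # parameter: (permissible limit, ideal value)
    "TDS (mg/l)": (500.0, 0.0),
    "EC (uS/cm)": (1500.0, 0.0),
    "TH (mg/l)":  (300.0, 0.0),
    "Cl (mg/l)":  (250.0, 0.0),
    "Mg (mg/l)":  (50.0, 0.0),
}

sample = {             # hypothetical measured concentrations
    "TDS (mg/l)": 1250.0,
    "EC (uS/cm)": 2087.0,
    "TH (mg/l)":  480.0,
    "Cl (mg/l)":  1015.0,
    "Mg (mg/l)":  40.9,
}

def weighted_arithmetic_wqi(sample, standards):
    # Unit weight of each parameter is inversely proportional to its limit.
    k = 1.0 / sum(1.0 / limit for limit, _ in standards.values())
    num, den = 0.0, 0.0
    for name, (limit, ideal) in standards.items():
        w = k / limit                                          # unit weight
        q = 100.0 * (sample[name] - ideal) / (limit - ideal)   # quality rating
        num += w * q
        den += w
    return num / den

wqi = weighted_arithmetic_wqi(sample, standards)
print(f"WQI = {wqi:.1f}  (values above 100 fall in the unsuitable/very poor range)")
```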
Procedia PDF Downloads 104
11409 Expected Present Value of Losses in the Computation of Optimum Seismic Design Parameters
Authors: J. García-Pérez
Abstract:
An approach to compute optimum seismic design parameters is presented. It is based on the optimization of the expected present value of the total cost, which includes the initial cost of the structures as well as the cost due to earthquakes. Different types of seismicity models are considered, including one for characteristic earthquakes. Uncertainties are included in some variables to observe their influence on the optimum values. Optimum seismic design coefficients are computed for three different structural types representing high-, medium- and low-rise buildings, located near to and far from the seismic sources. Ordinary and important structures are considered in the analysis. The results show an important influence of the seismicity models, as well as of the uncertainties in the variables, on the optimum values. Keywords: importance factors, optimum parameters, seismic losses, seismic risk, total cost
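To make the cost trade-off concrete, the sketch below minimizes a total expected present value of the form total(c) = initial cost(c) + expected present value of losses(c), using a toy power-law exceedance rate for the hazard and exponential discounting. The cost model, hazard parameters, and all numbers are hypothetical assumptions and do not reproduce the paper's formulation.

```python
# Minimal sketch: choosing a design coefficient c that minimizes the
# expected present value of total cost. All models/numbers are hypothetical.
import numpy as np
from scipy.optimize import minimize_scalar

C0, alpha, beta = 1.0, 0.8, 1.5     # initial cost: C0 + alpha * c**beta
K, r = 0.02, 2.0                    # hazard: annual exceedance rate K * c**(-r)
D = 3.0                             # loss (as a multiple of C0) per damaging event
gamma = 0.05                        # annual discount rate

def initial_cost(c):
    return C0 + alpha * c**beta

def expected_losses_pv(c):
    # For a Poisson process of damaging events with rate nu(c) and discount
    # rate gamma, the expected present value of losses is nu(c) * D / gamma.
    nu = K * c**(-r)
    return nu * D / gamma

def total_cost(c):
    return initial_cost(c) + expected_losses_pv(c)

res = minimize_scalar(total_cost, bounds=(0.05, 2.0), method="bounded")
print(f"optimum design coefficient c* = {res.x:.3f}, total cost = {res.fun:.3f}")
```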
Procedia PDF Downloads 284
11408 Non-Local Simultaneous Sparse Unmixing for Hyperspectral Data
Authors: Fanqiang Kong, Chending Bian
Abstract:
Sparse unmixing is a promising semisupervised approach which assumes that the observed pixels of a hyperspectral image can be expressed as a linear combination of only a few pure spectral signatures (endmembers) from an available spectral library. However, the sparse unmixing problem still remains a great challenge: finding the optimal subset of endmembers for the observed data from a large standard spectral library, without considering the spatial information. Under such circumstances, a sparse unmixing algorithm termed non-local simultaneous sparse unmixing (NLSSU) is presented. In NLSSU, the non-local simultaneous sparse representation method for endmember selection in sparse unmixing is used to find the optimal subset of endmembers for each set of similar image patches in the hyperspectral image. Then, the non-local means method, as a regularizer for abundance estimation in sparse unmixing, is used to exploit the non-local self-similarity of the abundance images. Experimental results on both simulated and real data demonstrate that NLSSU outperforms the other algorithms, with better spectral unmixing accuracy. Keywords: hyperspectral unmixing, simultaneous sparse representation, sparse regression, non-local means
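As a rough illustration of simultaneous (row-sparse) unmixing, the sketch below solves a small synthetic problem with scikit-learn's MultiTaskLasso, whose L2,1 penalty forces the same few library endmembers to be active across all pixels of a patch. This is not the NLSSU algorithm itself, and the library size, noise level, and regularization weight are assumptions.

```python
# Minimal sketch of simultaneous sparse unmixing on a synthetic patch.
# Not the NLSSU algorithm; just the row-sparse regression idea behind it.
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
n_bands, n_lib, n_pixels = 100, 60, 25          # assumed sizes

A = np.abs(rng.normal(size=(n_bands, n_lib)))   # spectral library (columns = signatures)
true_idx = [3, 17, 42]                          # endmembers actually present in the patch
X_true = np.zeros((n_lib, n_pixels))
X_true[true_idx, :] = rng.dirichlet(np.ones(len(true_idx)), size=n_pixels).T

Y = A @ X_true + 0.01 * rng.normal(size=(n_bands, n_pixels))   # observed patch pixels

# MultiTaskLasso: rows of the coefficient matrix are jointly zero or non-zero,
# which is exactly the "simultaneous" sparsity assumption for a patch.
model = MultiTaskLasso(alpha=0.005, fit_intercept=False, max_iter=5000)
model.fit(A, Y)                                  # A: (bands, lib), Y: (bands, pixels)
X_hat = model.coef_.T                            # (lib, pixels) abundance estimates

active = np.where(np.linalg.norm(X_hat, axis=1) > 1e-3)[0]
print("selected endmembers:", active, " true:", true_idx)
```

In NLSSU, the non-local means regularizer would additionally smooth the resulting abundance maps using their non-local self-similarity.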
Procedia PDF Downloads 245
11407 Recovery of Helicobacter Pylori from Stagnant and Moving Water Biofilms
Authors: Maryam Zafar, Sajida Rasheed, Imran Hashmi
Abstract:
Water as an environmental reservoir is reported to act as a habitat and transmission route for microaerophilic bacteria such as Helicobacter pylori. Biofilms have been shown to be the predominant dwellings in which the bacteria grow in water, and a protective reservoir for numerous pathogens, protecting them against harsh conditions such as shear stress, low carbon concentration and less-than-optimal temperature. In this study, the influence of these and many other parameters on H. pylori was studied in stagnant and moving water biofilms in both surface and underground aquatic reservoirs. H. pylori were recovered from pipe cross sections of different materials, such as polyvinyl chloride, polypropylene and galvanized iron, taken from an urban water distribution network. Biofilm swabbed from the inner cross sections was examined by molecular biology methods coupled with gene sequencing, and an H. pylori 16S rRNA peptide nucleic acid probe showed positive results for H. pylori presence. The study showed that the pipe material affects the growth of biofilm, which in turn provides an additional survival mechanism for pathogens like H. pylori, causing public health concerns. Keywords: biofilm, gene sequencing, helicobacter pylori, pipe materials
Procedia PDF Downloads 359
11406 Damage Identification in Reinforced Concrete Beams Using Modal Parameters and Their Formulation
Authors: Ali Al-Ghalib, Fouad Mohammad
Abstract:
The identification of damage in reinforced concrete structures subjected to incremental cracking, exploiting vibration data, is recognized as a challenging topic in the published and heavily cited literature. Therefore, this paper attempts to shine light on the extent to which dynamic methods can be applied to reinforced concrete beams simulated with various scenarios of defects. For this purpose, three different reinforced concrete beams are tested through the course of the study. The three beams are loaded statically to failure in incremental successive load cycles and later rehabilitated. After each static load stage, the beams are tested under a free-free support condition using experimental modal analysis. The beams were all of the same length and cross section (2.0 x 0.14 x 0.09) m, but they differed in concrete compressive strength and the type of damage presented. The experimental modal parameters, used as damage identification parameters, were shown to be computationally expensive and time consuming, and to require substantial inputs and considerable expertise. Nonetheless, they proved plausible for the condition monitoring of the current case study, as well as for tracking structural changes in the course of progressive loads. It was accentuated that a satisfactory localization and quantification of structural changes (Level 2 and Level 3 of the damage identification problem) can only be achieved reasonably by considering the frequencies and mode shapes of a system in a proper analytical model. A convenient post-analysis process for the various datasets of vibration measurements for the three beams is conducted in order to extract, check and correlate the basic modal parameters, namely natural frequency, modal damping and mode shapes. The results of the extracted modal parameters and their combination are utilized and discussed in this research as quantification parameters. Keywords: experimental modal analysis, damage identification, structural health monitoring, reinforced concrete beam
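A minimal sketch of the kind of post-processing step described above: estimating natural frequencies from a free-vibration acceleration record by peak picking on its spectrum. The signal here is synthetic (two assumed damped modes plus noise), and the sampling rate and mode values are illustrative, not measurements from the tested beams.

```python
# Minimal sketch: natural-frequency extraction by FFT peak picking on a
# synthetic free-decay acceleration record (two assumed modes + noise).
import numpy as np
from scipy.signal import find_peaks

fs = 2000.0                       # sampling rate, Hz (assumed)
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(1)

# Synthetic response: two damped modes at 38 Hz and 152 Hz (hypothetical).
signal = (1.0 * np.exp(-0.8 * t) * np.sin(2 * np.pi * 38.0 * t)
          + 0.4 * np.exp(-1.5 * t) * np.sin(2 * np.pi * 152.0 * t)
          + 0.02 * rng.normal(size=t.size))

spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

peaks, _ = find_peaks(spectrum, height=0.1 * spectrum.max(), distance=50)
print("identified natural frequencies [Hz]:", np.round(freqs[peaks], 1))
```

Modal damping and mode shapes would then be obtained from the half-power bandwidths and from the amplitude ratios between measurement points, as in standard experimental modal analysis.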
Procedia PDF Downloads 263
11405 Application of Additive Manufacturing for Production of Optimum Topologies
Authors: Mahdi Mottahedi, Peter Zahn, Armin Lechler, Alexander Verl
Abstract:
An optimal topology of components leads to the maximum stiffness with the minimum material use. For the generation of these topologies, algorithms are normally employed which tackle manufacturing limitations at the cost of the optimality of the result. The global optimum result with penalty factor one, however, cannot be fabricated with conventional methods. In this article, an additive manufacturing method is introduced in order to enable the production of global topology optimization results. For a benchmark, topology optimization with higher and lower penalty factors is performed. Different algorithms are employed in order to interpret the results of topology optimization with lower factors in many microstructure layers. These layers are then joined to form the final geometry. The algorithms’ benefits are then compared experimentally and numerically to find the best interpretation. The findings demonstrate that, by implementation of the selected algorithm, the stiffness of the components produced with this method is higher than what could have been produced by conventional techniques. Keywords: topology optimization, additive manufacturing, 3D-printer, laminated object manufacturing
Procedia PDF Downloads 339
11404 Stress Analysis of the Ceramics Heads with Different Sizes under the Destruction Tests
Authors: V. Fuis, P. Janicek, T. Navrat
Abstract:
The global problem solved here is the calculation of the parameters of a ceramic material from a set of destruction tests of ceramic heads of total hip joint endoprostheses. The standard way of calculating the material parameters consists in carrying out a set of 3- or 4-point bending tests on specimens cut out from parts of the ceramic material to be analysed. In the case of ceramic heads, it is not possible to cut out specimens of the required dimensions because the heads are too small (if the cut-out specimens were smaller than the normalized ones, the material parameters derived from them would exhibit higher strength values than those which the given ceramic material really has). A special destruction device for head destruction was designed, and the local problem solved here is the modification of this destruction device based on the analysis of the tensile stress in the head for two different values of the depth of the conical hole in the head. The goal of the device modification is a shift of the location with the extreme value of the maximum principal stress σ1 from the region of the bottom of the head's hole to its opening. This modification will increase the credibility of the obtained material properties of the bioceramics, which will be determined from a set of head destructions using the Weibull weakest link theory. Keywords: ceramic heads, depth of the conical hole, destruction test, material parameters, principal stress, total hip joint endoprosthesis
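Since the material parameters are to be extracted via the Weibull weakest-link theory, the sketch below shows one common way of estimating the Weibull modulus and characteristic strength from a set of fracture stresses, using median-rank probability estimates and a linear fit in the ln-ln domain. The fracture-stress values are invented for illustration and are not data from the head destruction tests.

```python
# Minimal sketch: Weibull modulus m and characteristic strength sigma_0
# estimated from a set of fracture stresses (values are hypothetical).
import numpy as np

fracture_stress = np.array([412., 455., 478., 391., 502., 435., 469., 444.,
                            489., 421., 460., 430.])  # MPa, invented data

sigma = np.sort(fracture_stress)
n = sigma.size
# Median-rank estimate of the failure probability of the i-th weakest sample.
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)

# Weibull CDF: F = 1 - exp(-(sigma/sigma_0)**m)
# => ln(-ln(1-F)) = m*ln(sigma) - m*ln(sigma_0), i.e. a straight line.
x = np.log(sigma)
y = np.log(-np.log(1.0 - F))
m, intercept = np.polyfit(x, y, 1)
sigma_0 = np.exp(-intercept / m)

print(f"Weibull modulus m = {m:.1f}, characteristic strength = {sigma_0:.0f} MPa")
```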
Procedia PDF Downloads 419
11403 Woman, House, Identity: The Study of the Role of House in Constructing the Contemporary Dong Minority Woman’s Identity
Authors: Sze Wai Veera Fung, Peter W. Ferretto
Abstract:
Similar to most ethnic groups in China, men of the Dong minority hold the primary position in policymaking, moral authority, social values, and the control of the property. As the spatial embodiment of the patriarchal ideals, the house plays a significant role in producing and reproducing the distinctive gender status within the Dong society. Nevertheless, Dong women do not see their home as a cage of confinement, nor do they see themselves as a victim of oppression. For these women with reference to their productive identity, a house is a dwelling place with manifold meanings, including a proof of identity, an economic instrument, and a public resource operating on the community level. This paper examines the role of the house as a central site for identity construction and maintenance for the southern dialect Dong minority women in Hunan, China. Drawing on recent interviews with the Dong women, this study argues that women as productive individuals have a strong influence on the form of their house and the immediate environment, regardless of the male-dominated social construct of the Dong society. The aim of this study is not to produce a definitive relationship between women, house, and identity. Rather, it seeks to offer an alternative lens into the complexity and diversity of gender dynamics operating in and beyond the boundary of the house in the context of contemporary rural China.Keywords: conception of home, Dong minority, house, rural China, woman’s identity
Procedia PDF Downloads 138
11402 Design of Geochemical Maps of Industrial City Using Gradient Boosting and Geographic Information System
Authors: Ruslan Safarov, Zhanat Shomanova, Yuri Nossenko, Zhandos Mussayev, Ayana Baltabek
Abstract:
Geochemical maps of the distribution of the polluting elements V, Cr, Mn, Co, Ni, Cu, Zn, Mo, Cd and Pb over the territory of Pavlodar city (Kazakhstan), which is an industrial hub, were designed. Samples of soil were taken from 100 locations. Elemental analysis was performed using XRF. The obtained data were used to train a computational model with a gradient boosting algorithm. The optimal parameters of the model as well as the loss function were selected. The computational model was used to predict the polluting element concentrations at 1000 evenly distributed points. Based on the predicted data, geochemical maps were created. Additionally, the total pollution index Zc was calculated for each of the 1000 points. The spatial distribution of the Zc index was visualized using GIS (QGIS). It was calculated that the largest share of the territory of Pavlodar city (89.7%) belongs to the moderately hazardous category. The visualization of the obtained data allowed us to conclude that the main source of contamination is the industrial zones where the strategic metallurgical and refining plants are located. Keywords: Pavlodar, geochemical map, gradient boosting, CatBoost, QGIS, spatial distribution, heavy metals
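A rough sketch of the modelling pipeline described above: fitting a gradient-boosting regressor to point samples, predicting concentrations on a regular grid, and aggregating a total pollution index Zc as the sum of concentration-to-background ratios minus (n - 1). The use of CatBoostRegressor with these hyperparameters, the background values, and all coordinates and concentrations are assumptions for illustration; the study's actual settings and data are not reproduced here.

```python
# Minimal sketch: gradient boosting on soil samples + total pollution index Zc.
# Synthetic data and assumed hyperparameters; not the study's actual setup.
import numpy as np
from catboost import CatBoostRegressor

rng = np.random.default_rng(0)
elements = ["Cu", "Zn", "Pb"]
background = {"Cu": 20.0, "Zn": 50.0, "Pb": 15.0}      # assumed background, mg/kg

# 100 hypothetical sampling locations (x, y in km) and measured concentrations.
xy = rng.uniform(0, 10, size=(100, 2))
conc = {el: 30 + 40 * np.exp(-np.sum((xy - [7, 7]) ** 2, axis=1) / 4)
            + rng.normal(0, 3, 100) for el in elements}

# Regular prediction grid (stand-in for the 1000 evenly distributed points).
gx, gy = np.meshgrid(np.linspace(0, 10, 25), np.linspace(0, 10, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])

pred = {}
for el in elements:
    model = CatBoostRegressor(iterations=300, depth=4, learning_rate=0.1,
                              loss_function="RMSE", verbose=False)
    model.fit(xy, conc[el])
    pred[el] = model.predict(grid)

# Saet total pollution index: Zc = sum(C_i / C_background_i) - (n - 1)
Kc = np.column_stack([pred[el] / background[el] for el in elements])
Zc = Kc.sum(axis=1) - (len(elements) - 1)
print(f"Zc range over the grid: {Zc.min():.1f} - {Zc.max():.1f}")
```

The predicted grid values (and Zc) would then be exported, for example as a CSV of x, y, value, and rendered as raster or contour layers in QGIS.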
Procedia PDF Downloads 82
11401 Low Overhead Dynamic Channel Selection with Cluster-Based Spatial-Temporal Station Reporting in Wireless Networks
Authors: Zeyad Abdelmageid, Xianbin Wang
Abstract:
Choosing the operational channel for a WLAN access point (AP) in WLAN networks has been a static channel assignment process initiated by the user during the deployment process of the AP, which fails to cope with the dynamic conditions of the assigned channel at the station side afterward. However, the dramatically growing number of Wi-Fi APs and stations operating in the unlicensed band has led to dynamic, distributed, and often severe interference. This highlights the urgent need for the AP to dynamically select the best overall channel of operation for the basic service set (BSS) by considering the distributed and changing channel conditions at all stations. Consequently, dynamic channel selection algorithms which consider feedback from the station side have been developed. Despite the significant performance improvement, existing channel selection algorithms suffer from very high feedback overhead. Feedback latency from the STAs, due to the high overhead, can cause the eventually selected channel to no longer be optimal for operation due to the dynamic sharing nature of the unlicensed band. This has inspired us to develop our own dynamic channel selection algorithm with reduced overhead through the proposed low-overhead, cluster-based station reporting mechanism. The main idea behind the cluster-based station reporting is the observation that STAs which are very close to each other tend to have very similar channel conditions. Instead of requesting each STA to report on every candidate channel while causing high overhead, the AP divides STAs into clusters then assigns each STA in each cluster one channel to report feedback on. With the proper design of the cluster based reporting, the AP does not lose any information about the channel conditions at the station side while reducing feedback overhead. The simulation results show equal performance and, at times, better performance with a fraction of the overhead. We believe that this algorithm has great potential in designing future dynamic channel selection algorithms with low overhead.Keywords: channel assignment, Wi-Fi networks, clustering, DBSCAN, overhead
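To illustrate the cluster-based reporting idea, the sketch below groups stations by their (x, y) positions with DBSCAN and then assigns the candidate channels round-robin within each cluster, so that each station reports on only one channel while every channel is still covered by some station in the cluster. The coordinates, DBSCAN parameters, channel list, and round-robin assignment rule are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch: cluster stations by position, then split the candidate
# channels among the members of each cluster (assumed numbers throughout).
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
# Hypothetical station positions (meters), loosely forming three groups.
stations = np.vstack([rng.normal([5, 5], 1.0, (8, 2)),
                      rng.normal([20, 8], 1.0, (6, 2)),
                      rng.normal([12, 20], 1.0, (7, 2))])

channels = [1, 6, 11, 36, 40, 44]          # candidate channels (assumed)

labels = DBSCAN(eps=3.0, min_samples=3).fit_predict(stations)

assignments = {}                            # station index -> channel to report on
for cluster in set(labels):
    members = np.where(labels == cluster)[0]
    for i, sta in enumerate(members):
        # Round-robin: nearby stations see similar conditions, so one report
        # per channel per cluster is treated as representative of the cluster.
        assignments[int(sta)] = channels[i % len(channels)]

for sta, ch in sorted(assignments.items()):
    print(f"station {sta:2d} (cluster {labels[sta]:2d}) reports on channel {ch}")
```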
Procedia PDF Downloads 119
11400 Analysis of Thermal Damage Characteristics of High Pressure Turbine Blade According to Off-Design Operating Conditions
Authors: Seon Ho Kim, Minho Bang, Seok Min Choi, Young Moon Lee, Dong Kwan Kim, Hyung Hee Cho
Abstract:
Gas turbines are heat engines that convert chemical energy into electrical energy through mechanical energy. Owing to their high energy density per unit volume and low pollutant emissions, gas turbines are classified as clean energy. In order to obtain better performance, current gas turbines operate at a turbine inlet temperature of about 1600℃, and thermal damage is a very serious problem. These thermal damages are more prominent in off-design conditions than in design conditions. In this study, the thermal damage characteristics of high temperature components of a gas turbine made of a single crystal material are studied numerically for off-design operating conditions. The target gas turbine is configured as a reheat cycle and is operated in peak load operation mode, not normal operation. In particular, the target gas turbine features a lot of low-load operation. In this study, a commercial code, ANSYS 18.2, was used for analyzing the thermal-flow coupling problems. As a result, the flow separation phenomenon on the pressure side due to the flow reduction was remarkable at the off-design condition, and a high heat transfer coefficient appeared at the upper end of the suction surface due to the tip leakage flow. Keywords: gas turbine, single crystal blade, off-design, thermal analysis
Procedia PDF Downloads 213
11399 Enhanced CNN for Rice Leaf Disease Classification in Mobile Applications
Authors: Kayne Uriel K. Rodrigo, Jerriane Hillary Heart S. Marcial, Samuel C. Brillo
Abstract:
Rice leaf diseases significantly impact yield production in rice-dependent countries, affecting their agricultural sectors. As part of precision agriculture, early and accurate detection of these diseases is crucial for effective mitigation practices and minimizing crop losses. Hence, this study proposes an enhancement to the Convolutional Neural Network (CNN), a widely-used method for Rice Leaf Disease Image Classification, by incorporating MobileViTV2—a recently advanced architecture that combines CNN and Vision Transformer models while maintaining fewer parameters, making it suitable for broader deployment on edge devices. Our methodology utilizes a publicly available rice disease image dataset from Kaggle, which was validated by a university structural biologist following the guidelines provided by the Philippine Rice Institute (PhilRice). Modifications to the dataset include renaming certain disease categories and augmenting the rice leaf image data through rotation, scaling, and flipping. The enhanced dataset was then used to train the MobileViTV2 model using the Timm library. The results of our approach are as follows: the model achieved notable performance, with 98% accuracy in both training and validation, 6% training and validation loss, and a Receiver Operating Characteristic (ROC) curve ranging from 95% to 100% for each label. Additionally, the F1 score was 97%. These metrics demonstrate a significant improvement compared to a conventional CNN-based approach, which, in a previous 2022 study, achieved only 78% accuracy after using 5 convolutional layers and 2 dense layers. Thus, it can be concluded that MobileViTV2, with its fewer parameters, outperforms traditional CNN models, particularly when applied to Rice Leaf Disease Image Identification. For future work, we recommend extending this model to include datasets validated by international rice experts and broadening the scope to accommodate biotic factors such as rice pest classification, as well as abiotic stressors such as climate, soil quality, and geographic information, which could improve the accuracy of disease prediction.Keywords: convolutional neural network, MobileViTV2, rice leaf disease, precision agriculture, image classification, vision transformer
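A condensed sketch of the training setup described above, using the timm library to instantiate a MobileViTV2 backbone and fine-tune it on an image-folder dataset. The model variant name, folder layout, class count, and hyperparameters shown are assumptions for illustration and may differ from those used in the study.

```python
# Minimal sketch: fine-tuning a MobileViTV2 model from timm on a folder of
# rice-leaf images (paths, variant name and hyperparameters are assumed).
import timm
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

num_classes = 4                                  # assumed number of disease labels
model = timm.create_model("mobilevitv2_100", pretrained=True, num_classes=num_classes)

tfm = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomHorizontalFlip(),           # augmentation as mentioned above
    transforms.RandomRotation(15),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("data/rice_leaf/train", transform=tfm)   # hypothetical path
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=2)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):                          # assumed number of epochs
    running = 0.0
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        running += loss.item() * images.size(0)
    print(f"epoch {epoch}: train loss {running / len(train_ds):.4f}")
```

Validation accuracy, loss, ROC curves, and the F1 score reported in the abstract would be computed on a held-out split after training.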
Procedia PDF Downloads 23
11398 Microfiber Release During Laundry Under Different Rinsing Parameters
Authors: Fulya Asena Uluç, Ehsan Tuzcuoğlu, Songül Bayraktar, Burak Koca, Alper Gürarslan
Abstract:
Microplastics are contaminants that are widely distributed in the environment, with a detrimental ecological effect. Besides this, recent research has proved the existence of microplastics in human blood and organs. Microplastics in the environment can be divided into two main categories: primary and secondary microplastics. Primary microplastics are plastics that are released into the environment as microscopic particles. On the other hand, secondary microplastics are the smaller particles that are shed as a result of the consumption of synthetic materials in textile products as well as other products. Textiles are the main source of microplastic contamination in aquatic ecosystems. Laundry of synthetic textiles (34.8%) accounts for an average annual discharge of 3.2 million tons of primary microplastics into the environment. Recently, research on microfiber shedding from laundry has gained traction. However, no comprehensive study has been conducted on microfiber shedding from the standpoint of the rinsing parameters used during laundry. The purpose of the present study is to quantify microfiber shedding from fabric under different rinsing conditions and to determine the rinsing parameters that affect microfiber release in a laundry environment. In this regard, a parametric study is carried out to investigate the key factors affecting microfiber release from a front-load washing machine. These parameters are the amount of water used during the rinsing step and the spinning speed at the end of the washing cycle. The Minitab statistical program is used to create the design of experiments (DOE) and analyze the experimental results. Tests are repeated twice and, besides the controlled parameters, the other washing parameters are kept constant in the washing algorithm. At the end of each cycle, the released microfibers are collected via a custom-made filtration system and weighed with a precision balance. The results showed that increasing the water amount during the rinsing step drastically increased the amount of microplastic released from the washing machine. The parametric study also revealed that increasing the spinning speed results in an increase in microfiber release from textiles. Keywords: front load, laundry, microfiber, microfiber release, microfiber shedding, microplastic, pollution, rinsing parameters, sustainability, washing parameters, washing machine
Procedia PDF Downloads 97
11397 Effect of Different Level of Pomegranate Molasses on Performance, Egg Quality Trait, Serological and Hematological Parameters in Older Laying Hens
Authors: Ismail Bayram, Aamir Iqbal, E. Eren Gultepe, Cangir Uyarlar, Umit Ozcınar, I. Sadi Cetingul
Abstract:
The current study was planned with the objective of exploring the potential of pomegranate molasses (PM) on performance, egg quality and blood parameters in older laying hens. A total of 240 Babcock white laying hens (52 weeks old) were divided into 5 groups (n=48) with 8 subgroups of 6 hens each. Pomegranate molasses was added to the drinking water of the experimental groups at 0%, 0.1%, 0.25%, 0.5%, and 1%, respectively, during one month. In our results, egg weight values remained the same in all pomegranate molasses supplemented groups except the 1% group, compared with the control. Feed consumption, egg production, feed conversion ratio (FCR), egg mass, egg yolk cholesterol, body weights, and water consumption remained unaffected (P > 0.05). In the mid-study (15 days) analyses, egg quality parameters such as Haugh unit, eggshell thickness, albumin index, yolk index, and egg yolk color remained non-significant (P > 0.05), while in the final (30 days) egg analyses, only egg yolk color increased significantly (P < 0.05) in the 0.5% group. Moreover, Haugh unit, eggshell thickness, and albumin index were not significantly (P > 0.05) affected by the supplementation of pomegranate molasses. Regarding serological parameters, pomegranate molasses did not show any positive effect on cholesterol, total protein, LDL, HDL, GGT, AST, ALT, or glucose level. Similarly, pomegranate molasses showed non-significant (P > 0.05) results for different blood parameters such as HCT, RBC, MCV, MCH, MCHC, PLT, RDWC and MPV, except hemoglobin level. Hemoglobin level increased in all experimental groups over the control, showing that pomegranate molasses can be used as an enhancer in animals with a low hemoglobin level. Keywords: pomegranate molasses, laying hen, egg yield, blood parameters
Procedia PDF Downloads 169
11396 Evaluation of Pozzolanic Properties of Micro and Nanofillers Origin from Waste Products
Authors: Laura Vitola, Diana Bajare, Genadijs Sahmenko, Girts Bumanis
Abstract:
About 8% of the world's CO2 emissions are produced by the concrete industry; therefore, replacement of cement in concrete compositions by additives with pozzolanic activity would have a significant impact on the environment. A material which contains silica SiO2, or amorphous silica SiO2 together with aluminium oxide Al2O3, is called a pozzolana-type additive in the concrete industry. Pozzolana additives can be obtained from the recycling industry and from different production by-products, such as processed bulb borosilicate (DRL type) and lead (LB type) glass, coal combustion bottom ash, used brick pieces and biomass ash, thus partially solving the utilization problem which is so important in the world, as well as making practical use of materials which were previously considered unusable. In the literature, there is no summarized method which could be used for quick evaluation of the pozzolanic activity of waste products without carrying out wide-ranging research involving the production of innumerable concrete compositions and samples. Besides, it is important to understand which parameters should be predicted to characterize the efficiency of waste products. Simple methods of increasing the pozzolanic activity of different types of waste products are also determined. The aim of this study is to evaluate the effectiveness of different types of waste materials and industrial by-products (coal combustion bottom ash, biomass ash, waste glass, waste kaolin and calcined illite clays), and to determine which parameters have the greatest impact on pozzolanic activity. By using materials which were previously considered unusable and landfilled, basic utilization problems in the concrete industry will be partially solved. The optimal methods for treatment of waste materials and industrial by-products were determined with the purpose of increasing their pozzolanic activity and producing substitutes for cement in the concrete industry. Usage of the mentioned pozzolans allows the replacement of up to 20% of the necessary cement amount without reducing the compressive strength of concrete. Keywords: cement substitutes, micro and nano fillers, pozzolanic properties, specific surface area, particle size, waste products
Procedia PDF Downloads 427
11395 Quantum Sieving for Hydrogen Isotope Separation
Authors: Hyunchul Oh
Abstract:
One of the challenges in modern separation science and technology is the separation of hydrogen isotope mixtures, since D2 and H2 have almost identical size, shape and thermodynamic properties. Recently, quantum sieving of isotopes by confinement in narrow spaces has been proposed as an alternative technique. Despite many theoretical suggestions, however, it has been difficult to discover a feasible microporous material up to now. Among various porous materials, the novel class of microporous framework materials (COFs, ZIFs and MOFs) is considered a promising material class for isotope sieving due to their ultra-high porosity and uniform pore size, which can be tailored. Hence, we investigate experimentally the fundamental correlation between the D2/H2 molar ratio and the pore size at optimized operating conditions by using different ultramicroporous frameworks. The D2/H2 molar ratio depends strongly on pore size, pressure and temperature. The experimentally determined optimum pore diameter for quantum sieving lies between 3.0 and 3.4 Å, which can be an important guideline for designing and developing feasible microporous frameworks for isotope separation. Afterwards, we report a novel strategy for efficient hydrogen isotope separation at technologically relevant operating pressures through the development of quantum sieving exploited by pore aperture engineering. The strategy involves the installation of flexible components in the pores of the framework to tune the pore surface. Keywords: gas adsorption, hydrogen isotope, metal organic frameworks (MOFs), quantum sieving
Procedia PDF Downloads 265
11394 Calibration of Contact Model Parameters and Analysis of Microscopic Behaviors of Cuxhaven Sand Using The Discrete Element Method
Authors: Anjali Uday, Yuting Wang, Andres Alfonso Pena Olare
Abstract:
The Discrete Element Method is a promising approach to modeling the microscopic behavior of granular materials. The quality of the simulations, however, depends on the model parameters utilized. The present study focuses on the calibration and validation of the discrete element parameters for Cuxhaven sand based on experimental data from triaxial and oedometer tests. A sensitivity analysis was conducted for the sample preparation stage and the shear stage of the triaxial tests. The influence of parameters like the rolling resistance, inter-particle friction coefficient, confining pressure and effective modulus on the void ratio of the generated sample was investigated. During the shear stage, the effect of parameters like the inter-particle friction coefficient, effective modulus, rolling resistance friction coefficient and normal-to-shear stiffness ratio is examined. The calibration of the parameters is carried out such that the simulations reproduce the macro-mechanical characteristics like the dilation angle, peak stress, and stiffness. The above-mentioned calibrated parameters are then validated by simulating an oedometer test on the sand. The oedometer test results are in good agreement with the experiments, which proves the suitability of the calibrated parameters. In the next step, the calibrated and validated model parameters are applied to forecast the micromechanical behavior, including the evolution of contact force chains, buckling of columns of particles, observation of non-coaxiality, and sample inhomogeneity during a simple shear test. The evolution of contact force chains vividly shows the distribution and alignment of strong contact forces. The changes in coordination number are in good agreement with the volumetric strain exhibited during the simple shear test. The vertical inhomogeneity of void ratios is documented throughout the shearing phase, which shows looser structures in the top and bottom layers. Buckling of columns is not observed due to the small rolling resistance coefficient adopted for the simulations. The non-coaxiality of principal stress and strain rate is also well captured. Thus, the micromechanical behaviors are well described using the calibrated and validated material parameters. Keywords: discrete element model, parameter calibration, triaxial test, oedometer test, simple shear test
Procedia PDF Downloads 120
11393 Optimal Placement and Sizing of Energy Storage System in Distribution Network with Photovoltaic Based Distributed Generation Using Improved Firefly Algorithms
Authors: Ling Ai Wong, Hussain Shareef, Azah Mohamed, Ahmad Asrul Ibrahim
Abstract:
The installation of photovoltaic-based distributed generation (PVDG) in an active distribution system can lead to voltage fluctuation due to the intermittent and unpredictable PVDG output power. This paper presents a method for mitigating the voltage rise by optimally locating and sizing the battery energy storage system (BESS) in a PVDG-integrated distribution network. The improved firefly algorithm is used to perform the optimal placement and sizing. Three objective functions are presented, considering the voltage deviation and the BESS off-time, with the state of charge as the constraint. The performance of the proposed method is compared with other optimization methods, namely the original firefly algorithm and the gravitational search algorithm. Simulation results show that the proposed optimum BESS location and size improve the voltage stability. Keywords: BESS, firefly algorithm, PVDG, voltage fluctuation
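For reference, the sketch below shows the core update rule of a basic firefly algorithm on a generic continuous objective, standing in for the placement/sizing cost. The attractiveness and randomization parameters, the test function, and the bounds are assumptions; the "improved" variants used in the paper are not reproduced.

```python
# Minimal sketch of the basic firefly algorithm on a toy objective function
# (stand-in for a BESS placement/sizing cost). Parameters are assumed values.
import numpy as np

def objective(x):
    # Toy cost: sphere function; in the paper this would encode voltage
    # deviation / BESS off-time for a candidate location and size.
    return np.sum(x ** 2)

rng = np.random.default_rng(0)
n_fireflies, dim, n_iter = 20, 2, 100
lb, ub = -5.0, 5.0
beta0, gamma, alpha = 1.0, 1.0, 0.2          # attractiveness, absorption, randomness

X = rng.uniform(lb, ub, size=(n_fireflies, dim))
cost = np.array([objective(x) for x in X])

for _ in range(n_iter):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if cost[j] < cost[i]:             # firefly i moves toward brighter j
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                X[i] = np.clip(X[i], lb, ub)
                cost[i] = objective(X[i])
    alpha *= 0.97                             # gradually reduce randomness

best = np.argmin(cost)
print("best solution:", X[best], "cost:", cost[best])
```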
Procedia PDF Downloads 321
11392 A Near-Optimal Domain Independent Approach for Detecting Approximate Duplicates
Authors: Abdelaziz Fellah, Allaoua Maamir
Abstract:
We propose a domain-independent merging-cluster filter approach, complemented with a set of algorithms, for identifying approximate duplicate entities efficiently and accurately within a single data source and across multiple data sources. The near-optimal merging-cluster filter (MCF) approach is based on the well-tuned Monge-Elkan algorithm and extended with an affine variant of the Smith-Waterman similarity measure. We then present constant, variable, and function threshold algorithms that work conceptually in a divide-merge filtering fashion for detecting near duplicates as hierarchical clusters along with their corresponding representatives. The algorithms take recursive refinement approaches in the spirit of filtering, merging, and updating cluster representatives to detect approximate duplicates at each level of the cluster tree. Experiments show the high effectiveness and accuracy of the MCF approach in detecting approximate duplicates, outperforming the seminal Monge-Elkan algorithm on several real-world benchmarks and generated datasets. Keywords: data mining, data cleaning, approximate duplicates, near-duplicates detection, data mining applications and discovery
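To ground the discussion, here is a compact sketch of the Monge-Elkan scheme: each token of one string is matched to its best-scoring counterpart in the other, and the maxima are averaged. The inner token similarity used below is a simple normalized edit-distance ratio via difflib rather than the affine-gap Smith-Waterman variant described in the abstract, so treat it as an illustrative stand-in.

```python
# Minimal sketch of Monge-Elkan similarity with a simple inner token measure
# (difflib ratio); the paper uses an affine Smith-Waterman variant instead.
from difflib import SequenceMatcher

def token_sim(a: str, b: str) -> float:
    """Inner similarity between two tokens, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def monge_elkan(s1: str, s2: str) -> float:
    """Average, over tokens of s1, of the best match found in s2."""
    tokens1, tokens2 = s1.split(), s2.split()
    if not tokens1 or not tokens2:
        return 0.0
    return sum(max(token_sim(t1, t2) for t2 in tokens2) for t1 in tokens1) / len(tokens1)

def symmetric_monge_elkan(s1: str, s2: str) -> float:
    # Monge-Elkan is asymmetric; averaging both directions is a common fix.
    return 0.5 * (monge_elkan(s1, s2) + monge_elkan(s2, s1))

a = "Dept. of Computer Science, Univ. of Windsor"
b = "Department of Computer Science University of Windsor"
print(f"similarity = {symmetric_monge_elkan(a, b):.3f}")   # near-duplicate records
```

A filter such as the MCF would apply a threshold to scores like this to decide whether two records join the same cluster of approximate duplicates.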
Procedia PDF Downloads 387
11391 Analysis of Hard Turning Process of AISI D3-Thermal Aspects
Authors: B. Varaprasad, C. Srinivasa Rao
Abstract:
In the manufacturing sector, hard turning has emerged as a vital machining process for cutting hardened steels. Besides the many advantages of the hard turning operation, it has to be implemented in such a way as to achieve close tolerances in terms of surface finish, high product quality, reduced machining time, low operating cost and environmentally friendly characteristics. In the present study, a three-dimensional CAE (Computer Aided Engineering) based simulation of hard turning, using the commercial software DEFORM 3D, has been compared to experimental results for stresses, temperatures and tool forces in the machining of AISI D3 steel using mixed ceramic inserts (CC6050). In the present analysis, orthogonal cutting models are proposed, considering several processing parameters such as cutting speed, feed, and depth of cut. Exhaustive friction modeling at the tool-work interfaces is carried out. Work material flow around the cutting edge is carefully modeled with an adaptive re-meshing simulation capability. In the process simulations, the feed rate and cutting speed are constant (i.e., 0.075 mm/rev and 155 m/min), and the analysis is focused on stresses, forces, and temperatures during machining. Close agreement is observed between the CAE simulation and experimental values. Keywords: hard turning, computer aided engineering, computational machining, finite element method
Procedia PDF Downloads 454
11390 Water Management Scheme: Panacea to Development Using Nigeria’s University of Ibadan Water Supply Scheme as a Case Study
Authors: Sunday Olufemi Adesogan
Abstract:
The supply of potable water is, at the least, a very important index of national development. Water tariffs depend on the treatment cost, which carries the highest percentage of the total operating cost in any water supply scheme. In order to keep water tariffs as low as possible, treatment costs have to be minimized. The University of Ibadan, Nigeria, water supply scheme consists of a treatment plant with three distribution stations (Amina way, Kurumi and Lander) and two raw water supply sources (Awba dam and Eleyele dam). An operational study of the scheme was carried out to ascertain the efficiency of the supply of potable water on the campus and to justify the need for water supply schemes in tertiary institutions. The study involved the regular collection, processing and analysis of periodic operational data. Data collected include supply readings (water production on a daily basis) and consumer meter readings for a period of 22 months (October 2013 - July 2015); the operating hours of both the plants and the operating staff were also collected. Applying the required mathematical equations, the total loss in the distribution system was determined and translated into monetary terms. The adequacies of the operational functions were also determined. The study revealed that a water supply scheme is justified in tertiary institutions. It was also found that approximately 10.7 million Nigerian naira (Keywords: development, panacea, supply, water
Procedia PDF Downloads 209
11389 Purification of Bilge Water by Adsorption
Authors: Fatiha Atmani, Lamia Djellab, Nacera Yeddou Mezenner, Zohra Bensaadi
Abstract:
Generally, bilge waters can be briefly defined as saline and greasy wastewaters. The oil and grease are mixed with the sea water, which affects many marine species. Bilge water is a complex mixture of various compounds such as solvents, surfactants, fuel, lubricating oils, and hydraulic oils. It results mainly from leakage from the machinery and from fresh water washdowns, which are allowed to drain to the lowest inner part of the ship's hull. There are several physicochemical methods used for bilge water treatment, such as biodegradation, electrochemical treatment and electro-coagulation/flotation. The research presented herein discusses adsorption as a method to treat bilge water, and eggshells were studied as an adsorbent. The influence of operating parameters such as contact time, temperature and adsorbent dose (0.2 - 2 g/l) on the removal efficiency of chemical oxygen demand (COD) and turbidity was analyzed. The bilge wastewater used for this study was supplied by Harbour Bouharoune. Chemical oxygen demand removal increased from 26.7% to 68.7% as the adsorbent dose increased from 0.2 to 2 g. The kinetics of adsorption by eggshells were fast, reaching 55% of the total adsorption capacity in ten minutes (T = 20°C, pH = 7.66, m = 2 g/L). It was found that the turbidity decreased, and a removal of 95% was achieved at the end of the 90 min reaction. The adsorption process was found to be effective for the purification of bilge water, and the pseudo-second-order kinetic model was fitted for COD removal. Keywords: adsorption, bilge water, eggshells and kinetics, equilibrium and kinetics
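Since the COD data were fitted with a pseudo-second-order kinetic model, the sketch below shows the usual linearized fit, t/q_t = 1/(k2*qe^2) + t/qe, on invented uptake data; the time points and adsorption capacities are illustrative assumptions, not the measured values from this study.

```python
# Minimal sketch: linearized pseudo-second-order kinetic fit,
#   t/q_t = 1/(k2*qe**2) + t/qe
# using invented adsorption-uptake data (not the study's measurements).
import numpy as np

t = np.array([5.0, 10.0, 20.0, 30.0, 45.0, 60.0, 90.0])      # min
q_t = np.array([11.0, 15.8, 19.5, 21.2, 22.4, 23.0, 23.6])   # mg/g, hypothetical

y = t / q_t                       # linearized ordinate
slope, intercept = np.polyfit(t, y, 1)

qe = 1.0 / slope                  # equilibrium adsorption capacity, mg/g
k2 = slope ** 2 / intercept       # pseudo-second-order rate constant, g/(mg*min)

print(f"qe = {qe:.1f} mg/g, k2 = {k2:.4f} g/(mg*min)")
```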
Procedia PDF Downloads 355
11388 Improvement of Ground Water Quality Index Using Citrus limetta
Authors: Rupas Kumar M., Saravana Kumar M., Amarendra Kumar S., Likhita Komal V., Sree Deepthi M.
Abstract:
The demand for water is increasing at an alarming rate due to rapid urbanization and the increase in population. Due to freshwater scarcity, groundwater has become the necessary source of potable water for major parts of the world. This problem of freshwater scarcity and groundwater dependency is very severe, particularly in developing countries and overpopulated regions like India. The present study aimed at evaluating the Ground Water Quality Index (GWQI), which represents the overall quality of water at a certain location and time based on water quality parameters. To evaluate the GWQI, sixteen water quality parameters have been considered, viz. colour, pH, electrical conductivity, total dissolved solids, turbidity, total hardness, alkalinity, calcium, magnesium, sodium, chloride, nitrate, sulphate, iron, manganese and fluorides. The groundwater samples were collected from Kadapa City in Andhra Pradesh, India, and subjected to comprehensive physicochemical analysis. The high value of the GWQI has been found to arise mainly from the higher values of total dissolved solids, electrical conductivity, turbidity, alkalinity, hardness, and fluorides. In the present study, Citrus limetta (sweet lemon) peel powder has been used as a coagulant, and GWQI values are recorded at different coagulant concentrations to improve the GWQI. A sensitivity analysis is also carried out to determine the effect of coagulant dosage, mixing speed and stirring time on the GWQI. The research found that the maximum percentage improvement in GWQI values is obtained when the coagulant dosage is 100 ppm, the mixing speed is 100 rpm and the stirring time is 10 mins. Alum is also used as a coagulant aid, and the optimal ratio of Citrus limetta to alum is identified as 3:2, which resulted in the best GWQI value. The present study proposes Citrus limetta peel powder as a potential natural coagulant to treat groundwater and to improve the GWQI. Keywords: alum, Citrus limetta, ground water quality index, physicochemical analysis
Procedia PDF Downloads 227
11387 Preventing the Drought of Lakes by Using Deep Reinforcement Learning in France
Authors: Farzaneh Sarbandi Farahani
Abstract:
Drought and the decrease in the level of lakes in recent years, due to global warming and the excessive use of the water resources feeding the lakes, are of great importance, and this research provides a structure to investigate this issue. First, the information required for simulating lake drought is provided, with strong references and the necessary assumptions. An Entity-Component-System (ECS) structure has been used for the simulation, which allows the assumptions to be considered flexibly in the simulation. Three major users (i.e., industry, agriculture, and domestic users) consume water from groundwater and surface water (i.e., streams, rivers and lakes). Lake Mead has been considered for the simulation, and the information necessary to investigate its drought has also been provided. The results are presented in the form of a scenario-based design and optimal strategy selection. For optimal strategy selection, a deep reinforcement learning algorithm is developed to select the best set of strategies among all possible projects. These results can provide a better view of how to plan to prevent lake drought. Keywords: drought simulation, Mead lake, entity component system programming, deep reinforcement learning
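As a hint of what an Entity-Component-System layout for such a simulation can look like, the sketch below models water sources and consumers as entities with plain-data components and advances the water balance with a single system function. The component fields, consumption rates, inflow, and time step are assumptions for illustration, not the study's actual simulation.

```python
# Minimal Entity-Component-System sketch for a lake water-balance simulation.
# Components, rates and the time step are hypothetical illustration values.
from dataclasses import dataclass

@dataclass
class Storage:          # component: how much water an entity holds (million m^3)
    volume: float

@dataclass
class Demand:           # component: how much water an entity withdraws per step
    rate: float
    source: str         # entity id of the source it draws from

entities = {
    "lake":        {"storage": Storage(volume=12_000.0)},
    "groundwater": {"storage": Storage(volume=30_000.0)},
    "industry":    {"demand": Demand(rate=15.0, source="lake")},
    "agriculture": {"demand": Demand(rate=40.0, source="lake")},
    "domestic":    {"demand": Demand(rate=10.0, source="groundwater")},
}

def withdrawal_system(entities, inflow_to_lake=30.0):
    """One simulation step: apply inflow, then satisfy each demand."""
    entities["lake"]["storage"].volume += inflow_to_lake
    for name, comps in entities.items():
        demand = comps.get("demand")
        if demand is None:
            continue
        store = entities[demand.source]["storage"]
        taken = min(demand.rate, store.volume)   # cannot take more than is left
        store.volume -= taken

for month in range(12):
    withdrawal_system(entities)
print("lake volume after one year:", round(entities["lake"]["storage"].volume, 1))
```

A reinforcement learning agent would sit on top of such a simulation, choosing among candidate demand-management or supply projects and receiving a reward tied to the simulated lake level.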
Procedia PDF Downloads 90
11386 Thermal and Caloric Imperfections Effect on the Supersonic Flow Parameters with Application for Air in Nozzles
Authors: Merouane Salhi, Toufik Zebbiche, Omar Abada
Abstract:
When the stagnation pressure of a perfect gas increases, the specific heats and their ratio do not remain constant anymore and start to vary with this pressure. The gas does not remain perfect. Its state equation changes and it becomes a real gas. In this case, the effects of molecular size and intermolecular attraction forces intervene to correct the state equation. The aim of this work is to show and discuss the effect of stagnation pressure on the supersonic thermodynamic, physical and geometrical flow parameters, in order to find a general case for a real gas. With the assumption that Berthelot's state equation accounts for molecular size and intermolecular force effects, expressions are developed for analyzing supersonic flow for a thermally and calorically imperfect gas below the molecular dissociation threshold. The design parameters of a supersonic nozzle, like the thrust coefficient, depend directly on the stagnation parameters of the combustion chamber. The application is for air. A computation of the error is made in this case to give the limit of the perfect gas model compared to the real gas model. Keywords: supersonic flow, real gas model, Berthelot's state equation, Simpson's method, condensation function, stagnation pressure
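For reference, the Berthelot equation of state referred to above is commonly written as follows in molar form; the exact coefficient conventions used in the paper are not reproduced here, so take this as the textbook form.

```latex
% Berthelot equation of state (textbook molar form)
P = \frac{RT}{v - b} - \frac{a}{T\,v^{2}},
\qquad
a = \frac{27\,R^{2}T_{c}^{3}}{64\,P_{c}},
\qquad
b = \frac{R\,T_{c}}{8\,P_{c}}
```

The correction term a/(T v^2) is what introduces the pressure and temperature dependence of the specific heats that the abstract discusses, and integrals of the resulting flow relations are evaluated numerically, e.g. by Simpson's rule.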
Procedia PDF Downloads 524
11385 Optimization of Energy Consumption with Various Design Parameters on Office Buildings in Chinese Severe Cold Zone
Authors: Yuang Guo, Dewancker Bart
Abstract:
The primary energy consumption of buildings throughout China was approximately 814 million tons of coal equivalent in 2014, which accounts for 19.12% of China's total primary energy consumption. Moreover, the energy consumption of public buildings takes a bigger share of the total energy consumption than that of urban residential buildings and rural residential buildings. To improve the level of energy demand, various design parameters were chosen. Meanwhile, a series of simulations with Energy Plus (EP-Launch) is performed using a base case model established in Open Studio. The results show that reductions of 16%-23% in total energy demand can be found in the severe cold zone of China, and they can also provide a reference for the architectural design of other similar climate zones. Keywords: energy consumption, design parameters, indoor thermal comfort, simulation study, severe cold climate zone
Procedia PDF Downloads 156
11384 Optimal Feature Extraction Dimension in Finger Vein Recognition Using Kernel Principal Component Analysis
Authors: Amir Hajian, Sepehr Damavandinejadmonfared
Abstract:
In this paper, the issue of dimensionality reduction is investigated in finger vein recognition systems using kernel Principal Component Analysis (KPCA). One aspect of KPCA is finding the most appropriate kernel function for finger vein recognition, as there are several kernel functions which can be used within PCA-based algorithms. In this paper, however, another side of PCA-based algorithms, particularly KPCA, is investigated. The dimension of the feature vector in PCA-based algorithms is of importance, especially when it comes to real-world applications and the usage of such algorithms. It means that a fixed dimension of the feature vector has to be set to reduce the dimension of the input and output data and extract the features from them. Then a classifier is applied to classify the data and make the final decision. We analyze KPCA (Polynomial, Gaussian, and Laplacian) in detail in this paper and investigate the optimal feature extraction dimension in finger vein recognition using KPCA. Keywords: biometrics, finger vein recognition, principal component analysis (PCA), kernel principal component analysis (KPCA)
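A compact sketch of the kind of experiment described: sweep the number of retained KPCA components for several kernels, classify the reduced features, and compare accuracies. It uses scikit-learn's KernelPCA with a nearest-neighbour classifier on synthetic data standing in for finger vein feature vectors; the kernel choices, component grid, and classifier are assumptions, not the authors' exact setup.

```python
# Minimal sketch: accuracy vs. KPCA feature dimension for several kernels,
# on synthetic data standing in for finger vein features (assumed setup).
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import laplacian_kernel
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=100, n_informative=20,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Polynomial and Gaussian are built-in kernel names; Laplacian is passed as a callable.
kernels = {"polynomial": "poly", "gaussian": "rbf", "laplacian": laplacian_kernel}

for name, kernel in kernels.items():
    for n_comp in [5, 10, 20, 40, 80]:             # candidate feature dimensions
        kpca = KernelPCA(n_components=n_comp, kernel=kernel)
        Z_tr = kpca.fit_transform(X_tr)
        Z_te = kpca.transform(X_te)
        acc = KNeighborsClassifier(n_neighbors=3).fit(Z_tr, y_tr).score(Z_te, y_te)
        print(f"kernel={name:10s} dim={n_comp:3d} accuracy={acc:.3f}")
```

The "optimal feature extraction dimension" is then simply the component count at which the accuracy curve for the chosen kernel levels off or peaks.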
Procedia PDF Downloads 365
11383 Neuron Dynamics of Single-Compartment Traub Model for Hardware Implementations
Authors: J. C. Moctezuma, V. Breña-Medina, Jose Luis Nunez-Yanez, Joseph P. McGeehan
Abstract:
In this work, we perform a bifurcation analysis for a single-compartment representation of the Traub model, one of the most important conductance-based models. The analysis focuses on two principal parameters: the injected current and the leakage conductance. Stable and unstable solutions are explored; the Hopf bifurcation and the interpretation of the firing frequency as the current varies are also examined. This study allows control of the neuron dynamics and of the neuron response when these parameters change. Analyses like this are particularly important for several applications, such as tuning parameters in the learning process, neuron excitability tests, and measuring the bursting properties of the neuron. Finally, hardware implementation results were obtained to corroborate these findings. Keywords: Traub model, Pinsky-Rinzel model, Hopf bifurcation, single-compartment models, bifurcation analysis, neuron modeling
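To illustrate the "firing frequency versus injected current" view mentioned above, the sketch below integrates a generic single-compartment conductance-based neuron (textbook Hodgkin-Huxley rates, not the Traub kinetics) over a range of injected currents and counts spikes; the onset of repetitive firing at some critical current is the behaviour that a Hopf bifurcation analysis formalizes. All parameter values are the standard HH set, used purely as a stand-in.

```python
# Minimal sketch: firing frequency vs. injected current for a generic
# single-compartment conductance-based neuron (textbook Hodgkin-Huxley
# parameters, used only as a stand-in for the Traub kinetics).
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4                # mV

def rates(V):
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

def firing_rate(I_inj, t_max=500.0, dt=0.01):
    """Integrate with forward Euler and count spikes (upward 0 mV crossings)."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    spikes, prev_V = 0, V
    for _ in range(int(t_max / dt)):
        am, bm, ah, bh, an, bn = rates(V)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
        prev_V, V = V, V + dt * (I_inj - I_ion) / C
        if prev_V < 0.0 <= V:
            spikes += 1
    return spikes / (t_max / 1000.0)            # spikes per second

for I in [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 15.0]:  # uA/cm^2
    print(f"I = {I:5.1f} uA/cm^2  ->  {firing_rate(I):6.1f} Hz")
```

A fixed-point hardware implementation would replace the floating-point state updates above with quantized arithmetic, which is where analyses of this kind help verify that the qualitative dynamics are preserved.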
Procedia PDF Downloads 323