Search results for: thermodynamic calculation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1645

595 Effect of Corrosion on the Shear Buckling Strength

Authors: Myoung-Jin Lee, Sung-Jin Lee, Young-Kon Park, Jin-Wook Kim, Bo-Kyoung Kim, Song-Hun Chong, Sun-Ii Kim

Abstract:

The ability to resist shear arises mainly from the web panel of steel girders, and as such, the shear buckling strength of these girders has been extensively investigated. For example, Basler reported that when buckling occurs, tension-field action contributes after the buckling strength of the steel is reached. The findings of these studies have been adopted by AASHTO, AISC, and the European code, which provide guidelines for designs aimed at preventing shear buckling. Steel girders are susceptible to corrosion resulting from exposure to natural elements such as rainfall, humidity, and temperature. This corrosion reduces the cross-section of the web panel, thereby decreasing the shear strength. The reduction of the panel section has a significant effect on the maintenance of the bridge. However, most conventional designs overlook the influence of corrosion when calculating the shear buckling strength, and hence over-design is common. Therefore, in this study, a steel girder with an aspect ratio (a/d) of 1:1 was used, with a 6-mm-thick web panel, a 16-mm-thick flange, and 12-mm-thick intermediate stiffeners. The total length was set to that of the default model (3200 mm). The effect of corrosion on shear buckling was investigated by varying the corrosion volume, the shape of the corrosion patterns, and the angular change in the tension field of the shear buckling strength. This study provides basic data that will enable designs incorporating values closer to the actual shear buckling strength than those used in most conventional designs.

Keywords: corrosion, shear buckling strength, steel girder, shear strength

Procedia PDF Downloads 375
594 Kinetics of Hydrogen Sulfide Removal from Biogas Using Biofilm on Packed Bed of Salak Fruit Seeds

Authors: Retno A. S. Lestari, Wahyudi B. Sediawan, Siti Syamsiah, Sarto

Abstract:

Sulfur-oxidizing bacteria were isolated and then grown on salak fruit seeds, forming a biofilm on their surface. The performance of the biofilm in sulfide removal was observed experimentally. To do so, the salak fruit seeds carrying the biofilm were used as packing material in a cylindrical column. Biogas obtained from biological treatment, containing 27.95 ppm of hydrogen sulfide, was passed through the packed bed. The hydrogen sulfide in the biogas was absorbed into the biofilm and then degraded by the microbes within it. The hydrogen sulfide concentrations at various axial positions and times were analyzed. A set of simple kinetic models for the rate of sulfide removal and bacterial growth was proposed. Since the biofilm is very thin, the sulfide concentration in the biofilm at a given axial position was assumed to be uniform. The resulting simultaneous ordinary differential equations were then solved numerically using the Runge-Kutta method. The values of the parameters were obtained by curve fitting. The accuracy of the proposed model was tested by comparing its predictions with the experimental data. The model was found to describe the removal of hydrogen sulfide by the packed-bed biofilter well. The biofilter removed 89.83% of the hydrogen sulfide in the feed after 2.5 hr of operation at a biogas flow rate of 30 L/hr.
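As a minimal illustration of the numerical approach the abstract describes, the sketch below integrates a simple first-order sulfide-removal rate with a classical fourth-order Runge-Kutta scheme. The rate law and the rate constant k are assumed for illustration only; they are not the study's fitted kinetic model.

```python
# Illustrative sketch (assumed kinetics, not the authors' model): a
# first-order H2S removal rate dC/dt = -k*C integrated with classical RK4.

def rk4_step(f, t, y, h):
    """One fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(c0, k, t_end, h=0.01):
    """Integrate dC/dt = -k*C from t=0 to t_end; return final concentration."""
    t, c = 0.0, c0
    while t < t_end - 1e-12:
        c = rk4_step(lambda t, y: -k * y, t, c, h)
        t += h
    return c

c0 = 27.95       # inlet H2S concentration from the abstract, ppm
k = 0.92         # assumed first-order rate constant, 1/hr (illustrative)
c_out = simulate(c0, k, t_end=2.5)
removal = 100 * (1 - c_out / c0)
print(f"outlet = {c_out:.2f} ppm, removal = {removal:.1f} %")
```

With the assumed k, the computed removal over 2.5 hr comes out near the ~90% level the abstract reports, but the match is not a validation of the study's parameters.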

Keywords: sulfur-oxidizing bacteria, salak fruit seeds, biofilm, packing material, biogas

Procedia PDF Downloads 222
593 Consumer Welfare in the Platform Economy

Authors: Prama Mukhopadhyay

Abstract:

From transport to food, the platform economy and digital markets have taken over almost every sphere of consumers' lives. Sellers and buyers are connected through platforms, which act as intermediaries. This has made consumers' lives easier in terms of time, price, choice, and other factors. Having said that, there are several concerns regarding platforms. There are competition law concerns, such as unfair pricing and deep discounting by platforms, which affect consumer welfare. Beyond that, the biggest problem is the lack of transparency with respect to business models: how a platform operates, how prices are calculated, and so on. In most cases, consumers are unaware of how their personal data are being used, and of how algorithms use those data to determine the price of a product or even to select the products shown based on their previous searches. Using personal or non-personal data without the consumer's consent is a serious legal concern. In addition, another major issue lies in the question of liability: if a dispute arises, who is responsible, the seller or the platform? For example, if someone orders food through a food delivery app and the food is bad, who is liable: the restaurant or the delivery platform? In this paper, the researcher examines the legal concerns related to the platform economy from the consumer protection and consumer welfare perspectives. The paper analyses cases from different jurisdictions and the approaches taken by their judiciaries. The author compares the existing legislation of the EU, the US, and several Asian countries and highlights best practices.

Keywords: competition, consumer, data, platform

Procedia PDF Downloads 144
592 Gadolinium-Based Polymer Nanostructures as Magnetic Resonance Imaging Contrast Agents

Authors: Franca De Sarno, Alfonso Maria Ponsiglione, Enza Torino

Abstract:

Recent advances in diagnostic imaging technology have significantly contributed to a better understanding of the specific changes associated with disease progression. Among imaging modalities, Magnetic Resonance Imaging (MRI) is a noninvasive diagnostic technique that can discriminate between healthy and diseased tissues by providing 3D data, although it suffers from low sensitivity and long acquisition times. To improve the enhancement of MRI signals, some imaging exams require intravenous administration of contrast agents (CAs). Recently, emerging research has reported progressive deposition of these drugs, in particular gadolinium-based contrast agents (GBCAs), in the body many years after multiple MRI scans. These findings confirm the need for a biocompatible system able to boost a clinically relevant Gd-chelate. To this end, several approaches based on engineered nanostructures have been proposed to overcome the common limitations of conventional CAs, such as insufficient signal-to-noise ratios due to low relaxivity, and poor safety profiles. In particular, nanocarriers labeled or loaded with CAs, capable of carrying high payloads, have been developed. Currently, there is no comprehensive understanding of the thermodynamic contributions that enable a biopolymer matrix to boost the efficacy of conventional CAs. Thus, considering the importance of MRI in diagnosing diseases, we report here a successful example of the next generation of these agents, in which a commercial gadolinium chelate is incorporated into a biopolymer nanostructure formed by cross-linked hyaluronic acid (HA), with improved relaxation properties. In addition, the basic principles ruling biopolymer-CA interactions are highlighted from the perspective of their influence on the relaxometric properties of the CA, adopting a multidisciplinary experimental approach.
Based on these findings, it is clear that the main point is to increase the rigidification of readily available Gd-CAs within the biopolymer matrix by controlling the water dynamics, the physicochemical interactions, and the polymer conformations. Finally, the acquired knowledge about polymer-CA systems has been applied to the development of Gd-based HA nanoparticles with enhanced relaxometric properties.

Keywords: biopolymers, MRI, nanoparticles, contrast agent

Procedia PDF Downloads 149
591 Environmental Modeling of Storm Water Channels

Authors: L. Grinis

Abstract:

Turbulent flow in complex geometries receives considerable attention due to its importance in many engineering applications, and it has been the subject of interest for many researchers. One such application is the design of storm water channels, which requires testing through physical models. The main practical limitation of physical models is the so-called "scale effect": in many cases only primary physical mechanisms can be correctly represented, while secondary mechanisms are often distorted. These observations form the basis of our study, which centered on problems associated with the design of storm water channels near the Dead Sea in Israel. To help reach a final design decision we used several physical models. Our research showed good agreement between the laboratory tests and theoretical calculations, and allowed us to study different effects of fluid flow in an open channel. We determined that problems of this nature cannot be solved by theoretical calculation and computer simulation alone. This study demonstrates the use of physical models to help resolve very complicated problems of fluid flow through baffles and similar structures. The study applies these models and observations to different constructions and multiphase water flows, including those carrying sand and stone particles, in a significant attempt to bring the testing laboratory into closer association with reality.

Keywords: open channel, physical modeling, baffles, turbulent flow

Procedia PDF Downloads 284
590 Meteorological Risk Assessment for Ships with Fuzzy Logic Designer

Authors: Ismail Karaca, Ridvan Saracoglu, Omer Soner

Abstract:

Fuzzy logic, an advanced method to support decision-making, is used by scientists in many disciplines. Fuzzy programming is a product of fuzzy logic, fuzzy rules, and implication. In marine science, fuzzy programming for ships is increasing dramatically alongside autonomous ship studies. In this paper, a program to support the decision-making process in ship navigation has been designed. The program was built from fuzzy logic and rules, taking marine accidents and expert opinions into account. After the program was designed, it was tested against 46 ship accidents reported by the Transportation Safety Investigation Center of Turkey. Wind speed, sea condition, visibility, and day/night ratio were used as input data. They were converted into a risk factor within the Fuzzy Logic Designer application using fuzzy rules set by marine experts. Finally, the experts' meteorological risk factor for each accident was compared with the program's risk factor, and the error rate was calculated. The main objective of this study is to improve the navigational safety of ships by using an advanced decision-support model. According to the results, fuzzy programming is a robust model that supports safe navigation.

Keywords: calculation of risk factor, fuzzy logic, fuzzy programming for ship, safety navigation of ships

Procedia PDF Downloads 189
589 Critical Parameters of a Square-Well Fluid

Authors: Hamza Javar Magnier, Leslie V. Woodcock

Abstract:

We report extensive molecular dynamics (MD) computational investigations into the thermodynamic description of supercritical properties for a model fluid that is the simplest realistic representation of atoms or molecules. The pair potential is a hard-sphere repulsion of diameter σ with a very short attraction of length λσ. When λ = 1.005 the range is so short that the model atoms are referred to as "adhesive spheres". Molecular dimers, trimers, etc., up to large clusters, or droplets, of many adhesive-sphere atoms are unambiguously defined. This in turn defines percolation transitions at the molecular level that bound the existence of gas and liquid phases at supercritical temperatures, and that define the existence of a supercritical mesophase. Both liquid and gas phases are seen to terminate at the loci of percolation transitions and, below a second characteristic temperature (Tc2), are separated by the supercritical mesophase. An analysis of the distribution of clusters in the gas, meso-, and liquid phases confirms the colloidal nature of this mesophase. The general phase behaviour is compared both with experimental properties of the water-steam supercritical region and with the formally exact cluster theory of Mayer and Mayer. Both are found to be consistent with the present findings that, in this system, the supercritical mesophase narrows in density with increasing T > Tc and terminates at a higher Tc2 at a confluence of the primary percolation loci. The expanded plot of the MD data points in the mesophase of 7 critical and supercritical isotherms highlights this narrowing in density of the linear-slope region of the mesophase as temperature is increased above the critical. This linearity in the mesophase implies the existence of a linear combination rule between gas and liquid, which is an extension of the lever rule in the subcritical region and can be used to obtain critical parameters without resorting to experimental data in the two-phase region.
Using this combination rule, the calculated critical parameters Tc = 0.2007 and Pc = 0.0278 are found to agree with the values reported by Largo and coworkers. The properties of this supercritical mesophase are shown to be consistent with an alternative description of the phenomenon of critical opalescence seen in the supercritical region of both molecular and colloidal-protein supercritical fluids.
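For background, the subcritical lever rule that the combination rule extends can be written in its textbook form (this is the standard two-phase relation, not the authors' supercritical extension):

```latex
% Standard subcritical lever rule: for an overall density \rho between the
% coexisting gas and liquid densities, the phase fractions are
x_{\mathrm{liq}} = \frac{\rho - \rho_{\mathrm{gas}}}{\rho_{\mathrm{liq}} - \rho_{\mathrm{gas}}},
\qquad x_{\mathrm{gas}} = 1 - x_{\mathrm{liq}},
% and any molar property A combines linearly between the two branches:
\qquad A(\rho) = x_{\mathrm{gas}}\, A_{\mathrm{gas}} + x_{\mathrm{liq}}\, A_{\mathrm{liq}}.
```

The abstract's claim is that an analogous linear combination holds between the gas and liquid branches across the supercritical mesophase.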

Keywords: critical opalescence, supercritical, square-well, percolation transition, critical parameters

Procedia PDF Downloads 521
588 Investigation of Single Particle Breakage inside an Impact Mill

Authors: E. Ghasemi Ardi, K. J. Dong, A. B. Yu, R. Y. Yang

Abstract:

In the current work, a numerical model based on the discrete element method (DEM) was developed to provide information on particle dynamics and impact conditions inside a laboratory-scale impact mill (Fritsch). It showed that each particle mostly experiences three impacts inside the mill. While the first impact frequently happens at the front surface of the rotor's rib, the second impact most often occurs at the side surfaces of the rotor's rib. It was also shown that while the first impact happens at a small impact angle, mostly around 35º, the second impact happens at around 70º, which is close to a normal impact. Analysis of the impact energy revealed that as the mill speed varied from 6000 to 14000 rpm, the ratio of the first impact's average energy to the minimum energy required to break a particle (Wₘᵢₙ) increased from 0.30 to 0.85. Moreover, the second impact was seen to impart intense impact energy to the particle, which can be considered the main cause of particle splitting. Finally, the information obtained from the DEM simulations, along with data from the experiments, was fed into semi-empirical equations to find the selection and breakage functions. Then, using a back-calculation approach, those parameters were used to predict the PSDs of ground particles under different impact energies. The results were compared with the experimental results and showed reasonable accuracy and predictive ability.

Keywords: single particle breakage, particle dynamic, population balance model, particle size distribution, discrete element method

Procedia PDF Downloads 291
587 An Approximate Formula for Calculating the Fundamental Mode Period of Vibration of Practical Building

Authors: Abdul Hakim Chikho

Abstract:

Most international codes allow the use of an equivalent lateral load method for designing practical buildings to withstand earthquake actions. This method requires calculating an approximation to the fundamental mode period of vibration of the building. Several empirical equations have been suggested for approximating the fundamental periods of different types of structures. Most of these equations are known to provide only a crude approximation, and repeating the calculation with a more accurate formula is usually required. In this paper, a new formula to calculate a satisfactory approximation of the fundamental period of a practical building is proposed. This formula takes into account both the mass and the stiffness of the building and is therefore more rational than the conventional empirical equations. To verify the accuracy of the proposed formula, several examples have been solved, in which the fundamental mode periods of several framed buildings were calculated using both the proposed formula and the conventional empirical equations. Comparing the results with those obtained from a dynamic computer analysis has shown that the proposed formula provides a more accurate estimate of the fundamental periods of practical buildings. Since the proposed method is simple to use and requires only minimal computing effort, it is believed to be ideally suited for design purposes.
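The abstract does not reproduce the proposed formula itself. For background, a classical estimate that likewise uses both mass and stiffness, and which many seismic codes already permit, is Rayleigh's method:

```latex
% Rayleigh's method (classical background, not the paper's new formula):
% w_i are the story weights and \delta_i the elastic lateral displacements
% produced when those weights are applied as horizontal forces.
T = 2\pi \sqrt{\frac{\displaystyle\sum_{i=1}^{n} w_i\,\delta_i^{2}}
               {\,g \displaystyle\sum_{i=1}^{n} w_i\,\delta_i\,}}
```

Any mass-and-stiffness-based formula of this kind requires the lateral displacements, which is why it is more rational, but also more work, than period equations based on building height alone.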

Keywords: earthquake, fundamental mode period, design, building

Procedia PDF Downloads 284
586 Optoelectronic Hardware Architecture for Recurrent Learning Algorithm in Image Processing

Authors: Abdullah Bal, Sevdenur Bal

Abstract:

This paper proposes a new type of hardware for training cellular neural networks (CNN), using an optical joint transform correlation (JTC) architecture for image feature extraction. CNNs require much more computation during the training stage than during testing. Since optoelectronic hardware offers parallel, high-speed processing for 2D data, the CNN training algorithm can be realized using Fourier optics techniques. JTC employs lenses and CCD cameras with a laser beam to realize 2D matrix multiplication and summation at the speed of light. Therefore, in each training iteration, the JTC inherently carries most of the computational burden, and the rest of the mathematical computation is realized digitally. Bipolar data are encoded by phase, and the summation of correlation operations is realized using multi-object input joint images. The overlapping properties of the JTC are then utilized to sum two cross-correlations, further reducing the computation required in the training stage. Phase-only JTC does not require data rearrangement, electronic pre-calculation, or strict system alignment. The proposed system can be incorporated simultaneously with various optical image processing or optical pattern recognition techniques within the same optical system.

Keywords: CNN training, image processing, joint transform correlation, optoelectronic hardware

Procedia PDF Downloads 506
585 Vision-Based Collision Avoidance for Unmanned Aerial Vehicles by Recurrent Neural Networks

Authors: Yao-Hong Tsai

Abstract:

Thanks to advances in sensor technology, video surveillance has become the main means of security control in every big city in the world. Surveillance is usually used by governments for intelligence gathering, the prevention of crime, the protection of a process, person, group, or object, or the investigation of crime. Many surveillance systems based on computer vision technology have been developed in recent years. Moving-target tracking, finding and tracking objects of interest in mobile aerial surveillance for civilian applications, is the most common task for an Unmanned Aerial Vehicle (UAV). This paper focuses on vision-based collision avoidance for UAVs using recurrent neural networks. First, images from the cameras on the UAV were fused using a deep convolutional neural network. Then, a recurrent neural network was constructed to obtain high-level image features for object tracking and to extract low-level image features for noise reduction. The system distributes the computation between local and cloud platforms to efficiently perform object detection, tracking, and collision avoidance across multiple UAVs. Experiments on several challenging datasets showed that the proposed algorithm outperforms state-of-the-art methods.

Keywords: unmanned aerial vehicle, object tracking, deep learning, collision avoidance

Procedia PDF Downloads 160
584 Methods of Variance Estimation in Two-Phase Sampling

Authors: Raghunath Arnab

Abstract:

Two-phase sampling, also known as double sampling, was introduced in 1938. In two-phase sampling, samples are selected in phases. In the first phase, a relatively large sample is selected by some suitable sampling design, and only information on the auxiliary variable is collected. During the second phase, a smaller sample is selected, either from the sample selected in the first phase or from the entire population, using a suitable sampling design, and information on both the study and auxiliary variables is collected. Evidently, two-phase sampling is useful if the auxiliary information is relatively easier and cheaper to collect than the study variable, and if the relationship between the study and auxiliary variables is strong. If the sample is selected in more than two phases, the resulting design is called multi-phase sampling. In this article we consider how data collected in the first phase can be used at the estimation stage, for stratification, for sample selection, and in combinations of these, in the second phase, in a unified setup applicable to any sampling design and to wide classes of estimators. The problem of variance estimation is also considered. The variance of an estimator is essential for assessing the precision of survey estimates, calculating confidence intervals, determining optimal sample sizes, and testing hypotheses, among other uses. Although the variance is a non-negative quantity, its estimators may not be non-negative. If an estimator of variance is negative, it cannot be used for estimating confidence intervals, testing hypotheses, or measuring sampling error. The non-negativity properties of the variance estimators are also studied in detail.

Keywords: auxiliary information, two-phase sampling, varying probability sampling, unbiased estimators

Procedia PDF Downloads 588
583 Analysis of Attention to the Confucius Institute from Domestic and Foreign Mainstream Media

Authors: Wei Yang, Xiaohui Cui, Weiping Zhu, Liqun Liu

Abstract:

The rapid development of the Confucius Institute is attracting more and more attention from mainstream media around the world. Mainstream media play a large role in public information dissemination and public opinion. This study analyzes the correlation and functional relationships between domestic and foreign mainstream media by examining the volume of reports on the Confucius Institute. Three correlation measures, the Pearson correlation coefficient (PCC), the Spearman correlation coefficient (SCC), and the Kendall rank correlation coefficient (KCC), were applied to analyze the correlations among mainstream media from three regions: the mainland of China; Hong Kong and Macao (the two special administrative regions of China, denoted as SARs); and overseas countries excluding China, such as the United States, England, and Canada. Further, the paper measures the functional relationships among the regions using a regression model. The experimental analyses found high correlations among mainstream media from the different regions. Additionally, we found a linear relationship between the mainstream media of overseas countries and those of the SARs, based on a dataset obtained by crawling the websites of 106 mainstream media outlets from 2004 to 2014.
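The three correlation measures named above can be sketched as follows. The yearly report counts here are made-up illustrative numbers, not the paper's crawled media data:

```python
# Minimal implementations of the three correlation coefficients the study
# applies: Pearson (linear), Spearman (rank-based), Kendall (pair-based).

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(v):
    """Rank positions 1..n (no tie handling, for brevity)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    # Spearman = Pearson correlation of the ranks
    return pearson(ranks(x), ranks(y))

def kendall(x, y):
    # Kendall tau-a: (concordant - discordant) / total pairs
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# hypothetical yearly report counts, mainland vs. overseas media
mainland = [120, 150, 180, 260, 300]
overseas = [80, 95, 140, 200, 220]
print(pearson(mainland, overseas),
      spearman(mainland, overseas),
      kendall(mainland, overseas))
```

Because the illustrative series rise monotonically together, the two rank-based measures equal 1 exactly, while Pearson is slightly below 1, which is the kind of contrast that motivates reporting all three.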

Keywords: mainstream media, Confucius Institute, correlation analysis, regression model

Procedia PDF Downloads 318
582 Theoretical Insight into Ligand Free Manganese Catalyzed C-O Coupling Protocol for the Synthesis of Biaryl Ethers

Authors: Carolin Anna Joy, Rohith K. R, Rehin Sulay, Parvathy Santhoshkumar, G.Anil Kumar, Vibin Ipe Thomas

Abstract:

Ullmann coupling reactions are gaining great relevance owing to their contribution to the synthesis of biologically and pharmaceutically important compounds. Palladium and many other heavy metals have proven their excellent ability in coupling reactions, but their toxicity is a concern. Most first-row transition metals are also toxic, with the notable exceptions of iron and manganese. The suitability of manganese as a catalyst is attracting great interest in oxidation, reduction, C-H activation, coupling reactions, etc. In this presentation, we discuss the thermochemistry of the ligand-free manganese-catalyzed C-O coupling reaction between phenol and an aryl halide for the synthesis of biaryl ethers, using density functional theory techniques. The mechanism involves an oxidative addition-reductive elimination sequence. The transition states for both steps were studied and confirmed using Intrinsic Reaction Coordinate (IRC) calculations. The barrier height of the reaction was also calculated from the rate-determining step. Alternative mechanistic pathways were also examined. To gain further insight into the mechanism, substrates bearing various functional groups were considered in our study to determine their effect on the feasibility of the reaction.

Keywords: density functional theory, molecular modeling, ligand-free, biaryl ethers, Ullmann coupling

Procedia PDF Downloads 146
581 Compression Index Estimation by Water Content and Liquid Limit and Void Ratio Using Statistics Method

Authors: Lizhou Chen, Abdelhamid Belgaid, Assem Elsayed, Xiaoming Yang

Abstract:

The compression index is essential in foundation settlement calculations. The traditional method for determining the compression index is the consolidation test, which is expensive and time-consuming. Many researchers have therefore used regression methods to develop empirical equations for predicting the compression index from soil properties. Based on a large number of compression index data collected from consolidation tests, the accuracy of several popular empirical equations was assessed. It was found that the primary compression index is significantly overestimated by some equations and underestimated by others. Sensitivity analyses of soil parameters including water content, liquid limit, and void ratio were performed. The results indicate that the compression index obtained from the void ratio is the most accurate. An ANOVA (analysis of variance) demonstrates that equations with multiple soil parameters do not provide better predictions than equations with a single soil parameter; in other words, it is not necessary to develop relationships between the compression index and multiple soil parameters. Meanwhile, it was noted that the secondary compression index is approximately 0.7-5.0% of the primary compression index, with an average of 2.0%. Finally, prediction equations developed using a power regression technique are proposed, which provide more accurate predictions than existing equations.
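As a sketch of the power-regression technique mentioned above, the snippet below fits a relation of the form Cc = a·e0^b (compression index vs. void ratio) by least squares in log-log space. The void-ratio data and the coefficients are synthetic, chosen for illustration; they are not the paper's consolidation-test database or its published equation.

```python
import math

def fit_power(x, y):
    """Least-squares fit of y = a * x^b via linearization ln y = ln a + b ln x."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((p - mx) * (q - my) for p, q in zip(lx, ly))
         / sum((p - mx) ** 2 for p in lx))
    a = math.exp(my - b * mx)
    return a, b

e0 = [0.6, 0.8, 1.0, 1.2, 1.5, 1.8]        # void ratios (assumed values)
cc = [0.30 * v ** 1.2 for v in e0]          # synthetic "measured" Cc values
a, b = fit_power(e0, cc)
print(f"Cc ≈ {a:.3f} * e0^{b:.3f}")
```

On noiseless synthetic data the fit recovers the generating coefficients exactly; with real consolidation data the same procedure yields the best-fit power law and its scatter.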

Keywords: compression index, clay, settlement, consolidation, secondary compression index, soil parameter

Procedia PDF Downloads 163
580 A Simple Computational Method for the Gravitational and Seismic Soil-Structure-Interaction between New and Existent Buildings Sites

Authors: Nicolae Daniel Stoica, Ion Mierlus Mazilu

Abstract:

This is a numerical study that addresses the design of new buildings in the 3D vicinity of existing buildings. With today's continuous development and congestion of urban centers, there is a big question about the influence of new buildings on an already existing neighboring site. Thus, in this study, we focused on how existing buildings may be affected by newly constructed ones, and on how far this influence really extends. Modeling the interaction between buildings is not simple anywhere in the world, and Romania is no exception. Unfortunately, designers most often do not perform calculations that determine how close to reality these 3D influences are, whether by the simplified method or by more advanced methods. In much of the literature, constructing a "shield" (piles or diaphragm walls) is considered sufficient to stop the influence between buildings, and so the soil under the structure is often ignored in calculation models. The main reason the soil is neglected in the analysis is the complexity of modeling the interaction between soil and structure. In this paper, based on a new, simple but efficient methodology, we determined, for a number of case studies, the influence of a new building on an existing one in terms of soil-structure interaction and its effect on structural behavior. The study covers the additional settlement that may occur during the execution of the new works and after their completion. It also presents the stress diagrams and deflections in the soil for both the original case and the final stage. This is necessary to assess to what extent the new building is expected to impact existing areas.

Keywords: soil, structure, interaction, piles, earthquakes

Procedia PDF Downloads 291
579 Design, Construction and Evaluation of a Mechanical Vapor Compression Distillation System for Wastewater Treatment in a Poultry Company

Authors: Juan S. Vera, Miguel A. Gomez, Omar Gelvez

Abstract:

Water is Earth's most valuable resource, and its scarcity is a critical problem in today's society. Untreated wastewater contributes to this situation, especially wastewater from industrial activities, as it degrades the quality of receiving water bodies, annihilating all kinds of life and bringing disease to the people in contact with them. An effective solution to this problem is distillation, which removes most contaminants. However, this approach must also be energy-efficient in order to appeal to industry. On this point, most water distillation treatments fail, with the exception of the Mechanical Vapor Compression (MVC) distillation system, which achieves high efficiency thanks to the energy input by a compressor and the latent heat exchange. This paper presents the design, construction, and evaluation of an MVC distillation system for the major Colombian poultry company Avidesa Macpollo SA. The system will be located at the principal slaughterhouse in the state of Santander, and it will work alongside the Gas Energy Mixing (GEM) system to treat the wastewater from the plant. The main goal of the MVC distiller, rarely used in this type of application, is to reduce the chloride, Chemical Oxygen Demand (COD), and Biological Oxygen Demand (BOD) levels in accordance with state regulations, since the GEM alone cannot decrease them sufficiently. The MVC distillation system consists of three components: the evaporator/condenser heat exchanger, where the distillation takes place; a low-pressure compressor, which supplies the energy that creates the temperature differential between the evaporator and condenser cavities; and a preheater, which recovers the remaining energy from the distillate. The model equations describing how the compressor power consumption, heat exchange area, and distillate production are related are based on a thermodynamic balance and heat transfer analysis, with correlations taken from the literature.
Finally, the design calculations and the measurements of the installation are compared, showing agreement with the predictions for distillate production and power consumption as the temperature difference across the evaporator/condenser changes.

Keywords: mechanical vapor compression, distillation, wastewater, design, construction, evaluation

Procedia PDF Downloads 159
578 Mathematical Modeling for the Calculation of the Absorbed Dose in Uranium Production Workers from Genetic Effects

Authors: P. Kazymbet, G. Abildinova, K.Makhambetov, M. Bakhtin, D. Rybalkina, K. Zhumadilov

Abstract:

Cytogenetic research was conducted on workers of the Stepnogorsk Mining-Chemical Combine (Akmola region), with the study of 26,341 chromosomal metaphases. Using regression analysis with the program DataFit, version 5.0, the dependence between exposure dose and the following cytogenetic indices was studied: the frequency of aberrant cells, the frequency of chromosomal aberrations, and the combined frequency of dicentric chromosomes and centric rings. Experimental data on the "dose-effect" calibration curves enabled the development of a mathematical model that allows the absorbed dose at the time of the study to be calculated from the frequencies of aberrant cells, chromosomal aberrations, and the sums of dicentric chromosomes and centric rings. In the dose range of 0.1 Gy to 5.0 Gy, the dependence of the cytogenetic parameters on dose followed these equations: Y = 0.0067e^(0.3307x) (R² = 0.8206) for the frequency of chromosomal aberrations; Y = 0.0057e^(0.3161x) (R² = 0.8832) for the frequency of cells with chromosomal aberrations; and Y = 5E-05·e^(0.6383x) (R² = 0.6321) for the combined frequency of dicentric chromosomes and centric rings per cell. On the basis of the cytogenetic parameters and the regression equations, the calculated absorbed dose in the uranium production workers at the time of the study did not exceed 0.3 Gy.
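Recovering a dose from an observed aberration frequency amounts to inverting the fitted exponential calibration, Y = a·e^(bx). A minimal sketch, using the coefficients quoted in the abstract (the observed frequency plugged in below is illustrative):

```python
import math

# Invert the calibration Y = a * exp(b * x) to estimate the absorbed
# dose x (Gy) from an observed cytogenetic frequency Y.

def dose_from_frequency(y, a, b):
    """Estimated dose x = ln(y / a) / b for the model y = a*exp(b*x)."""
    return math.log(y / a) / b

# Frequency of chromosomal aberrations: Y = 0.0067 * exp(0.3307 * x).
# An illustrative observed frequency of 0.0074 gives a dose near 0.3 Gy,
# consistent with the abstract's reported upper bound.
dose = dose_from_frequency(0.0074, a=0.0067, b=0.3307)
```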

Keywords: Stepnogorsk, mathematical modeling, cytogenetic, dicentric chromosomes

Procedia PDF Downloads 478
577 Atomistic Insight into the Trapped Oil Droplet/Nanofluid System in Nanochannels

Authors: Yuanhao Chang, Senbo Xiao, Zhiliang Zhang, Jianying He

Abstract:

The role of nanoparticles (NPs) in enhanced oil recovery (EOR) is being increasingly emphasized. In this study, the motion of NPs and the local stress distribution in a trapped oil droplet/nanofluid system in nanochannels are studied with coarse-grained modeling and molecular dynamics simulations. The results illustrate three motion patterns for NPs: hydrophilic NPs are more likely to adsorb on the channel walls and stay near the three-phase contact areas, hydrophobic NPs move inside the oil droplet as clusters, and mixed-wettability NPs are trapped at the oil-water interface. NPs in each pattern affect the flow of the fluid and the interfacial thickness to varying degrees. Based on the calculation of atomistic stress, it is observed that higher stress values occur where the NPs aggregate, and different occurrence patterns correspond to specific local stress distributions. Significantly, in the three-phase contact area for hydrophilic NPs, a local stress distribution close to the pattern of structural disjoining pressure is observed, which demonstrates the existence of structural disjoining pressure in molecular dynamics simulation for the first time. Our results guide the design and screening of NPs for EOR and provide a basic understanding of nanofluid applications.
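The atomistic stress mentioned above is commonly computed as a per-atom virial. A minimal sketch of the pairwise (potential) contribution, using a Lennard-Jones pair force with illustrative parameters rather than the study's coarse-grained force field:

```python
import numpy as np

# Per-atom virial stress (pairwise part only): each pair (i, j)
# contributes half of the outer product r_ij (x) f_ij to atom i.
# Parameters and positions are illustrative, in reduced LJ units.

EPS, SIG = 1.0, 1.0  # assumed Lennard-Jones parameters

def lj_force(rij):
    """Force vector on atom i exerted by atom j (LJ 12-6 potential)."""
    r = np.linalg.norm(rij)
    mag = 24 * EPS * (2 * (SIG / r) ** 12 - (SIG / r) ** 6) / r
    return mag * rij / r

def virial_stress(positions, volume_per_atom):
    """Potential part of the per-atom virial stress tensor (3x3 per atom)."""
    n = len(positions)
    stress = np.zeros((n, 3, 3))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rij = positions[i] - positions[j]
            fij = lj_force(rij)
            stress[i] += 0.5 * np.outer(rij, fij)
    return -stress / volume_per_atom

pos = np.array([[0.0, 0.0, 0.0], [1.12, 0.0, 0.0]])  # near the LJ minimum
s = virial_stress(pos, volume_per_atom=1.0)
```

Mapping this quantity over a grid around the aggregated NPs is what produces local stress distributions such as the one compared with structural disjoining pressure.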

Keywords: local stress distribution, nanoparticles, enhanced oil recovery, molecular dynamics simulation, trapped oil droplet, structural disjoining pressure

Procedia PDF Downloads 134
576 A Case Study of Ontology-Based Sentiment Analysis for Fan Pages

Authors: C. -L. Huang, J. -H. Ho

Abstract:

Social media has become more and more important in our lives, and many enterprises promote their services and products to fans via social media. The positive or negative sentiment of the feedback from fans is very important for enterprises seeking to improve their products, services, and promotional activities. The purpose of this paper is to understand the sentiment of fans' responses by analyzing the responses posted by fans on Facebook. The entity and aspect of each fan response were analyzed based on a predefined ontology. The ontology for cell phone sentiment analysis consists of the following top-level aspect categories: overall, shape, hardware, brand, price, and service, each of which consists of several sub-categories. All aspects of a fan response were found based on the ontology, and their corresponding sentiment terms were found using a lexicon-based approach. The sentiment scores for the aspects of each fan response were obtained by summing the sentiment terms in the response, and the frequency of 'likes' was also weighted into the sentiment score calculation. Three famous cell phone fan pages on Facebook were selected as demonstration cases to evaluate the performance of the proposed methodology, with judgments by several human domain experts serving as the baseline for comparison. The performance of the proposed approach was as good as that of human judgment in precision, recall, and F1-measure.
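The scoring step can be sketched as follows. The tiny ontology, lexicon, and like-weighting scheme below are illustrative assumptions standing in for the paper's actual resources:

```python
# Hypothetical ontology-driven aspect scoring: match a response's words
# against aspect trigger terms, sum lexicon polarities, and weight the
# result by the response's number of likes.

ONTOLOGY = {  # top-level aspect -> trigger words (illustrative)
    "price": {"price", "cost", "expensive", "cheap"},
    "hardware": {"battery", "camera", "screen"},
}
LEXICON = {"great": 1, "love": 1, "poor": -1, "expensive": -1}

def score_response(text, likes):
    """Return an aspect -> sentiment score map for one fan response."""
    words = set(text.lower().split())
    weight = 1 + likes / 10  # assumed like-weighting scheme
    scores = {}
    for aspect, triggers in ONTOLOGY.items():
        if words & triggers:  # the response mentions this aspect
            polarity = sum(LEXICON.get(w, 0) for w in words)
            scores[aspect] = polarity * weight
    return scores

s = score_response("love the camera", likes=10)  # -> {"hardware": 2.0}
```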

Keywords: opinion mining, ontology, sentiment analysis, text mining

Procedia PDF Downloads 232
575 The Characteristics of Quantity Operation for 2nd and 3rd Grade Mathematics Slow Learners

Authors: Pi-Hsia Hung

Abstract:

The development of mathematical competency has individual benefits as well as benefits to the wider society. Children who begin school behind their peers in their understanding of number, counting, and simple arithmetic are at high risk of staying behind throughout their schooling. The development of effective strategies for improving the educational trajectory of these individuals will be contingent on identifying the areas of early quantitative knowledge that influence later mathematics achievement. A computer-based quantity assessment was developed in this study to investigate the characteristics of 2nd and 3rd grade slow learners in quantity. The concept of quantification involves understanding measurements, counts, magnitudes, units, indicators, relative size, and numerical trends and patterns. Fifty-five tasks of quantitative reasoning—such as number sense, mental calculation, estimation, and assessment of the reasonableness of results—are included as quantity problem solving. Thus, quantity is defined in this study as applying knowledge of number and number operations in a wide variety of authentic settings. Around 1,000 students were tested and categorized into four performance levels. Students' quantity ability correlated more strongly with their school math grades than with other subjects, and around 20% of the students fell below the basic level. The implications of the preliminary item map for intervention design are discussed.

Keywords: mathematics assessment, mathematical cognition, quantity, number sense, validity

Procedia PDF Downloads 247
574 A Fuzzy Inference Tool for Assessing Cancer Risk from Radiation Exposure

Authors: Bouharati Lokman, Bouharati Imen, Bouharati Khaoula, Bouharati Oussama, Bouharati Saddek

Abstract:

Ionizing radiation exposure is an established cancer risk factor. Compared to other common environmental carcinogens, it is relatively easy to determine organ-specific radiation dose, and as a result, radiation dose-response relationships tend to be highly quantified. Nevertheless, there can be considerable uncertainty about questions of radiation-related cancer risk as they apply to risk protection and public policy, and the interpretations of interested parties can differ from one person to another. The tools used in analyzing the risk of developing cancer due to radiation are characterized by uncertainty, related to the exposure history and the different assumptions involved in the calculation. We believe that the results of statistical calculations are characterized by uncertainty and imprecision, having regard to the physiological variation from one person to another. In this study, we develop a tool based on fuzzy logic inference. As fuzzy logic deals with imprecise and uncertain information, its application in this area is appropriate. We propose a fuzzy system with three input variables (age, sex, and the organ susceptible to cancer). The output variable expresses the risk rate for each organ. A rule base is established from recorded actual data. After successful simulation, the tool instantly predicts the risk rate for each organ following chronic exposure to 0.1 Gy.
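The inference machinery can be sketched with a minimal Mamdani-style system. The membership function ranges, the two rules, and the output centroids below are illustrative placeholders, not the authors' rule base:

```python
# Minimal Mamdani-style fuzzy inference with triangular membership
# functions: fire rules, then defuzzify by a weighted average of the
# rule consequents' centroids.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def risk_rate(age, dose):
    """Two illustrative rules mapping (age, dose in Gy) to a risk rate."""
    # Rule 1: IF age is old AND dose is moderate THEN risk is high (0.8)
    w1 = min(tri(age, 40, 70, 100), tri(dose, 0.05, 0.1, 0.5))
    # Rule 2: IF age is young THEN risk is low (0.2)
    w2 = tri(age, 0, 20, 45)
    if w1 + w2 == 0:
        return 0.0
    return (w1 * 0.8 + w2 * 0.2) / (w1 + w2)

r = risk_rate(age=70, dose=0.1)  # both Rule 1 antecedents fully satisfied
```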

Keywords: radiation exposure, cancer, modeling, fuzzy logic

Procedia PDF Downloads 311
573 A Framework for Security Risk Level Measures Using CVSS for Vulnerability Categories

Authors: Umesh Kumar Singh, Chanchala Joshi

Abstract:

With increasing dependency on IT infrastructure, the main objective of a system administrator is to maintain a stable and secure network, ensuring that the network is robust against malicious users such as attackers and intruders. Security risk management provides a way to manage the growing threats to infrastructures and systems. This paper proposes a framework for risk level estimation that uses the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD) and the Common Vulnerability Scoring System (CVSS). The proposed framework measures the frequency of vulnerability exploitation, combines this measured frequency with the standard CVSS score, and estimates the security risk level, which supports automated and sound security management. In this paper, an equation for the Temporal score calculation with respect to the availability of a remediation plan is derived, and the frequency of exploitation is then calculated from the determined Temporal score. The frequency of exploitation, together with the CVSS score, is used to calculate the security risk level of the system. The proposed framework uses the CVSS vectors for risk level estimation and measures the security level of a specific network environment, which assists the system administrator in assessing security risks and making decisions related to their mitigation.
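For context, the standard CVSS v2 Temporal adjustment that such a framework builds on multiplies the Base score by exploitability, remediation-level, and report-confidence factors. The final combination with exploitation frequency shown below is an illustrative assumption, not the paper's derived equation:

```python
# CVSS v2 Temporal score: Base * Exploitability * RemediationLevel
# * ReportConfidence, rounded to one decimal.

REMEDIATION_LEVEL = {  # CVSS v2 remediation-level multipliers
    "official-fix": 0.87, "temporary-fix": 0.90,
    "workaround": 0.95, "unavailable": 1.00,
}

def temporal_score(base, exploitability, remediation, report_conf):
    return round(base * exploitability
                 * REMEDIATION_LEVEL[remediation] * report_conf, 1)

def risk_level(temporal, exploit_frequency):
    """Assumed combination: scale the temporal score by the observed
    exploitation frequency (0..1)."""
    return temporal * exploit_frequency

# Unproven exploit (0.85), official fix available, unconfirmed report (0.9):
t = temporal_score(9.3, 0.85, "official-fix", 0.9)
risk = risk_level(t, exploit_frequency=0.6)
```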

Keywords: CVSS score, risk level, security measurement, vulnerability category

Procedia PDF Downloads 321
572 Analysis Model for the Relationship of Users, Products, and Stores on Online Marketplace Based on Distributed Representation

Authors: Ke He, Wumaier Parezhati, Haruka Yamashita

Abstract:

Recently, online marketplaces in the e-commerce industry, such as Rakuten and Alibaba, have become some of the most popular shopping platforms in Asia. On these shopping websites, consumers can purchase products from a large number of stores. Additionally, consumers of an e-commerce site have to register their name, age, gender, and other information in advance to access their registered account. Therefore, a method for analyzing consumer preferences from both the store side and the product side is required. This study uses the Doc2Vec method, which has been studied in the field of natural language processing. Doc2Vec has been widely used in document classification to extract semantic relationships between documents (here representing consumers) and words (here representing products). This concept is applicable to representing the relationship between users and items; the problem, however, is that one more factor (i.e., shops) needs to be considered in Doc2Vec. More precisely, a method for analyzing the relationship between consumers, stores, and products is required. The purpose of our study is to combine the Doc2Vec analyses for users and shops and for users and items in the same feature space. This method enables the calculation of similar shops and items for each user. In this study, we analyze real data accumulated on an online marketplace and demonstrate the efficiency of the proposal.
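Once users, shops, and items live in one feature space, recommending similar shops and items for a user reduces to nearest-neighbor search by cosine similarity. A minimal sketch of that retrieval step, with random vectors standing in for trained Doc2Vec embeddings and illustrative entity names:

```python
import numpy as np

# Shared embedding lookup: entity name -> vector. In the actual method
# these vectors would come from Doc2Vec training; here they are random.

rng = np.random.default_rng(0)
EMB = {name: rng.normal(size=8)
       for name in ["user_a", "shop_1", "shop_2", "item_x", "item_y"]}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def nearest(query, prefix, k=1):
    """Rank entities of one type (by name prefix) by similarity to query."""
    cands = [(n, cosine(EMB[query], v)) for n, v in EMB.items()
             if n.startswith(prefix) and n != query]
    return sorted(cands, key=lambda t: -t[1])[:k]

best_shop = nearest("user_a", "shop_")  # most similar shop for this user
best_item = nearest("user_a", "item_")  # most similar item for this user
```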

Keywords: Doc2Vec, online marketplace, marketing, recommendation systems

Procedia PDF Downloads 112
571 Amino Acid Derivatives as Green Corrosion Inhibitors for Mild Steel in 1M HCl: Electrochemical, Surface and Density Functional Theory Studies

Authors: Jiyaul Haque, Vandana Srivastava, M. A. Quraishi

Abstract:

The amino acid-based corrosion inhibitors 2-(3-(carboxymethyl)-1H-imidazol-3-ium-1-yl) acetate (Z-1), 2-(3-(1-carboxyethyl)-1H-imidazol-3-ium-1-yl) propanoate (Z-2), and 2-(3-(1-carboxy-2-phenylethyl)-1H-imidazol-3-ium-1-yl)-3-phenylpropanoate (Z-3) were synthesized by the reaction of amino acids, glyoxal, and formaldehyde, and characterized by FTIR and NMR spectroscopy. The corrosion inhibition performance of the synthesized inhibitors was studied by electrochemical (EIS and PDP), surface, and DFT methods. The results show that Z-1, Z-2, and Z-3 are effective inhibitors, exhibiting maximum inhibition efficiencies of 88.52%, 89.48%, and 96.08%, respectively, at a concentration of 200 ppm. The potentiodynamic polarization (PDP) study showed that Z-1 acts as a cathodic inhibitor, while Z-2 and Z-3 act as mixed-type inhibitors. The electrochemical impedance spectroscopy (EIS) studies showed that the zwitterions inhibit corrosion through an adsorption mechanism, and the adsorption of the synthesized zwitterions on the mild steel surface followed the Langmuir adsorption isotherm. The formation of a zwitterion film on the mild steel surface was confirmed by scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX). Quantum chemical parameters were used to study the reactivity of the inhibitors and supported the experimental results. An inhibitor adsorption model is proposed.
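Checking Langmuir adsorption typically means fitting C/θ = 1/K_ads + C, where the surface coverage θ is taken as IE/100; a slope near unity supports Langmuir behavior. A sketch with illustrative concentration-efficiency pairs (only the 200 ppm point for Z-3 comes from the abstract):

```python
# Langmuir isotherm check: linear fit of C/theta versus C.
# Concentration-efficiency pairs are illustrative, not the measured series.

concs = [50.0, 100.0, 200.0]        # inhibitor concentration, ppm (assumed)
effs = [70.0, 82.0, 96.08]          # inhibition efficiency, % (assumed)

thetas = [ie / 100 for ie in effs]  # surface coverage theta = IE/100
x = concs
y = [c / t for c, t in zip(concs, thetas)]  # C/theta

# Ordinary least-squares slope and intercept.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx
k_ads = 1 / intercept               # adsorption equilibrium constant
```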

Keywords: electrochemical impedance spectroscopy, green corrosion inhibitors, mild steel, SEM, quantum chemical calculation, zwitterions

Procedia PDF Downloads 195
570 Comparing Two Interventions for Teaching Math to Pre-School Students with Autism

Authors: Hui Fang Huang Su, Jia Borror

Abstract:

This study compared two interventions for teaching math to preschool-aged students with autism spectrum disorder (ASD). The first is considered the business-as-usual (BAU) intervention, which uses the Strategies for Teaching Based on Autism Research (STAR) curriculum with discrete trial teaching as the instructional methodology. The second is the Math is Not Difficult (Project MIND) activity-embedded, naturalistic intervention. These interventions were randomly assigned to four preschool classrooms of students with ASD; Project MIND was implemented over three months, and measurements gained during the same three months were used for the STAR intervention. In addition, a quasi-experimental, pre-test/post-test design was used to compare the effectiveness of the two interventions in building mathematical knowledge and skills. The pre-post measures include three standardized instruments: the Test of Early Math Ability-3, the Problem Solving and Calculation subtests of the Woodcock-Johnson Test of Achievement IV, and the Bracken Test of Basic Concepts-3 Receptive. The STAR curriculum-based assessment is administered to all Baudhuin students three times per year, and its results were used in this study. We anticipated that implementing either approach would improve the mathematical knowledge and skills of children with ASD; still, it is crucial to see whether a behavioral or a naturalistic teaching approach leads to greater gains.

Keywords: early learning, autism, math for pre-schoolers, special education, teaching strategies

Procedia PDF Downloads 165
569 A Mechanical Diagnosis Method Based on Vibration Fault Signal Down-Sampling and the Improved One-Dimensional Convolutional Neural Network

Authors: Bowei Yuan, Shi Li, Liuyang Song, Huaqing Wang, Lingli Cui

Abstract:

Convolutional neural networks (CNN) have received extensive attention in the field of fault diagnosis, and many fault diagnosis methods use CNNs for fault type identification. However, when the amount of raw data collected by sensors is massive, the neural network must perform a time-consuming classification task. In this paper, a mechanical fault diagnosis method based on vibration signal down-sampling and an improved one-dimensional convolutional neural network is proposed. Through robust principal component analysis, the low-rank feature matrix of a large amount of raw data can be separated, and down-sampling is then applied to reduce the subsequent calculation load. In the improved one-dimensional CNN, a smaller convolution kernel is used to reduce the number of parameters and the computational complexity, and regularization is introduced before the fully connected layer to prevent overfitting. In addition, the multiple connected layers generalize classification results well without cumbersome parameter adjustment. The effectiveness of the method is verified on monitoring signals from a centrifugal pump test bench, with an average test accuracy above 98%. Compared with the traditional deep belief network (DBN) and support vector machine (SVM) methods, this method performs better.
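The parameter saving from a smaller convolution kernel is easy to quantify: a 1D conv layer holds out_ch × in_ch × k weights plus out_ch biases. The channel sizes below are illustrative, not the paper's architecture:

```python
# Why a smaller kernel shrinks the model: parameter count of a 1D
# convolution layer as a function of kernel size.

def conv1d_params(in_ch, out_ch, kernel):
    """Weights plus biases for one 1D convolution layer."""
    return out_ch * in_ch * kernel + out_ch

big = conv1d_params(in_ch=16, out_ch=32, kernel=64)    # wide kernel
small = conv1d_params(in_ch=16, out_ch=32, kernel=3)   # small kernel
reduction = big / small   # roughly a 20x parameter reduction here
```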

Keywords: fault diagnosis, vibration signal down-sampling, 1D-CNN

Procedia PDF Downloads 131
568 Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour

Authors: Mohammad Izadkhah, Mojtaba Hoseini, Alireza Khalili Tehrani

Abstract:

In this paper, we present a new method for tracking flying targets in color video sequences based on contour and kernel information. The aim of this work is to overcome the problem of losing the target under changing light, large displacements, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location with a particle filter, segmenting the target region using a neural network, and finding the exact contours with the greedy snake algorithm. In the proposed method, both region and contour information are used to create a target candidate model, and this model is dynamically updated during tracking. To avoid the accumulation of errors during updating, the target region is given to a perceptron neural network that separates the target from the background. Its output is then used for the exact calculation of the size and center of the target, and also serves as the initial contour for the greedy snake algorithm to find the exact target edge. The proposed algorithm has been tested on a database containing many challenges, such as the high speed and agility of aircraft, background clutter, occlusions, and camera movement. The experimental results show that the use of the neural network increases the accuracy of tracking and segmentation.
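The location-estimation step follows the generic predict-weight-resample particle filter loop. A minimal sketch; the random-walk motion model and Gaussian likelihood below are illustrative placeholders, not the paper's kernel/contour appearance model:

```python
import numpy as np

# One predict-weight-resample iteration of a particle filter tracking
# a 2D position.

rng = np.random.default_rng(1)

def step(particles, weights, measurement, motion_std=2.0, meas_std=5.0):
    # Predict: diffuse particles with a random-walk motion model.
    particles = particles + rng.normal(0, motion_std, particles.shape)
    # Weight: Gaussian likelihood of each particle given the measurement.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1 / len(particles))

parts = rng.uniform(0, 100, size=(500, 2))   # (x, y) position hypotheses
w = np.full(500, 1 / 500)
parts, w = step(parts, w, measurement=np.array([60.0, 40.0]))
estimate = parts.mean(axis=0)                # tracked target position
```

In the full method, the weighting stage would score each particle against the kernel/region model, and the resulting estimate seeds the neural-network segmentation.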

Keywords: video tracking, particle filter, greedy snake, neural network

Procedia PDF Downloads 342
567 Civil Engineering Tool Kit for Making Perfect Ellipses of Desired Dimensions on Very Large Surfaces

Authors: Karam Chand Gupta

Abstract:

If an ellipse of given dimensions is to be drawn on a large ground, there is no formula, method, or set of calculations and procedures available to help draw an ellipse of given length and width on the ground. Whenever a field engineer starts work on an ellipse-shaped structure, such as an elliptical conference hall, a screening chamber, or a pump chamber in disposal works, it is cumbersome to set out the structure on a large ground surface. No procedure is readily available, even on Google. A set of formulas with calculations has been developed that helps the field engineer draw a true and perfect ellipse of given length and width on large ground easily, so that construction of the elliptical structure can begin. Based on these formulas, a civil engineering tool kit has been made with which a perfect ellipse of desired dimensions can be drawn on a very large surface. A patent for the tool kit has been filed with Intellectual Property India (patent filing number 201611026153, filed 30.07.2016). An app named 'KC's Mesh Formula' has also been made to ease the calculation work; it can be downloaded from the Play Store. After adopting these formulas and the tool kit, a field engineer will not face difficulty in drawing an ellipse on the ground to start the work.
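For comparison, the classical two-peg ("gardener's") construction gives one exact set-out calculation: drive pegs at the two foci and trace with a taut string whose length equals the major axis. This is the textbook method, not necessarily the patented tool kit's procedure:

```python
import math

# Set-out quantities for the two-peg ellipse construction: pegs at the
# foci (2c apart) and a string loop of length equal to the major axis.

def setout(length, width):
    """Peg spacing and string length for an ellipse of given overall
    length (major axis) and width (minor axis), in the same units."""
    a, b = length / 2, width / 2
    c = math.sqrt(a * a - b * b)   # center-to-focus distance
    return 2 * c, 2 * a            # distance between pegs, string length

pegs, string = setout(length=20.0, width=12.0)  # e.g., metres on the ground
```

Any point where the string is taut then satisfies the defining property of the ellipse: the sum of distances to the two foci equals the major axis.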

Keywords: ellipse, elliptical structure, foci, string, wooden peg

Procedia PDF Downloads 268
566 Seismic Directionality Effects on In-Structure Response Spectra in Seismic Probabilistic Risk Assessment

Authors: Sittipong Jarernprasert, Enrique Bazan-Zurita, Paul C. Rizzo

Abstract:

Currently, seismic probabilistic risk assessments (SPRA) for nuclear facilities use In-Structure Response Spectra (ISRS) in the calculation of fragilities for systems and components. ISRS are calculated via dynamic analyses of the host building subjected to two orthogonal components of horizontal ground motion, each defined as the median motion in any horizontal direction. Structural engineers apply the components along selected X and Y Cartesian axes, and the ISRS at different locations in the building are likewise calculated in the X and Y directions. The choice of the X and Y directions is not specified by the ground motion model with respect to geographic coordinates and is instead selected, somewhat arbitrarily, by the structural engineer. Normally, X and Y coincide with the "principal" axes of the building, in the understanding that this practice is generally conservative. For SPRA purposes, however, it is desirable to remove any conservatism from the estimates of median ISRS. This paper examines the effect of the direction of horizontal seismic motion on the ISRS of a typical nuclear structure. We also evaluate the variability of ISRS calculated along different horizontal directions. Our results indicate that some central measures of the ISRS provide robust estimates that are practically independent of the selection of the directions of the horizontal Cartesian axes.
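A common way to probe directionality is to rotate the two recorded horizontal components through all azimuths and take the median of the peak response over angles (a RotD50-style measure). A sketch using synthetic accelerograms in place of recorded or computed in-structure motions:

```python
import numpy as np

# Rotate two horizontal components through 180 degrees of azimuth and
# take the median of the peak absolute response over all rotations.
# Applying the same idea to spectral ordinates, period by period,
# yields an orientation-independent central measure of the ISRS.

rng = np.random.default_rng(2)
ax = rng.normal(size=2000)  # "X" horizontal acceleration history (synthetic)
ay = rng.normal(size=2000)  # "Y" horizontal acceleration history (synthetic)

def rotd50_peak(ax, ay, n_angles=180):
    peaks = []
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        rotated = ax * np.cos(theta) + ay * np.sin(theta)
        peaks.append(np.max(np.abs(rotated)))
    return np.median(peaks)

peak_med = rotd50_peak(ax, ay)  # median-over-angle peak response
```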

Keywords: seismic, directionality, in-structure response spectra, probabilistic risk assessment

Procedia PDF Downloads 410