Search results for: modified Navier method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20713

17533 Modified Surface Morphology, Structure and Enhanced Weathering Performance of Polyester-Urethane/Organoclay Nanocomposite Coatings

Authors: Gaurav Verma

Abstract:

Organoclay-loaded (0-5 weight %) polyester-urethane (PU) coatings were prepared from a branched hydroxyl-bearing polyester and an aliphatic poly-isocyanate. TEM micrographs show partial exfoliation and intercalation of clay platelets in the organoclay-polyester dispersions. AFM surface images reveal that the PU hard domains tend to regularise and self-organise into spherical shapes of 50 nm (0 wt %), 60 nm (2 wt %) and 190 nm (4 wt %). IR analysis shows that the PU chains have an increasing tendency to interact with exfoliated clay platelets through hydrogen bonding. This interaction strengthens inter-chain linkages in the PU matrix and hence improves anti-ageing properties. Degradation over 1000 hours of accelerated weathering was evaluated by ATR spectroscopy, while yellowing and overall discoloration were quantified by the Δb* and ΔE* values of the CIELab colour scale. Post-weathering surface properties also improved: the loss of thickness and reduction in gloss were 25% and 42% for neat PU, but only 3.5% and 14% for the 2 wt% nanocomposite coating. This work highlights the importance of modifying the surface and bulk properties of PU coatings at the nanoscale, which leads to improved performance under accelerated weathering conditions.

Keywords: coatings, AFM, ageing, spectroscopy

Procedia PDF Downloads 454
17532 A Hybrid Fuzzy Clustering Approach for Fertile and Unfertile Analysis

Authors: Shima Soltanzadeh, Mohammad Hosain Fazel Zarandi, Mojtaba Barzegar Astanjin

Abstract:

Diagnosing male infertility through laboratory tests is expensive and sometimes intolerable for patients. Filling out a questionnaire and then applying a classification method can be the first step in the decision-making process, so that laboratory tests are used only in cases with a high probability of infertility. In this paper, we evaluated the performance of four classification methods, naive Bayesian, neural network, logistic regression and fuzzy c-means clustering used as a classifier, in the diagnosis of male infertility due to environmental factors. Since the data are unbalanced, ROC curves are the most suitable method for the comparison. We also selected the more important features using a filtering method and examined the impact of this feature reduction on the performance of each method; generally, most of the methods performed better after applying the filter. We showed that fuzzy c-means clustering used as a classifier performs well according to the ROC curves, and that its performance is comparable to other classification methods such as logistic regression.
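The idea of using fuzzy c-means memberships as soft classification scores can be sketched as below; this is a minimal generic FCM illustration, not the authors' implementation, and the toy data and parameters are invented:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and the membership
    matrix U (rows sum to 1); U can be read as a soft classification score."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m                                    # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                 # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Invented toy data: two well-separated groups standing in for the two classes.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(3.0, 0.3, (20, 2))])
centers, U = fuzzy_c_means(X)
labels = U.argmax(axis=1)   # hard labels derived from soft memberships
```

Thresholding the membership values, rather than taking the argmax, is what would allow ranking patients by estimated infertility probability before ordering laboratory tests.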

Keywords: classification, fuzzy c-means, logistic regression, Naive Bayesian, neural network, ROC curve

Procedia PDF Downloads 337
17531 Real-Time Recognition of the Terrain Configuration to Improve Driving Stability for Unmanned Robots

Authors: Bongsoo Jeon, Jayoung Kim, Jihong Lee

Abstract:

Methods for measuring or estimating ground shape with a laser range finder and a vision sensor (exteroceptive sensors) have a critical weakness: they need a prior database in order to interpret acquired data as a particular surface condition for driving. Moreover, ground information from exteroceptive sensors does not reflect the deflection of the ground surface caused by the movement of UGVs. Therefore, this paper proposes a method of recognizing exact and precise ground shape using an Inertial Measurement Unit (IMU) as a proprioceptive sensor. The method first recognizes the attitude of a robot in real time using the IMU and then compensates the attitude data for angle errors through an analysis of vehicle dynamics. The method is verified by outdoor driving experiments with a real mobile robot.
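As an illustration of estimating attitude from IMU data, here is a minimal complementary-filter sketch; this is a generic fusion technique, not the vehicle-dynamics compensation the authors derive, and all values are invented:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update step: trust the integrated gyro rate in the short term and
    the accelerometer-derived angle in the long term (drift correction)."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Invented scenario: true tilt is a constant 10 degrees; the gyro reports zero
# rate and the accelerometer reports the true angle, so the estimate converges.
angle = 0.0
for _ in range(500):
    angle = complementary_filter(angle, gyro_rate=0.0, accel_angle=10.0, dt=0.01)
```

The same structure generalises to pitch and roll separately; the weight alpha trades gyro drift against accelerometer noise.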

Keywords: inertial measurement unit, laser range finder, real-time recognition of the ground shape, proprioceptive sensor

Procedia PDF Downloads 287
17530 Development of Thermo-Regulating Fabric Using Microcapsules of Phase Change Material

Authors: D. Benmoussa, H. Hannache, O. Cherkaoui

Abstract:

In textiles, the major interest in microencapsulation currently lies in applying durable fragrances, skin softeners, phase-change materials, antimicrobial agents and drug delivery systems to textile materials. In our research, polyethylene glycol was used as the phase change material and was encapsulated in poly(methacrylic acid) (PMA) by radical suspension polymerization of methacrylic acid in the presence of N,N'-methylenebisacrylamide (MBAM) as a crosslinking agent. The obtained microcapsules were then modified by amidation with ethylenediamine as a spacer molecule, and a trichlorotriazine reactive group was fixed at the end of this spacer. The microcapsules were grafted onto a cotton textile substrate. The surface morphologies of the microencapsulated phase change materials (microPCMs) were studied by scanning electron microscopy (SEM). The thermal properties, thermal reliability and thermal stability of the as-prepared microPCMs were investigated by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). The results show microcapsules with a mean diameter of 10 µm, and the resistance of the microcapsules is demonstrated by thermal analysis.

Keywords: energy storage, microencapsulation, phase-change materials, thermogravimetric analysis (TGA)

Procedia PDF Downloads 675
17529 Currently Used Pesticides: Fate, Availability, and Effects in Soils

Authors: Lucie Bielská, Lucia Škulcová, Martina Hvězdová, Jakub Hofman, Zdeněk Šimek

Abstract:

Currently used pesticides represent a broad group of chemicals with diverse physicochemical and environmental properties; their input has reached 2×10⁶ tons/year and is expected to increase further. Of that amount, only about 1% directly interacts with the target organism, while the rest represents a potential risk to the environment and human health. Despite being authorized and approved for field applications, the effects of pesticides in the environment can differ from model scenarios due to various pesticide-soil interactions and the resulting modified fate and behavior. As such, direct monitoring of pesticide residues and evaluation of their impact on soil biota, the aquatic environment, food contamination, and human health should be performed to prevent environmental and economic damage. The present project focuses on fluvisols because they are intensively used in agriculture but face several environmental stressors. Fluvisols develop in the vicinity of rivers through the periodic settling of alluvial sediments and periodic interruptions to pedogenesis by flooding. As a result, fluvisols exhibit very high yields per unit area, are intensively used, and are heavily loaded with pesticides. Because of flooding, their regular contact with surface water raises serious concerns about surface water contamination. To monitor pesticide residues and assess their environmental and biological impact within this project, 70 fluvisols were sampled across the Czech Republic and analyzed for the total and bioaccessible amounts of 40 pesticides. For that purpose, methodologies for pesticide extraction and analysis by liquid chromatography-mass spectrometry were developed and optimized. To assess the biological risks, both earthworm bioaccumulation tests and various passive sampling techniques (XAD resin, Chemcatcher, and silicone rubber) were optimized and applied. 
These data on chemical analysis and bioavailability were combined with the results of soil analysis, including measurement of basic physicochemical soil properties as well as detailed characterization of soil organic matter by the advanced method of diffuse reflectance infrared spectrometry. The results provide unique data on residual pesticide levels in the Czech Republic and on the factors responsible for increased pesticide residue levels, which should be included in the modeling of pesticide fate and effects.

Keywords: currently used pesticides, fluvisols, bioavailability, QuEChERS, liquid chromatography-mass spectrometry, soil properties, DRIFT analysis, pesticides

Procedia PDF Downloads 464
17528 Reducing Uncertainty of Monte Carlo Estimated Fatigue Damage in Offshore Wind Turbines Using FORM

Authors: Jan-Tore H. Horn, Jørgen Juncher Jensen

Abstract:

Uncertainties in the fatigue damage estimation of non-linear systems are highly dependent on the tail behaviour and extreme values of the stress range distribution. By combining the First Order Reliability Method (FORM) with Monte Carlo simulations (MCS), the accuracy of the fatigue estimates can be improved for the same computational effort. The method is applied to a bottom-fixed, monopile-supported large offshore wind turbine, which is a non-linear and dynamically sensitive system. Different techniques for fitting curves to the fatigue damage distribution are used depending on the sea-state-dependent response characteristics, and the effect of a bi-linear S-N curve is discussed. Finally, analyses are performed for several environmental conditions to investigate the long-term applicability of this multistep method. Wave loads are calculated using state-of-the-art theory, while wind loads are applied with a simplified model based on rotor thrust coefficients.
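The bi-linear S-N curve and the damage accumulation it feeds into can be sketched with Miner's rule; the curve constants below are invented for illustration, with the second-branch coefficient chosen so the two branches meet at the knee:

```python
def cycles_to_failure(s, s_knee=50.0, a1=1e12, m1=3.0, m2=5.0):
    """Bi-linear S-N curve N(S): slope m1 above the knee stress and a
    shallower slope m2 below it; a2 is set so the branches are continuous."""
    a2 = a1 * s_knee ** (m2 - m1)
    if s >= s_knee:
        return a1 * s ** -m1
    return a2 * s ** -m2

def miner_damage(blocks):
    """Palmgren-Miner damage sum over (stress_range, n_cycles) blocks."""
    return sum(n / cycles_to_failure(s) for s, n in blocks)

# One million cycles at the stress range whose endurance is one million
# cycles gives a damage sum of 1 (nominal failure).
damage = miner_damage([(100.0, 1_000_000)])
```

In a FORM/MCS workflow, the stress-range counts fed into such a damage sum would come from simulated response histories rather than fixed blocks.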

Keywords: fatigue damage, FORM, monopile, Monte Carlo, simulation, wind turbine

Procedia PDF Downloads 260
17527 Stability-Indicating RP-HPLC Method Development, Validation and Kinetic Study for Amiloride Hydrochloride and Furosemide in Pharmaceutical Dosage Form

Authors: Jignasha Derasari, Patel Krishna M, Modi Jignasa G.

Abstract:

The chemical stability of pharmaceutical molecules is a matter of great concern as it affects the safety and efficacy of the drug product. Stability testing data provide the basis for understanding how the quality of a drug substance and drug product changes with time under the influence of various environmental factors. They also help in selecting a proper formulation and package, as well as proper storage conditions and shelf life, which is essential for regulatory documentation. The ICH guidelines state that stress testing is intended to identify the likely degradation products, which further helps in determining the intrinsic stability of the molecule, establishing degradation pathways, and validating the stability-indicating procedures. A simple, accurate and precise stability-indicating RP-HPLC method was developed and validated for the simultaneous estimation of amiloride hydrochloride and furosemide in a tablet dosage form. Separation was achieved on a Phenomenex Luna ODS C18 column (250 mm × 4.6 mm i.d., 5 µm particle size) using a mobile phase of orthophosphoric acid:acetonitrile (50:50 % v/v) at a flow rate of 1.0 ml/min (pH 3.5 adjusted with 0.1 % TEA in water) in isocratic mode, with an injection volume of 20 µl and detection at 283 nm. Retention times for amiloride hydrochloride and furosemide were 1.810 min and 4.269 min, respectively. Linearity was obtained in the ranges of 40-60 µg/ml and 320-480 µg/ml, with correlation coefficients of 0.999 and 0.998 for amiloride hydrochloride and furosemide, respectively. A forced degradation study was carried out on the combined dosage form under various stress conditions, including acid and base hydrolysis, and oxidative and thermal conditions. The RP-HPLC method showed adequate separation of amiloride hydrochloride and furosemide from their degradation products. 
The proposed method was validated as per ICH guideline Q2(R1) for specificity, linearity, accuracy, precision and robustness for the estimation of amiloride hydrochloride and furosemide in a commercially available tablet dosage form, and the results were found to be satisfactory. The developed and validated stability-indicating RP-HPLC method can be used successfully for marketed formulations. Forced degradation studies generate degradants in a much shorter span of time, typically a few weeks, and can be used to develop the stability-indicating method, which can later be applied to the analysis of samples generated from accelerated and long-term stability studies. Further, a kinetic study was performed for the different forced degradation parameters of the same combination, which helps in determining the order of reaction.
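To illustrate the kind of kinetic analysis mentioned, here is a minimal sketch of checking first-order degradation by fitting ln(C) against time; the data are synthetic, not from the paper:

```python
import math

def first_order_rate(times, concs):
    """Least-squares slope of ln(C) versus t; for first-order degradation
    ln(C) is linear in t with slope -k, so a straight plot indicates
    first-order kinetics and the slope gives the rate constant."""
    n = len(times)
    ys = [math.log(c) for c in concs]
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
    return -slope

# Synthetic data following C = 100 * exp(-0.05 t); the fit recovers k = 0.05.
times = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
concs = [100.0 * math.exp(-0.05 * t) for t in times]
k = first_order_rate(times, concs)
```

For zero- or second-order candidates, the analogous check is linearity of C or of 1/C against time.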

Keywords: amiloride hydrochloride, furosemide, kinetic study, stability indicating RP-HPLC method validation

Procedia PDF Downloads 464
17526 Study of Flow-Induced Noise Control Effects on Flat Plate through Biomimetic Mucus Injection

Authors: Chen Niu, Xuesong Zhang, Dejiang Shang, Yongwei Liu

Abstract:

Fish can secrete a high-molecular-weight fluid on their skin that enables rapid movement through water. In this work, we employ a hybrid method combining Computational Fluid Dynamics (CFD) and the Finite Element Method (FEM) to investigate the effects of different mucus viscosities and injection velocities on the fluctuating pressure in the boundary layer and on the flow-induced structural vibration noise of a flat plate model. To accurately capture the transient flow distribution on the plate surface, we use Large Eddy Simulation (LES), and the mucus inlet is positioned at a sufficient distance from the model to ensure effective coverage. Mucus injection is modeled using the Volume of Fluid (VOF) method for multiphase flow calculations. The results demonstrate that mucus control of the pulsating pressure effectively reduces flow-induced structural vibration noise, providing an approach for controlling flow-induced noise in underwater vehicles.

Keywords: mucus, flow control, noise control, flow-induced noise

Procedia PDF Downloads 146
17525 Impact on Cost of Equity of Accounting and Disclosures

Authors: Abhishek Ranga

Abstract:

The study examined the effect of accounting choice and level of disclosure on a firm's implied cost of equity in the Indian environment. For the study, accounting choice was classified as aggressive or conservative depending on the firm's choice of accounting methods, accounting policies and accounting estimates. Level of disclosure is the quantum of financial and non-financial information disclosed in the firm's annual report, essentially in the notes to accounts, the schedules forming part of the financial statements, and the Management Discussion and Analysis report. Regression models were developed with cost of equity as the dependent variable and accounting choice and level of disclosure as independent variables, along with selected control variables. Cost of equity was measured using the Edwards-Bell-Ohlson (EBO) valuation model, accounting choice was measured using the Modified Jones Model (MJM), and level of disclosure was measured using a disclosure index essentially drawn from the Botosan study. Results indicated a negative association between the implied cost of equity and conservative accounting choice, and also between level of disclosure and cost of equity.

Keywords: aggressive accounting choice, conservative accounting choice, disclosure, implied cost of equity

Procedia PDF Downloads 462
17524 Design, Modification and Structural Analysis of Bicycle Sprocket Using ANSYS

Authors: Roman Kalvin, Saba Arif, Anam Nadeem, Burhan Ali Ghumman, Juntakan Taweekun

Abstract:

Bicycles are an important part of the transportation industry, and sprockets are widely used on them. Sprockets and chains are key components of power transmission in a bicycle, and the transmission of power depends strongly on sprocket design. In conventional bicycles, sprockets are made of mild steel, which wears out over time due to the high loads applied to it. In the current research, a new sprocket is designed by changing its structure and replacing mild steel with carbon fiber. The existing bicycle sprocket is compared with the new, modified design, which differs in both structure and material. According to the results, the carbon fiber sprocket's deformation is 0.091 mm and its stress value is 371.13 N/mm². A comparison-based analysis was also performed using physical testing and software analysis: for steel, there is an 8.1% variation between software and experimental results, and overall the difference between the two methods is 8 to 9%. This improved design can be used in the future for greater durability and a longer service life for bicycles.

Keywords: sprocket, mild steel, drafting, stress, deformation

Procedia PDF Downloads 254
17523 Enhance Security in XML Databases: XLog File for Severity-Aware Trust-Based Access Control

Authors: A. Asmawi, L. S. Affendey, N. I. Udzir, R. Mahmod

Abstract:

Enhancing security in XML databases is important because it involves protecting sensitive data and providing a secure environment for users. To improve security and provide dynamic access control for XML databases, we present the XLog file, which calculates user trust values by recording users' bad transactions, errors and query severities. Severity-aware trust-based access control for XML databases manages the access policy depending on users' trust values and prevents unauthorized processes, malicious transactions and insider threats. Privileges are automatically modified and adjusted over time depending on user behaviour and query severity. Logging is an important database process used for recovery and security purposes. In this paper, the XLog file is presented as a dynamic and temporary log file for XML databases that enhances the level of security.
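A severity-aware trust mechanism of this kind could look roughly like the sketch below; the severity scale, penalty factor and access thresholds are all assumptions for illustration, since the abstract does not give the actual formulas:

```python
# Hypothetical severity scale and access thresholds (not from the paper).
SEVERITY = {"read": 1, "update": 2, "schema_change": 3}
THRESHOLDS = {"read": 0.2, "update": 0.5, "schema_change": 0.8}

def update_trust(trust, op, penalty=0.05):
    """Lower trust in proportion to the severity of a logged bad transaction."""
    return max(0.0, trust - penalty * SEVERITY[op])

def allowed(trust, op):
    """Grant access only if the user's trust meets the operation's threshold."""
    return trust >= THRESHOLDS[op]

trust = 1.0
trust = update_trust(trust, "schema_change")  # bad transaction recorded in the log
trust = update_trust(trust, "update")         # another bad transaction
```

The point of the sketch is the dynamic coupling: each logged misbehaviour lowers trust, which in turn narrows the set of operations the user may perform.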

Keywords: XML database, trust-based access control, severity-aware, trust values, log file

Procedia PDF Downloads 300
17522 Relevant LMA Features for Human Motion Recognition

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Motion recognition from videos is a very complex task due to the high variability of motions. This paper describes the challenges of human motion recognition, especially the motion representation step with relevant features. Our descriptor vector is inspired by the Laban Movement Analysis method. We select discriminative features using the Random Forest algorithm in order to remove redundant features and make learning algorithms operate faster and more effectively. We validate our method on the MSRC-12 and UTKinect datasets.

Keywords: discriminative LMA features, features reduction, human motion recognition, random forest

Procedia PDF Downloads 195
17521 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Partitioned Solution Approach and an Exponential Model

Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino

Abstract:

Solving the nonlinear dynamic equilibrium equations of base-isolated structures with a conventional monolithic solution approach, i.e. an implicit single-step time integration method employed with an iteration procedure, and using existing nonlinear analytical models, such as differential equation models, to simulate the dynamic behavior of seismic isolators can require significant computational effort. In order to reduce numerical computations, a partitioned solution method and a one-dimensional nonlinear analytical model are presented in this paper. A partitioned solution approach can be easily applied to base-isolated structures in which the base isolation system is much more flexible than the superstructure. Thus, in this work, the explicit, conditionally stable central difference method is used to evaluate the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method is adopted to predict the linear response of the superstructure, with the benefit of avoiding iterations in each time step of a nonlinear dynamic analysis. The proposed mathematical model can simulate the dynamic behavior of seismic isolators without requiring the solution of a nonlinear differential equation, as is needed for the widely used differential equation models. The proposed mixed explicit-implicit time integration method and nonlinear exponential model are applied to a three-dimensional seismically isolated structure with a lead rubber bearing system subjected to earthquake excitation. The numerical results show the good accuracy and significant computational efficiency of the proposed solution approach and analytical model compared to the conventional solution method and mathematical model. 
Furthermore, the low stiffness of the base isolation system with lead rubber bearings allows a critical time step considerably larger than the imposed ground acceleration time step, thus avoiding stability problems in the proposed mixed method.
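The explicit half of such a partitioned scheme, the central difference method, can be sketched for a single degree of freedom; this is a generic illustration with invented parameters, not the authors' three-dimensional model:

```python
import math

def central_difference(m, c, restoring, u0, v0, dt, n_steps):
    """Explicit central-difference integration of m*u'' + c*u' + f(u) = 0.
    Conditionally stable: dt must stay well below the shortest natural period."""
    a0 = (-c * v0 - restoring(u0)) / m
    u_prev = u0 - v0 * dt + 0.5 * a0 * dt * dt   # second-order starter step
    u = u0
    history = [u0]
    for _ in range(n_steps):
        v = (u - u_prev) / dt                    # backward-difference velocity
        a = (-c * v - restoring(u)) / m          # acceleration from equilibrium
        u_next = 2.0 * u - u_prev + a * dt * dt  # central-difference update
        u_prev, u = u, u_next
        history.append(u)
    return history

# Check on a linear spring: undamped SDOF with omega = 2*pi, i.e. period 1 s.
k = (2.0 * math.pi) ** 2
hist = central_difference(m=1.0, c=0.0, restoring=lambda u: k * u,
                          u0=1.0, v0=0.0, dt=0.001, n_steps=1000)
```

Because `restoring` is an arbitrary callable, a nonlinear isolator law (such as an exponential model) can be substituted without any iteration inside the time step, which is the point of using the explicit method for the isolation level.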

Keywords: base-isolated structures, earthquake engineering, mixed time integration, nonlinear exponential model

Procedia PDF Downloads 280
17520 Accurate Positioning Method of Indoor Plastering Robot Based on Line Laser

Authors: Guanqiao Wang, Hongyang Yu

Abstract:

The traditional construction industry involves a great deal of repetitive work, and replacing such manual tasks with robots can significantly improve production efficiency. Robots therefore appear more and more frequently in the construction industry. Navigation and positioning are very important tasks for construction robots, and the accuracy requirements are high. Traditional indoor robots mainly use radio-frequency or vision methods for positioning. Compared with ordinary robots, an indoor plastering robot must be positioned closer to the wall for plastering, so the required positioning accuracy is higher; traditional navigation and positioning methods have large errors, and without an exact position the wall cannot be plastered, or the plastering error is large. A new positioning method is proposed that is assisted by line lasers and uses image-processing-based positioning to refine the traditional positioning. In practice, filtering, edge detection, the Hough transform and other operations are performed on the images captured by the camera. Each time the position of the laser line is found, it is compared with the standard value, and the robot is moved or rotated to complete the positioning. The experimental results show that this method reduces the actual positioning error to less than 0.5 mm.
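The Hough-transform step for finding the laser line can be sketched as a minimal (rho, theta) accumulator; the synthetic points below stand in for edge pixels extracted from the camera image:

```python
import numpy as np

def hough_lines(points, img_w, img_h, n_theta=180):
    """Minimal Hough transform: vote each edge point into (rho, theta) bins
    and return the strongest line as (rho_pixels, theta_radians)."""
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(img_w, img_h)))    # bound on |rho|
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for x, y in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1  # one vote per theta bin
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r - diag, thetas[t]

# Synthetic "laser line": a vertical line at x = 40 in a 100x100 image.
pts = [(40, y) for y in range(100)]
rho, theta = hough_lines(pts, 100, 100)
```

In the robot's loop, the recovered (rho, theta) of the laser line would be compared against the expected value, and the deviation converted into a translation or rotation command.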

Keywords: indoor plastering robot, navigation, precise positioning, line laser, image processing

Procedia PDF Downloads 148
17519 The Origins of Inflation in Tunisia

Authors: Narimen Rdhaounia, Mohamed Kouni

Abstract:

Our aim in this paper is to identify the origins of inflation in Tunisia over the period from 1988 to 2018. An ARDL methodology is used to estimate the model. We also studied the effect of the informal economy on inflation; to that end, we estimated the size of the informal economy in Tunisia using the Gutmann method. The results show three main origins of inflation: the fiscal policy adopted by Tunisia, particularly after the revolution; the increase in monetary variables; and the informal economy, which plays an important role in inflation.

Keywords: inflation, consumer price index, informal economy, Gutmann method, ARDL model

Procedia PDF Downloads 82
17518 A Trend Based Forecasting Framework of the ATA Method and Its Performance on the M3-Competition Data

Authors: H. Taylan Selamlar, I. Yavuz, G. Yapar

Abstract:

It is difficult to make predictions, especially about the future, and making accurate predictions is not always easy. Better predictions nevertheless remain the foundation of all science, so the development of accurate, robust and reliable forecasting methods is very important. Numerous forecasting methods have been proposed and studied in the literature. Two major approaches still dominate, Box-Jenkins ARIMA and Exponential Smoothing (ES), and new methods continue to be derived from or inspired by them. After more than 50 years of widespread use, exponential smoothing is still one of the most practically relevant forecasting methods available due to its simplicity, robustness and accuracy as an automatic forecasting procedure, as shown especially in the famous M-Competitions. Despite their success and widespread use in many areas, ES models have some shortcomings that negatively affect the accuracy of forecasts. This study therefore proposes a new forecasting method, called the ATA method, to cope with these shortcomings. The new method is obtained from traditional ES models by modifying the smoothing parameters; both methods therefore have similar structural forms, and ATA can be easily adapted to each of the individual ES models, while its innovative new weighting scheme gives it many advantages. In this paper, the focus is on modeling the trend component and handling seasonality patterns by utilizing classical decomposition. The ATA method is therefore expanded to higher-order ES methods for additive, multiplicative, additive damped and multiplicative damped trend components. The proposed models are called ATA trended models, and their predictive performance is compared to that of their counterpart ES models on the M3-competition data set, since it is still the most recent and comprehensive time-series data collection available. 
The models outperform their counterparts in almost all settings, and when model selection is carried out among these trended models, ATA outperforms all competitors in the M3-competition for both short-term and long-term forecasting horizons when forecasting accuracy is compared using popular error metrics.
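For context, simple exponential smoothing, the baseline that ATA modifies, can be written in a few lines; the second function shows one plausible reading of a time-varying weight p/t, which is an assumption based on the abstract rather than the authors' exact recursion:

```python
def ses_forecast(series, alpha):
    """Simple exponential smoothing with a fixed weight alpha: each new
    level averages the latest observation with the previous level."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1.0 - alpha) * level
    return level  # one-step-ahead forecast

def ata_level(series, p):
    """Hypothetical ATA-style level: the weight p/t decays with the time
    index t instead of staying fixed (a sketch, not the published recursion)."""
    level = series[0]
    for t, x in enumerate(series[1:], start=2):
        w = min(p / t, 1.0)
        level = w * x + (1.0 - w) * level
    return level

data = [10.0, 12.0, 11.0, 13.0, 12.0, 14.0]
forecast = ses_forecast(data, alpha=0.5)
```

The contrast the sketch is meant to show: with a fixed alpha the influence of old observations decays geometrically, whereas a weight that shrinks with t gives early observations relatively more lasting weight.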

Keywords: accuracy, exponential smoothing, forecasting, initial value

Procedia PDF Downloads 177
17517 Optimal Portfolio Selection under Treynor Ratio Using Genetic Algorithms

Authors: Imad Zeyad Ramadan

Abstract:

In this paper, a genetic algorithm (GA) was developed to construct the optimal portfolio based on the Treynor method. The GA maximizes the Treynor ratio under a budget constraint to select the best allocation of the budget across the companies in the portfolio. The results show that the GA was able to construct a conservative portfolio that includes companies from all three sectors. This indicates that the GA reduced the risk to the investor, as it chose some companies with positive risk (moving with the market) and some with negative risk (moving against the market).
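A toy version of maximising the Treynor ratio with a genetic algorithm might look like this; the asset returns and betas are invented, and the GA operators are a minimal mutation-plus-elitism scheme rather than the authors' exact design:

```python
import random

def treynor(w, returns, betas, rf=0.02):
    """Treynor ratio: portfolio excess return per unit of systematic risk (beta)."""
    rp = sum(wi * ri for wi, ri in zip(w, returns))
    bp = sum(wi * bi for wi, bi in zip(w, betas))
    return (rp - rf) / bp

def ga_optimise(returns, betas, pop_size=40, gens=300, seed=0):
    """Minimal GA: mutate normalised weight vectors, keep the fittest half."""
    rng = random.Random(seed)
    n = len(returns)

    def norm(w):                      # enforce the budget constraint sum(w) = 1
        s = sum(w)
        return [x / s for x in w]

    pop = [norm([rng.random() for _ in range(n)]) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda w: treynor(w, returns, betas), reverse=True)
        parents = pop[: pop_size // 2]             # elitist selection
        children = [norm([max(1e-6, x + rng.gauss(0.0, 0.05)) for x in w])
                    for w in parents]              # Gaussian mutation
        pop = parents + children
    return max(pop, key=lambda w: treynor(w, returns, betas))

# Invented universe of three assets: expected returns and market betas.
rets = [0.08, 0.12, 0.10]
betas = [1.2, 0.9, 1.0]
best = ga_optimise(rets, betas)
```

With positive weights, the maximiser of a ratio of linear functions sits at a vertex of the simplex, so the GA drifts toward the asset with the best individual excess-return-to-beta ratio; a crossover operator and shorting constraints would be natural extensions.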

Keywords: optimization, genetic algorithm, portfolio selection, Treynor method

Procedia PDF Downloads 449
17516 The Use of Random Set Method in Reliability Analysis of Deep Excavations

Authors: Arefeh Arabaninezhad, Ali Fakher

Abstract:

Since deterministic analysis methods fail to take system uncertainties into account, probabilistic and non-probabilistic methods have been suggested. Geotechnical analyses are used to determine the stress and deformation caused by construction; accordingly, many input variables that depend on ground behavior are required. The Random Set approach is an applicable reliability analysis method when comprehensive sources of information are not available. Using the Random Set method, smooth extremes of the system responses are obtained with a relatively small number of simulations compared to fully probabilistic methods; the approach has therefore been proposed for reliability analysis in geotechnical problems. In the present study, the application of the Random Set method to the reliability analysis of deep excavations is investigated through three deep excavation projects that were monitored during the excavation process. A finite element code is utilized for numerical modeling. Two expected ranges, from different sources of information, are established for each input variable, and a specific probability assignment is defined for each range. To determine the most influential input variables and thereby reduce the number of required finite element calculations, a sensitivity analysis is carried out. Input data for the finite element model are obtained by combining the upper and lower bounds of the input variables, and the probability share of each finite element calculation is determined from the probabilities assigned to the input variables present in these combinations. The horizontal displacement of the top point of the excavation is considered the main system response. The result of the reliability analysis for each deep excavation is presented by constructing the belief and plausibility distribution functions (i.e. lower and upper bounds) of the system response obtained from the deterministic finite element calculations. 
To evaluate the quality of the input variables as well as the applied reliability analysis method, the range of displacements extracted from the models was compared to in situ measurements, and good agreement was observed. The comparison also showed that the Random Set Finite Element Method is applicable for estimating the horizontal displacement of the top point of a deep excavation. Finally, the probability of failure or unsatisfactory performance of the system is evaluated by comparing the threshold displacement with the reliability analysis results.

Keywords: deep excavation, random set finite element method, reliability analysis, uncertainty

Procedia PDF Downloads 268
17515 Optimising Transcranial Alternating Current Stimulation

Authors: Robert Lenzie

Abstract:

Transcranial electrical stimulation (tES) features prominently in the research literature. However, the effects of tES on brain activity are still poorly understood at the scalp level, at the Brodmann area level, and in terms of impact on neural networks. Using a method such as electroencephalography (EEG) in conjunction with tES might make it possible to understand in more depth the brain response and the mechanisms behind the alterations reported in the literature. A method that directly observes the effect of tES on EEG may offer high-temporal-resolution data on the brain activity changes and modulations brought on by tES, which correlate with various processing stages within the brain. This paper provides unpublished information on a cutting-edge methodology that may reveal details about the dynamics of how the human brain works beyond what is currently achievable with existing methods.

Keywords: tACS, frequency, EEG, optimal

Procedia PDF Downloads 83
17514 Estimation of Shear Wave Velocity from Cone Penetration Test for Structured Busan Clays

Authors: Vinod K. Singh, S. G. Chung

Abstract:

The degree of structuration of Busan clays at the mouth of the Nakdong River was strongly influenced by the depositional environment, i.e., the flow of the river stream and the marine regression and transgression during the sedimentation process. As a result, the geotechnical properties also vary with depth as the degree of structuration changes. Thus, in-situ tests such as the cone penetration test (CPT) could not properly predict various geotechnical properties using conventional empirical methods. In this paper, the shear wave velocity (Vs) was measured in the field using the seismic dilatometer. Vs was also measured in the laboratory, on high-quality undisturbed and remolded samples, using the bender element method to evaluate the degree of structuration. The degree of structuration was quantitatively defined by the modulus ratio of undisturbed to remolded soil samples, which correlates well with the normalized void ratio e0/eL, where eL is the void ratio at the liquid limit. It is shown that an empirical method based on laboratory results incorporating e0/eL can predict field Vs more accurately. A CPT-based empirical method was then developed to estimate the shear wave velocity, taking the effect of structuration into consideration. The developed method was found to predict the shear wave velocity of Busan clays reasonably well.
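The laboratory modulus and the measured velocity are linked by the standard small-strain elasticity relation Vs = sqrt(G/ρ); the soil values below are illustrative only, not from the paper:

```python
import math

def shear_wave_velocity(G, rho):
    """Small-strain shear wave velocity: G in Pa, rho in kg/m^3, Vs in m/s."""
    return math.sqrt(G / rho)

def small_strain_modulus(vs, rho):
    """Inverse relation G0 = rho * Vs^2, used to turn a measured Vs
    (e.g. from bender element tests) into a small-strain modulus."""
    return rho * vs ** 2

# Illustrative soft-clay values (invented): G = 50 MPa, density 1800 kg/m^3.
vs = shear_wave_velocity(G=50e6, rho=1800.0)
```

It is through G0 = ρVs² that a modulus ratio of undisturbed to remolded samples translates directly into a ratio of squared shear wave velocities.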

Keywords: level of structuration, normalized modulus, normalized void ratio, shear wave velocity, site characterization

Procedia PDF Downloads 235
17513 Food Composition Tables Used as an Instrument to Estimate the Nutrient Intake in Ecuador

Authors: Ortiz M. Rocío, Rocha G. Karina, Domenech A. Gloria

Abstract:

There are several tools for assessing the nutritional status of a population. A main instrument commonly used to build those tools is the food composition table (FCT). Despite the importance of FCTs, many error sources and variability factors can arise when building such tables, which can lead to an under- or overestimation of a population's nutrient intake. This work identified the different food composition tables used as instruments to estimate nutrient intake in Ecuador. The data for choosing FCTs were collected through key informants (self-completed questionnaires), supplemented with institutional web research. A questionnaire with general variables (origin, year of edition, etc.) and methodological variables (method of elaboration, information in the table, etc.) was applied to the identified FCTs. Those variables were defined based on an extensive literature review. A descriptive content analysis was performed. Ten printed tables and three databases were reported, all of which were treated indistinctly as food composition tables. We managed to obtain information from 69% of the references. Several informants referred to printed documents that were not accessible, and internet searches were likewise unsuccessful. Of the 9 final tables, n=8 are from Latin America, and n=5 of these were constructed by the indirect method (compilation of already published data), with a United States Department of Agriculture (USDA) database as the main source of information. One FCT was constructed by the direct method (bromatological analysis) and has its origin in Ecuador. All of the tables made a clear distinction between a food and its method of cooking, 88% of the FCTs expressed nutrient values per 100 g of edible portion, 77% gave precise additional information about the use of the table, and 55% presented all the macro- and micronutrients in a detailed way. The most complete FCTs were those of INCAP (Central America) and Composition of Foods (Mexico).
The most frequently cited table was the Ecuadorian food composition table of 1965 (70%). The indirect method was used for most tables in this study. However, this method has the disadvantage of generating less reliable food composition tables, because foods show variations in composition; a database therefore cannot accurately predict the composition of any isolated sample of a food product. In conclusion, weighing the pros and cons, and despite its being an FCT elaborated by the indirect method, it is considered appropriate to work with the INCAP (Central America) FCT, given its proximity to our country and a food-item list that is very similar to ours. It is also essential to use as a reference the Ecuadorian food composition table, which, although not updated, was constructed by the direct method with Ecuadorian foods. Hence, both tables will be used to elaborate a questionnaire for assessing the food consumption of the Ecuadorian population. In case of disparate values, only the INCAP values will be taken, because that table is updated.

Keywords: Ecuadorian food composition tables, FCT elaborated by direct method, nutrient intake of Ecuadorians, Latin America food composition tables

Procedia PDF Downloads 432
17512 Performance Analysis of Bluetooth Low Energy Mesh Routing Algorithm in Case of Disaster Prediction

Authors: Asmir Gogic, Aljo Mujcic, Sandra Ibric, Nermin Suljanovic

Abstract:

The ubiquity of natural disasters during the last few decades has raised serious questions about the prediction of such events and human safety. Every disaster, regardless of its magnitude, has a precursor that manifests as a disruption of some environmental parameter such as temperature, humidity, pressure, or vibration. In order to anticipate and monitor those changes, in this paper we propose an overall system for disaster prediction and monitoring based on a wireless sensor network (WSN). Furthermore, we introduce a modified and simplified WSN routing protocol built on top of the trickle routing algorithm. The routing algorithm was deployed using the Bluetooth Low Energy protocol in order to achieve low power consumption. The performance of the WSN was analyzed using a real-life system implementation. Estimates of WSN parameters such as battery lifetime, network size, and packet delay are determined. Based on the performance of the WSN, the proposed system can be utilized for disaster monitoring and prediction due to its low power profile and mesh routing feature.
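The trickle-based dissemination the protocol builds on can be sketched as follows. This is a minimal, generic rendition of the Trickle timer from RFC 6206 (parameter names Imin, Imax, and the redundancy constant k follow the RFC), not the authors' modified protocol:

```python
import random

class TrickleTimer:
    """Minimal sketch of the Trickle algorithm (RFC 6206)."""

    def __init__(self, imin=1.0, imax_doublings=8, k=2):
        self.imin = imin
        self.imax = imin * (2 ** imax_doublings)  # largest interval length
        self.k = k                                # redundancy constant
        self.i = imin                             # current interval length
        self.c = 0                                # consistent-message counter
        self.t = random.uniform(self.i / 2, self.i)  # transmit time in interval

    def hear_consistent(self):
        # Count consistent transmissions heard during the interval
        self.c += 1

    def should_transmit(self):
        # At time t, transmit only if few consistent messages were heard
        return self.c < self.k

    def interval_expired(self):
        # Double the interval, up to imax, and reset for the next round
        self.i = min(2 * self.i, self.imax)
        self.c = 0
        self.t = random.uniform(self.i / 2, self.i)

    def inconsistency(self):
        # On inconsistent data, fall back to the smallest interval
        if self.i > self.imin:
            self.i = self.imin
            self.c = 0
            self.t = random.uniform(self.i / 2, self.i)
```

The exponentially growing interval is what yields the low power profile: stable nodes transmit rarely, while any inconsistency snaps the network back to fast dissemination.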

Keywords: bluetooth low energy, disaster prediction, mesh routing protocols, wireless sensor networks

Procedia PDF Downloads 385
17511 Prevalence of Diabetes Mellitus Type 2 Risk Factors among Nurses in Mongolia

Authors: V. Davaakhuu, D. Tserendagva, D. Amarsaikhan, T. Altanstetseg

Abstract:

In this study, we aimed to identify the main risk factors for diabetes among nurses in Mongolia; the data were obtained using a modified survey questionnaire. Survey data were collected from 634 valid nurses (317 day-work nurses and 317 shift-work nurses). Participants who were pregnant, younger than 20 years, or had no fasting glucose measurement were excluded from the survey. Our results show that the main risk factors for diabetes were physical inactivity, overweight and obesity, alcohol and tobacco use, and a lack of vegetable and fruit consumption. Peripheral blood glucose level was normal in subjects with a BMI of 26.28 ± 0.56, although 20% of the subjects with a normal blood glucose level were obese. Blood glucose level was higher in subjects with a BMI of 28.63 ± 2.32, and 36% of them were obese. According to our results, 3.62% of the surveyed population were identified as having no diabetes risk factors, 52.3% were at risk, and 28.8% were at higher risk for diabetes by the WHO criteria. In general, elevated blood glucose was especially prevalent among shift-work nurses.

Keywords: day work nurses, shift work nurses, BMI, WHR

Procedia PDF Downloads 593
17510 Crystal Structure, Vibration Study, and Calculated Frequencies by Density Functional Theory Method of Copper Phosphate Dihydrate

Authors: Soufiane Zerraf, Malika Tridane, Said Belaaouad

Abstract:

CuHPO₃·2H₂O was synthesized by the direct method. CuHPO₃·2H₂O crystallizes in the orthorhombic system, space group P2₁2₁2₁, with a = 6.7036 (2) Å, b = 7.3671 (4) Å, c = 8.9749 (4) Å, Z = 4, and V = 443.24 (4) ų. The crystal structure was refined to R₁ = 0.0154 and R₂ = 0.0380 for 19018 reflections satisfying the criterion I ≥ 2σ(I). The structural resolution shows the existence of chains of HPO₃ ions linked together by hydrogen bonds. The crystalline structure is formed by chains of distorted Cu[O₃(H₂O)₃] octahedra connected at their vertices. The chains extend parallel to b and are linked to one another by PO₃ groups. The structure is closely related to those of CuSeO₃·2H₂O and CuTeO₃·2H₂O. Experimental infrared and Raman spectra were used to confirm the presence of the phosphate ion and were compared in the 0-4000 cm⁻¹ region with theoretical frequencies calculated by the density functional theory (DFT) method, providing reliable assignments of all bands observed in the experimental spectra.
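For an orthorhombic cell all angles are 90°, so the reported cell volume can be checked directly from the lattice parameters:

```python
# Orthorhombic cell: alpha = beta = gamma = 90 deg, so V = a * b * c
a, b, c = 6.7036, 7.3671, 8.9749  # lattice parameters, Angstrom
V = a * b * c
# Consistent with the reported V = 443.24(4) cubic Angstrom
```
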

Keywords: crystal structure, X-ray diffraction, vibration study, thermal behavior, density functional theory

Procedia PDF Downloads 117
17509 Iron Recovery from Red Mud as Zero-Valent Iron Metal Powder Using Direct Electrochemical Reduction Method

Authors: Franky Michael Hamonangan Siagian, Affan Maulana, Himawan Tri Bayu Murti Petrus, Widi Astuti

Abstract:

In this study, the feasibility of the direct electrowinning method for producing zero-valent iron from red mud was investigated. The bauxite residue sample came from the Tayan mine, Indonesia, and contains a high proportion of hematite (Fe₂O₃). Before electrolysis, the samples were characterized by various analytical techniques (ICP-AES, SEM, XRD) to determine their chemical composition and mineralogy. Direct electrowinning of red mud suspended in NaOH was carried out at low temperatures ranging from 30 to 110 °C. The current density, red mud:NaOH ratio, and temperature were varied to determine the optimum operating conditions of the direct electrowinning process. Cathode deposits and residues in the electrochemical cells were analyzed using XRD, XRF, and SEM to determine the chemical composition and current efficiency. The low-temperature electrolysis of red mud can reach 20% recovery at a current density of 920.945 A/m². The moderate performance of the process with red mud was attributed to the troublesome adsorption of red mud particles on the cathode, which makes the reduction far less efficient than that with hematite.
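Current efficiency in such an electrowinning run is conventionally computed from Faraday's law; the following is a generic sketch assuming the three-electron Fe(III) to Fe(0) reduction, not the authors' exact analysis:

```python
F = 96485.33  # Faraday constant, C/mol

def current_efficiency(mass_g, charge_C, molar_mass=55.845, n_electrons=3):
    """Fraction of the passed charge that produced metal deposit.
    Defaults assume Fe(III) + 3e- -> Fe(0); a generic Faraday's-law
    sketch, not the paper's specific calculation."""
    theoretical_mass = molar_mass * charge_C / (n_electrons * F)
    return mass_g / theoretical_mass
```

For example, depositing one mole of iron (55.845 g) with exactly three moles of electrons of charge corresponds to 100% current efficiency.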

Keywords: red mud, electrochemical reduction, iron production, hematite

Procedia PDF Downloads 75
17508 Modelling the Behavior of Commercial and Test Textiles against Laundering Process by Statistical Assessment of Their Performance

Authors: M. H. Arslan, U. K. Sahin, H. Acikgoz-Tufan, I. Gocek, I. Erdem

Abstract:

Various exterior factors have continual effects on textile materials during wear, use, and laundering in everyday life. In accordance with their frequency of use, textile materials must be laundered at certain intervals. The medium in which the laundering process takes place has inevitable detrimental physical and chemical effects on textile materials, caused by the parameters inherent in the process. The distinct structures of various textile materials result in many different physical, chemical, and mechanical characteristics, and because of these specific structures, the materials behave differently towards exterior factors. By modeling the behavior of commercial and test textiles group-wise against the laundering process, it is possible to disclose the relation between these two groups of materials, which leads to a better understanding of their similarities and differences with respect to the washing parameters of laundering. Thus, the goal of the current research is to examine the behavior of commercial textiles and test textiles towards the main washing machine parameters of the laundering process, namely temperature, load quantity, mechanical action, and water level, concentrating on shrinkage, pilling, sewing defects, collar abrasion, defects other than sewing defects, whitening, and the overall properties of the textiles. In this study, cotton fabrics were chosen as the commercial textiles because garments made of cotton are the products most in demand by textile consumers in daily life. A full factorial experimental set-up was used to design the experimental procedure. All profiles, each including all of the commercial and test textiles, were laundered for 20 cycles in a commercial home laundering machine to investigate the effects of the chosen parameters.
For the laundering process, a modified version of the ‘‘IEC 60456 Test Method’’ was utilized. The amount of detergent was adjusted in steps of 0.5 g per liter depending on the load quantity level. A Datacolor 650®, the EMPA Photographic Standards for the Pilling Test, and visual examination were utilized to test and characterize the textiles. Furthermore, the relation between the commercial and test textiles in terms of performance was investigated in depth by statistical analysis performed in the MINITAB® package, modeling their behavior against the parameters of the laundering process. In the experimental work, the behaviors of both groups of textiles towards the washing machine parameters were assessed visually and quantitatively in the dry state.
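A full factorial design simply enumerates every combination of every factor level. A minimal sketch with hypothetical levels for the four washing parameters named above (the abstract does not list the exact levels used):

```python
from itertools import product

# Hypothetical factor levels for illustration; the paper's actual
# levels are not given in the abstract.
factors = {
    "temperature_C": [30, 40, 60],
    "load_kg": [2, 4],
    "mechanical_action": ["normal", "gentle"],
    "water_level": ["low", "high"],
}

# Full factorial: one run per combination of levels -> 3*2*2*2 = 24 runs
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
```

Each entry in `runs` is one laundering profile; in the study, every profile was then repeated for 20 wash cycles.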

Keywords: behavior against washing machine parameters, performance evaluation of textiles, statistical analysis, commercial and test textiles

Procedia PDF Downloads 359
17507 An Inviscid Compressible Flow Solver Based on Unstructured OpenFOAM Mesh Format

Authors: Utkan Caliskan

Abstract:

Two numerical codes based on the finite volume method were developed to solve the compressible Euler equations and simulate the flow through a forward-facing step channel. Both algorithms use the AUSM+-up (Advection Upstream Splitting Method) scheme for flux splitting and a two-stage Runge-Kutta scheme for time stepping. In this study, the flux calculation distinguishes the algorithm based on the OpenFOAM mesh format, called the 'face-based' algorithm, from the basic algorithm, called the 'element-based' algorithm. The face-based algorithm avoids redundant flux computations and is also more flexible with hybrid grids; moreover, some of OpenFOAM's preprocessing utilities can be used on the mesh. Parallelization of the face-based algorithm, for which atomic operations are needed due to the shared-memory model, is also presented. For several mesh sizes, a 2.13x speed-up is obtained with the face-based approach over the element-based approach.
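The core saving of the face-based assembly, computing each face flux once and scattering it to the two adjacent cells, can be sketched on a toy mesh. This is an illustrative 1-D periodic mesh with a first-order upwind flux, not the paper's AUSM+-up scheme:

```python
import numpy as np

# Toy 1-D periodic mesh: each face stores its owner and neighbour cell,
# mirroring the OpenFOAM face-addressed mesh format.
n = 8
u = np.linspace(0.0, 1.0, n)                   # cell-averaged scalar
faces = [(i, (i + 1) % n) for i in range(n)]   # (owner, neighbour) pairs

def upwind_flux(u_own, u_nei, speed=1.0):
    # First-order upwind: take the value from the upwind side
    return speed * (u_own if speed > 0 else u_nei)

# Face-based residual assembly: each flux is computed exactly once per
# face, then subtracted from the owner and added to the neighbour. An
# element-based loop would instead recompute every face flux twice.
res = np.zeros(n)
for own, nei in faces:
    f = upwind_flux(u[own], u[nei])
    res[own] -= f
    res[nei] += f
```

Because each face contribution enters the two adjacent cells with opposite signs, the scheme is conservative by construction; this pairwise scatter is also why atomic updates are needed when the face loop is parallelized over shared memory.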

Keywords: cell centered finite volume method, compressible Euler equations, OpenFOAM mesh format, OpenMP

Procedia PDF Downloads 319
17506 Magnetohydrodynamics (MHD) Boundary Layer Flow Past a Stretching Plate with Heat Transfer and Viscous Dissipation

Authors: Jiya Mohammed, Tsadu Shuaib, Yusuf Abdulhakeem

Abstract:

The research work focuses on MHD boundary layer flow past a stretching plate with heat transfer and viscous dissipation. The nonlinear momentum and energy equations are transformed into ordinary differential equations using a similarity transformation, and the resulting equations are solved using the Adomian Decomposition Method (ADM). An attempt has been made to show the potential and wide applicability of the Adomian decomposition method, in comparison with previous approaches, for solving heat transfer problems. A [11/11] Padé approximant is used to handle the boundary condition at infinity (taken at η = 11). The results are compared with those of a numerical method. It can be concluded from the results that ADM provides highly precise numerical solutions for nonlinear differential equations; the results were accurate especially for η ≤ 4. A general equation in terms of the Eckert number (Ec), the Prandtl number (Pr), and the magnetic parameter is derived and used to investigate the velocity and temperature profiles in the boundary layer.
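The ADM workflow can be illustrated on a textbook scalar problem. This generic sketch solves u' = N(u), u(0) = u0 by building the Adomian polynomials symbolically; it is a minimal demonstration of the method, not the authors' MHD boundary-layer system:

```python
import sympy as sp

t, lam = sp.symbols("t lambda")

def adomian_solve(N, u0, terms=5):
    """Adomian decomposition for u' = N(u), u(0) = u0.
    The k-th Adomian polynomial is
    A_k = (1/k!) d^k/dlam^k N(sum_j lam^j u_j) at lam = 0,
    and each correction is u_{k+1} = integral of A_k from 0 to t."""
    u = [sp.Integer(u0)]
    for k in range(1, terms):
        series = sum(lam**j * u[j] for j in range(k))
        A = sp.diff(N(series), lam, k - 1).subs(lam, 0) / sp.factorial(k - 1)
        u.append(sp.integrate(A, t))  # antiderivative vanishes at t = 0 here
    return sp.expand(sum(u))
```

For N(u) = u² with u0 = 1 the partial sums reproduce, term by term, the Taylor series of the exact solution 1/(1 - t), which illustrates why a Padé approximant of the truncated series is then useful near the edge of convergence.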

Keywords: MHD, Adomian decomposition, boundary layer, viscous dissipation

Procedia PDF Downloads 551
17505 Effect of Incorporation of Seaweed Extract in Gelatin-Based Film on Physico-Chemical and Bioactive Properties of Film

Authors: Shekhar U. Kadam, S. K. Pankaj, Brijesh K. Tiwari, P. J. Cullen, Colm P. O’Donnell

Abstract:

The brown seaweed Laminaria hyperborea is a rich source of phenolic compounds with antioxidant and antimicrobial properties. The aim of this work was to study the effect of incorporating L. hyperborea extract into bovine gelatin film on the physicochemical and antioxidant properties of the film. Films with an extract fraction of 25% by weight of the bovine gelatin sample were cast with glycerol added as a plasticizer. The total phenolic content and antioxidant activity of the films increased with the addition of seaweed extract. Film appearance properties such as thickness, color, and light transparency were also evaluated. The film appearance was only slightly modified, whereas the film microstructure showed rough patches at the 50% extract level. The hydrophilicity and glass transition temperature of the films also increased with increasing levels of seaweed extract. It was found that seaweed extract can be incorporated into gelatin and casein for the development of biofunctional films.

Keywords: Laminaria hyperborea, ultrasound, seaweed extract, bovine gelatin film, antioxidant, phenolic compounds

Procedia PDF Downloads 519
17504 3D Objects Indexing Using Spherical Harmonic for Optimum Measurement Similarity

Authors: S. Hellam, Y. Oulahrir, F. El Mounchid, A. Sadiq, S. Mbarki

Abstract:

In this paper, we propose a method for three-dimensional (3-D) model indexing based on a new descriptor built from spherical harmonics. The purpose of the method is to minimize the processing time over the database of object models and the time needed to search for objects similar to a query object. We first define the new descriptor using a new partition of the 3-D object within a sphere. We then define a new distance that is used to search for similar objects in the database.
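The abstract does not give the exact construction of the descriptor, but a standard rotation-invariant spherical-harmonic descriptor from the shape-retrieval literature takes the per-degree energy of the expansion of a spherical function. A minimal sketch of that generic construction, assuming a function sampled on a sphere grid:

```python
import numpy as np
from scipy.special import sph_harm  # sph_harm(m, l, azimuth, polar)

def sh_descriptor(f, lmax=4, n=64):
    """Per-degree energies of the spherical-harmonic expansion of a
    spherical function f(polar, azimuth); a generic sketch of a
    rotation-invariant descriptor, not the paper's exact definition."""
    pol = (np.arange(n) + 0.5) * np.pi / n      # polar angle in (0, pi)
    az = np.arange(2 * n) * np.pi / n           # azimuth in [0, 2*pi)
    P, A = np.meshgrid(pol, az, indexing="ij")
    vals = f(P, A)
    dS = np.sin(P) * (np.pi / n) ** 2           # quadrature weights
    desc = []
    for l in range(lmax + 1):
        energy = 0.0
        for m in range(-l, l + 1):
            # coefficient c_lm = integral of f * conj(Y_lm) over the sphere
            c = np.sum(vals * np.conj(sph_harm(m, l, A, P)) * dS)
            energy += abs(c) ** 2
        desc.append(np.sqrt(energy))
    return np.array(desc)
```

The per-degree energies are invariant under rotation of the underlying function, so the Euclidean distance between two descriptor vectors can serve as a similarity measure for retrieval.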

Keywords: 3D indexation, spherical harmonic, similarity of 3D objects, measurement similarity

Procedia PDF Downloads 433