Search results for: experimental simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11281

841 Allelopathic Action of Different Sorghum bicolor [L.] Moench Fractions on Ipomoea grandifolia [Dammer] O'Donell

Authors: Mateus L. O. Freitas, Flávia H. de M. Libório, Letycia L. Ricardo, Patrícia da C. Zonetti, Graciene de S. Bido

Abstract:

Weeds compete with agricultural crops for resources such as light, water, and nutrients. This competition can cause significant damage to agricultural producers, and, currently, the use of agrochemicals is the most effective method for controlling these undesirable plants. Morning glory (Ipomoea grandifolia [Dammer] O'Donell) is an aggressive weed that significantly reduces agricultural productivity and makes harvesting, especially mechanical harvesting, difficult. The biggest challenge in modern agriculture is to preserve high productivity while reducing environmental damage and maintaining soil characteristics. No-till is a sustainable practice that can reduce the use of agrochemicals and environmental impacts because the plant residues left in the soil release allelopathic compounds that reduce the incidence, or alter the growth and development, of crops and weeds. Sorghum (Sorghum bicolor [L.] Moench) is a forage with proven allelopathic activity, mainly owing to its production of sorgoleone. In this context, this research aimed to evaluate the allelopathic action of sorghum fractions obtained with hexane, dichloromethane, butanol, and ethyl acetate on the germination and initial growth of morning glory. The parameters analyzed were the percentage of germination, speed of germination, seedling length, and biomass weight (fresh and dry). The bioassays were performed in Petri dishes kept in an incubation chamber for 7 days at 25 °C with a 12 h photoperiod. The experimental design was completely randomized, with five replicates of each treatment. The data were evaluated by analysis of variance, and the treatment means were compared using the Scott-Knott test at a 5% significance level. The results indicated that the dichloromethane and ethyl acetate fractions showed bioherbicidal effects, promoting effective reductions in the germination and initial growth of morning glory. It was concluded that allelochemicals were probably extracted in these fractions. These secondary metabolites can reduce the use of agrochemicals and the environmental impact, making agricultural production systems more sustainable.
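
The statistical workflow described above (completely randomized design, five replicates, analysis of variance, means comparison at the 5% level) can be illustrated with a minimal sketch. The germination percentages below are hypothetical placeholders, not data from the study, and only the one-way ANOVA stage is shown; the Scott-Knott means-grouping step is omitted since it is not available in SciPy.

```python
# Minimal sketch of the one-way ANOVA stage for a completely randomized design
# with five replicates per treatment. The germination percentages are hypothetical
# placeholders, not data from the study; the Scott-Knott means-grouping step is omitted.
from scipy import stats

treatments = {
    "control":         [92, 90, 94, 91, 93],
    "hexane":          [88, 85, 87, 90, 86],
    "dichloromethane": [41, 38, 45, 40, 43],
    "butanol":         [80, 78, 82, 79, 81],
    "ethyl_acetate":   [47, 50, 44, 48, 46],
}

f_stat, p_value = stats.f_oneway(*treatments.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
if p_value < 0.05:
    print("Treatment effect significant at the 5% level; "
          "means would then be grouped (e.g., by the Scott-Knott test).")
```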

Keywords: allelochemicals, secondary metabolism, sorgoleone, weeds

Procedia PDF Downloads 138
840 Physico-Mechanical Properties of Wood-Plastic Composites Produced from Polyethylene Terephthalate Plastic Bottle Wastes and Sawdust of Three Tropical Hardwood Species

Authors: Amos Olajide Oluyege, Akpanobong Akpan Ekong, Emmanuel Uchechukwu Opara, Sunday Adeniyi Adedutan, Joseph Adeola Fuwape, Olawale John Olukunle

Abstract:

This study was carried out to evaluate the influence of wood species and wood/plastic ratio on the physical and mechanical properties of wood plastic composites (WPCs) produced from polyethylene terephthalate (PET) plastic bottle wastes and sawdust from three hardwood species, namely, Terminalia superba, Gmelina arborea, and Ceiba pentandra. The experimental WPCs were prepared from sawdust particle size classes of ≤ 0.5, 0.5 – 1.0, and 1.0 – 2.0 mm at wood/plastic ratios of 40:60, 50:50 and 60:40 (percentage by weight). The WPCs for each study variable combination were prepared in 3 replicates and laid out in a randomized complete block design (RCBD). The physical properties investigated were water absorption (WA), linear expansion (LE) and thickness swelling (TS), while the mechanical properties evaluated were Modulus of Elasticity (MOE) and Modulus of Rupture (MOR). The mean values for WA, LE and TS ranged from 1.07 to 34.04, 0.11 to 1.76 and 0.11 to 4.05 %, respectively. The mean values of the three physical properties increased with an increase in wood/plastic ratio: the 40:60 wood/plastic ratio at each particle size class generally resulted in the lowest values, while the 60:40 wood/plastic ratio had the highest values for each of the three species. For each of the physical properties, T. superba had the lowest mean values followed by G. arborea, while the highest values were observed for C. pentandra. The mean values for MOE and MOR ranged from 458.17 to 1875.67 and 2.64 to 18.39 N/mm², respectively. The mean values of the two mechanical properties decreased with an increase in wood/plastic ratio: the 40:60 wood/plastic ratio at each wood particle size class generally had the highest values, while the 60:40 wood/plastic ratio had the lowest values for each of the three species. For each of the mechanical properties, C. pentandra had the highest mean values followed by G. arborea, while the lowest values were observed for T. superba. There were improvements in both the physical and mechanical properties with decreasing sawdust particle size class, with the particle size class of ≤ 0.5 mm giving the best results. The results of the analysis of variance revealed significant (P < 0.05) effects of the three study variables – wood species, sawdust particle size class and wood/plastic ratio – on all the physical and mechanical properties of the WPCs. It can be concluded from the results of this study that wood plastic composites from sawdust of particle size ≤ 0.5 mm and PET plastic bottle wastes with acceptable physical and mechanical properties are better produced using a 40:60 wood/plastic ratio, and that at this ratio, all three species are suitable for the production of wood plastic composites.

Keywords: polyethylene terephthalate plastic bottle wastes, wood plastic composite, physical properties, mechanical properties

Procedia PDF Downloads 187
839 Design of Photonic Crystal with Defect Layer to Eliminate Interface Corrugations for Obtaining Unidirectional and Bidirectional Beam Splitting under Normal Incidence

Authors: Evrim Colak, Andriy E. Serebryannikov, Pavel V. Usik, Ekmel Ozbay

Abstract:

In a dielectric photonic crystal (PC) structure that does not include surface corrugations, unidirectional transmission and dual-beam splitting are observed under normal incidence as a result of the strong diffractions caused by the embedded defect layer. The defect layer has twice the period of the regular PC segments that sandwich it. Although the PC has an even number of rows, the structural symmetry is broken due to the asymmetric placement of the defect layer with respect to the symmetry axis of the regular PC. The simulations verify that efficient splitting and the occurrence of strong diffractions are related to the dispersion properties of the Floquet-Bloch modes of the photonic crystal. Unidirectional and bidirectional splitting, which are associated with asymmetric transmission, arise due to the dominant contribution of the first positive and first negative diffraction orders. The effect of the depth of the defect layer is examined by placing a single defect layer in varying rows, preserving the asymmetry of the PC. Even for a deeply buried defect layer, asymmetric transmission remains valid even if the zeroth order is not coupled. This transmission is due to evanescent waves which reach the deeply embedded defect layer and couple to higher order modes. In an additional selected configuration, whichever surface is illuminated, i.e., in both upper- and lower-surface illumination cases, the incident beam is split into two beams of equal intensity at the output surface, and the intensities of the outgoing beams are equal for both illumination cases. That is, although the structure is asymmetric, symmetric bidirectional transmission with equal transmission values is demonstrated, and the structure mimics the behavior of symmetric structures. Finally, simulation studies including the examination of a coupled-cavity defect for two different permittivity values (close to the permittivity values of GaAs or Si, and of alumina) reveal unidirectional splitting over a wider band of operation in comparison to the bandwidth obtained in the case of a single embedded defect layer. Since the dielectric materials that are utilized are low-loss and weakly dispersive in a wide frequency range including microwave and optical frequencies, the studied structures should be scalable to the mentioned ranges.
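
The role of the first positive and negative diffraction orders can be illustrated with the ordinary grating relation for normal incidence, sin(θ_m) = m·λ0/Λ, where Λ is the period of the embedded defect layer. The numbers in the sketch below are hypothetical, chosen only so that exactly the m = 0, ±1 orders propagate; they are not the parameters of the photonic crystal studied in the paper.

```python
# Sketch: which transmitted diffraction orders propagate for a periodic layer of
# period Lambda under normal incidence (grating relation sin(theta_m) = m*lambda0/Lambda).
# The numbers are hypothetical, chosen so that only the m = 0, +1, -1 orders propagate;
# they are not the parameters of the photonic crystal studied in the paper.
import math

lambda0 = 1.0          # free-space wavelength (arbitrary units)
Lambda = 1.6           # period of the embedded defect layer (twice the regular PC period)

for m in range(-3, 4):
    s = m * lambda0 / Lambda
    if abs(s) < 1.0:
        print(f"order m={m:+d}: propagating, theta = {math.degrees(math.asin(s)):6.1f} deg")
    else:
        print(f"order m={m:+d}: evanescent")
```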

Keywords: asymmetric transmission, beam deflection, blazing, bi-directional splitting, defect layer, dual beam splitting, Floquet-Bloch modes, isofrequency contours, line defect, oblique incidence, photonic crystal, unidirectionality

Procedia PDF Downloads 175
838 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on encoding schemes (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) built on low-level image features (e.g., SIFT, HOG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. For scene classification, there are scattered objects of different sizes, categories, layouts, numbers and so on. It is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit the object-centric and scene-centric information, two CNNs trained separately on the ImageNet and Places datasets are used as pre-trained models to extract deep convolutional features at multiple scales. This produces dense local activations. By analyzing the performance of the different CNNs at multiple scales, it is found that each CNN works better in different scale ranges. A scale-wise CNN adaptation is reasonable since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the per-scale representations are then merged into a single vector using a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences. Hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different amounts of features are extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is followed by a linear Support Vector Machine, which is a simple yet efficient way to classify the scene categories. Experimental results show that the scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets can boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which indicates that the representation can be applied to other visual recognition tasks.
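
The scale-wise normalization and late-fusion step can be sketched as follows. The per-scale Fisher vectors here are random placeholders (in the paper they would be built from the multi-scale CNN activations with a GMM-based Fisher kernel), and only per-scale power/L2 normalization, average pooling across scales, and a linear SVM are shown; this is an illustrative sketch, not the authors' implementation.

```python
# Sketch of scale-wise normalization followed by average pooling and a linear SVM.
# The per-scale Fisher vectors are random placeholders; in the paper they are built
# from mid-level CNN activations of ImageNet- and Places-trained networks.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_images, n_scales, fv_dim = 200, 3, 512
fisher_vectors = rng.normal(size=(n_images, n_scales, fv_dim))   # placeholder FVs
labels = rng.integers(0, 10, size=n_images)                      # placeholder classes

def scale_wise_normalize(fv):
    # Power ("signed square-root") normalization, then L2 normalization, per scale.
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    norms = np.linalg.norm(fv, axis=-1, keepdims=True)
    return fv / np.maximum(norms, 1e-12)

normalized = scale_wise_normalize(fisher_vectors)
pooled = normalized.mean(axis=1)   # average pooling across scales -> one vector per image

clf = LinearSVC(C=1.0).fit(pooled, labels)
print("training accuracy on placeholder data:", clf.score(pooled, labels))
```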

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 317
837 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing

Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou

Abstract:

The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, embedded systems-on-a-chip (SoCs) contain a coarse-granularity multi-core CPU (central processing unit) and a mobile GPU (graphics processing unit) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped to the heterogeneous architecture coupling these three processors. The CPU and GPU offload some of the computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in present common scenarios, applications usually utilize only one type of accelerator because the development approaches supporting the collaboration of the heterogeneous processors face challenges. Therefore, a systematic approach is needed that takes advantage of write-once-run-anywhere portability and of the high execution performance of modules mapped to the various architectures, and that facilitates the exploration of the design space. In this paper, a servant-execution-flow model is proposed for the abstraction of the cooperation of the heterogeneous processors; it supports task partition, communication and synchronization. At its first run, the intermediate language represented by the data flow diagram can generate the executable code of the target processor or can be converted into high-level programming languages. The instantiation parameters efficiently control the relationship between the modules and the computational units, including the mapping of the two hierarchical processing units and the adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching is analyzed with implementations on various combinations of these processors. The experimental results show that the heterogeneous computing system achieves performance similar to that of the pure-FPGA implementation and approximately the same energy efficiency while using less than 35% of the resources.
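
As a toy illustration of the general idea of mapping dataflow nodes to heterogeneous processing units under instantiation parameters, the sketch below assigns each stage of a hypothetical oscilloscope pipeline to a backend and a parallelism degree. The node names, backends, and parameters are hypothetical; this does not reproduce the paper's servant-execution-flow model or its intermediate language.

```python
# Toy illustration of mapping dataflow nodes to heterogeneous backends with
# instantiation parameters. The node names, backends, and parameters are hypothetical;
# they do not reproduce the paper's servant-execution-flow model or its intermediate language.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    backend: str          # "fpga", "gpu" or "cpu"
    parallelism: int = 1  # instantiation parameter: data-level parallelism

# A small pipeline loosely inspired by the oscilloscope case study (hypothetical partitioning).
pipeline = [
    Node("acquire_waveform", backend="fpga"),
    Node("contrast_stretching", backend="gpu", parallelism=8),
    Node("render_display", backend="cpu"),
]

def dispatch(node: Node, data):
    # Placeholder for generating/executing backend-specific code from the intermediate form.
    print(f"{node.name}: run on {node.backend} with parallelism {node.parallelism}")
    return data

frame = [0.0] * 1024
for node in pipeline:
    frame = dispatch(node, frame)
```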

Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation

Procedia PDF Downloads 101
836 Lipid Emulsion versus DigiFab in a Rat Model of Acute Digoxin Toxicity

Authors: Cansu Arslan Turan, Tuba Cimilli Ozturk, Ebru Unal Akoglu, Kemal Aygun, Ecem Deniz Kırkpantur, Ozge Ecmel Onur

Abstract:

Although the mechanism of action is not well known, intravenous lipid emulsion (ILE) has been shown to be effective in the treatment of lipophilic drug intoxications. It is thought that ILE probably separates the lipophilic drugs from the target tissue by creating a lipid-rich compartment in the plasma. The second theory is that ILE provides energy to the myocardium through high-dose free fatty acids, activating the voltage-gated calcium channels in the myocytes. In this study, the effects of ILE treatment on digoxin overdose, which is frequently observed in emergency departments, were investigated in an animal model in terms of cardiac side effects and survival. The study was carried out at the Yeditepe University, Faculty of Medicine-Experimental Animals Research Center Labs in December 2015. 40 Sprague-Dawley rats weighing 300-400 g were divided into 5 groups randomly. As pre-treatment, the first group received saline, the second group received lipid, the third group received DigiFab, and the fourth group received DigiFab and lipid. Following that, digoxin was infused into all groups except the control group until death. First arrhythmia and cardiac arrest occurrence times were recorded. As no medication causing arrhythmia was infused, Group 5 was excluded from the statistical analysis performed for the comparisons of first arrhythmia and death times. According to the results, although there was no significant difference in the statistical analysis comparing the four groups, when the rats exposed only to digoxin intoxication were compared with the rats pre-treated with ILE in terms of first arrhythmia and cardiac arrest occurrence times, a significant difference was observed between the groups. According to our results, using DigiFab treatment, intralipid treatment, or intralipid and DigiFab treatment for rats exposed to digoxin intoxication makes no significant difference in terms of the first arrhythmia and death occurrence times. However, it is not possible to say that, at the doses used in this study, ILE treatment is at least as successful as the known antidote. Given that the statistical significance observed between the two groups was not seen in the inter-comparisons of all the groups, the study should be repeated with larger groups.

Keywords: arrhythmia, cardiac arrest, DigiFab, digoxin intoxication

Procedia PDF Downloads 222
835 A Hydrometallurgical Route for the Recovery of Molybdenum from Spent Mo-Co Catalyst

Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra

Abstract:

Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes and in the making of steel alloys owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aimed at its recovery from secondary sources. The main secondary sources of Mo are molybdenum catalysts, which are used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during the desulphurisation process as the catalysts get contaminated with toxic material and are dumped as waste, which leads to environmental issues. In this scenario, recovery of molybdenum from spent catalyst is significant from both the economic and the environmental point of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency and recycling capacity. The present study reports the recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. The spent catalyst was leached with 3.0 mol/L HCl, and the leach liquor containing Mo-870 ppm, Co-341 ppm, Al-508 ppm and Fe-42 ppm was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Results of stripping studies revealed that 2.0 mol/L HNO₃ can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by countercurrent simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1. Around 95.4% extraction of molybdenum was achieved in a two-stage countercurrent run at A/O = 1:1 with negligible extraction of Co and Al. However, iron was coextracted and was removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO₃ in two stages at O/A = 1:1. Overall, ~95.0% molybdenum with 99% purity was recovered from the Mo-Co spent catalyst. From the strip solution, MoO₃ was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM and EDX techniques. The XRD peaks of MoO₃ correspond to the molybdite Syn-MoO₃ structure. FE-SEM depicts the rod-like morphology of the synthesized MoO₃. EDX analysis of MoO₃ shows a 1:3 atomic percentage of molybdenum and oxygen. The synthesised MoO₃ can find application in gas sensors, electrodes of batteries, display devices, smart windows, lubricants and as a catalyst.
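
The reported single-contact extraction (~85% at A/O = 1:1) and the two-stage countercurrent figure can be related with a simple Kremser-type estimate that assumes a constant distribution ratio and ideal stages. The study itself used McCabe-Thiele constructions from measured equilibrium isotherms, so the sketch below is only an illustrative approximation, not the paper's calculation.

```python
# Illustrative Kremser-type estimate relating a single-contact extraction efficiency to
# multi-stage counter-current performance, assuming a constant distribution ratio D and
# ideal stages. The study itself used McCabe-Thiele diagrams from measured isotherms,
# so this is only a rough approximation, not the paper's calculation.
def distribution_ratio(single_stage_extraction, aq_to_org=1.0):
    E = single_stage_extraction
    return E / (1.0 - E) * aq_to_org

def countercurrent_extraction(D, n_stages, org_to_aq=1.0):
    # Kremser equation: fraction remaining in the aqueous phase after n counter-current stages.
    eps = D * org_to_aq                       # extraction factor
    remaining = (eps - 1.0) / (eps ** (n_stages + 1) - 1.0)
    return 1.0 - remaining

D = distribution_ratio(0.85, aq_to_org=1.0)   # ~85% Mo extracted in one contact at A/O = 1:1
for n in (1, 2, 3):
    print(f"{n} stage(s): ~{100 * countercurrent_extraction(D, n):.1f}% Mo extracted")
```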

Keywords: Cyphos IL 102, extraction, spent Mo-Co catalyst, recovery

Procedia PDF Downloads 160
834 Development of a Real-Time Simulink Based Robotic System to Study Force Feedback Mechanism during Instrument-Object Interaction

Authors: Jaydip M. Desai, Antonio Valdevit, Arthur Ritter

Abstract:

Robotic surgery is used to enhance minimally invasive surgical procedures. It provides a greater degree of freedom for surgical tools but lacks a haptic feedback system to provide a sense of touch to the surgeon. Surgical robots work on master-slave operation, where the user is the master and the robotic arms are the slaves. Currently, surgical robots provide precise control of the surgical tools but rely heavily on visual feedback, which sometimes causes damage to inner organs. The goal of this research was to design and develop a real-time Simulink-based robotic system to study the force feedback mechanism during instrument-object interaction. The setup includes three Velmex XSlide assemblies (XYZ stage) for three-dimensional movement, an end-effector assembly for forceps, an electronic circuit for four strain gages, two Novint Falcon 3D gaming controllers, a microcontroller board with linear actuators, and MATLAB and Simulink toolboxes. The strain gages were calibrated using an Imada digital force gauge and tested with a hard-core wire to measure instrument-object interaction in the range of 0-35 N. The designed Simulink model successfully acquires 3D coordinates from the two Novint Falcon controllers and transfers the coordinates to the XYZ stage and forceps. The Simulink model also reads the strain gage signals in real time through the 10-bit analog-to-digital converter of a microcontroller assembly, converts voltage into force, and feeds the output signals back to the Novint Falcon controllers for the force feedback mechanism. The experimental setup allows the user to change forward kinematics algorithms to achieve the best desired movement of the XYZ stage and forceps. This project combines haptic technology with a surgical robot to provide a sense of touch to the user controlling the forceps through a machine-computer interface.
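
The voltage-to-force step can be sketched as a linear calibration of the 10-bit ADC reading against the reference force gauge. The calibration constants and reference voltage below are hypothetical placeholders, not the values obtained in the study.

```python
# Sketch of converting a 10-bit ADC reading from the strain-gage bridge into force using
# a linear calibration against a reference force gauge. The calibration constants and the
# reference voltage are hypothetical placeholders, not values from the study.
ADC_BITS = 10
V_REF = 5.0                      # assumed ADC reference voltage (V)

# Hypothetical calibration obtained by loading 0-35 N against the reference gauge: F = a*V + b
CAL_SLOPE = 14.0                 # N per volt (placeholder)
CAL_OFFSET = -1.5                # N (placeholder)

def adc_to_voltage(adc_count: int) -> float:
    return adc_count * V_REF / (2 ** ADC_BITS - 1)

def voltage_to_force(voltage: float) -> float:
    return max(0.0, CAL_SLOPE * voltage + CAL_OFFSET)

for count in (0, 256, 512, 1023):
    v = adc_to_voltage(count)
    print(f"ADC {count:4d} -> {v:.3f} V -> {voltage_to_force(v):.1f} N")
```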

Keywords: surgical robot, haptic feedback, MATLAB, strain gage, Simulink

Procedia PDF Downloads 523
833 Tests for Zero Inflation in Count Data with Measurement Error in Covariates

Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao

Abstract:

In quality-of-life research, health service utilization is an important determinant of medical resource expenditures on colorectal cancer (CRC) care; a better understanding of the increased utilization of health services is essential for optimizing the allocation of healthcare resources to services and thus for enhancing service quality, especially in regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models can be used, which account for overdispersion or extra zero counts. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Besides, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected under a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156). This suggests that an excess of zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under the classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and Approximate Maximum Likelihood Estimation (AMLE) can then be derived accordingly, which is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect the zero-inflation effect in the ZIP model with measurement error. The proposed test asymptotically follows the standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error. Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In the real data analysis, with or without considering measurement error in covariates, the existing tests and our proposed test all imply that H0 should be rejected with a P-value less than 0.001; i.e., the zero-inflation effect is very significant, and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant; if measurement error in covariates is considered, only another covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs in the ZIP regression model with and without considering measurement error. Conclusion: In our study, compared to the Poisson model, the ZIP model should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account will result in statistically more reliable and precise information.
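
In the simpler setting without covariates or measurement error, the classical score test for zero inflation in a Poisson model (van den Broek, 1995) compares the observed number of zeros with the number expected under the fitted Poisson mean. The sketch below uses the zero counts quoted in the abstract; the sample size is inferred from the reported expected zero frequency (156 ≈ n·e^(-1.33)), and the paper's test, which handles covariates with measurement error, is not reproduced here.

```python
# Sketch of the classical score test for zero inflation in a Poisson model
# (van den Broek, 1995), without covariates or measurement error. The sample size n
# is inferred from the abstract's expected zero count (156 ~= n * exp(-1.33)); the
# paper's test, which handles covariates with measurement error, is not reproduced.
import math

def zero_inflation_score_test(n, n_zeros, mean):
    p0 = math.exp(-mean)                          # P(Y = 0) under Poisson(mean)
    num = (n_zeros - n * p0) ** 2
    den = n * p0 * (1.0 - p0) - n * mean * p0 ** 2
    return num / den                              # asymptotically chi-square with 1 df

lam = 1.33
n = round(156 / math.exp(-lam))                   # ~590 subjects implied by the abstract
stat = zero_inflation_score_test(n, n_zeros=206, mean=lam)
print(f"n ~ {n}, score statistic ~ {stat:.1f} (chi-square_1 critical value 3.84)")
```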

Keywords: count data, measurement error, score test, zero inflation

Procedia PDF Downloads 272
832 An Experimental Investigation of Chemical Enhanced Oil Recovery (CEOR) for Fractured Carbonate Reservoirs, Case Study: Kais Formation on Wakamuk Field

Authors: Jackson Andreas Theo Pola, Leksono Mucharam, Hari Oetomo, Budi Susanto, Wisnu Nugraha

Abstract:

About half of the world's oil reserves are located in carbonate reservoirs, of which 65% are oil wet and 12% intermediate wet [1]. Oil recovery in oil-wet or mixed-wet carbonate reservoirs can be increased by dissolving surfactant into the injected water to change the rock wettability from oil wet to more water wet. The Wakamuk Field, operated by PetroChina International (Bermuda) Ltd. and PT. Pertamina EP in Papua, produces from its main reservoir, the Miocene Kais Limestone. First production commenced in August 2004, and the peak field production of 1456 BOPD occurred in August 2010. It was found to be a complex reservoir system, and up to 2014 cumulative oil production was 2.07 MMBO, less than 9% of OOIP. This performance is indicative of the presence of secondary porosity in addition to matrix porosity, which has a low average porosity of 13% and a permeability of less than 7 mD. Implementing chemical EOR in this case is the best way to increase oil production. However, the selected chemical must be able to lower the interfacial tension (IFT), reduce oil viscosity, and alter the wettability; thus a special chemical treatment named SeMAR has been proposed. Numerous laboratory tests such as phase behavior tests, core compatibility tests, mixture viscosity, contact angle measurements, IFT, imbibition tests and core flooding were conducted on Wakamuk field samples. Based on the spontaneous imbibition results for Wakamuk field core, the SeMAR formulation with composition S12A gave an oil recovery of 43.94% at 1 wt% concentration and a maximum oil recovery of 87.3% at 3 wt% concentration. In addition, the first scenario of the core flooding tests gave an oil recovery of 60.32% at 1 wt% concentration of S12A, and the second scenario gave 96.78% oil recovery at a concentration of 3 wt%. The soaking time of the chemicals has a significant effect on the recovery, and higher chemical concentrations affect larger areas for wettability alteration and therefore give higher oil recovery. The chemical that gives the best overall results from the laboratory tests will also be considered for a huff-and-puff injection trial (pilot project) for increasing oil recovery from the Wakamuk Field.

Keywords: Wakamuk field, chemical treatment, oil recovery, viscosity

Procedia PDF Downloads 681
831 A Re-Evaluation of Green Architecture and Its Contributions to Environmental Sustainability

Authors: Po-Ching Wang

Abstract:

Considering the notable effects of natural resource consumption and the impacts on fragile ecosystems, reflection on contemporary sustainable design is critical. Nevertheless, the idea of ‘green’ has been misapplied and even abused, and, in fact, much damage to the environment has been done in its name. In the popular 1996 science fiction film Independence Day, an alien species, having exhausted the natural resources of one planet, moves on to another – a fairly obvious comment on contemporary human beings’ irresponsible use of the Earth’s natural resources in modern times. In fact, the human ambition to master nature and freely access the world’s resources has long been inherent in the manifestos evinced by productions of the environmental design professions. Ron Herron’s Walking City, an experimental architectural piece of 1964, is one example that comes to mind here. For this design concept, the architect imagined a gigantic nomadic urban aggregate that, by way of an insect-like robotic carrier, would move all over the world, on land and sea, to wherever its inhabitants wanted. Given the contemporary crisis regarding natural resources, ideas pertinent to structuring a sustainable environment have recently been attracting much interest in architecture, a field that has been accused of significantly contributing to ecosystem degradation. Great art, such as the Fallingwater building, has been regarded as nature-friendly, but its notion of ‘green’ might be inadequate in the face of the resource demands made by human populations today. This research suggests a more conservative and scrupulous attitude toward attempts to modify nature for architectural settings. Designs that pursue spiritual or metaphysical interconnections through anthropocentric aesthetics are not sufficient to benefit ecosystem integrity; though high-tech energy-saving processes may contribute to fine-scale sustainability, they may ultimately cause catastrophe at the global scale. Design with frugality is proposed in order to actively reduce the environmental load. The aesthetic taste and ecological sensibility of the design professions and the public alike may have to be reshaped in order to make the goals of environmental sustainability viable.

Keywords: anthropocentric aesthetic, aquarium sustainability, biosphere 2, ecological aesthetic, ecological footprint, frugal design

Procedia PDF Downloads 198
830 Study of Motion of Impurity Ions in Poly(Vinylidene Fluoride) from the Viewpoint of the Microstructure of the Polymer Solid

Authors: Yuichi Anada

Abstract:

The electrical properties of polymer solids are characterized by dielectric relaxation phenomena. The complex permittivity shows a strong dependence on the frequency of the external stimulation over the broad frequency range from 0.1 mHz to 10 GHz. The complex-permittivity dispersion gives us a lot of useful information about the molecular motion of polymers and the structure of polymer aggregates. However, the large dispersion of the permittivity at low frequencies due to the DC conduction of impurity ions often masks the dielectric relaxation in the polymer solid. In experimental investigations, many researchers have long tried to remove the DC conduction experimentally or analytically. On the other hand, our laboratory chose another way of approaching this problem, based on a reversal in thinking. Our approach is to use the impurity ions responsible for the DC conduction as a probe to detect the motion of polymer molecules and to investigate the structure of polymer aggregates. In addition to the complex permittivity, the electric modulus and the conductivity relaxation time are strong tools for investigating the ionic motion underlying DC conduction. In the non-crystalline part of melt-crystallized polymers, free spaces of inhomogeneous size exist between crystallites. As the impurity ions reside in the non-crystalline part and move through these inhomogeneous free spaces, the motion of the ions reflects the microstructure of the non-crystalline part. The ionic motion of impurity ions in poly(vinylidene fluoride) (PVDF) is investigated in this study. The frequency dependence of the loss permittivity of PVDF shows the characteristic of direct current (DC) conduction below 1 kHz at 435 K. The electric modulus-frequency curve shows a dispersion characterized by a single conductivity relaxation time; namely, it is a Debye-type dispersion. The conductivity relaxation time analyzed from this curve is 0.00003 s at 435 K. From the plot of the conductivity relaxation time of PVDF together with those of other polymers against permittivity, it was found that there are two groups of polymers: one group is characterized by a small conductivity relaxation time and a large permittivity, and the other by a large conductivity relaxation time and a small permittivity.
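
The electric-modulus analysis referred to above follows from M* = 1/ε*. The sketch below treats an idealized response consisting of DC conduction plus a constant permittivity, for which the M'' peak frequency gives the conductivity relaxation time τ = ε0·ε_s/σ_dc. The material constants are hypothetical placeholders chosen only so that τ comes out of the order of 3×10⁻⁵ s; they are not the measured PVDF values.

```python
# Sketch of the electric-modulus analysis M* = 1/eps* for an idealized response consisting
# of DC conduction plus a constant permittivity eps_s. The M'' peak then gives the
# conductivity relaxation time tau = eps0*eps_s/sigma_dc. The values of eps_s and
# sigma_dc are hypothetical placeholders chosen to give tau of the order of 3e-5 s.
import numpy as np

eps0 = 8.854e-12          # vacuum permittivity (F/m)
eps_s = 10.0              # assumed relative permittivity (placeholder)
sigma_dc = 3.0e-6         # assumed DC conductivity (S/m, placeholder)

f = np.logspace(0, 7, 2000)                             # 1 Hz .. 10 MHz
omega = 2 * np.pi * f
eps_complex = eps_s - 1j * sigma_dc / (omega * eps0)    # eps* = eps' - i*eps''
M = 1.0 / eps_complex                                   # electric modulus M* = M' + i*M''

f_peak = f[np.argmax(M.imag)]
tau = 1.0 / (2 * np.pi * f_peak)
print(f"M'' peak at {f_peak:.3g} Hz -> conductivity relaxation time tau ~ {tau:.2e} s")
print(f"analytical tau = eps0*eps_s/sigma_dc = {eps0 * eps_s / sigma_dc:.2e} s")
```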

Keywords: conductivity relaxation time, electric modulus, ionic motion, permittivity, poly(vinylidene fluoride), DC conduction

Procedia PDF Downloads 161
829 Evaluation of Nutrient Intake, Body Weight Gain and Carcass Characteristics of Growing Washera Lamb Fed Grass Hay as a Basal Diet with Supplementation of Dried Atella and Niger Seed Cake in Different Combinations

Authors: Fana Woldetsadik

Abstract:

Ethiopia has a huge livestock population, including sheep, that contributes a considerable portion to the economy of the country and still promises to support its economic advancement. However, feed shortage is a limiting factor in the production and productivity of sheep among Ethiopian smallholder farmers. Therefore, the aim of this study was to assess the role of the locally available brewery by-product called dried Atella as a supplement, in terms of feed intake, digestibility, live weight gain, carcass yield, and economic benefit, in comparison with the commercially purchased supplement known as niger seed cake (NSC). This on-station feeding experiment was conducted on the Zenzelma Campus of the Bahir Dar University animal farm. The experimental design used for this research was a completely randomized design (CRD) with five replications. The crude protein (CP) contents of dried Atella, wheat bran (WB), natural pasture hay (NPH) and NSC were about 25.07%, 16.57%, 4.48% and 38.04%, respectively, while the neutral detergent fibre (NDF), acid detergent fibre (ADF) and acid detergent lignin (ADL) contents of dried Atella, WB, NPH and NSC were around 31.75%, 8.31%, 8.14%; 42.05%, 22.64%, 4.04%; 74.21%, 50.81%, 8.66%; and 42.31%, 26.95%, 6.9%, respectively. The results showed higher (P < 0.001) feed intake, nutrient intake, and digestibility for lambs supplemented with Atella than for those supplemented with NSC. Furthermore, daily body weight gain and carcass characteristics were better (P < 0.05) for the sheep supplemented with dried Atella than with NSC. On the other hand, in terms of profitability, although there was no substantial difference (P > 0.05) between T2 (animals fed NPH, NSC and WB) and T3 (animals fed NPH, Atella and WB), a slightly better benefit was recorded in the T3 group. However, a loss was recorded in T1 (animals fed NPH and WB). Hence, from the biological performance of the lambs, it was concluded that Atella could be a better supplementary feed for sheep fattening among smallholder farmers than NSC, despite the lack of a profitability difference. Nevertheless, further investigation is recommended to examine the consequences of supplementation of NPH with NSC and of NPH with Atella on the fatty acid profile, the physicochemical composition of the meat, and meat composition.

Keywords: Atella, Bahir Dar University, carcass yield, digestibility, natural pasture hay, niger seed cake, smallholder farmers, weight gain, Ethiopia

Procedia PDF Downloads 130
828 A Regression Model for Predicting Sugar Crystal Size in a Fed-Batch Vacuum Evaporative Crystallizer

Authors: Sunday B. Alabi, Edikan P. Felix, Aniediong M. Umo

Abstract:

Crystal size distribution is of great importance in sugar factories. It determines the market value of granulated sugar and also influences the cost of production of sugar crystals. Typically, sugar is produced using a fed-batch vacuum evaporative crystallizer. The crystallization quality is examined through the crystal size distribution at the end of the process, which is quantified by two parameters: the average crystal size of the distribution, the mean aperture (MA), and the width of the distribution, the coefficient of variation (CV). The lack of real-time measurement of the sugar crystal size hinders its feedback control and the eventual optimisation of the crystallization process. An attractive alternative is to use a soft sensor (model-based method) for online estimation of the sugar crystal size. Unfortunately, the available models for the sugar crystallization process are not suitable, as they do not contain variables that can be measured easily online. The main contribution of this paper is the development of a regression model for estimating the sugar crystal size as a function of input variables which are easy to measure online. This has the potential to provide real-time estimates of crystal size for its effective feedback control. Using 7 input variables, namely initial crystal size (L₀), temperature (T), vacuum pressure (P), feed flowrate (Ff), steam flowrate (Fs), initial supersaturation (S₀) and crystallization time (t), preliminary studies were carried out using Minitab 14 statistical software. Based on the existing sugar crystallizer models and the typical ranges of these 7 input variables, 128 datasets were obtained from a 2-level factorial experimental design. These datasets were used to obtain a simple but online-implementable 6-input crystal size model, as the initial crystal size (L₀) was found not to play a significant role. The goodness of the resulting regression model was evaluated. The coefficient of determination, R², was obtained as 0.994, and the maximum absolute relative error (MARE) was obtained as 4.6%. The high R² (~1.0) and the reasonably low MARE value are an indication that the model is able to predict the sugar crystal size accurately as a function of the 6 easy-to-measure online variables. Thus, the model can be used as a soft sensor to provide real-time estimates of the sugar crystal size during the sugar crystallization process in a fed-batch vacuum evaporative crystallizer.
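
The goodness-of-fit measures quoted above are straightforward to compute once the factorial dataset and a fitted linear model are available. In the sketch below, the data are random placeholders standing in for the 128 factorial runs, and the 6 regressors stand in for the easy-to-measure inputs (T, P, Ff, Fs, S₀, t); the coefficients are not those of the study.

```python
# Sketch of fitting a 6-input linear regression and computing the goodness-of-fit measures
# used in the abstract: coefficient of determination R^2 and maximum absolute relative
# error (MARE). The data are random placeholders standing in for the 128 factorial runs.
import numpy as np

rng = np.random.default_rng(1)
n_runs, n_inputs = 128, 6                      # 2-level factorial design, 6 regressors
X = rng.uniform(-1.0, 1.0, size=(n_runs, n_inputs))   # coded T, P, Ff, Fs, S0, t
true_coef = np.array([0.8, -0.3, 0.5, 0.2, 0.6, 0.4])
y = 5.0 + X @ true_coef + rng.normal(scale=0.02, size=n_runs)   # placeholder crystal size

A = np.column_stack([np.ones(n_runs), X])      # design matrix with intercept
beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary least squares fit
y_hat = A @ beta

ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
mare = np.max(np.abs((y - y_hat) / y)) * 100.0

print(f"R^2 = {r2:.3f}, MARE = {mare:.2f}%")
```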

Keywords: crystal size, regression model, soft sensor, sugar, vacuum evaporative crystallizer

Procedia PDF Downloads 198
827 A Multilingual Model in the Multicultural World

Authors: Marina Petrova

Abstract:

Language policy issues related to the preservation and development of the native languages of the Russian peoples and the state languages of the national republics are increasingly becoming the focus of attention of educators, parents, and public and national figures. Is it legal to teach the national language or the mother tongue as the state language? Because of that dispute, language-phobic moods easily evolve into xenophobia among the population. However, a civilized, intelligent multicultural personality can only be formed if the country develops bilingualism and multilingualism, and languages, as a political tool, help to find ‘keys’ to fairly closed national communities both within a poly-ethnic state and in the internal relations of multilingual countries. The purpose of this study is to design and theoretically substantiate an efficient model of language education in the innovatively developing Republic of Sakha. 800 participants from different educational institutions of Yakutia worked on developing a multilingual model of education. This investigation is of considerable practical importance because researchers could build a methodical system designed to create the conditions for the formation of a cultural language personality and for the development of the multilingual communicative competence of Yakut youth, which is necessary for communication in their native language, Russian, and foreign languages. The selected methodology of humane-personal and competence approaches is reliable and valid. Of special note is the application of theoretical and empirical research methods, a combination of academic analysis of the problem and experienced training, the positive results of the experimental work, representative series, and the correct processing and statistical reliability of the obtained data. This ensures the validity of the investigation’s findings as well as their broad introduction into the practice of life-long language education.

Keywords: intercultural communication, language policy, multilingual and multicultural education, the Sakha Republic of Yakutia

Procedia PDF Downloads 213
826 Morphological Differentiation and Temporal Variability in Essential Oil Yield and Composition among Origanum vulgare ssp. hirtum L., Origanum onites L. and Origanum x intercedens from Ikaria Island (Greece)

Authors: A. Assariotakis, P. Vahamidis, P. Tarantilis, G. Economou

Abstract:

Greece, due to its geographical location and its particular climatic conditions, presents a high biodiversity of medicinal and aromatic plants. Among them, the genus Origanum not only presents a wide distribution but also has great economic importance. After extensive surveys on Ikaria Island (Greece), 3 species of the genus Origanum were identified, namely Origanum vulgare ssp. hirtum (Greek oregano), Origanum onites (Turkish oregano) and Origanum x intercedens, a naturally occurring hybrid between O. hirtum and O. onites. The purpose of this study was to determine their morphological as well as their temporal variability in essential oil yield and composition under field conditions. For this reason, a plantation of each species was created using vegetative propagation and was established at the experimental field of the Agricultural University of Athens (A.U.A.). From the establishment year and for the following two years (3 years of observations), several observations were taken during each growing season with the purpose of identifying the morphological differences among the studied species. Each year, the plant material collected at the bloom stage was air-dried at room temperature in the shade. The essential oil content was determined by hydrodistillation using a Clevenger-type apparatus. The chemical composition of the essential oils was investigated by Gas Chromatography-Mass Spectrometry (GC-MS). Significant differences were observed among the three oregano species in terms of plant height, leaf size and inflorescence features, as well as in their biological cycle. The O. intercedens inflorescence presented more similarities with O. hirtum than with O. onites. It was found that calyx morphology could serve as a clear distinguishing feature between O. intercedens and O. hirtum: the calyx in O. hirtum presents five isometric teeth, whereas in O. intercedens it presents two longer and three shorter teeth. Essential oil content was significantly affected by genotype and year. O. hirtum presented a higher essential oil content than the other two species during the first year of cultivation; however, during the second year the hybrid (O. intercedens) recorded the highest values. Carvacrol, p-cymene and γ-terpinene were the main essential oil constituents of the three studied species. In O. hirtum the carvacrol content varied from 84.28 to 93.35%, in O. onites from 86.97 to 91.89%, whereas O. intercedens recorded the highest carvacrol content, from 89.25 to 97.23%.

Keywords: variability, oregano biotypes, essential oil, carvacrol

Procedia PDF Downloads 119
825 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression

Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin

Abstract:

This study aims to determine the impact of the disclosure of the flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry. On the other hand, opponents argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. The residential property sales data for 2013 to 2016 used in this study are collected from the actual sales price registration system of the Department of Land Administration (DLA). The result shows that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure. But the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. Also, the result shows that the impact of flood potential differs by the severity and frequency of precipitation. The negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential. The result indicates that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. The classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatially related variables, and the heterogeneity concern regarding the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model. This study tries to deal with the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A series of studies applying GWR indicates that the hedonic price of certain environmental assets varies spatially. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatially related variables might bias the results of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation at the same time. The main policy implication of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses. The effect of flood prevention might vary dramatically by location.
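
The geographically weighted regression step can be illustrated with a minimal hand-rolled version: at each location an ordinary least-squares fit is computed with observations weighted by a Gaussian kernel of their distance to that location, so the coefficient on the flood-potential variable is allowed to vary over space. The data and bandwidth below are synthetic placeholders, and the spatial fixed effects combined with GWR in the study are not reproduced here.

```python
# Minimal hand-rolled geographically weighted regression (GWR): at each location an OLS
# fit is computed with observations weighted by a Gaussian kernel of distance, so the
# coefficient on the flood-potential variable can vary over space. The data and the
# bandwidth are synthetic placeholders; the paper additionally uses spatial fixed effects.
import numpy as np

rng = np.random.default_rng(2)
n = 300
coords = rng.uniform(0.0, 10.0, size=(n, 2))            # property locations (km)
flood = rng.uniform(0.0, 1.0, size=n)                   # flood-potential indicator
floor_area = rng.uniform(20.0, 120.0, size=n)           # control variable
# Synthetic log price with a spatially varying flood discount (stronger in the west).
local_effect = -0.4 + 0.03 * coords[:, 0]
log_price = 10.0 + 0.01 * floor_area + local_effect * flood + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), flood, floor_area])
bandwidth = 2.0                                          # Gaussian kernel bandwidth (km)

def gwr_coefficients(target_xy):
    d = np.linalg.norm(coords - target_xy, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)              # Gaussian weights
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ log_price)
    return beta                                          # [intercept, flood, floor_area]

for x0 in (1.0, 5.0, 9.0):
    beta = gwr_coefficients(np.array([x0, 5.0]))
    print(f"at x = {x0:.0f} km: local flood coefficient ~ {beta[1]:+.3f}")
```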

Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression

Procedia PDF Downloads 283
824 An Evaluation of the Lae City Road Network Improvement Project

Authors: Murray Matarab Konzang

Abstract:

The Lae Port Development Project, the Four Lane Highway and other developments in the extraction industry which have direct road links to Lae City are predicted to have a significant impact on its road network system. This paper evaluates the Lae road improvement program, with forecasts on planning and economics and on the installation of bypasses to ease congestion, provide an effective and convenient transport service for bulk goods, and reduce travel time. A land-use transportation study and plans for a local area traffic management scheme will be considered. City roads are faced with increased traffic volumes and with inadequate road pavement widths, poor transport plans, and facilities that do not meet this transportation demand. Lae also has a drainage system which might not hold a 100-year flood. Proper evaluation, planning, design and intersection analysis are needed to assess the road network system and thus recommend improvements and estimate future growth. Repetitive and cyclic loading by heavy commercial vehicles with different axle configurations acts on the flexible pavement, which weakens and tears the pavement surface so that small cracks occur. Rainwater seeps through and, over time, creates potholes. Effective planning starts from experimental research and appropriate design standards to enable firm embankments, proper drains and quality pavement material. This paper will address traffic problems as well as road pavement, the capacities of intersections, and pedestrian flow during peak hours. The outcome of this research will be to identify heavily trafficked road sections and recommend treatments to reduce traffic congestion, a road classification, and proposals for bypass routes and improvements. The first part of this study describes transport- and traffic-related problems within the city. The second part identifies the challenges imposed by traffic and road-related problems, and the third recommends solutions after analyzing traffic data that indicate the current capacities of road intersections, and finally recommends treatments for improvement and future growth.

Keywords: Lae, road network, highway, vehicle traffic, planning

Procedia PDF Downloads 350
823 Recursion, Merge and Event Sequence: A Bio-Mathematical Perspective

Authors: Noury Bakrim

Abstract:

Formalization is indeed a foundational issue in Mathematical Linguistics, as demonstrated by the pioneering works. While dialoguing with this frame, we nonetheless propose, in our approach to language as a real object, a mathematical linguistics/biosemiotics defined as a dialectical synthesis between induction and computational deduction. Therefore, relying on the parametric interaction of cycles, rules, and features giving way to a sub-hypothetic biological point of view, we first hypothesize a factorial equation as an explanatory principle within Category Mathematics of the Ergobrain: our computation proposal of Universal Grammar rules per cycle or a scalar determination (multiplying right/left columns of the determinant matrix and right/left columns of the logarithmic matrix) of the transformable matrix for rule addition/deletion and cycles within representational mapping/cycle heredity, based on the factorial example, being the logarithmic exponent or power of rule deletion/addition. This enables us to propose an extension of the minimalist merge/label notions to a Language Merge (as a computing principle) within cycle recursion relying on a combinatorial mapping of rule hierarchies on the external Entax of the Event Sequence. Therefore, to define combinatorial maps as the language merge of features and combinatorial hierarchical restrictions (governing, commanding, and other rules), we secondly hypothesize from our results feature/hierarchy exponentiation on a graph representation deriving from Gromov's Symbolic Dynamics, where combinatorial vertices from Fe are set to combinatorial vertices of Hie and edges from Fe to Hie such that, for every combinatorial group, there are restriction maps representing different derivational levels that are subgraphs: the intersection on I defines pullbacks and deletion rules (under restriction maps), then under disjunction edges H such that, for the combinatorial map P belonging to the Hie exponentiation by intersection, there are pullbacks and projections that are equal to the restriction maps RM₁ and RM₂. The model will draw on experimental biomathematics as well as structural frames, with a focus on Amazigh and English (cases from phonology/micro-semantics and syntax) and the shift from structure to event (especially the Amazigh formant principle resolving its morphological heterogeneity).

Keywords: rule/cycle addition/deletion, bio-mathematical methodology, general merge calculation, feature exponentiation, combinatorial maps, event sequence

Procedia PDF Downloads 115
822 A Hydrometallurgical Route for the Recovery of Molybdenum from Mo-Co Spent Catalyst

Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra

Abstract:

Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes and in the making of steel alloys owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aimed at its recovery from secondary sources. The main secondary sources of Mo are molybdenum catalysts, which are used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during the desulphurisation process as the catalysts get contaminated with toxic material and are dumped as waste, which leads to environmental issues. In this scenario, recovery of molybdenum from spent catalyst is significant from both the economic and the environmental point of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency and recycling capacity. The present study reports the recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. The spent catalyst was leached with 3 mol/L HCl, and the leach liquor containing Mo-870 ppm, Co-341 ppm, Al-508 ppm and Fe-42 ppm was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Results of stripping studies revealed that 2 mol/L HNO3 can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by countercurrent simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1. Around 95.4% extraction of molybdenum was achieved in a two-stage countercurrent run at A/O = 1:1 with negligible extraction of Co and Al. However, iron was coextracted and was removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO3 in two stages at O/A = 1:1. Overall, ~95.0% molybdenum with 99% purity was recovered from the Mo-Co spent catalyst. From the strip solution, MoO3 was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM and EDX techniques. The XRD peaks of MoO3 correspond to the molybdite Syn-MoO3 structure. FE-SEM depicts the rod-like morphology of the synthesized MoO3. EDX analysis of MoO3 shows a 1:3 atomic percentage of molybdenum and oxygen. The synthesised MoO3 can find application in gas sensors, electrodes of batteries, display devices, smart windows, lubricants and as a catalyst.

Keywords: Cyphos IL 102, extraction, Mo-Co spent catalyst, recovery

Procedia PDF Downloads 256
821 Effects of Lateness Gene on Yield and Related Traits in Indica Rice

Authors: B. B. Rana, M. Yokota, Y. Shimizu, Y. Koide, I. Takamure, T. Kawano, M. Murai

Abstract:

Various genes which control or affect heading time have been found in rice. Among them, the Se1 and E1 loci play important roles in determining heading time by controlling photosensitivity. An isogenic-line pair of late and early lines was developed from progenies of the F1 from Suweon 258 × 36U. A lateness gene tentatively designated as "Ex" was found to control the difference in heading time between the early and late lines mentioned above. The present study was conducted to examine the effect of Ex on yield and related traits. The indica-type variety Suweon 258 was crossed with 36U, which is an Ur1 (Undulate rachis-1) isogenic line of IR36. In the F2 population, comparatively early-heading, late-heading and intermediate-heading plants segregated. Segregation into the same three heading types was observed in the F3 and later generations. A late-heading plant and an early-heading plant were selected in the F8 population from an intermediate-heading F7 plant for developing L and E of the isogenic-line pair, respectively. Experiments on L and E were conducted in a randomized block design with three replications. Transplanting was conducted on May 3 at a planting distance of 30 cm × 15 cm with two seedlings per hill in an experimental field of the Faculty of Agriculture, Kochi University. Chemical fertilizers containing N, P2O5 and K2O were applied at total nitrogen levels of 4 g/m2, 9 g/m2 and 18 g/m2, denoted by "N4", "N9" and "N18", respectively. Yield, yield components and other traits were measured. Ex delayed 80%-heading by 17 or 18 days in L as compared with E. In total brown rice yield (g/m2), L gave 635, 606 and 590, and E gave 577, 548 and 501, respectively, at N18, N9 and N4, indicating that Ex increased this trait by 10% to 18%. Ex increased the yield on a 1.5 mm sieve (g/m2) by 9% to 15% at the three fertilizer levels. Ex increased the spikelet number per panicle by 16% to 22%. As a result, the spikelet number per m2 was increased by 11% to 18% at the three fertilizer levels. Ex decreased the 1000-grain weight (g) by 2 to 4%. L was not significantly different from E in ripened-grain percentage, fertilized-spikelet percentage or the percentage of ripened grains to fertilized spikelets. Hence, it is inferred that Ex increased yield by increasing the spikelet number per panicle. Thus, Ex could be utilized to develop high-yielding varieties for warmer districts.

Keywords: heading time, lateness gene, photosensitivity, yield, yield components

Procedia PDF Downloads 187
820 Particle Deflection in a PDMS Microchannel Caused by a Plane Travelling Surface Acoustic Wave

Authors: Florian Keipert, Hagen Schmitd

Abstract:

The size-selective separation of different species in a microfluidic system is a topical task in biological and medical research. Previous works dealt with the utilisation of the acoustic radiation force (ARF) caused by a plane travelling surface acoustic wave (tSAW). In the literature, the ARF is described by a dimensionless parameter κ, which depends on the wavelength and the particle diameter. To our knowledge, research has been done for values 0.2 < κ < 5.8, showing that the ARF dominates the acoustic streaming force (ASF) for κ > 1.2. As a consequence, particle separation is limited by κ. In addition, the dependence on the electrical power level has been examined, but only for κ > 1, showing an increased particle deflection for higher electrical power levels. Nevertheless, a detailed study of the ASF and ARF, especially for κ < 1, is still missing. In our setup we used a tSAW with a wavelength λ = 90 µm and 3 µm PS particles, corresponding to κ = 0.3. With this setup, the influence of the applied electrical power level on the particle deflection in a polydimethylsiloxane microchannel was investigated. Our results show an increased particle deflection for an increased electrical power level, which coincides with the reported results for κ > 1. Therefore, in contrast to the literature, particle separation is also possible for lower κ values, and the experimental setup can generally be simplified by matching the electrical power level to the specific particle size. Furthermore, this raises the question of whether the particle deflection is caused only by the ARF, as assumed so far, or by the ASF or the sum of both forces. To investigate this, a 0%-24% saline solution was used, so that the mismatch between the compressibility of the PS particles and the working fluid could be changed. This makes it possible to change the relative strength of the ARF and ASF and consequently the particle deflection. We observed a decrease in particle deflection with increasing NaCl content up to a 12% saline solution and an increase thereafter. This observation can be explained by the acoustic contrast factor Φ, which depends on the compressibility mismatch. The compressibility of water is increased by the NaCl, and the range of a 0%-24% saline solution covers the compressibility of the PS particles. Hence, the particle deflection reaches a minimum when the compressibility of the PS particles matches that of the saline solution. This minimum value can be taken as the particle deflection caused by the ASF alone. Knowing the particle deflection due to the ASF, the deflection caused by the ARF can be calculated, and thus finally the relation between both forces. In conclusion, particle deflection, and therefore size-selective particle separation, generated by a tSAW can be achieved for values κ < 1, simplifying current setups by adjusting the electrical power level. Beyond this, we studied for the first time the relative strength of the ARF and ASF to characterise the particle deflection in a microchannel.
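
For orientation, the sketch below evaluates the dimensionless parameter κ for the configuration above, taking κ = πd/λ_f with λ_f the acoustic wavelength in the fluid. The SAW velocity of the substrate and the sound speed of water are assumed values for illustration and are not given in the abstract.

    import math

    # Minimal sketch (assumptions flagged): kappa is commonly taken as
    # kappa = pi * d / lambda_fluid, with lambda_fluid the acoustic wavelength
    # in the liquid at the SAW excitation frequency.
    lambda_saw = 90e-6        # SAW wavelength on the substrate (from the abstract)
    d_particle = 3e-6         # PS particle diameter (from the abstract)
    v_saw = 3990.0            # assumed SAW velocity of a LiNbO3 substrate, m/s
    c_water = 1497.0          # assumed sound speed in water, m/s

    f = v_saw / lambda_saw                 # excitation frequency (~44 MHz)
    lambda_fluid = c_water / f             # wavelength in the fluid (~34 um)
    kappa = math.pi * d_particle / lambda_fluid
    print(f"kappa = {kappa:.2f}")          # ~0.3, consistent with the text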

Keywords: ARF, ASF, particle separation, saline solution, tSAW

Procedia PDF Downloads 248
819 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN

Authors: Mohamed Gaafar, Evan Davies

Abstract:

Many municipalities within Canada and abroad use chloramination to disinfect drinking water so as to avert the production of the disinfection by-products (DBPs) that result from conventional chlorination processes, and their consequential public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced into stormwater sewers from different water uses and thus reach freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl and to study the parameters affecting its decomposition in stormwater networks. Therefore, the current study was intended to investigate this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay. The work presented here is only the first stage of this study. The 30th Avenue basin in southern Edmonton was chosen as a case study because this well-developed basin has various land-use types, including commercial, industrial, residential, parks and recreational. The City of Edmonton had already built a MIKE URBAN stormwater model for modelling floods. Nevertheless, this model was built only to the trunk level, meaning that only the main drainage features were represented. Additionally, the model was not calibrated and was known to consistently compute pipe flows higher than the observed values, which is not suitable for studying water quality. The first goal was therefore to complete and update the modelling of all stormwater network components. Then, available GIS data were used to calculate different catchment properties such as slope, length and imperviousness. To calibrate and validate this model, data from two temporary pipe-flow monitoring stations, collected during the previous summer, were used along with records from two other permanent stations available for eight consecutive summer seasons. The effect of various hydrological parameters on model results was investigated. It was found that model results were affected by the ratio of impervious area. The catchment length, however calculated, was also tested, because it is an approximate representation of the catchment shape, and the surface roughness coefficients were calibrated. Consequently, computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815 with the observations, the lower value pertaining to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and -1.63, respectively, were all within acceptable ranges.
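
The goodness-of-fit measures quoted above can be reproduced from paired observed and simulated flow series along the lines of the hedged sketch below; the short arrays are placeholders, not monitoring-station data.

    import numpy as np

    # Hypothetical paired series (placeholders, not project data): observed and
    # simulated flows at one monitoring station, same units and time step.
    obs = np.array([0.10, 0.35, 0.80, 1.20, 0.90, 0.40, 0.15])
    sim = np.array([0.12, 0.30, 0.85, 1.18, 0.95, 0.38, 0.17])

    r = np.corrcoef(obs, sim)[0, 1]                           # correlation coefficient
    peak_error = (sim.max() - obs.max()) / obs.max() * 100    # % error in peak flow
    volume_error = (sim.sum() - obs.sum()) / obs.sum() * 100  # % error in runoff volume
    max_pos_diff = (sim - obs).max()                          # maximum positive difference
    max_neg_diff = (sim - obs).min()                          # maximum negative difference
    print(r, peak_error, volume_error, max_pos_diff, max_neg_diff)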

Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN

Procedia PDF Downloads 283
818 Vibroacoustic Modulation of Wideband Vibrations and Its Possible Application for Windmill Blade Diagnostics

Authors: Abdullah Alnutayfat, Alexander Sutin, Dong Liu

Abstract:

Wind turbines have become one of the most popular means of energy production. However, blade failure and maintenance costs have evolved into significant issues in the wind power industry, so it is essential to detect initial blade defects in order to avoid the collapse of the blades and the structure. This paper aims to exploit the modulation of high-frequency blade vibrations by the low-frequency blade rotation, which is close to the known vibro-acoustic modulation (VAM) method. The high-frequency wideband blade vibration is produced by the interaction of the blade surfaces with the environmental air turbulence, and the low-frequency modulation is produced by the alternating bending stress due to gravity. The low-frequency load of rotating wind turbine blades ranges between 0.2 and 0.4 Hz and can reach up to 2 Hz in strong wind. The main difference between this study and previous work on VAM methods is the use of a wideband vibration signal derived from the blade's natural vibrations. Different features of the vibroacoustic modulation are considered using a simple model of a breathing crack. This model considers a simple mechanical oscillator whose parameters vary due to the low-frequency blade rotation. During the blade's operation, the internal stress caused by the weight of the blade modifies the crack's elasticity and damping. A laboratory experiment using steel samples demonstrates the possibility of VAM with a wideband probe noise signal. A small-amplitude cyclic load was applied to the damaged test sample as the pump wave, and a small transducer generated a wideband probe wave. Demodulation of the received signal was conducted using the Detection of Envelope Modulation on Noise (DEMON) approach. In addition, the experimental results were compared with the modulation index (MI) technique based on a harmonic pump wave. The wideband and traditional VAM methods demonstrated similar sensitivity for early detection of invisible cracks. Importantly, employing a wideband probe signal with the DEMON approach speeds up and simplifies testing, since it eliminates the need to conduct tests repeatedly for various harmonic probe frequencies and to adjust the probe frequency.
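
A minimal sketch of the DEMON-style processing described above is given below: band-pass the received wideband signal, take its envelope with the Hilbert transform, and look for spectral lines in the envelope at the pump frequency. All signal parameters here are illustrative assumptions, not values from the experiment.

    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    # Illustrative synthetic signal: wideband probe noise whose amplitude is
    # weakly modulated at a low pump frequency (as a breathing crack would do).
    fs, T = 50_000, 10.0                      # sampling rate (Hz), duration (s)
    t = np.arange(0, T, 1 / fs)
    f_pump = 2.0                              # assumed low-frequency pump (Hz)
    probe = np.random.randn(t.size)           # wideband probe noise
    received = (1 + 0.1 * np.sin(2 * np.pi * f_pump * t)) * probe

    # Band-pass around the probe band, then take the envelope (DEMON-style).
    b, a = butter(4, [2_000, 20_000], btype="band", fs=fs)
    band = filtfilt(b, a, received)
    envelope = np.abs(hilbert(band))

    # The envelope spectrum should show a line at the pump frequency.
    spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
    print("strongest envelope line near", freqs[spec.argmax()], "Hz")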

Keywords: vibro-acoustic modulation, detection of envelope modulation on noise, damage, turbine blades

Procedia PDF Downloads 88
817 A Local Tensor Clustering Algorithm to Annotate Uncharacterized Genes with Many Biological Networks

Authors: Paul Shize Li, Frank Alber

Abstract:

A fundamental task of clinical genomics is to unravel the functions of genes and their associations with disorders. Although experimental biology has made great efforts over the past decades to discover and elucidate the molecular mechanisms of individual genes, about 40% of human genes still have unknown functions, not to mention the diseases they may be related to. For biologists interested in a particular gene with unknown functions, a powerful computational method tailored to inferring the functions and disease relevance of uncharacterized genes is strongly needed. Studies have shown that genes strongly linked to each other in multiple biological networks are more likely to have similar functions. This indicates that the densely connected subgraphs in multiple biological networks are useful for the functional and phenotypic annotation of uncharacterized genes. Therefore, in this work, we have developed an integrative network approach to identify frequent local clusters, defined as densely connected subgraphs that occur frequently in multiple biological networks and contain the query gene that has few or no disease or function annotations. This is a local clustering algorithm that models multiple biological networks sharing the same gene set as a three-dimensional matrix, the so-called tensor, and employs a tensor-based optimization method to efficiently find the frequent local clusters. Specifically, massive public gene expression data sets that comprehensively cover dynamic, physiological and environmental conditions are used to generate hundreds of gene co-expression networks. By integrating these gene co-expression networks, the proposed method can be applied to identify the frequent local clusters that contain a given uncharacterized gene of interest. Finally, those frequent local clusters are used for the function and disease annotation of this uncharacterized gene. This local tensor clustering algorithm outperformed a competing tensor-based algorithm in both module discovery and running time. We also demonstrated the use of the proposed method on real data comprising hundreds of gene co-expression networks and showed that it can comprehensively characterize the query gene. Therefore, this study provides a new tool for annotating uncharacterized genes and has great potential to assist clinical genomic diagnostics.
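
The toy sketch below illustrates only the data layout implied above, stacking several co-expression networks over a common gene set into a three-dimensional array and ranking partners of a query gene by how frequently they are strongly linked to it across networks. It is not the authors' tensor optimization, and all numbers are synthetic.

    import numpy as np

    # Toy sketch only (not the authors' method): genes x genes x networks tensor
    # built from synthetic co-expression networks over the same gene set.
    rng = np.random.default_rng(0)
    n_genes, n_networks = 200, 30
    tensor = rng.random((n_genes, n_genes, n_networks))
    tensor = (tensor + tensor.transpose(1, 0, 2)) / 2   # make each slice symmetric

    query = 17                                          # index of the query gene
    threshold = 0.9                                     # "strong edge" cut-off
    # Fraction of networks in which each gene is strongly linked to the query gene.
    frequency = (tensor[query] > threshold).mean(axis=1)
    frequency[query] = 0.0
    candidates = np.argsort(frequency)[::-1][:10]
    print("most frequent partners of the query gene:", candidates)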

Keywords: local tensor clustering, query gene, gene co-expression network, gene annotation

Procedia PDF Downloads 143
816 Quantitative Evaluation of Efficiency of Surface Plasmon Excitation with Grating-Assisted Metallic Nanoantenna

Authors: Almaz R. Gazizov, Sergey S. Kharintsev, Myakzyum Kh. Salakhov

Abstract:

This work deals with background signal suppression in tip-enhanced near-field optical microscopy (TENOM). The background appears because the optical signal is detected not only from the subwavelength area beneath the tip but also from the wider, diffraction-limited area of the laser waist, which might contain another substance. The background can be reduced by using a tapered probe with a grating on its lateral surface, where external illumination excites surface plasmons. Effective light coupling requires a grating whose parameters are matched to the given incident light. This work is devoted to an analysis of the light-grating coupling and a search for grating parameters that enhance the near-field light beneath the tip apex. The aim is to find the figure of merit of plasmon excitation as a function of the grating period and the location of the grating with respect to the apex. In our treatment, the metallic grating on the lateral surface of the tapered plasmonic probe is illuminated by a plane wave whose electric field is perpendicular to the sample surface. The theoretical model of the efficiency of plasmon excitation and propagation towards the apex is tested by FDTD-based numerical simulation. The electric field of the incident light is enhanced at every slit of the grating due to the lightning-rod effect. Hence, the grating produces amplitude and phase modulation of the incident field in various ways, depending on the geometry and material of the grating. The phase-modulating grating on the probe is a kind of metasurface that allows manipulation of the spatial frequencies of the incident field. The spatial-frequency-dependent electric field is found from the angular spectrum decomposition. If one of the components satisfies the phase-matching condition, one can readily calculate the figure of merit of plasmon excitation, defined as the ratio of the intensities of the surface mode and the incident light. During propagation towards the apex, the surface wave undergoes losses in the probe material, radiation losses, and mode compression. There is an optimal location of the grating with respect to the apex; its value is found by balancing the quadratic law of mode compression against the exponential law of light extinction. Finally, the theoretical analysis and numerical simulations of plasmon excitation performed here demonstrate that various surface waves can be effectively excited by using overtones of the grating period or by phase modulation of the incident field. Gratings with such periods are easy to fabricate. A tapered probe with the grating effectively enhances and localizes the incident field at the sample.
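
As a hedged illustration of the phase-matching bookkeeping involved, the sketch below checks which grating periods and diffraction orders bring the in-plane wavevector of the incident light close to the surface plasmon wavevector. The wavelength, metal permittivity and incidence angle are assumed values, not parameters from this work.

    import numpy as np

    # Hedged sketch (illustrative values only): a grating of period P adds
    # multiples of 2*pi/P to the in-plane wavevector of the incident light, and
    # coupling occurs when the result matches the surface plasmon wavevector.
    wavelength = 633e-9                    # assumed illumination wavelength, m
    eps_metal = -11.6 + 1.2j               # assumed gold permittivity at 633 nm
    eps_dielectric = 1.0                   # air
    theta = np.deg2rad(0.0)                # assumed normal incidence

    k0 = 2 * np.pi / wavelength
    k_spp = k0 * np.sqrt(eps_metal * eps_dielectric /
                         (eps_metal + eps_dielectric)).real

    for period in np.array([0.3, 0.6, 0.9, 1.2]) * 1e-6:   # candidate periods, m
        for m in range(1, 4):                               # grating orders
            k_parallel = k0 * np.sin(theta) + m * 2 * np.pi / period
            mismatch = abs(k_parallel - k_spp) / k_spp
            if mismatch < 0.05:
                print(f"period {period * 1e9:.0f} nm, order {m}: "
                      f"mismatch {mismatch:.1%}")

The appearance of matches at higher orders for larger periods mirrors the remark above that overtones of the grating period can be exploited with easier-to-fabricate gratings.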

Keywords: angular spectrum decomposition, efficiency, grating, surface plasmon, taper nanoantenna

Procedia PDF Downloads 273
815 Nitriding of Super-Ferritic Stainless Steel by Plasma Immersion Ion Implantation in Radio Frequency and Microwave Plasma System

Authors: H. Bhuyan, S. Mändl, M. Favre, M. Cisternas, A. Henriquez, E. Wyndham, M. Walczak, D. Manova

Abstract:

470Li-24Cr and 460Li-21Cr are two alloys belonging to the next generation of super-ferritic, nickel-free stainless steel grades, containing titanium (Ti), niobium (Nb) and small percentages of carbon (C) and nitrogen (N). The addition of Ti and Nb generally improves the corrosion resistance, while the low interstitial content of C and N assures finer precipitates and greater ductility compared to conventional ferritic grades. These grades are considered an economic alternative to AISI 316L and 304 due to their comparable or superior corrosion resistance. However, since 316L and 304 can be nitrided to improve mechanical surface properties such as hardness and wear resistance, it is hypothesized that the tribological properties of these super-ferritic stainless steel grades can also be improved by plasma nitriding. Thus, two sets of plasma immersion ion implantation experiments have been carried out, one with a high-pressure capacitively coupled radio frequency plasma at PUC Chile and the other using a low-pressure microwave plasma at IOM Leipzig, in order to explore further improvements in the mechanical properties of 470Li-24Cr and 460Li-21Cr steel. Nitrided and unnitrided substrates have subsequently been investigated using different surface characterization techniques, including secondary ion mass spectrometry, scanning electron microscopy, energy dispersive X-ray analysis, Vickers hardness, wear resistance and corrosion tests. In most of the characterizations, no major differences have been observed between nitrided 470Li-24Cr and 460Li-21Cr. Due to the ion bombardment, an increase in surface roughness is observed for higher treatment temperatures, independent of the steel type. The formation of a chromium nitride compound takes place only at treatment temperatures of around 400-450 °C or above. However, corrosion properties deteriorate after treatment at higher temperatures. The physical characterization results show up to 25 at.% nitrogen in a diffusion zone of 4-6 µm, and a 4-5 times increase in hardness for different experimental conditions. The samples implanted at temperatures above 400 °C presented a wear resistance around two orders of magnitude higher than that of the untreated substrates. The hardness is apparently affected by the different roughness of the samples and their different nitrogen profiles.

Keywords: ion implantation, plasma, RF and microwave plasma, stainless steel

Procedia PDF Downloads 457
814 The Comparison Study of Methanol and Water Extract of Chuanxiong Rhizoma: A Fingerprint Analysis

Authors: Li Chun Zhao, Zhi Chao Hu, Xi Qiang Liu, Man Lai Lee, Chak Shing Yeung, Man Fei Xu, Yuen Yee Kwan, Alan H. M. Ho, Nickie W. K. Chan, Bin Deng, Zhong Zhen Zhao, Min Xu

Abstract:

Background: Chuanxiong Rhizoma (Chuanxiong, CX) is one of the most frequently used herbs in Chinese medicine because of its wide therapeutic effects, such as vasorelaxation and anti-inflammation. Aim: The purposes of this study are (1) to perform non-targeted and targeted analyses of the CX methanol extract and water extract and compare the present data with previously published LC-MS or GC-MS fingerprints; and (2) to examine the difference between the CX methanol extract and water extract in order to preliminarily evaluate whether the current compound markers of the methanol extract from crude CX materials are suitable for quality control of the CX water extract. Method: The CX methanol extract was prepared according to the Hong Kong Chinese Materia Medica Standards. The CX water extract was prepared by boiling in pure water three times (one hour each). UHPLC-Q-TOF-MS/MS fingerprint analysis was performed on a C18 column (1.7 µm, 2.1 × 100 mm) with an Agilent 1290 Infinity system. Experimental data were analyzed with Agilent MassHunter software. A database was established based on 13 published LC-MS and GC-MS CX fingerprint analyses. A total of 18 targeted compounds in the database were selected as markers to compare the present data with previous data; these markers were also used to compare the CX methanol extract and water extract. Result: (1) Non-targeted analysis identified 133 compounds in the CX methanol extract and 325 compounds in the CX water extract, more than double that of the methanol extract. (2) Targeted analysis further indicated that 9 of the 18 targeted compounds were identified in the CX methanol extract, while 12 of 18 were identified in the CX water extract, showing a lower loss rate for the water extract compared with the methanol extract. (3) Comparing the CX methanol and water extracts, Senkyunolide A (+1578%), ferulic acid (+529%) and Senkyunolide H (+169%) were significantly higher in the water extract. (4) Other bioactive compounds, such as tetramethylpyrazine, were only found in the CX water extract. Conclusion: Many new compounds in both the CX methanol and water extracts were found using UHPLC-Q-TOF-MS/MS analysis when compared with previously published reports. A new standard reference including non-targeted compound profiling and targeted markers, designed especially for quality control of the CX water extract (herbal decoction), should be established in the future. (This project was supported by Hong Kong Baptist University (FRG2/14-15/109) and the Natural Science Foundation of Guangdong Province (2014A030313414).)
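
The percentage comparisons between the two extracts reduce to the simple calculation sketched below; the peak areas are hypothetical placeholders chosen only to reproduce the reported ratios, not measured values.

    # Tiny sketch of the percentage-difference calculation behind figures such
    # as "+1578%". The peak areas below are placeholders, not measured values.
    def percent_change(water_area, methanol_area):
        return (water_area - methanol_area) / methanol_area * 100

    markers = {"Senkyunolide A": (16_780, 1_000),   # (water, methanol), arbitrary units
               "Ferulic acid":   (6_290, 1_000)}
    for name, (water, methanol) in markers.items():
        print(f"{name}: {percent_change(water, methanol):+.0f}%")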

Keywords: Chuanxiong rhizoma, fingerprint analysis, targeted analysis, quality control

Procedia PDF Downloads 483
813 The Effectiveness of Multiphase Flow in Well-Control Operations

Authors: Ahmed Borg, Elsa Aristodemou, Attia Attia

Abstract:

Well control involves managing the circulating drilling fluid within the well and avoiding kicks and blowouts, as these can lead to losses of human life and drilling facilities. Current practices for well control incorporate predictions of pressure losses through computational models. Developing a realistic hydraulic model for a well-control problem is a very complicated process due to the existence of a complex multiphase region, which usually contains a non-Newtonian drilling fluid and formation gas that is miscible in the drilling fluid. Current approaches assume an inaccurate fluid flow model within the well, which leads to incorrect pressure-loss calculations. To overcome this problem, researchers have been considering more complex two-phase fluid flow models. However, even these more sophisticated two-phase models are unsuitable for applications where pressure dynamics are important, such as managed pressure drilling. This study aims to develop and implement new fluid flow models that take into consideration the miscibility of the fluids as well as their non-Newtonian properties, enabling realistic kick treatment. Furthermore, a corresponding numerical solution method is built with an enriched data bank. The work considers and implements models that account for the effect of two phases in kick treatment for well control in conventional drilling. The software STARCCM+ was used for the computational studies to examine the important parameters describing wellbore multiphase flow: the mass flow rate, volumetric fraction and velocity of each phase. Based on the analysis of these simulation studies, a coarser full-scale model of the wellbore, including chemical modelling, was established. The focus of the investigations was on the section near the drill bit. This inflow area shows certain characteristics that are dominated by the inflow conditions of the gas as well as by the configuration of the mud stream entering the annulus. Without considering the gas solubility effect, the bottom-hole pressure could be underestimated by 4.2%, while the bottom-hole temperature is overestimated by 3.2%; without considering the heat transfer effect, the bottom-hole pressure could be overestimated by 11.4% under steady flow conditions. Besides, a larger reservoir pressure leads to a larger gas fraction in the wellbore, although reservoir pressure has only a minor effect on the steady wellbore temperature. Also, as the choke pressure increases, less free gas exists in the annulus.

Keywords: multiphase flow, well control, STARCCM+, petroleum engineering and gas technology, computational fluid dynamics

Procedia PDF Downloads 106
812 Biomechanical Analysis on Skin and Jejunum of Chemically Prepared Cat Cadavers Used in Surgery Training

Authors: Raphael C. Zero, Thiago A. S. S. Rocha, Marita V. Cardozo, Caio C. C. Santos, Alisson D. S. Fechis, Antonio C. Shimano, Fabrício S. Oliveira

Abstract:

Biomechanical analysis is an important factor in tissue studies. The objective of this study was to determine the feasibility of a new anatomical technique and to quantify the changes in skin and jejunum resistance of cat cadavers throughout the process. Eight adult cat cadavers were used. For every kilogram of body weight, 120 ml of fixative solution (95% 96 GL ethyl alcohol and 5% pure glycerin) was applied via the external common carotid artery. Next, the carcasses were placed in a container with 96 GL ethyl alcohol for 60 days. After fixing, all carcasses were preserved in a 30% sodium chloride solution for 60 days. Before fixation, control samples were collected from the fresh cadavers; after fixation, three skin and jejunum fragments from each cadaver were tested monthly for strength and displacement until complete rupture in a universal testing machine. All results were analyzed by F-test (P < 0.05). In the jejunum, the force required to rupture the fresh samples and the samples fixed in alcohol for 60 days was 31.27 ± 19.14 N and 29.25 ± 11.69 N, respectively. For the samples preserved in the sodium chloride solution for 30 and 60 days, the force was 26.17 ± 16.18 N and 30.57 ± 13.77 N, respectively. The displacement required to rupture the fresh specimens and those fixed in alcohol for 60 days was 2.79 ± 0.73 mm and 2.80 ± 1.13 mm, respectively; for the samples preserved for 30 and 60 days in sodium chloride solution, it was 2.53 ± 1.03 mm and 2.83 ± 1.27 mm, respectively. There was no statistical difference between the samples (P = 0.68 with respect to strength and P = 0.75 with respect to displacement). In the skin, the force needed to rupture the fresh samples and the samples fixed for 60 days in alcohol was 223.86 ± 131.5 N and 211.86 ± 137.53 N, respectively. For the samples preserved in sodium chloride solution for 30 and 60 days, the force was 227.73 ± 129.06 N and 224.78 ± 143.83 N, respectively. The displacement required to rupture the fresh specimens and those fixed in alcohol for 60 days was 3.67 ± 1.03 mm and 4.11 ± 0.87 mm, respectively; for the samples preserved for 30 and 60 days in sodium chloride solution, it was 4.21 ± 0.93 mm and 3.93 ± 0.71 mm, respectively. There was no statistical difference between the samples (P = 0.65 with respect to strength and P = 0.98 with respect to displacement). The resistance of the skin and intestines of the cat carcasses changed little when subjected to alcohol fixation and preservation in sodium chloride solution, each for 60 days, which is promising for use in surgery training. All experimental procedures were approved by the Municipal Legal Department (protocol 02.2014.000027-1). The project was funded by FAPESP (protocol 2015-08259-9).
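
The group comparison described above can be sketched as a one-way F-test, as below. The rupture-force samples are drawn around the reported jejunum means and standard deviations purely for illustration, and the group sizes are assumptions rather than the study's raw data.

    import numpy as np
    from scipy import stats

    # Hedged sketch of the one-way F-test comparison, on placeholder
    # rupture-force samples (N) for the four preservation conditions.
    rng = np.random.default_rng(1)
    fresh      = rng.normal(31.3, 19.1, 24)
    alcohol_60 = rng.normal(29.3, 11.7, 24)
    nacl_30    = rng.normal(26.2, 16.2, 24)
    nacl_60    = rng.normal(30.6, 13.8, 24)

    f_stat, p_value = stats.f_oneway(fresh, alcohol_60, nacl_30, nacl_60)
    print(f"F = {f_stat:.2f}, P = {p_value:.3f}")  # a large P indicates no significant group difference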

Keywords: anatomy, conservation, fixation, small animal

Procedia PDF Downloads 277