Search results for: soil texture prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5625

375 Evaluation of Buckwheat Genotypes to Different Planting Geometries and Fertility Levels in Northern Transition Zone of Karnataka

Authors: U. K. Hulihalli, Shantveerayya

Abstract:

Buckwheat (Fagopyrum esculentum Moench) is an annual crop belonging to the family Polygonaceae. The cultivated buckwheat species are notable for their exceptional nutritive value. It is an important source of carbohydrates, fibre, and macro- and microelements such as K, Ca, Mg, Na, Mn, Zn, Se, and Cu. It also contains rutin, flavonoids, riboflavin, pyridoxine and many amino acids which have beneficial effects on human health, including lowering both blood lipid and sugar levels. Rutin, quercetin and some other polyphenols are potent anti-carcinogenic agents against colon and other cancers. Buckwheat thus has significant nutritive value and plenty of uses, yet its cultivation in the southern part of India is very meagre. Hence, a study was planned with the objective of assessing the performance of buckwheat genotypes under different planting geometries and fertility levels. The field experiment was conducted at the Main Agriculture Research Station, University of Agricultural Sciences, Dharwad, India, during Kharif 2017. The experiment was laid out in a split-plot design with three replications, with three planting geometries as main plots, two genotypes as subplots and three fertility levels as sub-sub-plot treatments. The soil of the experimental site was a vertisol. Standard procedures were followed to record the observations. The planting geometry of 30 × 10 cm recorded significantly higher seed yield (893 kg ha⁻¹), stover yield (1507 kg ha⁻¹), clusters plant⁻¹ (7.4), seeds cluster⁻¹ (7.9) and 1000-seed weight (26.1 g) compared to the 40 × 10 cm and 20 × 10 cm planting geometries. Between the genotypes, significantly higher seed yield (943 kg ha⁻¹) and harvest index (45.1) were observed with genotype IC-79147 compared to genotype PRB-1 (687 kg ha⁻¹ and 34.2, respectively). However, genotype PRB-1 recorded significantly higher stover yield (1344 kg ha⁻¹) than genotype IC-79147 (1173 kg ha⁻¹).
Genotype IC-79147 also recorded significantly higher clusters plant⁻¹ (7.1), seeds cluster⁻¹ (7.9) and 1000-seed weight (24.5 g) than PRB-1 (5.4, 5.8 and 22.3 g, respectively). Among the fertility levels tried, 60:30 NP kg ha⁻¹ recorded significantly higher seed yield (845 kg ha⁻¹) and stover yield (1359 kg ha⁻¹) compared to 40:20 NP kg ha⁻¹ (808 and 1259 kg ha⁻¹, respectively) and 20:10 NP kg ha⁻¹ (793 and 1144 kg ha⁻¹, respectively). Among the treatment combinations, genotype IC-79147 at the 30 × 10 cm planting geometry with 60:30 NP kg ha⁻¹ recorded significantly higher seed yield (1070 kg ha⁻¹), clusters plant⁻¹ (10.3), seeds cluster⁻¹ (9.9) and 1000-seed weight (27.3 g) compared to the other treatment combinations.

Keywords: buckwheat, planting geometry, genotypes, fertility levels

Procedia PDF Downloads 175
374 Provisional Settlements and Urban Resilience: The Transformation of Refugee Camps into Cities

Authors: Hind Alshoubaki

Abstract:

The world is now confronting a widespread urban phenomenon: refugee camps, which have mostly been established in a 'rushing mode' aimed at affording, within a very short time period, temporary settlements that provide refugees with minimum levels of safety, security and protection from harsh weather. In fact, these emergency settlements are transforming into permanent ones, since time is a decisive factor in terms of construction and camps' age; both play an essential role in transforming their temporary character into a permanent one that generates deep modifications to the city's territorial structure, shaping a new identity and creating a contentious change in the city's form and history. To achieve a better understanding of the transformation of refugee camps, this study is based on a mixed-methods approach: the qualitative approach explores different refugee camps and analyzes their transformation process in terms of population density and the changes to the city's territorial structure and urban features. The quantitative approach employs a statistical regression analysis of refugees' satisfaction within the Zaatari camp in order to predict its future transformation. Refugees' perceptions of their current conditions affect their satisfaction, which in turn plays an essential role in transforming emergency settlements into permanent cities over time. The test covers five main themes: the access and readiness of schools; the dispersion of clinics and shopping centers; the camp infrastructure; the construction materials; and the street networks. The statistical analysis showed that Syrian refugees were not satisfied with their current conditions inside the Zaatari refugee camp and that they had started implementing changes according to their needs, desires, and aspirations, because they are conscious of their prolonged stay in this settlement.
The case study analyses also showed that neglecting the fact that construction takes time leads to settlements being created with below-minimum standards, which deteriorate into 'slums' with increased rates of crime, suicide, drug use and disease, deeply affecting cities' urban tissue. For this reason, recognizing the 'temporary-eternal' character of these settlements is fundamental: refugee camps should be considered from the beginning as definite permanent cities. This is the key to minimizing the trauma of displacement for both refugees and the hosting countries, since providing emergency settlements within a short time period does not mean using temporary materials, having a provisional character or creating 'makeshift cities.'

Keywords: refugee, refugee camp, temporary, Zaatari

Procedia PDF Downloads 138
373 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase

Authors: Antoine Lauvray, Fabien Poulhaon, Pierre Michaud, Pierre Joyot, Emmanuel Duc

Abstract:

Additive Friction Stir Manufacturing (AFSM) is a new industrial process that follows the emergence of friction-based processes. The AFSM process is a solid-state additive process using the energy produced by friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters such as axial force, rotation speed and friction coefficient. The feeder material is a metallic rod that flows through a hole in the tool. Unlike Friction Stir Welding (FSW), where abundant literature exists addressing many aspects from process implementation to characterization and modeling, there are still few research works focusing on AFSM, so there is still a lack of understanding of the physical phenomena taking place during the process. This research work aims at a better understanding and implementation of the AFSM process, thanks to numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way to study the influence of the process parameters and finally to identify a relevant process window. The deposition of material through the AFSM process takes place in several phases; in chronological order, these are the docking phase, the dwell-time phase, the deposition phase, and the removal phase. The present work focuses on the dwell-time phase, which produces, through pure friction, the temperature rise of the system composed of the tool, the filler material, and the substrate. Analytic modeling of friction-based heat generation considers the rotational speed and the contact pressure as its main parameters. Another parameter considered influential is the friction coefficient, assumed to be variable due to the self-lubrication of the system as the temperature rises and to the smoothing of the roughness of the materials in contact over time.
This study proposes, through numerical modeling followed by experimental validation, to examine the influence of the various input parameters on the dwell-time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool, as well as fluctuations of the input parameters such as axial force and rotational speed, strongly influences the temperature reached and/or the time required to reach the targeted temperature. The main outcome is the prediction of a process window, which is a key result for a more efficient process implementation.
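The analytic friction-heating relation described above can be sketched numerically. A common assumption in friction-stir modeling is a local heat flux q = μ·p·ω·r under uniform contact pressure; integrating over an annular tool contact gives the closed form below. The temperature-dependent friction coefficient and all numerical values are illustrative assumptions, not parameters from this study.

```python
import math

def friction_heat_power(mu, p, omega, r_out, r_in=0.0):
    """Total frictional heat power (W) over an annular tool contact,
    from integrating the local flux q = mu * p * omega * r:
    Q = (2/3) * pi * mu * p * omega * (r_out**3 - r_in**3)."""
    return (2.0 / 3.0) * math.pi * mu * p * omega * (r_out**3 - r_in**3)

def mu_of_T(T, mu0=0.4, k=5e-4, T_ref=20.0):
    """Friction coefficient decaying linearly with temperature: a crude
    stand-in for self-lubrication and roughness smoothing."""
    return max(0.05, mu0 - k * (T - T_ref))

# Example: 1200 rpm, 50 MPa contact pressure, 10 mm tool radius, cold start
omega = 1200 * 2.0 * math.pi / 60.0  # spindle speed in rad/s
Q = friction_heat_power(mu_of_T(20.0), 50e6, omega, 10e-3)
print(f"dwell-phase heat input ~ {Q / 1e3:.2f} kW")
```

In a dwell-phase simulation, this power would feed the heat equation as a surface source while the temperature rise feeds back into mu_of_T.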

Keywords: numerical model, additive manufacturing, friction, process

Procedia PDF Downloads 150
372 First-Trimester Screening of Preeclampsia in a Routine Care

Authors: Tamar Grdzelishvili, Zaza Sinauridze

Abstract:

Introduction: Preeclampsia is a complication of the second trimester of pregnancy, characterized by high morbidity and multiorgan damage. Many complex pathogenic mechanisms are now implicated as responsible for this disease (1). Preeclampsia is one of the leading causes of maternal mortality worldwide, and the statistics convey the seriousness of this pathology: about 100,000 women die of preeclampsia every year. It occurs in 3-14% of pregnant women (varying significantly with racial origin, ethnicity and geographical region), in 75% of cases in a mild form and in 25% in a severe form. During severe preeclampsia-eclampsia, perinatal mortality increases 5-fold and stillbirth 9.6-fold. Considering that the only way to treat the disease is to end the pregnancy, timely diagnosis and prevention are essential. Identifying pregnant women at high risk for PE and giving prophylaxis would reduce the incidence of preterm PE. The first-trimester screening model developed by the Fetal Medicine Foundation (FMF), which uses Bayes' theorem to combine maternal characteristics and medical history with measurements of mean arterial pressure, uterine artery pulsatility index, and serum placental growth factor, has been proven to be effective and to have screening performance superior to that of the traditional risk-factor-based approach for the prediction of PE (2). Methods: Retrospective single-center screening study. The study population consisted of women from the Tbilisi maternity hospital "Pineo medical ecosystem" who met the following criteria: they spoke Georgian, English, or Russian and agreed to participate in the study after discussing informed consent and answering questions. Prior to the study, informed consent forms approved by the Institutional Review Board were obtained from the study subjects. Early assessment of preeclampsia was performed between 11 and 13 weeks of pregnancy.
The following were evaluated: anamnesis, dopplerography of the uterine artery, mean arterial blood pressure, and a biochemical parameter, pregnancy-associated plasma protein A (PAPP-A). Individual risk assessment was performed with the Fast Screen 3.0 software (Thermo Fisher Scientific). Results: A total of 513 women were recruited; over the course of the study, 51 women were diagnosed with preeclampsia (34.5% of the pregnant women at high risk, 6.5% of the pregnant women at low risk; P<0.0001). Conclusions: First-trimester screening combining maternal factors with uterine artery Doppler, blood pressure, and pregnancy-associated plasma protein A is useful for predicting PE in a routine care setting. More patient studies are needed for final conclusions; the research is still ongoing.
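The screening logic of combining a priori risk from maternal factors with biomarker measurements rests on Bayes' theorem. The FMF algorithm itself is a competing-risks model of gestational age at delivery; the sketch below shows only the elementary odds-scale update behind any such risk combination, and both the prior risk and the likelihood ratio are invented numbers for illustration.

```python
def posterior_risk(prior_risk, likelihood_ratio):
    """Bayes' theorem on the odds scale: posterior odds equal prior odds
    times the likelihood ratio of the observed marker profile."""
    prior_odds = prior_risk / (1.0 - prior_risk)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Hypothetical patient: 2% background risk from maternal factors; low
# PAPP-A with raised MAP and uterine artery PI assumed to carry a joint
# likelihood ratio of 8 (an invented number, not an FMF coefficient).
risk = posterior_risk(0.02, 8.0)
print(f"adjusted risk ~ {risk:.3f} (about 1 in {round(1 / risk)})")
```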

Keywords: first-trimester, preeclampsia, screening, pregnancy-associated plasma protein

Procedia PDF Downloads 80
371 Effect of Different Phosphorus Levels on Vegetative Growth of Maize Variety

Authors: Tegene Nigussie

Abstract:

Introduction: Maize is the most domesticated of all the field crops. Wild maize has not been found to date, and there has been much speculation about its origin. Regardless of the validity of the different theories, it is generally agreed that the center of origin of maize is Central America, primarily Mexico and the Caribbean. Maize is a recent introduction in Africa, although some data suggest that it was present in Nigeria even before Columbus's voyages. After being taken to Europe in 1493, maize was introduced to Africa and spread through the continent by different routes. Maize is an important cereal crop in Ethiopia: it is the primary staple food, and rural households show a strong preference for it. For human food, the important constituents of the grain are carbohydrates (starch and sugars), protein, fat or oil (in the embryo) and minerals. About 75 percent of the kernel is starch (a range of 60-80 percent), but protein content is low (8-15%). In Ethiopia, the introduction of modern farming techniques appears to be a priority. However, the adoption of modern inputs by peasant farmers is found to be very slow; for example, the adoption rate of fertilizer, an input that is relatively widely adopted, is very slow. Differences in socio-economic factors, including the pricing and marketing of inputs, lie behind the low rate of technological adoption. Objective: The aim of the study is to determine the optimum application rate of phosphorus fertilizer for the vegetative growth of maize and to identify the effect of different phosphorus rates on the growth and development of maize. Methods: Vegetative (above-ground) parameters were measured on five plants randomly sampled from the middle rows of each plot.
Results: The interaction of nitrogen and maize variety showed a significant (p<0.01) effect on plant height, with the combined application of 60 kg/ha of nitrogen and the BH140 maize variety, and on root length with the same combination. The highest mean number of leaves per plant (12.33) and mean number of nodes per plant (7.1) can be used as indicators of better vegetative growth of maize. Conclusion and Recommendation: Maize is one of the popular and widely cultivated crops in Ethiopia. This study was conducted to investigate the best dosage of phosphorus for vegetative growth, yield, and better quality of the maize variety and to recommend a phosphorus rate and the variety best adapted to the specific soil conditions or area.

Keywords: leaf, carbohydrate, protein, adoption, sugar

Procedia PDF Downloads 21
370 Monitoring of Serological Test of Blood Serum in Indicator Groups of the Population of Central Kazakhstan

Authors: Praskovya Britskaya, Fatima Shaizadina, Alua Omarova, Nessipkul Alysheva

Abstract:

Planned preventive vaccination, carried out in the Republic of Kazakhstan, has promoted a steady decrease in the incidence of measles and viral hepatitis B (VHB). People of young, working age prevail among VHB patients. Monitoring of infectious incidence, monitoring of immunization coverage of the population, and random serological control of immunity enable well-timed identification of the spread of the pathogen, assessment of the effectiveness of the measures taken, and forecasting. A serological blood analysis was conducted in indicator groups of the population of Central Kazakhstan with the purpose of identifying antibody titres for vaccine-preventable infections (measles, viral hepatitis B). Measles antibodies were determined by enzyme-linked immunosorbent assay (ELISA) with the "VektoKor" IgG test system ('Vektor-Best' JSC). Antibodies to the HBs antigen of the hepatitis B virus in blood serum were identified by ELISA with the VektoHBsAg test system ('Vektor-Best' JSC). The result of the analysis is considered positive if the concentration of IgG to the measles virus in the studied sample is 0.18 IU/ml or more. The protective level of anti-HBsAg concentration is 10 mIU/ml. The results of the study of postvaccinal measles immunity showed that the share of seropositive people was 87.7% of the total number surveyed. The level of postvaccinal immunity to measles differs among age groups: among people older than 56, the percentage of seropositive people was 95.2%; among people aged 15-25, 87.0% were seropositive, and at the age of 36-45, 86.6%. In the age groups of 25-35 and 36-45, the share of seropositive people was approximately at the same level, 88.5% and 88.8% respectively. The share of people seronegative to the measles virus was 12.3%. The biggest share of seronegative people was found among people aged 36-45 (13.4%) and 15-25 (13.0%).
The analysis of the results of the people examined for postvaccinal immunity to viral hepatitis B showed that only 33.5% of all those surveyed have a protective anti-HBsAg concentration of 10 mIU/ml or more. The biggest share of people protected from the VHB virus is observed in the age group of 36-45 and amounts to 60%. In the indicator group above 56, seropositive people made up only 4.8%. A high percentage of seronegative people was observed in all studied age groups, from 40.0% to 95.2%. The group least protected from contracting VHB is people above 56 (95.2% seronegative). The probability of contracting VHB is also high among young people aged 25-35, where the percentage of seronegative people was 80%. Thus, the results of the conducted research testify to the need for serological monitoring of postvaccinal immunity for the purpose of operational assessment of the epidemiological situation, early identification of its changes, and prediction of approaching danger.

Keywords: antibodies, blood serum, immunity, immunoglobulin

Procedia PDF Downloads 261
369 Alleviation of Adverse Effects of Salt Stress on Soybean (Glycine max. L.) by Using Osmoprotectants and Compost Application

Authors: Ayman El Sabagh, SobhySorour, AbdElhamid Omar, Adel Ragab, Mohammad Sohidul Islam, Celaleddin Barutçular, Akihiro Ueda, Hirofumi Saneoka

Abstract:

Salinity is one of the major factors limiting crop production in arid environments. What adds to the concern is that all the legume crops are sensitive to increasing soil salinity, so it is imperative to search for ways of enhancing the salinity tolerance of legume plants. The exogenous application of osmoprotectants has been found effective in reducing the adverse effects of salinity stress on plant growth. Despite its global importance, soybean production suffers from salinity stress, which causes damage during plant development. Therefore, in the current study, we try to clarify the mechanisms that might be involved in the ameliorating effects of osmoprotectants, such as proline and glycine betaine, and of compost application on soybean plants grown under salinity stress. Experiments were carried out in the greenhouse of the experimental station of plant nutritional physiology, Hiroshima University, Japan, in 2011-2012. The experiment was arranged in a factorial design with four replications, with NaCl concentrations of 0 and 15 mM, exogenous proline and glycine betaine concentrations of 0 and 25 mM each, and compost treatments of 0 and 24 t ha⁻¹. Results indicated that salinity stress induced reductions in all growth and physiological parameters (dry weight plant⁻¹, chlorophyll content, N and K⁺ content), as well as in seed yield and quality traits of the soybean plants, compared with the unstressed plants. In contrast, salinity stress led to increases in the electrolyte leakage ratio and in Na⁺ and proline contents. Improved tolerance against salt stress was nevertheless observed: the improvement resulting from proline, glycine betaine and compost was accompanied by improved membrane stability and K⁺ and proline accumulation and, conversely, decreased Na⁺ content. These results clearly demonstrate that these treatments could be used to reduce the harmful effect of salinity on both physiological aspects and growth parameters of soybean.
They are capable of restoring the yield potential and seed quality and may be useful in agronomic situations where saline conditions are diagnosed as a problem. Consequently, exogenous osmoprotectants combined with compost will effectively address seasonal salinity stress problems and are a good strategy for increasing salinity resistance in the drylands.

Keywords: compost, glycine betaine, proline, salinity tolerance, soybean

Procedia PDF Downloads 376
368 Risk Assessment of Flood Defences by Utilising Condition Grade Based Probabilistic Approach

Authors: M. Bahari Mehrabani, Hua-Peng Chen

Abstract:

Management and maintenance of coastal defence structures during the expected life cycle have become a real challenge for decision makers and engineers. Accurate evaluation of the current condition and future performance of flood defence structures is essential for effective practical maintenance strategies on the basis of available field inspection data. Moreover, as coastal defence structures age, it becomes more challenging to implement maintenance and management plans to avoid structural failure. Therefore, condition inspection data are essential for assessing damage and forecasting deterioration of ageing flood defence structures in order to keep the structures in an acceptable condition. The inspection data for flood defence structures are often collected using discrete visual condition rating schemes. In order to evaluate future condition of the structure, a probabilistic deterioration model needs to be utilised. However, existing deterioration models may not provide a reliable prediction of performance deterioration for a long period due to uncertainties. To tackle the limitation, a time-dependent condition-based model associated with a transition probability needs to be developed on the basis of condition grade scheme for flood defences. This paper presents a probabilistic method for predicting future performance deterioration of coastal flood defence structures based on condition grading inspection data and deterioration curves estimated by expert judgement. In condition-based deterioration modelling, the main task is to estimate transition probability matrices. The deterioration process of the structure related to the transition states is modelled according to Markov chain process, and a reliability-based approach is used to estimate the probability of structural failure. Visual inspection data according to the United Kingdom Condition Assessment Manual are used to obtain the initial condition grade curve of the coastal flood defences. 
The initial curves were then modified in order to develop transition probabilities through non-linear regression-based optimisation algorithms. Monte Carlo simulations are then used to evaluate the future performance of the structure on the basis of the estimated transition probabilities. Finally, a case study is given to demonstrate the applicability of the proposed method under no-maintenance and medium-maintenance scenarios. Results show that the proposed method can provide an effective predictive model for various situations in terms of available condition grading data. The proposed model also provides useful information on the time-dependent probability of failure in coastal flood defences.
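The condition-grade deterioration described above can be sketched as a discrete-state Markov chain whose transition matrix is iterated, or sampled by Monte Carlo, to obtain a time-dependent failure probability. The five-grade scheme and the yearly transition probabilities below are hypothetical placeholders, not values calibrated to the UK Condition Assessment Manual data used in the study.

```python
import numpy as np

# Hypothetical 5-grade condition scheme (grade 1 = very good ... 5 = failed),
# encoded as states 0..4. P[i, j] is the yearly probability of moving from
# state i to state j; the failed state is absorbing. Illustrative values only.
P = np.array([
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.00, 0.88, 0.12, 0.00, 0.00],
    [0.00, 0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

def failure_probability(P, years, start=0, n_sims=5000, seed=0):
    """Monte Carlo estimate of the probability of reaching the failed
    state within `years`, starting from condition state `start`."""
    rng = np.random.default_rng(seed)
    failures = 0
    for _ in range(n_sims):
        state = start
        for _ in range(years):
            state = rng.choice(5, p=P[state])
            if state == 4:
                failures += 1
                break
    return failures / n_sims

mc = failure_probability(P, years=30)
exact = np.linalg.matrix_power(P, 30)[0, 4]  # Chapman-Kolmogorov check
print(f"P(failure within 30 years): MC ~ {mc:.3f}, exact = {exact:.3f}")
```

A maintenance scenario would be modelled by post-multiplying with a second matrix that moves states back toward better grades.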

Keywords: condition grading, flood defense, performance assessment, stochastic deterioration modelling

Procedia PDF Downloads 241
367 An Effective Modification to Multiscale Elastic Network Model and Its Evaluation Based on Analyses of Protein Dynamics

Authors: Weikang Gong, Chunhua Li

Abstract:

Dynamics plays an essential role in the function of proteins. The elastic network model (ENM), a harmonic-potential-based and cost-effective computational method, is a valuable and efficient tool for characterizing the intrinsic dynamical properties encoded in biomacromolecule structures and has been widely used to detect the large-amplitude collective motions of proteins. The Gaussian network model (GNM) and the anisotropic network model (ANM) are the two most often used ENM models, and many further ENM variants have been proposed in recent years. Here, we propose a small but effective modification (denoted as the modified mENM) to the multiscale ENM (mENM), in which the fitting of the weights of the Kirchhoff/Hessian matrices with the least-squares method (LSM) is modified, since the original fitting neglects the details of pairwise interactions. We then compare the modification with the original mENM, the traditional ENM, and the parameter-free ENM (pfENM) on reproducing the dynamical properties of six representative proteins whose molecular dynamics (MD) trajectories are available in http://mmb.pcb.ub.es/MoDEL/. In the results, for B-factor prediction, mENM achieves the best performance among the four ENM models. Additionally, with the weights of the multiscale Kirchhoff/Hessian matrices modified, the modified mGNM/mANM still performs much better than the corresponding traditional ENM and pfENM models. As for the dynamical cross-correlation map (DCCM) calculation, taking the data obtained from MD trajectories as the standard, mENM performs the worst, while the results produced by the modified mENM and pfENM models are close to those from the MD trajectories, with the latter a little better than the former. Generally, the ANMs perform better than the corresponding GNMs, except for the mENM. Thus, pfANM and the modified mANM, especially the former, show excellent performance in the dynamical cross-correlation calculation.
Compared with the GNMs (except for mGNM), the corresponding ANMs can capture quite a number of positive correlations for residue pairs separated by nearly the largest distances, which may be due to the consideration of anisotropy in the ANMs. Furthermore, and encouragingly, the modified mANM displays the best performance in capturing the functional motional modes, followed by the pfANM and traditional ANM models, while mANM fails in all the cases. This suggests that the consideration of long-range interactions is critical for ANM models to produce protein functional motions. Based on these analyses, the modified mENM is a promising method for capturing the multiple dynamical characteristics encoded in protein structures. This work is helpful for strengthening the understanding of the elastic network model and provides a valuable guide for researchers who wish to use the model to explore protein dynamics.
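The B-factor prediction used to benchmark these models can be sketched with the simplest member of the ENM family, the GNM: build the Kirchhoff (connectivity) matrix from a distance cutoff on C-alpha positions, then read the predicted mean-square fluctuations off the diagonal of its pseudo-inverse. This is the plain GNM, not the multiscale weighting scheme of the paper; the cutoff and toy coordinates are illustrative assumptions.

```python
import numpy as np

def gnm_bfactors(coords, cutoff=7.0):
    """Relative B-factors from a Gaussian network model: the diagonal of
    the pseudo-inverse of the Kirchhoff matrix is proportional to the
    residue mean-square fluctuations (up to a uniform scale factor)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)              # -1 per contact pair
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))  # degree on diagonal
    gamma_inv = np.linalg.pinv(kirchhoff)                # drops the zero mode
    return np.diag(gamma_inv).copy()                     # ∝ <ΔR_i · ΔR_i>

# Toy chain of 10 pseudo-residues; real use would read C-alpha coordinates
# from a PDB structure. Seed and step scale are arbitrary.
rng = np.random.default_rng(1)
coords = np.cumsum(rng.normal(scale=2.0, size=(10, 3)), axis=0)
b = gnm_bfactors(coords)
print(b.round(3))
```

Correlating such a profile with experimental B-factors (or MD fluctuations) is the comparison the paper performs across the four ENM variants.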

Keywords: elastic network model, ENM, multiscale ENM, molecular dynamics, parameter-free ENM, protein structure

Procedia PDF Downloads 124
366 Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model

Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova

Abstract:

The contamination of food by microbial agents is a common problem in the industry, especially in the elaboration of animal-source products. Incorrect handling of the machinery or of the raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce, or at least slow down, the growth of pathogens, especially deteriorative, infectious or toxigenic bacteria. These methods are usually carried out under low temperatures and short processing times (abiotic agents), along with the application of antibacterial substances such as bacteriocins (biotic agents), in a controlled and efficient way that fulfills the purpose of bacterial control without damaging the final product. Therefore, the objective of the present study is to design a secondary mathematical model that allows the prediction of the impact of both the biotic and abiotic factors associated with animal-source food processing. In order to accomplish this objective, the authors propose a three-dimensional differential equation model whose components are: bacterial growth; release, production and artificial incorporation of bacteriocins; and changes in the pH level of the medium. These three dimensions are constantly influenced by the temperature of the medium. Secondly, this model is adapted to an idealized situation of cross-contamination in animal-source food processing, with the study agents being both the animal product and the contact surface. Thirdly, stochastic simulations and a parametric sensitivity analysis are compared with reference data. The main result obtained from the analysis and simulations of the mathematical model was the finding that, although bacterial growth can be stopped at lower temperatures, even lower ones are needed to eradicate it.
However, this can be not only expensive but also counterproductive in terms of the quality of the raw materials; on the other hand, higher temperatures accelerate bacterial growth. In other respects, the use of bacteriocins is an effective alternative in the short and medium term. Moreover, a low pH level is an indicator of bacterial growth, since many of the deteriorating bacteria are lactic acid bacteria. Lastly, the processing times are a secondary agent of concern when the rest of the aforementioned agents are under control. Our main conclusion is that adapting a mathematical model to the context of the industrial process can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing times. In addition, the proposed mathematical modeling provides a logistic framework of broad application, which can be replicated for non-meat food products, other pathogens, or even contamination by cross-contact with allergenic foods.
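A three-dimensional ODE system of the kind described, coupling bacterial growth, bacteriocin concentration, and pH under a temperature-dependent growth rate, can be sketched as follows. The functional forms and every rate constant here are illustrative assumptions, not the model calibrated in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

def food_model(t, y, T):
    """Toy 3-state model: bacterial count N (CFU), bacteriocin level C,
    and pH H. Growth rate rises with temperature T; bacteriocin
    accumulates with the population and kills it; acid production by the
    bacteria lowers the pH. All rate constants are illustrative."""
    N, C, H = y
    mu = 0.5 * np.exp(0.1 * (T - 20.0))          # per-hour growth rate
    dN = mu * N * (1.0 - N / 1e9) - 0.8 * C * N  # logistic growth minus kill
    dC = 1e-10 * N - 0.05 * C                    # production minus decay
    dH = -1e-11 * N * H                          # slow acidification
    return [dN, dC, dH]

# Compare 48 h of storage at 4 C versus 25 C, starting from 1e3 CFU
results = {}
for T in (4.0, 25.0):
    sol = solve_ivp(food_model, (0.0, 48.0), [1e3, 0.0, 6.5],
                    args=(T,), rtol=1e-8)
    results[T] = sol.y[0, -1]
    print(f"T = {T:>4} C -> N(48 h) = {results[T]:.3g} CFU")
```

Sweeping T (and the bacteriocin dosing term) over such a model is one way to map the temperature/inhibitor trade-off the abstract discusses.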

Keywords: bacteriocins, cross-contamination, mathematical model, temperature

Procedia PDF Downloads 149
365 Influence of Intra-Yarn Permeability on Mesoscale Permeability of Plain Weave and 3D Fabrics

Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Andy Long, Jan Kočí

Abstract:

A good understanding of the mesoscale permeability of complex architectures in fibrous porous preforms is of particular interest in order to achieve efficient and cost-effective resin impregnation in liquid composite molding (LCM). Fabrics used in structural reinforcements are typically woven or stitched. 3D fabric reinforcement is of particular interest because of the versatility in the weaving pattern, with binder yarn and in-plane yarn arrangements that make it possible to manufacture thick composite parts, overcome the limitation of delamination, improve toughness, etc. To predict the permeability based on the available pore spaces between the yarns, unit-cell-based computational fluid dynamics models have used the Stokes-Darcy model. Typically, the preform consists of an arrangement of yarns with spacing on the order of millimetres, wherein each yarn consists of thousands of filaments with spacing on the order of micrometres. The fluid flow during infusion exchanges mass between the intra- and inter-yarn channels, meaning there is no dead end of flow between the mesopores in the inter-yarn space and the micropores in the yarn. Several studies have employed the Brinkman equation to account for the flow through dual-scale porosity reinforcements when estimating their permeability. Furthermore, to reduce the computational effort of dual-scale flow simulations, a scale separation criterion based on the ratio of yarn permeability to yarn spacing has also been proposed to distinguish the dual-scale and negligible-microscale flow regimes for the prediction of mesoscale permeability. In the present work, the influence of intra-yarn permeability on the mesoscale permeability has been investigated through a systematic study of weft and warp yarn spacing in the plain weave, as well as of the position of the binder yarn and the number of in-plane yarn layers in the 3D weave fabric.
The permeability tensor has been estimated using an OpenFOAM-based model for the various weave patterns, with an idealized yarn geometry implemented using the open-source software TexGen. Additionally, a scale separation criterion has been established based on various configurations of yarn permeability for the 3D fabric, with both isotropic and anisotropic yarns from Gebart's model. It was observed that the mesoscale permeability Kxx varies within 30% when isotropic porous yarn is considered for a 3D fabric with binder yarn. Furthermore, the permeability model developed in this study will be used for multi-objective optimization of the preform mesoscale geometry in terms of yarn spacing, binder pattern, and number of layers, with the aim of obtaining improved permeability and reduced void content during the LCM process.
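Since the intra-yarn permeability comes from Gebart's model, its axial and transverse closed forms can be sketched directly. The packing constants below follow Gebart's published values for quadratic and hexagonal fibre arrangements; the filament radius and intra-yarn fibre volume fraction in the example are arbitrary.

```python
import math

def gebart_permeability(r_f, vf, packing="quad"):
    """Axial and transverse yarn permeability (m^2) from Gebart's model.
    r_f: filament radius (m); vf: intra-yarn fibre volume fraction.
    Axial:      K_a = (8 r_f^2 / c) (1 - vf)^3 / vf^2
    Transverse: K_t = C1 (sqrt(vf_max / vf) - 1)^(5/2) r_f^2"""
    if packing == "quad":
        c = 57.0
        c1 = 16.0 / (9.0 * math.pi * math.sqrt(2.0))
        vf_max = math.pi / 4.0
    else:  # hexagonal packing
        c = 53.0
        c1 = 16.0 / (9.0 * math.pi * math.sqrt(6.0))
        vf_max = math.pi / (2.0 * math.sqrt(3.0))
    k_axial = (8.0 * r_f**2 / c) * (1.0 - vf) ** 3 / vf**2
    k_trans = c1 * (math.sqrt(vf_max / vf) - 1.0) ** 2.5 * r_f**2
    return k_axial, k_trans

# Example: 3.5 um filaments at 60% intra-yarn fibre fraction
ka, kt = gebart_permeability(3.5e-6, 0.60)
print(f"K_axial = {ka:.3e} m^2, K_trans = {kt:.3e} m^2")
```

The axial/transverse ratio from such a calculation is what makes the anisotropic-yarn configurations in the study differ from the isotropic ones.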

Keywords: permeability, 3D fabric, dual-scale flow, liquid composite molding

Procedia PDF Downloads 100
364 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other, which is also referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters, each containing a different number of examples, also deteriorates classifier performance. Many methods have previously been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods, and ensembles of classifiers. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class exhibits absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for binary classification problems. Removing both kinds of imbalance simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters.
The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid, increasing the accuracy of the classifier. In this study, a neural network is used as the classifier, since it is one in which the total error is minimized; removing between-class and within-class imbalance simultaneously helps the classifier give equal weight to all sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus the proposed method can serve as a good alternative for handling various problem domains like credit scoring, customer churn prediction, financial distress, etc., that typically involve imbalanced data sets.
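The within-class balancing idea can be sketched minimally as follows, assuming sub-cluster labels have already been obtained from a model-based clustering step. The interpolation rule below is a simplified stand-in for the paper's complexity-weighted, ellipsoid-aware oversampling, not the authors' exact algorithm:

```python
import numpy as np

def oversample_subclusters(X, cluster_labels, rng=None):
    """Equalize sub-cluster sizes within a (minority) class by interpolating
    synthetic points between random same-cluster pairs. Original rows are
    kept first; synthetic rows are appended."""
    rng = np.random.default_rng(rng)
    labels, counts = np.unique(cluster_labels, return_counts=True)
    target = counts.max()  # bring every sub-cluster up to the largest one
    synthetic = []
    for lab, n in zip(labels, counts):
        pts = X[cluster_labels == lab]
        for _ in range(target - n):
            i, j = rng.integers(0, n, size=2)
            t = rng.random()
            # Convex combination stays inside the sub-cluster's hull
            synthetic.append(pts[i] + t * (pts[j] - pts[i]))
    return np.vstack([X] + synthetic) if synthetic else X
```

After this step, the classifier no longer sees one dominant sub-cluster drowning out the smaller ones in the total error.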

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 418
363 Assessment of Genetic Variability of Potato Genotypes for Proline Under Salt Stress Conditions

Authors: Elchin Hajiyev, Afet Memmedova Dadash, Sabina Hajiyeva, Aynur Karimova, Ramiz Aliyev

Abstract:

Although potatoes have a wide distribution range, the yield potential of varieties varies greatly depending on the region. Our country is made up of agricultural regions with very different environmental characteristics. In this case, we cannot expect introduced varieties to show the same adaptation to the different conditions of our country. For this reason, varieties with high general adaptability should be used, rather than varieties with special adaptability to certain areas. Soil salinization has become a global problem. Increased salinity has a serious impact on food security by reducing plant productivity. Plants have protective mechanisms of adaptation to salt stress, such as the synthesis of physiologically active substances, resistance to antioxidant stress, and oxidation of membrane lipids. One of these substances is free proline. Our study revealed genetic variation in proline accumulation among samples exposed to stress factors. Changes in proline content under stress conditions were studied in 50 samples. There was wide variation across all treatments. The amount of proline varied between 7.2–37.7 μM/g under salinity conditions. The lowest rate was in the SF33 genotype (1.5 times more than the control (2.5 μM/g)). The highest level of proline under salt stress was in the SF45 genotype (7.25 times higher than the control (32.5 μM/g)). Our studies have found that the protective system reacts differently to the influence of stress factors. According to the results obtained on the amount of proline, adaptation mechanisms must be activated more strongly to maintain metabolism and ensure viability in sensitive forms under the influence of stress factors.
At high doses of the salt stressor, a tenfold increase in proline compared to the control indicates significant damage to the plant organism as a result of stress. To prevent such damage, the antioxidant system needs to mobilize quickly and work at full capacity under adverse conditions. Increasing the dose of the salt stress factor in our study caused a greater increase in the amount of free proline in plant tissues. Considering the functions of proline as an osmoprotectant and antioxidant, it was found that the increase in its amount is aimed at protecting the plant from the acute effects of stressors.

Keywords: genetic variability, potato, genotypes, proline, stress

Procedia PDF Downloads 58
362 Nano-Pesticides: Recent Emerging Tool for Sustainable Agricultural Practices

Authors: Ekta, G. K. Darbha

Abstract:

Nanotechnology offers the potential of simultaneously increasing the efficiency of pesticides compared to their bulk counterparts and reducing their harmful environmental impacts in agriculture. The term nanopesticide covers formulations that combine several surfactants, polymers, metal ions, etc., at sizes ranging from 1-1000 nm, and that exhibit the atypical behavior of nanomaterials (high efficacy and high specific surface area). Commercial pesticide formulations used by farmers today cannot be used effectively due to a number of associated problems. For example, more than 90% of applied formulations are either lost in the environment or unable to reach the target area required for effective pest control. Around 20-30% of pesticides are lost through emissions. A number of factors (application methods, physicochemical properties of the formulations, and environmental conditions) can influence the extent of loss during application. Among the various formulations, polymer-based formulations show the greatest potential due to their greater efficacy, slow release, and protection of the active ingredient against premature degradation compared to other commercial formulations. However, nanoformulations can have a significant effect on the fate of the active ingredient and may release new species by reacting with existing soil contaminants. The environmental fate of these newly generated species is still not well explored, which is essential for field-scale experiments; hence much remains to be explored regarding the environmental fate, nanotoxicology, transport properties, and stability of such formulations. In our preliminary work, we have synthesized a polymer-based nanoformulation of the commercially used herbicide atrazine. Atrazine belongs to the triazine class of herbicides, which is used for the effective control of seed-germinated dicot weeds and grasses.
It functions by binding to the plastoquinone-binding protein in PS-II; plant death results from starvation and oxidative damage caused by breakdown of the electron transport system. The stability of the suspension of the nanoformulation containing the herbicide has been evaluated by considering different parameters such as polydispersity index, particle diameter, and zeta potential under environmentally relevant conditions such as the pH range 4-10 and the temperature range from 25°C to 65°C; encapsulation stability has also been studied for different amounts of added polymer. Morphological characterization has been done using SEM.

Keywords: atrazine, nanoformulation, nanopesticide, nanotoxicology

Procedia PDF Downloads 260
361 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs

Authors: Michela Quadrini

Abstract:

Chord diagrams occur throughout mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology one important motivation for studying chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one interaction (a Watson-Crick base pair) between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which a number of chords with distinct endpoints are attached. There is a natural fattening of any linear chord diagram: the backbone lies on the real axis, while all the chords are in the upper half-plane. Each linear chord diagram has a natural genus given by its associated surface. To each chord diagram and linear chord diagram, it is possible to associate an intersection graph: a graph whose vertices correspond to the chords of the diagram and in which two vertices are connected whenever the corresponding chords intersect. This intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modeling an LCD in terms of the relations among its chords. This set is composed of crossing, nesting, and concatenation.
The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord with each production of the grammar. In other words, it allows associating a unique algebraic term with each linear chord diagram, while the remaining operators allow rewriting the term through a set of appropriate rewriting rules. These rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modelled RNA molecule and its linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than the existing ones. Such an LCD equivalence class could also be useful for obtaining a more accurate estimate of the link between the crossing number and the topological genus, and for studying the relations among other invariants.
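The intersection-graph construction described above is easy to state concretely: represent each chord of an LCD by its pair of endpoint positions on the backbone, and connect two chords exactly when they cross. A minimal sketch (the example chords are illustrative):

```python
def intersection_graph(chords):
    """Adjacency sets of the intersection graph of a linear chord diagram.
    Each chord is a pair (i, j), i < j, of distinct endpoint positions on
    the backbone; chords (a, b) and (c, d) cross iff a < c < b < d
    (or symmetrically c < a < d < b). Nested or disjoint chords get no edge."""
    adj = {k: set() for k in range(len(chords))}
    for u, (a, b) in enumerate(chords):
        for v, (c, d) in enumerate(chords):
            if u < v and (a < c < b < d or c < a < d < b):
                adj[u].add(v)
                adj[v].add(u)
    return adj

# Three chords on a 6-point backbone: chord 0 crosses chord 1,
# chord 1 crosses chord 2, chords 0 and 2 are disjoint.
g = intersection_graph([(0, 2), (1, 4), (3, 5)])
```

Two LCDs with identical intersection graphs then fall into the same equivalence class in the sense used above.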

Keywords: chord diagrams, linear chord diagram, equivalence class, topological language

Procedia PDF Downloads 206
360 Biodegradation of Phenazine-1-Carboxylic Acid by Rhodanobacter sp. PCA2 Proceeds via Decarboxylation and Cleavage of Nitrogen-Containing Ring

Authors: Miaomiao Zhang, Sabrina Beckmann, Haluk Ertan, Rocky Chau, Mike Manefield

Abstract:

Phenazines are a large class of nitrogen-containing aromatic heterocyclic compounds, which are almost exclusively produced by bacteria from diverse genera including Pseudomonas and Streptomyces. Phenazine-1-carboxylic acid (PCA), one of the 'core' phenazines, is converted from chorismic acid before being modified into other phenazine derivatives in different cells. Phenazines have attracted enormous interest because of their multiple roles in biocontrol, bacterial interaction, biofilm formation, and the fitness of their producers. However, in spite of their ecological importance, degradation, as a part of phenazines' fate, has so far received only limited attention. Here, to isolate PCA-degrading bacteria, 200 mg L⁻¹ PCA was supplied as the sole carbon, nitrogen, and energy source in minimal mineral medium. Quantitative PCR and reverse-transcription PCR were employed to study the abundance and activity, respectively, of the functional gene MFORT 16269 in PCA degradation. Intermediates and products of PCA degradation were identified with LC-MS/MS. After enrichment and isolation, a PCA-degrading strain was selected from soil and designated Rhodanobacter sp. PCA2 based on full 16S rRNA sequencing. As determined by HPLC, strain PCA2 consumed 200 mg L⁻¹ (836 µM) PCA at a rate of 17.4 µM h⁻¹, accompanied by a significant yield of cells from 1.92 × 10⁵ to 3.11 × 10⁶ cells per mL. Strain PCA2 was capable of degrading other phenazines as well, including phenazine (4.27 µM h⁻¹), pyocyanin (2.72 µM h⁻¹), neutral red (1.30 µM h⁻¹), and 1-hydroxyphenazine (0.55 µM h⁻¹). Moreover, during the incubation, transcript copies of the MFORT 16269 gene increased significantly from 2.13 × 10⁶ to 8.82 × 10⁷ copies mL⁻¹, 2.77 times faster than the increase in the corresponding gene copy number (2.20 × 10⁶ to 3.32 × 10⁷ copies mL⁻¹), indicating that the MFORT 16269 gene was activated and played a role in PCA degradation.
As analyzed by LC-MS/MS, decarboxylation from the ring structure was determined to be the first step of PCA degradation, followed by cleavage of the nitrogen-containing ring by a dioxygenase, which converted phenazine to nitrosobenzene. Subsequently, phenylhydroxylamine was detected after incubation for two days and was then transformed into aniline and catechol. Additionally, genomic and proteomic analyses were also carried out for strain PCA2. Overall, the findings presented here show that the newly isolated strain Rhodanobacter sp. PCA2 is capable of degrading phenazines through decarboxylation and cleavage of the nitrogen-containing ring, during which the MFORT 16269 gene was activated and played important roles.
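The kinetics quoted above are internally consistent, as a quick back-of-envelope check shows (all figures taken directly from the abstract):

```python
# Time to deplete the supplied PCA at the reported degradation rate
consumed_uM = 836.0          # 200 mg/L of PCA, as reported
rate_uM_per_h = 17.4
time_to_deplete_h = consumed_uM / rate_uM_per_h   # roughly two days,
# consistent with phenylhydroxylamine appearing after two days of incubation

# Fold changes during incubation
cell_fold = 3.11e6 / 1.92e5                       # ~16-fold cell growth on PCA
transcript_fold = 8.82e7 / 2.13e6                 # MFORT 16269 transcripts
gene_copy_fold = 3.32e7 / 2.20e6                  # corresponding gene copies
relative_induction = transcript_fold / gene_copy_fold  # ~2.7-2.8x, matching
# the reported "2.77 times faster" transcript increase
```

The ~48 h depletion time lines up with the two-day appearance of downstream intermediates, supporting the proposed pathway ordering.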

Keywords: decarboxylation, MFORT16269 gene, phenazine-1-carboxylic acid degradation, Rhodanobacter sp. PCA2

Procedia PDF Downloads 226
359 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI

Authors: James Rigor Camacho, Wansu Lim

Abstract:

Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms; they can collect, process, and store data on their own, and can analyze and apply complicated algorithms like localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. To perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties, machine learning-based classifiers were used. The EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning method, until the emotional state was identified. In EEG signal processing, after each EEG signal has been received in real time, the Fast Fourier Transform (FFT) is utilized to translate it from the time to the frequency domain and observe the frequency bands in each EEG signal. To appropriately capture the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed.
The next stage is to use the selected features to predict emotion in the EEG data with the K-Nearest Neighbors (KNN) technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device like the NVIDIA Jetson Nano. EEG-based emotion identification on the edge can be employed in applications that can rapidly expand research and industrial adoption.
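The processing chain described above, FFT-based band powers as features followed by a plain KNN vote, can be sketched as follows. The band limits, sampling rate, and emotion labels are illustrative assumptions, not the study's configuration:

```python
import numpy as np

# Canonical EEG bands in Hz (illustrative choices)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(signal, fs):
    """Mean spectral power in each EEG band, computed via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return np.array([psd[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in BANDS.values()])

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbour majority vote in feature space."""
    order = np.argsort(np.linalg.norm(train_X - x, axis=1))[:k]
    votes = [train_y[i] for i in order]
    return max(set(votes), key=votes.count)
```

In the real system each incoming window would be converted to such a feature vector and classified against arousal/valence training data directly on the device.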

Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors

Procedia PDF Downloads 109
358 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics

Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima

Abstract:

This study outlines a method for developing a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering to define the membership functions, and (3) the Adaptive Neuro-Fuzzy Inference System (ANFIS), a combination of fuzzy inference and artificial neural networks. These methods were demonstrated with a case study where the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using solar irradiation, module efficiency, and performance ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were subsequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped with error reduction in some cases but at times resulted in the opposite. Sugeno-type models gave errors slightly lower than those of the Mamdani type. While ANFIS is superior in terms of error minimization, it could generate questionable solutions, e.g., negative GWP values for the solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of ANFIS models depends strongly on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point beyond which increasing the input values no longer improves the GWP and LCOE.
In the absence of data that could be used for calibration, conventional FIS provides a knowledge-based model that can be used for prediction. In the PV case study, conventional FIS generated errors only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in industry and policy-making. While the methodology does not guarantee results more accurate than those generated by the full life cycle methodology, it does provide a relatively simple way of generating knowledge- and data-based estimates that can be used during the initial design of a system.
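For concreteness, a single zero-order Sugeno inference step can be sketched as follows. The membership ranges and rule consequents below are invented placeholders for illustration, not the calibrated values from this study:

```python
def trimf(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    left = (x - a) / (b - a) if b != a else 1.0
    right = (c - x) / (c - b) if c != b else 1.0
    return max(0.0, min(left, right, 1.0))

def sugeno_lcoe(irradiation, efficiency):
    """Zero-order Sugeno inference: each rule fires with the product of its
    antecedent memberships and outputs a crisp constant; the result is the
    firing-strength-weighted average of the constants."""
    low_irr = trimf(irradiation, 800, 800, 1800)    # kWh/m2/a, placeholder range
    high_irr = trimf(irradiation, 800, 1800, 1800)
    low_eff = trimf(efficiency, 0.10, 0.10, 0.22)   # placeholder module efficiency
    high_eff = trimf(efficiency, 0.10, 0.22, 0.22)
    rules = [  # (firing strength, consequent LCOE in USD/kWh, placeholder)
        (low_irr * low_eff, 0.12),
        (low_irr * high_eff, 0.09),
        (high_irr * low_eff, 0.07),
        (high_irr * high_eff, 0.04),
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else float("nan")
```

A Mamdani-type system would instead aggregate output fuzzy sets and defuzzify; ANFIS would tune the membership parameters and consequents against calibration data rather than fixing them by hand.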

Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks

Procedia PDF Downloads 171
357 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU

Authors: Ali Abdul Kadhim, Fue Lien

Abstract:

Solid particle distribution on an impingement surface has been simulated utilizing a graphics processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) model. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. Particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, considering all the external forces. Previous models distribute particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes these deficiencies of previous LBM-CA models and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model in simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature.
The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re=10,000. The simulations were conducted for L/D=2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup of about 350 over the serial code running on a single CPU.
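The velocity-weighted redistribution rule at the heart of the LBM-CA approach can be illustrated with a toy one-dimensional analogue. The paper's model is 3-D on a D3Q27 stencil with bulk-density and force effects; this sketch only shows the two properties the improved model emphasizes, velocity-biased probabilistic hopping and exact particle-number conservation:

```python
import numpy as np

def redistribute_particles(counts, velocity, rng=None):
    """One CA step on a 1-D lattice: each of a node's particles hops to the
    left/right neighbour or stays, with probabilities biased by the local
    (normalized) fluid velocity. Particles are reflected at the domain ends."""
    rng = np.random.default_rng(rng)
    n = len(counts)
    new = np.zeros(n, dtype=int)
    for i, m in enumerate(counts):
        u = float(np.clip(velocity[i], -1.0, 1.0))  # velocity normalized to [-1, 1]
        p_right = max(u, 0.0)
        p_left = max(-u, 0.0)
        p_stay = 1.0 - p_right - p_left
        moves = rng.choice([-1, 0, 1], size=int(m), p=[p_left, p_stay, p_right])
        for d in moves:
            new[min(max(i + d, 0), n - 1)] += 1
    return new
```

Because every particle is moved individually with node-local probabilities, the total particle count is conserved exactly at each step, unlike redistribution schemes that ignore the local velocity and occupancy.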

Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model

Procedia PDF Downloads 210
356 Cross-Sectoral Energy Demand Prediction for Germany with a 100% Renewable Energy Production in 2050

Authors: Ali Hashemifarzad, Jens Zum Hingst

Abstract:

The structure of the world's energy systems has changed significantly over the past years. One of the most important challenges of the 21st century, in Germany and worldwide, is the energy transition. This transition aims to comply with the recent international climate agreements from the United Nations Climate Change Conference (COP21) to ensure a sustainable energy supply with minimal use of fossil fuels. Germany aims for complete decarbonization of the energy sector by 2050 according to the federal climate protection plan. One of the stipulations of the Renewable Energy Sources Act 2017 for the expansion of energy production from renewable sources in Germany is that they cover at least 80% of the electricity requirement in 2050; for gross final energy consumption, the target is at least 60%. This means that by 2050, the energy supply system would have to be almost completely converted to renewable energy. An essential basis for the development of such a sustainable energy supply from 100% renewable energies is a prediction of the energy requirement in 2050. This study presents two scenarios for the final energy demand in Germany in 2050. In the first scenario, the targets for increased energy efficiency and demand reduction are set very ambitiously. To build a comparison basis, the second scenario provides results with less ambitious assumptions. For this purpose, the relevant framework conditions (following CUTEC 2016) were first examined, such as the predicted population development and economic growth, which in the past were significant drivers of the increase in energy demand. The potential for energy demand reduction and efficiency increase (on the demand side) was also investigated. In particular, current and future technological developments in energy-consuming sectors and possible options for energy substitution (namely the electrification rate in the transport sector and the building renovation rate) were included.
Here, in addition to the traditional electricity sector, heat and fuel-based consumption in sectors such as households, commerce, industry, and transport are taken into account, supporting the idea that for a 100% supply from renewable energies, the areas currently based on (fossil) fuels must be almost completely electricity-based by 2050. The results show that the very ambitious scenario requires a final energy demand of 1,362 TWh/a, composed of 818 TWh/a of electricity, 229 TWh/a of ambient heat for electric heat pumps, and approx. 315 TWh/a of non-electric energy (raw materials for non-electrifiable processes). In the less ambitious scenario, in which the targets are not fully achieved by 2050, the final energy demand requires a higher electricity share of almost 1,138 TWh/a (of the total 1,682 TWh/a). It has also been estimated that 50% of the generated electricity must be stored to compensate for fluctuations in the daily and annual flows. Due to conversion and storage losses (about 50%), this would mean that the electricity requirement for the very ambitious scenario increases to 1,227 TWh/a.
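The quoted figures are mutually consistent, as a quick check shows (the shares and loss factors are exactly those stated in the abstract):

```python
# Ambitious-scenario final energy demand, TWh/a (figures from the abstract)
electricity = 818      # direct electricity demand
ambient_heat = 229     # ambient heat harvested by electric heat pumps
non_electric = 315     # approx. raw materials for non-electrifiable processes
final_demand = electricity + ambient_heat + non_electric   # 1,362 TWh/a

# If half of the electricity must pass through storage and the
# conversion/storage chain loses about 50%, generation must grow:
stored_share = 0.5
losses = 0.5
generation = (electricity * (1 - stored_share)
              + electricity * stored_share / (1 - losses))  # 1,227 TWh/a
```

So the jump from 818 to 1,227 TWh/a follows directly from routing half of the demand through a storage chain with ~50% round-trip losses.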

Keywords: energy demand, energy transition, German Energiewende, 100% renewable energy production

Procedia PDF Downloads 136
355 Ocean Planner: A Web-Based Decision Aid to Design Measures to Best Mitigate Underwater Noise

Authors: Thomas Folegot, Arnaud Levaufre, Léna Bourven, Nicolas Kermagoret, Alexis Caillard, Roger Gallou

Abstract:

Concern about the negative impacts of anthropogenic noise on the ocean's ecosystems has increased over recent decades. This concern has led to a similarly increased willingness to regulate noise-generating activities, of which shipping is one of the most significant. Dealing with ship noise requires not only knowledge about the noise from individual ships, but also about how the ship noise is distributed in time and space within the habitats of concern. Marine mammals, but also fish, sea turtles, larvae, and invertebrates, depend heavily on sound: to hunt and feed, to avoid predators, to socialize and communicate during reproduction, or to defend a territory. In the marine environment, sight is only useful up to a few tens of meters, whereas sound can propagate over hundreds or even thousands of kilometers. Directive 2008/56/EC of the European Parliament and of the Council of June 17, 2008, known as the Marine Strategy Framework Directive (MSFD), requires the Member States of the European Union to take the necessary measures to reduce the impacts of maritime activities in order to achieve and maintain a good environmental status of the marine environment. Ocean-Planner is a web-based platform that provides regulators, managers of protected or sensitive areas, and other stakeholders with a decision-support tool that enables them to anticipate and quantify the effectiveness of management measures in terms of reducing or modifying the distribution of underwater noise, in response to Descriptor 11 of the MSFD and to the Marine Spatial Planning Directive. Based on the operational sound modelling tool Quonops Online Service, Ocean-Planner allows the user, via an intuitive geographical interface, to define management measures at local (Marine Protected Area, Natura 2000 sites, harbors, etc.) or global (Particularly Sensitive Sea Area) scales, seasonal (regulation over a period of time) or permanent, partial (focused on some maritime activities) or complete (all maritime activities), etc.
Speed limits, exclusion areas, traffic separation schemes (TSS), and vessel sound level limitation are among the measures supported by the tool. Ocean-Planner helps decide on the most effective measure to apply to maintain or restore the biodiversity and functioning of the ecosystems of the coastal seabed, maintain a good state of conservation of sensitive areas, and maintain or restore the populations of marine species.

Keywords: underwater noise, marine biodiversity, marine spatial planning, mitigation measures, prediction

Procedia PDF Downloads 127
354 Anti-Acanthamoeba Activities of Fatty Acid Salts and Fatty Acids

Authors: Manami Masuda, Mariko Era, Takayoshi Kawahara, Takahide Kanyama, Hiroshi Morita

Abstract:

Objectives: Fatty acid salts are a type of anionic surfactant produced from fatty acids and alkali, and are known to have potent antibacterial activities. Acanthamoeba is ubiquitously distributed in the environment, including sea water, fresh water, soil, and even air. Although generally free-living, Acanthamoeba can be an opportunistic pathogen that causes a potentially blinding corneal infection known as Acanthamoeba keratitis. In this study, we evaluated the anti-amoeba activity of fatty acid salts and fatty acids against Acanthamoeba castellanii ATCC 30010. Materials and Methods: The anti-amoeba activity of 9 fatty acid salts (potassium butyrate (C4K), caproate (C6K), caprylate (C8K), caprate (C10K), laurate (C12K), myristate (C14K), oleate (C18:1K), linoleate (C18:2K), and linolenate (C18:3K)) was tested on cells of Acanthamoeba castellanii ATCC 30010. Fatty acid salts (concentration of 175 mM, pH 10.5) were prepared by mixing each fatty acid with the appropriate amount of KOH; the amoeba suspension mixed with a pH-adjusted KOH solution was used as the control. Fatty acids (concentration of 175 mM) were prepared by mixing each fatty acid with Tween 80 (20%); the amoeba suspension mixed with Tween 80 (20%) was used as the control. For the anti-amoeba assay, the amoeba suspension (3.0 × 10⁴ trophozoites/ml) was mixed with the fatty acid potassium sample (final concentration of 175 mM). Samples were incubated at 30°C for 10 min, 60 min, and 180 min, and the viability of A. castellanii was then evaluated using a plankton counting chamber and trypan blue staining. The minimum inhibitory concentration (MIC) against Acanthamoeba was determined using the two-fold dilution method and was defined as the minimal anti-amoeba concentration that inhibited visible amoeba growth after incubation (180 min). Results: C8K, C10K, and C12K produced an anti-amoeba effect of 4 log-units (99.99% growth suppression of A. castellanii) after 180 min of incubation at 175 mM. After the amoeba suspension was mixed with C10K or C12K, destruction of the cell membrane was observed, whereas the pH-adjusted control solution did not exhibit any effect even after 180 min of incubation with A. castellanii. Moreover, C6, C8, and C18:3 produced a 4 log-unit effect within 60 min, and C4 and C18:2 exhibited a 4-log reduction after 180 min of incubation. Furthermore, the minimum inhibitory concentration (MIC) was determined: the MIC of C10K, C12K, and C4 was 2.7 mM. These results indicate that C10K, C12K, and C4 have high anti-amoeba activity against A. castellanii and suggest that they have great potential as anti-amoeba agents.
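As a numerical aside, the two-fold dilution series and the 4 log-unit criterion used above are straightforward to sketch. The starting concentration (175 mM) and inoculum density (3.0 × 10⁴ cells/ml) are taken from the abstract; the function names and the survivor count are purely illustrative.

```python
from math import log10

def twofold_dilutions(start_mM, steps):
    """Two-fold serial dilution series starting at start_mM."""
    return [start_mM / 2 ** i for i in range(steps)]

def log_reduction(inoculum, survivors):
    """Log10 reduction of viable trophozoites relative to the inoculum."""
    return log10(inoculum / survivors)

# Dilution series used to bracket the MIC, starting from 175 mM.
series = twofold_dilutions(175.0, 8)   # 175, 87.5, ..., ~2.73, ~1.37 mM

# A 4 log-unit effect means 99.99% suppression, e.g. an inoculum of
# 3.0e4 cells/ml reduced to 3 surviving cells/ml (illustrative count).
reduction = log_reduction(3.0e4, 3.0)
```

Note that the sixth step of the series is 175/2⁶ ≈ 2.73 mM, consistent with the reported MIC of 2.7 mM.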

Keywords: fatty acid salts, anti-amoeba activity, Acanthamoeba, fatty acids

Procedia PDF Downloads 482
353 Sea Surface Trend over the Arabian Sea and Its Influence on the South West Monsoon Rainfall Variability over Sri Lanka

Authors: Sherly Shelton, Zhaohui Lin

Abstract:

In recent decades, the inter-annual variability of summer precipitation over India and Sri Lanka has intensified significantly, with an increased frequency of both abnormally dry and wet summers. Prediction of this inter-annual variability is therefore crucial and urgent for water management and local agricultural scheduling. However, none of the hypotheses put forward so far adequately explains the monsoon variability or the related factors that affect South West Monsoon (SWM) variability in Sri Lanka. This study focuses on identifying the spatial and temporal variability of SWM rainfall from June to September (JJAS) over Sri Lanka and the associated trends. Monthly rainfall records from 19 stations across Sri Lanka covering 1980-2013 are used to investigate long-term trends in SWM rainfall. Linear trends of atmospheric variables are calculated to understand the drivers behind the changes, based on observed precipitation, sea surface temperature (SST), and atmospheric reanalysis products for 34 years (1980-2013). Empirical orthogonal function (EOF) analysis was applied to understand the spatial and temporal behaviour of seasonal SWM rainfall variability and to investigate whether the trend pattern is the dominant mode explaining SWM rainfall variability. Both the spatially averaged and station-based precipitation over the country showed statistically insignificant decreasing trends, except at a few stations. The first two EOFs of seasonal (JJAS) mean rainfall explained 52% and 23% of the total variance; the first PC showed positive loadings of SWM rainfall over the whole landmass, with the strongest positive loadings in the western/southwestern part of Sri Lanka.
There is a negative correlation (r ≤ -0.3) between the SWM rainfall index (SMRI) and SST in the Arabian Sea and the Central Indian Ocean, which indicates that lower temperatures in these regions are associated with greater rainfall over the country. This study also shows consistent warming throughout the Indian Ocean. The results show that precipitable water over the country is decreasing with time, which contributes to the reduction of precipitation over the area by weakening the updraft. In addition, evaporation is weakening over the Arabian Sea, the Bay of Bengal, and the Sri Lankan landmass, which leads to a reduction of the moisture availability required for SWM rainfall over Sri Lanka. At the same time, weakening of the SST gradient between the Arabian Sea and the Bay of Bengal can weaken the monsoon circulation, ultimately diminishing the SWM over Sri Lanka. The decreasing trends of moisture, moisture transport, zonal wind, and moisture flux convergence, together with weakening evaporation over the Arabian Sea during the past decade, have an aggravating influence on the decreasing trend of monsoon rainfall over Sri Lanka.
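The linear trends of rainfall and atmospheric variables mentioned above reduce, per station or grid point, to an ordinary least-squares slope against time. A minimal pure-Python sketch; the rainfall values below are invented for illustration, not the study's data.

```python
def ols_trend(times, values):
    """Ordinary least-squares slope (trend per unit time) and intercept."""
    n = len(times)
    mean_t = sum(times) / n
    mean_y = sum(values) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in zip(times, values))
    var = sum((t - mean_t) ** 2 for t in times)
    slope = cov / var
    return slope, mean_y - slope * mean_t

# Hypothetical JJAS rainfall totals (mm) for one station.
years = [1980, 1981, 1982, 1983, 1984, 1985]
rainfall = [820.0, 805.0, 790.0, 760.0, 770.0, 741.0]
slope, intercept = ols_trend(years, rainfall)  # negative slope: decreasing trend
```

Applied over 1980-2013 at each of the 19 stations, the sign and magnitude of `slope` give the station-wise trend map described in the abstract.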

Keywords: Arabian Sea, moisture flux convergence, South West Monsoon, Sri Lanka, sea surface temperature

Procedia PDF Downloads 137
352 The Use of Optical-Radar Remotely-Sensed Data for Characterizing Geomorphic, Structural and Hydrologic Features and Modeling Groundwater Prospective Zones in Arid Zones

Authors: Mohamed Abdelkareem

Abstract:

Remote sensing data contribute to predicting the prospective areas of water resources. Integration of microwave and multispectral data along with climatic, hydrologic, and geological data has been used here. In this article, Sentinel-2, Landsat-8 Operational Land Imager (OLI), Shuttle Radar Topography Mission (SRTM), Tropical Rainfall Measuring Mission (TRMM), and Advanced Land Observing Satellite (ALOS) Phased Array Type L-band Synthetic Aperture Radar (PALSAR) data were utilized to identify the geological, hydrologic, and structural features of Wadi Asyuti, a defunct tributary of the Nile basin in the eastern Sahara. Image transformation of the Sentinel-2 and Landsat-8 data allowed the different varieties of rock units to be characterized. Integration of microwave remotely-sensed data and GIS techniques provided information on the physical characteristics of catchments and rainfall zones, which play a crucial role in mapping groundwater prospective zones. Fusing Landsat-8 OLI with ALOS/PALSAR data enhanced structural elements that are difficult to reveal using optical data alone. Lineament extraction and interpretation indicated that the area is clearly shaped by a NE-SW graben cut by a NW-SE trend. Such structures allowed the accumulation of thick sediments in the downstream area. Processing of recent OLI data acquired on March 15, 2014, verified the flood potential maps and offered the opportunity to extract the extent of the flooding zone of the recent flash flood event (March 9, 2014), as well as revealing infiltration characteristics. Several layers, including geology, slope, topography, drainage density, lineament density, soil characteristics, rainfall, and morphometric characteristics, were combined after assigning a weight to each using a GIS-based knowledge-driven approach. 
The results revealed that the predicted groundwater potential zones (GPZs) can be arranged into six distinctive groups according to their probability of groundwater occurrence, namely very low, low, moderate, high, very high, and excellent. Field and well data validated the delineated zones.
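The knowledge-driven GIS overlay described above amounts to a weighted linear combination of normalized thematic layers, with the composite score then sliced into the six zones named in the results. A hedged sketch: the layer names, weights, and class breaks below are assumptions for illustration, not those of the study.

```python
def groundwater_potential(layers, weights):
    """Weighted linear overlay of normalized (0-1) raster layers.

    layers: dict name -> list of per-cell values; weights: dict name -> weight.
    Returns one composite score in [0, 1] per cell."""
    total = sum(weights.values())
    cells = len(next(iter(layers.values())))
    return [
        sum(weights[name] * layers[name][i] for name in layers) / total
        for i in range(cells)
    ]

def classify(score):
    """Map a 0-1 score onto six zones (class breaks are illustrative)."""
    classes = ["very low", "low", "moderate", "high", "very high", "excellent"]
    return classes[min(int(score * 6), 5)]

# Three toy layers over four cells; weights are assumed, not the study's.
layers = {
    "rainfall": [0.2, 0.5, 0.8, 0.9],
    "lineament_density": [0.1, 0.4, 0.7, 0.95],
    "slope": [0.3, 0.5, 0.6, 0.9],
}
weights = {"rainfall": 3, "lineament_density": 2, "slope": 1}
scores = groundwater_potential(layers, weights)
zones = [classify(s) for s in scores]
```

In a real GIS workflow the same arithmetic would run per raster cell, with each input layer first rescaled so that higher values indicate more favourable conditions.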

Keywords: GIS, remote sensing, groundwater, Egypt

Procedia PDF Downloads 101
351 Using Collaborative Planning to Develop a Guideline for Integrating Biodiversity into Land Use Schemes

Authors: Sagwata A. Manyike, Hulisani Magada

Abstract:

The South African National Biodiversity Institute is in the process of developing a guideline which sets out how biodiversity can be incorporated into land use (zoning) schemes. South Africa promulgated its Spatial Planning and Land Use Management Act in 2015, and the act seeks, amongst other things, to bridge the gap between spatial planning and land use management within the country. In addition, the act requires local governments to develop wall-to-wall land use schemes for their entire jurisdictions, as they had previously only developed them for their urban areas. At the same time, South Africa has a rich history of systematic conservation planning whereby Critical Biodiversity Areas and Ecological Support Areas have been spatially delineated at a scale appropriate for spatial planning and land use management by local government. South Africa is also in the process of spatially delineating ecological infrastructure, defined as naturally occurring ecosystems which provide valuable services to people, such as water and climate regulation, soil formation, and disaster risk reduction. The Biodiversity and Land Use Project, which is funded by the Global Environment Facility through the United Nations Development Programme, is seeking to explore ways in which biodiversity information and ecological infrastructure can be incorporated into the spatial planning and land use management systems of local governments. Towards this end, the Biodiversity and Land Use Project has developed a guideline which sets out how local governments can integrate biodiversity into their land-use schemes, not only as a way of ensuring sustainable development but also as a way of helping them prepare for climate change. In addition, by incorporating biodiversity into land-use schemes, the project is exploring new ways of protecting biodiversity through land use schemes. 
The Guideline for Incorporating Biodiversity into Land Use Schemes was developed in response to the fact that the National Land Use Scheme Guidelines only indicate that local governments need to incorporate biodiversity, without explaining how this can be achieved. The National Guidelines also fail to specify which biodiversity-related layers are compatible with which land uses, or what the benefits of incorporating biodiversity into the schemes will be for a local government. The guideline therefore sets out an argument for why biodiversity is important in land management processes and proceeds to provide a step-by-step guide to how schemes can integrate priority biodiversity layers. This guideline will further be added as an addendum to the National Land Use Guidelines. Although the planning act calls for local governments to have wall-to-wall schemes within five years of its enactment, many municipalities will not meet this deadline, and so this guideline will support them in the development of their new schemes.

Keywords: biodiversity, climate change, land use schemes, local government

Procedia PDF Downloads 185
350 Transition from Linear to Circular Economy in Gypsum in India

Authors: Shanti Swaroop Gupta, Bibekananda Mohapatra, S. K. Chaturvedi, Anand Bohra

Abstract:

For sustainable development in India, there is an urgent need to follow the principles of industrial symbiosis in industrial processes, under which the scraps, wastes, or by-products of one industry become the raw materials for another. This will not only help in reducing dependence on natural resources but also give industry an economic advantage. Gypsum is one such area in India, where the linear economy model of by-product gypsum utilization has resulted in an unutilized legacy phosphogypsum stock of 64.65 million tonnes (mt) at phosphoric acid plants in 2020-21. In the future, this unutilized stock will grow further with the expected generation of huge quantities of Flue Gas Desulphurization (FGD) gypsum from thermal power plants. It is therefore essential to transition from a linear to a circular economy in gypsum in India, which will bring huge environmental as well as ecological benefits. Gypsum is required in many sectors, such as construction (the cement industry, gypsum boards, glass fibre reinforced gypsum panels, gypsum plaster, fly ash lime bricks, floor screeds, and road construction), agriculture, the manufacture of Plaster of Paris, pottery, the ceramic industry, water treatment processes, and the manufacture of ammonium sulphate, paints, textiles, etc. The challenges faced in the areas of quality, policy, logistics, infrastructure, promotion, etc., for complete utilization of by-product gypsum are discussed. The untapped potential of by-product gypsum utilization in various sectors has been identified, including the use of gypsum in agriculture for sodic soil reclamation, utilization of the legacy stock in the cement industry in mission mode, and improvement in the quality of by-product gypsum through standardization and usage in the building materials industry. 
Based on the measures required to tackle these challenges and to exploit the untapped potential of gypsum, a comprehensive action plan for the transition from a linear to a circular economy in gypsum in India has been formulated. The strategies and policy measures required to implement the action plan have been recommended to the relevant government departments. It is estimated that focused implementation of the proposed action plan would significantly decrease the unutilized gypsum legacy stock over the next five years, and that the stock would cease to exist by 2027-28 if the plan is implemented effectively.

Keywords: circular economy, FGD gypsum, India, phosphogypsum

Procedia PDF Downloads 272
349 Prediction of Outcome after Endovascular Thrombectomy for Anterior and Posterior Ischemic Stroke: ASPECTS on CT

Authors: Angela T. H. Kwan, Wenjun Liang, Jack Wellington, Mohammad Mofatteh, Thanh N. Nguyen, Pingzhong Fu, Juanmei Chen, Zile Yan, Weijuan Wu, Yongting Zhou, Shuiquan Yang, Sijie Zhou, Yimin Chen

Abstract:

Background: Endovascular therapy (EVT), in the form of mechanical thrombectomy following intravenous thrombolysis, is the gold standard treatment for patients with acute ischemic stroke (AIS) due to large vessel occlusion (LVO). It is well established that an ASPECTS ≥ 7 is associated with an increased likelihood of positive post-EVT outcomes compared to an ASPECTS < 7. There is also prognostic utility in coupling posterior circulation ASPECTS (pc-ASPECTS) with magnetic resonance imaging for evaluating post-EVT functional outcome. However, the value of pc-ASPECTS applied to CT must be explored further to determine its usefulness in predicting functional outcomes following EVT. Objective: In this study, we aimed to determine whether pc-ASPECTS on CT can predict post-EVT functional outcomes among patients with AIS due to LVO. Methods: A total of 247 consecutive patients aged 18 and over receiving EVT for LVO-related AIS were recruited into a prospective database. The data were retrospectively analyzed from March 2019 to February 2022 at two comprehensive tertiary care stroke centers: Foshan Sanshui District People’s Hospital and First People's Hospital of Foshan in China. Patient parameters included EVT within 24 hours of symptom onset, premorbid modified Rankin Scale (mRS) ≤ 2, presence of distal and terminal cerebral blood vessel occlusion, and a subsequent CT scan 24-72 hours post stroke onset. Univariate comparisons were performed using the Fisher exact test or χ² test for categorical variables and the Mann-Whitney U test for continuous variables. A p-value ≤ 0.05 was considered statistically significant. Results: A total of 247 patients met the inclusion criteria; 3 were excluded due to the absence of post-CTs and 8 for pre-EVT ASPECTS < 7. Overall, 236 individuals were examined: 196 anterior circulation ischemic strokes and 40 posterior circulation strokes with basilar artery occlusion. 
We found that both baseline post-ASPECTS and pc-ASPECTS ≥ 7 serve as strong positive markers of favorable outcomes at 90 days post-EVT. Lower rates of inpatient mortality/hospice discharge, 90-day mortality, and 90-day poor outcome were also observed. In addition, patients in the post-ASPECTS ≥ 7 anterior circulation group had shorter door-to-recanalization time (DRT), puncture-to-recanalization time (PRT), and last known normal-to-puncture time (LKNPT). Conclusion: Patients with anterior and posterior circulation ischemic strokes and baseline post-ASPECTS and pc-ASPECTS ≥ 7 may benefit from EVT.
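For the categorical comparisons above, the Fisher exact test can be computed from first principles for a 2×2 table (e.g. ASPECTS ≥ 7 vs < 7 against good vs poor outcome). The sketch below uses invented counts, not the study's data.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]],
    summing the probabilities of all tables as or less likely than the
    observed one (the conventional 'sum of small p' definition)."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_table(x):                      # hypergeometric probability of a table
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    lo = max(0, row1 - (n - col1))       # smallest feasible top-left cell
    hi = min(row1, col1)                 # largest feasible top-left cell
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Hypothetical counts: 30 of 40 high-ASPECTS patients with a good outcome,
# versus 5 of 20 low-ASPECTS patients.
p = fisher_exact_2x2(30, 10, 5, 15)
```

With counts on the scale of this cohort, an association this strong yields p well below the 0.05 threshold used in the abstract.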

Keywords: endovascular therapy, thrombectomy, large vessel occlusion, cerebral ischemic stroke, ASPECTS

Procedia PDF Downloads 117
348 In vitro Callus Production from Lantana Camara: A Step towards Biotransformation Studies

Authors: Maged El-Sayed Mohamed

Abstract:

Plant tissue culture practices are nowadays presented as the most promising substitute for the whole plant in terms of secondary metabolite production. They offer the advantages of high production and tunability, and they have less effect on plant ecosystems. Lantana camara is a weed that is common all over the world as an ornamental plant. Weeds can adapt to any type of soil and climate owing to their rich cellular machinery for secondary metabolite production. Lantana camara shows this characteristic, with a very rich diversity of secondary metabolites and no dominant class of compounds. Aim: This trait encouraged the author to develop tissue culture experiments for Lantana camara as a platform for the production and manipulation of secondary metabolites through biotransformation. Methodology: The plant was collected in its flowering stage in September 2014, and explants were prepared from the shoot tip, axillary bud, and leaf. Different types of culture media were tried, as well as four phytohormones and their combinations: NAA, 2,4-D, BAP, and kinetin. Explants were grown in the dark or in 12-hour dark/light cycles at 25°C. A metabolic profile was made for the produced callus and compared to that of the whole plant. The metabolic profile was made using GC-MS for volatile constituents (extracted by n-hexane) and HPLC-MS and capillary electrophoresis-mass spectrometry (CE-MS) for non-volatile constituents (extracted by ethanol and water). Results: The best conditions for callus induction were achieved using MS medium supplied with 30 g sucrose and NAA/BAP (1:0.2 mg/L). Callus initiation was favoured by incubation in the dark for 20 days. The callus produced under these conditions was yellow, changing to brownish after 30 days. The rate of callus growth was high, as expressed by the callus diameter, which reached 1.15 ± 0.2 cm in 30 days; however, callus induction was delayed by 15 days. 
The metabolic profile of both the volatile and non-volatile constituents of the callus showed a simpler metabolite background than that of the whole plant, with two new (unresolved) peaks in the chromatogram of the callus' non-volatile constituents. Conclusion: Lantana camara callus production can itself be a source of new secondary metabolites and could be used for biotransformation studies owing to its simple metabolic background, which allows easy identification of newly formed metabolites. Callus production combines a simple metabolic background with the plant's rich cellular secondary metabolite machinery, which could be elicited to produce valuable, medicinally active products.

Keywords: capillary electrophoresis-mass spectrometry, gas chromatography, metabolic profile, plant tissue culture

Procedia PDF Downloads 394
347 Biodeterioration of Historic Parks of UK by Algae

Authors: Syeda Fatima Manzelat

Abstract:

This chapter investigates the biodeterioration of parks in the UK caused by lichens, focusing on Campbell Park and Great Linford Manor Park in Milton Keynes. The study first isolates and identifies potent biodeteriogens responsible for potential biodeterioration in these parks, enumerating and recording different classes and genera of lichens known for their biodeteriorative properties. It then examines the implications of lichens for biodeterioration at historic sites within these parks, considering impacts on historic structures, the environment, and associated health risks. Conservation strategies and preventive measures are discussed before concluding. Lichens, characterized by a symbiotic association between a fungus and an alga, thrive on various surfaces including building materials, soil, rock, wood, and trees. The fungal component provides structure and protection, while the algal partner performs photosynthesis. Lichens collected from the park sites, such as Xanthoria, Cladonia, and Arthonia, were observed affecting the historic walls, objects, and trees. Their biodeteriorative impacts were visible to the naked eye, contributing to aesthetic and structural damage. The study highlights the role of lichens as bioindicators of pollution, sensitive to changes in air quality: the presence and diversity of lichens provide insights into air quality and pollution levels in the parks. However, lichens also pose health risks, with certain species causing respiratory issues, allergies, skin irritation, and other toxic effects in humans and animals. Conservation strategies discussed include regular monitoring, biological and chemical control methods, physical removal, and preventive cleaning. The study emphasizes the importance of a multifaceted, multidisciplinary approach to managing lichen-induced biodeterioration. 
Future management practices could involve advanced techniques such as eco-friendly biocides and self-cleaning materials to effectively control lichen growth and preserve historic structures. In conclusion, this chapter underscores the dual role of lichens as agents of biodeterioration and indicators of environmental quality. Comprehensive conservation management approaches, encompassing monitoring, targeted interventions, and advanced conservation methods, are essential for preserving the historic and natural integrity of parks like Campbell Park and Great Linford Manor Park.

Keywords: biodeterioration, historic parks, algae, UK

Procedia PDF Downloads 39
346 Flash Flood in Gabes City (Tunisia): Hazard Mapping and Vulnerability Assessment

Authors: Habib Abida, Noura Dahri

Abstract:

Flash floods are among the most serious natural hazards, with disastrous environmental and human impacts. They are associated with exceptional rain events characterized by short durations, very high intensities, rapid flows, and small spatial extent. Flash floods happen very suddenly and are difficult to forecast. They generally cause damage to agricultural crops, property, and infrastructure, and may even result in the loss of human lives. The city of Gabes (south-eastern Tunisia) has been exposed to numerous damaging floods because of its mild topography, clay soil, high urbanization rate, and erratic rainfall distribution. The risks associated with this situation are expected to increase further in the future because of climate change, which is deemed responsible for the increasing frequency and severity of this natural hazard. Recently, exceptional events have hit Gabes City, causing deaths and major property losses; in particular, a major flooding event hit the region on June 2nd, 2014. It resulted in the stagnation of storm water in the numerous low-lying zones of the study area, thereby endangering human health and causing disastrous environmental impacts. The characterization of flood risk in the Gabes watershed is considered an important step for flood management. The Analytical Hierarchy Process (AHP) method, coupled with Monte Carlo simulation and a geographic information system, was applied to delineate and characterize flood-prone areas. A spatial database was developed based on a geological map, a digital elevation model, land use, and rainfall data in order to evaluate the different factors likely to affect flood analysis. The results obtained were validated against remote sensing data for the zones that showed very high flood hazard during the extreme rainfall event of June 2014 that hit the study basin. 
Moreover, a survey was conducted in different areas of the city in order to understand and explore the different causes of this disaster, its extent, and its consequences.
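The AHP step above derives factor weights from a pairwise comparison matrix and checks them with Saaty's consistency ratio. A minimal sketch using the common geometric-mean approximation of the principal eigenvector; the comparison matrix below is illustrative, not the study's.

```python
from math import prod

def ahp_weights(matrix):
    """Priority weights via the geometric mean of each row (approximate
    principal eigenvector of the pairwise comparison matrix)."""
    n = len(matrix)
    geo = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

def consistency_ratio(matrix, weights):
    """Saaty's CR = CI / RI; CR < 0.1 is conventionally acceptable."""
    n = len(matrix)
    lam = sum(                       # lambda_max from (A w)_i / w_i, averaged
        sum(matrix[i][j] * weights[j] for j in range(n)) / weights[i]
        for i in range(n)
    ) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}[n]
    return ci / ri

# Illustrative 3-factor comparison: rainfall vs slope vs drainage density.
A = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
w = ahp_weights(A)            # ~ [0.571, 0.286, 0.143]
cr = consistency_ratio(A, w)  # ~ 0 for this perfectly consistent matrix
```

In a hazard-mapping workflow, the resulting weights would feed a weighted overlay of the thematic layers (geology, elevation, land use, rainfall), with Monte Carlo perturbation of the pairwise judgments used to gauge the sensitivity of the final map.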

Keywords: analytical hierarchy process, flash floods, Gabes, remote sensing, Tunisia

Procedia PDF Downloads 112