Search results for: mixture rules
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2528

158 The Effect of Political Characteristics on the Budget Balance of Local Governments: A Dynamic System Generalized Method of Moments Data Approach

Authors: Stefanie M. Vanneste, Stijn Goeminne

Abstract:

This paper studies the effect of political characteristics of 308 Flemish municipalities on their budget balance in the period 1995-2011. All local governments experience the same economic and financial setting, yet some governments have high budget balances while others have low ones. The aim of this paper is to explain the differences in municipal budget balances by a number of economic, socio-demographic and political variables. The economic and socio-demographic variables serve as control variables, while the focus of this paper is on the political variables. We test four hypotheses resulting from the literature, namely (i) the partisan hypothesis, which tests whether left-wing governments have lower budget balances, (ii) the fragmentation hypothesis, stating that more fragmented governments have lower budget balances, (iii) the power hypothesis, stating that more powerful governments result in higher budget balances, and (iv) the opportunistic budget cycle hypothesis, which tests whether politicians manipulate the economic situation before elections in order to maximize their reelection prospects and therefore run lower budget balances before elections. The contributions of our paper to the existing literature are multiple. First, we use the whole array of political variables and not just a selection of them. Second, we deal with a homogeneous database with the same budget and election rules, making it easier to focus on the political factors without having to control for differences between political systems. Third, our research extends the existing literature on Flemish municipalities, as this is the first dynamic research on local budget balances. We use a dynamic panel data model. Because two lagged dependent variables are included as explanatory variables, we employ the system GMM (Generalized Method of Moments) estimator.
This estimator is well suited here because the political panel data are rather persistent. Our empirical results show that the ideological position and the power of the coalition are of less importance in explaining the budget balance. The political fragmentation of the government, on the other hand, has a negative and significant effect on the budget balance: the more parties in a coalition, the worse the budget balance is, ceteris paribus. Our results also provide evidence of an opportunistic budget cycle; budget balances are lower in pre-election years relative to other years as incumbents try to increase their reelection prospects. An additional finding is that the incremental effect of the budget balance is very important and should not be ignored, as it is in much empirical research. The coefficients of the lagged dependent variables are always positive and highly significant, indicating that the budget balance is subject to incrementalism. It is not possible to change the entire policy from one year to the next, so actions taken in recent years still affect the current budget balance. Only a relatively small body of research on the budget balance takes this considerable incremental effect into account. Our findings survive several robustness checks.
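The dynamic panel logic above can be sketched with simulated data: first-differencing removes the municipality fixed effect, and a deeper lag of the dependent variable serves as an instrument. The sketch below uses a simplified Anderson-Hsiao style IV estimator (a precursor of system GMM, not the paper's exact estimator) on hypothetical data; all parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, rho, beta = 300, 12, 0.6, 0.5

# Simulate a dynamic panel with fixed effects:
# y_it = rho * y_{i,t-1} + beta * x_it + u_i + e_it
u = rng.normal(size=N)
x = rng.normal(size=(N, T))
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + beta * x[:, t] + u + rng.normal(size=N)

# First-difference to remove u_i, then instrument dy_{t-1} with y_{t-2}
dy  = (y[:, 2:] - y[:, 1:-1]).ravel()   # Δy_it for t >= 2
dy1 = (y[:, 1:-1] - y[:, :-2]).ravel()  # Δy_{i,t-1} (endogenous regressor)
dx  = (x[:, 2:] - x[:, 1:-1]).ravel()   # Δx_it (exogenous)
z   = y[:, :-2].ravel()                 # instrument: y_{i,t-2}

# Just-identified IV (2SLS): coef = (Z'X)^{-1} Z'y
X = np.column_stack([dy1, dx])
Z = np.column_stack([z, dx])
coef = np.linalg.solve(Z.T @ X, Z.T @ dy)
print(coef)  # IV estimates of [rho, beta]
```

With persistent data such as these, system GMM adds further moment conditions in levels, but the instrumenting idea is the same.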

Keywords: budget balance, fragmentation, ideology, incrementalism, municipalities, opportunistic budget cycle, panel data, political characteristics, power, system GMM

Procedia PDF Downloads 299
157 An Introduction to the Radiation-Thrust Based on Alpha Decay and Spontaneous Fission

Authors: Shiyi He, Yan Xia, Xiaoping Ouyang, Liang Chen, Zhongbing Zhang, Jinlu Ruan

Abstract:

As key systems of spacecraft, various propulsion systems have been developing rapidly, including ion thrusters, laser thrusters, solar sails and other micro-thrusters. However, these systems still have shortcomings. The ion thruster requires a high-voltage or magnetic field to accelerate ions, resulting in extra subsystems, high mass and large volume. Laser thrust is currently mostly ground-based and provides pulsed thrust, constrained by station distribution and laser capacity. The thrust direction of a solar sail is limited by its position relative to the Sun, so it is hard to propel toward the Sun or to adjust in shadow. In this paper, a novel nuclear thruster based on alpha decay and spontaneous fission is proposed, and the principle of this radiation-thrust with alpha particles is expounded. Radioactive materials with different released energies, such as 210Po with 5.4 MeV and 238Pu with 5.29 MeV, attached to a metal film provide thrusts in the range of 0.02-5 μN/cm². With this repulsive force, radiation is able to serve as a power source. With the advantages of low system mass, high accuracy and long active time, the radiation thrust is promising in the fields of space debris removal, orbit control of nano-satellite arrays and deep space exploration. For further study, a formula relating the amplitude and direction of the thrust to the released energy and the decay constant is set up. With this initial formula, the alpha-emitting nuclides with half-lives longer than a hundred days are calculated and listed. As alpha particles are emitted continuously, the residual charge in the metal film grows and affects the energy distribution of the emitted alpha particles. With residual charge or an external electromagnetic field, the emission of alpha particles behaves differently and is analyzed in this paper. Furthermore, three more complex situations are discussed.
A radioactive element emitting alpha particles at several energies with different intensities, a mixture of various radioactive elements, and cascaded alpha decay are studied respectively. Combining these cases makes it more efficient and flexible to adjust the thrust amplitude. The propelling model for spontaneous fission is similar to that of alpha decay but has a more complex angular distribution. A new quasi-spherical space propelling system based on the radiation-thrust is introduced, together with the collecting and processing system for excess charge and reaction heat. The energy and spatial angular distribution of the alpha particles emitted per unit area and by a given propelling system have been studied. As alpha particles easily lose energy and are subject to self-absorption, the distribution is not a simple stacking of each nuclide. Based on varying the amplitude and angle of the radiation-thrust, an orbital variation strategy for space debris removal is shown and optimized.
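As a rough illustration of the thrust relation described above, the momentum carried away by emitted alpha particles can be converted into thrust per unit area. The sketch below assumes isotropic emission into the outward hemisphere (net momentum factor 1/4) and a hypothetical 10 mg/cm² 210Po layer, ignoring self-absorption and residual charge; the layer mass is an assumption, not a value from the paper.

```python
import math

MEV = 1.602e-13        # J per MeV
M_ALPHA = 6.645e-27    # alpha particle mass, kg

def thrust_per_cm2(surface_density_g, molar_mass, half_life_s, e_alpha_mev):
    """Thrust per cm^2 of an alpha-emitting film, assuming isotropic
    emission into the outward hemisphere (<cos theta> = 1/2, so a net
    factor of 1/4 on the emitted momentum flux)."""
    n_per_cm2 = surface_density_g / molar_mass * 6.022e23   # atoms per cm^2
    lam = math.log(2) / half_life_s                         # decay constant, 1/s
    activity = lam * n_per_cm2                              # decays per s per cm^2
    p = math.sqrt(2 * M_ALPHA * e_alpha_mev * MEV)          # alpha momentum, kg m/s
    return 0.25 * activity * p                              # N per cm^2

# 210Po: half-life 138.4 d, alpha energy 5.4 MeV; assumed 10 mg/cm^2 layer
f = thrust_per_cm2(0.010, 210.0, 138.4 * 86400, 5.4)
print(f"{f * 1e6:.3f} uN/cm^2")  # ~0.045 uN/cm^2, inside the quoted 0.02-5 range
```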

Keywords: alpha decay, angular distribution, emitting energy, orbital variation, radiation-thruster

Procedia PDF Downloads 209
156 Evaluating the Effectiveness of Mesotherapy and Topical 2% Minoxidil for Androgenic Alopecia in Females, Using Topical 2% Minoxidil as a Common Treatment

Authors: Hamed Delrobai Ghoochan Atigh

Abstract:

Androgenic alopecia (AGA) is a common form of hair loss, impacting approximately 50% of females, which leads to reduced self-esteem and quality of life. It causes progressive follicular miniaturization in genetically predisposed individuals. Mesotherapy (a minimally invasive procedure), topical 2% minoxidil, and oral finasteride have emerged as popular treatment options in the realm of cosmetics. However, the efficacy of mesotherapy compared to other options remains unclear. This study aims to assess the effectiveness of mesotherapy when added to topical 2% minoxidil treatment for female androgenic alopecia. Mesotherapy, also known as intradermotherapy, is a technique that entails administering multiple intradermal injections of a carefully composed mixture of compounds in low doses, applied at various points in close proximity to or directly over the affected areas. This study involves a randomized controlled trial with 100 female participants diagnosed with androgenic alopecia. The subjects were randomly assigned to two groups: Group A used topical 2% minoxidil twice daily and took oral finasteride tablets. For Group B, 10 mesotherapy sessions were added to this treatment. The injections were administered every week in the first month of treatment, every two weeks in the second month, and monthly thereafter for four consecutive months. Response was assessed at baseline, at the 4th session, and finally after 6 months when the treatment was complete. Clinical photographs, a 7-point Likert-scale patient self-evaluation, and a 7-point Likert-scale assessment tool were used to measure the effectiveness of the treatment. During this evaluation, a significant and visible improvement in hair density and thickness was observed. The study demonstrated a significant increase in treatment efficacy in Group B compared to Group A post-treatment, with no adverse effects.
Based on these findings, mesotherapy appears to offer a significant improvement in female AGA over minoxidil alone. Hair loss stopped in Group B after one month, and improvements in hair density and thickness were observed after the third month. The findings from this study provide valuable insights into the efficacy of mesotherapy in treating female androgenic alopecia. Our evaluation offers a detailed assessment of hair growth parameters, enabling a better understanding of the treatments' effectiveness. The potential of this promising technique is significantly enhanced when it is carried out in a medical facility, guided by appropriate indications and skillful execution. An interesting observation in our study is that in areas where the hair had turned grey, the newly regrown hair does not retain its original grey color; instead, it grows in darker. These results contribute to evidence-based decision-making in dermatological practice and offer new insights into the treatment of female pattern hair loss.
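A between-group comparison of the kind reported above could be checked, for example, with a simple permutation test on the 7-point Likert scores. The scores below are hypothetical placeholders for illustration only, not the study's actual data.

```python
import random
import statistics as stats

random.seed(1)

# Hypothetical 7-point Likert self-evaluation scores after 6 months
# (illustrative data, not the study's measurements).
group_a = [3, 4, 3, 5, 4, 3, 4, 4, 3, 5]  # minoxidil + finasteride
group_b = [5, 6, 5, 6, 7, 5, 6, 6, 5, 6]  # + mesotherapy sessions

observed = stats.mean(group_b) - stats.mean(group_a)

# Permutation test: how often does a random relabelling of patients
# produce a mean difference at least as large as the observed one?
pooled = group_a + group_b
n = len(group_a)
trials, extreme = 10000, 0
for _ in range(trials):
    random.shuffle(pooled)
    if stats.mean(pooled[n:]) - stats.mean(pooled[:n]) >= observed:
        extreme += 1

p_value = extreme / trials
print(f"mean difference = {observed:.2f}, p = {p_value:.4f}")
```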

Keywords: androgenic alopecia, female hair loss, mesotherapy, topical 2% minoxidil

Procedia PDF Downloads 103
155 Developing Granular Sludge and Maintaining High Nitrite Accumulation for Anammox to Treat Municipal Wastewater High-efficiently in a Flexible Two-stage Process

Authors: Zhihao Peng, Qiong Zhang, Xiyao Li, Yongzhen Peng

Abstract:

Nowadays, the conventional nitrogen removal process (nitrification and denitrification) is adopted in most wastewater treatment plants, but it brings many problems, such as high aeration energy consumption, extra carbon source dosage and high sludge treatment costs. The emergence of anammox has brought about a great revolution in nitrogen removal technology: only ammonia and nitrite are required to remove nitrogen autotrophically, with no demand for aeration or sludge treatment. However, there are many challenges in applying anammox: difficulty of biomass retention, insufficiency of the nitrite substrate, damage from complex organic matter, etc. Much effort has been put into overcoming these challenges, and it has been rewarded. It is also imperative to establish an innovative process that can settle the above problems simultaneously, since any one of these obstacles can cause the collapse of an anammox system. Therefore, in this study, a two-stage process was established in which a sequencing batch reactor (SBR) and an upflow anaerobic sludge blanket (UASB) were used in the pre-stage and post-stage, respectively. The domestic wastewater first entered the SBR and went through an anaerobic/aerobic/anoxic (An/O/A) mode; the effluent drained at the end of the aerobic phase of the SBR was mixed with domestic wastewater, and the mixture then entered the UASB. Organic and nitrogen removal performance was evaluated over the long term. Throughout the operation, most COD was removed in the pre-stage (COD removal efficiency > 64.1%), including some macromolecular organic matter such as tryptophan, tyrosinase and fulvic acid, which weakened the damage of organic matter to anammox. The An/O/A operating mode of the SBR was also beneficial to achieving and maintaining partial nitrification (PN). Hence, a sufficient and steady nitrite supply was another favorable condition for anammox enhancement.
Besides, the flexible mixing ratio helped to attain a substrate ratio appropriate for anammox (1.32-1.46), which further enhanced the anammox process. In the post-stage, the UASB was used and a gas recirculation strategy was adopted, aiming to achieve granulation through selection pressure. As expected, granules formed rapidly within 38 days, with the size increasing from 153.3 to 354.3 μm. Based on bioactivity and gene measurements, the anammox metabolic activity and abundance rose markedly, by 2.35 mgN/gVSS·h and 5.3×10⁹, respectively. The anammox bacteria were mainly distributed in the large granules (>1000 μm), while the biomass in the flocs (<200 μm) and microgranules (200-500 μm) barely displayed anammox bioactivity. Enhanced anammox promoted advanced autotrophic nitrogen removal, which increased from 71.9% to 93.4%, even when the temperature was only 12.9 °C. Therefore, it is feasible to enhance anammox under the multiple favorable conditions created here; this strategy extends the application of anammox to full-scale mainstream treatment and improves the understanding of anammox culturing conditions.
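The flexible mixing ratio described above can be illustrated with a small mass-balance calculation: choose the fraction of partial-nitritation effluent in the blend so that the NO2⁻:NH4⁺ ratio hits the anammox stoichiometric optimum of about 1.32 (Strous et al. stoichiometry). All concentrations in the sketch are illustrative assumptions, not measured values from this study.

```python
# Mass-balance sketch for the flexible mixing ratio: find the fraction f of
# partial-nitritation (PN) effluent in the blend so that the combined
# NO2-:NH4+ ratio equals the anammox optimum (~1.32 as N).

TARGET = 1.32                    # target NO2-:NH4+ molar ratio (as N)

pn_no2, pn_nh4 = 45.0, 5.0       # PN effluent, mg N/L (mostly nitrite) - assumed
raw_no2, raw_nh4 = 0.0, 50.0     # raw wastewater, mg N/L (mostly ammonium) - assumed

# Solve f*pn_no2 + (1-f)*raw_no2 = TARGET * (f*pn_nh4 + (1-f)*raw_nh4) for f:
f = (TARGET * raw_nh4 - raw_no2) / (pn_no2 - raw_no2 - TARGET * (pn_nh4 - raw_nh4))

blend_no2 = f * pn_no2 + (1 - f) * raw_no2
blend_nh4 = f * pn_nh4 + (1 - f) * raw_nh4
print(f"PN effluent fraction = {f:.2f}, blend NO2-:NH4+ = {blend_no2 / blend_nh4:.2f}")
```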

Keywords: anammox, granules, nitrite accumulation, nitrogen removal efficiency

Procedia PDF Downloads 49
154 Assessment of Commercial Antimicrobials Incorporated into Gelatin Coatings and Applied to Conventional Heat-Shrinking Material for the Prevention of Blown Pack Spoilage in Vacuum Packaged Beef Cuts

Authors: Andrey A. Tyuftin, Rachael Reid, Paula Bourke, Patrick J. Cullen, Seamus Fanning, Paul Whyte, Declan Bolton, Joe P. Kerry

Abstract:

One of the primary spoilage issues associated with vacuum-packed beef products is blown pack spoilage (BPS), caused by psychrophilic spore-forming strains of Clostridium spp. Spores derived from this organism can be activated by heat-shrinking (e.g., 90 °C for 3 seconds). To date, research into the control of Clostridium spp. in beef packaging is limited. Active packaging in the form of antimicrobially active coatings may be one approach to its control. Antimicrobial compounds may be incorporated into packaging films or coated onto the internal surfaces of packaging films using a carrier matrix. Three naturally sourced, commercially available antimicrobials, namely Auranta FV (AFV), a bitter orange extract, from Envirotech Innovative Products Ltd, Ireland; Inbac-MDA (IMDA), a mixture of different organic acids, from Chemital LLC, Spain; and sodium octanoate (SO) from Sigma-Aldrich, UK, were added into gelatin solutions at two concentrations: 2.5 and 3.5 times their minimum inhibitory concentration (MIC) against Clostridium estertheticum (DSMZ 8809). These gelatin solutions were coated onto the internal polyethylene layer of cold-plasma-treated, heat-shrinkable laminates conventionally used for meat packaging applications. Atmospheric plasma was used in order to enhance adhesion between the packaging films and the gelatin coatings. Pouches were formed from these coated packaging materials, and beef cuts which had been inoculated with C. estertheticum were vacuum packaged. Inoculated beef vacuum packaged without the active films served as the control. All pouches were heat-sealed, heat-shrunk at 90 °C for 3 seconds and incubated at 2 °C for 100 days. During this storage period, packs were monitored for the indicators of blown pack spoilage as follows: gas bubbles in the drip and loss of vacuum (onset of BPS); sufficient gas inside the packs to produce pack distension (blown); and tightly stretched, “overblown” or leaking packs.
Following storage and assessment of the indicator data, it was concluded that AFV- and SO-containing packaging inhibited the growth of C. estertheticum, significantly delaying the blown pack spoilage of beef primals. IMDA did not inhibit the growth of C. estertheticum, which may be attributed to differences in release rates and possible reactions with gelatin. Overall, active films were successfully produced following plasma surface treatment, and the experimental data demonstrated clearly that the use of antimicrobially active films can significantly prolong the storage stability of beef primals through the effective control of BPS.

Keywords: active packaging, blown pack spoilage, Clostridium, antimicrobials, edible coatings, food packaging, gelatin films, meat science

Procedia PDF Downloads 265
153 Numerical Investigation of the Influence on Buckling Behaviour Due to Different Launching Bearings

Authors: Nadine Maier, Martin Mensinger, Enea Tallushi

Abstract:

In general, two types of launching bearings are used today in the construction of large steel and steel-concrete composite bridges: sliding rockers and systems with hydraulic bearings. The advantages and disadvantages of the respective systems are under discussion. During incremental launching, the center of the webs of the superstructure is not perfectly in line with the center of the launching bearings due to unavoidable tolerances, which may influence the buckling behavior of the web plates. These imperfections are not considered in the current design against plate buckling according to DIN EN 1993-1-5. It is therefore investigated whether the design rules have to take into account eccentricities occurring during incremental launching, and whether this depends on the respective launching bearing. To this end, large-scale buckling tests were carried out at the Technical University of Munich on longitudinally stiffened plates under biaxial stresses, with the two different types of launching bearings and eccentric load introduction. A numerical model was validated against the experimental results. Currently, we are evaluating different parameters for both types of launching bearings, such as load introduction length, load eccentricity, the distance between longitudinal stiffeners, the position of the rotation point of the spherical bearing used within the hydraulic bearings, web and flange thickness, and imperfections. The imperfection depends on the geometry of the buckling field and on whether local or global buckling occurs. This, as well as the mesh size, is taken into account in the numerical calculations of the parametric study. As the geometric imperfection, the scaled first buckling mode is applied. A bilinear material curve is used, and a GMNIA (geometrically and materially nonlinear analysis with imperfections) is performed to determine the load capacity.
Stresses and displacements are evaluated in different directions, and specific stress ratios are determined at the critical points of the plate at the last converged load step. To evaluate the introduction of the transverse load, the transverse stress concentration is plotted along a defined longitudinal section of the web. In the same way, the rotation of the flange is evaluated in order to show the influence of the different degrees of freedom of the launching bearings under eccentric load introduction and to allow an assessment for the practically relevant case. The input and output are automated and depend on the given parameters, so we are able to adapt our model to different geometric dimensions and load conditions. The programming is done with the help of APDL and a Python code, which allows us to evaluate and compare more parameters faster while avoiding input and output errors. It is therefore possible to evaluate a large spectrum of parameters in a short time, enabling a practical assessment of their influence on buckling behavior. This paper presents the results of the tests as well as the validation and parameterization of the numerical model, and shows the first influences on the buckling behavior under eccentric and multi-axial load introduction.
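An automated parametric study of the kind described above can be sketched as a simple Python driver that sweeps parameter combinations and collects one result per run. The `run_gmnia` function is a placeholder standing in for the actual APDL/ANSYS call; the parameter names, grid values and toy capacity formula are illustrative assumptions, not the study's model.

```python
import itertools

# Hypothetical parameter grid for the sweep (names and values are assumptions).
grid = {
    "load_eccentricity_mm": [0.0, 5.0, 10.0],
    "stiffener_spacing_mm": [500.0, 750.0],
    "web_thickness_mm":     [10.0, 12.0],
}

def run_gmnia(params):
    """Placeholder: in the real workflow this would write an APDL input
    file, launch the solver, and parse the converged limit load from the
    output. A toy surrogate is used here so the sketch runs: capacity
    drops with eccentricity and rises with web thickness (illustrative only)."""
    return (100.0 * params["web_thickness_mm"]
            - 8.0 * params["load_eccentricity_mm"]
            - 0.1 * params["stiffener_spacing_mm"])

results = []
for combo in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), combo))
    results.append((params, run_gmnia(params)))

best = max(results, key=lambda r: r[1])
print(f"{len(results)} runs; max limit load {best[1]:.1f} at {best[0]}")
```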

Keywords: buckling behavior, eccentric load introduction, incremental launching, large scale buckling tests, multi axial stress states, parametric numerical modelling

Procedia PDF Downloads 108
152 On the Road towards Effective Administrative Justice in Macedonia, Albania and Kosovo: Common Challenges and Problems

Authors: Arlinda Memetaj

Abstract:

A sound system of administrative justice represents a vital element of democratic governance. The proper control of public administration consists not only of a sound civil service framework and legislative oversight, but also of the empowerment of the public and the courts to hold public officials accountable for their decision-making, through the application of fair administrative procedural rules and the use of appropriate administrative appeals processes and judicial review. The establishment of both an effective public administration and an administrative justice system has long been among the most ‘important and urgent’ strategic objectives of almost every country in the Balkans region, including Macedonia, Albania and Kosovo. Closely related to this is their common strategic goal of membership in the European Union, which requires fulfilling many criteria and standards incorporated in the EU acquis communautaire. The latter is presently pursued within the framework of the Stabilization and Association Agreement that each of these countries has concluded with the EU. To the above aims, each of the three countries has so far adopted a large number of legislative and strategic documents related to all aspects of its administrative justice system. ‘Changes’ and ‘reforms’ have thus been the most frequently used terms in this field in each of these countries. The three countries have already established their own national administrative judiciaries, while continually amending their laws on general administrative procedure and thereby introducing considerable innovations.
National administrative courts are expected to play a crucially important role within the broader judicial reforms of these countries; they are designed to check the legality of decisions of the state administration with the aim of guaranteeing effective protection of the human rights and legitimate interests of private persons through a regular, conforming, fast and reasonable judicial administrative process. Further improvements in this field are presently an integral part of all the relevant national strategic documents, including those on judiciary reform and public administration reform adopted by each of the three countries; those strategic documents are designed, among other things, to provide effective protection of citizens' right to administrative justice. On this basis, the paper finally aims at highlighting selected common challenges and problems of the three countries on their European road, while claiming (among other things) that the current status quo in each of them can be overcome only if there is proper implementation of administrative court decisions and a far stricter international monitoring process. A new approach and strong political commitment from the highest political leadership are thus absolutely needed to ensure the principles of transparency, accountability and merit in public administration. The main methods used in this paper are analytical and comparative, in keeping with the character of the paper itself.

Keywords: administrative courts, administrative justice, administrative procedure, benefit, effective administrative justice, human rights, implementation, monitoring, reform

Procedia PDF Downloads 154
151 Methodological Approach to the Elaboration and Implementation of the Spatial-Urban Plan for the Special Purpose Area: Case-Study of Infrastructure Corridor of Highway E-80, Section Nis-Merdare, Serbia

Authors: Nebojsa Stefanovic, Sasa Milijic, Natasa Danilovic Hristic

Abstract:

The spatial plan of a special purpose area constitutes a basic tool in the planning of a highway infrastructure corridor. The aim of the plan is to define the planning basis and provide spatial conditions for the construction and operation of the highway, as well as for developing other infrastructure systems in the corridor. This paper presents the methodology and approach used in preparing the Spatial Plan for the special purpose area of the infrastructure corridor of highway E-80, Section Niš-Merdare in Serbia. The applied methodological approach is based on the combined application of integrative and participatory methods in the decision-making process on the sustainable development of the highway corridor. It was found that the key problem in planning and managing an infrastructure corridor is the coordination of spatial and urban planning, strategic environmental assessment, and sectoral traffic planning and design. Through the development of the plan, special attention was focused on increasing the accessibility of the local and regional surroundings, reducing adverse impacts on the development of settlements and the economy, protecting natural resources and natural and cultural heritage, and developing other infrastructure systems in the highway corridor. As a result of the applied methodology, this paper analyzes the basic features of the plan, such as coverage, concept, protected zones, service facilities and objects, and the rules of development and construction. Special emphasis is placed on the methodology and results of the Strategic Environmental Assessment of the Spatial Plan and on the importance of protection measures, particularly air and noise protection measures.
For evaluation in the Strategic Environmental Assessment, a multicriteria expert evaluation (a semi-quantitative method) of planned solutions was used in relation to the set of goals and relevant indicators, based on the basic set of sustainable development indicators. The evaluation of planned solutions encompassed the significance and size, spatial conditions and probability of the impact of planned solutions on the environment, as well as the defined goals of the strategic assessment. The framework for the implementation of the Spatial Plan is presented, providing for the simultaneous elaboration of planning solutions at two levels: the strategic level of the spatial plan and the detailed urban plan level. The relationship of the Spatial Plan to other planning documents applicable to the planning area is also analyzed. The effects of this methodological approach lie in enabling integrated planning of the sustainable development of the highway infrastructure corridor and its surrounding area, through the coordination of spatial, urban and sectoral traffic planning and design, as well as the participation of all key actors in the adoption and implementation of planning decisions. In conclusion, the paper points to directions for further research, particularly in terms of harmonizing the methodology of planning documentation with the preparation of technical design documentation.
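A semi-quantitative multicriteria evaluation of the kind described above can be sketched as a weighted scoring of planned solutions against the strategic-assessment goals. The goals, weights, variant names and scores below are illustrative assumptions, not the plan's actual values.

```python
# Each planned solution is scored against the assessment goals on a
# -3..+3 scale; scores are combined with indicator weights and the
# solutions are ranked (all values illustrative).

goals = ["air quality", "noise", "accessibility", "settlement impact", "heritage"]
weights = [0.25, 0.20, 0.25, 0.15, 0.15]   # relative importance, sum to 1

solutions = {
    "corridor variant A": [-2, -2, +3, -1, 0],
    "corridor variant B": [-1, -1, +2, -2, -1],
}

def weighted_score(scores):
    return sum(w * s for w, s in zip(weights, scores))

ranked = sorted(solutions.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {weighted_score(scores):+.2f}")
```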

Keywords: corridor, environment, highway, impact, methodology, spatial plan, urban

Procedia PDF Downloads 212
150 Improving Online Learning Engagement through a Kid-Teach-Kid Approach for High School Students during the Pandemic

Authors: Alexander Huang

Abstract:

Online learning sessions have become an indispensable complement to in-classroom learning sessions in the past two years due to the emergence of Covid-19. Due to social distancing requirements, many courses and interaction-intensive sessions, ranging from music classes to debate camps, have moved online. However, online learning poses a significant challenge for engaging students effectively during learning sessions. To address this problem, Project PWR, a non-profit organization formed by high school students, developed an online kid-teach-kid learning environment to boost students' learning interest and further improve their engagement during online learning. Fundamentally, the kid-teach-kid learning model creates an affinity space in which learning groups form, where like-minded peers can learn and teach their interests. The teacher role also helps a kid identify the instructional task and set the rules and procedures for the activities. The approach also structures initial discussions to reveal a range of ideas, similar experiences, thinking processes and language use, with a lower student-to-teacher ratio, which enriches the online learning experience in upcoming lessons. In such a manner, a kid can practice both the teacher role and the student role, accumulating experience in conveying ideas and questions over online sessions more efficiently and effectively. In this research work, we conducted two case studies involving a 3D-Design course and a Speech and Debate course taught by high-school kids. Through Project PWR, a kid first needs to design the course syllabus based on a provided template to become a student-teacher. The Project PWR academic committee then evaluates the syllabus and offers comments and suggestions for changes. Upon approval of a syllabus, an experienced volunteer adult mentor is assigned to interview the student-teacher and monitor the lectures' progress.
Student-teachers construct a comprehensive final evaluation for their students, which they grade at the end of the course. Moreover, each course conducts midterm and final evaluations through a set of survey replies provided by students to assess the student-teacher's performance. The uniqueness of Project PWR lies in its established kid-teach-kid affinity space. Our research results showed that Project PWR can create a closed-loop system in which a student can help a teacher improve and vice versa, thus improving overall student engagement. As a result, Project PWR's approach can train teachers and students to become better online learners and give them a solid understanding of what to prepare for and what to expect from future online classes. The kid-teach-kid learning model can significantly improve students' engagement in online courses and effectively supplement the traditional teacher-centric model that the Covid-19 pandemic has impacted substantially. Project PWR enables kids to share their interests and bond with one another, making the online learning environment effective and promoting positive and effective personal online one-on-one interactions.

Keywords: kid-teach-kid, affinity space, online learning, engagement, student-teacher

Procedia PDF Downloads 143
149 Intended Use of Genetically Modified Organisms, Advantages and Disadvantages

Authors: Pakize Ozlem Kurt Polat

Abstract:

A GMO (genetically modified organism) is the result of a laboratory process in which genes from the DNA of one species are extracted and artificially inserted into the genes of an unrelated plant or animal. This technology includes nucleic acid hybridization, recombinant DNA, RNA, PCR, cell culture and gene cloning techniques. Studies are divided into three groups according to the properties transferred to the transgenic plant: about 59% concern herbicide resistance, 28% resistance to insects and viruses, and 13% quality characteristics. Not every transgenic crop is in commercial production; the main commercial plants are soybean, maize, canola, and cotton. The steadily growing uses of GMOs can be listed as follows: use in the health area (organ transplantation, gene therapy, vaccines and drugs); use in the industrial area (production of vitamins, monoclonal antibodies, vaccines, anti-cancer compounds, antioxidants, plastics, fibers, polyethers, human blood proteins and carotenoids, as well as emulsifiers, sweeteners, enzymes and food preservatives used as flavor enhancers or color changers); and use in agriculture (herbicide resistance; resistance to insects; resistance to viral, bacterial and fungal diseases; extended shelf life; improved quality; resistance to drought, salinity and extreme conditions such as frost; and improved nutritional value and quality). We explain all these methods step by step in this research. GMOs have advantages and disadvantages, which we explain clearly in the full text; on this topic, researchers worldwide are divided into two camps. Some researchers think that GMOs have many disadvantages and should not be used, while others hold the opposite view. When looking at countries' laws on GMOs, the biosafety law of each country and union must be known.
For these biosafety reasons, and to minimize the problems caused by transgenic plants, 130 countries, including Turkey, signed the United Nations Biosafety Protocol on 24 May 2000. This protocol, known as the Cartagena Biosafety Protocol, entered into force on September 11, 2003. It addresses the risks that GMOs in general use may pose to human health and biodiversity, and covers the prevention, transit, handling, and use of all GMOs involved in transboundary movement. Within this framework, the US GMO regulations, the European Union GMO regulations, and the Turkish GMO regulations must be considered; these three regulatory regimes have different applications and rules. The world population is growing day by day while agricultural land is shrinking; to feed humans and animals, agricultural yield and quality must therefore be improved. Scientists are trying to solve this problem, and one solution is molecular biotechnology, which includes GMO methods. Before deciding to support or oppose GMOs, one should know the GMO protocols and their effects.

Keywords: biotechnology, GMO (genetically modified organism), molecular marker

Procedia PDF Downloads 234
148 Removal of Heavy Metals by Ultrafiltration Assisted with Chitosan or Carboxy-Methyl Cellulose

Authors: Boukary Lam, Sebastien Deon, Patrick Fievet, Nadia Crini, Gregorio Crini

Abstract:

Treatment of heavy metal-contaminated industrial wastewater has become a major challenge over the last decades. Conventional processes for the treatment of metal-containing effluents do not always simultaneously satisfy both legislative and economic criteria. In this context, coupling of processes can be a promising alternative to the conventional approaches used by industry. The polymer-assisted ultrafiltration (PAUF) process is one of these coupling processes. Its principle is based on a reaction step (e.g., complexation) between metal ions and a polymer, followed by a step in which the formed species are rejected by a UF membrane. Unlike free ions, which can cross the UF membrane due to their small size, the polymer/ion species, whose size is larger than the pore size, are rejected. The PAUF process was investigated in depth herein for the removal of nickel ions by adding chitosan and carboxymethyl cellulose (CMC). Experiments were conducted with synthetic solutions containing 1 to 100 ppm of nickel ions, with or without NaCl (0.05 to 0.2 M), and with an industrial discharge water (containing several metal ions) with and without polymer. Chitosan with a molecular weight of 1.8×10⁵ g mol⁻¹ and a degree of acetylation close to 15% was used. CMC with a degree of substitution of 0.7 and a molecular weight of 9×10⁵ g mol⁻¹ was employed. Filtration experiments were performed under cross-flow conditions with a filtration cell equipped with a polyamide thin-film composite flat-sheet membrane (3.5 kDa). Without the polymer addition step, it was found that nickel rejection decreases from 80 to 0% with increasing metal ion concentration and salt concentration. This behavior agrees qualitatively with the Donnan exclusion principle: increasing the electrolyte concentration screens the electrostatic interaction between the ions and the membrane fixed charge, which decreases their rejection.
It was shown that addition of a sufficient amount of polymer (greater than 10⁻² M of monomer unit) can offset this decrease and allow good metal removal. However, the permeation flux was found to be somewhat reduced due to the increase in osmotic pressure and viscosity. It was also highlighted that an increase in pH (from 3 to 9) has a strong influence on removal performance: the higher the pH, the better the removal performance. The two polymers showed similar performance enhancement at natural pH. However, chitosan proved more efficient under slightly basic conditions (above its pKa), whereas CMC demonstrated very weak rejection performance when the pH is below its pKa. In terms of metal rejection, chitosan is thus probably the better option under basic or strongly acidic (pH < 4) conditions. Nevertheless, CMC should probably be preferred to chitosan under near-neutral conditions (5 < pH < 8), since its impact on the permeation flux is less significant. Finally, ultrafiltration of an industrial discharge water showed that the increase in metal ion rejection induced by the polymer addition is very low, due to competition between the various ions present in the complex mixture.
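The rejection figures reported above are conventionally computed from the feed and permeate concentrations; a minimal sketch of that calculation (function name and sample values are illustrative, not taken from the study):

```python
def observed_rejection(c_feed, c_permeate):
    """Observed rejection R (%) = (1 - Cp/Cf) * 100.

    c_feed: solute concentration in the feed (e.g., ppm of Ni ions)
    c_permeate: solute concentration in the permeate, same units
    """
    if c_feed <= 0:
        raise ValueError("feed concentration must be positive")
    return (1.0 - c_permeate / c_feed) * 100.0

# Illustrative values only: a 10 ppm feed yielding a 2 ppm permeate
# corresponds to the 80% rejection quoted for dilute, salt-free solutions,
# while identical feed and permeate concentrations give 0% rejection.
print(observed_rejection(10.0, 2.0))
print(observed_rejection(10.0, 10.0))
```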

Keywords: carboxymethyl cellulose, chitosan, heavy metals, nickel ion, polymer-assisted ultrafiltration

Procedia PDF Downloads 163
147 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool for showing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for researchers to develop the best method for detecting normal signals from abnormal ones. The data come from both genders, the recording time varies from several seconds to several minutes, and all data are labeled normal or abnormal. Due to the limited duration of the ECG recordings and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating different types of heart failure from one another, is of interest to experts. In the preprocessing stage, after noise cancelation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features, were used to classify the normal signals from the abnormal ones.
To evaluate the efficiency of the classifiers proposed in this paper, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP and SVM classifiers was 0.893 and 0.947, respectively. Moreover, the results of the proposed algorithm indicated that greater use of nonlinear characteristics in classifying normal and patient signals yielded better performance. Today, research aims to quantitatively analyze the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that the magnitude of these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has driven the development of research in this field. Given that the ECG signal contains important information and is one of the tools commonly used by physicians to diagnose heart disease, yet some of this information remains hidden from the physician's view, the intelligent system proposed in this paper can help physicians diagnose healthy and diseased individuals with greater speed and accuracy, and can serve as a complementary system in treatment centers.
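The pipeline described (R-peak detection, R-R interval extraction, HRV feature computation, classification) can be illustrated at the feature-extraction step. The sketch below computes two standard time-domain HRV statistics from an R-R series; the interval values are synthetic, and SDNN/RMSSD are only a small subset of the statistical features the paper combines with nonlinear ones:

```python
import math

def hrv_time_features(rr_ms):
    """Standard time-domain HRV features from a list of R-R intervals (ms)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    # SDNN: sample standard deviation of all R-R (NN) intervals
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    # RMSSD: root mean square of successive differences between intervals
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd}

# Synthetic R-R series (ms); real input would come from Pan-Tompkins R-peak times.
features = hrv_time_features([800, 810, 790, 805])
print(features)
```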

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 264
146 Polymer Matrices Based on Natural Compounds: Synthesis and Characterization

Authors: Sonia Kudlacik-Kramarczyk, Anna Drabczyk, Dagmara Malina, Bozena Tyliszczak, Agnieszka Sobczak-Kupiec

Abstract:

Introduction: In the preparation of polymer materials, compounds of natural origin are currently gaining more and more interest. This is particularly noticeable in the synthesis of materials considered for biomedical use, where the selected material has to meet many requirements: it should be non-toxic, biodegradable, and biocompatible. Therefore, special attention is directed to substances such as polysaccharides, proteins, or the basic building blocks of proteins, i.e., amino acids such as cysteine or histidine. These compounds may be crosslinked with other reagents, which leads to the preparation of polymer matrices. On the other hand, the previously mentioned requirements may also be met by polymers obtained by biosynthesis, e.g., polyhydroxybutyrate. This polymer belongs to the group of aliphatic polyesters and is synthesized by microorganisms (selected strains of bacteria) under specific conditions. It is possible to modify matrices based on a given polymer with substances of various origins. Such a modification may change their properties and/or provide the material with new features desirable from the viewpoint of a specific application. The described materials are synthesized using UV radiation. The photopolymerization process is fast and waste-free and yields final products with favorable properties. Methodology: Polymer matrices were prepared by means of photopolymerization. The first step involved preparing solutions of the particular reagents and mixing them in the appropriate ratio. Next, a crosslinking agent and a photoinitiator were added to the reaction mixture, and the whole was poured into a Petri dish and treated with UV radiation. After the synthesis, the polymer samples were dried at room temperature and subjected to numerous analyses aimed at determining their physicochemical properties.
Firstly, the sorption properties of the obtained polymer matrices were determined. Next, the mechanical properties, i.e., tensile strength, were characterized; the ability of all prepared polymer matrices to deform under applied stress was checked. Such a property is important from the viewpoint of applying the analyzed materials, e.g., as wound dressings. Wound dressings have to be elastic because, depending on the location of the wound and its mobility, the dressing has to adhere properly to the wound. Furthermore, considering the use of the materials for biomedical purposes, it is essential to determine their behavior in environments simulating those occurring in the human body; therefore, incubation studies using selected liquids have also been conducted. Conclusions: As a result of the photopolymerization process, polymer matrices based on natural compounds have been prepared. These exhibited favorable mechanical properties and swelling ability. Moreover, biocompatibility in relation to simulated body fluids has been demonstrated. It can therefore be concluded that the analyzed polymer matrices constitute interesting materials that may be considered for biomedical use and may be subjected to further, more advanced analyses using specific cell lines.

Keywords: photopolymerization, polymer matrices, simulated body fluids, swelling properties

Procedia PDF Downloads 128
145 Understanding the Lithiation/Delithiation Mechanism of Si₁₋ₓGeₓ Alloys

Authors: Laura C. Loaiza, Elodie Salager, Nicolas Louvain, Athmane Boulaoued, Antonella Iadecola, Patrik Johansson, Lorenzo Stievano, Vincent Seznec, Laure Monconduit

Abstract:

Lithium-ion batteries (LIBs) have an important place among energy storage devices due to their high capacity and good cyclability. However, advancements in portable and transportation applications have extended the research towards new horizons, and today development is hampered, e.g., by the capacity of the electrodes employed. Silicon and germanium are among the modern anode materials under consideration, as they can undergo alloying reactions with lithium while delivering high capacities. It has been demonstrated that silicon in its highest lithiated state can deliver up to ten times more capacity than graphite (372 mAh/g): 4200 mAh/g for Li₂₂Si₅ and 3579 mAh/g for Li₁₅Si₄. Germanium, on the other hand, presents a capacity of 1384 mAh/g for Li₁₅Ge₄, and better electronic conductivity and Li-ion diffusivity than Si. Nonetheless, the commercialization potential of Ge is limited by its cost. The synergetic effect of Si₁₋ₓGeₓ alloys has been proven: the capacity is increased compared to Ge-rich electrodes, and the capacity retention is improved compared to Si-rich electrodes, although the exact performance of this type of electrode will depend on factors like specific capacity, C-rate, cost, etc. There are several reports on various formulations of Si₁₋ₓGeₓ alloys with promising LIB anode performance, with most work performed on complex nanostructures resulting from synthesis efforts involving high cost. In the present work, we studied the electrochemical mechanism of the Si₀.₅Ge₀.₅ alloy in a realistic micron-sized electrode formulation using carboxymethyl cellulose (CMC) as the binder. A combination of a large set of in situ and operando techniques was employed to investigate the structural evolution of Si₀.₅Ge₀.₅ during the lithiation and delithiation processes: powder X-ray diffraction (XRD), X-ray absorption spectroscopy (XAS), Raman spectroscopy, and ⁷Li solid-state nuclear magnetic resonance (NMR) spectroscopy.
The results present a complete view of the structural modifications induced by the lithiation/delithiation processes. Amorphization of Si₀.₅Ge₀.₅ was observed at the beginning of discharge. Further lithiation induced the formation of a-Liₓ(Si/Ge) intermediates and the crystallization of Li₁₅(Si₀.₅Ge₀.₅)₄ at the end of the discharge. At very low voltages, a reversible process of overlithiation and formation of Li₁₅₊δ(Si₀.₅Ge₀.₅)₄ was identified and related to a structural evolution of Li₁₅(Si₀.₅Ge₀.₅)₄. Upon charge, the c-Li₁₅(Si₀.₅Ge₀.₅)₄ was transformed into a-Liₓ(Si/Ge) intermediates. At the end of the process, an amorphous phase assigned to a-SiₓGeᵧ was recovered. It was thereby demonstrated that Si and Ge are collectively active throughout cycling: upon discharge with the formation of a ternary Li₁₅(Si₀.₅Ge₀.₅)₄ phase (with an overlithiation step), and upon charge with the rebuilding of the a-Si-Ge phase. This process is undoubtedly behind the enhanced performance of Si₀.₅Ge₀.₅ compared to a physical mixture of Si and Ge.
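The capacities quoted above follow from the standard theoretical gravimetric capacity relation Q = n·F / (3.6·M), where n is the number of Li atoms stored per host atom, F is the Faraday constant, and M is the molar mass of the host in g/mol (the 3.6 converts C/g to mAh/g). A quick sanity check against the figures in the abstract:

```python
F = 96485.332  # Faraday constant, C/mol

def theoretical_capacity(li_per_host, molar_mass):
    """Theoretical gravimetric capacity in mAh/g for an alloying host.

    li_per_host: Li atoms stored per host atom (e.g., 15/4 for Li15Si4)
    molar_mass: molar mass of the host atom in g/mol
    """
    return li_per_host * F / (3.6 * molar_mass)

print(round(theoretical_capacity(15 / 4, 28.085)))  # Li15Si4: 3579 mAh/g
print(round(theoretical_capacity(22 / 5, 28.085)))  # Li22Si5: ~4199 mAh/g (quoted as 4200)
print(round(theoretical_capacity(15 / 4, 72.630)))  # Li15Ge4: 1384 mAh/g
print(round(theoretical_capacity(1 / 6, 12.011)))   # graphite (LiC6): 372 mAh/g
```

The computed values reproduce the 3579, ~4200, 1384, and 372 mAh/g figures cited in the text.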

Keywords: lithium ion battery, silicon germanium anode, in situ characterization, X-ray diffraction

Procedia PDF Downloads 286
144 Cement Matrix Obtained with Recycled Aggregates and Micro/Nanosilica Admixtures

Authors: C. Mazilu, D. P. Georgescu, A. Apostu, R. Deju

Abstract:

Cement mortars and concretes are among the most widely used construction materials in the world, with global cement production expected to grow to approximately 5 billion tons by 2030. But cement is an energy-intensive material, the cement industry being responsible for about 7% of the world's CO2 emissions. Also, natural aggregates represent non-renewable, exhaustible resources, which must be used efficiently. One way to reduce the negative impact on the environment is the use of additional hydraulically active materials as a partial substitute for cement in mortars and concretes, and/or the use of recycled concrete aggregates (RCA) for the recovery of construction waste, in line with EU Directive 2018/851. One of the most effective active hydraulic admixtures is microsilica and, more recently, with technological development at the nanometric scale, nanosilica. Studies carried out in recent years have shown that the introduction of SiO2 nanoparticles into the cement matrix improves its properties, even compared to microsilica. This is due to the very small size of the nanosilica particles (<100 nm) and their very large specific surface, which helps to accelerate cement hydration and acts as a nucleating agent to generate even more calcium hydrosilicate, which densifies and compacts the structure. Cementitious compositions containing recycled concrete aggregates (RCA) present, in general, inferior properties compared to those obtained with natural aggregates. Depending on the degree of replacement of natural aggregate, the workability of mortars and concretes with RCA decreases, mechanical strength decreases, and drying shrinkage increases; all of this is determined, in particular, by the old mortar attached to the original aggregate in the RCA, which makes its porosity high and causes the mixture of components to require more water for preparation.
The present study aims to use micro- and nanosilica to increase the performance of mortars and concretes obtained with RCA. The research focused on two types of cementitious systems: a special mortar composition used for encapsulating low-level radioactive waste (LLW), and a structural concrete composition, class C30/37, with the combination of exposure classes XC4+XF1 and consistency (slump) class S4. The mortar was made with 100% recycled aggregate, 0-5 mm sort, and in the case of the concrete, 30% recycled aggregate was used for the 4-8 and 8-16 mm sorts, according to EN 206, Annex E. The recycled aggregate was obtained from a concrete made specially for this study, which after 28 days was crushed with a Retsch jaw crusher and further separated by sieving into granulometric sorts. The partial replacement of cement was done progressively, in the case of the mortar composition, with microsilica (3, 6, 9, 12, 15 wt.%), nanosilica (0.75, 1.5, 2.25 wt.%), and mixtures of micro- and nanosilica. The optimal silica combination, in terms of mechanical strength, was later also used for the concrete composition. For the chosen cementitious compositions, the influence of micro- and/or nanosilica on the properties in the fresh state (workability, rheological characteristics) and hardened state (mechanical strength, water absorption, freeze-thaw resistance, etc.) is highlighted.
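The progressive replacement scheme described above is simple mass arithmetic over the binder; a minimal sketch of how one batch might be split (batch size and function name are illustrative assumptions, not the study's actual mix design):

```python
def binder_masses(total_binder_g, microsilica_pct=0.0, nanosilica_pct=0.0):
    """Split a binder batch into cement and micro/nanosilica by wt.% replacement."""
    repl = microsilica_pct + nanosilica_pct
    if not 0 <= repl < 100:
        raise ValueError("total replacement must be in [0, 100)")
    return {
        "cement_g": total_binder_g * (100 - repl) / 100,
        "microsilica_g": total_binder_g * microsilica_pct / 100,
        "nanosilica_g": total_binder_g * nanosilica_pct / 100,
    }

# Illustrative batch: 1 kg of binder with 9 wt.% microsilica and 1.5 wt.% nanosilica,
# i.e. one point in the 3-15% / 0.75-2.25% replacement grids listed above.
print(binder_masses(1000, microsilica_pct=9, nanosilica_pct=1.5))
```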

Keywords: cement, recycled concrete aggregates, micro/nanosilica, durability

Procedia PDF Downloads 68
143 Potency of Some Dietary Acidifiers on Productive Performance and Controlling Salmonella enteritidis in Broilers

Authors: Mohamed M. Zaki, Maha M. Hady

Abstract:

Salmonella spp. have been categorized among the world's biggest threats to human health, and poultry products are the most frequently incriminated sources. In Egypt, it was found that S. enteritidis and S. typhimurium are the most prevalent serovars in poultry farms. It is recommended to eliminate Salmonella from the living bird by combating Salmonella contamination in feed in order to establish a healthy gut. Feed acidifiers are a group of feed additives containing low-molecular-weight organic acids and/or their salts, which act as performance promoters by lowering gut pH, optimizing digestion, and inhibiting bacterial growth. Organic acids included in pure form, although effective in feed, are difficult to handle in feed mills, as they are corrosive and incur greater losses during the pelleting process. The current study aimed to evaluate the impact of incorporating sodium diformate (SDF) and a commercial acidifier, CA (a mixture of butyric and propionic acids and their ammonium salts), at 0.4% dietary levels on broiler performance and the control of S. enteritidis infection. Two hundred and seventy unsexed Cobb chickens were allotted to one of three treatments (90/group): the control (no acidifier; C- and C+), the 0.4% SDF (SDF- and SDF+), and the 0.4% CA (CA- and CA+) dietary levels, for 35 days. Before allocation of the groups, ten extra birds and a diet sample were bacteriologically examined to confirm the absence of Salmonella contamination. The birds were raised in deep-litter separated pens and had free access to feed and water at all times. The experimentally formulated diets were kept at 4°C. After 24 h of access to the different dietary treatments, all the birds in the positive groups (n=15/replicate) were inoculated intra-crop with 0.2 ml of a 24 h broth culture of S. enteritidis containing 1×10⁷ organisms, while the negative-treated groups were inoculated with the same amount of uninoculated broth; a second inoculation was done at 22 d of age.
Cloacal swabs were collected individually from all birds 2 h pre-inoculation to confirm the absence of Salmonella, and then 1, 3, 5, 7, and 21 days post-inoculation to recover Salmonella. Performance parameters (body weight gain and feed efficiency) were calculated. Mortalities were recorded, and re-isolation of Salmonella was performed to confirm that it was the inoculated strain. The results revealed that dietary acidification with sodium diformate significantly improved broiler performance and tended to produce heavier birds compared to the negative control and CA groups. Moreover, the dietary inclusion of both acidifiers at a level of 0.4% eliminated mortalities completely at the relevant inoculation time. Regarding the shedding of S. enteritidis in the positive groups, the SDF treatment resulted in significant (p<0.05) cessation of shedding at 3 days post-inoculation, compared to 7 days post-inoculation for the CA group. In conclusion, sodium diformate at a 0.4% dietary level in broiler diets has a valuable effect not only on broiler performance but also in eliminating S. enteritidis from feed, the main source of Salmonella contamination in poultry farms.

Keywords: acidifier, broilers, Salmonella spp., sodium diformate

Procedia PDF Downloads 288
142 Improving Binding Selectivity in Molecularly Imprinted Polymers from Templates of Higher Biomolecular Weight: An Application in Cancer Targeting and Drug Delivery

Authors: Ben Otange, Wolfgang Parak, Florian Schulz, Michael Alexander Rubhausen

Abstract:

The feasibility of extending the molecular imprinting technique to complex biomolecules is demonstrated in this research. This technique is promising in diverse applications such as drug delivery, disease diagnosis, catalysis, and impurity detection, as well as the treatment of various complications. While molecularly imprinted polymers (MIPs) remain robust for synthesizing materials with remarkable binding sites that have high affinities for specific molecules of interest, extending their use to complex biomolecules has remained elusive. This work reports the successful synthesis of MIPs from complex proteins: BSA, transferrin, and MUC1. We show in this research that, despite the heterogeneous binding sites and higher conformational flexibility of the chosen proteins, relying on their respective epitopes and motifs rather than the whole template produces highly sensitive and selective MIPs for specific molecular binding. Introduction: Proteins are vital in most biological processes, ranging from cell structure and structural integrity to complex functions such as transport and immunity in biological systems. Unlike other imprinting templates, proteins have heterogeneous binding sites in their complex long-chain structure, which makes their imprinting fraught with challenges. In addressing this challenge, our attention is directed toward targeted delivery, which uses molecular imprinting on the particle surface so that the particles may recognize overexpressed proteins on target cells. Our goal is thus to make nanoparticle surfaces that specifically bind to the target cells. Results and Discussion: Using epitopes of the BSA and MUC1 proteins, and motifs with conserved receptors of transferrin, as the respective templates for the MIPs, significant improvement in MIP sensitivity to the binding of complex protein templates was noted.
Through fluorescence correlation spectroscopy (FCS) measurements of the size of the protein corona after incubation of the synthesized nanoparticles with proteins, we noted a high affinity of the MIPs for binding their respective complex proteins. In addition, quantitative analysis of the hard corona using SDS-PAGE showed that only the specific protein was strongly bound to the respective MIPs when incubated with similar concentrations of the protein mixture. Conclusion: Our findings have shown that the merits of MIPs can be extended to complex molecules of higher biomolecular mass. As such, the unique merits of the technique, including high sensitivity and selectivity, relative ease of synthesis, production of materials with higher physical robustness, and higher stability, can be extended to templates that were previously not suitable candidates despite their abundance and roles within the body.

Keywords: molecularly imprinted polymers, specific binding, drug delivery, high biomolecular mass-templates

Procedia PDF Downloads 55
141 Accounting and Prudential Standards of Banks and Insurance Companies in EU: What Stakes for Long Term Investment?

Authors: Sandra Rigot, Samira Demaria, Frederic Lemaire

Abstract:

The starting point of this research is a contemporary capitalist paradox: there is a real scarcity of long-term investment despite the boom of potential long-term investors. This gap represents a major challenge: there are important needs for long-term financing in developed and emerging countries in strategic sectors such as energy, transport infrastructure, and information and communication networks. Moreover, the recent financial and sovereign debt crises, which have respectively reduced the ability of banking intermediaries and governments to provide long-term financing, raise the question of which actors are able to provide long-term financing, their methods of financing, and the most appropriate forms of intermediation. The issue of long-term financing is deemed very important by the EU Commission, which issued a 2013 Green Paper (GP) on the long-term financing of the EU economy. Among other topics, the paper discusses the impact of the recent regulatory reforms on long-term investment, both in terms of accounting (in particular fair value) and prudential standards for banks. For banks, prudential and accounting standards are both crucial. Fair value is indeed well adapted to the trading book in a short-term view, but this method hardly suits a medium- and long-term portfolio. Banks' ability to finance the economy and long-term projects depends on their ability to distribute credit, and the way credit is valued (fair value or amortised cost) leads to different banking strategies. Furthermore, in the banking industry, accounting standards are directly connected to prudential standards, as the regulatory requirements of Basel III use accounting figures with prudential filters to define capital needs and to compute regulatory ratios. The objective of these regulatory requirements is to prevent insolvency and financial instability. At the same time, they can represent regulatory constraints on long-term investing.
The balance between financial stability and the need to stimulate long-term financing is a key question raised by the EU GP. Does fair value accounting contribute to short-termism in investment behaviour? Should prudential rules be "appropriately calibrated" and "progressively implemented" so as not to prevent banks from providing long-term financing? These issues raised by the EU GP lead us to question to what extent the main regulatory requirements incite or constrain banks to finance long-term projects. To that end, we study the 292 responses received by the EU Commission during the public consultation. We analyze these contributions, focusing on particular questions related to fair value accounting and prudential norms. We conduct a two-stage content analysis of the responses: first, a qualitative coding to identify the respondents' arguments, and subsequently a quantitative coding in order to conduct statistical analyses. This paper provides a better understanding of the positions that a large panel of European stakeholders hold on these issues. Moreover, it adds to the debate on fair value accounting and its effects on prudential requirements for banks. This analysis allows us to identify a short-term bias in banking regulation.

Keywords: Basel III, fair value, securitization, long term investment, banks, insurers

Procedia PDF Downloads 293
140 Strategies of Risk Management for Smallholder Farmers in South Africa: A Case Study on Pigeonpea (Cajanus cajan) Production

Authors: Sanari Chalin Moriri, Kwabena Kingsley Ayisi, Alina Mofokeng

Abstract:

Dryland smallholder farmers in South Africa are vulnerable to all kinds of risks, which negatively affect crop productivity and profit. Pigeonpea is a leguminous, multipurpose crop that provides food, fodder, and wood for smallholder farmers. The majority of these farmers still grow pigeonpea from traditional unimproved seeds, which comprise a mixture of genotypes. The objectives of the study were to identify the key risk factors that affect pigeonpea productivity and to develop management strategies to alleviate the risk factors in pigeonpea production. The study was conducted in two provinces (Limpopo and Mpumalanga) of South Africa, in six municipalities, during the 2020/2021 growing season. A non-probability sampling method using purposive and snowball sampling techniques was used to collect data from the farmers through a structured questionnaire. A total of 114 pigeonpea producers were interviewed individually using the questionnaire. Key stakeholders in each municipality were also identified, invited, and interviewed to verify the information given by the farmers. Data collected were analyzed using SPSS statistical software, version 25. The findings of the study were that the majority of farmers affected by risk factors were women, subsistence farmers, and elderly farmers, resulting in low food production. Drought, unavailability of improved pigeonpea seeds for planting, and limited access to information and processing equipment were found to be the main risk factors contributing to low crop productivity in farmers' fields. Above 80% of farmers lacked knowledge of crop improvement and of the processing techniques needed to secure high prices during the crop off-season. Market availability, pricing, and the incidence of pests and diseases were found to be minor risk factors, which were triggered by the major risk factors. The minor risk factors can be corrected only if the major risk factors are first given the necessary attention.
About 10% of the farmers were found to use the crop as a mulch to reduce soil temperatures and to improve soil fertility. The study revealed that most of the farmers were unaware of its uses as fodder, mulch, and medicine, for nitrogen fixation, and many more. The risk of frequent drought in dry areas of South Africa, where farmers depend solely on rainfall, poses a serious threat to crop productivity. The majority of these risk factors are caused by climate change; unreliable, low rainfall combined with extreme temperatures poses a threat to food security, water, and the environment. The use of drought-tolerant, multipurpose legume crops such as pigeonpea, access to new information, provision of processing equipment, and support from all stakeholders will help in addressing food security for smallholder farmers. Policies should be revisited to address the prevailing risk factors faced by farmers and should involve them in addressing these factors. Awareness should be prioritized in promoting the crop to improve its production and commercialization in the dryland farming system of South Africa.

Keywords: management strategies, pigeonpea, risk factors, smallholder farmers

Procedia PDF Downloads 213
139 Antioxidant Potential of Sunflower Seed Cake Extract in Stabilization of Soybean Oil

Authors: Ivanor Zardo, Fernanda Walper Da Cunha, Júlia Sarkis, Ligia Damasceno Ferreira Marczak

Abstract:

Lipid oxidation is one of the most important deteriorative processes in the oil industry, resulting in losses of the nutritional value of oils as well as changes in color, flavor, and other physiological properties. Autoxidation of lipids occurs naturally between molecular oxygen and the unsaturated bonds of fatty acids, forming free fatty-acid radicals, peroxide free radicals, and hydroperoxides. To avoid lipid oxidation in vegetable oils, synthetic antioxidants such as butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT), and tertiary butylhydroquinone (TBHQ) are commonly used. However, the use of synthetic antioxidants has been associated with several health side effects and toxicity. The use of natural antioxidants as stabilizers of vegetable oils is being suggested as a sustainable alternative to synthetic antioxidants. The alternative under study is the use of natural extracts obtained mainly from fruits, vegetables, and seeds, whose well-known antioxidant activity is related mainly to the presence of phenolic compounds. Sunflower seed cake is rich in phenolic compounds (1-4% of the total mass), with chlorogenic acid as the major constituent. The aim of this study was to evaluate the in vitro application of the phenolic extract obtained from sunflower seed cake as a retarder of the lipid oxidation reaction in soybean oil and to compare the results with a synthetic antioxidant. For this, soybean oil, provided by industry without any added antioxidants, was subjected to an accelerated storage test for 17 days at 65 °C. Six samples with different treatments were submitted to the test: a control sample without any added antioxidant; 100 ppm of the synthetic antioxidant BHT; a mixture of 50 ppm of BHT and 50 ppm of phenolic compounds; and 100, 500, and 1200 ppm of phenolic compounds. The phenolic compound concentration in the extract was expressed in gallic acid equivalents.
To evaluate the oxidative changes in the samples, aliquots were collected after 0, 3, 6, 10, and 17 days and analyzed for peroxide value and conjugated diene and triene values. The soybean oil sample initially had a peroxide value of 2.01 ± 0.27 meq of oxygen/kg of oil. On the third day of the treatment, only the samples treated with 100, 500, and 1200 ppm of phenolic compounds showed a considerable retardation of oxidation compared to the control sample. On the sixth day of the treatment, the samples presented a considerable increase in peroxide value (higher than 13.57 meq/kg), and the higher the concentration of phenolic compounds, the lower the peroxide value observed. From the tenth day on, the samples had very high peroxide values (higher than 55.39 meq/kg), and only the sample containing 1200 ppm of phenolic compounds presented a significant retardation of oxidation. The samples containing the phenolic extract were more efficient at avoiding the formation of primary oxidation products, indicating effectiveness in retarding the reaction. Similar results were observed for the conjugated dienes and trienes. Based on these results, phenolic compounds, especially chlorogenic acid (the major phenolic compound of sunflower seed cake), can be considered a potential partial or even total substitute for synthetic antioxidants.

Keywords: chlorogenic acid, natural antioxidant, vegetables oil deterioration, waste valorization

Procedia PDF Downloads 264
138 School Accidents in Educational Establishment in Tunisia: A Five Years Retrospective Survey in the Governorate of Mahdia

Authors: Lamia Bouzgarrou, Amira Omrane, Leila Mrabet, Taoufik Khalfallah

Abstract:

Background and aims: School accidents are one of the leading causes of morbidity and mortality among pupils and students. Indeed, they may cause a large number of lost school days, heavy emotional and physical disabilities, and financial costs for the victims and their families. This study aims to evaluate the annual incidence of school accidents in the central Tunisian governorate of Mahdia and to identify the epidemiological profile of the victims and the risk factors for these accidents. Methods: A retrospective study was conducted over a period of 5 school years, focusing on school accidents that occurred in public educational institutions (primary, basic, secondary, and university) in the governorate of Mahdia (area = 2,966 km²; population in 2014 = 410,812). All accidents declared to the only official insurer for this type of injury (MASU: Mutual School and University Accidents) and initially treated at the University Hospital of Mahdia were included. Data were collected from the MASU reporting forms and the medical records of the emergency and other specialized hospital departments. Results: With 3,248 identified victims, the annual incidence of school accidents was 0.69 per 100 pupils and students per year. The average age of the victims was 14.51 ± 0.059 years, and the sex ratio was 1.58. Pupils aged between 12 and 15 years accounted for 46.7% of the identified accidents. The practice of sports was the most frequent circumstance of these accidents (76.2%). In 56.58% of cases, falls were the leading mechanism. Bruises and fractures were the most frequent lesions (32.43% and 30.51%). Serious school accidents were noted in 28% of cases, with hospitalization in 2.27% of them. The average number of lost school days was 12.23 ± 1.73 days. Accidents occurring during sports or leisure activities were significantly more serious (p = 0.021). Furthermore, the frequency of hospitalization was significantly higher among boys (2.81% vs.
1.43%; p = 0.035), students ≤ 11 years (p = 0.008), and following crush trauma (p < 0.001). In addition, surgical interventions were statistically more frequent among male victims (p < 0.001), in accidents occurring during physical education sessions (p < 0.001), in those associated with falls (p < 0.001) and crush mechanisms (p = 0.002), and in injuries affecting the lower limbs (p < 0.001). Multivariate analysis concluded that the severity of a school accident is correlated with the activity practiced at the time of the trauma and the geographical location of the school. Conclusion: Children and adolescents are among the groups most vulnerable to such incidents, with a risk of permanent disability mainly related to the disruption of the growth process and physiological limitations. Our five-year study showed a high incidence of school accidents among children and adolescents, with a considerable rate of severe injuries. In any community, the promotion of adolescents' and children's health is an important indicator of the level of public health. It is therefore important to develop a multidisciplinary strategy for the prevention of school accidents, based on safety and security rules and adapted to the specificity of our context.

Keywords: children and adolescents, children health, injuries and disability, school accident

Procedia PDF Downloads 118
137 The Impact of the Use of Some Multiple Intelligence-Based Teaching Strategies on Developing Moral Intelligence and Inferential Jurisprudential Thinking among Secondary School Female Students in Saudi Arabia

Authors: Sameerah A. Al-Hariri Al-Zahrani

Abstract:

The current study aims at getting acquainted with the impact of the use of some multiple intelligence-based teaching strategies on developing moral intelligence and inferential jurisprudential thinking among secondary school female students. The study has endeavored to answer the following questions: What is the impact of the use of some multiple intelligence-based teaching strategies on developing inferential jurisprudential thinking and moral intelligence among first-year secondary school female students? In the frame of this main research question, the study seeks to answer the following sub-questions: (i) What are the inferential jurisprudential thinking skills among first-year secondary school female students? (ii) What are the components of moral intelligence among first year secondary school female students? (iii) What is the impact of the use of some multiple intelligence‐based teaching strategies (such as the strategies of analyzing values, modeling, Socratic discussion, collaborative learning, peer collaboration, collective stories, building emotional moments, role play, one-minute observation) on moral intelligence among first-year secondary school female students? (iv) What is the impact of the use of some multiple intelligence‐based teaching strategies (such as the strategies of analyzing values, modeling, Socratic discussion, collaborative learning, peer collaboration, collective stories, building emotional moments, role play, one-minute observation) on developing the capacity for inferential jurisprudential thinking of juristic rules among first-year secondary school female students? The study has used the descriptive-analytical methodology in surveying, analyzing, and reviewing the literature on previous studies in order to benefit from them in building the tools of the study and the materials of experimental treatment. 
The study also used the experimental method to study the impact of the independent variable (multiple intelligence strategies) on the two dependent variables (moral intelligence and inferential jurisprudential thinking) in first-year secondary school female students' learning. The sample of the study was made up of 70 female students divided into two groups: an experimental group of 35 students who were taught through multiple intelligence strategies, and a control group of the other 35 students who were taught conventionally. The two tools of the study (the inferential jurisprudential thinking test and the moral intelligence scale) were administered to both groups as a pre-test. The researcher taught the experimental group and administered the two tools of the study. After the experiment, which lasted eight weeks, the study showed the following results: (i) statistically significant differences (at the 0.05 level) between the mean score of the control group and that of the experimental group on the inferential jurisprudential thinking test (recognition of the evidence for a jurisprudential rule, recognition of the motive for the jurisprudential rule, jurisprudential inference, analogical jurisprudence) in favor of the experimental group; (ii) statistically significant differences (at the 0.05 level) between the mean score of the control group and that of the experimental group on the components of the moral intelligence scale (sympathy, conscience, moral wisdom, tolerance, justice, respect) in favor of the experimental group. The study has thus demonstrated the impact of the use of some multiple intelligence-based teaching strategies on developing moral intelligence and inferential jurisprudential thinking.

Keywords: moral intelligence, teaching, inferential jurisprudential thinking, secondary school

Procedia PDF Downloads 160
136 Thermal Energy Storage Based on Molten Salts Containing Nano-Particles: Dispersion Stability and Thermal Conductivity Using Multi-Scale Computational Modelling

Authors: Bashar Mahmoud, Lee Mortimer, Michael Fairweather

Abstract:

New methods have recently been introduced to improve the thermal properties of molten nitrate salts (a binary mixture of NaNO3:KNO3 in 60:40 wt. %) by doping them with minute concentrations of nanoparticles, in the range of 0.5 to 1.5 wt. %, to form so-called nano-heat-transfer fluids suitable for thermal energy transfer and storage applications. The present study aims to assess the stability of these nanofluids using an advanced computational modelling technique, Lagrangian particle tracking. A multi-phase solid-liquid model is used, in which the motion of the embedded nanoparticles in the suspending fluid is treated by an Euler-Lagrange hybrid scheme with fixed time stepping. This technique enables the measurement of various multi-scale forces whose characteristic length- and timescales are quite different. Two systems are considered, both consisting of 50 nm Al2O3 ceramic nanoparticles suspended in fluids of different density ratios: water (5 to 95 °C) and molten nitrate salt (220 to 500 °C), at volume fractions ranging from 1% to 5%. The dynamic properties of both phases are coupled to the ambient temperature of the fluid suspension. The three-dimensional computational region consists of a 1 μm cube, and particles are homogeneously distributed across the domain. Periodic boundary conditions are enforced. The particle equations of motion are integrated using the fourth-order Runge-Kutta algorithm with a very small time step, Δts, set at 10⁻¹¹ s. The implemented technique captures the key dynamics of aggregating nanoparticles: Brownian motion, soft-sphere particle-particle collisions, and Derjaguin, Landau, Verwey, and Overbeek (DLVO) forces. These mechanisms form the basis of the predictive model of aggregation in nano-suspensions. An energy-transport-based method of predicting the thermal conductivity of the nanofluids is also used to determine the thermal properties of the suspension.
The simulation results confirm the effectiveness of the technique. The values are in excellent agreement with theoretical and experimental data obtained from similar studies. The predictions indicate the role of Brownian motion and the DLVO force (comprising both a repulsive electric double layer and an attractive van der Waals component) and their influence on the level of nanoparticle agglomeration. The nano-aggregates formed were found to play a key role in governing the thermal behavior of the nanofluids at the various particle concentrations. The presentation will include a quantitative assessment of these forces and mechanisms, leading to conclusions about the nanofluids' heat transfer performance and thermal characteristics and their potential application in solar thermal energy plants.
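The integration scheme described here can be illustrated with a minimal sketch. The code below is not the authors' solver: it integrates a single 50 nm Al2O3 particle under Stokes drag with fourth-order Runge-Kutta at the stated 10⁻¹¹ s time step, adding a random Brownian velocity kick per step. The temperature, viscosity, and kick magnitude are illustrative assumptions, and the soft-sphere collision and DLVO terms are omitted.

```python
import numpy as np

# Illustrative sketch (not the study's code): one nanoparticle, Stokes drag
# plus a per-step Brownian kick. Fluid parameters are assumed values.
KB = 1.380649e-23      # Boltzmann constant, J/K
T = 500.0              # molten-salt temperature, K (assumed)
MU = 1.5e-3            # dynamic viscosity, Pa.s (assumed)
R = 25e-9              # particle radius, m (50 nm diameter)
RHO_P = 3970.0         # Al2O3 density, kg/m^3
M = RHO_P * 4.0 / 3.0 * np.pi * R**3
GAMMA = 6.0 * np.pi * MU * R          # Stokes drag coefficient
DT = 1e-11                            # time step, s (as in the abstract)

def drag_accel(v):
    """Deterministic acceleration from Stokes drag only."""
    return -GAMMA * v / M

def rk4_step(v, dt):
    """Fourth-order Runge-Kutta step for the velocity ODE dv/dt = a(v)."""
    k1 = drag_accel(v)
    k2 = drag_accel(v + 0.5 * dt * k1)
    k3 = drag_accel(v + 0.5 * dt * k2)
    k4 = drag_accel(v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(n_steps=2000, seed=0):
    """Integrate position and velocity, returning the final state."""
    rng = np.random.default_rng(seed)
    # Brownian velocity kick per step (fluctuation-dissipation estimate).
    sigma = np.sqrt(2.0 * KB * T * GAMMA * DT) / M
    v = np.zeros(3)
    x = np.zeros(3)
    for _ in range(n_steps):
        v = rk4_step(v, DT) + sigma * rng.standard_normal(3)
        x = x + v * DT
    return x, v
```

In a full solver, the per-step force evaluation would additionally sum the soft-sphere contact and DLVO contributions over neighboring particles before the RK4 update.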

Keywords: thermal energy storage, molten salt, nano-fluids, multi-scale computational modelling

Procedia PDF Downloads 191
135 Globalisation and Diplomacy: How Can Small States Improve the Practice of Diplomacy to Secure Their Foreign Policy Objectives?

Authors: H. M. Ross-McAlpine

Abstract:

Much of what is written on diplomacy, globalization, and the global economy addresses the changing nature of relationships between major powers. While the most dramatic and influential changes have resulted from these developing relationships, the world is not, on deeper inspection, governed neatly by major powers. Due to advances in technology, the shifting balance of power, and a changing geopolitical order, small states have the ability to exercise greater influence than ever before. Increasingly interdependent and ever more complex, our world is too delicate to be handled by a mighty few. The pressure of global change requires small states to adapt their diplomatic practices and diversify their strategic alliances and relationships. The nature and practice of diplomacy must be re-evaluated in light of the pressures resulting from globalization. This research examines how small states can best secure their foreign policy objectives. Small state theory is used as a foundation for exploring the case study of New Zealand. The research draws on secondary sources to evaluate the existing theory in relation to modern practices of diplomacy. As New Zealand lacks the economic and military power required to play an active, influential role in international affairs, what strategies does it use to exert influence? Furthermore, New Zealand lies in a remote corner of the Pacific and is geographically isolated from its nearest neighbors; how does this affect its security and trade priorities? The findings note a significant shift since the 1970s in New Zealand's diplomatic relations. This shift is arguably a direct result of globalization, regionalism, and a growing independence from traditional bilateral relationships. The need to source predictable trade, investment, and technology is an essential driving force for New Zealand's diplomatic relations.
A lack of hard power aligns New Zealand's prosperity with a secure, rules-based international system that increases the likelihood of a stable and secure global order. New Zealand's diplomacy and prosperity have been intrinsically reliant on its reputation. A vital component of New Zealand's diplomacy is preserving a reputation for integrity and global responsibility. It is the use of this soft power that underpins the influence New Zealand enjoys on the world stage. To weave a comprehensive network of successful diplomatic relationships, New Zealand must maintain a reputation of international credibility. Globalization has substantially influenced the practice of diplomacy for New Zealand. The current world order places economic and military might in the hands of a few, consequently requiring smaller states to use other means to secure their interests. There are clear strategies evident in New Zealand's diplomatic practice that suggest how other small states might best secure their foreign policy objectives. While these findings are limited, as with all case study research, there is value in applying them to other small states struggling to secure their interests in the wake of rapid globalization.

Keywords: diplomacy, foreign policy, globalisation, small state

Procedia PDF Downloads 398
134 Electricity Market Reforms Towards Clean Energy Transition and Their Impact in India

Authors: Tarun Kumar Dalakoti, Debajyoti Majumder, Aditya Prasad Das, Samir Chandra Saxena

Abstract:

India’s ambitious target to achieve a 50 percent share of energy from non-fossil fuels and the 500-gigawatt (GW) renewable energy capacity before the deadline of 2030, coupled with the global pursuit of sustainable development, will compel the nation to embark on a rapid clean energy transition. As a result, electricity market reforms will emerge as critical policy instruments to facilitate this transition and achieve ambitious environmental targets. This paper will present a comprehensive analysis of the various electricity market reforms to be introduced in the Indian Electricity sector to facilitate the integration of clean energy sources and will assess their impact on the overall energy landscape. The first section of this paper will delve into the policy mechanisms to be introduced by the Government of India and the Central Electricity Regulatory Commission to promote clean energy deployment. These mechanisms include extensive provisions for the integration of renewables in the Indian Electricity Grid Code, 2023. The section will also cover the projection of RE Generation as highlighted in the National Electricity Plan, 2023. It will discuss the introduction of Green Energy Market segments, the waiver of Inter-State Transmission System (ISTS) charges for inter-state sale of solar and wind power, the notification of Promoting Renewable Energy through Green Energy Open Access Rules, and the bundling of conventional generating stations with renewable energy sources. The second section will evaluate the tangible impact of these electricity market reforms. By drawing on empirical studies and real-world case examples, the paper will assess the penetration rate of renewable energy sources in India’s electricity markets, the decline of conventional fuel-based generation, and the consequent reduction in carbon emissions. 
Furthermore, it will explore the influence of these reforms on electricity prices, the impact on various market segments due to the introduction of green contracts, and grid stability. The paper will also discuss the operational challenges to be faced due to the surge of RE Generation sources as a result of the implementation of the above-mentioned electricity market reforms, including grid integration issues, intermittency concerns with renewable energy sources, and the need for increasing grid resilience for future high RE in generation mix scenarios. In conclusion, this paper will emphasize that electricity market reforms will be pivotal in accelerating the global transition towards clean energy systems. It will underscore the importance of a holistic approach that combines effective policy design, robust regulatory frameworks, and active participation from market actors. Through a comprehensive examination of the impact of these reforms, the paper will shed light on the significance of India’s sustained commitment to a cleaner, more sustainable energy future.

Keywords: renewables, Indian electricity grid code, national electricity plan, green energy market

Procedia PDF Downloads 44
133 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Efficient hardware alternatives are now used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile sensing systems except the force reconstruction process, the stage to which they have been least applied. This work presents a hardware implementation of a model-driven approach reported in the literature for the contact force reconstruction of flat, rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of that model, this implementation proposes the parallelization of tasks that facilitate the execution of matrix operations and of a two-dimensional optimization function to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate algorithm parallelization techniques, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process and considering low latency, low power consumption, and real-time execution as the main design parameters.
The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform, which allows the reconstruction of force vectors following a scalable approach from the information captured by tactile sensor arrays of up to 48×48 taxels using various transduction technologies. The proposed implementation demonstrates a reduction in estimation time by a factor of 180 compared to software implementations. Despite the relatively high estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed; they are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be further reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
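As a software-level illustration of the reconstruction stage, the sketch below poses the problem as regularized least squares: assuming a linear elastic model G that maps the unknown per-taxel force components to the measured normal-stress readings, the force vector is recovered in closed form. G, the problem sizes, and the regularization weight are stand-ins for illustration, not the model cited in the paper.

```python
import numpy as np

# Hedged sketch (assumed linear model, not the cited one): if the stress
# readings satisfy s = G @ f, recover f by Tikhonov-regularized least squares.
def reconstruct_forces(G, s, lam=1e-8):
    """Closed-form solution of min_f ||G f - s||^2 + lam ||f||^2."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ s)

# Small synthetic instance: 120 stress readings, 60 unknown force components.
rng = np.random.default_rng(1)
G = rng.normal(size=(120, 60))       # stand-in stress/force transfer matrix
f_true = rng.normal(size=60)
s = G @ f_true                       # noiseless synthetic measurements
f_est = reconstruct_forces(G, s)
```

On hardware, the matrix products and the solve would be the operations mapped to parallel FPGA resources; the closed form above is the software baseline such a design would be checked against.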

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 196
132 Real-Time Neuroimaging for Rehabilitation of Stroke Patients

Authors: Gerhard Gritsch, Ana Skupch, Manfred Hartmann, Wolfgang Frühwirt, Hannes Perko, Dieter Grossegger, Tilmann Kluge

Abstract:

Rehabilitation of stroke patients is dominated by classical physiotherapy. A current field of research is the application of neurofeedback techniques to help stroke patients overcome their motor impairments. Especially if a certain limb is completely paralyzed, neurofeedback is often the last option for treating the patient. Certain exercises, such as imagining the impaired motor function, have to be performed to stimulate the neuroplasticity of the brain, so that the corresponding activity takes place in the parts of the cortex neighboring the injured region. During the exercises, it is very important to keep the patient's motivation at a high level. For this reason, the natural feedback missing due to the inability to move the affected limb may be replaced by a synthetic feedback based on motor-related brain function. To generate such a synthetic feedback, a system is needed which measures, detects, localizes, and visualizes the motor-related µ-rhythm. Fast therapeutic success can only be achieved if the feedback has high specificity and comes in real time without a large delay. We describe such an approach, which offers a 3D visualization of µ-rhythms in real time with a delay of 500 ms. This is accomplished by combining smart EEG preprocessing in the frequency domain with source localization techniques. The algorithm first selects the EEG channel featuring the most prominent rhythm in the alpha frequency band from a so-called motor channel set (C4, CZ, C3; CP6, CP4, CP2, CP1, CP3, CP5). If the amplitude in the alpha frequency band of this electrode exceeds a threshold, a µ-rhythm is detected. To prevent the detection of a mixture of posterior alpha activity and µ-activity, the amplitudes in the alpha band outside the motor channel set must not be in the same range as that of the main channel. The EEG signal of the main channel is used as a template for calculating the spatial distribution of the µ-rhythm over all electrodes.
This spatial distribution is the input for an inverse method which provides the 3D distribution of the µ-activity within the brain, visualized as a color-coded 3D activity map. This approach mitigates the influence of eye-lid artifacts on the localization performance. First results from several healthy subjects show that the system is capable of detecting and localizing the rarely appearing µ-rhythm. In most cases, the results match findings from visual EEG analysis. Frequent eye-lid artifacts have no influence on the system performance. Furthermore, the system is able to run in real time; due to the design of the frequency transformation, the processing delay is 500 ms. The first results are promising, and we plan to extend the test data set to further evaluate the performance of the system. The relevance of the system to the therapy of stroke patients has to be shown in studies with real patients after CE certification of the system. This work was performed within the project 'LiveSolo' funded by the Austrian Research Promotion Agency (FFG) (project number: 853263).
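The channel-selection step can be sketched in a few lines. The fragment below is an illustrative reimplementation, not the project code: it estimates the alpha-band (8-13 Hz) spectral amplitude of each channel via an FFT, picks the strongest channel from an assumed motor channel set, and declares a µ-rhythm only if that channel exceeds a threshold while channels outside the motor set stay well below it. The sampling rate, threshold, and rejection ratio are assumptions.

```python
import numpy as np

# Illustrative sketch of the mu-rhythm channel selection (parameters assumed).
FS = 250.0                        # sampling rate in Hz (assumed)

def alpha_amplitude(x, fs=FS, band=(8.0, 13.0)):
    """Mean spectral amplitude of one channel inside the alpha band."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    amp = np.abs(np.fft.rfft(x)) / len(x)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return amp[mask].mean()

def detect_mu(channels, motor_set, threshold, ratio=2.0):
    """channels: dict name -> signal. Returns (detected, best motor channel).

    A mu-rhythm is flagged when the strongest motor channel exceeds the
    threshold AND dominates every channel outside the motor set by `ratio`,
    which rejects posterior alpha leaking into the detection.
    """
    amps = {name: alpha_amplitude(sig) for name, sig in channels.items()}
    motor = {n: a for n, a in amps.items() if n in motor_set}
    best = max(motor, key=motor.get)
    others = [a for n, a in amps.items() if n not in motor_set]
    detected = motor[best] > threshold and (
        not others or motor[best] > ratio * max(others))
    return detected, best
```

The selected channel's signal would then serve as the template for the spatial-distribution and inverse-localization stages described in the abstract.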

Keywords: real-time EEG neuroimaging, neurofeedback, stroke, EEG–signal processing, rehabilitation

Procedia PDF Downloads 388
131 Metal Contents in Bird Feathers (Columba livia) from Mt Etna Volcano: Volcanic Plume Contribution and Biological Fractionation

Authors: Edda E. Falcone, Cinzia Federico, Sergio Bellomo, Lorenzo Brusca, Manfredi Longo, Walter D’Alessandro

Abstract:

Although trace metals are essential elements for living beings, they can become toxic at high concentrations. Their potential toxicity is related not only to their total content in the environment but mostly to their bioavailability. Volcanoes are important natural metal emitters and can deeply affect the quality of air, water, and soils, as well as human health. Trace metals tend to accumulate in the tissues of living organisms, depending on the metal contents in food, air, and water and on the exposure time. Birds are considered bioindicators of interest because their feathers directly reflect the metal uptake from the blood. Birds are exposed to atmospheric pollution through contact with rainfall, dust, and aerosol, and they accumulate metals over their whole life cycle. We report the first data combining the rainfall metal content in three areas of Mt Etna, variably fumigated by the volcanic plume, with the metal contents in the feathers of pigeons collected in the same areas. Rainfall samples were collected from three rain gauges placed at different elevations on the eastern flank of the volcano, the flank most exposed to the airborne plume; they were filtered, treated with Suprapur-grade HNO₃, and analyzed for Fe, Cr, Co, Ni, Se, Zn, Cu, Sr, Ba, Cd, and As by the ICP-MS technique, and for major ions by ion chromatography. Feathers were collected from single individuals in the same areas where the rain gauges were installed. Additionally, some samples were collected in an urban area little affected by the volcanic plume. The samples were rinsed in MilliQ water and acetone, dried at 50 °C to constant weight, and digested in a 2:1 mixture of Suprapur-grade HNO₃ (65%) and H₂O₂ (30%) for 25-50 mg of sample, in a bath at near-boiling temperature. The solutions were diluted to 20 ml prior to analysis by ICP-MS.
The rainfall samples most contaminated by the plume were collected at close distance from the summit craters (less than 6 km) and show lower pH values and higher concentrations of all analyzed metals relative to those from the sites at lower elevation. The analyzed samples are enriched both in metals directly emitted by the volcanic plume and transported by acidic gases (SO₂, HCl, HF) and in metals leached from the airborne volcanic ash. The feathers show different patterns at the different sites, related to exposure to natural or anthropogenic pollutants. They show abundance ratios similar to those of rainfall for lithophile elements (Ba, Sr), whereas they are enriched in Zn and Se, known for their antioxidant properties, probably as an adaptive response to the oxidative stress induced by toxic metal exposure. The pigeons revealed a clear heterogeneity of metal uptake in different parts of the volcano, as an effect of the volcanic plume's impact. Additionally, some physiological processes can modify the fate of some metals after uptake, which offers insights for translational studies.

Keywords: bioindicators, environmental pollution, feathers, trace metals, volcanic plume

Procedia PDF Downloads 143
130 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics

Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca

Abstract:

The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for the detection of the adulteration of honey with high fructose corn syrup (HFCS) was investigated. First, a mixture of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Then, mixtures were prepared by adding different concentrations of HFCS to samples of the honey pool. 237 samples were used: 108 were authentic honey, and 129 corresponded to honey adulterated with HFCS at between 1 and 10%. They were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40 °C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity, and adjusted to a standard solids content (70 °Brix) with distilled water. The adulterant solutions were also adjusted to 70 °Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹. The technique of specular reflectance was used, with a lens aperture of 150 mm. Pretreatment of the spectra was performed by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface, running under MATLAB version 5.3, was used to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A combined method of Potential Functions (PF) and Partial Least Squares Linear Discriminant Analysis (PLS-DA) was chosen.
Different estimators of the predictive capacity of the model were compared, obtained using a decreasing number of groups, which implies more demanding validation conditions. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples, and the calibrated model was then used to study the validation samples. The calibrated model combining Potential Functions with PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved on the training samples. By use of Potential Functions (PF) and Partial Least Squares Linear Discriminant Analysis (PLS-DA) classification, authentic honey and honey adulterated with HFCS could be identified with a correct classification rate of 97.9%. The results showed that NIR in combination with the PF and PLS-DA methods can be a simple, fast and low-cost technique for the detection of HFCS in honey with high sensitivity and power of discrimination.

Keywords: adulteration, multivariate analysis, potential functions, regression

Procedia PDF Downloads 125
129 Critical Evaluation of Long Chain Hydrocarbons with Biofuel Potential from Marine Diatoms Isolated from the West Coast of India

Authors: Indira K., Valsamma Joseph, I. S. Bright

Abstract:

Introduction: Biofuels could replace fossil fuels and reduce our carbon footprint on the planet, given the technological advancements needed for sustainable and economical fuel production. Microalgae have proven to be a promising source to meet current energy demand because of their high lipid content and rapid production of high biomass. Marine diatoms, which are key contributors in the biofuel sector and also play a significant role in primary productivity and ecology, with high biodiversity and genetic and chemical diversity, are less well understood than other microalgae as producers of hydrocarbons. Method: Eleven marine diatom samples were selected for hydrocarbon analysis; nine were from the culture collection of NCAAH, and the remaining two were isolated by the serial dilution method to obtain pure cultures from a mixed culture of microalgae collected at cruise stations (350 & 357) of FORV Sagar Sampada along the west coast of India. These diatoms were mass cultured in f/2 medium and the biomass harvested. The crude extract was obtained from the biomass by homogenising with n-hexane, the hydrocarbons were further purified by passing the crude extract through a 500 mg Bonna Agela SPE column, and the analysis was done by GC-HRMS using an HP-5 column with helium as the carrier gas (1 ml/min). The injector port temperature was 240°C, the detector temperature was 250°C, and the oven was initially kept at 60°C for 1 minute and then increased to 220°C at a rate of 6°C per minute, after which the mixture of long chain hydrocarbons was analysed. Results: In the qualitative analysis, the most potent hydrocarbon producer was found to be Psammodictyon panduriforme (NCAAH-9), with a hydrocarbon mass of 37.27 mg/g of biomass, 2.1% of a total biomass of 1.395 g; the other potent producer was Biddulphia (NCAAH-6), with a hydrocarbon mass of 25.4 mg/g of biomass, a hydrocarbon percentage of 1.03%.
In the quantitative analysis by GC-HRMS, the long chain hydrocarbons found in most of the marine diatoms were undecane, hexadecane, octadecane 3-ethyl-5-(2-ethylbutyl), eicosane 7-hexyl, hexacosane, heptacosane, heneicosane, octadecane 3-methyl, and triacontane. The exact masses of the long chain hydrocarbons in all the marine diatom samples corresponded to nonadecane (¹²C₁₉¹H₄₀), tritriacontane, 13-decyl-13-heptyl (¹²C₅₀¹H₁₀₂), octadecane, 3-ethyl-5-(2-ethylbutyl) (¹²C₂₆¹H₅₄), tetratetracontane (¹²C₄₄¹H₈₉), and eicosane, 7-hexyl (¹²C₂₆¹H₅₄). Conclusion: All the marine diatoms screened produced long chain hydrocarbons which can be used as diesel fuel with good cetane values, for example hexadecane and undecane. All the long chain hydrocarbons can further undergo catalytic cracking to produce short chain alkanes, which give good octane values and can be used as gasoline. Optimisation of hydrocarbon production with the most potent marine diatom yielded long chain hydrocarbons of good fuel quality.
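The GC oven program described in the Method section (hold at 60°C for 1 minute, then ramp to 220°C at 6°C per minute) implies a run time that can be checked with simple arithmetic; the figure below is a back-of-the-envelope calculation, not a number reported in the abstract:

```python
# Oven program from the text: hold 60 °C for 1 min, then ramp to
# 220 °C at 6 °C/min
initial_temp, final_temp = 60.0, 220.0   # °C
hold_min, ramp_rate = 1.0, 6.0           # minutes, °C per minute

ramp_min = (final_temp - initial_temp) / ramp_rate  # (220 - 60) / 6
total_min = hold_min + ramp_min
# The ramp alone takes 160/6 ≈ 26.7 min, so the program runs
# roughly 27.7 min including the initial hold.
```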

Keywords: biofuel, hydrocarbons, marine diatoms, screening

Procedia PDF Downloads 79