Search results for: important bird areas
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19483

223 Absolute Quantification of the Bexsero Vaccine Component Factor H Binding Protein (fHbp) by Selected Reaction Monitoring: The Contribution of Mass Spectrometry in Vaccinology

Authors: Massimiliano Biagini, Marco Spinsanti, Gabriella De Angelis, Sara Tomei, Ilaria Ferlenghi, Maria Scarselli, Alessia Biolchi, Alessandro Muzzi, Brunella Brunelli, Silvana Savino, Marzia M. Giuliani, Isabel Delany, Paolo Costantino, Rino Rappuoli, Vega Masignani, Nathalie Norais

Abstract:

The gram-negative bacterium Neisseria meningitidis serogroup B (MenB) is an exclusively human pathogen and the major cause of meningitis and severe sepsis in infants and children, but also in young adults. The pathogen is carried by roughly 30% of the healthy population, which acts as a reservoir, spreading it through saliva and respiratory fluids during coughing, sneezing, and kissing. Among the surface-exposed protein components of this diplococcus, factor H binding protein (fHbp) is a lipoprotein proven to be a protective antigen and used as a component of the recently licensed Bexsero vaccine. fHbp is a highly variable meningococcal protein: to reflect its remarkable sequence variability, it has been classified into three variants (or two subfamilies), with poor cross-protection among the different variants. Furthermore, the level of fHbp expression varies significantly among strains, and this has also been considered an important factor for predicting MenB strain susceptibility to anti-fHbp antisera. Different methods have been used to assess fHbp expression in meningococcal strains; however, all of them rely on anti-fHbp antibodies, so the results are affected by the different affinities that antibodies can have for different antigenic variants. To overcome the limitations of antibody-based quantification, we developed a quantitative Mass Spectrometry (MS) approach. Selected Reaction Monitoring (SRM) has recently emerged as a powerful MS tool for detecting and quantifying proteins in complex mixtures. SRM is based on the targeted detection of proteotypic peptides (PTPs), unique signatures of a protein that can be easily detected and quantified by MS.
This approach, proven to be highly sensitive, quantitatively accurate, and highly reproducible, was used to quantify the absolute amount of fHbp antigen in total extracts derived from 105 clinical isolates, evenly distributed among the three main variant groups and selected to be representative of the fHbp subvariants circulating around the world. We extended the study to the genetic level, investigating the correlation between the differential level of expression and polymorphisms present within the genes and their promoter sequences. The implications of fHbp expression for the susceptibility of a strain to killing by anti-fHbp antisera are also presented. To date, this is the first comprehensive fHbp expression profiling in a large panel of Neisseria meningitidis clinical isolates driven by an antibody-independent, MS-based methodology, opening the door to new applications in vaccine coverage prediction and reinforcing the molecular understanding of released vaccines.
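The abstract does not detail how the SRM signals are calibrated to absolute amounts; a standard approach in the field is stable-isotope dilution, where a heavy-labeled version of each proteotypic peptide is spiked in at a known amount. The sketch below illustrates only that generic calculation; the function name and peak areas are ours, not the authors' data.

```python
# Hedged sketch: absolute quantification by stable-isotope dilution, a common
# SRM calibration strategy (assumed here; the study's exact scheme is not given).
def peptide_amount_fmol(light_areas, heavy_areas, heavy_spike_fmol):
    """Estimate the endogenous (light) peptide amount from SRM transition areas.

    light_areas / heavy_areas: summed peak areas of the monitored transitions
    for the endogenous peptide and the spiked heavy-labeled standard.
    heavy_spike_fmol: known spiked amount of the heavy standard.
    """
    ratio = sum(light_areas) / sum(heavy_areas)
    return ratio * heavy_spike_fmol

# Illustrative numbers only: light/heavy area ratio of 2 with a 50 fmol spike.
amount = peptide_amount_fmol([12000.0, 8000.0], [6000.0, 4000.0], 50.0)  # 100.0 fmol
```

In practice the per-transition ratios would also be checked for consistency before averaging, which is omitted here for brevity.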

Keywords: quantitative mass spectrometry, Neisseria meningitidis, vaccines, bexsero, molecular epidemiology

Procedia PDF Downloads 283
222 Autophagy Promotes Vascular Smooth Muscle Cell Migration in vitro and in vivo

Authors: Changhan Ouyang, Zhonglin Xie

Abstract:

In response to proatherosclerotic factors such as oxidized lipids, or to therapeutic interventions such as angioplasty, stenting, or bypass surgery, vascular smooth muscle cells (VSMCs) migrate from the media to the intima, resulting in intimal hyperplasia, restenosis, graft failure, or atherosclerosis. These proatherosclerotic factors also activate autophagy in VSMCs. However, the functional role of autophagy in vascular health and disease remains poorly understood. In the present study, we determined the role of autophagy in the regulation of VSMC migration. Autophagy activity in cultured human aortic smooth muscle cells (HASMCs) and mouse carotid arteries was measured by Western blot analysis of microtubule-associated protein 1 light chain 3B (LC3B) and P62. VSMC migration was determined by scratch wound and transwell migration assays. Ex vivo smooth muscle cell migration was determined using an aortic ring assay. In vivo SMC migration was examined by staining carotid artery sections for smooth muscle alpha actin (alpha SMA) after carotid artery ligation. To examine the relationship between autophagy and neointimal hyperplasia, C57BL/6J mice were subjected to carotid artery ligation. Seven days after injury, protein levels of Atg5, Atg7, Beclin1, and LC3B drastically increased, and they remained elevated in the injured arteries three weeks after the injury. In parallel with the activation of autophagy, vascular injury induced neointimal hyperplasia, as estimated by an increased intima/media ratio. En face staining of the carotid artery showed that vascular injury enhanced alpha SMA staining in the intimal cells as compared with the sham operation. Treatment of HASMCs with platelet-derived growth factor (PDGF), one of the major factors in vascular remodeling after vascular injury, increased Atg7 and LC3 II protein levels and enhanced autophagosome formation.
In addition, the aortic ring assay demonstrated that PDGF-treated aortic rings displayed an increase in neovessel formation compared with control rings. Whole-mount staining for CD31 and alpha SMA in PDGF-treated neovessels revealed that the neovessel structures were stained by alpha SMA but not CD31. In contrast, pharmacological and genetic suppression of autophagy inhibited VSMC migration; in particular, gene silencing of Atg7 inhibited VSMC migration induced by PDGF. Furthermore, three weeks after ligation, markedly decreased neointimal formation was found in mice treated with chloroquine, an inhibitor of autophagy. Quantitative morphometric analysis of the injured vessels revealed a marked reduction in the intima/media ratio in the mice treated with chloroquine. Conclusion: Autophagy activation increases VSMC migration, while autophagy suppression inhibits it. These findings suggest that autophagy suppression may be an important therapeutic strategy for atherosclerosis and intimal hyperplasia.

Keywords: autophagy, vascular smooth muscle cell, migration, neointimal formation

Procedia PDF Downloads 286
221 The Assessment of Infiltrated Wastewater on the Efficiency of Recovery Reuse and Irrigation Scheme: North Gaza Emergency Sewage Treatment Project as a Case Study

Authors: Yaser S. Kishawi, Sadi R. Ali

Abstract:

The Gaza Strip, part of Palestine (365 km² with 1.8 million inhabitants), is a semi-arid zone that relies solely on the Coastal Aquifer. The coastal aquifer is the only source of water, with only 5-10% of it suitable for human use; this barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority (PWA) strategy is to find a non-conventional water resource in treated wastewater, to cover agricultural requirements and serve the population. A new wastewater treatment plant (WWTP) project is to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (pressure line and infiltration basins, IBs), phase B (a new WWTP), and phase C (a Recovery and Reuse Scheme, RRS, to capture the spreading plume). Currently, only phase A is functioning, and nearly 23 Mm³ of partially treated wastewater have been infiltrated into the aquifer. Phases B and C have witnessed many delays, which forced a reassessment of the original RRS design. An Environmental Management Plan was conducted from July 2013 to June 2014 on 13 existing monitoring wells surrounding the project location, to measure the efficiency of the soil aquifer treatment (SAT) system and the spread of the contamination plume in relation to the efficiency of the proposed RRS and the proposed locations of its 27 recovery wells. The results from the monitored wells were assessed against PWA baseline data and fed into a groundwater model, built in Visual MODFLOW v4.2, to simulate the plume and propose the most suitable response to the delays. The redesign mainly manipulated the pumping rates of the wells, their proposed locations, and their operating schedules (including well groupings). The results were assessed according to the locations of the monitoring wells relative to the proposed recovery wells (200 m, 500 m, and 750 m away from the IBs).
Near the 500 m line (the first row of proposed recovery wells), an increase in nitrate (from 30 to 70 mg/L) together with a decrease in chloride (from 1500 to below 900 mg/L) was found during the monitoring period, indicating that the plume had expanded to this distance. At this rate, and given the time required to construct the recovery scheme, the RRS under its original design would fail to capture the plume. Accordingly, many simulations were conducted, leading to three main scenarios that manipulated the starting dates, the pumping rates, and the locations of the recovery wells. Plume expansion and path-lines were extracted from the model to determine how to prevent expansion toward the nearby municipal wells. It was concluded that location is the most important factor in determining RRS efficiency. Scenario III was adopted and showed effective results even with reduced pumping rates. This scenario proposed adding two recovery wells at a location beyond the 750 m line to compensate for the delays and effectively capture the plume. A continuous monitoring program for current and future monitoring wells should be in place to support the proposed scenario and ensure maximum protection.

Keywords: soil aquifer treatment, recovery reuse scheme, infiltration basins, North Gaza

Procedia PDF Downloads 182
220 Magneto-Luminescent Biocompatible Complexes Based on Alloyed Quantum Dots and Superparamagnetic Iron Oxide Nanoparticles

Authors: A. Matiushkina, A. Bazhenova, I. Litvinov, E. Kornilova, A. Dubavik, A. Orlova

Abstract:

Magneto-luminescent complexes based on superparamagnetic iron oxide nanoparticles (SPIONs) and semiconductor quantum dots (QDs) have been recognized as a new class of materials with high potential in modern medicine. These materials can serve for theranostics of oncological diseases and as target agents for drug delivery. They combine the qualities characteristic of magnetic nanoparticles, namely magneto-controllability and the ability to heat locally under an external magnetic field, with those of phosphors, whose luminescence makes, for example, early tumor imaging possible. The main difficulty in creating such complexes is energy transfer between the particles, which quenches the luminescence of QDs in complexes with SPIONs. For this reason, a relatively new type of alloyed (CdₓZn₁₋ₓSeᵧS₁₋ᵧ)-ZnS QDs is used in our work: the sufficiently thick gradient semiconductor shell of alloyed QDs makes it possible to reduce the probability of energy transfer from QDs to SPIONs in complexes. At the same time, Förster Resonance Energy Transfer (FRET) is a perfect instrument to confirm the formation of complexes based on QDs and different types of energy acceptors. The formation of complexes in the aprotic bipolar solvent dimethyl sulfoxide is ensured by coordination of the carboxyl group of the QD-stabilizing molecule (L-cysteine) to surface iron atoms of the SPIONs. An analysis of the photoluminescence (PL) spectra has shown that a sequential increase in SPION concentration in the samples is accompanied by effective quenching of QD luminescence. However, this alone does not confirm the formation of complexes, because the PL intensity of QDs can also decrease due to reabsorption of light by SPIONs.
Therefore, the PL kinetics of QDs at different SPION concentrations were studied, demonstrating that an increase in SPION concentration is accompanied by a concomitant reduction in all characteristic PL decay times. This confirms FRET from QDs to SPIONs, and thus the formation of QD/SPION complexes, rather than spontaneous aggregation of QDs, which is usually accompanied by a sharp increase in the fraction of QDs with the shortest characteristic PL decay time. The complexes have been studied by magnetic circular dichroism (MCD) spectroscopy, which allows one to estimate the response of a magnetic material to an applied magnetic field and can also be used to check for SPION aggregation. An analysis of the MCD spectra has shown that the complexes have zero residual magnetization, an important factor for biomedical applications, and do not contain SPION aggregates. Cell penetration, biocompatibility, and stability of the QD/SPION complexes in cancer cells have been studied using the HeLa cell line. We have found that the complexes penetrate HeLa cells and do not demonstrate a cytotoxic effect at concentrations up to 25 nM. Our results clearly demonstrate that alloyed (CdₓZn₁₋ₓSeᵧS₁₋ᵧ)-ZnS QDs can be successfully combined with SPIONs into new hybrid nanostructures, which unite bright luminescence for tumor imaging with magnetic properties for targeted drug delivery and magnetic hyperthermia of tumors. Acknowledgements: This work was supported by the Ministry of Science and Higher Education of the Russian Federation, goszadanie no. 2019-1080, and was financially supported by the Government of the Russian Federation, Grant 08-08.
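The lifetime argument above rests on the standard relation between FRET efficiency and the donor's PL decay time, E = 1 - τ_DA/τ_D. A minimal sketch of that calculation, with purely illustrative lifetimes (not values from the study):

```python
# Hedged sketch: lifetime-based FRET efficiency, the standard way to interpret
# a shortening of donor (QD) PL decay times in the presence of an acceptor.
def fret_efficiency(tau_donor_acceptor, tau_donor):
    """E = 1 - tau_DA / tau_D, from average PL decay times (same units, e.g. ns)."""
    return 1.0 - tau_donor_acceptor / tau_donor

# Illustrative values: QD lifetime of 15 ns alone, 6 ns in the QD/SPION complex.
E = fret_efficiency(tau_donor_acceptor=6.0, tau_donor=15.0)  # E = 0.6
```

A uniform shortening of all decay components, as reported, is consistent with this picture, whereas aggregation would mainly grow the shortest-lifetime fraction.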

Keywords: alloyed quantum dots, magnetic circular dichroism, magneto-luminescent complexes, superparamagnetic iron oxide nanoparticles

Procedia PDF Downloads 90
219 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine-learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine-learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep-learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models are employed to predict glucose concentration: support vector machine regression (SVMR), partial least squares regression, extra tree regression (ETR), random forest regression, extreme gradient boosting, and principal component analysis-neural network (PCA-NN). The NIR spectral data are randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaged scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy.
Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of otherwise indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding glucose concentration references are measured in increments of 20 mg/dL. The data are randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
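As a concrete illustration of the two regression metrics reported above (the correlation coefficient R and the determination coefficient R²), the self-contained sketch below computes both for made-up reference/prediction pairs; the numbers are ours and are not the study's data.

```python
# Hedged sketch: the evaluation metrics used in the study, computed by hand
# for illustrative predicted vs. reference glucose values (made-up numbers).
def pearson_r(y_true, y_pred):
    """Pearson correlation coefficient R between two equal-length sequences."""
    n = len(y_true)
    mx = sum(y_true) / n
    my = sum(y_pred) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(y_true, y_pred))
    vx = sum((a - mx) ** 2 for a in y_true)
    vy = sum((b - my) ** 2 for b in y_pred)
    return cov / (vx * vy) ** 0.5

def r_squared(y_true, y_pred):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((a - b) ** 2 for a, b in zip(y_true, y_pred))
    ss_tot = sum((a - mean) ** 2 for a in y_true)
    return 1.0 - ss_res / ss_tot

# Reference glucose levels in 20 mg/dL increments, with near-perfect predictions:
ref = [20.0, 40.0, 60.0, 80.0, 100.0]
pred = [21.0, 39.0, 61.0, 79.0, 101.0]
r = pearson_r(ref, pred)    # close to 1
r2 = r_squared(ref, pred)   # close to 1
```

A model meeting the paper's reported thresholds would show R > 0.99 and R² > 0.985 on such held-out pairs.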

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 64
218 Correlation Analysis of Reactivity in the Oxidation of Para and Meta-Substituted Benzyl Alcohols by Benzimidazolium Dichromate in Non-Aqueous Media: A Kinetic and Mechanistic Aspects

Authors: Seema Kothari, Dinesh Panday

Abstract:

An observed correlation of reaction rates with changes in the nature of a substituent on one of the reactants often reveals the nature of the transition state. Selective oxidation of organic compounds in non-aqueous media is an important transformation in synthetic organic chemistry. Because inorganic chromates and dichromates are drastic oxidants and are generally insoluble in most organic solvents, a number of different chromium(VI) derivatives have been synthesized. Benzimidazolium dichromate (BIDC) is one of the recently reported Cr(VI) reagents; it is neither hygroscopic nor light-sensitive and is therefore quite stable. Few reports on the kinetics of oxidations by BIDC appear to be available in the literature. In the present investigation, the kinetics and mechanism of the oxidation of benzyl alcohol (BA) and a number of para- and meta-substituted benzyl alcohols by BIDC in dimethyl sulphoxide are reported. The reactions were followed spectrophotometrically at 364 nm, at constant temperature, by monitoring the decrease in [BIDC] for up to 85-90% of the reaction. The observed oxidation product is the corresponding benzaldehyde. The reactions are first order with respect to both the alcohol and BIDC. They are acid-catalyzed, with a dependence of the form kobs = a + b[H⁺]; the reactions thus follow both acid-dependent and acid-independent paths. The oxidation of [1,1-²H₂]benzyl alcohol exhibited a substantial kinetic isotope effect (kH/kD = 6.20 at 298 K), indicating cleavage of an α-C-H bond in the rate-determining step. An analysis of the temperature dependence of the deuterium isotope effect showed that the loss of hydrogen proceeds through a concerted cyclic process. The rate of oxidation of BA was determined in 19 organic solvents.
An analysis of the solvent effect by Swain's equation indicated that although both the anion- and cation-solvating powers of the solvent contribute to the observed solvent effect, the role of cation solvation is major. The rates of the para and meta compounds, at 298 K, failed to exhibit a significant correlation with Hammett or Brown substituent constants. The rates were then analyzed in terms of dual substituent parameter (DSP) equations. The rates of oxidation of the para-substituted benzyl alcohols show an excellent correlation with Taft's σI and σR(BA) values, while those of the meta-substituted benzyl alcohols correlate excellently with σI and σR⁰. The polar reaction constants are negative, indicating an electron-deficient transition state. The overall mechanism is therefore proposed to involve the formation of a chromate ester in a fast pre-equilibrium, followed by decomposition of the ester in a subsequent slow step via a cyclic, concerted, symmetrical transition state involving hydride-ion transfer, leading to the product. The first-order dependence on the alcohol may be accounted for by the small value of the formation constant of the ester intermediate. A second mechanism, accounting for the acid catalysis, involves protonation of BIDC prior to formation of the ester intermediate, which subsequently decomposes in a slow step leading to the product.
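The two-term rate law kobs = a + b[H⁺] described above separates the acid-independent (a) and acid-catalyzed (b[H⁺]) paths; the two constants are obtained from a linear least-squares fit of kobs against [H⁺]. A minimal sketch of that fit, using synthetic data rather than the measured rate constants:

```python
# Hedged sketch: least-squares fit of k_obs = a + b*[H+], separating the
# acid-independent intercept (a) from the acid-catalyzed slope (b).
def fit_kobs(h_conc, k_obs):
    """Return (a, b) minimizing sum((k_obs - a - b*[H+])**2)."""
    n = len(h_conc)
    mx = sum(h_conc) / n
    my = sum(k_obs) / n
    b = sum((x - mx) * (y - my) for x, y in zip(h_conc, k_obs)) / \
        sum((x - mx) ** 2 for x in h_conc)
    a = my - b * mx
    return a, b

# Synthetic data generated from a = 1.0e-4 s^-1, b = 2.0e-3 L mol^-1 s^-1
# (illustrative magnitudes, not the study's values):
h = [0.1, 0.2, 0.3, 0.4]                 # [H+] in mol/L
k = [1.0e-4 + 2.0e-3 * x for x in h]     # observed first-order rate constants
a, b = fit_kobs(h, k)
```

On real data, a nonzero intercept within experimental error is the signature of the acid-independent path running in parallel with the acid-catalyzed one.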

Keywords: benzimidazolium dichromate, benzyl alcohols, correlation analysis, kinetics, oxidation

Procedia PDF Downloads 319
217 Operational Characteristics of the Road Surface Improvement

Authors: Iuri Salukvadze

Abstract:

Construction has played an important role in the history of mankind; there is hardly a product in our lives in which a builder's work is not materialized, since creating all of it requires setting up factories, roads, bridges, and so on. The function of the Republic of Georgia as part of the Europe-Asia connecting transport corridor has increased significantly. Within this transit function, a large part of the cargo traffic is carried by motor transport, so improving motor road infrastructure is rather important and raises new, increased operational demands for existing as well as new motor roads. Construction of a durable road surface involves rather large costs, but it offers high transport-operational properties such as higher speeds, lower fuel consumption, and less tire wear. If traffic intensity is high, the expenses are therefore recouped rapidly and income increases accordingly. If traffic intensity is relatively low, it is recommended to use lightened road carpet structures so that capital investment does not exceed the normative level. Road carpets are divided into the following basic types: asphaltic concrete and cement concrete. Asphaltic concrete is the most advanced type of road carpet. It is laid in two or three layers on a rigid foundation and then compacted. Asphaltic concrete is an artificial building material whose layers are made of a selected and measured stone skeleton and sand, bound together by bitumen and a mineral powder mixture. A similar but less strictly selected material is called a bitumen-mineral mixture. Asphaltic concrete is a non-rigid building material that withstands vertical loads well but is less resistant to the impact of horizontal forces. Cement concrete is a monolithic, durable material; it withstands horizontal loads well but is less resistant to vertical loads.
Cement concrete consists of strictly selected, measured stone material and sand, with cement as the binder. A cement concrete road carpet consists of separate slabs from 3-5 up to 6-8 meters in size. The slabs are reinforced by a rather complex system. Joints are arranged between the slabs to avoid additional stresses caused by temperature fluctuations along the length of the slabs. To ensure that separate slabs act together, they are connected by metal rods, which accommodate changes in slab length and distribute vertical forces and bending moments between the slabs. The foundation layers must be extremely durable, which requires high-quality stone material, cement, and metal. The qualification work aims to improve traffic conditions on motor roads, prolong their service life, and improve their operational characteristics. The work consists of three chapters, 80 pages, 5 tables, and 5 figures. It presents general concepts as well as tests carried out by various companies using modern methods, together with their results. Chapter III presents our own tests related to this issue and specific examples of improving operational characteristics.

Keywords: asphalt, cement, cylindrical sample of asphalt, building

Procedia PDF Downloads 198
216 Improved Morphology in Sequential Deposition of the Inverted Type Planar Heterojunction Solar Cells Using Cheap Additive (DI-H₂O)

Authors: Asmat Nawaz, Ceylan Zafer, Ali K. Erdinc, Kaiying Wang, M. Nadeem Akram

Abstract:

Hybrid halide perovskites with the general formula ABX₃, where X = Cl, Br, or I, are considered ideal candidates for the preparation of photovoltaic devices. The most commonly and successfully used hybrid halide perovskite for photovoltaic applications is CH₃NH₃PbI₃ and its analogue prepared from lead chloride, commonly symbolized as CH₃NH₃PbI₃₋ₓClₓ. Some research groups are using lead-free (Sn replacing Pb) and mixed halide perovskites for device fabrication. Both mesoporous and planar structures have been developed. Compared with the mesoporous structure, in which the perovskite material infiltrates a mesoporous metal oxide scaffold, the planar architecture is much simpler and easier to fabricate. In a typical perovskite solar cell, a perovskite absorber layer is sandwiched between the hole and electron transport layers. Upon irradiation, carriers are created in the absorber layer that can travel through the hole and electron transport layers and the interfaces in between. We fabricated an inverted planar heterojunction solar cell with the structure ITO/PEDOT/perovskite/PCBM/Al via a two-step spin-coating method, also called the sequential deposition method. A small amount of the cheap additive H₂O was added to PbI₂/DMF to make a homogeneous solution. We prepared four different solutions (without H₂O, and with 1% H₂O, 2% H₂O, and 3% H₂O). After preparation, stirring overnight at 60 °C is essential for homogeneous precursor solutions. We observed that the solution with 1% H₂O was much more homogeneous at room temperature than the others, while the solution with 3% H₂O precipitated immediately at room temperature. Four different PbI₂ films were formed on PEDOT substrates by spin coating, and immediately afterwards (before the PbI₂ dried) the substrates were immersed in a methylammonium iodide solution (prepared in isopropanol) to complete the desired perovskite film.
After obtaining the desired films, the substrates were rinsed with isopropanol to remove excess methylammonium iodide and finally dried on a hot plate for only 1-2 minutes. In this study, we added H₂O to the PbI₂/DMF precursor solution. The concept of additives is widely used in bulk-heterojunction solar cells to manipulate the surface morphology, leading to enhanced photovoltaic performance. Two parameters are most important for the selection of additives: (a) a higher boiling point than the host solvent and (b) good interaction with the precursor materials. We observed that the film morphology improved, and we achieved denser, more uniform films with fewer cavities and almost full surface coverage, but only with the precursor solution containing 1% H₂O. Therefore, we fabricated the complete perovskite solar cell by the sequential deposition technique with the precursor solution containing 1% H₂O. We concluded that, with the addition of additives to the precursor solutions, one can easily manipulate the morphology of the perovskite film. In the sequential deposition method, the thickness of the perovskite film is on the micrometer scale, while the charge diffusion length in PbI₂ is on the nanometer scale. Therefore, by controlling the thickness using other deposition methods for solar cell fabrication, better efficiency can be achieved.

Keywords: methylammonium lead iodide, perovskite solar cell, precursor composition, sequential deposition

Procedia PDF Downloads 218
215 The Importance of School Culture in Supporting Student Mental Health Following the COVID-19 Pandemic: Insights from a Qualitative Study

Authors: Rhiannon Barker, Gregory Hartwell, Matt Egan, Karen Lock

Abstract:

Background: Evidence suggests that mental health (MH) issues in children and young people (CYP) in the UK are on the rise. Of particular concern is data indicating that the pandemic, together with the impact of school closures, has accentuated already pronounced inequalities; children from families on low incomes or from black and minority ethnic groups are reportedly more likely to have been adversely impacted. This study aimed to help identify specific support which may facilitate the building of a positive school climate and protect student mental health, particularly in the wake of school closures following the pandemic. It has important implications for integrated working between schools and statutory health services. Methods: The research comprised three parts: scoping, case studies, and a stakeholder workshop to explore and consolidate results. The scoping phase included a literature review alongside interviews with a range of stakeholders from government, academia, and the third sector. Case studies were then conducted in two London state schools. Results: Our research identified how student MH was being impacted by a range of factors located at different system levels, both internal to the school and in the wider community. School climate, relating both to a shared system of beliefs and values and to broader factors including style of leadership, teaching, discipline, safety, and relationships, played a role in the experience of school life and, consequently, in the MH of both students and staff. Participants highlighted the importance of a whole-school approach, of ensuring that support for student MH was not separated from academic achievement, and of identifying and applying universal measuring systems to establish levels of MH need.
Our findings suggest that a school's climate is influenced by the style and strength of its leadership, and that this climate, together with the mechanisms put in place to respond to MH needs (both statutory and non-statutory), plays a key role in supporting student MH. Implications: Schools in England have a responsibility to decide on the nature of the MH support provided for their students, and there is no requirement for them to report centrally on the form this provision takes. The reality on the ground, as our study suggests, is that MH provision varies significantly between schools, particularly in relation to 'lower' levels of need which are not covered by statutory requirements. A valid concern may be that, among the huge raft of possible options schools have to support CYP wellbeing, too much is left to chance. Work to support schools in rebuilding their cultures post-lockdown must include the means to identify and promote appropriate tools and techniques to facilitate regular measurement of student MH. This will help establish the scale of the problem and monitor the effectiveness of the response. A strong vision from a school's leadership team that emphasises the importance of student wellbeing, running alongside (but not overshadowed by) academic attainment, should help shape a school climate that promotes beneficial MH outcomes. The sector should also be provided with support to improve the consistency and efficacy of MH provision in schools across the country.

Keywords: mental health, schools, young people, whole-school culture

Procedia PDF Downloads 37
214 Modelling Pest Immigration into Rape Seed Crops under Past and Future Climate Conditions

Authors: M. Eickermann, F. Ronellenfitsch, J. Junk

Abstract:

Oilseed rape (Brassica napus L.) is one of the most important crops throughout Europe, but pressure from pest insects and pathogens can reduce yields substantially. Therefore, reliance on pesticide applications in this crop is exceptionally high. In addition, climate change effects can interact with the phenology of the host plant and its pests and can put additional pressure on the yield. Alongside the pollen beetle, Meligethes aeneus L., the seed-damaging pest insects cabbage seed weevil (Ceutorhynchus obstrictus Marsham) and brassica pod midge (Dasineura brassicae Winn.) have the greatest economic impact on yield. Females of C. obstrictus infest oilseed rape by depositing single eggs into young pods, and females of D. brassicae exploit this local damage to the pod for their own oviposition, depositing batches of 20-30 eggs. Without a prior infestation by the cabbage seed weevil, significant yield reduction by the brassica pod midge can be ruled out. Based on long-term, multi-site field experiments, a comprehensive data set on pest migration to crops of B. napus has been built up over the last ten years. Five observational test sites, situated in different climatic regions of Luxembourg, were monitored twice a week from February until the end of May. Pest migration was recorded using yellow water pan traps. Caught insects were identified in the laboratory according to species-specific identification keys. By combining pest observations with corresponding meteorological observations, models to predict the migration periods of the seed-damaging pests could be set up. This approach is the basis for a computer-based decision support tool to assist the farmer in identifying the appropriate time point for pesticide application.
In addition, the derived algorithms of that decision support tool can be combined with climate change projections in order to assess the future potential threat caused by the seed-damaging pest species. Regional climate change effects for Luxembourg have been intensively studied in recent years. Significant changes towards wetter winters and drier summers, as well as a prolongation of the vegetation period caused mainly by higher spring temperatures, have also been reported. We used the COSMO-CLM model to perform a time-slice experiment for Luxembourg with a spatial resolution of 1.3 km. Three ten-year time slices were calculated: the reference period (1991-2000), the near future (2041-2050), and the far future (2091-2100). Our results projected a significant shift of pest migration to an earlier onset in the year, together with a prolongation of the possible migration period. Because D. brassicae depends on prior oviposition activity by C. obstrictus to infest its host plant successfully, the future interdependence of the two pest species will be assessed. Based on this approach, the future risk potential of both seed-damaging pests is calculated and their status as pest species is characterized.
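The migration-prediction idea described above can be sketched as a simple temperature-sum rule: accumulate degree-days above a base temperature and flag migration onset once a threshold is crossed. This is an illustrative sketch only; the base temperature, threshold, and function name are hypothetical, not the calibrated values behind the authors' decision support tool.

```python
def migration_onset_day(daily_mean_temps, base_temp=5.0, threshold=120.0):
    """Return the first day index (1-based) on which accumulated degree-days
    above `base_temp` reach `threshold`, or None if it is never reached.

    Base temperature and threshold are illustrative placeholders."""
    accumulated = 0.0
    for day, temp in enumerate(daily_mean_temps, start=1):
        accumulated += max(0.0, temp - base_temp)
        if accumulated >= threshold:
            return day
    return None

# A warmer spring series reaches the threshold earlier, mirroring the
# projected shift of pest migration to an earlier onset.
cool_spring = [4.0] * 30 + [8.0] * 60   # slow degree-day accumulation
warm_spring = [7.0] * 30 + [11.0] * 60  # faster degree-day accumulation
```

Under such a rule, the projected higher spring temperatures directly translate into an earlier predicted onset and a longer possible migration window.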

Keywords: CORDEX projections, decision support tool, Brassica napus, pests

Procedia PDF Downloads 352
213 A Framework for Automated Nuclear Waste Classification

Authors: Seonaid Hume, Gordon Dobie, Graeme West

Abstract:

Detecting and localizing radioactive sources is a necessity for safe and secure decommissioning of nuclear facilities. An important aspect of managing the sort-and-segregation process is establishing the spatial distributions and quantities of the waste radionuclides, their type, corresponding activity, and ultimately their classification for disposal. The data received from surveys directly informs decommissioning plans, on-site incident management strategies, and the approach needed for a new cell, as well as protecting the workforce and the public. Manual classification of nuclear waste from a nuclear cell is time-consuming, expensive, and requires significant expertise to make the classification judgment call. Also, in-cell decommissioning is still in its relative infancy, and few techniques are well developed. As with any repetitive and routine task, there is an opportunity to improve the classification of nuclear waste using autonomous systems. Hence, this paper proposes a new framework for the automatic classification of nuclear waste. This framework consists of five main stages: 3D spatial mapping and object detection, object classification, radiological mapping, source localisation based on gathered evidence, and finally waste classification. The first stage of the framework, 3D visual mapping, involves object detection from point cloud data. A review of related applications in other industries is provided, and recommendations for approaches to waste classification are made. Object detection focuses initially on cylindrical objects, since pipework is significant in nuclear cells and indeed any industrial site; the approach can be extended to other commonly occurring primitives such as spheres and cubes. This prepares for stage two: characterizing the point cloud data and estimating the dimensions, material, degradation, and mass of the detected objects in order to match their features to an inventory of possible items found in that nuclear cell.
Many items in nuclear cells are one-offs, have limited or poor drawings available, or have been modified since installation, and have complex interiors, which often and inadvertently pose difficulties when accessing certain zones and identifying waste remotely. Hence, feature matching of objects may require expert input. The third stage, radiological mapping, characterizes the nuclear cell in terms of radiation fields, including the type of radiation, its activity, and its location within the cell. The fourth stage of the framework takes the visual map from stage 1, the object characterization from stage 2, and the radiation map from stage 3 and fuses them together, providing a more detailed scene of the nuclear cell by identifying the location of radioactive materials in three dimensions. The last stage combines the evidence from the fused data sets to yield the classification of the waste in Bq/kg, thus enabling better decision making and monitoring for in-cell decommissioning. The presentation of the framework is supported by representative case study data drawn from a decommissioning application at a UK nuclear facility. This framework utilises recent advancements in the detection and mapping of complex radiation fields in three dimensions to make the process of classifying nuclear waste faster, more reliable, more cost-effective, and safer.
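The final fusion step can be illustrated by a minimal calculation: divide the activity attributed to an object (from the fused radiation map) by its estimated mass (from object characterization) to obtain the specific activity in Bq/kg that drives the disposal decision. The category thresholds below are placeholders for illustration, not actual UK regulatory limits.

```python
def classify_waste(activity_bq, mass_kg, llw_limit=4.0e6, ilw_limit=4.0e9):
    """Return (specific activity in Bq/kg, waste category).

    `llw_limit` and `ilw_limit` are illustrative boundaries between low-,
    intermediate-, and high-level waste, not regulatory figures."""
    specific_activity = activity_bq / mass_kg  # Bq/kg, as in the framework
    if specific_activity < llw_limit:
        category = "low-level"
    elif specific_activity < ilw_limit:
        category = "intermediate-level"
    else:
        category = "high-level"
    return specific_activity, category
```

In the full framework, both inputs carry uncertainty from stages 2 and 4, so a production system would propagate those uncertainties rather than classify from point estimates alone.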

Keywords: nuclear decommissioning, radiation detection, object detection, waste classification

Procedia PDF Downloads 175
212 Quantifying Firm-Level Environmental Innovation Performance: Determining the Sustainability Value of Patent Portfolios

Authors: Maximilian Elsen, Frank Tietze

Abstract:

The development and diffusion of green technologies are crucial for achieving our ambitious climate targets. The Paris Agreement commits its members to develop strategies for achieving net-zero greenhouse gas emissions by the second half of the century. Governments, executives, and academics are working on net-zero strategies, and the business of rating organisations on their environmental, social and governance (ESG) performance has grown tremendously in public interest. ESG data is now commonly integrated into traditional investment analysis and is an important factor in investment decisions. Creating these metrics, however, is inherently challenging, as environmental and social impacts are hard to measure and uniform requirements on ESG reporting are lacking. ESG metrics are often incomplete and inconsistent, as they lack fully accepted reporting standards and are often of a qualitative nature. This study explores the use of patent data for assessing the environmental performance of companies by focusing on their patented inventions in the space of climate change mitigation and adaptation technologies (CCMAT). The present study builds on the successful identification of CCMAT patents. In this context, the study adopts the Y02 patent classification, a cross-sectional tagging scheme fully incorporated in the Cooperative Patent Classification (CPC), to identify CCMAT patents. The Y02 classification was jointly developed by the European Patent Office (EPO) and the United States Patent and Trademark Office (USPTO) and provides a means to examine technologies in the field of climate change mitigation and adaptation across relevant technologies. This paper develops sustainability-related metrics for firm-level patent portfolios. We do so by adopting a three-step approach. First, we identify relevant CCMAT patents based on their classification as Y02 CPC patents.
Second, we examine the technological strength of the identified CCMAT patents by including more traditional metrics from the field of patent analytics while considering their relevance in the space of CCMAT. Such metrics include, among others, the number of forward citations a patent receives, as well as the backward citations and the size of the focal patent family. Third, we conduct our analysis on a firm level by sector for a sample of companies from different industries and compare the derived sustainability performance metrics with the firms’ environmental and financial performance based on carbon emissions and revenue data. The main outcome of this research is the development of sustainability-related metrics for firm-level environmental performance based on patent data. This research has the potential to complement existing ESG metrics from an innovation perspective by focusing on the environmental performance of companies and putting them into perspective to conventional financial performance metrics. We further provide insights into the environmental performance of companies on a sector level. This study has implications of both academic and practical nature. Academically, it contributes to the research on eco-innovation and the literature on innovation and intellectual property (IP). Practically, the study has implications for policymakers by deriving meaningful insights into the environmental performance from an innovation and IP perspective. Such metrics are further relevant for investors and potentially complement existing ESG data.
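The three-step approach could be prototyped along the following lines: filter a patent portfolio to Y02-classified (CCMAT) patents, score each one with simple strength proxies such as forward citations and family size, and average the scores into a firm-level figure. The weights, field names, and aggregation rule are assumptions for illustration; the paper's actual metric construction is more elaborate.

```python
def firm_ccmat_score(patents, w_citations=0.7, w_family=0.3):
    """Average strength score of a firm's Y02-classified patents.

    `patents` is a list of dicts with keys 'cpc' (list of CPC codes),
    'forward_citations', and 'family_size'. Weights are illustrative."""
    # Step 1: keep only climate change mitigation/adaptation (Y02) patents.
    ccmat = [p for p in patents if any(c.startswith("Y02") for c in p["cpc"])]
    if not ccmat:
        return 0.0
    # Step 2: score each patent by simple strength proxies.
    scores = [w_citations * p["forward_citations"] + w_family * p["family_size"]
              for p in ccmat]
    # Step 3: aggregate to a firm-level metric.
    return sum(scores) / len(scores)

portfolio = [
    {"cpc": ["Y02E10/50"], "forward_citations": 10, "family_size": 5},
    {"cpc": ["H01L31/00"], "forward_citations": 99, "family_size": 9},
]
```

Only the first patent in `portfolio` counts towards the score, since the second carries no Y02 tag; this is the filtering behaviour the first step of the approach relies on.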

Keywords: climate change mitigation, innovation, patent portfolios, sustainability

Procedia PDF Downloads 56
211 Identification of Failures Occurring on a System on Chip Exposed to a Neutron Beam for Safety Applications

Authors: S. Thomet, S. De-Paoli, F. Ghaffari, J. M. Daveau, P. Roche, O. Romain

Abstract:

In this paper, we present a hardware module dedicated to understanding the failure causes of a System on Chip (SoC) exposed to a particle beam. The impact of Single-Event Effects (SEEs) on processor-based SoCs is a concern that has grown over the past decade, particularly for terrestrial applications with increasing automotive safety requirements, as well as in the consumer and industrial domains. An SEE created by the impact of a particle on an SoC may have consequences ranging from instability to crashes. Specific hardening techniques for hardware and software have been developed to make such systems more reliable. The SoC is then qualified using cosmic-ray Accelerated Soft-Error Rate (ASER) testing to ensure the Soft-Error Rate (SER) remains within mission-profile limits. Understanding where errors occur is another challenge because of the complexity of the operations performed in an SoC. Common techniques to monitor an SoC running under a beam are based on non-intrusive debug, consisting of recording the program counter and doing some consistency checking on the fly. To detect and understand SEEs, we have developed a module embedded within the SoC that provides support for recording probes, hardware watchpoints, and a memory-mapped register bank dedicated to software usage. To identify CPU failure modes and the most important resources to probe, we carried out a fault injection campaign on the RTL model of the SoC. Probes are placed on generic CPU registers and bus accesses. They highlight the propagation of errors and allow the failure modes to be identified. Typical resulting errors are bit flips in resources creating bad addresses, illegal instructions, longer-than-expected loops, or incorrect bus accesses. Although our module is processor agnostic, it has been interfaced to a RISC-V core by probing some of the processor registers. Probes are then recorded in a ring buffer.
Associated hardware watchpoints allow control actions such as starting or stopping event recording or halting the processor. Finally, the module also provides a bank of registers where the firmware running on the SoC can log information; a typical use is recording operating-system context switches. The module is connected to a dedicated debug bus and is interfaced to a remote controller via a debugger link. Thus, a remote controller can interact with the monitoring module without any intrusiveness on the SoC. Moreover, in case of CPU unresponsiveness or a system-bus stall, the recorded information can still be recovered, providing the failure cause. A preliminary version of the module has been integrated into a test chip currently being manufactured at ST in 28-nm FDSOI technology. The module has been triplicated to provide reliable information on the SoC behavior. As the primary application domains are automotive and safety, the efficiency of the module will be evaluated by exposing the test chip to a fast-neutron beam by the end of the year. In the meantime, it will be tested with alpha particles and electromagnetic fault injection (EMFI). In the paper, we will report both fault-injection and irradiation results.
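The recording behaviour described above, a ring buffer that keeps the most recent probe samples and freezes on a watchpoint hit so the pre-event history survives for remote read-out, can be modelled in a few lines of software. This is a behavioural sketch with invented names, not the module's actual RTL interface.

```python
class ProbeRingBuffer:
    """Behavioural model of a probe ring buffer with watchpoint freeze."""

    def __init__(self, depth):
        self.depth = depth     # number of samples retained
        self.entries = []
        self.frozen = False

    def record(self, probe_sample):
        """Append a sample; oldest entries are overwritten once full."""
        if self.frozen:
            return  # recording halted by a watchpoint hit
        self.entries.append(probe_sample)
        if len(self.entries) > self.depth:
            self.entries.pop(0)  # drop the oldest sample

    def watchpoint_hit(self):
        """Freeze recording so the pre-event history is preserved."""
        self.frozen = True

    def read_out(self):
        """What the remote controller would recover over the debug link."""
        return list(self.entries)
```

The freeze-on-watchpoint behaviour is what lets the fail reason be reconstructed even when the CPU itself becomes unresponsive: the buffer stops moving, and the debug link reads it out independently of the stalled core.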

Keywords: fault injection, SoC fail reason, SoC soft error rate, terrestrial application

Procedia PDF Downloads 205
210 Propagation of Ultra-High Energy Cosmic Rays through Extragalactic Magnetic Fields: An Exploratory Study of the Distance Amplification from Rectilinear Propagation

Authors: Rubens P. Costa, Marcelo A. Leigui de Oliveira

Abstract:

Understanding the features of the energy spectra, the chemical compositions, and the origins of Ultra-High Energy Cosmic Rays (UHECRs) - mainly atomic nuclei with energies above ~1.0 EeV (exa-electron volts) - is intrinsically linked to the problem of determining the magnitude of their deflections in cosmic magnetic fields on cosmological scales. In addition, as they propagate from the source to the observer, modifications of their original energy spectra, anisotropy, and chemical compositions are expected due to interactions with low-energy photons and matter. This means that any consistent interpretation of the nature and origin of UHECRs has to include detailed knowledge of their propagation in a three-dimensional environment, taking into account the magnetic deflections and energy losses. The parameter space for magnetic fields in the universe is very large, because the field strengths and especially their orientations have large uncertainties. In particular, the strength and morphology of the Extragalactic Magnetic Fields (EGMFs) remain largely unknown because of the intrinsic difficulty of observing them. Monte Carlo simulations of charged particles traveling through a simulated magnetized universe are the straightforward way to study the influence of extragalactic magnetic fields on UHECR propagation. However, this brings two major difficulties: accurate numerical modeling of charged-particle diffusion in magnetic fields, and accurate numerical modeling of the magnetized universe. Since magnetic fields do not cause energy losses, it is important to impose that the particle tracking method conserves the particle's total energy and that energy changes result only from interactions with background photons. Hence, special attention should be paid to computational effects.
Additionally, because of the number of particles necessary to obtain a relevant statistical sample, the particle tracking method must be computationally efficient. In this work, we present an analysis of the propagation of ultra-high energy charged particles in the intergalactic medium. The EGMFs are considered to be coherent within cells of 1 Mpc (megaparsec) diameter, wherein they have uniform intensities of 1 nG (nanogauss). Moreover, each cell has its field orientation chosen randomly, and a border region is defined such that, at distances beyond 95% of the cell radius from the cell center, smooth transitions are applied in order to avoid discontinuities. The smooth transitions are simulated by weighting the magnetic field orientation by the particle's distance to the two nearby cells. The energy losses have been treated in the continuous approximation, parameterizing the mean energy loss per unit path length by the energy loss length. For a particle with a typical energy of interest, we have characterized the performance of the integration method in terms of the relative error of the Larmor radius (without energy losses) and the relative error of the energy. Additionally, we plotted the distance amplification from rectilinear propagation as a function of the traveled distance and of the particle's magnetic rigidity (without energy losses) or energy (with energy losses), in order to study the influence of the particle species on these calculations. The results clearly show when it is necessary to use a full three-dimensional simulation.
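The cell-based field model with border smoothing can be illustrated in one dimension: each 1 Mpc cell carries a fixed random field orientation, and inside the outer 5% of the cell the orientation is blended with the neighbouring cell's, weighted by distance, so the field varies continuously across borders. The 1-D reduction, the linear angle blend (which ignores 2π wraparound), and all names are illustrative simplifications of the authors' three-dimensional setup.

```python
import math
import random

CELL_SIZE = 1.0    # cell diameter in Mpc
BORDER = 0.95      # fraction of the cell radius where blending begins
B_STRENGTH = 1e-9  # uniform field strength in every cell, 1 nG

def cell_orientation(index, seed=42):
    """Fixed pseudo-random field angle for a given cell."""
    return random.Random(seed * 1_000_003 + index).uniform(0.0, 2.0 * math.pi)

def field_angle(x):
    """Field orientation at position x, blended smoothly near cell borders."""
    index = math.floor(x / CELL_SIZE)
    centre = (index + 0.5) * CELL_SIZE
    offset = (x - centre) / (CELL_SIZE / 2.0)  # -1 .. 1 across the cell
    theta = cell_orientation(index)
    if abs(offset) <= BORDER:
        return theta  # cell interior: uniform orientation
    # Border region: weight the orientation by distance to the nearby cell
    # (angle wraparound ignored for brevity).
    neighbour = cell_orientation(index + (1 if offset > 0 else -1))
    w = (abs(offset) - BORDER) / (1.0 - BORDER) / 2.0  # 0 at border, 0.5 at edge
    return (1.0 - w) * theta + w * neighbour
```

At a cell edge the weight reaches 0.5 from both sides, so the orientation approaches the same midpoint value whichever cell the particle leaves, which is exactly the discontinuity-avoidance the smoothing is meant to provide.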

Keywords: cosmic rays propagation, extragalactic magnetic fields, magnetic deflections, ultra-high energy

Procedia PDF Downloads 106
209 A Constructionist View of Projects, Social Media and Tacit Knowledge in a College Classroom: An Exploratory Study

Authors: John Zanetich

Abstract:

Designing an educational activity that encourages inquiry and collaboration is key to engaging students in meaningful learning. Educational Information and Communications Technology (EICT) plays an important role in facilitating cooperative and collaborative learning in the classroom. EICT also facilitates students' learning and the development of the critical thinking skills needed to solve real-world problems. Projects and activities based on constructivism encourage students to embrace complexity as well as find relevance and joy in their learning. They also enhance students' capacity for creative and responsible real-world problem solving. Classroom activities based on constructivism offer students an opportunity to develop the higher-order thinking skills of defining problems and identifying solutions. Participating in a classroom project is an activity for both acquiring experiential knowledge and applying new knowledge to practical situations; it also provides an opportunity for students to integrate new knowledge into a skill set using reflection. Classroom projects can be developed around a variety of learning objects, including social media, knowledge management, and learning communities. The construction of meaning through project-based learning is an approach that encourages interaction and problem-solving activities. Projects require active participation, collaboration, and interaction to reach the agreed-upon outcomes. Projects also serve to externalize the invisible cognitive and social processes taking place in the activity itself and in the student experience. This paper describes a classroom project designed to elicit interactions by helping students to unfreeze existing knowledge, to create new learning experiences, and then to refreeze the new knowledge. Since constructivists believe that students construct their own meaning through active engagement and participation as well as interactions with others,
knowledge management can be used to guide the exchange of both tacit and explicit knowledge in interpersonal interactions between students and guide the construction of meaning. This paper uses an action research approach to the development of a classroom project and describes the use of technology, social media and the active use of tacit knowledge in the college classroom. In this project, a closed group Facebook page becomes the virtual classroom where interaction is captured and measured using engagement analytics. In the virtual learning community, the principles of knowledge management are used to identify the process and components of the infrastructure of the learning process. The project identifies class member interests and measures student engagement in a learning community by analyzing regular posting on the Facebook page. These posts are used to foster and encourage interactions, reflect a student’s interest and serve as reaction points from which viewers of the post convert the explicit information in the post to implicit knowledge. The data was collected over an academic year and was provided, in part, by the Google analytic reports on Facebook and self-reports of posts by members. The results support the use of active tacit knowledge activities, knowledge management and social media to enhance the student learning experience and help create the knowledge that will be used by students to construct meaning.

Keywords: constructivism, knowledge management, tacit knowledge, social media

Procedia PDF Downloads 195
208 Fuzzy Time Series-Markov Chain Method for Corn and Soybean Price Forecasting in North Carolina Markets

Authors: Selin Guney, Andres Riquelme

Abstract:

One of the main purposes of optimal and efficient forecasts of agricultural commodity prices is to guide firms in economic decision making, such as planning business operations and making marketing decisions. Governments are also beneficiaries and suppliers of agricultural price forecasts. They use this information to establish proper agricultural policy; hence, the forecasts affect social welfare, and systematic errors in forecasts could lead to a misallocation of scarce resources. Various empirical approaches with different methodologies have been applied to forecast commodity prices. The most commonly used approaches depend on classical time series models that assume the values of the response variables are precise, which is quite often not true in reality. Recently, this literature has largely evolved towards fuzzy time series models, which provide more flexibility with respect to the assumptions of classical time series models, such as stationarity and large sample size requirements. Moreover, the fuzzy modeling approach allows decision making with estimated values under incomplete information or uncertainty. A number of fuzzy time series models have been developed and implemented over the last decades; however, most of them are not appropriate for forecasting repeated and nonconsecutive transitions in the data. The modeling scheme used in this paper eliminates this problem by introducing a Markov modeling approach that takes both repeated and nonconsecutive transitions into account. The determination of interval length is also crucial for forecast accuracy. The problem of arbitrarily determined interval lengths is overcome by proposing a methodology that determines the proper interval length, based on the distribution or mean of the first differences of the series, to improve forecast accuracy.
The specific purpose of this paper is to propose and investigate the potential of a new forecasting model that integrates a methodology for determining the proper interval length, based on the distribution or mean of the first differences of the series, with the Fuzzy Time Series-Markov Chain model. Moreover, the forecasting accuracy of the proposed integrated model is compared to that of different univariate time series models, and the superiority of the proposed method over competing methods, in terms of both modelling and forecasting, is demonstrated on the basis of forecast evaluation criteria. The application is to daily corn and soybean prices observed at three commercially important North Carolina markets: Candor, Cofield, and Roaring River for corn, and Fayetteville, Cofield, and Greenville City for soybeans, respectively. One main conclusion of this paper is that using fuzzy logic improves forecast performance and accuracy; the effectiveness and potential benefits of the proposed model are confirmed by small values of selection criteria such as MAPE. The paper concludes with a discussion of the implications of integrating fuzzy logic and non-arbitrary determination of interval length for the reliability and accuracy of price forecasts. The empirical results represent a significant contribution to our understanding of the applicability of fuzzy modeling to commodity price forecasts.
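The non-arbitrary interval-length step might look like the following sketch, which derives the interval width from the mean absolute first difference of the price series before partitioning the universe of discourse. This follows the general spirit of average-based length selection; the authors' exact rule, the margin, and all names here are assumptions.

```python
def average_based_interval_length(series):
    """Interval length from the mean absolute first difference of the series."""
    diffs = [abs(b - a) for a, b in zip(series, series[1:])]
    return (sum(diffs) / len(diffs)) / 2.0  # half the mean absolute change

def partition_universe(series, margin=1.0):
    """Split [min - margin, max + margin] into intervals of the derived length."""
    length = average_based_interval_length(series)
    low, high = min(series) - margin, max(series) + margin
    bounds, x = [], low
    while x < high:
        bounds.append((x, min(x + length, high)))
        x += length
    return bounds

prices = [10.0, 12.0, 11.0, 15.0]  # toy series standing in for daily prices
```

Tying the interval width to the typical day-to-day price change means volatile series get coarser fuzzy sets and stable series get finer ones, which is the accuracy motivation given in the abstract.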

Keywords: commodity, forecast, fuzzy, Markov

Procedia PDF Downloads 198
207 Prospects of Low Immune Response Transplants Based on Acellular Organ Scaffolds

Authors: Inna Kornienko, Svetlana Guryeva, Anatoly Shekhter, Elena Petersen

Abstract:

Transplantation is an effective treatment option for patients suffering from different end-stage diseases. However, it is plagued by a constant shortage of donor organs and the subsequent need for lifelong immunosuppressive therapy. Currently, some researchers look towards using pig organs to replace human organs for transplantation, since the matrix derived from porcine organs is a convenient substitute for the human matrix. As an initial step towards a new ex vivo tissue-engineered model, optimized protocols have been created to obtain organ-specific acellular matrices, and their potential as tissue-engineered scaffolds for the culture of normal cells and tumor cell lines has been evaluated. These protocols include decellularization by perfusion in a bioreactor system and by immersion-agitation on an orbital shaker, with the use of various detergents (SDS, Triton X-100) and freezing. Complete decellularization – in terms of residual DNA amount – is an important predictor of the probability of immune rejection of materials of natural origin. However, signs of cellular material may still remain within the matrix even after harsh decellularization protocols. In this regard, matrices obtained from tissues of low-immunogenic pigs with the α3-galactosyltransferase gene knocked out (GalT-KO) may be a promising alternative to native animal sources. The research included a study of the effect of frozen and fresh fragments of GalT-KO skin on the healing of full-thickness plane wounds in 80 rats. Commercially available wound dressings (Ksenoderm, Hyamatrix, and Alloderm) as well as allogenic skin were used as positive controls, and untreated wounds were analyzed as a negative control. The results were evaluated on the 4th day after grafting, which corresponds to the start of normal wound epithelization. It has been shown that the non-specific immune response in models treated with GalT-KO pig skin was milder than in all the control groups.
Research has also been performed to measure technical skin characteristics: stiffness and elasticity properties, corneometry, tewametry, and cutometry. These metrics enabled the evaluation of hydration level, corneous-layer flaking, skin elasticity, and micro- and macro-landscape. These preliminary data may contribute to the development of personalized transplantable organs from GalT-KO pigs with a significantly limited potential for immune rejection. By applying growth factors to a decellularized skin sample, it is possible to achieve various regenerative effects tailored to the particular situation; in this research, BMP-2 and heparin-binding EGF-like growth factor were used. Ideally, a bioengineered organ must be biocompatible, non-immunogenic, and support cell growth. Porcine organs are attractive for xenotransplantation if severe immunologic concerns can be bypassed. The results indicate that genetically modified pig tissues with the α3-galactosyltransferase gene knocked out may be used for the production of a low-immunogenic matrix suitable for transplantation.

Keywords: decellularization, low-immunogenic, matrix, scaffolds, transplants

Procedia PDF Downloads 257
206 Temperature Distribution inside Hybrid Photovoltaic-Thermoelectric Generator Systems and Their Dependency on Exposure Angles

Authors: Slawomir Wnuk

Abstract:

Due to the widespread implementation of renewable energy development programs, solar energy use is increasing constantly across the world. According to REN21, in 2020 the combined installed capacity of on-grid and off-grid solar photovoltaic systems reached 760 GWDC, an increase of 139 GWDC over the previous year's capacity. However, the photovoltaic solar cells used for primary conversion of solar energy into electrical energy exhibit significant drawbacks. The fundamental downside is unstable and low conversion efficiency, which is negatively affected by a range of factors. To neutralise or minimise the impact of the factors causing energy losses, researchers have come up with varied ideas. One promising technological solution is the PV-MTEG multilayer hybrid system, which combines the advantages of both photovoltaic cells and thermoelectric generators. A series of experiments was performed in the Glasgow Caledonian University laboratory to investigate such a system in operation. In the experiments, a Sol3A-series solar simulator was employed as a stable solar irradiation source, and multichannel voltage and temperature data loggers were utilised for measurements. A two-layer hybrid system simulation model was built and tested for its energy conversion capability under a variety of exposure angles to the solar irradiation, with a concurrent examination of the temperature distribution inside the proposed PV-MTEG structure. The same series of laboratory tests was carried out for a range of loads, with the generated temperature and voltage being measured and recorded for each combination of exposure angle and load. It was found that increasing the exposure angle of the PV-MTEG structure to the irradiation source decreases the temperature gradient ΔT between the system layers and reduces overall system heating. The reduction of the temperature gradient negatively influences the voltage generation process.
The experiments showed that, for exposure angles in the range from 0° to 45°, the ‘generated voltage – exposure angle’ dependence closely follows a linear characteristic. It was also found that the voltage generated by MTEG structures working with the determined optimal load drops by approximately 0.82% per 1° increase of the exposure angle. This voltage drop also occurs at higher applied loads, becoming steeper as the load increases above the optimal value, although the difference is not significant. Despite the linear character of the MTEG voltage-angle dependence, the temperature reduction between the system's structural layers and at the tested points on its surface was not linear. In conclusion, the exposure angle of the PV-MTEG appears to be an important parameter affecting the efficiency of energy generation by the thermoelectric generators incorporated inside such hybrid structures. The research revealed great potential in the proposed hybrid system. The experiments indicated interesting behaviour of the tested structures, and the results appear to provide a valuable contribution to the development and technological design process for large energy conversion systems utilising similar structural solutions.
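The reported linear dependence lends itself to a simple predictive model: output voltage falls by roughly 0.82% of its normal-incidence value per degree of exposure angle, within the 0° to 45° range studied. The function name and reference voltage below are illustrative, not measured values from the paper.

```python
def mteg_voltage(angle_deg, v0, drop_per_deg=0.0082):
    """Estimated MTEG output voltage at a given exposure angle.

    Linear model from the reported ~0.82%-per-degree drop; only the
    0-45 degree range was validated experimentally."""
    if not 0.0 <= angle_deg <= 45.0:
        raise ValueError("linear model only covers 0-45 degrees")
    return v0 * (1.0 - drop_per_deg * angle_deg)
```

At 45° the model predicts about 63% of the normal-incidence voltage, which is simply the 0.82%-per-degree slope carried to the end of the validated range.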

Keywords: photovoltaic solar systems, hybrid systems, thermo-electrical generators, renewable energy

Procedia PDF Downloads 63
205 International Coffee Trade in Solidarity with the Zapatista Rebellion: Anthropological Perspectives on Commercial Ethics within Political Antagonistic Movements

Authors: Miria Gambardella

Abstract:

The influence of solidarity demonstrations towards the Zapatista National Liberation Army has been constantly present over the years, both locally and internationally, guaranteeing visibility to the cause, shaping the movement’s choices, and influencing its hopes of impact worldwide. Most of the coffee produced by the autonomous cooperatives from Chiapas is exported, therefore making the coffee trade the main source of income from international solidarity networks. The question arises about the implications of the relations established between the communities in resistance in Southeastern Mexico and international solidarity movements, specifically on the strategies adopted to reconcile the army's demands for autonomy with the economic asymmetries between Zapatista cooperatives producing coffee and European collectives who hold purchasing power. In order to deepen the inquiry on those topics, a year-long multi-site investigation was carried out. The first six months of fieldwork were based in Barcelona, where Zapatista coffee was first traded in Spain and where one of the historical and most important European solidarity groups can be found. The last six months of fieldwork were carried out directly in Chiapas, in contact with coffee producers, Zapatista political authorities, international activists as well as vendors, and the rest of the network implicated in coffee production, roasting, and sale. The investigation was based on qualitative research methods, including participatory observation, focus groups, and semi-structured interviews. The analysis did not only focus on retracing the steps of the market chain as if it could be considered a linear and unilateral process, but rather aimed at exploring actors’ reciprocal perceptions, roles, and dynamics of power. Demonstrations of solidarity and the money circulation they imply aim at changing the system in place and building alternatives, among other things, on the economic level.
This work analyzes the formulation of discourse and the organization of solidarity activities that aim at building opportunities for action within a highly politicized economic sphere to which access must be regularly legitimized. The meaning conveyed by coffee is constructed on a symbolic level by the attribution of moral criteria to transactions. The latter participate in the construction of imaginaries that circulate through solidarity movements with the Zapatista rebellion. Commercial exchanges linked to solidarity networks turned out to represent much more than monetary transactions. The social, cultural, and political spheres are invested by ethics, which penetrates all aspects of militant action. It is at this level that the boundaries of different collective actors connect, contaminating each other: merely following the money flow would have been limiting in order to account for a reality within which imaginary is one of the main currencies. The notions of “trust”, “dignity” and “reciprocity” are repeatedly mobilized to negotiate discontinuous and multidirectional flows in the attempt to balance and justify commercial relations in a politicized context that characterizes its own identity through demonizing “market economy” and its dehumanizing powers.

Keywords: coffee trade, economic anthropology, international cooperation, Zapatista National Liberation Army

Procedia PDF Downloads 58
204 Speech and Swallowing Function after Tonsillo-Lingual Sulcus Resection with PMMC Flap Reconstruction: A Case Study

Authors: K. Rhea Devaiah, B. S. Premalatha

Abstract:

Background: The tonsillo-lingual sulcus is the area between the tonsils and the base of the tongue. Surgical resection of lesions in the head and neck results in changes in speech and swallowing functions. The severity of the speech and swallowing problem depends upon the site and extent of the lesion, the type and extent of surgery, and also the flexibility of the remaining structures. Need of the study: This paper focuses on the importance of speech and swallowing rehabilitation in an individual with a lesion in the tonsillo-lingual sulcus and on post-operative functions. Aim: Evaluating the speech and swallowing functions after intensive speech and swallowing rehabilitation. The objectives are to evaluate the speech intelligibility and swallowing functions after intensive therapy and to assess the quality of life. Method: The present study describes the case of a 47-year-old male diagnosed with basaloid squamous cell carcinoma of the left tonsillo-lingual sulcus (pT2N2M0) who underwent wide local excision with left radical neck dissection and PMMC flap reconstruction. Post-surgery, the patient presented with complaints of reduced speech intelligibility and difficulty in opening the mouth and swallowing. A detailed evaluation of the speech and swallowing functions was carried out, including OPME, an articulation test, speech intelligibility, the different phases of swallowing, and trismus evaluation. Self-reported questionnaires such as SHI-E (Speech Handicap Index - Indian English), DHI (Dysphagia Handicap Index) and SESEQ-K (Self-Evaluation of Swallowing Efficiency in Kannada) were also administered to learn how the patient perceived his problem. Based on the evaluation, the patient was diagnosed with pharyngeal phase dysphagia associated with trismus and reduced speech intelligibility. Intensive speech and swallowing therapy was advised twice weekly, with sessions of 1 hour.
Results: In total, the patient attended 10 intensive speech and swallowing therapy sessions. Results indicated misarticulation of speech sounds, such as linguo-palatal sounds. Mouth opening was restricted to one finger width, with difficulty chewing, masticating, and swallowing the bolus. Intervention strategies included oro-motor exercises, indirect swallowing therapy, use of a trismus device to facilitate mouth opening, and a change in food consistency to aid swallowing. Practice sessions were held with articulation drills to improve the production of speech sounds and speech intelligibility. Significant changes in articulatory production, speech intelligibility and swallowing abilities were observed. The self-rated quality of life measures, such as DHI, SHI-E and SESEQ-K, revealed no speech handicap and near-normal swallowing ability, indicating improved QOL after the intensive speech and swallowing therapy. Conclusion: Speech and swallowing therapy after carcinoma of the tonsillo-lingual sulcus is crucial, as the tongue plays an important role in both speech and swallowing. The role of speech-language and swallowing therapists in oral cancer should be highlighted in treating these patients and improving their overall quality of life. With intensive speech-language and swallowing therapy post-surgery for oral cancer, there can be a significant change in speech outcomes and swallowing functions, depending on the site and extent of the lesion, which will thereby improve the individual’s QOL.

Keywords: oral cancer, speech and swallowing therapy, speech intelligibility, trismus, quality of life

Procedia PDF Downloads 84
203 Promotion of Healthy Food Choices in School Children through Nutrition Education

Authors: Vinti Davar

Abstract:

Introduction: Childhood overweight increases the risk for certain medical and psychological conditions. Millions of school-age children worldwide are affected by serious yet easily treatable and preventable illnesses that inhibit their ability to learn. Healthier children stay in school longer, attend more regularly, learn more and become healthier and more productive adults. Schools are an important setting for nutrition education because one can reach most children, teachers and parents. These years offer a key window for shaping lifetime habits, which have an impact on health throughout life. Against this background, an attempt was made to impart nutrition education to school children in Haryana state of India to promote healthy food choices and to assess the effectiveness of this program. Methodology: This study was completed in two phases. During the first phase, a pre-intervention anthropometric and dietary survey was conducted, the teaching materials for the nutrition intervention program were developed and tested, and the questionnaire was validated. In the second phase, an intervention was implemented in two schools of Kurukshetra, Haryana for six months through personal visits once a week. A total of 350 children in the age group of 6-12 years were selected. Of these, 279 children, 153 boys and 126 girls, completed the study. The subjects were divided into four groups, namely underweight, normal, overweight and obese, based on body mass index-for-age categories. A colorful PowerPoint presentation was used to improve the quality of tiffin, snacks and meals, emphasizing the inclusion of all food groups, especially vegetables every day and fruits at least 3-4 days per week. An extra 20 minutes of daily aerobic exercise was likewise organized and a healthy school environment created. Provision of clean drinking water by the school authorities was ensured.
Selling of soft drinks and energy-dense snacks in the school canteen as well as advertisements about soft drink and snacks on the school walls were banned. Post intervention, anthropometric indices and food selections were reassessed. Results: The results of this study reiterate the critical role of nutrition education and promotion in improving the healthier food choices by school children. It was observed that normal, overweight and obese children participating in nutrition education intervention program significantly (p≤0.05) increased their daily seasonal fruit and vegetable consumption. Fat and oil consumption was significantly reduced by overweight and obese subjects. Fast food intake was controlled by obese children. The nutrition knowledge of school children significantly improved (p≤0.05) from pre to post intervention. A highly significant increase (p≤0.00) was noted in the nutrition attitude score after intervention in all four groups. Conclusion: This study has shown that a well-planned nutrition education program could improve nutrition knowledge and promote positive changes in healthy food choices. A nutrition program inculcates wholesome eating and active life style habits in children and adolescents that could not only prevent them from chronic diseases and early death but also reduce healthcare cost and enhance the quality of life of citizens and thereby nations.

Keywords: children, eating habits, healthy food, obesity, school-going, fast foods

Procedia PDF Downloads 184
202 Quantum Dots Incorporated in Biomembrane Models for Cancer Marker

Authors: Thiago E. Goto, Carla C. Lopes, Helena B. Nader, Anielle C. A. Silva, Noelio O. Dantas, José R. Siqueira Jr., Luciano Caseli

Abstract:

Quantum dots (QD) are semiconductor nanocrystals that can be employed in biological research as a tool for fluorescence imaging, having the potential to expand in vivo and in vitro analysis as cancerous cell biomarkers. In particular, cadmium selenide (CdSe) magic-sized quantum dots (MSQDs) exhibit stable luminescence that is feasible for biological applications, especially for imaging of tumor cells. For these reasons, it is interesting to know the mechanisms by which such QDs mark biological cells. For that, simplified models are a suitable strategy. Among these models, Langmuir films of lipids formed at the air-water interface seem adequate since they can mimic half a membrane. They are monomolecular films formed at liquid-gas interfaces that form spontaneously when organic solutions of amphiphilic compounds are spread on the liquid-gas interface. After solvent evaporation, the monomolecular film is formed, and a variety of techniques, including tensiometric, spectroscopic and optical ones, can be applied. When the monolayer is formed by membrane lipids at the air-water interface, a model for half a membrane can be inferred, where the aqueous subphase serves as a model for the external or internal compartment of the cell. These films can be transferred to solid supports, forming the so-called Langmuir-Blodgett (LB) films, and a wider variety of techniques can additionally be used to characterize the film, allowing for the formation of devices and sensors. With these ideas in mind, the objective of this work was to investigate the specific interactions of CdSe MSQDs with tumorigenic and non-tumorigenic cells, using Langmuir monolayers and LB films of lipids and specific cell extracts as membrane models for the diagnosis of cancerous cells.
Surface pressure-area isotherms and polarization modulation infrared reflection-absorption spectroscopy (PM-IRRAS) showed an intrinsic interaction between the quantum dots, inserted in the aqueous subphase, and the Langmuir monolayers, constructed either of selected lipids or of extracts of non-tumorigenic and tumorigenic cells. The quantum dots expanded the monolayers and changed the PM-IRRAS spectra for the lipid monolayers. The mixed films were then compressed to high surface pressures and transferred from the floating monolayer to solid supports by using the LB technique. Images of the films were then obtained with atomic force microscopy (AFM) and confocal microscopy, which provided information about the morphology of the films. Similarities and differences between films of different composition representing cell membranes, with or without CdSe MSQDs, were analyzed. The results indicated that the interaction of the quantum dots with the bioinspired films is modulated by the lipid composition. The properties of the normal cell monolayer were not significantly altered, whereas the films of the tumorigenic cell monolayer models presented significant alteration. The images therefore exhibited a stronger effect of CdSe MSQDs on the models representing cancerous cells. As an important implication of these findings, one may envisage new bioinspired surfaces based on molecular recognition for biomedical applications.

Keywords: biomembrane, langmuir monolayers, quantum dots, surfaces

Procedia PDF Downloads 172
201 The Shared Breath Project: Inhabiting Each Other’s Words and Being

Authors: Beverly Redman

Abstract:

With the Theatre Season of 2020-2021 cancelled due to COVID-19 at Purdue University, Fort Wayne, IN, USA, faculty directors found themselves scrambling to create theatre production opportunities for their students in the Department of Theatre. Redman, Chair of the Department, found her community to be suffering from anxieties brought on by a confluence of issues: the global-scale COVID-19 pandemic, the Black Lives Matter protests erupting in cities all across the United States, and the coming presidential election, arguably the most important and most contentious in the country’s history. Redman wanted to give her students the opportunity to speak not only on these issues but also to record who they were at this time in their personal lives, as well as in this broad socio-political context. She also wanted to invite them into an experience of feeling empathy, at a time when empathy in this world seems to be sorely lacking. Returning to a mode of devising theatre she had used with community groups in the past, in which storytelling and re-enactment of participants’ life events combined with oral history documentation practices, Redman planned The Shared Breath Project. The process involved three months of workshops, in which participants alternated between theatre exercises and oral history collection and documentation activities as a way of generating original material for a theatre production. The goal of the first half of the project was for each participant to produce a solo piece in the form of a monologue after many generations of potential material born out of games, improvisations, interviews and the like. Along the way, many film and audio clips recorded the process of each person’s written documentation, prepared by the subject him or herself but also by others in the group assigned to listen, watch and record.
Then, in the second half of the project, once each participant had taken their own contribution from raw improvisatory self-presentation through the stages of composition and performative polish, participants exchanged their pieces. The second half of the project involved taking on each other’s words, mannerisms, gestures, and melodic and rhythmic speech patterns and inhabiting them through the rehearsal process as their own; thus the title, The Shared Breath Project. Here, in stage two, the acting challenges evolved to be those of capturing the other and becoming the other through accurate mimicry that embraces Denis Diderot’s concept of the Paradox of Acting, in that the actor is both seeming and being simultaneously. This paper shares the carefully documented process of making the live-streamed theatre production that resulted from these workshops, writing processes and rehearsals, forming The Shared Breath Project, which ultimately took the students’ Realist, life-based pieces and edited them into a single unified theatre production. The paper also utilizes research on the Paradox of Acting, putting a Post-Structuralist spin on Diderot’s theory. Here, the paper suggests the limitations of inhabiting the other by allowing that the other is always already impenetrable but nevertheless worthy of unceasing empathetic striving and delving, in an epoch in which slow, careful attention to our fellows is in short supply.

Keywords: otherness, paradox of acting, oral history theatre, devised theatre, political theatre, community-based theatre, peoples’ theatre

Procedia PDF Downloads 161
200 Role of Functional Divergence in Specific Inhibitor Design: Using γ-Glutamyltranspeptidase (GGT) as a Model Protein

Authors: Ved Vrat Verma, Rani Gupta, Manisha Goel

Abstract:

γ-glutamyltranspeptidase (GGT: EC 2.3.2.2) is an N-terminal nucleophile hydrolase conserved in all three domains of life. GGT plays a key role in glutathione metabolism, where it catalyzes the breakage of γ-glutamyl bonds and the transfer of the γ-glutamyl group to water (hydrolytic activity) or to amino acids or short peptides (transpeptidase activity). GGTs from bacteria, archaea, and eukaryotes (human, rat and mouse) are homologous proteins sharing >50% sequence similarity and a conserved four-layered αββα sandwich-like three-dimensional structural fold. These proteins, though similar in structure to each other, are quite diverse in their enzyme activity: some GGTs are better at hydrolysis reactions but poor in transpeptidase activity, whereas many others show the opposite behaviour. GGT is known to be involved in various diseases such as asthma, Parkinson's disease, arthritis, and gastric cancer. Its inhibition prior to chemotherapy treatments has been shown to sensitize tumours to the treatment. Microbial GGT is known to be a virulence factor too, important for the colonization of bacteria in the host. However, all known inhibitors (mimics of its native substrate, glutamate) are highly toxic because they interfere with other enzyme pathways, although a few successful efforts in designing species-specific inhibitors have been reported previously. We aim to leverage the diversity seen in the GGT family (pathogen vs. eukaryote) for designing specific inhibitors. Thus, in the present study, we have used the DIVERGE software to identify sites in GGT proteins that are crucial for the functional and structural divergence of these proteins. Since type II divergence sites vary in a clade-specific manner, these sites were our focus of interest throughout the study. Type II divergent sites were identified for the pathogen vs. eukaryote clusters, and the sites were marked on the clade-specific representative structures HpGGT (2QM6) and HmGGT (4ZCG) of the pathogen and eukaryote clades, respectively.
The crucial divergent sites within a 15 Å radius of the binding cavity were highlighted, and in-silico mutations were performed on these sites to delineate their role in the mechanism of catalysis and protein folding. Further, amino acid network (AAN) analysis was performed with Cytoscape to delineate assortative mixing for the cavity divergent sites, which could strengthen our hypothesis. Additionally, molecular dynamics simulations were performed for the wild-type and mutant complexes close to physiological conditions (pH 7.0, 0.1 M ionic strength and 1 atm pressure), and the role of the putative divergence sites and the structural integrity of the homologous proteins were analysed. The dynamics data were scrutinized in terms of RMSD, RMSF, non-native H-bonds and salt bridges. The RMSD and RMSF fluctuations of the protein complexes were compared, and the changes at the protein-ligand binding sites were highlighted. The outcomes of our study highlighted some crucial divergent sites that could be used for designing novel inhibitors in a species-specific manner. Since it is challenging to design a novel drug by targeting a protein similar to one that exists in eukaryotes, this study could set up an initial platform to overcome this challenge and help deduce more effective targets for novel drug discovery.
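The RMSD metric used above to scrutinize the trajectories can be illustrated with a minimal sketch; the coordinates below are toy values, and no superposition or fitting step is performed, unlike in a full MD analysis pipeline:

```python
import math

# Hedged sketch: root-mean-square deviation (RMSD) between two frames
# of a trajectory, assuming the structures are already superposed
# (no rotational/translational fitting is done here).

def rmsd(ref: list, frame: list) -> float:
    """RMSD between two equally sized lists of (x, y, z) coordinates."""
    if len(ref) != len(frame):
        raise ValueError("frames must contain the same number of atoms")
    sq = sum((a - b) ** 2
             for p, q in zip(ref, frame)
             for a, b in zip(p, q))
    return math.sqrt(sq / len(ref))

# Toy example: every atom displaced by 1 unit along x gives RMSD = 1.
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
frame = [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(rmsd(ref, frame))
```

In practice such per-frame values are computed against the starting structure over the whole trajectory, which is what allows wild-type and mutant complexes to be compared.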

Keywords: γ-glutamyltranspeptidase, divergence, species-specific, drug design

Procedia PDF Downloads 244
199 Capturing Healthcare Expert’s Knowledge Digitally: A Scoping Review of Current Approaches

Authors: Sinead Impey, Gaye Stephens, Declan O’Sullivan

Abstract:

Mitigating organisational knowledge loss presents challenges for knowledge managers. Expert knowledge is embodied in people and captured in ‘routines, processes, practices and norms’ as well as in the paper system. These knowledge stores have limitations insofar as they make knowledge diffusion beyond geography or over time difficult. However, technology could present a potential solution by facilitating the capture and management of expert knowledge in a codified and sharable format. Before it can be digitised, however, the knowledge of healthcare experts must be captured. Methods: As a first step in a larger project on this topic, a scoping review was conducted to identify how expert healthcare knowledge is captured digitally. The aim of the review was to identify current healthcare knowledge capture practices, identify gaps in the literature, and justify future research. The review followed a scoping review framework. From an initial 3,430 papers retrieved, 22 were deemed relevant and included in the review. Findings: Two broad approaches, direct and indirect, with themes and subthemes emerged. ‘Direct’ describes a process whereby knowledge is taken directly from subject experts. The themes identified were: ‘Researcher mediated capture’ and ‘Digital mediated capture’. The latter was further distilled into two sub-themes: ‘Captured in specified purpose platforms (SPP)’ and ‘Captured in a virtual community of practice (vCoP)’. ‘Indirect’ processes rely on extracting new knowledge from previously captured data using artificial intelligence techniques. Using this approach, the theme ‘Generated using artificial intelligence methods’ was identified. Although the themes are presented as distinct, some of the papers retrieved discuss combining more than one approach to capture knowledge. While no approach emerged as superior, two points arose from the literature. Firstly, human input was evident across themes, even with indirect approaches.
Secondly, a range of challenges common among the approaches was highlighted. These were (i) ‘Capturing an expert’s knowledge’: difficulties surrounded identifying the ‘expert’, as distinct from, say, the very experienced, and capturing their tacit or difficult-to-articulate knowledge. (ii) ‘Confirming quality of knowledge’: once captured, the challenges noted surrounded how to validate the knowledge captured and, therefore, its quality. (iii) ‘Continual knowledge capture’: once knowledge is captured, validated, and used in a system, the process is still not complete. Healthcare is a knowledge-rich environment with new evidence emerging frequently. As such, knowledge needs to be reviewed, updated, or removed (redundancy) as appropriate. Although some methods were proposed to address this, such as plausible reasoning or case-based reasoning, conclusions could not be drawn from the papers retrieved. It was, therefore, highlighted as an area for future research. Conclusion: The results described two broad approaches, direct and indirect. Three themes were identified: ‘Researcher mediated capture (Direct)’; ‘Digital mediated capture (Direct)’ and ‘Generated using artificial intelligence methods (Indirect)’. While no single approach was deemed superior, the common challenges noted among approaches were: ‘capturing an expert’s knowledge’, ‘confirming quality of knowledge’, and ‘continual knowledge capture’. However, continual knowledge capture was not fully explored in the papers retrieved and was highlighted as an important area for future research. Acknowledgments: This research is partially funded by the ADAPT Centre under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.

Keywords: expert knowledge, healthcare, knowledge capture and knowledge management

Procedia PDF Downloads 116
198 Temporal and Spacial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods

Authors: Dario Milani, Guido Morgenthal

Abstract:

Fluid dynamic computation of wind-induced forces on bluff bodies, e.g. light flexible civil structures or airplane wings approaching the ground at high incidence, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the use of small-scale devices such as guide vanes in bridge design to control these effects. The focus of this paper is on the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One of the solution methods for CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected by the free stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion, compact discretization since the vorticity is strongly localized, implicit accounting for the free-space boundary conditions typical of this class of FSI problems, and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing the computational cost against the achievable accuracy. In the classical VPM, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails or fairings.
For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization might become prohibitively expensive even for moderate numbers of particles. It is possible to reduce this cost either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution without substantially increasing the global computational cost by computing a correction of the particle-particle interaction in some regions of interest. In this paper, different strategies are presented to extend the conventional VPM in order to reduce the computational cost whilst resolving the required details of the flow. The methods include temporal substepping, to increase the accuracy of the particle convection in certain regions, as well as dynamically re-discretizing the particle map to control both the global and the local number of particles. Finally, these methods are applied to a test case, and the improvements in efficiency as well as accuracy of the proposed extensions are presented, along with their relevant applications.
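The O(Np²) cost of the conventional particle-particle interaction that motivates these adaptations can be sketched as a direct-summation velocity evaluation over 2D vortex particles; the regularization used below is a generic simplification for illustration, not the authors' kernel:

```python
import math

# Hedged sketch: direct O(N^2) evaluation of the velocity induced by
# N regularized 2D point vortices (Biot-Savart summation). The double
# loop is the quadratic cost that adaptive VPM strategies aim to reduce;
# the core radius eps is an illustrative smoothing, not the paper's kernel.

def induced_velocities(pts, gammas, eps=1e-3):
    """Return the (u, v) velocity at each particle due to all others."""
    n = len(pts)
    vel = []
    for i in range(n):
        u = v = 0.0
        xi, yi = pts[i]
        for j in range(n):
            if i == j:
                continue
            dx, dy = xi - pts[j][0], yi - pts[j][1]
            r2 = dx * dx + dy * dy + eps * eps  # regularized core
            coef = gammas[j] / (2.0 * math.pi * r2)
            u += -coef * dy  # induced velocity is tangential
            v += coef * dx
        vel.append((u, v))
    return vel

# Two counter-rotating particles induce equal velocities on each other,
# so the pair translates together as a dipole.
vel = induced_velocities([(0.0, 0.0), (1.0, 0.0)], [1.0, -1.0])
```

Every particle interacts with every other, hence Np particles require on the order of Np² kernel evaluations per step; substepping and local re-discretization concentrate this effort where the flow detail demands it.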

Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method

Procedia PDF Downloads 240
197 Baseline Data for Insecticide Resistance Monitoring in Tobacco Caterpillar, Spodoptera litura (Fabricius) (Lepidoptera: Noctuidae) on Cole Crops

Authors: Prabhjot Kaur, B.K. Kang, Balwinder Singh

Abstract:

The tobacco caterpillar, Spodoptera litura (Fabricius) (Lepidoptera: Noctuidae), is an agriculturally important pest species. S. litura has a wide host range of approximately 150 recorded plant species worldwide. In Punjab, this pest attains sporadic status primarily on cauliflower, Brassica oleracea (L.). It destroys vegetable crops and particularly prefers the family Cruciferae. However, it is also observed feeding on other crops such as arbi, Colocasia esculenta (L.), mung bean, Vigna radiata (L.), sunflower, Helianthus annuus (L.), cotton, Gossypium hirsutum (L.), castor, Ricinus communis (L.), etc. Larvae of this pest completely devour the leaves of the infested plant, resulting in huge crop losses ranging from 50 to 70 per cent. Indiscriminate and continuous use of insecticides has contributed to the development of insecticide resistance in insects and has caused environmental degradation as well. Moreover, baseline data regarding the toxicity of newer insecticides would help in understanding the level of resistance developed in this pest, and any possible cross-resistance could be assessed in advance. Therefore, the present studies on the development of resistance in S. litura against four new-chemistry insecticides (emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad) were carried out in the Toxicology laboratory, Department of Entomology, Punjab Agricultural University, Ludhiana, Punjab, India during 2011-12. Various stages of S. litura (eggs, larvae) were collected from four different locations (Malerkotla, Hoshiarpur, Amritsar and Samrala) in Punjab. Resistance develops in the third instars of lepidopterous pests; therefore, larval bioassays were conducted on thirty third-instar larvae of S. litura to estimate the response of field populations under laboratory conditions at 25±2°C and 65±5 per cent relative humidity.
A leaf dip bioassay technique with diluted insecticide formulations, as recommended by the Insecticide Resistance Action Committee (IRAC), was performed in the laboratory with seven to ten treatments depending on the insecticide class. LC50 values were estimated by probit analysis after correction for control mortality and were used to calculate resistance ratios (RR). The LC50 values worked out for emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad were 0.081, 0.088, 0.380 and 4.00 parts per million (ppm) against the pest population collected from Malerkotla; 0.051, 0.060, 0.250 and 3.00 ppm for Amritsar; 0.002, 0.001, 0.0076 and 0.10 ppm for Samrala; and 0.000014, 0.00001, 0.00056 and 0.003 ppm against the pest population of Hoshiarpur, respectively. The LC50 values for the populations from these four locations were in the order Malerkotla > Amritsar > Samrala > Hoshiarpur for all the insecticides tested (emamectin benzoate, chlorantraniliprole, indoxacarb and spinosad). Based on the LC50 values obtained, emamectin benzoate (0.000014 ppm) was found to be the most toxic among all the tested populations, followed by chlorantraniliprole (0.00001 ppm), indoxacarb (0.00056 ppm) and spinosad (0.003 ppm), respectively. The pairwise correlation coefficients of the LC50 values indicated a lack of cross-resistance among emamectin benzoate, chlorantraniliprole, spinosad and indoxacarb in the populations of S. litura from Punjab. These insecticides may prove to be promising substitutes for the effective control of insecticide-resistant populations of S. litura in Punjab state, India.
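The resistance ratio calculation described above can be sketched as follows, using the reported emamectin benzoate LC50 values; taking the most susceptible population (Hoshiarpur) as the reference baseline is an illustrative assumption, since the abstract does not name the reference strain:

```python
# Hedged sketch: resistance ratio RR = LC50(field population) /
# LC50(reference population), using the emamectin benzoate LC50
# values (ppm) reported for the four locations. Treating Hoshiarpur
# as the baseline is an illustrative assumption.

lc50_emamectin = {
    "Malerkotla": 0.081,
    "Amritsar": 0.051,
    "Samrala": 0.002,
    "Hoshiarpur": 0.000014,
}

def resistance_ratios(lc50: dict, reference: str) -> dict:
    """Divide every population's LC50 by the reference LC50."""
    base = lc50[reference]
    return {pop: val / base for pop, val in lc50.items()}

rr = resistance_ratios(lc50_emamectin, "Hoshiarpur")
# Ordering by ratio reproduces the reported susceptibility order
# Malerkotla > Amritsar > Samrala > Hoshiarpur.
print(sorted(rr, key=rr.get, reverse=True))
```

Under this baseline, the Malerkotla population's LC50 is roughly 5,800 times that of the Hoshiarpur population for emamectin benzoate.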

Keywords: Spodoptera litura, insecticides, toxicity, resistance

Procedia PDF Downloads 317
196 Oil and Proteins of Sardine (Sardina pilchardus) Compared with Casein or Mixture of Vegetable Oils Improves Dyslipidemia and Reduces Inflammation and Oxidative Stress in Hypercholesterolemic and Obese Rats

Authors: Khelladi Hadj Mostefa, Krouf Djamil, Taleb-Dida Nawel

Abstract:

Background: Obesity results from a prolonged imbalance between energy intake and energy expenditure, depending in part on basal metabolic rate. Marine oils and proteins have important therapeutic effects (for instance in obesity and hypercholesterolemia) as well as antioxidant effects. Sardines are a widely consumed fish in the Mediterranean region; their consumption provides humans with various nutrients such as oils (rich in omega-3 polyunsaturated fatty acids) and proteins. Methods: Sardine oil (SO) and sardine proteins (SP) were extracted and purified. A mixture of vegetable oils (olive-walnut-sunflower) was prepared from oils produced in Algeria. Eighteen Wistar rats were fed a high-fat diet enriched with 1% cholesterol for 30 days to induce obesity and hypercholesterolemia. The rats were divided into 3 groups. The first group consumed 20% sardine protein combined with 5% sardine oil (38% saturated fatty acids (SFA), 31% monounsaturated fatty acids (MUFA) and 31% polyunsaturated fatty acids (PUFA)) (SPso). The second group consumed 20% sardine protein combined with 5% of a mixture of vegetable oils (VO) containing 13% SFA, 58% MUFA and 29% PUFA (SPvo), and the third group consumed 20% casein combined with 5% of the vegetable oil mixture and served as a semi-synthetic reference (CASvo). Body weight and glycaemia were measured weekly. After 28 days of experimentation, the rats were sacrificed, and the blood and liver were removed. Serum assays of total cholesterol (TC) and triglycerides (TG) were performed by enzymatic colorimetric methods. Lipid peroxidation was evaluated by assaying thiobarbituric acid reactive species (TBARS) and hydroperoxide values. Protein oxidation was assessed by assaying carbonyl derivative values.
Finally, antioxidant defense was evaluated by measuring the activity of the antioxidant enzymes superoxide dismutase (SOD) and catalase (CAT). Results: After 28 days, the body weight (BW) of the rats increased significantly in the SPso and SPvo groups compared to the CASvo group, by +11% and +7%, respectively. Cholesterolemia (TC) increased significantly in the SPso and SPvo groups compared to the CASvo group (P<0.01), while triglyceridemia (TG) decreased significantly in the SPso group compared to the SPvo and CASvo groups (P<0.01). Albumin (a marker of inflammation) increased in the SPso group compared to the SPvo and CASvo groups, by +35% and +13%, respectively. Serum TBARS levels were 40% lower in the SPso group than in the SPvo group, and 80% and 76% lower in the SPso group than in the SPvo and CASvo groups, respectively. The levels of carbonyl derivatives in the serum and liver were significantly reduced in the SPso group compared to the SPvo and CASvo groups. Superoxide dismutase (SOD) activity decreased in the liver of the SPso group compared to the SPvo group (P<0.01), while CAT activity increased in the liver tissue of the SPso group compared to the SPvo group (P<0.01). Conclusion: Sardine oil combined with sardine protein has a hypotriglyceridemic effect, reduces body weight, attenuates inflammation, appears to protect against lipid peroxidation and protein oxidation, and increases antioxidant defense in hypercholesterolemic and obese rats. This could be in favor of a protective effect against obesity and cardiovascular diseases.

Keywords: rat, obesity, hypercholesterolemia, sardine protein, sardine oil, vegetable oils mixture, lipid peroxidation, protein oxidation, antioxidant defense

Procedia PDF Downloads 40
195 International Solar Alliance: A Case for Indian Solar Diplomacy

Authors: Swadha Singh

Abstract:

The International Solar Alliance (ISA) is the foremost treaty-based global organization concerned with tapping the solar potential of sun-abundant nations between the Tropics of Cancer and Capricorn and with enabling co-operation among them. As a founding member of the International Solar Alliance, India exhibits its positioning as an emerging leader in clean energy. India has set ambitious goals and targets to expand the share of solar in its energy mix and is playing a proactive role at both the regional and global levels. ISA aims to serve multiple goals: bringing about large-scale commercialization of solar power, boosting domestic manufacturing, and leveraging solar diplomacy in African countries, among others. Against this backdrop, this paper attempts to examine the ways in which ISA, as an intergovernmental organization under Indian leadership, can leverage the cause of clean energy (solar) diplomacy and effectively shape partnerships and collaborations with other developing countries in terms of sharing solar technology, capacity building, risk mitigation, mobilizing financial investment, and providing an aggregate market. A more specific focus of ISA is on developing countries, which, in the absence of a collective, are constrained by technology and capital scarcity despite being naturally endowed with solar resources. Solar-rich but finance-constrained economies face political risk, foreign exchange risk, and off-taker risk. Scholars argue that aligning India's climate change discourse and growth prospects in its engagements, collaborations, and partnerships at the bilateral, multilateral, and regional levels can help promote trade, attract investment, and promote a resilient energy transition both in India and in partner countries.
For developing countries, coming together in an action-oriented way on issues of climate and clean energy is particularly important, since it is developing and underdeveloped countries that face multiple, coalescing challenges such as the adverse impacts of climate change, uneven and low access to reliable energy, and pressing employment needs. Investing in green recovery is widely agreed to be an assured way to create resilient value chains, build sustainable livelihoods, and help mitigate climate threats. If India is able to 'green' its growth process, it holds the potential to emerge as a climate leader internationally. It can use its experience in the renewable sector to guide other developing countries in balancing the similar, multiple objectives of development, energy security, and sustainability. The challenges underlying solar expansion in India have lessons to offer other developing countries, giving India an opportunity to assume a leadership role in solar diplomacy and expand its geopolitical influence through intergovernmental organizations such as ISA. It is noted that India has limited capacity to directly provide financial funds and support and is not a leading manufacturer of cheap solar equipment, as China is; India can nonetheless leverage its large domestic market to scale up the commercialization of solar power and offer insights and lessons to similarly placed solar-abundant countries. The paper examines the potential of and the limits placed on India's solar diplomacy.

Keywords: climate diplomacy, energy security, solar diplomacy, renewable energy

Procedia PDF Downloads 100
194 Deep Learning for SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for monitoring Earth's surface. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both the transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are clearly more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings rotational invariance to the geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired only with systems that transmit two orthogonal polarizations. This adds complexity to data acquisition and shortens the coverage area, or swath, of fully polarimetric images compared to that of dual or hybrid polarimetric images. Solutions that augment dual polarimetric data to fully polarimetric data would therefore allow full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images from hybrid polarimetric data can be found in the literature.
Although the improvements achieved by these recently investigated reconstruction techniques are undeniable, the existing methods are mostly based on model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability in vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem, focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by combining different terms in the cost (loss) function. The proposed method is experimentally validated on real data sets and compared with a well-known, standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
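The idea of steering CNN training by combining several terms in the loss can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it assumes a loss that adds a span-consistency term (span here meaning the total backscattered power summed over polarimetric channels) to a pixel-wise MSE, and the weight `alpha` is a hypothetical hyperparameter.

```python
import numpy as np

def composite_loss(pred, target, alpha=0.1):
    """Multi-term loss for polarimetric augmentation.

    pred and target are real-valued arrays of shape
    (channels, height, width) standing in for predicted and
    reference polarimetric channel intensities. The first term
    penalizes per-channel, per-pixel errors; the second enforces
    consistency of the span (total power across channels)."""
    mse = np.mean((pred - target) ** 2)
    span_pred = pred.sum(axis=0)          # total power per pixel
    span_true = target.sum(axis=0)
    span_term = np.mean((span_pred - span_true) ** 2)
    return mse + alpha * span_term

# Toy example: a perfect reconstruction gives zero loss,
# and any deviation raises both terms.
target = np.ones((3, 4, 4))               # 3 hypothetical channels
pred = target + 0.1
loss = composite_loss(pred, target, alpha=0.5)
```

In an actual training loop, a term like `span_term` acts as a physics-motivated regularizer: the network is free to redistribute power among reconstructed channels, but the total backscattered power per pixel is pushed toward the observed one.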

Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network

Procedia PDF Downloads 45