Search results for: selection mechanisms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4806

396 Food Safety in Wine: Removal of Ochratoxin a in Contaminated White Wine Using Commercial Fining Agents

Authors: Antònio Inês, Davide Silva, Filipa Carvalho, Luís Filipe-Riberiro, Fernando M. Nunes, Luís Abrunhosa, Fernanda Cosme

Abstract:

The presence of mycotoxins in foodstuffs is a matter of concern for food safety. Mycotoxins are toxic secondary metabolites produced by certain molds, ochratoxin A (OTA) being one of the most relevant. Wines can also be contaminated with these toxicants, and several authors have demonstrated the presence of mycotoxins in wine, especially ochratoxin A. Its chemical structure is a dihydroisocoumarin linked at the 7-carboxy group to a molecule of L-β-phenylalanine via an amide bond. As these toxicants can never be completely removed from the food chain, many countries have defined maximum levels in food in order to address health concerns. OTA contamination of wines may pose a risk to consumer health, thus requiring treatments to achieve acceptable standards for human consumption. The maximum acceptable level of OTA in wine is 2.0 μg/kg according to Commission Regulation No. 1881/2006. The aim of this work was therefore to reduce OTA to safer levels using different fining agents and to assess their impact on the physicochemical characteristics of white wine. To evaluate their efficiency, 11 commercial fining agents (mineral, synthetic, animal, and vegetable proteins) were used to develop new approaches to OTA removal from white wine. Trials (including a control without addition of a fining agent) were performed in white wine artificially supplemented with OTA (10 µg/L), and OTA analyses were performed after fining. Wine was centrifuged at 4000 rpm for 10 min, and 1 mL of the supernatant was collected and mixed with an equal volume of acetonitrile/methanol/acetic acid (78:20:2 v/v/v). The solid fractions obtained after fining were also centrifuged (4000 rpm, 15 min), the resulting supernatant discarded, and the pellet extracted with 1 mL of the above solution and 1 mL of H2O. OTA analysis was performed by HPLC with fluorescence detection. 
The most effective fining agent in removing OTA (80%) from white wine was a commercial formulation containing gelatin, bentonite, and activated carbon. Removals between 10% and 30% were obtained with potassium caseinate, yeast cell walls, and pea protein. With bentonites, carboxymethylcellulose, polyvinylpolypyrrolidone, and chitosan, no considerable OTA removal was observed. Subsequently, the effectiveness of seven commercial activated carbons was evaluated and compared with the commercial formulation containing gelatin, bentonite, and activated carbon. The different activated carbons were applied at the concentrations recommended by the manufacturers in order to evaluate their efficiency in reducing OTA levels. Trials and OTA analyses were performed as described above. The results showed that in white wine all activated carbons except one removed 100% of the OTA, whereas the commercial formulation containing gelatin, bentonite, and activated carbon removed only 73% of the OTA. These results may provide useful information for winemakers, namely for the selection of the most appropriate oenological product for OTA removal, reducing wine toxicity while enhancing food safety and wine quality.
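The removal percentages quoted above follow from comparing residual OTA in the fined wine with the spiked level; as a minimal illustration (the function name is ours, and only the spiking level and percentages come from this abstract):

```python
# OTA removal efficiency from initial (spiked) and residual concentrations.
def removal_percent(c_initial, c_residual):
    """Percentage of OTA removed by a fining treatment (same units for both)."""
    return 100.0 * (c_initial - c_residual) / c_initial

spiked = 10.0    # µg/L, the spiking level used in the trials
residual = 2.0   # µg/L left after fining; coincides numerically with the
                 # 2.0 µg/kg regulatory maximum (wine density ≈ 1 kg/L)
print(removal_percent(spiked, residual))  # 80.0
```

An 80% removal from a 10 µg/L spike thus lands exactly at the regulatory ceiling, which is why the authors compare agents at this spiking level.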

Keywords: wine, ota removal, food safety, fining

Procedia PDF Downloads 538
395 Interfacial Reactions between Aromatic Polyamide Fibers and Epoxy Matrix

Authors: Khodzhaberdi Allaberdiev

Abstract:

In order to understand the interactions at the interface between polyamide fibers and epoxy matrices in fiber-reinforced composites, industrial aramid fibers (Armos, SVM, and Terlon) were investigated using individual epoxy matrix components: diglycidyl ether of bisphenol A (DGEBA); tri- and diglycidyl derivatives of m- and p-amino-, m- and p-oxy-, and o-, m-, and p-carboxybenzoic acids; and, as models, the curing agent aniline and N-di(oxyethylphenoxy)aniline, a compound representing the structure of the primary addition reaction of the amine to the epoxy resin. The chemical structure of the surface of untreated and treated polyamide fibers was analyzed using Fourier transform infrared spectroscopy (FTIR). The impregnation of fibers with epoxy matrix components and N-di(oxyethylphenoxy)aniline was carried out by heating at 150˚C for 6 h. The optimum fiber loading is 65%. The result of the thermal treatment is the formation of covalent bonds, derived from a combination of homopolymerization and crosslinking mechanisms, in the interfacial region between the epoxy resin and the fiber surface. The reactivity of epoxy resins at the interface in microcomposites (MC) also depends on the processing aids applied to the fiber surface and on absorbed moisture. The influence of these factors is evidenced by the conversion of epoxy groups in DGEBA-impregnated Terlon: 5.20% for industrial, 4.65% for dried (in vacuum), and 14.10% for purified samples. The same tendency is observed for SVM and Armos fibers. The changes in the surface composition of these MC were monitored by X-ray photoelectron spectroscopy (XPS). In the case of the purified fibers, the functional groups of the fibers act both as a catalyst and as a curing agent for the epoxy resin. It was found that the conversion of epoxy groups in reinforced formulations depends on the nature of the aromatic polyamide and decreases in the order Armos > SVM > Terlon. This difference is due to the structural characteristics of the fibers. 
The interfacial interactions between polyglycidyl esters of substituted benzoic acids and polyamide fibers in the MC were also examined. It was found that the interfacial interactions in these systems are influenced by both the structure and the isomerism of the epoxides. The IR spectra of fibers impregnated with aniline showed that the polyamide fibers do not react appreciably with aniline. FTIR results for fibers treated with N-di(oxyethylphenoxy)aniline revealed dramatic changes in the IR characteristics of the OH groups of the amino alcohol. These observations indicate hydrogen bonding and covalent interactions between the amino alcohol and the functional groups of the fibers. This result is also confirmed by the appearance of an exothermic peak on the differential scanning calorimetry (DSC) curve of the MC. Finally, a theoretical evaluation of non-covalent interactions between individual epoxy matrix components and fibers was performed using benzanilide as a model of Terlon and its derivative containing the benzimidazole moiety as a model of SVM and Armos. Quantum-topological analysis also demonstrated the existence of hydrogen bonds between the amide group of the models and the epoxy matrix components. All the results indicate that both covalent and non-covalent interactions exist at the interface between polyamide fibers and the epoxy matrix during the preparation of MC.

Keywords: epoxies, interface, modeling, polyamide fibers

Procedia PDF Downloads 266
394 Pesticides Monitoring in Surface Waters of the São Paulo State, Brazil

Authors: Fabio N. Moreno, Letícia B. Marinho, Beatriz D. Ruiz, Maria Helena R. B. Martins

Abstract:

Brazil is a top consumer of pesticides worldwide, and São Paulo is one of the highest-consuming Brazilian states. However, representative data on the occurrence of pesticides in surface waters of the São Paulo State are scarce. This paper presents the results of pesticide monitoring carried out within the Water Quality Monitoring Network of CETESB (the environmental agency of the São Paulo State) between 2018 and 2022. Surface water sampling points (21 to 25) were selected within basins of predominantly agricultural land use (5% to 85% cultivated area). Samples were collected throughout the year, including high-flow and low-flow conditions, with a sampling frequency of 4 to 6 times per year. The selection of pesticide molecules for monitoring followed a prioritization process based on EMBRAPA (Brazilian Agricultural Research Corporation) databases of pesticide use. Pesticide extractions from aqueous samples were performed according to USEPA methods 3510C and 3546, following quality assurance and quality control procedures. Determination of pesticides in water extracts (ng L⁻¹) was performed by high-performance liquid chromatography coupled with mass spectrometry (HPLC-MS) and by gas chromatography with nitrogen-phosphorus (GC-NPD) and electron capture (GC-ECD) detectors. The results showed higher detection frequencies (20-65%) in surface water samples for carbendazim (fungicide), diuron and tebuthiuron (herbicides), and fipronil and imidacloprid (insecticides). The detection frequencies for these pesticides were generally higher at monitoring points located in sugarcane-cultivated areas. The pesticides most frequently quantified above the aquatic life benchmarks for freshwater (USEPA Office of Pesticide Programs, 2023) or Brazilian federal regulatory standards (CONAMA Resolution No. 357/2005) were atrazine, imidacloprid, carbendazim, 2,4-D, fipronil, and chlorpyrifos. 
Higher median concentrations of diuron and tebuthiuron in the rainy months (October to March) indicate pesticide transport through surface runoff. However, measurable concentrations of fipronil and imidacloprid in the dry season (April to September) also indicate pathways related to subsurface or baseflow discharge after pesticide infiltration and leaching through the soil, or to dry deposition following aerial pesticide spraying. With the exception of diuron, no temporal trends in the median concentrations of the most frequently quantified pesticides were observed. These results are important to assist policymakers in developing strategies to reduce pesticide migration from agricultural areas to surface waters. Further studies will be carried out at selected points to investigate potential risks to aquatic biota resulting from pesticide exposure.

Keywords: pesticides monitoring, são paulo state, water quality, surface waters

Procedia PDF Downloads 59
393 The Power-Knowledge Relationship in the Italian Education System between the 19th and 20th Century

Authors: G. Iacoviello, A. Lazzini

Abstract:

This paper focuses on the development of the study of accounting in the Italian education system between the 19th and 20th centuries, and on the subsequent formation of a scientific and experimental forma mentis that would prepare students for administrative and managerial activities in industry, commerce, and public administration. From a political perspective, the period was characterized by two dominant movements, liberalism (1861-1922) and fascism (1922-1945), that deeply influenced accounting practices and the entire Italian education system. The materials used in the study include both primary and secondary sources. The primary sources are numerous original documents issued from 1890 to 1935 by the government and held in the Historical Archive of the State in Rome. The secondary sources have supported both the development of the theoretical framework and the definition of the historical context. This paper assigns to the educational system the role of cultural producer. Foucauldian analysis identifies the problem confronting the critical intellectual: finding a way to deploy knowledge through a 'patient labour of investigation' that highlights the contingency and fragility of the circumstances that have shaped current practices and theories. Education can be considered a powerful political process, providing students with values, ideas, and models that remain with them and that they subsequently use to discipline themselves. It is impossible for power to be exercised without knowledge, just as it is impossible for knowledge not to engender power. The power-knowledge relationship can be usefully employed to explain how power operates within society and how mechanisms of power affect everyday life. Power is exercised at all levels and through many dimensions, including government. Schools exercise 'epistemological power', a power to extract a knowledge of individuals from individuals. 
Because knowledge is a key element in the operation of power, the procedures applied to the formation and accumulation of knowledge cannot be considered neutral instruments for the presentation of the real. Consequently, the same institutions that produce and spread knowledge can be considered part of the 'power-knowledge' interrelation. Individuals have become both objects and subjects in the development of knowledge. Just as education plays a fundamental role in shaping all aspects of communities, the structural changes resulting from economic, social, and cultural development affect educational systems. Analogously, important changes related to social and economic development required legislative intervention to regulate the functioning of different areas of society. Knowledge can become a means of social control used by the government to manage populations. It can be argued that the evolution of Italy's education systems is coherent with the idea that power and knowledge do not exist independently but are coterminous. This research aims to fill a gap in the literature by analysing the role of the state in the development of accounting education in Italy.

Keywords: education system, government, knowledge, power

Procedia PDF Downloads 139
392 Internal Concept of Integrated Health by Agrarian Society in Malagasy Highlands for the Last Century

Authors: O. R. Razanakoto, L. Temple

Abstract:

Living in a least developed country, Malagasy society has a weak capacity to internalize progress, including in health concerns. From the arrival in the fifteenth century of the Arabic script called Sorabe, which was mainly reserved for the aristocracy, until the colonial era beginning at the end of the nineteenth century, which popularized the current Western script, manuscripts dealing with scientific or at least academic issues were established only slowly. As a result, the way of life of Malagasy communities is not yet well enough documented to allow a precise understanding of the major concerns, reasoning, and purposes of the farmers who compose them. A question therefore arises: according to the literature, how does Malagasy society, dominated by agrarian communities, conceive of the conservation of its wellbeing? This study aims to identify the scope and limits of the « One Health » concept, or Health Integrated Approach (HIA), which is evolving at a global scale, with regard to the specific context of local Malagasy smallholder farms. It seeks to identify how this society has represented the risks and mechanisms linking human health, animal health, plant health, and ecosystem health over the last 100 years. To do so, a framework for conducting systematic reviews in agricultural research was deployed to access the available literature. This task was coupled with the reading of articles that are not indexed by online scientific search engines but that document parts of the history of agriculture and of farmers in Madagascar. This literature review documented the interactions between human illnesses and those affecting animals and plants (farmed or wild), as well as unexpected events (ecological or economic) that modified the equilibrium of the ecosystem or disturbed the livelihoods of agrarian communities. 
In addition, drivers that may either accentuate or attenuate the devastating effects of these illnesses and changes were identified. The study established that the causes of human health worries are not only physiological. Among the factors that regulate global health, the food system and contemporary medicine have helped improve life expectancy in Madagascar from 55 to 63 years over the last 50 years. However, threats to global health still occur. New human or animal illnesses and new livestock or plant pathologies or pests may appear, while ancient illnesses thought to have disappeared may return. This study highlighted how important the risks associated with unmanaged externalities that weaken community life are. Many risks, and also solutions, come from abroad and have long-term effects even when they occur as one-off events. Thus, a constructivist strategy, built on the recording of local facts, is suggested for the global « One Health » concept. This approach should facilitate the exploration of methodological pathways and the identification of relevant indicators for research related to HIA.

Keywords: agrarian system, health integrated approach, history, madagascar, resilience, risk

Procedia PDF Downloads 110
391 Molecular Dynamics Simulation of Realistic Biochar Models with Controlled Microporosity

Authors: Audrey Ngambia, Ondrej Masek, Valentina Erastova

Abstract:

Biochar is an amorphous carbon-rich material generated from the pyrolysis of biomass, with wide-ranging properties and functionality. Biochar has proven applications in the treatment of flue gas and of organic and inorganic pollutants in soil, water, and wastewater, as a result of its multiple surface functional groups and porous structure. These properties have also shown potential in energy storage and carbon capture. The availability of diverse sources of biomass for producing biochar has increased interest in it as a sustainable and environmentally friendly material. The properties and porous structures of biochar vary depending on the type of biomass and the highest heat treatment temperature (HHT). Biochars produced at HHTs between 400°C and 800°C generally show lower H/C and O/C ratios, higher porosities, larger pore sizes, and higher surface areas with increasing temperature. While this is known experimentally, little is known about the role the porous structure and functional groups play in processes occurring at the atomistic scale, which are extremely important for the optimization of biochar for applications, especially the adsorption of gases. Atomistic simulation methods have shown the potential to generate such amorphous materials; however, most of the available models are composed only of carbon atoms or of graphitic sheets, which are very dense or contain simple slit pores, all of which ignore the important role of heteroatoms such as O, N, and S, and of pore morphology. Hence, developing realistic models that integrate these parameters is important for understanding their role in governing adsorption mechanisms, which will in turn guide the design and optimization of biochar materials for target applications. In this work, molecular dynamics simulations in the isobaric ensemble are used to generate realistic biochar models, taking into account experimentally determined H/C, O/C, and N/C ratios, aromaticity, micropore size ranges, micropore volumes, and true densities of biochars. 
A pore generation approach was developed using virtual atoms: Lennard-Jones spheres of varying van der Waals radius and softness. The interaction of a virtual atom with the biochar matrix via a soft-core potential allows the creation of pores with rough surfaces, while varying the van der Waals radius parameter gives control over the pore-size distribution. We focused on microporosity, creating average pore sizes of 0.5-2 nm in diameter and pore volumes in the range of 0.05-1 cm³/g, which corresponds to experimental gas adsorption micropore sizes of amorphous porous biochars. Realistic biochar models with surface functionalities, controlled micropore size distributions, and controlled pore morphologies were developed; they could aid in the study of adsorption processes in confined micropores.
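The virtual-atom idea can be illustrated with a soft-core Lennard-Jones potential of the Beutler type, a common soft-core form in molecular dynamics; the sketch below is illustrative only, and all parameter values are assumptions rather than those used in this work:

```python
def softcore_lj(r, sigma=1.0, eps=1.0, lam=1.0, alpha=0.5):
    """Beutler-style soft-core Lennard-Jones potential (reduced units).

    Unlike the plain LJ form, it stays finite at r = 0, so a virtual atom
    can be grown inside a dense matrix without infinite forces; sigma sets
    the effective van der Waals radius and hence the pore size. All
    parameter values here are illustrative.
    """
    s6 = alpha * (1.0 - lam) ** 2 + (r / sigma) ** 6
    return 4.0 * eps * lam * (1.0 / s6 ** 2 - 1.0 / s6)

# Finite repulsion even at the pore centre (plain LJ would diverge):
print(softcore_lj(0.0, lam=0.5))  # 112.0
# A larger van der Waals radius pushes the repulsive wall outward,
# carving a larger cavity in the surrounding matrix:
print(softcore_lj(1.0, sigma=1.5) > softcore_lj(1.0, sigma=1.0))  # True
```

In a workflow like the one described, such a sphere would be inserted into the equilibrated biochar matrix, and its radius and softness varied to tune the pore-size distribution.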

Keywords: biochar, heteroatoms, micropore size, molecular dynamics simulations, surface functional groups, virtual atoms

Procedia PDF Downloads 71
390 Nanoparticle Exposure Levels in Indoor and Outdoor Demolition Sites

Authors: Aniruddha Mitra, Abbas Rashidi, Shane Lewis, Jefferson Doehling, Alexis Pawlak, Jacob Schwartz, Imaobong Ekpo, Atin Adhikari

Abstract:

Working or living close to demolition sites can increase the risk of dust-related health problems. Demolition of concrete buildings may produce crystalline silica dust, which is associated with a broad range of respiratory diseases, including silicosis and lung cancers. Previous studies demonstrated significant associations between demolition dust exposure and an increased incidence of mesothelioma, or asbestos cancer. Dust is a generic term for minute solid particles, typically <500 µm in diameter. Dust particles in demolition sites vary over a wide range of sizes. Larger particles tend to settle out of the air, whereas smaller and lighter solid particles remain dispersed in the air for long periods and pose sustained exposure risks. Submicron ultrafine particles and nanoparticles are respirable deep into the alveoli, beyond the body's natural respiratory cleaning mechanisms such as cilia and mucous membranes, and are likely to be retained in the lower airways. To our knowledge, how various demolition tasks release nanoparticles is largely unknown, and previous studies have mostly focused on coarse dust, PM2.5, and PM10. The general belief is that the dust generated during demolition tasks consists mostly of large particles formed through crushing, grinding, or sawing of concrete and wooden structures. Therefore, little consideration has been given to the generated submicron ultrafine and nanoparticles and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated the cytotoxicity of nanoparticles to lung epithelial cells. This study addressed these knowledge gaps using a newly developed nanoparticle monitor at two adjacent indoor and outdoor building demolition sites in southern Georgia. 
Nanoparticle levels were measured (n = 10) with a TSI NanoScan SMPS Model 3910 at four distances (5, 10, 15, and 30 m) from the work location, as well as at control sites. Temperature and relative humidity levels were recorded. Indoor demolition work included acetylene torch cutting, masonry drilling, ceiling panel removal, and other miscellaneous tasks, while outdoor demolition work included acetylene torch cutting and the use of a skid-steer loader to remove an HVAC system. Concentration ranges of nanoparticles of 13 particle sizes at the indoor demolition site were: 11.5 nm: 63 – 1,054/cm³; 15.4 nm: 170 – 1,690/cm³; 20.5 nm: 321 – 730/cm³; 27.4 nm: 740 – 3,255/cm³; 36.5 nm: 1,220 – 17,828/cm³; 48.7 nm: 1,993 – 40,465/cm³; 64.9 nm: 2,848 – 58,910/cm³; 86.6 nm: 3,722 – 62,040/cm³; 115.5 nm: 3,732 – 46,786/cm³; 154 nm: 3,022 – 21,506/cm³; 205.4 nm: 12 – 15,482/cm³; 273.8 nm:

Keywords: demolition dust, industrial hygiene, aerosol, occupational exposure

Procedia PDF Downloads 423
389 Trafficking of Women and Children and Solutions to Combat It: The Case of Nigeria

Authors: Olatokunbo Yakeem

Abstract:

Human trafficking is a crime involving gross violations of human rights. Trafficking in persons is a severe socio-economic problem with both national and international dimensions. Human trafficking, or modern-day slavery, emanated from slavery and has been in existence since before the 6th century. Today, no country is exempt from the dehumanization of human beings, and as a result it has become an international issue. The United Nations (UN) presented an international protocol to fight human trafficking worldwide, which established the international definition of human trafficking. The protocol aims to prevent, suppress, and punish trafficking in persons, especially women and children, and links trafficking to transnational organised crime rather than to migration. Over a hundred and fifty countries worldwide have enacted criminal and penal code trafficking legislation based on the UN trafficking protocol. Sex trafficking is the most common form of exploitation of women and children; other forms of this crime involve exploiting vulnerable victims through forced labour, child involvement in warfare, domestic servitude, debt bondage, and organ removal for transplantation. Trafficking of women and children into sexual exploitation represents a greater share of human trafficking than any other type of exploitation. Trafficking of women and children can happen either internally or across borders, and it affects all kinds of people regardless of race, social class, culture, religion, and education level. However, it is predominantly a gender-based issue directed against females. Furthermore, human trafficking can lead to life-threatening infections, mental disorders, lifetime trauma, and even the victim's death. The significance of this study is to explore how the root causes of trafficking of women and children in Nigeria are rooted in poverty, the entrusting of children to relatives and friends, corruption, globalization, weak legislation, and ignorance. 
This study also seeks to establish how national, regional, and international organisations are using the 3Ps (Prevention, Protection, and Prosecution) to tackle human trafficking. The methodological approach for this study will be a qualitative paradigm; the rationale behind this selection is that qualitative methods can identify the phenomenon and interpret the findings comprehensively. Data collection will take the form of semi-structured in-depth interviews conducted by telephone and email, and the researcher will use descriptive thematic analysis with complete coding to analyse the data. In summary, this study aims to recommend that the Nigerian federal government include human trafficking as a subject in the educational curriculum, as an early intervention to prevent children from being coerced by criminal gangs. The research also aims to identify the root causes of the trafficking of women and children, to examine the effectiveness of the strategies in place to eradicate human trafficking globally, and to investigate how anti-trafficking bodies such as law enforcement agencies and NGOs collaborate to tackle the upsurge in human trafficking.

Keywords: children, Nigeria, trafficking, women

Procedia PDF Downloads 183
388 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector

Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini

Abstract:

Spectroscopic autoradiography is a method of interest for geological sample analysis. Indeed, researchers in environmental studies face issues such as radioelement identification and quantification. Imaging gaseous ionization detectors find their place in geosciences for specific measurements of radioactivity, both to improve the monitoring of natural processes using naturally occurring radioactive tracers and to serve the nuclear industry linked to the mining sector. In geological samples, the location and identification of radioactive-bearing minerals at the thin-section scale remain a major challenge, as the detection limit of the usual elemental microprobe techniques is far higher than the concentration of most natural radioactive decay products. The spatial distribution of each decay product, in the case of uranium in a geomaterial, is of interest for relating radionuclide concentrations to mineralogy. The present study aims to provide a spectroscopic autoradiography method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method was developed using Geant4 modelling of the detector. The tracks of alpha particles recorded in the gas detector allow the simultaneous measurement of the initial point of emission and the reconstruction of the initial particle energy, through a selection based on the linear energy distribution. This spectroscopic autoradiography method successfully reproduced the alpha spectrum of the 238U decay chain on a geological sample at the thin-section scale. The characteristics of this measurement are an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm. 
Even if the efficiency of energy spectrum reconstruction is low (4.4%) compared with that of a simple autoradiograph (50%), this novel approach offers the opportunity to select areas on an autoradiograph and perform an energy spectrum analysis within them. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226. The method will allow the study of the spatial distribution of uranium and its daughters in geomaterials, coupled with scanning electron microscope characterization. The direct application of this dual (energy-position) analysis modality will be the subject of future developments. The measurement of the radioactive equilibrium state of heterogeneous geological structures and the quantitative mapping of 226Ra radioactivity are now being actively studied.
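The quoted relative resolution translates directly into an absolute peak width; as a quick check using only the figures reported above:

```python
# Absolute FWHM of the alpha peak from the quoted relative resolution.
peak_energy_keV = 4647.0   # energy of the reference alpha line
relative_fwhm = 0.172      # 17.2 % (FWHM) as reported

absolute_fwhm_keV = relative_fwhm * peak_energy_keV
print(round(absolute_fwhm_keV))  # 799
```

A peak width of roughly 800 keV is why an energy selection based on the full reconstructed track, rather than peak fitting alone, is needed to separate neighbouring alpha lines of the 238U chain.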

Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products

Procedia PDF Downloads 151
387 Music Genre Classification Based on Non-Negative Matrix Factorization Features

Authors: Soyon Kim, Edward Kim

Abstract:

In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become increasingly important. Despite the subjectivity of and controversy over the definition of music genres across nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed. Manual genre selection by music producers provides the statistical data used to design automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured with timbre features such as mel-frequency cepstral coefficients (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term, time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. In addition to these conventional basic long-term feature vectors, NMF-based feature vectors are proposed for use in genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used; for NMF-BFV, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification. 
In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values of each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs per genre, was used for training and testing. To increase the reliability of the experiments, 10-fold cross-validation was used. For a given input song, the extracted NMF-LSM feature vector was composed of 10 weighting values corresponding to the classification probabilities of the 10 genres; an NMF-BFV feature vector likewise had a dimensionality of 10. Combined with the basic long-term features (the statistical and modulation spectrum features), the NMF features increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, whereas the basic features with NMF-LSM and with NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, while NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM, and NMF-BFV with an SVM using a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
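The two-stage scheme described above (per-genre NMF bases in training, per-genre weights as a compact feature vector in testing) can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the data are random stand-ins for the GTZAN features, the multiplicative-update NMF and the clipped least-squares projection are our choices, and the real system would feed the resulting vectors to an RBF-kernel SVM.

```python
import numpy as np

def nmf_basis(V, k=1, n_iter=200, seed=0):
    """Factor V ≈ W @ H with multiplicative updates; return the basis H (k x features)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return H

rng = np.random.default_rng(1)
n_genres, songs_per_genre, n_feat = 3, 20, 40          # the paper uses 10 genres
X = rng.random((n_genres * songs_per_genre, n_feat))   # stand-in song features
y = np.repeat(np.arange(n_genres), songs_per_genre)

# Training stage: extract one NMF basis vector per genre class.
B = np.vstack([nmf_basis(X[y == g]) for g in range(n_genres)])

# Test stage: the per-genre weights of each song form the NMF feature
# vector, whose dimensionality equals the number of genres (10 in the paper).
W, *_ = np.linalg.lstsq(B.T, X.T, rcond=None)
nmf_features = np.clip(W.T, 0.0, None)   # non-negative projection (NNLS stand-in)
print(nmf_features.shape)  # (60, 3)
```

This makes the dimensionality argument of the abstract concrete: regardless of how large the underlying spectral representation is, the derived feature vector has one entry per genre.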

Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)

Procedia PDF Downloads 303
386 Various Shaped ZnO and ZnO/Graphene Oxide Nanocomposites and Their Use in Water Splitting Reaction

Authors: Sundaram Chandrasekaran, Seung Hyun Hur

Abstract:

Exploring strategies for oxygen vacancy engineering under mild conditions and understanding the relationship between dislocations and photoelectrochemical (PEC) cell performance are challenging issues for designing high-performance PEC devices. It is therefore very important to understand how oxygen vacancies (VO) or other defect states affect the performance of the photocatalyst in photoelectric transfer. So far, it has been found that defects in nano- or microcrystals can affect PEC performance in two ways. First, an electron-hole pair produced at the interface of the photoelectrode and electrolyte can recombine at the defect centers under illumination, thereby reducing PEC performance. On the other hand, defects can lead to higher light absorption in the longer-wavelength region and may act as energy centers for the water splitting reaction, improving PEC performance. Even though dislocation growth of ZnO has been verified by full density functional theory (DFT) and local density approximation (LDA) calculations, further studies are required to correlate ZnO structures with PEC performance. Exploring hybrid structures composed of graphene oxide (GO) and ZnO nanostructures offers not only a vision of how complex structures form from simple starting materials but also the tools to improve PEC performance by understanding the underlying mechanisms of mutual interactions. As there are few studies on ZnO growth with other materials, and the growth mechanism in those cases has not been clearly explored yet, it is very important to understand the fundamental growth process of nanomaterials with the specific materials, so that rational and controllable syntheses of efficient ZnO-based hybrid materials can be designed to prepare nanostructures that exhibit significant PEC performance. 
Herein, we fabricated various ZnO nanostructures such as hollow spheres, bucky bowls, nanorods, and triangles, investigated their pH-dependent growth mechanism, and correlated their PEC performances. In particular, the origin of well-controlled dislocation-driven growth and the transformation mechanism of ZnO nanorods to triangles on the GO surface are discussed in detail. Surprisingly, the addition of GO during the synthesis process not only tunes the morphology of ZnO nanocrystals but also creates more oxygen vacancies (oxygen defects) in the ZnO lattice, which suggests that the oxygen vacancies are created by a redox reaction between GO and ZnO in which surface oxygen is extracted from the ZnO surface by the functional groups of GO. On the basis of our experimental and theoretical analysis, the detailed mechanism for the formation of specific structural shapes and oxygen vacancies via dislocations, and its impact on PEC performance, are explored. In water splitting, the maximum photocurrent density of GO-ZnO triangles was 1.517 mA/cm² (under UV light, ~360 nm) vs. RHE, with a high incident photon-to-current conversion efficiency (IPCE) of 10.41%, which is the highest among all samples fabricated in this study and also one of the highest IPCE values reported so far for a GO-ZnO triangular-shaped photocatalyst.
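The IPCE figure quoted above relates photocurrent, wavelength, and incident light power through the standard conversion IPCE(%) = 1240 · J / (λ · P) · 100, with J in mA/cm², λ in nm, and P in mW/cm². A minimal sketch of that relation follows; the incident power density used below is a hypothetical value (the abstract does not report it), so the printed IPCE is illustrative only.

```python
# Standard IPCE relation for PEC cells (the power density here is assumed,
# not taken from the abstract).
def ipce_percent(j_ma_cm2: float, wavelength_nm: float, p_mw_cm2: float) -> float:
    """IPCE(%) = 1240 * J / (lambda * P) * 100."""
    return 1240.0 * j_ma_cm2 / (wavelength_nm * p_mw_cm2) * 100.0

# J = 1.517 mA/cm^2 and lambda ~ 360 nm are from the abstract; P = 50 mW/cm^2
# is a hypothetical illumination intensity.
print(round(ipce_percent(1.517, 360.0, 50.0), 2))
```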

Keywords: dislocation driven growth, zinc oxide, graphene oxide, water splitting

Procedia PDF Downloads 294
385 Comparative Effects of Resveratrol and Energy Restriction on Liver Fat Accumulation and Hepatic Fatty Acid Oxidation

Authors: Iñaki Milton-Laskibar, Leixuri Aguirre, Maria P. Portillo

Abstract:

Introduction: Energy restriction is an effective approach for preventing liver steatosis. However, due to social and economic reasons, among others, compliance with this treatment protocol is often very poor, especially in the long term. Resveratrol, a natural polyphenolic compound of the stilbene group, has been widely reported to mimic the effects of energy restriction. Objective: To analyze the effects of resveratrol, under normoenergetic feeding conditions and under mild energy restriction, on liver fat accumulation and hepatic fatty acid oxidation. Methods: 36 male six-week-old rats were fed a high-fat high-sucrose diet for 6 weeks in order to induce steatosis. Rats were then divided into four groups and fed a standard diet for 6 additional weeks: control group (C), resveratrol group (RSV, resveratrol 30 mg/kg/d), restricted group (R, 15% energy restriction) and combined group (RR, 15% energy restriction and resveratrol 30 mg/kg/d). Liver triacylglycerol (TG) and total cholesterol contents were measured using commercial kits. Carnitine palmitoyltransferase 1a (CPT1a) and citrate synthase (CS) activities were measured spectrophotometrically. TFAM (mitochondrial transcription factor A) and peroxisome proliferator-activated receptor alpha (PPARα) protein contents, as well as the ratio of acetylated to total peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC1α), were analyzed by Western blot. Statistical analysis was performed using one-way ANOVA with the Newman-Keuls post-hoc test. Results: No differences were observed among the four groups regarding liver weight and cholesterol content, but the three treated groups showed reduced TG compared to the control group, with the restricted groups showing the lowest values (no differences between them). Higher CPT1a and CS activities were observed in the groups supplemented with resveratrol (RSV and RR), with no difference between them. 
The acetylated PGC1α/total PGC1α ratio was lower in the treated groups (RSV, R and RR) than in the control group, with no differences among them. As far as TFAM protein expression is concerned, only the RR group showed a higher value. Finally, no changes were observed in PPARα protein expression. Conclusions: Resveratrol administration is an effective intervention for reducing liver triacylglycerol content, but a mild energy restriction is even more effective. The mechanisms of action of these two strategies are different. Thus, resveratrol, but not energy restriction, seems to act by increasing fatty acid oxidation, although mitochondriogenesis does not seem to be induced. When the two treatments (resveratrol administration and mild energy restriction) were combined, no additive or synergistic effects were observed. Acknowledgements: MINECO-FEDER (AGL2015-65719-R), Basque Government (IT-572-13), University of the Basque Country (ELDUNANOTEK UFI11/32), Institute of Health Carlos III (CIBERobn). Iñaki Milton-Laskibar holds a fellowship from the Basque Government.

Keywords: energy restriction, fat, liver, oxidation, resveratrol

Procedia PDF Downloads 211
384 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration

Authors: Matthew Yeager, Christopher Willy, John Bischoff

Abstract:

The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible system or product designs for a variety of reasons, including but not limited to: initial conceptualization that incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally, but not globally, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e., sensors, CPUs, modular/auxiliary access, etc.) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs. 
Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
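The core tradespace idea above (enumerate every combination of design choices, score each design with a multi-attribute utility, and treat performance non-deterministically) can be sketched in a few lines. This is a minimal illustration, not the authors' model: the option names, utility values, attribute weights, and the uniform noise standing in for performance uncertainty are all hypothetical.

```python
# Toy non-deterministic tradespace exploration: enumerate designs, score each
# with a Monte Carlo estimate of a weighted multi-attribute utility.
import itertools, random

random.seed(1)
options = {
    "sensor": {"lidar": 0.9, "camera": 0.7, "radar": 0.6},   # nominal utilities (made up)
    "fusion": {"kalman": 0.8, "voting": 0.5},
}
weights = {"sensor": 0.6, "fusion": 0.4}                     # stakeholder weights (made up)

def expected_utility(design, n_samples=200):
    """Monte Carlo mean of the weighted utility under +/-10% performance noise."""
    total = 0.0
    for _ in range(n_samples):
        total += sum(weights[k] * options[k][v] * random.uniform(0.9, 1.1)
                     for k, v in design.items())
    return total / n_samples

designs = [dict(zip(options, combo))
           for combo in itertools.product(*(options[k] for k in options))]
best = max(designs, key=expected_utility)
print(best)
```

A real MATE study would replace the toy utilities with stakeholder-elicited utility curves and the noise with validated performance models, but the enumerate-score-rank structure is the same.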

Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design

Procedia PDF Downloads 183
383 Effects of Virtual Reality Treadmill Training on Gait and Balance Performance of Patients with Stroke: Review

Authors: Hanan Algarni

Abstract:

Background: Impairment of walking and balance skills has a negative impact on functional independence and community participation after stroke. Gait recovery is considered a primary goal in rehabilitation by both patients and physiotherapists. Treadmill training coupled with virtual reality (VR) technology is a newly emerging approach that provides patients with feedback and open, random skills practice while walking and interacting with virtual environmental scenes. Objectives: To synthesize the evidence on the effects of VR treadmill training on gait speed and balance primarily, and on functional independence and community participation secondarily, in stroke patients. Methods: A systematic review was conducted; the search strategy included the electronic databases MEDLINE, AMED, Cochrane, CINAHL, EMBASE, PEDro, Web of Science, and unpublished literature. Inclusion criteria: Participants: adults >18 years, post-stroke, ambulatory, without severe visual or cognitive impairments. Intervention: VR treadmill training alone or with physiotherapy. Comparator: any other intervention. Outcomes: gait speed, balance, function, community participation. Characteristics of included studies were extracted for analysis. Risk of bias assessment was performed using Cochrane's ROB tool. A narrative synthesis of findings was undertaken, and a summary of findings for each outcome was reported using GRADEpro. Results: Four studies were included, involving 84 stroke participants with chronic hemiparesis. Intervention intensity ranged from 6 to 12 sessions of 20 minutes to 1 hour per session. Three studies investigated the effects on gait speed and balance, two studies investigated functional outcomes, and one study assessed community participation. The ROB assessment showed 50% unclear risk of selection bias and 25% unclear risk of detection bias across the studies. Heterogeneity was identified in the intervention effects at post-training and follow-up. 
Outcome measures, training intensity and durations also varied across the studies; the grade of evidence was low for balance, moderate for speed and function outcomes, and high for community participation. However, it is important to note that grading was done on a small number of studies for each outcome. Conclusions: The summary of findings suggests positive and statistically significant effects (p<0.05) of VR treadmill training compared to other interventions on gait speed, dynamic balance skills, function and participation directly after training. However, the effects were not sustained at follow-up (2 weeks-1 month) in two studies, and the other studies did not perform follow-up measurements. More RCTs with larger sample sizes and higher methodological quality are required to examine the long-term effects of VR treadmill training on functional independence and community participation after stroke, in order to draw conclusions and produce stronger, more robust evidence.

Keywords: virtual reality, treadmill, stroke, gait rehabilitation

Procedia PDF Downloads 274
382 A New Model to Perform Preliminary Evaluations of Complex Systems for the Production of Energy for Buildings: Case Study

Authors: Roberto de Lieto Vollaro, Emanuele de Lieto Vollaro, Gianluca Coltrinari

Abstract:

The building sector is responsible, in many industrialized countries, for about 40% of total energy requirements, so it seems necessary to devote some effort to this area in order to achieve a significant reduction in energy consumption and greenhouse gas emissions. The paper presents a study aiming to provide a design methodology able to identify the best configuration of the building/plant system from a technical, economic and environmental point of view. Normally, the classical approach involves an analysis of the building's energy loads under steady-state conditions, and subsequent selection of measures aimed at improving energy performance, based on previous experience by the architects and engineers in the design team. Instead, the proposed approach uses a sequence of two well-known, scientifically validated calculation methods (TRNSYS and RETScreen) that allow quite a detailed feasibility analysis. To assess the validity of the calculation model, an existing historical building in Central Italy, which will be the object of restoration and preservative redevelopment, was selected as a case study. The building consists of a basement and three floors, with a total floor area of about 3,000 square meters. The first step was the determination of the heating and cooling energy loads of the building in a dynamic regime by means of TRNSYS, which simulates the real energy needs of the building as a function of its use. Traditional methodologies, based as they are on steady-state conditions, cannot faithfully reproduce the effects of varying climatic conditions or of the inertial properties of the structure. With TRNSYS it is possible to obtain quite accurate and reliable results that allow the identification of effective building-HVAC system combinations. 
The second step consisted of using the output data obtained with TRNSYS as input to the calculation model RETScreen, which enables a comparison of different system configurations from the energy, environmental and financial points of view, with an analysis of investment, operation and maintenance costs, thereby allowing determination of the economic benefit of possible interventions. The classical methodology often leads to the choice of conventional plant systems, while RETScreen provides a financial-economic assessment of innovative, low-environmental-impact energy systems. Computational analysis can help in the design phase, particularly in the case of complex structures with centralized plant systems, by comparing the data returned by the calculation model RETScreen for different design options. For example, the analysis performed on the building taken as a case study found that the most suitable plant solution, taking into account technical, economic and environmental aspects, is one based on a CCHP (Combined Cooling, Heating, and Power) system using an internal combustion engine.

Keywords: energy, system, building, cooling, electrical

Procedia PDF Downloads 573
381 Management Potentialities of Rice Blast Disease Caused by Magnaporthe grisea Using New Nanofungicides Derived from Chitosan

Authors: Abdulaziz Bashir Kutawa, Khairulmazmi Ahmad, Mohd Zobir Hussein, Asgar Ali, Mohd Aswad Abdul Wahab, Amara Rafi, Mahesh Tiran Gunasena, Muhammad Ziaur Rahman, Md Imam Hossain, Syazwan Afif Mohd Zobir

Abstract:

Various abiotic and biotic stresses affect rice production all around the world. Rice blast, the most serious and prevalent disease of rice plants, is one of the major obstacles to rice production. It is one of the diseases with the greatest negative effect on rice farming globally, and it is caused by the fungus Magnaporthe grisea. Since nanoparticles have been shown to have an inhibitory impact on certain types of fungi, nanotechnology is a novel way to enhance agriculture by battling plant diseases. Utilizing nanocarrier systems enables the active chemicals to be absorbed, attached, and encapsulated to produce efficient nanodelivery formulations. The objectives of this research work were to determine the efficacy and mode of action of the nanofungicides in vitro and under field conditions (in vivo). The ionic gelation method was used to develop the nanofungicides. Using the poisoned media method, the synthesized agronanofungicides' in vitro antifungal activity was assessed against M. grisea. The potato dextrose agar (PDA) was amended at several concentrations: 0.001, 0.005, 0.01, 0.025, 0.05, 0.1, 0.15, 0.20, 0.25, 0.30, and 0.35 ppm for the nanofungicides. Medium with only the solvent served as a control. Mycelial growth was measured every day, and the PIRG (percentage inhibition of radial growth) was computed. Based on the results of the zone of inhibition, the chitosan-hexaconazole agronanofungicide (2 g/mL) was the most effective fungicide, inhibiting the growth of the fungus completely (100%) at 0.2, 0.25, 0.30, and 0.35 ppm. This was followed by the carbendazim analytical fungicide, which inhibited the growth of the fungus (100%) at 5, 10, 25, 50, and 100 ppm. The least effective were the propiconazole and basamid fungicides, with 100% inhibition only at 100 ppm. 
Scanning electron microscopy (SEM), confocal laser scanning microscopy (CLSM), and transmission electron microscopy (TEM) were used to study the mechanisms of action on the M. grisea fungal cells. The results showed that carbendazim, chitosan-hexaconazole, and HXE were the most effective fungicides in disrupting the mycelia and internal structures of the fungal cells. The results of the field assessment showed that the CHDEN treatment (5 g/L, double dosage) was the most effective in reducing the intensity of rice blast disease, with a DSI of 17.56%, lesion length of 0.43 cm, DR of 82.44%, AUDPC of 260.54 unit², and PI of 65.33%. The least effective treatment was chitosan-hexaconazole-dazomet (2.5 g/L, MIC). The usage of CHDEN and CHEN nanofungicides will significantly assist in lessening the severity of rice blast in the fields, increasing output and profit for rice farmers.
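The PIRG metric named in the abstract is conventionally computed as PIRG(%) = (Dc − Dt) / Dc × 100, where Dc is the radial mycelial growth on control plates and Dt the growth on fungicide-amended plates. A minimal sketch follows; the growth values are hypothetical example numbers, not measurements from this study.

```python
# Percentage inhibition of radial growth (PIRG), the standard poisoned-media
# metric; example diameters below are made up.
def pirg(control_mm: float, treated_mm: float) -> float:
    """PIRG(%) = (Dc - Dt) / Dc * 100."""
    return (control_mm - treated_mm) / control_mm * 100.0

print(pirg(80.0, 0.0))   # no growth on the treated plate -> complete inhibition
print(pirg(80.0, 40.0))  # half the control growth
```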

Keywords: chitosan, hexaconazole, disease incidence, Magnaporthe grisea

Procedia PDF Downloads 69
380 Bioinformatic Prediction of Hub Genes by Analysis of Signaling Pathways, Transcriptional Regulatory Networks and DNA Methylation Pattern in Colon Cancer

Authors: Ankan Roy, Niharika, Samir Kumar Patra

Abstract:

An anomalous nexus of complex topological assemblies and spatiotemporal epigenetic choreography at the chromosomal territory may form the most sophisticated regulatory layer of gene expression in cancer. Colon cancer is one of the leading malignant neoplasms of the lower gastrointestinal tract worldwide. There is still a paucity of information about the complex molecular mechanisms of colonic cancerogenesis. Bioinformatic prediction and analysis help to identify essential genes and significant pathways for monitoring and conquering this deadly disease. The present study investigates and explores potential hub genes as biomarkers and effective therapeutic targets for colon cancer treatment. Gene expression profile datasets from colon cancer patient samples (GSE44076, GSE20916, and GSE37364) were downloaded from the Gene Expression Omnibus (GEO) database and thoroughly screened using the GEO2R tool and Funrich software to find common differentially expressed genes (DEGs). Other approaches, including Gene Ontology (GO) and KEGG pathway analysis, Protein-Protein Interaction (PPI) network construction and hub gene investigation, Overall Survival (OS) analysis, gene correlation analysis, methylation pattern analysis, and hub gene-transcription factor regulatory network construction, were performed and validated using various bioinformatics tools. Initially, we identified 166 DEGs, including 68 up-regulated and 98 down-regulated genes. Up-regulated genes are mainly associated with cytokine-cytokine receptor interaction, the IL17 signaling pathway, ECM-receptor interaction, focal adhesion and the PI3K-Akt pathway. Down-regulated genes are enriched in metabolic pathways, retinol metabolism, steroid hormone biosynthesis, and bile secretion. From the protein-protein interaction network, thirty hub genes with high connectivity were selected using the MCODE and cytoHubba plugins. 
Survival analysis, expression validation, correlation analysis, and methylation pattern analysis were further verified using TCGA data. Finally, we predicted COL1A1, COL1A2, COL4A1, SPP1, SPARC, and THBS2 as potential master regulators in colonic cancerogenesis. Moreover, our experimental data highlight that disruption of the lipid raft and RAS/MAPK signaling cascade affects this gene hub at the mRNA level. We identified COL1A1, COL1A2, COL4A1, SPP1, SPARC, and THBS2 as determinant hub genes in colon cancer progression. They can be considered biomarkers for diagnosis and promising therapeutic targets in colon cancer treatment. Additionally, our experimental data suggest that the signaling pathway acts as a connecting link between the membrane hub and the gene hub.
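The DEG-screening step at the heart of the workflow above can be sketched generically: flag genes by a per-gene statistical test between condition groups combined with a fold-change cutoff. This is an illustrative stand-in, not the GEO2R/Funrich pipeline; the expression matrix, sample counts, and thresholds below are all synthetic assumptions.

```python
# Toy differential-expression screen: per-gene t-test + log2 fold-change cutoff
# on synthetic log2 expression data (5 genes deliberately up-regulated).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_samples = 100, 10
normal = rng.normal(8.0, 0.5, size=(n_genes, n_samples))   # log2 expression
tumor = rng.normal(8.0, 0.5, size=(n_genes, n_samples))
tumor[:5] += 2.0                                           # plant 5 up-regulated genes

t, p = stats.ttest_ind(tumor, normal, axis=1)              # per-gene two-sample t-test
log2fc = tumor.mean(axis=1) - normal.mean(axis=1)
degs = np.where((p < 0.01) & (np.abs(log2fc) > 1.0))[0]
print(degs)
```

Real screens additionally apply multiple-testing correction (e.g., Benjamini-Hochberg) before intersecting DEG lists across datasets.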

Keywords: hub genes, colon cancer, DNA methylation, epigenetic engineering, bioinformatic predictions

Procedia PDF Downloads 128
379 Prevalence and Risk Factors of Musculoskeletal Disorders among School Teachers in Mangalore: A Cross Sectional Study

Authors: Junaid Hamid Bhat

Abstract:

Background: Musculoskeletal disorders are one of the main causes of occupational illness. The mechanisms and factors, such as repetitive work, physical effort and posture, that heighten the risk of musculoskeletal disorders now appear to have been properly identified. Teachers' exposure to work-related musculoskeletal disorders appears to be insufficiently described in the literature. Little research has investigated the prevalence and risk factors of musculoskeletal disorders in the teaching profession; very few studies are available in this regard, and there are no such studies from India. Purpose: To determine the prevalence of musculoskeletal disorders and to identify and measure the association of the risk factors responsible for developing musculoskeletal disorders among school teachers. Methodology: An observational cross-sectional study was carried out. 500 school teachers from primary, middle, high and secondary schools were selected based on eligibility criteria. Signed consent was obtained, and a self-administered, validated questionnaire was used. Descriptive statistics were used to compute the mean and standard deviation, frequency and percentage to estimate the prevalence of musculoskeletal disorders among school teachers. Data analysis was done using SPSS version 16.0. Results: Results indicated a high pain prevalence (99.6%) among school teachers during the past 12 months. Neck pain (66.1%), low back pain (61.8%) and knee pain (32.0%) were the most prevalent musculoskeletal complaints. The prevalence of shoulder pain was also high (25.9%). 52.0% of subjects reported pain as disabling in nature, causing sleep disturbance (44.8%), and pain was found to be associated with work (87.5%). A significant association was found between musculoskeletal disorders and sick leave/absenteeism. 
Conclusion: Work-related musculoskeletal disorders, particularly neck pain, low back pain, and knee pain, are highly prevalent among school teachers, and identifiable risk factors are responsible for their development. There is little awareness of musculoskeletal disorders among school teachers, who face heavy workloads and prolonged/static postures. Further research should concentrate on specific risk factors such as repetitive movements, psychological stress, and ergonomic factors, should be carried out all over the country, and should follow school teachers carefully over a period of time. An ergonomic investigation is also needed to decrease work-related musculoskeletal disorder problems. Implication: Recall bias and self-reporting can be considered limitations; also, cause-and-effect inferences cannot be ascertained. Based on these results, it is important to disseminate general recommendations for the prevention of work-related musculoskeletal disorders with regard to the suitability of furniture, equipment and work tools, environmental conditions, work organization and rest time for school teachers. School teachers in the early stages of their careers should try to adopt ergonomically favorable positions whilst performing their work, for a safe and healthy life later. Employers should be educated on practical aspects of prevention to reduce musculoskeletal disorders, since changes in the workplace and work organization, as well as physical/recreational activities, are required.

Keywords: work related musculoskeletal disorders, school teachers, risk factors funding, medical and health sciences

Procedia PDF Downloads 277
378 In vitro Antimicrobial Resistance Pattern of Bovine Mastitis Bacteria in Ethiopia

Authors: Befekadu Urga Wakayo

Abstract:

Introduction: Bacterial infections represent major human and animal health problems in Ethiopia. In the face of poor antibiotic regulatory mechanisms, the development of antimicrobial resistance (AMR) to commonly used drugs has become a growing health and livelihood threat in the country. Monitoring and control of AMR demand close collaboration between human and veterinary services as well as other relevant stakeholders. However, the risk of AMR transfer from animal to human populations remains poorly explored in Ethiopia. This systematic literature review attempts to give an overview of the AMR challenges of bovine mastitis bacteria in Ethiopia. Methodology: A web-based literature search and analysis strategy was used. Databases considered included PubMed, Google Scholar, the Ethiopian Veterinary Association (EVA) and the Ethiopian Society of Animal Production (ESAP). The key search terms and phrases were: Ethiopia, dairy, cattle, mastitis, bacteria isolation, antibiotic sensitivity and antimicrobial resistance. Ultimately, 15 research reports were used for the current analysis. Data extraction was performed using a structured Microsoft Excel format. AMR prevalence (%) was registered directly or calculated from reported values. Statistical analysis was performed in SPSS 16. Variables were summarized with frequencies (n or %), mean ± SE and demonstrative box plots. One-way ANOVA and the independent t-test were used to evaluate variations in AMR prevalence estimates (ln-transformed). Statistical significance was determined at p < 0.05. Results: AMR in bovine mastitis bacteria was investigated in a total of 592 in vitro antibiotic sensitivity trials involving 12 different mastitis bacteria (including 1126 Gram-positive and 77 Gram-negative isolates) and 14 antibiotics. Bovine mastitis bacteria exhibited AMR to most of the antibiotics tested. Gentamycin had the lowest average AMR in both Gram-positive (2%) and Gram-negative (1.8%) bacteria. 
Gram-negative mastitis bacteria showed higher mean in vitro resistance levels to Erythromycin (72.6%), Tetracycline (56.65%), Amoxicillin (49.6%), Ampicillin (47.6%), Clindamycin (47.2%) and Penicillin (40.6%). Among Gram-positive mastitis bacteria, higher mean in vitro resistance was observed for Ampicillin (32.8%), Amoxicillin (32.6%), Penicillin (24.9%), Streptomycin (20.2%), Penicillinase-Resistant Penicillins (15.4%) and Tetracycline (14.9%). More specifically, S. aureus exhibited high mean AMR against Penicillin (76.3%) and Ampicillin (70.3%), followed by Amoxicillin (45%), Streptomycin (40.6%), Tetracycline (24.5%) and Clindamycin (23.5%). E. coli showed high mean AMR to Erythromycin (78.7%), Tetracycline (51.5%), Ampicillin (49.25%), Amoxicillin (43.3%), Clindamycin (38.4%) and Penicillin (33.8%). Streptococcus spp. demonstrated higher (p = 0.005) mean AMR against Kanamycin (> 20%) and full sensitivity (100%) to Clindamycin. Overall, mean Tetracycline (p = 0.013), Gentamycin (p = 0.001), Polymixin (p = 0.034), Erythromycin (p = 0.011) and Ampicillin (p = 0.009) resistance was higher in the 2010s than in the 2000s. Conclusion: The review indicated a rising AMR challenge among bovine mastitis bacteria in Ethiopia. The corresponding public health implications demand a deeper, integrated investigation.
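The comparison described in the methodology (one-way ANOVA and independent t-tests on ln-transformed prevalence estimates) can be sketched as follows. The three groups below reuse a few percentages quoted in the abstract purely as example numbers; their grouping into per-antibiotic lists is an assumption, not the review's actual dataset.

```python
# Sketch of the abstract's statistics: ln-transform prevalence estimates,
# then one-way ANOVA across antibiotics and a t-test between two of them.
import math
from scipy import stats

penicillin = [76.3, 70.3, 33.8, 40.6]      # % resistant (illustrative grouping)
gentamicin = [2.0, 1.8, 3.1, 2.4]
tetracycline = [24.5, 51.5, 14.9, 56.65]

def ln(values):
    """Natural-log transform, as stated in the methodology."""
    return [math.log(v) for v in values]

f_stat, p_anova = stats.f_oneway(ln(penicillin), ln(gentamicin), ln(tetracycline))
t_stat, p_t = stats.ttest_ind(ln(penicillin), ln(gentamicin))
print(p_anova < 0.05, p_t < 0.05)
```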

Keywords: antimicrobial resistance, dairy cattle, Ethiopia, mastitis bacteria

Procedia PDF Downloads 245
377 Artificial Intelligence Models for Detecting Spatiotemporal Crop Water Stress in Automating Irrigation Scheduling: A Review

Authors: Elham Koohi, Silvio Jose Gumiere, Hossein Bonakdari, Saeid Homayouni

Abstract:

Water used for agricultural crops can be managed by irrigation scheduling based on soil moisture levels and plant water stress thresholds. Automated irrigation scheduling limits crop physiological damage and yield reduction. Knowledge of crop water stress monitoring approaches can be effective in optimizing the use of agricultural water. Understanding the physiological mechanisms by which crops respond and adapt to water deficit ensures sustainable agricultural management and food supply. This aim can be achieved by analyzing and diagnosing crop characteristics and their interlinkage with the surrounding environment: assessments of plant functional types (e.g., leaf area and structure, tree height, rate of evapotranspiration, rate of photosynthesis), monitoring of changes, and mapping of irrigated areas. Calculating thresholds for soil water content parameters, crop water use efficiency, and nitrogen status makes irrigation scheduling decisions more accurate by preventing water limitations between irrigations. Combining Remote Sensing (RS), the Internet of Things (IoT), Artificial Intelligence (AI), and Machine Learning Algorithms (MLAs) can improve measurement accuracy and automate irrigation scheduling. This paper is a review, structured by surveying about 100 recent research studies, that analyzes varied approaches in terms of providing high spatial and temporal resolution mapping, sensor-based Variable Rate Application (VRA) mapping, and the relation between spectral and thermal reflectance and different features of crop and soil. A further objective is to assess RS indices formed by choosing specific reflectance bands, to identify the correct spectral band to optimize classification techniques, and to analyze Proximal Optical Sensors (POSs) for monitoring changes. 
The innovation of this paper lies in categorizing evaluation methodologies of precision irrigation (applying the right practice, at the right place, at the right time, with the right quantity), controlled by soil moisture levels and the sensitivity of crops to water stress, into pre-processing, processing (retrieval algorithms), and post-processing parts. The main idea of this research is then to analyze the sources and magnitudes of error in employing different approaches across these three parts, as reported by recent studies. As an overall conclusion, the review also decomposes the different approaches into optimized indices, calibration methods for the sensors, thresholding and prediction models prone to errors, and improvements in classification accuracy for mapping changes.
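The review discusses thresholding crop water stress from thermal data; one standard index in this family (not named in the abstract) is the empirical Crop Water Stress Index (CWSI), computed from canopy temperature against wet and dry reference baselines. A minimal sketch; the 0.5 trigger threshold is an assumed, crop-specific value, not one taken from the review:

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Empirical Crop Water Stress Index: 0 = well-watered, 1 = fully stressed.

    t_wet / t_dry are canopy temperatures (deg C) of fully transpiring and
    non-transpiring reference baselines, respectively.
    """
    if t_dry <= t_wet:
        raise ValueError("t_dry must exceed t_wet")
    index = (t_canopy - t_wet) / (t_dry - t_wet)
    return max(0.0, min(1.0, index))  # clamp measurement noise into [0, 1]

def should_irrigate(t_canopy, t_wet, t_dry, threshold=0.5):
    """Trigger irrigation when stress exceeds a crop-specific threshold."""
    return cwsi(t_canopy, t_wet, t_dry) > threshold
```

In an automated pipeline, `t_canopy` would come from the thermal imagery discussed in the review, while the baselines and threshold must be calibrated per crop and climate.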

Keywords: agricultural crops, crop water stress detection, irrigation scheduling, precision agriculture, remote sensing

Procedia PDF Downloads 71
376 The Structural Alteration of DNA Native Structure of Staphylococcus aureus Bacteria by Designed Quinoxaline Small Molecules Result in Their Antibacterial Properties

Authors: Jeet Chakraborty, Sanjay Dutta

Abstract:

Antibiotic resistance in bacteria has proved to be a severe threat to mankind in recent times, and this underlines the urgency of designing and developing potent antibacterial small molecules/compounds with nonconventional mechanisms of action. DNA carries the genetic signature of any organism, and bacteria maintain their genomic DNA inside the cell in a well-regulated compact form with the help of various nucleoid-associated proteins like HU, H-NS, etc. These proteins control various fundamental processes like gene expression, replication, etc., inside the cell. Alteration of the native DNA structure of bacteria can lead to severe consequences for cellular processes inside the bacterial cell that ultimately result in the death of the organism. The change in global DNA structure by small molecules initiates a plethora of cellular responses that have not been well investigated. Echinomycin and Triostin A are biologically active quinoxaline small molecules that typically consist of a quinoxaline chromophore attached to an octadepsipeptide ring. They bind to double-stranded DNA in a sequence-specific way and have high activity against a wide variety of bacteria, mainly Gram-positive ones. To date, few synthetic quinoxaline scaffolds have been synthesized that display antibacterial potential against a broad range of pathogenic bacteria. QNOs (quinoxaline N-oxides) are known to target DNA and instigate reactive oxygen species (ROS) production in bacteria, thereby exhibiting antibacterial properties. The divergent role of quinoxaline small molecules in medicinal research qualifies them as potential candidates for evaluation of their antimicrobial properties.
A previous study from our lab gave new insights into a 6-nitroquinoxaline derivative, 1d, as a DNA intercalator that induces conformational changes in DNA upon binding. The binding event observed was dependent on the presence of a crucial benzyl substituent on the quinoxaline moiety and was associated with a large induced CD (ICD) appearing in a sigmoidal pattern upon the interaction of 1d with dsDNA. The induction of DNA superstructures by 1d at high drug:DNA ratios was observed, ultimately leading to DNA condensation. Eviction of in vitro-assembled nucleosomes upon treatment with a high dose of 1d was also observed. In this work, monoquinoxaline derivatives of 1d were synthesized by various modifications of the 1d scaffold. The set of synthesized 6-nitroquinoxaline derivatives, along with 1d, were all subjected to antibacterial evaluation across five different bacterial species. Among the compound set, 3a displayed potent antibacterial activity against Staphylococcus aureus. 3a was further subjected to various biophysical studies to check whether its DNA structural alteration potential was still intact. The biological response of S. aureus cells upon treatment with 3a was studied using various cell biology assays, which led to the conclusion that 3a can initiate DNA damage in S. aureus cells. Finally, the potential of 3a in disrupting preformed S. aureus and S. epidermidis biofilms was also studied.

Keywords: DNA structural change, antibacterial, intercalator, DNA superstructures, biofilms

Procedia PDF Downloads 169
375 Application of NBR 14861: 2011 for the Design of Prestress Hollow Core Slabs Subjected to Shear

Authors: Alessandra Aparecida Vieira França, Adriana de Paula Lacerda Santos, Mauro Lacerda Santos Filho

Abstract:

The purpose of this research is to study the behavior of precast prestressed hollow core slabs subjected to shear. In order to achieve this goal, shear tests were performed using hollow core slabs 26.5 cm thick, with and without a concrete cover of 5 cm, and with no cores filled, two cores filled, or three cores filled with concrete. The tests were performed according to the procedures recommended by FIP (1992) and EN 1168:2005, following the method presented in Costa (2009). The ultimate shear strength obtained in the tests was compared with the theoretical shear resistance calculated in accordance with the codes used in Brazil, namely NBR 6118:2003 and NBR 14861:2011. When calculating the shear resistance through the equations presented in NBR 14861:2011, it was found that this provision is much more accurate for the shear strength of hollow core slabs than the NBR 6118 code. Due to the large difference between the calculated results, even for slabs without filled cores, the authors consulted the committee that drafted NBR 14861:2011 and found that there is an error in the text of the standard: the suggested coefficient is actually double the required value. ABNT soon issued an amendment to NBR 14861:2011 with the necessary corrections. During the tests for the present study, it was confirmed that the concrete filling the cores contributes to increasing the shear strength of hollow core slabs. However, for slabs 26.5 cm thick, the quantity should be limited to a maximum of two filled cores, because most of the results for slabs with three filled cores were smaller. This confirmed the recommendation of NBR 14861:2011, which is consistent with standard practice.
After analyzing the cracking configuration and failure mechanisms of the hollow core slabs during the shear tests, strut and tie models were developed representing the forces acting on the slab at the moment of rupture. Through these models the authors were able to calculate the tensile stress acting on the concrete ties (ribs) and to scale the geometry of these ties. The conclusions of the research are as follows: the experimental results have shown that the failure mechanism of hollow core slabs can be predicted using the strut and tie procedure within a good range of accuracy; the Brazilian standard required correction to revise the duplicated correction factor σcp (in NBR 14861:2011); and the number of cores (holes) filled with concrete to increase the shear resistance of the slab should be limited. It is also suggested to increase the number of test results with 26.5 cm thick slabs, and with a larger range of slab thicknesses, in order to obtain shear test results with cores concreted after the release of the prestressing force. Another set of shear tests must be performed on slabs with filled cores and a concrete cover reinforced with welded steel mesh, for comparison with theoretical values calculated by the new revision of NBR 14861:2011.

Keywords: prestressed hollow core slabs, shear, strut and tie models

Procedia PDF Downloads 333
374 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells

Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez

Abstract:

Intercellular communication is a necessary condition for cellular functions, and it allows a group of cells to survive as a population. Throughout this interaction, the cells work in a coordinated and collaborative way, which facilitates their survival. Cancerous cells take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can send signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications, and it is also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of modeling approaches, from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools, both mathematical and computational, have been developed. The study of cellular and molecular processes in cancer has also found valuable support in different simulation tools that, covering a similar spectrum, have allowed in silico experimentation on this phenomenon at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using Cellulat, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie's algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way.
The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. During the development of this work we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells, and in this way proposed key molecules that may prevent the arrival of malignant signals at the cells that surround the tumor cells. In this manner, we could identify the significant role that the Wnt/β-catenin signaling pathway plays in cellular communication, and therefore in the dissemination of cancer cells. We verified, using in silico experiments, how the inhibition of this signaling pathway prevents the cells surrounding a cancerous cell from being transformed.
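Gillespie's direct method, which Cellulat uses as its stochastic engine, can be sketched in a few lines. The sketch below simulates a hypothetical first-order degradation reaction (A → ∅) rather than an actual Wnt/β-catenin network; function and variable names are illustrative:

```python
import random

def gillespie(propensities, updates, state, t_end, seed=None):
    """Gillespie's direct-method stochastic simulation algorithm (SSA).

    propensities: list of functions mapping state -> reaction rate
    updates:      list of functions mutating state when that reaction fires
    """
    rng = random.Random(seed)
    t = 0.0
    trajectory = [(t, dict(state))]
    while t < t_end:
        rates = [p(state) for p in propensities]
        total = sum(rates)
        if total == 0.0:              # nothing can fire: system is exhausted
            break
        t += rng.expovariate(total)   # exponential waiting time to next event
        pick = rng.uniform(0.0, total)
        for rate, update in zip(rates, updates):
            if pick < rate:           # choose a reaction, weighted by its rate
                update(state)
                break
            pick -= rate
        else:
            updates[-1](state)        # numerical edge case: fire last reaction
        trajectory.append((t, dict(state)))
    return trajectory

def degrade(state):
    state["A"] -= 1

# Hypothetical first-order degradation A -> 0 with rate constant k = 0.1
traj = gillespie([lambda s: 0.1 * s["A"]], [degrade], {"A": 50}, 1000.0, seed=7)
```

A real signaling model would supply one propensity/update pair per biochemical reaction; the algorithm itself is unchanged.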

Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation

Procedia PDF Downloads 249
373 Dragonflies (Odonata) Reflect Climate Warming Driven Changes in High Mountain Invertebrates Populations

Authors: Nikola Góral, Piotr Mikołajczuk, Paweł Buczyński

Abstract:

Much scientific research in the last 20 years has focused on the influence of global warming on the distribution and phenology of living organisms. Three potential responses to climate change are predicted: individual species may become extinct, adapt to new conditions in their existing range, or change their range by migrating to places where climatic conditions are more favourable. This means not only migration to other latitudes, but also to different altitudes. In the case of dragonflies (Odonata), monitoring in Western Europe has shown that in response to global warming, dragonflies tend to shift their range northwards. The strongest response to global warming is observed in arctic and alpine species, as well as in species capable of migrating over long distances. The aim of the research was to assess whether the fauna of aquatic insects in high-mountain habitats has changed as a result of climate change and, if so, how large and of what type these changes are. Dragonflies were chosen as a model organism because of their fast reaction to changes in the environment: they have high migration abilities and a short life cycle. The state of the populations of boreal-mountain species and the extent to which lowland species entered high altitudes were assessed. The research was carried out at 20 sites in the Western Sudetes, Southern Poland, located at altitudes between 850 and 1250 m. The selected sites were representative of many types of valuable alpine habitats (subalpine raised bog, transitional spring bog, habitats associated with rivers and mountain streams). Several sites of anthropogenic origin were also selected. Thanks to this selection, a broad characterization of the fauna of the Karkonosze was made, and it could be compared whether the studied processes proceeded differently depending on whether the habitat was primary or secondary.
Both imagines and larvae were examined (larvae by taking hydrobiological samples with a kick-net), and exuviae were also collected. Individual dragonfly species were characterized in terms of their reproductive, territorial and foraging behaviour. During each inspection, the basic physicochemical parameters of the water were measured. The population of the high-mountain dragonfly Somatochlora alpestris turned out to be in good condition. This species was noted at several sites. Some of those sites were situated relatively low (995 m AMSL), which suggests that thermal conditions at lower altitudes may still be optimal for this species. Somatochlora arctica, Aeshna subarctica and Leucorrhinia albifrons, which are protected under Polish law, as well as the strongly bog-associated Leucorrhinia dubia and Aeshna juncea, were observed. However, they were more frequent and more numerous in habitats of anthropogenic origin, which may suggest minor changes in the habitat preferences of dragonflies. The subject requires further research and observations over a longer time scale.

Keywords: alpine species, bioindication, global warming, habitat preferences, population dynamics

Procedia PDF Downloads 150
372 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate a human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data.
The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between the reconstruction error and distinctiveness of images and their memorability scores. This suggests that images with more distinctive features, which challenge the autoencoder's compressive capacities, are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
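The per-image quantities correlated with memorability (reconstruction error, latent distinctiveness) reduce to simple computations. A minimal pure-Python sketch, using mean squared error as an assumed stand-in for the structural/perceptual losses and toy low-dimensional vectors in place of the VGG latents:

```python
import math

def mse(x, x_hat):
    """Reconstruction error: mean squared difference between input and output."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def distinctiveness(latents, i):
    """Euclidean distance from latent vector i to its nearest neighbor."""
    return min(math.dist(latents[i], z)
               for j, z in enumerate(latents) if j != i)

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

In the study these quantities would be computed over high-dimensional autoencoder latents and MemCat memorability scores; the functions here are deliberately generic.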

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 91
371 Closing the Loop between Building Sustainability and Stakeholder Engagement: Case Study of an Australian University

Authors: Karishma Kashyap, Subha D. Parida

Abstract:

Rapid population growth and urbanization are creating pressure throughout the world. This has a dramatic effect on many key services, including water, food, transportation, energy, and infrastructure. The built environment sector is growing concurrently to meet the needs of urbanization. Due to such large-scale development of buildings, they need to be monitored and managed efficiently. Along with appropriate management, climate adaptation is highly crucial as well, because buildings are one of the major sources of greenhouse gas emissions in their operation phase. To be adaptive, buildings need to follow a triple bottom line approach to sustainability, i.e., being socially, environmentally and economically sustainable. Hence, in order to deliver these sustainability outcomes, there is a growing understanding of, and drive towards, switching to green buildings or renovating existing ones to green standards wherever possible. Academic institutions in particular have been following this trend globally. This is highly significant, as universities usually have high occupancy rates and manage a large building portfolio. Also, as universities accommodate the future generation of architects, policy makers, etc., they have the potential to set themselves up as a best-practice model for research and innovation for the rest of the industry to follow. Hence their climate adaptation, sustainable growth and performance management become highly crucial in order to provide the best services to users. With the objective of evaluating appropriate management mechanisms within academic institutions, a feasibility study was carried out in a recent 5-Star Green Star rated university building (housing the School of Construction) in Victoria (the south-eastern state of Australia). The key aim was to understand the behavioral and social aspects of the building users and management, and the impact of their relationship on overall building sustainability.
A survey was used to understand the building occupants' responses and reactions in terms of their work environment and management. A report was generated based on the survey results, complemented with utility and performance data, which was then used to evaluate the management structure of the university. Following the report, interviews were scheduled with the facility and asset managers in order to understand the approach they use to manage the different buildings on their university campuses (old, new, refurbished) and the parameters incorporated in maintaining the Green Star performance. The results aimed at closing the communication and feedback loop within the respective institutions and assisting the facility managers in delivering appropriate stakeholder engagement. For the wider design community, analysis of the data highlights the applicability and significance of prioritizing key stakeholders, integrating desired engagement policies within an institution's management structures and frameworks, and their effect on building performance.

Keywords: building optimization, green building, post occupancy evaluation, stakeholder engagement

Procedia PDF Downloads 357
370 Benefits of The ALIAmide Palmitoyl-Glucosamine Co-Micronized with Curcumin for Osteoarthritis Pain: A Preclinical Study

Authors: Enrico Gugliandolo, Salvatore Cuzzocrea, Rosalia Crupi

Abstract:

Osteoarthritis (OA) is one of the most common chronic pain conditions in dogs and cats. OA pain is currently viewed as a mixed phenomenon involving both inflammatory and neuropathic mechanisms at the peripheral (joint) and central (spinal and supraspinal) levels. Oxidative stress has been implicated in OA pain. Although nonsteroidal anti-inflammatory drugs are commonly prescribed for OA pain, they should be used with caution in pets because of long-term adverse effects and controversial efficacy on neuropathic pain. An unmet need remains for safe and effective long-term treatments for OA pain. Palmitoyl-glucosamine (PGA) is an analogue of the ALIAmide palmitoylethanolamide, i.e., a body's own endocannabinoid-like compound playing a sentinel role in nociception. PGA, especially in the micronized formulation, has been shown to be safe and effective against OA pain. The aim of this study was to investigate the effect of a co-micronized formulation of PGA with the natural antioxidant curcumin (PGA-cur) on OA pain. Ten Sprague-Dawley male rats were used for each treatment group. The University of Messina Review Board for the care and use of animals authorized the study. On day 0, rats were anesthetized (5.0% isoflurane in 100% O2) and received an intra-articular injection of MIA (3 mg in 25 μl saline) in the right knee joint, with the left knee injected with an equal volume of saline. Starting on the third day after MIA injection, treatments were administered orally three times per week for 21 days at the following doses: PGA 20 mg/kg, curcumin 10 mg/kg, PGA-cur (2:1 ratio) 30 mg/kg. On day 0 and on days 3, 7, 14 and 21 post-injection, mechanical allodynia was measured using a dynamic plantar von Frey aesthesiometer and expressed as paw withdrawal threshold (PWT) and latency (PWL). Motor functional recovery of the rear limb was evaluated at the same time points by walking track analysis using the sciatic functional index.
On day 21 post-MIA injection, the concentrations of the following inflammatory and nociceptive mediators were measured in serum using commercial ELISA kits: tumor necrosis factor alpha (TNF-α), interleukin-1 beta (IL-1β), nerve growth factor (NGF) and matrix metalloproteinases 1, 3 and 9 (MMP-1, MMP-3, MMP-9). The results were analyzed by ANOVA followed by the Bonferroni post-hoc test for multiple comparisons. Micronized PGA reduced neuropathic pain, as shown by significantly higher PWT and PWL values compared to the vehicle group (p < 0.0001 for all the evaluated time points). The effect of PGA-cur was superior at all time points (p < 0.005). PGA-cur restored motor function as early as day 14 (p < 0.005), while micronized PGA was effective a week later (day 21). The MIA-induced increase in the serum levels of all the investigated mediators was inhibited by PGA-cur (p < 0.01). PGA was also effective, except on IL-1β and MMP-3. Curcumin alone was inactive in all the experiments at any time point. These encouraging results suggest that PGA-cur may represent a valuable option in OA pain management and warrant further confirmation in well-powered clinical trials.
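The Bonferroni correction applied after the ANOVA simply scales each comparison's p-value by the number of comparisons. A minimal sketch with illustrative p-values (not the study's data):

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction: multiply each p-value by the number of tests,
    capping at 1.0; a comparison is significant if the adjusted p < alpha."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    return adjusted, [p < alpha for p in adjusted]

# Two hypothetical post-hoc comparisons: only the first survives correction
adj, significant = bonferroni([0.01, 0.04], alpha=0.05)
```

The correction controls the family-wise error rate, which is why raw p-values like 0.04 can lose significance once several mediators are compared at once.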

Keywords: ALIAmides, curcumin, osteoarthritis, palmitoyl-glucosamine

Procedia PDF Downloads 115
369 Influence of Mandrel’s Surface on the Properties of Joints Produced by Magnetic Pulse Welding

Authors: Ines Oliveira, Ana Reis

Abstract:

Magnetic Pulse Welding (MPW) is a cold solid-state welding process, accomplished by the electromagnetically driven, high-speed and low-angle impact between two metallic surfaces. It has the same working principle as Explosive Welding (EXW), i.e., it is based on the collision of two parts at high impact speed, in this case propelled by electromagnetic force. Under proper conditions, i.e., flyer velocity and collision point angle, a permanent metallurgical bond can be achieved between widely dissimilar metals. MPW has been considered a promising alternative to conventional welding processes and advantageous when compared to other impact processes. Nevertheless, current MPW applications are mostly academic. Despite the existing knowledge, the lack of consensus regarding several aspects of the process calls for further investigation. As a result, the mechanical resistance, morphology and structure of the weld interface in MPW of the dissimilar Al/Cu pair were investigated. The effects of process parameters, namely gap, standoff distance and energy, were studied. It was shown that welding only takes place if the process parameters are within an optimal range. Additionally, the formation of intermetallic phases cannot be completely avoided when welding the dissimilar Al/Cu pair by MPW. Depending on the process parameters, the intermetallic compounds can appear as a continuous layer or as small pockets. The thickness and composition of the intermetallic layer depend on the processing parameters. Different intermetallic phases can be identified, meaning that different temperature-time regimes can occur during the process. It was also found that lower pulse energies are preferable. The relationship between energy increase and melting is possibly related to multiple sources of heating. Higher values of pulse energy are associated with higher induced currents in the part, meaning that more Joule heating will be generated.
In addition, more energy means higher flyer velocity; the air in the gap between the parts to be welded is expelled, and the resulting aerodynamic drag (fluid friction) is proportional to the square of the velocity, further contributing to the generation of heat. As the kinetic energy also increases with the square of the velocity, the dissipation of this energy through plastic work and jet generation will also contribute to an increase in temperature. To reduce intermetallic phases, porosity, and melt pockets, pulse energy should be minimized. Bond formation is affected not only by the gap, standoff distance, and energy, but also by the mandrel's surface conditions. No clear correlation was identified between surface roughness/scratch orientation and joint strength. Nevertheless, the aspect of the interface (thickness of the intermetallic layer, porosity, presence of macro/microcracks) is clearly affected by the surface topology. Welding was not established on oil-contaminated surfaces, meaning that the jet action is not enough to completely clean the surface.

Keywords: bonding mechanisms, impact welding, intermetallic compounds, magnetic pulse welding, wave formation

Procedia PDF Downloads 211
368 Effect of Juvenile Hormone on Respiratory Metabolism during Non-Diapausing Sesamia cretica Wandering Larvae (Lepidoptera: Noctuidae)

Authors: E. A. Abdel-Hakim

Abstract:

The corn stemborer Sesamia cretica (Lederer) has been viewed in many parts of the world as a major pest of cultivated maize, graminaceous crops and sugarcane. Its life cycle comprises two different phases: one is the growth and developmental (non-diapause) phase, and the other is the diapause phase, which takes place in the last larval instar. Several problems associated with the use of conventional insecticides have strongly demonstrated the need for applying alternative safe compounds. Prominent among the prototypes of such prospective chemicals are the juvenoids, i.e., insect juvenile hormone (JH) mimics. In fact, the hormonal effect on metabolism has long been viewed as a secondary consequence of its direct action on specific energy-requiring biosynthetic mechanisms. Therefore, the present study was undertaken in a rather systematic fashion as a contribution towards clarifying the metabolic and energetic changes taking place in non-diapause wandering larvae as regulated by a JH mimic. For this purpose, we applied two different doses of the JH mimic (Ro 11-0111) topically, in a single (standard) dose of 100 µg or in a single dose of 20 µg/g bw, in 1 µl acetone, at the onset of the non-diapause wandering larval (WL) stage. Energetic data were obtained by indirect calorimetry, converting respiratory gas exchange volumetric data, measured manometrically using a Warburg constant-volume respirometer, to caloric units (g-cal/g fw/h). In brief, the treated larvae underwent supernumerary larval moults; the wandering larvae thus proved to possess a potential for restoration of larval programming whereby S. cretica can overcome stresses even at this critical developmental period. The results obtained, particularly with the high dose used, show that 98% of wandering larvae were rescued to survive for up to one month (vs. 5 days for normal controls), with the final formation of larval-adult intermediates.
Also, the solvent controls resulted in about 22% additional, but stationary, moultings. The basal respiratory metabolism (O2 uptake and CO2 output) of the WL, whether treated or not, followed reciprocal U-shaped curves over their developmental duration. The lowest points occurred near the day of prepupal formation: 571±187 µl O2/g fw/h and 553±181 µl CO2/g fw/h in un-treated larvae, in contrast to 210±48 µl O2/g fw/h and 335±81 µl CO2/g fw/h in larvae treated with JH. Un-treated (normal) larvae proved to utilize carbohydrates as the principal source of energy supply, these being fully oxidised without sparing any appreciable amount for endergonic conversion to fats. For the juvenoid-treated larvae compared with their acetone-treated control equivalents, there were no distinguishable differences: both were observed to utilize carbohydrates as the sole source for energy demand and to convert endergonically almost identical percentages to fats. Overall, both treated and un-treated WL utilized carbohydrates as the principal source of energy during this stage.
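The conversion from gas-exchange volumes to caloric units can be illustrated with the respiratory quotient (RQ) and the abbreviated Weir equation, a common indirect-calorimetry convention; the abstract does not state which conversion factors were actually used, so this is a generic sketch:

```python
def respiratory_quotient(vco2, vo2):
    """RQ = VCO2/VO2: ~1.0 for pure carbohydrate, ~0.7 for pure fat oxidation."""
    return vco2 / vo2

def energy_expenditure_kcal(vo2_l, vco2_l):
    """Abbreviated Weir equation: kcal from litres of O2 consumed and CO2 produced."""
    return 3.941 * vo2_l + 1.106 * vco2_l

# Un-treated prepupal means from the study: 571 µl O2 and 553 µl CO2 per g fw per h
rq = respiratory_quotient(553, 571)
```

An RQ near 1.0, as obtained here, is consistent with the abstract's finding that un-treated larvae oxidise carbohydrates almost exclusively.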

Keywords: juvenile hormone, respiratory metabolism, Sesamia cretica, wandering phase

Procedia PDF Downloads 294
367 Fabrication of Electrospun Green Fluorescent Protein Nano-Fibers for Biomedical Applications

Authors: Yakup Ulusu, Faruk Ozel, Numan Eczacioglu, Abdurrahman Ozen, Sabriye Acikgoz

Abstract:

GFP, discovered in the mid-1970s, has been used as a marker ever since genetic studies replicated it. In biotechnology, cell biology and molecular biology, the GFP gene is frequently used as a reporter of expression. In modified forms, it has been used to make biosensors, and many animals have been created that express GFP as evidence that a gene can be expressed throughout a given organism. The locations of proteins labeled with GFP can be determined, and so cell connections can be monitored, gene expression can be reported, protein-protein interactions can be observed, and signals that create events can be detected. Additionally, monitoring GFP is noninvasive; it can be detected under UV light because it simply generates fluorescence. Moreover, GFP is a relatively small and inert molecule that does not seem to interfere with any biological processes of interest. The synthesis of GFP involves several steps: construction of the plasmid system, transformation into E. coli, and production and purification of the protein. The GFP-carrying plasmid vector pBAD-GFPuv was digested using two different restriction endonucleases (NheI and EcoRI), and the GFP DNA fragment was gel purified before cloning. The GFP-encoding DNA fragment was ligated into the pET28a plasmid using the NheI and EcoRI restriction sites. The final plasmid was named pETGFP, and DNA sequencing of this plasmid indicated that the hexa-histidine-tagged GFP was correctly inserted. Histidine-tagged GFP was expressed in an Escherichia coli BL21 DE3 (pLysE) strain. The strain was transformed with the pETGFP plasmid and grown on Luria-Bertani (LB) plates under kanamycin and chloramphenicol selection. E. coli cells were grown up to an optical density (OD600) of 0.8, induced by the addition of isopropyl β-D-thiogalactopyranoside (IPTG) to a final concentration of 1 mM, and then grown for an additional 4 h. The amino-terminal hexa-histidine tag facilitated purification of the GFP using a His-Bind affinity chromatography resin (Novagen).
The purity of the GFP protein was analyzed by 12% sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE). The concentration of the protein was determined by UV absorption at 280 nm (Varian Cary 50 Scan UV/VIS spectrophotometer). GFP-polymer composite nanofibers were produced by electrospinning, using the GFP solution (10 mg/mL) and the polymer precursor polyvinylpyrrolidone (PVP, Mw = 1,300,000) as starting material and template, respectively. For the fabrication of nanofibers with different fiber diameters, sol-gel solutions comprising 0.40, 0.60 or 0.80 g PVP (depending upon the desired fiber diameter) and 100 mg GFP in 10 mL of a water:ethanol (3:2) mixture were prepared, and the solution was then deposited on the collecting plate via electrospinning at 10 kV with a feed rate of 0.25 mL/h using a Spellman electrospinning system. The results show that GFP-based nanofibers can be used in numerous biomedical applications such as bio-imaging, bio-mechanics, bio-materials and tissue engineering.

Keywords: biomaterial, GFP, nano-fibers, protein expression

Procedia PDF Downloads 320