Search results for: component composition
977 Preparations of Fruit Nectars from Fresh Fruit Juices-Analyses before and after Storage
Authors: Youcef Amir
Abstract:
The consumption of beverages continues to grow worldwide with the increasing population. Pure fruit juices and high-quality nectars can exert protective effects on human health because of their natural bioactive components, whereas sodas and carbonated drinks containing synthetic food additives are considered responsible for several consumer pathologies such as obesity, diabetes, and non-alcoholic fatty liver disease. The nutritional and therapeutic virtues of fruit juices include a remarkable antioxidant power and anti-cancer activity linked to their richness in digestible and indigestible sugars, vitamins, mineral salts, carotenoids and phenolic compounds. The main reasons that led us to produce these fruit derivatives are the unavailability of the fresh fruits in question throughout the year and the variations in the chemical composition of these fruits, in both major and minor components. We therefore tested the physicochemical characteristics of each fruit juice and pulp separately and afterwards those of the formulated cocktails. The fresh juices used in our experiments were obtained from the following fruits from north-central Algeria: prickly pear, pomegranate, melon and red orange. The formulations of these fruit juices were tested after several trials comprising sensorial analysis, physicochemical factors (pH, titratable acidity, Brix degree, formol index, water content, total ash, total and reducing sugars, vitamin C, carotenoids, phenolic compounds) and microbial analysis after a storage period. Citric acid (E330), sucrose and water were added to the pure juice proportions, followed by pasteurisation. These products were analysed from the physicochemical, microbial and sensorial viewpoints after a storage period of one month, according to national legislation, to evaluate their stability. 
The prepared beverages showed good physicochemical parameters, acceptable sensorial characteristics, and microbial stability and safety before and after the storage period. We also measured appreciable amounts of minor compounds with health properties.
Keywords: fruit juices, microbial analyses, nectars, physicochemical characteristics, sensorial analysis, storage period
Procedia PDF Downloads 229
976 Evaluation of the Phenolic Composition of Curcumin from Different Turmeric (Curcuma longa L.) Extracts: A Comprehensive Study Based on Chemical Turmeric Extract, Turmeric Tea and Fresh Turmeric Juice
Authors: Beyza Sukran Isik, Gokce Altin, Ipek Yalcinkaya, Evren Demircan, Asli Can Karaca, Beraat Ozcelik
Abstract:
Turmeric (Curcuma longa L.) is used as a food additive (spice), preservative and coloring agent in Asian countries, including China and South East Asia, and is also considered a medicinal plant. Traditional Indian medicine values turmeric powder for the treatment of biliary disorders, rheumatism, and sinusitis. It has a rich polyphenol content. Turmeric owes its yellow color mainly to the presence of three major pigments: curcumin (1,7-bis(4-hydroxy-3-methoxyphenyl)-1,6-heptadiene-3,5-dione), demethoxycurcumin and bisdemethoxycurcumin. These curcuminoids are recognized to have high antioxidant activities, and curcumin is the major constituent of Curcuma species. Method: To prepare turmeric tea, 0.5 gram of turmeric powder was brewed with 250 ml of water at 90°C for 10 minutes. For the juice, 500 grams of fresh turmeric were washed and peeled prior to squeezing. Both turmeric tea and turmeric juice were passed through 45 µm filters and stored at -20°C in the dark for further analyses. Curcumin was extracted from 20 grams of turmeric powder with 70 ml of ethanol solution (95:5 ethanol/water v/v) in a water bath at 80°C for 6 hours; extraction was continued for a further 2 hours after the addition of 30 ml of ethanol. Ethanol was removed by rotary evaporation, and the remaining extract was stored at -20°C in the dark. Total phenolic content and phenolic profile were determined by spectrophotometric analysis and ultra-fast liquid chromatography (UFLC), respectively. Results: The total phenolic contents of the ethanolic extract of turmeric, turmeric juice, and turmeric tea were determined as 50.72, 31.76 and 29.68 ppt, respectively. The ethanolic extract of turmeric, turmeric juice, and turmeric tea were injected into the UFLC and analyzed for curcumin content. The curcumin contents of the ethanolic extract of turmeric, turmeric juice, and turmeric tea were 4067.4 ppm, 156.7 ppm and 1.1 ppm, respectively. Significance: Turmeric is known as a good source of curcumin. 
According to the results, it can be stated that turmeric tea is not an efficient vehicle for curcumin consumption; turmeric juice can be preferred to turmeric tea for its higher curcumin content. The ethanolic extract of turmeric showed the highest curcumin content in both the spectrophotometric and chromatographic analyses. Nonpolar solvents, and carriers with polar binding sites, should be considered for curcumin consumption due to its nonpolar nature.
Keywords: phenolic compounds, spectrophotometry, turmeric, UFLC
Procedia PDF Downloads 201
975 Geochemical and Petrological Survey in Northern Ethiopia Basement Rocks for Investigation of Gold and Base Metal Mineral Potential in Finarwa, Southeast Tigray, Ethiopia
Authors: Siraj Beyan Mohamed, Woldia University
Abstract:
The study was conducted in the northern Ethiopian basement rocks of the Finarwa area and its surroundings, south-eastern Tigray. From field observations, the geology of the area has been described and mapped based on the mineral composition, texture, structure, and colour of both fresh and weathered rocks. Inductively coupled plasma mass spectrometry (ICP-MS) and atomic absorption spectrometry (AAS) were used to analyse gold and base metal mineralization. Under the microscope, the ore minerals are commonly base metal sulphides (pyrrhotite, chalcopyrite, pentlandite) occurring in variable proportions. Galena, chalcopyrite, pyrite, and gold are hosted in quartz veins; pyrite occurs both in the quartz veins and in the enclosing rocks as a primary mineral. The base metal sulphides occur as disseminations, vein fillings, and replacements. In the geochemical analyses, determination of the threshold of geochemical anomalies is directly related to the identification of mineralization information. The stream sediment and soil samples indicated that the most promising mineralization in the prospect area comprises gold (Au), copper (Cu), and zinc (Zn). This is also supported by the abundance of chalcopyrite and sphalerite in some highly altered samples. The stream sediment geochemical survey data show relatively higher values for zinc compared to Pb and Cu. The moderate concentration of base metals in some of the samples indicates the availability of base metal mineralization in the study area, requiring further investigation. The rock and soil geochemistry shows significant concentrations of gold, with maximum values of 0.33 ppm and 0.97 ppm in the south-western part of the study area. 
In Finarwa, artisanal gold mining has become an increasingly widespread economic activity of the local people, undertaken by socially differentiated groups with a wide range of education levels and economic backgrounds and incorporating a wide variety of labour-intensive activities without mechanisation.
Keywords: gold, base metal, anomaly, threshold
Procedia PDF Downloads 126
974 Performance Improvement of Long-Reach Optical Access Systems Using Hybrid Optical Amplifiers
Authors: Shreyas Srinivas Rangan, Jurgis Porins
Abstract:
Internet traffic has increased exponentially due to the high data rates demanded by users, and the constantly growing metro and access networks are focused on improving the maximum transmission distance of long-reach optical networks. A common way to improve the maximum transmission distance at the component level is to use broadband optical amplifiers. The erbium-doped fiber amplifier (EDFA) provides high amplification with a low noise figure, but its operation is limited to the C-band and L-band. In contrast, the Raman amplifier exhibits a wide amplification spectrum, and negative effective noise figure values can be achieved; however, this requires high-power pump sources, which may pose fire hazards and damage the optical system. In this paper, we implement a hybrid optical amplifier configuration combining an EDFA and a Raman amplifier to exploit the advantages of both and improve the reach of the system. Using this setup, we analyze the maximum transmission distance of the network by obtaining a correlation diagram between the length of the single-mode fiber (SMF) and the bit error rate (BER). The hybrid amplifier configuration is implemented in a wavelength division multiplexing (WDM) system with a BER of 10⁻⁹ using the NRZ modulation format, and the gain uniformity, signal-to-noise ratio (SNR), pump source efficiency, and optical signal gain efficiency of the amplifier are studied in a mathematical modelling environment. 
Numerical simulations were implemented in the RSoft OptSim simulation software, based on the nonlinear Schrödinger equation solved by the split-step Fourier method, with the Monte Carlo method used for estimating the BER.
Keywords: Raman amplifier, erbium-doped fiber amplifier, bit error rate, hybrid optical amplifiers
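The split-step Fourier solution of the nonlinear Schrödinger equation mentioned above can be sketched as follows. This is a minimal illustration, not the OptSim implementation; the fiber parameters (beta2, gamma), step sizes, and Gaussian input pulse are assumptions chosen only for demonstration.

```python
import numpy as np

def split_step_nlse(a0, dt, dz, nz, beta2=-21.7e-27, gamma=1.3e-3):
    """Propagate the field envelope a0 over nz steps of length dz (metres)
    with the symmetric split-step Fourier method: dispersion is applied in
    the frequency domain, Kerr nonlinearity in the time domain (loss omitted)."""
    n = a0.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)              # angular frequency grid
    half_disp = np.exp(0.5j * beta2 * w**2 * (dz / 2))   # half dispersion step
    a = a0.astype(complex)
    for _ in range(nz):
        a = np.fft.ifft(half_disp * np.fft.fft(a))       # half linear step
        a = a * np.exp(1j * gamma * np.abs(a)**2 * dz)   # full nonlinear step
        a = np.fft.ifft(half_disp * np.fft.fft(a))       # half linear step
    return a

# Assumed example: 2 ps Gaussian pulse, 1 mW peak power, 5 km of fiber
t = np.linspace(-12.5e-12, 12.5e-12, 1024)
a0 = np.sqrt(1e-3) * np.exp(-t**2 / (2 * (2e-12)**2))
a1 = split_step_nlse(a0, dt=t[1] - t[0], dz=100.0, nz=50)
```

Because the modelled fiber is lossless, the total pulse energy is conserved through propagation, which is a convenient sanity check on such an implementation.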
Procedia PDF Downloads 71
973 The Power of Inferences and Assumptions: Using a Humanities Education Approach to Help Students Learn to Think Critically
Authors: Randall E. Osborne
Abstract:
A four-step ‘humanities’ thought model has been used in an interdisciplinary course for almost two decades and has been shown to aid students in becoming more inclusive in their world view. Because a lack of tolerance for ambiguity can interfere with this progression, we developed an assignment that appears to have assisted students in developing more tolerance for ambiguity and, therefore, opened them up to make more progress on the critical thought model. The four-step critical thought model (built from a humanities education approach) is used in an interdisciplinary course on prejudice, discrimination, and hate in an effort to minimize egocentrism and promote sociocentrism in college students. A fundamental barrier to this progression is a lack of tolerance for ambiguity. The approach to the course is built on the assumption that tolerance for ambiguity (characterized by a dislike of uncertain or ambiguous situations, or situations in which expected behaviors are uncertain) will likely serve as a barrier (if tolerance is low) or facilitator (if tolerance is high) of active engagement with assignments. Given that active engagement with course assignments is necessary to promote an increase in critical thought and multicultural attitude change, low tolerance for ambiguity inhibits critical thinking and, ultimately, multicultural attitude change. As expected, those students showing the least decrease (or even an increase) in intolerance across the semester earned lower grades in the course than those students who showed a significant decrease in intolerance, t(19) = 4.659, p < .001. Students who demonstrated the most change in their tolerance for ambiguity (an increasing ability to tolerate ambiguity) earned the highest grades in the course. This is especially significant because faculty did not know students' scores on this measure until after all assignments had been graded and course grades assigned. 
An assignment designed to assist students in making their assumption and inference processes visible, so they could be explored, was implemented with the goal that this exploration would promote more tolerance for ambiguity, which, as already outlined, promotes critical thought. The assignment offers students two options and then requires them to explore what they have learned about inferences and/or assumptions. This presentation outlines the assignment, demonstrates the humanities model and what students learn from particular assignments, and shows how it fosters a change in tolerance for ambiguity, which serves as the foundational component of critical thinking.
Keywords: critical thinking, humanities education, sociocentrism, tolerance for ambiguity
Procedia PDF Downloads 274
972 The Effectiveness and the Factors Affecting Farmers' Adoption of Technological Innovation for Citrus Gerga Lebong in Bengkulu, Indonesia
Authors: Umi Pudji Astuti, Dedi Sugandi
Abstract:
The effectiveness of agricultural extension is determined by the components of the agricultural extension system, among which are the extension methods. Effective methods should be selected and defined based on the characteristics of the target audience, the resources, the materials, and the objectives to be achieved. Citrus agribusiness development in Lebong is supported by the role of stakeholders and citrus farmers, as well as by proper dissemination methods. Adoption in the extension process can be substantially interpreted as a process of behavioral change, in knowledge (cognitive), attitudes (affective), and skills (psychomotor), in a person after receiving an 'innovation' conveyed by extension workers to the target community. Knowledge and perception are needed as a first step in adopting an innovation, especially in citrus agribusiness development in Lebong. The adoption of a specific technology is influenced by internal factors and by farmers' perceptions of the technological innovation; internal factors include formal education, farming experience, land ownership, and farm production. The objectives of this study were: 1) to analyze the effectiveness of the field trip method in improving farmers' cognitive and affective abilities; 2) to determine the relationship between adoption level and farmers' knowledge; and 3) to analyze the factors that influence farmers' adoption of citrus technology innovation. The study surveyed 40 respondents in Rimbo Pengadang Sub-district, Lebong District, in 2014. Data were analyzed descriptively and with parametric statistics (multiple linear regression). 
The results showed that: 1) the field trip method was effective in improving farmer knowledge (23.17%) and positively affected farmer attitudes; 2) farmers' adoption level and knowledge of PTKJS innovation were positively and very closely related; and 3) the factors that influence farmers' adoption level are internal factors (education, knowledge, and intensity of training) and external factors (distance from the house to the garden and from the house to the production facilities shop).
Keywords: affect, adoption of technology, citrus Gerga, effectiveness of dissemination
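The multiple linear regression used to relate adoption to internal and external factors can be sketched as below; the farmer data here are hypothetical values invented only to show the fitting step, not the study's actual survey results.

```python
import numpy as np

# Hypothetical survey rows: education (years), knowledge score, trainings attended
X = np.array([
    [6, 55, 1], [9, 70, 2], [12, 80, 4], [6, 50, 0],
    [9, 65, 3], [12, 85, 5], [6, 60, 2], [9, 75, 4],
], dtype=float)
y = np.array([40, 55, 70, 35, 60, 80, 50, 68], dtype=float)  # adoption score

# Fit adoption = b0 + b1*education + b2*knowledge + b3*training by least squares
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

The coefficient vector then ranks how strongly each factor is associated with adoption, which is the role multiple linear regression plays in the analysis described above.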
Procedia PDF Downloads 194
971 Effect of Curing Temperature on the Textural and Rheological Properties of Gelatine-SDS Hydrogels
Authors: Virginia Martin Torrejon, Binjie Wu
Abstract:
Gelatine is a protein biopolymer obtained from the partial hydrolysis of animal tissues containing collagen, the primary structural component of connective tissue. Gelatine hydrogels have attracted considerable research interest in recent years as an alternative to synthetic materials due to their outstanding gelling properties, biocompatibility and compostability. Surfactants, such as sodium dodecyl sulfate (SDS), are often used in hydrogel solutions as surface modifiers or solubility enhancers, and their incorporation can influence the hydrogel's viscoelastic properties and, in turn, its processing and applications. The literature usually focuses on the impact of formulation parameters (e.g., gelatine content, gelatine strength, additive incorporation) on gelatine hydrogel properties, but processing parameters, such as curing temperature, are commonly overlooked. For example, some authors have reported a decrease in gel strength at lower curing temperatures, but there is a lack of systematic viscoelastic characterisation of high-strength gelatine and gelatine-SDS systems over a wide range of curing temperatures. This knowledge is essential to meet and adjust the technological requirements of different applications (e.g., viscosity, setting time, gel strength or melting/gelling temperature). This work investigated the effect of curing temperature (10, 15, 20, 23, 25 and 30°C) on the elastic modulus (G') and melting temperature of high-strength gelatine-SDS hydrogels, at 10 wt% and 20 wt% gelatine content, by small-amplitude oscillatory shear rheology coupled with Fourier transform infrared spectroscopy. It also correlates the gel strength obtained from rheological measurements with that measured by texture analysis. 
The rheological behaviour of gelatine and gelatine-SDS hydrogels strongly depended on the curing temperature, and their gel strength and melting temperature can be modified slightly to match given processing and application needs. Lower curing temperatures led to gelatine and gelatine-SDS hydrogels with considerably higher storage modulus, but with melting temperatures lower than those of the gels cured at higher temperatures, which had lower gel strength. This effect was more pronounced at longer timescales and is attributed to the development of thermally resistant structures in the lower-strength gels cured at higher temperatures.
Keywords: gelatine gelation kinetics, gelatine-SDS interactions, gelatine-surfactant hydrogels, melting and gelling temperature of gelatine gels, rheology of gelatine hydrogels
Procedia PDF Downloads 102
970 Art History as Inspiration for Chefs: An Autoethnographic Study of Art History Education in a Restaurant
Authors: Marta Merkl
Abstract:
The ongoing project presented in this paper concerns how the author introduces chefs to the history of art through selected works of art. The author is originally an art historian, but since 2019 she has been working on a PhD research topic related to designing dining experiences in the restaurant context, including the role of sensory experiences and storytelling. Thanks to a scholarship, she can participate in the redesign of a fine dining restaurant called Onyx in Budapest, which was awarded two Michelin stars before the COVID-19 pandemic. The management of the restaurant wants to broaden the chefs' horizons and develop their creativity by introducing them to the chapters of the visual arts. There is a kind of polyphony in the mass of information about what a chef, a food designer, or anybody who makes food on an everyday basis should use as a source of inspiration for inventing and preparing new dishes: nostalgia, raw materials, cookbooks, etc. In today's world of fine dining, nature is the main inspiration for outstanding achievements, as exemplified by the Slovenian restaurant Hiša Franko** and its chef Ana Roš. The starting point for the project and the research was the idea of using art history as an inspiration for gastronomy. The research relies on data collection via interviews, ethnography, and autoethnography. In this case, the reflective introspection of the researcher is also relevant because the researcher is an important part of the process (GOULD, 1995). The paper reviews the findings of the autoethnography literature relevant to the topic. The literature review also points out that sustainability, eating as an experience, and the world of art can be linked: as ERDMANN and co-authors (1999) argue, the health dimension of sustainability has a component called 'joy of eating,' which implies strong ties to the experiential nature of eating. 
Therefore, it is worth comparing this with PINE and GILMORE's (1998) theory of the experience economy and with CSÍKSZENTMIHÁLYI's (1999) concept of flow, both of which give examples from gastronomy and art. The aim of the research is to map the experiences of the pilot project and the discourse between the art world and the actors of gastronomy. Another noteworthy aspect is whether the chefs are willing to use art history as an inspiration.
Keywords: art history, autoethnography, chef, education, experience, food preparation, inspiration, sustainability
Procedia PDF Downloads 145
969 Influence of Smoking on the Pulmonary Genetic and Epigenetic Toxicity of Fine and Ultrafine Air Pollution PM
Authors: Y. Landkocz, C. Lepers, P. J. Martin, B. Fougère, F. Roy Saint-Georges, A. Verdin, F. Cazier, F. Ledoux, D. Courcot, F. Sichel, P. Gosset, P. Shirali, S. Billet
Abstract:
In 2013, the International Agency for Research on Cancer (IARC) classified air pollution and fine particles as carcinogenic to humans. Causal relationships exist between elevated ambient levels of airborne particles and increased mortality and morbidity, including pulmonary diseases such as lung cancer. However, due to the double complexity of the physicochemical properties of particulate matter (PM) and of tumor mechanistic processes, the mechanisms of action remain not fully elucidated. Furthermore, because air pollution PM and tobacco smoke share several properties, such as the route of exposure and chemical composition, potential mechanisms of synergy could exist, and smoking could be an aggravating factor in particle toxicity. In order to identify mechanisms of action of particles according to their size, two PM samples were collected in the urban-industrial area of Dunkerque: PM0.03-2.5 and PM0.33-2.5. The overall cytotoxicity of the fine particles was determined on human bronchial cells (BEAS-2B). The toxicological study then focused on the metabolic activation of the organic compounds coated onto the PM and on the genetic and epigenetic changes induced in a co-culture model of BEAS-2B cells and alveolar macrophages isolated from bronchoalveolar lavages performed in smokers and non-smokers. The results showed (i) the contribution of the ultrafine fraction of atmospheric particles to the genotoxic (e.g., DNA double-strand breaks) and epigenetic mechanisms (e.g., promoter methylation) involved in tumor processes, and (ii) the influence of smoking on the cellular response. Three main conclusions can be discussed. First, our results showed the ability of the particles to induce deleterious effects potentially involved in the initiation and promotion stages of carcinogenesis. Second, smoking affects the nature of the induced genotoxic effects. 
Finally, the in vitro cell model developed here, using bronchial epithelial cells and alveolar macrophages, takes into account quite realistically some of the cell interactions existing in the lung.
Keywords: air pollution, fine and ultrafine particles, genotoxic and epigenetic alterations, smoking
Procedia PDF Downloads 348
968 The Psychometric Properties of an Instrument to Estimate Performance in Ball Tasks Objectively
Authors: Kougioumtzis Konstantin, Rylander Pär, Karlsteen Magnus
Abstract:
Ball skills, as a subset of fundamental motor skills, are predictors of performance in sports. Currently, most tools evaluate ball skills using subjective ratings. The aim of this study was to examine the psychometric properties of a newly developed instrument to objectively measure ball-handling skills (BHS-test) using digital instrumentation. Participants were a convenience sample of 213 adolescents (age M = 17.1 years, SD = 3.6; 55% females, 45% males) recruited from upper secondary schools and invited to a sports hall for the assessment. The 8-item instrument incorporated both accuracy-based ball-skill tests and repetitive-performance tests with a ball. Testers counted performance manually in four tests (one throwing and three juggling tasks), while assessment was technologically enhanced in the other four (one balancing and three rolling tasks) using a ball machine, a Kinect camera, and balls with motion sensors. 3D-printing technology was used to construct the equipment, and all results were administered digitally with smartphones/tablets, computers, and a specially constructed application that sent data to a server. The instrument was deemed reliable (α = .77). Principal component analysis was conducted on a random subset of the sample (53 participants), and latent variable modeling was employed to confirm the structure with the remaining subset (160 participants). The analysis showed good factorial validity, with one factor explaining 57.90% of the total variance. Four loadings were larger than .80, two more exceeded .76, and the remaining two were .65 and .49. The one-factor solution was confirmed by a first-order model with one general factor and an excellent fit between model and data (χ² = 16.12, df = 20; RMSEA = .00, 90% CI .00–.05; CFI = 1.00; SRMR = .02). The loadings on the general factor ranged between .65 and .83. Our findings indicate good reliability and construct validity for the BHS-test. 
To develop the instrument further, more studies are needed with various age groups, e.g., children. We suggest using the BHS-test for diagnostic or assessment purposes in talent development and sports participation interventions that focus on ball games.
Keywords: ball-handling skills, ball-handling ability, technologically-enhanced measurements, assessment
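The principal component analysis step reported above can be sketched with simulated data; the latent-factor structure, loadings, and noise level below are assumptions that merely mimic the kind of one-factor pattern the study describes, not its actual item scores.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate 160 participants' scores on 8 ball-skill items driven by one
# latent ability factor plus item-specific noise (all values hypothetical)
ability = rng.normal(size=(160, 1))
loadings = np.array([[0.80, 0.82, 0.83, 0.78, 0.76, 0.76, 0.65, 0.49]])
scores = ability @ loadings + 0.6 * rng.normal(size=(160, 8))

# PCA on the correlation matrix: eigenvalues give variance per component
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]   # sorted, largest first
explained = eigvals / eigvals.sum()        # proportion of total variance
```

A dominant first component, as in the study's one-factor solution, shows up as a large leading entry in `explained`.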
Procedia PDF Downloads 94
967 Algorithm for Predicting Cognitive Exertion and Cognitive Fatigue Using a Portable EEG Headset for Concussion Rehabilitation
Authors: Lou J. Pino, Mark Campbell, Matthew J. Kennedy, Ashleigh C. Kennedy
Abstract:
A concussion is a complex and nuanced injury, with cognitive rest being a key component of recovery. Cognitive overexertion during rehabilitation from a concussion is associated with delayed recovery, yet daily living imposes cognitive demands that may be unavoidable and difficult to quantify. Therefore, a portable tool capable of alerting patients before cognitive overexertion occurs could allow patients to maintain their quality of life while preventing symptoms and recovery setbacks. EEG allows a sensitive measure of cognitive exertion, but clinical 32-lead EEG headsets are not practical for day-to-day concussion rehabilitation management. However, commercially available and affordable portable EEG headsets now exist and can potentially be used to continuously monitor cognitive exertion during mental tasks and alert the wearer to overexertion, with the aim of preventing symptoms and speeding recovery. The objective of this study was to test an algorithm for predicting cognitive exertion from EEG data collected with a portable headset. EEG data were acquired from 10 participants (5 males, 5 females). Each participant wore a portable 4-channel EEG headband while completing 10 tasks: rest (eyes closed), rest (eyes open), three logic puzzles of increasing difficulty, three multiplication tasks of increasing difficulty, rest (eyes open), and rest (eyes closed). After each task, the participant reported their perceived level of cognitive exertion using the NASA Task Load Index (TLX). Each participant then completed a second session on a different day. A customized machine learning model was created for each participant using data from the first session, and the performance of each model was then tested using data from the second session. The mean correlation coefficient between TLX scores and predicted cognitive exertion was 0.75 ± 0.16. 
The results support the efficacy of the algorithm for predicting cognitive exertion and demonstrate that the algorithms developed in this study, used with portable EEG devices, have the potential to aid the concussion recovery process by monitoring and warning patients of cognitive overexertion. Preventing cognitive overexertion during recovery may reduce the number of symptoms a patient experiences and may help speed the recovery process.
Keywords: cognitive activity, EEG, machine learning, personalized recovery
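The reported agreement between predicted exertion and TLX scores is a Pearson correlation coefficient, which can be sketched as below; the per-task TLX values and model outputs are hypothetical numbers for illustration, not the study's data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two score sequences."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc**2) * np.sum(yc**2)))

# Hypothetical session-2 results for one participant: reported TLX per task
# versus the cognitive exertion predicted from the 4-channel EEG features
tlx = [10, 15, 35, 50, 65, 30, 45, 60, 12, 9]
predicted = [12, 18, 30, 52, 60, 33, 40, 66, 15, 11]
r = pearson_r(tlx, predicted)
```

Averaging such per-participant coefficients is what yields a summary like the 0.75 ± 0.16 reported above.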
Procedia PDF Downloads 220
966 Cluster Analysis and Benchmarking for Performance Optimization of a Pyrochlore Processing Unit
Authors: Ana C. R. P. Ferreira, Adriano H. P. Pereira
Abstract:
Given the frequent variation of mineral properties throughout the Araxá pyrochlore deposit, even when good homogenization work has been carried out before feeding the processing plants, high variability in operating quality and performance is to be expected. These results could be improved and standardized if the blend composition parameters that most influence the processing route were determined and the types of raw material then grouped by them, finally providing a reference with operational settings for each group. Associating the physical and chemical parameters of a unit operation through benchmarking, or even an optimal reference of metallurgical recovery and product quality, translates into reduced production costs, optimization of the mineral resource, and greater stability in the subsequent processes of the production chain that use the mineral of interest. A comprehensive exploratory data analysis to identify which characteristics of the ore are most relevant to the process route, combined with machine learning algorithms for grouping the raw material (ore) and associating these groups with reference variables in the process benchmark, is a reasonable approach to the standardization and improvement of mineral processing units. Clustering methods based on decision trees and K-means were employed, together with algorithms based on benchmarking theory, with criteria defined by the process team, in order to reference the best adjustments for processing the ore piles of each cluster. A clean user interface was created to present the outputs of the algorithm. The results were measured through the average time for adjustment and stabilization of the process after a new pile of homogenized ore enters the plant, as well as the average time needed to achieve the best processing result. Direct gains in the metallurgical recovery of the process were also measured. 
The results were promising, with a reduction in the adjustment and stabilization time when starting to process a new ore pile, as well as attainment of the benchmark. Also noteworthy are the gains in metallurgical recovery, which reflect a significant saving in ore consumption and a consequent reduction in production costs, hence a more rational use of the tailings dams and optimization of the life of the mineral deposit.
Keywords: mineral clustering, machine learning, process optimization, pyrochlore processing
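The K-means grouping of ore blends can be sketched as follows; the two blend features and their values are hypothetical, and this plain NumPy version merely illustrates the assign/update loop, not the plant's actual algorithm or criteria.

```python
import numpy as np

def kmeans(x, k, iters=50, seed=0):
    """Plain k-means: assign each sample to its nearest centroid, then move
    each centroid to the mean of its assigned samples."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = np.argmin(d2, axis=1)
        for j in range(k):
            pts = x[labels == j]
            if len(pts):              # keep the old centroid if a cluster empties
                centroids[j] = pts.mean(axis=0)
    return labels, centroids

# Hypothetical ore blends described by two assay features (e.g. grade, impurity)
rng = np.random.default_rng(1)
blend_a = rng.normal([2.5, 55.0], 0.1, size=(30, 2))
blend_b = rng.normal([3.4, 40.0], 0.1, size=(30, 2))
x = np.vstack([blend_a, blend_b])
labels, centroids = kmeans(x, k=2)
```

Each cluster of piles can then be paired with its benchmark operational settings, which is the role clustering plays in the approach described above.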
Procedia PDF Downloads 144
965 A Study of Semantic Analysis of LED-Illustrated Traffic Directional Arrows in Different Styles
Authors: Chia-Chen Wu, Chih-Fu Wu, Pey-Weng Lien, Kai-Chieh Lin
Abstract:
In the past, the most widely adopted light sources were incandescent bulbs, but with the appearance of LED light sources, traditional sources have gradually been replaced by LEDs because of their numerous superior characteristics. However, many existing standards do not apply to LEDs, as the two light sources are characterized differently; this intensifies the significance of studies on LEDs. As a Kansei design study investigating the visual glare produced by traffic arrows implemented with LEDs, this study conducted a semantic analysis of the styles of traffic arrows used domestically and internationally. The results should help reduce drivers' misrecognition, which can result in failure to arrive at the destination or in traffic accidents. The study started with a literature review and a survey of the status quo before conducting experiments divided into two parts. The first part involved a screening experiment of arrow samples, where cluster analysis was conducted to choose five representative samples of LED displays. The second part was a semantic experiment on the display of arrows using LEDs, incorporating the five representative samples and ten selected adjectives. Analyzing the results with Quantification Theory Type I, it was found that, among the compositional elements of the arrows, the fletching was the factor that most influenced the adjectives. A 'no fletching' design was more abstract and vague: it lacked the ability to convey the intended message and might bear negative psychological connotations, including 'dangerous,' 'forbidden,' and 'unreliable.' The arrow design with '>'-shaped fletching was found to be more concrete and definite, showing positive connotations including 'safe,' 'cautious,' and 'reliable.' When a stimulus was placed at a greater distance, the glare was significantly reduced and the visual evaluation scores were higher. 
On the contrary, if the fletching and the shaft had a similar proportion, looking at the stimuli caused higher evaluation at a closer distance. The above results will be able to be applied to the design of traffic arrows by conveying information definitely and rapidly. In addition, drivers’ safety could be enhanced by understanding the cause of glare and improving visual recognizability.Keywords: LED, arrow, Kansei research, preferred imagery
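Quantification Theory Type I, used above to relate the categorical design factors of the arrows to the semantic ratings, is essentially multiple regression with dummy-coded qualitative predictors. A minimal sketch, with entirely hypothetical data and factor names (not the study's actual samples or adjectives):

```python
import numpy as np

# Hypothetical semantic-experiment data: each row is one evaluation of an
# arrow sample, described by two categorical design factors. All names and
# values below are illustrative, not the study's actual data.
fletching = ["none", ">", ">", "none", ">", "none"]   # fletching style
ratio = ["slim", "slim", "wide", "wide", "slim", "wide"]  # shaft proportion
score = np.array([2.0, 4.5, 4.0, 1.5, 5.0, 1.0])      # adjective rating

def dummy_code(levels):
    """One-hot encode a categorical factor, dropping the first level
    to avoid collinearity with the intercept column."""
    uniq = sorted(set(levels))
    cols = [[1.0 if v == u else 0.0 for v in levels] for u in uniq[1:]]
    return np.column_stack(cols), uniq[1:]

X1, names1 = dummy_code(fletching)
X2, names2 = dummy_code(ratio)
X = np.column_stack([np.ones(len(score)), X1, X2])  # intercept + dummies

# Least-squares fit: the category coefficients play the role of
# Quantification Theory I "category scores".
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
print(dict(zip(["intercept"] + names1 + names2, np.round(coef, 3))))
```

The fitted coefficients quantify how much each design category shifts the rating relative to the baseline level; in this toy data, the "none" fletching level lowers the predicted rating, echoing the abstract's finding.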
Procedia PDF Downloads 247
964 A Novel Application of Cordycepin (Cordyceps sinensis Extract): Maintaining Stem Cell Pluripotency and Improving iPS Generation Efficiency
Authors: Shih-Ping Liu, Cheng-Hsuan Chang, Yu-Chuen Huang, Shih-Yin Chen, Woei-Cherng Shyu
Abstract:
Embryonic stem cells (ES) and induced pluripotent stem cells (iPS) are both pluripotent stem cells. In mouse stem cell culture, leukemia inhibitory factor (LIF) is used to maintain the pluripotency of stem cells in vitro. However, LIF is an expensive reagent. The goal of this study was to find a pure compound extracted from Chinese herbal medicine that could replace LIF in maintaining stem cell pluripotency and improve iPS generation efficiency. From 20 candidate traditional Chinese medicines, we found that Cordyceps militaris triggered the up-regulation of the stem cell activating genes Oct4 and Sox2 in MEF cells. Cordycepin, a major active component of Cordyceps militaris, also up-regulated Oct4 and Sox2 gene expression. Furthermore, we treated ES and iPS cells with different concentrations of Cordycepin (replacing LIF in the culture medium) to test whether it could maintain pluripotency. The results showed higher expression levels of several stem cell markers, including alkaline phosphatase, SSEA1, and Nanog, in 10 μM Cordycepin-treated ES and iPS cells compared to controls without LIF. Embryoid body formation and differentiation confirmed that medium containing 10 μM Cordycepin was capable of maintaining stem cell pluripotency after four passages. For mechanism analysis, microarray analysis indicated the extracellular matrix (ECM) and Jak/Stat signaling pathways as the top two deregulated pathways. In the ECM pathway, we determined that integrin αVβ5 expression levels and phosphorylated Src levels increased after Cordycepin treatment. In addition, phosphorylated Jak2 and phosphorylated Stat3 protein levels increased after Cordycepin treatment and were suppressed by the Jak2 inhibitor AG490. The expression of cytokines associated with the Jak2/Stat3 signaling pathway was also shown to be up-regulated by Q-PCR and ELISA assays.
Lastly, we used Oct4-GFP MEF cells to test iPS generation efficiency following Cordycepin treatment. We observed that 10 μM Cordycepin significantly increased iPS generation efficiency by day 21. In conclusion, we demonstrated that Cordycepin could maintain the pluripotency of stem cells through both the ECM and Jak2/Stat3 signaling pathways and improve iPS generation efficiency.
Keywords: cordycepin, iPS cells, Jak2/Stat3 signaling pathway, molecular biology
Procedia PDF Downloads 439
963 Different Approaches to Teaching a Database Course to Undergraduate and Graduate Students
Authors: Samah Senbel
Abstract:
Database design is a fundamental part of the computer science and information technology curricula in any school, as well as in the study of management, business administration, and data analytics. In this study, we compare the performance of two groups of students studying the same database design and implementation course at Sacred Heart University in the fall of 2018. Both courses used the same textbook and were taught by the same professor, one to seven graduate students and one to 26 undergraduate students (juniors). The undergraduate students were around 20 years old with little work experience, while the graduate students averaged 35 years old and were all employed in computer-related or management-related jobs. The textbook used was 'Database Systems: Design, Implementation, and Management' by Coronel and Morris, and the course was designed to follow the textbook at roughly a chapter per week. The first six weeks covered the design aspect of a database, followed by a paper exam. The next six weeks covered the implementation aspect of the database using SQL, followed by a lab exam. Since the undergraduate students are on a 16-week semester, we spent the last three weeks of their course covering NoSQL; this part of the course was not included in this study. After the course was over, we analyzed the results of the two groups of students. An interesting discrepancy was observed: in the database design part of the course, the average grade of the graduate students was 92%, while that of the undergraduate students was 77% on the same exam. In the implementation part of the course, we observed the opposite: the average grade of the graduate students was 65%, while that of the undergraduate students was 73%. The overall grades were quite similar: the graduate average was 78% and the undergraduate average was 75%. Based on these results, we concluded that having both classes follow the same time schedule was not beneficial, and an adjustment was needed.
The graduates could spend less time on design, while the undergraduates would benefit from more design time. In the fall of 2019, 30 students registered for the undergraduate course and 15 students registered for the graduate course. To test our conclusion, the undergraduates spent about 67% of the time (eight classes) on the design part of the course and 33% (four classes) on the implementation part, using the exact same exams as the previous year. This resulted in an improvement in their average grade on the design part from 77% to 83%, and in their implementation average grade from 73% to 79%. In conclusion, we recommend using two separate schedules for teaching the database design course. For undergraduate students, it is important to spend more time on the design part rather than the implementation part of the course, while for the older graduate students, we recommend spending more time on the implementation part, as that seems to be the part they struggle with, even though they have a higher understanding of the design component of databases.
Keywords: computer science education, database design, graduate and undergraduate students, pedagogy
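The revised schedule split and the grade shifts reported above can be sanity-checked with a few lines of arithmetic (a sketch; the variable names are ours, and the numbers are taken directly from the abstract):

```python
# Revised undergraduate schedule: 12 classes split between the two course parts
total_classes = 12
design_classes, implementation_classes = 8, 4

design_share = design_classes / total_classes        # ≈ 0.67
impl_share = implementation_classes / total_classes  # ≈ 0.33

# Undergraduate exam averages before (2018) and after (2019) the adjustment
design_before, design_after = 77, 83
impl_before, impl_after = 73, 79

print(f"design share: {design_share:.0%}, "
      f"design gain: +{design_after - design_before}, "
      f"implementation gain: +{impl_after - impl_before}")
```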
Procedia PDF Downloads 123
962 The Persistence of Abnormal Return on Assets: An Exploratory Analysis of the Differences between Industries and Differences between Firms by Country and Sector
Authors: José Luis Gallizo, Pilar Gargallo, Ramon Saladrigues, Manuel Salvador
Abstract:
This study offers an exploratory statistical analysis of the persistence of annual profits across a sample of firms from different European Union (EU) countries. To this end, a hierarchical Bayesian dynamic model has been used which enables the annual behaviour of those profits to be broken down into a permanent structural component and a transitory component, while also distinguishing between general effects affecting the industry as a whole to which each firm belongs and specific effects affecting each firm in particular. This breakdown enables the relative importance of those fundamental components to be more accurately evaluated by country and sector. Furthermore, the Bayesian approach allows different hypotheses to be tested about the homogeneity of the behaviour of the above components with respect to the sector and the country where the firm develops its activity. The data analysed come from a sample of 23,293 firms in EU countries selected from the AMADEUS database. The period analysed ran from 1999 to 2007, and 21 sectors were analysed, chosen in such a way that there was a sufficiently large number of firms in each country-sector combination for the industry effects to be estimated accurately enough for meaningful comparisons by sector and country. The analysis has been conducted by sector and by country from a Bayesian perspective, making the study more flexible and realistic since the estimates obtained do not depend on asymptotic results. In general terms, the study finds that, although the industry effects are significant, the firm-specific effects are more important, and their importance varies depending on the sector or the country in which the firm carries out its activity. The influence of firm effects accounts for around 81% of total variation and displays a significantly lower degree of persistence, with adjustment speeds oscillating around 34%. However, this pattern is not homogeneous but depends on the sector and country analysed.
Industry effects, which also depend on the sector and country analysed, have a more marginal importance but are significantly more persistent, with adjustment speeds oscillating around 7-8%; this degree of persistence is very similar for most of the sectors and countries analysed.
Keywords: dynamic models, Bayesian inference, MCMC, abnormal returns, persistence of profits, return on assets
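The "adjustment speed" language above maps onto a simple partial-adjustment (AR(1)) view of persistence, in which a speed of 34% corresponds to an autoregressive coefficient of about 0.66. The study itself uses a hierarchical Bayesian dynamic model; the following is only a simplified, frequentist sketch of the persistence notion, with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(mu, rho, sigma, n):
    """Partial-adjustment model: x_t - mu = rho * (x_{t-1} - mu) + eps_t.
    The 'adjustment speed' toward the long-run level mu is 1 - rho."""
    x = np.empty(n)
    x[0] = mu
    for t in range(1, n):
        x[t] = mu + rho * (x[t - 1] - mu) + rng.normal(0.0, sigma)
    return x

# Illustrative parameters echoing the abstract: firm effects adjust at
# ~34% per period (rho ≈ 0.66), industry effects at ~7-8% (rho ≈ 0.92).
firm = simulate_ar1(mu=0.0, rho=0.66, sigma=1.0, n=5000)
industry = simulate_ar1(mu=0.0, rho=0.92, sigma=1.0, n=5000)

def adjustment_speed(x):
    """Estimate rho by regressing x_t on x_{t-1}; return 1 - rho_hat."""
    rho_hat = np.polyfit(x[:-1], x[1:], 1)[0]
    return 1.0 - rho_hat

print(f"firm speed ≈ {adjustment_speed(firm):.2f}, "
      f"industry speed ≈ {adjustment_speed(industry):.2f}")
```

The more persistent series (industry) shows the slower estimated adjustment, matching the pattern the study reports.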
Procedia PDF Downloads 402
961 Determination of Phenolic Contents and Antioxidant Activities of Chenopodium quinoa Willd. Seed Extracts
Authors: Nilgün Öztürk, Hakan Sabahtin Ali, Hülya Tuba Kıyan
Abstract:
The genus Chenopodium belongs to the Amaranthaceae and is represented by approximately 250 species in the world, of which 15 species and three subspecies occur in Turkey. Chenopodium species are traditionally used to treat chest and abdominal pain, shortness of breath, cough, and neurological disorders. Chenopodium quinoa Willd. (quinoa) is native to the Andes region of South America (especially Peru and Bolivia) and is nowadays cultivated in many countries, including Turkey. The seeds of quinoa are rich in protein, and their phytochemical composition includes antioxidant substances such as polyphenolic compounds, flavonoids, vitamins, and minerals; anticancer and neuroprotective compounds such as tocotrienols; anti-inflammatory compounds such as carotenoids and anthocyanins; and also saponins and starch. Food products of quinoa such as cereal bars, pasta, and cornflakes are used in diets prescribed for many disorders, such as obesity, cardiovascular disease, hypertension, and celiac disease. Quinoa also appears to have antimicrobial, anti-inflammatory, and cholesterol-lowering properties because of its bioactive compounds. In the present study, the aqueous ethanolic extracts of the seeds of three differently coloured genotypes of quinoa were investigated for their antioxidant activities, using the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging activity, ferrous ion-chelating effect, ferric-reducing antioxidant power, and ABTS radical cation decolorization assays, and for their total phenolic contents using the Folin-Ciocalteu assay. Among the three genotypes of quinoa, the aqueous ethanolic extract of the red genotype had the highest total phenolic content (83.54 ± 2.12 mg gallic acid/100 g extract), whereas the extract of the white genotype had the lowest (70.66 ± 0.25 mg gallic acid/100 g).
According to the antioxidant activity results, the extracts showed a moderate reducing power, weak ABTS radical cation decolorization and ferrous ion-chelating effects, and very weak DPPH radical scavenging activity when compared to the positive standards.
Keywords: Amaranthaceae, antioxidant activity, Chenopodium quinoa Willd., total phenolic content
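The DPPH result reported above is conventionally expressed as a percent inhibition of the radical's absorbance. A minimal sketch using the standard formula (the absorbance readings below are hypothetical, not the study's measurements):

```python
def dpph_inhibition(a_control: float, a_sample: float) -> float:
    """Percent DPPH radical scavenging, using the standard formula
    %I = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical absorbance readings at 517 nm (illustrative values only)
print(round(dpph_inhibition(0.80, 0.52), 1))
```

A higher percentage means stronger radical scavenging; the "very weak" activity described above corresponds to small inhibition percentages relative to the positive standards.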
Procedia PDF Downloads 181
960 Intensification of Heat Transfer Using Al₂O₃-Cu/Water Hybrid Nanofluid in a Circular Duct Using Inserts
Authors: Muluken Biadgelegn Wollele, Mebratu Assaye Mengistu
Abstract:
Nanotechnology has created new opportunities for improving industrial efficiency and performance. One of the proposed approaches to improving the effectiveness of heat exchangers is the use of nanofluids to improve heat transfer performance. The thermal conductivity of nanoparticles, as well as their size, diameter, and volume concentration, all play a role in the rate of heat transfer. Nanofluids are commonly used in automobiles, energy storage, electronic component cooling, solar absorbers, and nuclear reactors. Convective heat transfer must be improved when designing thermal systems in order to reduce heat exchanger size, weight, and cost. Using roughened surfaces to promote heat transfer has been tried many times, and both active and passive heat transfer methods show potential for heat transfer improvement. Adopting the two methods together offers the added advantage of enhanced heat transfer; however, the pressure drop during flow must be considered. The current research therefore aims to increase heat transfer by adding a twisted tape insert to a plain tube, using an Al₂O₃-Cu hybrid nanofluid with water as the base fluid. A circular duct with inserts, a tube length of 3 meters, a hydraulic diameter of 0.01 meters, tube walls with a constant heat flux of 20 kW/m², and a twist ratio of 125 was used to investigate the Al₂O₃-Cu/H₂O hybrid nanofluid with inserts. The temperature distribution is better than in conventional tube designs due to the stronger tangential contact and swirl induced by the twisted tape. The Nusselt number values of plain twisted-tape tubes are 1.5-2.0 percent higher than those of plain tubes. When twisted tape is used instead of a plain tube, the performance evaluation criterion improves by 1.01 times.
A heat exchanger suitable for a range of applications can thus be designed using a combined analysis that incorporates both passive and active methods.
Keywords: nanofluids, active method, passive method, Nusselt number, performance evaluation criteria
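The performance evaluation criterion figure of 1.01 quoted above is consistent with the commonly used equal-pumping-power definition, PEC = (Nu/Nu₀)/(f/f₀)^(1/3). The abstract does not state its exact formula or friction-factor data, so the sketch below assumes this standard definition and an illustrative friction penalty:

```python
def pec(nu_ratio: float, f_ratio: float) -> float:
    """Thermal performance evaluation criterion at equal pumping power,
    using the common definition PEC = (Nu/Nu0) / (f/f0)**(1/3).
    Nu/Nu0 is the Nusselt number enhancement over the plain tube and
    f/f0 the friction-factor penalty (assumed values, not the study's)."""
    return nu_ratio / f_ratio ** (1.0 / 3.0)

# With the ~2% Nusselt enhancement reported above and an assumed ~3%
# friction-factor penalty, the PEC lands near the reported 1.01:
print(round(pec(1.02, 1.03), 3))
```

A PEC above 1 means the insert's heat transfer gain outweighs its pumping-power cost.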
Procedia PDF Downloads 75
959 A First-Principles Molecular Dynamics Study on Li+ Solvation Structures in THF/MTHF Containing Electrolytes for Lithium Metal Batteries
Authors: Chiu-Neng Su, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang
Abstract:
In lithium-ion batteries (LIBs), the solid-electrolyte interphase (SEI) layer, which forms on the anode surface, plays a crucial role in stabilizing battery performance. Over the past two decades, efforts to enhance LIB electrolytes have primarily focused on refining the quality of SEI components. Despite these endeavors, several observed phenomena related to the SEI layer remain inadequately explained. Consequently, there has been a significant surge in research interest in the behavior of electrolyte solvation structures as a way to explain improvements in battery performance. In this study, we explored the solvation structures of LiPF₆ in a mixture of the organic solvents tetrahydrofuran (THF) and 2-methyl-tetrahydrofuran (MTHF) using ab initio molecular dynamics (AIMD) simulations. We investigated the solvation structure of electrolytes with different salt concentrations, a low-concentration electrolyte (1.0 M LiPF₆ in a 1:1 v/v mixture of THF and MTHF) and a high-concentration electrolyte (2.0 M LiPF₆ in a 1:1 v/v mixture of THF and MTHF), and compared them with a conventional electrolyte (1.0 M LiPF₆ in a 1:1 v/v mixture of ethylene carbonate (EC) and dimethyl carbonate (DMC)). Furthermore, the reduction stability of the Li+ solvation structures in these electrolyte systems was investigated. It was found that the first solvation shell of Li+ consists primarily of THF. We also analyzed the molecular orbital energy levels to understand the reduction stability of these solvents. Compared with the solvation sheath of the commercial electrolyte, the THF/MTHF-containing electrolytes have a higher lowest unoccupied molecular orbital (LUMO) energy level, resulting in improved reduction and interface stability. It has been shown that a Li-Al alloy can significantly improve cycle life and promote the formation of a dense SEI layer. Therefore, this study also aims to place the solvation structures obtained from calculations of the pure electrolyte system on the surface of the Al-Li alloy.
Additionally, AIMD simulations will be conducted to investigate the chemical reactions at the interface, in order to elucidate the composition of the SEI layer formed. Furthermore, Bader charges are used to determine the origin and flow of electrons, thereby revealing the sequence of reduction reactions that generate the SEI layer.
Keywords: lithium, aluminum, alloy, battery, solvation structure
Procedia PDF Downloads 25
958 Radiation Protection and Licensing for an Experimental Fusion Facility: The Italian and European Approaches
Authors: S. Sandri, G. M. Contessa, C. Poggi
Abstract:
An experimental nuclear fusion device can be seen as a step toward the development of a future nuclear fusion power plant. Compared with other possible solutions to the energy problem, nuclear fusion has advantages that ensure sustainability and security. In particular, considering the radioactivity and the radioactive waste produced, the component materials of a nuclear fusion plant could be selected so as to limit the decay period, making recycling in a new reactor possible about 100 years after the beginning of decommissioning. To achieve this and other pertinent goals, many experimental machines have been developed and operated worldwide in recent decades, underlining that radiation protection and worker exposure are critical aspects of these facilities due to the high-flux, high-energy neutrons produced in the fusion reactions. Direct radiation, material activation, tritium diffusion, and other related issues pose a real challenge to the demonstration that these devices are safer than nuclear fission facilities. In Italy, a limited number of fusion facilities have been constructed and operated over the last 30 years, mainly at the ENEA Frascati Center, and the radiation protection approach, addressed by the national licensing requirements, shows that it is not always easy to respect the constraints on workers' exposure to ionizing radiation. In the current analysis, the main radiation protection issues encountered in the Italian fusion facilities are considered and discussed, and the technical and legal requirements are described. The licensing process for these kinds of devices is outlined and compared with that of other European countries.
The following aspects are considered throughout the current study: i) description of the installation, plant, and systems; ii) suitability of the area, buildings, and structures; iii) radioprotection structures and organization; iv) exposure of personnel; v) accident analysis and the relevant radiological consequences; vi) radioactive waste assessment and management. In conclusion, the analysis points out the need for special attention to the radiological exposure of workers in order to demonstrate at least the same level of safety as that reached at nuclear fission facilities.
Keywords: fusion facilities, high energy neutrons, licensing process, radiation protection
Procedia PDF Downloads 353
957 Problems Concerning Formation of Institutional Framework for Electronic Democracy in Georgia
Authors: Giorgi Katamadze
Abstract:
Open public service and accountability towards citizens are important features of a democratic state based on the rule of law. Effective use of electronic resources simplifies bureaucratic procedures, enables direct communication, helps exchange information, ensures the government's openness and, in general, helps develop electronic/digital democracy. The development of electronic democracy should be a strategic dimension of Georgian governance; its formation and functional improvement should become an important dimension of the state's information policy. Electronic democracy is based on electronic governance and implies modern information and communication systems and their adaptation to universal standards. E-democracy requires the involvement of governments, voters, political parties, and social groups in electronic form. In recent years, the process of interaction between the citizen and the state has become simpler. This is achieved by the use of modern technological systems, which give citizens the possibility of using different public services online. For example, the website my.gov.ge makes interaction between the citizen, business, and the state simpler, more comfortable, and more secure, and a higher standard of accountability and interaction is being established. Electronic democracy brings new forms of interaction between the state and the citizen: e-engagement, the participation of society in state politics via electronic systems; e-consultation, electronic interaction among public officials, citizens, and interested groups; and e-controllership, the electronic rule and control of public expenses and services. Public transparency is one of the milestones of electronic democracy as well as of representative democracy, as democracy can be established only on mutual trust and accountability. In Georgia, institutional changes concerning the establishment and development of electronic democracy are not yet sufficient.
Effective planning and implementation of a comprehensive, multi-component e-democracy program (at the central, regional, and local levels) requires telecommunication systems as well as institutional (public service, competencies, logical system) and informational (relevant conditions for public involvement) support. Therefore, a systematic project for the formation of electronic governance should be developed which covers the central, regional, and municipal levels and certain aspects of the development of an instrumental basis for electronic governance.
Keywords: e-democracy, e-governance, e-services, information technology, public administration
Procedia PDF Downloads 338
956 Wax Patterns for Integrally Cast Rotors/Stators of Aeroengine Gas Turbines
Authors: Pradyumna R., Sridhar S., A. Satyanarayana, Alok S. Chauhan, Baig M. A. H.
Abstract:
Modern turbine engines for aerospace applications need precision investment-cast components, such as integrally cast rotors and stators, for their hot-end turbine stages. Traditionally, these turbines have been used as starter engines; in recent times, such engines have also been used for strategic missile applications. The rotor/stator castings consist of a central hub (shrouded in some designs) over which a number of aerofoil-shaped blades are located. Since these components cannot be machined, investment casting is the only available route for manufacture, and hence stringent aerospace dimensional quality has to be built into the casting process itself. In investment casting, pattern generation by the injection of wax into dedicated dies/moulds is the first critical step. The traditional approach involves producing individual blades with hub/shroud features through wax injection and assembling a set of such injected patterns onto a dedicated, precisely manufactured fixture to wax-weld them and generate an integral wax pattern, a process known as the 'segmental approach'. It is possible to design a single-injection die with retractable metallic inserts in the case of untwisted blades of stator patterns without the shroud. Such an approach is also possible for the twisted blades of rotors, with a highly complex design of inter-blade inserts and retraction mechanisms. DMRL has long established methods and procedures for the above and has successfully supplied precision castings for various defence-related projects. In recent times, a urea-based soluble insert approach has also been successfully applied to overcome the need to design and manufacture a precision assembly fixture, leading to a substantial reduction in component development times. The present paper discusses at length the various approaches tried and established at DMRL to generate precision wax patterns for aerospace-quality turbine rotors and stators.
In addition, the importance of simulation in solving issues related to wax injection is also touched upon.
Keywords: die/mold and fixtures, integral rotor/stator, investment casting, wax patterns, simulation
Procedia PDF Downloads 342
955 Adaptive Assemblies: A Scalable Solution for Atlanta's Affordable Housing Crisis
Authors: Claudia Aguilar, Amen Farooq
Abstract:
Like other cities in the United States, the city of Atlanta is experiencing levels of growth that surpass anything witnessed in the last century. With the surge of population influx, the available housing is practically bursting at the seams: supply is low, demand is high, and the average one-bedroom apartment runs for 1,800 dollars per month. The city is urgently seeking new opportunities to provide affordable housing at an expeditious rate, as made evident by the recent updates to the city's zoning. With the recent surge in the housing market, young professionals, in particular millennials, are desperately looking for alternatives to stay within the city. To remedy the affordable housing crisis, the city of Atlanta is planning to introduce 40 thousand new affordable housing units by 2026. To meet this urgent need, the architectural response must adapt. A method that has proven successful in modern housing is modular development, a method that has traditionally been constrained by the dimensions of the maximum load for an eighteen-wheeler. This constraint has diluted the architect's ability to produce site-specific, informed design and has contributed to the 'cookie cutter' stigma with which the method has been labeled. This thesis explores a design methodology for modular housing by revisiting its constructability and adaptability. The research focuses on a modular housing type that can break away from the constraints of transport and deliver adaptive, reconfigurable assemblies. The adaptive assemblies represent an integrated design strategy for assembling the future of affordable dwelling units. The goal is to take advantage of a component-based system and explore a scalable solution to modular housing.
This proposal aims specifically to design a kit of parts that is easily transported and assembled but also allows the use of components to be customized to suit each unique condition. The benefits of this concept could include decreased construction time, cost, on-site labor, and disruption, while providing quality housing with affordable and flexible options.
Keywords: adaptive assemblies, modular architecture, adaptability, constructibility, kit of parts
Procedia PDF Downloads 86
954 The Evaluation of Antioxidant and Antimicrobial Activities of Essential Oil and Aqueous, Methanol, Ethanol, Ethyl Acetate and Acetone Extract of Hypericum scabrum
Authors: A. Heshmati, M. Y Alikhani, M. T. Godarzi, M. R. Sadeghimanesh
Abstract:
Herbal essential oils and extracts are a good source of natural antioxidant and antimicrobial compounds, and Hypericum is one of the potential sources of these compounds. In this study, the antioxidant and antimicrobial activity of the essential oil and of the aqueous, methanol, ethanol, ethyl acetate, and acetone extracts of Hypericum scabrum was assessed. Flowers of Hypericum scabrum were collected from the mountains surrounding Hamadan province and, after drying in the shade, the essential oil of the plant was extracted using a Clevenger apparatus, while the aqueous, methanol, ethanol, ethyl acetate, and acetone extracts were obtained by maceration. Essential oil compounds were identified using GC-MS. The Folin-Ciocalteu and aluminum chloride (AlCl₃) colorimetric methods were used to measure the amounts of phenolic acids and flavonoids, respectively. Antioxidant activity was evaluated using the DPPH and FRAP assays. The minimum inhibitory concentration (MIC) and the minimum bactericidal/fungicidal concentration (MBC/MFC) of the essential oil and extracts were evaluated against Staphylococcus aureus, Bacillus cereus, Pseudomonas aeruginosa, Salmonella typhimurium, Aspergillus flavus, and Candida albicans. The essential oil yield was 0.35%; the lowest and highest extract yields were those of the ethyl acetate and water extracts, respectively. The major component of the essential oil was α-pinene (46.35%). The methanol extract had the highest phenolic acid (95.65 ± 4.72 µg gallic acid equivalent/g dry plant) and flavonoid (25.39 ± 2.73 µg quercetin equivalent/g dry plant) contents. The percentage of DPPH radical inhibition showed a positive correlation with the concentration of essential oil or extract. The methanol and ethanol extracts had the highest DPPH radical inhibitory activity. The essential oil and extracts of Hypericum showed antimicrobial activity against the microorganisms studied in this research. The MIC and MBC values for the essential oil were in the range of 25-25.6 and 25-50 μg/mL, respectively.
For the extracts, these values were 1.5625-100 and 3.125-100 μg/mL, respectively. The methanol extract had the highest antimicrobial activity. The essential oil and extracts of Hypericum scabrum, especially the methanol extract, have good antimicrobial and antioxidant activity, and they can be used to control oxidation and inhibit the growth of pathogenic and spoilage microorganisms. In addition, they can be used as substitutes for synthetic antioxidant and antimicrobial compounds.
Keywords: antimicrobial, antioxidant, extract, Hypericum
Procedia PDF Downloads 331
953 Controlling RPV Embrittlement through Wet Annealing in Support of Life Extension
Authors: E. A. Krasikov
Abstract:
As the main barrier against the release of radioactivity, the reactor pressure vessel (RPV) is a key component in terms of NPP safety. Therefore, present-day demands for enhanced RPV reliability have to be met by all possible actions for the mitigation of in-service RPV embrittlement. Annealing treatment is known to be an effective measure to restore the RPV metal properties deteriorated by neutron irradiation. There are two approaches to annealing. The first one is so-called ‘dry’ high-temperature (~475°C) annealing. It allows a practically complete recovery to be obtained, but requires the removal of the reactor core and internals, and an external heat source (furnace) is needed to carry out the RPV heat treatment. The alternative approach is to anneal the RPV at the maximum coolant temperature which can be obtained using the reactor core or the primary circuit pumps while operating within the RPV design limits. This low-temperature ‘wet’ annealing, although it cannot be expected to produce complete recovery, is more attractive from a practical point of view, especially in cases when the removal of the internals is impossible. The first RPV ‘wet’ annealing was done using nuclear heat (US Army SM-1A reactor); the second was done by means of primary pump heat (Belgian BR-3 reactor). As a rule, there is no recovery effect unless the annealing temperature exceeds the irradiation temperature by at least 70°C. It is known, however, that along with radiation embrittlement, neutron irradiation may mitigate radiation damage in metals. Therefore, we have tried to test the possibility of using the effect of radiation-induced ductilization in ‘wet’ annealing technology, utilizing nuclear heat as both the heat and neutron irradiation source at once. In support of the above-mentioned conception, a 3-year reactor experiment on a 15Cr3NiMoV-type steel was carried out, with preliminary irradiation at an operating PWR at 270°C followed by extra irradiation (87 h at 330°C) at the IR-8 test reactor.
In fact, embrittlement was partly suppressed, to a value equivalent to a 1.5-fold decrease in neutron fluence. The degree of recovery in the case of radiation-enhanced annealing is 27%, whereas furnace annealing results in zero effect under the same conditions. A mechanism for the radiation-induced damage mitigation is proposed. It is hoped that ‘wet’ annealing technology will help provide better management of RPV degradation as a factor affecting the lifetime of nuclear power plants, which, together with associated management methods, will help facilitate the safe and economic long-term operation of PWRs.
Keywords: controlling, embrittlement, radiation, steel, wet annealing
Procedia PDF Downloads 380
952 Influence of Deficient Materials on the Reliability of Reinforced Concrete Members
Authors: Sami W. Tabsh
Abstract:
The strength of reinforced concrete depends on the member dimensions and material properties. The properties of concrete and steel materials are not constant but are random variables. The variability of concrete strength is due to batching errors, variations in mixing, cement quality uncertainties, differences in the degree of compaction and disparity in curing. Similarly, the variability of steel strength is attributed to the manufacturing process, rolling conditions, characteristics of the base material, uncertainties in chemical composition, and the microstructure-property relationships. To account for such uncertainties, codes of practice for reinforced concrete design impose resistance factors to ensure structural reliability over the useful life of the structure. In this investigation, the effects on structural reliability of reductions in concrete and reinforcing steel strengths from the nominal values, beyond those accounted for in the structural design codes, are assessed. The considered limit states are flexure, shear and axial compression based on the ACI 318-11 structural concrete building code. Structural safety is measured in terms of a reliability index. Probabilistic resistance and load models are compiled from the available literature. The study showed that there is a wide variation in the reliability index for reinforced concrete members designed for flexure, shear or axial compression, especially when the live-to-dead load ratio is low. Furthermore, variations in concrete strength have a minor effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a severe effect on the reliability of columns in axial compression. On the other hand, changes in steel yield strength have a great effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a mild effect on the reliability of columns in axial compression.
Based on the outcome, it can be concluded that the reliability of beams is sensitive to changes in the yield strength of the steel reinforcement, whereas the reliability of columns is sensitive to variations in the concrete strength. Since the target reliability embedded in structural design codes results in lower structural safety in beams than in columns, large reductions in material strengths compromise the structural safety of beams much more than they affect columns.
Keywords: code, flexure, limit states, random variables, reinforced concrete, reliability, reliability index, shear, structural safety
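The reliability index used in such studies has a closed form in the simplest case of a linear limit state g = R − S with independent, normally distributed resistance R and load effect S. A minimal sketch with illustrative statistics only (not the paper's calibrated resistance and load models), showing how a drop in mean resistance lowers the index:

```python
from math import sqrt

def reliability_index(mu_R, sigma_R, mu_S, sigma_S):
    """First-order reliability index beta for the limit state
    g = R - S, with independent normal resistance R and load S."""
    return (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)

# Illustrative normalized values only.
nominal = reliability_index(1.20, 0.12, 0.80, 0.10)
deficient = reliability_index(1.08, 0.12, 0.80, 0.10)  # 10% strength loss
print(round(nominal, 2), round(deficient, 2))
```

The failure probability is Phi(−beta), so even the modest drop in beta shown here corresponds to a large increase in failure probability; this is the mechanism by which deficient materials erode the safety margins the code's resistance factors were calibrated to provide.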
Procedia PDF Downloads 430
951 Mechanism of Action of New Sustainable Flame Retardant Additives in Polyamide 6,6
Authors: I. Belyamani, M. K. Hassan, J. U. Otaigbe, W. R. Fielding, K. A. Mauritz, J. S. Wiggins, W. L. Jarrett
Abstract:
We have investigated the effect of special new phosphate glass (P-glass) compositions having different glass transition temperatures (Tg) on the processing conditions of polyamide 6,6 (PA6,6) and on the flame retardancy (FR) of the final hybrid. We have shown that the low-Tg P-glass composition (i.e., ILT 1) is a promising flame retardant for PA6,6 at concentrations of up to 15 wt. %, compared to the intermediate- (IIT 3) and high-Tg (IHT 1) P-glasses. Cone calorimetry data showed that ILT 1 decreased both the peak heat release rate and the total heat released from the PA6,6/ILT 1 hybrids, resulting in the efficient formation of a glassy char layer. These intriguing findings prompted us to address several questions concerning the mechanism of action of the different P-glasses studied. The general mechanism of action of phosphorus-based FR additives occurs during the combustion stage, by enhancing the morphology of the char and the thermal shielding effect. However, the present work shows that P-glass-based FR additives act during melt processing of the PA6,6/P-glass hybrids. Dynamic mechanical analysis (DMA) revealed that the Tg of PA6,6/ILT 1 was significantly shifted to a lower value (~65 °C) and that another transition appeared at high temperature (~166 °C), indicating a strong interaction between PA6,6 and ILT 1. This was supported by a drop in the melting point and crystallinity of the PA6,6/ILT 1 hybrid material as detected by differential scanning calorimetry (DSC). The dielectric spectroscopic investigation of the networks' molecular-level structural variations (i.e., hybrid chain motion, Tg and sub-Tg relaxations) agreed very well with the DMA and DSC findings; it was found that the three different P-glass compositions did not show any effect on the PA6,6 sub-Tg relaxations (related to the motions of the NH2 and OH chain end groups).
Nevertheless, contrary to the IIT 3- and IHT 1-based hybrids, the PA6,6/ILT 1 hybrid material showed evidence of a splitting of the PA6,6 Tg relaxation into two peaks. Finally, the CPMAS 31P-NMR data confirmed the miscibility between ILT 1 and PA6,6 at the molecular level, as a much larger enhancement in cross-polarization was observed for the PA6,6/15% ILT 1 hybrids. It can be concluded that compounding a low-Tg P-glass (i.e., ILT 1) with PA6,6 facilitates hydrolytic chain scission of the PA6,6 macromolecules through a potential chemical interaction between the phosphate and the alpha-carbon of the amide bonds of the PA6,6, leading to better flame-retardant properties.
Keywords: broadband dielectric spectroscopy, composites, flame retardant, polyamide, phosphate glass, sustainable
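The two cone-calorimeter quantities cited in this abstract, peak heat release rate (pHRR) and total heat released (THR), are both derived from the measured heat-release-rate curve: pHRR is its maximum, and THR is its time integral. A minimal sketch with a made-up HRR curve (the abstract reports no raw data):

```python
def cone_metrics(times_s, hrr_kw_m2):
    """Peak heat release rate (kW/m^2) and total heat released
    (MJ/m^2) from an HRR-vs-time curve, via trapezoidal integration."""
    phrr = max(hrr_kw_m2)
    thr_kj = sum((t1 - t0) * (q0 + q1) / 2.0
                 for t0, t1, q0, q1 in zip(times_s, times_s[1:],
                                           hrr_kw_m2, hrr_kw_m2[1:]))
    return phrr, thr_kj / 1000.0  # kJ/m^2 -> MJ/m^2

# Made-up curve: ignition, peak, char-limited decay, burnout.
t = [0, 60, 120, 180, 240]          # s
q = [0, 400, 250, 100, 0]           # kW/m^2
print(cone_metrics(t, q))  # (400, 45.0)
```

A flame retardant that promotes an effective glassy char layer, as ILT 1 is reported to do, lowers both numbers: the char suppresses the peak, and incomplete combustion of the shielded polymer reduces the integral.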
Procedia PDF Downloads 239
950 Application of Thermal Dimensioning Tools to Consider Different Strategies for the Disposal of High-Heat-Generating Waste
Authors: David Holton, Michelle Dickinson, Giovanni Carta
Abstract:
The principle of geological disposal is to isolate higher-activity radioactive wastes deep inside a suitable rock formation to ensure that no harmful quantities of radioactivity reach the surface environment. To achieve this, wastes will be placed in an engineered underground containment facility – the geological disposal facility (GDF) – which will be designed so that natural and man-made barriers work together to minimise the escape of radioactivity. Internationally, various multi-barrier concepts have been developed for the disposal of higher-activity radioactive wastes. High-heat-generating wastes (HLW, spent fuel and Pu) pose a number of technical challenges different from those associated with the disposal of low-heat-generating waste. Thermal management of the disposal system must be taken into consideration in GDF design; temperature constraints might apply to the wasteform, container, buffer and host rock. Of these, the temperature limit placed on the buffer component of the engineered barrier system (EBS) can be the most constraining factor. The heat must therefore be managed such that the properties of the buffer are not compromised to the extent that it cannot deliver the required level of safety. The maximum temperature of the buffer surrounding a container at the centre of a fixed array of heat-generating sources arises as heat diffusing from neighbouring heat-generating wastes incrementally contributes to the temperature of the EBS. A range of strategies can be employed for managing heat in a GDF, including the spatial arrangements or patterns of the containers; different geometrical configurations can influence the overall thermal density in a disposal facility (or an area within a facility) and therefore the maximum buffer temperature. A semi-analytical thermal dimensioning tool and methodology have been applied at a generic stage to explore a range of strategies for managing the disposal of high-heat-generating waste.
A number of examples, including different geometrical layouts and chequer-boarding, have been illustrated to demonstrate how these tools can be used to consider safety margins and inform strategic disposal options when faced with uncertainty, at a generic stage of the development of a GDF.
Keywords: buffer, geological disposal facility, high-heat-generating waste, spent fuel
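The superposition argument behind such semi-analytical tools can be sketched simply: in an idealized infinite homogeneous rock mass, each container is treated as a continuous point heat source, and the buffer temperature rise is the near-field contribution plus the contributions diffusing in from neighbours. The values below (source strength, rock properties, spacings) are purely illustrative assumptions, not figures from the abstract; the sketch only shows why chequer-boarding, by halving the local thermal density, lowers the peak buffer temperature:

```python
from math import erfc, pi, sqrt

def dT(Q, r, t, k=2.0, alpha=1.0e-6):
    """Temperature rise (K) at distance r (m) and time t (s) from a
    continuous point source of constant strength Q (W) in an infinite
    medium with conductivity k (W/m.K) and diffusivity alpha (m^2/s)."""
    return Q / (4.0 * pi * k * r) * erfc(r / (2.0 * sqrt(alpha * t)))

Q = 500.0                       # W per container (illustrative)
t = 100.0 * 3.156e7             # ~100 years, in seconds
r_buffer, pitch = 1.0, 8.0      # m (illustrative)

# Central container plus its four nearest neighbours on a square grid.
full = dT(Q, r_buffer, t) + 4 * dT(Q, pitch, t)
# Chequer-boarding: nearest neighbours move to sqrt(2) x pitch.
chequer = dT(Q, r_buffer, t) + 4 * dT(Q, sqrt(2) * pitch, t)
print(full > chequer)  # True
```

Real dimensioning tools add decaying source terms, finite container geometry and the distinct thermal properties of the buffer itself, but the comparison of layouts against a buffer temperature limit works in the same way.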
Procedia PDF Downloads 286
949 Influence of Nanomaterials on the Properties of Shape Memory Polymeric Materials
Authors: Katielly Vianna Polkowski, Rodrigo Denizarte de Oliveira Polkowski, Cristiano Grings Herbert
Abstract:
The use of nanomaterials in the formulation of polymeric materials modifies their molecular structure, offering a vast range of possibilities for the development of smart products, and is of great importance for science and contemporary industry. Shape memory polymers are generally lightweight, have high shape recovery capabilities, are easy to process, and have properties that can be adapted for a variety of applications. Shape memory materials are active materials that have attracted attention due to their superior damping properties when compared to conventional structural materials. The development of methodologies capable of preparing new materials that use graphene in their structure represents a technological innovation that transforms low-cost products into advanced materials with high added value. The shape memory effect (SME) of polymeric materials can be improved by incorporating a low mass concentration of graphene nanoplatelets (GNP), graphene oxide (GO) or other functionalized graphenes into the composition, via different mixing processes. As a result, the SME was improved, as reflected in higher maximum strain values. In addition, the use of graphene contributes to obtaining nanocomposites with superior electrical properties and greater crystallinity, as well as resistance to material degradation. The methodology used in this research is a systematic review: a scientific investigation gathering relevant studies on the influence of nanomaterials on the properties of shape memory polymers, using literature databases as the source. In the present study, a systematic review was performed of all papers published from 2014 to 2022 regarding graphene and shape memory polymers, through a search of three databases.
This study allows for easy identification of the most relevant fields of study with respect to graphene and shape memory polymers, as well as the main gaps to be explored in the literature. The addition of graphene yielded higher values of maximum deformation of the material, attributed to possible slip between stacked or agglomerated nanostructures, as well as an increase in stiffness due to the increase in the degree of phase separation, which results in a greater number of physical cross-links associated with the formation of short-range rigid domains.
Keywords: graphene, shape memory, smart materials, polymers, nanomaterials
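The strain improvements discussed in this abstract are usually quantified through two standard figures of merit for a thermomechanical programming cycle: the shape fixity ratio Rf and the shape recovery ratio Rr. A minimal sketch using one common set of definitions and invented strain values (the abstract reports none):

```python
def shape_memory_ratios(eps_m, eps_u, eps_p):
    """Shape fixity Rf and shape recovery Rr for one cycle:
    eps_m = programming strain under load,
    eps_u = strain retained after unloading (fixed temporary shape),
    eps_p = residual strain after reheating (recovery step)."""
    Rf = eps_u / eps_m            # how well the temporary shape holds
    Rr = (eps_m - eps_p) / eps_m  # how fully the original shape returns
    return Rf, Rr

# Invented strains (%) for illustration.
Rf, Rr = shape_memory_ratios(100.0, 95.0, 4.0)
print(Rf, Rr)  # 0.95 0.96
```

A graphene loading that raises the attainable eps_m without degrading Rr is exactly the "higher maximum strain" improvement the reviewed studies report.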
Procedia PDF Downloads 85
948 The Role and Effects of Communication on Occupational Safety: A Review
Authors: Pieter A. Cornelissen, Joris J. Van Hoof
Abstract:
The interest in improving occupational safety started almost simultaneously with the beginning of the Industrial Revolution. Yet, it was not until the late 1970s that the role of communication was considered in scientific research regarding occupational safety. In recent years, the importance of communication as a means to improve occupational safety has increased, not only because communication might have a direct effect on safety performance and safety outcomes, but also because it can be viewed as a major component of other important safety-related elements (e.g., training, safety meetings, leadership). And while safety communication is an increasingly important topic in research, its operationalization is often vague and differs among studies. This is problematic not only when comparing results, but also when applying these results in practice and on the work floor. By means of an in-depth analysis building on an existing dataset, this review aims to overcome these problems. The initial database search yielded 25,527 articles, which was reduced to a research corpus of 176 articles. Focusing on the 37 articles of this corpus that addressed communication (related to safety outcomes and safety performance), the current study provides a comprehensive overview of the role and effects of safety communication and outlines the conditions under which communication contributes to a safer work environment. The study shows that in the literature a distinction is commonly made between safety communication (i.e., the exchange or dissemination of safety-related information) and feedback (i.e., a reactive form of communication). And although there is consensus among researchers that both communication and feedback positively affect safety performance, there is a debate about the directness of this relationship. Whereas some researchers assume a direct relationship between safety communication and safety performance, others state that this relationship is mediated by safety climate.
One of the key findings is that, despite the strongly held view that safety communication is a formal and top-down safety management tool, researchers stress the importance of open communication that encourages and allows employees to express their worries, experiences and views, and to share information. This raises questions with regard to other directions (e.g., bottom-up, horizontal) and forms of communication (e.g., informal). The current review proposes a framework to overcome the often vague and differing operationalizations of safety communication. The proposed framework can be used to characterize safety communication in terms of stakeholders, direction, and characteristics of communication (e.g., medium usage).
Keywords: communication, feedback, occupational safety, review
Procedia PDF Downloads 303