556 An Early Intervention Framework for Supporting Students’ Mathematical Development in the Transition to University STEM Programmes
Authors: Richard Harrison
Abstract:
Developing competency in mathematics and related critical thinking skills is essential to the education of undergraduate students of Science, Technology, Engineering and Mathematics (STEM). Recently, the HE sector has been impacted by a seemingly widening disconnect between the mathematical competency of incoming first-year STEM students and their entrance qualification tariffs. Despite relatively high grades in A-Level Mathematics, students may initially lack fundamental skills in key areas such as algebraic manipulation and have limited capacity to apply problem solving strategies. Compounded by compensatory measures applied to entrance qualifications during the pandemic, there has been an associated decline in student performance on introductory university mathematics modules. In the UK, a number of online resources have been developed to help scaffold the transition to university mathematics. However, in general, these do not offer a structured learning journey focused on individual developmental needs, nor do they offer an experience coherent with the teaching and learning characteristics of the destination institution. In order to address some of these issues, a bespoke framework has been designed and implemented on our VLE in the Faculty of Engineering & Physical Sciences (FEPS) at the University of Surrey. Called the FEPS Maths Support Framework, it was conceived to scaffold the mathematical development of individuals prior to entering the university and during the early stages of their transition to undergraduate studies. More than 90% of our incoming STEM students voluntarily participate in the process. Students complete a set of initial diagnostic questions in the late summer. Based on their performance and feedback on these questions, they are subsequently guided to self-select specific mathematical topic areas for review using our proprietary resources. This further assists students in preparing for discipline related diagnostic tests. 
The framework helps to identify students who are mathematically weak and facilitates early intervention to support students according to their specific developmental needs. This paper presents a summary of results from a rich data set captured from the framework over a 3-year period. Quantitative data provides evidence that students have engaged and developed during the process. This is further supported by process evaluation feedback from the students. Ranked performance data associated with seven key mathematical topic areas and eight engineering and science discipline areas reveals interesting patterns which can be used to identify more generic relative capabilities of the discipline area cohorts. In turn, this facilitates evidence-based management of the mathematical development of the new cohort, informing any associated adjustments to teaching and learning at a more holistic level. Evidence is presented establishing our framework as an effective early intervention strategy for addressing the sector-wide issue of supporting the mathematical development of STEM students transitioning to HE.
Keywords: competency, development, intervention, scaffolding
Procedia PDF Downloads 66
555 Assessment of Heavy Metals Contamination Levels in Groundwater: A Case Study of the Bafia Agricultural Area, Centre Region Cameroon
Authors: Carine Enow-Ayor Tarkang, Victorine Neh Akenji, Dmitri Rouwet, Jodephine Njdma, Andrew Ako Ako, Franco Tassi, Jules Remy Ngoupayou Ndam
Abstract:
Groundwater is the major water resource in the whole of Bafia, used for drinking, domestic, poultry and agricultural purposes, and as an area of intense agriculture, there is a great necessity for a quality assessment. Bafia is one of the main food suppliers in the Centre region of Cameroon, and so to meet demand, the farmers make use of fertilizers and other agrochemicals to increase their yields. Less than 20% of the population in Bafia has access to piped-borne water due to the national shortage. To the authors' best knowledge, very limited studies have been carried out in the area to increase awareness of the groundwater resources. The aim of this study was to assess heavy metal contamination levels in ground and surface waters and to evaluate the effects of agricultural inputs on water quality in the Bafia area. 57 water samples (including 31 wells, 20 boreholes, 4 rivers and 2 springs) were analyzed for their physicochemical parameters, while the collected samples were filtered, acidified with HNO3 and analyzed by ICP-MS for their heavy metal content (Fe, Ti, Sr, Al, Mn). Results showed that most of the water samples are acidic to slightly neutral and moderately mineralized. Ti concentration was significantly high in the area (mean value 130 µg/L), suggesting an additional Ti source besides the natural input from titanium oxides. The high amounts of Mn and Al in some cases also pointed to additional input, probably from fertilizers that are used in the farmlands. Most of the water samples were found to be significantly contaminated with heavy metals exceeding the WHO allowable limits (Ti-94.7%, Al-19.3%, Mn-14%, Fe-5.2% and Sr-3.5% above limits), especially around farmlands and topographic low areas. The heavy metal concentration was evaluated using the heavy metal pollution index (HPI), heavy metal evaluation index (HEI) and degree of contamination (Cd), while the Ficklin diagram was used to classify the water based on changes in metal content and pH. 
The high mean values of HPI and Cd (741 and 5, respectively), which exceeded the critical limit, indicate that the water samples are highly contaminated, with intense pollution from Ti, Al and Mn. Based on the HPI and Cd, 93% and 35% of the samples, respectively, are unacceptable for drinking purposes. The lowest HPI value point also had the lowest EC (50 µS/cm), indicating lower mineralization and less anthropogenic influence. According to the Ficklin diagram, 89% of the samples fell within the near-neutral low-metal domain, while 9% fell in the near-neutral extreme-metal domain. Two significant factors were extracted from the PCA, explaining 70.6% of the total variance. The first factor revealed intense anthropogenic activity (especially from fertilizers), while the second factor revealed water-rock interactions. Agricultural activities thus have an impact on the heavy metal content of groundwater in the area; hence, much attention should be given to the affected areas in order to protect human health/life and thus sustainably manage this precious resource.
Keywords: Bafia, contamination, degree of contamination, groundwater, heavy metal pollution index
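As an illustration of the indices used in the abstract above, one common formulation of the heavy metal pollution index (HPI) and the degree of contamination (Cd) can be sketched in Python. The permissible limits and the sample values below are illustrative assumptions, not data from the study:

```python
# One common formulation of the heavy metal pollution index (HPI) and
# the degree of contamination (Cd). The permissible limits S_i (ug/L)
# and the sample below are illustrative assumptions, not study data.

LIMITS = {"Ti": 15.0, "Al": 200.0, "Mn": 400.0, "Fe": 300.0, "Sr": 4000.0}

def hpi(sample):
    """HPI = sum(W_i*Q_i)/sum(W_i), with unit weight W_i = 1/S_i and
    sub-index Q_i = 100*M_i/S_i (measured value as percent of limit)."""
    num = den = 0.0
    for metal, measured in sample.items():
        s = LIMITS[metal]
        w = 1.0 / s
        q = 100.0 * measured / s
        num += w * q
        den += w
    return num / den

def degree_of_contamination(sample):
    """Cd = sum of contamination factors Cf_i = M_i/S_i - 1."""
    return sum(m / LIMITS[k] - 1.0 for k, m in sample.items())

# Hypothetical groundwater sample (ug/L), Ti-dominated as in the study.
sample = {"Ti": 130.0, "Al": 250.0, "Mn": 450.0, "Fe": 150.0, "Sr": 900.0}
hpi_value = hpi(sample)
cd_value = degree_of_contamination(sample)
```

With these assumed limits, a Ti-dominated sample drives the HPI far above typical critical values, mirroring the pattern reported above.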
Procedia PDF Downloads 87
554 Valorization of Surveillance Data and Assessment of the Sensitivity of a Surveillance System for an Infectious Disease Using a Capture-Recapture Model
Authors: Jean-Philippe Amat, Timothée Vergne, Aymeric Hans, Bénédicte Ferry, Pascal Hendrikx, Jackie Tapprest, Barbara Dufour, Agnès Leblond
Abstract:
The surveillance of infectious diseases is necessary to describe their occurrence and help the planning, implementation and evaluation of risk mitigation activities. However, the exact number of detected cases may remain unknown when surveillance is based on serological tests, because identifying seroconversion may be difficult. Moreover, incomplete detection of cases or outbreaks is a recurrent issue in the field of disease surveillance. This study addresses these two issues. Using a viral animal disease as an example (equine viral arteritis), the goals were to establish suitable rules for identifying seroconversion in order to estimate the number of cases and outbreaks detected by a surveillance system in France between 2006 and 2013, and to assess the sensitivity of this system by estimating the total number of outbreaks that occurred during this period (including unreported outbreaks) using a capture-recapture model. Data from horses that exhibited at least one positive result in serology using the viral neutralization test between 2006 and 2013 were used for analysis (n=1,645). Data consisted of the annual antibody titers and the location of the subjects (towns). A consensus among multidisciplinary experts (specialists in the disease and its laboratory diagnosis, epidemiologists) was reached to consider seroconversion as a change in antibody titer from negative to at least 32, or as a three-fold or greater increase. The number of seroconversions was counted for each town and modeled using a unilist zero-truncated binomial (ZTB) capture-recapture model with R software. The binomial denominator was the number of horses tested in each infected town. Using the defined rules, 239 cases located in 177 towns (outbreaks) were identified from 2006 to 2013. 
Subsequently, the sensitivity of the surveillance system was estimated as the ratio of the number of detected outbreaks to the total number of outbreaks that occurred (including unreported outbreaks) estimated using the ZTB model. The total number of outbreaks was estimated at 215 (95% credible interval CrI95%: 195-249) and the surveillance sensitivity at 82% (CrI95%: 71-91). The rules proposed for identifying seroconversion may serve future research. Such rules, adjusted to the local environment, could conceivably be applied in other countries with surveillance programs dedicated to this disease. More generally, defining ad hoc algorithms for interpreting antibody titers could be useful for other human and animal diseases and zoonoses when there is a lack of accurate information in the literature about the serological response in naturally infected subjects. This study shows how capture-recapture methods may help to estimate the sensitivity of an imperfect surveillance system and to valorize surveillance data. The sensitivity of the surveillance system for equine viral arteritis is relatively high and supports its relevance for preventing the spread of the disease.
Keywords: Bayesian inference, capture-recapture, epidemiology, equine viral arteritis, infectious disease, seroconversion, surveillance
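The capture-recapture logic above can be sketched in Python. This is a simplified frequentist stand-in for the authors' Bayesian unilist ZTB model: the detection probability is fitted by grid-search maximum likelihood on zero-truncated counts, and the total number of infected towns is then recovered with a Horvitz-Thompson correction. The toy data are invented:

```python
import math

# Simplified frequentist stand-in for the unilist zero-truncated binomial
# (ZTB) capture-recapture model described above. Each detected town i has
# n_i tested horses and k_i >= 1 observed seroconversions; towns with
# k = 0 go unreported, so the zero class is truncated away.

def ztb_loglik(p, counts):
    """Log-likelihood of detection probability p under the ZTB model."""
    ll = 0.0
    for n, k in counts:
        binom = math.comb(n, k) * p**k * (1 - p)**(n - k)
        ll += math.log(binom / (1 - (1 - p)**n))  # condition on k >= 1
    return ll

def estimate_total(counts):
    """Grid-search MLE of p, then Horvitz-Thompson estimate of the
    total number of infected towns, reported and unreported."""
    p_hat = max((i / 1000 for i in range(1, 1000)),
                key=lambda p: ztb_loglik(p, counts))
    # Each observed town stands for 1/P(detected) towns in total.
    n_total = sum(1.0 / (1 - (1 - p_hat)**n) for n, _ in counts)
    return p_hat, n_total

# Toy data (tested horses, observed seroconversions) per detected town.
towns = [(5, 1), (8, 2), (3, 1), (10, 1), (6, 3), (4, 1)]
p_hat, n_total = estimate_total(towns)
sensitivity = len(towns) / n_total   # detected / estimated total
```

The ratio of detected towns to the estimated total is exactly the sensitivity the abstract reports (177/215, about 82%).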
Procedia PDF Downloads 300
553 Pedagogy of the Oppressed: Fifty Years Later. Implications for Policy and Reforms
Authors: Mohammad Ibrahim Alladin
Abstract:
The Pedagogy of the Oppressed by Paulo Freire was first published in 1970 (New York: Herder and Herder). Since its publication, it has become one of the most cited books in the social sciences, with over a million copies sold worldwide. The book has caused a "revolution" in the education world, and its theory has been examined and analysed. It has influenced educational policy, curriculum development and teacher education. The revolution started half a century ago. Freire's Pedagogy of the Oppressed develops a theory of education fitted to the needs of the disenfranchised and marginalized members of capitalist societies. Combining educational and political philosophy, the book offers an analysis of oppression and a theory of liberation. Freire believes that traditional education serves to support the dominance of the powerful within society and thereby maintain the powerful's social, political, and economic status quo. To overcome the oppression endemic to an exploitative society, education must be remade to inspire and enable the oppressed in their struggle for liberation. This new approach to education focuses on consciousness-raising, dialogue, and collaboration between teacher and student in the effort to achieve greater humanization for all. For Freire, education is political and functions either to preserve the current social order or to transform it. The theories of education and revolutionary action he offers in Pedagogy of the Oppressed are addressed to educators committed to the struggle for liberation from oppression. Freire's own commitment to this struggle developed through years of teaching literacy to Brazilian and Chilean peasants and laborers. His efforts at educational and political reform resulted in a brief period of imprisonment followed by exile from his native Brazil for fifteen years. 
In Pedagogy of the Oppressed, Freire begins by asserting the importance of consciousness-raising, or conscientização, as the means of enabling the oppressed to recognize their oppression and commit to the effort to overcome it, taking full responsibility for themselves in the struggle for liberation. He addresses the "fear of freedom," which inhibits the oppressed from assuming this responsibility. He also cautions against the dangers of sectarianism, which can undermine the revolutionary purpose as well as serve as a refuge for the committed conservative. Freire provides an alternative view of education by attacking traditional education and knowledge. He is highly critical of how knowledge is imparted and how it is structured in ways that limit the learner's thinking. Hence, education becomes oppressive, and school functions as an institution of social control. Since its publication, education has gone through a series of reforms and, in some areas, total transformation. This paper addresses the following: the role of education in social transformation; the teacher/learner relationship; and critical thinking. The paper essentially examines what has happened in the fifty years since Freire's book. It seeks to explain what happened to Freire's education revolution, and what is the status of the movement that started almost fifty years ago.
Keywords: pedagogy, reform, curriculum, teacher education
Procedia PDF Downloads 96
552 Lignin Valorization: Techno-Economic Analysis of Three Lignin Conversion Routes
Authors: Iris Vural Gursel, Andrea Ramirez
Abstract:
Effective utilization of lignin is an important means for developing economically profitable biorefineries. Current literature suggests that large amounts of lignin will become available in second generation biorefineries. New conversion technologies will, therefore, be needed to carry lignin transformation well beyond combustion to produce energy, towards high-value products such as chemicals and transportation fuels. In recent years, significant progress on catalysis has been made to improve the transformation of lignin, and new catalytic processes are emerging. In this work, a techno-economic assessment of two of these novel conversion routes, and a comparison with the more established lignin pyrolysis route, was made. The aim is to provide insights into the potential performance and potential hotspots in order to guide the experimental research and ease commercialization by identifying cost drivers, strengths, and challenges early. The lignin conversion routes selected for detailed assessment were: (non-catalytic) lignin pyrolysis as the benchmark, direct hydrodeoxygenation (HDO) of lignin, and hydrothermal lignin depolymerisation. Products generated were mixed oxygenated aromatic monomers (MOAMON), light organics, heavy organics, and char. For the technical assessment, a base design was developed, followed by process modelling in Aspen using experimental yields. A design capacity of 200 kt/year lignin feed was chosen, equivalent to a 1 Mt/y scale lignocellulosic biorefinery. The downstream equipment was modelled to achieve the separation of the product streams defined. For determining external utility requirements, heat integration was considered, and where possible, gases were combusted to cover the heating demand. The models were used to generate the necessary data on material and energy flows. Next, an economic assessment was carried out by estimating operating and capital costs. Return on investment (ROI) and payback period (PBP) were used as indicators. 
The results of the process modelling indicate that a series of separation steps is required. The downstream processing was found to be especially demanding in the hydrothermal upgrading process due to the presence of a significant amount of unconverted lignin (34%) and water. External utility requirements were also found to be high. Due to the complex separations, the hydrothermal upgrading process showed the highest capital cost (50 M€ more than the benchmark), whereas operating costs were found to be highest for the direct HDO process (20 M€/year more than the benchmark) due to the use of hydrogen. Because of high yields of valuable heavy organics (32%) and MOAMON (24%), the direct HDO process showed the highest ROI (12%) and the shortest PBP (5 years). This process is found feasible, with a positive net present value. However, it is very sensitive to the prices used in the calculation. The assessments at this stage are associated with large uncertainties. Nevertheless, they are useful for comparing alternatives and identifying whether a certain process should be given further consideration. Among the three processes investigated here, the direct HDO process was seen to be the most promising.
Keywords: biorefinery, economic assessment, lignin conversion, process design
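The ROI, PBP and NPV indicators used above reduce to simple arithmetic; a minimal sketch with stand-in numbers (chosen to resemble the direct HDO case, not the paper's actual cost data):

```python
# Back-of-envelope techno-economic indicators. The numbers are
# stand-ins chosen to resemble the direct HDO case (12% ROI, 5-year
# PBP), not the paper's actual cost data.

def roi(annual_profit, capital_cost):
    """Return on investment, %."""
    return 100.0 * annual_profit / capital_cost

def payback_period(capital_cost, annual_cash_flow):
    """Years to recover capital from constant annual cash flows."""
    return capital_cost / annual_cash_flow

def npv(rate, capital_cost, annual_cash_flow, years):
    """Net present value of constant cash flows discounted at `rate`."""
    return -capital_cost + sum(annual_cash_flow / (1 + rate)**t
                               for t in range(1, years + 1))

capital = 100.0    # M euro, assumed
profit = 12.0      # M euro/year, assumed
cash_flow = 20.0   # M euro/year, assumed
roi_pct = roi(profit, capital)                  # 12% under these inputs
pbp_years = payback_period(capital, cash_flow)  # 5 years under these inputs
```

A positive NPV at the chosen discount rate is what the abstract means by the direct HDO route being "feasible".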
Procedia PDF Downloads 262
551 Green Production of Chitosan Nanoparticles and their Potential as Antimicrobial Agents
Authors: L. P. Gomes, G. F. Araújo, Y. M. L. Cordeiro, C. T. Andrade, E. M. Del Aguila, V. M. F. Paschoalin
Abstract:
The application of nanoscale materials and nanostructures is an emerging area, since these materials may provide solutions to technological and environmental challenges in order to preserve the environment and natural resources. To reach this goal, the increasing demand must be accompanied by 'green' synthesis methods. Chitosan is a natural, nontoxic biopolymer derived by the deacetylation of chitin and has great potential for a wide range of applications in the biological and biomedical areas, due to its biodegradability, biocompatibility, non-toxicity and versatile chemical and physical properties. Chitosan also presents high antimicrobial activity against a wide variety of pathogenic and spoilage microorganisms. Ultrasonication is a common tool for the preparation and processing of polymer nanoparticles. It is particularly effective in breaking up aggregates and in reducing the size and polydispersity of nanoparticles. High-intensity ultrasonication has the potential to modify chitosan molecular weight and, thus, alter or improve chitosan functional properties. The aim of this study was to evaluate the influence of sonication intensity and time on the changes in commercial chitosan characteristics, such as molecular weight and its potential antibacterial activity against Gram-negative bacteria. The nanoparticles (NPs) were produced from two commercial chitosans, of medium molecular weight (CS-MMW) and low molecular weight (CS-LMW), from Sigma-Aldrich®. These samples (2%) were solubilized in 100 mM sodium acetate, pH 4.0, placed on ice and irradiated with a SONIC ultrasonic probe (model 750 W), equipped with a 1/2" microtip, for 30 min at 4°C. The probe was operated at a constant duty cycle and 40% amplitude with 1/1 s intervals. The ultrasonic degradation of CS-MMW and CS-LMW was followed by means of ζ-potential (Brookhaven Instruments, model 90Plus) and dynamic light scattering (DLS) measurements. 
After sonication, the concentrated samples were diluted 100 times and placed in fluorescence quartz cuvettes (Hellma 111-QS, 10 mm light path). The distributions of the colloidal particles were calculated from the DLS and ζ-potential measurements taken for the CS-MMW and CS-LMW solutions before and after sonication for 30 min (CS-MMW30 and CS-LMW30). Regarding the results for the chitosan samples, the major bands, centered at the hydrodynamic radius (Rh), showed different distributions for CS-MMW (Rh=690.0 nm, ζ=26.52±2.4), CS-LMW (Rh=607.4 and 2805.4 nm, ζ=24.51±1.29), CS-MMW30 (Rh=201.5 and 1064.1 nm, ζ=24.78±2.4) and CS-LMW30 (Rh=492.5 nm, ζ=26.12±0.85). The minimal inhibitory concentration (MIC) was determined using different chitosan sample concentrations. MIC values were determined against E. coli (10⁶ cells) harvested from LB medium (Luria-Bertani BD™) after 18 h growth at 37 ºC. Subsequently, the cell suspension was serially diluted in saline solution (0.8% NaCl) and plated on solid LB at 37°C for 18 h. Colony-forming units were counted. The samples showed different MICs against E. coli: CS-LMW (1.5 mg/mL), CS-MMW30 (1.5 mg/mL) and CS-LMW30 (1.0 mg/mL). The results demonstrate that the production of nanoparticles by modification of their molecular weight by ultrasonication is simple to perform and dispenses with the addition of acid solvents. The molecular weight modifications are enough to provoke changes in the antimicrobial potential of the nanoparticles produced in this way.
Keywords: antimicrobial agent, chitosan, green production, nanoparticles
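DLS instruments such as the one used above derive the hydrodynamic radius from the measured diffusion coefficient via the Stokes-Einstein relation; a minimal sketch (the diffusion coefficient below is an assumed example value, not a measurement from the study):

```python
import math

# DLS reports the hydrodynamic radius Rh obtained from the measured
# diffusion coefficient D via the Stokes-Einstein relation
# Rh = kB*T / (6*pi*eta*D). The D value below is an assumed example,
# not a measurement from this study.

KB = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius(d_m2_s, temp_k=298.15, eta_pa_s=1.0e-3):
    """Rh in metres for diffusion coefficient D (m^2/s) in water."""
    return KB * temp_k / (6.0 * math.pi * eta_pa_s * d_m2_s)

# An assumed D of ~3.1e-13 m^2/s lands at roughly the 700 nm scale of
# the unsonicated CS-MMW peak reported above.
rh_nm = hydrodynamic_radius(3.1e-13) * 1e9
```

The inverse relationship between D and Rh is why sonication-induced chain scission shows up directly as smaller Rh peaks.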
Procedia PDF Downloads 329
550 Hydrogen Production from Auto-Thermal Reforming of Ethanol Catalyzed by Tri-Metallic Catalyst
Authors: Patrizia Frontera, Anastasia Macario, Sebastiano Candamano, Fortunato Crea, Pierluigi Antonucci
Abstract:
The increasing world energy demand makes biomass an attractive energy source today, with a view to minimizing CO2 emissions and reducing global warming. Recently, COP-21, the international meeting on global climate change, defined the roadmap for sustainable worldwide development based on low-carbon fuels. Hydrogen is an energy vector able to substitute for conventional fuels from petroleum. Ethanol for hydrogen production represents a valid alternative to fossil sources due to its low toxicity, low production costs, high biodegradability, high H2 content and renewability. Ethanol conversion to generate hydrogen by a combination of partial oxidation and steam reforming reactions is generally called auto-thermal reforming (ATR). The ATR process is advantageous due to its low energy requirements and reduced formation of carbonaceous deposits. The catalyst plays a pivotal role in the ATR process, especially towards process selectivity and carbonaceous deposit formation. Bimetallic or trimetallic catalysts, as well as catalysts with doped-promoter supports, may exhibit high activity, selectivity and deactivation resistance with respect to the corresponding monometallic ones. In this work, NiMoCo/GDC, NiMoCu/GDC and NiMoRe/GDC (where GDC is the Gadolinia Doped Ceria support and the metal composition is 60:30:10 for all catalysts) have been prepared by the impregnation method. The support, Gadolinia (0.2) Doped Ceria (0.8), was impregnated with metal precursors solubilized in aqueous ethanol solution (50%) at room temperature for 6 hours. After this, the catalysts were dried at 100°C for 8 hours and subsequently calcined at 600°C in order to obtain the metal oxides. Finally, active catalysts were obtained by a reduction procedure (H2 atmosphere at 500°C for 6 hours). All samples were characterized by different analytical techniques (XRD, SEM-EDX, XPS, CHNS, H2-TPR and Raman spectroscopy). 
Catalytic experiments (auto-thermal reforming of ethanol) were carried out in the temperature range 500-800°C under atmospheric pressure, using a continuous fixed-bed microreactor. Effluent gases from the reactor were analyzed by two Varian CP4900 chromatographs with TCD detectors. The analytical investigation focused on preventing coke deposition, metal sintering effects and sulfur poisoning. Hydrogen productivity, ethanol conversion and product distribution were measured and analyzed. At 600°C, all tri-metallic catalysts show their best performance, with H2 + CO reaching almost 77 vol.% in the final gases, while the NiMoCo/GDC catalyst shows the best selectivity to hydrogen with respect to the other tri-metallic catalysts (41 vol.% at 600°C). On the other hand, NiMoCu/GDC and NiMoRe/GDC demonstrated high sulfur poisoning resistance (up to 200 cc/min) with respect to the NiMoCo/GDC catalyst. The correlation among catalytic results and surface properties of the catalysts will be discussed.
Keywords: catalysts, ceria, ethanol, gadolinia, hydrogen, nickel
Procedia PDF Downloads 155
549 The Effect of Manure Loaded Biochar on Soil Microbial Communities
Authors: T. Weber, D. MacKenzie
Abstract:
This paper describes an advanced simulation environment for electronic systems (microcontrollers, operational amplifiers, and FPGAs). The simulation was used for the behaviour of non-linear dynamic systems with the required observer structure, working with parallel real-time simulation based on a state-space representation. The proposed model was used for electrodynamic effects, including ionising effects and eddy current distribution. With the script and proposed method, it is possible to calculate the spatial distribution of the electromagnetic fields in such systems in real time. For further purposes, the spatial temperature distribution may also be used. With the above system, the uncertainties and disturbances may be determined. This provides a more precise estimation of the system states for the required system and, additionally, an estimation of the ionising disturbances that arise due to radiation effects in space systems. The results have also shown that a system can be developed specifically for the real-time calculation (estimation) of radiation effects alone. Electronic systems can be damaged by impacts with charged particle flux in a space or radiation environment. A TID (Total Ionising Dose) of 1 Gy and Single-Event Transient (SET)-free operation up to 50 MeVcm²/mg may assure certain functions. Single-Event Latch-up (SEL) results from the placement of several transistors in the shared substrate of an integrated circuit; ionising radiation can activate an additional parasitic thyristor. This short circuit between semiconductor elements can destroy the device in the absence of protective measures. Single-Event Burnout (SEB), on the other hand, increases the current between the drain and source of a MOSFET and destroys the component in a short time. A Single-Event Gate Rupture (SEGR) can likewise destroy a semiconductor dielectric. 
In order to be able to react to these processes, it must be determined within a short time whether ionising radiation and dose are present. For this purpose, sensors may be used for the realistic evaluation of the diffusion and ionising effects on the test system. A Peltier element is used for the evaluation of dynamic temperature increases (dT/dt), from which a measure of the ionisation processes, and thus of radiation, is derived. In addition, a piezo element may be used to record highly dynamic vibrations and oscillations caused by impacts of charged particle flux. All available sensors shall also be used to calibrate the spatial distributions. From the measured values and the known locations of the sensors, the entire distribution in space can be calculated retroactively, or more accurately. Once the type of ionisation and its direct effect on the system are determined, protective processes can be activated, up to and including shutdown. The results show possibilities for performing more qualitative and faster simulations, independent of the space system and radiation environment. The paper additionally gives an overview of the diffusion effects and their mechanisms.
Keywords: cattle, biochar, manure, microbial activity
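The dT/dt criterion described above can be sketched as a sliding finite-difference slope with an alarm threshold; the sampling interval, threshold and temperature trace are purely illustrative assumptions:

```python
# Sliding finite-difference sketch of the dT/dt criterion described
# above. The sampling interval, threshold and temperature trace are
# purely illustrative assumptions.

def slopes(samples, dt_s):
    """Finite-difference dT/dt (K/s) between consecutive samples."""
    return [(b - a) / dt_s for a, b in zip(samples, samples[1:])]

def ionisation_alarm(samples, dt_s, threshold_k_per_s):
    """Index of the first interval whose slope exceeds the threshold
    (candidate ionising event), or None if the trace stays quiet."""
    for i, slope in enumerate(slopes(samples, dt_s)):
        if slope > threshold_k_per_s:
            return i
    return None

# Slow drift, then an abrupt rise such as a particle strike might cause.
temps = [300.00, 300.01, 300.02, 300.03, 300.55, 301.20]
alarm = ionisation_alarm(temps, dt_s=0.1, threshold_k_per_s=1.0)
# A real system would gate protective actions (up to shutdown) on `alarm`.
```

In practice the threshold would be calibrated against the Peltier element's quiescent drift so that normal thermal transients do not trip the alarm.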
Procedia PDF Downloads 103
548 Inputs and Outputs of Innovation Processes in the Colombian Services Sector
Authors: Álvaro Turriago-Hoyos
Abstract:
Most research tends to see innovation as an explanatory factor in achieving high levels of competitiveness and productivity. More recent studies have begun to analyze the determinants of innovation in the services sector as opposed to the much-discussed industrial sector of a country’s economy. This research paper focuses on the services sector in Colombia, one of Latin America’s fastest growing and biggest economies. Over the past decade, much of Colombia’s economic expansion has relied on commodity exports (mainly oil and coffee) whilst the industrial sector has performed relatively poorly. Such developments highlight the potential of the innovative role played by the services sector of the Colombian economy and its future growth prospects. This research paper analyzes the relationship between innovation inputs, both internal sources of innovation (such as R&D activities) and external sources improved by technology acquisition, and innovation outputs. The outputs are basically the four kinds of innovation that the OECD Oslo Manual recognizes: product, process, marketing and organizational innovations. The instrument used to measure this input-output relationship is based on Knowledge Production Function approaches. We run probit models in order to identify the existing relationships between the above inputs and outputs, but also to identify spill-overs derived from interactions of the components of the value chain of the services firms analyzed: customers, suppliers, competitors, and complementary firms. Data are obtained from the Colombian National Administrative Department of Statistics for the period 2008 to 2013, published in the II and III Colombian National Innovation Surveys. A short summary of the results leads to the conclusion that firm size and a firm’s level of technological development turn out to be important discriminating factors in the description of the innovative process at the firm level. 
The model’s outcomes show a positive impact on the probability of introducing any kind of innovation from both R&D and technology acquisition investment. Also, cooperation agreements with customers, research institutes, competitors, and suppliers are significant. Belonging to a particular industrial group is an important determinant, but only for product and organizational innovation. It is possible to establish that Health Services, Education, Computer, Wholesale Trade, and Financial Intermediation are the ISIC sectors which report the highest frequencies among the considered set of firms. Those five sectors of the sixteen considered explained, in all cases, more than half of the total of all kinds of innovations. Product innovation, followed by marketing innovation, shows the highest results. Breaking the same set of firms down by size and by membership in the high- or low-tech services sector shows that the larger the firm, the larger the number of innovations, and that high-tech firms consistently show better innovation performance.
Keywords: Colombia, determinants of innovation, innovation, services sector
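The probit specification underlying the models above maps a linear index of innovation inputs to a probability through the standard normal CDF; a minimal sketch with invented coefficients (not the paper's estimates):

```python
import math

# A probit link maps a linear index of innovation inputs to the
# probability of introducing an innovation. The coefficients are
# invented for illustration, not the paper's estimates.

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def innovation_probability(rd_invest, tech_acq, b0=-1.0, b1=0.8, b2=0.5):
    """P(innovate) = Phi(b0 + b1*R&D + b2*technology acquisition)."""
    return phi(b0 + b1 * rd_invest + b2 * tech_acq)

# Raising either input raises the predicted probability, matching the
# positive effects reported above for R&D and technology acquisition.
low = innovation_probability(rd_invest=0.0, tech_acq=0.0)
high = innovation_probability(rd_invest=1.0, tech_acq=1.0)
```

Positive coefficients on both inputs are what "positive impact on the probability of introducing any kind of innovation" means in this framework.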
Procedia PDF Downloads 268
547 Antioxidant Potential of Sunflower Seed Cake Extract in Stabilization of Soybean Oil
Authors: Ivanor Zardo, Fernanda Walper Da Cunha, Júlia Sarkis, Ligia Damasceno Ferreira Marczak
Abstract:
Lipid oxidation is one of the most important deteriorative processes in the oil industry, resulting in losses of the nutritional value of oils as well as changes in color, flavor and other physiological properties. Autoxidation of lipids occurs naturally between molecular oxygen and the unsaturation of fatty acids, forming lipid free radicals, peroxyl radicals and hydroperoxides. In order to avoid lipid oxidation in vegetable oils, synthetic antioxidants such as butylated hydroxyanisole (BHA), butylated hydroxytoluene (BHT) and tert-butylhydroquinone (TBHQ) are commonly used. However, the use of synthetic antioxidants has been associated with several health side effects and toxicity. The use of natural antioxidants as stabilizers of vegetable oils is being suggested as a sustainable alternative to synthetic antioxidants. The alternative that has been studied is the use of natural extracts obtained mainly from fruits, vegetables and seeds, which have a well-known antioxidant activity related mainly to the presence of phenolic compounds. Sunflower seed cake is rich in phenolic compounds (1 to 4% of the total mass), with chlorogenic acid as the major constituent. The aim of this study was to evaluate the in vitro application of the phenolic extract obtained from sunflower seed cake as a retarder of the lipid oxidation reaction in soybean oil and to compare the results with a synthetic antioxidant. For this, soybean oil, provided by industry without any added antioxidants, was subjected to an accelerated storage test for 17 days at 65 °C. Six samples with different treatments were submitted to the test: a control sample without any added antioxidants; 100 ppm of the synthetic antioxidant BHT; a mixture of 50 ppm of BHT and 50 ppm of phenolic compounds; and 100, 500 and 1200 ppm of phenolic compounds. The phenolic compound concentration in the extract was expressed in gallic acid equivalents. 
To evaluate the oxidative changes in the samples, aliquots were collected after 0, 3, 6, 10 and 17 days and analyzed for peroxide value and conjugated diene and triene values. The soybean oil sample initially had a peroxide value of 2.01 ± 0.27 meq of oxygen/kg of oil. On the third day of treatment, only the samples treated with 100, 500 and 1200 ppm of phenolic compounds showed considerable retardation of oxidation compared to the control sample. On the sixth day, the samples presented a considerable increase in peroxide value (higher than 13.57 meq/kg), and the higher the concentration of phenolic compounds, the lower the peroxide value observed. From the tenth day on, the samples had very high peroxide values (higher than 55.39 meq/kg), and only the sample containing 1200 ppm of phenolic compounds presented significant retardation of oxidation. The samples containing the phenolic extract were more effective at preventing the formation of primary oxidation products, indicating their effectiveness in retarding the reaction. Similar results were observed for conjugated dienes and trienes. Based on these results, phenolic compounds, especially chlorogenic acid (the major phenolic compound of sunflower seed cake), can be considered a potential partial or even total substitute for synthetic antioxidants.
Keywords: chlorogenic acid, natural antioxidant, vegetable oil deterioration, waste valorization
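As a hedged illustration of how such treatments are typically compared, the build-up of peroxide value relative to the untreated control can be expressed as a percent inhibition. The sketch below assumes illustrative peroxide values for day six; only the initial value of 2.01 meq/kg and a few bounds are quoted in the abstract, so the treatment readings here are hypothetical.

```python
# Percent inhibition of primary oxidation relative to the control sample.
# The day-6 peroxide values below are illustrative placeholders, not the
# study's measurements.

def percent_inhibition(pv_control, pv_treated, pv_initial):
    """Inhibition of peroxide build-up relative to the untreated control."""
    rise_control = pv_control - pv_initial
    rise_treated = pv_treated - pv_initial
    return 100.0 * (rise_control - rise_treated) / rise_control

pv0 = 2.01  # initial peroxide value reported for the oil (meq O2/kg)
day6 = {"control": 25.0, "BHT 100 ppm": 20.0, "phenolics 1200 ppm": 13.6}

for label, pv in day6.items():
    if label != "control":
        inh = percent_inhibition(day6["control"], pv, pv0)
        print(f"{label}: {inh:.1f}% inhibition of peroxide build-up")
```

A higher percent inhibition indicates a stronger retarding effect on the primary oxidation reaction.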
Procedia PDF Downloads 264
546 The Complementary Effect of Internal Control System and Whistleblowing Policy on Prevention and Detection of Fraud in Nigerian Deposit Money Banks
Authors: Dada Durojaye Joshua
Abstract:
The study examined the combined effect of the internal control system and whistleblowing policy, pursuing the following specific objectives: to examine the relationship between monitoring activities and fraud detection and prevention, and to investigate the effect of control activities on fraud detection and prevention in Nigerian Deposit Money Banks (DMBs). The population of the study comprises the 89,275 members of staff in the 20 DMBs in Nigeria as at June 2019. Purposive and convenience sampling techniques were used in the selection of 80 members of staff at the supervisory level of the Internal Audit Departments of the head offices of the sampled banks, that is, selecting 4 respondents (Audit Executive/Head, Internal Control; Manager, Operational Risk Management; Head, Financial Crime Control; and the Chief Compliance Officer) from each of the 20 DMBs. A standard questionnaire was adapted from the 2017/2018 Internal Control Questionnaire and Assessment of the Bureau of Financial Monitoring and Accountability, Florida Department of Economic Opportunity, and modified to suit the purpose of the study. It was self-administered to gather data from the 80 respondents at the respective headquarters of the sampled banks across Nigeria. Two Likert scales were used in achieving the stated objectives, and logit regression was used in testing the stated hypotheses. Using the constructs of conduct of ongoing or separate evaluations (COSE) and evaluation and communication of deficiencies (ECD), it was found that monitoring activities are significantly and positively related to fraud detection and prevention in Nigerian DMBs. 
It was also found that control activities, measured through the selection and development of control activities (SDCA), the selection and development of general controls over technology to prevent financial fraud (SDGCTF), and the development of control activities that provide for transparency through procedures that put policies into action (DCATPPA), contributed to fraud detection and prevention in Nigerian DMBs. In addition, transparency, accountability, reliability, independence and value relevance were found to have a significant effect on fraud detection and prevention in Nigerian DMBs. The study concluded that the board of directors demonstrated independence from management and exercised oversight of the development and performance of internal control. Part of the conclusion was that there was accountability on the part of the owners and preparers of the financial reports and that the system enables members of staff to account for their responsibilities. Among the recommendations was that the management of Nigerian DMBs should establish a standard internal control system strong enough to deter fraud, in order to support continuity of operations by ensuring the liquidity, solvency and going-concern status of the banks. It was also recommended that the banks create a structure that encourages whistleblowing to complement the internal control system.
Keywords: internal control, whistleblowing, deposit money banks, fraud prevention, fraud detection
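As a rough sketch of the kind of logit (logistic) regression analysis reported above, the following fits a logistic model to synthetic survey scores for the COSE and ECD constructs using plain gradient descent. The construct names follow the abstract, but the data, coding and model details are illustrative assumptions, not the study's.

```python
# Minimal logit regression sketch on synthetic survey data: two construct
# scores (COSE, ECD) predicting a binary fraud detection/prevention outcome.
import math
import random

random.seed(1)
# Synthetic respondents: scores on two monitoring constructs (1-5 scale),
# and a binary outcome generated with known positive effects plus noise.
X = [(random.uniform(1, 5), random.uniform(1, 5)) for _ in range(200)]
y = [1 if (0.8 * cose + 0.6 * ecd - 4.5 + random.gauss(0, 1)) > 0 else 0
     for cose, ecd in X]

w = [0.0, 0.0]  # coefficients for COSE, ECD
b = 0.0
lr = 0.05
for _ in range(2000):  # batch gradient descent on the log-loss
    gw, gb = [0.0, 0.0], 0.0
    for (x1, x2), t in zip(X, y):
        p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        gw[0] += (p - t) * x1
        gw[1] += (p - t) * x2
        gb += p - t
    w[0] -= lr * gw[0] / len(X)
    w[1] -= lr * gw[1] / len(X)
    b -= lr * gb / len(X)

print(f"COSE coefficient: {w[0]:.2f}, ECD coefficient: {w[1]:.2f}")
```

Positive fitted coefficients correspond to the study's finding that monitoring constructs are positively related to fraud detection and prevention; a real analysis would use a statistics package and report standard errors.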
Procedia PDF Downloads 80
545 Examining the Usefulness of an ESP Textbook for Information Technology: Learner Perspectives
Authors: Yun-Husan Huang
Abstract:
Many English for Specific Purposes (ESP) textbooks are distributed globally, as their content development often involves compromises between commercial and pedagogical demands. The regional applicability and usefulness of globally published ESP textbooks has therefore been much debated. For ESP instructors, textbook selection is a priority consideration in curriculum design: an appropriate ESP textbook can facilitate teaching and learning, while an inappropriate one can be a disaster for both teachers and students. This study investigates the regional applicability and usefulness of an ESP textbook for information technology (IT). Participants were 51 sophomores majoring in Applied Informatics and Multimedia at a university in Taiwan. As non-English majors, their English proficiency was mostly at the elementary and elementary-to-intermediate levels. The course was offered for two semesters, and the textbook selected was Oxford English for Information Technology. At the end of the course, the students completed a survey in which each item offered five choices: Very Easy, Easy, Neutral, Difficult, and Very Difficult. Following the content design of the textbook, the survey investigated how difficult the students found the grammar, listening, speaking, reading, and writing materials. Results reveal that only 22% of them found the grammar section difficult or very difficult. For listening, 71% responded difficult or very difficult; for general reading, 55%; for speaking, 56%; for writing, 78%; and for advanced reading, 90%. These results indicate that, except for the grammar section, more than half of the students found the textbook contents difficult in terms of listening, speaking, reading, and writing materials. 
Such contradictory results between the easy grammar section and the difficult four language skills sections imply that the textbook designers do not fully understand the English learning background of regional ESP learners. For the participants, the grammar section was at the general grammar level of junior high school, while the four language skills sections were closer to the level of college English majors. The findings carry implications for instructors and textbook designers. First, existing ESP textbooks for IT are few, so instructors have limited selections. Second, existing globally published textbooks for IT cannot be applied to learners of all English proficiency levels, especially the low level. Third, given the limited selections, instructors should modify the selected textbook contents or supplement extra ESP materials to meet the proficiency level of the target learners. Fourth, local ESP publishers should collaborate with local ESP instructors, who best understand the learning background of their students, to develop appropriate ESP textbooks for local learners. In conclusion, even though the instructor reduced the learning contents and simplified tests in the curriculum design, the students still found the textbook difficult. This implies that, in addition to the instructor's professional experience, there is a need to understand the usefulness of a textbook from learner perspectives.
Keywords: ESP textbooks, ESP materials, ESP textbook design, learner perspectives on ESP textbooks
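The reported difficulty percentages can be reproduced from raw five-choice counts as follows. The per-choice counts below are hypothetical values chosen to be consistent with the 22% (grammar) and 71% (listening) figures for the 51 respondents; the abstract reports only the aggregated percentages.

```python
# Aggregating five-point survey responses into the "difficult or very
# difficult" percentages reported in the abstract. Counts are illustrative.
from collections import Counter

N = 51  # number of respondents reported in the abstract
responses = {
    "grammar":   Counter({"very_easy": 8, "easy": 20, "neutral": 12,
                          "difficult": 9, "very_difficult": 2}),
    "listening": Counter({"very_easy": 1, "easy": 5, "neutral": 9,
                          "difficult": 24, "very_difficult": 12}),
}

for section, counts in responses.items():
    hard = counts["difficult"] + counts["very_difficult"]
    print(f"{section}: {100 * hard / N:.0f}% found it difficult or very difficult")
```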
Procedia PDF Downloads 340
544 An Integrated Approach to Child Care Earthquake Preparedness through “Telemachus” Project
Authors: A. Kourou, S. Kyriakopoulos, N. Anyfanti
Abstract:
Many children under the age of five spend their daytime hours away from home, in a kindergarten. Caring for children is a serious matter, and their safety in case of an earthquake is the first priority. Being aware of earthquakes helps to prioritize needs and take appropriate actions to limit their effects. Earthquakes, occurring anywhere and at any time, require emergency planning. Earthquake planning is a cooperative effort, and childcare providers have unique roles and responsibilities in it. Greece has high seismicity, and the Ionian Islands Region has the highest seismic activity in the country. Over the last five years, the Earthquake Planning and Protection Organization (EPPO), a national organization, has analyzed the needs and requirements of kindergartens on earthquake protection issues. In this framework it has been noticed that although the State requires child care centers to hold drills, the standards for emergency preparedness in these centers vary, and many of them had no written plans for emergencies. For these reasons, EPPO supports the development of emergency planning guidance and familiarizes day care center staff with earthquake preparedness. Furthermore, the Handbook on Day Care Earthquake Planning developed by EPPO helps providers understand that emergency planning is essential to risk reduction. Preparedness and training should be ongoing processes; thus EPPO implements dozens of specific seminars every year on children's disaster-related needs. This research presents the results of a survey that assesses the level of earthquake preparedness of kindergartens across the country, including the Ionian Islands. A closed-ended questionnaire of 20 main questions was developed for the survey in order to capture participants' views of earthquake preparedness actions at the individual, family and day care environment levels. 
A total of 2,668 questionnaires were gathered from March 2014 to May 2019 and analyzed by EPPO's Department of Education. Moreover, this paper presents EPPO's educational activities targeted at the Ionian Islands Region, implemented in the framework of the “Telemachus” Project. Providing a safe environment in which children can learn and staff can work is the foremost goal of any State, community and kindergarten. This project is funded under the Priority Axis "Environmental Protection and Sustainable Development" of the Operational Plan "Ionian Islands 2014-2020". It is increasingly accepted that emergency preparedness should be thought of as an ongoing process rather than a one-time activity. Creating an earthquake-safe daycare environment that facilitates learning is a challenging task. Training, drills, and updates of the emergency plan should take place throughout the year at kindergartens to identify any gaps and to ensure the emergency procedures work. EPPO will continue to work closely with regional and local authorities to actively address the needs of children and kindergartens before, during and after earthquakes.
Keywords: child care centers, education on earthquake, emergency planning, kindergartens, Ionian Islands Region of Greece
Procedia PDF Downloads 118
543 The Effects of Goal Setting and Feedback on Inhibitory Performance
Authors: Mami Miyasaka, Kaichi Yanaoka
Abstract:
Attention Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder characterized by inattention, hyperactivity, and impulsivity; symptoms often manifest during childhood. In children with ADHD, the development of inhibitory processes is impaired. Inhibitory control allows people to avoid processing unnecessary stimuli and to behave appropriately in various situations; thus, people with ADHD require interventions to improve inhibitory control. Positive or negative reinforcements (i.e., reward or punishment) help improve the performance of children with such difficulties. However, in order to optimize impact, reward and punishment must be presented immediately following the relevant behavior. In regular elementary school classrooms, such supports are uncommon; hence, an alternative practical intervention method is required. One potential intervention involves setting goals to keep children motivated to perform tasks. This study examined whether goal setting improved inhibitory performances, especially for children with severe ADHD-related symptoms. We also focused on giving feedback on children's task performances. We expected that giving children feedback would help them set reasonable goals and monitor their performance. Feedback can be especially effective for children with severe ADHD-related symptoms because they have difficulty monitoring their own performance, perceiving their errors, and correcting their behavior. Our prediction was that goal setting by itself would be effective for children with mild ADHD-related symptoms, and goal setting based on feedback would be effective for children with severe ADHD-related symptoms. Japanese elementary school children and their parents were the sample for this study. Children performed two kinds of go/no-go tasks, and parents completed a checklist about their children's ADHD symptoms, the ADHD Rating Scale-IV, and the Conners 3rd edition. 
The go/no-go task is a cognitive task that measures inhibitory performance. Children were asked to press a key on the keyboard when a particular symbol appeared on the screen (go stimulus) and to refrain from doing so when another symbol was displayed (no-go stimulus). Errors in response to a no-go stimulus indicate inhibitory impairment. To examine the effect of goal setting on inhibitory control, 37 children (Mage = 9.49 ± 0.51) were required to set a performance goal, and 34 children (Mage = 9.44 ± 0.50) were not. Further, to manipulate the presence of feedback, in one go/no-go task no information about children's scores was provided, whereas scores were revealed in the other. The results revealed a significant interaction between goal setting and feedback, but the three-way interaction between ADHD-related inattention, feedback, and goal setting was not significant. These results indicate that goal setting improved performance on the go/no-go task only with feedback, regardless of ADHD severity. Furthermore, we found an interaction between ADHD-related inattention and feedback, indicating that informing inattentive children of their scores made them unexpectedly more impulsive. Taken together, giving feedback alone was, unexpectedly, too demanding for children with severe ADHD-related symptoms, but the combination of goal setting with feedback was effective for improving their inhibitory control. We discuss effective interventions for children with ADHD from the perspective of goal setting and feedback. This work was supported by the 14th Hakuho Research Grant for Child Education of the Hakuho Foundation.
Keywords: attention deficit disorder with hyperactivity, feedback, goal-setting, go/no-go task, inhibitory control
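A minimal sketch of how go/no-go performance is typically scored: failed inhibitions on no-go trials (commission errors) index inhibitory impairment, while missed go trials (omission errors) index inattention. The trial data below are hypothetical; the abstract does not specify the study's scoring details.

```python
# Scoring a go/no-go task: commission errors (responding to a no-go
# stimulus) versus omission errors (missing a go stimulus). Trial data
# here are illustrative, not from the study.

trials = [  # (stimulus, responded)
    ("go", True), ("go", True), ("nogo", False), ("go", True),
    ("nogo", True),   # commission error: failed to withhold a response
    ("go", False),    # omission error: missed a go stimulus
    ("nogo", False), ("go", True),
]

nogo = [responded for stim, responded in trials if stim == "nogo"]
go = [responded for stim, responded in trials if stim == "go"]

commission_rate = sum(nogo) / len(nogo)  # proportion of failed inhibitions
omission_rate = 1 - sum(go) / len(go)    # proportion of missed go trials

print(f"commission error rate: {commission_rate:.2f}")
print(f"omission error rate: {omission_rate:.2f}")
```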
Procedia PDF Downloads 104
542 Complementary Effect of Whistleblowing Policy and Internal Control System on Prevention and Detection of Fraud in Nigerian Deposit Money Banks
Authors: Dada Durojaye Joshua
Abstract:
The study examined the combined effect of the internal control system and whistleblowing policy, pursuing the following specific objectives: to examine the relationship between monitoring activities and fraud detection and prevention, and to investigate the effect of control activities on fraud detection and prevention in Nigerian Deposit Money Banks (DMBs). The population of the study comprises the 89,275 members of staff in the 20 DMBs in Nigeria as at June 2019. Purposive and convenience sampling techniques were used in the selection of 80 members of staff at the supervisory level of the Internal Audit Departments of the head offices of the sampled banks, that is, selecting 4 respondents (Audit Executive/Head, Internal Control; Manager, Operational Risk Management; Head, Financial Crime Control; and the Chief Compliance Officer) from each of the 20 DMBs. A standard questionnaire was adapted from the 2017/2018 Internal Control Questionnaire and Assessment of the Bureau of Financial Monitoring and Accountability, Florida Department of Economic Opportunity, and modified to suit the purpose of the study. It was self-administered to gather data from the 80 respondents at the respective headquarters of the sampled banks across Nigeria. Two Likert scales were used in achieving the stated objectives, and logit regression was used in testing the stated hypotheses. Using the constructs of conduct of ongoing or separate evaluations (COSE) and evaluation and communication of deficiencies (ECD), it was found that monitoring activities are significantly and positively related to fraud detection and prevention in Nigerian DMBs. 
It was also found that control activities, measured through the selection and development of control activities (SDCA), the selection and development of general controls over technology to prevent financial fraud (SDGCTF), and the development of control activities that provide for transparency through procedures that put policies into action (DCATPPA), contributed to fraud detection and prevention in Nigerian DMBs. In addition, transparency, accountability, reliability, independence and value relevance were found to have a significant effect on fraud detection and prevention in Nigerian DMBs. The study concluded that the board of directors demonstrated independence from management and exercised oversight of the development and performance of internal control. Part of the conclusion was that there was accountability on the part of the owners and preparers of the financial reports and that the system enables members of staff to account for their responsibilities. Among the recommendations was that the management of Nigerian DMBs should establish a standard internal control system strong enough to deter fraud, in order to support continuity of operations by ensuring the liquidity, solvency and going-concern status of the banks. It was also recommended that the banks create a structure that encourages whistleblowing to complement the internal control system.
Keywords: internal control, whistleblowing, deposit money banks, fraud prevention, fraud detection
Procedia PDF Downloads 73
541 The Forms of Representation in Architectural Design Teaching: The Cases of Politecnico Di Milano and Faculty of Architecture of the University of Porto
Authors: Rafael Sousa Santos, Clara Pimena Do Vale, Barbara Bogoni, Poul Henning Kirkegaard
Abstract:
The representative component, a determining aspect of the architect's training, has undergone exponential and unprecedented development. However, the multiplication of possibilities has also multiplied uncertainties about architectural design teaching and, by extension, about the very principles of architectural education. This paper presents the results of research into the following problem: the relation between the forms of representation and architectural design teaching-learning processes. The research took as its object the educational models of two schools, the Politecnico di Milano (POLIMI) and the Faculty of Architecture of the University of Porto (FAUP), and was guided by three main objectives: to characterize the educational model followed in both schools, focusing on the representative component and its role; to interpret the relation between forms of representation and architectural design teaching-learning processes; and to consider the possibilities for their valorisation. Methodologically, the research followed a qualitative embedded multiple-case study design. The object, i.e., the educational model, was approached in both the POLIMI and FAUP cases considering its Context and three embedded unities of analysis: the educational Purposes, Principles, and Practices. A Matrix for Characterization (MCC) was developed to guide the procedures of data collection and analysis. As a methodological tool, the MCC relates the three embedded unities of analysis to the three main sources of evidence in which the object manifests itself: the professors, expressing how the model is assumed; the architectural design classes, expressing how the model is achieved; and the students, expressing how the model is acquired. The main research methods used were naturalistic and participatory observation, in-person interviews, and documentary and bibliographic review. 
The results reveal the importance of the representative component in the educational models of both cases, despite differences in its role. In POLIMI's model, representation is particularly relevant in the teaching of architectural design, while in FAUP's model it plays a transversal role, according to an idea of 'general training through hand drawing'. In fact, the difference between the models with regard to representation can be partially understood through the level of importance each gives to hand drawing. Regarding the teaching of architectural design, the two cases are distinguished by their relation to the representative component: while at POLIMI the forms of representation serve an essentially instrumental purpose, at FAUP they tend to be considered also in their methodological dimension. It seems that the possibilities for valuing these models reside precisely in the relation between forms of representation and architectural design teaching. The knowledge base developed in this research is expected to make three main contributions: to contribute to the maintenance of the educational models of POLIMI and FAUP; through the precise description of the methodological procedures, to contribute by transferability to similar studies; and, through the critical and objective framing of the problem underlying the forms of representation and their relation with architectural design teaching, to contribute to the broader discussion concerning contemporary challenges in architectural education.
Keywords: architectural design teaching, architectural education, educational models, forms of representation
Procedia PDF Downloads 123
540 Empowering Indigenous Epistemologies in Geothermal Development
Authors: Te Kīpa Kēpa B. Morgan, Oliver W. Mcmillan, Dylan N. Taute, Tumanako N. Fa'aui
Abstract:
Epistemologies are ways of knowing. Indigenous Peoples are aware that they do not perceive and experience the world in the same way as others, so it is important when empowering Indigenous epistemologies, such as that of the New Zealand Māori, to also be able to represent a scientific understanding within the same analysis. A geothermal development assessment tool has been developed by adapting the Mauri Model Decision Making Framework. Mauri is a metric capable of representing the change in the life-supporting capacity of things and collections of things. The Mauri Model is a method of grouping mauri indicators as dimension averages in order to allow holistic assessment and to conduct sensitivity analyses for the effect of worldview bias. R Shiny is the coding platform used for this Vision Mātauranga research, which has created an expert decision support tool (DST) that combines a stakeholder assessment of worldview bias with an impact assessment of mauri-based indicators to determine the sustainability of proposed geothermal development. The initial intention was to develop guidelines for quantifying mātauranga Māori impacts related to geothermal resources. To do this, three typical scenarios were considered: a resource owner wishing to assess the potential for new geothermal development; another party wishing to assess the environmental and cultural impacts of the proposed development; and an assessment that focuses on the holistic sustainability of the resource, including its surface features. Indicator sets and measurement thresholds considered necessary for each assessment context were developed, and these have been grouped to represent four mauri dimensions that mirror the four well-being criteria used for resource management in Aotearoa, New Zealand. Two case studies have been conducted to test the DST's suitability for quantifying mātauranga Māori and other biophysical factors related to a geothermal system. 
This involved estimating mauriOmeter values for physical features such as temperature, flow rate, frequency and colour, and developing indicators to quantify qualitative observations about the geothermal system made by Māori. A retrospective analysis was then conducted to verify different understandings of the geothermal system. The case studies found that the expert DST is useful for geothermal development assessment, especially where hapū (indigenous sub-tribal groupings) are conflicted regarding the benefits and disadvantages of their own and others' geothermal developments. These results have been supplemented with evaluations of the cumulative impacts of geothermal developments experienced by different parties, obtained by integrating the time history of the worldview-bias-weighted mauriOmeter score produced by the expert DST. Cumulative impacts represent the change in resilience or potential of geothermal systems, which directly assists with the holistic interpretation of change from an Indigenous Peoples' perspective.
Keywords: decision support tool, holistic geothermal assessment, indigenous knowledge, mauri model decision-making framework
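The Mauri Model scoring idea described above can be sketched as follows: indicator scores are averaged within each of the four dimensions, the dimension averages are combined under a worldview-bias weighting, and sensitivity is checked by re-running with different weightings. The indicator values, the -2 to +2 scale, and the weight sets below are illustrative assumptions, not values from the case studies.

```python
# Hedged sketch of Mauri Model scoring: dimension averages combined under
# alternative worldview-bias weightings. All numbers are illustrative.

dimensions = {  # indicator scores per well-being dimension (-2 .. +2 scale)
    "environmental": [1, 0, -1, 1],
    "cultural":      [-1, -2, 0],
    "social":        [0, 1, 1],
    "economic":      [2, 1, 1, 2],
}

def mauri_score(weights):
    """Weighted average of dimension averages; weights sum to 1."""
    return sum(weights[d] * sum(v) / len(v) for d, v in dimensions.items())

equal = {d: 0.25 for d in dimensions}
cultural_bias = {"environmental": 0.2, "cultural": 0.4,
                 "social": 0.2, "economic": 0.2}

# Sensitivity analysis: how the overall score shifts with worldview bias.
print(f"equal weighting:    {mauri_score(equal):+.2f}")
print(f"cultural weighting: {mauri_score(cultural_bias):+.2f}")
```

Comparing the score under several weight sets is what exposes worldview bias: here a culturally weighted view rates the same development less favourably than an equal weighting.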
Procedia PDF Downloads 187
539 Preparation, Characterization and Photocatalytic Activity of a New Noble Metal Modified TiO2@SrTiO3 and SrTiO3 Photocatalysts
Authors: Ewelina Grabowska, Martyna Marchelek
Abstract:
Among the various semiconductors, nanosized TiO2 has been widely studied due to its high photosensitivity, low cost, low toxicity, and good chemical and thermal stability. However, there are two main drawbacks to the practical application of pure TiO2 films. One is that TiO2 can be excited only by ultraviolet (UV) light due to its intrinsically wide bandgap (3.2 eV for anatase and 3.0 eV for rutile), which limits its practical efficiency for solar energy utilization, since UV light makes up only 4-5% of the solar spectrum. The other is that a high electron-hole recombination rate reduces the photoelectric conversion efficiency of TiO2. In order to overcome these drawbacks and modify the electronic structure of TiO2, some semiconductors (e.g., CdS, ZnO, PbS, Cu2O, Bi2S3, and CdSe) have been used to prepare coupled TiO2 composites, improving their charge separation efficiency and extending the photoresponse into the visible region. It has been proved that the fabrication of p-n heterostructures by combining n-type TiO2 with p-type semiconductors is an effective way to improve the photoelectric conversion efficiency of TiO2. SrTiO3 is a good candidate for coupling with TiO2 and improving the photocatalytic performance of the photocatalyst because its conduction band edge is more negative than that of TiO2. Due to the potential differences between the band edges of these two semiconductors, the photogenerated electrons transfer from the conduction band of SrTiO3 to that of TiO2, while the photogenerated holes transfer from the valence band of TiO2 to that of SrTiO3. The photogenerated charge carriers can thus be efficiently separated, resulting in the enhancement of the photocatalytic properties of the photocatalyst. Additionally, one method for improving photocatalyst performance is the addition of nanoparticles containing one or two noble metals (Pt, Au, Ag and Pd) deposited on the semiconductor surface. 
The proposed mechanisms are: (1) the surface plasmon resonance of noble metal particles is excited by visible light, facilitating the excitation of surface electrons and interfacial electron transfer; (2) energy levels can be produced in the band gap of TiO2 by the dispersion of noble metal nanoparticles in the TiO2 matrix; (3) noble metal nanoparticles deposited on TiO2 act as electron traps, enhancing electron-hole separation. In view of this, we recently obtained a series of TiO2@SrTiO3 and SrTiO3 photocatalysts loaded with noble metal NPs using the photodeposition method. The M-TiO2@SrTiO3 and M-SrTiO3 photocatalysts (M = Rh, Ru, Pt) were studied for the photodegradation of phenol in the aqueous phase under UV-Vis and visible irradiation. Moreover, in the second part of our research, hydroxyl radical formation was investigated. Fluorescence of an irradiated coumarin solution was used as a method of ˙OH radical detection: coumarin readily reacts with the generated hydroxyl radicals, forming hydroxycoumarins. Although the major hydroxylation product is 5-hydroxycoumarin, only the 7-hydroxy product of coumarin hydroxylation emits fluorescent light. Thus, this method was used only for hydroxyl radical detection, not for determining the concentration of hydroxyl radicals.
Keywords: composites TiO2, SrTiO3, photocatalysis, phenol degradation
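Photocatalytic phenol degradation of the kind studied here is commonly fitted to pseudo-first-order kinetics, ln(C0/C) = k·t. The abstract does not state which kinetic model the authors used, so the sketch below is a generic illustration with invented concentration-time data, not measurements from this study.

```python
# Hedged sketch: extracting an apparent pseudo-first-order rate constant
# from phenol concentration versus irradiation time. Data are illustrative.
import math

times = [0, 20, 40, 60]          # irradiation time, min
conc = [20.0, 14.8, 11.0, 8.1]   # phenol concentration, mg/L

# Least-squares slope of ln(C0/C) versus t through the origin gives k.
y = [math.log(conc[0] / c) for c in conc]
k = sum(t * yi for t, yi in zip(times, y)) / sum(t * t for t in times)

print(f"apparent rate constant k = {k:.4f} min^-1")
```

A larger k under otherwise identical irradiation would indicate a more active photocatalyst, which is how the modified and unmodified materials could be ranked.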
Procedia PDF Downloads 222
538 Decentralized Peak-Shaving Strategies for Integrated Domestic Batteries
Authors: Corentin Jankowiak, Aggelos Zacharopoulos, Caterina Brandoni
Abstract:
In a context of increasing stress on the electricity network from the decarbonization of many sectors, energy storage is likely to be the key mitigating element, acting as a buffer between production and demand. In particular, the potential for storage is highest when it is connected close to the loads. Yet low voltage storage struggles to penetrate the market at a large scale due to the novelty and complexity of the solution, and the competitive advantage that regulations give to fossil fuel-based technologies. Strong and reliable numerical simulations are required to show the benefits of storage located near loads and to promote its development. The present study excludes aggregated control of storage: it is assumed that the storage units operate independently of one another without exchanging information, as is currently mostly the case. A computationally light battery model is presented in detail and validated by direct comparison with a domestic battery operating in real conditions. This model is then used to develop Peak-Shaving (PS) control strategies, as this is the decentralized service from which beneficial impacts are most likely to emerge. The aggregation of flatter, peak-shaved consumption profiles is likely to lead to a flatter and arbitraged profile at higher voltage layers. Furthermore, voltage fluctuations can be expected to decrease if spikes of individual consumption are reduced. The crucial part of achieving PS lies in the charging pattern: peaks depend on the switching on and off of appliances in the dwelling by the occupants and are therefore impossible to predict accurately. A performant PS strategy must therefore include a smart charge recovery algorithm that can ensure enough energy is present in the battery in case it is needed, without generating new peaks by charging the unit. Three categories of PS algorithms are introduced in detail. 
First, algorithms using a constant threshold or power rate for charge recovery; then algorithms using the State of Charge (SOC) as a decision variable; and finally, algorithms using a load forecast – the impact of whose accuracy is discussed – to generate PS. A set of performance metrics was defined in order to quantitatively evaluate their operation with regard to peak reduction, total energy consumption, and self-consumption of domestic photovoltaic generation. The algorithms were tested on load profiles with a 1-minute granularity over a 1-year period, and their performance was assessed against these metrics. The results show that a constant charging threshold or power is far from optimal: a fixed value is unlikely to fit the variability of a residential profile. As could be expected, forecast-based algorithms show the highest performance; however, they depend on the accuracy of the forecast. On the other hand, SOC-based algorithms also present satisfactory performance, making them a strong alternative when a reliable forecast is not available.
Keywords: decentralised control, domestic integrated batteries, electricity network performance, peak-shaving algorithm
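A threshold-based shaving rule with a simple SOC-dependent charge recovery, in the spirit of the first two algorithm categories above, can be sketched as follows. The threshold value, battery parameters, and recovery law here are illustrative assumptions, not the study's actual algorithms:

```python
def peak_shave(load_kw, threshold_kw=3.0, capacity_kwh=5.0,
               max_power_kw=2.5, soc0=0.5, dt_h=1 / 60):
    """Discharge above the threshold; recharge below it at a rate that
    shrinks as the battery fills, to avoid creating new charging peaks.
    Returns the resulting grid profile and the final state of charge."""
    soc = soc0
    grid = []
    for p in load_kw:
        if p > threshold_kw:
            # Clip the peak, limited by rated power and stored energy.
            discharge = min(p - threshold_kw, max_power_kw,
                            soc * capacity_kwh / dt_h)
            soc -= discharge * dt_h / capacity_kwh
            grid.append(p - discharge)
        else:
            # SOC-based recovery: charge harder when the battery is empty,
            # never exceeding the headroom below the threshold.
            headroom = threshold_kw - p
            charge = min(headroom, max_power_kw * (1 - soc),
                         (1 - soc) * capacity_kwh / dt_h)
            soc += charge * dt_h / capacity_kwh
            grid.append(p + charge)
    return grid, soc
```

Because the charging power is capped by the headroom below the threshold, the grid profile never exceeds the threshold as long as the battery holds enough energy through the peak.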
Procedia PDF Downloads 118
537 Bioleaching of Precious Metals from an Oil-fired Ash Using Organic Acids Produced by Aspergillus niger in Shake Flasks and a Bioreactor
Authors: Payam Rasoulnia, Seyyed Mohammad Mousavi
Abstract:
Heavy fuel oil-fired power plants produce huge amounts of ash as solid waste, which needs to be managed and processed. Recycling the precious metals V and Ni from these oil-fired ashes, which are considered secondary sources for metals recovery, not only has great economic importance for industry but is also noteworthy from the environmental point of view. Vanadium is an important metal that is mainly used in the steel industry because of its hardness, tensile strength, and fatigue resistance. It is also utilized in oxidation catalysts, titanium–aluminum alloys, and vanadium redox batteries. In the present study, bioleaching of vanadium and nickel from an oil-fired ash sample was conducted using the fungus Aspergillus niger. The experiments were carried out using the spent-medium bioleaching method in both Erlenmeyer flasks and a bubble column bioreactor, in order to compare the two. In spent-medium bioleaching, the solid waste is not in direct contact with the fungus; consequently, fungal growth is not retarded and the maximum amount of organic acids is produced. In this method, the metals are leached by the biogenic organic acids present in the medium. In the shake flask experiments, the fungus was cultured for 15 days, when the maximum production of organic acids was observed, while in the bubble column bioreactor experiments a 7-day fermentation period was applied. The amounts of organic acids produced were measured using high-performance liquid chromatography (HPLC), and the results showed that, depending on the fermentation period and the scale of the experiments, the fungus produced different major lixiviants. In the flask tests, citric acid was the main organic acid produced by the fungus, and the other organic acids, including gluconic, oxalic, and malic acids, were excreted in much lower concentrations, while in the bioreactor oxalic acid was the main lixiviant and was produced in considerable quantity.
In the Erlenmeyer flasks, during 15 days of fermentation of Aspergillus niger, 8080 ppm citric acid and 1170 ppm oxalic acid were produced, while in the bubble column bioreactor, over 7 days of fungal growth, 17185 ppm oxalic acid and 1040 ppm citric acid were secreted. The leaching tests using the spent media obtained from both fermentation experiments were performed under the same conditions: a leaching duration of 7 days, a leaching temperature of 60 °C, and pulp densities up to 3% (w/v). The results revealed that in the Erlenmeyer flask experiments 97% of the V and 50% of the Ni were extracted, while using the spent medium produced in the bubble column bioreactor, V and Ni recoveries of 100% and 33%, respectively, were achieved. These recovery yields indicate that at both scales almost all the vanadium can be recovered, while nickel recovery was lower. The nickel recovery yield obtained with the bioreactor spent medium was lower than that obtained in the flask experiments, which could be due to precipitation of some of the Ni in the presence of the high levels of oxalic acid in that spent medium.
Keywords: Aspergillus niger, bubble column bioreactor, oil-fired ash, spent-medium bioleaching
Procedia PDF Downloads 229
536 Social Factors That Contribute to Promoting and Supporting Resilience in Children and Youth following Environmental Disasters: A Mixed Methods Approach
Authors: Caroline McDonald-Harker, Julie Drolet
Abstract:
In the last six years, Canada has experienced two major and catastrophic environmental disasters: the 2013 Southern Alberta flood and the 2016 Fort McMurray, Alberta wildfire. These two disasters resulted in damages exceeding 12 billion dollars, the costliest disasters in Canadian history. In the aftermath of these disasters, many families faced the loss of homes, places of employment, schools, and recreational facilities, and also experienced social, emotional, and psychological difficulties. Children and youth are among the most vulnerable to the devastating effects of disasters due to the physical, cognitive, and social factors related to their developmental life stage. Yet children and youth also have the capacity to be resilient and to act as powerful catalysts for change in their own lives and wider communities following disaster. Little is known, particularly from a sociological perspective, about the specific factors that contribute to resilience in children and youth, and effective ways to support their overall health and well-being. This paper focuses on the voices and experiences of children and youth residing in these two disaster-affected communities in Alberta, Canada and specifically examines: 1) How children and youth’s lives are impacted by the tragedy, devastation, and upheaval of disaster; 2) Ways that children and youth demonstrate resilience when directly faced with the adversarial circumstances of disaster; and 3) The cumulative internal and external factors that contribute to bolstering and supporting resilience among children and youth post-disaster. This paper discusses the characteristics associated with high levels of resilience in 183 children and youth ages 5 to 17, based on quantitative and qualitative data obtained through a mixed methods approach.
Child and youth participants were administered the Child and Youth Resilience Measure (CYRM-28) in order to examine factors that influence resilience processes, including individual, caregiver, and context factors. The CYRM-28 was then supplemented with qualitative interviews with children and youth to contextualize the CYRM-28 resiliency factors and provide further insight into their overall disaster experience. Findings reveal that high levels of resilience among child and youth participants are associated with both individual and caregiver factors, specifically positive outlook, effective communication, peer support, and physical and psychological caregiving. Individual and caregiver factors helped mitigate the negative effects of disaster, thus bolstering resilience in children and youth. This paper discusses the implications that these findings have for understanding the specific mechanisms that support the resiliency processes and overall recovery of children and youth following disaster; the importance of bridging the gap between children and youth’s needs and the services and supports provided to them post-disaster; and the need to develop resiliency processes and practices that empower children and youth as active agents of change in their own lives following disaster. These findings contribute to furthering knowledge about pragmatic and representative changes to resources, programs, and policies surrounding disaster response, recovery, and mitigation.
Keywords: children and youth, disaster, environment, resilience
Procedia PDF Downloads 125
535 Innovations and Challenges: Multimodal Learning in Cybersecurity
Authors: Tarek Saadawi, Rosario Gennaro, Jonathan Akeley
Abstract:
There is rapidly growing demand for professionals to fill positions in cybersecurity, recognized as a national priority both by government agencies and the private sector. Cybersecurity is a very wide technical area which encompasses all measures that can be taken in an electronic system to prevent criminal or unauthorized use of data and resources. This requires defending computers, servers, networks, and their users from any kind of malicious attack. The need to address this challenge has been recognized globally but is particularly acute in the New York metropolitan area, home to some of the largest financial institutions in the world, which are prime targets of cyberattacks. In New York State alone, there are currently around 57,000 jobs in the cybersecurity industry, with more than 23,000 unfilled positions. The Cybersecurity Program at City College is a collaboration between the Departments of Computer Science and Electrical Engineering. In Fall 2020, The City College of New York matriculated its first students in the Cybersecurity Master of Science program. The program was designed to fill gaps in the previous offerings and evolved out of an established partnership with Facebook on cybersecurity education. City College has designed a program where courses, curricula, syllabi, materials, labs, etc., are developed in cooperation and coordination with industry whenever possible, ensuring that students graduating from the program will have the necessary background to segue seamlessly into industry jobs. The Cybersecurity Program has created multiple pathways for prospective students to obtain the necessary prerequisites to apply, in order to build a more diverse student population. The program can also be pursued on a part-time basis, which makes it available to working professionals. Since City College’s Cybersecurity M.S.
program was established to equip students with the advanced technical skills needed to thrive in a high-demand, rapidly evolving field, it incorporates a range of pedagogical formats. From its outset, the Cybersecurity Program has sought to provide both the theoretical foundations necessary for meaningful work in the field and labs and applied learning projects aligned with the skillsets required by industry. These efforts have involved collaboration with outside organizations and with visiting professors designing new courses on topics such as Adversarial AI, Data Privacy, Secure Cloud Computing, and Blockchain. Although the program was initially designed with a single asynchronous course in the curriculum, with the rest of the classes to be offered in person, the advent of the COVID-19 pandemic necessitated a move to fully online learning. The shift to online learning has provided lessons for future development, offering examples of some inherent advantages of the medium in addition to its drawbacks. This talk will address the structure of the newly implemented Cybersecurity Master’s Program and discuss the innovations, challenges, and possible future directions.
Keywords: cybersecurity, New York, City College, graduate degree, master of science
Procedia PDF Downloads 148
534 The Development of the Psychosomatic Nursing Model from an Evidence-Based Action Research on Proactive Mental Health Care for Medical Inpatients
Authors: Chia-Yi Wu, Jung-Chen Chang, Wen-Yu Hu, Ming-Been Lee
Abstract:
In nearly all physical health conditions, suicide risk is increased compared to that of healthy people, even after adjustment for age, gender, mental health, and substance use diagnoses. In order to highlight the importance of suicide risk assessment for inpatients, and of early identification of and engagement with inpatients’ mental health problems, a study was designed aiming at developing a comprehensive psychosomatic nursing engagement (PSNE) model with standardized operating procedures informing how nurses communicate with, assess, and engage inpatients with emotional distress. The purpose of the study was to promote the gatekeeping role of clinical nurses in performing brief assessments and interventions to detect depression and anxiety symptoms among inpatients, particularly in non-psychiatric wards. The study will be carried out in a 2000-bed university hospital in Northern Taiwan in 2019. We will select a ward for a trial and develop feasible procedures and an in-job training course for the nurses to offer mental health care, which will also be validated through a professional consensus meeting. The significance of the study includes the following three points: (1) The study targets an important but less-researched area, the PSNE model, in the cultural background of Taiwan, where hospital service is highly accessible, but mental health and suicide risk assessment are hardly ever provided by non-psychiatric healthcare personnel. (2) The PSNE approach could be efficient and cost-effective in identifying suicide risks at an early stage, preventing inpatient suicide or reducing future suicide risk through early treatment of mental illnesses among the high-risk group of hospitalized patients, whose suicide risk is more than three times that of the general population.
(3) Utilizing a brief tool and its established app (‘The Five-item Brief Symptom Rating Scale, BSRS-5’), we will devise the standardized procedure of PSNE and the referral steps in collaboration with the medical teams across the study hospital. New technological tools nested within nursing assessment/intervention will concurrently be developed to facilitate better care quality. The major outcome measurements will include tools for early identification of common mental distress and suicide risks, i.e., the BSRS-5, the revised BSRS-5, and the 9-item Concise Mental Health Checklist (CMHC-9). The main purpose of using the CMHC-9 in clinical suicide risk assessment is to provide care and build up a therapeutic relationship with the client, so it will also be used in nursing training highlighting the skills of supportive care. Through early identification of the inpatients’ depressive symptoms or other mental health care needs such as insomnia, anxiety, or suicide risk, the majority of the nursing clinicians would be able to engage in critical interventions that alleviate the inpatients’ suffering from mental health problems, given a feasible nursing input.
Keywords: mental health care, clinical outcome improvement, clinical nurses, suicide prevention, psychosomatic nursing
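To make the screening step concrete, a triage rule built on the BSRS-5 might look like the sketch below. The five items are each rated 0–4 for a 0–20 total; the cutoffs used (6+, 10+, 15+) are those commonly reported for the scale in the literature, and the referral actions are illustrative assumptions rather than the study's actual protocol:

```python
def bsrs5_triage(items):
    """items: five symptom ratings (insomnia, anxiety, hostility,
    interpersonal sensitivity/inferiority, depression), each scored
    0 (none) to 4 (severe). Returns the total and a triage label."""
    assert len(items) == 5 and all(0 <= i <= 4 for i in items)
    total = sum(items)
    if total >= 15:
        return total, "severe distress - urgent psychiatric referral"
    if total >= 10:
        return total, "moderate distress - mental health consultation"
    if total >= 6:
        return total, "mild distress - supportive engagement, re-screen"
    return total, "within normal range"
```

A rule of this shape is what would sit behind an app-based assessment, letting ward nurses act on a five-item score without psychiatric training.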
Procedia PDF Downloads 109
533 Decorative Plant Motifs in Traditional Art and Craft Practices: Pedagogical Perspectives
Authors: Geetanjali Sachdev
Abstract:
This paper explores the decorative uses of plant motifs and symbols in traditional Indian art and craft practices in order to assess their pedagogical significance within the context of plant study in higher education in art and design. It examines existing scholarship on decoration and plants in Indian art and craft practices. The impulse to elaborate upon an existing form or surface is an intrinsic part of many traditional Indian art and craft practices, where a deeply ingrained love for decoration exists. Indian craftsmen use an array of motifs and embellishments to adorn surfaces across a range of practices, and decoration is widely seen in textiles, jewellery, temple sculptures, vehicular art, architecture, and various other art, craft, and design traditions. Ornamentation in Indian cultural traditions has been attributed to religious and spiritual influences in the lives of India’s art and craft practitioners. Through adornment, surfaces and objects were ritually transformed to function both spiritually and physically. Decorative formations facilitate spiritual development and attune our minds to concepts that support contemplation. Within practices of ornamentation and adornment, there is extensive use of botanical motifs, as Indian art and craft practitioners have historically been drawn towards nature as a source of inspiration. This is due to the centrality of agriculture in the lives of Indian people as well as in religion, where plants play a key role in religious rituals and festivals. Plant representations thus abound in two-dimensional and three-dimensional surface designs and patterns, where the motifs range from realistic, highly stylized, and curvilinear forms to geometric and abstract symbols. Existing scholarship reveals that these botanical embellishments reference a wide range of plants, including native and non-indigenous plants as well as imaginary and mythical ones.
Structural components of plant anatomy, such as leaves, stems, branches, buds, and flowers, are part of the repertoire of design motifs used, as are plant forms indicating different stages of growth, such as flowering buds and flowers in full bloom. Symmetry is a characteristic feature, and within the decorative register of various practices, plants form part of border zones and bands, connect corners, appear in all-over patterns, are used as singular motifs and floral sprays on panels, and feature as elements within ornamental scenes. The results of the research indicate that decoration as a mode of inquiry into plants can serve as a platform to learn about local and global biodiversity and plant anatomy, and to develop artistic modes of thinking symbolically, metaphorically, imaginatively, and relationally about the plant world. The conclusion is drawn that engaging with ornamental modes of plant representation in traditional Indian art and craft practices is pedagogically significant for two reasons. Decoration as a mode of engagement cultivates both botanical and artistic understandings of plants, and it links learners with the indigenous art and craft traditions of their own culture.
Keywords: art and design pedagogy, decoration, plant motifs, traditional art and craft
Procedia PDF Downloads 86
532 Monitoring of Educational Achievements of Kazakhstani 4th and 9th Graders
Authors: Madina Tynybayeva, Sanya Zhumazhanova, Saltanat Kozhakhmetova, Merey Mussabayeva
Abstract:
One of the leading indicators of education quality is the level of students’ educational achievements. The processes of modernization of the Kazakhstani education system have predetermined the need to improve the national system of assessing the quality of education. The results of assessment greatly contribute to addressing questions about the current state of the educational system in the country. The monitoring of students’ educational achievements (MEAS) is the systematic measurement of the quality of education for compliance with the state obligatory standard of Kazakhstan. This systematic measurement is independent of educational organizations and approved by order of the Minister of Education and Science of Kazakhstan. The MEAS was conducted in the regions of Kazakhstan for the first time in April 2022 by the National Testing Centre. The measurement has no legal consequences either for students or for educational organizations. Students’ achievements were measured in three subject areas: reading, mathematics, and science literacy. 105 thousand students from 1,436 schools of Kazakhstan took part in the testing. The monitoring was accompanied by a survey of students, teachers, and school leaders, with the goal of identifying which contextual factors affect learning outcomes. The testing was carried out in a computer-based format. The test tasks of MEAS are ranked according to three levels of difficulty: basic, medium, and high. Fourth graders were asked to complete 30 closed-type tasks. The average score was 21 points out of 30, meaning 70% of tasks were successfully completed. The total number of test tasks for 9th grade students was 75 questions. The results of ninth graders are comparatively lower, with a success rate of 63%. MEAS participants did not reveal a statistically significant gap in results in terms of the language of instruction, territorial status, or type of school.
The trend of a narrowing gap in these indicators is also noted in recent international studies conducted across the country, in particular PISA for Schools in Kazakhstan. However, there is a regional gap in MEAS performance. The difference between the highest and lowest regional scores was 11% in task-completion success in the 4th grade and 14% in the 9th grade. The results of the 4th grade students in reading, mathematics, and science literacy are 71.5%, 70%, and 66.9%, respectively. The results of ninth graders in reading, mathematics, and science literacy are 69.6%, 54%, and 60.8%, respectively. The surveys revealed that the educational achievements of students are considerably influenced by such factors as the subject competences of teachers, as well as the school climate and the motivation of students. Thus, the results of MEAS indicate the need for an integrated approach to improving the quality of education. In particular, a combination is required of improving the content of curricula and textbooks, internal and external assessment of students’ educational achievements, educational programmes for pedagogical specialties, and advanced training courses.
Keywords: assessment, secondary school, monitoring, functional literacy, Kazakhstan
Procedia PDF Downloads 108
531 Vision and Challenges of Developing VR-Based Digital Anatomy Learning Platforms and a Solution Set for 3D Model Marking
Authors: Gizem Kayar, Ramazan Bakir, M. Ilkay Koşar, Ceren U. Gencer, Alperen Ayyildiz
Abstract:
Anatomy classes are crucial to the general education of medical students, yet learning anatomy is quite challenging and requires the memorization of thousands of structures. In traditional teaching methods, learning materials are still based on books, anatomy mannequins, or videos. This results in many important structures being forgotten after several years. However, more interactive teaching methods like virtual reality, augmented reality, gamification, and motion sensors are becoming more popular, since such methods ease the way we learn and keep the material in mind for longer. During our study, we designed a virtual reality based digital head anatomy platform to investigate whether a fully interactive anatomy platform is effective for learning anatomy, and to understand the level of teaching and learning optimization. The head is one of the most complicated human anatomical structures, with thousands of tiny, unique components. This makes head anatomy one of the most difficult parts to understand during class sessions. Therefore, we developed a fully interactive digital tool with 3D model marking, quiz structures, 2D/3D puzzle structures, and VR support, so as to integrate the power of VR and gamification. The project has been developed in the Unity game engine with an HTC Vive Cosmos VR headset. The head anatomy 3D model was selected with full skeletal, muscular, integumentary, head, teeth, lymph, and vein systems. The biggest issue during development was the complexity of our model and marking it in the 3D world coordinate system. 3D model marking requires access to each unique structure in the aforementioned subsystems, which means hundreds of markings need to be made. Some parts of our 3D head model were monolithic, which is why we worked on dividing such parts into subparts, a very time-consuming task. In order to subdivide monolithic parts, one must use an external modeling tool.
However, such tools generally come with high learning curves, and seamless division is not ensured. The second option was to attach tiny colliders to all unique items for mouse interaction. However, outer colliders that cover inner trigger colliders cause overlapping, and these colliders repel each other. The third option was raycasting. However, due to its view-based nature, raycasting has some inherent problems: as the model rotates, the view direction changes very frequently, and directional computations become even harder. This is why we finally settled on the local coordinate system. Taking the pivot point of the model (at the back of the nose) into consideration, each substructure is marked with its own local coordinate with respect to the pivot. After converting the mouse position to the world position and checking its relation to the corresponding structure’s local coordinate, we were able to mark all points correctly. The advantage of this method is its applicability and accuracy for all types of monolithic anatomical structures.
Keywords: anatomy, e-learning, virtual reality, 3D model marking
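The pivot-relative lookup described above can be sketched in a few lines: a click in world space is mapped back into the model's fixed local frame with the inverse model transform, so the lookup is unaffected by how the model has been rotated or moved. The structure names, coordinates, and tolerance below are illustrative assumptions, not the project's actual Unity implementation:

```python
import numpy as np

def world_to_local(p_world, rotation, pivot_world):
    """Invert p_world = R @ p_local + pivot: undo the model's current
    rotation and translation so the click lands in the fixed local
    frame whose origin is the pivot (back of the nose)."""
    return rotation.T @ (np.asarray(p_world) - np.asarray(pivot_world))

def pick_structure(p_world, rotation, pivot_world, structures, tol=0.05):
    """Return the named structure nearest the click, if within tolerance."""
    p_local = world_to_local(p_world, rotation, pivot_world)
    name, ref = min(structures.items(),
                    key=lambda kv: np.linalg.norm(p_local - np.asarray(kv[1])))
    if np.linalg.norm(p_local - np.asarray(ref)) <= tol:
        return name
    return None
```

Because each structure's reference coordinate is stored once in the local frame, no per-structure colliders or view-dependent rays are needed.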
Procedia PDF Downloads 100
530 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures
Authors: Francesca Marsili
Abstract:
The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods are based on an exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, representing an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes’ theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and on qualitative judgments based on the engineer’s past experience; then, the prior model is updated with the results of investigations carried out on the considered structure, such as material testing and the determination of actions and structural properties. The application of Bayesian statistics raises two different kinds of problems: 1. The results of the updating depend on the engineer’s previous experience; 2. The updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation; furthermore, if the considered structure is an ancient building, destructive tests could compromise its cultural value and therefore should be avoided. In order to solve these problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can be useful for automating the modeling of variables and for updating material parameters without performing destructive tests. Among these, one that attracts particular attention in relation to the object of this study is Case-Based Reasoning (CBR).
In this application, cases will be represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case will then be composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the considered structure. A CBR system is a good candidate for automating the modelling of variables because: 1. Engineers already draw an estimate of the material properties from the experience collected during the assessment of similar structures, or from similar cases collected in the literature or in databases; 2. Material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. The system will provide the user with a reliable probabilistic description of the variables involved in the assessment, which will also serve as a tool in support of the engineer’s qualitative judgments. Automated modeling of variables can help in spreading the probabilistic reliability assessment of existing buildings in common engineering practice, and in targeting the best intervention and further tests on the structure; CBR represents a technique which may help to achieve this.
Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures
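The Bayesian updating step that each stored case would encapsulate can be illustrated with the standard conjugate normal model; the prior values, test data, and known observation scatter below are illustrative assumptions, not values from any code or standard:

```python
def update_normal_prior(mu0, sigma0, data, sigma_obs):
    """Conjugate normal-normal update with known observation std:
    combine a prior N(mu0, sigma0^2) on a material parameter with
    test results to obtain the posterior mean and std."""
    n = len(data)
    prec_prior = 1.0 / sigma0**2        # prior precision
    prec_data = n / sigma_obs**2        # precision contributed by the tests
    mu_post = (prec_prior * mu0 + prec_data * (sum(data) / n)) / (prec_prior + prec_data)
    sigma_post = (prec_prior + prec_data) ** -0.5
    return mu_post, sigma_post

# Example: a design-stage prior on concrete compressive strength,
# updated with three core tests (all values illustrative, in MPa).
mu, sigma = update_normal_prior(mu0=30.0, sigma0=5.0,
                                data=[34.0, 36.0, 35.0], sigma_obs=3.0)
```

The posterior pair (mu, sigma) is exactly what a case in the proposed CBR system would store alongside the qualitative material description.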
Procedia PDF Downloads 339
529 A Hybrid Film: NiFe₂O₄ Nanoparticles in Poly-3-Hydroxybutyrate as an Antibacterial Agent
Authors: Karen L. Rincon-Granados, América R. Vázquez-Olmos, Adriana-Patricia Rodríguez-Hernández, Gina Prado-Prone, Margarita Rivera, Roberto Y. Sato-Berrú
Abstract:
In this work, a hybrid film based on poly-3-hydroxybutyrate (P3HB) and nickel ferrite (NiFe₂O₄) nanoparticles (NPs) was obtained by a simple and reproducible methodology in order to study its antibacterial and cytotoxic properties. The motivation for this research is current antimicrobial resistance (AMR), a threat to human health and development worldwide. AMR is caused by the emergence of bacterial strains resistant to the traditional antibiotics used as treatment. Hence the need to investigate new alternatives for preventing and treating bacterial infections. In this sense, metal oxide NPs have aroused great interest due to their unique physicochemical properties. However, their use is limited by their nanostructured nature: chemical and physical synthesis methods commonly yield them as powders or colloidal dispersions. Incorporating nanostructured materials into polymer matrices therefore yields hybrid materials that make it possible to disinfect and prevent the spread of bacteria on various surfaces. Accordingly, this work presents the synthesis and the study of the antibacterial properties of the P3HB@NiFe₂O₄ hybrid film as a potential material to inhibit bacterial growth. The NiFe₂O₄ NPs were previously synthesized by a mechanochemical method. The P3HB and P3HB@NiFe₂O₄ films were obtained by the solvent casting method. The films were characterized by X-ray diffraction (XRD), Raman scattering, and scanning electron microscopy (SEM). The XRD pattern showed that the NiFe₂O₄ NPs were incorporated into the P3HB polymer matrix and retained their nanometric sizes. Energy-dispersive X-ray spectroscopy (EDS) showed that the NPs are homogeneously distributed in the film. The bactericidal effect of the films obtained was evaluated in vitro using the broth surface method against two opportunistic and nosocomial pathogens, Staphylococcus aureus and Pseudomonas aeruginosa.
The bacterial growth results showed that the P3HB@NiFe₂O₄ hybrid film inhibited growth by 97% for S. aureus and 96% for P. aeruginosa. Surprisingly, the plain P3HB film inhibited both bacterial strains by around 90%. The cytotoxicity of the NiFe₂O₄ NPs, the P3HB@NiFe₂O₄ hybrid film, and the P3HB film was evaluated using human skin cells, keratinocytes and fibroblasts, finding that the NPs are biocompatible. The P3HB and hybrid films, however, are cytotoxic, demonstrating that although P3HB is known and reported as a biocompatible polymer, under our working conditions P3HB was cytotoxic, and its bactericidal effect could be related to this activity. Its films are bactericidal but cytotoxic to keratinocytes and fibroblasts, the first barrier of human skin. Despite this, the P3HB@NiFe₂O₄ hybrid film shows a synergistic bactericidal effect between P3HB and the NPs, increasing bacterial inhibition; in addition, the NPs decrease the cytotoxicity of P3HB toward keratinocytes. The methodology used in this work was thus successful in producing hybrid films with antibacterial activity. However, a future challenge is to find combinations of NPs and P3HB that exploit their bactericidal properties without compromising biocompatibility.
Keywords: poly-3-hydroxybutyrate, nanoparticles, hybrid film, antibacterial
Procedia PDF Downloads 84
528 Performance Optimization of Polymer Materials Thanks to Sol-Gel Chemistry for Fuel Cells
Authors: Gondrexon, Gonon, Mendil-Jakani, Mareau
Abstract:
Proton Exchange Membrane Fuel Cells (PEMFCs) are promising devices for converting hydrogen into electricity. A PEMFC is built around a Membrane Electrode Assembly (MEA) composed of a Proton Exchange Membrane (PEM) sandwiched between two catalytic layers. Nowadays, specific performance targets must be met to ensure the long-term expansion of this technology. The polymers currently used (perfluorinated ionomers such as Nafion®) are unsuitable for the high-temperature range, where they lose their mechanical properties. To overcome this issue, sulfonated polyaromatic polymers appear to be a good alternative, since they have very good thermomechanical properties. However, their proton conductivity and chemical stability (oxidative resistance to the H₂O₂ formed during fuel cell (FC) operation) are very low. Our team patented an original concept of hybrid membranes able to fulfil the specific requirements of PEMFCs. The idea is to improve a commercial polymer membrane via an easy and processable stabilization based on sol-gel (SG) chemistry with judiciously embedded chemical functions. This strategy thus breaks with traditional approaches (design of new copolymers, use of inorganic charges/additives). In 2020, we presented the elaboration and functional properties of a 1st generation of hybrid membranes with promising performance and durability. The latter was made by self-condensing a SG phase of 3-(mercaptopropyl)trimethoxysilane (MPTMS) inside a commercial sPEEK host membrane. The successful in-situ condensation of the MPTMS was demonstrated by mass-uptake measurements, FTIR spectroscopy (presence of aliphatic C–H bands), and ²⁹Si solid-state NMR (T2 and T3 signals of the self-condensation products). The ability of the SG phase to prevent oxidative degradation of the sPEEK phase (thanks to its thiol functions) was then proved by accelerated H₂O₂ aging tests and FC operating tests. 
A 2nd generation based on thiourea-functionalized SG precursors (named HTU and TTU) was subsequently developed. By analysing in depth the morphologies of these different hybrids in direct space (AFM/SEM/TEM) and reciprocal space (SANS/SAXS/WAXS), we showed that both the morphology of the SG phase and its localisation within the host have a strong impact on the functional properties of the PEM; this relationship also depends on the embedded chemical function. The hybrids obtained showed very good chemical resistance in aging tests (exposure to H₂O₂) compared with the commercial sPEEK. However, the chemical functions used are "sacrificial" and cannot react indefinitely with H₂O₂. We are therefore now working on a 3rd generation combining sacrificial and regenerative chemical functions, which is expected to inhibit the chemical aging of sPEEK more efficiently. With this work, we aim to reach a predictive understanding of the key parameters governing the final properties.
Keywords: fuel cells, ionomers, membranes, sPEEK, chemical stability
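The T2 and T3 ²⁹Si NMR signals mentioned above are commonly used to quantify how far a trifunctional silane such as MPTMS has condensed. A minimal sketch of that standard calculation (the formula and the signal areas are illustrative assumptions, not values from the abstract):

```python
# Sketch (not from the paper): estimating the degree of condensation of a
# trifunctional silane (e.g. MPTMS) from the relative areas of the T0..T3
# signals in a 29Si solid-state NMR spectrum. Tn denotes a silicon atom
# bearing n siloxane (Si-O-Si) bridges out of a maximum of three.

def degree_of_condensation(t0: float, t1: float, t2: float, t3: float) -> float:
    """Fraction of the three condensable Si-O groups that have reacted."""
    total = t0 + t1 + t2 + t3
    return (t1 + 2 * t2 + 3 * t3) / (3 * total)

# Hypothetical signal areas dominated by T2 and T3, consistent with a
# well self-condensed SG phase
dc = degree_of_condensation(t0=0.0, t1=0.1, t2=0.5, t3=0.4)
print(f"Degree of condensation: {dc:.0%}")  # Degree of condensation: 77%
```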
Procedia PDF Downloads 72
527 Production and Characterization of Biochars from Torrefaction of Biomass
Authors: Serdar Yaman, Hanzade Haykiri-Acma
Abstract:
Biomass is a CO₂-neutral fuel that is renewable and sustainable and has a huge global potential. Efficient use of biomass in power generation and in the production of biomass-based biofuels can mitigate greenhouse gas (GHG) emissions and reduce dependency on fossil fuels. Biomass energy use also has other beneficial effects, such as employment creation and pollutant reduction. However, most biomass materials cannot compete with fossil fuels in terms of energy content: high moisture content and high volatile matter yield make biomass a low-calorific fuel, which is a significant disadvantage relative to fossil fuels. Besides, the density of biomass is generally low, which complicates transportation and storage. These drawbacks can be overcome by thermal pretreatments that upgrade the fuel properties of biomass. Torrefaction is such a process, in which biomass is heated up to 300ºC under non-oxidizing conditions to avoid burning the material. The treated biomass, called biochar, has considerably lower contents of moisture, volatile matter, and oxygen than the parent biomass. Accordingly, the carbon content and calorific value of biochar increase to levels comparable with those of coal. Moreover, the hydrophilic nature of untreated biomass, which leads to decay of its structure, is mostly eliminated, and the surface of biochar becomes hydrophobic upon torrefaction. In order to investigate the effectiveness of torrefaction, several biomass species were chosen: olive milling residue (OMR), Rhododendron (a small shrubby tree with bell-shaped flowers), and ash tree (a timber tree). The fuel properties of these biomasses were determined by proximate and ultimate analyses as well as higher heating value (HHV) measurements. For this, samples were first chopped and ground to a particle size below 250 µm. 
Then, the samples were torrefied in a horizontal tube furnace by heating from ambient temperature up to 200, 250, or 300ºC at a heating rate of 10ºC/min. The resulting biochars were tested by the same methods applied to the parent biomass species, and the improvement in fuel properties was interpreted. For OMR, increasing the torrefaction temperature led to a steady increase in HHV, and the highest HHV (6065 kcal/kg) was obtained at 300ºC. In contrast, 250ºC proved optimal for Rhododendron and ash tree, since torrefaction at 300ºC had a detrimental effect on their HHV. In all cases, carbon contents increased and oxygen contents decreased. The burning characteristics of the biochars were also studied by thermal analysis: using a TA Instruments SDT Q600 analyzer, the thermogravimetric analysis (TGA), derivative thermogravimetry (DTG), differential scanning calorimetry (DSC), and differential thermal analysis (DTA) curves were compared and interpreted. It was concluded that torrefaction is an efficient method to upgrade the fuel properties of biomass and that the resulting biochars have superior characteristics compared with the parent biomasses.
Keywords: biochar, biomass, fuel upgrade, torrefaction
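Two quick calculations on the figures reported above: converting the highest HHV obtained (6065 kcal/kg for OMR at 300ºC) to SI units, and the heating time implied by the 10ºC/min ramp (the 25ºC ambient starting temperature is our assumption):

```python
# Arithmetic on the reported torrefaction figures. The HHV value and the
# heating rate come from the abstract; the 25 C ambient start is assumed.

KCAL_TO_KJ = 4.184  # thermochemical kilocalorie in kilojoules

hhv_kcal = 6065.0                          # OMR biochar at 300 C, kcal/kg
hhv_mj = hhv_kcal * KCAL_TO_KJ / 1000.0    # convert to MJ/kg
print(f"HHV of OMR biochar: {hhv_mj:.1f} MJ/kg")  # HHV of OMR biochar: 25.4 MJ/kg

ambient_c = 25.0        # assumed ambient temperature, C
rate_c_per_min = 10.0   # heating rate stated in the abstract
for target_c in (200.0, 250.0, 300.0):
    minutes = (target_c - ambient_c) / rate_c_per_min
    print(f"Ramp to {target_c:.0f} C takes {minutes:.1f} min")
```

At the stated rate, even the highest-temperature ramp takes under half an hour, which is consistent with torrefaction being described as a mild pretreatment.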
Procedia PDF Downloads 374