Search results for: knowledge services industry
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14741

281 Isoflavonoid Dynamic Variation in Red Clover Genotypes

Authors: Andrés Quiroz, Emilio Hormazábal, Ana Mutis, Fernando Ortega, Loreto Méndez, Leonardo Parra

Abstract:

Red clover root borer, Hylastinus obscurus Marsham (Coleoptera: Curculionidae), is the main insect pest associated with red clover, Trifolium pratense L. An average of 1.5 H. obscurus per plant can cause a 5.5% reduction in forage yield in two- to three-year-old pastures. Moreover, insect attack can reach 70% to 100% of the plants. To our knowledge, there is no chemical strategy for controlling this pest. Therefore, alternative strategies for controlling H. obscurus are a high priority for red clover producers. One of these alternatives is the study of secondary metabolites involved in the intrinsic chemical defenses developed by plants, such as isoflavonoids. The isoflavonoids formononetin and daidzein have elicited antifeedant and phagostimulant effects on H. obscurus, respectively. However, the dynamic variation of these isoflavonoids under field conditions is not known. The main objective of this work was to evaluate the variation of the antifeedant isoflavonoid formononetin, the phagostimulant isoflavonoid daidzein, and their respective glycosides over time in different ecotypes of red clover. Fourteen red clover ecotypes (8 cultivars and 6 experimental lines) were collected at INIA-Carillanca (La Araucanía, Chile). These plants were established in October 2015 under irrigated conditions. The cultivars were distributed in a randomized complete block design with three replicates. Whole plants were sampled four times: 15th October 2016, 12th December 2016, 27th January 2017 and 16th March 2017, with a sufficient amount of soil to avoid root damage. A polar isoflavonoid fraction was obtained from 20 mg of lyophilized root tissue extracted with 2 mL of 80% MeOH for 16 h using an orbital shaker in the dark at room temperature. Afterwards, an aliquot of 1.4 mL of the supernatant was evaporated, and the residue was resuspended in 300 µL of 45% MeOH. The identification and quantification of isoflavonoids in the root extracts were performed by injecting 20 µL into a Shimadzu HPLC equipped with a C-18 column. The sample was eluted with a mobile phase composed of AcOH:H₂O (1:9 v/v) as solvent A and CH₃CN as solvent B. The detection was performed at 260 nm. The results showed that the amount of aglycones was higher than that of the respective glycosides. This result is consistent with the flavonoid biosynthetic pathway, in which glycoside formation occurs downstream of aglycone biosynthesis. The amount of formononetin was higher than that of daidzein. In roots, where H. obscurus spends most of its life cycle, the highest content of formononetin was found in G 27, Pawera, Sabtoron High, Redqueli-INIA and Superqueli-INIA cvs. (2.1, 1.8, 1.8, 1.6 and 1.0 mg g⁻¹, respectively); the lowest amounts of daidzein were found in Superqueli-INIA (0.32 mg g⁻¹) and in the experimental line Sel Syn Int4 (0.24 mg g⁻¹), the latter also showing a high content of formononetin (0.9 mg g⁻¹). This information, associated with cultural practices, could help farmers and breeders to reduce H. obscurus in grassland by selecting ecotypes with a high content of formononetin and a low amount of daidzein in the roots of red clover plants. Acknowledgements: FONDECYT 1141245 and 11130715.
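As an editorial illustration of the quantification step described above (external-standard calibration of HPLC peak areas, back-calculated through the 1.4 mL aliquot and 300 µL resuspension to mg g⁻¹ of dry root), the minimal Python sketch below uses invented peak areas and standard concentrations; it is not the authors' processing script.

```python
import numpy as np

# Hypothetical external-standard calibration for one isoflavonoid (e.g., formononetin).
# All peak areas and concentrations below are illustrative placeholders, not the study's data.
std_conc = np.array([0.01, 0.05, 0.10, 0.25, 0.50])         # standard concentrations, mg/mL
std_area = np.array([1.2e4, 6.1e4, 1.19e5, 3.0e5, 6.05e5])  # HPLC peak areas at 260 nm

slope, intercept = np.polyfit(std_conc, std_area, 1)        # linear calibration: area = slope*conc + intercept

def mg_per_g_dry_root(peak_area, resusp_ml=0.3, tissue_mg=20.0, aliquot_ml=1.4, extract_ml=2.0):
    """Back-calculate mg of analyte per g of lyophilized root from a sample peak area."""
    conc = (peak_area - intercept) / slope                   # mg/mL in the injected (resuspended) solution
    mg_in_aliquot = conc * resusp_ml                         # analyte mass recovered from the 1.4 mL aliquot
    tissue_g = (tissue_mg / 1000.0) * (aliquot_ml / extract_ml)  # tissue fraction represented by the aliquot
    return mg_in_aliquot / tissue_g

print(f"{mg_per_g_dry_root(2.4e5):.2f} mg/g dry root")
```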

Keywords: daidzein, formononetin, isoflavonoid glycosides, Trifolium pratense

Procedia PDF Downloads 191
280 Selling Electric Vehicles: Experiences from Car Salesmen in Sweden

Authors: Jens Hagman, Jenny Janhager Stier, Ellen Olausson, Anne Y. Faxer, Ana Magazinius

Abstract:

Sweden has the second highest electric vehicle (plug-in hybrid and battery electric vehicle) sales per capita in Europe, but relative to sales of internal combustion engine vehicles, electric vehicle sales are still minuscule (< 4%). Much research effort has been placed on various technical and user-focused barriers and enablers for the adoption of electric vehicles. Less effort has been placed on investigating the retail (dealership-customer) sales process of vehicles in general and electric vehicles in particular. Arguably, no one ought to be better informed about the needs and desires of potential electric vehicle buyers than car salesmen, owing to their daily encounters with customers at the dealership. The aim of this paper is to explore the conditions of selling electric vehicles from a car salesman's perspective. This includes identifying barriers and enablers for electric vehicle sales originating from internal (dealership and brand) and external (customer, government) sources. In this interview study, five car brands (manufacturers) that sell both electric and internal combustion engine vehicles have been investigated. A total of 15 semi-structured interviews have been conducted (three per brand, in rural and urban settings and at different dealerships). Initial analysis reveals several barriers and enablers, experienced by car salesmen, which influence electric vehicle sales. Examples of barriers reported by car salesmen are: -Electric vehicles earn car salesmen less commission on average compared to internal combustion engine vehicles. -It takes more time to sell and deliver an electric vehicle than an internal combustion engine vehicle. -Current leasing contracts entail relatively low second-hand value estimations for electric vehicles and thus a high leasing fee, which negatively affects the attractiveness of electric vehicles for private consumers in particular. -The high purchasing price discourages many consumers from considering electric vehicles. -The level of education and knowledge about electric vehicles differs between car salesmen, which could affect their self-confidence in meeting well-prepared and question-prone electric vehicle buyers. Examples of identified enablers are: -Company car tax regulation promotes sales of electric vehicles; in particular, plug-in hybrid electric vehicles are sold extensively to companies (up to 95% of sales). -The low operating cost of electric vehicles, such as fuel and service, is an advantage when understood by consumers. -The drive performance of electric vehicles (quick, silent and fun to drive) is attractive to consumers. -Environmental aspects are considered important for certain consumer groups. -Fast technological improvements, such as increased range, are opening up a wider market for electric vehicles. -For one of the brands, attractive private lease campaigns have proved effective in promoting sales. This paper gives insights into an important but often overlooked aspect of the diffusion of electric vehicles (and durable products in general): the interaction between car salesmen and customers at the critical moment of acquisition, extracted through interviews with multiple car salesmen. The results illuminate untapped potential for sellers (salesmen, dealerships and brands) to mitigate sales barriers and strengthen sales enablers and thus become a more important actor in the electric vehicle diffusion process.

Keywords: customer barriers, electric vehicle promotion, sales of electric vehicles, interviews with car salesmen

Procedia PDF Downloads 204
279 Implementation of Green Deal Policies and Targets in Energy System Optimization Models: The TEMOA-Europe Case

Authors: Daniele Lerede, Gianvito Colucci, Matteo Nicoli, Laura Savoldi

Abstract:

The European Green Deal is the first internationally agreed set of measures to counteract climate change and environmental degradation. Besides the main target of reducing emissions by at least 55% by 2030, it sets the target of accompanying European countries through an energy transition to make the European Union a modern, resource-efficient, and competitive net-zero emissions economy by 2050, decoupling growth from the use of resources and ensuring a fair adaptation of all social categories to the transformation process. While the general framework for realizing the purposes of the Green Deal already dates back to 2019, strategies and policies keep being developed to cope with recent circumstances and achievements. However, general long-term measures like the Circular Economy Action Plan, the proposals to shift from fossil natural gas to renewable and low-carbon gases, in particular biomethane and hydrogen, and to end the sale of gasoline and diesel cars by 2035, will all have significant effects on the evolution of energy supply and demand across the next decades. The interactions between energy supply and demand over long-term time frames are usually assessed via energy system models to derive useful insights for policymaking and to address technological choices and research and development. TEMOA-Europe is a newly developed energy system optimization model instance based on the minimization of the total cost of the system under analysis, adopting a technologically integrated, detailed, and explicit formulation and considering the evolution of the system in partial equilibrium in competitive markets with perfect foresight. TEMOA-Europe is developed on the TEMOA platform, an open-source modeling framework entirely implemented in Python, therefore ensuring third-party verification even on large and complex models. TEMOA-Europe is based on a single-region representation of the European Union and EFTA countries on a time scale between 2005 and 2100, relying on a set of assumptions for socio-economic developments based on projections by the International Energy Outlook and a large technological dataset including 7 sectors: the upstream and power sectors for the production of all energy commodities and the end-use sectors, including industry, transport, residential, commercial and agriculture. TEMOA-Europe also includes an updated hydrogen module considering its production, storage, transportation, and utilization. Besides, it can rely on a wide set of innovative technologies, ranging from nuclear fusion and electricity plants equipped with CCS in the power sector to electrolysis-based steel production processes in the industrial sector – with a techno-economic characterization based on public literature – to produce insightful energy scenarios and especially to cope with the very long analyzed time scale. The aim of this work is to examine in detail the scheme of measures and policies for the realization of the purposes of the Green Deal and to transform them into a set of constraints and new socio-economic development pathways. Based on them, TEMOA-Europe will be used to produce and comparatively analyze scenarios to assess the consequences of Green Deal-related measures on the future evolution of the energy mix over the whole energy system in an economic optimization environment.
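To make the cost-minimization principle behind such models concrete, the following minimal sketch (not TEMOA code) solves a toy single-period linear program in Python, the language the TEMOA platform is written in. The three technologies, their costs, emission factors, capacities and the emissions cap are invented placeholders standing in for a Green Deal style constraint.

```python
from scipy.optimize import linprog

# Toy cost-minimization in the spirit of an energy system optimization model:
# choose annual generation from three stylized technologies to meet demand at
# minimum total cost, with an emissions cap standing in for a Green Deal style
# target. All numbers are placeholders in arbitrary consistent units.
techs = ["gas", "wind", "solar"]
cost = [40.0, 55.0, 60.0]          # cost per unit of energy
emis = [0.35, 0.0, 0.0]            # emissions per unit of energy
capacity = [500.0, 300.0, 250.0]   # maximum output per technology
demand = 600.0                     # energy demand to be served
cap = 100.0                        # total emissions allowed

# linprog minimizes c @ x subject to A_ub @ x <= b_ub and A_eq @ x == b_eq
res = linprog(c=cost,
              A_ub=[emis], b_ub=[cap],
              A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],
              bounds=[(0.0, ub) for ub in capacity])

for name, x in zip(techs, res.x):
    print(f"{name:>5}: {x:6.1f}")
print(f"minimum total cost: {res.fun:.1f}")
```

With these placeholder numbers the emissions cap actively displaces the cheapest but emitting technology, which is the basic mechanism by which a policy target reshapes the cost-optimal mix in a full-scale model.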

Keywords: European Green Deal, energy system optimization modeling, scenario analysis, TEMOA-Europe

Procedia PDF Downloads 83
278 Parameter Selection and Monitoring for Water-Powered Percussive Drilling in Green-Fields Mineral Exploration

Authors: S. J. Addinell, T. Richard, B. Evans

Abstract:

The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled tubing based greenfields mineral exploration drilling system utilising downhole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. The system has shown superior rates of penetration in water-rich hard rock formations at depths exceeding 500 meters. Several key challenges exist regarding the deployment and use of these bottom hole assemblies for mineral exploration, and this paper discusses some of the key technical challenges. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process is presented and shows a strong power law relationship for particle size distributions. Several percussive drilling parameters, such as RPM, applied fluid pressure and weight on bit, have been shown to influence the particle size distributions of the cuttings generated. This has a direct influence on other drilling parameters such as flow loop performance, cuttings dewatering, and solids control. Real-time, accurate knowledge of percussive system operating parameters will assist the driller in maximising the efficiency of the drilling process. The applied fluid flow, fluid pressure, and rock properties are known to influence the natural oscillating frequency of the percussive hammer, but this paper also shows that drill bit design, drill bit wear and the applied weight on bit can influence the oscillation frequency as well. Due to the changing drilling conditions and therefore changing operating parameters, real-time understanding of the natural operating frequency is paramount to achieving system optimisation. Several techniques to determine the oscillating frequency have been investigated and are presented. With a conventional top drive drilling rig, spectral analysis of applied fluid pressure, hydraulic feed force pressure, hold back pressure and drill string vibrations has shown the presence of the operating frequency of the bottom hole tooling. However, with a coiled tubing drilling rig that uses a positive displacement downhole motor to provide drill bit rotation, these signals are not available for interrogation at the surface, and another method must therefore be considered. The investigation and analysis of ground vibrations using geophone sensors, similar to seismic-while-drilling techniques, have indicated the presence of the natural oscillating frequency of the percussive hammer. This method is shown to provide a robust technique for the determination of the downhole percussive oscillation frequency when used with a coiled tubing drill rig.
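A minimal sketch of the spectral-analysis step described above: estimating the hammer's dominant oscillation frequency from a (here synthetic) geophone trace via an FFT amplitude spectrum. The 35 Hz tone, noise level and sampling rate are assumptions for illustration only, not values from the field program.

```python
import numpy as np

# Estimate the dominant oscillation frequency of a percussive hammer from a
# surface geophone trace using an FFT amplitude spectrum. The synthetic signal
# below stands in for field data.
fs = 1000.0                           # sampling rate, Hz (assumed)
t = np.arange(0, 10.0, 1.0 / fs)      # 10 s record
signal = np.sin(2 * np.pi * 35.0 * t) + 0.8 * np.random.randn(t.size)

window = np.hanning(t.size)           # taper to reduce spectral leakage
spectrum = np.abs(np.fft.rfft(signal * window))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

# Ignore the DC/very-low-frequency band (e.g., rig and pump noise) before picking the peak.
mask = freqs > 5.0
dominant = freqs[mask][np.argmax(spectrum[mask])]
print(f"Estimated percussive oscillation frequency: {dominant:.1f} Hz")
```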

Keywords: cuttings characterization, drilling optimization, oscillation frequency, percussive drilling, spectral analysis

Procedia PDF Downloads 208
277 The Bidirectional Effect between Parental Burnout and the Child’s Internalized and/or Externalized Behaviors

Authors: Aline Woine, Moïra Mikolajczak, Virginie Dardier, Isabelle Roskam

Abstract:

Background information: Becoming a parent is said to be the happiest event one can ever experience in one's life. This popular (and almost absolute) truth, which no reasonable and decent human being would ever dare question on pain of being singled out as a bad parent, contrasts with the nuances that reality offers. Indeed, while many parents do thrive in their parenting role, some others falter and become progressively overwhelmed by their parenting role, ineluctably caught in a spiral of exhaustion. Parental burnout (henceforth PB) sets in when parental demands (stressors) exceed parental resources. While it is now generally acknowledged that PB affects the parent's behavior in terms of neglect and violence toward their offspring, little is known about the impact that the syndrome might have on the children's internalized (anxious and depressive symptoms, somatic complaints, etc.) and/or externalized (irritability, violence, aggressiveness, conduct disorder, oppositional disorder, etc.) behaviors. Furthermore, at the time of writing and to the best of our knowledge, no research has yet tested the reverse effect, namely, that of the child's internalized and/or externalized behaviors on the onset and/or maintenance of parental burnout symptoms. Goals and hypotheses: The present pioneering research proposes to fill an important gap in the existing literature on PB by investigating the bidirectional effect between PB and the child's internalized and/or externalized behaviors. Relying on a cross-lagged longitudinal study with three waves of data collection (4 months apart), our study tests a transactional model with bidirectional and recursive relations between the observed variables at the three waves, as well as autoregressive paths and cross-sectional correlations. Methods: As we write this, wave-two data are being collected via Qualtrics, and we expect a final sample of about 600 participants composed of French-speaking (snowball sample) and English-speaking (Prolific sample) parents. Structural equation modeling is employed using Stata version 17. In order to retain as much statistical power as possible, we use all available data and therefore apply maximum likelihood with missing values (mlmv) as the estimation method to compute the parameter estimates. To limit (insofar as possible) shared method variance bias in the evaluation of the child's behavior, the study relies on a multi-informant evaluation approach. Expected results: We expect our three-wave longitudinal study to show that PB symptoms (measured at T1) raise the occurrence/intensity of the child's externalized and/or internalized behaviors (measured at T2 and T3). We further expect the child's occurrence/intensity of externalized and/or internalized behaviors (measured at T1) to augment the risk of PB (measured at T2 and T3). Conclusion: Should our hypotheses be confirmed, our results will make an important contribution to the understanding of both PB and children's behavioral issues, thereby opening interesting theoretical and clinical avenues.
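The authors fit a full cross-lagged SEM in Stata 17 with mlmv estimation. Purely to illustrate the bidirectional logic of such a design, the sketch below runs a simplified two-wave cross-lagged analysis on simulated data using ordinary least squares in Python; the variable names (pb, ext) are hypothetical, and this is not the study's model or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simplified two-wave cross-lagged illustration (NOT the authors' Stata SEM):
# each outcome at T2 is regressed on its own T1 value (autoregressive path)
# plus the other construct at T1 (cross-lagged path). Data are simulated.
rng = np.random.default_rng(0)
n = 600
pb_t1 = rng.normal(size=n)                              # parental burnout, wave 1
ext_t1 = 0.3 * pb_t1 + rng.normal(size=n)               # externalized behaviors, wave 1
pb_t2 = 0.6 * pb_t1 + 0.2 * ext_t1 + rng.normal(size=n)
ext_t2 = 0.5 * ext_t1 + 0.25 * pb_t1 + rng.normal(size=n)
df = pd.DataFrame(dict(pb_t1=pb_t1, ext_t1=ext_t1, pb_t2=pb_t2, ext_t2=ext_t2))

# Cross-lagged path of interest in each direction:
m_child = smf.ols("ext_t2 ~ ext_t1 + pb_t1", data=df).fit()    # PB(T1) -> child behavior (T2)
m_parent = smf.ols("pb_t2 ~ pb_t1 + ext_t1", data=df).fit()    # child behavior (T1) -> PB(T2)
print("PB -> child:", round(m_child.params["pb_t1"], 3))
print("child -> PB:", round(m_parent.params["ext_t1"], 3))
```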

Keywords: exhaustion, structural equation modeling, cross-lagged longitudinal study, violence and neglect, child-parent relationship

Procedia PDF Downloads 50
276 Role of Baseline Measurements in Assessing Air Quality Impact of Shale Gas Operations

Authors: Paula Costa, Ana Picado, Filomena Pinto, Justina Catarino

Abstract:

Environmental impact associated with large-scale shale gas development is of major concern to the public, policy makers and other stakeholders. To assess this impact on the atmosphere, it is important to monitor ambient air quality prior to and during all shale gas operation stages. Baseline observations can provide a standard of the pre-shale gas development state of the environment. The lack of baseline concentrations was identified as an important knowledge gap in assessing the impact of emissions to air due to shale gas operations. In fact, baseline air quality monitoring is missing in several regions where there is a strong possibility of future shale gas exploration. This makes it difficult to properly identify, quantify and characterize environmental impacts that may be associated with shale gas development. The implementation of a baseline air monitoring program is imperative to be able to assess the total emissions related to shale gas operations. In fact, any monitoring programme should be designed to provide indicative information on background levels. A baseline air monitoring program should identify and characterize targeted air pollutants, most frequently described from monitoring and emission measurements, as well as those expected from hydraulic fracturing activities, and establish ambient air conditions prior to the start-up of potential emission sources from shale gas operations. This program has to be planned for at least one year to account for ambient variations. In the literature, in addition to GHG emissions of CH₄, CO₂ and nitrogen oxides (NOx), fugitive emissions from shale gas production can release volatile organic compounds (VOCs), aldehydes (formaldehyde, acetaldehyde) and hazardous air pollutants (HAPs). The VOCs include, among others, benzene, toluene, ethyl benzene, xylenes, hexanes, 2,2,4-trimethylpentane and styrene. The concentrations of six air pollutants (ozone, particulate matter (PM), carbon monoxide (CO), nitrogen oxides (NOx), sulphur oxides (SOx), and lead), whose regional ambient air levels are regulated by the Environmental Protection Agency (EPA), are often discussed. However, the main concern regarding emissions to air associated with shale gas operations seems to be the leakage of methane. Methane is identified as a compound of major concern due to its strong global warming potential. The identification of methane leakage from shale gas activities is complex due to the existence of several other CH₄ sources (e.g., landfills, agricultural activity or gas pipelines/compressor stations). An integrated monitoring study of methane emissions may be a suitable means of distinguishing the contribution of different sources of methane to ambient levels. All data analysis needs to be interpreted carefully, also taking into account the meteorological conditions of the site. This may require the implementation of a more intensive monitoring programme. It is therefore essential to develop a low-cost sampling strategy suitable for establishing pre-operations baseline data, as well as an integrated monitoring program to assess the emissions from shale gas operation sites. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 640715.

Keywords: air emissions, baseline, greenhouse gases, shale gas

Procedia PDF Downloads 304
275 Changes in Rainfall and Temperature and Its Impact on Crop Production in Moyamba District, Southern Sierra Leone

Authors: Keiwoma Mark Yila, Mathew Lamrana Siaffa Gboku, Mohamed Sahr Lebbie, Lamin Ibrahim Kamara

Abstract:

Rainfall and temperature are important variables that are often used to trace climate variability and change. A perception study and an analysis of climatic data were conducted to assess the changes in rainfall and temperature and their impact on crop production in Moyamba district, Sierra Leone. For the perception study, 400 farmers were randomly selected from farmer-based organizations (FBOs) in 4 chiefdoms, and 30 agricultural extension workers (AEWs) in the Moyamba district were purposively selected as respondents. Descriptive statistics and Kendall's test of concordance were used to analyze the data collected from the farmers and AEWs. Data for the analysis of variability and trends of rainfall and temperature from 1991 to 2020 were obtained from the Sierra Leone Meteorological Agency and Njala University and grouped into monthly, seasonal and annual time series. Regression analysis was used to determine the statistical values and trend lines for the seasonal and annual time series data. The Mann-Kendall test and Sen's Slope Estimator were used to analyze the significance and magnitude of the trends, respectively. The results of both studies show evidence of climate change in the Moyamba district. A substantial number of farmers and AEWs perceived a decrease in the annual rainfall amount and the length of the rainy season, a late start and end of the rainy season, an increase in the temperature during the day and night, and a shortened harmattan period over the last 30 years. Analysis of the meteorological data shows evidence of variability in the seasonal and annual distribution of rainfall and temperature, a decreasing and non-significant trend in rainy season and annual rainfall, and an increasing and significant trend in seasonal and annual temperature from 1991 to 2020. However, the changes in rainfall and temperature observed by the farmers and AEWs only partially agree with the results of the analyzed meteorological data. The majority of the farmers perceived that adverse weather conditions have negatively affected crop production in the district. Droughts, high temperatures, and irregular rainfall are the three major adverse weather events that farmers perceived to have contributed to a substantial loss in the yields of the major crops cultivated in the district. In response to the negative effects of adverse weather events, a substantial number of farmers take no action due to their lack of knowledge and of the technical or financial capacity to implement climate-sensitive agricultural (CSA) practices. Even though a few farmers are practising some CSA practices on their farms, there is an urgent need to build the capacity of farmers and AEWs to adapt to and mitigate the negative impacts of climate change. The highest-priority support needed by farmers is the provision of climate-resilient crop varieties, whilst the AEWs need training on CSA practices.
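The Mann-Kendall test and Sen's slope estimator mentioned above can be written out directly. The sketch below implements the standard untied-data version on an invented 30-year annual rainfall series; it is not the station data analyzed in the study.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test and Sen's slope for an annual series x (no ties assumed)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0              # variance of S without tie correction
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))                        # two-sided p-value
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return s, z, p, np.median(slopes)                     # Sen's slope = median pairwise slope

# Illustrative annual rainfall series (mm), 1991-2020; values are made up.
rain = 2600 - 4.0 * np.arange(30) + np.random.default_rng(1).normal(0, 120, 30)
s, z, p, sen = mann_kendall(rain)
print(f"S={s:.0f}, Z={z:.2f}, p={p:.3f}, Sen's slope={sen:.1f} mm/yr")
```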

Keywords: climate change, crop productivity, farmer’s perception, rainfall, temperature, Sierra Leone

Procedia PDF Downloads 53
274 Validation and Fit of a Biomechanical Bipedal Walking Model for Simulation of Loads Induced by Pedestrians on Footbridges

Authors: Dianelys Vega, Carlos Magluta, Ney Roitman

Abstract:

The simulation of loads induced by walking people on civil engineering structures is still challenging. It has been the focus of considerable research worldwide in recent decades due to the increasing number of reported vibration problems in pedestrian structures. One of the most important issues in the design of slender structures is Human-Structure Interaction (HSI). How moving people interact with structures and the effect this has on their dynamic responses is still not well understood. Being able to rely on calibrated pedestrian models that accurately estimate the structural response therefore becomes extremely important. However, because of the complexity of the pedestrian mechanisms, there are still some gaps in knowledge, and more reliable models need to be investigated. On this topic, several authors have proposed biodynamic models to represent the pedestrian; whether these models provide a consistent approximation to physical reality still needs to be studied. Therefore, this work contributes to a better understanding of this phenomenon by providing an experimental validation of a pedestrian walking model and a Human-Structure Interaction model. In this study, a bi-dimensional bipedal walking model was used to represent the pedestrians, along with an interaction model, which was applied to a prototype footbridge. The numerical models were implemented in MATLAB. In parallel, experimental tests were conducted in the Structures Laboratory of COPPE (LabEst), at the Federal University of Rio de Janeiro. Different test subjects were asked to walk at different walking speeds over instrumented force platforms to measure the walking force, and an accelerometer was placed at the waist of each subject to measure the acceleration of the center of mass at the same time. By fitting the step force and the center of mass acceleration through successive numerical simulations, the model parameters were estimated. In addition, experimental data of a walking pedestrian on a flexible structure were used to validate the interaction model presented, through the comparison of the measured and simulated structural response at mid-span. It was found that the pedestrian model was able to adequately reproduce the ground reaction force and the center of mass acceleration for normal and slow walking speeds, being less accurate for faster speeds. Numerical simulations showed that biomechanical parameters such as leg stiffness and damping affect the ground reaction force, and that the higher the walking speed, the greater the leg length of the model. Besides, the interaction model was also capable of estimating the structural response with good approximation, remaining in the same order of magnitude as the measured response. Some differences in the frequency spectra were observed, which are presumed to be due to the perfectly periodic loading representation, which neglects intra-subject variabilities. In conclusion, this work showed that the bipedal walking model can be used to represent walking pedestrians, since it efficiently reproduced the center of mass movement and the ground reaction forces produced by humans. Furthermore, although more experimental validations are required, the interaction model also seems to be a useful framework to estimate the dynamic response of structures under loads induced by walking pedestrians.
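The numerical models here were implemented in MATLAB. Purely as a language-neutral illustration of the parameter-fitting step, the Python sketch below adjusts the stiffness and damping of a mass-spring-damper proxy (not the authors' bipedal equations) so that a simulated ground reaction force matches a synthetic "measured" signal in the least-squares sense; mass, noise and parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Generic parameter-fitting step: tune stand-in stiffness k and damping c of a
# mass-spring-damper proxy so the simulated vertical ground reaction force (GRF)
# matches a force-plate-like signal. This is NOT the study's bipedal model.
m, g = 75.0, 9.81                     # assumed body mass (kg) and gravity
t = np.linspace(0, 0.6, 300)          # one stance phase (s)

def model_grf(params, t):
    k, c = params
    wn = np.sqrt(k / m)
    zeta = c / (2 * np.sqrt(k * m))
    wd = wn * np.sqrt(max(1 - zeta**2, 1e-6))
    # damped oscillation about body weight as a crude GRF surrogate
    return m * g * (1 + 0.3 * np.exp(-zeta * wn * t) * np.cos(wd * t))

rng = np.random.default_rng(2)
measured = model_grf([16e3, 600.0], t) + rng.normal(0, 10.0, t.size)  # synthetic "measurement"

fit = least_squares(lambda p: model_grf(p, t) - measured, x0=[10e3, 300.0],
                    bounds=([1e3, 10.0], [50e3, 5e3]))
print("estimated stiffness k = %.0f N/m, damping c = %.0f N·s/m" % tuple(fit.x))
```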

Keywords: biodynamic models, bipedal walking models, human induced loads, human structure interaction

Procedia PDF Downloads 103
273 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU

Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais

Abstract:

Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), opacity and unfairness 'sins' must be addressed for the task to be deemed responsible. AI software is not always 100% accurate, which can lead to misclassification. Discrimination against some groups can be exacerbated. A hetero-personalized identity can be imposed on the individual(s) affected. Also, autonomous CWA sometimes lacks transparency when using black box models. However, for this intended purpose, human analysts 'on-the-loop' might not be the best remedy consumers are looking for in credit. This study seeks to explore the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA), dated 21 April 2021 (as per the last corrigendum by the European Parliament on 19 April 2024). Especially with the adoption of Art. 18(8)(9) of the EU Directive 2023/2225, of 18 October, which will go into effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of the AIA's Art. 2. Consequently, engineering the law of consumers' CWA is imperative. Generally, the proposed MAS framework consists of several layers arranged in a specific sequence, as follows: firstly, the Data Layer gathers legitimate predictor sets from traditional sources; then, the Decision Support System Layer, whose Neural Network model is trained using k-fold Cross Validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises Three-Step-Agents; and, lastly, the Oversight Layer has a 'Bottom Stop' for analysts to intervene in a timely manner. From the analysis, one can see that a vital component of this software is the XAI layer. It appears as a transparent curtain covering the AI's decision-making process, enabling comprehension, reflection, and further feasible oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanations (SHAP), another agent in the XAI layer, could address potential discrimination issues by identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion. It uses lawful alternative sources, such as the share of wallet, among others, to search for more advantageous solutions to incomplete evaluation appraisals based on genetic programming. Overall, this research aspires to bring the concept of Machine-Centered Anthropocentrism to the table of EU policymaking. It acknowledges that, when put into service, credit analysts no longer exert full control over the data-driven entities programmers have given 'birth' to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be vilified inherently. The issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules.
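As a hedged illustration of the Decision Support System layer's training step only, the sketch below scores a neural network with 5-fold cross-validation on synthetic applicant data. The features, labels and network size are assumptions, and the XAI agents (LIME/SHAP), the Suggestion Agent and the Oversight Layer's 'Bottom Stop' are deliberately not implemented here.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Sketch of the scoring model training step: a neural network evaluated with
# k-fold cross-validation. Synthetic features stand in for "legitimate
# predictor sets"; the label marks a hypothetical repaid/defaulted outcome.
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 8))                     # hypothetical applicant features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print("5-fold AUC: %.3f ± %.3f" % (scores.mean(), scores.std()))
```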

Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking

Procedia PDF Downloads 13
272 The Production of Biofertilizer from Naturally Occurring Microorganisms by Using Nuclear Technologies

Authors: K. S. Al-Mugren, A. Yahya, S. Alodah, R. Alharbi, S. H. Almsaid, A. Alqahtani, H. Jaber, A. Basaqer, N. Alajra, N. Almoghati, A. Alsalman, Khalid Alharbi

Abstract:

Context: The production of biofertilizers from naturally occurring microorganisms is an area of research that aims to enhance agricultural practices by utilizing local resources. This research project focuses on isolating and screening indigenous microorganisms with PK-fixing and phosphate solubilizing characteristics from local sources. Research Aim: The aim of this project is to develop a biofertilizer product using indigenous microorganisms and composted agro waste as a carrier. The objective is to enhance crop productivity and soil fertility through the application of biofertilizers. Methodology: The research methodology includes several key steps. Firstly, indigenous microorganisms will be isolated from local resources using the ten-fold serial dilutions technique. Screening assays will be conducted to identify microorganisms with phosphate solubilizing and PK-fixing activities. Agro-waste materials will be collected from local agricultural sources, and composting experiments will be conducted to convert them into organic matter-rich compost. Physicochemical analysis will be performed to assess the composition of the composted agro-waste. Gamma and X-ray irradiation will be used to sterilize the carrier material. The sterilized carrier will be tested for sterility using the ten-fold serial dilutions technique. Finally, selected indigenous microorganisms will be developed into biofertilizer products. Findings: The research aims to find suitable indigenous microorganisms with phosphate solubilizing and PK-fixing characteristics for biofertilizer production. Additionally, the research aims to assess the suitability of composted agro waste as a carrier for biofertilizers. The impact of gamma irradiation sterilization on pathogen elimination will also be investigated. Theoretical Importance: This research contributes to the understanding of utilizing indigenous microorganisms and composted agro waste for biofertilizer production. It expands knowledge on the potential benefits of biofertilizers in enhancing crop productivity and soil fertility. Data Collection and Analysis Procedures: The data collection process involves isolating indigenous microorganisms, conducting screening assays, collecting and composting agro waste, analyzing the physicochemical composition of composted agro waste, and testing carrier sterilization. The analysis procedures include assessing the abilities of indigenous microorganisms, evaluating the composition of composted agro waste, and determining the sterility of the carrier material. Conclusion: The research project aims to develop biofertilizer products using indigenous microorganisms and composted agro waste as a carrier. Through the isolation and screening of indigenous microorganisms, the project aims to enhance crop productivity and soil fertility by utilizing local resources. The research findings will contribute to the understanding of the suitability of composted agro waste as a carrier and the efficacy of gamma irradiation sterilization. The research outcomes will have theoretical importance in the field of biofertilizer production and agricultural practices.

Keywords: biofertilizer, microorganisms, agro waste, nuclear technologies

Procedia PDF Downloads 79
271 Positive Incentives to Reduce Private Car Use: A Theory-Based Critical Analysis

Authors: Rafael Alexandre Dos Reis

Abstract:

Research has shown a substantial increase in the participation of Conventionally Fuelled Vehicles (CFVs) in the urban transport modal split. The reasons for this unsustainable reality are multiple, from economic interventions to individual behaviour. The development and delivery of positive incentives for the adoption of more environmentally friendly modes of transport is an emerging strategy to help in tackling the problem of excessive use of conventionally fuelled vehicles. The efficiency of this approach, like that of other information-based schemes, can benefit from knowledge of its potential impacts on the theoretical constructs of multiple behaviour change theories. The goal of this research is to critically analyse theories of behaviour that are relevant to transport research and the impacts of positive incentives on the theoretical determinants of behaviour, strengthening the current body of evidence about the benefits of this approach. The main method of investigation is a literature review on two main topics: the current theories of behaviour that have empirical support in transport research, and past or ongoing positive incentive programs that had an impact on car use reduction. The reviewed programs of positive incentives were the following: TravelSmart®; Spitsmijden®; Incentives for Singapore Commuters® (INSINC); COMMUTEGREENER®; MOVESMARTER®; STREETLIFE®; SUPERHUB®; SUNSET® and the EMPOWER® project. The theories analysed were the Theory of Planned Behaviour (TPB); the Norm Activation Model (NAM); Social Learning Theory (SLT); the Theory of Interpersonal Behaviour (TIB); Goal-Setting Theory (GST) and the Value-Belief-Norm Theory (VBN). After reviewing the theoretical constructs of each of the theories and their influence on car use, it can be concluded that positive incentive schemes impact behaviour change in the following manners: -Changing individuals' attitudes through informational incentives; -Increasing feelings of moral obligation to reduce the use of CFVs; -Increasing the perceived social pressure to engage in more sustainable mobility behaviours, for example through the use of comparison mechanisms in social media; -Increasing the perceived control of behaviour through informational and training incentives; -Increasing personal norms with reinforcing information; -Providing tools for self-monitoring and self-evaluation; -Providing real experiences in alternative modes to the car; -Making the observation of others' car use reduction possible; -Informing about the consequences of behaviour and emphasizing the individual's responsibility towards society and the environment; -Increasing the perception of the consequences of car use for an individual's valued objects; -Increasing the perceived ability to reduce threats to the environment; -Helping to establish goals to reduce car use; -Giving personalized feedback on the goal; -Increasing feelings of commitment to the goal; -Reducing the perceived complexity of the use of alternatives to the car. It is notable that this emerging technique of delivering positive incentives is systematically connected to causal determinants of travel behaviour. The preliminary results of the reviewed programs evidence how positive incentives might strengthen these determinants and help in the process of behaviour change.

Keywords: positive incentives, private car use reduction, sustainable behaviour, voluntary travel behaviour change

Procedia PDF Downloads 313
270 Characterization of Potato Starch/Guar Gum Composite Film Modified by Ecofriendly Cross-Linkers

Authors: Sujosh Nandi, Proshanta Guha

Abstract:

Synthetic plastics are preferred for food packaging due to their high strength, stretchability, good water vapor and gas barrier properties, transparency and low cost. However, the environmental pollution generated by these synthetic plastics is a major concern of modern human civilization. Therefore, the use of biodegradable polymers as a substitute for synthetic non-biodegradable polymers is encouraged, even considering the drawbacks related to the mechanical and barrier properties of such films. Starch, considered one of the potential raw materials for biodegradable polymers, suffers from poor water barrier and mechanical properties due to its hydrophilic nature. Apart from that, recrystallization of starch molecules occurs during aging, which decreases the flexibility and increases the elastic modulus of the film. The recrystallization process can be minimized by blending other hydrocolloids with similar structural compatibility into the starch matrix. Therefore, the incorporation of guar gum, which has a similar structural backbone, into the starch matrix can introduce a potential film into the realm of biodegradable polymers. However, due to the hydrophilic nature of both starch and guar gum, the water barrier property of the film is low. One prospective solution to enhance this could be the modification of the potato starch/guar gum (PSGG) composite film using a cross-linker. Over the years, several cross-linking agents, such as phosphorus oxychloride and sodium trimetaphosphate, have been used to improve the water vapor permeability (WVP) of the films. However, these chemical cross-linking agents are toxic, expensive and take a long time to degrade. Therefore, naturally available carboxylic acids (tartaric acid, malonic acid, succinic acid, etc.) have been used as cross-linkers, and the water barrier property was found to be enhanced substantially. To our knowledge, no work has been reported with tartaric acid and succinic acid as cross-linking agents blended into PSGG films. Therefore, the objective of the present study was to examine the changes in the water vapor barrier property and mechanical properties of the PSGG films after cross-linking with tartaric acid (TA) and succinic acid (SA). The cross-linkers were blended with the PSGG film-forming solution at four different concentrations (4, 8, 12 and 16%) and cast on a Teflon plate at 37°C for 20 h. In the Fourier-transform infrared spectroscopy (FTIR) study of the developed films, a band at 1720 cm⁻¹ was observed, which is attributed to the formation of ester groups in the developed films. On the other hand, it was observed that the tensile strength (TS) of the cross-linked films decreased compared to the non-cross-linked films, whereas the strain at break increased severalfold. Moreover, the results showed that the tensile strength diminished with increasing concentration of TA or SA, and the lowest TS (1.62 MPa) was observed for 16% SA. Apart from that, the maximum strain at break was also observed for TA at 16%, and the reason behind this could be the lesser degree of crystallinity of the TA cross-linked films compared to SA. The water vapor permeability of the succinic acid cross-linked film was reduced significantly, whereas it was enhanced significantly by the addition of tartaric acid.

Keywords: cross linking agent, guar gum, organic acids, potato starch

Procedia PDF Downloads 90
269 Perception Differences in Children Learning to Golf with Traditional versus Modified (Scaled) Equipment

Authors: Lindsey D. Sams, Dean R. Gorman, Cathy D. Lirgg, Steve W. Dittmore, Jack C. Kern

Abstract:

Golf is a lifetime sport that provides numerous physical and psychological benefits. The game has struggled with attrition and retention within minority groups, and this has exposed the lack of a modified introduction to the game that is uniformly accessible and developmentally appropriate. Factors that have been related to sport participatory behaviors include perceived competence, enjoyment and intention. The purpose of this study was to examine self-reported differences in perceived competence and enjoyment between learners using modified and traditional equipment, as well as the potential effects these factors could have on intent for future participation. For this study, SNAG Golf was chosen to serve as the scaled equipment used by the modified equipment group. The participants in this study were 99 children (24 traditional equipment users / 75 modified equipment users) located across the U.S., with ages ranging from 7 to 12 years (2nd-5th grade). Utilizing a convenience sampling method, data were obtained on a voluntary basis through surveys measuring children's golf participation and self-perceptions concerning perceived competence, enjoyment and intention to continue participation. The scales used for perceived competence and enjoyment included Susan Harter's Self-Perception Profile for Children (SPPC) along with the Physical Activity Enjoyment Scale (PACES). Analysis revealed no significant differences in enjoyment, perceived competence or intention between children learning with traditional golf equipment and those learning with modified golf equipment. This was true even though traditional equipment users reported significantly higher experience levels than modified equipment users. Intention was regressed on the enjoyment and perceived competence variables. Congruent with current literature, enjoyment was a strong predictor of intention to continue participation for both groups. Modified equipment users demonstrated significantly lower experience levels but reported similar levels of competence, enjoyment and intent to continue participation as reported by the more experienced, and potentially more skilled, traditional users. The ability to immediately generate these positive effects suggests the potential adoption of a more effective way to learn golf and a method that is conducive to participatory behaviors related to attrition and retention. These implications, in turn, highlight an equipment candidate ideal for introduction into physical education programs, where new learners are introduced to various sports in safe and developmentally appropriate environments. A major goal of this study was to provide foundational research that instigates further examination of golf's introductory teaching methodologies, as this topic has little presence in the current literature. Future research recommendations range from improvements in the current research design to more expansive approaches related to the topic, such as progressive skill development, knowledge of the game's tactical and strategic concepts, playing ability and teaching effectiveness when utilizing modified versus traditional equipment.
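The regression step reported above ("intention was regressed on the enjoyment and perceived competence variables") can be illustrated as follows on simulated scores; the numbers are placeholders, not the SPPC/PACES data collected in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustration only: regress intention on enjoyment and perceived competence
# for a sample of 99 simulated children (mirroring the study's sample size).
rng = np.random.default_rng(4)
n = 99
enjoyment = rng.uniform(1, 5, n)                       # PACES-like score (simulated)
competence = rng.uniform(1, 4, n)                      # SPPC-like score (simulated)
intention = 0.8 * enjoyment + 0.2 * competence + rng.normal(0, 0.5, n)
df = pd.DataFrame(dict(intention=intention, enjoyment=enjoyment, competence=competence))

model = smf.ols("intention ~ enjoyment + competence", data=df).fit()
print(model.params)    # enjoyment is expected to be the stronger predictor
print(model.pvalues)
```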

Keywords: adaptive sports, enjoyment, golf participation, modified equipment, perceived competence, SNAG golf

Procedia PDF Downloads 316
268 The Lacuna in Understanding of Forensic Science amongst Law Practitioners in India

Authors: Poulomi Bhadra, Manjushree Palit, Sanjeev P. Sahni

Abstract:

Forensic science uses all branches of science for criminal investigation and trial and has increasingly emerged as an important tool in the administration of justice. However, the growth and development of this field in India has not been as rapid or widespread as in the more developed Western countries. For the successful administration of justice, it is important that all agencies involved in law enforcement adopt an inter-professional approach towards forensic science, which is presently lacking. In light of the alarmingly high average acquittal rate in India, this study aims to examine the lack of understanding and appreciation of the importance and scope of forensic evidence and expert opinions amongst law professionals such as lawyers and judges. Based on a study of trial court cases from Delhi and surrounding areas, the study underlines the areas in forensics where the criminal justice system has noticeably erred. Using this information, the authors examine the extent of forensic understanding amongst legal professionals and attempt to conclusively identify the areas in which they need further appraisal. A cross-sectional study using a structured questionnaire was conducted amongst law professionals across age, gender, type and years of experience in court, to determine their understanding of DNA, fingerprints and other interdisciplinary scientific materials used as forensic evidence. In our study, we assess the levels of understanding amongst lawyers with regard to DNA and fingerprint evidence, and how it affects trial outcomes. We also aim to understand the factors that prevent credible and advanced awareness amongst legal personnel, amongst others. The survey identified the areas in modern and advanced forensics, such as forensic entomology, anthropology, cybercrime etc., in which Indian legal professionals are yet to attain a functional understanding. It also brings to light what is commonly termed the 'CSI effect' in Western courtrooms, and provides scope to study the existence of this phenomenon and its effects on Indian courts and their judgements. This study highlighted the prevalence of unchallenged expert testimony presented by the prosecution in criminal trials and impressed upon the judicial system the need for independent analysis and evaluation of the scientist's data and/or testimony by the defense. Overall, this study aims to develop a clearer and more rigorous understanding of why legal professionals should have a basic understanding of the interdisciplinary nature of the forensic sciences. Based on the aforementioned findings, the authors suggest various measures by which judges and lawyers might obtain an extensive knowledge of the advances and promising potentialities of forensic science. These include promoting a forensic curriculum in legal studies at the Bachelor's and Master's levels as well as in mid-career professional courses. The formation of forensic-legal consultancies, in consultation with the Department of Justice, will not only assist in training police, military and law personnel but will also encourage legal research in this field. These suggestions also aim to bridge the communication gap that presently exists between law practitioners, forensic scientists and the general community's awareness of the criminal justice system.

Keywords: forensic science, Indian legal professionals, interdisciplinary awareness, legal education

Procedia PDF Downloads 319
267 Ionophore-Based Materials for Selective Optical Sensing of Iron(III)

Authors: Natalia Lukasik, Ewa Wagner-Wysiecka

Abstract:

The development of selective, fast-responsive, and economical sensors for the detection and determination of diverse ions is one of the most extensively studied areas due to its importance in the fields of clinical, environmental and industrial analysis. Among chemical sensors, ionophore-based optical sensors have gained vast popularity; in these, the generated analytical signal is a consequence of the molecular recognition of the ion by the ionophore. The change of color occurring during host-guest interactions allows quantitative analysis and 'naked-eye' detection without the need for sophisticated equipment. An example of the application of such sensors is the colorimetric detection of iron(III) cations. Iron, as one of the most significant trace elements, plays a role in many biochemical processes. For these reasons, the development of reliable, fast, and selective methods of iron ion determination is highly demanded. Taking all of the above into account, a chromogenic amide derivative of 3,4-dihydroxybenzoic acid was synthesized, and its ability to recognize iron(III) was tested. To the best of the authors' knowledge (according to Chemical Abstracts), the obtained ligand has not been described in the literature so far. The catechol moiety was introduced into the ligand structure in order to mimic the action of naturally occurring siderophores, which are iron(III)-selective receptors. The ligand-ion interactions were studied using spectroscopic methods: UV-Vis spectrophotometry and infrared spectroscopy. The spectrophotometric measurements revealed that the amide exhibits affinity to iron(III) in dimethyl sulfoxide and in fully aqueous solution, which is manifested by a change of color from yellow to green. Incorporation of the tested amide into a polymeric matrix (cellulose triacetate) ensured effective recognition of iron(III) at pH 3 with a detection limit of 1.58×10⁻⁵ M. For the obtained sensor material, parameters such as the linear response range, response time, selectivity, and possibility of regeneration were determined. In order to evaluate the effect of the size of the sensing material on iron(III) detection, nanospheres (in the form of a nanoemulsion) containing the tested amide were also prepared. According to DLS (dynamic light scattering) measurements, the size of the nanospheres is 308.02 ± 0.67 nm. The working parameters of the nanospheres were determined and compared with those of the cellulose triacetate-based material. Additionally, for fast, qualitative experiments, test strips were prepared by adsorption of the amide solution on a glass microfiber material. The visual limit of detection of iron(III) at pH 3 by the test strips was estimated at the level of 10⁻⁴ M. In conclusion, the amide derived from 3,4-dihydroxybenzoic acid reported here proved to be an effective candidate for the optical sensing of iron(III) in fully aqueous solutions. N. L. kindly acknowledges financial support from the National Science Centre Poland, grant no. 2017/01/X/ST4/01680. The authors thank Gdansk University of Technology for financial support under grant no. 032406.
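For readers unfamiliar with how a detection limit such as 1.58×10⁻⁵ M is typically derived, the short sketch below applies the common LOD = 3.3σ/S rule to an invented calibration line; the concentrations, absorbances and blank standard deviation are placeholders, not the reported optode data.

```python
import numpy as np

# Common estimate of a spectrophotometric detection limit: LOD = 3.3*sigma/S,
# where sigma is the standard deviation of blank (or low-level) readings and
# S is the slope of the absorbance-vs-concentration calibration line.
conc = np.array([0.0, 2e-5, 4e-5, 6e-5, 8e-5])           # Fe(III) standards, mol/L (illustrative)
absorbance = np.array([0.012, 0.110, 0.205, 0.298, 0.395])
blank_sd = 0.004                                          # sd of replicate blank readings (assumed)

slope, intercept = np.polyfit(conc, absorbance, 1)
lod = 3.3 * blank_sd / slope
print(f"calibration slope = {slope:.1f} L/mol, LOD ≈ {lod:.2e} M")
```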

Keywords: ion-selective optode, iron(III) recognition, nanospheres, optical sensor

Procedia PDF Downloads 125
266 The Ecuador Healthy Food Environment Policy Index (Food-EPI)

Authors: Samuel Escandón, María J. Peñaherrera-Vélez, Signe Vargas-Rosvik, Carlos Jerves Córdova, Ximena Vélez-Calvo, Angélica Ochoa-Avilés

Abstract:

Overweight and obesity are considered risk factors in childhood for developing nutrition-related non-communicable diseases (NCDs), such as diabetes, cardiovascular diseases, and cancer. In Ecuador, 35.4% of 5- to 11-year-olds and 29.6% of 12- to 19-year-olds are overweight or obese. Globally, unhealthy food environments characterized by high consumption of processed/ultra-processed food and rapid urbanization are highly related to the increasing nutrition-related non-communicable diseases. The evidence shows that in low- and middle-income countries (LMICs), fiscal policies and regulatory measures significantly reduce unhealthy food environments, achieving substantial advances in health. However, in some LMICs, little is known about the impact of governments' action to implement healthy food-environment policies. This study aimed to generate evidence on the state of implementation of public policy focused on food environments for the prevention of overweight and obesity in children and adolescents in Ecuador compared to global best practices and to target key recommendations for reinforcing the current strategies. After adapting the INFORMAS' Healthy Food Environment Policy Index (Food‐EPI) to the Ecuadorian context, the Policy and Infrastructure support components were assessed. Individual online interviews were performed using fifty-one indicators to analyze the level of implementation of policies directly or indirectly related to preventing overweight and obesity in children and adolescents compared to international best practices. Additionally, a participatory workshop was conducted to identify the critical indicators and generate recommendations to reinforce or improve the political action around them. In total, 17 government and non-government experts were consulted. From 51 assessed indicators, only the one corresponding to the nutritional information and ingredients labelling registered an implementation level higher than 60% (67%) compared to the best international practices. Among the 17 indicators determined as priorities by the participants, those corresponding to the provision of local products in school meals and the limitation of unhealthy-products promotion in traditional and digital media had the lowest level of implementation (34% and 11%, respectively) compared to global best practices. The participants identified more barriers (e.g., lack of continuity of effective policies across government administrations) than facilitators (e.g., growing interest from the Ministry of Environment because of the eating-behavior environmental impact) for Ecuador to move closer to the best international practices. Finally, within the participants' recommendations, we highlight the need for policy-evaluation systems, information transparency on the impact of the policies, transformation of successful strategies into laws or regulations to make them mandatory, and regulation of power and influence from the food industry (conflicts of interest). Actions focused on promoting a more active role of society in the stages of policy formation and achieving more articulated actions between the different government levels/institutions for implementing the policy are necessary to generate a noteworthy impact on preventing overweight and obesity in children and adolescents. 
Including systems for the internal evaluation of existing strategies to strengthen successful actions, creating policies to fill existing gaps, and reforming policies that do not generate significant impact should be priorities for the Ecuadorian government to improve the country's food environments.

Keywords: children and adolescents, food-EPI, food policies, healthy food environment

Procedia PDF Downloads 37
265 Effectiveness of Dry Needling with and without Ultrasound Guidance in Patients with Knee Osteoarthritis and Patellofemoral Pain Syndrome: A Systematic Review and Meta-Analysis

Authors: Johnson C. Y. Pang, Amy S. N. Fu, Ryan K. L. Lee, Allan C. L. Fu

Abstract:

Dry needling (DN) is a puncturing method that involves the insertion of needles into tender spots of the human body without the injection of any substance. DN has long been used to treat patients with knee pain caused by knee osteoarthritis (KOA) and patellofemoral pain syndrome (PFPS), but evidence of its effectiveness remains inconsistent. This study aimed to conduct a systematic review and meta-analysis to assess the intervention methods and effects of DN with and without ultrasound guidance for treating pain and dysfunction in people with KOA and PFPS. Design: This systematic review adhered to the PRISMA reporting guidelines. The registration number of the study protocol published in the PROSPERO database was CRD42021221419. Six electronic databases were searched in November 2020: CINAHL Complete (1976-2020), Cochrane Library (1996-2020), EMBASE (1947-2020), Medline (1946-2020), PubMed (1966-2020), and PsycINFO (1806-2020). Randomized controlled trials (RCTs) and controlled clinical trials were included to examine the effects of DN on knee pain, including KOA and PFPS. The key concepts were: DN, acupuncture, ultrasound guidance, KOA, and PFPS. Risk-of-bias assessment and qualitative analysis were conducted by two independent reviewers using the PEDro score. Results: Fourteen articles met the inclusion criteria, and eight of them were high-quality papers according to the PEDro score. There were variations in the techniques of DN, including the direction and depth of insertion, the number of needles, the needle retention time, needle manipulation, and the number of treatment sessions. Meta-analysis was conducted on eight articles. The DN group showed positive short-term effects (from immediately after DN to less than 3 months) on pain reduction for both KOA and PFPS, with an overall standardized mean difference (SMD) of -1.549 (95% CI = -2.511 to -0.588) and high heterogeneity (P=0.002, I²=96.3%). In subgroup analysis, DN demonstrated a significant effect on pain reduction in PFPS (P<0.001) that was not found in subjects with KOA (P=0.302). At 3 months post-intervention, DN also induced significant pain reduction in subjects with KOA and PFPS, with an overall SMD of -0.916 (95% CI = -1.699 to -0.133) and high heterogeneity (P=0.022, I²=95.63%). In addition, DN induced significant short-term improvement in function when the analysis was conducted on both the KOA and PFPS groups, with an overall SMD of 6.069 (95% CI = 3.544 to 8.595) and high heterogeneity (P<0.001, I²=98.56%). In subgroup analysis, only PFPS showed a positive result (SMD=6.089, P<0.001), while the short-term effect in KOA was statistically non-significant (P=0.198). Similarly, at 3 months post-intervention, significant improvement in function after DN was found when the analysis was conducted on both groups, with an overall SMD of 5.840 (95% CI = 2.428 to 9.252) and high heterogeneity (P<0.001, I²=99.1%), but only PFPS showed significant improvement in subgroup analysis (P=0.002, I²=99.1%). Conclusions: The application of DN in KOA and PFPS patients varies among practitioners. DN is effective in reducing pain and dysfunction in the short term and at 3 months post-intervention in individuals with PFPS. To the best of our knowledge, no study has reported the effects of DN with ultrasound guidance on KOA and PFPS. The longer-term effects of DN on KOA and PFPS await further study.
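The pooling of standardized mean differences described above can be reproduced with a short script. The sketch below is illustrative only: the study-level SMDs and variances are made up, not taken from the reviewed trials, and it implements a generic DerSimonian-Laird random-effects pool with the I² heterogeneity statistic, which may differ from the exact software the authors used.

```python
import numpy as np

def pool_random_effects(smd, var):
    """DerSimonian-Laird random-effects pooling of standardized mean differences."""
    smd, var = np.asarray(smd, float), np.asarray(var, float)
    w = 1.0 / var                                   # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * smd) / np.sum(w)
    q = np.sum(w * (smd - fixed) ** 2)              # Cochran's Q
    df = len(smd) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance
    w_star = 1.0 / (var + tau2)                     # random-effects weights
    pooled = np.sum(w_star * smd) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = (max(0.0, (q - df) / q) * 100) if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# hypothetical study-level effects and variances (placeholders, not the reviewed trials)
smd = [-1.2, -0.4, -2.0, -1.6]
var = [0.10, 0.08, 0.15, 0.12]
print(pool_random_effects(smd, var))   # pooled SMD, 95% CI, I² (%)
```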

Keywords: dry needling, knee osteoarthritis, patellofemoral pain syndrome, ultrasound guidance

Procedia PDF Downloads 111
264 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, and feature extraction, and ends with general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the operating range of each residential appliance based on the power demand and then detecting the times at which each selected appliance changes state. To fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz (one sample per minute). The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect and facilitates the extraction of specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in contrast to the many supervised techniques used in such cases. We extract the power interval within which the selected appliance operates, along with a time vector delimiting its state transitions. Appliance signatures are then formed from the extracted power, geometrical, and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated on both data simulated with LPG and the real Reference Energy Disaggregation Dataset (REDD). Performance is assessed with confusion-matrix-based metrics: accuracy, precision, recall, and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature that are based on statistical variations and abrupt changes (variance sliding window and cumulative sum).
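A minimal sketch of the two ingredients named above, event detection on one-sample-per-minute power data followed by a DTW comparison of candidate segments against an appliance template, is given below. The threshold, the toy aggregate signal, and the template are illustrative assumptions, not the parameters of the proposed framework.

```python
import numpy as np

def detect_events(power, threshold=30.0):
    """Indices where the minute-level aggregate power jumps by more than `threshold` watts."""
    return np.where(np.abs(np.diff(power)) > threshold)[0] + 1

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

# toy 1/60 Hz aggregate signal (watts) and a hypothetical appliance template
power = np.array([50, 52, 51, 250, 255, 252, 60, 55, 300, 305, 58], float)
template = np.array([0, 200, 200, 0], float)

for start in detect_events(power):
    segment = power[start:start + len(template)] - power[max(start - 1, 0)]
    print(start, round(dtw_distance(segment, template), 1))   # smaller = closer match
```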

Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 51
263 Application of Large Eddy Simulation-Immersed Boundary Volume Penalization Method for Heat and Mass Transfer in Granular Layers

Authors: Artur Tyliszczak, Ewa Szymanek, Maciej Marek

Abstract:

Flow through granular materials is important to a vast array of industries: in the construction industry, where granular layers are used for bulkheads and isolators; in chemical engineering and catalytic reactors, where the large surfaces of packed granular beds intensify chemical reactions; and in energy production systems, where granulates are promising materials for heat storage and as heat transfer media. Despite the common usage of granulates and the extensive research performed in this field, the phenomena occurring between granular solid elements, or between solids and fluid, are still not fully understood. In the present work, we analyze the heat exchange process between the flowing medium (gas, liquid) and the solid material inside granular layers. We consider them as a composite of isolated solid elements and inter-granular spaces in which a gas or liquid can flow. The structure of the layer is controlled by the shapes of the granular elements (e.g., spheres, cylinders, cubes, Raschig rings), their spatial distribution, and their effective characteristic dimension (total volume or surface area). We analyze to what extent the alteration of these parameters influences the flow characteristics (turbulence intensity, mixing efficiency, heat transfer) inside the layer and behind it. Analysis of flow inside granular layers is very complicated because the use of classical experimental techniques (LDA, PIV, fiber probes) inside the layers is practically impossible, whereas the use of probes (e.g., thermocouples, Pitot tubes) requires drilling holes in the solid material. Hence, measurements of the flow inside granular layers are usually performed using, for instance, advanced X-ray tomography. In this respect, theoretical or numerical analyses of flow inside granulates seem crucial. The application of discrete element methods in combination with classical finite volume/finite difference approaches is problematic, as the mesh generation process for complex granular material can be very arduous. A good alternative for the simulation of flow in complex domains is the immersed boundary-volume penalization (IB-VP) approach, in which the computational meshes have a simple Cartesian structure and the impact of solid objects on the fluid is mimicked by source terms added to the Navier-Stokes and energy equations. The present paper focuses on the application of the IB-VP method combined with large eddy simulation (LES). The flow solver used in this work is a high-order code (SAILOR), which was used previously in various studies, including laminar/turbulent transition in free flows and flows in wavy channels, wavy pipes, and over variously shaped obstacles. In these cases, the formal order of approximation turned out to be between 1 and 2, depending on the test case. The current research concentrates on analyses of the flow in dense granular layers with elements distributed in a deterministic, regular manner, and on validation of the results obtained using the LES-IB-VP method against a body-fitted approach. The comparisons are very promising and show very good agreement. It is found that the size, number, and distribution of the elements have a huge impact on the obtained results. Ordering of the granular elements (or the lack of it) affects both the pressure drop and the efficiency of the heat transfer, as it significantly changes the mixing process.
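To make the penalization idea concrete, a minimal Brinkman-type formulation is sketched below; the exact source terms implemented in the SAILOR solver may differ. Here χ(x) is a mask function equal to 1 inside the granular solid and 0 in the fluid, η and η_T are small penalization parameters, and u_s, T_s are the (here stationary) solid velocity and temperature toward which the flow is forced inside the obstacles.

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u}
    - \frac{\chi(\mathbf{x})}{\eta}\,(\mathbf{u}-\mathbf{u}_s),
\qquad
\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T
  = \alpha \nabla^{2} T - \frac{\chi(\mathbf{x})}{\eta_T}\,(T-T_s)
```

As η → 0 the velocity and temperature inside the masked region are driven toward the solid values, so the granular geometry is represented on a simple Cartesian mesh without body-fitted grid generation.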

Keywords: granular layers, heat transfer, immersed boundary method, numerical simulations

Procedia PDF Downloads 104
262 The Impact of Developing an Educational Unit in the Light of Twenty-First Century Skills in Developing Language Skills for Non-Arabic Speakers: A Proposed Program for Application to Students of Educational Series in Regular Schools

Authors: Erfan Abdeldaim Mohamed Ahmed Abdalla

Abstract:

The era of the knowledge explosion in which we live requires us to develop educational curricula quantitatively and qualitatively to adapt to the twenty-first-century skills of critical thinking, problem-solving, communication, cooperation, creativity, and innovation. The process of developing a curriculum is as significant as building it; in fact, developing curricula may be more difficult than building them. Curriculum development includes analyzing needs, setting goals, designing the content and educational materials, creating language programmes, developing teachers, applying the programmes in schools, monitoring and feedback, and then evaluating the language programme resulting from these processes. When we look back at the history of language teaching during the twentieth century, we find that developing the delivery method is the most crucial aspect of change in language teaching doctrines. A delivery method in teaching is a systematic set of teaching practices based on a specific theory of language acquisition. This is a key consideration, as the process of development must include all the curriculum elements in their comprehensive sense: linguistic and non-linguistic. The various Arabic curricula provide the student with a set of units, each unit consisting of a set of linguistic elements. These elements are often not logically arranged and, more importantly, they neglect essential points and highlight other, less important ones. Moreover, the educational curricula entail a great deal of monotony in the presentation of content, which makes it hard for the teacher to select adequate content; teachers often navigate among diverse references to prepare a lesson and hardly find a suitable one. Similarly, students often get bored when learning the Arabic language and fail to make considerable progress in it. Therefore, the problem is not a lack of curricula; the problem is developing the curriculum, with all its linguistic and non-linguistic elements, in accordance with contemporary challenges and standards for teaching foreign languages. The Arabic library suffers from a lack of references for curriculum development. In this paper, the researcher investigates the elements of development, such as the teacher, content, methods, objectives, evaluation, and activities. Hence, a set of general guidelines in the field of educational development was reached. The paper highlights the need to identify weaknesses in educational curricula, to determine the twenty-first-century skills that must be employed in Arabic education curricula, and to employ foreign language teaching standards in current Arabic curricula. The researcher assumes that the existing series for teaching Arabic to speakers of other languages in regular schools do not address twenty-first-century skills, which is what the researcher tries to apply in the proposed unit. The study follows the experimental method, based on two groups: experimental and control. The development of an educational unit will help build suitable educational series for students of the Arabic language in regular schools, in which twenty-first-century skills and standards for teaching foreign languages are addressed and which are more useful and attractive to students.

Keywords: curriculum, development, Arabic language, non-native, skills

Procedia PDF Downloads 48
261 Framework Proposal on How to Use Game-Based Learning, Collaboration and Design Challenges to Teach Mechatronics

Authors: Michael Wendland

Abstract:

This paper presents a framework for teaching a methodical design approach with the help of a mixture of game-based learning, design challenges, and competitions as forms of direct assessment. In today's world, developing products is more complex than ever. Conflicting goals of product cost and quality under limited time, as well as post-pandemic part shortages, increase the difficulty. Common design approaches for mechatronic products mitigate some of these effects by supporting users with a methodical framework. Due to the inherent complexity of these products, the number of involved resources, and the comprehensive design processes, students very rarely have enough time or motivation to experience a complete approach in a one-semester course. However, for students to be successful in the industrial world, it is crucial to know these methodical frameworks and to gain first-hand experience. Therefore, it is necessary to teach these design approaches in a real-world setting, keep motivation high, and let students learn to manage the problems that arise. This is achieved by using a game-based approach and a set of design challenges given to the students. In order to mimic industrial collaboration, they work in teams of up to six participants and are given the main development target of designing a remote-controlled robot that can manipulate a specified object. By setting this clear goal without a given solution path, a constricted time frame, and a limited maximum cost, the students are subjected to boundary conditions similar to those in the real world. They must follow the steps of the methodical approach by specifying requirements, conceptualizing their ideas, drafting, designing, manufacturing, and building a prototype using rapid prototyping. At the end of the course, the prototypes are entered into a contest against the other teams. The complete design process is accompanied by theoretical input via lectures, which is immediately transferred by the students to their own design problem in practical sessions. To increase motivation in these sessions, a playful learning approach has been chosen, i.e., designing the first concepts is supported by Lego construction kits. After each challenge, mandatory online quizzes help to deepen the students' acquired knowledge, and badges are awarded to those who complete a quiz, resulting in higher motivation and a level-up on a fictional leaderboard. The final contest is held in person and involves all teams with their functional prototypes, which now compete against each other. Prizes are awarded for the best mechanical design, the most innovative approach, and the winner of the robotic contest. Each robot design is evaluated with regard to the specified requirements, and partial grades are derived from the results. This paper concludes with a critical review of the proposed framework, the game-based approach for the designed prototypes, the realism of the boundary conditions, the problems that occurred during the design and manufacturing process, the experiences and feedback of the students, and the effectiveness of their collaboration, as well as a discussion of the potential transfer to other educational areas.

Keywords: design challenges, game-based learning, playful learning, methodical framework, mechatronics, student assessment, constructive alignment

Procedia PDF Downloads 49
260 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts

Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig

Abstract:

This study focuses on the evaluation of snow avalanche simulations, based on a survey carried out among avalanche experts. In the last decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety regulations such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazards. That places new demands on model accuracy and requires the simulation of physical processes that previously could be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing certain quantities of the avalanche flow (e.g., pressure, velocities, flow heights, runout lengths) to be predicted. Because of the highly variable regimes of flowing snow, no uniform rheological law describing the motion of an avalanche is known. Therefore, analogies to the fluid-dynamical laws of other materials are invoked. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be imposed. Besides these limitations, there are high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations into an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility. Hence, model development is compelled to introduce further simplifications and the related uncertainties. In light of these issues, many questions arise about avalanche simulations: their assets and drawbacks, their potential for improvement, and their application in practice. To address these questions, a survey among experts in the field of avalanche science (e.g., researchers, practitioners, engineers) from various countries has been conducted. In the questionnaire, special attention is paid to the experts' opinions regarding the influence of certain variables on the simulation result, their uncertainty, and the reliability of the results. Furthermore, it was tested to what degree a simulation result influences decision-making in hazard assessment. A discrepancy was found between the large uncertainty of the simulation input parameters and the relatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thorough simulation study in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as chronicles, field observations, and silent witnesses, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on the modeling practice could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.
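As one illustration of the kind of constitutive analogy mentioned above, many dense-flow avalanche models use the Voellmy friction relation, in which the basal shear resistance S combines a Coulomb term and a velocity-dependent turbulent term; this is only one of several rheologies used in practice and is not necessarily the law implemented by the surveyed tools.

```latex
S = \mu N + \frac{\rho\, g\, u^{2}}{\xi}
```

Here N is the normal (overburden) pressure, u the flow velocity, ρ the flow density, g gravity, μ the dry-friction coefficient, and ξ the turbulent friction coefficient; μ and ξ are exactly the kind of uncertain input parameters whose influence the questionnaire asked about.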

Keywords: expert interview, hazard management, modeling, simulation, snow avalanche

Procedia PDF Downloads 299
259 An Analysis of Younger Consumers’ Perceptions, Purchasing Decisions, and Pro-Environmental Behavior: A Market Experiment on Green Advertising

Authors: Mokhlisur Rahman

Abstract:

Consumers have developed a sense of responsibility in the past decade, reflecting on their purchasing behavior after viewing an advertisement. Consumers tend to buy products that project an ideal image, enabling them to be judged favorably by their close network. In such value considerations, any information that feeds consumers' desire for social status helps, and this becomes capital that manufacturing companies can use to educate consumers on the importance of purchasing green products. Companies' efforts to manufacture green products and achieve high conversion demand a good deal of promotion, with quality information and engaging presentation. Additionally, converting people from traditional to eco-friendly products requires innovative alternatives to replace the existing products. Considering consumers' understanding of products and their purchasing behavior, it becomes essential for brands to know the extent of consumers' awareness of the ecosystem in order to make them more responsive to green products. Brand image also plays a vital role in consumers' perception of the credibility of claims about a product. Brand image is a significant positive influence on the younger generation, and younger generations tend to engage more in pro-environmental behavior, including purchasing sustainable products. For example, Adidas senses the necessity of satisfying consumers with products that bring more profit and serve the planet. Several of its eco-friendly products are already on the market; one is the UltraBOOST DNA Parley, made from 3D-printed recycled ocean waste. With a strong brand image, Adidas has leveraged interest among the younger generation by incorporating sustainability into its advertising. Influential brands' efforts in the sustainability revolution, through engaging advertisements that educate consumers about the reasons behind launching a product, therefore make it more prominent. This study investigates younger consumers' attitudes toward sustainability, brand recognition, exposure to green advertising, willingness to receive more green advertising, purchasing of green products, and motivation. The study conducts a market experiment by creating two video advertisements: a sustainable-product video advertisement and a non-sustainable-product video advertisement. Both videos have a similar content design and the same length of 2 minutes, but the messages differ for the identical product type, college bags. The first video advertisement promotes eco-friendly college bags made from biodegradable raw materials, and the second promotes non-sustainable college bags made from plastics. After viewing the videos, consumers make purchasing decisions and complete an online survey that collects their attitudes toward sustainable products. The study finds that a sense of responsibility for climate change issues matters to consumers; it also empowers people to take a step, even a small one, and increases environmental awareness. This study provides companies with knowledge for participating in sustainable product launches by collecting consumers' perceptions of and attitudes toward green products. It also shows how important building a brand's image is for the younger generation.

Keywords: brand-image, environment, green-advertising, sustainability, younger-consumer

Procedia PDF Downloads 46
258 Promoting Compassionate Communication in a Multidisciplinary Fellowship: Results from a Pilot Evaluation

Authors: Evonne Kaplan-Liss, Val Lantz-Gefroh

Abstract:

Arts and humanities are often incorporated into medical education to help deepen understanding of the human condition and the ability to communicate from a place of compassion. However, a gap remains in our knowledge of compassionate communication training for postgraduate medical professionals (as opposed to students and residents); how training opportunities include and impact the artists themselves, and how train-the-trainer models can support learners to become teachers. In this report, the authors present results from a pilot evaluation of the UC San Diego Health: Sanford Compassionate Communication Fellowship, a 60-hour experiential program that uses theater, narrative reflection, poetry, literature, and journalism techniques to train a multidisciplinary cohort of medical professionals and artists in compassionate communication. In the culminating project, fellows design and implement their own projects as teachers of compassionate communication in their respective workplaces. Qualitative methods, including field notes and 30-minute Zoom interviews with each fellow, were used to evaluate the impact of the fellowship. The cohort included both artists (n=2) and physicians representing a range of specialties (n=7), such as occupational medicine, palliative care, and pediatrics. The authors coded the data using thematic analysis for evidence of how the multidisciplinary nature of the fellowship impacted the fellows’ experiences. The findings show that the multidisciplinary cohort contributed to a greater appreciation of compassionate communication in general. Fellows expressed that the ability to witness how those in different fields approached compassionate communication enhanced their learning and helped them see how compassion can be expressed in various contexts, which was both “exhilarating” and “humbling.” One physician expressed that the fellowship has been “really helpful to broaden my perspective on the value of good communication.” Fellows shared how what they learned in the fellowship translated to increased compassionate communication, not only in their professional roles but in their personal lives as well. A second finding was the development of a supportive community. Because each fellow brought their own experiences and expertise, there was a sense of genuine ability to contribute as well as a desire to learn from others. A “brave space” was created by the fellowship facilitators and the inclusion of arts-based activities: a space that invited vulnerability and welcomed fellows to make their own meaning without prescribing any one answer or right way to approach compassionate communication. This brave space contributed to a strong connection among the fellows and reports of increased well-being, as well as multiple collaborations post-fellowship to carry forward compassionate communication training at their places of work. Results show initial evidence of the value of a multidisciplinary fellowship for promoting compassionate communication for both artists and physicians. The next steps include maintaining the supportive fellowship community and collaborations with a post-fellowship affiliate faculty program; scaling up the fellowship with non-physicians (e.g., nurses and physician assistants); and collecting data from family members, colleagues, and patients to understand how the fellowship may be creating a ripple effect outside of the fellowship through fellows’ compassionate communication.

Keywords: compassionate communication, communication in healthcare, multidisciplinary learning, arts in medicine

Procedia PDF Downloads 42
257 Integrative Omics-Portrayal Disentangles Molecular Heterogeneity and Progression Mechanisms of Cancer

Authors: Binder Hans

Abstract:

Cancer is no longer seen as solely a genetic disease in which genetic defects such as mutations and copy number variations affect gene regulation and eventually lead to aberrant cell functioning, which can be monitored by transcriptome analysis. It has become obvious that epigenetic alterations represent a further important layer of (de-)regulation of gene activity. For example, aberrant DNA methylation is a hallmark of many cancer types, and methylation patterns have been used successfully to subtype cancer heterogeneity. Hence, unraveling the interplay between different omics levels such as the genome, transcriptome, and epigenome is inevitable for a mechanistic understanding of the molecular deregulation causing complex diseases such as cancer. This objective requires powerful downstream integrative bioinformatics methods as an essential prerequisite to chart the whole-genome mutational, transcriptome, and epigenome landscapes of cancer specimens and to discover cancer genesis, progression, and heterogeneity. Basic challenges and tasks arise 'beyond sequencing' because of the size of the data, their complexity, the need to search for hidden structures in the data, the need for knowledge mining to discover biological function, and the need for systems biology conceptual models to deduce developmental interrelations between different cancer states. These tasks are tightly related to cancer biology as an (epi-)genetic disease giving rise to aberrant genomic regulation under micro-environmental control and clonal evolution, which leads to heterogeneous cellular states. Machine learning algorithms such as self-organizing maps (SOMs) represent one interesting option to tackle these bioinformatics tasks. The SOM method enables recognizing complex patterns in large-scale data generated by high-throughput omics technologies. It portrays molecular phenotypes by generating individualized, easy-to-interpret images of the data landscape in combination with comprehensive analysis options. Our image-based, reductionist machine learning methods provide one interesting perspective on how to deal with massive data in the study of complex diseases such as gliomas, melanomas, and colon cancer at the molecular level. As an important new challenge, we address the combined portrayal of different omics data, such as genome-wide genomic, transcriptomic, and methylomic data. The integrative-omics portrayal approach is based on the joint training of the data, and it provides separate personalized data portraits for each patient and data type, which can be analyzed by visual inspection as one option. The new method enables an integrative genome-wide view of the omics data types and the underlying regulatory modes. It is applied to high- and low-grade gliomas and to melanomas, where it disentangles transversal and longitudinal molecular heterogeneity in terms of distinct molecular subtypes and progression paths with prognostic impact.
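A compact, self-contained sketch of the SOM training loop referred to above is shown here. It is a generic NumPy illustration on synthetic data, not the authors' portrayal pipeline; the grid size, learning-rate schedule, and neighbourhood radius are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 20))        # e.g. 500 samples x 20 omics features (synthetic)

rows, cols, dim, n_iter = 10, 10, data.shape[1], 2000
weights = rng.normal(size=(rows, cols, dim))                     # one prototype per map node
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

for t in range(n_iter):
    lr = 0.5 * (1 - t / n_iter)                                  # decaying learning rate
    sigma = max(0.5, 3.0 * (1 - t / n_iter))                     # shrinking neighbourhood radius
    x = data[rng.integers(len(data))]
    dist = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dist), dist.shape)          # best-matching unit
    h = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)                 # pull neighbourhood toward sample

# "portray" a sample as the grid coordinates of its best-matching unit
bmu0 = np.unravel_index(np.argmin(np.linalg.norm(weights - data[0], axis=-1)), (rows, cols))
print(bmu0)
```

In the portrayal idea, each sample is mapped onto the trained grid and the per-node activations form the individualized image that can then be inspected or clustered.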

Keywords: integrative bioinformatics, machine learning, molecular mechanisms of cancer, gliomas and melanomas

Procedia PDF Downloads 122
256 Predicting Food Waste and Losses Reduction for Fresh Products in Modified Atmosphere Packaging

Authors: Matar Celine, Gaucel Sebastien, Gontard Nathalie, Guilbert Stephane, Guillard Valerie

Abstract:

To increase the very short shelf life of fresh fruits and vegetables, Modified Atmosphere Packaging (MAP) allows an optimal atmosphere composition to be maintained around the product and thus prevents its decay. This technology relies on the modification of the internal packaging atmosphere arising from the equilibrium between the production/consumption of gases by the respiring product and gas permeation through the packaging material. While, to the best of our knowledge, the benefit of MAP for fresh fruits and vegetables has been widely demonstrated in the literature, its effect on shelf life increase has never been quantified and formalized in a clear and simple manner, making it difficult to anticipate its economic and environmental benefit, notably through the decrease of food losses. Mathematical modelling of mass transfers in the food/packaging system is the basis for a better design and dimensioning of the food packaging system. But up to now, existing models did not permit estimating food quality or the shelf life gain reached by using MAP. However, shelf life prediction is an indispensable prerequisite for quantifying the effect of MAP on food losses reduction. The objective of this work is to propose an innovative approach to predict the shelf life of MAP food products and then to link it to a reduction of food losses and wastes. To this end, a 'Virtual MAP modeling tool' was developed by coupling a new predictive deterioration model (based on visual surface prediction of deterioration, encompassing colour, texture and spoilage development) with models from the literature for respiration and permeation. A major input of this modelling tool is the maximal percentage of deterioration (MAD), which was assessed from dedicated consumer studies. Strawberries of the variety Charlotte were selected as the model food for their high perishability and high respiration rate (50-100 ml CO₂/h/kg produced at 20°C), making them a good representative of challenging post-harvest storage. A value of 13% was determined as the limit of acceptability for consumers, permitting the products' shelf life to be defined. The 'Virtual MAP modeling tool' was validated in isothermal conditions (5, 10 and 20°C) and in dynamic temperature conditions mimicking commercial post-harvest storage of strawberries. RMSE values were systematically lower than 3% for the O₂, CO₂ and deterioration profiles as a function of time, respectively, confirming the goodness of the model fit. For the investigated temperature profile, a shelf life gain of 0.33 days was obtained in MAP compared to the conventional storage situation (no MAP). A shelf life gain of more than 1 day could be obtained for optimized post-harvest conditions, as numerically investigated. Such a shelf life gain permits anticipating a significant reduction of food losses at the distribution and consumer steps. This reduction of food losses as a function of shelf life gain has been quantified using a dedicated mathematical equation developed for this purpose.
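The respiration/permeation balance that underlies such a tool can be written as a small ODE system for the headspace partial pressures. The sketch below is a generic headspace model with made-up permeability, respiration, and geometry values, not the calibrated parameters of the 'Virtual MAP modeling tool'.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters (illustrative only, not the calibrated tool values)
R, T = 8.314, 293.15             # J/(mol K), K
V = 1.0e-3                       # headspace volume, m^3
A, e = 0.05, 40e-6               # film area (m^2) and thickness (m)
P_O2, P_CO2 = 4e-16, 1.6e-15     # film permeabilities, mol.m/(m^2.s.Pa)
m = 0.25                         # product mass, kg
Vm, Km, RQ = 3e-7, 5000.0, 1.0   # Michaelis-Menten respiration (mol/kg/s, Pa) and resp. quotient
p_ext = np.array([21278.0, 40.0])   # external O2 and CO2 partial pressures, Pa

def headspace(t, p):
    pO2, pCO2 = p
    r_O2 = Vm * pO2 / (Km + pO2)                  # O2 consumption by the respiring product
    flux_O2 = P_O2 * A / e * (p_ext[0] - pO2)     # permeation through the film
    flux_CO2 = P_CO2 * A / e * (p_ext[1] - pCO2)
    return R * T / V * np.array([flux_O2 - m * r_O2,
                                 flux_CO2 + RQ * m * r_O2])

t_end = 5 * 24 * 3600
sol = solve_ivp(headspace, (0, t_end), p_ext, t_eval=np.linspace(0, t_end, 50))
print(sol.y[:, -1] / 101325 * 100)   # approximate O2 / CO2 headspace fractions (%) after 5 days
```

Coupling such gas profiles to a deterioration model (here not shown) is what allows the time at which the maximal acceptable deterioration is reached, and hence the shelf life, to be predicted.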

Keywords: food losses and wastes, modified atmosphere packaging, mathematical modeling, shelf life prediction

Procedia PDF Downloads 158
255 Revolutionizing Oil Palm Replanting: Geospatial Terrace Design for High-precision Ground Implementation Compared to Conventional Methods

Authors: Nursuhaili Najwa Masrol, Nur Hafizah Mohammed, Nur Nadhirah Rusyda Rosnan, Vijaya Subramaniam, Sim Choon Cheak

Abstract:

Replanting in oil palm cultivation is vital to enable the introduction of planting materials and provides an opportunity to improve road, drainage, and terrace design as well as planting density. Oil palm replanting is fundamentally necessary every 25 years. The adoption of a digital replanting blueprint is imperative, as it can assist the Malaysian oil palm industry in addressing challenges such as labour shortages and limited expertise related to replanting tasks. Effective replanting planning should commence at least 6 months prior to the actual replanting process. Therefore, this study helps to plan and design the replanting blueprint with high-precision translation to the ground. With the advancement of geospatial technology, it is now feasible to engage in thoroughly researched planning, which can help maximize the potential yield. A blueprint designed before replanting enhances management's ability to optimize the planting program, address manpower issues, and even increase productivity. In terrace planting blueprints, geographic tools have been utilized to design the roads, drainages, terraces, and planting points based on the ARM standards. These designs are mapped with location information and undergo statistical analysis. The geospatial approach is essential in precision agriculture and for ensuring an accurate translation of the design to the ground through high-accuracy technologies. In this study, geospatial and remote sensing technologies played a vital role. LiDAR data was employed to derive the Digital Elevation Model (DEM), enabling the precise selection of terraces, while ortho imagery was used for validation purposes. Throughout the design process, Geographical Information System (GIS) tools were extensively utilized. To assess the design's reliability on the ground compared with the current conventional method, high-precision GPS instruments such as the EOS Arrow Gold and HIPER VR GNSS were used, both offering accuracy levels between 0.3 cm and 0.5 cm. A nearest-distance analysis was performed to compare the design with the actual planting on the ground. The analysis could not be applied to the roads due to discrepancies between the actual roads and the blueprint design, which resulted in minimal variance. In contrast, the terraces closely adhered to the GPS markings, with the largest deviation being less than 0.5 meters from the terraces actually constructed. Considering the slope required for terrace planting, which must be greater than 6 degrees, the study found that approximately 65% of the terracing was constructed on a 12-degree slope, while over 50% of the terracing was constructed on slopes exceeding the minimum. Utilizing blueprint replanting offers promising strategies for optimizing land utilization in agriculture. This approach harnesses technology and meticulous planning to yield advantages, including increased efficiency, enhanced sustainability, and cost reduction. Practical implementation of this technique can lead to tangible and significant improvements in the agricultural sector. To boost efficiency further, future initiatives will require more sophisticated techniques and the incorporation of precision GPS devices in upcoming blueprint replanting projects, alongside strategic progression aimed at guaranteeing the precision of both the blueprint design stage and its subsequent implementation in the field.
Looking ahead, automating digital blueprints is necessary to reduce time, workforce, and costs in commercial production.
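The two geoprocessing steps highlighted above, deriving slope from the LiDAR DEM and measuring nearest distances between designed and as-built terrace points, can be sketched as follows. The DEM array and coordinates are synthetic placeholders; the actual blueprint workflow relies on GIS tooling and survey-grade GNSS data.

```python
import numpy as np
from scipy.spatial import cKDTree

def slope_degrees(dem, cell_size):
    """Slope (degrees) of a DEM grid from finite-difference gradients."""
    dzdy, dzdx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# synthetic 1 m DEM: a uniform hillside, elevation in metres (~11 degree slope)
x = np.arange(0, 100.0)
dem = np.tile(0.2 * x, (100, 1))
slope = slope_degrees(dem, cell_size=1.0)
print("share of cells above 6 degrees:", np.mean(slope > 6.0))

# nearest-distance analysis: designed terrace points vs. as-built GPS points (metres)
designed = np.array([[10.0, 5.0], [20.0, 5.5], [30.0, 6.0]])
as_built = np.array([[10.3, 5.1], [19.8, 5.4], [30.4, 6.3]])
dist, _ = cKDTree(designed).query(as_built)
print("max deviation (m):", dist.max())
```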

Keywords: replanting, geospatial, precision agriculture, blueprint

Procedia PDF Downloads 53
254 Characterization of Bio-Inspired Thermoelastoplastic Composites Filled with Modified Cellulose Fibers

Authors: S. Cichosz, A. Masek

Abstract:

A new cellulose hybrid modification approach, which is a scientific novelty, is introduced. The study reports the properties of cellulose (Arbocel UFC100 – Ultra Fine Cellulose) and characterizes cellulose-filled polymer composites based on an ethylene-norbornene copolymer (TOPAS Elastomer E-140). Moreover, a two-stage physicochemical cellulose treatment is introduced: solvent exchange (to ethanol or hexane) followed by chemical modification with maleic anhydride (MA). Furthermore, the impact of the drying process on cellulose properties was investigated. Suitable measurements were carried out to characterize the cellulose fibers: spectroscopic investigation (Fourier transform infrared spectroscopy, FTIR; near-infrared spectroscopy, NIR), thermal analysis (differential scanning calorimetry, DSC; thermogravimetric analysis, TGA), and Karl Fischer titration. It should be emphasized that for all UFC100 treatments carried out, a decrease in moisture content was evidenced. FTIR reveals a drop in the intensity of the absorption band at 3334 cm⁻¹; this peak is associated with both –OH moieties and water. Similar results were obtained with Karl Fischer titration. Based on the results obtained, it may be claimed that the employment of ethanol contributes greatly to lowering the water absorption ability of cellulose (a decrease in moisture content to approximately 1.65%). Additionally, regarding polymer composite properties, crucial data have been obtained from mechanical and thermal analyses. The highest material performance was noted for the composite sample containing cellulose modified with MA after a solvent exchange with ethanol. This specimen exhibited sufficient tensile strength, almost the same as that of the neat polymer matrix, in the region of 40 MPa. Moreover, both the Payne effect and the filler efficiency factor, calculated based on dynamic mechanical analysis (DMA), reveal the possibility of the filler having a reinforcing nature. Interestingly, according to the Payne effect results, fibers dried before the chemical modification appear to allow more regular filler structure development in the polymer matrix (Payne effect maximum at 1.60 MPa) compared with those not dried (Payne effect in the range 0.84-1.26 MPa). Furthermore, taking into consideration the data gathered from DSC and TGA, higher thermal stability is obtained for the materials filled with fibers that were dried before the treatments (degradation activation energy in the region of 195 kJ/mol) than for the polymer composite samples filled with unmodified cellulose (degradation activation energy of approximately 180 kJ/mol). To the authors' best knowledge, this work introduces a novel hybrid filler treatment approach. Moreover, valuable data regarding the properties of composites filled with cellulose fibers of various moisture contents have been provided. It should be emphasized that the plant-fiber-based polymer biomaterials described in this research might contribute significantly to polymer waste minimization because they are more readily degraded.
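One common way to obtain a degradation activation energy of the kind quoted above is the Kissinger method, which fits ln(β/Tp²) against 1/Tp across TGA runs at several heating rates β with peak-decomposition temperatures Tp. The abstract does not state which method was used, so both the method choice and the numbers below are illustrative assumptions only.

```python
import numpy as np

# Hypothetical multi-heating-rate TGA data: heating rates (K/min) and peak temperatures (K)
beta = np.array([5.0, 10.0, 20.0, 40.0])
Tp = np.array([618.0, 630.0, 643.0, 657.0])

R = 8.314                                 # gas constant, J/(mol K)
y = np.log(beta / Tp**2)                  # Kissinger ordinate: ln(beta / Tp^2)
x = 1.0 / Tp
slope, intercept = np.polyfit(x, y, 1)    # slope = -Ea / R
Ea = -slope * R / 1000.0                  # apparent activation energy, kJ/mol
print(f"apparent activation energy: {Ea:.0f} kJ/mol")
```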

Keywords: cellulose fibers, solvent exchange, moisture content, ethylene-norbornene copolymer

Procedia PDF Downloads 97
253 Information and Communication Technology Skills of Finnish Students in Particular by Gender

Authors: Antero J. S. Kivinen, Suvi-Sadetta Kaarakainen

Abstract:

Digitalization touches every aspect of contemporary society, changing the way we live our everyday lives. Contemporary society is sometimes described as a knowledge society, marked by the unprecedented amount of information people face daily. The tools to manage this information flow are ICT skills, which comprise both technical skills and the reflective skills needed to manage incoming information. Therefore, schools are under constant pressure to revise their practices. In the latest Programme for International Student Assessment (PISA), girls have been outperforming boys in all Organisation for Economic Co-operation and Development (OECD) member countries, and the gender gap is widest in Finland. This paper presents results of the Comprehensive Schools in the Digital Age project of RUSE, University of Turku. The project is connected with the Finnish Government's Analysis, Assessment and Research Activities. First, this paper examines gender differences in the ICT skills of Finnish upper comprehensive school students. Second, it explores how these differences change when students proceed to upper secondary and vocational education. ICT skills are measured using a performance-based ICT skill test. Data are collected in three phases: January-March 2017 (upper comprehensive schools, n=5455), September-December 2017 (upper secondary and vocational schools, n≈3500), and January-March 2018 (upper comprehensive schools). Upper comprehensive school students are 15-16 years old, and upper secondary and vocational school students are 16-18. The test is divided into six categories: basic operations, productivity software, social networking and communication, content creation and publishing, applications, and requirements for ICT study programs. Students also completed a survey about their ICT usage and the study materials they use at school and at home. Cronbach's alpha was used to estimate the reliability of the ICT skill test. Statistical differences between genders were examined using a two-tailed independent-samples t-test. Results from the first data collection in upper comprehensive schools show no statistically significant difference in total ICT skill test scores between genders (boys 10.24 and girls 10.64, with a maximum of 36). Although there is no gender difference in total test scores, there are differences in the six categories mentioned above. Girls score better on school-related and social networking test subjects, while boys perform better on more technically oriented subjects. Test scores on basic operations are quite low for both groups. This can perhaps be partly explained by the fact that the test was taken on computers, while the majority of students' ICT usage involves smartphones and tablets. Against this background, it is important to analyze the reasons for these differences further. In the context of the ongoing digitalization of everyday life, and especially working life, the main purpose of this analysis is to find out how to guarantee adequate ICT skills for all students.
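The two statistics mentioned, Cronbach's alpha for the reliability of the skill test and a two-tailed independent-samples t-test for gender differences, can be computed as in the short sketch below; the score matrix is randomly generated, not the project data.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items matrix of scores."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=(200, 1))                              # latent skill (synthetic)
scores = (ability + rng.normal(size=(200, 36)) > 0).astype(float)  # 200 students, 36 binary items
print("alpha:", round(cronbach_alpha(scores), 2))

girls, boys = scores[:100].sum(axis=1), scores[100:].sum(axis=1)   # total test scores per group
t, p = stats.ttest_ind(girls, boys)                                # two-tailed by default
print("t =", round(t, 2), "p =", round(p, 3))
```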

Keywords: basic education, digitalization, gender differences, ICT-skills, upper comprehensive education, upper secondary education, vocational education

Procedia PDF Downloads 110
252 Deterioration Prediction of Pavement Load Bearing Capacity from FWD Data

Authors: Kotaro Sasai, Daijiro Mizutani, Kiyoyuki Kaito

Abstract:

Expressways in Japan have been built in an accelerating manner since the 1960s with the aid of rapid economic growth. About 40 percent of the length of expressways in Japan is now 30 years old or older and has become superannuated. Time-related deterioration has therefore reached a degree at which administrators, from the standpoint of operation and maintenance, are forced to take prompt, large-scale measures aimed at repairing inner damage deep in pavements. Such measures have already been implemented for bridge management in Japan and are also expected to be adopted for pavement management. Thus, planning methods for these measures are increasingly in demand. Deterioration of the layers near the road surface, such as the surface course and binder course, occurs at the early stages of the overall pavement deterioration process, around 10 to 30 years after construction. These layers have been repaired first because inner damage usually becomes significant after outer damage, and because surveys for measuring inner damage, such as Falling Weight Deflectometer (FWD) surveys and open-cut surveys, are costly and time-consuming processes, which has made it difficult for administrators to focus on inner damage as much as they should. As expressways today suffer serious time-related deterioration deriving from the long time span since they entered service, it is obvious that repairing layers deep in pavements, such as the base course and subgrade, must be taken into consideration when planning maintenance on a large scale. This sort of maintenance requires precisely predicting degrees of deterioration as well as grasping the present condition of pavements. Methods for predicting deterioration are either mechanical or statistical. While few mechanical models have been presented, as far as the authors know, previous studies have presented statistical methods for predicting pavement deterioration. One describes the deterioration process by estimating a Markov deterioration hazard model, while another illustrates it by estimating a proportional deterioration hazard model. Both studies analyze deflection data obtained from FWD surveys and present statistical methods for predicting the deterioration process of the layers near the road surface. However, the base course and subgrade layers remain unanalyzed. In this study, data collected from FWD surveys are analyzed to predict the deterioration process of layers deep in pavements, in addition to the surface layers, by estimating a deterioration hazard model using continuous indexes. This model avoids the loss of information that occurs when rating categories are set in a Markov deterioration hazard model for evaluating degrees of deterioration in roadbeds and subgrades. By using continuous indexes, the model can predict deterioration in each pavement layer and evaluate it quantitatively. Additionally, as the model can also depict the probability distribution of the indexes at an arbitrary point in time and establish an arbitrary risk control level, this study is expected to provide knowledge, such as life cycle cost, that informs decision-making about where and when to perform maintenance.
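As a rough illustration of a hazard-based formulation (not the authors' exact model), the sketch below fits a simple exponential hazard by maximum likelihood to censored observations of the time until a deflection index exceeds a threshold, with the hazard rate depending on a covariate such as traffic load. The data, covariate, and coefficients are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n = 300
traffic = rng.uniform(0, 1, n)                     # hypothetical covariate (scaled traffic load)
true_rate = np.exp(-3.0 + 1.5 * traffic)           # deterioration hazard per year
time = rng.exponential(1.0 / true_rate)            # years until the index exceeds a threshold
observed = time < 30.0                             # observations censored at 30 years
time = np.minimum(time, 30.0)

def neg_log_likelihood(beta):
    rate = np.exp(beta[0] + beta[1] * traffic)
    # exponential hazard: observed events contribute log(rate); all spells contribute -rate*time
    return -np.sum(observed * np.log(rate) - rate * time)

fit = minimize(neg_log_likelihood, x0=np.zeros(2))
print("estimated coefficients:", fit.x)            # should be close to (-3.0, 1.5)
```

A continuous-index hazard model of the kind described in the abstract replaces the threshold-crossing event with the evolution of the deflection index itself, which avoids discretizing the condition into rating categories.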

Keywords: deterioration hazard model, falling weight deflectometer, inner damage, load bearing capacity, pavement

Procedia PDF Downloads 357