Search results for: Lea B. Milan
32 Plasma Treatment of a Lignite Using Water-Stabilized Plasma Torch at Atmospheric Pressure
Authors: Anton Serov, Alan Maslani, Michal Hlina, Vladimir Kopecky, Milan Hrabovsky
Abstract:
Recycling of organic waste has become an increasingly important topic in recent years. The issue becomes even more interesting when the raw material for fuel production can be obtained as a result of that recycling. A process of high-temperature decomposition of lignite (a non-hydrolysable complex organic compound) was studied in the plasma gasification reactor PLASGAS, where a water-stabilized plasma torch was used as a source of high-enthalpy plasma. The plasma torch power was 120 kW, which allowed heating of the reactor to more than 1000 °C. The material feeding rate into the gasification reactor was set to 30 and 60 kg per hour, comparable with small-scale industrial production. The efficiency of the thermal decomposition process was estimated. The balance of the torch energy distribution was studied, as well as the influence of the lignite particle size and the addition of methane (CH4) to the reaction volume on the syngas composition (H2+CO). It was found that the H2:CO ratio ranged from 1.5 to 2.5, depending on the experimental conditions. The recycling process occurred at atmospheric pressure, an important benefit because it avoids expensive vacuum pump systems. The work was supported by the Grant Agency of the Czech Republic under project GA15-19444S.
Keywords: atmospheric pressure, lignite, plasma treatment, water-stabilized plasma torch
Procedia PDF Downloads 371
31 Evaluation of Food Safety Management in Central Elementary School Canteens in Tuguegarao City, Philippines
Authors: Lea B. Milan
Abstract:
This descriptive study evaluated the existing food safety management in central elementary school canteens of Region 3. It made use of survey questionnaires, interview guides, and a validated knowledge test on food for data gathering. Results revealed that school principals and canteen managers shared responsibility for the food safety management of the school canteen. The schools applied different methods of communication, monitoring, and evaluation of food safety management. The study further revealed that monitoring and evaluation of food safety compliance are not practiced in all elementary schools in the region. School canteens in Region 3 do not have the thermometers and timers needed to properly monitor foods during storage, preparation, and serving. It was also found that canteen personnel lack basic knowledge of and training in food safety. Potential sources of physical, chemical, and biological hazards that could contaminate food were also present in the canteen facilities of the elementary schools in the region. Moreover, the evaluation showed that the existing implementation of food safety management in the central elementary school canteens of Region 3 was below the expected level, and that appreciation and advocacy of food safety management in school canteens of Region 3 still need to be strengthened.
Keywords: food safety management, food safety school catering, food safety, school food safety management
Procedia PDF Downloads 375
30 Influence of La³⁺ on Structural, Magnetic, Optical and Dielectric Properties in CoFe₂O₄ Nanoparticles Synthesized by Starch-Assisted Sol-Gel Combustion Method
Authors: Raghvendra Singh Yadav, Ivo Kuřitka, Jarmila Vilcakova, Pavel Urbánek, Michal Machovsky, Milan Masař, Martin Holek
Abstract:
Herein, we report the influence of La³⁺ substitution on the structural, magnetic, and dielectric properties of CoFe₂O₄ nanoparticles synthesized by a starch-assisted sol-gel combustion method. The X-ray diffraction pattern confirmed the formation of the cubic spinel structure of the La³⁺-doped CoFe₂O₄ nanoparticles. Raman and Fourier transform infrared spectroscopy also confirmed the cubic spinel structure of the La³⁺-substituted CoFe₂O₄ nanoparticles. Field emission scanning electron microscopy revealed that the La³⁺-substituted CoFe₂O₄ nanoparticles were in the range of 10-40 nm. The magnetic properties of the La³⁺-substituted CoFe₂O₄ nanoparticles were investigated using a vibrating sample magnetometer. Variation in saturation magnetization, coercivity, and remanent magnetization with La³⁺ concentration in the CoFe₂O₄ nanoparticles was observed. The variation of the real and imaginary parts of the dielectric constant, tan δ, and AC conductivity was studied as a function of La³⁺ concentration in the CoFe₂O₄ nanoparticles. The variation in optical properties was studied via UV-Vis absorption spectroscopy. Acknowledgment: This work was supported by the Ministry of Education, Youth and Sports of the Czech Republic – Program NPU I (LO1504).
Keywords: starch, sol-gel combustion method, nanoparticles, magnetic properties, dielectric properties
Procedia PDF Downloads 313
29 Safety Risks of Gaseous Toxic Compounds Released from Li Batteries
Authors: Jan Karl, Ondrej Suchy, Eliska Fiserova, Milan Ruzicka
Abstract:
The growth of electromobility and electronics also brings increased danger from used Li batteries. Li batteries are used in many industries, and many types are currently available. The batteries have different compositions that affect their behavior. In the field of Li-battery safety, some areas remain little discussed, such as extinguishing fires caused by Li batteries, the toxicity of gaseous compounds released from Li batteries, and their transport and storage. The Technical Institute of Fire Protection, which is part of the Fire Brigades of the Czech Republic, deals with the safety of Li batteries. That is why we study the toxicity of gaseous compounds released under conditions of fire, mechanical damage, overcharging, and other emergencies that may occur. This is necessary for the protection of intervening fire brigade units and people in the vicinity, and for assessing other environmental consequences. In this work, different types of batteries (Li-ion, Li-Po, LTO, LFP) with different kinds of damage were tested, and the toxicity and total amount of released gases were studied. These values were evaluated according to their environmental hazard. FTIR spectroscopy with a gas cell was used for continuous measurement of toxicity. The total amount of released gases was determined by passing the gas phase through absorbers and then determining the toxicants absorbed in the solutions. Based on the obtained results, it is possible to determine the protective equipment necessary in the event of an emergency with a Li battery, and to define the environmental load and the immediate danger in an emergency.
Keywords: Li-battery, toxicity, gaseous toxic compounds, FTIR spectroscopy
Procedia PDF Downloads 151
28 Mechanical Characterization of Porcine Skin with the Finite Element Method Based Inverse Optimization Approach
Authors: Djamel Remache, Serge Dos Santos, Michael Cliez, Michel Gratton, Patrick Chabrand, Jean-Marie Rossi, Jean-Louis Milan
Abstract:
Skin tissue is an inhomogeneous and anisotropic material. Uniaxial tensile testing is one of the primary techniques for the mechanical characterization of skin at large scales. To predict the mechanical behavior of materials, direct or inverse analytical approaches are often used. However, for an inhomogeneous and anisotropic material such as skin tissue, analytical approaches cannot provide solutions, and numerical simulation is therefore necessary. In this work, uniaxial tensile tests and an FEM (finite element method) based inverse method were used to identify the anisotropic mechanical properties of porcine skin tissue. The uniaxial tensile experiments were performed using an Instron® 8800 tensile machine. The uniaxial tensile test was simulated with FEM, and the inverse optimization approach (inverse calibration) was then used to identify the mechanical properties of the samples. Experimental results were compared to finite element solutions. The results showed that the finite element model predictions of the mechanical behavior of the tested skin samples correlated well with the experimental results.
Keywords: mechanical skin tissue behavior, uniaxial tensile test, finite element analysis, inverse optimization approach
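The inverse calibration loop described above can be sketched in miniature: a forward model predicts the force response for a trial parameter, and the parameter is adjusted until the squared error against the measured data is minimized. In the real workflow the forward model is a full FEM solve of the tensile test; the linear surrogate, the sample data, and the bisection scheme below are illustrative assumptions, not the study's actual model.

```python
# Inverse-calibration sketch: find the material stiffness k that best
# reproduces "measured" force-displacement data. The real workflow would
# replace `forward` with a full FEM solve; a linear surrogate F = k * u
# keeps the sketch runnable.

def forward(k, displacements):
    # stand-in for the FEM prediction of force at each displacement
    return [k * u for u in displacements]

def calibrate(disp, forces, k_lo=0.1, k_hi=10.0, iters=60):
    # sum of squared errors between measurement and model prediction
    def sse(k):
        return sum((f - p) ** 2 for f, p in zip(forces, forward(k, disp)))
    # bisection on the sign of the finite-difference derivative of the
    # error (the error is convex in k for this surrogate)
    for _ in range(iters):
        mid = (k_lo + k_hi) / 2
        if sse(mid - 1e-6) < sse(mid + 1e-6):
            k_hi = mid  # minimum lies to the left
        else:
            k_lo = mid  # minimum lies to the right
    return (k_lo + k_hi) / 2

disp = [0.0, 1.0, 2.0, 3.0]
meas = [0.0, 2.1, 3.9, 6.1]  # noisy hypothetical "experiment", true k near 2
print(round(calibrate(disp, meas), 2))
```

The same structure carries over when the forward model is expensive: only `forward` changes, and a derivative-free optimizer replaces the bisection when several anisotropic parameters are identified at once.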
Procedia PDF Downloads 406
27 Structural, Magnetic, Dielectric and Electrical Properties of Gd³⁺ Doped Cobalt Ferrite Nanoparticles
Authors: Raghvendra Singh Yadav, Ivo Kuřitka, Jarmila Vilcakova, Jaromir Havlica, Lukas Kalina, Pavel Urbánek, Michal Machovsky, Milan Masař, Martin Holek
Abstract:
In this work, CoFe₂₋ₓGdₓO₄ (x = 0.00, 0.05, 0.10, 0.15, 0.20) spinel ferrite nanoparticles were synthesized by a sonochemical method. The structural properties and cation distribution were investigated using X-ray diffraction (XRD), Raman spectroscopy, Fourier transform infrared spectroscopy, and X-ray photoelectron spectroscopy. The morphology and elemental composition were examined using field emission scanning electron microscopy (FE-SEM) and energy-dispersive X-ray spectroscopy, respectively. The particle sizes measured by FE-SEM and XRD analysis confirm the formation of nanoparticles in the range of 7-10 nm. The electrical measurements show that the Gd³⁺-doped cobalt ferrite (CoFe₂₋ₓGdₓO₄; x = 0.20) exhibits an enhanced dielectric constant (277 at 100 Hz) and AC conductivity (20.17 × 10⁻⁹ S/cm at 100 Hz). The complex impedance study reveals that as the Gd³⁺ doping concentration increases, the impedance components Z′ and Z″ decrease. The influence of Gd³⁺ doping on the magnetic properties of the cobalt ferrite nanoparticles was examined using a vibrating sample magnetometer. The measurements reveal that the coercivity first decreases with Gd³⁺ substitution from 234.32 Oe (x = 0.00) to 12.60 Oe (x = 0.05) and then increases from 12.60 Oe (x = 0.05) to 68.62 Oe (x = 0.20). The saturation magnetization decreases with Gd³⁺ substitution from 40.19 emu/g (x = 0.00) to 21.58 emu/g (x = 0.20). This decrease follows the three-sublattice model suggested by Yafet and Kittel (Y-K). The Y-K angle increases with increasing Gd³⁺ doping in the cobalt ferrite nanoparticles.
Keywords: sonochemical method, nanoparticles, magnetic property, dielectric property, electrical property
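The AC conductivity quoted above is conventionally derived from the dielectric data rather than measured independently: σ_ac = ε₀ ε″ ω, where ε″ is the imaginary part of the relative permittivity and ω = 2πf. A minimal sketch of that relation (the ε″ value is invented for illustration, not taken from the study):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def ac_conductivity(freq_hz, eps_imag):
    """AC conductivity from the imaginary relative permittivity:
    sigma_ac = eps0 * eps'' * omega, returned in S/m."""
    return EPS0 * eps_imag * 2 * math.pi * freq_hz

# Illustrative: eps'' = 500 at 100 Hz
sigma_m = ac_conductivity(100, 500)
print(f"{sigma_m * 1e-2:.2e} S/cm")  # convert S/m to S/cm
```

The conversion factor (1 S/m = 10⁻² S/cm) matters when comparing against values reported in S/cm, as in the abstract.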
Procedia PDF Downloads 352
26 Dynamic Stability Assessment of Different Wheel Sized Bicycles Based on Current Frame Design Practice with ISO Requirement for Bicycle Safety
Authors: Milan Paudel, Fook Fah Yap, Anil K. Bastola
Abstract:
The difficulty of riding small-wheel bicycles and their lesser stability have been perceived for a long time. Although small-wheel bicycles are designed using the same approach and guidelines that have worked well for big-wheel bicycles, the performance of the two differs markedly. Since both big-wheel and small-wheel bicycles share the same fundamental geometry, the small wheel is usually blamed for this discrepancy in performance. This paper reviews existing guidelines for bicycle design, especially the front steering geometry, and provides a systematic and quantitative analysis of bicycles with different wheel sizes. A validated mathematical model was used as a tool to assess the dynamic performance of the bicycles in terms of their self-stability. The results obtained corroborate the subjective perception of cyclists regarding small-wheel bicycles: with the current design approach, small-wheel bicycles require a higher speed to be self-stable. However, it was found that increasing the head tube angle and selecting a proper trail could improve the dynamic performance of small-wheel bicycles. A range of front steering geometry parameters has been identified for small-wheel bicycles that gives stability comparable to big-wheel bicycles. Interestingly, most of the identified geometries lie beyond the ISO recommended range and seem to run counter to the current approach to small-wheel bicycle design. It was therefore shown that the guidelines for big-wheel bicycles do not translate directly to small-wheel bicycles, but that careful selection of the front geometry can make small-wheel bicycles as stable as big-wheel ones.
Keywords: big wheel bicycle, design approach, ISO requirements, small wheel bicycle, stability and performance
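The trail mentioned above follows directly from the front steering geometry: with the head angle measured from the horizontal, trail = (R·cos(head angle) − fork offset) / sin(head angle). A minimal sketch showing why a smaller wheel loses trail at the same head angle and offset (the dimensions are typical illustrative values, not the paper's data):

```python
import math

def trail(wheel_radius_m, head_angle_deg, fork_offset_m):
    """Ground trail from the standard front-geometry relation:
    trail = (R * cos(head_angle) - offset) / sin(head_angle),
    head angle measured from the horizontal."""
    ha = math.radians(head_angle_deg)
    return (wheel_radius_m * math.cos(ha) - fork_offset_m) / math.sin(ha)

# A typical big-wheel road setup: ~0.34 m radius, 73 deg, 45 mm offset
big = trail(0.34, 73.0, 0.045)
# A small wheel with the same head angle and offset has far less trail
small = trail(0.25, 73.0, 0.045)
print(round(big * 1000), round(small * 1000))  # trail in mm
```

This is why the paper's compensations act on the head tube angle and offset: they are the only remaining levers once the wheel radius is fixed by the bicycle class.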
Procedia PDF Downloads 190
25 Dielectric, Electrical and Magnetic Properties of Elastomer Filled with in situ Thermally Reduced Graphene Oxide and Spinel Ferrite NiFe₂O₄ Nanoparticles
Authors: Raghvendra Singh Yadav, Ivo Kuritka, Jarmila Vilcakova, Pavel Urbanek, Michal Machovsky, David Skoda, Milan Masar
Abstract:
Elastomer nanocomposites were synthesized by a solution mixing method with an elastomer as the matrix and in situ thermally reduced graphene oxide (RGO) and spinel ferrite NiFe₂O₄ nanoparticles as fillers. The spinel ferrite NiFe₂O₄ nanoparticles were prepared by a starch-assisted sol-gel auto-combustion method. The influence of the fillers on the microstructure, morphology, and dielectric, electrical, and magnetic properties of the reduced graphene oxide-nickel ferrite-elastomer nanocomposite was characterized by X-ray diffraction, Raman spectroscopy, Fourier transform infrared spectroscopy, field emission scanning electron microscopy, X-ray photoelectron spectroscopy, a dielectric impedance analyzer, and a vibrating sample magnetometer. The scanning electron microscopy study revealed that the fillers were incorporated homogeneously in the elastomer matrix. The dielectric constant and dielectric loss tangent of the nanocomposites decreased with increasing frequency, whereas the dielectric constant increased with the addition of filler. Further, the AC conductivity increased with increasing frequency and filler content. Furthermore, the prepared nanocomposites exhibited ferromagnetic behavior. This work was supported by the Ministry of Education, Youth and Sports of the Czech Republic – Program NPU I (LO1504).
Keywords: polymer-matrix composites, nanoparticles as filler, dielectric property, magnetic property
Procedia PDF Downloads 168
24 Spatial Distribution and Time Series Analysis of COVID-19 Pandemic in Italy: A Geospatial Perspective
Authors: Muhammad Farhan Ul Moazzam, Tamkeen Urooj Paracha, Ghani Rahman, Byung Gul Lee, Nasir Farid, Adnan Arshad
Abstract:
The novel coronavirus disease (COVID-19) pandemic affected the whole globe, yet clinical studies and knowledge of its epidemiological features remain limited. It has been observed that most COVID-19 patients show mild to moderate symptoms and recover without medical assistance, owing to an immune system capable of generating antibodies against the novel coronavirus. In this study, active cases, serious cases, recovered cases, deaths, and total confirmed cases were analyzed using the geospatial inverse distance weighting (IDW) technique over the period from 2nd March to 3rd June 2020. As of 3rd June, Italy had 231,238 total COVID-19 cases, 33,310 deaths, 350 serious cases, 158,951 recovered cases, and 39,177 active cases, as reported by the Ministry of Health, Italy. Of the 231,238 cases reported between 2nd March and 3rd June 2020, 38.68% were in the Lombardia region, with a death rate of 18%, higher than the national mortality rate, followed by Emilia-Romagna (14.89% deaths), Piemonte (12.68% deaths), and Veneto (10% deaths). Relative to total cases per region, the highest recovery rates were observed in Umbria (92.52%), followed by Basilicata (87%), Valle d'Aosta (86.85%), and Trento (84.54%). The evolution of COVID-19 in Italy was concentrated in the major urban areas, i.e., Rome, Milan, Naples, Bologna, and Florence. Geospatial technology played a vital role in this pandemic by tracking infected patients, active cases, and recovered cases. Geospatial techniques are very important for monitoring and planning to control the spread of the pandemic in the country.
Keywords: COVID-19, public health, geospatial analysis, IDW, Italy
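Inverse distance weighting, the interpolation technique named above, estimates a value at an unsampled location as a weighted average of the sampled points, with weights decaying as a power of the distance. A minimal sketch (the coordinates and case counts are invented for illustration, not the study's data):

```python
import math

def idw(known, qx, qy, power=2):
    """Inverse-distance-weighted estimate at (qx, qy).

    known: list of (x, y, value) sample points. A sample lying exactly
    at the query point short-circuits to its own value.
    """
    num = den = 0.0
    for x, y, v in known:
        d = math.hypot(qx - x, qy - y)
        if d == 0.0:
            return v
        w = 1.0 / d ** power  # closer samples dominate the average
        num += w * v
        den += w
    return num / den

# Hypothetical case counts at three region centroids
samples = [(0.0, 0.0, 100.0), (10.0, 0.0, 50.0), (0.0, 10.0, 10.0)]
print(round(idw(samples, 1.0, 1.0), 1))
```

GIS packages expose the same operation over rasters; the `power` parameter controls how sharply influence falls off with distance, with 2 being the common default.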
Procedia PDF Downloads 151
23 The Effects of Social Media on the Dreams of Preadolescent Girls
Authors: Saveria Capecchi
Abstract:
The aim of this quali-quantitative research, conducted in the spring of 2021 (still in the midst of the Covid-19 emergency), was to analyze the relationship between the imaginary of 142 girls aged 10-12 from two cities in the Italian region of Emilia-Romagna (the capital, Bologna, and Parma) and the influence that various socialization agents can have on it, with particular attention to social media. To investigate the relationship between imagination and media, two tools were used. First, the girls wrote an essay in class about their future lives, imagining waking up one morning as thirty-year-old adults; six types of "dreams" emerged, reflecting the values and lifestyles characteristic of contemporary Italian society. Additionally, the girls completed a questionnaire on their leisure time and, in particular, their media consumption, aimed at identifying their favorite characters. The results provided insights into the reference values and lifestyles that define their subculture (they belong to the so-called Generation Z). Another interesting aspect of this research is the possibility of comparing the results with those of a similar study on the preadolescent imaginary conducted in 1995, involving 292 girls from Milan and Bologna belonging to the Millennial generation. The narratives about imagined adult life reflect some crucial changes undergone by Italian society in a quarter of a century: there are advances towards gender equality, and the imagined family is increasingly detached from tradition. The dream of a life marked by beauty, wealth, and fame persists, while at the same time there is greater social commitment, from solidarity with marginalized people to environmentalism. Furthermore, the new digital and robotic professions mentioned will project us into the near future.
Keywords: gender equality, gender stereotypes, imaginary, preadolescents, social media
Procedia PDF Downloads 52
22 The Relationships between Market Orientation and Competitiveness of Companies in Banking Sector
Authors: Patrik Jangl, Milan Mikuláštík
Abstract:
The objective of the paper is to measure and compare the market orientation of Swiss and Czech banks, and to examine statistically the degree of influence it has on the competitiveness of the institutions. The analysis of market orientation is based on the collection, analysis, and correct interpretation of the data. A descriptive analysis of market orientation describes the current situation. Research into the relation between competitiveness and market orientation in the sector of big international banks is proposed, with the expectation that a strong relationship exists. In part, the work served as a reconfirmation of the suitability of classic methodologies for measuring banks' market orientation. Two types of data were gathered: first, by measuring the subjectively perceived market orientation of a company, and second, by quantifying its competitiveness. All data were collected from a sample of small, mid-sized, and large banks. We used numerical secondary data from Bureau van Dijk's international statistical financial database BANKSCOPE. The statistical analysis led to the following results. First, assuming classic market orientation measures to be scientifically justified, Czech banks are statistically less market-oriented than Swiss banks. Second, among small Swiss banks that are not broadly internationally active, only a weak relationship exists between market orientation measures and market-share-based competitiveness measures. Third, among all Swiss banks, a strong relationship exists between market orientation measures and market-share-based competitiveness measures. These results imply the existence of a strong relationship in the sector of big international banks.
A strong statistical relationship has also been shown to exist between market orientation measures and the equity/total assets ratio in Switzerland.
Keywords: market orientation, competitiveness, marketing strategy, measurement of market orientation, relation between market orientation and competitiveness, banking sector
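The strength of a statistical relationship such as the one reported between market orientation and the equity/total assets ratio is typically quantified with a correlation coefficient. A minimal Pearson correlation sketch, on hypothetical per-bank scores (the abstract does not disclose its data or the exact statistic used):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-bank scores: market-orientation index vs. market share (%)
orientation = [3.1, 3.8, 4.2, 4.9, 5.5]
share = [1.2, 1.9, 2.1, 2.8, 3.4]
print(round(pearson_r(orientation, share), 3))
```

Values near +1 would indicate the kind of strong positive relationship the paper reports; in practice a significance test on r would accompany the point estimate.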
Procedia PDF Downloads 474
21 Digitizing Masterpieces in Italian Museums: Techniques, Challenges and Consequences from Giotto to Caravaggio
Authors: Ginevra Addis
Abstract:
The possibility of reproducing physical artifacts in digital format is one of the opportunities offered by advancements in information and communication technology most frequently promoted by museums. Indeed, the study and conservation of our cultural heritage have advanced significantly thanks to three-dimensional acquisition and modeling technology. A variety of laser scanning systems has been developed, based either on optical triangulation or on time-of-flight measurement, capable of producing digital 3D images of complex structures with high resolution and accuracy. It is necessary, however, to explore the challenges and opportunities that this practice brings to museums. The purpose of this paper is to understand what changes digital techniques introduce in museums that host digital masterpieces. The methodology investigates three distinguished Italian exhibitions related to the territory of Milan, analyzing the following issues in museum practice: 1) how digitizing art masterpieces increases the number of visitors; 2) what needs call for the digitization of artworks; 3) which techniques are most used; 4) what the setting is; 5) the consequences of not publishing hard copies of catalogues; 6) how these practices are envisioned in the future. The findings show, first, how interconnection plays an important role in rebuilding a collection spread all over the world; second, how digital artwork duplication and the extension of reality entail new forms of accessibility; third, that collection and preservation through the digitization of images have both a social and an educational mission; and fourth, that the convergence of the properties of different media (such as the web and radio) is key to encouraging people to get actively involved in digital exhibitions.
The present analysis suggests further research that should create museum models and interaction spaces that act as catalysts for innovation.
Keywords: digital masterpieces, education, interconnection, Italian museums, preservation
Procedia PDF Downloads 174
20 The Differentiation of Performances among Immigrant Entrepreneurs: A Biographical Approach
Authors: Daniela Gnarini
Abstract:
This paper aims to contribute to the study of immigrants' entrepreneurial performance. The debate on immigrant entrepreneurship has been dominated by cultural explanations, which argue that immigrants' entrepreneurial results are linked to group characteristics. However, this approach does not consider important dimensions that influence entrepreneurial performance. Furthermore, cultural theories do not take into account the huge differences in performance within the same ethnic group. For these reasons, this study adopts a biographical approach, at both the theoretical and methodological levels, which makes it possible to understand the main aspects that make the difference in immigrants' entrepreneurial performance by exploring the narratives of immigrant entrepreneurs operating in the restaurant sector in two Italian metropolitan areas: Milan and Rome. Through the qualitative method of biographical interviews, this study analyses four main dimensions and their combinations: a) individuals' entrepreneurial and migratory paths, particularly relevant for understanding the biographical resources of immigrant entrepreneurs and their change and evolution over time; b) entrepreneurs' social capital, with a particular focus on their networks, adopting a transnational perspective that takes into account both local and transnational connections (the study highlights that, though entrepreneurs' connections are significant, especially those with family members, their entrepreneurial path often assumes an individualized trajectory); c) entrepreneurs' human capital, including both formal education and skills acquired through informal channels, the latter being particularly relevant since the role of informal transmission emerges in the interviews and data collected; d) embeddedness within the social, political, and economic context, to understand the main constraints and opportunities at both the local and national levels. The comparison between two metropolitan areas within the same country helps in understanding this dimension.
Keywords: biographies, immigrant entrepreneurs, life stories, performance
Procedia PDF Downloads 223
19 Day Ahead and Intraday Electricity Demand Forecasting in Himachal Region using Machine Learning
Authors: Milan Joshi, Harsh Agrawal, Pallaw Mishra, Sanand Sule
Abstract:
Predicting electricity usage is a crucial aspect of organizing and controlling sustainable energy systems. Forecasting electricity load is an intricate and demanding task because of the combined impact of social, economic, technical, environmental, and cultural factors on power consumption in communities. As a result, it is important to create robust models that can handle the significantly non-linear and complex nature of the task. The objective of this study is to create and compare three machine learning techniques for predicting electricity load both day-ahead and intraday, taking into account factors such as meteorological data and social events, including holidays and festivals. The proposed methods include LightGBM, FBProphet, and a combination of FBProphet and LightGBM for day-ahead forecasting, and motif discovery (using Stumpy, based on Mueen's algorithm for similarity search, MASS) for intraday forecasting. We use these techniques to predict electricity usage during normal days and social events in the Himachal region, and assess their performance by measuring the MSE, RMSE, and MAPE values. The outcomes demonstrate that the combination of FBProphet and LightGBM is the most accurate for day-ahead forecasting, and motifs for intraday forecasting, surpassing the other models in terms of MAPE, RMSE, and MSE. Moreover, the FBProphet-LightGBM approach proves highly effective in forecasting electricity load during social events, exhibiting precise day-ahead predictions. In summary, the proposed electricity forecasting techniques display excellent performance in predicting electricity usage during normal days and special events in the Himachal region.
Keywords: feature engineering, FBProphet, LightGBM, MASS, Motifs, MAPE
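The intraday method rests on similarity search: given a partial load profile, find the historical subsequence with the most similar shape. Stumpy and Mueen's MASS algorithm compute z-normalized Euclidean distances efficiently via FFT; the brute-force sketch below computes the same distances directly (the load series is invented for illustration):

```python
import math

def znorm(seq):
    """Z-normalize a sequence (zero mean, unit standard deviation)."""
    m = sum(seq) / len(seq)
    sd = math.sqrt(sum((x - m) ** 2 for x in seq) / len(seq)) or 1.0
    return [(x - m) / sd for x in seq]

def nearest_subsequence(series, query):
    """Index of the subsequence of `series` closest to `query` under
    z-normalized Euclidean distance. Brute force; MASS/Stumpy compute
    the same distances via FFT for speed."""
    m = len(query)
    q = znorm(query)
    best_i, best_d = 0, float("inf")
    for i in range(len(series) - m + 1):
        w = znorm(series[i:i + m])
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, w)))
        if d < best_d:
            best_i, best_d = i, d
    return best_i

# A daily load shape is matched by shape, not amplitude, because of the
# z-normalization: the query is a scaled copy of the last pattern.
load = [1, 2, 3, 2, 1, 5, 1, 5, 1, 5, 2, 4, 8, 4, 2]
print(nearest_subsequence(load, [20, 40, 80, 40, 20]))
```

Once the best-matching historical window is found, the values that followed it serve as the intraday forecast; the z-normalization is what lets a low-demand day match the shape of a high-demand one.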
Procedia PDF Downloads 70
18 Digital Rehabilitation for Navigation Impairment
Authors: Milan N. A. Van Der Kuil, Anne M. A. Visser-Meily, Andrea W. M. Evers, Ineke J. M. Van Der Ham
Abstract:
Navigation ability is essential for autonomy and mobility in daily life. In patients with acquired brain injury, navigation ability is frequently impaired; in this study, we tested the effectiveness of a serious-gaming training protocol as a cognitive rehabilitation tool to reduce navigation impairment. In total, 38 patients with acquired brain injury and subjective navigation complaints completed the experiment, in a partially blinded, randomized controlled trial design. An objective navigation test was used to construct a strengths-and-weaknesses profile for each patient. Subsequently, patients received personalized compensation training matched to their strengths and weaknesses, addressing an egocentric or allocentric strategy or a strategy aimed at minimizing the use of landmarks. Participants in the experimental condition received psychoeducation and a home-based rehabilitation game with a series of exercises (e.g., map reading, place finding, and turn memorization). The exercises were developed to stimulate the adoption of more beneficial strategies, in line with the compensatory approach. Self-reported navigation ability (Wayfinding Questionnaire), participation level, and objective navigation performance were measured before training and 1 and 4 weeks after completing the six-week training program. The results indicate that the experimental group improved significantly in subjective navigation ability both 1 and 4 weeks after completing the training, compared with the pre-training score and with the scores of the control group. Similarly, goal attainment showed a significant increase 1 and 4 weeks after training. Objective navigation performance was not affected by the training. This navigation training protocol provides an effective way to address navigation impairment after acquired brain injury, with clear improvements in the participants' subjective performance and goal attainment.
The outcomes of the training should be re-examined after implementation in a clinical setting.
Keywords: spatial navigation, cognitive rehabilitation, serious gaming, acquired brain injury
Procedia PDF Downloads 174
17 Local Governments Supporting Environmentally Sustainable Meals to Protect the Planet and People
Authors: Magdy Danial Riad
Abstract:
Introduction: The ability of our world to support the expanding population after 2050 is at risk because of the global food system's role in poor health, climate change, and resource depletion. Healthy, equitable, and sustainable food systems, from production through consumption, must be achieved in order to meet several of the Sustainable Development Goal (SDG) targets. There is evidence that changing the local food environment can effectively change dietary habits in a community. The purpose of this article is to outline the policy initiatives taken by local governments to support environmentally friendly eating habits. Methods: Five databases were searched for peer-reviewed articles that described the implementation of environmentally sustainable eating habits by local government authorities, concerned cities that had signed the Milan Urban Food Policy Pact, were published after 2015, were available in English, and described policy interventions. Data extraction was a two-step approach that started with extracting information from the included studies and ended with locating information specific to the policies in the grey literature. Results: 45 papers met the inclusion criteria, describing a variety of policy initiatives from low-, middle-, and high-income countries. Policy action targeted a variety of desired dietary behaviors, including reducing food waste, procuring food locally and in season, promoting breastfeeding, avoiding overconsumption, and consuming more plant-based meals and fewer animal-derived products. Conclusions: To achieve the SDG targets, local governments are under pressure to implement evidence-based interventions. This study can help direct local governments toward evidence-based policy measures to improve regional food systems and support ecologically friendly eating habits.
Keywords: meals, planet, poor health, eating habits
Procedia PDF Downloads 51
16 Optimization of Sol-Gel Copper Oxide Layers for Field-Effect Transistors
Authors: Tomas Vincze, Michal Micjan, Milan Pavuk, Martin Weis
Abstract:
In recent years, alternative materials have been gaining attention as replacements for polycrystalline and amorphous silicon, which are the standard for low-requirement devices, where silicon is unnecessarily expensive. For that reason, metal oxides are envisioned as new materials for low-requirement applications such as sensors, solar cells, energy storage devices, and field-effect transistors. The most common way of growing metal oxide layers is sputtering; however, this is a high-cost fabrication method, and a more industry-suitable alternative is the sol-gel method. In this group of materials, many oxides exhibit semiconductor-like behavior with sufficiently high mobility to be used in transistors. The sol-gel method is a cost-effective deposition technique for semiconductor-based devices. Copper oxides, as p-type semiconductors with free-charge mobility up to 1 cm²/V·s, are suitable replacements for poly-Si or a-Si:H devices. However, to reach the potential of silicon devices, fine-tuning of the material properties is needed. Here we focus on optimizing the electrical parameters of copper oxide-based field-effect transistors by modifying the precursor solvent (usually 2-methoxyethanol). To achieve good solubility and high-quality films, a better solvent is required. Since almost no single solvent has both a high dielectric constant and a high boiling point, an alternative approach was proposed using blended solvents. By mixing isopropyl alcohol (IPA) and 2-methoxyethanol (2ME), the precursor reached better solubility. The quality of the layers fabricated using the mixed solutions was evaluated in terms of surface morphology and electrical properties. The IPA:2ME solvent mixture reached optimum results at a weight ratio of 1:3: the cupric oxide layers from the optimal mixture had the highest crystallinity and the highest effective charge mobility.
Keywords: copper oxide, field-effect transistor, semiconductor, sol-gel method
Procedia PDF Downloads 133
15 Directivity in the Dramatherapeutic Process for People with Addictive Behaviour
Authors: Jakub Vávra, Milan Valenta, Petr Kosek
Abstract:
This article presents a perspective on the conduct of the dramatherapy process with persons with addictive behaviours with regard to the directivity of the process. Although drama therapy, as one of the creative arts approaches, is rather non-directive in nature, depending on the clientele there may be a need to structure the process more and, depending on the needs of the clients, to guide it more directively. The specificity of working with people with addictive behaviours is discussed through the prism of the dramatherapeutic perspective, which comprises both a psychotherapeutic component and a component touching on expression and art, the latter being rather non-directive in nature. Within the context of practice with clients, this theme has repeatedly emerged, and dramatherapists themselves have sought ways of coping with clients' demands and needs for structure and guidance within the dramatherapy process. Some of the outcomes of the supervision work also guided the research. Based on this insight, two research questions were approached. The first asks: in what ways is directivity manifested in the dramatherapy process? The second complements the first and asks: to which phenomena is directivity in dramatherapy linked? In relation to the research questions, data were collected using focus groups and field notes. A qualitative approach combining content analysis and relational analysis was chosen as the methodology. For analyzing the qualitative data, we chose an inductive coding scheme: open coding, axial coding, pattern matching, member checking, and creating a coding scheme. In the partial research results presented here, we find recurrent themes related to directivity in drama therapy.
As an important element, directive leadership emerges in connection with safety of the client group, with the clients' commission and the setting of the facility, and, last but not least, with the personality of the dramatherapist. By careful analysis and by looking for patterns in the research results, we can see connections that cannot yet be fully interpreted at this stage but that already provide clues to our understanding of the topic and open up further avenues for research in this area.Keywords: dramatherapy, directivity, personal approach, aims of dramatherapy process, safety
Procedia PDF Downloads 67
14 Speaking Anxiety: Sources, Coping Mechanisms and Teacher Management
Authors: Mylene T. Caytap-Milan
Abstract:
This study set out to determine students' anxieties towards spoken English, the sources of this anxiety, the coping mechanisms used to counter it, and the teacher management applied to reduce the anxiety within the classroom. Being qualitative in nature, it used audio-recorded interviews as the data-gathering tool. Participants of the study included thirteen teachers and students of speech classes in a state university in Region I, Philippines. The data elicited were transcribed verbatim, confirmed by the participants, coded and categorized, and themed accordingly. A triangulation method was applied to strengthen the validity of the data. Findings confirmed teachers' and students' awareness of the existence of anxiety in speaking English (ASE). Based on the data gathered from the teachers, the following themes on students' ASE were identified: (1) No Brain and Mouth Coordination, (2) Center of Attention, and (3) Acting Out Loud. However, the following themes were formulated based on the responses made by the students themselves: (1) The Common Feeling, (2) The Incompetent Me, and (3) The Limelight. With regard to the sources of students' ASE, the themes according to teachers are: (1) It Began at Home, (2) It Continued in School, and (3) It's not for me at all. On the other hand, the sources of students' ASE according to the students themselves are: (1) It Comes from Within, (2) It wasn't Nursed Well, and (3) They're Looking for Errors. In terms of coping with ASE, students identified mechanisms that were themed into: (1) Acceptance, (2) Application, and (3) Apathy. Moreover, to reduce the ASE phenomenon within the classroom, the teachers demonstrate the following roles according to themes: (1) The Compass, (2) The Counselor, (3) The Referee, (4) The Polyglot, and (5) The English Nazi.
Based on the findings, the following conclusions were drawn: (1) ASE can exert both positive and negative influences on the English-speaking skills of students, (2) ASE can be reduced by teachers providing more English-speaking opportunities and by students taking the initiative of personal training, (3) ASE can be reduced when English is introduced and practiced by children at an early age, and (4) ASE is inevitable in the affective domain, thus teachers are encouraged to apply psychological positivism in the classroom. Studies related to the present undertaking may refer to the succeeding recommendations: (1) experiment on activities that will reduce ASE, (2) involve a psychologist for more critical but reliable results and recommendations, and (3) conduct the study among high school and primary students.Keywords: coping mechanisms, sources, speaking anxiety, teacher management
Procedia PDF Downloads 114
13 Exceptional Cost and Time Optimization with Successful Leak Repair and Restoration of Oil Production: West Kuwait Case Study
Authors: Nasser Al-Azmi, Al-Sabea Salem, Abu-Eida Abdullah, Milan Patra, Mohamed Elyas, Daniel Freile, Larisa Tagarieva
Abstract:
Well intervention was carried out along with Production Logging Tools (PLT) to detect sources of water and to check well integrity for two West Kuwait oil wells that had started to produce 100% water. For the first well, PLT was performed to check the perforations: no production was observed from the bottom two perforation intervals, and an intake of water was observed at the topmost perforation. A decision was then taken to extend the PLT survey from tag depth to the Y-tool. For the second well, the aim was to detect the source of water and to establish whether there was a leak in the 7'' liner in front of the upper zones. Data could not be recorded in flowing conditions due to casing deformation at almost 8300 ft. For the first well, the interpretation of the PLT and well integrity data showed a hole in the 9 5/8'' casing from 8468 ft to 8494 ft producing the majority of the water, 2478 bbl/d, while the upper perforation from 10812 ft to 10854 ft was taking 534 stb/d. For the second well, there was a hole in the 7'' liner from 8303 ft MD to 8324 ft MD producing 8334.0 stb/d of water, with an intake zone from 10322.9 to 10380.8 ft MD taking the whole fluid. To restore oil production, a workover (W/O) rig was mobilized to prevent dump flooding, and during the W/O the leaking interval was confirmed for both wells. The leakage was cement squeezed and tested at 900 psi positive pressure and 500 psi drawdown pressure; the cement squeeze job was successful. After the W/O, the wells kept producing for cleaning, and eventually the water cut (WC) reduced to 0%. Regular PLT and well integrity logs are required to study well performance and well integrity issues; proper cement behind the casing is essential to well longevity and integrity; and the presence of the Y-tool is essential for monitoring well parameters and the ESP to facilitate well intervention tasks. Cost and time optimization in oil and gas, and especially during rig operations, is crucial.
PLT data quality and the accuracy of the interpretations contributed greatly to identifying the leakage intervals accurately and, in turn, saved a lot of time and reduced the repair cost by almost 35 to 45%. The added value here was related mostly to the cost reduction and to effective, quick, and proper decision-making based on the economic environment.Keywords: leak, water shut-off, cement, water leak
Procedia PDF Downloads 113
12 Investigation of Several New Ionic Liquids' Behaviour during ²¹⁰Pb/²¹⁰Bi Cherenkov Counting in Waters
Authors: Nataša Todorović, Jovana Nikolov, Ivana Stojković, Milan Vraneš, Jovana Panić, Slobodan Gadžurić
Abstract:
The detection of ²¹⁰Pb levels in aquatic environments evokes interest in various scientific studies. Its precise determination is important not only for the radiological assessment of drinking waters; the distribution of ²¹⁰Pb and ²¹⁰Po in the marine environment is also significant for assessing the removal rates of particles from the ocean, particle fluxes during transport along the coast, and particulate organic carbon export in the upper ocean. Measurement techniques for ²¹⁰Pb determination - gamma spectrometry, alpha spectrometry, or liquid scintillation counting (LSC) - are either time-consuming or demand expensive equipment or complicated chemical pre-treatments. However, one other possibility is to measure ²¹⁰Pb on an LS counter, if it is in equilibrium with its progeny ²¹⁰Bi, through the Cherenkov counting method. This method is unaffected by chemical quenching and allows easy sample preparation, but has the drawback of lower counting efficiencies than standard LSC methods, typically from 10% up to 20%. The aim of the research presented in this paper is to investigate a possible increase of the detection efficiency of Cherenkov counting during ²¹⁰Pb/²¹⁰Bi detection on the LS counter Quantulus 1220. Considering the naturally low levels of ²¹⁰Pb in aqueous samples, the addition of ionic liquids to the counting vials with the analysed samples has the benefit of decreasing the detection limit during ²¹⁰Pb quantification. Our results demonstrated that the ionic liquid 1-butyl-3-methylimidazolium salicylate increases the Cherenkov counting efficiency more than the previously explored 2-hydroxypropan-1-amminium salicylate. Consequently, the impact of a few other ionic liquids synthesized with the same cation group (1-butyl-3-methylimidazolium benzoate, 1-butyl-3-methylimidazolium 3-hydroxybenzoate, and 1-butyl-3-methylimidazolium 4-hydroxybenzoate) was explored in order to test their potential influence on the Cherenkov counting efficiency.
It was confirmed that, among the explored ones, only the ionic liquids in the form of salicylates exhibit a wavelength-shifting effect. Namely, the addition of small amounts (around 0.8 g) of 1-butyl-3-methylimidazolium salicylate increases the detection efficiency from 16% to >70%, consequently reducing the detection threshold by more than four times. Moreover, the addition of ionic liquids could find application in the quantification of other radionuclides besides ²¹⁰Pb/²¹⁰Bi via the Cherenkov counting method.Keywords: liquid scintillation counting, ionic liquids, Cherenkov counting, ²¹⁰Pb/²¹⁰Bi in water
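The reported four-fold reduction in detection threshold follows directly from the inverse dependence of the minimum detectable activity (MDA) on counting efficiency. A minimal sketch using the standard Currie formula; the background of 300 counts, the one-hour counting time, and the 1 L sample volume below are illustrative assumptions, not values from the study:

```python
import math

def currie_mda(background_counts, efficiency, count_time_s, volume_l=1.0):
    """Currie minimum detectable activity (Bq/L) for a counting measurement.

    background_counts: expected background counts over count_time_s
    efficiency: absolute detection efficiency (0..1)
    """
    ld = 2.71 + 4.65 * math.sqrt(background_counts)  # detection limit in counts
    return ld / (efficiency * count_time_s * volume_l)

# Same background and counting time; only the efficiency changes,
# as with the 1-butyl-3-methylimidazolium salicylate addition:
mda_plain = currie_mda(background_counts=300, efficiency=0.16, count_time_s=3600)
mda_il = currie_mda(background_counts=300, efficiency=0.70, count_time_s=3600)
print(mda_plain / mda_il)  # > 4: consistent with the reported reduction
```

With the background held fixed, the ratio of the detection limits reduces to the ratio of efficiencies, 0.70/0.16 ≈ 4.4, i.e., "more than four times".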
Procedia PDF Downloads 100
11 Effect of Term of Preparation on Performance of Cool Chamber Stored White Poplar Hardwood Cuttings in Nursery
Authors: Branislav Kovačević, Andrej Pilipović, Zoran Novčić, Marina Milović, Lazar Kesić, Milan Drekić, Saša Pekeč, Leopold Poljaković Pajnik, Saša Orlović
Abstract:
Poplars are among the most important tree species used for phytoremediation in the northern hemisphere. They can be used either as direct “cleaners” of contaminated soils or as buffer zones preventing the contaminant plume from reaching the surrounding environment. Producing appropriate planting material for this purpose requires a long process of breeding the most favorable candidates. Although poplar propagation technology has been evolving for decades, white poplar nursery production, as well as the establishment of short-rotation coppice plantations, still depends considerably on the survival of hardwood cuttings. This is why easy rooting is among the most desirable properties in white poplar breeding. On the other hand, there are many opportunities for optimizing the technological procedures to meet the demands of a particular genotype (clonal technology). In this study, the effect of the term of hardwood cutting preparation on the survival and further growth of rooted cuttings of four white poplar clones was tested under nursery conditions. There were three terms of cutting preparation: the beginning of February (2nd Feb 2023), the beginning of March (3rd Mar 2023), and the end of March (21st Mar 2023), which is regarded as the standard term. The cuttings were stored in a cool chamber at 2±2°C. All cuttings were planted on the same date (11th Apr 2023) in soil prepared with rotary tillage and then cultivated by the usual nursery procedures. According to the results obtained after bud set (29th Sept 2023), there were significant differences in the survival and growth of rooted cuttings between the examined terms of cutting preparation, as well as significant differences in the reaction of the examined clones to the terms of cutting preparation.
Overall, the best results were obtained with cuttings prepared at the first term (2nd Feb 2023) (survival rate of 39.4%), while performance after the two later preparation terms was significantly poorer (20.5% after the second and 16.5% after the third term). These results stress the significance of dormancy preservation in cuttings of the examined white poplar clones for their survival, which could be especially important in the context of climate change. Differences in the clones' reaction to the term of cutting preparation suggest the necessity of adjusting the technology to the needs of a particular clone, i.e., designing a clone-specific technology.Keywords: rooting, Populus alba, nursery, clonal technology
Procedia PDF Downloads 62
10 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest
Authors: Peter Baji
Abstract:
In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although there are different responses from the public sector to decrease traffic congestion in urban regions, the most effective public intervention is congestion charging. Because travel is an economic asset, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with different methods has a long history in transport sciences, but until recently there was not sufficient data for evaluating road traffic flow patterns on the scale of the entire road system of a larger urban area. European cities (e.g., London, Stockholm, Milan) in which congestion charges have already been introduced designated a particular paying zone in their downtown, but this protects only the users and inhabitants of the CBD (Central Business District) area. Through the use of Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area of the research contains three bordering districts of Budapest which are linked by one main road. The first district (5th) is the original downtown, which is affected by the congestion charge plans of the city. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data of the research were collected with the help of Google's Distance Matrix API (Application Programming Interface), which provides estimated future traffic data via travel times between freely fixed coordinate pairs.
From the difference between free-flow and congested travel time data, the daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas which lie outside the downtown, and their inhabitants also need some protection. The conclusion of this case study is that cities can develop a real-time and place-based congestion charge system that encourages car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation than protecting a very limited downtown area only.Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study
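The congestion measure described above - the gap between free-flow and congested travel times - can be sketched from a Distance Matrix response element. `duration` and `duration_in_traffic` are real fields of the JSON API (the latter is returned only when a `departure_time` is set in the request); the sample element below is a trimmed, hypothetical response, and a live call to `https://maps.googleapis.com/maps/api/distancematrix/json` would additionally require origins, destinations, and an API key:

```python
import json

def congestion_ratio(element):
    """Ratio of congested to free-flow travel time for one Distance Matrix
    response element; values well above 1 flag a congested segment."""
    free_flow = element["duration"]["value"]             # seconds, no traffic
    congested = element["duration_in_traffic"]["value"]  # seconds, with traffic
    return congested / free_flow

# A trimmed, hypothetical response element (real responses also carry
# "distance", "status", and human-readable "text" fields):
sample = json.loads('{"duration": {"value": 600}, '
                    '"duration_in_traffic": {"value": 900}}')
print(congestion_ratio(sample))  # 1.5 -> 50% longer than free flow
```

Computing this ratio per road segment and per departure time yields the daily congestion profile from which hot spots are identified.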
Procedia PDF Downloads 194
9 Exploring the Neural Correlates of Different Interaction Types: A Hyperscanning Investigation Using the Pattern Game
Authors: Beata Spilakova, Daniel J. Shaw, Radek Marecek, Milan Brazdil
Abstract:
Hyperscanning affords a unique insight into the brain dynamics underlying human interaction by simultaneously scanning the brain responses of two or more individuals while they engage in dyadic exchange. This provides an opportunity to observe dynamic brain activations in all individuals participating in the interaction, and possible interbrain effects among them. The present research aims to provide an experimental paradigm for hyperscanning research capable of delineating different forms of interaction. Specifically, the goal was to distinguish between two dimensions: (1) interaction structure (concurrent vs. turn-based) and (2) goal structure (competition vs. cooperation). Dual-fMRI was used to scan 22 pairs of participants - each pair matched on gender, age, education, and handedness - as they played the Pattern Game. In this simple interactive task, one player attempts to recreate a pattern of tokens while the second player must either help (cooperation) or prevent the first from achieving the pattern (competition). Each pair played the game iteratively, alternating their roles every round. The game was played in two consecutive sessions: in the first, the players took sequential turns (turn-based), but in the second they placed their tokens concurrently (concurrent). Conventional general linear model (GLM) analyses revealed activations throughout a diffuse collection of brain regions: the cooperative condition engaged medial prefrontal cortex (mPFC) and posterior cingulate cortex (PCC); in the competitive condition, significant activations were observed in frontal and prefrontal areas, insular cortices, and the thalamus. Comparisons between the turn-based and concurrent conditions revealed greater precuneus engagement in the former. Interestingly, mPFC, PCC, and the insulae are repeatedly linked to social cognitive processes. Similarly, the thalamus is often associated with cognitive empathy; thus its activation may reflect the need to predict the opponent's upcoming moves.
Frontal and prefrontal activation most likely represents the higher attentional and executive demands of the concurrent condition, whereby subjects must simultaneously observe their co-player and place their own tokens accordingly. The activation of the precuneus in the turn-based condition may be linked to self-other distinction processes. Finally, by performing intra-pair correlations of brain responses, we demonstrate condition-specific patterns of brain-to-brain coupling in mPFC and PCC. Moreover, the degree of synchronicity in these neural signals was related to performance in the game. The present results, then, show that different types of interaction recruit different brain systems implicated in social cognition, and that the degree of inter-player synchrony within these brain systems is related to the nature of the social interaction.Keywords: brain-to-brain coupling, hyperscanning, pattern game, social interaction
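The intra-pair correlation analysis described above can be sketched, in its simplest form, as a Pearson correlation between homologous regional time series of the two players. The synthetic signals below, built from a shared condition-driven component plus player-specific noise, are purely illustrative and not the study's data or pipeline:

```python
import numpy as np

def interbrain_coupling(ts_a, ts_b):
    """Pearson correlation between two regional BOLD time series,
    one per member of a dyad (e.g., the mPFC signal of each player)."""
    return float(np.corrcoef(ts_a, ts_b)[0, 1])

rng = np.random.default_rng(0)
shared = rng.standard_normal(200)                 # condition-driven component
player1 = shared + 0.5 * rng.standard_normal(200)  # player-specific noise
player2 = shared + 0.5 * rng.standard_normal(200)
print(interbrain_coupling(player1, player2))  # high for a coupled dyad
```

A condition-specific coupling analysis would compute this per condition (cooperation vs. competition, turn-based vs. concurrent) and compare the resulting coefficients across dyads.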
Procedia PDF Downloads 338
8 Early Return to Play in Football Player after ACL Injury: A Case Report
Authors: Nicola Milani, Carla Bellissimo, Davide Pogliana, Davide Panzin, Luca Garlaschelli, Giulia Facchinetti, Claudia Casson, Luca Marazzina, Andrea Sartori, Simone Rivaroli, Jeff Konin
Abstract:
The patient is a 26-year-old male amateur football player from Milan, Italy (81 kg; 185 cm; BMI 23.6 kg/m²). He sustained a non-contact anterior cruciate ligament tear to his right knee in June 2021, and in September 2021 the ligament was reconstructed using a semitendinosus graft. The injury occurred during a football match on natural grass, with typical shoes, on a warm day (32 degrees Celsius). Playing as a defender, he sustained the injury during a change of direction in which the foot remained fixed on the grass. He felt pain and was unable to continue playing the match. The surgeon approved beginning rehabilitation two weeks post-operatively, and the initial physiotherapy assessment prescribed two training sessions per day for the first three months. In the first three weeks, pain was 4/10 on the Numerical Rating Scale (NRS), there was no swelling, range of motion was 0-110° with difficulty fully extending the knee, and quadriceps activation was minimal. Crutches were discontinued at four weeks as walking improved. Active exercise, electrostimulation, physical therapy, massage, osteopathy, and passive motion were initiated. At week 6, he completed his first functional movement screen; the score was 16/21 with no pain and no swelling. At week 8, the isokinetic test showed a 23% deficit between the two legs in maximum strength (at 90°/s). At week 10, the injury-induced deficit had improved to 15%, which suggested he was ready to start running. At week 12, the athlete underwent his first threshold test. At week 16, he performed his first return-to-sports movement assessment, which revealed a 10% difference between the legs, and underwent his second threshold test. At week 17, his first on-field test revealed a 5% deficit between the two legs in the hop test. At week 18, the isokinetic test demonstrated that the uninjured leg was 7% stronger than the recovering leg in maximum strength (at 90°/s).
At week 20, his second on-field test revealed a 2% difference in the hop test; at week 21, his third isokinetic test demonstrated a difference of 5% in maximum strength (at 90°/s), and his second return-to-sports movement assessment revealed a 2% difference between the limbs. Since it was the end of the championship, the team asked him to partake in the playoffs; moreover, the player, as captain of the team, was highly motivated to participate. Together with the player and the team, we decided to let him play, even though we were aware that, because of two factors - biological recovery times and the results of the tests we performed - the risk of injury was higher than what is reported in the literature. In the decision-making process about an athlete's recovery time, it is important to balance the information available from the literature with the desires of the patient in order to avoid frustration.Keywords: ACL, football, rehabilitation, return to play
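The limb-to-limb percentages tracked throughout the timeline are simple symmetry deficits of the injured limb relative to the uninjured one. A minimal sketch; the peak-torque values below are hypothetical, since the abstract reports only the resulting percentages:

```python
def limb_deficit_pct(injured, uninjured):
    """Percentage strength (or hop-distance) deficit of the injured limb
    relative to the uninjured limb."""
    return round((1 - injured / uninjured) * 100, 1)

# Hypothetical isokinetic peak torques (Nm) at 90 deg/s reproducing
# the week-8 figure reported in the case:
print(limb_deficit_pct(injured=154, uninjured=200))  # 23.0
```

Return-to-play criteria in the literature typically require this deficit to fall below roughly 10% before sport-specific loading resumes, which is why the week-by-week trend is reported so closely.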
Procedia PDF Downloads 117
7 Mapping Alternative Education in Italy: The Case of Popular and Second-Chance Schools and Interventions in Lombardy
Authors: Valeria Cotza
Abstract:
School drop-out is a multifactorial phenomenon that in Italy concerns all those underage students who, at different stages of school (up to 16 years old) or training (up to 18 years old), manifest educational difficulties ranging from dropping out of compulsory education without obtaining a qualification to grade repetition and absenteeism. From the 1980s to the 2000s, there was a progressive attenuation of the economic and social model in favour of a multifactorial reading of the phenomenon, and the European Commission noted the importance of learning about the phenomenon through approaches able to integrate large-scale quantitative surveys with qualitative analyses. It is not simply a matter of identifying the contextual factors affecting the phenomenon but of problematising them by means of systemic and comprehensive in-depth analysis. A privileged point of observation and field of intervention is therefore offered by those schools that propose models of teaching and learning alternative to the traditional ones, such as popular and second-chance schools. Alternative schools and interventions have grown in recent years in Europe as well as in the US and Latin America, working in the direction of greater equity to create the conditions (often absent in conventional schools) for everyone to achieve educational goals. Despite the extensive Anglo-Saxon and US literature on this topic, there is as yet no unambiguous definition of alternative education, especially in Europe, where second-chance education has been most studied. There is little literature on second-chance education in Italy and almost none on alternative education (with the exception of method schools, to which the concept of “alternative” is linked in Italy). This research aims to fill the gap by systematically surveying the alternative interventions in the area and by beginning to explore some models of popular and second-chance schools and experiences through a mixed-methods approach.
So, the main research objectives concern the spread of alternative education in the Lombardy region, the main characteristics of these schools and interventions, and their effectiveness in terms of students' well-being and school results. This paper addresses the first point by presenting the preliminary results of the first phase of the project, dedicated to mapping. Through the Google Forms platform, a questionnaire is being distributed to all schools in Lombardy and to some schools in the rest of Italy to map the presence of alternative schools and interventions and their main characteristics. The distribution is also taking place thanks to the support of the Milan Territorial and Lombardy Regional School Offices. Moreover, other social realities outside the school system (such as cooperatives and cultural associations) can be surveyed. The schools and other realities to be surveyed outside Lombardy will also be identified with the support of INDIRE (Istituto Nazionale per Documentazione, Innovazione e Ricerca Educativa, “National Institute for Documentation, Innovation and Educational Research”) and on the basis of the existing literature and the indicators of the “Futura” Plan of the PNRR (Piano Nazionale di Ripresa e Resilienza, “National Recovery and Resilience Plan”). Mapping will be crucial and functional for the subsequent qualitative and quantitative phase, which will make use of statistical analysis and constructivist grounded theory.Keywords: school drop-out, alternative education, popular and second-chance schools, map
Procedia PDF Downloads 82
6 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model
Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero
Abstract:
Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. In fact, this model provides the user with theoretical support for designing the lithium-ion battery parameters, such as the material particle size or the adjustment direction of the diffusion coefficient. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs) such as Fick's law of diffusion and the MacInnes and Ohm's equations, among other phenomena. Thus, to use the model efficiently in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. There are several numerical methods available in the literature that can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed; the three methods are compared in terms of accuracy, stability, and computational times. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and computationally fast; in this work, its accuracy and stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It represents a combination of the implicit and explicit Euler methods that has the advantage of being second order in time and intrinsically stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, so it is feasible for both short- and long-term tests.
This last remark is particularly important, as this discretization technique allows the user to implement parameter estimation and optimization techniques, such as system or genetic parameter identification methods, using this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select the adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the non-eligibility of the simple Euler method for long-term tests is presented. Afterwards, the Crank-Nicolson and the Chebyshev discretization methods are compared in terms of accuracy and computational times under a wide range of battery operating scenarios. These include both long-term simulations for aging tests and short- and mid-term battery charge/discharge cycles, typically relevant in battery applications like grid primary frequency and inertia control and electric vehicle braking and acceleration.Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods
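The stability contrast between the two time-stepping schemes can be illustrated on a standalone 1D diffusion equation of the same kind as the electrolyte diffusion PDE (this is a generic textbook sketch, not the DFN model itself). Explicit Euler is stable only for r = D·Δt/Δx² ≤ 0.5, while Crank-Nicolson damps every mode for any r; the spike initial profile below deliberately excites the high-frequency modes that trigger the explicit instability:

```python
import numpy as np

def step_explicit_euler(u, r):
    """One explicit Euler step of u_t = D u_xx (fixed ends); stable only for r <= 0.5."""
    un = u.copy()
    un[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return un

def step_crank_nicolson(u, r):
    """One Crank-Nicolson step: (I + r/2 K) u_next = (I - r/2 K) u, unconditionally stable."""
    n = len(u)
    K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # negative Laplacian stencil
    A = np.eye(n) + r / 2 * K
    B = np.eye(n) - r / 2 * K
    for M in (A, B):                      # keep the boundary values fixed
        M[0, :] = 0; M[0, 0] = 1
        M[-1, :] = 0; M[-1, -1] = 1
    return np.linalg.solve(A, B @ u)

u0 = np.zeros(21)
u0[10] = 1.0            # concentration spike, rich in high-frequency modes
r_large = 1.0           # violates the explicit stability limit r <= 0.5
ue = uc = u0
for _ in range(50):
    ue = step_explicit_euler(ue, r_large)
    uc = step_crank_nicolson(uc, r_large)
print(np.max(np.abs(ue)) > 1e3, np.max(np.abs(uc)) <= 1.0)  # True True
```

With the same oversized step, the explicit solution blows up while the Crank-Nicolson solution decays smoothly, which is exactly why the step-size independence matters for long-term aging simulations.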
Procedia PDF Downloads 21
5 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data
Authors: Nicola Colaninno, Eugenio Morello
Abstract:
The urban environment affects local-to-global climate and, in turn, suffers global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. The physical and morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature. However, their spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although different interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such an approach may not effectively reflect the real climatic conditions at an interpolated point, and quantifying local UHI for extensive areas based on weather station observations only is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST, with data from Landsat, ASTER, or MODIS extensively used. Indeed, LST has an indirect but significant influence on air temperatures. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach to NSAT estimation that accounts for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations with satellite-derived LST. The approach is structured in two main steps.
First, a GWR model is fitted to estimate NSAT at low resolution, combining air temperature from discrete observations retrieved by weather stations (dependent variable) and LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 kilometer of spatial resolution, are employed. Two time periods are considered according to the satellite overpass times, i.e. 10:30 am and 9:30 pm. Afterward, the results are downscaled to 30 meters of spatial resolution by fitting a second GWR model between the previously retrieved near-surface air temperature (dependent variable) and, as predictors, the multispectral information provided by the Landsat mission, in particular the albedo, and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 meters. The area under investigation is the Metropolitan City of Milan, which covers an area of approximately 1,575 km² and encompasses a population of over 3 million inhabitants. Both models, low-resolution (1 km) and high-resolution (30 meters), have been validated by cross-validation using indicators such as R², Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). All the employed indicators give evidence of highly accurate models. In addition, an alternative network of weather stations, available for the City of Milan only, has been employed for testing the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.
Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing
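The two-step scheme described above rests on fitting a separate, distance-weighted regression at each location. A minimal sketch of GWR with a Gaussian kernel, using synthetic station data rather than the authors' MODIS/Landsat inputs (the hand-rolled solver and all variable names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def gwr_predict(coords_obs, X_obs, y_obs, coords_pred, X_pred, bandwidth):
    """Geographically Weighted Regression with a Gaussian kernel.

    Fits a separate weighted least-squares model at each prediction
    location, weighting observations by their distance to that location.
    """
    # Add an intercept column to the design matrices
    Xo = np.column_stack([np.ones(len(X_obs)), X_obs])
    Xp = np.column_stack([np.ones(len(X_pred)), X_pred])
    preds = np.empty(len(coords_pred))
    for i, (cp, xp) in enumerate(zip(coords_pred, Xp)):
        d = np.linalg.norm(coords_obs - cp, axis=1)   # distances to all stations
        w = np.exp(-0.5 * (d / bandwidth) ** 2)       # Gaussian kernel weights
        W = np.diag(w)
        # Local weighted least-squares coefficients
        beta = np.linalg.solve(Xo.T @ W @ Xo, Xo.T @ W @ y_obs)
        preds[i] = xp @ beta
    return preds

# Synthetic example: air temperature depends on LST plus a spatial trend
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(50, 2))   # station locations (km)
lst = rng.uniform(20, 35, size=50)          # satellite LST (degrees C)
t_air = 0.8 * lst + 0.3 * coords[:, 0] + rng.normal(0, 0.2, 50)

t_hat = gwr_predict(coords, lst.reshape(-1, 1), t_air,
                    coords, lst.reshape(-1, 1), bandwidth=3.0)
rmse = np.sqrt(np.mean((t_hat - t_air) ** 2))
print(round(rmse, 2))
```

Because the local intercept absorbs the spatial trend within each kernel neighbourhood, the in-sample RMSE stays close to the noise level, which is the non-stationarity benefit the abstract attributes to GWR over a single global regression.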
A Two-Step, Temperature-Staged, Direct Coal Liquefaction Process
Authors: Reyna Singh, David Lokhat, Milan Carsky
Abstract:
World crude oil demand is projected to rise to 108.5 million bbl/d by the year 2035. With reserves estimated at 869 billion tonnes worldwide, coal is an abundant resource. This work was aimed at producing a high value hydrocarbon liquid product from the Direct Coal Liquefaction (DCL) process at comparatively mild operating conditions. A temperature-staged hydrogenation approach was investigated. In a two-reactor lab-scale pilot plant facility, the objectives were to maximise thermal dissolution of the coal in the presence of a hydrogen donor solvent in the first stage and subsequently to promote hydrogen saturation and hydrodesulphurization (HDS) performance in the second. The feed slurry consisted of high grade, pulverized bituminous coal on a moisture-free basis with a size fraction of < 100 μm, mixed with Tetralin in 2:1 and 3:1 solvent/coal ratios. Magnetite (Fe3O4) at 0.25 wt% of the dry coal feed was added for the catalysed runs. For both stages, hydrogen gas was used to maintain a system pressure of 100 barg. In the first stage, temperatures of 250 °C and 300 °C and reaction times of 30 and 60 minutes were investigated in an agitated batch reactor. The first stage liquid product was pumped into the second stage vertical reactor, which was designed to contact the hydrogen-rich gas stream and the incoming liquid flow counter-currently in a fixed catalyst bed. Two commercial hydrotreating catalysts, cobalt-molybdenum (CoMo) and nickel-molybdenum (NiMo), were compared in terms of their conversion, selectivity and HDS performance at temperatures 50 °C higher than the respective first stage tests. The catalysts were activated at 300 °C with a hydrogen flowrate of approximately 10 ml/min prior to testing. A gas-liquid separator at the outlet of the reactor ensured that the gas was exhausted to the online VARIOplus gas analyser. The liquid was collected and sampled for analysis by Gas Chromatography-Mass Spectrometry (GC-MS).
Internal standard quantification of the sulphur content, the BTX (benzene, toluene, and xylene) and alkene quality, and the alkane and polycyclic aromatic hydrocarbon (PAH) compounds in the liquid products was guided by ASTM standards of practice for hydrocarbon analysis. In the first stage, using a 2:1 solvent/coal ratio, coal-to-liquid conversion was favoured by the lower operating temperature of 250 °C, a 60-minute reaction time and magnetite catalysis. Tetralin functioned effectively as the hydrogen donor solvent. A 3:1 ratio favoured increased concentrations of the long chain alkanes undecane and dodecane, the alkenes octene and nonene, and PAH compounds such as indene. The second stage product distribution showed an increase in the BTX quality of the liquid product and in branched chain alkanes, and a reduction in the sulphur concentration. In terms of HDS performance and selectivity towards long and branched chain alkanes, NiMo performed better than CoMo, while CoMo was more selective towards cyclohexane. Over 16 days on stream each, NiMo showed higher activity than CoMo. The potential of this process to help cover the demand for low-sulphur crude diesel and solvents through the production of a high value hydrocarbon liquid is thus demonstrated.
Keywords: catalyst, coal, liquefaction, temperature-staged
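The performance measures compared above (conversion, selectivity, HDS efficiency) follow standard definitions. A minimal sketch of those definitions, using hypothetical masses and sulphur concentrations chosen purely for illustration (they are not the study's data):

```python
def coal_conversion(mass_coal_fed, mass_residue):
    """Coal-to-liquid conversion: fraction of fed coal dissolved/converted."""
    return (mass_coal_fed - mass_residue) / mass_coal_fed

def hds_efficiency(s_in_ppm, s_out_ppm):
    """Hydrodesulphurization efficiency: fraction of sulphur removed."""
    return (s_in_ppm - s_out_ppm) / s_in_ppm

def selectivity(mass_product_i, mass_all_products):
    """Selectivity: share of product i in the total liquid product."""
    return mass_product_i / mass_all_products

# Hypothetical numbers for illustration only
conv = coal_conversion(100.0, 35.0)     # 65 kg of 100 kg coal converted
hds = hds_efficiency(12000.0, 1800.0)   # sulphur reduced from 12000 to 1800 ppm
sel = selectivity(12.0, 60.0)           # 12 kg BTX in 60 kg liquid product
print(conv, hds, sel)
```

These ratios are how a statement such as "NiMo performed better than CoMo as an HDS performer" is quantified: the catalyst giving the larger sulphur-removal fraction at the same conditions wins.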
Extension of Moral Agency to Artificial Agents
Authors: Sofia Quaglia, Carmine Di Martino, Brendan Tierney
Abstract:
Artificial Intelligence (A.I.) pervades many aspects of modern life, from the Machine Learning algorithms predicting stocks on Wall Street to the killing of belligerents and innocents alike on the battlefield. Moreover, the end goal is to create autonomous A.I.; this means that humans will be absent from the decision-making process. The question arises naturally: when an A.I. does something wrong, when its behavior is harmful to the community and its actions go against the law, who is to be held responsible? This research's subject matter in A.I. and Robot Ethics focuses mainly on Robot Rights, and its ultimate objective is to answer the questions: (i) What is the function of rights? (ii) Who is a right holder, what is personhood and what are the requirements needed to be a moral agent (and therefore accountable for responsibility)? (iii) Can an A.I. be a moral agent (ontological requirements)? And finally, (iv) ought it to be one (ethical implications)? To answer these questions, the research project was conducted as a collaboration between the School of Computer Science at the Technical University of Dublin, which oversaw the technical aspects of the work, and the Department of Philosophy at the University of Milan, which supervised the philosophical framework and argumentation of the project. Firstly, it was found that all rights are positive and based on consensus; they change with time based on circumstances. Their function is to protect the social fabric and avoid dangerous situations. The same goes for the requirements considered necessary to be a moral agent: they are not absolute; in fact, they are constantly redesigned. Hence, the next logical step was to identify which requirements are regarded as fundamental in real-world judicial systems, comparing them to those used in philosophy.
Autonomy, free will, intentionality, consciousness and responsibility were identified as the requirements to be considered a moral agent. The work went on to build a symmetrical system between personhood and A.I. to bring out the ontological differences between the two. Each requirement is introduced, explained through the most relevant theories of contemporary philosophy, and observed in its manifestation in A.I. Finally, after completing the philosophical and technical analysis, conclusions were drawn. As underlined in the research questions, there are two issues regarding the assignment of moral agency to artificial agents: first, whether all the ontological requirements are present; and second, present or not, whether an A.I. ought to be considered an artificial moral agent. From an ontological point of view, it is very hard to prove that an A.I. could be autonomous, free, intentional, conscious, and responsible. The philosophical accounts are often very theoretical and inconclusive, making it difficult to fully detect these requirements at an experimental level of demonstration. However, from an ethical point of view, it makes sense to consider some A.I. as artificial moral agents, hence responsible for their own actions. When artificial agents are considered responsible, already existing norms in our judicial system can be applied, such as removing them from society and re-educating them in order to re-introduce them to society. This is in line with how the highest profile correctional facilities ought to work. Noticeably, this is a provisional conclusion, and research must continue further. Nevertheless, the strength of the presented argument lies in its immediate applicability to real world scenarios. To refer to the aforementioned incidents involving the killing of innocents, when this thesis is applied it becomes possible to hold an A.I. accountable and responsible for its actions.
This entails removing it from society by virtue of its unusability, re-programming it and, only once it is properly functioning, successfully re-introducing it.
Keywords: artificial agency, correctional system, ethics, natural agency, responsibility