Search results for: density distribution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8116

256 Hydrogen Production from Auto-Thermal Reforming of Ethanol Catalyzed by Tri-Metallic Catalyst

Authors: Patrizia Frontera, Anastasia Macario, Sebastiano Candamano, Fortunato Crea, Pierluigi Antonucci

Abstract:

The increase in world energy demand makes biomass an attractive energy source today, with a view to minimizing CO2 emissions and mitigating global warming. Recently, COP-21, the international meeting on global climate change, defined a roadmap for sustainable worldwide development based on low-carbon fuels. Hydrogen is an energy vector able to substitute for conventional petroleum-derived fuels. Ethanol represents a valid alternative to fossil sources for hydrogen production due to its low toxicity, low production costs, high biodegradability, high H2 content and renewability. Ethanol conversion to hydrogen by a combination of partial oxidation and steam reforming reactions is generally called auto-thermal reforming (ATR). The ATR process is advantageous due to its low energy requirements and the reduced formation of carbonaceous deposits. The catalyst plays a pivotal role in the ATR process, especially with regard to process selectivity and carbonaceous deposit formation. Bimetallic or trimetallic catalysts, as well as catalysts with doped-promoter supports, may exhibit higher activity, selectivity and deactivation resistance than the corresponding monometallic ones. In this work, NiMoCo/GDC, NiMoCu/GDC and NiMoRe/GDC catalysts (where GDC is the Gadolinia Doped Ceria support and the metal composition is 60:30:10 for all catalysts) have been prepared by the impregnation method. The support, Gadolinia 0.2 Doped Ceria 0.8, was impregnated with metal precursors solubilized in aqueous ethanol solution (50%) at room temperature for 6 hours. After this, the catalysts were dried at 100°C for 8 hours and subsequently calcined at 600°C in order to obtain the metal oxides. Finally, active catalysts were obtained by a reduction procedure (H2 atmosphere at 500°C for 6 hours). All samples were characterized by different analytical techniques (XRD, SEM-EDX, XPS, CHNS, H2-TPR and Raman spectroscopy). 
Catalytic experiments (auto-thermal reforming of ethanol) were carried out in the temperature range 500-800°C under atmospheric pressure, using a continuous fixed-bed microreactor. Effluent gases from the reactor were analyzed by two Varian CP4900 chromatographs with a TCD detector. The analytical investigation focused on preventing coke deposition, metal sintering and sulfur poisoning. Hydrogen productivity, ethanol conversion and product distribution were measured and analyzed. At 600°C, all tri-metallic catalysts show their best performance, with H2 + CO reaching almost 77 vol.% in the final gases. The NiMoCo/GDC catalyst shows the best selectivity to hydrogen with respect to the other tri-metallic catalysts (41 vol.% at 600°C). On the other hand, NiMoCu/GDC and NiMoRe/GDC demonstrated higher sulfur poisoning resistance (up to 200 cc/min) than the NiMoCo/GDC catalyst. The correlation between the catalytic results and the surface properties of the catalysts will be discussed.
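As a rough illustration of how figures like those above are derived from chromatograph data, the sketch below computes ethanol conversion and the combined H2 + CO fraction of the dry effluent. All molar flows are invented for the example (chosen only to land near the reported 77 vol.%) and are not the paper's data.

```python
# Hypothetical post-processing of GC data from one ATR run.

def conversion(n_in: float, n_out: float) -> float:
    """Fractional conversion of ethanol: (in - out) / in."""
    return (n_in - n_out) / n_in

def vol_percent(n_species: float, n_total: float) -> float:
    """Mole (= volume) fraction of a species in the dry effluent, in %."""
    return 100.0 * n_species / n_total

# Illustrative molar flows (mol/min) at 600 degC -- assumed values
n_etoh_in, n_etoh_out = 1.00, 0.05
dry_gas = {"H2": 4.1, "CO": 3.6, "CO2": 1.5, "CH4": 0.8}
n_total = sum(dry_gas.values())

x = conversion(n_etoh_in, n_etoh_out)
h2_plus_co = vol_percent(dry_gas["H2"] + dry_gas["CO"], n_total)
print(f"ethanol conversion = {x:.0%}, H2+CO = {h2_plus_co:.1f} vol.%")
```

With these assumed flows the dry effluent is 77.0 vol.% H2 + CO, matching the order of magnitude quoted in the abstract.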

Keywords: catalysts, ceria, ethanol, gadolinia, hydrogen, nickel

Procedia PDF Downloads 152
255 Strategic Interventions to Address Health Workforce and Current Disease Trends, Nakuru, Kenya

Authors: Paul Moses Ndegwa, Teresia Kabucho, Lucy Wanjiru, Esther Wanjiru, Brian Githaiga, Jecinta Wambui

Abstract:

Health outcomes have improved in Kenya since 2013, following the adoption of the new constitution, which devolved governance and transferred administration and health planning functions to county governments. The 2018-2022 development agenda prioritized universal healthcare coverage, food security, and nutrition; however, the emergence of Covid-19 and the increase in non-communicable diseases pose a challenge and a constraint on an already overwhelmed health system. A study was conducted from July to November 2021 to establish the key challenges in achieving universal healthcare coverage within the county and best practices for improved non-communicable disease control. 14 health workers, ranging from nurses, doctors, public health officers, and clinical officers to pharmaceutical technologists, were purposively engaged to provide critical information through questionnaires administered by a trained duo observing ethical procedures on confidentiality. The data were then analyzed. Communicable diseases are major causes of morbidity and mortality. Non-communicable diseases contribute to approximately 39% of deaths. More than 45% of the population does not have access to safe drinking water. The study noted geographic inequality with respect to the distribution and use of health resources, including competing non-health priorities. 56% of health workers are nurses, 13% clinical officers, 7% doctors, 9% public health workers, and 2% pharmaceutical technologists. Poor-quality data limit the validity of disease-burden estimates and research activities. Risk factors include unsafe water, sanitation and hand washing, unsafe sex, and malnutrition. The key challenge in achieving universal healthcare coverage is the rise in the relative contribution of non-communicable diseases. Recommendations include: improve targeted disease control with effective and equitable resource allocation; develop strong infectious disease control mechanisms; improve the quality of data for decision making; and strengthen electronic data-capture systems. 
Increase investments in the health workforce to improve health service provision and the achievement of universal health coverage. Create a favorable environment to retain health workers. Fill the staffing gaps resulting from the shortage of doctors (7%). Develop a multi-sectoral approach to health workforce planning and management. Invest in mechanisms that generate contextual evidence on current and future health workforce needs. Ensure the retention of a qualified, skilled, and motivated health workforce. Deliver integrated, people-centered health services.

Keywords: multi-sectoral approach, equity, people-centered, health workforce retention

Procedia PDF Downloads 113
254 Evaluation of Cryoablation Procedures in Treatment of Atrial Fibrillation from 3 Years' Experiences in a Single Heart Center

Authors: J. Yan, B. Pieper, B. Bucsky, B. Nasseri, S. Klotz, H. H. Sievers, S. Mohamed

Abstract:

Cryoablation is increasingly applied for the interventional treatment of paroxysmal (PAAF) or persistent atrial fibrillation (PEAF). In cardiac surgery, this procedure is often combined with coronary artery bypass grafting (CABG) and valve operations. Three different methods, differing in extent and mechanism, are practiced in our heart center: lone left atrial cryoablation, Cox-Maze IV and Cox-Maze III. 415 patients (68 ± 0.8 years, 68.2% male) with pre-existing atrial fibrillation who initially required either coronary or valve operations were enrolled and divided into 3 matched groups according to the deployed procedure: a CryoLA group (cryoablation of the lone left atrium, n=94), a Cox-Maze IV group (n=93) and a Cox-Maze III group (n=8). All patients additionally received closure of the left atrial appendage (LAA) and regularly underwent ambulant follow-up assessments over three years (3, 6, 9, 12, 18, 24, 30 and 36 months). The burden of atrial fibrillation was assessed directly by means of a cardiac monitor (Reveal XT, Medtronic) or a 3-day Holter electrocardiogram. The frequencies of AF attacks and their circadian patterns were systematically analyzed. Furthermore, anticoagulants and regular rate-/rhythm-controlling medications were evaluated and listed in terms of anti-rate and anti-rhythm regimens. Concerning PAAF treatment, the Cox-Maze IV procedure provided a therapeutically acceptable effect comparable to lone left atrium (LA) cryoablation (5.25 ± 5.25% vs. 10.39 ± 9.96% AF burden, p > 0.05). Interestingly, the Cox-Maze III method presented a better short-term effect in PEAF therapy in comparison to lone cryoablation of the LA and Cox-Maze IV (0.25 ± 0.23% vs. 15.31 ± 5.99% and 9.10 ± 3.73% AF burden within the first year, p < 0.05). However, this therapeutic advantage was lost during the ongoing follow-ups (26.65 ± 24.50% vs. 8.33 ± 8.06% and 15.73 ± 5.88% in the 3rd follow-up year). 
In this way, lone LA cryoablation established its antiarrhythmic efficacy, and 69.5% of patients were released from vitamin K antagonists, while Cox-Maze IV liberated 67.2% of patients from continuous anticoagulant medication. For all 3 procedures, recurrent AF attacks mostly lasted less than 60 minutes (p > 0.05). Regarding the circadian distribution of the recurrence attacks, weighted over the ongoing follow-ups, lone LA cryoablation achieved and stabilized its antiarrhythmic effects over time, which was especially observed in the treatment of PEAF, while the antiarrhythmic effects of Cox-Maze IV and III weakened progressively. This phenomenon was likewise evident in the therapy of the circadian rhythm of recurring AF attacks. Furthermore, the rate-control strategy was applied much more often than the rhythm-control one to support and maintain the therapeutic successes obtained. Based on the experience in our heart center, lone LA cryoablation presented effects equivalent to the Cox-Maze IV and III procedures in the treatment of AF. These therapeutic successes were especially evident in patients suffering from persistent AF (PEAF). Additional supportive strategies such as a rate-control regime should be initiated and implemented according to appropriate criteria to improve the therapeutic effects of cryoablation.

Keywords: AF burden, atrial fibrillation, cardiac monitor, Cox Maze, cryoablation, Holter, LAA

Procedia PDF Downloads 202
253 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization

Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon

Abstract:

The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovery of the signal waveform, based on ideas from Tikhonov regularization (TR) and Compressive Sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on a linear transformation of the training set of signal waveforms using Principal Component Analysis (PCA) decomposition. Besides the advantage of including the additional information from training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution which can be determined explicitly. Moreover, from Bayes theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial for introducing and proving the formula for calculation of the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector built from a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed. 
It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveforms, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated based on information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from the four voltage levels to the recovery of the signal waveform, the spatial resolution improves to 0.94 cm. Moreover, the obtained result is only slightly worse than the one evaluated using the original raw signal, for which the spatial resolution is equal to 0.93 cm. This is very important information since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction of the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest may be utilized.
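A minimal sketch of this kind of Tikhonov-regularized recovery with a PCA prior is given below. The synthetic training pulses, the eight sampled time indices, and the regularization weight are all illustrative assumptions standing in for the real scintillator waveforms and the four-threshold sampling; it shows the closed-form solution structure, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training set": smooth synthetic pulses standing in for scintillator signals
t = np.linspace(0.0, 1.0, 64)
amps = 1.0 + 0.2 * rng.standard_normal(200)       # amplitude spread (assumed)
jits = 0.02 * rng.standard_normal(200)            # arrival-time jitter (assumed)
train = np.array([a * np.exp(-(t - 0.4 - j) ** 2 / (2 * 0.06 ** 2))
                  for a, j in zip(amps, jits)])

# PCA of the training set: mean waveform, basis vectors, coefficient variances
mean = train.mean(axis=0)
_, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
k = 8                                             # retained components
P = Vt[:k]                                        # (k, 64) PCA basis
prior_var = S[:k] ** 2 / (len(train) - 1)         # prior variance per coefficient

def recover(y, idx, lam=1e-3):
    """Tikhonov recovery of a full waveform from samples y at time indices idx;
    the PCA coefficient prior enters through the diagonal regularizer."""
    B = P[:, idx].T                               # (m, k) sampled basis
    reg = lam * np.diag(1.0 / prior_var)
    c = np.linalg.solve(B.T @ B + reg, B.T @ (y - mean[idx]))
    return mean + P.T @ c

# Sample a new pulse at 8 time points and recover the full 64-point waveform
true = np.exp(-(t - 0.41) ** 2 / (2 * 0.06 ** 2))
idx = np.array([20, 23, 25, 27, 30, 33, 36, 40])
rec = recover(true[idx], idx)
rel_err = np.linalg.norm(rec - true) / np.linalg.norm(true)
```

The closed-form solve mirrors the property noted above: the regularized problem has an explicit optimal solution, and the prior covariance from PCA controls how aggressively each coefficient is shrunk.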

Keywords: plastic scintillators, positron emission tomography, statistical analysis, Tikhonov regularization

Procedia PDF Downloads 445
252 Spatial and Temporal Variability of Meteorological Drought Including Atmospheric Circulation in Central Europe

Authors: Andrzej Wałęga, Marta Cebulska, Agnieszka Ziernicka-Wojtaszek, Wojciech Młocek, Agnieszka Wałęga, Tommaso Caloiero

Abstract:

Drought is one of the natural phenomena influencing many aspects of human activity, such as food production, agriculture, industry, and the ecological condition of the environment. In the area of the Polish Carpathians, there are periods with a deficit of rainwater and an increasing frequency of dry months, especially in the cold half of the year. The aim of this work is a spatial and temporal analysis of drought, expressed as the SPI, in a heterogeneous area of the Polish Carpathians and of the highland region in the central part of Europe, based on long-term precipitation data. Also, to the best of our knowledge, for the first time in this work, drought characteristics analyzed via the SPI are discussed based on the atmospheric circulation calendar. The study region is the Upper Vistula Basin, located in the southern and south-eastern part of Poland. In this work, monthly precipitation from 56 rainfall stations was analysed from 1961 to 2022. The 3-, 6-, 9-, and 12-month Standardized Precipitation Index (SPI) was used as an indicator of meteorological drought. For the 3-month SPI, the main climatic mechanisms determining extreme droughts were identified based on the calendar of synoptic circulations. The Mann-Kendall test was used to detect trends in extreme droughts. Statistically significant trends of the SPI were observed at 52.7% of all analyzed stations, and in most cases the trend was positive. Statistically significant trends were observed more frequently at stations located in the western part of the analyzed region. Long-term droughts, represented by the 12-month SPI, occurred at all stations, but not in all years. Short-term droughts (3-month SPI) were most frequent in winter, 6- and 9-month SPI droughts in winter and spring, and 12-month SPI droughts in winter and autumn. The spatial distribution of drought was highly diverse. 
The most intensive drought occurred in 1984, with the 6-month SPI covering 98% of the analyzed region and the 9- and 12-month SPI covering 90% of the entire region. Droughts exhibit a seasonal pattern, with a dominant 10-year periodicity for all analyzed variants of the SPI. Additionally, Fourier analysis revealed a 2-year periodicity for the 3-, 6-, and 9-month SPI and a 31-year periodicity for the 12-month SPI. The results provide insights into the typical climatic conditions in Poland, with strong seasonality in precipitation. The study highlighted that short-term extreme droughts, represented by the 3-month SPI, are often caused by anticyclonic situations with high-pressure wedges (Ka and Wa types) and anticyclonic western circulation, as observed in 52.3% of cases. These findings are crucial for understanding the spatial and temporal variability of short- and long-term extreme droughts in Central Europe, particularly for the agricultural sector dominant in the northern part of the analyzed region, where drought frequency is highest.
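The SPI computation described above can be sketched as follows. This is a simplified, non-parametric variant: rolling 3-month precipitation totals are assigned empirical (Gringorten plotting-position) probabilities and mapped through the inverse standard normal CDF. The operational SPI instead fits a gamma distribution per calendar month; the rainfall series here is synthetic, not the station data.

```python
import numpy as np
from statistics import NormalDist

def spi(monthly_precip, window=3):
    """Non-parametric SPI of rolling `window`-month precipitation totals."""
    p = np.asarray(monthly_precip, dtype=float)
    totals = np.convolve(p, np.ones(window), mode="valid")  # rolling sums
    ranks = totals.argsort().argsort() + 1                  # ranks 1..n
    n = len(totals)
    prob = (ranks - 0.44) / (n + 0.12)                      # Gringorten positions
    return np.array([NormalDist().inv_cdf(q) for q in prob])

# Synthetic monthly rainfall for 1961-2022 (62 years), gamma-distributed
rng = np.random.default_rng(1)
rain = rng.gamma(shape=2.0, scale=30.0, size=12 * 62)

z = spi(rain, window=3)
drought_months = np.mean(z <= -1.0)   # fraction of at least moderately dry months
```

By construction the index is centred near zero, with SPI <= -1 conventionally read as moderate drought and SPI <= -2 as extreme drought.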

Keywords: atmospheric circulation, drought, precipitation, SPI, the Upper Vistula Basin

Procedia PDF Downloads 74
251 Inflation and Deflation of Aircraft's Tire with Intelligent Tire Pressure Regulation System

Authors: Masoud Mirzaee, Ghobad Behzadi Pour

Abstract:

An aircraft tire is designed to tolerate extremely heavy loads for a short duration. The number of tires increases with the weight of the aircraft, as the load needs to be distributed more evenly. Generally, aircraft tires operate at high pressure, up to 200 psi (14 bar; 1,400 kPa) for airliners and higher for business jets. Tire assemblies for most aircraft categories carry a recommended charge of compressed nitrogen that supports the aircraft's weight on the ground, provides a means of controlling the aircraft during taxi, takeoff and landing, and supplies traction for braking. Accurate tire pressure is a key factor that enables tire assemblies to perform reliably under high static and dynamic loads. Concerning ambient temperature change, when the temperature differs between the origin and destination airports, the tire pressure should be adjusted and the tire inflated to the specified operating pressure at the colder airport. This adjustment, superseding the normal over-inflation limit of 5 percent at constant ambient temperature, is required so that the inflation pressure remains adequate to support the load of a specified aircraft configuration. Without this adjustment, a tire assembly would be significantly under- or over-inflated at the destination. Due to the increase of human errors in the aviation industry, exorbitant costs are imposed on airlines for providing consumable parts such as aircraft tires. An intelligent system that adjusts the aircraft tire pressure based on the weight, load, temperature, and weather conditions of the origin and destination airports could have a significant effect on reducing aircraft maintenance costs and fuel consumption, and could further mitigate environmental issues related to air pollution. An intelligent tire pressure regulation system (ITPRS) consists of a processing computer, a 1800 psi nitrogen bottle, and distribution lines. 
The nitrogen bottle's inlet and outlet valves are installed in the main landing gear area and are connected through nitrogen lines to the main wheel and nose wheel assemblies. Nitrogen control and monitoring are performed by the computer, which adjusts the pressure according to calculations from the received parameters, including the temperatures of the origin and destination airports, the weight of cargo and passengers, fuel quantity, and wind direction. Correct tire inflation and deflation are essential in assuring that tires can withstand the centrifugal forces and heat of normal operations, with an adequate margin of safety for unusual operating conditions such as rejected takeoffs and hard landings. ITPRS will increase the performance of the aircraft in all phases of takeoff, landing, and taxi. Moreover, this system will reduce human errors, material consumption, and the stresses imposed on the aircraft body.
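The temperature correction underlying such a system can be sketched with Gay-Lussac's law: at constant volume, absolute pressure scales with absolute temperature. The airport temperatures below are illustrative assumptions, not values from the system's specification; only the 200 psi operating pressure and the 5 percent allowance come from the text above.

```python
# Back-of-the-envelope temperature correction for a nitrogen-filled tire.

def pressure_at_destination(p_gauge_psi, t_origin_c, t_dest_c, p_atm_psi=14.7):
    """Gauge pressure the tire will read after cooling/warming to t_dest_c,
    assuming constant volume and ideal-gas behaviour of the nitrogen."""
    p_abs = p_gauge_psi + p_atm_psi                       # gauge -> absolute
    p_dest = p_abs * (t_dest_c + 273.15) / (t_origin_c + 273.15)
    return p_dest - p_atm_psi                             # back to gauge

# Inflated to 200 psi at a 30 degC origin, read at a -10 degC destination:
p_cold = pressure_at_destination(200.0, 30.0, -10.0)
drop_pct = 100.0 * (200.0 - p_cold) / 200.0
```

For this 40 degC drop the tire loses roughly 14 percent of its gauge pressure, well beyond the 5 percent allowance mentioned above, which is why the adjustment (here, an automated nitrogen top-up) is made at the colder airport.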

Keywords: avionic system, improve efficiency, ITPRS, human error, reduced cost, tire pressure

Procedia PDF Downloads 247
250 Achieving Sustainable Agriculture with Treated Municipal Wastewater

Authors: Reshu Yadav, Himanshu Joshi, S. K. Tripathi

Abstract:

Fresh water is a scarce resource which is essential for humans and ecosystems, but its distribution is uneven. Agricultural production accounts for 70% of all surface water withdrawals. It is projected that, against an expansion of the area equipped for irrigation by 0.6% per year, the global potential irrigation water demand will rise by 9.5% during 2021-25. This demand would, on one hand, have to compete against the sharply rising urban water demand; on the other, it would also face the threat of climate change, as temperatures rise and crop yields could drop by 10-30% in many large areas. The huge demand for irrigation combined with fresh water scarcity encourages exploring the reuse of wastewater as a resource. However, the use of such wastewater is often linked to safety issues when it is used injudiciously or with poor safeguards while irrigating food crops. Paddy is one of the major crops globally and among the most important in South Asia and Africa. In many parts of the world, the use of municipal wastewater has been promoted as a viable option in this regard. In developing and fast-growing countries like India, steadily increasing wastewater generation rates may allow this option to be considered quite seriously. In view of this, a pilot field study was conducted at the Jagjeetpur municipal sewage treatment plant situated in the Haridwar town of Uttarakhand state, India. The objectives of the present study were to study the effect of treated wastewater on the production of various paddy varieties (Sharbati, PR-114, PB-1, Menaka, PB-1121 and PB-1509) and the emission of greenhouse gases (CO2, CH4 and N2O), as compared to the same varieties grown in control plots irrigated with fresh water. Of late, the concept of water footprint assessment has emerged, which covers the enumeration of the various types of water footprints of an agricultural entity from its production to its processing stages. 
Paddy, the most water-demanding staple crop of Uttarakhand state, displayed a high green water footprint value of 2966.538 m3/ton. Most of the wastewater-irrigated varieties displayed up to a 6% increase in production, except Menaka and PB-1121, which showed a reduction in production (6% and 3%, respectively) due to pest and insect infestation. The treated wastewater was observed to be rich in nitrogen (55.94 mg/ml nitrate), phosphorus (54.24 mg/ml) and potassium (9.78 mg/ml), thus rejuvenating the soil quality and not requiring any external nutritional supplements. The percentage increase of greenhouse gas emissions on irrigation with treated municipal wastewater, as compared to the control plots, was observed as 0.4%-8.6% (CH4), 1.1%-9.2% (CO2), and 0.07%-5.8% (N2O). The variety Sharbati displayed the maximum production (5.5 ton/ha) and emerged as the variety most resistant to pests and insects. The emission values of CH4, CO2 and N2O were 729.31 mg/m2/d, 322.10 mg/m2/d and 400.21 mg/m2/d, respectively, under water-stagnant conditions. This study highlighted a successful possibility of reusing wastewater for non-potable purposes, offering the potential to exploit this resource to replace or reduce the existing use of fresh water sources in the agricultural sector.
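A green water footprint like the ~2966.5 m3/ton figure above is obtained by dividing the crop's green water use (evapotranspiration of rainwater, expressed per hectare) by the yield. The evapotranspiration depth below is an illustrative assumption chosen only to reproduce the reported order of magnitude; it is not a value from the study.

```python
# Sketch of a green water footprint calculation for paddy.

def green_water_footprint(et_green_mm: float, yield_ton_per_ha: float) -> float:
    """Green WF in m3/ton. A depth of 1 mm over 1 ha equals 10 m3 of water."""
    crop_water_use_m3_per_ha = 10.0 * et_green_mm
    return crop_water_use_m3_per_ha / yield_ton_per_ha

# Assumed seasonal green evapotranspiration of ~1632 mm, Sharbati yield 5.5 t/ha
wf = green_water_footprint(et_green_mm=1632.0, yield_ton_per_ha=5.5)
```

With these inputs the footprint comes out near 2967 m3/ton, consistent with the value quoted in the abstract; in a full assessment the evapotranspiration term would be computed month by month from climate data.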

Keywords: greenhouse gases, nutrients, water footprint, wastewater irrigation

Procedia PDF Downloads 319
249 Biodegradation of Chlorophenol Derivatives Using Macroporous Material

Authors: Dmitriy Berillo, Areej K. A. Al-Jwaid, Jonathan L. Caplin, Andrew Cundy, Irina Savina

Abstract:

Chlorophenols (CPs) are used as precursors in the production of higher CPs and dyestuffs, and as preservatives. Groundwater contamination by CPs lies in the range of 0.15-100 mg/L. The EU has set maximum concentration limits for pesticides and their degradation products of 0.1 μg/L and 0.5 μg/L, respectively. People working in industries which produce textiles, leather products, domestic preservatives, and petrochemicals are the most heavily exposed to CPs. The International Agency for Research on Cancer has categorized CPs as potential human carcinogens. Existing multistep water purification processes for CPs, such as hydrogenation, ion exchange, liquid-liquid extraction, adsorption by activated carbon, forward and inverse osmosis, electrolysis, sonochemistry, UV irradiation, and chemical oxidation, are not always cost-effective and can cause the formation of even more toxic or mutagenic derivatives. Bioremediation of CP derivatives utilizing microorganisms achieves 60 to 100% decontamination efficiency, and the process is more environmentally friendly compared with existing physico-chemical methods. Microorganisms immobilized onto a substrate show many advantages over free-bacteria systems, such as higher biomass density, higher metabolic activity, and resistance to toxic chemicals. They also enable continuous operation, avoiding the requirement for biomass-liquid separation. The immobilized bacteria can be reused several times, which opens the opportunity for developing cost-effective processes for wastewater treatment. In this study, we develop a bioremediation system for CPs based on macroporous materials, which can be efficiently used for wastewater treatment. Conditions for the preparation of the macroporous material from specific bacterial strains (Pseudomonas mendocina and Rhodococcus koreensis) were optimized. The concentration of bacterial cells was kept constant; the only difference was the type of cross-linking agent used, e.g. 
glutaraldehyde and novel polymers, which were utilized at concentrations of 0.5 to 1.5%. SEM images and rheological analysis of the material indicated a monolithic macroporous structure. Phenol was chosen as a model system to optimize the function of the cryogel material and to estimate its enzymatic activity, since it is relatively less toxic and harmful compared to CPs. Several types of macroporous systems comprising live bacteria were prepared. The viability of the cross-linked bacteria was checked using a Live/Dead BacLight kit and laser scanning confocal microscopy, which revealed the presence of viable bacteria with the novel cross-linkers, whereas the control material, cross-linked with glutaraldehyde (GA), contained mostly dead cells. The bacteria-based bioreactors were used for phenol degradation in batch mode at an initial concentration of 50 mg/L, pH 7.5 and a temperature of 30°C. Bacterial strains cross-linked with GA showed an insignificant ability to degrade phenol, and for one week only, whereas the combination of cross-linking agents showed higher stability and viability and the possibility of reuse for at least five weeks. Furthermore, the conditions for CP degradation will be optimized, and the chlorophenol degradation rates will be compared to those for phenol. This is a cutting-edge bioremediation approach, which allows the purification of wastewater contaminated with such compounds without a separation step to remove free planktonic bacteria. Acknowledgments: Dr. Berillo D. A. is very grateful to the Marie Curie Individual Fellowship Program for funding this research.
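Batch degradation of the kind described above is commonly summarized with first-order kinetics, C(t) = C0 * exp(-k t). The sketch below uses the study's 50 mg/L starting concentration, but the rate constant k is an assumed value for illustration; the abstract reports relative activities, not rate constants.

```python
import math

def concentration(c0_mg_l: float, k_per_h: float, t_h: float) -> float:
    """Remaining phenol concentration (mg/L) after t hours of first-order decay."""
    return c0_mg_l * math.exp(-k_per_h * t_h)

c0 = 50.0                     # initial phenol, mg/L (as in the batch runs)
k = 0.05                      # assumed first-order rate constant, 1/h
half_life = math.log(2) / k   # time to halve the concentration
c_48h = concentration(c0, k, 48.0)
```

With this assumed k the half-life is about 14 hours and roughly 4.5 mg/L of phenol remains after two days; fitting k to measured concentration-time data is what would let the GA and novel-cross-linker bioreactors be compared quantitatively.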

Keywords: bioremediation, cross-linking agents, cross-linked microbial cell, chlorophenol degradation

Procedia PDF Downloads 212
248 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures

Authors: Francesca Marsili

Abstract:

The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods are based on an exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, representing an obstacle to the dissemination of probabilistic methods. The framework within which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes' theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and on qualitative judgments drawn from the engineer's past experience; then, the prior model is updated with the results of investigations carried out on the considered structure, such as material testing and the determination of actions and structural properties. The application of Bayesian statistics raises two different kinds of problems: 1. the results of the updating depend on the engineer's previous experience; 2. the updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation, and furthermore, if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. In order to solve these problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can be useful for automating the modeling of variables and for updating material parameters without performing destructive tests. Among them, one that deserves particular attention in relation to the object of this study is Case-Based Reasoning (CBR). 
In this application, cases will be represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case will thus be composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the considered structure. A CBR system represents a good candidate for automating the modelling of variables because: 1. engineers already draw an estimate of the material properties from the experience collected during the assessment of similar structures, or from similar cases collected in the literature or in databases; 2. material tests carried out on structures can be easily collected from laboratory databases or from the literature; 3. the system will provide the user with a reliable probabilistic description of the variables involved in the assessment, which will also serve as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help in spreading the probabilistic reliability assessment of existing buildings in common engineering practice, and in targeting the best interventions and further tests on the structure; CBR represents a technique which may help to achieve this.
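The Bayesian updating step that produces each case's posterior PDF can be sketched with the standard conjugate normal-normal update (normal prior on the parameter, known measurement variance). All numbers below, including the prior for a concrete strength and the three core-test results, are illustrative assumptions, not data from the paper.

```python
# Conjugate normal-normal update of a material-strength prior with test data.

def update_normal(prior_mean, prior_var, data, noise_var):
    """Posterior (mean, variance) of a normal parameter given normal
    observations with known measurement variance noise_var."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + sum(data) / noise_var)
    return post_mean, post_var

# Prior from design documents / expert judgment: strength ~ N(30 MPa, 4^2)
# Three hypothetical core tests, measurement std 2 MPa (variance 4):
tests = [26.5, 27.8, 27.1]
mu, var = update_normal(prior_mean=30.0, prior_var=16.0,
                        data=tests, noise_var=4.0)
```

The posterior mean is pulled from the prior's 30 MPa toward the test average, and the posterior variance shrinks below the prior's, which is exactly the pair (qualitative description, posterior PDF) a CBR case would store.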

Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures

Procedia PDF Downloads 336
247 Implementing Equitable Learning Experiences to Increase Environmental Awareness and Science Proficiency in Alabama’s Schools and Communities

Authors: Carly Cummings, Maria Soledad Peresin

Abstract:

Alabama has a long history of racial injustice and unsatisfactory educational performance. In the 1870s, Jim Crow laws segregated public schools and disproportionately allocated funding and resources to white institutions across the South. Despite the Supreme Court ruling to integrate schools following Brown v. Board of Education in 1954, Alabama's school system continued to exhibit signs of segregation, compounded by "white flight" and the establishment of exclusive private schools, which still exist today. This discriminatory history has had a lasting impact on the state's education system, reflected in modern school demographics and achievement data. It is well known that Alabama struggles with educational performance, especially in science education. On average, minority groups score the lowest in science proficiency. In Alabama, minority populations are concentrated in a region known as the Black Belt, which was once home to countless slave plantations and was the epicenter of the Civil Rights Movement. Today the Black Belt is characterized by a high density of woodlands and plays a significant role in Alabama's leading economic industry, forest products. Given the economic importance of forestry and agriculture to the state, environmental science proficiency is essential to its stability; however, it is neglected in the areas where it is needed most. To better understand the inequity of science education within Alabama, our study first investigates how geographic location, demographics and school funding relate to science achievement scores, using ArcGIS and Pearson's correlation coefficient. Additionally, our study explores the implementation of a relevant, problem-based, active-learning lesson in schools. Relevant learning engages students by connecting material to their personal experiences. Problem-based active learning involves real-world problem-solving through hands-on experiences. 
Given Alabama’s significant woodland coverage, educational materials on forest products were developed with consideration of their relevance to students, especially those located in the Black Belt. Furthermore, to incorporate problem solving and active learning, the lesson centered around students using forest products to solve environmental challenges, such as water pollution, an increasing challenge within the state due to climate change. Pre- and post-assessment surveys were provided to teachers to measure the effectiveness of the lesson. In addition to pedagogical practices, community and mentorship programs are known to positively impact educational achievement. To this end, our work examines the results of surveys measuring educational professionals’ attitudes toward a local mentorship group within the Black Belt and its potential to address environmental and science literacy. Additionally, our study presents survey results from participants who attended an educational community event, gauging its effectiveness in increasing environmental and science proficiency. Our results demonstrate positive improvements in environmental awareness and science literacy with relevant pedagogy, mentorship, and community involvement. Implementing these practices can help provide equitable and inclusive learning environments and can better equip students with the skills and knowledge needed to bridge this historic educational gap within Alabama.
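The correlation step described above can be sketched in a few lines of Python. This is a minimal illustration of Pearson's correlation coefficient; `pearson_r` is a helper written here for clarity, and the funding and score values are hypothetical placeholders, not the study's data.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative only: per-pupil funding (thousand USD) vs. science score
funding = [8.2, 9.1, 10.5, 11.0, 12.3, 13.8]
scores = [142, 150, 158, 161, 170, 176]
print(round(pearson_r(funding, scores), 3))  # → 0.994
```

A value near +1 would indicate the strong funding-achievement association the study tests for; the real analysis would run this over per-school data alongside the ArcGIS mapping.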

Keywords: equitable education, environmental science, environmental education, science education, racial injustice, sustainability, rural education

Procedia PDF Downloads 68
246 The Effect of Whole-Body Vertical Rhythm Training on Fatigue, Physical Activity, and Quality of Life to the Middle-Aged and Elderly with Hemodialysis Patients

Authors: Yen-Fen Shen, Meng-Fan Li

Abstract:

The study aims to investigate the effect of full-body vertical rhythmic training on fatigue, physical activity, and quality of life among middle-aged and elderly hemodialysis patients. The study adopted a quasi-experimental research method and recruited 43 long-term hemodialysis patients from a medical center in northern Taiwan, with 23 and 20 participants in the experimental and control groups, respectively. The experimental group received full-body vertical rhythmic training as an intervention, while the control group received standard hemodialysis care without any intervention. Both groups completed the measurements by using the "Fatigue Scale", "Physical Activity Scale", and "Chinese version of the Kidney Disease Quality of Life Questionnaire" before and after the study. The experimental group underwent 10-minute full-body vertical rhythmic training sessions three times per week for eight weeks, performed before receiving regular hemodialysis treatment. The data were analyzed with SPSS 25 software, using descriptive statistics such as frequency distribution, percentages, means, and standard deviations, as well as inferential statistics, including the chi-square test, independent samples t-test, and paired samples t-test. The study results are summarized as follows: 1. There were no significant differences in demographic variables, fatigue, physical activity, and quality of life between the experimental and control groups in the pre-test. 2. After the intervention of the “full-body vertical rhythmic training,” the experimental group showed significantly better results in the categories of "feeling tired and fatigued in the lower back", "physical functioning role limitation", "bodily pain", "social functioning", "mental health", and "impact of kidney disease on life quality." 3.
The paired samples t-test results revealed that the control group experienced significant differences between the pre-test and post-test in the categories of "feeling tired and fatigued in the lower back", "bodily pain", "social functioning", "mental health", and "impact of kidney disease on life quality", with scores indicating a decline in life quality. Conversely, the experimental group only showed a significant worsening in "bodily pain" and "impact of kidney disease on life quality", with lower change values compared to the control group. Additionally, there was an improvement in the condition of "feeling tired and fatigued in the lower back" for the experimental group. Conclusion: The intervention of the “full-body vertical rhythmic training” had a certain positive effect on the quality of life of the experimental group. While it may not entirely enhance patients' quality of life, it can mitigate the negative impact of kidney disease on certain aspects of the body. The study provides clinical practice, nursing education, and research recommendations based on the results and discusses the limitations of the research.
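The paired-samples comparison described above can be sketched as follows. The t statistic is computed from the textbook formula; `paired_t` is a helper written here for illustration, and the pre/post fatigue scores are hypothetical placeholders, not the study's data (the actual analysis used SPSS 25).

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic on (post - pre) differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Illustrative fatigue scores (higher = more fatigue), not the study's data
pre = [6.1, 5.8, 7.0, 6.4, 5.5, 6.8, 7.2, 6.0]
post = [5.2, 5.0, 6.1, 5.9, 5.1, 5.7, 6.3, 5.4]
t = paired_t(pre, post)
print(round(t, 2))
```

A large-magnitude negative t on fatigue scores would correspond to the improvement reported for the experimental group; significance would then be judged against the t distribution with n-1 degrees of freedom.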

Keywords: hemodialysis, full-body vertical rhythmic training, fatigue, physical activity, quality of life

Procedia PDF Downloads 23
245 Expressing Locality in Learning English: A Study of English Textbooks for Junior High School Year VII-IX in Indonesia Context

Authors: Agnes Siwi Purwaning Tyas, Dewi Cahya Ambarwati

Abstract:

This paper concerns language learning as both habit formation and a constructive process, one that can also exercise an oppressive power that constructs the learners. As a locus of discussion, the investigation problematizes the transfer of the English language to Indonesian junior high school students through the English textbooks ‘Real Time: An Interactive English Course for Junior High School Students Year VII-IX’. English has long performed as a global language, and non-native speakers face the demand to master it if they desire to become internationally recognized individuals. Generally, English teachers teach the language in accordance with the nature of language learning in which they are trained: they are expected to teach the language within the culture of the target language. This provides a potential soft cultural penetration of a foreign ideology through language transmission. In the context of Indonesia, learning English as an international language is considered dilemmatic. Most English textbooks in Indonesia incorporate cultural elements of the target language, which to some extent may challenge sensitivity towards local cultural values. On the other hand, local teachers demand more English textbooks for junior high school students that can facilitate cultural dissemination of both local and global values and promote learners’ cultural traits of both cultures, to avoid misunderstanding and confusion. This also aims to support language learning as a bidirectional process instead of an instrument of oppression. However, sensitizing and localizing this foreign language is not sufficient to restrain its soft infiltration. In due course, domination persists, making English an authoritative language and positioning the locality as ‘the other’.
Such a critical premise has led to a discursive analysis of how the cultural elements of the target language are presented in the textbooks and whether the local characteristics of Indonesia are able to gradually reduce the degree of foreign oppressive ideology. The three textbooks researched were written by a non-Indonesian author, edited by two Indonesian editors, and published by a local commercial publishing company, PT Erlangga. The analytical elaboration examines the cultural characteristics in the forms of names, terminologies, places, objects, and imageries (not the linguistic aspect) of both cultural domains: English and Indonesian. Comparisons as well as categorizations were made to identify the cultural traits of each language and scrutinize the contextual analysis. In the analysis, 128 foreign elements and 27 local elements were found in the textbook for grade VII, 132 foreign elements and 23 local elements in the textbook for grade VIII, and 144 foreign elements and 35 local elements in the grade IX textbook, demonstrating the unequal distribution of both cultures. Even though the ideal pedagogical approach of English learning moves in a different direction by means of inserting local elements, the learners are continuously exposed to the culture of the target language and forced to internalize concepts of value under its influence, which tends to marginalize their native culture.
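The imbalance reported above follows directly from the authors' counts; the short tally below recomputes the local-element share per grade (the percentages are derived here, not quoted from the paper).

```python
# Counts reported in the study: (foreign elements, local elements) per grade
counts = {"VII": (128, 27), "VIII": (132, 23), "IX": (144, 35)}

for grade, (foreign, local) in counts.items():
    share = local / (foreign + local)
    print(f"Grade {grade}: {share:.1%} local")
```

In every textbook the local culture accounts for under a fifth of all cultural elements, which is the unequal distribution the analysis highlights.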

Keywords: bidirectional process, English, local culture, oppression

Procedia PDF Downloads 265
244 Gathering Space after Disaster: Understanding the Communicative and Collective Dimensions of Resilience through Field Research across Time in Hurricane Impacted Regions of the United States

Authors: Jack L. Harris, Marya L. Doerfel, Hyunsook Youn, Minkyung Kim, Kautuki Sunil Jariwala

Abstract:

Organizational resilience refers to the ability to sustain business or general work functioning despite wide-scale interruptions. We focus on organizations and businesses as pillars of their communities and on how they attempt to sustain work when a natural disaster impacts their surrounding regions and economies. While it may be more common to think of resilience as a trait possessed by an organization, an emerging area of research recognizes that for organizations and businesses, resilience is a set of processes that are constituted through communication, social networks, and organizing. Indeed, five processes (robustness, rapidity, resourcefulness, redundancy, and external availability through social media) have been identified as critical to organizational resilience. These organizing mechanisms involve multi-level coordination, where individuals intersect with groups, organizations, and communities. Because such interactions are often networks of people and organizations coordinating material resources, information, and support, they necessarily require some way to coordinate despite being displaced. Little is known, however, about whether physical and digital spaces can substitute for one another. We are thus guided by the question: is digital space sufficient when disaster creates a scarcity of physical space? This study presents a cross-case comparison based on field research from four different regions of the United States that were impacted by Hurricanes Katrina (2005), Sandy (2012), Maria (2017), and Harvey (2017). These four cases are used to extend the science of resilience by examining multi-level processes enacted by individuals, communities, and organizations that together contribute to the resilience of disaster-struck organizations, businesses, and their communities.
Using field research about organizations and businesses impacted by the four hurricanes, we code data from interviews, participant observations, field notes, and document analysis drawn from New Orleans (post-Katrina), coastal New Jersey (post-Sandy), Houston, Texas (post-Harvey), and the lower Keys of Florida (post-Maria). This paper identifies an additional organizing mechanism, networked gathering spaces, where citizens and organizations alike coordinate and facilitate information sharing, material resource distribution, and social support. Findings show that digital space alone is not a sufficient substitute for effectively sustaining organizational resilience during a disaster. Because the data are qualitative, we expand on this finding with specific ways in which organizations and the people who lead them worked around the problem of scarce space. We propose that gatherings after disaster are a sixth mechanism that contributes to organizational resilience.

Keywords: communication, coordination, disaster management, information and communication technologies, interorganizational relationships, resilience, work

Procedia PDF Downloads 171
243 Numerical Simulation of Hydraulic Fracture Propagation in Marine-continental Transitional Tight Sandstone Reservoirs by Boundary Element Method: A Case Study of Shanxi Formation in China

Authors: Jiujie Cai, Fengxia Li, Haibo Wang

Abstract:

After years of research, offshore oil and gas development has now shifted to unconventional reservoirs, where multi-stage hydraulic fracturing technology has been widely used. However, the simulation of complex hydraulic fractures in tight reservoirs faces geological and engineering difficulties, such as large burial depths, sand-shale interbeds, and complex stress barriers. The objective of this work is to simulate hydraulic fracture propagation in the tight sandstone matrix of marine-continental transitional reservoirs, with the Shanxi Formation in the Tianhuan syncline of the Dongsheng gas field used as the research target. The characteristic parameters of vertical rock samples with rich beddings were clarified through rock mechanics experiments. The influence of rock mechanical parameters, the vertical stress difference between pay zone and bedding layer, and fracturing parameters (such as injection rates, fracturing fluid viscosity, and the number of perforation clusters within a single stage) on fracture initiation and propagation was investigated. In this paper, a 3-D fracture propagation model was built to investigate complex fracture propagation morphology by the boundary element method, considering the strength of the bonding surfaces between layers, the vertical stress difference, and fracturing parameters (such as injection rates, fluid volume, and viscosity). The research results indicate that under a vertical stress difference of 3 MPa, the fracture height can break through and enter the upper interlayer when the thickness of the overlying bedding layer is 6-9 m, considering the effect of the weak bonding surfaces between layers. The fracture propagates within the pay zone when the overlying interlayer is thicker than 13 m. The difference in fluid volume distribution between clusters can exceed 20% when the stress difference between clusters in a stage exceeds 2 MPa.
Fracture clusters in high-stress zones cannot initiate when the stress difference in the stage exceeds 5 MPa. Simulated fracture heights are much greater if the effect of the weak bonding surfaces between layers is not included. Increasing the injection rate, increasing the fracturing fluid viscosity, and reducing the number of clusters within a single stage can promote fracture height propagation through the layers. Optimizing the perforation positions and reducing the number of perforations can promote uniform fracture growth. Typical curves for fracture height estimation were established for the tight sandstone of the Lower Permian Shanxi Formation. The model results show good consistency with the micro-seismic monitoring results of hydraulic fracturing in Well 1HF.

Keywords: fracture propagation, boundary element method, fracture height, offshore oil and gas, marine-continental transitional reservoirs, rock mechanics experiment

Procedia PDF Downloads 125
242 Analysis of Electric Mobility in the European Union: Forecasting 2035

Authors: Domenico Carmelo Mongelli

Abstract:

The context is one of great uncertainty in the 27 countries of the European Union, which has adopted an epochal measure: the elimination of internal combustion engines for the traction of road vehicles starting from 2035, with complete replacement by electric vehicles. While there is great concern at various levels about unpreparedness for this change, the scientific community has yet to produce comprehensive studies of the problem: the literature deals with single aspects of the issue, and mostly at the level of individual countries, losing sight of its global implications for the entire EU. The aim of this research is to fill these gaps: the technological, plant engineering, environmental, economic, and employment aspects of the energy transition in question are addressed and connected to each other, comparing the current situation with the different scenarios that could exist in 2035 and in the following years until total disposal of the internal combustion engine vehicle fleet across the entire EU. The methodologies adopted by the research consist of analyzing the entire life cycle of electric vehicles and batteries, using specific databases, and of dynamically simulating, using specific calculation codes, the application of the results of this analysis to the entire EU electric vehicle fleet from 2035 onwards.
Energy balances will be drawn up to evaluate the net energy saved; plant balances to determine the additional demand for power and electrical energy and the sizing of new renewable-source plants needed to cover electricity needs; economic balances to determine the investment costs of this transition, the savings during the operation phase, and the payback times of the initial investments; environmental balances to determine, under the different energy-mix scenarios foreseen for 2035, the reductions in CO2eq and the environmental effects resulting from the increased production of lithium for batteries; and employment balances to estimate how many jobs will be lost and recovered in the conversion of the automotive industry, related industries, and the refining, distribution, and sale of petroleum products, and how many will be created by technological innovation, the increased demand for electricity, and the construction and management of street charging points. New algorithms for forecast optimization are developed, tested, and validated. Compared to other published material, the research adds an overall picture of the energy transition, capturing the advantages and disadvantages of its different aspects and evaluating their magnitudes and possible improvements within an organic overall treatment of the topic. The results achieved make it possible to identify the strengths and weaknesses of the energy transition, determine possible solutions to mitigate the weaknesses, and simulate and then evaluate their effects, establishing the most suitable solutions to make this transition feasible.
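One small piece of the economic balance described above is the payback time of the initial investment, which in its simplest (undiscounted) form is just the investment divided by the annual operating savings. The CAPEX and savings figures below are hypothetical placeholders, not the study's estimates.

```python
# Simple (undiscounted) payback-time sketch; all figures are hypothetical.
capex = 1_200.0          # extra upfront cost of switching one vehicle, EUR
annual_savings = 300.0   # fuel-minus-electricity operating savings, EUR/yr

def payback_years(capex, annual_savings):
    """Undiscounted simple payback period in years."""
    return capex / annual_savings

print(payback_years(capex, annual_savings))  # → 4.0
```

The study's actual economic balances would refine this with discounting and fleet-level aggregation across the EU scenarios.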

Keywords: engines, Europe, mobility, transition

Procedia PDF Downloads 60
241 Distributed Energy Resources in Low-Income Communities: A Public Policy Proposal

Authors: Rodrigo Calili, Anna Carolina Sermarini, João Henrique Azevedo, Vanessa Cardoso de Albuquerque, Felipe Gonçalves, Gilberto Jannuzzi

Abstract:

The diffusion of Distributed Energy Resources (DER) has caused structural changes in the relationship between consumers and electrical systems. Photovoltaic Distributed Generation (PVDG), in particular, is an essential strategy for achieving the 2030 Agenda goals, especially SDG 7 and SDG 13. However, most projects involving this technology in Brazil are restricted to the wealthiest classes of society and have not yet reached the low-income population, a concern central to theories of energy justice. In the pursuit of energy equality, one of the policies adopted by governments is the social electricity tariff (SET), which provides discounts on energy tariffs/bills. However, merely granting this benefit may not be effective, and it is possible to combine it with DER technologies such as PVDG. Thus, this work aims to evaluate the economic viability of a policy replacing the social electricity tariff (the current policy aimed at the low-income population in Brazil) with PVDG projects. To this end, a proprietary methodology was developed that included mapping the stakeholders, identifying critical variables, simulating policy options, and carrying out an analysis in the Brazilian context. The simulation answered two key questions: in which municipalities low-income consumers would have lower bills with PVDG compared to the SET, and which consumers in a given city would have increased subsidies, which are currently provided both for solar energy in Brazil and for the social tariff. An economic model was created to verify the feasibility of the proposed policy in each municipality in the country, considering geographic factors (the tariff of the particular distribution utility, the solar radiation at the specific location, etc.). To validate these results, four sensitivity analyses were performed: variation of the simultaneity factor between generation and consumption, variation of the tariff readjustment rate, zeroing CAPEX, and exemption from state tax.
The behind-the-meter generation modality proved more promising than the construction of a shared plant. However, although the behind-the-meter modality presents better results, it is more complex to adopt because of issues related to the infrastructure of the most vulnerable communities (e.g., precarious electrical networks and the need to reinforce roofs). Considering the shared power plant modality, many opportunities are still envisaged, since the risk of investing in such a policy can be mitigated. Furthermore, this modality can be an alternative because it mitigates the risk of default, allows greater control of users, and facilitates operation and maintenance. Finally, it was also found that in some regions of Brazil the continuity of the SET presents more economic benefits than its replacement by PVDG. Nevertheless, the proposed policy offers many opportunities. For future work, the model may include other parameters, such as the cost of engaging low-income populations and business risk. In addition, other renewable sources of distributed generation can be studied for this purpose.

Keywords: low income, subsidy policy, distributed energy resources, energy justice

Procedia PDF Downloads 111
240 Al2O3-Dielectric AlGaN/GaN Enhancement-Mode MOS-HEMTs by Using Ozone Water Oxidization Technique

Authors: Ching-Sung Lee, Wei-Chou Hsu, Han-Yin Liu, Hung-Hsi Huang, Si-Fu Chen, Yun-Jung Yang, Bo-Chun Chiang, Yu-Chuang Chen, Shen-Tin Yang

Abstract:

AlGaN/GaN high electron mobility transistors (HEMTs) have been intensively studied due to their intrinsic advantages of high breakdown electric field, high electron saturation velocity, and excellent chemical stability. They are also suitable for ultra-violet (UV) photodetection due to the corresponding wavelengths of the GaN bandgap. To improve the optical responsivity by decreasing the dark current caused by gate leakage problems and limited Schottky barrier heights in GaN-based HEMT devices, various metal-oxide-semiconductor HEMTs (MOS-HEMTs) have been devised by using atomic layer deposition (ALD), molecular beam epitaxy (MBE), metal-organic chemical vapor deposition (MOCVD), liquid phase deposition (LPD), and RF sputtering. The gate dielectrics include MgO, HfO2, Al2O3, La2O3, and TiO2. In order to provide complementary circuit operation, enhancement-mode (E-mode) devices have lately been studied using techniques of fluorine treatment, p-type cap layers, piezoneutralization layers, and MOS-gate structures. This work reports an Al2O3-dielectric Al0.25Ga0.75N/GaN E-mode MOS-HEMT design using a cost-effective ozone water oxidization technique. The present ozone oxidization method offers the advantages of low-cost processing facilities, processing simplicity, compatibility with device fabrication, and room-temperature operation under atmospheric pressure. It can further reduce the gate-to-channel distance and improve the transconductance (gm) gain for a specific oxide thickness, since the formation of the Al2O3 consumes part of the AlGaN barrier at the same time. The epitaxial structure of the studied devices was grown by using the MOCVD technique. On a Si substrate, the layer structure includes a 3.9 μm C-doped GaN buffer, a 300 nm GaN channel layer, and a 5 nm Al0.25Ga0.75N barrier layer. Mesa etching was performed to provide electrical isolation by using an inductively coupled-plasma reactive ion etcher (ICP-RIE).
Ti/Al/Au were thermally evaporated and annealed to form the source and drain ohmic contacts. The device was immersed in an H2O2 solution pumped with ozone gas generated by using an OW-K2 ozone generator. Ni/Au were deposited as the gate electrode to complete fabrication of the MOS-HEMT. The formed Al2O3 oxide thickness is 7 nm, and the remaining AlGaN barrier thickness is 2 nm. A reference HEMT device was also fabricated for comparison on the same epitaxial structure. The gate dimensions are 1.2 × 100 µm² with a source-to-drain spacing of 5 μm for both devices. The dielectric constant (k) of the Al2O3 was characterized to be 9.2 by using C-V measurements. Reduced interface state density after oxidization has been verified by the low-frequency noise spectra, Hooge coefficients, and pulsed I-V measurements. Improved device characteristics at temperatures of 300-450 K have been achieved for the present MOS-HEMT design. Consequently, Al2O3-dielectric Al0.25Ga0.75N/GaN E-mode MOS-HEMTs fabricated by the ozone water oxidization method are reported. In comparison with a conventional Schottky-gate HEMT, the MOS-HEMT design has demonstrated excellent enhancements of 138% (176%) in gm,max, 118% (139%) in IDS,max, and 53% (62%) in BVGD, and a 3 (2)-order reduction in IG leakage at VGD = -60 V at 300 (450) K. This work is promising for millimeter-wave integrated circuit (MMIC) and three-terminal active UV photodetector applications.
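The C-V extraction of the dielectric constant mentioned above follows the parallel-plate relation k = C·t_ox/(ε0·A). The capacitance value below is a hypothetical reading chosen to be consistent with the reported 7 nm thickness, 1.2 × 100 µm² gate, and k ≈ 9.2; it is not a measured figure from the paper.

```python
# Parallel-plate extraction of the oxide dielectric constant from C-V data.
# The capacitance below is hypothetical, chosen to be consistent with the
# abstract's 7 nm Al2O3, 1.2 x 100 um^2 gate, and k ~ 9.2.
EPS0 = 8.854e-12            # vacuum permittivity, F/m
t_ox = 7e-9                 # Al2O3 thickness, m (from the abstract)
area = 1.2e-6 * 100e-6      # gate area, m^2 (1.2 um x 100 um)
c_meas = 1.396e-12          # hypothetical accumulation capacitance, F

k = c_meas * t_ox / (EPS0 * area)
print(round(k, 1))          # close to the reported 9.2
```

The same relation explains the transconductance benefit the authors cite: because the oxide consumes part of the AlGaN barrier, the total gate-to-channel distance shrinks for a given oxide thickness, raising the gate capacitance.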

Keywords: MOS-HEMT, enhancement mode, AlGaN/GaN, passivation, ozone water oxidation, gate leakage

Procedia PDF Downloads 260
239 Lithological Mapping and Iron Deposits Identification in El-Bahariya Depression, Western Desert, Egypt, Using Remote Sensing Data Analysis

Authors: Safaa M. Hassan, Safwat S. Gabr, Mohamed F. Sadek

Abstract:

This study addresses lithological mapping and iron oxide detection in the old mine areas of the El-Bahariya Depression, Western Desert, using ASTER and Landsat-8 remote sensing data. Four old iron ore occurrences, namely the El-Gedida, El-Haraa, Ghurabi, and Nasir mine areas, are found in the El-Bahariya area. This study aims to find new high-potential areas for iron mineralization around the El-Bahariya Depression. Image processing methods such as principal component analysis (PCA) and band ratio (b4/b5, b5/b6, b6/b7, and 4/2, 6/7, band 6) images were used for lithological identification/mapping, including the iron content in the investigated area. ASTER and Landsat-8 visible and short-wave infrared data were found to help map the ferruginous sandstones, iron oxides, and clay minerals in and around the old mine areas of the El-Bahariya Depression. The Landsat-8 band ratios and principal components of this study showed good discrimination of the lithological units, especially the ferruginous sandstones and iron zones (hematite and limonite), along with detection of probable high-potential areas for iron mineralization that can be exploited in the future, and proved the ability of Landsat-8 and ASTER data to map these features. Minimum Noise Fraction (MNF), Mixture Tuned Matched Filtering (MTMF), and pixel purity index methods, as well as the Spectral Angle Mapper classifier algorithm, successfully discriminated the hematite and limonite content within the iron zones in the study area. Various ASTER image spectra and ASD field spectra of hematite and limonite and the surrounding rocks were compared and found to be consistent in terms of the presence of absorption features in the range from 1.95 to 2.3 μm for hematite and limonite. The pixel purity index algorithm and two sub-pixel spectral methods, namely Mixture Tuned Matched Filtering (MTMF) and matched filtering (MF), were applied to the ASTER bands to delineate iron oxide (hematite and limonite) rich zones within the rock units.
The results were validated in the field by comparing image spectra of spectrally anomalous zones with the USGS resampled laboratory spectra of hematite and limonite samples using ASD measurements. A number of iron-oxide-rich zones, in addition to the main surface exposures of the El-Gadidah Mine, were confirmed in the field. The proposed method is a successful application of spectral mapping of iron oxide deposits in the exposed rock units (i.e., ferruginous sandstone), and the present approach to processing both ASTER and ASD hyperspectral data can be used to delineate iron-rich zones occurring within similar geological provinces in any part of the world.
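The 4/2 band ratio listed above for iron oxide highlighting can be sketched on a toy grid: iron oxides reflect strongly in the red and absorb in the blue, so a high red/blue ratio flags candidate pixels. The reflectance values and the 2.0 threshold below are illustrative assumptions, not calibrated scene data.

```python
# Band-ratio highlighting of iron-oxide-rich pixels: a minimal sketch.
# Landsat-8 band 4 (red) / band 2 (blue) is the 4/2 ratio from the abstract;
# the 2x2 reflectance grids are toy values, not real scene data.
band4 = [[0.30, 0.12], [0.28, 0.10]]   # red reflectance
band2 = [[0.10, 0.11], [0.09, 0.12]]   # blue reflectance

ratio = [[r / b for r, b in zip(row_r, row_b)]
         for row_r, row_b in zip(band4, band2)]

THRESHOLD = 2.0  # assumed cut-off for this toy example
mask = [[v > THRESHOLD for v in row] for row in ratio]
print(mask)  # → [[True, False], [True, False]]
```

A real workflow would compute this per pixel over the full scene (e.g., with NumPy arrays) before the PCA and sub-pixel mapping steps described above.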

Keywords: Landsat-8, ASTER, lithological mapping, iron exploration, western desert

Procedia PDF Downloads 142
238 Cross-Country Mitigation Policies and Cross Border Emission Taxes

Authors: Massimo Ferrari, Maria Sole Pagliari

Abstract:

Pollution is a classic example of an economic externality: agents who produce it do not face direct costs from emissions. Therefore, there are no direct economic incentives for reducing pollution. One way to address this market failure would be to tax emissions directly. However, because emissions are global, governments might find it optimal to wait and let foreign countries tax emissions, so that they can enjoy the benefits of lower pollution without facing its direct costs. In this paper, we first document the empirical relation between pollution and economic output with static and dynamic regression methods. We show that there is a negative relation between aggregate output and the stock of pollution (measured as the stock of CO₂ emissions). This relationship is also highly non-linear, increasing at an exponential rate. In the second part of the paper, we develop and estimate a two-country, two-sector model for the US and the euro area. With this model, we aim to analyze how the public sector should respond to higher emissions and what direct costs these policies might entail. In the model, there are two types of firms: brown firms (which produce with a polluting technology) and green firms. Brown firms also produce an externality, CO₂ emissions, which has detrimental effects on aggregate output. As brown firms do not face direct costs from polluting, they have no incentive to reduce emissions. Notably, emissions in our model are global: the stock of CO₂ in the economy affects all countries, independently of where it is produced. This simplified economy captures the main trade-off between emissions and production, generating a classic market failure. According to our results, the current level of emissions reduces output by between 0.4% and 0.75%. Notably, these estimates lie at the upper bound of the distribution of those delivered by studies in the early 2000s.
To address the market failure, governments should step in and introduce taxes on emissions. With the tax, brown firms pay a cost for polluting and hence face an incentive to move to green technologies. Governments, however, might also adopt a beggar-thy-neighbour strategy. Reducing emissions is costly, as it moves production away from the 'optimal' production mix of brown and green technology. Because emissions are global, a government could simply wait for the other country to tackle climate change, reaping the benefits without facing any costs. We study how this strategic game unfolds and show three important results: first, cooperation is first-best optimal from a global perspective; second, countries face incentives to deviate from the cooperative equilibrium; third, tariffs on imported brown goods (the only retaliation policy in case of deviation from the cooperative equilibrium) are ineffective because the exchange rate would move to compensate. We finally study monetary policy when the costs of climate change rise and show that the monetary authority should react more strongly to deviations of inflation from its target.
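The strategic trade-off described above can be sketched as a 2x2 game with hypothetical payoffs, giving the structure of a classic prisoner's dilemma: taxing jointly maximizes joint welfare, yet each country gains by unilaterally waiting. The payoff numbers are illustrative assumptions, not values from the estimated model.

```python
# The strategic game described above, with hypothetical payoffs
# (higher = better). Strategies: T = tax emissions, W = wait (free-ride).
payoffs = {  # (home, foreign) payoffs for (home_strategy, foreign_strategy)
    ("T", "T"): (3, 3),  # cooperation: both bear costs, pollution falls most
    ("T", "W"): (1, 4),  # home pays, foreign free-rides
    ("W", "T"): (4, 1),
    ("W", "W"): (2, 2),  # no one taxes: the externality persists
}

# Cooperation maximizes joint welfare...
joint = {s: sum(p) for s, p in payoffs.items()}
best_joint = max(joint, key=joint.get)

# ...but each country gains by deviating unilaterally from (T, T)
home_gain = payoffs[("W", "T")][0] - payoffs[("T", "T")][0]
print(best_joint, home_gain)
```

With these payoffs, (T, T) is jointly best while `home_gain` is positive, which is exactly the tension between the paper's first and second results.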

Keywords: climate change, general equilibrium, optimal taxation, monetary policy

Procedia PDF Downloads 157
237 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation

Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony

Abstract:

Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems comprising multiple conflicting objectives with no clear optimal solution. Recent advances in computer science, and their consequent constructive influence on the architectural discourse, have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process in nature within an efficient timescale. Many of the developed processes for generating a population of candidate solutions to a design problem through an evolutionary, stochastic search process are driven by the application of both environmental and architectural parameters. These methods allow conflicting objectives to be simultaneously, independently, and objectively optimized. This is an essential approach in design problems whose final product must address the demands of a multitude of individuals with various requirements. However, one of the main challenges encountered in applying an evolutionary process as a design tool is the simulation's ability to maintain variation among design solutions in the population while simultaneously increasing in fitness. This is most commonly known as the 'golden rule' of balancing exploration and exploitation over time; the difficulty of achieving this balance in the simulation is due to the tendency for either variation or optimization to be favored as the simulation progresses.
In such cases, the generated population of candidate solutions has either optimized very early in the simulation, or has continued to maintain high levels of variation from which an optimal set could not be discerned; thus, providing the user with a solution set that has not evolved efficiently towards the objectives outlined in the problem at hand. As such, the experiments presented in this paper seek to achieve the ‘golden rule’ by incorporating a mathematical fitness criterion for the development of an urban tissue comprising the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation factor. Traditionally, the standard deviation factor has been used as an analytical value rather than a generative one, conventionally used to measure the distribution of variation within a population by calculating the degree by which the majority of the population deviates from the mean. A lower standard deviation value indicates that a higher number of the population is clustered around the mean, and thus limited variation within the population, while a higher standard deviation value reflects greater variation within the population and a lack of convergence towards an optimal solution. The results presented will aim to clarify the extent to which the utilization of the standard deviation factor as a fitness criterion can be advantageous to generating fitter individuals in a more efficient timeframe when compared to conventional simulations that only incorporate architectural and environmental parameters.
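The abstract does not include the simulations themselves; the following is a minimal sketch (with hypothetical objective values) of how population standard deviation can be turned from an analytical measure into a generative fitness value, by scoring a population on how far its spread lies from a target spread:

```python
import numpy as np

def diversity_fitness(population_objectives, target_sd=1.0):
    """Score a population by how far its spread is from a target spread.

    population_objectives: 1-D array with one objective value per candidate.
    Returns (sd, penalty): penalty is 0 when the population's standard
    deviation exactly matches target_sd, so minimizing the penalty keeps
    variation in the population while fitness objectives are optimized.
    """
    sd = float(np.std(population_objectives))
    penalty = abs(sd - target_sd)
    return sd, penalty

# A prematurely converged population (little variation) vs. a diverse one.
converged = np.array([10.0, 10.1, 9.9, 10.0])
diverse = np.array([4.0, 9.0, 13.0, 18.0])

sd_c, pen_c = diversity_fitness(converged)
sd_d, pen_d = diversity_fitness(diverse)
# The diverse population has the larger standard deviation.
```

The `target_sd` parameter and the objective values are illustrative assumptions, not taken from the paper; the point is only that a higher standard deviation corresponds to greater variation, and that the deviation itself can enter the fitness function.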

Keywords: architecture, computation, evolution, standard deviation, urban

Procedia PDF Downloads 131
236 Pharmacophore-Based Modeling of a Series of Human Glutaminyl Cyclase Inhibitors to Identify Lead Molecules by Virtual Screening, Molecular Docking and Molecular Dynamics Simulation Study

Authors: Ankur Chaudhuri, Sibani Sen Chakraborty

Abstract:

In humans, glutaminyl cyclase activity is highly abundant in neuronal and secretory tissues and is preferentially restricted to the hypothalamus and pituitary. The N-terminal modification of β-amyloid (Aβ) peptides by the generation of pyro-glutamyl (pGlu)-modified Aβs (pE-Aβs) is an important process in the initiation of the formation of neurotoxic plaques in Alzheimer’s disease (AD). This process is catalyzed by glutaminyl cyclase (QC). The expression of QC is characteristically up-regulated in the early stage of AD, and the hallmark of the inhibition of QC is the prevention of the formation of pE-Aβs and plaques. A computer-aided drug design (CADD) process was employed to guide the design of potentially active compounds and to understand their inhibitory potency against human glutaminyl cyclase (QC). This work elaborates the ligand-based and structure-based pharmacophore exploration of glutaminyl cyclase (QC) by using the known inhibitors. Three-dimensional (3D) quantitative structure-activity relationship (QSAR) methods were applied to 154 compounds with known IC50 values. All the inhibitors were divided into two sets: a training set and a test set. The training set was used to build the quantitative pharmacophore model based on the principle of structural diversity, whereas the test set was employed to evaluate the predictive ability of the pharmacophore hypotheses. A chemical feature-based pharmacophore model was generated from the 92 known training-set compounds by the HypoGen module implemented in the Discovery Studio 2017 R2 software package. The best hypothesis (Hypo1) was selected based upon the highest correlation coefficient (0.8906), lowest total cost (463.72), and lowest root mean square deviation (2.24 Å). A higher correlation coefficient value indicates greater predictive activity of the hypothesis, whereas a lower root mean square deviation signifies a small deviation of experimental activity from the predicted one. 
The best pharmacophore model (Hypo1) comprised four features: two hydrogen bond acceptors, one hydrogen bond donor, and one hydrophobic feature. Hypo1 was validated by several parameters, such as test-set activity prediction, cost analysis, Fischer's randomization test, the leave-one-out method, and a ligand-profiler heat map. The predicted features were then used for virtual screening of potential compounds from the NCI, ASINEX, Maybridge and Chembridge databases. More than seven million compounds were used for this purpose. The hit compounds were filtered by drug-likeness and pharmacokinetic properties. The selected hits were docked into the high-resolution three-dimensional structure of the target protein glutaminyl cyclase (PDB ID: 2AFU/2AFW) to filter these hits further. To validate the molecular docking results, the most active compound from the dataset was selected as a reference molecule. From the density functional theory (DFT) study, ten molecules were selected based on their highest HOMO (highest occupied molecular orbital) energies and lowest bandgap values. Molecular dynamics simulations with explicit solvation of the final ten hit compounds revealed that a large number of non-covalent interactions were formed with the binding site of human glutaminyl cyclase. It was suggested that the hit compounds reported in this study could help in the future design of potent lead inhibitors against human glutaminyl cyclase.
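The two statistics used above to rank hypotheses, the correlation coefficient and the root mean square deviation between experimental and predicted activities, are straightforward to compute. A minimal sketch with hypothetical activity values (not the paper's data, which came from Discovery Studio's HypoGen module):

```python
import numpy as np

# Hypothetical experimental vs. predicted activities for a small test set
# (e.g. pIC50 values); illustrative only, not the 154-compound dataset.
experimental = np.array([5.2, 6.1, 7.4, 4.8, 6.9, 8.0])
predicted    = np.array([5.0, 6.3, 7.1, 5.1, 6.6, 7.8])

# Pearson correlation coefficient: closer to 1 means better predictive power.
r = float(np.corrcoef(experimental, predicted)[0, 1])

# Root mean square deviation: lower means predictions track experiment closely.
rmsd = float(np.sqrt(np.mean((experimental - predicted) ** 2)))
```

A hypothesis with a high `r` and a low `rmsd`, like Hypo1's reported 0.8906 and 2.24 Å cost-function deviation, is the one retained for virtual screening.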

Keywords: glutaminyl cyclase, hit lead, pharmacophore model, simulation

Procedia PDF Downloads 130
235 Residents' Incomes in Local Government Unit as the Major Determinant of Local Budget Transparency in Croatia: Panel Data Analysis

Authors: Katarina Ott, Velibor Mačkić, Mihaela Bronić, Branko Stanić

Abstract:

The determinants of national budget transparency have been widely discussed in the literature, while research on the determinants of local budget transparency is scarce and empirically inconclusive, particularly in the new, fiscally centralised, EU member states. To fill the gap, we combine two strands of the literature: that concerned with public administration and public finance, shedding light on the economic and financial determinants of local budget transparency, and that on the political economy of transparency (principal-agent theory), covering the relationships among politicians and between politicians and voters. Our main hypothesis states that variables describing residents’ capacity have a greater impact on local budget transparency than variables indicating the institutional capacity of local government units (LGUs). Additional sub-hypotheses test the impact of each variable analysed on local budget transparency. We address the determinants of local budget transparency in Croatia, measured by the number of key local budget documents published on the LGUs’ websites. By using a data set of 128 cities and 428 municipalities over the 2015-2017 period and by applying panel data analysis based on the Poisson and negative binomial distributions, we test our main hypothesis and sub-hypotheses empirically. We measure different characteristics of institutional and residents’ capacity for each LGU. The age, education and ideology of the mayor/municipality head, political competition indicators, the number of employees, current budget revenues and direct debt per capita have been used as measures of the institutional capacity of an LGU. Residents’ capacity in each LGU has been measured through the number of citizens and their average age, as well as by average income per capita. The most important determinant of local budget transparency is average residents' income per capita at both the city and municipality level. 
The results are in line with most previous research results in fiscally decentralised countries. In the context of a fiscally centralised country with numerous small LGUs, most of which have low administrative and fiscal capacity, this has a theoretical rationale in legitimacy and principal-agent theory (opportunistic motives of the incumbent). The result is robust and significant, but because various other results change between the city and municipality levels (e.g. ideology and political competition), there is a need for further research (both on identifying other determinants and on methods of analysis). Since in Croatia the fiscal capacity of an LGU depends heavily on the income of its residents, units with higher per capita incomes in many cases also have higher budget revenues, allowing them to engage more employees and resources. In addition, residents’ incomes might also be positively associated with local budget transparency because of higher citizen demand for such transparency. Residents with higher incomes expect more public services and have more access to and experience in using the Internet, and will thus typically demand more budget information on the LGUs’ websites.
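The dependent variable here, the number of published budget documents, is a count, which is why Poisson and negative binomial models are appropriate. As a rough, self-contained illustration of the count-data approach (not the authors' estimation, and with entirely synthetic data), a Poisson regression with a log link can be fitted by Newton-Raphson:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: standardized per-capita income for 200 hypothetical local
# units, and a document count whose expected value rises with income.
n = 200
income = rng.normal(0.0, 1.0, n)
true_b0, true_b1 = 0.5, 0.6
y = rng.poisson(np.exp(true_b0 + true_b1 * income))

# Poisson regression (log link) fitted by Newton-Raphson / Fisher scoring.
X = np.column_stack([np.ones(n), income])
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)                      # fitted means
    grad = X.T @ (y - mu)                      # score vector
    hess = X.T @ (X * mu[:, None])             # Fisher information
    beta = beta + np.linalg.solve(hess, grad)  # Newton step

# beta[1] > 0 recovers the positive income effect built into the data.
```

A negative binomial model would add a dispersion parameter to handle overdispersed counts; the coefficient on income plays the same role in either specification.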

Keywords: budget transparency, count data, Croatia, local government, political economy

Procedia PDF Downloads 183
234 Li2O Loss of Lithium Niobate Nanocrystals during High-Energy Ball-Milling

Authors: Laura Kocsor, Laszlo Peter, Laszlo Kovacs, Zsolt Kis

Abstract:

The aim of our research is to prepare rare-earth-doped lithium niobate (LiNbO3) nanocrystals having only a few dopant ions in the focal point of an exciting laser beam. These samples will be used to achieve individual addressing of the dopant ions by light beams in a confocal microscope setup. One method for the preparation of nanocrystalline materials is to reduce the particle size by mechanical grinding. High-energy ball-milling was used in several works to produce nano lithium niobate. Previously, it was reported that dry high-energy ball-milling of lithium niobate in a shaker mill results in the partial reduction of the material, which leads to a balanced formation of bipolarons and polarons yielding a gray color, together with oxygen release and Li2O segregation on the open surfaces. In the present work we focus on preparing LiNbO3 nanocrystals by high-energy ball-milling using a Fritsch Pulverisette 7 planetary mill. Every ball-milling process was carried out in a zirconia vial with zirconia balls of different sizes (from 3 mm down to 0.1 mm), with wet grinding in water and grinding times of less than an hour. By gradually decreasing the ball size to 0.1 mm, an average particle size of about 10 nm could be obtained, as determined by dynamic light scattering and verified by scanning electron microscopy. High-energy ball-milling resulted in sample darkening, evidenced by optical absorption spectroscopy measurements, indicating that the material underwent partial reduction. The unwanted lithium oxide loss decreases the Li/Nb ratio in the crystal, strongly influencing the spectroscopic properties of lithium niobate. Zirconia contamination was found in the ground samples, as proved by energy-dispersive X-ray spectroscopy measurements; however, it cannot be explained based on the hardness properties of the materials involved in the ball-milling process. 
It can be understood by taking into account the presence of lithium hydroxide, formed from the segregated lithium oxide and water during the ball-milling process, through chemically induced abrasion. The quantity of the segregated Li2O was measured by coulometric titration. During the wet milling process in the planetary mill, it was found that the lithium oxide loss increases linearly in the early phase of the milling process, after which a saturation of the Li2O loss can be seen. This change goes along with the disappearance of the relatively large particles until a relatively narrow size distribution is achieved, in accord with the dynamic light scattering measurements. With a 3 mm ball size and an 1100 rpm rotation rate, the mean particle size achieved is 100 nm, and the total Li2O loss is about 1.2 wt.% of the original LiNbO3. Further investigations have been done to minimize the Li2O segregation during the ball-milling process. Since the Li2O loss was observed to increase with the growing total surface of the particles, the influence of the ball-milling parameters on its quantity has also been studied.
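The described behaviour, linear Li2O loss at first and saturation later, is consistent with simple first-order saturation kinetics. The abstract gives no functional form, so the following is only an illustrative model, taking the reported total loss of about 1.2 wt.% as the saturation value and assuming a hypothetical rate constant:

```python
import numpy as np

# Illustrative first-order saturation model for cumulative Li2O loss vs.
# milling time. L_inf matches the reported ~1.2 wt.% total loss; the rate
# constant k is an assumed value, not a measured one.
L_inf = 1.2   # saturation loss, wt.% of the original LiNbO3
k = 0.05      # assumed rate constant, 1/min

def li2o_loss(t_min):
    """Cumulative Li2O loss (wt.%) after t_min minutes of milling."""
    return L_inf * (1.0 - np.exp(-k * t_min))

t = np.array([1.0, 2.0, 4.0, 60.0, 120.0])
loss = li2o_loss(t)

# At early times the model is close to its linear approximation L_inf*k*t,
# matching the observed initial linear increase before saturation.
linear_early = L_inf * k * t[:2]
```

Fitting such a curve to titration data would give the rate constant and confirm the saturation value; here both numbers are placeholders.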

Keywords: high-energy ball-milling, lithium niobate, mechanochemical reaction, nanocrystals

Procedia PDF Downloads 133
233 Sugarcane Trash Biochar: Effect of the Temperature in the Porosity

Authors: Gabriela T. Nakashima, Elias R. D. Padilla, Joao L. Barros, Gabriela B. Belini, Hiroyuki Yamamoto, Fabio M. Yamaji

Abstract:

Biochar can be an alternative use for sugarcane trash. Biochar is a solid material obtained from pyrolysis, which is the thermal degradation of biomass at low or no O₂ concentration. Pyrolysis transforms the carbon that is commonly found in other organic structures into a carbon with more stability that can resist microbial decomposition. Biochar has a versatility of uses, such as soil fertility, carbon sequestration, energy generation, ecological restoration, and soil remediation. Biochar has a great ability to retain water and nutrients in the soil, so this material can improve the efficiency of irrigation and fertilization. The aim of this study was to characterize biochar produced from sugarcane trash at three different pyrolysis temperatures and determine the lowest temperature with a high yield and carbon content. Physical characterization of this biochar was performed to help identify the best production conditions. Sugarcane (Saccharum officinarum) trash was collected at Corredeira Farm, located in Ibaté, São Paulo State, Brazil. The farm has 800 hectares of planted area with an average yield of 87 t·ha⁻¹. The sugarcane varieties planted on the farm are: RB 855453, RB 867515, RB 855536, SP 803280, SP 813250. The sugarcane trash was dried and crushed into 50 mm pieces. Crucibles with lids were used to hold the sugarcane trash samples. A large amount of sugarcane trash was added to each crucible to minimize the O₂ content. Biochar production was performed at three different pyrolysis temperatures (200°C, 325°C, 450°C) with a 2-hour residence time in the muffle furnace. The gravimetric yield of the biochar was obtained. Proximate analysis of the biochar was done using ASTM E-872 and ABNT NBR 8112. Volatile matter and ash content were calculated by direct weight loss, and fixed carbon content was calculated by difference. Porosity was evaluated using an automatic gas adsorption device, Autosorb-1, with CO₂, as described by Nakatani. 
Approximately 0.5 g of biochar with a 2 mm particle size was used for each measurement. Vacuum outgassing was performed as a pre-treatment under different conditions for each biochar temperature. The pore size distribution of the micropores was determined using the Horváth-Kawazoe method. The biochar presented a different color for each treatment. The 200°C biochar presented a higher number of pieces of 10 mm or more and, after the 2 h residence time in the muffle furnace, did not present the dark black color of the other treatments. This treatment also had the highest content of volatiles and the lowest amount of fixed carbon. In the porosity analysis, as the treatment temperature increased, the number of pores also increased. The increase in temperature resulted in a better-quality biochar. The pores in biochar can help in soil aeration, adsorption, and water retention. Acknowledgment: This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brazil – PROAP-CAPES, PDSE and CAPES - Finance Code 001.
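The proximate-analysis arithmetic described above (volatile matter and ash by direct weight loss, fixed carbon by difference) can be sketched as follows; the masses are hypothetical, not the paper's measurements:

```python
def proximate_analysis(m_dry, m_after_vm, m_ash):
    """Proximate analysis on a dry basis, by direct weight loss.

    m_dry      -- oven-dry sample mass (g)
    m_after_vm -- mass remaining after the volatile-matter ignition step (g)
    m_ash      -- residual ash mass after complete combustion (g)
    Returns (volatile_matter, ash, fixed_carbon) in percent of dry mass.
    """
    vm = 100.0 * (m_dry - m_after_vm) / m_dry
    ash = 100.0 * m_ash / m_dry
    fixed_carbon = 100.0 - vm - ash      # fixed carbon by difference
    return vm, ash, fixed_carbon

# Hypothetical masses for a biochar sample (not data from this study).
vm, ash, fc = proximate_analysis(1.000, 0.350, 0.080)
# vm = 65.0 %, ash = 8.0 %, fc = 27.0 %
```

A lower-temperature char, like the 200°C treatment described above, would show a higher `vm` and a lower `fc` than chars produced at 325°C or 450°C.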

Keywords: proximate analysis, pyrolysis, soil amendment, sugarcane straw

Procedia PDF Downloads 211
232 Cellular Mechanisms Involved in the Radiosensitization of Breast- and Lung Cancer Cells by Agents Targeting Microtubule Dynamics

Authors: Elsie M. Nolte, Annie M. Joubert, Roy Lakier, Maryke Etsebeth, Jolene M. Helena, Marcel Verwey, Laurence Lafanechere, Anne E. Theron

Abstract:

Treatment regimens for breast- and lung cancers may include both radiation- and chemotherapy. Ideally, a pharmaceutical agent which selectively sensitizes cancer cells to gamma (γ)-radiation would allow administration of lower doses of each modality, yielding synergistic anti-cancer benefits and lower metastasis occurrence, in addition to decreasing the side-effect profiles. A range of 2-methoxyestradiol (2-ME) analogues, namely 2-ethyl-3-O-sulphamoyl-estra-1,3,5(10),15-tetraene-3-ol-17-one (ESE-15-one), 2-ethyl-3-O-sulphamoyl-estra-1,3,5(10),15-tetraen-17-ol (ESE-15-ol) and 2-ethyl-3-O-sulphamoyl-estra-1,3,5(10),16-tetraene (ESE-16), were in silico-designed by our laboratory, with the aim of improving the parent compound’s bioavailability in vivo. The main effect of these compounds is the disruption of microtubule dynamics with a resultant mitotic accumulation and induction of programmed cell death in various cancer cell lines. This in vitro study aimed to determine the cellular responses involved in the radiation sensitization effects of these analogues at low doses in breast- and lung cancer cell lines. The oestrogen receptor-positive MCF-7-, oestrogen receptor-negative MDA-MB-231- and triple-negative BT-20 breast cancer cell lines, as well as the A549 lung cancer cell line, were used. The minimal compound- and radiation doses able to induce apoptosis were determined using annexin-V and cell cycle progression markers. These doses (cell line dependent) were used to pre-sensitize the cancer cells 24 hours prior to 6 gray (Gy) radiation. Experiments were conducted on samples exposed to the individual- as well as the combination treatment conditions in order to determine whether the combination treatment yielded an additive cell death response. Morphological studies included light-, fluorescence- and transmission electron microscopy. 
Apoptosis induction was determined by flow cytometry employing annexin V, cell cycle analysis, B-cell lymphoma 2 (Bcl-2) signalling, and reactive oxygen species (ROS) production. Clonogenic studies were performed by allowing colony formation for 10 days post-radiation. Deoxyribonucleic acid (DNA) damage was quantified via γ-H2AX foci and micronuclei quantification. Amplification of the p53 signalling pathway was determined by western blot. Results indicated that exposing breast- and lung cancer cells to nanomolar concentrations of these analogues 24 hours prior to γ-radiation induced more cell death than the compound- and radiation treatments alone. Hypercondensed chromatin, decreased cell density, a damaged cytoskeleton and an increase in apoptotic body formation were observed in cells exposed to the combination treatment condition. An increased number of cells present in the sub-G1 phase, as well as increased annexin-V staining, elevated ROS formation and decreased Bcl-2 signalling, confirmed the additive effect of the combination treatment. In addition, colony formation decreased significantly. The p53 signalling pathways were significantly amplified in cells exposed to the analogues 24 hours prior to radiation, as was the amount of DNA damage. In conclusion, our results indicated that pre-treatment of breast- and lung cancer cells with low doses of 2-ME analogues sensitized these cells to γ-radiation and induced apoptosis more effectively than either treatment alone. Future studies will focus on the effect of the combination treatment on non-malignant cellular counterparts.

Keywords: cancer, microtubule dynamics, radiation therapy, radiosensitization

Procedia PDF Downloads 205
231 Adopting a New Policy in Maritime Law for Protecting Ship Mortgagees Against Maritime Liens

Authors: Mojtaba Eshraghi Arani

Abstract:

Ship financing is a vital element in the development of the shipping industry because, while the ship constitutes the owner’s main asset, she is considered a reliable security from the financiers’ viewpoint as well. However, it is most probable that a financier who has accepted a ship as security will face many creditors who are privileged and rank before him in collecting, out of the ship, the money that they are owed. In fact, according to the current rule of maritime law, which was established by the “Convention Internationale pour l’Unification de Certaines Règles Relatives aux Privilèges et Hypothèques Maritimes, Brussels, 10 April 1926”, the mortgages, hypotheques, and other charges on vessels rank after several secured claims referred to as “maritime liens”. Such maritime liens are an exhaustive list of claims including, but not limited to, “expenses incurred in the common interest of the creditors to preserve the vessel or to procure its sale and the distribution of the proceeds of sale”, “tonnage dues, light or harbour dues, and other public taxes and charges of the same character”, “claims arising out of the contract of engagement of the master, crew and other persons hired on board”, “remuneration for assistance and salvage”, “the contribution of the vessel in general average”, “indemnities for collision or other damage caused to works forming part of harbours, docks, etc.”, “indemnities for personal injury to passengers or crew or for loss of or damage to cargo”, and “claims resulting from contracts entered into or acts done by the master”. The same rule survived, with only minor changes in the categories of maritime liens, in the substitute conventions of 1967 and 1993. This status quo in maritime law has always been considered a major obstacle to the development of the shipping market and has inevitably led to an increase in the interest rates and other related costs of ship financing. 
It seems that national and international policy makers have yet to change their minds, being worried about deviation from the old marine traditions. However, it is crystal clear that the continuation of the status quo will harm, to a great extent, the shipowners and, consequently, international merchants as a whole. It is argued in this article that the raison d'être for many categories of maritime liens has ceased to exist, in view of which the international community has to recognize only a minimum category of maritime liens which are created in the common interest of all creditors; to this effect, only the two categories of “compensation due for the salvage of the ship” and “extraordinary expenses indispensable for the preservation of the ship” can be declared as taking priority over the mortgagee's rights, in analogy with the Geneva Convention on the International Recognition of Rights in Aircraft (1948). A qualitative method based on the interpretation of the collected data has been used in this manuscript. The source of the data is the analysis of international conventions and domestic laws.

Keywords: ship finance, mortgage, maritime liens, brussels convention, geneva convention 1948

Procedia PDF Downloads 70
230 Augmented Reality Enhanced Order Picking: The Potential for Gamification

Authors: Stavros T. Ponis, George D. Plakas-Koumadorakis, Sotiris P. Gayialis

Abstract:

Augmented Reality (AR) can be defined as a technology which takes the capabilities of computer-generated display, sound, text and effects to enhance the user's real-world experience by overlaying virtual objects onto the real world. By doing that, AR is capable of providing a vast array of work support tools, which can significantly increase employee productivity, enhance existing job training programs by making them more realistic, and in some cases introduce completely new forms of work and task execution. One of the most promising AR industrial applications, as the literature shows, is the use of Head Worn Displays (HWD), monocular or binocular, to support logistics and production operations, such as order picking, part assembly and maintenance. This paper presents the initial results of an ongoing research project for the introduction of a dedicated AR-HWD solution to the picking process of a Distribution Center (DC) in Greece operated by a large Telecommunication Service Provider (TSP). In that context, the proposed research aims to determine whether gamification elements should be integrated in the functional requirements of the AR solution, such as providing points for reaching objectives and creating leaderboards and awards (e.g. badges) for general achievements. Up to now, there is ambiguity about the impact of gamification in logistics operations, since the gamification literature mostly focuses on non-industrial organizational contexts such as education and customer/citizen-facing applications, such as tourism and health. To the contrary, the gamification efforts described in this study focus on one of the most labor-intensive and workflow-dependent logistics processes, i.e. Customer Order Picking (COP). Although introducing AR in COP undoubtedly creates significant opportunities for workload reduction and increased process performance, the added value of gamification is far from certain. 
This paper aims to provide insights on the suitability and usefulness of AR-enhanced gamification in the hard and very demanding environment of a logistics center. In doing so, it will utilize a review of the current state of the art regarding gamification of production and logistics processes, coupled with the results of questionnaire-guided interviews with industry experts, i.e. logisticians, warehouse workers (pickers) and AR software developers. The findings of the proposed research aim to contribute towards a better understanding of AR-enhanced gamification, the organizational change it entails, and the consequences it potentially has for all implicated entities in the often highly standardized and structured work required in the logistics setting. The interpretation of these findings will support the decision of logisticians regarding the introduction of gamification in their logistics processes by providing them with useful insights and guidelines originating from a real-life case study of a large DC operating more than 300 retail outlets in Greece.

Keywords: augmented reality, technology acceptance, warehouse management, vision picking, new forms of work, gamification

Procedia PDF Downloads 148
229 Mapping Alternative Education in Italy: The Case of Popular and Second-Chance Schools and Interventions in Lombardy

Authors: Valeria Cotza

Abstract:

School drop-out is a multifactorial phenomenon that in Italy concerns all those underage students who, at different stages of school (up to 16 years old) or training (up to 18 years old), manifest educational difficulties ranging from dropping out of compulsory education without obtaining a qualification to grade repetition and absenteeism. From the 1980s to the 2000s, there was a progressive attenuation of the purely economic and social model in favour of a multifactorial reading of the phenomenon, and the European Commission noted the importance of learning about the phenomenon through approaches able to integrate large-scale quantitative surveys with qualitative analyses. It is not only a matter of identifying the contextual factors affecting the phenomenon but of problematising them by means of systemic and comprehensive in-depth analysis. A privileged point of observation and field of intervention is therefore offered by those schools that propose alternative models of teaching and learning to the traditional ones, such as popular and second-chance schools. Alternative schools and interventions have grown in recent years in Europe, as well as in the US and Latin America, working in the direction of greater equity to create the conditions (often absent in conventional schools) for everyone to achieve educational goals. Despite extensive Anglo-Saxon and US literature on this topic, there is as yet no unambiguous definition of alternative education, especially in Europe, where second-chance education has been most studied. There is little literature on second-chance education in Italy and almost none on alternative education (with the exception of method schools, to which in Italy the concept of “alternative” is linked). This research aims to fill the gap by systematically surveying the alternative interventions in the area and beginning to explore some models of popular and second-chance schools and experiences through a mixed methods approach. 
The main research objectives thus concern the spread of alternative education in the Lombardy region, the main characteristics of these schools and interventions, and their effectiveness in terms of students’ well-being and school results. This paper addresses the first point by presenting the preliminary results of the first phase of the project, dedicated to mapping. Through the Google Forms platform, a questionnaire is being distributed to all schools in Lombardy and some schools in the rest of Italy to map the presence of alternative schools and interventions and their main characteristics. The distribution is also taking place thanks to the support of the Milan Territorial and Lombardy Regional School Offices. Moreover, other social realities outside the school system (such as cooperatives and cultural associations) can also be surveyed. The schools and other realities to be surveyed outside Lombardy will be identified with the support of INDIRE (Istituto Nazionale per Documentazione, Innovazione e Ricerca Educativa, “National Institute for Documentation, Innovation and Educational Research”) and based on the existing literature and the indicators of the “Futura” Plan of the PNRR (Piano Nazionale di Ripresa e Resilienza, “National Recovery and Resilience Plan”). The mapping will be crucial and functional for the subsequent qualitative and quantitative phase, which will make use of statistical analysis and constructivist grounded theory.

Keywords: school drop-out, alternative education, popular and second-chance schools, map

Procedia PDF Downloads 82
228 The Path to Ruthium: Insights into the Creation of a New Element

Authors: Goodluck Akaoma Ordu

Abstract:

Ruthium (Rth) represents a theoretical superheavy element with an atomic number of 119, proposed within the context of advanced materials science and nuclear physics. The conceptualization of Rth involves theoretical frameworks that anticipate its atomic structure, including a hypothesized stable isotope, Rth-320, characterized by 119 protons and 201 neutrons. The synthesis of Ruthium (Rth) hinges on intricate nuclear fusion processes conducted in state-of-the-art particle accelerators, notably utilizing Calcium-48 (Ca-48) as a projectile nucleus and Einsteinium-253 (Es-253) as a target nucleus. These experiments aim to induce fusion reactions that yield Ruthium isotopes, such as Rth-301, accompanied by neutron emission. Theoretical predictions outline various physical and chemical properties attributed to Ruthium (Rth). It is envisaged to possess a high density, estimated at around 25 g/cm³, with melting and boiling points anticipated to be exceptionally high, approximately 4000 K and 6000 K, respectively. Chemical studies suggest potential oxidation states of +2, +3, and +4, indicating a versatile reactivity, particularly with halogens and chalcogens. The atomic structure of Ruthium (Rth) is postulated to feature an electron configuration of [Rn] 5f^14 6d^10 7s^2 7p^2, reflecting its position in the periodic table as a superheavy element. However, the creation and study of superheavy elements like Ruthium (Rth) pose significant challenges. These elements typically exhibit very short half-lives, posing difficulties in their stabilization and detection. Research efforts are focused on identifying the most stable isotopes of Ruthium (Rth) and developing advanced detection methodologies to confirm their existence and properties. Specialized detectors are essential in observing decay patterns unique to Ruthium (Rth), such as alpha decay or fission signatures, which serve as key indicators of its presence and characteristics. 
The potential applications of Ruthium (Rth) span across diverse technological domains, promising innovations in energy production, material strength enhancement, and sensor technology. Incorporating Ruthium (Rth) into advanced energy systems, such as the Arc Reactor concept, could potentially amplify energy output efficiencies. Similarly, integrating Ruthium (Rth) into structural materials, exemplified by projects like the NanoArc gauntlet, could bolster mechanical properties and resilience. Furthermore, Ruthium (Rth)-based sensors hold promise for achieving heightened sensitivity and performance in various sensing applications. Looking ahead, the study of Ruthium (Rth) represents a frontier in both fundamental science and applied research. It underscores the quest to expand the periodic table and explore the limits of atomic stability and reactivity. Future research directions aim to delve deeper into Ruthium (Rth)'s atomic properties under varying conditions, paving the way for innovations in nanotechnology, quantum materials, and beyond. The synthesis and characterization of Ruthium (Rth) stand as a testament to human ingenuity and technological advancement, pushing the boundaries of scientific understanding and engineering capabilities. In conclusion, Ruthium (Rth) embodies the intersection of theoretical speculation and experimental pursuit in the realm of superheavy elements. It symbolizes the relentless pursuit of scientific excellence and the potential for transformative technological breakthroughs. As research continues to unravel the mysteries of Ruthium (Rth), it holds the promise of reshaping materials science and opening new frontiers in technological innovation.

Keywords: superheavy element, nuclear fusion, bombardment, particle accelerator, nuclear physics, particle physics

Procedia PDF Downloads 35
227 The Environmental Conflict over the Trans Mountain Pipeline Expansion in Burnaby, British Columbia, Canada

Authors: Emiliano Castillo

Abstract:

The aim of this research is to analyze the origins, development, and possible outcomes of the environmental conflict between grassroots organizations, indigenous communities, Kinder Morgan Corporation, and the Canadian government over the Trans Mountain pipeline expansion in Burnaby, British Columbia, Canada. Building on the political ecology and environmental justice theoretical frameworks, this research examines the impacts and risks of tar sands extraction, production, and transportation for climate change, public health, the environment, and indigenous peoples' rights over their lands. This study is relevant to the environmental justice and political ecology literature because it discusses the unequal distribution of the environmental costs and economic benefits of tar sands development, and focuses on the competing interests, needs, values, and claims of the actors involved in the conflict. Furthermore, it will shed light on the context, conditions, and processes that led to the organization and mobilization of a grassroots movement, composed of indigenous communities, citizens, scientists, and non-governmental organizations, that has drawn significant media attention by opposing the Trans Mountain pipeline expansion. Similarly, the research will explain the differences and dynamics within the grassroots movement. This research seeks to address the global context of the conflict by studying the links between the decline of conventional oil production, the rise of unconventional fossil fuels (e.g., tar sands), climate change, and the struggles of low-income, ethnic, and racial minorities over the territorial expansion of extractive industries. 
Data will be collected from legislative documents, policy and technical reports, scientific journals, newspaper articles, participant observation, and semi-structured interviews with representatives and members of the grassroots organizations, indigenous communities, and Burnaby citizens that oppose the Trans Mountain pipeline. These interviews will focus on their perceptions of the risks of the Trans Mountain pipeline expansion; the roots of the anti-tar-sands movement; the differences and dynamics within the movement; and the strategies to defend the livelihoods of local communities and the environment against tar sands development. This research will contribute to the understanding of the underlying causes of the environmental conflict between the Canadian government, Kinder Morgan, and grassroots organizations over tar sands extraction, production, and transportation in Burnaby, British Columbia, Canada. Moreover, this work will elucidate the transformations of society-nature relationships brought about by tar sands development. Research findings will provide scientific information about how the resistance movement in British Columbia can challenge the dominant narrative on tar sands, exert greater influence in environmental politics, and effectively defend Indigenous peoples' rights to their lands. Furthermore, this research will shed light on how grassroots movements can contribute to building more inclusive and sustainable societies.

Keywords: environmental conflict, environmental justice, extractive industry, indigenous communities, political ecology, tar sands

Procedia PDF Downloads 277