Search results for: carbon dioxide capture
Paper Count: 4293

843 Role of Calcination Treatment on the Structural Properties and Photocatalytic Activity of Nanorice N-Doped TiO₂ Catalyst

Authors: Totsaporn Suwannaruang, Kitirote Wantala

Abstract:

The purposes of this research were to synthesize nitrogen-doped titanium dioxide (N-doped TiO₂) photocatalysts by a hydrothermal method and to test the photocatalytic degradation of paraquat under UV and visible light illumination. The effect of calcination temperature on the physical and chemical properties and photocatalytic efficiency of the catalysts was also investigated. The calcined N-doped TiO₂ photocatalysts were characterized for specific surface area, textural properties, bandgap energy, surface morphology, crystallinity, phase structure, elemental composition and charge states using the Brunauer-Emmett-Teller (BET) and Barrett-Joyner-Halenda (BJH) methods, UV-visible diffuse reflectance spectroscopy (UV-Vis-DRS) analysed with the Kubelka-Munk theory, wide-angle X-ray scattering (WAXS), focused ion beam scanning electron microscopy (FIB-SEM), X-ray photoelectron spectroscopy (XPS) and X-ray absorption spectroscopy (XAS), respectively. The results showed that calcination temperature had a significant effect on surface morphology, crystallinity, specific surface area, pore diameter, bandgap energy and nitrogen content, but an insignificant effect on phase structure and the oxidation state of titanium (Ti). The N-doped TiO₂ samples exhibited only the anatase crystalline phase, because the nitrogen dopant in TiO₂ restrained the phase transformation from anatase to rutile. The samples presented a nanorice-like morphology. Particle expansion was observed at calcination temperatures of 650 and 700°C, resulting in an increased pore diameter. The bandgap energy determined by the Kubelka-Munk theory was in the range of 3.07-3.18 eV, slightly lower than the anatase standard (3.20 eV), indicating that the nitrogen dopant can shift the optical absorption edge of TiO₂ from the UV to the visible light region. Nitrogen was detected only in the samples treated at 100, 300 and 400°C and disappeared from 500°C onwards. The nitrogen (N) atoms were incorporated into the TiO₂ structure at interstitial sites. The uncalcined (100°C) sample displayed the highest percentage of paraquat degradation under UV and visible light irradiation, because it exhibited both the highest specific surface area and the highest nitrogen content. Moreover, the percentage of paraquat removal decreased significantly with increasing calcination temperature. The nitrogen content in TiO₂, combined with the effect of the specific surface area, accelerated the reaction rate by promoting the generation of electrons and holes under illumination. Therefore, specific surface area and nitrogen content play important roles in the photocatalytic degradation of paraquat under UV and visible light illumination.
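
For context, bandgap values of this kind are typically obtained by converting the diffuse reflectance spectrum with the Kubelka-Munk function and extrapolating a Tauc plot. The relations below are a generic sketch of that procedure, not the authors' exact analysis; treating anatase as an indirect-allowed-gap semiconductor (exponent 1/2) is an assumption stated here for illustration.

```latex
F(R_\infty) = \frac{(1 - R_\infty)^2}{2 R_\infty},
\qquad
\bigl[F(R_\infty)\, h\nu\bigr]^{1/2} = A\,(h\nu - E_g)
```

The bandgap E_g follows from extrapolating the linear portion of the plot of [F(R∞)hν]^(1/2) against photon energy hν to the abscissa.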

Keywords: restraining phase transformation, interstitial site, chemical charge state, photocatalysis, paraquat degradation

Procedia PDF Downloads 149
842 Assessing the Influence of Station Density on Geostatistical Prediction of Groundwater Levels in a Semi-arid Watershed of Karnataka

Authors: Sakshi Dhumale, Madhushree C., Amba Shetty

Abstract:

The effect of station density on the geostatistical prediction of groundwater levels is of critical importance to ensure accurate and reliable predictions. Monitoring station density directly impacts the accuracy and reliability of geostatistical predictions by influencing the model's ability to capture localized variations and small-scale features in groundwater levels. This is particularly crucial in regions with complex hydrogeological conditions and significant spatial heterogeneity. Insufficient station density can result in larger prediction uncertainties, as the model may struggle to adequately represent the spatial variability and correlation patterns of the data. On the other hand, an optimal distribution of monitoring stations enables effective coverage of the study area and captures the spatial variability of groundwater levels more comprehensively. In this study, we investigate the effect of station density on the predictive performance of groundwater levels using the geostatistical technique of Ordinary Kriging. The research utilizes groundwater level data collected from 121 observation wells within the semi-arid Berambadi watershed, gathered over a six-year period (2010-2015) from the Indian Institute of Science (IISc), Bengaluru. The dataset is partitioned into seven subsets representing varying sampling densities, ranging from 15% (12 wells) to 100% (121 wells) of the total well network. The results obtained from different monitoring networks are compared against the existing groundwater monitoring network established by the Central Ground Water Board (CGWB). The findings of this study demonstrate that higher station densities significantly enhance the accuracy of geostatistical predictions for groundwater levels. The increased number of monitoring stations enables improved interpolation accuracy and captures finer-scale variations in groundwater levels. These results shed light on the relationship between station density and the geostatistical prediction of groundwater levels, emphasizing the importance of appropriate station densities to ensure accurate and reliable predictions. The insights gained from this study have practical implications for designing and optimizing monitoring networks, facilitating effective groundwater level assessments, and enabling sustainable management of groundwater resources.
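
As an illustration of the workflow described above, the sketch below subsamples a well network at several densities and interpolates groundwater levels with Ordinary Kriging. It is a minimal example assuming the open-source pykrige package and synthetic coordinates and levels, not the authors' code or the Berambadi dataset.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)

# Synthetic stand-in for the 121 observation wells: x, y (km) and water level (m)
x = rng.uniform(0, 30, 121)
y = rng.uniform(0, 30, 121)
level = 5.0 + 0.1 * x - 0.05 * y + rng.normal(0, 0.5, 121)

grid_x = np.linspace(0, 30, 60)
grid_y = np.linspace(0, 30, 60)

for density in (0.15, 0.50, 1.00):          # fraction of the well network used
    n = int(round(density * len(x)))
    idx = rng.choice(len(x), size=n, replace=False)
    ok = OrdinaryKriging(x[idx], y[idx], level[idx],
                         variogram_model="spherical")
    z_pred, z_var = ok.execute("grid", grid_x, grid_y)
    print(f"{n:3d} wells -> mean kriging variance {z_var.mean():.3f}")
```

The kriging variance reported for each subset is one simple proxy for the prediction uncertainty that the abstract links to station density.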

Keywords: station density, geostatistical prediction, groundwater levels, monitoring networks, interpolation accuracy, spatial variability

Procedia PDF Downloads 48
841 Investigation on Solar Thermoelectric Generator Using D-Mannitol/Multi-Walled Carbon Nanotubes Composite Phase Change Materials

Authors: Zihua Wu, Yueming He, Xiaoxiao Yu, Yuanyuan Wang, Huaqing Xie

Abstract:

Coupling a solar thermoelectric generator (STEG) with phase change materials (PCM) can enhance solar energy storage and buffer the effects of day-night cycles and weather changes. This work uses a D-mannitol (DM) matrix as a suitable PCM for coupling with a thermoelectric generator to achieve medium-temperature solar energy storage at 165-167 °C. DM/MWCNT composite phase change materials prepared by ball milling not only retain the high phase change enthalpy of the DM material but also reach a photo-thermal conversion efficiency of 82%. Using a purpose-built storage device container, the effect of PCM thickness on the solar energy storage performance is further discussed and analyzed. The experimental results show that the PCM-STEG coupling system outputs more electric energy than a pure STEG system, because the PCM slows heat transfer and stores thermal energy that continues to generate electricity through thermal-to-electric conversion after the light is removed. Increasing the PCM thickness reduces heat transfer and enhances thermal storage, thereby improving the power generation performance of the PCM-STEG coupling system. As the light intensity increases, the output electric energy of the coupling system rises accordingly, reaching a maximum of 113.85 J at 1.6 W/cm². This study of the PCM-STEG coupling system provides a useful reference for the development of solar energy storage and its applications.
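
For reference, photo-thermal conversion efficiency of a PCM is commonly estimated from an energy balance: the sensible plus latent heat stored during heating and melting divided by the incident radiant energy. The sketch below applies that generic definition with made-up numbers; it is not the measurement procedure or data of this study.

```python
def photothermal_efficiency(mass_kg, cp_J_per_kgK, delta_T_K,
                            latent_heat_J_per_kg, intensity_W_per_m2,
                            area_m2, time_s):
    """Stored (sensible + latent) heat divided by incident radiant energy."""
    stored = mass_kg * (cp_J_per_kgK * delta_T_K + latent_heat_J_per_kg)
    incident = intensity_W_per_m2 * area_m2 * time_s
    return stored / incident

# Hypothetical example: 20 g of composite PCM under 1.6 W/cm^2 (= 16 kW/m^2)
print(photothermal_efficiency(mass_kg=0.020, cp_J_per_kgK=2000, delta_T_K=140,
                              latent_heat_J_per_kg=280e3,
                              intensity_W_per_m2=16e3, area_m2=4e-4, time_s=2000))
```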

Keywords: solar energy, solar thermoelectric generator, phase change materials, solar-to-electric energy, DM/MWCNT

Procedia PDF Downloads 60
840 Effect of Fiber Orientation on the Mechanical Properties of Fabricated Plate Using Basalt Fiber

Authors: Sharmili Routray, Kishor Chandra Biswal

Abstract:

The use of corrosion-resistant fiber reinforced polymer (FRP) reinforcement is beneficial in structures, particularly those exposed to deicing salts and/or located in highly corrosive environments. Glass, carbon and aramid fibers are generally used for strengthening structures. The need for low-weight, high-strength materials makes it necessary to find a suitable low-cost substitute. Recent developments in fiber production technology allow structures to be strengthened using basalt fiber, which is made from basalt rock. Basalt fiber offers good thermal performance, high tensile strength, resistance to acids, good electromagnetic properties, an inert nature, and resistance to corrosion, radiation and UV light as well as to vibration and impact loading. This investigation focuses on the effect of fiber content and fiber orientation of basalt fiber on the mechanical properties of the fabricated composites. Specimens were prepared with unidirectional basalt fabric as the reinforcing material and epoxy resin as the matrix of the polymer composite. Different fiber orientations were considered, and fabrication was carried out by the hand lay-up process. The variation of the properties with an increasing number of fiber plies in the composites was also studied. Specimens were subjected to tensile testing, and the failure of the composite was examined with an INSTRON (SATEC) universal testing machine of 600 kN capacity. The average tensile strength and modulus of elasticity of the BFRP plates were determined from the test program.
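
To make the reported quantities concrete, the sketch below computes tensile strength and elastic modulus from a raw load-extension record, the way a universal-testing-machine log is usually post-processed. The specimen dimensions and data are placeholders, not values from this study.

```python
import numpy as np

# Hypothetical specimen geometry and test record (not from this study)
width, thickness, gauge_length = 25.0, 2.0, 100.0      # mm
area = width * thickness                                 # mm^2

load = np.array([0, 2, 4, 6, 8, 10]) * 1e3               # N
extension = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])     # mm

stress = load / area                                      # MPa (N/mm^2)
strain = extension / gauge_length                          # dimensionless

tensile_strength = stress.max()
# Modulus from a linear fit over the initial (elastic) portion of the curve
elastic = strain < 0.005
modulus = np.polyfit(strain[elastic], stress[elastic], 1)[0]

print(f"Tensile strength: {tensile_strength:.1f} MPa")
print(f"Elastic modulus:  {modulus / 1e3:.1f} GPa")
```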

Keywords: BFRP, fabrication, Fiber Reinforced Polymer (FRP), strengthening

Procedia PDF Downloads 284
839 Measuring the Economic Impact of Cultural Heritage: Comparative Analysis of the Multiplier Approach and the Value Chain Approach

Authors: Nina Ponikvar, Katja Zajc Kejžar

Abstract:

While the positive impacts of heritage on a broad societal spectrum have long been recognized and measured, the economic effects of the heritage sector are often less visible and frequently underestimated. At the macro level, economic effects are usually studied with one of two mainstream approaches, i.e. either the multiplier approach or the value chain approach. Consequently, the comparability of empirical results is limited by the use of different methodological approaches in the literature. Furthermore, it is often unclear on which criteria the chosen approach was selected. Our aim is to draw attention to the difference in the scope of effects encompassed by the two most frequent methodological approaches to the valuation of the economic effects of cultural heritage at the macroeconomic level, i.e. the multiplier approach and the value chain approach. We show that the multiplier approach provides a systematic, theory-based view of economic impacts but requires more data and analysis, whereas the value chain approach has less solid theoretical foundations and depends on the availability of appropriate data to identify the contribution of cultural heritage to other sectors. We conclude that the multiplier approach underestimates the economic impact of cultural heritage, mainly due to the narrow definition of cultural heritage in the statistical classification and the inability to identify the part of the contribution of cultural heritage that is hidden in other sectors. Yet it is not possible to clearly determine whether the value chain method overestimates or underestimates the actual economic impact of cultural heritage, since there is a risk that the direct effects are overestimated and double counted while not all indirect and induced effects are considered. Accordingly, these two approaches are not substitutes but rather complements. Consequently, a direct comparison of the estimated impacts is not possible and should not be made because of the different scope. To illustrate the difference in impact assessment, we apply both approaches to the case of Slovenia in the 2015-2022 period and measure the economic impact of the cultural heritage sector in terms of turnover, gross value added and employment. The empirical results clearly show that the estimate of the economic impact of a sector obtained with the multiplier approach is more conservative, while the estimates based on value added capture a much broader range of impacts. According to the multiplier approach, each euro in the cultural heritage sector generates an additional 0.14 euros in indirect effects and an additional 0.44 euros in induced effects. Based on the value-added approach, the indirect economic effect of the "narrow" heritage sectors is amplified by the impact of cultural heritage activities on other sectors. Accordingly, every euro of sales and every euro of gross value added in the cultural heritage sector generates approximately 6 euros of sales and 4 to 5 euros of value added in other sectors. In addition, each employee in the cultural heritage sector is linked to 4 to 5 jobs in other sectors.
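
A simple way to read the multiplier figures quoted above: with an indirect multiplier of 0.14 and an induced multiplier of 0.44, one euro of direct activity corresponds to roughly 1.58 euros in total. The sketch below only reproduces that arithmetic for an arbitrary direct value, using the coefficients reported in the abstract; it is illustrative and not part of the authors' model.

```python
def total_impact(direct: float, indirect_mult: float = 0.14, induced_mult: float = 0.44) -> float:
    """Total economic impact = direct + indirect + induced effects."""
    return direct * (1.0 + indirect_mult + induced_mult)

# Example: 1 million EUR of direct turnover in the cultural heritage sector
print(total_impact(1_000_000))   # approximately 1,580,000 EUR in total effects
```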

Keywords: economic value of cultural heritage, multiplier approach, value chain approach, indirect effects, Slovenia

Procedia PDF Downloads 72
838 Removal of Nickel and Vanadium from Crude Oil by Using Solvent Extraction and Electrochemical Process

Authors: Aliya Kurbanova, Nurlan Akhmetov, Abilmansur Yeshmuratov, Yerzhigit Sugurbekov, Ramiz Zulkharnay, Gulzat Demeuova, Murat Baisariyev, Gulnar Sugurbekova

Abstract:

Over the last decades, crude oils have tended to become more challenging to process due to the increasing share of sour and heavy crudes. Some crude oils contain high vanadium and nickel contents; for example, Pavlodar LLP crude oil contains more than 23.09 g/t nickel and 58.59 g/t vanadium. In this study, two types of metal removal methods were used: solvent extraction and an electrochemical process. The present research provides a comparative analysis of deasphalting with organic solvents (cyclohexane, carbon tetrachloride, chloroform) and the electrochemical method. Applying cyclic voltammetric analysis (CVA) and inductively coupled plasma mass spectrometry (ICP-MS), these metal extraction methods were compared. The maximum deasphalting efficiency, with cyclohexane as the solvent in a Soxhlet extractor, was 66.4% for nickel and 51.2% for vanadium removal from the crude oil. Nickel extraction reached a maximum of approximately 55% using the electrochemical method in an electrolysis cell developed for this research, which consists of three sections: a solution of oil and protonating agent (EtOH) between two conducting membranes, which separate it from two compartments of 10% sulfuric acid, with two graphite electrodes connecting all three parts in an electrical circuit. Metal ions pass through the membranes and remain in the acid solutions. The best result was obtained in 60 minutes with an ethanol-to-oil ratio of 25% to 75%, a current in the range of 0.3 A to 0.4 A, and a voltage varying from 12.8 V to 17.3 V.

Keywords: demetallization, deasphalting, electrochemical removal, heavy metals, petroleum engineering, solvent extraction

Procedia PDF Downloads 305
837 Microbial Reduction of Terpenes from Pine Wood Material

Authors: Bernhard Widhalm, Cornelia Rieder-Gradinger, Thomas Ters, Ewald Srebotnik, Thomas Kuncinger

Abstract:

Terpenes are natural components of softwoods and rank among the most frequently emitted volatile organic compounds (VOCs) in the wood-processing industry. In this study, the main focus was on α- and β-pinene as well as Δ3-carene, which are the major terpenes in softwoods. To lower the total emission level of wood composites, defined terpene-degrading microorganisms were applied to basic raw materials (e.g. pine wood particles and strands) in an optimised and industry-compatible testing procedure. In preliminary laboratory tests, bacterial species able to utilise α-pinene as a sole carbon source in liquid culture were selected and then used to inoculate the wood material. The two species Pseudomonas putida and Pseudomonas fluorescens were inoculated onto wood particles and strands and incubated at room temperature. Applying specific pre-cultivation and daily ventilation of the samples enabled a reduction of the incubation time from six days to one day. SPME measurements and subsequent GC-MS analysis indicated a complete absence of α- and β-pinene emissions from pine wood particles after 24 hours. When pine wood strands were used rather than particles, bacterial treatment resulted in a reduction of α- and β-pinene by 50%, while Δ3-carene emissions were reduced by 30% in comparison to untreated strands. Other terpenes were also reduced in the course of the microbial treatment. The method developed here appears to be feasible for industrial application. However, growth parameters such as time and temperature, as well as the technical implementation of the inoculation step, will have to be adapted to the production process.

Keywords: GC-MS, pseudomonas, SPME, terpenes

Procedia PDF Downloads 341
836 An Evaluation of the Use of Telematics for Improving the Driving Behaviours of Young People

Authors: James Boylan, Denny Meyer, Won Sun Chen

Abstract:

Background: Globally, there is an increasing trend in road traffic deaths, which reached 1.35 million in 2016 compared to 1.3 million a decade earlier, and road traffic injuries are ranked as the eighth leading cause of death across all age groups. The reported death rate for younger drivers aged 16-19 years is almost twice that reported for drivers aged 25 and above, at 3.5 road traffic fatalities per annum for every 10,000 licences held. Telematics refers to a system with the ability to capture real-time data about vehicle usage. The data collected from telematics can be used to better assess a driver's risk. It is typically used to measure acceleration, turning, braking and speed, as well as to provide location information. With the Australian government creating the National Telematics Framework, there has been an increased government focus on using telematics data to improve road safety outcomes. The purpose of this study is to test the hypothesis that improvements in telematics-measured driving behaviour relate to improvements in road safety attitudes measured by the Driving Behaviour Questionnaire (DBQ). Methodology: 28 participants were recruited and given a telematics device to insert into their vehicles for the duration of the study. The participants' driving behaviour over the first month will be compared to their driving behaviour in the second month to determine whether feedback from telematics devices improves driving behaviour. Participants completed the DBQ, evaluated using a 6-point Likert scale (0 = never, 5 = nearly all the time), at the beginning of the study, after the first month, and after the second month. This is a well-established instrument used worldwide. Trends in the telematics data will be captured and correlated with changes in the DBQ using regression models in SAS. Results: The DBQ provided a reliable measure (alpha = .823) of driving behaviour based on a sample of 23 participants, with an average of 50.5, a standard deviation of 11.36, and a range of 29 to 76, with higher scores indicating worse driving behaviours. This initial sample is well stratified in terms of gender and age (range 19-27). It is expected that in the next six weeks a larger sample of around 40 will have completed the DBQ after experiencing in-vehicle telematics for 30 days, allowing a comparison with baseline levels. Trends in the telematics data over the first 30 days will be compared with the changes observed in the DBQ. Conclusions: It is expected that there will be a significant relationship between improvements in the DBQ and trends of reduced telematics-measured aggressive driving behaviours, supporting the hypothesis.
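
The reliability figure quoted above (alpha = .823) is Cronbach's alpha. The sketch below shows the standard formula applied to an item-by-respondent matrix; the data are random placeholders (so the resulting alpha will be low), and the study's own analysis is described as being run in SAS.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = questionnaire items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
# Hypothetical 23 respondents x 28 items scored on the 0-5 Likert scale
responses = rng.integers(0, 6, size=(23, 28))
print(f"alpha = {cronbach_alpha(responses):.3f}")
```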

Keywords: telematics, driving behavior, young drivers, driving behaviour questionnaire

Procedia PDF Downloads 101
835 Ribotaxa: Combined Approaches for Taxonomic Resolution Down to the Species Level from Metagenomics Data Revealing Novelties

Authors: Oshma Chakoory, Sophie Comtet-Marre, Pierre Peyret

Abstract:

Metagenomic classifiers are widely used for the taxonomic profiling of metagenomic data and the estimation of taxa relative abundance. Small subunit rRNA genes are nowadays a gold standard for the phylogenetic resolution of complex microbial communities, although the power of this marker depends on its use at full length. We benchmarked the performance and accuracy of rRNA-specialized versus general-purpose read mappers, reference-targeted assemblers and taxonomic classifiers. We then built a pipeline called RiboTaxa to generate a highly sensitive and specific metataxonomic approach. Using metagenomics data, RiboTaxa gave the best results compared to other tools (Kraken2, Centrifuge (1), METAXA2 (2), PhyloFlash (3)), with precise taxonomic identification and relative abundance description and no false positive detections. Using real datasets from various environments (ocean, soil, human gut) and from different approaches (metagenomics and gene capture by hybridization), RiboTaxa revealed microbial novelties not seen by current bioinformatics analyses, opening new biological perspectives in human and environmental health. In a study focused on coral health involving 20 metagenomic samples (4), the affiliation of prokaryotes was limited to the family level, with Endozoicomonadaceae characterising healthy octocoral tissue. RiboTaxa highlighted 2 species of uncultured Endozoicomonas that were dominant in the healthy tissue. Both species belonged to a genus not yet described, opening new research perspectives on coral health. Applied to metagenomics data from a study on the human gut and extreme longevity (5), RiboTaxa detected the presence of an uncultured archaeon in semi-supercentenarians (aged 105 to 109 years), highlighting an archaeal genus not yet described, and 3 uncultured species belonging to the Enorma genus that could be species of interest participating in the longevity process. RiboTaxa is user-friendly and rapid, allows the microbiota structure of any environment to be described, and its results can be easily interpreted. The software is freely available at https://github.com/oschakoory/RiboTaxa under the GNU Affero General Public License 3.0.

Keywords: metagenomics profiling, microbial diversity, SSU rRNA genes, full-length phylogenetic marker

Procedia PDF Downloads 110
834 The Usage of Bridge Estimator for HEGY Seasonal Unit Root Tests

Authors: Huseyin Guler, Cigdem Kosar

Abstract:

The aim of this study is to propose the Bridge estimator for seasonal unit root tests. Seasonality is an important feature of many economic time series. Some variables contain seasonal patterns, and forecasts that ignore important seasonal patterns have a high variance. Therefore, it is very important to deal with seasonality in seasonal macroeconomic data. There are several methods to eliminate the impact of seasonality in time series. One of them is filtering the data. However, this method leads to undesired consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another method to eliminate seasonality is the use of seasonal dummy variables. Some seasonal patterns may result from stationary seasonal processes, which can be modelled using seasonal dummies; but if the seasonal pattern varies and changes over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture it. It is not suitable to use seasonal dummies for modelling such seasonally non-stationary series. Instead, it is necessary to take seasonal differences if there are seasonal unit roots in the series. Different methods have been proposed in the literature to test for seasonal unit roots, such as the Dickey, Hasza, Fuller (DHF) and Hylleberg, Engle, Granger, Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly, and semiannual). Another issue in unit root tests is lag selection. Lagged dependent variables are added to the model in seasonal unit root tests, as in ordinary unit root tests, to overcome the autocorrelation problem. In this case, it is necessary to choose the lag length and determine any deterministic components (i.e., a constant and trend) first, and then use the proper model to test for seasonal unit roots. However, this two-step procedure might lead to size distortions and a lack of power in seasonal unit root tests. Recent studies show that Bridge estimators are good at selecting the optimal lag length while differentiating non-stationary from stationary models for non-seasonal data. The advantage of this estimator is the elimination of the two-step nature of conventional unit root tests, which leads to a gain in size and power. In this paper, the Bridge estimator is proposed to test for seasonal unit roots in a HEGY model. A Monte Carlo experiment is carried out to determine the efficiency of this approach and to compare the size and power of this method with the HEGY test. Since the Bridge estimator performs well in model selection, our approach may lead to some gain in terms of size and power over the HEGY test.
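
For readers unfamiliar with the two ingredients, one common formulation (a sketch only, not necessarily the exact specification used in this study) combines the quarterly HEGY auxiliary regression with a bridge-type penalty on the augmentation-lag coefficients:

```latex
% Quarterly HEGY auxiliary regression (Hylleberg et al., 1990)
\Delta_4 y_t = \pi_1 y_{1,t-1} + \pi_2 y_{2,t-1} + \pi_3 y_{3,t-2} + \pi_4 y_{3,t-1}
             + \sum_{j=1}^{p} \varphi_j \,\Delta_4 y_{t-j} + \varepsilon_t ,
\qquad
y_{1,t} = (1+L)(1+L^2)\,y_t,\quad
y_{2,t} = -(1-L)(1+L^2)\,y_t,\quad
y_{3,t} = -(1-L^2)\,y_t .

% Bridge-penalized estimation of the augmentation lags, with 0 < \gamma < 1
\hat{\beta}_{\mathrm{bridge}} = \arg\min_{\beta}\;
\sum_{t}\bigl(\Delta_4 y_t - x_t'\beta\bigr)^2
+ \lambda \sum_{j=1}^{p} |\varphi_j|^{\gamma}
```

Here testing π₁ = 0 corresponds to the zero-frequency unit root, π₂ = 0 to the semiannual frequency, and π₃ = π₄ = 0 to the annual frequency; the bridge penalty with 0 < γ < 1 can shrink irrelevant augmentation lags exactly to zero, which is what removes the separate lag-selection step.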

Keywords: bridge estimators, HEGY test, model selection, seasonal unit root

Procedia PDF Downloads 331
833 Application of Vector Representation for Revealing the Richness of Meaning of Facial Expressions

Authors: Carmel Sofer, Dan Vilenchik, Ron Dotsch, Galia Avidan

Abstract:

Studies investigating emotional facial expressions typically reveal consensus among observers regarding the meaning of basic expressions, whose number ranges between 6 and 15 emotional states. Given this limited number of discrete expressions, how is it that the human vocabulary of emotional states is so rich? The present study argues that perceivers use sequences of these discrete expressions as the basis for a much richer vocabulary of emotional states. Such mechanisms, in which a relatively small number of basic components is expanded into a much larger number of possible combinations of meanings, exist in other human communication modalities, such as spoken language and music. In these modalities, letters and notes, which serve as the basic components of spoken language and music respectively, are temporally linked, resulting in the richness of expression. In the current study, participants were presented in each trial with sequences of two images containing facial expressions in different combinations sampled from the eight static basic expressions (64 in total; 8×8). In each trial, participants were required to judge, using a single word, the 'state of mind' portrayed by the person whose face was presented. Utilizing word embedding methods (Global Vectors for Word Representation, GloVe) employed in the field of natural language processing, and relying on machine learning computational methods, it was found that the perceived meanings of the sequences of facial expressions were a weighted average of the single expressions comprising them, resulting in 22 new emotional states in addition to the eight classic basic expressions. An interaction between the first and the second expression in each sequence indicated that each facial expression modulated the effect of the other, leading to a different interpretation ascribed to the sequence as a whole. These findings suggest that the vocabulary of emotional states conveyed by facial expressions is not restricted to the (small) number of discrete facial expressions. Rather, the vocabulary is rich, as it results from combinations of these expressions. In addition, the present research suggests that using word embeddings in social perception studies can be a powerful, accurate and efficient tool to capture explicit and implicit perceptions and intentions. Acknowledgment: The study was supported by a grant from the Ministry of Defense in Israel to GA and CS. CS is also supported by the ABC initiative at Ben-Gurion University of the Negev.
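
A minimal sketch of the aggregation idea described above: the perceived meaning of a two-expression sequence is modelled as a weighted average of the individual expressions' word vectors, and the vocabulary word nearest to that average is taken as the sequence's label. The vectors and weights below are tiny placeholders, not the GloVe embeddings or weights estimated in the study.

```python
import numpy as np

# Placeholder 4-d "embeddings"; the study used pre-trained GloVe vectors
emb = {
    "anger":    np.array([0.9, 0.1, 0.0, 0.2]),
    "fear":     np.array([0.1, 0.8, 0.3, 0.0]),
    "surprise": np.array([0.2, 0.6, 0.7, 0.1]),
    "worry":    np.array([0.4, 0.7, 0.2, 0.1]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def sequence_meaning(first, second, w_first=0.5):
    """Weighted average of the two expressions' vectors, mapped to the nearest word."""
    blend = w_first * emb[first] + (1 - w_first) * emb[second]
    return max(emb, key=lambda word: cosine(emb[word], blend))

print(sequence_meaning("anger", "fear", w_first=0.4))   # with these toy vectors: "worry"
```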

Keywords: GloVe, face perception, facial expression perception, facial expression production, machine learning, word embedding, word2vec

Procedia PDF Downloads 173
832 The Impact of a Sustainable Solar Heating System on the Growth of Strawberry Plants in an Agricultural Greenhouse

Authors: Ilham Ihoume, Rachid Tadili, Nora Arbaoui

Abstract:

The use of solar energy is a crucial tactic in the agricultural industry's plan to decrease greenhouse gas emissions. This clean source of energy can greatly lower the sector's carbon footprint and make a significant contribution to the fight against climate change. In this regard, this study examines the effects of a solar-based heating system in a north-south oriented agricultural greenhouse on the development of strawberry plants during winter. The system relies on the circulation of water as a heat transfer fluid in a closed circuit installed on the greenhouse roof to store heat during the day and release it inside at night. A comparative experimental study was conducted in two greenhouses: an experimental one with the solar heating system and a control one without any heating system. Both greenhouses are located on the terrace of the Solar Energy and Environment Laboratory of Mohammed V University in Rabat, Morocco. The developed heating system consists of a copper coil inserted in double glazing and placed on the roof of the greenhouse, a water circulation pump, a battery, and a photovoltaic solar panel to power the electrical components. This inexpensive and environmentally friendly system allows the greenhouse to be heated during the winter and improves its microclimate. This improvement resulted in an increase in the air temperature inside the experimental greenhouse of 6 °C and 8 °C, and a reduction in its relative humidity of 23% and 35%, compared to the control greenhouse and the ambient air, respectively, throughout the winter. In terms of agronomic performance, production occurred 17 days earlier than in the control greenhouse.

Keywords: sustainability, thermal energy storage, solar energy, agriculture greenhouse

Procedia PDF Downloads 81
831 Application of the Analytic Hierarchy Process Methodology for the Selection of a Small Modular Reactor to Enhance Maritime Traffic Decarbonisation

Authors: Sara Martín, Ying Jie Zheng, César Hueso

Abstract:

International shipping is considered one of the largest sources of pollution in the world, accounting for 812 million tons of CO2 emissions in 2018. Current maritime decarbonisation is based on the implementation of new fuel alternatives, such as LNG, biofuels, and methanol, among others, which are less polluting but also less efficient. Despite being a carbon-free and highly developed technology, nuclear propulsion is hardly discussed as an alternative. It is believed that Small Modular Reactors (SMR) could be a promising solution for decarbonising maritime traffic due to their small dimensions and safety capabilities. However, as of today, there are no merchant ships powered by nuclear systems. This project therefore aims to understand the challenges of developing nuclear-fuelled vessels by analysing all SMR designs in order to choose the most suitable one. To avoid subjectivity, the Analytic Hierarchy Process (AHP) will be used to make the selection. This multiple-criteria evaluation technique analyses complex decisions through pairwise comparison of a number of evaluation criteria that can be applied to each SMR. The 72 state-of-the-art SMR designs presented by the International Atomic Energy Agency (IAEA) will be analysed and ranked by a global parameter calculated by applying the AHP methodology. The main target of the work is to find an adequate SMR system to power a ship. Top designs will be described in detail, and conclusions will be drawn from the results. This project has been conceived as an effort to foster the near-term development of zero-emission maritime traffic.
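
The sketch below shows the core AHP computation referred to above for a small, made-up pairwise comparison matrix: criterion weights from the principal eigenvector and the consistency ratio that guards against inconsistent judgments. The criteria and judgments are illustrative placeholders, not those used to rank the 72 SMR designs.

```python
import numpy as np

# Hypothetical pairwise comparison of 4 criteria (Saaty's 1-9 scale, reciprocal matrix)
A = np.array([
    [1,   3,   5,   2  ],
    [1/3, 1,   3,   1/2],
    [1/5, 1/3, 1,   1/4],
    [1/2, 2,   4,   1  ],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # priority (weight) vector

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)               # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # Saaty's random index
cr = ci / ri                                  # consistency ratio (< 0.10 is acceptable)

print("weights:", np.round(weights, 3), " CR:", round(cr, 3))
```

The global parameter mentioned in the abstract would then be the weighted sum of each design's scores against these criteria.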

Keywords: international shipping, decarbonization, SMR, AHP, nuclear-fuelled vessels

Procedia PDF Downloads 118
830 Frictional Effects on the Dynamics of a Truncated Double-Cone Gravitational Motor

Authors: Barenten Suciu

Abstract:

In this work, the effects of friction and truncation on the dynamics of a double-cone gravitational motor, self-propelled on a straight V-shaped horizontal rail, are evaluated. Such a mechanism has a variable radius of contact; on one hand, it is similar to a pulley mechanism that changes potential energy into the kinetic energy of rotation, but on the other hand, it is similar to a pendulum mechanism that converts the potential energy of the suspended body into the kinetic energy of translation along a circular path. Movies of the self-propelled double-cones, made of S45C carbon steel and wood, moving along rails made of aluminum alloy, were shot for various opening angles of the rails. Kinematical features of the double-cones were estimated through slow-motion processing of the recorded movies. Then, a kinematical model was derived under the assumption that the distance traveled by the contact points on the rectilinear rails is identical to the distance traveled by the contact points on the truncated conical surface. Additionally, a dynamic model for this particular contact problem was proposed and validated against the experimental results. Based on this model, the traction force and the traction torque acting on the double-cone are identified. It was proved that the rolling traction force is always smaller than the sliding friction force, i.e., the double-cone rolls without slipping. Results obtained in this work can be used to achieve the proper design of such gravitational motors.
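
The kinematic assumption stated above — that the distance travelled by the contact points along the rails equals the distance travelled by the contact points on the truncated conical surface — is the rolling-without-slipping constraint. Written out as a sketch (the constraint only, not the full dynamic model of the paper):

```latex
\mathrm{d}s = r(x)\,\mathrm{d}\theta
\quad\Longrightarrow\quad
v(t) = r\bigl(x(t)\bigr)\,\omega(t)
```

where r(x) is the local contact radius of the cone at axial position x, v is the translational speed along the rail and ω is the angular speed. Because the V-shaped rails diverge, r(x) decreases as the double-cone advances, which is what lowers the centre of mass and drives the motion.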

Keywords: Truncated double-cone, friction, rolling and sliding, dynamic model, gravitational motor

Procedia PDF Downloads 267
829 Enhanced Solar-Driven Evaporation Process via f-MWCNTs/PVDF Photothermal Membrane for Forward Osmosis Draw Solution Recovery

Authors: Ayat N. El-Shazly, Dina Magdy Abdo, Hamdy Maamoun Abdel-Ghafar, Xiangju Song, Heqing Jiang

Abstract:

Product water recovery and draw solution (DS) reuse is the most energy-intensive stage in forward osmosis (FO) technology. Sucrose solution is the most suitable DS for FO applications in food and beverages. However, sucrose DS recovery by conventional pressure-driven or thermal-driven concentration techniques consumes a large amount of energy. Herein, we developed a spontaneous and sustainable solar-driven evaporation process based on a photothermal membrane for the concentration and recovery of sucrose solution. The photothermal membrane is composed of a functionalized multi-walled carbon nanotube (f-MWCNTs) photothermal layer on a hydrophilic polyvinylidene fluoride (PVDF) substrate. The f-MWCNTs photothermal layer, with a rough surface and interconnected network structure, not only improves the light-harvesting and light-to-heat conversion performance but also facilitates the transport of water molecules. The hydrophilic PVDF substrate promotes the rapid transport of water, ensuring an adequate water supply to the photothermal layer. As a result, the optimized f-MWCNTs/PVDF photothermal membrane exhibits an excellent light absorption of 95% and a high surface temperature of 74 °C at 1 kW m⁻². In addition, it achieves an evaporation rate of 1.17 kg m⁻² h⁻¹ for a 5% (w/v) sucrose solution, which is about 5 times higher than that of natural evaporation. The designed photothermal evaporation process is capable of efficiently concentrating sucrose solution from 5% to 75% (w/v), which has great potential in the FO process and juice concentration.
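
For orientation, solar-to-vapor efficiency is commonly estimated as the latent heat carried by the evaporated mass flux divided by the incident solar flux. The sketch below applies that generic definition to the figures quoted above, assuming a latent heat of vaporization of about 2256 kJ/kg and neglecting sensible heating and the sucrose content, so the resulting number is illustrative rather than a value reported in the study.

```python
evap_rate = 1.17          # kg m^-2 h^-1 (reported for 5% sucrose)
solar_flux = 1000.0       # W m^-2 (1 kW m^-2, i.e. 1 sun)
latent_heat = 2.256e6     # J kg^-1, assumed latent heat of vaporization of water

mass_flux = evap_rate / 3600.0                 # kg m^-2 s^-1
efficiency = mass_flux * latent_heat / solar_flux
print(f"approximate solar-to-vapor efficiency: {efficiency:.0%}")   # roughly 73%
```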

Keywords: solar, photothermal, membrane, MWCNT

Procedia PDF Downloads 95
828 Informal Green Infrastructure as Mobility Enabler in Informal Settlements of Quito

Authors: Ignacio W. Loor

Abstract:

In the context of informal settlements in Quito, this paper provides evidence that the slopes and deep ravines typical of Andean cities, around which marginalized urban communities sit, constitute a platform for green infrastructure that supports pedestrian mobility in an incremental fashion. This informally shaped green infrastructure provides connectivity to other mobility infrastructures such as roads and public transport, which permits relegated dwellers to reach their daily destinations and reclaim their rights to the city. This is relevant because walking has been increasingly neglected as a viable means of transport in Latin American cities in favour of motorized means, for which reason the mobility benefits of green infrastructure have remained invisible to policymakers, contributing to the progressive isolation of informal settlements. This research leverages an ecological rejuvenation programme led by the municipality of Quito and the Andean Corporation for Development (CAN) intended to rehabilitate the ecological functionalities of ravines. Accordingly, four ravines in different stages of rejuvenation were chosen in order to capture, through ethnographic methods, the practices they support for dwellers of informal settlements across different stages, particularly in terms of mobility. Then, by presenting fragments of interviews, descriptions of observed phenomena, photographs and narratives published in institutional reports and media, the production process of mobility infrastructure over unoccupied slopes and ravines, and the roles that this infrastructure plays in the mobility of dwellers and their quotidian practices, are explained. For informal settlements, which normally feature scant urban infrastructure, mobility embodies an unfavourable driver of the possibilities of dwellers to actively participate in the social, economic and political dimensions of the city, for which reason their rights to the city are widely neglected. Nevertheless, informal green infrastructure for mobility provides some alleviation. This infrastructure is incremental, since its features and usability gradually evolve as users put into it knowledge, labour, devices, and connectivity to other infrastructures in different dimensions, which increases its dependability. This is evidenced in the diffusion of knowledge of trails and footpath routes among users, the implementation of linking stairs and bridges, the improved access created by producing public spaces adjacent to the ravines, the lighting of surrounding roads, and ultimately the restoring of the ecological functions of the ravines. However, the perpetuity of this type of infrastructure is also fragile and vulnerable to the course of urbanisation, densification, and the expansion of gated, privatised spaces.

Keywords: green infrastructure, informal settlements, urban mobility, walkability

Procedia PDF Downloads 154
827 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data

Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton

Abstract:

The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, the processes must first be digitized. This is the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm, Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and type of contribution to the research area. Upon analyzing 54 papers identified in this area, it was noted that 23 of the papers originated in Germany. This is an unsurprising finding as Industry 4.0 is originally a German strategy with supporting strong policy instruments being utilized in Germany to support its implementation. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing Institution of the research papers with three papers published. The literature suggested future research directions and highlighted one specific gap in the area. There exists an unresolved gap between the data science experts and the manufacturing process experts in the industry. The data analytics expertise is not useful unless the manufacturing process information is utilized. A legitimate understanding of the data is crucial to perform accurate analytics and gain true, valuable insights into the manufacturing process. There lies a gap between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test the concept of this gap existing, the researcher initiated an industrial case study in which they embedded themselves between the subject matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 of the papers contributed a framework, another 12 of the papers were based on a case study, and 11 of the papers focused on theory. However, there were only three papers that contributed a methodology. This provides further evidence for the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.

Keywords: analytics, digitization, industry 4.0, manufacturing

Procedia PDF Downloads 108
826 Triple Intercell Bar for Electrometallurgical Processes: A Design to Increase PV Energy Utilization

Authors: Eduardo P. Wiechmann, Jorge A. Henríquez, Pablo E. Aqueveque, Luis G. Muñoz

Abstract:

PV energy prices are declining rapidly. To take advantage of those prices and lower the carbon footprint, operational practices must be modified. This undoubtedly challenges the electrowinning practice of operating at constant current throughout the day. This work presents a technology that contributes modulation capacity to the electrode current distribution system, raising the daytime DC current and lowering it at night. The system is a triple intercell bar that operates in current-source mode. The design is a capping-board-free, dogbone-type bar that ensures operation free of short circuits, hot-swappable repairs and improved current balance. This current-source system eliminates the resetting currents circulating in equipotential bars. Twin auxiliary connectors are added to the main connectors, providing secure current paths to bypass faulty or impaired contacts. All conductive elements of the system are positioned over a baseboard, offering a large heat-sink area to the facility's ventilation. The system operates at a lower temperature than a conventional busbar. Of these attributes, the cathode current balance property stands out and is paramount for day/night modulation and the use of photovoltaic energy. A design based on a 3D finite element method model predicting electrical and thermal performance under various industrial scenarios is presented. Preliminary results obtained in an electrowinning facility with industrial prototypes are included.

Keywords: electrowinning, intercell bars, PV energy, current modulation

Procedia PDF Downloads 148
825 Dual Mode Mobile Based Detection of Endogenous Hydrogen Sulfide for Determination of Live and Antibiotic Resistant Bacteria

Authors: Shashank Gahlaut, Chandrashekhar Sharan, J. P. Singh

Abstract:

The increasing incidence of antibiotic-resistant bacteria is a major concern for the treatment of pathogenic diseases, as treating patients with antibiotics often leads to the evolution of antibiotic resistance in the pathogens. The detection of antibiotic- or antimicrobial-resistant bacteria (microbes) is therefore essential, since resistance is becoming one of the biggest threats globally. Here we propose a novel technique to tackle this problem and take a step forward in preventing infections and diseases caused by drug-resistant microbes. The detection is based on unique features of silver (a noble metal) nanorods (AgNRs), which are fabricated by a physical deposition method called thermal glancing angle deposition (GLAD). Silver nanorods are found to be highly sensitive and selective for hydrogen sulfide (H₂S) gas. The color and water wetting (contact angle) of AgNRs are the two parameters that are affected in the presence of this gas. H₂S is one of the major gaseous products evolved in bacterial metabolism. It is also known as a gasotransmitter that transmits biological signals in living systems; nitric oxide (NO) and carbon monoxide (CO) are two other members of this family. Orlowski (1895) was the first to observe the emission of H₂S by bacteria, and most microorganisms produce these gases. Here we focus on H₂S evolution to determine live/dead and antibiotic-resistant bacteria. An AgNRs array has been used for the detection of H₂S from microorganisms. A mobile app has also been developed to make the method easy, portable, user-friendly, and cost-effective.

Keywords: antibiotic resistance, hydrogen sulfide, live and dead bacteria, mobile app

Procedia PDF Downloads 138
824 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data

Authors: Huinan Zhang, Wenjie Jiang

Abstract:

Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery data, and the study and utilization of the inner thermal core structure of tropical cyclones still pose challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. First, the thermal core information along the pressure direction is comprehensively expressed through the maximal intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone. These images provide a coarse-grained wind speed estimate in the first stage. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network and are fused with the coarse-grained features from the first stage to obtain the final two-stage wind speed estimate. Furthermore, to better capture the long-tail distribution characteristics of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and an ordinal regression loss is adopted in the second stage to replace traditional single-value regression. The selected tropical cyclones span from 2012 to 2021 and are distributed in the North Atlantic (NA) region. The training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021. Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclones into three major categories: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes based on these data.
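
Two of the ingredients named above are easy to make concrete: the maximal intensity projection, which collapses the brightness-temperature volume along the pressure axis, and the focal loss used in the first stage. The sketch below is a generic NumPy illustration of both under assumed array shapes; it is not the authors' network code.

```python
import numpy as np

def max_intensity_projection(tb_volume: np.ndarray) -> np.ndarray:
    """Collapse an ATMS-style brightness-temperature volume
    (pressure_levels, height, width) along the pressure axis."""
    return tb_volume.max(axis=0)

def focal_loss(probs: np.ndarray, targets: np.ndarray,
               alpha: float = 0.25, gamma: float = 2.0) -> float:
    """Binary focal loss, averaged over samples (Lin et al., 2017)."""
    p_t = np.where(targets == 1, probs, 1.0 - probs)
    a_t = np.where(targets == 1, alpha, 1.0 - alpha)
    return float(np.mean(-a_t * (1.0 - p_t) ** gamma * np.log(p_t + 1e-9)))

volume = np.random.rand(22, 64, 64)            # hypothetical 22 pressure levels
print(max_intensity_projection(volume).shape)  # (64, 64) coarse-grained image

probs = np.array([0.9, 0.2, 0.7])
targets = np.array([1, 0, 1])
print(focal_loss(probs, targets))
```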

Keywords: Artificial intelligence, deep learning, data mining, remote sensing

Procedia PDF Downloads 52
823 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, false positives and negatives tuning, and automated feedback. The initial approach using natural language processing techniques to extract features achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite predicted specific vulnerabilities such as OS-Command Injection, Cryptographic, and Cross-Site Scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS-Command Injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues. However, predicting vulnerabilities in source code using machine learning poses challenges such as high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
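
The core of the Code2Vec-style aggregation mentioned above is an attention-weighted sum of path-context embeddings into a single function vector. The sketch below shows only that aggregation step with random placeholder embeddings; it is not the full model or the actual training pipeline used in the study.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_path_contexts(contexts: np.ndarray, attention: np.ndarray) -> np.ndarray:
    """contexts: (n_paths, dim) combined path-context embeddings for one function;
    attention: (dim,) learned global attention vector."""
    scores = softmax(contexts @ attention)      # one weight per path-context
    return scores @ contexts                    # weighted sum -> code vector

rng = np.random.default_rng(0)
contexts = rng.normal(size=(200, 128))   # hypothetical 200 AST path-contexts, dim 128
attention = rng.normal(size=128)
code_vector = aggregate_path_contexts(contexts, attention)
print(code_vector.shape)                  # (128,) vector fed to the vulnerability classifier
```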

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 100
822 Perceptions of Pregnant Women on the Transitional Use of Traditional Medicine in the Transitional District Western Uganda

Authors: Demmiele Matu Kiiza, Constantine Steven Labongo Loum, Julaina Obika Asinasi

Abstract:

Background: The use of traditional medicine in Uganda forms the preliminary therapeutic approach for many people. Traditional medicines have been used in Uganda for many years, not only for the management of pregnancy-related complications but also for the management of other physical and psychological illnesses, and they are considered the first line of treatment by a considerable number of people. This study therefore sought to explore the lived experiences of pregnant women by assessing their perceptions of the transitional use of traditional medicine. Methods: Ethnography was used to capture data from an emic perspective. The ethnographic approach involved visiting a few selected pregnant women to observe and participate in the identification of traditional medicines, and the fieldwork was carried out over a period of three months. In-depth interviews were carried out, audio recorded and later transcribed verbatim. Data were then analyzed thematically: statements made by research participants were identified by transcribing the audio and reading through field notes, coding was done, and themes were generated according to commonly mentioned experiences of using traditional medicine. Results: The findings revealed that women performed a ritual of 'cutting the cord' by making a small horizontal incision on the belly across the linea nigra (also known as the pregnancy line) at around six months of pregnancy to avoid producing a baby with the umbilical cord tied around its neck. They also used crushed egg shells, crushed snail shells and herbs such as pawpaw roots, Entarahompo (Crassocephalum vitelline) and Ekyoganyanja (Erlangea tomentose) to manage Omushohokye (a term used by the study participants to refer to a situation where women pass out too much water when giving birth, produce a baby with mould, or ooze a milky liquid from the breasts before giving birth), to prepare for safe delivery, and to manage pregnancy-related complications. The study recommends the implementation of a traditional medicine use policy through a bottom-up approach, the design and implementation of culturally sensitive maternal healthcare intervention programs, and the involvement of village health teams and the elderly in health education.

Keywords: traditional medicine, pregnant women, Uganda, perceptions

Procedia PDF Downloads 81
821 Short-Term versus Long-Term Effect of Waterpipe Smoking Exposure on Cardiovascular Biomarkers in Mice

Authors: Abeer Rababa'h, Ragad Bsoul, Mohammad Alkhatatbeh, Karem Alzoubi

Abstract:

Introduction: Tobacco use is one of the main risk factors for cardiovascular disease (CVD) and atherosclerosis in particular. Waterpipe smoke (WPS) contains several toxic materials such as nicotine, carcinogens, tar, carbon monoxide and heavy metals. Thus, WPS is considered one of the toxic environmental factors that should be investigated intensively. Therefore, the aim of this study was to investigate the effect of WPS on several cardiovascular biological markers that may cause atherosclerosis in mice. The study was also designed to examine the temporal effects of WPS on atherosclerotic biomarkers upon short-term (2 weeks) and long-term (8 weeks) exposure. Methods: Mice were exposed to WPS, and heart homogenates were analyzed to elucidate the effects of WPS on matrix metalloproteinases (MMPs), endothelin-1 (ET-1) and myeloperoxidase (MPO). Following protein estimation, enzyme-linked immunosorbent assays were performed to measure the levels of MMP (isoforms 1, 3, and 9), MPO, and ET-1 protein expression. Results: Our data showed that acute exposure to WPS significantly enhanced the expression levels of MMP-3, MMP-9, and MPO (p < 0.05) compared to their corresponding controls. However, the body was able to normalize the expression levels of these parameters following continuous exposure for 8 weeks (p > 0.05). Additionally, we showed that the level of ET-1 expression was significantly higher upon chronic exposure to WPS compared to both the control and acute exposure groups (p < 0.05). Conclusion: Waterpipe exposure has a significant negative effect on atherosclerosis, and the enhanced expression of atherosclerotic biomarkers (MMP-3 and -9, MPO, and ET-1) might represent early compensatory efforts to maintain cardiac function after WPS exposure.

Keywords: atherosclerotic biomarkers, cardiovascular disease, matrix metalloproteinase, waterpipe

Procedia PDF Downloads 342
820 Examining the Links between Fish Behaviour and Physiology for Resilience in the Anthropocene

Authors: Lauren A. Bailey, Amber R. Childs, Nicola C. James, Murray I. Duncan, Alexander Winkler, Warren M. Potts

Abstract:

Changes in behaviour and physiology are the most important responses of marine life to anthropogenic impacts such as climate change and over-fishing. Behavioural changes (such as a shift in distribution or changes in phenology) can ensure that a species remains in an environment suited for its optimal physiological performance. However, if marine life is unable to shift their distribution, they are reliant on physiological adaptation (either by broadening their metabolic curves to tolerate a range of stressors or by shifting their metabolic curves to maximize their performance at extreme stressors). However, since there are links between fish physiology and behaviour, changes to either of these traits may have reciprocal interactions. This paper reviews the current knowledge of the links between the behaviour and physiology of fishes, discusses these in the context of exploitation and climate change, and makes recommendations for future research needs. The review revealed that our understanding of the links between fish behaviour and physiology is rudimentary. However, both are hypothesized to be linked to stress responses along the hypothalamic pituitary axis. The link between physiological capacity and behaviour is particularly important as both determine the response of an individual to a changing climate and are under selection by fisheries. While it appears that all types of capture fisheries are likely to reduce the adaptive potential of fished populations to climate stressors, angling, which is primarily associated with recreational fishing, may induce fission of natural populations by removing individuals with bold behavioural traits and potentially the physiological traits required to facilitate behavioural change. Future research should focus on assessing how the links between physiological capacity and behaviour influence catchability, the response to climate change drivers, and post-release recovery. The plasticity of phenotypic traits should be examined under a range of stressors of differing intensity in several species and life history stages. Future studies should also assess plasticity (fission or fusion) in the phenotypic structuring of social hierarchy and how this influences habitat selection. Ultimately, to fully understand how physiology is influenced by the selective processes driven by fisheries, long-term monitoring of the physiological and behavioural structure of fished populations, their fitness, and catch rates are required.

Keywords: climate change, metabolic shifts, over-fishing, phenotypic plasticity, stress response

Procedia PDF Downloads 112
819 The Significance of Childhood in Shaping Family Microsystems from the Perspective of Biographical Learning: Narratives of Adults

Authors: Kornelia Kordiak

Abstract:

The research is based on a biographical approach, which serves as a foundation for understanding individual human destinies through the analysis of the context of life experiences. It focuses on the significance of childhood in shaping family micro-worlds from the perspective of biographical learning. In this case, the family micro-world is interpreted as a complex of beliefs and judgments about elements of the ‘total universe’ based on the individual's experiences. The main aim of the research is to understand the importance of childhood in shaping family micro-worlds from the perspective of reflection on biographical learning. Additionally, it contributes to a deeper understanding of the familial experiences of the studied individuals who form these family micro-worlds and of the course of the biographical learning process in the subjects. Biographical research aligns with an interpretative paradigm, in which individuals are treated as active interpreters of the world, giving meaning to their experiences and actions on the basis of their own values and beliefs. The research methods used in the project, the narrative interview and the analysis of personal documents, provide a multidimensional perspective on the phenomenon under study. Narrative interviews serve as the main data collection method, allowing researchers to delve into various life contexts of individuals. Analysis of these narratives identifies key moments and life patterns and reveals the significance of childhood in shaping family micro-worlds. Moreover, the analysis of personal documents such as diaries or photographs enriches the understanding of the studied phenomena by providing additional contexts and perspectives. The research will be conducted in three phases: preparatory, main, and final. The anticipated schedule includes the preparation of research tools, selection of the research sample, conducting narrative interviews and the analysis of personal documents, as well as the analysis and interpretation of the collected research material. The narrative interview method and document analysis will be used to capture various contexts and interpretations of childhood experiences and family relations. The research will contribute to a better understanding of family dynamics and individual developmental processes. It will allow for the identification and understanding of mechanisms of biographical learning and their significance in shaping identity and family relations. Analysis of adult narratives will enable the identification of factors that determine patterns of behavior and attitudes in adult life, which may have significant implications for pedagogical practice.

Keywords: childhood, adulthood, biographical learning, narrative interview, analysis of personal documents, family micro-worlds

Procedia PDF Downloads 23
818 Comparison of Soil Test Extractants for Determination of Available Soil Phosphorus

Authors: Violina Angelova, Stefan Krustev

Abstract:

The aim of this work was to evaluate the effectiveness of different soil test extractants for the determination of available soil phosphorus in five internationally certified standard soils, sludge and clay (NCS DC 85104, NCS DC 85106, ISE 859, ISE 952, ISE 998). The certified samples were extracted with the following methods/extractants: CaCl₂, CaCl₂ and DTPA (CAT), double lactate (DL), ammonium lactate (AL), calcium acetate lactate (CAL), Olsen, Mehlich 3, Bray and Kurtz I, and Morgan, which are commonly used in soil testing laboratories. Phosphorus in the soil extracts was measured colorimetrically using a Spectroquant Pharo 100 spectrometer. The methods used in the study were evaluated according to the recovery of available phosphorus, ease of application and speed of performance. The relationships between the methods were examined statistically. Good agreement between the results of the different soil tests was established for all certified samples. In general, the P values extracted by the nine extraction methods correlated significantly with each other. When the soils were grouped according to pH, organic carbon content and clay content, the weaker extraction methods showed analogous trends; common tendencies were also found among the stronger extraction methods. Other factors influencing the extraction strength of the different methods include the soil:solution ratio, as well as the duration and intensity of shaking of the samples. The mean extractable P in the certified samples was found to increase in the order CaCl₂ < CAT < Morgan < Bray and Kurtz I < Olsen < CAL < DL < Mehlich 3 < AL. Although the nine methods extracted different amounts of P from the certified samples, the values of P extracted by the different methods were strongly correlated among themselves. Acknowledgment: The financial support by the Bulgarian National Science Fund Projects DFNI Н04/9 and DFNI Н06/21 is greatly appreciated.
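
To illustrate the kind of inter-method comparison described above, the following Python sketch computes a Pearson correlation matrix and mean extractable P across a hypothetical set of extraction results; all concentrations are invented, and the method names are used only as column labels.

    # Illustrative sketch (values invented): how strongly P values extracted by
    # different methods correlate across a set of samples, and how the methods
    # rank by mean extractable P.
    import pandas as pd

    data = pd.DataFrame({
        "CaCl2":    [1.2, 0.8, 2.1, 1.5, 0.9],     # hypothetical mg P/kg
        "Olsen":    [14.0, 9.5, 22.3, 17.1, 11.0],
        "Mehlich3": [28.5, 19.0, 45.2, 33.8, 21.5],
        "AL":       [31.0, 21.4, 48.9, 36.0, 24.1],
    })

    # Pearson correlation matrix between extraction methods
    print(data.corr(method="pearson").round(3))

    # Mean extractable P per method, sorted to show extraction strength
    print(data.mean().sort_values())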

Keywords: available soil phosphorus, certified samples, determination, soil test extractants

Procedia PDF Downloads 144
817 A Study on the Effect of COD to Sulphate Ratio on Performance of Lab Scale Upflow Anaerobic Sludge Blanket Reactor

Authors: Neeraj Sahu, Ahmad Saadiq

Abstract:

Anaerobic sulphate reduction has the potential to be an effective and economically viable alternative to conventional methods for the treatment of sulphate-rich wastewater. However, a major challenge in anaerobic sulphate reduction is the diversion of a fraction of the organic carbon towards methane production, along with minor problems such as odour, corrosion, and an increase in effluent chemical oxygen demand. High-rate anaerobic technology has encouraged researchers to extend its application to the treatment of complex wastewaters at relatively low cost and energy consumption compared to physicochemical methods. Therefore, the aim of this study was to investigate the effects of the COD/SO₄²⁻ ratio on the performance of a lab-scale UASB reactor. A lab-scale upflow anaerobic sludge blanket (UASB) reactor was operated for 170 days: the first 60 days were used for start-up and acclimation under methanogenesis and sulphidogenesis at a COD/SO₄²⁻ ratio of 18, after which the reactor was operated at COD/SO₄²⁻ ratios of 12, 8, 4 and 1 to evaluate the effect of sulphate on reactor performance. The reactor achieved its maximum COD removal efficiency and biogas evolution at the end of acclimation (control); this phase lasted 53 days, with 89.5% COD removal efficiency. Biogas production was 0.6 L/d at an organic loading rate (OLR) of 1.0 kg COD/m³·d while treating synthetic wastewater in a reactor with an effective volume of 2.8 L. When the COD/SO₄²⁻ ratio was changed from 12 to 1, COD removal efficiency decreased slightly (from 87.4% to 76.8%) and biogas production decreased from 0.58 to 0.32 L/d, while the sulphate removal efficiency increased from 42.5% to 72.7%.
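
For readers unfamiliar with the reported quantities, the following minimal Python sketch shows how removal efficiencies and the influent COD/SO₄²⁻ ratio are typically computed from influent and effluent concentrations; the concentration values are hypothetical and not taken from this study.

    # Minimal sketch (hypothetical influent/effluent values): computing COD and
    # sulphate removal efficiencies and the influent COD/SO4 ratio.
    def removal_efficiency(c_in, c_out):
        """Percent removal from influent and effluent concentrations (mg/L)."""
        return 100.0 * (c_in - c_out) / c_in

    cod_in, cod_out = 2000.0, 260.0   # mg COD/L (illustrative)
    so4_in, so4_out = 2000.0, 546.0   # mg SO4/L (illustrative, ratio = 1)

    print(f"COD/SO4 ratio (influent): {cod_in / so4_in:.1f}")
    print(f"COD removal:      {removal_efficiency(cod_in, cod_out):.1f} %")
    print(f"Sulphate removal: {removal_efficiency(so4_in, so4_out):.1f} %")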

Keywords: anaerobic, chemical oxygen demand, organic loading rate, sulphate, up-flow anaerobic sludge blanket reactor

Procedia PDF Downloads 214
816 Experimental Study on Two-Step Pyrolysis of Automotive Shredder Residue

Authors: Letizia Marchetti, Federica Annunzi, Federico Fiorini, Cristiano Nicolella

Abstract:

Automotive shredder residue (ASR) is a mixture of waste that makes up 20-25% of end-of-life vehicles. For many years, ASR was commonly disposed of in landfills or incinerated, causing serious environmental problems. Nowadays, thermochemical treatments are a promising alternative, although the heterogeneity of ASR still poses some challenges. One of the emerging thermochemical treatments for ASR is pyrolysis, which promotes the decomposition of long polymeric chains by providing heat in the absence of an oxidizing agent. In this way, pyrolysis converts ASR into solid, liquid, and gaseous products. This work aims to improve the performance of a two-step pyrolysis process. After characterization of the analysed ASR, the focus is on determining the effects of residence time on product yields and gas composition. A batch experimental setup that reproduces the entire process was used. The setup consists of three sections: the pyrolysis section (made of two reactors), the separation section, and the analysis section. Two different residence times were investigated to find suitable conditions for the first ASR sample. These first tests showed that the products obtained were more sensitive to the residence time in the second reactor. Indeed, slightly increasing the residence time in the second reactor raised the yields of gas and carbon residue and decreased the yield of the liquid fraction. Then, to test the versatility of the setup, the same conditions were applied to a different ASR sample coming from a different chemical plant. The comparison between the two ASR samples shows that similar product yields and compositions are obtained using the same setup.
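
As a hedged illustration of how product yields of this kind are commonly reported, the following Python sketch computes gas, liquid, and solid yields as mass fractions of the ASR feed and checks the mass balance closure; the feed and product masses are invented and do not correspond to the experiments described above.

    # Hypothetical sketch: product yields from a two-step pyrolysis run expressed
    # as mass fractions of the ASR feed; all numbers are invented.
    feed_mass = 100.0  # g of ASR fed to the first reactor

    product_mass = {
        "gas": 28.0,            # g, from gas metering
        "liquid": 35.0,         # g, condensed in the separation section
        "solid residue": 33.0,  # g, char left in the reactors
    }

    yields = {k: 100.0 * m / feed_mass for k, m in product_mass.items()}
    closure = sum(yields.values())  # mass balance closure, %

    for product, y in yields.items():
        print(f"{product:>13s}: {y:5.1f} wt%")
    print(f"mass balance closure: {closure:.1f} %")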

Keywords: automotive shredder residue, experimental tests, heterogeneity, product yields, two-step pyrolysis

Procedia PDF Downloads 110
815 Fed-Batch Mixotrophic Cultivation of Microalgae Scenedesmus sp. Using Airlift Photobioreactor

Authors: Lakshmidevi Rajendran, Bharathidasan Kanniappan, Gopi Raja, Muthukumar Karuppan

Abstract:

This study investigates the feasibility of fed-batch mixotrophic cultivation of the microalga Scenedesmus sp. in a 3-litre airlift photobioreactor under standard operating conditions. The results suggest that this alga may serve as an excellent feed for aquatic species when grown on organic byproducts. Scenedesmus sp. was cultured in synthetic wastewater with stepwise addition of crude glycerol at concentrations ranging from 2 to 10 g/L under fed-batch mixotrophic mode for a period of 15 days. Crude glycerol was added stepwise as the carbon source during the initial growth phase to avoid the inhibitory effect of high glycerol concentrations on the growth of Scenedesmus sp. Crude glycerol was chosen because it is readily available as a byproduct of biodiesel production. The highest biomass concentration, 2.43 g/L, was achieved at a crude glycerol concentration of 6 g/L after 10 days, a three-fold increase over the control medium without added glycerol. The biomass growth data obtained for Scenedesmus sp. were fitted well by the modified logistic equation. Substrate utilization kinetics were also employed to model biomass productivity with respect to the various crude glycerol concentrations. The results indicated that supplementing the mixotrophic culture of Scenedesmus sp. with crude glycerol enhances biomass concentration as well as chlorophyll and lutein productivity. Thus, fed-batch mixotrophic cultivation with stepwise addition of crude glycerol provides a feasible way to reduce production costs and improve large-scale cultivation of Scenedesmus sp. along with biochemical compound synthesis.
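
To illustrate the logistic fitting mentioned above, the following Python sketch fits a common logistic growth parameterization to a hypothetical biomass time series with scipy; the data points, initial guesses, and the parameter names x_max, x0 and mu are assumptions for illustration and are not necessarily the authors' modified logistic formulation.

    # Illustrative sketch (data points invented): fitting a logistic growth
    # model to biomass concentration data from a fed-batch culture.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, x_max, x0, mu):
        """Logistic growth: biomass x(t) with carrying capacity x_max,
        initial biomass x0 and maximum specific growth rate mu (1/day)."""
        return x_max / (1.0 + (x_max / x0 - 1.0) * np.exp(-mu * t))

    t_days  = np.array([0, 2, 4, 6, 8, 10, 12, 15], dtype=float)
    biomass = np.array([0.12, 0.25, 0.58, 1.20, 1.95, 2.40, 2.43, 2.43])  # g/L, hypothetical

    params, _ = curve_fit(logistic, t_days, biomass, p0=[2.5, 0.1, 0.5])
    x_max, x0, mu = params
    print(f"x_max = {x_max:.2f} g/L, x0 = {x0:.2f} g/L, mu = {mu:.2f} 1/day")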

Keywords: airlift photobioreactor, crude glycerol, microalgae Scenedesmus sp., mixotrophic cultivation, lutein production

Procedia PDF Downloads 177
814 Ethnic-Racial Breakdown in Psychological Research among Latinx Populations in the U.S.

Authors: Madeline Phillips, Luis Mendez

Abstract:

The 21st century has seen an increase in the amount and variety of psychological research on Latinx populations, the largest minority group in the U.S. and one with great variability in individuals' cultural origins (e.g., ethnicity) and regions (e.g., nationality). We were interested in exploring how scientists recruit, conduct and report research on Latinx samples. Ethnicity and race are important components of individual identity and should be addressed to capture a broader and deeper understanding of psychological research findings. In order to explore Latinx/Hispanic work, the Journal of Latinx Psychology (JLP) and the Hispanic Journal of Behavioral Sciences (HJBS) were analyzed for (1) measures of ethnicity and race in empirical studies, (2) nationalities represented, and (3) how researchers reported ethnic-racial demographics. The analysis included publications from 2013-2018 and revealed two common themes in the reporting of ethnicity and race: overrepresentation/underrepresentation and overgeneralization. There is currently no systematic way of reporting ethnicity and race in Latinx/Hispanic research, creating a vague sense of what role ethnicity/race plays in the lives of participants and how. Second, studies used the terms Hispanic and Latinx interchangeably, and usage is not consistent across publications. For the purpose of this project, we were only interested in publications with Latinx samples in the U.S.; therefore, studies conducted outside of the U.S. and non-empirical studies were excluded. JLP went from N = 118 articles to N = 94, and HJBS went from N = 174 to N = 154. For this project, we developed a coding rubric for ethnicity/race that reflected the different ways researchers reported ethnicity and race and was compatible with the U.S. Census. We coded which ethnicity/race was identified as the largest ethnic group in each sample, using the ethnic-racial breakdown numbers or percentages when provided. Some studies simply did not report the ethnic composition beyond Hispanic or Latinx. We found that in 80% of the samples, Mexicans were overrepresented compared to the population statistics of Latinx in the U.S. We examined all the ethnic-racial breakdowns, which demonstrated the overrepresentation of Mexican samples and the underrepresentation or lack of representation of certain ethnicities (e.g., Chilean, Guatemalan). Our results also showed an overgeneralization in studies that clustered their participants simply as Latinx/Hispanic (23 for JLP and 63 for HJBS). The authors discuss the importance of transparency from researchers in reporting the context of the sample, including country, state, neighborhood, and demographic variables relevant to the goals of the project, except when privacy and/or confidentiality is at issue. In addition, the authors discuss the importance of recognizing the variability within the Latinx population and how it is reflected in the scientific discourse.
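
As a purely illustrative sketch of the tallying described above, the following Python snippet records the largest ethnic-racial group reported per article and compares article-level representation against population-style benchmarks; all counts, journal assignments, and benchmark percentages are invented for illustration.

    # Hypothetical sketch: tallying the largest reported ethnic origin per
    # article and comparing representation against invented benchmarks.
    import pandas as pd

    articles = pd.DataFrame({
        "journal": ["JLP", "JLP", "HJBS", "HJBS", "HJBS"],
        "largest_group": ["Mexican", "Unspecified Latinx/Hispanic",
                          "Mexican", "Puerto Rican", "Mexican"],
    })

    breakdown = (articles.groupby("largest_group").size()
                 / len(articles) * 100).round(1)
    print(breakdown)  # % of articles whose largest group is each origin

    # Compare against (hypothetical) population benchmarks for Latinx origins
    benchmark = {"Mexican": 61.5, "Puerto Rican": 9.6}
    for group, share in benchmark.items():
        in_articles = breakdown.get(group, 0.0)
        print(f"{group}: {in_articles:.1f}% of articles vs {share:.1f}% of population")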

Keywords: Latinx, Hispanic, race and ethnicity, diversity

Procedia PDF Downloads 109