Search results for: pozzolanic efficiency ratio
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10339


2149 Synthesis of Fluorescent PET-Type “Turn-Off” Triazolyl Coumarin Based Chemosensors for the Sensitive and Selective Sensing of Fe³⁺ Ions in Aqueous Solutions

Authors: Aidan Battison, Neliswa Mama

Abstract:

Environmental pollution by ionic species has been identified as one of the biggest challenges to the sustainable development of communities. The widespread use of organic and inorganic chemical products and the release of toxic chemical species from industrial waste have resulted in a need for advanced monitoring technologies for environmental protection, remediation, and restoration. Disadvantages of conventional sensing methods include expensive instrumentation, well-controlled experimental conditions, time-consuming procedures, and sometimes complicated sample preparation. By contrast, the development of fluorescent chemosensors for biological and environmental detection of metal ions has attracted a great deal of attention due to their simplicity, high selectivity, eidetic recognition, rapid response, and real-time monitoring. Coumarin derivatives S1 and S2 (Scheme 1) containing 1,2,3-triazole moieties at position 3 have been designed and synthesized from azide and alkyne derivatives by CuAAC “click” reactions for the detection of metal ions. These compounds displayed a strong preference for Fe³⁺ ions, with complexation resulting in fluorescence quenching through photo-induced electron transfer (PET) by the “sphere of action” static quenching model. The tested metal ions included Cd²⁺, Pb²⁺, Ag⁺, Na⁺, Ca²⁺, Cr³⁺, Fe³⁺, Al³⁺, Ba²⁺, Cu²⁺, Co²⁺, Hg²⁺, Zn²⁺, and Ni²⁺. The detection limits of S1 and S2 were determined to be 4.1 and 5.1 µM, respectively. Compound S1 displayed the greatest selectivity towards Fe³⁺ in the presence of competing metal cations. S1 could also be used for the detection of Fe³⁺ in a CH₃CN/H₂O mixture. The binding stoichiometry between S1 and Fe³⁺ was determined using both Job's plot and Benesi-Hildebrand analysis; binding was shown to occur in a 1:1 ratio between the sensor and the metal cation. Reversibility studies between S1 and Fe³⁺ were conducted using EDTA.
The binding site of Fe³⁺ on S1 was determined using ¹³C NMR and molecular modelling studies. Complexation was suggested to occur between the lone pair of electrons of the coumarin carbonyl and the triazole carbon-carbon double bond.
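
For readers unfamiliar with the Benesi-Hildebrand treatment mentioned above, the sketch below illustrates how a 1:1 binding constant is extracted from a fluorescence titration. All numbers (binding constant, concentrations, response) are hypothetical and are not taken from the paper.

```python
import numpy as np

def benesi_hildebrand_1to1(conc_M, delta_F):
    """Double-reciprocal fit of 1/dF vs 1/[M]; for 1:1 binding the plot is
    linear with slope 1/(dF_max*K) and intercept 1/dF_max, so
    K = intercept / slope and dF_max = 1 / intercept."""
    x = 1.0 / np.asarray(conc_M)
    y = 1.0 / np.asarray(delta_F)
    slope, intercept = np.polyfit(x, y, 1)
    return intercept / slope, 1.0 / intercept

# Hypothetical titration: true K = 5e4 M^-1, dF_max = 100 (arbitrary units)
K_true, dF_max_true = 5e4, 100.0
conc = np.array([2e-6, 5e-6, 1e-5, 2e-5, 5e-5])  # metal-ion concentration, M
dF = dF_max_true * K_true * conc / (1 + K_true * conc)  # 1:1 binding isotherm
K_fit, dF_fit = benesi_hildebrand_1to1(conc, dF)
```

For 1:1 binding the double-reciprocal plot is linear; curvature would indicate a different stoichiometry, which is why the authors corroborate the result with a Job's plot.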

Keywords: chemosensor, "click" chemistry, coumarin, fluorescence, static quenching, triazole

Procedia PDF Downloads 146
2148 Pre-harvest Application of Nutrients on Quality and Storability of Litchi cv. Bombai

Authors: Nazmin Akter, Tariqul Islam, Abu Sayed

Abstract:

Food loss and waste have become critical global issues, with approximately one-third of the world's food production being wasted. Among food products, horticultural fruits and vegetables are especially susceptible to loss due to their relatively short shelf lives. Litchi (Litchi chinensis) is one of Bangladesh's most important horticultural fruits, but it has a short shelf life, losing weight rapidly after harvest. The experiment was carried out at Hajee Mohammad Danesh Science and Technology University, Dinajpur-5200, Bangladesh during 2020-2021. Its objective was to assess the impact of nutrients, viz. urea (1%), calcium chloride (1%), borax (1%), and their combinations, on the fruit quality and shelf life of litchi cv. Bombai. The experiment was laid out in a randomized block design with 7 treatments and 3 replications. Two sprays of each treatment were applied from the last week of May to June (at 20-day intervals). The results indicated that all the treatments significantly improved the quality parameters of litchi fruits as compared to the control. In terms of physicochemical characteristics, maximum fruit weight (20.30 g), fruit volume (20 ml), and pulp percentage (17.14%), with minimum stone percentage (11.09%), were found with the application of urea 1% + borax 1% + calcium chloride 1%. Maximum TSS (19.62 °Brix), TSS/acidity ratio (24.57), and ascorbic acid (45.19 mg/100 g pulp), and minimum acidity (0.80%), were recorded with the T6 treatment (urea 1% + borax 1% + calcium chloride 1%), whereas fruits treated with urea 1% + borax 1% gave the maximum total sugars (26.64%) and reducing sugars (19.19%) as compared to the control. Regarding storage characteristics, the application of urea 1% + borax 1% + calcium chloride 1% resulted in the minimum physiological loss in weight: 6.11%, 8.41%, and 10.65% after 2, 4, and 6 days, respectively.
In conclusion, to obtain better quality and a longer storage period for litchi fruits, two sprays of urea, borax, and calcium chloride (1% each) could be applied during the fruit growth and development period at fortnightly intervals.

Keywords: litchi chinensis, preharvest, quality, shelf life, postharvest

Procedia PDF Downloads 57
2147 Influence Zone of Strip Footing on Untreated and Cement Treated Sand Mat Underlain by Soft Clay

Authors: Sharifullah Ahmed

Abstract:

Shallow foundations on soft soils without ground improvement can exhibit excessive settlement. In such cases, an alternative to pile foundations may be shallow strip footings placed on a soil system in which the upper layer is untreated or cement-treated compacted sand, limiting the settlement to a permissible level. This work deals with a rigid plane-strain strip footing of 2.5 m width placed on a soil profile consisting of an untreated or cement-treated sand layer underlain by homogeneous soft clay. Upper layers both thin and thick relative to the footing width were considered. The soft inorganic cohesive normally consolidated clay layer is considered undrained during plastic loading stages and drained during consolidation stages, while the sand layer is drained in all loading stages. FEM analysis was performed using PLAXIS 2D Version 8.0 with a model consisting of a clay deposit 15 m thick and 18 m wide. The soft clay layer was modeled using the Hardening Soil model, the Soft Soil model, and the Soft Soil Creep model, and the upper improvement layer was modeled using only the Hardening Soil model. The system is considered fully saturated, and a natural void ratio of 1.2 is used. Total displacement fields of the strip footing and subsoil layers are presented for both untreated and cement-treated sand upper layers. For Hi/B = 0.6 or above, the major deformation and the influence zone of the footing are confined within the upper layer, indicating that the untreated upper layer fully and effectively supports the foundation. For Hi/B = 0.3 or above, the major deformation and influence zone are likewise confined within the upper layer, indicating the complete effectiveness of the cement-treated upper layer. Brittle behavior of cemented sand and fracture or cracking is not considered in this analysis.

Keywords: displacement, ground improvement, influence depth, PLAXIS 2D, primary and secondary settlement, sand mat, soft clay

Procedia PDF Downloads 78
2146 Exploring the Potential of Phase Change Materials in Construction Environments

Authors: A. Ait Ahsene F., B. Boughrara S.

Abstract:

The buildings sector accounts for a significant portion of global energy consumption, with much of this energy used to heat and cool indoor spaces. In this context, the integration of innovative technologies such as phase change materials (PCM) holds promising potential to improve the energy efficiency and thermal comfort of buildings. This research explores the benefits and challenges associated with the use of PCMs in buildings, focusing on their ability to store and release thermal energy to regulate indoor temperature. We investigated the different types of PCM available, their thermal properties, and their potential applications in various climate zones and building types. To evaluate and compare the performance of PCMs, our methodology includes a series of laboratory and field experiments. In the laboratory, we measure the thermal storage capacity, melting and solidification temperatures, latent heat, and thermal conductivity of various PCMs. These measurements quantify each PCM's capacity to store and release thermal energy, as well as its ability to transfer this energy through the construction materials. Additionally, field studies are conducted to evaluate the performance of PCMs in real-world environments. We install PCM systems in real buildings and monitor their operation over time, measuring energy savings, occupant thermal comfort, and material durability. These empirical data allow us to compare the effectiveness of different types of PCMs under real-world use conditions. By combining the results of laboratory and field experiments, we provide a comprehensive analysis of the advantages and limitations of PCMs in buildings, as well as recommendations for their effective application in practice.
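
The laboratory quantities listed above (storage capacity, melting temperature, latent heat) combine into a simple energy balance for heat stored across a melt cycle. The sketch below uses hypothetical paraffin-like property values, not measurements from this study.

```python
def pcm_stored_heat(mass_kg, cp_solid, cp_liquid, latent_heat, T_melt, T_start, T_end):
    """Total heat stored (J) when heating a PCM through its melting point:
    sensible heat in the solid + latent heat of fusion + sensible heat in the liquid.
    cp in J/(kg.K), latent_heat in J/kg, temperatures in deg C."""
    q_sensible_solid = mass_kg * cp_solid * (T_melt - T_start)
    q_latent = mass_kg * latent_heat
    q_sensible_liquid = mass_kg * cp_liquid * (T_end - T_melt)
    return q_sensible_solid + q_latent + q_sensible_liquid

# Hypothetical paraffin-like PCM: cp ~2 kJ/(kg.K), latent heat ~200 kJ/kg, melts at 25 C
q = pcm_stored_heat(10.0, 2000.0, 2000.0, 200e3, 25.0, 20.0, 30.0)
```

The dominance of the latent term in this example (2 MJ of the 2.2 MJ total) is what makes PCMs attractive compared with purely sensible storage in ordinary building materials.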

Keywords: energy saving, phase change materials, material sustainability, buildings sector

Procedia PDF Downloads 20
2145 Characteristics of Double-Stator Inner-Rotor Axial Flux Permanent Magnet Machine with Rotor Eccentricity

Authors: Dawoon Choi, Jian Li, Yunhyun Cho

Abstract:

Axial flux permanent magnet (AFPM) machines have been widely used in various applications due to important merits such as compact structure, high efficiency, and high torque density. This paper addresses one of the most important considerations in the design process of AFPM machines: predicting the electromagnetic forces between the permanent magnets and the stator. The magnitude of this electromagnetic force affects many characteristics, such as machine size, noise, vibration, and output power quality. Theoretically, the force is canceled by equilibrium when the rotor sits exactly in the middle of the air gap, but in an actual machine, deviation is inevitable due to manufacturing tolerances. The problem is more serious in high-power applications, such as large-scale wind generators, because of the huge attractive force between the rotor and stator disks. This paper presents the characteristics of double-stator inner-rotor AFPM machines with rotor eccentricity. Unbalanced air-gap and inclined air-gap conditions, caused by rotor offset and tilt in a double-stator single inner-rotor AFPM machine, are each studied in electromagnetic and mechanical terms. The output voltage and cogging torque under abnormal air-gap conditions are first calculated using combined analytical and numerical methods, followed by a structural analysis to study the effect on mechanical stress, deformation, and bending forces on bearings. The results and conclusions given in this paper are instructive for the successful development of AFPM machines.

Keywords: axial flux permanent magnet machine, inclined air gap, unbalanced air gap, rotor eccentricity

Procedia PDF Downloads 197
2144 Comparing the Embodied Carbon Impacts of a Passive House with the BC Energy Step Code Using Life Cycle Assessment

Authors: Lorena Polovina, Maddy Kennedy-Parrott, Mohammad Fakoor

Abstract:

The construction industry accounts for approximately 40% of total GHG emissions worldwide. In order to limit global warming to 1.5 degrees Celsius, ambitious reductions in the carbon intensity of our buildings are crucial. Passive House presents an opportunity to reduce operational carbon by as much as 90% compared to a traditional building by improving thermal insulation, limiting thermal bridging, increasing airtightness, and recovering heat. Until recently, Passive House design was mainly concerned with meeting energy demands without considering embodied carbon. As buildings become more energy-efficient, embodied carbon becomes more significant. The main objective of this research is to calculate the embodied carbon impact of a Passive House and compare it with the BC Energy Step Code (ESC). British Columbia is committed to increasing the energy efficiency of buildings through the ESC, which targets net-zero energy-ready buildings by 2032. However, there is a knowledge gap in the embodied carbon impacts of more energy-efficient buildings, in particular Part 3 construction. In this case study, life cycle assessments (LCA) are performed on a Part 3 multi-unit residential building in Victoria, BC. The actual building is not constructed to the Passive House standard; however, the building envelope and mechanical systems are designed to comply with the Passive House criteria, as well as with Steps 1 and 4 of the ESC for comparison. OneClick LCA is used to perform the LCA of the case studies. Several strategies are also proposed to minimize the total carbon emissions of the building. The hypothesis is that there will not be significant differences in embodied carbon between a Passive House and a Step 4 building due to the building envelope.

Keywords: embodied carbon, energy modeling, energy step code, life cycle assessment

Procedia PDF Downloads 132
2143 Quality Assessment of SSRU Program in Education

Authors: Rossukhon Makaramani, Supanan Sittilerd, Wipada Prasarnsaph

Abstract:

The study aimed to 1) examine the management status of a Program in Education at the Faculty of Education, Suan Sunandha Rajabhat University (SSRU); 2) determine the main components, indicators, and criteria for constructing a quality assessment framework; 3) assess the quality of an SSRU Program in Education; and 4) provide recommendations to promote academic excellence. The program assessed was the Bachelor of Education Program in Education (5 years), Revised Version 2009. The population and samples were stakeholders involved in the implementation of this program during the academic year 2012. The results were: 1) Regarding management status, the Faculty of Education achieved a good level (4.20) in the third cycle of external quality assessment by the Office for National Education Standards and Quality Assessment (ONESQA). There were 1,192 students enrolled in the program, divided into 5 major fields of study, and 50 faculty members, 37 holding master's degrees and 13 holding doctorates. Their academic positions comprised 35 lecturers, 10 assistant professors, and 5 associate professors. For program management, there was a committee of 5 members for the program and a committee of 4 or 5 members for each major field of study. Among the faculty members, 41 taught in this program; the faculty-to-student ratio was 1:26. The 2013 internal quality assessment indicated that the system and mechanism of program development and management were at a fair level.
However, the overall result reached a good level whether judged by the criteria of the Office of the Higher Education Commission (4.29) or ONESQA (4.37); 2) the framework for assessing the quality of the program consisted of 4 dimensions and 15 indicators; 3) assessment of the program yielded a good level of quality (4.04); 4) recommendations to promote academic excellence included managing and developing the program with a focus on teacher reform toward a highly recognized profession; cultivating the values, morals, ethics, and spirit of being a teacher; constructing specialized programs; developing faculty potential; enhancing the demonstration school's readiness; and providing dormitories for learning.

Keywords: quality assessment, education program, Suan Sunandha Rajabhat University, academic excellence

Procedia PDF Downloads 282
2142 Elastic Behaviour of Graphene Nanoplatelets Reinforced Epoxy Resin Composites

Authors: V. K. Srivastava

Abstract:

Graphene has recently attracted increasing attention in nanocomposite applications because it has 200 times greater strength than steel, making it the strongest material ever tested. As the fundamental two-dimensional (2D) carbon structure, with exceptionally high crystal and electronic quality, graphene has emerged as a rapidly rising star in the field of materials science. Graphene, defined as a 2D crystal, is composed of monolayers of carbon atoms densely packed into a honeycomb network of six-membered rings and is of interest to both theoretical and experimental researchers worldwide. The name comes from graphite and alkene; graphite itself consists of many graphene sheets stacked together by weak van der Waals forces. Due to the superior inherent properties of graphene nanoplatelets (GnP) over other nanofillers, GnP particles were added to epoxy resin at varying weight percentages. The DMA results show that the storage modulus, loss modulus, and tan δ, defined as the ratio of the loss modulus to the storage modulus, versus temperature were all affected by the addition of GnP to the epoxy resin. In epoxy resin, damping (tan δ) is usually caused by movement of the molecular chains. The tan δ of the graphene nanoplatelet/epoxy composite is much lower than that of epoxy resin alone, which suggests that the addition of graphene nanoplatelets effectively impedes movement of the molecular chains. The decrease in storage modulus can be interpreted by an increasing susceptibility to agglomeration, leading to less energy dissipation in the system under viscoelastic deformation. The results indicate that tan δ increased with increasing temperature. The results also show that the nanohardness increases marginally with increasing elastic modulus.
GnP-filled epoxy resin gives higher values than the neat epoxy resin because GnP improves the mechanical properties of the epoxy. Debonding of GnP is clearly observed in micrographs showing agglomeration of fillers and inhomogeneous distribution. Therefore, the DMA and nanohardness studies indicate that the elastic modulus of epoxy resin is increased by the addition of GnP fillers.
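
The damping relation used above fits in two lines of code. The moduli below are hypothetical values chosen only to illustrate the reported trend (lower tan δ for the GnP composite), not measurements from the study.

```python
def tan_delta(storage_modulus, loss_modulus):
    """Damping factor from DMA: tan(delta) = E'' / E' (loss over storage)."""
    return loss_modulus / storage_modulus

# Hypothetical DMA readings (GPa) at a single temperature
tan_neat = tan_delta(2.5, 0.20)    # neat epoxy
tan_gnp = tan_delta(3.1, 0.155)    # GnP/epoxy composite: stiffer, less lossy
```

A lower tan δ at a higher storage modulus is exactly the signature of restricted chain mobility described in the abstract.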

Keywords: agglomeration, elastic modulus, epoxy resin, graphene nanoplatelet, loss modulus, nanohardness, storage modulus

Procedia PDF Downloads 253
2141 Determining Design Parameters for Sizing of Hydronic Heating Systems in Concrete Thermally Activated Building Systems

Authors: Rahmat Ali, Inamullah Khan, Amjad Naseer, Abid A. Shah

Abstract:

Hydronic heating and cooling systems in concrete-slab-based buildings are increasingly becoming a popular substitute for conventional heating and cooling systems. A fair amount of uncertainty exists about the materials and techniques employed and their relative performance. This research identified a simple method of determining the thermal field of a single hydronic pipe acting as part of a concrete slab, from which the spacing and positioning of pipes for the best thermal performance and surface temperature control are determined. The pipe material chosen is the commonly used PEX pipe, which has all-around performance and thermal characteristics, with a thermal conductivity of 0.5 W/mK. Concrete test samples were constructed and their thermal fields tested under varying input conditions. Temperature-sensing devices were embedded in the wet concrete at fixed distances from the pipe, and other touch-sensing temperature devices were employed to determine the extent of the thermal field and for validation studies. In the first stage, it was found that the temperature was uniform at a given distance from the pipe and that heat dissipated in well-defined layers. The temperature obtained in the concrete was then related to the control parameters, including the water supply temperature. From the results, the water temperature required for a specific temperature rise in the concrete is determined. The thermally effective area is also determined and is then used to calculate the pipe spacing and positioning for the desired level of thermal comfort.
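
As a rough analytical counterpart to the measured thermal field, a single embedded pipe can be idealized as a steady line source in the concrete, for which T(r) = T_s − q′/(2πk)·ln(r/r_p). This is a textbook approximation, not the method used in the study, and all numbers below are hypothetical.

```python
import math

def temp_at_radius(q_per_m, k_concrete, r, r_pipe, T_pipe_surface):
    """Steady-state temperature (deg C) at radius r around a single pipe
    treated as a line source in a large concrete medium:
    T(r) = T_s - q'/(2*pi*k) * ln(r / r_p)."""
    return T_pipe_surface - q_per_m / (2 * math.pi * k_concrete) * math.log(r / r_pipe)

# Hypothetical values: 10 W/m heat output, concrete k = 1.4 W/mK,
# 8 mm pipe outer radius, 35 C pipe surface temperature
T_10cm = temp_at_radius(10.0, 1.4, 0.10, 0.008, 35.0)
```

The logarithmic decay is what produces the "well-defined layers" of uniform temperature reported in the first-stage measurements, and the radius at which the excess temperature becomes negligible is one way to bound the thermally effective area per pipe.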

Keywords: thermally activated building systems, concrete slab temperature, thermal field, energy efficiency, thermal comfort, pipe spacing

Procedia PDF Downloads 318
2140 An Assessment of Impact of Financial Statement Fraud on Profit Performance of Manufacturing Firms in Nigeria: A Study of Food and Beverage Firms in Nigeria

Authors: Wale Agbaje

Abstract:

The aim of this research study is to assess the impact of financial statement fraud on the profitability of selected Nigerian manufacturing firms covering 2002-2016. The specific objectives were to ascertain the effect of incorrect asset valuation on return on assets (ROA) and to ascertain the relationship between improper expense recognition and ROA. To achieve these objectives, a descriptive research design was used, while secondary data were collected from the financial reports of the selected firms and the website of the Securities and Exchange Commission. Analysis of covariance (ANCOVA) was carried out using the STATA II econometric package. The Altman model and the operating expenses ratio were adopted in the analysis of the financial reports to create a dummy variable for the selected firms over 2002-2016, and validation of the parameters was ascertained using various statistical techniques such as the t-test, coefficient of determination (R²), F-statistic, and Wald chi-square. Two hypotheses were formulated and tested using the t-statistic at the 5% level of significance. The findings revealed a significant relationship between financial statement fraud and profitability in the Nigerian manufacturing industry. Incorrect asset valuation has a significant positive relationship with ROA, which serves as a proxy for profitability, and so does improper expense recognition. The implication is that distortion of asset valuation and expense recognition leads to decreasing profit in the long run in the manufacturing industry.
The study therefore recommends that pragmatic policy options be taken in the manufacturing industry to effectively manage incorrect asset valuation and improper expense recognition in order to enhance industry performance in the country, and that the stemming of financial statement fraud be adequately built into the internal control systems of manufacturing firms for the effective running of the manufacturing industry in Nigeria.
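
The Altman model mentioned above is, in its original 1968 form for public manufacturing firms, a weighted sum of five financial ratios. The sketch below shows that form with hypothetical figures; the abstract does not report which variant or coefficients the authors used, so treat this purely as an illustration.

```python
def altman_z(working_capital, retained_earnings, ebit, market_value_equity,
             sales, total_assets, total_liabilities):
    """Altman's (1968) Z-score for public manufacturing firms:
    Z = 1.2*WC/TA + 1.4*RE/TA + 3.3*EBIT/TA + 0.6*MVE/TL + 1.0*Sales/TA."""
    ta = total_assets
    return (1.2 * working_capital / ta
            + 1.4 * retained_earnings / ta
            + 3.3 * ebit / ta
            + 0.6 * market_value_equity / total_liabilities
            + 1.0 * sales / ta)

# Hypothetical firm (figures in millions)
z = altman_z(120, 200, 90, 500, 800, 1000, 400)
# Classical zones: Z < 1.81 distress, 1.81-2.99 grey, > 2.99 safe
zone = "distress" if z < 1.81 else "grey" if z <= 2.99 else "safe"
```

Scores in the distress zone can be used to flag firm-years as likely manipulated, which is presumably how the model feeds the dummy variable described in the abstract.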

Keywords: Altman's model, improper expense recognition, incorrect asset valuation, return on assets

Procedia PDF Downloads 145
2139 Ultrasensitive Detection and Discrimination of Cancer-Related Single Nucleotide Polymorphisms Using Poly-Enzyme Polymer Bead Amplification

Authors: Lorico D. S. Lapitan Jr., Yihan Xu, Yuan Guo, Dejian Zhou

Abstract:

The ability to detect specific genes ultrasensitively and to discriminate single nucleotide polymorphisms is important for clinical diagnosis and biomedical research. Herein, we report the development of a new ultrasensitive approach for label-free DNA detection using magnetic nanoparticle (MNP)-assisted rapid target capture/separation in combination with signal amplification using poly-enzyme-tagged polymer nanobeads. The sensor uses an MNP-linked capture DNA and a biotin-modified signal DNA to sandwich-bind the target, followed by ligation to provide high single-nucleotide polymorphism discrimination. Only the presence of a perfect-match target DNA yields a covalent linkage between the capture and signal DNAs for subsequent conjugation of a neutravidin-modified horseradish peroxidase (HRP) enzyme through the strong biotin-neutravidin interaction. This converts each captured DNA target into an HRP label, which can convert millions of copies of a non-fluorescent substrate (Amplex Red) into a highly fluorescent product (resorufin), for great signal amplification. The use of polymer nanobeads, each tagged with thousands of copies of HRP, as the signal amplifier greatly improves the amplification power, leading to greatly improved sensitivity. We show that our biosensing approach can specifically detect an unlabeled DNA target down to 10 aM with a wide dynamic range of 5 orders of magnitude (from 0.001 fM to 100.0 fM). Furthermore, our approach discriminates strongly between a perfectly matched gene and its cancer-related single-base mismatch targets (SNPs): it can positively detect the perfect-match DNA target even in the presence of a 100-fold excess of co-existing SNPs. The sensing approach also works robustly in clinically relevant media (e.g., 10% human serum) and gives almost the same SNP discrimination ratio as in clean buffers. Therefore, this ultrasensitive SNP biosensor appears well-suited for potential diagnostic applications in genetic diseases.

Keywords: DNA detection, polymer beads, signal amplification, single nucleotide polymorphisms

Procedia PDF Downloads 240
2138 Role of von Willebrand Factor Antigen as Non-Invasive Biomarker for the Prediction of Portal Hypertensive Gastropathy in Patients with Liver Cirrhosis

Authors: Mohamed El Horri, Amine Mouden, Reda Messaoudi, Mohamed Chekkal, Driss Benlaldj, Malika Baghdadi, Lahcene Benmahdi, Fatima Seghier

Abstract:

Background/aim: Recently, the von Willebrand factor antigen (vWF-Ag) has been identified as a new marker of portal hypertension (PH) and its complications, and a few studies have examined its role in the prediction of esophageal varices. vWF-Ag is a non-invasive approach that could spare patients the burden, cost, drawbacks, and unpleasantness of repeated endoscopic examinations. In our study, we aimed to evaluate the ability of this marker to predict another complication of portal hypertension, portal hypertensive gastropathy (PHG), which is also diagnosed endoscopically. Patients and methods: This prospective study included 124 cirrhotic patients with no history of bleeding who underwent screening endoscopy for PH-related complications such as esophageal varices (EVs) and PHG. Routine biological tests were performed, as well as vWF-Ag testing by both ELFA and immunoturbidimetric techniques. The diagnostic performance of the marker was assessed using sensitivity, specificity, positive predictive value, negative predictive value, accuracy, and receiver operating characteristic curves. Results: 124 patients were enrolled in this study, with a mean age of 58 years [CI: 55-60 years] and a sex ratio of 1.17. Viral etiologies were found in 50% of patients. Screening endoscopy revealed the presence of PHG in 20.2% of cases, while EVs were found in 83.1% of cases. vWF-Ag levels were significantly increased in patients with PHG compared to those without: 441% [CI: 375-506] versus 279% [CI: 253-304], respectively (p < 0.0001). Based on the area under the receiver operating characteristic curve (AUC), vWF-Ag was a good predictor of the presence of PHG: with a cutoff above 320% and an AUC of 0.824, vWF-Ag had 84% sensitivity, 74% specificity, 44.7% positive predictive value, 94.8% negative predictive value, and 75.8% diagnostic accuracy.
Conclusion: vWF-Ag is a good non-invasive, low-cost marker for excluding the presence of PHG in patients with liver cirrhosis. Using this marker as part of a selective screening strategy might reduce the need for endoscopic screening and the cost of managing these patients.
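
The reported predictive values can be reproduced from the sensitivity, specificity, and PHG prevalence via Bayes' rule; the quick check below uses the figures stated in the abstract (the small gap from the reported 75.8% accuracy comes from rounding of the underlying patient counts).

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV, NPV, and accuracy from test characteristics and disease prevalence."""
    tp = sensitivity * prevalence              # true-positive fraction of cohort
    fp = (1 - specificity) * (1 - prevalence)  # false-positive fraction
    tn = specificity * (1 - prevalence)        # true-negative fraction
    fn = (1 - sensitivity) * prevalence        # false-negative fraction
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    accuracy = tp + tn
    return ppv, npv, accuracy

# Figures reported in the abstract: sens 84%, spec 74%, PHG prevalence 20.2%
ppv, npv, acc = predictive_values(0.84, 0.74, 0.202)
```

The high NPV at modest PPV is what supports the conclusion: at this prevalence, a value below the cutoff is far more informative for ruling PHG out than a value above it is for ruling it in.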

Keywords: von willebrand factor, portal hypertensive gastropathy, prediction, liver cirrhosis

Procedia PDF Downloads 186
2137 In vitro Method to Evaluate the Effect of Steam-Flaking on the Quality of Common Cereal Grains

Authors: Wanbao Chen, Qianqian Yao, Zhenming Zhou

Abstract:

Whole grains with an intact pericarp are largely resistant to digestion by ruminants because entire kernels are not conducive to bacterial attachment, whereas processing makes the starch more accessible to microbes and increases the rate and extent of starch degradation in the rumen. To assess the feasibility of steam-flaking as a processing technique for ruminant grains, cereal grains (maize, wheat, barley, and sorghum) were processed by steam-flaking (steam temperature 105 °C, heating time 45 min), and chemical analysis, in vitro gas production, volatile fatty acid concentrations, and energetic values were used to evaluate the effects. In vitro cultivation was conducted for 48 h with rumen fluid collected from steers fed a total mixed ration consisting of 40% hay and 60% concentrates. The results showed that steam-flaking had a significant effect on the contents of neutral detergent fiber and acid detergent fiber (P < 0.01). The degree of starch gelatinization was also greatly improved in steam-flaked grains, as steam-flaking disintegrates the crystal structure of cereal starch, which may subsequently facilitate the absorption of moisture and swelling. Theoretical maximum gas production after steam-flaking showed no great difference; however, compared with intact grains, total gas production at 48 h and the rate of gas production were significantly (P < 0.01) increased in all types of grain. Furthermore, steam-flaking had no effect on total volatile fatty acids, but a decrease in the acetate-to-propionate ratio was observed in the in vitro fermentation. The present study also found that steam-flaking increased (P < 0.05) the organic matter digestibility and energy concentration of the grains.
Collectively, these findings suggest that steam-flaking of grains could improve their rumen fermentation and energy utilization by ruminants. In conclusion, steam-flaking would be a practical way to improve the quality of common cereal grains.
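
In vitro gas production kinetics of this kind are commonly described by the exponential model of Ørskov and McDonald, GP(t) = b·(1 − e^(−c·t)). The parameters below are hypothetical and only illustrate the pattern reported above: a faster rate (larger c) raises gas production at a fixed time even when the asymptote b changes little. This model is a standard choice in the field, not necessarily the one fitted in the study.

```python
import math

def gas_production(t_h, b, c, lag=0.0):
    """Cumulative gas production (e.g. mL/g DM) at time t_h (hours) under the
    exponential model GP(t) = b * (1 - exp(-c * (t - lag))) for t > lag."""
    if t_h <= lag:
        return 0.0
    return b * (1 - math.exp(-c * (t_h - lag)))

# Hypothetical kinetics: similar asymptote b, but flaked grain ferments faster
gp_intact_12h = gas_production(12, b=75.0, c=0.05)
gp_flaked_12h = gas_production(12, b=78.0, c=0.09)
```

This mirrors the finding that theoretical maximum gas production changed little while the rate, and hence gas produced within the incubation window, increased significantly.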

Keywords: cereal grains, gas production, in vitro rumen fermentation, steam-flaking processing

Procedia PDF Downloads 237
2136 Analyzing the Shearing-Layer Concept Applied to Urban Green System

Authors: S. Pushkar, O. Verbitsky

Abstract:

Currently, green rating systems mainly address the correct sizing of mechanical and electrical systems, which have short lifetime expectancies, while passive solar and bio-climatic architecture, which have long lifetime expectancies, are neglected. Urban rating systems consider buildings and services, in addition to neighborhoods and public transportation, as integral parts of the built environment. The main goal of this study was to develop a more consistent point allocation system for urban building standards by using six different lifetime shearing layers: Site, Structure, Skin, Services, Space, and Stuff, each reflecting distinct environmental damages. This shearing-layer concept was applied to internationally well-known rating systems: Leadership in Energy and Environmental Design (LEED) for Neighborhood Development, BRE Environmental Assessment Method (BREEAM) for Communities, and Comprehensive Assessment System for Building Environmental Efficiency (CASBEE) for Urban Development. The results showed that LEED for Neighborhood Development and BREEAM for Communities focused on long-lifetime-expectancy building designs, whereas CASBEE for Urban Development gave equal importance to the Building and Service layers. Moreover, although the latter rating system was applied using a building-scale assessment, “Urban Area + Buildings” focuses on a short-lifetime-expectancy system design, neglecting to improve the architectural design through bio-climatic and passive solar considerations.

Keywords: green rating system, urban community, sustainable design, standardization, shearing-layer concept, passive solar architecture

Procedia PDF Downloads 560
2135 A Vehicle Detection and Speed Measurement Algorithm Based on Magnetic Sensors

Authors: Panagiotis Gkekas, Christos Sougles, Dionysios Kehagias, Dimitrios Tzovaras

Abstract:

Cooperative intelligent transport systems (C-ITS) can greatly improve safety and efficiency in road transport by enabling communication, not only between vehicles themselves but also between vehicles and infrastructure. For that reason, traffic surveillance systems on the road are of great importance. This paper focuses on the development of an on-road unit comprising several magnetic sensors for real-time vehicle detection, direction-of-movement determination, and speed measurement. Magnetic sensors can sense and measure changes in the earth’s magnetic field. Vehicles are composed of many parts with ferromagnetic properties. Depending on the sensors’ sensitivity, changes in the earth’s magnetic field caused by passing vehicles can be detected and analyzed in order to extract information on the properties of moving vehicles. In this paper, we present a prototype algorithm for real-time, high-accuracy vehicle detection and speed measurement, which can be implemented as a portable, low-cost solution that is non-invasive to existing infrastructure, with the potential to replace existing high-cost implementations. The paper describes the algorithm and presents results from its preliminary lab testing under near-real-world conditions. Acknowledgments: Work presented in this paper was co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation (call RESEARCH–CREATE–INNOVATE) under contract no. Τ1EDK-03081 (project ODOS2020).
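The abstract does not give the algorithm's details, but the two-sensor time-of-flight idea behind such speed measurement can be sketched as follows; the sensor spacing, detection threshold, and sample timestamps below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: estimate vehicle speed from the time offset between
# magnetic-signature detections at two in-line sensors a known distance apart.
# Spacing, threshold, and samples are illustrative assumptions.

def detect_peak_time(samples, threshold):
    """Return the timestamp of the first sample whose magnetic-field
    deviation exceeds the detection threshold, or None if none does."""
    for t, b in samples:
        if abs(b) > threshold:
            return t
    return None

def estimate_speed(samples_a, samples_b, spacing_m, threshold=5.0):
    """Speed (m/s) from the detection-time difference at two sensors."""
    t_a = detect_peak_time(samples_a, threshold)
    t_b = detect_peak_time(samples_b, threshold)
    if t_a is None or t_b is None or t_b == t_a:
        return None
    return spacing_m / (t_b - t_a)

# Example: sensors 1 m apart; the disturbance reaches sensor B 0.05 s later.
a = [(0.00, 0.1), (0.10, 9.3), (0.20, 0.2)]  # (time s, field deviation uT)
b = [(0.00, 0.1), (0.15, 8.7), (0.25, 0.3)]
print(estimate_speed(a, b, spacing_m=1.0))  # ~20 m/s
```

In practice the detection step would use a calibrated baseline and signature matching rather than a fixed threshold; the sketch only shows the time-difference principle.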

Keywords: magnetic sensors, vehicle detection, speed measurement, traffic surveillance system

Procedia PDF Downloads 108
2134 Application of Electro-Optical Hybrid Cables in Horizontal Well Production Logging

Authors: Daofan Guo, Dong Yang

Abstract:

For decades, well logging with coiled tubing has relied solely on surface data such as pump pressure, wellhead pressure, depth counter, and weight indicator readings. While this data has served the oil industry well, modern smart logging utilizes real-time downhole information, which increases operational efficiency and optimizes intervention quality. For example, downhole pressure, temperature, and depth measurement data can be transmitted through the electro-optical hybrid cable in the coiled tubing to surface operators in real time. This paper introduces the unique structural features and various applications of electro-optical hybrid cables deployed downhole with the help of coiled tubing technology. Fiber optic elements in the cable enable optical communications and distributed measurements, such as distributed temperature and acoustic sensing. The electrical elements provide continuous surface power for downhole tools, eliminating the limitations of traditional batteries, such as temperature, operating time, and safety concerns. The electrical elements also enable cable telemetry operation of cable tools. Both power supply and signal transmission are integrated into a single electro-optical hybrid cable: downhole information is captured by downhole electrical sensors and distributed optical sensing technologies and then travels up through an optical fiber to the surface, which greatly improves the accuracy of measurement data transmission.

Keywords: electro-optical hybrid cable, underground photoelectric composite cable, seismic cable, coiled tubing, real-time monitoring

Procedia PDF Downloads 123
2133 Extreme Value Theory Applied in Reliability Analysis: Case Study of Diesel Generator Fans

Authors: Jelena Vucicevic

Abstract:

Reliability analysis represents a very important task in different areas of work. In any industry, it is crucial for maintenance, efficiency, safety, and monetary costs. There are established ways to calculate reliability, unreliability, failure density, and failure rate. In this paper, the reliability of diesel generator fans was calculated through Extreme Value Theory. Extreme Value Theory is not widely used in the engineering field; its usage is better known in other areas such as hydrology, meteorology, and finance. The significance of this theory lies in the fact that, unlike other statistical methods, it focuses on rare and extreme values rather than on averages. It should be noted that the theory is not designed exclusively for extreme events, but for extreme values in any event; this makes it a good opportunity to apply the theory and test whether it suits this situation. The significance of the work is the calculation of time to failure, or reliability, in a new way, using statistics. Another advantage of this approach is that no technical details are required, and it can be applied to any component for which the time to failure must be known in order to schedule appropriate maintenance while maximizing usage and minimizing costs. In this case, calculations were made on diesel generator fans, but the same principle can be applied to any other component. The data for this paper came from a field engineering study of the time to failure of diesel generator fans. The ultimate goal was to decide whether or not to replace the working fans with higher-quality fans to prevent future failures. The results obtained with this method show the approximate time for which the fans will work as they should, and the probability that the fans will work longer than a certain estimated time.
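As an illustration of the kind of calculation involved (not the paper's dataset or fitted model), a Gumbel extreme-value distribution can be fitted to failure times by the method of moments and then used to estimate the probability that a fan survives beyond a given time; the failure times below are made up.

```python
import math

def fit_gumbel(times):
    """Method-of-moments fit of a Gumbel (Type I extreme-value) distribution:
    beta = s * sqrt(6)/pi, mu = mean - gamma*beta (gamma: Euler-Mascheroni)."""
    n = len(times)
    mean = sum(times) / n
    var = sum((t - mean) ** 2 for t in times) / (n - 1)
    beta = math.sqrt(6 * var) / math.pi
    mu = mean - 0.5772156649 * beta
    return mu, beta

def survival(t, mu, beta):
    """P(T > t) under the fitted Gumbel distribution."""
    return 1.0 - math.exp(-math.exp(-(t - mu) / beta))

# Made-up fan failure times in operating hours (illustration only):
hours = [450, 460, 1150, 1600, 1850, 2030, 2070, 3000, 3750, 4150]
mu, beta = fit_gumbel(hours)
print(f"P(T > 3000 h) = {survival(3000, mu, beta):.3f}")
```

A real analysis would also check goodness of fit and might prefer a Weibull model for minima; the sketch only shows how a survival probability falls out of a fitted extreme-value distribution.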

Keywords: extreme value theory, lifetime, reliability analysis, statistic, time to failure

Procedia PDF Downloads 315
2132 DYVELOP Method Implementation for the Research Development in Small and Middle Enterprises

Authors: Jiří F. Urbánek, David Král

Abstract:

Small and Middle Enterprises (SMEs) have a specific mission, characteristics, and behavior in the global competitive business environment. They must respect policies, rules, requirements, and standards in all their internal and external processes of supply-customer chains and networks. The aim and purpose of this paper is to introduce computational assistance that enables the use of the prevailing operating environment, MS Office (SmartArt, etc.), for mathematical models based on the DYVELOP (Dynamic Vector Logistics of Processes) method. It provides, for the SME's global environment, the capability to achieve its commitments regarding the effectiveness of the quality management system in meeting customer requirements, as well as the continual improvement of the organization's and the SME's overall process performance and efficiency, and of its societal security, via continual planning improvement. DYVELOP model maps, the Blazons, can express mathematically and graphically the relationships among entities, actors, and processes, including the discovery and modeling of cycling cases and their phases. The Blazons need a live PowerPoint presentation for better comprehension of this paper's mission of added-value analysis. The crisis management of SMEs must use such cycles for successful coping with crisis situations. Cycling through these cases several times is a necessary condition for encompassing both the emergency event and the mitigation of the organization's damages. An uninterrupted and continuous cycling process is a good indicator of, and controlling actor for, SME continuity and its advanced possibilities for sustainable development.

Keywords: blazons, computational assistance, DYVELOP method, small and middle enterprises

Procedia PDF Downloads 327
2131 Effects of Kolaviron on Liver Oxidative Stress and Beta-Cell Damage in Streptozotocin-Induced Diabetic Rats

Authors: Omolola R. Ayepola, Nicole L. Brooks, Oluwafemi O. Oguntibeju

Abstract:

The liver plays an important role in the regulation of blood glucose and is a target organ of hyperglycaemia. Hyperglycaemia plays a crucial role in the onset of various liver diseases and may culminate in hepatopathy if untreated. Alteration in antioxidant defense and an increase in oxidative stress that results in tissue injury are characteristic of diabetes. We evaluated the protective effects of kolaviron, a biflavonoid complex, on hepatic antioxidants, lipid peroxidation, and apoptosis in the liver of diabetic rats. To induce type 1 diabetes, rats were injected intraperitoneally with a single dose of 50 mg/kg streptozotocin. Oral treatment of diabetic rats with kolaviron (100 mg/kg) started on the 6th day after diabetes induction and continued for 6 weeks (5 times weekly). Diabetic rats exhibited a significant increase in the peroxidation of hepatic lipids, as observed from the elevated level of malondialdehyde (MDA) estimated by high-performance liquid chromatography. In addition, the oxygen radical absorbance capacity (ORAC), the ratio of reduced to oxidized glutathione (GSH/GSSG), and catalase (CAT) activity were decreased in the liver of diabetic rats. A TUNEL assay revealed increased apoptotic cell death in the liver of diabetic rats. Examination of pancreatic beta-cells by immunohistochemical methods revealed beta-cell degeneration and a reduction in beta-cell/islet area in the diabetic controls. Kolaviron treatment significantly increased the area of insulin-immunoreactive beta-cells. Kolaviron attenuated lipid peroxidation and apoptosis in the liver of diabetic rats and increased CAT activity, GSH levels, and the resultant GSH:GSSG ratio. The ORAC of kolaviron-treated diabetic liver was restored to near-normal values. Kolaviron protects the liver against oxidative and apoptotic damage induced by hyperglycaemia. The antidiabetic effect of kolaviron may also be related to its beneficial effects on beta-cell function.

Keywords: diabetes mellitus, kolaviron, oxidative stress, liver, apoptosis

Procedia PDF Downloads 375
2130 Generation of High-Quality Synthetic CT Images from Cone Beam CT Images Using A.I. Based Generative Networks

Authors: Heeba A. Gurku

Abstract:

Introduction: Cone Beam CT (CBCT) images play an integral part in proper patient positioning for cancer patients undergoing radiation therapy, but these images are low in quality. The purpose of this study is to generate high-quality synthetic CT images from CBCT using generative models. Material and Methods: This study utilized two datasets from The Cancer Imaging Archive (TCIA): 1) a lung cancer dataset of 20 patients (with full-view CBCT images) and 2) a pancreatic cancer dataset of 40 patients (only the 27 patients with limited-view images were included in the study). Cycle Generative Adversarial Networks (GAN) and its variant, Attention-Guided Generative Adversarial Networks (AGGAN), were used to generate the synthetic CTs. Models were evaluated visually and on four metrics, Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE), comparing the synthetic CT and original CT images. Results: For the pancreatic dataset with limited-view CBCT images, our study showed that with the Cycle GAN model, MAE, RMSE, and PSNR improved from 12.57 to 8.49, 20.94 to 15.29, and 21.85 to 24.63, respectively, but structural similarity only marginally increased from 0.78 to 0.79. Similar results were achieved with AGGAN, with no improvement over Cycle GAN. However, for the lung dataset with full-view CBCT images, Cycle GAN reduced MAE significantly from 89.44 to 15.11, and AGGAN reduced it to 19.77. Similarly, RMSE decreased from 92.68 to 23.50 with Cycle GAN and to 29.02 with AGGAN. SSIM and PSNR also improved significantly, from 0.17 to 0.59 and from 8.81 to 21.06 with Cycle GAN, respectively, while with AGGAN SSIM increased to 0.52 and PSNR to 19.31. In both datasets, the GAN models were able to reduce artifacts, reduce noise, and deliver better resolution and contrast enhancement.
Conclusion and Recommendation: Both Cycle GAN and AGGAN significantly reduced MAE and RMSE and improved PSNR in both datasets. However, the full-view lung dataset showed more improvement in SSIM and image quality than the limited-view pancreatic dataset.
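The error metrics named above have standard definitions; a minimal sketch of MAE, RMSE, and PSNR on toy 2x2 "images" (illustrative pixel values, not the study's data) is:

```python
import math

def mae(a, b):
    """Mean absolute error between two equally sized images (flat lists)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def rmse(a, b):
    """Root mean square error."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20 * math.log10(max_val / e)

ct  = [100, 120, 130, 140]   # toy reference CT pixel values
sct = [102, 118, 133, 138]   # toy synthetic CT pixel values
print(mae(ct, sct), rmse(ct, sct), round(psnr(ct, sct), 2))
```

SSIM is more involved (it compares local means, variances, and covariances in a sliding window) and is usually taken from an image-processing library rather than hand-rolled.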

Keywords: CT images, CBCT images, cycle GAN, AGGAN

Procedia PDF Downloads 69
2129 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network

Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin

Abstract:

The prediction of hydrological flows (rainfall-depth or rainfall-discharge) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm dedicated to time series, is applied to predict the daily water level of Nokoué Lake in Benin. This paper aims to provide an effective and reliable method capable of reproducing the future daily water level of Nokoué Lake, which is influenced by a combination of two phenomena: rainfall and river flow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that the LSTM can predict the water level of Nokoué Lake up to a horizon of t+10 days. Performance metrics such as Root Mean Square Error (RMSE), the coefficient of determination (R²), Nash-Sutcliffe Efficiency (NSE), and Mean Absolute Error (MAE) agree on a forecast horizon of up to t+3 days: their values remain stable for horizons of t+1, t+2, and t+3 days. The values of R² and NSE are greater than 0.97 during the training and testing phases in the Nokoué Lake basin. Based on the evaluation indices used to assess the model's performance, the forecast horizon of t+3 days is chosen for predicting future daily water levels.
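The Nash-Sutcliffe Efficiency used above compares model error against the variance of the observations; a minimal, illustrative implementation (toy water-level values, not the Nokoué data) is:

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; 0.0 means no better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

obs = [1.2, 1.4, 1.8, 2.1, 1.9, 1.5]   # toy daily water levels (m)
sim = [1.1, 1.5, 1.7, 2.0, 2.0, 1.6]   # toy model output
print(round(nash_sutcliffe(obs, sim), 3))
```

An NSE above 0.97, as reported in the study, means the squared model error is under 3% of the observed variance.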

Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué lake

Procedia PDF Downloads 44
2128 Study of Geological Structure for Potential Fresh-Groundwater Aquifer Determination around Cidaun Beach, Cianjur Regency, West Java Province, Indonesia

Authors: Ilham Aji Dermawan, M. Sapari Dwi Hadian, R. Irvan Sophian, Iyan Haryanto

Abstract:

The study of the geological structure of the area surrounding Cidaun, Cianjur Regency, West Java Province, Indonesia was conducted on the southern coast of Java Island. The study aims to identify potential structural traps for freshwater resources in the study area, given that the area directly adjoins the beach, where the surrounding water is brackish rather than fresh because of seawater intrusion. The study uses geomorphological analysis and geological mapping, with data collected directly in the field within a 10x10 km research area. The geomorphological analysis was done by calculating the watershed drainage-density value and the watershed roundness (circularity) ratio, in order to characterize the sub-soil permeability, the constituent rocks, and the surface-water flow. The field geological mapping collected geological structure data, from which a reconstruction was made to determine the geological conditions of the research area. From the geomorphological standpoint, the area considered to have groundwater potential consists of permeable surface material and permeable sub-soil with low surface run-off, which is very favorable for a groundwater recharge area. The geological reconstruction following the mapping shows that the joints present were initiated by the Cipandak Fault, which cuts the Cipandak River. That fault extends to the Cibako Syncline fold along the Cibako River; this syncline is expected to host an influent groundwater aquifer. The Cibako River then joins the Cipandak River, which runs through the Cipandak Syncline fold axis in the southern region close to its estuary; this syncline, too, is expected to host an influent groundwater aquifer.
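The drainage-density calculation mentioned in the method is a simple ratio; the sketch below uses illustrative stream lengths and basin area, not field values from the study.

```python
def drainage_density(stream_lengths_km, basin_area_km2):
    """Drainage density Dd = total stream length / basin area (km per km^2).
    Lower Dd generally indicates permeable sub-soil and better infiltration."""
    return sum(stream_lengths_km) / basin_area_km2

lengths = [4.2, 3.1, 2.7, 1.5]   # illustrative stream-segment lengths (km)
area = 12.5                      # illustrative basin area (km^2)
print(round(drainage_density(lengths, area), 2))
```

In the study this value feeds the qualitative judgment that the sub-soil is permeable and run-off is low, which supports the recharge-area interpretation.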

Keywords: geological structure, groundwater, hydrogeology, influent aquifer, structural trap

Procedia PDF Downloads 192
2127 Production of Hydrophilic PVC Surfaces with Microwave Treatment for its Separation from Mixed Plastics by Froth Floatation

Authors: Srinivasa Reddy Mallampati, Chi-Hyeon Lee, Nguyen Thanh Truc, Byeong-Kyu Lee

Abstract:

Organic polymeric materials (plastics) are widely used in our daily life and in various industrial fields. The separation of waste plastics is important for feedstock and mechanical recycling. One of the major problems in incineration for thermal recycling, or heat melting for material recycling, is the polyvinyl chloride (PVC) contained in waste plastics, owing to the production of hydrogen chloride, chlorine gas, dioxins, and furans from PVC. Therefore, the separation of PVC from waste plastics is necessary before recycling. The separation of heavy polymers (PVC 1.42, PMMA 1.12, PC 1.22, and PET 1.27 g/cm³) from light ones (PE and PP, 0.99 g/cm³) can be achieved on the basis of density. However, it is difficult to separate PVC from the other heavy polymers on the basis of density, and no simple, inexpensive technique exists for doing so. If the hydrophobic PVC surface is selectively rendered hydrophilic while the other polymers retain hydrophobic surfaces, a flotation process can separate PVC from the others. In the present study, the selective surface hydrophilization of PVC by microwave treatment after alkaline/acid washing, and with activated carbon, was studied as a pre-treatment for the subsequent separation by froth flotation. In the presence of activated carbon as an absorbent, the microwave treatment could selectively increase the hydrophilicity of the PVC surface (the PVC contact angle decreased by about 19°) within the plastics mixture. At this stage, 100% separation of PVC from the other plastics could be achieved by combining the microwave pre-treatment with activated carbon and the subsequent froth flotation. Surface analysis suggests that the hydrophilization of PVC is due to hydrophilic groups produced by the microwave treatment with activated carbon. The optimum conditions and the detailed mechanism governing separation efficiency in the froth flotation were also investigated.

Keywords: hydrophilic, PVC, contact angle, additive, microwave, froth floatation, waste plastics

Procedia PDF Downloads 607
2126 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms, linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks, for predicting molecular solubility using the AqSolDB dataset. The dataset consists of 9,981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted for every SMILES representation in the dataset, giving a total of 189 features per molecule for training and testing. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 21
2125 Study of Methods to Reduce Carbon Emissions in Structural Engineering

Authors: Richard Krijnen, Alan Wang

Abstract:

As the world is aiming to reach net zero around 2050, structural engineers must begin finding solutions to contribute to this global initiative. Approximately 40% of global energy-related emissions are due to buildings and construction, and a building’s structure accounts for 50% of its embodied carbon, which indicates that structural engineers are key contributors to finding solutions to reach carbon neutrality. However, this task presents a multifaceted challenge as structural engineers must navigate technical, safety and economic considerations while striving to reduce emissions. This study reviews several options and considerations to reduce carbon emissions that structural engineers can use in their future designs without compromising the structural integrity of their proposed design. Low-carbon structures should adhere to several guiding principles. Firstly, prioritize the selection of materials with low carbon footprints, such as recyclable or alternative materials. Optimization of design and engineering methods is crucial to minimize material usage. Encouraging the use of recyclable and renewable materials reduces dependency on natural resources. Energy efficiency is another key consideration involving the design of structures to minimize energy consumption across various systems. Choosing local materials and minimizing transportation distances help in reducing carbon emissions during transport. Innovation, such as pre-fabrication and modular design or low-carbon concrete, can further cut down carbon emissions during manufacturing and construction. Collaboration among stakeholders and sharing experiences and resources are essential for advancing the development and application of low-carbon structures. This paper identifies current available tools and solutions to reduce embodied carbon in structures, which can be used as part of daily structural engineering practice.

Keywords: efficient structural design, embodied carbon, low-carbon material, sustainable structural design

Procedia PDF Downloads 23
2124 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market

Authors: Cristian Păuna

Abstract:

In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investments became the primary tool for making a profit by speculation in financial markets. A significant number of traders, private or institutional investors, participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. Trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed for building a reliable trend line, which is the basis for limit conditions and automated investment signals, the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals and limit conditions that form a mathematical filter for investment opportunities, and describes the methodology for integrating all of these into automated investment software. The paper also presents trading results obtained for the leading German financial market index with the presented methods, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to assess the presented model. In conclusion, a 1:6.12 risk-to-reward ratio was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. These results are superior to those obtained with other similar models, as this paper reveals.
The general idea sustained by this paper is that the Price Prediction Line model presented is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
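The risk-to-reward figure cited in the conclusion is a standard trading ratio; one common way to compute it, sketched over a hypothetical list of per-trade profits and losses (not the paper's DAX data), is:

```python
def risk_reward_ratio(trade_profits):
    """Average winning-trade size per unit of average losing-trade size,
    expressed as 1 : x (one unit risked per x units of reward).
    Returns None if there are no wins or no losses."""
    wins = [p for p in trade_profits if p > 0]
    losses = [-p for p in trade_profits if p < 0]
    if not wins or not losses:
        return None
    avg_loss = sum(losses) / len(losses)
    avg_win = sum(wins) / len(wins)
    return avg_win / avg_loss

# Hypothetical P&L per trade (points); illustrative values only:
trades = [120, -30, 250, -40, 180, -25, 300]
x = risk_reward_ratio(trades)
print(f"risk:reward = 1:{x:.2f}")
```

Definitions vary (some systems compute the ratio from stop-loss and take-profit distances per trade rather than realized averages); the paper does not specify which variant produced the 1:6.12 figure.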

Keywords: algorithmic trading, automated trading systems, high-frequency trading, DAX Deutscher Aktienindex

Procedia PDF Downloads 120
2123 Improvement of Microscopic Detection of Acid-Fast Bacilli for Tuberculosis by Artificial Intelligence-Assisted Microscopic Platform and Medical Image Recognition System

Authors: Hsiao-Chuan Huang, King-Lung Kuo, Mei-Hsin Lo, Hsiao-Yun Chou, Yusen Lin

Abstract:

The most robust and economical method for laboratory diagnosis of TB is to identify mycobacterial bacilli (AFB) under acid-fast staining, despite its disadvantages of low sensitivity and labor-intensiveness. Though digital pathology has become popular in medicine, an automated microscopic system for microbiology is still not available. A new AI-assisted automated microscopic system, consisting of a microscopic scanner and a recognition program powered by big data and deep learning, may significantly increase the sensitivity of TB smear microscopy. The objective was therefore to evaluate such an automatic system for the identification of AFB. A total of 5,930 smears were enrolled for this study. An intelligent microscope system (TB-Scan, Wellgen Medical, Taiwan) was used for microscopic image scanning and AFB detection. 272 AFB smears were used for transfer learning to increase accuracy. Referee medical technicians served as the gold standard for resolving discrepant results. Results showed that, on a total of 1,726 AFB smears, the automated system's accuracy, sensitivity, and specificity were 95.6% (1,650/1,726), 87.7% (57/65), and 95.9% (1,593/1,661), respectively. Compared to culture, the sensitivity of human technicians was only 33.8% (38/142); the automated system, however, achieved 74.6% (106/142), which is significantly higher, and this is the first controlled trial of such an automated microscope system for TB smear testing. This automated system could achieve higher TB smear sensitivity and laboratory efficiency and may complement molecular methods (e.g., GeneXpert) to reduce the total cost of TB control. Furthermore, such an automated system is capable of remote access via the internet and can be deployed in areas with limited medical resources.
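The performance figures quoted above follow the standard confusion-matrix definitions; the sketch below reproduces them from the counts given in the abstract.

```python
def accuracy(correct, total):
    """Fraction of all smears classified correctly."""
    return correct / total

def sensitivity(tp, fn):
    """True-positive rate: detected positives over all true positives."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: correct negatives over all true negatives."""
    return tn / (tn + fp)

# Counts as reported in the abstract (1,726 AFB smears evaluated):
print(f"accuracy    = {accuracy(1650, 1726):.1%}")          # 1650/1726 -> 95.6%
print(f"sensitivity = {sensitivity(57, 65 - 57):.1%}")      # 57/65     -> 87.7%
print(f"specificity = {specificity(1593, 1661 - 1593):.1%}")  # 1593/1661 -> 95.9%
```

The `65 - 57` and `1661 - 1593` terms back out the false negatives and false positives from the totals stated in the abstract.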

Keywords: TB smears, automated microscope, artificial intelligence, medical imaging

Procedia PDF Downloads 204
2122 Cost-Effectiveness of Forest Restoration in Nepal: A Case from Leasehold Forestry Initiatives

Authors: Sony Baral, Bijendra Basnyat, Kalyan Gauli

Abstract:

Forests were depleted throughout the world in the 1990s, and since then, various efforts have been undertaken to restore them. The Government of Nepal promoted various community-based forest management modalities, among which leasehold forestry was introduced in the 1990s with the aim of restoring degraded forest land. However, few attempts have been made to systematically evaluate its cost-effectiveness. Hence, this study assesses the cost-effectiveness of the leasehold forestry intervention in a mid-hill district of Nepal following a cost-benefit analysis approach. The study followed a quasi-experimental design and collected cost and benefit information from 320 leasehold forestry groups (with intervention) and 154 comparison groups (without intervention) through a household survey and forest inventory, then validated the data in a stakeholders' consultative workshop. The study found that both the benefits and the costs of the intervention exceeded those of the without-intervention situation. Members of leasehold forestry groups were generating multiple benefits from the forests, such as firewood, grasses, fodder, and fruits, whereas those in the comparison groups mostly obtained a single benefit. Likewise, soil carbon is higher in leasehold forests. Average expense per unit area is higher in intervention sites because of high government investment in capacity building. Nevertheless, positive net present value and internal rate of return were observed in both situations. However, the net present value from the intervention, i.e., leasehold forestry, is almost double that of the comparison sites, revealing that communities are getting higher benefits from restoration. The study concludes that leasehold forestry is a highly cost-effective intervention that contributes to forest restoration and brings multiple benefits to the rural poor.
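The net present value and internal rate of return used in the analysis are standard discounted-cash-flow measures; a minimal sketch with made-up annual cash flows (not the study's survey data) is:

```python
def npv(rate, cash_flows):
    """Net present value of cash flows c_0, c_1, ... at a discount rate,
    with c_0 occurring now and c_t discounted by (1 + rate)^t."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-7):
    """Internal rate of return (the rate where NPV = 0) by bisection.
    Assumes the NPV changes sign exactly once on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Made-up example: initial restoration cost, then yearly forest benefits.
flows = [-1000, 300, 350, 400, 450]
print(round(npv(0.10, flows), 2), round(irr(flows), 4))
```

A positive NPV at the chosen discount rate (here 10%) is the criterion the study uses to call both situations economically viable.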

Keywords: cost effectiveness, economic efficiency, intervention, restoration, leasehold forestry, Nepal

Procedia PDF Downloads 83
2121 Numerical Study of Natural Convection in Isothermal Open Cavities

Authors: Gaurav Prabhudesai, Gaetan Brill

Abstract:

The sun's energy comes from a hydrogen-to-helium thermonuclear reaction, generating a temperature of about 5760 K on its outer layer. On account of this high temperature, energy is radiated by the sun, a part of which reaches the earth. This sunlight, even after losing part of its energy to scattering and absorption en route, provides a time- and space-averaged solar flux of 174.7 W/m² striking the earth's surface. According to one study, the solar energy striking the earth's surface in one and a half hours exceeds the energy consumption recorded in the year 2001 from all sources combined. Thus, technology for the extraction of solar energy holds much promise for solving the energy crisis. Of the many technologies developed in this regard, Concentrating Solar Power (CSP) plants with a central solar tower and receiver system are very attractive because of their capability to provide renewable energy that can be stored in the form of heat. One design of central receiver tower uses an open cavity into which sunlight is concentrated by mirrors (heliostats). This concentrated solar flux produces a high temperature inside the cavity, which can be utilized in an energy conversion process. The amount of energy captured is reduced by losses occurring at the cavity through all three modes: radiation to the atmosphere, conduction to the adjoining structure, and convection. This study investigates the natural convection losses from the receiver to the environment. Computational fluid dynamics was used to simulate the fluid flow and heat transfer of the receiver, since no analytical solution can be obtained and no empirical correlations exist for the given geometry. The results provide guidelines for predicting natural convection losses for hexagonal and circular open cavities; additionally, correlations are given for various inclination angles and aspect ratios.
These results provide methods to minimize natural convection through careful design of the receiver geometry and modification of the inclination angle and aspect ratio of the cavity.

Keywords: concentrated solar power (CSP), central receivers, natural convection, CFD, open cavities

Procedia PDF Downloads 272
2120 A Comparative Time-Series Analysis and Deep Learning Projection of Innate Radon Gas Risk in Canadian and Swedish Residential Buildings

Authors: Selim M. Khan, Dustin D. Pearson, Tryggve Rönnqvist, Markus E. Nielsen, Joshua M. Taron, Aaron A. Goodarzi

Abstract:

Accumulation of radioactive radon gas in indoor air poses a serious risk to human health by increasing the lifetime risk of lung cancer, and radon is classified by the IARC as a Group 1 carcinogen. Radon exposure risks are a function of geologic, geographic, design, and human behavioural variables and can change over time. Using time-series and deep machine learning modelling, we analyzed long-term radon test outcomes as a function of building metrics from 25,489 Canadian and 38,596 Swedish residential properties constructed between 1945 and 2020. While Canadian and Swedish properties built between 1970 and 1980 are comparable (96–103 Bq/m³), innate radon risks subsequently diverge, rising in Canada and falling in Sweden, such that 21st-century Canadian houses show 467% greater average radon (131 Bq/m³) relative to Swedish equivalents (28 Bq/m³). These trends are consistent across housing types and regions within each country. The introduction of energy-efficiency measures within Canadian and Swedish building codes coincided with opposing radon-level trajectories in each nation. Deep machine learning modelling predicts that, without intervention, average Canadian residential radon levels will increase to 176 Bq/m³ by 2050, emphasizing the importance and urgency of future building code intervention to achieve systemic radon reduction in Canada.

Keywords: radon health risk, time-series, deep machine learning, lung cancer, Canada, Sweden

Procedia PDF Downloads 72