Search results for: forest fire hazard
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1926

366 Disarmament and Rehabilitation of Women Maoists: A Case Study of Chhattisgarh, India

Authors: Pinal Patel

Abstract:

The study defines the problems and issues of women in Maoist groups, also referred to as ‘Naxalites’, in Chhattisgarh, India. It analyses the causes and consequences of the increasing number of women joining Maoist groups and the measures taken by the central and state governments to bring them back. The main aspect of the study is how to counter these challenges, resolve the issues, and restore normalcy to the lives of women Maoists so that they can be resettled in the mainstream once they become physically inactive and wish to become part of society. The rationale behind this study is that women Maoists, once inactive, have no place either in the Maoist camps/rebel groups or, in particular, in society. The problems faced by women Maoists, in society as well as in Maoist camps, can be studied through social, economic, cultural, political and humanitarian aspects. The methodology of the study relies on primary sources of information, which include a research survey in the most affected areas and statistical analysis. Secondary sources of information are helpful for understanding the background of the problem. The government’s strategy of rewarding ex-women Maoists and their families with cash and providing resettlement and rehabilitation benefits, including houses and jobs, is a well-formulated and feasible policy, effectively implemented by the concerned authorities. However, the survey results show that the policy has not had the impact it was intended to have, because inactive and physically disabled women are still left deserted in deep forests to die, and police or authorities are not able to reach them and bring them back. The difficult terrain and dense forest areas are major hurdles to reaching the Maoist camps. Moreover, it is difficult to make people aware of the government’s surrender and rehabilitation schemes and policies, as communication networks are very poor due to the lack of development in the state.

Keywords: maoists, women, government, policy

Procedia PDF Downloads 96
365 Automated Manual Handling Risk Assessments: Practitioner Experienced Determinants of Automated Risk Analysis and Reporting Being a Benefit or Distraction

Authors: S. Cowley, M. Lawrance, D. Bick, R. McCord

Abstract:

Technology that automates manual handling (musculoskeletal disorder or MSD) risk assessments is increasingly available to ergonomists, engineers, and generalist health and safety practitioners alike. The risk assessment process is generally based on the use of wearable motion sensors that capture information about worker movements for real-time or post-hoc analysis. Traditionally, MSD risk assessment is undertaken with the assistance of a checklist, such as that from the SafeWork Australia code of practice, with the expert assessor observing the task and ideally engaging with the worker in a discussion about the detail. Automation enables the non-expert to complete assessments and does not always require the assessor to be present. This clearly has cost and time benefits for the practitioner, but is it an improvement on assessment by a human? Human risk assessments draw on the knowledge and expertise of the assessor but, like all risk assessments, are highly subjective. The complexity of the checklists and models used in the process can be off-putting and sometimes leads to the assessment becoming the focus and the end rather than a means to an end; the focus on risk control is lost. Automated risk assessment handles the complexity of the assessment for the assessor and delivers a simple risk score that enables decision-making regarding risk control. Being machine-based, it is objective and will deliver the same result each time it assesses an identical task. However, the WHS professional needs to know whether this emergent technology asks the right questions and delivers the right answers, and whether it improves the risk assessment process and results or simply distances the professional from the task and the worker. They need clarity as to whether automation of manual task risk analysis and reporting leads to risk control or to a focus on the worker. Critically, they need evidence as to whether automation in this area of hazard management leads to better risk control or just a bigger collection of assessments. This examination of practitioner-experienced determinants of automated manual task risk analysis and reporting being a benefit or a distraction addresses an understanding of emergent risk assessment technology, its use, and the issues to consider when making decisions about adopting and applying these technologies.

Keywords: automated, manual-handling, risk-assessment, machine-based

Procedia PDF Downloads 99
364 A Study on an Evacuation Test to Measure Delay Time in Using an Evacuation Elevator

Authors: Kyungsuk Cho, Seungun Chae, Jihun Choi

Abstract:

Elevators are being examined as one of the evacuation methods for super-tall buildings. However, data on the use of elevators for evacuation during a fire are extremely scarce. Therefore, a test was conducted to measure the delay time in using an evacuation elevator. In the test, the time taken to get on and off an elevator was measured, and the case in which people gave up boarding when the capacity of the elevator was exceeded was also taken into consideration. 170 men and women participated in the test, 130 of whom were young people (20-50 years old) and 40 of whom were senior citizens (over 60 years old). The capacity of the elevator was 25 people, and it travelled between the 2nd and 4th floors. A video recording device was used to analyze the test. An elevator in an ordinary building, not a super-tall building, was used to measure the delay time in getting on and off. In order to minimize interference from other elements, the elevator platforms on the 2nd and 4th floors were partitioned off. The elevator travelled between the 2nd and 4th floors, where people got on and off. If fewer than 20 people got on the empty elevator, the data were excluded. If the elevator carrying 10 passengers stopped and fewer than 10 new passengers got on, the data were excluded. Boarding of an empty elevator was observed 49 times. The average number of passengers was 23.7, it took 14.98 seconds for the passengers to board the empty elevator, and the load factor was 1.67 N/s. It took the passengers, whose average number was 23.7, 10.84 seconds to get off the elevator, and the unload factor was 2.33 N/s. When an elevator’s capacity is exceeded, the excess passengers have to get off; the time taken for this and the probability of its occurrence were measured in the test. In 37% of boardings, the capacity was exceeded. As the number of people who gave up boarding increased, the load factor of the ride decreased. When 1 person gave up boarding, the load factor was 1.55 N/s; this case was observed 10 times, which was 12.7% of the total. When 2 people gave up boarding, the load factor was 1.15 N/s; this case was observed 7 times, which was 8.9% of the total. When 3 people gave up boarding, the load factor was 1.26 N/s; this case was observed 4 times, which was 5.1% of the total. When 4 people gave up boarding, the load factor was 1.03 N/s; this case was observed 5 times, which was 6.3% of the total. Getting-on and getting-off time data for people who can walk freely were obtained from the test. In addition, quantitative results were obtained on the relation between the number of people giving up boarding and the time taken for boarding. This work was supported by the National Research Council of Science & Technology (NST) grant by the Korea government (MSIP) (No. CRC-16-02-KICT).
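As a rough illustration of the load and unload factors reported above (passengers boarded or alighting per second), the minimal Python sketch below averages the rate over a handful of observations; the observation values are made up for illustration, not the recorded test data.

```python
# Minimal sketch of the load/unload factor calculation: passengers divided by
# boarding or alighting time, averaged over observations. Values are hypothetical.
import numpy as np

passengers   = np.array([24, 23, 25, 22, 24])       # people boarding an empty car
board_time_s = np.array([15.1, 14.2, 16.0, 13.8, 15.5])
exit_time_s  = np.array([10.9, 10.2, 11.5, 10.1, 11.0])

load_factor = passengers / board_time_s             # N/s per boarding observation
unload_factor = passengers / exit_time_s            # N/s per alighting observation
print(round(load_factor.mean(), 2), round(unload_factor.mean(), 2))
```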

Keywords: evacuation elevator, super tall buildings, evacuees, delay time

Procedia PDF Downloads 157
363 Utilization of Extracted Spirogyra sp. Media Fermented by Gluconacetobacter Xylinum for Cellulose Production as Raw Material for Paper Product

Authors: T. S. Desak Ketut, A. N. Isna, A. A. Ayu, D. P. Ririn, Suharjono Hadiatullah

Abstract:

The demand for paper rises rapidly from year to year. The rising cellulose requirement for paper production has increased the demand for wood, with the effect of shrinking forest area through deforestation. An alternative cellulose that can be used for making paper is microbial cellulose. The objectives of this research are to determine the effectiveness of Spirogyra sp. fermentation media used by Gluconacetobacter xylinum for cellulose production as a material for papermaking, and to determine the effect of the composition of the bacterial cellulose composite produced by Gluconacetobacter xylinum in Spirogyra sp. media. The methods used were as follows: 1) assaying the effect of variations in the composition of the fermentation media on bacterial cellulose production by Gluconacetobacter xylinum; 2) assaying the effect of the composition of the bacterial cellulose fermented by Gluconacetobacter xylinum in extracted Spirogyra media on paper quality. The results of this research show that variations in the Spirogyra sp. fermentation media affect cellulose production by Gluconacetobacter xylinum; this is shown by the highest and significantly different values for nata thickness, dry weight, and wet weight at a sucrose concentration of 7.5% and urea of 0.75%. The composition of the bacterial cellulose composite produced by Gluconacetobacter xylinum in Spirogyra sp. media affects the quality of paper made from wet and dry nata. The parameters thickness, weight, water absorption, density, and grammage showed the highest results at a sucrose concentration of 7.5% and a urea concentration of 0.75%, except that paper density from dry nata was highest at sucrose and urea concentrations of 0%.

Keywords: cellulose, fermentation media, Gluconacetobacter xylinum, paper, Spirogyra sp.

Procedia PDF Downloads 322
362 Seismic Isolation of Existing Masonry Buildings: Recent Case Studies in Italy

Authors: Stefano Barone

Abstract:

Seismic retrofit of buildings through base isolation represents a consolidated protection strategy against earthquakes. It consists of decoupling the ground motion from that of the structure by introducing anti-seismic devices at the base of the building, characterized by high horizontal flexibility and medium/high dissipative capacity. This protects structural elements and limits damage to non-structural ones. For these reasons, full functionality is guaranteed after an earthquake event. Base isolation is applied extensively to both new and existing buildings. For the latter, it usually does not require any interruption of the structure’s use or evacuation of occupants, a special advantage for strategic buildings such as schools, hospitals, and military buildings. This paper describes the application of seismic isolation to three existing masonry buildings in Italy: Villa “La Maddalena” in Macerata (Marche region), and the “Giacomo Matteotti” and “Plinio Il Giovane” school buildings in Perugia (Umbria region). The seismic hazard of the sites is characterized by a Peak Ground Acceleration (PGA) of 0.213g-0.287g for the Life Safety Limit State and 0.271g-0.359g for the Collapse Limit State. All the buildings are isolated with a combination of free sliders of type TETRON® CD with confined elastomeric disk and anti-seismic rubber isolators of type ISOSISM® HDRB to reduce the eccentricity between the center of mass and the center of stiffness, thus limiting torsional effects during a seismic event. The isolation systems are designed to lengthen the original period of vibration (i.e., without isolators) by at least three times and to guarantee medium/high levels of energy dissipation capacity (equivalent viscous damping between 12.5% and 16%). This allows the structures to resist 100% of the seismic design action. This article shows the performance of the supplied anti-seismic devices, with particular attention to the experimental dynamic response. Finally, a special focus is given to the main site activities required to isolate a masonry building.

Keywords: retrofit, masonry buildings, seismic isolation, energy dissipation, anti-seismic devices

Procedia PDF Downloads 49
361 Susceptibility of Different Clones of Eucalyptus Species against Gall Wasp, Leptocybe invasa Fisher and La Salle in Punjab, India

Authors: Ashwinder K. Dhaliwal, G. P. S. Dhillon

Abstract:

Eucalyptus is one of the most important forest tree species; it can tolerate and grow well on degraded and infertile soils that are not suitable for other tree species. Besides this, these trees have a short rotation and good economic value. However, the gall-inducing wasp Leptocybe invasa Fisher and La Salle has been reported from many countries throughout the world. The spread of L. invasa is of huge economic concern, as more than 20,000 ha of young Eucalyptus trees have already been affected in the southern states of India. Host plant resistance, being the first line of defense against insect pests, demands the screening of different germplasm sources against L. invasa. Keeping this in view, fourteen different clones of Eucalyptus spp. were evaluated for their susceptibility to L. invasa in a replicated clonal trial planted at Punjab Agricultural University, Ludhiana. The degree of gall infestation was recorded from three plants of each clone in each replication. Three branches from the lower, middle and upper canopy of each tree were selected for recording the total number of galls induced by L. invasa. The statistical analysis was done as per the procedure laid down for a completely randomised block design (CRBD): analysis of variance (ANOVA), critical difference (CD) and variance components using Proc GLM (SAS software 9.3, SAS Institute Ltd., U.S.A.). All possible treatment means were compared with Duncan’s multiple range test (DMRT) at the 1% probability level. The results showed that clones C-9, C-45 and C-42 were completely free from infestation by L. invasa. However, there was minor infestation of L. invasa on clones C-2135, C-413, C-407, C-35, C-72 and C-37. Clone C-6 was severely infested by L. invasa, followed by clones C-11, C-12, F-316 and C-25. The information generated by this study will be helpful for future breeding and use in afforestation programmes.
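For readers who want to reproduce this kind of clonal-trial analysis, the sketch below runs a randomized complete block ANOVA of gall counts per clone in Python (statsmodels); the data frame is simulated, only a subset of clone labels is used, and the Duncan multiple range test step is left to specialised software such as R's agricolae.

```python
# Minimal sketch of a block-design ANOVA of gall counts per clone; data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
clones = [f"C-{i}" for i in (6, 9, 11, 45)]            # subset of clone labels for illustration
df = pd.DataFrame([
    {"clone": c, "block": b, "galls": rng.poisson(20 if c == "C-6" else 2)}
    for c in clones for b in range(1, 4) for _ in range(3)   # 3 blocks x 3 trees per clone
])

model = smf.ols("galls ~ C(clone) + C(block)", data=df).fit()
print(anova_lm(model, typ=2))                          # F-tests for clone and block effects
```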

Keywords: eucalyptus clones, gall wasp, Leptocybe invasa, screening, susceptibility

Procedia PDF Downloads 198
360 Soil Liquefaction Hazard Evaluation for Infrastructure in the New Bejaia Quai, Algeria

Authors: Mohamed Khiatine, Amal Medjnoun, Ramdane Bahar

Abstract:

Northern Algeria is a highly seismic zone, as evidenced by its historical seismicity. During the past two decades, it has experienced several moderate to strong earthquakes. Therefore, geotechnical engineering problems that involve dynamic loading of soils and soil-structure interaction require, in the presence of saturated loose sand formations, liquefaction studies. Bejaia city, located north-east of Algiers, Algeria, is part of an alluvial plain that covers an area of approximately 750 hectares. According to the Algerian seismic code, it is classified as a moderate seismicity zone. In the past, this area did not experience urban development because of the various hazards identified by hydraulic and geotechnical studies conducted in the region. The low bearing capacity of the soil, its high compressibility, and the risk of liquefaction and flooding are among these hazards and constrain urbanization. In this area, several structures founded on shallow foundations have suffered damage. Hence, the soils need treatment to reduce the risk. Many field and laboratory investigations, including core drilling, pressuremeter tests, standard penetration tests (SPT), cone penetration tests (CPT) and geophysical downhole tests, were performed at different locations in the area. The major part of the area consists of silty fine sand, sometimes heterogeneous, that has not yet reached a sufficient degree of consolidation. The groundwater depth varies between 1.5 and 4 m. These investigations show that the liquefaction phenomenon is one of the critical problems for geotechnical engineers and one of the obstacles encountered in the design phase of projects. This paper presents an analysis to evaluate the liquefaction potential using empirical methods based on the Standard Penetration Test (SPT), the Cone Penetration Test (CPT) and shear wave velocity, as well as numerical analysis. These liquefaction assessment procedures indicate that liquefaction can occur to considerable depths in the silty sand of the harbor zone of Bejaia.
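As an illustration of the stress-based screening that underlies the SPT/CPT empirical methods mentioned above, the sketch below evaluates the Seed-Idriss cyclic stress ratio and a liquefaction factor of safety for a few layers; the depths, unit weight, peak acceleration and CRR values are hypothetical, not the Bejaia site data.

```python
# Minimal sketch of the simplified (Seed-Idriss) liquefaction screening; inputs are hypothetical.
import numpy as np

def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    """CSR = 0.65 * (a_max/g) * (sigma_v/sigma_v') * rd."""
    rd = np.where(depth_m <= 9.15, 1.0 - 0.00765 * depth_m, 1.174 - 0.0267 * depth_m)
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

depth = np.array([3.0, 6.0, 9.0])            # m, hypothetical layer depths
sigma_v = 18.0 * depth                        # kPa, total stress (unit weight 18 kN/m3)
u = 9.81 * (depth - 1.5)                      # kPa, pore pressure, water table at 1.5 m
csr = cyclic_stress_ratio(0.25, sigma_v, sigma_v - u, depth)

crr = np.array([0.12, 0.15, 0.20])            # hypothetical CRR from an SPT/CPT correlation
fs = crr / csr                                # FS < 1 flags potentially liquefiable layers
print(np.round(csr, 3), np.round(fs, 2))
```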

Keywords: earthquake, modeling, liquefaction potential, laboratory investigations

Procedia PDF Downloads 337
359 Performance Improvement of Long-Reach Optical Access Systems Using Hybrid Optical Amplifiers

Authors: Shreyas Srinivas Rangan, Jurgis Porins

Abstract:

Internet traffic has increased exponentially due to the high demand for data rates from users, and the constantly growing metro and access networks are focused on improving the maximum transmit distance of long-reach optical networks. One of the common methods to improve the maximum transmit distance of long-reach optical networks at the component level is to use broadband optical amplifiers. The Erbium-Doped Fiber Amplifier (EDFA) provides high amplification with a low noise figure, but due to its characteristics, its operation is limited to the C-band and L-band. In contrast, the Raman amplifier exhibits a wide amplification spectrum, and negative noise figure values can be achieved. To obtain such results, high-powered pumping sources are required. Operating Raman amplifiers with such high-powered optical sources may cause fire hazards and may damage the optical system. In this paper, we implement a hybrid optical amplifier configuration. EDFA and Raman amplifiers are used in this hybrid setup to combine the advantages of both and improve the reach of the system. Using this setup, we analyze the maximum transmit distance of the network by obtaining a correlation diagram between the length of the single-mode fiber (SMF) and the Bit Error Rate (BER). This hybrid amplifier configuration is implemented in a Wavelength Division Multiplexing (WDM) system with a BER of 10⁻⁹ using the NRZ modulation format, and the gain uniformity, signal-to-noise ratio (SNR), efficiency of the pumping source, and optical signal gain efficiency of the amplifier are studied in a mathematical modelling environment. Numerical simulations were implemented in RSoft OptSim simulation software based on the nonlinear Schrödinger equation using the split-step Fourier method and the Monte Carlo method for estimating BER.
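For context on the propagation model behind such simulations, the sketch below implements a basic symmetric split-step Fourier solver for the scalar nonlinear Schrödinger equation; the pulse and fiber parameters are generic SMF-like values chosen for illustration, not the settings used in the paper.

```python
# Minimal sketch of the split-step Fourier method for the scalar NLSE; parameters illustrative.
import numpy as np

def ssfm(A0, dt, L, dz, beta2, gamma, alpha):
    """Propagate complex envelope A0 over fiber length L (symmetric split-step)."""
    n = A0.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)             # angular frequency grid
    half_lin = np.exp((1j * beta2 / 2 * w**2 - alpha / 2) * dz / 2)
    A = A0.copy()
    for _ in range(int(round(L / dz))):
        A = np.fft.ifft(half_lin * np.fft.fft(A))       # linear half step (dispersion, loss)
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # nonlinear full step (Kerr effect)
        A = np.fft.ifft(half_lin * np.fft.fft(A))       # linear half step
    return A

t = np.linspace(-50e-12, 50e-12, 2**12)                 # time grid, s
A0 = np.sqrt(1e-3) * np.exp(-t**2 / (2 * (10e-12)**2))  # 1 mW Gaussian pulse
A = ssfm(A0, t[1] - t[0], L=50e3, dz=100.0,
         beta2=-21e-27, gamma=1.3e-3, alpha=0.046e-3)   # SMF-like parameters
print(np.max(np.abs(A)**2))                             # peak power after 50 km
```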

Keywords: Raman amplifier, erbium doped fibre amplifier, bit error rate, hybrid optical amplifiers

Procedia PDF Downloads 44
358 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane

Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo

Abstract:

Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns of two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO), utilized by a wide range of downstream processes as a feedstock for other chemical production. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with respect to the most efficient conversion. Firstly, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the error-free dataset to predict the DRM results. DNN models would inherently not be able to obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction, with accuracy similar to the RF model: R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
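A minimal sketch of the two preprocessing/training ideas named above (DBSCAN outlier removal followed by supervised greedy layer-wise pretraining) is given below; the data, layer sizes and training settings are placeholders and the network is far smaller than anything used in the study.

```python
# Minimal sketch: DBSCAN outlier removal, then greedy layer-wise pretraining of a regressor.
import numpy as np
import tensorflow as tf
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                                  # hypothetical process conditions
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=500)        # hypothetical target (e.g. H2/CO)

Xs = StandardScaler().fit_transform(X)
labels = DBSCAN(eps=2.5, min_samples=5).fit_predict(Xs)
Xs, y = Xs[labels != -1], y[labels != -1]                      # drop points DBSCAN marks as noise

# Greedy layer-wise pretraining: add one hidden layer at a time, train it with the
# earlier hidden layers frozen, then fine-tune the whole stack end to end.
hidden = []
for units in (32, 16, 8):
    hidden.append(tf.keras.layers.Dense(units, activation="relu"))
    model = tf.keras.Sequential(hidden + [tf.keras.layers.Dense(1)])
    for layer in hidden[:-1]:
        layer.trainable = False                                # freeze previously trained layers
    model.compile(optimizer="adam", loss="mse")
    model.fit(Xs, y, epochs=20, verbose=0)

for layer in hidden:                                           # unfreeze and fine-tune
    layer.trainable = True
model.compile(optimizer="adam", loss="mse")
model.fit(Xs, y, epochs=50, verbose=0)
print(model.evaluate(Xs, y, verbose=0))
```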

Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining

Procedia PDF Downloads 65
357 Dietary Vitamin D Intake and the Bladder Cancer Risk: A Pooled Analysis of Prospective Cohort Studies

Authors: Iris W. A. Boot, Anke Wesselius, Maurice P. Zeegers

Abstract:

Diet may play an essential role in the aetiology of bladder cancer (BC). Vitamin D is involved in various biological functions which have the potential to prevent BC development. Besides, vitamin D also influences the uptake of calcium and phosphorus, thereby possibly indirectly influencing the risk of BC. The aim of the present study was to investigate the relation between vitamin D intake and BC risk. Individual dietary data were pooled from three cohort studies. Food item intake was converted to daily intakes of vitamin D, calcium and phosphorus. Pooled multivariate hazard ratios (HRs), with corresponding 95% confidence intervals (CIs), were obtained using Cox regression models. Analyses were adjusted for gender, age and smoking status (Model 1), and additionally for the food groups fruit, vegetables and meat (Model 2). Dose-response relationships (Model 1) were examined using a nonparametric test for trend. In total, 2,871 cases and 522,364 non-cases were included in the analyses. The present study showed an overall increased BC risk for high dietary vitamin D intake (HR: 1.14, 95% CI: 1.03-1.26). A similarly increased BC risk with high vitamin D intake was observed among women and for the non-muscle-invasive BC subtype (HR: 1.41, 95% CI: 1.15-1.72 and HR: 1.13, 95% CI: 1.01-1.27, respectively). High calcium intake decreased the BC risk among women (HR: 0.81, 95% CI: 0.67-0.97). A combined inverse effect on BC risk was observed for low vitamin D intake and high calcium intake (HR: 0.67, 95% CI: 0.48-0.93), while a positive effect was observed for high vitamin D intake in combination with low, moderate and high phosphorus intake (HR: 1.31, 95% CI: 1.09-1.59; HR: 1.17, 95% CI: 1.01-1.36; HR: 1.16, 95% CI: 1.03-1.31, respectively). Combining all nutrients showed a decreased BC risk for low vitamin D intake with high calcium and moderate phosphorus intake (HR: 0.37, 95% CI: 0.18-0.75), and an increased BC risk for moderate intake of all the nutrients (HR: 1.18, 95% CI: 1.02-1.38), for high vitamin D and low calcium and phosphorus intake (HR: 1.28, 95% CI: 1.01-1.62), and for moderate vitamin D and calcium and high phosphorus intake (HR: 1.27, 95% CI: 1.01-1.59). No significant dose-response relations were observed. The findings of this study show an increased BC risk for high dietary vitamin D intake and a decreased risk for high calcium intake. Besides, the study highlights the importance of examining the effect of a nutrient in combination with complementary nutrients for risk assessment. Future research should focus on nutrients in a wider context and in nutritional patterns.
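The pooled analysis described above can be sketched with the lifelines implementation of the Cox proportional-hazards model; the data frame below is simulated (columns and effect sizes are invented) and serves only to show the shape of a Model 1-style adjusted fit.

```python
# Minimal sketch of a Cox proportional-hazards fit for a dietary exposure; data are simulated.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "vitd_high": rng.integers(0, 2, n),                # high vs low dietary vitamin D
    "age": rng.normal(60, 8, n),
    "female": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
})
baseline = rng.exponential(12, n)                      # years to event, hypothetical
df["time"] = baseline * np.exp(-0.15 * df["vitd_high"])
df["event"] = (df["time"] < 10).astype(int)            # administrative censoring at 10 years
df["time"] = df["time"].clip(upper=10)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")    # all remaining columns as covariates
cph.print_summary()                                    # exp(coef) column gives the HRs
```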

Keywords: bladder cancer, nutritional oncology, pooled cohort analysis, vitamin D

Procedia PDF Downloads 62
356 Generalized Additive Model for Estimating Propensity Score

Authors: Tahmidul Islam

Abstract:

The Propensity Score Matching (PSM) technique has been widely used for estimating the causal effect of treatment in observational studies. One major step in implementing PSM is estimating the propensity score (PS). A logistic regression model with additive linear terms for the covariates is the most commonly used technique in many studies. Logistic regression is also used with cubic splines to retain flexibility in the model. However, choosing the functional form of the logistic regression model has been an open question, since the effectiveness of PSM depends on how accurately the PS has been estimated. In many situations, the linearity assumption of linear logistic regression may not hold, and a non-linear relation between the logit and the covariates may be appropriate. One can estimate the PS using machine learning techniques such as random forests or neural networks for more accuracy in non-linear situations. In this study, an attempt has been made to compare the efficacy of the Generalized Additive Model (GAM) in various linear and non-linear settings and to compare its performance with usual logistic regression. GAM is a non-parametric technique in which the functional form of the covariates can be left unspecified and a flexible regression model can be fitted. In this study, various simple and complex models have been considered for the treatment under several situations (small/large sample, low/high number of treatment units), and it was examined which method leads to more covariate balance in the matched dataset. It is found that the logistic regression model is impressively robust against the inclusion of quadratic and interaction terms and reduces the mean difference between the treatment and control sets as efficiently as GAM does. GAM provided no significantly better covariate balance than logistic regression in either simple or complex models. The analysis also suggests that a larger proportion of controls than treatment units leads to better balance for both methods.
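A minimal sketch of the comparison, assuming the pygam library for the GAM and scikit-learn for the logistic model, is shown below on simulated data with a deliberately non-linear treatment assignment; the smooth terms and sample size are illustrative.

```python
# Minimal sketch: propensity scores from logistic regression vs. a logistic GAM; data simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from pygam import LogisticGAM, s

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))
logit = 0.5 * X[:, 0] - 1.0 * np.sin(X[:, 1])          # non-linear true assignment model
treat = rng.binomial(1, 1 / (1 + np.exp(-logit)))

ps_logit = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]
gam = LogisticGAM(s(0) + s(1)).fit(X, treat)           # one smooth term per covariate
ps_gam = gam.predict_proba(X)                          # P(treated | X)

print(np.corrcoef(ps_logit, ps_gam)[0, 1])             # agreement between the two PS estimates
```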

Keywords: accuracy, covariate balances, generalized additive model, logistic regression, non-linearity, propensity score matching

Procedia PDF Downloads 340
355 Characterization of Complex Gold Ores for Preliminary Process Selection: The Case of Kapanda, Ibindi, Mawemeru, and Itumbi in Tanzania

Authors: Sospeter P. Maganga, Alphonce Wikedzi, Mussa D. Budeba, Samwel V. Manyele

Abstract:

This study characterizes complex gold ores (elemental and mineralogical composition, gold distribution, ore grindability, and mineral liberation) for preliminary process selection. About 200 kg of ore samples were collected from each location using systematic sampling by mass interval. Ores were dried, crushed, milled, and split into representative sub-samples (about 1 kg) for elemental and mineralogical composition analyses using X-ray fluorescence (XRF), fire assay with an atomic absorption spectrometry (AAS) finish, and X-ray diffraction (XRD) methods, respectively. The gold distribution was studied on size-by-size fractions, while ore grindability was determined using the standard Bond test. The mineral liberation analysis was conducted using a ThermoFisher Scientific Mineral Liberation Analyzer (MLA) 650, where unsieved polished grain mounts (80% passing 700 µm) were used as MLA feed. Two MLA measurement modes, X-ray modal analysis (XMOD) and sparse phase liberation-grain X-ray mapping analysis (SPL-GXMAP), were employed. At least two cyanide consumers (Cu, Fe, Pb, and Zn) and kinetics impeders (Mn, S, As, and Bi) were present in all the locations investigated. The copper content at Kapanda (0.77% Cu) and Ibindi (7.48% Cu) exceeded the recommended threshold of 0.5% Cu for direct cyanidation. The gold ore at Ibindi showed a higher rate of grinding compared to the other locations, which can be explained by its highest grindability (2.119 g/rev.) and lowest Bond work index (10.213 kWh/t). Pyrite-marcasite, chalcopyrite, galena, and siderite were identified as the major gold-, copper-, lead-, and iron-bearing minerals, respectively, with potential for economic extraction. However, only gold and copper can be recovered under conventional milling because of grain size issues (galena is exposed by only 10%) and process complexity (it is difficult to concentrate and smelt iron from siderite). Therefore, the preliminary process selection is copper flotation followed by gold cyanidation for the Kapanda and Ibindi ores, whereas gold cyanidation with additives such as glycine or ammonia is selected for the Mawemeru and Itumbi ores because of the low concentrations of Cu, Pb, Fe, and Zn minerals.
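For orientation, Bond's third-theory equation links the reported work index to specific grinding energy; the sketch below uses the Ibindi work index from the abstract but assumes illustrative feed and product sizes.

```python
# Minimal sketch of Bond's specific-energy calculation; feed/product sizes are illustrative.
def bond_energy(wi_kwh_t, f80_um, p80_um):
    """Specific grinding energy W = 10*Wi*(1/sqrt(P80) - 1/sqrt(F80)), in kWh/t."""
    return 10.0 * wi_kwh_t * (1.0 / p80_um**0.5 - 1.0 / f80_um**0.5)

# Ibindi work index 10.213 kWh/t (from the abstract); grind from F80 = 2000 um to P80 = 106 um.
print(round(bond_energy(10.213, f80_um=2000.0, p80_um=106.0), 2), "kWh/t")
```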

Keywords: complex gold ores, mineral liberation, ore characterization, ore grindability

Procedia PDF Downloads 55
354 Factors Associated with Recurrence and Long-Term Survival in Younger and Postmenopausal Women with Breast Cancer

Authors: Sopit Tubtimhin, Chaliya Wamaloon, Anchalee Supattagorn

Abstract:

Background and Significance: Breast cancer is the most frequently diagnosed cancer and the leading cause of cancer death among women. This study aims to determine factors potentially predicting recurrence and long-term survival after the first recurrence in surgically treated patients, comparing postmenopausal and younger women. Methods and Analysis: A retrospective cohort study was performed on 498 Thai women with invasive breast cancer who had undergone mastectomy and been followed up at Ubon Ratchathani Cancer Hospital, Thailand. Data were collected through a systematic chart audit of medical records and pathology reports between January 1, 2002, and December 31, 2011. The last follow-up time point for surviving patients was December 31, 2016. A Cox regression model was used to calculate hazard ratios for recurrence and death. Findings: The median age was 49 (SD ± 9.66) years at the time of diagnosis; 47% were postmenopausal women (≥ 51 years and no menstrual flow for a minimum of 12 months) and 53% were younger women (< 51 years with menstrual periods). The median time from diagnosis to the last follow-up or death was 10.81 [95% CI = 9.53-12.07] years in younger cases and 8.20 [95% CI = 6.57-9.82] years in postmenopausal cases. The recurrence-free survival (RFS) estimates for younger women at 1, 5 and 10 years, of 95.0%, 64.0% and 58.93% respectively, appeared slightly better than the 92.7%, 58.1% and 53.1% for postmenopausal women [HRadj = 1.25, 95% CI = 0.95-1.64]. Regarding overall survival (OS), for younger women the estimates at 1, 5 and 10 years were 97.7%, 72.7% and 52.7%, respectively; for postmenopausal patients, OS at 1, 5 and 10 years was 95.7%, 70.0% and 44.5%, respectively; there were no significant differences in survival [HRadj = 1.23, 95% CI = 0.94-1.64]. Multivariate analysis identified five factors negatively impacting survival: triple negative status [HR = 2.76, 95% CI = 1.47-5.19], Her2-enriched [HR = 2.59, 95% CI = 1.37-4.91], luminal B [HR = 2.29, 95% CI = 1.35-3.89], margins not free [HR = 1.98, 95% CI = 1.00-3.96] and receipt of only adjuvant chemotherapy [HR = 3.75, 95% CI = 2.00-7.04]. Statistically significant risk factors for overall cancer recurrence were Her2-enriched [HR = 5.20, 95% CI = 2.75-9.80], triple negative [HR = 3.87, 95% CI = 1.98-7.59], luminal B [HR = 2.59, 95% CI = 1.48-4.54] and receipt of only adjuvant chemotherapy [HR = 2.59, 95% CI = 1.48-5.66]. Discussion and Implications: The outcomes of this study show that postmenopausal status is associated with an increased risk of recurrence and mortality. The results provide useful information for planning the screening and treatment of early-stage breast cancer in the future.
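The survival estimates at fixed time points can be sketched with Kaplan-Meier fits per menopausal group using lifelines; the data below are simulated stand-ins, not the study cohort.

```python
# Minimal sketch of group-wise Kaplan-Meier estimates at 1, 5 and 10 years; data are simulated.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(7)
n = 250
df = pd.DataFrame({
    "group": rng.choice(["younger", "postmenopausal"], n),
    "time": rng.exponential(12, n).clip(max=15),        # years to event or censoring
})
df["event"] = (rng.random(n) < 0.4).astype(int)         # 1 = recurrence/death observed

for name, sub in df.groupby("group"):
    kmf = KaplanMeierFitter()
    kmf.fit(sub["time"], event_observed=sub["event"], label=name)
    print(name, kmf.survival_function_at_times([1, 5, 10]).round(3).tolist())
```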

Keywords: breast cancer, menopause status, recurrence-free survival, overall survival

Procedia PDF Downloads 144
353 The Economics of Ecosystem Services and Biodiversity: Valuing Ecotourism-Local Perspectives to Global Discourses-Stakeholders’ Analysis

Authors: Diptimayee Nayak

Abstract:

Ecotourism has been recognised as a popular component of alternative tourism, which claims to safeguard the host's local environment and economy. This concept of ecological tourism (ecotourism) has become more meaningful in evaluating the recreational function and services of any pristine ecosystem in the context of ‘The Economics of Ecosystems and Biodiversity (TEEB)’. Ecotourism is said to be a local solution to the global problem of conserving ecosystems and optimising the utilisation of their services. This paper takes the case of the recreational services of an Indian protected-area ecosystem, the Bhitarakanika mangrove protected area, and discusses how ecotourism is functioning from the perspectives of different stakeholders. Specific stakeholders are taken for analysis, viz., tourists and local people, as they are believed to be the major beneficiaries of ecotourism. The stakeholder analysis is based on travel cost techniques (using a truncated Poisson distribution model) for tourists and on descriptive and analytical tools for local people. The evaluation of stakeholder analysis of ecotourism has gained impetus after the formulation of ecotourism guidelines by the Ministry of Environment and Forest (MoEF), Government of India. The paper concludes that ecotourism issues and challenges are site-specific and region-specific; without critically focussing on the challenges of ecotourism at the local level, the discourses of ecotourism at the global level cannot be tackled. Mere integration and replication of policies formulated at the global level to be followed at the local level will not be successful (top-down policies). Rather, mainstreaming the local-level decision-making process into the global policy structure helps to solve global issues to a greater extent (bottom-up).
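A minimal sketch of the on-site (zero-truncated) Poisson travel-cost model referred to above is given below; the trip counts and costs are simulated, and the consumer-surplus-per-trip formula (-1/beta on travel cost) assumes the usual semi-log demand specification.

```python
# Minimal sketch of a zero-truncated Poisson travel-cost model; data are simulated.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)
n = 400
cost = rng.uniform(5, 100, n)                          # travel cost per visit (hypothetical units)
trips = rng.poisson(np.exp(1.5 - 0.02 * cost))
cost, trips = cost[trips > 0], trips[trips > 0]        # on-site sample: only visitors observed

def neg_loglik(params):
    b0, b1 = params
    mu = np.exp(b0 + b1 * cost)
    # log P(y | y > 0) for a zero-truncated Poisson
    ll = trips * np.log(mu) - mu - gammaln(trips + 1) - np.log1p(-np.exp(-mu))
    return -ll.sum()

res = minimize(neg_loglik, x0=[0.0, -0.01], method="Nelder-Mead")
b0, b1 = res.x
print("cost coefficient:", round(b1, 4), "consumer surplus per trip:", round(-1.0 / b1, 2))
```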

Keywords: ecosystem services, ecotourism, TEEB, economic valuation, stakeholders, travel cost techniques

Procedia PDF Downloads 224
352 Economic Evaluation of Cataract Eye Surgery by Health Attendant of Doctor and Nurse through the Social Insurance Board Cadr at General Hospital Anutapura Palu Central Sulawesi Indonesia

Authors: Sitti Rahmawati

Abstract:

The payment system for cataract surgery performed by professional doctors and nurses has been growing through the health insurance program, and this has become one of the factors affecting government budget formation. The system has been implemented for the purpose of quality and expenditure control, i.e., controlling health overpayment made to obtain benefits (moral hazard) by insurance users or health service providers. Rising health costs are the main issue hampering society from receiving the required health services under a cash payment system. One of the efforts that should be taken by the government in health payment is to secure health coverage through society's health insurance. The objective of the study is to assess patients' ability to pay for cataract eye surgery for the elderly. The study sample population comprised patients holding the social health insurance board card, starting in the first trimester (January-March) of 2015 and claimed through the Indonesian Case-Based Group software, as a purposive sample of 40 patients. The results show that the unit cost of the surgery service unit was $75, a unit cost excluding AFC and the salaries of nurses and doctors. The operation tariff currently implemented in the eye department of Anutapura hospital, excluding AFC and employee salaries, is $80, while the operation tariff from the unit cost calculation with the double distribution model is $65. In conclusion, the much greater actual unit cost results in an incentive distribution system that provides the ophthalmologist $37 and the nurse $20 per operation. The surgery service tariff is still low; consequently, the hospital receives low revenue, and the quality of health-insured cataract surgery in the eye department is relatively low. To increase service quality, adequately high funding is required to provide medical equipment and to increase the number of professional health attendants serving patients for cataract eye surgery at the hospital.

Keywords: economic evaluation, cataract operation, health attendant, health insurance system

Procedia PDF Downloads 147
351 Phytochemical Screening, Antioxidant and Antibacterial Activity of Annona cherimola Mill

Authors: Arun Jyothi Bheemagani, Chakrapani Pullagummi, Anupalli Roja Rani

Abstract:

Exploration of the chemical constituents of plants and pharmacological screening may provide the basis for the development of novel agents, as plants have provided some of the very important life-saving drugs used in modern medicine. The aim of our work was to screen the phytochemical constituents and the antimicrobial and antioxidant activities of the methanol extract of leaves of the Annona cherimola Mill plant from the Tirumala forest, Tirupathi. Originally called Chirimuya by the Inca people, who lived where it grows in the Andes of South America, it is an edible fruit-bearing species of the genus Annona in the family Annonaceae. Annona cherimola Mill is a multipurpose tree with edible fruits and is one of the sources of medicinal products. The antibacterial activity was measured by the agar well diffusion method; the diameter of the zone of bacterial growth inhibition was measured after incubation of the plates. The inhibitory effect was studied against the pathogenic bacteria Klebsiella pneumoniae, Bacillus subtilis, Staphylococcus aureus and Escherichia coli (E. coli). Antioxidant assays were also performed on the same extracts by spectrophotometric methods using known standard antioxidants as reference. The studied plant extracts were found to be very effective against the pathogenic microorganisms tested. The methanolic extract of Annona cherimola Mill showed maximum activity against Escherichia coli and Staphylococcus aureus, and the lowest concentration required to show activity was 0.5 mg/ml. Phytochemical screening of the plant revealed the presence of flavonoids, alkaloids, steroids and carbohydrates. A good presence of antioxidants was also found in the methanolic extracts.

Keywords: annona cherimola, phytochemicals, antioxidant and antibacterial activity, methanol extract

Procedia PDF Downloads 423
350 A LED Warning Vest as Safety Smart Textile and Active Cooperation in a Working Group for Building a Normative Standard

Authors: Werner Grommes

Abstract:

The Institute of Occupational Safety and Health participates in a working group building a normative standard for illuminated warning vests and has carried out many experiments and measurements as groundwork (cooperation). Intelligent car headlamps are able to suppress conventional warning vests with retro-reflective stripes as a source of disturbing light. Illuminated warning vests are therefore required for occupational safety. However, they must not pose any danger to the wearer or other persons. Here, the risks of the batteries (lithium types), the maximum brightness (glare), and possible interference radiation from the electronics affecting implant wearers must be taken into account. All-around visibility, as well as the required range, also plays an important role. For the study, many luminance measurements of commercially available LED and electroluminescent warning vests were made, together with measurements of their electromagnetic interference fields and aspects of electrical safety. The results of this study showed that the LED lighting was all far too bright and caused strong glare. The integrated controls with pulse modulation and switching regulators cause electromagnetic interference fields. Rechargeable lithium batteries can explode depending on the temperature range. Electroluminescence brings even more hazards. A test method was developed for the evaluation of visibility at distances of 50, 100, and 150 m, including interviews with test persons. A measuring method was developed for the detection of glare effects at close range, with the assignment of the maximum permissible luminance. The electromagnetic interference fields were tested in the time and frequency domains. A risk and hazard analysis was prepared for the use of lithium batteries. The ranges of values for luminance and the risk analysis for lithium batteries were discussed in the standards working group and will be integrated into the standard. This paper gives a brief overview of the topic of illuminated warning vests, taking into account the risks and hazards for the vest wearer and others.

Keywords: illuminated warning vest, optical tests and measurements, risks, hazards, optical glare effects, LED, E-light, electric luminescent

Procedia PDF Downloads 94
349 Phytoremediation of Pharmaceutical Emerging Contaminant-Laden Wastewater: A Techno-Economic and Sustainable Development Approach

Authors: Reda A. Elkhyat, Mahmoud Nasr, Amel A. Tammam, Mohamed A. Ghazy

Abstract:

Pharmaceuticals and personal care products (PPCPs) are a unique group of emerging contaminants continuously introduced into the aquatic ecosystem at concentrations capable of inducing adverse effects on humans and aquatic organisms, even at trace levels ranging from ppt to ppm. Amongst the common pharmaceutical emerging pollutants detected in several aquatic environments, acetaminophen has been recognized for its high toxicity. Once released into the aquatic environment, acetaminophen can be degraded by the microbial community and removed by adsorption/uptake by plants. Although many studies have investigated the hazard risks of acetaminophen pollution to aquatic animals, the number of studies demonstrating its removal efficiency and effects on aquatic plants still needs to be expanded. In this context, this study aims to apply an aquatic plant-based phytoremediation system to eliminate this emerging contaminant from domestic wastewater. The phytoremediation experiment was performed in a hydroponic system containing Eichhornia crassipes and operated under the natural environment at 25°C to 30°C. This system was subjected to synthetic domestic wastewater with a maximum initial chemical oxygen demand (COD) of 390 mg/L and three different acetaminophen concentrations of 25, 50, and 200 mg/L. After 17 d of operation, the phytoremediation system achieved removal efficiencies of about 100% and 85.6±4.2% for acetaminophen and COD, respectively. Moreover, Eichhornia crassipes could withstand the toxicity associated with increasing the acetaminophen concentration from 25 to 200 mg/L. This high treatment performance can be attributed to the good adaptation of the water hyacinth to the phytoremediation conditions. Moreover, it has been proposed that this phytoremediation system is largely supported by phytodegradation and plant uptake mechanisms; however, detection of the generated intermediates, metabolites, and degradation products is still under investigation. Applying this free-floating plant in wastewater treatment and reducing emerging contaminants would meet the targets of SDGs 3, 6, and 14. A cost-benefit analysis was performed for the phytoremediation system, which is financially viable: the net profit was 2,921 US$/y with a payback period of nine years.

Keywords: domestic wastewater, emerging pollutants, hydrophyte Eichhornia crassipes, paracetamol removal efficiency, sustainable development goals (SDGs)

Procedia PDF Downloads 92
348 Storm-water Management for Greenfield Area Using Low Impact Development Concept for Town Planning Scheme Mechanism

Authors: Sahil Patel

Abstract:

Increasing urbanization leads to a concrete forest, and new development practices affect the natural hydrologic cycle. This raises concerns about whether groundwater is recharged in sufficient quantity, as porous surfaces shrink rapidly with further development. A city like Ahmedabad, with a non-perennial river, is 100% dependent on groundwater. Ahmedabad receives its domestic-use water from the Narmada river, located about 200 km away, and the expense of bringing this water is very high. Ahmedabad receives 800 mm of rainfall annually; mostly this water aggravates local waterlogging problems and then flows to the Sabarmati river and merges into the sea. The existing developed area of Ahmedabad is very dense and does not offer many chances to change the built form and increase porous surfaces to absorb storm-water. Therefore, there is a need to plan upcoming areas with more effective solutions to manage storm-water. This paper focuses on the management of storm-water for new development by retaining the natural hydrology. The Low Impact Development (LID) concept is used to manage storm-water efficiently. Ahmedabad has a tool called the “Town Planning Scheme,” which helps the local body drive new development through a land-pooling mechanism. This paper gives a detailed analysis of a selected area (a Town Planning Scheme area proposed by the local authority) in Ahmedabad. Development control regulations for individual developers and some physical elements for public places are presented to manage storm-water. This is a different solution for the Town Planning Scheme than the conventional approach; a local authority can use it for any area, but it can be site-specific. In the end, the benefits to locals are presented with some financial analysis and comparisons.

Keywords: water management, green field development, low impact development, town planning scheme

Procedia PDF Downloads 104
347 Assessment of Heavy Metal Contamination for the Sustainable Management of Vulnerable Mangrove Ecosystem, the Sundarbans

Authors: S. Begum, T. Biswas, M. A. Islam

Abstract:

The present research investigates the distribution and contamination of heavy metals in core sediments collected from three locations in the Sundarbans mangrove forest. The quality of the analysis was evaluated by analyzing the certified reference materials IAEA-SL-1 (lake sediment), IAEA-Soil-7, and NIST-1633b (coal fly ash). Total concentrations of 28 heavy metals (Na, Al, K, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Zn, Ga, As, Sb, Cs, La, Ce, Sm, Eu, Tb, Dy, Ho, Yb, Hf, Ta, Th, and U) were determined in core sediments of the Sundarbans mangrove by the neutron activation analysis (NAA) technique. When compared with upper continental crust (UCC) values, the mean concentrations of K, Ti, Zn, Cs, La, Ce, Sm, Hf, and Th show elevated values in the research area. The assessment of metal contamination levels using different environmental contamination indices (EF, Igeo, CF) indicates that Ti, Sb, Cs, the REEs, and Th show minor enrichment of the Sundarbans sediments. The modified degree of contamination (mCd) of the studied samples of the Sundarbans ecosystem indicates low contamination. The pollution load index (PLI) values for the cores suggest that the sampling points are moderately polluted. The possible sources of the deterioration of sediment quality can be attributed to accidents involving chemical-carrying cargo, port activities, ship breaking, and agricultural and aquaculture run-off in the area. A Pearson correlation matrix (PCM) established relationships among the elements. The PCM indicates that most of the metal distributions are controlled by the same factors, such as Fe-oxy-hydroxides and clay minerals, and that they have a similar origin. The poor correlations of Ca with most of the elements in the sediment cores indicate that calcium carbonate plays a less significant role in these mangrove sediments. Finally, the data from this research will be used as a benchmark for future research and will help to quantify levels of metal pollution, as well as to manage future ecological risks of the vulnerable mangrove ecosystem, the Sundarbans.
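The contamination indices named above (CF, Igeo, PLI, and a modified degree of contamination) reduce to a few lines of arithmetic; the sketch below uses illustrative measured and background concentrations, not the Sundarbans data.

```python
# Minimal sketch of sediment contamination indices (CF, Igeo, PLI, mCd); values illustrative.
import numpy as np

metals     = ["Cr", "Zn", "As", "Th"]
measured   = np.array([95.0, 120.0, 8.0, 14.0])    # mg/kg, hypothetical core averages
background = np.array([92.0,  67.0, 4.8, 10.5])    # mg/kg, illustrative UCC-style backgrounds

cf = measured / background                          # contamination factor
igeo = np.log2(measured / (1.5 * background))       # geoaccumulation index
pli = np.prod(cf) ** (1.0 / len(cf))                # pollution load index
mcd = cf.sum() / len(cf)                            # modified degree of contamination

for m, c, i in zip(metals, cf, igeo):
    print(f"{m}: CF={c:.2f}, Igeo={i:.2f}")
print(f"PLI={pli:.2f}, mCd={mcd:.2f}")
```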

Keywords: contamination, core sediment, trace element, sundarbans, vulnerable

Procedia PDF Downloads 104
346 Rating Agreement: Machine Learning for Environmental, Social, and Governance Disclosure

Authors: Nico Rosamilia

Abstract:

The study evaluates the importance of non-financial disclosure practices for regulators, investors, businesses, and markets. It aims to create a sector-specific set of indicators for environmental, social, and governance (ESG) performance as an alternative to the ratings of the agencies. The existing literature extensively studies the implementation of ESG rating systems. In contrast, this study has a twofold outcome. Firstly, it should generalize incentive systems and governance policies for ESG and sustainability principles, and thereby contribute to the EU Sustainable Finance Disclosure Regulation. Secondly, it concerns the market and investors by highlighting successful sustainable investing. Indeed, the study considers the effect of ESG adoption practices on corporate value. The research explores the asset pricing angle in order to shed light on the fragmented debate on the finance of ESG: investors may be misguided about the positive or negative effects of ESG on performance. The paper proposes a different method to evaluate ESG performance. By comparing the results of a traditional econometric approach (Lasso) with a machine learning algorithm (Random Forest), the study establishes a set of indicators for ESG performance. Therefore, the research also contributes empirically to the theoretical strands of literature regarding model selection and variable importance in a finance framework. The algorithms produce sector-specific indicators. This set of indicators defines an alternative to the compounded scores of ESG rating agencies and avoids the possible offsetting effect of such scores. With this approach, the paper defines a sector-specific set of indicators to standardize ESG disclosure. Additionally, it tries to shed light on the absence of a clear understanding of the direction of the ESG effect on corporate value (the problem of endogeneity).
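A minimal sketch of the Lasso versus random-forest variable-importance comparison, on simulated stand-ins for sector-level ESG indicators and firm value, is shown below.

```python
# Minimal sketch: Lasso coefficients vs. random-forest importances; data are simulated.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, p = 800, 10
X = rng.normal(size=(n, p))                            # hypothetical ESG indicators
firm_value = 2.0 * X[:, 0] - 1.5 * X[:, 3] + X[:, 7] ** 2 + rng.normal(scale=0.5, size=n)

lasso = LassoCV(cv=5).fit(X, firm_value)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, firm_value)

for j in range(p):
    print(f"indicator_{j}: lasso_coef={lasso.coef_[j]: .3f}, "
          f"rf_importance={rf.feature_importances_[j]:.3f}")
```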

Keywords: ESG ratings, non-financial information, value of firms, sustainable finance

Procedia PDF Downloads 59
345 Integration of Agroforestry Shrub for Diversification and Improved Smallholder Production: A Case of Cajanus cajan-Zea Mays (Pigeonpea-Maize) Production in Ghana

Authors: F. O. Danquah, F. Frimpong, E. Owusu Danquah, T. Frimpong, J. Adu, S. K. Amposah, P. Amankwaa-Yeboah, N. E. Amengor

Abstract:

In the face of global concerns such as population increase, climate change, and limited natural resources, sustainable agriculture practices are critical for ensuring food security and environmental stewardship. The study was conducted in the forest zones of Ghana during the major and minor cropping seasons of 2023 to evaluate maize yield improvement and the profitability of integrating Cajanus cajan (pigeonpea) into a maize production system, described as a pigeonpea-maize cropping system. This works towards integrated soil fertility management (ISFM) with the legume shrub pigeonpea for sustainable maize production while improving smallholder farmers' resilience to climate change. A split-plot design was used, with the maize-pigeonpea system (pigeonpea-maize intercrop – MPP and no pigeonpea/sole maize – NPP) as the main plot treatment and the inorganic fertilizer rate (250 kg/ha of 15-15-15 N-P2O5-K2O + 250 kg/ha sulphate of ammonia (SoA) – full rate (FR); 125 kg/ha of 15-15-15 N-P2O5-K2O + 125 kg/ha sulphate of ammonia (SoA) – half rate (HR); and no inorganic fertilizer (NF) as control) as the subplot treatment. The results indicated a significant interaction of the pigeonpea-maize cropping system and the inorganic fertilizer rate on the growth and yield of maize, with better and similar maize productivity when HR and FR were used together with pigeonpea biomass. Thus, the integration of pigeonpea and its biomass would allow the recommended fertilizer rate to be halved. This would improve farmers’ income and profitability for sustainable maize production in the face of climate change.

Keywords: agroforestry tree, climate change, integrated soil fertility management, resource use efficiency

Procedia PDF Downloads 32
344 Application of Gamma Frailty Model in Survival of Liver Cirrhosis Patients

Authors: Elnaz Saeedi, Jamileh Abolaghasemi, Mohsen Nasiri Tousi, Saeedeh Khosravi

Abstract:

Goals and Objectives: A typical analysis of survival data involves the modeling of time-to-event data, such as the time until death. A frailty model is a random-effect model for time-to-event data, where the random effect has a multiplicative influence on the baseline hazard function. This article aims to investigate the use of a gamma frailty model with concomitant variables in order to identify the prognostic factors that influence the survival times of liver cirrhosis patients. Methods: During the one-year study period (May 2008-May 2009), data were taken from the recorded information of patients with liver cirrhosis who were scheduled for liver transplantation and were followed up for at least seven years at Imam Khomeini Hospital in Iran. In order to determine the effective factors for cirrhotic patients’ survival in the presence of latent variables, the gamma frailty distribution was applied. Parametric models, namely the Exponential and Weibull distributions, were considered for survival time. Data analysis was performed using R software, and an error level of 0.05 was considered for all tests. Results: 305 patients with liver cirrhosis, including 180 (59%) men and 125 (41%) women, were studied. The average age of the patients was 39.8 years. At the end of the study, 82 (26%) patients had died, among them 48 (58%) men and 34 (42%) women. The main cause of liver cirrhosis was hepatitis B, at 23%, followed by cryptogenic cirrhosis, identified as the second factor at 22.6%. Overall, the 7-year survival was 28.44 months; for deceased patients and for censored patients it was 19.33 and 31.79 months, respectively. Exponential and Weibull multi-parametric survival models with the gamma frailty distribution were fitted to the cirrhosis data. In both models, factors including age, serum bilirubin, serum albumin, and encephalopathy had a significant effect on the survival time of cirrhotic patients. Conclusion: To investigate the factors affecting the time of death of patients with liver cirrhosis in the presence of latent variables, a gamma frailty model with parametric distributions seems desirable.
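For reference, under a gamma frailty with variance theta and a Weibull baseline, the marginal (population-averaged) survival has a closed form; the sketch below evaluates it with illustrative parameter values rather than the fitted estimates.

```python
# Minimal sketch of the marginal survival under a Weibull baseline with gamma frailty.
import numpy as np

def marginal_survival(t, lam, k, xb, theta):
    """S(t) = (1 + theta * H0(t) * exp(x'beta))^(-1/theta), with H0(t) = (lam*t)^k."""
    h0 = (lam * t) ** k                      # Weibull cumulative baseline hazard
    return (1.0 + theta * h0 * np.exp(xb)) ** (-1.0 / theta)

t = np.array([12.0, 36.0, 84.0])             # months
print(marginal_survival(t, lam=0.01, k=1.2, xb=0.5, theta=0.8).round(3))
```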

Keywords: frailty model, latent variables, liver cirrhosis, parametric distribution

Procedia PDF Downloads 242
343 A Machine Learning-Based Model to Screen Antituberculosis Compound Targeted against LprG Lipoprotein of Mycobacterium tuberculosis

Authors: Syed Asif Hassan, Syed Atif Hassan

Abstract:

Multidrug-resistant tuberculosis (MDR-TB) is an infection caused by resistant strains of Mycobacterium tuberculosis that do not respond to either isoniazid or rifampicin, which are the most important anti-TB drugs. The increase in the occurrence of drug-resistant strains of MTB calls for an intensive search for novel target-based therapeutics. In this context, LprG (Rv1411c), a lipoprotein from MTB, plays a pivotal role in the immune evasion of Mtb, leading to survival and propagation of the bacterium within the host cell. Therefore, a machine learning method will be developed for generating a computational model that can predict the potential anti-LprG activity of novel antituberculosis compounds. The present study will utilize a dataset from the PubChem database maintained by the National Center for Biotechnology Information (NCBI). The dataset comprises compounds screened against MTB, categorized as active or inactive based upon the PubChem activity score. PowerMV, a molecular descriptor generator and visualization tool, will be used to generate the 2D molecular descriptors for the active and inactive compounds present in the dataset. The 2D molecular descriptors generated from PowerMV will be used as features. We feed these features into three different classifiers, namely a random forest, a deep neural network, and a recurrent neural network, to build separate predictive models, choosing the best-performing model based on its accuracy in predicting novel antituberculosis compounds with anti-LprG activity. Additionally, the predicted active compounds will be screened using a SMARTS filter to choose molecules with drug-like features.
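A minimal sketch of the random-forest branch of this workflow, with randomly generated stand-ins for the PowerMV descriptors and PubChem activity labels, is shown below.

```python
# Minimal sketch of a random-forest activity classifier on 2D descriptors; data are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 50))                        # 2D descriptors per compound (stand-in)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)  # 1 = active

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]
print("accuracy:", round(accuracy_score(y_te, pred), 3),
      "ROC-AUC:", round(roc_auc_score(y_te, proba), 3))
```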

Keywords: antituberculosis drug, classifier, machine learning, molecular descriptors, prediction

Procedia PDF Downloads 369
342 Ecological and Health Risk Assessment of the Heavy Metal Contaminant in Surface Soils around Effurun Market

Authors: A. O. Ogunkeyede, D. Amuchi, A. A. Adebayo

Abstract:

Heavy metal contamination in soil has received great attention. Anthropogenic activities such as vehicular emissions, industrial activities and construction have resulted in elevated concentrations of heavy metals in surface soils. The metal particles can be freed from the surface soil when it is disturbed and re-entrained in the air, which necessitates investigating surface soil in market environments where adults and children are present on a daily basis. This study assesses the concentrations of heavy metal pollution and the ecological and health risk factors in surface soil at Effurun market. Eight samples were collected at the household materials (EMH), fish (EMFs), fish and commodities (EMF-C), abattoir (EMA 1 & 2) and fruit (EMF 1 & 2) sections, and on the main road (EMMR). The samples were digested and analyzed in triplicate for contents of lead (Pb), nickel (Ni), cadmium (Cd) and copper (Cu). The mean concentrations of Pb (112.27 ± 1.12 mg/kg) and Cu (156.14 ± 1.10 mg/kg) were highest in the abattoir section (EMA 1). The mean concentrations of the heavy metals were then used to calculate the ecological and health risks for people within the market. Pb contamination at the EMMR, EMF 2 and EMFs sections was moderate, while Pb showed considerable contamination at the EMH, EMA 1, EMA 2 and EMF-C sections of Effurun market. The ecological risk factor varies between low and moderate pollution for Pb, and EMA 1 has the highest potential ecological risk, falling within moderate pollution. The hazard quotient results show that the dermal pathway is the main route of heavy metal exposure for the traders, while ingestion is the least significant source of exposure for adults. The ingestion results suggested that children around EMA 1 have the highest possible exposure due to hand-to-mouth and object-to-mouth behaviour. The results further show that adults at EMA 1 have the highest exposure to Pb due to inhalation during the burning of cows with tyres containing Pb and Cu. The carcinogenic risk values of most sections were higher than the acceptable values, except for Ni at the EMMR, EMF 1 & 2, EMFs and EMF-C sections, which was below the acceptable values. The cancer risk for the inhalation exposure pathway for Pb (1.01E+17) shows a significantly higher level of contamination than all the other sections of the market. This suggests that people working at the abattoir are highly prone to cancer risk.
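The hazard quotient and cancer risk figures referred to above follow the standard USEPA-style exposure arithmetic; the sketch below applies it to the abstract's mean Pb concentration with illustrative intake parameters and commonly cited toxicity values, which are assumptions rather than the study's inputs.

```python
# Minimal sketch of ingestion exposure, hazard quotient (HQ) and cancer risk (CR) for Pb.
def average_daily_dose(c_soil, ir, ef, ed, bw, at, cf=1e-6):
    """ADD (mg/kg-day) = C * IR * CF * EF * ED / (BW * AT)."""
    return c_soil * ir * cf * ef * ed / (bw * at)

c_pb = 112.27           # mg/kg, mean Pb in the abattoir-section soil (from the abstract)
add_child = average_daily_dose(c_pb, ir=200, ef=350, ed=6,  bw=15, at=6 * 365)   # child, ingestion
add_adult = average_daily_dose(c_pb, ir=100, ef=350, ed=24, bw=70, at=24 * 365)  # adult, ingestion

rfd_pb = 3.5e-3         # mg/kg-day, commonly cited oral reference dose for Pb (assumption)
sf_pb = 8.5e-3          # (mg/kg-day)^-1, commonly cited oral slope factor for Pb (assumption)

for label, add in (("child", add_child), ("adult", add_adult)):
    print(f"{label}: ADD={add:.2e}, HQ={add / rfd_pb:.2e}, CR={add * sf_pb:.2e}")
```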

Keywords: carcinogenic, ecological, heavy metal, risk

Procedia PDF Downloads 123
341 Total Life Cycle Cost and Life Cycle Assessment of Mass Timber Buildings in the US

Authors: Hongmei Gu, Shaobo Liang, Richard Bergman

Abstract:

With the current worldwide trend toward net-zero emission building designs to mitigate climate change, widespread use of mass timber products, such as Cross-Laminated Timber (CLT), Nail-Laminated Timber (NLT), or Dowel-Laminated Timber (DLT), in buildings has been proposed as one approach to reducing Greenhouse Gas (GHG) emissions. Consequently, mass timber building designs are increasingly being adopted by architects in North America, especially for mid- to high-rise buildings where concrete and steel construction is currently prevalent but traditional light-frame wood construction is not. Wood buildings and their associated wood products have tended to have lower environmental impacts than competing energy-intensive materials. It is common practice to conduct life cycle assessments (LCAs) and life cycle cost analyses for buildings with traditional structural materials like concrete and steel during the design process. Mass timber buildings with lower environmental impacts, especially GHG emissions, can contribute to the net-zero emission goal for the world building sector. However, the economic impacts of CLT mass timber buildings still vary from a life-cycle cost perspective, as do the environmental trade-offs associated with GHG emissions. This paper quantified the total life cycle cost and cradle-to-grave GHG emissions of a pre-designed CLT mass timber building and compared them to those of a functionally equivalent concrete building. The total life cycle eco-cost-efficiency is defined in this study and calculated to discuss the trade-offs for net-zero emission buildings in a holistic view of both environmental and economic impacts. Mass timber used in buildings in the United States is targeted to come from the nation's sustainably managed forests in order to benefit both national and global environments and economies.
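A simple illustrative comparison of this kind of trade-off could be sketched as below; the eco-cost-efficiency formula shown (cradle-to-grave GHG emissions per dollar of total life cycle cost) is an assumed placeholder definition, and all numbers are hypothetical rather than the study's results:

# Illustrative trade-off sketch; the eco-cost-efficiency formula and all
# numbers are placeholder assumptions, not the study's data or definition.
def eco_cost_efficiency(ghg_kgco2e, total_lcc_usd):
    """Cradle-to-grave GHG emissions per dollar of total life cycle cost."""
    return ghg_kgco2e / total_lcc_usd

buildings = {
    "CLT mass timber": {"ghg": 2.1e6, "lcc": 3.4e7},   # placeholder values
    "Concrete":        {"ghg": 3.0e6, "lcc": 3.2e7},   # placeholder values
}
for name, b in buildings.items():
    print(f"{name}: {eco_cost_efficiency(b['ghg'], b['lcc']):.4f} kg CO2e per USD")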

Keywords: GHG, economic impact, eco-cost-efficiency, total life-cycle costs

Procedia PDF Downloads 114
340 Outwrestling Cataclysmic Tsunamis at Hilo, Hawaii: Using Technical Developments of the Past 50 Years to Improve Performance

Authors: Mark White

Abstract:

The best practices for owners and urban planners to manage tsunami risk have evolved during the last fifty years, and related technical advances have created opportunities for them to obtain better performance than in earlier cataclysmic tsunami inundations. This basic pattern is illustrated at Hilo Bay, the waterfront area of Hilo, Hawaii, an urban seaport that faces the most severe tsunami hazard of the Hawaiian archipelago. Since April 1, 1946, Hilo Bay has endured tsunami waves with maximum water heights exceeding 2.5 meters following four severe earthquakes: Unimak Island (Mw 8.6, 6.1 m) in 1946; Valdivia (Mw 9.5, the largest earthquake of the 20th century, 10.6 m) in 1960; Prince William Sound (Mw 9.2, 3.8 m) in 1964; and Kalapana (Mw 7.7, the largest earthquake in Hawaii since 1868, 2.6 m) in 1975. Excluding numerous smaller tsunamis during the same time frame, these four cataclysmic tsunamis caused property losses in Hilo exceeding $1.25 billion and more than 150 deaths. It is reasonable to foresee another cataclysmic tsunami inundating the urban core of Hilo in the next 50 years, which, if unchecked, could cause additional deaths and losses in the hundreds of millions of dollars. Urban planners and individual owners are now in a position to reduce these losses in the next foreseeable tsunami that generates maximum water heights between 2.5 and 10 meters in Hilo Bay. Since 1946, Hilo planners and individual owners have already created buffer zones between the shoreline and the historic downtown area. As these stakeholders make inevitable improvements to the built environment along and adjacent to the shoreline, they should incorporate new methods for better managing the obvious tsunami risk at Hilo. At the planning level, new man-made landforms, such as tsunami parks and inundation reservoirs, should be developed. Individual owners should require their design professionals to include sacrificial seismic and tsunami fuses that will perform well in foreseeable severe events and that can be easily repaired in the immediate aftermath. These investments before the next cataclysmic tsunami at Hilo will yield substantial reductions in property losses and fatalities.

Keywords: hilo, tsunami parks, reservoirs, fuse systems, risk management

Procedia PDF Downloads 145
339 Construction of Microbial Fuel Cells from Local Benthic Zones

Authors: Maria Luiza D. Ramiento, Maria Lissette D. Lucas

Abstract:

Electricity serves as the backbone of modern technology, and electricity consumption has grown dynamically due to continuous demand. Alternative sources of electrical energy must therefore be given focus. The microbial fuel cell represents a new method of renewable energy recovery: the direct conversion of organic matter to electricity using bacteria. Electricity is produced as long as fuel, or new food, is supplied to the bacteria. The study concentrated on determining the feasibility of electricity production from local benthic zones. Microbial fuel cells were constructed to harvest the available electricity and to test for the presence of electricity-producing microorganisms. Soil samples were gathered from Calumpang River, Palawan Mangrove Forest, Rosario River, and Batangas Port. Eleven modules were constructed for the different trials of the soil samples. These modules were made of cathode and anode chambers connected by a salt bridge. The harvested voltage was measured daily for 85 days. No amendment was added for the first 24 days; for the next 61 days, acetic acid was added to the first and second trials of the modules. Each trial of the soil samples gave a positive result in electricity production, indicating that electricity-producing microbes are present in local benthic zones. It was observed that the higher the organic content of the soil sample, the higher the electricity harvested from it. It is recommended to identify the specific species of electricity-producing microorganisms present in the local benthic zones. Complementary experiments are encouraged, such as determining the kind of soil particles in order to test their effect on the amount of electricity that can be harvested. Further development of the microbial fuel cells by operating them in a closed circuit is also suggested.
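As a hypothetical post-processing sketch, daily voltage readings could be converted to power across an assumed external load; the resistance value and readings below are placeholders, since the study reports only voltage:

# Hypothetical sketch: convert daily voltage readings (V) measured across an
# assumed external load resistance to power via P = V^2 / R. The resistance
# and the readings are illustrative placeholders, not the study's data.
R_EXT_OHMS = 1000.0                                  # assumed external load

daily_voltage_v = [0.12, 0.18, 0.25, 0.31, 0.29]     # placeholder daily readings
daily_power_w = [v ** 2 / R_EXT_OHMS for v in daily_voltage_v]

for day, (v, p) in enumerate(zip(daily_voltage_v, daily_power_w), start=1):
    print(f"Day {day}: {v:.2f} V -> {p * 1e6:.1f} microwatts")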

Keywords: microbial fuel cell, benthic zone, electricity, reduction-oxidation reaction, bacteria

Procedia PDF Downloads 375
338 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

In biomedical research and randomized clinical trials, the outcomes of greatest interest are often time-to-event, or so-called survival, data. Robust models are important in this context for comparing the effects of randomized experimental groups in a causal sense. Causal estimation is the scientific concept of comparing the pragmatic effects of treatments conditional on the given covariates, rather than merely assessing the association between the response and the predictors. Hence, a causal-effect-based semiparametric transformation model was proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Owing to its high flexibility and robustness, the semiparametric transformation model applied in this paper has received much attention for estimating causal effects when modeling left-truncated and right-censored survival data. Despite its wide application and popularity, maximum likelihood estimation is complex and burdensome for estimating the unknown parameters and the unspecified transformation function in the presence of possibly time-varying covariates. Thus, to ease this complexity, we proposed modified estimating equations. Following the intuitive estimation procedures, the consistency and asymptotic properties of the estimators were derived, and the finite-sample performance of the proposed model was illustrated via simulation studies and the Stanford heart transplant data. To summarize, the covariate bias was adjusted by estimating the density function of the truncation variable, which was also incorporated into the model as a covariate in order to relax the assumption of independence between the failure time and the truncation time. Moreover, an expectation-maximization (EM) algorithm was described for the iterative estimation of the unknown parameters and the unspecified transformation function. In addition, the causal effect was derived as the ratio of the cumulative hazard functions of the active and passive arms, after adjusting for the bias introduced into the model by the truncation variable.
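For reference, a standard textbook form of the semiparametric transformation model with possibly time-varying covariates, together with the cumulative-hazard-ratio contrast described above, can be written as follows (this is the conventional formulation; the paper's exact specification, including the truncation-density adjustment, may differ):

H(T) = -\beta^{\top} Z(t) + \varepsilon, \qquad \theta(t) = \frac{\Lambda_{1}(t)}{\Lambda_{0}(t)},

where H(\cdot) is an unspecified, strictly increasing transformation, Z(t) denotes the possibly time-varying covariates, \beta is the vector of regression parameters, and \varepsilon is an error term with a known distribution (an extreme-value \varepsilon recovers the proportional hazards model, a standard logistic \varepsilon the proportional odds model); \Lambda_{1} and \Lambda_{0} are the cumulative hazard functions of the active and passive arms.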

Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate

Procedia PDF Downloads 103
337 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs

Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa

Abstract:

Hatchery performance is critical for the profitability of poultry breeder operations. Some extrinsic parameters of eggs and breeders increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chicken eggs and to determine the most efficient classification model for a hatchability rate greater than 90%. In this study, seven extrinsic parameters were considered: egg weight, moisture loss, breeder age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the variables most influencing hatchability. First, the correlation between each parameter and hatchability was checked. Then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbors (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were applied to classify hatchability. This grouping process was conducted using binary classification techniques. Hatchability was negatively correlated with egg weight, breeder age, shell width, and shell length, while positive correlations were identified with moisture loss, number of fertilised eggs, and shell thickness. The multiple linear regression model was more accurate than single linear models, with the highest coefficient of determination (R² = 94%) and the minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracies of 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying breeder outcomes as economically profitable or not in a commercial hatchery.
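A minimal sketch of the classifier comparison described above is given below, assuming the seven extrinsic parameters and a binary hatchability label (>90%) have been exported to a CSV file; the file name, column names, and five-fold cross-validation setup are illustrative assumptions, not the study's actual workflow:

# Illustrative classifier-comparison sketch; the CSV file, column names, and
# cross-validation scheme are assumptions; the study's data preparation may differ.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

data = pd.read_csv("hatchability.csv")                     # hypothetical export
features = ["egg_weight", "moisture_loss", "breeder_age", "fertilised_eggs",
            "shell_width", "shell_length", "shell_thickness"]
X = data[features]
y = (data["hatchability_pct"] > 90).astype(int)            # binary label: >90% hatch rate

models = {
    "RF": RandomForestClassifier(random_state=1),
    "CART": DecisionTreeClassifier(random_state=1),
    "kNN": KNeighborsClassifier(),
    "SVM (linear)": SVC(kernel="linear"),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")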

Keywords: classification models, egg weight, fertilised eggs, multiple linear regression

Procedia PDF Downloads 70