Search results for: 3D joint locations
596 Technologies of Factory Farming: An Exploration of Ongoing Confrontations with Farm Animal Sanctuaries
Authors: Chetna Khandelwal
Abstract:
This research aims to study the contentions that Farm Animal Sanctuaries pose to human-animal relationships in modernity, which have developed as a result of the globalisation of the meat industry and advancements in technology. The sociological history of human-animal relationships in farming is contextualised in order to set a foundation for the follow-up examination of challenges to existing human-(farm)animal relationships by Farm Animal Sanctuaries. The methodology was influenced by relativism, and the method involved three semi-structured small-group interviews, conducted at the locations of sanctuaries. The sample was chosen through purposive sampling and varied by location and size of the sanctuary. Data collected were transcribed and qualitatively coded to generate themes. Findings revealed that sanctuary contentions to the human-animal relationships established by factory farming could be divided into four broad categories: revealing the horrors of factory farming (involving uncovering power relations in agribusiness); transforming relationships with animals (including letting them emotionally heal in accordance with their individual personalities and treating them as partial pets); educating the public regarding welfare conditions in factory farms as well as animal sentience through practical experience or positive imagery of farm animals; and addressing retaliation by agribusiness in the form of technologies or discursive strategies. Hence, this research concludes that the human-animal relationship in current times has been characterised by (ideological and physical) distance from farm animals, commodification due to the increased pursuit of profit over welfare, and exploitation using technological advancements, creating unequal power dynamics that rid animals of any agency. Challenges to this relationship can be influenced by the local population around a sanctuary but are not strongly dependent upon its size. This research would benefit from further academic exploration into farm animal sanctuaries and their role in feminist animal rights activism to enrich the ongoing fight against intensive farming.
Keywords: animal rights, factory farming, farm animal sanctuaries, human-animal relationships
Procedia PDF Downloads 137
595 Stakeholder Perceptions of Wildlife Tourism in Communal Conservancies within the Mudumu North Complex, Zambezi Region, Namibia
Authors: Shimhanda M. N., Mogomotsi P. K., Thakadu O. T., Rutina L. P.
Abstract:
Wildlife tourism (WT) in communal conservancies has the potential to contribute significantly to sustainable rural development. However, understanding local perceptions, promoting participation, and addressing stakeholder concerns are all required for sustainability. This study looks at stakeholder perceptions of WT in conservancies near protected areas in Namibia's Zambezi region, specifically the Mudumu North Complex. A mixed-methods approach was employed to collect data from 356 households using stratified sampling. Qualitative data were gathered through six focus group discussions and 22 key informant interviews. Quantitative analysis, using descriptive statistics and Spearman correlation, investigated socio-demographic influences on WT perceptions, while qualitative data were subjected to thematic analysis to identify key themes. Results revealed high awareness and generally positive perceptions of WT, particularly in Mashi Conservancy, which benefits from diverse tourism activities and joint ventures with lodges. Kwandu and Kyaramacan, which rely heavily on consumptive tourism, had lower awareness and perceived benefits. Human-wildlife conflict (HWC) emerged as a persistent issue, especially in Kwandu and Mashi, where crop damage and wildlife interference undermined community support for WT. Younger, more educated, and employed individuals held more positive attitudes towards WT. The study highlights the importance of recognising community heterogeneity and tailoring WT strategies to meet diverse needs, including HWC mitigation. Policy implications include increasing community engagement, ensuring equitable benefit distribution, and implementing inclusive tourism strategies that promote long-term sustainability. These findings are critical for developing long-term WT models that address local challenges, encourage community participation, and contribute to socioeconomic development and conservation goals.
Keywords: sustainable tourism, stakeholder perceptions, community involvement, socio-economic development
Procedia PDF Downloads 16
594 Application of Hydrologic Engineering Centers and River Analysis System Model for Hydrodynamic Analysis of Arial Khan River
Authors: Najeeb Hassan, Mahmudur Rahman
Abstract:
Arial Khan River is one of the main south-eastward outlets of the River Padma. This river maintains a meander channel through its course and is erosional in nature. The specific objective of the research is to study and evaluate the hydrological characteristics in the form of assessing changes in cross-sections, discharge, water level and velocity profile at different stations, and to create a hydrodynamic model of the Arial Khan River. Necessary data have been collected from the Bangladesh Water Development Board (BWDB) and the Center for Environment and Geographic Information Services (CEGIS). Satellite images have been obtained from Google Earth. In this study, a hydrodynamic model of the Arial Khan River has been developed using the well-known steady open channel flow code Hydrologic Engineering Center's River Analysis System (HEC-RAS) with field-surveyed geometric data. Cross-section properties at 22 locations of the River Arial Khan for the years 2011, 2013 and 2015 were also analysed. A 1-D HEC-RAS model has been developed using the cross-sectional data of 2015, and appropriate boundary conditions were used to run the model. This Arial Khan River model was calibrated using the peak discharge of 2015. The applicable value of Manning's roughness coefficient (n) was adjusted through the process of calibration. The value of water level which ties with the observed data to an acceptable accuracy is taken as the calibrated model. The 1-D HEC-RAS model was then validated by using the peak discharges from 2009-2018. The variation between the modelled water level and the collected water level data was compared to validate the model. It is observed that, due to seasonal variation, the discharge of the river changes rapidly, and Manning's roughness coefficient (n) also changes due to vegetation growth along the river banks. This river model may act as a tool to measure flood area in future. By considering the past peak flow discharges, it is strongly recommended to improve the carrying capacity of the Arial Khan River to protect the surrounding areas from flash floods.
Keywords: BWDB, CEGIS, HEC-RAS
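For readers unfamiliar with the calibration step described above, the following minimal Python sketch illustrates the underlying idea: Manning's equation relates discharge to roughness, so n can be adjusted until the computed flow matches an observed peak value. The rectangular section, channel dimensions, slope and observed flow below are hypothetical placeholders, not Arial Khan survey data, and the actual calibration in the study was performed inside HEC-RAS rather than with hand-written code.

```python
# Minimal sketch of Manning's-n calibration for a rectangular channel section.
# All numbers are illustrative placeholders, not Arial Khan survey data.

def manning_discharge(n, width, depth, slope):
    """Discharge (m^3/s) from Manning's equation for a rectangular section."""
    area = width * depth
    wetted_perimeter = width + 2.0 * depth
    hydraulic_radius = area / wetted_perimeter
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

def calibrate_n(q_observed, depth_observed, width, slope,
                n_low=0.015, n_high=0.10, tol=1e-4):
    """Bisection on n so the computed discharge matches the observed peak flow."""
    n_mid = 0.5 * (n_low + n_high)
    for _ in range(100):
        n_mid = 0.5 * (n_low + n_high)
        q_mid = manning_discharge(n_mid, width, depth_observed, slope)
        if abs(q_mid - q_observed) < tol:
            break
        if q_mid > q_observed:   # too much flow -> channel too smooth -> raise n
            n_low = n_mid
        else:
            n_high = n_mid
    return n_mid

if __name__ == "__main__":
    # Hypothetical peak-flow observation: 2500 m^3/s at 8.2 m depth.
    n_cal = calibrate_n(q_observed=2500.0, depth_observed=8.2,
                        width=250.0, slope=5e-5)
    print(f"Calibrated Manning's n = {n_cal:.4f}")
```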
Procedia PDF Downloads 183
593 Increasing Sulfur Handling Cost Efficiency Using the Eco Sulfur Paving Block Method at PT Pertamina EP Field Cepu
Authors: Adha Bayu Wijaya, A. Zainal Abidin, Naufal Baihaqi, Joko Suprayitno, Astika Titistiti, Muslim Adi Wijaya, Endah Tri Lestari, Agung Wibowo
Abstract:
Sulfur is a non-metallic chemical element in the form of a yellow crystalline solid with the chemical formula S₈, and is formed from several types of natural and artificial chemical reactions. Commercial applications of processed sulfur products can be found in various aspects of life, for example the use of processed sulfur in paving blocks. The Gundih Central Processing Plant (CPP) is capable of producing 14 tons/day of sulfur pellets. This amount comes from the high H2S content of the wells, with a total concentration of 20,000 ppm and a volume accumulation of 14 MMSCFD acid gas. H2S is converted to sulfur using the Thiobacillus microbe in the Biological Sulfur Recovery Unit (BSRU), with a sulfur product purity level greater than 95%. In 2018, sulfur production at Gundih CPP was recorded at 4044 tons, which could potentially trigger serious problems from an environmental aspect. The use of sulfur as a material for making paving blocks is an alternative solution for addressing the potential impact on the environment, as regulated by Government Regulation No. 22 of Year 2021 concerning the Waste Management of Non-Hazardous and Toxic Substances (B3), and the high cost of handling sulfur by third parties. The design mix ratio of the sulfur paving blocks is 22% cement, 67% rock ash, and 11% sulfur pellets. The sulfur used in making the paving mixture is pure sulfur, namely the side-product category without any contaminants, thereby eliminating the potential for environmental pollution when implementing sulfur paving. Strength tests of the sulfur paving materials have also been confirmed by external laboratories. The standard used in making the sulfur paving blocks refers to the SNI 03-0691-1996 standard, and the resulting sulfur paving blocks meet quality class B. Currently, sulfur paving blocks are used in building access to well locations and in public roads in the Cepu Field area as a contribution from Corporate Social Responsibility (CSR).
Keywords: sulphur, innovation, paving block, CSR, sulphur paving
Procedia PDF Downloads 75
592 Effectiveness of the Community Health Assist Scheme in Reducing Market Failure in Singapore’s Healthcare Sector
Authors: Matthew Scott Lau
Abstract:
This study addresses the research question: How effective has the Community Health Assist Scheme (CHAS) been in reducing market failure in Singapore’s healthcare sector? The CHAS policy, introduced in 2012 in Singapore, aims to improve accessibility and affordability of healthcare by offering subsidies to low and middle-income groups and elderly individuals for general practice consultations and healthcare. The investigation was undertaken by acquiring and analysing primary and secondary research data from 3 main sources, including handwritten survey responses of 334 individuals who were valid CHAS subsidy recipients (CHAS cardholders) from 5 different locations in Singapore, interview responses from two established general practitioner doctors with working knowledge of the scheme, and information from literature available online. Survey responses were analysed to determine how CHAS has affected the affordability and consumption of healthcare, and other benefits or drawbacks for CHAS users. The interview responses were used to explain the benefits of healthcare consumption and provide different perspectives on the impacts of CHAS on the various parties involved. Online sources provided useful information on changes in healthcare consumerism and Singapore’s government policies. The study revealed that CHAS has been largely effective in reducing market failure as the subsidies granted to consumers have improved the consumption of healthcare. This has allowed for the external benefits of healthcare consumption to be realized, thus reducing market failure. However, the study also revealed that CHAS cannot be fully effective in reducing market failure as the scope of CHAS prevents healthcare consumption from fully reaching the socially optimal level. Hence, the study concluded that CHAS has been effective to a large extent in reducing market failure in Singapore’s healthcare sector, albeit with some benefits to third parties yet to be realised. There are certain elements of the investigation which may limit the validity of the conclusion, such as the means used to determine the socially optimal level of healthcare consumption, and the survey sample size.
Keywords: healthcare consumption, health economics, market failure, subsidies
Procedia PDF Downloads 159
591 Efficacy of Botulinum Toxin in Alleviating Pain Syndrome in Stroke Patients with Upper Limb Spasticity
Authors: Akulov M. A., Zaharov V. O., Jurishhev P. E., Tomskij A. A.
Abstract:
Introduction: Spasticity is a severe consequence of stroke, leading to profound disability, decreased quality of life and reduced rehabilitation efficacy [4]. Spasticity is often associated with pain syndrome, arising from joint damage of paretic limbs (postural arthropathy) or painful spasm of paretic limb muscles. It is generally accepted that injection of botulinum toxin into a cramped muscle leads to a decrease of muscle tone and improves the range of motion in the paretic limb, which is accompanied by pain alleviation. Study aim: To evaluate the change in pain syndrome intensity after injections of botulinum toxin A (Xeomin) in stroke patients with upper limb spasticity. Patients and methods: 21 patients aged 47-74 years were evaluated. Inclusion criteria were: acute stroke 4-7 months before inclusion into the study, leading to spasticity of wrist and/or finger flexors, elbow flexor or forearm pronator, associated with severe pain syndrome. Patients received Xeomin as monotherapy, 90-300 U, according to the spasticity pattern. Efficacy evaluation was performed using the Ashworth scale, disability assessment scale (DAS), caregiver burden scale and global treatment benefit assessment on weeks 2, 4, 8 and 12. The efficacy criterion was the decrease of pain syndrome by week 4 on PQLS and VAS. Results: The study revealed a significant improvement of the measured indices after 4 weeks of treatment, which persisted until week 12 of treatment. Xeomin is effective in reducing the muscle tone of the wrist, finger and elbow flexors and forearm pronators. By the 4th week of treatment, we observed a significant improvement on the DAS (p < 0.05), the Ashworth scale (1-2 points) in all patients (p < 0.05), and the caregiver burden scale (p < 0.05). A significant decrease of pain syndrome by the 4th week of treatment on PQLS (p < 0.05) and VAS (p < 0.05) was observed. No adverse effects were registered. Conclusion: Xeomin is an effective treatment of pain syndrome in postural upper limb spasticity after stroke. Xeomin treatment leads to a significant improvement on PQLS and VAS.
Keywords: botulinum toxin, pain syndrome, spasticity, stroke
Procedia PDF Downloads 309
590 India’s Developmental Assistance in Africa: Analyzing India’s Aid and Developmental Projects
Authors: Daniel Gidey, Kunwar Siddarth Dadhwal
Abstract:
By evaluating India's aid systems and ongoing development initiatives, this conference paper sheds light on India's role as a source of developmental assistance in Africa. This research attempts to provide insights into the evolving landscape of foreign aid and development cooperation by focusing on understanding India's motivations and strategy. In recent years, India's connection with Africa has grown significantly, driven by economic, political, and strategic reasons. This conference paper covers India's many forms of aid, including financial assistance, capacity-building efforts, technical assistance, and infrastructure development projects, via a thorough investigation. The article seeks to establish India's priorities and highlight the possible impacts of its development assistance in Africa by examining the industries and locations of concentration. Using secondary data sources, the investigation delves into the underlying goals of India's aid policy in Africa. It investigates whether India's development assistance is consistent with its broader geopolitical aims, such as access to resources, competing with regional rivals, or strengthening diplomatic ties. Furthermore, the article investigates how India's aid policy combines the ideals of South-South cooperation and mutual development, as well as the ramifications for recipient countries. In addition, the paper assesses the efficacy and sustainability of India's aid operations in Africa. It takes into account the elements that influence their success, the problems they face, and the extent to which they contribute to local development goals, community empowerment, and poverty alleviation. The study also focuses on the accountability systems, transparency, and knowledge transfer aspects of India's development assistance. By providing a detailed examination of India's aid endeavors in Africa, the paper adds to the current literature on international development cooperation. By offering fresh insights into the motives, strategies, and impacts of India's assistance programs, it seeks to enhance understanding of the emerging patterns in South-South cooperation and the complex dynamics of contemporary international aid architecture.
Keywords: India, Africa, developmental assistance, aid projects and South-South cooperation
Procedia PDF Downloads 64
589 Sorghum Grains Grading for Food, Feed, and Fuel Using NIR Spectroscopy
Authors: Irsa Ejaz, Siyang He, Wei Li, Naiyue Hu, Chaochen Tang, Songbo Li, Meng Li, Boubacar Diallo, Guanghui Xie, Kang Yu
Abstract:
Background: Near-infrared spectroscopy (NIR) is a non-destructive, fast, and low-cost method to measure the grain quality of different cereals. Previously reported NIR model calibrations using whole-grain spectra had moderate accuracy. Improved predictions are achievable by using the spectra of whole grains when compared with the use of spectra collected from the flour samples. However, the feasibility of determining the critical biochemicals related to the classification into food, feed, and fuel products has not been adequately investigated. Objectives: To evaluate the feasibility of using NIRS and the influence of four sample types (whole grains, flours, hulled grain flours, and hull-less grain flours) on the prediction of chemical components to improve the grain sorting efficiency for human food, animal feed, and biofuel. Methods: NIR was applied in this study to determine the eight biochemicals in four types of sorghum samples: hulled grain flours, hull-less grain flours, whole grains, and grain flours. A total of 20 hybrids of sorghum grains were selected from two locations in China. Based on the NIR spectra and the wet-chemically measured biochemical data, partial least squares regression (PLSR) was used to construct the prediction models. Results: The results showed that sorghum grain morphology and sample format affected the prediction of biochemicals. Using NIR data of grain flours generally improved the prediction compared with the use of NIR data of whole grains. In addition, using the spectra of whole grains enabled comparable predictions, which is recommended when a non-destructive and rapid analysis is required. Compared with the hulled grain flours, hull-less grain flours allowed for improved predictions for tannin, cellulose, and hemicellulose using NIR data. Conclusion: The established PLSR models could enable food, feed, and fuel producers to efficiently evaluate a large number of samples by predicting the required biochemical components in sorghum grains without destruction.
Keywords: FT-NIR, sorghum grains, biochemical composition, food, feed, fuel, PLSR
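A minimal sketch of the PLSR calibration step described above, using scikit-learn. The spectra and reference values here are synthetic stand-ins for the NIR scans and wet-chemistry measurements, and the number of latent components is an arbitrary choice rather than the value used in the study.

```python
# Sketch of a PLSR calibration of the kind described above, using scikit-learn.
# Spectra and reference values are synthetic stand-ins for the NIR / wet-chemistry data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 700))      # 80 samples x 700 NIR wavelengths (synthetic)
y = X[:, 100] * 0.5 + X[:, 350] * 0.3 + rng.normal(scale=0.1, size=80)  # e.g. tannin content

pls = PLSRegression(n_components=10)                 # latent components: arbitrary here
y_cv = np.ravel(cross_val_predict(pls, X, y, cv=5))  # 5-fold cross-validated predictions

print("R2  (CV):", round(r2_score(y, y_cv), 3))
print("RMSE(CV):", round(mean_squared_error(y, y_cv) ** 0.5, 3))
```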
Procedia PDF Downloads 69
588 The Effect of Foot Progression Angle on Human Lower Extremity
Authors: Sungpil Ha, Ju Yong Kang, Sangbaek Park, Seung-Ju Lee, Soo-Won Chae
Abstract:
The growing number of obese patients in aging societies has led to an increase in the number of patients with knee medial osteoarthritis (OA). Artificial joint insertion is the most common treatment for knee medial OA. Surgery is effective for patients with serious arthritic symptoms, but it is costly and dangerous. It is also an inappropriate way to prevent the disease at an early stage. Therefore, non-operative treatments such as toe-in gait have recently been proposed. Toe-in gait is one of the non-surgical interventions, which restrains the progression of arthritis and relieves pain by reducing the knee adduction moment (KAM) to facilitate lateral distribution of load onto the knee medial cartilage. Numerous studies have measured KAM at various foot progression angles (FPA), and KAM data could be obtained by motion analysis. However, variations in stress at the knee cartilage could not be directly observed or evaluated by these experiments measuring KAM. Therefore, this study applied motion analysis to major gait points (1st peak, mid-stance, 2nd peak) with regard to FPA, and to evaluate the effects of FPA on the human lower extremity, the finite element (FE) method was employed. Three types of gait analysis (toe-in, toe-out, baseline gait) were performed with markers placed on the lower extremity. Ground reaction forces (GRF) were obtained by the force plates. The forces associated with the major muscles were computed using GRF and marker trajectory data. MRI data provided by the Visible Human Project were used to develop a human lower extremity FE model. FE analyses for the three types of gait simulations were performed based on the calculated muscle forces and GRF. By comparing the results of the FE analyses at the 1st peak across gait types, we observed that the maximum stress point during toe-in gait was lower than in the other types. This is the same trend as exhibited by KAM, measured through motion analysis in other papers. This indicates that the progression of knee medial OA could be suppressed by adopting toe-in gait. This study integrated motion analysis with FE analysis. One advantage of this method is that re-modeling is not required even with changes in posture. Therefore, other types of gait simulation or various motions of the lower extremity can be easily analyzed using this method.
Keywords: finite element analysis, gait analysis, human model, motion capture
Procedia PDF Downloads 335
587 The Integration of Geographical Information Systems and Capacitated Vehicle Routing Problem with Simulated Demand for Humanitarian Logistics in Tsunami-Prone Area: A Case Study of Phuket, Thailand
Authors: Kiatkulchai Jitt-Aer, Graham Wall, Dylan Jones
Abstract:
As a result of the Indian Ocean tsunami in 2004, logistics applied to disaster relief operations has received great attention in the humanitarian sector. As learned from that disaster, preparing for and managing the delivery of essential items from distribution centres to affected locations is of great importance for relief operations, as the nature of disasters is uncertain, especially the casualty figures to which the quantities of supplies are normally proportional. Thus, this study proposes a spatial decision support system (SDSS) for humanitarian logistics by integrating Geographical Information Systems (GIS) and the capacitated vehicle routing problem (CVRP). The GIS is utilised for acquiring demands simulated from the tsunami flooding model of the affected area in the first stage, and for visualising the simulation solutions in the last stage. The CVRP in this study encompasses designing the relief routes of a set of homogeneous vehicles from a relief centre to a set of geographically distributed evacuation points whose demands are estimated by using both simulation and randomisation techniques. The CVRP is modelled as a multi-objective optimization problem where both total travelling distance and total transport resources used are minimized, while the demand-cost efficiency of each route is maximized in order to determine route priority. As the model is an NP-hard combinatorial optimization problem, the Clarke and Wright savings heuristic is proposed to solve the problem for near-optimal solutions. Real-case instances in the coastal area of Phuket, Thailand are studied to perform the SDSS, which allows a decision maker to visually analyse the simulation scenarios through different decision factors.
Keywords: demand simulation, humanitarian logistics, geographical information systems, relief operations, capacitated vehicle routing problem
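A minimal sketch of the Clarke and Wright savings heuristic mentioned above, in Python. The depot, demands and coordinates are synthetic, the capacity constraint is the only restriction modelled, and only end-to-start route merges are attempted, so this is a simplified single-objective version of the procedure used in the study.

```python
# Minimal sketch of the Clarke-Wright savings heuristic for a capacitated VRP.
# Node 0 is the relief centre; demands and coordinates are synthetic, not Phuket data.
import itertools
import math

coords  = [(0, 0), (2, 9), (7, 4), (5, 8), (9, 1), (3, 3)]   # depot + 5 evacuation points
demands = [0, 4, 6, 5, 7, 3]                                  # simulated demand per point
capacity = 12                                                  # vehicle capacity

def dist(i, j):
    return math.hypot(coords[i][0] - coords[j][0], coords[i][1] - coords[j][1])

# Savings s(i,j) = d(0,i) + d(0,j) - d(i,j), processed in decreasing order.
savings = sorted(
    ((dist(0, i) + dist(0, j) - dist(i, j), i, j)
     for i, j in itertools.combinations(range(1, len(coords)), 2)),
    reverse=True)

routes = {i: [i] for i in range(1, len(coords))}   # start with one route per point
load   = {i: demands[i] for i in routes}
owner  = {i: i for i in routes}                    # route each point currently belongs to

for s, i, j in savings:
    ri, rj = owner[i], owner[j]
    if ri == rj or load[ri] + load[rj] > capacity:
        continue
    # merge only if i ends its route and j starts its route (simplified rule)
    if routes[ri][-1] == i and routes[rj][0] == j:
        routes[ri].extend(routes[rj])
        load[ri] += load[rj]
        for node in routes[rj]:
            owner[node] = ri
        del routes[rj], load[rj]

for r in routes.values():
    length = dist(0, r[0]) + sum(dist(a, b) for a, b in zip(r, r[1:])) + dist(r[-1], 0)
    print("route:", [0] + r + [0], "load:", sum(demands[k] for k in r),
          "length:", round(length, 2))
```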
Procedia PDF Downloads 248
586 Relationships Between the Petrophysical and Mechanical Properties of Rocks and Shear Wave Velocity
Authors: Anamika Sahu
Abstract:
The Himalayas, like many mountainous regions, is susceptible to multiple hazards. In recent times, the frequency of such disasters has been continuously increasing due to extreme weather phenomena. These natural hazards are responsible for irreparable human and economic losses. The Indian Himalayas has repeatedly been ruptured by great earthquakes in the past and has the potential for a future large seismic event, as it falls within a seismic gap. Damage caused by earthquakes differs from one locality to another. It is well known that, during earthquakes, damage to structures is associated with the subsurface conditions and the quality of construction materials. So, for sustainable mountain development, prior site characterization will be valuable for designing and constructing built-up areas and for efficient mitigation of seismic risk. Both geotechnical and geophysical investigations of the subsurface are required to describe the subsurface complexity. In mountainous regions, geophysical methods are gaining popularity, as areas can be studied without disturbing the ground surface, and these methods are also time- and cost-effective. The MASW method is used to calculate Vs30, the average shear wave velocity for the top 30 m of soil. Shear wave velocity is considered the best stiffness indicator, and the average shear wave velocity up to 30 m is used in the National Earthquake Hazards Reduction Program (NEHRP) provisions (BSSC, 1994) and the Uniform Building Code (UBC, 1997) classification. Parameters obtained through the geotechnical investigation have been integrated with findings obtained through the subsurface geophysical survey. Joint interpretation has been used to establish inter-relationships of mineral constituents, various textural parameters, and unconfined compressive strength (UCS) with shear wave velocity. It is found that the results obtained through the MASW method fitted well with the laboratory tests. In both conditions, mineral constituents and textural parameters (grain size, grain shape, grain orientation, and degree of interlocking) control the petrophysical and mechanical properties of rocks and the behavior of shear wave velocity.
Keywords: MASW, mechanical, petrophysical, site characterization
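A minimal sketch of the Vs30 calculation underlying the NEHRP/UBC classification mentioned above. The layer thicknesses and velocities are hypothetical MASW results, and the class boundaries are the commonly quoted ones rather than values taken from this study.

```python
# Sketch of the Vs30 calculation used for NEHRP/UBC site classification.
# Layer thicknesses and shear-wave velocities are hypothetical MASW results.

layers = [(4.0, 220.0), (8.0, 340.0), (18.0, 520.0)]   # (thickness m, Vs m/s), sums to 30 m

def vs30(profile):
    """Time-averaged shear-wave velocity of the top 30 m: 30 / sum(h_i / v_i)."""
    travel_time = sum(h / v for h, v in profile)
    return 30.0 / travel_time

def nehrp_class(v):
    # Simplified NEHRP site-class boundaries (m/s).
    if v > 1500: return "A"
    if v > 760:  return "B"
    if v > 360:  return "C"
    if v > 180:  return "D"
    return "E"

v = vs30(layers)
print(f"Vs30 = {v:.0f} m/s, NEHRP site class {nehrp_class(v)}")
```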
Procedia PDF Downloads 86
585 Assessing Overall Thermal Conductance Value of Low-Rise Residential Home Exterior Above-Grade Walls Using Infrared Thermography Methods
Authors: Matthew D. Baffa
Abstract:
Infrared thermography is a non-destructive test method used to estimate surface temperatures based on the amount of electromagnetic energy radiated by building envelope components. These surface temperatures are indicators of various qualitative building envelope deficiencies such as locations and extent of heat loss, thermal bridging, damaged or missing thermal insulation, air leakage, and moisture presence in roof, floor, and wall assemblies. Although infrared thermography is commonly used for qualitative deficiency detection in buildings, this study assesses its use as a quantitative method to estimate the overall thermal conductance value (U-value) of the exterior above-grade walls of a study home. The overall U-value of exterior above-grade walls in a home provides useful insight into the energy consumption and thermal comfort of a home. Three methodologies from the literature were employed to estimate the overall U-value by equating conductive heat loss through the exterior above-grade walls to the sum of convective and radiant heat losses of the walls. Outdoor infrared thermography field measurements of the exterior above-grade wall surface and reflective temperatures and emissivity values for various components of the exterior above-grade wall assemblies were carried out during winter months at the study home using a basic thermal imager device. The overall U-values estimated from each methodology from the literature using the recorded field measurements were compared to the nominal exterior above-grade wall overall U-value calculated from the materials and dimensions detailed in the architectural drawings of the study home. The nominal overall U-value was validated through calendarization and weather normalization of utility bills for the study home, as well as various estimated heat loss quantities from a HOT2000 computer model of the study home and other methods. Under ideal environmental conditions, the estimated overall U-values deviated from the nominal overall U-value by between 2% and 33%. This study suggests infrared thermography can estimate the overall U-value of exterior above-grade walls in low-rise residential homes with a fair amount of accuracy.
Keywords: emissivity, heat loss, infrared thermography, thermal conductance
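One possible formulation of the energy balance described above (conduction through the wall equated to convective plus radiative losses at the exterior surface) is sketched below. The convection coefficient, emissivity and temperatures are placeholder values, and the three methodologies actually compared in the study may use different forms of this balance.

```python
# Sketch of one common way to estimate an in-situ U-value from exterior thermography:
# conduction through the wall is equated to convective + radiative losses at the
# exterior surface. All input values below are illustrative placeholders.

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/(m^2*K^4)

def u_value(t_in, t_out, t_surface, t_reflected, emissivity, h_conv):
    """All temperatures in degrees Celsius; returns U in W/(m^2*K)."""
    ts, tr = t_surface + 273.15, t_reflected + 273.15
    q_conv = h_conv * (t_surface - t_out)                  # convective loss from surface
    q_rad  = emissivity * SIGMA * (ts ** 4 - tr ** 4)      # net radiative loss
    return (q_conv + q_rad) / (t_in - t_out)               # divide by overall delta-T

# Hypothetical winter-night measurement
print(round(u_value(t_in=21.0, t_out=-5.0, t_surface=-2.5,
                    t_reflected=-12.0, emissivity=0.90, h_conv=15.0), 3))
```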
Procedia PDF Downloads 313
584 Measurement and Simulation of Axial Neutron Flux Distribution in Dry Tube of KAMINI Reactor
Authors: Manish Chand, Subhrojit Bagchi, R. Kumar
Abstract:
A new dry tube (DT) has been installed in the tank of the KAMINI research reactor, Kalpakkam, India. This tube will be used for neutron activation analysis of small to large samples and testing of neutron detectors. The DT is 375 cm in height and 7.5 cm in diameter, located 35 cm away from the core centre. The experimental thermal flux at various axial positions inside the tube has been measured by irradiating the flux monitor (¹⁹⁷Au) at 20 kW reactor power. The measured activity of ¹⁹⁸Au and the thermal cross section of the ¹⁹⁷Au(n,γ)¹⁹⁸Au reaction were used for the experimental thermal flux measurement. The flux inside the tube varies from 10⁹ to 10¹⁰ n cm⁻²s⁻¹, and the maximum flux was (1.02 ± 0.023) x 10¹⁰ n cm⁻²s⁻¹ at 36 cm from the bottom of the tube. Au and Zr foils, without and with a cadmium cover of 1-mm thickness, were irradiated at the maximum flux position in the DT to find out the irradiation-specific input parameters, namely the sub-cadmium to epithermal neutron flux ratio (f) and the epithermal neutron flux shape factor (α). The f value was 143 ± 5, indicating about a 99.3% thermal neutron component, and the α value was -0.2886 ± 0.0125, indicating a hard epithermal neutron spectrum due to insufficient moderation. The measured flux profile has been validated against a theoretical model of the KAMINI reactor through the Monte Carlo N-Particle code (MCNP). In MCNP, the complex geometry of the entire reactor is modelled in 3D, ensuring minimum approximations for all the components. Continuous energy cross-section data from ENDF-B/VII.1 as well as S(α, β) thermal neutron scattering functions are considered. The neutron flux has been estimated at the corresponding axial locations of the DT using a mesh tally. The thermal flux obtained from the experiment shows good agreement with the values theoretically predicted by MCNP, being within ±10%. It can be concluded that this MCNP model can be utilized for calculating other important parameters like neutron spectra, dose rate, etc., and multi-elemental analysis can be carried out by irradiating samples at the maximum flux position using the measured f and α parameters with k₀-NAA standardization.
Keywords: neutron flux, neutron activation analysis, neutron flux shape factor, MCNP, Monte Carlo N-Particle Code
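For context, the following sketch shows the standard activation equation by which a thermal flux can be deduced from a measured ¹⁹⁸Au activity, as outlined above. The foil mass, activity and timing are illustrative only, the nuclear data are rounded textbook values, and self-shielding and counting-efficiency corrections are omitted.

```python
# Sketch of how a thermal flux can be deduced from the measured 198Au activity of an
# irradiated gold foil (standard activation equation, simplified corrections).
# Foil mass, activity and timing below are illustrative, not the KAMINI measurements.
import math

N_A       = 6.022e23
SIGMA_TH  = 98.65e-24            # 197Au(n,gamma) thermal cross-section, cm^2 (~98.65 b)
HALF_LIFE = 2.695 * 24 * 3600    # 198Au half-life, s
LAMBDA    = math.log(2) / HALF_LIFE

def thermal_flux(activity_bq, foil_mass_g, t_irr_s, t_decay_s):
    """phi = A / [N * sigma * (1 - exp(-lambda*t_irr)) * exp(-lambda*t_decay)]"""
    n_atoms    = foil_mass_g / 196.97 * N_A          # 197Au atoms in the foil
    saturation = 1.0 - math.exp(-LAMBDA * t_irr_s)   # build-up during irradiation
    decay      = math.exp(-LAMBDA * t_decay_s)       # decay before counting
    return activity_bq / (n_atoms * SIGMA_TH * saturation * decay)

# Hypothetical example: 10 mg foil, 1 h irradiation, counted 2 h after the end of irradiation
print(f"{thermal_flux(2.0e5, 0.010, 3600, 7200):.3e} n cm^-2 s^-1")
```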
Procedia PDF Downloads 163
583 Corporate Sustainability Practices in Asian Countries: Pattern of Disclosure and Impact on Financial Performance
Authors: Santi Gopal Maji, R. A. J. Syngkon
Abstract:
The changing attitude of corporate enterprises from maximizing economic benefit to corporate sustainability after the publication of the Brundtland Report has attracted the interest of researchers to investigate the sustainability practices of firms and their impact on financial performance. To enrich the empirical literature in the Asian context, this study examines the disclosure pattern of corporate sustainability and the influence of sustainability reporting on the financial performance of firms from four Asian countries (Japan, South Korea, India and Indonesia) that have been publishing sustainability reports continuously from 2009 to 2016. The study has used the content analysis technique based on the Global Reporting Initiative (GRI 3 and 3.1) reporting framework to compute the disclosure score of corporate sustainability and its components. While a dichotomous coding system has been employed to compute the overall quantitative disclosure score, a four-point scale has been used to assess the quality of the disclosure. For analysing the disclosure pattern of corporate sustainability, box plots have been used. Further, the Pearson chi-square test has been used to examine whether there is any difference in the proportion of disclosure between the countries. Finally, a quantile regression model has been employed to examine the influence of corporate sustainability reporting at different locations of the conditional distribution of firm performance. The findings of the study indicate that Japan occupies the first position in terms of disclosure of sustainability information, followed by South Korea and India. In the case of Indonesia, the quality of the disclosure score is considerably lower as compared to the other three countries. Further, the gap between the quality and quantity of disclosure scores is comparatively smaller in Japan and South Korea than in India and Indonesia. The same is evident in respect of the components of sustainability. The results of the quantile regression indicate that the positive impact of corporate sustainability becomes stronger at upper quantiles in the case of Japan and South Korea. But the study fails to extract any definite pattern of the impact of corporate sustainability disclosure on the financial performance of firms from Indonesia and India.
Keywords: corporate sustainability, quality and quantity of disclosure, content analysis, quantile regression, Asian countries
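A minimal sketch of the quantile-regression step described above, using statsmodels on synthetic data. The variable names (roa, csr_score, size) and the data-generating assumptions are placeholders, not the study's dataset.

```python
# Sketch of the quantile-regression step described above: estimating the effect of a
# sustainability disclosure score on firm performance at several quantiles.
# Data are synthetic; variable names (roa, csr_score, size) are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({"csr_score": rng.uniform(0, 1, n), "size": rng.normal(10, 1, n)})
# effect of disclosure made stronger in the upper tail of performance (illustrative)
df["roa"] = 0.02 + 0.05 * df["csr_score"] + 0.01 * df["size"] + rng.gumbel(0, 0.02, n)

for q in (0.25, 0.50, 0.75, 0.90):
    fit = smf.quantreg("roa ~ csr_score + size", df).fit(q=q)
    print(f"q={q:.2f}  csr_score coef={fit.params['csr_score']:.4f}")
```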
Procedia PDF Downloads 194
582 An Occupational Health Risk Assessment for Exposure to Benzene, Toluene, Ethylbenzene and Xylenes: A Case Study of Informal Traders in a Metro Centre (Taxi Rank) in South Africa
Authors: Makhosazana Dubazana
Abstract:
Many South African commuters use minibus taxis daily and are connected to the informal transport network through metro centres, informally known as taxi ranks. Taxi ranks form part of an economic nexus for many informal traders, connecting them to commuters, their prime clientele. They work in designated areas along the periphery of the taxi rank and in between taxi lanes. Informal traders are therefore at risk of adverse health effects associated with the inhalation of exhaust fumes from minibus taxis. Of the exhaust emissions, benzene, toluene, ethylbenzene and xylenes (BTEX) have high toxicity. Purpose: The purpose of this study was to conduct a human health risk assessment for informal traders, looking at their exposure to BTEX compounds. Methods: The study was conducted in a subsection of a taxi rank which is representative of the entire taxi rank. This subsection has a daily average of 400 minibus taxis moving through it and an average of 60 informal traders working in it. In the health risk assessment, a questionnaire was administered to understand the occupational behaviour of the informal traders. This was used to deduce the exposure scenarios and sampling locations. Three sampling campaigns were run for an average of 10 hours each, covering the average working hours of traders. A gas chromatograph was used for collecting continuous ambient air samples at 15-min intervals. Results: Over the three sampling days, the average concentrations were 8.46 ppb, 0.63 ppb, 1.27 ppb and 1.0 ppb for benzene, toluene, ethylbenzene, and xylenes, respectively. The average cancer risk is 9.46E-03. In several cases, there were incidences of unacceptable risk for the cumulative exposure to all four BTEX compounds. Conclusion: This study adds to the body of knowledge on the human health risk effects of urban BTEX pollution, focusing in particular on the impact of urban BTEX on high-risk persons, such as informal traders, in Southern Africa.
Keywords: human health risk assessment, informal traders, occupational risk, urban BTEX
Procedia PDF Downloads 232
581 In vitro Characterization of Mice Bone Microstructural Changes by Low-Field and High-Field Nuclear Magnetic Resonance
Authors: Q. Ni, J. A. Serna, D. Holland, X. Wang
Abstract:
The objective of this study is to develop Nuclear Magnetic Resonance (NMR) techniques to enhance bone-related research, applied to normal and disuse (biglycan knockout) mice bone in vitro, by using both low-field and high-field NMR. It is known that the total amplitude of the T₂ relaxation envelopes, measured by the Carr-Purcell-Meiboom-Gill (CPMG) NMR spin echo train, is a representation of the liquid phase inside the pores. Therefore, the NMR CPMG magnetization amplitude can be converted to the volume of water after calibration with the NMR signal amplitude of a known volume of water. In this study, the distribution of mobile water and the porosity can be determined by using the low-field (20 MHz) CPMG relaxation technique, and the pore size distributions can be determined by a computational inversion of the relaxation data. It is also known that the total proton intensity of magnetization from the NMR free induction decay (FID) signal is due to the water present inside the pores (mobile water), the water that has undergone hydration with the bone (bound water), and the protons in the collagen and mineral matter (solid-like protons). Therefore, the components of total mobile and bound water within bone can be determined by the low-field NMR free induction decay technique. Furthermore, the bound water in the solid phase (mineral and organic constituents), especially the dominant component calcium hydroxyapatite (Ca₁₀(OH)₂(PO₄)₆), can be determined by using high-field (400 MHz) magic angle spinning (MAS) NMR. With the MAS technique reducing the inhomogeneous and susceptibility broadening of the NMR spectral linewidth in the liquid-solid mix, we can, in particular, conduct further research into the ¹H and ³¹P environments of bone materials to identify the locations of bound water, such as the OH⁻ group within the minerals, and the bone architecture. We hypothesize that low-field and high-field magic angle spinning NMR together can provide a more complete interpretation of the water distribution, particularly of bound water, and these data are important to assess bone quality and predict the mechanical behavior of bone.
Keywords: bone, mice bone, NMR, water in bone
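A minimal sketch of the computational inversion mentioned above: a CPMG echo train is decomposed into a T₂ distribution by solving a multi-exponential kernel with non-negative least squares. The echo train here is synthetic, and real pore-size inversions normally add regularization.

```python
# Sketch of the "computational inversion" of a CPMG decay into a T2 distribution:
# a multi-exponential kernel solved by non-negative least squares (no regularization
# here for brevity; real pore-size inversions usually add Tikhonov smoothing).
# The echo train below is synthetic, not a measured bone signal.
import numpy as np
from scipy.optimize import nnls

t  = np.arange(1, 1001) * 0.5e-3                    # echo times, s (0.5 ms spacing)
T2 = np.logspace(-4, 0, 60)                         # candidate T2 values, s

# synthetic two-pool decay: short-T2 (bound-like) + longer-T2 (mobile-like) water
signal = 0.6 * np.exp(-t / 3e-3) + 0.4 * np.exp(-t / 60e-3)
signal += np.random.default_rng(0).normal(0, 0.002, t.size)

K = np.exp(-t[:, None] / T2[None, :])               # kernel K[i, j] = exp(-t_i / T2_j)
amplitudes, _ = nnls(K, signal)                     # non-negative T2 spectrum

for idx in np.flatnonzero(amplitudes > 0.05):
    print(f"T2 component near {T2[idx] * 1e3:.1f} ms, amplitude {amplitudes[idx]:.2f}")
```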
Procedia PDF Downloads 176
580 Optimizing the Location of Parking Areas Adapted for Dangerous Goods in the European Road Transport Network
Authors: María Dolores Caro, Eugenio M. Fedriani, Ángel F. Tenorio
Abstract:
The transportation of dangerous goods by lorries throughout Europe must be done using the roads conforming to the European Road Transport Network. In this network, there are several parking areas where lorry drivers can park to rest according to the regulations. According to the "European Agreement concerning the International Carriage of Dangerous Goods by Road", parking areas where lorries transporting dangerous goods can park to rest must follow several security stipulations to keep the rest of the road users safe. In this respect, these lorries must be parked in adapted areas with strict and permanent surveillance measures. Moreover, drivers must satisfy several restrictions on resting and driving times. Given these facts, one might expect that there exist enough parking areas for the transport of this type of goods in order to obey the regulations prescribed by the European Union and its member countries. However, the already-existing parking areas are not sufficient to cover all the stops required by drivers transporting dangerous goods. Our main goal is, starting from the already-existing parking areas and the loading-and-unloading locations, to provide an optimal answer to the following question: how many additional parking areas must be built, and where must they be located, to assure that lorry drivers can transport dangerous goods following all the stipulations about security and safety for their stops? The sense of the word “optimal” is due to the fact that we give a global solution for the location of parking areas throughout the whole European Road Transport Network, adjusting the number of additional areas to be as low as possible. To do so, we have modeled the problem using graph theory, since we are working with a road network. As nodes, we have considered the locations of each already-existing parking area, each loading-and-unloading area, and each road bifurcation. Each road connecting two nodes is considered as an edge in the graph, whose weight corresponds to the distance between both nodes of the edge. By applying a new efficient algorithm, we have found the additional nodes for the network representing the new parking areas adapted for dangerous goods, subject to the constraint that the distance between two parking areas must be less than or equal to 400 km.
Keywords: trans-european transport network, dangerous goods, parking areas, graph-based modeling
Procedia PDF Downloads 280
579 Religion versus Secularism on Women’s Liberation: The Question of Women Liberation and Modern Education
Authors: Kinda AlSamara
Abstract:
The nineteenth century was characterized by major educational reforms in the Arab World. One of the unintended outcomes of colonization in Arab countries was the initiation of women's liberation, as well as the introduction of modern education and its application in sensitizing people to the rights of women and their liberation. The reforms were often attributed to various undercurrents that took place at different levels within the Ottoman Empire, particularly the arrival and influence of the Christian missionaries supported by the American and European governments. These trends were also significantly attributed to the increase in the presence of Europeans in the region, as well as the introduction of secular ideas and approaches related to the meaning of modernity. Using literary analysis as a method, this paper examines the role of important male figures such as the political activist and writer Qāsim Amīn and the religious reformer Muḥammad ʻAbduh in starting this discourse, shows their impact on the women's emancipation movement (Taḥrīr), and discusses how women later led the movement with their published work. This paper also explores Arab salons and the initiation of women's literary circles. Women from wealthy families in Egypt and Syria who had studied in Europe or interacted with European counterparts began these circles. These salons acted as central locations where people could meet and hold discussions on political, social, and literary trends as they happened each day. The paper concludes with a discussion of current debates between the Islamist and the secularist branches of the movement today. While the Islamists believe that adhering to the core of Islam, with some of its contested positions on women, is a modern ideology of liberation that fits the current culture of modern-day Egypt, the secularists argue that the influence Islam has on the women's liberation movement in Egypt has been a threat to the natural success and progress of the movement, which was initiated in the early nineteenth century independently of the more recent trends towards religiosity in the country.
Keywords: educational model, crisis of terminologies, Arab awakening, nineteenth century
Procedia PDF Downloads 210
578 Comparison of Fatty Acids Composition of Three Commercial Fish Species Farmed in the Adriatic Sea
Authors: Jelka Pleadin, Greta Krešić, Tina Lešić, Ana Vulić, Renata Barić, Tanja Bogdanović, Dražen Oraić, Ana Legac, Snježana Zrnčić
Abstract:
Fish has been acknowledged as an integral component of a well-balanced diet, providing a healthy source of energy, high-quality proteins, vitamins, essential minerals and, especially, n-3 long-chain polyunsaturated fatty acids (n-3 LC PUFA), mainly eicosapentaenoic acid (20:5 n-3, EPA) and docosahexaenoic acid (22:6 n-3, DHA), whose pleiotropic effects in terms of health promotion and disease prevention have been increasingly recognised. In this study, the fatty acid composition of three commercially important farmed fish species, sea bream (Sparus aurata), sea bass (Dicentrarchus labrax) and dentex (Dentex dentex), was investigated. In total, 60 fish samples were retrieved during 2015 (n = 30) and 2016 (n = 30) from different locations in the Adriatic Sea. Methyl esters of fatty acids were analysed using gas chromatography (GC) with flame ionization detection (FID). The results show that the most represented fatty acid in all three analysed species is oleic acid (C18:1n-9, OA), followed by linoleic acid (C18:2n-6, LA) and palmitic acid (C16:0, PA). Dentex was shown to have a two to four times higher eicosapentaenoic (EPA) and docosahexaenoic (DHA) acid content compared to sea bream and sea bass. The recommended n-6/n-3 ratio was determined in all fish species, but the obtained results pointed to statistically significant differences (p < 0.05) in fatty acid composition among the analysed fish species and their potential as a dietary source of valuable fatty acids. Sea bass and sea bream had a significantly higher proportion of n-6 fatty acids, while dentex had a significantly higher proportion of n-3 (C18:4n-3, C20:4n-3, EPA, DHA) fatty acids. A higher ratio of hypocholesterolaemic to hypercholesterolaemic fatty acids (HH) was determined for sea bass and sea bream, which comes as a consequence of the lower share of SFA determined in these two species in comparison to dentex. Since the analysed fish species vary in their fatty acid composition, consumption of diverse fish species would be advisable. Based on the established lipid quality indicators, dentex, a fish species underutilised by aquaculture, seems to be a highly recommendable and important source of fatty acids that should be included in the human diet.
Keywords: dentex, fatty acids, farmed fish, sea bass, sea bream
Procedia PDF Downloads 392
577 Structural Health Monitoring of Buildings–Recorded Data and Wave Method
Authors: Tzong-Ying Hao, Mohammad T. Rahmani
Abstract:
This article presents a structural health monitoring (SHM) method based on changes in wave traveling times (the wave method) within a layered 1-D shear beam model of a structure. The wave method measures the velocity of the shear wave propagating in a building from the impulse response functions (IRF) obtained from data recorded at different locations inside the building. If structural damage occurs in a structure, the velocity of wave propagation through it changes. The wave method analysis is performed on the responses of the Torre Central building, a 9-story shear wall structure located in Santiago, Chile. Because events of different intensity (ambient vibrations, weak and strong earthquake motions) have been recorded at this building, it can serve as a full-scale benchmark to validate the structural health monitoring method utilized. The analysis of inter-story drifts and the Fourier spectra for the EW and NS motions during the 2010 Chile earthquake is presented. The results for the NS motions suggest the coupling of translational and torsional responses. The system frequencies (estimated from the relative displacement response of the 8th floor with respect to the basement from recorded data) were detected to decrease initially by approximately 24% in the EW motion. Near the end of shaking, an increase of about 17% was detected. These analyses and results serve as baseline indicators of the occurrence of structural damage. The detected changes in wave velocities of the shear beam model are consistent with the observed damage. However, the 1-D shear beam model is not sufficient to simulate the coupling of translational and torsional responses in the NS motion. The wave method is suitable for actual implementation in structural health monitoring systems, provided the resolution and accuracy of the model are carefully assessed for its effectiveness in post-earthquake damage detection in buildings.
Keywords: Chile earthquake, damage detection, earthquake response, impulse response function, shear beam model, shear wave velocity, structural health monitoring, Torre Central building, wave method
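A minimal sketch of the core wave-method step described above: the travel time between the base and roof impulse response functions is estimated by cross-correlation and converted to a shear-wave velocity. The impulse responses, sampling rate and building height are synthetic placeholders, not the Torre Central records.

```python
# Sketch of the core of the wave method: estimate the shear-wave travel time between
# the base and roof impulse responses by cross-correlation, then velocity = height / time.
# The impulse responses below are synthetic pulses, not the Torre Central recordings.
import numpy as np

fs = 200.0                                         # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
pulse = np.exp(-((t - 0.5) / 0.02) ** 2)           # pulse "arriving" at the base at 0.5 s
delay_samples = 12                                 # true travel time = 12 / fs = 0.06 s
irf_base = pulse
irf_roof = np.roll(pulse, delay_samples) * 0.8     # attenuated, delayed arrival at the roof

xcorr = np.correlate(irf_roof, irf_base, mode="full")
lag = np.argmax(xcorr) - (len(irf_base) - 1)       # lag (samples) of the correlation peak
travel_time = lag / fs

building_height = 30.0                             # m, base to roof sensor (hypothetical)
print(f"travel time = {travel_time:.3f} s, "
      f"shear-wave velocity = {building_height / travel_time:.0f} m/s")
```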
Procedia PDF Downloads 367
576 Evaluation of Chemoprotective Effect of NBRIQU16 against N-Methyl-N-Nitro-N-Nitrosoguanidine and NaCl-Induced Gastric Carcinomas in Wistar Rats
Authors: Lubna Azmi, Ila Shukla, Shyam Sundar Gupta, Padam Kant, C. V. Rao
Abstract:
To investigate the chemoprotective potential of the NBRIQU16 chemotype isolated from Argyreia speciosa (Family: Convolvulaceae) on N-methyl-N′-nitro-N-nitrosoguanidine (MNNG) and NaCl-induced gastric carcinomas in Wistar rats. Forty-six male 6-week-old Wistar rats were divided into two groups. Thirty rats in group A were fed a diet supplemented with 8% NaCl for 20 weeks and simultaneously given MNNG in drinking water at a concentration of 100 µg/ml for the first 17 weeks. After administration of the carcinogen, 200 and 400 mg/kg of NBRIQU16 were administered orally once a day throughout the study. From week 18, these rats were given normal water. From week 21, these rats were fed a normal diet for 15 weeks. Group B, containing 16 rats, was fed a standard diet for thirty-five days and served as the control. Ten rats from group A were sacrificed after 20 weeks. Sacrifice of the remaining animals was conducted after 35 weeks. The entire stomach and part of the duodenum were incised parallel to the greater curvature, and the samples were collected. After opening the stomach, the number of tumors, along with their locations and sizes, was recorded. Expression of survivin was examined by immunohistochemistry of the specimens. The treatment with NBRIQU16 significantly reduced the nodule incidence and nodule multiplicity in the rats after MNNG administration. Survivin expression in the glandular stomachs of normal rats, of rats in the middle of the induction period, in adenocarcinomas, and in NBRIQU16-treated tissues adjacent to tumors was 0, 42.0%, 79.3%, and 36.4%, respectively. Expression of survivin was significantly different as compared to the normal rats. Histological observations of stomach tissues also correlated with the biochemical observations. These findings strongly support the chemopreventive effect of NBRIQU16, which suppressed the tumor burden and restored the activities of gastric cancer marker enzymes in MNNG- and NaCl-induced gastric carcinomas in Wistar rats.
Keywords: Argyreia speciosa, gastric carcinoma, immunochemistry, NBRIQU16
Procedia PDF Downloads 298
575 Incident Management System: An Essential Tool for Oil Spill Response
Authors: Ali Heyder Alatas, D. Xin, L. Nai Ming
Abstract:
An oil spill emergency can vary in size and complexity, subject to factors such as the volume and characteristics of the spilled oil, the incident location, impacted sensitivities and the resources required. A major incident typically involves numerous stakeholders; these include the responsible party, response organisations, government authorities across multiple jurisdictions, local communities, and a spectrum of technical experts. An incident management team will encounter numerous challenges. Factors such as limited access to the location, adverse weather, poor communication, and lack of pre-identified resources can impede a response; delays caused by an inefficient response can exacerbate the impacts caused to the wider environment and to socio-economic and cultural resources. It is essential that all parties work based on defined roles, responsibilities and authority, and ensure the availability of sufficient resources. To promote steadfast coordination and overcome the challenges highlighted, an Incident Management System (IMS) offers an essential tool for oil spill response. It provides clarity in command and control, improves communication and coordination, facilitates cooperation between stakeholders, and integrates the resources committed. Following the preceding discussion, a comprehensive review of existing literature serves to illustrate the application of IMS in oil spill response to overcome common challenges faced in a major-scale incident. With a primary audience comprising practitioners in mind, this study will discuss key principles of incident management which enable an effective response, along with pitfalls and challenges, particularly the tension between government and industry; case studies will be used to frame learning and issues consolidated from previous research, and provide the context to link practice with theory. It will also feature the industry approach to incident management, which was further crystallized as part of a review by the Joint Industry Project (JIP) established in the wake of the Macondo well control incident. The authors posit that a common IMS which can be adopted across the industry not only enhances response capacity towards a major oil spill incident but is essential to the global preparedness effort.
Keywords: command and control, incident management system, oil spill response, response organisation
Procedia PDF Downloads 156
574 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint
Authors: Semachew M. Kassa, Africa M Geremew, Tezera F. Azmatch, Nandyala Darga Kumar
Abstract:
Frequency ratio (FR) and analytical hierarchy process (AHP) methods are developed based on past landslide failure points for landslide susceptibility mapping, because landslides can seriously harm both the environment and society. However, it is still difficult to select the most efficient method and correctly identify the main driving factors for particular regions. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, including Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict the landslide susceptibility at a 12.5 m spatial scale. The performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than that of the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods, according to the classification results based on the inventory landslide points. The findings also showed that around 35% of the study region was made up of places with high and very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns in landslide susceptibility. The towns with the highest landslide risk include the western part of Amhara Saint Town, the northern part, and the St. Gebreal Church villages, with mean susceptibility values greater than 0.5. However, rainfall, distance to roads, and slope were typically among the top leading factors for most villages. The primary contributing factors to landslide vulnerability varied slightly among the five models. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies. It also suggests that various places should take different safeguards to reduce or prevent serious damage from landslide events.
Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine
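A minimal sketch of the susceptibility-modelling workflow described above, using a random forest in scikit-learn. The conditioning-factor matrix and labels are synthetic, whereas in the study each row would be an inventory point with its fourteen conditioning-factor values.

```python
# Sketch of the susceptibility-modelling workflow: fit a classifier on landslide /
# non-landslide points with their conditioning factors, then report F1 and AUC.
# The data here are synthetic stand-ins for the mapped inventory points.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 14))                    # 14 landslide conditioning factors
logit = 1.5 * X[:, 0] - 1.0 * X[:, 3] + 0.5 * X[:, 7]
y = (logit + rng.normal(scale=1.0, size=1000) > 0).astype(int)   # 1 = landslide point

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]            # susceptibility in [0, 1]
print("F1 :", round(f1_score(y_te, model.predict(X_te)), 2))
print("AUC:", round(roc_auc_score(y_te, proba), 2))
print("high / very high share:", round(float((proba > 0.5).mean()), 2))
```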
Procedia PDF Downloads 82
573 Intermittent Effect of Coupled Thermal and Acoustic Sources on Combustion: A Spatial Perspective
Authors: Pallavi Gajjar, Vinayak Malhotra
Abstract:
Rockets have been known to have played a predominant role in spacecraft propulsion. The quintessential aspect of the combustion-related requirements of a rocket engine is the minimization of the surrounding risks/hazards. Over time, it has become imperative to understand the combustion rate variation in the presence of external energy source(s). Rocket propulsion represents a special domain of chemical propulsion assisted by high-speed flows in the presence of acoustics and thermal source(s). Jet noise leads to a significant loss of resources, and every year a huge amount of financial aid is spent to prevent it. External heat source(s) induce a high possibility of fire risks/hazards which can sufficiently endanger the operation of a space vehicle. Appreciable work has been done with justifiable simplification and emphasis on the linear variation of external energy source(s), which yields good physical insight but does not cater to accurate predictions. The present work experimentally attempts to understand the correlation between inter-energy conversions and the non-linear placement of external energy source(s). The work is motivated by the need for better fire safety and enhanced combustion. The specific objectives of the work are a) to interpret the related energy transfer for combustion in the presence of alternate external energy source(s), viz., thermal and acoustic, and b) to fundamentally understand the role of key controlling parameters, viz., separation distance, the number of source(s), selected configurations and their non-linear variation to resemble real-life cases. An experimental setup was prepared using incense sticks as the potential fuel and paraffin wax candles as the external energy source(s). The acoustics was generated using a frequency generator, and the source(s) were placed at selected locations. Non-equidistant parametric experimentation was carried out, and the effects on regression rate changes were noted. The results are expected to be very helpful in offering a new perspective on futuristic rocket designs and safety.
Keywords: combustion, acoustic energy, external energy sources, regression rate
Procedia PDF Downloads 140
572 Assessing Impacts of Climate Variability and Change on Water Productivity and Nutrient Use Efficiency of Maize in the Semi-arid Central Rift Valley of Ethiopia
Authors: Fitih Ademe, Kibebew Kibret, Sheleme Beyene, Mezgebu Getnet, Gashaw Meteke
Abstract:
Changes in precipitation, temperature, and atmospheric CO2 concentration are expected to alter agricultural productivity patterns worldwide. Soil moisture and nutrient availability are two key edaphic factors that determine crop yield, and their interactive effects are sensitive to climatic changes. This study assessed the potential impacts of climate change on maize yield and the corresponding water productivity and nutrient use efficiency under climate change scenarios for the Central Rift Valley (CRV) of Ethiopia by mid-century (2041-2070) and end of century (2071-2100). Projected impacts were evaluated using climate scenarios generated from four General Circulation Models (GCMs) dynamically downscaled by the Swedish RCA4 Regional Climate Model (RCM) in combination with two Representative Concentration Pathways (RCP4.5 and RCP8.5). The Decision Support System for Agro-technology Transfer cropping system model (DSSAT-CSM) was used to simulate yield, water use, and nutrient use for the study periods. Results indicate that rainfed maize yield might decrease on average by 16.5 and 23% by the 2050s and 2080s, respectively, due to climate change. Water productivity is expected to decline on average by 2.2 and 12% in the CRV by mid and end century with respect to the baseline. Nutrient uptake and the corresponding nutrient use efficiency (NUE) might also be negatively affected by climate change. Phosphorus uptake will probably decrease in the CRV on average by 14.5 to 18% by the 2050s, while N uptake may not change significantly at Melkassa. Nitrogen and P use efficiency indicators showed decreases in the ranges of 8.5 to 10.5% and 9.3 to 10.5%, respectively, by the 2050s relative to the baseline average. The simulation results further indicated that a combination of increased water availability and optimum nutrient application might increase both water productivity and nutrient use efficiency under the changed climate, which can ensure modest production in the future. Potential options that can improve water availability and nutrient uptake should be identified for the study locations using a crop modeling approach.
Keywords: crop model, climate change scenario, nutrient uptake, nutrient use efficiency, water productivity
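For orientation, a minimal sketch using common textbook definitions that match the quantities discussed (the abstract does not state its exact formulas): water productivity as grain yield per unit of seasonal evapotranspiration, and nutrient use efficiency as grain yield per unit of nutrient uptake. All values are placeholders.

```python
# Minimal sketch with placeholder values; definitions are common ones, not
# necessarily the exact formulas used in the study.
def water_productivity(yield_kg_ha: float, et_mm: float) -> float:
    """kg of grain per m^3 of water; 1 mm of ET over 1 ha = 10 m^3 of water."""
    return yield_kg_ha / (et_mm * 10.0)

def nutrient_use_efficiency(yield_kg_ha: float, uptake_kg_ha: float) -> float:
    """kg of grain per kg of nutrient taken up (internal utilization efficiency)."""
    return yield_kg_ha / uptake_kg_ha

baseline_wp = water_productivity(yield_kg_ha=3500.0, et_mm=450.0)
future_wp = baseline_wp * (1 - 0.12)   # ~12% decline by end of century, as reported
print(f"baseline WP: {baseline_wp:.2f} kg/m^3, end-of-century WP: {future_wp:.2f} kg/m^3")
print(f"NUE example: {nutrient_use_efficiency(3500.0, 70.0):.1f} kg grain per kg N uptake")
```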
Procedia PDF Downloads 86
571 Q Slope Rock Mass Classification and Slope Stability Assessment Methodology Application in Steep Interbedded Sedimentary Rock Slopes for a Motorway Constructed North of Auckland, New Zealand
Authors: Azariah Sosa, Carlos Renedo Sanchez
Abstract:
The development of a new motorway north of Auckland (New Zealand) includes steep rock cuts, from 63 up to 85 degrees, in an interbedded sandstone and siltstone rock mass of the geological unit Waitemata Group (Pakiri Formation), which shows sub-horizontal bedding planes, several sub-vertical joint sets, and a diverse weathering profile. In this kind of rock mass, which can be classified as a weak rock, the definition of the steepest stable geometry is not governed only by the discontinuities and defects evident in the rock; it is also important to consider the global stability of the rock slope, including in the analysis the rock mass characterisation, the influence of groundwater, the geological evolution, and the weathering processes. Depending on the weakness of the rock and the processes it has undergone, the global stability could, in fact, be a more restrictive element than the potential instability of individual blocks along discontinuities. This paper discusses the elements that govern the stability of rock slopes constructed in a rock formation with a favourable distribution of bedding and discontinuities (horizontal and vertical) but with weak behaviour in terms of global rock mass characterisation. In this context, classifications such as Q-slope and the slope stability assessment methodology (SSAM) have proven to be important tools that complement the assessment of global stability, together with the analytical tools used for wedge-type failures and limit equilibrium methods. The paper focuses on the applicability of these two empirical classifications for evaluating slope stability in 18 already excavated rock slopes in the Pakiri Formation, through a comparison between the predicted and observed stability issues and by reviewing the outcomes of analytical methods (Rocscience slope stability software suite) against the expected stability determined from these rock classifications. This exercise helps validate the findings and correlations arising from the two empirical methods in order to adjust them to the nature of this specific kind of rock mass and to provide a better understanding of the long-term stability of the slopes studied.
Keywords: Pakiri formation, Q-slope, rock slope stability, SSAM, weak rock
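For context, a minimal sketch of the published Q-slope relation referred to in the abstract: the Q-slope index computed from its input ratings and the associated steepest stable slope angle. The input values are placeholders rather than the Pakiri Formation data, and the discontinuity orientation adjustment is omitted for simplicity.

```python
# Minimal sketch, not taken from the paper: Q-slope index and indicative stable
# slope angle. Ratings below are placeholders; orientation adjustment omitted.
import math

def q_slope(rqd, jn, jr, ja, jwice, srf_slope):
    """Q_slope = (RQD/Jn) * (Jr/Ja) * (Jwice/SRF_slope)."""
    return (rqd / jn) * (jr / ja) * (jwice / srf_slope)

def stable_slope_angle_deg(q):
    """Steepest slope angle (degrees) expected to stand unreinforced: 20*log10(Q_slope) + 65."""
    return 20.0 * math.log10(q) + 65.0

q = q_slope(rqd=60, jn=9, jr=1.5, ja=2, jwice=0.7, srf_slope=2.5)
print(f"Q_slope = {q:.2f}, indicative stable angle = {stable_slope_angle_deg(q):.0f} degrees")
```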
Procedia PDF Downloads 208
570 Improving the Uniformity of Electrostatic Meter’s Spatial Sensitivity
Authors: Mohamed Abdalla, Ruixue Cheng, Jianyong Zhang
Abstract:
In pneumatic conveying, solids are mixed with air or gas. In industries such as coal-fired power stations, blast furnaces for iron making, and cement and flour processing, the mass flow rate of solids needs to be monitored or controlled. However, the current gas-solids two-phase flow measurement techniques are not as accurate as the flow meters available for single-phase flow. One of the problems that multi-phase flow meters face is that flow profiles vary with measurement location, the conditions of pipe routing, bends, elbows, and other restriction devices in the conveying system, as well as conveying velocity and concentration. To measure the solids flow rate or concentration when the distribution of solids in the gas is not even, a multi-phase flow meter requires a uniform spatial sensitivity. However, not many meters inherently have such a property. The circular electrostatic meter is a popular choice for gas-solids flow measurement owing to its high sensitivity to flow, robust construction, low installation cost, and non-intrusive nature. However, such meters have an inherently non-uniform spatial sensitivity. This paper first analyses the spatial sensitivity of the circular electrostatic meter in general and then, by combining the effect of the sensitivity to a single particle with the sensing volume for a given electrode geometry, reveals for the first time how a circular electrostatic meter responds to a roping flow stream, which is much more complex than is believed at present. The paper presents recent research findings on the spatial sensitivity investigation at the University of Teesside based on finite element analysis using Ansys Fluent software, including time- and frequency-domain characteristics and the effect of electrode geometry. The simulation results will be compared to the experimental results obtained on a large-scale (14" diameter) rig. The purpose of this research is to pave the way towards a uniform spatial sensitivity for the circular electrostatic sensor by means of compensation, so as to improve the overall accuracy of gas-solids flow measurement.
Keywords: spatial sensitivity, electrostatic sensor, pneumatic conveying, Ansys Fluent software
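To illustrate the idea of combining a single-particle sensitivity with the solids distribution, the sketch below integrates a concentration-weighted sensitivity over the pipe cross-section for a uniform flow and for a rope near the wall. The radial sensitivity profile is a placeholder assumption, not the FEA result from the paper.

```python
# Illustrative sketch: meter response as the integral of (single-particle
# sensitivity) x (solids concentration) over the pipe cross-section.
import numpy as np

R = 1.0   # normalised pipe radius

def sensitivity(r):
    """Placeholder single-particle sensitivity vs radial position (0 = axis, 1 = wall)."""
    return 0.3 + 0.7 * (r / R) ** 2

def meter_response(concentration):
    """Numerically integrate concentration-weighted sensitivity over the cross-section."""
    r = np.linspace(0.0, R, 500)
    dr = r[1] - r[0]
    return float(np.sum(sensitivity(r) * concentration(r) * 2.0 * np.pi * r) * dr)

uniform = lambda r: np.full_like(r, 1.0)                        # evenly dispersed solids
rope_near_wall = lambda r: np.exp(-((r - 0.8 * R) / 0.1) ** 2)  # rope concentrated near the wall

print(f"response to uniform flow: {meter_response(uniform):.3f}")
print(f"response to wall rope:    {meter_response(rope_near_wall):.3f}")
```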
Procedia PDF Downloads 367
569 A Study of Secondary Particle Production from Carbon Ion Beam for Radiotherapy
Authors: Shaikah Alsubayae, Gianluigi Casse, Carlos Chavez, Jon Taylor, Alan Taylor, Mohammad Alsulimane
Abstract:
Achieving precise radiotherapy through carbon therapy necessitates accurate monitoring of the radiation dose distribution within the patient's body. This process is pivotal for targeted tumor treatment, minimizing harm to healthy tissues, and enhancing overall treatment effectiveness while reducing the risk of side effects. In our investigation, we adopted a methodological approach to monitoring secondary proton doses in carbon therapy using Monte Carlo (MC) simulations. Initially, Geant4 simulations were employed to extract the initial positions of secondary particles generated during interactions between carbon ions and water, including protons, gamma rays, alpha particles, neutrons, and tritons. Subsequently, we explored the relationship between the carbon ion beam and these secondary particles. Interaction vertex imaging (IVI) proves valuable for monitoring dose distribution during carbon therapy, providing information about secondary particle locations and abundances, particularly of protons. The IVI method relies on charged particles produced during ion fragmentation to gather range information by reconstructing particle trajectories back to their point of origin, known as the vertex. In the context of carbon ion therapy, our simulation results indicated a strong correlation between some secondary particles and the range of the carbon ions. However, challenges arose due to the unique elongated geometry of the target, hindering the straightforward transmission of forward-generated protons. Consequently, the limited protons that did emerge predominantly originated from points close to the target entrance. Fragment (proton) trajectories were approximated as straight lines, and a beam back-projection algorithm, utilizing interaction positions recorded in Si detectors, was developed to reconstruct the vertices. The analysis revealed a correlation between the reconstructed and actual positions.
Keywords: radiotherapy, carbon therapy, monitor secondary proton doses, interaction vertex imaging
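As a geometric illustration of the back-projection step the abstract describes (not the authors' reconstruction code), the sketch below approximates a fragment track as a straight line through two Si-detector hits and takes the point of closest approach to the beam axis as the reconstructed vertex.

```python
# Geometry-only sketch: closest approach between a straight fragment track
# (through two detector hits) and the beam axis, taken here as the z-axis.
import numpy as np

def reconstruct_vertex(hit1, hit2, beam_point=np.zeros(3), beam_dir=np.array([0.0, 0.0, 1.0])):
    """Closest-approach vertex between a straight fragment track and the beam axis."""
    p = np.asarray(hit1, dtype=float)
    d = np.asarray(hit2, dtype=float) - p   # track direction from the two detector hits
    b = beam_dir / np.linalg.norm(beam_dir)
    w0 = p - beam_point
    a, bb, c = d @ d, d @ b, b @ b
    dd, e = d @ w0, b @ w0
    denom = a * c - bb * bb                 # ~0 only if the track is parallel to the beam
    t = (bb * e - c * dd) / denom
    s = (a * e - bb * dd) / denom
    on_track = p + t * d
    on_beam = beam_point + s * b
    return 0.5 * (on_track + on_beam)       # reconstructed vertex estimate

# Example: a proton emitted at z = 50 mm on the beam axis, recorded by two detector planes.
vertex_true = np.array([0.0, 0.0, 50.0])
direction = np.array([0.3, 0.1, 1.0])
hits = [vertex_true + 100.0 * direction, vertex_true + 150.0 * direction]
print("reconstructed vertex:", reconstruct_vertex(*hits))
```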
Procedia PDF Downloads 78
568 Characterization of Articular Cartilage Based on the Response of Cartilage Surface to Loading/Unloading
Authors: Z. Arabshahi, I. Afara, A. Oloyede, H. Moody, J. Kashani, T. Klein
Abstract:
Articular cartilage is a fluid-swollen tissue of synovial joints that functions by providing a lubricated surface for articulation and by facilitating load transmission. The biomechanical function of this tissue is highly dependent on the integrity of its ultrastructural matrix. Any alteration of the articular cartilage matrix, whether by injury or by degenerative conditions such as osteoarthritis (OA), compromises its functional behaviour. The assessment of articular cartilage is therefore important in the early stages of the degenerative process to prevent or reduce further joint damage and its associated socio-economic impact, and there has been increasing research interest in the functional assessment of articular cartilage. This study developed a characterization parameter for articular cartilage assessment based on the response of the cartilage surface to loading/unloading. The rationale is that the response of articular cartilage to compressive loading is significantly depth-dependent, with the superficial zone and the underlying matrix responding differently to deformation; in addition, the alteration of the cartilage matrix in the early stages of degeneration is often characterized by proteoglycan (PG) loss in the superficial layer. In this study, it is hypothesized that the response of the superficial layer differs between normal and proteoglycan-depleted tissue. To test this hypothesis, samples of visually intact and artificially proteoglycan-depleted bovine cartilage were subjected to compression at a constant rate to 30 percent strain using a ring-shaped indenter with an integrated ultrasound probe and then unloaded. The response of the articular surface, which was indirectly loaded, was monitored using ultrasound during loading/unloading (deformation/recovery). It was observed that the rate of the cartilage surface response to loading/unloading differed between normal and PG-depleted cartilage samples. Principal component analysis was performed to evaluate the ability of the cartilage surface response to loading/unloading to distinguish between normal and artificially degenerated cartilage samples. The classification analysis of this parameter showed an overlap between normal and degenerated samples during loading, whereas there was a clear distinction between normal and degenerated samples during unloading. This study showed that the cartilage surface response to loading/unloading has the potential to be used as a parameter for cartilage assessment.
Keywords: cartilage integrity parameter, cartilage deformation/recovery, cartilage functional assessment, ultrasound
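For illustration, a minimal sketch of the classification idea: principal component analysis applied to surface-response traces for two groups of samples. The traces and recovery rates are synthetic placeholders and do not encode the study's measured behaviour.

```python
# Synthetic sketch: PCA on surface-response-versus-time traces to see whether
# two groups of samples separate in the first components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)   # normalised recovery time after unloading

def trace(rate):
    """Placeholder exponential surface recovery with added measurement noise."""
    return 1.0 - np.exp(-rate * t) + rng.normal(scale=0.02, size=t.size)

normal = np.array([trace(rate=8.0) for _ in range(10)])     # placeholder group 1
depleted = np.array([trace(rate=3.0) for _ in range(10)])   # placeholder group 2
X = np.vstack([normal, depleted])

scores = PCA(n_components=2).fit_transform(X)
print("first-PC mean, group 1: %.3f" % scores[:10, 0].mean())
print("first-PC mean, group 2: %.3f" % scores[10:, 0].mean())
```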
Procedia PDF Downloads 192
567 Automatic Target Recognition in SAR Images Based on Sparse Representation Technique
Authors: Ahmet Karagoz, Irfan Karagoz
Abstract:
Synthetic Aperture Radar (SAR) is a radar technique that can be integrated into manned and unmanned aerial vehicles to create high-resolution images in all weather conditions, regardless of day or night. In this study, SAR images of military vehicles with different azimuth and descent angles are pre-processed in the first stage. The main purpose here is to reduce the high level of speckle noise found in SAR images. For this, the Wiener adaptive filter, the mean filter, and the median filter are used to reduce the amount of speckle noise in the images without causing loss of data. During the image segmentation phase, pixel values are ordered so that the target vehicle region is separated from regions containing unnecessary information; the target image is thresholded so that the brightest 20% of pixels take the value 255 and the remaining pixels take the value 0. In addition, a segmentation comparison is performed using appropriate parameters of the statistical region merging algorithm. In the feature extraction step, the feature vectors belonging to the vehicles are obtained using a bank of Gabor filters created by varying the orientation, frequency, and angle parameters, so as to extract the important, distinctive parts of the images. Finally, the images are classified by the sparse representation method, using l₁-norm analysis. A joint database of the feature vectors generated from the target images of the military vehicle types is assembled side by side and transformed into matrix form. To classify the vehicles, the test image of each vehicle is converted to vector form, and the l₁-norm analysis of the sparse representation method is applied against the existing database matrix. As a result, correct recognition is achieved by matching the target images of the military vehicles with the test images by means of the sparse representation method, and a 97% classification success rate is obtained for SAR images of different military vehicle types.
Keywords: automatic target recognition, sparse representation, image classification, SAR images
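As an illustration of the l₁-norm sparse representation classification the abstract describes (with synthetic feature vectors standing in for the Gabor features), the sketch below stacks training vectors as dictionary columns, codes a test vector with a Lasso (l₁-penalised) solver, and assigns it to the class with the smallest reconstruction residual.

```python
# Sketch of sparse representation classification with synthetic feature vectors
# in place of the Gabor features; Lasso is used as the convex l1 solver.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n_classes, per_class, dim = 3, 20, 64
prototypes = rng.normal(size=(n_classes, dim))

# Dictionary: columns are normalised training feature vectors, grouped by vehicle class.
cols, labels = [], []
for c in range(n_classes):
    for _ in range(per_class):
        v = prototypes[c] + 0.2 * rng.normal(size=dim)
        cols.append(v / np.linalg.norm(v))
        labels.append(c)
A = np.column_stack(cols)
labels = np.array(labels)

def src_classify(y):
    y = y / np.linalg.norm(y)
    x = Lasso(alpha=0.01, max_iter=10000).fit(A, y).coef_   # sparse code of the test vector
    residuals = [np.linalg.norm(y - A[:, labels == c] @ x[labels == c]) for c in range(n_classes)]
    return int(np.argmin(residuals))                         # class with the smallest residual

test = prototypes[1] + 0.2 * rng.normal(size=dim)
print("predicted class:", src_classify(test))
```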
Procedia PDF Downloads 365