Search results for: solid particle tracers
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3718

298 A Comprehensive Study on Freshwater Aquatic Life Health Quality Assessment Using Physicochemical Parameters and Planktons as Bio Indicator in a Selected Region of Mahaweli River in Kandy District, Sri Lanka

Authors: S. M. D. Y. S. A. Wijayarathna, A. C. A. Jayasundera

Abstract:

Mahaweli River is the longest and largest river in Sri Lanka, and it is the major drinking water source for a large portion of the 2.5 million inhabitants of the Central Province. The aim of this study was to determine the water quality and aquatic life health quality in a selected region of the Mahaweli River. Six sampling locations (Site 1: 7° 16' 50" N, 80° 40' 00" E; Site 2: 7° 16' 34" N, 80° 40' 27" E; Site 3: 7° 16' 15" N, 80° 41' 28" E; Site 4: 7° 14' 06" N, 80° 44' 36" E; Site 5: 7° 14' 18" N, 80° 44' 39" E; Site 6: 7° 13' 32" N, 80° 46' 11" E), subject to various anthropogenic activities on the river bank, were sampled for a period of three months between Tennekumbura Bridge and Victoria Reservoir. Temperature, pH, Electrical Conductivity (EC), Total Dissolved Solids (TDS), Dissolved Oxygen (DO), 5-day Biological Oxygen Demand (BOD5), Total Suspended Solids (TSS), hardness, anion concentrations, and metal concentrations were measured according to standard methods as the physicochemical parameters. Plankton were considered as the biological parameter. Using a plankton net (20 µm mesh size), surface water samples were collected into acid-washed, dried vials and stored in an ice box during transportation. The diversity and abundance of plankton were identified within 4 days of sample collection under the light microscope using standard plankton identification manuals. Almost all the measured physicochemical parameters were within the CEA standard limits for aquatic life, the Sri Lanka Standards (SLS), or the World Health Organization’s guidelines for drinking water. The concentration of orthophosphate ranged from 0.232 to 0.708 mg L-1 and exceeded the CEA standard limit for aquatic life (0.400 mg L-1) at Site 1 and Site 2, where disturbance from cultivation and nearby households is high. According to the Pearson correlation (significant at p < 0.05), some physicochemical parameters (temperature, DO, TDS, TSS, phosphate, sulphate, chloride, fluoride, and sodium) were significantly correlated with the distribution of plankton species such as Aulacoseira, Navicula, Synedra, Pediastrum, Fragilaria, Selenastrum, Oscillatoria, Tribonema and Microcystis. Furthermore, species associated with blooms (Aulacoseira), organic pollution (Navicula), and phosphate-rich eutrophic water (Microcystis) were found, indicating deteriorated water quality in the Mahaweli River due to agricultural activities, solid waste disposal, and the release of domestic effluents. Therefore, it is necessary to improve environmental monitoring and management to control further deterioration of the river's water quality.
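
As a minimal illustration of the correlation screening described above, the sketch below (Python, with invented measurement values) applies the same Pearson test at the p < 0.05 level used in the study; the variable names and numbers are hypothetical.

```python
# Minimal sketch (illustrative data) of the Pearson screening between a
# physicochemical parameter and a plankton abundance, as described above.
from scipy.stats import pearsonr

# hypothetical per-site measurements for Sites 1-6
temperature = [24.1, 25.3, 24.8, 26.0, 25.7, 26.4]   # deg C (assumed)
microcystis = [120, 150, 90, 60, 75, 40]              # cells per mL (assumed)

r, p = pearsonr(temperature, microcystis)
if p < 0.05:                                          # significance level used in the study
    print(f"significant correlation: r = {r:.2f}, p = {p:.3f}")
else:
    print(f"not significant: r = {r:.2f}, p = {p:.3f}")
```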

Keywords: bio indicator, environmental variables, planktons, physicochemical parameters, water quality

Procedia PDF Downloads 106
297 Exploring Valproic Acid (VPA) Analogues Interactions with HDAC8 Involved in VPA Mediated Teratogenicity: A Toxicoinformatics Analysis

Authors: Sakshi Piplani, Ajit Kumar

Abstract:

Valproic acid (VPA) is the first synthetic therapeutic agent used to treat epileptic disorders, which affect nearly 1% of the world population. The teratogenicity caused by VPA has prompted the search for a next-generation drug with better efficacy and fewer side effects. Recent studies have posed HDAC8 as a direct target of VPA responsible for the teratogenic effect in the foetus. We have employed molecular dynamics (MD) and docking simulations to understand the binding mode of VPA and its analogues onto HDAC8. A total of twenty 3D structures of human HDAC8 isoforms were selected using a BLAST-P search against the PDB. Multiple sequence alignment was carried out using ClustalW, and PDB 3F07, having the fewest missing and mutated regions, was selected for the study. The missing residues of the loop region were constructed using MODELLER, and the energy was minimized. A set of 216 structural analogues (>90% identity) of VPA was obtained from the PubChem and ZINC databases, and their energies were optimized with ChemSketch software using a 3D CHARMM-type force field. Four major enzymes involved in anticonvulsant activity (GABAt, SSADH, α-KGDH, GAD) were docked with VPA and its analogues. Out of the 216 analogues, 75 were selected on the basis of lower binding energy and inhibition constant compared to VPA, and were thus predicted to have anticonvulsant activity. The selected hHDAC8 structure was then subjected to MD simulation using a licensed version of YASARA with the AMBER99SB force field. The structure was solvated in a rectangular box of TIP3P water. The simulation was carried out with periodic boundary conditions, and electrostatic interactions were treated with the particle mesh Ewald algorithm. The pH of the system was set to 7.4, the temperature to 323 K, and the pressure to 1 atm. Simulation snapshots were stored every 25 ps. The MD simulation was carried out for 20 ns, and a PDB file of the HDAC8 structure was saved every 2 ns. The structures were analysed using CASTp and UCSF Chimera, and the most stabilized structure (20 ns) was used for the docking study. Molecular docking of the 75 selected VPA analogues with PDB 3F07 was performed using AutoDock 4.2.6. The Lamarckian Genetic Algorithm was used to generate conformations of the docked ligand and structure. The docking study revealed that VPA and its analogues have higher affinity towards the ‘hydrophobic active site channel’, whose hydrophobic character allows VPA and its analogues to take part in van der Waals interactions with TYR24, HIS42, VAL41, TYR20, SER138 and TRP137, while TRP137 and SER138 showed hydrogen bonding interactions with the VPA analogues. Fourteen analogues showed better binding affinity than VPA. The admetSAR server was used to predict the ADMET properties of the selected VPA analogues in order to assess their druggability. On the basis of the ADMET screening, nine molecules were selected and are being used for in-vivo evaluation using a Danio rerio model.
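
The analogue-selection step described above (keeping the 75 of 216 analogues with lower binding energy and inhibition constant than VPA) reduces to a simple filter; a hedged Python sketch with hypothetical docking scores is shown below.

```python
# Illustrative sketch of the analogue-screening step: keep only analogues whose
# docked binding energy and inhibition constant are both lower than those of VPA.
# All names and numbers here are hypothetical, not the study's docking output.
vpa = {"name": "VPA", "dG": -4.2, "Ki": 850.0}    # kcal/mol, uM (assumed reference values)

analogues = [
    {"name": "analogue_001", "dG": -5.1, "Ki": 310.0},
    {"name": "analogue_002", "dG": -3.9, "Ki": 990.0},
    # ... 216 docked analogues in total
]

selected = [a for a in analogues
            if a["dG"] < vpa["dG"] and a["Ki"] < vpa["Ki"]]
print(f"{len(selected)} analogues pass the screen (75 in the reported study)")
```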

Keywords: HDAC8, docking, molecular dynamics simulation, valproic acid

Procedia PDF Downloads 255
296 Bioanalytical Method Development and Validation of Aminophylline in Rat Plasma Using Reverse Phase High Performance Liquid Chromatography: An Application to Preclinical Pharmacokinetics

Authors: S. G. Vasantharaju, Viswanath Guptha, Raghavendra Shetty

Abstract:

Introduction: Aminophylline is a methylxanthine derivative belonging to the bronchodilator class. A literature survey revealed that the reported methods rely on solid-phase extraction and liquid-liquid extraction, which make the analysis highly variable, time-consuming, costly and laborious. The present work aims to develop a simple, highly sensitive, precise and accurate high-performance liquid chromatography method for the quantification of aminophylline in rat plasma samples that can be utilized for preclinical studies. Method: Reverse-phase high-performance liquid chromatography. Results: Selectivity: Aminophylline and the internal standard were well separated from the co-eluted components, and there was no interference from endogenous material at the retention times of the analyte and the internal standard. The LLOQ measurable with acceptable accuracy and precision for the analyte was 0.5 µg/mL. Linearity: The developed and validated method is linear over the range of 0.5-40.0 µg/mL. The coefficient of determination was found to be greater than 0.9967, indicating the linearity of the method. Accuracy and precision: The accuracy and precision values for intra- and inter-day studies at low, medium and high quality control concentrations of aminophylline in plasma were within the acceptable limits. Extraction recovery: The method produced consistent extraction recovery at all three QC levels. The mean extraction recovery of aminophylline was 93.57 ± 1.28%, while that of the internal standard was 90.70 ± 1.30%. Stability: The results show that aminophylline is stable in rat plasma under the studied stability conditions and that it is also stable for about 30 days when stored at -80˚C. Pharmacokinetic studies: The method was successfully applied to the quantitative estimation of aminophylline in rat plasma following its oral administration to rats. Discussion: Preclinical studies require a rapid and sensitive method for estimating the drug concentration in rat plasma. The method described in our article includes a simple protein precipitation extraction technique with ultraviolet detection for quantification. The present method is simple and robust for fast high-throughput sample analysis with low analysis cost for analyzing aminophylline in biological samples. In this proposed method, no interfering peaks were observed at the elution times of aminophylline and the internal standard. The method also had sufficient selectivity, specificity, precision and accuracy over the concentration range of 0.5-40.0 µg/mL. An isocratic separation technique was used, underlining the simplicity of the presented method.
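
Two of the validation figures quoted above, the calibration linearity (R² > 0.9967 over 0.5-40.0 µg/mL) and the percent extraction recovery, come from routine calculations; the Python sketch below illustrates them with invented responses, not the study's raw data.

```python
# Hedged sketch of two routine validation calculations: calibration linearity over
# 0.5-40 ug/mL and percent extraction recovery. Response values are invented.
import numpy as np

conc = np.array([0.5, 1, 5, 10, 20, 40])                          # ug/mL calibration standards
peak_ratio = np.array([0.052, 0.101, 0.49, 1.02, 2.01, 4.05])     # analyte/IS area ratio (assumed)

slope, intercept = np.polyfit(conc, peak_ratio, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((peak_ratio - pred) ** 2) / np.sum((peak_ratio - peak_ratio.mean()) ** 2)
print(f"coefficient of determination R^2 = {r2:.4f}")             # study reports > 0.9967

# extraction recovery: extracted QC response relative to a post-extraction spiked response
recovery = 100 * 0.93 / 1.00                                      # illustrative peak areas
print(f"extraction recovery = {recovery:.1f}%")                   # study reports 93.57 +/- 1.28%
```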

Keywords: aminophylline, preclinical pharmacokinetics, rat plasma, RP-HPLC

Procedia PDF Downloads 223
295 Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

This study is an attempt to obtain reliable data on the natural history of breast cancer growth. We analyze the opportunities for using classical mathematical models (exponential and logistic tumor growth models, Gompertz and von Bertalanffy tumor growth models) to describe the growth of the primary tumor and the secondary distant metastases of human breast cancer. The research aim is to improve the accuracy of predicting breast cancer progression using an original mathematical model referred to as CoMPaS and the corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and the secondary distant metastases; 2) developing an adequate and precise CoMPaS that reflects the relations between the primary tumor and the secondary distant metastases; 3) analyzing the scope of application of CoMPaS; 4) implementing the model as a software tool. The foundation of CoMPaS is the exponential tumor growth model, described by deterministic nonlinear and linear equations. CoMPaS corresponds to the TNM classification. It allows calculation of the different growth periods of the primary tumor and the secondary distant metastases: 1) the ‘non-visible period’ for the primary tumor; 2) the ‘non-visible period’ for the secondary distant metastases; 3) the ‘visible period’ for the secondary distant metastases. CoMPaS is validated on clinical data of 10-year and 15-year survival depending on the tumor stage and the diameter of the primary tumor. The new predictive tool: 1) is a solid foundation for future studies of breast cancer growth models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes a forecast using only current patient data, whereas the others are based on additional statistical data. The CoMPaS model and predictive software: a) fit clinical trials data; b) detect the different growth periods of the primary tumor and the secondary distant metastases; c) forecast the period of appearance of the secondary distant metastases; d) have higher average prediction accuracy than the other tools; e) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. The following are calculated by CoMPaS: the number of doublings for the ‘non-visible’ and ‘visible’ growth periods of the secondary distant metastases, and the tumor volume doubling time (days) for the ‘non-visible’ and ‘visible’ growth periods of the secondary distant metastases. CoMPaS enables, for the first time, prediction of the ‘whole natural history’ of the growth of the primary tumor and the secondary distant metastases at each stage (pT1, pT2, pT3, pT4) relying only on the primary tumor size. Summarizing: a) CoMPaS correctly describes the primary tumor growth of stages IA, IIA, IIB, IIIB (T1-4N0M0) without metastases in lymph nodes (N0); b) it facilitates understanding of the appearance period and inception of the secondary distant metastases.
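
Since CoMPaS is built on the exponential growth model, the doubling arithmetic it reports (number of doublings and tumor volume doubling time) can be illustrated with a short sketch; the diameters and time span below are illustrative, not taken from the clinical data.

```python
# Minimal sketch of the exponential-growth bookkeeping underlying CoMPaS: number of
# volume doublings between two tumor diameters and the corresponding doubling time.
# Diameters and elapsed time are illustrative, not from the study.
import math

def doublings(d_start_mm: float, d_end_mm: float) -> float:
    # volume scales with diameter cubed, so the doubling count is 3*log2(d2/d1)
    return 3 * math.log2(d_end_mm / d_start_mm)

def doubling_time_days(days_elapsed: float, n_doublings: float) -> float:
    return days_elapsed / n_doublings

n = doublings(5.0, 20.0)                          # e.g. growth from 5 mm to 20 mm diameter
print(f"{n:.1f} doublings, DT = {doubling_time_days(365, n):.0f} days")
```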

Keywords: breast cancer, exponential growth model, mathematical model, metastases in lymph nodes, primary tumor, survival

Procedia PDF Downloads 341
294 Combustion Characteristics and Pollutant Emissions in Gasoline/Ethanol Mixed Fuels

Authors: Shin Woo Kim, Eui Ju Lee

Abstract:

The recent development of biofuel production technology facilitates the use of bioethanol and biodiesel in automobiles. Bioethanol, in particular, can be used as a fuel for gasoline vehicles because the addition of ethanol is known to increase the octane number and reduce soot emissions. However, the wide application of biofuel is still limited by a lack of detailed combustion properties, such as the auto-ignition temperature and pollutant emissions such as NOx and soot, which mainly concern vehicle fire safety and environmental safety. In this study, the combustion characteristics of gasoline/ethanol fuel were investigated both numerically and experimentally. For the auto-ignition temperature and NOx emission, numerical simulation was performed on a well-stirred reactor (WSR) to represent a homogeneous gasoline engine and to clarify the effect of ethanol addition to the gasoline fuel. The response surface method (RSM) was also introduced as a design of experiments (DOE), which enables the various combustion properties to be predicted and optimized systematically with respect to three independent variables, i.e., ethanol mole fraction, equivalence ratio and residence time. The results for the stoichiometric gasoline surrogate show that the auto-ignition temperature increases but NOx yields decrease with increasing ethanol mole fraction. This implies that bioethanol-blended gasoline is an eco-friendly fuel under engine running conditions. However, unburned hydrocarbons increase dramatically with increasing ethanol content as a result of incomplete combustion, which therefore needs to be addressed by adjusting the combustion itself rather than by an after-treatment system. RSM analysis with the three independent variables predicts the auto-ignition temperature accurately. However, for NOx emission there was a large difference between the calculated values and the values predicted by conventional RSM, because NOx emission varies very steeply and the fitted second-order polynomial cannot follow such rates. To moderate the steep variation of the dependent variable, NOx emission was transformed to common logarithms and the RSM analysis was repeated. The NOx emission predicted through the logarithm transformation is in fairly good agreement with the experimental results. For a more tangible understanding of gasoline/ethanol fuel and its pollutant emissions, experimental measurements of combustion products were performed in gasoline/ethanol pool fires, which are widely used as fire sources in laboratory-scale experiments. Three measurement methods were introduced to clarify the pollutant emissions: measurement of various gas concentrations including NOx, gravimetric soot filter sampling for elemental analysis and pyrolysis, and thermophoretic soot sampling with transmission electron microscopy (TEM). The soot yield measured by gravimetric sampling decreased dramatically as ethanol was added, but NOx emission was almost comparable regardless of ethanol mole fraction. The morphology of the soot particles was investigated to assess the degree of soot maturity. Incipient soot, such as liquid-like PAHs, was clearly observed in the soot from gasoline with higher ethanol content, whereas the soot from the undiluted gasoline fuel appeared more mature.
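
The logarithm transformation described above amounts to fitting the second-order RSM polynomial to log10(NOx) instead of NOx; a hedged Python sketch with dummy WSR outputs is given below (the variable ranges are illustrative).

```python
# Hedged sketch of the RSM step described above: fit a second-order polynomial in
# ethanol mole fraction, equivalence ratio and residence time to log10(NOx) rather
# than to NOx itself, so the steep response is easier to follow. Data are dummies.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

X = np.array([[0.0, 1.0, 0.1],    # [ethanol fraction, equivalence ratio, residence time (s)]
              [0.2, 1.0, 0.1],
              [0.4, 0.8, 0.5],
              [0.6, 1.2, 1.0],
              [0.8, 1.0, 0.5],
              [1.0, 0.9, 0.1]])
nox_ppm = np.array([1200.0, 900.0, 150.0, 400.0, 60.0, 20.0])   # invented WSR outputs

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, np.log10(nox_ppm))                  # log transform relaxes the steep variation

pred_ppm = 10 ** model.predict([[0.3, 1.0, 0.3]])
print(f"predicted NOx ~ {pred_ppm[0]:.0f} ppm at the queried condition")
```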

Keywords: gasoline/ethanol fuel, NOx, pool fire, soot, well-stirred reactor (WSR)

Procedia PDF Downloads 212
293 Role of Calcination Treatment on the Structural Properties and Photocatalytic Activity of Nanorice N-Doped TiO₂ Catalyst

Authors: Totsaporn Suwannaruang, Kitirote Wantala

Abstract:

The purposes of this research were to synthesize a nitrogen-doped titanium dioxide photocatalyst (N-doped TiO₂) by a hydrothermal method and to test the photocatalytic degradation of paraquat under UV and visible light illumination. The effect of the calcination treatment temperature on the physical and chemical properties and the photocatalytic efficiencies was also investigated. The calcined N-doped TiO₂ photocatalysts were characterized for specific surface area and textural properties by the Brunauer-Emmett-Teller (BET) and Barrett-Joyner-Halenda (BJH) equations, bandgap energy by UV-visible diffuse reflectance spectroscopy (UV-Vis-DRS) using the Kubelka-Munk theory, crystallinity and phase structure by wide-angle X-ray scattering (WAXS), surface morphology by focused ion beam scanning electron microscopy (FIB-SEM), and elemental composition and charge states by X-ray photoelectron spectroscopy (XPS) and X-ray absorption spectroscopy (XAS). The results showed that the effect of calcination temperature was significant for surface morphology, crystallinity, specific surface area, pore diameter, bandgap energy and nitrogen content, but insignificant for the phase structure and the oxidation state of the titanium (Ti) atom. The N-doped TiO₂ samples exhibited only the anatase crystalline phase because the nitrogen dopant in TiO₂ restrained the phase transformation from anatase to rutile. The samples presented a nanorice-like morphology. Particle expansion was found at calcination temperatures of 650 and 700°C, resulting in increased pore diameter. The bandgap energy, determined by the Kubelka-Munk theory, was in the range 3.07-3.18 eV, slightly lower than the anatase standard (3.20 eV), indicating that the nitrogen dopant can shift the optical absorption edge of TiO₂ from the UV to the visible light region. Nitrogen was detected only in the samples treated at 100, 300 and 400°C and disappeared from 500°C onwards. The nitrogen (N) atom can be incorporated into the TiO₂ structure at an interstitial site. The uncalcined (100°C) sample displayed the highest percent paraquat degradation under UV and visible light irradiation because it had both the highest specific surface area and the highest nitrogen content. Moreover, percent paraquat removal significantly decreased with increasing calcination treatment temperature. The nitrogen content in TiO₂, combined with the effect of the specific surface area, accelerated the reaction rate by generating electrons and holes during illumination. Therefore, the specific surface area and the nitrogen content play important roles in the photocatalytic degradation of paraquat under UV and visible light illumination.
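
The bandgap extraction mentioned above follows the standard Kubelka-Munk treatment of diffuse reflectance, F(R) = (1 - R)²/2R, followed by a Tauc-type extrapolation; the Python sketch below illustrates the calculation with invented reflectance values.

```python
# Illustrative sketch of the Kubelka-Munk treatment used to extract bandgap energies
# from diffuse reflectance: F(R) = (1 - R)^2 / (2R), then a Tauc-type extrapolation
# of (F(R)*h*nu)^(1/2) for an indirect-gap anatase material. Reflectances are invented.
import numpy as np

wavelength_nm = np.array([500, 450, 420, 400, 390, 380, 370, 360])
reflectance = np.array([0.92, 0.90, 0.85, 0.70, 0.55, 0.35, 0.20, 0.12])

h_nu_eV = 1239.84 / wavelength_nm                   # photon energy
f_km = (1 - reflectance) ** 2 / (2 * reflectance)   # Kubelka-Munk function
tauc = np.sqrt(f_km * h_nu_eV)                      # (F(R)hv)^(1/2) for an indirect gap

# linear fit of the absorption edge; extrapolate to tauc = 0 to estimate Eg
edge = slice(3, 8)                                  # points on the rising edge (chosen by eye)
m, b = np.polyfit(h_nu_eV[edge], tauc[edge], 1)
print(f"estimated bandgap Eg ~ {-b / m:.2f} eV")    # study reports 3.07-3.18 eV
```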

Keywords: restraining phase transformation, interstitial site, chemical charge state, photocatalysis, paraquat degradation

Procedia PDF Downloads 158
292 Influence of Iron Content in Carbon Nanotubes on the Intensity of Hyperthermia in the Cancer Treatment

Authors: S. Wiak, L. Szymanski, Z. Kolacinski, G. Raniszewski, L. Pietrzak, Z. Staniszewska

Abstract:

The term ‘cancer’ is given to a collection of related diseases that may affect any part of the human body. It is a pathological behaviour of cells in which the processes that control cell proliferation, differentiation, and death break down. Although cancer is commonly considered a modern disease, the drastically growing number of new cases can be linked to greatly prolonged life expectancy and enhanced techniques for cancer diagnosis. Magnetic hyperthermia therapy is a novel approach to cancer treatment, which may greatly contribute to higher efficiency of the therapy. By employing carbon nanotubes as nanocarriers for magnetic particles, it is possible to decrease the toxicity and invasiveness of the treatment through surface functionalisation. Despite appearing only in recent years, magnetic particle hyperthermia has already attracted the highest interest in the scientific and medical environment. The reason why hyperthermia therapy brings so much hope for the future treatment of cancer lies in the effect that it produces in malignant cells. Subjecting them to thermal shock results in the activation of numerous degradation processes inside and outside the cell. The heating process initiates mechanisms of DNA destruction, protein denaturation and induction of cell apoptosis, which may lead to tumour shrinkage and, in some cases, even complete disappearance of the cancer. The factors that have the major impact on the final efficiency of the treatment include the temperatures generated inside the tissues, the time of exposure to the heating process, and the character of the individual cancer cell type. The vast majority of cancer cells are characterised by lower pH, persistent hypoxia and lack of nutrients, which can be associated with abnormal microvasculature. Since these conditions are not present in healthy tissues, healthy tissues should not be seriously affected by the elevation of temperature. The aim of this work is to investigate the influence of the iron content of iron-filled carbon nanotubes on the nanoparticles desired for cancer therapy. In the article, the development and demonstration of the method and a model device for hyperthermic selective destruction of cancer cells are presented. This method is based on the synthesis and functionalization of carbon nanotubes serving as nanocontainers for ferromagnetic material. The methodology for producing carbon ferromagnetic nanocontainers (FNCs) includes the synthesis of carbon nanotubes, chemical and physical characterization, increasing the content of ferromagnetic material, and biochemical functionalization involving the attachment of the key addresses. The ferromagnetic nanocontainers were synthesised in CVD and microwave plasma systems. The research work was financed from the science budget as research project No. PBS2/A5/31/2013.

Keywords: hyperthermia, carbon nanotubes, cancer colon cells, radio frequency field

Procedia PDF Downloads 123
291 Evaluation of an Integrated Supersonic System for Inertial Extraction of CO₂ in Post-Combustion Streams of Fossil Fuel Operating Power Plants

Authors: Zarina Chokparova, Ighor Uzhinsky

Abstract:

Carbon dioxide emissions resulting from the burning of fossil fuels on large scales, such as in the oil industry or power plants, lead to a number of severe implications, including global temperature rise, air pollution and other adverse impacts on the environment. Besides some precarious and costly ways of alleviating the detriment of CO₂ emissions at industrial scales (such as liquefaction of CO₂ and its deep-water treatment, or the application of adsorbents and membranes, which require careful consideration of their drawbacks and mitigation), one physically and commercially available technology for its capture and disposal is a supersonic system for inertial extraction of CO₂ from post-combustion streams. Because the flue gas emitted from the combustion system has a carbon dioxide concentration of only 10-15 volume percent, the waste stream is rather dilute and at low pressure. The supersonic system expands the flue gas mixture through a converging-diverging nozzle; the flow velocity increases to the supersonic range, resulting in a rapid drop of temperature and pressure. The conversion of potential energy into kinetic energy thus causes desublimation of CO₂. The solidified carbon dioxide can be sent to a separate vessel for further disposal. The major advantages of the current solution are its economic efficiency, physical stability and compactness, as well as the fact that no chemical media need to be added. However, several challenges have yet to be addressed to optimize the system: increasing the size of the separated CO₂ particles (whose effective diameters are on a micrometer scale), reducing the concomitant gas separated together with the carbon dioxide, and ensuring the purity of the CO₂ downstream flow. Moreover, determination of the thermodynamic conditions of the vapor-solid mixture, including specification of a valid and accurate equation of state, remains an essential goal. Owing to the high speeds and temperatures reached during the process, the influence of the emitted heat should be considered, and the applicable solution model for the compressible flow needs to be determined. In this report, a brief overview of the current technology status is presented, and a program for further evaluation of this approach is proposed.
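
The temperature and pressure drop that drives the desublimation can be illustrated with the textbook one-dimensional isentropic relations; the sketch below uses assumed flue-gas properties and is not the authors' flow model.

```python
# A rough, illustrative calculation (ideal-gas, 1-D isentropic flow) of the static
# temperature and pressure drop across a converging-diverging nozzle, which is what
# drives CO2 desublimation in the system described above. Gas properties are assumed.
gamma = 1.33                    # assumed specific-heat ratio for hot flue gas
T0, p0 = 320.0, 101325.0        # assumed stagnation temperature (K) and pressure (Pa)

def static_conditions(mach: float):
    factor = 1.0 + 0.5 * (gamma - 1.0) * mach**2
    T = T0 / factor
    p = p0 / factor ** (gamma / (gamma - 1.0))
    return T, p

for M in (1.0, 2.0, 3.0):
    T, p = static_conditions(M)
    print(f"M = {M:.0f}: T = {T:5.1f} K, p = {p/1000:6.1f} kPa")
```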

Keywords: CO₂ sequestration, converging diverging nozzle, fossil fuel power plant emissions, inertial CO₂ extraction, supersonic post-combustion carbon dioxide capture

Procedia PDF Downloads 141
290 Liquefaction Phenomenon in the Kathmandu Valley during the 2015 Earthquake of Nepal

Authors: Kalpana Adhikari, Mandip Subedi, Keshab Sharma, Indra P. Acharya

Abstract:

The Gorkha, Nepal earthquake of moment magnitude (Mw) 7.8 struck the central region of Nepal on April 25, 2015, with the epicenter about 77 km northwest of the Kathmandu Valley. The peak ground acceleration observed during the earthquake was 0.18g. This motion induced several geotechnical effects such as landslides, foundation failures, liquefaction, lateral spreading and settlement, and local amplification. An aftershock of moment magnitude (Mw) 7.3 hit northeast of Kathmandu on May 12, 17 days after the main shock, and caused additional damage. Kathmandu, the largest city in Nepal, has a population of over four million. As the Kathmandu Valley deposits are composed mainly of sand, silt and clay layers with a shallow ground water table, liquefaction is highly anticipated. Extensive liquefaction was also observed in the Kathmandu Valley during the 1934 Nepal-Bihar earthquake. Field investigations were carried out in the Kathmandu Valley immediately after the Mw 7.8 April 25 main shock and the Mw 7.3 May 12 aftershock. Geotechnical investigations of both liquefied and non-liquefied sites were conducted after the earthquake. This paper presents observations of liquefaction and liquefaction-induced damage, and the liquefaction potential assessment based on Standard Penetration Tests (SPT) for liquefied and non-liquefied sites. An SPT-based semi-empirical approach has been used for evaluating the liquefaction potential of the soil, and the Liquefaction Potential Index (LPI) has been used to determine the liquefaction probability. Recorded ground motions from the event are presented. The geological setting of the Kathmandu Valley and the local site effect on the occurrence of liquefaction are described briefly, as are the observed liquefaction case studies. Typically, these are sand boils formed by freshly ejected sand forced out of over-pressurized sub-strata. At most sites, sand was ejected onto agricultural fields, forming deposits that varied from millimetres to a few centimetres thick. Liquefaction-induced damage to structures in these areas was not significant, except that buildings in some places tilted slightly. Boiled soils at liquefied sites were collected, and the particle size distributions of the ejected soils were analyzed. SPT blow counts and the soil profiles at ten liquefied and non-liquefied sites were obtained. The factors of safety against liquefaction with depth and the liquefaction potential index of the ten sites were estimated and compared with the liquefaction observed after the 2015 Gorkha earthquake. The liquefaction potential indices obtained from the analysis were found to be consistent with the field observations. The field observations, along with the results from the liquefaction assessment, were compared with the existing liquefaction hazard map. It was found that the existing hazard maps are unrepresentative and underestimate the liquefaction susceptibility in the Kathmandu Valley. The lessons learned from the liquefaction during this earthquake are also summarized in this paper, and some recommendations are made for seismic liquefaction mitigation in the Kathmandu Valley.
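
The LPI evaluation mentioned above combines factor-of-safety values over depth with a depth weighting; the Python sketch below shows an Iwasaki-type calculation with an invented FS profile (the study derived FS from the SPT blow counts).

```python
# Hedged sketch of how a Liquefaction Potential Index (LPI) is assembled from
# factor-of-safety values with depth (Iwasaki-type weighting over the top 20 m).
# The FS profile below is invented, not one of the study's ten site profiles.
import numpy as np

depth_m = np.array([1.5, 3.0, 4.5, 6.0, 7.5, 9.0, 12.0, 15.0, 18.0])
fs = np.array([0.8, 0.7, 0.9, 1.1, 0.95, 1.2, 1.4, 1.6, 1.9])   # factor of safety vs. liquefaction

F = np.where(fs < 1.0, 1.0 - fs, 0.0)        # severity term, zero where FS >= 1
w = 10.0 - 0.5 * depth_m                     # linear depth weighting, zero at 20 m

y = F * w
lpi = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(depth_m)))   # trapezoidal integration over depth
print(f"LPI = {lpi:.1f}")                     # larger values indicate higher liquefaction potential
```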

Keywords: factor of safety, geotechnical investigation, liquefaction, Nepal earthquake

Procedia PDF Downloads 324
289 Analysis of Ancient and Present Lightning Protection Systems of Large Heritage Stupas in Sri Lanka

Authors: J.R.S.S. Kumara, M.A.R.M. Fernando, S. Venkatesh, D.K. Jayaratne

Abstract:

Protection of heritage monuments against lightning has become extremely important as far as their historical value is concerned. When such structures are large and tall, the risk of lightning initiated from both cloud and ground can be high. This paper presents a lightning risk analysis of three giant stupas of the Anuradhapura era (fourth century BC onwards) in Sri Lanka. The three stupas are Jethawaaramaya (269-296 AD), Abayagiriya (88-76 BC) and Ruwanweliseya (161-137 BC), the third, fifth and seventh largest ancient structures in the world. These stupas are solid brick structures consisting of a base, a near-hemispherical dome and a conical spire on the top. The hypothesis considered for their original lightning protection technique is that the ancient stupas were constructed with a dielectric crystal on the top connected to the ground through a conducting material. At present, however, all three stupas are protected with Franklin-rod type air termination systems located on top of the spire. First, a risk analysis was carried out according to IEC 62305 by considering the isokeraunic level of the area and the height of the stupas. Then the standard protective angle method and the rolling sphere method were used to locate the possible touching points on the surface of the stupas. The study was extended to estimate the critical current that could strike the unprotected areas of the stupas. The equations proposed by Uman (2001) and Cooray (2007) were used to find the striking distances. A modified version of the rolling sphere method was also applied to examine the effects of upward leaders. All these studies were carried out for two scenarios: with the original (i.e., ancient) lightning protection system and with the present (i.e., new) air termination system. The field distribution on the surface of the stupa in the presence of a downward leader was obtained using the finite-element-based commercial software COMSOL Multiphysics for further investigation of the lightning risks. The obtained results were analyzed and compared with each other to evaluate the performance of the ancient and new lightning protection methods and to identify suitable methods for designing lightning protection systems for stupas. According to the IEC standards, all three stupas, with either the new or the ancient lightning protection system, have Level IV protection as per the protection angle method; however, according to the rolling sphere method applied with Uman's equation, the protection level is III. The same method applied with Cooray's equation always indicates a higher risk than Uman's equation. It was found that there is a risk of lightning strikes on the dome and the square chamber of the stupa, and the corresponding critical current values differed depending on the equations used in the rolling sphere method and the modified rolling sphere method.
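
The link between rolling sphere radius and critical current can be illustrated with the common electrogeometric relation r = 10·I^0.65; the sketch below uses it only as a stand-in for the Uman and Cooray expressions applied in the study, together with the IEC 62305 sphere radii.

```python
# Illustrative sketch of the electrogeometric reasoning behind the rolling sphere
# method: a sphere radius (striking distance) is tied to a prospective peak current,
# so an exposed point reachable by a sphere of radius r corresponds to a critical
# current. The r = 10*I^0.65 form below is the common IEC/IEEE relation, used here
# only as a stand-in for the Uman/Cooray expressions applied in the study.
def striking_distance_m(peak_current_kA: float) -> float:
    return 10.0 * peak_current_kA ** 0.65

def critical_current_kA(radius_m: float) -> float:
    # invert r = 10*I^0.65 to find the smallest current that can reach a point
    return (radius_m / 10.0) ** (1.0 / 0.65)

for level, r in (("I", 20), ("II", 30), ("III", 45), ("IV", 60)):   # IEC 62305 sphere radii
    print(f"protection level {level}: r = {r} m -> minimum current ~ {critical_current_kA(r):.1f} kA")
```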

Keywords: Stupa, heritage, lightning protection, rolling sphere method, protection level

Procedia PDF Downloads 255
288 The Role of High-Intensity Focused Ultrasound (HIFU) in the Treatment of Fibroadenomas: A Systematic Review

Authors: Ahmed Gonnah, Omar Masoud, Mohamed Abdel-Wahab, Ahmed ElMosalamy, Abdulrahman Al-Naseem

Abstract:

Introduction: Fibroadenomas are solid, mobile, and non-tender benign breast lumps, with the highest prevalence amongst young women aged between 15 and 35. Symptoms can include discomfort, and fibroadenomas can become problematic, particularly when they enlarge, resulting in many referrals for biopsies, of which fibroadenomas account for 30-75% of cases. Diagnosis is based on triple assessment involving clinical examination, ultrasound imaging and mammography, as well as core needle biopsies. Current management includes observation for 6-12 months, with definitive surgery indicated in cases older than 35 years or with fibroadenoma persistence. Serious adverse effects of surgery may include nipple-areolar distortion, scarring and damage to the breast tissue, as well as the risks associated with surgery and anesthesia, making it a less feasible option. Methods: A literature search was performed on the databases EMBASE, MEDLINE/PubMed, Google Scholar and Ovid for English-language papers published between 1 January 2000 and 17 March 2021. A structured protocol was employed to devise a comprehensive search strategy, with keywords and Boolean operators defined by the research question. The keywords used for the search were ‘HIFU’, ‘High-Intensity Focused Ultrasound’, ‘Fibroadenoma’, ‘Breast’, ‘Lesion’. This review was carried out in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Results: Recently, a thermal ablative technique, High-Intensity Focused Ultrasound (HIFU), was found to be a safe, non-invasive, and technically successful alternative, having displayed promising outcomes in reducing the volume of fibroadenomas, the pain experienced by patients, and the length of hospitalization. Improvement in quality of life was also evidenced by the disappearance of symptoms and enhanced physical activity post-intervention, in addition to patients' satisfaction with the cosmetic results and their willingness to recommend the procedure to other patients. Conclusion: Overall, HIFU is a well-tolerated treatment associated with a low risk of complications, which can potentially include erythema, skin discoloration and bruising, the majority of which self-resolve shortly after the procedure.

Keywords: ultrasound, HIFU, breast, efficacy, side effects, fibroadenoma

Procedia PDF Downloads 227
287 Spatio-Temporal Variation of Gaseous Pollutants and the Contribution of Particulate Matters in Chao Phraya River Basin, Thailand

Authors: Samart Porncharoen, Nisa Pakvilai

Abstract:

The elevated levels of air pollutants in regional atmospheric environments are a significant problem affecting human health in Thailand, particularly in the Chao Phraya River Basin. Of concern are issues surrounding ambient air pollution, such as particulate matter and gaseous pollutants, and more specifically air pollution along the river. Therefore, a spatio-temporal study of air pollution in this real environment can provide more accurate air quality data for making formalized environmental policy in river basins. In order to inform such a policy, a study was conducted over the period January-December 2015 to continually collect measurements of various pollutants in both urban and regional locations in the Chao Phraya River Basin. This study investigated the air pollutants in diverse environments along the Chao Phraya River Basin, Thailand, in 2015. Multivariate analysis techniques such as Principal Component Analysis (PCA) and path analysis were utilised to classify air pollution at the surveyed locations. Measurements were collected in both urban and rural areas to see if significant differences existed between the two locations in terms of air pollution levels. The meteorological parameters and various particulate readings were collected continually from a Thai Pollution Control Department monitoring station over the period January-December 2015. Of interest to this study were the readings of SO2, CO, NOx, O3, and PM10. Results showed daily arithmetic mean concentrations of SO2, CO, NOx, O3 and PM10 of 3±1 ppb, 0.5±0.5 ppm, 30±21 ppb, 19±16 ppb, and 40±20 ug/m3 at the urban location (Bangkok). During the same period, the readings for the same measurements in the rural area (Ayutthaya) were 1±0.5 ppb, 0.1±0.05 ppm, 25±17 ppb, 30±21 ppb, and 35±10 ug/m3, respectively. This shows that the Bangkok sites were located in highly polluted environments dominated by sources emitted from vehicles. Further, the results were analysed to ascertain whether significant seasonal variation existed in the measurements. It was found that the levels of both gaseous pollutants and particulate matter in the dry season were higher than in the wet season. More broadly, the results show that pollutant levels were highest at locations along the Chao Phraya River Basin known to have a large number of vehicles and biomass burning. This correlation suggests that the principal pollutants came from these anthropogenic sources. This study contributes to the body of knowledge surrounding ambient air pollution, such as particulate matter and gaseous pollutants, and more specifically air pollution along the Chao Phraya River Basin. Further, this study is one of the first to utilise continuous mobile monitoring along a river in order to obtain accurate measurements during the data collection period. Overall, the results of this study can be used for making formalized environmental policy in river basins in order to reduce the physical effects on human health.
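
A minimal sketch of the PCA classification step is given below (Python, dummy daily readings for SO2, CO, NOx, O3 and PM10); it shows the mechanics only and does not reproduce the study's loadings.

```python
# Minimal sketch (dummy numbers) of the PCA step used above to group the measured
# species (SO2, CO, NOx, O3, PM10) into a few components that separate traffic- and
# burning-dominated conditions from cleaner ones.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# rows: daily observations; columns: SO2 (ppb), CO (ppm), NOx (ppb), O3 (ppb), PM10 (ug/m3)
X = np.array([[3, 0.5, 30, 19, 40],
              [4, 0.6, 45, 15, 55],
              [1, 0.1, 25, 30, 35],
              [2, 0.2, 20, 28, 30],
              [5, 0.8, 60, 12, 70],
              [1, 0.1, 18, 32, 28]], dtype=float)

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
print("component loadings:\n", np.round(pca.components_, 2))
```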

Keywords: air pollution, Chao Phraya river basin, meteorology, seasonal variation, principal component analysis

Procedia PDF Downloads 286
286 Analysis of Overall Thermo-Elastic Properties of Random Particulate Nanocomposites with Various Interphase Models

Authors: Lidiia Nazarenko, Henryk Stolarski, Holm Altenbach

Abstract:

In the paper, a (hierarchical) approach to the analysis of the thermo-elastic properties of random composites with interphases is outlined and illustrated. It is based on a statistical homogenization method – the method of conditional moments – combined with the recently introduced notion of the energy-equivalent inhomogeneity, which, in this paper, is extended to include thermal effects. After an exposition of the general principles, the approach is applied to the investigation of the effective thermo-elastic properties of a material with randomly distributed nanoparticles. The basic idea of the equivalent inhomogeneity is to replace the inhomogeneity and the interphase surrounding it by a single equivalent inhomogeneity of constant stiffness tensor and coefficient of thermal expansion, combining the thermal and elastic properties of both. The equivalent inhomogeneity is then perfectly bonded to the matrix, which allows one to analyze composites with interphases using techniques devised for problems without interphases. From the mechanical viewpoint, the definition of the equivalent inhomogeneity is based on Hill's energy equivalence principle, applied to the problem consisting only of the original inhomogeneity and its interphase. It is more general than the definitions proposed in the past in that, conceptually and practically, it allows consideration of inhomogeneities of various shapes and various models of interphases. This is illustrated by considering spherical particles with two models of interphases, the Gurtin-Murdoch material surface model and the spring layer model. The resulting equivalent inhomogeneities are subsequently used to determine the effective thermo-elastic properties of randomly distributed particulate composites. The effective stiffness tensor and coefficient of thermal expansion of the material with the so-defined equivalent inhomogeneities are determined by the method of conditional moments. Closed-form expressions for the effective thermo-elastic parameters of a composite consisting of a matrix and randomly distributed spherical inhomogeneities are derived for the bulk and shear moduli as well as for the coefficient of thermal expansion. The dependence of the effective parameters on the interphase properties is included in the resulting expressions, exhibiting analytically the nature of the size effects in nanomaterials. As a numerical example, an epoxy matrix with randomly distributed spherical glass particles is investigated. The dependence of the effective bulk and shear moduli, as well as of the effective thermal expansion coefficient, on the particle volume fraction (for different radii of nanoparticles) and on the radius of the nanoparticle (for a fixed volume fraction of nanoparticles) for the different interphase models is compared with and discussed in the context of other theoretical predictions. Possible applications of the proposed approach to short-fiber composites with various types of interphases are discussed.

Keywords: effective properties, energy equivalence, Gurtin-Murdoch surface model, interphase, random composites, spherical equivalent inhomogeneity, spring layer model

Procedia PDF Downloads 186
285 Experimental Investigation of the Thermal Conductivity of Neodymium and Samarium Melts by a Laser Flash Technique

Authors: Igor V. Savchenko, Dmitrii A. Samoshkin

Abstract:

The active study of the properties of lanthanides began in the late 1950s, when methods for their purification were developed and metals with a relatively low content of impurities were obtained. Nevertheless, to date, many properties of the rare earth metals (REM) have not been experimentally investigated or have been insufficiently studied. Currently, the thermal conductivity and thermal diffusivity of lanthanides have been studied most thoroughly in the low-temperature region and at moderate temperatures (near 293 K). In the high-temperature region corresponding to the solid phase, data on the thermophysical characteristics of the REM are fragmentary and in some cases contradictory. An analysis of the literature showed that the data on the thermal conductivity and thermal diffusivity of light REM in the liquid state are few in number, not very informative (only one point corresponds to the liquid-state region) and contradictory (the nature of the thermal conductivity change with temperature is not reproduced), and the measurement results diverge significantly beyond the limits of the total errors. Our experimental results therefore fill this gap and clarify the existing information on the heat transfer coefficients of neodymium and samarium in a wide temperature range from the melting point up to 1770 K. The thermal conductivity of the investigated metallic melts was measured by the laser flash technique on an automated experimental setup, LFA-427. A neodymium sample of grade NM-1 (99.21 wt % purity) and a samarium sample of grade SmM-1 (99.94 wt % purity) were cut from metal ingots and then annealed in a vacuum (1 mPa) at a temperature of 1400 K for 3 hours. Measuring cells of a special design made from tantalum were used for the experiments. Sealing of the cell with the sample inside it was carried out by argon-arc welding in the protective atmosphere of a glovebox. The glovebox was filled with argon of 99.998 vol. % purity; the argon was additionally purified by continuously running it through titanium sponge heated to 900–1000 K. The overall systematic error in determining the thermal conductivity of the investigated metallic melts was 2–5%. Approximation dependences and reference tables of the thermal conductivity and thermal diffusivity coefficients were developed. New reliable experimental data on the transport properties of the REM and their changes at phase transitions can serve as a scientific basis for optimizing the industrial processes of production and use of these materials, and are also of interest for the theory of the thermophysical properties of substances, the physics of metals and liquids, and phase transformations.
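
The laser flash evaluation follows the standard Parker relation, alpha = 0.1388·L²/t½, with thermal conductivity then obtained from density and heat capacity; the sketch below uses placeholder inputs, not the measured Nd/Sm values.

```python
# Hedged sketch of the standard laser-flash evaluation (Parker's relation): thermal
# diffusivity from the sample thickness and the half-rise time of the rear-face
# temperature signal, then thermal conductivity via density and heat capacity.
# Input values are placeholders, not the measured Nd/Sm data.
def thermal_diffusivity_m2s(thickness_m: float, t_half_s: float) -> float:
    return 0.1388 * thickness_m**2 / t_half_s

def thermal_conductivity_WmK(alpha_m2s: float, density_kgm3: float, cp_JkgK: float) -> float:
    return alpha_m2s * density_kgm3 * cp_JkgK

alpha = thermal_diffusivity_m2s(2.0e-3, 0.045)          # 2 mm sample, 45 ms half-rise (assumed)
k = thermal_conductivity_WmK(alpha, 6900.0, 210.0)      # placeholder density and heat capacity
print(f"alpha = {alpha*1e6:.2f} mm^2/s, k = {k:.1f} W/(m K)")
```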

Keywords: high temperatures, laser flash technique, liquid state, metallic melt, rare earth metals, thermal conductivity, thermal diffusivity

Procedia PDF Downloads 201
284 Synthesis, Structure and Spectroscopic Properties of Oxo-centered Carboxylate-Bridged Triiron Complexes and a Deca Ferric Wheel

Authors: K. V. Ramanaiah, R. Jagan, N. N. Murthy

Abstract:

Trinuclear oxo-centered carboxylate-bridged iron complexes, [Fe3(µ3-O)(µ2-O2CR)6L3]+/0 (where R = alkyl or aryl; L = H2O, ROH, Py, solvent), have attracted tremendous attention because of their interesting structural and magnetic properties; they exhibit mixed-valent trapped and de-trapped states and have bioinorganic relevance. The presence of a trinuclear iron binding center has been implicated in the formation of both the bacterial and the human iron storage protein, Ft. They are used as precursors for the synthesis of models for the active-site structures of the non-heme proteins hemerythrin (Hr) and methane monooxygenase (MMO) and the polyiron storage protein ferritin (Ft). They are also used as important building blocks for the design and synthesis of supramolecules that can exhibit single-molecule magnetism (SMM). Such studies have often employed simple and compact carboxylate ligands, and the use of bulky carboxylates is scarce. In the present study, we employed two different types of sterically hindered carboxylates and synthesized a series of novel oxo-centered, carboxylate-bridged triiron complexes of general formula [Fe3(O)(O2CCPh3)6L3]X (L = H2O, 1; py, 2; 4-NMe2py, 3; X = ClO4; L = CH3CN, 4; X = FeCl4) and [Fe3(O)(O2C-anth)6L3]X (L = H2O, 5; X = ClO4; L = CH3OH, 6; X = Cl). In addition, complex [Fe(OMe)2(O2CCPh3)]10, 7, was prepared by the self-assembly of anhydrous FeCl3, sodium triphenylacetate and sodium methoxide in a 1:1:2 ratio in CH3OH. The electronic absorption spectra of complexes 1-6 in CH2Cl2 display weak bands in the near-IR region (970-1135 nm, ε > 15 M-1cm-1). Complex 7 shows one broad band centered at ~670 nm, and additional intense charge-transfer (L→M or O→M) bands between 300 and 550 nm are observed for all the complexes. Paramagnetic 1H NMR is introduced as a good probe for the characterization of trinuclear oxo-centered iron compounds in solution when the ligand L coordinated to iron varies as H2O, py, 4-NMe2py, and CH3OH. Solution-state magnetic moment values were calculated using the Evans method for all the complexes, and the solid-state magnetic moment value of complex 7 was determined by the VSM method, which is comparable with the solution-state value. All these magnetic moment values indicate a spin-exchange process through the oxo and carboxylate bridges between the iron (d5) centers. The ESI-mass data complement the data obtained from the single-crystal X-ray structures. Further, the purity of the compounds was confirmed by elemental analysis. Finally, the structures of complexes 1, 3, 4, 5, 6 and 7 were unambiguously confirmed by single-crystal X-ray studies.

Keywords: decanuclear, paramagnetic NMR, trinuclear, uv-visible

Procedia PDF Downloads 348
283 Maternal Exposure to Bisphenol A and Its Association with Birth Outcomes

Authors: Yi-Ting Chen, Yu-Fang Huang, Pei-Wei Wang, Hai-Wei Liang, Chun-Hao Lai, Mei-Lien Chen

Abstract:

Background: Bisphenol A (BPA) is commonly used in consumer products, such as the inner coatings of cans and polycarbonate bottles. BPA is considered to be an endocrine disrupting substance (ED) that affects normal human hormones and may cause adverse effects on human health. Pregnant women and fetuses are groups susceptible to endocrine disrupting substances. Prenatal exposure to BPA has been shown to affect the fetus through the placenta. Therefore, it is important to evaluate the potential health risk of fetal exposure to BPA during pregnancy. The aims of this study were (1) to determine the urinary concentration of BPA in pregnant women, and (2) to investigate the association between BPA exposure during pregnancy and birth outcomes. Methods: This study recruited 117 pregnant women and their fetuses from 2012 to 2014 from the Taiwan Maternal-Infant Cohort Study (TMICS). Maternal urine samples were collected in the third trimester, and questionnaires were used to collect the socio-demographic characteristics, eating habits and medical conditions of the participants. Information about the birth outcomes of the fetus was obtained from medical records. For the chemical analysis, BPA concentrations in urine were determined by off-line solid-phase extraction-ultra-performance liquid chromatography coupled with a Q-Tof mass spectrometer. The urinary concentrations were adjusted with creatinine. The association between maternal concentrations of BPA and birth outcomes was estimated using a logistic regression model. Results: The detection rate of BPA was 99%, and the concentrations ranged from 0.16 to 46.90 μg/g. The mean (SD) BPA level was 5.37 (6.42) μg/g creatinine. The mean ± SD of the body weight, body length, head circumference, chest circumference and gestational age at birth were 3105.18 ± 339.53 g, 49.33 ± 1.90 cm, 34.16 ± 1.06 cm, 32.34 ± 1.37 cm and 38.58 ± 1.37 weeks, respectively. After stratifying the exposure levels into two groups by the median, pregnant women in the higher exposure group had an increased risk of lower newborn body weight (OR=0.57, 95%CI=0.271-1.193), smaller chest circumference (OR=0.70, 95%CI=0.335-1.47) and shorter gestational age at birth (OR=0.46, 95%CI=0.191-1.114). However, none of the associations between BPA concentration and birth outcomes reached statistical significance (p < 0.05). Conclusions: This study presents prenatal BPA profiles of pregnant women and infants in northern Taiwan. Women with higher BPA concentrations tend to give birth to newborns with lower body weight, smaller chest circumference or shorter gestational age at birth. More data will be included to verify the results. This report will also present the predictors of BPA concentrations for pregnant women.

Keywords: bisphenol A, birth outcomes, biomonitoring, prenatal exposure

Procedia PDF Downloads 144
282 Utilizing Artificial Intelligence to Predict Post Operative Atrial Fibrillation in Non-Cardiac Transplant

Authors: Alexander Heckman, Rohan Goswami, Zachi Attia, Paul Friedman, Peter Noseworthy, Demilade Adedinsewo, Pablo Moreno-Franco, Rickey Carter, Tathagat Narula

Abstract:

Background: Postoperative atrial fibrillation (POAF) is associated with adverse health consequences, higher costs, and longer hospital stays. Utilizing existing predictive models that rely on clinical variables and circulating biomarkers, multiple societies have published recommendations on the treatment and prevention of POAF. Although reasonably practical, there is room for improvement and automation to help individualize treatment strategies and reduce associated complications. Methods and Results: In this retrospective cohort study of solid organ transplant recipients, we evaluated the diagnostic utility of a previously developed AI-based ECG prediction for silent AF on the development of POAF within 30 days of transplant. A total of 2261 non-cardiac transplant patients without a preexisting diagnosis of AF were found to have a 5.8% (133/2261) incidence of POAF. While there were no apparent sex differences in POAF incidence (5.8% males vs. 6.0% females, p=.80), there were differences by race and ethnicity (p<0.001 and 0.035, respectively). The incidence in white transplanted patients was 7.2% (117/1628), whereas the incidence in black patients was 1.4% (6/430). Lung transplant recipients had the highest incidence of postoperative AF (17.4%, 37/213), followed by liver (5.6%, 56/1002) and kidney (3.6%, 32/895) recipients. The AUROC in the sample was 0.62 (95% CI: 0.58-0.67). The relatively low discrimination may result from undiagnosed AF in the sample. In particular, 1,177 patients had at least one pre-transplant AI-ECG screen for AF above 0.10, a value slightly higher than the published threshold of 0.08. The incidence of POAF in the 1104 patients without an elevated prediction pre-transplant was lower (3.7% vs. 8.0%; p<0.001). While this supported the hypothesis that potentially undiagnosed AF may have contributed to the diagnosis of POAF, the utility of the existing AI-ECG screening algorithm remained modest. When the prediction for POAF was made using the first postoperative ECG in the sample without an elevated screen pre-transplant (n=1084 on account of n=20 missing postoperative ECGs), the AUROC was 0.66 (95% CI: 0.57-0.75). While this discrimination is relatively low, at a threshold of 0.08 the AI-ECG algorithm had a 98% (95% CI: 97-99%) negative predictive value at a sensitivity of 66% (95% CI: 49-80%). Conclusions: This study's principal finding is that POAF is rare and that a considerable fraction of the POAF cases may be latent and undiagnosed. The high negative predictive value of AI-ECG screening suggests utility for prioritizing monitoring and evaluation in transplant patients with a positive AI-ECG screen. Further development and refinement of a post-transplant-specific algorithm may be warranted to further enhance the diagnostic yield of the ECG-based screening.
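
The screening arithmetic behind the quoted 98% negative predictive value and 66% sensitivity is shown in the sketch below with illustrative confusion-matrix counts (not the study cohort's actual counts).

```python
# Worked sketch of the screening arithmetic quoted above: negative predictive value
# and sensitivity at a fixed AI-ECG threshold. Counts are illustrative, chosen only
# to show the calculation, not taken from the study cohort.
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def negative_predictive_value(tn: int, fn: int) -> float:
    return tn / (tn + fn)

# hypothetical 2x2 confusion counts at threshold 0.08
tp, fn = 26, 14        # POAF cases flagged / missed by the screen
tn, fp = 700, 344      # non-POAF patients below / above the threshold

print(f"sensitivity = {sensitivity(tp, fn):.0%}")          # ~66% reported in the study
print(f"NPV = {negative_predictive_value(tn, fn):.0%}")    # ~98% reported in the study
```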

Keywords: artificial intelligence, atrial fibrillation, cardiology, transplant, medicine, ECG, machine learning

Procedia PDF Downloads 137
281 Survey of Indoor Radon/Thoron Concentrations in High Lung Cancer Incidence Area in India

Authors: Zoliana Bawitlung, P. C. Rohmingliana, L. Z. Chhangte, Remlal Siama, Hming Chungnunga, Vanram Lawma, L. Hnamte, B. K. Sahoo, B. K. Sapra, J. Malsawma

Abstract:

Mizoram state has the highest lung cancer incidence rate in India due to its high consumption of tobacco and tobacco products, compounded by local food habits. While smoking is mainly responsible for this incidence, the effect of inhalation of indoor radon gas cannot be discounted, as the hazardous nature of this radioactive gas and its progeny for human populations has been well established worldwide; radiation damage to bronchial cells can make radon the second leading cause of lung cancer next to smoking. It is also known that the effect of radiation, however small the concentration, cannot be neglected, as it can bring about a risk of cancer incidence. Hence, estimation of the indoor radon concentration is important to give a useful reference for radiation effects, to establish safety measures, and to create a baseline for further case-control studies. The indoor radon/thoron concentrations in Mizoram were measured during 2015-2016 in 41 dwellings selected on the basis of spot gamma background radiation and the construction type of the houses. The dwellings were monitored for one year, in 4-month cycles to capture seasonal variations, for the indoor concentration of radon gas and its progeny, the outdoor gamma dose, and the indoor gamma dose. A time-integrated method using Solid State Nuclear Track Detector (SSNTD) based single-entry pin-hole dosimeters was used for the measurement of the indoor radon/thoron concentration. Gamma dose measurements indoors as well as outdoors were carried out using Geiger-Muller survey meters. The seasonal variation of the indoor radon/thoron concentration was monitored. The results show that the annual average radon concentration varied from 54.07 to 144.72 Bq/m³ with an average of 90.20 Bq/m³, and the annual average thoron concentration varied from 17.39 to 54.19 Bq/m³ with an average of 35.91 Bq/m³, both below the permissible limits. The spot survey of the gamma background radiation level varied between 9 and 24 µR/h inside and outside the dwellings throughout Mizoram, which is within acceptable limits. From the above results, there is no direct indication that radon/thoron is responsible for the high lung cancer incidence in the area. In order to find epidemiological evidence linking natural radiation to the high cancer incidence in the area, a case-control study would be needed, which is beyond the present scope. However, the derived measurement data will provide baseline data for further studies.

Keywords: background gamma radiation, indoor radon/thoron, lung cancer, seasonal variation

Procedia PDF Downloads 144
280 An Introduction to the Radiation-Thrust Based on Alpha Decay and Spontaneous Fission

Authors: Shiyi He, Yan Xia, Xiaoping Ouyang, Liang Chen, Zhongbing Zhang, Jinlu Ruan

Abstract:

As key spacecraft systems, various propulsion systems have been developing rapidly, including ion thrusters, laser propulsion, solar sails and other micro-thrusters. However, these systems still have some shortcomings. The ion thruster requires a high-voltage or magnetic field for acceleration, resulting in extra subsystems, heavy mass and large volume. Laser propulsion is currently mostly ground-based and provides pulsed thrust, constrained by the distribution of stations and the capacity of the laser. The thrust direction of a solar sail is limited by its position relative to the Sun, so it is hard to propel toward the Sun or to adjust in shadow. In this paper, a novel nuclear thruster based on alpha decay and spontaneous fission is proposed, and the principle of this radiation thrust with alpha particles is expounded. Radioactive materials with different released energies, such as 210Po with 5.4 MeV and 238Pu with 5.29 MeV, attached to a metal film will provide thrusts in the range 0.02-5 uN/cm2. With this reaction force, radiation is able to serve as a power source. With the advantages of low system mass, high accuracy and long active time, the radiation thrust is promising in the fields of space debris removal, orbit control of nano-satellite arrays and deep space exploration. For further study, a formula relating the amplitude and direction of the thrust to the released energy and the decay coefficient is set up. With this initial formula, the alpha-emitting elements with half-lives longer than one hundred days are calculated and listed. As the alpha particles are emitted continuously, the residual charge in the metal film grows and affects the energy distribution of the emitted alpha particles. With residual charge or an extra electromagnetic field, the emission of alpha particles behaves differently, and this is analyzed in this paper. Furthermore, three more complex situations are discussed: a radiation element generating alpha particles with several energies at different intensities, a mixture of various radiation elements, and cascaded alpha decay. Combining these makes it more efficient and flexible to adjust the thrust amplitude. The propulsion model for spontaneous fission is similar to that of alpha decay, with a more complex angular distribution. A new quasi-sphere space propulsion system based on the radiation thrust is introduced, as well as the system for collecting and processing the excess charge and reaction heat. The energy and spatial angular distributions of the emitted alpha particles on a unit area and for a certain propulsion system have been studied. As the alpha particles easily lose energy and self-absorb, the distribution is not a simple stacking of each nuclide. With changes of the amplitude and angle of the radiation thrust, an orbital variation strategy for space debris removal is shown and optimized.
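
A back-of-the-envelope version of the thrust estimate is sketched below: each alpha particle of energy E carries momentum p = sqrt(2mE), and a thin film emitting isotropically into the outward hemisphere transfers on average p/2 per decay along the normal; the surface activity is an assumed input, and this is not the authors' exact formula.

```python
# Back-of-the-envelope sketch of the thrust arithmetic: each alpha of kinetic energy E
# carries momentum p = sqrt(2*m*E); for a thin film emitting isotropically into the
# outward hemisphere, the mean normal momentum per decay is p/2, so thrust per area
# ~ (surface activity) * p / 2. The surface activity is an assumed input.
import math

M_ALPHA_KG = 6.644e-27
MEV_TO_J = 1.602e-13

def alpha_momentum(E_MeV: float) -> float:
    return math.sqrt(2.0 * M_ALPHA_KG * E_MeV * MEV_TO_J)

def thrust_per_area_N_m2(E_MeV: float, decays_per_s_per_m2: float) -> float:
    # half the momentum flux leaves through the free face, on average along the normal
    return 0.5 * decays_per_s_per_m2 * alpha_momentum(E_MeV)

a_s = 1.0e16                                   # decays/(s*m^2), assumed surface activity
for name, E in (("Po-210", 5.40), ("Pu-238", 5.29)):
    f = thrust_per_area_N_m2(E, a_s) * 1e2     # convert N/m^2 to uN/cm^2
    print(f"{name} ({E} MeV): ~{f:.3f} uN/cm^2")   # within the 0.02-5 uN/cm^2 range quoted above
```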

Keywords: alpha decay, angular distribution, emitting energy, orbital variation, radiation-thruster

Procedia PDF Downloads 209
278 Analysis of the Key Indicators of Sustainable Tourism: A Case Study in Lagoa da Confusão/TO/Brazil

Authors: Veruska C. Dutra, Lucio F.M. Adorno, Mary L. G. S. Senna

Abstract:

Since the importance of planning sustainable tourism was recognized, effective methods of monitoring tourism have been discussed. In this sense, indicators can convey a body of information about complex processes, events, or trends, standing out as an important monitoring tool and an aid in environmental assessment, helping to identify progress and to chart future actions, and thus contributing to decision making. The World Tourism Organization (WTO) recognizes the importance of indicators for appraising tourism activity from the point of view of sustainability, having launched in 1995 eleven Key Indicators of Sustainable Tourism to assist in the monitoring of tourist destinations. We therefore propose a case study to examine the applicability (or otherwise) of a monitoring methodology and to aid the understanding of tourism sustainability, analyzing the effectiveness of local indicators under the approach defined by the WTO. The study was applied to the city of Lagoa da Confusão, in the state of Tocantins, North Brazil. The case study was carried out in 2006/2007, guided by the deductive method. The indicators were measured using specific methodologies adapted to the study site, so that they could generate quantitative results that could be analyzed on the scale proposed by the WTO (0 to 10 points); a sketch of this rescaling is given below. The indicators applied were: Attractive Protection - AP (level of protection of natural and cultural attractions), Sociocultural Impact - SI (level of socio-cultural impacts), Waste Management - WM (level of management of the solid waste generated), Planning Process - PP (level of tourism planning), Tourist Satisfaction - TS (satisfaction with the tourist experience), Community Satisfaction - CS (satisfaction of the local community with the development of local tourism), and Tourism Contribution to the Local Economy - TCLE (level of tourism's contribution to the local economy). The city of Lagoa da Confusão proved an important object of study for the methodology in question, as it offered the conditions to analyze the indicators and the complexities that arose during the research. The data collected can support discussions on the sustainability of tourism in the destination. The TS, CS, WM, PP, and AP indicators proved satisfactory, as their measurement "translated" the reality under study, unlike the TCLE and SI indicators, which were not found to be reliable and clear and should be reviewed and discussed before being adapted and replicated. Applying and studying several sustainable tourism indicators gives a better ability to analyze the local tourism situation than monitoring only one indicator, which does not reflect all the collected data and could result in a superficial analysis of the tourist destination.
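As a minimal sketch of the rescaling step referred to above, the snippet maps a raw indicator measurement onto a 0-10 scale given worst-case and best-case bounds adopted for that indicator. This is not the authors' scoring method; all indicator bounds and sample values are illustrative placeholders.

```python
# Minimal sketch, not the authors' scoring method: map a raw indicator
# measurement onto a 0-10 scale given the worst and best values adopted for
# that indicator. All bounds and sample values below are illustrative.

def score_0_10(value, worst, best):
    """Linearly rescale a raw measurement to the 0-10 WTO-style scale."""
    score = 10.0 * (value - worst) / (best - worst)
    return max(0.0, min(10.0, score))      # clamp to the scale

# Hypothetical raw measurements: (value, worst-case bound, best-case bound)
raw = {
    "TS (tourist satisfaction, % satisfied)":   (82.0, 0.0, 100.0),
    "CS (community satisfaction, % satisfied)": (64.0, 0.0, 100.0),
    "WM (waste properly managed, %)":           (45.0, 0.0, 100.0),
}

for name, (value, worst, best) in raw.items():
    print(f"{name}: {score_0_10(value, worst, best):.1f} / 10")
```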

Keywords: indicators, Lagoa da Confusão, Tocantins, Brazil, monitoring, sustainability

Procedia PDF Downloads 401
278 Artificial Intelligence-Aided Extended Kalman Filter for Magnetometer-Based Orbit Determination

Authors: Gilberto Goracci, Fabio Curti

Abstract:

This work presents a robust, light, and inexpensive algorithm to perform autonomous orbit determination using onboard magnetometer data in real-time. Magnetometers are low-cost and reliable sensors typically available on a spacecraft for attitude determination purposes, thus representing an interesting choice to perform real-time orbit determination without the need to add additional sensors to the spacecraft itself. Magnetic field measurements can be exploited by Extended/Unscented Kalman Filters (EKF/UKF) for orbit determination purposes to make up for GPS outages, yielding errors of a few kilometers and tens of meters per second in the position and velocity of a spacecraft, respectively. While this level of accuracy shows that Kalman filtering represents a solid baseline for autonomous orbit determination, it is not enough to provide a reliable state estimation in the absence of GPS signals. This work combines the solidity and reliability of the EKF with the versatility of a Recurrent Neural Network (RNN) architecture to further increase the precision of the state estimation. Deep learning models, in fact, can grasp nonlinear relations between the inputs, in this case the magnetometer data and the EKF state estimations, and the targets, namely the true position and velocity of the spacecraft. The model has been pre-trained on Sun-Synchronous orbits (SSO) up to 2126 kilometers of altitude with different initial conditions and levels of noise to cover a wide range of possible real-case scenarios. The orbits have been propagated considering J2-level dynamics, and the geomagnetic field has been modeled using the International Geomagnetic Reference Field (IGRF) coefficients up to the 13th order. The training of the module can be completed offline using the expected orbit of the spacecraft to heavily reduce the onboard computational burden. Once the spacecraft is launched, the model can use the GPS signal, if available, to fine-tune the parameters on the actual orbit onboard in real-time and work autonomously during GPS outages. In this way, the provided module shows versatility, as it can be applied to any mission operating in SSO, while at the same time the training is completed, and eventually fine-tuned, on the specific orbit, increasing performance and reliability. The results provided by this study show an increase of one order of magnitude in the precision of the state estimate with respect to the use of the EKF alone. Tests on simulated and real data will be shown.
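To make the hybrid EKF-plus-RNN idea concrete, here is a toy sketch in which a recurrent network ingests sequences of magnetometer measurements concatenated with EKF position/velocity estimates and regresses a correction to the EKF state. The GRU architecture, layer sizes, and dummy data are assumptions made for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

# Toy sketch of the hybrid estimator described in the abstract (architecture
# sizes and data below are assumptions, not the authors' configuration):
# a recurrent network ingests sequences of [magnetometer measurement, EKF
# position/velocity estimate] and regresses a correction to the EKF state.

class EKFCorrector(nn.Module):
    def __init__(self, meas_dim=3, state_dim=6, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(meas_dim + state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)   # predicted state correction

    def forward(self, mag_seq, ekf_seq):
        x = torch.cat([mag_seq, ekf_seq], dim=-1)  # (batch, time, meas+state)
        h, _ = self.rnn(x)
        return self.head(h)                        # correction at every step

# Dummy shapes: batch of 8 tracks, 200 time steps, 3-axis magnetometer, 6D state.
mag = torch.randn(8, 200, 3)
ekf = torch.randn(8, 200, 6)
model = EKFCorrector()
corrected = ekf + model(mag, ekf)                  # refined position/velocity
print(corrected.shape)                             # torch.Size([8, 200, 6])

# Offline, the network would be trained with e.g. nn.MSELoss() against the
# true propagated states; onboard, it refines the EKF output during GPS outages.
```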

Keywords: artificial intelligence, extended Kalman filter, orbit determination, magnetic field

Procedia PDF Downloads 105
277 Baseline Study of Water Quality in Indonesia Using Dynamic Methods and Technologies

Authors: R. L. P. de Lima, F. C. B. Boogaard, D. Setyo Rini, P. Arisandi, R. E. de Graaf-Van Dinther

Abstract:

Water quality in many Asian countries is very poor due to inefficient solid waste management, high population growth, and the lack of sewage and purification systems for households and industry. A consortium of Indonesian and Dutch organizations has begun a large-scale international research project to evaluate and propose solutions to the surface water pollution challenges in the Brantas Basin, Indonesia (East Java: Malang / Surabaya). The first phase of the project consisted of a baseline study to assess the current status of surface water bodies and to determine the ambitions and strategies of local stakeholders. This study was conducted with strongly participatory, collaborative, and knowledge-sharing objectives. Several methods, such as mobile sensors (attached to boats or underwater drones), test strips and mobile apps, bio-monitoring (sediments), ecology scans using underwater cameras, and continuous / static measurements, were applied at different locations in the regions of the basin and at multiple points within the water systems (e.g. spring, upstream / downstream of industry and urban areas, mouth of the Surabaya River, groundwater). The results gave an indication of (reference) values of basic water quality parameters such as turbidity, electrical conductivity, dissolved oxygen, and nutrients (ammonium / nitrate). An important outcome was that random grab samples may not be representative of a water body, given that water quality parameters can vary widely in space (x, y, and depth) and time (day / night and seasonal). Innovative, dynamic monitoring methods (e.g. underwater drones, sensors on boats) can contribute to a better understanding of the quality of the living environment (water, ecology, sediment) and the factors that affect it. The fieldwork activities, in particular the underwater drones, showed potential as awareness actions, as they attracted interest from locals and the local press. This baseline study involved cooperation between local managing organizations and Dutch partners, and their willingness to work together is important to ensure participatory actions and social awareness regarding the process of adapting and strengthening regulations, and for the construction of facilities such as sewage systems.

Keywords: water quality monitoring, pollution, underwater drones, social awareness

Procedia PDF Downloads 192
276 Anaerobic Co-Digestion of Pressmud with Bagasse and Animal Waste for Biogas Production Potential

Authors: Samita Sondhi, Sachin Kumar, Chirag Chopra

Abstract:

The increase in population has resulted in excessive feedstock production, which has in turn led to the accumulation of large amounts of waste from different sources such as crop residues, industrial waste, and municipal solid waste. This situation has made waste disposal a pressing problem. The parallel problem of depleting fossil fuel resources has motivated the development of alternative energy sources from the waste of different industries, addressing the two issues concurrently. Biogas is a carbon-neutral fuel with applications in transportation, heating, and power generation. India is a nation with an agriculture-based economy, and agro-residues are a significant source of organic waste. The sugarcane industry, the second largest agro-based industry, produces a large quantity of sugar along with waste byproducts such as bagasse, press mud, vinasse, and wastewater. Currently, no efficient disposal methods have been adopted at large scale. From a waste-management perspective, anaerobic digestion can be considered as a method to treat organic wastes. Press mud is a lignocellulosic biomass and is not suitable for mono-digestion because of its complexity. Prior investigations indicated that it has potential for biogas production, but because of its biological and elemental complexity, mono-digestion was not successful. Owing to its imbalanced C/N ratio and its wax content, press mud should be utilized together with another fibrous material so that it can be digested properly under suitable conditions. In a first batch of mono-digestion of press mud, biogas production was low. Co-digestion of press mud with bagasse, which has the desired C/N ratio, will therefore be performed to optimize the mixing ratio for maximum biogas from press mud; a sketch of the mixing-ratio calculation is given below. In addition, with respect to sustainability, the main considerations are the economic value of the product and environmental concerns. The work is designed so that the waste from the sugar industry is digested for maximum biogas generation, and the digestate remaining after digestion is characterized for use as a bio-fertilizer for soil conditioning. Given the effectiveness demonstrated by the studied mono-digestion and co-digestion setups, this approach can be considered a viable alternative for lignocellulosic waste disposal and for agricultural applications. Biogas produced from press mud can be used either for power generation or for transportation. In addition, the work initiated on waste disposal for energy production will demonstrate the balanced economic sustainability of the process development.
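As a minimal sketch of the mixing-ratio calculation referred to above, the snippet solves for the dry-mass ratio of bagasse to press mud that brings the blend to a target C/N ratio, assuming carbon and nitrogen mix linearly on a dry-mass basis. The carbon and nitrogen contents and the target C/N are illustrative placeholders, not the study's measured values.

```python
# Minimal sketch (illustrative values, not the study's measurements): find the
# dry-mass ratio of bagasse to press mud that brings the blend to a target C/N
# ratio, assuming carbon and nitrogen mix linearly on a dry-mass basis.

def bagasse_per_pressmud(c_pm, n_pm, c_bg, n_bg, target_cn):
    """kg of bagasse (dry) per kg of press mud (dry) for the target C/N ratio."""
    # (c_pm + x*c_bg) / (n_pm + x*n_bg) = target_cn  ->  solve for x
    x = (target_cn * n_pm - c_pm) / (c_bg - target_cn * n_bg)
    if x < 0:
        raise ValueError("target C/N not reachable with these two substrates")
    return x

# Hypothetical compositions (% of dry mass): press mud is N-rich, bagasse C-rich.
x = bagasse_per_pressmud(c_pm=35.0, n_pm=2.0,   # press mud: C/N ~ 17.5
                         c_bg=45.0, n_bg=0.4,   # bagasse:  C/N ~ 112
                         target_cn=28.0)        # typical AD target range 20-30
print(f"mix ~ {x:.2f} kg bagasse per kg press mud (dry basis)")
```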

Keywords: anaerobic digestion, carbon neutral fuel, press mud, lignocellulosic biomass

Procedia PDF Downloads 170
275 Numerical Simulation of Seismic Process Accompanying the Formation of Shear-Type Fault Zone in Chuya-Kuray Depressions

Authors: Mikhail O. Eremin

Abstract:

Seismic activity around the world is clearly a threat to people's lives, as well as to infrastructure and capital construction. It is the instability of the latter under powerful earthquakes that most often causes human casualties. Therefore, the risks of large-scale natural disasters must be taken into account during construction. The task of assessing the risks of natural disasters is one of the most urgent at the present time. The final goal of any study of earthquakes is forecasting. This is especially important for seismically active regions of the planet where earthquakes occur frequently, and Gorny Altai is one such region. In this work, we developed a physical-mathematical model of the evolution of the stress-strain state of a loaded geomedium in order to numerically simulate the seismic process accompanying the formation of the Chuya-Kuray fault zone, Gorny Altai, Russia. We built a structural model on the basis of seismotectonic and paleoseismogeological investigations, as well as SRTM data. The mathematical model is based on the system of equations of solid mechanics, which includes the fundamental conservation laws and constitutive equations for elastic deformation (Hooke's law) and inelastic deformation (a modified Drucker-Prager-Nikolaevskii model). The initial stress state of the model corresponds to the gravitational one. We then simulate the activation of a buried dextral strike-slip paleo-fault located in the basement of the model and obtain the stages of formation and the structure of the Chuya-Kuray fault zone. It is shown that the results of the numerical simulation are in good agreement with field observations in a statistical sense. The simulated seismic process is strongly bound to the faults, i.e., lineaments with a high degree of inelastic strain localization. The fault zone forms an en-echelon system of dextral strike-slips, in accordance with the Riedel model. The system of surface lineaments is represented by R- and R'-shear bands, X- and Y-shears, and T-fractures. The simulated seismic process obeys the Gutenberg-Richter and Omori laws; thus, the model describes the self-similar character of deformation and fracture of rocks and geomedia. We also modified the algorithm for identifying separate slip events in the model to account for the features of the strain-rate dependence on time.
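As a minimal sketch of one of the checks mentioned above, the snippet estimates the Gutenberg-Richter b-value of an event catalogue with the maximum-likelihood (Aki, 1965) estimator b = log10(e) / (mean(M) - Mc). The catalogue here is synthetic; in the paper the events come from the numerical model itself.

```python
import numpy as np

# Minimal sketch of one check mentioned in the abstract: estimate the
# Gutenberg-Richter b-value of an event catalogue with the maximum-likelihood
# (Aki, 1965) estimator b = log10(e) / (mean(M) - Mc). The catalogue below is
# synthetic; in the paper the events come from the numerical model itself.

rng = np.random.default_rng(0)

def aki_b_value(magnitudes, mc):
    """Maximum-likelihood b-value for magnitudes at or above completeness mc."""
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - mc)

# Synthetic Gutenberg-Richter catalogue with b = 1.0: above the completeness
# magnitude Mc, magnitudes are exponentially distributed with mean log10(e)/b.
b_true, mc = 1.0, 2.0
mags = mc + rng.exponential(scale=np.log10(np.e) / b_true, size=5000)

print(f"estimated b-value: {aki_b_value(mags, mc):.2f} (true value {b_true})")
```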

Keywords: Drucker-Prager model, fault zone, numerical simulation, Riedel bands, seismic process, strike-slip fault

Procedia PDF Downloads 141
274 Static Charge Control Plan for High-Density Electronics Centers

Authors: Clara Oliver, Oibar Martinez, Jose Miguel Miranda

Abstract:

Ensuring a safe environment for sensitive electronics boards in spaces with severe size limitations poses two major difficulties: controlling charge accumulation in floating floors and preventing excess charge generation due to air cooling flows. In this paper, we discuss these mechanisms and possible solutions to prevent them. An experiment was performed in the control room of a Cherenkov Telescope, where six racks of 2x1x1 m size with independent cooling units are located. The room is 10x4x2.5 m, and the electronics include high-speed digitizers, trigger circuits, etc. The floor used in this room was antistatic, but it was a raised floor mounted in a floating design to facilitate cable handling and maintenance. The tests were made by measuring the contact voltage acquired by a person walking across the room wearing footwear of different qualities. In addition, we measured the voltage accumulated on a person in other situations, such as running or sitting down on and getting up from an office chair. The voltages were acquired in real time with an electrostatic voltmeter and dedicated control software. Peak voltages as high as 5 kV were measured at ambient humidity above 30%, which is within the range of class 3A according to the HBM standard. To complete the results, we performed the same experiment in different spaces with alternative floor types, such as a synthetic floor and an earthenware floor, obtaining peak voltages much lower than those measured on the floating synthetic floor. The grounding quality achieved with this kind of floor can hardly beat that typically encountered in standard floors glued directly onto a solid substrate. On the other hand, the air ventilation used to prevent overheating of the boards probably contributed significantly to the charge accumulated in the room. When assessing the quality of static charge control, it is necessary to guarantee that the tests are made under repeatable conditions. One of the major difficulties encountered during these assessments is the fact that electrostatic voltmeters might provide different values depending on the humidity conditions and the quality of the ground resistance. In addition, the use of certified antistatic footwear might mask deficiencies in the charge control. In this paper, we show how we defined protocols to guarantee that electrostatic readings are reliable. We believe that this can be helpful not only for qualifying static charge control in a laboratory but also for assessing any procedure aimed at minimizing the risk of electrostatic discharge events.
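As a minimal sketch of the classification used above (5 kV falling into class 3A), the snippet places a measured peak body voltage into an HBM class. The thresholds are the commonly cited ANSI/ESDA/JEDEC JS-001 levels, stated here as an assumption to be checked against the edition of the standard actually used.

```python
# Minimal sketch of the bookkeeping used in the abstract: place a measured peak
# body voltage into an HBM classification level. The thresholds below are the
# commonly cited ANSI/ESDA/JEDEC JS-001 levels and should be checked against
# the edition of the standard actually used.

HBM_CLASSES = [          # (class label, lower bound in volts, upper bound in volts)
    ("0",  0,     250),
    ("1A", 250,   500),
    ("1B", 500,   1000),
    ("1C", 1000,  2000),
    ("2",  2000,  4000),
    ("3A", 4000,  8000),
    ("3B", 8000,  float("inf")),
]

def hbm_class(peak_voltage_v):
    """Return the HBM class whose range contains the measured peak voltage."""
    for label, lo, hi in HBM_CLASSES:
        if lo <= peak_voltage_v < hi:
            return label
    raise ValueError("voltage must be non-negative")

# Example from the abstract: ~5 kV peaks measured on the floating floor.
for v in (1200, 3500, 5000):
    print(f"{v:5d} V -> HBM class {hbm_class(v)}")
```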

Keywords: electrostatics, ESD protocols, HBM, static charge control

Procedia PDF Downloads 131
273 Optimization and Evaluation of Different Pathways to Produce Biofuel from Biomass

Authors: Xiang Zheng, Zhaoping Zhong

Abstract:

In this study, Aspen Plus was used to simulate the whole process of biomass conversion to liquid fuel via different routes, and the main results for material and energy flows were obtained. Process optimization and evaluation were carried out for four routes: cellulosic biomass pyrolysis-gasification to low-carbon olefin synthesis and olefin oligomerization, biomass hydrothermal depolymerization and polymerization to jet fuel, biomass fermentation to ethanol, and biomass pyrolysis to liquid fuel. The environmental impacts of three biomass feedstocks (poplar wood, corn stover, and rice husk) were compared for the gasification-synthesis pathway. The global warming potential, acidification potential, and eutrophication potential of the three biomasses followed the same order: rice husk > poplar wood > corn stover. For human health hazard potential and solid waste potential, the order was poplar wood > rice husk > corn stover. In the poplar pathway, 100 kg of poplar biomass was input to obtain 11.9 kg of aviation kerosene fraction and 6.3 kg of gasoline fraction; the energy conversion rate of the system was 31.6% when the output product energy included only the aviation kerosene product. In the base case of the hydrothermal depolymerization process, 14.41 kg of aviation kerosene was produced per 100 kg of biomass. The energy conversion rate of the base case was 33.09%, which could be increased to 38.47% through the optimal utilization of lignin gasification and steam reforming for hydrogen production. The total exergy efficiency of the system increased from 30.48% to 34.43% after optimization, and the exergy loss came mainly from the concentration of the dilute precursor solution. The global warming potential among the environmental impacts is affected mostly by the production process. Poplar wood was used as the raw material for the process of ethanol production from cellulosic biomass. The simulation results showed that 827.4 kg of pretreatment mixture, 450.6 kg of fermentation broth, and 24.8 kg of ethanol were produced per 100 kg of biomass. The energy output of boiler combustion reached 94.1 MJ, the unit power consumption in the process was 174.9 MJ, and the energy conversion rate was 33.5%. The environmental impact was concentrated mainly in the production and agricultural processes. Building on the original biomass pyrolysis-to-liquid-fuel route, the enzymatic hydrolysis lignin residue produced during cellulosic ethanol fermentation was used as the pyrolysis feedstock, coupling the fermentation and pyrolysis processes. In the coupled process, 24.8 kg of ethanol and 4.78 kg of upgraded liquid fuel were produced per 100 kg of biomass, with an energy conversion rate of 35.13%.
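As a minimal sketch of the bookkeeping behind an "energy conversion rate" as used above, the snippet divides the energy content of the fuel products by the energy content of the biomass fed. The lower heating values are illustrative assumptions, so the result is not expected to reproduce the Aspen Plus figures, which come from full mass and energy balances.

```python
# Minimal sketch of the bookkeeping behind an "energy conversion rate":
# (sum of product masses x their lower heating values) divided by the energy
# content of the biomass fed. The LHV values below are illustrative
# assumptions; the paper's 31.6% figure comes from full Aspen Plus mass and
# energy balances and is not expected to be reproduced here.

LHV = {                      # lower heating values, MJ/kg (assumed)
    "poplar (dry)": 18.0,
    "jet fraction": 43.0,
    "gasoline fraction": 44.0,
}

def energy_conversion_rate(feed_kg, feed_lhv, products):
    """products: list of (mass_kg, lhv_MJ_per_kg) for the fuels counted."""
    out = sum(m * lhv for m, lhv in products)
    return out / (feed_kg * feed_lhv)

# Counting only the aviation kerosene fraction, as in the abstract's 31.6% case:
eta_jet = energy_conversion_rate(
    feed_kg=100.0, feed_lhv=LHV["poplar (dry)"],
    products=[(11.9, LHV["jet fraction"])])

# Counting aviation kerosene and gasoline fractions together:
eta_all = energy_conversion_rate(
    feed_kg=100.0, feed_lhv=LHV["poplar (dry)"],
    products=[(11.9, LHV["jet fraction"]), (6.3, LHV["gasoline fraction"])])

print(f"jet only: {eta_jet:.1%}   jet + gasoline: {eta_all:.1%}")
```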

Keywords: biomass conversion, biofuel, process optimization, life cycle assessment

Procedia PDF Downloads 70
272 Mechanical and Material Characterization on the High Nitrogen Supersaturated Tool Steels for Die-Technology

Authors: Tatsuhiko Aizawa, Hiroshi Morita

Abstract:

Tool steels such as SKD11 and SKH51 have been utilized as punch and die substrates for cold stamping, forging, and fine blanking processes. Heat-treated SKD11 punches with a hardness of 700 HV performed well in the stamping of SPCC and normal steel plates, and of non-ferrous alloys such as brass sheet. However, they suffered severe damage in the fine blanking of holes smaller than 1.5 mm in diameter. With a high aspect ratio of punch length to diameter, elastoplastic buckling of slender punches occurred on the production line, and the heat-treated punches also risked chipping at their edges. To be free from these types of damage, the blanking punch must have sufficient rigidity and strength at the same time. In the present paper, a small-hole blanking punch with a dual-toughness structure is proposed as a solution to this engineering issue in production. A low-temperature plasma nitriding process was utilized to form a thick nitrogen-supersaturated layer on the original SKD11 punch. Through plasma nitriding at 673 K for 14.4 ks, a nitrogen-supersaturated layer with a thickness of 50 μm and without nitride precipitates was formed as a high nitrogen steel (HNS) layer surrounding the original SKD11 punch. In this two-zone structured SKD11 punch, the surface hardness increased from 700 HV for the heat-treated SKD11 to 1400 HV. This outer high-nitrogen SKD11 (HN-SKD11) layer had a homogeneous nitrogen solute depth profile, with a nitrogen solute content plateau of 4 mass% down to the border between the outer HN-SKD11 layer and the original SKD11 matrix. When stamping brass sheet with a thickness of 1 mm using this dually toughened SKD11 punch, the punch life was extended from 500 k shots to 10,000 k shots, yielding a much more stable production line for the brass American snaps. Furthermore, with the aid of a masking technique, the punch side-surface layer with a thickness of 50 μm was modified by this high nitrogen supersaturation process into a stripe structure in which un-nitrided SKD11 and HN-SKD11 layers were aligned alternately from the punch head to the punch bottom. This flexible structuring promoted the mechanical integrity, combining overall rigidity and toughness, required of a punch with an extremely small diameter.

Keywords: high nitrogen supersaturation, semi-dry cold stamping, solid solution hardening, tool steel dies, low temperature nitriding, dual toughness structure, extremely small diameter punch

Procedia PDF Downloads 89
271 DTI Connectome Changes in the Acute Phase of Aneurysmal Subarachnoid Hemorrhage Improve Outcome Classification

Authors: Sarah E. Nelson, Casey Weiner, Alexander Sigmon, Jun Hua, Haris I. Sair, Jose I. Suarez, Robert D. Stevens

Abstract:

Graph-theoretical information from structural connectomes indicated significant connectivity changes and improved acute prognostication in a Random Forest (RF) model of aneurysmal subarachnoid hemorrhage (aSAH), a condition that can lead to significant morbidity and mortality and for which outcome prediction has traditionally been poor. This study's hypothesis was that structural connectivity changes occur in canonical brain networks of acute aSAH patients and that these changes are associated with functional outcome at six months. In a prospective cohort of patients admitted to a single institution for management of acute aSAH, patients underwent diffusion tensor imaging (DTI) as part of a multimodal MRI scan. A weighted undirected structural connectome was created from each patient's images using Constant Solid Angle (CSA) tractography, with 176 regions of interest (ROIs) defined by the Johns Hopkins Eve atlas. ROIs were sorted into four networks: Default Mode Network, Executive Control Network, Salience Network, and Whole Brain. The resulting nodes and edges were characterized using graph-theoretic features, including Node Strength (NS), Betweenness Centrality (BC), Network Degree (ND), and Connectedness (C). Clinical features (including demographics and the World Federation of Neurologic Surgeons scale) and graph features were used separately and in combination to train RF and Logistic Regression classifiers to predict two outcomes: dichotomized modified Rankin Score (mRS) at discharge and at six months after discharge (favorable outcome mRS 0-2, unfavorable outcome mRS 3-6). A total of 56 aSAH patients underwent DTI a median of 7 days (IQR = 8.5) after admission. The best performing model (RF), combining clinical and DTI graph features, had a mean Area Under the Receiver Operating Characteristic Curve (AUROC) of 0.88 ± 0.00 and Area Under the Precision-Recall Curve (AUPRC) of 0.95 ± 0.00 over 500 trials. The combined model performed better than the clinical model alone (AUROC 0.81 ± 0.01, AUPRC 0.91 ± 0.00). The highest-ranked graph features for prediction were NS, BC, and ND. These results indicate reorganization of the connectome early after aSAH, and the performance of clinical prognostic models was increased significantly by the inclusion of DTI-derived graph connectivity metrics. This methodology could significantly improve prognostication in aSAH.
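As a minimal sketch of the feature-extraction and classification pipeline described above, the snippet turns a weighted adjacency matrix into node strength and betweenness centrality features, appends clinical variables, and trains a Random Forest. The data are synthetic, the ROI count is reduced, and the clinical variables and inverse-weight distance proxy are assumptions for illustration, not the study's exact configuration.

```python
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Minimal sketch of the pipeline described in the abstract, on synthetic data:
# turn each patient's weighted structural connectome into graph features
# (node strength, betweenness centrality), append clinical variables, and train
# a Random Forest to predict dichotomized outcome.

rng = np.random.default_rng(0)
n_patients, n_rois = 56, 20          # the study used 176 Eve-atlas ROIs

def graph_features(adjacency):
    g = nx.from_numpy_array(adjacency)                      # weighted, undirected
    strength = np.array([d for _, d in g.degree(weight="weight")])
    # betweenness expects edge "distances"; use inverse weight as a simple proxy
    for u, v, d in g.edges(data=True):
        d["length"] = 1.0 / max(d["weight"], 1e-9)
    betw = np.array(list(nx.betweenness_centrality(g, weight="length").values()))
    return np.concatenate([strength, betw])

X, y = [], []
for _ in range(n_patients):
    a = rng.random((n_rois, n_rois)); a = (a + a.T) / 2.0; np.fill_diagonal(a, 0.0)
    clinical = rng.random(3)                                # e.g. age, WFNS grade
    X.append(np.concatenate([graph_features(a), clinical]))
    y.append(rng.integers(0, 2))                            # dichotomized mRS
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```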

Keywords: connectomics, diffusion tensor imaging, graph theory, machine learning, subarachnoid hemorrhage

Procedia PDF Downloads 190
270 Experimental Studies of the Reverse Load-Unloading Effect on the Mechanical, Linear and Nonlinear Elastic Properties of n-AMg6/C60 Nanocomposite

Authors: Aleksandr I. Korobov, Natalia V. Shirgina, Aleksey I. Kokshaiskiy, Vyacheslav M. Prokhorov

Abstract:

The paper presents the results of an experimental study of the effect of reversible mechanical loading-unloading on the mechanical, linear, and nonlinear elastic properties of the n-AMg6/C60 nanocomposite. Samples of the n-AMg6/C60 nanocomposite were obtained by grinding polycrystalline AMg6 alloy with 0.3 wt% of C60 fullerite in a planetary mill under an argon atmosphere. The resulting product consisted of 200-500 µm agglomerates of nanoparticles. The X-ray coherent scattering method has shown that the average nanoparticle size is 40-60 nm. The resulting preform was extruded at high temperature; the C60 fullerite additions hinder recrystallization at the grain boundaries. For the n-AMg6/C60 nanocomposite samples, the loading curve was measured: the dependence of the mechanical stress σ on the sample strain ε during a multi-cycle load-unloading process up to failure. A hysteresis dependence σ = σ(ε) was observed, and an insignificant residual strain ε < 0.005 was recorded. At σ ≈ 500 MPa and ε ≈ 0.025, the sample failed, and the fracture was brittle. Microhardness was measured before and after failure of the sample, and it was found that the loading-unloading process led to an increase in microhardness. The effect of reversible mechanical stress on the linear and nonlinear elastic properties of the n-AMg6/C60 nanocomposite was studied experimentally by an ultrasonic method on the automated Ritec RAM-5000 SNAP system. The velocities of the longitudinal and shear bulk waves in the n-AMg6/C60 nanocomposite were measured with the pulse method, and all the second-order elastic coefficients and their dependence on the magnitude of the reversible mechanical stress applied to the sample were calculated. The nonlinear elastic properties of the n-AMg6/C60 nanocomposite under reversible load-unloading were studied with the spectral method. At arbitrary values of the sample strain (up to failure), the dependence of the amplitude of the second longitudinal acoustic harmonic at a frequency of 2f = 10 MHz on the amplitude of the first harmonic at a frequency f = 5 MHz was measured. Based on these measurements, the values of the nonlinear acoustic parameter in the n-AMg6/C60 nanocomposite sample at different mechanical stresses were determined. The obtained results can be used in solid-state physics and materials science, and for the development of new techniques for the nondestructive testing of structural materials by nonlinear acoustic diagnostics. This study was supported by the Russian Science Foundation (project № 14-22-00042).
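As a minimal sketch of how a nonlinear acoustic parameter can be extracted from such harmonic measurements (not the authors' data processing), the snippet uses the classical quadratic relation A2 = beta k² x A1² / 8 for a longitudinal wave and recovers beta from the slope of A2 versus A1². The propagation distance, sound speed, and synthetic amplitudes are illustrative assumptions.

```python
import numpy as np

# Minimal sketch, not the authors' data processing: for quadratic acoustic
# nonlinearity the second-harmonic amplitude grows as A2 = beta * k^2 * x * A1^2 / 8,
# so beta follows from the slope of A2 versus A1^2. The propagation distance,
# sound speed, and synthetic amplitudes below are illustrative.

f = 5.0e6            # fundamental frequency, Hz (as in the abstract)
c = 6.0e3            # assumed longitudinal sound speed, m/s
x = 20.0e-3          # assumed propagation distance, m
k = 2.0 * np.pi * f / c

# Synthetic measurement: A2 proportional to A1^2 plus a little noise.
rng = np.random.default_rng(1)
a1 = np.linspace(1.0e-9, 10.0e-9, 12)      # fundamental displacement amplitude, m
beta_true = 8.0
a2 = beta_true * k**2 * x * a1**2 / 8.0 * (1.0 + 0.02 * rng.standard_normal(a1.size))

# Least-squares slope of A2 vs A1^2 (line through the origin), then beta.
slope = np.sum(a2 * a1**2) / np.sum(a1**4)
beta_est = 8.0 * slope / (k**2 * x)
print(f"estimated nonlinear parameter beta ~ {beta_est:.2f} (true {beta_true})")
```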

Keywords: nanocomposite, generation of acoustic harmonics, nonlinear acoustic parameter, hysteresis

Procedia PDF Downloads 152
269 Early Buddhist History in Architecture before Sui Dynasty

Authors: Yin Ruoxi

Abstract:

During the Eastern Han to Three Kingdoms period, Buddhism had not yet received comprehensive support from the ruling class, and its dissemination remained relatively limited. Based on existing evidence, Buddhist architecture was primarily concentrated in regions central to scripture translation and cultural exchange with the Western Regions, such as Luoyang, Pengcheng, and Guangling. The earliest Buddhist structures largely adhered to the traditional forms of ancient Indian architecture. The frequent wars of the late Western Jin and Sixteen Kingdoms periods compelled the Central Plains culture to interact with other civilizations. As a result, Buddhist architecture gradually integrated characteristics of Central Asian, ancient Indian, and native Chinese styles. In the Northern and Southern Dynasties, Buddhism gained formal support from rulers, leading to the establishment of numerous temples across the Central Plains. The prevalence of warfare, combined with the emergence of Wei-Jin reclusive thought and Buddhism’s own ascetic philosophy, gave rise to mountain temples. Additionally, the eastward spread of rock-cut cave architecture along the Silk Road accelerated the development of such mountain temples. Temple layouts also became increasingly complex with the deeper translation of Buddhist scriptures and the influence of traditional Chinese architectural concepts. From the earliest temples, where the only Buddhist structure was the temple itself, to layouts centered on the stupa with a "front stupa, rear hall" arrangement, and finally to Mahavira Halls becoming the sacred focal point, temple design evolved significantly. The grand halls eventually matched the scale of the central halls in imperial palaces, reflecting the growing deification of the Buddha in the public imagination. The multi-storied wooden pagoda exemplifies Buddhism’s remarkable adaptability during its early introduction to the Central Plains, while the dense-eaved pagoda represents a synthesis of Gandharan stupas, Central Asian temple shrines, ancient Indian devalaya, and Chinese multi-storied pavilions. This form demonstrates Buddhism’s ability to absorb features from diverse cultures during its dissemination. Through its continuous interaction with various cultures, Buddhist architecture achieved sustained development in both form and meaning, laying a solid foundation for the establishment and growth of Buddhism across different regions.

Keywords: Buddhism, buddhist architecture, pagoda, temple, South Asian Buddhism, Chinese Buddhism

Procedia PDF Downloads 14