Search results for: estimating cyclone intensity

497 Performance and Damage Detection of Composite Structural Insulated Panels Subjected to Shock Wave Loading

Authors: Anupoju Rajeev, Joanne Mathew, Amit Shelke

Abstract:

In the current study, a new type of Composite Structural Insulated Panel (CSIP), intended to replace conventional wooden structural materials, is developed and its performance against shock loading is investigated. The CSIP is made of a Fibre Cement Board (FCB)/aluminum facesheet and an expanded polystyrene foam core. As tornadoes are frequent in western countries, it is advisable to monitor the health of CSIPs during their lifetime. The composite structure is therefore instrumented with three smart sensors placed at definite locations. Each smart sensor is fabricated with an embedded half stainless phononic crystal sensor attached to both ends of a nylon shaft, which can resist shock and impact on the facesheet as well as on the polystyrene foam core and safeguards the system. In addition to the granular crystal sensors, accelerometers are used in the horizontal and vertical spanning directions with a definite offset distance. To estimate the health and damage of the CSIP panel using the granular crystal sensors, shock wave loading experiments are conducted, during which the time-of-flight response from the granular sensors is measured. The main objective of conducting shock wave loading experiments on the CSIP panels is to study the effect on, and the sustaining capacity of, the panels in extreme hazardous situations like tornadoes and hurricanes, which are very common in western countries. These effects have been replicated using a shock tube, an instrument that can reproduce the wind and pressure intensity of a tornado for the experimental study. Numerous experiments have been conducted to investigate the flexural strength of the CSIP. Furthermore, the study includes damage detection using the three smart sensors embedded in the CSIPs during shock wave loading.
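
As a rough illustration of the time-of-flight measurement underlying the damage detection, the sketch below estimates a ToF shift between a reference and a post-damage sensor waveform by cross-correlation. The sampling rate, pulse shapes, and delay are assumptions for the example, not data from the study.

```python
import numpy as np

# Sketch of time-of-flight (ToF) estimation by cross-correlation, the
# kind of measurement the granular-crystal sensors provide (synthetic
# pulses; the actual sensor waveforms are not reproduced). A damage-
# induced stiffness change shifts the ToF between reference and test.
fs = 1_000_000                                # 1 MHz sampling (assumed)
t = np.arange(0, 2e-3, 1 / fs)

def pulse(t0):                                # Gaussian-windowed tone burst
    return np.exp(-((t - t0) / 40e-6) ** 2) * np.sin(2 * np.pi * 50e3 * (t - t0))

ref = pulse(0.40e-3)                          # pristine panel response
test = pulse(0.46e-3)                         # delayed arrival after damage

xc = np.correlate(test, ref, mode="full")
lag = np.argmax(xc) - (ref.size - 1)
delta_tof = lag / fs                          # ToF shift in seconds
print(delta_tof * 1e6, "us")                  # ~60 us for this example
```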

Keywords: composite structural insulated panels, damage detection, flexural strength, sandwich structures, shock wave loading

Procedia PDF Downloads 146
496 Clinical and Structural Differences in Knee Osteoarthritis with/without Synovial Hypertrophy

Authors: Gi-Young Park, Dong Rak Kwon, Sung Cheol Cho

Abstract:

Objective: The synovium is known to be involved in many characteristic pathological processes, and synovitis is common in advanced osteoarthritis. We aimed to evaluate the clinical, radiographic, and ultrasound findings in patients with knee osteoarthritis and to compare the clinical and imaging findings between knee osteoarthritis with and without synovial hypertrophy confirmed by ultrasound. Methods: One hundred knees (54 left, 46 right) in 95 patients (64 women, 31 men; mean age, 65.9 years; range, 43-85 years) with knee osteoarthritis were recruited. The Visual Analogue Scale (VAS) was used to assess the intensity of knee pain. The severity of knee osteoarthritis was classified according to the Kellgren-Lawrence (K-L) grade on a radiograph. Ultrasound examination was performed by a physiatrist with 24 years of experience in musculoskeletal ultrasound. Ultrasound findings, including the thickness of joint effusion in the suprapatellar pouch, synovial hypertrophy, infrapatellar tendinosis, meniscal tear or extrusion, and Baker cyst, were measured or detected. The thickness of knee joint effusion was measured at the maximal anterior-posterior diameter of fluid collection in the suprapatellar pouch. Synovial hypertrophy was identified as soft tissue of variable echogenicity that is poorly compressible and nondisplaceable under compression by the ultrasound transducer. The knees were divided into two groups according to the presence of synovial hypertrophy. The differences in clinical and imaging findings between the two groups were evaluated by independent t-test and chi-square test. Results: Synovial hypertrophy was detected in 48 of the 100 knees on ultrasound. There were no significant differences in demographic parameters or VAS score between the two groups, except for sex (P<0.05). Medial meniscal extrusion and tear were significantly more frequent in knees with synovial hypertrophy than in knees without it. K-L grade and joint effusion thickness were greater in patients with synovial hypertrophy than in patients without it (P<0.05). Conclusion: Synovial hypertrophy in knee osteoarthritis was associated with greater suprapatellar joint effusion and higher K-L grade, and may be a characteristic ultrasound feature of late knee osteoarthritis. These results suggest that synovial hypertrophy on ultrasound can be regarded as a predictor of rapid progression in patients with knee osteoarthritis.

Keywords: knee osteoarthritis, synovial hypertrophy, ultrasound, K-L grade

Procedia PDF Downloads 75
495 Identification of Suitable Rainwater Harvesting Sites Using Geospatial Techniques with AHP in Chacha Watershed, Jemma Sub-Basin Upper Blue Nile, Ethiopia

Authors: Abrha Ybeyn Gebremedhn, Yitea Seneshaw Getahun, Alebachew Shumye Moges, Fikrey Tesfay

Abstract:

Rainfed agriculture in Ethiopia has failed to produce enough food to meet the increasing demand. Pinpointing appropriate sites for rainwater harvesting (RWH) can make a substantial contribution to increasing the available water and enhancing agricultural productivity. The current study, aimed at identifying potential RWH sites, was conducted in the Chacha watershed in the central highlands of Ethiopia, which is endowed with rugged topography. A Geographic Information System combined with the Analytical Hierarchy Process (AHP) was used to generate the maps for identifying appropriate RWH sites. In this study, 11 factors that determine RWH locations were considered, including slope, soil texture, runoff depth, land cover type, annual average rainfall, drainage density, lineament intensity, hydrologic soil group, antecedent moisture content, and distance to roads. The overall result shows that 10.50%, 71.10%, 17.90%, and 0.50% of the area was found to be highly, moderately, and marginally suitable, and unsuitable for RWH, respectively. RWH site selection was found to be highly dependent on slope, soil texture, and runoff depth; moderately dependent on drainage density, annual average rainfall, and land use/land cover; and less dependent on the other factors. The areas most suitable for rainwater harvesting expansion are lands with flat topography and a soil textural class of high water-holding capacity that can produce high runoff depth. The results of this study could serve as a baseline for planners and decision-makers and support the adoption of strategies for appropriate RWH site selection.
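
For readers unfamiliar with how AHP turns pairwise judgments into criterion weights, the sketch below shows the principal-eigenvector computation and consistency check on a toy 3x3 comparison matrix. The matrix values and criteria ordering are illustrative assumptions, not the weights used in the study.

```python
import numpy as np

# Minimal AHP sketch: derive criterion weights from a pairwise
# comparison matrix (illustrative 3x3 example for slope, soil texture
# and runoff depth; the values are NOT from the study).
A = np.array([
    [1.0, 3.0, 5.0],   # slope vs. (slope, soil texture, runoff depth)
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                     # normalized criterion weights

# Consistency ratio (RI = 0.58 for n = 3, from Saaty's table)
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
CR = CI / 0.58
print(weights, CR)                           # CR < 0.1 => judgments acceptable
```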

Keywords: runoff depth, antecedent moisture condition, AHP, weighted overlay, water resource

Procedia PDF Downloads 53
494 Tobacco Taxation and the Heterogeneity of Smokers' Responses to Price Increases

Authors: Simone Tedeschi, Francesco Crespi, Paolo Liberati, Massimo Paradiso, Antonio Sciala

Abstract:

This paper aims at contributing to the understanding of smokers' responses to cigarette price increases, with a focus on heterogeneity both across individuals and across price levels. To do this, a stated-preference quasi-experimental design grounded in a random utility framework is proposed to evaluate the effect on smokers' utility of the price level and variation, along with social conditioning and health impact perception. The analysis is based on individual-level data drawn from a unique survey gathering very detailed information on Italian smokers' habits. In particular, qualitative information on the individual reactions triggered by price changes of different magnitude and composition is exploited. The main findings stemming from the analysis are the following: the average price elasticity of cigarette consumption is comparable with previous estimates for advanced economies (-0.32). However, the decomposition of this result across five latent classes of smokers reveals extreme heterogeneity in price responsiveness, implying a potential price elasticity that ranges from 0.05 to almost 1 in magnitude. Such heterogeneity is in part explained by observable characteristics such as age, income, gender, and education, as well as (current and lagged) smoking intensity. Moreover, price responsiveness is far from independent of the size of the prospected price increase. Finally, by comparing even and uneven price variations, it is shown that uniform across-brand price increases are able to limit the scope for product substitution and downgrading. The estimated price-response heterogeneity has significant implications for tax policy. First, it provides evidence of, and a rationale for, why the aggregate price elasticity is likely to follow a strictly increasing pattern as a function of the experienced price variation. This information is crucial for forecasting the effect of a given tax-driven price change on tax revenue. Second, it provides some guidance on how to design excise tax reforms to balance public health and revenue goals.
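
To make the revenue implication concrete, the sketch below aggregates class-level elasticities into an overall elasticity and applies it to a hypothetical 10% price rise; the class shares and elasticities are invented for illustration and are not the paper's estimates.

```python
import numpy as np

# Sketch of how class-level price elasticities aggregate: the overall
# elasticity is a share-weighted average over latent classes, so the
# heterogeneity (0.05 to almost 1 in magnitude) matters for revenue
# forecasts. Class shares and elasticities below are illustrative only.
shares = np.array([0.30, 0.25, 0.20, 0.15, 0.10])        # latent-class shares
elasticities = np.array([-0.05, -0.15, -0.30, -0.60, -1.0])

agg = np.sum(shares * elasticities)          # aggregate elasticity (~ -0.30)
price_rise = 0.10                            # +10% price
dq = agg * price_rise                        # % change in consumption
d_revenue = (1 + price_rise) * (1 + dq) - 1  # % change in spending/revenue base
print(agg, dq, d_revenue)
```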

Keywords: smoking behaviour, preference heterogeneity, price responsiveness, cigarette taxation, random utility models

Procedia PDF Downloads 162
493 Analyzing the Street Pattern Characteristics on Young People’s Choice to Walk or Not: A Study Based on Accelerometer and Global Positioning Systems Data

Authors: Ebru Cubukcu, Gozde Eksioglu Cetintahra, Burcin Hepguzel Hatip, Mert Cubukcu

Abstract:

Obesity and overweight cause serious health problems. Public and private organizations aim to encourage walking in various ways in order to cope with the problems of obesity and overweight. This study aims to understand how the spatial characteristics of the urban street pattern, its connectivity and complexity, influence young people's choice to walk or not. 185 public university students in Izmir, the third largest city in Turkey, participated in the study. Each participant wore an accelerometer and a global positioning system (GPS) device for a week. The accelerometer records the intensity of the participant's activity at a specified time interval, and the GPS device records the activities' locations. Combining the two datasets, activity maps are derived. These maps are then used to differentiate the participants' walk trips and motor vehicle trips. The frequencies of walk and motor vehicle trips are then calculated at the street segment level, and the street segments are categorized into two groups: 'preferred by pedestrians' and 'preferred by motor vehicles'. Graph Theory-based accessibility indices are calculated to quantify the spatial characteristics of the streets in the sample. Six different indices are used: (I) edge density, (II) edge sinuosity, (III) eta index, (IV) node density, (V) order of a node, and (VI) beta index. T-tests show that the index values for the 'preferred by pedestrians' and 'preferred by motor vehicles' segments are significantly different. The findings indicate that the spatial characteristics of the street network have a measurable effect on young people's choice to walk or not. Policy implications are discussed. This study is funded by the Scientific and Technological Research Council of Turkey, Project No: 116K358.
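
A minimal sketch of how such graph-theoretic indices can be computed from a street network is given below, using common transport-geography definitions on a toy graph; the definitions may differ in detail from the exact formulas used in the study.

```python
import networkx as nx

# Sketch of Graph Theory-based street-pattern indices on a toy street
# graph; definitions follow common transport-geography usage and may
# differ in detail from the study's formulas.
G = nx.Graph()
# (node_a, node_b, segment length in km) - illustrative values only
G.add_weighted_edges_from([(1, 2, 0.12), (2, 3, 0.08), (3, 4, 0.15),
                           (2, 4, 0.20), (4, 5, 0.09)], weight="length")

area_km2 = 0.5                                          # catchment area (assumed)
total_len = sum(d["length"] for _, _, d in G.edges(data=True))

edge_density = total_len / area_km2                     # (I) km of street per km^2
eta_index = total_len / G.number_of_edges()             # (III) mean edge length
node_density = G.number_of_nodes() / area_km2           # (IV) nodes per km^2
beta_index = G.number_of_edges() / G.number_of_nodes()  # (VI) connectivity
print(edge_density, eta_index, node_density, beta_index)
```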

Keywords: graph theory, walkability, accessibility, street network

Procedia PDF Downloads 226
492 Systematic Review and Meta-analysis Investigating the Efficacy of Walking-based Aerobic Exercise Interventions to Treat Postpartum Depression

Authors: V. Pentland, S. Spilsbury, A. Biswas, M. F. Mottola, S. Paplinskie, M. S. Mitchell

Abstract:

Postpartum depression (PPD) is a form of major depressive disorder that afflicts 10-22% of mothers worldwide. Rising demands for traditional PPD treatment options (e.g., psychiatry), especially in the context of the COVID-19 pandemic, are increasingly difficult to meet. More accessible treatment options (e.g., walking) are needed. The objective of this review is to determine the impact of walking on PPD severity. A structured search of seven electronic databases for randomised controlled trials published between 2000 and July 29, 2021, was completed. Studies were included if walking was the sole or primary aerobic exercise modality. A random-effects meta-analysis was conducted for studies reporting PPD symptoms measured using a clinically validated tool. A simple count of positive/null effect studies was undertaken as part of a narrative summary. Five studies involving 242 participants were included (mean age ~28.9 years; 100% with mild-to-moderate depression). Interventions were 12 (n=4) and 24 (n=1) weeks long. Each assessed PPD severity using the Edinburgh Postnatal Depression Scale (EPDS) and was included in the meta-analysis. The pooled effect estimate suggests that, relative to controls, walking yielded clinically significant decreases in mean EPDS scores from baseline to intervention end (pooled MD=-4.01; 95% CI: -7.18 to -0.84; I²=86%). The narrative summary provides preliminary evidence that walking-only, supervised, and group-based interventions, including 90-120+ minutes/week of moderate-intensity walking, may produce greater EPDS reductions. While limited by a relatively small number of included studies, the pooled effect estimates suggest walking may help mothers manage PPD. This is the first time walking as a treatment for PPD, an exercise modality that uniquely addresses many barriers faced by mothers, has been summarized in a systematic way. Trial registration: PROSPERO (CRD42020197521) on August 16th, 2020.
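
The sketch below reproduces the arithmetic of a DerSimonian-Laird random-effects pooled mean difference of the kind reported here; the five study-level mean differences and standard errors are placeholders, not the data extracted by the review.

```python
import numpy as np

# DerSimonian-Laird random-effects pooled mean difference, the standard
# approach behind estimates like MD=-4.01 (95% CI -7.18 to -0.84). The
# (md, se) pairs below are placeholders, NOT the review's study data.
md = np.array([-2.0, -6.5, -1.2, -7.8, -3.0])   # per-study mean differences
se = np.array([0.9, 1.2, 0.8, 1.5, 1.1])        # their standard errors

w = 1 / se**2                                    # fixed-effect weights
mu_fe = np.sum(w * md) / np.sum(w)
Q = np.sum(w * (md - mu_fe) ** 2)                # Cochran's Q
df = len(md) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (se**2 + tau2)                        # random-effects weights
mu_re = np.sum(w_re * md) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
ci = (mu_re - 1.96 * se_re, mu_re + 1.96 * se_re)
i2 = max(0.0, (Q - df) / Q) * 100                # heterogeneity I^2 (%)
print(mu_re, ci, i2)
```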

Keywords: postpartum, exercise, depression, walking

Procedia PDF Downloads 204
491 Principles of Risk Management in Surgery Department

Authors: Mohammad H. Yarmohammadian, Masoud Ferdosi, Abbas Haghshenas, Fatemeh Rezaei

Abstract:

Surgical procedures aim at preserving human life and improving its quality. However, there are many potential risk sources that can cause serious harm to patients. For centuries, managers believed that the technical competence of the surgeon is the only key to a successful surgery, but over the past decade risks have been considered in terms of process-based safety procedures, teamwork, and interdepartmental communication. Aims: This study aims to determine how process-based surgical risk management should be done in terms of a project management tool named ABS (Activity Breakdown Structure). Settings and Design: This study was conducted in two stages. First, a literature review and meetings with professors were carried out to determine the principles and framework of surgical risk management. Next, the teams responsible for the surgical patient journey were involved in subsequent meetings to develop the process-based surgical risk management. Methods and Material: This study is a qualitative research in which focus groups with an inductive approach were used. Sampling was performed to achieve representativeness through intensity sampling based on experience and seniority. Analysis Method used: content analysis of the interviews and consensus themes extracted from the focus group discussions were the analysis tools. Results: We developed the patient journey process in 5 main phases, 24 activities, and 108 tasks. Then, the responsible teams, transposition, and allocated places for performance were determined. Some activity and task themes, such as patient identification and records review, were repeated in each phase because of their importance. Conclusions: Risk management of surgical departments is significant, as this facility is the hospital's largest cost and revenue center. Good communication between the surgical team and other clinical teams outside the surgery department, through a process-based perspective, could improve the safety of patients undergoing surgery.

Keywords: risk management, activity breakdown structure (ABS), surgical department, medical sciences

Procedia PDF Downloads 303
490 A Quality Index Optimization Method for Non-Invasive Fetal ECG Extraction

Authors: Lucia Billeci, Gennaro Tartarisco, Maurizio Varanini

Abstract:

Fetal cardiac monitoring by fetal electrocardiogram (fECG) can provide significant clinical information about the health of the fetus. Despite this potential, until now the use of fECG in clinical practice has been quite limited due to the difficulties in its measurement. The recovery of fECG from signals acquired non-invasively using electrodes placed on the maternal abdomen is a challenging task, because abdominal signals are a mixture of several components and the fetal one is very weak. This paper presents an approach for fECG extraction from abdominal maternal recordings which exploits the pseudo-periodicity of the fetal ECG. It consists of devising a quality index (fQI) for the fECG and of finding the linear combinations of preprocessed abdominal signals which maximize this fQI (quality index optimization - QIO). It aims at improving the performance of the most commonly adopted methods for fECG extraction, usually based on estimating and canceling the maternal ECG (mECG). The procedure for fECG extraction and fetal QRS (fQRS) detection is completely unsupervised and based on the following steps: signal pre-processing; mECG extraction and maternal QRS detection; mECG component approximation and canceling by weighted principal component analysis; fECG extraction by fQI maximization and fetal QRS detection. The proposed method was compared with our previously developed procedure, which obtained the highest score at the PhysioNet/Computing in Cardiology Challenge 2013. That procedure was based on removing the mECG from abdominal signals estimated by principal component analysis (PCA) and applying Independent Component Analysis (ICA) to the residual signals. Both methods were developed and tuned using 69 one-minute-long abdominal measurements with fetal QRS annotations from dataset A of the PhysioNet/Computing in Cardiology Challenge 2013. The QIO-based and ICA-based methods were compared on two databases of abdominal maternal ECG available on the PhysioNet site. The first is the Abdominal and Direct Fetal Electrocardiogram Database (ADdb), which contains fetal QRS annotations, thus allowing a quantitative performance comparison; the second is the Non-Invasive Fetal Electrocardiogram Database (NIdb), which does not contain fetal QRS annotations, so that the comparison between the two methods can only be qualitative. The comparison on the NIdb was therefore performed by defining an index of quality for the fetal RR series. On the annotated database ADdb, the QIO method provided the performance indexes Sens=0.9988, PPA=0.9991, F1=0.9989, overcoming the ICA-based one, which provided Sens=0.9966, PPA=0.9972, F1=0.9969. The index of quality was higher for the QIO-based method than for the ICA-based one in 35 out of 55 records of the NIdb. The QIO-based method gave very high performance with both databases. The results of this study point to the application of the algorithm in a fully unsupervised way for implementation in wearable devices for self-monitoring of fetal health.
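
The core of the QIO step, searching for the channel combination that maximizes a periodicity-based quality index, can be sketched as below. The quality index used here (the autocorrelation peak in a plausible fetal RR lag window) and the crude random search are stand-ins for the paper's actual fQI and optimizer, which are not reproduced.

```python
import numpy as np

# Sketch of quality-index optimization (QIO): search for the linear
# combination of preprocessed abdominal channels that maximizes a
# periodicity-based quality index. Toy data; the paper's fQI
# definition and optimization scheme are not reproduced here.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 6000))        # 4 channels, 10 s at 600 Hz (toy)

def quality_index(s):
    s = s - s.mean()
    ac = np.correlate(s, s, mode="full")[s.size - 1:]
    ac /= ac[0] + 1e-12
    lo, hi = int(0.25 * 600), int(0.60 * 600)   # plausible fetal RR lags
    return ac[lo:hi].max()                      # strength of pseudo-periodicity

# Crude random search over unit-norm weight vectors (a real
# implementation would use a proper optimizer).
best_w, best_q = None, -np.inf
for _ in range(2000):
    w = rng.standard_normal(4)
    w /= np.linalg.norm(w)
    q = quality_index(w @ X)
    if q > best_q:
        best_w, best_q = w, q
fecg_estimate = best_w @ X                      # channel mix with highest fQI
```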

Keywords: fetal electrocardiography, fetal QRS detection, independent component analysis (ICA), optimization, wearable

Procedia PDF Downloads 280
489 Quality Assurances for an On-Board Imaging System of a Linear Accelerator: Five Months Data Analysis

Authors: Liyun Chang, Cheng-Hsiang Tsai

Abstract:

To ensure that radiation is precisely delivered to the target in cancer patients, a linear accelerator equipped with a pretreatment on-board imaging system is introduced, through which the patient setup is verified before the daily treatment. New-generation radiotherapy using beam-intensity modulation, usually associated with treatments with steep dose gradients, is claimed to achieve both a higher degree of dose conformation in the targets and a further reduction of toxicity in normal tissues. However, this benefit is lost if the beam is delivered imprecisely. To avoid irradiating critical organs or normal tissues rather than the target, it is very important to carry out quality assurance (QA) of this on-board imaging system. The QA of the On-Board Imager (OBI) system of one Varian Clinac-iX linear accelerator was performed through our procedures, modified from a relevant report and AAPM TG-142. Two image modalities of the OBI system, 2D radiography and 3D cone-beam computed tomography (CBCT), were examined. Daily and monthly QA was executed for five months in the categories of safety, geometrical accuracy, and image quality. A marker phantom and a blade calibration plate were used for the QA of geometrical accuracy, while the Leeds phantom and the Catphan 504 phantom were used for the QA of radiographic and CBCT image quality, respectively. The reference images were generated through a GE LightSpeed CT simulator with an ADAC Pinnacle treatment planning system. Finally, the image quality was analyzed via an OsiriX medical imaging system. For the geometrical accuracy test, the average deviations of the OBI isocenter in each direction are less than 0.6 mm with uncertainties less than 0.2 mm, while all the other items have displacements less than 1 mm. For radiographic image quality, the spatial resolution is 1.6 lp/cm with contrast less than 2.2%. The spatial resolution, low contrast, and HU homogeneity of CBCT are better than 6 lp/cm, less than 1%, and within 20 HU, respectively. All tests are within the criteria, except the HU value of Teflon measured with the full-fan mode, which exceeded the suggested value; this could be due to its own high HU value and needs to be rechecked. The OBI system in our facility was thus demonstrated to be reliable, with stable image quality. QA of the OBI system is truly necessary to achieve the best treatment for a patient.

Keywords: CBCT, image quality, quality assurance, OBI

Procedia PDF Downloads 298
488 Trend Analysis of Rainfall: A Climate Change Paradigm

Authors: Shyamli Singh, Ishupinder Kaur, Vinod K. Sharma

Abstract:

Climate change refers to the change in climate over an extended period of time. The climate has been changing throughout the Earth's history, but anthropogenic activities accelerate this rate of change, which has now become a global issue. The increase in greenhouse gas emissions is causing global warming and climate change related issues at an alarming rate. Increasing temperature results in climate variability across the globe. Changes in rainfall patterns, intensity, and extreme events are some of the impacts of climate change. Rainfall variability refers to the degree to which rainfall patterns vary over a region (spatial) or through a time period (temporal). Temporal rainfall variability can be directly or indirectly linked to climate change. Such variability in rainfall increases the vulnerability of communities to climate change. With increasing urbanization and unplanned developmental activities, air quality is also deteriorating. This paper mainly focuses on rainfall variability due to increasing levels of greenhouse gases. Rainfall data of 65 years (1951-2015) from the Safdarjung station in Delhi were collected from the Indian Meteorological Department and analyzed using the Mann-Kendall test for time-series data. The Mann-Kendall test is a statistical tool that helps in the analysis of trends in a given data set; the magnitude of the trend can be measured through Sen's slope estimator. Data were analyzed monthly, seasonally, and yearly across the period of 65 years. The monthly rainfall data for the said period do not follow any increasing or decreasing trend. The monsoon season shows no increasing trend, but there was an increasing trend in the pre-monsoon season. Hence, the actual rainfall differs from the normal trend of the rainfall. From this analysis, it can be projected that there will be an increase in pre-monsoon rainfall relative to the actual monsoon season. Pre-monsoon rainfall causes a cooling effect and results in a drier monsoon season. This will increase the vulnerability of communities to climate change and also affect related developmental activities.
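
A minimal sketch of the Mann-Kendall test with Sen's slope estimator is given below (without the tie correction); the synthetic series stands in for the 1951-2015 Safdarjung record, which is not reproduced here.

```python
import numpy as np
from scipy import stats

# Mann-Kendall trend test with Sen's slope estimator (no tie
# correction; synthetic annual rainfall totals stand in for the
# actual 1951-2015 record).
def mann_kendall(x):
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18          # variance of S, no ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))            # two-sided p-value
    return s, z, p

def sens_slope(x):
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(len(x) - 1) for j in range(i + 1, len(x))]
    return np.median(slopes)                        # robust trend magnitude

rain = np.random.default_rng(1).gamma(2.0, 30.0, 65)  # toy annual totals, mm
s, z, p = mann_kendall(rain)
print(s, z, p, sens_slope(rain))                    # p < 0.05 => trend present
```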

Keywords: greenhouse gases, Mann-Kendall test, rainfall variability, Sen's slope

Procedia PDF Downloads 208
487 Modification of a Commercial Ultrafiltration Membrane by Electrospray Deposition for Performance Adjustment

Authors: Elizaveta Korzhova, Sebastien Deon, Patrick Fievet, Dmitry Lopatin, Oleg Baranov

Abstract:

Filtration with nanoporous ultrafiltration membranes is an attractive option to remove ionic pollutants from contaminated effluents. Unfortunately, commercial membranes are not necessarily suitable for specific applications, and their modification by polymer deposition is a fruitful way to adapt their performance accordingly. Many methods are usually used for surface modification, but a novel technique based on electrospray is proposed here. Various quantities of polymers were deposited on a commercial membrane, and the impact of the deposit on filtration performance is investigated and discussed in terms of charge and hydrophobicity. Electrospray deposition is a technique that has not been used for membrane modification up to now. It consists of spraying small drops of polymer solution under a high voltage applied between the needle containing the solution and the metallic support on which the membrane is stuck. The advantage of this process lies in the small quantities of polymer that can be coated on the membrane surface compared with the immersion technique. In this study, various quantities (from 2 to 40 μL/cm²) of solutions containing two charged polymers (13 mmol/L of monomer unit), namely polyethyleneimine (PEI) and polystyrene sulfonate (PSS), were sprayed on a negatively charged polyethersulfone membrane (PLEIADE, Orelis Environment). The efficacy of the polymer deposition was then investigated by estimating ion rejection, permeation flux, zeta-potential, and contact angle before and after the deposition. Firstly, contact angle (θ) measurements show that the surface hydrophilicity is notably improved by coating with both PEI and PSS. Moreover, it was highlighted that the contact angle decreases monotonically with the amount of sprayed solution. Additionally, the hydrophilicity enhancement proved to be better with PSS (from 62° to 35°) than with PEI (from 62° to 53°). Values of the zeta-potential (ζ) were estimated by measuring the streaming current generated by a pressure difference across a channel made by clamping two membranes. The ζ-values demonstrate that deposits of PSS (negative at pH=5.5) increase the negative membrane charge, whereas deposits of PEI (positive) lead to a positive surface charge. Zeta-potential measurements also emphasize that the sprayed quantity has little impact on the membrane charge, except for very low quantities (2 μL/cm²). The cross-flow filtration of salt solutions containing mono- and divalent ions demonstrates that polymer deposition allows a strong enhancement of ion rejection. For instance, it is shown that the rejection of a salt containing a divalent cation can be increased from 1% to 20% and even to 35% by depositing 2 and 4 μL/cm² of PEI solution, respectively. This observation is coherent with the reversal of the membrane charge induced by PEI deposition. Similarly, the increase of negative charge induced by PSS deposition leads to an increase of NaCl rejection from 5% to 45%, due to electrostatic repulsion of the Cl- ion by the negative surface charge. Finally, a notable fall in the permeation flux due to the polymer layer coated on the surface was observed, and the best polymer concentration in the sprayed solution remains to be determined to optimize performance.

Keywords: ultrafiltration, electrospray deposition, ion rejection, permeation flux, zeta-potential, hydrophobicity

Procedia PDF Downloads 187
486 Climate Smart Agriculture: Nano Technology in Solar Drying

Authors: Figen Kadirgan, M. A. Neset Kadirgan, Gokcen A. Ciftcioglu

Abstract:

Addressing food security and climate change challenges has to be done in an integrated manner. To increase food production and to reduce emissions intensity, thus contributing to climate change mitigation, food systems have to be more efficient in the use of resources. To ensure food security and adapt to climate change, they have to become more resilient. The changes required in agricultural and food systems will require the creation of supporting institutions and enterprises that provide services and inputs to smallholders, fishermen, and pastoralists, and that transform and commercialize their production more efficiently. Thus, there is a continuously growing need to switch to a green economy, which simultaneously reduces carbon emissions and pollution, enhances energy and resource-use efficiency, and prevents the loss of biodiversity and ecosystem services. Climate smart agriculture takes into account the four dimensions of food security: availability, accessibility, utilization, and stability. It is well known that the increase in world population will worsen the population-food imbalance. The emphasis on the reduction of food losses puts the focus on production and on farmers, increasing productivity and income and ensuring food security, whereby small farmers also enhance their income and stabilize their budgets. The use of solar drying for agricultural, marine, or meat products is very important for preservation. Traditional sun drying is a relatively slow process in which poor food quality results from infestation by insects, enzymatic reactions, microorganism growth, and mycotoxin development. In contrast, solar drying offers a sound solution to all these negative effects of natural drying and of artificial mechanical drying. The technical directions in the development of solar drying systems for agricultural products are compact collector designs with high efficiency and low cost. In this study, using the solar selective surface produced by Selektif Teknoloji Co. Inc. Ltd., solar dryers with high efficiency will be developed and a feasibility study will be carried out.

Keywords: energy, renewable energy, solar collector, solar drying

Procedia PDF Downloads 225
485 Solids and Nutrient Loads Exported by Preserved and Impacted Low-Order Streams: A Comparison among Water Bodies in Different Latitudes in Brazil

Authors: Nicolas R. Finkler, Wesley A. Saltarelli, Taison A. Bortolin, Vania E. Schneider, Davi G. F. Cunha

Abstract:

Estimating the relative contributions of nonpoint and point sources of pollution in low-order streams is an important tool for water resources management. The location of headwaters in areas with anthropogenic impacts from urbanization and agriculture is a common scenario in developing countries. This condition can lead to conflicts among different water users and compromise ecosystem services. Water pollution also contributes to exporting organic loads to downstream areas, including higher-order rivers. The purpose of this research is to preliminarily assess the nutrient and solids loads exported by water bodies located in watersheds with different types of land use in São Carlos - SP (latitude -22.0087, longitude -47.8909) and Caxias do Sul - RS (latitude -29.1634, longitude -51.1796), Brazil, using regression analysis. The variables analyzed in this study were Total Kjeldahl Nitrogen (TKN), nitrate (NO3-), Total Phosphorus (TP), and Total Suspended Solids (TSS). Data were obtained in October and December 2015 for São Carlos (SC) and in November 2012 and March 2013 for Caxias do Sul (CXS); these periods had similar weather patterns regarding precipitation and temperature. Altogether, 11 sites were divided into two groups: some classified as more pristine (SC1, SC4, SC5, SC6 and CXS2), with a predominance of native forest, and others considered impacted (SC2, SC3, CXS1, CXS3, CXS4 and CXS5), presenting larger urban and/or agricultural areas. A preliminary linear regression was applied to the flow and drainage area data of each site (R² = 0.9741), suggesting that the loads to be assessed had a significant relationship with the drainage areas. Thereafter, regression analysis was conducted between the drainage areas and the total loads for the two land use groups. The R² values were 0.070, 0.830, 0.752 and 0.455, respectively, for the TSS, TKN, NO3- and TP loads in the more preserved areas, suggesting that the loads generated by runoff are significant in these locations. However, the respective R² values for sites located in impacted areas were 0.488, 0.054, 0.519 and 0.059 for the TSS, TKN, NO3- and TP loads, indicating a less important relationship between total loads and runoff compared with the previous scenario. This study suggests three possible conclusions that will be further explored in the full-text article, with more sampling sites and periods: a) in preserved areas, nonpoint sources of pollution are more significant in determining water quality in relation to the studied variables; b) the nutrient (TKN and TP) loads in impacted areas may be associated with point sources such as domestic wastewater discharges with inadequate treatment levels; and c) the presence of NO3- in impacted areas can be associated with runoff, particularly in agricultural areas, where the application of fertilizers is common at certain times of the year.
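
The load-versus-drainage-area regression at the heart of this comparison can be sketched as below; the areas and loads are placeholder values, not the measured site data.

```python
import numpy as np
from scipy import stats

# Sketch of the drainage-area vs. load regression used to judge how
# runoff-driven each constituent is (placeholder values; the paper's
# site data are not reproduced).
area_km2 = np.array([2.1, 3.8, 5.5, 8.9, 12.4])       # drainage areas
tkn_load = np.array([0.8, 1.6, 2.1, 3.5, 4.9])        # kg/day, illustrative

res = stats.linregress(area_km2, tkn_load)
print(res.rvalue**2, res.slope, res.pvalue)           # high R^2 => load scales
                                                      # with drainage area
```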

Keywords: land use, linear regression, point and non-point pollution sources, streams, water resources management

Procedia PDF Downloads 307
484 Nondestructive Monitoring of Atomic Reactions to Detect Precursors of Structural Failure

Authors: Volodymyr Rombakh

Abstract:

This article was written to substantiate the possibility of detecting the precursors of catastrophic destruction of a structure or device and stopping its operation beforehand. Damage to solids results from breaking the bonds between atoms, which requires energy. Modern theories of strength and fracture assume that such energy is due to stress. However, in a letter to W. Thomson (Lord Kelvin) dated December 18, 1856, J. C. Maxwell provided evidence that elastic energy cannot destroy solids. He proposed an equation for estimating a deformable body's energy as the sum of two energies: the first term does not change under symmetrical compression, while the second corresponds to distortion without compression. Both types of energy are represented in the equation as quadratic functions of strain, and Maxwell repeatedly wrote that it is not stress but strain. Furthermore, he noted that the nature of the energy causing the distortion was unknown to him. An article devoted to theories of elasticity was published in 1850. Maxwell tried to express mechanical properties with the help of optics, which became possible only after the creation of quantum mechanics. However, Maxwell's work on elasticity is not cited in theories of strength and fracture, whose authors and their associates are still trying to describe the phenomena they observe on the basis of classical mechanics. The study of Faraday's experiments and of Maxwell's and Rutherford's ideas made it possible to discover a previously unknown area of electromagnetic radiation. The properties of the photons emitted in this reaction are fundamentally different from those of photons emitted in nuclear reactions and are caused by the transition of electrons in an atom. Photons are released during all processes in the universe, including from plants and organs under natural conditions; their penetrating power in metal is millions of times greater than that of gamma rays, yet they are not non-invasive. This apparent contradiction arises because the chaotic motion of protons is accompanied by chaotic radiation of photons in time and space; such photons are not coherent. The energy of a solitary photon is insufficient to break the bond between atoms, one of the stages of which is ionization. Photographs registered the rail deformation caused by 113 cars, while a Geiger counter did not. The author's studies show that the cause of damage to a solid is the breaking of bonds between a finite number of atoms due to the stimulated emission of metastable atoms. The guarantee of the reliability of a structure is the ratio of the energy dissipation rate to the energy accumulation rate, not the strength, which is not a physical parameter since it can be neither measured nor calculated. The possibility of continuous control of this ratio is due to the spontaneous emission of photons by metastable atoms. The article presents examples of calculations of the destruction energy, and photographs of effects due to the action of photons emitted during the atomic-proton reaction.

Keywords: atomic-proton reaction, precursors of man-made disasters, strain, stress

Procedia PDF Downloads 92
483 Peptide-Gold Nanocluster as an Optical Biosensor for Glycoconjugate Secreted from Leishmania

Authors: Y. A. Prada, Fanny Guzman, Rafael Cabanzo, John J. Castillo, Enrique Mejia-Ospino

Abstract:

In this work, we show important results on the synthesis of photoluminescent gold nanoclusters using a small peptide as a template for biosensing applications. We designed a peptide (NBC2854) homologous to the conserved domain spanning residues 215-250 of a galactolectin protein, which can recognize the proteophosphoglycans (PPG) from Leishmania. The peptide was synthesized by multiple solid-phase synthesis using the Fmoc group methodology in acid medium. It was then purified by high-performance liquid chromatography using a Vydac C-18 preparative column, with detection at 215 nm using a photodiode array detector. The molecular mass of the peptide was confirmed by MALDI-TOF, and circular dichroism was used to verify its α-helix structure. By means of this methodology, we obtained novel fluorescent gold nanoclusters (AuNC) using NBC2854 as a template. We describe an easy and fast microsonic method for the synthesis of AuNC with a hydrodynamic size of ≈ 3.0 nm and photoemission at 630 nm. The presence of a cysteine residue at the C-terminus of the peptide allows the formation of an Au-S bond, which confers stability to the peptide-based gold nanoclusters. Interactions between the peptide and the gold nanoclusters were confirmed by X-ray photoemission and Raman spectroscopy. Notably, the ultrafine spectra in the MALDI-TOF analysis, containing only 3-7 kDa species, were assigned to Au₈₋₁₈[NBC2854]₂ clusters. Finally, we evaluated the peptide-gold nanocluster as an optical biosensor based on fluorescence spectroscopy: the fluorescence signal in the presence of PPG (0.1 µg·mL⁻¹ to 1000 µg·mL⁻¹) was amplified at the same emission wavelength (≈ 630 nm). This suggests a strong interaction between PPG and Pep@AuNC; the increase in fluorescence intensity can therefore be related to the association mechanism that takes place when the target molecule is sensed by the Pep@AuNC conjugate. Further spectroscopic studies are necessary to elucidate the fluorescence mechanism involved in the sensing of PPG by the Pep@AuNC. To the best of our knowledge, this is the first report of an optical biosensor based on Pep@AuNC for sensing biomolecules such as proteophosphoglycans, which are secreted in abundance by Leishmania parasites.

Keywords: biosensing, fluorescence, Leishmania, peptide-gold nanoclusters, proteophosphoglycans

Procedia PDF Downloads 169
482 Distraction from Pain: An fMRI Study on the Role of Age-Related Changes in Executive Functions

Authors: Katharina M. Rischer, Angelika Dierolf, Ana M. Gonzalez-Roldan, Pedro Montoya, Fernand Anton, Marian van der Meulen

Abstract:

Even though age has been associated with increased and prolonged episodes of pain, little is known about potential age-related changes in the 'top-down' modulation of pain, such as cognitive distraction from pain. The analgesic effects of distraction result from competition for attentional resources in the prefrontal cortex (PFC), a region that is also involved in executive functions. Given that the PFC shows pronounced age-related atrophy, distraction may be less effective in reducing pain in older compared to younger adults. The aim of this study was to investigate the influence of aging on task-related analgesia and the underpinning neural mechanisms, with a focus on the role of executive functions in distraction from pain. In a first session, 64 participants (32 young adults: 26.69 ± 4.14 years; 32 older adults: 68.28 ± 7.00 years) completed a battery of neuropsychological tests. In a second session, participants underwent a pain distraction paradigm while fMRI images were acquired. In this paradigm, participants completed a low-load (0-back) and a high-load (2-back) condition of a working memory task while receiving either warm or painful thermal stimuli to their lower arm. To control for age-related differences in sensitivity to pain and perceived task difficulty, stimulus intensity and task speed were individually calibrated. Results indicate that both age groups showed significantly reduced activity in a network of regions involved in pain processing when completing the high-load distraction task; however, young adults showed a larger neural distraction effect in different parts of the insula and the thalamus. Moreover, better executive functions, in particular inhibitory control abilities, were associated with larger behavioral and neural distraction effects. These findings clearly demonstrate that top-down control of pain is affected in older age, which could explain the higher vulnerability of older adults to developing chronic pain. Moreover, our findings suggest that the assessment of executive functions may be a useful tool for predicting the efficacy of cognitive pain modulation strategies in older adults.

Keywords: executive functions, cognitive pain modulation, fMRI, PFC

Procedia PDF Downloads 144
481 Comparison of Sediment Rating Curve and Artificial Neural Network in Simulation of Suspended Sediment Load

Authors: Ahmad Saadiq, Neeraj Sahu

Abstract:

Sediment, which comprises solid particles of mineral and organic material, is transported by water. In river systems, the amount of sediment transported is controlled by both the transport capacity of the flow and the supply of sediment. The transport of sediment in rivers is important with respect to pollution, channel navigability, reservoir ageing, hydroelectric equipment longevity, fish habitat, river aesthetics, and scientific interest. The sediment load transported in a river is a very complex hydrological phenomenon. Hence, sediment transport has attracted the attention of engineers from various angles, and different methods have been used for its estimation, with several empirical equations having been proposed by experts. Though the results of these methods differ considerably from each other and from experimental observations, because sediment measurements have inherent limits, these equations can be used to estimate sediment load. In the present study, two black-box models, an SRC (Sediment Rating Curve) and an ANN (Artificial Neural Network), are used in the simulation of the suspended sediment load. The study is carried out for the Seonath sub-basin. Seonath, the biggest tributary of the Mahanadi river, carries a vast amount of sediment. The data were collected for the Jondhra hydrological observation station from India-WRIS (Water Resources Information System) and the IMD (Indian Meteorological Department). These data include discharge, sediment concentration, and rainfall for 10 years. In this study, the sediment load is estimated from the input parameters (discharge, rainfall, and past sediment) in various combinations of simulations. The sediment rating curve uses the water discharge to estimate the sediment concentration, which is then converted to sediment load. Likewise, for application to the ANN, the data are first normalised and then fed in various combinations to yield the sediment load. The RMSE (root mean square error) and R² (coefficient of determination) between the observed and estimated loads are used as the evaluation criteria; for an ideal model, RMSE is zero and R² is 1. However, as the models used in this study are black-box models, they do not carry an exact representation of the factors which cause sedimentation. Hence, the model which gives the lowest RMSE and highest R² is the best model in this study. The lowest values of RMSE (based on normalised data) for the sediment rating curve, feed-forward back propagation, cascade-forward back propagation, and neural network fitting are 0.043425, 0.00679781, 0.0050089, and 0.0043727, respectively. The corresponding values of R² are 0.8258, 0.9941, 0.9968, and 0.9976. This implies that the neural network fitting model is superior to the other models used in this study. However, a drawback of neural network fitting is that it produces a few negative estimates, which is not at all tolerable in the estimation of sediment load, and hence this model cannot be crowned the best model based on this study. Cascade-forward back propagation produces results much closer to the neural network fitting model, and hence this model is the best model based on the present study.
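
A sediment rating curve is commonly fitted as a power law Qs = a·Q^b by log-log regression, as in the minimal sketch below; the discharge/load pairs are synthetic stand-ins for the Jondhra station record.

```python
import numpy as np
from scipy import stats

# Sediment rating curve sketch: the usual power law Qs = a * Q**b
# fitted by log-log regression (synthetic discharge/sediment pairs;
# the Jondhra station data are not reproduced here).
Q = np.array([120., 340., 560., 910., 1500., 2300.])   # discharge, m^3/s
Qs = np.array([15., 80., 160., 420., 980., 2100.])     # sediment load, t/day

b, log_a, r, p, se = stats.linregress(np.log(Q), np.log(Qs))
a = np.exp(log_a)

Qs_hat = a * Q**b                                      # rating-curve estimate
rmse = np.sqrt(np.mean((Qs - Qs_hat) ** 2))
r2 = 1 - np.sum((Qs - Qs_hat) ** 2) / np.sum((Qs - Qs.mean()) ** 2)
print(a, b, rmse, r2)
```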

Keywords: artificial neural network, Root mean squared error, sediment, sediment rating curve

Procedia PDF Downloads 325
480 Improvement of Electric Aircraft Endurance through an Optimal Propeller Design Using Combined BEM, Vortex and CFD Methods

Authors: Jose Daniel Hoyos Giraldo, Jesus Hernan Jimenez Giraldo, Juan Pablo Alvarado Perilla

Abstract:

Range and endurance are the main limitations of electric aircraft due to the nature of their source of power. Improving the efficiency of this kind of system is extremely meaningful to encourage aircraft operation with less environmental impact. The propeller efficiency strongly affects the overall efficiency of the propulsion system; hence its optimization can have an outstanding effect on aircraft performance. An optimization method is applied to an aircraft propeller in order to maximize range and endurance by estimating the best combination of geometrical parameters, such as diameter, airfoil, and chord and pitch distributions, for a specific aircraft design at a certain cruise speed; the rotational speed at which the propeller operates at minimum current consumption is then estimated. The optimization is based on the Blade Element Momentum (BEM) method, corrected to account for tip and hub losses, Mach number, and rotational effects. Furthermore, an approximation of the airfoil lift and drag coefficients is implemented from Computational Fluid Dynamics (CFD) simulations, supported by preliminary studies of grid independence and the suitability of different turbulence models, to feed the BEM method with the aim of achieving more reliable results. Additionally, Vortex Theory is employed to find the optimum pitch and chord distributions for a minimum induced-loss propeller design. Moreover, the optimization takes into account the well-known brushless motor model, thrust constraints for take-off runway limitations, the maximum allowable propeller diameter due to aircraft height, and the maximum motor power. The BEM-CFD method is validated by comparing its predictions for a known APC propeller with both available experimental tests and the APC reported performance curves, which are based on Vortex Theory fed with the NASA transonic airfoil code, showing an adequate fit with experimental data, even better than the reported APC data. The optimal propeller predictions are validated by wind tunnel tests, CFD propeller simulations, and a study of how the propeller would perform if it replaced that of a known aircraft. Tendency charts relating a wide range of parameters, such as diameter, voltage, pitch, rotational speed, current, and propeller and electric efficiencies, are obtained and discussed. The implementation of CFD tools shows an improvement in the accuracy of the BEM predictions. The results also showed that a propeller has higher efficiency peaks when it operates at high rotational speed, due to the higher Reynolds number at which the airfoils present lower drag. On the other hand, the behavior of the current consumption related to the propulsive efficiency shows counterintuitive results: the best range and endurance are not necessarily achieved at an efficiency peak.
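
For orientation, the sketch below iterates the induction factors of a single blade element with a Prandtl tip-loss factor, the basic fixed-point at the core of any propeller BEM code. The geometry, operating point, and toy airfoil polar are assumptions for the example and are unrelated to the paper's optimized designs.

```python
import numpy as np

# Minimal propeller BEM sketch: fixed-point iteration of the axial (a)
# and tangential (ap) induction factors for one blade element, with a
# Prandtl tip-loss factor F. All numbers are illustrative assumptions.
B, R, r, c = 2, 0.5, 0.35, 0.04      # blades, tip radius [m], element radius, chord
beta = np.radians(15.0)              # local blade pitch angle
V, omega, rho = 30.0, 500.0, 1.225   # airspeed [m/s], rotation [rad/s], density

def polar(alpha):                    # toy thin-airfoil polar
    return 2 * np.pi * alpha, 0.01 + 0.05 * alpha**2

a, ap = 0.0, 0.0
for _ in range(500):
    phi = np.arctan2(V * (1 + a), omega * r * (1 - ap))    # inflow angle
    alpha = beta - phi
    cl, cd = polar(alpha)
    cn = cl * np.cos(phi) - cd * np.sin(phi)               # thrust-wise coeff.
    ct = cl * np.sin(phi) + cd * np.cos(phi)               # torque-wise coeff.
    f = B * (R - r) / (2 * r * np.sin(abs(phi)) + 1e-12)
    F = (2 / np.pi) * np.arccos(np.exp(-f))                # Prandtl tip loss
    sigma = B * c / (2 * np.pi * r)                        # local solidity
    a_new = sigma * cn / (4 * F * np.sin(phi) ** 2 - sigma * cn)
    ap_new = sigma * ct / (4 * F * np.sin(phi) * np.cos(phi) + sigma * ct)
    if abs(a_new - a) < 1e-10 and abs(ap_new - ap) < 1e-10:
        break
    a, ap = 0.5 * (a + a_new), 0.5 * (ap + ap_new)         # relaxed update

W2 = (V * (1 + a)) ** 2 + (omega * r * (1 - ap)) ** 2      # local speed squared
dT_dr = 0.5 * rho * W2 * B * c * cn                        # thrust per unit span
print(a, ap, dT_dr)
```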

Keywords: BEM, blade design, CFD, electric aircraft, endurance, optimization, range

Procedia PDF Downloads 108
479 Effect of Injection Pressure and Fuel Injection Timing on Emission and Performance Characteristics of Karanja Biodiesel and its Blends in CI Engine

Authors: Mohan H., C. Elajchet Senni

Abstract:

In the present era of high energy consumption in every sphere of life, renewable energy sources are emerging as alternatives to conventional fuels for energy security, mitigating greenhouse gas emissions, and addressing climate change. There has been worldwide interest in searching for alternatives to petroleum-derived fuels due to their depletion as well as concern for the environment. Vegetable oils have the capability to solve this problem because they are renewable and lead to a reduction in environmental pollution. However, high smoke emission and lower thermal efficiency are the main problems associated with the use of neat vegetable oils in diesel engines. In the present work, the performance, combustion, and emission characteristics of a CI engine fuelled with a 20% by volume blend of karanja seed oil methyl ester in diesel (B20) were studied at fuel injection pressures of 200 bar and 240 bar and injection timings of 21°, 23°, and 25° BTDC. Various performance, combustion, and emission characteristics, such as thermal efficiency, brake specific fuel consumption, maximum cylinder pressure, instantaneous heat release, cumulative heat release with respect to crank angle, ignition lag, combustion duration, HC, NOx, CO, exhaust temperature, and smoke intensity, were measured.

Keywords: karanja oil, injection pressure, injection timing, karanja oil methyl ester

Procedia PDF Downloads 290
478 Hand Gesture Detection via EmguCV Canny Pruning

Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae

Abstract:

Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable to Human-Computer Interaction (HCI), Expert Systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool, used mostly by deaf communities and people with speech disorders. Communication barriers exist when people with speech disorders interact with others. This research aims to build a hand recognition system for interpretation between Lesotho's Sesotho sign language and English. The system will help to bridge the communication problems encountered by the mentioned communities. The system has several processing modules: a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is the process of identifying an object. The proposed system uses Canny pruning with Haar and Haar cascade detection algorithms. Canny pruning builds on Canny edge detection, an optimal image processing algorithm used to detect the edges of an object. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and the centroid to assist in the detection process. Recognition is the process of gesture classification; template matching classifies each hand gesture in real time. The system was tested in various experiments. The results obtained show that time, distance, and lighting are factors that affect the rate of detection and, ultimately, recognition. The detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were also considered: the higher the light intensity, the faster the detection rate. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system which can be used for sign language interpretation.
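
The sketch below shows the Python/OpenCV counterpart of the EmguCV detection call with Canny pruning enabled; 'hand_cascade.xml' is a placeholder for a trained hand Haar cascade (OpenCV does not ship one), and the capture/display loop is an assumption about the surrounding application.

```python
import cv2

# Cascade detection with Canny pruning in OpenCV (Python counterpart
# of the EmguCV calls; 'hand_cascade.xml' is a placeholder for a
# trained hand Haar cascade).
cascade = cv2.CascadeClassifier("hand_cascade.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # CASCADE_DO_CANNY_PRUNING skips regions with too few or too many
    # Canny edges, speeding up detection as described in the paper.
    hands = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     flags=cv2.CASCADE_DO_CANNY_PRUNING,
                                     minSize=(60, 60))
    for (x, y, w, h) in hands:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("hands", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```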

Keywords: canny pruning, hand recognition, machine learning, skin tracking

Procedia PDF Downloads 185
477 In vitro Characterization of Mice Bone Microstructural Changes by Low-Field and High-Field Nuclear Magnetic Resonance

Authors: Q. Ni, J. A. Serna, D. Holland, X. Wang

Abstract:

The objective of this study is to develop Nuclear Magnetic Resonance (NMR) techniques that enhance bone-related research, applied in vitro to normal and disuse (biglycan knockout) mouse bone using both low-field and high-field NMR. It is known that the total amplitude of the T₂ relaxation envelopes, measured by the Carr-Purcell-Meiboom-Gill (CPMG) NMR spin echo train, is a representation of the liquid phase inside the pores. Therefore, the NMR CPMG magnetization amplitude can be converted to a volume of water after calibration against the NMR signal amplitude of a known volume of water. In this study, the distribution of mobile water and the porosity are determined using the low-field (20 MHz) CPMG relaxation technique, and the pore size distributions are determined by a computational inversion relaxation method. It is also known that the total proton intensity of magnetization from the NMR free induction decay (FID) signal is due to the water present inside the pores (mobile water), the water that has undergone hydration with the bone (bound water), and the protons in the collagen and mineral matter (solid-like protons). Therefore, the components of total mobile and bound water within bone can be determined by the low-field NMR free induction decay technique. Furthermore, the bound water in the solid phase (mineral and organic constituents), in particular the dominant component of calcium hydroxyapatite (Ca₁₀(PO₄)₆(OH)₂), can be determined using high-field (400 MHz) magic angle spinning (MAS) NMR. With the MAS technique reducing the inhomogeneous and susceptibility broadening of the NMR spectral linewidth in the liquid-solid mix, we can, in particular, investigate the ¹H and ³¹P environments of bone materials to identify the locations of bound water, such as the OH⁻ groups within the minerals and the bone architecture. We hypothesize that low-field and high-field magic angle spinning NMR together can provide a more complete interpretation of the water distribution, particularly of bound water, and these data are important to assess bone quality and predict the mechanical behavior of bone.
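
As a simplified illustration of how water pools are separated from a CPMG echo train, the sketch below fits a biexponential T₂ decay to synthetic data; a full analysis would instead invert the decay into a continuous T₂ distribution, and all parameter values here are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: extract water fractions from a CPMG echo train by a
# biexponential T2 fit (bound vs. mobile water pools). Synthetic
# echo data; all amplitudes and T2 values are assumptions.
t = np.linspace(0.1e-3, 80e-3, 400)                    # echo times, s

def cpmg(t, a_bound, t2_bound, a_mobile, t2_mobile):
    return a_bound * np.exp(-t / t2_bound) + a_mobile * np.exp(-t / t2_mobile)

truth = (0.6, 0.4e-3, 0.4, 20e-3)                      # amplitudes, T2s
y = cpmg(t, *truth) + 0.005 * np.random.default_rng(2).standard_normal(t.size)

p0 = (0.5, 0.5e-3, 0.5, 10e-3)                         # initial guess
popt, _ = curve_fit(cpmg, t, y, p0=p0, bounds=(0, np.inf))
a_b, t2_b, a_m, t2_m = popt
bound_fraction = a_b / (a_b + a_m)                     # bound-water fraction
print(t2_b, t2_m, bound_fraction)
```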

Keywords: bone, mice bone, NMR, water in bone

Procedia PDF Downloads 177
476 Electrophoretic Light Scattering Based on Total Internal Reflection as a Promising Diagnostic Method

Authors: Ekaterina A. Savchenko, Elena N. Velichko, Evgenii T. Aksenov

Abstract:

The development of pathological processes, such as cardiovascular and oncological diseases, is accompanied by changes in molecular parameters in cells, tissues, and serum. The study of the behavior of protein molecules in solution is of primary importance for the diagnosis of such diseases. Various physical and chemical methods are used to study molecular systems. With the advent of the laser and advances in electronics, optical methods such as scanning electron microscopy, sedimentation analysis, nephelometry, and static and dynamic light scattering have become the most universal, informative, and accurate tools for estimating the parameters of nanoscale objects. Electrophoretic light scattering is the most effective of these techniques. It has high potential in the study of biological solutions and their properties, allowing one to investigate the processes of aggregation and dissociation of different macromolecules and to obtain information on their shapes, sizes, and molecular weights. Electrophoretic light scattering is an analytical method for registering the motion of microscopic particles under the influence of an electric field by means of quasi-elastic light scattering in a homogeneous solution, with subsequent registration of the spectral or correlation characteristics of the light scattered from the moving object. We modified the technique by using the regime of total internal reflection with the aim of increasing its sensitivity and reducing the volume of the sample to be investigated, which opens the prospect of automating simultaneous multiparameter measurements. In addition, the total internal reflection method allows one to study biological fluids at the level of single molecules, which also makes it possible to increase the sensitivity and informativeness of the results, because data obtained from an individual molecule are not averaged over an ensemble; this is important in the study of biomolecular fluids. To the best of our knowledge, the study of electrophoretic light scattering in the regime of total internal reflection is proposed here for the first time; latex microspheres 1 μm in size were used as test objects. In this study, the total internal reflection regime was realized on a quartz prism on which the free electrophoresis regime was established. A semiconductor laser with a wavelength of 655 nm was used as the radiation source, and the light scattering signal was registered by a pin-diode, from which the signal was transmitted to a digital oscilloscope and a computer. The autocorrelation functions and fast Fourier transforms were calculated in the regime of Brownian motion and under the action of the field to obtain the parameters of the investigated object. The main result of the study was the dependence of the autocorrelation function on the concentration of microspheres and the applied field magnitude. The effect of heating became more pronounced with increasing sample concentration and electric field. The results obtained in our study demonstrate the applicability of the method to the examination of liquid solutions, including biological fluids.
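
The signal-processing step described here, autocorrelation and FFT of the digitized photodetector signal to extract the field-induced frequency shift, can be sketched as below; the sampling rate and beat frequency are assumptions, and the 655 nm optical setup itself is not modeled.

```python
import numpy as np

# Sketch of the signal path: autocorrelation and FFT of a digitized
# photodetector signal to extract the drift-induced Doppler beat
# frequency (synthetic signal; setup parameters are assumptions).
fs = 50_000                                   # sampling rate, Hz (assumed)
t = np.arange(0, 0.2, 1 / fs)
f_doppler = 800.0                             # beat frequency, Hz (assumed)
rng = np.random.default_rng(3)
sig = np.cos(2 * np.pi * f_doppler * t) + 0.5 * rng.standard_normal(t.size)

sig = sig - sig.mean()
ac = np.correlate(sig, sig, mode="full")[sig.size - 1:]   # autocorrelation
ac /= ac[0]

spec = np.abs(np.fft.rfft(sig)) ** 2                      # power spectrum
freqs = np.fft.rfftfreq(sig.size, 1 / fs)
print(freqs[np.argmax(spec[1:]) + 1])                     # ~ f_doppler
```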

Keywords: light scattering, electrophoretic light scattering, electrophoresis, total internal reflection

Procedia PDF Downloads 214
475 Present Status, Driving Forces and Pattern Optimization of Territory in Hubei Province, China

Authors: Tingke Wu, Man Yuan

Abstract:

“National Territorial Planning (2016-2030)” was issued by the State Council of China in 2017. As an important step in putting it into effect, territorial planning at the provincial level makes overall arrangements for territorial development, resource and environmental protection, comprehensive renovation, and security system construction. Hubei Province, as the pivot of the “Rise of Central China” national strategy, is now confronted with great opportunities and challenges in territorial development, protection, and renovation. The territorial spatial pattern has evolved over a long period under multiple internal and external driving forces, yet it remains unclear what the main causes of its formation are and what the effective ways of optimizing it would be. By analyzing land use data from 2016, this paper reveals the present status of territory in Hubei. Combined with economic and social data and construction information, the driving forces of the territorial spatial pattern are then analyzed. The research demonstrates that the three types of territorial space aggregate distinctly. The driving forces comprise four aspects: the natural background, which sets the stage for the main functions; population and economic factors, which generate agglomeration effects; transportation infrastructure construction, which leads to axial expansion; and significant provincial strategies, which reinforce the established path. On this basis, targeted strategies for optimizing the territorial spatial pattern are put forward. A hierarchical protection pattern should be established on the basis of development intensity control, out of respect for nature. By optimizing the layout of population and industry and improving the transportation network, a polycentric, network-based development pattern can be established. These findings provide a basis for the Hubei Territorial Plan and a reference for future territorial planning in other provinces.

Keywords: driving forces, Hubei, optimizing strategies, spatial pattern, territory

Procedia PDF Downloads 105
474 Postmortem Magnetic Resonance Imaging as an Objective Method for the Differential Diagnosis of a Stillborn and a Neonatal Death

Authors: Uliana N. Tumanova, Sergey M. Voevodin, Veronica A. Sinitsyna, Alexandr I. Shchegolev

Abstract:

An important part of forensic and autopsy research in perinatology is answering the question of live birth versus stillbirth. Postmortem magnetic resonance imaging (MRI) is an objective, non-invasive research method that allows data to be stored for a long time and the diagnosis to be clarified without exhuming the body. The purpose of this research is to study whether postmortem MRI can distinguish a stillborn from a newborn who breathed spontaneously and died on the first day after birth. MRI and morphological data were compared for the bodies of 23 stillborns who died prenatally at gestational ages of 22-39 weeks (Group I) and the bodies of 16 newborns who died 2 to 24 hours after birth (Group II). Before the autopsy, postmortem MRI was performed on a Siemens Magnetom Verio 3T scanner with the body in the supine position. The control group for the MRI studies consisted of 7 live newborns without lung disease (Group III). On T2WI in the sagittal projection, the MR signal intensity (SI) was measured in the lung tissue (L) and the shoulder muscle (M). During the autopsy, a pulmonary flotation (swimming) test was evaluated, and macro- and microscopic studies were performed. According to the postmortem MRI, the highest mean SI values of the lung (430 ± 27.99) and of the muscle (405.5 ± 38.62) on T2WI were found in Group I, exceeding the corresponding values of Group II by a factor of 2.7. The lowest values were found in the control group: 77.9 ± 12.34 and 119.7 ± 6.3, respectively. In Group II, the lung SI was 1.6 times higher than the muscle SI, whereas in Group I and in the control group, the muscle SI was 2.1 and 1.8 times larger than that of the lung, respectively. On the basis of the clinical and morphological data, we derived a formula for a breathing index (BI) on postmortem MRI: BI = SIL × SIM / 100. The mean BI in Group I (1801.14 ± 241.6; range 756 to 3744) was significantly higher than that in Group II (455.89 ± 137.32, p < 0.05; range 305 to 638.4). In the control group, the mean BI was 91.75 ± 13.3 (range 53 to 154). The BI values were compared with the results of the pulmonary flotation tests and the microscopic examination of the lungs. The boundary value of BI for the differential diagnosis of stillbirth and newborn death was 700. Postmortem MRI thus allows a stillborn to be differentiated from a newborn who breathed before death.
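
The breathing index is simple enough to compute directly; the sketch below implements the published formula and the reported cut-off of 700, reproducing the group means given above. The classification labels are ours, added for illustration only.

```python
# Sketch: the breathing index (BI) from T2WI signal intensities, exactly as
# defined in the abstract, with the reported cut-off of 700. Labels are
# illustrative, not diagnostic terminology from the study.
def breathing_index(si_lung: float, si_muscle: float) -> float:
    """BI = SI_L x SI_M / 100, from lung and muscle signal intensity on T2WI."""
    return si_lung * si_muscle / 100.0

def classify(bi: float, cutoff: float = 700.0) -> str:
    return "stillborn" if bi > cutoff else "newborn breathed before death"

# The group means reported above reproduce the published BI ranges:
print(breathing_index(430.0, 405.5))   # ~1744, Group I (stillborn)
print(breathing_index(77.9, 119.7))    # ~93, live newborn control
```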

Keywords: lung, newborn, postmortem MRI, stillborn

Procedia PDF Downloads 128
473 Earthquake Risk Assessment Using Out-of-Sequence Thrust Movement

Authors: Rajkumar Ghosh

Abstract:

Earthquakes are natural disasters that pose a significant risk to human life and infrastructure. Effective earthquake mitigation measures require a thorough understanding of the dynamics of seismic events, including thrust movement. Traditionally, estimating thrust movement has relied on standard techniques that may not capture the full complexity of these events; incorporating out-of-sequence thrust movement data could therefore enhance earthquake mitigation strategies. This review provides an overview of the applications of out-of-sequence thrust movement in earthquake mitigation. By examining existing research and studies, the objective is to understand how precise estimation of thrust movement can contribute to improving structural design, analyzing infrastructure risk, and developing early warning systems. The study demonstrates how to estimate out-of-sequence thrust movement from multiple data sources, including GPS measurements, satellite imagery, and seismic recordings; by analyzing and synthesizing these diverse datasets with appropriate statistical and computational techniques, researchers can gain a more comprehensive understanding of thrust movement dynamics during seismic events. The review integrates findings from multiple studies and identifies the potential advantages of incorporating out-of-sequence data in earthquake mitigation: more efficient structural design, better infrastructure risk analysis, and more accurate early warning systems. By considering out-of-sequence thrust movement estimates, researchers and policymakers can make informed decisions to mitigate the impact of earthquakes. The study thus contributes to seismic monitoring and earthquake risk assessment by broadening the scope of analysis beyond traditional techniques. It concludes that incorporating out-of-sequence thrust movement data can significantly enhance earthquake mitigation measures. Challenges remain, however, such as data quality issues, modelling uncertainties, and computational complications; further research and methodological advances are recommended to address these obstacles and improve the accuracy of the estimates. Overall, this review serves as a resource for researchers, engineers, and policymakers involved in earthquake mitigation, encouraging the development of innovative strategies based on a better understanding of thrust movement dynamics.

Keywords: earthquake, out-of-sequence thrust, disaster, human life

Procedia PDF Downloads 77
472 Monitoring Air Pollution Effects on Children for Supporting Public Health Policy: Preliminary Results of MAPEC_LIFE Project

Authors: Elisabetta Ceretti, Silvia Bonizzoni, Alberto Bonetti, Milena Villarini, Marco Verani, Maria Antonella De Donno, Sara Bonetta, Umberto Gelatti

Abstract:

Introduction: Air pollution is a global problem. In 2013, the International Agency for Research on Cancer (IARC) classified air pollution and particulate matter as carcinogenic to humans. Studying the health effects of air pollution in children is very important because they are a high-risk group, and early exposure during childhood can increase the risk of developing chronic diseases in adulthood. MAPEC_LIFE (Monitoring Air Pollution Effects on Children for supporting public health policy) is a project funded by the EU Life+ Programme that aims to evaluate the associations between air pollution and early biological effects in children and to propose a model for estimating the global risk of early biological effects due to air pollutants and other factors in children. Methods: The study was carried out on 6-8-year-old children living in five Italian towns in two different seasons. Two biomarkers of early biological effects, primary DNA damage detected with the comet assay and the frequency of micronuclei, were investigated in buccal cells of the children. Details of the children's diseases, socio-economic status, exposure to other pollutants, and lifestyle were collected using a questionnaire administered to the children's parents. Child exposure to urban air pollution was assessed by analysing PM0.5 samples collected in the school areas for PAH and nitro-PAH concentrations, lung toxicity, and in vitro genotoxicity on bacterial and human cells. Data on the chemical features of the urban air during the study period were obtained from the Regional Agency for Environmental Protection. The project also created the opportunity to approach the issue of air pollution with the children, raising their awareness of air quality, its health effects, and healthy behaviors by means of an educational intervention in the schools. Results: 1315 children were recruited for the study and participated in the first sampling campaign in the five towns. The second campaign, on the same children, is still ongoing. The preliminary results of the tests on buccal mucosa cells of the children will be presented at the conference, as will the preliminary data on the chemical composition and the toxicity and genotoxicity features of the PM0.5 samples. The educational package was tested on 250 primary school children and proved very useful, improving the children's knowledge about air pollution and its effects and stimulating their interest. Conclusions: The associations between the levels of air pollutants, air mutagenicity, and biomarkers of early effects will be investigated. A tentative model to calculate the global absolute risk of early biological effects from air pollution and other variables together will be proposed, which may be useful for supporting policy-making and community interventions to protect children from the possible health effects of air pollutants.

Keywords: air pollution exposure, biomarkers of early effects, children, public health policy

Procedia PDF Downloads 330
471 Structural Health Monitoring of Buildings–Recorded Data and Wave Method

Authors: Tzong-Ying Hao, Mohammad T. Rahmani

Abstract:

This article presents a structural health monitoring (SHM) method based on changes in wave travel times (the wave method) within a layered one-dimensional (1-D) shear beam model of a structure. The wave method measures the velocity of shear waves propagating in a building from the impulse response functions (IRFs) obtained from data recorded at different locations inside the building. If structural damage occurs, the velocity of wave propagation through the structure changes. The wave method analysis is performed on the responses of the Torre Central building, a 9-story shear wall structure located in Santiago, Chile. Because events of different intensity (ambient vibrations, weak and strong earthquake motions) have been recorded at this building, it can serve as a full-scale benchmark to validate the structural health monitoring method employed. The analysis of inter-story drifts and the Fourier spectra for the EW and NS motions during the 2010 Chile earthquake is presented. The results for the NS motions suggest coupling of the translational and torsional responses. The system frequencies, estimated from the recorded relative displacement response of the 8th floor with respect to the basement, initially decreased by approximately 24% in the EW motion; near the end of shaking, an increase of about 17% was detected. These analyses and results serve as baseline indicators of the occurrence of structural damage. The detected changes in the wave velocities of the shear beam model are consistent with the observed damage. However, the 1-D shear beam model is not sufficient to simulate the coupling of translational and torsional responses in the NS motion. The wave method is suitable for actual implementation in structural health monitoring systems, provided the resolution and accuracy of the model are carefully assessed for effective post-earthquake damage detection in buildings.
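
A minimal sketch of the core wave-method computation, assuming the IRFs at two floors have already been deconvolved from the recorded motions: the shear-wave travel time is the lag of the cross-correlation peak between the two IRFs, and velocity follows from the known sensor separation. The sampling rate and story height below are placeholder assumptions, not values for the Torre Central building.

```python
# Sketch: shear-wave travel time between two floors from the lag of the
# cross-correlation peak of their impulse response functions (IRFs).
# FS and HEIGHT_M are assumed placeholders.
import numpy as np

FS = 200.0            # samples per second (assumed)
HEIGHT_M = 24.0       # vertical distance between the two sensors (assumed)

def travel_time_s(irf_lower, irf_upper):
    """Lag (in seconds) of the cross-correlation peak between the IRFs."""
    xc = np.correlate(irf_upper, irf_lower, mode="full")
    lag = np.argmax(xc) - (len(irf_lower) - 1)   # positive: upper arrives later
    return lag / FS

def shear_wave_velocity(irf_lower, irf_upper):
    return HEIGHT_M / travel_time_s(irf_lower, irf_upper)

# Damage indicator: a drop in velocity (longer travel time) between
# pre-event and post-event windows flags reduced stiffness in that story.
```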

Keywords: Chile earthquake, damage detection, earthquake response, impulse response function, shear beam model, shear wave velocity, structural health monitoring, Torre Central building, wave method

Procedia PDF Downloads 368
470 Divergent Weathering on Two Sides of Plastic Fragments from Coastal Environments Around the Globe

Authors: Bo Hu, Mui-Choo Jong, João Frias, Irina Chubarenko, Gabriel Enrique De-la-Torre, Prabhu Kolandhasamy, Md. Jaker Hossain, Elena Esiukova, Lei Su, Hua Deng, Huahong Shi

Abstract:

Plastic debris in coastal environments undergoes a series of aging processes owing to the diverse environmental conditions to which it is exposed. Existing research lacks a thorough understanding of how these processes affect the exposed and non-exposed sides of plastic fragments, which can lead to biased conclusions about how degradation occurs. This study addresses this knowledge gap by examining surface aging characteristics on both sides (e.g., cracks, delamination, pits, wrinkles, and color residues) of 1573 plastic fragments collected from 15 coastal sites worldwide and by conducting outdoor aging simulations. A clear contrast was observed between the two sides of the plastic fragments, with one side often displaying more pronounced aging features. Three key indicators were introduced to quantify the aging characteristics, with values ranging from 0.00 to 58.00 mm/mm² (line density), 0.00 to 92.12% (surface loss), and 0.00 to 1.51 (texture index). The outdoor simulations revealed that the sun-exposed sides of plastic sheets developed more cracks, pores, and bubbles, while the shaded sides remained smoother. The annual average solar radiation intensity of 4.47 kWh in the experimental area exacerbated the degradation of the sun-exposed side, as confirmed by a significant increase in the carbonyl index, with PE rising from 0.50 to 1.70, PP from 0.18 to 1.10, and PVC from 0.45 to 1.57, indicating photoaging. These results highlight the uneven weathering of plastic fragments on shorelines due to varying environmental stresses; in particular, the side facing the sun exhibited more pronounced signs of aging, and the outdoor experiments confirmed that the sun-exposed sides of the fragments weathered to a significantly higher degree than the shaded sides. This study demonstrates that the divergent weathering patterns on the two sides of beach plastic fragments are driven primarily by differences in light exposure, exposure duration, and mechanical stress.
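
For reference, a carbonyl index such as the one reported above is conventionally computed from an FTIR spectrum as the ratio of the carbonyl band absorbance to a stable reference band. The sketch below assumes common band positions for polyethylene (about 1715 cm⁻¹ for the carbonyl, 2915 cm⁻¹ for the CH reference); the study's exact band choices are not specified, so treat these as assumptions.

```python
# Sketch: a carbonyl index (CI) from an FTIR spectrum as the ratio of the
# carbonyl band absorbance to a reference band. Band positions (1715 and
# 2915 cm^-1, typical for PE) are assumed, not taken from the study.
import numpy as np

def band_abs(wavenumbers, absorbance, center, half_width=10.0):
    """Maximum absorbance within +/- half_width of the band center."""
    mask = np.abs(wavenumbers - center) <= half_width
    return absorbance[mask].max()

def carbonyl_index(wavenumbers, absorbance,
                   carbonyl_cm=1715.0, reference_cm=2915.0):
    return (band_abs(wavenumbers, absorbance, carbonyl_cm)
            / band_abs(wavenumbers, absorbance, reference_cm))

# Comparing CI on the sun-exposed vs. shaded side of the same fragment
# (e.g., PE rising from 0.50 to 1.70, as reported above) quantifies the
# two-sided weathering contrast.
```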

Keywords: plastic fragments, coastal environment, surface aging features, two-sided differences

Procedia PDF Downloads 21
469 Cadmium Telluride Quantum Dots (CdTe QDs)-Thymine Conjugate Based Fluorescence Biosensor for Sensitive Determination of Nucleobases/Nucleosides

Authors: Lucja Rodzik, Joanna Lewandowska-Lancucka, Michal Szuwarzynski, Krzysztof Szczubialka, Maria Nowakowska

Abstract:

The analysis of nucleobases is of great importance for bioscience, since their abnormal concentrations in body fluids point to deficiencies and mutations of the immune system and are considered an important parameter in the diagnosis of various diseases. The conjugate presented here meets the need for an effective, selective, and highly sensitive sensor for nucleobase/nucleoside detection. A novel, highly fluorescent conjugate of cadmium telluride quantum dots (CdTe QDs) functionalized with thymine and stabilized with thioglycolic acid (TGA) has been developed and thoroughly characterized. Successful formation of the material was confirmed by elemental analysis and by UV-Vis, fluorescence, and FTIR spectroscopies. The crystalline structure of the product was characterized with the X-ray diffraction (XRD) method. The composition of the CdTe QDs and their thymine conjugate was also examined using X-ray photoelectron spectroscopy (XPS). The size of the CdTe-thymine conjugate was 3-6 nm, as demonstrated by atomic force microscopy (AFM) and high-resolution transmission electron microscopy (HRTEM) imaging. A fluorescence band at 540 nm on excitation at 351 nm was observed for these nanoparticles; its intensity increased with the amount of conjugated thymine, with no shift in its position. Based on the fluorescence measurements, it was found that the CdTe-thymine conjugate interacted efficiently and selectively not only with adenine, the nucleobase complementary to thymine, but also with nucleosides and adenine-containing modified nucleosides, i.e., 5'-deoxy-5'-(methylthio)adenosine (MTA) and 2'-O-methyladenosine, urinary tumor markers that allow the progression of disease to be monitored. The applicability of the CdTe-thymine sensor to real samples was also investigated under simulated urine conditions. The high sensitivity and selectivity of the CdTe-thymine fluorescence towards adenine, adenosine, and modified adenosine suggest that the obtained conjugate can be potentially useful for the development of a biosensor for complementary nucleobase/nucleoside detection.
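
Quantification with a fluorescence sensor of this kind typically proceeds through a calibration curve. The sketch below fits the intensity at 540 nm against analyte concentration and derives a conventional 3σ limit of detection; all concentrations and intensities are hypothetical placeholders, not data from this work.

```python
# Sketch: a fluorescence calibration for a sensor like CdTe-thymine,
# fitting intensity vs. adenine concentration and estimating a 3-sigma
# limit of detection. All numbers below are hypothetical placeholders.
import numpy as np

conc_uM = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])               # assumed
intensity = np.array([100.0, 130.0, 161.0, 224.0, 349.0, 590.0])  # assumed, a.u.

slope, intercept = np.polyfit(conc_uM, intensity, 1)
residual_sd = np.std(intensity - (slope * conc_uM + intercept), ddof=2)
lod_uM = 3.0 * residual_sd / slope     # conventional 3-sigma LOD estimate

print(f"slope = {slope:.1f} a.u./uM, LOD ~ {lod_uM:.2f} uM")
```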

Keywords: CdTe quantum dots, conjugate, sensor, thymine

Procedia PDF Downloads 412
468 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton

Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani

Abstract:

Extreme values in a set of observations can occur due to unusual circumstances. Such data can provide important information that other data cannot, so their existence needs further investigation. One method for obtaining extreme data is the block maxima method. The distribution of extreme data sets taken with the block maxima method is called the extreme value distribution; here it is the Gumbel distribution with two parameters. Exact maximum-likelihood (ML) estimates of the Gumbel parameters cannot be determined in closed form, so a numerical approximation is necessary. The purpose of this study was to estimate the Gumbel distribution parameters with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method for unconstrained nonlinear optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function has a double exponential form. The quasi-Newton BFGS method is a development of Newton's method. Newton's method uses the second derivative to calculate the parameter updates at each iteration; it is commonly modified with the addition of a step length to guarantee convergence, but the second derivative can require complex calculations. In the quasi-Newton BFGS method, Newton's method is modified by replacing the second derivative with an approximation that is updated from gradient information at each iteration. Parameter estimation of the Gumbel distribution by this numerical approach is done by calculating the parameter values that maximize the likelihood function; the method requires the gradient vector and the (approximated) Hessian matrix. This research combines theory and application, drawing on several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and the estimates of the Gumbel distribution parameters. The estimation method is then applied to daily rainfall data for Purworejo District. The estimates indicate that the intensity of the heavy rainfall that occurred in Purworejo District has decreased and that the range of rainfall has narrowed.
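
A minimal sketch of the estimation procedure described above: the Gumbel negative log-likelihood is minimized with BFGS (here via scipy.optimize.minimize), with the scale parameter optimized on a log scale to keep it positive. The block-maxima rainfall values are hypothetical placeholders for the Purworejo series.

```python
# Sketch: maximum-likelihood estimation of the Gumbel location (mu) and
# scale (beta) parameters via the quasi-Newton BFGS method, applied to
# block maxima. The rainfall array is a hypothetical placeholder.
import numpy as np
from scipy.optimize import minimize

def gumbel_nll(params, x):
    """Negative log-likelihood of the Gumbel (maxima) distribution."""
    mu, log_beta = params              # optimize log(beta) so beta stays > 0
    beta = np.exp(log_beta)
    z = (x - mu) / beta
    return len(x) * log_beta + np.sum(z + np.exp(-z))

# Annual (block) maxima of daily rainfall, in mm; assumed values
maxima = np.array([95.0, 120.0, 88.0, 140.0, 110.0, 102.0, 131.0, 99.0])

res = minimize(gumbel_nll, x0=[maxima.mean(), np.log(maxima.std())],
               args=(maxima,), method="BFGS")
mu_hat, beta_hat = res.x[0], np.exp(res.x[1])
print(f"mu = {mu_hat:.1f} mm, beta = {beta_hat:.1f} mm")
```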

Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton

Procedia PDF Downloads 324