Search results for: channel estimation
432 A Simulation-Based Investigation of the Smooth-Wall, Radial Gravity Problem of Granular Flow through a Wedge-Shaped Hopper
Authors: A. F. Momin, D. V. Khakhar
Abstract:
Granular materials consist of discrete particles, found in nature and in many industries, that flow under gravity and behave macroscopically like liquids. A fundamental industrial unit operation is a hopper with inclined walls, or a converging channel, in which material flows downward under gravity and exits the storage bin through the bottom outlet. The simplest form of the flow corresponds to a wedge-shaped, quasi-two-dimensional geometry with smooth walls and a gravitational force directed radially toward the apex of the wedge. These flows were examined using the Mohr-Coulomb criterion in the classic work of Savage (1965) and using critical state theory by Ravi Prakash and Rao (1988). Here, the smooth-wall, radial gravity (SWRG) wedge-shaped hopper is simulated using the discrete element method (DEM) to test the existing theories. The DEM simulations solve Newton's equations of motion, taking particle-particle interactions into account, to compute the stress and velocity fields for the flow in the SWRG system. Our computational results are consistent with the predictions of Savage (1965) and Ravi Prakash and Rao (1988), except in the region near the exit, where both viscous and frictional effects are present. To further understand this behaviour, a parametric analysis of the rheology of wedge-shaped hoppers is carried out by varying the orifice diameter, wedge angle, friction coefficient, and particle stiffness. The conclusion is that velocity increases as the flow rate increases but decreases as the wedge angle and friction coefficient increase; no substantial changes in velocity were observed when varying the stiffness. Stresses at the exit are attributed to the transfer of momentum during particle collisions; for this reason, relationships between viscosity and shear rate are shown, and all data collapse onto a single curve.
In addition, it is demonstrated that viscosity and volume fraction exhibit power-law correlations with the inertial number and that all the data collapse onto a single curve. A continuum model for describing granular flows is presented using these empirical correlations. Keywords: discrete element method, gravity flow, smooth-wall, wedge-shaped hoppers
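The inertial-number collapse mentioned above can be sketched as follows. This is an illustrative Python sketch, not the authors' code: the definition I = γ̇·d/√(P/ρ) is the standard one for dense granular flow, and the power-law coefficients in the synthetic data are assumptions for demonstration only.

```python
import numpy as np

def inertial_number(shear_rate, d, pressure, rho_p):
    # I = gamma_dot * d / sqrt(P / rho_p): ratio of the microscopic
    # rearrangement timescale to the macroscopic shear timescale
    return shear_rate * d / np.sqrt(pressure / rho_p)

def fit_power_law(I, y):
    # Fit y = a * I**b by least squares in log-log space;
    # polyfit returns [slope, intercept] for degree 1
    b, log_a = np.polyfit(np.log(I), np.log(y), 1)
    return np.exp(log_a), b

# Synthetic data following an assumed power law eta ~ 2 * I**-0.5
I = np.logspace(-3, -1, 50)
eta = 2.0 * I ** -0.5
a, b = fit_power_law(I, eta)   # recovers a ~ 2.0, b ~ -0.5
```

Data from different orifice diameters, wedge angles, and friction coefficients collapsing onto one such fitted curve is the signature reported in the abstract.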
Procedia PDF Downloads 87
431 Application of Data Driven Based Models as Early Warning Tools of High Stream Flow Events and Floods
Authors: Mohammed Seyam, Faridah Othman, Ahmed El-Shafie
Abstract:
The early warning of high stream flow (HSF) events and floods is an important aspect of the management of surface water and river systems. This process can be performed using either process-based models or data-driven models such as artificial intelligence (AI) techniques. The main goal of this study is to develop an efficient AI-based model for predicting the real-time hourly stream flow (Q) and to apply it as an early warning tool for HSF events and floods in the downstream area of the Selangor River basin, taken here as a paradigm of humid tropical rivers in Southeast Asia. The performance of the AI-based models has been improved through the integration of lag time (Lt) estimation in the modelling process. A total of 8753 patterns of Q, water level, and rainfall hourly records representing a one-year period (2011) were utilized in the modelling process. Six hydrological scenarios were arranged through hypothetical cases of input variables to investigate how changes in rainfall (RF) intensity at upstream stations can lead to the formation of floods. The initial stream flow was changed for each scenario in order to include a wide range of hydrological situations in the study. The performance evaluation of the developed AI-based model shows that a high correlation coefficient (R) between the observed and predicted Q is achieved. The AI-based model has been successfully employed for early warning through the advance detection of hydrological conditions that could lead to the formation of floods and HSF events, represented by three levels of severity (i.e., alert, warning, and danger). Based on the results of the scenarios, reaching the danger level in the downstream area required high RF intensity in at least two upstream areas. It can be concluded that AI-based models are beneficial tools for local authorities for flood control and awareness. Keywords: floods, stream flow, hydrological modelling, hydrology, artificial intelligence
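The lag-time idea underlying the data-driven model can be illustrated with a minimal numpy sketch: upstream records shifted by a lag Lt serve as predictors of downstream flow. The linear form, the lag of 3 hours, and the coefficients are synthetic assumptions for illustration, not the study's actual AI model.

```python
import numpy as np

def make_lagged_design(q, rain, lt):
    # Predict Q[t] from Q[t-lt] and rainfall[t-lt]: the lag time Lt
    # aligns upstream observations with the downstream response.
    X = np.column_stack([q[:-lt], rain[:-lt], np.ones(len(q) - lt)])
    y = q[lt:]
    return X, y

def fit_linear(X, y):
    # Ordinary least squares via numpy's lstsq
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Synthetic hourly records: downstream flow responds to rainfall lagged 3 h
rng = np.random.default_rng(0)
rain = rng.random(200)
q = np.zeros(200)
for t in range(3, 200):
    q[t] = 0.8 * q[t - 3] + 2.0 * rain[t - 3] + 0.5

X, y = make_lagged_design(q, rain, lt=3)
coef = fit_linear(X, y)   # recovers ~ [0.8, 2.0, 0.5]
```

In the study itself the predictor is an AI model rather than a linear fit, but the role of Lt in shifting the inputs is the same.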
Procedia PDF Downloads 248
430 The Influence of Morphology and Interface Treatment on Organic 6,13-bis (triisopropylsilylethynyl)-Pentacene Field-Effect Transistors
Authors: Daniel Bülz, Franziska Lüttich, Sreetama Banerjee, Georgeta Salvan, Dietrich R. T. Zahn
Abstract:
For the development of electronics, organic semiconductors are of great interest due to their adjustable optical and electrical properties. They are especially interesting for spintronic applications because of their weak spin scattering, which leads to longer spin lifetimes compared to inorganic semiconductors. It was shown that some organic materials change their resistance when an external magnetic field is applied. Pentacene is one of the materials which exhibit the so-called photoinduced magnetoresistance, which results in a modulation of the photocurrent when varying the external magnetic field. The soluble derivative of pentacene, 6,13-bis(triisopropylsilylethynyl)-pentacene (TIPS-pentacene), exhibits the same negative magnetoresistance. Aiming for simpler fabrication processes, in this work, we compare TIPS-pentacene organic field effect transistors (OFETs) made from solution with those fabricated by thermal evaporation. Because of the different processing, the TIPS-pentacene thin films exhibit different morphologies in terms of crystal size and homogeneity of the substrate coverage. On the other hand, the interface treatment is known to have a strong influence on the threshold voltage, eliminating trap states of the silicon oxide at the gate electrode and thereby changing the electrical switching response of the transistors. Therefore, we investigate the influence of interface treatment using octadecyltrichlorosilane (OTS) or a simple cleaning procedure with acetone, ethanol, and deionized water. The transistors consist of prestructured OFET substrates including gate, source, and drain electrodes, on top of which TIPS-pentacene dissolved in a mixture of tetralin and toluene is deposited by drop-, spray-, or spin-coating. Thereafter, the sample is kept for one hour at a temperature of 60 °C.
For the transistor fabrication by thermal evaporation, the prestructured OFET substrates are also kept at a temperature of 60 °C during deposition, at a rate of 0.3 nm/min and a pressure below 10⁻⁶ mbar. The OFETs are characterized by means of optical microscopy in order to determine the overall quality of the samples, i.e., crystal size and coverage of the channel region. The output and transfer characteristics are measured in the dark and under illumination provided by a white light LED in the spectral range from 450 nm to 650 nm with a power density of (8±2) mW/cm². Keywords: organic field effect transistors, solution processed, surface treatment, TIPS-pentacene
Procedia PDF Downloads 447
429 The Impact of Public Finance Management on Economic Growth and Development in South Africa
Authors: Zintle Sikhunyana
Abstract:
Management of public finance in many countries such as South Africa is affected by political decisions and by policies around fiscal decentralization amongst the government spheres. Economic success is said to be determined by efficient management of public finance and by the policies or strategies that are implemented to support it. Policymakers pay attention to how economic policies have been implemented and how they are directed towards ensuring stable development. This allows policymakers to address economic challenges through fiscal policy parameters that are linked to the achieved rate of economic growth and development. Efficient public finance management reduces the likelihood of corruption, and corruption is said to have negative effects on economic growth and development. Corruption in public finance refers to the act of using public funds for personal benefit. To achieve macroeconomic objectives, governments make use of government expenditure, which is financed through tax revenue. The main aim of this paper is to investigate the potential impact of public finance management on economic growth and development in South Africa. Secondary data obtained from the South African Reserve Bank (SARB) and the World Bank for 1980-2020 are utilized to achieve the research objectives. To test the impact of public finance management on economic growth and development, the study uses Seemingly Unrelated Regression Equations (SURE) modelling, which allows researchers to model multiple equations with interdependent variables. The advantages of using SUR are that it efficiently estimates relationships between variables by combining information from different equations, and it allows testing restrictions that involve parameters in different equations. The findings show that there is a positive relationship between efficient public finance management and economic growth/development.
The findings also show that efficient public finance management has an indirect positive impact on economic growth and development. Corruption has a negative impact on economic growth and development: it results in an inefficient allocation of government resources and thereby impairs economic growth and development. The study recommends that governments that aim to stimulate economic growth and development should target and strengthen public finance management policies and strategies. Keywords: corruption, economic growth, economic development, public finance management, fiscal decentralization
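The two-step feasible GLS behind SUR estimation can be sketched as follows. This is an illustrative two-equation version on synthetic data, not the study's implementation: equation-by-equation OLS residuals estimate the cross-equation error covariance, and a joint GLS step then exploits that covariance.

```python
import numpy as np

def sur_fgls(Xs, ys):
    # Two-step feasible GLS for seemingly unrelated regressions:
    # step 1 - OLS per equation; residuals estimate the error covariance
    k = len(ys)
    resid = []
    for X, y in zip(Xs, ys):
        b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
        resid.append(y - X @ b_ols)
    s_inv = np.linalg.inv(np.cov(np.vstack(resid)))
    # step 2 - GLS on the stacked system, exploiting the Kronecker
    # structure of the covariance (sigma x identity), block by block
    dims = [X.shape[1] for X in Xs]
    offs = np.cumsum([0] + dims)
    p = offs[-1]
    A = np.zeros((p, p))
    rhs = np.zeros(p)
    for i in range(k):
        for j in range(k):
            A[offs[i]:offs[i + 1], offs[j]:offs[j + 1]] = s_inv[i, j] * (Xs[i].T @ Xs[j])
            rhs[offs[i]:offs[i + 1]] += s_inv[i, j] * (Xs[i].T @ ys[j])
    return np.linalg.solve(A, rhs)

# Synthetic two-equation system with correlated errors (rho = 0.7)
rng = np.random.default_rng(1)
n = 2000
x1, x2 = rng.random(n), rng.random(n)
e = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.7], [0.7, 1.0]], size=n)
y1 = 1.0 + 2.0 * x1 + e[:, 0]
y2 = -0.5 + 3.0 * x2 + e[:, 1]
Xs = [np.column_stack([np.ones(n), x1]), np.column_stack([np.ones(n), x2])]
beta = sur_fgls(Xs, [y1, y2])   # ~ [1.0, 2.0, -0.5, 3.0]
```

The efficiency gain over equation-by-equation OLS comes precisely from the nonzero cross-equation error correlation.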
Procedia PDF Downloads 201
428 Bi-Component Particle Segregation Studies in a Spiral Concentrator Using Experimental and CFD Techniques
Authors: Prudhvinath Reddy Ankireddy, Narasimha Mangadoddy
Abstract:
Spiral concentrators are commonly used in various industries, including mineral and coal processing, to efficiently separate materials based on their density and size. In these concentrators, a mixture of solid particles and fluid (usually water) is introduced as feed at the top of a spiral channel. As the mixture flows down the spiral, centrifugal and gravitational forces act on the particles, causing them to stratify based on their density and size. Spiral flows exhibit complex fluid dynamics, and the interactions involve multiple phases and components. Understanding the behavior of these phases within the spiral concentrator is crucial for achieving efficient separation. An experimental bi-component particle interaction study is conducted in this work utilizing magnetite (higher density) and silica (lower density) in different proportions processed in the spiral concentrator. The observed separation reveals that denser particles accumulate towards the inner region of the spiral trough, while a significant concentration of lighter particles is found close to the outer edge. The 5th turn of the spiral trough is partitioned into five zones to achieve a comprehensive distribution analysis of bi-component particle segregation. Samples are gathered from these individual streams using an in-house sample collector, and subsequent analysis is conducted to assess component segregation. Along the trough, there was a decline in the concentration of coarser particles, accompanied by an increase in the concentration of lighter particles. The segregation pattern indicates that the heavier coarse component accumulates in the inner zone, whereas the lighter fine component collects in the outer zone. The middle zone primarily consists of heavier fine particles and lighter coarse particles. The zone-wise results reveal that a significant fraction of the segregation occurs in the inner and middle zones.
Finer magnetite and silica particles predominantly accumulate in the outer zones with the smallest fraction of segregation. Additionally, numerical simulations are carried out using a computational fluid dynamics (CFD) model based on the volume of fluid (VOF) approach, incorporating the RSM turbulence model. The discrete phase model (DPM) is employed for particle tracking, to understand the segregation of magnetite and silica along the spiral trough. Keywords: spiral concentrator, bi-component particle segregation, computational fluid dynamics, discrete phase model
Procedia PDF Downloads 67
427 Determination of Medians of Biochemical Maternal Serum Markers in Healthy Women Giving Birth to Normal Babies
Authors: Noreen Noreen, Aamir Ijaz, Hamza Akhtar
Abstract:
Background: Screening plays a major role in detecting chromosomal abnormalities, Down syndrome, neural tube defects, and other inborn diseases of the newborn. Serum biomarkers in the second trimester are useful in determining the risk of the most common chromosomal anomalies; these tests include alpha-fetoprotein (AFP), human chorionic gonadotropin (hCG), unconjugated oestriol (uE3), and inhibin-A. The quadruple biomarker panel is a worthwhile test for detecting congenital pathology during pregnancy, but these procedures do not form part of the routine health care of pregnant women in Pakistan, so median values are lacking for the Pakistani population. Objective: To determine median values of biochemical maternal serum markers in the local population during second-trimester maternal screening. Study settings: Department of Chemical Pathology and Endocrinology, Armed Forces Institute of Pathology (AFIP), Rawalpindi. Methods: Cross-sectional study for the estimation of reference values. By non-probability consecutive sampling, 155 healthy pregnant women, 30-40 years of age, were included. As non-parametric statistics were used, the minimum sample size was 120. Result: A total of 155 women were enrolled in this study. Their ages ranged from 30 to 39 years; 39 per cent were younger than 34 years. Mean maternal age was 33.46±2.35 years and mean maternal body weight was 54.98±2.88. Median values of the quadruple markers were calculated for the 15th-18th weeks of gestation and will be used for the calculation of multiples of the median (MoM) for screening of trisomy 21 in this gestational window.
Median values at 15 weeks of gestation were hCG 36650 mIU/ml, AFP 23.3 IU/ml, uE3 3.5 nmol/L, and inhibin-A 198 ng/L; at 16 weeks, hCG 29050 mIU/ml, AFP 35.4 IU/ml, uE3 4.1 nmol/L, and inhibin-A 179 ng/L; at 17 weeks, hCG 28450 mIU/ml, AFP 36.0 IU/ml, uE3 6.7 nmol/L, and inhibin-A 176 ng/L; and at 18 weeks, hCG 25200 mIU/ml, AFP 38.2 IU/ml, uE3 8.2 nmol/L, and inhibin-A 190 ng/L. All comparisons were significant (p < 0.005) at a 95% confidence interval (CI), with the level of significance set at 5% based on the literature. Conclusion: The median values for these four biomarkers in Pakistani pregnant women can be used to calculate MoM. Keywords: screening, Down syndrome, quadruple test, second trimester, serum biomarkers
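The reported medians can be applied directly to compute multiples of the median (MoM), the quantity screening programs actually use. A minimal sketch follows; the table reuses the medians from the abstract, while the observed patient values in the example are hypothetical.

```python
# Reported medians per gestational week (hCG mIU/ml, AFP IU/ml,
# uE3 nmol/L, inhibin-A ng/L), taken from the abstract above
MEDIANS = {
    15: {"hCG": 36650, "AFP": 23.3, "uE3": 3.5, "inhibinA": 198},
    16: {"hCG": 29050, "AFP": 35.4, "uE3": 4.1, "inhibinA": 179},
    17: {"hCG": 28450, "AFP": 36.0, "uE3": 6.7, "inhibinA": 176},
    18: {"hCG": 25200, "AFP": 38.2, "uE3": 8.2, "inhibinA": 190},
}

def multiples_of_median(week, observed):
    # MoM = observed marker value / population median for that week
    ref = MEDIANS[week]
    return {marker: observed[marker] / ref[marker] for marker in observed}

# Hypothetical patient at 16 weeks: hCG twice the median, AFP half of it
mom = multiples_of_median(16, {"hCG": 58100, "AFP": 17.7})
```

Risk algorithms for trisomy 21 then operate on these MoM values rather than on the raw concentrations, which is why population-specific medians matter.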
Procedia PDF Downloads 180
426 Comparison Analysis of Fuzzy Logic Controller Based PV-Pumped Hydro and PV-Battery Storage Systems
Authors: Seada Hussen, Frie Ayalew
Abstract:
Integrating different energy resources, such as solar PV and hydro, is used to ensure reliable power for rural communities like Hara village in Ethiopia. A hybrid power system offers a power supply for rural villages by providing an alternative for the intermittent nature of renewable energy resources, which is otherwise a challenge to electrifying rural communities in a sustainable manner with solar resources alone. Many rural villages in Ethiopia suffer from a lack of electrification, which causes deforestation, long journeys to fetch water, and inadequate services such as clinics and schools. The main objective of this project is to provide a balanced, stable, and reliable supply for Hara village, Ethiopia, using solar power with a pumped hydro energy storage system. The design starts by collecting data from the villages and taking solar irradiance data from NASA; geographical arrangement and location are also taken into consideration. Data analysis, cost estimation, and optimal sizing of the system, together with the comparison of solar with pumped hydro and solar with battery storage, are done using HOMER software. Since solar power is available only in the daytime while pumped hydro can supply the load at night and in the morning, both sources share the load demand; this requires a controller to manage the switching and scheduling, and in this project a fuzzy logic controller is used for this purpose. The simulation results show that the solar with pumped hydro energy storage system achieves better results than the battery storage system when the comparison considers storage reliability, cost, storage capacity, lifespan, and efficiency. Keywords: pumped hydro storage, solar energy, solar PV, battery energy storage, fuzzy logic controller
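The scheduling role of the fuzzy logic controller can be sketched with a minimal Sugeno-style rule base. The membership functions, rule outputs, and the use of the PV deficit ratio as the sole input are illustrative assumptions, not the project's actual controller design.

```python
def tri(x, a, b, c):
    # Triangular membership function peaking at b, zero outside [a, c]
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def hydro_share(deficit):
    """Fuzzy share of the load assigned to pumped hydro storage.

    deficit: fraction of demand not covered by PV, in [0, 1].
    Rules (illustrative): deficit LOW -> hydro share 0.0,
    MEDIUM -> 0.5, HIGH -> 1.0; defuzzified by a
    membership-weighted average (zeroth-order Sugeno).
    """
    low = tri(deficit, -0.5, 0.0, 0.5)    # wide shoulders so that
    med = tri(deficit, 0.0, 0.5, 1.0)     # memberships cover [0, 1]
    high = tri(deficit, 0.5, 1.0, 1.5)
    weights = {0.0: low, 0.5: med, 1.0: high}
    total = sum(weights.values())
    return sum(share * w for share, w in weights.items()) / total

# Daytime with full PV: hydro idle; night with no PV: hydro carries all
day_share = hydro_share(0.0)     # 0.0
night_share = hydro_share(1.0)   # 1.0
```

With smoother or asymmetric membership functions the same structure yields the gradual hand-over between PV and hydro that the abstract's scheduling scenario requires.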
Procedia PDF Downloads 78
425 Effect of Access to Finance on Innovation and Productivity of SMEs in Nigeria: Evidence from the World Bank Enterprise Survey
Authors: Abidemi C. Adegboye, Samuel Iweriebor
Abstract:
The primary link between financial institutions and economic performance is the provision of resources by these institutions to businesses in order to drive enterprise expansion, sustainability, and development. In this study, the role of access to finance in driving innovation and productivity in Nigerian SMEs is investigated using the World Bank Enterprise Survey (ES) dataset. Innovation is defined based on the ES analysis using five components: product innovation, process innovation, organisational innovation, use of foreign-licensed technology, and spending on R&D. The study considers finance in terms of the source used to meet investment needs and in terms of access. Moreover, finance is categorized as external or internal to a firm, with each having different implications. The research methodology adopts a survey analysis based on the 2014 World Bank Enterprise Survey of 19 states in Nigeria. The survey comprised over 10,000 manufacturing and services firms at both the small and medium scale. The logit estimation technique is used to estimate the relationships in the study. The results from the empirical analysis show that, in general, access to finance drives SME innovation in Nigeria. In particular, ease of access to bank loans and credit is shown to be the strongest positive force driving all types of innovation among SMEs in Nigeria. In the same vein, the type of finance source for investment matters for how it affects innovation: both internal and external sources improve investment in product, process, and organisational innovation, but only external financing has an effect on R&D spending and the use of foreign-licensed technology. Overall, spending on R&D is only driven by access to external finance by the SMEs. For productivity, the results show that while the structure of investment financing improves productivity, increased access to finance may actually lead to a productivity decline among SMEs in Nigeria.
There is a need for the financial system to evolve structures to increase fund availability to SMEs in Nigeria, especially for the purpose of innovation investment. Keywords: access to finance, financing investment, innovation, productivity, SMEs
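The logit estimation technique named above can be sketched in a few lines of numpy: maximum likelihood via Newton-Raphson on a binary innovation outcome. The data-generating process and coefficients here are synthetic assumptions for illustration, not the survey estimates.

```python
import numpy as np

def logit_fit(X, y, iters=25):
    # Maximum-likelihood logistic regression via Newton-Raphson:
    # gradient = X'(y - p), Hessian = X' diag(p(1-p)) X
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

# Synthetic firms: probability of innovating rises with finance access
rng = np.random.default_rng(2)
n = 20000
access = rng.random(n)                      # hypothetical access score
X = np.column_stack([np.ones(n), access])
p_true = 1.0 / (1.0 + np.exp(-(-1.0 + 2.0 * access)))
y = (rng.random(n) < p_true).astype(float)
beta = logit_fit(X, y)   # ~ [-1.0, 2.0]
```

The study's actual specification includes more regressors (finance source, firm size, sector), but the estimator is the same.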
Procedia PDF Downloads 358
424 Household Food Security and Poverty Reduction in Cameroon
Authors: Bougema Theodore Ntenkeh, Chi-bikom Barbara Kyien
Abstract:
The reduction of poverty and hunger sits at the heart of the United Nations 2030 Agenda for Sustainable Development; these are the first two of the Sustainable Development Goals. World Food Day, celebrated on the 16th of October every year, highlights the need for people to have physical and economic access at all times to enough nutritious and safe food to live a healthy and active life, while World Poverty Day, celebrated on the 17th of October, is an opportunity to acknowledge the struggle of people living in poverty, a chance for them to make their concerns heard, and for the community to recognize and support poor people in their fight against poverty. Evidence on the association between household food security and poverty reduction in Cameroon is not only sparse but mostly qualitative. This paper therefore investigates the effect of household food security on poverty reduction in Cameroon quantitatively, using data from the Cameroon Household Consumption Survey collected by the Government Statistics Office. The methodology combines five indicators of household food security into an index using Multiple Correspondence Analysis, and poverty is captured as a dummy variable. Using a control function technique, with pre- and post-estimation tests for robustness, the study finds that household food security has a positive and significant effect on poverty reduction in Cameroon. A unit increase in the food security score reduces the probability of the household being poor by 31.8%, and this effect is statistically significant at 1%. The results further illustrate that the age of the household head and household size increase household poverty, while households residing in urban areas are significantly less poor. The paper therefore recommends that households diversify their food intake to enhance an effective supply of labour in the job market as a strategy to reduce household poverty.
Furthermore, family planning methods should be encouraged as a strategy to reduce the birth rate for an equitable distribution of household resources, including food, while the government of Cameroon should also develop the rural areas, given that trends in urbanization are associated with the concentration of productive economic activities, leading to increased household income, increased household food security, and poverty reduction. Keywords: food security, poverty reduction, SDGs, Cameroon
Procedia PDF Downloads 77
423 Structural Stress of Hegemon’s Power Loss: A Pestle Analysis for Pacification and Security Policy Plan
Authors: Sehrish Qayyum
Abstract:
Active military power contention is shifting to economic and cyber warfare as states seek to retain hegemony. An attuned PESTLE analysis confirms that the structural stress of a hegemon's power loss drives a containment approach towards caging actions. Ongoing diplomatic, asymmetric, proxy, and direct wars are increasing the stress of the hegemon's power retention due to tangled military and economic alliances. This creates a condition of catalepsy with defective reflexive control, which affects core warfare operations. When one's own power is doubted, it gives power to one's own doubt to ruin all planning, even planning done with a superlative cost-benefit analysis. A strategically calculated estimation of the hegemon's power game across three chronological periods (early WWI to WWII, WWII to the Cold War, and the Cold War to the current era) shows that the Thucydides trap became the reason wars broke out. Thirst for power is the demise of imagination and of the cooperation needed for better sense to prevail; instead, it drives ashes to dust. PESTLE analysis is a wide array of evaluation of state matters, from political and economic to legal dimensions. It helps to develop a Pacification and Security Policy Plan (PSPP) to avoid the hegemon's structural stress of power loss and, in turn, to create alliances with maximally amicable outputs. The PSPP may serve to regulate and pause the hurricane of power clashes. The PSPP, along with a strategic work plan, is based on PESTLE analysis to deal with any conceivable war condition and to approach the saving of international peace. Getting tangled in self-imposed epistemic dilemmas results in regret becoming the only option of performance. This is a generic application of probability tests to find the best possible options and conditions to develop a PSPP for any adversity possible so far.
Innovation in expertise begets innovation in planning, and the resulting action plan serves as a rheostat approach to deal with any plausible power clash. Keywords: alliance, hegemon, pestle analysis, pacification and security policy plan, security
Procedia PDF Downloads 106
422 [Keynote Talk]: Discovering Liouville-Type Problems for p-Energy Minimizing Maps in Closed Half-Ellipsoids by Calculus Variation Method
Authors: Lina Wu, Jia Liu, Ye Li
Abstract:
The goal of this project is to investigate constant properties (the Liouville-type problem) for a p-stable map as a local or global minimum of a p-energy functional, where the domain is a Euclidean space and the target space is a closed half-ellipsoid. The first and second variation formulas for the p-energy functional have been applied in the calculus of variations as computation techniques. Stokes' theorem, the Cauchy-Schwarz inequality, Hardy-Sobolev type inequalities, and the Bochner formula have been used as estimation techniques to bound the derived p-harmonic stability inequality from below and above. One challenging point in this project is to construct a family of variation maps such that the images of the variation maps are guaranteed to remain in the closed half-ellipsoid. The other challenging point is to find a contradiction between the lower bound and the upper bound in the analysis of the p-harmonic stability inequality when a p-energy minimizing map is not constant. The possibility of a non-constant p-energy minimizing map is thereby ruled out, and the constant property for a p-energy minimizing map is obtained. Our finding establishes the constant property for a p-stable map from a Euclidean space into a closed half-ellipsoid in a certain range of p. This range of p is determined by the dimension values of the Euclidean space (the domain) and the ellipsoid (the target space), and is also bounded by the curvature values of the ellipsoid (that is, the ratio of the longest axis to the shortest axis). Regarding Liouville-type results for a p-stable map, our finding on an ellipsoid is a generalization of known results on a sphere.
Our result is also an extension of known Liouville-type results from a special ellipsoid with only one parameter to any ellipsoid with (n+1) parameters in the general setting. Keywords: Bochner formula, calculus of variations, Stokes' theorem, Cauchy-Schwarz inequality, first and second variation formulas, Liouville-type problem, p-harmonic map
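For reference, the p-energy functional and its first variation take the following standard form (a sketch consistent with, though not quoted from, the abstract; the stability condition is what the abstract calls p-stability):

```latex
% p-energy of a map u : \Omega \subset \mathbb{R}^m \to N, p > 1
E_p(u) = \frac{1}{p} \int_{\Omega} |du|^{p} \, dx .

% First variation along a compactly supported variation field \psi:
\left.\frac{d}{dt} E_p(u_t)\right|_{t=0}
  = \int_{\Omega} |du|^{p-2} \, \langle du, \nabla\psi \rangle \, dx = 0
\quad \text{(critical point: } u \text{ is } p\text{-harmonic)} .

% p-stability additionally requires a nonnegative second variation:
\left.\frac{d^{2}}{dt^{2}} E_p(u_t)\right|_{t=0} \;\ge\; 0 .
```

The p-harmonic stability inequality discussed in the abstract is obtained by substituting the specially constructed variation fields (whose images stay in the half-ellipsoid) into the second variation.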
Procedia PDF Downloads 274
421 Identification of a Lead Compound for Selective Inhibition of Nav1.7 to Treat Chronic Pain
Authors: Sharat Chandra, Zilong Wang, Ru-Rong Ji, Andrey Bortsov
Abstract:
Therapeutic approaches for chronic pain (CP) have limited efficacy. As a result, doctors prescribe opioids for chronic pain, fuelling an epidemic of opioid overuse, abuse, and addiction. The development of effective and safe CP drugs therefore remains an unmet medical need. Voltage-gated sodium (Nav) channels are molecular targets for cardiovascular and neurological disorders. Selective inhibitors of Nav channels are hard to design because there are nine closely related isoforms (Nav1.1-1.9) that share protein sequence segments. We are targeting Nav1.7, which is found in the peripheral nervous system and engaged in the perception of pain. The objective of this project was to screen a 1.5-million-compound library to identify inhibitors of Nav1.7 with an analgesic effect. In this study, we designed a protocol for the identification of isoform-selective inhibitors of Nav1.7 by utilizing prior information on isoform-selective antagonists. First, a similarity search was performed; then the identified hits were docked into a binding site on the fourth voltage-sensor domain (VSD4) of Nav1.7. We used the FTrees tool for similarity searching and library generation; the generated library was docked in the VSD4 domain binding site using FlexX, and compounds were shortlisted using the FlexX score and SeeSAR HYDE scoring. Finally, the top 25 compounds were tested with molecular dynamics simulation (MDS). We reduced our list to 9 compounds based on the MDS root mean square deviation plot and obtained them from a vendor for in vitro and in vivo validation. Whole-cell patch-clamp recordings in HEK-293 cells and dorsal root ganglion neurons were conducted, using patch pipettes to record transient Na⁺ currents. One of the compounds reduced the peak sodium currents in a Nav1.7-HEK-293 stable cell line in a dose-dependent manner, with an IC50 value of 0.74 µM.
In summary, our computer-aided analgesic discovery approach allowed us to develop a pre-clinical analgesic candidate with a significant reduction in time and cost. Keywords: chronic pain, voltage-gated sodium channel, isoform-selective antagonist, similarity search, virtual screening, analgesics development
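As an illustration of how a dose-dependent IC50 such as the reported 0.74 µM is extracted from patch-clamp dose-response points, the sketch below fits a one-site Hill model by grid search. The model choice, Hill slope of 1, and the synthetic data points are assumptions for demonstration, not the study's analysis.

```python
import numpy as np

def hill_response(conc, ic50, h):
    # Fraction of peak sodium current remaining at concentration conc
    return 1.0 / (1.0 + (conc / ic50) ** h)

def fit_ic50(conc, resp, h=1.0):
    # Grid search over log-spaced IC50 candidates, minimizing the
    # sum of squared errors against the measured responses
    candidates = np.logspace(-2, 1, 601)
    sse = [np.sum((hill_response(conc, c, h) - resp) ** 2) for c in candidates]
    return candidates[int(np.argmin(sse))]

# Synthetic dose-response generated with an assumed IC50 of 0.74 uM
conc = np.array([0.05, 0.1, 0.25, 0.5, 1.0, 2.5, 5.0, 10.0])  # uM
resp = hill_response(conc, 0.74, 1.0)
ic50 = fit_ic50(conc, resp)   # ~ 0.74 uM
```

In practice a nonlinear least-squares fit with the Hill slope as a free parameter would replace the grid search, but the estimated quantity is the same.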
Procedia PDF Downloads 123
420 Blood Flow Estimator of the Left Ventricular Assist Device Based in Look-Up-Table: In vitro Tests
Authors: Tarcisio F. Leao, Bruno Utiyama, Jeison Fonseca, Eduardo Bock, Aron Andrade
Abstract:
This work presents a blood flow estimator based on a look-up table (LUT) for control of a Left Ventricular Assist Device (LVAD). This device has been used as a bridge to transplantation or as destination therapy to treat patients with heart failure (HF). The destination therapy application requires a high-performance LVAD; thus, stable control is important to keep an adequate interaction between the heart and the device. LVAD control provides an adequate cardiac output while sustaining appropriate blood flow and pressure perfusion, also described as physiologic control. Because of thrombus formation and reduced system reliability, sensors are not desirable for measuring these variables (blood flow and pressure); to achieve this, control systems have been researched that estimate blood flow. The LVAD used in the study is composed of a centrifugal blood pump, control, and power supply. The technique uses pump and actuator (motor) parameters of the LVAD, such as speed and electric current: the estimator relates the electromechanical torque (motor or actuator) and the hydraulic power (blood pump) via the LUT. An in vitro mock loop was used to evaluate deviations between estimated and actual blood flow. A solution of glycerin (50%) and water was used to simulate the blood viscosity at 45% hematocrit. Tests were carried out with varied hematocrit: 25%, 45%, and 58%, corresponding to 40%, 50%, and 60% glycerin in water, respectively; a test with bovine blood (42% hematocrit) was also carried out. The mock loop is composed of a reservoir, tubes, pressure and flow sensors, and the fluid (or blood), in addition to the LVAD. The LUT-based estimator is patented in Brazil, number BR1020160068363. The mean deviation is 0.23 ± 0.07 L/min for the estimated mean flow; the largest mean deviation was 0.5 L/min considering hematocrit variation. This estimator achieved a deviation adequate for the implementation of physiologic control.
Future works will evaluate the flow estimation performance in the LVAD control system. Keywords: blood pump, flow estimator, left ventricular assist device, look-up-table
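The core of a LUT-based estimator is querying a calibration table of pump operating points. A minimal sketch with bilinear interpolation over a (speed, current) grid follows; the grid values are hypothetical calibration data for illustration, not the patented table.

```python
import numpy as np

def lut_flow(speed, current, speeds, currents, flow_table):
    # Bilinear interpolation in a (speed, current) -> flow look-up table
    i = int(np.clip(np.searchsorted(speeds, speed) - 1, 0, len(speeds) - 2))
    j = int(np.clip(np.searchsorted(currents, current) - 1, 0, len(currents) - 2))
    ts = (speed - speeds[i]) / (speeds[i + 1] - speeds[i])
    tc = (current - currents[j]) / (currents[j + 1] - currents[j])
    f00, f01 = flow_table[i, j], flow_table[i, j + 1]
    f10, f11 = flow_table[i + 1, j], flow_table[i + 1, j + 1]
    return (f00 * (1 - ts) * (1 - tc) + f01 * (1 - ts) * tc
            + f10 * ts * (1 - tc) + f11 * ts * tc)

# Hypothetical calibration grid: pump speeds (rpm), motor currents (A)
speeds = np.array([1500.0, 2000.0, 2500.0])
currents = np.array([0.2, 0.4, 0.6])
flow_table = np.array([[1.0, 1.5, 2.0],    # flow in L/min at 1500 rpm
                       [2.0, 2.5, 3.0],    # at 2000 rpm
                       [3.0, 3.5, 4.0]])   # at 2500 rpm

flow = lut_flow(1750.0, 0.3, speeds, currents, flow_table)  # midpoint -> 1.75
```

In the actual device the table entries are built from the torque-to-hydraulic-power relation identified on the mock loop, and a correction for hematocrit (viscosity) would be layered on top.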
Procedia PDF Downloads 186
419 Assessing the Impact of Climate Change on Pulses Production in Khyber Pakhtunkhwa, Pakistan
Authors: Khuram Nawaz Sadozai, Rizwan Ahmad, Munawar Raza Kazmi, Awais Habib
Abstract:
Climate change and crop production are intrinsically associated with each other. Therefore, this research study is designed to assess the impact of climate change on pulses production in the southern districts of Khyber Pakhtunkhwa (KP) Province of Pakistan. Two pulses (chickpea and mung bean) were selected for this study. Climatic variables (temperature, humidity, and precipitation) along with pulses production and the area under pulses cultivation were the major variables of the study. Secondary data on the climatic and crop variables for the period 1986-2020 were obtained from the Pakistan Meteorological Department and the Agriculture Statistics of KP, respectively. Panel data sets for the chickpea and mung bean crops were estimated separately, and the analysis validates that both are balanced panels. The Hausman specification test was run separately for both panel data sets; its findings suggested that the fixed effects model is appropriate for the chickpea panel data, whereas the random effects model is appropriate for the mung bean panel data. The major findings confirm that maximum temperature is statistically significant for chickpea yield: if maximum temperature increases by 1 °C, chickpea yield increases by 0.0463 units. However, the impact of precipitation was insignificant. Furthermore, humidity was statistically significant and positively associated with chickpea yield. In the case of mung bean, minimum temperature contributed significantly to yield. This study concludes that temperature and humidity can significantly enhance pulses yield. It is recommended that the capacity of pulse growers be built so that they can adopt climate change adaptation strategies.
Moreover, the government should ensure the availability of climate-change-resistant varieties of pulses to encourage pulse cultivation. Keywords: climate change, pulses productivity, agriculture, Pakistan
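The fixed-effect ("within") estimator selected by the Hausman test for the chickpea panel can be sketched in a few lines: demean yield and temperature within each district, then regress the deviations. All district names and numbers below are invented for illustration, not the study's data.

```python
# Minimal sketch of the fixed-effects ("within") estimator for a panel:
# demean the outcome and the regressor within each unit (district), then
# run OLS on the demeaned data. Districts and values are hypothetical.
panel = {
    "DistrictA": {"temp": [30.0, 31.0, 32.0], "yield": [1.0, 1.1, 1.2]},
    "DistrictB": {"temp": [29.0, 30.5, 31.5], "yield": [0.8, 0.9, 1.0]},
}

def within_estimator(panel):
    x_dev, y_dev = [], []
    for unit in panel.values():
        mean_t = sum(unit["temp"]) / len(unit["temp"])
        mean_y = sum(unit["yield"]) / len(unit["yield"])
        x_dev += [t - mean_t for t in unit["temp"]]
        y_dev += [y - mean_y for y in unit["yield"]]
    # OLS slope on demeaned data: beta = sum(x*y) / sum(x*x)
    return sum(x * y for x, y in zip(x_dev, y_dev)) / sum(x * x for x in x_dev)

beta = within_estimator(panel)  # effect of temperature on yield, within districts
```

The demeaning removes every time-invariant district effect, which is exactly why the fixed-effect model controls for unobserved district heterogeneity.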
Procedia PDF Downloads 44
418 Perceptual Image Coding by Exploiting Internal Generative Mechanism
Authors: Kuo-Cheng Liu
Abstract:
In perceptual image coding, the objective is to shape the coding distortion such that its amplitude does not exceed the error visibility threshold, or to remove perceptually redundant signals from the image. Although much research focuses on color image coding, perceptual quantizers developed for luminance signals are often applied directly to chrominance signals, making the resulting color image compression methods inefficient. In this paper, the internal generative mechanism is integrated into the design of a color image compression method. A working model of the internal generative mechanism, based on structure-based spatial masking, is used to assess subjective distortion visibility thresholds that are more visually consistent with human perception. An estimation method for the structure-based distortion visibility thresholds of the color components is further presented, in a locally adaptive way, to design the quantization process in a wavelet color image compression scheme. Since the lowest-subband coefficient matrix of an image in the wavelet domain preserves the local properties of the image in the spatial domain, the error visibility threshold inherent in each coefficient of the lowest subband for each color component is estimated using the proposed spatial error visibility threshold assessment. The threshold inherent in each coefficient of the other subbands for each color component is then estimated in a locally adaptive fashion based on the distortion energy allocation. Because the error visibility thresholds are estimated from the predicted and reconstructed signals of the color image, the coding scheme incorporating the locally adaptive perceptual color quantizer does not require side information.
Experimental results show that the entropies of the three color components obtained using the proposed IGM-based color image compression scheme are lower than those obtained using an existing color image compression method at perceptually lossless visual quality. Keywords: internal generative mechanism, structure-based spatial masking, visibility threshold, wavelet domain
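The core of threshold-driven quantization described above can be sketched as follows: if a coefficient's error visibility threshold is T, a uniform quantizer with step 2T (rounding to the nearest level) keeps the reconstruction error within ±T, i.e., below the visibility threshold. The coefficients and thresholds below are invented; the study's locally adaptive threshold model is more elaborate.

```python
# Threshold-driven uniform quantization sketch: step = 2*T guarantees
# |coeff - reconstruction| <= T for each coefficient. Values are made up.
def quantize(coeff, threshold):
    step = 2.0 * threshold
    return step * round(coeff / step)

coeffs = [12.7, -3.2, 0.4]            # wavelet coefficients (hypothetical)
thresholds = [1.0, 0.5, 0.5]          # locally adaptive visibility thresholds
recon = [quantize(c, t) for c, t in zip(coeffs, thresholds)]
errors = [abs(c - r) for c, r in zip(coeffs, recon)]
```

Because each error stays at or below its own threshold, the distortion is, by construction, perceptually invisible under the threshold model.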
Procedia PDF Downloads 248
417 Internal Mercury Exposure Levels Correlated to DNA Methylation of Imprinting Gene H19 in Human Sperm of Reproductive-Aged Man
Authors: Zhaoxu Lu, Yufeng Ma, Linying Gao, Li Wang, Mei Qiang
Abstract:
Mercury (Hg) is a well-recognized environmental pollutant known for its developmental toxicity and neurotoxicity, which may result in adverse health outcomes. However, the mechanisms underlying the teratogenic effects of Hg are not well understood. Imprinted genes are emerging regulators of fetal development that are susceptible to the impacts of environmental pollutants. In this study, we examined the association between paternal preconception Hg exposure and alterations in DNA methylation of imprinted genes in human sperm DNA. A total of 618 men aged 22 to 59 were recruited from the Reproductive Medicine Clinic of the Maternal and Child Care Service Center and the Urologic Surgery Clinic of the Shanxi Academy of Medical Sciences between April 2015 and March 2016. Demographic information was collected using questionnaires. Urinary Hg concentrations were measured using a fully automatic double-channel hydride generation atomic fluorescence spectrometer. Methylation status in the DMRs of the imprinted genes H19, Meg3, and Peg3 in sperm DNA was examined by bisulfite pyrosequencing in 243 participants. Spearman's rank correlation and multivariate regression analysis were used to assess the relationship between the sperm DNA methylation status of imprinted genes and urinary Hg levels. The median Hg concentration for participants overall was 9.09 μg/l (IQR: 5.54-12.52 μg/l; range = 0-71.35 μg/l); no significant difference was found in median Hg concentrations among the various demographic groups (p > 0.05). The proportion of samples exceeding the intoxication criterion for urinary Hg (10 μg/l) was 42.6%. Spearman's rank correlation analysis indicates a negative correlation between urinary Hg concentrations and average DNA methylation levels in the DMR of the imprinted gene H19 (rs = −0.330, p = 0.000). However, no such correlation was found for Peg3 or Meg3.
Further, we analyzed the correlation between the methylation level at each CpG site of H19 and the Hg level. The results showed that three out of seven CpG sites in the H19 DMR, namely CpG2 (rs = −0.138, p = 0.031), CpG4 (rs = −0.369, p = 0.000), and CpG6 (rs = −0.228, p = 0.000), demonstrated a significant negative correlation between methylation levels and urinary Hg levels. After adjusting for age, smoking, drinking, intake of aquatic products, and education by multivariate regression analysis, the results showed a similar correlation. In summary, nonoccupational environmental mercury exposure in reproductive-aged men was associated with altered DNA methylation at the DMR of the imprinted gene H19 in sperm, implicating the susceptibility of the developing sperm to environmental insults. Keywords: epigenetics, genomic imprinting gene, DNA methylation, mercury, transgenerational effects, sperm
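The Spearman rank correlation used above can be computed without libraries. A minimal sketch (no tie correction) with invented Hg and methylation values, using the standard formula rs = 1 − 6·Σd²/(n(n²−1)):

```python
# Spearman rank correlation sketch (assumes no tied values).
def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

hg = [2.1, 5.5, 9.0, 12.5, 20.3]        # urinary Hg, μg/l (hypothetical)
meth = [90.0, 85.0, 80.0, 78.0, 70.0]   # % methylation at the DMR (hypothetical)
rs = spearman(hg, meth)                 # perfectly monotone decreasing data
```

With perfectly monotone decreasing data like this toy example, rs reaches its floor of −1; the study's observed rs = −0.330 reflects a weaker, but still significant, monotone trend.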
Procedia PDF Downloads 261
416 Gas Chromatography-Analysis, Antioxidant, Anti-Inflammatory, and Anticancer Activities of Some Extracts and Fractions of Linum usitatissimum
Authors: Eman Abdullah Morsi, Hend Okasha, Heba Abdel Hady, Mortada El-Sayed, Mohamed Abbas Shemis
Abstract:
Context: Linum usitatissimum (Linn), known as flaxseed, is one of the most important medicinal plants, traditionally used for various health and nutritional purposes. Objective: To estimate total phenolic and flavonoid contents; to evaluate antioxidant activity using the α,α-diphenyl-β-picrylhydrazyl (DPPH), 2,2'-azinobis(3-ethylbenzthiazoline-6-sulphonic acid) (ABTS), and total antioxidant capacity (TAC) assays; and to investigate the anti-inflammatory activity, by the bovine serum albumin (BSA) assay, and the anticancer activity, against the hepatocellular carcinoma cell line (HepG2) and the breast cancer cell line (MCF7), of the hexane, ethyl acetate, n-butanol, and methanol extracts, as well as of fractions of the methanol extract (hexane, ethyl acetate, and n-butanol). Materials and Methods: Phenolic and flavonoid contents were determined using spectrophotometric and colorimetric assays. Antioxidant and anti-inflammatory activities were estimated in vitro. The anticancer activity of the extracts and of the fractions of the methanolic extract was tested on HepG2 and MCF7. Results: The methanolic extract and its ethyl acetate fraction contain the highest contents of total phenols and flavonoids. In addition, the methanolic extract had the highest antioxidant activity. The butanolic and ethyl acetate fractions yielded the highest percentages of inhibition of protein denaturation. Meanwhile, the ethyl acetate fraction and the methanolic extract showed anticancer activity against HepG2 and MCF7 (IC50 = 60 ± 0.24 and 29.4 ± 0.12 µg.mL⁻¹, and IC50 = 94.7 ± 0.21 and 227 ± 0.48 µg.mL⁻¹, respectively). In gas chromatography-mass spectrometry (GC-MS) analysis, the methanolic extract showed 32 compounds, whereas the ethyl acetate and butanol fractions contained 40 and 36 compounds, respectively. Conclusion: Flaxseed contains many different biologically active compounds that possess a range of activities and can protect the human body against several diseases. Keywords: phenolic content, flavonoid content, HepG2, MCF7, hemolysis-assay, flaxseed
Procedia PDF Downloads 125
415 Risk Assessment of Natural Gas Pipelines in Coal Mined Gobs Based on Bow-Tie Model and Cloud Inference
Authors: Xiaobin Liang, Wei Liang, Laibin Zhang, Xiaoyan Guo
Abstract:
Pipelines inevitably pass through coal mined gobs in mining areas, and the stability of these gobs has a great influence on pipeline safety. After an extensive literature study and field research, it was found that few risk assessment methods exist for coal mined gob pipelines, and data on gob sites are scarce. Therefore, the fuzzy comprehensive evaluation method, based on expert opinions, is widely used. However, the subjective opinions or limited experience of individual experts may lead to inaccurate evaluation results, so the accuracy of the results needs to be further improved. This paper presents a comprehensive approach to this end, combining the bow-tie model with cloud inference. The specific evaluation process is as follows. First, a bow-tie model composed of a fault tree and an event tree is established to graphically illustrate the probability and consequence indicators of pipeline failure. Second, the indicators are scored as intervals (interval estimation) to improve the accuracy of the results, and the censored mean algorithm is used to remove the maximum and minimum scores to improve the stability of the results. The golden section method is used to determine the weights of the indicators and reduce the subjectivity of the index weights. Third, the failure probability and failure consequence scores of the pipeline are converted into three numerical features using cloud inference, which better captures the ambiguity and volatility of the results and hence of the risk level. Finally, cloud drop graphs of failure probability and failure consequences can be produced, which intuitively and accurately illustrate the ambiguity and randomness of the results. A case study of a coal mine gob pipeline carrying natural gas has been investigated to validate the utility of the proposed method.
The evaluation results of this case show that the probability of failure of the pipeline is very low but that the consequences of failure are serious, which is consistent with reality. Keywords: bow-tie model, natural gas pipeline, coal mine gob, cloud inference
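The censored mean step named above, removing the maximum and minimum expert scores before averaging to stabilise the result, is simple to sketch; the scores below are illustrative midpoints of hypothetical expert interval estimates:

```python
# Censored mean: drop one minimum and one maximum score, then average.
# This damps the influence of a single over- or under-confident expert.
def censored_mean(scores):
    if len(scores) <= 2:
        raise ValueError("need more than two scores to censor")
    trimmed = sorted(scores)[1:-1]   # remove one min and one max
    return sum(trimmed) / len(trimmed)

expert_scores = [6.5, 7.0, 7.2, 9.8, 3.1]   # one optimist, one pessimist
score = censored_mean(expert_scores)
```

The plain mean of these scores would be pulled around by the outliers 9.8 and 3.1; the censored mean keeps only the central consensus.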
Procedia PDF Downloads 250
414 Flood Simulation and Forecasting for Sustainable Planning of Response in Municipalities
Authors: Mariana Damova, Stanko Stankov, Emil Stoyanov, Hristo Hristov, Hermand Pessek, Plamen Chernev
Abstract:
We will present one of the first use cases on the DestinE platform, a joint initiative of the European Commission, the European Space Agency, and EUMETSAT that provides access to global earth observation, meteorological, and statistical data, and we emphasize the good practice of intergovernmental agencies acting in concert. Further, we will discuss the importance of space-based disruptive solutions for balancing the ever-increasing number of water-related disasters driven by climate change against the need to minimize their economic and societal impact. The use case focuses on forecasting floods and estimating the impact of flood events on the urban environment and the ecosystems in the affected areas, with the purpose of helping municipal decision-makers analyze and plan resource needs and of forging human-environment relationships by providing farmers with insightful information for improving their agricultural productivity. For the forecast, we adopt the EO4AI method of our ISME-HYDRO platform, in which we employ a pipeline of neural networks applied to in-situ measurements and satellite data on the meteorological factors influencing the hydrological and hydrodynamic status of rivers and dams, such as precipitation, soil moisture, vegetation index, and snow cover, to model flood events and their span. The ISME-HYDRO platform is an e-infrastructure for water resources management based on linked data, extended with further intelligence that generates forecasts with the method described above, issues alerts, formulates queries, provides superior interactivity, and drives communication with the users. It provides synchronized visualization of table views, graph views, and interactive maps. It will be federated with the DestinE platform. Keywords: flood simulation, AI, Earth observation, e-Infrastructure, flood forecasting, flood areas localization, response planning, resource estimation
Procedia PDF Downloads 21
413 The Impact of Human Intervention on Net Primary Productivity for the South-Central Zone of Chile
Authors: Yannay Casas-Ledon, Cinthya A. Andrade, Camila E. Salazar, Mauricio Aguayo
Abstract:
The sustainable management of available natural resources is a crucial question for policy-makers, economists, and the research community. Among these resources, land is one of the most critical, being intensively appropriated by human activities, producing ecological stresses, and reducing ecosystem services. In this context, net primary production (NPP) has been considered a feasible proxy indicator for estimating the impacts of human intervention on land-use intensity. Accordingly, the human appropriation of NPP (HANPP) was calculated for the south-central regions of Chile between 2007 and 2014. HANPP was defined as the difference between the potential NPP of the natural vegetation (NPP0, i.e., the vegetation that would exist without any human interference) and the NPP remaining in the field after harvest (NPPeco), expressed in gC/m² yr. The other NPP flows taken into account in the HANPP estimation were the harvested NPP (NPPh) and the losses of NPP through land conversion (NPPluc). The ArcGIS 10.4 software was used to assess the spatial and temporal HANPP changes. HANPP as a percentage of NPP0 was estimated for each land-cover type, taking 2007 and 2014 as the reference years. The spatial results depicted a negative impact on land-use efficiency between 2007 and 2014, showing negative HANPP changes for the whole region. The harvest and the biomass losses through land conversion are the leading causes of the loss of land-use efficiency. Furthermore, the study found higher HANPP in 2014 than in 2007, representing 50% of NPP0 across all land-cover classes relative to 2007. This performance was mainly related to the higher volume of biomass harvested for agriculture. Consequently, cropland showed the highest HANPP, followed by plantations. This performance highlights the strong influence of the economic activities developed in the region.
This finding constitutes the basis for a better understanding of the main driving forces influencing biomass productivity and a powerful metric for supporting the sustainable management of land use. Keywords: human appropriation, land-use changes, land-use impact, net primary productivity
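The HANPP bookkeeping defined above (HANPP = NPP0 − NPPeco, with NPPh and NPPluc as the appropriated flows) can be illustrated with invented values in gC/m² yr, not the study's data:

```python
# HANPP accounting sketch. NPP0 is the potential NPP of natural vegetation,
# NPPeco what remains in the ecosystem after human appropriation, and HANPP
# their difference, i.e., harvest plus land-conversion losses.
npp0 = 800.0     # potential NPP of natural vegetation (hypothetical)
npp_h = 250.0    # harvested NPP (hypothetical)
npp_luc = 150.0  # NPP lost through land conversion (hypothetical)

npp_eco = npp0 - npp_h - npp_luc    # NPP remaining in the field
hanpp = npp0 - npp_eco              # equals npp_h + npp_luc
hanpp_pct = 100.0 * hanpp / npp0    # HANPP as % of NPP0
```

With these toy numbers, half of the potential NPP is appropriated, matching the order of magnitude (50% of NPP0) reported for 2014 in the abstract.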
Procedia PDF Downloads 136
412 Derivation of Fragility Functions of Marine Drilling Risers Under Ocean Environment
Authors: Pranjal Srivastava, Piyali Sengupta
Abstract:
The performance of marine drilling risers is crucial in the offshore oil and gas industry to ensure safe drilling operations with minimum downtime. Experimental investigations of marine drilling risers are limited in the literature owing to the expensive and exhaustive test setup required to replicate a realistic riser model and ocean environment in the laboratory. Therefore, this study presents an analytical model of a marine drilling riser for determining its fragility under ocean environmental loading. In this study, the marine drilling riser is idealized as a continuous beam with a concentric circular cross-section. The hydrodynamic loading acting on the riser is determined by Morison's equations. By considering the equilibrium of forces on the riser in the connected and normal drilling conditions, the governing partial differential equations in terms of the independent variables z (depth) and t (time) are derived. Subsequently, the Runge-Kutta method and the finite difference method are employed to solve the partial differential equations arising from the analytical model. The proposed analytical approach is successfully validated against experimental results from the literature. From the dynamic analysis results of the proposed analytical approach, the critical design parameters of marine drilling risers are determined: peak displacements, upper and lower flex joint rotations, and von Mises stresses. An extensive parametric study is conducted to explore the effects of top tension, drilling depth, ocean current speed, and platform drift on these critical design parameters. Thereafter, incremental dynamic analysis is performed to derive the fragility functions of shallow-water and deep-water marine drilling risers under ocean environmental loading.
The proposed methodology can also be adopted for downtime estimation of marine drilling risers, incorporating the ranges of uncertainty associated with the ocean environment, especially in deep and ultra-deep water. Keywords: drilling riser, marine, analytical model, fragility
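Morison's equation, used above for the hydrodynamic loading, gives the in-line force per unit length on a cylinder (the riser) as a drag term plus an inertia term: f = ½ρ·Cd·D·u|u| + ρ·Cm·A·(du/dt). A minimal sketch with assumed (not the study's) coefficients and flow conditions:

```python
import math

# Morison force per unit length on a circular cylinder.
# rho: water density, cd/cm: drag/inertia coefficients, diameter: cylinder
# diameter, u: flow velocity, du_dt: flow acceleration. All values assumed.
def morison_force(rho, cd, cm, diameter, u, du_dt):
    area = math.pi * diameter ** 2 / 4.0          # cross-sectional area
    drag = 0.5 * rho * cd * diameter * u * abs(u)  # velocity-squared drag
    inertia = rho * cm * area * du_dt              # acceleration-driven term
    return drag + inertia                          # N/m

f = morison_force(rho=1025.0, cd=1.0, cm=2.0, diameter=0.5, u=1.2, du_dt=0.3)
```

In the study's dynamic analysis this force would be evaluated at every depth z and time t as the loading term of the governing partial differential equations.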
Procedia PDF Downloads 146
411 Organ Dose Calculator for Fetus Undergoing Computed Tomography
Authors: Choonsik Lee, Les Folio
Abstract:
Pregnant patients may undergo CT in emergencies unrelated to pregnancy, and the potential risk to the developing fetus is of concern. It is therefore critical to accurately estimate fetal organ doses in CT scans. We developed a fetal organ dose calculation tool using pregnancy-specific computational phantoms combined with Monte Carlo radiation transport techniques. We adopted a series of pregnancy computational phantoms developed at the University of Florida for gestational ages of 8, 10, 15, 20, 25, 30, 35, and 38 weeks (Maynard et al. 2011). More than 30 organs and tissues and 20 skeletal sites are defined in each fetus model. We calculated fetal organ doses normalized by CTDIvol to derive organ dose conversion coefficients (mGy/mGy) for the eight fetuses for consecutive slice locations ranging from the top to the bottom of the pregnancy phantoms with 1 cm slice thickness. Organ dose from helical scans was approximated by the summation of doses from the multiple axial slices included in the scan range of interest. We then compared the dose conversion coefficients for major fetal organs in abdominal-pelvis CT scans of the pregnancy phantoms with the uterine dose of a non-pregnant adult female computational phantom. A comprehensive library of organ conversion coefficients was established for the eight developing fetuses undergoing CT. It was implemented into an in-house computer program with a graphical user interface for convenient estimation of fetal organ doses from the input CT technical parameters and the age of the fetus. We found that the esophagus received the lowest dose, whereas the kidneys received the greatest dose, in all fetuses in abdominal-pelvis scans of the pregnancy phantoms. We also found that when the uterine dose of a non-pregnant adult female phantom is used as a surrogate for fetal organ doses, the root-mean-square error ranged from 0.08 mGy (8 weeks) to 0.38 mGy (38 weeks).
The uterine dose was up to 1.7-fold greater than the esophagus dose of the 38-week fetus model. The calculation tool should be useful in cases requiring fetal organ dose in emergency CT scans as well as in patient dose monitoring. Keywords: computed tomography, fetal dose, pregnant women, radiation dose
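The library lookup implied above, organ dose = conversion coefficient (mGy/mGy) × CTDIvol, can be sketched as follows; the coefficients and organs below are placeholders, not the published library:

```python
# Dose lookup sketch: conversion coefficients are indexed by gestational
# age and organ, and scale linearly with the scan's CTDIvol.
# All coefficient values below are hypothetical placeholders.
coeffs = {  # gestational age (weeks) -> organ -> mGy per mGy of CTDIvol
    20: {"kidneys": 1.10, "esophagus": 0.40},
    38: {"kidneys": 1.05, "esophagus": 0.35},
}

def fetal_organ_dose(age_weeks, organ, ctdi_vol):
    """Estimated organ dose in mGy for a given CTDIvol in mGy."""
    return coeffs[age_weeks][organ] * ctdi_vol

dose = fetal_organ_dose(20, "kidneys", ctdi_vol=10.0)
```

This is the design choice that makes the tool fast: the expensive Monte Carlo transport is done once per phantom and slice to build the table, and clinical use reduces to a lookup and a multiplication.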
Procedia PDF Downloads 140
410 Familiarity with Flood and Engineering Solutions to Control It
Authors: Hamid Fallah
Abstract:
Flood is widely regarded as the most terrible natural disaster in the world, both in terms of loss of life and in terms of financial losses. From 1988 to 1997, about 390,000 people worldwide were killed by natural disasters, of which 58% were related to floods, 26% to earthquakes, and 16% to storms and other disasters. The total damages in these 10 years were about 700 billion dollars, of which 33%, 29%, and 28% were related to floods, storms, and earthquakes, respectively. In this regard, the worrisome point has been the increasing trend of flood deaths and damages in the world in recent decades. The increase in population and assets on flood plains, changes in hydro systems, and the destructive effects of human activities have been the main reasons for this increase. During rain and snowfall, some of the water is absorbed by the soil and plants, a percentage evaporates, and the rest flows away as runoff. Floods occur when the soil and plants cannot absorb the rainfall and, as a result, the natural river channel does not have the capacity to pass the generated runoff. On average, almost 30% of precipitation is converted into runoff, and this share increases with snowmelt. Recurrent floods create an area around the river called the flood plain. River floods are often caused by heavy rains, in some cases accompanied by snowmelt. A flood that flows down a river with little or no warning is called a flash flood. The casualties of these flash floods, which occur in small watersheds, are generally higher than those of large river floods. Coastal areas are also subject to flooding caused by waves from strong storms on the surface of the oceans or by waves generated by undersea earthquakes. Floods not only damage property and endanger the lives of humans and animals, but also have other effects.
Runoff caused by heavy rains causes soil erosion upstream and sedimentation problems downstream. The habitats of fish and other animals are often destroyed by floods, and the high speed of the current increases the damage. Long-lasting floods stop traffic and prevent the drainage and economic use of land. The supports of bridges, river banks, sewage outlets, and other structures are damaged, and shipping and hydropower generation are disrupted. The economic losses of floods worldwide are estimated at tens of billions of dollars annually. Keywords: flood, hydrological engineering, GIS, dam, small hydropower, suitability
Procedia PDF Downloads 67
409 Constraints on Source Rock Organic Matter Biodegradation in the Biogenic Gas Fields in the Sanhu Depression, Qaidam Basin, Northwestern China: A Study of Compound Concentration and Concentration Ratio Changes Using GC-MS Data
Authors: Mengsha Yin
Abstract:
Extractable organic matter (EOM) from thirty-six biogenic gas source rocks from the Sanhu Depression in the Qaidam Basin in northwestern China was obtained via Soxhlet extraction. Twenty-nine of the extracts underwent SARA (saturates, aromatics, resins, and asphaltenes) separation for bulk composition analysis. The saturated and aromatic fractions of all the extracts were analyzed by gas chromatography-mass spectrometry (GC-MS) to investigate the compound compositions. More abundant n-alkanes, naphthalene, phenanthrene, dibenzothiophene, and their alkylated products occur in samples at shallower depths. From 2000 m downward, the concentrations of these compounds increase sharply, and the concentration ratios of more- over less-biodegradation-susceptible compounds simultaneously decrease dramatically. The ∑iC15-16, 18-20/∑nC15-16, 18-20 and hopanoids/∑n-alkanes concentration ratios, and the mono- and tri-aromatic sterane concentrations and concentration ratios, frequently fluctuate with depth rather than trend with it, reflecting effects from organic input and paleoenvironments rather than biodegradation. The saturated and aromatic compound distributions on the total ion chromatogram (TIC) traces of the samples display different degrees of biodegradation. The dramatic and simultaneous variations in compound concentrations and their ratios at 2000 m, together with their changes with depth below it, jointly demonstrate the crucial control of burial depth on the extent of organic matter biodegradation in source rocks and prompt the proposition that 2000 m is the lower depth boundary for active microbial activity in this study.
The study helps to better constrain the depth conditions under which effective source rocks occur in the Sanhu biogenic gas fields and calls for additional attention to source rock pore size estimation during biogenic gas source rock appraisals. Keywords: pore space, Sanhu depression, saturated and aromatic hydrocarbon compound concentration, source rock organic matter biodegradation, total ion chromatogram
Procedia PDF Downloads 156
408 Synthesis and Thermoluminescence Investigations of Doped LiF Nanophosphor
Authors: Pooja Seth, Shruti Aggarwal
Abstract:
Thermoluminescence dosimetry (TLD) is one of the most effective methods for the assessment of dose during diagnostic radiology and radiotherapy applications. In these applications, monitoring of the absorbed dose is essential to prevent undue exposure of the patient and to evaluate the risks that may arise from exposure. LiF-based thermoluminescence (TL) dosimeters are promising materials for the estimation, calibration, and monitoring of dose due to their favourable dosimetric characteristics, such as tissue equivalence, high sensitivity, energy independence, and dose linearity. As the TL efficiency of a phosphor strongly depends on the preparation route, it is interesting to investigate the TL properties of LiF-based phosphors in nanocrystalline form. LiF doped with magnesium (Mg), copper (Cu), sodium (Na), and silicon (Si) in nanocrystalline form has been prepared using a chemical co-precipitation method. Cube-shaped LiF nanostructures are formed. The TL dosimetry properties have been investigated by exposure to gamma rays. The TL glow curve of the nanocrystalline form consists of a single peak at 419 K, as compared to the multiple peaks observed in the microcrystalline form. A consistent glow curve structure with maximum TL intensity at an annealing temperature of 573 K and a linear dose response from 0.1 to 1000 Gy are observed, which is advantageous for radiotherapy applications. Good reusability, low fading (5% over a month), and negligible residual signal (0.0019%) are observed. In photoluminescence measurements, a wide emission band at 360-550 nm is observed in undoped LiF, whereas an intense peak at 488 nm is observed in the doped LiF nanophosphor. The phosphor also exhibits intense optically stimulated luminescence. The nanocrystalline LiF: Mg, Cu, Na, Si phosphor prepared by the co-precipitation method showed a simple glow curve structure, linear dose response, reproducibility, negligible residual signal, good thermal stability, and low fading.
The LiF: Mg, Cu, Na, Si phosphor in nanocrystalline form has tremendous potential in diagnostic radiology, radiotherapy, and high-energy radiation applications. Keywords: thermoluminescence, nanophosphor, optically stimulated luminescence, co-precipitation method
Procedia PDF Downloads 404
407 Assessment of the Performance of the Sonoreactors Operated at Different Ultrasound Frequencies, to Remove Pollutants from Aqueous Media
Authors: Gabriela Rivadeneyra-Romero, Claudia del C. Gutierrez Torres, Sergio A. Martinez-Delgadillo, Victor X. Mendoza-Escamilla, Alejandro Alonzo-Garcia
Abstract:
Ultrasonic degradation is currently used in sonochemical reactors to degrade pollutant compounds, such as emerging contaminants (e.g., pharmaceuticals, drugs, and personal care products), from aqueous media, because these compounds can have ecological impacts on the environment. For this reason, it is important to develop appropriate water and wastewater treatments able to reduce pollution and increase reuse. Pollutants such as textile dyes, aromatic and phenolic compounds, chlorobenzene, bisphenol-A, carboxylic acids, and other organic pollutants can be removed from wastewaters by sonochemical oxidation. The removal of pollutants depends on the ultrasonic frequency used; however, few studies have examined the behavior of the fluid inside sonoreactors operated at different ultrasonic frequencies. It is therefore necessary to study the hydrodynamic behavior of the liquid generated by the ultrasonic irradiation in order to design efficient sonoreactors that reduce treatment times and costs. In this work, the hydrodynamic behavior of the fluid in sonochemical reactors was studied at different frequencies (250 kHz, 500 kHz, and 1000 kHz). The performance of the sonoreactors at these frequencies was simulated using computational fluid dynamics (CFD). Because there is a large sound-speed gradient between the piezoelectric transducer and the fluid, k-ε models were used. The piezoelectric was defined as a vibrating surface in order to evaluate the effect of the different frequencies on the fluid in the sonochemical reactor. Structured hexahedral cells were used to mesh the computational liquid domain, and fine triangular cells were used to mesh the piezoelectric transducers. Unsteady-state conditions were used in the solver. The dissipation rate, flow field velocities, Reynolds stresses, and turbulent quantities were estimated by CFD and by 2D-PIV measurements.
Test results show that an increase in the ultrasonic frequency does not necessarily improve pollutant degradation; moreover, the reactor geometry and power density are important factors that should be considered in sonochemical reactor design. Keywords: CFD, reactor, ultrasound, wastewater
Procedia PDF Downloads 190
406 MIMO Radar-Based System for Structural Health Monitoring and Geophysical Applications
Authors: Davide D’Aria, Paolo Falcone, Luigi Maggi, Aldo Cero, Giovanni Amoroso
Abstract:
The paper presents a methodology for real-time structural health monitoring and geophysical applications. The key elements of the system are a high-performance MIMO radar sensor, an optical camera, and a dedicated set of software algorithms encompassing interferometry, tomography, and photogrammetry. The MIMO radar sensor proposed in this work provides an extremely high sensitivity to displacements, making the system able to react to tiny deformations (down to tens of microns) on time scales spanning from milliseconds to hours. The MIMO feature makes the system capable of providing a set of two-dimensional images of the observed scene, each mapped onto the azimuth-range directions with notable resolution in both dimensions and with an outstanding repetition rate. The back-scattered energy, which is distributed in 3D space, is projected onto a 2D plane, where each pixel has as coordinates the line-of-sight distance and the cross-range azimuthal angle. At the same time, the high-performance processing unit allows the scene to be sensed with remarkably short refresh periods (down to milliseconds), thus opening the way for combined static and dynamic structural health monitoring. Thanks to the smart TX/RX antenna array layout, the MIMO data can be processed through a tomographic approach to reconstruct the three-dimensional map of the observed scene. This 3D point cloud is then accurately mapped onto a 2D digital optical image through photogrammetric techniques, allowing easy and straightforward interpretation of the measurements. Once the three-dimensional image is reconstructed, a 'repeat-pass' interferometric approach is exploited to provide the user with a high-frequency three-dimensional motion/vibration estimate for each point of the reconstructed image.
At this stage, the methodology leverages consolidated atmospheric correction algorithms to provide reliable displacement and vibration measurements. Keywords: interferometry, MIMO RADAR, SAR, tomography
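The repeat-pass interferometric step above rests on the standard two-way phase-to-displacement relation, d = λ·Δφ/(4π) along the line of sight. A sketch with an assumed wavelength (the actual radar band of the system is not stated in the abstract):

```python
import math

# Two-way radar interferometry: a phase change of delta_phase_rad between
# acquisitions corresponds to a line-of-sight displacement of
# lam * dphi / (4*pi), since the path to the target is traversed twice.
def los_displacement(wavelength_m, delta_phase_rad):
    return wavelength_m * delta_phase_rad / (4.0 * math.pi)

# Assumed ~17.6 mm wavelength and a quarter-cycle phase shift
d = los_displacement(0.0176, math.pi / 2)   # metres
```

A quarter-cycle phase shift at this assumed wavelength maps to about 2.2 mm of line-of-sight motion, which illustrates why phase-based sensing resolves deformations far smaller than the range resolution.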
Procedia PDF Downloads 195
405 Evaluation of a Piecewise Linear Mixed-Effects Model in the Analysis of Randomized Cross-over Trial
Authors: Moses Mwangi, Geert Verbeke, Geert Molenberghs
Abstract:
Cross-over designs are commonly used in randomized clinical trials to estimate the efficacy of a new treatment with respect to a reference treatment (placebo or standard). The main advantage of a cross-over design over a conventional parallel design is its flexibility: every subject becomes his or her own control, thereby reducing confounding. Jones & Kenward discuss more recent developments in the analysis of cross-over trials in detail. We revisit the simple piecewise linear mixed-effects model proposed by Mwangi et al. (in press) for its first application to the analysis of cross-over trials. We compared the performance of the proposed piecewise linear mixed-effects model with that of two commonly cited statistical models used in the estimation of the treatment effect in the analysis of randomized cross-over trials, namely (1) the Grizzle model and (2) the Jones & Kenward model. We estimated two performance measures (mean square error (MSE) and coverage probability) for the three methods, using data simulated from the proposed piecewise linear mixed-effects model. The piecewise linear mixed-effects model yielded the lowest MSE estimates compared to the Grizzle and Jones & Kenward models for both small (Nobs=20) and large (Nobs=600) sample sizes, and its coverage probabilities were the highest for both sample sizes. A piecewise linear mixed-effects model is thus a better estimator of the treatment effect than its two competitors (the Grizzle and Jones & Kenward models) in the analysis of cross-over trials. The data-generating mechanism used in this paper captures two time periods for a simple 2-treatments x 2-periods cross-over design. Its application can be extended to more complex cross-over designs with multiple treatments and periods.
In addition, it is important to note that, even for single-response models, adding more random effects increases the complexity of the model and may thus be difficult or impossible to fit in some cases.
Keywords: evaluation, Grizzle model, Jones & Kenward model, performance measures, simulation
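The two performance measures compared in the abstract, MSE and coverage probability, can be illustrated with a minimal Monte Carlo sketch. This does not reproduce the proposed piecewise linear model or its competitors; it assumes a simple random-intercept model for a 2x2 cross-over (no period or carry-over effects) and a paired-difference estimator, with all numbers (tau, variances, sample sizes) chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_crossover(n_subjects, tau=1.0, sigma_b=1.0, sigma_e=0.5):
    """Simulate a 2-treatment x 2-period cross-over trial.

    Each subject has a random intercept (between-subject effect) and
    receives both treatments, one per period; tau is the true
    treatment effect. Period effects are omitted for simplicity.
    """
    b = rng.normal(0.0, sigma_b, n_subjects)                 # random intercepts
    y_ref = b + rng.normal(0.0, sigma_e, n_subjects)         # reference treatment
    y_new = b + tau + rng.normal(0.0, sigma_e, n_subjects)   # new treatment
    return y_ref, y_new

def estimate(y_ref, y_new):
    """Estimate the treatment effect from within-subject differences."""
    d = y_new - y_ref
    est = d.mean()
    se = d.std(ddof=1) / np.sqrt(d.size)
    return est, se

tau, n_sim = 1.0, 2000
ests, covered = [], 0
for _ in range(n_sim):
    y_ref, y_new = simulate_crossover(n_subjects=10, tau=tau)  # Nobs = 20
    est, se = estimate(y_ref, y_new)
    ests.append(est)
    if est - 1.96 * se <= tau <= est + 1.96 * se:              # 95% z-interval
        covered += 1

mse = np.mean((np.array(ests) - tau) ** 2)       # mean square error
coverage = covered / n_sim                       # coverage probability
print(f"MSE = {mse:.4f}, coverage = {coverage:.3f}")
```

Repeating the loop with the estimators under comparison (rather than the single paired-difference estimator here) is how the abstract's MSE and coverage rankings would be obtained.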
Procedia PDF Downloads 122
404 Drivers of Liking: Probiotic Petit Suisse Cheese
Authors: Helena Bolini, Erick Esmerino, Adriano Cruz, Juliana Paixao
Abstract:
The current concern for health has increased the demand for low-calorie ingredients and functional foods such as probiotics. Understanding the reasons that influence food choice, besides being a challenging task, is an important step in the development and/or reformulation of existing food products. The use of appropriate multivariate statistical techniques, such as the External Preference Map (PrefMap) associated with Partial Least Squares (PLS) regression, can help in determining those factors. Thus, this study aimed to determine, through PLS regression analysis, the sensory attributes considered drivers of liking in strawberry-flavored probiotic petit suisse cheeses sweetened with different sweeteners. Five samples of equivalent sweetness, PROB1 (sucralose 0.0243%), PROB2 (stevia 0.1520%), PROB3 (aspartame 0.0877%), PROB4 (neotame 0.0025%) and PROB5 (sucrose 15.2%), determined by just-about-right and magnitude estimation methods, and three commercial samples, COM1, COM2 and COM3, were studied. The analysis was performed on data from QDA, carried out by 12 experts (highly trained assessors) on 20 descriptor terms, correlated with overall liking data from an acceptance test performed by 125 consumers on all samples. The results were then submitted to PLS regression using the XLSTAT software from Byossistemes. As the results show, three sensory descriptor terms could be considered drivers of liking of the sweetened probiotic petit suisse cheese samples (p<0.05). Milk flavor was a sensory characteristic with a positive impact on acceptance, while the descriptors bitter taste and sweet aftertaste had a negative impact on the acceptance of the probiotic petit suisse cheeses. 
It was possible to conclude that PLS regression analysis is a practical and useful tool for determining the drivers of liking of probiotic petit suisse cheeses sweetened with artificial and natural sweeteners, allowing the food industry to understand and improve their formulations, maximizing the acceptability of their products.
Keywords: acceptance, consumer, quantitative descriptive analysis, sweetener
Procedia PDF Downloads 446
403 Microplastics in the Seine River Catchment: Results and Lessons from a Pluriannual Research Programme
Authors: Bruno Tassin, Robin Treilles, Cleo Stratmann, Minh Trang Nguyen, Sam Azimi, Vincent Rocher, Rachid Dris, Johnny Gasperi
Abstract:
Microplastics (<5 mm) in the environment and in hydrosystems are one of today's major environmental issues. Over the last five years, a research programme was conducted to assess the behavior of microplastics in the Seine river catchment, in a Man-Land-Sea continuum approach. The results show that microplastic concentration varies at the seasonal scale, but also at much smaller scales, during flood events and with tides in the estuary, for instance. Moreover, microplastic sampling and characterization issues emerged throughout this work. The Seine river is a 750 km long river flowing through Northwestern France. It crosses the Paris megacity (12 million inhabitants) and reaches the English Channel after a 170 km long estuary. This site is very relevant for assessing the effect of anthropogenic pollution, as the mean river flow is low (around 350 m³/s) while the human presence and activities are very intense. Monthly monitoring of the microplastic concentration took place over a 19-month period and showed significant temporal variations at all sampling stations but no significant upstream-downstream increase, indicating a possible major sink to the sediment. At the scale of a major flood event (winter and spring 2018), the microplastic concentration evolves similarly to the well-known suspended solids concentration, increasing as the flow rises and decreasing as it recedes. Assessing the position of the concentration peak relative to the flow peak was unfortunately impossible. In the estuary, concentrations vary with time in connection with tidal movements, and in the water column in relation to salinity and turbidity. 
Although major gains in knowledge of microplastic dynamics in the Seine river have been obtained over the last years, major gaps remain, mostly concerning the interaction with the dynamics of suspended solids, the settling processes in the water column, and resuspension by navigation or increased shear stress. Moreover, the development of efficient chemical characterization techniques during the 5-year period of this pluriannual research programme led to improved sampling techniques, in order to access smaller microplastics (>10 µm) as well as larger but rarer ones (>500 µm).
Keywords: microplastics, Paris megacity, Seine river, suspended solids
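The flood-event observation above, that microplastic concentration tracks the suspended solids concentration through the hydrograph, amounts to a correlation between the two time series. A minimal sketch, using an entirely synthetic hydrograph and made-up scaling factors (none of these values are Seine measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic flood hydrograph: flow rises then recedes over 20 daily
# time steps around a low baseline (illustrative values, not Seine data).
t = np.arange(20)
flow = 350 + 600 * np.exp(-0.5 * ((t - 8) / 3.0) ** 2)  # m^3/s

# Assume both concentrations scale with flow plus measurement noise,
# mimicking the behavior reported during the 2018 flood event.
ss = 0.05 * flow + rng.normal(0, 2, t.size)       # suspended solids, mg/L
mp = 0.002 * flow + rng.normal(0, 0.1, t.size)    # microplastics, particles/L

# Pearson correlation between the two concentration series
r = np.corrcoef(mp, ss)[0, 1]
print(f"correlation(MP, SS) = {r:.2f}")
```

In a real dataset, the lag between the concentration peak and the flow peak (which the abstract notes could not be assessed) would be estimated by cross-correlation at varying offsets rather than the zero-lag correlation shown here.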
Procedia PDF Downloads 198