Search results for: oscillating flow
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4730

2570 Hiveopolis - Honey Harvester System

Authors: Erol Bayraktarov, Asya Ilgun, Thomas Schickl, Alexandre Campo, Nicolis Stamatios

Abstract:

Traditional means of harvesting honey are often stressful for honeybees. Each time honey is collected, a portion of the colony can die. In consequence, the colonies' resilience to environmental stressors decreases, and this ultimately contributes to the global problem of honeybee colony losses. As part of the project HIVEOPOLIS, we design and build a different kind of beehive, incorporating technology to reduce the negative impacts of beekeeping procedures, including honey harvesting. A first step towards more sustainable honey harvesting practices is to design honey storage frames that can automate the honey collection procedure. This way, beekeepers save time, money, and labor by not having to open the hive and remove frames, and the honeybees' nest stays undisturbed. This system shows promising features, e.g., high reliability, which could be a key advantage compared to current honey harvesting technologies. Our original concept of fractional honey harvesting has been to encourage the removal of honey only from "safe" locations and at levels that would leave the bees enough high-nutritional-value honey. In this abstract, we describe the current state of our honey harvester, its technology and areas to improve. The honey harvester works by separating the honeycomb cells away from the comb foundation; the movement and the elastic nature of honey support this functionality. The honey sticks to the foundation because of surface tension forces amplified by the geometry. In the future, by monitoring the weight and therefore the capped honey cells on our honey harvester frames, we will be able to remove honey as soon as the weight measuring system reports that the comb is ready for harvesting. Higher-viscosity or crystallized honey causes challenges in temperate locations when a smooth flow of honey is required. We use resistive heaters to soften the propolis and wax to unglue the moving parts during extraction. These heaters can also melt the honey slightly to reach the needed flow state. Precise control of these heaters allows us to operate the device for several purposes. We use ‘Nitinol’ springs that are activated by heat as an actuation method. Unlike conventional stepper or servo motors, which we also evaluated throughout development, the springs and heaters take up less space and reduce the overall system complexity. Honeybee acceptance was unknown until we actually inserted a device inside a hive. We not only observed bees walking on the artificial comb but also building wax, filling gaps with propolis and storing honey. This also shows that bees don't mind living in spaces and hives built from 3D-printed materials. We do not yet have data to prove that the plastic materials do not affect the chemical composition of the honey. We succeeded in automatically extracting stored honey from the device, demonstrating a usable extraction flow and effective overall operation.

Keywords: honey harvesting, honeybee, hiveopolis, nitinol

Procedia PDF Downloads 95
2569 Comparative Numerical Simulations of Reaction-Coupled Annular and Free-Bubbling Fluidized Beds Performance

Authors: Adefarati Oloruntoba, Yongmin Zhang, Hongliang Xiao

Abstract:

An annular fluidized bed (AFB) is gaining extensive application in the process industry due to its efficient gas-solids contacting, but a direct evaluation of its reaction performance is still lacking. In this paper, comparative 3D Euler–Lagrange multiphase particle-in-cell (MP-PIC) computations are performed to assess the reaction performance of an AFB relative to a bubbling fluidized bed (BFB) in an FCC regeneration process. By using the energy-minimization multi-scale (EMMS) drag model with a suitable heterogeneity index, the MP-PIC simulation predicts the typical fountain region in the AFB and the solids holdup of the BFB, consistent with experimental data. Coke combustion rate, flue gas composition and temperature profile are utilized as the performance indicators, while the related bed hydrodynamics are explored to account for the different performance under varying superficial gas velocities (0.5 m/s, 0.6 m/s, and 0.7 m/s). Simulation results indicate that the burning rates of coke and its species are relatively the same in both beds, albeit with a marginal increase in the BFB. Similarly, the shape and evolution time of the flue gas (CO, CO₂, H₂O and O₂) curves are indistinguishable but match the coke combustion rates. However, the AFB has a higher proclivity for large temperature gradients, as higher gas and solids temperatures are predicted in the freeboard. Moreover, for both beds, the effect of superficial gas velocity is conspicuous only on the temperature and negligible on combustion efficiency and effluent gas emissions, due to the constant gas volumetric flow rate and bed loading criteria. Cross-flow of solids from the annulus to the spout region as well as the high primary gas flow in the AFB are the underlying mechanisms for its unique gas-solids hydrodynamics (pressure, solids holdup, velocity, mass flux) and local spatial homogeneity, which in turn influence the reactor performance. Overall, the study portrays the AFB as a cost-effective alternative to the BFB for catalyst regeneration.

Keywords: annular fluidized bed, bubbling fluidized bed, coke combustion, flue gas, fountaining, CFD, MP-PIC, hydrodynamics, FCC regeneration

Procedia PDF Downloads 145
2568 Estimating the Traffic Impacts of Green Light Optimal Speed Advisory Systems Using Microsimulation

Authors: C. B. Masera, M. Imprialou, L. Budd, C. Morton

Abstract:

Even though signalised intersections are necessary for urban road traffic management, they can act as bottlenecks and disrupt traffic operations. Interrupted traffic flow causes congestion, delays, stop-and-go conditions (i.e. excessive acceleration/deceleration) and longer journey times. Vehicle and infrastructure connectivity offers the potential to provide new and improved services with additional driver-assistance functions. This paper focuses on one of the applications of vehicle-to-infrastructure communication, namely Green Light Optimal Speed Advisory (GLOSA). To assess the effectiveness of GLOSA in the urban road network, an integrated microscopic traffic simulation framework is built in the VISSIM software. Vehicle movements and vehicle-infrastructure communications are simulated through the External Driver Model interface. A control algorithm is developed for recommending an optimal speed that is continuously updated at every time step for all vehicles approaching a signal-controlled point. This algorithm allows vehicles to pass a traffic signal without stopping or to minimise stopping times at a red phase. This study is performed with connected vehicles at a 100% penetration rate. Conventional vehicles are also simulated in the same network as a reference. A straight road segment composed of two opposite directions with two traffic lights per lane is studied. The simulation is implemented under traffic volumes of 150 and 200 vehicles per hour per lane to identify how different traffic densities influence the benefits of GLOSA. The results indicate that traffic flow is improved by the application of GLOSA. According to this study, vehicles passed through the traffic lights more smoothly, and waiting times were reduced by up to 28 seconds. Average delays decreased for the entire network by 86.46% and 83.84% under traffic densities of 150 vehicles per hour per lane and 200 vehicles per hour per lane, respectively.
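
A minimal sketch of a GLOSA-style speed advisory is shown below. The logic and parameter names are hypothetical illustrations of the idea described in the abstract (arrive at the stop line within the next green window), not the paper's actual VISSIM controller.

```python
def glosa_advisory_speed(dist_m, t_green_start_s, t_green_end_s,
                         v_min=5.0, v_max=13.9):
    """Hypothetical GLOSA-style advisory: pick a speed in [v_min, v_max] m/s
    so the vehicle reaches the stop line within the green window
    [t_green_start_s, t_green_end_s] (seconds from now; 0 means the signal
    is already green). Returns None if no non-stop passage is possible."""
    # Fastest admissible speed: do not arrive before the green starts.
    v_upper = dist_m / t_green_start_s if t_green_start_s > 0 else v_max
    # Slowest admissible speed: arrive before the green ends.
    v_lower = dist_m / t_green_end_s
    # Intersect the arrival-speed band with the allowed speed range.
    v_hi = min(v_upper, v_max)
    v_lo = max(v_lower, v_min)
    return v_hi if v_hi >= v_lo else None

# Example: 300 m to the stop line, green starts in 20 s and ends in 45 s
print(glosa_advisory_speed(300.0, 20.0, 45.0))  # ~13.9 m/s (50 km/h cap)
```

In a simulation such a function would be re-evaluated at every time step for each approaching vehicle, as the abstract describes.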

Keywords: connected vehicles, GLOSA, intelligent transport systems, vehicle-to-infrastructure communication

Procedia PDF Downloads 149
2567 Laboratory Assessment of Electrical Vertical Drains in Composite Soils Using Kaolin and Bentonite Clays

Authors: Maher Z. Mohammed, Barry G. Clarke

Abstract:

As an alternative to stone columns in fine-grained soils, it is possible to create stiffened columns of soil using electroosmosis (electroosmotic piles). The purpose of this research programme is to establish the effectiveness and efficiency of the process in different soils. The aim of this study is to assess the capability of electroosmotic treatment in a range of composite soils. The combined electroosmotic and preloading equipment developed by Nizar and Clarke (2013) was used, with an octagonal array of anodes surrounding a single cathode in a nominal 250 mm diameter, 300 mm deep cylinder of soil and an 80 mm anode-to-cathode distance. Copper coiled springs were used as electrodes to allow the soil to consolidate either due to an external vertical applied load or due to electroosmosis. The equipment was modified to allow the temperature to be monitored during the test. Electroosmotic tests were performed on China Clay Grade E kaolin and calcium bentonite (Bentonex CB) mixed with sand fraction C (BS 1881 part 131) at different ratios by weight (0, 23, 33, 50 and 67%) and subjected to applied voltages of 5, 10, 15 and 20 V. The soil slurry was prepared by mixing the dry soil with water to 1.5 times the liquid limit of the soil mixture. The mineralogical and geotechnical properties of the tested soils were measured before the electroosmosis treatment began. In the electroosmosis cell tests, the settlement, expelled water, variation of electrical current and applied voltage, and the generated heat were monitored over the test duration for 24 osmotic tests. Water content was measured at the end of each test. The electroosmotic tests are divided into three phases. In Phase 1, 15 kPa was applied to simulate a working platform and produce a uniform soil which had been deposited as a slurry. 50 kPa was used in Phase 3 to simulate a surcharge load. The electroosmotic treatment was performed only during Phase 2, where a constant voltage was applied through the electrodes in addition to the 15 kPa pressure. This phase was stopped when no further water was expelled from the cell, indicating that the electroosmotic process had ceased either because of anode degradation or because the flow due to the hydraulic gradient exactly balanced the electroosmotic flow, resulting in no net flow. Control tests for each soil mixture were carried out to assess the behaviour of the soil samples subjected only to an increase of vertical pressure, namely 15 kPa in Phase 1 and 50 kPa in Phase 3. Analysis of the experimental results from this study showed a significant dewatering effect on the soil slurries. The water discharged by the electroosmotic treatment process decreased as the sand content increased. Soil temperature increased significantly when electrical power was applied and dropped when the applied DC power was turned off or when the electrodes degraded. The highest increase in temperature was found in the pure clays at the higher applied voltages after about 8 hours of electroosmotic treatment.
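
For background only (this relation is standard geotechnical practice, not a result of the paper), the dewatering flow driven by electroosmosis is commonly written in a Darcy-like form, with $k_e$ the coefficient of electroosmotic permeability and $i_e$ the applied electrical gradient; the ratio of $k_e$ to the hydraulic permeability is the "electroosmosis permeability ratio" referred to in the keywords:

$$ q_e = k_e\, i_e\, A, \qquad i_e = \frac{\Delta V}{L} $$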

Keywords: electrokinetic treatment, electrical conductivity, electroosmotic consolidation, electroosmosis permeability ratio

Procedia PDF Downloads 150
2566 Optimizing Fire Tube Boiler Design for Efficient Saturated Steam Production: A Cost-Minimization Approach

Authors: Yoftahe Nigussie Worku

Abstract:

This report presents a project focused on the design of a fire tube boiler tailored for the efficient generation of saturated steam. The overarching objective is to produce 2000 kg/h of saturated steam at a design pressure of 12 bar through the development of an advanced fire tube boiler. The design is crafted to harmonize cost-effectiveness and parameter refinement, with a keen emphasis on the selection of materials for component parts, construction materials, and production methods throughout the analytical phases. The analytical process involves iterative calculations, utilizing pertinent formulas to optimize design parameters, including the selection of tube diameters and overall heat transfer coefficients. The boiler configuration incorporates two passes, a strategic choice influenced by tube and shell size considerations. The utilization of heavy fuel oil no. 6, with a higher heating value of 44000 kJ/kg and a lower heating value of 41300 kJ/kg, results in a fuel consumption of 140.37 kg/hr. The boiler achieves a heat output of 1610 kW with an efficiency rating of 85.25%. The fluid flow pattern within the boiler adopts a cross-flow arrangement, strategically chosen for its inherent advantages. Internally, the welding of the tube sheet to the shell, secured by gaskets and welds, ensures structural integrity. The shell design adheres to the European Standard code sections for pressure vessels, encompassing considerations for weight, supplementary accessories (lifting lugs, openings, ends, manhole), and detailed assembly drawings. This research represents a significant stride in optimizing fire tube boiler technology, balancing efficiency and safety considerations in the pursuit of enhanced saturated steam production.
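
As a quick consistency check of the reported figures (under the assumption, not stated in the abstract, that the 1610 kW refers to the fuel heat release evaluated on a lower-heating-value basis):

$$ \dot{Q} = \dot{m}_{fuel}\times LHV = \frac{140.37\ \text{kg/h}\times 41300\ \text{kJ/kg}}{3600\ \text{s/h}} \approx 1610\ \text{kW} $$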

Keywords: fire tube, saturated steam, material selection, efficiency

Procedia PDF Downloads 58
2565 A Variational Reformulation for the Thermomechanically Coupled Behavior of Shape Memory Alloys

Authors: Elisa Boatti, Ulisse Stefanelli, Alessandro Reali, Ferdinando Auricchio

Abstract:

Thanks to their unusual properties, shape memory alloys (SMAs) are good candidates for advanced applications in a wide range of engineering fields, such as automotive, robotics, civil, biomedical and aerospace engineering. In the last decades, the ever-growing interest in such materials has boosted several research studies aimed at modeling their complex nonlinear behavior in an effective and robust way. Since the constitutive response of SMAs is strongly thermomechanically coupled, the non-isothermal evolution of the material must be taken into consideration. The present study considers an existing three-dimensional phenomenological model for SMAs, able to reproduce the main SMA properties while maintaining a simple, user-friendly structure, and proposes a variational reformulation of the full non-isothermal version of the model. While the considered model has been thoroughly assessed in an isothermal setting, the proposed formulation makes it possible to take into account the full non-isothermal problem. In particular, the reformulation is inspired by the GENERIC (General Equations for Non-Equilibrium Reversible-Irreversible Coupling) formalism and is based on a generalized gradient flow of the total entropy, related to thermal and mechanical variables. Such a phrasing of the model is new and allows for a discussion of the model from both a theoretical and a numerical point of view. Moreover, it directly implies the dissipativity of the flow. A semi-implicit time-discrete scheme is also presented for the fully coupled thermomechanical system and is proven to be unconditionally stable and convergent. The corresponding algorithm is then implemented, under a space-homogeneous temperature field assumption, and tested under different conditions. The core of the algorithm is composed of a mechanical subproblem and a thermal subproblem. The iterative scheme is solved by a generalized Newton method. Numerous uniaxial and biaxial tests are reported to assess the performance of the model and algorithm, including variable imposed strain, strain rate, heat exchange properties, and external temperature. In particular, the heat exchange with the environment is the only source of rate-dependency in the model. The reported curves clearly display the interdependence between phase transformation strain and material temperature. The full thermomechanical coupling makes it possible to reproduce the exothermic and endothermic effects during forward and backward phase transformation, respectively. The numerical tests have thus demonstrated that the model can appropriately reproduce the coupled SMA behavior under different loading conditions and rates. Moreover, the algorithm has proved effective and robust. Further developments are being considered, such as the extension of the formulation to the finite-strain setting and the study of the boundary value problem.
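
Purely as a schematic illustration of the structure invoked here (the specific state variables, entropy functional and dissipation potential of the paper are not reproduced), a generalized gradient flow of the total entropy $S$ for a state $z$, with a convex dissipation potential $\Psi$, and one possible semi-implicit time step of size $\tau$ can be written as

$$ \partial_{\dot z}\Psi(z,\dot z) \ni \mathrm{D}_z S(z), \qquad \partial_{\dot z}\Psi\!\left(z^n,\frac{z^{n+1}-z^n}{\tau}\right) \ni \mathrm{D}_z S(z^{n+1}). $$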

Keywords: generalized gradient flow, GENERIC formalism, shape memory alloys, thermomechanical coupling

Procedia PDF Downloads 208
2564 Analysis of Cell Cycle Status in Radiation Non-Targeted Hepatoma Cells Using Flow Cytometry: Evidence of Dose Dependent Response

Authors: Sharmi Mukherjee, Anindita Chakraborty

Abstract:

Cellular irradiation incites complex responses, including arrest of cell cycle progression. This article focuses on the effects of radiation on the cell cycle status of radiation non-targeted cells. Human hepatoma HepG2 cells were exposed to increasing doses of γ radiation (1, 2, 4, 6 Gy), and their cell culture media were transferred to non-targeted HepG2 cells cultured in other Petri plates. These radiation non-targeted cells cultured in the ICCM (irradiated cell conditioned media) were the bystander cells on which cell cycle analysis was performed using flow cytometry. An apparent decrease in the distribution of bystander cells at the G0/G1 phase was observed with increasing radiation doses up to 4 Gy, representing a linear relationship. This was accompanied by a gradual increase in the cellular distribution at the G2/M phase. Interestingly, the numbers of cells in the G2/M phase at 1 and 2 Gy irradiation were not significantly different from each other. However, the percentage of G2 phase cells at the 4 and 6 Gy doses was significantly higher than at the 2 Gy dose, indicating the IC50 dose to be between 2 and 4 Gy. Cell cycle arrest is an indirect indicator of genotoxic damage in cells. In this study, bystander stress signals transmitted through the cell culture media of irradiated cells disseminated the radiation-induced DNA damage to the non-targeted cells, which resulted in arrest of cell cycle progression at the G2/M checkpoint. This implies that the actual radiobiological effects represent a penumbra, with effects encompassing a larger area than the actual beam. This article highlights the existence of genotoxic damage as a bystander effect of γ rays in human hepatoma cells, as shown by cell cycle analysis, and opens up avenues for the appraisal of bystander stress communications between tumor cells. Understanding of the underlying signaling mechanisms can be exploited to maximize the damaging effects of radiation with a minimum dose and thus has therapeutic applications.

Keywords: bystander effect, cell cycle, genotoxic damage, hepatoma

Procedia PDF Downloads 172
2563 Market-Driven Process of Brain Circulation in Knowledge Services Industry in Sri Lanka

Authors: Panagodage Janaka Sampath Fernando

Abstract:

Brain circulation has become a buzzword in the skilled migration literature. However, promoting brain circulation, i.e. the return of skilled migrants, is challenging. Success stories in Asia, for instance in Taiwan and China, are the results of rigorous policy interventions by the respective governments. Nonetheless, the same policy mix has failed in other countries, making it questionable to attribute the success of brain circulation to the policy interventions per se. This paper seeks to answer whether the success of brain circulation within the Knowledge Services Industry (KSI) in Sri Lanka is a policy-driven or a market-driven process. A mixed-methods approach, combining case study and survey methods, was employed. Qualitative data were derived from ten case studies of returned entrepreneurs, whereas quantitative data were generated from a self-administered survey of 205 returned skilled migrants (returned skilled employees and entrepreneurs) within the KSI. Pull factors have driven the current flow of brain circulation within the KSI, although, to a lesser extent, push factors have also had an influence. The foundation stone of the industry was laid by a group of returned entrepreneurs, and the subsequent growth of the industry has attracted returning skilled employees. The Sri Lankan government has not actively implemented the reverse brain drain model; however, it has played a passive role by creating a peaceful and healthy environment for the industry. Therefore, in contrast to the other stories, brain circulation within the KSI has emerged as a market-driven process with minimal government intervention. Entrepreneurs play the main role in a market-driven process of brain circulation, and it is free from the inherent limitations of the reverse brain drain model, such as discriminating against non-migrants and generating a sudden flow of low-skilled migrants. Thus, to experience successful brain circulation, developing countries should promote returned entrepreneurs by creating opportunities in knowledge-based industries.

Keywords: brain circulation, knowledge services industry, return migration, Sri Lanka

Procedia PDF Downloads 262
2562 Role of Vitamin-D in Reducing Need for Supplemental Oxygen Among COVID-19 Patients

Authors: Anita Bajpai, Sarah Duan, Ashlee Erskine, Shehzein Khan, Raymond Kramer

Abstract:

Introduction: This research focuses on exploring the beneficial effects, if any, of Vitamin-D in reducing the need for supplemental oxygen among hospitalized COVID-19 patients. Two questions are investigated: Q1) Does having a healthy level of baseline Vitamin-D 25-OH (≥ 30 ng/ml) help, and Q2) does administering Vitamin-D therapy after the fact, during inpatient hospitalization, help? Methods/Study Design: This is a comprehensive, retrospective, observational study of all inpatients at RUHS from March through December 2020 who tested positive for COVID-19 based on real-time reverse transcriptase-polymerase chain reaction assay of nasal and pharyngeal swabs and rapid assay antigen test. To address Q1, we looked at all N1=182 patients whose baseline plasma Vitamin-D 25-OH was known and who needed supplemental oxygen. Of these, a total of 121 patients had a healthy Vitamin-D level of ≥ 30 ng/ml, while the remaining 61 patients had a low or borderline (≤ 29.9 ng/ml) level. Similarly, for Q2, we looked at a total of N2=893 patients who were given supplemental oxygen, of which 713 were not given Vitamin-D and 180 were given Vitamin-D therapy. The numerical value of the maximum oxygen flow rate (dependent variable) administered was recorded for each patient. The mean values and associated standard deviations for each group were calculated. These two sets of independent data served as the basis for independent, two-sample t-test statistical analysis. To be accommodative of any reasonable benefit of Vitamin-D, a p-value of 0.10 (α < 10%) was set as the cutoff point for statistical significance. Results: Given the large sample sizes, the calculated statistical power for both our studies exceeded the customary norm of 80% or better (β < 0.2). For Q1, the mean value of the maximum oxygen flow rate for the group with a healthy baseline level of Vitamin-D was 8.6 L/min vs. 12.6 L/min for those with low or borderline levels, yielding a p-value of 0.07 (p < 0.10), with the conclusion that those with a healthy level of baseline Vitamin-D needed statistically significantly lower levels of supplemental oxygen. For Q2, the mean value of the maximum oxygen flow rate for those not administered Vitamin-D was 12.5 L/min vs. 12.8 L/min for those given Vitamin-D, yielding a p-value of 0.87 (p > 0.10). We therefore concluded that there was no statistically significant difference in the use of oxygen therapy between those who were or were not administered Vitamin-D after the fact in the hospital. Discussion/Conclusion: We found that patients who had healthy levels of Vitamin-D at baseline needed statistically significantly lower levels of supplemental oxygen. Vitamin-D is well documented, including in a recent article in the Lancet, for its anti-inflammatory role as an adjuvant in the regulation of cytokines and immune cells. Interestingly, we found no statistically significant advantage to giving Vitamin-D to hospitalized patients. It may be a case of “too little too late”. A randomized clinical trial reported in JAMA also did not find any reduction in the hospital stay of patients given Vitamin-D. Such conclusions come with a caveat that any delayed marginal benefits may not have materialized promptly in the presence of a significant inflammatory condition. Since Vitamin-D is a low-cost, low-risk option, it may still be useful on an inpatient basis until more definitive findings are established.
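
A minimal sketch of the two-sample t-test procedure for Q1 is shown below. The group means and sizes are those reported above, but the standard deviations are hypothetical placeholders (they are not given in the abstract), so the printed p-value is illustrative only.

```python
from scipy.stats import ttest_ind_from_stats

# Reported group means and sizes for Q1 (maximum O2 flow rate, L/min).
# The standard deviations are HYPOTHETICAL -- this only illustrates the test.
t_stat, p_value = ttest_ind_from_stats(
    mean1=8.6,  std1=9.0,  nobs1=121,   # healthy baseline Vitamin-D (>= 30 ng/ml)
    mean2=12.6, std2=14.0, nobs2=61,    # low/borderline baseline Vitamin-D
    equal_var=False)                    # Welch's t-test for unequal variances
print(t_stat, p_value, p_value < 0.10)  # compare against the study's alpha = 0.10
```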

Keywords: COVID-19, vitamin-D, supplemental oxygen, vitamin-D in primary care

Procedia PDF Downloads 137
2561 A CORDIC Based Design Technique for Efficient Computation of DCT

Authors: Deboraj Muchahary, Amlan Deep Borah, Abir J. Mondal, Alak Majumder

Abstract:

A discrete cosine transform (DCT) is described, and a technique to compute it using the fast Fourier transform (FFT) is developed. In this work, the DCT of a finite-length sequence is obtained by incorporating the CORDIC methodology in a radix-2 FFT algorithm. The proposed methodology is simple to comprehend and maintains a regular structure, thereby reducing computational complexity. DCTs are used extensively in the area of digital signal processing for the purpose of pattern recognition. Hence, efficient computation of the DCT while maintaining a transparent design flow is highly desirable.
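
As a software illustration of the DCT-from-FFT mapping underlying such designs (Makhoul's even-odd reordering; the CORDIC stages of the paper, which would replace the complex twiddle-factor rotations inside the FFT hardware, are not modelled here), a minimal sketch:

```python
import numpy as np
from scipy.fft import fft, dct

def dct2_via_fft(x):
    """DCT-II of a real sequence computed through an FFT of the even-odd
    reordered input (Makhoul's method). NumPy/SciPy's FFT stands in for the
    radix-2 FFT block that a CORDIC-based design would implement in hardware."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    v = np.concatenate([x[::2], x[1::2][::-1]])   # even samples, then reversed odd samples
    V = fft(v)
    k = np.arange(N)
    return 2.0 * np.real(np.exp(-1j * np.pi * k / (2 * N)) * V)

x = np.random.rand(8)
print(np.allclose(dct2_via_fft(x), dct(x, type=2)))  # True: matches SciPy's unnormalized DCT-II
```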

Keywords: DCT, DFT, CORDIC, FFT

Procedia PDF Downloads 458
2560 Effects of Umbilical Cord Clamping on Puppies Neonatal Vitality

Authors: Maria L. G. Lourenço, Keylla H. N. P. Pereira, Viviane Y. Hibaru, Fabiana F. Souza, Joao C. P. Ferreira, Simone B. Chiacchio, Luiz H. A. Machado

Abstract:

In veterinary medicine, the standard procedure during a cesarean section is clamping the umbilical cord immediately after birth. In human neonates, when the umbilical cord is kept intact after birth, blood continues to flow from the cord to the newborn, but this procedure may prove difficult in dogs due to the shorter umbilical cord and the number of newborns in the litter. However, a possible detachment of the placenta while keeping the umbilical cord intact may allow the residual blood to flow to the neonate. This study compared the effects on neonatal vitality of clamping versus not clamping the umbilical cord of dogs born through cesarean section, assessing them through Apgar and reflex scores. Fifty puppies delivered from 16 bitches were randomly allocated to receive clamping of the umbilical cord immediately (n=25) or to have clamping delayed until the onset of breathing (n=25). The neonates were assessed during the first five min of life and once again 10 min after the first assessment. The differences observed between the two moments were significant (p < 0.01) for both the Apgar and reflex scores. The differences observed between the groups (clamped vs. not clamped) were not significant for the Apgar score at the 1st moment (p=0.1), but were significant (p < 0.01) at the 2nd moment in favour of the non-clamped group, as well as significant (p < 0.05) for the reflex score at the 1st and 2nd moments, revealing higher neonatal vitality in the non-clamped group. The differences observed between the moments (1st vs. 2nd) of each group were significant (p < 0.01), revealing higher neonatal vitality at the 2nd moment. In the non-clamped group, after removing the neonates together with the umbilical cord and the placenta, we observed that the umbilical cords were full of blood at the time of birth and later became whitish and collapsed, demonstrating the blood transfer. The results suggest that keeping the umbilical cord intact for at least three minutes after the onset of breathing is not detrimental and may contribute to increased neonate vitality in puppies delivered by cesarean section.

Keywords: puppy vitality, newborn dog, cesarean section, Apgar score

Procedia PDF Downloads 136
2559 Study on Planning of Smart GRID Using Landscape Ecology

Authors: Sunglim Lee, Susumu Fujii, Koji Okamura

Abstract:

The smart grid is a new approach to the electric power grid that uses information and communications technology to control the grid. A smart grid provides real-time control of the electric power grid, controlling the direction of power flow or the timing of the flow. Control devices are installed on the power lines of the electric power grid to implement the smart grid. The number of control devices should be determined in relation to the area one control device covers and the cost associated with the control devices. One approach to determining the number of control devices is to use the data on the surplus power generated by home solar generators. In current implementations, the surplus power is sent all the way to the power plant, which may cause power loss. To reduce the power loss, the surplus power may be sent to a control device and then sent from the control device to where the power is needed. Under the assumption that the control devices are installed on a lattice of equal-size squares, our goal is to determine the optimal spacing between the control devices, where the power sharing area (the area covered by one control device) is kept small to avoid power loss, while at the same time the power sharing area is big enough that no surplus power is wasted. To achieve this goal, a simulation using a landscape ecology method is conducted on a sample area. First, an aerial photograph of the land of interest is turned into a mosaic map where each area is colored according to the ratio of the amount of power production to the amount of power consumption in the area. The amount of power consumption is estimated according to the characteristics of the buildings in the area. The power production is calculated from the total area of the roofs shown in the aerial photograph, assuming that solar panels are installed on all the roofs. The mosaic map is colored in three colors, representing producer, consumer, and neither. We started with a mosaic map with a 100 m grid size, and the grid size was grown until there was no red grid. One control device is installed on each grid, so that the grid is the area which the control device covers. As a result of this simulation, we obtained 350 m as the optimal spacing between the control devices that makes effective use of the surplus power for the sample area.
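
A minimal sketch of the grid-growing idea described above is given below. It is a simplification, not the authors' GIS workflow: the rasters, cell size and growth steps are hypothetical, and a "red" block simply means a block whose consumption exceeds its production.

```python
import numpy as np

def optimal_grid_size(production, consumption, sizes_m, cell_m=10):
    """Aggregate fine-resolution production and consumption rasters into square
    blocks of increasing size and return the smallest block size at which no
    block is a net consumer ("red")."""
    for g in sizes_m:
        n = g // cell_m                      # block edge length in raster cells
        rows, cols = production.shape
        red = False
        for r in range(0, rows - n + 1, n):
            for c in range(0, cols - n + 1, n):
                prod = production[r:r+n, c:c+n].sum()
                cons = consumption[r:r+n, c:c+n].sum()
                if cons > prod:              # block cannot cover its own demand
                    red = True
                    break
            if red:
                break
        if not red:
            return g                         # first (smallest) size with no red block
    return None

# Hypothetical 1 km x 1 km rasters at 10 m resolution (values are illustrative only)
rng = np.random.default_rng(0)
production = rng.uniform(0, 5, (100, 100))
consumption = rng.uniform(0, 4, (100, 100))
print(optimal_grid_size(production, consumption, sizes_m=range(100, 1001, 50)))
```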

Keywords: landscape ecology, IT, smart grid, aerial photograph, simulation

Procedia PDF Downloads 430
2558 Unveiling Drought Dynamics in the Cuneo District, Italy: A Machine Learning-Enhanced Hydrological Modelling Approach

Authors: Mohammadamin Hashemi, Mohammadreza Kashizadeh

Abstract:

Droughts pose a significant threat to sustainable water resource management, agriculture, and socioeconomic sectors, particularly under climate change. This study investigates drought simulation using rainfall-runoff modelling in the Cuneo district, Italy, over the past 60-year period. The study leverages the TUW model, a lumped conceptual rainfall-runoff model with a semi-distributed operation capability. Similar in structure to the widely used Hydrologiska Byråns Vattenbalansavdelning (HBV) model, the TUW model operates on daily timesteps for input and output data specific to each catchment. It incorporates essential routines for snow accumulation and melting, soil moisture storage, and streamflow generation. Discharge data from multiple catchments within the Cuneo district form the basis for thorough model calibration employing the Kling-Gupta Efficiency (KGE) metric. A crucial metric for reliable drought analysis is one that can accurately represent low-flow events during drought periods. This ensures that the model provides a realistic picture of water availability during these critical times. Subsequent validation of monthly discharge simulations thoroughly evaluates overall model performance. Beyond model development, the investigation delves into drought analysis using the robust Standardized Runoff Index (SRI). This index allows for precise characterization of drought occurrences within the study area. A meticulous comparison of observed and simulated discharge data is conducted, with particular focus on the low-flow events that characterize droughts. Additionally, the study explores the complex interplay between land characteristics (e.g., soil type, vegetation cover) and climate variables (e.g., precipitation, temperature) that influence the severity and duration of hydrological droughts. The study's findings demonstrate successful calibration of the TUW model across most catchments, achieving commendable model efficiency. Comparative analysis between simulated and observed discharge data reveals significant agreement, especially during critical low-flow periods. This agreement is further supported by the Pareto coefficient, a statistical measure of goodness-of-fit. The drought analysis provides critical insights into the duration, intensity, and severity of drought events within the Cuneo district. This newfound understanding of spatial and temporal drought dynamics offers valuable information for water resource management strategies and drought mitigation efforts. This research deepens our understanding of drought dynamics in the Cuneo region. Future research directions include refining hydrological modelling techniques and exploring future drought projections under various climate change scenarios.
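
For reference, the Kling-Gupta Efficiency used for calibration can be computed as below; this follows the standard Gupta et al. (2009) formulation, and the discharge arrays are placeholders rather than data from the study.

```python
import numpy as np

def kge(obs, sim):
    """Kling-Gupta Efficiency: combines correlation (r), variability ratio
    (alpha) and bias ratio (beta); KGE = 1 indicates a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

# Placeholder daily discharge series (m^3/s) -- illustration only
obs = np.array([5.2, 4.8, 4.1, 3.9, 6.5, 8.0, 7.2, 5.5])
sim = np.array([5.0, 4.9, 4.3, 3.5, 6.0, 7.6, 7.5, 5.9])
print(kge(obs, sim))
```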

Keywords: hydrologic extremes, hydrological drought, hydrological modelling, machine learning, rainfall-runoff modelling

Procedia PDF Downloads 25
2557 Identifying Game Variables from Students’ Surveys for Prototyping Games for Learning

Authors: N. Ismail, O. Thammajinda, U. Thongpanya

Abstract:

Games-based learning (GBL) has become increasingly important in teaching and learning. This paper explains the first two phases (analysis and design) of a GBL development project, ending up with a prototype design based on students' and teachers' perceptions. The two phases are part of a full-cycle GBL project aiming to help secondary school students in Thailand in their study of Comprehensive Sex Education (CSE). In the course of the study, we invited 1,152 students to complete questionnaires and interviewed 12 secondary school teachers in focus groups. This paper found that GBL can serve students in their learning about CSE, enabling them to gain an understanding of their sexuality, develop skills, including critical thinking skills, and interact with others (peers, teachers, etc.) in a safe environment. The objectives of this paper are to outline the development of GBL variables from the research question(s) into the developers' flow chart, to be responsive to the GBL beneficiaries' preferences and expectations, and to help in answering the research questions. This paper details the steps applied to generate GBL variables that can feed into a game flow chart to develop a GBL prototype. In our approach, we detailed two models: (1) the Game Elements Model (GEM) and (2) the Game Object Model (GOM). There are three outcomes of this research. First, to achieve the objectives and benefits of GBL in learning, game design has to start with the research question(s) and the challenges to be resolved as research outcomes. Second, aligning the educational aims with engaging GBL end users (students) within the data collection phase, to inform the game prototype with the game variables, is essential to address the answer/solution to the research question(s). Third, for efficient GBL to bridge the gap between pedagogy and technology, and in order to answer the research questions via technology (i.e. GBL) and to minimise the isolation between the pedagogist “P” and the technologist “T”, several meetings and discussions need to take place within the team.

Keywords: games-based learning, engagement, pedagogy, preferences, prototype

Procedia PDF Downloads 156
2556 T Cell Immunity Profile in Pediatric Obesity and Asthma

Authors: Mustafa M. Donma, Erkut Karasu, Burcu Ozdilek, Burhan Turgut, Birol Topcu, Burcin Nalbantoglu, Orkide Donma

Abstract:

The mechanisms underlying the association between obesity and asthma may be related to a decreased immunological tolerance induced by a defective function of regulatory T cells (Tregs). The aim of this study is to establish the potential link between these diseases and CD4+ CD25+ FoxP3+ Tregs, as well as T helper cells (Ths), in children. This is a prospective case-control study. Obese (n:40), asthmatic (n:40), asthmatic obese (n:40), and healthy children (n:40), who did not have any acute or chronic diseases, were included in this study. Obese children were evaluated according to WHO criteria. Asthmatic patients were chosen based on GINA criteria. Parents were asked to fill out the questionnaire. Informed consent forms were obtained. Blood samples were stained for CD4+, CD25+ and FoxP3+ in order to determine Tregs and Ths by the flow cytometric method. Statistical analyses were performed. p≤0.05 was chosen as the significance threshold. Tregs, which exhibit an anti-inflammatory nature, were significantly lower in the obese (0.16%; p≤0.001), asthmatic (0.25%; p≤0.01) and asthmatic obese (0.29%; p≤0.05) groups than in the control group (0.38%). Ths were higher in the asthma group than in the control (p≤0.01) and obese (p≤0.001) groups. T cell immunity plays important roles in obesity and asthma pathogeneses. The decreased numbers of Tregs found in obese, asthmatic and asthmatic obese children may help to elucidate some questions in the pathophysiology of these diseases. For HOMA-IR levels, no significant difference was noted between the control and obese groups, but statistically higher values were found for obese asthmatics. The values obtained in all groups were below the critical cut-off points. This finding makes the statistically significant difference observed between the Tregs of the obese, asthmatic, obese asthmatic, and control groups much more valuable. These findings will be useful in the diagnosis and treatment of these disorders, and future studies are needed. The production and propagation of Tregs may be promising in alternative asthma and obesity treatments.

Keywords: asthma, flow cytometry, pediatric obesity, T cells

Procedia PDF Downloads 332
2555 Determining Optimum Locations for Runoff Water Harvesting in W. Watir, South Sinai, Using RS, GIS, and WMS Techniques

Authors: H. H. Elewa, E. M. Ramadan, A. M. Nosair

Abstract:

Rainfall water harvesting is considered an important tool for overcoming water scarcity in arid and semi-arid regions. Wadi Watir, in the southeastern part of the Sinai Peninsula, is considered one of the main and active basins in the Gulf of Aqaba drainage system. It is characterized by steep hills consisting mainly of impermeable rocks, whereas the streambeds are covered by a highly permeable mixture of gravel and sand. A comprehensive approach involving the integration of geographic information systems, remote sensing and watershed modeling was followed to identify the RWH capability in this area. Eight thematic layers, viz. volume of annual flood, overland flow distance, maximum flow distance, rock or soil infiltration, drainage frequency density, basin area, basin slope and basin length, were used as a multi-parametric decision support system for conducting weighted spatial probability models (WSPMs) to determine the potential areas for RWH. The WSPM maps classified the area into five RWH potentiality classes, ranging from very low to very high. The three WSPM scenarios performed for W. Watir produced identical results for the high and very high RWH potentiality classes, which are the most suitable ones for conducting surface water harvesting techniques. There is also a reasonable match among the three scenarios with respect to the areas of moderate, low and very low runoff harvesting potentiality. The WSPM results have shown that the high and very high classes, which are the most suitable for RWH, represent approximately 40.23% of the total area of the basin. Accordingly, several locations were selected for the establishment of water harvesting dams and cisterns to improve the water conditions and living environment in the study area.
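
A minimal sketch of a WSPM-style weighted overlay is shown below. The weights, normalization and equal-interval class breaks are hypothetical; the paper's actual ranking and weighting scheme is not reproduced here.

```python
import numpy as np

def wspm_potentiality(layers, weights, n_classes=5):
    """Normalize each thematic layer to 0-1, combine them with weights, and
    slice the weighted sum into potentiality classes (1 = very low ... 5 = very high)."""
    score = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, w in weights.items():
        layer = layers[name].astype(float)
        norm = (layer - layer.min()) / (layer.max() - layer.min() + 1e-12)  # rescale to 0-1
        score += w * norm
    score /= sum(weights.values())
    edges = np.linspace(0.0, 1.0, n_classes + 1)
    return np.digitize(score, edges[1:-1]) + 1                              # class 1..5

# Hypothetical rasters for two of the eight criteria (illustration only)
rng = np.random.default_rng(0)
layers = {"annual_flood_volume": rng.random((50, 50)),
          "basin_slope": rng.random((50, 50))}
weights = {"annual_flood_volume": 0.6, "basin_slope": 0.4}
print(np.unique(wspm_potentiality(layers, weights)))
```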

Keywords: Sinai, Wadi Watir, remote sensing, geographic information systems, watershed modeling, runoff water harvesting

Procedia PDF Downloads 344
2554 Investigation of External Pressure Coefficients on Large Antenna Parabolic Reflector Using Computational Fluid Dynamics

Authors: Varun K, Pramod B. Balareddy

Abstract:

Estimation of wind forces plays a significant role in the design of large antenna parabolic reflectors. The gain of the antenna system at higher frequencies is very sensitive to reflector surface accuracy. Hence, accurate estimation of wind forces becomes important, as it is the primary input for the design and analysis of the reflector system. In the present work, numerical simulation of the wind flow using Computational Fluid Dynamics (CFD) software is used to investigate the external pressure coefficients. An extensive comparative study has been made between the CFD results and the published wind tunnel data for different wind angles of attack (α) acting over the concave and convex surfaces, respectively. Flow simulations using CFD are carried out to estimate the coefficients of drag, lift and moment for the parabolic reflector. Pressure coefficients (Cp) over the front and rear faces of the reflector are extracted to study the net pressure variations. These resultant pressure variations are compared with the published wind tunnel data for different angles of attack. It was observed from the CFD simulations that both the convex and concave faces of the reflector system experience a band of pressure variations for the positive and negative angles of attack, respectively. In the published wind tunnel data, pressure variations over convex surfaces are assumed to be uniform, and vice versa. Chordwise and spanwise pressure variations were calculated and compared with the published experimental data. In the present work, it was observed that the maximum pressure coefficients for α ranging from +30° to -90° and for α=+90° were lower, while for α ranging from +45° to +75° the maximum pressure coefficients were higher compared to the wind tunnel data. This variation is due to the non-uniform pressure distribution observed over the front and back faces of the reflector. Variations in Cd, Cl and Cm over α=+90° to α=-90° were in close agreement with the experimental data.
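
For reference, the non-dimensional coefficients discussed above follow the standard definitions (the reference area $A$ and reference length $c$ used in the paper are not specified here):

$$ C_p=\frac{p-p_\infty}{\tfrac{1}{2}\rho U_\infty^2},\qquad C_d=\frac{F_D}{\tfrac{1}{2}\rho U_\infty^2 A},\qquad C_l=\frac{F_L}{\tfrac{1}{2}\rho U_\infty^2 A},\qquad C_m=\frac{M}{\tfrac{1}{2}\rho U_\infty^2 A\, c} $$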

Keywords: angle of attack, drag coefficient, lift coefficient, pressure coefficient

Procedia PDF Downloads 240
2553 Superparamagnetic Sensor with Lateral Flow Immunoassays as Platforms for Biomarker Quantification

Authors: M. Salvador, J. C. Martinez-Garcia, A. Moyano, M. C. Blanco-Lopez, M. Rivas

Abstract:

Biosensors play a crucial role in the detection of molecules nowadays due to their advantages of user-friendliness, high selectivity, real-time analysis and in-situ applicability. Among them, lateral flow immunoassays (LFIAs) stand out among technologies for point-of-care bioassays with outstanding characteristics such as affordability, portability and low cost. They have been widely used for the detection of a vast range of biomarkers, which include not only proteins but also nucleic acids and even whole cells. Bringing together the required characteristics mentioned before, our research group has developed a biosensor to detect biomolecules. Superparamagnetic nanoparticles (SPNPs) together with LFIAs play the fundamental roles. SPNPs are detected by their interaction with a high-frequency current flowing in a printed micro track. By means of the instant and proportional variation of the impedance of this track provoked by the presence of the SPNPs, a quantitative and rapid measurement of the number of particles can be obtained. This mode of detection requires no external magnetic field, which reduces the device complexity. On the other hand, the major limitation of LFIAs is that they are only qualitative or semiquantitative when traditional gold or latex nanoparticles are used as color labels. Moreover, the necessity of constant ambient conditions to obtain reproducible results, the exclusive detection of the nanoparticles on the surface of the membrane, and the short durability of the signal are drawbacks that can be advantageously overcome with the design of magnetically labeled LFIAs. The approach followed was to coat the SPNPs, via chemical bonds, with a specific monoclonal antibody that targets the protein under consideration. Then, a sandwich-type immunoassay was prepared by printing onto the nitrocellulose membrane strip a second antibody against a different epitope of the protein (test line) and an IgG antibody (control line). When the sample flows along the strip, the SPNP-labeled proteins are immobilized at the test line, which provides a magnetic signal as described before. Preliminary results using this practical combination for the detection and quantification of the prostate-specific antigen (PSA) show the validity and consistency of the technique in the clinical range, where a PSA level of 4.0 ng/mL is the established upper normal limit. Moreover, a LOD of 0.25 ng/mL was calculated with a confidence factor of 3 according to the IUPAC Gold Book definition. Its versatility has also been proved with the detection of other biomolecules such as troponin I (a cardiac injury biomarker) and histamine.
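
For reference, the IUPAC-style detection limit with a factor of 3 mentioned above is conventionally obtained from the standard deviation of the blank signal, $s_{blank}$, and the slope $S$ of the calibration curve (the blank statistics and slope of this particular sensor are not reported in the abstract):

$$ LOD = \frac{3\, s_{blank}}{S} $$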

Keywords: biosensor, lateral flow immunoassays, point-of-care devices, superparamagnetic nanoparticles

Procedia PDF Downloads 218
2552 Micro-Filtration with an Inorganic Membrane

Authors: Benyamina, Ouldabess, Bensalah

Abstract:

The aim of this study is to use a membrane technique for the filtration of a coloring solution. The preparation of the micro-filtration membranes is based on a low-cost natural clay powder deposited on macro-porous ceramic supports. The micro-filtration membrane provided a very large permeation flow. Indeed, the filtration effectiveness of the membrane was proved by the total discoloration of a bromothymol blue solution with an initial concentration of 10⁻³ mg/L within the first minutes.

Keywords: the inorganic membrane, micro-filtration, coloring solution, natural clay powder

Procedia PDF Downloads 498
2551 Numerical Simulation of Production of Microspheres from Polymer Emulsion in Microfluidic Device toward Using in Drug Delivery Systems

Authors: Nizar Jawad Hadi, Sajad Abd Alabbas

Abstract:

Because of their ability to encapsulate and release drugs in a controlled manner, microspheres fabricated from polymer emulsions using microfluidic devices have shown promise for drug delivery applications. In this study, the effects of velocity, density, viscosity, and surface tension, as well as channel diameter, on microsphere generation were investigated using ANSYS Fluent software. The software was supplied with the physical properties of the polymer emulsion, such as density, viscosity and surface tension. Simulations were then performed to predict fluid flow and microsphere production and to improve the design of drug delivery applications based on changes in these parameters. The effects of the capillary and Weber numbers are also studied. The results of the study showed that the size of the microspheres can be controlled by adjusting the velocity and the diameter of the channel. Smaller microspheres resulted from narrower channel widths and higher flow rates, which could improve drug delivery efficiency, and also from lower interfacial surface tension. The viscosity and density of the polymer emulsion significantly affected the size of the microspheres, with higher viscosities and densities producing smaller microspheres. The loading and drug release properties of the microspheres created with the microfluidic technique were also predicted. The results showed that the microspheres can efficiently encapsulate drugs and release them in a controlled manner over a period of time. This is due to the high surface-area-to-volume ratio of the microspheres, which allows for efficient drug diffusion. The ability to tune the manufacturing process using factors such as velocity, density, viscosity, channel diameter, and surface tension offers a potential opportunity to design drug delivery systems with greater efficiency and fewer side effects.
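
For reference, the two dimensionless groups mentioned above are conventionally defined as follows for droplet generation (the choice of characteristic velocity $u$, length scale $d$ and phase properties $\mu$, $\rho$, $\sigma$ depends on the device and is not specified in the abstract):

$$ Ca=\frac{\mu\, u}{\sigma},\qquad We=\frac{\rho\, u^{2} d}{\sigma} $$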

Keywords: polymer emulsion, microspheres, numerical simulation, microfluidic device

Procedia PDF Downloads 53
2550 Determinants of Dividend Payout Ratio: Evidence form MENA Region

Authors: Abdul-Nasser El-Kassar, Walid Elgammal, Hisham Jawhar

Abstract:

This paper studies the determinants of the dividend payout ratio and identifies the factors affecting it. The study focuses only on the cement and construction industry within the MENA region in an attempt to isolate any incoherent behavior. The factors under consideration are: sales growth, ROE, ROA, ROS, debt to equity ratio, firm size, and free cash flow. Data were collected from official stock exchange markets in addition to annual reports. The study considered all firms that paid dividends in each of the three consecutive years from 2010 to 2012. Out of the 123 listed firms operating in the cement and construction industry in the MENA region, only 19 paid dividends in the three consecutive years 2010-12. Our sample consists of these 19 firms (57 observations), which were selected by purposive sampling. Moreover, the study uses the homogeneous subcategory within purposive sampling, since only similar firms in the construction industry were examined. The outcome of the study provides a vital insight into the determinants of the dividend payout ratio of companies in the MENA region. The results showed that the dividend payout ratio has a strong and positive relationship with return on assets and a strong but negative relationship with return on equity. On the other hand, the results detected weak relationships between the dividend payout ratio and sales growth, debt to equity ratio, firm size, and free cash flow. The study suggests that boards of directors tend to compensate shareholders and minimize the agency cost by distributing a high portion of profits in the form of dividends whenever return on equity decreases. Also, when the performance of the firm improves, and hence return on assets increases, boards of directors are more generous in distributing profits.

Keywords: dividends payout ratio, profitability, firm size, free cash flow, debt to equity ratio

Procedia PDF Downloads 347
2549 Calculation of the Supersonic Air Intake with the Optimization of the Shock Wave System

Authors: Elena Vinogradova, Aleksei Pleshakov, Aleksei Yakovlev

Abstract:

During the flight of a supersonic aircraft under various conditions (altitude, Mach number, etc.), it becomes necessary to coordinate the operating modes of the air intake and the engine. On supersonic aircraft, this is done by changing various control factors (e.g., the angle of rotation of the wedge panels). This paper investigates the possibility of using modern optimization methods to determine the optimal position of the supersonic air intake wedge panels in order to maximize the total pressure recovery coefficient. Modern software allows us to conduct auto-optimization, which determines the optimal position of the control elements of the investigated product to achieve its maximum efficiency. In this work, the flow in the supersonic aircraft inlet was investigated and the operation of the inlet flaps was optimized in a 2-D setting. This work was done using the ANSYS CFX software. The supersonic aircraft inlet is a flat, adjustable, external-compression inlet. The braking surface is made in the form of a three-stage wedge. The IOSO NM software package was chosen for optimization. The position of the panels of the input device is changed by changing the angle between the first and second steps of the three-stage wedge. The position of the remaining panels is changed automatically. Within the framework of the presented work, the position of the moving air intake panel was optimized under fixed flight conditions of the aircraft and a given engine operating mode. As a result of the numerical modeling, the distribution of total pressure losses was obtained for various cases of engine operation, depending on the incoming flow velocity and the flight altitude of the aircraft. The results make it possible to obtain the maximum total pressure recovery coefficient under the given conditions. Also, the initial geometry was set with a certain angle between the first and second wedge panels. Having performed all the calculations, as well as the subsequent optimization of the aircraft input device, it can be concluded that the initial angle was set sufficiently close to the optimal angle.
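
For context, the figure of merit being maximized, the total pressure recovery, can be illustrated with the textbook loss across a single normal shock. This is a simplification: the actual intake uses a system of oblique shocks from the three-stage wedge, whose combined recovery is the product of the individual shock ratios and requires the θ-β-M relation, which is not reproduced here.

```python
def normal_shock_recovery(M1, gamma=1.4):
    """Total pressure ratio p02/p01 across a normal shock at upstream Mach M1
    (standard compressible-flow relation); used here only to illustrate the
    total pressure recovery coefficient that the optimization maximizes."""
    a = ((gamma + 1) * M1**2 / ((gamma - 1) * M1**2 + 2)) ** (gamma / (gamma - 1))
    b = ((gamma + 1) / (2 * gamma * M1**2 - (gamma - 1))) ** (1 / (gamma - 1))
    return a * b

for M in (1.5, 2.0, 2.5):
    print(M, round(normal_shock_recovery(M), 4))  # e.g. ~0.721 at M = 2.0
```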

Keywords: optimal angle, optimization, supersonic air intake, total pressure recovery coefficient

Procedia PDF Downloads 223
2548 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics

Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere

Abstract:

Most observational data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity, thereby exhibit strong fluctuations at all time scales, and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique and part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals and consists in decomposing a signal in an auto-adaptive way into a sum of oscillating components named IMFs (Intrinsic Mode Functions), thereby acting as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, the main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap each other. To overcome this problem, J. Gilles proposed an alternative entitled “Empirical Wavelet Transform” (EWT), which consists in building a bank of filters from the segmentation of the Fourier spectrum of the original signal. The method is based on the idea utilized in the construction of both Littlewood-Paley and Meyer's wavelets. The heart of the method lies in the segmentation of the Fourier spectrum based on local maxima detection in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore makes it possible to overcome the mode-mixing problem. On the other hand, although the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, EWT does not allow the detected frequencies to be associated with a specific mode of variability as in the EMD technique. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on the coupling of the EMD and EWT techniques, which uses the spectral content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, and then the EAWD technique is presented. A comparison of the results obtained respectively by EMD, EWT and EAWD on time series of total ozone columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
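
A minimal sketch of the EWT-style spectrum segmentation step described above is given below (local maxima detection with midpoint boundaries). It is a simplification of Gilles' algorithm, and the EAWD coupling with the IMF spectra is not reproduced; the test signal is hypothetical.

```python
import numpy as np
from scipy.signal import find_peaks

def spectrum_segments(x, n_segments=5):
    """Detect local maxima of the magnitude spectrum, keep the strongest ones,
    and place segment boundaries at the midpoints between consecutive retained
    maxima, yielding a set of non-overlapping frequency segments."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x))
    peaks, props = find_peaks(mag, height=0)
    keep = peaks[np.argsort(props["peak_heights"])[-n_segments:]]  # strongest maxima
    keep = np.sort(keep)
    bounds = (freqs[keep[:-1]] + freqs[keep[1:]]) / 2.0
    return np.concatenate(([0.0], bounds, [freqs[-1]]))

# Hypothetical signal with three oscillating components plus noise
t = np.arange(2048)
x = np.sin(2*np.pi*0.01*t) + 0.5*np.sin(2*np.pi*0.07*t) + 0.3*np.sin(2*np.pi*0.2*t)
x += 0.1*np.random.default_rng(1).standard_normal(t.size)
print(spectrum_segments(x, n_segments=3))
```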

Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet

Procedia PDF Downloads 121
2547 Derivation of Human NK Cells from T Cell-Derived Induced Pluripotent Stem Cells Using Xenogeneic Serum-Free and Feeder Cell-Free Culture System

Authors: Aliya Sekenova, Vyacheslav Ogay

Abstract:

The derivation of human induced pluripotent stem cells (iPSCs) from somatic cells by direct reprogramming opens broad perspectives in regenerative medicine, as it makes it possible to develop personalized and, consequently, immunologically compatible cells for cell-based therapy. The purpose of our study was to develop a technology for producing NK cells from T cell-derived induced pluripotent stem cells (TiPSCs) for subsequent application in adoptive cancer immunotherapy. Methods: iPSCs were derived from peripheral blood T cells using Sendai virus vectors expressing Oct4, Sox2, Klf4 and c-Myc. The pluripotent characteristics of the TiPSCs were examined and confirmed by alkaline phosphatase staining, immunocytochemistry and RT-PCR analysis. For NK cell differentiation, embryoid bodies (EBs) formed from TiPSCs were cultured in xenogeneic serum-free medium containing human serum, IL-3, IL-7, IL-15, SCF and FLT3L, without using M210-B4 and AFT-024 stromal feeder cells. After differentiation, the NK cells were characterized by immunofluorescence analysis, flow cytometry and a cytotoxicity assay. Results: Here, we demonstrate for the first time that TiPSCs can effectively differentiate into functionally active NK cells without M210-B4 and AFT-024 xenogeneic stromal cells. Immunofluorescence and flow cytometry analysis showed that EB-derived cells can differentiate into a homogeneous population of NK cells expressing high levels of the specific markers CD56, CD45 and CD16. Moreover, these cells express significant levels of the activating receptors NKp44 and NKp46. In a comparative analysis, we observed that NK cells derived using the feeder-free culture system have higher killing activity against K-562 tumor cells than NK cells derived by the feeder-dependent method. We therefore think that these data will be useful for the development of large-scale production of NK cells for translation into cancer immunotherapy.

Keywords: induced pluripotent stem cells, NK cells, T cells, cell differentiation, feeder cell-free culture system

Procedia PDF Downloads 314
2546 A Review of Pharmacological Prevention of Peri-and Post-Procedural Myocardial Injury After Percutaneous Coronary Intervention

Authors: Syed Dawood Md. Taimur, Md. Hasanur Rahman, Syeda Fahmida Afrin, Farzana Islam

Abstract:

The concept of myocardial injury, although first recognized from animal studies, is now recognized as a clinical phenomenon that may result in microvascular damage, the no-reflow phenomenon, myocardial stunning, myocardial hibernation and ischemic preconditioning. The final consequence of these events is left ventricular (LV) systolic dysfunction, leading to increased morbidity and mortality. The typical clinical case of reperfusion injury occurs in acute myocardial infarction (MI) with ST-segment elevation, in which occlusion of a major epicardial coronary artery is followed by recanalization of the artery. This may occur either spontaneously or by means of thrombolysis and/or primary percutaneous coronary intervention (PCI) with efficient platelet inhibition by aspirin (acetylsalicylic acid), clopidogrel and glycoprotein IIb/IIIa inhibitors. In recent years, PCI has become a well-established technique for the treatment of coronary artery disease. PCI improves symptoms in patients with coronary artery disease, and the safety of the procedure has been increasing. However, peri- and post-procedural myocardial injury, including angiographically slow coronary flow, microvascular embolization, and elevated levels of cardiac enzymes such as creatine kinase and troponin-T and -I, has been reported even in elective cases. Furthermore, myocardial reperfusion injury at the onset of reperfusion, which causes tissue damage and cardiac dysfunction, may occur in the acute coronary syndrome. Because myocardial injury is associated with larger myocardial infarctions and a worse long-term prognosis, it is important to prevent myocardial injury during and/or after PCI in patients with coronary artery disease. To date, many studies have demonstrated that adjunctive pharmacological treatment suppresses myocardial injury and increases coronary blood flow during PCI procedures. In this review, we highlight the usefulness of pharmacological treatment in combination with PCI in attenuating myocardial injury in patients with coronary artery disease.

Keywords: coronary artery disease, percutaneous coronary intervention, myocardial injury, pharmacology

Procedia PDF Downloads 437
2545 Energy Efficiency Approach to Reduce Costs of Ownership of Air Jet Weaving

Authors: Corrado Grassi, Achim Schröter, Yves Gloy, Thomas Gries

Abstract:

Air jet weaving is the most productive, but also the most energy-consuming, weaving method. Increasing energy costs and environmental impact are a constant challenge for the manufacturers of weaving machines. Current technological developments aim at low energy costs, low environmental impact, high productivity and constant product quality. The high energy consumption of the method can be ascribed to its high demand for compressed air. An energy efficiency method is applied to the air jet weaving technology. The method identifies and classifies the main energy consumers and processes from the exergy point of view and leads to the identification of energy efficiency potentials in the weft insertion process. Starting from the design phase, energy efficiency is treated as the central requirement to be satisfied. The initial phase of the method consists of a state-of-the-art analysis of the main weft insertion components in order to prioritize the most energy-demanding components and processes. The identified major components are then investigated to reduce the high energy demand of the weft insertion process. During the interaction of the flow field from the relay nozzles with the profiled reed, only a minor part of the stream actually accelerates the weft yarn, resulting in large energy inefficiency. Different tools, such as FEM analysis, CFD simulation models and experimental analysis, are used to arrive at a more energy-efficient design of the components involved in the filling insertion. A new concept for the metal strip of the profiled reed is developed. The developed metal strip allows a reduction of the machine's energy consumption: based on a parametric and aerodynamic study, the redesigned reed transmits a higher share of the flow power to the filling yarn. The innovative reed fulfills both the requirement of raising energy efficiency and compliance with the weaving constraints.
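The scale of the inefficiency mentioned above can be illustrated with a back-of-the-envelope comparison between the kinetic power of the air jet and the power that actually propels the weft yarn. The velocities, dimensions and friction coefficient below are assumed values chosen for illustration only, not results from the study.

```python
# Compare the kinetic power carried by the air jet in the reed channel with the
# power delivered to the weft yarn (axial skin-friction drag times yarn speed).
# All values are illustrative assumptions.
import math

rho = 1.2            # air density, kg/m^3
v_air = 300.0        # assumed mean air velocity in the reed channel, m/s
v_yarn = 50.0        # assumed weft insertion speed, m/s
a_jet = 8e-6         # assumed effective jet cross-section, m^2
d, L = 2e-4, 1.0     # yarn diameter (m) and inserted length at mid-insertion (m)
cf = 0.02            # assumed axial friction coefficient of the yarn

p_jet = 0.5 * rho * a_jet * v_air ** 3                          # jet kinetic power, W
drag = 0.5 * rho * cf * math.pi * d * L * (v_air - v_yarn) ** 2  # propulsive force, N
p_yarn = drag * v_yarn                                           # power delivered to yarn, W

print(f"jet power ~ {p_jet:.0f} W, power to yarn ~ {p_yarn:.0f} W, "
      f"transfer efficiency ~ {100 * p_yarn / p_jet:.1f} %")
```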

Keywords: air jet weaving, aerodynamic simulation, energy efficiency, experimental validation, weft insertion

Procedia PDF Downloads 182
2544 Application of Hydrologic Engineering Centers and River Analysis System Model for Hydrodynamic Analysis of Arial Khan River

Authors: Najeeb Hassan, Mahmudur Rahman

Abstract:

The Arial Khan River is one of the main south-eastward outlets of the River Padma. It maintains a meandering channel along its course and is erosional in nature. The specific objective of the research is to study and evaluate the hydrological characteristics of the river by assessing changes in cross-sections, discharge, water level and velocity profiles at different stations, and to create a hydrodynamic model of the Arial Khan River. The necessary data have been collected from the Bangladesh Water Development Board (BWDB) and the Center for Environment and Geographic Information Services (CEGIS), and satellite images have been obtained from Google Earth. In this study, a hydrodynamic model of the Arial Khan River has been developed with the well-known steady open-channel flow code Hydrologic Engineering Center's River Analysis System (HEC-RAS) using field-surveyed geometric data. Cross-section properties at 22 locations along the river for the years 2011, 2013 and 2015 were also analysed. A 1-D HEC-RAS model was built using the cross-sectional data of 2015, with appropriate boundary conditions applied to run it. The model was calibrated against the peak discharge of 2015: the applicable value of Manning's roughness coefficient (n) was adjusted through the calibration process, and the value that reproduces the observed water level with acceptable accuracy was adopted for the calibrated model. The 1-D HEC-RAS model was then validated using the peak discharges from 2009-2018, comparing the modelled water levels with the collected water level data. It is observed that, owing to seasonal variation, the discharge of the river changes rapidly, and Manning's roughness coefficient (n) also changes because of vegetation growth along the river banks. This river model may serve as a tool to estimate flood extent in the future. Considering the past peak discharges, it is strongly recommended to improve the carrying capacity of the Arial Khan River to protect the surrounding areas from flash floods.
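The calibration logic can be sketched as follows: for a simplified wide rectangular section standing in for the surveyed HEC-RAS geometry, Manning's n is varied until the computed stage at the peak discharge matches the observed stage. The channel width, slope, discharge and observed depth are assumed values, and the stand-alone Manning solver is only an illustration of the trial-and-error procedure, not the HEC-RAS computation itself.

```python
# Conceptual sketch of roughness calibration against a peak-flow water level.
# Wide rectangular approximation of the section; all numbers are assumed.
import numpy as np
from scipy.optimize import brentq

B, S0 = 250.0, 1.2e-4          # assumed channel width (m) and bed slope
q_peak, h_obs = 3200.0, 6.8    # assumed peak discharge (m^3/s) and observed depth (m)

def manning_q(h, n):
    """Discharge from Manning's equation, wide rectangular approximation."""
    area, radius = B * h, h     # hydraulic radius ~ depth for a wide channel
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * S0 ** 0.5

def simulated_depth(n):
    """Depth that conveys q_peak for a given roughness n."""
    return brentq(lambda h: manning_q(h, n) - q_peak, 0.1, 30.0)

# Trial-and-error calibration over a plausible range of n values.
trial_n = np.arange(0.020, 0.041, 0.001)
best_n = min(trial_n, key=lambda n: abs(simulated_depth(n) - h_obs))
print(f"calibrated n ~ {best_n:.3f}, simulated depth ~ {simulated_depth(best_n):.2f} m")
```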

Keywords: BWDB, CEGIS, HEC-RAS

Procedia PDF Downloads 163
2543 Calibration and Validation of ArcSWAT Model for Estimation of Surface Runoff and Sediment Yield from Dhangaon Watershed

Authors: M. P. Tripathi, Priti Tiwari

Abstract:

The Soil and Water Assessment Tool (SWAT) is a distributed-parameter, continuous-time model; it was tested on daily and fortnightly bases for a small agricultural watershed (Dhangaon) in Chhattisgarh state, India. SWAT has recently been interfaced with ArcGIS and is called ArcSWAT. The watershed and sub-watershed boundaries, drainage networks, slope and texture maps were generated in the ArcGIS environment of ArcSWAT. A supervised classification method was used for land use/cover classification from satellite imagery of the years 2009 and 2012. Manning's roughness coefficient 'n' for overland and channel flow and the Fraction of Field Capacity (FFC) were calibrated for the monsoon seasons of 2009 and 2010. The model was validated on a daily basis for the years 2011 and 2012 using the observed daily rainfall and temperature data. The calibration and validation results revealed that the model predicts daily surface runoff and sediment yield satisfactorily. Sensitivity analysis showed that the annual sediment yield is inversely proportional to the overland and channel 'n' values, whereas the annual runoff and sediment yields are directly proportional to the FFC. The model was also calibrated and validated for fortnightly runoff and sediment yield for the years 2009-10 and 2011-12, respectively; the simulated fortnightly values for the calibration and validation years compared well with their observed counterparts. The results indicate that the ArcSWAT model can be used to identify critical sub-watersheds and to develop management scenarios for the Dhangaon watershed. Before applying it to develop management scenarios for the critical or priority sub-watersheds, however, the model should also be tested for simulating surface runoff and sediment yield using generated rainfall and temperature data.
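Judging whether simulated runoff and sediment yield match observations "satisfactorily" is usually done with objective metrics. The sketch below computes the Nash-Sutcliffe efficiency and percent bias, two common choices assumed here for illustration (the abstract does not state which criteria were used), on placeholder daily runoff series.

```python
# Goodness-of-fit check of the kind used in watershed-model calibration.
# The observed/simulated series are placeholders, not data from the study.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency (1 = perfect fit)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias (positive = model underestimates on average)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# Placeholder daily runoff (mm) for a short monsoon window.
observed  = [2.1, 5.4, 12.8, 9.6, 4.2, 1.9]
simulated = [1.8, 6.0, 11.9, 10.4, 3.8, 2.2]
print(f"NSE = {nse(observed, simulated):.2f}, PBIAS = {pbias(observed, simulated):+.1f} %")
```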

Keywords: watershed, hydrologic and water quality, ArcSWAT model, remote sensing, GIS, runoff and sediment yield

Procedia PDF Downloads 359
2542 Modeling and Calculation of Physical Parameters of the Pollution of Water by Oil and Materials in Suspensions

Authors: Ainas Belkacem, Fourar Ali

Abstract:

The present study focuses on the mathematical modeling and calculation of the physical parameters of water pollution by oil and sand in a regime where both are fully dispersed in water. The sand particles and oil are suspended under fully developed turbulence. The study aims to understand, model and predict the viscosity, structure and dynamics of these types of mixtures. The work carried out is numerical and is validated against experiments.
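One ingredient of such a model is a closure for the effective viscosity of the suspension as a function of the solid volume fraction. The sketch below uses the standard Krieger-Dougherty relation as an illustrative choice, since the abstract does not specify which closure is employed; all parameter values are assumed.

```python
# Effective viscosity of a suspension of rigid spheres via Krieger-Dougherty,
# shown only as an illustrative closure; parameters are assumed values.
def krieger_dougherty(mu_fluid, phi, phi_max=0.64, intrinsic=2.5):
    """Effective suspension viscosity (Pa.s) at solid volume fraction phi."""
    return mu_fluid * (1.0 - phi / phi_max) ** (-intrinsic * phi_max)

mu_water = 1.0e-3                     # Pa.s at ~20 C
for phi in (0.05, 0.15, 0.30):        # illustrative sand volume fractions
    print(f"phi = {phi:.2f}  ->  mu_eff ~ {krieger_dougherty(mu_water, phi):.2e} Pa.s")
```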

Keywords: multi phase flow, pollution, suspensions, turbulence

Procedia PDF Downloads 223
2541 The Effect of Online Analyzer Malfunction on the Performance of Sulfur Recovery Unit and Providing a Temporary Solution to Reduce the Emission Rate

Authors: Hamid Reza Mahdipoor, Mehdi Bahrami, Mohammad Bodaghi, Seyed Ali Akbar Mansoori

Abstract:

Nowadays, with stricter limitations on emissions, considerable penalties are imposed if pollution limits are exceeded. Refineries are therefore focused not only on improving the quality of their products but also on producing them with the least environmental impact. The duty of the sulfur recovery unit (SRU) is to convert the H₂S gas coming from the upstream units to elemental sulfur and to minimize the burning of sulfur compounds to SO₂. The Claus process is a common process for converting H₂S to sulfur and comprises a reaction furnace followed by catalytic reactors and sulfur condensers. In addition to the Claus section, SRUs usually include a tail gas treatment (TGT) section to bring the SO₂ concentration in the flue gas below the emission limits. To operate an SRU properly, the flow rate of combustion air to the reaction furnace must be adjusted so that the Claus reaction is performed according to stoichiometry. Accurate control of the air demand leads to optimum sulfur recovery during flow and composition fluctuations in the acid gas feed. The major control system in the SRU is therefore the air demand control loop, which includes a feed-forward control system based on predetermined feed flow rates and a feedback control system based on the signal from the tail gas online analyzer. The use of online analyzers requires compliance with the installation and operation instructions. Unfortunately, most of these analyzers in Iran are out of service for different reasons, such as the low priority given to environmental issues and a lack of access to after-sales services due to sanctions. In this paper, an SRU in Iran was simulated and calibrated using industrial experimental data. Afterward, the effect of the malfunction of the online analyzer on the performance of the SRU was investigated using the calibrated simulation. The results showed that an increase in the SO₂ concentration in the tail gas led to an increase in the temperature of the reduction reactor in the TGT section. This temperature rise caused the failure of the TGT and increased the SO₂ concentration from 750 ppm to 35,000 ppm. In addition, the lack of a control system for adjusting the combustion air caused further increases in SO₂ emissions. In some processes, the major variable cannot be controlled directly because of measurement difficulties or a long delay in the sampling system. In these cases, a secondary variable that can be measured more easily is controlled instead; with a correct selection of this variable, the main variable is controlled along with the secondary one. This strategy for controlling a process system is referred to as "inferential control" and is considered in this paper. A sensitivity analysis was therefore performed to investigate the sensitivity of other measurable parameters to input disturbances. The results revealed that the outlet temperature of the first Claus reactor could be used for inferential control of the combustion air. Applying this method to the operation maximized the sulfur recovery in the Claus section.
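The inferential control idea can be sketched as a simple PI loop that trims the combustion-air flow so that the first Claus reactor outlet temperature, the easily measured secondary variable, stays at the set-point corresponding to the desired stoichiometry. The first-order process model, gains and temperatures below are illustrative assumptions, not values from the simulated unit.

```python
# Toy simulation of inferential PI control: the combustion-air flow is trimmed
# from the measured reactor outlet temperature instead of the (unavailable)
# tail-gas analyzer signal. Process model and all numbers are assumed.
dt = 1.0                            # time step, s
kp, ki = 0.05, 0.002                # PI gains (assumed)
t_set = 315.0                       # reactor outlet temperature set-point, C
tau, gain = 120.0, 4.0              # assumed process time constant (s) and C per unit air
air0, t0 = 10.0, 300.0              # nominal air flow and temperature at that flow

temp, integ = 330.0, 0.0            # start off-spec (too hot, i.e. too much air)
for _ in range(1800):               # simulate 30 minutes
    error = t_set - temp
    integ += error * dt
    air = air0 + kp * error + ki * integ          # PI law on the inferred variable
    t_ss = t0 + gain * (air - air0)               # steady-state temperature for this air flow
    temp += dt * (t_ss - temp) / tau              # first-order process response

print(f"final air flow ~ {air:.2f}, reactor outlet temperature ~ {temp:.1f} C")
```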

Keywords: sulfur recovery, online analyzer, inferential control, SO₂ emission

Procedia PDF Downloads 60