Search results for: cumulative absolute velocity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2519

299 Monitoring Air Pollution Effects on Children for Supporting Public Health Policy: Preliminary Results of MAPEC_LIFE Project

Authors: Elisabetta Ceretti, Silvia Bonizzoni, Alberto Bonetti, Milena Villarini, Marco Verani, Maria Antonella De Donno, Sara Bonetta, Umberto Gelatti

Abstract:

Introduction: Air pollution is a global problem. In 2013, the International Agency for Research on Cancer (IARC) classified air pollution and particulate matter as carcinogenic to humans. The study of the health effects of air pollution in children is very important because children are a high-risk group in terms of these effects, and early exposure during childhood can increase the risk of developing chronic diseases in adulthood. The MAPEC_LIFE project (Monitoring Air Pollution Effects on Children for supporting public health policy), funded by the EU Life+ Programme, intends to evaluate the associations between air pollution and early biological effects in children and to propose a model for estimating the global risk of early biological effects due to air pollutants and other factors in children. Methods: The study was carried out on 6-8-year-old children living in five Italian towns in two different seasons. Two biomarkers of early biological effects, primary DNA damage detected with the comet assay and the frequency of micronuclei, were investigated in buccal cells of the children. Details of the children's diseases, socio-economic status, exposure to other pollutants and lifestyle were collected using a questionnaire administered to the children's parents. Child exposure to urban air pollution was assessed by analysing PM0.5 samples collected in the school areas for PAH and nitro-PAH concentrations, lung toxicity, and in vitro genotoxicity on bacterial and human cells. Data on the chemical features of the urban air during the study period were obtained from the Regional Agency for Environmental Protection. The project also created the opportunity to approach the issue of air pollution with the children, trying to raise their awareness of air quality, its health effects and healthy behaviors by means of an educational intervention in the schools. 
Results: 1315 children were recruited for the study and participated in the first sampling campaign in the five towns. The second campaign, on the same children, is still ongoing. The preliminary results of the tests on the buccal mucosa cells of the children will be presented at the conference, as well as the preliminary data on the chemical composition and the toxicity and genotoxicity features of the PM0.5 samples. The educational package was tested on 250 primary school children and proved very useful, improving the children's knowledge about air pollution and its effects and stimulating their interest. Conclusions: The associations between levels of air pollutants, air mutagenicity and biomarkers of early effects will be investigated. A tentative model to calculate the global absolute risk of early biological effects from air pollution and other variables together will be proposed, which may be useful to support policy-making and community interventions to protect children from the possible health effects of air pollutants.

Keywords: air pollution exposure, biomarkers of early effects, children, public health policy

Procedia PDF Downloads 332
298 Thermal Regulation of Channel Flows Using Phase Change Material

Authors: Kira Toxopeus, Kamran Siddiqui

Abstract:

Channel flows are common in a wide range of engineering applications. In some types of channel flows, particularly those involving chemical or biological processes, control of the flow temperature is crucial to maintain optimal conditions for the chemical reaction or to control the growth of biological species. This often becomes an issue when the flow experiences temperature fluctuations due to external conditions. While active heating and cooling could regulate the channel temperature, it may not be feasible logistically or economically and is also regarded as a non-sustainable option. Thermal energy storage utilizing phase change material (PCM) could provide the required thermal regulation sustainably by storing excess heat from the channel and releasing it back as required, thus regulating the channel temperature within a range near the PCM melting temperature. However, in designing such systems, the configuration of the PCM storage within the channel is critical, as it could influence the channel flow dynamics, which would, in turn, affect the heat exchange between the channel fluid and the PCM. The present research focuses on the flow dynamics in the channel during heat transfer from the channel flow to the PCM thermal energy storage. Offset vertical columns containing the PCM were placed in a narrow channel. Two column shapes, square and circular, were considered. Water was used as the channel fluid and entered the channel at a temperature higher than the PCM melting temperature. Hence, as the water passed through the channel, heat was transferred from the water to the PCM, causing the PCM to store the heat through a phase transition from solid to liquid. Particle image velocimetry (PIV) was used to measure the two-dimensional velocity field of the channel flow as it passed between the PCM columns. 
Thermocouples were also attached to the PCM columns to measure the PCM temperature at three different heights. Three water flow rates (0.5, 0.75 and 1.2 liters/min) were considered. At each flow rate, experiments were conducted at three inlet water temperatures (28°C, 33°C and 38°C). The results show that the flow rate and the inlet temperature influenced the flow behavior inside the channel.

Keywords: channel flow, phase change material, thermal energy storage, thermal regulation

Procedia PDF Downloads 141
297 Green Extraction Processes for the Recovery of Polyphenols from Solid Wastes of Olive Oil Industry

Authors: Theodora-Venetia Missirli, Konstantina Kyriakopoulou, Magdalini Krokida

Abstract:

Olive mill solid waste is an olive oil mill industry by-product with high phenolic, lipid and organic acid concentrations that can be used as a low-cost source of natural antioxidants. In this study, extracts of Olea europaea (olive tree) solid olive mill waste (SOMW) were evaluated in terms of their antiradical activity and their concentrations of total phenolic compounds, such as oleuropein and hydroxytyrosol. SOMW samples were dried prior to extraction as a pretreatment step. Two drying processes, accelerated solar drying (ASD) and air-drying (AD) (at 35, 50 and 70°C with a constant air velocity of 1 m/s), were applied. Subsequently, three different extraction methods were employed to recover extracts from untreated and dried SOMW samples: the green microwave-assisted (MAE) and ultrasound-assisted extraction (UAE) methods and conventional Soxhlet extraction (SE), using water and methanol as solvents. The efficiency and selectivity of the processes were evaluated in terms of extraction yield. The antioxidant activity (AAR) and total phenolic content (TPC) of the extracts were evaluated using the DPPH assay and the Folin-Ciocalteu method, respectively. The results showed that the bioactive content was significantly affected by the extraction technique and the solvent. Specifically, untreated SOMW samples gave higher extraction yields for all solvents and higher antioxidant potential and phenolic content in the case of water. The UAE method gave greater extraction yields than the MAE method for both untreated and dried samples regardless of the solvent used. The use of ultrasound- and microwave-assisted extraction in combination with industrially applied drying methods, such as air and solar drying, was feasible and effective for the recovery of bioactive compounds.
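The antioxidant activity measured with the DPPH assay is conventionally reported as a percentage inhibition of the radical's absorbance, typically read at 517 nm. A minimal sketch of that calculation; the absorbance values below are hypothetical illustrations, not data from the study:

```python
def dpph_inhibition(a_blank, a_sample):
    """Radical scavenging activity (%) in the DPPH assay: the relative
    drop in absorbance (typically at 517 nm) versus the blank."""
    return 100.0 * (a_blank - a_sample) / a_blank

# Hypothetical absorbance readings for a blank and an extract-treated sample.
activity = dpph_inhibition(a_blank=0.80, a_sample=0.20)  # 75.0 %
```

Extracts with stronger radical scavenging give a larger drop in absorbance and hence a higher inhibition percentage.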

Keywords: antioxidant potential, drying treatment, olive mill pomace, microwave assisted extraction, ultrasound assisted extraction

Procedia PDF Downloads 306
296 Development of Hydrodynamic Drag Calculation and Cavity Shape Generation for Supercavitating Torpedoes

Authors: Sertac Arslan, Sezer Kefeli

Abstract:

In this paper, the supercavitation phenomenon and supercavity shape design parameters are first explained, and then the drag force calculation methods for high-speed supercavitating torpedoes are investigated with numerical techniques and verified against empirical studies. In order to reach very high speeds, such as 200-300 knots, for underwater vehicles, the hydrodynamic hull drag force, which is proportional to the density of water (ρ) and the square of the speed, must be reduced. Conventional heavyweight torpedoes reach up to ~50 knots with classic underwater hydrodynamic techniques. However, to exceed 50 knots and approach speeds of about 200 knots, hydrodynamic viscous forces must be reduced or eliminated completely. This requirement revives the supercavitation phenomenon, which can be applied to conventional torpedoes. Supercavitation is the use of cavitation effects to create a gas bubble, allowing the torpedo to move at very high speed through the water inside a fully developed cavitation bubble. When the torpedo moves within a cavitation envelope, generated by a cavitator in the nose section and a solid-fuel rocket engine in the rear section, it is termed a supercavitating torpedo. There are two types of cavitation: natural cavitation and ventilated cavitation. In this study, a disk cavitator is modeled with natural cavitation, and supercavitation phenomenon parameters are studied. Moreover, the drag force calculation is performed for the disk-shaped cavitator with numerical techniques and compared against empirical studies. Drag forces are calculated with computational fluid dynamics methods and different empirical methods. The numerical calculation method is developed by comparison with empirical results. In the verification study, cavitation number (σ), drag coefficient (CD) and drag force (D), cavity wall velocity (U
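For a disk cavitator, the cavitation number and drag force mentioned above are commonly related through an empirical law of the form CD ≈ CD0(1 + σ), with CD0 ≈ 0.82 for a disk (Reichardt's relation). The abstract does not state which empirical correlation the authors used, so the sketch below is a generic illustration with hypothetical operating values, not the paper's verification case:

```python
import math

RHO = 1000.0      # water density, kg/m^3
P_ATM = 101325.0  # atmospheric pressure, Pa
P_VAP = 2340.0    # water vapour pressure at ~20 degC, Pa
G = 9.81          # gravitational acceleration, m/s^2

def cavitation_number(speed, depth):
    """sigma = (p_inf - p_c) / (0.5 * rho * V^2), where p_inf is the
    ambient pressure at depth and p_c the cavity (vapour) pressure."""
    p_inf = P_ATM + RHO * G * depth
    return (p_inf - P_VAP) / (0.5 * RHO * speed**2)

def disk_cavitator_drag(speed, depth, diameter):
    """Drag on a disk cavitator using the empirical CD = 0.82 * (1 + sigma)."""
    sigma = cavitation_number(speed, depth)
    cd = 0.82 * (1.0 + sigma)
    area = math.pi * diameter**2 / 4.0
    return cd * 0.5 * RHO * speed**2 * area

# Illustrative case: ~100 m/s (~195 knots) at 10 m depth, 0.1 m disk.
sigma = cavitation_number(100.0, 10.0)   # small sigma: supercavitating regime
drag = disk_cavitator_drag(100.0, 10.0, 0.1)
```

At these speeds the dynamic pressure dwarfs the ambient pressure, so σ is small and the cavity fully envelops the body, which is the regime the paper studies.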

Keywords: cavity envelope, CFD, high speed underwater vehicles, supercavitation, supercavity flows

Procedia PDF Downloads 188
295 Estimating Precipitable Water Vapour Using the Global Positioning System and Radio Occultation over Ethiopian Regions

Authors: Asmamaw Yehun, Tsegaye Gogie, Martin Vermeer, Addisu Hunegnaw

Abstract:

The Global Positioning System (GPS) is a space-based radio positioning system capable of providing continuous position, velocity, and time information to users anywhere on or near the surface of the Earth. The main objective of this work was to estimate the integrated precipitable water vapour (IPWV) using ground GPS and Low Earth Orbit (LEO) Radio Occultation (RO) to study its spatial-temporal variability. For LEO-GPS RO, we used Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) datasets. We estimated the daily and monthly mean IPWV using six selected ground-based GPS stations over the period from 2012 to 2016 (a five-year period). This period was selected because continuous data were available at all Ethiopian GPS stations. We studied the temporal, seasonal, diurnal, and vertical variations of precipitable water vapour using GPS observables processed with the precise geodetic GAMIT-GLOBK software package. Finally, we determined the cross-correlation of our GPS-derived IPWV values with those of the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis and of the second-generation National Oceanic and Atmospheric Administration (NOAA) Global Ensemble Forecast System Reforecast (GEFS/R) for validation and statistical comparison. Higher IPWV values, ranging from 30 to 37.5 millimetres (mm), occur in Gambela and the southern regions of Ethiopia. Some parts of the Tigray, Amhara, and Oromia regions had low IPWV values, ranging from 8.62 to 15.27 mm. The correlation coefficient between the GPS-derived IPWV and both ECMWF and GEFS/R exceeds 90%. We conclude that there are strong temporal, seasonal, diurnal, and vertical variations of precipitable water vapour in the study area.
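GPS-derived IPWV is conventionally obtained from the zenith wet delay (ZWD) via a dimensionless conversion factor Π that depends on the weighted mean atmospheric temperature (the Bevis et al., 1992 formulation). The abstract does not state the constants the authors used, so the sketch below assumes the standard refractivity constants as an illustration:

```python
RHO_W = 1000.0     # density of liquid water, kg/m^3
R_V = 461.5        # specific gas constant of water vapour, J/(kg K)
K2_PRIME = 0.221   # refractivity constant k2', K/Pa (= 22.1 K/hPa)
K3 = 3.739e3       # refractivity constant k3, K^2/Pa (= 3.739e5 K^2/hPa)

def pwv_from_zwd(zwd, t_mean):
    """Precipitable water vapour from the zenith wet delay (same length
    units as zwd), using PWV = Pi * ZWD with
    Pi = 1e6 / (rho_w * R_v * (k3 / Tm + k2'))."""
    pi_factor = 1e6 / (RHO_W * R_V * (K3 / t_mean + K2_PRIME))
    return pi_factor * zwd

# Illustrative: a 100 mm wet delay at a mean temperature of 270 K maps to
# roughly 15 mm of precipitable water (Pi is about 0.15).
pwv_mm = pwv_from_zwd(100.0, 270.0)
```

Since Π varies only weakly with the mean temperature, most of the spatial-temporal IPWV variability reported above tracks the variability of the wet delay itself.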

Keywords: GNSS, radio occultation, atmosphere, precipitable water vapour

Procedia PDF Downloads 86
294 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models

Authors: Haya Salah, Srinivas Sharan

Abstract:

Healthcare facilities use appointment systems to schedule their appointments and to manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate the appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction. On the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. Therefore, a good estimate of consultation duration has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. This study therefore aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) based on potential features that influence the outcome. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study was obtained from the electronic medical records (EMR) of four outpatient clinics located in central Pennsylvania, USA. Publicly available information on doctors' characteristics, such as gender and experience, was also extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of these algorithms with respect to predictive performance. 
The findings of this study indicate that ML algorithms have the potential to predict provider service time with superior accuracy. While the clinic's current approach of experience-based appointment duration estimation resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forest (14.71%). This research also identified the critical variables affecting consultation duration: patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights were obtained from the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
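The MAPE figures above compare each model's predictions against the actual consultation durations. A minimal sketch of the metric; the durations and predictions below are hypothetical illustrations, not the study's data:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error in %, the metric used to compare the
    experience-based estimates against the ML models."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Hypothetical consultation durations (minutes).
actual = np.array([20, 35, 15, 40, 25])
fixed_slots = np.array([30, 30, 30, 30, 30])  # one-size-fits-all scheduling
model_preds = np.array([22, 33, 17, 37, 26])  # a data-driven prediction

baseline_err = mape(actual, fixed_slots)  # large error from uniform slots
model_err = mape(actual, model_preds)     # much smaller error
```

Because the error is taken relative to the actual duration, short visits that run over their slot are penalized heavily, which is exactly the situation that causes clinic backlogs.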

Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time

Procedia PDF Downloads 122
293 Desulfurization of Crude Oil Using Bacteria

Authors: Namratha Pai, K. Vasantharaj, K. Haribabu

Abstract:

Our team is developing an innovative, cost-effective biological technique to desulfurize crude oil. Sulphur is present in crude oil samples at levels from 0.05% to 13.95%, and its elimination by current industrial methods is expensive. Materials required: Alicyclobacillus acidoterrestris, potato dextrose agar, oxygen, pyrogallol and an inert gas (nitrogen). Method adapted and proposed: 1) study the growth and energy needs of the bacteria; 2) assess compatibility with crude oil; 3) study and optimize the reaction rate of the bacteria; 4) develop the reactor by computer simulation; 5) test the simulated work by building the reactor. The method being developed uses the bacterium Alicyclobacillus acidoterrestris, an acidothermophilic, heterotrophic, soil-dwelling, aerobic sulfur bacterium. The bacteria are fed to the crude oil in a unique manner: they are coated onto potato dextrose agar beads, cultured for 24 hours (the growth time coincides with the time when they begin reacting) and fed into the reactor. The beads are replenished with O2 by passing them through a jacket around the reactor which has an O2 supply. O2 cannot be supplied directly because crude oil is flammable, hence this arrangement. The beads are made to move around based on the concept of a fluidized bed reactor. By controlling the velocity of the pumped inert gas, the beads are made to settle down when exhausted of O2. They are recycled through the jacket, where O2 is re-fed, and beads inside the ring substitute for the exhausted ones. The crude oil is maintained between 1 atm and 270 MPa pressure and at 45°C, treated with tartaric acid (to provide the pH required for bacterial growth) for optimum output. The bacteria, being of the oxidising type, react with the sulphur in the crude oil and liberate SO4^2- without evolving gas. The SO4^2- is absorbed into H2O. NaOH is fed in once the reaction is complete and the beads are separated. The crude oil is thus separated from SO4^2-, and thereby from sulphur, tartaric acid and the other acids, which are separated out. 
Bio-corrosion is addressed by painting the internal walls (phenol-epoxy paints). Earlier methods used Pseudomonas and Rhodococcus species; these were found to be inefficient, time- and energy-consuming, and to reduce the fuel value because they fed on the carbon skeleton.

Keywords: Alicyclobacillus acidoterrestris, potato dextrose agar, fluidized bed reactor principle, reaction time for bacteria, compatibility with crude oil

Procedia PDF Downloads 320
292 Simulation of Turbulent Flow in Channel Using Generalized Hydrodynamic Equations

Authors: Alex Fedoseyev

Abstract:

This study explores the Generalized Hydrodynamic Equations (GHE) for the simulation of turbulent flows. The GHE were derived from the Generalized Boltzmann Equation (GBE) by Alexeev (1994). The GBE was obtained from first principles from the chain of Bogolubov kinetic equations and considers particles of finite dimensions (Alexeev, 1994). The GHE have new terms, temporal and spatial fluctuations, compared to the Navier-Stokes equations (NSE). These new terms have a timescale multiplier τ, and the GHE become the NSE when τ is zero. The nondimensional τ is a product of the Reynolds number and the squared length-scale ratio, τ=Re*(l/L)², where l is the apparent Kolmogorov length scale and L is a hydrodynamic length scale. The turbulence phenomenon is not well understood and is not described by the NSE. An additional one or two equations are usually required for a turbulence model, which may have to be tuned for specific problems. We show that, in the case of the GHE, no additional turbulence model is needed, and the turbulent velocity profile is obtained from the GHE. The 2D turbulent channel and circular pipe flows were investigated using a numerical solution of the GHE for several cases. The solutions are compared with the experimental data for circular pipes and 2D channels by Nikuradse (1932, Prandtl Lab), Hussain and Reynolds (1975), Wei and Willmarth (1989), and Van Doorne (2007), the theory of Wosnik, Castillo and George (2000), and the relevant experiments on the Superpipe setup at Princeton, with data by Zagarola (1996) and Zagarola and Smits (1998); the Reynolds numbers range from Re=7200 to Re=960000. The numerical solution data compared well with the experimental data, as well as with the approximate analytical solution for turbulent flow in a channel by Fedoseyev (2023). The obtained results confirm that the Alexeev generalized hydrodynamic theory (GHE) is in good agreement with the experiments for turbulent flows. The proposed approach is limited to 2D and 3D axisymmetric channel geometries. 
Further work will extend this approach by including channels with square and rectangular cross-sections.
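The fluctuation timescale multiplier that distinguishes the GHE from the NSE is given in the abstract as τ = Re·(l/L)². A one-line sketch of that relation; the length-scale ratio below is an illustrative value, not one taken from the paper:

```python
def ghe_timescale(reynolds, l_kolmogorov, l_hydro):
    """Nondimensional GHE fluctuation timescale tau = Re * (l/L)^2.
    As tau -> 0, the GHE reduce to the Navier-Stokes equations."""
    return reynolds * (l_kolmogorov / l_hydro) ** 2

# Illustrative: at Re = 7200 (the lowest Reynolds number studied) with an
# apparent Kolmogorov scale 1/100 of the hydrodynamic scale, tau = 0.72.
tau = ghe_timescale(7200, 0.01, 1.0)
```

Because τ scales linearly with Re at a fixed length-scale ratio, the fluctuation terms become increasingly significant at the high Reynolds numbers of the Superpipe comparisons.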

Keywords: comparison with experimental data, generalized hydrodynamic equations, numerical solution, turbulent boundary layer, turbulent flow in channel

Procedia PDF Downloads 66
291 Social Economic Factors Associated with the Nutritional Status of Children In Western Uganda

Authors: Baguma Daniel Kajura

Abstract:

The study explores the socio-economic, health-related and individual factors that influence the breastfeeding habits of mothers and their effect on the nutritional status of their infants in the Rwenzori region of Western Uganda. A cross-sectional research design was adopted, involving self-administered questionnaires, interview guides, and focus group discussion guides to assess the extent to which socio-demographic factors associated with breastfeeding practices influence child malnutrition. Using this design, data were collected from 276 of the selected 318 mother-infant pairs over a period of ten days. The sample size was determined using the Kish Leslie formula for cross-sectional studies, N = Zα² P(1−P)/δ², where N is the sample size estimate of mother-infant pairs; P is the assumed true population prevalence of malnutrition among mother-paired infants, P = 29.3%; 1−P is the probability of mother-paired infants not having malnutrition, 1−P = 70.7%; Zα is the standard normal deviate at the 95% confidence interval, corresponding to 1.96; and δ is the absolute error between the estimated and true population prevalence of malnutrition, 5%. The calculated sample size was N = 1.96² × (0.293 × 0.707) / 0.05² = 318 mother-infant pairs. Demographic and socio-economic data for all mothers were entered into Microsoft Excel and then exported to STATA 14 (StataCorp, 2015). Anthropometric measurements were taken for all children by the researcher and trained assistants, who physically weighed the children. Immunization cards were used to obtain each child's age. Bivariate logistic regression analysis was used to assess the relationship between socio-demographic factors associated with breastfeeding practices and child malnutrition. 
Multivariable regression analysis was used to determine whether there are any true relationships between the socio-demographic factors associated with breastfeeding practices, as independent variables, and child stunting and underweight, as dependent variables, in relation to breastfeeding practices. Descriptive statistics on the background characteristics of the mothers were generated and presented in frequency distribution tables. Frequencies and means were computed, and the results were presented using tables; we then determined the distribution of stunting and underweight among infants by socioeconomic and demographic factors. Findings reveal that children of mothers who used milk substitutes besides breastfeeding are over two times more likely to be stunted than those whose mothers exclusively breastfed them. Feeding children milk substitutes instead of breastmilk predisposes them to both stunting and underweight. Children of mothers between 18 and 34 years of age are less likely to be underweight, as are those who were breastfed over ten times a day. The study further reveals that 55% of the children were underweight and 49% were stunted. Of the underweight children, an equal number (58/151, 38%) were either mildly or moderately underweight, and 23% (35/151) were severely underweight. Empowering community outreach programs by increasing knowledge of, and access to, services for the integrated management of child malnutrition is crucial to curbing child malnutrition in rural areas.
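The Kish Leslie sample size calculation quoted in the methods can be reproduced directly. A minimal sketch using the study's own inputs:

```python
def kish_sample_size(prevalence, z=1.96, abs_error=0.05):
    """Kish Leslie sample size for a cross-sectional prevalence study:
    N = Z^2 * P * (1 - P) / delta^2."""
    return z**2 * prevalence * (1.0 - prevalence) / abs_error**2

# Using the study's assumed malnutrition prevalence of 29.3%,
# a 95% confidence level (Z = 1.96) and a 5% absolute error:
n = kish_sample_size(0.293)  # ~318 mother-infant pairs
```

The raw value is approximately 318.3, which the authors round to the reported 318 pairs.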

Keywords: infant and young child feeding, breastfeeding, child malnutrition, maternal health

Procedia PDF Downloads 24
290 Debris Flow Mapping Using Geographical Information System Based Model and Geospatial Data in Middle Himalayas

Authors: Anand Malik

Abstract:

The Himalayas, with their high tectonic activity, pose a great threat to human life and property. Climate change is another factor, triggering extreme events with a manifold effect on the high-mountain glacial environment: rock falls, landslides, debris flows, flash floods and snow avalanches. One such extreme event, a cloudburst together with the breach of the moraine-dammed Chorabari Lake, occurred from June 14 to June 17, 2013, triggering flooding of the Saraswati and Mandakini rivers in the Kedarnath Valley of Rudraprayag district of Uttarakhand state, India. As a result, a huge volume of water moving at high velocity created a catastrophe of the century, which resulted in the loss of a large number of human and animal lives and damage to pilgrimage, tourism, agriculture and property. A comprehensive assessment of debris flow hazards thus requires GIS-based modeling using numerical methods. The aim of the present study is the analysis and mapping of debris flow movements using geospatial data with Flow-R (developed by the team at IGAR, University of Lausanne). The model is based on combined probabilistic and energetic algorithms for assessing the spreading of the flow and maximum runout distances. An ASTER Digital Elevation Model (DEM) with 30 m x 30 m cell size (resolution) is used as the main geospatial data for preparing the runout assessment, while Landsat data are used to analyze land use/land cover change in the study area. The results show that the model can be applied with great accuracy, as it is very useful in determining debris flow areas. The results are compared with existing landslide/debris flow maps. ArcGIS software is used to prepare runout susceptibility maps, which can be used in debris flow mitigation and future land use planning.

Keywords: debris flow, geospatial data, GIS based modeling, flow-R

Procedia PDF Downloads 274
289 Electron Beam Melting Process Parameter Optimization Using Multi Objective Reinforcement Learning

Authors: Michael A. Sprayberry, Vincent C. Paquit

Abstract:

Process parameter optimization in metal powder bed electron beam melting (MPBEBM) is crucial to ensure the technology's repeatability, control, and continued industrial adoption. Despite continued efforts to address the challenges via the traditional design of experiments and process mapping techniques, there has been little success in developing an on-the-fly optimization framework that can be adapted to MPBEBM systems. Additionally, data-intensive physics-based modeling and simulation methods are difficult to support for every metal AM alloy or system due to cost restrictions. To mitigate the challenge of resource-intensive experiments and models, this paper introduces a Multi-Objective Reinforcement Learning (MORL) methodology that frames MPBEBM process optimization as a learning problem. An off-policy MORL framework based on policy gradients is proposed to discover optimal sets of beam power (P) - beam velocity (v) combinations that maintain a steady-state melt pool depth and phase transformation. For this, an experimentally validated Eagar-Tsai melt pool model is used to simulate the MPBEBM environment, where the beam acts as the agent across the P-v space to maximize returns for the uncertain powder bed environment, producing a melt pool and phase transformation closer to the optimum. The culmination of the training process yields a set of process parameters {power, speed, hatch spacing, layer depth, and preheat} in which the state (P, v) with the highest returns corresponds to a refined process parameter mapping. The resulting objectives and the mapping of returns onto the P-v space show convergence with experimental observations. The framework therefore provides a model-free multi-objective approach to discovery without the need for trial-and-error experiments.
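The policy-gradient idea over a discrete P-v grid can be sketched as a toy illustration. Everything below is hypothetical: the melt-depth surrogate stands in for the paper's experimentally validated Eagar-Tsai model, the grids and target depth are arbitrary, and a single scalar reward replaces the paper's multi-objective return:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate for the melt pool model: depth grows with beam
# power P and shrinks with beam velocity v (illustrative scaling only).
def melt_depth(power, speed):
    return 0.05 * power / np.sqrt(speed)

powers = np.linspace(100.0, 1000.0, 10)   # candidate beam powers (a.u.)
speeds = np.linspace(0.1, 2.0, 10)        # candidate beam velocities (a.u.)
target_depth = 10.0                       # desired steady-state depth (a.u.)

theta = np.zeros((10, 10))  # logits of a softmax policy over the P-v grid
baseline = 0.0              # running reward baseline to reduce variance
for _ in range(5000):
    probs = np.exp(theta - theta.max())
    probs /= probs.sum()
    flat = rng.choice(100, p=probs.ravel())   # sample one (P, v) action
    i, j = divmod(flat, 10)
    reward = -abs(melt_depth(powers[i], speeds[j]) - target_depth)
    baseline += 0.05 * (reward - baseline)
    # REINFORCE update: gradient of log-softmax is (one-hot - probs).
    grad = -probs
    grad[i, j] += 1.0
    theta += 0.1 * (reward - baseline) * grad

best_i, best_j = np.unravel_index(np.argmax(theta), theta.shape)
```

The policy concentrates on grid cells whose surrogate melt depth is closest to the target, mirroring how the paper's agent maps returns onto the P-v space without trial-and-error experiments.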

Keywords: additive manufacturing, metal powder bed fusion, reinforcement learning, process parameter optimization

Procedia PDF Downloads 94
288 An Experimental Machine Learning Analysis on Adaptive Thermal Comfort and Energy Management in Hospitals

Authors: Ibrahim Khan, Waqas Khalid

Abstract:

The healthcare sector is known to consume a high proportion of total energy in the HVAC market, owing to excessive cooling and heating requirements for maintaining human thermal comfort indoors for patients undergoing treatment in hospital wards, rooms, and intensive care units. The indoor thermal comfort conditions in selected hospitals of Islamabad, Pakistan, were measured on a real-time basis, with first-hand experimental data collected using calibrated sensors measuring ambient temperature, wet bulb globe temperature, relative humidity, air velocity, light intensity and CO2 levels. The experimental data recorded were analyzed in conjunction with thermal comfort questionnaire surveys, in which the participants, including patients, doctors, nurses, and hospital staff, were assessed on their thermal sensation, acceptability, preference, and comfort responses. The recorded dataset, including experimental and survey-based responses, was further analyzed to develop a correlation between operative temperature, operative relative humidity, and the other measured operative parameters on the one hand and the predicted mean vote and adaptive predicted mean vote on the other, with the adaptive temperature and adaptive relative humidity estimated using the seasonal dataset gathered for both summer (hot and dry, and hot and humid) and winter (cold and dry, and cold and humid) climate conditions. A machine learning logistic regression algorithm was trained on the operative experimental parameters to develop a correlation between patient sensations and the thermal environmental parameters, from which a new ML-based adaptive thermal comfort model was proposed and developed in our study. Finally, the accuracy of our model was determined using K-fold cross-validation.
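The logistic-regression-with-K-fold pipeline can be sketched on synthetic data. Everything below (the synthetic temperature/humidity features and the "too warm" label rule) is a hypothetical stand-in for the study's sensor and survey dataset:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the survey data: operative temperature (deg C),
# relative humidity (%), and a binary "too warm" thermal sensation vote.
n = 600
temp = rng.uniform(15.0, 35.0, n)
rh = rng.uniform(20.0, 80.0, n)
X = np.column_stack([np.ones(n), (temp - 25.0) / 5.0, (rh - 50.0) / 20.0])
y = (temp + 0.05 * rh + rng.normal(0.0, 1.5, n) > 27.5).astype(float)

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Plain gradient descent on the logistic log-loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Manual K-fold cross-validation (K = 5), as used to check model accuracy.
folds = np.array_split(rng.permutation(n), 5)
accuracies = []
for k in range(5):
    test_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(5) if j != k])
    w = fit_logistic(X[train_idx], y[train_idx])
    pred = 1.0 / (1.0 + np.exp(-X[test_idx] @ w)) > 0.5
    accuracies.append(float((pred == y[test_idx]).mean()))
```

Averaging the held-out accuracy over the five folds gives a less optimistic estimate of model performance than a single train/test split, which is why the study uses K-fold validation.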

Keywords: predicted mean vote, thermal comfort, energy management, logistic regression, machine learning

Procedia PDF Downloads 64
287 Methotrexate Associated Skin Cancer: A Signal Review of Pharmacovigilance Center

Authors: Abdulaziz Alakeel, Abdulrahman Alomair, Mohammed Fouda

Abstract:

Introduction: Methotrexate (MTX) is an antimetabolite used to treat multiple conditions, including neoplastic diseases, severe psoriasis, and rheumatoid arthritis. Skin cancer is the out-of-control growth of abnormal cells in the epidermis, the outermost skin layer, caused by unrepaired DNA damage that triggers mutations. These mutations lead the skin cells to multiply rapidly and form malignant tumors. The aim of this review is to evaluate the risk of skin cancer associated with the use of methotrexate and to suggest regulatory recommendations if required. Methodology: The Signal Detection team at the Saudi Food and Drug Authority (SFDA) performed a safety review using the National Pharmacovigilance Center (NPC) database as well as the World Health Organization (WHO) VigiBase, along with literature screening, to assess the causality between skin cancer and methotrexate. The search was conducted in July 2020. Results: Four published articles support the association seen in the literature search. A recent randomized controlled trial published in 2020 revealed a statistically significant increase in skin cancer among MTX users. Another study reported that methotrexate increases the risk of non-melanoma skin cancer when used in combination with immunosuppressants and biologic agents. In addition, the incidence of melanoma among methotrexate users was 3-fold that of the general population in a cohort study of rheumatoid arthritis patients. The last article, a cohort study estimating the risk of cutaneous malignant melanoma (CMM), observed a statistically significant risk increase for CMM in MTX-exposed patients. The WHO database (VigiBase) was searched for individual case safety reports (ICSRs) reporting 'skin cancer' with 'methotrexate' use, which yielded 121 ICSRs. The initial review revealed that 106 cases were insufficiently documented for proper medical assessment. 
However, the remaining fifteen cases have extensively evaluated by applying the WHO criteria of causality assessment. As a result, 30 percent of the cases showed that MTX could possibly cause skin cancer; five cases provide unlikely association and five un-assessable cases due to lack of information. The Saudi NPC database searched to retrieve any reported cases for the combined terms methotrexate/skin cancer; however, no local cases reported up to date. The data mining of the observed and the expected reporting rate for drug/adverse drug reaction pair is estimated using information component (IC), a tool developed by the WHO Uppsala Monitoring Centre to measure the reporting ratio. Positive IC reflects higher statistical association, while negative values translated as a less statistical association, considering the null value equal to zero. Results showed that a combination of 'Methotrexate' and 'Skin cancer' observed more than expected when compared to other medications in the WHO database (IC value is 1.2). Conclusion: The weighted cumulative pieces of evidence identified from global cases, data mining, and published literature are sufficient to support a causal association between the risk of skin cancer and methotrexate. Therefore, health care professionals should be aware of this possible risk and may consider monitoring any signs or symptoms of skin cancer in patients treated with methotrexate.
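The IC described above can be sketched as a shrinkage-adjusted log ratio of observed to expected report counts. A minimal illustration in the spirit of the WHO-UMC point estimate; all counts below are invented for illustration, and only the final IC of 1.2 comes from the abstract (it is not reproduced by these hypothetical numbers):

```python
import math

def information_component(n_combination, n_drug, n_reaction, n_total):
    """Shrinkage-adjusted IC = log2(observed / expected) for a
    drug/reaction pair, in the spirit of the WHO-UMC measure.

    n_combination: reports listing both the drug and the reaction
    n_drug, n_reaction: reports listing the drug / reaction overall
    n_total: all reports in the database
    """
    expected = n_drug * n_reaction / n_total
    return math.log2((n_combination + 0.5) / (expected + 0.5))

# Wholly hypothetical counts; a positive IC means the pair is reported
# more often than expected under independence of drug and reaction.
print(round(information_component(121, 50_000, 4_000, 2_000_000), 2))  # → 0.27
```

A positive value, as here, flags the pair for manual causality review rather than proving causation.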

Keywords: methotrexate, skin cancer, signal detection, pharmacovigilance

Procedia PDF Downloads 114
286 Osteoprotegerin and Osteoprotegerin/TRAIL Ratio are Associated with Cardiovascular Dysfunction and Mortality among Patients with Renal Failure

Authors: Marek Kuźniewski, Magdalena B. Kaziuk, Danuta Fedak, Paulina Dumnicka, Ewa Stępień, Beata Kuśnierz-Cabala, Władysław Sułowicz

Abstract:

Background: A high prevalence of cardiovascular morbidity and mortality is observed among patients with chronic kidney disease (CKD), especially those undergoing dialysis. Osteoprotegerin (OPG) and its ligands, receptor activator of nuclear factor kappa-B ligand (RANKL) and tumor necrosis factor-related apoptosis-inducing ligand (TRAIL), have been associated with cardiovascular complications. Our aim was to study their role as cardiovascular risk factors in stage 5 CKD patients. Methods: OPG, RANKL, and TRAIL concentrations were measured in 69 hemodialyzed CKD patients and 35 healthy volunteers. In CKD patients, cardiovascular dysfunction was assessed with aortic pulse wave velocity (AoPWV), carotid artery intima-media thickness (CCA-IMT), coronary artery calcium score (CaSc), and N-terminal pro-B-type natriuretic peptide (NT-proBNP) serum concentration. Cardiovascular and overall mortality data were collected during a 7-year follow-up. Results: OPG plasma concentrations were higher in CKD patients compared with controls. Total soluble RANKL was lower and the OPG/RANKL ratio higher in patients. Soluble TRAIL concentrations did not differ between the groups, and the OPG/TRAIL ratio was higher in CKD patients. OPG and OPG/TRAIL positively predicted long-term mortality (all-cause and cardiovascular) in CKD patients. OPG correlated positively with AoPWV, CCA-IMT, and NT-proBNP, whereas OPG/TRAIL correlated with AoPWV and NT-proBNP. The described relationships were independent of classical and non-classical cardiovascular risk factors, with the exception of age. Conclusions: Our study confirmed the role of OPG as a biomarker of cardiovascular dysfunction and a predictor of mortality in stage 5 CKD. The OPG/TRAIL ratio can be proposed as a predictor of cardiovascular dysfunction and mortality.

Keywords: osteoprotegerin, tumor necrosis factor-related apoptosis-inducing ligand, receptor activator of nuclear factor kappa-B ligand, hemodialysis, chronic kidney disease, cardiovascular disease

Procedia PDF Downloads 336
285 Environmental Impact of Pallets in the Supply Chain: Including Logistics and Material Durability in a Life Cycle Assessment Approach

Authors: Joana Almeida, Kendall Reid, Jonas Bengtsson

Abstract:

Pallets are devices used for moving and storing freight and are nearly omnipresent in supply chains. The market is dominated by timber pallets, with plastic being a common alternative. Either option underpins the use of important resources (oil, land, timber), the emission of greenhouse gases, and additional waste generation in most supply chains. This study uses a dynamic approach to the life cycle assessment (LCA) of pallets. It demonstrates that what ultimately defines the environmental burden of pallets in the supply chain is the length of their lifespan, which depends on the durability of the material and on how the pallets are utilized. The study proposes an LCA of pallets in supply chains supported by an algorithm that estimates pallet durability as a function of material resilience and of logistics. The LCA runs from cradle to grave, including raw material provision, manufacture, transport, and end of life. The scope is representative of timber and plastic pallets in the Australian and South-East Asian markets. The materials included in this analysis are: tropical mixed hardwood, unsustainably harvested in SE Asia; certified softwood, sustainably harvested; conventional plastic, a mix of virgin and scrap plastic; and recycled plastic pallets, 100% mixed plastic scrap, which are being pioneered by Re > Pal. The logistics model assumes that more complex supply chains and rougher handling subject pallets to higher stress loads. More stress shortens the lifespan of pallets as a function of their composition. Timber pallets can be repaired, extending their lifespan, while plastic pallets cannot. At the factory gate, softwood pallets have the lowest carbon footprint. Re > Pal follows closely due to its burden-free feedstock. Tropical mixed hardwood and plastic pallets have the highest footprints. Harvesting tropical mixed hardwood in SE Asia often causes deforestation, leading to emissions from land use change.
The higher footprint of plastic pallets is due to the production of virgin plastic. Our findings show that manufacture alone does not determine the sustainability of pallets. Even though certified softwood pallets have a lower carbon footprint and their lifespan can be extended by repair, the need for re-supply of materials and disposal of waste timber offsets this advantage; they also generate the most waste of all the pallets studied. In a supply chain context, Re > Pal pallets have the lowest footprint due to lower replacement and disposal needs. In addition, Re > Pal pallets are nearly 'waste neutral', because the waste generated throughout their life cycle is almost entirely offset by the scrap uptake for production. The absolute results of this study can be refined by progressing the logistics model, improving data quality, and expanding the range of materials and utilization practices. Still, this LCA demonstrates that considering logistics, raw materials, and material durability is central to sustainable decision-making on pallet purchasing, management, and disposal.
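The durability algorithm is described only qualitatively above. The toy model below sketches its core idea: lifespan in trips grows with material resilience, shrinks with logistics-driven handling stress, and can be stretched by repair for timber. Every coefficient and emission figure here is invented for illustration, not taken from the study:

```python
def footprint_per_trip(manufacture_kg, eol_kg, base_trips,
                       resilience, stress, repair_kg=0.0, repairable=False):
    """Cradle-to-grave CO2e allocated per pallet trip (toy model).

    Lifespan in trips grows with material resilience and shrinks with
    handling stress; repairable (timber) pallets trade extra repair
    emissions for a longer life. All figures are illustrative.
    """
    trips = base_trips * resilience / stress
    total_kg = manufacture_kg + eol_kg
    if repairable:
        trips *= 1.5            # assumed life extension from repair
        total_kg += repair_kg   # emissions of the repairs themselves
    return total_kg / trips

# Invented parameters: a repairable softwood pallet vs. a
# non-repairable recycled-plastic pallet in the same supply chain.
softwood = footprint_per_trip(20.0, 2.0, 30, resilience=1.0, stress=1.2,
                              repair_kg=4.0, repairable=True)
recycled = footprint_per_trip(25.0, 1.0, 30, resilience=1.3, stress=1.2)
```

With these invented numbers the two options land close together per trip; the point of the sketch is that lifespan, not manufacture alone, drives the ranking, which is the study's central claim.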

Keywords: carbon footprint, life cycle assessment, recycled plastic, waste

Procedia PDF Downloads 225
284 Numerical Simulation of Production of Microspheres from Polymer Emulsion in Microfluidic Device toward Using in Drug Delivery Systems

Authors: Nizar Jawad Hadi, Sajad Abd Alabbas

Abstract:

Because of their ability to encapsulate and release drugs in a controlled manner, microspheres fabricated from polymer emulsions using microfluidic devices have shown promise for drug delivery applications. In this study, the effects of velocity, density, viscosity, and surface tension, as well as channel diameter, on microsphere generation were investigated using ANSYS Fluent software. The software was programmed with the physical properties of the polymer emulsion, such as density, viscosity, and surface tension. Simulations were then performed to predict fluid flow and microsphere production and to improve the design of drug delivery applications based on changes in these parameters. The effects of the capillary and Weber numbers were also studied. The results showed that the size of the microspheres can be controlled by adjusting the flow velocity and the channel diameter. Narrower channel widths and higher flow rates produced smaller microspheres, which could improve drug delivery efficiency, while lower interfacial surface tension also yielded smaller microspheres. The viscosity and density of the polymer emulsion significantly affected microsphere size, with higher viscosities and densities producing smaller microspheres. The loading and drug release properties of the microspheres created with the microfluidic technique were also predicted. The results showed that the microspheres can efficiently encapsulate drugs and release them in a controlled manner over a period of time. This is due to the high surface-area-to-volume ratio of the microspheres, which allows for efficient drug diffusion. The ability to tune the manufacturing process using factors such as velocity, density, viscosity, channel diameter, and surface tension offers a potential opportunity to design drug delivery systems with greater efficiency and fewer side effects.
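The capillary and Weber numbers mentioned above compare viscous and inertial forces, respectively, against interfacial tension, and together they indicate the droplet-formation regime. A minimal sketch with illustrative fluid properties (assumed values, not the emulsion modelled in this study):

```python
def capillary_number(mu, velocity, sigma):
    """Ca = mu*v/sigma: viscous vs. interfacial forces (dripping/jetting)."""
    return mu * velocity / sigma

def weber_number(rho, velocity, length, sigma):
    """We = rho*v^2*L/sigma: inertial vs. interfacial forces for a
    channel of characteristic width `length`."""
    return rho * velocity**2 * length / sigma

# Illustrative values for a dispersed phase in a 100 micron channel
# (assumed properties: Pa*s, kg/m^3, N/m, m/s, m).
mu, rho, sigma = 1.0e-3, 1000.0, 5.0e-3
v, d = 0.05, 100e-6
print(round(capillary_number(mu, v, sigma), 4),
      round(weber_number(rho, v, d, sigma), 4))  # → 0.01 0.05
```

Small Ca and We, as here, correspond to the surface-tension-dominated dripping regime that produces uniform microspheres.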

Keywords: polymer emulsion, microspheres, numerical simulation, microfluidic device

Procedia PDF Downloads 66
283 Calculation of the Supersonic Air Intake with the Optimization of the Shock Wave System

Authors: Elena Vinogradova, Aleksei Pleshakov, Aleksei Yakovlev

Abstract:

During the flight of a supersonic aircraft under various conditions (altitude, Mach number, etc.), it becomes necessary to coordinate the operating modes of the air intake and the engine. On supersonic aircraft, this is done by changing various control factors (e.g., the angle of rotation of the wedge panels). This paper investigates the possibility of using modern optimization methods to determine the optimal position of the supersonic air intake wedge panels in order to maximize the total pressure recovery coefficient. Modern software allows us to conduct auto-optimization, which determines the optimal position of the control elements of the investigated product to achieve its maximum efficiency. In this work, the flow in the supersonic aircraft inlet was investigated and the operation of the inlet flaps optimized in a 2-D setting. This work was done using ANSYS CFX software. The supersonic aircraft inlet is a flat, adjustable, external-compression inlet. The braking surface is made in the form of a three-stage wedge. The IOSO NM software package was chosen for the optimization. The position of the input device panels is changed by varying the angle between the first and second stages of the three-stage wedge; the position of the remaining panels is changed automatically. Within the framework of the presented work, the position of the moving air intake panel was optimized under fixed flight conditions of the aircraft for a certain engine operating mode. As a result of the numerical modeling, the distribution of total pressure losses was obtained for various cases of engine operation, depending on the incoming flow velocity and the flight altitude of the aircraft. The results make it possible to obtain the maximum total pressure recovery coefficient under the given conditions. The initial geometry was set with a certain angle between the first and second wedge panels.
Having performed all the calculations, as well as the subsequent optimization of the aircraft input device, it can be concluded that the initial angle was set sufficiently close to the optimal one.
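The total pressure recovery coefficient that the optimization maximizes is the product of the stagnation-pressure ratios across the shock system. A minimal sketch using the standard normal-shock relation (γ = 1.4) applied to the Mach number component normal to each shock; the three Mach values are illustrative assumptions, not taken from the paper's geometry:

```python
def shock_p0_ratio(mn, gamma=1.4):
    """Stagnation-pressure ratio across a shock, given the upstream
    Mach number component normal to the shock (Rankine-Hugoniot)."""
    a = ((gamma + 1) * mn**2 / ((gamma - 1) * mn**2 + 2)) ** (gamma / (gamma - 1))
    b = ((gamma + 1) / (2 * gamma * mn**2 - (gamma - 1))) ** (1 / (gamma - 1))
    return a * b

# Hypothetical three-shock compression ramp: normal Mach components
# ahead of each successive shock (assumed values).
recovery = 1.0
for mn in (1.6, 1.35, 1.15):
    recovery *= shock_p0_ratio(mn)
print(round(recovery, 3))  # → 0.865
```

Weakening each oblique shock (smaller normal Mach components, achieved by adjusting the wedge angles) raises this product, which is exactly what the panel-angle optimization exploits.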

Keywords: optimal angle, optimization, supersonic air intake, total pressure recovery coefficient

Procedia PDF Downloads 244
282 Lake Bardawil Water Quality

Authors: Mohamed Elkashouty, Mohamed Elkammar, Mohamed Gomma, Menal Elminiami

Abstract:

Lake Bardawil is considered one of the major morphological features of northern Sinai. It is the largest fish-production lake for export in Egypt. Nineteen and thirty-one samples were collected from the lake water during winter and summer 2005, respectively. TDS, cation, anion, Cd, Cu, Fe, Mn, Zn, Ni, Co, and Pb concentrations were measured in both seasons. During summer, in the eastern sector of the lake, the TDS concentration decreases toward the northeastern part (38,000 ppm), which is attributed to dilution by seawater through Boughaz II. The TDS concentration generally increases in the central and southern parts of the lake (44,000 and 42,000 ppm, respectively), because these parts are far from seawater dilution, form a disconnected water body, are shallow (mean depth 2 m), and have a high evaporation rate. In the western sector, the TDS content ranges from low (38,000 ppm) in the northeastern part to high (50,000 ppm) in the western part. Generally, the TDS concentration in the western sector is higher than in the eastern sector, which is attributed to the former's lower water volume and high evaporation rate, and hence an increase in the TDS content of the lake water. During winter, in the eastern sector, the wind velocity is high, which drives water currents into the lake through Boughaz I and II. The lake water is thus diluted by seawater and rainfall in the winter season. The TDS concentration increases toward the southern part of the lake (42,000 ppm) and declines in the northern part (36,000 ppm). The concentrations of Co, Ni, Pb, Fe, Cd, Zn, Cu, and Mn in the lake water in both winter and summer are low and can be considered background concentrations with respect to seawater. Therefore, no industrial, agricultural, or sanitary wastewaters are dumped into the lake. This confirms the statement written at the entrance of Lake Bardawil at the El-Telool area: "Lake Bardawil, one of the purest lakes in the world".
This indicates that Lake Bardawil is an excellent area for fish production for export (its current state) and the second main fish source in Egypt after the Mediterranean Sea, following the degradation of Lake Manzala.

Keywords: lake Bardawil, water quality, major ions, toxic metals

Procedia PDF Downloads 521
281 Proportional and Integral Controller-Based Direct Current Servo Motor Speed Characterization

Authors: Adel Salem Bahakeem, Ahmad Jamal, Mir Md. Maruf Morshed, Elwaleed Awad Khidir

Abstract:

Direct Current (DC) servo motors, or simply DC motors, play an important role in many industrial applications such as the manufacturing of plastics, precise positioning of equipment, and operating computer-controlled systems where the speed of feed control, maintaining the position, and ensuring a consistently desired output are critical. These parameters can be controlled with the help of control systems such as the Proportional Integral Derivative (PID) controller. The aim of the current work is to investigate the effects of Proportional (P) and Integral (I) controllers on the steady-state and transient response of the DC motor. The controller gains are varied to observe their effects on the error, damping, and stability of the steady-state and transient motor response. The investigation is conducted experimentally on a servo trainer CE 110 using the analog PI controller CE 120, and theoretically using Simulink in MATLAB. Both the experimental and theoretical work involve varying the integral controller gain to obtain the response to a steady-state input, varying, individually, the proportional and integral controller gains to obtain the response to a step input function at a certain frequency, and theoretically obtaining the proportional and integral controller gains for desired values of damping ratio and response frequency. Results reveal that a proportional controller helps reduce the steady-state and transient error between the input signal and the output response and makes the system more stable. In addition, it speeds up the response of the system. On the other hand, the integral controller eliminates the error but tends to make the system unstable, with induced oscillations and a slow response while eliminating the error. From the current work, it is desired to achieve a stable response of the servo motor in terms of its angular velocity subjected to steady-state and transient input signals by utilizing the strengths of both P and I controllers.
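The observed P and I behaviour can be reproduced with a minimal discrete-time simulation. The sketch below assumes an illustrative first-order motor model G(s) = K/(τs + 1) rather than the CE 110's actual dynamics; with ki > 0 the steady-state error vanishes, while kp alone leaves a constant offset:

```python
def simulate_pi(kp, ki, setpoint=100.0, tau=0.5, gain=1.0,
                dt=0.001, t_end=5.0):
    """Speed response of a first-order DC motor G(s) = gain/(tau*s + 1)
    under PI control, integrated with explicit Euler (toy model)."""
    omega, integral = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        error = setpoint - omega
        integral += error * dt
        u = kp * error + ki * integral           # PI control law
        omega += dt * (gain * u - omega) / tau   # motor dynamics
    return omega

print(round(simulate_pi(kp=2.0, ki=5.0), 1))   # → 100.0 (I removes offset)
print(round(simulate_pi(kp=2.0, ki=0.0), 1))   # → 66.7 (P-only offset)
```

Raising ki too far in this model introduces the oscillatory, slowly settling behaviour the abstract attributes to the integral action.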

Keywords: DC servo motor, proportional controller, integral controller, controller gain optimization, Simulink

Procedia PDF Downloads 110
280 Optical Flow Technique for Supersonic Jet Measurements

Authors: Haoxiang Desmond Lim, Jie Wu, Tze How Daniel New, Shengxian Shi

Abstract:

This paper outlines the development of a novel experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding-particle problems frequently associated with particle-image velocimetry (PIV) techniques at high Mach numbers. Based on optical flow algorithms, the idea behind the technique involves using high-speed cameras to capture Schlieren images of the supersonic jet shear layers, before they are subjected to an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point-measurement or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals the flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical-flow-based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed, with the supersonic jet operated in cold mode at a stagnation pressure of 8.2 bar and an exit velocity of Mach 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As the implementation of optical flow techniques for supersonic flows remains rare, the current focus revolves around methodology validation through synthetic images. The results of the validation tests offer valuable insight into how the optical flow algorithm can be further refined for better robustness and accuracy. Details of the methodology employed and the challenges faced will be elaborated in the final conference paper should the abstract be accepted. Despite these challenges, this novel supersonic flow measurement technique may offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
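As a sketch of the underlying algorithm, a minimal Horn-Schunck iteration is shown below (a generic implementation, not the authors' adapted code). It alternates between the brightness-constancy data term and a neighbourhood-averaged smoothness term:

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck optical flow between two grayscale frames.

    Returns per-pixel velocities (u, v). Gradients use numpy central
    differences; the smoothness term uses a 4-neighbour mean with
    periodic wrap (a simplification of the original weighted average).
    """
    def neighbour_mean(f):
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

    Ix = np.gradient(im1, axis=1)
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_bar, v_bar = neighbour_mean(u), neighbour_mean(v)
        # Jointly enforce brightness constancy (Ix*u + Iy*v + It = 0)
        # and smoothness, weighted by the regularization alpha.
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```

For a synthetic ramp image shifted one pixel to the right, the recovered u field converges toward 1 in the interior, which illustrates how the method can be validated on synthetic Schlieren-like images before use on experimental data.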

Keywords: Schlieren, optical flow, supersonic jets, shock shear layer

Procedia PDF Downloads 312
279 Application of Aerogeomagnetic and Ground Magnetic Surveys for Deep-Seated Kimberlite Pipes in Central India

Authors: Utkarsh Tripathi, Bikalp C. Mandal, Ravi Kumar Umrao, Sirsha Das, M. K. Bhowmic, Joyesh Bagchi, Hemant Kumar

Abstract:

The Central India Diamond Province (CIDP) is known for occurrences of primary and secondary sources of diamonds in the Vindhyan platformal sediments, which host several kimberlites, with one operating mine. The known kimberlites are Neo-Proterozoic in age and intrude into the Kaimur Group of rocks. Based on the interpretation of aero-geomagnetic data, three potential zones were demarcated in parts of the Chitrakoot and Banda districts, Uttar Pradesh, and the Satna district, Madhya Pradesh, India. To validate the aero-geomagnetic interpretation, a ground magnetic survey coupled with a gravity survey was conducted to confirm the anomalies and explore the possibility of pipes concealed beneath the Vindhyan sedimentary cover. Geologically, the area exposes milky white to buff-colored arkosic and arenitic sandstone belonging to the Dhandraul Formation of the Kaimur Group, which is undeformed and unmetamorphosed, providing an almost transparent medium for geophysical exploration. There is neither surface nor geophysical indication of intersecting linear structures, but the joint patterns show three principal joints along the NNE-SSW, ENE-WSW, and NW-SE directions, with vertical to sub-vertical dips. Aeromagnetic data interpretation brings out three promising zones with bipolar magnetic anomalies (69-602 nT) that represent potential kimberlite intrusives concealed at an approximate depth of 150-170 m. The ground magnetic survey has reproduced the above-mentioned anomalies in zone I, congruent with the available aero-geophysical data. The magnetic anomaly map shows a total variation of 741 nT over the area. Two very high magnetic zones (H1 and H2) have been observed, with magnitudes of around 500 nT and 400 nT, respectively. Anomaly zone H1 is located in the west-central part of the area, south of Madulihai village, while anomaly zone H2 is located 2 km away in the northeastern direction.
The Euler 3D solution map indicates the possible existence of an ultramafic body beneath both magnetic highs (H1 and H2); H2 yields a shallow-depth solution and H1 a deeper one. In the reduced-to-pole (RTP) map, the bipolar anomaly disappears, indicating a single causative source for both anomalies, which is in all probability an ultramafic suite of rocks. The H1 magnetic high represents the main body, which persists to depths of ~500 m, as depicted in the upward-continuation derivative map. The Radially Averaged Power Spectrum (RAPS) shows a loose-sediment thickness of up to 25 m, with a cumulative depth of 154 m of sandstone overlying the ultramafic body. The average depth range of the shallower body (H2) is 60.5-86 m, as estimated by the Peters half-slope method. The total-field magnetic anomaly with Bouguer anomaly (BA) contours also shows high BA values around the magnetic high zones (H1 and H2), suggesting that the causative body has higher density and susceptibility than the surrounding host rock. The ground magnetic survey coupled with gravity thus confirms a potential target for further exploration, as the findings correlate with the known diamondiferous kimberlites in this region, which post-date the rocks of the Kaimur Group.
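The Peters half-slope estimate used above for the depth of H2 divides the horizontal separation of the two half-slope tangent points on the anomaly profile by an empirical index. A minimal sketch with hypothetical profile positions (not the surveyed data):

```python
def peters_half_slope_depth(x_half1, x_half2, index=1.6):
    """Peters' half-slope depth estimate for a magnetic source.

    x_half1, x_half2: horizontal positions (m) where the anomaly
    profile's slope falls to half its maximum; `index` is the
    empirical Peters factor (typically 1.2-2.0, commonly 1.6).
    """
    return abs(x_half2 - x_half1) / index

# Hypothetical profile: half-slope tangent points 120 m apart.
print(round(peters_half_slope_depth(430.0, 550.0), 1))  # → 75.0
```

The choice of index reflects the assumed dike width, which is why the method yields a depth range (such as the 60.5-86 m quoted for H2) rather than a single value.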

Keywords: Kaimur, kimberlite, Euler 3D solution, magnetic

Procedia PDF Downloads 76
278 Loss Quantification Archaeological Sites in Watershed Due to the Use and Occupation of Land

Authors: Elissandro Voigt Beier, Cristiano Poleto

Abstract:

The main objective of this research is to assess loss through the quantification of material culture (archaeological fragments) in rural sites exploited economically by mechanized seasonal and permanent cropping, in a hydrographic subsystem of the Camaquã River in the state of Rio Grande do Sul, Brazil. The study area consists of several micro-basins of different sizes, ranging between 1,000 m² and 10,000 m², all with a large number of occurrences and outcrop locations of archaeological material and a high density of material in an intensively farmed environment. The first stage of the research aimed to identify the dispersion of archaeological material through a field survey, plotting points with the Global Positioning System (GPS) within each river basin; a concise bibliography on the topic in the region helped theoretically in understanding the ancient landscape and the settlement preferences of historical peoples, relating them to the practices observed in the field. The mapping was followed by cartographic work, producing land-elevation products that contributed to understanding the distribution of the materials, the definition and extent of the dispersed material, and the turnover of in situ material by mechanization as a result of human activities. It was also necessary to prepare density maps of the materials found, linking natural environments conducive to ancient occupation with current human occupation.
The third stage of the project comprises the systematic collection of archaeological material without alteration of or interference with the subsurface of the indigenous settlements; the material was then prepared and treated in the laboratory to remove excess soil, followed by cleaning according to previously published methodology, measurement, and quantification. Approximately 15,000 archaeological fragments were identified, belonging to different periods of the region's ancient history, all collected outside their environmental and historical context, which has been considerably changed and modified. The material was identified and cataloged considering features such as object weight, size, and type of material (lithic, ceramic, bone, historical porcelain and their association with ancient history), disregarding attributes such as the individual lithology and functionality of each object. As preliminary results, we can point out the displacement of materials by heavy mechanization and the consequent soil disturbance processes, which generate transport of archaeological materials. As a next step, an estimate of potential losses will be sought through a mathematical model. Through this process, we expect to reach a reliable, highly accurate model that can be applied to archaeological sites of lower density without significant error.

Keywords: degradation of heritage, quantification in archaeology, watershed, use and occupation of land

Procedia PDF Downloads 277
277 The Observable Method for the Regularization of Shock-Interface Interactions

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique that is capable of regularizing shocks and sharp interfaces simultaneously in shock-interface interaction simulations. The direct numerical simulation of flows involving shocks has been investigated for many years, and many numerical methods have been developed to capture shocks. However, most of these methods rely on numerical dissipation to regularize the shocks. Moreover, in high Reynolds number flows, the nonlinear terms in hyperbolic Partial Differential Equations (PDEs) dominate, constantly generating small-scale features. This makes the direct numerical simulation of shocks even harder. The same difficulty occurs in two-phase flows with sharp interfaces, where the nonlinear terms in the governing equations keep sharpening the interfaces into discontinuities. The main idea of the proposed technique is to average out the small scales that are below the resolution (observable scale) of the computational grid by filtering the convective velocity in the nonlinear terms of the governing PDEs. This technique is named the "observable method", and it results in a set of hyperbolic equations called observable equations, namely, the observable Navier-Stokes or Euler equations. The observable method has been applied to flow simulations involving shocks, turbulence, and two-phase flows, and the results are promising. In the current paper, the observable method is examined on its performance in regularizing shocks and interfaces at the same time in shock-interface interaction problems. Bubble-shock interactions and the Richtmyer-Meshkov instability are chosen for study. The observable Euler equations are solved numerically with pseudo-spectral discretization in space and a third-order Total Variation Diminishing (TVD) Runge-Kutta method in time. Results are presented and compared with existing publications. The interface acceleration and deformation and the shock reflection are particularly examined.
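A one-dimensional illustration of the kind of convective-velocity filtering described above. A Helmholtz-type low-pass filter is assumed here as one common choice for such a filter; the authors' observable equations involve more than this sketch:

```python
import numpy as np

def observable_velocity(u, dx, alpha):
    """Helmholtz-filtered ('observable') velocity on a periodic 1-D grid.

    Solves (1 - alpha^2 d^2/dx^2) u_bar = u in Fourier space, which
    damps scales smaller than the observable scale alpha while leaving
    the mean and large scales essentially untouched.
    """
    k = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
    u_hat = np.fft.fft(u)
    return np.real(np.fft.ifft(u_hat / (1.0 + alpha**2 * k**2)))

# A shock-like square wave is smoothed into a resolvable profile.
x = np.linspace(0.0, 1.0, 128, endpoint=False)
u = np.sign(np.sin(2.0 * np.pi * x))
u_bar = observable_velocity(u, dx=1.0 / 128, alpha=0.05)
```

Using u_bar in place of u inside the nonlinear convective terms prevents the equations from generating features below the observable scale, which is the mechanism by which the method regularizes shocks and interfaces without explicit numerical dissipation.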

Keywords: compressible flow simulation, inviscid regularization, Richtmyer-Meshkov instability, shock-bubble interactions

Procedia PDF Downloads 349
276 Improving Fluid Catalytic Cracking Unit Performance through Low Cost Debottlenecking

Authors: Saidulu Gadari, Manoj Kumar Yadav, V. K. Satheesh, Debasis Bhattacharyya, S. S. V. Ramakumar, Subhajit Sarkar

Abstract:

Most Fluid Catalytic Cracking Units (FCCUs) are big profit makers and are hence always operated against several constraints. The FCCU is the primary source of gasoline, light olefins as petrochemical feedstocks, feedstock for alkylates and oxygenates, LPG, etc. in a refinery. Increasing unit capacity and improving product yields, as well as qualities such as gasoline RON, have a dramatic impact on refinery economics. FCCUs are often debottlenecked significantly beyond their original design capacities. Depending on the unit configuration, operating conditions, and feedstock quality, an FCC unit can have a variety of bottlenecks. While some debottlenecking measures aim to increase the feed rate or improve the conversion, others aim to improve the reliability of the equipment or the overall unit. Apart from investment cost, the other factors generally considered while evaluating debottlenecking options are shutdown days, faster payback, risk on investment, etc. Low-cost solutions such as the replacement of feed injectors, the air distributor, steam distributors, the spent catalyst distributor, or an efficient cyclone system are the preferred way of upgrading an FCCU; they also have a shorter lead time from idea inception to implementation. This paper discusses various bottlenecks generally encountered in FCCUs and presents a case study on the performance improvement of one of the FCCUs in IndianOil through the implementation of cost-effective technical solutions, including the use of improved internals in the Reactor-Regeneration (R-R) section. After implementation, reductions of about 10% in regenerator air rate, regenerator superficial gas velocity, and cyclone velocities were achieved, and the CLO yield improved from 10 to 6 wt%. By ensuring a proper pressure balance and optimum immersion of the cyclone dipleg in the standpipe, the frequent formation of perforations in the regenerator cyclones was addressed, which in turn improved the unit's on-stream factor.
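The superficial velocity quoted above is simply the volumetric gas flow divided by the vessel cross-section, so a 10% reduction follows directly from lower regenerator air rates at fixed geometry. A minimal illustration with hypothetical regenerator figures (not unit data from IndianOil):

```python
import math

def superficial_velocity(vol_flow_m3_s, vessel_diameter_m):
    """Gas superficial velocity = volumetric flow / vessel cross-section."""
    area = math.pi * vessel_diameter_m**2 / 4.0
    return vol_flow_m3_s / area

# Hypothetical regenerator: 80 m^3/s of flue gas in a 9 m ID vessel,
# before and after a ~10% cut in air rate.
v0 = superficial_velocity(80.0, 9.0)
print(round(v0, 2), round(0.9 * v0, 2))  # → 1.26 1.13
```

Lower superficial velocity reduces catalyst entrainment to the cyclones, which is why it features alongside cyclone velocities in the revamp results.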

Keywords: FCC, low-cost, revamp, debottleneck, internals, distributors, cyclone, dipleg

Procedia PDF Downloads 216
275 Application of Hydrologic Engineering Centers and River Analysis System Model for Hydrodynamic Analysis of Arial Khan River

Authors: Najeeb Hassan, Mahmudur Rahman

Abstract:

Arial Khan River is one of the main south-eastward outlets of the River Padma. This river maintains a meander channel through its course and is erosional in nature. The specific objective of the research is to study and evaluate the hydrological characteristics in the form of assessing changes of cross-sections, discharge, water level and velocity profile in different stations and to create a hydrodynamic model of the Arial Khan River. Necessary data have been collected from Bangladesh Water Development Board (BWDB) and Center for Environment and Geographic Information Services (CEGIS). Satellite images have been observed from Google earth. In this study, hydrodynamic model of Arial Khan River has been developed using well known steady open channel flow code Hydrologic Engineering Centers and River Analysis System (HEC-RAS) using field surveyed geometric data. Cross-section properties at 22 locations of River Arial Khan for the years 2011, 2013 and 2015 were also analysed. 1-D HEC-RAS model has been developed using the cross sectional data of 2015 and appropriate boundary condition is being used to run the model. This Arial Khan River model is calibrated using the pick discharge of 2015. The applicable value of Mannings roughness coefficient (n) is adjusted through the process of calibration. The value of water level which ties with the observed data to an acceptable accuracy is taken as calibrated model. The 1-D HEC-RAS model then validated by using the pick discharges from 2009-2018. Variation in observed water level in the model and collected water level data is being compared to validate the model. It is observed that due to seasonal variation, discharge of the river changes rapidly and Mannings roughness coefficient (n) also changes due to the vegetation growth along the river banks. This river model may act as a tool to measure flood area in future. 
Considering the past peak flow discharges, it is strongly recommended to improve the carrying capacity of the Arial Khan River to protect the surrounding areas from flash floods.
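The calibration step described above, adjusting Manning's n until computed water levels match observations, rests on Manning's equation for steady uniform flow. A minimal sketch follows; the function names and figures are illustrative, not HEC-RAS internals, and a real HEC-RAS run iterates the energy equation between surveyed cross-sections rather than using a single section:

```python
def manning_discharge(n, area, wetted_perimeter, slope):
    """Discharge Q = (1/n) * A * R^(2/3) * S^(1/2) in SI units,
    with hydraulic radius R = A / P."""
    hydraulic_radius = area / wetted_perimeter
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

def back_calculate_n(observed_q, area, wetted_perimeter, slope):
    """Toy version of the calibration step: solve Manning's equation
    for the roughness n that reproduces an observed discharge."""
    r = area / wetted_perimeter
    return area * r ** (2.0 / 3.0) * slope ** 0.5 / observed_q
```

The one-section relation also shows why seasonal vegetation growth matters: a larger n conveys less discharge at the same stage, so the calibrated roughness drifts between seasons.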

Keywords: BWDB, CEGIS, HEC-RAS

Procedia PDF Downloads 186
274 Bimetallic MOF-Based Membranes for the Removal of Heavy Metal Ions from Industrial Wastewater

Authors: Muhammad Umar Mushtaq, Muhammad Bilal Khan Niazi, Nouman Ahmad, Dooa Arif

Abstract:

Apart from organic dyes, heavy metals such as Pb, Ni, Cr, and Cu are present in textile effluent and pose a threat to humans and the environment. Many studies on removing heavy-metal ions from textile wastewater using metal-organic frameworks (MOFs) have been conducted in recent decades. In this study, a new polyethersulfone ultrafiltration membrane modified with Cu/Co- and Cu/Zn-based bimetallic metal-organic frameworks (MOFs) was produced. The membrane was fabricated by phase inversion and characterized by atomic force microscopy (AFM) and scanning electron microscopy (SEM); these characterization techniques are needed to resolve the complex structure of the bimetallic MOF-based membrane. The bimetallic MOF-based filtration membranes are designed to selectively adsorb specific contaminants while allowing the passage of water molecules, improving ultrafiltration efficiency. The adsorption capacity and selectivity of MOFs are enhanced by functionalizing them with particular chemical groups or by incorporating them into composite membranes with other materials, such as polymers. The morphology and performance of the bimetallic MOF-based membrane were investigated in terms of pure water flux and metal-ion rejection. The main advantage of the developed bimetallic MOF-based membranes for wastewater treatment is an enhanced adsorption capacity: the presence of two metals in the framework provides additional binding sites for contaminants, leading to a higher adsorption capacity and more efficient removal of pollutants from wastewater. Based on the experimental findings, bimetallic MOF-based membranes reject metal ions from industrial wastewater more effectively than previously developed conventional membranes. Furthermore, the operational parameters, including pressure gradients and velocity profiles, were simulated using Ansys Fluent software.
The simulation results obtained for the operating parameters are in complete agreement with the experimental results.
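The two performance figures mentioned, pure water flux and metal-ion rejection, are simple ratios of measured quantities. A minimal sketch, with helper names of my own choosing rather than anything from the study:

```python
def permeate_flux(volume_l, area_m2, time_h):
    """Pure water flux J = V / (A * t), in L per m^2 per hour."""
    return volume_l / (area_m2 * time_h)

def rejection_percent(feed_conc, permeate_conc):
    """Metal-ion rejection R = (1 - Cp/Cf) * 100, in percent, from the
    feed (Cf) and permeate (Cp) concentrations in the same units."""
    return (1.0 - permeate_conc / feed_conc) * 100.0
```

For example, collecting 5 L of permeate through 0.5 m² of membrane in 2 h gives a flux of 5 L/(m²·h), and a permeate at one tenth of the feed concentration corresponds to 90% rejection.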

Keywords: bimetallic MOFs, heavy metal ions, industrial wastewater treatment, ultrafiltration

Procedia PDF Downloads 91
273 Assessment of Occupational Exposure and Individual Radio-Sensitivity in People Subjected to Ionizing Radiation

Authors: Oksana G. Cherednichenko, Anastasia L. Pilyugina, Sergey N. Lukashenko, Elena G. Gubitskaya

Abstract:

Accumulated radiation doses in people occupationally exposed to ionizing radiation were estimated using methods of biological (frequency of chromosomal aberrations in lymphocytes) and physical (radionuclide analysis in urine, whole-body counting, individual thermoluminescent dosimeters) dosimetry. A group of 84 category "A" employees was examined after their work on the territory of the former Semipalatinsk test site (Kazakhstan), where the dose rate in some craters exceeds 40 μSv/h. After radionuclide determination in urine by radiochemical and whole-body counting (WBC) methods, it was shown that the total effective dose of internal exposure of the personnel did not exceed 0.2 mSv/year, against an acceptable staff dose limit of 20 mSv/year. External radiation doses measured with individual thermoluminescent dosimeters ranged from 0.3 to 1.406 µSv. Cytogenetic examination showed a chromosomal aberration frequency in the staff of 4.27±0.22%, significantly higher than in people from the unpolluted settlement of Tausugur (0.87±0.1%) (p ≤ 0.01) and in citizens of Almaty (1.6±0.12%) (p ≤ 0.01). Chromosome-type aberrations accounted for 2.32±0.16%, of which 0.27±0.06% were dicentrics and centric rings. Cytogenetic analysis of group radiosensitivity among the "professionals", stratified by age, sex, ethnic group, and epidemiological data, revealed no significant differences between the compared values. Using various techniques based on the frequency of dicentrics and centric rings, the average cumulative radiation dose for the group was calculated as 0.084-0.143 Gy. For comparative individual dosimetry using physical and biological methods of dose assessment, calibration curves (including our own) and regression equations based on the overall frequency of chromosomal aberrations, obtained after irradiation of blood samples with gamma radiation at a dose rate of 0.1 Gy/min, were used.
Herewith, assuming individual variation of chromosomal aberration frequency of 1-10%, the accumulated radiation dose varied from 0 to 0.3 Gy. The main problem in interpreting individual dosimetry results comes down to the different reactions of subjects to irradiation, i.e., radiosensitivity, which dictates the need to quantify this individual reaction and take it into account when calculating the received radiation dose. The entire examined contingent was grouped based on the received dose and the detected cytogenetic aberrations. Radiosensitive individuals showed the highest frequency of chromosomal aberrations (5.72%) at the lowest annual received dose; in contrast, radioresistant individuals showed the lowest frequency of chromosomal aberrations (2.8%). The cohort was distributed by radiosensitivity as follows: radiosensitive (26.2%), medium radiosensitivity (57.1%), radioresistant (16.7%). The dispersion for radioresistant individuals is 2.3; for the group with medium radiosensitivity, 3.3; and for the radiosensitive group, 9. These data indicate the greatest variation of the characteristic (reaction to radiation exposure) in the group of radiosensitive individuals. People with medium radiosensitivity show a significant long-term correlation (0.66; n=48, β ≥ 0.999) between the doses derived from the cytogenetic analysis and the external radiation doses obtained with thermoluminescent dosimeters. Mathematical models estimating the received radiation dose according to the professionals' radiosensitivity level were proposed.
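Dose reconstruction from dicentric and centric-ring frequencies typically inverts a linear-quadratic calibration curve, Y = c + αD + βD². A sketch of that inversion follows; the default coefficients are illustrative placeholders, not the study's own calibration curve:

```python
import math

def dose_from_yield(y, c=0.001, alpha=0.03, beta=0.06):
    """Solve Y = c + alpha*D + beta*D**2 for the absorbed dose D (Gy),
    taking the positive root of the quadratic. The coefficients are
    illustrative, not fitted values from the paper."""
    disc = alpha ** 2 + 4.0 * beta * (y - c)
    return (-alpha + math.sqrt(disc)) / (2.0 * beta)
```

With these placeholder coefficients, a yield of 0.091 dicentrics per cell maps back to a dose of 1 Gy, since 0.001 + 0.03·1 + 0.06·1² = 0.091.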

Keywords: biodosimetry, chromosomal aberrations, ionizing radiation, radiosensitivity

Procedia PDF Downloads 185
272 Design and Analysis for a 4-Stage Crash Energy Management System for Railway Vehicles

Authors: Ziwen Fang, Jianran Wang, Hongtao Liu, Weiguo Kong, Kefei Wang, Qi Luo, Haifeng Hong

Abstract:

A 4-stage crash energy management (CEM) system for subway rail vehicles used by the Massachusetts Bay Transportation Authority (MBTA) in the USA is developed in this paper. The four stages of this new CEM system are: 1) an energy-absorbing coupler (draft gear and shear bolts), 2) primary energy absorbers (aluminum-honeycomb structured boxes), 3) secondary energy absorbers (crush tubes), and 4) the collision post and corner post. A sliding anti-climber and a fixed anti-climber are designed at the front of the vehicle, cooperating with the 4-stage CEM to maximize the energy absorbed and minimize the harm to passengers and crew. To investigate the effectiveness of this CEM system, both finite element (FE) methods and a crashworthiness test were employed. The whole train consists of three married pairs, i.e., six cars. In the FE approach, full-scale railway car models were developed and different collision cases were investigated, such as a single moving car impacting a rigid wall, two moving cars impacting a rigid wall, two moving cars impacting two stationary cars, and six moving cars impacting six stationary cars. The FE analysis results show that a railway vehicle incorporating this CEM system has superior crashworthiness performance. In the crashworthiness test, a simplified vehicle front end, including the sliding anti-climber, the fixed anti-climber, the primary energy absorbers, the secondary energy absorber, the collision post, and the corner post, was built and impacted against a rigid wall. The same test configuration was also analyzed in FE, and results such as the crushing force, the stress and strain of critical components, and the acceleration and velocity curves were compared and studied. The FE results show very good agreement with the test results.
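The design intent of a staged CEM system is to absorb the collision kinetic energy progressively across the stages. A toy energy bookkeeping sketch; the mass, speed, and stage fractions below are illustrative numbers, not MBTA design data:

```python
def impact_kinetic_energy(mass_kg, speed_m_s):
    """Kinetic energy E = 0.5 * m * v^2 brought into the collision, in joules."""
    return 0.5 * mass_kg * speed_m_s ** 2

def stage_energy_budget(total_energy_j, fractions):
    """Split the energy to be absorbed across the CEM stages, in order:
    coupler, primary absorbers, secondary absorbers, collision/corner posts."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return [f * total_energy_j for f in fractions]

# One 40 t car striking a rigid wall at 10 m/s (36 km/h):
energy = impact_kinetic_energy(40000.0, 10.0)
budget = stage_energy_budget(energy, [0.1, 0.4, 0.3, 0.2])
```

In a real design the split is not a free choice: each stage's crush force and stroke fix how much energy it can absorb before the next stage engages, which is what the FE collision cases above verify.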

Keywords: railway vehicle collision, crash energy management design, finite element method, crashworthiness test

Procedia PDF Downloads 404
271 Numerical Modelling of Hydrodynamic Drag and Supercavitation Parameters for Supercavitating Torpedoes

Authors: Sezer Kefeli, Sertaç Arslan

Abstract:

In this paper, supercavitation phenomena and parameters are explained, and hydrodynamic design approaches for supercavitating torpedoes are investigated. In addition, drag force calculation methods for supercavitating vehicles are presented. Conventional heavyweight torpedoes reach up to ~50 knots using classic hydrodynamic techniques; supercavitating torpedoes, on the other hand, may theoretically reach up to ~200 knots. However, to reach such high speeds, hydrodynamic viscous forces have to be reduced or eliminated completely. This necessity revived the supercavitation phenomenon, which is now applied to conventional torpedoes. Supercavitation is a type of cavitation that is more stable and continuous than the other cavitation types. The general principle of supercavitation is to separate the underwater vehicle from the water phase by surrounding the vehicle with cavitation bubbles. This allows the torpedo to operate at high speed through the water inside a fully developed cavity. Conventional torpedoes become supercavitating torpedoes when the torpedo moves within a cavity envelope generated by a cavitator in the nose section and driven by a solid-fuel rocket engine in the rear section. There are two types of supercavitation, natural and artificial. In this study, natural cavitation over disk cavitators is investigated using numerical methods. Once the supercavitation characteristics and the drag reduction of natural cavitation are studied on a CFD platform, the results are verified against empirical equations. The supercavitation parameters investigated and compared with empirical results are the cavitation number (σ), the pressure distribution along the axial axis, the drag coefficient (C_d) and drag force (D), the cavity wall velocity (U_c), and the dimensionless cavity shape parameters, namely the cavity length (L_c/d_c), cavity diameter (d_m/d_c), and cavity fineness ratio (L_c/d_m).
This paper serves as a feasibility study, carrying out numerical solutions of the supercavitation phenomena and comparing them with empirical equations.
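The parameters above can be made concrete with the standard definitions: the cavitation number is σ = (p∞ - p_c) / (0.5 ρ V²), and for a disk cavitator the widely quoted fit is C_d = C_d0 (1 + σ) with C_d0 ≈ 0.82. A sketch follows; the operating point chosen is illustrative, not a case from the paper:

```python
import math

def cavitation_number(p_inf, p_cavity, rho, speed):
    """sigma = (p_inf - p_c) / (0.5 * rho * V^2)."""
    return (p_inf - p_cavity) / (0.5 * rho * speed ** 2)

def disk_drag_coefficient(sigma, cd0=0.82):
    """C_d = C_d0 * (1 + sigma); C_d0 ~ 0.82 is the usual disk-cavitator value."""
    return cd0 * (1.0 + sigma)

def drag_force(cd, rho, speed, cavitator_diameter):
    """D = 0.5 * rho * V^2 * C_d * A, with frontal area A = pi * d^2 / 4."""
    area = math.pi * cavitator_diameter ** 2 / 4.0
    return 0.5 * rho * speed ** 2 * cd * area

# Roughly 100 m/s (~194 knots) at 10 m depth, cavity at water vapour pressure:
sigma = cavitation_number(199425.0, 2340.0, 1000.0, 100.0)
cd = disk_drag_coefficient(sigma)
```

As σ falls toward zero the cavity lengthens and C_d approaches C_d0, which is the regime the empirical cavity-shape relations for L_c/d_c and d_m/d_c describe.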

Keywords: CFD, cavity envelope, high speed underwater vehicles, supercavitating flows, supercavitation, drag reduction, supercavitation parameters

Procedia PDF Downloads 173
270 Use of Artificial Neural Networks to Estimate Evapotranspiration for Efficient Irrigation Management

Authors: Adriana Postal, Silvio C. Sampaio, Marcio A. Villas Boas, Josué P. Castro

Abstract:

This study deals with the estimation of reference evapotranspiration (ET₀) in an agricultural context, focusing on efficient irrigation management to meet the growing interest in the sustainable management of water resources. Given the importance of water in agriculture and its scarcity in many regions, efficient use of this resource is essential to ensure food security and environmental sustainability. The methodology applied artificial intelligence techniques, specifically Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), to predict ET₀ in the state of Paraná, Brazil. The models were trained and validated with meteorological data from the Brazilian National Institute of Meteorology (INMET), together with data obtained from a producer's weather station in the western region of Paraná. Two optimizers (SGD and Adam) and different meteorological variables, such as temperature, humidity, solar radiation, and wind speed, were explored as inputs to the models. Nineteen configurations with different input variables were tested; among them, configuration 9, with 8 input variables, was identified as the most efficient overall, while configuration 10, with 4 input variables, was considered the most effective given its smaller number of variables. The main conclusions of this study show that MLP ANNs are capable of accurately estimating ET₀, providing a valuable tool for irrigation management in agriculture. Both configurations (9 and 10) showed promising performance in predicting ET₀. The validation of the models with cultivator data underlined the practical relevance of these tools and confirmed their ability to generalize to different field conditions.
The results of the statistical metrics, including Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Coefficient of Determination (R²), showed excellent agreement between the model predictions and the observed data, with MAE values as low as 0.01 mm/day and 0.03 mm/day. In addition, the models achieved an R² between 0.99 and 1, indicating a satisfactory fit to the real data. This agreement was also confirmed by the Kolmogorov-Smirnov test, which evaluates how well the predictions match the statistical behavior of the real data and yields values between 0.02 and 0.04 for the producer data. The results of this study further suggest that the developed technique can be applied to other locations by using site-specific data to improve ET₀ predictions and thus contribute to sustainable irrigation management in different agricultural regions. The study has some limitations, such as the use of a single ANN architecture and two optimizers, the validation with data from only one producer, and the possible underestimation of the influence of seasonality and local climate variability. An irrigation management application using the most efficient models from this study is already under development. Future research can explore different ANN architectures and optimization techniques, validate models with data from multiple producers and regions, and investigate the model's response to different seasonal and climatic conditions.
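The modelling pipeline, an MLP regressor fed with temperature, humidity, solar radiation, and wind speed, can be sketched with scikit-learn. The synthetic data below only mimics the shape of the problem; it is not INMET or producer data, the toy target is not a real ET₀ formula, and the network size is a guess rather than the study's configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error, r2_score

rng = np.random.default_rng(0)
# Columns: temperature (C), relative humidity (%), solar radiation
# (MJ/m^2/day), wind speed (m/s); synthetic stand-ins for daily records.
X = rng.uniform([5, 30, 5, 0.5], [35, 95, 30, 6], size=(500, 4))
# Toy target loosely shaped like ET0 in mm/day, NOT a real ET0 equation.
y = 0.10 * X[:, 0] - 0.01 * X[:, 1] + 0.08 * X[:, 2] + 0.20 * X[:, 3] + 2.0

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), solver="adam",
                 max_iter=2000, random_state=0),
)
model.fit(X[:400], y[:400])          # train on the first 400 records
pred = model.predict(X[400:])        # hold out the last 100 for validation
mae = mean_absolute_error(y[400:], pred)
r2 = r2_score(y[400:], pred)
```

Swapping solver="adam" for solver="sgd" reproduces the optimizer comparison in spirit; the study's MAE of 0.01 mm/day should not be expected from this toy setup.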

Keywords: agricultural technology, neural networks in agriculture, water efficiency, water use optimization

Procedia PDF Downloads 51