Search results for: velocity gradient tensor
296 Comparison of the Effectiveness of Tree Algorithms in Classification of Spongy Tissue Texture
Authors: Roza Dzierzak, Waldemar Wojcik, Piotr Kacejko
Abstract:
Analysis of the texture of medical images consists of determining the parameters and characteristics of the examined tissue. The main goal is to assign the analyzed area to one of two basic groups: healthy tissue or tissue with pathological changes. The CT images of the thoracic-lumbar spine from 15 healthy patients and 15 with confirmed osteoporosis were used for the analysis. As a result, 120 samples with dimensions of 50x50 pixels were obtained. The set of features was obtained based on the histogram, gradient, run-length matrix, co-occurrence matrix, autoregressive model, and Haar wavelet. As a result of the image analysis, 290 descriptors of textural features were obtained. The dimension of the feature space was reduced by the use of three selection methods: the Fisher coefficient (FC), mutual information (MI), and the minimization of classification error probability combined with average correlation coefficients between the chosen features (POE + ACC). Each of them returned the ten features occupying the initial places in the ranking devised according to its own coefficient. As a result of the Fisher coefficient and mutual information selections, the same features arranged in a different order were obtained. In both rankings, the 50th percentile (Perc.50%) was found in the first place. The next selected features come from the co-occurrence matrix. The sets of features selected in the selection process were evaluated using six classification tree methods: decision stump (DS), Hoeffding tree (HT), logistic model trees (LMT), random forest (RF), random tree (RT) and reduced error pruning tree (REPT). In order to assess the accuracy of the classifiers, the following parameters were used: overall classification accuracy (ACC), true positive rate (TPR, classification sensitivity), true negative rate (TNR, classification specificity), positive predictive value (PPV) and negative predictive value (NPV). Taking into account the classification results, it should be stated that the best results were obtained for the Hoeffding tree and logistic model trees classifiers, using the set of features selected by the POE + ACC method. In the case of the Hoeffding tree classifier, the highest values of three parameters were obtained: ACC = 90%, TPR = 93.3% and PPV = 93.3%. Additionally, the values of the other two parameters, i.e., TNR = 86.7% and NPV = 86.6%, were close to the maximum values obtained for the LMT classifier. In the case of the logistic model trees classifier, the same accuracy was obtained (ACC = 90%) together with the highest values of TNR = 88.3% and NPV = 88.3%. The values of the other two parameters remained close to the highest: TPR = 91.7% and PPV = 91.6%. The results obtained in the experiment show that the use of classification trees is an effective method of classifying texture features. This allows the condition of the spongy tissue to be identified for healthy cases and those with osteoporosis. Keywords: classification, feature selection, texture analysis, tree algorithms
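As a rough illustration of the workflow described in this abstract (feature selection followed by tree-based classification and sensitivity/specificity-style evaluation), the following Python sketch uses scikit-learn on placeholder data; the feature matrix, labels, and classifier settings are assumptions for demonstration only, not the authors' actual texture descriptors or software.

```python
# Minimal sketch: MI-based feature selection + tree classifiers, scored with the
# same metrics quoted in the abstract (ACC, TPR, TNR, PPV, NPV). Data are placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 290))                # 120 samples x 290 texture descriptors (placeholder)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # placeholder labels: 0 = healthy, 1 = osteoporosis

X10 = SelectKBest(mutual_info_classif, k=10).fit_transform(X, y)   # keep the 10 top-ranked features

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("random forest", RandomForestClassifier(n_estimators=200, random_state=0))]:
    y_pred = cross_val_predict(clf, X10, y, cv=10)                  # 10-fold cross-validated predictions
    tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
    acc = (tp + tn) / len(y)
    tpr, tnr = tp / (tp + fn), tn / (tn + fp)                       # sensitivity, specificity
    ppv, npv = tp / (tp + fp), tn / (tn + fn)
    print(f"{name}: ACC={acc:.2f} TPR={tpr:.2f} TNR={tnr:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```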
Procedia PDF Downloads 176
295 Using Fractal Architectures for Enhancing the Thermal-Fluid Transport
Authors: Surupa Shaw, Debjyoti Banerjee
Abstract:
Enhancing heat transfer in compact volumes is a challenge when constrained by cost issues, especially those associated with requirements for minimizing pumping power consumption. This is particularly acute for electronic chip cooling applications. Technological advancements in microelectronics have led to the development of chip architectures that involve increased power consumption. As a consequence, packaging technologies are saddled with the need for higher rates of power dissipation in smaller form factors. The increasing circuit density, higher heat flux values for dissipation and the significant decrease in the size of the electronic devices are posing thermal management challenges that need to be addressed with a better design of the cooling system. Maximizing the surface area of heat exchanging surfaces (e.g., extended surfaces or “fins”) can enable dissipation of higher levels of heat flux. Fractal structures have been shown to maximize surface area in compact volumes. Self-replicating structures at multiple length scales are called “fractals” (i.e., objects with fractional dimensions, unlike regular geometric objects such as spheres or cubes, whose volume and surface area scale as integer powers of the length scale). Fractal structures are expected to provide an appropriate technology solution to meet these challenges for enhanced heat transfer in microelectronic devices by maximizing the surface area available for heat exchanging fluids within compact volumes. In this study, the effect of different fractal micro-channel architectures and flow structures on the enhancement of transport phenomena in heat exchangers is explored by parametric variation of the fractal dimension. This study proposes a model that would enable cost-effective solutions for thermal-fluid transport for energy applications. The objective of this study is to ascertain the sensitivity of various parameters (such as heat flux and pressure gradient as well as pumping power) to variation in fractal dimension. The role of the fractal parameters will be instrumental in establishing the most effective design for the optimum cooling of microelectronic devices. This can help establish the requirement of minimal pumping power for enhancement of heat transfer during cooling. Results obtained in this study show that the proposed models for fractal architectures of microchannels significantly enhanced heat transfer due to augmentation of surface area in the branching networks of varying length scales. Keywords: fractals, microelectronics, constructal theory, heat transfer enhancement, pumping power enhancement
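The surface-area argument above can be illustrated with a minimal sketch that sums the wall area of a self-similar branching network of cylindrical channels; the branching ratio, scale factor, and dimensions below are assumed illustrative values, not parameters from the study.

```python
# Minimal sketch (assumed parameters): how surface area per unit volume grows as
# more self-similar branching generations are packed into the same envelope.
import math

def network_area_and_volume(L0=0.02, D0=0.002, n=2, r=0.7, generations=5):
    """Sum lateral wall area and volume of cylindrical channels over all generations."""
    area = volume = 0.0
    count, L, D = 1, L0, D0
    for _ in range(generations + 1):
        area += count * math.pi * D * L               # lateral surface of each branch
        volume += count * math.pi * (D / 2) ** 2 * L
        count *= n                                    # each branch splits into n children
        L *= r                                        # child branches shrink by factor r
        D *= r
    return area, volume

for gen in range(0, 6):
    a, v = network_area_and_volume(generations=gen)
    print(f"generations={gen}: area={a * 1e4:.2f} cm^2, area/volume={a / v:.0f} 1/m")
```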
Procedia PDF Downloads 318
294 Investigating the Dynamic Plantar Pressure Distribution in Individuals with Multiple Sclerosis
Authors: Hilal Keklicek, Baris Cetin, Yeliz Salci, Ayla Fil, Umut Altinkaynak, Kadriye Armutlu
Abstract:
Objectives and Goals: Spasticity is a common symptom characterized by a velocity-dependent increase in tonic stretch reflexes (muscle tone) in patients with multiple sclerosis (MS). Hypertonic muscles affect normal plantigrade contact by disturbing the accommodation of the foot to the ground while walking. It is important to know the differences between healthy and neurologic foot features for the management of spasticity-related deformities and/or the determination of rehabilitation purposes and contents. This study was planned with the aim of investigating the dynamic plantar pressure distribution in individuals with MS and determining the differences from healthy individuals (HI). Methods: Fifty-five individuals with MS (108 feet with spasticity according to the Modified Ashworth Scale) and 20 HI (40 feet) were the participants of the study. A dynamic pedobarograph was utilized for the evaluation of dynamic loading parameters. Participants were instructed to walk at their self-selected speed seven times to eliminate the learning effect. The parameters were divided into two categories, maximum loading pressure (N/cm²) and time of maximum pressure (ms), collected from the heel medial, heel lateral, midfoot, and the heads of the first, second, third, fourth and fifth metatarsal bones. Results: There were differences between the groups in the maximum loading pressure of the heel medial (p < .001), heel lateral (p < .001), midfoot (p = .041) and 5th metatarsal areas (p = .036). Also, there were differences between the groups in the time of maximum pressure of all metatarsal areas, midfoot, heel medial and heel lateral (p < .001) in favor of HI. Conclusions: The study provided basic data about foot pressure distribution in individuals with MS. The results of the study primarily showed that spasticity of the lower extremity muscles disrupted posteromedial foot loading. Secondly, according to the study results, spasticity led to inappropriate timing during load transfer from hindfoot to forefoot. Keywords: multiple sclerosis, plantar pressure distribution, gait, norm values
Procedia PDF Downloads 319
293 The Potential of Edaphic Algae for Bioremediation of the Diesel-Contaminated Soil
Authors: C. J. Tien, C. S. Chen, S. F. Huang, Z. X. Wang
Abstract:
Algae in soil ecosystems can produce organic matter and oxygen by photosynthesis. Heterocyst-forming cyanobacteria can fix nitrogen to increase soil nitrogen contents. Secretion of mucilage by some algae increases the soil water content and soil aggregation. These actions improve soil quality and fertility, and further increase the abundance and diversity of soil microorganisms. In addition, some mixotrophic and heterotrophic algae are able to degrade petroleum hydrocarbons. Therefore, the objectives of this study were to analyze the effects of algal addition on the degradation of total petroleum hydrocarbons (TPH) and on the diversity and activity of bacteria and algae in diesel-contaminated soil under different nutrient contents and frequencies of plowing and irrigation, in order to assess the potential of a bioremediation technique using edaphic algae. A known amount of diesel was added to farmland soil. This diesel-contaminated soil was subjected to five settings: experiment-1 with algal addition and plowing and irrigation every two weeks, experiment-2 with algal addition and plowing and irrigation every four weeks, experiment-3 with algal and nutrient addition and plowing and irrigation every two weeks, experiment-4 with algal and nutrient addition and plowing and irrigation every four weeks, and the control without algal addition. Soil samples were taken every two weeks to analyze TPH concentrations, the diversity of bacteria and algae, and catabolic genes encoding functional degrading enzymes. The results show that the TPH removal rates of the five settings after the two-month experimental period were in the order: experiment-2 > experiment-4 > experiment-3 > experiment-1 > control. This indicated that algal addition enhanced the degradation of TPH in the diesel-contaminated soil, but nutrient addition did not. Plowing and irrigation every four weeks resulted in more TPH removal than every two weeks. The banding patterns of denaturing gradient gel electrophoresis (DGGE) revealed an increase in the diversity of bacteria and algae after algal addition. Three petroleum hydrocarbon-degrading algae (Anabaena sp., Oscillatoria sp. and Nostoc sp.) and two added algal strains (Leptolyngbya sp. and Synechococcus sp.) were sequenced from prominent DGGE bands. The four hydrocarbon-degrading bacteria Gordonia sp., Mycobacterium sp., Rhodococcus sp. and Alcanivorax sp. were abundant in the treated soils. These results suggested that the growth of indigenous bacteria and algae was improved after adding edaphic algae. Real-time polymerase chain reaction results showed that relative amounts of four catabolic genes encoding catechol 2,3-dioxygenase, toluene monooxygenase, xylene monooxygenase and phenol monooxygenase were present and expressed in the treated soil. The addition of algae increased the expression of these genes at the end of the experiments to biodegrade petroleum hydrocarbons. This study demonstrated that edaphic algae are suitable biomaterials for bioremediating diesel-contaminated soils with plowing and irrigation every four weeks. Keywords: catabolic gene, diesel, diversity, edaphic algae
Procedia PDF Downloads 278
292 Thermal Regulation of Channel Flows Using Phase Change Material
Authors: Kira Toxopeus, Kamran Siddiqui
Abstract:
Channel flows are common in a wide range of engineering applications. In some types of channel flows, particularly the ones involving chemical or biological processes, the control of the flow temperature is crucial to maintain the optimal conditions for the chemical reaction or to control the growth of biological species. This often becomes an issue when the flow experiences temperature fluctuations due to external conditions. While active heating and cooling could regulate the channel temperature, it may not be feasible logistically or economically and is also regarded as a non-sustainable option. Thermal energy storage utilizing phase change material (PCM) could provide the required thermal regulation sustainably by storing the excess heat from the channel and releasing it back as required, thus regulating the channel temperature within a range in the proximity of the PCM melting temperature. However, in designing such systems, the configuration of the PCM storage within the channel is critical as it could influence the channel flow dynamics, which would, in turn, affect the heat exchange between the channel fluid and the PCM. The present research is focused on the investigation of the flow dynamical behavior in the channel during heat transfer from the channel flow to the PCM thermal energy storage. Offset vertical columns in a narrow channel were used that contained the PCM. Two different column shapes, square and circular, were considered. Water was used as the channel fluid that entered the channel at a temperature higher than that of the PCM melting temperature. Hence, as the water was passing through the channel, the heat was being transferred from the water to the PCM, causing the PCM to store the heat through a phase transition from solid to liquid. Particle image velocimetry (PIV) was used to measure the two-dimensional velocity field of the channel flow as it flows between the PCM columns. Thermocouples were also attached to the PCM columns to measure the PCM temperature at three different heights. Three different water flow rates (0.5, 0.75 and 1.2 liters/min) were considered. At each flow rate, experiments were conducted at three different inlet water temperatures (28ᵒC, 33ᵒC and 38ᵒC). The results show that the flow rate and the inlet temperature influenced the flow behavior inside the channel.Keywords: channel flow, phase change material, thermal energy storage, thermal regulation
Procedia PDF Downloads 138
291 Green Extraction Processes for the Recovery of Polyphenols from Solid Wastes of Olive Oil Industry
Authors: Theodora-Venetia Missirli, Konstantina Kyriakopoulou, Magdalini Krokida
Abstract:
Olive mill solid waste is an olive oil mill industry by-product with high phenolic, lipid and organic acid concentrations that can be used as a low-cost source of natural antioxidants. In this study, extracts of Olea europaea (olive tree) solid olive mill waste (SOMW) were evaluated in terms of their antiradical activity and total phenolic compound concentrations, such as oleuropein, hydroxytyrosol, etc. SOMW samples were subjected to drying prior to extraction as a pretreatment step. Two drying processes, accelerated solar drying (ASD) and air-drying (AD) (at 35, 50 and 70°C, constant air velocity of 1 m/s), were applied. Subsequently, three different extraction methods were employed to recover extracts from untreated and dried SOMW samples. The methods include the green microwave-assisted (MAE) and ultrasound-assisted extraction (UAE) and the conventional Soxhlet extraction (SE), using water and methanol as solvents. The efficiency and selectivity of the processes were evaluated in terms of extraction yield. The antioxidant activity (AAR) and the total phenolic content (TPC) of the extracts were evaluated using the DPPH assay and the Folin-Ciocalteu method, respectively. The results showed that bioactive content was significantly affected by the extraction technique and the solvent. Specifically, untreated SOMW samples showed higher performance in terms of yield for all solvents and higher antioxidant potential and phenolic content in the case of water. The UAE extraction method showed greater extraction yields than the MAE method for both untreated and dried samples regardless of the solvent used. The use of ultrasound- and microwave-assisted extraction in combination with industrially applied drying methods, such as air and solar drying, was feasible and effective for the recovery of bioactive compounds. Keywords: antioxidant potential, drying treatment, olive mill pomace, microwave assisted extraction, ultrasound assisted extraction
Procedia PDF Downloads 304
290 Retrofitting Insulation to Historic Masonry Buildings: Improving Thermal Performance and Maintaining Moisture Movement to Minimize Condensation Risk
Authors: Moses Jenkins
Abstract:
Much of the focus when improving energy efficiency in buildings falls on the raising of standards within new-build dwellings. However, as a significant proportion of the building stock across Europe is of historic or traditional construction, there is also a pressing need to improve the thermal performance of structures of this sort. On average, around twenty percent of buildings across Europe are built of historic masonry construction. In order to meet carbon reduction targets, these buildings will need to be retrofitted with insulation to improve their thermal performance. At the same time, there is also a need to balance this with maintaining the ability of historic masonry construction to allow moisture movement through the building fabric to take place. This moisture transfer, often referred to as 'breathable construction', is critical to the success, or otherwise, of retrofit projects. The significance of this paper is to demonstrate that substantial thermal improvements can be made to historic buildings whilst avoiding damage to building fabric through surface or interstitial condensation. The paper will analyze the results of a wide range of retrofit measures installed in twenty buildings as part of Historic Environment Scotland's technical research program. This program has been active for fourteen years and has seen interventions across a wide range of building types, using over thirty different methods and materials to improve the thermal performance of historic buildings. The first part of the paper will present the range of interventions which have been made. This includes insulating mass masonry walls both internally and externally, warm and cold roof insulation and improvements to floors. The second part of the paper will present the results of monitoring work which has taken place in these buildings after being retrofitted. This will be in terms of both thermal improvement, expressed as a U-value as defined in BS EN ISO 7345:1987, and also, crucially, the results of moisture monitoring both on the surface of masonry walls following retrofit and within the masonry itself. The aim of this moisture monitoring is to establish if there are any problems with interstitial condensation. This monitoring utilizes Interstitial Hygrothermal Gradient Monitoring (IHGM) and similar methods to establish relative humidity on the surface of and within the masonry. The results of the testing are clear and significant for retrofit projects across Europe. Where a building is of historic construction, the use of materials for wall, roof and floor insulation which are permeable to moisture vapor provides significant thermal improvements (achieving a U-value as low as 0.2 W/m²K) whilst avoiding problems of both surface and interstitial condensation. As the evidence which will be presented in the paper comes from monitoring work in buildings rather than theoretical modeling, there are many important lessons which can be learned and which can inform retrofit projects to historic buildings throughout Europe. Keywords: insulation, condensation, masonry, historic
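For readers unfamiliar with the U-value quoted above, a short worked example of the steady-state calculation (U = 1 / total thermal resistance) is given below; the wall build-up and material conductivities are hypothetical, not one of the monitored Historic Environment Scotland case studies.

```python
# Worked U-value example in the BS EN ISO 7345 sense: U = 1 / (Rsi + sum(d/lambda) + Rse).
# The wall build-up below is a hypothetical illustration only.
layers = [                      # (thickness d [m], conductivity lambda [W/mK])
    (0.600, 1.20),              # mass masonry (rubble stone)
    (0.080, 0.040),             # vapour-permeable insulation (e.g. wood fibre)
    (0.015, 0.60),              # lime plaster finish
]
R_si, R_se = 0.13, 0.04         # standard internal/external surface resistances [m2K/W]

R_total = R_si + sum(d / lam for d, lam in layers) + R_se
U = 1.0 / R_total
print(f"U = {U:.2f} W/m2K")     # ~0.37 W/m2K here; thicker breathable insulation approaches 0.2
```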
Procedia PDF Downloads 171
289 Development of Hydrodynamic Drag Calculation and Cavity Shape Generation for Supercavitating Torpedoes
Authors: Sertac Arslan, Sezer Kefeli
Abstract:
In this paper, the supercavitation phenomenon and supercavity shape design parameters are first explained, and then drag force calculation methods for high-speed supercavitating torpedoes are investigated with numerical techniques and verified against empirical studies. In order for underwater vehicles to reach very high speeds such as 200 or 300 knots, the hydrodynamic hull drag force, which is proportional to the density of water (ρ) and the square of speed, must be reduced. Conventional heavyweight torpedoes can reach up to ~50 knots by classic underwater hydrodynamic techniques. However, to exceed 50 knots and reach speeds of about 200 knots, hydrodynamic viscous forces must be reduced or eliminated completely. This requirement revives the supercavitation phenomenon, which could be implemented on conventional torpedoes. Supercavitation is the use of cavitation effects to create a gas bubble, allowing the torpedo to move at very high speed through the water inside a fully developed cavitation bubble. When the torpedo moves in a cavitation envelope created by a cavitator in the nose section and a solid-fuel rocket engine in the rear section, such torpedoes can be called supercavitating torpedoes. There are two types of cavitation: the first is natural cavitation, and the second is ventilated cavitation. In this study, a disk cavitator is modeled with natural cavitation, and the supercavitation parameters are studied. Moreover, the drag force calculation is performed for the disk-shaped cavitator with numerical techniques and compared with empirical studies. Drag forces are calculated with computational fluid dynamics methods and different empirical methods, and the numerical calculation method is developed by comparison with the empirical results. In the verification study, the cavitation number (σ), drag coefficient (CD), drag force (D) and cavity wall velocity (U) are compared. Keywords: cavity envelope, CFD, high speed underwater vehicles, supercavitation, supercavity flows
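As a rough order-of-magnitude illustration of the drag quantities mentioned above, the sketch below uses the widely quoted empirical relation for a disk cavitator, CD ≈ CD0(1 + σ) with CD0 ≈ 0.82; the speed, depth and cavitator size are assumed values, not those used in the paper.

```python
# Order-of-magnitude check (assumed inputs): cavitation number, disk-cavitator drag
# coefficient via the empirical relation CD = CD0*(1 + sigma), and the resulting drag.
import math

rho = 1025.0                            # sea water density [kg/m^3]
U = 200 * 0.514444                      # 200 knots in m/s
d_c = 0.05                              # cavitator disk diameter [m] (assumed)
depth = 20.0                            # running depth [m] (assumed)
p_inf = 101325.0 + rho * 9.81 * depth   # ambient pressure at depth [Pa]
p_c = 2339.0                            # cavity (water vapour) pressure [Pa]

sigma = (p_inf - p_c) / (0.5 * rho * U**2)      # cavitation number
CD = 0.82 * (1.0 + sigma)                       # disk drag coefficient in a supercavity
A = math.pi * d_c**2 / 4
D = 0.5 * rho * U**2 * A * CD                   # cavitator drag force [N]
print(f"sigma = {sigma:.4f}, CD = {CD:.3f}, D = {D / 1000:.1f} kN")
```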
Procedia PDF Downloads 187
288 Estimating Precipitable Water Vapour Using the Global Positioning System and Radio Occultation over Ethiopian Regions
Authors: Asmamaw Yehun, Tsegaye Gogie, Martin Vermeer, Addisu Hunegnaw
Abstract:
The Global Positioning System (GPS) is a space-based radio positioning system, which is capable of providing continuous position, velocity, and time information to users anywhere on or near the surface of the Earth. The main objective of this work was to estimate the integrated precipitable water vapour (IPWV) using ground GPS and Low Earth Orbit (LEO) Radio Occultation (RO) to study its spatio-temporal variability. For LEO-GPS RO, we used Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) datasets. We estimated the daily and monthly mean IPWV using six selected ground-based GPS stations over the period from 2012 to 2016 (i.e., a five-year period). The main reason for selecting the period from 2012 to 2016 is that continuous data were available at all Ethiopian GPS stations during these years. We studied the temporal, seasonal, diurnal, and vertical variations of precipitable water vapour using GPS observables processed with the precise geodetic GAMIT-GLOBK software package. Finally, we determined the cross-correlation of our GPS-derived IPWV values with those of the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-40 Interim reanalysis and of the second-generation National Oceanic and Atmospheric Administration (NOAA) Global Ensemble Forecast System Reforecast (GEFS/R) for validation and statistical comparison. Higher IPWV values, ranging from 30 to 37.5 millimetres (mm), occur in the Gambela and Southern Regions of Ethiopia. Some parts of the Tigray, Amhara, and Oromia regions had low IPWV values, ranging from 8.62 to 15.27 mm. The correlation coefficient between the GPS-derived IPWV and ECMWF and GEFS/R exceeds 90%. We conclude that there are strong temporal, seasonal, diurnal, and vertical variations of precipitable water vapour in the study area. Keywords: GNSS, radio occultation, atmosphere, precipitable water vapour
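A minimal sketch of the standard GPS-meteorology step behind such estimates (converting a zenith wet delay to IPWV with Bevis-type constants) is shown below; the delay value, surface temperature and constants are illustrative assumptions, not the processing actually performed with GAMIT-GLOBK.

```python
# Minimal sketch, assuming Bevis-type constants: IPWV = Pi * ZWD, where ZWD is the
# zenith wet delay remaining after the hydrostatic delay is removed. Inputs are placeholders.
def ipwv_from_zwd(zwd_m, Ts_kelvin):
    rho_w = 1000.0                 # liquid water density [kg/m^3]
    Rv = 461.5                     # specific gas constant of water vapour [J/(kg K)]
    k2p = 0.221                    # k2' = 22.1 K/hPa expressed in K/Pa
    k3 = 3.739e3                   # k3 = 3.739e5 K^2/hPa expressed in K^2/Pa
    Tm = 70.2 + 0.72 * Ts_kelvin   # Bevis mean-temperature approximation [K]
    Pi = 1.0e6 / (rho_w * Rv * (k3 / Tm + k2p))
    return Pi * zwd_m

zwd = 0.180                        # 18 cm zenith wet delay (placeholder)
print(f"IPWV ~ {ipwv_from_zwd(zwd, 293.15) * 1000:.1f} mm")   # roughly 0.15-0.16 times the ZWD
```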
Procedia PDF Downloads 84
287 Desulfurization of Crude Oil Using Bacteria
Authors: Namratha Pai, K. Vasantharaj, K. Haribabu
Abstract:
Our team is developing an innovative, cost-effective biological technique to desulfurize crude oil. Sulphur is present in crude oil samples at levels from 0.05% to 13.95%, and its elimination by current industrial methods is expensive. Materials required: Alicyclobacillus acidoterrestris, potato dextrose agar, oxygen, pyrogallol and an inert gas (nitrogen). Method adopted and proposed: 1) study of bacterial growth and energy needs; 2) compatibility with crude oil; 3) study and optimization of the bacterial reaction rate; 4) reaction development by computer simulation; 5) testing of the simulated work by building the reactor. The method being developed requires the use of the bacterium Alicyclobacillus acidoterrestris, an acidothermophilic, heterotrophic, soil-dwelling, aerobic sulphur bacterium. The bacteria are fed to the crude oil in a unique manner: they are coated onto potato dextrose agar beads, cultured for 24 hours (the growth time coincides with the time when they begin reacting) and fed into the reactor. The beads are replenished with O2 by passing them through a jacket around the reactor which has an O2 supply; O2 cannot be supplied directly, as crude oil is flammable, hence this arrangement. The beads are made to move around based on the concept of a fluidized bed reactor. By controlling the velocity of the pumped inert gas, the beads are made to settle down when exhausted of O2; they are recycled through the jacket where O2 is re-fed, and beads that were inside the ring substitute for the exhausted ones. The crude oil is maintained between 1 atm and 270 MPa pressure and at 45°C, treated with tartaric acid (to provide the pH required for bacterial growth) for optimum output. Being of the oxidising type, the bacteria react with sulphur in the crude oil and liberate SO4^2- and no gas; the SO4^2- is absorbed into H2O. NaOH is fed once the reaction is complete and the beads are separated. The crude oil is thus freed of SO4^2-, and thereby of sulphur, tartaric acid and other acids, which are separated out. Bio-corrosion is taken care of by internal wall painting (phenol-epoxy paints). Earlier methods included the use of Pseudomonas and Rhodococcus species; they were found to be inefficient, time- and energy-consuming, and to reduce the fuel value as they fed on the hydrocarbon skeleton. Keywords: Alicyclobacillus acidoterrestris, potato dextrose agar, fluidized bed reactor principle, reaction time for bacteria, compatibility with crude oil
Procedia PDF Downloads 315
286 Simulation of Turbulent Flow in Channel Using Generalized Hydrodynamic Equations
Authors: Alex Fedoseyev
Abstract:
This study explores the Generalized Hydrodynamic Equations (GHE) for the simulation of turbulent flows. The GHE was derived from the Generalized Boltzmann Equation (GBE) by Alexeev (1994). The GBE was obtained from first principles from the chain of Bogolubov kinetic equations and considers particles of finite dimensions (Alexeev, 1994). The GHE has new terms, temporal and spatial fluctuations, compared to the Navier-Stokes equations (NSE). These new terms have a timescale multiplier τ, and the GHE becomes the NSE when τ is zero. The nondimensional τ is a product of the Reynolds number and the squared length scale ratio, τ=Re*(l/L)², where l is the apparent Kolmogorov length scale, and L is a hydrodynamic length scale. The turbulence phenomenon is not well understood and is not described by the NSE. An additional one or two equations are required for a turbulence model, which may have to be tuned for specific problems. We show that, in the case of the GHE, no additional turbulence model is needed, and the turbulent velocity profile is obtained from the GHE. The 2D turbulent channel and circular pipe flows were investigated using a numerical solution of the GHE for several cases. The solutions are compared with the experimental data for circular pipes and 2D channels by Nikuradse (1932, Prandtl Lab), Hussain and Reynolds (1975), Wei and Willmarth (1989), and Van Doorne (2007), the theory by Wosnik, Castillo and George (2000), and the relevant experiments on the Superpipe setup at Princeton, data by Zagarola (1996) and Zagarola and Smits (1998); the Reynolds number ranges from Re=7200 to Re=960000. The numerical solution data compared well with the experimental data, as well as with the approximate analytical solution for turbulent flow in a channel of Fedoseyev (2023). The obtained results confirm that the Alexeev generalized hydrodynamic theory (GHE) is in good agreement with the experiments for turbulent flows. The proposed approach is limited to 2D and 3D axisymmetric channel geometries. Further work will extend this approach by including channels with square and rectangular cross-sections. Keywords: comparison with experimental data, generalized hydrodynamic equations, numerical solution, turbulent boundary layer, turbulent flow in channel
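A quick worked example of the timescale multiplier defined above, τ = Re·(l/L)², is given below; taking l/L ≈ Re⁻³/⁴ as a Kolmogorov-scale estimate is an assumption used here purely for illustration.

```python
# Worked example of tau = Re * (l/L)^2, with l/L estimated from the Kolmogorov
# scaling l/L ~ Re^(-3/4) (an illustrative assumption, not taken from the paper).
for Re in (7200, 100000, 960000):
    l_over_L = Re ** (-0.75)          # Kolmogorov-scale estimate of the length-scale ratio
    tau = Re * l_over_L ** 2          # nondimensional fluctuation timescale; tau -> 0 recovers the NSE
    print(f"Re={Re:>7}: l/L={l_over_L:.2e}, tau={tau:.2e}")
```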
Procedia PDF Downloads 62
285 Debris Flow Mapping Using Geographical Information System Based Model and Geospatial Data in Middle Himalayas
Authors: Anand Malik
Abstract:
The Himalayas, with their high tectonic activity, pose a great threat to human life and property. Climate change is another factor triggering extreme events, with a manifold effect on the high mountain glacial environment: rock falls, landslides, debris flows, flash floods and snow avalanches. One such extreme event, a cloudburst along with the breach of the moraine-dammed Chorabri Lake, occurred from June 14 to June 17, 2013, and triggered flooding of the Saraswati and Mandakini rivers in the Kedarnath Valley of Rudraprayag district of Uttarakhand state of India. As a result, a huge volume of water with high velocity created a catastrophe of the century, which resulted in the loss of a large number of human and animal lives and losses to pilgrimage, tourism, agriculture and property. Thus, a comprehensive assessment of debris flow hazards requires GIS-based modeling using numerical methods. The aim of the present study is the analysis and mapping of debris flow movements using geospatial data with Flow-R (developed by the team at IGAR, University of Lausanne). The model is based on combined probabilistic and energetic algorithms for the assessment of flow spreading with maximum runout distances. An ASTER Digital Elevation Model (DEM) with 30 m x 30 m cell size (resolution) is used as the main geospatial data for preparing the runout assessment, while Landsat data are used to analyze land use/land cover change in the study area. The results for the study area show that the model can be applied with great accuracy, as the model is very useful in determining debris flow areas. The results are compared with existing available landslide/debris flow maps. ArcGIS software is used in preparing runout susceptibility maps, which can be used in debris flow mitigation and future land use planning. Keywords: debris flow, geospatial data, GIS based modeling, flow-R
Procedia PDF Downloads 270
284 An Experimental Machine Learning Analysis on Adaptive Thermal Comfort and Energy Management in Hospitals
Authors: Ibrahim Khan, Waqas Khalid
Abstract:
The Healthcare sector is known to consume a higher proportion of total energy consumption in the HVAC market owing to an excessive cooling and heating requirement in maintaining human thermal comfort in indoor conditions, catering to patients undergoing treatment in hospital wards, rooms, and intensive care units. The indoor thermal comfort conditions in selected hospitals of Islamabad, Pakistan, were measured on a real-time basis with the collection of first-hand experimental data using calibrated sensors measuring Ambient Temperature, Wet Bulb Globe Temperature, Relative Humidity, Air Velocity, Light Intensity and CO2 levels. The Experimental data recorded was analyzed in conjunction with the Thermal Comfort Questionnaire Surveys, where the participants, including patients, doctors, nurses, and hospital staff, were assessed based on their thermal sensation, acceptability, preference, and comfort responses. The Recorded Dataset, including experimental and survey-based responses, was further analyzed in the development of a correlation between operative temperature, operative relative humidity, and other measured operative parameters with the predicted mean vote and adaptive predicted mean vote, with the adaptive temperature and adaptive relative humidity estimated using the seasonal data set gathered for both summer – hot and dry, and hot and humid as well as winter – cold and dry, and cold and humid climate conditions. The Machine Learning Logistic Regression Algorithm was incorporated to train the operative experimental data parameters and develop a correlation between patient sensations and the thermal environmental parameters for which a new ML-based adaptive thermal comfort model was proposed and developed in our study. Finally, the accuracy of our model was determined using the K-fold cross-validation.Keywords: predicted mean vote, thermal comfort, energy management, logistic regression, machine learning
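As an illustration of the modelling step described in this abstract, the sketch below fits a logistic regression to synthetic indoor-environment data and scores it with K-fold cross-validation; the variables, data and comfort rule are placeholders, not the hospital survey dataset.

```python
# Illustrative sketch (placeholder data): logistic regression mapping measured indoor
# parameters to a binary comfortable / uncomfortable vote, scored by K-fold CV.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.uniform(18, 32, n),     # operative temperature [C]
    rng.uniform(20, 70, n),     # relative humidity [%]
    rng.uniform(0.0, 0.5, n),   # air velocity [m/s]
    rng.uniform(400, 1200, n),  # CO2 [ppm]
])
# Placeholder target: "comfortable" is more likely near 24 C and moderate humidity.
p = 1 / (1 + np.exp(0.6 * np.abs(X[:, 0] - 24) + 0.02 * np.abs(X[:, 1] - 45) - 2))
y = (rng.uniform(size=n) < p).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=1))
print("mean K-fold accuracy:", scores.mean().round(3))
```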
Procedia PDF Downloads 62
283 Coping with Geological Hazards during Construction of Hydroelectric Projects in Himalaya
Authors: B. D. Patni, Ashwani Jain, Arindom Chakraborty
Abstract:
The world’s highest mountain range has been forming since the collision of the Indian Plate with the Asian Plate 40-50 million years ago. The Indian subcontinent has been driven deeper and deeper into the rest of Asia, resulting in the uplift of the Himalaya and the Tibetan Plateau. This complex domain has become a major challenge for the construction of hydroelectric projects. The Himalayas are geologically complex and seismically active. The northward shift of the Indian Plate increases the amount of stress in the fragile domain, which leads to deformation in the form of folds, faults and uplift. It is difficult to carry out extensive geological investigation to ascertain the geological problems to be encountered during construction; inaccessibility of the terrain, high rock cover and unpredictable groundwater conditions are the main constraints. The hydroelectric projects located in the Himalayas have faced many geological and geo-hydrological problems during the construction of surface and subsurface works. Based on this experience, efforts have been made to identify the expected geological problems during and after construction of the projects. These have been classified into surface and subsurface problems, which include the existence of inhomogeneous deep overburden in the river bed or buried valleys, abrupt changes in the bedrock profile, occurrences of fault zones/shear zones/fractured rock in dam foundations and slope instability in the abutments. The tunneling difficulties are many, such as squeezing ground conditions, popping, rock bursting, high temperature gradients, heavy ingress of water, the existence of shear seams/shear zones and the emission of obnoxious gases. However, these problems were mitigated by adopting suitable remedial measures as per site requirements. The support systems include shotcrete, wire mesh, rock bolts, steel ribs, fore-poling, pre-grouting, pipe-roofing, MAI anchors, toe walls, retaining walls, reinforced concrete dowels, drainage drifts, anchorage-cum-drainage shafts, soil nails, concrete cladding and shear keys. Controlled drilling and blasting, heading and benching, a proper drainage network and a ventilation system are other remedial measures adopted to overcome such adverse situations. The paper highlights the geological uncertainties and their remedial measures in the Himalaya, based on the analysis and evaluation of 20 hydroelectric projects during construction. Keywords: geological problems, shear seams, slope, drilling & blasting, shear zones
Procedia PDF Downloads 400
282 Osteoprotegerin and Osteoprotegerin/TRAIL Ratio are Associated with Cardiovascular Dysfunction and Mortality among Patients with Renal Failure
Authors: Marek Kuźniewski, Magdalena B. Kaziuk, Danuta Fedak, Paulina Dumnicka, Ewa Stępień, Beata Kuśnierz-Cabala, Władysław Sułowicz
Abstract:
Background: The high prevalence of cardiovascular morbidity and mortality among patients with chronic kidney disease (CKD) is observed especially in those undergoing dialysis. Osteoprotegerin (OPG) and its ligands, receptor activator of nuclear factor kappa-B ligand (RANKL) and tumor necrosis factor-related apoptosis-inducing ligand (TRAIL) have been associated with cardiovascular complications. Our aim was to study their role as cardiovascular risk factors in stage 5 CKD patients. Methods: OPG, RANKL and TRAIL concentrations were measured in 69 hemodialyzed CKD patients and 35 healthy volunteers. In CKD patients, cardiovascular dysfunction was assessed with aortic pulse wave velocity (AoPWV), carotid artery intima-media thickness (CCA-IMT), coronary artery calcium score (CaSc) and N-terminal pro-B-type natriuretic peptide (NT-proBNP) serum concentration. Cardiovascular and overall mortality data were collected during a 7-years follow-up. Results: OPG plasma concentrations were higher in CKD patients comparing to controls. Total soluble RANKL was lower and OPG/RANKL ratio higher in patients. Soluble TRAIL concentrations did not differ between the groups and OPG/TRAIL ratio was higher in CKD patients. OPG and OPG/TRAIL positively predicted long-term mortality (all-cause and cardiovascular) in CKD patients. OPG positively correlated with AoPWV, CCA-IMT and NT-proBNP whereas OPG/TRAIL with AoPWV and NT-proBNP. Described relationships were independent of classical and non-classical cardiovascular risk factors, with exception of age. Conclusions: Our study confirmed the role of OPG as a biomarker of cardiovascular dysfunction and a predictor of mortality in stage 5 CKD. OPG/TRAIL ratio can be proposed as a predictor of cardiovascular dysfunction and mortality.Keywords: osteoprotegerin, tumor necrosis factor-related apoptosis-inducing ligand, receptor activator of nuclear factor kappa-B ligand, hemodialysis, chronic kidney disease, cardiovascular disease
Procedia PDF Downloads 334
281 Numerical Simulation of Production of Microspheres from Polymer Emulsion in Microfluidic Device toward Using in Drug Delivery Systems
Authors: Nizar Jawad Hadi, Sajad Abd Alabbas
Abstract:
Because of their ability to encapsulate and release drugs in a controlled manner, microspheres fabricated from polymer emulsions using microfluidic devices have shown promise for drug delivery applications. In this study, the effects of velocity, density, viscosity, and surface tension, as well as channel diameter, on microsphere generation were investigated using ANSYS Fluent software. The software was programmed with the physical properties of the polymer emulsion, such as density, viscosity and surface tension. Simulations were then performed to predict fluid flow and microsphere production and to improve the design of drug delivery applications based on changes in these parameters. The effects of the capillary and Weber numbers are also studied. The results of the study showed that the size of the microspheres can be controlled by adjusting the speed and the diameter of the channel. Smaller microspheres resulted from narrower channel widths and higher flow rates, which could improve drug delivery efficiency, and also from lower interfacial surface tension. The viscosity and density of the polymer emulsion significantly affected the size of the microspheres, with higher viscosities and densities producing smaller microspheres. The loading and drug release properties of the microspheres created with the microfluidic technique were also predicted. The results showed that the microspheres can efficiently encapsulate drugs and release them in a controlled manner over a period of time. This is due to the high surface-area-to-volume ratio of the microspheres, which allows for efficient drug diffusion. The ability to tune the manufacturing process using factors such as speed, density, viscosity, channel diameter, and surface tension offers a potential opportunity to design drug delivery systems with greater efficiency and fewer side effects. Keywords: polymer emulsion, microspheres, numerical simulation, microfluidic device
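A quick worked example of the capillary and Weber numbers mentioned above is given below; the fluid properties, channel diameter and flow rate are illustrative assumptions rather than the simulated emulsion properties.

```python
# Worked example (assumed property values): the two dimensionless groups governing
# droplet/microsphere formation in a microchannel.
import math

mu = 0.05        # continuous-phase dynamic viscosity [Pa s]
rho = 1000.0     # density [kg/m^3]
sigma = 0.03     # interfacial tension [N/m]
d = 200e-6       # channel diameter [m]
Q = 5e-9         # flow rate: 0.3 mL/min expressed in m^3/s

A = math.pi * d**2 / 4
U = Q / A                              # mean velocity in the channel
Ca = mu * U / sigma                    # capillary number: viscous vs interfacial forces
We = rho * U**2 * d / sigma            # Weber number: inertia vs interfacial forces
print(f"U = {U:.3f} m/s, Ca = {Ca:.3f}, We = {We:.3f}")
```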
Procedia PDF Downloads 63
280 Calculation of the Supersonic Air Intake with the Optimization of the Shock Wave System
Authors: Elena Vinogradova, Aleksei Pleshakov, Aleksei Yakovlev
Abstract:
During the flight of a supersonic aircraft under various conditions (altitude, Mach number, etc.), it becomes necessary to coordinate the operating modes of the air intake and the engine. On supersonic aircraft, this is done by changing various control factors (the angle of rotation of the wedge panels, etc.). This paper investigates the possibility of using modern optimization methods to determine the optimal position of the supersonic air intake wedge panels in order to maximize the total pressure recovery coefficient. Modern software allows us to conduct auto-optimization, which determines the optimal position of the control elements of the investigated product to achieve its maximum efficiency. In this work, the flow in a supersonic aircraft inlet has been investigated, and the operation of the flaps of the supersonic inlet has been optimized in a 2-D setting. This work has been done using ANSYS CFX software. The supersonic aircraft inlet is a flat, adjustable, external compression inlet. The braking surface is made in the form of a three-stage wedge. The IOSO NM software package was chosen for the optimization. The change in the position of the panels of the intake is carried out by changing the angle between the first and second steps of the three-stage wedge. The position of the rest of the panels is changed automatically. Within the framework of the presented work, the position of the moving air intake panel was optimized under fixed flight conditions of the aircraft and a certain engine operating mode. As a result of the numerical modeling, the distribution of total pressure losses was obtained for various cases of engine operation, depending on the incoming flow velocity and the flight altitude of the aircraft. The results make it possible to obtain the maximum total pressure recovery coefficient under the given conditions. Also, the initial geometry was set with a certain angle between the first and second wedge panels. Having performed all the calculations, as well as the subsequent optimization of the aircraft intake, it can be concluded that the initial angle was set sufficiently close to the optimal angle. Keywords: optimal angle, optimization, supersonic air intake, total pressure recovery coefficient
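To make the total pressure recovery coefficient concrete, the sketch below evaluates an idealised external-compression shock system (a sequence of oblique shocks from the wedge panels followed by a terminal normal shock) using classical gas-dynamic relations; the Mach number and ramp angles are illustrative assumptions, not the CFD model or the IOSO optimization itself. In an optimization loop, the ramp deflection angles would be the design variables and this recovery would be the objective.

```python
# Idealised external-compression intake (assumed angles): oblique shocks + terminal
# normal shock, evaluated with the theta-beta-M and normal-shock relations.
import math

GAMMA = 1.4

def oblique_beta(M, theta):
    """Weak-shock wave angle beta [rad] for deflection theta [rad]; assumes an attached shock."""
    def f(beta):
        return math.tan(theta) - 2.0 / math.tan(beta) * (M**2 * math.sin(beta)**2 - 1.0) / \
               (M**2 * (GAMMA + math.cos(2 * beta)) + 2.0)
    lo, hi = math.asin(1.0 / M) + 1e-6, math.radians(65.0)   # bracket of the weak branch
    for _ in range(80):                                      # simple bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def normal_shock(Mn):
    """Downstream normal Mach number and total-pressure ratio across a normal shock."""
    Mn2 = math.sqrt((1 + 0.5 * (GAMMA - 1) * Mn**2) / (GAMMA * Mn**2 - 0.5 * (GAMMA - 1)))
    p0r = ((GAMMA + 1) * Mn**2 / ((GAMMA - 1) * Mn**2 + 2))**(GAMMA / (GAMMA - 1)) * \
          ((GAMMA + 1) / (2 * GAMMA * Mn**2 - (GAMMA - 1)))**(1 / (GAMMA - 1))
    return Mn2, p0r

def intake_recovery(M_inf, ramp_deflections_deg):
    """Total pressure recovery of oblique shocks from each ramp plus a terminal normal shock."""
    M, recovery = M_inf, 1.0
    for theta_deg in ramp_deflections_deg:
        theta = math.radians(theta_deg)
        beta = oblique_beta(M, theta)
        Mn2, p0r = normal_shock(M * math.sin(beta))
        recovery *= p0r
        M = Mn2 / math.sin(beta - theta)       # Mach number behind the oblique shock
    _, p0r = normal_shock(M)                   # terminal normal shock
    return recovery * p0r

# Example: freestream Mach 3.0 with three 8-degree wedge deflections (illustrative only).
print(f"total pressure recovery = {intake_recovery(3.0, [8.0, 8.0, 8.0]):.3f}")
```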
Procedia PDF Downloads 241
279 Lake Bardawil Water Quality
Authors: Mohamed Elkashouty, Mohamed Elkammar, Mohamed Gomma, Menal Elminiami
Abstract:
Lake Bardawil is considered one of the major morphological features of northern Sinai. It represents the largest fish-production lake for export in Egypt. Nineteen and thirty-one samples were collected from the lake water during winter and summer (2005), respectively. TDS, cation, anion, Cd, Cu, Fe, Mn, Zn, Ni, Co and Pb concentrations were measured in the winter and summer seasons. During summer, in the eastern sector of the lake, the TDS concentration decreases towards the northeastern part (38000 ppm), which is attributed to dilution by seawater through Boughaz II. The TDS concentration generally increases in the central and southern parts of the lake (44000 and 42000 ppm, respectively). This is because these parts are far from the diluting seawater, form a disconnected water body of shallow depth (mean 2 m), and experience a high evaporation rate. In the western sector, the TDS content ranges from low (38000 ppm) in the northeastern part to high (50000 ppm) in the western part. Generally, the TDS concentration in the western sector is higher than that in the eastern sector; this is attributed to the smaller volume of the water body and the high evaporation rate, and therefore the increase in TDS content in the lake water. During the winter season, in the eastern sector, the wind velocity is high, which drives the water current into the lake through Boughaz I and II. The lake water is thus diluted by seawater and rainfall in the winter season. The TDS concentration increases towards the southern part of the lake (42000 ppm) and declines in the northern part (36000 ppm). The concentrations of Co, Ni, Pb, Fe, Cd, Zn, Cu and Mn in the lake water during the winter and summer seasons are low and are considered background concentrations with respect to seawater. Therefore, no industrial, agricultural or sanitary wastewaters are dumped into the lake. This confirms the statement that has been written at the entrance of Lake Bardawil at the El-Telool area: "Lake Bardawil, one of the purest lakes in the world". It indicates that Lake Bardawil is an excellent area for fish production for export (in its current state) and is the second main fish source in Egypt after the Mediterranean Sea, following the deterioration of Lake Manzala. Keywords: lake Bardawil, water quality, major ions, toxic metals
Procedia PDF Downloads 518
278 Proportional and Integral Controller-Based Direct Current Servo Motor Speed Characterization
Authors: Adel Salem Bahakeem, Ahmad Jamal, Mir Md. Maruf Morshed, Elwaleed Awad Khidir
Abstract:
Direct Current (DC) servo motors, or simply DC motors, play an important role in many industrial applications such as manufacturing of plastics, precise positioning of the equipment, and operating computer-controlled systems where speed of feed control, maintaining the position, and ensuring to have a constantly desired output is very critical. These parameters can be controlled with the help of control systems such as the Proportional Integral Derivative (PID) controller. The aim of the current work is to investigate the effects of Proportional (P) and Integral (I) controllers on the steady state and transient response of the DC motor. The controller gains are varied to observe their effects on the error, damping, and stability of the steady and transient motor response. The current investigation is conducted experimentally on a servo trainer CE 110 using analog PI controller CE 120 and theoretically using Simulink in MATLAB. Both experimental and theoretical work involves varying integral controller gain to obtain the response to a steady-state input, varying, individually, the proportional and integral controller gains to obtain the response to a step input function at a certain frequency, and theoretically obtaining the proportional and integral controller gains for desired values of damping ratio and response frequency. Results reveal that a proportional controller helps reduce the steady-state and transient error between the input signal and output response and makes the system more stable. In addition, it also speeds up the response of the system. On the other hand, the integral controller eliminates the error but tends to make the system unstable with induced oscillations and slow response to eliminate the error. From the current work, it is desired to achieve a stable response of the servo motor in terms of its angular velocity subjected to steady-state and transient input signals by utilizing the strengths of both P and I controllers.Keywords: DC servo motor, proportional controller, integral controller, controller gain optimization, Simulink
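A minimal simulation sketch of the behaviour summarised above (P-only control leaving a steady-state error, PI removing it at the cost of possible oscillation) is given below; the first-order motor model and gains are assumed values, not the CE 110/CE 120 trainer parameters.

```python
# Minimal sketch, assuming a first-order motor model (gain K, time constant tau):
# step response of the speed loop under P-only and PI control.
import numpy as np

def simulate(Kp, Ki, K=2.0, tau=0.5, setpoint=100.0, t_end=5.0, dt=1e-3):
    n = int(t_end / dt)
    omega, integral = 0.0, 0.0
    history = np.empty(n)
    for k in range(n):
        error = setpoint - omega
        integral += error * dt
        u = Kp * error + Ki * integral            # PI control law
        omega += dt * (K * u - omega) / tau       # first-order motor response (explicit Euler)
        history[k] = omega
    return history

for Kp, Ki in [(0.5, 0.0), (0.5, 2.0), (2.0, 2.0)]:
    w = simulate(Kp, Ki)
    print(f"Kp={Kp}, Ki={Ki}: final speed={w[-1]:.1f}, overshoot={max(w.max() - 100.0, 0):.1f}")
```

With Ki = 0 the loop settles with a residual error; adding the integral term drives the error to zero, and increasing Kp damps the resulting oscillation, mirroring the trends reported in the abstract.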
Procedia PDF Downloads 108
277 Optical Flow Technique for Supersonic Jet Measurements
Authors: Haoxiang Desmond Lim, Jie Wu, Tze How Daniel New, Shengxian Shi
Abstract:
This paper outlines the development of a novel experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding particle problems frequently associated with particle-image velocimetry (PIV) techniques at high Mach numbers. Based on optical flow algorithms, the idea behind the technique involves using high-speed cameras to capture Schlieren images of the supersonic jet shear layers, before they are subjected to an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point-measurement or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical-flow-based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed, with the supersonic jet operated in cold mode at a stagnation pressure of 8.2 bar and an exit velocity of Mach 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As the implementation of optical flow techniques for supersonic flows remains rare, the current focus revolves around methodology validation through synthetic images. The results of the validation tests offer valuable insight into how the optical flow algorithm can be further improved for robustness and accuracy. Details of the methodology employed and challenges faced will be further elaborated in the final conference paper should the abstract be accepted. Despite these challenges, however, this novel supersonic flow measurement technique may potentially offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer. Keywords: Schlieren, optical flow, supersonic jets, shock shear layer
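For reference, the classical Horn-Schunck update that the adapted algorithm builds on can be sketched in a few lines of Python; the synthetic image pair below is a placeholder for successive Schlieren frames, and the smoothness weight and iteration count are assumed values.

```python
# Classical Horn-Schunck optical flow (a sketch of the baseline scheme, not the
# adapted algorithm of the paper). Frames are synthetic placeholders.
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=5.0, n_iter=300):
    """Estimate the optical flow (u, v) between two frames with the Horn-Schunck iteration."""
    im1, im2 = im1.astype(float), im2.astype(float)
    kx = np.array([[-1.0, 1.0], [-1.0, 1.0]]) * 0.25       # spatial derivative kernels
    ky = np.array([[-1.0, -1.0], [1.0, 1.0]]) * 0.25
    kt = np.ones((2, 2)) * 0.25                            # temporal derivative kernel
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):                                # Jacobi-style iterative update
        u_bar, v_bar = convolve(u, avg), convolve(v, avg)
        t = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u, v = u_bar - Ix * t, v_bar - Iy * t
    return u, v

# Synthetic test pair: a smooth blob shifted by one pixel in x (stand-in for Schlieren frames).
yy, xx = np.mgrid[0:64, 0:64]
im1 = 255.0 * np.exp(-((xx - 30)**2 + (yy - 32)**2) / 50.0)
im2 = 255.0 * np.exp(-((xx - 31)**2 + (yy - 32)**2) / 50.0)
u, v = horn_schunck(im1, im2)
mask = im1 > 25.0                                          # evaluate only where the blob has signal
print(f"mean flow in blob region: u={u[mask].mean():.2f}, v={v[mask].mean():.2f}")  # u should be positive
```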
Procedia PDF Downloads 311
276 The Observable Method for the Regularization of Shock-Interface Interactions
Authors: Teng Li, Kamran Mohseni
Abstract:
This paper presents an inviscid regularization technique that is capable of regularizing the shocks and sharp interfaces simultaneously in the shock-interface interaction simulations. The direct numerical simulation of flows involving shocks has been investigated for many years and a lot of numerical methods were developed to capture the shocks. However, most of these methods rely on the numerical dissipation to regularize the shocks. Moreover, in high Reynolds number flows, the nonlinear terms in hyperbolic Partial Differential Equations (PDE) dominates, constantly generating small scale features. This makes direct numerical simulation of shocks even harder. The same difficulty happens in two-phase flow with sharp interfaces where the nonlinear terms in the governing equations keep sharpening the interfaces to discontinuities. The main idea of the proposed technique is to average out the small scales that is below the resolution (observable scale) of the computational grid by filtering the convective velocity in the nonlinear terms in the governing PDE. This technique is named “observable method” and it results in a set of hyperbolic equations called observable equations, namely, observable Navier-Stokes or Euler equations. The observable method has been applied to the flow simulations involving shocks, turbulence, and two-phase flows, and the results are promising. In the current paper, the observable method is examined on the performance of regularizing shocks and interfaces at the same time in shock-interface interaction problems. Bubble-shock interactions and Richtmyer-Meshkov instability are particularly chosen to be studied. Observable Euler equations will be numerically solved with pseudo-spectral discretization in space and third order Total Variation Diminishing (TVD) Runge Kutta method in time. Results are presented and compared with existing publications. The interface acceleration and deformation and shock reflection are particularly examined.Keywords: compressible flow simulation, inviscid regularization, Richtmyer-Meshkov instability, shock-bubble interactions.
Procedia PDF Downloads 348
275 Improving Fluid Catalytic Cracking Unit Performance through Low Cost Debottlenecking
Authors: Saidulu Gadari, Manoj Kumar Yadav, V. K. Satheesh, Debasis Bhattacharyya, S. S. V. Ramakumar, Subhajit Sarkar
Abstract:
Most Fluid Catalytic Cracking Units (FCCUs) are big profit makers and hence are always operated with several constraints. The FCCU is the primary source of gasoline, light olefins as petrochemical feedstocks, feedstock for alkylate and oxygenates, LPG, etc. in a refinery. Increasing unit capacity and improving product yields as well as qualities such as gasoline RON have a dramatic impact on refinery economics. FCCUs are often debottlenecked significantly beyond their original design capacities. Depending upon the unit configuration, operating conditions, and feedstock quality, the FCC unit can have a variety of bottlenecks. While some debottlenecking measures are aimed at increasing the feed rate, improving the conversion, etc., others are aimed at improving the reliability of the equipment or the overall unit. Apart from investment cost, the other factors generally considered while evaluating debottlenecking options are shutdown days, faster payback, risk on investment, etc. Low-cost solutions such as the replacement of feed injectors, the air distributor, steam distributors, the spent catalyst distributor, an efficient cyclone system, etc. are the preferred way of upgrading an FCCU. They also have a lower lead time from idea inception to implementation. This paper discusses various bottlenecks generally encountered in FCCUs and presents a case study on the improvement of performance of one of the FCCUs in IndianOil through the implementation of a cost-effective technical solution, including the use of improved internals in the Reactor-Regenerator (R-R) section. After implementation, a reduction of about 10% in regenerator air, regenerator gas superficial velocity and cyclone velocities, and an improvement in CLO yield from 10 to 6 wt%, have been achieved. By ensuring proper pressure balance and optimum immersion of the cyclone dipleg in the standpipe, the frequent formation of perforations in the regenerator cyclones could be addressed, which in turn improved the unit on-stream factor. Keywords: FCC, low-cost, revamp, debottleneck, internals, distributors, cyclone, dipleg
Procedia PDF Downloads 214
274 Application of Hydrologic Engineering Centers and River Analysis System Model for Hydrodynamic Analysis of Arial Khan River
Authors: Najeeb Hassan, Mahmudur Rahman
Abstract:
The Arial Khan River is one of the main south-eastward outlets of the River Padma. This river maintains a meandering channel through its course and is erosional in nature. The specific objectives of the research are to study and evaluate the hydrological characteristics in the form of assessing changes in cross-sections, discharge, water level and velocity profiles at different stations and to create a hydrodynamic model of the Arial Khan River. The necessary data have been collected from the Bangladesh Water Development Board (BWDB) and the Center for Environment and Geographic Information Services (CEGIS). Satellite images have been observed from Google Earth. In this study, a hydrodynamic model of the Arial Khan River has been developed using the well-known steady open channel flow code Hydrologic Engineering Center's River Analysis System (HEC-RAS) with field-surveyed geometric data. Cross-section properties at 22 locations of the River Arial Khan for the years 2011, 2013 and 2015 were also analysed. A 1-D HEC-RAS model has been developed using the cross-sectional data of 2015, and appropriate boundary conditions are used to run the model. This Arial Khan River model is calibrated using the peak discharge of 2015. The applicable value of Manning's roughness coefficient (n) is adjusted through the process of calibration. The value of n for which the computed water level matches the observed data to an acceptable accuracy is taken as the calibrated model. The 1-D HEC-RAS model is then validated using the peak discharges from 2009-2018. The variation between the water level in the model and the collected water level data is compared to validate the model. It is observed that, due to seasonal variation, the discharge of the river changes rapidly, and Manning's roughness coefficient (n) also changes due to vegetation growth along the river banks. This river model may act as a tool to measure flood areas in the future. Considering the past peak flow discharges, it is strongly recommended to improve the carrying capacity of the Arial Khan River to protect the surrounding areas from flash floods. Keywords: BWDB, CEGIS, HEC-RAS
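The calibration idea described above can be illustrated with a back-of-the-envelope sketch: adjust Manning's n until the water level computed from Manning's equation matches an observed level at the peak discharge. The rectangular cross-section, slope and "observed" values below are placeholders, not the surveyed Arial Khan geometry used in HEC-RAS.

```python
# Back-of-the-envelope Manning's n calibration (placeholder channel and data):
# find the n whose normal depth best matches an observed water level at peak flow.
def manning_discharge(n, depth, width=250.0, slope=6e-5):
    area = width * depth
    radius = area / (width + 2.0 * depth)        # hydraulic radius of a rectangular section
    return area * radius ** (2.0 / 3.0) * slope ** 0.5 / n

def normal_depth(n, Q, lo=0.1, hi=30.0):
    for _ in range(60):                          # bisection: discharge grows with depth
        mid = 0.5 * (lo + hi)
        if manning_discharge(n, mid) < Q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Q_peak, observed_depth = 2800.0, 8.2             # m^3/s and m ("observed" placeholders)
best_n = min((n / 1000.0 for n in range(20, 46)),
             key=lambda n: abs(normal_depth(n, Q_peak) - observed_depth))
print(f"calibrated n ~ {best_n:.3f}, modelled depth = {normal_depth(best_n, Q_peak):.2f} m")
```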
Procedia PDF Downloads 180
273 Spatial Heterogeneity of Urban Land Use in the Yangtze River Economic Belt Based on DMSP/OLS Data
Authors: Liang Zhou, Qinke Sun
Abstract:
Taking the Yangtze River Economic Belt as an example and using long-term nighttime light data from DMSP/OLS from 1992 to 2012, support vector machine (SVM) classification was used to quantitatively extract the urban built-up areas of the economic belt, and spatial analysis models such as the expansion intensity index and the standard deviation ellipse were introduced. These models provide a detailed and in-depth discussion of the strength, direction, and type of expansion in the middle and lower reaches of the economic belt and its key node cities. The results show that: (1) From 1992 to 2012, the built-up areas of the major cities in the Yangtze River valley showed a rapid expansion trend. The built-up area expanded by 60392 km², with an average annual expansion rate of 31%, that is, from 9615 km² in 1992 to 70007 km² in 2012. The spatial gradient analysis of the watershed shows that the expansion of urban built-up areas in the basin is led by Shanghai in a 'bottom-up' pattern, declining in the order upstream-downstream-midstream: the average annual expansion rates are 36% and 35% for the upstream and downstream, respectively, while the midstream rate of 17% is about 50% of the upstream and downstream rates. (2) The analysis of expansion intensity shows that the urban expansion intensity in the Yangtze River Basin has generally shown an upward trend; the downstream region has continued to rise, while the upper and middle reaches have experienced fluctuations of different amplitudes. Further analysis of the expansion intensity of key node cities shows that Chengdu, Chongqing, and Wuhan in the upper and middle reaches maintain a high degree of consistency with the intensity of regional expansion, while node cities with Shanghai as the core downstream continue to maintain a high level of expansion. (3) The standard deviation ellipse analysis shows that the overall urban center of gravity of the Yangtze River basin is located in Anqing City, Anhui Province, and it exhibited a reciprocating movement from 1992 to 2012. The distribution range of the nighttime-light standard deviation ellipse increased from 61.96 km² to 76.52 km². The growth of the major axis of the ellipse was significantly larger than that of the minor axis, showing an obvious east-west axiality, with the nighttime lights of the downstream area occupying the leading position in the urban system across the entire luminosity scale. Keywords: urban space, support vector machine, spatial characteristics, night lights, Yangtze River Economic Belt
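A short sketch of the standard deviational ellipse computation referred to above (mean centre plus principal axes of the coordinate covariance) is given below; the point coordinates are random placeholders rather than the extracted DMSP/OLS built-up pixels, and the eigen-decomposition form is one common way to compute the ellipse.

```python
# Standard deviational ellipse from point coordinates (placeholder data): mean centre,
# semi-axis lengths and orientation from the covariance of the coordinates.
import numpy as np

def std_dev_ellipse(x, y, weights=None):
    """Return mean centre, semi-axes and orientation (deg) of the standard deviational ellipse."""
    w = np.ones_like(x) if weights is None else np.asarray(weights, float)
    cx, cy = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.cov(np.vstack([x - cx, y - cy]), aweights=w)   # 2x2 coordinate covariance
    eigval, eigvec = np.linalg.eigh(cov)                    # eigenvalues in ascending order
    major, minor = np.sqrt(eigval[1]), np.sqrt(eigval[0])   # semi-axis lengths
    vx, vy = eigvec[:, 1]                                   # major-axis direction
    angle = np.degrees(np.arctan2(vy, vx))
    return (cx, cy), major, minor, angle

rng = np.random.default_rng(2)
x = rng.normal(0.0, 30.0, 2000)    # placeholder built-up "pixels": wider east-west spread ...
y = rng.normal(0.0, 10.0, 2000)    # ... than north-south, echoing the reported axiality
(cx, cy), a, b, theta = std_dev_ellipse(x, y)
print(f"centre=({cx:.1f}, {cy:.1f})  major={a:.1f}  minor={b:.1f}  orientation={theta:.1f} deg")
```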
Procedia PDF Downloads 113272 Bimetallic MOFs Based Membrane for the Removal of Heavy Metal Ions from the Industrial Wastewater
Authors: Muhammad Umar Mushtaq, Muhammad Bilal Khan Niazi, Nouman Ahmad, Dooa Arif
Abstract:
Apart from organic dyes, heavy metals such as Pb, Ni, Cr, and Cu are present in textile effluent and pose a threat to humans and the environment. Many studies on removing heavy metal ions from textile wastewater using metal-organic frameworks (MOFs) have been conducted in recent decades. In this study, a new polyether sulfone ultrafiltration membrane modified with Cu/Co- and Cu/Zn-based bimetallic metal-organic frameworks (MOFs) was produced. Phase inversion was used to produce the membrane, and atomic force microscopy (AFM) and scanning electron microscopy (SEM) were used to characterize it; these techniques make the complex structure of the bimetallic MOF-based membrane comprehensible. The bimetallic MOF-based filtration membranes are designed to selectively adsorb specific contaminants while allowing the passage of water molecules, improving ultrafiltration efficiency. The adsorption capacity and selectivity of the MOFs are enhanced by functionalizing them with particular chemical groups or incorporating them into composite membranes with other materials, such as polymers. The morphology and performance of the bimetallic MOF-based membrane were investigated in terms of pure water flux and metal ion rejection. The advantages of the developed bimetallic MOF-based membranes for wastewater treatment include an enhanced adsorption capacity: the presence of two metals in the framework provides additional binding sites for contaminants, leading to more efficient removal of pollutants from wastewater. Based on the experimental findings, the bimetallic MOF-based membranes reject metal ions from industrial wastewater more effectively than the conventional membranes developed previously. Furthermore, the operational parameters, including pressure gradients and velocity profiles, are simulated using ANSYS Fluent software, and the simulation results for these operating parameters are in close agreement with the experimental results.Keywords: bimetallic MOFs, heavy metal ions, industrial wastewater treatment, ultrafiltration
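The two performance metrics named above, pure water flux and metal ion rejection, follow textbook definitions; the sketch below applies them with purely illustrative numbers (the membrane area, permeate volume, and ion concentrations are assumptions, not the study's data).

    # A minimal sketch of the two reported metrics, with illustrative numbers only.

    def pure_water_flux(permeate_volume_l, area_m2, time_h):
        """Pure water flux J = V / (A * t), in L m^-2 h^-1 (LMH)."""
        return permeate_volume_l / (area_m2 * time_h)

    def rejection_percent(feed_conc_mg_l, permeate_conc_mg_l):
        """Metal-ion rejection R = (1 - Cp/Cf) * 100."""
        return (1.0 - permeate_conc_mg_l / feed_conc_mg_l) * 100.0

    print("pure water flux (LMH):", pure_water_flux(permeate_volume_l=0.35, area_m2=0.0014, time_h=1.0))
    print("Pb(II) rejection (%):", rejection_percent(feed_conc_mg_l=50.0, permeate_conc_mg_l=4.0))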
Procedia PDF Downloads 88271 Wireless FPGA-Based Motion Controller Design by Implementing 3-Axis Linear Trajectory
Authors: Kiana Zeighami, Morteza Ozlati Moghadam
Abstract:
Designing a high-accuracy and high-precision motion controller is one of the important issues in today's industry. Effective solutions are available in the industry, but the real-time performance, smoothness, and accuracy of the movement can be further improved. This paper discusses a complete solution for carrying out the movement of three stepper motors in three dimensions. The objective is to provide a method for designing a fully integrated System-on-Chip (SoC)-based motion controller that reduces the cost and complexity of production by incorporating a Field Programmable Gate Array (FPGA) into the design. In the proposed method, the FPGA receives its commands from a host computer via a wireless Internet connection and calculates the motion trajectory for the three axes. A profile generator module is designed to realize the interpolation algorithm by translating position data into real-time pulses. This paper discusses an approach to implementing the linear interpolation algorithm, since it is one of the fundamentals of robot movement and is highly applicable in the motion control industry. Along with the full trajectory profile, a triangular profile is implemented to eliminate errors over small distances. To combine the parallelism and real-time performance of the FPGA with the power of a Central Processing Unit (CPU) in executing complex and sequential algorithms, a NIOS II soft-core processor was added to the design. The controller provides different operating modes, such as absolute positioning, relative positioning, reset, and velocity modes, to fulfill user requirements. The proposed approach was evaluated by designing a custom-made FPGA board along with a mechanical structure. As a result, precise and smooth movement of the stepper motors was observed, which proved the effectiveness of this approach.Keywords: 3-axis linear interpolation, FPGA, motion controller, micro-stepping
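As a hedged software illustration of the linear interpolation that the profile generator performs in FPGA hardware, the sketch below uses a DDA-style step generator for three axes: the dominant axis is stepped every tick while the other axes accumulate fractional error and pulse only when the error crosses one full step. It is an algorithmic sketch only, not the authors' HDL, and the target move is an assumed example.

    # A minimal DDA-style sketch of 3-axis linear interpolation. Step counts are in
    # motor steps; the target move below is illustrative.

    def linear_interpolate_3axis(dx, dy, dz):
        """Yield per-tick (step_x, step_y, step_z) pulses tracing a straight line to (dx, dy, dz)."""
        ticks = max(abs(dx), abs(dy), abs(dz))
        error = [0, 0, 0]
        for _ in range(ticks):
            pulse = [0, 0, 0]
            for axis, delta in enumerate((dx, dy, dz)):
                error[axis] += abs(delta)
                if error[axis] >= ticks:        # accumulated fraction crosses one full step
                    error[axis] -= ticks
                    pulse[axis] = 1 if delta > 0 else -1
            yield tuple(pulse)

    position = [0, 0, 0]
    for px, py, pz in linear_interpolate_3axis(10, 4, -7):
        position[0] += px; position[1] += py; position[2] += pz
    print(position)                             # [10, 4, -7]: the commanded end point is hit exactly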
Procedia PDF Downloads 207270 Analyzing Temperature and Pressure Performance of a Natural Air-Circulation System
Authors: Emma S. Bowers
Abstract:
Perturbations in global environments and temperatures have heightened the urgency of creating cost-efficient, energy-neutral building techniques. Structural responses to this thermal crisis have included designs (including those of the building standard PassivHaus) that rely on airtightness, window placement, insulation, solar orientation, shading, and heat-exchange ventilators as potential solutions or interventions. One of the major obstacles facing these enhanced building methods is the limited predictability of how cooled air circulates through the ambient temperature gradients of a structure. A diverse range of air-cooling devices based on various technologies is in use around the world, and many of them worsen the problem of climate change by consuming energy. Using the natural ventilation principles of air buoyancy and density to circulate fresh air throughout a building with no energy input can overcome these obstacles. A unique prototype of an energy-neutral air-circulation system was constructed in order to investigate the temperature and pressure gradients related to the stack effect (the updraft of air through a building due to differences in air pressure). The stack-effect principle holds that since warmer air rises, it leaves behind an area of low pressure that cooler air rushes in to fill. The result is that warmer air is expelled from the top of the building as cooler air is drawn through the bottom, creating an updraft. The stack effect can be amplified by cooling the air near the bottom of a building and heating the air near the top. An insulated building module was constructed from readily available, mostly recyclable or biodegradable materials. A three-part construction model was utilized: a subterranean earth-tube heat exchanger made of PVC pipe and placed in a horizontally oriented trench, an insulated, airtight cube above ground to represent a building, and a solar chimney (painted black to heat the outgoing air). Pressure and temperature sensors were placed at four different heights within the module as well as outside, and data were collected for a period of 21 days. The air pressures and temperatures recorded over the course of the experiment were compared and averaged. The promise of this design is that it directly addresses the obstacles of airflow and expense: the physical principle of the stack effect draws a continuous supply of fresh air through the structure using low-cost, readily available materials and zero manufactured energy. This design serves as a model for temperature-controlled buildings that use zero energy and opens the door for future research into the effects of increasing the module scale, increasing the length and depth of the earth tube, and shading the building. (The model can be provided.)Keywords: air circulation, PassivHaus, stack effect, thermal gradient
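A first-order estimate of the driving pressure behind this updraft follows from the common stack-effect approximation ΔP ≈ ρ_out·g·h·(T_in − T_out)/T_in, with temperatures in kelvin; the sketch below applies it with an assumed stack height and example temperatures, since the prototype's actual dimensions and sensor readings are not reproduced here.

    # A minimal sketch, assuming a 3 m stack height and example indoor/outdoor
    # temperatures; not the prototype's measured data.

    G = 9.81                                    # gravitational acceleration, m/s^2

    def air_density(temp_k, pressure_pa=101325.0, r_air=287.05):
        """Ideal-gas density of dry air, kg/m^3."""
        return pressure_pa / (r_air * temp_k)

    def stack_pressure(height_m, t_inside_c, t_outside_c):
        """Buoyancy (stack) pressure difference in Pa: rho_out * g * h * (T_in - T_out) / T_in."""
        t_in = t_inside_c + 273.15
        t_out = t_outside_c + 273.15
        return air_density(t_out) * G * height_m * (t_in - t_out) / t_in

    print(f"stack pressure: {stack_pressure(3.0, 35.0, 22.0):.2f} Pa")   # ~1.5 Pa for this example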
Procedia PDF Downloads 153269 Design and Analysis for a 4-Stage Crash Energy Management System for Railway Vehicles
Authors: Ziwen Fang, Jianran Wang, Hongtao Liu, Weiguo Kong, Kefei Wang, Qi Luo, Haifeng Hong
Abstract:
A 4-stage crash energy management (CEM) system for subway rail vehicles operated by the Massachusetts Bay Transportation Authority (MBTA) in the USA is developed in this paper. The four stages of this new CEM system are 1) an energy-absorbing coupler (draft gear and shear bolts), 2) primary energy absorbers (aluminum honeycomb box structures), 3) secondary energy absorbers (crush tubes), and 4) the collision post and corner post. A sliding anti-climber and a fixed anti-climber are designed at the front of the vehicle to work with the 4-stage CEM, maximizing the energy absorbed and minimizing injury to passengers and crew. To investigate the effectiveness of this CEM system, both finite element (FE) analysis and a crashworthiness test have been employed. The whole vehicle consists of three married pairs, i.e., six cars. In the FE approach, full-scale railway car models are developed and different collision cases are investigated, such as a single moving car impacting a rigid wall, two moving cars into a rigid wall, two moving cars into two stationary cars, and six moving cars into six stationary cars. The FE analysis results show that a railway vehicle incorporating this CEM system has superior crashworthiness performance. In the crashworthiness test, a simplified vehicle front end including the sliding anti-climber, the fixed anti-climber, the primary energy absorbers, the secondary energy absorber, the collision post, and the corner post is built and impacted against a rigid wall. The same test configuration is also analyzed with FE, and results such as the crushing force, the stress and strain of critical components, and the acceleration and velocity curves are compared and studied. The FE results show very good agreement with the test results.Keywords: railway vehicle collision, crash energy management design, finite element method, crashworthiness test
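The energy absorbed by each CEM stage is, in effect, the area under its crush force-displacement curve. The sketch below integrates such a curve with the trapezoidal rule; the force-displacement values are illustrative placeholders, not MBTA test data.

    # A minimal sketch: the energy a stage absorbs is the integral of crush force
    # over displacement, approximated here with the trapezoidal rule.

    def absorbed_energy(displacement_m, force_n):
        """Trapezoidal integral of force over displacement, in joules."""
        energy = 0.0
        for i in range(1, len(displacement_m)):
            dx = displacement_m[i] - displacement_m[i - 1]
            energy += 0.5 * (force_n[i] + force_n[i - 1]) * dx
        return energy

    # Illustrative primary-absorber crush: force ramps up, then crushes at a near-constant plateau.
    x = [0.00, 0.05, 0.10, 0.20, 0.30, 0.40]            # m
    f = [0.0, 8.0e5, 1.2e6, 1.1e6, 1.15e6, 1.1e6]       # N
    print(f"absorbed energy: {absorbed_energy(x, f) / 1e3:.0f} kJ")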
Procedia PDF Downloads 401268 Numerical Modelling of Hydrodynamic Drag and Supercavitation Parameters for Supercavitating Torpedoes
Authors: Sezer Kefeli, Sertaç Arslan
Abstract:
In this paper, supercavitation phenomena and parameters are explained, and hydrodynamic design approaches for supercavitating torpedoes are investigated. In addition, drag force calculation methods for supercavitating vehicles are presented. Conventional heavyweight torpedoes reach up to ~50 knots with classic hydrodynamic techniques; supercavitating torpedoes, on the other hand, may theoretically reach up to ~200 knots. To reach such high speeds, however, hydrodynamic viscous forces have to be reduced or eliminated completely. This necessity has revived the supercavitation phenomenon, which is now applied to conventional torpedoes. Supercavitation is a type of cavitation that is more stable and continuous than other cavitation regimes. The general principle of supercavitation is to separate the underwater vehicle from the water phase by surrounding the vehicle with cavitation bubbles. This allows the torpedo to operate at high speed through the water inside a fully developed cavity. Conventional torpedoes are termed supercavitating torpedoes when the torpedo moves in a cavity envelope created by a cavitator in the nose section and a solid-fuel rocket engine in the rear section. There are two types of supercavitation: natural and artificial cavitation. In this study, natural cavitation on disk cavitators is investigated using numerical methods. Once the supercavitation characteristics and drag reduction of natural cavitation are studied on a CFD platform, the results are verified against empirical equations. The supercavitation parameters investigated and compared with empirical results are the cavitation number (σ), the pressure distribution along the axial axis, the drag coefficient (C_d) and drag force (D), the cavity wall velocity (U_c), and the dimensionless cavity shape parameters, namely the cavity length (L_c/d_c), the cavity diameter (d_m/d_c), and the cavity fineness ratio (L_c/d_m). This paper has the character of a feasibility study that carries out numerical solutions of the supercavitation phenomena and compares them with empirical equations.Keywords: CFD, cavity envelope, high speed underwater vehicles, supercavitating flows, supercavitation, drag reduction, supercavitation parameters
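The basic non-dimensional quantities listed above can be estimated from textbook first-order relations; the sketch below computes the cavitation number, a commonly quoted disk-cavitator drag law C_d ≈ C_d0(1 + σ) with C_d0 ≈ 0.82, the corresponding drag force, and a rough maximum-cavity-diameter estimate d_m/d_c ≈ √(C_d/σ). These relations and the operating-point numbers are assumptions for illustration, not the paper's CFD results.

    # A minimal sketch, assuming the commonly quoted disk-cavitator relations
    # C_d = C_d0 * (1 + sigma) with C_d0 ~ 0.82 and d_m/d_c ~ sqrt(C_d / sigma);
    # depth, speed, and cavitator size are illustrative, not the paper's cases.

    import math

    RHO_WATER = 1000.0                          # kg/m^3

    def cavitation_number(p_inf, p_cavity, speed):
        """sigma = (p_inf - p_c) / (0.5 * rho * U^2)."""
        return (p_inf - p_cavity) / (0.5 * RHO_WATER * speed ** 2)

    def disk_drag_coefficient(sigma, cd0=0.82):
        """Approximate drag coefficient of a disk cavitator."""
        return cd0 * (1.0 + sigma)

    def drag_force(sigma, speed, cavitator_diameter):
        """D = 0.5 * rho * U^2 * A_c * C_d, with A_c the cavitator frontal area."""
        area = math.pi * cavitator_diameter ** 2 / 4.0
        return 0.5 * RHO_WATER * speed ** 2 * area * disk_drag_coefficient(sigma)

    speed = 100.0 * 0.5144                      # ~100 knots in m/s
    sigma = cavitation_number(p_inf=201325.0, p_cavity=2339.0, speed=speed)   # ~10 m depth, 20 C vapour pressure
    print(f"sigma   = {sigma:.3f}")
    print(f"C_d     = {disk_drag_coefficient(sigma):.3f}")
    print(f"drag    = {drag_force(sigma, speed, cavitator_diameter=0.05) / 1e3:.1f} kN")
    print(f"d_m/d_c ~ {math.sqrt(disk_drag_coefficient(sigma) / sigma):.1f}")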
Procedia PDF Downloads 171267 Microbial Fuel Cells and Their Applications in Electricity Generating and Wastewater Treatment
Authors: Shima Fasahat
Abstract:
This is an experimental study of microbial fuel cells (MFCs) for electricity generation and wastewater treatment. Finding new, clean, and sustainable ways of supplying energy is now very important, and for this reason many researchers around the world are studying new and sustainable energy sources. There are different ways to produce such energy: solar cells, wind turbines, geothermal energy, fuel cells, and many others. Fuel cells come in different types, one of which is the microbial fuel cell. In this research, an MFC was built in order to study how it can be used for electricity generation and wastewater treatment. The microbial fuel cell used in this research is a reactor with two tanks containing a catalyst solution; the underlying chemical reaction is a redox reaction. The cell is a two-chamber MFC: the anode chamber is anaerobic (an anaerobic baffled reactor, ABR) and the other chamber is the cathode chamber. The anode chamber contains stabilized sludge, which is the source of the microorganisms that carry out the redox reaction; the main microorganisms here are Propionibacterium and Clostridium. The electrodes of the anode chamber are graphite plates. The cathode chamber consists of graphite plate electrodes and catalysts such as O2, KMnO4, and C6N6FeK4. The membrane separating the chambers is Nafion 117; the reason for choosing this membrane is explained in the complete paper. It was found that using electron-acceptor compounds such as O2, KMnO4, and C6N6FeK4 speeds up electron uptake, so a higher current is reached in less time, and that the best compounds for this purpose are those containing iron in their chemical formula. It is also important to pay attention to the amount of nutrients entering the bacterial chamber: adding excess nutrients can in some cases reverse the effect. With the ABR, the chemical oxygen demand decreases day by day until it reaches a stable value.Keywords: anaerobic baffled reactor, bioenergy, electrode, energy efficient, microbial fuel cell, renewable chemicals, sustainable
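Two figures of merit commonly reported for two-chamber MFCs, power density across an external resistor and COD-based coulombic efficiency, can be computed from standard definitions; the sketch below uses illustrative values only, since the abstract does not report the actual voltages, currents, or COD removal.

    # A minimal sketch using standard definitions; voltages, currents, volumes,
    # and COD removal are illustrative assumptions, not the study's data.

    F = 96485.0                                 # C per mol of electrons (Faraday constant)

    def power_density(cell_voltage_v, external_resistance_ohm, anode_area_m2):
        """Areal power density P/A = V^2 / (R_ext * A), in W/m^2."""
        return cell_voltage_v ** 2 / external_resistance_ohm / anode_area_m2

    def coulombic_efficiency(avg_current_a, time_s, anolyte_volume_l, delta_cod_g_per_l):
        """Fraction of the electrons in the removed COD harvested as current
        (8 g COD per mol of electrons, i.e. O2 at 32 g/mol over 4 e-)."""
        charge_harvested = avg_current_a * time_s
        charge_available = F * delta_cod_g_per_l * anolyte_volume_l / 8.0
        return charge_harvested / charge_available

    print("power density (W/m^2):", round(power_density(0.45, 100.0, 0.0025), 2))
    print("coulombic efficiency :", round(coulombic_efficiency(0.004, 5 * 24 * 3600, 0.5, 1.2), 2))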
Procedia PDF Downloads 226