Search results for: simulated microgravity
529 Investigating the Formation of Nano-Hydroxyapatite on a Biocompatible and Antibacterial Cu/Mg-Substituted Bioglass
Authors: Elhamalsadat Ghaffari, Moghan Amirhosseinian, Amir Khaleghipour
Abstract:
Multifunctional bioactive glasses (BGs) are designed with a focus on the provision of bactericidal and biological properties desired for angiogenesis, osteogenesis, and ultimately potential applications in bone tissue engineering. To achieve these, six sol-gel copper/magnesium-substituted derivatives of 58S-BG, i.e. a mol% series of 60SiO2-4P2O5-5CuO-(31-x)CaO/xMgO (where x = 0, 1, 3, 5, 8, and 10), were synthesized. Afterwards, the effects of MgO/CaO substitution on the in vitro formation of nano-hydroxyapatite (HA), osteoblast-like cell responses, and the antibacterial performance of the BGs were studied. During the BGs' synthesis, the elimination of nitrates was achieved at 700 °C, which prevented crystallization of the BGs and stabilized the obtained dried gels. The structural and morphological evaluations were performed with X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and scanning electron microscopy (SEM). These characterizations revealed that the Cu-substituted 58S-BG containing 5 mol% MgO (BG-5/5) had slightly retarded the formation of HA. In addition, the Cu-substituted 58S-BGs containing 8 mol% and 10 mol% MgO (BG-5/8 and BG-5/10) displayed lower bioactivity, probably due to the lower release rate of Ca–Si ions into the simulated body fluid (SBF). The determination of 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and alkaline phosphatase (ALP) activities proved that the highest values of both differentiation and proliferation of MC3T3-E1 cells can be obtained from the 5 mol% MgO-substituted BG, while over-addition of MgO (8 mol% and 10 mol%) decreased the bioactivity. Furthermore, these novel Cu/Mg-substituted 58S-BGs displayed an antibacterial effect against methicillin-resistant Staphylococcus aureus bacteria. Taken together, the results suggest the equally substituted BG-5/5 (i.e. the one consisting of 5 mol% of both CuO and MgO) as a promising candidate for bone tissue engineering, among all the newly designed BGs in this work, owing to its desirable cell proliferation, ALP activity, and antibacterial properties.
Keywords: apatite, bioactivity, biomedical applications, sol-gel processes
Procedia PDF Downloads 124
528 Bifunctional Electrospun Fibers Based on Poly(Lactic Acid)/Calcium Oxide Nanocomposites as a Potential Scaffold for Bone Tissue Engineering
Authors: Daniel Canales, Fabián Alvarez, Pablo Varela, Marcela Saavedra, Claudio García, Paula Zapata
Abstract:
Calcium oxide nanoparticles (n-CaO) of ca. 8 nm were obtained from eggshell waste. The n-CaO was incorporated into a poly(lactic acid) (PLA) matrix at 10 and 20 wt.% filler content by an electrospinning process to obtain PLA/n-CaO nanocomposite fibers for potential use in scaffolds for bone tissue regeneration. The fiber morphology and diameter were homogeneous; the PLA fibers had a diameter of 2.2 ± 0.8 µm and, with the incorporation of nanoparticles (20 wt.%), reached ca. 2.9 ± 0.9 µm. The PLA/n-CaO nanocomposite fibers showed in vitro bioactivity, being capable of inducing the precipitation of a hydroxyapatite (HA) layer on the fiber surface after 7 days in simulated body fluid (SBF). The biocidal and biological properties of PLA/n-CaO with 20 wt.% were evaluated, showing a 30% reduction in bacterial viability against S. aureus and 11% for E. coli after 6 hours of exposure of the bacterial suspensions. Furthermore, the fibers did not show a cytotoxic effect on the bone marrow ST-2 cell line, permitting cell adhesion and proliferation in Roswell Park Memorial Institute (RPMI) medium. The PLA/n-CaO with 20 wt.% of nanoparticles showed a higher capacity to promote osteogenic differentiation, significantly increasing the alkaline phosphatase (ALP) expression after 7 days compared to PLA and the cell control. The in vivo analysis corroborated the biocompatibility of the prepared scaffolds; the presence of n-CaO in PLA reduced the formation of fibrous encapsulation of the material and improved the healing process.
Keywords: electrospun scaffolds, PLA based nanocomposites, calcium oxide nanoparticles, bioactive materials, tissue engineering
Procedia PDF Downloads 92
527 Quantum Statistical Machine Learning and Quantum Time Series
Authors: Omar Alzeley, Sergey Utev
Abstract:
Minimizing a constrained multivariate function is fundamental to machine learning, and these algorithms are at the core of data mining and data visualization techniques. The decision function that maps input points to output points is based on the result of optimization. This optimization is central to learning theory. One approach to complex systems, where the dynamics of the system are inferred from a statistical analysis of the fluctuations in time of some associated observable, is time series analysis. The purpose of this paper is a mathematical transition from the autoregressive model of classical time series to the matrix formalization of quantum theory. Firstly, we propose a quantum time series (QTS) model. Although the Hamiltonian technique has become an established tool to detect deterministic chaos, other approaches emerge. The quantum probabilistic technique is used to motivate the construction of our QTS model. The QTS model resembles the quantum dynamic model which was applied to financial data. Secondly, various statistical methods, including machine learning algorithms such as the Kalman filter, are applied to estimate and analyse the unknown parameters of the model. Finally, simulation techniques such as Markov chain Monte Carlo have been used to support our investigations. The proposed model has been examined using real and simulated data. We establish the relation between quantum statistical machine learning and quantum time series via random matrix theory. It is interesting to note that the primary focus of the application of QTS in the field of quantum chaos was to find a model that explains chaotic behaviour. Perhaps this model will reveal another insight into quantum chaos.
Keywords: machine learning, simulation techniques, quantum probability, tensor product, time series
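As a rough illustration of the parameter-estimation step mentioned above, the sketch below runs a scalar Kalman filter over a simulated AR(1)-type series; the model structure, noise levels, and all numerical values are assumptions for illustration, not the authors' QTS formulation.

```python
import numpy as np

# Minimal sketch (assumed setup): a scalar Kalman filter tracking a simulated
# AR(1)-type series x_t = phi * x_{t-1} + w_t, observed with noise v_t.
rng = np.random.default_rng(0)
phi, q, r = 0.8, 0.1, 0.5          # assumed AR coefficient and noise variances
n = 200
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(scale=np.sqrt(q))
    y[t] = x[t] + rng.normal(scale=np.sqrt(r))

# Kalman filter recursion: predict, then update with the new observation.
x_hat, p = 0.0, 1.0
estimates = []
for t in range(n):
    x_pred = phi * x_hat                 # predicted state
    p_pred = phi * p * phi + q           # predicted error variance
    k = p_pred / (p_pred + r)            # Kalman gain
    x_hat = x_pred + k * (y[t] - x_pred)
    p = (1 - k) * p_pred
    estimates.append(x_hat)

print("RMSE of filtered estimate:", np.sqrt(np.mean((np.array(estimates) - x) ** 2)))
```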
Procedia PDF Downloads 467
526 Progressive Collapse of Cooling Towers
Authors: Esmaeil Asadzadeh, Mehtab Alam
Abstract:
Well-documented records of past structural failures reveal that the progressive collapse of structures is one of the major causes of dramatic human loss and economic consequences. Progressive collapse is the failure mechanism in which the structure fails gradually due to the sudden removal of structural elements. The sudden removal of some structural elements results in excessive redistributed loads on the others. This sudden removal may be caused by any sudden loading resulting from a local explosion, impact loading, or terrorist attack. Hyperbolic thin-walled concrete shell structures, being an important part of nuclear and thermal power plants, are always prone to such attacks. In concrete structures, the gradual failure would take place by the generation of initial cracks and their propagation in the supporting columns along with the tower shell, leading to the collapse of the entire structure. In this study, the mechanism of progressive collapse for such high-rise towers would be simulated employing the finite element method. The aim of this study would be to provide clear, conceptual, step-by-step descriptions of various procedures for progressive collapse analysis using commercially available finite element structural analysis software, with the aim that the explanations would be clear enough to be readily understandable and used by practicing engineers. The study would be carried out in the following procedures: 1. Provide explanations of modeling, simulation, and analysis procedures, including input screen snapshots; 2. Interpretation of the results and discussions; 3. Conclusions and recommendations.
Keywords: progressive collapse, cooling towers, finite element analysis, crack generation, reinforced concrete
Procedia PDF Downloads 479
525 Optimization of Multi Commodities Consumer Supply Chain: Part 1-Modelling
Authors: Zeinab Haji Abolhasani, Romeo Marian, Lee Luong
Abstract:
This paper and its companions (Part II, Part III) concentrate on optimizing a class of supply chain problems known as the Multi-Commodities Consumer Supply Chain (MCCSC) problem. The MCCSC problem belongs to the production-distribution (P-D) planning category. It aims to determine facilities location, consumers' allocation, and facilities configuration to minimize the total cost (CT) of the entire network. These facilities can be manufacturer units (MUs), distribution centres (DCs), and retailers/end-users (REs), but are not limited to them. To address this problem, three major tasks should be undertaken. First, a mixed-integer non-linear programming (MINP) mathematical model is developed. Then, the system's behavior under different conditions is observed using a simulation modeling tool. Finally, the optimal solution (minimum CT) of the system is obtained using a multi-objective optimization technique. Due to the large size of the problem and the uncertainties in finding the optimal solution, integration of modeling and simulation methodologies is proposed, followed by the development of a new approach known as GASG, a genetic algorithm based on granular simulation, which is the subject of the methodology of this research. In Part II, the MCCSC is simulated using a discrete-event simulation (DES) device within an integrated environment of SimEvents and Simulink of the MATLAB® software package, followed by a comprehensive case study to examine the given strategy. Also, the effect of genetic operators on the obtained optimal/near-optimal solution from the simulation model is discussed in Part III.
Keywords: supply chain, genetic algorithm, optimization, simulation, discrete event system
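As a hedged illustration of the genetic-algorithm search described above, the sketch below minimizes a toy "total cost" over a few continuous decision variables; the cost function, bounds, and GA parameters are assumptions for illustration and do not represent the GASG approach of this work.

```python
import numpy as np

# Minimal sketch (assumed problem): a basic genetic algorithm minimizing a toy
# total-cost function over facility-capacity-like decision variables.
rng = np.random.default_rng(1)

def total_cost(x):
    # Placeholder cost: quadratic "transport" term plus a demand-balance penalty.
    return np.sum((x - 3.0) ** 2) + 10.0 * abs(np.sum(x) - 12.0)

pop_size, n_vars, n_gen, bounds = 40, 5, 100, (0.0, 10.0)
pop = rng.uniform(*bounds, size=(pop_size, n_vars))

for gen in range(n_gen):
    fitness = np.array([total_cost(ind) for ind in pop])
    # Tournament selection between random pairs
    idx = rng.integers(0, pop_size, size=(pop_size, 2))
    parents = pop[np.where(fitness[idx[:, 0]] < fitness[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # One-point crossover applied to every second individual
    cut = rng.integers(1, n_vars, size=pop_size)
    children = parents.copy()
    children[1::2, :] = np.array([np.r_[parents[i, :c], parents[i - 1, c:]]
                                  for i, c in zip(range(1, pop_size, 2), cut[1::2])])
    # Sparse Gaussian mutation, clipped to the variable bounds
    children += rng.normal(scale=0.3, size=children.shape) * (rng.random(children.shape) < 0.1)
    pop = np.clip(children, *bounds)

best = min(pop, key=total_cost)
print("best decision vector:", np.round(best, 2), "cost:", round(float(total_cost(best)), 2))
```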
Procedia PDF Downloads 315
524 Designing Web Application to Simulate Agricultural Management for Smart Farmer: Land Development Department’s Integrated Management Farm
Authors: Panasbodee Thachaopas, Duangdorm Gamnerdsap, Waraporn Inthip, Arissara Pungpa
Abstract:
LDD's IM Farm, or the Land Development Department's Integrated Management Farm, is an agricultural simulation application developed by the Land Development Department that relies on actual data in a simulation game to grow 12 cash crops: rice, corn, cassava, sugarcane, soybean, rubber tree, oil palm, pineapple, longan, rambutan, durian, and mangosteen. After launching the simulation game, players can select preferred areas for cropping from a base map or an orthophoto map at scale 1:4,000. Farm management is simulated from field preparation to harvesting. The system uses soil group and present land-use databases to help the player know what kind of crop is suitable to grow in each soil group, and it integrates LDD's data with data from other agencies, covering soil types, soil properties, soil problems, climate, cultivation cost, fertilizer use, fertilizer price, socio-economic data, plant diseases, weeds, pests, the interest rate for taking out a loan from the Bank for Agriculture and Agricultural Cooperatives (BAAC), labor cost, and market prices. These data affect the cost and yield differently for each crop. After completing the simulation, the player knows the yield, income and expenses, and profit/loss. The player can then change to other crops that are more suitable for the soil groups to obtain optimal yields and profits.
Keywords: agricultural simulation, smart farmer, web application, factors of agricultural production
Procedia PDF Downloads 197
523 Check Red Blood Cells Concentrations of a Blood Sample by Using Photoconductive Antenna
Authors: Ahmed Banda, Alaa Maghrabi, Aiman Fakieh
Abstract:
The terahertz (THz) range lies between 0.1 and 10 THz. Generating and detecting THz radiation can be done through different techniques. One of the most familiar techniques uses a photoconductive antenna (PCA). Generating THz radiation with a PCA involves applying a femtosecond laser pump and a DC voltage difference. A photocurrent is generated at the PCA, whose value is affected by different parameters (e.g., dielectric properties, DC voltage difference, and incident power of the laser pump). THz radiation is used for biomedical applications. However, different biomedical fields need new technologies to meet patients' needs (e.g., blood-related conditions). In this work, a novel method to check the red blood cell (RBC) concentration of a blood sample using a PCA is presented. RBCs constitute 44% of the total blood volume. RBCs contain hemoglobin, which transfers oxygen from the lungs to body organs. It then returns to the lungs carrying carbon dioxide, which the body gets rid of in the process of exhalation. The configuration has been simulated and optimized using COMSOL Multiphysics. Variation of the RBC concentration affects the dielectric properties (e.g., the relative permittivity of RBCs in the blood sample). The effects of four blood samples (with different concentrations of RBCs) on the photocurrent value have been tested. The photocurrent peak value and the RBC concentration are inversely related due to the change in the dielectric properties of the RBCs. It was noticed that the photocurrent peak value dropped from 162.99 nA to 108.66 nA when the RBC concentration rose from 0% to 100% of a blood sample. The optimization of this method helps to launch new products for diagnosing blood-related conditions (e.g., anemia and leukemia). The resultant electric field from the DC components cannot be used to count the RBCs of the blood sample.
Keywords: biomedical applications, photoconductive antenna, photocurrent, red blood cells, THz radiation
Procedia PDF Downloads 200
522 Simulation of Complex-Shaped Particle Breakage with a Bonded Particle Model Using the Discrete Element Method
Authors: Felix Platzer, Eric Fimbinger
Abstract:
In Discrete Element Method (DEM) simulations, the breakage behavior of particles can be simulated based on different principles. In the case of large, complex-shaped particles that show various breakage patterns depending on the scenario leading to the failure, and that often only break locally instead of fracturing completely, some of these principles do not lead to realistic results. The reason for this is that in such cases, the methods in question, such as the Particle Replacement Method (PRM) or Voronoi fracture, replace the initial particle (that is intended to break) with several sub-particles when certain breakage criteria are reached, such as exceeding the fracture energy. That is why those methods are commonly used for the simulation of materials that fracture completely instead of breaking locally. That being the case, when simulating local failure, it is advisable to pre-build the initial particle from sub-particles that are bonded together. The dimensions of these sub-particles consequently define the minimum size of the fracture results. This structure of bonded sub-particles enables the initial particle to break at the location of the highest local loads – due to the failure of the bonds in those areas – with several sub-particle clusters being the result of the fracture, which can again also break locally. In this project, different methods for the generation and calibration of complex-shaped particle conglomerates using bonded particle modeling (BPM) to depict more realistic fracture behavior were evaluated based on the example of filter cake. The method that proved suitable for this purpose, and which furthermore allows efficient and realistic simulation of the breakage behavior of complex-shaped particles applicable to industrial-sized simulations, is presented in this paper.
Keywords: bonded particle model, DEM, filter cake, particle breakage
Procedia PDF Downloads 209
521 Fluid–Structure Interaction Modeling of Wind Turbines
Authors: Andre F. A. Cyrino
Abstract:
Knowing that technological advances focus on the efficient extraction of energy from wind, and therefore on the design of wind turbine structures, this work aims at the study of the fluid-structure interaction of an idealized wind turbine. The blade was studied as a beam attached to a cylindrical hub whose rotation axis points along the air flow that passes through the rotor. Using the calculus of variations and the finite difference method, the blade was simulated by a discrete number of nodes and the aerodynamic forces were evaluated. The study presented here was implemented in MATLAB and performs a numerical simulation of a simplified model of a windmill consisting of a hub and three blades modeled as Euler-Bernoulli beams for small strains and under constant and uniform wind. The mathematical approach is based on Hamilton's extended principle, with the aerodynamic loads applied at the nodes considering the local relative wind speed, angle of attack, and aerodynamic lift and drag coefficients. Due to the wide range of angles of attack at which a wind turbine blade operates, the airfoil used in the model was the NREL SERI S809, which allowed obtaining equations for Cl and Cd as functions of the angle of attack, based on a NASA study. Three-dimensional flow effects were not taken into account, nor was torsion of the beam, which only bends. The results showed the dynamic response of the system in terms of displacement and rotational speed as the turbine reached its final speed. Although the results were not compared to real windmills or more complete models, the resulting values were consistent with the size of the system and the wind speed.
Keywords: blade aerodynamics, fluid–structure interaction, wind turbine aerodynamics, wind turbine blade
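The following sketch illustrates the kind of nodal aerodynamic-load evaluation described above (local relative wind speed, angle of attack, lift and drag); the Cl/Cd curves, rotor speed, and chord are placeholder assumptions, not the NREL S809 fits used in the paper.

```python
import numpy as np

# Minimal sketch (assumed numbers): aerodynamic load per unit span on a blade
# element from the local relative wind, angle of attack, and lift/drag
# coefficients. The Cl/Cd placeholder ignores stall entirely.
rho = 1.225            # air density, kg/m^3
wind = 10.0            # free-stream wind speed, m/s
omega = 3.0            # rotor speed, rad/s
chord = 0.5            # local chord length, m
twist = np.radians(5)  # local twist angle, assumed

def cl_cd(alpha):
    # Crude thin-airfoil-like lift slope with a small quadratic drag term.
    return 2 * np.pi * alpha, 0.01 + 0.02 * alpha ** 2

for r in [5.0, 10.0, 20.0]:                # radial stations along the blade, m
    v_rel = np.hypot(wind, omega * r)      # relative wind speed at the element
    phi = np.arctan2(wind, omega * r)      # inflow angle
    alpha = phi - twist                    # angle of attack
    cl, cd = cl_cd(alpha)
    q = 0.5 * rho * v_rel ** 2 * chord     # dynamic pressure times chord
    print(f"r={r:4.1f} m  alpha={np.degrees(alpha):5.2f} deg  "
          f"lift={q * cl:8.2f} N/m  drag={q * cd:7.2f} N/m")
```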
Procedia PDF Downloads 266
520 Preparation and Functional Properties of Synbiotic Yogurt Fermented with Lactobacillus brevis PML1 Derived from a Fermented Cereal-Dairy Product
Authors: Farideh Tabatabei-Yazdi, Fereshteh Falah, Alireza Vasiee
Abstract:
Nowadays, the production of functional foods has become essential. Inulin is one of the most functional hydrocolloid compounds used in such products. In the present study, the production of a synbiotic yogurt containing 1, 2.5, and 5% (w/v) inulin has been investigated. The yogurt was fermented with Lactobacillus brevis PML1 derived from Tarkhineh, an Iranian cereal-dairy fermented food. Furthermore, the physicochemical properties, antioxidant activity, sensory attributes, and microbial viability were investigated on the 0th, 7th, and 14th days of storage after fermentation. The viable cells of L. brevis PML1 reached 10^8 CFU/g, and the product resisted simulated digestive juices. Moreover, the synbiotic yogurt impressively increased the production of antimicrobial compounds and had the most profound antimicrobial effect on S. typhimurium. The physicochemical properties were in the normal range, and the fat content of the synbiotic yogurt was reduced remarkably. The antioxidant capacity of the fermented yogurt was significantly increased (p < 0.05), which was equal to those of DPPH (69.18 ± 1.00%) and BHA (89.16 ± 2.00%). The viability of L. brevis PML1 increased during storage. Sensory analysis showed that there were significant differences in terms of the relevant parameters between the samples and the control (p < 0.05). The addition of 2.5% inulin not only improved the physical properties but also retained the viability of the probiotic after 14 days of storage, with a viable count of L. brevis above 6 log CFU/g in the yogurt. Therefore, a novel synbiotic product containing L. brevis PML1, which can exert the desired properties, can be used as a suitable carrier for the delivery of the probiotic strain, exerting its beneficial health effects.
Keywords: functional food, Lactobacillus brevis, synbiotic yogurt, physicochemical properties
Procedia PDF Downloads 90
519 Perceived and Performed E-Health Literacy: Survey and Simulated Performance Test
Authors: Efrat Neter, Esther Brainin, Orna Baron-Epel
Abstract:
Background: Connecting end-users to newly developed ICT technologies and channeling patients to new products requires an assessment of compatibility. End-users' assessment is conveyed in the concept of eHealth literacy. The study examined the association between perceived and performed eHealth literacy (EHL) in an age-heterogeneous sample in Israel. Methods: Participants included 100 Israeli adults (mean age 43, SD 13.9) who were first interviewed by phone and then tested on a computer simulation of health-related Internet tasks. Performed, perceived, and evaluated EHL were assessed. Levels of successful completion of tasks represented EHL performance, and evaluated EHL included observed motivation, confidence, and the amount of help provided. Results: The skills of accessing, understanding, appraising, applying, and generating new information showed decreasing rates of successful completion as task complexity increased. Generating new information, though highly correlated with all other skills, showed the lowest correlations with them. Perceived and performed EHL were correlated (r = .40, P = .001), while facets of performance (i.e., digital literacy and EHL) were highly correlated (r = .89, P < .001). Participants low and high in performed EHL differed significantly: low performers were older, had attained less education, had used the Internet for less time, and perceived themselves as less healthy. They also encountered more difficulties, required more assistance, were less confident in their conduct, and exhibited less motivation than high performers. Conclusions: The association in this age-heterogeneous sample was larger than in previous age-homogeneous samples. The moderate association between perceived and performed EHL indicates that the two are associated yet distinct, the latter requiring separate assessment. Features of future rapid performed-EHL tools are discussed.
Keywords: eHealth, health literacy, performance, simulation
Procedia PDF Downloads 233
518 Time-Domain Analysis Approaches of Soil-Structure Interaction: A Comparative Study
Authors: Abdelrahman Taha, Niloofar Malekghaini, Hamed Ebrahimian, Ramin Motamed
Abstract:
This paper compares the substructure and direct methods for soil-structure interaction (SSI) analysis in the time domain. In the substructure SSI method, the soil domain is replaced by a set of springs and dashpots, also referred to as the impedance function, derived through the study of the behavior of a massless rigid foundation. The impedance function is inherently frequency dependent, i.e., it varies as a function of the frequency content of the structural response. To use the frequency-dependent impedance function for time-domain SSI analysis, the impedance function is approximated at the fundamental frequency of the structure-soil system. To explore the potential limitations of the substructure modeling process, a two-dimensional reinforced concrete frame structure is modeled using substructure and direct methods in this study. The results show discrepancies between the simulated responses of the substructure and the direct approaches. To isolate the effects of higher modal responses, the same study is repeated using a harmonic input motion, in which a similar discrepancy is still observed between the substructure and direct approaches. It is concluded that the main source of discrepancy between the substructure and direct SSI approaches is likely attributed to the way the impedance functions are calculated, i.e., assuming a massless rigid foundation without considering the presence of the superstructure. Hence, a refined impedance function, considering the presence of the superstructure, shall be developed. This refined impedance function is expected to significantly improve the simulation accuracy of the substructure approach for structural systems whose behavior is dominated by the fundamental mode response.
Keywords: direct approach, impedance function, soil-structure interaction, substructure approach
Procedia PDF Downloads 111
517 Performance Comparison of Microcontroller-Based Optimum Controller for Fruit Drying System
Authors: Umar Salisu
Abstract:
This research presents the development of a hot-air tomato drying system. To provide more efficient and continuous temperature control, a microcontroller-based optimal controller was developed. The system is based on a power control principle to achieve smooth power variations depending on a feedback temperature signal of the process. An LM35 temperature sensor and an LM399 differential comparator were used to measure the temperature. The mathematical model of the system was developed, and the optimal controller was designed, simulated, and compared with the PID controller's transient response. A controlled environment suitable for fruit drying is developed within a closed chamber in a three-step process. First, infrared light is used internally to preheat the fruit and speed up the removal of its internal water content for fast drying. Second, hot air at a specified temperature is blown inside the chamber to maintain the humidity below a specified level and exhaust the humid air from the chamber. Third, the microcontroller disconnects the power to the chamber after the moisture content of the fruit has been reduced to a minimum. Experiments were conducted with 1 kg of fresh tomatoes at three different temperatures (40, 50, and 60 °C) at a constant relative humidity of 30% RH. The results obtained indicate that the system significantly reduces the drying time without affecting the quality of the fruit. In the context of temperature control, the results obtained showed that the response of the optimal controller has zero overshoot, whereas the PID controller response overshoots to about 30% of the set-point. Another performance metric used is the rise time; the optimal controller rose without any delay, while the PID controller was delayed by more than 50 s. It can be argued that the optimal controller's performance is preferable to that of the PID controller since it does not overshoot and it starts in good time.
Keywords: drying, microcontroller, optimum controller, PID controller
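As a hedged illustration of the overshoot comparison reported above, the sketch below steps a first-order thermal plant under a deliberately under-damped PID loop and under a simple non-overshooting controller; the plant model and all gains are assumptions for illustration, not the authors' controller design.

```python
import numpy as np

# Minimal sketch (assumed plant and gains): step-response comparison of a
# first-order heating model under an under-damped PID loop and a simple
# proportional-plus-feedforward controller that cannot overshoot.
dt, t_end = 0.1, 300.0
tau, gain = 60.0, 1.0          # assumed plant time constant (s) and gain
setpoint, ambient = 50.0, 20.0 # target and ambient temperatures, deg C

def simulate(controller):
    T = ambient
    state = {"i": 0.0, "e_prev": setpoint - ambient}
    history = []
    for _ in np.arange(0.0, t_end, dt):
        e = setpoint - T
        u = controller(e, state)
        T += dt * (-(T - ambient) + gain * u) / tau   # first-order heating model
        history.append(T)
    return np.array(history)

def pid(e, s, kp=2.0, ki=0.3, kd=0.5):
    s["i"] += e * dt
    d = (e - s["e_prev"]) / dt
    s["e_prev"] = e
    return kp * e + ki * s["i"] + kd * d

def smooth(e, s, kp=2.0):
    # Proportional action plus a feedforward term that holds the setpoint.
    return kp * e + (setpoint - ambient) / gain

for name, ctrl in [("PID", pid), ("non-overshooting", smooth)]:
    T = simulate(ctrl)
    print(f"{name:17s} peak={T.max():6.2f} degC  final={T[-1]:6.2f} degC")
```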
Procedia PDF Downloads 299
516 Construction of Submerged Aquatic Vegetation Index through Global Sensitivity Analysis of Radiative Transfer Model
Authors: Guanhua Zhou, Zhongqi Ma
Abstract:
Submerged aquatic vegetation (SAV) in wetlands can absorb nitrogen and phosphorus effectively to prevent the eutrophication of water. It is feasible to monitor the distribution of SAV through remote sensing, but because the vegetation signal is weakened by the water body, traditional terrestrial vegetation indices are not applicable. This paper aims at constructing an SAV index to enhance the vegetation signal and distinguish SAV from the water body. The methodology is as follows: (1) select the bands sensitive to the vegetation parameters based on a global sensitivity analysis of an SAV canopy radiative transfer model; (2) taking the soil line concept as a reference, analyze the distribution of SAV and water reflectance simulated by the SAV canopy model and a semi-analytical water model in the two-dimensional space built by different sensitive bands; (3) select the band combinations which have better separation performance between SAV and water, and use them to build SAV indices in the form of the normalized difference vegetation index (NDVI); (4) analyze the sensitivity of the indices to the water and vegetation parameters, and choose the one more sensitive to the vegetation parameters. It is proved that the index formed from the bands with central wavelengths at 705 nm and 842 nm has high sensitivity to the chlorophyll content in leaves while being less affected by water constituents. The model simulation shows a weak negative correlation of the SAV index with increasing water depth. Moreover, the index enhances the capability of separating SAV from water compared to NDVI. The SAV index is expected to have potential in parameter inversion for wetland remote sensing.
Keywords: global sensitivity analysis, radiative transfer model, submerged aquatic vegetation, vegetation indices
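A minimal sketch of the NDVI-style index construction described above is given below; the band reflectances are invented values meant only to show that the normalized difference separates vegetated from open-water pixels.

```python
import numpy as np

# Minimal sketch (made-up reflectances): a normalized-difference index built
# from two bands centred near 705 nm and 842 nm, in the same form as NDVI.
def nd_index(r_842, r_705):
    return (r_842 - r_705) / (r_842 + r_705)

samples = {
    "open water": (0.02, 0.03),
    "sparse SAV": (0.08, 0.05),
    "dense SAV":  (0.20, 0.07),
}
for name, (r_842, r_705) in samples.items():
    print(f"{name:10s}  index = {nd_index(r_842, r_705):+.3f}")
```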
Procedia PDF Downloads 261
515 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function
Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos
Abstract:
Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. Therefore, the purpose of stochastic modeling is to estimate the probability of outcomes within a forecast, i.e., to be able to predict what conditions or decisions might occur under different situations. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who consider it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, such as the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in Ricciardi's theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, using simulated data, we analyse the computational problems associated with the parameters, an issue of great importance for its application to real data, with the use of convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler. Given the data that are available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
Keywords: diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion process, trend functions, two-parameter Weibull density function
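As a hedged illustration of the maximum-likelihood step mentioned above, the sketch below fits the two Weibull parameters to simulated data with SciPy; it stands in for the inference idea only and is not the authors' diffusion-process estimator.

```python
import numpy as np
from scipy import stats

# Minimal sketch (simulated data): maximum-likelihood estimation of the two
# Weibull parameters (shape and scale), with the location fixed at zero.
rng = np.random.default_rng(42)
shape_true, scale_true = 1.8, 5.0
data = scale_true * rng.weibull(shape_true, size=1000)

shape_hat, loc, scale_hat = stats.weibull_min.fit(data, floc=0)
print(f"true  shape={shape_true:.2f}  scale={scale_true:.2f}")
print(f"MLE   shape={shape_hat:.2f}  scale={scale_hat:.2f}")
```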
Procedia PDF Downloads 306
514 Deriving Framework for Slum Rehabilitation through Environmental Perspective: Case of Mumbai
Authors: Ashwini Bhosale, Yogesh Patil
Abstract:
Urban areas are extremely complicated environmental settings, where the health and well-being of individuals and populations are governed by a large number of bio-physical, socio-economic, and inclusive aspects. Although poverty and slums are prime issues under the UN-HABITAT agenda of environmental sustainability, slums, an inevitable part of the urban environment, have not been accounted for in inclusive city planning. Developing nations, where about 60% of the world's slum population resides, are increasingly under pressure to uplift the urban poor, particularly slum dwellers. On the positive side, the new slum redevelopment projects have succeeded in providing legitimized, more permanent and stable shelter for low-income people, as well as individualized sanitation and water supply. However, they unfortunately follow a "one type fits all" approach and exhibit no response to the climatic design needs of Mumbai. The thesis focuses on the study of environmental perspectives in the context of daylight, natural ventilation, and social aspects in the design process of slum-rehabilitation schemes (SRS) – the case of Mumbai. It attempts to investigate Indian approaches to SRS and concludes with strategies to be incorporated in SRS to improve the overall SRS environment. The main objectives of this work have been to identify and study the spatial configuration and the possibilities for daylight and natural ventilation in slum-rehabilitation buildings. The performance of the proposed method was evaluated by comparison with the daylight luminance simulated by lighting software, namely ECOTECT, and with measurements under real skies, whereas for the ventilation study, software named Flow Design was used.
Keywords: urban environment, slum-rehabilitation, daylight, natural-ventilation, architectural consequences
Procedia PDF Downloads 385
513 Battery Energy Storage System Economic Benefits Assessment on a Network Frequency Control
Authors: Kréhi Serge Agbli, Samuel Portebos, Michaël Salomon
Abstract:
A methodology is considered here for evaluating the economic benefit of providing a primary frequency control unit using a Battery Energy Storage System (BESS). In this methodology, two control types (basic and hysteresis) are implemented, and the corresponding minimum energy storage system power allowing the frequency drop to be kept inside a given threshold under a given contingency is identified and compared using DigSilent's PowerFactory software. Following this step, the corresponding energy storage capacity (in MWh) is calculated. As PowerFactory is dedicated to dynamic simulation for transient analysis, a first-order model of the IEEE 9-bus grid used for the analysis under PowerFactory is characterized and implemented in MATLAB-Simulink. Primary frequency control is simulated using the two control types over one month of grid frequency deviation data on this Simulink model. This simulation yields the energy throughput of both the basic and hysteresis BESSs. It emerges that the 15-minute operating band of the battery capacity allocated to frequency control is sufficient under the considered disturbances. A sensitivity analysis on the width of the control deadband is then performed for the two control types. The deadband width variation leads to identical sizing, with the hysteresis control showing better frequency control at the cost of a higher delivered throughput compared to the basic control. An economic analysis comparing the cost of the sized BESS to the potential revenues is then performed.
Keywords: battery energy storage system, electrical network frequency stability, frequency control unit, PowerFactory
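As a simple illustration of the "basic" deadband control idea described above, the sketch below maps a measured frequency to a battery power setpoint through a droop characteristic with a deadband; droop, deadband, and rating values are assumptions, not the study's settings.

```python
import numpy as np

# Minimal sketch (assumed parameters): battery power setpoint from a
# droop-type primary frequency controller with a simple deadband.
f_nom = 50.0        # nominal frequency, Hz
deadband = 0.02     # Hz, no response inside this band
droop = 100.0       # MW per Hz of deviation beyond the deadband
p_max = 10.0        # battery power rating, MW

def battery_power(f):
    df = f - f_nom
    if abs(df) <= deadband:
        return 0.0
    # Respond only to the deviation beyond the deadband, limited by the rating.
    excess = df - np.sign(df) * deadband
    return float(np.clip(-droop * excess, -p_max, p_max))

for f in [49.90, 49.97, 50.00, 50.03, 50.15]:
    print(f"f = {f:5.2f} Hz  ->  P_batt = {battery_power(f):+6.2f} MW")
```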
Procedia PDF Downloads 127
512 Orthogonal Metal Cutting Simulation of Steel AISI 1045 via Smoothed Particle Hydrodynamic Method
Authors: Seyed Hamed Hashemi Sohi, Gerald Jo Denoga
Abstract:
Machining, or metal cutting, is one of the most widely used production processes in industry. The quality of the process and the resulting machined product depends on parameters like tool geometry, material, and cutting conditions. However, the relationships of these parameters to the cutting process are often based mostly on empirical knowledge. In this study, computer modeling and simulation using LS-DYNA software and a Smoothed Particle Hydrodynamics (SPH) methodology were performed on the orthogonal metal cutting process to analyze the three-dimensional deformation of AISI 1045 medium carbon steel during machining. The simulation was performed using the following constitutive models: the Power Law model, the Johnson-Cook model, and the Zerilli-Armstrong (Z-A) model. The outcomes were compared against the simulated results obtained by Cenk Kiliçaslan using the Finite Element Method (FEM) and the empirical results of Jaspers and Filice. The analysis shows that the SPH method combined with the Zerilli-Armstrong constitutive model is a viable alternative for simulating the metal cutting process. The tangential force was overestimated by 7%, and the normal force was underestimated by 16% when compared with empirical values. The simulated values for flow stress versus strain at various temperatures were also validated against empirical values. The SPH method using the Z-A model has also proven to be robust against issues of time-scaling. Experimental work was also done to investigate the effects of friction, rake angle, and tool tip radius on the simulation.
Keywords: metal cutting, smoothed particle hydrodynamics, constitutive models, experimental, cutting forces analyses
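For reference, the sketch below evaluates the Johnson-Cook flow-stress relation used as one of the constitutive models above; the material constants are rough literature-style values for a medium-carbon steel and should be treated as assumptions, not the inputs of this study.

```python
import numpy as np

# Minimal sketch: the Johnson-Cook flow-stress relation
#   sigma = (A + B*eps^n) * (1 + C*ln(eps_dot/eps_dot0)) * (1 - T_star^m)
# with assumed medium-carbon-steel-like constants.
A, B, n = 553.1e6, 600.8e6, 0.234       # Pa, Pa, -
C, eps_dot0 = 0.0134, 1.0               # -, 1/s
m, T_room, T_melt = 1.0, 293.0, 1733.0  # -, K, K

def johnson_cook(eps, eps_dot, T):
    t_star = (T - T_room) / (T_melt - T_room)
    return (A + B * eps ** n) * (1 + C * np.log(eps_dot / eps_dot0)) * (1 - t_star ** m)

for T in (300.0, 600.0, 900.0):
    sigma = johnson_cook(eps=0.5, eps_dot=1e3, T=T)
    print(f"T = {T:5.0f} K  ->  flow stress = {sigma / 1e6:7.1f} MPa")
```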
Procedia PDF Downloads 259
511 Radiation Protection Assessment of the Emission of a d-t Neutron Generator: Simulations with MCNP Code and Experimental Measurements in Different Operating Conditions
Authors: G. M. Contessa, L. Lepore, G. Gandolfo, C. Poggi, N. Cherubini, R. Remetti, S. Sandri
Abstract:
Practical guidelines are provided in this work for the safe use of a portable d-t Thermo Scientific MP-320 neutron generator producing pulsed 14.1 MeV neutron beams. The neutron generator's emission was tested experimentally and reproduced with the MCNPX Monte Carlo code. Simulations were particularly detailed: even the generator's internal components were reproduced on the basis of ad hoc collected X-ray radiographic images. Measurement campaigns were conducted under different standard experimental conditions using an LB 6411 neutron detector properly calibrated at three different energies, comparing simulated and experimental data. In order to estimate the dose to the operator versus the operating conditions and the energy spectrum, the most appropriate value of the conversion factor between neutron fluence and ambient dose equivalent has been identified, taking into account both direct and scattered components. The results of the simulations show that, in real situations, when there is no information about the neutron spectrum at the point where the dose has to be evaluated, it is possible – and in any case conservative – to convert the measured value of the count rate by means of the conversion factor corresponding to 14 MeV energy. This outcome is of general value when using this type of generator, enabling a more accurate design of experimental activities in different setups. The increasingly widespread use of this type of device for industrial and medical applications makes the results of this work of interest in different situations, especially as a support for the definition of appropriate radiation protection procedures and, in general, for risk analysis.
Keywords: instrumentation and monitoring, management of radiological safety, measurement of individual dose, radiation protection of workers
Procedia PDF Downloads 129
510 Development of Method for Recovery of Nickel from Aqueous Solution Using 2-Hydroxy-5-Nonyl- Acetophenone Oxime Impregnated on Activated Charcoal
Authors: A. O. Adebayo, G. A. Idowu, F. Odegbemi
Abstract:
Investigations on the recovery of nickel from aqueous solution using 2-hydroxy-5-nonylacetophenone oxime (LIX-84I) impregnated on activated charcoal were carried out. The LIX-84I was impregnated onto the pores of dried activated charcoal by the dry method, and the optimum conditions for different equilibrium parameters (pH, adsorbent dosage, extractant concentration, agitation time, and temperature) were determined using a simulated nickel solution. Kinetics and adsorption isotherm studies were also carried out. It was observed that the efficiency of recovery with LIX-84I impregnated on charcoal depended on the pH of the aqueous solution, as there was little or no recovery at pH below 4. However, as the pH was raised, the percentage recovery increased and peaked at pH 5.0. The recovery was found to increase with temperature up to 60 ºC. Also, it was observed that nickel adsorbed onto the loaded charcoal best at a lower extractant concentration (0.1 M) when compared with higher concentrations. Similarly, a moderately low dosage (1 g) of the adsorbent showed better recovery than larger dosages. These optimum conditions were used to recover nickel from the leachate of Ni-MH batteries dissolved in sulphuric acid, and a 99.6% recovery was attained. Adsorption isotherm studies showed that the equilibrium data fitted best to the Temkin model, with a negative value of the constant b (−1.017 J/mol) and a high correlation coefficient, R², of 0.9913. Kinetic studies showed that the adsorption process followed a pseudo-second-order model. Thermodynamic parameter values (∆G⁰, ∆H⁰, and ∆S⁰) showed that the adsorption was endothermic and spontaneous. The impregnated charcoal appreciably recovered nickel using a relatively smaller volume of extractant than is required in solvent extraction. Desorption studies showed that the loaded charcoal is reusable three times, and so might be economical for nickel recovery from waste batteries.
Keywords: charcoal, impregnated, LIX-84I, nickel, recovery
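As an illustration of the pseudo-second-order kinetic analysis mentioned above, the sketch below performs the standard linearised fit on invented uptake data; the numbers are placeholders, not the study's measurements.

```python
import numpy as np

# Minimal sketch (made-up data): linearised pseudo-second-order kinetic fit,
#   t/q_t = 1/(k2*q_e^2) + t/q_e,
# used to extract the equilibrium uptake q_e and rate constant k2.
t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)   # min
q = np.array([3.1, 4.8, 6.5, 7.8, 8.3, 8.6, 8.7])         # mg/g adsorbed

slope, intercept = np.polyfit(t, t / q, 1)
q_e = 1.0 / slope                      # equilibrium uptake, mg/g
k2 = slope ** 2 / intercept            # rate constant, g/(mg*min)
r2 = np.corrcoef(t, t / q)[0, 1] ** 2  # R^2 of the linearised fit

print(f"q_e = {q_e:.2f} mg/g, k2 = {k2:.4f} g/(mg*min), R^2 = {r2:.4f}")
```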
Procedia PDF Downloads 147
509 Design of Robust and Intelligent Controller for Active Removal of Space Debris
Authors: Shabadini Sampath, Jinglang Feng
Abstract:
With its huge kinetic energy, space debris poses a major threat to astronauts' space activities and to spacecraft in orbit if a collision happens. The active removal of space debris is required in order to avoid the frequent collisions that would otherwise occur. In addition, the amount of space debris would increase uncontrollably, posing a threat to the safety of the entire space system. However, the safe and reliable removal of large-scale space debris has been a huge challenge to date. While capturing and deorbiting space debris, the space manipulator has to achieve high control precision. However, due to uncertainties and unknown disturbances, there is difficulty in coordinating the control of the space manipulator. To address this challenge, this paper focuses on developing a robust and intelligent control algorithm that controls joint movement and restricts it to the sliding manifold by reducing uncertainties. A neural network adaptive sliding mode controller (NNASMC) is applied with the objective of finding the control law such that the joint motions of the space manipulator follow the given trajectory. Computed torque control (CTC) is an effective motion control strategy that is used in this paper for computing the space manipulator arm torque needed to generate the required motion. Based on the Lyapunov stability theorem, the proposed intelligent controller NNASMC with CTC guarantees the robustness and global asymptotic stability of the closed-loop control system. Finally, the controllers used in the paper are modeled and simulated using MATLAB Simulink. The results are presented to prove the effectiveness of the proposed controller approach.
Keywords: GNC, active removal of space debris, AI controllers, MATLAB Simulink
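As a hedged illustration of the computed torque control (CTC) law mentioned above, the sketch below tracks a sinusoidal trajectory with a single-link arm; the dynamics and gains are a toy assumption, not the space-manipulator model of the paper.

```python
import numpy as np

# Minimal sketch (assumed single-link arm): computed torque control,
#   tau = M(q)*(ddq_des + Kd*de + Kp*e) + G(q),
# tracking a sinusoidal joint trajectory.
m, l, g = 2.0, 1.0, 9.81            # link mass (kg), length (m), gravity
Kp, Kd = 100.0, 20.0
dt, t_end = 0.001, 5.0

def M(q):  return m * l ** 2        # inertia of a point mass at the link tip
def G(q):  return m * g * l * np.cos(q)

q, dq = 0.0, 0.0
max_err = 0.0
for t in np.arange(0.0, t_end, dt):
    q_des, dq_des, ddq_des = np.sin(t), np.cos(t), -np.sin(t)
    e, de = q_des - q, dq_des - dq
    tau = M(q) * (ddq_des + Kd * de + Kp * e) + G(q)   # computed torque law
    ddq = (tau - G(q)) / M(q)                          # plant dynamics
    dq += ddq * dt
    q += dq * dt
    max_err = max(max_err, abs(e))

print(f"max tracking error over {t_end:.0f} s: {max_err:.2e} rad")
```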
Procedia PDF Downloads 130
508 Data Compression in Ultrasonic Network Communication via Sparse Signal Processing
Authors: Beata Zima, Octavio A. Márquez Reyes, Masoud Mohammadgholiha, Jochen Moll, Luca de Marchi
Abstract:
This document presents an approach to using compressed sensing for signal encoding and information transfer within a guided wave sensor network composed of specially designed frequency-steerable acoustic transducers (FSATs). Wave propagation in a damaged plate was simulated using the commercial FEM-based software COMSOL. Guided waves were excited by means of FSATs, characterized by the special shape of their electrodes, and modeled using PIC255 piezoelectric material. The special shape of the FSAT allows wave energy to be focused in a certain direction, according to the frequency components of its actuation signal, which makes a larger monitored area available. The process begins when an FSAT detects and records a reflection from damage in the structure; this signal is then encoded and prepared for transmission using a combined approach based on compressed-sensing matching pursuit and quadrature amplitude modulation (QAM). After the signal has been encoded into binary characters, the information is transmitted between the nodes in the network. The message reaches the last node, where it is finally decoded and processed to be used for damage detection and localization purposes. The main aim of the investigation is to determine the location of detected damage using the reconstructed signals. The study demonstrates that the special steering capabilities of FSATs not only facilitate the detection of damage but also permit transmitting the damage information to a chosen area in a specific direction of the investigated structure.
Keywords: data compression, ultrasonic communication, guided waves, FEM analysis
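As an illustration of the matching-pursuit side of the encoding scheme described above, the sketch below recovers a sparse vector from compressed measurements with orthogonal matching pursuit; the random dictionary and sparsity level are assumptions, not the FSAT signal model.

```python
import numpy as np

# Minimal sketch (random data): orthogonal matching pursuit recovering a
# sparse coefficient vector from measurements y = A @ x_true.
rng = np.random.default_rng(3)
n_meas, n_atoms, sparsity = 40, 120, 5

A = rng.normal(size=(n_meas, n_atoms))
A /= np.linalg.norm(A, axis=0)                 # unit-norm dictionary atoms
x_true = np.zeros(n_atoms)
x_true[rng.choice(n_atoms, sparsity, replace=False)] = rng.normal(size=sparsity)
y = A @ x_true

def omp(A, y, k):
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, sparsity)
print("support recovered:", sorted(np.flatnonzero(x_hat)) == sorted(np.flatnonzero(x_true)))
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```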
Procedia PDF Downloads 123
507 Effects of a Simulated Power Cut in Automatic Milking Systems on Dairy Cows Heart Activity
Authors: Anja Gräff, Stefan Holzer, Manfred Höld, Jörn Stumpenhausen, Heinz Bernhardt
Abstract:
In view of the increasing quantity of 'green energy' from renewable raw materials and photovoltaic facilities, it is quite conceivable that power supply variations may occur, so that constantly working machines like automatic milking systems (AMS) may break down temporarily. The usage of farm-made energy is steadily increasing in order to keep energy costs as low as possible. As a result, power cuts are likely to happen more frequently. Current work in the framework of the project 'stable 4.0' focuses on possible stress reactions by simulating power cuts of up to four hours on dairy farms. Based on heart activity, it should be determined whether stress on dairy cows increases under these circumstances. In order to simulate a power cut, 12 randomly selected cows from two herds were not admitted to the AMS for at least two hours on three consecutive days. The heart rates of the cows were measured and the collected data evaluated with the HRV program Kubios version 2.1 on the basis of eight parameters (HR, RMSSD, pNN50, SD1, SD2, LF, HF, and LF/HF). Furthermore, stress reactions were examined closely via video analysis, milk yield, rumination activity, pedometer data, and measurements of cortisol metabolites. In conclusion, it turned out that during the test only some animals suffered minor stress symptoms when they tried to get into the AMS at their regular milking time but could not be milked because the system was manipulated. However, the stress level during a regular 'time-dependent milking rejection' was just as high. The study therefore concludes that the low psychological stress level in the case of a 2–4 hour failure of an AMS does not have any impact on animal welfare and health.
Keywords: dairy cow, heart activity, power cut, stable 4.0
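For reference, the sketch below computes several of the HRV parameters listed above (RMSSD, pNN50, SD1, SD2) from a made-up series of RR intervals; it mirrors the standard definitions, not the Kubios implementation.

```python
import numpy as np

# Minimal sketch (made-up RR intervals, in ms): standard time-domain and
# Poincare HRV metrics computed from inter-beat intervals.
rng = np.random.default_rng(7)
rr = rng.normal(900.0, 40.0, size=300)              # RR intervals, ms

diff = np.diff(rr)
rmssd = np.sqrt(np.mean(diff ** 2))                 # root mean square of successive differences
pnn50 = 100.0 * np.mean(np.abs(diff) > 50.0)        # % of successive differences > 50 ms
sdnn = np.std(rr, ddof=1)                           # overall variability
sd1 = np.std(diff, ddof=1) / np.sqrt(2)             # Poincare short-term axis
sd2 = np.sqrt(max(2 * sdnn ** 2 - sd1 ** 2, 0.0))   # Poincare long-term axis

print(f"RMSSD = {rmssd:.1f} ms, pNN50 = {pnn50:.1f} %, SD1 = {sd1:.1f} ms, SD2 = {sd2:.1f} ms")
```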
Procedia PDF Downloads 310
506 A System Dynamics Model for Analyzing Customer Satisfaction in Healthcare Systems
Authors: Mahdi Bastan, Ali Mohammad Ahmadvand, Fatemeh Soltani Khamsehpour
Abstract:
The sustainable development of health organizations has nowadays become highly affected by customer satisfaction, due to significant changes in the business environment of the healthcare system and the emergence of the competitiveness paradigm. If we look at hospitals and other health organizations as service providers concerned with profit, the satisfaction of employees as internal customers and of patients as external customers is of significant importance to success in the health business. Furthermore, the satisfaction rate can be considered in the performance assessment of healthcare organizations as a perceived quality measure. Several studies have been carried out to identify the factors affecting patients' satisfaction in health organizations. However, from a systemic view, the complex causal relations among the many components of the healthcare system are an issue whose understanding and sustainable management require a grasp of the dynamic complexity, an appropriate cognition of the different components, and of the effective relationships among them, ultimately resulting in identifying the generative structure of patients' satisfaction. Hence, the present paper applies system dynamics approaches coherently and methodologically to represent the systemic structure of customer satisfaction in a health system, involving the constituent components and the interactions among them. Then, the results of different policies applied to the system are simulated by developing mathematical models, identifying leverage points, and using the scenario-making technique, and the best solutions for improving customer satisfaction with the services are presented. The presented approach supports taking advantage of decision support systems. Additionally, relying on an understanding of the system's behavioral dynamics, effective policies for improving the health system can be recognized.
Keywords: customer satisfaction, healthcare, scenario, simulation, system dynamics
Procedia PDF Downloads 413
505 Hydrological Response of the Glacierised Catchment: Himalayan Perspective
Authors: Sonu Khanal, Mandira Shrestha
Abstract:
Snow and glaciers are the largest dependable reserves of water for the river systems originating from the Himalayas, so accurate estimates of the volume of water contained in the snowpack and of the rate of release of water from snow and glaciers are needed for efficient management of the water resources. This research assesses the energy exchanges between the snowpack, the air above, and the soil below according to mass and energy balance, which makes it more apposite than models using a simple temperature index for snow and glacier melt computation. UEBGrid, a distributed energy-balance model, is used to calculate the melt, which is then routed by Geo-SFM. The model's robustness is maintained by incorporating the albedo generated from Landsat-7 ETM images on a seasonal basis for the year 2002-2003 and a substrate map derived from TM. The substrate file predominantly includes four major thematic layers, viz. snow, clean ice, glaciers, and barren land. This approach makes use of CPC RFE-2 and MERRA gridded data sets as the source of precipitation and climatic variables. The subsequent model run for the years 2002-2008 shows that a total annual melt of 17.15 m is generated from the Marshyangdi Basin, of which 71% is contributed by glaciers, 18% by rain, and the rest by snowmelt. The albedo file is decisive in governing the melt dynamics, as a 30% increase in the generated surface albedo results in a 10% decrease in the simulated discharge. The melt routed with the land cover and soil variables using Geo-SFM shows a Nash-Sutcliffe efficiency of 0.60 against observed discharge for the study period.
Keywords: glacier, glacier melt, snowmelt, energy balance
Procedia PDF Downloads 453
504 Numerical Analysis for Soil Compaction and Plastic Points Extension in Pile Drivability
Authors: Omid Tavasoli, Mahmoud Ghazavi
Abstract:
A numerical analysis of the drivability of piles with different geometries is presented. In this paper, a three-dimensional finite difference analysis of plastic point extension and soil compaction under the effect of pile driving is presented. Four pile configurations are investigated: a cylindrical pile, a fully tapered pile, a T-C pile consisting of a top tapered segment and a lower cylindrical segment, and a C-T pile with a top cylindrical part followed by a tapered part. All piles, which are driven to a total penetration depth of 16 m, have the same length, equivalent surface areas, and approximately identical material volumes. An idealization of the pile-soil system in pile driving is considered for this approach. A linear elastic material is assumed to model the vertical pile behavior, while the soil obeys an elasto-plastic constitutive law whose failure is controlled by the Mohr-Coulomb failure criterion. Slip occurring at the pile-soil contact surfaces along the shaft and at the toe during pile driving is simulated with interface elements. All initial and boundary conditions are the same in all analyses. Quiet boundaries are used to prevent wave reflection in the lateral and vertical directions for the soil. The results obtained from the numerical analyses were compared with other available numerical data and laboratory tests, indicating satisfactory agreement. It is shown that with increasing taper angle, the permanent pile toe settlement increases and, therefore, the extension of plastic points increases. These are interesting phenomena in pile driving and are on the safe side for driven piles.
Keywords: pile driving, finite difference method, non-uniform piles, pile geometry, pile set, plastic points, soil compaction
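As a small illustration of the Mohr-Coulomb failure criterion governing the soil model above, the sketch below checks whether example stress states yield; cohesion and friction angle are assumed values, not the study's soil parameters.

```python
import numpy as np

# Minimal sketch (assumed soil properties): Mohr-Coulomb check,
#   tau_f = c + sigma_n * tan(phi),
# marking stress states at or beyond the envelope as plastic points.
c = 10.0e3                     # cohesion, Pa
phi = np.radians(30.0)         # internal friction angle

def shear_strength(sigma_n):
    return c + sigma_n * np.tan(phi)

for sigma_n, tau in [(50e3, 30e3), (50e3, 45e3), (120e3, 60e3)]:
    status = "yields (plastic point)" if tau >= shear_strength(sigma_n) else "elastic"
    print(f"sigma_n = {sigma_n/1e3:6.1f} kPa, tau = {tau/1e3:5.1f} kPa -> {status}")
```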
Procedia PDF Downloads 482
503 Simulation and Fabrication of Plasmonic Lens for Bacteria Detection
Authors: Sangwoo Oh, Jaewoo Kim, Dongmin Seo, Jaewon Park, Yongha Hwang, Sungkyu Seo
Abstract:
Plasmonics has been regarded as one of the most powerful bio-sensing modalities for evaluating bio-molecular interactions in real time. However, most plasmonic sensing methods are based on labeling with metallic nanoparticles, e.g. gold or silver, as optical modulation markers, which are non-recyclable and expensive. This plasmonic modulation can usually be achieved through various nanostructures, e.g., nano-hole arrays. Among those structures, the plasmonic lens has been regarded as a unique plasmonic structure due to its light-focusing characteristics. In this study, we introduce a custom-designed plasmonic lens array for bio-sensing, which was simulated by the finite-difference time-domain (FDTD) approach and fabricated by a top-down approach. In our work, we performed FDTD simulations of various plasmonic lens designs for bacteria sensors, i.e., Salmonella and Hominis. We optimized the design parameters, i.e., radius, shape, and material, of the plasmonic lens. The simulation results showed the change in the peak intensity value with the introduction of each bacterium and antigen, i.e., a peak intensity of 1.8711 a.u. with the introduction of an antibody layer of 15 nm thickness. For Salmonella, the peak intensity changed from 1.8711 a.u. to 2.3654 a.u., and for Hominis, the peak intensity changed from 1.8711 a.u. to 3.2355 a.u. This significant shift in intensity due to the interaction between bacteria and antigen showed a promising sensing capability of the plasmonic lens. With batch processing and bulk production of this nanoscale design, the cost of biological sensing can be significantly reduced, holding great promise in the fields of clinical diagnostics and bio-defense.
Keywords: plasmonic lens, FDTD, fabrication, bacteria sensor, salmonella, hominis
Procedia PDF Downloads 269
502 Fabrication of Cheap Novel 3d Porous Scaffolds Activated by Nano-Particles and Active Molecules for Bone Regeneration and Drug Delivery Applications
Authors: Mostafa Mabrouk, Basma E. Abdel-Ghany, Mona Moaness, Bothaina M. Abdel-Hady, Hanan H. Beherei
Abstract:
Tissue engineering has become a promising field for bone repair and regenerative medicine, in which cultured cells, scaffolds, and osteogenic inductive signals are used to regenerate tissues. The annual cost of treating bone defects in Egypt has been estimated at many billions, while enormous costs are spent on imported bone grafts for bone injuries, tumors, and other pathologies associated with defective fracture healing. The current study is aimed at developing a more strategic approach to speed up recovery after bone damage. This will reduce the risk of fatal surgical complications and improve the quality of life of people affected by such fractures. 3D scaffolds loaded with cheap nanoparticles that possess an osteogenic effect were prepared by nano-electrospinning. The microstructure and morphology of the 3D scaffolds were examined using scanning electron microscopy (SEM). The physicochemical characterization was carried out using X-ray diffractometry (XRD) and infrared spectroscopy (IR). The physicomechanical properties of the 3D scaffold were determined by a universal testing machine. The in vitro bioactivity of the 3D scaffold was assessed in simulated body fluid (SBF). The bone-bonding ability of the novel 3D scaffolds was also evaluated. The obtained nanofibrous scaffolds demonstrated promising microstructural, physicochemical, and physicomechanical features appropriate for enhanced bone regeneration. Therefore, the utilized nanomaterials loaded with the drug are highly recommended as cheap alternatives to growth factors.
Keywords: bone regeneration, cheap scaffolds, nanomaterials, active molecules
Procedia PDF Downloads 186
501 Fire and Explosion Consequence Modeling Using Fire Dynamic Simulator: A Case Study
Authors: Iftekhar Hassan, Sayedil Morsalin, Easir A Khan
Abstract:
Accidents involving fire have occurred frequently in recent times, and their causes show a great deal of variety, so the required intervention methods and risk assessment strategies are unique in each case. On September 4, 2020, a fire and explosion occurred in a confined space, caused by a methane gas leak from an underground pipeline, in the Baitus Salat Jame mosque during night (Esha) prayer in Narayanganj District, Bangladesh, killing 34 people. In this research, this incident is simulated using Fire Dynamics Simulator (FDS) software to analyze and understand the nature of the accident and the associated consequences. FDS is an advanced computational fluid dynamics (CFD) code for fire-driven fluid flow which numerically solves a large-eddy-simulation form of the Navier–Stokes equations to simulate fire and smoke spread and to predict thermal radiation, toxic substance concentrations, and other relevant fire parameters. This study focuses on understanding the nature of the fire and evaluating the consequences of the thermal radiation caused by the vapor cloud explosion. An evacuation model was constructed to visualize the effect of evacuation time and the fractional effective dose (FED) for different types of agents. The results are presented as 3D animations, slice pictures, and graphs to understand the fire hazards caused by thermal radiation or smoke due to the vapor cloud explosion. This study will help to design and develop appropriate response strategies for preventing similar accidents.
Keywords: consequence modeling, fire and explosion, fire dynamics simulation (FDS), thermal radiation
Procedia PDF Downloads 223
500 Optimal Concentration of Fluorescent Nanodiamonds in Aqueous Media for Bioimaging and Thermometry Applications
Authors: Francisco Pedroza-Montero, Jesús Naín Pedroza-Montero, Diego Soto-Puebla, Osiris Alvarez-Bajo, Beatriz Castaneda, Sofía Navarro-Espinoza, Martín Pedroza-Montero
Abstract:
Nanodiamonds have been widely studied for their physical properties, including chemical inertness, biocompatibility, optical transparency from the ultraviolet to the infrared region, high thermal conductivity, and mechanical strength. In this work, we studied how the fluorescence spectrum of nanodiamonds quenches with respect to their concentration in aqueous solutions, systematically ranging from 0.1 to 10 mg/mL. Our results demonstrated a non-linear fluorescence quenching as the concentration increases for both of the NV zero-phonon lines; the 5 mg/mL concentration shows the maximum fluorescence emission. Furthermore, this behaviour is theoretically explained as an electronic recombination process that modulates the intensity of the NV centres. Finally, to gain more insight, the FRET methodology is used to determine the fluorescence efficiency in terms of the fluorophores' separation distance. Thus, the concentration level is simulated as follows: a small distance between nanodiamonds is considered a highly concentrated system, whereas a large distance means a weakly concentrated one. Although the 5 mg/mL concentration shows the maximum intensity, our main interest is focused on the concentration of 0.5 mg/mL, for which our studies demonstrate optimal human cell viability (99%). In this respect, this concentration has the feature of being as biocompatible as water, giving the possibility of internalizing the nanodiamonds in cells without harming the living medium. To this end, not only can we track nanodiamonds on the surface or inside the cell with excellent precision due to their fluorescence intensity, but we can also perform thermometry tests, transforming a fluorescence contrast image into a temperature contrast image.
Keywords: nanodiamonds, fluorescence spectroscopy, concentration, bioimaging, thermometry
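As an illustration of the FRET distance argument above, the sketch below evaluates the standard efficiency relation E = 1/(1 + (r/R0)^6) for a few separations; the Förster radius and distances are assumed values, not measured ones.

```python
# Minimal sketch (assumed Forster radius): FRET efficiency versus
# donor-acceptor separation, mimicking how a shorter inter-nanodiamond
# distance stands in for a higher concentration.
R0 = 5.0  # Forster radius, nm (assumed)

def fret_efficiency(r):
    return 1.0 / (1.0 + (r / R0) ** 6)

for r in [2.0, 4.0, 5.0, 7.0, 10.0]:   # separation distances, nm
    print(f"r = {r:4.1f} nm  ->  E = {fret_efficiency(r):.3f}")
```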
Procedia PDF Downloads 403