Search results for: dispersive wave equations
146 The Effect of Artificial Intelligence on Mobile Phones and Communication Systems
Authors: Ibram Khalafalla Roshdy Shokry
Abstract:
This paper presents a carrier sense multiple access (CSMA) communication model based on an SoC design methodology. Such a model can be used to guide the modelling of complex wireless communication systems; the use of such a communication model is therefore an important step in the construction of high-performance communication. SystemC has been selected because it provides a homogeneous design flow for complex designs (i.e., SoC and IP-based design). We use a swarm system to validate the designed CSMA model and to show the advantages of incorporating communication early in the design process. The wireless communication is created through the modelling of the CSMA protocol, which can be used to achieve communication among all of the agents and to coordinate access to the shared medium (channel). Equipping vehicles with wireless communication capabilities is expected to be the key to the evolution towards next-generation intelligent transportation systems (ITS). The IEEE community has been continuously working on the development of a wireless vehicular communication protocol for Wireless Access in Vehicular Environments (WAVE). Vehicular communication systems, known as V2X, support vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. The efficiency of such communication systems depends on several factors, among which the surrounding environment and mobility are prominent. As a result, this study focuses on the evaluation of the actual performance of vehicular communication, with particular attention to the effects of the real environment and mobility on V2X communication. It begins by determining the actual maximum range that such communication can support and then evaluates V2I and V2V performance. The Arada LocoMate OBU transmission device was used to test and evaluate the effect of the transmission range on V2X communication. The evaluation of V2I and V2V communication takes the real effects of low and high mobility on transmission into account. Multi-agent systems have received considerable attention in numerous fields, including robotics, autonomous vehicles, and distributed computing, where multiple agents cooperate and communicate to achieve complex tasks. Efficient communication among agents is a critical aspect of these systems, as it directly influences their overall performance and scalability. This scholarly work explores essential communication factors and conducts a comparative assessment of various protocols utilized in multi-agent systems. The emphasis lies in scrutinizing the strengths, weaknesses, and applicability of these protocols across diverse scenarios. The study also sheds light on emerging trends in communication protocols for multi-agent systems, including the incorporation of machine learning techniques and the adoption of blockchain-based solutions to ensure secure communication. These developments offer valuable insights into the evolving landscape of multi-agent systems and their communication protocols. Keywords: communication, multi-agent systems, protocols, consensus, SystemC, modelling, simulation, CSMA
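As a hedged illustration of the medium-access behaviour such a CSMA model coordinates (a minimal sketch only, not the authors' SystemC implementation; all parameters are hypothetical), a slotted CSMA channel with binary exponential backoff can be simulated as follows:

```python
import random

def csma_slotted(num_agents=5, num_slots=200, p_new_frame=0.3, max_backoff_exp=5):
    """Toy slotted CSMA with binary exponential backoff.

    Each agent senses the shared channel every slot; after a collision it
    waits a random backoff before retrying. This only illustrates the access
    logic that the SystemC model described above coordinates.
    """
    backoff = [0] * num_agents      # slots each agent still has to wait
    retries = [0] * num_agents      # consecutive collisions per agent
    pending = [False] * num_agents  # does the agent have a frame to send?
    delivered, collisions = 0, 0

    for _ in range(num_slots):
        # new frames arrive at idle agents
        for a in range(num_agents):
            if not pending[a] and random.random() < p_new_frame:
                pending[a] = True

        # agents whose backoff expired and that have a frame transmit
        transmitters = [a for a in range(num_agents) if pending[a] and backoff[a] == 0]
        for a in range(num_agents):
            if backoff[a] > 0:
                backoff[a] -= 1

        if len(transmitters) == 1:       # exactly one sender: success
            a = transmitters[0]
            pending[a], retries[a] = False, 0
            delivered += 1
        elif len(transmitters) > 1:      # collision: everyone backs off
            collisions += 1
            for a in transmitters:
                retries[a] = min(retries[a] + 1, max_backoff_exp)
                backoff[a] = random.randint(1, 2 ** retries[a])

    return delivered, collisions

if __name__ == "__main__":
    print(csma_slotted())
```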
Procedia PDF Downloads 25
145 Role of Yeast-Based Bioadditive on Controlling Lignin Inhibition in Anaerobic Digestion Process
Authors: Ogemdi Chinwendu Anika, Anna Strzelecka, Yadira Bajón-Fernández, Raffaella Villa
Abstract:
Anaerobic digestion (AD) has been used since time immemorial to treat organic wastes in the environment, especially in sewage and wastewater treatment. Recently, the rising demand to increase renewable energy production from organic matter has caused the spectrum of AD substrates to expand and include a wider variety of organic materials, such as agricultural residues and farm manure, which are generated annually at around 140 billion metric tons globally. The problem, however, is that agricultural wastes are composed of materials that are heterogeneous and difficult to degrade - particularly lignin, which makes up about 0–40% of the total lignocellulose content. This study aimed to evaluate the impact of varying concentrations of lignin on biogas yields and their subsequent response to a commercial yeast-based bioadditive in batch anaerobic digesters. The experiments were carried out in batches for a retention time of 56 days with different lignin concentrations (200 mg, 300 mg, 400 mg, 500 mg, and 600 mg) subjected to different conditions, first to determine the concentration of the bioadditive that was most optimal for overall process improvement and yield increase. The batch experiments were set up using 130 mL bottles with a working volume of 60 mL, maintained at 38°C in an incubator shaker (150 rpm). Digestate obtained from a local plant operating at mesophilic conditions was used as the starting inoculum, and commercial kraft lignin was used as feedstock. Biogas measurements were carried out using the displacement method and were corrected to standard temperature and pressure using standard gas equations. Furthermore, the modified Gompertz equation model was used to non-linearly regress the resulting data to estimate gas production potential, production rates, and the duration of lag phases as indicators of the degree of lignin inhibition. The results showed that lignin had a strong inhibitory effect on the AD process, and the higher the lignin concentration, the greater the inhibition. Also, the modelling showed that the rates of gas production were influenced by the concentrations of the lignin substrate added to the system – the higher the lignin concentration in mg (0, 200, 300, 400, 500, and 600), the lower the respective rate of gas production in mL/gVS.day (3.3, 2.2, 2.3, 1.6, 1.3, and 1.1), although the 300 mg rate increased by 0.1 mL/gVS.day over that of the 200 mg. The impact of the yeast-based bioadditive on the rate of production was most significant at 400 mg and 500 mg, as the rate was improved by 0.1 mL/gVS.day and 0.2 mL/gVS.day, respectively. This indicates that agricultural residues with higher lignin content may be more responsive to inhibition alleviation by the yeast-based bioadditive; therefore, further study of its application to the AD of agricultural residues of high lignin content will be the next step in this research. Keywords: anaerobic digestion, renewable energy, lignin valorisation, biogas
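For reference, the modified Gompertz model commonly used for such non-linear regression of cumulative biogas data (the abstract names the model but not its exact parameterisation, so this standard form is assumed) is

\[
B(t) = B_{0}\,\exp\!\left\{-\exp\!\left[\frac{R_{m}\,e}{B_{0}}\,(\lambda - t) + 1\right]\right\},
\]

where \(B(t)\) is the cumulative biogas yield (mL/gVS) at time \(t\) (days), \(B_{0}\) the gas production potential, \(R_{m}\) the maximum production rate (mL/gVS.day), \(\lambda\) the lag phase (days), and \(e = \exp(1)\).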
Procedia PDF Downloads 91
144 Suspended Sediment Concentration and Water Quality Monitoring Along Aswan High Dam Reservoir Using Remote Sensing
Authors: M. Aboalazayem, Essam A. Gouda, Ahmed M. Moussa, Amr E. Flifl
Abstract:
Field data collection is considered one of the most difficult tasks due to the difficulty of accessing large zones such as large lakes. Also, it is well known that the cost of obtaining field data is very high. Remote monitoring of lake water quality (WQ) provides an economically feasible approach compared to field data collection. Researchers have shown that lake WQ can be properly monitored via remote sensing (RS) analyses. Using satellite images as a method of WQ detection provides a realistic technique to measure quality parameters across huge areas. Landsat (LS) data provides full free access to frequently occurring and repeating satellite images. This enables researchers to undertake large-scale temporal comparisons of parameters related to lake WQ. Satellite measurements have been extensively utilized to develop algorithms for predicting critical water quality parameters (WQPs). The goal of this paper is to use RS to derive WQ indicators in the Aswan High Dam Reservoir (AHDR), which is considered Egypt's primary and strategic reservoir of freshwater. This study focuses on using Landsat-8 (L-8) band surface reflectance (SR) observations to predict water-quality characteristics, limited to turbidity (TUR), total suspended solids (TSS), and chlorophyll-a (Chl-a). ArcGIS Pro is used to retrieve L-8 SR data for the study region. Multiple linear regression analysis was used to derive new correlations between observed optical water-quality indicators in April and atmospherically corrected L-8 SR values of various bands, band ratios, and/or combinations. Field measurements taken in the month of May were used to validate the WQPs obtained from the SR data of the L-8 Operational Land Imager (OLI) satellite. The findings demonstrate a strong correlation between indicators of WQ and L-8. For TUR, the best validation correlation was obtained with the OLI SR blue, green, and red bands, with values of the coefficient of determination (R2) and root mean square error (RMSE) equal to 0.96 and 3.1 NTU, respectively. For TSS, two equations were strongly correlated and verified with band ratios and combinations. A logarithm of the ratio of blue and green SR was determined to be the best performing model, with values of R2 and RMSE equal to 0.9861 and 1.84 mg/l, respectively. For Chl-a, eight methods were presented for calculating its value within the study area. A mix of blue, red, shortwave infrared 1 (SWIR1) and panchromatic SR yielded the best validation results, with values of R2 and RMSE equal to 0.98 and 1.4 mg/l, respectively. Keywords: remote sensing, Landsat 8, Nasser Lake, water quality
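A minimal sketch of the kind of band-ratio regression described above, assuming hypothetical station data and the log(blue/green) predictor reported for TSS (the actual coefficients come from the study's April/May campaigns, not from this example):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical surface-reflectance values for sampled stations
# (B2 = blue, B3 = green in Landsat-8 OLI) and measured TSS (mg/L).
blue  = np.array([0.061, 0.055, 0.049, 0.072, 0.066, 0.058])
green = np.array([0.083, 0.070, 0.060, 0.101, 0.090, 0.074])
tss   = np.array([6.2,   4.8,   3.9,   9.5,   7.8,   5.4])

# Predictor proposed in the abstract: logarithm of the blue/green ratio
x = np.log10(blue / green).reshape(-1, 1)

model = LinearRegression().fit(x, tss)
pred = model.predict(x)

print("TSS = %.2f * log10(B2/B3) + %.2f" % (model.coef_[0], model.intercept_))
print("R2   = %.3f" % r2_score(tss, pred))
print("RMSE = %.3f mg/L" % np.sqrt(mean_squared_error(tss, pred)))
```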
Procedia PDF Downloads 92
143 Magnetic Navigation of Nanoparticles inside a 3D Carotid Model
Authors: E. G. Karvelas, C. Liosis, A. Theodorakakos, T. E. Karakasidis
Abstract:
Magnetic navigation of a drug inside the human vessels is a very important concept, since the drug is delivered to the desired area. Consequently, the quantity of the drug required to reach therapeutic levels is reduced, while the drug concentration at targeted sites is increased. Magnetic navigation of drug agents can be achieved with the use of magnetic nanoparticles, where anti-tumor agents are loaded on the surface of the nanoparticles. The magnetic field that is required to navigate the particles inside the human arteries is produced by a magnetic resonance imaging (MRI) device. The main factors which influence the efficiency of the use of magnetic nanoparticles for biomedical applications in magnetic driving are the size and the magnetization of the biocompatible nanoparticles. In this study, a computational platform for the simulation of the optimal gradient magnetic fields for the navigation of magnetic nanoparticles inside a carotid artery is presented. For the propulsion model of the particles, seven major forces are considered, i.e., the magnetic force from the MRI's main magnet static field as well as the magnetic field gradient force from the special propulsion gradient coils. The static field is responsible for the aggregation of nanoparticles, while the magnetic gradient contributes to the navigation of the agglomerates that are formed. Moreover, the contact forces among the aggregated nanoparticles and with the wall, and the Stokes drag force for each particle, are considered, while only spherical particles are used in this study. In addition, gravitational forces and the force due to buoyancy are included. Finally, the Van der Waals force and Brownian motion are taken into account in the simulation. The OpenFOAM platform is used for the calculation of the flow field and the uncoupled equations of particle motion. To determine the optimal gradient magnetic fields, a covariance matrix adaptation evolution strategy (CMA-ES) is used in order to navigate the particles into the desired area. A desired trajectory, along which the particles are to be navigated, is inserted into the computational geometry. Initially, the CMA-ES optimization strategy provides the OpenFOAM program with random values of the gradient magnetic field. At the end of each simulation, the computational platform evaluates the distance between the particles and the desired trajectory. The present model can simulate the motion of particles when they are navigated by the magnetic field that is produced by the MRI device. Under the influence of fluid flow, the model investigates the effect of different gradient magnetic fields in order to minimize the distance of particles from the desired trajectory. In addition, the platform can navigate the particles along the desired trajectory with an efficiency between 80-90%. On the other hand, a small number of particles become stuck to the walls and remain there for the rest of the simulation. Keywords: artery, drug, nanoparticles, navigation
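The optimisation loop can be sketched with the open-source cma package as follows; the objective function here is a runnable placeholder, whereas in the platform described above each evaluation would be a full OpenFOAM run returning the mean particle-to-trajectory distance (all values below are assumptions):

```python
import numpy as np
import cma  # pip install cma

def mean_distance_to_trajectory(gradients):
    """Placeholder for the expensive evaluation step: in the described platform
    this would launch an OpenFOAM simulation with the candidate gradient-field
    amplitudes and return the mean distance between the simulated particle
    positions and the desired trajectory. A simple quadratic stand-in keeps
    the loop runnable here."""
    target = np.array([0.012, -0.004, 0.020])   # hypothetical optimum (T/m)
    return float(np.sum((np.asarray(gradients) - target) ** 2))

# three gradient-coil amplitudes (Gx, Gy, Gz): initial guess and step size
es = cma.CMAEvolutionStrategy([0.0, 0.0, 0.0], 0.01)
while not es.stop():
    candidates = es.ask()                                   # sample gradient fields
    fitness = [mean_distance_to_trajectory(c) for c in candidates]
    es.tell(candidates, fitness)                            # update the search distribution
print("best gradient amplitudes:", es.result.xbest)
```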
Procedia PDF Downloads 107
142 Adapting an Accurate Reverse-time Migration Method to USCT Imaging
Authors: Brayden Mi
Abstract:
Reverse time migration has been widely used in the petroleum exploration industry to reveal subsurface images and to detect rock and fluid properties since the early 1980s. The seismic technology involves the construction of a velocity model through interpretive model building, seismic tomography, or full waveform inversion, and the reverse-time propagation of the acquired seismic data together with the original wavelet used in the acquisition. The methodology has matured from 2D imaging of simple media to present-day handling of full 3D imaging challenges in extremely complex geological conditions. Conventional ultrasound computed tomography (USCT) utilizes travel-time inversion to reconstruct the velocity structure of an organ. With the velocity structure, USCT data can be migrated with the "bent-ray" method. Its seismic counterpart is called Kirchhoff depth migration, in which the source of reflective energy is traced by ray tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and irregular acquisition geometries. Reverse time migration (RTM), on the other hand, fully accounts for the wave phenomena, including multiple arrivals and turning rays due to complex velocity structure. It has the capability to fully reconstruct the image detectable in its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, and may not be applicable to real-time imaging as normally required in day-to-day medical operations. However, with the improvement of computing technology, such a computational bottleneck may not present a challenge in the near future. Present-day RTM algorithms are typically implemented from a flat datum for the seismic industry. They can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided. Such flexibility of RTM can be conveniently exploited for application to USCT imaging if the spatial coordinates of the transmitters and receivers are known and enough data is collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging to produce an accurate 3D acoustic image, based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated back in time, and a known ultrasound wavelet is propagated forward in time, with PSPI wavefield extrapolation and a piecewise-constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each image is subject to the limitation of its own illumination aperture, the stack of multiple partial images will produce a full image of the organ, with a much-reduced noise level compared with individual partial images. Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation
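For context, a standard PSPI extrapolation step and zero-lag cross-correlation imaging condition (the generic textbook forms; the paper's exact discretisation is not given in the abstract) can be written as

\[
\tilde{P}_j(k_x,k_y,z+\Delta z,\omega) = \tilde{P}(k_x,k_y,z,\omega)\, e^{\,i k_{z,j}\Delta z},
\qquad
k_{z,j} = \sqrt{\frac{\omega^2}{v_j^{2}} - k_x^{2} - k_y^{2}},
\]

with the extrapolated wavefield interpolated between the reference velocities \(v_j\), and the partial image for each shot formed by

\[
I(\mathbf{x}) = \sum_{\text{shots}} \int_{0}^{T} S(\mathbf{x},t)\, R(\mathbf{x},t)\, dt,
\]

where \(S\) is the forward-propagated source wavefield (the known ultrasound wavelet) and \(R\) the back-propagated receiver data.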
Procedia PDF Downloads 74
141 Analysis of Reduced Mechanisms for Premixed Combustion of Methane/Hydrogen/Propane/Air Flames in Geometrically Modified Combustor and Its Effects on Flame Properties
Authors: E. Salem
Abstract:
Combustion has been used for a long time as a means of energy extraction. However, in recent years, there has been a further increase in air pollution through pollutants such as nitrogen oxides, acids, etc. In order to address this problem, there is a need to reduce carbon and nitrogen oxides through lean burning, modification of combustors, and fuel dilution. A numerical investigation has been carried out to assess the effectiveness of several reduced mechanisms, in terms of computational time and accuracy, for the combustion of hydrocarbon/air mixtures, pure or diluted with hydrogen, in a micro combustor. The simulations were carried out using ANSYS Fluent 19.1. To validate the results, the PREMIX and CHEMKIN codes were used to calculate 1D premixed flames based on the temperature and composition of the burned and unburned gas mixtures. Numerical calculations were carried out for several hydrocarbons by changing the equivalence ratios and adding small amounts of hydrogen into the fuel blends, then analyzing the flammability limit and the reduction in NOx and CO emissions, and comparing them to experimental data. By solving the conservation equations, several global reduced mechanisms (2-9-12) were obtained. These reduced mechanisms were simulated on a 2D cylindrical tube with dimensions of 40 cm in length and 2.5 cm in diameter. The mesh of the model included a suitably fine quad mesh within the first 7 cm of the tube and around the walls. After developing a proper boundary layer, several simulations were performed on hydrocarbon/air blends to visualize the flame characteristics, which were then compared with experimental data. Once the results were within an acceptable range, the geometry of the combustor was modified by changing the length and diameter, adding hydrogen by volume, and changing the equivalence ratios from lean to rich in the fuel blends; the resulting flame temperature, shape, velocity, and concentrations of radicals and emissions were observed. It was determined that the reduced mechanisms provided results within an acceptable range. The variation of the inlet velocity and geometry of the tube led to an increase in the temperature and CO2 emissions; the highest temperatures were obtained at lean conditions (0.5-0.9 equivalence ratio). The addition of hydrogen into the combustor fuel blends resulted in a reduction in CO and NOx emissions and an expansion of the flammability limit, under the condition of keeping the same laminar flow while varying the equivalence ratio with hydrogen addition. The production of NO is reduced because the combustion happens in a leaner state, which helps in solving environmental problems. Keywords: combustor, equivalence-ratio, hydrogenation, premixed flames
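For clarity, the equivalence ratio referred to above is defined as

\[
\phi = \frac{(m_{\text{fuel}}/m_{\text{air}})}{(m_{\text{fuel}}/m_{\text{air}})_{\text{stoich}}},
\]

so that \(\phi < 1\) denotes lean mixtures (e.g., the 0.5-0.9 range mentioned above) and \(\phi > 1\) rich mixtures.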
Procedia PDF Downloads 114
140 Thermoregulatory Responses of Holstein Cows Exposed to Intense Heat Stress
Authors: Rodrigo De A. Ferrazza, Henry D. M. Garcia, Viviana H. V. Aristizabal, Camilla De S. Nogueira, Cecilia J. Verissimo, Jose Roberto Sartori, Roberto Sartori, Joao Carlos P. Ferreira
Abstract:
Environmental factors adversely influence sustainability in livestock production system. Dairy herds are the most affected by heat stress among livestock industries. This clearly implies in development of new strategies for mitigating heat, which should be based on physiological and metabolic adaptations of the animal. In this study, we incorporated the effect of climate variables and heat exposure time on the thermoregulatory responses in order to clarify the adaptive mechanisms for bovine heat dissipation under intense thermal stress induced experimentally in climate chamber. Non-lactating Holstein cows were contemporaneously and randomly assigned to thermoneutral (TN; n=12) or heat stress (HS; n=12) treatments during 16 days. Vaginal temperature (VT) was measured every 15 min with a microprocessor-controlled data logger (HOBO®, Onset Computer Corporation, Bourne, MA, USA) attached to a modified vaginal controlled internal drug release insert (Sincrogest®, Ourofino, Brazil). Rectal temperature (RT), respiratory rate (RR) and heart rate (HR) were measured twice a day (0700 and 1500h) and dry matter intake (DMI) was estimated daily. The ambient temperature and air relative humidity were 25.9±0.2°C and 73.0±0.8%, respectively for TN, and 36.3± 0.3°C and 60.9±0.9%, respectively for HS. Respiratory rate of HS cows increased immediately after exposure to heat and was higher (76.02±1.70bpm; P<0.001) than TN (39.70±0.71bpm), followed by rising of RT (39.87°C±0.07 for HS versus 38.56±0.03°C for TN; P<0.001) and VT (39.82±0.10°C for HS versus 38.26±0.03°C for TN; P<0.001). A diurnal pattern was detected, with higher (P<0.01) afternoon temperatures than morning and this effect was aggravated for HS cows. There was decrease (P<0.05) of HR for HS cows (62.13±0.99bpm) compared to TN (66.23±0.79bpm), but the magnitude of the differences was not the same over time. From the third day, there was a decrease of DMI for HS in attempt to maintain homeothermy, while TN cows increased DMI (8.27kg±0.33kg d-1 for HS versus 14.03±0.29kg d-1 for TN; P<0.001). By regression analysis, RT and RR better reflected the response of cows to changes in the Temperature Humidity Index and the effect of climate variables from the previous day to influence the physiological parameters and DMI was more important than the current day, with ambient temperature the most important factor. Comparison between acute (0 to 3 days) and chronic (13 to 16 days) exposure to heat stress showed decreasing of the slope of the regression equations for RR and DMI, suggesting an adaptive adjustment, however with no change for RT. In conclusion, intense heat stress exerted strong influence on the thermoregulatory mechanisms, but the acclimation process was only partial.Keywords: acclimation, bovine, climate chamber, hyperthermia, thermoregulation
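The abstract does not state which Temperature Humidity Index formulation was used in the regression analysis; one widely used variant for cattle, assumed here only for illustration, is

\[
\mathrm{THI} = (1.8\,T_a + 32) - (0.55 - 0.0055\,\mathrm{RH})\,(1.8\,T_a - 26),
\]

with \(T_a\) the ambient temperature in °C and RH the relative humidity in %.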
Procedia PDF Downloads 218
139 Photonic Dual-Microcomb Ranging with Extreme Speed Resolution
Authors: R. R. Galiev, I. I. Lykov, A. E. Shitikov, I. A. Bilenko
Abstract:
Dual-comb interferometry is based on the mixing of two optical frequency combs with slightly different line spacings, which results in the mapping of the optical spectrum into the radio-frequency domain for subsequent digitizing and numerical processing. The dual-comb approach enables diverse applications, including metrology, fast high-precision spectroscopy, and distance ranging. Ordinary frequency-modulated continuous-wave (FMCW) laser-based Light Detection and Ranging systems (LIDARs) suffer from two main disadvantages: slow and unreliable mechanical spatial scanning, and a rather wide linewidth of conventional lasers, which limits speed measurement resolution. Dual-comb distance measurements with Allan deviations down to 12 nanometers at averaging times of 13 microseconds, along with ultrafast ranging at acquisition rates of 100 megahertz, allowing for in-flight sampling of gun projectiles moving at 150 meters per second, were previously demonstrated. Nevertheless, pump lasers with EDFA amplifiers made the device bulky and expensive. An alternative approach is a direct coupling of the laser to a reference microring cavity. Backscattering can tune the laser to the eigenfrequency of the cavity via the so-called self-injection locking (SIL) effect. Moreover, the nonlinearity of the cavity allows solitonic frequency comb generation in the very same cavity. In this work, we developed a fully integrated, power-efficient, electrically driven dual-microcomb source based on the self-injection locking of semiconductor lasers to high-quality integrated Si3N4 microresonators. We managed to obtain robust comb generation at 1400-1700 nm with a 150 GHz or 1 THz line spacing and measured less than 1 kHz Lorentzian widths of stable, MHz-spaced beat notes in a GHz band using two separate chips, each pumped by its own self-injection locked laser. A deep investigation of the SIL dynamics allows us to find a turn-key operation regime even for affordable Fabry-Perot multifrequency lasers used as a pump. It is important that such lasers are usually more powerful than DFB ones, which were also tested in our experiments. In order to test the advantages of the proposed techniques, we experimentally measured the minimum detectable speed of a reflective object. It has been shown that the narrow line of the laser locked to the microresonator provides markedly better velocity accuracy, showing velocity resolution down to 16 nm/s, while the non-SIL diode laser only allowed 160 nm/s with good accuracy. The results obtained are in agreement with the estimations and open up ways to develop LIDARs based on compact and cheap lasers. Our implementation uses affordable components, including semiconductor laser diodes and commercially available silicon nitride photonic circuits with microresonators. Keywords: dual-comb spectroscopy, LIDAR, optical microresonator, self-injection locking
Procedia PDF Downloads 72
138 Bringing the World to Net Zero Carbon Dioxide by Sequestering Biomass Carbon
Authors: Jeffrey A. Amelse
Abstract:
Many corporations aspire to become net zero in carbon dioxide emissions by 2035-2050. This paper examines what it will take to achieve those goals. Achieving Net Zero CO₂ requires an understanding of where energy is produced and consumed, the magnitude of CO₂ generation, and a proper understanding of the Carbon Cycle. The latter leads to the distinction between CO₂ sequestration and biomass carbon sequestration. Short reviews are provided of technologies previously proposed for reducing CO₂ emissions from fossil fuels or substituting them with renewable energy, to focus on their limitations and to show that none offer a complete solution. Of these, CO₂ sequestration is poised to have the largest impact. It will just cost money, scale-up is a huge challenge, and it will not be a complete solution. CO₂ sequestration is still at the demonstration and semi-commercial scale. Transportation accounts for only about 30% of total U.S. energy demand, and renewables account for only a small fraction of that sector. Yet, bioethanol production consumes 40% of the U.S. corn crop, and biodiesel consumes 30% of U.S. soybeans. It is unrealistic to believe that biofuels can completely displace fossil fuels in the transportation market. Bioethanol is traced through its Carbon Cycle and shown to be both energy inefficient and an inefficient use of biomass carbon. Both biofuels and CO₂ sequestration reduce future CO₂ emissions from continued use of fossil fuels. They will not remove CO₂ already in the atmosphere. Planting more trees has been proposed as a way to reduce atmospheric CO₂. Trees are a temporary solution. When they complete their Carbon Cycle, they die and release their carbon as CO₂ to the atmosphere. Thus, planting more trees is just 'kicking the can down the road.' The only way to permanently remove CO₂ already in the atmosphere is to break the Carbon Cycle by growing biomass from atmospheric CO₂ and sequestering the biomass carbon. Sequestering tree leaves is proposed as a solution. Unlike wood, leaves have a short Carbon Cycle time constant. They renew and decompose every year. Allometric equations from the USDA indicate that, theoretically, sequestering only a fraction of the world's tree leaves can get the world to Net Zero CO₂ without disturbing the underlying forests. How can tree leaves be permanently sequestered? It may be as simple as rethinking how landfills are designed, so that they discourage instead of encourage decomposition. In traditional landfills, municipal waste undergoes rapid initial aerobic decomposition to CO₂, followed by slow anaerobic decomposition to methane and CO₂. The latter can take hundreds to thousands of years. The first step in anaerobic decomposition is hydrolysis of cellulose to release sugars, which those who have worked on cellulosic ethanol know is challenging for a number of reasons. The key to permanent leaf sequestration may be keeping the landfills dry and exploiting known inhibitors of anaerobic bacteria. Keywords: carbon dioxide, net zero, sequestration, biomass, leaves
Procedia PDF Downloads 128
137 Study of Ion Density Distribution and Sheath Thickness in Warm Electronegative Plasma
Authors: Rajat Dhawan, Hitendra K. Malik
Abstract:
Electronegative plasmas, comprising electrons, positive ions, and negative ions, are advantageous because of their expanding applications in industry. In plasma cleaning, plasma etching, and plasma deposition processes, electronegative plasmas are preferred because of the relatively lower potential developed on the surface of the material under investigation. Also, the presence of negative ions avoids irregularity in etched shapes and enhances the material processing during fabrication. Understanding the interaction of a metallic conducting surface with the plasma is therefore essential for these applications. A metallic conducting probe immersed in a plasma results in the formation of a thin layer of charged species around the probe, called a sheath. The density of the ions embedded on the surface of the material and the sheath thickness are the important parameters for surface-plasma interaction. The sheath thickness gives information on the plasma region affected by the conducting surface/probe. The knowledge of the density of ions in the sheath region is advantageous in plasma nitriding, and their temperature is equally important as it strongly influences the thickness of the modified layer during surface-plasma interaction. In the present work, we consider a negatively biased metallic probe immersed in a warm electronegative plasma. For this system, we adopt the continuity equation and the momentum transfer equation for both the positive and negative ions, whereas electrons are described by the Boltzmann distribution. Finally, we use Poisson's equation. Here, we assume spherical geometry for a small probe radius. Poisson's equation, together with the continuity and momentum transfer equations and proper boundary conditions, reveals the behaviour of the potential surrounding the conducting metallic probe. In turn, it gives the density profiles of the charged species and, most importantly, the thickness of the sheath. All calculations are carried out keeping in mind the well-known Bohm sheath criterion. We found that the positive ion density decreases with an increase in positive ion temperature, whereas it increases with a higher temperature of the negative ions. The positive ion density decreases as we move away from the center of the probe and is found to show a discontinuity at a particular distance from the center of the probe. The distance where the discontinuity occurs is designated as the sheath edge, i.e., the point where the sheath ends. These results are beneficial for industrial applications, as the density of ions embedded on a material surface is strongly affected by the temperature of the plasma species. It has a drastic influence on the surface properties, i.e., the hardness, corrosion resistance, etc., of the materials. Keywords: electronegative plasmas, plasma surface interaction, positive ion density, sheath thickness
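Schematically, the model described above combines Boltzmann electrons with fluid ion equations and Poisson's equation of the form (notation assumed; the paper's exact normalisation is not given in the abstract):

\[
n_e = n_{e0}\exp\!\left(\frac{e\phi}{k_B T_e}\right), \qquad
\nabla\cdot(n_\pm \mathbf{v}_\pm) = 0, \qquad
m_\pm n_\pm(\mathbf{v}_\pm\cdot\nabla)\mathbf{v}_\pm = \mp e\,n_\pm\nabla\phi - \nabla p_\pm,
\]

\[
\frac{1}{r^{2}}\frac{d}{dr}\!\left(r^{2}\frac{d\phi}{dr}\right) = -\frac{e}{\varepsilon_0}\,\bigl(n_+ - n_- - n_e\bigr),
\]

where the upper/lower signs refer to the positive/negative ions and the last equation is Poisson's equation written in the spherical geometry assumed for a small probe.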
Procedia PDF Downloads 131
136 Risk Assessment and Haloacetic Acids Exposure in Drinking Water in Tunja, Colombia
Authors: Bibiana Matilde Bernal Gómez, Manuel Salvador Rodríguez Susa, Mildred Fernanda Lemus Perez
Abstract:
In chlorinated drinking water, haloacetic acids have been identified and are classified as disinfection byproducts originating from the reaction between natural organic matter and/or bromide ions in water sources. These byproducts can also be generated through a variety of chemical and pharmaceutical processes. The term 'Total Haloacetic Acids' (THAAs) is used to describe the cumulative concentration of dichloroacetic acid, trichloroacetic acid, monochloroacetic acid, monobromoacetic acid, and dibromoacetic acid in water samples, which are usually measured to evaluate water quality. The chronic presence of these acids in drinking water carries a risk of cancer in humans. The detection of THAAs for the first time in 15 municipalities of Boyacá was accomplished in 2023. The aim is to describe the correlation between the levels of THAAs and digestive cancer in Tunja, a city in Colombia with higher rates of digestive cancer, and to compare the risk across the 15 towns, taking into account factors such as water quality. A research project was conducted with the aim of comparing water sources based on the geographical features of each town, describing the disinfection process in the 15 municipalities, and exploring physical properties such as water temperature and pH level. The project also involved a study of contact time based on habits documented through a survey, and a comparison of socioeconomic factors and lifestyle, in order to assess the personal risk of exposure. Data on the levels of THAAs were obtained after characterizing the water quality in urban sectors over eight months of 2022, based on the protocol described in the Stage 2 DBP rule of the United States Environmental Protection Agency (USEPA) from 2006, which takes into account the size of the population being supplied. A cancer risk assessment was conducted to evaluate the likelihood of an individual developing cancer due to exposure to THAAs. The assessment considered exposure routes such as oral ingestion, skin absorption, and inhalation. The chronic daily intake (CDI) for these exposure routes was calculated using specific equations. The lifetime cancer risk (LCR) was then determined by adding the cancer risks from the three exposure routes for each HAA. The risk assessment process involved four phases: exposure assessment, toxicity evaluation, data gathering and analysis, and risk definition and management. The results indicate that there is a cumulatively higher risk of digestive cancer due to THAA exposure in drinking water. Keywords: haloacetic acids, drinking water, water quality, cancer risk assessment
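For reference, the standard USEPA-style expressions behind the CDI and LCR mentioned above (symbols assumed; the study's specific parameter values are not given in the abstract) are

\[
\mathrm{CDI}_{\text{oral}} = \frac{C_w \times \mathrm{IR} \times \mathrm{EF} \times \mathrm{ED}}{\mathrm{BW} \times \mathrm{AT}},
\qquad
\mathrm{LCR} = \sum_{\text{routes}} \mathrm{CDI} \times \mathrm{SF},
\]

where \(C_w\) is the THAA concentration in water (mg/L), IR the ingestion rate (L/day), EF the exposure frequency (days/year), ED the exposure duration (years), BW the body weight (kg), AT the averaging time (days), and SF the route-specific cancer slope factor ((mg/kg.day)\(^{-1}\)); analogous CDI expressions apply to the dermal and inhalation routes.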
Procedia PDF Downloads 57
135 Folding of β-Structures via the Polarized Structure-Specific Backbone Charge (PSBC) Model
Authors: Yew Mun Yip, Dawei Zhang
Abstract:
Proteins are the biological machinery that executes specific vital functions in every cell of the human body by folding into their 3D structures. When a protein misfolds from its native structure, the machinery will malfunction and lead to misfolding diseases. Although in vitro experiments are able to conclude that the mutations of the amino acid sequence lead to incorrectly folded protein structures, these experiments are unable to decipher the folding process. Therefore, molecular dynamic (MD) simulations are employed to simulate the folding process so that our improved understanding of the folding process will enable us to contemplate better treatments for misfolding diseases. MD simulations make use of force fields to simulate the folding process of peptides. Secondary structures are formed via the hydrogen bonds formed between the backbone atoms (C, O, N, H). It is important that the hydrogen bond energy computed during the MD simulation is accurate in order to direct the folding process to the native structure. Since the atoms involved in a hydrogen bond possess very dissimilar electronegativities, the more electronegative atom will attract greater electron density from the less electronegative atom towards itself. This is known as the polarization effect. Since the polarization effect changes the electron density of the two atoms in close proximity, the atomic charges of the two atoms should also vary based on the strength of the polarization effect. However, the fixed atomic charge scheme in force fields does not account for the polarization effect. In this study, we introduce the polarized structure-specific backbone charge (PSBC) model. The PSBC model accounts for the polarization effect in MD simulation by updating the atomic charges of the backbone hydrogen bond atoms according to equations derived between the amount of charge transferred to the atom and the length of the hydrogen bond, which are calculated from quantum-mechanical calculations. Compared to other polarizable models, the PSBC model does not require quantum-mechanical calculations of the peptide simulated at every time-step of the simulation and maintains the dynamic update of atomic charges, thereby reducing the computational cost and time while accounting for the polarization effect dynamically at the same time. The PSBC model is applied to two different β-peptides, namely the Beta3s/GS peptide, a de novo designed three-stranded β-sheet whose structure is folded in vitro and studied by NMR, and the trpzip peptides, a double-stranded β-sheet where a correlation is found between the type of amino acids that constitute the β-turn and the β-propensity.Keywords: hydrogen bond, polarization effect, protein folding, PSBC
Procedia PDF Downloads 270
134 Effects of Temperature and Mechanical Abrasion on Microplastics
Authors: N. Singh, G. K. Darbha
Abstract:
Since the last decade, a wave of research has begun to study the prevalence and impact of ever-increasing plastic pollution in the environment. The wide application and ubiquitous distribution of plastic have become a global concern due to its persistent nature. The disposal of plastics has emerged as one of the major challenges for waste management landfills. Microplastics (MPs) have found its existence in almost every environment, from the high altitude mountain lake to the deep sea sediments, polar icebergs, coral reefs, estuaries, beaches, and river, etc. Microplastics are fragments of plastics with size less than 5 mm. Microplastics can be classified as primary microplastics and secondary microplastics. Primary microplastics includes purposefully introduced microplastics into the end products for consumers (microbeads used in facial cleansers, personal care product, etc.), pellets (used in manufacturing industries) or fibres (from textile industries) which finally enters into the environment. Secondary microplastics are formed by disintegration of larger fragments under the exposure of sunlight, mechanical abrasive forces by rain, waves, wind and/or water. A number of factors affect the quantity of microplastic present in freshwater environments. In addition to physical forces, human population density proximal to the water body, proximity to urban centres, water residence time, and size of the water body also affects plastic properties. With time, other complex processes in nature such as physical, chemical and biological break down plastics by interfering with its structural integrity. Several studies demonstrate that microplastics found in wastewater sludge being used as manure for agricultural fields, thus having the tendency to alter the soil environment condition influencing the microbial population as well. Inadequate data are available on the fate and transport of microplastics under varying environmental conditions that are required to supplement important information for further research. In addition, microplastics have the tendency to absorb heavy metals and hydrophobic organic contaminants such as PAHs and PCBs from its surroundings and thus acting as carriers for these contaminants in the environment system. In this study, three kinds of microplastics (polyethylene, polypropylene and expanded polystyrene) of different densities were chosen. Plastic samples were placed in sand with different aqueous media (distilled water, surface water, groundwater and marine water). It was incubated at varying temperatures (25, 35 and 40 °C) and agitation levels (rpm). The results show that the number of plastic fragments enhanced with increase in temperature and agitation speed. Moreover, the rate of disintegration of expanded polystyrene is high compared to other plastics. These results demonstrate that temperature, salinity, and mechanical abrasion plays a major role in degradation of plastics. Since weathered microplastics are more harmful as compared to the virgin microplastics, long-term studies involving other environmental factors are needed to have a better understanding of degradation of plastics.Keywords: environmental contamination, fragmentation, microplastics, temperature, weathering
Procedia PDF Downloads 170
133 Numerical Investigation of Phase Change Materials (PCM) Solidification in a Finned Rectangular Heat Exchanger
Authors: Mounir Baccar, Imen Jmal
Abstract:
Because of the rise in energy costs, thermal storage systems designed for the heating and cooling of buildings are becoming increasingly important. Energy storage can not only reduce the time or rate mismatch between energy supply and demand but also plays an important role in energy conservation. One of the most preferable storage techniques is the Latent Heat Thermal Energy Storage (LHTES) by Phase Change Materials (PCM) due to its important energy storage density and isothermal storage process. This paper presents a numerical study of the solidification of a PCM (paraffin RT27) in a rectangular thermal storage exchanger for air conditioning systems taking into account the presence of natural convection. Resolution of continuity, momentum and thermal energy equations are treated by the finite volume method. The main objective of this numerical approach is to study the effect of natural convection on the PCM solidification time and the impact of fins number on heat transfer enhancement. It also aims at investigating the temporal evolution of PCM solidification, as well as the longitudinal profiles of the HTF circling in the duct. The present research undertakes the study of two cases: the first one treats the solidification of PCM in a PCM-air heat exchanger without fins, while the second focuses on the solidification of PCM in a heat exchanger of the same type with the addition of fins (3 fins, 5 fins, and 9 fins). Without fins, the stratification of the PCM from colder to hotter during the heat transfer process has been noted. This behavior prevents the formation of thermo-convective cells in PCM area and then makes transferring almost conductive. In the presence of fins, energy extraction from PCM to airflow occurs at a faster rate, which contributes to the reduction of the discharging time and the increase of the outlet air temperature (HTF). However, for a great number of fins (9 fins), the enhancement of the solidification process is not significant because of the effect of confinement of PCM liquid spaces for the development of thermo-convective flow. Hence, it can be concluded that the effect of natural convection is not very significant for a high number of fins. In the optimum case, using 3 fins, the increasing temperature of the HTF exceeds approximately 10°C during the first 30 minutes. When solidification progresses from the surfaces of the PCM-container and propagates to the central liquid phase, an insulating layer will be created in the vicinity of the container surfaces and the fins, causing a low heat exchange rate between PCM and air. As the solid PCM layer gets thicker, a progressive regression of the field of movements is induced in the liquid phase, thus leading to the inhibition of heat extraction process. After about 2 hours, 68% of the PCM became solid, and heat transfer was almost dominated by conduction mechanism.Keywords: heat transfer enhancement, front solidification, PCM, natural convection
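One common way to write the energy equation for such a fixed-grid finite-volume treatment of solidification (an assumed enthalpy-type formulation; the abstract does not give the exact discretised equations used) is

\[
\frac{\partial(\rho c_p T)}{\partial t} + \nabla\cdot(\rho c_p\,\mathbf{u}\,T) = \nabla\cdot(k\,\nabla T) - \rho L \frac{\partial f_l}{\partial t},
\]

where \(f_l\) is the local liquid fraction (1 in the liquid PCM, 0 in the solid) and \(L\) the latent heat of fusion; the sink term releases latent heat as the paraffin RT27 solidifies, while a buoyancy term in the momentum equation drives the natural convection discussed above.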
Procedia PDF Downloads 187
132 Performance Analysis of Double Gate FinFET at Sub-10NM Node
Authors: Suruchi Saini, Hitender Kumar Tyagi
Abstract:
With the rapid progress of the nanotechnology industry, it is becoming increasingly important to have compact semiconductor devices to function and offer the best results at various technology nodes. While performing the scaling of the device, several short-channel effects occur. To minimize these scaling limitations, some device architectures have been developed in the semiconductor industry. FinFET is one of the most promising structures. Also, the double-gate 2D Fin field effect transistor has the benefit of suppressing short channel effects (SCE) and functioning well for less than 14 nm technology nodes. In the present research, the MuGFET simulation tool is used to analyze and explain the electrical behaviour of a double-gate 2D Fin field effect transistor. The drift-diffusion and Poisson equations are solved self-consistently. Various models, such as Fermi-Dirac distribution, bandgap narrowing, carrier scattering, and concentration-dependent mobility models, are used for device simulation. The transfer and output characteristics of the double-gate 2D Fin field effect transistor are determined at 10 nm technology node. The performance parameters are extracted in terms of threshold voltage, trans-conductance, leakage current and current on-off ratio. In this paper, the device performance is analyzed at different structure parameters. The utilization of the Id-Vg curve is a robust technique that holds significant importance in the modeling of transistors, circuit design, optimization of performance, and quality control in electronic devices and integrated circuits for comprehending field-effect transistors. The FinFET structure is optimized to increase the current on-off ratio and transconductance. Through this analysis, the impact of different channel widths, source and drain lengths on the Id-Vg and transconductance is examined. Device performance was affected by the difficulty of maintaining effective gate control over the channel at decreasing feature sizes. For every set of simulations, the device's features are simulated at two different drain voltages, 50 mV and 0.7 V. In low-power and precision applications, the off-state current is a significant factor to consider. Therefore, it is crucial to minimize the off-state current to maximize circuit performance and efficiency. The findings demonstrate that the performance of the current on-off ratio is maximum with the channel width of 3 nm for a gate length of 10 nm, but there is no significant effect of source and drain length on the current on-off ratio. The transconductance value plays a pivotal role in various electronic applications and should be considered carefully. In this research, it is also concluded that the transconductance value of 340 S/m is achieved with the fin width of 3 nm at a gate length of 10 nm and 2380 S/m for the source and drain extension length of 5 nm, respectively.Keywords: current on-off ratio, FinFET, short-channel effects, transconductance
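A small post-processing sketch of the figures of merit discussed above (transconductance, Ion/Ioff, and a constant-current threshold voltage), using hypothetical Id-Vg data in place of the MuGFET simulation output:

```python
import numpy as np

# Hypothetical Id-Vg sweep for a double-gate FinFET at Vds = 0.7 V.
vg = np.linspace(0.0, 0.7, 15)                         # gate voltage (V)
id_a = np.array([2e-12, 1.5e-11, 1.1e-10, 8e-10, 6e-9, 4e-8, 2.5e-7,
                 1.2e-6, 4e-6, 9e-6, 1.6e-5, 2.4e-5, 3.3e-5, 4.2e-5, 5.1e-5])  # A

# Transconductance gm = dId/dVg, evaluated numerically along the sweep
gm = np.gradient(id_a, vg)                             # S

# On/off current ratio: Ion at the highest Vg, Ioff at Vg = 0 V
ion_ioff = id_a[-1] / id_a[0]

# Simple constant-current threshold-voltage estimate (current criterion assumed)
i_crit = 1e-7                                          # A
vth = np.interp(i_crit, id_a, vg)

print(f"peak gm      = {gm.max():.3e} S")
print(f"Ion/Ioff     = {ion_ioff:.2e}")
print(f"Vth (approx) = {vth:.3f} V")
```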
Procedia PDF Downloads 61
131 Improved Functions For Runoff Coefficients And Smart Design Of Ditches & Biofilters For Effective Flow detention
Authors: Thomas Larm, Anna Wahlsten
Abstract:
An international literature study has been carried out to compare commonly used methods for the dimensioning of transport systems and stormwater facilities for flow detention. The focus of the literature study regarding the calculation of design flow and detention has been the widely used Rational method and its underlying parameters. The impact of chosen design parameters such as return time, rain intensity, runoff coefficient, and climate factor has been studied. The parameters used in the calculations have been analyzed regarding how they can be calculated and within what limits they can be used. Data used in different countries have been specified, e.g., recommended rainfall return times, estimated runoff times, and climate factors used for different cases and time periods. The literature study concluded that the runoff coefficient is the most uncertain parameter, and the one that affects the calculated flow and required detention volume the most. Proposals have been developed for new runoff coefficients, including a new proposed method with equations for calculating runoff coefficients as a function of return time (years) and rain intensity (l/s/ha), respectively. It is suggested that the use of the Rational method need not be limited to a specific catchment size, contrary to what many design manuals recommend. The proposed relationships between return time or rain intensity and runoff coefficients need further investigation, including the quantification of uncertainties. Examples of parameters that have not yet been considered are the influence on the runoff coefficients of different design rain durations and of the degree of water saturation of green areas, which will be investigated further. The influence of climate effects and design rain on the dimensioning of the stormwater facilities grassed ditches and biofilters (bioretention systems) has been studied, focusing on flow detention capacity. We have investigated how the calculated runoff coefficients, accounting for the climate effect and the influence of an increased return time, affect the inflow to and dimensioning of the stormwater facilities. We have developed a smart design of ditches and biofilters that results in both high treatment and high flow detention effects and compared these with the effects of dry and wet ponds. Studies of biofilters have generally focused on the treatment of pollutants, but their effect on flow volume, and how their flow detention capability can be improved, has only rarely been studied. For both the new type of stormwater ditch and the biofilter, it is necessary to be able to simulate their performance in a model under larger design rains and a future climate, as these conditions cannot be tested in the field. The stormwater model StormTac Web has been used for the case studies. The results showed that the new smart design of ditches and biofilters had a flow detention capacity similar to that of dry and wet ponds for the same facility area. Keywords: runoff coefficients, flow detention, smart design, biofilter, ditch
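For orientation, the Rational method at the core of the study computes the design flow as

\[
Q = C \cdot f_c \cdot i(t_r, T)\cdot A,
\]

where \(Q\) is the design flow (l/s), \(C\) the runoff coefficient (-), \(f_c\) the climate factor (-), \(i\) the rain intensity (l/s/ha) for a rain duration \(t_r\) and return time \(T\), and \(A\) the catchment area (ha); the improvement proposed above replaces a constant \(C\) with a function \(C = f(T)\) or \(C = f(i)\), whose coefficients are not reproduced here.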
Procedia PDF Downloads 87
130 Forest Fire Burnt Area Assessment in a Part of West Himalayan Region Using Differenced Normalized Burnt Ratio and Neural Network Approach
Authors: Sunil Chandra, Himanshu Rawat, Vikas Gusain, Triparna Barman
Abstract:
Forest fires are a recurrent phenomenon in the Himalayan region owing to the presence of vulnerable forest types, topographical gradients, climatic weather conditions, and anthropogenic pressure. The present study focuses on the identification of forest fire-affected areas in a small part of the West Himalayan region using a differential normalized burnt ratio method and spectral unmixing methods. The study area has a rugged terrain with the presence of sub-tropical pine forest, montane temperate forest, and sub-alpine forest and scrub. The major reason for fires in this region is anthropogenic in nature, with the practice of human-induced fires for getting fresh leaves, scaring wild animals to protect agricultural crops, grazing practices within reserved forests, and igniting fires for cooking and other reasons. The fires caused by the above reasons affect a large area on the ground, necessitating its precise estimation for further management and policy making. In the present study, two approaches have been used for carrying out a burnt area analysis. The first approach followed for burnt area analysis uses a differenced normalized burnt ratio (dNBR) index approach that uses the burnt ratio values generated using the Short-Wave Infrared (SWIR) band and Near Infrared (NIR) bands of the Sentinel-2 image. The results of the dNBR have been compared with the outputs of the spectral mixing methods. It has been found that the dNBR is able to create good results in fire-affected areas having homogenous forest stratum and with slope degree <5 degrees. However, in a rugged terrain where the landscape is largely influenced by the topographical variations, vegetation types, tree density, the results may be largely influenced by the effects of topography, complexity in tree composition, fuel load composition, and soil moisture. Hence, such variations in the factors influencing burnt area assessment may not be effectively carried out using a dNBR approach which is commonly followed for burnt area assessment over a large area. Hence, another approach that has been attempted in the present study utilizes a spectral mixing method where the individual pixel is tested before assigning an information class to it. The method uses a neural network approach utilizing Sentinel-2 bands. The training and testing data are generated from the Sentinel-2 data and the national field inventory, which is further used for generating outputs using ML tools. The analysis of the results indicates that the fire-affected regions and their severity can be better estimated using spectral unmixing methods, which have the capability to resolve the noise in the data and can classify the individual pixel to the precise burnt/unburnt class.Keywords: categorical data, log linear modeling, neural network, shifting cultivation
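A minimal sketch of the dNBR computation described above, using hypothetical Sentinel-2 reflectance arrays (band 8A as NIR, band 12 as SWIR) and a commonly used severity threshold rather than the study's own calibration:

```python
import numpy as np

def nbr(nir, swir):
    """Normalised Burn Ratio from NIR and SWIR surface reflectance."""
    nir = nir.astype("float32")
    swir = swir.astype("float32")
    return (nir - swir) / (nir + swir + 1e-6)   # small epsilon avoids division by zero

# Hypothetical pre- and post-fire Sentinel-2 reflectance tiles, scaled 0-1.
rng = np.random.default_rng(0)
pre_nir,  pre_swir  = rng.uniform(0.25, 0.45, (100, 100)), rng.uniform(0.10, 0.20, (100, 100))
post_nir, post_swir = rng.uniform(0.10, 0.30, (100, 100)), rng.uniform(0.15, 0.35, (100, 100))

# Differenced NBR: positive values indicate a loss of healthy vegetation signal
dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)

# Commonly quoted USGS-style threshold for moderate-low severity and above;
# the study would calibrate its own thresholds against field data.
burnt_mask = dnbr > 0.27
print("burnt pixels:", int(burnt_mask.sum()), "of", dnbr.size)
```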
Procedia PDF Downloads 54
129 Revenge: Dramaturgy and the Tragedy of Jihad
Authors: Myriam Benraad
Abstract:
On 5 July 2016, just days before the bloody terrorist attack on the Promenade des Anglais in Nice, the Al-Hayat media centre, one of the official propaganda branches of the Islamic State, broadcast a French nasheed which paid tribute to the Paris and Brussels attacks of November 2015 and March 2016. Entitled 'My Revenge', the terrorist anthem was of rare vehemence. It mentioned, sequentially, 'huddled bodies', in a reference to the civilian casualties of Western air strikes in the Iraqi-Syrian zone, 'explosive belts', 'sharp knives', 'large-calibre weapons' as well as 'localised targets'. France was accused of bearing the responsibility for the wave of attacks on its territory since the Charlie Hebdo massacre of January 2015 due to its 'ruthless war' against the Muslim world. Evoking an 'old aggression' and the 'crimes and spoliations' of which France has made itself guilty, the jihadist hymn depicted the rebirth of the caliphate as 'laudable revenge'. The notion of revenge has always been central to contemporary jihadism, understood both as a revolutionary ideology and a global militant movement. In recent years, the attacks carried out in Europe and elsewhere in the world have, for most, been claimed in its name. Whoever says jihad, says drama, yet few studies, if any, have looked at its dramatic and emotional elements, most notably its tragic vengefulness. This seems all the more astonishing that jihad is filled with drama; it could even be seen as a drama in its own right. The jihadists perform a script and take on roles inspired by their respective group’s culture (norms, values, beliefs, and symbols). The militants stage and perform such a script for a designated audience, either partisan, sympathising or hostile towards them and their cause. This research paper will examine the dramaturgy of jihadism and in particular, the genre that best characterises its violence: revenge tragedy. Theoretically, the research will rely on the tools of social movement theory and the sociology of emotions. Methodologically, it will draw from dramaturgical analysis and a combination of qualitative and quantitative tools to attain valuable observations of a number of developments, trends, and patterns. The choice has been made to focus mainly – however not exclusively – on the attacks which have taken place since 2001 in the European Union and more specific member states that have been significantly hit by jihadist terrorism. The research looks at a number of representative longitudinal samples identifying continuities and discontinuities, similarities, but also substantial differences. The preliminary findings tend to establish the relevance and validity of this approach in helping make better sense of sensitisation, mobilisation, and survival dynamics within jihadist groups, and motivations among individuals who have embraced violence. Besides, they illustrate their pertinence for counterterrorism policymakers and practitioners. Through drama, jihadist groups ensure the unceasing regeneration of their militant cause as well as their legitimation among their partisans. Without drama, and without the spectacular ideological staging of reality, they would not be able to maintain their attraction potential and power of persuasion.Keywords: Jihadism, dramaturgy, revenge, tragedy
Procedia PDF Downloads 135
128 Wheeled Robot Stable Braking Process under Asymmetric Traction Coefficients
Authors: Boguslaw Schreyer
Abstract:
During the wheeled robot’s braking process, the extra dynamic vertical forces act on all wheels: left, right, front or rear. Those forces are directed downward on the front wheels while directed upward on the rear wheels. In order to maximize the deceleration, therefore, minimize the braking time and braking distance, we need to calculate a correct torque distribution: the front braking torque should be increased, and rear torque should be decreased. At the same time, we need to provide better transversal stability. In a simple case of all adhesion coefficients being the same under all wheels, the torque distribution may secure the optimal (maximal) control of the robot braking process, securing the minimum braking distance and a minimum braking time. At the same time, the transversal stability is relatively good. At any time, we control the transversal acceleration. In the case of the transversal movement, we stop the braking process and re-apply braking torque after a defined period of time. If we correctly calculate the value of the torques, we may secure the traction coefficient under the front and rear wheels close to its maximum. Also, in order to provide an optimum braking control, we need to calculate the timing of the braking torque application and the timing of its release. The braking torques should be released shortly after the wheels passed a maximum traction coefficient (while a wheels’ slip increases) and applied again after the wheels pass a maximum of traction coefficient (while the slip decreases). The correct braking torque distribution secures the front and rear wheels, passing this maximum at the same time. It guarantees an optimum deceleration control, therefore, minimum braking time. In order to calculate a correct torque distribution, a control unit should receive the input signals of a rear torque value (which changes independently), the robot’s deceleration, and values of the vertical front and rear forces. In order to calculate the timing of torque application and torque release, more signals are needed: speed of the robot: angular speed, and angular deceleration of the wheels. In case of different adhesion coefficients under the left and right wheels, but the same under each pair of wheels- the same under right wheels and the same under left wheels, the Select-Low (SL) and select high (SH) methods are applied. The SL method is suggested if transversal stability is more important than braking efficiency. Often in the case of the robot, more important is braking efficiency; therefore, the SH method is applied with some control of the transversal stability. In the case that all adhesion coefficients are different under all wheels, the front-rear torque distribution is maintained as in all previous cases. However, the timing of the braking torque application and release is controlled by the rear wheels’ lowest adhesion coefficient. The Lagrange equations have been used to describe robot dynamics. Matlab has been used in order to simulate the process of wheeled robot braking, and in conclusion, the braking methods have been selected.Keywords: wheeled robots, braking, traction coefficient, asymmetric
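The front/rear load transfer underlying the torque distribution discussed above has the familiar form (symbols assumed; not reproduced from the paper):

\[
F_{zf} = m g\,\frac{l_r}{L} + m a\,\frac{h}{L},
\qquad
F_{zr} = m g\,\frac{l_f}{L} - m a\,\frac{h}{L},
\]

where \(a\) is the deceleration, \(L\) the wheelbase, \(h\) the height of the centre of mass, and \(l_f, l_r\) its distances from the front and rear axles. Splitting the braking torques in proportion to these instantaneous vertical loads, \(T_f/T_r \approx F_{zf}/F_{zr}\), keeps the front and rear wheels near the same slip, which is consistent with the requirement above that both axles pass their traction-coefficient maximum at the same time.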
Procedia PDF Downloads 165
127 Definition of Aerodynamic Coefficients for Microgravity Unmanned Aerial System
Authors: Gamaliel Salazar, Adriana Chazaro, Oscar Madrigal
Abstract:
The evolution of Unmanned Aerial Systems (UAS) has made it possible to develop new vehicles capable of performing microgravity experiments which, due to their cost and complexity, were previously beyond the reach of many institutions. In this study, the aerodynamic behavior of a UAS is studied through its deceleration stage after an initial free-fall phase (where the microgravity effect is generated) using Computational Fluid Dynamics (CFD). Because the payload is analyzed under a microgravity environment, and given the nature of the payload itself, the speed of the UAS must be reduced smoothly. Moreover, the terminal speed of the vehicle should be low enough to preserve the integrity of the payload and vehicle during the landing stage. The UAS model consists of a study pod, control surfaces with fixed and mobile sections, landing gear, and two semicircular wing sections. The speed of the vehicle is decreased by increasing the angle of attack (AoA) of each wing section from 2° (where the airfoil S1091 has its greatest aerodynamic efficiency) to 80°, creating a circular wing geometry. Drag coefficients (Cd) and forces (Fd) are obtained employing CFD analysis. A simplified 3D model of the vehicle is analyzed using Ansys Workbench 16. The distance between the object of study and the walls of the control volume is eight times the length of the vehicle. The domain is discretized using an unstructured mesh based on tetrahedral elements. The mesh is refined by defining an element size of 0.004 m on the wing and control surfaces in order to resolve the fluid behavior in the most important zones and to obtain accurate approximations of the Cd. The k-epsilon turbulence model is selected to solve the governing equations of the flow, while monitors are placed on both the wing and the whole vehicle body to visualize the variation of the coefficients along the simulation process. Employing response surface methodology, a statistical approximation, the case study is parameterized considering the AoA of the wing as the input parameter and Cd and Fd as output parameters. Based on a Central Composite Design (CCD), Design Points (DP) are generated so that the Cd and Fd for each DP can be estimated. Applying a second-degree polynomial approximation, the drag coefficients for every AoA were determined. Using these values, the terminal speed at each position is calculated considering the corresponding Cd. Additionally, the distance required to reach the terminal velocity at each AoA is calculated, so that the minimum distance for the entire deceleration stage that does not compromise the payload can be determined. The maximum Cd of the vehicle is 1.18, so its maximum drag is comparable to the drag generated by a parachute. This guarantees that the vehicle can be braked aerodynamically, so it can be used for several missions, allowing repeatability of microgravity experiments. Keywords: microgravity effect, response surface, terminal speed, unmanned system
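The polynomial response surface and terminal-speed calculation lend themselves to a short worked sketch. Only the maximum Cd of 1.18 comes from the abstract; the other design points, the vehicle mass and the reference area below are placeholders, and NumPy's polyfit stands in for the CCD-based response surface tool.

```python
import numpy as np

# Design points (AoA in degrees, Cd). The 80 deg / 1.18 point echoes the abstract's
# maximum Cd; the remaining values are placeholders for the CCD-based CFD results.
aoa = np.array([2.0, 20.0, 40.0, 60.0, 80.0])
cd = np.array([0.25, 0.45, 0.80, 1.05, 1.18])

# Second-degree response surface Cd(AoA), as in the abstract's polynomial fit.
cd_model = np.poly1d(np.polyfit(aoa, cd, deg=2))

def terminal_speed(mass, cd_value, ref_area, rho=1.225, g=9.81):
    """Terminal speed where drag balances weight: 0.5*rho*v^2*Cd*A = m*g."""
    return np.sqrt(2.0 * mass * g / (rho * cd_value * ref_area))

mass, ref_area = 5.0, 0.35   # hypothetical vehicle mass [kg] and reference area [m^2]
for a in (2.0, 40.0, 80.0):
    v = terminal_speed(mass, cd_model(a), ref_area)
    print(f"AoA {a:4.0f} deg   Cd ~ {cd_model(a):.2f}   terminal speed ~ {v:.1f} m/s")
```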
Procedia PDF Downloads 173
126 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings
Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir
Abstract:
Acute myocardial infarction is a major cause of death in the world. Therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology which is used to make decisions about the management of the cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pains together with changes in ST segment and T wave of ECG occur shortly before the start of myocardial infarction. In this study, a technique which detects changes in ST/T sections of ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings that contains a set of records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI) is constituted. 12-lead ECG’s of the patients were recorded before and during the PCI procedure. Two ECG epochs, which are the pre-inflation ECG which is acquired before any catheter insertion and the occlusion ECG which is acquired during balloon inflation, are analyzed for each patient. By using pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified and the most discriminative features for the detection of acute myocardial ischemia are extracted. A classification technique based on support vector machine (SVM) approach operating with linear and radial basis function (RBF) kernels to detect ischemic events by using ST-T derived joint features from non-ischemic and ischemic states of the patients is developed. The dataset is randomly divided into training and testing sets and the training set is used to optimize SVM hyperparameters by using grid-search method and 10fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain the optimal classification performance results. As a result of implementing the developed classification technique to real ECG recordings, it is shown that the proposed technique provides highly reliable detections of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of ECG recording obtained during healthy stage, the detection of acute myocardial ischemia based on ECG recordings of the patients obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to provide detection of outliers that would correspond to acute myocardial ischemia. Neyman – Pearson decision strategy is used by computing the average log likelihood values of ECG segments and comparing them with a range of different threshold values. For different discrimination threshold values and number of ECG segments, probability of detection and probability of false alarm values are computed, and the corresponding ROC curves are obtained. The results indicate that increasing number of ECG segments provide higher performance for GMM based classification. Moreover, the comparison between the performances of SVM and GMM based classification showed that SVM provides higher classification performance results over ECG recordings of considerable number of patients.Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine
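A hedged sketch of the two classifiers described above, using scikit-learn on synthetic stand-in features (the study's ST/T-derived features and patient data are not reproduced here). The grid values, the number of GMM components and the threshold are illustrative assumptions; sweeping the threshold is one way to produce the probability-of-detection / probability-of-false-alarm pairs of an ROC curve.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins for ST/T-derived feature vectors (6 features per ECG segment).
X_pre = rng.normal(0.0, 1.0, size=(200, 6))    # pre-inflation (non-ischemic) segments
X_occ = rng.normal(0.8, 1.2, size=(200, 6))    # balloon-occlusion (ischemic) segments
X = np.vstack([X_pre, X_occ])
y = np.r_[np.zeros(200), np.ones(200)]

# SVM with linear/RBF kernels, grid-searched hyperparameters and 10-fold cross-validation.
grid = {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1]}
svm = GridSearchCV(SVC(), grid, cv=10).fit(X, y)
print("best params:", svm.best_params_, " CV accuracy:", round(svm.best_score_, 3))

# GMM modelling the joint pdf of the ischemia-related features; segments are scored by
# their average log-likelihood and compared against a threshold (Neyman-Pearson style).
gmm = GaussianMixture(n_components=3, random_state=0).fit(X_occ)

def detect_ischemia(segment_features: np.ndarray, threshold: float) -> bool:
    avg_loglik = gmm.score_samples(segment_features).mean()
    return avg_loglik > threshold   # sweeping the threshold traces out Pd vs Pfa (ROC)

print("flagged:", detect_ischemia(X_occ[:20], threshold=-12.0))
```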
Procedia PDF Downloads 162
125 Saudi Arabia's Struggle for a Post-Rentier Regional Order
Authors: Omair Anas
Abstract:
The Persian Gulf has been in turmoil for a long time since the colonial administration has handed over the role to the small and weak kings and emirs who were assured of protection in return of many economic and security promises to them. The regional order, Saudi Arabia evolved was a rentier regional order secured by an expansion of rentier economy and taking responsibility for much of the expenses of the regional order on behalf of relatively poor countries. The two oil booms helped the Saudi state to expand the 'rentier order' driven stability and bring the countries like Egypt, Jordan, Syria, and Palestine under its tutelage. The disruptive misadventure, however, came with Iran's proclamation of the Islamic Revolution in 1979 which it wanted to be exported to its 'un-Islamic and American puppet' Arab neighbours. For Saudi Arabia, even the challenge presented by the socialist-nationalist Arab dictators like Gamal Abdul Nasser and Hafez Al-Assad was not that much threatening to the Saudi Arabia’s then-defensive realism. In the Arab uprisings, the Gulf monarchies saw a wave of insecurity and Iran found it an opportune time to complete the revolutionary process it could not complete after 1979. An alliance of convenience and ideology between Iran and Islamist groups had the real potential to challenge both Saudi Arabia’s own security and its leadership in the region. The disruptive threat appeared at a time when the Saudi state had already sensed an impending crisis originating from the shifts in the energy markets. Low energy prices, declining global demands, and huge investments in alternative energy resources required Saudi Arabia to rationalize its economy according to changing the global political economy. The domestic Saudi reforms remained gradual until the death of King Abdullah in 2015. What is happening now in the region, the Qatar crisis, the Lebanon crisis and the Saudi-Iranian proxy war in Iraq, Syria, and Yemen has combined three immediate objectives, rationalising Saudi economy and most importantly, the resetting the Saudi royal power for Saudi Arabia’s longest-serving future King Mohammad bin Salman. The Saudi King perhaps has no time to wait and watch the power vacuum appearing because of Iran’s expansionist foreign policy. The Saudis appear to be employing an offensive realism by advancing a pro-active regional policy to counter Iran’s threatening influence amid disappearing Western security from the region. As the Syrian civil war is coming to a compromised end with ceding much ground to Iran-controlled militias, Hezbollah and Al-Hashad, the Saudi state has lost much ground in these years and the threat from Iranian proxies is more than a reality, more clearly in Bahrain, Iraq, Syria, and Yemen. This paper attempts to analyse the changing Saudi behaviour in the region, which, the author understands, is shaped by an offensive-realist approach towards finding a favourable security environment for the Saudi-led regional order, a post-rentier order perhaps.Keywords: terrorism, Saudi Arabia, Rentier State, gulf crisis
Procedia PDF Downloads 136
124 Climate Change Impact on Mortality from Cardiovascular Diseases: Case Study of Bucharest, Romania
Authors: Zenaida Chitu, Roxana Bojariu, Liliana Velea, Roxana Burcea
Abstract:
A number of studies show that extreme air temperature affects mortality related to cardiovascular diseases, particularly among elderly people. In Romania, the summer thermal discomfort expressed by the Universal Thermal Climate Index (UTCI) is highest in the southern part of the country, where Bucharest, the largest Romanian urban agglomeration, is also located. Urban characteristics such as high building density and reduced green areas amplify the increase in air temperature during summer. In Bucharest, as in many other large cities, the urban heat island effect is present and determines an increase in air temperature compared to the surrounding areas. This increase is particularly important during heat wave periods in summer. In this context, the researchers performed a temperature-mortality analysis based on daily deaths related to cardiovascular diseases, recorded between 2010 and 2019 in Bucharest. The temperature-mortality relationship was modeled by applying a distributed lag non-linear model (DLNM) that includes a two-dimensional cross-basis function and flexible natural cubic spline functions with three internal knots at the 10th, 75th and 90th percentiles of the temperature distribution, modelling both the exposure-response and lagged-response dimensions. First, this analysis was applied to the present climate. Extrapolation of the exposure-response associations beyond the observed data allowed us to estimate future effects on mortality due to temperature changes under climate change scenarios and specific assumptions. We used future projections of air temperature from five numerical experiments with regional climate models included in the EURO-CORDEX initiative under the relatively moderate (RCP 4.5) and pessimistic (RCP 8.5) concentration scenarios. The results of this analysis show, for RCP 8.5, an ensemble-averaged increase of 6.1% in the heat-attributable mortality fraction in the future in comparison with the present climate (2090-2100 vs. 2010-2019), corresponding to an increase of 640 deaths/year, while the mortality fraction due to cold conditions will be reduced by 2.76%, corresponding to a decrease of 288 deaths/year. When the mortality data are stratified by age, the ensemble-averaged increase in the heat-attributable mortality fraction for elderly people (>75 years) in the future is even higher (6.5%). These findings reveal the necessity to carefully plan urban development in Bucharest to face the public health challenges raised by climate change. Paper Details: This work is financed by the project URCLIM, which is part of ERA4CS, an ERA-NET initiated by JPI Climate, and funded by the Ministry of Environment, Romania, with co-funding by the European Union (Grant 690462). A part of this work performed by one of the authors has received funding from the European Union’s Horizon 2020 research and innovation programme through the project EXHAUSTION under grant agreement No 820655. Keywords: cardiovascular diseases, climate change, extreme air temperature, mortality
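The heat- and cold-attributable mortality fractions quoted above can be illustrated with a small sketch. The exposure-response curve below is a toy stand-in for the fitted DLNM cross-basis, and the daily temperature and death series are synthetic; only the forward attributable-fraction formula, AF = Σ((RR−1)/RR)·deaths / Σ deaths, is standard.

```python
import numpy as np

rng = np.random.default_rng(1)
temp = rng.normal(12.0, 9.0, size=3650)     # synthetic daily mean temperature [deg C]
deaths = rng.poisson(8, size=3650)          # synthetic daily cardiovascular deaths

def relative_risk(t, mmt=19.0, beta_heat=0.03, beta_cold=0.01):
    """Toy V-shaped exposure-response curve around a minimum-mortality temperature;
    in the study this role is played by the fitted DLNM cross-basis."""
    excess = np.where(t > mmt, beta_heat * (t - mmt), beta_cold * (mmt - t))
    return np.exp(excess)

def attributable_fraction(temp, deaths, hot=True, mmt=19.0):
    """Share of deaths attributable to days warmer (or colder) than the
    minimum-mortality temperature: AF = sum((RR-1)/RR * deaths) / sum(deaths)."""
    mask = temp > mmt if hot else temp < mmt
    rr = relative_risk(temp[mask], mmt)
    return float((((rr - 1.0) / rr) * deaths[mask]).sum() / deaths.sum())

print("heat-attributable fraction:", round(attributable_fraction(temp, deaths, hot=True), 4))
print("cold-attributable fraction:", round(attributable_fraction(temp, deaths, hot=False), 4))
```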
Procedia PDF Downloads 128
123 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice
Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer
Abstract:
The ice accretion of salt water on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real case of the formation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during the process of transient heat conduction increases the salinity of brine pockets to reach a local equilibrium state. In this process, changing the sensible heat of the ice and brine pockets is not the only effect of heat passing through the medium; latent heat plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. Properties of brine-spongy ice are obtained using the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium to obtain a set of ordinary differential equations. Boundary conditions are chosen using one of the applicable cases of this type of ice: one side is considered a thermally insulated surface, and the other side is assumed to be suddenly affected by a constant-temperature boundary. All cases are evaluated at temperatures between −20 °C and the freezing point of brine-spongy ice. Solutions are conducted using different salinities from 5 to 60 ppt. Time steps and space intervals are chosen to maintain a stable and fast solution. The variation of temperature, the volume fraction of brine, and the brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine-pocket salinities, from the initial salinity up to 180 ppt. The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation; this rate decreases as time passes. Brine pockets are smaller in regions closer to the colder side than in those closer to the warmer side. At the start of the solution, the numerical scheme tends to develop instabilities because of the sharp variation of temperature at the beginning of the process; adjusting the intervals improves this unstable situation. The analytical model, solved with a numerical scheme, is capable of predicting the thermal behavior of brine-spongy ice. This model and its numerical solutions are important for modeling the freezing of salt water and ice accretion on cold structures. Keywords: method of lines, brine-spongy ice, heat conduction, salt water
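A bare-bones Method of Lines sketch for the slab geometry and boundary conditions described above (one insulated face, one face suddenly held at a constant cold temperature). The effective properties are frozen at an assumed constant diffusivity, so the brine-fraction, salinity and latent-heat updates of the full model are deliberately omitted; SciPy's solve_ivp plays the role of the ODE integrator.

```python
import numpy as np
from scipy.integrate import solve_ivp

L_slab, N = 0.02, 50                  # slab thickness [m], number of nodes
dx = L_slab / (N - 1)
alpha = 1.1e-6                        # assumed constant effective thermal diffusivity [m^2/s]
T_init, T_cold = -2.0, -20.0          # initial temperature and imposed cold boundary [deg C]

def rhs(t, T):
    """Method of Lines: second-order central differences in space, ODEs in time."""
    dT = np.zeros_like(T)
    dT[1:-1] = alpha * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    dT[0] = alpha * 2.0 * (T[1] - T[0]) / dx**2    # insulated face (zero flux, ghost node)
    dT[-1] = 0.0                                   # face held at the constant cold temperature
    return dT

T0 = np.full(N, T_init)
T0[-1] = T_cold                                    # sudden application of the cold boundary
sol = solve_ivp(rhs, (0.0, 3600.0), T0, method="BDF", t_eval=[600.0, 1800.0, 3600.0])
print(np.round(sol.y[:, -1], 2))                   # temperature profile after one hour
```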
Procedia PDF Downloads 217
122 Capacity of Cold-Formed Steel Warping-Restrained Members Subjected to Combined Axial Compressive Load and Bending
Authors: Maryam Hasanali, Syed Mohammad Mojtabaei, Iman Hajirasouliha, G. Charles Clifton, James B. P. Lim
Abstract:
Cold-formed steel (CFS) elements are increasingly being used as main load-bearing components in the modern construction industry, including low- to mid-rise buildings. In typical multi-storey buildings, CFS structural members act as beam-column elements since they are exposed to combined axial compression and bending actions, both in moment-resisting frames and stud wall systems. Current design specifications, including the American Iron and Steel Institute (AISI S100) and the Australian/New Zealand Standard (AS/NZS 4600), neglect the beneficial effects of warping-restrained boundary conditions in the design of beam-column elements. Furthermore, while a non-linear relationship governs the interaction of axial compression and bending, the combined effect of these actions is taken into account through a simplified linear expression combining pure axial and flexural strengths. This paper aims to evaluate the reliability of the well-known Direct Strength Method (DSM) as well as design proposals found in the literature to provide a better understanding of the efficiency of the code-prescribed linear interaction equation in the strength predictions of CFS beam columns and the effects of warping-restrained boundary conditions on their behavior. To this end, the experimentally validated finite element (FE) models of CFS elements under compression and bending were developed in ABAQUS software, which accounts for both non-linear material properties and geometric imperfections. The validated models were then used for a comprehensive parametric study containing 270 FE models, covering a wide range of key design parameters, such as length (i.e., 0.5, 1.5, and 3 m), thickness (i.e., 1, 2, and 4 mm) and cross-sectional dimensions under ten different load eccentricity levels. The results of this parametric study demonstrated that using the DSM led to the most conservative strength predictions for beam-column members by up to 55%, depending on the element’s length and thickness. This can be sourced by the errors associated with (i) the absence of warping-restrained boundary condition effects, (ii) equations for the calculations of buckling loads, and (iii) the linear interaction equation. While the influence of warping restraint is generally less than 6%, the code suggested interaction equation led to an average error of 4% to 22%, based on the element lengths. This paper highlights the need to provide more reliable design solutions for CFS beam-column elements for practical design purposes.Keywords: beam-columns, cold-formed steel, finite element model, interaction equation, warping-restrained boundary conditions
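The code-prescribed linear interaction check that the paper evaluates can be written down in a few lines. The demands, capacities and eccentricity below are illustrative numbers only, and second-order amplification and resistance/safety factors are intentionally left out of this sketch.

```python
def linear_interaction_utilization(P: float, M: float, P_n: float, M_n: float) -> float:
    """Linear beam-column interaction check: P/P_n + M/M_n, adequate when <= 1.0.
    Second-order amplification and resistance/safety factors are omitted here."""
    return P / P_n + M / M_n

def utilization_for_eccentricity(P: float, e: float, P_n: float, M_n: float) -> float:
    """Load eccentricity e couples the two demands (M = P*e), which is how a
    parametric study can sweep its eccentricity levels."""
    return linear_interaction_utilization(P, P * e, P_n, M_n)

# Illustrative numbers only: 30 kN axial demand, 40 mm eccentricity,
# 80 kN axial capacity and 2.5 kNm moment capacity.
u = utilization_for_eccentricity(P=30.0, e=0.040, P_n=80.0, M_n=2.5)
print(f"utilization = {u:.2f} ->", "OK" if u <= 1.0 else "exceeds capacity")
```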
Procedia PDF Downloads 104
121 The French Ekang Ethnographic Dictionary. The Quantum Approach
Authors: Henda Gnakate Biba, Ndassa Mouafon Issa
Abstract:
Dictionaries modeled on the Western model [designed for tonic-accent languages] are not suitable and do not account for tonal languages phonologically, which is why the [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speaking or singing of a tonal language and allows the non-speaker of this language to pronounce the words as if they were a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, according to Jean-Jacques Rousseau's 1776 hypothesis that 'to say and to sing were once the same thing'. Each word in the French dictionary finds its equivalent in the Ekaη language, and each Ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original and innovative research thesis: a contribution to the theoretical, musicological, ethnomusicological and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation and the question of modeling in the human sciences: mathematics, computer science, translation automation and artificial intelligence. When you apply this theory to the text of any folk song in a tonal language, you piece together not only the exact melody, rhythm, and harmonies of that song, as if you knew it in advance, but also the exact speech of this language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal and random music. The experimentation confirming the theorization led to a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody. Keywords: music, language, entanglement, science, research
Procedia PDF Downloads 69
120 The Relationship between Anthropometric Obesity Indices and Insulin in Children with Metabolic Syndrome
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
The number of indices developed for the evaluation of obesity both in adults and pediatric population is ever increasing. These indices are also used in cases with metabolic syndrome (MetS), mostly the ultimate form of morbid obesity. Aside from anthropometric measurements, formulas constituted using these parameters also find clinical use. These formulas can be listed as two groups; being weight-dependent and –independent. Some are extremely sophisticated equations and their clinical utility is questionable in routine clinical practice. The aim of this study is to compare presently available obesity indices and find the most practical one. Their associations with MetS components were also investigated to determine their capacities in differential diagnosis of morbid obesity with and without MetS. Children with normal body mass index (N-BMI) and morbid obesity were recruited for this study. Three groups were constituted. Age- and sex- dependent BMI percentiles for morbid obese (MO) children were above 99 according to World Health Organization tables. Of them, those with MetS findings were evaluated as MetS group. Children, whose values were between 85 and 15 were included in N-BMI group. The study protocol was approved by the Ethics Committee of the Institution. Parents filled out informed consent forms to participate in the study. Anthropometric measurements and blood pressure values were recorded. Body mass index, hip index (HI), conicity index (CI), triponderal mass index (TPMI), body adiposity index (BAI), body shape index (ABSI), body roundness index (BRI), abdominal volume index (AVI), waist-to-hip ratio (WHR) and waist circumference+hip circumference/2 ((WC+HC)/2) were the formulas examined within the scope of this study. Routine biochemical tests including fasting blood glucose (FBG), insulin (INS), triglycerides (TRG), high density lipoprotein-cholesterol (HDL-C) were performed. Statistical package program SPSS was used for the evaluation of study data. p<0.05 was accepted as the statistical significance degree. Hip index did not differ among the groups. A statistically significant difference was noted between N-BMI and MetS groups in terms of ABSI. All the other indices were capable of making discrimination between N-BMI-MO, N-BMI- MetS and MO-MetS groups. No correlation was found between FBG and any obesity indices in any groups. The same was true for INS in N-BMI group. Insulin was correlated with BAI, TPMI, CI, BRI, AVI and (WC+HC)/2 in MO group without MetS findings. In MetS group, the only index, which was correlated with INS was (WC+HC)/2. These findings have pointed out that complicated formulas may not be required for the evaluation of the alterations among N-BMI and various obesity groups including MetS. The simple easily computable weight-independent index, (WC+HC)/2, was unique, because it was the only index, which exhibits a valuable association with INS in MetS group. It did not exhibit any correlation with other obesity indices showing associations with INS in MO group. It was concluded that (WC+HC)/2 was pretty valuable practicable index for the discrimination of MO children with and without MetS findings.Keywords: children, insulin, metabolic syndrome, obesity indices
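Most of the indices named above are simple closed-form expressions of weight, height, waist and hip measurements. The sketch below uses the forms commonly published in the literature, which may differ in detail from the exact variants used in the study, and the child's measurements are invented; the hip index is omitted because its published formula varies.

```python
import math

def obesity_indices(weight_kg, height_m, wc_m, hc_m):
    """Commonly published forms of the indices; the study's exact variants may differ."""
    bmi = weight_kg / height_m**2
    tpmi = weight_kg / height_m**3                              # triponderal mass index
    whr = wc_m / hc_m                                           # waist-to-hip ratio
    wchc = (wc_m + hc_m) / 2.0 * 100.0                          # (WC+HC)/2 [cm]
    ci = wc_m / (0.109 * math.sqrt(weight_kg / height_m))       # conicity index
    bai = (hc_m * 100.0) / height_m**1.5 - 18.0                 # body adiposity index
    absi = wc_m / (bmi ** (2.0 / 3.0) * height_m ** 0.5)        # a body shape index
    bri = 364.2 - 365.5 * math.sqrt(
        1.0 - (wc_m / (2.0 * math.pi)) ** 2 / (0.5 * height_m) ** 2)   # body roundness index
    avi = (2.0 * (wc_m * 100.0) ** 2
           + 0.7 * ((wc_m - hc_m) * 100.0) ** 2) / 1000.0       # abdominal volume index
    return {"BMI": bmi, "TPMI": tpmi, "WHR": whr, "(WC+HC)/2": wchc,
            "CI": ci, "BAI": bai, "ABSI": absi, "BRI": bri, "AVI": avi}

# Invented example measurements for a child: 45 kg, 1.40 m, waist 0.66 m, hip 0.78 m.
for name, value in obesity_indices(45.0, 1.40, 0.66, 0.78).items():
    print(f"{name:>10}: {value:.3f}")
```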
Procedia PDF Downloads 77
119 Flexible Design Solutions for Complex Free form Geometries Aimed to Optimize Performances and Resources Consumption
Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu
Abstract:
By using smart digital tools such as generative design (GD) and digital fabrication (DF), highly topical problems concerning resource optimization (materials, energy, time) can be solved and free-form applications or products can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object – a column-type object with connections forming an adaptive 3D surface – using the parametric design methodology and exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying the parameter values, and the relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms or swarm intelligence, each of these procedures having limitations which make it applicable only in certain cases. In the paper, the design process stages and the shape-grammar-type algorithm are presented. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms, using an algorithm conceived to apply its generating logic to different input geometry. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected. The endless generative capacity of the codes and algorithms used in digital design offers varied conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned in order to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process) and unique geometric models of high performance. Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture
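As a toy illustration of the generate-evaluate-select loop underlying the generative principle described above, the sketch below sweeps a parametric column (base radius, taper, twist), scores each candidate with a crude material-use proxy under a minimum top-radius constraint, and keeps the best one. The parameter ranges, the constraint and the objective are all invented; a real workflow would score candidates with structural or environmental analyses and use a shape-grammar generator rather than a plain parameter sweep.

```python
import math
import random

def column_surface(base_r, taper, twist_deg, n_levels=20, n_pts=16, height=3.0):
    """Parametric point cloud for a tapered, twisted column-type surface."""
    pts = []
    for i in range(n_levels):
        z = height * i / (n_levels - 1)
        r = base_r * (1.0 - taper * z / height)
        rot = math.radians(twist_deg) * z / height
        pts.extend((r * math.cos(2 * math.pi * k / n_pts + rot),
                    r * math.sin(2 * math.pi * k / n_pts + rot), z) for k in range(n_pts))
    return pts

def material_proxy(base_r, taper, height=3.0):
    """Volume of the tapered circular column, used here as a crude objective."""
    r_top = base_r * (1.0 - taper)
    return math.pi * height * (base_r**2 + base_r * r_top + r_top**2) / 3.0

random.seed(0)
candidates = [{"base_r": random.uniform(0.15, 0.30),
               "taper": random.uniform(0.0, 0.5),
               "twist_deg": random.uniform(0.0, 90.0)} for _ in range(200)]
# Selection criterion: least material among candidates whose top radius stays >= 0.10 m.
feasible = [c for c in candidates if c["base_r"] * (1.0 - c["taper"]) >= 0.10]
best = min(feasible, key=lambda c: material_proxy(c["base_r"], c["taper"]))
print("selected parameters:", {k: round(v, 3) for k, v in best.items()})
print("points generated:", len(column_surface(**best)))
```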
Procedia PDF Downloads 377
118 Longitudinal Examination of Depressive Symptoms among U.S. Parents who Gave Birth During the COVID-19 Pandemic
Authors: Amy Claridge, Tishra Beeson
Abstract:
Background: Maternal depression is a serious health concern impacting between 10-16% of birthing persons. It is associated with difficulty in emotional interaction and the formation of attachment bonds between parent and infant. Longitudinally, maternal depression can have severe, lasting impacts on both parent and child, increasing the risk for mental, social, and physical health issues. Rates of prenatal depression have been higher among individuals who were pregnant during the first year of the COVID-19 pandemic. Pregnant persons are considered a high-risk group for poor clinical outcomes from COVID-19 infection and may also have faced or continue to face additional stressors such as financial burdens, loss of income or employment, and the benefits accompanying employment, especially among those in the United States (U.S.). It is less clear whether individuals who gave birth during the pandemic continue to experience high levels of depressive symptoms or whether symptoms have been reduced as a pandemic response has shifted. The current study examined longitudinal reports of depressive symptoms among individuals in the U.S. who gave birth between March 2020 and September 2021. Methods: This mixed-method study involved surveys and interviews with birthing persons (18-45 years old) in their third trimester of pregnancy and at 8 weeks postpartum. Participants also completed a follow-up survey at 12-18 months postpartum. Participants were recruited using convenience methods via an online survey. Survey participants included 242 U.S. women who self-reported depressive symptoms (10-item Edinburgh Postnatal Depression Scale) at each data collection wave. A subset of 23 women participated in semi-structured prenatal and 8-week postpartum qualitative interviews. Follow-up interviews are currently underway and will be integrated into the presentation. Preliminary Results: Prenatal depressive symptoms were significantly positively correlated to 8-week and 12-18-month postpartum depressive symptoms. Participants who reported clinical levels of depression prenatally were 3.29 times (SE = .32, p < .001) more likely to report clinical levels of depression at 18 months postpartum. Those who reported clinical depression at 8-weeks postpartum were 6.52 times (SE = .41, p < .001) more likely to report clinical levels of depression at 18 months postpartum. Participants who gave birth earlier in the pandemic reported significantly higher prenatal (t(103) = 2.84, p < .01) and 8-week postpartum depressive symptoms (t(126) = 3.31, p < .001). Data from qualitative interviews contextualize the findings. Participants reported negative emotions during pregnancy, including sadness, grief, and anxiety. They attributed this in part to their experiences of pregnancy during the pandemic and uncertainty related to the birth experience and postpartum period. Postpartum interviews revealed some stressors specific to childbirth during the COVID-19 pandemic; however, most women reflected on positive experiences of birth and postpartum. Conclusions: Taken together, findings reveal a pattern of persistent depressive symptoms among U.S. parents who gave birth during the pandemic. Depressive symptoms are of significant concern for the health of parents and children, and the findings of this study suggest a need for continued mental health intervention for parents who gave birth during the pandemic. 
Policy and practice implications will be discussed. Keywords: maternal mental health, perinatal depression, postpartum depression, COVID-19 pandemic
Procedia PDF Downloads 77
117 Selective Conversion of Biodiesel Derived Glycerol to 1,2-Propanediol over Highly Efficient γ-Al2O3 Supported Bimetallic Cu-Ni Catalyst
Authors: Smita Mondal, Dinesh Kumar Pandey, Prakash Biswas
Abstract:
During the past two decades, considerable attention has been given to the value addition of biodiesel-derived glycerol (~10 wt.%) to make the biodiesel industry economically viable. Among the various glycerol value-addition methods, hydrogenolysis of glycerol to 1,2-propanediol is one of the attractive and promising routes. In this study, a highly active and selective γ-Al₂O₃-supported bimetallic Cu-Ni catalyst was developed for the selective hydrogenolysis of glycerol to 1,2-propanediol in the liquid phase. The catalytic performance was evaluated in a high-pressure autoclave reactor. Experimental results demonstrated that the bimetallic copper-nickel catalyst was more active and selective to 1,2-PDO than the monometallic catalysts due to its bifunctional behavior. To verify the effect of calcination temperature on the formation of the Cu-Ni mixed oxide phase, the calcination temperature of the 20 wt.% Cu:Ni(1:1)/Al₂O₃ catalyst was varied from 300 °C to 550 °C. The physicochemical properties of the catalysts were characterized by various techniques such as specific surface area (BET), X-ray diffraction study (XRD), temperature-programmed reduction (TPR), and temperature-programmed desorption (TPD). The BET surface area and pore volume of the catalysts were in the range of 71-78 m²g⁻¹ and 0.12-0.15 cm³g⁻¹, respectively. The peaks in the 2θ ranges of 43.3°-45.5° and 50.4°-52° corresponded to the copper-nickel mixed oxide phase [JCPDS: 78-1602]. The formation of the mixed oxide indicated the strong interaction of Cu and Ni with the alumina support. The crystallite size decreased with increasing calcination temperature up to 450 °C; beyond this, the crystallite size increased due to agglomeration. A smaller crystallite size of 16.5 nm was obtained for the catalyst calcined at 400 °C. The total acidic sites of the catalysts were determined by NH₃-TPD, and the maximum total acidity of 0.609 mmol NH₃ gcat⁻¹ was obtained over the catalyst calcined at 400 °C. TPR data suggested that the catalyst calcined at 400 °C had the highest degree of reduction (75%) among all the catalysts. Further, the 20 wt.% Cu:Ni(1:1)/γ-Al₂O₃ catalyst calcined at 400 °C exhibited the highest catalytic activity (>70%) and 1,2-PDO selectivity (>85%) under mild reaction conditions owing to its highest acidity, highest degree of reduction, and smallest crystallite size. A modified power-law kinetic model was then developed to understand the true kinetic behaviour of glycerol hydrogenolysis over the 20 wt.% Cu:Ni(1:1)/γ-Al₂O₃ catalyst. The rate equations obtained from the model were solved by ode23 in MATLAB coupled with a Genetic Algorithm. Results demonstrated that the model-predicted data fitted the experimental data very well. The activation energy of the formation of 1,2-PDO was found to be 45 kJ mol⁻¹. Keywords: glycerol, 1,2-PDO, calcination, kinetics
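A sketch of what a lumped power-law kinetic model of this reaction can look like, with SciPy's solve_ivp standing in for MATLAB's ode23. Only the ~45 kJ mol⁻¹ activation energy for 1,2-PDO formation is taken from the abstract; the pre-exponential factors, reaction orders, hydrogen-pressure dependence, by-product barrier and operating temperature are assumptions made for illustration, and no parameter fitting (the Genetic Algorithm step) is performed here.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # J mol^-1 K^-1

def rate_constants(T):
    """Arrhenius constants: the 45 kJ/mol barrier for 1,2-PDO formation is from the
    abstract; the pre-exponentials and the by-product barrier are assumed values."""
    k1 = 4.0 * np.exp(-45_000.0 / (R * T))    # glycerol -> 1,2-PDO
    k2 = 30.0 * np.exp(-60_000.0 / (R * T))   # glycerol -> lumped by-products (assumed)
    return k1, k2

def rhs(t, y, T, pH2=4.0):
    """Power-law rates r = k * C_glycerol^1 * pH2^0.5 (orders assumed)."""
    G, P, B = y
    k1, k2 = rate_constants(T)
    r1 = k1 * G * pH2**0.5
    r2 = k2 * G * pH2**0.5
    return [-(r1 + r2), r1, r2]

y0 = [1.0, 0.0, 0.0]   # initial concentrations [mol/L]: glycerol, 1,2-PDO, by-products
sol = solve_ivp(rhs, (0.0, 8 * 3600.0), y0, args=(483.15,), method="LSODA")  # 8 h at an assumed 210 deg C
G, P, B = sol.y[:, -1]
print(f"conversion {(1.0 - G / y0[0]) * 100:.1f}%   1,2-PDO selectivity {P / (P + B) * 100:.1f}%")
```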
Procedia PDF Downloads 145