Search results for: Ionic liquid coupled HDS of DBT

563 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamic

Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx

Abstract:

Mathematical modeling has become an important tool for the study of foam behavior. Computational Fluid Dynamics (CFD) can be used to investigate the behavior of foam around foam breakers to better understand the mechanisms leading to the ‘destruction’ of foam. The focus of this investigation was the simple cone foam breaker, whose performance has been documented in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis are subject to speculation. This work provides a screening of those factors against changes in the cone angle and foam rheology. The CFD simulation was carried out with the open source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated using a Python script and then meshed with blockMesh. The OpenFOAM Volume of Fluid (VOF) method was used (interFoam) to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used. The foam was modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3D-printed version of the cones, submerged in foam (shaving cream or soap solution) and water at speeds varying between 400 RPM and 1500 RPM, was also used to validate the modeling results by calculating the torque exerted on the shaft. While most of the literature focuses on cone behavior in Newtonian fluids, this work explores its behavior in a shear-thinning fluid, which better reflects the apparent rheology of foam. These simulations shed new light on the cone behavior within the foam and allow computation of the shearing, pressure, and velocity of the fluid, enabling a better evaluation of the efficiency of the cones as foam breakers. This study contributes to clarifying, at least in part, the mechanisms behind foam breaker performance using modern CFD techniques.
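
As a side note on the rheology used above, the following minimal Python sketch evaluates the Herschel-Bulkley apparent viscosity, mu_app = tau0/gamma_dot + K*gamma_dot^(n-1), which is the form of shear-thinning law the simulations assume; the yield stress, consistency, and flow index values below are illustrative assumptions, not the parameters fitted in this study.

import numpy as np

def herschel_bulkley_viscosity(gamma_dot, tau0, K, n, mu_max=1e3):
    """Apparent viscosity mu_app = tau0/gamma_dot + K*gamma_dot**(n-1),
    capped at mu_max to avoid the singularity at zero shear rate."""
    gamma_dot = np.maximum(np.asarray(gamma_dot, dtype=float), 1e-12)
    mu = tau0 / gamma_dot + K * gamma_dot ** (n - 1.0)
    return np.minimum(mu, mu_max)

# Illustrative (assumed) parameters for a shaving-foam-like fluid:
# yield stress tau0 [Pa], consistency K [Pa*s^n], flow index n < 1 (shear thinning)
shear_rates = np.logspace(-2, 3, 6)          # 1/s
mu_app = herschel_bulkley_viscosity(shear_rates, tau0=30.0, K=10.0, n=0.4)
for g, m in zip(shear_rates, mu_app):
    print(f"gamma_dot = {g:8.2f} 1/s  ->  mu_app = {m:8.2f} Pa*s")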

Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM

Procedia PDF Downloads 205
562 Motherhood Constrained: The Minotaur Legend Reimagined Through the Perspective of Marginalized Mothers

Authors: Gevorgianiene Violeta, Sumskiene Egle

Abstract:

Background. Child removal is a profound and life-altering measure that significantly impacts both children and their mothers. Unfortunately, mothers with intellectual disabilities are disproportionately affected by the removal of their children. This action is often taken due to concerns about the mother's perceived inability to care for the child, instances of abuse and neglect, or struggles with addiction. In many cases, the failure to meet society's standards of a "good mother" is seen as a deviation from conventional norms of femininity and motherhood. From an institutional perspective, separating a child from their mother is sometimes viewed as a step toward restoring justice or doing what is considered "right." In another light, this act of child removal can be seen as the removal of a mother from her child, an attempt to shield society from the complexities and fears associated with motherhood for women with disabilities. This separation can be likened to the Greek legend of the Minotaur, a fearsome beast confined within an impenetrable labyrinth. By reimagining this legend, we can see the social fears surrounding 'mothering with intellectual disability' as deeply sealed within an unreachable place. The Aim of this Presentation. Our goal with this presentation is to draw from our research and the metaphors found in the Greek legend to delve into the profound challenges faced by mothers with intellectual disabilities in raising their children. These challenges often become entangled within an insurmountable labyrinth, including navigating complex institutional bureaucracies, enduring persistent doubts cast upon their maternal competencies, battling unfavorable societal narratives, and struggling to retain custody of their children. Coupled with limited social support networks, these challenges frequently lead to situations resulting in maternal failure and, ultimately, child removal. On a broader scale, this separation of a child from their mother symbolizes society’s collective avoidance of confronting the issue of 'mothering with disability,' which can only be effectively addressed through united efforts. Conclusion. Just as in the labyrinth of the Minotaur legend, the struggles faced by mothers with disabilities in their pursuit of retaining their children reveal the need for a metaphorical 'string of Ariadne.' This string symbolizes the support offered by social service providers, communities, and the loved ones these women often dream of but rarely encounter in their lives.

Keywords: motherhood, disability, child removal, support

Procedia PDF Downloads 58
561 Multi-Scale Damage Modelling for Microstructure Dependent Short Fiber Reinforced Composite Structure Design

Authors: Joseph Fitoussi, Mohammadali Shirinbayan, Abbas Tcharkhtchi

Abstract:

Due to material flow during processing, short fiber reinforced composite (SFRC) structures obtained by injection or compression molding generally present strong spatial microstructure variation. On the other hand, the quasi-static, dynamic, and fatigue behavior of these materials is highly dependent on microstructure parameters such as the fiber orientation distribution. Indeed, because of complex damage mechanisms, the design of SFRC structures is a key challenge for safety and reliability. In this paper, we propose a micromechanical model allowing prediction of the damage behavior of real structures as a function of the spatial microstructure distribution. To this aim, a statistical damage criterion including strain rate and fatigue effects at the local scale is introduced into a Mori-Tanaka model. A critical local damage state is identified, allowing fatigue life prediction. Moreover, the multi-scale model is coupled with an experimentally established intrinsic link between damage under monotonic loading and fatigue life in order to build an abacus giving Tsai-Wu failure criterion parameters as a function of microstructure and targeted fatigue life. On the other hand, the micromechanical damage model gives access to the evolution of the anisotropic stiffness tensor of SFRC submitted to complex thermomechanical loading, including quasi-static, dynamic, and cyclic loading with temperature and amplitude variations. The latter is then used to fill out microstructure-dependent material cards in finite element analysis for design optimization in the case of complex loading histories. The proposed methodology is illustrated in the case of a real automotive component made of sheet molding compound (the PSA 3008 tailgate). The obtained results emphasize how the proposed micromechanical methodology opens a new path for the automotive industry to lighten vehicle bodies and thereby save energy and reduce gas emissions.
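
For orientation, the sketch below evaluates a plane-stress Tsai-Wu failure index of the kind the proposed abacus parameterizes; failure is predicted when the index reaches 1. The strength values and the common square-root approximation for the interaction term F12 are illustrative assumptions, not the microstructure- and fatigue-life-dependent parameters derived by the authors.

import math

def tsai_wu_index(s1, s2, s12, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index; failure is predicted when the index >= 1.
    Xt/Xc, Yt/Yc: tensile/compressive strengths (positive values), S: shear strength."""
    F1 = 1.0 / Xt - 1.0 / Xc
    F2 = 1.0 / Yt - 1.0 / Yc
    F11 = 1.0 / (Xt * Xc)
    F22 = 1.0 / (Yt * Yc)
    F66 = 1.0 / S**2
    F12 = -0.5 * math.sqrt(F11 * F22)   # common default for the interaction term
    return (F1 * s1 + F2 * s2
            + F11 * s1**2 + F22 * s2**2 + F66 * s12**2
            + 2.0 * F12 * s1 * s2)

# Illustrative (assumed) strengths in MPa and a trial stress state
print(tsai_wu_index(s1=80.0, s2=15.0, s12=20.0,
                    Xt=180.0, Xc=150.0, Yt=40.0, Yc=90.0, S=45.0))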

Keywords: short fiber reinforced composite, structural design, damage, micromechanical modelling, fatigue, strain rate effect

Procedia PDF Downloads 107
560 Krill-Herd Step-Up Approach Based Energy Efficiency Enhancement Opportunities in the Offshore Mixed Refrigerant Natural Gas Liquefaction Process

Authors: Kinza Qadeer, Muhammad Abdul Qyyum, Moonyong Lee

Abstract:

Natural gas has become an attractive energy source in comparison with other fossil fuels because of its lower CO₂ and other air pollutant emissions. Therefore, compared to the demand for coal and oil, that for natural gas is increasing rapidly worldwide. The transportation of natural gas over long distances as a liquid (LNG) is preferable for several reasons, including economic, technical, political, and safety factors. However, LNG production is an energy-intensive process due to the tremendous power required to compress the refrigerants that provide sufficient cold energy to liquefy the natural gas. Therefore, one of the major issues in the LNG industry is to improve the energy efficiency of existing LNG processes through a cost-effective approach, that is, optimization. In this context, a bio-inspired krill-herd (KH) step-up approach was examined to enhance the energy efficiency of a single mixed refrigerant (SMR) natural gas liquefaction (LNG) process, which is considered one of the most promising candidates for offshore LNG production (FPSO). The optimal design of a natural gas liquefaction process involves multivariable non-linear thermodynamic interactions, which lead to exergy destruction and contribute to process irreversibility. As key decision variables, the optimal values of the mixed refrigerant flow rates and process operating pressures were determined based on the herding behavior of krill individuals corresponding to the minimum energy consumption for LNG production. To perform the rigorous process analysis, the SMR process was simulated in Aspen HYSYS® software and the resulting model was connected with the krill-herd approach coded in MATLAB. The optimal operating conditions found by the proposed approach reduced the overall energy consumption of the SMR process by up to 22.5% and also improved the coefficient of performance in comparison with the base case. The proposed approach was also compared with other well-proven optimization algorithms, such as genetic and particle swarm optimization algorithms, and was found to exhibit superior performance over these existing approaches.
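
To illustrate the general idea of a bio-inspired, population-based search of the kind described (without reproducing the full krill-herd algorithm, which also models neighbour-induced motion, foraging, and genetic operators), the following simplified Python sketch drifts a population toward the best solution found so far with a shrinking random-diffusion step. The objective function is a placeholder standing in for the Aspen HYSYS simulation call, and the four bounded decision variables are assumed stand-ins for refrigerant flow rates and operating pressures.

import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Placeholder for the simulator call (the paper couples MATLAB with Aspen HYSYS);
    # a simple smooth test function stands in for the compression power to minimise.
    return np.sum((x - 0.3) ** 2)

def simplified_krill_search(obj, lower, upper, n_krill=20, n_iter=200):
    """Greatly simplified krill-herd-style search: each individual drifts toward the
    best solution found so far plus a shrinking random-diffusion step."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    pop = rng.uniform(lower, upper, size=(n_krill, lower.size))
    best = pop[np.argmin([obj(p) for p in pop])].copy()
    for it in range(n_iter):
        scale = 1.0 - it / n_iter                      # diffusion shrinks over time
        step = 0.5 * (best - pop) + 0.1 * scale * (upper - lower) * rng.standard_normal(pop.shape)
        pop = np.clip(pop + step, lower, upper)
        fitness = np.array([obj(p) for p in pop])
        if fitness.min() < obj(best):
            best = pop[fitness.argmin()].copy()
    return best, obj(best)

# Four assumed decision variables, e.g. normalised refrigerant flows / pressures
best_x, best_f = simplified_krill_search(objective, lower=np.zeros(4), upper=np.ones(4))
print(best_x, best_f)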

Keywords: energy efficiency, Krill-herd, LNG, optimization, single mixed refrigerant

Procedia PDF Downloads 155
559 Biopolymer Nanoparticles Loaded with Calcium as a Source of Fertilizer

Authors: Erwin San Juan Martinez, Miguel Angel Aguilar Mendez, Manuel Sandoval Villa, Libia Iris Trejo Tellez

Abstract:

Some nanomaterials may improve plant growth within certain concentration ranges and could be used as nanofertilizers to increase crop yields while decreasing the environmental pollution caused by the uncontrolled use of conventional fertilizers. The objective of the present investigation was therefore to synthesize and characterize calcium-loaded gelatin nanoparticles generated through a spray-drying technique for use as nanofertilizers. To obtain these materials, a 2⁷⁻⁴ fractional factorial design was used in order to evaluate the largest number of factors (Ca²⁺ concentration, temperature and agitation time of the solution, calcium concentration, drying temperature, and % spray) with a possible effect on the size, distribution, and morphology of the nanoparticles. For the formation of the nanoparticles, a Nano Spray Dryer B-90® (Buchi, Flawil, Switzerland) equipped with a 4 µm spray cap was used. The size and morphology of the obtained nanoparticles were evaluated using a scanning electron microscope (JEOL JSM-6390LV; Tokyo, Japan) equipped with an energy dispersive X-ray (EDS) detector. The total quantification of Ca²⁺, as well as its release by the nanoparticles, was carried out with an inductively coupled plasma atomic emission spectrometer (ICP-ES 725, Agilent, Mulgrave, Australia). Of the seven factors evaluated, only the fertilizer concentration, % spray, and polymer concentration had a statistically significant effect on particle size. SEM micrographs from six of the eight conditions evaluated in this research showed separated particles with a good degree of sphericity, while in the other two, the particles showed amorphous morphology and aggregation. In all treatments, most of the particles showed smooth surfaces. The smallest average particle size obtained was 492 nm, while EDS results showed an even distribution of Ca²⁺ in the polymer matrix. The largest Ca²⁺ concentration measured by ICP was 10.5%, which agrees with the calculated theoretical value, while the release kinetics showed an upward trend within 24 h. Using the technique employed in this research, it was possible to obtain calcium-loaded nanoparticles of good size and sphericity with controlled-release properties. The characteristics of the nanoparticles resulted from manipulation of the synthesis conditions, which allows control of the size and shape of the particles and provides the means to adapt the properties of the materials to a specific application.
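
For readers unfamiliar with the experimental design mentioned above, the short Python sketch below generates a standard 2⁷⁻⁴ (resolution III) fractional factorial layout: eight runs for seven two-level factors, with the base factors A, B, C in a full factorial and the remaining columns defined by the usual generators D = AB, E = AC, F = BC, G = ABC. The assignment of the seven process factors to columns A-G is not specified in the abstract, so the labels here are generic.

from itertools import product

# 2^(7-4) resolution III design: 7 two-level factors in 8 runs.
# Base factors A, B, C form a full factorial; the remaining columns use the
# standard generators D = AB, E = AC, F = BC, G = ABC.
runs = []
for a, b, c in product((-1, 1), repeat=3):
    runs.append({"A": a, "B": b, "C": c,
                 "D": a * b, "E": a * c, "F": b * c, "G": a * b * c})

print("run " + " ".join(f"{k:>2}" for k in runs[0]))
for i, r in enumerate(runs, 1):
    print(f"{i:>3} " + " ".join(f"{v:>2}" for v in r.values()))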

Keywords: calcium, controlled release, gelatin, nano spray dryer, nanofertilizer

Procedia PDF Downloads 179
558 Biosynthesis of Tumor Inhibitory Podophyllotoxin, Quercetin and Kaempferol from Callogenesis of Dysosma Pleiantha (Hance) Woodson

Authors: Palaniyandi Karuppaiya, Hsin Sheng Tsay, Fang Chen

Abstract:

Medicinal herbs represent a huge and noteworthy reservoir for the discovery of novel anticancer drugs. Dysosma pleiantha (Hance) Woodson (Berberidaceae), one of the oldest traditional Chinese medicinal herbs, highly prized by the mountain tribes of Taiwan and China for its medicinal properties, contains the pharmaceutically important antitumor compounds podophyllotoxin, quercetin, and kaempferol. Among lignans, podophyllotoxin is an active antitumor compound and has been modified to produce the clinically useful drugs etoposide and teniposide. In recent years, natural populations of D. pleiantha have declined considerably due to anthropogenic activities such as habitat destruction and commercial exploitation for medicinal applications. As to its overall conservation status, D. pleiantha has been ranked as threatened on the China Species Red List. In the present study, an efficient in vitro callus culture system of D. pleiantha was established on Gamborg’s medium with various combinations and concentrations of different auxins and cytokinins under dark conditions. The best callus induction was recorded with 2 mg/L 2,4-dichlorophenoxyacetic acid (2,4-D) along with 0.2 mg/L kinetin, and the maximum callus proliferation was achieved at 1 mg/L 2,4-D. Among the explants tested, maximum callus induction (86%) was achieved from tender leaves. Hence, in subsequent experiments, leaf callus was further investigated for suitable callus biomass and the production level of anticancer compounds under the influence of different additives. The maximum fresh callus biomass (8.765 g) was recorded in a callus proliferation medium containing 500 mg/L casein hydrolysate. High performance liquid chromatography results revealed that the addition of different concentrations of peptone (1, 2 and 4 g/L) to the callus proliferation medium enhanced podophyllotoxin (16-fold), quercetin (12-fold) and kaempferol (5-fold) accumulation relative to the control. Thus, the established in vitro callus culture under the influence of different additives may offer an alternative source for the enhanced production of podophyllotoxin, kaempferol, and quercetin without harming the natural plant population.

Keywords: Dysosma pleiantha, kaempferol, podophyllotoxin, quercetin

Procedia PDF Downloads 277
557 Evaluating the Performance of Passive Direct Methanol Fuel Cell under Varying Operating and Structural Conditions

Authors: Rahul Saraswat

Abstract:

More recently, focus has been given to replacing machined stainless steel metal flow-fields with inexpensive wiremesh current collectors. The flow-fields are based on simple woven wiremesh screens of various stainless steels, which are sandwiched between thin metal plates of the same material to create a bipolar plate/flow-field configuration for use in a stack. Major advantages of using stainless steel wire screens include the elimination of expensive raw materials as well as machining and/or other special fabrication costs. The objective of this project is to improve the performance of the passive direct methanol fuel cell without increasing the cost of the cell, and to make it as compact and light as possible. From the literature survey, it was found that very little has been done in this direction, and the following methodology was used: (1) the passive DMFC cell can be made more compact, lighter, and less costly by changing the material used in its construction; (2) controlling the fuel diffusion rate through the cell improves the performance of the cell. A passive liquid-feed direct methanol fuel cell (DMFC) was fabricated using a given MEA (membrane electrode assembly) and tested for different current collector structures. Mesh current collectors of different mesh densities, along with different support structures, were used, and the performance was found to be better. The methanol concentration was also varied. Optimisation of mesh size, support structure, and fuel concentration was achieved. A cost analysis was also performed. From the performance analysis of the DMFC, we can conclude the following: the area specific resistance (ASR) of wiremesh current collectors is lower than the ASR of stainless steel plate current collectors, and the power produced by wiremesh current collectors is always higher than that produced by stainless steel plate current collectors. Low or moderate methanol concentrations should be used for better and more stable DMFC performance. Wiremesh is a good substitute for stainless steel plate in the current collectors of passive DMFCs because of its lower cost (by about 27%), flexibility, and light weight.

Keywords: direct methanol fuel cell, membrane electrode assembly, mesh, mesh size, methanol concentration and support structure

Procedia PDF Downloads 69
556 Reduction of the Cellular Infectivity of SARS-CoV-2 by a Mucoadhesive Nasal Spray

Authors: Adam M. Pitz, Gillian L. Phillipson, Jayant E. Khanolkar, Andrew M. Middleton

Abstract:

Emerging evidence suggests that the nose is the predominant route of entry for the SARS-CoV-2 virus into the host. A virucidal suspension test (conforming in principle to the European Standard EN14476) was conducted to determine whether a commercial liquid gel intranasal spray containing 1% of the mucoadhesive hydroxypropyl methylcellulose (HPMC) could inhibit the cellular infectivity of the SARS-CoV-2 coronavirus. Virus was added to the test product samples and to controls in a 1:8 ratio and mixed with one part bovine serum albumin as an interfering substance. The test samples were pre-equilibrated to 34 ± 2°C (representing the temperature of the nasopharynx), with the temperature maintained at 34 ± 2°C for virus contact times of 1, 5 and 10 minutes. Neutralized aliquots were inoculated onto host cells (Vero E6 cells, ATCC CRL-1586). The host cells were then incubated at 36 ± 2°C for a period of 7 days. The residual infectious virus in both test and controls was detected by viral-induced cytopathic effect. The 50% tissue culture infective dose per mL (TCID50/mL) was determined using the Spearman-Karber method, with results reported as the reduction of the virus titer due to treatment with the test product, expressed as log10. The controls confirmed the validity of the results, with no cytotoxicity or viral interference observed in the neutralized test product samples. The HPMC formulation reduced the SARS-CoV-2 titer, expressed as log10 TCID50, by 2.30 (±0.17), 2.60 (±0.19), and 3.88 (±0.19) for the respective contact times of 1, 5 and 10 minutes. The results demonstrate that this 1% HPMC gel formulation can reduce the cellular infectivity of the SARS-CoV-2 virus, with increasing viral inhibition observed with increasing exposure time. This 1% HPMC gel is well tolerated and can reside, when delivered via nasal spray, for up to one hour in the nasal cavity. We conclude that this intranasal gel spray with 1% HPMC, repeat-dosed every few hours, may offer an effective preventive or early intervention solution to limit the transmission and impact of the SARS-CoV-2 coronavirus.
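
As an aside on the titration arithmetic, the sketch below implements one common form of the Spearman-Karber 50% endpoint estimate from a dilution-series CPE readout; the well counts and the 0.1 mL inoculum volume are illustrative assumptions, not data from this study, and the formula assumes the series brackets the endpoint (about 100% positive at the most concentrated dilution and 0% at the most dilute).

import numpy as np

def tcid50_spearman_karber(log10_dilutions, positive, total, volume_ml=0.1):
    """Spearman-Karber 50% endpoint estimate.
    log10_dilutions: log10 of each dilution tested, most concentrated first (e.g. -1, -2, ...).
    positive/total: wells showing CPE and wells inoculated at each dilution."""
    x = np.asarray(log10_dilutions, float)
    p = np.asarray(positive, float) / np.asarray(total, float)
    d = abs(x[0] - x[1])                         # log10 dilution step
    log10_endpoint = x[0] - d * (p.sum() - 0.5)  # log10 of the 50% endpoint dilution
    return 10 ** (-log10_endpoint) / volume_ml   # titer per mL of inoculum

# Illustrative (assumed) CPE readout for a 10-fold dilution series, 8 wells per dilution
titer = tcid50_spearman_karber(
    log10_dilutions=[-1, -2, -3, -4, -5, -6, -7, -8],
    positive=[8, 8, 8, 6, 3, 1, 0, 0],
    total=[8] * 8,
)
print(f"{np.log10(titer):.2f} log10 TCID50/mL")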

Keywords: hydroxypropyl methylcellulose, mucoadhesive nasal spray, respiratory viruses, SARS-CoV-2

Procedia PDF Downloads 145
555 A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs Based on Machine Learning Algorithms

Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios

Abstract:

Aflatoxins are highly poisonous and carcinogenic compounds produced by fungi of the genus Aspergillus that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as the population, pathogenicity, and aflatoxigenic capacity of the strains, and the topography, soil, and climate parameters of the fig orchards, are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive, and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop. Consequently, there is interest in developing a tool that predicts aflatoxin levels based on topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for aflatoxin contamination of dried figs based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), the concentration of the exchangeable cations (Ca, Mg, K, Na), extractable P, and trace elements (B, Fe, Mn, Zn and Cu), by employing machine learning methods. In particular, our proposed method integrates three machine learning techniques, i.e., dimensionality reduction on the original dataset (principal component analysis), metric learning (Mahalanobis metric for clustering), and the k-nearest neighbors learning algorithm (KNN), into an enhanced model, with a mean performance of 85% in terms of the Pearson correlation coefficient (PCC) between observed and predicted values.
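
A minimal sketch of the modelling chain described above (dimensionality reduction followed by a Mahalanobis-distance k-NN predictor) is given below using scikit-learn. The data are synthetic stand-ins, and the Mahalanobis matrix is simply taken from the covariance of the PCA scores rather than learned with the Mahalanobis-metric-for-clustering algorithm the authors use, so this is an assumption-laden illustration of the pipeline, not the published model.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

# Synthetic stand-in data: rows = orchards, columns = soil/topography features
# (pH, EC, organic matter, sand/silt/clay, cations, trace elements, altitude, ...).
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 18))
y = X[:, :3].sum(axis=1) + 0.3 * rng.normal(size=120)   # stand-in aflatoxin level

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Reduce dimensionality, then k-NN with a Mahalanobis distance in the reduced space.
pca = PCA(n_components=5).fit(X_tr)
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)
VI = np.linalg.inv(np.cov(Z_tr, rowvar=False))          # inverse covariance of the scores
knn = KNeighborsRegressor(n_neighbors=5, algorithm="brute",
                          metric="mahalanobis", metric_params={"VI": VI})
knn.fit(Z_tr, y_tr)
pcc, _ = pearsonr(y_te, knn.predict(Z_te))
print(f"Pearson correlation on held-out data: {pcc:.2f}")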

Keywords: aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction

Procedia PDF Downloads 184
554 Supercritical Hydrothermal and Subcritical Glycolysis Conversion of Biomass Waste to Produce Biofuel and High-Value Products

Authors: Chiu-Hsuan Lee, Min-Hao Yuan, Kun-Cheng Lin, Qiao-Yin Tsai, Yun-Jie Lu, Yi-Jhen Wang, Hsin-Yi Lin, Chih-Hua Hsu, Jia-Rong Jhou, Si-Ying Li, Yi-Hung Chen, Je-Lueng Shie

Abstract:

Raw food waste has a high water content; if it is incinerated, it increases the cost of treatment, so composting or energy recovery is usually used instead. Although mature technologies exist for composting food waste, odor, wastewater, and other problems are serious, and the output of compost products is limited. Bakelite, meanwhile, is mainly used in the manufacture of integrated circuit boards; it is hard to recycle and reuse directly because of its rigid structure, and it is also difficult to incinerate, producing air pollutants due to incomplete combustion. In this study, supercritical hydrothermal and subcritical glycolysis thermal conversion technology is used to convert biomass wastes, namely bakelite and raw kitchen wastes, into carbon materials and biofuels. Batch carbonization tests are performed under the high-temperature, high-pressure conditions of the solvents and different operating conditions, including wet-basis and dry-basis mixed biomass. This study can be divided into two parts: in the first part, bakelite waste is processed as a dry-basis industrial waste, and in the second part, raw kitchen wastes (lemon, banana, watermelon, and pineapple peel) are used as wet-basis biomass. The parameters include reaction temperature, reaction time, mass-to-solvent ratio, and volume filling rate. The yield, conversion, and recovery rates of the products (solid, gas, and liquid) are evaluated and discussed. The results explore the benefits of synergistic effects in thermal glycolysis dehydration and carbonization on the yield and recovery rate of the solid products. The purpose is to obtain the optimum operating conditions. This technology is a biomass-negative carbon technology (BNCT); if it is combined with bioenergy with carbon capture and storage (BECCS), it can provide a new direction for 2050 net zero carbon dioxide emissions (NZCDE).

Keywords: biochar, raw food waste, bakelite, supercritical hydrothermal, subcritical glycolysis, biofuels

Procedia PDF Downloads 179
553 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning

Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga

Abstract:

Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (i.e., the Clean Air Act of London and the Soundscape definition). It is usually taken for granted that these problems go together, because the noise pollution present in cities is often linked to traffic and industries, and these produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and these two are notoriously produced by processes of combustion at high temperatures (i.e., car engines or thermal power stations). We can see the same process for industrial plants as well. What has to be investigated - and this is the topic of this paper - is whether or not there really is a correlation between noise pollution and air pollution (taking into account NO₂) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise App will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board to which the sensors and all the other components are connected. After assembling the sensors, they will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time in order to have an input for both weekdays and weekend days; in this way, it will be possible to see how the situation changes during the week. The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained from the sensors. To do so, the data will be converted to a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help in choosing the right mitigation solutions to be applied in the area of analysis, because it will make it possible to address both the noise and the air pollution problem with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper describes in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, noise and pollution mapping, and the analysis.
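
A minimal sketch of the planned comparison step, assuming one co-located pair of hourly-averaged readings, is shown below: both series are rescaled to the 0-100% scale described above and the Pearson correlation is computed. The rescaling does not change the correlation coefficient itself, since Pearson r is invariant under linear scaling, but it mirrors the percentage-based mapping the paper proposes. The numbers are invented for illustration.

import numpy as np
from scipy.stats import pearsonr

def to_percent(values):
    """Rescale a sensor series to a 0-100 % scale, as described for the joint mapping."""
    v = np.asarray(values, float)
    return 100.0 * (v - v.min()) / (v.max() - v.min())

# Illustrative (assumed) hourly averages from one co-located sensor pair
noise_dba = [52, 55, 61, 68, 72, 70, 66, 64, 58, 54]     # OpeNoise, dB(A)
no2_ppb   = [14, 16, 22, 35, 41, 38, 30, 27, 20, 15]     # AirMonitor NO2 sensor

noise_pct, no2_pct = to_percent(noise_dba), to_percent(no2_ppb)
r, p_value = pearsonr(noise_pct, no2_pct)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")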

Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter

Procedia PDF Downloads 212
552 Regional Low Gravity Anomalies Influencing High Concentrations of Heavy Minerals on Placer Deposits

Authors: T. B. Karu Jayasundara

Abstract:

Regions of low gravity and gravity anomalies both influence heavy mineral concentrations in placer deposits. Economically important heavy minerals are likely to show higher levels of deposition in low gravity regions of placer deposits. This can be observed in the coastal regions of southern Asia, particularly in Sri Lanka and peninsular India, areas located in the lowest gravity region of the world. An area of about 70 kilometers along the east coast of Sri Lanka is covered by a high percentage of ilmenite deposits, and the southwest coast of the island consists of a monazite placer deposit. These are among the largest placer deposits in the world. In India, the heavy mineral industry has a good market. On the other hand, based on the coastal placer deposits recorded, the high gravity region located around Papua New Guinea has no such heavy mineral deposits. In low gravity regions, with the help of other depositional environmental factors, the grains have more time and space to float in the sea; this helps bring high concentrations of heavy mineral deposits to the coast. The effect of low and high gravity can be demonstrated by using heavy mineral separation devices. The Wilfley heavy mineral separating table is one of these; it is extensively used in industry and in laboratories for heavy mineral separation. The horizontally oscillating Wilfley table helps to separate heavy and light mineral grains into different fractions with the use of water. In this experiment, the low and high angles of the Wilfley table represent low and high gravity, respectively. A sample mixture of heavy and light mineral grains of grain size <0.85 mm was used for this experiment. The high and low angles of the table were 6° and 2°, respectively. The fractions separated on the table were then further separated into heavy and light minerals with the use of a heavy liquid with a specific gravity of 2.85. The fractions of separated heavy and light minerals were used for drawing two-dimensional graphs. The graphs show that the low gravity stage collects a higher percentage of heavy minerals in the upper area of the table than the high gravity stage. The results of the experiment can be used for the comparison of heavy mineral levels between regional low gravity and high gravity areas. If there are any heavy mineral deposits in the high gravity regions, these deposits will occur far away from the coast, within the continental shelf.

Keywords: anomaly, gravity, influence, mineral

Procedia PDF Downloads 199
551 The High Precision of Magnetic Detection with Microwave Modulation in Solid Spin Assembly of NV Centres in Diamond

Authors: Zongmin Ma, Shaowen Zhang, Yueping Fu, Jun Tang, Yunbo Shi, Jun Liu

Abstract:

Solid-state quantum sensors are attracting wide interest because of their high sensitivity at room temperature. In particular, the spin properties of nitrogen-vacancy (NV) color centres in diamond make them outstanding sensors of magnetic fields, electric fields, and temperature under ambient conditions. Much of the work on NV magnetic sensing has aimed at achieving the smallest volume and high sensitivity of NV ensemble-based magnetometry, using micro-cavities, the light-trapping diamond waveguide (LTDW), and nano-cantilevers combined with MEMS (Micro-Electro-Mechanical System) techniques. Recently, a frequency-modulated microwave method with continuous optical excitation has been proposed to achieve a high sensitivity of 6 μT/√Hz using individual NV centres at the nanoscale. In this research, we built an experiment to measure a static magnetic field using the frequency-modulated microwave method under continuous illumination with green pump light at 532 nm and a bulk diamond sample with a high density of NV centers (1 ppm). The output of the confocal microscope was collected by an objective (NA = 0.7) and detected by a high-sensitivity photodetector. We designed a microstrip antenna for uniform and efficient excitation, well coupled to the spin ensemble at 2.87 GHz, the zero-field splitting of the NV centers. The photodetector signal was sent to a lock-in amplifier (LIA), referenced to the modulation signal generated by the microwave source through an IQ mixer. The detected signal is received by the photodetector, and the reference signal enters the lock-in amplifier to realize the open-loop detection of the NV atomic magnetometer. ODMR spectra can be plotted under continuous-wave (CW) microwave excitation. Due to the high sensitivity of the lock-in amplifier, the minimum detectable voltage can be measured, and the minimum detectable frequency shift can be obtained from this minimum voltage and the slope of the voltage response. The magnetic field sensitivity can be derived from η = δB√T, which corresponds to a 10 nT minimum detectable shift in the magnetic field. Further, frequency analysis of the noise in the system indicates that at 10 Hz the sensitivity is less than 10 nT/√Hz.
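
For context, a standard way to relate the lock-in readout to the quoted sensitivity figure, consistent with the η = δB√T expression above but not necessarily the authors' exact derivation, is

\delta B_{\min} \approx \frac{\sigma_V}{\left|\partial V/\partial B\right|}, \qquad \eta = \delta B_{\min}\sqrt{T_m},

where σ_V is the noise of the lock-in output voltage, ∂V/∂B is the slope of the lock-in signal versus magnetic field (set by the ODMR line shape and the NV gyromagnetic ratio of about 28 GHz/T), and T_m is the measurement time.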

Keywords: nitrogen-vacancy (NV) centers, frequency-modulated microwaves, magnetic field sensitivity, noise density

Procedia PDF Downloads 440
550 The Analysis of Drill Bit Optimization by the Application of New Electric Impulse Technology in Shallow Water Absheron Peninsula

Authors: Ayshan Gurbanova

Abstract:

Although the drill bit, the smallest part of the bottom hole assembly, accounts for only between 10% and 15% of the total expenses, it is the first piece of equipment in contact with the formation itself. Hence, it is important to choose the appropriate type and size of drill bit, which prevents the majority of problems by not demanding many tripping procedures. With advances in technology, it is now possible to gain benefits in terms of many aspects such as operating time, energy, expenditure, power, and so forth. With the intention of applying the method to Azerbaijan, the Shallow Water Absheron Peninsula field has been suggested, where the wildcat well named "NKX01" is located 15 km from the mainland in a water depth of 22 m. In 2015 and 2016, 2D and 3D seismic surveys were conducted in the contract area as well as at onshore and shallow-water locations. To provide a clear picture, soil stability, possible subsea hazard, geohazard, and bathymetry surveys were carried out as well. Based on the seismic analysis results, the exact locations of the exploration wells were determined and, along with this, the measurement decisions were made to divide the area into three productive zones. As for the method, Electric Impulse Technology (EIT) is based on the discharge of electrical energy within the rock. Put simply, a very high voltage is generated within nanoseconds and sent to the rock through the bare ends of the electrodes. These electrodes, one high-voltage and one grounded, are placed on the formation, which may be submerged in liquid. With this design, it is easier to drill a horizontal well owing to the loose contact with the formation. There is also little wear, as no combustion or mechanical power is involved. In terms of energy, conventional drilling requires about 1000 J/cm³, whereas this value is between 100 and 200 J/cm³ for EIT. Last but not least, test analysis showed that EIT achieves an ROP of more than 2 m/hr over 15 days. Taking everything into consideration, the comparative data analysis indicates that this method is highly applicable to the fields of Azerbaijan.

Keywords: drilling, drill bit cost, efficiency, cost

Procedia PDF Downloads 74
549 Effect of Different Sterilization Processes on Drug Loaded Silicone-Hydrogel

Authors: Raquel Galante, Marina Braga, Daniela Ghisleni, Terezinha J. A. Pinto, Rogério Colaço, Ana Paula Serro

Abstract:

The sensitive nature of soft biomaterials, such as hydrogels, renders their sterilization a particularly challenging task for the biomedical industry. Widely used contact lenses are now studied as promising platforms for topical corneal drug delivery. However, to the best of the authors' knowledge, the influence of sterilization methods on these systems has yet to be evaluated. The main goal of this study was to understand how different drug-hydrogel pairs would interact under an ozone-based sterilization method in comparison with two conventional processes (steam heat and gamma irradiation). For that, a silicone-hydrogel (Si-Hy) containing hydroxyethyl methacrylate (HEMA) and [tris(trimethylsiloxy)silyl]propyl methacrylate (TRIS) was produced and soaked in different drug solutions commonly used for the treatment of ocular diseases (levofloxacin, chlorhexidine, diclofenac, and timolol maleate). The drug release profiles and the main material properties were evaluated before and after sterilization. Namely, the swelling capacity was determined by water uptake studies, transparency was assessed by UV-Vis spectroscopy, surface topography/morphology by scanning electron microscopy (SEM), and mechanical properties by tensile tests. The released drug was quantified by high performance liquid chromatography (HPLC). The effectiveness of the sterilization procedures was assured by performing sterility tests. The ozone gas method led to a significant reduction of the drug released and to the formation of degradation products, especially for diclofenac and levofloxacin. Gamma irradiation led to darkening of the loaded Si-Hys and to the complete degradation of levofloxacin. Steam heat led to smoother surfaces and to a decrease in the amount of drug released, however with no formation of degradation products. This difference in the total drug released could be related to drug/polymer interactions promoted by the sterilization conditions in the presence of the drug. Our findings offer important insights that, in turn, could be a useful contribution to the safe development of actual products.

Keywords: drug delivery, silicone hydrogels, sterilization, gamma irradiation, steam heat, ozone gas

Procedia PDF Downloads 312
548 Virtual Approach to Simulating Geotechnical Problems under Both Static and Dynamic Conditions

Authors: Varvara Roubtsova, Mohamed Chekired

Abstract:

Recent studies on the numerical simulation of geotechnical problems show the importance of considering the soil micro-structure. At this scale, soil is a discrete particle medium whose particles can interact with each other and with water flow under external forces, structural loads, or natural events. This paper presents research conducted in a virtual laboratory named SiGran, developed at IREQ (Institut de recherche d’Hydro-Quebec) for the purpose of investigating a broad range of problems encountered in geotechnics. Using the Discrete Element Method (DEM), SiGran simulates granular materials directly by applying Newton’s laws to each particle. The water flow is simulated by using the Marker and Cell (MAC) method to solve the full form of the Navier-Stokes equations for an incompressible viscous liquid. In this paper, examples of numerical simulations and their comparisons with real experiments have been selected to show the complexity of geotechnical research at the micro level. These examples describe transient flows into a porous medium, the interaction of particles in a viscous flow, the compaction of saturated and unsaturated soils, and the phenomenon of liquefaction under seismic load. They also provide an opportunity to present SiGran’s capacity to compute the distribution and evolution of energy by type (particle kinetic energy, particle internal elastic energy, energy dissipated by friction or as a result of viscous interaction within the flow, and so on). This work also includes the first attempts to apply the micro-scale discrete results at a macro continuum level, where the Smoothed Particle Hydrodynamics (SPH) method was used to solve the system of governing equations. The material behavior equation is based on the results of simulations carried out at the micro level. The possibility of combining the three methods (DEM, MAC and SPH) is discussed.
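
To make the per-particle principle concrete, the following minimal one-dimensional sketch (not SiGran itself) integrates Newton's second law for two spheres interacting through a linear spring-dashpot contact, which is the basic DEM building block the paper describes; all parameter values are assumed for illustration.

import numpy as np

# Minimal 1-D DEM sketch: two spherical particles approach, collide through a
# linear spring-dashpot normal contact force, and Newton's second law is
# integrated explicitly -- the same per-particle principle applied in 3-D codes.
k_n, c_n = 1.0e4, 2.0          # contact stiffness [N/m] and damping [N*s/m] (assumed)
radius, mass = 0.01, 0.05      # particle radius [m] and mass [kg] (assumed)
dt, n_steps = 1.0e-5, 20000

x = np.array([0.0, 0.05])      # positions [m]
v = np.array([0.5, -0.5])      # velocities [m/s]

for _ in range(n_steps):
    gap = (x[1] - x[0]) - 2.0 * radius          # negative gap = overlap
    f = 0.0
    if gap < 0.0:                               # contact: repulsive spring + dashpot
        f = -k_n * gap - c_n * (v[1] - v[0])
    a = np.array([-f, f]) / mass                # equal and opposite contact forces
    v += a * dt                                 # explicit (symplectic Euler) update
    x += v * dt

print("final velocities:", v)                   # nearly elastic rebound expected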

Keywords: discrete element method, marker and cell method, numerical simulation, multi-scale simulations, smoothed particle hydrodynamics

Procedia PDF Downloads 302
547 Metaphysics of the Unified Field of the Universe

Authors: Santosh Kaware, Dnyandeo Patil, Moninder Modgil, Hemant Bhoir, Debendra Behera

Abstract:

The Unified Field Theory has been an area of intensive research for many decades. This paper focuses on the philosophy and metaphysics of unified field theory at the Planck scale, and its relationship with super string theory and Quantum Vacuum Dynamic Physics. We examined the epistemology of questions such as: (1) What is the Unified Field of the universe? (2) Can it actually (a) permeate the complete universe, or (b) be localized in bound regions of the universe, or (c) extend into the extra dimensions, or (d) live only in extra dimensions? (3) What should be the emergent ontological properties of the Unified Field? (4) How does the universe manifest through its Quantum Vacuum energies? (5) How is the space-time metric coupled to the Unified Field? We present a number of ansätze, which we outline below. It is proposed that the unified field possesses consciousness as well as a memory, a recording of past history, analogous to the 'Consistent Histories' interpretation of quantum mechanics. We propose a Planck-scale geometry of the Unified Field with circle-like topology, having 32 energy points on its periphery connected to each other by 10-dimensional meta-strings, which are the sources for the manifestation of the different fundamental forces and particles of the universe through its Quantum Vacuum energies. It is also proposed that the sub-energy levels of the 'Conscious Unified Field' are used for the process of creation, preservation, and rejuvenation of the universe over a period of time by means of negentropy. These epochs can be for the complete universe, or for localized regions such as galaxies or clusters of galaxies. It is proposed that the Unified Field operates through geometric patterns of its Quantum Vacuum energies, manifesting as various elementary particles by giving spins to zero-point energy elements. The epistemological relationship between unified field theory and super-string theories is examined. The properties of 'consciousness' and 'memory' cascade from the universe into macroscopic objects, and further onto the elementary particles, via a fractal pattern. Other properties of fundamental particles, such as mass, charge, spin, and iso-spin, also spill out of such a cascade. The manifestations of the unified field can reach into parallel universes, or the 'multiverse', and essentially have an existence independent of space-time. It is proposed that the mass, length, and time scales of the unified theory are smaller even than the Planck scale, at a level which we call that of 'Super Quantum Gravity' (SQG).

Keywords: super string theory, Planck scale geometry, negentropy, super quantum gravity

Procedia PDF Downloads 275
546 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction

Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach

Abstract:

X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation X-ray imaging, especially for soft tissues in the medical imaging energy range. This can potentially lead to better diagnosis for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires highly coherent X-rays. Many research teams have demonstrated that it is also feasible using a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement for fine gratings and high-precision stepping motors when using a laboratory source prevents it from being widely used. Recently, a random phase object has been proposed as an analyzer. This method requires a much less demanding experimental setup. However, previous studies were done using a particular X-ray source (a liquid-metal jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup with just a small modification of a commercial bench-top micro-CT (computed tomography) scanner, by introducing a piece of sandpaper as the phase analyzer in front of the X-ray source. However, this needs a suitable algorithm for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with a high temporal resolution is particularly challenging. Different reconstruction methods, including neural network based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT. A Monte Carlo ray tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, in order to address the issue that neural networks require large amounts of training data to obtain high-quality reconstructions.
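
As an illustration of the speckle-tracking step discussed above, the sketch below recovers an integer-pixel displacement between two speckle sub-windows by FFT-based cross-correlation; the synthetic random pattern and the known shift are stand-ins, and a real pipeline would add sub-pixel refinement and windowing.

import numpy as np

def integer_shift(ref, img):
    """Estimate the integer-pixel displacement of img relative to ref
    via FFT-based cross-correlation (the core of a simple speckle-tracking step)."""
    ref = ref - ref.mean()
    img = img - img.mean()
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dims = np.array(corr.shape)
    shift = np.array(peak, dtype=float)
    shift[shift > dims / 2] -= dims[shift > dims / 2]   # wrap to signed displacements
    return shift                                        # (row, col) displacement

rng = np.random.default_rng(1)
reference = rng.random((64, 64))                        # synthetic sandpaper speckle pattern
displaced = np.roll(reference, (3, -2), axis=(0, 1))    # known shift to recover
print(integer_shift(reference, displaced))              # expect approximately [3, -2]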

Keywords: micro-ct, neural networks, reconstruction, speckle-based x-ray phase contrast

Procedia PDF Downloads 258
545 A Study on Computational Fluid Dynamics (CFD)-Based Design Optimization Techniques Using Multi-Objective Evolutionary Algorithms (MOEA)

Authors: Ahmed E. Hodaib, Mohamed A. Hashem

Abstract:

In engineering applications, a design has to be as close to perfect as possible for a defined case. The designer has to overcome many challenges in order to reach the optimal solution to a specific problem. This process is called optimization. Generally, there is always a function called the “objective function” that is required to be maximized or minimized by choosing input parameters called “degrees of freedom” within an allowed domain called the “search space” and computing the values of the objective function for these input values. It becomes more complex when we have more than one objective for our design. As an example of a Multi-Objective Optimization Problem (MOP): a structural design that aims to minimize weight and maximize strength. In such a case, the Pareto Optimal Frontier (POF) is used, which is a curve plotting the two objective functions for the best cases. At this point, a designer should make a decision to choose a point on the curve. Engineers use algorithms or iterative methods for optimization. In this paper, we discuss Evolutionary Algorithms (EA), which are widely used with multi-objective optimization problems due to their robustness, simplicity, and suitability to be coupled and parallelized. Evolutionary algorithms are developed to guarantee convergence to an optimal solution. An EA uses mechanisms inspired by Darwinian evolution principles. Technically, they belong to the family of trial-and-error problem solvers and can be considered global optimization methods with a stochastic optimization character. The optimization is initialized by picking random solutions from the search space, and then the solution progresses towards the optimal point by using operators such as Selection, Combination, Cross-over and/or Mutation. These operators are applied to the old solutions, the “parents”, so that new sets of design variables called “children” appear. The process is repeated until the optimal solution to the problem is reached. Reliable and robust computational fluid dynamics solvers are nowadays commonly utilized in the design and analysis of various engineering systems, such as aircraft, turbomachinery, and automobiles. The coupling of Computational Fluid Dynamics “CFD” and Multi-Objective Evolutionary Algorithms “MOEA” has become substantial in aerospace engineering applications, such as aerodynamic shape optimization and advanced turbomachinery design.
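
A minimal, generic real-coded evolutionary loop of the kind outlined above (tournament selection, arithmetic crossover, Gaussian mutation, plus elitism) is sketched below in Python; the objective function is a cheap analytic placeholder standing in for an expensive CFD evaluation, and it is single-objective for brevity, whereas a MOEA would maintain a Pareto front rather than a single best solution.

import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    # Placeholder for an expensive CFD evaluation (e.g. drag of a candidate shape).
    return np.sum(x**2)

def evolutionary_minimise(obj, dim=6, pop_size=30, generations=100,
                          mutation_sigma=0.1, elite=2):
    """Minimal real-coded evolutionary loop: tournament selection, arithmetic
    crossover, Gaussian mutation, with a small elite carried over unchanged."""
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.array([obj(p) for p in pop])
        order = np.argsort(fitness)

        def tournament():
            i, j = rng.integers(pop_size, size=2)
            return pop[i] if fitness[i] < fitness[j] else pop[j]

        new_pop = [pop[i].copy() for i in order[:elite]]        # elitism
        while len(new_pop) < pop_size:
            parent_a, parent_b = tournament(), tournament()     # selection
            w = rng.random()
            child = w * parent_a + (1 - w) * parent_b           # arithmetic crossover
            child += mutation_sigma * rng.standard_normal(dim)  # Gaussian mutation
            new_pop.append(child)
        pop = np.array(new_pop)
    fitness = np.array([obj(p) for p in pop])
    return pop[fitness.argmin()], fitness.min()

best_x, best_f = evolutionary_minimise(objective)
print(best_f)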

Keywords: mathematical optimization, multi-objective evolutionary algorithms "MOEA", computational fluid dynamics "CFD", aerodynamic shape optimization

Procedia PDF Downloads 256
544 Impact of Urban Densification on Travel Behaviour: Case of Surat and Udaipur, India

Authors: Darshini Mahadevia, Kanika Gounder, Saumya Lathia

Abstract:

Cities, an outcome of natural growth and migration, are ever-expanding due to urban sprawl. In the Global South, urban areas are experiencing a switch from public transport to private vehicles, coupled with intensified urban agglomeration, leading to frequent longer commutes by automobiles. This increase in travel distance and motorized vehicle kilometres lead to unsustainable cities. To achieve the nationally pledged GHG emission mitigation goal, the government is prioritizing a modal shift to low-carbon transport modes like mass transit and paratransit. Mixed land-use and urban densification are crucial for the economic viability of these projects. Informed by desktop assessment of mobility plans and in-person primary surveys, the paper explores the challenges around urban densification and travel patterns in two Indian cities of contrasting nature- Surat, a metropolitan industrial city with a 5.9 million population and a very compact urban form, and Udaipur, a heritage city attracting large international tourists’ footfall, with limited scope for further densification. Dense, mixed-use urban areas often improve access to basic services and economic opportunities by reducing distances and enabling people who don't own personal vehicles to reach them on foot/ cycle. But residents travelling on different modes end up contributing to similar trip lengths, highlighting the non-uniform distribution of land-uses and lack of planned transport infrastructure in the city and the urban-peri urban networks. Additionally, it is imperative to manage these densities to reduce negative externalities like congestion, air/noise pollution, lack of public spaces, loss of livelihood, etc. The study presents a comparison of the relationship between transport systems with the built form in both cities. The paper concludes with recommendations for managing densities in urban areas along with promoting low-carbon transport choices like improved non-motorized transport and public transport infrastructure and minimizing personal vehicle usage in the Global South.

Keywords: India, low-carbon transport, travel behaviour, trip length, urban densification

Procedia PDF Downloads 217
543 Observationally Constrained Estimates of Aerosol Indirect Radiative Forcing over Indian Ocean

Authors: Sofiya Rao, Sagnik Dey

Abstract:

Aerosol-cloud-precipitation interaction continues to be one of the largest sources of uncertainty in quantifying the aerosol climate forcing. The uncertainty increases from the global to the regional scale. This problem remains unresolved due to the large discrepancy in the representation of cloud processes in climate models. Most of the studies on aerosol-cloud-climate interaction and aerosol-cloud-precipitation over the Indian Ocean (like the INDOEX and CAIPEEX campaigns, etc.) are restricted either to one season or to one region. Here we developed a theoretical framework to quantify aerosol indirect radiative forcing using Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol and cloud products for the 15-year (2000-2015) period over the Indian Ocean. This framework relies on the observationally constrained estimate of the aerosol-induced change in cloud albedo. We partitioned the change in cloud albedo into the change in Liquid Water Path (LWP) and Effective Radius of Clouds (Reff) in response to an aerosol optical depth (AOD). The cloud albedo response to an increase in AOD is most sensitive in the LWP range of 120-300 g/m² for Reff varying from 8-24 micrometers, which means aerosols are most sensitive to this range of LWP and Reff. Using this framework, the aerosol forcing during a transition from the indirect to the semi-direct effect is also calculated. The outcome of this analysis shows the best results over the Arabian Sea in comparison with the Bay of Bengal and the South Indian Ocean because of the heterogeneity in aerosol species over the Arabian Sea. Over the Arabian Sea, more absorbing aerosols dominate during the winter season, while during the pre-monsoon, dust (coarse-mode aerosol particles) dominates. In winter and pre-monsoon the aerosol forcing is dominant, while during the monsoon and post-monsoon seasons the meteorological forcing is dominant. Over the South Indian Ocean, more or less the same type of aerosol (sea salt) is present. Over the Arabian Sea, the aerosol indirect radiative forcing is -5 ± 4.5 W/m² for the winter season, and it is weaker in the other seasons. The results provide observationally constrained estimates of aerosol indirect forcing in the Indian Ocean, which can be helpful in evaluating climate model performance in the context of such complex interactions.
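
One way to write the partition described above, purely as an illustration of the chain-rule decomposition and not necessarily in the authors' exact notation, is

\frac{d\alpha_c}{d\ln\mathrm{AOD}} = \frac{\partial\alpha_c}{\partial\ln\mathrm{LWP}}\,\frac{d\ln\mathrm{LWP}}{d\ln\mathrm{AOD}} + \frac{\partial\alpha_c}{\partial\ln r_{\mathrm{eff}}}\,\frac{d\ln r_{\mathrm{eff}}}{d\ln\mathrm{AOD}},

where α_c is the cloud albedo and the two total derivatives on the right are the LWP and effective-radius responses to aerosol loading that would be constrained from the MODIS retrievals.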

Keywords: aerosol-cloud-precipitation interaction, aerosol-cloud-climate interaction, indirect radiative forcing, climate model

Procedia PDF Downloads 176
542 3-Dimensional Contamination Conceptual Site Model: A Case Study Illustrating the Multiple Applications of Developing and Maintaining a 3D Contamination Model during an Active Remediation Project on a Former Urban Gasworks Site

Authors: Duncan Fraser

Abstract:

A 3-Dimensional (3D) conceptual site model was developed using the Leapfrog Works® platform utilising a comprehensive historical dataset for a large former Gasworks site in Fitzroy, Melbourne. The gasworks had been constructed across two fractured geological units with varying hydraulic conductivities. A Newer Volcanic (basaltic) outcrop covered approximately half of the site and was overlying a fractured Melbourne formation (Siltstone) bedrock outcropping over the remaining portion. During the investigative phase of works, a dense non-aqueous phase liquid (DNAPL) plume (coal tar) was identified within both geological units in the subsurface originating from multiple sources, including gasholders, tar wells, condensers, and leaking pipework. The first stage of model development was undertaken to determine the horizontal and vertical extents of the coal tar in the subsurface and assess the potential causality between potential sources, plume location, and site geology. Concentrations of key contaminants of interest (COIs) were also interpolated within Leapfrog to refine the distribution of contaminated soils. The model was subsequently used to develop a robust soil remediation strategy and achieve endorsement from an Environmental Auditor. A change in project scope, following the removal and validation of the three former gasholders, necessitated the additional excavation of a significant volume of residual contaminated rock to allow for the future construction of two-story underground basements. To assess financial liabilities associated with the offsite disposal or thermal treatment of material, the 3D model was updated with three years of additional analytical data from the active remediation phase of works. Chemical concentrations and the residual tar plume within the rock fractures were modelled to pre-classify the in-situ material and enhance separation strategies to prevent the unnecessary treatment of material and reduce costs.

Keywords: 3D model, contaminated land, Leapfrog, remediation

Procedia PDF Downloads 133
541 Nondestructive Prediction and Classification of Gel Strength in Ethanol-Treated Kudzu Starch Gels Using Near-Infrared Spectroscopy

Authors: John-Nelson Ekumah, Selorm Yao-Say Solomon Adade, Mingming Zhong, Yufan Sun, Qiufang Liang, Muhammad Safiullah Virk, Xorlali Nunekpeku, Nana Adwoa Nkuma Johnson, Bridget Ama Kwadzokpui, Xiaofeng Ren

Abstract:

Enhancing starch gel strength and stability is crucial. However, traditional gel property assessment methods are destructive, time-consuming, and resource-intensive. Thus, understanding ethanol treatment effects on kudzu starch gel strength and developing a rapid, nondestructive gel strength assessment method is essential for optimizing the treatment process and ensuring product quality consistency. This study investigated the effects of different ethanol concentrations on the microstructure of kudzu starch gels using a comprehensive microstructural analysis. We also developed a nondestructive method for predicting gel strength and classifying treatment levels using near-infrared (NIR) spectroscopy, and advanced data analytics. Scanning electron microscopy revealed progressive network densification and pore collapse with increasing ethanol concentration, correlating with enhanced mechanical properties. NIR spectroscopy, combined with various variable selection methods (CARS, GA, and UVE) and modeling algorithms (PLS, SVM, and ELM), was employed to develop predictive models for gel strength. The UVE-SVM model demonstrated exceptional performance, with the highest R² values (Rc = 0.9786, Rp = 0.9688) and lowest error rates (RMSEC = 6.1340, RMSEP = 6.0283). Pattern recognition algorithms (PCA, LDA, and KNN) successfully classified gels based on ethanol treatment levels, achieving near-perfect accuracy. This integrated approach provided a multiscale perspective on ethanol-induced starch gel modification, from molecular interactions to macroscopic properties. Our findings demonstrate the potential of NIR spectroscopy, coupled with advanced data analysis, as a powerful tool for rapid, nondestructive quality assessment in starch gel production. This study contributes significantly to the understanding of starch modification processes and opens new avenues for research and industrial applications in food science, pharmaceuticals, and biomaterials.
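
A compact sketch of the kind of regression pipeline described above (wavelength selection followed by a support vector regressor, evaluated by Rp and RMSEP) is given below with scikit-learn; the spectra are synthetic stand-ins and a simple correlation filter replaces the UVE/CARS/GA variable selection used in the paper, so the numbers it prints are not comparable to the reported results.

import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from scipy.stats import pearsonr

# Synthetic stand-in for NIR spectra (rows = gel samples, columns = wavelengths)
rng = np.random.default_rng(7)
n_wavelengths = 700
X = rng.normal(size=(90, n_wavelengths))
y = 40 + 5 * X[:, 100] - 3 * X[:, 250] + rng.normal(scale=1.0, size=90)  # gel strength

# Stand-in for UVE/CARS/GA variable selection: keep the wavelengths most correlated with y.
corr = np.abs([pearsonr(X[:, j], y)[0] for j in range(n_wavelengths)])
selected = np.argsort(corr)[-30:]

X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, test_size=0.3, random_state=1)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_tr, y_tr)
y_hat = model.predict(X_te)
rp, _ = pearsonr(y_te, y_hat)
rmsep = mean_squared_error(y_te, y_hat) ** 0.5
print(f"Rp = {rp:.3f}, RMSEP = {rmsep:.3f}")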

Keywords: kudzu starch gel, near-infrared spectroscopy, gel strength prediction, support vector machine, pattern recognition algorithms, ethanol treatment

Procedia PDF Downloads 37
540 Synthesis of Microencapsulated Phase Change Material for Adhesives with Thermoregulating Properties

Authors: Christin Koch, Andreas Winkel, Martin Kahlmeyer, Stefan Böhm

Abstract:

Due to environmental regulations on greenhouse gas emissions and the depletion of fossil fuels, there is increasing interest in electric vehicles. To maximize their driving range, batteries with high storage capacities are needed. In most electric cars, rechargeable lithium-ion batteries are used because of their high energy density. However, it has to be taken into account that these batteries generate a large amount of heat during the charge and discharge processes. This leads to a reduced lifetime and damage to the battery cells when the temperature exceeds the defined operating range. To ensure efficient performance of the battery cells, reliable thermal management is required. Currently, cooling is achieved by heat sinks (e.g., cooling plates) bonded to the battery cells with a thermally conductive adhesive (TCA) that directs heat away from the components. Especially when large amounts of heat have to be dissipated spontaneously due to peak loads, heat conduction alone is not sufficient, so attention must be paid to the mechanism of heat storage. An efficient method to store thermal energy is the use of phase change materials (PCM). Through an isothermal phase change, a PCM can briefly absorb or release thermal energy at a constant temperature. If the phase change takes place in the transition from solid to liquid, heat is stored during melting and released to the surroundings during solidification upon cooling. The presented work demonstrates the great potential of thermally conductive adhesives filled with microencapsulated PCM to limit peak temperatures in battery systems. The encapsulation of the PCM avoids aging effects (e.g., migration) and chemical reactions between the PCM and the adhesive matrix components. In this study, microencapsulation was carried out by in situ polymerization. The microencapsulated PCM was characterized by FT-IR spectroscopy, and its thermal properties were measured by DSC and the laser flash method. The mechanical properties, electrical and thermal conductivity, and adhesive toughness of the TCA/PCM composite were also investigated.
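The heat-storage argument can be made concrete with a back-of-the-envelope calculation. The sketch below, using generic paraffin-like property values rather than measurements from this study, compares the energy absorbed by a PCM heated through its melting point with purely sensible heating of an inert filler of the same mass.

```python
# Back-of-the-envelope sketch: heat absorbed by a PCM across its melting point
# versus sensible-only heating. All property values are generic placeholders,
# not DSC data from this study.

def pcm_heat_absorbed(mass_kg, cp_solid, cp_liquid, latent_heat, t_start, t_melt, t_end):
    """Heat absorbed (J) heating a PCM from t_start through melting to t_end."""
    sensible_solid = mass_kg * cp_solid * (t_melt - t_start)
    latent = mass_kg * latent_heat
    sensible_liquid = mass_kg * cp_liquid * (t_end - t_melt)
    return sensible_solid + latent + sensible_liquid

# Assumed values: 50 g of encapsulated paraffin, cp ~2.0/2.2 kJ/(kg K),
# latent heat ~200 kJ/kg, melting at 45 degC, heated from 25 degC to 55 degC.
q_pcm = pcm_heat_absorbed(0.050, 2000.0, 2200.0, 200_000.0, 25.0, 45.0, 55.0)

# Sensible-only comparison: same mass of an inert filler with cp ~1 kJ/(kg K).
q_sensible = 0.050 * 1000.0 * (55.0 - 25.0)

print(f"PCM (with phase change): {q_pcm / 1000:.1f} kJ")
print(f"Inert filler (sensible only): {q_sensible / 1000:.1f} kJ")
```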

Keywords: phase change material, microencapsulation, adhesive bonding, thermal management

Procedia PDF Downloads 72
539 Territorial Influence of Religious Based Armed Conflicts in Africa

Authors: Badru Hasan Segujja, Nassiwa Shamim

Abstract:

This study, "Territorial Influence of Religious Based Armed Conflicts in Africa", was undertaken to identify the influence of religious-based armed conflicts, their persistence, and their impact on African societies. The study employed a qualitative research methodology; data from respondents were descriptively recorded using random sampling techniques. The study found that the world is experiencing religious-based armed violence in which actors fight under the umbrella of "freedom fighters", and that the African continent in particular has been at the peak of such armed violence almost continuously from each country's independence to date. Because of this situation, the continent is torn apart, with families traumatized by the memories of loved ones who did not survive past faith-based armed violence. The study found that some of these faith-based armed conflicts are driven by factors ranging from undemocratic practices due to poor governance, poverty, and unemployment to religious extremism and radicalism, which later turn into intractable violence. Religious armed groups include the Holy Spirit Movement (HSM), the Allied Democratic Forces (ADF), and the Lord's Resistance Army (LRA) in Uganda and now eastern DRC and the Central African Republic; Al-Shabaab in East Africa; the Séléka and Anti-balaka in the Central African Republic; Boko Haram in Nigeria; the Janjaweed in Sudan and Chad; the Sudanese People's Liberation Army (SPLA) in South Sudan; and Al-Qaeda in the Islamic Maghreb (AQIM) in Mali, coupled with acute ethnic antagonism between Hutu and Tutsi in Rwanda and Burundi and xenophobic nationalism in South Africa. The study further found that the "freedom fighter" label has allowed these groups to hold their ground without fear of repercussions, a situation in which children and women become disproportionately victimized, while the response of the international community to the violence remains inadequate. The study concludes that dialogue for peace is better than going to war. It recommends that, in order to restore peace on the African continent and elsewhere in the world, the UN should promote the teaching of peace values in schools, pre-conflict early warnings must be heeded, actors must refrain from using religious labels, and issues of democracy, unemployment, and poverty should be addressed to avoid unnecessary conflicts.

Keywords: influence, religious, armed, conflicts

Procedia PDF Downloads 85
538 Molecular Pathogenesis of NASH through the Dysregulation of Metabolic Organ Network in the NASH-HCC Model Mouse Treated with Streptozotocin-High Fat Diet

Authors: Bui Phuong Linh, Yuki Sakakibara, Ryuto Tanaka, Elizabeth H. Pigney, Taishi Hashiguchi

Abstract:

NASH is an increasingly prevalent chronic liver disease that can progress to hepatocellular carcinoma and is now attracting interest worldwide. The STAM™ model is a clinically correlated murine NASH model that shows the same pathological progression as NASH patients and has been widely used for pharmacological and basic research. The multiple-parallel-hits hypothesis suggests that abnormalities in adipocytokines, intestinal microflora, and endotoxins are intertwined and could contribute to the development of NASH. In fact, NASH patients often exhibit gut dysbiosis and dysfunction in adipose tissue and metabolism. However, analysis of the STAM™ model has so far focused only on the liver. To clarify whether the STAM™ model can also mimic multiple pathways of NASH progression, we analyzed the organ crosstalk interactions between the liver and the gut and the phenotype of adipose tissue in the STAM™ model. NASH was induced in male mice by a single subcutaneous injection of 200 µg streptozotocin 2 days after birth and feeding with a high-fat diet from 4 weeks of age. The mice were sacrificed at the NASH stage. Colon samples were snap-frozen in liquid nitrogen and stored at -80˚C for tight junction-related protein analysis. Adipose tissue was prepared into paraffin blocks for HE staining. Blood adiponectin was analyzed to confirm changes in the adipocytokine profile. Analysis of tight junction-related proteins in the intestine showed that expression of ZO-1 decreased with the progression of the disease. Increased endotoxin levels in the blood and decreased expression of adiponectin were also observed. HE staining revealed hypertrophy of adipocytes. The decreased expression of ZO-1 in the intestine of STAM™ mice suggests the occurrence of leaky gut, and abnormalities in adipocytokine secretion were also observed. Together with the liver, the phenotypes in these organs are highly similar to those of human NASH patients and might be involved in the pathogenesis of NASH.

Keywords: Non-alcoholic steatohepatitis, hepatocellular carcinoma, fibrosis, organ crosstalk, leaky gut

Procedia PDF Downloads 159
537 Composite Materials from Beer Bran Fibers and Polylactic Acid: Characterization and Properties

Authors: Camila Hurtado, Maria A. Morales, Diego Torres, L.H. Reyes, Alejandro Maranon, Alicia Porras

Abstract:

This work presents the physical and chemical characterization of beer bran fibers and the properties of novel composite materials made of these fibers and polylactic acid (PLA). Treated and untreated fibers were physically characterized in terms of their moisture content (ASTM D1348), density, and particle size (ASAE S319.2). A chemical analysis following TAPPI standards was performed to determine the ash, extractives, lignin, and cellulose content of the fibers. Thermal stability was determined by TGA analysis, and FTIR was carried out to check the influence of the alkali treatment on fiber composition. Fibers were alkali-treated with NaOH (5%) for 90 min to improve their interfacial adhesion with the polymeric matrix in composites. Composite materials based on either treated or untreated beer bran fibers and PLA were developed and characterized in tension (ASTM D638), bending (ASTM D790), and impact (ASTM D256). Before composite manufacturing, PLA and beer bran fibers (10 wt.%) were mixed in a twin-screw extruder with a temperature profile between 155°C and 180°C. Coupons were manufactured by compression molding (110 bar) at 190°C. Physical characterization showed that the alkali treatment does not affect the moisture content (6.9%) or the density (0.48 g/cm³ for untreated fiber and 0.46 g/cm³ for the treated one). Chemical and FTIR analyses showed a slight decrease in ash and extractives, decreases of 47% and 50% in lignin and hemicellulose content, respectively, and an increase of 71% in cellulose content. Fiber thermal stability was improved by about 10°C with the alkali treatment. The tensile strength of the composites was between 42 and 44 MPa, with no statistically significant difference between coupons with treated or untreated fibers. However, compared to neat PLA, composites with beer bran fibers present a decrease in tensile strength of 27%. The Young's modulus increases by 10% with treated fiber compared to neat PLA. Flexural strength decreases in coupons with treated fiber (67.7 MPa), while the flexural modulus increases (3.2 GPa) compared to neat PLA (83.3 MPa and 2.8 GPa, respectively). Izod impact test results showed an improvement of 99.4% in coupons with treated fibers compared with neat PLA.
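For readers who want to reproduce the relative flexural changes quoted above, the short sketch below recomputes them from the reported values for the treated-fiber composite and neat PLA; the helper function is illustrative only.

```python
# Recompute relative changes from the flexural values reported in the abstract
# (treated-fiber composite versus neat PLA).
def percent_change(composite, neat):
    return 100.0 * (composite - neat) / neat

properties = {
    "flexural strength (MPa)": (67.7, 83.3),
    "flexural modulus (GPa)": (3.2, 2.8),
}

for name, (composite, neat) in properties.items():
    print(f"{name}: {percent_change(composite, neat):+.1f}% vs neat PLA")
```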

Keywords: beer bran, characterization, green composite, polylactic acid, surface treatment

Procedia PDF Downloads 133
536 Integrated Coastal Management for the Sustainable Development of Coastal Cities: The Case of El-Mina, Tripoli, Lebanon

Authors: G. Ghamrawi, Y. Abunnasr, M. Fawaz, S. Yazigi

Abstract:

Coastal cities are constantly exposed to environmental degradation and economic regression fueled by rapid and uncontrolled urban growth as well as continuous resource depletion. This is the case of the city of El-Mina in Tripoli (Lebanon), where a lack of awareness of the need to preserve social, ecological, and historical assets, coupled with increasing development pressures, is threatening the socioeconomic status of the city's residents, the quality of life, and accessibility to the coast. To address these challenges, a holistic coastal urban design and planning approach was developed to analyze the environmental, political, legal, and socioeconomic context of the city. This approach investigates the potential of balancing urban development with the protection and enhancement of cultural, ecological, and environmental assets under an integrated coastal zone management (ICZM) approach. The analysis of El-Mina's different sectors adopted several tools, including direct field observation, interviews with stakeholders, and analysis of available data, historical maps, and previously proposed projects. The findings from the analysis were mapped and graphically represented, allowing the recognition of character zones that become the design intervention units. Consequently, the thesis proposes an urban, city-scale intervention that identifies six character zones (the historical fishing port, Abdul Wahab island, the abandoned Port Said, Hammam el Makloub, the sand beach, and the new developable area) and proposes context-specific design interventions that capitalize on the main characteristics of each zone. Moreover, the intervention builds on the institutional framework of ICZM as well as other studies previously conducted for the coast and adopts nature-based solutions with hybrid systems to provide better environmental design solutions for developing the coast. This enables the realization of an inclusive, well-connected shoreline with easy and free access to the sea, an active local economy, and an improved urban environment.

Keywords: blue green infrastructure, coastal cities, hybrid solutions, integrated coastal zone management, sustainable development, urban planning

Procedia PDF Downloads 156
535 A Xenon Mass Gauging through Heat Transfer Modeling for Electric Propulsion Thrusters

Authors: A. Soria-Salinas, M.-P. Zorzano, J. Martín-Torres, J. Sánchez-García-Casarrubios, J.-L. Pérez-Díaz, A. Vakkada-Ramachandran

Abstract:

The current state-of-the-art methods for mass gauging of Electric Propulsion (EP) propellants in microgravity conditions rely on external measurements taken at the surface of the tank. The tanks are operated under a constant thermal duty cycle to store the propellant within a pre-defined temperature and pressure range. We demonstrate, using computational fluid dynamics (CFD) simulations, that the heat transfer within the pressurized propellant generates temperature and density anisotropies. This challenges the standard mass gauging methods, which rely on time-varying skin temperatures and pressures. We observe that the domes of the tanks are prone to overheating and that, a long time after the heaters of the thermal cycle are switched off, the system reaches a quasi-equilibrium state with a more uniform density. We propose a new gauging method, which we call the Improved PVT method, based on universal physics and thermodynamics principles, existing TRL-9 technology, and telemetry data. This method uses as inputs only the temperature and pressure readings of sensors externally attached to the tank; these sensors can operate during the nominal thermal duty cycle. The improved PVT method shows little sensitivity to pressure sensor drifts, which become critical towards the end of life of the missions, as well as little sensitivity to systematic temperature errors. The retrieval method has been validated experimentally with CO2 in the gas and fluid states in a chamber that operates up to 82 bar within a nominal thermal cycle of 38 °C to 42 °C. The mass gauging error is shown to be lower than 1% of the mass at the beginning of life, assuming an initial tank load at 100 bar. In particular, for a pressure of about 70 bar, just below the critical pressure of CO2, the error of the mass gauging in the gas phase goes down to 0.1%, and for 77 bar, just above the critical point, the error of the mass gauging of the liquid phase is 0.6% of the initial tank load. This gauging method improves by a factor of 8 the accuracy of standard PVT retrievals that use look-up tables with tabulated data from the National Institute of Standards and Technology.
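As a rough illustration of what a PVT-style retrieval involves (not the improved method itself), the sketch below estimates propellant mass from externally measured pressure and temperature via m = PVM/(ZRT); the tank volume, compressibility factors, and sensor readings are assumed placeholders.

```python
# Sketch of a PVT-style mass estimate from tank pressure and temperature.
# Tank volume, Z-factor table, and sensor readings are illustrative assumptions,
# not values from the improved method described above.
R = 8.314462618        # J/(mol K), universal gas constant
M_XENON = 0.131293     # kg/mol, molar mass of xenon

def propellant_mass(pressure_pa, temperature_k, tank_volume_m3, z_factor):
    """Estimate propellant mass (kg) from P, T, tank volume and compressibility Z."""
    moles = pressure_pa * tank_volume_m3 / (z_factor * R * temperature_k)
    return moles * M_XENON

# Placeholder compressibility factors for xenon near room temperature; a real
# retrieval would use an equation of state or NIST-style tabulated data.
Z_TABLE = {(7.0e6, 313.0): 0.45, (5.0e6, 313.0): 0.62}

p, t, v = 7.0e6, 313.0, 0.1   # 70 bar, 40 degC, 100-litre tank (assumed)
z = Z_TABLE[(p, t)]
print(f"Estimated xenon mass: {propellant_mass(p, t, v, z):.1f} kg")
```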

Keywords: electric propulsion, mass gauging, propellant, PVT, xenon

Procedia PDF Downloads 345
534 Screening for Non-hallucinogenic Neuroplastogens as Drug Candidates for the Treatment of Anxiety, Depression, and Posttraumatic Stress Disorder

Authors: Jillian M. Hagel, Joseph E. Tucker, Peter J. Facchini

Abstract:

With the aim of establishing a holistic approach for the treatment of central nervous system (CNS) disorders, we are pursuing a drug development program that is rapidly progressing through the discovery and characterization phases. The drug candidates identified in this program are referred to as neuroplastogens owing to their ability to mediate neuroplasticity, which can benefit patients suffering from anxiety, depression, or posttraumatic stress disorder. These and other related neuropsychiatric conditions are associated with the onset of neuronal atrophy, defined as a reduction in the number and/or productivity of neurons. The stimulation of neuroplasticity increases the connectivity between neurons and promotes the restoration of healthy brain function. We have synthesized a substantial catalogue of proprietary indolethylamine derivatives based on the general structures of serotonin (5-hydroxytryptamine) and psychedelic molecules such as N,N-dimethyltryptamine (DMT) and psilocin (4-hydroxy-DMT) that function as neuroplastogens. A primary objective of our screening protocol is the identification of derivatives associated with a significant reduction in hallucination, allowing administration of the drug at a dose that induces neuroplasticity and triggers other efficacious outcomes in the treatment of targeted CNS disorders without causing a psychedelic response in the patient. Both neuroplasticity and hallucination are associated with engagement of the 5-HT2A receptor, requiring drug candidates that couple differentially to these two outcomes at the molecular level. We use novel, proprietary artificial intelligence algorithms to predict the mode of binding to the 5-HT2A receptor, which has been shown to correlate with the hallucinogenic response. Hallucination is tested using the mouse head-twitch response model, whereas mouse marble-burying and sucrose preference assays are used to evaluate anxiolytic and antidepressive potential. Neuroplasticity is assayed using dendritic outgrowth assays and cell-based ELISA analysis. Pharmacokinetics and additional receptor-binding analyses also contribute to the selection of lead candidates. A summary of the program is presented.

Keywords: neuroplastogen, non-hallucinogenic, drug development, anxiety, depression, PTSD, indolethylamine derivatives, psychedelic-inspired, 5-HT2A receptor, computational chemistry, head-twitch response behavioural model, neurite outgrowth assay

Procedia PDF Downloads 138