Search results for: natural radionuclides
914 Geomatic Techniques to Filter Vegetation from Point Clouds
Authors: M. Amparo Núñez-Andrés, Felipe Buill, Albert Prades
Abstract:
More and more frequently, geomatics techniques such as terrestrial laser scanning or digital photogrammetry, either terrestrial or from drones, are being used to obtain digital terrain models (DTM) for the monitoring of geological phenomena that cause natural disasters, such as landslides, rockfalls, and debris flows. One of the main multitemporal analyses developed from these models is the quantification of volume changes on slopes and hillsides, whether caused by erosion, fall, or land movement in the source area or by sedimentation in the deposition zone. To carry out this task, it is necessary to filter from the point clouds all those elements that do not belong to the slopes. Among these elements, vegetation stands out, as it is the one with the greatest presence and the most constant change, both seasonal and daily, since it is affected by factors such as wind. One of the best-known indices to detect vegetation on an image is the NDVI (Normalized Difference Vegetation Index), which is obtained from the combination of the infrared and red channels and therefore requires a multispectral camera. These cameras are generally of lower resolution than conventional RGB cameras, while their cost is much higher. Therefore, we have to look for alternative indices based on RGB. In this communication, we present the results obtained in the Georisk project (PID2019‐103974RB‐I00/MCIN/AEI/10.13039/501100011033) using the GLI (Green Leaf Index) and ExG (Excess Green) indices, as well as the change to the Hue-Saturation-Value (HSV) color space, with the H coordinate being the one that provides the most information for vegetation filtering. These filters are applied both to the images, creating binary masks to be used when applying the SfM algorithms, and to the point cloud obtained directly by the photogrammetric process without any previous filter, or to the one obtained by TLS (Terrestrial Laser Scanning).
In this last case, we have also worked with a Riegl VZ400i sensor that allows the reception, as in aerial LiDAR, of several returns of the signal, information that can be used for classification of the point cloud. After applying all the techniques in different locations, the results show that the color-based filters allow correct filtering in those areas where the presence of shadows is not excessive and there is a contrast between the color of the slope lithology and the vegetation. As noted above, in the case of the HSV color space it is the H coordinate that responds best for this filtering. Finally, the use of the various returns of the TLS signal allows filtering with some limitations.
Keywords: RGB index, TLS, photogrammetry, multispectral camera, point cloud
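As a quick illustration of the RGB-only cues named in this abstract (GLI, ExG, and the HSV hue channel), the sketch below computes all three from a float RGB image with NumPy. This is a generic reconstruction of the standard index formulas, not the authors' implementation; function and variable names are our own.

```python
import numpy as np

def rgb_vegetation_indices(rgb):
    """Compute GLI, ExG, and the HSV hue channel from an RGB image.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # Green Leaf Index: (2G - R - B) / (2G + R + B)
    gli = (2 * g - r - b) / np.clip(2 * g + r + b, 1e-9, None)

    # Excess Green on chromatic coordinates: (2g - r - b) with r+g+b normalized
    total = np.clip(r + g + b, 1e-9, None)
    exg = (2 * g - r - b) / total

    # Hue channel of the HSV transform, in degrees [0, 360)
    cmax = rgb.max(axis=-1)
    delta = cmax - rgb.min(axis=-1)
    hue = np.zeros_like(cmax)
    safe = delta > 0                       # achromatic pixels keep hue = 0
    rmax = safe & (cmax == r)
    gmax = safe & (cmax == g) & ~rmax
    bmax = safe & ~rmax & ~gmax
    hue[rmax] = ((g - b)[rmax] / delta[rmax]) % 6
    hue[gmax] = (b - r)[gmax] / delta[gmax] + 2
    hue[bmax] = (r - g)[bmax] / delta[bmax] + 4
    hue *= 60.0

    return gli, exg, hue
```

A binary vegetation mask of the kind used before SfM could then be obtained by thresholding, e.g. `gli > 0.1`, with the threshold tuned per site.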
Procedia PDF Downloads 154
913 Kinetic Energy Recovery System Using Spring
Authors: Mayuresh Thombre, Prajyot Borkar, Mangirish Bhobe
Abstract:
New advancements in technology and the ever-growing demands of civilization are putting huge pressure on natural fuel resources, and these resources are under constant threat to their sustainability. To get the best out of an automobile, the optimum balance between performance and fuel economy is important. In the present state of the art, either of these two aspects is prioritized during the design and development process at the expense of the other, since an increase in fuel economy leads to a decrement in performance and vice versa. In-depth observation of vehicle dynamics shows that a large amount of energy is lost during braking, and likewise a large amount of fuel is consumed to reclaim the initial state; this leads to lower fuel efficiency for the same performance. Current use of Kinetic Energy Recovery Systems is limited to sports vehicles because of the higher cost of such systems. They are also temporary in nature, as power can be extracted only during a small time duration, and the use of superior parts leads to high cost, which results in a concentration on performance only while neglecting fuel economy. In this paper, a Kinetic Energy Recovery System for storing power and then using it while accelerating is discussed. The major storage element in this system is a flat spiral spring that stores energy by compression and torsion. The use of a spring ensures the permanent storage of energy until used by the driver, unlike present mechanical regeneration systems in which the stored energy decreases with time and is eventually lost. A combination of internal gears and spur gears is used in order to make the energy release uniform, which leads to safe usage. The system can be used to improve fuel efficiency by assisting in overcoming the vehicle’s inertia after braking or to provide instant acceleration whenever required by the driver.
The performance characteristics of the system, including response time, mechanical efficiency, and overall increase in efficiency, are demonstrated. This technology makes the KERS (Kinetic Energy Recovery System) more flexible and economical, allowing specific applications while at the same time increasing the time frame and ease of usage.
Keywords: electric control unit, energy, mechanical KERS, planetary gear system, power, smart braking, spiral spring
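The energy balance behind this concept can be sketched in a few lines: the kinetic energy recovered on braking is E = ½mv², and a torsion (spiral) spring of stiffness k stores E = ½kθ² at wind-up angle θ. The function below sizes the required wind-up angle; all numerical values in the example are hypothetical, not from the paper.

```python
def spiral_spring_kers(mass_kg, speed_ms, k_nm_per_rad):
    """Back-of-the-envelope sizing for a spiral-spring KERS.

    Kinetic energy captured on braking: E = 0.5 * m * v^2  (joules).
    A torsion spring of stiffness k stores E = 0.5 * k * theta^2,
    so the required wind-up angle is theta = sqrt(2 * E / k).
    Returns (energy in J, wind-up angle in rad).
    """
    energy_j = 0.5 * mass_kg * speed_ms ** 2
    theta_rad = (2.0 * energy_j / k_nm_per_rad) ** 0.5
    return energy_j, theta_rad
```

For instance, with an assumed 250 kg effective mass braking from 30 km/h (≈8.3 m/s), the spring and gear train must absorb roughly 8.7 kJ; the gear reduction then trades wind-up angle against spring stiffness.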
Procedia PDF Downloads 201
912 Development and Nutritional Evaluation of Sorghum Flour-Based Crackers Enriched with Bioactive Tomato Processing Residue
Authors: Liana Claudia Salanță, Anca Corina Fărcaș
Abstract:
Valorization of agro-industrial by-products offers significant economic and environmental advantages. This study investigates the transformation of tomato processing residues into value-added products, contributing to waste reduction and promoting a circular, sustainable economy. Specifically, the development of sorghum flour-based crackers enriched with tomato waste powder targets the dietary requirements of individuals with celiac disease and diabetes, evaluating their nutritional and sensory properties. Tomato residues were obtained from Roma-Spania tomatoes and processed into powder through drying and grinding. The bioactive compounds, including carotenoids, lycopene, and polyphenols, were quantified using established analytical methods. Formulation of the crackers involved optimizing the incorporation of tomato powder into sorghum flour. Subsequently, their nutritional and sensory attributes were assessed. The tomato waste powder demonstrated considerable bioactive potential, with total carotenoid content measured at 66 mg/100g, lycopene at 52.61 mg/100g, and total polyphenols at 463.60 mg GAE/100g. Additionally, the crackers with a 30% powder addition exhibited the highest concentration of polyphenols. Consequently, this sample also demonstrated a high antioxidant activity of 15.04% inhibition of DPPH radicals. Nutritionally, the crackers showed a 30% increase in fiber content and a 25% increase in protein content compared to standard gluten-free products. Sensory evaluation indicated positive consumer acceptance, with an average score of 8 out of 10 for taste and 7.5 out of 10 for color, attributed to the natural pigments from tomato waste. This innovative approach highlights the potential of tomato by-products in creating nutritionally enhanced gluten-free foods. Future research should explore the long-term stability of these bioactive compounds in finished products and evaluate the scalability of this process for industrial applications. 
Integrating such sustainable practices can significantly contribute to waste reduction and the development of functional foods.
Keywords: tomato waste, circular economy, bioactive compounds, sustainability, health benefits
Procedia PDF Downloads 35
911 An Overview of the Porosity Classification in Carbonate Reservoirs and Their Challenges: An Example of Macro-Microporosity Classification from Offshore Miocene Carbonate in Central Luconia, Malaysia
Authors: Hammad T. Janjuhah, Josep Sanjuan, Mohamed K. Salah
Abstract:
Biological and chemical activities in carbonates are responsible for the complexity of the pore system. Primary porosity is generally of natural origin, while secondary porosity results from chemical reactivity through diagenetic processes. To understand the integrated part of hydrocarbon exploration, it is necessary to understand the carbonate pore system. However, current porosity classification schemes are limited in their ability to adequately predict the petrophysical properties of different reservoirs of various origins and depositional environments. Rock classification provides a descriptive method for explaining the lithofacies but makes no significant contribution to the application of porosity and permeability (poro-perm) correlation. The Central Luconia carbonate system (Malaysia) represents a good example of pore complexity (in terms of nature and origin) mainly related to diagenetic processes which have altered the original reservoir. For quantitative analysis, 32 high-resolution images of each thin section were taken using transmitted light microscopy. The quantification of grains, matrix, cement, and macroporosity (pore types) was achieved using petrographic analysis of thin sections and FESEM images. The point counting technique was used to estimate the amount of macroporosity from thin sections, which was then subtracted from the total porosity to derive the microporosity. The quantitative observation of thin sections revealed that mouldic porosity (macroporosity) is the dominant porosity type present, whereas microporosity accounts for 40 to 50% of the total porosity. It has been shown that these Miocene carbonates contain a significant amount of microporosity, which significantly complicates the estimation and production of hydrocarbons. Neglecting its impact can increase uncertainty in estimating hydrocarbon reserves.
Due to the diversity of geological parameters, the application of existing porosity classifications does not allow a better understanding of the poro-perm relationship. However, classification can be improved by including pore types and pore structures, dividing them into macro- and microporosity. Such studies of microporosity identification and classification now represent a major concern in limestone reservoirs around the world.
Keywords: overview of porosity classification, reservoir characterization, microporosity, carbonate reservoir
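The point-counting arithmetic described in this abstract (macroporosity from thin-section counts, microporosity as the remainder of total porosity) reduces to a two-line calculation; the sketch below is an illustration of that workflow, with hypothetical numbers rather than the study's data.

```python
def split_porosity(total_porosity_pct, macropore_points, total_points):
    """Point-counting split of porosity.

    Macroporosity is the percentage of thin-section count points landing
    in visible (e.g. mouldic) pores; microporosity is total porosity
    minus macroporosity. Returns (macro_pct, micro_pct).
    """
    macro_pct = 100.0 * macropore_points / total_points
    micro_pct = total_porosity_pct - macro_pct
    return macro_pct, micro_pct
```

For example, 45 of 300 count points in macropores against a 30% total (plug-measured) porosity gives 15% macro and 15% micro, i.e. microporosity is half of the total, consistent with the 40-50% share reported here.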
Procedia PDF Downloads 154
910 High Performance Liquid Cooling Garment (LCG) Using ThermoCore
Authors: Venkat Kamavaram, Ravi Pare
Abstract:
Modern warfighters experience extreme environmental conditions in many of their operational and training activities. In temperatures exceeding 95°F, the body’s temperature regulation can no longer cool through convection and radiation. In this case, the only cooling mechanism is evaporation. However, evaporative cooling is often compromised by excessive humidity. Natural cooling mechanisms can be further compromised by clothing and protective gear, which trap hot air and moisture close to the body. Creating an efficient heat extraction apparel system that is also lightweight without hindering dexterity or mobility of personnel working in extreme temperatures is a difficult technical challenge and one that needs to be addressed to increase the probability for the future success of the US military. To address this challenge, Oceanit Laboratories, Inc. has developed and patented a Liquid Cooled Garment (LCG) more effective than any on the market today. Oceanit’s LCG is a form-fitting garment with a network of thermally conductive tubes that extracts body heat and can be worn under all authorized and chemical/biological protective clothing. Oceanit specifically designed and developed ThermoCore®, a thermally conductive polymer, for use in this apparel, optimizing the product for thermal conductivity, mechanical properties, manufacturability, and performance temperatures. Thermal Manikin tests were conducted in accordance with the ASTM test method, ASTM F2371, Standard Test Method for Measuring the Heat Removal Rate of Personal Cooling Systems Using a Sweating Heated Manikin, in an environmental chamber using a 20-zone sweating thermal manikin. Manikin test results have shown that Oceanit’s LCG provides significantly higher heat extraction under the same environmental conditions than the currently fielded Environmental Control Vest (ECV) while at the same time reducing the weight. 
Oceanit’s LCG vests performed nearly 30% better in extracting body heat while weighing 15% less than the ECV. No cooling garments on the market provide the same thermal extraction performance, form factor, and reduced weight as Oceanit’s LCG. The two cooling garments that are commercially available and most commonly used are the Environmental Control Vest (ECV) and the Microclimate Cooling Garment (MCG).
Keywords: thermally conductive composite, tubing, garment design, form fitting vest, thermocore
Procedia PDF Downloads 115
909 The Effects of Ellagic Acid on Rat Lungs Induced by Tobacco Smoke
Authors: Nalan Kaya, Gonca Ozan, Elif Erdem, Neriman Colakoglu, Enver Ozan
Abstract:
The toxic effects of tobacco smoke exposure have been documented in numerous studies. Ellagic acid (EA), 2,3,7,8-tetrahydroxy[1]benzopyrano[5,4,3-cde][1]benzopyran-5,10-dione, a natural phenolic lactone compound, is found in various plant species including pomegranate, grape, strawberries, blackberries, and raspberries. Similar to other effective antioxidants, EA can safely interact with free radicals and reduces oxidative stress through the phenolic ring and hydroxyl components in its structure. The aim of the present study was to examine the protective effects of ellagic acid against oxidative damage in the lung tissues of rats induced by tobacco smoke. Twenty-four adult male (8 weeks old) Sprague-Dawley rats were divided randomly into 4 equal groups: group I (control), group II (tobacco smoke), group III (tobacco smoke + corn oil), and group IV (tobacco smoke + ellagic acid). The rats in groups II, III, and IV were exposed to tobacco smoke for 1 hour twice a day for 12 weeks. In addition to tobacco smoke exposure, 12 mg/kg ellagic acid (dissolved in corn oil) was administered to the rats in group IV by oral gavage. An equal amount of the corn oil used to dissolve the ellagic acid was administered by oral gavage to the rats in group III. At the end of the experimental period, rats were decapitated. Lung tissues and blood samples were taken. The lung slides were stained by H&E and Masson’s trichrome methods, and galectin-3 staining was also applied. Biochemical analyses were performed. Vascular congestion and inflammatory cell infiltration in the pulmonary interstitium, thickening of the interalveolar septum, cytoplasmic vacuolation in some macrophages, and galectin-3 positive cells were observed in the histological examination of the tobacco smoke group. In addition to these findings, hemorrhage in the pulmonary interstitium and bronchial lumen was detected in the tobacco smoke + corn oil group.
Reduced vascular congestion and hemorrhage in the pulmonary interstitium, and only rare thickening of the interalveolar septum, were observed in the tobacco smoke + EA group. Compared to group I, the group II GSH level was significantly decreased and the MDA level significantly increased. Nevertheless, the group IV GSH level was higher and the MDA level lower than in group II. The results indicate that ellagic acid could protect lung tissue from the harmful effects of tobacco smoke.
Keywords: ellagic acid, lung, rat, tobacco smoke
Procedia PDF Downloads 214
908 Biomimetic Systems to Reveal the Action Mode of Epigallocatechin-3-Gallate in Lipid Membrane
Authors: F. Pires, V. Geraldo, O. N. Oliveira Jr., M. Raposo
Abstract:
Catechins are powerful antioxidants with attractive properties useful for tumor therapy. Considering their antioxidant activity, these molecules can act as scavengers of reactive oxygen species (ROS), alleviating the damage to the cell membrane induced by oxidative stress. The complexity and dynamic nature of the cell membrane compromise the analysis of the biophysical interactions between a drug and the cell membrane and restrict the transport or uptake of the drug by intracellular targets. To avoid this complexity, we used biomimetic systems such as liposomes and Langmuir monolayers to study the interaction between catechin and membranes at the molecular level. Liposomes were formed after the dispersion of anionic 1,2-dipalmitoyl-sn-glycero-3-[phospho-rac-(1-glycerol)] (sodium salt) (DPPG) phospholipids in an aqueous solution, which mimics the arrangement of lipids in natural cell membranes and allows the entrapment of catechins. Langmuir monolayers were formed by dropping amphiphilic molecules (DPPG phospholipids) dissolved in an organic solvent onto the water surface. In this work, we mixed epigallocatechin-3-gallate (EGCG) with DPPG liposomes and exposed them to ultraviolet radiation in order to evaluate the antioxidant potential of these molecules against oxidative stress induced by radiation. The presence of EGCG in the mixture decreased the rate of lipid peroxidation, proving that EGCG protects membranes through the quenching of reactive oxygen species. Considering the high number of hydroxyl (OH) groups in the structure of EGCG, a possible mechanism by which these molecules interact with the membrane is hydrogen bonding. We also investigated the effect of EGCG at various concentrations on DPPG Langmuir monolayers. The surface pressure isotherms and polarization-modulated infrared reflection-absorption spectroscopy (PM-IRRAS) results corroborate the absorbance results obtained on the liposome model, showing that EGCG interacts with the polar heads of the monolayers.
This study elucidates the physiological action of EGCG incorporated in lipid membranes. These results are also relevant for the improvement of current protocols used to incorporate catechins in drug delivery systems.
Keywords: catechins, lipid membrane, anticancer agent, molecular interactions
Procedia PDF Downloads 233
907 Relation Between Traffic Mix and Traffic Accidents in a Mixed Industrial Urban Area
Authors: Michelle Eliane Hernández-García, Angélica Lozano
Abstract:
Studies of traffic accidents usually contemplate the relation between factors such as the type of vehicle, its operation, and the road infrastructure. Traffic accidents can be explained by different factors, which have greater or lower relevance. Two zones are studied: a mixed industrial zone and its extended zone. The first zone has mainly residential (57%) and industrial (23%) land uses. Trucks travel mainly on the roads where industries are located. Four sensors give information about traffic and speed on the main roads. The extended zone (which includes the first zone) has mainly residential (47%) and mixed residential (43%) land use, and just 3% industrial use. Its traffic mix is composed mainly of non-trucks, and 39 traffic and speed sensors are located on the main roads. The traffic mix in a mixed land use zone could be related to traffic accidents. To understand this relation, it is required to identify the elements of the traffic mix which are linked to traffic accidents. Models that attempt to explain what factors are related to traffic accidents have faced multiple methodological problems in obtaining robust databases. Poisson regression models are used to explain the accidents. The objective of the Poisson analysis is to estimate a vector of coefficients that provides an estimate of the natural logarithm of the mean number of accidents per period; this estimate is achieved by standard maximum likelihood procedures. For the estimation of the relation between traffic accidents and the traffic mix, the database comprises eight variables, with 17,520 observations and six vectors.
In the model, the dependent variable is the occurrence or non-occurrence of accidents, and the vectors that seek to explain it correspond to the vehicle classes C1, C2, C3, C4, C5, and C6, standing, respectively, for car; microbus and van; bus; unitary trucks (2 to 6 axles); articulated trucks (3 to 6 axles); and bi-articulated trucks (5 to 9 axles); in addition, there is a vector for the average speed of the traffic mix. A Poisson model is applied, using a logarithmic link function and a Poisson family. For the first zone, the Poisson model shows a positive relation between traffic accidents and C6, average speed, C3, C2, and C1 (in decreasing order). The analysis of the coefficients shows a strong relation with bi-articulated trucks and buses (C6 and C3), indicating an important participation of freight trucks. For the expanded zone, the Poisson model shows a positive relation between traffic accidents and average speed, bi-articulated trucks (C6), and microbuses and vans (C2). The coefficients obtained in both Poisson models show a higher relation between freight trucks and traffic accidents in the first industrial zone than in the expanded zone.
Keywords: freight transport, industrial zone, traffic accidents, traffic mix, trucks
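The log-link Poisson fit with maximum likelihood described in this abstract can be sketched with the standard iteratively reweighted least squares (IRLS) algorithm. The implementation below is a generic illustration of that estimation procedure (not the authors' code; in practice a GLM routine such as statsmodels' would be used), on a design matrix whose columns would correspond to the C1-C6 and speed vectors.

```python
import numpy as np

def fit_poisson(X, y, iterations=50):
    """Fit a log-link Poisson regression by IRLS.

    X: (n, p) design matrix including an intercept column.
    y: (n,) nonnegative counts (here, accident occurrences).
    Returns beta such that E[y] = exp(X @ beta).
    """
    beta = np.zeros(X.shape[1])
    for _ in range(iterations):
        mu = np.exp(X @ beta)            # current mean estimate
        z = X @ beta + (y - mu) / mu     # working response
        w = mu                           # Poisson working weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta
```

On a saturated two-group design the fitted means reproduce the group means exactly, which makes the routine easy to sanity-check.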
Procedia PDF Downloads 130
906 Propagation of Simmondsia chinensis (Link) Schneider by Stem Cuttings
Authors: Ahmed M. Eed, Adam H. Burgoyne
Abstract:
Jojoba (Simmondsia chinensis (Link) Schneider) is a desert shrub which tolerates saline, alkaline soils and drought. The seeds contain a characteristic liquid wax of economic importance in industry as a machine lubricant and in cosmetics. A major problem in seed propagation is that jojoba is a dioecious plant whose sex is not easily determined prior to flowering (3-4 years from germination). To overcome this, asexual propagation using vegetative methods such as cuttings can be used. This research was conducted to find out the effect of different plant growth regulators (PGRs) and rooting media on jojoba rhizogenesis. An experiment was carried out in a Factorial Completely Randomized Design (FCRD) with three replications, each with sixty cuttings per replication, in the fiberglass house of the Natural Jojoba Corporation in Yemen. The rooting media used were peat moss + perlite + vermiculite (1:1:1), peat moss + perlite (1:1), and peat moss + sand (1:1). The plant materials used were semi-hardwood cuttings of jojoba plants with a length of 15 cm. The cuttings were collected in June of 2012 and 2013 from the sub-terminal growth of the mother plants of the Amman farm and introduced to Yemen. They were wounded, treated with indole butyric acid (IBA), α-naphthalene acetic acid (NAA), or indole-3-acetic acid (IAA), all at 4000 ppm (parts per million), and cultured on the different rooting media under intermittent mist propagation conditions. IBA gave a significantly higher percentage of rooting (66.23%) compared to NAA and IAA in all media used, whereas the lowest percentage of rooting (5.33%) was recorded with IAA in the medium consisting of peat moss and sand (1:1). No significant difference was observed among the PGRs and rooting media in respect of root length.
The maximum number of roots was noticed in the media consisting of peat moss, perlite, and vermiculite (1:1:1); peat moss and perlite (1:1); and peat moss and sand (1:1) using IBA, NAA, and IBA, respectively. The interaction among rooting media was statistically significant with respect to rooting percentage. Similarly, the interactions among PGRs were significant in terms of rooting percentage and root length. The results demonstrate the suitability of propagating jojoba plants by semi-hardwood cuttings.
Keywords: cutting, IBA, Jojoba, propagation, rhizogenesis
Procedia PDF Downloads 342
905 Hybrid Fermentation System for Improvement of Ergosterol Biosynthesis
Authors: Alexandra Tucaliuc, Alexandra C. Blaga, Anca I. Galaction, Lenuta Kloetzer, Dan Cascaval
Abstract:
Ergosterol (ergosta-5,7,22-trien-3β-ol), also known as provitamin D2, is the precursor of vitamin D2 (ergocalciferol), because it is converted under UV radiation to this vitamin. The natural sources of ergosterol are mainly the yeasts (Saccharomyces sp., Candida sp.), but it can also be found in fungi (Claviceps sp.) or plants (orchids). In yeast cells, ergosterol is accumulated in membranes, especially in free form in the plasma membrane, but also as esters with fatty acids in membrane lipids. The chemical synthesis of ergosterol is not an efficient production method; in these circumstances, the most attractive alternative for producing ergosterol at larger scale remains aerobic fermentation using S. cerevisiae on glucose or on by-products from agriculture or the food industry as substrates, in batch or fed-batch operating systems. The aim of this work is to analyze comparatively the influence of aeration efficiency on ergosterol production by S. cerevisiae in batch and fed-batch fermentations, by considering different levels of mixing intensity, aeration rate, and n-dodecane concentration. The effects of the studied factors are quantitatively described by means of the mathematical correlations proposed for each of the two fermentation systems, valid both for the absence and the presence of the oxygen-vector inside the broth. The experiments were carried out in a laboratory stirred bioreactor with computer-controlled and recorded parameters. n-Dodecane was used as the oxygen-vector, and the ergosterol content inside the yeast cells was considered at the fermentation time corresponding to the maximum ergosterol concentration: 9 hrs for the batch process and 20 hrs for the fed-batch one. Ergosterol biosynthesis is strongly dependent on the dissolved oxygen concentration. The hydrocarbon concentration exhibits a significant influence on ergosterol production, mainly by accelerating the oxygen transfer rate.
Regardless of n-dodecane addition, by maintaining the glucose concentration at a constant level in the fed-batch process, the amount of ergosterol accumulated in the yeast cells was almost tripled. In the presence of the hydrocarbon, the ergosterol concentration increased by over 50%. The oxygen-vector concentration corresponding to the maximum level of ergosterol depends mainly on the biomass concentration, due to its negative influence on broth viscosity and on the interfacial phenomenon of air bubble blockage through the adsorption of hydrocarbon droplet-yeast cell associations. Therefore, for the batch process, the maximum ergosterol amount was reached at 5% vol. n-dodecane, while for the fed-batch process at 10% vol. hydrocarbon.
Keywords: bioreactors, ergosterol, fermentation, oxygen-vector
Procedia PDF Downloads 189
904 Adversarial Attacks and Defenses on Deep Neural Networks
Authors: Jonathan Sohn
Abstract:
Deep neural networks (DNNs) have shown state-of-the-art performance for many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks; these attacks aim to alter the results of deep neural networks by modifying the inputs slightly. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying adversarial attacks and defenses on DNNs for image classification. Two types of adversarial attacks are studied: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate the input images into different categories. An adversarial attack slightly alters the image to move it over the decision boundary, causing the DNN to misclassify the image. The FGSM attack obtains the gradient of the loss with respect to the image and updates the image once based on this gradient to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also another type of attack called the targeted attack, which is designed to make the machine classify an image into a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training: instead of training the neural network only with clean examples, we explicitly let the neural network learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively.
If we utilize FGSM training as a defense method, the classification accuracy greatly improves from 39.50% to 92.31% for FGSM attacks and from 34.01% to 75.63% for PGD attacks. To further improve the classification accuracy under adversarial attacks, we can also use the stronger PGD training method. PGD training improves the accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that both FGSM and PGD training do not affect the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. PGD attacks and defenses are overall significantly more effective than FGSM methods.
Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning
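The FGSM and PGD update rules described in this abstract can be shown in a few lines. The sketch below uses a logistic classifier as a stand-in for a DNN, because its input gradient has a closed form; for a real network the gradient of the loss with respect to the input would come from backpropagation (e.g. an autograd framework). All names and parameters are illustrative, not from the paper.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One FGSM step on a logistic classifier: perturb x by eps in the
    direction of the sign of d(loss)/dx, where the loss is binary
    cross-entropy and the model is p = sigmoid(w.x + b)."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted P(class = 1)
    grad_x = (p - y) * w                    # closed-form input gradient
    return x + eps * np.sign(grad_x)

def pgd(x, y, w, b, eps, alpha, steps):
    """PGD: repeated small FGSM steps of size alpha, each projected back
    into the eps-ball around the original input."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = fgsm(x_adv, y, w, b, alpha)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

Adversarial (FGSM or PGD) training then amounts to generating such perturbed inputs on the fly during training and minimizing the loss on them instead of, or alongside, the clean inputs.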
Procedia PDF Downloads 195
903 Effect of Damper Combinations in Series or Parallel on Structural Response
Authors: Ajay Kumar Sinha, Sharad Singh, Anukriti Sinha
Abstract:
Passive energy dissipation method for earthquake protection of structures is undergoing developments for improved performance. Combined use of different types of damping mechanisms has shown positive results in the near past. Different supplemental damping methods like viscous damping, frictional damping and metallic damping are being combined together for optimum performance. The conventional method of connecting passive dampers to structures is a parallel connection between the damper unit and structural member. Researchers are investigating coupling effect of different types of dampers. The most popular choice among the research community is coupling of viscous dampers and frictional dampers. The series and parallel coupling of these damping units are being studied for relative performance of the coupled system on response control of structures against earthquake. In this paper an attempt has been made to couple Fluid Viscous Dampers and Frictional Dampers in series and parallel to form a single unit of damping system. The relative performance of the coupled units has been studied on three dimensional reinforced concrete framed structure. The current theories of structural dynamics in practice for viscous damping and frictional damping have been incorporated in this study. The time history analysis of the structural system with coupled damper units, uncoupled damper units as well as of structural system without any supplemental damping has been performed in this study. The investigations reported in this study show significant improved performance of coupled system. A higher natural frequency of the system outside the forcing frequency has been obtained for structural systems with coupled damper units as against the other cases. The structural response of the structure in terms of storey displacement and storey drift show significant improvement for the case with coupled damper units as against the cases with uncoupled units or without any supplemental damping. 
The results are promising in terms of the improved response of the structure with coupled damper units. Further investigations into the comparative performance of the series and parallel coupled systems will be carried out to study the optimum behavior of these coupled systems for enhanced response control of structural systems.
Keywords: frictional damping, parallel coupling, response control, series coupling, supplemental damping, viscous damping
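The coupling arithmetic behind series versus parallel damper units can be illustrated for the idealized case of two linear viscous dampers (the frictional damper studied here is nonlinear, so this is only a first-order sketch of the combination rules, not the paper's model):

```python
def parallel_damping(c1, c2):
    """Parallel coupling: both dampers see the same velocity and their
    forces add, so the equivalent coefficient is the sum."""
    return c1 + c2

def series_damping(c1, c2):
    """Series coupling: both dampers carry the same force and their
    velocities add, so the equivalent coefficient combines harmonically."""
    return 1.0 / (1.0 / c1 + 1.0 / c2)
```

Note the asymmetry: a parallel pair is always stiffer (larger equivalent coefficient) than either damper alone, while a series pair is always softer than the weaker of the two, which is one reason the two arrangements shift the coupled system's effective properties in opposite directions.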
Procedia PDF Downloads 456
902 Artificial Habitat Mapping in Adriatic Sea
Authors: Annalisa Gaetani, Anna Nora Tassetti, Gianna Fabi
Abstract:
Hydroacoustic technology is an efficient tool for studying the marine environment: the most recent advances in artificial habitat mapping involve acoustic systems to investigate fish abundance, distribution, and behavior in specific areas. Along with detailed, high-coverage bathymetric mapping of the seabed, the high-frequency multibeam echosounder (MBES) offers the potential to detect the fine-scale distribution of fish aggregations, since it senses the seafloor and the water column at the same time. By surveying fish school distribution around artificial structures, MBES makes it possible to evaluate how their presence modifies the natural biological habitat over time in terms of fish attraction and abundance. In recent years, artificial habitat mapping campaigns have been carried out by CNR-ISMAR in the Adriatic Sea: fish assemblages aggregating at offshore gas platforms and artificial reefs have been systematically monitored with different kinds of methodologies. This work focuses on two case studies: a gas extraction platform installed at a depth of 80 m in the central Adriatic Sea, 30 miles off the coast of Ancona, and the concrete-and-steel artificial reef of Senigallia, deployed by CNR-ISMAR about 1.2 miles offshore at a depth of 11.2 m. Relating the MBES data (metrical dimensions of fish assemblages, shape, depth, density, etc.) to the results of other methodologies, such as experimental fishing surveys and underwater video cameras, it has been possible to characterize the biological assemblage attracted by the artificial structures, hypothesizing which species populate the investigated area and how they are spatially distributed around these structures.
Processing MBES bathymetric and water column data, 3D virtual scenes of the artificial habitats have been created, providing an intuitive depiction of their state and allowing their change over time to be evaluated in terms of dimensional characteristics and the depth disposition of fish schools. These MBES surveys play a leading part in the multi-year programs carried out by CNR-ISMAR with the aim of assessing potential biological changes linked to human activities.
Keywords: artificial habitat mapping, fish assemblages, hydroacoustic technology, multibeam echosounder
Procedia PDF Downloads 259
901 The Role of Artificial Intelligence in Creating Personalized Health Content for Elderly People: A Systematic Review Study
Authors: Mahnaz Khalafehnilsaz, Rozina Rahnama
Abstract:
Introduction: The elderly population is growing rapidly, and with this growth comes an increased demand for healthcare services. Artificial intelligence (AI) has the potential to revolutionize the delivery of healthcare services to the elderly population. In this study, the various ways in which AI is used to create health content for elderly people, and its transformative impact on the healthcare industry, are explored. Method: A systematic review of the literature was conducted to identify studies that have investigated the role of AI in creating health content specifically for elderly people. Several databases, including PubMed, Scopus, and Web of Science, were searched for relevant articles published between 2000 and 2022. The search strategy employed a combination of keywords related to AI, personalized health content, and the elderly. Studies that utilized AI to create health content for elderly individuals were included, while those that did not meet the inclusion criteria were excluded. A total of 20 articles that met the inclusion criteria were identified. Findings: The findings of this review highlight the diverse applications of AI in creating health content for elderly people. One significant application is the use of natural language processing (NLP), which involves the creation of chatbots and virtual assistants capable of providing personalized health information and advice to elderly patients. AI is also utilized in the field of medical imaging, where algorithms analyze medical images such as X-rays, CT scans, and MRIs to detect diseases and abnormalities. Additionally, AI enables the development of personalized health content for elderly patients by analyzing large amounts of patient data to identify patterns and trends that can inform healthcare providers in developing tailored treatment plans.
Conclusion: AI is transforming the healthcare industry by providing a wide range of applications that can improve patient outcomes and reduce healthcare costs. From creating chatbots and virtual assistants to analyzing medical images and developing personalized treatment plans, AI is revolutionizing the way healthcare is delivered to elderly patients. Continued investment in this field is essential to ensure that elderly patients receive the best possible care.
Keywords: artificial intelligence, health content, older adult, healthcare
Procedia PDF Downloads 66
900 A Critical Examination of the Iranian National Legal Regulation of the Ecosystem of Lake Urmia
Authors: Siavash Ostovar
Abstract:
The Iranian national law on the Ramsar Convention (officially the Convention on Wetlands of International Importance, especially as Waterfowl Habitat) was approved by the Senate and became law in 1974 after ratification by the National Council. There are other national laws aimed at the preservation of the environment in the country. However, Lake Urmia, which was declared a wetland of international importance under the Ramsar Convention in 1971 and designated a UNESCO Biosphere Reserve in 1976, is now on the brink of total disappearance, due mainly to climate change, water mismanagement, dam construction, and agricultural deficiencies. Lake Urmia is located in the northwestern corner of Iran. It is the third largest saltwater lake in the world and the largest lake in the Middle East. Locally, it is designated a National Park. It is, indeed, a unique lake both nationally and internationally. This study investigated how effective the national legal regulation of the ecosystem of Lake Urmia is in Iran. To do so, the Iranian national laws enforcing the Ramsar Convention in the country were examined, comprising three nationally established laws: (i) the five sets of laws for the programme of economic, social and cultural development of the Islamic Republic of Iran, (ii) the Iranian Penal Code, and (iii) the law of conservation, restoration and management of the country. Using black letter law methods, it was revealed that (i) regarding the national five sets of laws, the benchmark to force the implementation of the legislation and policies is not set clearly.
In other words, there is no clear guarantee that these legislations and policies will be enforced in cases of deviation and violation; (ii) the Penal Code falls short in defining environmental crimes, determining appropriate penalties for them, implementing those penalties appropriately, and providing precise monitoring and training programmes; (iii) regarding the law of conservation, restoration and management, implementation of this regulation has been deferred until the preparation, announcement, and approval of several categories of enactments and guidelines. In fact, this study used a national environmental catastrophe caused by the drying up of Lake Urmia as an occasion to direct attention to the weaknesses of the existing national rules and regulations. Finally, as we all depend on the natural world for our survival, this study recommends further research on every environmental issue, including Lake Urmia.
Keywords: conservation, environmental law, Lake Urmia, national laws, Ramsar Convention, water management, wetlands
Procedia PDF Downloads 201
899 Spatial Analysis as a Tool to Assess Risk Management in Peru
Authors: Josué Alfredo Tomas Machaca Fajardo, Jhon Elvis Chahua Janampa, Pedro Rau Lavado
Abstract:
A flood vulnerability index was developed for the Piura River watershed in northern Peru using principal component analysis (PCA) to assess flood risk. The official methodology for assessing risk from natural hazards in Peru was introduced in 1980 and proved effective for aiding complex decision-making. This method relies in part on decision-makers defining subjective correlations between variables to identify high-risk areas. While risk identification and the ensuing response activities benefit from a qualitative understanding of influences, this method does not take advantage of national and international data collection efforts, which can supplement our understanding of risk. Nor does it take advantage of broadly applied statistical methods such as PCA, which highlight the central indicators of vulnerability. Nowadays, information processing is much faster and allows for more objective decision-making tools, such as PCA. The approach presented here develops a tool to improve the current flood risk assessment in the Peruvian basin. The spatial analysis of census and other datasets provides a better understanding of current land occupation and the basin-wide distribution of services and human populations, a necessary step toward ultimately reducing flood risk in Peru. PCA allows a large number of variables to be reduced to a few factors covering the social, economic, physical, and environmental dimensions of vulnerability. Human settlement correlates with water availability, found mainly in rivers, so a comprehensive view of population location around the river basin is necessary to establish flood prevention policies. Grouping the territory into 5x5 km grid cells allows flood risk to be analyzed spatially rather than by political divisions.
The index was applied to the Peruvian region of Piura, where several flood events have occurred in recent years, making it one of the regions most affected during ENSO events in Peru. The analysis evidenced inequalities in access to basic services, such as water, electricity, internet, and sewage, between rural and urban areas.
Keywords: assess risk, flood risk, indicators of vulnerability, principal component analysis
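The PCA step described above can be sketched in a few lines. The snippet below uses synthetic data; the indicator columns and the weighting scheme (components weighted by their explained variance) are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical indicators per 5x5 km grid cell, e.g. access to water,
# electricity, sewage, and population density (columns are illustrative)
X = rng.random((200, 4))

# Standardize the indicators, then reduce them to a few components
Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
scores = pca.fit_transform(Z)

# One simple composite index: component scores weighted by the
# fraction of variance each component explains
weights = pca.explained_variance_ratio_
index = scores @ weights  # one vulnerability score per grid cell
```

Ranking the grid cells by this composite score then highlights the areas where social, economic, physical, and environmental vulnerability concentrate.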
Procedia PDF Downloads 186
898 AI Predictive Modeling of Excited State Dynamics in OPV Materials
Authors: Pranav Gunhal, Krish Jhurani
Abstract:
This study tackles the significant computational challenge of predicting excited state dynamics in organic photovoltaic (OPV) materials—a pivotal factor in the performance of solar energy solutions. Time-dependent density functional theory (TDDFT), though effective, is computationally prohibitive for larger and more complex molecules. As a solution, the research explores the application of transformer neural networks, a type of artificial intelligence (AI) model known for its superior performance in natural language processing, to predict excited state dynamics in OPV materials. The methodology involves a two-fold process. First, the transformer model is trained on an extensive dataset comprising over 10,000 TDDFT calculations of excited state dynamics from a diverse set of OPV materials. Each training example includes a molecular structure and the corresponding TDDFT-calculated excited state lifetimes and key electronic transitions. Second, the trained model is tested on a separate set of molecules, and its predictions are rigorously compared to independent TDDFT calculations. The results indicate a remarkable degree of predictive accuracy. Specifically, for a test set of 1,000 OPV materials, the transformer model predicted excited state lifetimes with a mean absolute error of 0.15 picoseconds, a negligible deviation from TDDFT-calculated values. The model also correctly identified key electronic transitions contributing to the excited state dynamics in 92% of the test cases, signifying a substantial concordance with the results obtained via conventional quantum chemistry calculations. The practical integration of the transformer model with existing quantum chemistry software was also realized, demonstrating its potential as a powerful tool in the arsenal of materials scientists and chemists. 
The implementation of this AI model is estimated to reduce the computational cost of predicting excited state dynamics by two orders of magnitude compared to conventional TDDFT calculations. The successful utilization of transformer neural networks to accurately predict excited state dynamics provides an efficient computational pathway for the accelerated discovery and design of new OPV materials, potentially catalyzing advancements in the realm of sustainable energy solutions.
Keywords: transformer neural networks, organic photovoltaic materials, excited state dynamics, time-dependent density functional theory, predictive modeling
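The headline accuracy figure above is a mean absolute error over held-out molecules. As a minimal sketch of that metric (the lifetime values below are made up for illustration, not drawn from the study's test set):

```python
import numpy as np

# Predicted vs. TDDFT-calculated excited state lifetimes in picoseconds
# (illustrative values only)
predicted = np.array([1.10, 0.85, 2.30])
reference = np.array([1.00, 1.00, 2.50])

mae = np.mean(np.abs(predicted - reference))
print(f"MAE = {mae:.2f} ps")  # MAE = 0.15 ps
```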
Procedia PDF Downloads 118
897 Oral Supplementation of Sweet Orange Extract “Citrus Sinensis” as Substitute for Synthetic Vitamin C on Transported Pullets in Humid Tropics
Authors: Mathew O. Ayoola, Foluke Aderemi, Tunde E. Lawal, Opeyemi Oladejo, Micheal A. Abiola
Abstract:
Food animals reared for meat require transportation during their life cycle. Transportation procedures can initiate stressors capable of disrupting physiological homeostasis. Stressors associated with transportation may include loading and unloading, crowding, environmental temperature, fear, vehicle motion and vibration, feed and water deprivation, and length of travel. These may cause oxidative stress and damage due to excess free radicals or reactive oxygen species (ROS). In recent years, the application of natural products as substitutes for synthetic electrolytes and tranquilizers as anti-stress agents during transportation has been under investigation. Sweet orange, a predominant fruit in the humid tropics, has been reported to have a good content of vitamin C (ascorbic acid). Vitamin C, the active ingredient in orange juice, plays a major role in the biosynthesis of corticosterone, a hormone that enhances energy supply during transportation and heat stress. Ninety-six 15-week-old Isa Brown pullets were allotted to four oral treatments: sterile water (T1), synthetic vitamin C (T2), 30 ml of orange extract per liter of water (T3), and 50 ml per liter (T4). Physiological parameters, namely body temperature (BTC), rectal temperature (RTC), respiratory rate (RR), and panting rate (PR), were measured pre- and post-transportation. The birds were transported in a specialized vehicle over a distance of 50 km at a speed of 60 km/h. The average temperature-humidity index (THI) was 81.8 in the environment and 74.6 within the vehicle, and the average wind speed was 11 km/h. Treatments and periods had a significant (p<0.05) effect on all the physiological parameters investigated. Birds on T1 differed significantly (p<0.05) from those on T2, T3, and T4. Values recorded post-transportation were significantly (p<0.05) higher than pre-transportation values for all parameters. In conclusion, this study showed that transportation as a stressor can affect the physiological homeostasis of pullets.
Oral supplementation with electrolytes or tranquilizers is essential as an anti-stress measure during transportation, and an organic product in the form of sweet orange could serve as a suitable alternative to synthetic vitamin C.
Keywords: physiological, pullets, sweet orange, transportation stress, vitamin C
Procedia PDF Downloads 120
896 A Multi-Role Oriented Collaboration Platform for Distributed Disaster Reduction in China
Authors: Linyao Qiu, Zhiqiang Du
Abstract:
With the rapid development of urbanization, economic development, and steady population growth in China, the widespread devastation, economic damage, and loss of human life caused by numerous forms of natural disaster are becoming increasingly serious every year. Disaster management requires the available and effective cooperation of different roles and organizations throughout the whole process, including mitigation, preparedness, response, and recovery. Owing to the imbalance of regional development in China, the disaster management capabilities of the national and provincial disaster reduction centers are uneven. When an undeveloped area suffers a disaster, the local reduction department can neither independently obtain first-hand information, such as high-resolution remote sensing images from satellites and aircraft, nor rely on a sharing mechanism to directly access data resources deployed elsewhere. Most existing disaster management systems operate in a typical passive, data-centric mode and serve a single department, so resources cannot be fully shared. This impediment blocks local departments and groups from quick emergency response and decision-making. In this paper, we introduce a collaborative platform for distributed disaster reduction. To address the imbalance in shared data sources and technology in the disaster reduction process, we propose a multi-role oriented collaboration business mechanism, capable of scheduling and allocating multiple resources for optimum utilization, which links various roles in different places for collaborative reduction business. The platform fully considers the differences in equipment conditions across provinces and provides several service modes to satisfy the technology needs of disaster reduction.
An integrated collaboration system based on a focusing-services mechanism is designed and implemented for resource scheduling, functional integration, data processing, task management, collaborative mapping, and visualization. Actual applications illustrate that the platform supports data sharing and business collaboration between national and provincial departments well and could significantly improve the disaster reduction capability in China.
Keywords: business collaboration, data sharing, distributed disaster reduction, focusing service
Procedia PDF Downloads 295
895 Treating Voxels as Words: Word-to-Vector Methods for fMRI Meta-Analyses
Authors: Matthew Baucum
Abstract:
With the increasing popularity of fMRI as an experimental method, psychology and neuroscience can greatly benefit from advanced techniques for summarizing and synthesizing large amounts of data from brain imaging studies. One promising avenue is automated meta-analyses, in which natural language processing methods are used to identify the brain regions consistently associated with certain semantic concepts (e.g. “social”, “reward”) across large corpora of studies. This study builds on this approach by demonstrating how, in fMRI meta-analyses, individual voxels can be treated as vectors in a semantic space and evaluated for their “proximity” to terms of interest. In this technique, a low-dimensional semantic space is built from brain imaging study texts, allowing words in each text to be represented as vectors (where words that frequently appear together are near each other in the semantic space). Consequently, each voxel in a brain mask can be represented as a normalized vector sum of all of the words in the studies that showed activation in that voxel. The entire brain mask can then be visualized in terms of each voxel’s proximity to a given term of interest (e.g., “vision”, “decision making”) or collection of terms (e.g., “theory of mind”, “social”, “agent”), as measured by the cosine similarity between the voxel’s vector and the term vector (or the average of multiple term vectors). Analysis can also proceed in the opposite direction, allowing word cloud visualizations of the nearest semantic neighbors for a given brain region. This approach allows for continuous, fine-grained metrics of voxel-term associations, and relies on state-of-the-art “open vocabulary” methods that go beyond mere word-counts.
An analysis of over 11,000 neuroimaging studies from an existing meta-analytic fMRI database demonstrates that this technique can be used to recover known neural bases for multiple psychological functions, suggesting this method’s utility for efficient, high-level meta-analyses of localized brain function. While automated text analytic methods are no replacement for deliberate, manual meta-analyses, they show promise for the efficient aggregation of large bodies of scientific knowledge, at least at a relatively general level.
Keywords: fMRI, machine learning, meta-analysis, text analysis
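The voxel-as-vector idea above reduces to a normalized vector sum plus a cosine similarity. The sketch below uses a tiny hand-made 4-dimensional "semantic space"; in practice the embeddings would be learned from the corpus of study texts, so the vectors and words here are purely illustrative.

```python
import numpy as np

def normalize(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

# Toy 4-dimensional semantic space (illustrative; real embeddings
# would be learned from the corpus of study texts)
word_vecs = {
    "vision": np.array([1.0, 0.1, 0.0, 0.0]),
    "reward": np.array([0.0, 1.0, 0.2, 0.0]),
    "social": np.array([0.0, 0.2, 1.0, 0.1]),
}

def voxel_vector(words):
    # A voxel is the normalized vector sum of all words appearing in
    # the studies that reported activation in that voxel
    return normalize(sum(word_vecs[w] for w in words))

def term_similarity(voxel_words, term):
    # Cosine similarity between the voxel vector and a term vector
    return float(np.dot(voxel_vector(voxel_words), normalize(word_vecs[term])))

# A voxel dominated by vision-related language sits closer to "vision"
sim_vision = term_similarity(["vision", "vision", "reward"], "vision")
sim_social = term_similarity(["vision", "vision", "reward"], "social")
print(sim_vision > sim_social)  # True
```

Averaging several term vectors before the dot product gives the "collection of terms" variant described above.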
Procedia PDF Downloads 448
894 Predicting Wealth Status of Households Using Ensemble Machine Learning Algorithms
Authors: Habtamu Ayenew Asegie
Abstract:
Wealth, as opposed to income or consumption, implies a more stable and permanent status. Natural and human-made difficulties can diminish household economies and put well-being at risk. Hence, governments and humanitarian agencies devote considerable resources to poverty and malnutrition reduction efforts. One key factor in the effectiveness of such efforts is the accuracy with which low-income or poor populations can be identified. This study therefore aims to predict a household’s wealth status using ensemble machine learning (ML) algorithms. Design science research methodology (DSRM) is employed, and four ML algorithms, Random Forest (RF), Adaptive Boosting (AdaBoost), Light Gradient Boosted Machine (LightGBM), and Extreme Gradient Boosting (XGBoost), have been used to train models. The Ethiopian Demographic and Health Survey (EDHS) dataset was accessed for this purpose from the Central Statistical Agency (CSA) database. Various data pre-processing techniques were employed, and model training was conducted using scikit-learn Python library functions. Model evaluation used metrics such as accuracy, precision, recall, F1-score, the area under the receiver operating characteristic curve (AUC-ROC), and subjective evaluations by domain experts. An optimal subset of hyper-parameters for each algorithm was selected through grid search for the best prediction. The RF model performed better than the rest of the algorithms, achieving an accuracy of 96.06%, and is best suited as a solution model for our purpose. Following RF, the LightGBM, XGBoost, and AdaBoost algorithms achieved accuracies of 91.53%, 88.44%, and 58.55%, respectively.
The findings suggest that features such as the age of the household head, the total children ever born in a family, the main roof material of the house, the region lived in, whether a household uses electricity, and the type of toilet facility are determinant factors and should be a focal point for economic policymakers. The determinant risk factors, extracted rules, and designed artifact achieved an 82.28% score in the domain experts’ evaluation. Overall, the study shows that ML techniques are effective in predicting the wealth status of households.
Keywords: ensemble machine learning, household wealth status, predictive model, wealth status prediction
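A minimal version of the training-and-tuning pipeline described above might look as follows; the synthetic dataset and the small hyper-parameter grid are placeholders for the EDHS data and the study's actual search space.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the EDHS household features (the real
# dataset and its columns are not reproduced here)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Grid search over a small, illustrative hyper-parameter grid
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [None, 10]},
    cv=3,
)
grid.fit(X_tr, y_tr)

# Held-out accuracy of the best model found by the grid search
acc = accuracy_score(y_te, grid.predict(X_te))
```

The same skeleton extends to the other ensembles by swapping in `AdaBoostClassifier` or the LightGBM/XGBoost estimators, and to the other metrics via `precision_score`, `recall_score`, `f1_score`, and `roc_auc_score`.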
Procedia PDF Downloads 38
893 Phage Capsid for Efficient Delivery of Cytotoxic Drugs
Authors: Simona Dostalova, Dita Munzova, Ana Maria Jimenez Jimenez, Marketa Vaculovicova, Vojtech Adam, Rene Kizek
Abstract:
The boom of nanomedicine in recent years has led to the development of numerous new nanomaterials that can be used as nanocarriers in drug delivery. These nanocarriers can be either synthetic or nature-based. The disadvantage of many synthetic nanocarriers is their toxicity in the patient’s body. Protein cages found naturally in the human body do not exhibit this disadvantage, although the release of cargo from some protein cages inside target cells can be problematic. The capsids of many viruses, including phages, can serve as a special type of protein cage. Phages infect bacterial cells and are therefore not harmful to human cells. The targeting of phage particles to cancer cells can be addressed by producing empty phage capsids, during which targeting moieties (e.g., peptides) can be cloned into the capsid genes to decorate the capsid surface. Moreover, the produced capsids do not contain viral nucleic acid and are therefore not infectious to the beneficial bacteria in the patient’s body. A protein cage composed of a viral capsid is larger than the frequently used apoferritin cage, but its size is still small enough to benefit from passive targeting by the enhanced permeability and retention (EPR) effect. In this work, bacteriophage λ was used, both whole and as its empty capsid, for the delivery of different cytotoxic drugs (cisplatin, carboplatin, oxaliplatin, etoposide, and doxorubicin). Large quantities of phage λ were obtained from a phage λ-producing strain of E. coli cultivated in medium with 0.2% maltose. After killing the E. coli with chloroform and removing it by centrifugation, the phage was concentrated by ultracentrifugation at 130,000 g and 4 °C for 3 h. Encapsulation of the drugs was performed by the infusion method, and four different concentrations of the drugs were encapsulated (200, 100, 50, and 25 µg/ml). Free drug molecules were removed by dialysis. The encapsulation was verified using spectrophotometric and electrochemical methods.
The amount of encapsulated drug increased linearly with the amount of applied drug (determination coefficient R² = 0.8013). 76% of the applied drug was encapsulated in phage λ particles (a concentration of 10 µg/ml), even at the highest applied drug concentration of 200 µg/ml. Only 1% of the encapsulated drug was detected in the phage DNA. Similar results were obtained with encapsulation in the empty phage capsid. It can therefore be concluded that the encapsulation of drugs into phage particles is efficient and occurs mostly through the interaction of the drugs with the protein capsid.
Keywords: cytostatics, drug delivery, nanocarriers, phage capsid
Procedia PDF Downloads 493
892 Saco Sweet Cherry from Fundão Region, Portugal: Chemical Profile and Health-Promoting Properties
Authors: Luís R. Silva, Ana C. Gonçalves, Catarina Bento, Fábio Jesus, Branca M. Silva
Abstract:
Prunus avium Linnaeus, better known as sweet cherry, is one of the most appreciated fruits worldwide. In Portugal, most of the production comes from the Fundão region, with Saco the most widely grown cultivar. Saco is very rich in bioactive compounds, especially phenolics, and presents great antioxidant capacity. The purpose of the present study was to investigate the chemical profile and biological potential, concerning antioxidant and anti-diabetic activity and protective effects towards erythrocytes, of Saco sweet cherry collected from the Fundão region (Portugal). The hydroethanolic extracts were prepared and passed through a C18 solid-phase extraction column. Analysis of the phenolic profile by the LC-DAD method allowed the identification of 22 phenolic compounds, 16 of them non-coloured phenolics and 6 anthocyanins. Among the non-coloured phenolics, 3-O-caffeoylquinic and p-coumaroylquinic acids were the main ones. Among the anthocyanins, cyanidin-3-O-rutinoside was found in the highest amounts. Regarding biological potential, Saco showed great antioxidant capacity in the DPPH and NO radical assays, with IC50 = 16.24 ± 0.46 µg/mL and IC50 = 176.69 ± 3.35 µg/mL, respectively. These results were similar to those obtained for the ascorbic acid control (IC50 = 16.92 ± 0.69 and IC50 = 162.66 ± 1.31 μg/mL for DPPH and NO, respectively). With respect to anti-diabetic potential, Saco revealed a capacity to inhibit α-glucosidase in a dose-dependent manner (IC50 = 10.79 ± 0.40 µg/mL), being much more active than the acarbose positive control (IC50 = 306.66 ± 0.84 μg/mL). Additionally, Saco extracts revealed protective effects against ROO•-mediated toxicity generated by AAPH in human erythrocytes, inhibiting hemoglobin oxidation (IC50 = 38.57 ± 0.96 μg/mL) and hemolysis (IC50 = 73.03 ± 1.48 μg/mL) in a concentration-dependent manner.
However, Saco extracts were less effective than the quercetin control (IC50 = 3.10 μg/mL and IC50 = 0.7 μg/mL for inhibition of hemoglobin oxidation and hemolysis, respectively). The results obtained show that Saco is an excellent source of phenolic compounds, natural antioxidant substances that readily capture reactive species. This work presents new insights into sweet cherry antioxidant properties, which may be useful for the future development of new therapeutic strategies for preventing or attenuating oxidation-related disorders.
Keywords: antioxidant capacity, health benefits, phenolic compounds, Saco
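IC50 values like those reported above are read off a dose-response curve. A minimal sketch of one common way to do that, linear interpolation between the two measurements bracketing 50% inhibition, using made-up data rather than the study's measurements:

```python
import numpy as np

def ic50_linear(concentrations, inhibition_pct):
    # Concentration at which inhibition crosses 50%, by linear
    # interpolation between the two bracketing measurements.
    # Assumes inhibition_pct is sorted in ascending order.
    conc = np.asarray(concentrations, dtype=float)
    inh = np.asarray(inhibition_pct, dtype=float)
    i = np.searchsorted(inh, 50.0)
    x0, x1 = conc[i - 1], conc[i]
    y0, y1 = inh[i - 1], inh[i]
    return x0 + (50.0 - y0) * (x1 - x0) / (y1 - y0)

# Illustrative dose-response data (µg/mL vs. % inhibition)
print(ic50_linear([5, 10, 20, 40], [20, 40, 60, 80]))  # 15.0
```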
Procedia PDF Downloads 316
891 Phytoremediation of Heavy Metals by the Perennial Tussock Chrysopogon Zizanioides Grown on Zn and Cd Contaminated Soil Amended with Biochar
Authors: Dhritilekha Deka, Deepak Patwa, Ravi K., Archana M. Nair
Abstract:
Bioaccumulation of heavy metal contaminants due to intense anthropogenic interference degrades the environment and ecosystem functions. Conventional physicochemical methods are energy-intensive and costly. Phytoremediation, on the other hand, provides an efficient nature-based strategy for the reclamation of heavy metal-contaminated sites. However, the slowness of the process and the limits of plant adaptation to sequestering high contaminant concentrations often reduce its efficiency. This calls for natural amendments, such as biochar, to improve phytoextraction and stabilize the green cover. Biochar has a highly porous structure with high carbon sequestration potential, and its negatively charged functional groups provide binding sites for positively charged metals. This study aims to determine the synergy between sugarcane bagasse biochar content and phytoremediation. A 60-day pot experiment using the perennial tussock vetiver grass (Chrysopogon zizanioides) was conducted at biochar contents of 1%, 2%, and 4% for the removal of cadmium and zinc. A concentration of 500 ppm was maintained for the amended and unamended control (CK) samples. The survival rates of the plants, biomass production, and leaf area index were measured as plant growth characteristics. Results indicate a visible change in plant growth and in heavy metal concentration with biochar content. In the Cd-treated soils, the bioconcentration factor (BCF) in the plant improved significantly at 4% biochar content, by 57% compared with the CK control treatment. The Zn-treated soils showed the largest reduction in metal concentration, 50%, in the 2% amended samples, and an increase in BCF in all the amended samples. Translocation from the rhizosphere to the shoots was low, was not dependent on the amendment content, and varied with contaminant type. The root-to-shoot ratios were higher than in the control samples.
The enhanced tolerance capacities can be attributed to the nutrients released by the biochar into the soil. The study reveals the high potential of biochar as a phytoremediation amendment, although its effect depends on the soil, the heavy metal, and the accumulator species.
Keywords: phytoextraction, biochar, heavy metals, Chrysopogon zizanioides, bioaccumulation factor
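The two uptake metrics used above are simple concentration ratios. A minimal sketch, with illustrative concentration values rather than the study's data:

```python
def bioconcentration_factor(c_plant, c_soil):
    # BCF: metal concentration in plant tissue relative to the soil
    return c_plant / c_soil

def translocation_factor(c_shoot, c_root):
    # TF: shoot concentration relative to root concentration;
    # values well below 1 indicate the metal stays in the roots
    return c_shoot / c_root

# e.g., 50 mg/kg in the plant grown on 500 mg/kg contaminated soil
print(bioconcentration_factor(50.0, 500.0))  # 0.1
```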
Procedia PDF Downloads 65
890 Lactate Biostimulation for Remediation of Aquifers Affected by Recalcitrant Sources of Chloromethanes
Authors: Diana Puigserver Cuerda, Jofre Herrero Ferran, José M. Carmona Perez
Abstract:
In the transition zone between aquifers and basal aquitards, DNAPL pools of chlorinated solvents are more recalcitrant than at other depths in the aquifer. Although degradation of carbon tetrachloride (CT) and chloroform (CF) occurs in this zone, it is a slow process, which is why an adequate remediation strategy is necessary. The working hypothesis of this study is that biostimulation of the transition zone of an aquifer contaminated by CT and CF can be an effective remediation strategy. This hypothesis was tested at a site on an unconfined aquifer where the major contaminants were CT and CF of industrial origin and where the hydrochemical background was rich in other compounds that can hinder the natural attenuation of chloromethanes. Field studies and five laboratory microcosm experiments were carried out on groundwater and sediments to identify: i) the degradation processes of CT and CF; ii) the structure of the microbial communities; and iii) the microorganisms implicated in this degradation. For this purpose, concentrations of contaminants and co-contaminants (nitrate and sulfate), compound-specific isotope analysis, molecular techniques (denaturing gradient gel electrophoresis), and clone library analysis were used.
The main results were: i) degradation of CT and CF occurred in the groundwater and in the less conductive sediments; ii) sulfate-reducing conditions in the transition zone were strong and similar to those in the source of contamination; iii) two microorganisms (Azospira suillum and a bacterium of the order Clostridiales) identified in the transition zone in both the field and lab experiments were compatible with a role in the reductive dechlorination of CT, CF, and their degradation products (dichloromethane and chloromethane); iv) these two microorganisms were present at the high starting concentrations of the microcosm experiments (similar to those in the DNAPL source) and remained present until the last day of the lactate biostimulation; and v) lactate biostimulation gave rise to the fastest and highest degradation rates and promoted the elimination of other electron acceptors (e.g., nitrate and sulfate). All these results are evidence that lactate biostimulation can be effective in remediating the source and plume, especially in the transition zone, and they highlight the environmental relevance of treating contaminated transition zones in industrial contexts similar to the one studied.
Keywords: Azospira suillum, lactate biostimulation of carbon tetrachloride and chloroform, reductive dechlorination, transition zone between aquifer and aquitard
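Compound Specific Isotope Analysis results such as those above are commonly interpreted with the Rayleigh fractionation model to estimate the degraded fraction of a contaminant. The abstract does not report its isotope data or enrichment factors, so every number below is hypothetical; this is only a sketch of the standard calculation:

```python
import math

def rayleigh_delta(delta0_permil, epsilon_permil, f_remaining):
    """Rayleigh model: isotope signature (permil) of the remaining substrate
    when a fraction f (0 < f <= 1) of the initial mass is left:
    delta = (delta0 + 1000) * f**(epsilon/1000) - 1000."""
    return (delta0_permil + 1000.0) * f_remaining ** (epsilon_permil / 1000.0) - 1000.0

def fraction_degraded(delta_permil, delta0_permil, epsilon_permil):
    """Invert the Rayleigh model to estimate the degraded fraction B = 1 - f."""
    f = math.exp(math.log((delta_permil + 1000.0) / (delta0_permil + 1000.0))
                 * 1000.0 / epsilon_permil)
    return 1.0 - f

# Hypothetical values: delta13C of CT shifting from -30 to -20 permil,
# with an assumed carbon enrichment factor of -15 permil.
B = fraction_degraded(-20.0, -30.0, -15.0)  # roughly half degraded
```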
Procedia PDF Downloads 176
889 Ikat: Undaunted Journey of a Traditional Textile Practice, a Sublime Connect of Traditionality with Modernity and Calibration for Eco-Sustainable Options
Authors: Purva Khurana
Abstract:
Traditional textile crafts have universally been impeded by the rise of innovative technologies, but sustained human endeavor, in sync with dynamic market nuances, holds the key for these otherwise fast-vanishing marvels. The metamorphosis of such art forms into niche markets presupposes a sharp focus on adaptability. The author concentrates on the ancient handicraft of ikat in Andhra Pradesh (India), a manifestation of its cultural heritage and an esoteric cottage industry intrinsic to the development and support of the local economy and identity. Like any other traditional practice, ikat weaving has been subjected to the challenges of modernization. However, owing to its unique character, personalized production, and adaptability of both material and process, ikat weaving has stood the test of time by judiciously blending innovation with contemporary taste. To survive as a living craft, and to justify its role as a universal language of aesthetic sensibility, it is imperative that the ikat tradition lend itself to a continuous process of experiment, change, and growth. This paper aims to examine the contours of the ikat production process from its pure form to more fashion- and market-oriented production with upgraded processes, materials, and tools. Over time, the craft has adapted well to new style paradigms, matching the latest fashion trends in tandem with market sensitivities. It is also an effort to investigate how this craft could respond constructively to the pressure of contemporary technical developments in order to stay at the cutting edge while preserving its integrity. To approach these issues, the methodology adopted is a conceptual analysis of the craft practices, their unique strengths, and how they could be used to advance the craft in relation to emerging technical developments.
The paper summarizes the results of the study carried out by the author on the peculiar advantages of suitably calibrated vat dyes over natural dyes, in terms of recycling ability and eco-friendly properties, which give them a definite edge in both socio-economic and environmental terms.
Keywords: craft, eco-friendly dyes, ikat, metamorphosis
Procedia PDF Downloads 174
888 Mathematics as the Foundation for the STEM Disciplines: Different Pedagogical Strategies Addressed
Authors: Marion G. Ben-Jacob, David Wang
Abstract:
There is a mathematics requirement for entry-level college and university students, especially those who plan to study STEM (Science, Technology, Engineering, and Mathematics). Most of them take College Algebra, and to continue their studies, they need to succeed in this course. Different pedagogical strategies are employed to promote student success. There is, of course, the traditional method of teaching: lecture, examples, and problems for students to solve. The Emporium Model, another pedagogical approach, replaces traditional lectures with a learning resource center model featuring interactive software and on-demand personalized assistance. This presentation will compare these two methods of pedagogy and present a study, with its results, comparing them. Math is the foundation for science, technology, and engineering. It is generally used in STEM to find patterns in data. These patterns can be used to test relationships, draw general conclusions about data, and model the real world. In STEM, solutions to problems are analyzed, reasoned about, and interpreted using math abilities in an assortment of real-world scenarios. This presentation will examine specific examples of how math is used in the different STEM disciplines. Math becomes practical in science when it is used to model natural and artificial experiments to identify a problem and develop a solution for it. As we analyze data, we are using math to find the statistical correlation between a cause and its effect. Professionals who use math in this way include data scientists, biologists, and geologists. Without math, most technology would not be possible. Math is the basis of binary representation, and without programming, you just have the hardware. Addition, subtraction, multiplication, and division are also used in almost every program written. Mathematical algorithms are inherent in software as well.
Mechanical engineers analyze scientific data to design robots by applying math and using software. Electrical engineers use math to help design and test electrical equipment. They also use math when creating computer simulations and designing new products. Chemical engineers often use mathematics in the lab: advanced computer software aids their research and production processes by modeling theoretical synthesis techniques and the properties of chemical compounds. Mathematics mastery is crucial for success in the STEM disciplines. Pedagogical research on formative strategies and the necessary topics to be covered is essential.
Keywords: emporium model, mathematics, pedagogy, STEM
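As a concrete instance of the cause-and-effect correlation mentioned above, the Pearson coefficient can be computed in a few lines. This is a generic statistics sketch, not tied to any particular STEM dataset:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples:
    covariance of the samples divided by the product of their spreads."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear data (y = 2x) gives r = 1.0.
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```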
Procedia PDF Downloads 75
887 Multi-Criteria Decision Making Tool for Assessment of Biorefinery Strategies
Authors: Marzouk Benali, Jawad Jeaidi, Behrang Mansoornejad, Olumoye Ajao, Banafsheh Gilani, Nima Ghavidel Mehr
Abstract:
The Canadian forest industry is seeking to identify and implement transformational strategies for enhanced financial performance through the emerging bioeconomy, or more specifically through the concept of the biorefinery. For example, processing forest residues or the surplus biomass available on mill sites into biofuels, biochemicals, and/or biomaterials is one attractive strategy alongside traditional wood and paper products and cogenerated energy. There are many possible process-product biorefinery pathways, each associated with a specific product portfolio carrying a different level of risk. Thus, it is not obvious which strategy the forest industry should select and implement. There is therefore a need for analytical and design tools that enable evaluating biorefinery strategies against a set of criteria from a perspective of short- and long-term sustainability, while selecting both the existing core products and the new product portfolio. In addition, it is critical to assess the manufacturing flexibility needed to internalize the risk from market price volatility of each targeted bio-based product in the portfolio before investing heavily in any biorefinery strategy. This paper introduces a systematic methodology for designing integrated biorefineries using process systems engineering tools, together with a multi-criteria decision making framework, to put forward the biorefinery strategies that most effectively fulfill the needs of the forest industry. Topics covered include market analysis, techno-economic assessment, cost accounting, energy integration analysis, life cycle assessment, and supply chain analysis. This is followed by a description of the vision as well as the key features and functionalities of the I-BIOREF software platform, developed by CanmetENERGY of Natural Resources Canada.
Two industrial case studies will be presented to support the robustness and flexibility of the I-BIOREF software platform: i) an integrated Canadian Kraft pulp mill with a lignin recovery process (namely, LignoBoost™); ii) a standalone biorefinery based on an ethanol-organosolv process.
Keywords: biorefinery strategies, bioproducts, co-production, multi-criteria decision making, tool
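The skeleton of a multi-criteria ranking such as the framework above can be sketched as a weighted sum over criterion scores. The criteria, weights, and scores below are hypothetical placeholders, not I-BIOREF's actual scoring model:

```python
def weighted_score(criteria_scores, weights):
    """Simple weighted-sum MCDM: normalized weights times criterion scores."""
    total_w = sum(weights.values())
    return sum(criteria_scores[c] * w / total_w for c, w in weights.items())

# Hypothetical criteria (0-10 scale) and weights for two candidate strategies,
# loosely named after the two case studies; the numbers are invented.
weights = {"economics": 0.4, "environment": 0.3, "market_risk": 0.3}
strategies = {
    "lignin_recovery":    {"economics": 7, "environment": 8, "market_risk": 6},
    "ethanol_organosolv": {"economics": 8, "environment": 6, "market_risk": 5},
}
ranked = sorted(strategies,
                key=lambda s: weighted_score(strategies[s], weights),
                reverse=True)  # best-scoring strategy first
```

A real assessment would feed these scores from the techno-economic, life cycle, and supply chain analyses listed in the abstract.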
Procedia PDF Downloads 232
886 The Analyzer: Clustering Based System for Improving Business Productivity by Analyzing User Profiles to Enhance Human Computer Interaction
Authors: Dona Shaini Abhilasha Nanayakkara, Kurugamage Jude Pravinda Gregory Perera
Abstract:
E-commerce platforms have revolutionized the shopping experience, offering convenient ways for consumers to make purchases. To improve interactions with customers and optimize marketing strategies, it is essential for businesses to understand user behavior, preferences, and needs on these platforms. This paper focuses on recommending that businesses customize interactions with users based on their behavioral patterns, leveraging data-driven analysis and machine learning techniques. Businesses can improve engagement and boost the adoption of e-commerce platforms by aligning behavioral patterns with user goals of usability and satisfaction. We propose TheAnalyzer, a clustering-based system designed to enhance business productivity by analyzing user profiles and improving human-computer interaction. TheAnalyzer seamlessly integrates with business applications, collecting relevant data points from users' natural interactions without additional burdens such as questionnaires or surveys. It defines five key user analytics as features for its dataset, all easily captured through users' interactions with e-commerce platforms. This research presents a study demonstrating the successful distinction of users into specific groups based on the five key analytics considered by TheAnalyzer. With the assistance of domain experts, customized business rules can be attached to each group, enabling TheAnalyzer to influence business applications and provide an enhanced, personalized user experience. The outcomes are evaluated quantitatively and qualitatively, demonstrating that utilizing TheAnalyzer's capabilities can optimize business outcomes, enhance customer satisfaction, and drive sustainable growth. The findings of this research contribute to the advancement of personalized interactions in e-commerce platforms.
By leveraging user behavioral patterns and analyzing both new and existing users, businesses can effectively tailor their interactions to improve customer satisfaction and loyalty and, ultimately, drive sales.
Keywords: data clustering, data standardization, dimensionality reduction, human computer interaction, user profiling
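The standardization-then-clustering pipeline implied by the keywords above can be illustrated with z-score scaling followed by plain k-means. The abstract does not name the clustering algorithm TheAnalyzer uses, so k-means here is only a stand-in, and the two-feature toy data is invented:

```python
import math

def standardize(rows):
    """Z-score each feature column so no analytic dominates by scale."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    stds = [math.sqrt(sum((v - m) ** 2 for v in c) / len(c)) or 1.0
            for c, m in zip(cols, means)]
    return [[(v - m) / s for v, m, s in zip(row, means, stds)] for row in rows]

def kmeans(points, k, iters=20):
    """Plain k-means with deterministic init from the first k points."""
    centers = [list(p) for p in points[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            groups[nearest].append(p)
        # Move each center to the mean of its group (keep it if the group is empty).
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Toy data: two obvious user groups on two of the five analytics.
data = standardize([[1, 2], [1.2, 1.9], [0.9, 2.1],
                    [8, 9], [8.1, 9.2], [7.9, 8.8]])
centers, groups = kmeans(data, 2)
```

Customized business rules would then be attached to each resulting group, as the abstract describes.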
Procedia PDF Downloads 73
885 Valorization of Plastic and Cork Wastes in Design of Composite Materials
Authors: Svetlana Petlitckaia, Toussaint Barboni, Paul-Antoine Santoni
Abstract:
Plastic is a revolutionary material. However, the pollution caused by plastics damages the environment, human health, and the economies of many countries, so it is important to find new ways to recycle and reuse plastic material. The use of waste materials as filler and as matrix for composite materials is receiving increasing attention as an approach to increasing the economic value of waste streams. In this study, a new composite material was developed, based on high-density polyethylene (HDPE) and polypropylene (PP) wastes from bottle caps and on cork powder from unused (virgin) cork, which has a high capacity for thermal insulation. The composites were prepared with both virgin and modified cork and produced by twin-screw extrusion and injection molding, with 0%, 5%, 10%, 15%, and 20% cork powder in the polymer matrix, with and without a coupling agent and a flame retardant. These composites were characterized in terms of their mechanical, structural, and thermal properties. The effects of cork fraction, particle size, and the use of flame retardant on the properties of the composites were investigated. The properties of samples made only of polymer and cork were compared with those of samples containing the coupling agent and a commercial flame retardant. The morphology of the HDPE/cork and PP/cork composites revealed good distribution and dispersion of the cork particles without agglomeration. The results showed that adding cork powder to the polymer matrix reduced the density of the composites; however, the incorporation of this natural additive did not have a significant effect on water absorption. Regarding mechanical properties, tensile strength decreased with the addition of cork powder, from 30 MPa to 19 MPa for PP composites and from 19 MPa to 17 MPa for HDPE composites.
The thermal conductivity of the HDPE/cork and PP/cork composites is about 0.230 W/mK and 0.170 W/mK, respectively. The flammability of the composites was evaluated using a cone calorimeter. The results of the thermal analysis and fire tests show that adding flame retardants is important to improve fire resistance: the samples made with the coupling agent and flame retardant have better mechanical properties and fire resistance. The feasibility of composites based on cork and on PP and HDPE wastes opens new ways of valorizing plastic waste and virgin cork; the formulation of these composite materials remains to be optimized.
Keywords: composite materials, cork and polymer wastes, flammability, modified cork
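The density reduction with cork content reported above is what a first-order rule of mixtures predicts for a filled polymer. A minimal sketch with typical handbook densities, assumed values rather than the paper's measurements:

```python
def rule_of_mixtures(prop_matrix, prop_filler, filler_fraction):
    """First-order estimate of a composite property from volume fractions;
    an idealized model that ignores porosity and interface effects."""
    return prop_matrix * (1 - filler_fraction) + prop_filler * filler_fraction

# Assumed densities in g/cm3: HDPE roughly 0.95, cork powder roughly 0.24.
rho = rule_of_mixtures(0.95, 0.24, 0.20)  # composite with 20% cork
```

The same linear estimate is often used as a first check on thermal conductivity, although real values depend strongly on particle dispersion and porosity.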
Procedia PDF Downloads 88