Search results for: time-frequency feature extraction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3254

2174 Mentha piperita Formulations in Natural Deep Eutectic Solvents: Phenolic Profile and Biological Activity

Authors: Tatjana Jurić, Bojana Blagojević, Denis Uka, Ružica Ždero Pavlović, Boris M. Popović

Abstract:

Natural deep eutectic solvents (NADES) represent a class of modern systems developed as a green alternative to the toxic organic solvents commonly used as extraction media. Hydrogen bonding is considered the main interaction leading to the formation of NADES. The aim of this study was the phytochemical characterization and determination of the antioxidant and antibacterial activity of Mentha piperita leaf extracts obtained with six choline chloride-based NADES. The NADES were prepared by mixing choline chloride with different hydrogen bond donors in a 1:1 molar ratio, followed by the addition of 30% (w/w) water. The mixtures were then heated (60 °C) and stirred (650 rpm) until clear homogeneous liquids were obtained. The Mentha piperita extracts were prepared by mixing 75 mg of peppermint leaves with 1 mL of NADES, followed by heating and stirring (60 °C, 650 rpm) for 30 min. The content of six phenolics in the extracts was determined using HPLC-PDA. The dominant compounds present in peppermint leaves, rosmarinic acid and luteolin 7-O-glucoside, were extracted by NADES at a level similar to that of 70% ethanol. The microdilution method was applied to test the antibacterial activity of the extracts. Compared with 70% ethanol, all NADES systems showed higher antibacterial activity towards Pseudomonas aeruginosa (Gram -), Staphylococcus aureus (Gram +), Escherichia coli (Gram -), and Salmonella enterica (Gram -), especially the NADES containing organic acids. The majority of NADES extracts showed a better ability to neutralize the DPPH radical than the conventional solvent and a similar ability to reduce Fe3+ to Fe2+ ions in the FRAP assay. The obtained results introduce NADES as novel, sustainable, and low-cost solvents with a variety of applications.

Keywords: antibacterial activity, antioxidant activity, green extraction, natural deep eutectic solvents, polyphenols

Procedia PDF Downloads 187
2173 Classification of Coughing and Breathing Activities Using Wearable and a Light-Weight DL Model

Authors: Subham Ghosh, Arnab Nandi

Abstract:

Background: The proliferation of Wireless Body Area Networks (WBAN) and Internet of Things (IoT) applications demonstrates the potential for continuous monitoring of physical changes in the body. These technologies are vital for health monitoring tasks, such as identifying coughing and breathing activities, which are necessary for disease diagnosis and management. Monitoring activities such as coughing and deep breathing can provide valuable insights into a variety of medical issues. Wearable radio-based antenna sensors, which are lightweight and easy to incorporate into clothing or portable goods, provide continuous monitoring. This mobility gives them a substantial advantage over stationary environmental sensors such as cameras and radar, which are constrained to certain places. Furthermore, using compressive techniques provides benefits such as reduced data transmission rates and memory needs. These wearable sensors offer more advanced and diverse health monitoring capabilities. Methodology: This study analyzes the feasibility of using a semi-flexible antenna operating at 2.4 GHz (ISM band) and positioned around the neck and near the mouth to identify three activities: coughing, deep breathing, and idleness. A vector network analyzer (VNA) is used to collect time-varying complex reflection coefficient data from the perturbed antenna nearfield. The reflection coefficient (S11) conveys nuanced information caused by simultaneous variations in the nearfield radiation of the three activities across time. The signatures are sparsely represented with Gaussian-windowed Gabor spectrograms. The Gabor spectrogram is used as a sparse representation approach, which reassigns the ridges of the spectrogram images to improve their resolution and focus on essential components. The antenna is biocompatible in terms of specific absorption rate (SAR).
The sparsely represented Gabor spectrogram images are fed into a lightweight deep learning (DL) model for feature extraction and classification. Two antenna locations are investigated in order to determine the most effective placement for the three different activities. Findings: Cross-validation techniques were used on data from both locations. Due to the complex form of the recorded S11, separate analyses and assessments were performed on the magnitude, the phase, and their combination. The combination of magnitude and phase fared better than the separate analyses. Various sliding window sizes, ranging from 1 to 5 seconds, were tested to find the best window for activity classification. It was found that a neck-mounted design was effective at detecting the three distinct activities.
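The Gabor spectrogram named above is, at its core, a short-time Fourier transform taken with a Gaussian analysis window. A minimal NumPy sketch of that transform follows; the synthetic signal and window parameters are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gabor_spectrogram(x, fs, win_len=64, hop=16, sigma=8.0):
    """Magnitude spectrogram of x using a Gaussian (Gabor) analysis window."""
    n = np.arange(win_len)
    window = np.exp(-0.5 * ((n - win_len / 2) / sigma) ** 2)
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        seg = x[start:start + win_len] * window
        frames.append(np.abs(np.fft.rfft(seg)))
    S = np.array(frames).T  # shape: (freq_bins, time_frames)
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    return S, freqs

# Synthetic stand-in for a time-varying S11 trace: a 100 Hz tone at fs = 1 kHz
fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 100 * t)
S, freqs = gabor_spectrogram(signal, fs)
peak_bin = S.mean(axis=1).argmax()
print(freqs[peak_bin])  # peak frequency bin lands near 100 Hz
```

In practice such spectrogram images (here the matrix `S`) are what would be passed to the DL classifier.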

Keywords: activity recognition, antenna, deep-learning, time-frequency

Procedia PDF Downloads 16
2172 CPW-Fed Broadband Circularly Polarized Planar Antenna with Improved Ground

Authors: Gnanadeep Gudapati, V. Annie Grace

Abstract:

A broadband circular polarization (CP) feature is designed for a CPW-fed planar printed monopole antenna. The antenna consists of a rectangular patch and an improved ground plane. The antenna's impedance bandwidth can be increased by adding a vertical stub and a horizontal slit in the ground plane. The measured results show that the proposed antenna has a wide 10-dB return loss bandwidth of 70.2% (4.35 GHz, 3.7-8.1 GHz) centered at 4.2 GHz.

Keywords: CPW-fed, circularly polarised, FR4 epoxy, slit and stub

Procedia PDF Downloads 147
2171 Linkage between a Plant-based Diet and Visual Impairment: A Systematic Review and Meta-Analysis

Authors: Cristina Cirone, Katrina Cirone, Monali S. Malvankar-Mehta

Abstract:

Purpose: An increased risk of visual impairment has been observed in individuals lacking a balanced diet. The purpose of this paper is to characterize the relationship between plant-based diets and specific ocular outcomes among adults. Design: Systematic review and meta-analysis. Methods: This systematic review and meta-analysis was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement guidelines. The databases MEDLINE, EMBASE, Cochrane, and PubMed were systematically searched up until May 27, 2021. Of the 503 articles independently screened by two reviewers, 21 were included in this review. Quality assessment and data extraction were performed by both reviewers. Meta-analysis was conducted using STATA 15.0. Fixed-effect and random-effect models were computed based on heterogeneity. Results: A total of 503 studies were identified, which then underwent duplicate removal and a title and abstract screen. The remaining 61 studies underwent a full-text screen; 21 progressed to data extraction, and 15 were included in the quantitative analysis. Meta-analysis indicated that regular consumption of fish (OR = 0.70; CI: [0.62-0.79]) and of skim milk, poultry, and non-meat animal products (OR = 0.70; CI: [0.61-0.79]) is associated with a reduced risk of visual impairment (age-related macular degeneration, age-related maculopathy, cataract development, and central geographic atrophy) among adults. Consumption of red meat (OR = 1.41; CI: [1.07-1.86]) is associated with an increased risk of visual impairment. Conclusion: Overall, a pescatarian diet is associated with the most favorable visual outcomes among adults, while the consumption of red meat appears to negatively impact vision. Results suggest a need for more local and government-led interventions promoting a healthy and balanced diet.
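Fixed-effect pooling of the kind reported above combines study odds ratios on the log scale with inverse-variance weights, recovering standard errors from each study's 95% CI. A hedged sketch with hypothetical study values (not this paper's data):

```python
import numpy as np

def fixed_effect_pool(ors, ci_lo, ci_hi):
    """Inverse-variance fixed-effect pooling of odds ratios.

    Standard errors are recovered from the 95% CIs on the log scale:
    se = (ln(hi) - ln(lo)) / (2 * 1.96).
    """
    log_or = np.log(ors)
    se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)
    w = 1.0 / se ** 2
    pooled = np.sum(w * log_or) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return np.exp(pooled), (np.exp(pooled - 1.96 * pooled_se),
                            np.exp(pooled + 1.96 * pooled_se))

# Three hypothetical study ORs for fish intake vs. visual impairment
or_hat, ci = fixed_effect_pool(np.array([0.65, 0.72, 0.75]),
                               np.array([0.50, 0.60, 0.55]),
                               np.array([0.85, 0.86, 1.02]))
print(round(or_hat, 2), [round(v, 2) for v in ci])
```

A random-effects model would additionally estimate between-study variance (e.g. DerSimonian-Laird) and fold it into the weights.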

Keywords: plant-based diet, pescatarian diet, visual impairment, systematic review, meta-analysis

Procedia PDF Downloads 186
2170 Phytotechnologies for Use and Reconstitution of Contaminated Sites

Authors: Olga Shuvaeva, Tamara Romanova, Sergey Volynkin, Valentina Podolinnaya

Abstract:

The green chemistry concept is focused on the prevention of environmental pollution caused by human activity. However, there are many contaminated areas in the world which pose a serious threat to ecosystems. Therefore, in accordance with the principles of green chemistry, the need to clean these areas should not be forgotten. Furthermore, the waste material often contains valuable components whose extraction by traditional wet chemical technologies is inefficient from both the economic and the environmental protection standpoint. Here, plants may be successfully used to 'scavenge' a range of metals from polluted land sites in an approach that allows both processes, phytoremediation and phytomining, to be carried out in conjunction. The goal of the present work was to study the bioaccumulation ability of floating macrophytes such as water hyacinth and pondweed towards Hg, Ba, Cd, Mo, and Pb as pollutants in an aquatic medium, and of terrestrial plants (birch, reed, and cane) towards gold and silver as valuable components. A peculiarity of the research was that the plants grew under extreme conditions (the pH of drainage and pore waters was about 2.5). The study was conducted on the territory of the Ursk tailings (Southwestern Siberia, Russia), formed as a result of primary polymetallic ore cyanidation. The waste material mainly consists (~80%) of pyrite (FeS₂) and barite (BaSO₄); the raw minerals included FeAsS, HgS, PbS, and Ag₂S as minor ones. It has been shown that water hyacinth demonstrates a high ability to accumulate different metals and, what is especially important, to remove mercury from polluted waters, with a BCF value of more than 1000.
As for gold, its concentrations in reed and cane growing near the waste material were estimated as 500 and 900 μg∙kg⁻¹, respectively. It was also found that the plants can survive under the extreme conditions of an acidic environment; hence, we can assume that there is, in principle, an opportunity to use them for the extraction of valuable substances from mining waste dump burial areas.

Keywords: bioaccumulation, gold, heavy metals, mine tailing

Procedia PDF Downloads 173
2169 HCl-Based Hydrometallurgical Recycling Route for Metal Recovery from Li-Ion Battery Wastes

Authors: Claudia Schier, Arvid Biallas, Bernd Friedrich

Abstract:

The demand for Li-ion batteries is increasing compared to other battery systems owing to their benefits, such as fast charging time, high energy density, low weight, a large temperature range, and long service life. These characteristics are substantial not only for battery-operated portable devices but also in the growing field of electromobility, where high-performance energy storage systems in the form of batteries are in high demand. Due to sharply rising production, there is tremendous interest in recycling spent Li-ion batteries in a closed-loop manner, owing to their high content of valuable metals such as cobalt, manganese, and lithium, as well as the increasing demand for these scarce metals. Currently, there are only a few industrial processes using hydrometallurgical methods to recover valuable metals from Li-ion battery waste. In this study, the extraction of valuable metals from spent Li-ion batteries is investigated by pretreating and subsequently leaching battery wastes and comparing different precipitation methods. For the extraction of lithium, cobalt, and other valuable metals, pelletized battery waste with an initial Li content of 2.24 wt.% and a cobalt content of 22 wt.% is used. Hydrochloric acid at 4 mol/L is applied at a 1:50 solid-to-liquid (s/l) ratio to generate a pregnant leach solution for the subsequent precipitation steps. In order to obtain pure precipitates, two different pathways (pathway 1 and pathway 2) are investigated, which differ from each other with regard to the precipitation steps carried out. While lithium carbonate recovery is the final process step in pathway 1, pathway 2 requires a preliminary removal of lithium from the process. The aim is to evaluate both processes in terms of purity and yield of the products obtained. ICP-OES is used to determine the chemical content of the leach liquor as well as of the solid residue.

Keywords: hydrochloric acid, hydrometallurgy, Li-ion-batteries, metal recovery

Procedia PDF Downloads 172
2168 Processing and Economic Analysis of Rain Tree (Samanea saman) Pods for Village Level Hydrous Bioethanol Production

Authors: Dharell B. Siano, Wendy C. Mateo, Victorino T. Taylan, Francisco D. Cuaresma

Abstract:

Biofuel is one of the renewable energy sources adopted by the Philippine government in order to lessen dependency on foreign fuel and to reduce carbon dioxide emissions. Rain tree pods were seen to be a promising source of bioethanol since they contain a significant amount of fermentable sugars. The study was conducted to establish the complete procedure for processing rain tree pods for village-level hydrous bioethanol production, covering collection, drying, storage, shredding, dilution, extraction, fermentation, and distillation. The feedstock was sun-dried, and the moisture content was determined at a range of 20% to 26% prior to storage. The dilution ratio was 1:1.25 (1 kg of pods to 1.25 L of water), and the extraction process yielded a sugar concentration of 22 °Bx to 24 °Bx. The dilution period was three hours. After three hours of diluting the samples, the juice was extracted using an extractor with a capacity of 64.10 L/hour. 150 L of rain tree pod juice was extracted and subjected to fermentation using a village-level anaerobic bioreactor. Fermentation can be done with or without yeast: fermentation with yeast (Saccharomyces cerevisiae) accelerates the process, producing more ethanol in a shorter time, while fermentation without yeast also produces ethanol, but in lower volume and at a slower rate. Distillation of 150 L of fermented broth was done for six hours at 85 °C to 95 °C feedstock temperature and 74 °C to 95 °C column-head temperature (ethanol in the vapor state). The highest volume of ethanol recovered, 14.89 L, was obtained with yeast fermentation over a five-day duration, and the lowest, 11.63 L, without yeast fermentation over a three-day duration. In general, the results suggest that rain tree pods have very good potential as a feedstock for bioethanol production.

Keywords: fermentation, hydrous bioethanol, rain tree pods, village level

Procedia PDF Downloads 296
2167 Lexical Semantic Analysis to Support Ontology Modeling of Maintenance Activities – Case Study of Offshore Riser Integrity

Authors: Vahid Ebrahimipour

Abstract:

Word representation and the contextual meaning of text-based documents play an essential role in knowledge modeling. Business procedures written in natural language are meant to store technical and engineering information, management decisions, and operation experience during the production system life cycle. Contextual meaning representation is highly dependent upon word sense, lexical relativity, and the semantic features of the argument. This paper proposes a method for lexical semantic analysis and contextual meaning representation of maintenance activity in a mass production system. Our approach constructs a straightforward lexical semantic analysis of the semantic and syntactic features of the context structure of maintenance reports to facilitate the translation, interpretation, and conversion of a human-readable interpretation into a computer-readable representation with less heterogeneity and ambiguity. The methodology enables users to obtain a representation format that maximizes shareability and accessibility for multi-purpose usage. It provides a contextualized structure from which a generic context model can be obtained and utilized throughout the system life cycle. First, it employs a co-occurrence-based clustering framework to recognize a group of highly frequent contextual features that correspond to the text of a maintenance report. Then the keywords are identified for syntactic and semantic extraction analysis. The analysis exercises causality-driven logic over the keywords' senses to divulge the structural and meaning dependency relationships between the words in a context. The output is a word-contextualized representation of maintenance activity accommodating computer-based representation and inference using OWL/RDF.
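A co-occurrence-based clustering framework of the kind described typically begins with a term co-occurrence count over the report corpus. A minimal sketch, using invented maintenance-report snippets rather than the authors' corpus:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(reports, min_count=2):
    """Count within-report keyword co-occurrences across maintenance reports.

    Each report contributes one count per unordered pair of distinct terms;
    pairs seen fewer than min_count times are filtered out as noise.
    """
    pairs = Counter()
    for report in reports:
        terms = sorted(set(report.lower().split()))
        pairs.update(combinations(terms, 2))
    return {p: c for p, c in pairs.items() if c >= min_count}

reports = [
    "riser corrosion inspection overdue",
    "riser corrosion repair completed",
    "pump seal inspection completed",
]
print(cooccurrence(reports))  # only ('corrosion', 'riser') recurs
```

The surviving high-frequency pairs are the kind of contextual features that would then be clustered and mapped to keywords for the semantic analysis stage.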

Keywords: lexical semantic analysis, metadata modeling, contextual meaning extraction, ontology modeling, knowledge representation

Procedia PDF Downloads 105
2166 A Scientific Method of Drug Development Based on Ayurvedic Bhaishajya Knowledge

Authors: Rajesh S. Mony, Vaidyaratnam Oushadhasala

Abstract:

An attempt is made in this study to evolve a drug development modality based on the classical Ayurvedic knowledge base as well as on modern scientific methodology. The present study involves (a) identification of a specific ailment condition, (b) selection of a polyherbal formulation, (c) deciding a suitable extraction procedure, (d) confirming the efficacy of the combination by in-vitro trials, and (e) fixing the recommended dose. The ailment segment selected is the arthritic condition. The selected herbal combination is Kunturushka, Vibhitaki, Guggulu, Haridra, Maricha, and Nirgundi. The herbs were selected as per classical Ayurvedic references and authenticated as per the API (Ayurvedic Pharmacopoeia of India). Extraction of each drug was done with hydroalcoholic menstrua in different ratios. After removal of the residual solvent, each extract was assessed in vitro for anti-inflammatory and anti-arthritic activities (by UV-Vis spectrophotometry with a positive control) and for COX enzyme inhibition (by UV-Vis spectrophotometry with a positive control). The extracts with good in-vitro activity were selected, and QC testing of each selected extract, including HPTLC, was performed to define the in-process QC specifications. The single dose for the mixture of selected extracts was decided based on the level of in-vitro activity and the available toxicology data. Quantification of major groups such as phenolics, flavonoids, alkaloids, and bitters was done with both standard spectrophotometric and gravimetric methods. A method for marker assay was developed and validated by HPTLC, and a well-resolved HPTLC fingerprint was developed for the single-dosage API (Active Pharmaceutical Ingredient, the mixture of extracts). Three batches were prepared to fix the in-process and API QC specifications.

Keywords: drug development, anti-inflammatory, quality standardisation, planar chromatography

Procedia PDF Downloads 101
2165 Methodology for the Determination of Triterpenic Compounds in Apple Extracts

Authors: Mindaugas Liaudanskas, Darius Kviklys, Kristina Zymonė, Raimondas Raudonis, Jonas Viškelis, Norbertas Uselis, Pranas Viškelis, Valdimaras Janulis

Abstract:

Apples are among the most commonly consumed fruits in the world. Based on data from 2014, approximately 84.63 million tons of apples are grown per annum. Apples are widely used in the food industry to produce various products and drinks (juice, wine, and cider); they are also consumed unprocessed. Apples in the human diet are an important source of different groups of biologically active compounds that can contribute to the prevention of various diseases, especially vitamins, organic acids, micro- and macro-elements, pectins, and phenolic, triterpenic, and other compounds. Among the biologically active compounds found in apples, triterpenic compounds, which are characterized by versatile biological activity, are among the most promising and most significant for human health. A specific analytical procedure, including sample preparation and High Performance Liquid Chromatography (HPLC) analysis, was developed, optimized, and validated for the detection of triterpenic compounds in samples of whole apples, apple peel, and apple flesh from the widespread apple cultivars 'Aldas', 'Auksis', 'Connel Red', 'Ligol', 'Lodel', and 'Rajka' grown under Lithuanian climatic conditions. The conditions for triterpenic compound extraction were optimized: the extraction solvent was 100% (v/v) acetone, and the extraction was performed in an ultrasound bath for 10 min. Isocratic elution (an eluent ratio of 88% solvent A to 12% solvent B) was used for rapid separation of the triterpenic compounds. The validation of the methodology was performed on the basis of the ICH recommendations. The following validation characteristics were evaluated: the selectivity of the method (specificity), precision, the detection and quantitation limits of the analytes, and linearity. The obtained parameter values confirm the suitability of the methodology for the analysis of triterpenic compounds.
Using the optimised and validated HPLC technique, four triterpenic compounds were separated and identified, and their specificity was confirmed. These compounds were corosolic acid, betulinic acid, oleanolic acid, and ursolic acid. Ursolic acid was the dominant compound in all the tested apple samples. The detected amount of betulinic acid was the lowest of all the identified triterpenic compounds. The greatest amounts of triterpenic compounds were detected in whole apple and apple peel samples of the 'Lodel' cultivar, and thus apples and apple extracts of this cultivar are potentially valuable for use in medical practice, for the prevention of various diseases, for adjunct therapy, for the isolation of individual compounds with a specific biological effect, and for the development and production of dietary supplements and functional food enriched in biologically active compounds. Acknowledgements. This work was supported by a grant from the Research Council of Lithuania, project No. MIP-17-8.

Keywords: apples, HPLC, triterpenic compounds, validation

Procedia PDF Downloads 173
2164 Thermodynamic Evaluation of Coupling APR-1400 with a Thermal Desalination Plant

Authors: M. Gomaa Abdoelatef, Robert M. Field, Lee, Yong-Kwan

Abstract:

Growing human populations have placed increased demands on water supplies and a heightened interest in desalination infrastructure. Key elements of the economics of desalination projects are thermal and electrical inputs. With growing concerns over the use of fossil fuels to (indirectly) supply these inputs, coupling of desalination with nuclear power production represents a significant opportunity. Individually, nuclear and desalination technologies have a long history and are relatively mature. For desalination, Reverse Osmosis (RO) has the lowest energy inputs. However, the economically driven output quality of the water produced using RO, which uses only electrical inputs, is lower than the output water quality from thermal desalination plants. Therefore, modern desalination projects consider that RO should be coupled with thermal desalination technologies (MSF, MED, or MED-TVC) with attendant steam inputs to permit blending to produce various qualities of water. A large nuclear facility is well positioned to dispatch large quantities of both electrical and thermal power. This paper considers the supply of thermal energy to a large desalination facility to examine heat balance impact on the nuclear steam cycle. The APR1400 nuclear plant is selected as prototypical from both a capacity and turbine cycle heat balance perspective to examine steam supply and the impact on electrical output. Extraction points and quantities of steam are considered parametrically along with various types of thermal desalination technologies to form the basis for further evaluations of economically optimal approaches to the interface of nuclear power production with desalination projects. 
In our study, the thermodynamic evaluation is executed with DE-TOP, the IAEA desalination program, which is capable of analyzing power generation systems coupled to desalination systems through various steam extraction positions, taking into consideration the isolation loop between the APR-1400 and the thermal desalination plant for safety reasons.
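The heat-balance impact of a steam extraction can be approximated by the shaft work the diverted steam would otherwise have produced in the turbine. A back-of-envelope sketch with assumed enthalpies, mass flow, and efficiency, not APR1400 heat-balance data:

```python
def extraction_power_penalty(m_dot, h_extraction, h_condenser, eta_turbine=0.85):
    """Electrical output lost (MW) when steam is diverted before full expansion.

    The diverted steam no longer expands from the extraction point down to
    condenser conditions, so the lost shaft work is roughly
    m_dot * (h_extraction - h_condenser) * eta_turbine.
    Enthalpies in kJ/kg, mass flow in kg/s; result in MW.
    """
    return m_dot * (h_extraction - h_condenser) * eta_turbine / 1000.0

# Illustrative (assumed) numbers: 200 kg/s extracted at h = 2750 kJ/kg,
# where expansion would otherwise have ended near h = 2250 kJ/kg
print(extraction_power_penalty(200, 2750, 2250))  # 85.0 (MW)
```

Tools such as DE-TOP perform this accounting rigorously, including reheat, feedwater heating, and the isolation loop; the sketch only shows the dominant first-order term.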

Keywords: APR-1400, desalination, DE-TOP, IAEA, MSF, MED, MED-TVC, RO

Procedia PDF Downloads 533
2163 Thorium Extraction with Cyanex272 Coated Magnetic Nanoparticles

Authors: Afshin Shahbazi, Hadi Shadi Naghadeh, Ahmad Khodadadi Darban

Abstract:

In the Magnetically Assisted Chemical Separation (MACS) process, tiny ferromagnetic particles coated with a solvent extractant are used to selectively separate radionuclides and hazardous metals from aqueous waste streams. The contaminant-loaded particles are then recovered from the waste solutions using a magnetic field. In the present study, magnetic particles coated with Cyanex272, or C272 (bis(2,4,4-trimethylpentyl) phosphinic acid), are evaluated for possible application in the extraction of thorium(IV) from nuclear waste streams. The uptake of Th(IV) from nitric acid solutions was investigated by batch studies. Adsorption isotherm and adsorption kinetic studies of Th(IV) onto the Cyanex272-coated nanoparticles were carried out in a batch system. The factors influencing Th(IV) adsorption, namely the initial pH value, contact time, adsorbent mass, and initial Th(IV) concentration, were investigated and described in detail. The adsorbent showed the best results for fast adsorption of Th(IV) from aqueous solution at an aqueous-phase acidity of 0.5 molar. In addition, more than 80% of the Th(IV) was removed within the first 2 hours, and the time required to reach adsorption equilibrium was only 140 minutes. The Langmuir and Freundlich adsorption models were used for the mathematical description of the adsorption equilibrium. The equilibrium data agreed very well with the Langmuir model, with a maximum adsorption capacity of 48 mg·g⁻¹. The adsorption kinetics data were tested using pseudo-first-order, pseudo-second-order, and intra-particle diffusion models. The kinetic studies showed that the adsorption followed a pseudo-second-order model, indicating that chemical adsorption was the rate-limiting step.
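A Langmuir fit of the kind reported above is commonly obtained from the linearized form Ce/qe = Ce/qmax + 1/(KL·qmax). A sketch on synthetic data consistent with the reported qmax of 48 mg·g⁻¹ (the KL value is an assumption for illustration, not from the study):

```python
import numpy as np

def fit_langmuir(ce, qe):
    """Fit the linearized Langmuir isotherm Ce/qe = Ce/qmax + 1/(KL*qmax).

    ce: equilibrium concentrations (mg/L); qe: equilibrium uptake (mg/g).
    Returns (qmax, KL)."""
    slope, intercept = np.polyfit(ce, ce / qe, 1)
    qmax = 1.0 / slope
    kl = slope / intercept
    return qmax, kl

# Synthetic equilibrium data from a Langmuir surface with qmax = 48 mg/g, KL = 0.2 L/mg
qmax_true, kl_true = 48.0, 0.2
ce = np.linspace(1, 50, 10)
qe = qmax_true * kl_true * ce / (1 + kl_true * ce)
qmax, kl = fit_langmuir(ce, qe)
print(round(qmax, 1), round(kl, 2))  # 48.0 0.2
```

The same slope/intercept approach, applied to measured Ce/qe pairs, yields the reported maximum adsorption capacity.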

Keywords: Thorium (IV) adsorption, MACS process, magnetic nanoparticles, Cyanex272

Procedia PDF Downloads 342
2162 An Approach to Solving Some Inverse Problems for Parabolic Equations

Authors: Bolatbek Rysbaiuly, Aliya S. Azhibekova

Abstract:

Problems concerning the interpretation of well testing results belong to the class of inverse problems of subsurface hydromechanics. The distinctive feature of such problems is that the additional information available depends on the capabilities of oilfield experiments. Another factor that should not be overlooked is the existence of errors in the test data. To determine reservoir properties, some inverse problems for parabolic equations were investigated. An approach to solving these inverse problems, based on the method of regularization, is proposed.
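Tikhonov regularization is the standard instance of the regularization method mentioned, stabilizing the solution against the data errors noted above. A minimal sketch on a generic ill-conditioned linear forward operator (a stand-in, not the authors' parabolic model):

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Tikhonov-regularized least squares:
    argmin ||Ax - b||^2 + alpha * ||x||^2, solved via the normal equations
    (A^T A + alpha * I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Generic ill-conditioned forward map (monomial basis) with noisy observations,
# standing in for a discretized parameter-to-measurement operator
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 20), 6, increasing=True)
x_true = np.array([1.0, -2.0, 0.5, 0.0, 1.0, -0.5])
b = A @ x_true + 1e-3 * rng.standard_normal(20)  # "test data" with errors
x_reg = tikhonov_solve(A, b, alpha=1e-6)
print(np.linalg.norm(A @ x_reg - b))  # small residual: the data are reproduced
```

The regularization parameter alpha trades data fit against solution stability; in iterative approaches it is typically chosen from the noise level (e.g. by the discrepancy principle).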

Keywords: iterative approach, inverse problem, parabolic equation, reservoir properties

Procedia PDF Downloads 429
2161 Exploring the Applications of Neural Networks in the Adaptive Learning Environment

Authors: Baladitya Swaika, Rahul Khatry

Abstract:

Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), which relies on item selection and ability estimation using statistical methods: maximum-information selection (or selection from the posterior) for items, and maximum-likelihood (ML) or maximum a posteriori (MAP) estimators for ability. This study aims at combining classical and Bayesian approaches to IRT to create a dataset which is then fed to a neural network that automates ability estimation, and at comparing it to traditional CAT models designed using IRT. The study uses Python as the base coding language, PyMC for statistical modelling of the IRT, and scikit-learn for the neural network implementations. On creating and comparing the models, it is found that the neural-network-based model performs 7-10% worse than the IRT model for score estimation. Although performing worse than the IRT model, the neural network model can be beneficially used in back-ends to reduce time complexity: the IRT model has to re-calculate the ability every time it receives a request, whereas the prediction from a neural network can be done in a single step by an existing trained regressor. This study also proposes a new kind of framework whereby the neural network model could incorporate feature sets beyond the normal IRT feature set and use a neural network's capacity for learning unknown functions to give rise to better CAT models. Categorical features such as test type could be learnt and incorporated into IRT functions with the help of techniques like logistic regression, and could be used to learn functions expressed as models that may not be trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments.
This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by incorporating newer and better datasets, which would eventually lead to higher-quality testing.
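The ML ability estimation that the IRT model re-computes per request reduces, under a 2PL model, to a one-dimensional Newton-Raphson iteration on the log-likelihood. A sketch with invented item parameters (not the study's calibrated items):

```python
import math

def ml_ability(responses, a, b, iters=20):
    """Maximum-likelihood ability estimate under the 2PL IRT model.

    responses: 0/1 answers; a, b: item discriminations and difficulties.
    Newton-Raphson on the score function sum(a_i * (u_i - p_i)), using the
    Fisher information sum(a_i^2 * p_i * (1 - p_i)) as the step scale.
    """
    theta = 0.0
    for _ in range(iters):
        grad, info = 0.0, 0.0
        for u, ai, bi in zip(responses, a, b):
            p = 1.0 / (1.0 + math.exp(-ai * (theta - bi)))
            grad += ai * (u - p)
            info += ai * ai * p * (1 - p)
        theta += grad / info
    return theta

# 3 correct, 2 wrong on items of increasing difficulty
theta = ml_ability([1, 1, 1, 0, 0],
                   a=[1.0, 1.2, 0.8, 1.1, 0.9],
                   b=[-1.0, -0.5, 0.0, 0.5, 1.0])
print(round(theta, 2))
```

A trained neural regressor would replace this whole loop with a single forward pass over the response pattern, which is the latency advantage the abstract describes.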

Keywords: computer adaptive tests, item response theory, machine learning, neural networks

Procedia PDF Downloads 176
2160 Edible and Ecofriendly Packaging – A Trendsetter of the Modern Era – Standardization and Properties of Films and Cutleries from Food Starch

Authors: P. Raajeswari, S. M. Devatha, R. Pragatheeswari

Abstract:

Edible packaging is a new trendsetter in the era of modern packaging. Researchers and food scientists recognise edible packaging as a useful alternative or addition to conventional packaging for reducing waste and creating novel applications that improve product stability. Starch was extracted from different sources in which it is abundant, such as potato, tapioca, rice, wheat, and corn. Starch-based edible films and cutleries were developed as an alternative to conventional packages, providing a nutritional benefit when consumed along with the food. The starch-based edible films were developed at laboratory scale by extracting starch from various raw ingredients. The films were prepared with plasticiser at concentrations of 1.5 mL and 2 mL. Glycerol was used as the plasticiser in the filmogenic solution to increase the flexibility and plasticity of the film; it reduces intra- and intermolecular forces in starch and increases the mobility of starch-based edible films. The developed films were tested for functional properties such as thickness, tensile strength, elongation at break, moisture permeability, moisture content, and puncture strength. Cutleries such as spoons and cups were prepared by making a dough and rolling the starch along with water. The overall results showed that the starch-based edible films absorbed less moisture and also exhibited low moisture permeability with high tensile strength. Food colorants extracted from red onion peel, pumpkin, and red amaranth add nutritive value, colour, and attraction when incorporated into the edible cutleries, without influencing the functional properties. The addition of a low quantity of glycerol in the edible films and of colour extracted from onion peel, pumpkin, and red amaranth enhances biodegradability and provides a good quantity of nutrients when consumed.
Therefore, due to its multiple advantages, food starch can serve as the best response for eco-friendly industrial products aimed at replacing single-use plastics at low cost.

Keywords: edible films, edible cutleries, plasticizer, glycerol, starch, functional property

Procedia PDF Downloads 186
2159 Characteristics and Feature Analysis of PCF Labeling among Construction Materials

Authors: Sung-mo Seo, Chang-u Chae

Abstract:

The Product Carbon Footprint (PCF) labeling scheme has been run for more than four years by the Ministry of Environment, and a number of products have been labeled by KEITI, declaring their carbon emissions over the life cycle stages. There are several categories for certifying products according to their usage characteristics; building products, however, are applied to a building as combined components. In this paper, the current status of PCF labeling is compared with the LCI DB in terms of data composition. Based on this comparative analysis, we suggest directions for the development of carbon labeling.

Keywords: carbon labeling, LCI DB, building materials, life cycle assessment

Procedia PDF Downloads 421
2158 Implementation of a Serializer to Represent PHP Objects in the Extensible Markup Language

Authors: Lidia N. Hernández-Piña, Carlos R. Jaimez-González

Abstract:

Interoperability in distributed systems is an important feature that refers to the communication of two applications written in different programming languages. This paper presents a serializer and a de-serializer of PHP objects to and from XML, which is an independent library written in the PHP programming language. The XML generated by this serializer is independent of the programming language, and can be used by other existing Web Objects in XML (WOX) serializers and de-serializers, which allow interoperability with other object-oriented programming languages.
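As a rough illustration of the idea (sketched here in Python rather than PHP, since the WOX library's code is not shown in the abstract), an object's public attributes can be walked reflectively and emitted as language-independent XML; the `<object>`/`<field>` element and attribute names below are hypothetical, not the actual WOX schema:

```python
import xml.etree.ElementTree as ET

def serialize(obj):
    """Walk a plain object's attributes and emit an XML element.

    Primitives become <field> children carrying the attribute name and a
    type tag; nested objects recurse into child <object> elements.
    """
    root = ET.Element("object", type=type(obj).__name__)
    for name, value in vars(obj).items():
        if hasattr(value, "__dict__"):            # nested object: recurse
            child = serialize(value)
            child.set("name", name)
            root.append(child)
        else:                                      # primitive: emit a field
            field = ET.SubElement(root, "field", name=name,
                                  type=type(value).__name__)
            field.text = str(value)
    return root

class Address:
    def __init__(self):
        self.city = "Mexico City"

class Person:
    def __init__(self):
        self.name = "Ada"
        self.age = 36
        self.address = Address()

xml_text = ET.tostring(serialize(Person()), encoding="unicode")
```

Because the type information travels with each field, a de-serializer in any object-oriented language can walk the same tree in reverse, which is what makes the XML interoperable.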

Keywords: interoperability, PHP object serialization, PHP to XML, web objects in XML, WOX

Procedia PDF Downloads 237
2157 Investigating Software Engineering Challenges in Game Development

Authors: Fawad Zaidi

Abstract:

This paper discusses a variety of challenges and solutions involved in creating computer games and the issues faced by software engineers working in this field. The review further investigates the articles' coverage of project scope and the problem of feature creep, which appears to be inherent in game development. The paper tries to answer the following question: is this a problem caused by a shortage, or by bad software engineering practices, or is it outside the control of the software engineering component of the game production process?

Keywords: software engineering, computer games, software applications, development

Procedia PDF Downloads 476
2156 Towards Dynamic Estimation of Residential Building Energy Consumption in Germany: Leveraging Machine Learning and Public Data from England and Wales

Authors: Philipp Sommer, Amgad Agoub

Abstract:

The construction sector significantly impacts global CO₂ emissions, particularly through the energy usage of residential buildings. To address this, various governments, including Germany's, are focusing on reducing emissions via sustainable refurbishment initiatives. This study examines the application of machine learning (ML) to estimate energy demands dynamically in residential buildings and enhance the potential for large-scale sustainable refurbishment. A major challenge in Germany is the lack of extensive publicly labeled datasets for energy performance, as energy performance certificates, which provide critical data on building-specific energy requirements and consumption, are not available for all buildings or require on-site inspections. Conversely, England and other countries in the European Union (EU) have rich public datasets, providing a viable alternative for analysis. This research adapts insights from these English datasets to the German context by developing a comprehensive data schema and calibration dataset capable of predicting building energy demand effectively. The study proposes a minimal feature set, determined through feature importance analysis, to optimize the ML model. Findings indicate that ML significantly improves the scalability and accuracy of energy demand forecasts, supporting more effective emissions reduction strategies in the construction industry. Integrating energy performance certificates into municipal heat planning in Germany highlights the transformative impact of data-driven approaches on environmental sustainability. The goal is to identify and utilize key features from open data sources that significantly influence energy demand, creating an efficient forecasting model. Using Extreme Gradient Boosting (XGB) and data from energy performance certificates, effective features such as building type, year of construction, living space, insulation level, and building materials were incorporated. 
These were supplemented by data derived from descriptions of roofs, walls, windows, and floors, integrated into three datasets. The emphasis was on features accessible via remote sensing, which, along with other correlated characteristics, greatly improved the model's accuracy. The model was further validated using SHapley Additive exPlanations (SHAP) values and aggregated feature importance, which quantified the effects of individual features on the predictions. The refined model using remote sensing data showed a coefficient of determination (R²) of 0.64 and a mean absolute error (MAE) of 4.12, indicating predictions based on efficiency class 1-100 (G-A) may deviate by 4.12 points. This R² increased to 0.84 with the inclusion of more samples, with wall type emerging as the most predictive feature. After optimizing and incorporating related features like estimated primary energy consumption, the R² score for the training and test set reached 0.94, demonstrating good generalization. The study concludes that ML models significantly improve prediction accuracy over traditional methods, illustrating the potential of ML in enhancing energy efficiency analysis and planning. This supports better decision-making for energy optimization and highlights the benefits of developing and refining data schemas using open data to bolster sustainability in the building sector. The study underscores the importance of supporting open data initiatives to collect similar features and support the creation of comparable models in Germany, enhancing the outlook for environmental sustainability.
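The R² and MAE figures reported above can be reproduced mechanically; the sketch below (with hypothetical efficiency-class scores, not the study's data) shows how both metrics are computed for predictions on the 1-100 (G-A) scale:

```python
def mean_absolute_error(y_true, y_pred):
    """Average absolute deviation in efficiency-class points."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical efficiency scores (1-100, G-A) for six buildings
y_true = [35, 52, 68, 74, 81, 90]
y_pred = [38, 49, 70, 71, 85, 88]

mae = mean_absolute_error(y_true, y_pred)
r2 = r_squared(y_true, y_pred)
```

An MAE of 4.12 on this scale means the predicted class deviates, on average, by about four points from the certificate value, which is the interpretation the abstract gives.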

Keywords: machine learning, remote sensing, residential building, energy performance certificates, data-driven, heat planning

Procedia PDF Downloads 59
2155 Wet Processing of Algae for Protein and Carbohydrate Recovery as Co-Product of Algal Oil

Authors: Sahil Kumar, Rajaram Ghadge, Ramesh Bhujade

Abstract:

Historically, lipid extraction from dried algal biomass remained a focus area of algal research. It has been realized over the past few years that the lipid-centric approach and conversion technologies that require dry algal biomass face several challenges. Algal culture in cultivation systems contains more than 99% water, with algal concentrations of just a few hundred milligrams per liter (< 0.05 wt%), which makes harvesting and drying energy intensive. Drying the algal biomass followed by extraction also entails the loss of water and nutrients. In view of these challenges, focus has shifted toward developing processes that enable oil production from wet algal biomass without drying. Hydrothermal liquefaction (HTL), an emerging technology, is a thermo-chemical conversion process that converts wet biomass to oil and gas using water as a solvent at high temperature and high pressure. HTL processes wet algal slurry containing more than 80% water and thereby significantly reduces the adverse cost impact of drying the algal biomass. HTL is also inherently feedstock agnostic: it can convert carbohydrates and proteins to fuels as well, and it recovers water and nutrients. It is most effective with low-lipid (10-30%) algal biomass, and the bio-crude yield is two to four times higher than the lipid content of the feedstock. In the early 2010s, research remained focused on increasing the oil yield by optimizing the process conditions of HTL. However, various techno-economic studies showed that simply converting algal biomass to oil alone does not make economic sense, particularly in view of low crude oil prices. Making the best use of every component of algae is key to the economic viability of the algae-to-oil process. On investigation of HTL reactions at the molecular level, it has been observed that sequential HTL has the potential to recover value-added products along with biocrude and improve the overall economics of the process. This potential makes sequential HTL a most promising technology for converting wet waste to wealth. In this presentation, we share our experience of the techno-economic and engineering aspects of sequential HTL for the conversion of algal biomass to algal bio-oil and co-products.

Keywords: algae, biomass, lipid, protein

Procedia PDF Downloads 216
2154 Fermented Fruit and Vegetable Discard as a Source of Feeding Ingredients and Functional Additives

Authors: Jone Ibarruri, Mikel Manso, Marta Cebrián

Abstract:

A large amount of food is lost or discarded in the world every year. In addition, in recent decades an increasing demand for new, alternative, and sustainable sources of protein and other valuable compounds has been observed in the food and feed sectors; the use of food by-products as nutrient sources for these purposes is therefore very attractive from an environmental and economic point of view. However, the direct use of discarded fruit and vegetables, which in general present a low protein content, is not attractive as a feed ingredient, except as a source of fiber for ruminants. In aquaculture especially, several alternatives to fish meal and other vegetable protein sources have been extensively explored due to the scarcity of fish stocks and the unsustainability of fishing for these purposes. Fish mortality is also of great concern in this sector, as it greatly reduces economic feasibility, so the development of new functional and natural ingredients that could reduce the need for vaccination is also of great interest. In this work, several fermentation tests were performed at lab scale using a selected mixture of fruit and vegetable discards from a wholesale market located in the Basque Country, with the aims of increasing their protein content and producing bioactive extracts that could be used as additives in aquaculture. Fruit and vegetable mixtures (60/40, w/w) were centrifuged to reduce humidity and crushed to a 2-5 mm particle size. Samples were inoculated with a selected Rhizopus oryzae strain and fermented for 7 days under controlled conditions (humidity between 65 and 75%, 28 ºC) in Petri plates (120 mm), in triplicate. The results indicated that the final fermented product presented a twofold protein content (from 13 to 28% d.w.). The fermented product was further processed to determine its possible functionality as a feed additive. Extraction tests were carried out to obtain an ethanolic extract (60:40 ethanol:water, v/v) and a remaining biomass that could also have applications in the food or feed sectors. The extract presented a polyphenol content of about 27 mg GAE/g d.w. with an antioxidant activity of 8.4 mg TEAC/g d.w. The remaining biomass is mainly composed of fiber (51%), protein (24%), and fat (10%). The extracts also presented antibacterial activity according to the results of agar diffusion and minimum inhibitory concentration (MIC) tests against several food and fish pathogen strains. In vitro digestibility was also assessed to obtain preliminary information about the expected effect of the extraction procedure on the digestibility of the fermented product. First results indicated that the biomass remaining after extraction does not seem to improve digestibility compared with the initial fermented product. These preliminary results show that fermented fruit and vegetables can be a useful source of functional ingredients for aquaculture applications and a substitute for other protein sources in the feed sector. Further validation will also be carried out through in vivo tests with trout and bass.

Keywords: fungal solid state fermentation, protein increase, functional extracts, feed ingredients

Procedia PDF Downloads 64
2153 Optimization of Shale Gas Production by Advanced Hydraulic Fracturing

Authors: Fazl Ullah, Rahmat Ullah

Abstract:

This paper presents a comprehensive study of the optimization of gas production in shale gas reservoirs through hydraulic fracturing. Shale gas has emerged as an important unconventional energy resource, necessitating innovative techniques to enhance its extraction. The key objective of this study is to examine the influence of fracture parameters on reservoir productivity and to formulate strategies for production optimization. A sophisticated model integrating gas flow dynamics and effective stress considerations is developed for hydraulic fracturing in multi-stage shale gas reservoirs. This model encompasses distinct zones: a single-porosity medium region, a dual-porosity medium region, and a hydraulic fracture region. The apparent permeability of the matrix and fracture system is modeled using principles such as effective stress mechanics, porous elastic medium theory, fractal dimension evolution, and fluid transport mechanisms. The developed model is then validated using field data from the Barnett and Marcellus formations, enhancing its reliability and accuracy. By solving the governing partial differential equations with COMSOL software, the research yields valuable insights into optimal fracture parameters. The findings reveal the influence of fracture length, conductivity, and width on gas production. For reservoirs with higher permeability, extending hydraulic fracture lengths proves beneficial, while complex fracture geometries offer potential for low-permeability reservoirs. Overall, this study contributes to a deeper understanding of hydraulic fracturing dynamics in shale gas reservoirs and provides essential guidance for optimizing gas production. The research findings are instrumental for energy industry professionals, researchers, and policymakers alike, shaping the future of sustainable energy extraction from unconventional resources.

Keywords: fluid-solid coupling, apparent permeability, shale gas reservoir, fracture property, numerical simulation

Procedia PDF Downloads 73
2152 EQMamba - Method Suggestion for Earthquake Detection and Phase Picking

Authors: Noga Bregman

Abstract:

Accurate and efficient earthquake detection and phase picking are crucial for seismic hazard assessment and emergency response. This study introduces EQMamba, a deep-learning method that combines the strengths of the Earthquake Transformer and the Mamba model for simultaneous earthquake detection and phase picking. EQMamba leverages the computational efficiency of Mamba layers to process longer seismic sequences while maintaining a manageable model size. The proposed architecture integrates convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM) networks, and Mamba blocks. The model employs an encoder composed of convolutional layers and max pooling operations, followed by residual CNN blocks for feature extraction. Mamba blocks are applied to the outputs of BiLSTM blocks, efficiently capturing long-range dependencies in seismic data. Separate decoders are used for earthquake detection, P-wave picking, and S-wave picking. We trained and evaluated EQMamba using a subset of the STEAD dataset, a comprehensive collection of labeled seismic waveforms. The model was trained using a weighted combination of binary cross-entropy loss functions for each task, with the Adam optimizer and a scheduled learning rate. Data augmentation techniques were employed to enhance the model's robustness. Performance comparisons were conducted between EQMamba and the EQTransformer over 20 epochs on this modest-sized STEAD subset. Results demonstrate that EQMamba achieves superior performance, with higher F1 scores and faster convergence compared to EQTransformer. EQMamba reached F1 scores of 0.8 by epoch 5 and maintained higher scores throughout training. The model also exhibited more stable validation performance, indicating good generalization capabilities. While both models showed lower accuracy in phase-picking tasks compared to detection, EQMamba's overall performance suggests significant potential for improving seismic data analysis. 
The rapid convergence and superior F1 scores of EQMamba, even on a modest-sized dataset, indicate promising scalability for larger datasets. This study contributes to the field of earthquake engineering by presenting a computationally efficient and accurate method for simultaneous earthquake detection and phase picking. Future work will focus on incorporating Mamba layers into the P and S pickers and further optimizing the architecture for seismic data specifics. The EQMamba method holds the potential for enhancing real-time earthquake monitoring systems and improving our understanding of seismic events.
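The weighted multi-task training objective described above can be sketched as follows; the task names and weights are illustrative assumptions, not values reported for EQMamba:

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Mean BCE over one task's per-sample probabilities."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)        # clip to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

def multitask_loss(targets, preds, weights):
    """Weighted sum of the detection, P-pick, and S-pick BCE terms."""
    return sum(w * binary_cross_entropy(targets[k], preds[k])
               for k, w in weights.items())

# Hypothetical per-sample labels and decoder outputs for three windows
targets = {"detect": [1, 1, 0], "p_pick": [1, 0, 0], "s_pick": [0, 1, 0]}
preds = {"detect": [0.9, 0.8, 0.1], "p_pick": [0.7, 0.2, 0.1],
         "s_pick": [0.3, 0.6, 0.2]}
weights = {"detect": 0.5, "p_pick": 0.25, "s_pick": 0.25}

loss = multitask_loss(targets, preds, weights)
```

In the actual model each decoder head produces one such probability sequence, and the optimizer (Adam, with a scheduled learning rate) minimizes the combined scalar.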

Keywords: earthquake, detection, phase picking, s waves, p waves, transformer, deep learning, seismic waves

Procedia PDF Downloads 58
2151 Denitrification of Diesel Hydrocarbons Using a Triethanolamine-Glycerol Deep Eutectic Solvent

Authors: Hocine Sifaoui

Abstract:

The manufacture and marketing of gasoline and diesel free of aromatic compounds, particularly nitrogen and sulfur heteroaromatics, is a main objective of researchers and the petrochemical industry in order to meet environmental protection requirements. This work is part of that line of research: a triethanolamine/glycerol (TEoA:Gly) deep eutectic solvent (DES) was used to remove two model nitrogen compounds, pyridine and quinoline, from n-decane. Experimentally, two liquid-liquid equilibrium systems {n-decane + pyridine/quinoline + DES} were measured at 298.15 K and 1.01 bar using the equilibrium cell method. This study aims to evaluate the potential of this DES as a sustainable alternative to organic solvents for the denitrogenation of petroleum feedstocks by liquid-liquid extraction. The DES was prepared by the heating method. Accurately weighed triethanolamine, as hydrogen bond acceptor (HBA), and glycerol, as hydrogen bond donor (HBD), were placed in a round-bottomed flask. An Ohaus Adventurer balance with a precision of ±0.0001 g was used for weighing the HBA and HBD. The mixture was then stirred and heated at 343.15 K under atmospheric pressure using a rotary evaporator. The preparation was complete when a clear and homogeneous liquid was obtained. To evaluate the equilibrium behaviour of the pseudo-ternary systems {n-decane + pyridine or quinoline + DES}, mixtures were prepared with the nitrogen compound (pyridine or quinoline) at varying mass percentages in n-decane, along with a fixed 2:1 ratio between the n-decane and DES phases. Defined amounts of these three components were precisely weighed to achieve mixtures within the biphasic region, followed by vigorous stirring at 400 rpm using an Avantor VWR KS 4000 shaker for 4 hours at 298.15 K and overnight settling to attain thermodynamic equilibrium, evidenced by phase separation. Aliquots from the upper phase, rich in n-decane, and the lower phase, rich in DES, were carefully weighed. The mass of each sample was precisely recorded for quantification by gas chromatography. The DES content was calculated by mass balance after analysing the composition of the other species (n-decane and pyridine or quinoline). All samples were diluted with pure ethanol before GC analysis. Distribution ratios and selectivities toward the pyridine and quinoline compounds were also measured at the same phase molar ratios. The consistency and reliability of the experimental data were verified and validated by the Othmer-Tobias and Bachman correlations. The experimental results show that the highest value of the partition coefficient (D = 7.08) was obtained for pyridine extraction, and the highest selectivity (S = 801.4) was obtained for quinoline extraction. The experimental liquid-liquid equilibrium data of these ternary systems were correlated using the Non-Random Two-Liquid (NRTL) and COnductor-like Screening MOdel for Real Solvents (COSMO-RS) models. Good agreement with the experimental data was observed with both models for the two systems. The performance of this DES was compared with those of ionic liquids and organic solvents reported in the literature.
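The distribution ratio and selectivity follow directly from the measured phase compositions; a minimal sketch, using hypothetical tie-line compositions rather than the measured data, is:

```python
def distribution_ratio(x_solute_extract, x_solute_raffinate):
    """D: solute fraction in the DES-rich phase over the n-decane-rich phase."""
    return x_solute_extract / x_solute_raffinate

def selectivity(d_solute, x_diluent_extract, x_diluent_raffinate):
    """S: solute distribution ratio divided by the diluent's ratio."""
    return d_solute / (x_diluent_extract / x_diluent_raffinate)

# Hypothetical tie-line mole fractions for one pyridine experiment:
# solute split between the DES-rich and decane-rich phases, and the
# small amount of n-decane co-extracted into the DES phase.
d_pyr = distribution_ratio(0.085, 0.012)
s_pyr = selectivity(d_pyr, 0.004, 0.92)
```

A large S means the DES pulls the nitrogen compound across while leaving the n-decane behind, which is the behaviour the measured S = 801.4 for quinoline reflects.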

Keywords: pyridine, quinoline, n-decane, deep eutectic solvent

Procedia PDF Downloads 4
2150 Occipital Squama Convexity and Neurocranial Covariation in Extant Homo sapiens

Authors: Miranda E. Karban

Abstract:

A distinctive pattern of occipital squama convexity, known as the occipital bun or chignon, has traditionally been considered a derived Neandertal trait. However, some early modern and extant Homo sapiens share similar occipital bone morphology, showing pronounced internal and external occipital squama curvature and paralambdoidal flattening. It has been posited that these morphological patterns are homologous in the two groups, but this claim remains disputed. Many developmental hypotheses have been proposed, including assertions that the chignon represents a developmental response to a long and narrow cranial vault, a narrow or flexed basicranium, or a prognathic face. These claims, however, remain to be metrically quantified in a large subadult sample, and little is known about the feature’s developmental, functional, or evolutionary significance. This study assesses patterns of chignon development and covariation in a comparative sample of extant human growth study cephalograms. Cephalograms from a total of 549 European-derived North American subjects (286 male, 263 female) were scored on a 5-stage ranking system of chignon prominence. Occipital squama shape was found to exist along a continuum, with 34 subjects (6.19%) possessing defined chignons, and 54 subjects (9.84%) possessing very little occipital squama convexity. From this larger sample, those subjects represented by a complete radiographic series were selected for metric analysis. Measurements were collected from lateral and posteroanterior (PA) cephalograms of 26 subjects (16 male, 10 female), each represented at 3 longitudinal age groups. Age group 1 (range: 3.0-6.0 years) includes subjects during a period of rapid brain growth. Age group 2 (range: 8.0-9.5 years) includes subjects during a stage in which brain growth has largely ceased, but cranial and facial development continues. Age group 3 (range: 15.9-20.4 years) includes subjects at their adult stage. 
A total of 16 landmarks and 153 sliding semi-landmarks were digitized at each age point, and geometric morphometric analyses, including relative warps analysis and two-block partial least squares analysis, were conducted to study covariation patterns between midsagittal occipital bone shape and other aspects of craniofacial morphology. A convex occipital squama was found to covary significantly with a low, elongated neurocranial vault, and this pattern was found to exist from the youngest age group. Other tested patterns of covariation, including cranial and basicranial breadth, basicranial angle, midcoronal cranial vault shape, and facial prognathism, were not found to be significant at any age group. These results suggest that the chignon, at least in this sample, should not be considered an independent feature, but rather the result of developmental interactions relating to neurocranial elongation. While more work must be done to quantify chignon morphology in fossil subadults, this study finds no evidence to disprove the developmental homology of the feature in modern humans and Neandertals.

Keywords: chignon, craniofacial covariation, human cranial development, longitudinal growth study, occipital bun

Procedia PDF Downloads 202
2149 Detection of Phoneme [S] Mispronunciation for Sigmatism Diagnosis in Adults

Authors: Michal Krecichwost, Zauzanna Miodonska, Pawel Badura

Abstract:

The diagnosis of sigmatism is mostly based on observation of the articulatory organs. It is, however, not always possible to precisely observe the vocal apparatus, in particular in the oral cavity of the patient. Speech processing can help objectify the therapy and simplify the verification of its progress. In the described study, a methodology for classifying incorrectly pronounced realizations of the phoneme [s] is proposed. The recordings come from adults. They were registered with a speech recorder at a sampling rate of 44.1 kHz and a resolution of 16 bits. A database of pathological and normative speech was collected for the study, including reference assessments provided by speech therapy experts. Ten adult subjects were asked to simulate a certain type of sigmatism under the supervision of a speech therapy expert. In the recordings, the analyzed phone [s] was surrounded by vowels, viz.: ASA, ESE, ISI, OSO, USU, YSY. Thirteen MFCCs (mel-frequency cepstral coefficients) and the RMS (root mean square) value are calculated within each frame belonging to the analyzed phoneme. Additionally, 3 fricative formants along with their corresponding amplitudes are determined for the entire segment. In order to aggregate the information within the segment, the average value of each MFCC coefficient is calculated, while all features of other types are aggregated by means of their 75th percentile. This method of feature aggregation reduces the size of the feature vector used in classification. A binary SVM (support vector machine) classifier is employed at the phoneme recognition stage: the first class consists of pathological phones, the other of normative ones. The proposed feature vector yields classification sensitivity and specificity above the 90% level for individual logatomes. Employing fricative formant-based information improves the MFCC-only classification results by an average of 5 percentage points. The study shows that the use of parameters specific to the selected phones improves the efficiency of pathology detection compared with traditional methods of speech signal parameterization.
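The segment-level aggregation can be sketched as follows; for brevity the example uses 2 MFCCs per frame instead of the 13 used in the study, and all values are illustrative:

```python
def percentile_75(values):
    """75th percentile via linear interpolation between sorted samples."""
    s = sorted(values)
    idx = 0.75 * (len(s) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (idx - lo) * (s[hi] - s[lo])

def aggregate_segment(mfcc_frames, rms_frames):
    """Collapse per-frame features into one fixed-length vector per phone.

    MFCCs are averaged per coefficient across frames; RMS (and any other
    non-MFCC per-frame feature) is reduced to its 75th percentile.
    Segment-level features such as fricative formants would simply be
    appended to the resulting vector.
    """
    n = len(mfcc_frames)
    n_coef = len(mfcc_frames[0])
    mfcc_means = [sum(frame[i] for frame in mfcc_frames) / n
                  for i in range(n_coef)]
    return mfcc_means + [percentile_75(rms_frames)]

# Two frames of 2 MFCCs each, plus four per-frame RMS values
vec = aggregate_segment([[1.0, 2.0], [3.0, 4.0]], [0.1, 0.2, 0.3, 0.4])
```

The point of the aggregation is that every phone, regardless of its duration in frames, maps to a vector of the same length, which is what the SVM requires.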

Keywords: computer-aided pronunciation evaluation, sibilants, sigmatism diagnosis, speech processing

Procedia PDF Downloads 284
2148 Coronin 1C and miR-128A as Potential Diagnostic Biomarkers for Glioblastoma Multiforme

Authors: Denis Mustafov, Emmanouil Karteris, Maria Braoudaki

Abstract:

Glioblastoma multiforme (GBM) is a heterogeneous primary brain tumour that kills most affected patients. To the authors' best knowledge, despite all research efforts, there is no early diagnostic biomarker for GBM. MicroRNAs (miRNAs) are short non-coding RNA molecules that are deregulated in many cancers. The aim of this research was to identify miRNAs with diagnostic impact and potentially promising therapeutic targets for glioblastoma multiforme. In silico analysis was performed to identify deregulated miRNAs with diagnostic relevance for glioblastoma. The expression profiles of the chosen miRNAs were then validated in vitro in the human glioblastoma cell lines A172 and U-87MG. Briefly, RNA extraction was carried out using the Trizol method, whilst miRNA extraction was performed using the mirVana miRNA isolation kit. Quantitative real-time polymerase chain reaction was performed to verify their expression. The presence of five target proteins within the A172 cell line was evaluated by Western blotting, and the expression of the CORO1C protein in 32 GBM cases was examined via immunohistochemistry. The miRNAs identified in silico included miR-21-5p, miR-34a, and miR-128a. These miRNAs were shown to target deregulated GBM genes, such as CDK6, E2F3, BMI1, JAG1, and CORO1C. miR-34a and miR-128a showed low expression profiles in comparison to the control RNU44 in both GBM cell lines, suggesting tumour suppressor properties. In contrast, miR-21-5p demonstrated greater expression, indicating that it could potentially function as an oncomiR. Western blotting revealed expression of all five proteins within the A172 cell line. In silico analysis also suggested that CORO1C is a target of miR-128a and miR-34a. Immunohistochemistry demonstrated that 75% of the GBM cases showed moderate to high expression of the CORO1C protein. A greater understanding of the deregulated expression of miR-128a and the upregulation of CORO1C in GBM could potentially lead to the identification of a promising diagnostic biomarker signature for glioblastomas.

Keywords: non-coding RNAs, gene expression, brain tumours, immunohistochemistry

Procedia PDF Downloads 91
2147 Phytoremediation of Arsenic-Contaminated Soil and Recovery of Valuable Arsenic Products

Authors: Valentine C. Eze, Adam P. Harvey

Abstract:

Contamination of groundwater and soil by heavy metals and metalloids through anthropogenic activities and natural occurrence poses serious environmental challenges globally. A possible solution to this problem is phytoremediation of the contaminants using hyper-accumulating plants. Conventional phytoremediation treats the contaminated hyper-accumulator biomass as a waste stream, which adds no value to the heavy metal(loid) decontamination process. This study investigates strategies for the remediation of soil contaminated with arsenic and the extractive chemical routes for recovery of arsenic and phosphorus from the hyper-accumulator biomass. Pteris cretica fern species were investigated for their uptake of arsenic from soil containing 200 ± 3 ppm of arsenic. The Pteris cretica ferns were shown to be capable of hyper-accumulation of arsenic, with maximum accumulations of about 4427 ± 79 mg to 4875 ± 96 mg of As per kg of the dry ferns. The arsenic in the Pteris cretica fronds was extracted into various solvents, with extraction efficiencies of 94.3 ± 2.1% for ethanol-water (1:1 v/v), 81.5 ± 3.2% for 1:1 (v/v) methanol-water, and 70.8 ± 2.9% for water alone. The recovery efficiency of arsenic from the molybdic acid complex process was 90.8 ± 5.3%; phosphorus was also recovered from this process at 95.1 ± 4.6% efficiency. Quantitative precipitation of Mg₃(AsO₄)₂ and Mg₃(PO₄)₂ occurred in the treatment of the aqueous solutions of arsenic and phosphorus after stripping at pH 8-10. The amounts of Mg₃(AsO₄)₂ and Mg₃(PO₄)₂ obtained were 96 ± 7.2% for arsenic and 94 ± 3.4% for phosphorus. The arsenic nanoparticles produced from the Mg₃(AsO₄)₂ recovered from the biomass have an average particle diameter of 45.5 ± 11.3 nm. A two-stage reduction process, with a first-step pre-reduction of As(V) to As(III) with L-cysteine followed by NaBH₄ reduction of the As(III) to As(0), was required to produce arsenic nanoparticles from the Mg₃(AsO₄)₂. The arsenic nanoparticles obtained are potentially valuable for medical applications, while the Mg₃(AsO₄)₂ could be used as an insecticide. The phosphorus content of the Pteris cretica biomass was recovered as a phosphomolybdic acid complex and converted to Mg₃(PO₄)₂, which could be useful in fertilizer production. Recovery of these valuable products from phytoremediation biomass would incentivize commercial industries' participation in the remediation of contaminated lands.
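Treating the reported mean efficiencies as independent sequential steps (an assumption; the abstract does not state an overall yield), the fraction of frond arsenic recovered as precipitate can be estimated as their product:

```python
def overall_yield(step_efficiencies):
    """Overall recovery as the product of sequential step efficiencies."""
    total = 1.0
    for eff in step_efficiencies:
        total *= eff
    return total

# Reported mean arsenic efficiencies: ethanol-water extraction (94.3%),
# molybdic acid complex recovery (90.8%), Mg3(AsO4)2 precipitation (96%)
as_recovery = overall_yield([0.943, 0.908, 0.96])
```

Under this assumption, roughly 82% of the arsenic accumulated in the fronds would end up in the precipitated product.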

Keywords: phytoremediation, Pteris cretica, hyper-accumulator, solvent extraction, molybdic acid process, arsenic nanoparticles

Procedia PDF Downloads 318
2146 Design of an Automated Deep Learning Recurrent Neural Networks System Integrated with IoT for Anomaly Detection in Residential Electric Vehicle Charging in Smart Cities

Authors: Wanchalerm Patanacharoenwong, Panaya Sudta, Prachya Bumrungkun

Abstract:

The paper focuses on the development of a system that combines Internet of Things (IoT) technologies and deep learning algorithms for anomaly detection in residential Electric Vehicle (EV) charging in smart cities. With the increasing number of EVs, ensuring efficient and reliable charging systems has become crucial. The aim of this research is to develop an integrated IoT and deep learning system for detecting anomalies in residential EV charging and enhancing EV load profiling and event detection in smart cities. The approach uses IoT devices equipped with infrared cameras to collect thermal images and household EV charging profiles from the database of the Thai utility, subsequently transmitting this data to a cloud database for comprehensive analysis. The methodology includes advanced deep learning techniques such as Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) algorithms, together with feature-based Gaussian mixture models for EV load profiling and event detection. The research findings demonstrate the effectiveness of the developed system in detecting anomalies and critical profiles in EV charging behavior. The system provides timely alarms to users regarding potential issues and categorizes the severity of detected problems based on a health index for each charging device. It also outperforms existing models in event detection accuracy. This research contributes to the field by showcasing the potential of integrating IoT and deep learning techniques in managing residential EV charging in smart cities. The system ensures operational safety and efficiency while also promoting sustainable energy management.
The collected data is analyzed using RNN, LSTM, and feature-based Gaussian mixture models, covering both EV load profiling and event detection. This comprehensive method aids in identifying unique power consumption patterns among EV owners and outperforms existing models in event detection accuracy. In summary, the research concludes that integrating IoT and deep learning techniques can effectively detect anomalies in residential EV charging and enhance EV load profiling and event detection accuracy. The developed system ensures operational safety and efficiency, contributing to sustainable energy management in smart cities.
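As an illustration of the load-profiling component, the sketch below fits a feature-based Gaussian mixture model to synthetic charging-session features and flags low-likelihood sessions as anomalies. This is a minimal sketch, not the authors' implementation: the real feature set and the Thai utility data are not available here, so the features (mean power, session duration) and threshold are assumptions.

```python
# Minimal sketch of feature-based GMM anomaly flagging for EV charging
# sessions, using synthetic data in place of the utility dataset.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical per-session features: [mean power (kW), duration (h)]
normal_sessions = rng.normal(loc=[7.2, 4.0], scale=[0.5, 0.8], size=(200, 2))

# Fit a mixture over normal behavior, then set an anomaly threshold at the
# 1st percentile of the training log-likelihoods.
gmm = GaussianMixture(n_components=2, random_state=0).fit(normal_sessions)
threshold = np.percentile(gmm.score_samples(normal_sessions), 1)

# A fault-like spike: very high power, very short session
suspect = np.array([[15.0, 0.3]])
is_anomaly = gmm.score_samples(suspect)[0] < threshold
print(is_anomaly)  # True
```

In a deployed system, sessions falling below the threshold would feed the alarm and health-index logic described above; the event-detection path would apply an analogous likelihood test over time-windowed features.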

Keywords: cloud computing framework, recurrent neural networks, long short-term memory, IoT, EV charging, smart grids

Procedia PDF Downloads 68
2145 Pharyngealization Spread in Ibbi Dialect of Yemeni Arabic: An Acoustic Study

Authors: Fadhl Qutaish

Abstract:

This paper examines pharyngealization spread in one of the Yemeni Arabic dialects, namely Ibbi Arabic (IA). It investigates how pharyngealized sounds spread their acoustic features onto neighboring vowels and change their default features. This feature has been investigated quite well in MSA but has yet to be studied in depth in the different dialects of Arabic, which would bring about a clearer picture of the similarities and differences among these dialects and help in mapping them based on the way this feature is utilized. Though the studies are numerous, none of them has illustrated how far into a multi-syllabic word the spread can reach and whether it takes a steady or gradient manner. This study tries to fill this gap and give a satisfactory explanation of pharyngealization spread in the Ibbi dialect. It is the first step towards a larger investigation of the different dialects of Yemeni Arabic in the future. The recorded data are represented in minimal pairs in which the trigger (a pharyngealized or non-pharyngealized sound) is in the initial or final position of monosyllabic and multisyllabic words. A group of 24 words was divided into four groups and repeated three times by three subjects, yielding 216 tokens that were tested and analyzed. The subjects are three male speakers aged between 28 and 31 with no history of neurological, speech, or hearing problems. All of them are bilingual speakers of Arabic and English and native speakers of the Ibbi dialect. Recordings were made in a sound-proof room, and Praat software was used for the analysis and coding of the trajectories of F1 and F2 for the low vowel /a/, to see the effect of pharyngealization on the formant trajectory within the same syllable and in other syllables of the same word by comparing the F1 and F2 formants to the non-pharyngealized environment. The results show that pharyngealization spread is gradient (progressively and regressively).
The spread is reflected in the gradual raising of F1 as we move closer towards the trigger and the gradual lowering of F2 as well. The F1 mean values in tri-syllabic words, when the trigger is word-initial, show a rise of 37.9 Hz in the first syllable, 26.8 Hz in the second syllable, and 14.2 Hz in the third syllable. The F2 mean values undergo a lowering of 239 Hz in the first syllable, 211.7 Hz in the second syllable, and 176.5 Hz in the third syllable. This gradual decrease in the difference between F2 values in the non-pharyngealized and pharyngealized contexts illustrates that the spread is gradient. A similar result was found when the trigger is word-final, which proves that the spread is gradient (progressively and regressively).
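The gradient (rather than steady) character of the spread can be checked directly from the reported per-syllable shifts; the snippet below is purely illustrative arithmetic over the mean formant differences quoted above, not a reanalysis of the recordings:

```python
# Illustrative check: the magnitude of the formant shift decays syllable by
# syllable with distance from the trigger, which is what "gradient" means
# here (a steady spread would show roughly constant shifts).
f1_rise_hz = [37.9, 26.8, 14.2]        # F1 raising, syllables 1-3 from trigger
f2_lowering_hz = [239.0, 211.7, 176.5]  # F2 lowering, syllables 1-3 from trigger

def is_gradient(shifts):
    """True if shift magnitude strictly decreases with distance."""
    return all(a > b for a, b in zip(shifts, shifts[1:]))

print(is_gradient(f1_rise_hz), is_gradient(f2_lowering_hz))  # True True
```

Both formants show strictly decreasing shift magnitudes, consistent with the gradient interpretation argued for in the abstract.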

Keywords: pharyngealization, Yemeni Arabic, Ibbi dialect, pharyngealization spread

Procedia PDF Downloads 223