Search results for: Fourier neural operator
424 Catalyst Assisted Microwave Plasma for NOx Formation
Authors: Babak Sadeghi, Rony Snyders, Marie-Paule Delplancke-Ogletree
Abstract:
Nitrogen fixation (NF) is one of the crucial industrial processes. Many attempts have been made to fix nitrogen artificially, and among them, the Haber-Bosch (H-B) process is widely used. However, it presents two major drawbacks: huge fossil feedstock consumption and noticeable greenhouse gas emissions. It is, therefore, necessary to develop alternatives. Plasma technology, as an inherently “green” technology, is considered to have great potential for reducing the environmental impacts and improving the energy efficiency of the NF process. In this work, we have studied catalyst-assisted microwave plasma for NF applications. Heterogeneous catalysts of MoO₃, with various loads of 0, 5, 10, 20, and 30 wt%, supported on γ-alumina were prepared by conventional wet impregnation. Crystallinity, surface area, pore size, and microstructure were obtained by X-ray diffraction (XRD), Brunauer-Emmett-Teller (BET) adsorption isotherms, scanning electron microscopy (SEM), and transmission electron microscopy (TEM). The XRD patterns of the calcined alumina confirm the γ-phase. Characteristic peaks of MoO₃ could not be observed for low loads (< 20 wt%), likely indicating a high dispersion of the metal oxide over the support. The specific surface area and pore size decrease with increasing calcination temperature and MoO₃ loading. The MoO₃ loading does not modify the microstructure. TEM and SEM results for loadings below 20 wt% are coherent with a monolayer of MoO₃ on the support, as proposed elsewhere. For loadings of 20 wt% and more, TEM and electron diffraction (ED) show nanocrystalline 3-D MoO₃ particles. The catalytic performances of these catalysts were investigated in the post-discharge of a microwave plasma for NOₓ formation from N₂/O₂ mixtures. The plasma is sustained by a surface wave launched in a quartz tube via a surfaguide supplied by a 2.45 GHz microwave generator in pulse mode. In-situ identification and quantification of the products were carried out by Fourier-transform infrared spectroscopy (FTIR) in the post-discharge region. FTIR analysis of the exhaust gas reveals NO and NO₂ bands in the presence of the catalyst, while only the NO band was assigned without the catalyst. Moreover, in the presence of the catalyst, a 10% increase in NOₓ formation and a 20% increase in energy efficiency are observed.
Keywords: γ-Al₂O₃-MoO₃, µ-wave plasma, N₂ fixation, plasma-catalysis, plasma diagnostic
423 Batch and Dynamic Investigations on Magnesium Separation by Ion Exchange Adsorption: Performance and Cost Evaluation
Authors: Mohamed H. Sorour, Hayam F. Shaalan, Heba A. Hani, Eman S. Sayed
Abstract:
Ion exchange adsorption has a long-standing history of success for seawater softening and selective ion removal from saline sources. Strong, weak, and mixed-type ion exchange systems could be designed and optimized for target separation. In this paper, different types of adsorbents comprising zeolite 13X and kaolin, in addition to polyacrylate/zeolite (AZ), polyacrylate/kaolin (AK), and stand-alone polyacrylate (A) hydrogel types, were prepared via microwave (M) and ultrasonic (U) irradiation techniques. They were characterized using X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and scanning electron microscopy (SEM). The developed adsorbents were evaluated at bench scale, and based on the assessment results, a composite bed was formulated for performance evaluation in pilot-scale column investigations. Owing to the hydrogel nature of the partially crosslinked polyacrylate, the developed adsorbents manifested a swelling capacity of about 50 g/g. The pilot trials were carried out using magnesium-enriched Red Sea water to simulate Red Sea desalination brine. Batch studies indicated varying uptake efficiencies, where Mg adsorption decreases according to the following prepared hydrogel types: AU > AM > AKM > AKU > AZM > AZU, being 108, 107, 78, 69, 66, and 63 mg/g, respectively. The composite bed adsorbent tested in up-flow column studies indicated good performance for Mg uptake. For an operating cycle of 12 h, the maximum uptake during the loading cycle approached 92.5-100 mg/g, which is comparable to the performance of some commercial resins. Different regenerants have been explored to maximize regeneration and minimize the quantity of regenerants, including 15% NaCl, 0.1 M HCl, and sodium carbonate. The best results were obtained with acidified sodium chloride solution. In conclusion, the developed cation exchange adsorbents comprising clay or zeolite support indicated adequate performance for Mg recovery under a saline environment. A column design operated in the up-flow mode (approaching an expanded bed) is appropriate for this type of separation. Preliminary cost indicators for Mg recovery via ion exchange have been developed and analyzed.
Keywords: batch and dynamic magnesium separation, seawater, polyacrylate hydrogel, cost evaluation
422 Develop a Conceptual Data Model of Geotechnical Risk Assessment in Underground Coal Mining Using a Cloud-Based Machine Learning Platform
Authors: Reza Mohammadzadeh
Abstract:
The major challenges in geotechnical engineering in underground spaces arise from uncertainties and different probabilities. The collection, collation, and collaboration of existing data, to incorporate them in analysis and design for a given prospect evaluation, would be a reliable and practical problem-solving method under uncertainty. Machine learning (ML) is a subfield of artificial intelligence in statistical science which applies different techniques (e.g., regression, neural networks, support vector machines, decision trees, random forests, genetic programming, etc.) to data to automatically learn and improve from them without being explicitly programmed, and to make decisions and predictions. In this paper, a conceptual database schema of geotechnical risks in underground coal mining based on a cloud system architecture has been designed. A new approach to risk assessment using a three-dimensional risk matrix supported by the level of knowledge (LoK) has been proposed in this model. Subsequently, the stages of the model workflow methodology have been described. In order to train the data and deploy the LoK models, an ML platform has been implemented. IBM Watson Studio, as a leading data science tool and data-driven cloud-integration ML platform, is employed in this study. As a use case, a data set of geotechnical hazards and risk assessments in underground coal mining was prepared to demonstrate the performance of the model, and the results have been outlined accordingly.
Keywords: data model, geotechnical risks, machine learning, underground coal mining
421 Synthesis of Iron Oxide Nanoparticles Using Different Stabilizers and Study of Their Size and Properties
Authors: Mohammad Hassan Ramezan Zadeh (Biomedical Engineering Department, Near East University, Nicosia, Cyprus), Majid Seifi, Hoda Hekmat Ara (Physics Department, Guilan University, P.O. Box 41335-1914, Rasht, Iran)
Abstract:
Magnetic iron oxide nanoparticles were synthesised from ferric chloride using a co-precipitation technique. For optimal results, ferric chloride at room temperature was added to different surfactants with different ratios of metal ions/surfactant. The samples were characterised using transmission electron microscopy, X-ray diffraction, and Fourier transform infrared spectroscopy to show the presence of the nanoparticles, their structure, and their morphology. Magnetic measurements were also carried out on the samples using a vibrating sample magnetometer. To show the effect of the surfactant on the size distribution and crystalline structure of the produced nanoparticles, surfactants with various charges, such as cationic cetyl trimethyl ammonium bromide (CTAB), anionic sodium dodecyl sulphate (SDS), and neutral Triton X-100, were employed. By changing the surfactant and the ratio of metal ions/surfactant, the size and crystalline structure of these nanoparticles were controlled. We also show that using the anionic stabilizer leads to the smallest size, the narrowest size distribution, and the most crystalline (polycrystalline) structure. In developing our production technique, many parameters were varied. Efforts at reproducing good yields indicated which of the experimental parameters were the most critical and how carefully they had to be controlled. The conditions reported here were the best that we encountered, but the range of possible parameter choices is so large that these probably only represent a local optimum. The samples for our chemical process were prepared by adding 0.675 g ferric chloride (FeCl₃·6H₂O) to three different surfactants in water solution. The solution was sonicated for about 30 min until a transparent solution was achieved. Then 0.5 g sodium hydroxide (NaOH) as a reduction agent was added to the reaction drop by drop, which resulted in the precipitation of reddish-brown Fe₂O₃ nanoparticles. After washing with ethanol, the obtained powder was calcined at 600°C for 2 h. Here, sample 1 contained CTAB as a surfactant with a metal ions/surfactant ratio of 1/2, sample 2 CTAB with a ratio of 1/1, sample 3 SDS with a ratio of 1/2, sample 4 SDS with 1/1, sample 5 Triton X-100 with 1/2, and sample 6 Triton X-100 with 1/1.
Keywords: iron oxide nanoparticles, stabilizer, co-precipitation, surfactant
420 R-Killer: An Email-Based Ransomware Protection Tool
Authors: B. Lokuketagoda, M. Weerakoon, U. Madushan, A. N. Senaratne, K. Y. Abeywardena
Abstract:
Ransomware has become a common threat in the past few years, and recent threat reports show a growth in Ransomware infections. Researchers have identified different variants of Ransomware families since 2015. Users' lack of knowledge about the threat is a major concern. Ransomware detection methodologies are still evolving across the industry. Email is the easiest method to send Ransomware to its victims. Uninformed users tend to click on links and attachments without much consideration, assuming the emails are genuine. As a solution to this, the R-Killer Ransomware detection tool is introduced in this paper. The tool can be integrated with existing email services. The Core Detection Engine (CDE) discussed in the paper focuses on separating suspicious samples from emails and handling them until a decision is made regarding the suspicious mail. It has the capability of preventing execution of identified ransomware processes. On the other hand, the sandboxing and URL analyzing system has the capability of communicating with public threat intelligence services to gather known threat intelligence. R-Killer has its own mechanism, developed in its Proactive Monitoring System (PMS), which can monitor the processes created by downloaded email attachments and identify potential Ransomware activities. R-Killer is capable of gathering threat intelligence without exposing the user's data to public threat intelligence services, hence protecting the confidentiality of user data.
Keywords: ransomware, deep learning, recurrent neural networks, email, core detection engine
419 Analytical Study and Conservation Processes of Scribe Box from Old Kingdom
Authors: Mohamed Moustafa, Medhat Abdallah, Ramy Magdy, Ahmed Abdrabou, Mohamed Badr
Abstract:
The scribe box under study dates back to the Old Kingdom. It was excavated by the Italian expedition in Qena (1935-1937). The box consists of two pieces, the lid and the body. The inner side of the lid is decorated with ancient Egyptian inscriptions written with a black pigment. The box was made using several panels assembled together by wooden dowels and secured with plant ropes. The entire box is covered with a red pigment. This study aims to use analytical techniques in order to identify and gain a deep understanding of the box components. Moreover, the authors were significantly interested in using infrared reflectance transformation imaging (RTI-IR) to improve the legibility of the hidden inscriptions on the lid. The identification of the wood species is included in this study. Visual observation and assessment were done to understand the condition of this box. 3D and 2D programs were used to illustrate the wood joint techniques. Optical microscopy (OM), X-ray diffraction (XRD), portable X-ray fluorescence (XRF), and Fourier transform infrared spectroscopy (FTIR) were used in this study in order to identify the wood species, the remains of insect bodies, the red pigment, the plant fibers, and previous conservation adhesives; the RTI-IR technique was also very effective in improving the hidden inscriptions. The analysis results proved that the wooden panels and dowels were identified as Acacia nilotica and the wooden rail as Salix sp.; the insects were identified as Lasioderma serricorne and Gibbium psylloides; the red pigment was hematite; the plant fibers were linen; and the previous adhesive was identified as cellulose nitrate. The historical study of the inscriptions proved that they are hieratic writings of a funerary text. After its transportation from the Egyptian Museum storage to the wood conservation laboratory of the Grand Egyptian Museum Conservation Center (GEM-CC), conservation techniques were applied with high accuracy in order to restore the object, including cleaning, consolidating friable pigments and writings, removal of the previous adhesive, and reassembly. The conservation processes that were applied were extremely effective for this box, which became ready for display or storage in the Grand Egyptian Museum.
Keywords: scribe box, hieratic, 3D program, Acacia nilotica, XRD, cellulose nitrate, conservation
418 Development, Characterization and Performance Evaluation of a Weak Cation Exchange Hydrogel Using Ultrasonic Technique
Authors: Mohamed H. Sorour, Hayam F. Shaalan, Heba A. Hani, Eman S. Sayed, Amany A. El-Mansoup
Abstract:
Heavy metals (HMs) present an increasing threat to the aquatic and soil environment. Thus, techniques should be developed for the removal and/or recovery of those HMs from point sources in the generating industries. This paper reports our endeavors concerning the development of in-house weak cation exchange polyacrylate hydrogel kaolin composites for heavy metals removal. This type of composite enables desirable characteristics and functions, including mechanical strength, bed porosity, and cost advantages. This paper emphasizes the effect of varying the crosslinker (methylenebis(acrylamide)) concentration. The prepared cation exchanger has been subjected to intensive characterization using X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray fluorescence (XRF), and the Brunauer-Emmett-Teller (BET) method. Moreover, the performance was investigated using synthetic wastewater and real wastewater from an industrial complex east of Cairo. The simulated and real wastewater compositions addressed Cr, Co, Ni, and Pb in the ranges of 92-115, 91-103, 86-88, and 99-125, respectively. Adsorption experiments have been conducted in both batch and column modes. In general, batch tests revealed enhanced cation exchange capacities of 70, 72, 78.2, and 99.9 mg/g from single synthetic wastes, while removal efficiencies of 82.2, 86.4, 44.4, and 96% were obtained for Cr, Co, Ni, and Pb, respectively, from mixed synthetic wastes. It is concluded that the mixed synthetic and real wastewaters have lower adsorption capacities than single solutions. It is worth mentioning that Pb attained higher adsorption capacities, with comparable results at all tested concentrations of synthetic and real wastewaters. Pilot-scale experiments were also conducted for mixed synthetic waste in a fluidized bed column with a 48-hour cycle time, which revealed 86.4%, 58.5%, 66.8%, and 96.9% removal efficiency for Cr, Co, Ni, and Pb, respectively. Regeneration was also conducted using saline and acid regenerants. Maximum regeneration efficiencies for the column studies were higher than those of the batch studies by about 30% to 60%. Studies are currently under way to enhance the regeneration efficiency to enable successful scaling up of the adsorption column.
Keywords: polyacrylate hydrogel kaolin, ultrasonic irradiation, heavy metals, adsorption and regeneration
417 Tunable Crystallinity of Zinc Gallogermanate Nanoparticles via Organic Ligand-Assisted Biphasic Hydrothermal Synthesis
Authors: Sarai Guerrero, Lijia Liu
Abstract:
Zinc gallogermanate (ZGGO) is a persistent phosphor that can emit in the near-infrared (NIR) range once doped with Cr³⁺, enabling its use for in-vivo deep-tissue bio-imaging. Such a property also allows for its application in cancer diagnosis and therapy. Given this, work into developing a synthetic procedure that can be carried out using common laboratory instruments and equipment, as well as understanding ZGGO overall, is in demand. However, the ZGGO nanoparticles must have a size compatible with cell intake while still maintaining sufficient photoluminescence. The nanoparticles must also be made biocompatible by functionalizing the surface for hydrophilic solubility and for high particle uniformity in the final product. Additionally, most research is completed on doped ZGGO, leaving a gap in understanding the base form of ZGGO. It also leaves a gap in understanding how doping affects the synthesis of ZGGO. In this work, the first step of optimizing the particle size via the crystalline size of ZGGO was done with undoped ZGGO using the organic acid oleic acid (OA) for organic ligand-assisted biphasic hydrothermal synthesis. The effects of this synthesis procedure on ZGGO's crystallinity were evaluated using powder X-ray diffraction (PXRD). OA was selected as the capping ligand, as experiments have shown it beneficial in synthesizing sub-10 nm zinc gallate (ZGO) nanoparticles as well as palladium nanocrystals and magnetite (Fe₃O₄) nanoparticles. Later it is possible to substitute OA with a different ligand, allowing for hydrophilic solubility. Attenuated total reflection Fourier-transform infrared (ATR-FTIR) spectroscopy was used to investigate the surface of the nanoparticles and verify that OA had capped them. PXRD results showed that using this procedure led to improved crystallinity of the ZGGO nanoparticles, comparable to that obtained with high-purity reagents. There was also a change in the crystalline size of the ZGGO nanoparticles. ATR-FTIR showed that once capped, ZGGO cannot be annealed, as doing so will affect the OA. These results point to this new procedure positively affecting the crystallinity of ZGGO nanoparticles. The results are also repeatable, implying the procedure is a reliable source of highly crystalline ZGGO nanoparticles. With this completed, the next step will be working on substituting the OA with a hydrophilic ligand. As these ligands affect the solubility of the nanoparticles as well as the pH in which the nanoparticles can dissolve, further research is needed to verify which ligand is best suited for preparing ZGGO for bio-imaging.
Keywords: biphasic hydrothermal synthesis, crystallinity, oleic acid, zinc gallogermanate
416 DCDNet: Lightweight Document Corner Detection Network Based on Attention Mechanism
Authors: Kun Xu, Yuan Xu, Jia Qiao
Abstract:
Document detection plays an important role in optical character recognition and text analysis. Because traditional detection methods have weak generalization ability, and deep neural networks have complex structures and large numbers of parameters that cannot be readily deployed on mobile devices, this paper proposes a lightweight Document Corner Detection Network (DCDNet). DCDNet is a two-stage architecture. The first stage, with an Encoder-Decoder structure, adopts depthwise separable convolution to greatly reduce the network parameters. After introducing the Feature Attention Union (FAU) module, the second stage enhances the feature information of the spatial and channel dimensions and adaptively adjusts the size of the receptive field to enhance the feature expression ability of the model. Aiming to solve the problem of the large difference in pixel distribution between corner and non-corner regions, a Weighted Binary Cross-Entropy Loss (WBCE Loss) is proposed, defining corner detection as a classification problem to make the training process more efficient. In order to make up for the lack of datasets for document corner detection, a dataset containing 6,620 images named the Document Corner Detection Dataset (DCDD) is constructed. Experimental results show that the proposed method can obtain fast, stable, and accurate detection results on DCDD.
Keywords: document detection, corner detection, attention mechanism, lightweight
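A minimal NumPy sketch of a weighted binary cross-entropy loss of the kind the abstract names for the corner/non-corner imbalance; the positive-class weight and the toy corner map are illustrative assumptions, not the paper's exact formulation.

```python
# Weighted binary cross-entropy: up-weight the rare corner pixels against the
# abundant background pixels. Values here are assumptions for illustration.
import numpy as np

def wbce_loss(y_true, y_pred, pos_weight=50.0, eps=1e-7):
    """y_true: binary corner map (H, W); y_pred: predicted corner probabilities (H, W)."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    loss = -(pos_weight * y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))
    return loss.mean()

# Toy example: a 64x64 map with 4 corner pixels.
rng = np.random.default_rng(0)
y_true = np.zeros((64, 64))
y_true[[5, 5, 60, 60], [5, 60, 5, 60]] = 1.0
y_pred = rng.uniform(0.0, 0.1, size=(64, 64))
y_pred[5, 5] = 0.9
print(wbce_loss(y_true, y_pred))
```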
415 Skull Extraction for Quantification of Brain Volume in Magnetic Resonance Imaging of Multiple Sclerosis Patients
Authors: Marcela De Oliveira, Marina P. Da Silva, Fernando C. G. Da Rocha, Jorge M. Santos, Jaime S. Cardoso, Paulo N. Lisboa-Filho
Abstract:
Multiple sclerosis (MS) is an immune-mediated disease of the central nervous system characterized by neurodegeneration, inflammation, demyelination, and axonal loss. Magnetic resonance imaging (MRI), due to the richness of the information details provided, is the gold standard exam for the diagnosis and follow-up of neurodegenerative diseases such as MS. Brain atrophy, the gradual loss of brain volume, is quite extensive in multiple sclerosis, nearly 0.5-1.35% per year, far beyond the limits of normal aging. Thus, brain volume quantification becomes an essential task for future analysis of the occurrence of atrophy. The analysis of MRI has become a tedious and complex task for clinicians, who have to manually extract important information. This manual analysis is prone to errors and is time consuming due to intra- and inter-operator variability. Nowadays, computerized methods for MRI segmentation have been extensively used to assist doctors in quantitative analyses for disease diagnosis and monitoring. Thus, the purpose of this work was to evaluate the brain volume in MRI of MS patients. We used MRI scans with 30 slices from five patients diagnosed with multiple sclerosis according to the McDonald criteria. The computational methods for the analysis of the images were carried out in two steps: segmentation of the brain and brain volume quantification. The first image processing step was to perform brain extraction by skull stripping from the original image. In the skull stripper for MRI images of the brain, the algorithm registers a grayscale atlas image to the grayscale patient image. The associated brain mask is propagated using the registration transformation. This mask is then eroded and used for a refined brain extraction based on level sets (edge of the brain-skull border with dedicated expansion, curvature, and advection terms). In the second step, brain volume quantification was performed by counting the voxels belonging to the segmentation mask and converting the count to cubic centimetres (cc). We observed an average brain volume of 1469.5 cc. We concluded that the automatic method applied in this work can be used for the brain extraction process and brain volume quantification in MRI. The development and use of computer programs can contribute to assisting health professionals in the diagnosis and monitoring of patients with neurodegenerative diseases. In future works, we expect to implement more automated methods for the assessment of cerebral atrophy and brain lesion quantification, including machine-learning approaches. Acknowledgements: This work was supported by a grant from the Brazilian agency Fundação de Amparo à Pesquisa do Estado de São Paulo (number 2019/16362-5).
Keywords: brain volume, magnetic resonance imaging, multiple sclerosis, skull stripper
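A minimal sketch of the quantification step described above: counting the voxels inside a binary brain mask and converting to cubic centimetres. The voxel spacing and the synthetic ellipsoidal mask are assumptions for illustration only.

```python
# Brain volume = number of mask voxels * voxel volume; 1 cc = 1000 mm^3.
import numpy as np

def brain_volume_cc(mask, spacing_mm=(1.0, 1.0, 5.0)):
    """mask: 3-D boolean array from skull stripping; spacing_mm: voxel size in mm (x, y, z)."""
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0

# Toy example: a synthetic ellipsoidal "brain" mask on a 30-slice volume.
z, y, x = np.ogrid[:30, :256, :256]
mask = ((x - 128) / 70.0) ** 2 + ((y - 128) / 85.0) ** 2 + ((z - 15) / 12.0) ** 2 <= 1.0
print(round(brain_volume_cc(mask), 1), "cc")
```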
414 Investigation of Antimicrobial Activity of Dielectric Barrier Discharge Oxygen Plasma Combined with ZnO NPs-Treated Cotton Fabric Coated with Natural Green Tea Leaf Extracts
Authors: Fatma A. Mohamed, Hend M. Ahmed
Abstract:
This research explores the antimicrobial effects of dielectric barrier discharge (DBD) oxygen plasma treatment combined with ZnO NPs on cotton fabric, focusing on various treatment durations (5, 10, 15, 20, and 30 minutes) and discharge powers (15.5-17.35 W) at a flow rate of 0.5 l/min. After treatment with oxygen plasma and ZnO NPs, the fabric was printed with green tea (Camellia sinensis) at five different concentrations. The study evaluated the treatment's effectiveness by analyzing surface wettability, specifically through wet-out time and hydrophilicity, as well as by measuring contact angles. To investigate the chemical changes on the fabric's surface, attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy was employed to identify the functional groups formed as a result of the plasma treatment. This comprehensive approach aims to understand how DBD oxygen plasma treatment and ZnO nanoparticles change cotton fabric properties and enhance its antimicrobial potential, paving the way for innovative applications in textiles. In addition to the chemical analysis, the surface morphology of the O₂ plasma/ZnO NPs-treated cotton fabric was examined using scanning electron microscopy (SEM). FTIR analysis revealed an increase in polar functional groups (-COOH, -OH, and -C=O) on the fabric's surface, contributing to enhanced hydrophilicity and functionality. The antimicrobial properties were evaluated using qualitative and quantitative methods, including agar plate assays and modified Hohenstein tests against Staphylococcus aureus and Escherichia coli. The results indicated a significant improvement in antimicrobial effectiveness for the cotton fabric treated with plasma and coated with the natural extracts, which maintained this efficacy even after four washing cycles. This research demonstrates that utilizing oxygen DBD plasma/ZnO NPs treatment, combined with the absorption of tea and tulsi leaf extracts, presents a promising strategy for developing natural antimicrobial textiles. This approach is particularly relevant given the increasing medical and healthcare demands for effective antimicrobial materials. Overall, the method not only enhances the absorption of plant extracts but also significantly boosts antimicrobial efficacy, offering valuable insights for future textile applications.
Keywords: cotton, ZnO NPs, green tea leaf, antimicrobial activity, DBD oxygen plasma
413 Direct Current Grids in Urban Planning for More Sustainable Urban Energy and Mobility
Authors: B. Casper
Abstract:
The energy transition towards renewable energies and drastically reduced carbon dioxide emissions in Germany is driving multiple sectors into a transformation process. Photovoltaic and onshore wind power predominantly feed into the low- and medium-voltage grids. The electricity grid is not laid out to allow an increasing feed-in of power in low- and medium-voltage grids. Electric mobility is currently in its run-up phase in Germany and still lacks a significant number of charging stations. The additional power demand from e-mobility cannot be supplied by the existing electric grids in most cases. Future demands for heating and cooling of commercial and residential buildings are increasingly met by heat pumps. Yet the most important part of the energy transition is the storage of surplus energy generated by photovoltaic and wind power sources. Water electrolysis is one way to store surplus energy, known as power-to-gas. With vehicle-to-grid technology, the upcoming fleet of electric cars could be used as energy storage to stabilize the grid. All these processes use direct current (DC). The demand for bi-directional flow and higher efficiency in future grids can be met by using DC. The Flexible Electrical Networks (FEN) research campus at RWTH Aachen investigates, in an interdisciplinary manner, the advantages, opportunities, and limitations of DC grids. This paper investigates the impact of DC grids as a technological innovation on urban form and urban life. By applying explorative scenario development, analysis of mapped open data sources on grid networks, and research-by-design as a conceptual design method, possible starting points for a transformation to DC medium-voltage grids could be found. Several fields of action have emerged in which DC technology could become a catalyst for future urban development: the energy transition in urban areas, e-mobility, and the transformation of the network infrastructure. The investigation shows a significant potential to increase renewable energy production within cities with DC grids. The charging infrastructure for electric vehicles will predominantly use DC in the future because fast and ultra-fast charging can only be achieved with DC. Our research shows that e-mobility, combined with autonomous driving, has the potential to change urban space and urban logistics fundamentally. Furthermore, there are possible win-win-win solutions for the municipality, the grid operator, and the inhabitants: replacing overhead transmission lines with underground DC cables to open up spaces in contested urban areas can become a positive example of how the energy transition can contribute to a more sustainable urban structure. The outlook makes clear that target grid planning and urban planning will increasingly need to be synchronized.
Keywords: direct current, e-mobility, energy transition, grid planning, renewable energy, urban planning
412 Modeling and Analysis of Drilling Operation in Shale Reservoirs with Introduction of an Optimization Approach
Authors: Sina Kazemi, Farshid Torabi, Todd Peterson
Abstract:
Drilling in shale formations is frequently time-consuming, challenging, and fraught with mechanical failures such as stuck pipes or the hole packing off when the cutting removal rate is not sufficient to clean the bottom hole. Crossing heavy oil shale and sand reservoirs with active shale and microfractures is generally associated with severe fluid losses, causing a reduction in the rate of cuttings removal. These circumstances compromise a well's integrity and result in a lower rate of penetration (ROP). This study presents the collective results of field studies and theoretical analysis conducted on data from South Pars and North Dome in an Iran-Qatar offshore field. Solutions to complications related to drilling in shale formations are proposed by systematically analyzing and applying modeling techniques to selected field mud logging data. Field data measurements during actual drilling operations indicate that in a shale formation where the return flow of polymer mud was almost lost in the upper dolomite layer, the hole cleaning performance and ROP progressively change when higher string rotations are initiated. Likewise, it was observed that this effect minimized the rotational torque and improved well integrity in the subsequent casing running. Given similar geologic conditions and drilling operations in reservoirs targeting shale as the producing zone, such as the Bakken formation within the Williston Basin and Lloydminster, Saskatchewan, a drill bench dynamic modeling simulation was used to simulate borehole cleaning efficiency and mud optimization. The results obtained by altering the RPM (string revolutions per minute) at the same pump rate and with optimized mud properties exhibit a positive correlation with field measurements. The field investigation and the developed model in this report show that increasing the speed of string revolution, as far as geomechanics and drilling bit conditions permit, can minimize the risk of mechanically stuck pipes while reaching a higher-than-expected ROP in shale formations. Based on the modeling and field data analysis, optimized drilling parameters and hole cleaning procedures are suggested for minimizing the risk of the hole packing off and enhancing well integrity in shale reservoirs. Whereas optimization of ROP at a lower pump rate maintains wellbore stability, it saves time for the operator while reducing carbon emissions and the fatigue of mud motors and power supply engines.
Keywords: ROP, circulating density, drilling parameters, return flow, shale reservoir, well integrity
411 Reconstructability Analysis for Landslide Prediction
Authors: David Percy
Abstract:
Landslides are a geologic phenomenon that affects a large number of inhabited places and are constantly being monitored and studied for the prediction of future occurrences. Reconstructability analysis (RA) is a methodology for extracting informative models from large volumes of data that works exclusively with discrete data. While RA has been used extensively in medical applications and social science, we are introducing it to the spatial sciences through applications like landslide prediction. Since RA works exclusively with discrete data, such as soil classification or bedrock type, working with continuous data, such as porosity, requires that these data be binned for inclusion in the model. RA constructs models of the data which pick out the most informative elements, the independent variables (IVs), from each layer that predict the dependent variable (DV), landslide occurrence. Each layer included in the model retains its classification data as the primary encoding of the data. Unlike other machine learning algorithms that force the data into one-hot encoding schemes, RA works directly with the data as it is encoded, with the exception of continuous data, which must be binned. The usual physical and derived layers are included in the model, and testing our results against other published methodologies, such as neural networks, yields accuracy that is similar but with the advantage of a completely transparent model. The results of an RA session with a data set are a report on every combination of variables and their probability of landslide events occurring. In this way, every informative state combination can be examined.
Keywords: reconstructability analysis, machine learning, landslides, raster analysis
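A minimal pandas sketch of the kind of output an RA session produces after binning: every observed combination of independent-variable states tabulated with its conditional probability of landslide occurrence. The layer names, bins, and data are illustrative assumptions, and this is only the tabulation step, not the full RA model search.

```python
# Bin a continuous layer, keep a discrete layer as-is, and report P(landslide)
# for every IV state combination along with its support count.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "soil_class": rng.choice(["clay", "silt", "sand"], n),   # already discrete
    "slope_deg": rng.uniform(0, 45, n),                      # continuous -> binned
    "landslide": rng.integers(0, 2, n),                      # dependent variable
})
df["slope_bin"] = pd.cut(df["slope_deg"], bins=[0, 15, 30, 45], labels=["low", "mid", "high"])

table = (df.groupby(["soil_class", "slope_bin"], observed=True)["landslide"]
           .agg(p_landslide="mean", count="size")
           .reset_index())
print(table.sort_values("p_landslide", ascending=False))
```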
410 Nano-Sized Iron Oxides/ZnMe Layered Double Hydroxides as Highly Efficient Fenton-Like Catalysts for Degrading Specific Pharmaceutical Agents
Authors: Marius Sebastian Secula, Mihaela Darie, Gabriela Carja
Abstract:
Persistent organic pollutants discharged by various industries or urban regions into aquatic ecosystems represent a serious threat to fauna and human health. Endocrine disrupting compounds are known to have toxic effects even at very low concentrations. The anti-inflammatory agent ibuprofen is an endocrine disrupting compound and is considered as a model pollutant in the present study. The use of light energy to meet the latest requirements concerning wastewater discharge demands highly performant and robust photocatalysts. Many efforts have been made to obtain efficient photo-responsive materials. Among the promising photocatalysts, layered double hydroxides (LDHs) have attracted significant consideration, especially due to their composition flexibility, high surface area, and tailored redox features. This work presents Fe(II) self-supported on ZnMeLDHs (Me = Al³⁺, Fe³⁺) as novel efficient photocatalysts for Fenton-like catalysis. The co-precipitation method was used to prepare ZnAlLDH, ZnFeAlLDH, and ZnCrLDH (Zn²⁺/Me³⁺ = 2 molar ratio). Fe(II) was self-supported on the LDH matrices by using the reconstruction method, at two different values of weight concentration. X-ray diffraction (XRD), thermogravimetric analysis (TG/DTG), Fourier transform infrared spectroscopy (FTIR), and transmission electron microscopy (TEM) were used to investigate the structural, textural, and micromorphological properties of the catalysts. The Fe(II)/ZnMeLDH nano-hybrids were tested for the degradation of a model pharmaceutical agent, the anti-inflammatory drug ibuprofen, by photocatalysis and photo-Fenton catalysis. The results point out that the embedment of Fe(II) into ZnFeAlLDH and ZnCrLDH leads to a slight enhancement of ibuprofen degradation by light irradiation, whereas in the case of ZnAlLDH, the degradation is relatively low. A remarkable enhancement of ibuprofen degradation was found in the case of Fe(II)/ZnMeLDHs by the photo-Fenton process. Acknowledgements: This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS - UEFISCDI, project number PN-II-RU-TE-2014-4-0405.
Keywords: layered double hydroxide, heterogeneous Fenton, micropollutant, photocatalysis
409 Health Trajectory Clustering Using Deep Belief Networks
Authors: Farshid Hajati, Federico Girosi, Shima Ghassempour
Abstract:
We present a Deep Belief Network (DBN) method for clustering health trajectories. A Deep Belief Network (DBN) is a deep architecture that consists of a stack of Restricted Boltzmann Machines (RBMs). In a deep architecture, each layer learns more complex features than the previous layers. The proposed method relies on the DBN for clustering without using the back-propagation learning algorithm. The proposed DBN has better performance compared to a deep neural network due to the initialization of the connecting weights. We use the Contrastive Divergence (CD) method for training the RBMs, which increases the performance of the network. The performance of the proposed method is evaluated extensively on the Health and Retirement Study (HRS) database. The University of Michigan Health and Retirement Study (HRS) is a nationally representative longitudinal study that has surveyed more than 27,000 elderly and near-elderly Americans since its inception in 1992. Participants are interviewed every two years, and data are collected on physical and mental health, insurance coverage, financial status, family support systems, labor market status, and retirement planning. The dataset is publicly available, and we use the RAND HRS version L, which is an easy-to-use and cleaned-up version of the data. The size of the sample data set is 268, and the length of the trajectories is equal to 10. The trajectories do not stop when a patient dies and represent 10 different interviews of living patients. Compared to state-of-the-art benchmarks, the experimental results show the effectiveness and superiority of the proposed method in clustering health trajectories.
Keywords: health trajectory, clustering, deep learning, DBN
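A minimal sketch, assuming scikit-learn, of the idea described above: RBMs trained with contrastive divergence are stacked, and the top-layer features are clustered without any back-propagation fine-tuning. The layer sizes and the binary-coded toy trajectories are illustrative, not the HRS encoding used by the authors.

```python
# Stack two contrastive-divergence-trained RBMs, then cluster the deep features.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(268, 10)).astype(float)  # 268 binary-coded trajectories of length 10

rbm1 = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=50, random_state=0)
rbm2 = BernoulliRBM(n_components=8, learning_rate=0.05, n_iter=50, random_state=0)

h1 = rbm1.fit_transform(X)    # first-layer hidden representation
h2 = rbm2.fit_transform(h1)   # deeper, more abstract features

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(h2)
print(np.bincount(labels))    # cluster sizes
```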
408 The Classification Accuracy of Finance Data through Holder Functions
Authors: Yeliz Karaca, Carlo Cattani
Abstract:
This study focuses on the local Holder exponent as a measure of function regularity for time series related to finance data. The attributes of a finance dataset belonging to 13 countries (India, China, Japan, Sweden, France, Germany, Italy, Australia, Mexico, United Kingdom, Argentina, Brazil, USA) located on 5 different continents (Asia, Europe, Australia, North America, and South America) have been examined. These countries are the ones most affected by the attributes related to financial development, covering the period from 2012 to 2017. Our study is concerned with the most important attributes that have an impact on the financial development of the countries identified. Our method comprises the following stages: (a) among the multifractal methods and Brownian motion Holder regularity functions (polynomial, exponential), significant and self-similar attributes have been identified; (b) the significant and self-similar attributes have been applied to the Artificial Neural Network (ANN) algorithms (Feed Forward Back Propagation (FFBP) and Cascade Forward Back Propagation (CFBP)); (c) the outcomes of classification accuracy have been compared with respect to the attributes which affect the countries' financial development. This study has revealed, through the application of ANN algorithms, how the most significant attributes are identified within the relevant dataset via the Holder functions (polynomial and exponential).
Keywords: artificial neural networks, finance data, Holder regularity, multifractals
407 Human Immunodeficiency Virus (HIV) Test Predictive Modeling and Identify Determinants of HIV Testing for People with Age above Fourteen Years in Ethiopia Using Data Mining Techniques: EDHS 2011
Authors: S. Abera, T. Gidey, W. Terefe
Abstract:
Introduction: Testing for HIV is the key entry point to HIV prevention, treatment, care, and support services. Hence, predictive data mining techniques can greatly help to analyze and discover new patterns from huge datasets like that of the EDHS 2011 data. Objectives: The objective of this study is to build a predictive model for HIV testing and identify determinants of HIV testing for adults above fourteen years of age using data mining techniques. Methods: The Cross-Industry Standard Process for Data Mining (CRISP-DM) was used to build the predictive model for HIV testing and explore association rules between HIV testing and the selected attributes among adult Ethiopians. The decision tree, Naïve Bayes, logistic regression, and artificial neural network data mining techniques were used to build the predictive models. Results: The target dataset contained 30,625 study participants, of which 16,515 (53.9%) were women. Nearly three-fifths, 17,719 (58%), had never been tested for HIV, while the rest, 12,906 (42%), had been tested. Ethiopians with a higher wealth index, higher educational level, age between 20 and 29 years, no stigmatizing attitude towards HIV-positive persons, urban residence, HIV-related knowledge, exposure to family planning information on mass media, and knowledge of a place to get tested for HIV showed increased patterns with respect to HIV testing. Conclusion and Recommendation: Public health interventions should consider the identified determinants to encourage people to get tested for HIV.
Keywords: data mining, HIV, testing, Ethiopia
406 Wind Speed Forecasting Based on Historical Data Using Modern Prediction Methods in Selected Sites of Geba Catchment, Ethiopia
Authors: Halefom Kidane
Abstract:
This study aims to assess the wind resource potential and characterize the urban wind patterns in Hawassa City, Ethiopia. The estimation and characterization of wind resources are crucial for sustainable urban planning, renewable energy development, and climate change mitigation strategies. A secondary data collection method was used to carry out the study. The data collected at 2 meters were analyzed statistically and extrapolated to the standard heights of 10 meters and 30 meters using the power law equation. The standard deviation method was used to calculate the values of the scale and shape factors. From the analysis presented, the maximum and minimum mean daily wind speeds at 2 meters were 1.33 m/s and 0.05 m/s in 2016, 1.67 m/s and 0.14 m/s in 2017, and 1.61 m/s and 0.07 m/s in 2018, respectively. The maximum monthly average wind speed of Hawassa City at 2 meters in 2016 was noticed in December, at around 0.78 m/s, while in 2017 the maximum wind speed was recorded in January with a magnitude of 0.80 m/s, and in 2018 June had the maximum speed, at 0.76 m/s. On the other hand, October was the month with the minimum mean wind speed in all years, with values of 0.47 m/s in 2016, 0.47 m/s in 2017, and 0.34 m/s in 2018. The annual mean wind speed at a height of 2 meters was 0.61 m/s in 2016, 0.64 m/s in 2017, and 0.57 m/s in 2018. From extrapolation, the annual mean wind speeds for 2016, 2017, and 2018 were 1.17 m/s, 1.22 m/s, and 1.11 m/s at a height of 10 meters, and 3.34 m/s, 3.78 m/s, and 3.01 m/s at a height of 30 meters, respectively. Thus, the site consists mainly of class I wind speeds even at the extrapolated heights.
Keywords: artificial neural networks, forecasting, min-max normalization, wind speed
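A minimal sketch of the two calculations mentioned above: power-law extrapolation of the 2-meter wind speed to 10 and 30 meters, and the standard deviation method for the Weibull shape and scale factors. The shear exponent and the synthetic daily sample are assumptions; the study does not state the exponent it used.

```python
# Power-law height extrapolation and the standard-deviation (empirical) Weibull fit.
import numpy as np
from scipy.special import gamma

def extrapolate(v_ref, h_ref, h_target, alpha=1.0 / 7.0):
    """Power law: v(h) = v_ref * (h / h_ref) ** alpha. alpha is an assumed shear exponent."""
    return v_ref * (h_target / h_ref) ** alpha

def weibull_std_method(speeds):
    """Estimate Weibull shape k and scale c from the mean and standard deviation."""
    v_mean, v_std = np.mean(speeds), np.std(speeds)
    k = (v_std / v_mean) ** -1.086      # shape factor
    c = v_mean / gamma(1.0 + 1.0 / k)   # scale factor
    return k, c

v2m = 0.61  # annual mean wind speed at 2 m (2016 value from the abstract)
print(extrapolate(v2m, 2.0, 10.0), extrapolate(v2m, 2.0, 30.0))

rng = np.random.default_rng(0)
sample = rng.weibull(2.0, 365) * 0.7    # synthetic daily means, for illustration only
print(weibull_std_method(sample))
```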
405 Alpha: A Groundbreaking Avatar Merging User Dialogue with OpenAI’s GPT-3.5 for Enhanced Reflective Thinking
Authors: Jonas Colin
Abstract:
Standing at the vanguard of AI development, Alpha represents an unprecedented synthesis of logical rigor and human abstraction, meticulously crafted to mirror the user's unique persona and personality, a feat previously unattainable in AI development. Alpha, an avant-garde artefact in the realm of artificial intelligence, epitomizes a paradigmatic shift in personalized digital interaction, amalgamating user-specific dialogic patterns with the sophisticated algorithmic prowess of OpenAI's GPT-3.5 to engender a platform for enhanced metacognitive engagement and an individualized user experience. Underpinned by a sophisticated algorithmic framework, Alpha integrates vast datasets through a complex interplay of neural network models and symbolic AI, facilitating a dynamic, adaptive learning process. This integration enables the system to construct a detailed user profile, encompassing linguistic preferences, emotional tendencies, and cognitive styles, tailoring interactions to align with individual characteristics and conversational contexts. Furthermore, Alpha incorporates advanced metacognitive elements, enabling real-time reflection on and adaptation of communication strategies. This self-reflective capability ensures continuous refinement of its interaction model, positioning Alpha not just as a technological marvel but as a harbinger of a new era in human-computer interaction, where machines engage with us on a deeply personal and cognitive level, transforming our interaction with the digital world.
Keywords: chatbot, GPT-3.5, metacognition, symbiosis
404 Development of Fault Diagnosis Technology for Power System Based on Smart Meter
Authors: Chih-Chieh Yang, Chung-Neng Huang
Abstract:
In power systems, improving the fault diagnosis technology for transmission lines has always been a primary goal of power grid operators. In recent years, due to the rise of green energy, the addition of various kinds of distributed power sources also has an impact on the stability of the power system. Smart meters provide data recording and bidirectional transmission, and the Adaptive Neuro-Fuzzy Inference System (ANFIS) is an artificial intelligence technique with learning and estimation capabilities. For the transmission network, in order to avoid misjudgment of the fault type and location due to the input of these unstable power sources, a method for identifying fault types and fault locations that combines the above advantages of smart meters and ANFIS is proposed in this study. In ANFIS training, the bus voltage and current information collected by smart meters can be trained through the ANFIS tool in MATLAB to generate fault codes that identify different types of faults and the locations of faults. In addition, due to the uncertainty of distributed generation, a wind power system is added to the transmission network to verify the diagnostic correctness of the study. Simulation results show that the method proposed in this study can correctly identify the fault type and location of a fault more efficiently and can deal with the interference caused by the addition of unstable power sources.
Keywords: ANFIS, fault diagnosis, power system, smart meter
403 MIMIC: A Multi Input Micro-Influencers Classifier
Authors: Simone Leonardi, Luca Ardito
Abstract:
Micro-influencers are effective elements in the marketing strategies of companies and institutions because of their capability to create a hyper-engaged audience around a specific topic of interest. In recent years, many scientific approaches and commercial tools have handled the task of detecting this type of social media user. These strategies adopt solutions ranging from rule-based machine learning models to deep neural networks and graph analysis on text, images, and account information. This work compares the existing solutions and proposes an ensemble method to generalize them across different input data and social media platforms. The deployed solution combines deep learning models on unstructured data with statistical machine learning models on structured data. We retrieve both social media account information and multimedia posts from Twitter and Instagram. These data are mapped into feature vectors for an eXtreme Gradient Boosting (XGBoost) classifier. Sixty different topics have been analyzed to build a rule-based gold-standard dataset and to compare the performance of our approach against baseline classifiers. We prove the effectiveness of our work by comparing the accuracy, precision, recall, and F1 score of our model with different configurations and architectures. We obtained an accuracy of 0.91 with our best performing model.
Keywords: deep learning, gradient boosting, image processing, micro-influencers, NLP, social media
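A minimal sketch, assuming the xgboost and scikit-learn packages, of the final stage described above: structured account features are concatenated with embeddings from the deep models and fed to an XGBoost classifier, evaluated with the same four metrics. The feature layout and data are synthetic placeholders, not the authors' dataset.

```python
# Fuse structured features with deep-model embeddings and train an XGBoost classifier.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
n = 2000
structured = rng.normal(size=(n, 8))      # e.g. followers, posts per week, engagement rate
embeddings = rng.normal(size=(n, 32))     # e.g. text/image embeddings from the deep models
X = np.hstack([structured, embeddings])
y = (structured[:, 0] + embeddings[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1, eval_metric="logloss")
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(accuracy_score(y_te, pred), precision_score(y_te, pred),
      recall_score(y_te, pred), f1_score(y_te, pred))
```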
402 Synthesis and Characterization of AFe₂O₄ (A = Ca, Co, Cu) Nano-Spinels: Application to Hydrogen Photochemical Production under Visible Light Irradiation
Authors: H. Medjadji, A. Boulahouache, N. Salhi, A. Boudjemaa, M. Trari
Abstract:
Hydrogen from renewable sources, such as solar, is referred to as green hydrogen. The water-splitting process using semiconductors as photocatalysts has attracted significant attention due to its potential for solving the energy crisis and environmental pollution. Spinel ferrites of the MFe₂O₄ type have attracted broad interest in diverse energy conversion processes, including fuel cells and photoelectrocatalytic water splitting. This work focuses on preparing iron-based nano-spinels AFe₂O₄ (A = Ca, Co, and Cu) as photocatalysts using the nitrate method. These materials were characterized both physically and optically and subsequently tested for hydrogen generation under visible light irradiation. Various techniques were used to investigate the properties of the materials, including thermogravimetric analysis (TGA-DTA), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), UV-visible spectroscopy, scanning electron microscopy with energy dispersive X-ray spectroscopy (SEM-EDX), and X-ray photoelectron spectroscopy (XPS). XRD analysis confirmed the formation of pure phases at 850°C, with crystallite sizes of 31 nm for CaFe₂O₄, 27 nm for CoFe₂O₄, and 40 nm for CuFe₂O₄. The energy gaps, calculated from the recorded diffuse reflection data, are 1.85 eV for CaFe₂O₄, 1.27 eV for CoFe₂O₄, and 1.64 eV for CuFe₂O₄. SEM micrographs showed homogeneous grains with uniform shapes and medium porosity in all samples. EDX elemental analysis confirmed the absence of any contaminating elements, highlighting the high purity of the materials prepared via the nitrate route. XPS spectra revealed the presence of Fe³⁺ and O in all samples. Additionally, XPS analysis revealed the presence of Ca²⁺, Co²⁺, and Cu²⁺ on the surfaces of the CaFe₂O₄, CoFe₂O₄, and CuFe₂O₄ spinels, respectively. The photocatalytic activity was successfully evaluated by measuring H₂ evolution through the water-splitting process. The best performance was achieved with CaFe₂O₄ in a neutral medium (pH ~ 7), yielding 189 µmol at an optimal temperature of ~50°C. The highest hydrogen production rates for CoFe₂O₄ and CuFe₂O₄ were obtained at pH ~ 12, with release rates of 65 and 85 µmol, respectively, under visible light irradiation at the same optimal temperature. Various conditions were investigated, including the pH of the solution, the use of hole scavengers, and recyclability.
Keywords: hydrogen, MFe₂O₄, nitrate route, spinel ferrite
401 An Investigation into Computer Vision Methods to Identify Material Other Than Grapes in Harvested Wine Grape Loads
Authors: Riaan Kleyn
Abstract:
Mass wine production companies across the globe are supplied with grapes by winegrowers who predominantly utilize mechanical harvesting machines to harvest wine grapes. Mechanical harvesting accelerates the rate at which grapes are harvested, allowing grapes to be delivered faster to meet the demands of wine cellars. The disadvantage of the mechanical harvesting method is the inclusion of material-other-than-grapes (MOG) in the harvested wine grape loads arriving at the cellar, which degrades the quality of the wine that can be produced. Currently, wine cellars do not have a method to determine the amount of MOG present within wine grape loads. This paper seeks to find an optimal computer vision method capable of detecting the amount of MOG within a wine grape load. A MOG detection method will encourage winegrowers to deliver MOG-free wine grape loads to avoid penalties, which will indirectly enhance the quality of the wine to be produced. Traditional image segmentation methods were compared to deep learning segmentation methods based on images of wine grape loads that were captured at a wine cellar. The Mask R-CNN model with a ResNet-50 convolutional neural network backbone emerged as the optimal method in this study to determine the amount of MOG in an image of a wine grape load. Furthermore, a statistical analysis was conducted to determine how the MOG on the surface of a grape load relates to the mass of MOG within the corresponding grape load.
Keywords: computer vision, wine grapes, machine learning, machine harvested grapes
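A minimal NumPy sketch of how the amount of MOG could be measured from the output of an instance segmentation model such as Mask R-CNN: union the predicted masks per class and take the MOG share of the visible load. The masks here are synthetic placeholders for real model output.

```python
# Estimate the MOG fraction of the visible load surface from class-wise instance masks.
import numpy as np

def mog_fraction(grape_masks, mog_masks):
    """Each argument is a list of boolean (H, W) instance masks for one class."""
    grape_px = np.logical_or.reduce(grape_masks).sum() if grape_masks else 0
    mog_px = np.logical_or.reduce(mog_masks).sum() if mog_masks else 0
    total = grape_px + mog_px
    return mog_px / total if total else 0.0

rng = np.random.default_rng(0)
grapes = [rng.random((480, 640)) > 0.6 for _ in range(3)]   # synthetic grape instances
mog = [rng.random((480, 640)) > 0.97]                       # synthetic MOG instance
print(f"MOG fraction of visible load: {mog_fraction(grapes, mog):.3f}")
```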
400 Preprocessing and Fusion of Multiple Representation of Finger Vein patterns using Conventional and Machine Learning techniques
Authors: Tomas Trainys, Algimantas Venckauskas
Abstract:
The application of biometric features to cryptography for human identification and authentication is a widely studied and promising area in the development of high-reliability cryptosystems. Biometric cryptosystems are typically designed for pattern recognition, which involves acquiring biometric data from an individual, extracting a feature set, comparing the feature set against the set stored in the vault, and giving the result of the comparison. Preprocessing and fusion of biometric data are the most important phases in generating a feature vector for key generation or authentication. Fusion of biometric features is critical for achieving a higher level of security and helps prevent possible spoofing attacks. The paper focuses on the tasks of initial processing and fusion of multiple representations of finger vein modality patterns. These tasks are solved by applying conventional image preprocessing methods and machine learning techniques, including a Convolutional Neural Network (CNN) method for image segmentation and feature extraction. The article presents a method for generating sets of biometric features from a finger vein network using several instances of the same modality. The extracted feature sets were fused at the feature level. The proposed method was tested and compared with the performance and accuracy results of other authors.
Keywords: bio-cryptography, biometrics, cryptographic key generation, data fusion, information security, SVM, pattern recognition, finger vein method
399 Deep Vision: A Robust Dominant Colour Extraction Framework for T-Shirts Based on Semantic Segmentation
Authors: Kishore Kumar R., Kaustav Sengupta, Shalini Sood Sehgal, Poornima Santhanam
Abstract:
Fashion is a form of human expression that is constantly changing. One of the prime factors that consistently influences fashion is the change in colour preferences. The role of colour in our everyday lives is very significant. It subconsciously explains a lot about one's mindset and mood. Analyzing colours by extracting them from outfit images is a critical study for examining individual/consumer behaviour. Several research works have been carried out on extracting colours from images but, to the best of our knowledge, there were no studies that extract colours from specific apparel and identify colour patterns geographically. This paper proposes a framework for accurately extracting colours from T-shirt images and predicting dominant colours geographically. The proposed method consists of two stages: first, a U-Net deep learning model is adopted to segment the T-shirts from the images. Second, the colours are extracted only from the T-shirt segments. The proposed method employs the iMaterialist (Fashion) 2019 dataset for the semantic segmentation task. The proposed framework also includes a mechanism for gathering data and analyzing India's general colour preferences. From this research, it was observed that black and grey are the dominant colours in different regions of India. The proposed method can be adapted to study fashion's evolving colour preferences.
Keywords: colour analysis in T-shirts, convolutional neural network, encoder-decoder, k-means clustering, semantic segmentation, U-Net model
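A minimal sketch, assuming scikit-learn and a binary T-shirt mask from the U-Net stage, of the second stage: k-means clustering over the masked pixels only, returning the dominant colour and its share. The image and mask below are synthetic.

```python
# Cluster only the T-shirt pixels and rank cluster centres by pixel count.
import numpy as np
from sklearn.cluster import KMeans

def dominant_colours(image_rgb, mask, k=3):
    """image_rgb: (H, W, 3) uint8; mask: (H, W) bool selecting T-shirt pixels."""
    pixels = image_rgb[mask].astype(float)                # (N, 3) masked pixel colours
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=k)
    order = np.argsort(counts)[::-1]                      # most frequent cluster first
    return km.cluster_centers_[order].astype(int), counts[order] / counts.sum()

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
tshirt_mask = np.zeros((128, 128), dtype=bool)
tshirt_mask[32:96, 32:96] = True                          # stand-in for the U-Net output
colours, shares = dominant_colours(img, tshirt_mask)
print(colours[0], round(float(shares[0]), 2))
```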
Procedia PDF Downloads 111398 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain
Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma
Abstract:
In this paper, we propose an implementation of an optimization-based Extreme Learning Machine (ELM) for watermarking the B-channel of a color image in the discrete wavelet transform (DWT) domain. ELM is a regularization algorithm based on generalized single-hidden-layer feed-forward neural networks (SLFNs) whose hidden-layer parameters, generally called the feature mapping in the context of ELM, need not be tuned every time. This paper shows the embedding and extraction of the watermark with the help of ELM, and the results are compared with machine learning models already used for watermarking. Here, a cover image is divided into a suitable number of non-overlapping blocks of the required size, and DWT is applied to each block to transform it into the low-frequency sub-band domain. ELM provides a unified learning platform in which the feature mapping, that is, the mapping between the hidden layer and the output layer of the SLFN, is used for watermark embedding and extraction in the cover image. ELM has widespread applications, from binary and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance with very small complexity. The efficacy of the optimization-based ELM algorithm is measured using quantitative and qualitative parameters on the watermarked image, even when the image is subjected to different types of geometrical and conventional attacks.Keywords: BER, DWT, extreme learning machine (ELM), PSNR
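To make the ELM idea concrete, the sketch below implements a generic ridge-regularised ELM with a random, untuned feature mapping and a closed-form solution for the output weights. It does not reproduce the paper's DWT embedding/extraction pipeline or its optimization step; the toy data and hyperparameters are assumptions.

```python
# Minimal sketch of a regularised Extreme Learning Machine (ELM): random hidden-layer
# parameters (never tuned) and a closed-form, ridge-regularised output-weight solution.
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden: int = 200, C: float = 1.0, seed: int = 0):
        self.n_hidden, self.C, self.seed = n_hidden, C, seed

    def _hidden(self, X: np.ndarray) -> np.ndarray:
        # feature mapping: fixed random weights/biases with a sigmoid activation
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X: np.ndarray, T: np.ndarray) -> "ELMRegressor":
        rng = np.random.default_rng(self.seed)
        self.W = rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        # beta = (H^T H + I/C)^(-1) H^T T   (regularised least squares)
        A = H.T @ H + np.eye(self.n_hidden) / self.C
        self.beta = np.linalg.solve(A, H.T @ T)
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        return self._hidden(X) @ self.beta

# toy usage: map DWT-like block features to watermark bits (synthetic data)
rng = np.random.default_rng(1)
X, T = rng.standard_normal((500, 16)), rng.integers(0, 2, (500, 1)).astype(float)
bits = (ELMRegressor(n_hidden=100, C=10.0).fit(X, T).predict(X) > 0.5).astype(int)
```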
Procedia PDF Downloads 311397 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles
Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi
Abstract:
Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gear box, engine and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. The study uses vehicle-specific and operational environmental information for around 40,000 vehicles, such as road slopes and driver profiles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data is used to investigate the accuracy of the machine learning algorithms Linear Regression (LR), K-Nearest Neighbor (KNN) and Artificial Neural Networks (ANN) in predicting fuel consumption for heavy-duty vehicles. Performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements, and the algorithms are compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data and 4.2% on operational data.Keywords: artificial neural networks, fuel consumption, Friedman test, machine learning, statistical hypothesis testing
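A hedged sketch of the evaluation protocol, nested cross-validation of LR, KNN and a small neural network on synthetic data with scikit-learn, is given below. The feature dimensions and hyperparameter grids are assumptions, and the Friedman-type hypothesis test reported in the paper is omitted.

```python
# Minimal sketch (synthetic data): nested CV comparison of LR, KNN and an MLP,
# reporting mean relative prediction error as in the abstract's evaluation.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_predict
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=600, n_features=12, noise=5.0, random_state=0)
y = y - y.min() + 10.0  # keep targets positive so relative error is well defined

models = {
    "LR": (make_pipeline(StandardScaler(), LinearRegression()), {}),
    "KNN": (make_pipeline(StandardScaler(), KNeighborsRegressor()),
            {"kneighborsregressor__n_neighbors": [3, 5, 9]}),
    "ANN": (make_pipeline(StandardScaler(), MLPRegressor(max_iter=2000, random_state=0)),
            {"mlpregressor__hidden_layer_sizes": [(32,), (64, 32)]}),
}

outer = KFold(n_splits=5, shuffle=True, random_state=0)
for name, (pipe, grid) in models.items():
    inner = GridSearchCV(pipe, grid, cv=3)           # inner loop: hyperparameter tuning
    pred = cross_val_predict(inner, X, y, cv=outer)  # outer loop: unbiased error estimate
    rel_err = np.mean(np.abs(pred - y) / np.abs(y))
    print(f"{name}: mean relative error {rel_err:.3%}")
```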
Procedia PDF Downloads 178396 Formulation and Evaluation of Glimepiride (GMP)-Solid Nanodispersion and Nanodispersed Tablets
Authors: Ahmed Abdel Bary, Omneya Khowessah, Mojahed Al-Jamrah
Abstract:
Introduction: The major challenge in the design of oral dosage forms lies with their poor bioavailability. The most frequent causes of low oral bioavailability are poor solubility and low permeability. The aim of this study was to develop a solid nanodispersed tablet formulation of Glimepiride to enhance its solubility and bioavailability. Methodology: Solid nanodispersions of Glimepiride (GMP) were prepared using two different ratios of two different carriers, namely PEG 6000 and Pluronic F127, and by adopting two different techniques, namely the solvent evaporation technique and the fusion technique. A 2³ full factorial design was adopted to investigate the influence of formulation variables on the properties of the prepared nanodispersions. The best chosen formula of nanodispersed powder was formulated into tablets by direct compression. Differential Scanning Calorimetry (DSC) and Fourier Transform Infra-Red (FTIR) analyses were conducted for thermal behavior and surface structure characterization, respectively. The zeta potential and particle size of the prepared glimepiride nanodispersions were determined. The prepared solid nanodispersions and solid nanodispersed tablets of GMP were evaluated in terms of pre-compression and post-compression parameters, respectively. Results: The DSC and FTIR studies revealed no interaction between GMP and the excipients used. Based on the values of the pre-compression parameters, the prepared solid nanodispersion powder blends showed poor to excellent flow properties. The values of the other evaluated pre-compression parameters of the prepared solid nanodispersions were within pharmacopoeial limits. The drug content of the prepared nanodispersions ranged from 89.6 ± 0.3% to 99.9 ± 0.5%, the particle size ranged from 111.5 nm to 492.3 nm, and the zeta potential (ζ) values of the prepared GMP solid nanodispersion formulae (F1-F8) ranged from -8.28 ± 3.62 mV to -78 ± 11.4 mV. The in-vitro dissolution studies of the prepared solid nanodispersed tablets of GMP showed that the GMP-Pluronic F127 combination (F8) exhibited the best extent of drug release compared to the other formulations and to the marketed product. One-way ANOVA of the percent of drug released from the prepared GMP nanodispersion formulae (F1-F8) after 20 and 60 minutes showed significant differences between the GMP nanodispersed tablet formulae (P<0.05). Conclusion: Preparation of glimepiride as nanodispersed particles proved to be a promising tool for enhancing the poor solubility of glimepiride.Keywords: glimepiride, solid nanodispersion, nanodispersed tablets, poorly water soluble drugs
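As an illustration of the statistical comparison mentioned above, the sketch below runs a one-way ANOVA with SciPy on release percentages for eight formulations. The numbers are invented for demonstration only and are not the study's data.

```python
# Minimal illustrative sketch (synthetic numbers): one-way ANOVA comparing percent
# drug released across formulations F1-F8, as in the abstract's statistical analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# three assumed replicate release measurements (%) per formulation after 60 minutes
release_60min = [rng.normal(loc=mu, scale=2.0, size=3)
                 for mu in (55, 58, 62, 60, 67, 71, 75, 90)]  # F1..F8, F8 highest

f_stat, p_value = stats.f_oneway(*release_60min)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 indicates a significant difference
```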
Procedia PDF Downloads 488395 Multimodal Sentiment Analysis With Web Based Application
Authors: Shreyansh Singh, Afroz Ahmed
Abstract:
Sentiment analysis aims to automatically reveal the underlying attitude we hold towards an entity. Aggregating this sentiment over a population amounts to sentiment polling and has various applications. Current text-based sentiment analysis relies on word embeddings and machine learning models that learn sentiment from large text corpora. Sentiment analysis from text is now widely used for customer satisfaction assessment and brand perception analysis. With the growth of social media, multimodal sentiment analysis is set to bring new opportunities through complementary data streams that improve on and go beyond text-based sentiment analysis using new transform methods. Since sentiment can be detected through the affective traces it leaves, such as facial and vocal expressions, multimodal sentiment analysis offers promising avenues for analyzing facial and vocal expressions in addition to the transcript or textual content. These approaches use Recurrent Neural Networks (RNNs) with LSTM units to increase their performance. In this study, we define sentiment and the problem of multimodal sentiment analysis, and review recent developments in multimodal sentiment analysis across various domains, including spoken reviews, images, video blogs, and human-machine and human-human interactions. Challenges and opportunities of this emerging field are also discussed, supporting our thesis that multimodal sentiment analysis holds significant untapped potential.Keywords: sentiment analysis, RNN, LSTM, word embeddings
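A minimal sketch of the kind of LSTM-based multimodal fusion discussed above is shown below: one LSTM per modality (text, audio, visual) with the final hidden states concatenated before a sentiment classifier. The per-modality feature dimensions and the three-class output are assumptions, not the authors' architecture.

```python
# Minimal sketch (assumed architecture): LSTM encoders per modality with
# feature-level fusion of their final hidden states for sentiment classification.
import torch
import torch.nn as nn

class MultimodalSentiment(nn.Module):
    def __init__(self, text_dim=300, audio_dim=74, visual_dim=35, hidden=64, n_classes=3):
        super().__init__()
        self.text_lstm = nn.LSTM(text_dim, hidden, batch_first=True)
        self.audio_lstm = nn.LSTM(audio_dim, hidden, batch_first=True)
        self.visual_lstm = nn.LSTM(visual_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(3 * hidden, n_classes)

    def forward(self, text, audio, visual):
        # each input: (batch, time, features); keep only the last hidden state
        _, (h_t, _) = self.text_lstm(text)
        _, (h_a, _) = self.audio_lstm(audio)
        _, (h_v, _) = self.visual_lstm(visual)
        fused = torch.cat([h_t[-1], h_a[-1], h_v[-1]], dim=1)  # feature-level fusion
        return self.classifier(fused)

# toy forward pass with random word-embedding, acoustic and facial features
model = MultimodalSentiment()
logits = model(torch.randn(8, 20, 300), torch.randn(8, 20, 74), torch.randn(8, 20, 35))
print(logits.shape)  # torch.Size([8, 3])
```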
Procedia PDF Downloads 119