Search results for: Wireless Sensor Network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6167

1937 An Economic Way to Toughen Poly Acrylic Acid Superabsorbent Polymer Using Hyper Branched Polymer

Authors: Nazila Dehbari, Javad Tavakoli, Yakani Kambu, Youhong Tang

Abstract:

Superabsorbent polymer hydrogels (SAPs), as environment-sensitive materials, have been widely used in industrial and biomedical applications due to their unique structure and capabilities. The poor mechanical properties of SAPs, which are closely related to their large volume change, are a major weakness for high-tech applications. Improving the mechanical properties of SAPs through toughening methods, by mixing different types of cross-linked polymers or introducing energy-dissipating mechanisms, has therefore attracted considerable attention. In this work, in order to change the intrinsically brittle character of commercial poly(acrylic acid) (here the SAP) into a semi-ductile one, a commercially available, highly branched, tree-like dendritic polymer with numerous –OH end groups, known as a hyper-branched polymer (HB), was added to the PAA-SAP system in a single-step, cost-effective and environmentally friendly solvent-casting method. Samples were characterized by FTIR, SEM and TEM, and their physico-chemical characterization, including swelling capability, hydraulic permeability, surface tension and thermal properties, was performed. Toughness energy, stiffness, elongation at break, viscoelastic properties and extensibility were the mechanical properties measured, characterized as a function of the samples' lateral crack length at different HB concentrations. The addition of HB to PAA-SAP significantly improved the mechanical and surface properties. The SAP-HB samples showed an equilibrium swelling ratio about 25% higher than that of the SAPs; however, the swelling kinetics remained unchanged, as neither the initial rate of water uptake nor the equilibrium time was affected. Thermal stability analysis showed that HB participates in hybrid network formation while improving the mechanical properties. TEM characterization showed that the HB polymer binder aggregates into nanospheres with diameters in the range of 10–200 nm, so good dispersion in the SAP matrix occurred, as expected, because the hydrophilic character of the numerous hydroxyl end groups of HB enhances its compatibility with PAA-SAP. As the profuse –OH groups in HB can react with the –COOH groups in the PAA-SAP during the curing process, the formation of a 2D structure in the SAP-HB can be attributed to the strong interfacial adhesion between HB and the PAA-SAP matrix, which hinders the activity of the PAA chains (SEM analysis). The FTIR spectra showed new peaks at 1041 and 1121 cm-1, attributed to the C–O(–OH) stretching of hydroxyl groups and the O–C stretching of ester groups of the HB polymer binder, indicating the incorporation of HB into the SAP structure. The HB polymer has significant effects on the final mechanical properties: the brittleness of the PAA hydrogels is decreased by introducing HB, as the fracture energy of the hydrogels increased from 8.67 to 26.67, and the stretchability of PAA-HBs was enhanced about 10-fold, though it decreased with increasing notch depth.

Keywords: superabsorbent polymer, toughening, viscoelastic properties, hydrogel network

Procedia PDF Downloads 322
1936 A Blockchain-Based Privacy-Preserving Physical Delivery System

Authors: Shahin Zanbaghi, Saeed Samet

Abstract:

The internet has transformed the way we shop. Previously, most of our purchases came in the form of shopping trips to a nearby store; now, it’s as easy as clicking a mouse. But with great convenience comes great responsibility: we have to be constantly vigilant about our personal information. In this work, our proposed approach is to encrypt the information printed on physical packages, which includes personal information in plain text, using a symmetric encryption algorithm, and then store that encrypted information in a Blockchain network rather than in companies' or corporations' centralized databases. We present, implement and assess a blockchain-based system using Ethereum smart contracts, together with detailed algorithms that explain how our smart contract works. We present the security, cost, and performance analysis of the proposed method. Our work indicates that the proposed solution is economically attainable and provides data integrity, security, transparency, and data traceability.
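
A minimal Python sketch of the two ideas combined above: symmetric encryption of the package label and a commit-reveal step whose digest could be stored on-chain. It uses the `cryptography` package's Fernet cipher and `hashlib`; the field layout and helper names are illustrative assumptions, not the authors' implementation.

```python
import hashlib
import json
import secrets

from cryptography.fernet import Fernet  # symmetric (AES-based) cipher


def encrypt_label(delivery_info: dict, key: bytes) -> bytes:
    """Encrypt the plaintext label so only the key holder can read it."""
    return Fernet(key).encrypt(json.dumps(delivery_info).encode())


def commit(ciphertext: bytes) -> tuple[bytes, str]:
    """Commit phase: hash ciphertext with a random nonce; the digest is
    what would be written to the smart contract."""
    nonce = secrets.token_bytes(16)
    return nonce, hashlib.sha256(nonce + ciphertext).hexdigest()


def reveal_ok(ciphertext: bytes, nonce: bytes, digest: str) -> bool:
    """Reveal phase: verify the ciphertext against the on-chain commitment
    without learning the plaintext."""
    return hashlib.sha256(nonce + ciphertext).hexdigest() == digest


key = Fernet.generate_key()
ct = encrypt_label({"name": "A. Buyer", "address": "123 Example St."}, key)
nonce, digest = commit(ct)
assert reveal_ok(ct, nonce, digest)
```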

Keywords: blockchain, Ethereum, smart contract, commit-reveal scheme

Procedia PDF Downloads 149
1935 Policy Views of Sustainable Integrated Solution for Increased Synergy between Light Railways and Electrical Distribution Network

Authors: Mansoureh Zangiabadi, Shamil Velji, Rajendra Kelkar, Neal Wade, Volker Pickert

Abstract:

The EU has set itself a long-term goal of reducing greenhouse gas emissions by 80-95% compared with 1990 levels by 2050, as set out in the Energy Roadmap 2050. This paper reports on the European Union H2020-funded E-Lobster project, which demonstrates tools and technologies, software and hardware, for integrating the grid distribution and railway power systems with power electronics technologies (Smart Soft Open Point, sSOP) and local energy storage. In this context, the paper describes the existing policies and regulatory frameworks of the energy market at the European level, with a particular focus at the national level on the countries where the members of the consortium are located and where the demonstration activities will be implemented. Taking into account the disciplinary approach of E-Lobster, the main policy areas investigated include electricity, the energy market, energy efficiency, transport and smart cities. Energy storage will play a key role in enabling the EU to develop a low-carbon electricity system. In recent years, Energy Storage Systems (ESSs) have been gaining importance due to emerging applications, especially the electrification of the transport sector and the grid integration of volatile renewables. The need for storage systems has led to performance improvements and a significant price decline in ESS technologies, which opens a new market in which ESSs can be a reliable and economical solution. One such emerging market for ESSs is R+G management, which will be investigated and demonstrated within the E-Lobster project. The surplus of energy in one type of power system (e.g., due to metro braking) might be directly transferred to the other power system (or vice versa); however, this usually happens at unfavourable instants, when the recipient does not need additional power. Thus, the role of the ESS is to enhance the advantages of interconnecting railway power systems and distribution grids by offering an additional energy buffer. Consequently, the surplus/deficit of energy in, e.g., railway power systems need not be immediately transferred to/from the distribution grid; it can be stored and used when it is really needed. This will assure better energy exchange management between the railway power systems and distribution grids and lead to more efficient loss reduction. In this framework, identifying the existing policies and regulatory frameworks is crucial for the project activities and for the future development of business models for the E-Lobster solutions. The projections carried out by the European Commission, the Member States and stakeholders, and their analysis, indicate some trends, challenges, opportunities and structural changes needed to design the policy measures that provide the appropriate framework for investors. This study will be used as a reference for the discussion in the workshops with stakeholders (DSOs and transport managers) envisaged in the E-Lobster project.

Keywords: light railway, electrical distribution network, Electrical Energy Storage, policy

Procedia PDF Downloads 135
1934 Investigated Optimization of Davidson Path Loss Model for Digital Terrestrial Television (DTTV) Propagation in Urban Area

Authors: Pitak Keawbunsong, Sathaporn Promwong

Abstract:

This paper presents an investigation of the efficiency of an optimized Davidson path loss model, with the aim of finding a suitable path loss model for designing and planning DTTV propagation for small and medium urban areas in southern Thailand. Hadyai City in Songkla Province is chosen as the case study for collecting electric field strength measurements. The optimization is carried out using the least squares method, while the efficiency index is the statistical relative error (RE). The output of the least squares method is the offset and slope for each frequency, which are used in the optimization process. The statistical results show that the RE of the original Davidson model is lowest when compared with the optimized Davidson and Hata models. Thus, the original Davidson path loss model is the most accurate and is adopted for planning the propagation network design.
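
To make the least-squares step concrete, a short Python sketch of fitting an offset and slope to measured path-loss data and reporting the relative error; the drive-test values and the log-distance form of the model are illustrative assumptions, not the authors' dataset.

```python
import numpy as np

# Hypothetical drive-test data: distance d (km) and measured path loss (dB).
d = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
pl_measured = np.array([92.0, 101.5, 110.8, 121.0, 130.4, 140.1])

# Fit PL(d) = offset + slope * log10(d) by ordinary least squares.
A = np.column_stack([np.ones_like(d), np.log10(d)])
(offset, slope), *_ = np.linalg.lstsq(A, pl_measured, rcond=None)

pl_model = A @ np.array([offset, slope])
relative_error = np.mean(np.abs(pl_model - pl_measured) / pl_measured) * 100
print(f"offset={offset:.1f} dB, slope={slope:.1f} dB/decade, RE={relative_error:.2f}%")
```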

Keywords: DTTV propagation, path loss model, Davidson model, least square method

Procedia PDF Downloads 338
1933 A Modified NSGA-II Algorithm for Solving Multi-Objective Flexible Job Shop Scheduling Problem

Authors: Aydin Teymourifar, Gurkan Ozturk, Ozan Bahadir

Abstract:

NSGA-II is one of the most well-known and most widely used evolutionary algorithms. In addition to its newer versions, such as NSGA-III, there are several modified variants of this algorithm in the literature. In this paper, a hybrid NSGA-II algorithm is suggested for solving the multi-objective flexible job shop scheduling problem. For a better search, new neighborhood-based crossover and mutation operators are defined. To create new generations, neighbors of the individuals selected by tournament selection are constructed. Also, at the end of each iteration, before sorting, neighbors of a certain number of good solutions are derived, except for the solutions protected by elitism. The neighbors are generated using a constraint-based neural network that uses various constructs. The non-dominated sorting and crowding distance operators are the same as in classic NSGA-II. A comparison based on several multi-objective benchmarks from the literature shows the efficiency of the algorithm.
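
Since the non-dominated sorting and crowding-distance operators are stated to follow classic NSGA-II, a minimal Python sketch of the crowding-distance computation for one front is given below; the objective values (e.g., makespan vs. total workload) are invented for illustration.

```python
import numpy as np


def crowding_distance(front: np.ndarray) -> np.ndarray:
    """front: (n_solutions, n_objectives) objective values of one
    non-dominated front. Returns the NSGA-II crowding distance."""
    n, m = front.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(front[:, j])
        span = front[order[-1], j] - front[order[0], j]
        dist[order[0]] = dist[order[-1]] = np.inf  # boundary solutions are kept
        if span == 0:
            continue
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1], j] - front[order[k - 1], j]) / span
    return dist


# Example: three schedules on one front, objectives = (makespan, total workload).
print(crowding_distance(np.array([[40.0, 120.0], [45.0, 110.0], [55.0, 100.0]])))
```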

Keywords: flexible job shop scheduling problem, multi-objective optimization, NSGA-II algorithm, neighborhood structures

Procedia PDF Downloads 229
1932 Freight Time and Cost Optimization in Complex Logistics Networks, Using a Dimensional Reduction Method and K-Means Algorithm

Authors: Egemen Sert, Leila Hedayatifar, Rachel A. Rigg, Amir Akhavan, Olha Buchel, Dominic Elias Saadi, Aabir Abubaker Kar, Alfredo J. Morales, Yaneer Bar-Yam

Abstract:

The complexity of providing timely and cost-effective distribution of finished goods from industrial facilities to customers makes effective operational coordination difficult, yet effectiveness is crucial for maintaining customer service levels and sustaining a business. Logistics planning becomes increasingly complex with growing numbers of customers, varied geographical locations, the uncertainty of future orders, and sometimes extreme competitive pressure to reduce inventory costs. Linear optimization methods become cumbersome or intractable due to the large number of variables and nonlinear dependencies involved. Here we develop a complex systems approach to optimizing logistics networks based upon dimensional reduction methods and apply our approach to a case study of a manufacturing company. In order to characterize the complexity in customer behavior, we define a “customer space” in which individual customer behavior is described by only the two most relevant dimensions: the distance to production facilities over current transportation routes and the customer's demand frequency. These dimensions provide essential insight into the domain of effective strategies for customers: direct and indirect strategies. In the direct strategy, goods are sent to the customer directly from a production facility using box or bulk trucks. In the indirect strategy, in advance of an order by the customer, goods are shipped to an external warehouse near the customer using trains and then "last-mile" shipped by trucks when orders are placed. Each strategy applies to an area of the customer space, with an indeterminate boundary between them; in general, specific company policies determine the location of the boundary. We then identify the optimal delivery strategy for each customer by constructing a detailed model of the costs of transportation and temporary storage in a set of specified external warehouses. Customer spaces help give an aggregate view of customer behaviors and characteristics. They allow policymakers to compare customers and develop strategies based on the aggregate behavior of the system as a whole. In addition to optimization over existing facilities, we propose additional warehouse locations using customer logistics and the k-means algorithm. We apply these methods to a medium-sized American manufacturing company with a particular logistics network, consisting of multiple production facilities, external warehouses, and customers, along with three types of shipment methods (box truck, bulk truck and train). For the case study, our method forecasts 10.5% savings on yearly transportation costs and an additional 4.6% savings with three new warehouses.
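
A minimal sketch of the k-means step used to propose additional warehouse sites, using scikit-learn; the customer coordinates are invented, and three clusters are chosen only to mirror the three new warehouses mentioned above.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer locations (x, y in km), one row per customer/order.
rng = np.random.default_rng(0)
customers = np.vstack([
    rng.normal(loc=(0, 0), scale=30, size=(200, 2)),
    rng.normal(loc=(250, 80), scale=40, size=(150, 2)),
    rng.normal(loc=(120, 300), scale=25, size=(100, 2)),
])

# Three clusters -> three candidate warehouse sites at the centroids.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print("candidate warehouse coordinates:\n", km.cluster_centers_)
```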

Keywords: logistics network optimization, direct and indirect strategies, K-means algorithm, dimensional reduction

Procedia PDF Downloads 139
1931 Telecom Infrastructure Outsourcing: An Innovative Approach

Authors: Irfan Zafar

Abstract:

Over the years, the telecom industry in the country has shown a lot of progress in terms of infrastructure development, coupled with the availability of telecom services. This has, however, led to cut-throat competition among the various operators, resulting in reduced tariffs for customers. Profit margins have shrunk, leading operators to look for other avenues and adopt new models while keeping the quality of service intact. Outsourcing of the network and its resources is one such model, and it has shown promising benefits, including lower costs, less risk, higher levels of customer support and engagement, predictable expenses, access to emerging technologies, a highly skilled workforce, adaptability, and a focus on the core business while reducing capital costs. A lot of research has been done on outsourcing in terms of the reasons for outsourcing and its benefits. This study, however, is an attempt to analyze the effects of outsourcing on an organization's performance (in the telecommunication sector) considering the variables (1) cost reduction, (2) organizational performance, (3) flexibility, (4) employee performance, (5) access to specialized skills and technology, and (6) outsourcing risks.

Keywords: outsourcing, ICT, telecommunication, IT, networking

Procedia PDF Downloads 398
1930 Fault Tree Analysis and Bayesian Network for Fire and Explosion of Crude Oil Tanks: Case Study

Authors: B. Zerouali, M. Kara, B. Hamaidi, H. Mahdjoub, S. Rouabhia

Abstract:

This paper presents a safety analysis for crude oil tanks intended to prevent undesirable events that may cause catastrophic accidents. The estimation of the probability of damage to industrial systems is carried out through a series of steps, in accordance with a specific methodology. In this context, the work involves developing an assessment and risk analysis tool for the crude oil tank system, based primarily on identifying the various potential causes of crude oil tank fires and explosions using Fault Tree Analysis (FTA), followed by improved risk modelling with Bayesian Networks (BNs). The Bayesian approach to evaluating failures and quantifying risks is a dynamic analysis approach and was therefore selected as the analytical tool in this study. The research concludes that Bayesian networks provide a distinct and effective method for safety analysis because of the flexibility of their structure, which makes them suitable for a wide variety of accident scenarios.
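
To make the fault-tree step concrete, a small Python sketch that propagates basic-event probabilities through AND/OR gates to a top event; the gate structure and probabilities are invented for illustration and are not the crude-oil-tank tree used in the study.

```python
def p_or(*probs: float) -> float:
    """OR gate for independent events: 1 - prod(1 - p_i)."""
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out


def p_and(*probs: float) -> float:
    """AND gate for independent events: prod(p_i)."""
    out = 1.0
    for p in probs:
        out *= p
    return out


# Illustrative basic events (annual probabilities).
p_leak, p_static, p_hot_work, p_lightning = 1e-2, 5e-3, 2e-3, 1e-3

ignition = p_or(p_static, p_hot_work, p_lightning)   # any ignition source
fire_explosion = p_and(p_leak, ignition)             # leak AND ignition
print(f"P(top event) = {fire_explosion:.2e}")
```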

Keywords: Bayesian networks, crude oil tank, fault tree, prediction, safety

Procedia PDF Downloads 660
1929 Projection of Solar Radiation for the Extreme South of Brazil

Authors: Elison Eduardo Jardim Bierhals, Claudineia Brazil, Rafael Haag, Elton Rossini

Abstract:

This work aims to validate and produce projections of solar energy for Brazil for the period from 2025 to 2100, as simulated by the HadGEM2-AO (Hadley Centre Global Environment Model 2, Atmosphere-Ocean) general circulation model of the UK Met Office Hadley Centre, which belongs to Phase 5 of the Coupled Model Intercomparison Project (CMIP5). The simulation results of the model are compared with monthly data from 2006 to 2013 measured by a network of meteorological stations of the National Institute of Meteorology (INMET). The performance of HadGEM2-AO is evaluated using the efficiency coefficient (CEF) and the bias. The results are presented in tables and maps. In the most pessimistic scenario, RCP 8.5, HadGEM2-AO showed very good accuracy, presenting efficiency coefficients between 0.94 and 0.98, close to a perfect fit. Solar radiation, which shows a roughly horizontal (stable) trend, is a climatic alternative for some regions of the Brazilian scenario, especially in spring.
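
The evaluation metrics can be computed as sketched below in Python; the Nash-Sutcliffe-style definition of the efficiency coefficient and the sample monthly series are assumptions, since the abstract does not give the exact formula or the data.

```python
import numpy as np


def efficiency_coefficient(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe-style CEF: 1 is a perfect fit, 0 means no better
    than the observed mean (assumed definition)."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)


def bias(obs: np.ndarray, sim: np.ndarray) -> float:
    return float(np.mean(sim - obs))


# Illustrative monthly solar radiation series (MJ/m^2/day).
obs = np.array([22.1, 19.4, 16.8, 13.2, 10.9, 9.8, 10.5, 12.7, 15.6, 18.9, 21.3, 22.8])
sim = np.array([21.5, 19.9, 16.1, 13.8, 11.2, 9.5, 10.9, 12.1, 15.9, 18.2, 21.9, 23.4])
print(f"CEF = {efficiency_coefficient(obs, sim):.3f}, bias = {bias(obs, sim):.2f}")
```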

Keywords: climate change, projections, solar radiation, climate change scenarios

Procedia PDF Downloads 151
1928 Adaptive Process Monitoring for Time-Varying Situations Using Statistical Learning Algorithms

Authors: Seulki Lee, Seoung Bum Kim

Abstract:

Statistical process control (SPC) is a practical and effective method for quality control. The most important and widely used technique in SPC is the control chart. The main goal of a control chart is to detect any assignable changes that affect the quality output. Most conventional control charts, such as Hotelling’s T2 charts, are based on the assumption that the quality characteristics follow a multivariate normal distribution. However, in modern complicated manufacturing systems, appropriate control chart techniques that can efficiently handle nonnormal processes are required. To overcome the shortcomings of conventional control charts for nonnormal processes, several methods have been proposed that combine statistical learning algorithms and multivariate control charts. Statistical learning-based control charts, such as support vector data description (SVDD)-based charts and k-nearest neighbors-based charts, have proven their improved performance in nonnormal situations compared to that of the T2 chart. Besides nonnormality, time-varying operations are also quite common in real manufacturing fields because of various factors such as product and set-point changes, seasonal variations, catalyst degradation, and sensor drifting. However, traditional control charts cannot accommodate future condition changes of the process because they are formulated based on the data recorded in the early stage of the process. In the present paper, we propose an SVDD-based control chart that is capable of adaptively monitoring time-varying and nonnormal processes. We reformulated the SVDD algorithm into a time-adaptive SVDD algorithm by adding a weighting factor that reflects time-varying situations. Moreover, we defined the updating region for an efficient model-updating structure of the control chart. The proposed control chart simultaneously allows efficient model updates and timely detection of out-of-control signals. The effectiveness and applicability of the proposed chart were demonstrated through experiments with simulated data and real data from the metal frame process in mobile device manufacturing.
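
A minimal sketch of a one-class boundary used as a nonparametric control chart: scikit-learn's OneClassSVM (equivalent to SVDD with an RBF kernel) stands in for the authors' time-adaptive SVDD, and the exponential sample re-weighting shown here is only a simplified illustration of the time-varying idea, not the proposed updating scheme.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# Phase I: in-control (nonnormal) training data, two quality characteristics.
phase1 = np.exp(rng.normal(size=(300, 2)) * 0.3)       # lognormal-ish
weights = 0.98 ** np.arange(len(phase1))[::-1]         # newer samples weigh more

chart = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale")
chart.fit(phase1, sample_weight=weights)

# Phase II: monitor new observations; negative decision values signal out-of-control.
new_obs = np.array([[1.0, 1.1], [3.5, 4.2]])
for x, s in zip(new_obs, chart.decision_function(new_obs)):
    print(x, "out-of-control" if s < 0 else "in-control", f"(score={s:.3f})")
```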

Keywords: multivariate control chart, nonparametric method, support vector data description, time-varying process

Procedia PDF Downloads 299
1927 Photovoltaic Maximum Power-Point Tracking Using Artificial Neural Network

Authors: Abdelazziz Aouiche, El Moundher Aouiche, Mouhamed Salah Soudani

Abstract:

Renewable energy sources now contribute significantly to the replacement of traditional fossil fuel energy sources. One of the most potent types of renewable energy, and one that has developed quickly in recent years, is photovoltaic energy. Solar energy, which is sustainable and non-depleting, is the best-known form of energy at our disposal. Due to changing weather conditions, the primary drawback of conventional solar PV systems is their inability to track the maximum power point. In this study, we apply artificial neural networks (ANN) to automatically track and measure the maximum power point (MPP) of solar panels. The complete system is simulated in MATLAB, and the results are adjusted for the external environment. The results show better performance than traditional MPPT methods and demonstrate the advantages of using neural networks in solar PV systems.
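
A minimal Python sketch of the idea of training a small neural network to output the maximum-power-point voltage from irradiance and temperature; the synthetic panel relation used to generate the training data is an assumption, not the authors' MATLAB model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic training set: irradiance G (W/m^2), temperature T (deg C) -> Vmpp (V).
G = rng.uniform(200, 1000, 2000)
T = rng.uniform(10, 60, 2000)
v_mpp = 30.0 + 2.0 * np.log(G / 1000.0) - 0.08 * (T - 25.0)  # assumed panel behaviour

X = np.column_stack([G, T])
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
ann.fit(X, v_mpp)

# At run time, the controller would drive the converter toward the predicted Vmpp.
print("predicted Vmpp at G=800, T=35:", ann.predict([[800.0, 35.0]])[0])
```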

Keywords: modeling, photovoltaic panel, artificial neural networks, maximum power point tracking

Procedia PDF Downloads 88
1926 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data are dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication over practically large datasets faces computational and memory-related difficulties, which is why such operations are carried out on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Both non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also study the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
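
A small numpy sketch of the straggler-tolerant idea described above: the blocks of X are encoded with a Vandermonde (polynomial-code) matrix so that W = XY can be recovered from any k of the n workers. The privacy/PIR aspects of the PSGPD scheme are not modelled here; this is only an illustration of coded computation with stragglers.

```python
import numpy as np


def encode(X_blocks, eval_points):
    """Worker j receives sum_i X_i * a_j**i (a polynomial code in X)."""
    return [sum(Xi * (a ** i) for i, Xi in enumerate(X_blocks)) for a in eval_points]


def decode(worker_results, eval_points, k):
    """Recover the k blocks of XY from any k finished workers by solving
    the k x k Vandermonde system."""
    V = np.vander(np.asarray(eval_points, dtype=float), N=k, increasing=True)
    stacked = np.stack(worker_results)                 # (k, rows_per_block, cols)
    flat = np.linalg.solve(V, stacked.reshape(k, -1))  # invert the encoding
    return flat.reshape(stacked.shape)


rng = np.random.default_rng(0)
X, Y = rng.normal(size=(6, 4)), rng.normal(size=(4, 5))
k, n = 3, 5                                            # tolerate n - k = 2 stragglers
X_blocks = np.split(X, k)                              # row-wise storage fraction 1/k
points = [1.0, 2.0, 3.0, 4.0, 5.0]

coded = encode(X_blocks, points)
products = [Xc @ Y for Xc in coded]                    # each worker's task

fastest = [4, 1, 2]                                    # any k workers that finished
blocks = decode([products[j] for j in fastest], [points[j] for j in fastest], k)
assert np.allclose(np.vstack(blocks), X @ Y)
```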

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 122
1925 Electrochemical Impedance Spectroscopy Based Label-Free Detection of TSG101 by Electric Field Lysis of Immobilized Exosomes from Human Serum

Authors: Nusrat Praween, Krishna Thej Pammi Guru, Palash Kumar Basu

Abstract:

Designing non-invasive biosensors for cancer diagnosis is essential for developing an affordable and specific tool to measure cancer-related exosome biomarkers. Exosomes, released by healthy as well as cancer cells, contain valuable information about the biomarkers of various diseases, including cancer. Despite the availability of various isolation techniques, ultracentrifugation is the standard technique employed. Post isolation, exosomes are traditionally exposed to detergents to extract their proteins, which can often lead to protein degradation. Furthermore, it is essential to develop a sensing platform that can quantify clinically relevant proteins over a wide concentration range to ensure practicality. In this study, exosomes were immobilized on an Au screen-printed electrode (SPE) using EDC/NHS chemistry to facilitate binding. After immobilizing the exosomes on the SPE, we investigated the impact of the electric field by applying various voltages to induce exosome lysis and release their contents. The lysed solution was used for sensing TSG101, a crucial biomarker associated with various cancers, using both faradaic and non-faradaic electrochemical impedance spectroscopy (EIS) methods. The results of non-faradaic and faradaic EIS were comparable and showed good consistency, indicating that non-faradaic sensing can be a reliable alternative. Hence, the non-faradaic sensing technique was used for label-free quantification of the TSG101 biomarker, and the results were validated using ELISA. Our electrochemical immunosensor demonstrated a consistent response to TSG101 from 125 pg/mL to 8000 pg/mL, with a detection limit of 0.125 pg/mL at room temperature. Additionally, since non-faradaic sensing is label-free, the final sensor is easier to use and cheaper to produce. The proposed immunosensor is capable of detecting the TSG101 protein at low levels in healthy serum with good sensitivity and specificity, making it a promising platform for biomarker detection.

Keywords: biosensor, exosomes isolation on SPE, electric field lysis of exosome, EIS sensing of TSG101

Procedia PDF Downloads 46
1924 The Continuous Facility Location Problem and Transportation Mode Selection in the Supply Chain under Sustainability

Authors: Abdulaziz Alageel, Martino Luis, Shuya Zhong

Abstract:

The main focus of this research is the challenge of decision-making in a supply chain network regarding facility location while considering carbon emissions. The study aims (i) to locate facilities (i.e., distribution centres) in a continuous space, considering capacity limitations and opening costs, and (ii) to reduce the cost of carbon emissions by selecting the mode of transportation. The problem is formulated as a mixed-integer linear program. The study hybridises a greedy randomised adaptive search procedure (GRASP) and variable neighborhood search (VNS) to deal with the problem. Well-known datasets from the literature (Brimberg et al. 2001) are used and adapted in order to assess the performance of the proposed method. The proposed hybrid method produces encouraging results based on the computational analysis. The study also highlights some research avenues as future recommendations.
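
For the continuous (Weber-type) location of a single facility, the classical Weiszfeld iteration gives the flavour of the problem; this sketch ignores capacities, opening costs and emissions, which the full mixed-integer model above handles, and the customer sites and demand weights are invented.

```python
import numpy as np


def weiszfeld(points: np.ndarray, weights: np.ndarray, iters: int = 200) -> np.ndarray:
    """Minimise sum_i w_i * ||x - p_i|| (single-facility Weber problem)."""
    x = np.average(points, axis=0, weights=weights)  # start at the weighted centroid
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - x, axis=1), 1e-12)  # avoid divide-by-zero
        x = (weights / d) @ points / np.sum(weights / d)
    return x


# Illustrative customer sites and demand weights.
pts = np.array([[0.0, 0.0], [10.0, 2.0], [4.0, 9.0], [8.0, 8.0]])
w = np.array([5.0, 1.0, 3.0, 2.0])
print("optimal distribution-centre location =", weiszfeld(pts, w))
```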

Keywords: supply chain, facility location, Weber problem, sustainability

Procedia PDF Downloads 100
1923 Unveiling the Potential of MoSe₂ for Toxic Gas Sensing: Insights from Density Functional Theory and Non-equilibrium Green’s Function Calculations

Authors: Si-Jie Ji, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang

Abstract:

With the rapid development of industrialization and urbanization, air pollution poses significant global environmental challenges, contributing to acid rain, global warming, and adverse health effects. Therefore, it is necessary to monitor the concentration of toxic gases in the atmospheric environment in real time and to deploy cost-effective gas sensors capable of detecting their emissions. In this study, we systematically investigated the sensing capabilities of two-dimensional MoSe₂ for seven key environmental gases (NO, NO₂, CO, CO₂, SO₂, SO₃, and O₂) using density functional theory (DFT) and non-equilibrium Green’s function (NEGF) calculations. We also investigated the impact of H₂O as an interfering gas. Our results indicate that the MoSe₂ monolayer is thermodynamically stable and exhibits strong gas-sensing capabilities. The calculated adsorption energies indicate that these gases can stably adsorb on MoSe₂, with SO₃ exhibiting the strongest adsorption energy (-0.63 eV). Electronic structure analysis, including projected density of states (PDOS) and Bader charge analysis, demonstrates significant changes in the electronic properties of MoSe₂ upon gas adsorption, affecting its conductivity and sensing performance. We find that oxygen (O₂) adsorption notably influences the deformation of MoSe₂. To comprehensively understand the potential of MoSe₂ as a gas sensor, we used the NEGF method to assess the electronic transport properties of MoSe₂ under gas adsorption, evaluating current-voltage (I-V) and resistance-voltage (R-V) characteristics and transmission spectra relative to pristine MoSe₂. Sensitivity, selectivity, and recovery time are analyzed at a bias voltage of 1.7 V, showing excellent performance of MoSe₂ in detecting SO₃ among the studied gases. The pronounced changes in electronic transport behavior induced by SO₃ adsorption confirm MoSe₂’s strong potential as a high-performance gas-sensing material. Overall, this theoretical study provides new insights into the development of high-performance gas sensors, demonstrating the potential of MoSe₂ as a gas-sensing material, particularly for gases like SO₃.
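
As a back-of-the-envelope companion to the adsorption energies above, the conventional transition-state estimate of sensor recovery time can be evaluated as below; treating the desorption barrier as |E_ads| and using an attempt frequency of 10^12 Hz are common assumptions and not necessarily the authors' procedure.

```python
import math

K_B = 8.617333262e-5        # Boltzmann constant, eV/K
ATTEMPT_FREQ = 1.0e12       # assumed attempt frequency, Hz


def recovery_time(e_ads_eV: float, temperature_K: float = 300.0) -> float:
    """tau = nu^-1 * exp(|E_ads| / kT), the usual desorption-time estimate."""
    return math.exp(abs(e_ads_eV) / (K_B * temperature_K)) / ATTEMPT_FREQ


for gas, e_ads in [("SO3", -0.63), ("weakly bound gas", -0.20)]:
    print(f"{gas}: tau = {recovery_time(e_ads):.2e} s at 300 K")
```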

Keywords: density functional theory, gas sensing, MoSe₂, non-equilibrium Green’s function, SO

Procedia PDF Downloads 21
1922 Enhanced Near-Infrared Upconversion Emission Based Lateral Flow Immunoassay for Background-Free Detection of Avian Influenza Viruses

Authors: Jaeyoung Kim, Heeju Lee, Huijin Jung, Heesoo Pyo, Seungki Kim, Joonseok Lee

Abstract:

Avian influenza viruses (AIV), type A influenza viruses of the Orthomyxoviridae family, are the primary cause of highly contagious respiratory diseases. AIV are categorized on the basis of their surface glycoproteins, hemagglutinin and neuraminidase. Certain H5 and H7 subtypes of AIV have evolved into high pathogenic avian influenza (HPAI) viruses, which have caused considerable economic loss to the poultry industry and led to severe public health crises. Several commercial kits have been developed for on-site detection of AIV; however, their sensitivity is too low to detect low virus concentrations in clinical samples and opaque stool samples. Here, we introduce a background-free near-infrared (NIR)-to-NIR upconversion nanoparticle-based lateral flow immunoassay (NNLFA) platform that yields a sensor detecting AIV within 20 minutes. Ca²⁺ ions in the shell were used as a heterogeneous dopant to enhance the NIR-to-NIR upconversion photoluminescence (PL) emission without inducing significant changes in the morphology and size of the UCNPs. In a mixture of opaque stool samples and gold nanoparticles (GNPs), which are the components of commercial AIV LFAs, the background signal of the stool samples masks the absorption peak of the GNPs. However, UCNPs dispersed in the stool samples still show strong emission centered at 800 nm when excited at 980 nm, which enables the NNLFA platform to detect a 10-times lower viral load than a commercial GNP-based AIV LFA. The detection limits of the NNLFA for low pathogenic avian influenza (LPAI) H5N2 and HPAI H5N6 viruses were 10² EID₅₀/mL and 10³.⁵ EID₅₀/mL, respectively. Moreover, when opaque brown-colored samples were used as the target analytes, a strong NIR emission signal from the test line of the NNLFA confirmed the presence of AIV, whereas the commercial AIV LFA detected AIV only with difficulty. Therefore, we propose that this rapid and background-free NNLFA platform has the potential to detect AIV in the field, which could effectively prevent the spread of these viruses at an early stage.

Keywords: avian influenza viruses, lateral flow immunoassay, on-site detection, upconversion nanoparticles

Procedia PDF Downloads 163
1921 An Intrusion Detection System Based on K-Means, K-Medoids and Support Vector Clustering Using Ensemble

Authors: A. Mohammadpour, Ebrahim Najafi Kajabad, Ghazale Ipakchi

Abstract:

Presently, the security of computer networks is rising in importance, and many studies have been conducted in this field. With the penetration of internet networks into different fields, much needs to be done to provide secure industrial and non-industrial networks. Firewalls, appropriate Intrusion Detection Systems (IDS), encryption protocols for sending and receiving information, and the use of authentication certificates are among the things that should be considered for system security. The aim of the present study is to combine the outcomes of several algorithms, which reduces IDS errors, in a way that improves system security and prevents additional overload on the system. Finally, from the obtained results we can also detect the number and percentage of further sub-attacks. By running the proposed system, which is based on combining multi-algorithmic outcomes, and comparing it with the proposed single-algorithm methods, we observed a 78.64% attack detection rate, an improvement of 3.14% over the individual algorithms.
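
A simplified Python sketch of combining several detectors by majority vote: KMeans distance-to-centroid detectors stand in for the k-means/k-medoids/support-vector-clustering members (k-medoids and SV clustering are not in scikit-learn's core), so this illustrates the ensemble idea rather than the paper's exact algorithms, and the traffic features are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
normal_traffic = rng.normal(0, 1, size=(500, 4))            # training connections
test_traffic = np.vstack([rng.normal(0, 1, size=(5, 4)),    # benign
                          rng.normal(6, 1, size=(5, 4))])   # attack-like outliers


def kmeans_detector(train, k, seed):
    """Flag samples whose distance to the nearest centroid is unusually large."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(train)
    thresh = np.percentile(km.transform(train).min(axis=1), 99)
    return lambda X: km.transform(X).min(axis=1) > thresh    # True = attack


detectors = [kmeans_detector(normal_traffic, k, s) for k, s in [(3, 0), (5, 1), (8, 2)]]
votes = np.sum([d(test_traffic) for d in detectors], axis=0)
print("flagged as attack:", votes >= 2)                      # majority vote
```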

Keywords: intrusion detection systems, clustering, k-means, k-medoids, SV clustering, ensemble

Procedia PDF Downloads 221
1920 Fly-Ash/Borosilicate Glass Based Geopolymers: A Mechanical and Microstructural Investigation

Authors: Gianmarco Taveri, Ivo Dlouhy

Abstract:

Geopolymers are well-suited materials to abate the CO2 emissions coming from Portland cement production and, in the near future, to replace Portland cement in building and other applications. The cost of production of geopolymers may be seen as the only weakness, but the use of wastes as raw materials could provide a valid solution to this problem, as demonstrated by the successful incorporation of fly-ash, a by-product of thermal power plants, and waste glasses. Recycled glass in waste-derived geopolymers was lately employed as a further silica source. In this work we present, for the first time, the introduction of recycled borosilicate glass (BSG). BSG is actually a waste glass, since it derives from dismantled pharmaceutical vials and cannot be reused in the manufacturing of the original articles. Owing to its specific chemical composition (BSG is an ‘alumino-boro-silicate’), it was conceived to provide the key components of zeolitic networks, such as amorphous silica and alumina, as well as boria (B2O3), which may replace Al2O3 and contribute to the polycondensation process. Solid-state MAS NMR spectroscopy was used to assess the extent of boron oxide incorporation in the structure of the geopolymers and to define the degree of networking. FTIR spectroscopy was utilized to define the degree of polymerization and to detect boron bond vibrations in the structure. Mechanical performance was tested by means of three-point bending (flexural strength), the chevron notch test (fracture toughness), compression testing (compressive strength), and micro-indentation (Vickers hardness). SEM and confocal microscopy were performed on the specimens tested to failure. FTIR showed a characteristic absorption band attributed to the stretching modes of tetrahedral boron ions, whose tetrahedral configuration is compatible with the reaction products of geopolymerization. The 27Al NMR and 29Si NMR spectra were instrumental in understanding the extent of the reaction. The 11B NMR spectra evidenced a change of the trigonal boron (BO3) inside the BSG in favor of a quasi-total tetrahedral boron configuration (BO4). Thanks to these results, it was inferred that boron is part of the geopolymeric structure, replacing Si in the network similarly to aluminum, and therefore improving the quality of the microstructure in favor of a more cross-linked network. As expected, the material gained as much as 25% in compressive strength (45 MPa) compared to the literature, whereas no improvements were detected in flexural strength (~ 5 MPa) or surface hardness (~ 78 HV). The material also exhibited a low fracture toughness (0.35 MPa*m1/2), with tangible brittleness. SEM micrographs corroborated this behavior, showing a ragged surface along with several cracks, due to the high amount of porosity and impurities acting as preferential points for crack initiation. The 3D pattern of the fracture surface, obtained by confocal microscopy, evidenced irregular crack propagation, which tended mainly, but not always, to follow the porosity. Hence, crack initiation and propagation are largely unpredictable.

Keywords: borosilicate glass, characterization, fly-ash, geopolymerization

Procedia PDF Downloads 208
1919 Distribution System Planning with Distributed Generation and Capacitor Placements

Authors: Nattachote Rugthaicharoencheep

Abstract:

This paper presents a feeder reconfiguration problem in distribution systems. The objective is to minimize the system power loss and to improve the bus voltage profile. The optimization problem is subject to system constraints consisting of load-point voltage limits, radial configuration format, no load-point interruption, and feeder capability limits. A method based on a genetic algorithm, a search algorithm based on the mechanics of natural selection and natural genetics, is proposed to determine the optimal configuration pattern. The developed methodology is demonstrated on a 33-bus radial distribution system with distributed generation and feeder capacitors. The study results show that the optimal on/off patterns of the switches can be identified to give the minimum power loss while respecting all the constraints.
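
A toy Python sketch of a genetic algorithm searching over binary switch patterns. The loss function here is a placeholder (it only penalises deviation from a hypothetical target pattern), whereas the study evaluates real power loss together with the radiality, voltage and feeder constraints on the 33-bus system.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SWITCHES, POP, GENERATIONS = 10, 40, 60
TARGET = rng.integers(0, 2, N_SWITCHES)          # hypothetical best on/off pattern


def loss(pattern):
    """Placeholder for the load-flow power loss of a configuration."""
    return np.sum(pattern != TARGET)


pop = rng.integers(0, 2, (POP, N_SWITCHES))
for _ in range(GENERATIONS):
    fitness = np.array([loss(p) for p in pop])
    parents = pop[np.argsort(fitness)[: POP // 2]]              # elitist selection
    cuts = rng.integers(1, N_SWITCHES, POP // 2)
    kids = np.array([np.r_[parents[i % len(parents)][:c],       # one-point crossover
                           parents[(i + 1) % len(parents)][c:]]
                     for i, c in enumerate(cuts)])
    mutate = rng.random(kids.shape) < 0.02                      # bit-flip mutation
    kids[mutate] ^= 1
    pop = np.vstack([parents, kids])

best = pop[np.argmin([loss(p) for p in pop])]
print("best switch pattern found:", best)
```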

Keywords: network reconfiguration, distributed generation, capacitor placement, loss reduction, genetic algorithm

Procedia PDF Downloads 175
1918 Development of Micelle-Mediated Sr(II) Fluorescent Analysis System

Authors: K. Akutsu, S. Mori, T. Hanashima

Abstract:

Fluorescent probes are useful for the selective detection of trace amounts of ions and for biomolecular imaging in living cells. Various kinds of metal-ion-selective fluorescent compounds have been developed, and some have been applied as effective metal-ion-selective fluorescent probes. However, because competition between the ligand and water molecules for the metal ion is a major contribution to the stability of a complex in aqueous solution, it is difficult to develop a highly sensitive, selective, and stable fluorescent probe in aqueous solution. Micelles, which form in aqueous surfactant solutions, provide a unique hydrophobic nano-environment for stabilizing metal-organic complexes in aqueous solution. Therefore, we focused on the unique properties of micelles to develop a new fluorescence analysis system. We have developed a fluorescence analysis system for Sr(II) using a Sr(II) fluorescent sensor, N-(2-hydroxy-3-(1H-benzimidazol-2-yl)-phenyl)-1-aza-18-crown-6-ether (BIC), and studied its complexation behavior with Sr(II) in micellar solution. We revealed that the stability constant of the Sr(II)-BIC complex was 10 times higher than that in aqueous solution. In addition, the detection limit was improved by up to 300 times in this system. However, the mechanisms of these phenomena have remained obscure. In this study, we investigated the structure of the Sr(II)-BIC complex in aqueous micellar solution by the combined use of extended X-ray absorption fine structure (EXAFS) and neutron reflectivity (NR) methods, in order to understand the unique properties of the fluorescence analysis system from the viewpoint of structural chemistry. EXAFS and NR experiments were performed on BL-27B at KEK-PF and on BL17 SHARAKU at J-PARC MLF, respectively. The obtained EXAFS spectra and their fitting results indicated that Sr(II) and BIC form a Sr(18-crown-6-ether)-like complex in aqueous micellar solution. The EXAFS results also indicated that the hydrophilic head group of the surfactant molecule directly coordinates to Sr(II). In addition, the NR results indicated that the Sr(II)-BIC complex would interact with the surface of the micelles. Therefore, we concluded that Sr(II), BIC, and the surfactant molecule form a ternary complex in aqueous micellar solution, and, at least, it is clear that the improvement of the stability constant in micellar solution is attributable to the formation of the Sr(BIC)(surfactant) complex.

Keywords: micelle, fluorescent probe, neutron reflectivity, EXAFS

Procedia PDF Downloads 183
1917 3-D Strain Imaging of Nanostructures Synthesized via CVD

Authors: Sohini Manna, Jong Woo Kim, Oleg Shpyrko, Eric E. Fullerton

Abstract:

CVD techniques have emerged as a promising approach for the formation of a broad range of nanostructured materials. The realization of many practical applications will require efficient and economical synthesis techniques that preferably avoid the need for templates or costly single-crystal substrates and also afford process adaptability. Towards this end, we have developed a single-step route for the reduction-type synthesis of nanostructured Ni materials using a thermal CVD method. By tuning the CVD growth parameters, we can synthesize morphologically dissimilar nanostructures, including single-crystal cubes and Au nanostructures, which form atop untreated amorphous SiO2||Si substrates. An understanding of the new properties that emerge in these nanostructured materials, and of their relationship to function, will enable a broad range of magnetostrictive devices as well as other catalysis, fuel cell, sensor, and battery applications based on high-surface-area transition-metal nanostructures. We use the coherent X-ray diffraction imaging technique to obtain 3-D images and strain maps of individual nanocrystals. Coherent x-ray diffractive imaging (CXDI) is a technique that provides the overall shape of a nanostructure and its lattice distortion, based on the combination of highly brilliant coherent x-ray sources and phase retrieval algorithms. We observe a fine interplay between the reduction of surface energy and internal stress, which plays an important role in the morphology of the nanocrystals. The strain distribution is influenced by the metal-substrate and metal-air interfaces, which arise due to differences in their thermal expansion. We find that the lattice strain at the surface of the octahedral gold nanocrystal agrees quantitatively with the predictions of the Young-Laplace equation, but exhibits a discrepancy near the nanocrystal-substrate interface resulting from interfacial effects. The strain in the bottom side of the Ni nanocube, which is in contact with the substrate surface, is compressive. This is caused by the dissimilar thermal expansion coefficients of the Ni nanocube and the Si substrate. Research at UCSD was supported by NSF DMR Award #1411335.

Keywords: CVD, nanostructures, strain, CXRD

Procedia PDF Downloads 392
1916 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving

Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian

Abstract:

In recent years, advancements in deep learning enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has certain drawbacks in that the state space of possible actions for a car is so huge that there cannot be a dataset for every possible road scenario. To overcome this problem, the concept of reinforcement learning (RL) is investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that we can model different and complex road scenarios in a simulation without having to deploy in the real world. The autonomous agent can learn to drive by finding the optimal policy. This learned model can then be easily deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we have used TORCS (The Open Racing Car Simulator), which provides us with a strong foundation to test our model. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity, distance from the side pavement, etc. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results. When using the PPO algorithm, the reward is greater, and the acceleration, steering angle and braking are more stable compared to the other algorithms, which means that the agent learns to drive in a better and more efficient way in this case. Additionally, we have come up with a dataset taken from the training of the agent with the DDPG and PPO algorithms. It contains all the steps of the agent during one full training run in the form (all input values, acceleration, steering angle, brake, loss, reward). This study can serve as a base for further complex road scenarios. Furthermore, it can be extended into the field of computer vision, using images to find the best policy.
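
Of the three algorithms compared, tabular Q-learning is the simplest to show; the sketch below gives its update rule against a generic environment interface (here `reset()`/`step()` are assumed to return a discrete state and a `(next_state, reward, done)` tuple, and the TORCS wrapper, state discretisation and reward shaping used in the project are not reproduced).

```python
import numpy as np


def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    rng = np.random.default_rng(0)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if rng.random() < epsilon:                      # epsilon-greedy exploration
                action = int(rng.integers(n_actions))
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done = env.step(action)     # assumed env interface
            td_target = reward + gamma * np.max(Q[next_state]) * (not done)
            Q[state, action] += alpha * (td_target - Q[state, action])
            state = next_state
    return Q
```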

Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning

Procedia PDF Downloads 147
1915 Predicting Oil Spills in Real-Time: A Machine Learning and AIS Data-Driven Approach

Authors: Tanmay Bisen, Aastha Shayla, Susham Biswas

Abstract:

Oil spills from tankers can cause significant harm to the environment and local communities, as well as have economic consequences. Early predictions of oil spills can help to minimize these impacts. Our proposed system uses machine learning and neural networks to predict potential oil spills by monitoring data from ship Automatic Identification Systems (AIS). The model analyzes ship movements, speeds, and changes in direction to identify patterns that deviate from the norm and could indicate a potential spill. Our approach not only identifies anomalies but also predicts spills before they occur, providing early detection and mitigation measures. This can prevent or minimize damage to the reputation of the company responsible and the country where the spill takes place. The model's performance on the MV Wakashio oil spill provides insight into its ability to detect and respond to real-world oil spills, highlighting areas for improvement and further research.
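
A minimal pandas sketch of the kind of AIS feature monitoring described above: sudden speed drops and sharp course changes are flagged with rolling z-scores. The thresholds, window length and the sample track are illustrative only and do not represent the trained ML/GNN model.

```python
import numpy as np
import pandas as pd

# Hypothetical AIS track: speed over ground (knots) and course (degrees), 1 row/min.
rng = np.random.default_rng(0)
track = pd.DataFrame({
    "sog": np.r_[rng.normal(14, 0.3, 60), rng.normal(2, 0.5, 10)],    # abrupt slowdown
    "cog": np.r_[rng.normal(90, 1.0, 60), rng.normal(150, 2.0, 10)],  # sharp turn
})

window = 15
roll = track.rolling(window)
z = (track - roll.mean()) / roll.std()          # rolling z-score per feature
track["anomaly"] = (z.abs() > 4).any(axis=1)    # flag large deviations from recent norm
print(track[track["anomaly"]].head())
```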

Keywords: anomaly detection, oil spill prediction, machine learning, image processing, graph neural network (GNN)

Procedia PDF Downloads 73
1914 Urban Landscape Sustainability Between Past and Present: Toward a Future Vision

Authors: Dina Salem

Abstract:

A variety of definitions and interpretations of sustainable development have been offered since the widely known definition of the World Commission on Environment and Development in 1987; perspectives have ranged from deep ecology to a better quality of life for people. A sustainable landscape is widely understood as a key contributor to urban sustainability, since all landscapes have social, economic, cultural and ecological functions for the community’s well-being and urban development, something that was evident even before the emergence of the sustainability concept. In this paper, the concepts of landscape planning and sustainable development are briefly reviewed, and visions for landscape sustainability are demonstrated and classified. Challenges facing sustainable landscape planning are discussed. Finally, the paper investigates how our future urban open space could be sustainable and how this contributes to urban sustainability, by creating urban landscapes that take into account the social and cultural values of the users of urban open space as well as the ecological balance of urban open spaces as an integrated network.

Keywords: urban landscape, urban sustainability, resilience, open spaces

Procedia PDF Downloads 548
1913 Timely Palliative Screening and Interventions in Oncology

Authors: Jaci Marie Mastrandrea, Rosario Haro

Abstract:

Background: The National Comprehensive Cancer Network (NCCN) recommends that healthcare institutions have established processes for integrating palliative care (PC) into cancer treatment and that all cancer patients be screened for PC needs upon initial diagnosis as well as throughout the entire continuum of care (National Comprehensive Cancer Network, 2021). Early PC screening and intervention are directly associated with improved patient outcomes. The Sky Lakes Cancer Treatment Center (SLCTC) is an institution that has access to PC services yet had no established protocol for identifying patients with palliative needs and no standardized referral process. The aim of this quality improvement project was to improve early access to PC services by establishing a standardized screening and referral process for outpatient oncology patients. Method: The sample population included all adult patients with an oncology diagnosis who presented to the SLCTC for treatment during the project timeline. The “Palliative and Supportive Needs Assessment” (PSNA) screening tool was developed from validated, evidence-based PC referral criteria. The tool was initially implemented using paper forms, and data were collected over a period of eight weeks. Patients were screened by nurses on the SLCTC oncology treatment team. Nurses responsible for screening patients received an educational in-service prior to implementation. Patients with a PSNA score of three or higher received an educational handout and education about PC and symptom management. A score of five or higher indicates that PC referral is strongly recommended, and the patient’s EHR is flagged for the oncology provider to review orders for PC referral. The PSNA tool was approved by Sky Lakes administration for full integration into Epic-Beacon. The project lead collaborated with the Sky Lakes information systems team and representatives from Epic on the tool’s aesthetics and functionality within the Epic system. SLCTC nurses and physicians were educated on how to document the PSNA within Epic and where to view results. Results: Prior to the implementation of the PSNA screening tool, the SLCTC had zero referrals to PC in the preceding year, excluding referrals to hospice. Data were collected from the completed screening assessments of 100 patients under active treatment at the SLCTC. Seventy-three percent of patients met the criteria for PC referral with a score greater than or equal to three. Of those patients who met the criteria, 53.4% (39 patients) were referred for a palliative and supportive care consultation. Patients who were not referred to PC upon meeting criteria were flagged in Epic for re-screening within one to three months. Patients with lung cancer, chronic hematologic malignancies, breast cancer, and gastrointestinal malignancy most frequently met the criteria for PC referral and scored highest overall on the scale of 0-12. Conclusion: The implementation of a standardized PC screening tool at the SLCTC significantly increased awareness of PC needs among cancer patients in the outpatient setting. Additionally, data derived from this quality improvement project support the national recommendation for PC to be an integral component of cancer treatment across the entire continuum of care.

Keywords: oncology, palliative and supportive care, symptom management, outpatient oncology, palliative screening tool

Procedia PDF Downloads 112
1912 Identity Verification Based on Multimodal Machine Learning on Red Green Blue (RGB), Red Green Blue-Depth (RGB-D) and Voice Data

Authors: LuoJiaoyang, Yu Hongyang

Abstract:

In this paper, we experimented with a new approach to multimodal identification using RGB, RGB-D and voice data. The multimodal combination of RGB and voice data has been applied to tasks such as emotion recognition with good results and stability, and the same holds for identity recognition tasks. We believe that data from different modalities can enhance the effect of the model through mutual reinforcement. We extend the dual-modality setup to three modalities and try to improve the effectiveness of the network by increasing the number of modalities. We also implemented single-modality identification systems separately, tested the data of these different modalities under clean and noisy conditions, and compared their performance with the multimodal model. In the process of designing the multimodal model, we tried a variety of fusion strategies and finally chose the fusion method with the best performance. The experimental results show that the performance of the multimodal system is better than that of the single modalities, especially in dealing with noise, and the multimodal system achieves an average improvement of 5%.
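
A minimal numpy sketch of score-level fusion across the three modalities; the per-modality matching scores, the weights and the decision threshold are illustrative assumptions, since the abstract does not state which fusion strategy performed best.

```python
import numpy as np


def fuse_scores(rgb_score, depth_score, voice_score, weights=(0.4, 0.3, 0.3)):
    """Weighted-sum (score-level) fusion of three modality similarity scores."""
    return float(np.dot(weights, [rgb_score, depth_score, voice_score]))


THRESHOLD = 0.5  # assumed decision threshold

# Clean vs. noisy audio: the voice score degrades but the fused score holds up.
for label, voice in [("clean audio", 0.81), ("noisy audio", 0.35)]:
    fused = fuse_scores(rgb_score=0.72, depth_score=0.66, voice_score=voice)
    print(f"{label}: fused={fused:.2f} ->", "accept" if fused > THRESHOLD else "reject")
```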

Keywords: multimodal, three modalities, RGB-D, identity verification

Procedia PDF Downloads 70
1911 Sleep Apnea Hypopnea Syndrome Diagnosis Using Advanced ANN Techniques

Authors: Sachin Singh, Thomas Penzel, Dinesh Nandan

Abstract:

Accurate diagnosis of Sleep Apnea Hypopnea Syndrome (SAHS) is a difficult problem for human experts because of variability among persons and unwanted noise. This paper proposes the diagnosis of SAHS using airflow, ECG, pulse and SaO2 signals. The features of each of these signals are extracted using statistical methods and ANN learning methods. The extracted features are used to approximate the patient's Apnea-Hypopnea Index (AHI) using sample signals in the model. Advanced signal processing is also applied to the snore sound signal to locate snore events, and the SaO2 signal is used to confirm whether a detected snore event is true or noise. Finally, the Apnea-Hypopnea Index (AHI) is calculated from the true snore events detected. Experimental results show that the sensitivity can reach up to 96% and the specificity up to 96% for AHI greater than or equal to 5.
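
The AHI itself is simply the number of apnea and hypopnea events per hour of sleep; a small helper is sketched below, together with the sensitivity/specificity definitions behind the 96%/96% figures (the event counts are illustrative).

```python
def ahi(n_apneas: int, n_hypopneas: int, sleep_hours: float) -> float:
    """Apnea-Hypopnea Index = (apneas + hypopneas) per hour of sleep."""
    return (n_apneas + n_hypopneas) / sleep_hours


def sensitivity_specificity(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)


print("AHI =", ahi(n_apneas=18, n_hypopneas=25, sleep_hours=6.5))        # ~6.6 -> SAHS (AHI >= 5)
print("sens/spec =", sensitivity_specificity(tp=48, fp=2, tn=48, fn=2))  # (0.96, 0.96)
```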

Keywords: neural network, AHI, statistical methods, autoregressive models

Procedia PDF Downloads 119
1910 Applications of AI, Machine Learning, and Deep Learning in Cyber Security

Authors: Hailyie Tekleselase

Abstract:

Deep learning is increasingly used as a building block of security systems. However, neural networks are hard to interpret and typically opaque to the practitioner. This paper presents a detailed survey of computing methods in cyber security and analyzes the prospects of enhancing cyber security capabilities by increasing the intelligence of security systems. There are many AI-based applications used in industrial scenarios such as the Internet of Things (IoT), smart grids, and edge computing. Machine learning technologies require a training process, which introduces protection problems in the training data and algorithms. We present machine learning techniques currently applied to the detection of intrusion, malware, and spam. Our conclusions are based on an extensive review of the literature as well as on experiments performed on real enterprise systems and network traffic. We conclude that problems can be solved successfully only when methods of artificial intelligence are used alongside human experts or operators.

Keywords: artificial intelligence, machine learning, deep learning, cyber security, big data

Procedia PDF Downloads 126
1909 Performance and Emission Prediction in a Biodiesel Engine Fuelled with Honge Methyl Ester Using RBF Neural Networks

Authors: Shiva Kumar, G. S. Vijay, Srinivas Pai P., Shrinivasa Rao B. R.

Abstract:

In the present study, RBF neural networks were used for predicting the performance and emission parameters of a biodiesel engine. Engine experiments were carried out on a 4-stroke diesel engine using blends of diesel and Honge methyl ester as the fuel. Performance parameters like BTE, BSEC and Tech, as well as emissions from the engine, were measured. These experimental results were used for ANN modeling. RBF center initialization was done by random selection and by using clustering techniques. The network was trained using fixed and varying widths for the RBF units. It was observed that the RBF results showed good agreement with the experimental results. Networks trained using the clustering technique gave better results than those using random selection of centers, in terms of reduced MRE and increased prediction accuracy. The average MRE for the performance parameters was 3.25% with a prediction accuracy of 98%, and for emissions it was 10.4% with a prediction accuracy of 80%.
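
A compact Python sketch of the RBF-network construction described above: centers from k-means clustering, a fixed Gaussian width, and output weights from linear least squares. The engine data are synthetic placeholders, not the measured BTE/BSEC/emission values.

```python
import numpy as np
from sklearn.cluster import KMeans


def rbf_design(X, centers, width):
    """Gaussian RBF design matrix: phi_ij = exp(-||x_i - c_j||^2 / (2 width^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))


rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))                                   # e.g. load, blend ratio
y = 30.0 + 5.0 * np.sin(3 * X[:, 0]) + 2.0 * X[:, 1] + 0.2 * rng.normal(size=200)  # placeholder BTE

centers = KMeans(n_clusters=12, n_init=10, random_state=0).fit(X).cluster_centers_
width = 0.3                                                            # fixed RBF width
H = np.column_stack([rbf_design(X, centers, width), np.ones(len(X))])  # + bias column
w, *_ = np.linalg.lstsq(H, y, rcond=None)                              # output weights

pred = H @ w
mre = np.mean(np.abs(pred - y) / np.abs(y)) * 100
print(f"training MRE = {mre:.2f}%")
```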

Keywords: radial basis function networks, emissions, performance parameters, fuzzy c means

Procedia PDF Downloads 558
1908 Alternative Epinephrine Injector to Combat Allergy Induced Anaphylaxis

Authors: Jeremy Bost, Matthew Brett, Jacob Flynn, Weihui Li

Abstract:

One response during anaphylaxis is reduced blood pressure due to blood vessels relaxing and dilating. Epinephrine causes the blood vessels to constrict, which raises blood pressure to counteract the symptoms. During an allergic reaction, an epinephrine injector is used to administer a shot of epinephrine intramuscularly. Epinephrine injectors have become an integral part of day-to-day life for people with allergies. Current epinephrine injectors (EpiPen) are completely mechanical and have no sensors to monitor the vital signs of patients or suggest the optimal time for the shot. EpiPens are also large and inconvenient to carry daily. The current price of an EpiPen is roughly $600 for a pack of two. This makes carrying an EpiPen very expensive, especially since the devices must be replaced when the epinephrine expires. Our new design is in the form of a bracelet with the ability to inject epinephrine. The bracelet will be equipped with vital sign monitors that can help the patient sense the allergic reaction. The vital signs of interest are blood pressure, heart rate and electrodermal activity (EDA). The heart rate of the patient will be tracked by a photoplethysmograph (PPG) incorporated into the sensors; the heart rate is expected to increase during anaphylaxis. Blood pressure will be monitored through a radar sensor, which measures the phase changes in electromagnetic waves as they reflect off the blood vessel. EDA is under autonomic control: allergen-induced anaphylaxis is caused by a release of chemical mediators from mast cells and basophils, thus changing the autonomic activity of the patient, so measuring EDA will alert the wearer to how their autonomic nervous system is reacting. After the vital signs are collected, they will be sent to a smartphone application to be analyzed, which can then alert an emergency contact if the epinephrine injector on the bracelet is activated. Overall, this design creates a safer system by aiding the user in keeping track of their epinephrine injector while making it easier to track their vital signs. Our design will also be more affordable and more convenient to maintain: rather than replacing the entire product, only the needle and drug need to be switched out.

Keywords: allergy, anaphylaxis, epinephrine, injector, vital signs monitor

Procedia PDF Downloads 252