Search results for: autonomous sensors
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1753

433 A Follow up Study on Indoor 222Rn, 220Rn and Their Decay Product Concentrations in a Mineralized Zone of Himachal Pradesh, India

Authors: B. S. Bajwa, Parminder Singh, Prabhjot Singh, Surinder Singh, B. K. Sahoo, B. K. Sapra

Abstract:

A follow-up study was taken up in a mineralized zone situated in Hamirpur district, Himachal Pradesh, India to investigate the high values of radon concentration reported in past studies, as well as to update the old radon data based on the bare SSNTD technique. In the present investigation, indoor radon, thoron and their decay product concentrations have been measured using the newly developed radon-thoron discriminating diffusion chamber with a single entry face, together with direct radon and thoron progeny sensors (DRPS/DTPS) respectively. The measurements have been carried out in seventy-five dwellings of fourteen different villages. Houses were selected taking into consideration the past data as well as the type of house, such as mud, concrete or brick. It was observed that the high values of earlier reported radon concentrations were mainly because of thoron interference in the Solid State Nuclear Track Detector (LR-115 type II) exposed in bare mode. The average concentration values and the estimated annual inhalation dose in these villages have now been found to be within the reference level recommended by the ICRP. The annual average indoor radon and thoron concentrations observed in these dwellings have been found to vary from 44±12 to 157±73 Bq m⁻³ and from 44±11 to 240±125 Bq m⁻³ respectively. The equilibrium equivalent concentrations of radon and thoron decay products have been observed to be in the ranges 10-63 Bq m⁻³ and 1-5 Bq m⁻³ respectively.

Keywords: radon, thoron, progeny concentration, dosimeter

Procedia PDF Downloads 440
432 A Sensor Placement Methodology for Chemical Plants

Authors: Omid Ataei Nia, Karim Salahshoor

Abstract:

In this paper, a new precise and reliable sensor network methodology is introduced for unit processes and operations using the Constriction Coefficient Particle Swarm Optimization (CPSO) method. CPSO is introduced as a new search engine for optimal sensor network design purposes. Furthermore, a Square Root Unscented Kalman Filter (SRUKF) algorithm is employed as a new data reconciliation technique to enhance the stability and accuracy of the filter. The proposed design procedure incorporates precision, cost, observability, reliability together with importance-of-variables (IVs) as a novel measure in Instrumentation Criteria (IC). To the best of our knowledge, no comprehensive approach has yet been proposed in the literature to take into account the importance of variables in the sensor network design procedure. In this paper, specific weight is assigned to each sensor, measuring a process variable in the sensor network to indicate the importance of that variable over the others to cater to the ultimate sensor network application requirements. A set of distinct scenarios has been conducted to evaluate the performance of the proposed methodology in a simulated Continuous Stirred Tank Reactor (CSTR) as a highly nonlinear process plant benchmark. The obtained results reveal the efficacy of the proposed method, leading to significant improvement in accuracy with respect to other alternative sensor network design approaches and securing the definite allocation of sensors to the most important process variables in sensor network design as a novel achievement.
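The abstract names Constriction Coefficient PSO (CPSO) as the search engine but gives no implementation details. Below is a minimal sketch of the underlying Clerc-Kennedy constriction update; the objective (a simple sphere function), particle count, bounds and coefficients are illustrative assumptions, not the authors' sensor-network formulation.

```python
import random
import math

def cpso_minimize(f, dim, bounds, n_particles=20, iters=100, seed=0):
    """Minimal Clerc-Kennedy constriction-coefficient PSO sketch."""
    rng = random.Random(seed)
    c1 = c2 = 2.05
    phi = c1 + c2                  # must exceed 4 for the factor to be real
    chi = 2 / abs(2 - phi - math.sqrt(phi * phi - 4 * phi))  # ~0.7298
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pcost = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # constriction-coefficient velocity update
                v[i][d] = chi * (v[i][d]
                                 + c1 * r1 * (pbest[i][d] - x[i][d])
                                 + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            c = f(x[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = x[i][:], c
                if c < gcost:
                    gbest, gcost = x[i][:], c
    return gbest, gcost
```

In the paper's setting the cost function would instead score a candidate sensor layout against precision, cost, observability, reliability and the importance-of-variables weights.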

Keywords: constriction coefficient PSO, importance of variable, MRMSE, reliability, sensor network design, square root unscented Kalman filter

Procedia PDF Downloads 146
431 Uncertainty Assessment in Building Energy Performance

Authors: Fally Titikpina, Abderafi Charki, Antoine Caucheteux, David Bigaud

Abstract:

The building sector is one of the largest energy consumers, accounting for about 40% of final energy consumption in the European Union. Ensuring building energy performance is a matter of scientific, technological and sociological concern. To assess a building's energy performance, the consumption predicted or estimated during the design stage is compared with the consumption measured when the building is operational. When evaluating this performance, many buildings show significant differences between calculated and measured consumption. In order to assess the performance accurately and ensure the thermal efficiency of the building, it is necessary to evaluate the uncertainties involved not only in measurement but also those induced by the propagation of dynamic and static input data through the model being used. The evaluation of measurement uncertainty is based on both knowledge about the measurement process and the input quantities which influence the measurement result. Measurement uncertainty can be evaluated within the framework of conventional statistics presented in the Guide to the Expression of Uncertainty in Measurement (GUM), as well as by Bayesian Statistical Theory (BST). Another choice is the use of numerical methods such as Monte Carlo Simulation (MCS). In this paper, we propose to evaluate the uncertainty associated with the use of a simplified model for estimating the energy consumption of a given building. A detailed review and discussion of these three approaches (GUM, MCS and BST) is given. An office building has therefore been monitored, and multiple sensors have been mounted at candidate locations to obtain the required data. The monitored zone is composed of six offices and has an overall floor area of 102 m². Temperature data, electrical and heating consumption, window opening and occupancy rate are the features of our research work.
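Of the three approaches reviewed, Monte Carlo Simulation is the most direct to illustrate: sample the uncertain inputs, push each sample through the model, and read off the spread of the outputs. The consumption model and all numbers below are illustrative assumptions, not the paper's simplified building model.

```python
import random
import statistics

def mc_uncertainty(model, input_dists, n=20000, seed=1):
    """Propagate input uncertainty through a model by Monte Carlo sampling.

    input_dists: list of (mean, std) pairs; inputs are assumed to be
    independent Gaussians. Returns the sample mean of the model output
    and its standard uncertainty (sample standard deviation).
    """
    rng = random.Random(seed)
    outputs = [
        model([rng.gauss(mu, sigma) for mu, sigma in input_dists])
        for _ in range(n)
    ]
    return statistics.fmean(outputs), statistics.stdev(outputs)

# Toy daily-consumption model: E = U * A * dT * 24h (illustrative only)
model = lambda p: p[0] * p[1] * p[2] * 24.0
# U-value (W/m2K), floor area (m2), temperature difference (K)
mean, u = mc_uncertainty(model, [(0.5, 0.05), (102.0, 1.0), (10.0, 1.0)])
```

The GUM first-order approach would instead combine the input standard uncertainties analytically through the model's sensitivity coefficients; MCS avoids the linearization at the cost of sampling.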

Keywords: building energy performance, uncertainty evaluation, GUM, Bayesian approach, Monte Carlo method

Procedia PDF Downloads 442
430 Fabrication of a New Electrochemical Sensor Based on New Nanostructured Molecularly Imprinted Polypyrrole for Selective and Sensitive Determination of Morphine

Authors: Samaneh Nabavi, Hadi Shirzad, Arash Ghoorchian, Maryam Shanesaz, Reza Naderi

Abstract:

Morphine (MO), the most effective painkiller, is considered the reference against which analgesics are assessed. For biomedical applications, it is necessary to detect and maintain MO concentrations in blood and urine within safe ranges. To date, the available techniques for detecting MO are mostly expensive. Recently, many electrochemical sensors for the direct determination of MO have been constructed. A molecularly imprinted polymer (MIP) is a polymeric material with built-in functionality for the recognition of a particular chemical substance through its complementary cavity. This paper reports a sensor for MO using a combination of a molecularly imprinted polymer (MIP) and differential-pulse voltammetry (DPV). Electropolymerization of MO-doped polypyrrole initially yielded poor-quality films, but a well-doped, nanostructured film with increased impregnation was obtained at pH 12. Above pH 11, MO is in its anionic form. The effect of various experimental parameters, including pH, scan rate and accumulation time, on the voltammetric response of MO was investigated. Under the optimum conditions, the concentration of MO was determined using DPV over a linear range of 7.07 × 10−6 to 2.1 × 10−4 mol L−1, with a correlation coefficient of 0.999 and a detection limit of 13.3 × 10−8 mol L−1. The effect of common interferences, namely ascorbic acid (AA) and uric acid (UA), on the current response of MO was studied. The modified electrode can be used for the determination of MO spiked into urine samples, and excellent recovery results were obtained. The nanostructured polypyrrole films were characterized by field emission scanning electron microscopy (FESEM) and Fourier transform infrared (FTIR) spectroscopy.
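A reported detection limit like the one above is commonly derived from the calibration slope and the noise of blank measurements via the 3σ/slope convention. The sketch below fits a calibration line by least squares and estimates a detection limit from blank replicates; the data are synthetic and the convention is a general analytical-chemistry rule, not necessarily the authors' exact procedure.

```python
import statistics

def calibration_lod(concentrations, currents, blank_currents, k=3):
    """Least-squares calibration line and k*sigma/slope detection limit.

    concentrations / currents: calibration standards and their responses.
    blank_currents: repeated measurements of the blank, whose standard
    deviation estimates the baseline noise (IUPAC k = 3 convention).
    """
    n = len(concentrations)
    mx = sum(concentrations) / n
    my = sum(currents) / n
    sxx = sum((c - mx) ** 2 for c in concentrations)
    sxy = sum((c - mx) * (i - my) for c, i in zip(concentrations, currents))
    slope = sxy / sxx
    intercept = my - slope * mx
    sigma_blank = statistics.stdev(blank_currents)
    lod = k * sigma_blank / slope
    return slope, intercept, lod
```

With real DPV data the currents would be the peak heights at each standard concentration.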

Keywords: morphine detection, sensor, polypyrrole, nanostructure, molecularly imprinted polymer

Procedia PDF Downloads 404
429 Haptic Robotic Glove for Tele-Exploration of Explosive Devices

Authors: Gizem Derya Demir, Ilayda Yankilic, Daglar Karamuftuoglu, Dante Dorantes

Abstract:

Nowadays, terror attacks are, unfortunately, an increasingly common threat around the world, so safety measures have become much more essential. Robots offer an alternative means of providing safety and saving human lives, for example in the disassembly and disposal of bombs. In this article, remote exploration and manipulation of potential explosive devices from a safe distance are addressed by designing a novel, simple and ergonomic haptic robotic glove. SolidWorks® Computer-Aided Design, computerized dynamic simulation, and MATLAB® kinematic and static analysis were used for the haptic robotic glove and finger design. Angle control of the servo motors was implemented in ARDUINO® IDE code on a Makeblock® MegaPi control card. Simple grasping dexterity solutions for the fingers were obtained using one linear soft sensor and one angle sensor per finger, and six servo motors in total are used to remotely control a slave multi-tooled robotic hand. This project is still ongoing; current results and future research steps are presented.

Keywords: dexterity, exoskeleton, haptics, position control, robotic hand, teleoperation

Procedia PDF Downloads 156
428 4D Modelling of Low Visibility Underwater Archaeological Excavations Using Multi-Source Photogrammetry in the Bulgarian Black Sea

Authors: Rodrigo Pacheco-Ruiz, Jonathan Adams, Felix Pedrotti

Abstract:

This paper introduces the applicability of underwater photogrammetric survey under challenging conditions as the main tool to enhance and enrich the process of documenting archaeological excavation through the creation of 4D models. Photogrammetry was being attempted on underwater archaeological sites at least as early as the 1970s, and today the production of traditional 3D models is becoming common practice within the discipline. Underwater photogrammetry is more often implemented to record exposed underwater archaeological remains and less so as a dynamic interpretative tool. It therefore tends to be applied in bright environments when underwater visibility is greater than 1 m, limiting its implementation on most submerged archaeological sites in more turbid conditions. Recent years have seen significant development of better digital photographic sensors and improvements in optical technology, ideal for darker environments. Such developments, in tandem with powerful computing systems for processing, have allowed this research to use underwater photogrammetry as a standard recording and interpretative tool. Using multi-source photogrammetry (five GoPro Hero5 Black cameras), this paper presents the accumulation of daily (4D) underwater surveys carried out at the Early Bronze Age (3,300 BC) to Late Ottoman (17th century AD) archaeological site of Ropotamo in the Bulgarian Black Sea under challenging conditions (< 0.5 m visibility). It proves that underwater photogrammetry can and should be used as one of the main recording methods, even in low light and poor underwater conditions, as a way to better understand the complexity of the underwater archaeological record.

Keywords: 4D modelling, Black Sea Maritime Archaeology Project, multi-source photogrammetry, low visibility underwater survey

Procedia PDF Downloads 223
427 Applications of Artificial Intelligence (AI) in Cardiac Imaging

Authors: Angelis P. Barlampas

Abstract:

The purpose of this study is to inform the reader about the various applications of artificial intelligence (AI) in cardiac imaging. AI is growing fast, and its role is crucial in medical specialties that use large amounts of digital data, which are very difficult or even impossible for human beings, and especially doctors, to manage. Artificial intelligence (AI) refers to the ability of computers to mimic human cognitive function, performing tasks such as learning, problem-solving, and autonomous decision-making based on digital data. Whereas AI describes the concept of using computers to mimic human cognitive tasks, machine learning (ML) describes the category of algorithms that enable most current applications described as AI. Some current applications of AI in cardiac imaging are as follows. Ultrasound: automated segmentation of cardiac chambers across five common views, with consequent quantification of chamber volumes/mass, ascertainment of ejection fraction and determination of longitudinal strain through speckle tracking; determination of the severity of mitral regurgitation (accuracy > 99% for every degree of severity); identification of myocardial infarction; distinguishing athlete's heart from hypertrophic cardiomyopathy, and restrictive cardiomyopathy from constrictive pericarditis; prediction of all-cause mortality. CT: reduction of radiation doses; calculation of the calcium score; diagnosis of coronary artery disease (CAD); prediction of all-cause 5-year mortality; prediction of major cardiovascular events in patients with suspected CAD. MRI: segmentation of cardiac structures and infarct tissue; calculation of cardiac mass and function parameters; distinguishing patients with myocardial infarction from control subjects, which could potentially reduce costs by precluding the need for gadolinium-enhanced CMR; prediction of 4-year survival in patients with pulmonary hypertension. Nuclear imaging: classification of normal and abnormal myocardium in CAD; detection of locations with abnormal myocardium; prediction of cardiac death. ML was comparable to or better than two experienced readers in predicting the need for revascularization. AI is emerging as a helpful tool in cardiac imaging and for doctors who cannot manage the ever-increasing demand for examinations such as ultrasound, computed tomography, MRI, or nuclear imaging studies.

Keywords: artificial intelligence, cardiac imaging, ultrasound, MRI, CT, nuclear medicine

Procedia PDF Downloads 61
426 Development of a Flexible LoRa-Based Wireless Sensing System for Long-Term Health Monitoring of Civil Structures

Authors: Hui Zhang, Sherif Beskhyroun

Abstract:

In this study, a highly flexible LoRa-based wireless sensing system was used to assess the strain-state performance of building structures. The system was developed to address the local damage limitation of structural health monitoring (SHM) systems and is part of an intelligent SHM system designed to monitor, collect and transmit strain changes in key structural components. The main purpose of the wireless sensor system is to reduce development and installation costs and to reduce the power consumption of the system, so as to achieve long-term monitoring. The highly stretchable flexible strain gauge is mounted on the surface of the structure and is waterproof, heat resistant and low-temperature resistant, greatly reducing the installation and maintenance costs of the sensor. The system was also developed with the aim of using LoRa wireless communication technology to achieve both low power consumption and long-distance transmission, thereby solving the problem of large-scale deployment of sensors to cover more areas in large structures. In long-term monitoring of the building structure, the system shows very high performance, very low actual power consumption, and stable wireless transmission. The results show that the developed system has high resolution and sensitivity and is well suited to long-term monitoring.
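The strain readings such a system transmits are ultimately resistance changes of the gauge. A minimal sketch of the standard gauge-factor conversion; the nominal resistance and gauge factor below are typical metal-foil illustrative values, not the flexible gauge's calibrated parameters.

```python
def strain_from_resistance(r_measured, r_nominal, gauge_factor=2.0):
    """Convert a strain-gauge resistance reading to strain.

    Uses the basic gauge relation GF = (dR/R0) / strain, so
    strain = (dR/R0) / GF. A gauge factor of 2.0 is typical for
    metal-foil gauges; a stretchable flexible gauge like the one
    described would use its own calibrated factor.
    """
    return (r_measured - r_nominal) / r_nominal / gauge_factor

# A 120-ohm gauge reading 120.12 ohm corresponds to 500 microstrain at GF = 2
eps = strain_from_resistance(120.12, 120.0)
```

In practice the node would digitize the bridge voltage, apply this conversion, and send only the strain value over LoRa to keep the payload, and hence the airtime and power draw, small.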

Keywords: LoRa, SHM system, strain measurement, civil structures, flexible sensing system

Procedia PDF Downloads 85
425 A Process of Forming a Single Competitive Factor in the Digital Camera Industry

Authors: Kiyohiro Yamazaki

Abstract:

This paper considers the process by which a single competitive factor forms in the digital camera industry, from the viewpoint of the product platform. To make product development easier and to increase product introduction ratios, companies concentrate their development efforts on improving and strengthening certain product attributes, and in this process product platforms are formed continuously. It is pointed out that the formation of product platforms raises the product development efficiency of individual companies but, on the other hand, involves a trade-off in that it unifies competitive factors across the whole industry. This research analyzes product specification data collected from the web pages of digital camera companies. Specifically, it covers all product specifications released in Japan from 1995 to 2003 and analyzes the composition of image sensors and optical lenses; it identifies product platforms shared by multiple products and discusses their application. As a result, this research found that product platforms emerged from the development of standard products for the major market segments. Every major company has built product platforms of image sensors and optical lenses, and as a result competitive factors have been unified across the entire industry through this platform formation. In other words, platform formation improved the product development efficiency of individual firms; however, it also caused the industry's competitive factors to become unified.

Keywords: digital camera industry, product evolution trajectory, product platform, unification of competitive factors

Procedia PDF Downloads 140
424 Multi-Dimension Threat Situation Assessment Based on Network Security Attributes

Authors: Yang Yu, Jian Wang, Jiqiang Liu, Lei Han, Xudong He, Shaohua Lv

Abstract:

As increasing network attacks become more and more complex, network situation assessment based on log analysis cannot meet the requirements for ensuring network security because of the low quality of logs and alerts. This paper addresses the lack of consideration of the security attributes of hosts and attacks in the network. The identity and effectiveness of Distributed Denial of Service (DDoS) attacks are hard to prove in risk assessment based on alerts and flow matching. This paper proposes a multi-dimension threat situation assessment method based on network security attributes. First, the paper offers an improved Common Vulnerability Scoring System (CVSS) calculation, which includes confidentiality risk, integrity risk, availability risk and a weighted risk. Second, the paper introduces the deterioration rate of properties collected by sensors in hosts and the network, aimed at assessing the time and level of DDoS attacks. Third, the paper introduces the distribution of asset value across security attributes, considering features of attacks and the network, aimed at assessing and showing the whole situation. Experiments demonstrate that the approach reflects the effectiveness and level of DDoS attacks, and the result shows the primary threats in the network and its security requirements. Through comparison and analysis, the method reflects security requirements and the security risk situation better than traditional methods based on alert and flow analysis.
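The improved CVSS calculation is described only at a high level. One plausible reading, a weighted combination of per-attribute sub-scores on the CVSS-like 0-10 scale, can be sketched as follows; the weights and the clamping are illustrative assumptions, not the paper's formula.

```python
def weighted_risk(conf_risk, integ_risk, avail_risk, weights=(0.4, 0.3, 0.3)):
    """Combine per-attribute risks into a single weighted score.

    Each input is a sub-score on a CVSS-like 0-10 scale; `weights`
    express the relative importance of confidentiality, integrity and
    availability for the asset being scored (illustrative values).
    """
    wc, wi, wa = weights
    assert abs(wc + wi + wa - 1.0) < 1e-9, "weights must sum to 1"
    score = wc * conf_risk + wi * integ_risk + wa * avail_risk
    return min(10.0, max(0.0, score))  # clamp to the 0-10 scale
```

A DDoS against a public web server, for example, would typically be scored with a larger availability weight than the default shown here.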

Keywords: DDoS evaluation, improved CVSS, network security attribute, threat situation assessment

Procedia PDF Downloads 195
423 Pioneering Technology of Night Photo-Stimulation of the Brain Lymphatic System: Therapy of Brain Diseases during Sleep

Authors: Semyachkina-Glushkovskaya Oxana, Fedosov Ivan, Blokhina Inna, Terskov Andrey, Evsukova Arina, Elovenko Daria, Adushkina Viktoria, Dubrovsky Alexander, Jürgen Kurths

Abstract:

In modern neurobiology, sleep is considered a novel biomarker and a promising therapeutic target for brain diseases. This is due to recent discoveries of the nighttime activation of the brain lymphatic system (BLS), which plays an important role in the removal of wastes and toxins from the brain and contributes to the neuroprotection of the central nervous system (CNS). In our review, we discuss how night stimulation of the BLS might be a breakthrough strategy in the treatment of Alzheimer's and Parkinson's disease, stroke, brain trauma, and oncology. Although this research is in its infancy, there are pioneering and promising results suggesting that night transcranial photobiomodulation (tPBM) stimulates lymphatic removal of amyloid-beta from the mouse brain more effectively than daytime tPBM, and that this is associated with a greater improvement in the neurological status and recognition memory of the animals. In our previous study, we discovered that tPBM modulates the tone and permeability of the lymphatic endothelium by stimulating NO formation, promoting lymphatic clearance of wastes and toxins from the brain tissues. We also demonstrate that tPBM can lead to angio- and lymphangiogenesis, which is another mechanism underlying tPBM-mediated stimulation of the BLS. Thus, photo-augmentation of the BLS might be a promising therapeutic strategy for preventing or delaying brain diseases associated with BLS dysfunction. Here we present a pioneering technology for simultaneous tPBM in humans and sleep monitoring, for stimulation of the BLS to remove toxins from the CNS and to modulate brain immunity. The wirelessly controlled device includes a flexible organic light-emitting diode (LED) source that is controlled directly by a sleep-tracking device via a mobile application. The designed autonomous LED source is capable of providing the required therapeutic dose of light radiation to a given region of the patient's head without disturbing the sleeping patient. To minimize patient discomfort, advanced materials such as flexible organic LEDs were used. Acknowledgment: This study was supported by RSF project No. 23-75-30001.

Keywords: brain diseases, brain lymphatic system, phototherapy, sleep

Procedia PDF Downloads 59
422 Hybrid Graphene Based Nanomaterial as Highly Efficient Catalyst for the Electrochemical Determination of Ciprofloxacin

Authors: Tien S. H. Pham, Peter J. Mahon, Aimin Yu

Abstract:

The detection of drug molecules by voltammetry has attracted great interest in recent years. However, many drug molecules exhibit poor electrochemical signals at common electrodes, which results in low detection sensitivity. An efficient way to overcome this problem is to modify electrodes with functional materials. Since its discovery in 2004, graphene (or reduced graphene oxide) has emerged as one of the most studied two-dimensional carbon materials in condensed matter physics, electrochemistry and related fields, owing to its exceptional physicochemical properties. Additionally, continuous technological development has opened a new window for the successful fabrication of many novel graphene-based nanomaterials for electrochemical analysis. This research aims to synthesize and characterize gold nanoparticle-coated beta-cyclodextrin-functionalized reduced graphene oxide (Au NP–β-CD–RGO) nanocomposites, with highly conductive and strongly electro-catalytic properties as well as excellent supramolecular recognition abilities, for the modification of electrodes. The electrochemical response of ciprofloxacin at the as-prepared nanocomposite-modified electrode was effectively amplified and much higher than that at the bare electrode. The linear concentration range was from 0.01 to 120 µM, with a detection limit of 2.7 nM using differential pulse voltammetry. Thus, the Au NP–β-CD–RGO nanocomposite has great potential as a material for constructing sensitive sensors for the electrochemical determination of ciprofloxacin or similar antibacterial drugs, based on its excellent stability, selectivity, and reproducibility.

Keywords: Au nanoparticles, β-CD, ciprofloxacin, electrochemical determination, graphene based nanomaterials

Procedia PDF Downloads 177
421 Intelligent Parking Systems for Quasi-Close Communities

Authors: Ayodele Adekunle Faiyetole, Olumide Olawale Jegede

Abstract:

This paper presents the experimental design and needs justification for a localized intelligent parking system (L-IPS), ideal for quasi-close communities with increasing vehicular volume that depend on limited or constant parking facilities. With a constant supply of parking facilities, the demand from an increasing vehicular volume can lead to poor time conservation or extended travel time, traffic congestion or impeded mobility, and safety issues. Increased negative environmental and economic externalities are other associated downsides of this disparity between demand and supply. The L-IPS is designed using a microcontroller, ultrasonic sensors, and LED indicators, such that the current status, in terms of parking spot availability, can be read on an LCD screen from the main entrance to the community or parking zone. As an advanced traffic management system (ATMS), the L-IPS is designed to resolve aspects of infrastructure-to-driver (I2D) communication and parking detection issues. The L-IPS can thus save users time by informing them of parking spot availability, providing timely, informed routing to the next preferred facility or seamless berthing at an available spot on a proximate facility, as the case may be. Its use could also improve safety and mobility and reduce fuel consumption and costs, thereby reducing the negative environmental and economic externalities of transportation systems.
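The core detection logic described, ultrasonic ranging mapped to spot occupancy and summarized for an entrance display, can be sketched as follows; the threshold and display format are illustrative assumptions, not the authors' design.

```python
OCCUPIED_BELOW_CM = 30.0   # an echo closer than this means a car is present

def spot_states(distances_cm):
    """Map ultrasonic echo distances (one per spot) to occupancy flags."""
    return [d < OCCUPIED_BELOW_CM for d in distances_cm]

def lcd_summary(distances_cm):
    """One-line status string for the entrance LCD: X = occupied, O = free."""
    states = spot_states(distances_cm)
    free = states.count(False)
    return f"FREE {free}/{len(states)}: " + " ".join(
        "X" if s else "O" for s in states)
```

On the actual microcontroller, each distance would come from timing an ultrasonic ping, and the same per-spot flags would also drive the LED indicators.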

Keywords: intelligent parking systems, localized intelligent parking system, intelligent transport systems, advanced traffic management systems, infrastructure-to-driver communication

Procedia PDF Downloads 157
420 Genetic Algorithm and Multi Criteria Decision Making Approach for Compressive Sensing Based Direction of Arrival Estimation

Authors: Ekin Nurbaş

Abstract:

One of the essential challenges in array signal processing, which has drawn enormous research interest over the past several decades, is estimating the direction of arrival (DoA) of plane waves impinging on an array of sensors. In recent years, Compressive Sensing (CS)-based DoA estimation methods have been proposed, and it has been discovered that CS-based algorithms achieve significant performance for DoA estimation even in scenarios with multiple coherent sources. On the other hand, the Genetic Algorithm, a method that provides a solution strategy inspired by natural selection, has been used in sparse representation problems in recent years and provides significant improvements in performance. With all of this in consideration, this paper proposes a method that combines the Genetic Algorithm (GA) and Multi-Criteria Decision Making (MCDM) approaches for DoA estimation in the CS framework. In this method, we generate a multi-objective optimization problem by splitting the norm minimization and reconstruction loss minimization parts of the Compressive Sensing algorithm. With the help of the Genetic Algorithm, multiple non-dominated solutions are obtained for the defined multi-objective optimization problem. Among the Pareto-frontier solutions, the final solution is selected using multiple MCDM methods. Moreover, the performance of the proposed method is compared with CS-based methods in the literature.
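The two objectives obtained by splitting the CS problem, and the non-dominated filtering that defines the Pareto frontier the GA searches, can be sketched in pure Python as follows. This is a brute-force illustration; the actual method uses a GA to generate candidates and MCDM methods to pick the final solution.

```python
import math

def objectives(A, y, x):
    """Split CS objectives: l1 sparsity norm and l2 reconstruction residual.

    A is the (rows of the) sensing matrix, y the measurement vector,
    x a candidate sparse solution; all plain Python lists.
    """
    sparsity = sum(abs(v) for v in x)
    residual = math.sqrt(sum(
        (yi - sum(a * xj for a, xj in zip(row, x))) ** 2
        for row, yi in zip(A, y)))
    return sparsity, residual

def nondominated(points):
    """Indices of Pareto-optimal candidates (both objectives minimized)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return keep
```

A candidate is kept only if no other candidate is at least as good in both sparsity and residual; the MCDM step would then rank the surviving frontier points.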

Keywords: genetic algorithm, direction of arrival estimation, multi-criteria decision making, compressive sensing

Procedia PDF Downloads 134
419 Additive Manufacturing of Titanium Metamaterials for Tissue Engineering

Authors: Tuba Kizilirmak

Abstract:

The distinct properties of porous metamaterials have been widely exploited for biomedicine, which requires a three-dimensional (3D) porous structure combining fine mechanical features, biodegradability and biocompatibility. Applications of metamaterials include (i) porous orthopedic and dental implants; (ii) in vitro cell culture on metamaterials and bone regeneration with metamaterials in vivo; and (iii) macro-, micro- and nano-level porous metamaterials for sensors, diagnosis and drug delivery. Some properties are specific to the design of metamaterials for tissue engineering: the surface-to-volume ratio, pore size and degree of interconnection are selected to control cell behavior and bone ingrowth. In this study, the additive manufacturing technique of selective laser melting (SLM) will be used to print the scaffolds. SLM builds 3D components from designed 3D CAD models, adding material progressively layer by layer. This study aims to design metamaterials in Ti6Al4V, a material that offers advantages in both mechanical and biological properties. The Ti6Al4V scaffolds will support cell attachment by providing a suitable area for cell adhesion. Osteoblast cell attachment on the Ti6Al4V scaffolds will be examined after determining the optimum stiffness and other mechanical properties, which should be close to the mechanical properties of bone. Before producing the samples, a modeling technique will be used to simulate their mechanical behavior; the samples include different lattice models with varying porosity and density.
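Two quantities central to such lattice design, the porosity of a unit cell and the effective stiffness of the porous scaffold, can be sketched with the classical Gibson-Ashby scaling for open-cell lattices. The coefficients and the Ti6Al4V modulus below are textbook illustrative values, not results from this study.

```python
def lattice_porosity(strut_volume_mm3, cell_volume_mm3):
    """Porosity of a unit cell: void fraction = 1 - relative density."""
    rel_density = strut_volume_mm3 / cell_volume_mm3
    return 1.0 - rel_density

def scaffold_stiffness_gibson_ashby(e_solid_gpa, rel_density, c=1.0, n=2.0):
    """Gibson-Ashby scaling E* = C * Es * (rho*/rho_s)^n for open cells.

    C ~ 1 and n ~ 2 are the classical open-cell values; a real design
    would fit them to the printed Ti6Al4V lattice geometry.
    """
    return c * e_solid_gpa * rel_density ** n
```

For example, a lattice at 30% relative density made of Ti6Al4V (roughly 110 GPa solid modulus) lands near 10 GPa, which is in the range often quoted for cortical bone, illustrating why porosity is the main lever for stiffness matching.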

Keywords: additive manufacturing, titanium lattices, metamaterials, porous metals

Procedia PDF Downloads 183
418 An Assessment of the Performance of Local Government in Ondo State Nigeria: A Capital Budgeting Approach

Authors: Olurankinse Felix

Abstract:

Local governments in Ondo State, Nigeria are the third tier of government, saddled with the responsibility of providing governance and economic services at the grassroots. To enable this, the Constitution of the Federal Republic of Nigeria provides that a proportion of the Federation Account be allocated to them in addition to their internally generated revenue. From this allocation and other incidental sources of revenue, the local governments are expected to provide basic infrastructure and other social amenities to better the lot of rural dwellers. Nevertheless, local governments' performance in terms of provision of social amenities is questionable and far from encouraging. Assessing the performance of local governments in this period of scarce resources is highly indispensable, the more so as the activities of local government staff are bedeviled by fraud, corruption and mismanagement. The direct impact of their actions on the living standards of rural dwellers therefore calls for an evaluation of their level of performance using a capital budgeting approach. The paper, being a time-series study, adopts a survey design. Data were obtained from secondary sources, mainly the annual financial statements and publications of approved budget estimates covering the period of study (2008-2012). Ratio analysis was employed to compare the levels of performance of the local governments under study. The results show that fewer than 30% of the local governments were able to harness their budgetary allocation to provide amenities to the beneficiaries, while the majority were involved in unethical conduct ranging from theft of funds and corruption to diversion of funds and extra-budgetary activities. There is also poor internally generated revenue to complement the statutory allocation, and the monthly withholding of larger portions of the local government share by the state, in the name of the joint account, was also seen as a contributory factor. The study recommends transparency and accountability in public fund management through the oversight function of the State House of Assembly. Local governments should also be made autonomous and independent of the state by jettisoning the idea of the joint account.

Keywords: performance, transparency and accountability, capital budgeting, joint account, local government autonomy

Procedia PDF Downloads 317
417 Performance Analysis of Vision-Based Transparent Obstacle Avoidance for Construction Robots

Authors: Siwei Chang, Heng Li, Haitao Wu, Xin Fang

Abstract:

Construction robots are receiving growing attention as a promising solution to the manpower shortage in the construction industry. Developing intelligent control techniques that help robots avoid transparent and reflective building obstacles is crucial for guaranteeing the adaptability and flexibility of mobile construction robots in complex construction environments. With the boom in computer vision techniques, a number of studies have proposed vision-based methods for transparent obstacle avoidance to improve operation accuracy. However, vision-based methods also carry disadvantages such as high computational cost. To provide a better-grounded evaluation, this study analyzes the performance of vision-based techniques for avoiding transparent building obstacles. To achieve this, commonly used sensors, including a lidar, an ultrasonic sensor, and a USB camera, are mounted on the robotic platform to detect obstacles. A Raspberry Pi 3 board runs the data-collection and control algorithms, and a TurtleBot3 Burger is used to test the programs. On-site experiments are carried out to observe performance in terms of success rate and detection distance, with obstacle shape and environmental conditions as control variables. The findings demonstrate how effective vision-based strategies are for transparent building obstacle avoidance and provide insights and informed knowledge for introducing computer vision techniques in this domain.
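The motivation for combining the three sensors can be sketched with a toy decision rule. This is an illustrative assumption, not the paper's algorithm: glass is often invisible to a lidar or camera, so the ultrasonic reading is treated as authoritative when the sensors disagree:

```python
# Minimal sketch (not the authors' method) of fusing lidar, ultrasonic and
# camera range estimates for transparent-obstacle detection. A glass panel
# typically lets optical sensors see "through" it, while the ultrasonic
# sensor reports the true, nearer range.

SAFE_DISTANCE_M = 0.5  # assumed stopping threshold

def detect_obstacle(lidar_m, ultrasonic_m, camera_m):
    """Return (obstacle_ahead, likely_transparent)."""
    optical_min = min(lidar_m, camera_m)
    obstacle = min(optical_min, ultrasonic_m) < SAFE_DISTANCE_M
    # Ultrasonic sees something close that the optical sensors miss:
    transparent = ultrasonic_m < SAFE_DISTANCE_M <= optical_min
    return obstacle, transparent

# A glass panel 0.3 m ahead: lidar and camera report the wall 2 m behind it.
print(detect_obstacle(2.0, 0.3, 2.0))  # (True, True)
```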

Keywords: construction robot, obstacle avoidance, computer vision, transparent obstacle

Procedia PDF Downloads 63
416 Electrochemical Behavior of Cocaine on Carbon Paste Electrode Chemically Modified with Cu(II) Trans 3-MeO Salcn Complex

Authors: Alex Soares Castro, Matheus Manoel Teles de Menezes, Larissa Silva de Azevedo, Ana Carolina Caleffi Patelli, Osmair Vital de Oliveira, Aline Thais Bruni, Marcelo Firmino de Oliveira

Abstract:

Considering the problem of the seizure of illicit drugs, as well as the development of electrochemical sensors using chemically modified electrodes, this work studies the electrochemical activity of cocaine on a carbon paste electrode chemically modified with the Cu(II) trans 3-MeO salcn complex. Cyclic voltammetry was performed in 0.1 mol L⁻¹ KCl supporting electrolyte at a scan rate of 100 mV s⁻¹, using an electrochemical cell composed of three electrodes: an Ag/AgCl electrode (filled with 3 mol L⁻¹ KCl) from Metrohm® as the reference electrode, a platinum spiral electrode as the auxiliary electrode, and the carbon paste electrode chemically modified with the Cu(II) trans 3-MeO complex as the working electrode. Two forms of cocaine were analyzed: cocaine hydrochloride (pH 3) and the free base (pH 8). The PM7 computational method predicted that the hydrochloride form is more stable than the free base, and accordingly cyclic voltammetry yielded an electrochemical signal only for cocaine hydrochloride, with an anodic peak at 1.10 V. Over a linear range of 2-20 μmol L⁻¹, the detection and quantification limits were 2.39×10⁻⁵ and 7.26×10⁻⁵ mol L⁻¹, respectively. The study also showed that cocaine is adsorbed on the surface of the working electrode in an irreversible process: only anodic peaks are observed, corresponding to the oxidation of cocaine, which occurs in the hydrophilic region through the loss of two electrons. The mechanism of this reaction was confirmed by an ab initio quantum method.
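Detection and quantification limits of the kind reported here are conventionally computed from a calibration line. The sketch below uses the common LOD = 3.3·sd/slope and LOQ = 10·sd/slope criteria; all concentrations, currents and the blank standard deviation are invented for illustration, not the paper's data:

```python
# Illustrative LOD/LOQ calculation from a voltammetric calibration curve.
# Slope is obtained by ordinary least squares in pure Python.

def slope_intercept(xs, ys):
    """Least-squares slope and intercept of y vs x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

conc = [2e-6, 5e-6, 10e-6, 15e-6, 20e-6]   # mol L^-1 (hypothetical)
current = [0.21, 0.52, 1.01, 1.49, 2.02]   # uA anodic peaks (hypothetical)

m, b = slope_intercept(conc, current)
sd_blank = 7.2e-3                          # assumed sd of blank response, uA
lod = 3.3 * sd_blank / m                   # limit of detection
loq = 10 * sd_blank / m                    # limit of quantification
print(f"LOD = {lod:.2e} mol/L, LOQ = {loq:.2e} mol/L")
```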

Keywords: ab-initio computational method, analytical method, cocaine, Schiff base complex, voltammetry

Procedia PDF Downloads 178
415 Improving Waste Recycling and Resource Productivity by Integrating Smart Resource Tracking System

Authors: Atiq Zaman

Abstract:

The high contamination rate in the recycling waste stream is one of the major problems in Australia. In addition, a lack of reliable waste data makes it even more difficult to design and implement an effective waste management plan. This article conceptualizes the opportunity to improve resource productivity by integrating a smart resource tracking system (SRTS) into the Australian household waste management system. The smart resource tracking system would be applied in the following ways: (i) a mobile-application-based resource tracking system to measure each household's material flow; (ii) RFID, smart imaging and weighing systems to track waste generation, recycling and contamination; (iii) informing and motivating manufacturers and retailers to improve problematic product packaging; and (iv) ensuring quality and reliable data through open-source cloud data for public use. Smart mobile applications, imaging, radio-frequency identification (RFID) and weighing technologies are not new, but the straightforward idea of applying them to household resource consumption, waste bins and collection trucks opens up a new era of accurately measuring and effectively managing our waste. The idea would deliver urgently needed, reliable data and clarity on household consumption, recycling behaviour and waste management practices in the context of available local infrastructure and policies. The findings of this study would therefore be very important for decision-makers seeking to improve resource productivity in the waste industry through a smart resource tracking system.

Keywords: smart devices, mobile application, smart sensors, resource tracking, waste management, resource productivity

Procedia PDF Downloads 132
414 Influences of Separation of the Boundary Layer in the Reservoir Pressure in the Shock Tube

Authors: Bruno Coelho Lima, Joao F.A. Martos, Paulo G. P. Toro, Israel S. Rego

Abstract:

The shock tube is a ground facility widely used in aerospace and aeronautics science and technology for studies of gas-dynamic and physico-chemical processes in gases at high temperature, explosions, and dynamic calibration of pressure sensors. A shock tube in its simplest form comprises two tubes of equal cross-section separated by a diaphragm. The diaphragm's function is to separate the two reservoirs, which are at different pressures: the high-pressure reservoir is called the driver, and the low-pressure reservoir the driven section. When the diaphragm is broken by the pressure difference, a non-stationary normal shock wave (the incident shock wave) forms at the diaphragm location and travels toward the closed end of the driven section. When this shock wave reaches the closed end of the driven section, it is completely reflected. The reflected shock wave then interacts with the boundary layer created by the flow induced by the passage of the incident shock wave, and this interaction forces the boundary layer to separate. The aim of this paper is to analyze the influence of boundary layer separation on the reservoir pressure in the shock tube. CFD (computational fluid dynamics) simulations, experimental tests and analytical analysis were compared. For the analytical analysis, routines were written in Python; the numerical simulations used Ansys Fluent; and the experimental tests were performed in the T1 shock tube located at IEAv (Institute of Advanced Studies).
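One of the standard analytical relations typically coded in such Python routines is the static pressure jump across a normal shock. The sketch below is the textbook Rankine-Hugoniot result for a perfect gas, not the paper's specific analysis:

```python
# Static pressure ratio p2/p1 across a normal shock of Mach number Ms
# in a perfect gas (gamma = 1.4 for air) - the classic textbook relation.

def shock_pressure_ratio(mach, gamma=1.4):
    """p2/p1 across a normal shock travelling at Mach `mach`."""
    return (2.0 * gamma * mach**2 - (gamma - 1.0)) / (gamma + 1.0)

# An incident shock at Ms = 2 raises the driven-gas pressure 4.5-fold.
print(shock_pressure_ratio(2.0))  # 4.5
```

At Ms = 1 the relation correctly degenerates to no jump (p2/p1 = 1), a quick sanity check for such routines.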

Keywords: boundary layer separation, moving shock wave, shock tube, transient simulation

Procedia PDF Downloads 298
413 Adversarial Attacks and Defenses on Deep Neural Networks

Authors: Jonathan Sohn

Abstract:

Deep neural networks (DNNs) have shown state-of-the-art performance in many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks, which aim to alter the results of deep neural networks by modifying their inputs slightly, have been studied. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we study adversarial attacks and defenses on DNNs for image classification. Two types of adversarial attacks are considered: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate input images into different categories; an adversarial attack slightly alters an image to move it over a decision boundary, causing the DNN to misclassify it. The FGSM attack obtains the gradient of the loss with respect to the image and updates the image once, based on the gradient, to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also a targeted attack, designed to make the model classify an image into a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training: instead of training the neural network only on clean examples, we explicitly let it learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively.
If we use FGSM training as a defense, the classification accuracy improves greatly, from 39.50% to 92.31% under FGSM attacks and from 34.01% to 75.63% under PGD attacks. To further improve classification accuracy under adversarial attacks, we can use the stronger PGD training method, which improves accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that neither FGSM nor PGD training affects the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective defense; PGD attacks and defenses are overall significantly more effective than their FGSM counterparts.
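The FGSM step described above can be demonstrated on a toy model. The sketch below uses a hand-built logistic classifier with invented weights, not the paper's DNN; PGD would repeat a smaller such step several times, projecting back into the epsilon-ball each iteration:

```python
# Minimal FGSM sketch: perturb the input by epsilon times the sign of the
# loss gradient with respect to the input, flipping the classification.

w = [2.0, -3.0, 1.0]  # fixed "trained" weights, illustrative

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if z > 0 else -1

def fgsm(x, y, eps):
    # For logistic loss log(1 + exp(-y*w.x)), the input gradient is
    # -y * w * sigmoid(-y*w.x); its sign is simply -y * sign(w_i).
    return [xi + eps * (-y) * (1 if wi > 0 else -1)
            for xi, wi in zip(x, w)]

x, y = [0.5, -0.2, 0.1], 1   # a correctly classified example
print(predict(x))             # 1
x_adv = fgsm(x, y, eps=0.6)
print(predict(x_adv))         # -1: the perturbation crosses the boundary
```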

Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning

Procedia PDF Downloads 177
412 Machine Learning Classification of Fused Sentinel-1 and Sentinel-2 Image Data Towards Mapping Fruit Plantations in Highly Heterogenous Landscapes

Authors: Yingisani Chabalala, Elhadi Adam, Khalid Adem Ali

Abstract:

Mapping smallholder fruit plantations using optical data is challenging due to morphological landscape heterogeneity and overlapping spectral signatures among crop types. Furthermore, cloud cover limits the use of optical sensing, especially in subtropical climates where it is persistent. This research assessed the effectiveness of Sentinel-1 (S1) and Sentinel-2 (S2) data for mapping fruit trees and co-existing land-use types using support vector machine (SVM) and random forest (RF) classifiers independently; the classifiers were also applied to fused data from the two sensors. Feature ranks were extracted using the RF mean decrease in accuracy (MDA) and forward variable selection (FVS) to identify optimal spectral windows for classifying fruit trees. Based on RF MDA and FVS, the SVM classifier achieved relatively high classification accuracy on the fused satellite data, with an overall accuracy (OA) of 91.6% and a kappa coefficient of 0.91. Applying SVM to S1, S2, the selected S2 variables, and the S1-S2 fusion independently produced OA = 27.64% (kappa = 0.13), OA = 87% (kappa = 0.87), OA = 69.33% (kappa = 0.69), and OA = 87.01% (kappa = 0.87), respectively. Results also indicated that the optimal spectral bands for fruit tree mapping are green (B3) and SWIR-2 (B10) for S2, and the vertical-horizontal (VH) polarization band for S1. Including textural metrics from the VV channel improved discrimination of crops and co-existing land-use cover types. The fusion approach proved robust and well suited for accurate smallholder fruit plantation mapping.
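The mean-decrease-in-accuracy idea behind the RF MDA ranking can be illustrated with a toy permutation-importance sketch. The data and threshold "classifier" below are invented; the paper ranks real S1/S2 bands with random forests:

```python
# Toy permutation importance: permute one feature at a time and measure
# how much a fixed classifier's accuracy drops, averaged over shuffles.

import random

# Two features; only feature 0 is informative (class = 1 if feature0 > 0.5).
X = [[0.9, 0.2], [0.8, 0.7], [0.1, 0.9], [0.2, 0.1],
     [0.7, 0.4], [0.3, 0.6], [0.95, 0.05], [0.05, 0.95]]
y = [1, 1, 0, 0, 1, 0, 1, 0]

def classify(row):
    return 1 if row[0] > 0.5 else 0

def accuracy(rows):
    return sum(classify(r) == t for r, t in zip(rows, y)) / len(y)

def mda(feature, n_shuffles=20):
    """Mean drop in accuracy when `feature` is randomly permuted."""
    base, drop = accuracy(X), 0.0
    for seed in range(n_shuffles):
        col = [r[feature] for r in X]
        random.Random(seed).shuffle(col)
        permuted = [r[:feature] + [v] + r[feature + 1:]
                    for r, v in zip(X, col)]
        drop += base - accuracy(permuted)
    return drop / n_shuffles

print(mda(0), mda(1))  # the informative feature suffers the larger drop
```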

Keywords: smallholder agriculture, fruit trees, data fusion, precision agriculture

Procedia PDF Downloads 35
411 Preparation of Wireless Networks and Security; Challenges in Efficient Accession of Encrypted Data in Healthcare

Authors: M. Zayoud, S. Oueida, S. Ionescu, P. AbiChar

Abstract:

Background: A wireless sensor network encompasses diverse information technology tools and is widely applied in a range of domains, including military surveillance, weather forecasting, and earthquake forecasting. Although the foundations of wireless sensor networks are continually strengthened, security issues usually emerge during practical application. Essential technological tools therefore need to be assessed for the secure aggregation of data, and such practices have to be incorporated into healthcare, where they serve the mutual interest. Objective: Aggregation of encrypted data has been assessed using a homomorphic stream cipher to establish its effectiveness and to provide optimal solutions for the healthcare field. Methods: An experimental design was adopted, employing a newly developed cipher on CPU-constrained devices. Modular additions were used to evaluate the nature of the aggregated data, and the operation of the homomorphic stream cipher is illustrated through different sensors and modular additions. Results: The homomorphic stream cipher proved to be a simple and secure process that allows efficient aggregation of encrypted data, and its application points the way to improved healthcare practices. Statistical values can easily be computed through aggregation on the basis of the selected cipher: the variance, mean, and standard deviation of the sensed data were computed with the selected tool. Conclusion: Homomorphic stream ciphers can be an ideal tool for the aggregation of encrypted data and can also provide sound solutions for the healthcare sector.
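The modular-addition mechanism described in the methods can be sketched with a classic additively homomorphic stream cipher construction. This is an illustrative textbook-style scheme, not the authors' newly developed cipher, and the readings and keys are invented:

```python
# Additively homomorphic stream cipher sketch: each sensor encrypts its
# reading as c_i = (m_i + k_i) mod M. Ciphertexts can be summed directly,
# and the sink removes the summed keystream to recover the aggregate.

M = 2**16  # modulus, chosen large enough to hold the plaintext sum

def encrypt(m, k):
    return (m + k) % M

def aggregate_decrypt(cipher_sum, keys):
    return (cipher_sum - sum(keys)) % M

readings = [72, 68, 75, 80]          # e.g. heart-rate samples, hypothetical
keys = [31411, 5273, 60001, 12345]   # per-sensor keystream values

cipher_sum = sum(encrypt(m, k) for m, k in zip(readings, keys)) % M
total = aggregate_decrypt(cipher_sum, keys)
print(total, total / len(readings))  # 295 73.75
```

The sink never sees individual plaintexts, yet can compute the sum and hence the mean; the same trick applied to squared readings yields variance and standard deviation.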

Keywords: aggregation, cipher, homomorphic stream, encryption

Procedia PDF Downloads 243
410 Hybrid Heat Pump for Micro Heat Network

Authors: J. M. Counsell, Y. Khalid, M. J. Stewart

Abstract:

Achieving nearly zero-carbon heating continues to be identified by UK government analysis as an important feature of any lowest-cost pathway to reducing greenhouse gas emissions. Heat currently accounts for 48% of UK energy consumption and approximately one third of the UK's greenhouse gas emissions. Heat networks are being promoted by UK investment policies as one means of supporting hybrid heat pump based solutions. To this effect, the RISE (Renewable Integrated and Sustainable Electric) heating system project is investigating how an all-electric hybrid configuration of heating sources could play a key role in the long-term decarbonisation of heat. For the purposes of this study, hybrid systems are defined as systems combining an electrically driven air source heat pump, electrically powered thermal storage, a thermal vessel and a micro heat network as an integrated system. This hybrid strategy allows the system to store energy during periods of low electricity demand on the national grid, turning it into a dynamic supply of low-cost heat which is utilized only when required. A prototype of such a system is currently being tested in a modern house fitted with advanced controls and sensors. This paper presents the virtual performance analysis of the system and its design for a micro heat network with multiple dwelling units. The results show that the RISE system is controllable and can reduce carbon emissions while remaining competitive in running costs with a conventional gas boiler heating system.
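The store-at-night, supply-on-demand strategy can be sketched with a toy schedule. The COP, tariffs, store size and charging window below are assumptions for illustration, not RISE project parameters:

```python
# Toy off-peak charging schedule for a heat pump + thermal store hybrid:
# run the heat pump during cheap night hours to charge the store, then
# meet daytime heat demand from the store at the lower night-rate cost.

COP = 3.0          # assumed heat delivered per unit of electricity
STORE_KWH = 40.0   # assumed thermal store capacity, kWh thermal

def off_peak_charge(hours, hp_kw_e):
    """Heat (kWh thermal) banked overnight, capped by store capacity."""
    return min(hours * hp_kw_e * COP, STORE_KWH)

def electricity_cost(heat_kwh, tariff):
    """Cost of producing heat_kwh of heat via the heat pump."""
    return heat_kwh / COP * tariff

stored = off_peak_charge(hours=7, hp_kw_e=2.0)        # capped at 40.0 kWh
night_cost = electricity_cost(stored, tariff=0.10)    # charged at night rate
day_cost = electricity_cost(stored, tariff=0.30)      # same heat, day rate
print(stored, round(night_cost, 2), round(day_cost, 2))  # 40.0 1.33 4.0
```

The gap between the two costs is the running-cost headroom that lets the hybrid compete with a gas boiler.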

Keywords: gas boilers, heat pumps, hybrid heating and thermal storage, renewable integrated and sustainable electric

Procedia PDF Downloads 403
409 Testing the Possibility of Healthy Individuals to Mimic Fatigability in Multiple Sclerotic Patients

Authors: Emmanuel Abban Sagoe

Abstract:

Proper functioning of the central nervous system ensures that we are able to accomplish just about everything we do as human beings, such as walking, breathing and running. Myelinated neurons throughout the body, which transmit signals at high speed, facilitate these actions. In multiple sclerosis (MS), the body's immune system attacks the myelin sheaths surrounding the neurons and destroys them over time. Depending on where in the brain the destruction occurs, symptoms can vary from person to person; fatigue, however, is the biggest problem encountered by MS sufferers. It is often described as the bedrock upon which other symptoms of MS, such as challenges in balance and coordination, dizziness and slurred speech, may occur. Distinguishing between perception-based fatigue and performance-based fatigability is key to identifying appropriate treatment options for patients, and objective methods for assessing motor fatigability are key to providing clinicians and physiotherapists with critical information on the progression of the symptom. This study tested whether the Fatigue Index Kliniken Schmieder assessment tool can detect fatigability as seen in MS patients when healthy subjects with no known history of neurological pathology mimic abnormal gaits. Thirty-three healthy adults aged 18-58 years volunteered as subjects. The subjects, strapped with RehaWatch sensors on both feet, completed six gait protocols of normal and mimicked fatigable gaits, each lasting 60 seconds at a treadmill speed of 1.38889 m/s, following clear instructions.

Keywords: attractor attributes, fatigue index Kliniken Schmieder, gait variability, movement pattern

Procedia PDF Downloads 104
408 Yawning Computing Using Bayesian Networks

Authors: Serge Tshibangu, Turgay Celik, Zenzo Ncube

Abstract:

Road crashes kill over a million people every year and leave millions more injured or permanently disabled. Various annual reports reveal that the percentage of fatal crashes due to fatigue or the driver falling asleep comes directly after the percentage due to intoxicated drivers, and is higher than the combined percentage of fatal crashes due to illegal or unsafe U-turns and illegal or unsafe reversing. Although a relatively small percentage of police reports on road accidents highlight drowsiness and fatigue, the importance of these factors is greater than we might think, hidden by the undercounting of their occurrence. Some scenarios show that these factors are significant in accidents involving deaths and injuries, hence the need for an automatic driver fatigue detection system to considerably reduce the number of accidents owing to fatigue. This research approaches the driver fatigue detection problem in an innovative way by combining cues collected from both temporal analysis of drivers' faces and the environment. Monotony in the driving environment is correlated with visual symptoms of fatigue on drivers' faces to achieve fatigue detection. Optical and infrared (IR) sensors are used to analyse the monotony of the driving environment and to detect the visual symptoms of fatigue on the human face. Internal cues from drivers' faces and external cues from the environment are then combined using machine learning algorithms to detect fatigue automatically.
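The Bayesian fusion of an internal cue (e.g. yawning) with an external cue (environmental monotony) can be sketched with a naive-Bayes update. The prior and conditional probabilities below are assumptions for illustration, not values estimated in the study:

```python
# Naive-Bayes fusion of fatigue cues: each observed cue multiplies the
# odds of the "fatigued" hypothesis by its likelihood ratio.

P_FATIGUE = 0.1  # assumed prior probability that the driver is fatigued
P_CUE = {        # (P(cue | fatigued), P(cue | alert)), assumed values
    "yawn":     (0.7, 0.05),
    "monotony": (0.6, 0.30),
}

def posterior_fatigue(observed_cues):
    """Posterior P(fatigued) after observing the given cues."""
    pf, pa = P_FATIGUE, 1.0 - P_FATIGUE
    for cue in observed_cues:
        p_given_f, p_given_a = P_CUE[cue]
        pf *= p_given_f
        pa *= p_given_a
    return pf / (pf + pa)

print(round(posterior_fatigue(["yawn"]), 3))              # face cue alone
print(round(posterior_fatigue(["yawn", "monotony"]), 3))  # cues combined
```

Combining the environmental cue with the facial cue raises the posterior above what either cue yields alone, which is the rationale for fusing internal and external evidence.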

Keywords: intelligent transportation systems, Bayesian networks, yawning computing, machine learning algorithms

Procedia PDF Downloads 443
407 A Low-Power Two-Stage Seismic Sensor Scheme for Earthquake Early Warning System

Authors: Arvind Srivastav, Tarun Kanti Bhattacharyya

Abstract:

The north-eastern, Himalayan, and Eastern Ghats belts of India comprise earthquake-prone, remote, and hilly terrain, where earthquakes have caused enormous damage in the past. A wireless sensor network based earthquake early warning system (EEWS) is being developed to mitigate such damage. It consists of sensor nodes, distributed over the region, that perform majority voting on the outputs of the seismic sensors in their vicinity and relay a message to a base station to alert residents when an earthquake is detected. At the heart of the EEWS is a low-power two-stage seismic sensor that continuously tracks seismic events from the incoming three-axis accelerometer signal in the first stage and, in the presence of a seismic event, triggers the second-stage P-wave detector, which detects the onset of the P-wave in an earthquake event. The parameters of the P-wave detector have been optimized to minimize detection time and maximize detection accuracy. The operation of the sensor scheme has been verified with data from seven earthquakes retrieved from IRIS; in all test cases, the scheme detected the onset of the P-wave accurately. It has also been established that the P-wave onset detection time reduces linearly with the sampling rate: on the test data, the detection time was around 2 seconds for data sampled at 10 Hz, which reduced to 0.3 seconds for data sampled at 100 Hz.
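A first-stage event tracker of this kind is often built around an STA/LTA trigger, which the keywords also mention. The sketch below is a generic short-term-average over long-term-average detector with illustrative window lengths and threshold, not the paper's tuned parameters:

```python
# Minimal STA/LTA trigger: the ratio of a short-term average to a
# long-term average of signal energy crosses a threshold at onset.

def sta_lta(signal, sta_len=3, lta_len=10, threshold=4.0):
    """Return the first sample index where STA/LTA exceeds threshold."""
    energy = [x * x for x in signal]
    for i in range(lta_len + sta_len, len(signal) + 1):
        sta = sum(energy[i - sta_len:i]) / sta_len
        lta = sum(energy[i - sta_len - lta_len:i - sta_len]) / lta_len
        if lta > 0 and sta / lta > threshold:
            return i - 1
    return None

# Quiet background noise followed by a sudden arrival at sample 20:
trace = [0.1, -0.1] * 10 + [2.0, -1.8, 2.2, -1.9, 2.1]
print(sta_lta(trace))  # 20: the trigger fires at the arrival sample
```

A higher sampling rate puts more samples in the STA window per unit time, which is why the detection delay shrinks as the rate rises.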

Keywords: earthquake early warning system, EEWS, STA/LTA, polarization, wavelet, event detector, P-wave detector

Procedia PDF Downloads 166
406 Dynamic Process Model for Designing Smart Spaces Based on Context-Awareness and Computational Methods Principles

Authors: Heba M. Jahin, Ali F. Bakr, Zeyad T. Elsayad

Abstract:

Smart spaces can be defined as working environments that integrate embedded computers, information appliances and multi-modal sensors to remain focused on the interaction between users, their activity, and their behavior in the space. A smart space must therefore be aware of its context and automatically adapt to changes in it, interacting with its physical environment through natural and multimodal interfaces and serving information proactively. This paper suggests a dynamic framework for the architectural design process of such spaces, based on the principles of computational methods and context-awareness, to help create a field of changes and modifications; it generates possibilities and concerns regarding the physical, structural and user contexts. The framework involves five main processes: first, gathering and analyzing data to generate smart design scenarios, parameters, and attributes; second, transforming these by coding into four types of models; third, connecting those models together in an interaction model that represents the context-awareness system; fourth, transforming that model into a virtual and ambient environment representing the physical and real environments, to act as a linkage between the users and the activities taking place in the smart space; and finally, a feedback phase from users of that environment to ensure that the design of the smart space fulfils their needs. The generated design process will thus help in designing smart spaces that can adapt and be controlled to answer users' defined goals, needs, and activities.

Keywords: computational methods, context-awareness, design process, smart spaces

Procedia PDF Downloads 306
405 Optimum Method to Reduce the Natural Frequency for Steel Cantilever Beam

Authors: Eqqab Maree, Habil Jurgen Bast, Zana K. Shakir

Abstract:

Passive damping, once properly characterized and incorporated into the structural design, is an autonomous mechanism. It can be achieved by applying layers of a polymeric material, called viscoelastic material (VEM) layers, to the base structure; this configuration is known as free or unconstrained layer damping treatment. A shear, or constrained, damping treatment adds a constraining layer, typically a metal, on top of the polymeric layer and is a more efficient form of damping than the unconstrained treatment. In constrained damping treatment, a sandwich is formed with the viscoelastic layer as the core. When the two outer layers experience bending, as they would if the structure were oscillating, they shear the viscoelastic layer and energy is dissipated in the form of heat. This form of energy dissipation allows the structural oscillations to attenuate much faster. The purpose of this study is to predict damping effects using two methods applied to a passive viscoelastic constrained layer damping treatment. The first method is Euler-Bernoulli beam theory, which is commonly used for predicting the vibratory response of beams. The second method is finite element analysis: results were obtained using two-dimensional solid structural elements in ANSYS 14, specifically the eight-noded SOLID183 element, yielding the damped natural frequency values and mode shapes for the first five modes. This type of passive damping treatment is widely used for structural applications in many industries, such as aerospace and automotive. In this paper, a steel cantilever sandwich beam with a viscoelastic core of type 3M-468 is analyzed using both methods.
The results also show that the percentage reduction in modal frequency between the undamped and damped 8 mm thick steel sandwich cantilever beam is very high for each mode, owing to the effect of the viscoelastic layer on the damped beam. This type of damped sandwich steel cantilever beam with a viscoelastic core (3M-468) is therefore very appropriate for the automotive industry and many other mechanical applications, because it has a very high capability to reduce the modal vibration of structures.
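The Euler-Bernoulli estimate used in the first method gives the undamped natural frequencies of a uniform cantilever in closed form: f_n = (λ_n²/2π)·√(EI/(ρAL⁴)), with λ₁ = 1.8751 for the first mode. The beam dimensions in the sketch below are illustrative assumptions, not the paper's test specimen:

```python
# First natural frequency of a uniform steel cantilever from
# Euler-Bernoulli beam theory (undamped, no viscoelastic core).

import math

LAMBDA_1 = 1.8751  # first-mode eigenvalue of the clamped-free beam

def cantilever_f1(E, I, rho, A, L):
    """First undamped natural frequency in Hz."""
    return (LAMBDA_1 ** 2 / (2 * math.pi)) * math.sqrt(E * I / (rho * A * L ** 4))

# Steel strip 300 mm long, 30 mm wide, 8 mm thick (assumed dimensions)
E, rho = 210e9, 7850.0            # Young's modulus (Pa), density (kg/m^3)
b, h, L = 0.030, 0.008, 0.300     # width, thickness, length (m)
A, I = b * h, b * h ** 3 / 12.0   # cross-section area and second moment
print(round(cantilever_f1(E, I, rho, A, L), 1))  # first-mode frequency, Hz
```

Comparing such analytical frequencies against the ANSYS SOLID183 damped results is how the percentage reduction quoted above is obtained.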

Keywords: steel cantilever, sandwich beam, viscoelastic materials core type (3M468), ANSYS14, Euler-Bernoulli beam theory

Procedia PDF Downloads 295
404 The Production, Negotiation and Resistance of Short Video Producers

Authors: Cui Li, Xu Yuping

Abstract:

Starting from the question "Are short video creators, as digital workers, controlled by platform rules?", this study discusses the specific ways in which platform rules exert control and their impact on short video creators. Based on the theory of digital labor, this paper adopts in-depth interviews and participant observation, interviewing 24 producers of short video content on TikTok. By entering the field of short video creation, the author also carried out a four-month field investigation, obtained data on the creation process, and analyzed how short video creators, as digital laborers, are controlled by platform rules, as well as how they compromise and resist in this process, giving a more comprehensive presentation of the labor process of short video creators. It is found that short video creators are controlled by platform rules, mainly traffic rules, and that they create content, compromise and resist under the guidance of traffic. First, while the platform seems to offer a flexible and autonomous way for creators to monetize, the threshold for participation is in fact very high and the monetization rules are vague. Under the influence of traffic rules, creators face unstable incomes and high costs. They therefore have to let traffic guide their creation and turn to flow-oriented content production: keeping content constantly updated, chasing trending topics for the sake of traffic, constructing personas "born for the show", and letting labels solidify their content. Secondly, irregular working hours lead to extended hours and overwork, which causes internal friction for creators at the psychological level and ultimately a rat race of video creation.
Thirdly, video creators internalize and compromise with the platform rules in practice, which keeps them creating independently and forms their intrinsic motivation. Finally, rule-controlled short video creators resist in flexible ways: making use of the platform's mechanisms and rules to produce derivative works, engaging in formulaic production, purchasing fake traffic, and moving to other platforms to maintain their creative autonomy.

Keywords: short videos, TikTok, production, digital labor

Procedia PDF Downloads 47