Search results for: optimizing sensor placements

177 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem

Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly

Abstract:

We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits compared to other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) if the problem is possible to solve using AQO, 2) if it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. For testing and validation, a D-Wave 2X device was used, as well as QxBranch’s QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but does not scale well, and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints onto those architectures is needed to realize those commercial benefits.
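
As an illustration of the 1-hot encoding mentioned above, the sketch below (not drawn from the paper itself) shows how a single discrete variable with k allowed values can be mapped to k binary variables plus a quadratic penalty of the form penalty*(sum of q_i - 1)^2, which is the standard way such constraints enter a QUBO before embedding; the penalty weight and matrix layout are illustrative assumptions.

```python
import numpy as np

def one_hot_qubo(num_values, penalty=2.0):
    """QUBO matrix for the 1-hot constraint penalty * (sum_i q_i - 1)^2.

    A discrete variable x in {0, ..., num_values - 1} is represented by
    num_values binary variables q_i, exactly one of which should equal 1.
    Expanding the penalty (and dropping its constant term) gives -penalty
    on the diagonal and +penalty on each symmetric off-diagonal entry.
    """
    Q = np.full((num_values, num_values), penalty)
    np.fill_diagonal(Q, -penalty)
    return Q

def constraint_energy(q, Q, penalty=2.0):
    """q^T Q q plus the dropped constant, so feasible assignments score 0."""
    q = np.asarray(q, dtype=float)
    return float(q @ Q @ q) + penalty

Q = one_hot_qubo(4)                        # a discrete variable with 4 values
print(constraint_energy([0, 1, 0, 0], Q))  # 0.0 -> feasible assignment
print(constraint_energy([1, 1, 0, 0], Q))  # 2.0 -> penalized assignment
```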

Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard

Procedia PDF Downloads 494
176 Shaped Crystal Growth of Fe-Ga and Fe-Al Alloy Plates by the Micro Pulling down Method

Authors: Kei Kamada, Rikito Murakami, Masahiko Ito, Mototaka Arakawa, Yasuhiro Shoji, Toshiyuki Ueno, Masao Yoshino, Akihiro Yamaji, Shunsuke Kurosawa, Yuui Yokota, Yuji Ohashi, Akira Yoshikawa

Abstract:

Energy harvesting techniques have been widely developed in recent years, due to the high demand for power supplies for ‘Internet of Things’ devices such as wireless sensor nodes. In these applications, techniques for converting mechanical vibration energy into electrical energy using magnetostrictive materials have attracted attention. Among magnetostrictive materials, Fe-Ga and Fe-Al alloys are attractive due to figures of merit such as price, mechanical strength, and a high magnetostriction constant. Up to now, bulk crystals of these alloys have been produced by the Bridgman–Stockbarger method or the Czochralski method. Using these methods, large bulk crystals up to 2~3 inches in diameter can be grown. However, non-uniformity of chemical composition along the crystal growth direction cannot be avoided, which results in non-uniformity of the magnetostriction constant and a reduction of the production yield. The micro-pulling-down (μ-PD) method has been developed as a shaped crystal growth technique. Our group has reported shaped crystal growth of oxide and fluoride single crystals with different shapes such as rods, plates, tubes, thin fibers, etc. The advantages of this method are low segregation, due to the high growth rate and small diffusion of the melt at the solid-liquid interface, and small kerf loss, due to the near-net-shape crystal. In this presentation, we report the growth of shaped long plate crystals of Fe-Ga and Fe-Al alloys using the μ-PD method. Alloy crystals were grown by the μ-PD method using a calcium oxide crucible and an induction heating system under a nitrogen atmosphere. The bottom hole of the crucibles was 5 x 1 mm² in size. A <100>-oriented iron-based alloy was used as a seed crystal. Alloy crystal plates of 5 x 1 x 320 mm³ were successfully grown. The results of crystal growth, chemical composition analysis, magnetostrictive properties and a prototype vibration energy harvester are reported. Furthermore, continuous crystal growth using a powder supply system will be reported to minimize the chemical composition non-uniformity along the growth direction.

Keywords: crystal growth, micro-pulling-down method, Fe-Ga, Fe-Al

Procedia PDF Downloads 308
175 Multi-Objectives Genetic Algorithm for Optimizing Machining Process Parameters

Authors: Dylan Santos De Pinho, Nabil Ouerhani

Abstract:

Energy consumption of machine-tools is becoming critical for machine-tool builders and end-users because of economic, ecological and legislation-related reasons. Many machine-tool builders are seeking solutions that allow the reduction of energy consumption of machine-tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-type lathe. We employ genetic algorithms to find optimal machining parameters – the set of parameters that lead to the best trade-off between energy consumption, part quality and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed and material feed rate. These machining process parameters have been identified as the most influential ones in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm has been used. The algorithm combines three fitness functions, which are objective functions that permit the evaluation of a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the investigation of the fitness function related to energy consumption. Four different energy-consumption-related fitness functions have been investigated and compared. The first fitness function refers to the Kienzle cutting force model. The second fitness function uses the Material Removal Rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions. One fitness function uses a simple neural network to learn the relation between the process parameters and the energy consumption from experimental data. Another fitness function uses Lasso regression to determine the same relation. The goal is, then, to find out which fitness functions best predict the energy consumption of a Swiss-type machining process for a given set of machining process parameters. Once determined, these functions may be used for optimization purposes – determining the optimal machining process parameters leading to minimum energy consumption. The performance of the four fitness functions has been evaluated. The Tornos DT13 Swiss-type lathe has been used to carry out the experiments. A mechanical part including various Swiss-type machining operations has been selected for the experiments. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand. Each CNC program considers a different set of machining process parameters. During the machining process, the power consumption of the spindle is measured. All collected data are assigned to the appropriate CNC program and thus to the set of machining process parameters. The evaluation approach consists of calculating the correlation between the normalized measured power consumption and the normalized power consumption prediction for each of the four fitness functions. The evaluation shows that the Lasso and neural network fitness functions have the highest correlation coefficient, at 97%. The fitness function “Material Removal Rate” (MRR) has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%.
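
For readers unfamiliar with learned fitness functions, the following minimal sketch (not the authors' code) shows how a Lasso-based energy fitness function could be fit from measured spindle power and scored by the correlation criterion described above; the parameter columns, data values, and regularization strength are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical experimental records, one row per CNC program:
# [depth of cut (mm), spindle speed (rpm), feed rate (mm/rev)] -> measured power (W)
X = np.array([[0.5, 8000, 0.02],
              [1.0, 6000, 0.04],
              [1.5, 9000, 0.03],
              [2.0, 7000, 0.05]])
y = np.array([310.0, 420.0, 515.0, 640.0])

# Lasso-based fitness function: predicts power for a candidate parameter set
# inside the genetic algorithm (lower predicted power = better fitness).
model = make_pipeline(StandardScaler(), Lasso(alpha=0.1))
model.fit(X, y)

def energy_fitness(params):
    """Predicted spindle power for one set of machining parameters."""
    return float(model.predict(np.asarray(params).reshape(1, -1))[0])

# Evaluation criterion from the abstract: correlation between normalized
# measured power and normalized predicted power.
pred = model.predict(X)
norm = lambda v: (v - v.mean()) / v.std()
print(np.corrcoef(norm(y), norm(pred))[0, 1])
```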

Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameters optimization

Procedia PDF Downloads 123
174 Rhizobium leguminosarum: Selecting Strain and Exploring Delivery Systems for White Clover

Authors: Laura Villamizar, David Wright, Claudia Baena, Marie Foxwell, Maureen O'Callaghan

Abstract:

Leguminous crops can be self-sufficient for their nitrogen requirements when their roots are nodulated with an effective Rhizobium strain, and for this reason seed or soil inoculation is practiced worldwide to ensure nodulation and nitrogen fixation in grain and forage legumes. The most widely used method of applying commercially available inoculants is peat cultures which are coated onto seeds prior to sowing. In general, rhizobia survive well in peat, but some species die rapidly after inoculation onto seeds. The development of improved formulation methodology is essential to achieve extended persistence of rhizobia on seeds, and improved efficacy. Formulations can be solid or liquid. The most popular solid formulations or delivery systems are wettable powders (WP), water dispersible granules (WG), and granules (DG). Liquid formulations generally are suspension concentrates (SC) or emulsifiable concentrates (EC). In New Zealand, R. leguminosarum bv. trifolii strain TA1 has been used as a commercial inoculant for white clover over wide areas for many years. Seed inoculation is carried out by mixing the seeds with inoculated peat, adherents and lime, but rhizobial populations on stored seeds decline over several weeks due to a number of factors including desiccation and antibacterial compounds produced by the seeds. In order to develop a more stable and suitable delivery system to incorporate rhizobia into pastures, two strains of R. leguminosarum (TA1 and CC275e) and several formulations and processes were explored (peat granules, self-sticky peat for seed coating, emulsions and a powder containing spray-dried microcapsules). Emulsions prepared with fresh broth of strain TA1 were very unstable under storage and after seed inoculation. Formulations where inoculated peat was used as the active ingredient were significantly more stable than those prepared with fresh broth. The strain CC275e was more tolerant to the stress conditions generated during formulation and seed storage. Peat granules and peat-inoculated seeds using strain CC275e maintained an acceptable loading of 10⁸ CFU/g of granules or 10⁵ CFU/g of seeds, respectively, during six months of storage at room temperature. Strain CC275e inoculated on peat was also microencapsulated with a natural biopolymer by spray drying, and after optimizing the operational conditions, microparticles containing 10⁷ CFU/g with a mean particle size between 10 and 30 micrometers were obtained. Survival of rhizobia during storage of the microcapsules is being assessed. The development of a stable product depends on selecting an active ingredient (microorganism) robust enough to tolerate the adverse conditions generated during formulation, storage, and commercialization and after its use in the field. However, the design and development of an adequate formulation, using compatible ingredients, optimization of the formulation process and selection of the appropriate delivery system is possibly the best tool to overcome the poor survival of rhizobia and provide farmers with better quality inoculants to use.

Keywords: formulation, Rhizobium leguminosarum, storage stability, white clover

Procedia PDF Downloads 132
173 Regression-Based Approach for Development of a Cuff-Less Non-Intrusive Cardiovascular Health Monitor

Authors: Pranav Gulati, Isha Sharma

Abstract:

Hypertension and hypotension are known to have repercussions on the health of an individual, with hypertension contributing to an increased risk of cardiovascular disease and hypotension resulting in syncope. This prompts the development of a non-invasive, non-intrusive, continuous and cuff-less blood pressure monitoring system to detect blood pressure variations and to identify individuals with acute and chronic heart ailments; due to the unavailability of such devices for practical daily use, it remains difficult to screen and subsequently regulate blood pressure. The complexities which hamper the steady monitoring of blood pressure comprise the variations in physical characteristics from individual to individual and the postural differences at the site of monitoring. We propose to develop a continuous, comprehensive cardio-analysis tool based on reflective photoplethysmography (PPG). The proposed device, in the form of eyewear, captures the PPG signal and estimates the systolic and diastolic blood pressure using a sensor positioned near the temporal artery. This system relies on regression models which are based on the extraction of key points from a pair of PPG wavelets. The proposed system provides an edge over existing wearables considering that it allows for uniform contact and pressure with the temporal site, in addition to minimal disturbance by movement. Additionally, the feature extraction algorithms enhance the integrity and quality of the extracted features by reducing unreliable data sets. We tested the system with 12 subjects, of which 6 served as the training dataset. For this, we measured the blood pressure using a cuff-based BP monitor (Omron HEM-8712) and at the same time recorded the PPG signal from our cardio-analysis tool. The complete test was conducted by using the cuff-based blood pressure monitor on the left arm while the PPG signal was acquired from the temporal site on the left side of the head. This acquisition served as the training input for the regression model on the selected features. The other 6 subjects were used to validate the model by conducting the same test on them. Results show that the developed prototype can robustly acquire the PPG signal and can therefore be used to reliably predict blood pressure levels.
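
A minimal sketch of the regression step is given below; it is not the authors' model, and the PPG features, calibration values, and the choice of a plain linear regressor are stand-in assumptions used only to illustrate mapping wavelet-derived features to cuff-referenced systolic/diastolic pressure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-beat features extracted from a pair of PPG wavelets,
# e.g. [peak-to-peak interval (s), rise time (s), pulse amplitude (a.u.)].
features = np.array([[0.82, 0.18, 1.05],
                     [0.79, 0.17, 1.10],
                     [0.91, 0.21, 0.95],
                     [0.75, 0.16, 1.20],
                     [0.88, 0.20, 1.00],
                     [0.80, 0.18, 1.08]])
# Reference cuff readings recorded at the same time (systolic, diastolic), mmHg.
bp_cuff = np.array([[122, 81], [125, 83], [114, 75],
                    [130, 86], [117, 77], [124, 82]])

# One multi-output linear model stands in for the regression models trained
# on the 6-subject training set described in the abstract.
model = LinearRegression().fit(features, bp_cuff)

# Validation on held-out subjects would compare predictions to cuff readings.
print(model.predict(np.array([[0.84, 0.19, 1.02]])))  # [[systolic, diastolic]]
```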

Keywords: blood pressure, photoplethysmograph, eyewear, physiological monitoring

Procedia PDF Downloads 249
172 Performance Evaluation of Various Displaced Left Turn Intersection Designs

Authors: Hatem Abou-Senna, Essam Radwan

Abstract:

With increasing traffic and limited resources, accommodating left-turning traffic has been a challenge for traffic engineers as they seek a balance between intersection capacity and safety; these are two conflicting goals in the operation of a signalized intersection that are mitigated through signal phasing techniques. Hence, to increase the left-turn capacity and reduce the delay at intersections, the Florida Department of Transportation (FDOT) is moving forward with a vision of optimizing intersection control using innovative intersection designs through the Transportation Systems Management & Operations (TSM&O) program. These alternative designs successfully eliminate the left-turn phase, which otherwise reduces the conventional intersection’s (CI) efficiency considerably, and divide the intersection into smaller networks that operate in a one-way fashion. This study focused on Crossover Displaced Left-turn intersections (XDL), also known as Continuous Flow Intersections (CFI). The XDL concept is best suited for intersections with moderate to high overall traffic volumes, especially those with very high or unbalanced left-turn volumes. There is little guidance on determining whether partial XDL intersections are adequate to mitigate the overall intersection condition or whether a full XDL is always required. The primary objective of this paper was to evaluate the overall intersection performance for different partial XDL designs compared to a full XDL. The XDL alternative was investigated for 4 different scenarios: partial XDL on the east-west approaches, partial XDL on the north-south approaches, partial XDL on the north and east approaches, and full XDL on all 4 approaches. Also, the impact of increasing volume on the intersection performance was considered by modeling the unbalanced volumes in 10% increments, resulting in 5 different traffic scenarios. The study intersection, located in Orlando, Florida, is experiencing recurring congestion in the PM peak hour and is operating near capacity, with a volume-to-capacity ratio close to 1.00, due to the presence of two heavy conflicting movements: southbound and westbound. The results showed that a partial EN XDL alternative proved to be effective and compared favorably to a full XDL alternative, followed by the partial EW XDL alternative. The analysis also showed that the full, EW and EN XDL alternatives outperformed the NS XDL and the CI alternatives with respect to throughput, delay and queue lengths. Throughput improvements were most remarkable at the higher volume level, with a 25% increase in capacity. The percent reduction in delay for the critical movements in the XDL scenarios compared to the CI scenario ranged from 30-45%. Similarly, queue lengths in the XDL scenarios showed percent reductions ranging from 25-40%. The analysis revealed how a partial XDL design can improve the overall intersection performance at various demands, reduce the costs associated with a full XDL, and outperform the conventional intersection. However, a partial XDL serving low volumes or only one of the critical movements, while other critical movements are operating near or above capacity, does not provide significant benefits when compared to the conventional intersection.

Keywords: continuous flow intersections, crossover displaced left-turn, microscopic traffic simulation, transportation system management and operations, VISSIM simulation model

Procedia PDF Downloads 285
171 Tactile Sensory Digit Feedback for Cochlear Implant Electrode Insertion

Authors: Yusuf Bulale, Mark Prince, Geoff Tansley, Peter Brett

Abstract:

A cochlear implant (CI), the implantation of which has become a routine procedure over the last decades, is an electronic device that provides a sense of sound for patients who are severely or profoundly deaf. Today, cochlear implantation technology uses an electrode array (EA) implanted manually into the cochlea. The optimal success of this implantation depends on the electrode technology and deep insertion techniques. However, this manual insertion procedure may cause mechanical trauma which can lead to severe destruction of the delicate intracochlear structure. Accordingly, future improvement of cochlear electrode implant insertion requires a reduction of the excessive force applied during cochlear implantation, which causes tissue damage and trauma. This study examined the tool-tissue interaction of a large-scale prototype digit embedded with a distributive tactile sensor, based upon a cochlear electrode, and a large-scale prototype cochlea phantom simulating the human cochlea, which could inform small-scale digit requirements. The digit, with distributive tactile sensors embedded in a silicon substrate, was inserted into the cochlea phantom to measure digit/phantom interaction and the position of the digit, in order to minimize tissue and trauma damage during cochlear electrode insertion. The digit provided tactile information from the digit-phantom insertion interaction such as contact status, tip penetration, obstacles, relative shape and location, contact orientation and multiple contacts. The tests demonstrated that even devices of such a relatively simple design and low cost have the potential to improve cochlear implant surgery and other lumen mapping applications by providing tactile sensory feedback information and thus controlling the insertion through sensing and control of the tip of the implant during the insertion. In that approach, the surgeon could minimize the tissue damage and potential damage to the delicate structures within the cochlea caused by the current manual electrode insertion in cochlear implantation. This approach can also be applied to other minimally invasive surgery applications as well as diagnosis and path navigation procedures.

Keywords: cochlear electrode insertion, distributive tactile sensory feedback information, flexible digit, minimally invasive surgery, tool/tissue interaction

Procedia PDF Downloads 364
170 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels

Authors: Joshua Buli, David Pietrowski, Samuel Britton

Abstract:

Processing SAR data usually requires constraints in extent in the Fourier domain as well as approximations and interpolations onto a planar surface to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground plane projection, with or without terrain as a component, all to better view SAR data in an image domain comparable to what a human would view, to ease interpretation. An alternate but computationally heavy method to make use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched-filtering, motion compensation, etc.), the data is then range compressed, and lastly, the contribution from each pulse is determined for each specific point in space by searching the time history data for the reflectivity values for each pulse, summed over the entire collection. This results in a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing have finally allowed this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase history data size and 3D point cloud size. Backprojection processing algorithms are embarrassingly parallel since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for accurate reflectivity representation of a scene. Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes any interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing will allow SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
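
The per-voxel accumulation described above can be sketched in a few lines; the simplified version below (NumPy rather than GPU code, nearest-neighbour lookup instead of interpolation, and no phase or motion-compensation terms) only illustrates why each 3D point's sum over pulses is independent and therefore maps naturally onto one GPU thread per voxel.

```python
import numpy as np

def backproject(voxels, platform_pos, range_profiles, range_bins):
    """Simplified backprojection: sum each pulse's range-compressed sample
    at the two-way delay of every 3D reference voxel.

    voxels         : (V, 3) 3D points on the reference surface (DEM, point cloud, ...)
    platform_pos   : (P, 3) antenna position for each pulse
    range_profiles : (P, R) complex range-compressed data, one row per pulse
    range_bins     : (R,)  slant range (m) of each sample, sorted ascending
    """
    image = np.zeros(len(voxels), dtype=complex)
    for pos, profile in zip(platform_pos, range_profiles):
        # Distance from this pulse's antenna position to every voxel.
        r = np.linalg.norm(voxels - pos, axis=1)
        # Nearest-neighbour lookup into the range profile (a real system would
        # interpolate and apply a matched phase term such as exp(j*4*pi*r/lambda)).
        idx = np.clip(np.searchsorted(range_bins, r), 0, len(range_bins) - 1)
        image += profile[idx]
    return image  # per-voxel complex reflectivity estimate
```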

Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization

Procedia PDF Downloads 46
169 3D-Mesh Robust Watermarking Technique for Ownership Protection and Authentication

Authors: Farhan A. Alenizi

Abstract:

Digital watermarking has evolved in the past years as an important means for data authentication and ownership protection. Image and video watermarking is well known in the field of multimedia processing; however, watermarking techniques for 3D objects have emerged as an important means for the same purposes, as 3D mesh models are in increasing use in different areas of scientific, industrial, and medical applications. Like image watermarking techniques, 3D watermarking can take place in either the space or the transform domain. Unlike images and videos, where the frames have regular structures in both the space and temporal domains, 3D objects are represented in different ways as meshes that are basically irregular samplings of surfaces; moreover, meshes can undergo a large variety of alterations which may be hard to tackle. This makes the watermarking process more challenging. While transform domain watermarking is preferable for images and videos, it is still difficult to implement for 3D meshes due to the huge number of vertices involved and the complicated topology and geometry, and hence the difficulty of performing the spectral decomposition, even though significant work has been done in the field. Spatial domain watermarking has attracted significant attention in the past years; such methods can act either on the topology or on the geometry of the model. Exploiting the statistical characteristics of 3D mesh models from both geometrical and topological aspects has been useful in hiding data; however, doing so with minimal surface distortion of the mesh has attracted significant research in the field. A 3D mesh blind watermarking technique is proposed in this research. The watermarking method depends on modifying the vertices' positions with respect to the center of the object. An optimal method will be developed to reduce the errors, minimizing the distortions that the 3D object may experience due to the watermarking process, and reducing the computational complexity due to the iterations and other factors. The technique relies on displacing the vertices' locations depending on the modification of the variances of the vertices’ norms. Statistical analyses were performed to establish the proper distributions that best fit each mesh, and hence to establish the bin sizes. Several optimizing approaches were introduced in the realms of mesh local roughness, the statistical distributions of the norms, and the displacements of the mesh centers. To evaluate the algorithm's robustness against common geometry and connectivity attacks, the watermarked objects were subjected to uniform noise, Laplacian smoothing, vertex quantization, simplification, and cropping. Experimental results showed that the approach is robust in terms of both perceptual and quantitative qualities. It was also robust against both geometry and connectivity attacks. Moreover, the probability of true-positive detection versus the probability of false-positive detection was evaluated. To validate the accuracy of the test cases, receiver operating characteristic (ROC) curves were drawn, and they have shown robustness in this respect as well. 3D watermarking is still a new field, but a promising one.
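
The following is a simplified sketch of the general idea of binning vertex norms about the object center and nudging each bin's spread to carry one watermark bit; it is not the authors' algorithm, and the equal-population binning, the scaling rule, and the embedding strength are illustrative assumptions.

```python
import numpy as np

def embed_bits(vertices, bits, strength=0.02):
    """Embed one bit per bin by nudging the variance of vertex norms.

    vertices : (N, 3) mesh vertex positions
    bits     : sequence of 0/1 watermark bits (one per bin)
    strength : fraction by which the in-bin spread is expanded or contracted
    """
    num_bins = len(bits)
    center = vertices.mean(axis=0)
    offsets = vertices - center
    norms = np.linalg.norm(offsets, axis=1)
    # Equal-population bins over the sorted norms.
    edges = np.quantile(norms, np.linspace(0, 1, num_bins + 1))
    marked = vertices.copy()
    for b, bit in enumerate(bits):
        mask = (norms >= edges[b]) & (norms <= edges[b + 1])
        mean_r = norms[mask].mean()
        # Bit 1 -> expand the spread around the bin mean, bit 0 -> contract it,
        # which raises or lowers the variance of the norms in this bin.
        scale = 1 + strength if bit else 1 - strength
        new_r = mean_r + scale * (norms[mask] - mean_r)
        marked[mask] = center + offsets[mask] * (new_r / norms[mask])[:, None]
    return marked
```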

Keywords: watermarking, mesh objects, local roughness, Laplacian Smoothing

Procedia PDF Downloads 138
168 A Long Range Wide Area Network-Based Smart Pest Monitoring System

Authors: Yun-Chung Yu, Yan-Wen Wang, Min-Sheng Liao, Joe-Air Jiang, Yuen-Chung Lee

Abstract:

This paper proposes to use a Long Range Wide Area Network (LoRaWAN) for a smart pest monitoring system targeting the oriental fruit fly (Bactrocera dorsalis), with the aim of improving the communication efficiency of the system. The oriental fruit fly is one of the main pests in Southeast Asia and the Pacific Rim. Different smart pest monitoring systems based on the Internet of Things (IoT) architecture have been developed to solve the problems of manual measurement. These systems often use Octopus II, a communication module following the 2.4 GHz IEEE 802.15.4 ZigBee specification, as sensor nodes. The Octopus II is commonly used in low-power and short-distance communication. However, energy consumption increases as the logical topology becomes more complicated in order to provide enough coverage over a large area. By comparison, LoRaWAN follows the Low Power Wide Area Network (LPWAN) specification, which targets the key requirements of IoT technology, such as secure bi-directional communication, mobility, and localization services. The LoRaWAN network has the advantages of long-range communication, high stability, and low energy consumption. The 433 MHz LoRaWAN model has two advantages over the 2.4 GHz ZigBee model: greater diffraction and less interference. In this paper, the Octopus II module is replaced by a LoRa module to increase the coverage of the monitoring system, improve the communication performance, and prolong the network lifetime. The performance of the LoRa-based system is compared with a ZigBee-based system using three indexes: the packet receiving rate, delay time, and energy consumption, and the experiments are done in different settings (e.g. distances and environmental conditions). In the distance experiment, a pest monitoring system using the two communication specifications is deployed in an area with various obstacles, such as buildings and living creatures, and the performance of the two communication specifications is examined. The experimental results show that the packet receiving rate of the LoRa-based system is 96%, which is much higher than that of the ZigBee system when the distance between any two modules is about 500 m. These results indicate the capability of a LoRaWAN-based monitoring system for long-range transmission and confirm the stability of the system.

Keywords: LoRaWAN, oriental fruit fly, IoT, Octopus II

Procedia PDF Downloads 326
167 Remote Sensing Application in Environmental Researches: Case Study of Iran Mangrove Forests Quantitative Assessment

Authors: Neda Orak, Mostafa Zarei

Abstract:

Environmental assessment is an important step in environmental management, and various methods and techniques have been produced and implemented for it. Remote sensing (RS) is widely used in many scientific and research fields such as geology, cartography, geography, agriculture, forestry, land use planning, the environment, etc. It can show cyclical changes of earth surface objects, and it can delineate the limits of earth phenomena on the basis of recorded changes and deviations in electromagnetic reflectance. This research concerns the assessment of mangrove forests using RS techniques; its aim was the quantitative analysis of the mangrove forests in the Basatin and Bidkhoon estuaries. It was carried out using Landsat satellite images from 1975-2013 matched to ground control points. These mangroves are the last distribution in the northern hemisphere, and the work can provide a good background for improving management of this important ecosystem. Landsat has provided researchers with valuable images for the detection of earth changes. This research used the MSS, TM, ETM+, and OLI sensors from 1975, 1990, 2000, and 2003-2013. Changes were studied after essential corrections such as error fixing, band combination, and georeferencing to the 2012 images as the base images, using maximum likelihood supervised classification and the IPVI index. A 2004 Google Earth image and ground points collected by GPS (2010-2012) were used to verify the changes obtained from the satellite images. Results showed that the mangrove area in Bidkhoon in 2012 was 1,119,072 m² by GPS, 1,231,200 m² by maximum likelihood supervised classification, and 1,317,600 m² by IPVI. The Basatin areas were, respectively, 466,644 m², 88,200 m², and 63,000 m². The final results show that the forests have declined naturally, and in Basatin also due to human activities. The loss was offset by planting over many years, although the trend has again been declining in recent years. Overall, satellite images show a high ability to estimate such environmental processes, and this research showed a high correlation of image-derived indexes such as IPVI and NDVI with ground control points.
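
For reference, the two vegetation indexes used above can be computed directly from the red and near-infrared bands; the sketch below uses placeholder arrays in place of the corrected Landsat bands, and the classification threshold shown is an illustrative value, not the one used in the study.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + eps)

def ipvi(nir, red, eps=1e-6):
    """Infrared Percentage Vegetation Index: NIR / (NIR + Red) = (NDVI + 1) / 2."""
    return nir / (nir + red + eps)

# Placeholder reflectance bands; in practice these come from the Landsat
# MSS/TM/ETM+/OLI scenes after radiometric and geometric correction.
red = np.random.rand(100, 100)
nir = np.random.rand(100, 100)

veg_mask = ipvi(nir, red) > 0.6   # threshold is scene-specific and would be
print(int(veg_mask.sum()))        # tuned against GPS ground control points
```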

Keywords: IPVI index, Landsat sensor, maximum likelihood supervised classification, Nayband National Park

Procedia PDF Downloads 269
166 Optimization of Territorial Spatial Functional Partitioning in Coal Resource-based Cities Based on Ecosystem Service Clusters - The Case of Gujiao City in Shanxi Province

Authors: Gu Sihao

Abstract:

The coordinated development of "ecology-production-life" in cities has received close attention from the state, and the transformation and sustainable development of resource-based cities have become a hot research topic. As an important part of China's resource-based cities, coal resource-based cities are numerous and widely distributed. However, due to the adjustment of the national energy structure and the gradual exhaustion of urban coal resources, the development vitality of coal resource-based cities is gradually declining. In many studies, the deterioration of the ecological environment in coal resource-based cities, a result of the "emphasis on economy and neglect of ecology", has become the main problem restricting their urban transformation and sustainable development. Since the 18th National Congress of the Communist Party of China (CPC), the Central Government has been deepening territorial space planning and development. On the premise of optimizing the territorial space development pattern, it has completed the demarcation of ecological protection red lines and carried out ecological zoning and ecosystem evaluation, which have become an important basis and scientific guarantee for ecological modernization and the construction of ecological civilization. Grasping a region's multiple ecosystem services is a precondition of ecosystem management; in the study of the relationships among multiple ecosystem services, ecosystem service clusters can identify the interactions between them, and ecological function zoning based on the characteristics of these clusters supports better social-ecological system management. Based on this understanding, this study optimizes the spatial functional zoning of Gujiao, a coal resource-based city, in order to provide a new theoretical basis for its sustainable development. The study starts from a detailed analysis of the characteristics and utilization of Gujiao's territorial space and uses SOFM neural networks to identify local ecosystem service clusters. Ecological function zones are then delineated according to the scope and function of each cluster so as to balance and coordinate the strength of the different ecosystem services within each zone, a relationship between the clusters and land use is established, and the functions of the territorial space within each zone are adjusted. Then, taking the characteristics of a coal resource-based city and the national territorial spatial function zoning as the driving factors of land change, a cellular automata simulation program is used to simulate the future development trend of the city under different restoration strategies. This provides relevant theories and technical methods for the "three lines" demarcation in Gujiao's territorial space planning, optimizes territorial space functions, and puts forward targeted strategies for the promotion of regional ecosystem services, providing theoretical support for the improvement of human well-being and the sustainable development of resource-based cities.
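
A minimal sketch of the cluster-identification step is given below, using the third-party MiniSom package as one possible SOFM implementation; the service indicators, grid size, and training settings are illustrative assumptions rather than the configuration used in the study.

```python
import numpy as np
from minisom import MiniSom  # one possible SOFM implementation (third-party)

# Hypothetical per-spatial-unit ecosystem service indicators, normalized to
# [0, 1]: e.g. carbon storage, water yield, food production, habitat quality,
# soil retention.
services = np.random.rand(500, 5)

# A small SOFM grid: each output node becomes a candidate service cluster.
som = MiniSom(x=3, y=3, input_len=services.shape[1],
              sigma=1.0, learning_rate=0.5, random_seed=42)
som.random_weights_init(services)
som.train_random(services, num_iteration=5000)

# Assign every spatial unit to the grid node (cluster) of its best matching unit.
winners = [som.winner(row) for row in services]
labels = {node: i for i, node in enumerate(sorted(set(winners)))}
cluster_ids = np.array([labels[n] for n in winners])
# cluster_ids can then be mapped back to the territory to delineate functional
# zones and compare them with current land use.
```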

Keywords: coal resource-based city, territorial spatial planning, ecosystem service cluster, GMOP model, GeoSOS-FLUS model, functional zoning optimization and upgrading

Procedia PDF Downloads 36
165 Freight Time and Cost Optimization in Complex Logistics Networks, Using a Dimensional Reduction Method and K-Means Algorithm

Authors: Egemen Sert, Leila Hedayatifar, Rachel A. Rigg, Amir Akhavan, Olha Buchel, Dominic Elias Saadi, Aabir Abubaker Kar, Alfredo J. Morales, Yaneer Bar-Yam

Abstract:

The complexity of providing timely and cost-effective distribution of finished goods from industrial facilities to customers makes effective operational coordination difficult, yet effectiveness is crucial for maintaining customer service levels and sustaining a business. Logistics planning becomes increasingly complex with growing numbers of customers, varied geographical locations, the uncertainty of future orders, and sometimes extreme competitive pressure to reduce inventory costs. Linear optimization methods become cumbersome or intractable due to the large number of variables and nonlinear dependencies involved. Here we develop a complex systems approach to optimizing logistics networks based upon dimensional reduction methods and apply our approach to a case study of a manufacturing company. In order to characterize the complexity in customer behavior, we define a “customer space” in which individual customer behavior is described by only the two most relevant dimensions: the distance to production facilities over current transportation routes and the customer's demand frequency. These dimensions provide essential insight into the domain of effective strategies for customers: direct and indirect strategies. In the direct strategy, goods are sent to the customer directly from a production facility using box or bulk trucks. In the indirect strategy, in advance of an order by the customer, goods are shipped to an external warehouse near a customer using trains and then "last-mile" shipped by trucks when orders are placed. Each strategy applies to an area of the customer space, with an indeterminate boundary between them. Specific company policies generally determine the location of the boundary. We then identify the optimal delivery strategy for each customer by constructing a detailed model of the costs of transportation and temporary storage in a set of specified external warehouses. Customer spaces help give an aggregate view of customer behaviors and characteristics. They allow policymakers to compare customers and develop strategies based on the aggregate behavior of the system as a whole. In addition to optimization over existing facilities, we propose additional warehouse locations using customer logistics and the k-means algorithm. We apply these methods to a medium-sized American manufacturing company with a particular logistics network, consisting of multiple production facilities, external warehouses, and customers, along with three types of shipment methods (box truck, bulk truck and train). For the case study, our method forecasts 10.5% savings on yearly transportation costs and an additional 4.6% savings with three new warehouses.
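
A compact sketch of the customer-space and k-means steps is shown below; the two-dimensional customer descriptors, the rule-of-thumb strategy boundary, and the coordinates are all placeholder assumptions intended only to illustrate how indirect-strategy customers could be clustered into candidate warehouse locations.

```python
import numpy as np
from sklearn.cluster import KMeans

# Customer space: [distance to the production facility over current routes (km),
# order frequency (orders/month)] -- placeholder values.
customer_space = np.array([[120, 8], [950, 2], [880, 3], [140, 12],
                           [700, 1], [1100, 4], [90, 15], [820, 2]])

# Simple rule-of-thumb boundary between strategies (company policy in practice):
# far-away, low-frequency customers are served indirectly via a warehouse.
indirect = (customer_space[:, 0] > 500) & (customer_space[:, 1] < 6)

# Geographic coordinates (lon, lat) of all customers; the indirect-strategy
# subset is clustered to propose candidate external warehouse locations.
customer_xy = np.array([[-84.3, 33.7], [-87.6, 41.8], [-90.1, 29.9], [-84.5, 33.9],
                        [-86.1, 39.7], [-95.4, 29.8], [-84.2, 33.6], [-93.3, 45.0]])
warehouses = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customer_xy[indirect])
print(warehouses.cluster_centers_)  # candidate warehouse coordinates
```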

Keywords: logistics network optimization, direct and indirect strategies, K-means algorithm, dimensional reduction

Procedia PDF Downloads 112
164 Structural and Functional Correlates of Reaction Time Variability in a Large Sample of Healthy Adolescents and Adolescents with ADHD Symptoms

Authors: Laura O’Halloran, Zhipeng Cao, Clare M. Kelly, Hugh Garavan, Robert Whelan

Abstract:

Reaction time (RT) variability on cognitive tasks provides an index of the efficiency of executive control processes (e.g. attention and inhibitory control) and is considered to be a hallmark of clinical disorders, such as attention-deficit/hyperactivity disorder (ADHD). Increased RT variability is associated with structural and functional brain differences in children and adults with various clinical disorders, as well as poorer task performance accuracy. Furthermore, the strength of functional connectivity across various brain networks, such as the negative relationship between the task-negative default mode network and task-positive attentional networks, has been found to reflect differences in RT variability. Although RT variability may provide an index of attentional efficiency, as well as being a useful indicator of neurological impairment, the brain substrates associated with RT variability remain relatively poorly defined, particularly in a healthy sample. Method: Firstly, we used the intra-individual coefficient of variation (ICV) as an index of RT variability from “Go” responses on the Stop Signal Task. We then examined the functional and structural neural correlates of ICV in a large sample of 14-year-old healthy adolescents (n=1719). Of these, a subset had elevated symptoms of ADHD (n=80) and was compared to a matched non-symptomatic control group (n=80). Brain activity during successful and unsuccessful inhibitions, as well as gray matter volume, was compared with the ICV. A mediation analysis was conducted to examine whether specific brain regions mediated the relationship between ADHD symptoms and ICV. Lastly, we looked at functional connectivity across various brain networks and quantified both positive and negative correlations during “Go” responses on the Stop Signal Task. Results: The brain data revealed that higher ICV was associated with increased structural and functional brain activation in the precentral gyrus in the whole sample and in adolescents with ADHD symptoms. Lower ICV was associated with lower activation in the anterior cingulate cortex (ACC) and medial frontal gyrus in the whole sample and in the control group. Furthermore, our results indicated that activation in the precentral gyrus (Brodmann Area 4) mediated the relationship between ADHD symptoms and behavioural ICV. Conclusion: This is the first study to investigate the functional and structural correlates of ICV collectively in a large adolescent sample. Our findings demonstrate a concurrent increase in brain structure and function within task-active prefrontal networks as a function of increased RT variability. Furthermore, structural and functional brain activation patterns in the ACC and medial frontal gyrus play a role in optimizing top-down control in order to maintain task performance. Our results also evidenced clear differences in brain morphometry between adolescents with symptoms of ADHD but without a clinical diagnosis and typically developing controls. Our findings shed light on specific functional and structural brain regions that are implicated in ICV and yield insights into effective cognitive control in healthy individuals and in clinical groups.
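
For clarity, the behavioural measure itself is simple to compute: the ICV is the standard deviation of correct Go-trial reaction times divided by their mean, as in the sketch below (the RT values are placeholders, not study data).

```python
import numpy as np

def icv(go_rts_ms):
    """Intra-individual coefficient of variation of Go reaction times:
    standard deviation divided by the mean (higher = more variable responding)."""
    rts = np.asarray(go_rts_ms, dtype=float)
    return rts.std(ddof=1) / rts.mean()

# Hypothetical correct Go-trial RTs (ms) from one Stop Signal Task run.
print(round(icv([412, 388, 455, 401, 530, 395, 478, 420]), 3))
```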

Keywords: ADHD, fMRI, reaction-time variability, default mode, functional connectivity

Procedia PDF Downloads 227
163 Alternative Energy and Carbon Source for Biosurfactant Production

Authors: Akram Abi, Mohammad Hossein Sarrafzadeh

Abstract:

Because of their several advantages over chemical surfactants, biosurfactants have given rise to growing interest in the past decades. Advantages such as lower toxicity, higher biodegradability, higher selectivity and applicability at extreme temperatures and pH enable them to be used in a variety of applications such as enhanced oil recovery, environmental and pharmaceutical applications, etc. Bacillus subtilis produces a cyclic lipopeptide, called surfactin, which is one of the most powerful biosurfactants, with the ability to decrease the surface tension of water from 72 mN/m to 27 mN/m. In addition to its biosurfactant character, surfactin exhibits interesting biological activities such as inhibition of fibrin clot formation, lysis of erythrocytes and several bacterial spheroplasts, and antiviral, anti-tumoral and antibacterial properties. Surfactin is an antibiotic substance and has recently been shown to possess anti-HIV activity. However, the application of biosurfactants is limited by their high production cost. The cost can be reduced by optimizing biosurfactant production using cheap feedstock. Utilization of inexpensive substrates and unconventional carbon sources like urban or agro-industrial wastes is a promising strategy to decrease the production cost of biosurfactants. With suitable engineering optimization and microbiological modifications, these wastes can be used as substrates for large-scale production of biosurfactants. As an effort to fulfill this purpose, in this work we have tried to utilize olive oil as a second carbon source and yeast extract as a second nitrogen source to investigate the effect on both biomass and biosurfactant production in Bacillus subtilis cultures. Since the turbidity of the culture was affected by the presence of the oil, optical density was compromised and could no longer be used as an index of growth and biomass concentration. Therefore, cell dry weight measurements, with the necessary steps taken to remove oil drops and prevent interference with the biomass weight, were carried out to monitor biomass concentration during the growth of the bacterium. The surface tension and critical micelle dilutions (CMD-1, CMD-2) were considered as indirect measurements of biosurfactant production. Distinctive and promising results were obtained in the cultures containing olive oil compared to cultures without it: a more than twofold increase in biomass production (from 2 g/l to 5 g/l) and a considerable reduction in surface tension, down to 40 mN/m, at surprisingly early hours of culture time (only 5 h after inoculation). This early onset of biosurfactant production in this culture is especially interesting when compared to the conventional cultures, in which this reduction in surface tension is not obtained until 30 hours of culture time. Reducing the production time is a very prominent result to be considered for large-scale process development. Furthermore, these results can be used to develop strategies for utilization of agro-industrial wastes (such as olive oil mill residue, molasses, etc.) as cheap and easily accessible feedstocks to decrease the high costs of biosurfactant production.

Keywords: agro-industrial waste, Bacillus subtilis, biosurfactant, fermentation, second carbon and nitrogen source, surfactin

Procedia PDF Downloads 270
162 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV

Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran

Abstract:

Ortho-rectification is the process of geometrically correcting an aerial image such that the scale is uniform. The ortho-image formed from the process is corrected for lens distortion, topographic relief, and camera tilt. It can be used to measure true distances, because it is an accurate representation of the Earth’s surface. Ortho-rectification and geo-referencing are essential to pinpoint the exact location of targets in video imagery acquired at the UAV platform. This can only be achieved by comparing such video imagery with an existing digital map. However, it is only when the image is ortho-rectified with the same co-ordinate system as an existing map that such a comparison is possible. The video image sequences from the UAV platform must be geo-registered, that is, each video frame must carry the necessary camera information before performing the ortho-rectification process. Each rectified image frame can then be mosaicked together to form a seamless image map covering the selected area. This can then be compared with an existing map for geo-referencing. In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic procedures for the real-time ortho-rectification are: (1) decompilation of the video stream into individual frames; (2) finding the interior camera orientation parameters; (3) finding the relative exterior orientation parameters of the video frames with respect to each other; (4) finding the absolute exterior orientation parameters, using self-calibration adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a 2-D planimetric mapping, which can be compared with a well-referenced existing digital map for the purpose of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria was used for testing our method. Fifteen minutes of video and telemetry data were collected using the UAV, and the data collected were processed using the four-step ortho-rectification procedure. The results demonstrated that geometric measurements of the control field from ortho-images are more reliable than those from the original perspective photographs when used to pinpoint the exact location of targets on the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with the 6 control points measured by a GPS receiver, is between 3 and 5 meters.
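
Steps (1) and (2) of the procedure can be sketched with standard OpenCV calls, as below; the video file name, camera matrix, and distortion coefficients are placeholder values, not the calibration actually used for the UAV camera.

```python
import cv2
import numpy as np

# Step 1: decompile the video stream into individual frames.
cap = cv2.VideoCapture("uav_video.mp4")   # placeholder file name
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()

# Step 2: interior orientation -- camera matrix and lens distortion
# coefficients obtained from camera calibration (placeholder values).
K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.21, 0.08, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

# Remove lens distortion from every frame before the exterior orientation,
# rectification, and mosaicking of steps (3) and (4).
undistorted = [cv2.undistort(f, K, dist) for f in frames]
```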

Keywords: geo-referencing, ortho-rectification, video frame, self-calibration

Procedia PDF Downloads 460
161 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data

Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora

Abstract:

Optimizing the drilling process for cost and efficiency requires the optimization of the rate of penetration (ROP). ROP is the measurement of the speed at which the wellbore is created, in units of feet per hour. It is the primary indicator for measuring drilling efficiency. Maximization of the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model prior, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase of the system, geological and historical drilling data are aggregated. Then, the top-rated wells, in terms of high ROP instances, are identified. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value. This phase is performed before the commencement of drilling. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering live adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. These minor incremental variations reveal new drilling conditions not explored before through offset wells. The data are then consolidated into a heat-map as a function of ROP. A more optimal ROP performance is identified through the heat-map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments resulted in ROP efficiency improvements of over 20%, translating to at least a 10% saving in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
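
The IDW step from phase one can be written compactly; the sketch below is a generic implementation under the assumption that the conditioning is done on physical distance to the offset wells, and the well coordinates and parameter values shown are placeholders.

```python
import numpy as np

def idw(query_xy, well_xy, well_params, power=2, eps=1e-9):
    """Inverse Distance Weighting of offset-well drilling parameters.

    query_xy    : (2,)  location (e.g. surface coordinates) of the new well
    well_xy     : (W, 2) locations of the top-performing offset wells
    well_params : (W, P) parameter values per well, e.g. [WOB, RPM, GPM]
    """
    d = np.linalg.norm(well_xy - query_xy, axis=1) + eps
    w = 1.0 / d**power
    return (w[:, None] * well_params).sum(axis=0) / w.sum()

# Placeholder offset wells: coordinates (km) and their best [WOB, RPM, GPM].
well_xy = np.array([[0.0, 0.0], [1.2, 0.4], [0.3, 2.1]])
well_params = np.array([[25.0, 120.0, 800.0],
                        [28.0, 110.0, 760.0],
                        [22.0, 130.0, 820.0]])
print(idw(np.array([0.6, 0.8]), well_xy, well_params))  # conditioned-mean setpoints
```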

Keywords: drilling optimization, geological formations, machine learning, rate of penetration

Procedia PDF Downloads 101
160 Geomatic Techniques to Filter Vegetation from Point Clouds

Authors: M. Amparo Núñez-Andrés, Felipe Buill, Albert Prades

Abstract:

More and more frequently, geomatic techniques such as terrestrial laser scanning or digital photogrammetry, either terrestrial or from drones, are being used to obtain digital terrain models (DTM) for monitoring the geological phenomena that cause natural disasters, such as landslides, rockfalls, and debris flows. One of the main multitemporal analyses developed from these models is the quantification of volume changes in the slopes and hillsides, caused either by erosion, fall, or land movement in the source area or by sedimentation in the deposition zone. To carry out this task, it is necessary to filter from the point clouds all those elements that do not belong to the slopes. Among these elements, vegetation stands out, as it is the one with the greatest presence and constant change, both seasonal and daily, since it is affected by factors such as wind. One of the best-known indexes to detect vegetation in an image is the NDVI (Normalized Difference Vegetation Index), which is obtained from the combination of the infrared and red channels; it therefore requires a multispectral camera. These cameras are generally of lower resolution than conventional RGB cameras, while their cost is much higher. Therefore we have to look for alternative indices based on RGB. In this communication, we present the results obtained in the Georisk project (PID2019‐103974RB‐I00/MCIN/AEI/10.13039/501100011033) by using the GLI (Green Leaf Index) and ExG (Excess Green), as well as the change to the Hue-Saturation-Value (HSV) color space, with the H coordinate being the one that gives us the most information for vegetation filtering. These filters are applied both to the images, creating binary masks to be used when applying the SfM algorithms, and to the point cloud obtained directly by the photogrammetric process without any previous filter, or to the one obtained by TLS (Terrestrial Laser Scanning). In this last case, we have also worked with a Riegl VZ400i sensor that allows the reception, as in aerial LiDAR, of several returns of the signal, information that can be used for classification of the point cloud. After applying all the techniques in different locations, the results show that the color-based filters allow correct filtering in those areas where the presence of shadows is not excessive and there is contrast between the color of the slope lithology and the vegetation. As noted above, in the case of the HSV color space it is the H coordinate that responds best for this filtering. Finally, the use of the various returns of the TLS signal allows filtering with some limitations.
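
For reference, the RGB-based indexes and the HSV hue mask mentioned above can be computed as in the sketch below; the image name and all thresholds are illustrative, scene-dependent assumptions rather than the values used in the Georisk project.

```python
import numpy as np
import cv2

def gli(r, g, b, eps=1e-6):
    """Green Leaf Index: (2G - R - B) / (2G + R + B)."""
    return (2 * g - r - b) / (2 * g + r + b + eps)

def exg(r, g, b, eps=1e-6):
    """Excess Green: 2g - r - b on chromaticity-normalized channels."""
    s = r + g + b + eps
    return 2 * g / s - r / s - b / s

img = cv2.imread("slope_photo.jpg")   # placeholder image, BGR, uint8
b, g, r = [img[..., i].astype(np.float32) for i in range(3)]

# Binary vegetation masks from the RGB indexes (thresholds tuned per site).
mask_gli = gli(r, g, b) > 0.05
mask_exg = exg(r, g, b) > 0.10

# HSV alternative: keep pixels whose hue falls in the green range.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask_h = (hsv[..., 0] > 35) & (hsv[..., 0] < 85)   # OpenCV hue range is 0-179
```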

Keywords: RGB index, TLS, photogrammetry, multispectral camera, point cloud

Procedia PDF Downloads 105
159 The Second Generation of Tyrosine Kinase Inhibitor Afatinib Controls Inflammation by Regulating NLRP3 Inflammasome Activation

Authors: Shujun Xie, Shirong Zhang, Shenglin Ma

Abstract:

Background: Chronic inflammation can lead to many malignancies, and its inadequate resolution could play a crucial role in tumor invasion, progression, and metastasis. A randomised, double-blind, placebo-controlled trial showed that IL-1β inhibition with canakinumab could reduce incident lung cancer and lung cancer mortality in patients with atherosclerosis. The processing and secretion of the proinflammatory cytokine IL-1β are controlled by the inflammasome. Here we show the connection between the innate immune system and afatinib, a tyrosine kinase inhibitor targeting the epidermal growth factor receptor (EGFR), in non-small cell lung cancer. Methods: Murine bone marrow derived macrophages (BMDMs), peritoneal macrophages (PMs) and THP-1 cells were used to check the effect of afatinib on the activation of the NLRP3 inflammasome. The assembly of the NLRP3 inflammasome was checked by co-immunoprecipitation of NLRP3 and apoptosis-associated speck-like protein containing CARD (ASC), and by disuccinimidyl suberate (DSS) cross-linking of ASC. Lipopolysaccharide (LPS)-induced sepsis and alum-induced peritonitis models were used to confirm that afatinib could inhibit the activation of NLRP3 in vivo. Peripheral blood mononuclear cells (PBMCs) from non-small cell lung cancer (NSCLC) patients before or after taking afatinib were used to check that afatinib inhibits inflammation in NSCLC therapy. Results: Our data showed that afatinib could inhibit the secretion of IL-1β in macrophages in a dose-dependent manner. Moreover, afatinib could inhibit the maturation of IL-1β and caspase-1 without affecting the precursors of IL-1β and caspase-1. Next, we found that afatinib could block the assembly of the NLRP3 inflammasome and the ASC speck by blocking the interaction of the sensor protein NLRP3 and the adaptor protein ASC. We also found that afatinib was able to alleviate LPS-induced sepsis in vivo. Conclusion: Our study found that afatinib could inhibit the activation of the NLRP3 inflammasome in macrophages, providing new evidence that afatinib could target the innate immune system to control chronic inflammation. These investigations provide significant experimental evidence for afatinib as a therapeutic drug for non-small cell lung cancer, other tumors, and NLRP3-related diseases, and will help identify new targets for afatinib.

Keywords: inflammasome, afatinib, inflammation, tyrosine kinase inhibitor

Procedia PDF Downloads 96
158 A Hybrid of BioWin and Computational Fluid Dynamics Based Modeling of Biological Wastewater Treatment Plants for Model-Based Control

Authors: Komal Rathore, Kiesha Pierre, Kyle Cogswell, Aaron Driscoll, Andres Tejada Martinez, Gita Iranipour, Luke Mulford, Aydin Sunol

Abstract:

Modeling of biological wastewater treatment plants requires several parameters for kinetic rate expressions, thermo-physical properties, and hydrodynamic behavior. The kinetics and associated mechanisms become complex due to several biological processes taking place in wastewater treatment plants at varying time and spatial scales. A dynamic process model that incorporates a complex model for activated sludge kinetics was developed using the BioWin software platform for an advanced wastewater treatment plant in Valrico, Florida. Due to the extensive number of tunable parameters, an experimental design was employed for judicious selection of the most influential parameter sets and their bounds. The model was tuned using both the influent and effluent plant data to reconcile and rectify the forecasted results from the BioWin model. The amount of mixed liquor suspended solids in the oxidation ditch, aeration rates, and recycle rates were adjusted accordingly. The experimental analysis and plant SCADA data were used to predict influent wastewater rates and composition profiles as a function of time for extended periods. The lumped dynamic model development process was coupled with Computational Fluid Dynamics (CFD) modeling of key units such as the oxidation ditches in the plant. Several CFD models that incorporate nitrification-denitrification kinetics as well as hydrodynamics were developed and tested using the ANSYS Fluent software platform. These realistic and verified models, developed using BioWin and ANSYS, were used to plan operating policies and control strategies for the biological wastewater plant in advance, which further allows regulatory compliance at minimum operational cost. With modest retuning, these models can be used for other biological wastewater treatment plants as well. The BioWin model mimics the existing performance of the Valrico plant, which allowed the operators and engineers to predict effluent behavior and take control actions to meet the discharge limits of the plant. With the help of this model, we were also able to identify the key kinetic and stoichiometric parameters that matter most for modeling of biological wastewater treatment plants. Another important finding from this model was the effect of mixed liquor suspended solids and recycle ratios on the effluent concentration of various parameters such as total nitrogen, ammonia, nitrate, and nitrite. The ANSYS model revealed, for example, that dead zones increasingly form along the length of the oxidation ditches compared with the regions near the aerators. These profiles were also very useful in studying the behavior of mixing patterns, the effect of aerator speed, and the use of baffles, which in turn helps in optimizing the plant performance.
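As an illustration of the parameter screening idea described above, the sketch below performs a one-at-a-time perturbation of a few kinetic parameters around a baseline; the run_plant_model() wrapper is a hypothetical stand-in for a BioWin simulation run, and the parameter names, bounds, and coefficients are assumptions for illustration only.

```python
# Illustrative one-at-a-time screening of kinetic parameters; run_plant_model()
# is a hypothetical stand-in for a plant model evaluation, and the parameter
# names, baseline values, and bounds are assumptions.
import numpy as np

def run_plant_model(params):
    # Placeholder: would return predicted effluent total nitrogen (mg/L)
    # from the calibrated plant model for the given parameter dict.
    return 8.0 + 0.5 * params["mu_aob"] - 1.2 * params["b_h"] + 0.3 * params["y_h"]

baseline = {"mu_aob": 0.9, "b_h": 0.62, "y_h": 0.666}     # illustrative values
bounds = {"mu_aob": (0.5, 1.2), "b_h": (0.4, 0.8), "y_h": (0.6, 0.7)}

ref = run_plant_model(baseline)
for name, (lo, hi) in bounds.items():
    effects = []
    for value in (lo, hi):
        trial = dict(baseline, **{name: value})           # perturb one parameter
        effects.append(run_plant_model(trial) - ref)
    print(f"{name}: effluent TN change {min(effects):+.2f} to {max(effects):+.2f} mg/L")
```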

Keywords: computational fluid dynamics, flow-sheet simulation, kinetic modeling, process dynamics

Procedia PDF Downloads 176
157 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads

Authors: Raja Umer Sajjad, Chang Hee Lee

Abstract:

Stormwater runoff is the leading contributor to pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates caused by sampling errors, since the success of a monitoring program depends mainly on the accuracy of the estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012–2014) from a mixed land use catchment within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed through the year. The investigation of a large number of water quality parameters is time-consuming and resource intensive. In order to identify a suite of easy-to-measure parameters to act as surrogates, Principal Component Analysis (PCA) was applied. Means, standard deviations, coefficients of variation (CV), and other simple statistics were computed using the statistical analysis software SPSS 22.0. The implications of sampling time for monitoring results, the number of samples required during a storm event, and the impact of seasonal first flush were also identified. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus, and heavy metals such as lead, chromium, and copper, whereas Chemical Oxygen Demand (COD) was identified as a surrogate for organic matter. The CVs among the monitored water quality parameters were found to be high (ranging from 3.8 to 15.5), which suggests that using a grab sampling design to estimate mass emission rates in the study area can lead to errors due to large variability. The TSS discharge load calculation error was only 2% between two different sample size approaches, i.e., 17 samples per storm event and 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that collecting a grab sample after the initial hour of the storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.
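A minimal sketch of the surrogate-screening step is shown below, using scikit-learn PCA loadings, coefficients of variation, and a correlation matrix; the file name and parameter columns are assumptions, not the actual Geumhak dataset.

```python
# Sketch of the surrogate screening step: PCA loadings plus a correlation
# matrix on monitored parameters. File name and column names are assumed.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("storm_events.csv")          # one row per sample
params = ["TSS", "Turbidity", "TP", "Pb", "Cr", "Cu", "COD", "BOD"]
X = StandardScaler().fit_transform(df[params])

pca = PCA(n_components=2).fit(X)
loadings = pd.DataFrame(pca.components_.T, index=params, columns=["PC1", "PC2"])
print(loadings.round(2))                      # parameters loading together group on the biplot

# Coefficient of variation and correlation with the candidate surrogate TSS
cv = df[params].std() / df[params].mean()
print(cv.round(1))
print(df[params].corr()["TSS"].round(2))
```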

Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters

Procedia PDF Downloads 213
156 Proposal of a Rectenna Built by Using Paper as a Dielectric Substrate for Electromagnetic Energy Harvesting

Authors: Ursula D. C. Resende, Yan G. Santos, Lucas M. de O. Andrade

Abstract:

The recent and fast development of the internet, wireless and telecommunication technologies, and low-power electronic devices has led to a considerable amount of electromagnetic energy being available in the environment and to the expansion of smart applications technology. These applications have been used in Internet of Things devices and in 4G and 5G solutions. The main feature of this technology is the use of wireless sensors. Although these sensors are low-power loads, supplying them with power efficiently and reliably, while avoiding traditional batteries, poses huge challenges. Radio frequency energy harvesting is especially suitable for wirelessly powering sensors by using a rectenna, since it can be completely integrated into the structure hosting the distributed sensors, reducing cost, maintenance, and environmental impact. A rectenna is a device composed of an antenna and a rectifier circuit. The antenna's function is to collect as much radio frequency radiation as possible and transfer it to the rectifier, which is a nonlinear circuit that converts the very low input radio frequency energy into a direct current voltage. In this work, a set of rectennas mounted on a paper substrate, which can be used as an inner coating of buildings and simultaneously harvest electromagnetic energy from the environment, is proposed. Each proposed individual rectenna is composed of a 2.45 GHz patch antenna and a voltage doubler rectifier circuit, built on the same paper substrate. The antenna contains a rectangular radiating element and a microstrip transmission line that was designed and optimized using the CST simulation software in order to obtain S11 values below -10 dB at 2.45 GHz. In order to increase the amount of harvested power, eight individual rectennas, incorporating metamaterial cells, were connected in parallel forming a system denominated Electromagnetic Wall (EW). In order to evaluate the EW performance, it was positioned at a variable distance from an internet router and fed a 27 kΩ resistive load. The results obtained showed that if more than one rectenna is connected in parallel, a power level sufficient to feed very low consumption sensors can be achieved. The 0.12 m² EW proposed in this work was able to harvest 0.6 mW from the environment. It was also observed that the use of metamaterial structures provides a marked increase in the amount of electromagnetic energy harvested, which rose from 0.2 mW to 0.6 mW.
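For orientation, the sketch below applies the standard rectangular microstrip patch design equations at 2.45 GHz; the relative permittivity and thickness assumed for the paper substrate are illustrative, not the values characterized in this work.

```python
# Sketch of the standard rectangular patch design equations at 2.45 GHz.
# The paper substrate permittivity and thickness below are assumed values,
# not the ones measured in the project.
import math

c = 299_792_458.0        # speed of light, m/s
f0 = 2.45e9              # design frequency, Hz
eps_r = 2.7              # assumed relative permittivity of paper
h = 0.5e-3               # assumed substrate thickness, m

W = c / (2 * f0) * math.sqrt(2.0 / (eps_r + 1.0))                       # patch width
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5  # effective permittivity
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / ((eps_eff - 0.258) * (W / h + 0.8))
L = c / (2 * f0 * math.sqrt(eps_eff)) - 2 * dL                          # patch length

print(f"patch width  W ~ {W*1e3:.1f} mm")
print(f"patch length L ~ {L*1e3:.1f} mm")
```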

Keywords: electromagnetic energy harvesting, metamaterial, rectenna, rectifier circuit

Procedia PDF Downloads 131
155 Categorical Metadata Encoding Schemes for Arteriovenous Fistula Blood Flow Sound Classification: Scaling Numerical Representations Leads to Improved Performance

Authors: George Zhou, Yunchan Chen, Candace Chien

Abstract:

Kidney replacement therapy is the current standard of care for end-stage renal disease. In-center or home hemodialysis remains an integral component of the therapeutic regimen. Arteriovenous fistulas (AVF) make up the vascular circuit through which blood is filtered and returned. Naturally, AVF patency determines whether adequate clearance and filtration can be achieved and directly influences clinical outcomes. Our aim was to build a deep learning model for automated AVF stenosis screening based on the sound of blood flow through the AVF. A total of 311 patients with AVF were enrolled in this study. Blood flow sounds were collected using a digital stethoscope. For each patient, blood flow sounds were collected at 6 different locations along the patient’s AVF: artery, anastomosis, distal vein, middle vein, proximal vein, and venous arch. A total of 1866 sounds were collected. The blood flow sounds are labeled as “patent” (normal) or “stenotic” (abnormal), with the labels validated against concurrent ultrasound. Our dataset included 1527 “patent” and 339 “stenotic” sounds. We show that blood flow sounds vary significantly along the AVF; for example, the blood flow sound is loudest at the anastomosis site and softest at the cephalic arch. Contextualizing the sound with location metadata significantly improves classification performance. How to encode and incorporate categorical metadata is an active area of research. Herein, we study ordinal (i.e., integer) encoding schemes in which the numerical representation is concatenated to the flattened feature vector. We train a vision transformer (ViT) on spectrogram image representations of the sound and demonstrate that using scalar multiples of our integer encodings improves classification performance. Models are evaluated using a 10-fold cross-validation procedure. The baseline performance of our ViT without any location metadata achieves an AuROC and AuPRC of 0.68 ± 0.05 and 0.28 ± 0.09, respectively. Using the encodings Artery: 0; Arch: 1; Proximal: 2; Middle: 3; Distal: 4; Anastomosis: 5, the ViT achieves an AuROC and AuPRC of 0.69 ± 0.06 and 0.30 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 10; Proximal: 20; Middle: 30; Distal: 40; Anastomosis: 50, the ViT achieves an AuROC and AuPRC of 0.74 ± 0.06 and 0.38 ± 0.10, respectively. Using the encodings Artery: 0; Arch: 100; Proximal: 200; Middle: 300; Distal: 400; Anastomosis: 500, the ViT achieves an AuROC and AuPRC of 0.78 ± 0.06 and 0.43 ± 0.11, respectively. Interestingly, using increasing scalar multiples of our integer encoding scheme (i.e., encoding “venous arch” as 1, 10, or 100) results in progressively improved performance. In theory, the integer values should not matter since we are optimizing the same loss function; the model can learn to increase or decrease the weights associated with the location encodings and converge on the same solution. However, in the setting of limited data and computational resources, increasing the importance of the encoding at initialization either leads to faster convergence or helps the model escape a local minimum.
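A minimal sketch of the encoding step is shown below: the scaled integer location code is appended to a flattened feature vector before classification; the feature dimension, location dictionary, and scale factor are illustrative, not the exact pipeline used in the study.

```python
# Minimal sketch of concatenating a scaled integer location encoding to a
# flattened spectrogram feature vector; shapes and the scale are illustrative.
import numpy as np

LOCATION_CODE = {"artery": 0, "arch": 1, "proximal": 2,
                 "middle": 3, "distal": 4, "anastomosis": 5}
SCALE = 100                                    # 1, 10, or 100 in the experiments

def add_location(features, location):
    """Append the scaled location code to a flattened feature vector."""
    code = np.array([LOCATION_CODE[location] * SCALE], dtype=features.dtype)
    return np.concatenate([features, code])

flat_features = np.random.rand(768).astype(np.float32)   # e.g., a ViT embedding
x = add_location(flat_features, "arch")
print(x.shape, x[-1])                          # (769,) 100.0
```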

Keywords: arteriovenous fistula, blood flow sounds, metadata encoding, deep learning

Procedia PDF Downloads 58
154 Minimizing Unscheduled Maintenance from an Aircraft and Rolling Stock Maintenance Perspective: Preventive Maintenance Model

Authors: Adel A. Ghobbar, Varun Raman

Abstract:

Corrective maintenance of components and systems is a problem plaguing almost every industry in the world today. Train operators and the maintenance, repair, and overhaul subsidiary of the Dutch railway company are also facing this problem. A considerable portion of the maintenance activities carried out by the company are unscheduled, which in turn severely stresses and stretches the workforce and resources available. One possible solution is to have a robust preventive maintenance plan. The other possible solution is to plan maintenance based on real-time data obtained from sensor-based 'Health and Usage Monitoring Systems.' The former has been investigated in this paper. The preventive maintenance model developed for the train operator will subsequently be extended to tackle the unscheduled maintenance problem also affecting the aerospace industry. The extension of the model to the aerospace sector will be dealt with in the second part of the research and would, in turn, validate the soundness of the model developed. Thus, distinct areas are addressed in this paper, including the mathematical modelling of preventive maintenance and optimization based on cost and system availability. The results of this research will help an organization to choose the right maintenance strategy, allowing it to save considerable sums of money as opposed to overspending under the guise of maintaining high asset availability. The concept of delay time modelling was used to address the practical problem of unscheduled maintenance in this paper; delay time modelling can be used to support inspection planning for a given asset. The model was run using MATLAB, and the results show that the ideal inspection interval computed using the extended model was 29 days from a minimum cost perspective and 14 days from a minimum downtime perspective. A risk matrix was constructed to represent the risk in terms of the probability of a fault leading to breakdown maintenance and its consequences in terms of maintenance cost. Thus, the choice of an optimal inspection interval of 29 days resulted in a cost of approximately 50 Euros, and the corresponding value of b(T) was 0.011. These values ensure that the risk associated with component X being maintained at an inspection interval of 29 days is more than acceptable. Thus, a switch in maintenance frequency from 90 days to 29 days would be optimal from the point of view of cost, downtime, and risk.
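For context, the sketch below evaluates a basic delay-time cost-rate curve with an exponential delay-time distribution and picks the minimizing inspection interval; the defect arrival rate, mean delay time, and cost figures are assumed for illustration and are not the study's calibrated values.

```python
# Hedged numerical sketch of a basic delay-time inspection model with an
# exponential delay-time distribution; the defect arrival rate, mean delay
# time, and cost figures are assumed values for illustration only.
import numpy as np

lam = 0.02          # defect arrival rate per day (assumed)
mean_h = 20.0       # mean delay time in days (assumed)
c_insp, c_repair, c_breakdown = 10.0, 40.0, 400.0   # Euros (assumed)

def b(T):
    """Probability a defect arising in (0, T) causes a breakdown before inspection."""
    return 1.0 - (mean_h / T) * (1.0 - np.exp(-T / mean_h))

def cost_rate(T):
    """Expected cost per day for inspection interval T."""
    breakdowns = lam * T * b(T)
    repairs = lam * T * (1.0 - b(T))
    return (c_insp + repairs * c_repair + breakdowns * c_breakdown) / T

T = np.arange(1.0, 120.0)
best = T[np.argmin(cost_rate(T))]
print(f"optimal inspection interval ~ {best:.0f} days, b(T) = {b(best):.3f}")
```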

Keywords: delay time modelling, unscheduled maintenance, reliability, maintainability, availability

Procedia PDF Downloads 110
153 Cooperation of Unmanned Vehicles for Accomplishing Missions

Authors: Ahmet Ozcan, Onder Alparslan, Anil Sezgin, Omer Cetin

Abstract:

The use of unmanned systems for different purposes has become very popular over the past decade, and expectations from these systems have increased incredibly in parallel. However, meeting the demands of a task is often not possible with a single unmanned vehicle, so it is necessary to use multiple autonomous vehicles with different abilities together in coordination. Using the same type of vehicles together as a swarm helps especially to satisfy the time constraints of missions effectively; in other words, it allows the workload to be shared by a number of homogeneous platforms. Besides, many kinds of problems require the different capabilities of heterogeneous platforms to be used cooperatively to achieve successful results, and in this case, cooperative working brings additional problems beyond those of homogeneous clusters. In the scenario presented as an example problem, an autonomous ground vehicle, which lacks position information, is expected to perform point-to-point navigation without losing its way in a previously unknown labyrinth. Furthermore, the ground vehicle is equipped with very limited sensors, such as ultrasonic sensors that can detect obstacles. It is very hard for the ground vehicle to plan or complete the mission by itself without losing its way in the unknown labyrinth. Thus, in order to assist the ground vehicle, an autonomous aerial drone is also used to solve the problem cooperatively. The autonomous drone also has limited sensors, such as a downward-looking camera and an IMU, and it also cannot compute its global position. In this context, the aim is to solve the problem effectively without any additional support or input from outside, benefiting only from the capabilities of the two autonomous vehicles. To manage point-to-point navigation in a previously unknown labyrinth, the platforms have to work together in a coordinated way. In this paper, the cooperative work of heterogeneous unmanned systems is handled in an applied sample scenario, and it is shown how an autonomous ground vehicle and an autonomous flying platform can work together in harmony to take advantage of platform-specific capabilities. The difficulties of using multiple heterogeneous autonomous platforms in a mission are put forward, and successful solutions are defined and implemented against problems such as spatially distributed task planning, simultaneous coordinated motion, effective communication, and sensor fusion.
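As an illustration of the kind of cooperation described, the sketch below runs a breadth-first search over a toy occupancy grid (such as the drone's downward-looking camera could provide) to produce waypoints for the ground vehicle; the grid, start, and goal are assumptions, not the labyrinth or planner used in the experiments.

```python
# Illustrative sketch: breadth-first search over an occupancy grid (as an
# overhead camera could provide) to produce waypoints for a ground vehicle.
# The grid, start, and goal below are toy assumptions.
from collections import deque

grid = [  # 0 = free cell, 1 = wall
    [0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
start, goal = (0, 0), (4, 4)

def plan(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nxt = (nr, nc)
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in parent:
                parent[nxt] = cell
                queue.append(nxt)
    path, cell = [], goal
    while cell is not None:          # walk back from goal to start
        path.append(cell)
        cell = parent.get(cell)
    return path[::-1]

print(plan(grid, start, goal))       # waypoints the aerial platform would send
```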

Keywords: unmanned systems, heterogeneous autonomous vehicles, coordination, task planning

Procedia PDF Downloads 105
152 Tunnel Convergence Monitoring by Distributed Fiber Optics Embedded into Concrete

Authors: R. Farhoud, G. Hermand, S. Delepine-lesoille

Abstract:

The future French underground radioactive waste disposal facility, named Cigeo, is designed to store intermediate-level and high-level long-lived French radioactive waste. Intermediate-level waste cells are tunnel-like, about 400 m long with a 65 m² cross-section, and are equipped with several concrete layers, which can be grouted in situ or composed of pre-grouted tunnel elements. The operating space inside the cells, which allows waste containers to be inserted or removed, must be monitored for several decades without any maintenance. To provide the required information, a design was developed and tested in situ in Andra's underground research laboratory (URL), 500 m below the surface. Based on distributed optical fiber sensors (OFS), interrogated with Brillouin backscattering for strain and Raman backscattering for temperature, the design consists of two loops of OFS at two different radii around the monitored section (orthoradial strains) and along the tunnel axis (longitudinal strains). Strains measured by the distributed OFS cables were compared with classical vibrating wire extensometers (VWE) and platinum probes (Pt). The OFS installation was composed of two cables sensitive to both strain and temperature and one sensitive only to temperature. All cables were connected, between the sensitive part and the instruments, via hybrid cables to reduce cost. The connection was made according to two techniques: splicing the fibers in situ after installation, or preparing each fiber with a connector and simply plugging them together in situ. Another challenge was installing the OFS cables without interruption along a tunnel built in several sections. A first success is the survival rate of the sensors after installation and the quality of the measurements: 100% of the OFS cables intended for long-term monitoring survived installation. A few new configurations were tested with relative success. The measurements obtained were very promising: after three years of data, no difference was observed between OFS cables or connection methods, and the strains fit well with the VWE and Pt sensors placed at the same locations. Data from the Brillouin instrument, which is sensitive to both strain and temperature, were compensated with data provided by the Raman instrument, which is sensitive only to temperature and interrogates a separate fiber. These results provide confidence for the next steps of the qualification process, which consist of testing several data treatment approaches for direct analysis.
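A minimal sketch of the compensation step is given below: the Raman temperature profile is used to remove the thermal contribution from the Brillouin frequency shift before converting to strain; the strain and temperature coefficients are typical textbook values, assumed here, not the project's calibration.

```python
# Hedged sketch of the Brillouin/Raman compensation step: the Raman channel
# gives the temperature change, which is removed from the Brillouin frequency
# shift before converting to strain. Coefficients are typical textbook values
# (assumed), not the project's calibration.
import numpy as np

C_EPS = 0.05   # Brillouin strain coefficient, MHz per microstrain (assumed)
C_T = 1.0      # Brillouin temperature coefficient, MHz per deg C (assumed)

def compensated_strain(d_nu_brillouin_mhz, d_temp_raman_c):
    """Return microstrain after removing the thermal part of the Brillouin shift."""
    return (d_nu_brillouin_mhz - C_T * d_temp_raman_c) / C_EPS

# Example: measured shifts along the orthoradial loop with ~2 deg C warming
d_nu = np.array([7.0, 5.5, 6.2])      # MHz
d_temp = np.array([2.0, 2.0, 1.8])    # deg C from the Raman fiber
print(compensated_strain(d_nu, d_temp))   # microstrain per measurement point
```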

Keywords: monitoring, fiber optic, sensor, data treatment

Procedia PDF Downloads 107
151 A Comparison of Inverse Simulation-Based Fault Detection in a Simple Robotic Rover with a Traditional Model-Based Method

Authors: Murray L. Ireland, Kevin J. Worrall, Rebecca Mackenzie, Thaleia Flessa, Euan McGookin, Douglas Thomson

Abstract:

Robotic rovers designed to work in extra-terrestrial environments present a unique challenge in terms of the reliability and availability of systems throughout the mission. Should some fault occur, with the nearest human potentially millions of kilometres away, detection and identification of the fault must be performed solely by the robot and its subsystems. Faults in the system sensors are relatively straightforward to detect through the residuals produced by comparison of the system output with that of a simple model. However, faults in the input, that is, in the actuators of the system, are harder to detect. A step change in the input signal, caused potentially by the loss of an actuator, can propagate through the system, resulting in complex residuals in multiple outputs. These residuals can be difficult to isolate or distinguish from residuals caused by environmental disturbances. While a more complex fault detection method or additional sensors could be used to solve these issues, an alternative is presented here. Using inverse simulation (InvSim), the inputs and outputs of the mathematical model of the rover system are reversed; thus, for a desired trajectory, the corresponding actuator inputs are obtained. A step fault near the input then manifests itself as a step change in the residual between the system inputs and the input trajectory obtained through inverse simulation. This approach avoids the need for additional hardware on a mass- and power-critical system such as the rover. The InvSim fault detection method is applied to a simple four-wheeled rover in simulation. Additive system faults and an external disturbance force are applied to the vehicle in turn, such that the dynamic response and sensor output of the rover are impacted. Basic model-based fault detection is then employed to provide output residuals which may be analysed to provide information on the fault/disturbance. InvSim-based fault detection is then employed, similarly providing input residuals which provide further information on the fault/disturbance. The input residuals are shown to provide clearer information on the location and magnitude of an input fault than the output residuals. Additionally, they allow faults to be more clearly discriminated from environmental disturbances.
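To illustrate the principle on something much simpler than the rover model, the sketch below applies inverse simulation to a first-order discrete system with an actuator step fault; the model coefficients, fault size, and timing are illustrative assumptions.

```python
# Toy sketch of InvSim-based fault detection on a first-order system
# x[k+1] = a*x[k] + b*u[k]; the model, fault size, and timing are illustrative
# and not the rover model used in the paper.
import numpy as np

a, b = 0.9, 0.5
n = 60
u_cmd = np.ones(n)                     # commanded input
fault = np.where(np.arange(n) >= 30, -0.3, 0.0)
u_actual = u_cmd + fault               # actuator loses effectiveness at k = 30

# Forward simulation of the (faulty) plant
x = np.zeros(n + 1)
for k in range(n):
    x[k + 1] = a * x[k] + b * u_actual[k]

# Inverse simulation: input that the model says is needed to follow x
u_hat = (x[1:] - a * x[:-1]) / b

input_residual = u_cmd - u_hat         # steps from ~0 to ~0.3 at k = 30
print(input_residual[25:35].round(2))
```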

Keywords: fault detection, ground robot, inverse simulation, rover

Procedia PDF Downloads 279
150 Urban Noise and Air Quality: Correlation between Air and Noise Pollution; Sensors, Data Collection, Analysis and Mapping in Urban Planning

Authors: Massimiliano Condotta, Paolo Ruggeri, Chiara Scanagatta, Giovanni Borga

Abstract:

Architects and urban planners, when designing and renewing cities, have to face a complex set of problems, including the issues of noise and air pollution, which are considered hot topics (e.g., London's Clean Air Act and the soundscape definition). It is usually taken for granted that these problems go hand in hand, because the noise pollution present in cities is often linked to traffic and industry, and these produce air pollutants as well. Traffic congestion can create both noise pollution and air pollution, because NO₂ is mostly created from the oxidation of NO, and these two are notoriously produced by high-temperature combustion processes (i.e., car engines or thermal power stations); the same applies to industrial plants. What has to be investigated – and this is the topic of this paper – is whether or not there really is a correlation between noise pollution and air pollution (taking NO₂ into account) in urban areas. To evaluate whether there is a correlation, some low-cost methodologies will be used. For noise measurements, the OpeNoise App will be installed on an Android phone. The smartphone will be positioned inside a waterproof box, to stay outdoors, with an external battery to allow it to collect data continuously. The box will have a small hole for an external microphone, connected to the smartphone, which will be calibrated to collect the most accurate data. For air pollution measurements, the AirMonitor device will be used: an Arduino board to which the sensors and all the other components are connected. After assembly, the sensors will be coupled (one noise and one air sensor) and placed in different critical locations in the area of Mestre (Venice) to map the existing situation. The sensors will collect data for a fixed period of time, covering both week and weekend days; in this way, it will be possible to see how the situation changes during the week. The novelty is that the data will be compared to check whether there is a correlation between the two pollutants, using graphs that show the percentage of pollution instead of the raw values obtained with the sensors. To do so, the data will be converted to fit on a scale that goes up to 100% and will be shown through a mapping of the measurements using GIS methods. Another relevant aspect is that this comparison can help to choose the right mitigation solutions to apply in the analysed area, because it will make it possible to address both the noise and the air pollution problems with a single intervention. The mitigation solutions must consider not only the health aspect but also how to create a more livable space for citizens. The paper describes in detail the methodology and the technical solutions adopted for the realization of the sensors, the data collection, the noise and pollution mapping, and the analysis.
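A minimal sketch of the planned comparison step is shown below: both signals are rescaled to a common 0-100% scale and their correlation is computed; the file name, column names, and resampling interval are assumptions for illustration.

```python
# Sketch of the comparison step: rescale noise (dB) and NO2 readings to a
# common 0-100 % scale and compute their correlation. File and column names
# are assumptions for illustration.
import pandas as pd

df = pd.read_csv("mestre_site_01.csv", parse_dates=["timestamp"])

def to_percent(series):
    """Min-max rescale a series to 0-100 %."""
    return 100 * (series - series.min()) / (series.max() - series.min())

df["noise_pct"] = to_percent(df["leq_db"])
df["no2_pct"] = to_percent(df["no2_ppb"])

hourly = df.set_index("timestamp").resample("1H").mean(numeric_only=True)
print("Pearson r:", hourly["noise_pct"].corr(hourly["no2_pct"]).round(2))
```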

Keywords: air quality, data analysis, data collection, NO₂, noise mapping, noise pollution, particulate matter

Procedia PDF Downloads 186
149 Simultaneous Measurement of Wave Pressure and Wind Speed with the Specific Instrument and the Unit of Measurement Description

Authors: Branimir Jurun, Elza Jurun

Abstract:

The focus of this paper is the description of an instrument called 'Quattuor 45' and the definition of wave pressure measurement. Special attention is given to the measurement of the wave pressure generated as wind speed increases, obtained with the 'Quattuor 45' instrument in the investigated area. The study begins with a review of theoretical considerations and of numerous up-to-date investigations related to waves approaching the coast. The detailed schematic view of the instrument is complemented with plan and side view drawings. Horizontal stability of the instrument is achieved by a mooring that relies on two concrete blocks. Vertical wave peak monitoring is ensured by one float above the instrument. The combination of horizontal stability and vertical wave peak monitoring allows a representative database for wave pressure measurement to be created. The instrument 'Quattuor 45' is named after the way the database is acquired: the electronic part of the instrument consists of an 'Arduino' main board, its memory, four load cells with the appropriate modules, and an anemometer wind speed sensor. The 'Arduino' board is programmed to store two readings from each load cell and two readings from the anemometer on an SD card every second. The next part of the research is dedicated to data processing. All measured results are stored automatically in the database, and after that, detailed processing is carried out in MS Excel. The results of the wave pressure measurement are expressed in the unit kN/m². This paper also suggests a graphical presentation of the results as a multi-line graph: the wave pressure is presented on the left vertical axis, the wind speed is shown on the right vertical axis, and the time of measurement is displayed on the horizontal axis. The paper proposes an algorithm for wind speed measurements, showing the results for two characteristic winds in the Adriatic Sea, called 'Bura' and 'Jugo'. The first of them is a northern wind that reaches high speeds, causing low and extremely steep waves, where the wave pressure is relatively weak. On the other hand, the southern wind 'Jugo' has a lower speed than the northern wind, but because it blows continuously at a sustained speed, it causes extremely long and high waves that produce extremely high wave pressure.
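As an illustration of the data processing step, the sketch below converts the logged load cell readings to wave pressure in kN/m² and plots it against wind speed on a second axis; the sensing area, calibration, and file layout are assumptions, not the instrument's actual specification.

```python
# Sketch of the post-processing step: convert the four load cell readings from
# the SD card log to wave pressure in kN/m2 and plot it against wind speed on
# a second axis. Calibration, sensing area, and file layout are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

AREA_M2 = 0.45 * 0.45            # assumed exposed plate area of the instrument
G = 9.81

log = pd.read_csv("quattuor45_log.csv", parse_dates=["time"])
cells = ["cell1_kg", "cell2_kg", "cell3_kg", "cell4_kg"]
log["pressure_kn_m2"] = log[cells].sum(axis=1) * G / AREA_M2 / 1000.0

fig, ax1 = plt.subplots()
ax1.plot(log["time"], log["pressure_kn_m2"], color="tab:blue")
ax1.set_ylabel("wave pressure (kN/m$^2$)")
ax1.set_xlabel("time")
ax2 = ax1.twinx()                # wind speed on the right-hand axis
ax2.plot(log["time"], log["wind_m_s"], color="tab:orange")
ax2.set_ylabel("wind speed (m/s)")
plt.show()
```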

Keywords: instrument, measuring unit, wave pressure metering, wind speed measurement

Procedia PDF Downloads 174
148 Developing Optical Sensors with Application of Cancer Detection by Elastic Light Scattering Spectroscopy

Authors: May Fadheel Estephan, Richard Perks

Abstract:

Context: Cancer is a serious health concern that affects millions of people worldwide. Early detection and treatment are essential for improving patient outcomes. However, current methods for cancer detection have limitations, such as low sensitivity and specificity. Research Aim: The aim of this study was to develop an optical sensor for cancer detection using elastic light scattering spectroscopy (ELSS). ELSS is a noninvasive optical technique that can be used to characterize the size and concentration of particles in a solution. Methodology: An optical probe was fabricated with a 100-μm-diameter core and a 132-μm centre-to-centre separation. The probe was used to measure the ELSS spectra of polystyrene spheres with diameters of 2, 0.8, and 0.413 μm. The spectra were then analysed to determine the size and concentration of the spheres. Findings: The results showed that the optical probe was able to differentiate between the three different sizes of polystyrene spheres. The probe was also able to detect the presence of polystyrene spheres at suspension concentrations as low as 0.01%. Theoretical Importance: The results of this study demonstrate the potential of ELSS for cancer detection. ELSS is a noninvasive technique that can be used to characterize the size and concentration of cells in a tissue sample. This information can be used to identify cancer cells and assess the stage of the disease. Data Collection: The data for this study were collected by measuring the ELSS spectra of polystyrene spheres with different diameters. The spectra were collected using a spectrometer and a computer. Analysis Procedures: The ELSS spectra were analysed using a software program to determine the size and concentration of the spheres. The software program used a mathematical algorithm to fit the spectra to a theoretical model. Question Addressed: The question addressed by this study was whether ELSS could be used to detect cancer cells. The results showed that ELSS could differentiate between particles of different sizes, suggesting that it could be used to detect cancer cells. Conclusion: The findings of this research show the utility of ELSS in the early identification of cancer. ELSS is a noninvasive method for characterizing the number and size of cells in a tissue sample. This information can be employed to identify cancer cells and determine the disease's stage. Further research is needed to evaluate the clinical performance of ELSS for cancer detection.
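A generic sketch of the spectrum-fitting step is given below using non-linear least squares; the model_spectrum() function is only a placeholder parameterisation, not the theoretical scattering model used in the study, and the data file is assumed.

```python
# Generic sketch of the spectrum-fitting step using non-linear least squares;
# model_spectrum() is a placeholder parameterisation, not the actual elastic
# scattering model used in the study, and the data file is assumed.
import numpy as np
from scipy.optimize import curve_fit

def model_spectrum(wavelength_nm, amplitude, diameter_um, offset):
    # Placeholder: oscillatory spectrum whose period depends on particle size.
    return offset + amplitude * np.cos(2 * np.pi * diameter_um * 1000.0 / wavelength_nm)

data = np.loadtxt("elss_spectrum.csv", delimiter=",")   # columns: wavelength, intensity
wl, intensity = data[:, 0], data[:, 1]

popt, _ = curve_fit(model_spectrum, wl, intensity, p0=[1.0, 0.8, 0.0])
print(f"estimated diameter ~ {popt[1]:.2f} um")
```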

Keywords: elastic light scattering spectroscopy, polystyrene spheres in suspension, optical probe, fibre optics

Procedia PDF Downloads 49