Search results for: high leverage points
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21952

20692 Exploring Counting Methods for the Vertices of Certain Polyhedra with Uncertainties

Authors: Sammani Danwawu Abdullahi

Abstract:

Vertex enumeration algorithms explore the methods and procedures for generating the vertices of general polyhedra formed by systems of equations or inequalities. The problem of enumerating the extreme points (vertices) of general polyhedra is shown to be NP-hard. This leads to exploring how to count the vertices of general polyhedra without listing them, which is shown to be #P-complete. Some fully polynomial randomized approximation schemes (fpras) for counting the vertices of special classes of polyhedra associated with down-sets, independent sets, 2-knapsack problems and 2 x n transportation problems are presented, together with some open problems discovered along the way.
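The abstract does not spell out an algorithm; as a minimal, self-contained illustration of what vertex enumeration means (and of why the pairwise-intersection approach does not scale to general polyhedra, which is what motivates counting without listing), the following sketch lists the vertices of a small two-dimensional polyhedron {x : Ax <= b} by intersecting constraint pairs and keeping the feasible intersections. The constraint data are invented for illustration only.

# Illustrative sketch (not from the paper): brute-force vertex enumeration for a
# small 2-D polyhedron {x : Ax <= b}.  This pairwise approach blows up for general
# polyhedra, which is why counting without listing becomes the interesting problem.
import itertools
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([4.0, 4.0, 0.0, 0.0, 6.0])

vertices = []
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:          # parallel constraints, no vertex
        continue
    x = np.linalg.solve(M, b[[i, j]])          # intersection of the two facets
    if np.all(A @ x <= b + 1e-9):              # keep it only if it is feasible
        vertices.append(tuple(np.round(x, 9)))

print(sorted(set(vertices)))                   # the polygon's extreme points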

Keywords: counting with uncertainties, mathematical programming, optimization, vertex enumeration

Procedia PDF Downloads 351
20691 A Diurnal Light Based CO₂ Elevation Strategy for Up-Scaling Chlorella sp. Production by Minimizing Oxygen Accumulation

Authors: Venkateswara R. Naira, Debasish Das, Soumen K. Maiti

Abstract:

Achieving high cell densities of microalgae under the obligatory light-limiting and high-light conditions of diurnal sunlight (low-high-low variation of daylight intensity) is further limited by CO₂ supply and dissolved oxygen (DO) accumulation in large-scale photobioreactors. High DO levels cause low growth due to photoinhibition and/or photorespiration. Hence, scalable elevated CO₂ levels (% in air) and their effect on DO accumulation in a 10 L cylindrical membrane photobioreactor (a vertical tubular type) are studied here. Three CO₂ elevation strategies, biomass-based, pH-control-based (types I and II), and diurnal-light-based, were explored to study the growth of Chlorella sp. FC2 IITG under single-sided LED lighting in the laboratory, mimicking diurnal sunlight. All experiments were conducted in fed-batch mode by maintaining the N and P sources at a minimum of 50% of the initial concentrations of the optimized BG-11 medium. The biomass-based strategy (2% on day 1, 2.5% on day 2, and 3% thereafter) and the well-known pH-control-based type-I strategy (pH 5.8 throughout) were found to be lethal to FC2 growth. In both strategies, a peak DO accumulation of 150% of air saturation resulted from the high photosynthetic activity caused by the higher CO₂ levels. In the pH-control-based type-I strategy, the CO₂ levels that resulted automatically from pH control were beyond the inhibition range (5%). However, the pH-control-based type-II strategy (pH 5.8 for 2 days, 6.3 for 3 days, 6.7 thereafter) showed a final biomass titer of up to 4.45 ± 0.05 g L⁻¹ with a peak DO of 122% of air saturation; high CO₂ levels beyond 5% (in air) were recorded thereafter. Thus, this strategy proved sustainable for obtaining high biomass. Finally, a diurnal-light-based strategy (2% at low light, 2.5% at medium light and 3% at high light) was applied on the basis of the increase/decrease in photosynthesis with increasing/decreasing diurnal light intensity. It resulted in the maximum final biomass titer of 5.33 ± 0.12 g L⁻¹, with a total biomass productivity of 0.59 ± 0.01 g L⁻¹ day⁻¹. These values are markedly higher than those at a constant 2% CO₂ level (final biomass titer: 4.26 ± 0.09 g L⁻¹; biomass productivity: 0.27 ± 0.005 g L⁻¹ day⁻¹). However, a peak DO of 135% of air saturation was observed. Thus, the diurnal-light-based elevation should be further improved by using CO₂-enriched N₂ instead of air. To the best of our knowledge, a light-based CO₂ elevation strategy has not been reported elsewhere.
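As a simple illustration of the diurnal-light-based rule described above, the CO₂ set point can be chosen from the measured light intensity; the light thresholds in this sketch are assumptions for illustration, not the authors' calibration.

# Illustrative sketch of a light-based CO2 set-point rule (thresholds are assumed).
def co2_setpoint_percent(light_umol_m2_s: float) -> float:
    """Return the CO2 level (% in air) for the current light intensity."""
    if light_umol_m2_s < 200:      # low light (early morning / late evening)
        return 2.0
    elif light_umol_m2_s < 800:    # medium light
        return 2.5
    else:                          # high light around midday
        return 3.0

print(co2_setpoint_percent(150), co2_setpoint_percent(500), co2_setpoint_percent(1200))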

Keywords: Chlorella sp., CO₂ elevation strategy, dissolved oxygen accumulation, diurnal light based CO₂ elevation, high cell density, microalgae, scale-up

Procedia PDF Downloads 121
20690 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform for this task by automatically extracting the features needed to detect facial expressions and emotions. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details, to break the symmetry of the produced information. In effect, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We develop this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax tends to reach the gold labels too quickly, which drives the model toward over-fitting, because it cannot determine adequately discriminant feature vectors for some class labels. We reduce the risk of over-fitting by using a dynamic, rather than static, input tensor shape in the SoftMax layer and by specifying a desired soft margin. In effect, the margin acts as a controller of how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss aims to compact same-class labels and separate different-class labels in the normalized log domain. We penalize predictions with high divergence from the ground-truth labels, shortening correct feature vectors and enlarging false-prediction tensors; in other words, we assign more weight to classes that lie close to one another (the "hard labels to learn"). In doing so, we constrain the model to generate more discriminative feature vectors for the different class labels. Finally, for the proposed optimizer, our focus is on addressing the weak convergence of the Adam optimizer on non-convex problems. Our optimizer uses an alternative gradient-updating procedure with an exponentially weighted moving average for faster convergence, and exploits a weight decay method to reduce the learning rate sharply near optima so as to reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013; 90.73% on RAF-DB, a 16% improvement over the first rank after 10 years; and 100% k-fold average accuracy on the CK+ dataset, providing top performance relative to other networks, which require much larger training datasets.
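The exact formulation of the proposed Dynamic Soft-Margin SoftMax is not given in the abstract; as a generic sketch of the underlying idea, the snippet below implements a fixed additive-margin SoftMax cross-entropy in NumPy, in which the target-class logit is reduced by a margin m so the model must push same-class embeddings further from the decision boundary. The margin value and example data are assumptions for illustration.

# Illustrative additive-margin SoftMax cross-entropy in NumPy (a generic form of a
# soft margin on SoftMax, not the paper's exact dynamic formulation).
import numpy as np

def soft_margin_softmax_loss(logits: np.ndarray, labels: np.ndarray, m: float = 0.35) -> float:
    """logits: (batch, classes); labels: (batch,) integer class ids."""
    z = logits.copy()
    z[np.arange(len(labels)), labels] -= m          # penalize the correct class by the margin
    z -= z.max(axis=1, keepdims=True)               # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(labels)), labels].mean())

logits = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.3]])
labels = np.array([0, 1])
print(soft_margin_softmax_loss(logits, labels))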

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 73
20689 Desktop High-Speed Aerodynamics by Shallow Water Analogy in a Tin Box for Engineering Students

Authors: Etsuo Morishita

Abstract:

In this paper, we present shallow water in a tin box as an analogous simulation tool for high-speed aerodynamics education and research. It is customary to use a water tank to create shallow water flow. While the flow in a water tank is not necessarily uniform and is sometimes wavy, we can visualize a clear supercritical flow even when we move a body manually through stationary water in a simple shallow tin box. We can visualize a blunt shock wave around a moving circular cylinder together with the shock pattern around a diamond airfoil. Another interesting analogous experiment is a hydrodynamic shock tube with water and tea. We observe the contact surface clearly due to the color difference between the two liquids, something that is invisible in the real gas-dynamics experiment. We first revisit the similarities between high-speed aerodynamics and shallow water hydraulics. Several educational and research experiments are then introduced for engineering students. Shallow water experiments in a tin box properly simulate high-speed flows.
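The analogy the authors revisit rests on the correspondence between the Froude number of a shallow water flow and the Mach number of a compressible gas flow, with the shallow-water wave speed sqrt(g*h) playing the role of the speed of sound. The sketch below checks whether a towing speed produces a supercritical (analogously supersonic) flow; the water depth and speed are example values only, not measurements from the paper.

# Shallow-water / gas-dynamics analogy: Froude number Fr = U / sqrt(g*h) plays the
# role of the Mach number.  Depth and towing speed below are assumed example values.
import math

g = 9.81          # m/s^2
h = 0.005         # water depth in the tin box, m (assumed)
U = 0.5           # speed of the hand-moved model, m/s (assumed)

wave_speed = math.sqrt(g * h)        # analogue of the speed of sound
Fr = U / wave_speed                  # analogue of the Mach number
print(f"wave speed = {wave_speed:.3f} m/s, Fr = {Fr:.2f}",
      "-> supercritical (analogous to supersonic)" if Fr > 1 else "-> subcritical")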

Keywords: aerodynamics, compressible flow, gas dynamics, hydraulics, shock wave

Procedia PDF Downloads 300
20688 Multiperson Drone Control with Seamless Pilot Switching Using Onboard Camera and Openpose Real-Time Keypoint Detection

Authors: Evan Lowhorn, Rocio Alba-Flores

Abstract:

Traditional classification Convolutional Neural Networks (CNN) attempt to classify an image in its entirety. This becomes problematic when trying to perform classification with a drone's camera in real time, due to unpredictable backgrounds. Object detectors with bounding boxes can be used to isolate individuals and other items, but the original backgrounds remain within these boxes. These basic detectors have been regularly used to determine what type of object an item is, such as "person" or "dog." A recent advancement in computer vision, particularly for human imaging, is keypoint detection. Human keypoint detection goes beyond bounding boxes to fully isolate humans and plot points, or Regions of Interest (ROI), on their bodies within an image. ROIs can include shoulders, elbows, knees, heads, etc. These points can then be related to each other and used in deep learning methods such as pose estimation. For drone control based on human motions, poses, or signals using the onboard camera, it is important to have a simple method for pilot identification among multiple individuals while also giving the pilot fine control options for the drone. To achieve this, the OpenPose keypoint detection network was used with body and hand keypoint detection enabled. OpenPose supports combining multiple keypoint detection methods in real time within a single network. Body keypoint detection allows simple poses to act as the pilot identifier. Hand keypoint detection, with ROIs for each finger, can then offer a greater variety of signal options for the pilot once identified. In this work, the individual must raise their non-control arm to be identified as the operator and send commands with the hand of their other arm. The drone ignores all other individuals in the onboard camera feed until the current operator lowers their non-control arm. When another individual wishes to operate the drone, they simply raise their arm once the current operator relinquishes control, and then they can begin controlling the drone with their other hand. This is all performed mid-flight, with no landing or script editing required. When using a desktop with a discrete NVIDIA GPU, the drone's 2.4 GHz Wi-Fi connection, combined with restricting OpenPose to only body and hand detection, allows this control method to perform as intended while maintaining the responsiveness required for practical use.
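The pilot-selection rule described above reduces to a simple per-frame check on the body keypoints. The sketch below is a schematic of that logic only; the keypoint names, the dict-of-coordinates input format, and the wrist-above-shoulder test are illustrative assumptions, not OpenPose's actual output format.

# Schematic of the pilot-identification rule: the operator is whoever holds their
# non-control arm raised; everyone else is ignored until control is relinquished.
# Each person is a dict of (x, y) image coordinates; "raised" is approximated as the
# wrist being above the shoulder (smaller y in image coordinates).
def arm_raised(person: dict, side: str) -> bool:
    wrist, shoulder = person.get(f"{side}_wrist"), person.get(f"{side}_shoulder")
    return wrist is not None and shoulder is not None and wrist[1] < shoulder[1]

def select_pilot(people: list, current_pilot: int | None) -> int | None:
    # Keep the current operator while their non-control (here: left) arm stays raised.
    if current_pilot is not None and arm_raised(people[current_pilot], "left"):
        return current_pilot
    # Otherwise control is relinquished; the first person raising an arm takes over.
    for idx, person in enumerate(people):
        if arm_raised(person, "left"):
            return idx
    return None  # nobody is claiming control

people = [{"left_wrist": (100, 80), "left_shoulder": (95, 150)},   # arm raised
          {"left_wrist": (300, 200), "left_shoulder": (310, 150)}]
print(select_pilot(people, None))   # -> 0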

Keywords: computer vision, drone control, keypoint detection, OpenPose

Procedia PDF Downloads 182
20687 Necessary Condition to Utilize Adaptive Control in Wind Turbine Systems to Improve Power System Stability

Authors: Javad Taherahmadi, Mohammad Jafarian, Mohammad Naser Asefi

Abstract:

The global capacity of wind power has increased dramatically in recent years. Therefore, improving wind turbine technology to take advantage of this enormous potential in the power grid is an interesting subject for scientists. The doubly-fed induction generator (DFIG) wind turbine is a popular system due to its many advantages, such as improved power quality, high energy efficiency, and controllability. With the increase in wind power penetration in the network, and given the flexible control of wind turbines, the use of wind turbine systems to improve the dynamic stability of power systems has become of significant importance to researchers. Subsynchronous oscillations are one of the important issues in power system stability. Damping subsynchronous oscillations by using wind turbines has been studied in various research efforts, mainly by adding an auxiliary control loop to the control structure of the wind turbine. In most of these studies, this control loop is composed of linear blocks. In this paper, simple adaptive control is used for this purpose. In order to use an adaptive controller, the convergence of the controller should be verified. Since the adaptive control parameters tend toward their optimum values in order to obtain optimum control performance, using this controller will help the wind turbines contribute positively to damping the network subsynchronous oscillations at different wind speeds and system operating points. In this paper, the application of simple adaptive control in DFIG wind turbine systems to improve the dynamic stability of power systems is studied, and the essential condition for using this controller is considered. It is also shown that this controller has an insignificant effect on the dynamic stability of the wind turbine itself.
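The abstract does not give the adaptation law used. As a generic sketch only, and under the assumption of a scalar first-order plant and reference model with illustrative numbers, a simple-adaptive-control loop of the kind referred to above (output error driving gradient-type gain adaptation) can be written as follows.

# Minimal scalar simple adaptive control (SAC) sketch: the plant output x is driven
# to follow a reference model output xm, and the feedback/feedforward gains adapt
# proportionally to the output error.  All numerical values are illustrative only.
import numpy as np

dt, T = 0.001, 5.0
a_p, b_p = -1.0, 2.0          # assumed plant:  x'  = a_p*x  + b_p*u
a_m, b_m = -4.0, 4.0          # reference model: xm' = a_m*xm + b_m*r
gamma = np.array([50.0, 10.0, 10.0])   # adaptation rates for [K_e, K_x, K_u]

x = xm = 0.0
K = np.zeros(3)
for _ in range(int(T / dt)):
    r = 1.0                            # step command
    e = xm - x                         # output error
    phi = np.array([e, xm, r])
    u = K @ phi                        # SAC control law
    K += gamma * e * phi * dt          # gradient-type gain adaptation
    x += (a_p * x + b_p * u) * dt      # integrate plant
    xm += (a_m * xm + b_m * r) * dt    # integrate reference model

print(f"final tracking error = {xm - x:.4f}, adapted gains = {np.round(K, 3)}")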

Keywords: almost strictly positive real (ASPR), doubly-fed induction generator (DFIG), simple adaptive control (SAC), subsynchronous oscillations, wind turbine

Procedia PDF Downloads 374
20686 Towards a Framework for Embedded Weight Comparison Algorithm with Business Intelligence in the Plantation Domain

Authors: M. Pushparani, A. Sagaya

Abstract:

Embedded systems have emerged as important elements in various domains, with extensive applications in the automotive, commercial, consumer, healthcare, and transportation markets, as the emphasis on intelligent devices grows. On the other hand, Business Intelligence (BI) has also been used extensively in a range of applications, especially in the agriculture domain, which is the area of this research. The aim of this research is to create a framework for an Embedded Weight Comparison Algorithm with Business Intelligence (EWCA-BI). The weight comparison algorithm will be embedded within the plantation management system and the weighbridge system. This algorithm will be used to estimate the weight at the site, which will be compared with the actual weight at the plantation. The algorithm will generate the necessary alerts when there is a discrepancy in the weight, thus enabling better decision making. In current practice, data are collected from various locations in various forms. It is a challenge to consolidate the data to obtain timely and accurate information for effective decision making. Adding to this, the unstable network connection leads to difficulty in getting timely, accurate information. To overcome these challenges, the algorithm is embedded on a portable device that also assists in data capture and synchronizes data across locations, overcoming the network shortcomings at collection points. The EWCA-BI will provide real-time information at any given point in time, thus enabling non-latent BI reports that provide crucial information for efficient operational decision making. This research has high potential for bringing embedded systems into the agriculture industry. EWCA-BI will provide BI reports with accurate information and uncompromised data using an embedded system and will provide alerts, thereby enabling effective operational decision-making at the site.
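The core comparison step described above can be expressed compactly; the sketch below compares the site-estimated weight with the weighbridge weight and raises an alert when the discrepancy exceeds a tolerance. The 2% tolerance and the example weights are assumed values for illustration, not figures from the framework.

# Sketch of the weight-comparison check at the heart of the proposed EWCA-BI.
def check_weight(estimated_kg: float, weighbridge_kg: float, tolerance: float = 0.02) -> dict:
    discrepancy = weighbridge_kg - estimated_kg
    relative = abs(discrepancy) / weighbridge_kg if weighbridge_kg else float("inf")
    return {
        "discrepancy_kg": round(discrepancy, 1),
        "relative": round(relative, 4),
        "alert": relative > tolerance,     # flagged for BI reporting / decision making
    }

print(check_weight(estimated_kg=10450.0, weighbridge_kg=10120.0))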

Keywords: embedded business intelligence, weight comparison algorithm, oil palm plantation, embedded systems

Procedia PDF Downloads 282
20685 The Project Evaluation to Develop the Competencies, Capabilities, and Skills in Repairing Computers of People in Jompluak Local Municipality, Bang Khonthi District, Samut Songkram Province

Authors: Wilailuk Meepracha

Abstract:

The results of the project evaluation to develop the competencies, capabilities, and skills in repairing computers of people in Jompluak Local Municipality, Bang Khonthi District, Samut Songkram Province showed that the overall result was good (4.33). When considering each aspect, it was found that the highest was process evaluation (4.60), followed by product evaluation (4.50), and the lowest was the feeding factor (3.97). When considering the details, it was found that: 1) the context aspect was high (4.23), with the highest item being the arrangement of the training situation (4.67), followed by the appropriateness of the target (4.30), and the lowest being project cooperation (3.73). 2) The average overall evaluation of the primary or feeding factor was high (4.23); the highest aspect was the capability of the trainers (4.47), followed by the suitability of the venue (4.33), while the lowest was the insufficient budget (3.47). 3) The average result of the process evaluation was very high (4.60); the highest aspect was follow-up supervision (4.70), followed by the responsibility of each project staff member (4.50), while the lowest was the present situation and the problems of the community (4.40). 4) The overall result of the product evaluation was very high (4.50); the highest aspect was the diversity of the activities and the community integration (4.67), followed by project target achievement (4.63), while the lowest was the continuation and regularity of the activities (4.33). The trainees reported satisfaction with the project management at a very high level (43.33%), a high level (40%), and a moderate level (16.67%). Suggestions for the project concerned providing additional computer sets (37.78%) and a longer training period, especially for computer skills (43.48%).

Keywords: project evaluation, competency development, capability in computer repairing, computer skills

Procedia PDF Downloads 301
20684 Tax Evasion in Brazil: The Case of Specialists

Authors: Felippe Clemente, Viviani S. Lírio

Abstract:

Brazilian tax evasion is very high. It causes many problems for the economy, such as shortfalls in budget realization, distorted income distribution, and poor allocation of productive resources. Therefore, the purpose of this article is to use game theory as an instrument to understand tax-evading agents and the tax authority in Brazil (Federal Revenue and Federal Police). By means of game theory approaches, the main results from considering cases both with and without specialists show that, in a situation of widespread evasion, penalizing taxpayers with either high fines or deprivation of liberty may not be very effective. The analysis also shows that audit and inspection costs play an important role in driving the equilibrium of the system. This suggests that a policy of investing in tax inspectors would be a more effective tool for combating non-compliance with tax obligations than penalties or fines.
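The abstract does not reproduce the payoff structure used. As a stylized illustration only, the standard tax-inspection game below computes the mixed-strategy equilibrium, in which the equilibrium evasion probability is driven by the audit cost while the audit probability depends on the evaded tax and the fine; this echoes the abstract's point that inspection costs, rather than fines alone, drive the equilibrium. All payoffs are invented for the example.

# Stylized tax-inspection game (not the paper's exact model): a taxpayer chooses to
# evade or comply, the authority chooses to audit or not.  In the mixed-strategy
# equilibrium, the audit cost c sets the equilibrium evasion probability.
def inspection_game_equilibrium(tax: float, fine: float, audit_cost: float) -> tuple:
    q_audit = tax / (tax + fine)         # makes the taxpayer indifferent to evading
    p_evade = audit_cost / (tax + fine)  # makes the authority indifferent to auditing
    return p_evade, q_audit

for c in (5.0, 20.0, 50.0):
    p, q = inspection_game_equilibrium(tax=100.0, fine=150.0, audit_cost=c)
    print(f"audit cost {c:5.1f}: equilibrium evasion prob = {p:.3f}, audit prob = {q:.3f}")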

Keywords: tax evasion, Brazil, game theory, specialists

Procedia PDF Downloads 322
20683 Using Pump as Turbine in Drinking Water Networks to Monitor and Control Water Processes Remotely

Authors: Sara Bahariderakhshan, Morteza Ahmadifar

Abstract:

Leakage is one of the most important problems that water distribution networks face, and its primary cause is excess pressure. There are many approaches to controlling this excess pressure, among them pressure-reducing valves (PRVs) and reduced pipe diameters. On the other hand, pumps use electricity or fossil fuels to supply the needed pressure in distribution networks, yet excess pressure still arises in some branches due to topology problems and the variables of water networks; therefore, pressure valves are inevitable. Although using PRVs is inevitable, it wastes the electricity or fuel consumed by the pumps, because PRVs simply dissipate the excess hydraulic pressure in order to lower it. Pumps working in reverse, or pumps as turbines (called PaT in this article), are easily available and are also effective means of reducing equipment cost in small hydropower plants. Urban areas in developing countries are growing in area and may face water scarcity in the near future. These cities need wider water networks, which makes it harder to predict, control, and operate the urban water cycle well. Higher energy use and therefore more pollution, slower repair services, greater user dissatisfaction, and more leakage are these networks' serious problems. Therefore, more effective systems are needed to monitor and act in these complicated networks than those used now. In this article, a new approach is proposed and evaluated: using PaT to produce enough energy for remote valves and sensors in the water network. These sensors can be used to determine the discharge, pressure, water quality, and other important network characteristics. With the help of remote valves, pipeline discharge can be controlled; so instead of wasting excess hydraulic pressure, which may be destructive in some cases, the goal of this article is to obtain the extra pressure from the pipeline and produce clean electricity for the remote instruments. Furthermore, as the network area grows, there is unwanted high pressure at some critical points; it is not destructive, but lowering it results in a longer lifetime for pipeline networks without user dissatisfaction. The strategy proposed in this article leads to the wide use of PaT for pressure containment and for producing the energy needed by remote valves and sensors, as in supervisory control and data acquisition (SCADA) systems, making it easy to monitor, receive data from the urban water cycle, and make any needed changes in pipeline discharge and pressure easily and remotely. This is a clean energy-production scheme without significant environmental impacts, and it can be used in urban drinking water networks without any problem for consumers, leading to a stable and dynamic network with lower leakage and pollution.
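A rough feasibility check for the idea above follows from the standard hydropower relation P = rho * g * Q * H * eta; the sketch below estimates the power a PaT could recover from excess head at a branch. The flow, excess head, and efficiency are assumed example values, not measurements from the article.

# Rough estimate of the power a pump-as-turbine (PaT) could recover from excess
# pressure at a branch, P = rho * g * Q * H * eta.
rho, g = 1000.0, 9.81            # water density (kg/m^3), gravity (m/s^2)
Q = 0.02                         # flow through the branch, m^3/s (assumed)
H = 15.0                         # excess head otherwise burnt in a PRV, m (assumed)
eta = 0.6                        # typical PaT efficiency (assumed)

P_watts = rho * g * Q * H * eta
print(f"recoverable power = {P_watts:.0f} W")   # ample for remote valves and sensors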

Keywords: new energies, pump as turbine, drinking water, distribution network, remote control equipment

Procedia PDF Downloads 460
20682 Scope of Heavy Oil as a Fuel of the Future

Authors: Kiran P. Chadayamuri, Saransh Bagdi

Abstract:

The increasing imbalance between energy supply and demand has made nations and companies involved in the energy sector boost their research and find suitable solutions. With the high rates at which conventional oil and gas resources are depleting, efficient exploration and exploitation of heavy oil could just be the answer. Heavy oil may be defined as crude oil having an API gravity of less than 20°. Heavy oils are highly viscous, have low hydrogen-to-carbon ratios, and are known to produce high carbon residues. They have high contents of asphaltenes, heavy metals, sulphur, and nitrogen. Due to these properties, the extraction, transportation, and refining of such crude oil have their share of challenges. Lack of suitable technology hindered its production in the past, but things are now moving in a more positive direction. The aim of this paper is to study the various advantages of heavy oil, the associated limitations, and its feasibility as a fuel of the future.
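The API gravity threshold quoted above comes from the standard definition API = 141.5 / SG - 131.5, where SG is the specific gravity of the crude at 60 °F; the classification of a crude as "heavy" follows directly from it, as the small example below shows (the SG values are illustrative).

# Standard API gravity formula; crude with API below 20 degrees is classed as heavy.
def api_gravity(specific_gravity: float) -> float:
    return 141.5 / specific_gravity - 131.5

for sg in (0.85, 0.93, 1.00):
    api = api_gravity(sg)
    print(f"SG {sg:.2f} -> API {api:5.1f} deg  ({'heavy' if api < 20 else 'not heavy'})")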

Keywords: energy, heavy oil, fuel, future

Procedia PDF Downloads 282
20681 The Microstructure and Corrosion Behavior of High Entropy Metallic Layers Electrodeposited by Low and High-Temperature Methods

Authors: Zbigniew Szklarz, Aldona Garbacz-Klempka, Magdalena Bisztyga-Szklarz

Abstract:

Typical metallic alloys are based on one major alloying component, where the addition of other elements is intended to improve or modify certain properties, above all the mechanical properties. In 1995, however, a new concept of metallic alloys was described and defined. High entropy alloys (HEA) contain at least five alloying elements, each in an amount from 5 to 20 at.%. Common features of this type of alloy are the absence of intermetallic phases, a highly homogeneous microstructure, and a unique chemical composition, which leads to materials with very high strength indicators, stable structures (also at high temperatures), and excellent corrosion resistance. Hence, HEA can successfully be used as substitutes for typical metallic alloys in various applications where sufficiently high properties are desirable. A few routes are applied for fabricating HEA: 1/ from the liquid phase, i.e. casting (usually arc melting); 2/ from the solid phase, i.e. powder metallurgy (sintering methods preceded by mechanical synthesis); 3/ from the gas phase, e.g. sputtering; or 4/ other deposition methods such as electrodeposition from liquids. Different production methods create different HEA microstructures, which can entail differences in their properties. The last two methods also allow coatings with HEA structures to be obtained, hereinafter referred to as high entropy films (HEF). With reference to the above, the crucial aim of this work was the optimization of the manufacturing process of multi-component metallic layers (HEF) by low- and high-temperature electrochemical deposition (ED). The low-temperature deposition process was carried out at ambient or elevated temperature (up to 100 °C) in an organic electrolyte. The high-temperature electrodeposition (several hundred degrees Celsius), in turn, allowed the HEF layer to be formed by electrochemical reduction of metals from molten salts. The optimization of the parameters that allow a HEF composition as homogeneous and equimolar as possible is the main result of the presented studies. In order to analyze and compare the microstructures, SEM/EBSD, TEM, and XRD techniques were employed. Moreover, the determination of the corrosion resistance of the CoCrFeMnNi(Cu or Al) layers in selected electrolytes (i.e. organic and non-organic liquids) was no less important than the above-mentioned objectives.
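The "high entropy" in the name refers to the configurational entropy of mixing, ΔS_mix = -R Σ x_i ln x_i, which is maximized for near-equimolar compositions such as CoCrFeMnNi. The quick comparison below (not taken from the paper; compositions are illustrative) shows why a five-component equimolar alloy earns the label while a conventional dilute alloy does not.

# Configurational entropy of mixing that motivates the "high entropy" label.
import math

R = 8.314  # J/(mol*K)

def mixing_entropy(fractions):
    return -R * sum(x * math.log(x) for x in fractions if x > 0)

equimolar_hea = [0.2] * 5                 # CoCrFeMnNi-type composition: R*ln(5) ~ 1.61R
dilute_alloy = [0.95, 0.05]               # conventional alloy with one main element
print(f"HEA: {mixing_entropy(equimolar_hea):.2f} J/(mol*K), "
      f"conventional: {mixing_entropy(dilute_alloy):.2f} J/(mol*K)")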

Keywords: high entropy alloys, electrodeposition, corrosion behavior, microstructure

Procedia PDF Downloads 75
20680 Fashion Appropriation: A Study in Awareness of Crossing Cultural Boundaries in Design

Authors: Anahita Suri

Abstract:

Myriad cultures form the warp and weft of the fabric of this world. The last century saw mass migration of people across geographical boundaries owing to industrialization and globalization. These people took with them their cultures, costumes, traditions, and folklore, which mingled with the local cultures to create something new, placed in a different context to make it contemporary. With the surge in population and the growth of the fashion industry, there has been increasing demand for innovative and individual fashion, from street markets to luxury brands. Exhausted by local influences, designers take inspiration from the so-called 'low' culture and create artistic products, place them in a different context, and the end product is categorized as 'high' culture. It is debatable why a design or culture is 'high' or 'low'. Who decides which works, practices, activities, etc., are 'high' and which are 'low'? The justification for this distinction is often found not in the design itself but in the context attached to it. Also, the concept of high/low is relative to time: what is 'high' today can be 'low' tomorrow and 'high' again the day after. This raises certain concerns. Firstly, it is sad that a culture which offers inspiration is looked down upon as 'low' culture. Secondly, it is ironic, because the so-designated 'high' culture is a manipulation of the truth drawn from the authentic 'low' culture, which is capable of true expression. When you borrow from a different culture, you pretend to be authentic precisely because you are not. Finally, it is important to be aware of crossing cultural boundaries and of the context attached to a design or product, so as to use it in a responsible way that communicates the design without offending anyone. Is it acceptable for a person's cultural identity to become another person's fashion accessory? This essay explores the complex, multi-layered subject of fashion appropriation and aims to provoke debate over cultural 'borrowing' and to create awareness that the commodification of cultural symbols and iconography in fashion is inappropriate and offensive, and not the same as 'celebrating cultural differences'.

Keywords: context, culture, fashion appropriation, inoffensive, responsible

Procedia PDF Downloads 122
20679 An Exploratory Study on the Impact of Climate Change on Design Rainfalls in the State of Qatar

Authors: Abdullah Al Mamoon, Niels E. Joergensen, Ataur Rahman, Hassan Qasem

Abstract:

The Intergovernmental Panel on Climate Change (IPCC), in its Fourth Assessment Report (AR4), predicts a more extreme climate towards the end of the century, which is likely to affect the design of engineering infrastructure projects with a long design life. A recent study in 2013 developed new design rainfalls for Qatar, which provide an improved design basis for drainage infrastructure in the State of Qatar under the current climate. The current design standards in Qatar do not consider the increased rainfall intensity caused by climate change. The focus of this paper is to update the recently developed design rainfalls for Qatar under changing climatic conditions based on IPCC's AR4, allowing a later revision of the proposed design standards relevant to projects with a longer design life. The future climate has been investigated based on the climate models released for IPCC's AR4 and the A2 storyline of the Special Report on Emissions Scenarios (SRES), using a stationary approach. Annual maximum series (AMS) of predicted 24-hour rainfall data for both the wet (NCAR-CCSM) and the dry (CSIRO-MK3.5) scenario at the Qatari grid points of the climate models were extracted for three periods: the current climate (2010-2039), the medium-term climate (2040-2069), and the end-of-century climate (2070-2099). A homogeneous region of the Qatari grid points was formed, and an L-moments-based regional frequency approach was adopted to derive design rainfalls. The results indicate no significant changes in the design rainfall for 2040-2069, but significant changes are expected towards the end of the century (2070-2099). New design rainfalls have been developed taking climate change into account for the 2070-2099 period, by averaging the results from the two scenarios. IPCC's AR4 predicts that the rainfall intensity of a 5-year return period event with a duration of 1 to 2 hours will increase by 11% in 2070-2099 compared to the current climate. Similarly, the rainfall intensity of a more extreme event, with a return period of 100 years and a duration of 1 to 2 hours, will increase by 71% in 2070-2099 compared to the current climate. Infrastructure with a design life exceeding 60 years should therefore include safety factors that take the predicted effects of climate change into due consideration.
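The regional procedure itself is not detailed in the abstract; for a single site, the L-moments step it relies on can be sketched as below, estimating the first two L-moments of an annual maximum series via probability-weighted moments and fitting a Gumbel distribution to obtain a T-year design rainfall. The AMS values are invented for illustration only; the regional approach additionally pools normalized data across sites.

# Single-site sketch of the L-moments step behind a regional frequency analysis.
import math

ams = sorted([22.0, 35.5, 18.2, 60.3, 41.0, 27.8, 55.1, 30.4, 48.6, 25.9])  # mm / 24 h
n = len(ams)

b0 = sum(ams) / n
b1 = sum((i / (n - 1)) * x for i, x in enumerate(ams)) / n    # probability-weighted moment
l1, l2 = b0, 2 * b1 - b0                                       # first two L-moments

alpha = l2 / math.log(2)                  # Gumbel scale from L-moments
xi = l1 - 0.5772 * alpha                  # Gumbel location (Euler-Mascheroni constant)

for T in (5, 100):
    x_T = xi - alpha * math.log(-math.log(1 - 1 / T))
    print(f"{T:>3}-year 24-h design rainfall = {x_T:.1f} mm")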

Keywords: climate change, design rainfalls, IDF, Qatar

Procedia PDF Downloads 388
20678 Using Pump as Turbine in Urban Water Networks to Control, Monitor, and Simulate Water Processes Remotely

Authors: Morteza Ahmadifar, Sarah Bahari Derakhshan

Abstract:

Leakage is one of the most important problems that water distribution networks face, and its primary cause is excess pressure. There are many approaches to controlling this excess pressure, among them pressure-reducing valves (PRVs) and reduced pipe diameters. On the other hand, pumps use electricity or fossil fuels to supply the needed pressure in distribution networks, yet excess pressure still arises in some branches due to topology problems and the variables of water networks; therefore, pressure valves are inevitable. Although using PRVs is inevitable, it wastes the electricity or fuel consumed by the pumps, because PRVs simply dissipate the excess hydraulic pressure in order to lower it. Pumps working in reverse, or pumps as turbines (called PaT in this article), are easily available and are also effective means of reducing equipment cost in small hydropower plants. Urban areas in developing countries are growing in area and may face water scarcity in the near future. These cities need wider water networks, which makes it harder to predict, control, and operate the urban water cycle well. Higher energy use and therefore more pollution, slower repair services, greater user dissatisfaction, and more leakage are these networks' serious problems. Therefore, more effective systems are needed to monitor and act in these complicated networks than those used now. In this article, a new approach is proposed and evaluated: using PaT to produce enough energy for remote valves and sensors in the water network. These sensors can be used to determine the discharge, pressure, water quality, and other important network characteristics. With the help of remote valves, pipeline discharge can be controlled; so instead of wasting excess hydraulic pressure, which may be destructive in some cases, the goal of this article is to obtain the extra pressure from the pipeline and produce clean electricity for the remote instruments. Furthermore, as the network area grows, there is unwanted high pressure at some critical points; it is not destructive, but lowering it results in a longer lifetime for pipeline networks without user dissatisfaction. The strategy proposed in this article leads to the wide use of PaT for pressure containment and for producing the energy needed by remote valves and sensors, as in supervisory control and data acquisition (SCADA) systems, making it easy to monitor, receive data from the urban water cycle, and make any needed changes in pipeline discharge and pressure easily and remotely. This is a clean energy-production scheme without significant environmental impacts, and it can be used in urban drinking water networks without any problem for consumers, leading to a stable and dynamic network with lower leakage and pollution.

Keywords: clean energies, pump as turbine, remote control, urban water distribution network

Procedia PDF Downloads 391
20677 Bacterial Exposure and Microbial Activity in Dental Clinics during Cleaning Procedures

Authors: Atin Adhikari, Sushma Kurella, Pratik Banerjee, Nabanita Mukherjee, Yamini M. Chandana Gollapudi, Bushra Shah

Abstract:

Different sharp instruments, drilling machines, and high-speed rotary instruments are routinely used in dental clinics during dental cleaning. These cleaning procedures therefore release many oral microorganisms, including bacteria, into clinic air and may pose significant occupational bioaerosol exposure risks for dentists, dental hygienists, patients, and dental clinic employees. Two major goals of this study were to quantify the volumetric airborne concentrations of bacteria and to assess the overall microbial activity in this type of occupational environment. The study was conducted in several dental clinics of southern Georgia, and 15 dental cleaning procedures were targeted for sampling of airborne bacteria and testing of overall microbial activity in dust settled on clinic floors. For air sampling, a Biostage viable cascade impactor was utilized, which comprises an inlet cone, a precision-drilled 400-hole impactor stage, and a base that holds an agar plate (tryptic soy agar). A high-flow Quick-Take-30 pump connected to this impactor pulls airborne microorganisms at a 28.3 L/min flow rate through the holes (jets), where they are collected on the agar surface for approximately five minutes. After sampling, the agar plates containing the samples were placed in an ice chest with blue ice, and the plates were incubated at 30±2°C for 24 to 72 h. Colonies were counted and converted to airborne concentrations (CFU/m3), followed by positive-hole corrections. For understanding the overall microbial activity on clinic floors and estimating the general cleanliness of clinic surfaces during or after dental cleaning procedures, ATP levels were determined in swabbed dust samples collected from 10 cm2 floor surfaces. The concentration of ATP may indicate both the cell viability and the metabolic status of settled microorganisms in this situation. An ATP-measuring kit was used, which utilized the standard luciferin-luciferase fluorescence reaction and a luminometer, which quantified ATP levels as relative light units (RLU). Three air and dust samples were collected during each cleaning procedure (at the beginning, during cleaning, and immediately after the procedure was completed; n = 45). Concentrations at the beginning, during, and after the dental cleaning procedures were 671±525, 917±1203, and 899±823 CFU/m3, respectively, for airborne bacteria, and 91±101, 243±129, and 139±77 RLU/sample, respectively, for ATP levels. The concentrations of bacteria were significantly higher than in typical indoor residential environments. Although an increasing trend for airborne bacteria was observed during cleaning, the data collected at the three time points were not significantly different (ANOVA: p = 0.38), probably due to the high standard deviations of the data. The ATP levels, however, demonstrated a significant difference (ANOVA: p < 0.05), indicating a significant change in microbial activity on floor surfaces during dental cleaning. The most common bacterial genera identified were Neisseria sp., Streptococcus sp., Chryseobacterium sp., Paenisporosarcina sp., and Vibrio sp., in order of frequency of occurrence. The study concluded that bacterial exposure in dental clinics could be a notable occupational biohazard, and appropriate respiratory protection for the employees is urgently needed.
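The conversion from plate counts to the CFU/m3 values reported above follows directly from the stated sampling parameters (28.3 L/min for about five minutes); the example colony count below is invented, and the positive-hole correction applied in the study is omitted for brevity.

# Converting impactor colony counts to airborne concentration:
# CFU/m^3 = colonies / sampled air volume (before the positive-hole correction).
flow_l_per_min = 28.3
minutes = 5
volume_m3 = flow_l_per_min * minutes / 1000.0      # 0.1415 m^3 of air sampled

colonies = 120                                     # example plate count
concentration = colonies / volume_m3
print(f"sampled volume = {volume_m3:.4f} m^3 -> {concentration:.0f} CFU/m^3")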

Keywords: bioaerosols, hospital hygiene, indoor air quality, occupational biohazards

Procedia PDF Downloads 309
20676 The Effect of Phase Development on Micro-Climate Change of Urban Area

Authors: Tommy Lo

Abstract:

This paper presents the changes in temperature and air ventilation of an inner urban area at different development stages from 2002 to 2012, and with the high-rise buildings to be built in 2018. The 3D simulation models ENVI-met and Autodesk Falcon were used. The results indicate that replacing old residential buildings or open space with high-rise buildings will increase the air temperature of the inner urban area; the air temperature at the pedestrian level will increase more than that at the upper levels. The temperature in the inner street will in future be higher than it was in 2002, 2008, and 2012. This is attributed to heat being trapped in the street canyons, as the air permeability at the pedestrian levels is lower. High-rise buildings with massive podiums will further reduce the air ventilation in that area. In addition, sufficient separation among buildings is essential in design. High-rise buildings aligned along the waterfront will obstruct the wind flowing into the inner urban area and accelerate the temperature increase both in the daytime and at night.

Keywords: micro-climate change, urban design, ENVI-met, construction engineering

Procedia PDF Downloads 279
20675 One-off Separation of Multiple Types of Oil-in-Water Emulsions with Surface-Engineered Graphene-Based Multilevel Structure Materials

Authors: Han Longxiang

Abstract:

In the process of treating industrial oily wastewater with complex components, the traditional treatment methods (flotation, coagulation, microwave heating, etc.) often incur high operating costs, secondary pollution, and other problems. In order to solve these problems, materials with high flux and stability applied to the separation of surfactant-stabilized emulsions have gained considerable attention in the treatment of oily wastewater. Nevertheless, four kinds of stable oil-in-water emulsion can be formed with different surfactants (surfactant-free, anionic, cationic, and non-ionic), and previously reported advanced materials can separate only one or several of them and cannot separate them effectively in one step. Herein, a facile synthesis method for graphene-based multilevel filter materials (GMFM) is presented that can efficiently separate oil-in-water emulsions stabilized with different surfactants through gravity alone. The prepared materials, with high stability over 20 cycles, show a high flux of ~5000 L m⁻² h⁻¹ with a high separation efficiency of >99.9%. GMFM can effectively separate emulsions stabilized by mixed surfactants as well as oily wastewater from factories. The results indicate that GMFM has a wide range of applications in oil-in-water emulsion separation in industry and environmental science.
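For readers unfamiliar with the two headline metrics quoted above, permeate flux and separation efficiency are defined as follows; the numbers in the example are placeholders chosen only to echo the order of magnitude reported, not the paper's measurements.

# Definitions of permeate flux (L m^-2 h^-1) and separation efficiency (%).
def flux_lmh(permeate_volume_l: float, membrane_area_m2: float, time_h: float) -> float:
    return permeate_volume_l / (membrane_area_m2 * time_h)

def separation_efficiency(c_feed_mg_l: float, c_permeate_mg_l: float) -> float:
    return (1.0 - c_permeate_mg_l / c_feed_mg_l) * 100.0

print(f"flux = {flux_lmh(0.5, 0.0001, 1.0):.0f} L m^-2 h^-1")          # placeholder inputs
print(f"efficiency = {separation_efficiency(1000.0, 0.5):.2f} %")       # placeholder inputs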

Keywords: emulsion, filtration, graphene, one-step

Procedia PDF Downloads 76
20674 The Determinants of Enterprise Risk Management: Literature Review, and Future Research

Authors: Sylvester S. Horvey, Jones Mensah

Abstract:

The growing complexities and dynamics in the business environment have led to a new approach to risk management, known as enterprise risk management (ERM). ERM is a system and an approach to managing the risks of an organization in an integrated manner in order to achieve the corporate goals and strategic objectives. Regardless of the diversities in the business environment, ERM has become an essential factor in managing individual and business risks, because ERM is believed to enhance shareholder value and firm growth. Despite the growing body of literature on ERM, research on what factors drive ERM remains limited. This study provides a comprehensive literature review of the main factors that contribute to ERM implementation. Google Scholar was the leading search engine used to identify empirical literature, and the review spanned 2000 to 2020. Articles published in Scimago-ranked and Scopus-indexed journals were examined. Thirteen firm characteristics and sixteen articles were considered for the empirical review. Most empirical studies agreed that firm size, institutional ownership, industry type, auditor type, industrial diversification, earnings volatility, stock price volatility, and the internal auditor had a positive relationship with ERM adoption, whereas firm size, institutional ownership, auditor type, and type of industry were mostly seen to be statistically significant. Other factors, such as financial leverage, profitability, asset opacity, international diversification, and firm complexity, revealed inconclusive results. The growing literature on ERM is not without limitations; hence, this study suggests that further research should examine ERM determinants within a new geographical context while considering a new and robust way of measuring ERM, rather than relying on a simple proxy (dummy) for ERM measurement. Other firm characteristics, such as organizational culture and context, corporate scandals and losses, and governance, could be considered as determinants of ERM adoption.

Keywords: enterprise risk management, determinants, ERM adoption, literature review

Procedia PDF Downloads 170
20673 Variation of Quality of Roller-Compacted Concrete Based on Consistency

Authors: C. Chhorn, S. H. Han, S. W. Lee

Abstract:

Roller-compacted concrete (RCC) has been used for decades in many pavement applications due to its economical cost and high construction speed. However, due to the lack of in-depth research and experience, this material has not been widely employed. An RCC mixture with appropriate consistency can yield a high compacted density, while high density can induce good aggregate interlock and high strength. The consistency of RCC is mainly known to define its constructability, but it has not been well specified how this property may affect other properties of a constructed RCC pavement (RCCP). This study suggests the possibility of an ideal range of consistency that may provide adequate quality of RCCP. In this research, five sections of RCCP, comprising both 13 mm and 19 mm aggregate sections, were investigated. The effects of consistency on compacted depth, strength, international roughness index (IRI), and skid resistance are examined. From this study, a new range of consistency is suggested for RCCP applications.

Keywords: compacted depth, consistency, international roughness index (IRI), pavement, roller-compacted concrete (RCC), skid resistance, strength

Procedia PDF Downloads 239
20672 The Role of Businesses in Peacebuilding in Nigeria: A Stakeholder Approach

Authors: Jamila Mohammed Makarfi, Yontem Sonmez

Abstract:

Developing countries like Nigeria have recently been affected by conflicts characterized by violence and high levels of risk and insecurity, resulting in loss of lives and livelihoods, displacement of communities, degradation of health, educational, and social infrastructure, as well as economic underdevelopment. The Nigerian government's response to most of these conflicts has mainly been reactionary, in the form of military deployments, rather than precautionary action to prevent or address the root causes of the conflicts. Several studies have shown that, at various points in a conflict, conflict regions can benefit from the resources and expertise available outside government, mainly from the private sector, through mechanisms such as corporate social responsibility (CSR). The main aim of this study is to examine the role of businesses in peacebuilding in Northern Nigeria through CSR over the last decade. The expected contributions will answer research questions such as the key business motivations to engage in peacebuilding, as well as the degree of influence exerted by various stakeholder groups on the business decision to engage. The methodology adopts a multiple case study of over 120 businesses of various sizes, ranging from small and medium to large-scale. A mixed method enabled the collection of quantitative and qualitative primary data to augment the secondary data. The results indicated that the most important business motivations to engage in peacebuilding were the negative effects of the conflict on economic stability, as well as stakeholder-driven motives. On the other hand, out of the 12 identified stakeholders, micro-, small-, and medium-scale enterprises (MSMEs) considered the chief executive officer's interest to be the most important factor, while large companies rated government and community pressure the highest. Overall, foreign stakeholders scored low on the influence chart for all business types.

Keywords: conflict, corporate social responsibility, peacebuilding, stakeholder

Procedia PDF Downloads 216
20671 Revolutionizing Oil Palm Replanting: Geospatial Terrace Design for High-precision Ground Implementation Compared to Conventional Methods

Authors: Nursuhaili Najwa Masrol, Nur Hafizah Mohammed, Nur Nadhirah Rusyda Rosnan, Vijaya Subramaniam, Sim Choon Cheak

Abstract:

Replanting in oil palm cultivation is vital, as it enables the introduction of new planting materials and provides an opportunity to improve the roads, drainage, terrace design, and planting density. Oil palm replanting is fundamentally necessary every 25 years. The adoption of a digital replanting blueprint is imperative, as it can assist the Malaysian oil palm industry in addressing challenges such as labour shortages and limited expertise related to replanting tasks. Effective replanting planning should commence at least 6 months prior to the actual replanting process. Therefore, this study helps to plan and design the replanting blueprint with high-precision translation on the ground. With the advancement of geospatial technology, it is now feasible to engage in thoroughly researched planning, which can help maximize the potential yield. A blueprint designed before replanting enhances management's ability to optimize the planting program, address manpower issues, and even increase productivity. In terrace planting blueprints, geographic tools have been utilized to design the roads, drainage, terraces, and planting points based on the ARM standards. These designs are mapped with location information and undergo statistical analysis. The geospatial approach is essential in precision agriculture and in ensuring an accurate translation of the design to the ground by implementing high-accuracy technologies. In this study, geospatial and remote sensing technologies played a vital role. LiDAR data were employed to derive the Digital Elevation Model (DEM), enabling the precise selection of terraces, while ortho imagery was used for validation purposes. Throughout the design process, Geographical Information System (GIS) tools were used extensively. To assess the reliability of the design on the ground compared with the current conventional method, high-precision GPS instruments such as the EOS Arrow Gold and the HIPER VR GNSS were used, both offering accuracy levels between 0.3 cm and 0.5 cm. A nearest-distance analysis was generated to compare the design with the actual planting on the ground. The analysis could not be applied to the roads because of discrepancies between the actual roads and the blueprint design, which resulted in minimal variance. In contrast, the terraces closely adhered to the GPS markings, with the largest variance being less than 0.5 m from the terraces actually constructed. Considering the required slope for terrace planting, which must be greater than 6 degrees, the study found that approximately 65% of the terracing was constructed on a 12-degree slope, while over 50% of the terracing was constructed on slopes exceeding the minimum. Utilizing a replanting blueprint is a promising strategy for optimizing land utilization in agriculture. This approach harnesses technology and meticulous planning to yield advantages including increased efficiency, enhanced sustainability, and cost reduction. Practical implementation of this technique can lead to tangible and significant improvements in the agricultural sector. To boost efficiency further, future initiatives will require more sophisticated techniques and the incorporation of precision GPS devices for upcoming blueprint replanting projects, alongside a strategic progression aimed at guaranteeing the precision of both the blueprint design stage and its subsequent implementation in the field. Looking ahead, automating digital blueprints is necessary to reduce time, workforce, and costs in commercial production.
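The terrace-suitability check referred to above (slope greater than 6 degrees, derived from the LiDAR DEM) can be sketched as follows; the tiny DEM grid and 1 m cell size are invented for illustration, and the full workflow in the study uses GIS tools rather than this toy raster.

# Sketch of a slope check on a DEM grid: compute slope in degrees and flag cells
# satisfying the > 6 degree terracing requirement.  Values are illustrative only.
import numpy as np

dem = np.array([[100.0, 100.5, 101.2],
                [100.8, 101.6, 102.5],
                [101.9, 102.9, 104.1]])     # elevations in metres (assumed)
cell_size = 1.0                             # metres per cell (assumed)

dz_dy, dz_dx = np.gradient(dem, cell_size)
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

terrace_candidates = slope_deg > 6.0        # terracing is applied above 6 degrees
print(np.round(slope_deg, 1))
print(terrace_candidates)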

Keywords: replanting, geospatial, precision agriculture, blueprint

Procedia PDF Downloads 79
20670 Towards the Rapid Synthesis of High-Quality Monolayer Continuous Film of Graphene on High Surface Free Energy Existing Plasma Modified Cu Foil

Authors: Maddumage Don Sandeepa Lakshad Wimalananda, Jae-Kwan Kim, Ji-Myon Lee

Abstract:

Graphene is an extraordinary 2D material that shows superior electrical, optical, and mechanical properties for applications such as transparent contacts. Further, the chemical vapor deposition (CVD) technique facilitates the synthesis of large-area, transferable graphene. This abstract describes the use of a Cu foil with high surface free energy (SFE) and a high density of nanoscale surface kinks (a rough surface) for CVD graphene growth, which is the opposite of the modern use of smooth catalytic surfaces for high-quality graphene growth; however, the controllable rough morphology opens a new era of fast synthesis (less than 50 s, with a short annealing process) of graphene as a continuous film, compared with the conventional, longer process (30 min growth). The experiments showed that the high-SFE condition and the surface kinks on the Cu(100) crystal plane facilitated the synthesis of graphene with a highly monolayer and continuous nature, because they promote the adsorption of C species at high concentration, which enables faster nucleation and growth of graphene. The fast nucleation and growth lower the diffusion of C atoms to the Cu-graphene interface, resulting in no or negligible formation of bilayer patches. A high-energy (500 W) Ar plasma treatment (inductively coupled plasma) was used to form the rough, high-SFE (54.92 mJ m⁻²) Cu foil. This surface was used to grow graphene by the CVD technique at 1000 °C for 50 s. The introduced kink-like, high-SFE points on the Cu(100) crystal plane facilitated faster nucleation of graphene with a high monolayer ratio (I2D/IG of 2.42) compared to other, smoother, low-SFE Cu surfaces, such as the smoother surface prepared by the redeposition of evaporated Cu atoms during annealing (RRMS of 13.3 nm). Even though the high-SFE condition was favorable for synthesizing graphene with a monolayer and continuous nature, it failed to maintain a clean (the surface contained amorphous C clusters) and defect-free condition (ID/IG of 0.46) because of the high SFE of the Cu foil at the graphene growth stage. A post-annealing process was used to heal the film and overcome the previously mentioned problems. Different CVD atmospheres, such as CH4 and H2, were used; a negligible change in graphene nature (number of layers and continuity) was observed, but there was a significant difference in graphene quality, because the ID/IG ratio of the graphene was reduced to 0.21 after the post-annealing with H2 gas. In addition to the change in graphene defectiveness, the FE-SEM images showed a reduction of C-cluster contamination on the surface. High-SFE conditions are favorable for forming graphene as a monolayer, continuous film, but they fail to provide defect-free graphene. Further, a plasma-modified, high-SFE surface can be used to synthesize graphene within 50 s, and a post-annealing process can be used to reduce the defectiveness.

Keywords: chemical vapor deposition, graphene, morphology, plasma, surface free energy

Procedia PDF Downloads 240
20669 Method and Apparatus for Optimized Job Scheduling in the High-Performance Computing Cloud Environment

Authors: Subodh Kumar, Amit Varde

Abstract:

Typical on-premises high-performance computing (HPC) environments consist of a fixed number and a fixed set of computing hardware. During the design of the HPC environment, the hardware components, including but not limited to CPU, memory, GPU, and networking, are carefully chosen from select vendors for optimal performance. The high capital cost of building the environment is a prime factor influencing its design. A class of software called "job schedulers" is critical to maximizing these resources and running multiple workloads to extract the maximum value from the high capital cost. In principle, schedulers work by preventing workloads and users from monopolizing the finite hardware resources, by queuing the jobs of a workload. A cloud-based HPC environment does not have the limitations of fixed hardware resources (in either type or quantity). In theory, users and workloads could spin up any number and type of hardware resource. This paper discusses the limitations of using traditional scheduling algorithms for cloud-based HPC workloads. It proposes a new set of features, called "HPC optimizers," for maximizing the benefits of the elasticity and scalability of the cloud, with the goal of cost-performance optimization of the workload.
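The paper's "HPC optimizers" are not specified in the abstract; as one hedged illustration of what cost-performance-aware scheduling in an elastic cloud can mean, the sketch below picks the cheapest instance shape that still meets a job's deadline instead of queuing the job behind a fixed pool. The instance names, prices, speeds, and the linear-scaling assumption are invented placeholders.

# Sketch of a cost-performance heuristic for elastic HPC scheduling (illustrative).
from dataclasses import dataclass

@dataclass
class InstanceType:
    name: str
    cores: int
    price_per_hour: float

def pick_instance(core_hours: float, deadline_h: float, catalog: list) -> tuple:
    best = None
    for inst in catalog:
        runtime_h = core_hours / inst.cores           # idealized linear scaling (assumed)
        if runtime_h > deadline_h:
            continue                                  # cannot meet the deadline
        cost = runtime_h * inst.price_per_hour
        if best is None or cost < best[1]:
            best = (inst.name, cost, runtime_h)
    return best

catalog = [InstanceType("small-16", 16, 0.80),
           InstanceType("medium-64", 64, 3.00),
           InstanceType("large-128", 128, 6.50)]
print(pick_instance(core_hours=256.0, deadline_h=8.0, catalog=catalog))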

Keywords: high performance computing, HPC, cloud computing, optimization, schedulers

Procedia PDF Downloads 88
20668 Efficacy of Self-Assessment in Written Production among High School Students

Authors: Yoko Suganuma Oi

Abstract:

The purpose of the present study is to examine the efficacy of high school students' self-assessment of their written production. It explores the following two research questions: 1) How does the topic development of their written production improve after student self-assessment and teacher feedback? 2) Does the consistency between student self-assessment and teacher assessment develop after student self-assessment and teacher feedback? The data came from the written production of 82 Japanese high school students aged 16 to 18, an American English teacher, and a Japanese English teacher. Students were asked to write English compositions of about 150 words in thirty minutes without using dictionaries. The task was conducted twice, at an interval of two months. Students assessed their own compositions by themselves, and the teachers assessed the students' compositions using the same assessment sheet. The results showed that both the teachers and the students rated the second compositions higher than the first. However, the consistency between student and teacher assessments did not develop for coherence.

Keywords: feedback, self-assessment, topic development, high school students

Procedia PDF Downloads 501
20667 The Checkout and Separation of Environmental Hazards of the Range Overlooking the Meshkin City

Authors: F. Esfandyari Darabad, Z. Samadi

Abstract:

Natural environments have always been affected by one of the most important natural hazards: mass movements, which cause instability. Identifying and delimiting unstable regions, so as to detect and determine the risk posed by environmental factors, is an important issue in the development of mountainous areas. In this study, the northwestern hillsides of Sabalan overlooking Meshkin city and their surroundings were delimited in order to analyze slope processes such as landslides and debris flows on the basis of structural and geomorphological conditions, using GIS. Because of the steep hillsides, the elevation of the region, and poorly located roads that destabilize the slopes, the area is in an unfavorable situation. The study was carried out to identify the factors driving slope movements and to delineate, by zoning in GIS, the areas with high potential for such movements. The results showed that the most common slope movements in the area are debris flows, rockfalls, and landslides. For each type of mass movement, the effective factors were identified, a weight was assigned to each factor, a weight map of each factor was produced, and finally a risk zoning map for slope movements was prepared. Based on the resulting zoning map, the study area was classified into four zones of very high, high, medium, and low risk; the very high and high risk areas lie near the road, along the Khyav river, and in the mountainous district.
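
The weighted-overlay step described above (assigning a weight to each factor, producing weight maps, and combining them into a four-zone risk map) can be summarized with the following sketch; the factor names, weights, and class thresholds are placeholders rather than the values used in the study.

    # Illustrative weighted-overlay sketch (placeholder weights and thresholds):
    # combine normalised factor rasters into a weighted score, then reclassify
    # the score into four risk zones (1 = low ... 4 = very high).
    import numpy as np

    def risk_zoning(factors, weights, thresholds=(0.25, 0.5, 0.75)):
        """factors: name -> 2D array scaled to [0, 1]; weights: name -> weight (sums to 1)."""
        score = sum(weights[name] * factors[name] for name in factors)
        return np.digitize(score, thresholds) + 1

    rng = np.random.default_rng(0)
    shape = (100, 100)
    factors = {"slope": rng.random(shape), "elevation": rng.random(shape),
               "dist_to_river": rng.random(shape)}
    weights = {"slope": 0.5, "elevation": 0.3, "dist_to_river": 0.2}
    zones = risk_zoning(factors, weights)
    print(np.bincount(zones.ravel(), minlength=5)[1:])  # cell counts per risk zone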

Keywords: debris flow, environmental hazards, GIS, landslide

Procedia PDF Downloads 522
20666 The Gravitational Impact of the Sun and the Moon on Heavy Mineral Deposits and Dust Particles in Low Gravity Regions of the Earth

Authors: T. B. Karu Jayasundara

Abstract:

The Earth’s gravity is not uniform: satellite imagery of the Earth’s surface from NASA reveals a number of gravity anomaly regions across the globe. As the Moon orbits the Earth, its gravity has a major physical influence on several regions of the Earth, a change most visibly expressed in the tides, which raise and lower sea levels in coastal regions. During high tide, the Moon’s gravitational pull reduces the total gravitational intensity at the Earth’s surface, and this reduction is even greater in the Earth’s low-gravity regions. The reduced gravity helps keep suspended particles, such as dust in the atmosphere or sand grains in seawater, in suspension for longer, and dramatic differences in floating dust can be seen in low-gravity regions compared with other regions. These phenomena can be demonstrated experimentally; the experiments must be carried out in high- and low-gravity regions of the Earth during high and low tide so that the final results can be compared. One such experiment uses a water-filled cylinder about 80 cm tall, a few particles of equal density and diameter (about 1 mm), and a stopwatch. The particles were dropped from the water surface in the cylinder, and the time taken for them to reach the bottom was measured with the stopwatch; the times of high and low tide were obtained from tide charts published by the regional government authorities. The particle drop times taken at high and low tide demonstrate the concept: the results show that the particle settling time is shorter at low tide and longer at high tide. Dust particles in air can be collected on cellulose ester membrane filters using a vacuum pump, the filters can be prepared as slides according to the NOHSC method, and the dust particles on the slides can be counted with a phase-contrast microscope. These results show that the dust concentration is high at high tide and low at low tide. As a result of the high tides, heavy minerals accumulate in placer deposits and dust particles remain in the atmosphere for longer in low-gravity regions. These conditions are most pronounced in the lowest-gravity region of the Earth, mainly over India, Sri Lanka, and the central Indian Ocean: the largest heavy-mineral placer deposits are found in the coastal regions of India and Sri Lanka, and heavy dust loads occur in the atmosphere of India, particularly in the Delhi region.
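
The cylinder experiment above is essentially a settling-time measurement, and the sketch below illustrates how settling time depends on the effective gravitational acceleration under a simple terminal-velocity model; the grain and fluid densities, drag coefficient, and the size of the gravity change are illustrative assumptions, not measurements from the abstract.

    # Back-of-the-envelope sketch (assumed quartz-in-water values, Newton drag regime):
    # settling time over a fixed height at terminal velocity, and its sensitivity to g.
    import math

    def settling_time(height_m, d_m, rho_p, rho_f, g, cd=0.44):
        """Fall time at terminal velocity v_t = sqrt(4 g d (rho_p - rho_f) / (3 cd rho_f))."""
        v_t = math.sqrt(4 * g * d_m * (rho_p - rho_f) / (3 * cd * rho_f))
        return height_m / v_t

    g0 = 9.81
    t_normal = settling_time(0.80, 1e-3, 2650, 1000, g0)
    t_reduced = settling_time(0.80, 1e-3, 2650, 1000, g0 * (1 - 1e-3))  # hypothetical 0.1% weaker gravity
    print(f"{t_normal:.3f} s vs {t_reduced:.3f} s")  # settling time scales as 1/sqrt(g) in this model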

Keywords: gravity, minerals, tides, moon, coastal, atmosphere

Procedia PDF Downloads 126
20665 A Study for Area-level Mosquito Abundance Prediction by Using Supervised Machine Learning Point-level Predictor

Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes

Abstract:

In the literature, data-driven approaches to mosquito abundance prediction rely on supervised machine learning models trained with historical in-situ measurements. The drawback of this approach is that once a model is trained on point-level measurements (specific x, y coordinates), its predictions also refer to the point level. These point-level predictions limit the applicability of such solutions, since many early-warning and mitigation applications need predictions at the area level, such as a municipality or a village. In this study, we apply a data-driven predictive model that relies on publicly available satellite Earth Observation and geospatial data and is trained with historical point-level in-situ measurements of mosquito abundance, and we propose a methodology for turning the point-level predictive model into a broader area-level prediction. Our methodology relies on random spatial sampling of the area of interest (similar to a Poisson hard-core process), obtaining the EO and geomorphological information for each sample, making a point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from point-level to area-level predictions and analyze which parameters have a positive or negative impact on it. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area by relying on point-level predictions, and to provide qualitative insights into the expected performance of the area-level prediction. We applied the methodology to historical data on Culex pipiens in two areas of interest (the Veneto region of Italy and Central Macedonia in Greece), and in both cases the results were consistent: the mean mosquito abundance of a given area can be estimated with accuracy similar to, and sometimes better than, that of the point-level predictor. The density of the samples used to represent an area has a positive effect on performance, whereas the raw number of sampling points, without the size of the area, is not informative about performance. Additionally, the distance between the sampling points and the real in-situ measurements used for training did not strongly affect performance.
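
The point-to-area aggregation described above can be summarized in a short sketch: sample points at random inside the area of interest with a minimum spacing (in the spirit of a Poisson hard-core process), run the point-level predictor on each sample's features, and average the results; the bounding-box sampling, the feature extractor, and the predictor used here are stand-ins, not the authors' model.

    # Sketch of point-to-area aggregation (stand-in predictor and feature extractor):
    # hard-core random sampling inside a bounding box, point-wise prediction, averaging.
    import random

    def area_level_abundance(area_bbox, point_predictor, get_features,
                             n_samples=100, min_dist_deg=0.005, max_attempts=10000):
        (lon_min, lat_min, lon_max, lat_max) = area_bbox
        points, attempts = [], 0
        while len(points) < n_samples and attempts < max_attempts:
            attempts += 1
            p = (random.uniform(lon_min, lon_max), random.uniform(lat_min, lat_max))
            # keep the point only if it respects the minimum spacing (hard-core condition)
            if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= min_dist_deg ** 2 for q in points):
                points.append(p)
        preds = [point_predictor(get_features(lon, lat)) for lon, lat in points]
        return sum(preds) / len(preds)

    # Example with dummy stand-ins for the predictor and the EO feature extractor:
    est = area_level_abundance((22.9, 40.4, 23.1, 40.6),
                               point_predictor=lambda feats: 10.0 * feats["ndvi"],
                               get_features=lambda lon, lat: {"ndvi": 0.5})
    print(est)  # mean of the per-point predictions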

Keywords: mosquito abundance, supervised machine learning, culex pipiens, spatial sampling, west nile virus, earth observation data

Procedia PDF Downloads 145
20664 Low Power, Highly Linear, Wideband LNA in Wireless SOC

Authors: Amir Mahdavi

Abstract:

In this paper, a highly linear CMOS low noise amplifier (LNA) for ultra-wideband (UWB) applications is proposed. The proposed LNA uses a linearization technique to improve the second- and third-order intercept points, in particular the input third-order intercept point (IIP3). Linearity is improved by removing the common-mode part of the intermodulation components from the cascade topology current and by optimizing the bias current using symmetrical and asymmetrical biasing circuits. Simulation results show a maximum gain of 6.9 dB and a noise figure of 3.03-4.1 dB over the 3.1–10.6 GHz band. The power consumption of the LNA core is 2.64 mW and the IIP3 is +4.9 dBm. Wideband input impedance matching is obtained by employing a degenerating inductor (|S11| < -9.1 dB). The proposed UWB LNA is implemented in a 0.18 μm CMOS technology.
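
To make the reported +4.9 dBm figure concrete, the snippet below applies the standard two-tone estimate of the input third-order intercept point, IIP3 ≈ Pin + (Pfund − PIM3)/2; the input and output power levels used are invented for illustration and are not simulation results from this work.

    # Standard two-tone IIP3 estimate; the example power levels are illustrative only.
    def iip3_dbm(p_in_dbm, p_fund_out_dbm, p_im3_out_dbm):
        """IIP3 (dBm) = Pin + (P_fundamental - P_IM3) / 2, powers in dBm at the same input level."""
        return p_in_dbm + (p_fund_out_dbm - p_im3_out_dbm) / 2.0

    print(iip3_dbm(-20.0, -13.1, -62.9))  # -> 4.9 dBm for these illustrative levels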

Keywords: highly linear LNA, low-power LNA, optimal bias techniques

Procedia PDF Downloads 277
20663 Evaluation of Iron Oxide-Functionalized Multiwall Carbon Nanotube Self-Standing Electrode for Symmetric Supercapacitor Application

Authors: B. V. Bhaskara Rao, Rodrigo Espinoza

Abstract:

The rapid development of renewable energy sources has drawn great attention to energy storage devices, especially supercapacitors, because of their high power density and rate performance. This work focuses on Fe₃O₄ nanoparticles synthesized by reverse co-precipitation and MWCNTs functionalized with –COOH groups by acid treatment. The results show that the optimized 25 wt% Fe₃O₄@FMWCNT electrode delivers a high specific capacitance of 100 mF/cm² at 1 mA/cm², whereas the 15 wt% Fe₃O₄@FMWCNT electrode shows high stability (80% capacity retention) over 5000 cycles. The electrolyte used in the coin cell is LiPF6, and the electrode thickness is 30 microns. The electrochemical studies of the optimized Fe₃O₄@FMWCNT bucky-paper coin cells suggest that 25 wt% Fe₃O₄@FMWCNT could be a good candidate for high-capacity supercapacitor devices, and it could be further tested in flexible and planar supercapacitor devices with gel electrolytes.
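
For context, an areal capacitance such as the 100 mF/cm² quoted above is typically computed from a galvanostatic discharge curve as C = I·Δt/(ΔV·A); the discharge time and voltage window in the sketch below are illustrative values, not measurements from this work.

    # Areal capacitance from a galvanostatic discharge (illustrative numbers only):
    # C_areal [mF/cm2] = I [mA] * dt [s] / (dV [V] * A [cm2]).
    def areal_capacitance_mf_per_cm2(current_ma, discharge_time_s, voltage_window_v, area_cm2):
        return current_ma * discharge_time_s / (voltage_window_v * area_cm2)

    # A 1 cm2 electrode discharged at 1 mA over a 1 V window in 100 s gives 100 mF/cm2.
    print(areal_capacitance_mf_per_cm2(1.0, 100.0, 1.0, 1.0))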

Keywords: self-standing electrode, Fe₃O₄@FMWCNT, supercapacitor, symmetric coin-cell

Procedia PDF Downloads 153