Search results for: grid connected PV Array
1422 N-Heptane as Model Molecule for Cracking Catalyst Evaluation to Improve the Yield of Ethylene and Propylene
Authors: Tony K. Joseph, Balasubramanian Vathilingam, Stephane Morin
Abstract:
Currently, refiners around the world are focused on improving the yield of light olefins (propylene and ethylene), as both are very prominent raw materials for producing a wide spectrum of polymeric materials such as polyethylene and polypropylene. Hence, it is desirable to increase the yield of light olefins via selective cracking of heavy oil fractions. In this study, zeolite grown on SiC was used as the catalyst for the model cracking reaction of n-heptane. The catalytic cracking of n-heptane was performed in a fixed bed reactor (12 mm i.d.) at three different temperatures (425, 450 and 475 °C) and at atmospheric pressure. A carrier gas (N₂) was mixed with n-heptane at a ratio of 90:10 (N₂:n-heptane), and the gaseous mixture was introduced into the fixed bed reactor. Various reactant flow rates were tested to increase the yield of ethylene and propylene. For comparison, a commercial zeolite was also tested in addition to the zeolite on SiC. The products were analyzed using an Agilent gas chromatograph (GC-9860) equipped with a flame ionization detector (FID). The GC was connected online with the reactor, and all the cracking tests were successfully reproduced. The entire catalytic evaluation results will be presented during the conference.
Keywords: cracking, catalyst, evaluation, ethylene, heptane, propylene
Procedia PDF Downloads 136
1421 The Problems of Current Earth Coordinate System for Earthquake Forecasting Using Single Layer Hierarchical Graph Neuron
Authors: Benny Benyamin Nasution, Rahmat Widia Sembiring, Abdul Rahman Dalimunthe, Nursiah Mustari, Nisfan Bahri, Berta br Ginting, Riadil Akhir Lubis, Rita Tavip Megawati, Indri Dithisari
Abstract:
The earth coordinate system is an important part of any attempt at earthquake forecasting, such as the one using the Single Layer Hierarchical Graph Neuron (SLHGN). However, a number of problems need to be worked out before the coordinate system can be utilized by the forecaster. One example is that the SLHGN requires the focus area of an earthquake to be constructed in a grid-like form. In fact, within the current earth coordinate system, the same longitude difference produces different ground distances, as can be observed by comparing distances along the Equator with distances at the poles. To deal with this problem, a coordinate system has been developed so that it can be used to support the ongoing earthquake forecasting using the SLHGN. Two important features have been developed in this system: 1) each location is represented not by two values (longitude and latitude) but by a single value, and 2) the conversion of the earth coordinate system to the x-y Cartesian system requires no angular formulas and is therefore fast. The accuracy and the performance have not been measured yet, since earthquake data are difficult to obtain. However, the characteristics of the SLHGN results are very promising.
Keywords: hierarchical graph neuron, multidimensional hierarchical graph neuron, single layer hierarchical graph neuron, natural disaster forecasting, earthquake forecasting, earth coordinate system
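To illustrate the distortion referred to above, a minimal sketch (assuming a spherical Earth of mean radius 6371 km, a value not taken from the abstract) showing that the same longitude difference corresponds to very different ground distances at different latitudes:

```python
import math

EARTH_RADIUS_KM = 6371.0  # assumed mean spherical radius

def km_per_degree_longitude(latitude_deg: float) -> float:
    """Ground distance spanned by one degree of longitude at the given latitude."""
    return (math.pi / 180.0) * EARTH_RADIUS_KM * math.cos(math.radians(latitude_deg))

for lat in (0, 45, 89):
    print(f"latitude {lat:2d} deg: {km_per_degree_longitude(lat):7.2f} km per degree of longitude")
# ~111 km at the Equator, ~79 km at 45 degrees, ~2 km near the poles
```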
Procedia PDF Downloads 216
1420 Improvement of Brain Tumors Detection Using Markers and Boundaries Transform
Authors: Yousif Mohamed Y. Abdallah, Mommen A. Alkhir, Amel S. Algaddal
Abstract:
This was an experimental study of the segmentation of the brain in MRI images using edge detection and morphology filters. Each brain MRI film was scanned using a digitizer scanner and then processed with an image processing program (MATLAB), in which the segmentation was studied. The scanned image was saved in the TIFF file format to preserve the quality of the image. Brain tissue can be easily detected in an MRI image if the object has sufficient contrast from the background. We used edge detection and basic morphology tools to detect the brain. The segmentation steps using detection and morphology filters were: image reading, detection of the entire brain, dilation of the image, filling of interior gaps inside the image, removal of connected objects on the borders, and smoothing of the object (brain). The study showed that an alternative method for displaying the segmented object is to place an outline around the segmented brain. These filter approaches can help remove unwanted background information and increase the diagnostic information of brain MRI.
Keywords: improvement, brain, matlab, markers, boundaries
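The pipeline listed above maps onto standard image-processing calls; the following is a minimal sketch using scikit-image and SciPy as stand-ins for the MATLAB toolbox functions the authors used (the file name and structuring-element sizes are illustrative assumptions):

```python
import numpy as np
from skimage import io, filters, morphology, segmentation
from scipy import ndimage as ndi

# 1. Image reading (file name is a placeholder)
brain = io.imread("brain_slice.tif", as_gray=True)

# 2. Detect the entire brain: gradient magnitude + threshold
edges = filters.sobel(brain)
mask = edges > filters.threshold_otsu(edges)

# 3. Dilation to close the object outline
mask = morphology.binary_dilation(mask, morphology.diamond(3))

# 4. Fill interior gaps inside the object
mask = ndi.binary_fill_holes(mask)

# 5. Remove connected objects touching the image borders
mask = segmentation.clear_border(mask)

# 6. Smooth the object (erode back the earlier dilation)
mask = morphology.binary_erosion(mask, morphology.diamond(3))

# Display option mentioned in the abstract: outline the segmented brain
outline = segmentation.find_boundaries(mask, mode="outer")
overlay = np.where(outline, brain.max(), brain)
```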
Procedia PDF Downloads 516
1419 Predictive Analytics for Theory Building
Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim
Abstract:
Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) for a single person or unit. It applies empirical methods from statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed from causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use our predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big data predictive analytics platform™ based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently. For the demonstration, a total of 13,254 metabolic syndrome training records were plugged into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, associated, for example, with sociodemographic factors, habits, and activities. Some are intentionally included to gain predictive analytics insights on variable selection, such as cancer examination, house type, and vaccination. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules are then validated with an external testing dataset of 4,090 observations. The results, as a form of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. On the other hand, a set of rules (many estimated equations from a statistical perspective) in this study may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, i.e., theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the generated hypotheses are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods that utilize a part of the observations, such as bootstrap resampling with an appropriate sample size.
Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building
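A minimal sketch of the co-occurrence idea described above: items in each record are fully connected, pair frequencies are counted, and a simple "surprise" score (observed vs. expected co-occurrence under independence) ranks the pairs. The toy records and scoring below are illustrative assumptions, not the authors' platform:

```python
from collections import Counter
from itertools import combinations

# Toy transactions (each row = items observed together); illustrative only
records = [
    {"high_waist", "smoker", "low_activity"},
    {"high_waist", "low_activity"},
    {"smoker", "vaccinated"},
    {"high_waist", "smoker", "low_activity"},
]

n = len(records)
item_freq = Counter(item for rec in records for item in rec)
pair_freq = Counter(frozenset(p) for rec in records for p in combinations(sorted(rec), 2))

def surprise(pair):
    """Observed co-occurrence count vs. count expected under independence."""
    a, b = tuple(pair)
    expected = item_freq[a] * item_freq[b] / n
    return pair_freq[pair] / expected

for pair, freq in pair_freq.most_common():
    print(sorted(pair), "freq =", freq, "surprise = %.2f" % surprise(pair))
```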
Procedia PDF Downloads 276
1418 Triangulations via Iterated Largest Angle Bisection
Authors: Yeonjune Kang
Abstract:
A triangulation of a planar region is a partition of that region into triangles. In the finite element method, triangulations are often used as the grid underlying a computation. In order to be suitable as a finite element mesh, a triangulation must have well-shaped triangles, according to criteria that depend on the details of the particular problem. For instance, most methods require that all triangles be small and as close to the equilateral shape as possible. Stated differently, one wants to avoid having either thin or flat triangles in the triangulation. There are many triangulation procedures, a particular one being the longest edge bisection algorithm described below. Starting with a given triangle, locate the midpoint of the longest edge and join it to the opposite vertex of the triangle. Two smaller triangles are formed; apply the same bisection procedure to each of these triangles. Continuing in this manner, after n steps one obtains a triangulation of the initial triangle into 2ⁿ smaller triangles. The longest edge algorithm was first considered in the late 1970s. It was shown by various authors that this triangulation has the desirable properties for the finite element method: independently of the number of iterations, the angles of these triangles cannot get too small; moreover, the size of the triangles decays exponentially. In the present paper we consider a related triangulation algorithm, which we refer to as the largest angle bisection procedure. As the name suggests, rather than bisecting the longest edge, at each step we bisect the largest angle. We study the properties of the resulting triangulation and prove that, while the general behavior resembles the one in the longest edge bisection algorithm, there are several notable differences as well.
Keywords: angle bisectors, geometry, triangulation, applied mathematics
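A minimal sketch of the longest edge bisection step described above (the largest angle variant studied in the paper would only change the selection rule); the vertex coordinates and recursion depth are illustrative:

```python
import math

def longest_edge_bisection(tri, depth):
    """Recursively bisect the longest edge of a triangle `depth` times.

    `tri` is a tuple of three (x, y) vertices; returns the 2**depth resulting triangles.
    """
    if depth == 0:
        return [tri]
    a, b, c = tri
    # Reorder so that (p, q) is the longest edge and `opposite` the remaining vertex
    p, q, opposite = max(((a, b, c), (b, c, a), (c, a, b)),
                         key=lambda e: math.dist(e[0], e[1]))
    mid = ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
    # Join the midpoint of the longest edge to the opposite vertex
    children = [(p, mid, opposite), (mid, q, opposite)]
    return [t for child in children for t in longest_edge_bisection(child, depth - 1)]

tris = longest_edge_bisection(((0.0, 0.0), (1.0, 0.0), (0.2, 0.8)), depth=3)
print(len(tris))  # 8 = 2**3 triangles
```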
Procedia PDF Downloads 401
1417 Convergence Analysis of Training Two-Hidden-Layer Partially Over-Parameterized ReLU Networks via Gradient Descent
Authors: Zhifeng Kong
Abstract:
Over-parameterized neural networks have attracted a great deal of attention in recent deep learning theory research, as they challenge the classic perspective on over-fitting when the model has excessive parameters and have gained empirical success in various settings. While a number of theoretical works have been presented to demystify the properties of such models, their convergence properties are still far from being thoroughly understood. In this work, we study the convergence properties of training two-hidden-layer partially over-parameterized fully connected networks with the Rectified Linear Unit activation via gradient descent. To our knowledge, this is the first theoretical work to understand the convergence properties of deep over-parameterized networks without the equally-wide-hidden-layer assumption and other unrealistic assumptions. We provide a probabilistic lower bound on the widths of the hidden layers and prove a linear convergence rate for gradient descent. We also conduct experiments on synthetic and real-world datasets to validate our theory.
Keywords: over-parameterization, rectified linear units ReLU, convergence, gradient descent, neural networks
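A toy sketch of the training setup studied (two hidden ReLU layers trained with plain full-batch gradient descent); the widths, learning rate, and synthetic data are illustrative assumptions and carry no information about the paper's actual bounds:

```python
import torch

torch.manual_seed(0)
X = torch.randn(256, 10)   # synthetic inputs
y = torch.randn(256, 1)    # synthetic targets

# Two hidden ReLU layers; the widths are arbitrary "over-parameterized" choices
model = torch.nn.Sequential(
    torch.nn.Linear(10, 512), torch.nn.ReLU(),
    torch.nn.Linear(512, 512), torch.nn.ReLU(),
    torch.nn.Linear(512, 1),
)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)  # full batch => plain gradient descent

for step in range(500):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
    if step % 100 == 0:
        print(step, loss.item())
```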
Procedia PDF Downloads 142
1416 The Friction of Oil Contaminated Granular Soils; Experimental Study
Authors: Miron A., Tadmor R., Pinkert S.
Abstract:
Soil contamination is a pressing environmental concern, drawing considerable focus due to its adverse ecological and health outcomes and the frequent occurrence of contamination incidents in recent years. The interaction between the oil pollutant and the host soil can alter the mechanical properties of the soil in a manner that can crucially affect engineering challenges associated with the stability of soil systems. The geotechnical investigation of contaminated soils has gained momentum since the Gulf War in the 1990s, when a massive amount of oil was spilled into the ocean. Over recent years, various types of soil contamination have been studied to understand the impact of pollution type, uncovering the mechanical complexity that arises not just from the pollutant type but also from the properties of the host soil and the interplay between them. This complexity is associated with diametrically opposite effects in different soil types. For instance, while certain oils may enhance the frictional properties of cohesive soils, they can reduce the friction in granular soils. This striking difference can be attributed to the different mechanisms at play: physico-chemical interactions predominate in the former case, whereas lubrication effects are more significant in the latter. This study introduces an empirical law designed to quantify the mechanical effect of oil contamination in granular soils, factoring in the properties of both the contaminating oil and the host soil. This law is derived from comprehensive experimental research that spans a wide array of oil types and soils with unique configurations and morphologies. By integrating these diverse data points, our law facilitates accurate predictions of how oil contamination modifies the frictional characteristics of general granular soils.
Keywords: contaminated soils, lubrication, friction, granular media
Procedia PDF Downloads 55
1415 Review for Mechanical Tests of Corner Joints on Wooden Windows and Effects to the Stiffness
Authors: Milan Podlena, Stepan Hysek, Jiri Prochazka, Martin Bohm, Jan Bomba
Abstract:
Corner joints are the weakest part of windows, where the members are connected together. As the dimensions of windows have become bigger, the strength requirements for corner joints have increased as well. Therefore, the aim of this study was to test samples of corner joints of wooden windows. The moisture content of the test specimens was stabilized in a climate chamber. After conditioning, the test specimens were loaded under laboratory conditions on a universal testing machine, and the failure load was measured. The data were recalculated using goniometric, bending moment, and stiffness equations into stiffness coefficients, and the bending moments were investigated. The results showed a difference between the mortise-and-tenon joint and the dowel joint. This difference was explained by the varied adhesive bond area, which is also related to the dimensions of the dowels (diameter and length). The bending moments and stiffness were (apart from the type of corner joint) also affected by the type of adhesive used, the type of dowels, and the wood species.
Keywords: corner joint, wooden window, bending moment, stiffness
Procedia PDF Downloads 218
1414 Efficient Numerical Simulation for LDC
Authors: Badr Alkahtani
Abstract:
In this poster, numerical solutions of the two-dimensional and three-dimensional lid-driven cavity are presented by solving the steady Navier-Stokes equations at high Reynolds numbers, where the computation becomes difficult. The lid-driven cavity problem is one in which a fluid is contained in a cube and the upper wall is moving. In two dimensions, we use the streamfunction-vorticity formulation to solve the problem in a square domain. The problem is discretized in the x and y directions with a spectral collocation method. The problem is coded in the MATLAB programming environment. Solutions at high Reynolds numbers are obtained up to Re = 20000 on a fine grid of 131 × 131. Also in this presentation, the numerical solutions for the three-dimensional lid-driven cavity problem are obtained by solving the velocity-vorticity formulation of the Navier-Stokes equations (which is the first time that this has been simulated with special boundary conditions) for various Reynolds numbers. A spectral collocation method is employed to discretize the y and z directions, and a finite difference method is used to discretize the x direction. Numerical solutions are obtained for Reynolds numbers up to 200. The work presented here shows the efficiency of the methods used to simulate the physical problem, where accurate simulations of the lid-driven cavity are obtained at the high Reynolds numbers mentioned above. The results for the two-dimensional problem are far from those of previous researchers.
Keywords: lid driven cavity, navier-stokes, simulation, Reynolds number
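For reference, the two-dimensional streamfunction-vorticity formulation mentioned above can be written in its standard steady form (standard notation, not quoted from the poster):

```latex
\nabla^{2}\psi = -\omega, \qquad
u\,\frac{\partial \omega}{\partial x} + v\,\frac{\partial \omega}{\partial y}
  = \frac{1}{Re}\,\nabla^{2}\omega, \qquad
u = \frac{\partial \psi}{\partial y}, \quad v = -\frac{\partial \psi}{\partial x},
```

where ψ is the streamfunction, ω the vorticity, and Re the Reynolds number; the moving lid enters through the tangential velocity imposed on the top wall.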
Procedia PDF Downloads 715
1413 Automated Distribution System Management: Substation Remote Diagnostic and Operation Solution for Obafemi Awolowo University
Authors: Aderonke Oluseun Akinwumi, Olusola A. Komolaf
Abstract:
This paper gives information about the wide array of challenges facing both electric utilities and consumers in the distribution systems of developing countries, using Obafemi Awolowo University, Ile-Ife, Nigeria, as a case study. It also proffers a cost-effective solution through remote monitoring, diagnosis, and operation of distribution networks without compromising system reliability. As utilities move from manned and unintelligent networks to completely unmanned smart grids, switching activities at substations and feeders will be managed and controlled remotely by dedicated systems; hence this design. The Substation Remote Diagnostic and Operation Solution (sRDOs) would remotely monitor the load on Medium Voltage (MV) and Low Voltage (LV) feeders as well as distribution transformers and allow the utility to disconnect non-paying customers with no extra resource deployment and without interrupting supply to paying customers. The implementation of this design improved the lifetime of key distribution infrastructure by automatically isolating feeders during overload conditions and, more importantly, erring consumers. This increased the ratio of revenue generated on electricity bills to total network load.
Keywords: electric utility, consumers, remote monitoring, diagnostic, system reliability, manned and unintelligent networks, unmanned smart grids, switching activities, medium voltage, low voltage, distribution transformer
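A minimal sketch of the kind of per-feeder check such a remote solution might run; the threshold, field names, and actions below are illustrative assumptions, not the sRDOs implementation:

```python
# Illustrative only: threshold-based overload isolation and remote disconnection
OVERLOAD_LIMIT_KVA = 500.0   # assumed feeder rating

def evaluate_feeder(feeder_id, load_kva, customers):
    """Return the remote actions suggested by the latest feeder measurements."""
    actions = []
    if load_kva > OVERLOAD_LIMIT_KVA:
        actions.append(("isolate_feeder", feeder_id))                 # protect the feeder
    for customer in customers:
        if customer["balance"] < 0 and not customer["disconnected"]:
            actions.append(("remote_disconnect", customer["id"]))     # non-paying customer
    return actions

print(evaluate_feeder(
    "MV-07", load_kva=620.0,
    customers=[{"id": "C-114", "balance": -35.0, "disconnected": False}],
))
```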
Procedia PDF Downloads 130
1412 Effect of Injection Moulding Process Parameter on Tensile Strength Using Taguchi Method
Authors: Gurjeet Singh, M. K. Pradhan, Ajay Verma
Abstract:
The plastics industry plays a very important role in the economy of any country, generally accounting for a leading share of it. Since metals and their alloys are only rarely available on the earth, producing plastic products and components, which find application in many industrial as well as household consumer products, is beneficial. About 50% of plastic products are manufactured by the injection moulding process. For the production of a better-quality product, the quality characteristics and performance of the product have to be controlled. The process parameters play a significant role in the production of plastics; hence, the control of the process parameters is essential. In this paper, the effect of parameter selection on the injection moulding process is described, with the aim of defining suitable parameters for producing a plastic product. Selecting the process parameters by trial and error is neither desirable nor acceptable, as it often tends to increase cost and time. Hence, optimization of the injection moulding process parameters is essential. The experiments were designed with Taguchi's orthogonal array to achieve the result with the least number of experiments. Here, the plastic material polypropylene is studied. The tensile strength test of the material, produced by the injection moulding machine, is done on a universal testing machine. Using the Taguchi technique with the help of Minitab-14 software, the best values of injection pressure, melt temperature, packing pressure, and packing time are obtained. We found that the process parameter packing pressure contributes most to the production of a plastic product with good tensile strength.
Keywords: injection moulding, tensile strength, poly-propylene, Taguchi
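A minimal sketch of the Taguchi-style analysis described: an L9 orthogonal array over the four factors and the larger-is-better signal-to-noise ratio. The coded levels and tensile-strength responses below are made up for illustration, not the study's data:

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels (coded 0..2)
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
factors = ["injection pressure", "melt temperature", "packing pressure", "packing time"]

# Measured tensile strength (MPa) per run, single replicate -- illustrative numbers only
y = np.array([30.1, 31.4, 32.0, 30.8, 32.5, 31.1, 31.9, 30.5, 33.0])

# Larger-is-better S/N ratio: -10 * log10(mean(1 / y^2)); one replicate per run here
sn = -10.0 * np.log10(1.0 / y**2)

# Mean S/N per factor level -- the level with the highest mean S/N is preferred
for f, name in enumerate(factors):
    level_means = [sn[L9[:, f] == level].mean() for level in range(3)]
    print(name, ["%.2f" % m for m in level_means])
```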
Procedia PDF Downloads 288
1411 Improving the Flow Capacity (Cv) of the Valves
Authors: Pradeep A. G, Gorantla Giridhar, Vijay Turaga, Vinod Srinivasa
Abstract:
The major problem in flow control valves is a low Cv, which reduces the overall efficiency of the flow circuit. Designers are continuously working to improve the Cv of the valve, but they need to validate the design ideas they have regarding the improvement of Cv. The traditional method of prototyping and testing takes a lot of time. That is where CFD comes into the picture, with very quick and accurate validation along with visualization, which is not possible with the traditional testing method. We have developed a method to predict the Cv value using CFD analysis by iterating on various boundary conditions and solver settings and by carrying out grid convergence studies to establish the correlation between the CFD model and test data. The present study investigates three different ideas put forward by the designers for improving the flow capacity of the valves: reducing the cage thickness, changing the port position, and using a parabolic plug to guide the flow. Using CFD, we analyzed all design changes with the established methodology that we developed. We were able to evaluate the effect of these design changes on the valve Cv. We further optimized the wetted surface of the valve by suggesting a design modification to the lower part of the valve to make the flow more streamlined. We found that changing the cage thickness and port position has little impact on the valve Cv. The combination of the optimized wetted surface and the introduction of the parabolic plug improved the flow capacity (Cv) of the valve significantly.
Keywords: flow control valves, flow capacity (Cv), CFD simulations, design validation
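For context, the flow coefficient itself is conventionally obtained from a measured (or CFD-predicted) flow rate and pressure drop; a minimal sketch using the customary US-unit liquid-service definition (not taken from the paper):

```python
import math

def flow_coefficient_cv(q_gpm: float, dp_psi: float, specific_gravity: float = 1.0) -> float:
    """Cv = Q * sqrt(SG / dP), with Q in US gpm and dP in psi (liquid service)."""
    return q_gpm * math.sqrt(specific_gravity / dp_psi)

# Example: 120 gpm of water across a 4 psi drop (illustrative numbers)
print(round(flow_coefficient_cv(120.0, 4.0), 1))  # Cv = 60.0
```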
Procedia PDF Downloads 164
1410 Electricity Production Enhancement in a Constructed Microbial Fuel Cell MFC Using Iron Nanoparticles
Authors: Khaoula Bensaida, Osama Eljamal
Abstract:
Electrical energy generation through Microbial Fuel Cells (MFCs) using microorganisms is a renewable and sustainable approach. It provides a truly efficient technology for power production and wastewater treatment. An MFC is an electrochemical device that turns wastewater into electricity, and its most important component is the microbes. The nano zero-valent iron (NZVI) technique has been successfully applied to degrade chemical pollutants and clean wastewater. However, the use of NZVI for enhancing current production has not yet been confirmed. This study aims to confirm the effect of these particles on current generation in an MFC. A constructed microbial fuel cell, which utilizes domestic wastewater, has been considered for wastewater treatment and bio-electricity generation. The two electrodes were connected to an external resistor (200 ohms). Experiments were conducted in two steps. First, the MFC was constructed without adding NZVI particles (control), while in the second step, nanoparticles were added at a concentration of 50 mg/L. After 20 hours, the measured voltage increased to 5 and 8 mV, respectively. To conclude, the use of zero-valent iron in an MFC system can increase electricity generation.
Keywords: bacterial growth, electricity generation, microbial fuel cell MFC, nano zero-valent iron NZVI
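With the 200 Ω external resistor reported above, the corresponding currents and powers follow directly from Ohm's law; a simple check (derived values, not figures quoted from the abstract):

```python
R = 200.0  # external resistance, ohms
for label, v_mV in (("control", 5.0), ("with NZVI", 8.0)):
    v = v_mV / 1000.0          # volts
    i_uA = v / R * 1e6         # current in microamps (I = V / R)
    p_uW = v ** 2 / R * 1e6    # power in microwatts (P = V^2 / R)
    print(f"{label}: I = {i_uA:.0f} uA, P = {p_uW:.3f} uW")
# control: 25 uA, 0.125 uW; with NZVI: 40 uA, 0.320 uW
```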
Procedia PDF Downloads 144
1409 Design of SAE J2716 Single Edge Nibble Transmission Digital Sensor Interface for Automotive Applications
Authors: Jongbae Lee, Seongsoo Lee
Abstract:
Modern sensors often embed a small digital controller for sensor control, value calibration, and signal processing. These sensors require digital data communication with host microprocessors, but conventional digital communication protocols are too heavy to allow price reduction. The SAE J2716 SENT (single edge nibble transmission) protocol transmits direct digital waveforms instead of complicated analog modulated signals. In this paper, a SENT interface is designed in Verilog HDL (hardware description language) and implemented on an FPGA (field-programmable gate array) evaluation board. The designed SENT interface consists of a frame encoder/decoder, a configuration register, a tick period generator, a CRC (cyclic redundancy code) generator/checker, and a TX/RX (transmission/reception) buffer. The frame encoder/decoder is implemented as a finite state machine and controls the whole SENT interface. The configuration register contains various parameters such as operation mode, tick length, CRC option, pause pulse option, and number of data nibbles. The tick period generator generates tick signals from the input clock. The CRC generator/checker generates or checks the CRC in the SENT data frame. The TX/RX buffer stores transmitted/received data. The designed SENT interface can send or receive digital data at 25~65 kbps with a 3 µs tick. Synthesized in a 0.18 µm fabrication technology, it is implemented in about 2,500 gates.
Keywords: digital sensor interface, SAE J2716, SENT, verilog HDL
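A behavioural sketch of the nibble-to-tick encoding commonly described for SENT (a sync/calibration pulse of 56 ticks, each nibble sent as a pulse period of 12 + value ticks); this is a simplification in Python rather than the paper's Verilog, and the CRC computation, low-time structure, and pause pulse are omitted:

```python
SYNC_TICKS = 56       # sync/calibration pulse length in ticks
NIBBLE_OFFSET = 12    # a nibble value v (0..15) is sent as a pulse of 12 + v ticks

def sent_frame_ticks(status, data_nibbles, crc):
    """Return the pulse periods (in ticks) of a simplified SENT frame.

    Behavioural sketch only: the CRC value is taken as an input rather than
    computed here, and pause-pulse/low-time details are not modelled.
    """
    nibbles = [status] + list(data_nibbles) + [crc]
    assert all(0 <= n <= 0xF for n in nibbles)
    return [SYNC_TICKS] + [NIBBLE_OFFSET + n for n in nibbles]

# Two 12-bit sensor values -> six data nibbles (illustrative values)
ticks = sent_frame_ticks(status=0x0, data_nibbles=[0xA, 0x3, 0x7, 0x1, 0xF, 0x2], crc=0x5)
print(ticks, "total ticks:", sum(ticks))
```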
Procedia PDF Downloads 300
1408 Holomorphic Prioritization of Sets within Decagram of Strategic Decision Making of POSM Using Operational Research (OR): Analytic Hierarchy Process (AHP) Analysis
Authors: Elias Ogutu Azariah Tembe, Hussain Abdullah Habib Al-Salamin
Abstract:
There is a decagram of strategic decisions of operations and production/service management (POSM) within operational research (OR) that must be collated, namely: design, inventory, quality, location, process and capacity, layout, scheduling, maintenance, and supply chain. This paper presents an architectural configuration conceptual framework of the decagram of decision sets in the form of a mathematical complete graph and an abelian graph. Mathematically, a complete graph, whether undirected (UDG) or directed (DG), is a relationship in which every pair of vertices is connected, collated, confluent, and holomorphic. However, no study has been conducted that prioritizes the holomorphic sets of POSM within the OR field of study. The study utilizes the OR structured technique known as the Analytic Hierarchy Process (AHP) for organizing, sorting, and prioritizing (ranking) the sets within the decagram of POSM according to their attribution (propensity), and provides an analysis of how the prioritization has real-world application in the 21st century.
Keywords: holomorphic, decagram, decagon, confluent, complete graph, AHP analysis, SCM, HRM, OR, OM, abelian graph
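A minimal sketch of the AHP priority computation the study relies on (pairwise comparison matrix, principal eigenvector, consistency ratio), shown for a reduced example with three of the ten decision areas; the judgment values are illustrative, not the study's data:

```python
import numpy as np

# Pairwise comparisons on Saaty's 1-9 scale for three decision areas -- illustrative
areas = ["quality", "capacity", "supply chain"]
A = np.array([
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 0.5, 1.0],
])

# Priorities = principal right eigenvector, normalized to sum to 1
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), with RI = 0.58 for n = 3
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58
print(dict(zip(areas, np.round(weights, 3))), "CR = %.3f" % cr)  # CR < 0.1 is acceptable
```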
Procedia PDF Downloads 402
1407 Temperature Dependence of Relative Permittivity: A Measurement Technique Using Split Ring Resonators
Authors: Sreedevi P. Chakyar, Jolly Andrews, V. P. Joseph
Abstract:
A compact method for measuring the relative permittivity of a dielectric material at different temperatures, using a single circular Split Ring Resonator (SRR) metamaterial unit working as a test probe, is presented in this paper. The dielectric constant of a material depends on its temperature, and the LC resonance of the SRR depends on its dielectric environment. Hence, the temperature of the dielectric material in contact with the resonator influences its resonant frequency. A single SRR placed between transmitting and receiving probes connected to a Vector Network Analyser (VNA) is used as a test probe. The dependence of the SRR resonant frequency on temperature between 30 °C and 60 °C is analysed. The relative permittivities 'ε' of test samples at different temperatures are extracted from a calibration graph drawn between the relative permittivity of samples of known dielectric constant and their corresponding resonant frequencies. This method is found to be an easy and efficient technique for analysing the temperature-dependent permittivity of different materials.
Keywords: metamaterials, negative permeability, permittivity measurement techniques, split ring resonators, temperature dependent dielectric constant
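A minimal sketch of the calibration-curve step: fit the resonant frequency against the known permittivities, then invert the fit to read off ε for a measured frequency. The sample values below are made up for illustration, not the paper's measurements:

```python
import numpy as np

# Calibration samples: known relative permittivity -> measured SRR resonant frequency (GHz)
eps_known = np.array([1.0, 2.1, 3.0, 4.4])        # illustrative values
f_res_ghz = np.array([4.20, 3.95, 3.78, 3.55])    # illustrative values

# Low-order calibration fit f = a*eps + b (adequate over a narrow permittivity range)
a, b = np.polyfit(eps_known, f_res_ghz, 1)

def permittivity_from_frequency(f_measured_ghz: float) -> float:
    """Invert the calibration line to extract the unknown relative permittivity."""
    return (f_measured_ghz - b) / a

print(round(permittivity_from_frequency(3.70), 2))
```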
Procedia PDF Downloads 412
1406 Achieving High Renewable Energy Penetration in Western Australia Using Data Digitisation and Machine Learning
Authors: A. D. Tayal
Abstract:
The energy industry is undergoing significant disruption. This research outlines that, whilst challenging, this disruption is also an emerging opportunity for electricity utilities. One such opportunity is leveraging the developments in data analytics and machine learning. As the uptake of renewable energy technologies and complementary control systems increases, electricity grids will likely transform towards dense microgrids with high penetration of renewable generation sources, rich in network and customer data, and linked through intelligent, wireless communications. Data digitisation and analytics have already impacted numerous industries, and their influence on the energy sector is growing as computational capabilities increase to manage big data and as machines develop algorithms to solve the energy challenges of the future. The objective of this paper is to address how far the uptake of renewable technologies can go given the constraints of existing grid infrastructure and to provide a qualitative assessment of how higher levels of renewable energy penetration can be facilitated by incorporating even broader technological advances in the fields of data analytics and machine learning. Western Australia is used as a contextualised case study, given its abundant and diverse renewable resources (solar, wind, biomass, and wave) and isolated networks, which make a high penetration of renewables a feasible target for policy makers over the coming decades.
Keywords: data, innovation, renewable, solar
Procedia PDF Downloads 364
1405 A CFD Study of the Performance Characteristics of Vented Cylinders as Vortex Generators
Authors: R. Kishan, R. M. Sumant, S. Suhas, Arun Mahalingam
Abstract:
This paper mainly investigates the influence of a vortex generator on the lift and drag coefficients when the vortex generator is mounted on a flat plate. Vented cylinders were used as vortex generators; they intensify vortex shedding in the wake of the vented cylinder compared to a baseline circular cylinder, which ensures more attached flow and increases the lift force of the system. First, vented cylinders were analyzed in commercial CFD software and compared with baseline cylinders for different angles of attack, and the variation of lift and drag forces was further studied by varying the Reynolds number to account for the influence of turbulence and the boundary layer in the flow. Later, vented cylinders were mounted on a flat plate, and the variation of the lift and drag coefficients was studied by varying the angle of attack and examining the dependence of the coefficients on the Reynolds number and the dimensions of the vortex generator. Mesh grid sensitivity is studied to check the convergence of the results obtained. It was found that the use of vented cylinders as vortex generators increased the lift forces, with only small variation in the drag forces, as the angle of attack was varied.
Keywords: CFD analysis, drag coefficient, FVM, lift coefficient, modeling, Reynolds number, simulation, vortex generators, vortex shedding
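For reference, the lift and drag coefficients and the Reynolds number discussed above are the usual non-dimensional forms (standard definitions, not quoted from the paper):

```latex
C_L = \frac{L}{\tfrac{1}{2}\,\rho\,U_{\infty}^{2}\,A}, \qquad
C_D = \frac{D}{\tfrac{1}{2}\,\rho\,U_{\infty}^{2}\,A}, \qquad
Re = \frac{\rho\,U_{\infty}\,D_{c}}{\mu},
```

where L and D are the lift and drag forces, ρ the fluid density, U∞ the freestream velocity, A the reference area, D_c the cylinder diameter, and μ the dynamic viscosity.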
Procedia PDF Downloads 432
1404 Sustainable Production of Tin Oxide Nanoparticles: Exploring Synthesis Techniques, Formation Mechanisms, and Versatile Applications
Authors: Yemane Tadesse Gebreslassie, Henok Gidey Gebretnsae
Abstract:
Nanotechnology has emerged as a highly promising field of research with wide-ranging applications across various scientific disciplines. In recent years, tin oxide has garnered significant attention due to its intriguing properties, particularly when synthesized in the nanoscale range. While numerous physical and chemical methods exist for producing tin oxide nanoparticles, these approaches tend to be costly, energy-intensive, and involve the use of toxic chemicals. Given the growing concerns regarding human health and environmental impact, there has been a shift towards developing cost-effective and environmentally friendly processes for tin oxide nanoparticle synthesis. Green synthesis methods utilizing biological entities such as plant extracts, bacteria, and natural biomolecules have shown promise in successfully producing tin oxide nanoparticles. However, scaling up the production to an industrial level using green synthesis approaches remains challenging due to the complexity of biological substrates, which hinders the elucidation of reaction mechanisms and formation processes. Thus, this review aims to provide an overview of the various sources of biological entities and methodologies employed in the green synthesis of tin oxide nanoparticles, as well as their impact on nanoparticle properties. Furthermore, this research delves into the strides made in comprehending the mechanisms behind the formation of nanoparticles as documented in existing literature. It also sheds light on the array of analytical techniques employed to investigate and elucidate the characteristics of these minuscule particles.
Keywords: nanotechnology, tin oxide, green synthesis, formation mechanisms
Procedia PDF Downloads 53
1403 Distributed Control Strategy for Dispersed Energy Storage Units in the DC Microgrid Based on Discrete Consensus
Authors: Hanqing Yang, Xiang Meng, Qi Li, Weirong Chen
Abstract:
SOC (state of charge) based droop control has limitations in load power sharing among different energy storage units due to the line impedance. In this paper, a distributed control strategy for dispersed energy storage units in the DC microgrid based on discrete consensus is proposed. Firstly, a sparse information communication network is built so that local controllers can communicate with their neighbors using voltage, current, and SOC information. The average grid voltage can be evaluated to compensate for the voltage offset caused by droop control, and an objective virtual resistance fulfilling the above requirement can be dynamically calculated to distribute load power according to the SOC of the energy storage units. Then, the stability of the whole system and the influence of communication delay are analyzed. It can be concluded that this control strategy can improve robustness and flexibility because it has no central controller. Finally, a model of a DC microgrid with dispersed energy storage units and loads is built, the discrete distributed algorithm is established, and the communication protocol is developed. Co-simulation between Matlab/Simulink and JADE (Java agent development framework) has verified the effectiveness of the proposed control strategy.
Keywords: dispersed energy storage units, discrete consensus algorithm, state of charge, communication delay
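A minimal sketch of the discrete consensus averaging step underlying such schemes: each unit repeatedly mixes its local estimate (here, of the average grid voltage) with its neighbors' values over a sparse communication graph. The graph, step size, and voltages are illustrative assumptions:

```python
import numpy as np

# Sparse ring communication graph over four storage units (illustrative)
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
epsilon = 0.3                             # consensus step size (< 1 / max node degree)
x = np.array([47.8, 48.3, 48.0, 47.5])    # local bus-voltage measurements (V)

for k in range(50):
    x_new = x.copy()
    for i, nbrs in neighbors.items():
        x_new[i] += epsilon * sum(x[j] - x[i] for j in nbrs)
    x = x_new

print(x.round(3))   # every unit converges to the network average, 47.9 V
```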
Procedia PDF Downloads 280
1402 General Purpose Graphic Processing Units Based Real Time Video Tracking System
Authors: Mallikarjuna Rao Gundavarapu, Ch. Mallikarjuna Rao, K. Anuradha Bai
Abstract:
Real-time video tracking is a challenging task for computing professionals. The performance of video tracking techniques is greatly affected by the background detection and elimination process. Local regions of the image frame contain vital information about the background and foreground. However, pixel-level processing of local regions consumes a considerable amount of computational time and memory space with traditional approaches. In our approach, we have explored the concurrent computational ability of General Purpose Graphic Processing Units (GPGPU) to address this problem. The Gaussian Mixture Model (GMM) with adaptive weighted kernels is used for detecting the background. The weights of the kernel are influenced by local regions and are updated by inter-frame variations of these corresponding regions. The proposed system has been tested with GPU devices such as the GeForce GTX 280 and Quadro K2000. The results are encouraging, with a maximum speed-up of 10X compared to the sequential approach.
Keywords: connected components, embrace threads, local weighted kernel, structuring elements
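A minimal CPU-side sketch of GMM-based background subtraction, using OpenCV's MOG2 model as a stand-in for the custom locally weighted GPU kernels described above; the video path and parameter values are illustrative assumptions:

```python
import cv2

# MOG2 is an adaptive Gaussian-mixture background model (stand-in for the
# adaptive weighted kernels in the abstract)
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)

cap = cv2.VideoCapture("traffic.mp4")   # placeholder video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    foreground = subtractor.apply(frame)   # per-pixel foreground mask
    # Connected components give candidate objects for the tracking stage
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(foreground)
    cv2.imshow("foreground", foreground)
    if cv2.waitKey(1) == 27:               # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```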
Procedia PDF Downloads 440
1401 Spatial Analysis for Wind Risk Index Assessment
Authors: Ljiljana Seric, Vladimir Divic, Marin Bugaric
Abstract:
This paper presents a methodology for the spatial analysis of GIS data used to assess the micro-location risk index for potential damage from high winds. The analysis is performed on freely available GIS data comprising information about the wind load, terrain cover, and topography of the area. The methodology utilizes the Eurocode norms for the determination of the wind load on buildings and structures. The core of the methodology is the assignment of the wind load parameters to locations on a geographical spatial grid. The presented work is part of the Wind Risk project, supported by the European Commission under the Civil Protection Financial Instrument of the European Union (ECHO). The partners involved in the Wind Risk project performed a wind risk assessment and proposed an action plan for three European countries: Slovenia, Croatia, and Germany. The proposed method is implemented in the open-source GRASS GIS software and demonstrated for a case study of the wider area of Split, Croatia. The obtained Wind Risk Index is visualized and correlated with critical infrastructure such as buildings, roads, and power lines. The results show a good correlation between a high Wind Risk Index and recent wind-related incidents.
Keywords: Eurocode norms, GIS, spatial analysis, wind distribution, wind risk
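A minimal sketch of the grid-based overlay idea: raster layers for wind load, terrain cover, and topography defined on a common spatial grid are combined into a single index. The layers, weights, and class breaks below are illustrative assumptions, not the project's actual index definition:

```python
import numpy as np

# Toy raster layers on a common spatial grid, already normalized to 0..1 (illustrative)
rng = np.random.default_rng(0)
wind_load = rng.random((100, 100))       # e.g. Eurocode-derived wind load, rescaled
terrain_cover = rng.random((100, 100))   # e.g. roughness/cover class, rescaled
topography = rng.random((100, 100))      # e.g. exposure derived from elevation

weights = {"wind": 0.5, "cover": 0.3, "topo": 0.2}   # assumed weights
risk_index = (weights["wind"] * wind_load
              + weights["cover"] * terrain_cover
              + weights["topo"] * topography)

# Classify cells into low / medium / high risk for visualization
classes = np.digitize(risk_index, bins=[0.33, 0.66])
print(np.bincount(classes.ravel()))      # number of cells in each risk class
```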
Procedia PDF Downloads 316
1400 Defining the Turbulent Coefficients with the Effect of Atmospheric Stability in the Wake of a Wind Turbine
Authors: Mohammad A. Sazzad, Md M. Alam
Abstract:
Wind energy is one of the cleanest forms of renewable energy. Although the wind industry is growing faster than ever, there are some roadblocks to further improvement. One of the difficulties the industry is facing is insufficient knowledge about the wake within wind farms. The energy is generated in the lowest layer of the atmospheric boundary layer (ABL). The interaction between the wind turbine (WT) blades and the wind introduces a low-speed wind region, which is defined as the wake. This wake region shows different characteristics under each stability condition of the ABL. It is therefore fundamental to know this wake region well, as it is defined mainly by turbulence transport and wake shear. Defining the wake recovery length and width is crucial for a wind farm to optimize generation and reduce the waste of power delivered to the grid. Therefore, in order to obtain the turbulent coefficients of velocity and length, this research focused on large eddy simulation (LES) data for the neutral ABL (NABL). According to turbulence theory, if the velocity defect and Reynolds stress are presented in the form of local length and velocity scales, they become invariant. In our study, the velocity and length coefficients are 0.4867 and 0.4794, respectively, which is close to the theoretical value of 0.5 for the NABL. Some invariant profiles varied a little from the ideal condition because of the presence of thermal and wind shear power coefficients.
Keywords: atmospheric boundary layer, renewable energy, turbulent coefficient, wind turbine, wake
Procedia PDF Downloads 132
1399 Net Fee and Commission Income Determinants of European Cooperative Banks
Authors: Karolína Vozková, Matěj Kuc
Abstract:
Net fee and commission income is one of the key elements of a bank's core income. In the current low-interest rate environment, this type of income is gaining importance relative to net interest income. This paper analyses the effects of bank-specific and country-specific determinants of net fee and commission income on a set of cooperative banks from European countries in the 2007-2014 period. In order to do that, dynamic panel data methods (system Generalized Method of Moments) were employed. Subsequently, alternative panel data methods were run as robustness checks of the analysis. A strong positive impact of bank concentration on the share of net fee and commission income was found, which shows that cooperative banks tend to display a higher share of fee income in less competitive markets. This is probably connected with the fact that they stick with their traditional deposit-taking and loan-providing model, and fees on these services are driven down by competitors. Moreover, compared to commercial banks, cooperatives do not expand heavily into non-traditional fee-bearing services under competition, and their overall fee income share therefore decreases with the increased competitiveness of the sector.
Keywords: cooperative banking, dynamic panel data models, net fee and commission income, system GMM
Procedia PDF Downloads 330
1398 Enhancement of Natural Convection Heat Transfer within Closed Enclosure Using Parallel Fins
Authors: F. A. Gdhaidh, K. Hussain, H. S. Qi
Abstract:
A numerical study of natural convection heat transfer in a water-filled cavity has been carried out in 3D for a single-phase liquid cooling system using an array of parallel plate fins mounted on one wall of the cavity. The heat source represents a computer CPU with dimensions of 37.5 × 37.5 mm mounted on a substrate. A cold plate is used as a heat sink installed on the opposite vertical end of the enclosure. The air flow inside the computer case is created by an exhaust fan. A turbulent air flow is assumed, and the k-ε model is applied. The fins are installed on the substrate to enhance the heat transfer. The applied power ranges between 15 and 40 W. In order to determine the thermal behaviour of the cooling system, the effects of the heat input and the number of parallel plate fins are investigated. The results illustrate that as the number of fins increases, the maximum heat source temperature decreases. However, when the number of fins increases beyond a critical value, the temperature starts to increase because the fins are too closely spaced, which obstructs the water flow. The introduction of parallel plate fins reduces the maximum heat source temperature by 10% compared to the case without fins. The cooling system maintains the maximum chip temperature at 64.68 °C at a heat input of 40 W, which is much lower than the recommended chip limit temperature of no more than 85 °C; hence, the performance of the CPU is enhanced.
Keywords: chips limit temperature, closed enclosure, natural convection, parallel plate, single phase liquid
Procedia PDF Downloads 265
1397 The Issue of Online Fake News and Disinformation: Criminal and Criminological Aspects of Prevention
Authors: Fotios Spyropoulos, Evangelia Androulaki, Vasileios Karagiannopoulos, Aristotelis Kompothrekas, Nikolaos Karagiannis
Abstract:
In recent years, the problem of 'fake news' and 'hoaxes' has dominated the fields of news, politics, the economy, safety, and security, as the dissemination of false information can strongly affect and mislead public discourse and public opinion. The widespread use of the internet and social media platforms can substantially intensify these effects, which often include public fear and insecurity. Misinformation, malinformation, and disinformation have also been blamed for affecting election results in multiple countries, and there have since been efforts to tackle the phenomenon at both the national and international level. The presentation will focus on methods of preventing the dissemination of false information on social media and on the internet and will discuss relevant criminological views. The challenges that have arisen for criminal law will be covered, taking into account the potential need for a multinational approach to mitigate the extent and negative impact of the fake news phenomenon. Finally, the analysis will include a discussion of the potential usefulness of non-legal modalities of regulation and crime prevention, especially situational and social measures of prevention, and the possibility of combining an array of methods to achieve better results at the national and international level. This project has received funding from the Hellenic Foundation for Research and Innovation (HFRI) and the General Secretariat for Research and Technology (GSRT), under grant agreement No 80529.
Keywords: cybercrime, disinformation, fake news, prevention
Procedia PDF Downloads 142
1396 A Literature Review on Emotion Recognition Using Wireless Body Area Network
Authors: Christodoulou Christos, Politis Anastasios
Abstract:
The utilization of Wireless Body Area Network (WBAN) is experiencing a notable surge in popularity as a result of its widespread implementation in the field of smart health. WBANs utilize small sensors implanted within the human body to monitor and record physiological indicators. These sensors transmit the collected data to hospitals and healthcare facilities through designated access points. Bio-sensors exhibit a diverse array of shapes and sizes, and their deployment can be tailored to the condition of the individual. Multiple sensors may be strategically placed within, on, or around the human body to effectively observe, record, and transmit essential physiological indicators. These measurements serve as a basis for subsequent analysis, evaluation, and therapeutic interventions. In conjunction with physical health concerns, numerous smartwatches are engineered to employ artificial intelligence techniques for the purpose of detecting mental health conditions such as depression and anxiety. The utilization of smartwatches serves as a secure and cost-effective solution for monitoring mental health. Physiological signals are widely regarded as a highly dependable method for the recognition of emotions due to the inherent inability of individuals to deliberately influence them over extended periods of time. The techniques that WBANs employ to recognize emotions are thoroughly examined in this article.
Keywords: emotion recognition, wireless body area network, WBAN, ERC, wearable devices, psychological signals, emotion, smart-watch, prediction
Procedia PDF Downloads 50
1395 Geophysical Exploration of Aquifer Zones by (VES) Method at Ayma-Kharagpur, District Paschim Midnapore, West Bengal
Authors: Mayank Sharma
Abstract:
Groundwater has been a matter of great concern in recent years due to the depletion of the water table, which has resulted from the over-exploitation of groundwater resources. Sub-surface exploration is a good way to identify the groundwater potential of an area. Thus, in order to meet the water needs for irrigation in the study area, a tube well needed to be installed. Therefore, a geophysical investigation was carried out to find the most suitable point for drilling and sinking a tube well that encounters an aquifer. Hence, the electrical resistivity method of geophysical exploration was used to determine the aquifer zones of the area. The Vertical Electrical Sounding (VES) method was employed to determine the subsurface geology of the area. Seven vertical electrical soundings using the Schlumberger electrode array, with a maximum AB electrode separation of 700 m, were carried out at selected points in Ayma, Kharagpur-1 block of Paschim Midnapore district, West Bengal. The VES was done using an IGIS DDR3 resistivity meter to an approximate depth of 160-180 m. The data were interpreted, processed, and analyzed. Based on all the interpretations using the direct method, the geology of the area at the sounding points was interpreted. It was established that two deeper clay-sand sections exist in the area, at depths of 50-70 m (resistivity range 40-60 ohm-m) and 70-160 m (resistivity range 25-35 ohm-m). These aquifers will provide a high yield of water, sufficient for the desired irrigation in the study area.
Keywords: VES method, Schlumberger method, electrical resistivity survey, geophysical exploration
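For context, the apparent resistivity at each Schlumberger spread is obtained from the injected current, the measured potential difference, and the geometric factor of the array; a minimal sketch of this standard relation (the reading values are illustrative):

```python
import math

def schlumberger_apparent_resistivity(ab_half_m, mn_m, delta_v_volts, current_amps):
    """rho_a = K * (dV / I), with K = pi * ((AB/2)^2 - (MN/2)^2) / MN."""
    k = math.pi * (ab_half_m ** 2 - (mn_m / 2.0) ** 2) / mn_m
    return k * delta_v_volts / current_amps

# Illustrative reading: AB/2 = 100 m, MN = 10 m, dV = 2.5 mV, I = 50 mA
rho_a = schlumberger_apparent_resistivity(100.0, 10.0, 0.0025, 0.05)
print(round(rho_a, 1), "ohm-m")   # ~156.7 ohm-m
```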
Procedia PDF Downloads 196
1394 Application of Supervised Deep Learning-based Machine Learning to Manage Smart Homes
Authors: Ahmed Al-Adaileh
Abstract:
Renewable energy sources, domestic storage systems, controllable loads, and machine learning technologies will be key components of future smart home management systems. An energy management scheme is presented that uses a Deep Learning (DL) approach to support smart home management systems consisting of a standalone photovoltaic system, a storage unit, a heating, ventilation, and air-conditioning system, and a set of conventional and smart appliances. The objective of the proposed scheme is to apply DL-based machine learning to predict various running parameters within a smart home's environment, in order to achieve maximum comfort levels for occupants, reduced electricity bills, and less dependency on the public grid. The problem is addressed using reinforcement learning, where decisions are taken by applying a continuous-time Markov Decision Process. The main contribution of this research is the proposed framework, which applies DL to enhance the system's supervised dataset and thereby offer broad opportunities to effectively support smart home systems. A case study involving a set of conventional and smart appliances with dedicated processing units in an inhabited building demonstrates the validity of the proposed framework, and a visualization graph shows the 'before' and 'after' results.
Keywords: smart homes systems, machine learning, deep learning, Markov Decision Process
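The decision-making loop described above rests on value-based reinforcement learning; a minimal tabular Q-learning sketch of the update rule is shown below purely as an illustration (discrete-time, with made-up states, actions, and reward, rather than the continuous-time MDP formulation used by the authors):

```python
import random
from collections import defaultdict

actions = ["charge_battery", "discharge_battery", "run_hvac", "idle"]   # illustrative
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = defaultdict(float)                        # Q[(state, action)]

def choose_action(state):
    if random.random() < epsilon:
        return random.choice(actions)                       # explore
    return max(actions, key=lambda a: Q[(state, a)])        # exploit

def update(state, action, reward, next_state):
    """Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One illustrative transition: negative reward models electricity cost
state = ("sunny", "cheap_tariff")
action = choose_action(state)
update(state, action, reward=-0.02, next_state=("sunny", "peak_tariff"))
print(action, Q[(state, action)])
```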
Procedia PDF Downloads 201
1393 Computer-Based Model for Design Selection of Lightning Arrester for 132/33kV Substation
Authors: Uma U. Uma, Uzoechi Laz
Abstract:
The protection of equipment insulation against lightning overvoltages, and the selection of a lightning arrester that will discharge at a lower voltage level than the voltage required to break down the electrical equipment insulation, are examined. The objective of this paper is to design a computer-based model, using standard equations, for the selection of the lowest-rated surge arrester that will provide adequate protection of the equipment insulation and equally have a satisfactory service life when connected to a specified line voltage in a power system network. The effectiveness or otherwise of the substation earthing system determines the arrester properties. A MATLAB program with a GUI (graphical user interface) subprogram is used in the development of the model for the determination of the required parameters, such as voltage rating, impulse sparkover voltage, power frequency sparkover voltage, discharge current, current rating, and protection level of a lightning arrester for a specified voltage level of a particular line.
Keywords: lightning arrester, GUIs, MatLab program, computer based model
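A rough sketch of the kind of screening calculation such a model starts from: the continuous operating voltage and temporary overvoltage estimates that feed the arrester rating choice. The highest system voltage and earthing coefficient below are illustrative assumptions, and the actual selection must follow the standard equations used in the paper:

```python
import math

def arrester_voltage_estimates(um_kv: float, earthing_coefficient: float) -> dict:
    """Screening estimates only (assumed relations, not the paper's model):

    - MCOV: continuous operating voltage >= highest phase-to-earth voltage, Um / sqrt(3)
    - TOV:  temporary overvoltage during an earth fault, approximated as
            earthing_coefficient * Um / sqrt(3)
    """
    phase_to_earth = um_kv / math.sqrt(3)
    return {
        "MCOV >= (kV)": round(phase_to_earth, 1),
        "TOV approx. (kV)": round(earthing_coefficient * phase_to_earth, 1),
    }

# 132 kV side: assumed highest system voltage Um = 145 kV, effectively earthed (coefficient ~1.4)
print(arrester_voltage_estimates(um_kv=145.0, earthing_coefficient=1.4))
```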
Procedia PDF Downloads 417