Search results for: stream computing
1045 Mathematical Model Using Scrambling and Message Integrity Methods in Audio Steganography
Authors: Mohammed Salem Atoum
Abstract:
The success of audio steganography lies in ensuring the imperceptibility of the embedded message in the stego file and in withstanding any form of intentional or unintentional degradation of the message (robustness). Audio steganographic techniques that use the LSB of the audio stream to embed the message have gained popularity over the years for meeting perceptual transparency, robustness and capacity requirements. This research proposes an XLSB technique to circumvent the weaknesses observed in the LSB technique. A scrambling technique is introduced in two steps: partitioning the message into blocks, followed by permuting each block in order to confuse the contents of the message. The message is embedded in an MP3 audio sample. After extracting the message, the permutation codebook is used to re-order it into its original form. MD5 and SHA-256 checksums are used to verify whether the message was altered during transmission. Experimental results show that XLSB performs better than LSB.
Keywords: XLSB, scrambling, audio steganography, security
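The scrambling and integrity-check steps described above can be sketched as follows (a minimal illustration, not the paper's implementation; the block size, seed, and 16-byte sample message are arbitrary choices, and the message length is assumed to be a multiple of the block size):

```python
import hashlib
import random

def scramble(message, block_size, seed):
    """Partition the message into equal-sized blocks, then permute the blocks.
    Returns the scrambled payload plus the permutation codebook needed to
    restore the original order after extraction."""
    blocks = [message[i:i + block_size] for i in range(0, len(message), block_size)]
    codebook = list(range(len(blocks)))
    random.Random(seed).shuffle(codebook)
    return b"".join(blocks[i] for i in codebook), codebook

def unscramble(payload, block_size, codebook):
    """Invert the permutation using the codebook."""
    blocks = [payload[i:i + block_size] for i in range(0, len(payload), block_size)]
    restored = [b""] * len(blocks)
    for pos, original_index in enumerate(codebook):
        restored[original_index] = blocks[pos]
    return b"".join(restored)

message = b"attack at dawn!!"                    # 16 bytes: four 4-byte blocks
digest = hashlib.sha256(message).hexdigest()     # sent alongside the stego file
payload, codebook = scramble(message, 4, seed=7)
recovered = unscramble(payload, 4, codebook)
assert hashlib.sha256(recovered).hexdigest() == digest  # message unaltered
```

Here the SHA-256 digest plays the role of the integrity check in the paper; in a full system it is the scrambled payload, not the plaintext, that would be embedded bit by bit into the cover audio.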
Procedia PDF Downloads 361
1044 Social Innovation Rediscovered: An Analysis of Empirical Research
Authors: Imen Douzi, Karim Ben Kahla
Abstract:
In spite of the growing attention paid to social innovation, the field is still considered to be in its infancy, with minimal progress in theory development. Upon examining the field, one would have to conclude that, over the past two decades, academic research has focused primarily on establishing a conceptual foundation. This has resulted in a considerable stream of conceptual papers, which have outnumbered empirical articles. Moreover, despite its growing popularity, scholars and practitioners are far from reaching a consensus on what social innovation actually means, which has resulted in competing definitions and approaches within the field and the lack of a unifying conceptual framework. This paper reviews empirical research studies on social innovation, classifies them along three dimensions and summarizes the research findings for each of these dimensions. Prior to the analysis of the empirical research, an overview of the different perspectives on social innovation is presented.
Keywords: analysis of empirical research, definition, empirical research, social innovation perspectives
Procedia PDF Downloads 380
1043 Current Status of Nitrogen Saturation in the Upper Reaches of the Kanna River, Japan
Authors: Sakura Yoshii, Masakazu Abe, Akihiro Iijima
Abstract:
Nitrogen saturation has become one of the serious issues in the field of forest environment research. The watershed protection forests located in the downwind hinterland of the Tokyo Metropolitan Area are believed to be facing nitrogen saturation. In this study, we focus carefully on the balance of nitrogen between load and runoff. The annual nitrogen load via atmospheric deposition was estimated at 461.1 t-N/year in the upper reaches of the Kanna River. The annual nitrogen runoff to the forested headwater stream of the Kanna River was determined to be 184.9 t-N/year, corresponding to 40.1% of the total nitrogen load. A clear seasonal change in NO3-N concentration was still observed. Therefore, the watershed protection forest of the Kanna River is most likely in Stage 1 of nitrogen saturation.
Keywords: atmospheric deposition, nitrogen accumulation, denitrification, forest ecosystems
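The load/runoff balance reported above reduces to simple arithmetic; the sketch below just re-derives the runoff share from the two annual totals:

```python
load_t_n = 461.1    # annual N load via atmospheric deposition (t-N/year)
runoff_t_n = 184.9  # annual N runoff to the headwater stream (t-N/year)

runoff_share = runoff_t_n / load_t_n   # fraction of the load leaving in runoff
retained_t_n = load_t_n - runoff_t_n   # N retained (or denitrified) in the watershed

print(f"runoff share: {runoff_share:.1%}")  # → runoff share: 40.1%
print(f"retained: {retained_t_n:.1f} t-N/year")
```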
Procedia PDF Downloads 270
1042 Morphotectonic Analysis of Burkh Anticline, North of Bastak, Zagros
Authors: A. Afroogh, R. Ramazani omali, N. Hafezi Moghaddas, A. Nohegar
Abstract:
The Burkh anticline, with a length of 50 km and a width of 9 km, is located 40 km north of Bastak in the internal Fars zone of the Zagros fold-thrust belt. In order to assess active tectonics in the study area, morphometric indexes such as the V index (V), the ratio of valley-floor width to valley height (Vf), the stream length-gradient ratio (Sl), the channel sinuosity index (S), the mountain front faceting index (F%) and the mountain front sinuosity (Smf) have been studied. These investigations show that activity is not equal along the length of the Burkh anticline; the central part of the anticline is the most active.
Keywords: anticline, internal Fars zone, tectonics, morphometric indexes, fold-thrust belt
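Two of the indexes listed above have simple closed forms; a sketch using the standard definitions, with made-up field measurements rather than values from the Burkh anticline:

```python
def valley_floor_width_height_ratio(vfw, eld, erd, esc):
    """Vf = 2*Vfw / ((Eld - Esc) + (Erd - Esc)): valley-floor width Vfw divided
    by the mean height of the left (Eld) and right (Erd) divides above the
    valley floor (Esc). Low Vf (V-shaped valley) suggests active uplift."""
    return 2.0 * vfw / ((eld - esc) + (erd - esc))

def mountain_front_sinuosity(lmf, ls):
    """Smf = Lmf / Ls: length along the mountain front divided by the
    straight-line length. Values near 1 indicate an active, little-eroded front."""
    return lmf / ls

# hypothetical measurements read off a topographic map
print(valley_floor_width_height_ratio(vfw=50.0, eld=600.0, erd=700.0, esc=400.0))  # → 0.2
print(mountain_front_sinuosity(lmf=12.0, ls=10.0))                                 # → 1.2
```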
Procedia PDF Downloads 246
1041 Managing Data from One Hundred Thousand Internet of Things Devices Globally for Mining Insights
Authors: Julian Wise
Abstract:
Newcrest Mining is one of the world's top five gold and rare earth mining organizations by production, reserves and market capitalization. This paper elaborates on the data acquisition processes employed by Newcrest, in collaboration with the Fortune 500 listed organization Insight Enterprises, to standardize machine learning solutions which process data from over a hundred thousand distributed Internet of Things (IoT) devices located at mine sites globally. Through the utilization of cloud technologies and edge computing, these developments enable standardized machine learning applications to inform the strategic optimization of mineral processing. Target objectives of the machine learning optimizations include time savings in mineral processing, production efficiencies, risk identification, and increased production throughput. The data acquired for predictive modelling is processed through edge computing and stored collectively within a data lake. The digital transformation has necessitated a standardized software architecture to manage the machine learning models submitted by vendors, to ensure effective automation and continuous improvement of the mineral process models. Operating at scale, the system processes hundreds of gigabytes of data per day from distributed mine sites across the globe, for the purposes of improved worker safety and production efficiency through big data applications.
Keywords: mineral technology, big data, machine learning operations, data lake
Procedia PDF Downloads 106
1040 A Deep Learning Approach to Subsection Identification in Electronic Health Records
Authors: Nitin Shravan, Sudarsun Santhiappan, B. Sivaselvan
Abstract:
Subsection identification, in the context of Electronic Health Records (EHRs), is the task of identifying the sections that matter for downstream tasks such as auto-coding. In this work, we classify the text present in EHRs according to its information content, using machine learning and deep learning techniques. We first briefly describe the problem and formulate it as a text classification problem. We then discuss methods from the literature and try two approaches: traditional feature-extraction-based machine learning methods and deep learning methods. Through experiments on a private dataset, we establish that the deep learning methods perform better than the feature-extraction-based machine learning models.
Keywords: deep learning, machine learning, semantic clinical classification, subsection identification, text classification
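A minimal illustration of the feature-extraction style of baseline the authors compare against: a pure-stdlib tf-idf representation with nearest-neighbour matching. The dataset is private, so the snippets and section labels below are invented:

```python
import math
from collections import Counter

def tfidf_vectors(corpus):
    """Plain tf-idf vectors (as dicts) for a small corpus, stdlib only."""
    df = Counter()
    for doc in corpus:
        df.update(set(doc.split()))          # document frequency per term
    n = len(corpus)
    vectors = []
    for doc in corpus:
        tf = Counter(doc.split())
        vectors.append({w: c * math.log((1 + n) / (1 + df[w])) for w, c in tf.items()})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# invented EHR-style snippets with subsection labels
docs = ["patient reports chest pain",
        "prescribed aspirin daily dose",
        "chest pain on exertion"]
labels = ["history", "medication", "history"]
query = "new onset chest pain"

vecs = tfidf_vectors(docs + [query])
best = max(range(len(docs)), key=lambda i: cosine(vecs[i], vecs[-1]))
print(labels[best])  # → history
```

A deep learning pipeline would replace the hand-built vectors with learned embeddings, which is where the reported improvement comes from.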
Procedia PDF Downloads 209
1039 Edible Oil Industry Wastewater Treatment by Microfiltration with Ceramic Membrane
Authors: Zita Šereš, Dragana Šoronja Simović, Ljubica Dokić, Lidietta Giorno, Biljana Pajin, Cecilia Hodur, Nikola Maravić
Abstract:
Membrane technology is convenient for the separation of suspended solids, colloids and high-molecular-weight materials. The idea is that the waste stream from the edible oil industry, after separation of the oil using skimmers, is subjected to microfiltration, and the obtained permeate can be reused in the production process. Wastewater from the edible oil industry was used for the microfiltration. For the microfiltration of this effluent, a tubular membrane was used with a pore size of 200 nm, at transmembrane pressures of up to 3 bar and flow rates of up to 300 L/h. A Box-Behnken design was selected for the experimental work, and the responses considered were permeate flux and chemical oxygen demand (COD) reduction. The reduction of the permeate COD was in the range of 40-60% relative to the feed. The highest permeate flux achieved during the microfiltration process was 160 L/m2h.
Keywords: ceramic membrane, edible oil, microfiltration, wastewater
Procedia PDF Downloads 292
1038 Simulation Model of Biosensor Based on Gold Nanoparticles
Authors: Kholod Hajo
Abstract:
In this study, COMSOL Multiphysics was used to design lateral flow biosensors (LFBs), which offer advantages in low cost, simplicity, rapidity, stability and portability, making them popular in the biomedical, agricultural, food and environmental sciences. The study focused on a simulation model of a biosensor based on gold nanoparticles (GNPs). From the model, the magnitude of the laminar velocity field in the flow cell, the concentration distribution in the analyte stream, the surface coverage of adsorbed species and the average fractional surface coverage of adsorbed analyte are discussed, and several suggestions are given for functionalizing the GNPs and increasing the accuracy of the biosensor design. Acceptable results were obtained for all of the above.
Keywords: model, gold nanoparticles, biosensor, COMSOL Multiphysics
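The fractional surface coverage reported by such models typically follows Langmuir adsorption kinetics; a forward-Euler sketch with assumed rate constants and concentration (not values from the COMSOL model):

```python
# dtheta/dt = ka*c*(1 - theta) - kd*theta   (Langmuir adsorption kinetics)
ka = 1.0e5    # association rate constant, 1/(M*s)        -- assumed
kd = 1.0e-2   # dissociation rate constant, 1/s           -- assumed
c = 1.0e-6    # analyte concentration at the GNP surface, M -- assumed

theta, dt = 0.0, 0.01            # fractional coverage, time step (s)
for _ in range(200_000):         # integrate to t = 2000 s, well past equilibrium
    theta += dt * (ka * c * (1.0 - theta) - kd * theta)

theta_eq = ka * c / (ka * c + kd)  # analytic steady-state coverage, ~0.909 here
print(round(theta, 3), round(theta_eq, 3))
```

A finite-element model like the one in the paper couples this surface reaction to convection and diffusion in the flow cell; the zero-dimensional version above only captures the binding kinetics.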
Procedia PDF Downloads 255
1037 When Change Is the Only Constant: The Impact of Change Frequency and Diversity on Change Appraisal
Authors: Danika Pieters
Abstract:
Due to changing societal and economic demands, organizational change has become increasingly prevalent in work life. While change research has long focused on the effects of single, discrete change events on employee outcomes such as job satisfaction and organizational commitment, a nascent research stream has begun to look into the potential cumulative effects of change in the context of continuous, intense reform. This case study of a large Belgian public organization aims to add to this growing literature by examining how the frequency and diversity of past changes affect employees' appraisals of a newly introduced change. Twelve hundred survey results were analyzed using standard ordinary least squares regression. The results showed a correlation between high past change frequency and diversity and a negative appraisal of the new change. Implications for practitioners and future research are discussed.
Keywords: change frequency, change diversity, organizational changes, change appraisal, change evaluation
Procedia PDF Downloads 128
1036 An Evolutionary Approach for QAOA for Max-Cut
Authors: Francesca Schiavello
Abstract:
This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of the traditional gradient-based optimization process. QAOAs were first introduced in 2014, when their algorithm performed better than the best known classical algorithm for Max-Cut. Whilst classical algorithms have since improved and returned to being faster and more efficient, this was a huge milestone for quantum computing, and the work is often used as a benchmarking tool and a foundation for exploring variants of QAOA. This, alongside other famous algorithms like Grover's or Shor's, highlights the potential that quantum computing holds. It also presents the prospect of a real quantum advantage which, if the hardware continues to improve, could usher in a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate when creating solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problem that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that the algorithm searches the solution space through a population of solutions, it can be parallelized to speed up the search.
The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using the COBYLA optimizer, a linear-approximation-based method, and in some instances it can even find a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization
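As a purely classical illustration of the evolutionary half of such a hybrid (no quantum circuit involved, and not the authors' algorithm), a small population-based search can already solve tiny Max-Cut instances; population size, generation count, and the test graph are arbitrary choices:

```python
import random

def cut_value(edges, bits):
    """Number of edges crossing the partition encoded by the bit-string."""
    return sum(1 for u, v in edges if bits[u] != bits[v])

def evolve_max_cut(edges, n, pop_size=40, generations=500, seed=0):
    """Toy EA: tournament selection plus single-bit-flip mutation.
    Ties keep the child, so the search can drift across fitness plateaus."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        next_pop = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)                      # tournament of two
            parent = max(a, b, key=lambda s: cut_value(edges, s))
            child = parent[:]
            child[rng.randrange(n)] ^= 1                   # flip one node's side
            next_pop.append(max(child, parent, key=lambda s: cut_value(edges, s)))
        pop = next_pop
    return max(pop, key=lambda s: cut_value(edges, s))

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # 4-cycle: the optimal cut is 4
best = evolve_max_cut(edges, 4)
print(cut_value(edges, best))
```

In the hybrid scheme the population members would instead be QAOA variational parameters, with the cost evaluated on a quantum (or simulated) backend; each member's evaluation is independent, which is what makes the parallelization discussed above natural.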
Procedia PDF Downloads 57
1035 Steady and Oscillatory States of Swirling Flows under an Axial Magnetic Field
Authors: Brahim Mahfoud, Rachid Bessaïh
Abstract:
In this paper, a numerical study of steady and oscillatory flows with heat transfer subjected to an axial magnetic field is presented. The governing Navier-Stokes, energy, and potential equations, along with appropriate boundary conditions, are solved using the finite-volume method. The flow and temperature fields are presented by the stream function and isotherms, respectively. The flow between counter-rotating end disks is very unstable and reveals a great richness of structures. Results are presented for various values of the Hartmann number, Ha = 5, 10, 20, and 30, and the Richardson number, Ri = 0, 0.5, 1, 2, and 4, in order to see their effects on the value of the critical Reynolds number, Recr. Stability diagrams are established from the numerical results of this investigation. These diagrams show the dependence of Recr on increasing Ha for various values of Ri.
Keywords: swirling, counter-rotating end disks, magnetic field, oscillatory, cylinder
Procedia PDF Downloads 321
1034 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack
Authors: Vincent Andrew Cappellano
Abstract:
In the early phases of critical infrastructure system design, translating distributed computing requirements into an architecture carries risk, given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system's intended operations. However, architected systems may meet those availability requirements only during normal operations, and not during component failure or outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage to communications or physical nodes). This increases the risk of poor selection of a candidate architecture due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process for analyzing critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric that assesses these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this, a set of simulation and evaluation efforts are undertaken that will process, in an automated way, a set of sample requirements into a set of potential architectures where system functions and capabilities are distributed across nodes. Nodes and links will have specific characteristics and, based on the sampled requirements, contribute to overall system functionality, such that as they are impacted or degraded, the resulting functional availability of the system can be determined.
A machine learning, reinforcement-based agent will structurally impact the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we can create a structured method of evaluating the performance of candidate architectures against each other, yielding a metric rating their resilience to these attack types and strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and an architectural recommendation against the baseline requirements, with existing multi-factor computing architecture selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations, supporting improved operation during times of system degradation due to failures and infrastructure attacks.
Keywords: architecture, resiliency, availability, cyber-attack
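A minimal sketch of the kind of degradation metric proposed above: Monte Carlo node failures over a toy topology, scoring the fraction of functions whose host node survives and can still reach a control node. The topology, function placement, and failure model are all invented for illustration:

```python
import random

def connected(nodes, links, a, b):
    """Depth-first search over surviving links; True if b is reachable from a."""
    adj = {n: set() for n in nodes}
    for u, v in links:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = {a}, [a]
    while stack:
        n = stack.pop()
        for m in adj[n]:
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return b in seen

def functional_availability(nodes, links, functions, control, p_fail,
                            trials=2000, seed=1):
    """Monte Carlo estimate of the expected fraction of functions whose host
    node survives and can still reach the control node, under independent
    node failures with probability p_fail (the control node never fails)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        alive = [n for n in nodes if n == control or rng.random() >= p_fail]
        alive_links = [(u, v) for u, v in links if u in alive and v in alive]
        up = sum(1 for host in functions.values()
                 if host in alive and connected(alive, alive_links, host, control))
        total += up / len(functions)
    return total / trials

nodes = ["ctl", "a", "b", "c"]
links = [("ctl", "a"), ("a", "b"), ("ctl", "c")]
functions = {"sense": "b", "store": "c", "compute": "a"}
print(functional_availability(nodes, links, functions, "ctl", p_fail=0.0))  # → 1.0
print(functional_availability(nodes, links, functions, "ctl", p_fail=0.5))  # degraded
```

Sweeping `p_fail` (or replacing the random failures with an adversarial agent, as the paper proposes) traces out the availability-versus-degradation curve that the resulting metric summarizes.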
Procedia PDF Downloads 102
1033 Propylene Self-Metathesis to Ethylene and Butene over WOx/SiO2, Effect of Nano-Sized Extra Supports (SiO2 and TiO2)
Authors: Adisak Guntida
Abstract:
Propylene self-metathesis to ethylene and butene was studied over WOx/SiO2 catalysts at 450 °C and atmospheric pressure. The WOx/SiO2 catalysts were prepared by incipient wetness impregnation of an ammonium metatungstate aqueous solution. It was found that adding nano-sized extra supports (SiO2 and TiO2) by physical mixing with the WOx/SiO2 enhanced propylene conversion. The UV-Vis and FT-Raman results revealed that WOx could migrate from the original silica support to the extra support, leading to better dispersion of WOx. The ICP-OES results also indicate that WOx existed on the extra support. Coke formation was investigated on the catalysts after 10 h time-on-stream by TPO. However, adding nano-sized extra supports led to higher coke formation, which may be related to acidity as characterized by NH3-TPD.
Keywords: extra support, nanomaterial, propylene self-metathesis, tungsten oxide
Procedia PDF Downloads 242
1032 Exploring Data Stewardship in Fog Networking Using Blockchain Algorithm
Authors: Ruvaitha Banu, Amaladhithyan Krishnamoorthy
Abstract:
IoT networks today solve various consumer problems, from home automation systems to aiding autonomous vehicles, through the deployment of multiple devices. For example, in an autonomous vehicle environment, multiple sensors are available on roads to monitor weather and road conditions and interact with each other to aid the vehicle in reaching its destination safely and on time. IoT systems are predominantly dependent on the cloud environment for data storage and computing needs, which results in latency problems. With the advent of fog networks, some of this storage and computing is pushed to the edge/fog nodes, saving network bandwidth and reducing latency proportionally. Managing the data stored in these fog nodes becomes crucial, as they might also store sensitive information required by certain applications. Data management in fog nodes is strenuous because fog networks are dynamic in terms of availability and hardware capability. It becomes more challenging when nodes in the network are also short-lived, detaching and joining frequently. When an end user or fog node wants to access, read, or write data stored in another fog node, a new protocol becomes necessary, as the conventional static way of managing data does not work in fog networks. The proposed solution defines sensitivity levels for the data being written and read. Additionally, a distinct data distribution and replication model among the fog nodes is established to decentralize the access mechanism. In this paper, the proposed model implements stewardship of the data stored in the fog node through the application of reinforcement learning, so that access to the data is determined dynamically based on the requests.
Keywords: IoT, fog networks, data stewardship, dynamic access policy
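The sensitivity-level idea can be illustrated with a static check (a deliberately simplified stand-in: the paper's policy is learned dynamically via reinforcement learning, and the level names here are invented):

```python
class FogNode:
    """Toy fog node whose stored items carry sensitivity levels.
    Reads succeed only when the requester's clearance meets the item's level."""
    LEVELS = {"public": 0, "internal": 1, "restricted": 2}

    def __init__(self):
        self.store = {}

    def write(self, key, value, level):
        if level not in self.LEVELS:
            raise ValueError(f"unknown sensitivity level: {level}")
        self.store[key] = (value, level)

    def read(self, key, clearance):
        value, level = self.store[key]
        if self.LEVELS[clearance] < self.LEVELS[level]:
            return None                 # deny: insufficient clearance
        return value

node = FogNode()
node.write("road_temp", 3.5, "public")
node.write("vehicle_id", "AX-17", "restricted")
print(node.read("road_temp", "public"))     # → 3.5
print(node.read("vehicle_id", "internal"))  # → None (denied)
```

In the proposed model the decision in `read` would be made by a learned policy, and the items themselves would be replicated across several short-lived fog nodes rather than held in one.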
Procedia PDF Downloads 57
1031 Efficient Utilization of Unmanned Aerial Vehicle (UAV) for Fishing through Surveillance for Fishermen
Authors: T. Ahilan, V. Aswin Adityan, S. Kailash
Abstract:
UAVs are small remotely operated or automated aerial surveillance systems without a human pilot aboard. UAVs generally find use in military and special operations applications; a recent growing trend finds them applied in several civil and non-military tasks such as the inspection of power lines or pipelines. The objective of this paper is the augmentation of a UAV to replace the existing expensive sonar (sound navigation and ranging) based equipment among small-scale fishermen, for whom access to sonar equipment is restricted by limited economic resources. The surveillance equipment present in the UAV will relay data and GPS location to a receiver on the fishing boat using RF signals, from which the location of schools of fish can be found. In addition, an emergency beacon system is present for rescue operations and drone recovery.
Keywords: UAV, surveillance, RF signals, fishing, sonar, GPS, video stream, school of fish
Procedia PDF Downloads 455
1030 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of the implementation of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents, and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to problems related to monetary policy, bank regulation, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policy, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. To address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of computational load among MPI processes (i.e. CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g. credit networks) whereas others are dense with random links (e.g. consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process are adopted.
Efficient communication among MPI processes is achieved by combining MPI derived datatypes with the newer features of the latest MPI functions. Most of the communication is overlapped with computation, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e. about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Eurozone (i.e. 322 million agents).
Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process
Procedia PDF Downloads 124
1029 Hydrographic Mapping Based on the Concept of Fluvial-Geomorphological Auto-Classification
Authors: Jesús Horacio, Alfredo Ollero, Víctor Bouzas-Blanco, Augusto Pérez-Alberti
Abstract:
Rivers have traditionally been classified, assessed and managed in terms of hydrological, chemical and/or biological criteria. Geomorphological classifications played a secondary role in the past, although proposals like the River Styles Framework, the Catchment Baseline Survey and the Stroud Rural Sustainable Drainage Project did incorporate geomorphology into management decision-making. In recent years, many studies have turned to the geomorphological component. The geomorphological processes and their associated forms determine the structure of a river system, and understanding these processes and forms is a critical component of the sustainable rehabilitation of aquatic ecosystems. The fluvial auto-classification approach suggests that a river is a self-built natural system, with processes and forms arranged to effectively preserve its ecological function (hydrologic, sedimentological and biological regime). Fluvial systems are formed by a wide range of elements with multiple non-linear interactions on different spatial and temporal scales. Moreover, the fluvial auto-classification concept is built using data from the river itself, so each classification developed is particular to the river studied. The variables used in the classification are specific stream power and mean grain size; a discriminant analysis showed that these variables best characterize the processes and forms. The statistical technique applied yields an individual discriminant equation for each geomorphological type. The geomorphological classification was developed using sites with high naturalness: each site is a control point of high ecological and geomorphological quality. Changes in the conditions of the control points will be quickly recognizable, making it easy to apply the right management measures to recover the geomorphological type. The study focused on Galicia (NW Spain), and the mapping was made by analyzing 122 control points (sites) distributed over eight river basins.
In sum, this study provides a method for fluvial geomorphological classification that works as an open and flexible tool built on the fluvial auto-classification concept. The hydrographic mapping is the visual expression of the results, such that each river has a particular map according to its geomorphological characteristics. Each geomorphological type is represented by a particular hydraulic geometry (channel width, width-depth ratio, hydraulic radius, etc.), and an alteration of this geometry is indicative of a geomorphological disturbance (whether natural or anthropogenic). The hydrographic mapping is also dynamic, because its meaning changes if there is a modification in the specific stream power and/or the mean grain size, that is, in the value of their equations. The researcher has to check some of the control points annually. This procedure makes it possible to monitor the geomorphological quality of the rivers and to detect any alterations. The maps are useful to researchers and managers, especially for conservation work and river restoration.
Keywords: fluvial auto-classification concept, mapping, geomorphology, river
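Of the two classification variables, specific stream power has a standard closed form, ω = ρgQS/w; a sketch with hypothetical reach measurements, not data from the Galician control points:

```python
RHO_WATER = 1000.0   # water density, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def specific_stream_power(discharge, slope, width):
    """omega = rho * g * Q * S / w, in W/m^2: stream power per unit bed width.
    discharge Q in m^3/s, energy slope S (dimensionless), channel width w in m."""
    return RHO_WATER * G * discharge * slope / width

# hypothetical reach: bankfull discharge 25 m^3/s, slope 0.004, width 12 m
omega = specific_stream_power(25.0, 0.004, 12.0)
print(round(omega, 1), "W/m^2")
```

In the classification scheme above, each control point's (ω, mean grain size) pair is fed into the per-type discriminant equations; a drift in ω at a monitored site is exactly the kind of geometric alteration the annual checks are meant to catch.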
Procedia PDF Downloads 364
1028 Local Homology Modules
Authors: Fatemeh Mohammadi Aghjeh Mashhad
Abstract:
In this paper, we give several ways of computing generalized local homology modules by using Gorenstein flat resolutions. We also find some bounds for the vanishing of generalized local homology modules.
Keywords: a-adic completion functor, generalized local homology modules, Gorenstein flat modules
Procedia PDF Downloads 414
1027 Heat Transfer and Diffusion Modelling
Authors: R. Whalley
Abstract:
Heat transfer modelling for a diffusion process will be considered. Difficulties in computing the time-distance dynamics of the representation will be addressed. Incomplete and irrational Laplace functions will be identified as the computational issue. Alternative approaches to the response evaluation process will be provided. An illustrative application problem will be presented, with graphical results confirming the theoretical procedures employed.
Keywords: heat, transfer, diffusion, modelling, computation
Procedia PDF Downloads 549
1026 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations
Authors: Deepak Singh, Rail Kuliev
Abstract:
This paper highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With its complex and dynamic nature generating vast volumes of data, efficient data integration and management are essential for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, heterogeneity, real-time data management, and data quality issues are addressed, prompting the proposal of several strategies. These strategies include implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce, reskilling and upskilling employees, and establishing robust data management training programs play an essential, integral part in this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, the Internet of Things (IoT), and artificial intelligence (AI) and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. By embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in an ever-evolving industry.
Keywords: master data management, IoT, AI&ML, cloud computing, data optimization
Procedia PDF Downloads 64
1025 Effects of a Cooler on the Sampling Process in a Continuous Emission Monitoring System
Authors: J. W. Ahn, I. Y. Choi, T. V. Dinh, J. C. Kim
Abstract:
A cooler is widely employed in the extractive system of a continuous emission monitoring system (CEMS) to remove water vapor from the gas stream. The effect of the cooler on the target analyte gases was investigated in this research. A commercial cooler for the CEMS, operated at 4 °C, was used. Several gases emitted from a coal power plant (i.e. CO2, SO2, NO, NO2 and CO) were mixed with humid air and then introduced into the cooler to observe its effect. The concentrations of SO2, NO, NO2 and CO were set to 200 ppm, and the CO2 concentration to 8%. The inlet absolute humidity was produced at 12.5% at 100 °C using a bubbling method. It was found that the reduction rate of SO2 was the highest (~21%), followed by NO2 (~17%), CO2 (~11%) and CO (~10%). In contrast, NO was not affected by the cooler. The results indicate that the cooler significantly affects the water-soluble gases due to the condensate water in the cooler. To overcome this problem, a correction factor may be applied. However, water vapor content may differ, and the emissions of the target gases also vary; therefore, a correction factor alone is not a complete solution, and a better method should be sought.
Keywords: cooler, CEMS, monitoring, reproducibility, sampling
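The correction factor mentioned above could, in the simplest case, just rescale a post-cooler reading by the measured loss fraction for that gas (a first-order sketch; as the text cautions, real losses vary with humidity and emission composition):

```python
# Approximate reduction rates through the cooler, from the experiment above
REDUCTION = {"SO2": 0.21, "NO2": 0.17, "CO2": 0.11, "CO": 0.10, "NO": 0.00}

def corrected_ppm(gas, measured_ppm):
    """Scale a post-cooler reading back up by the gas-specific loss fraction."""
    return measured_ppm / (1.0 - REDUCTION[gas])

# an SO2 reading of 158 ppm after the cooler corresponds to ~200 ppm at the stack
print(round(corrected_ppm("SO2", 158.0), 1))
```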
Procedia PDF Downloads 355
1024 Multi-Layer Silica Alumina Membrane Performance for Flue Gas Separation
Authors: Ngozi Nwogu, Mohammed Kajama, Emmanuel Anyanwu, Edward Gobina
Abstract:
With the objective of creating technologically advanced, scientifically applicable materials, multi-layer silica alumina membranes were fabricated by continuously surface-coating silica layers containing a hybrid material onto a porous ceramic substrate for flue gas separation applications. The multi-layer silica alumina membrane was prepared by the dip-coating technique and then dried in an oven at elevated temperature. The effects of the substrate's physical appearance, the coating quantity, the cross-linking agent, the number of coatings, and the testing conditions on the gas separation performance of the membrane have been investigated. A scanning electron microscope was used to investigate the development of the coating thickness. The membrane shows impressive permselectivity, especially for a CO2/N2 binary mixture representing a simulated flue gas stream.
Keywords: gas separation, silica membrane, separation factor, membrane layer thickness
Procedia PDF Downloads 410
1023 Study of Heat Transfer by Natural Convection in Overhead Storage Tank of LNG
Authors: Hariti Rafika, Fekih Malika, Saighi Mohamed
Abstract:
During the storage period of liquefied natural gas, stability is necessarily affected by natural convection along the walls of the tank, whose thermal insulation is not perfectly efficient. In this paper, we present the numerical simulation of heat transfer by double-diffusive natural convection, in the unsteady laminar regime, in a storage tank. The storage tank contains liquefied natural gas (LNG) in its gaseous phase. Fluent, a commercial CFD package based on the numerical finite volume method, is used to simulate the flow. The gas phase lies just above the surface of the liquid phase. This numerical simulation allowed us to determine the temperature profiles, the stream function, the velocity vectors and the variation of the heat flux density in the vapor phase in the LNG storage tank volume. The results obtained by numerical simulation for a general configuration were compared to those found in the literature.
Keywords: numerical simulation, natural convection, heat gains, storage tank, liquefied natural gas
Procedia PDF Downloads 475
1022 The Influence of Variable Geometrical Modifications of the Trailing Edge of Supercritical Airfoil on the Characteristics of Aerodynamics
Authors: P. Lauk, K. E. Seegel, T. Tähemaa
Abstract:
The fuel consumption of modern, high-wing-loading commercial aircraft in the first stage of flight is high because the usable flight level is lower and the weather conditions (jet stream) have a great impact on aircraft performance. To reduce fuel consumption, it is necessary to raise the L/D ratio during the first stage of flight within the Cl range of 0.55-0.65. Different variable geometrical modifications of the trailing edge of the SC(2)-410 airfoil were compared at M 0.78 using CFD simulations in STAR-CCM+ based on the Reynolds-averaged Navier-Stokes (RANS) equations. The numerical results obtained show that by increasing the width of the airfoil by 4% and by modifying the trailing edge of the airfoil, it is possible to decrease airfoil drag at Cl 0.70 by up to 26.6% and at the same time to increase the commercial aircraft's L/D ratio by up to 5.0%. Fuel consumption can be reduced in proportion to the increase in the L/D ratio.
Keywords: L/D ratio, miniflaps, mini-TED, supercritical airfoil
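The closing claim of this abstract can be made concrete with a back-of-the-envelope sketch. Assuming (as an illustration, not the authors' model) that cruise fuel flow is roughly proportional to drag = weight / (L/D), a relative L/D gain translates into a relative fuel saving:

```python
# Hypothetical sketch: if fuel flow ~ weight / (L/D), then multiplying L/D by
# (1 + x/100) divides fuel flow by the same factor.
def fuel_saving_percent(ld_increase_percent: float) -> float:
    """Relative fuel saving (%) from a given relative L/D increase (%)."""
    x = ld_increase_percent
    return x / (100.0 + x) * 100.0

# The abstract's 5.0% L/D improvement:
print(round(fuel_saving_percent(5.0), 2))  # 4.76
```

So under this simplifying assumption, the reported 5.0% L/D gain corresponds to a fuel saving of roughly 4.8% in cruise.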
Procedia PDF Downloads 201
1021 Application of Lean Manufacturing in Brake Shoe Manufacturing Plant: A Case Study
Authors: Anees K. Ahamed, Aakash Kumar R. G., Raj M. Mohan
Abstract:
The main objective is to apply lean tools to identify and eliminate waste in and among the work stations so as to improve process speed and quality. Of the top seven wastes in the lean concept, we consider the movement of materials, defects, and inventory for improvement, since these have the greatest impact on the performance measures. The layout was improved to reduce the movement of materials, and the reduction in movement among the work stations was quantified. Value stream mapping was used to identify waste. A cause-and-effect diagram and 5W analysis were used to identify the reasons for defects and to provide countermeasures. Some cycle time reduction techniques are also proposed to improve productivity. A lean audit check sheet was also used to assess the current position of the industry and to identify the gap to be closed to make the industry lean.
Keywords: cause and effect diagram, cycle time reduction, defects, lean, waste reduction
Procedia PDF Downloads 380
1020 Effect of Nanoparticle Diameter of Nano-Fluid on Average Nusselt Number in the Chamber
Authors: A. Ghafouri, N. Pourmahmoud, I. Mirzaee
Abstract:
In this study, the effects of using an Al2O3-water nanofluid on the rate of heat transfer have been investigated numerically. The physical model is a square enclosure with insulated top and bottom horizontal walls, while the vertical walls are kept at different constant temperatures. Two appropriate models are used to evaluate the viscosity and thermal conductivity of the nanofluid. The governing stream function-vorticity equations are solved using a second-order central finite difference scheme, coupled to the conservation of mass and energy. The study has been carried out for nanoparticle diameters of 30, 60, and 90 nm and solid volume fractions from 0 to 0.04. Results are presented as the average Nusselt number and the normalized Nusselt number over the investigated ranges of φ and D for the mixed-convection-dominated regime. It is found that a different heat transfer rate is predicted when the effect of the nanoparticle diameter is taken into account.
Keywords: nanofluid, nanoparticle diameter, heat transfer enhancement, square enclosure, Nusselt number
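To illustrate the average-Nusselt-number post-processing step described in this abstract, here is a minimal sketch (not the authors' code) of how the wall-averaged Nusselt number is typically extracted from a dimensionless temperature field in stream function-vorticity enclosure simulations:

```python
# Illustrative sketch: average Nusselt number on the hot (left) wall of a
# square enclosure, from a dimensionless temperature field theta[j][i]
# (row j = y index, column i = x index) on a uniform grid.
def average_nusselt(theta, dx, dy):
    """Local Nu at the wall is -d(theta)/dx, evaluated with a second-order
    one-sided difference; the average is the trapezoidal integral over the
    wall divided by the wall height."""
    nu = [-(-3.0 * row[0] + 4.0 * row[1] - row[2]) / (2.0 * dx) for row in theta]
    integral = (0.5 * (nu[0] + nu[-1]) + sum(nu[1:-1])) * dy
    return integral / (dy * (len(nu) - 1))

# Sanity check with pure conduction, theta = 1 - x, where Nu must equal 1:
n, h = 21, 1.0 / 20
theta = [[1.0 - i * h for i in range(n)] for _ in range(n)]
print(round(average_nusselt(theta, h, h), 6))  # 1.0
```

The pure-conduction check is a standard validation before running the convection cases, since any wall-gradient scheme must reproduce Nu = 1 exactly for a linear temperature profile.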
Procedia PDF Downloads 391
1019 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language
Authors: Wenjun Hou, Marek Perkowski
Abstract:
The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measure based on the number of oracle iterations, but to be able to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q#, developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover's algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating Grover's algorithm with an oracle that finds a successively lower cost each time makes it possible to transform the decision problem into an optimization problem, finding the minimum cost over Hamiltonian cycles. N log₂ K qubits are put into an equiprobable superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, a node index calculator, a uniqueness checker, and a comparator, which were all created using only quantum Toffoli gates, including its special forms, the Feynman (CNOT) and Pauli X gates.
The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle takes the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By applying the Grover iteration an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover's algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language
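The "optimal number of times" mentioned above is the standard Grover iteration count. As a minimal sketch (standard Grover arithmetic, not the authors' Q# code), the iteration count and the resulting success probability for a single marked state on the abstract's N log₂ K qubits can be computed classically:

```python
import math

# Sketch of standard Grover arithmetic (assumption: one marked state M = 1;
# the 8-qubit example below is illustrative, not from the paper).
def grover_iterations(num_qubits: int, num_solutions: int = 1) -> int:
    """Optimal iteration count: floor(pi/4 * sqrt(2^n / M))."""
    search_space = 2 ** num_qubits
    return math.floor(math.pi / 4.0 * math.sqrt(search_space / num_solutions))

def success_probability(num_qubits: int, iterations: int) -> float:
    """P = sin^2((2r + 1) * theta) with theta = asin(sqrt(M / 2^n)), M = 1."""
    theta = math.asin(1.0 / math.sqrt(2 ** num_qubits))
    return math.sin((2 * iterations + 1) * theta) ** 2

# e.g. N = 4 nodes, K = 4 bounded degree: N * log2(K) = 8 qubits
r = grover_iterations(8)
print(r)  # 12 iterations; success probability then exceeds 0.999
```

This is why the circuit cost is dominated by the oracle: each of the r iterations invokes the full adder/uniqueness/comparator pipeline once.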
Procedia PDF Downloads 187
1018 Effect of Mesh Size on the Supersonic Viscous Flow Parameters around an Axisymmetric Blunt Body
Authors: Haoui Rabah
Abstract:
The aim of this work is to analyze the viscous flow around an axisymmetric blunt body, taking into account the mesh size both in the free stream and in the boundary layer. The Navier-Stokes equations are solved using the finite volume method to determine the flow parameters and the detached shock position. The numerical technique uses the flux vector splitting method of Van Leer. An adequate time-stepping parameter, CFL coefficient, and mesh refinement level are selected to ensure numerical convergence. The mesh size has a significant effect on the shear stress and velocity profile. The best solution is obtained with a very fine grid. This study enabled us to confirm that the boundary layer thickness can be determined only if the mesh size is below a certain limit given by our calculations.
Keywords: supersonic flow, viscous flow, finite volume, blunt body
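The CFL coefficient mentioned in this abstract couples the mesh size to the admissible time step of an explicit scheme. As an illustrative sketch (a generic 1-D convective stability estimate, with assumed example numbers, not the paper's actual solver settings):

```python
# Hypothetical sketch: stable explicit time step from the CFL condition.
# The fastest acoustic wave (|u| + c) must not cross more than CFL cells
# per step, which is why refining the mesh also shrinks the time step.
def cfl_time_step(cfl: float, dx: float, velocity: float, sound_speed: float) -> float:
    """dt = CFL * dx / (|u| + c)."""
    return cfl * dx / (abs(velocity) + sound_speed)

# Example numbers (assumed): Mach 2 free stream with c = 340 m/s,
# 1 mm cells, CFL = 0.5
dt = cfl_time_step(0.5, 1.0e-3, 2.0 * 340.0, 340.0)
print(dt)  # ≈ 4.9e-07 s
```

Halving dx halves dt as well, so resolving the boundary layer with a very fine grid, as the study requires, raises the cost per unit of simulated time quadratically in 1-D and faster in higher dimensions.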
Procedia PDF Downloads 601
1017 Heavy Metals in the Water of Lakes in the 'Bory Tucholskie' National Park of Biosphere Reserve
Authors: Krzysztof Gwozdzinski, Janusz Mazur
Abstract:
Bory Tucholskie (Tucholskie Forest) is one of the largest pine forest complexes in Poland. It occupies approx. 3,000 square kilometers of outwash plain (sandur) in the Brda and Wda basin and the Tuchola and Charzykowskie Plains. In 2010, by UNESCO decision, the area became the Bory Tucholskie Biosphere Reserve. The Bory Tucholskie National Park (BTNP) was designated in 1996. There is little data on the presence of heavy metals in the Park's lakes. The concentrations of heavy metals in the water of 19 lakes in the BTNP were examined. The lakes were divided into two groups: the subglacial channel lakes of Struga Siedmiu Jezior (the Seven Lakes Stream) and the other lakes. Heavy metals (transition metals) belong to the d-block of elements. Some of these metals play an important role in the function of living organisms as components of metalloproteins (enzymes, hemoproteins, vitamins, etc.); however, heavy metals are also typical anthropogenic pollutants. Water samples were collected at the deepest points of the lakes during spring and summer stagnation. The analysis of metals was performed with a Varian Spectra A300/400 atomic absorption spectrophotometer using an electric atomizer (GTA 96) with a graphite cuvette. In the waters of the Seven Lakes Stream (Ostrowite, Zielone, Jelen, Belczak, Glowka, Plesno, Skrzynka, Mielnica), an increase in the concentrations of manganese and iron from the outflow toward the inflow of Charzykowskie lake was found, while the concentrations of copper (approx. 4 μg dm⁻³) and cadmium (< 0.5 μg dm⁻³) were similar in all lakes. The concentration of lead also varied within 2.1-3.6 μg dm⁻³. The concentration of nickel was approx. 3-fold higher in Ostrowite lake than in the other lakes of the Struga. In turn, the waters of the lakes Ostrowite, Jelen and Belczak were rich in zinc. The lowest level of heavy metals was observed in Zielone lake.
In the second group of lakes, i.e., Krzywce Wielkie and Krzywce Male, the heavy metal concentrations were lower than in the waters of the Struga but higher than in the oligotrophic lakes, i.e., Nierybno, Gluche, Kociol, Gacno Wielkie, Gacno Male, Dlugie, Zabionek, and Sosnowek. The concentration of cadmium was below 0.5 μg dm⁻³ in all the studied lakes from this group. Among the oligotrophic lakes, the highest concentrations of metals such as manganese, iron, zinc and nickel were observed in Gacno Male and Gacno Wielkie. High levels of manganese were found in Sosnowek and Gacno Wielkie lakes. The lead level was also high in Nierybno lake, and the nickel level in Gacno Wielkie lake. Lower levels of heavy metals were found in the oligotrophic lakes Kociol, Dlugie and Zabionek and in the α-mesotrophic lake Krzywce Wielkie. Generally, the levels of heavy metals in the studied lakes situated in the Bory Tucholskie National Park were lower than in the other lakes of the Bory Tucholskie Biosphere Reserve.
Keywords: Bory Tucholskie Biosphere Reserve, Bory Tucholskie National Park, heavy metals, lakes
Procedia PDF Downloads 120
1016 Application of Lean Six Sigma Tools to Minimize Time and Cost in Furniture Packaging
Authors: Suleiman Obeidat, Nabeel Mandahawi
Abstract:
In this work, the packaging process for a household move is improved. The customers of such a move need their household goods to be moved from their current house to the new one with minimum damage, in an organized manner, on time, and at minimum cost. Our goal was a 10-20% improvement in time efficiency, a 90% reduction in damaged parts, and an acceptable improvement in the cost of the total move process. The expected ROI was 833%. Many improvement techniques have been applied to the way the boxes are prepared, their preparation cost, packing the goods, labeling them, and moving them to a staging area for the move. The DMAIC technique is used in this work: a SIPOC diagram, a value stream map of the “as-is” process, root cause analysis, maps of the “future state” and “ideal state”, and an improvement plan. An ROI of 624% is obtained, which is lower than the expected value of 833%. The work explains the improvement techniques and the deficiencies in the old process.
Keywords: packaging, lean tools, six sigma, DMAIC methodology, SIPOC
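To make the ROI figures in this abstract concrete, here is a minimal sketch of the usual ROI arithmetic (the formula and the example amounts are assumptions for illustration; the abstract does not report the underlying costs):

```python
# Hypothetical sketch: ROI = (net gain - investment) / investment * 100%.
def roi_percent(gain: float, investment: float) -> float:
    """Return on investment as a percentage of the improvement cost."""
    return (gain - investment) / investment * 100.0

# With illustrative amounts, an ROI of 624% means the gain is 7.24x the
# investment, versus the 9.33x implied by the expected 833%:
print(roi_percent(7240.0, 1000.0))  # 624.0
```

Framed this way, the shortfall between the achieved 624% and the expected 833% corresponds to the gain falling about 22% short of the projection for the same improvement spend.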
Procedia PDF Downloads 424